\begin{document}
\maketitle
\begin{abstract}
We establish the connection between Green's ${\mathcal J}$-classes and the subduction equivalence classes defined on the image sets of an action of a semigroup.
The construction of the skeleton order (on subduction equivalence classes) is shown to depend in a functorial way on transformation semigroups and surjective morphisms, and to factor through the $\leq_{\mathcal L}$-order and $\leq_{\mathcal J}$-order on the semigroup and through the inclusion order on image sets.
For right regular representations, the correspondence between the ${\mathcal J}$-class order and the skeleton is one of isomorphism.
\end{abstract}
\section{Introduction}
The holonomy decomposition algorithm \cite{zeiger67a,zeiger68,ginzburg_book68,eilenberg,holcombe_textbook,KRTforCategories,automatanetworks2005} is a wreath product decomposition theorem for finite transformation semigroups. It yields a Krohn-Rhodes decomposition by the detailed analysis of the semigroup action on all subsets of the state set which occur as images under the semigroup action.
One of the main tools of this analysis is the \emph{subduction} preorder relation defined on the set of images of the members of the semigroup considered as mappings.
(See below for precise definitions.)
Green's preorders give ample information about the semigroup's internal structure, while subduction captures details of the semigroup action.
Therefore, the natural question arises:
\emph{What is the connection between the Green's relations and the subduction relation?}
More generally, by aiming to describe the connection between a semigroup action and the internal structure of the semigroup itself, we may get information on what transformation representations are possible for an abstract semigroup.
\subsection{Notation}
$(X,S)$ is a {\em transformation semigroup} with $S$ acting faithfully on the \emph{state set} $X$ if $S$ is a subsemigroup of the (right) full transformation semigroup ${\mathcal T}(X)$ of all mappings on a set $X$.
For $x\in X, s\in S$, the result of the action is written $x^s$.
The action can be extended to subsets of $X$, if $P\subseteq X$ and $s\in S$ then $P^s=\{x^s\mid x\in P\}$.
The \emph{image} of a transformation $s$ is defined by $\im(s)=X^s$, and we can also say that $\im(s)$ is the image of $X$ under $s$.
$S^1$ is the monoid obtained by adjoining the identity on $X$ to the semigroup $S$, if it is not a member of $S$, otherwise $S^1=S$.
$(A,\leq)$ is a \emph{preorder} (sometimes called a `quasi-order') if $\leq$ is a reflexive and transitive relation on the set $A$.
For a preorder, there exists an equivalence relation $(A,\equiv)$ defined by $a\equiv b \iff a\leq b$ and $b\leq a$,
and an induced partial order on the equivalence classes $(A/\!\equiv,\leq)$.
The surjective map $A\twoheadrightarrow A/\!\equiv$ is denoted by $\eta^\natural$.
The classical Green's relations $\leq_{\mathcal R}$, $\leq_{\mathcal L}$, $\leq_{\mathcal J}$ and $\leq_{\mathcal H}$, on any semigroup $S$ are the preorders:
$$t \leq_{\mathcal R} s \iff tS^1 \subseteq sS^1,$$
$$t \leq_{\mathcal L} s \iff S^1 t \subseteq S^1 s,$$
$$t \leq_{\mathcal J} s \iff S^1 t S^1 \subseteq S^1 s S^1$$ and $\leq_{\mathcal H}$ is the intersection of the $\leq_{\mathcal L}$ and $\leq_{\mathcal R}$ relations.
Then $ {\mathcal R},{\mathcal L},{\mathcal J}$ and ${\mathcal H}$ denote the equivalence relations arising from the preorders $\leq_{\mathcal R}$, $\leq_{\mathcal L}$, $\leq_{\mathcal J}$ and $\leq_{\mathcal H}$, respectively, and in each case the equivalence classes carry the induced partial order. The equivalence relation ${\mathcal D}$ on $S$ is the composite of ${\mathcal L}$ and ${\mathcal R}$, which commute. (For standard definitions and elementary properties see for instance \cite{clifford_preston,lallement1979,Howie95}).
In the finite case the ${\mathcal J}$ and ${\mathcal D}$ relations coincide.
Here we only consider finite transformation semigroups.
However, we still use the ${\mathcal J}$-ordering of ${\mathcal J}$-classes, as by definition ${\mathcal D}$ is an equivalence relation that does not necessarily come from a preorder in the general case (see e.g.~\cite{lallement1979}).
Though currently there is no infinite version of the holonomy theorem using subduction, we would like to keep the proofs of results given here compatible with possible future developments.
\section{Subduction Relation}
For a transformation semigroup $(X,S)$ the set ${\mathcal I}(X)=\{\im(s)\mid s\in S^1\}$ is the \emph{image set} of the semigroup action. Note $X=\im(1)$ is always in $ {\mathcal I}(X)$.
\begin{definition}[Subduction]
For $P,Q\in {\mathcal I}(X)$
$$P\subseteq_S Q \iff \exists s \in S^1 \text{ such that } P \subseteq Q^s.$$
So $P$ is a subset of $Q$ or it is a subset of some image of $Q$ under the semigroup action.
\label{def:subduction}
\end{definition}
\begin{lemma}
1. $({\mathcal I}(X),\subseteq_S)$ is a preorder. \\
2. If $P \subseteq_S Q$ and $Q \subseteq_S P$ then $|P|=|Q|$.
\end{lemma}
\proof
Obviously $\subseteq_S$ is reflexive, since $P\subseteq P^1$. It is transitive, since if $P \subseteq Q^{s_1}$ and $ Q \subseteq R^{s_2}$ then $P \subseteq R^{s_2s_1}$. For (2), there exists $s\in S^1$ with $P \subseteq Q^s$, so $Q$ has cardinality at least that of $P$. By symmetry, it follows that $P$ and $Q$ have the same cardinality. \qed\\
Therefore mutual subduction naturally defines an equivalence relation on ${\mathcal I}(X)$, denoted by $\equiv_S$.
\begin{corollary}
$({\mathcal I}(X)/\!\!\equiv_S,\subseteq_S)$ is a partial order.
\end{corollary}
The ordering of the subduction classes is called the {\em skeleton ordering} of $(X,S)$.
This structure provides the scaffolding for a holonomy decomposition since subduction equivalent subsets have isomorphic holonomy permutation groups, so only one copy of these groups is needed per class in the decomposition \cite{zeiger67a,zeiger68,ginzburg_book68,eilenberg,holcombe_textbook,automatanetworks2005}.
For the holonomy decomposition the skeleton order is extended by using
the {\em extended image set} ${\mathcal I}^+(X)={\mathcal I}(X)\cup \{\{x\}: x \in X\}$ which includes any singletons that do not occur as images. This could potentially result in additional minimal equivalence classes for these singletons being adjoined to the skeleton.
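For concreteness, the subduction preorder and its equivalence classes are readily computable. The following self-contained Python sketch (an illustration only, with hypothetical helper names; it is not the \textsc{SgpDec} implementation used for the examples below) encodes transformations on $X=\{0,\dots,n-1\}$ as tuples and computes ${\mathcal I}(X)$ together with the subduction classes.
\begin{verbatim}
# Illustrative sketch (not the authors' GAP/SgpDec code).
# A transformation t on X = {0,...,n-1} is a tuple with x^t = t[x].

def compose(s, t):                 # product st, right action: x^(st) = (x^s)^t
    return tuple(t[s[x]] for x in range(len(s)))

def monoid_closure(gens, n):       # S^1: all products of generators plus identity
    elems = {tuple(range(n))}
    frontier = set(elems)
    while frontier:
        frontier = {compose(a, g) for a in frontier for g in gens} - elems
        elems |= frontier
    return elems

def image(s):                      # im(s) = X^s
    return frozenset(s)

def subduced(P, Q, elems):         # P <=_S Q  iff  P is a subset of Q^s, s in S^1
    return any(P <= frozenset(s[x] for x in Q) for s in elems)

def subduction_classes(elems):     # the skeleton classes on I(X)
    classes = []
    for P in {image(s) for s in elems}:
        for cls in classes:
            Q = next(iter(cls))
            if subduced(P, Q, elems) and subduced(Q, P, elems):
                cls.add(P)
                break
        else:
            classes.append({P})
    return classes
\end{verbatim}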
\section{${\mathcal J}$-classes and skeleton classes}
We can establish a connection between the induced class orders of two preorders through a preorder-preserving map.
First, we describe the situation for preorders in general.
\begin{lemma}
\label{lem:preorders}
Let $(A,\leq_1)$ and $(X,\leq_2)$ be preorders and $f:A\rightarrow X$ a function respecting preordering.
Then,
\begin{enumerate}
\item $f$ induces an order-preserving map $\bar{f}:(A/\!\!\equiv_1,\leq_1)\rightarrow (X/\!\!\equiv_2,\leq_2)$, and the following diagram commutes.
\begin{center}
\begin{tikzcd}
A \arrow{r}{f}\arrow[two heads]{d}{\eta_1^\natural} & X\arrow[two heads]{d}{\eta_2^\natural} \\
A/\!\!\equiv_1 \arrow{r}{\bar{f}}& X/\!\!\equiv_2
\end{tikzcd}
\end{center}
\item The kernel of $\eta_1^\natural \circ \bar{f} = f \circ \eta_2^\natural$, that is, the equivalence relation it induces on $A$, is not finer than $\equiv_1$.
\end{enumerate}
\end{lemma}
\proof
(1) Let $\bar{f}$ denote the function taking the $\equiv_1$-class of any $a\in A$ to the $\equiv_2$-class of $f(a)$.
If $a\equiv_1 b$ then by definition $a\leq_1 b$ and $b\leq_1 a$.
Since $f$ respects preordering, $f(a)\leq_2 f(b)$ and $f(b)\leq_2 f(a)$, therefore $f(a)\equiv _2 f(b)$.
It follows that $\bar{f}$, given by $\bar{f}([a]_1)$=$[f(a)]_2$, is well-defined and order preserving.
(2) Since $\bar{f}$ is a function, the inverse image of an $\equiv_2$-class consists of $\equiv_1$-classes. Hence the inverse image of this in $A$ is
the union of $\equiv_1$-equivalence classes.
\qed\\
\noindent{\bf Remark: } It is important to notice that $a<_1 b$ does not exclude the possibility of $f(a)\equiv_2 f(b)$. Moreover,
even if neither $a\leq_1 b$ nor $b \leq_1 a$ holds one might still have $f(a)<_2 f(b)$ or $f(a)\equiv_2 f(b)$. \\
Now we have two preorders: $\leq_{\mathcal J}$ on $S$ and subduction $\subseteq_S$ on ${\mathcal I}(X)$.
Next we show that the surjective function $\im$ respects preordering.
It is a basic fact that $a\,{\mathcal L}\,b\implies \im(a)=\im(b)$.
However, ${\mathcal J}$-related elements can have different images. For instance, in the full transformation semigroup on $n$ points, all constant maps are ${\mathcal J}$-equivalent to each other.
\begin{lemma}\label{lem:image-map} For any transformation semigroup $(X,S)$ and
any $a,b\in S$, we have
$$a\leq_{\mathcal L} b\implies \im(a) \subseteq \im(b).$$
$$a\leq_{\mathcal J} b\implies \im(a)\subseteq_S\im(b).$$
That is, $\im$ maps the ${\mathcal L}$-preorder to the inclusion partial order and maps the ${\mathcal J}$-preorder to the subduction preorder. Moreover, $\im$ maps $S^1$ surjectively onto ${\mathcal I}(X)$ in each case.
\end{lemma}
\proof
The first assertion is well-known:
If $a \leq_{\mathcal L} b$ then $a=sb$ for some $s\in S^1$. Thus $\im(a)=X^a=X^{sb}=(X^s)^b\subseteq X^b=\im(b)$.
For the second, if $a\leq_{\mathcal J} b$ then there exist $s,t\in S^1$ such that $a=sbt$,
$$ \im(a)=\im(sbt)=\im(sb)^t\subseteq
\im(1 b)^t
=\im(b)^t,$$
therefore $\im(a)\subseteq_S\im(b)$.
Obviously $\im$ maps $S^1$ surjectively onto ${\mathcal I}(X)=\{\im(s)\mid s \in S^1\}$, hence onto the preorder $({\mathcal I}(X),\subseteq_S)$, which has the same underlying set.
\qed
\begin{theorem}\label{functor}
For a transformation semigroup $(X,S)$, there is a surjective order-preserving map $\bar{\im}_S$
from the partial order of ${\mathcal J}$-classes $({S^1}/{\mathcal J},\leq_{\mathcal J})$,
onto the partial order of subduction classes $({\mathcal I}(X)/\!\!\equiv_S,\subseteq_S)$.
The inverse image of a subduction equivalence class is a union of ${\mathcal J}$-classes.
\end{theorem}
\proof
$\im$ is surjective and is a preorder morphism from the Green's ${\mathcal J}$-preorder to the subduction preorder by Lemma \ref{lem:image-map}; therefore, by Lemma \ref{lem:preorders}(1), the induced map $\bar{\im}$, which we shall denote $\bar{\im}_S$ to distinguish it from the map in the next result, is a surjective order-preserving map. By Lemma \ref{lem:preorders}(2), the inverse image of a subduction class is a union of ${\mathcal J}$-classes. \qed\\
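The statements of Lemma~\ref{lem:image-map} and Theorem~\ref{functor} can be checked exhaustively on small examples. Continuing the illustrative Python sketch above (the function names are again hypothetical), the Green's preorders are computed directly from their definitions:
\begin{verbatim}
# Exhaustive check, on a small monoid, that im maps <=_L to inclusion
# and <=_J to subduction (reusing compose, monoid_closure, image, subduced).

def leq_L(a, b, elems):            # a <=_L b  iff  a = s b for some s in S^1
    return any(compose(s, b) == a for s in elems)

def leq_J(a, b, elems):            # a <=_J b  iff  a = s b t for some s, t in S^1
    return any(compose(compose(s, b), t) == a for s in elems for t in elems)

def check_image_map(elems):
    return all((not leq_L(a, b, elems) or image(a) <= image(b)) and
               (not leq_J(a, b, elems) or subduced(image(a), image(b), elems))
               for a in elems for b in elems)
\end{verbatim}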
Similarly, generalizing the basic fact mentioned above, we have
\begin{theorem}
For a transformation semigroup $(X,S)$, there is a surjective order-preserving
map $\bar{\im}$ from the ${\mathcal L}$-class order for $S^1$ onto the inclusion partial order on ${\mathcal I}(X)$.
The inverse image of an image set is a union of ${\mathcal L}$-classes.
\end{theorem}
Putting these facts together, it follows that
\begin{theorem}\label{diagram} For any transformation semigroup $(X,S)$, there is a commutative diagram of surjective order-preserving morphisms:
\begin{center}
\begin{tikzcd}
(S^1, \leq_{\mathcal L})\arrow[two heads]{d}{/{\mathcal L}}\arrow[two heads]{rd}{\im}\\
(S^1 / {\mathcal L}, \leq_{\mathcal L}) \arrow[two heads]{r}{\bar{\im}}\arrow[two heads]{d}{/{\mathcal J}} & ({\mathcal I}(X),\subseteq) \arrow[two heads]{d}{/\!\!\,\equiv_S} \\
(S^1 / {\mathcal J}, \leq_{\mathcal J})\arrow[two heads]{r}{\bar{\im}_S} & ({\mathcal I}(X)/\!\! \equiv_S,\subseteq_S)
\end{tikzcd}
\end{center}
\end{theorem}
\begin{corollary}
For the right regular representation $(S^1,S)$:
\begin{enumerate}
\item The ${\mathcal J}$-class order and the subduction order are isomorphic.
\item The ${\mathcal L}$-class order and the inclusion order on image sets ${\mathcal I}(X)$ are isomorphic.
\end{enumerate}
\end{corollary}
\proof
(1) By Lemma~\ref{lem:image-map}, it suffices to show that $\im(a)\subseteq_S\im(b)\implies a\leq_{\mathcal J} b$.
By definition of subduction
$\im(a)\subseteq\im(b)^t$ for some $t\in S^1$.
Since $X=S^1$ we can write $\im(a)$ as $\big(S^1\big)^a$, or by shifting notation from semigroup action to semigroup multiplication, simply as $S^1 a$.
Therefore,
$$S^1 a\subseteq S^1 bt\implies S^1 a S^1\subseteq S^1 b t S^1\subseteq S^1 b S^1\implies a\leq_{\mathcal J} b.$$ It follows that if $\im(a) \equiv_S \im(b)$ then $a\,{\mathcal J}\,b$. Thus $\bar{\im}_S$ is injective, hence bijective.
\noindent (2) More simply, for the ${\mathcal L}$-order, $\im(a) \subseteq \im(b)$ in the case of the right regular representation is just $(S^1)^a \subseteq (S^1)^b$, i.e., $S^1 a \subseteq S^1 b$, which is the definition of
$a \leq_{\mathcal L} b$. Hence, $\im(a) \subseteq \im(b)$ implies $a\leq_{\mathcal L} b$. By Lemma~\ref{lem:image-map} for the $\leq_{{\mathcal L}}$-preorder, the converse holds. It follows that if $\im(a)=\im(b)$ then $a\,{\mathcal L}\,b$, hence $\bar{\im}$ is injective, and hence bijective as well.
\qed\\
In the case of the right regular representation this says that the horizontal mappings in Theorem~\ref{diagram} are order isomorphisms.
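This can also be observed computationally: with the hypothetical helpers above, the right regular representation of a small monoid is obtained by letting each element of $S^1$ act on a fixed listing of $S^1$ by right multiplication, and its image sets and subduction classes can then be compared with the ${\mathcal L}$- and ${\mathcal J}$-classes of the original monoid.
\begin{verbatim}
# Encode the right regular representation: states are the elements of S^1
# (indexed by a fixed listing) and each s acts by m |-> m s.
def right_regular(elems):
    listing = sorted(elems)
    index = {m: i for i, m in enumerate(listing)}
    return [tuple(index[compose(m, s)] for m in listing) for s in listing]
\end{verbatim}
The transformation monoid $M'$ in the Examples section is such an encoding of a five element monoid.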
Both the ${\mathcal J}$-class order and the skeleton capture information about the structure of the semigroup, therefore surjective homomorphisms should respect them.
\begin{theorem}[Functoriality]
Suppose $\varphi: (X,S) \twoheadrightarrow (Y,T)$ is a surjective morphism of transformation semigroups such that if $1 \in S$ then $\varphi(1)$ is the identity on $Y$. Then $\varphi$ induces a natural mapping of the commutative
diagram for $(X,S)$ as in Theorem~\ref{diagram}, to the commutative diagram
for $(Y,T)$.
\end{theorem}
\proof A surjective map of semigroups induces a surjective map of the $\leq_{\mathcal L}$ and $\leq_{\mathcal J}$ pre-orders and orderings (as well as for $\leq_{\mathcal R}$ and $\leq_{\mathcal H}$).
$\varphi$ also induces a surjective map from ${\mathcal I}(X)$ onto ${\mathcal I}(Y)$, and subduction in the source implies subduction in the target since $P \subseteq Q^s$ implies
$\varphi(P) \subseteq \varphi(Q^s)=\varphi(Q)^{\varphi(s)}$, hence the subduction relation is respected, and the result follows.
\qed
\section{Examples}
We present a few examples to illustrate the connection between the ${\mathcal J}$-class order and the skeleton. The partial orders are displayed as Hasse diagrams. Shaded clusters of ${\mathcal J}$-classes are mapped to a single subduction class.
\begin{example}[Simple collapsing of a chain] Let $X=\{1,2,3\}$,
$t_1=\left(\begin{smallmatrix}1&2&3\\1& 3& 3\end{smallmatrix}\right)$,
$t_2=\left(\begin{smallmatrix}1&2&3\\3& 1& 3\end{smallmatrix}\right)$,
$t_3=\left(\begin{smallmatrix}1&2&3\\3& 3& 3\end{smallmatrix}\right)$
and $M$ the monoid $\{1,t_1,t_2,t_3\}$, so $(X,M)$ is a transformation monoid on 3 points. The principal two-sided ideals are:
\begin{align*}
M1 M&=M\\
Mt_1M&=\{t_1,t_2,t_3\}\\
Mt_2M&=\{t_2,t_3\}\\
Mt_3M&=\{t_3\},
\end{align*}
therefore $t_3 <_{{\mathcal J}} t_2 <_{{\mathcal J}} t_1 <_{{\mathcal J}} 1$ and each element forms a singleton ${\mathcal J}$-class on its own. ${\mathcal I}(X)=\big\{\{1,2,3\},\{1,3\},\{3\}\big\}$, and each of these image sets forms a subduction class on its own.
\begin{center}
\input{fig-M32_dcl2sk_chaincollapse.tikz}
\end{center}
\end{example}
A simple linear order is mapped to a shorter linear order, since $\im(t_1)=\im(t_2)=\{1,3\}$.
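With the hypothetical Python helpers from the previous sections, this example can be recomputed directly in a $0$-based encoding:
\begin{verbatim}
# Example check: t1, t2, t3 written 0-based; the identity is added by
# monoid_closure, so M below is the monoid {1, t1, t2, t3}.
t1, t2, t3 = (0, 2, 2), (2, 0, 2), (2, 2, 2)
M = monoid_closure([t1, t2, t3], 3)
print(len(M))                                             # 4
print(sorted((sorted(P) for P in {image(s) for s in M}), key=len))
# [[2], [0, 2], [0, 1, 2]], i.e. {3}, {1,3}, {1,2,3} in 1-based notation
\end{verbatim}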
\begin{example} More general collapsing (a usual motif) for a transformation monoid on 3 points,
$M=\big\{1,
\left(\begin{smallmatrix}1&2&3\\1& 1& 3\end{smallmatrix}\right),
\left(\begin{smallmatrix}1&2&3\\3& 2& 3\end{smallmatrix}\right),
\left(\begin{smallmatrix}1&2&3\\3& 1& 3\end{smallmatrix}\right),
\left(\begin{smallmatrix}1&2&3\\3& 3& 3\end{smallmatrix}\right)\big\}.$
\begin{center}
\input{fig-M32_5_dcl2sk_simplecollapsemotif.tikz}
\end{center}
\end{example}
The right regular transformation representation of $M$ can be encoded as $M'=\big\{1,
\left(\begin{smallmatrix}1&2&3&4&5\\2& 2& 4& 4& 5\end{smallmatrix}\right),
\left(\begin{smallmatrix}1&2&3&4&5\\3& 5& 3& 5& 5\end{smallmatrix}\right),
\left(\begin{smallmatrix}1&2&3&4&5\\4& 5& 4& 5& 5\end{smallmatrix}\right),
\left(\begin{smallmatrix}1&2&3&4&5\\5& 5& 5& 5& 5\end{smallmatrix}\right)\big\}.$
Its skeleton is isomorphic to the ${\mathcal J}$-class order of $M$.
\begin{center}
\input{fig-M32_regrep.tikz}
\end{center}
\begin{example}
Let $M$ be the monoid generated by
$a=\left(\begin{smallmatrix}1&2&3&4&5\\2& 1& 1& 1&4\end{smallmatrix}\right)$ and
$b=\left(\begin{smallmatrix}1&2&3&4&5\\1& 2& 2& 3&4\end{smallmatrix}\right)$.
In $M$, $a$ and $b$ are $\leq_{\mathcal J}$-incomparable, but $\im(a)\subset_S\im(b)$. This is because neither $b=sat$ nor $a=sbt$ has a solution with $s,t\in M$, although $\im(a)\subset\im(b)$.
\begin{center}
\input{fig-M25_dcl2sk_newrel.tikz}
\end{center}
This shows that the subduction order may contain new relations beyond those induced by collapsing nodes of the ${\mathcal J}$-order diagram.
Consequently, the length of a longest ${\mathcal J}$-chain is not an upper bound for the height of the skeleton.
\end{example}
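The incomparability claimed in this example can be confirmed with the hypothetical helpers introduced earlier; per the example, the exhaustive search should find no factorization $a=sbt$ or $b=sat$ in $M$, while $\im(a)\subseteq\im(b)$ gives subduction immediately.
\begin{verbatim}
# 0-based encoding of the two generators of this example.
a = (1, 0, 0, 0, 3)                  # 1 2 3 4 5  ->  2 1 1 1 4
b = (0, 1, 1, 2, 3)                  # 1 2 3 4 5  ->  1 2 2 3 4
M = monoid_closure([a, b], 5)
print(leq_J(a, b, M), leq_J(b, a, M))        # expected: False False
print(subduced(image(a), image(b), M))       # expected: True
\end{verbatim}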
So far the ${\mathcal J}$-class orders have all been lattices, but this is not true in general, so we now look at a monoid with richer inner structure.
\begin{example}[Nonlinear, non-lattice skeleton] Let
$a=\left(\begin{smallmatrix}1&2&3&4&5\\2& 2& 1& 2&4\end{smallmatrix}\right)$,
$b=\left(\begin{smallmatrix}1&2&3&4&5\\3& 5& 2& 3&2\end{smallmatrix}\right)$,
$c=\left(\begin{smallmatrix}1&2&3&4&5\\3& 5& 4& 5&4\end{smallmatrix}\right)$ and $M=\langle a,b,c\rangle$.
$|M|=31$, $|{\mathcal I}(X)|=16$, number of ${\mathcal D}$-classes is 13, and the number of skeleton classes is 9.
On the left the ${\mathcal D}$-class picture is drawn. On top of each ${\mathcal L}$-class (drawn vertically) the corresponding image is displayed.
${\mathcal H}$-classes with an idempotent are shaded.
The grey background blobs indicate ${\mathcal D}$-classes that are collapsed into one subduction class.
On the right the skeleton order is drawn.
It is nonlinear and it is not a lattice. The boxes indicate subduction equivalence classes.
\begin{center}
\input{fig-M35_skeleton.tikz}
\end{center}
The skeleton also contains nonsingleton subduction equivalence classes.
\end{example}
\section{Conclusion}
Working towards a simplified and elementary description of the holonomy decomposition, we clarified the connection between the ${\mathcal J}$-classes of a semigroup and the subduction classes of a transformation representation of the same semigroup.
We showed how the partial order of ${\mathcal J}$-classes constrains the image relations in the possible (faithful) actions of the semigroup.
Therefore, these results may also be useful for investigating or enumerating the possible action representations of a semigroup.
Theorem~\ref{functor} suggests that the holonomy decomposition might be made functorial, or nearly functorial, for a suitable category of transformation semigroups and surjective morphisms since the skeleton order is.
For calculating and checking the examples we used the \textsc{Gap}~\cite{GAP4} packages \textsc{Semigroups}~\cite{Semigroups}, \textsc{SgpDec}~\cite{SgpDec} and \textsc{SgpViz}\cite{sgpviz}.
\end{document}
\begin{document}
\baselineskip=20pt \hoffset=-3cm \voffset=0cm \oddsidemargin=3.2cm
\evensidemargin=3.2cm \thispagestyle{empty}
\hbadness=10000
\tolerance=10000
\hfuzz=150pt
\title{\textbf{Relative Morse index theory and applications in wave equations}}
\author{\Large Qi Wang $^{\rm a,b}$,$\quad$ Li Wu $^{\rm b}$}
\date{} \maketitle
\begin{center}
\it\scriptsize ${}^{\rm a}$ School of Mathematics and Statistics, Henan University, Kaifeng 475000, PR China\\
${}^{\rm b}$ Department of Mathematics, Shandong University, Jinan, Shandong, 250100, PR China\\
\end{center}
\footnotetext[0]{$^a${\bf Corresponding author.} Supported by NNSF of China (11301148) and PSF of China (188576).}
\footnotetext[0]{\footnotesize\;{\it E-mail address}: [email protected]. (Qi Wang), [email protected] (Li Wu).}
\noindent
{\bf Abstract:} { We develop the relative Morse index theory for linear self-adjoint operator equations without compactness assumptions and give the relationship with the index defined in \cite{Wang-Liu-2016} and \cite{Wang-Liu-2017}. Then we generalize the method of saddle point reduction and obtain some critical point theorems via the index, topological degree and critical point theory. As applications, we consider the existence and multiplicity of periodic solutions of wave equations.}
\noindent{\bf Keywords:} {\small Relative Morse index; Periodic solutions; Wave equations}
\section{Introduction}\label{section-introduction}
Many problems can be displayed as a self-adjoint operator equation
\[
Au=F'(u),\;u\in D(A)\subset \mathbf H,\eqno{(O.E.)},
\]
where $\mathbf H$ is an infinite-dimensional separable Hilbert space, $A$ is a self-adjoint operator on $\mathbf H$ with its domain $D(A)$, $F$ is a nonlinear functional on $\mathbf H$.
Examples include the boundary value problem for Laplace's equation on a bounded domain, periodic solutions of Hamiltonian systems, the Schr\"{o}dinger equation, periodic solutions of the wave equation, and so on.
By variational method, we know that the solutions of (O.E.) correspond to the critical points of a functional.
So we can transform the problem of finding the solutions of (O.E.) into the problem of finding the critical points of the functional.
Since the 1980s, beginning with Ambrosetti and Rabinowitz's famous work~\cite{Ambrosetti-Rabinowitz-1973} (the Mountain Pass Theorem), many crucial variational methods have been developed, such as minimax methods, Lusternik-Schnirelman theory, Galerkin approximation methods, saddle point reduction methods, dual variational methods, convex analysis theory, Morse theory and so on (see \cite{Amann-1976},\cite{Amann-Zehnder-1980},\cite{Aubin-Ekeland-1984},\cite{Chang-1993},\cite{Ekeland-1990},\cite{Ekeland-Temam-1976} and the references therein).
We classify these variational problems into three kinds according to the spectrum of $A$. For simplicity, denote by $\sigma(A)$, $\sigma_e(A)$ and $\sigma_d(A)$ the spectrum, the essential spectrum and the discrete finite dimensional point spectrum of $A$, respectively.
The first kind is where $\sigma(A)=\sigma_d(A)$ and $\sigma(A)$ is bounded from below (or above), such as the boundary value problem for Laplace's equation on a bounded domain and the periodic problem for second order Hamiltonian systems.
Morse theory can be used directly in this kind, and this is the simplest situation.
The second kind is where $\sigma(A)=\sigma_d(A)$ and $\sigma(A)$ is unbounded from above and below, such as the periodic problem for first order Hamiltonian systems.
In this kind, Morse theory cannot be used directly, because the functionals are strongly indefinite and the Morse indices at the critical points are infinite. In order to overcome this difficulty, index theories are worth noting here. By the work \cite{Ekeland-1984} of Ekeland, an index theory for convex linear Hamiltonian systems was established.
By the works \cite{Conley-Zehnder-1984,Long-1990,Long-1997,Long-Zehnder-1990} of Conley, Zehnder and Long, an index theory for symplectic paths was introduced.
These index theories have important and extensive applications, e.g \cite{Dong-Long-1997,Ekeland-Hofer-1985,Ekeland-Hofer-1987,Liu-Long-Zhu-2002,Long-Zhu-2000}.
In \cite{Zhu-Long-1999, Long-Zhu-2000-2} Long and Zhu defined spectral flows for paths of linear operators and redefined Maslov index for symplectic paths.
Additionally, Abbondandolo defined the concept of relative Morse index theory for Fredholm operator with compact perturbation (see\cite{Abb-2001} and the references therein). In the study of the $L$-solutions (the solutions starting and ending at the same Lagrangian subspace $L$) of Hamiltonian systems, Liu in \cite{Liu-2007} introduced an index theory for symplectic paths using the algebraic methods and gave some applications in
\cite{ Liu-2007, Liu-2007-2}. This index had been generalized by Liu, Wang and Lin in \cite{Liu-Wang-Lin-2011}.
In addition to the above index theories defined for specific forms, Dong in \cite{Dong-2010} developed an index theory for abstract operator equations (O.E.).
The third kind is where $\sigma_e(A)\neq \emptyset$, the most complex situation. Due to the lack of compactness, many classical methods cannot be used here. Especially, if $\sigma_e(A)\cap(-\infty,0)\neq \emptyset$ and $\sigma_e(A)\cap(0,\infty)\neq \emptyset$, Ding established a series of critical point theories with applications to homoclinic orbits in Hamiltonian systems, the Dirac equation, the Schr\"{o}dinger equation and so on; he named these problems ``very strongly indefinite problems'' (see \cite{Ding-2007},\cite{Ding-2017}). Wang and Liu defined the index theory ($i_A(B),\nu_A(B)$) for this kind and gave some applications to wave equations, homoclinic orbits in Hamiltonian systems and the Dirac equation; the methods include dual variation and saddle point reduction (see \cite{Wang-Liu-2016} and \cite{Wang-Liu-2017}). Additionally, Chen and Hu in \cite{Chen-Hu-2007} defined the index for homoclinic orbits of Hamiltonian systems. Recently, Hu and Portaluri in \cite{Hu-Portaluri-2017} defined an index theory for heteroclinic orbits of Hamiltonian systems.\\
In this paper, we consider the kind with $\sigma_e(A)\neq \emptyset$. Firstly, we develop the relative Morse index theory. Compared with Abbondandolo's work (\cite{Abb-2001}), we generalize the concept of relative Morse index $i^*_A(B)$ for Fredholm operators without the compactness assumption on the perturbation term (see Section \ref{section-relative Morse index}), and we give the relationship between the relative Morse index $i^*_A(B)$ and the index $i_A(B)$ defined in \cite{Wang-Liu-2016} and \cite{Wang-Liu-2017}. The bridge between them is the concept of spectral flow. As far as we know, the spectral flow was introduced by Atiyah-Patodi-Singer (see \cite{Atiyah-Patodi-Singer-1976}). Since then, many interesting properties and applications of the spectral flow have been established (see \cite{Cappell-Lee-Miller-1994},\cite{Floer-1988},\cite{Robbin-Salamon-1993},\cite{Robbin-Salamon-1995} and \cite{Zhu-Long-1999}).
Secondly, we generalize the method of saddle point reduction and obtain some critical point theorems. With the relative Morse index defined above, we establish some new abstract critical point theorems by saddle point reduction, topological degree and Morse theory, where we do not need the nonlinear term to be $C^2$ continuous (see Section \ref{section-saddle point reduction}).
Lastly, as applications, we consider the existence and multiplicity of periodic solutions for wave equations and give some new results (see Section \ref{section-applications}). To the best of the authors' knowledge, the problem of finding periodic solutions of nonlinear wave equations has attracted much attention since the 1960s. Recently, with critical point theory, many results have been obtained on this problem.
For example, Kryszewski and Szulkin in \cite{Kryszewski-Szulkin-1997} developed an infinite dimensional cohomology theory and the corresponding Morse theory,
with these theories, they obtained the existence of nontrivial periodic solutions of one dimensional wave equation.
Zeng, Liu and Guo in \cite{Zeng-Liu-Guo-2004}, Guo and Liu in \cite{Gou-Liu-2007} obtained the existence and multiplicity of nontrivial periodic solution of one dimensional wave equation
and beam equation by their Morse index theory developed in \cite{Guo-Liu-Zeng-2004}. Tanaka in \cite{Tanaka-2006} obtained the existence of nontrivial periodic solution of
one dimensional wave equation by linking methods. Ji and Li in \cite{Ji-Li-2006} considered the periodic solution of one dimensional wave equation with $x$-dependent coefficients.
By minimax principle, Chen and Zhang in \cite{Chen-Zhang-2014} and \cite{Chen-Zhang-2016} obtained infinitely many symmetric periodic solutions
of $n$-dimensional wave equation. Ji in \cite{Ji-2018} considered the periodic solutions for one dimensional wave equation with bounded nonlinearity and $x$-dependent coefficients.
\section{Relative Morse Index $i^*_A(B)$ and the relationship with $i_A(B)$}\label{section-relative Morse index}
Let $\mathbf H$ be an infinite dimensional separable Hilbert space with inner product $(\cdot,\cdot)_\mathbf H$ and norm $\|\cdot\|_\mathbf H$.
Denote by $\mathcal O(\mathbf H)$ the set of all linear self-adjoint operators on $\mathbf H$. For $A\in \mathcal O(\mathbf H)$, we denote by
$\sigma(A)$ the spectrum of $A$ and $\sigma_e(A)$ the essential spectrum of $A$. We define a subset of $\mathcal O( \mathbf H)$ as follows
\[
\mathcal O^0_e(a,b)=\{A\in \mathcal O(\mathbf H)|\;\sigma_e(A)\cap(a,b)=\emptyset \;{\rm and}\;\sigma(A)\cap (a, b)\ne \emptyset\}.
\]
Denote by $\mathcal{L}_s(\mathbf H)$ the set of all linear bounded self-adjoint operators on $\mathbf H$ and define a subset of $\mathcal{L}_s(\mathbf H)$ as follows
\begin{equation}\label{eq-L0}
\mathcal{L}_s(\mathbf H,a,b)=\{B\in \mathcal{L}_s(\mathbf H)\mid a\cdot I<B< b\cdot I\},
\end{equation}
where $I$ is the identity map on $\mathbf H$, and $B< b\cdot I$ means that there exists $\delta>0$ such that $(b-\delta)\cdot I-B$ is positive definite;
$B> a\cdot I$ has a similar meaning. For any $B\in\mathcal{L}_s(\mathbf H,a,b)$, we have the index pair ($i_A(B),\nu_A(B)$) (see \cite{Wang-Liu-2016,Wang-Liu-2017} for details).
In this section, we will define the relative Morse index $i^*_A(B)$ and give the relationship with $i_A(B)$.
\subsection{Relative Morse Index $i^*_A(B)$}
As the beginning of this subsection, we will give a brief introduction of relative Morse index. The relative Morse index can be derived in different ways (see\cite{Abb-2001,Chang-Liu-Liu-1997,Fei-1995,Zhu-Long-1999}). Such kinds of indices have been extensively studied in dealing with periodic orbits of first order Hamiltonian systems. As far as authors known, the existing relative Morse index theory can be regarded as compact perturbation for Fredholm operator. Assume $A$ is a self-adjoint Fredholm operator on Hilbert space $\mathbf H$, with the orthogonal splitting
\begin{equation}\label{eq-decomposition of space H}
\mathbf H=\mathbf H^-_A\oplus \mathbf H^0_A\oplus \mathbf H^+_A,
\end{equation}
where $A$ is negative, zero and positive definite on $\mathbf H^-_A,\;\mathbf H^0_A$ and $\mathbf H^+_A$ respectively. Let $P_A$ denote the orthogonal projection from $\mathbf H$ to $\mathbf H^-_A$. If the perturbation term $F$ is a compact self-adjoint operator on $\mathbf H$, then we have $P_A-P_{A-F}$ is compact and $P_A:\mathbf H^-_{A-F}\to \mathbf H^-_A$ is a Fredholm operator and we can define the so called relative Morse index by the Fredholm index of $P_A:\mathbf H^-_{A-F}\to \mathbf H^-_A$.
Generally, if the operator $A$ is not a Fredholm operator or the perturbation $F$ is not compact, $P_A:\mathbf H^-_{A-F}\to \mathbf H^-_A$ will not be a Fredholm operator and the concept of relative Morse index will be meaningless,
but if the perturbation lies in the gap of $\sigma_{e}(A)$, that is to say $A\in \mathcal O^0_e(\lambda_a,\lambda_b)$ for some $\lambda_a,\lambda_b\in\mathbb{R}$ and the perturbation $B\in \mathcal{L}_s(\mathbf H,\lambda_a,\lambda_b)$, we can still define the relative Morse index $i^*_A(B)$ and give its relationship with the index $i_A(B)$ defined in \cite{Wang-Liu-2017}. Firstly, we need two abstract lemmas.
\begin{lem} \label{Fredholm projection}
Let $A:\mathbf H\rightarrow\mathbf H$ be a bounded self-adjoint operator.
Let $W,V$ be closed subspaces of $\mathbf H$.
Denote the orthogonal projection $\mathbf H\rightarrow Y$ by $P_Y$ for any closed linear subspace $Y$ of $\mathbf H$. Assume that\\
\noindent (1). $(Ax,x)_\mathbf H<-\epsilon_1 \|x\|^2_\mathbf H,\;\forall x\in W\backslash\{0\}$, with some constant $\epsilon_1>0$,\\
\noindent (2). $(Ax,x)_\mathbf H> 0,\; \forall x\in V^{\bot}\backslash\{0\} $,\\
\noindent (3). $ (Ax,y)_\mathbf H=0, \forall x\in V,y\in V^\bot$.\\
Then $P_V|_{W}$ is an injection and $P_V(W)$ is a closed subspace of $\mathbf H$.
Furthermore, if we assume \\
\noindent (4). $(Ax,x)_\mathbf H\leq 0,\; \forall x\in V\backslash\{0\}$,\\
and there is a closed subspace $U$ of $W^\bot$ such that\\
\noindent (5). $W^\bot/U$ is finite dimensional,\\
\noindent (6). $(Ax,x)_\mathbf H>0,\;\forall x\in U\setminus \{0\}$. \\
Then $P_V:W\rightarrow V$ and $P_W:V\rightarrow W$ are both Fredholm operators and
\[
{\rm ind}(P_W:V\rightarrow W)=-{\rm ind}(P_V:W\rightarrow V).
\]
\end{lem}
\begin{proof}
Note that $\ker P_V|_{W}= \ker P_V\cap W =V^\bot \cap W$.
From condition (1) and (2), we have $V^\bot \cap W=\{0\}$, so $P_V|_{W}$ is an injection.
For $x\in W$, from condition (2) and (3), we have
\begin{align*}
-\|A\|\|P_Vx\|^2_\mathbf H&\leq(AP_V x,P_V x)_\mathbf H\\
&=(Ax,x)_\mathbf H-(A(I-P_V)x,(I-P_V)x)_\mathbf H\\
&\leq (Ax,x)_\mathbf H\\
&< -\epsilon_1 \|x\|^2_\mathbf H
\end{align*}
It follows that
\begin{equation}\label{eq-relative Morse index-3}
\|P_V x\|_\mathbf H\ge \sqrt{\frac{\epsilon_1}{\|A\|}}\|x\|_\mathbf H,\;\forall x\in W,
\end{equation}
so $P_V(W)$ is a closed subspace of $\mathbf H$.
For any $x\in (P_V(W))^\bot \cap V $, that is to say $x\bot P_V(W)$, we also have $x\bot (I-P_V)(W)$ since $(I-P_V)(W)\subset V^\bot$, so $x\bot W$ and
\begin{equation}\label{eq-relative Morse index-1}
(P_V(W))^\bot \cap V \subset W^\bot.
\end{equation}
From condition (4) and (6),
\begin{align}\label{eq-relative Morse index-2}
((P_V(W))^\bot \cap V) \cap U&\subset V\cap U\nonumber\\
& =\{0\}.
\end{align}
From \eqref{eq-relative Morse index-1}, \eqref{eq-relative Morse index-2} and condition (5), $(P_V(W))^\bot \cap V$ is finite dimensional.
It follows that $P_V:W\rightarrow V$ is a Fredholm operator.
From \eqref{eq-relative Morse index-3}, we have
\begin{align*}
\|(I-P_V)x\|^2&=\|x\|^2-\|P_V x\|^2\\
&\leq (1-\epsilon_1/\|A\|)\|x\|^2,\;\forall x\in W.
\end{align*}
It follows that $\|I-P_V|_W\|<1$.
So the operator $P_WP_V=P_W-P_W(I-P_V):W\rightarrow W$ is invertible.
It follows that $P_W:V\rightarrow W$ is surjective, and
\begin{equation}\label{eq-relative Morse index-4}
\ker P_W\cap P_V(W)=\{0\}.
\end{equation}
Note that $V$ has the following decomposition
\[
V=P_V(W)\bigoplus ((P_V(W))^\bot \cap V),
\]
from \eqref{eq-relative Morse index-4} and $\dim((P_V(W))^\bot \cap V)<\infty$, we have $\ker P_W \cap V$ is finite dimensional.
So the operator $P_W:V\rightarrow W$ is a Fredholm operator.
Since $P_WP_V:W\rightarrow W$ is invertible, we have
\begin{align*}
0&={\rm ind}(P_WP_V:W\rightarrow W)\\
&={\rm ind}(P_W:V\rightarrow W)+{\rm ind}(P_V:W\rightarrow V).
\end{align*}
Thus we have proved the lemma.
\end{proof}
\begin{lem}\label{finite_pertubation}
Let $V_1\subset V_2$, $W_1\subset W_2$ be linear closed subspaces of $\mathbf H$ such that $V_2/V_1$ and $W_2/W_1$ are finite dimensional linear spaces.
Let $P_{V_i}$, $P_{W_j}$ be the orthogonal projections onto $V_i$ and $W_j$ respectively, $i,j=1,2$.
Assume that $P_{W_{j^*}}:V_{i^*}\rightarrow W_{j^*}$ is a Fredholm operator for some fixed $i^*,j^*\in \{1,2\}$.
Then $P_{W_j}:V_i \rightarrow W_j$, $i,j=1,2$ are all Fredholm operators.
Furthermore, we have
\begin{align*}
{\rm ind}(P_{W_j}:V_i\rightarrow W_j)=&{\rm ind}(P_{V_{i^*}}:V_i\rightarrow V_{i^*})+{\rm ind}(P_{W_{j^*}}:V_{i^*}\rightarrow W_{j^*})\\
&+{\rm ind}(P_{W_j}:W_{j^*}\rightarrow W_j).
\end{align*}
\end{lem}
\begin{proof}
Since $V_2/V_1$ and $W_2/W_1$ are finite dimensional linear spaces, $P_{W_j}-P_{W_{j^*}}$ and $P_{V_i}-P_{V_{i^*}}$ are both compact operators. So $P_{W_j}P_{V_i}-P_{W_{j^*}}P_{V_{i^*}}$ is also a compact operator.
Note that on $V_i$,
\[
(P_{W_j}-P_{W_j}P_{W_{j^*}}P_{V_{i^*}})|_{V_i}=P_{W_j}(P_{W_j}P_{V_i}-P_{W_{j^*}}P_{V_{i^*}})|_{V_i}.
\]
It follows that $P_{W_j}-P_{W_j}P_{W_{j^*}}P_{V_{i^*}}:V_i\rightarrow W_j$ is compact.
Then we can conclude that
\begin{align*}
{\rm ind}(P_{W_j}:V_i\rightarrow W_j)=&{\rm ind}(P_{W_j}P_{W_{j^*}}P_{V_{i^*}}:V_i\rightarrow W_j)\\
=&{\rm ind}(P_{V_{i^*}}:V_i\rightarrow V_{i^*})+{\rm ind}(P_{W_{j^*}}:V_{i^*}\rightarrow W_{j^*})\\
&+{\rm ind}(P_{W_j}:W_{j^*}\rightarrow W_j).
\end{align*}
We have proved the lemma.
\end{proof}
With these two lemmas, we can define the relative Morse index. For simplicity we consider the normalized case, that is, $A\in \mathcal O^0_e(-1,1)$.
Let $B\in \mathcal{L}_s(\mathbf H,-1,1)$ with norm $\|B\|=c_B$, so $0\leq c_B<1$. Then $A-tB$ is a self-adjoint Fredholm operator for $t\in [0,1]$, and $\sigma_{e}(A-tB)\cap (-1+tc_B,1-tc_B)=\emptyset$. Let $E_{A-tB}(z)$ be the spectral measure of $A-tB$. Denote
\begin{equation}\label{eq-projection splitting}
P(A-tB,U)= \int_{U} dE_{A-tB}(z),
\end{equation}
with $U\subset \mathbb{R}$, and rewrite it as $P(t,U) $ for simplicity. Let
\[
V(A-tB,U)={\rm im}\, P(t,U)
\]
and rewrite it as $V(t,U)$ for simplicity. For any $c_0\in\mathbb{R}$ satisfying $c_B<c_0<1$, we have
\[
((A-B)x,x)_\mathbf H>(c_0-c_B)\|x\|^2_\mathbf H,\;x\in V(0,(c_0,+\infty))\cap D(A).
\]
So there is $\epsilon >0$, such that
\begin{equation}\label{eq-relative Morse index-5}
((A-B)x,x)_\mathbf H>\epsilon ((|A-B|+I)x,x)_\mathbf H ,\forall x\in V(0,(c_0,+\infty))\cap D(A)
\end{equation}
Similarly, we have
\begin{equation}\label{eq-relative Morse index-6}
((A-B)x,x)_\mathbf H<-\epsilon ((|A-B|+I)x,x)_\mathbf H ,\forall x\in V(0,(-\infty,-c_0))\cap D(A)
\end{equation}
Denote
\[
P_{s,a}^{t,b}:=P(t,(-\infty,b))|_{V(s,(-\infty,a))},\;\forall t,s\in [0,1]\;{\rm and}\;a,b\in \mathbb{R}.
\]
Clearly, we have $P(t,(-\infty,b))=P_{s,+\infty}^{t,b}$ , $\forall s\in [0,1] $.
\begin{lem}\label{inverse_relation}
For any $a\in[-c_0,c_0]$, the maps $P_{0,a}^{1,0}$ and $P_{1,0}^{0,a}$ are both Fredholm operators.
Furthermore, we have ${\rm ind}(P_{0,a}^{1,0})=-{\rm ind}(P_{1,0}^{0,a})$.
\end{lem}
\begin{proof}
From \eqref{eq-relative Morse index-5} and \eqref{eq-relative Morse index-6}, there is $\epsilon>0$ such that
\[
((A-B)(|A-B|+I)^{-1}x,x)>\epsilon \|x\|^2,\;\forall x\in V(0,(c_0,+\infty)),
\]
and
\[
((A-B)(|A-B|+I)^{-1}x,x)<-\epsilon \|x\|^2,\;\forall x\in V(0,(-\infty,-c_0)).
\]
Now, let the operator $(A-B)(|A-B|+I)^{-1}$, the spaces $V(0,(-\infty,-c_0))$, $V(1,(-\infty,0])$ and $V(0,(c_0,+\infty))$ be the operator $A$ and the spaces $W,V$ and $U$ in Lemma \ref{Fredholm projection} correspondingly.
It's easy to verify that condition (1), (2), (3), (4) and (6) are satisfied, and since $A\in \mathcal O^0_e(-1,1)$, $V(0,[-c_0,c_0])$ is finite dimensional, so condition (5) is satisfied. Then $P_{0,-c_0}^{1,0} $ and $P_{1,0}^{0,-c_0} $ are both Fredholm operators.
We also have
\[
{\rm ind}(P_{0,-c_0}^{1,0})=-{\rm ind}(P_{1,0}^{0,-c_0}).
\]
By Lemma \ref{finite_pertubation}, $ P_{0,a}^{1,0}$ and $P_{1,0}^{0,a}$ are both Fredholm operators with $a\in [-c_0,c_0]$, and we have
\[
{\rm ind}(P_{0,a}^{1,0})=-{\rm ind}(P_{1,0}^{0,a}),\; a\in [-c_0,c_0].
\]
\end{proof}
\begin{rem}\label{inverse_relation(general result)}
Generally, $P_{s,a}^{t,b}$ and $P_{t,b}^{s,a}$ are both Fredholm operators
with $a\in (-1+sc_B,1-sc_B)$, $b\in (-1+tc_B,1-tc_B)$ and we have
\[
{\rm ind}(P_{s,a}^{t,b})=-{\rm ind}(P_{t,b}^{s,a}).
\]
\end{rem}
Indeed, we replace $A,B$ by $A'=A-sB$ and $B'=(t-s)B$ respectively in Lemma \ref{inverse_relation}; then the proof is the same, so we omit it here.
\begin{defi}\label{defi-index defined by relative Morse index}
Define the relative Morse index by
\[
i^*_A(B):={\rm ind}(P^{0,0}_{1,0}),\;\forall B\in\mathcal{L}_s(\mathbf H,-1,1).
\]
\end{defi}
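The following elementary observation, included only for orientation, explains the name ``relative'' Morse index.
\begin{rem}
If $\dim \mathbf H<\infty$, then every linear map between finite dimensional spaces is Fredholm and its index equals the dimension of its domain minus the dimension of its codomain, so Definition \ref{defi-index defined by relative Morse index} gives
\[
i^*_A(B)={\rm ind}(P^{0,0}_{1,0})=\dim V(1,(-\infty,0))-\dim V(0,(-\infty,0))=m^-(A-B)-m^-(A),
\]
the difference of the classical Morse indices of $A-B$ and $A$. In the infinite dimensional situation considered here both Morse indices may be infinite, while their ``difference'' $i^*_A(B)$ remains well defined.
\end{rem}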
\subsection{The relationship between $i^*_A(B)$ and $i_A(B)$}
Now, we will prove that $i^*_A(B)=i_A(B)$ by the concept of spectral flow. We need some preparations. There are some equivalent definitions of spectral flow. We use the Definition 2.1, 2.2 and 2.6 in \cite{Zhu-Long-1999}.
Let $A_s$ be a path of self-adjoint Fredholm operators.
The APS projection of $A_s$ is defined by $Q_{A_s}=P(A_s,[0,+\infty))$ .
Recall that locally, the spectral flow of $A_s$ is the s-flow of $Q_{A_s}$.
Choose $\epsilon>0$ such that $V(A_{s_0},[0,+\infty))=V(A_{s_0},[-\epsilon,+\infty))$.
Then $\epsilon \notin \sigma (A_{s_0})$.
Let $P_{A_s}=P(A_s,[-\epsilon,+\infty))$.
Then there is $\delta>0$ such that $P_{A_s}$ is continuous on $(s_0-\delta,s_0+\delta)$
and $P_{A_s}-Q_{A_s}$ is compact for $s\in(s_0-\delta,s_0+\delta)$.
Then the s-flow of $Q_{A_s}$ on $[s_0,b]\subset (s_0-\delta,s_0+\delta)$ can be calculated as
\begin{align*}
sfl(Q_{A_s},[s_0,b])=& -{\rm ind}(P_{A_{s_0}}:V(A_{s_0},[0,\infty))\to V(A_{s_0},[0,\infty)))\\
&+{\rm ind}(P_{A_{s_b}}:V(A_{s_b},[0,\infty))\to V(A_{s_b},[-\epsilon,\infty)))\\
=&-{\rm dim}(V(A_{s_b},[-\epsilon,0)))\\
=&{\rm ind}({\rm Id}-P_{A_{s_b}}:V(A_{s_b},(-\infty,-\epsilon))\to V(A_{s_b},(-\infty,0))).
\end{align*}
If $A_s=A-sB$, with $\epsilon$ and $\delta$ chosen as above,
we have $sf\{A-sB,[s_0,s_1]\}={\rm ind}\,P_{s_1,-\epsilon}^{s_1,0}$ for $[s_0,s_1]\subset [s_0,s_0+\delta]$.
\begin{lem}\label{continuity}
Let $t_0\in [0,1]$.
Let $a\in (-1+t_0c_B,1-t_0c_B)\backslash\sigma(A-t_0B)$.
Then we have
\[
\lim_{s\to t_0}\|P_{t_0,a}^{t,0}-P_{s,a}^{t,0}P_{t_0,a}^{s,a}\|=0,\forall t\in[0,1].
\]
and
\begin{align*}
\lim_{s\to t_0}{\rm ind}(P_{s,a}^{t,0})&={\rm ind}(P_{t_0,a}^{t,0}),\\
\lim_{s\to t_0}{\rm ind}(P_{t,0}^{s,a})&={\rm ind}(P_{t,0}^{t_0,a}).
\end{align*}
\end{lem}
\begin{proof}
Since $a\notin\sigma(A-t_0B) $,
there is $\delta_1>0$ such that $P(\cdot,(-\infty,a))$ is a continuous path of operators on $(t_0-\delta_1,t_0+\delta_1)$,
and
\[\|(P(s,(-\infty,a))-P(t_0,(-\infty,a)))\|<1\]
with $s\in (t_0-\delta_1,t_0+\delta_1)$.
Then $P_{t_0,a}^{s,a}$ and $ P_{s,a}^{t_0,a}$ are both homeomorphisms.
Note that on $V(t_0,(-\infty,a))$, we have
\[
P_{t_0,a}^{t,0}-P_{s,a}^{t,0}P_{t_0,a}^{s,a}
=P(t,(-\infty,0))(P(t_0,(-\infty,a))-P(s,(-\infty,a)))|_{V(t_0,(-\infty,a))}.
\]
By the continuity of $P(s,(-\infty,a))$, it follows that
\[
\lim_{s\to t_0}\|P_{t_0,a}^{t,0}-P_{s,a}^{t,0}P_{t_0,a}^{s,a}\|=0.
\]
Then we have
\begin{align*}
{\rm ind}(P_{t_0,a}^{t,0})&=\lim_{s\to t_0}{\rm ind}(P_{s,a}^{t,0}P_{t_0,a}^{s,a})\\
&=\lim_{s\to t_0}\big({\rm ind}(P_{s,a}^{t,0})+{\rm ind}(P_{t_0,a}^{s,a})\big)\\
&=\lim_{s\to t_0}{\rm ind}(P_{s,a}^{t,0}).
\end{align*}
By Remark \ref{inverse_relation(general result)}, we get
\begin{align*}
{\rm ind}(P_{t,0}^{t_0,a})&=-{\rm ind}(P_{t_0,a}^{t,0})\\
&=-\lim_{s\to t_0}{\rm ind}(P_{s,a}^{t,0})\\
&=\lim_{s\to t_0}{\rm ind}(P_{t,0}^{s,a}).
\end{align*}
\end{proof}
\begin{lem}\label{local_flow}
For each $t_1\in [0,1]$, there is $\delta >0$ such that
\[{\rm ind}(P_{t_1,0}^{t_2,0})=sf\{A-t_1 B-s(t_2-t_1)B,[0,1]\}\] with $|t_2-t_1|<\delta$.
\end{lem}
\begin{proof}
Since $A-t_1B$ is a Fredholm operator, there is $\epsilon>0$ such that $P(t_1,(-\infty,0))=P(t_1,(-\infty,-\epsilon))$.
It follows that $\epsilon \notin \sigma (A-t_1B)$, and we have
\[
P_{t_1,-\epsilon}^{t_2,0}=P_{t_1,0}^{t_2,0}.
\]
By Lemma \ref{continuity} we have
\[
\lim_{t_2\to t_1}{\rm ind}(P_{t_1,-\epsilon}^{t_2,-\epsilon})={\rm ind}(P_{t_1,-\epsilon}^{t_1,-\epsilon})=0.
\]
It follows that
\begin{align*}
\lim_{t_2\to t_1}{\rm ind}(P_{t_1,-\epsilon}^{t_2,0})&=\lim_{t_2\to t_1}{\rm ind}(P_{t_2,-\epsilon}^{t_2,0}P_{t_1,-\epsilon}^{t_2,-\epsilon})\\
&=\lim_{t_2\to t_1}{\rm ind}(P_{t_2,-\epsilon}^{t_2,0})+\lim_{t_2\to t_1}{\rm ind}(P_{t_1,-\epsilon}^{t_2,-\epsilon})\\
&=\lim_{t_2\to t_1}{\rm ind}(P_{t_2,-\epsilon}^{t_2,0}).
\end{align*}
So there is $\delta>0$ such that ${\rm ind}(P_{t_1,-\epsilon}^{t_2,0})={\rm ind}(P_{t_2,-\epsilon}^{t_2,0})$ with $|t_2-t_1|<\delta$,
and $P(t,(-\infty,-\epsilon))$ is continuous on $(t_1-\delta,t_1+\delta)$.
Note that $sf\{A-t_1 B-s(t_2-t_1)B,[0,1]\}={\rm ind}\, P_{t_2,-\epsilon}^{t_2,0}$ by the continuity of $P(t,(-\infty,-\epsilon))$.
Then the lemma follows.
\end{proof}
\begin{lem}\label{additional}
${\rm ind}(P_{0,0}^{1,0})={\rm ind}(P_{t,0}^{1,0})+{\rm ind}(P_{0,0}^{t,0})$ for all $t\in [0,1]$.
\end{lem}
\begin{proof}
By Lemma \ref{finite_pertubation} and Remark \ref{inverse_relation(general result)}, for any $t_0\in [0,1]$
\begin{align*}
{\rm ind}(P_{t_0,0}^{1,0})+{\rm ind}(P_{0,0}^{t_0,0})&={\rm ind}(P_{t_0,a}^{1,0})+{\rm ind}(P_{t_0,0}^{t_0,a})+{\rm ind}(P_{t_0,a}^{t_0,0})+{\rm ind}(P_{0,0}^{t_0,a})\\
&={\rm ind}(P_{t_0,a}^{1,0})+{\rm ind}(P_{0,0}^{t_0,a}),\; \forall a\in (-1+t_0c_B,1-t_0c_B).
\end{align*}
Choose $a_{t_0}\in (-1+t_0c_B,1-t_0c_B)$ and $a_{t_0}\notin \sigma(A-t_0B)$.
By Lemma \ref{continuity},
\[
f:t\to {\rm ind}(P_{t,a_{t_0}}^{1,0})+{\rm ind}(P_{0,0}^{t,a_{t_0}})
\]
is continuous at $t_0$.
So the function $f:t\to {\rm ind}(P_{t,0}^{1,0})+{\rm ind}(P_{0,0}^{t,0})$ is continuous on $[0,1]$.
So it must be a constant function.
It follows that
\[
{\rm ind}(P_{0,0}^{1,0})=f(1)=f(t)={\rm ind}(P_{t,0}^{1,0})+{\rm ind}(P_{0,0}^{t,0}).
\]
\end{proof}
\begin{rem}
In fact, we have
\[{\rm ind}(P_{a,0}^{b,0})={\rm ind}(P_{s,0}^{b,0})+{\rm ind}(P_{a,0}^{s,0})\] with $s\in [0,1]$.
\end{rem}
\begin{thm}\label{thm-relative morse index and spectral flow}
We have
\[
sf\{A-tB,[a,b]\}={\rm ind}(P_{a,0}^{b,0})=-{\rm ind}(P_{b,0}^{a,0})
\]
with $[a,b]\subset [0,1]$.
\end{thm}
\begin{proof}
It is a direct consequence of Lemma \ref{local_flow} and Lemma \ref{additional}.
\end{proof}
Now by the property of $i_A(B)$ (see \cite[Lemma 2.9]{Wang-Liu-2016}, \cite[Lemma 2.3]{Wang-Liu-2017}) and Theorem \ref{thm-relative morse index and spectral flow}, we have the following result.
\begin{prop}\label{prop-relations between indexes}
$
i^*_A(B)=i_A(B), A\in\mathcal{O}^0_e(-1,1),B\in\mathcal{L}_s(\mathbf H,-1,1).
$
\end{prop}
Generally, with the same method we can define the relative Morse index $i^*_A(B)$ for $A\in\mathcal{O}^0_e(\lambda_a,\lambda_b)$, $B\in\mathcal{L}_s(\mathbf H,\lambda_a,\lambda_b)$, and we can prove that the index $i^*_A(B)$ coincides with $i_A(B)$ by the concept of spectral flow; we omit the details here.
\section{Saddle point reduction of (O.E.) and some abstract critical points Theorems}\label{section-saddle point reduction}
Now for simplicity, let $b>0$ and $a=-b$, for $A\in\mathcal O^0_e(-b,b)$, we consider the following operator equation
\[
Az=F'(z),\;z\in D(A)\subset \mathbf H, \eqno(O.E.)
\]
where $F\in C^1(\mathbf H,\mathbb{R})$. Assume\\
\noindent($F_1$) $F\in C^1(\mathbf H,\mathbb{R})$, $F':\mathbf H\to\mathbf H$ is Lipschitz continuous
\begin{equation}\label{eq-the Lipschitz continuity of F'}
\|F'(z+h)-F'(z)\|_{\mathbf H}\leq l_F\|h\|_{\mathbf H},\;\forall z,h\in\mathbf H,
\end{equation}
with its Lipschitz constant $l_F<b$.
\subsection{Saddle point reduction of (O.E.)}
In this part we assume $A\in\mathcal{O}^0_e(-b,b)$ and that $F$ satisfies condition ($F_1$). We will carry out the saddle point reduction without assuming that the nonlinear term satisfies $F\in C^2(D(|A|^{1/2}))$, and then give some abstract critical point theorems. Let $E_A(z)$ be the spectral measure of $A$. Since $\sigma_e(A)\cap(-b,b)=\emptyset$, we can choose $l\in (l_F,b)$ such that
\[
-l, l\notin\sigma(A).
\]
Differently from the above section, in this section we consider the projection maps $P(A,U)$ defined in \eqref{eq-projection splitting} on $\mathbf H$; for simplicity, we rewrite them as
\begin{equation}\label{eq-projections}
P^-_A:=P(A,(-\infty,-l)),\;P^+_A:=P(A,(l,\infty)),\;P^0_A:=P(A,(-l,l)),
\end{equation}
in this section.
Then we have the following decomposition which is different from \eqref{eq-decomposition of space H},
\[
\mathbf H=\widehat{\mathbf H}^-_A\oplus\widehat{\mathbf H}^+_A \oplus\widehat{\mathbf H}^0_A,
\]
where $\widehat{\mathbf H}^*_A:=P^*_A\mathbf H$ ($*=\pm, 0$) and $\widehat{\mathbf H}^0_A$ is a finite dimensional subspace of $\mathbf H$;
for simplicity we rewrite $\mathbf H^*:=\widehat{\mathbf H}^*_A$. Denote by $A^*$ the restriction of $A$ to $\mathbf H^*$ ($*=\pm, 0$); then the $(A^\pm)^{-1}$ are bounded self-adjoint linear operators on $\mathbf H^\pm$ respectively, satisfying
\begin{equation}\label{eq-the norm of the inverse of Apm}
\|(A^\pm)^{-1}\|\leq \frac{1}{l}.
\end{equation}
Then (OE) can be rewritten as
\begin{equation}\label{eq-decomposition 1 of OE}
z^\pm =(A^\pm)^{-1}P^\pm_A F'(z^++z^-+z^0),
\end{equation}
and
\begin{equation}\label{eq-decomposition 2 of OE}
A^0z^0=P^0_AF'(z^++z^-+z^0),
\end{equation}
where $z^*=P^*_Az$ ($*=\pm, 0$); for simplicity, we write $x:=z^0$. From \eqref{eq-the Lipschitz continuity of F'} and \eqref{eq-the norm of the inverse of Apm}, $(A^\pm)^{-1}P^\pm_A F'$ is a contraction map on $\mathbf H^+\oplus \mathbf H^-$ for any $x\in\mathbf H^0$. So there is a map $z^\pm(x):\mathbf H^0\to\mathbf H^\pm$ satisfying
\begin{equation}\label{eq-zpm(x)}
z^\pm(x)=(A^\pm)^{-1}P^\pm_A F'(z^\pm(x)+x),\;\forall x\in\mathbf H^0,
\end{equation}
and the following properties.
\begin{prop}\label{prop-continuous and property of saddle point reduction}
(1) The map $z^\pm(x):\mathbf H^0\to \mathbf H^\pm$ is continuous, in fact we have
\[
\|(z^++z^-)(x+h)-(z^++z^-)(x)\|_\mathbf H\leq\frac{l_F}{l-l_F}\|h\|_{\mathbf H},\;\;\forall x,h\in\mathbf H^0.
\]
(2) $\|(z^++z^-)(x)\|_\mathbf H\displaystyle\leq \frac{l_F}{l-l_F}\|x\|_{\mathbf H}+\frac{1}{l-l_F}\|F'(0)\|_{\mathbf H}$.
\end{prop}
\noindent\textbf{Proof.}(1) For any $x,\;h\in \mathbf H^0$, here we write $z^\pm(x):=z^+(x)+z^-(x)$ and $(A^\pm)^{-1}P^\pm_A:=(A^+)^{-1}P^+_A+(A^-)^{-1}P^-_A$ for simplicity, we have
\begin{align*}
\|z^\pm(x+h)-z^\pm(x)\|_{\mathbf H} &=\|(A^\pm)^{-1}P^\pm_A F'(z^\pm(x+h)+x+h)-(A^\pm)^{-1}P^\pm_A F'(z^\pm(x)+x)\|_{\mathbf H}\\
&\leq\frac{1}{l}\|F'(z^\pm(x+h)+x+h)-F'(z^\pm(x)+x)\|_{\mathbf H}\\
&\leq\frac{l_F}{l}\|z^\pm(x+h)-z^\pm(x)+h\|_{\mathbf H}\\
&\leq\frac{l_F}{l}\|z^\pm(x+h)-z^\pm(x)\|_{\mathbf H}+\frac{l_F}{l}\|h\|_{\mathbf H}.
\end{align*}
So we have $\|z^\pm(x+h)-z^\pm(x)\|_\mathbf H\leq\frac{l_F}{l-l_F}\|h\|_{\mathbf H}$ and the map $z^\pm(x):{\mathbf H}^0\to {\mathbf H}^\pm$ is continuous.\\
(2)Similarly,
\begin{align*}
\|z^\pm(x)\|_{\mathbf H}&=\|(A^\pm)^{-1}P^\pm_A F'(z^\pm(x)+x)\|_{\mathbf H}\\
&\leq\frac{1}{l}\|F'(z^\pm(x)+x)\|_{\mathbf H}\\
&\leq\frac{1}{l}\|F'(z^\pm(x)+x)-F'(0)\|_{\mathbf H}+\frac{1}{l}\|F'(0)\|_{\mathbf H}\\
&\leq\frac{l_F}{l}(\|z^\pm(x)\|_{\mathbf H}+\|x\|_H)+\frac{1}{l}\|F'(0)\|_{\mathbf H}.
\end{align*}
So we have $\|z^\pm(x)\|_\mathbf H\leq \frac{l_F}{l-l_F}\|x\|_{\mathbf H}+\frac{1}{l-l_F}\|F'(0)\|_{\mathbf H}$. $\Box$
\begin{rem}Denote $\mathbf E=D(|A|^{\frac{1}{2}})$, with its norm
\[
\|z\|^2_\mathbf E:=\||A|^{\frac{1}{2}}(z^++z^-)\|^2_\mathbf H+\|x\|^2_\mathbf H,\;z\in \mathbf E.
\]
From \eqref{eq-zpm(x)}, we have $z^\pm(x)\in D(A)\subset\mathbf E$,
and we have\\
(1) The map $z^\pm(x):\mathbf H^0\to \mathbf E$ is continuous, and
\begin{equation}\label{eq-uniform continuous of z in E}
\|(z^++z^-)(x+h)-(z^++z^-)(x)\|_\mathbf E\leq\frac{l_F \cdot l^\frac{1}{2}}{l-l_F}\|h\|_{\mathbf H},\;\;\forall x,h\in\mathbf H^0.
\end{equation}
(2) $\|(z^++z^-)(x)\|_\mathbf E\displaystyle\leq \frac{l^\frac{1}{2}}{l-l_F}(l_F\cdot\|x\|_{\mathbf H}+\|F'(0)\|_{\mathbf H})$.
\end{rem}
\noindent{\bf Proof.} The proof is similar to Proposition \ref{prop-continuous and property of saddle point reduction}, we only prove (1).
\begin{align*}
\|z^\pm(x+h)-z^\pm(x)\|_{\mathbf E} &=\|(|A|^{\frac{1}{2}})[z^\pm(x+h)-z^\pm(x)]\|_{\mathbf H}\\
&=\|(A^\pm)^{-\frac{1}{2}}[P^\pm_A F'(z^\pm(x+h)+x+h)-P^\pm_A F'(z^\pm(x)+x)]\|_{\mathbf H}\\
&\leq\frac{1}{l^\frac{1}{2}}\|F'(z^\pm(x+h)+x+h)-F'(z^\pm(x)+x)\|_{\mathbf H}\\
&\leq\frac{l_F}{l^\frac{1}{2}}\|z^\pm(x+h)-z^\pm(x)+h\|_{\mathbf H}\\
&\leq\frac{l_F}{l^\frac{1}{2}}\|z^\pm(x+h)-z^\pm(x)\|_{\mathbf H}+\frac{l_F}{l^\frac{1}{2}}\|h\|_{\mathbf H}\\
&\leq\frac{l_F}{l}\|z^\pm(x+h)-z^\pm(x)\|_{\mathbf E}+\frac{l_F}{l^\frac{1}{2}}\|h\|_{\mathbf H},
\end{align*}
where the last inequality depends on the fact that $\|z^\pm\|_\mathbf E\geq l^\frac{1}{2}\|z^\pm\|_\mathbf H$, so we have \eqref{eq-uniform continuous of z in E}.
Now, define the map $z:\mathbf H^0\to\mathbf H$ by
\[
z(x)=x+z^+(x)+z^-(x).
\]
Define the functional $a:\mathbf H^0\to \mathbb{R}$ by
\begin{equation}\label{eq-saddle point reduction}
a(x)=\frac{1}{2}(Az(x),z(x))_\mathbf H-F(z(x)),\;x\in \mathbf H^0.
\end{equation}
By a standard argument, the critical points of $a$ correspond to the solutions of (O.E.), and we have
\begin{lem}\label{lem-the smoothness of a}
Assume $F$ satisfies ($F_1$), then we have $a\in C^1(\mathbf H^0,\mathbb{R})$ and
\begin{equation}\label{eq-saddle point reduction-the derivative of a}
a'(x)=Az(x)-F'(z(x)),\;\;\forall x\in \mathbf H^0.
\end{equation}
Furthermore, if $F\in C^2(\mathbf H,\mathbb R)$, we have $a\in C^2(\mathbf H^0,\mathbb{R})$; for any critical point $x$ of $a$, $F''(z(x))\in \mathcal{L}_s(\mathbf H,-b,b)$ and the Morse index $m^-_a(x)$ satisfies the following equality
\begin{equation}\label{eq-relation between morse index and our index}
m^-_a(x_2)-m^-_a(x_1)=i^*_A(F''(z(x_2)))-i^*_A(F''(z(x_1))),\;\;\forall x_1,x_2\in \mathbf H^0.
\end{equation}
\end{lem}
\noindent{\bf Proof.} For any $x,h\in\mathbf H^0$, write
\[
\eta(x,h):=z^+(x+h)+z^-(x+h)-z^+(x)-z^-(x)+h
\]
for simplicity, that is to say
\[
z(x+h)=z(x)+\eta(x,h),\;\;\forall x,h\in\mathbf H^0,
\]
and from \eqref{eq-uniform continuous of z in E}, we have
\begin{equation}\label{eq-uniform astimate of eta}
\|\eta(x,h)\|_\mathbf H\leq C\|h\|_{\mathbf H},\;\;\forall x,h\in\mathbf H^0,
\end{equation}
where $C=\displaystyle \frac{l+l_F}{l-l_F}$. Let $h\to 0$ in $\mathbf H^0$, and for any $x\in\mathbf H^0$, we have
\begin{align*}
a(x+h)-a(x)=&\frac{1}{2}[(Az(x+h),z(x+h))_\mathbf H-(Az(x),z(x))_\mathbf H]-[F(z(x+h))-F(z(x))]\\
=&(Az(x),\eta(x,h))_\mathbf H+\frac{1}{2}(A\eta(x,h),\eta(x,h))_\mathbf H\\
&-(F'(z(x)),\eta(x,h))_\mathbf H+o(\|\eta(x,h)\|_{\mathbf H}).
\end{align*}
From \eqref{eq-uniform astimate of eta} we have
\[
a(x+h)-a(x)=(Az(x)-F'(z(x)),\eta(x,h))_\mathbf H+o(\|h\|_\mathbf H),\;\;\forall x\in \mathbf H^0,\; {\rm and}\;\|h\|_\mathbf H\to 0.
\]
Since $z^\pm(x)$ is the solution of \eqref{eq-zpm(x)} and from the definition of $\eta(x,h)$, we have
\[
(Az(x)-F'(z(x)),\eta(x,h))_\mathbf H=(Az(x)-F'(z(x)),h)_\mathbf H,\;\;\forall x, h\in \mathbf H^0,
\]
so we have
\[
a(x+h)-a(x)=(Az(x)-F'(z(x)),h)_\mathbf H+o(\|h\|_\mathbf H),\;\;\forall x\in \mathbf H^0,\; {\rm and}\;\|h\|_\mathbf H\to 0,
\]
and we have proved \eqref{eq-saddle point reduction-the derivative of a}. If $F\in C^2(\mathbf H,\mathbb R)$, from \eqref{eq-zpm(x)} and by Implicit function theorem, we have $z^\pm \in C^1(\mathbf H^0,\mathbf H^\pm)$.
From \eqref{eq-zpm(x)} and\eqref{eq-saddle point reduction-the derivative of a}, we have
\[
a'(x)=Ax-P^0_AF'(z(x))
\]
and
\[
a''(x)=A|_{\mathbf H^0}-P^0_AF''(z(x))z'(x),
\]
that is to say $a\in C^2(\mathbf H^0,\mathbb R)$. Finally, from Theorem \ref{thm-relative morse index and spectral flow} proved above, Definition 2.8 and Lemma 2.9 in \cite{Wang-Liu-2016}, we have \eqref{eq-relation between morse index and our index}. $\Box$
\subsection{Some abstract critical points Theorems}
In this part, we will give some abstract critical points Theorems for (O.E.) by the method of saddle point reduction introduced above. Since we have Proposition \color{red}ef{prop-relations between indexes}, we will not distinguish $i^*_A(B)$ from $i_A(B)$. Beside condition ($F_1$), assume $F$ satisfying the following condition.\\
\noindent ($F_2$) There exist $B_1,B_2\in \mathcal{L}_s(\mathbf H,-b,b)$ and $B:\mathbf H\to \mathcal{L}_s(\mathbf H,-b,b)$ satisfying
\[
B_1\leq B_2,\;i_A(B_1)=i_A(B_2),\; {\rm and}\; \nu_A(B_2)=0,
\]
\[
B_1\leq B(z) \leq B_2,\forall z\in\mathbf H,
\]
such that
\[
F'(z)-B(z)z=o(\|z\|_\mathbf H),\;\;{\rm as}\;\|z\|_\mathbf H\to\infty.
\]
Before stating the theorem, we need a lemma.
\begin{lem}\label{lem-0 has a positive distance from sigma(A-B)}
Let $B_1,B_2\in \mathcal{L}_s(\mathbf H,-b,b)$ with $B_1\leq B_2$, $i_A(B_1)=i_A(B_2)$ and $\nu_A(B_2)=0$. Then there exists $\varepsilon>0$ such that for all $B\in\mathcal{L}_s(\mathbf H)$ with
\[
B_1\leq B \leq B_2,
\]
we have
\[
\sigma(A-B)\cap (-\varepsilon,\varepsilon)=\emptyset.
\]
\end{lem}
{\noindent}{\bf Proof.} By the properties of the index pair $(i_A(\cdot),\nu_A(\cdot))$, we have $\nu_A(B_1)=0$. So there is $\varepsilon>0$ such that
\[
i_A(B_{1,\varepsilon})=i_A(B_1)=i_A(B_2)=i_A(B_{2,\varepsilon}),
\]
where $B_{1,\varepsilon}:=B_1-\varepsilon\cdot I$ and $B_{2,\varepsilon}:=B_2+\varepsilon\cdot I$. Since $B_{1,\varepsilon}\leq B-\varepsilon I<B+\varepsilon I\leq B_{2,\varepsilon}$, it follows that $i_A(B-\varepsilon I)=i_A(B+\varepsilon I)$. Note that
\[
i_A(B+\varepsilon I)-i_A(B-\varepsilon I)=\sum_{-\varepsilon < t \le \varepsilon } \nu_A(B-t \cdot I).
\]
We thus have $0\notin \sigma(A-B-\eta I)$ for all $\eta\in(-\varepsilon,\varepsilon)$, and the proof is complete.$
\Box$
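For later use we record the form in which Lemma \ref{lem-0 has a positive distance from sigma(A-B)} enters the estimates below (a standard consequence of the spectral gap): writing $\mathbf H=\mathbf H^+_{A-B}\oplus\mathbf H^-_{A-B}$ for the positive and negative spectral subspaces of $A-B$, we have
\[
\pm((A-B)y^\pm,y^\pm)_\mathbf H\geq\varepsilon\|y^\pm\|^2_\mathbf H,\;\; y^\pm\in\mathbf H^\pm_{A-B},
\qquad\mbox{hence}\qquad
((A-B)y,y^+-y^-)_\mathbf H\geq\varepsilon\|y\|^2_\mathbf H
\]
for every $y=y^++y^-$.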
\begin{thm}\label{thm-abstract thm 1 for the existence of solution}
Assume $A\in\mathcal O^0_e(-b,b)$. If $F$ satisfies conditions ($F_1$) and ($F_2$), then (O.E.) has at least one solution.
\end{thm}
\noindent{\bf Proof.} Firstly, for $\lambda\in[0,1]$, consider the following equation
\[
Az=(1-\lambda) B_1z+\lambda F'(z).\eqno(O.E.)_\lambda
\]
We claim that the set of all solutions $(z,\lambda)$ of (O.E.)$_\lambda$ is a priori bounded. If not, assume there exists a sequence $\{(z_n,\lambda_n)\}$ satisfying (O.E.)$_{\lambda_n}$ with $\|z_n\|_{\mathbf H}\to\infty$. Without loss of generality, assume $\lambda_n\to\lambda_0\in[0,1]$. Define
\[
F_\lambda(z)=\frac{1-\lambda}{2}(B_1z,z)_{\mathbf H}+\lambda F(z),\;\forall z\in \mathbf H.
\]
Since $F$ satisfies condition ($F_1$) and $B_1\in \mathcal{L}_s(\mathbf H,-b,b)$, the map $F'_\lambda:\mathbf H\to\mathbf H$ is Lipschitz continuous with Lipschitz constant less than $b$; that is to say, there exists $\hat{l}\in [l_F,b)$ such that
\[
\|F'_\lambda(z+h)-F'_\lambda(z)\|_{\mathbf H}\leq \hat{l}\|h\|_{\mathbf H},\;\forall z,h\in\mathbf H,\lambda\in[0,1].
\]
Now consider the projections defined in \eqref{eq-projections} and choose $l\in (\hat{l},b)$ satisfying $-l,l\notin \sigma(A)$; from \eqref{eq-decomposition 1 of OE} and \eqref{eq-decomposition 2 of OE}, we decompose $z_n$ as
\[
z_n=z^+_n+z^-_n+x_n,
\]
with $z^*_n\in\mathbf H^*$ ($*=\pm,0$), where $z^{\pm}_n$ satisfies Proposition \ref{prop-continuous and property of saddle point reduction} with $l_F$ replaced by $\hat{l}$. So we have $\|x_n\|_\mathbf H\to\infty$. Set
\[
y_n=\frac{z_n}{\|z_n\|_\mathbf H},
\]
and $\bar{B}_n:=(1-\lambda_n)B_1+\lambda_nB(z_n)$; then we have
\begin{equation}\label{eq-the equation of yn}
Ay_n=\bar{B}_ny_n+\frac{o(\|z_n\|_\mathbf H)}{\|z_n\|_\mathbf H}.
\end{equation}
Decomposing $y_n=y^{+}_n+y^{-}_n+y^0_n$ with $y^*_n=z^*_n/\|z_n\|_\mathbf H$, we have
\begin{align*}
\|y^0_n\|_\mathbf H&=\frac{\|x_n\|_\mathbf H}{\|z_n\|_\mathbf H}\\
&\geq\frac{\|x_n\|_\mathbf H}{\|x_n\|_\mathbf H+\|z^+_n+z^-_n\|_\mathbf H}\\
&\geq\frac{(l-\hat{l})\|x_n\|_\mathbf H}{l\|x_n\|_\mathbf H+\|F'_\lambda(0)\|_\mathbf H}.
\end{align*}
That is to say
\begin{equation}\label{eq-y0n not to 0}
\|y^0_n\|_\mathbf H\geq c>0
\end{equation}
for some constant $c>0$ and $n$ large enough.
Since $B_1\leq B(z)\leq B_2$, we have $B_1\leq \bar{B}_n\leq B_2$.
Write $\mathbf H=\mathbf H^+_{A-\bar B_n}\oplus \mathbf H^-_{A-\bar B_n}$, where $A-\bar B_n$ is positive definite on $\mathbf H^+_{A-\bar B_n}$ and negative definite on $\mathbf H^-_{A-\bar B_n}$, and re-decompose $y_n=\bar{y}^+_n+\bar{y}^-_n$ with respect to this splitting. From Lemma \ref{lem-0 has a positive distance from sigma(A-B)} and \eqref{eq-the equation of yn}, we have
\begin{align}\label{eq-y0n to 0}
\| y^0_n\|^2_\mathbf H&\leq \| y_n\|^2_\mathbf H\nonumber\\
&\leq \frac{1}{\varepsilon} ((A-\bar{B}_n)y_n,\bar{y}^+_n-\bar{y}^-_n)_\mathbf H\nonumber\\
&\leq \frac{1}{\varepsilon} \frac{o(\|z_n\|_\mathbf H)}{\|z_n\|_\mathbf H}\|y_n\|_\mathbf H.
\end{align}
Since $\|z_n\|_\mathbf H\to \infty$ and $\|y_n\|_\mathbf H=1$, we obtain $\|y^0_n\|_\mathbf H\to 0$, which contradicts \eqref{eq-y0n not to 0}; hence $\{z_n\}$ is bounded.
Secondly, we apply topological degree theory to complete the proof. Since the solutions of (O.E.)$_\lambda$ are bounded uniformly in $\lambda$, there is a number $R>0$ large enough,
such that all of the solutions $z_\lambda$ of (O.E.)$_\lambda$ lie in the ball $B(0,R):=\{z\in \mathbf H\mid \|z\|_\mathbf H<R\}$. By the homotopy invariance of the Brouwer degree, we have
\[
\deg (a'_1,B(0,R)\cap \mathbf H^0,0)= \deg (a'_0,B(0,R)\cap \mathbf H^0,0)\neq 0,
\]
where $a_\lambda(x)=\frac{1}{2}(Az_\lambda(x),z_\lambda(x))_\mathbf H-F_\lambda(z_\lambda(x))$, $\lambda\in[0,1]$; the right-hand degree is nonzero since, for $\lambda=0$, the equation $Az=B_1z$ has only the trivial solution (because $\nu_A(B_1)=0$), so that $a'_0$ is a linear isomorphism of $\mathbf H^0$. That is to say, (O.E.) has at least one solution.
$
\Box$
In Theorem \ref{thm-abstract thm 1 for the existence of solution}, the non-degeneracy condition on $B(z)$ is important for ensuring the boundedness of the solutions. The following theorem does not need this non-degeneracy condition; the idea is from \cite{Ji-2018}.
\begin{thm}\label{thm-abstract thm 3}
Assume $A\in\mathcal O^0_e(-b,b)$. If $F$ satisfies condition ($F_1$) and the following condition:\\
($F^\pm_2$) There exists $M>0$, $B_\infty\in\mathcal{L}_s(\mathbf H, -b,b)$, such that
\[
F'(z)=B_\infty z+r(z),
\]
with
\[
\|r(z)\|_\mathbf H\leq M,\;\;\forall z\in\mathbf H,
\]
and
\begin{equation}\label{eq-condition of r in ab-thm 3}
(r(z),z)_\mathbf H\to\pm\infty,\;\;\|z\|_\mathbf H\to\infty.
\end{equation}
Then (O.E.) has at least one solution.
\end{thm}
\noindent{\bf Proof.} If $0\not\in \sigma(A-B_\infty)$, then the result follows by the same method as in Theorem \ref{thm-abstract thm 1 for the existence of solution}.
So we assume $0\in \sigma(A-B_\infty)$, and we only consider the case of ($F^-_2$). Since $0$ is an isolated eigenvalue of $A-B_\infty$ with finite dimensional eigenspace (see \cite{Wang-Liu-2016} for details), there exists $\eta>0$ such that
\[
(-\eta,0)\cap\sigma(A-B_\infty)=\emptyset.
\]
For any $\varepsilon\in (0,\eta)$, we have
$0\not\in \sigma(\varepsilon I+A-B_\infty)$. Thus, by the same method as in Theorem \ref{thm-abstract thm 1 for the existence of solution},
we can prove that there exists $z_\varepsilon\in \mathbf H$ satisfying the following equation
\begin{equation}\label{eq-equation of z-varepsilon}
\varepsilon z_\varepsilon+(A-B_\infty)z_\varepsilon=r(z_\varepsilon).
\end{equation}
In what follows, we divide the proof into two steps; $C$ denotes various positive constants independent of $\varepsilon$.
{\bf Step 1. We claim that $\|z_\varepsilon\|_\mathbf H\leq C$.} Since $z_\varepsilon$ satisfies the above equation, we have
\begin{align*}
\varepsilon (z_\varepsilon,z_\varepsilon)_\mathbf H&=-((A-B_\infty)z_\varepsilon,z_\varepsilon)_\mathbf H+(r(z_\varepsilon),z_\varepsilon)_\mathbf H\\
&\leq \frac{1}{\eta}\|(A-B_\infty)z_\varepsilon\|^2_\mathbf H+M\|z_\varepsilon\|_\mathbf H\\
&=\frac{1}{\eta}\|\varepsilon z_\varepsilon-r(z_\varepsilon)\|^2_\mathbf H+M\|z_\varepsilon\|_\mathbf H\\
&\leq \frac{\varepsilon^2}{\eta}\|z_\varepsilon\|^2_\mathbf H+C\|z_\varepsilon\|_\mathbf H +C.
\end{align*}
Since we may assume $\varepsilon\leq\eta/2$ and $\|z_\varepsilon\|_\mathbf H\geq 1$, rearranging gives $\varepsilon(1-\frac{\varepsilon}{\eta})\|z_\varepsilon\|^2_\mathbf H\leq C\|z_\varepsilon\|_\mathbf H+C$, and hence
\[
\varepsilon\|z_\varepsilon\|_\mathbf H\leq C.
\]
Therefore
\begin{equation}\label{eq-boundedness of z-1}
\|(A-B_\infty)z_\varepsilon\|_\mathbf H=\|\varepsilon z_\varepsilon-r(z_\varepsilon)\|_\mathbf H\leq C.
\end{equation}
Now, consider the orthogonal splitting as defined in \eqref{eq-decomposition of space H},
\[
\mathbf H=\mathbf H^0_{A-B_\infty}\oplus\mathbf H^*_{A-B_\infty},
\]
where $A-B_\infty$ vanishes on $\mathbf H^0_{A-B_\infty}=\ker(A-B_\infty)$ and $\mathbf H^*_{A-B_\infty}$ is the orthogonal complement of $\mathbf H^0_{A-B_\infty}$. Let $z_\varepsilon=u_\varepsilon+v_\varepsilon$ with $u_\varepsilon\in \mathbf H^0_{A-B_\infty}$ and $v_\varepsilon\in \mathbf H^*_{A-B_\infty}$. Since $0$ is an isolated point of $\sigma(A-B_\infty)$, from \eqref{eq-boundedness of z-1} we have
\begin{equation}\label{eq-boundedness of z-2}
\|v_\varepsilon\|_\mathbf H\leq C.
\end{equation}
Additionally, since $r(z_\varepsilon)$ and $v_\varepsilon$ are bounded, we have
\begin{align}\label{eq-boundedness of z-3}
(r(z_\varepsilon),z_\varepsilon)_\mathbf H&=(r(z_\varepsilon),v_\varepsilon)_\mathbf H+(r(z_\varepsilon),u_\varepsilon)_\mathbf H\nonumber\\
&=(r(z_\varepsilon),v_\varepsilon)_\mathbf H+(\varepsilon z_\varepsilon+(A-B_\infty)z_\varepsilon,u_\varepsilon)_\mathbf H\nonumber\\
&=(r(z_\varepsilon),v_\varepsilon)_\mathbf H+\varepsilon(u_\varepsilon,u_\varepsilon)_\mathbf H\nonumber\\
&\geq -C.
\end{align}
Therefore, by \eqref{eq-condition of r in ab-thm 3} in case ($F^-_2$), $\|z_\varepsilon\|_\mathbf H$ must remain bounded (otherwise $(r(z_\varepsilon),z_\varepsilon)_\mathbf H\to-\infty$), and in particular $\|u_\varepsilon\|_\mathbf H$ is bounded; this proves the boundedness of $\|z_\varepsilon\|_\mathbf H$.
{\bf Step 2. Passing to a subsequence $\varepsilon_n\to 0$, there exists $z\in\mathbf H$
such that
\[
\displaystyle\lim_{\varepsilon_n\to 0}\|z_{\varepsilon_n}-z\|_\mathbf H=0.
\]
}
Differently from the above splitting, we now recall the projections $P^-_A,\;P^0_A$ and $P^+_A$ defined in \eqref{eq-projections} and the splitting $\mathbf H=\mathbf H^-\oplus\mathbf H^0\oplus\mathbf H^+$ with $\mathbf H^*=P^*_A\mathbf H$ ($*=\pm,0$).
So $z_\varepsilon$ has the corresponding splitting
\[
z_\varepsilon=z_\varepsilon^++z_\varepsilon^-+z_\varepsilon^0,
\]
with $z_\varepsilon^*\in \mathbf H^*$ respectively.
Since $\mathbf H^0$ is a finite dimensional space and $\|z_\varepsilon\|_\mathbf H\leq C$, there exists a sequence $\varepsilon_n\to 0$ and $z^0\in \mathbf H^0$, such that
\[
\displaystyle\lim_{n\to\infty}z^0_{\varepsilon_n}=z^0.
\]
For simplicity, we write $z^*_n:=z^*_{\varepsilon_n}$, $A_n:=\varepsilon_n I+A$ and $A^\pm_n:=A_n|_{\mathbf H^\pm}$.
Since $z_\varepsilon$ satisfies \eqref{eq-equation of z-varepsilon}, we have
\[
z^\pm_n=(A_n^\pm)^{-1}P^\pm_A F'(z^+_n+z^-_n +z^0_n).
\]
Since $F$ satisfies ($F_1$), by the same method as in Proposition \ref{prop-continuous and property of saddle point reduction}, for $n$ and $m$ large enough we have
\begin{align*}
\|z^\pm_n-z^\pm_m\|_\mathbf H=&\|(A_n^\pm)^{-1}P^\pm_A F'(z_n)-(A_m^\pm)^{-1}P^\pm_A F'(z_m)\|_\mathbf H \\
\leq&\|(A_n^\pm)^{-1}P^\pm_A (F'(z_n)-F'(z_m))\|_\mathbf H+\|((A_n^\pm)^{-1}-(A_m^\pm)^{-1})P^\pm_A F'(z_m)\|_\mathbf H \\
\leq&\frac{l_F}{l}\|z_n-z_m\|_\mathbf H+\|((A_n^\pm)^{-1}-(A_m^\pm)^{-1})P^\pm_A F'(z_m)\|_\mathbf H.
\end{align*}
Since $(A_n^\pm)^{-1}-(A_m^\pm)^{-1}=(\varepsilon_m-\varepsilon_n)(A_n^\pm)^{-1}(A_m^\pm)^{-1}$ and $z_n$ are bounded in $\mathbf H$, we have
\[
\|((A_n^\pm)^{-1}-(A_m^\pm)^{-1})P^\pm_A F'(z_m)\|_\mathbf H=o(1),\;\;n,m\to\infty.
\]
So we have
\[
\|z^\pm_n-z^\pm_m\|_\mathbf H\leq \frac{l_F}{l-l_F}\|z^0_n-z^0_m\|_\mathbf H+o(1),\;\;n,m\to\infty,
\]
therefore, there exists $z^\pm\in\mathbf H^\pm$ such that $\displaystyle\lim_{n\to\infty}\|z^\pm_n- z^\pm\|_\mathbf H=0$. Thus, we have
\[
\displaystyle\lim_{n\to\infty}\|z_{\varepsilon_n}-z\|_\mathbf H=0,
\]
with $z=z^-+z^++z^0$.
Finally, letting $n\to\infty$ in \eqref{eq-equation of z-varepsilon}, we conclude that $z$ is a solution of (O.E.).$
\Box$
\begin{thm}\label{thm-abstract thm 2 for the multiplicity of solutions}
Assume $A\in\mathcal O^0_e(-b,b)$, $F$ satisfies ($F_1$) with $\pm l_F\not\in\sigma(A)$ and the following condition:\\
($F^+_3$) There exist $B_3\in\mathcal{L}_s(\mathbf{H},-b,b)$ and $C\in\mathbb{R}$, such that
\[
B_3>\beta:=\max\{\lambda\mid\lambda\in \sigma(A)\cap(-\infty,l_F)\},
\]
with
\[
F(z)\geq\frac{1}{2}(B_3z,z)_\mathbf{H}-C,\;\;\forall z\in \mathbf H.
\]
Or ($F^-_3$) There exist $B_3\in\mathcal{L}_s(\mathbf{H},-b,b)$ and $C\in\mathbb{R}$, such that
\[
B_3<\alpha:=\min\{\lambda\mid\lambda\in \sigma(A)\cap(-l_F,\infty)\},
\]
with
\[
F(z)\leq\frac{1}{2}(B_3z,z)_\mathbf{H}+C,\;\;\forall z\in \mathbf H.
\]
Then (O.E.) has at least one solution. Furthermore, assume $F$ satisfies \\
($F^\pm_4$) $F\in C^2(\mathbf H,\mathbb{R})$, $F'(0)=0$ and there exists $B_0\in\mathcal{L}_s(\mathbf{H},-b,b)$ with
\begin{equation}\label{eq-twisted condition 1 in abstract thm 2}
\pm(i_A(B_0)+\nu_A(B_0))<\pm i_A(B_3),
\end{equation}
such that
\[
F'(z)=B_0z+o(\|z\|_\mathbf H),\;\;\|z\|_\mathbf H\to 0.
\]
Then (O.E.) has at least one nontrivial solution. Additionally, if
\color{blue}egin{equation}\label{eq-twisted condition 2 in abstract thm 2}
\nu_A(B_0)=0
\end{equation}
then (O.E.) has at least two nontrivial solutions.\end{thm}
\noindent{\bf Proof.} We only consider the case of ($F^+_3$). According to the saddle point reduction, since $\pm l_F\not\in\sigma(A)$, we can choose $l\in(l_F,b)$ in \eqref{eq-projections} satisfying
\[
[-l,-l_F]\cap\sigma(A)=\emptyset=[l_F,l]\cap\sigma(A).
\]
We turn to the function
\[
a(x)=\frac{1}{2}(Az(x),z(x))-F(z(x)),
\]
where $z(x)=x+z^+(x)+z^-(x)$, $x\in \mathbf H^0$ and $z^\pm(x)\in \mathbf H^\pm$. Set $w(x)=x+z^-(x)$, and write $z=z(x)$, $w=w(x)$ for simplicity. We have
\begin{equation}\label{eq-eq 1 in abstract thm 2}
a(x)=\left\{\frac{1}{2}(Aw,w)-F(w)\right\}+\left\{\frac{1}{2}[(Az,z)-(Aw,w)]-[F(z)-F(w)]\right\}.
\end{equation}
By condition ($F^+_3$), we obtain
\begin{equation}\label{eq-eq 2 in abstract thm 2}
\frac{1}{2}(Aw,w)-F(w)\leq \frac{1}{2}((\beta-B_3)w,w)_\mathbf{H}+C,
\end{equation}
and the terms in the second bracket are equal to
\begin{align}\label{eq-eq 3 in abstract thm 2}
&\frac{1}{2}(Az^+,z^+)-\int^{1}_{0}(F'(sz^++w),z^+)ds\nonumber\\
=&\frac{1}{2}(Az^+,z^+)-(F'(z^++w),z^+)+\int^{1}_{0}(F'(z^++w)-F'(sz^++w),z^+)ds\nonumber\\
=&-\frac{1}{2}(Az^+,z^+)+\int^{1}_{0}(F'(z^++w)-F'(sz^++w),z^+)ds\nonumber\\
\leq&-\frac{1}{2}(Az^+,z^+)+\int^{1}_{0}(1-s)ds\cdot l_F\cdot\|z^+\|^2_\mathbf H\nonumber\\
\leq& -\frac{l-l_F}{2}\|z^+\|^2_\mathbf H,
\end{align}
where the second equality follows from the fact that $Az^+=P^+F'(z^++w)$, and the last inequality uses $(Az^+,z^+)\geq l\|z^+\|^2_\mathbf H$ on $\mathbf H^+$. From \eqref{eq-eq 1 in abstract thm 2}, \eqref{eq-eq 2 in abstract thm 2} and \eqref{eq-eq 3 in abstract thm 2} we have
\begin{align*}
a(x)&\leq \frac{1}{2}((\beta-B_3)w,w)_\mathbf{H} -\frac{l-l_F}{2}\|z^+\|^2_\mathbf H+C\\
&\to-\infty,\;\;{\rm as}\;\|x\|_\mathbf H\to\infty.
\end{align*}
Thus the function $-a(x)$ is bounded from below and, since $\mathbf H^0$ is finite dimensional, it satisfies the (PS) condition. Hence $a$ attains its maximum, and the maximum points are critical points of $a$.
To prove the second part we again only consider the case of ($F^+_3$) and ($F^+_4$). By \eqref{eq-twisted condition 1 in abstract thm 2}, $0$ is not a maximum point of $a$, so the maximum points found above are nontrivial. Finally, if \eqref{eq-twisted condition 2 in abstract thm 2} is satisfied, we can use the classical three critical points theorem, since $0$ is neither a maximum point nor degenerate, and the proof is complete. $
\Box$
\begin{rem}
(A). Theorem \ref{thm-abstract thm 2 for the multiplicity of solutions} is generalized from \cite[IV, Theorem 2.3]{Chang-1993}. In the first part of our theorem, we do not need $F$ to be $C^2$.\\
(B). Theorem \ref{thm-abstract thm 2 for the multiplicity of solutions} is different from our former result in \cite[Theorem 3.6]{Wang-Liu-2016}.
Here, we need the Lipschitz condition to make the method of saddle point reduction valid, whereas in \cite[Theorem 3.6]{Wang-Liu-2016} a convexity assumption was needed in order to use the dual variational method.
\end{rem}
\section{Applications in one dimensional wave equation}\label{section-applications}
In this section, we will consider the following one dimensional wave equation
\[
\left\{\begin{array}{ll}
\Box u\equiv u_{tt}-u_{xx}=f(x, t, u),\\
u(0,t)=u(\pi, t)= 0, \\
u(x, t+T)=u(x, t),\\
\end{array}
\right.\forall (x, t)\in[0,\pi]\times S^1, \eqno(W.E.)
\]
where $T>0$, $S^1:=\mathbb{R}/T\mathbb{Z}$ and $f:[0,\pi]\times S^1\times\mathbb{R}\to\mathbb{R}$.
In what follows we assume systematically that $T$ is a rational multiple of $\pi$.
So, there exist coprime integers $(p,q)$, such that
\[
T=\frac{2\pi q}{p}.
\]
Let
\[
L^2:=\left\{u\;\Big|\;u=\sum_{j\in\mathbb{N}^+,k\in\mathbb{Z}} u_{j,k}\sin jx\exp ik\frac{p}{q}t\right\},
\]
where $i=\sqrt{-1}$ and $u_{j,k}\in \mathbb{C}$ with $u_{j,k}=\bar{u}_{j,-k}$; its inner product is
\[
(u,v)_2=\sum_{j\in\mathbb{N}^+,k\in\mathbb{Z}}u_{j,k}\bar{v}_{j,k},\;\;u,v\in L^2,
\]
the corresponding norm is
\[
\|u\|^2_2=\sum_{j\in\mathbb{N}^+,k\in\mathbb{Z}}|u_{j,k}|^2,\;\;u\in L^2.
\]
Consider $\Box$ as an unbounded self-adjoint operator on $L^2$. Its spectrum is
\[
\sigma(\Box)=\{(q^2j^2-p^2k^2)/q^2\mid j\in\mathbb{N}^+,k\in\mathbb{Z}\}.
\]
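This can be checked directly on the basis functions introduced above (a one-line computation):
\[
\Box\Big(\sin jx\,\exp ik\frac{p}{q}t\Big)=\Big(j^2-\frac{p^2k^2}{q^2}\Big)\sin jx\,\exp ik\frac{p}{q}t=\frac{q^2j^2-p^2k^2}{q^2}\,\sin jx\,\exp ik\frac{p}{q}t.
\]
Every nonzero eigenvalue is an integer multiple of $1/q^2$ and is attained by at most finitely many pairs $(j,k)$ (if $q^2j^2-p^2k^2\neq0$ then $|q^2j^2-p^2k^2|\geq qj+p|k|$), while $0$ is an eigenvalue of infinite multiplicity (take $j=pm$, $|k|=qm$, $m\in\mathbb{N}^+$, using that $p,q$ are coprime); this explains the statement on the essential spectrum below.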
It is easy to see that $\Box$ has only one essential spectrum point, $\lambda_0=0$. Let $\Omega:=[0,\pi]\times S^1$, and assume $f$ satisfies the following conditions.
\noindent($f_1$) $f\in C(\Omega\times\mathbb{R},\mathbb{R})$, and there exist $b\neq 0$ and $l_F\in(0,|b|)$ such that
\[
|f_{ b}(x,t,u+v)-f_{ b}(x,t,u)|\leq l_F|v|,\;\;\forall (x,t)\in \Omega,\;u,v\in\mathbb{R},
\]
where
\[
f_{ b}(x,t,u):=f(x,t,u)-bu,\;\;\forall (x,t,u)\in\Omega\times \mathbb{R}.
\]
Let the working space be $\mathbf H:=L^2$ and the operator $A:=\Box-b\cdot I$, with $I$ the identity map on $\mathbf H$. Thus we have $A\in\mathcal O^0_e(-|b|,|b|)$. Denote by $L^\infty:=L^\infty(\Omega, \mathbb{R})$ the set of all essentially bounded functions on $\Omega$. For any $g\in L^\infty$, it is easy to see that $g$ determines a bounded self-adjoint
operator on $L^2$, by
\[
u(x,t)\mapsto g(x,t)u(x,t),\;\;\forall u\in L^2,
\]
without confusion, we still denote this operator by $g$; that is to say, we have the continuous embedding $L^\infty\hookrightarrow \mathcal L_s(\mathbf H)$. Thus for any $g\in L^\infty\cap \mathcal L_s(\mathbf H,-|b|,|b|)$, we have the index pair ($i_A(g),\nu_A(g)$). Besides, for any $g_1,g_2\in L^\infty$, $g_1\leq g_2$ means that
\[
g_1(x,t)\leq g_2(x,t),\;{\rm a.e.}\;(x,t)\in\Omega.
\]
\noindent ($f_2$) There exist $g_1,g_2\in L^\infty\cap \mathcal L_s(\mathbf H,-|b|,|b|)$ and $g\in L^\infty(\Omega\times \mathbb{R},\mathbb{R})$, with
\[
g_1\leq g_2,\;i_A(g_1)=i_A(g_2),\;\nu_A(g_2)=0,
\]
\[
g_1(x,t)\leq g(x,t,u)\leq g_2(x,t),\;\;\;{\rm a.e.}\;(x,t,u)\in\Omega\times\mathbb{R},
\]
such that
\[
f_{ b}(x,t,u)-g(x,t,u)u=o(|u|),\;|u|\to\infty,\;{\rm uniformly\ for\ }(x,t)\in \Omega.
\]
We have the following results.
\begin{thm}\label{thm-application-1}
Assume $T$ is a rational multiple of $\pi$ and $f$ satisfies ($f_1$) and ($f_2$). Then (W.E.) has a weak solution.
\end{thm}
\noindent{\bf Proof of Theorem \ref{thm-application-1}.} Let
\[
\mathcal{F}_b(x,t,u):=\int^u_0f_b(x,t,s)ds,\;\;\forall(x,t,u)\in\Omega\times\mathbb{R},
\]
and
\begin{equation}\label{eq-definition of F}
F(u):=\int_\Omega \mathcal F_b(x,t,u(x,t))dxdt,\;\;\forall u\in \mathbf H.
\end{equation}
It is easy to verify that $F$ satisfies conditions ($F_1$) and ($F_2$) if $f$ satisfies conditions ($f_1$) and ($f_2$).
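For instance (a minimal verification of the Lipschitz bound in ($F_1$)): since $F'(u)$ acts as the Nemytskii operator $u\mapsto f_b(\cdot,\cdot,u(\cdot))$, condition ($f_1$) gives, for all $u,v\in\mathbf H$,
\[
\|F'(u+v)-F'(u)\|^2_{\mathbf H}=\int_\Omega|f_b(x,t,u+v)-f_b(x,t,u)|^2dxdt\leq l_F^2\int_\Omega|v|^2dxdt=l_F^2\|v\|^2_{\mathbf H}.
\]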
Thus, by Theorem \ref{thm-abstract thm 1 for the existence of solution}, the proof is complete.$
\Box$
Here, we give an example of Theorem \ref{thm-application-1}.
\begin{eg}
For any $b\neq 0$, assume $\alpha,\beta\in (-|b|,|b|)$ and
$
[\alpha,\beta]\cap \sigma(\Box-b)=\emptyset.
$
Let
\[
g(x,t,u):=\displaystyle\frac{\beta-\alpha}{2}\sin[\varepsilon_1\ln (|x|+|t|+|u|+1)]+\frac{\alpha+\beta}{2},
\]
and $h\in C(\mathbb{R},\mathbb{R})$ is Lipschitz continuous with
\[
h(u)=o(|u|),\;\;|u|\to\infty.
\]
Then
\[
f(x,t,u):=bu+g(x,t,u)u+\varepsilon_2h(u)
\]
satisfies conditions ($f_1$) and ($f_2$) for $\varepsilon_1,\varepsilon_2>0$ small enough.
\end{eg}
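A sketch of why this example works, with the natural choice $g_1\equiv\alpha$, $g_2\equiv\beta$ in ($f_2$): since $g(x,t,u)\in[\alpha,\beta]$ and
\[
\big|u\,\partial_u g(x,t,u)\big|\leq\frac{\beta-\alpha}{2}\cdot\frac{\varepsilon_1|u|}{|x|+|t|+|u|+1}\leq\frac{\varepsilon_1(\beta-\alpha)}{2},
\]
the function $f_b(x,t,u)=g(x,t,u)u+\varepsilon_2h(u)$ is Lipschitz in $u$ with constant at most $\max\{|\alpha|,|\beta|\}+\frac{\varepsilon_1(\beta-\alpha)}{2}+\varepsilon_2\,{\rm Lip}(h)$, which is smaller than $|b|$ for $\varepsilon_1,\varepsilon_2$ small; this gives ($f_1$). Moreover $f_b(x,t,u)-g(x,t,u)u=\varepsilon_2h(u)=o(|u|)$, and $[\alpha,\beta]\cap\sigma(\Box-b)=\emptyset$ yields, by the properties of the index pair, $i_A(\alpha)=i_A(\beta)$ and $\nu_A(\beta)=0$, so ($f_2$) holds.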
\begin{thm}\label{thm-application-3}
Assume $T$ is a rational multiple of $\pi$ and $f$ satisfies ($f_1$) and the following condition.\\
($f^\pm_2$) There exists $g_\infty(x,t)\in L^\infty\cap\mathcal{L}_s(\mathbf H, -|b|,|b|)$ with
\[
|f_b(x,t,u)-g_{\infty}(x,t)u|\leq M_{1},\;\;\forall (x,t,u)\in\Omega\times\mathbb R,
\]
and
\begin{equation}\label{eq-condition of r}
\pm\big(f_b(x,t,u)-g_{\infty}(x,t)u\big)u\geq c|u|,\;\;\forall (x,t,u)\in\Omega\times\big(\mathbb R\setminus[-M_{2},M_{2}]\big),
\end{equation}
where $M_1,\;M_2,\;c>0$ are constants. Then (W.E.) has a weak solution.
\end{thm}
\noindent{\bf Proof.} We only consider the case of ($f^{-}_{2}$). Let $r(x,t,u):=f_b(x,t,u)-g_{\infty}(x,t)u$; then $r$ induces a bounded map on $\mathbf H$. Generally speaking, \eqref{eq-condition of r} does not imply \eqref{eq-condition of r in ab-thm 3}, so we cannot apply Theorem \ref{thm-abstract thm 3} directly. Checking the proof of Theorem \ref{thm-abstract thm 3}, in Step 1, once \eqref{eq-boundedness of z-2} is obtained, condition \eqref{eq-condition of r in ab-thm 3} is only used to get the boundedness of $z^0_{\varepsilon}$. With \eqref{eq-condition of r}, we can also obtain the boundedness of $z^0_{\varepsilon}$ from \eqref{eq-boundedness of z-2}. Recall that $\mathbf H=L^{2}(\Omega)$ in this section; from the boundedness of $z^{\pm}_{\varepsilon}$ in $\mathbf H$, we have the boundedness of $z^{\pm}_{\varepsilon}$ in $L^{1}(\Omega)$. On the other hand, since $\ker (A-g_{\infty})$ is a finite dimensional space, if $\|z^{0}_{\varepsilon}\|_{\mathbf H}\to\infty$, then $\|z^{0}_{\varepsilon}\|_{L^{1}}\to\infty$, and thus $\|z_{\varepsilon}\|_{L^{1}}\to\infty$. Therefore, we obtain a contradiction from \eqref{eq-boundedness of z-2} and \eqref{eq-condition of r}, and the boundedness of $z^0_{\varepsilon}$ follows. The rest of the proof is similar to that of Theorem \ref{thm-abstract thm 3}, and we omit it here. $\Box$
\begin{eg}
Here we give an example for Theorem \ref{thm-application-3}. For any $b\neq 0$, let $g_\infty\in C(\Omega)$ with
\[
\|g_\infty\|_{C(\Omega)}<|b|.
\]
Let $r(u)=\varepsilon\arctan u$, then
\[
f(x,t,u):=bu+g_\infty(x,t)u\pm r(u)
\]
satisfies the conditions of Theorem \ref{thm-application-3} for $\varepsilon>0$ small enough.
\end{eg}
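A brief check of this example, with $r(x,t,u)=\pm\varepsilon\arctan u$ playing the role of $f_b-g_\infty u$: condition ($f_1$) holds because $|\partial_u(g_\infty(x,t)u\pm\varepsilon\arctan u)|\leq\|g_\infty\|_{C(\Omega)}+\varepsilon<|b|$ for $\varepsilon$ small; the bound $|\varepsilon\arctan u|\leq\frac{\pi\varepsilon}{2}=:M_1$ gives the first part of ($f^\pm_2$); and
\[
\pm\big(\pm\varepsilon\arctan u\big)u=\varepsilon\, u\arctan u\geq\frac{\pi\varepsilon}{4}|u|,\qquad |u|\geq 1,
\]
so \eqref{eq-condition of r} holds with $M_2=1$ and $c=\frac{\pi\varepsilon}{4}$.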
Now, in order to use Theorem \ref{thm-abstract thm 2 for the multiplicity of solutions}, we assume $f$ satisfies the following conditions.
\noindent ($f^{\pm}_3$) There exists $g_3(x,t)\in L^\infty\cap\mathcal{L}_s(\mathbf H, -|b|,|b|)$, with
\[
\pm g_3(x,t)>\max\{\lambda\mid\lambda\in \sigma(\pm A)\cap (-\infty,l_F)\},
\]
such that
\[
\pm\mathcal{F}_b(x,t,u)\geq \frac{1}{2}(g_3(x,t)u,u)+c,\;\;\forall (x,t,u)\in\Omega\times\mathbb{R},
\]
for some $c\in\mathbb{R}$.
\noindent ($f^\pm_4$) $f\in C^1(\Omega\times \mathbb{R},\mathbb{R})$, $f(x,t,0)\equiv 0,\;\forall(x,t)\in\Omega$ and
\[
g_0(x,t):=f'_b(x,t,0),\;\;\forall (x,t)\in\Omega,
\]
with
\[
\pm (i_A(g_0)+\nu_A(g_0))<\pm i_A(g_3).
\]
We have the following result.
\begin{thm}\label{thm-application-2}
Assume $T$ is a rational multiple of $\pi$. \\
(A.) If $f$ satisfies conditions ($f_1$) and ($f^+_3$) (or ($f^-_3$)), then (W.E.) has at least one solution.\\
(B.) Furthermore, if $f$ satisfies condition ($f^+_4$) (or ($f^-_4$)), then (W.E.) has at least one nontrivial solution. Additionally, if $\nu_A(g_0)=0$, then (W.E.) has at least two nontrivial solutions.
\end{thm}
The proof consists in verifying the conditions of Theorem \ref{thm-abstract thm 2 for the multiplicity of solutions}; we only verify the smoothness of $F(u)$ defined in \eqref{eq-definition of F}. From condition ($f_1$) and $f\in C^1(\Omega\times\mathbb{R})$, the derivative $f'_b(x,t,u)$ of $f_b$ with respect to $u$ satisfies
\begin{equation}\label{eq-the boundedness of f'_b}
|f'_b(x,t,u)|\leq l_F,\;\;\forall (x,t,u)\in\Omega\times\mathbb{R}.
\end{equation}
For any $u,v\in\mathbf{H}$, by the mean value theorem (with some $\xi=\xi(x,t)\in(0,1)$),
\begin{align*}
F'(u+v)-F'(u)&=f_b(x,t,u+v)-f_b(x,t,u)\\
&=f'_b(x,t,u)v+(f'_b(u+\xi v)-f'_b(u))v.
\end{align*}
From \eqref{eq-the boundedness of f'_b}, we have $f'_b(u+\xi v)-f'_b(u)\in \mathbf{H}$ and
\[
\displaystyle\lim_{\|v\|_\mathbf{H}\to 0}\|f'_b(u+\xi v)-f'_b(u)\|_\mathbf{H}=0,\;\;\forall u\in\mathbf{H}.
\]
That is to say $F''(u)=f'_b(x,t,u)$ and $F\in C^2(\mathbf{H},\mathbb{R})$.
\begin{eg}
In order to give an example for Theorem \ref{thm-application-2}, write
\begin{equation}\label{eq-spectrum of Box}
\sigma(\Box)=\bigcup_{n\in\mathbb{Z}}\{\lambda_n\},
\end{equation}
with $\lambda_0=0$ and $\lambda_n<\lambda_{n+1}$ for all $n\in\mathbb{Z}$. Choose any $k\in\{2,3,\cdots\}$. Let
\[
g_0(x,t)\in C(\Omega,[\alpha,\beta]),\;{\rm with}\;[\alpha,\beta]\subset(0,\lambda_k),
\]
and let $h\in C(\mathbb{R},\mathbb{R})$ be as defined above. Define
\[
g(x,t,u):=g_0(x,t)+\displaystyle(\lambda_k-g_0(x,t)-\varepsilon_1)\frac{2}{\pi}\arctan(\varepsilon_1u^2),
\]
then
\[
f(x,t,u):=g(x,t,u)u+\varepsilon_2 h(u)
\]
satisfies conditions ($f_1$) and ($f^+_3$) with $b=\frac{\lambda_{k}}{2}$ and $\varepsilon_1,\varepsilon_2>0$ small enough.
Furthermore, if $g_0,h$ are $C^1$ and $\beta<\lambda_{k-1}$, then condition ($f^+_4$) is satisfied.
Additionally, if $[\alpha,\beta]\cap\sigma(\Box)=\emptyset$, then $\nu_A(g_0)=0$.
\end{eg}
\begin{rem}
We can also use Theorem \ref{thm-abstract thm 1 for the existence of solution}, Theorem \ref{thm-abstract thm 3} and Theorem \ref{thm-abstract thm 2 for the multiplicity of solutions} to consider the radially symmetric solutions of the $n$-dimensional wave equation:
\[
\left\{\begin{array}{ll}
\Box u\equiv u_{tt}-\vartriangle_x u=h(x,t,u), &t\in\mathbb{R},\;x\in B_R,\\
u(x,t)= 0, &t\in\mathbb{R},\;x\in \partial B_R,\\
u(x,t+T)=u(x,t), &t\in\mathbb{R},\;x\in B_R,\\
\end{array}
\right. \eqno(n \textendash W.E.)
\]
where $B_R=\{x\in\mathbb{R}^n\mid|x|<R\}$, $\partial B_R=\{x\in\mathbb{R}^n\mid|x|=R\}$, $n>1$, and the nonlinear term $h$ is $T$-periodic in the variable $t$.
The restriction to radial symmetry allows us to understand the nature of the spectrum of the wave operator.
Let $r=|x|$ and $S^1:=\mathbb{R}/T\mathbb{Z}$; if $h(x,t,u)=h(r,t,u)$, then the $n$-dimensional wave equation ($n$\textendash W.E.) can be transformed into:
\[
\left\{\begin{array}{ll}
A_0u:=u_{tt}-u_{rr}-\frac{n-1}{r}u_r=h(r,t,u),\\
u(R,t)=0,\; \\
u(r,0)=u(r,T),\;u_t(r,0)=u_t(r,T),
\end{array}
\right. \;\;\;(r,t)\in{\Omega:=[0,R]\times S^1}.\eqno(RS\textendash W.E.)
\]
$A_0$ is symmetric on $L^2(\Omega,\rho)$, where $\rho=r^{n-1}$ and
\[
L^2(\Omega,\rho):=\left\{u\;\Big|\;\|u\|^2_{L^2(\Omega,\rho)}:=\int_\Omega|u(t,r)|^2r^{n-1}dtdr<\infty\right\}.
\]
By the asymptotic properties
of the Bessel functions (see \cite{Watson-1952}), the spectrum of the wave operator can be characterized (see \cite[Theorem 2.1]{Schechter-1998}).
Under further assumptions, the self-adjoint extension of $A_0$ has no essential spectrum, and we can obtain more solutions of (RS\textendash W.E.).
\end{rem}
\begin{thebibliography}{99}
\bibitem{Abb-2001} A. Abbondandolo,
Morse theory for hamiltonian systems,
Chapman \& Hall/CRC, 2001.
\bibitem{Amann-1976} H. Amann,
Fixed point equations and nonlinear eigenvalue problems in ordered Banach spaces,
SIAM Rev. 18 (1976) 620-709.
\bibitem{Amann-Zehnder-1980} H. Amann, E. Zehnder,
Nontrivial solutions for a class of nonresonance problems and applications to nonlinear differential equations,
Annali Scuola Norm. Sup. Pisa 7 (1980) 539-603.
\bibitem{Aubin-Ekeland-1984} J.P. Aubin, I. Ekeland,
Applied nonlinear analysis,
Wiley, 1984.
\bibitem{Ambrosetti-Rabinowitz-1973} A. Ambrosetti, P.H. Rabinowitz,
Dual variational methods in critical point theory and applications,
J. Funct. Anal. 14 (1973) 349-381.
\bibitem{Atiyah-Patodi-Singer-1976} M.F. Atiyah, V.K. Patodi, I.M. Singer,
Spectral asymmetry and Riemannian geometry III,
Proc. Camb. Phil. Soc. 79 (1976) 71-99.
\bibitem{Cappell-Lee-Miller-1994} S.E. Cappell, R. Lee, E.Y. Miller,
On the Maslov index,
Comm. Pure Appl. Math. 47 (1994) 121-186.
\bibitem{Chang-1993} K.C. Chang,
Infinite Dimensional Morse Theory and Multiple Solution Problems,
Birkhauser, Basel, 1993.
\bibitem{Chang-Liu-Liu-1997} K.C. Chang, J.Q. Liu, M.J. Liu,
Nontrivial periodic solutions for strong resonance Hamiltonian systems,
Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire 14 (1997) 103-117.
\bibitem{Chen-Hu-2007} C. Chen, X. Hu,
Maslov index for homoclinic orbits of Hamiltonian systems,
Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire 24 (2007) 589-603.
\bibitem{Conley-Zehnder-1984} C. Conley, E. Zehnder,
Morse-type index theory for flows and periodic solutions for Hamiltonian equations,
Comm. Pure Appl. Math. 37 (1984) 207-253.
\bibitem{Ding-2007} Y. Ding,
Variational Methods for Strongly Indefinite Problems,
World Scientific Publishing, 2007.
\bibitem{Ding-2017} Y. Ding,
Variational methods for strongly indefinite problems (in Chinese),
Sci Sin Math. 47 (2017) 779-810.
\bibitem{Chen-Zhang-2014} J. Chen, Z. Zhang,
Infinitely many periodic solutions for a semilinear wave equation in a ball in $\mathbb{R}^n$,
J. Differential Equations 256(2014) 1718-1734.
\bibitem{Chen-Zhang-2016} J. Chen, Z. Zhang,
Existence of infinitely many periodic solutions for the radially symmetric wave equation with resonance,
J. Differential Equations 260(2016) 6017-6037.
\bibitem{Dong-Long-1997} D. Dong, Y. Long,
The iteration formula of Maslov-type index theory with applications to nonlinear Hamiltonian systems,
Trans. American Math. Soc. 349 (1997) 2619-2661.
\bibitem{Dong-2010} Y. Dong,
Index theory for linear selfadjoint operator equations and nontrivial solutions for asymptotically linear operator equations,
Calc. Var. 38 (2010) 75-109.
\bibitem{Ekeland-1984} I. Ekeland,
Une theorie de Morse pour les systemes hamiltoniens convexes,
Ann IHP Analyse non lineaire 1 (1984) 19-78.
\bibitem{Ekeland-1990} I. Ekeland,
Convexity Methods in Hamiltonian Mechanics,
Springer, 1990.
\bibitem{Ekeland-Hofer-1985} I. Ekeland, H. Hofer,
Periodic solutions with prescribed period for convex autonomous Hamiltonian systems,
Invent. Math. 81 (1985) 155-188.
\bibitem{Ekeland-Hofer-1987} I. Ekeland, H. Hofer,
Convex Hamiltonian energy surfaces and their closed trajectories,
Comm. Math. Phys. 113 (1987) 419-467.
\bibitem{Ekeland-Temam-1976} I. Ekeland, R. Temam,
Convex analysis and variational problems,
North-Holland-Elsevier, 1976.
\bibitem{Fei-1995} G. Fei,
Relative Morse index and its application to Hamiltonian systems in the presence of symmetries,
122 (1995) 302-315.
\bibitem{Floer-1988} A. Floer,
A relative Morse index for the symplectic action,
Comm. Pure Appl. Math. 41 (1988) 393-407.
\bibitem{Gou-Liu-2007} Y. Guo, J. Liu,
Periodic solutions for an asymptotically linear wave equation with resonance,
Nonlinear Anal. TMA 67 (2007) 2727-2743.
\bibitem{Guo-Liu-Zeng-2004} Y. Guo, J. Liu, P. Zeng,
A new Morse index theory for strongly indefinite functionals,
Nonlinear Anal. TMA 57 (2004) 485-504.
\bibitem{Ji-2018} S. Ji,
Periodic solutions for one dimensional wave equation with bounded nonlinearity,
J. Differential Equations 264 (2018) 5527-5540.
\bibitem{Ji-Li-2006} S. Ji, Y. Li,
Periodic solutions to one-dimensional wave equation with $x$-dependent coefficients,
J. Differential Equations 229(2006) 466-493.
\bibitem{Kryszewski-Szulkin-1997} W. Kryszewski, A. Szulkin,
An infinite dimensional Morse theory with applications,
Trans. of AMS. 349(8)(1997) 3181-3234.
\bibitem{Hu-Portaluri-2017} X. Hu, A. Portaluri,
Index theory for heteroclinic orbits of Hamiltonian systems,
Calc. Var. Partial Differential Equations 56 (2017).
\bibitem{Liu-2007} C. Liu,
Maslov-type index theory for symplectic paths with Lagrangian boundary conditions,
Advanced Nonlinear Studies 7 (2007) 131-161.
\bibitem{Liu-2007-2} C. Liu,
Asymptotically linear Hamiltonian system with Lagrangian
boundary conditions, Pacific J. Math. 232 (2007) 232-254.
\bibitem{Liu-Long-Zhu-2002} C. Liu, Y. Long, C. Zhu,
Multiplicity of closed characteristics on symmetric convex hypersurfaces in $\mathbb{R}^{2n}$,
Math. Ann. 323 (2002) 201-215.
\bibitem{Liu-Wang-Lin-2011} C. Liu, Q. Wang, X. Lin,
An index theory for symplectic paths associated with two Lagrangian subspaces with applications,
Nonlinearity 24 (2011) 43-70.
\bibitem{Long-1990} Y. Long,
Maslov-type index, degenerate critical points, and asymptotically linear Hamiltonian systems,
Sci. China 33 (1990) 1409-1419.
\bibitem{Long-1997} Y. Long,
A Maslov-type index theory for symplectic paths,
Topol. Methods Nonlinear Anal. 10 (1997) 47-78.
\bibitem{Long-Zehnder-1990} Y. Long, E. Zehnder,
Morse theory for forced oscillations of asymptotically linear Hamiltonian systems,
Stoch. Process. Phys. Geom., ed. S. Albeverio et al. (Teaneck, NJ: World Scientific) (1990) 528-563.
\bibitem{Long-Zhu-2000-2} Y. Long, C. Zhu,
Maslov type index theory for symplectic paths and spectral flow (II),
Chinese Ann. of Math. 21B (2000) 89-108.
\bibitem{Long-Zhu-2000} Y. Long, C. Zhu,
Closed characteristics on compact convex hypersurfaces in $\mathbb{R}^{2n}$,
Ann. Math. 155 (2000) 317-368.
\bibitem{Robbin-Salamon-1993} J. Robbin, D. Salamon,
The Maslov index for paths,
Topology 32 (4) (1993) 827-844.
\bibitem{Robbin-Salamon-1995} J. Robbin, D. Salamon,
The spectral flow and the Maslov index,
Bull. London Math. Soc. 27 (1) (1995) 1-33.
\bibitem{Schechter-1998} M. Schechter,
Rotationally invariant periodic solutions of semilinear wave equations,
Abstr. Appl. Anal. 3(1998) 171-180.
\bibitem{Tanaka-2006} M. Tanaka,
Existence of multiple weak solutions for asymptotically linear wave equations,
Nonlinear Analysis 65 (2006) 475-499.
\bibitem{Wang-Liu-2016} Q. Wang, C. Liu,
A new index theory for linear self-adjoint operator equations and its applications,
J. Differential Equations 260 (2016) 3749-3784.
\bibitem{Wang-Liu-2017} Q. Wang, C. Liu,
An index theory with applications to homoclinic orbits of Hamiltonian systems and Dirac equations,
preprint, arXiv:1802.03492.
\bibitem{Watson-1952} G. Watson,
A Treatise on the Theory of Bessel Functions, second edition,
Cambridge University Press, Cambridge, 1952.
\bibitem{Zeng-Liu-Guo-2004} P. Zeng, J. Liu, Y. Guo,
Computations of critical groups and applications to asymptotically linear wave equation and beam equation,
J. Math. Anal. Appl. 300 (2004) 102-128.
\bibitem{Zhu-Long-1999} C. Zhu, Y. Long,
Maslov-type index theory for symplectic paths and spectral flow (I),
Chin. Ann. of Math. 20B (4) (1999) 413-424.
\end{thebibliography}
\end{document}
\begin{document}
\title{Experimental quantum key distribution based on a Bell test}
\author{Alexander Ling $^{1,2}$}
\author{Matthew P. Peloso $^1$}
\author{Ivan Marcikic}
\author{Valerio Scarani $^{1,3}$}
\author{Ant\'{\i}a Lamas-Linares $^{1,3}$}
\author{Christian Kurtsiefer $^{1,3}$}
\affiliation{$^1$ Department of Physics, National University of
Singapore, Singapore 117542\\$^2$ Temasek Laboratories, National University
of Singapore, Singapore 123456\\$^3$ Centre for Quantum Technologies,
National University of Singapore, Singapore 117543
}
\date{\today}
\begin{abstract}
We report on a complete free-space field implementation of a modified Ekert91
protocol for quantum key distribution
using entangled photon pairs. For each photon pair we perform a random choice
between key generation and a Bell inequality. The amount of violation
is used to determine the possible knowledge of an eavesdropper to ensure
security of the distributed final key.
\end{abstract}
\keywords{quantum cryptography, CHSH inequality, Ekert91}
\maketitle
{\it Introduction.}
Proposals for quantum key distribution (QKD) were first published over two
decades ago \cite{bb84,quantummoney,bb84expt}.
In particular, the protocol of Bennett and Brassard in 1984 (BB84) sought to
distribute a random encryption key via correlated polarization states of
single photons \cite{bb84,bb84expt}.
Its strength was derived from the no-cloning theorem
\cite{nocloning1,nocloning2} which states that the
state of a single quantum system cannot be copied perfectly.
A measurement attempt on the distributed key is revealed as errors in the
expected correlation of the measurement results. BB84 must treat all noise as evidence of an eavesdropper.
Whether a completely secure key can then be distilled after error correction \cite{bennett95} depends only on the fraction of errors in the initial key.
The `quantum' nature of QKD was explored from a different angle in 1991 when
Ekert proposed an implementation using non-local correlations between
maximally entangled photon-pairs \cite{E91}.
The quality of entanglement between a photon-pair can be measured by the
degree of violation of a Bell inequality \cite{bell66}.
Maximally entangled photon-pairs have perfect correlations in their
polarization states, and violate the Clauser-Horne-Shimony-Holt (CHSH)
version \cite{chsh} of this inequality with the maximum value.
The defining feature in Ekert91 is the suggestion to use the degree of
violation of the CHSH inequality as a test of security.
This conjecture is related to the concept later known as the monogamy of
entanglement \cite{coffman00}: the entanglement between two systems decreases
when a third system (for example, the measurement apparatus of an
eavesdropper) interacts with the pair.
Although BB84 and Ekert91 utilize different aspects of quantum mechanics, once one writes down explicitly the expected qubit states and the measurements that should be performed, the two protocols turn out to generate the same set of correlations \cite{bennett92}. When these calculations were extended to include error correction and privacy amplification, a quantitative link was found between Eve's information (assuming individual attacks) and the amount of violation of the CHSH inequality \cite{fuchs97}, thus vindicating Ekert's intuition. BB84 and Ekert91 came to be considered as fully equivalent. In this perspective, the choice between a prepare-and-measure and an entanglement-based implementation is dictated only by a balance of practical benefits. For instance, BB84 involves an active choice when encoding the logical bits 0 and 1 into the
polarization states, requiring a trusted high-bandwidth random number source
\cite{bienfang04}; in comparison, no active choice is necessary with entanglement-based QKD.
Besides its ability to remove the need for random number generators
\cite{marcikic06}, technical difficulties related to the lack of practical true
single photon sources can be avoided. The price of entanglement-based QKD is a lower key generation rate due to the
limited brightness of contemporary entangled photon-pair sources when compared
with faint coherent pulse approximations of single photon sources.
Recently two theoretical developments pointed to the fact that BB84 and
Ekert91 may not be equivalent after all. The first such development are the
proofs of unconditional security developed by Koashi and Preskill \cite{koa03}
and improved by Ma, Fung and Lo \cite{ma07}. These authors proved that the
security of entanglement-based implementations can be based on the sole
knowledge of the error rate, because this quantity already contains
information about the imperfection of the source --- while such imperfections
(e.g. the photon-number statistics, or spectral distinguishability of
different letters \cite{spectral02}) must be carefully taken into
account in prepare-and-measure schemes \cite{bra00,hwa03,sca04}. The second
development is due to Ac\'{\i}n and coworkers \cite{acin07}. These authors
went back to Ekert's original idea of basing the security \textit{only} on the
combined correlation function $S$ for violating CHSH and derived the formula
\begin{equation}
I_{Eve}\,=\,h\left({1+\sqrt{S^2/4-1}\over 2}\right)\,,\label{eqacin}
\end{equation}
with the binary entropy $h(x)=-x\log_2(x)-(1-x)\log_2(1-x)$. This formula provides an unconditional security bound under the same assumptions as in \cite{ma07}; it also guarantees partial security in a more paranoid scenario, in which the QKD devices are untrusted (we shall come back to this issue in the conclusions).
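For orientation, a short numerical evaluation of Eq.~(\ref{eqacin}), anticipating the value of $S$ observed in the experiment described below: a maximal violation $S=2\sqrt{2}$ gives $I_{Eve}=h(1)=0$; the classical bound $S=2$ gives $I_{Eve}=h(1/2)=1$, i.e., no extractable secrecy; and $S=2.5$ gives $\sqrt{S^2/4-1}=0.75$, hence $I_{Eve}=h(0.875)\approx0.54$ bit per bit of raw key.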
In this paper, we describe an entanglement-based QKD experiment in which we
monitor the violation of the CHSH inequality and use (\ref{eqacin}) to quantify
the degree of raw key compression in the privacy amplification
step. Typically, implementations of entanglement-based QKD
systems do not monitor Bell inequalities \cite{nai00,tit00,marcikic06}; in one
of the first experiments \cite{jennewein00}, a Bell-type inequality was
monitored, but no quantitative measure of security was derived from the
observed violation.
\begin{figure}
\caption{Orientation of different detector polarizations. Coincidences (1,1')
and (2,2') are used for key generation, while coincidences between any of
(3,4,5,6) and any of (1',2',3',4') will be used for testing a CHSH
inequality for various settings.}
\label{fig:arrows}
\end{figure}
{\it Experiment.}
We implement a modified Ekert91 protocol \cite{aci06} that uses a minimal
combination of three detection settings $a_0,a_1,a_K$ on one side, and two distinct
detection settings $b_0,b_1$ on the other side for performing polarization
measurements on a photon-pair in a singlet state $\ket{\Psi^-} = \frac{1}{\sqrt{2}}(\ket{H_A V_B} - \ket{V_A H_B})$.
The setting pair $(a_K,b_0)$ corresponds to
horizontal/vertical polarization, and should lead (in the absence of noise) to
perfectly anti-correlated measurement results which form the raw key. The
setting $b_0$ and the other ones are used to check
the violation of the CHSH inequality $\left|S\right|\leq 2$ with
\begin{eqnarray}
\label{eq:bell}
S&=& E(a_0,b_0)+E(a_0,b_1)+E(a_1,b_0)-E(a_1,b_1)\,.
\end{eqnarray}
The correlation coefficients $E$ are determined from the number $n_{ij}$ of coincidence events between detectors $i$ on one side and $j$ on the other side, collected during a given integration time $T$.
Measurement bases are chosen such that a maximal value of $|S|=2\sqrt{2}$ would be expected.
Basis $b_1$ is chosen to correspond to $\pm45^\circ$ linear polarization, and the bases $a_0,a_1$ correspond to $\pm22.5^\circ$ and $\pm67.5^\circ$ linear polarizations, respectively (see Fig.~\ref{fig:arrows}).
With that, we evaluate for example
\begin{equation}\label{eq:corrcoeff}
E(a_0,b_0)={n_{3,1'}+n_{4,2'} - n_{3,2'}-n_{4,1'} \over n_{3,1'}+n_{4,2'} + n_{3,2'}+n_{4,1'} }\,,
\end{equation}
and the other coefficients in (\ref{eq:bell}) accordingly from an ensemble of
pair detection events.
\begin{figure}
\caption{Experimental setup. Polarization-entangled photon pairs are generated
via parametric down conversion pumped by a laser diode (LD) in a
nonlinear optical crystal (BBO) with walk-off compensation (WP, CC) into
single mode optical fibers (SMF). A free-space optical channel for one
detector set (Bob) is realized using small telescopes on both sides (ST,
RT). Spectral filtering is implemented with a pinhole (PH) which is imaged
completely onto the active area of the photodetectors with a relay lens (R)
through a spectral filter (F). Both parties perform polarization
measurements in bases randomly chosen by beam splitters (B1-B3), and defined
by properly oriented wave plates (H1-H3) in front of polarizing beam
splitters (PBS) and photon counting detectors. Photo events are registered
separately with time stamp units (TU) connected to two PC linked via a
classical channel. }
\label{fig:setup}
\end{figure}
The random choice of measurement bases is performed with a combination of polarization-independent beam splitters (B1-B3, see Fig.~\ref{fig:setup}), with a 50:50 splitting ratio. This avoids an explicit generation of a random number by a device. The base settings corresponding to the angles shown in Fig.~\ref{fig:arrows} are adjusted by appropriately oriented half wave plates (H1-H3).
The remaining elements of the experimental setup are similar to a previous experiment implementing an entanglement-based BB84 protocol \cite{marcikic06}.
Polarization-entangled photon pairs are generated in a compact diode-laser
pumped non-collinear type-II parametric down conversion process \cite{kwiat95}
with efficient collection techniques into single mode optical
fibers \cite{kurtsiefer01, trojek04}.
We pump a 2\,mm thick $\beta$-Barium Borate (BBO) crystal at a wavelength of
407\,nm with a power of 40\,mW and observe a photo coincidence rate of about
18000\,s$^{-1}$ in passively-quenched silicon avalanche photodiodes directly
at the source.
We separate the two measurement devices by $\approx1.5$\,km in an urban
environment, introducing a link
loss of about 3\,dB caused primarily by atmospheric absorption at the down
converted wavelength of 810\,nm and transmission fluctuations.
Background light suppression (at night) was accomplished using a spatial
filter (PH) in the receiving telescope (acceptance range
$\Omega=6.5\cdot10^{-9}$\,sr) and a color glass filter (RG780) with a peak
transmission of $\approx90\%$ for the down-converted light at 810\,nm.
Correlated photons are identified by recording their time of arrival at each
detector and running a cross correlation of the timing information on both
sides (similar to the scheme in \cite{marcikic06}). The virtual coincidence
window defined in software was 3.75\,ns, and we monitored the accidental
coincidences in an equally wide time window offset by 20\,ns. Detector time
delay compensation was adjusted to better than 0.5\,ns to avoid leakage
through a classical timing channel \cite{lamas07}.
\begin{figure}
\caption{Experimental results in a key distribution experiment implementing an
Ekert91 protocol. (a) shows total (upper trace) and accidental (lower trace)
detected coincidence rates between Alice and Bob, (b) the error ratio in the
raw key, (c) the degree of violation of a CHSH-type Bell inequality, (d) the
final key rate after error correction and privacy amplification.
The experiment was terminated by a storm misaligning a telescope of the
optical link at 5\,am.}
\label{fig:expresult}
\end{figure}
The experimental results from one 9.5 hour run are shown in
Fig.~\ref{fig:expresult}. In this interval, we observe a small drop of the
coincidence rate due to an alignment drift in the optical link. Accidental
coincidences were about 0.5\% of the coincidences from down-converted photon
pairs.
Half of the identified photon pairs were seen by detectors
(3,4,5,6) paired with (1',2',3',4'), which were used to evaluate the violation of (\ref{eq:bell}). About a
quarter of the pairs in detector combinations (1,2) with (1',2') contributed
to the raw key, while the residual
quarter of pairs in combinations (1,2) with (3',4') were discarded. Detectors
(1,2,1',2') were adjusted to coincide with the natural
axes of the down conversion crystal to keep the error rate on the raw key as
small as possible.
Error correction following a modified CASCADE protocol~\cite{cascade} was
performed in real time on packets of at least 10000 raw bits for a targeted
final bit error ratio of $10^{-12}$. The corresponding quantum bit error
ratio (QBER) was extracted out of this procedure
(Fig.~\ref{fig:expresult}b). The combined correlation value $S$ was extracted
via (\ref{eq:corrcoeff}) for that block of raw key, and stayed at around 2.5
over the whole measurement time (Fig.~\ref{fig:expresult}c). This is not a
particularly
high value, and we suspect a broad optical spectrum in the blue pump diode as
a reason for this problem. This is compatible with lower
polarization correlation in the $\pm45^\circ$ basis due to a
residual distinguishability between the two decay paths in the SPDC
process. However, it serves as a typical model for an eavesdropping attempt
e.g. by a partial intercept-resend attack in the H/V basis. While such an
attack is not revealed in the QBER in this protocol, it clearly shows in a
reduction of $S$ from the maximally expected value of $2\sqrt{2}$.
The average information leakage $l$ per raw bit to an eavesdropper was
estimated for each block following (\ref{eqacin}). Together with the
revealed bits in the error correction procedure (and not assuming that any
errors are due to intrinsic detector noise), we can then establish the secret
key fraction
Alice and Bob can extract out of the privacy amplification hashing procedure
from a given raw key block. The result over time is shown in
Fig.~\ref{fig:expresult}d, resulting in an average final key rate of around
300\,bit\,s$^{-1}$ or about $10^7$\,bit of error-free secret key.
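Schematically, and within the asymptotic analysis used up to this point, the secret key extracted from a block of $n$ raw bits is of the order of
\[
n\left[1-I_{Eve}-\mathrm{leak}_{EC}\right],
\]
where $I_{Eve}$ is evaluated from the measured $S$ via Eq.~(\ref{eqacin}) and $\mathrm{leak}_{EC}$ is the fraction of bits disclosed during error correction; this is how the final key rate shown in Fig.~\ref{fig:expresult}d is obtained from the measured data.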
The estimation of the eavesdropper
knowledge is applicable strictly only for an infinitely large number of bits;
recent work on the security of finite length keys implies that the privacy amplification should be carried out over large ensembles \cite{mawang,hase,scaren1}. For the protocol studied here, a finite-key bound has been presented in Ref.~\cite{scaren2}. By performing privacy amplification on blocks of $n=10^6$ bits of the raw key, the extractable secure key is around half of the asymptotic value \footnote{Note that this value cannot be read from Fig.~1 in \cite{scaren2}, because that plot assumed (i) an a priori relation between $Q$ and $S$ that is not fulfilled in the experiment and (ii) the optimization of the probability of the measurements as a function of $N$, while in the experiment the choices are always made by 50:50 splitters.}.
{\it Conclusion and perspectives.}
We have demonstrated a free space implementation of a modified Ekert91
protocol. The security of the key distilled was derived from the
violation of the CHSH inequality. This ensured that the key was distributed
not by some arbitrary random number generator, but with the non-local
correlations shared by entangled photon-pairs.
Using Ekert91, the authorized parties can give up control over the photon
source. Ac\'{\i}n et al. \cite{acin07} showed that the CHSH violation is in principle sufficient to decide the security (against collective attacks) of a distributed key, even if the measurement apparatus is not trusted. Unfortunately, such a scheme is not yet experimentally feasible because of the stringent requirement
it places on detector efficiencies \cite{acin07,zhao07}.
A final point must be made about the random choice generator. Our
implementation leaves this choice to the beam splitter B1 in
Fig.~\ref{fig:setup}, which is accessible from the quantum channel. We are then assuming that the eavesdropper cannot change the beam splitter's behavior. This is reasonable; however, it makes our setup fall outside the device-independent scenario, even in the lossless regime. In particular, in that scenario, one can construct a situation in which BB84 would not be secure at all \cite{bb84side} because it is conceivable that different states are sent to the different measurement devices; but this is excluded for a well-behaved beam-splitter, which is precisely our assumption. Device-independent security requires the choice to be made on degrees of freedom independent of those accessible to the eavesdropper.
\section*{Acknowledgment}
This work was supported by the National Research Foundation and the
Ministry of Education, Singapore.
\begin{thebibliography}{19}
\expandafter\ifx\csname natexlab\endcsname\relax\def\natexlab#1{#1}\fi
\expandafter\ifx\csname bibnamefont\endcsname\relax
\def\bibnamefont#1{#1}\fi
\expandafter\ifx\csname bibfnamefont\endcsname\relax
\def\bibfnamefont#1{#1}\fi
\expandafter\ifx\csname citenamefont\endcsname\relax
\def\citenamefont#1{#1}\fi
\expandafter\ifx\csname url\endcsname\relax
\def\url#1{\texttt{#1}}\fi
\expandafter\ifx\csname urlprefix\endcsname\relax\def\urlprefix{URL }\fi
\providecommand{\bibinfo}[2]{#2}
\providecommand{\eprint}[2][]{\url{#2}}
\bibitem[{\citenamefont{Bennett and Brassard}(1984)}]{bb84}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Bennett}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Brassard}}, in
\emph{\bibinfo{booktitle}{Proceedings of IEEE International Conference on
Computers, Systems and Signal Processing, Bangalore, India, December 1984,
pp. 175 - 179.}} (\bibinfo{year}{1984}).
\bibitem[{\citenamefont{Wiesner}(1983)}]{quantummoney}
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Wiesner}},
\bibinfo{journal}{Sigact News} \textbf{\bibinfo{volume}{15}},
\bibinfo{pages}{78} (\bibinfo{year}{1983}).
\bibitem[{\citenamefont{Bennett
et~al.}(1992{\natexlab{a}})\citenamefont{Bennett, Bessette, Brassard,
Salvail, and Smolin}}]{bb84expt}
\bibinfo{author}{\bibfnamefont{C.~H.} \bibnamefont{Bennett}},
\bibinfo{author}{\bibfnamefont{F.}~\bibnamefont{Bessette}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Brassard}},
\bibinfo{author}{\bibfnamefont{L.}~\bibnamefont{Salvail}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Smolin}},
\bibinfo{journal}{J. Cryptology} \textbf{\bibinfo{volume}{5}},
\bibinfo{pages}{3} (\bibinfo{year}{1992}{\natexlab{a}}).
\bibitem[{\citenamefont{Wootters and Zurek}(1982)}]{nocloning1}
  \bibinfo{author}{\bibfnamefont{W.~K.} \bibnamefont{Wootters}} \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{W.~H.} \bibnamefont{Zurek}},
\bibinfo{journal}{Nature} \textbf{\bibinfo{volume}{299}},
\bibinfo{pages}{802} (\bibinfo{year}{1982}).
\bibitem[{\citenamefont{Dieks}(1982)}]{nocloning2}
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Dieks}},
\bibinfo{journal}{Phys. Lett. A} \textbf{\bibinfo{volume}{92}},
\bibinfo{pages}{271} (\bibinfo{year}{1982}).
\bibitem[{\citenamefont{Bennett et~al.}(1995)\citenamefont{Bennett, Brassard,
  Crepeau, and Maurer}}]{bennett95}
  \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Bennett}},
  \bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Brassard}},
  \bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Crepeau}}, \bibnamefont{and}
  \bibinfo{author}{\bibfnamefont{U.}~\bibnamefont{Maurer}},
  \bibinfo{journal}{IEEE Trans. Inf. Theory} \textbf{\bibinfo{volume}{41}},
  \bibinfo{pages}{1915} (\bibinfo{year}{1995}), ISSN \bibinfo{issn}{0018-9448}.
\bibitem[{\citenamefont{Ekert}(1991)}]{E91}
\bibinfo{author}{\bibfnamefont{A.~K.} \bibnamefont{Ekert}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{67}},
\bibinfo{pages}{661} (\bibinfo{year}{1991}).
\bibitem[{\citenamefont{Bell}(1966)}]{bell66}
\bibinfo{author}{\bibfnamefont{J.~S.} \bibnamefont{Bell}},
\bibinfo{journal}{Rev. Mod. Phys.} \textbf{\bibinfo{volume}{38}},
\bibinfo{pages}{447} (\bibinfo{year}{1966}).
\bibitem[{\citenamefont{Clauser et~al.}(1969)\citenamefont{Clauser, Horne,
Shimony, and Holt}}]{chsh}
\bibinfo{author}{\bibfnamefont{J.~F.} \bibnamefont{Clauser}},
\bibinfo{author}{\bibfnamefont{M.~A.} \bibnamefont{Horne}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Shimony}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{R.~A.} \bibnamefont{Holt}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{23}},
\bibinfo{pages}{880} (\bibinfo{year}{1969}).
\bibitem[{\citenamefont{Coffman et~al.}(2000)\citenamefont{Coffman, Kundu, and
Wootters}}]{coffman00}
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Coffman}},
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Kundu}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{W.~K.} \bibnamefont{Wootters}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{61}},
\bibinfo{pages}{052306} (\bibinfo{year}{2000}).
\bibitem[{\citenamefont{Bennett
et~al.}(1992{\natexlab{b}})\citenamefont{Bennett, Brassard, and
Mermin}}]{bennett92}
\bibinfo{author}{\bibfnamefont{C.~H.} \bibnamefont{Bennett}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Brassard}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{N.~D.} \bibnamefont{Mermin}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{68}},
\bibinfo{pages}{557} (\bibinfo{year}{1992}{\natexlab{b}}).
\bibitem[{\citenamefont{Fuchs et~al.}(1997)\citenamefont{Fuchs, Gisin,
Griffiths, Niu, and Peres}}]{fuchs97}
\bibinfo{author}{\bibfnamefont{C.~A.} \bibnamefont{Fuchs}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Gisin}},
\bibinfo{author}{\bibfnamefont{R.~B.} \bibnamefont{Griffiths}},
\bibinfo{author}{\bibfnamefont{C.-S.} \bibnamefont{Niu}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Peres}},
\bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{56}},
\bibinfo{pages}{1163} (\bibinfo{year}{1997}).
\bibitem[{\citenamefont{Bienfang et~al.}(2004)\citenamefont{Bienfang, Gross,
Mink, Hershman, Nakassis, Tang, Lu, Su, Clark, Williams et~al.}}]{bienfang04}
\bibinfo{author}{\bibfnamefont{J.}~\bibnamefont{Bienfang}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Gross}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Mink}},
\bibinfo{author}{\bibfnamefont{B.}~\bibnamefont{Hershman}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Nakassis}},
\bibinfo{author}{\bibfnamefont{X.}~\bibnamefont{Tang}},
\bibinfo{author}{\bibfnamefont{R.}~\bibnamefont{Lu}},
\bibinfo{author}{\bibfnamefont{D.}~\bibnamefont{Su}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Clark}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Williams}},
\bibnamefont{et~al.}, \bibinfo{journal}{Opt. Express}
\textbf{\bibinfo{volume}{12}}, \bibinfo{pages}{2011} (\bibinfo{year}{2004}).
\bibitem[{\citenamefont{Marckic et~al.}(2006)\citenamefont{Marckic,
Lamas-Linares, and Kurtsiefer}}]{marcikic06}
\bibinfo{author}{\bibfnamefont{I.}~\bibnamefont{Marckic}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Lamas-Linares}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Kurtsiefer}},
\bibinfo{journal}{Appl. Phys. Lett.} \textbf{\bibinfo{volume}{89}},
\bibinfo{pages}{101122} (\bibinfo{year}{2006}).
\bibitem[{\citenamefont{Koashi and Preskill}(2003)}]{koa03}
\bibinfo{author}{\bibnamefont{M.~Koashi, and J.~Preskill}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{90}},
\bibinfo{pages}{057902} (2003).
\bibitem[{\citenamefont{Ma, Fung and Lo}(2007)}]{ma07}
\bibinfo{author}{\bibnamefont{X.~Ma, C.-H.\,F.~Fung, and H.-K.~Lo}}, \bibinfo{journal}{Phys. Rev. A} \textbf{\bibinfo{volume}{76}},
\bibinfo{pages}{012307} (2007).
\bibitem[{\citenamefont{Kurtsiefer et al.}(2002)}]{spectral02}
\bibinfo{author}{\bibnamefont{C. Kurtsiefer, P. Zarda, M. Halder, P.M. Gorman,
P.R. Tapster, J.G. Rarity, and H. Weinfurter}},
\bibinfo{journal}{Proceedings SPIE} \textbf{\bibinfo{volume}{4917}},
\bibinfo{pages}{25-31} (2002).
\bibitem[{\citenamefont{Brassard {\em et al.}}(2000)}]{bra00}
\bibinfo{author}{\bibnamefont{G. Brassard, N. L\"{u}tkenhaus, T. Mor, and B. C. Sanders}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{85}},
\bibinfo{pages}{1330} (2000).
\bibitem[{\citenamefont{Hwang}(2003)}]{hwa03}
\bibinfo{author}{\bibnamefont{W.-Y. Hwang}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{91}},
\bibinfo{pages}{057901} (2003).
\bibitem[{\citenamefont{Scarani, Ac\'{\i}n, Ribordy and Gisin}(2004)}]{sca04}
\bibinfo{author}{\bibnamefont{V. Scarani, A. Ac\'{\i}n, G. Ribordy, and
N. Gisin}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{92}},
\bibinfo{pages}{057901} (2004).
\bibitem[{\citenamefont{Acin et~al.}(2007)\citenamefont{Ac\'{\i}n, Brunner, Gisin,
Massar, Pironio, and Scarani}}]{acin07}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Acin}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Brunner}},
\bibinfo{author}{\bibfnamefont{N.}~\bibnamefont{Gisin}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Massar}},
\bibinfo{author}{\bibfnamefont{S.}~\bibnamefont{Pironio}}, \bibnamefont{and}
\bibinfo{author}{\bibfnamefont{V.}~\bibnamefont{Scarani}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{98}},
\bibinfo{eid}{230501} (\bibinfo{year}{2007}).
\bibitem[{\citenamefont{Naik \textit{et al.}}(2000)}]{nai00}
\bibinfo{author}{\bibnamefont{D.S. Naik, C.G. Peterson, A.G. White, A.J. Berglund, P.G. Kwiat}}
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{84}},
\bibinfo{pages}{4733} (2000).
\bibitem[{\citenamefont{Tittel \textit{et al.}}(2000)}]{tit00}
\bibinfo{author}{\bibnamefont{W. Tittel, J. Brendel, H. Zbinden, and N. Gisin}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{84}},
\bibinfo{pages}{4737} (2000).
\bibitem[{\citenamefont{Jennewein et~al.}(2000)\citenamefont{Jennewein, Simon,
Weihs, Weinfurter, and Zeilinger}}]{jennewein00}
\bibinfo{author}{\bibfnamefont{T.}~\bibnamefont{Jennewein}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Simon}},
\bibinfo{author}{\bibfnamefont{G.}~\bibnamefont{Weihs}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{84}},
\bibinfo{pages}{4729} (\bibinfo{year}{2000}).
\bibitem{aci06} A. Ac\'{\i}n, S. Massar, S. Pironio, New J. Phys. \textbf{8}, 126 (2006)
\bibitem[{\citenamefont{Kwiat et~al.}(1995)\citenamefont{Kwiat, Mattle,
Weinfurter, Zeilinger, Sergienko, and Shih}}]{kwiat95}
\bibinfo{author}{\bibfnamefont{P.~G.} \bibnamefont{Kwiat}},
\bibinfo{author}{\bibfnamefont{K.}~\bibnamefont{Mattle}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}},
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Zeilinger}},
\bibinfo{author}{\bibfnamefont{A.~V.} \bibnamefont{Sergienko}},
\bibnamefont{and} \bibinfo{author}{\bibfnamefont{Y.}~\bibnamefont{Shih}},
\bibinfo{journal}{Phys. Rev. Lett.} \textbf{\bibinfo{volume}{75}},
\bibinfo{pages}{4337} (\bibinfo{year}{1995}).
\bibitem[{\citenamefont{Kurtsiefer et~al.}(2001)\citenamefont{Kurtsiefer,
Oberparleiter, Weinfurter, Kurtsiefer, Oberparleiter, and
Weinfurter}}]{kurtsiefer01}
\bibinfo{author}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Kurtsiefer}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Oberparleiter}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}},
\bibinfo{journal}{Phys. Rev. A}
\textbf{\bibinfo{volume}{64}}, \bibinfo{pages}{023802}
(\bibinfo{year}{2001}).
\bibitem[{\citenamefont{Trojek et~al.}(2004)\citenamefont{Trojek, Schmid,
Bourennane, Weinfurter, and Kurtsiefer}}]{trojek04}
\bibinfo{author}{\bibfnamefont{P.}~\bibnamefont{Trojek}},
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Schmid}},
\bibinfo{author}{\bibfnamefont{M.}~\bibnamefont{Bourennane}},
\bibinfo{author}{\bibfnamefont{H.}~\bibnamefont{Weinfurter}},
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Kurtsiefer}},
\bibinfo{journal}{Opt. Express} \textbf{\bibinfo{volume}{12}},
\bibinfo{pages}{276} (\bibinfo{year}{2004}).
\bibitem{lamas07}
\bibinfo{author}{\bibfnamefont{A.}~\bibnamefont{Lamas-Linares}}
\bibnamefont{and}
\bibinfo{author}{\bibfnamefont{C.}~\bibnamefont{Kurtsiefer}},
\bibinfo{journal}{Opt. Express} \textbf{\bibinfo{volume}{15}},
\bibinfo{pages}{9388} (\bibinfo{year}{2007}).
\bibitem{cascade} G. Brassard and L. Savail, in: \textit{Advances in
Cryptology - Proc. Eurocrypt '94}, pp. 410-423 (1994); T. Sugimoto and
K. Yamazaki, \textit{IEICE Trans. Fundamentals} \textbf{E38-A}, 1987
(2000).
\bibitem{mawang} X. Ma, B. Qi, Y. Zhao, H.-K. Lo, Phys. Rev. A \textbf{72}, 012326 (2005); X.-B. Wang, Phys. Rev. Lett. \textbf{94}, 230503 (2005).
\bibitem{hase} J. Hasegawa, M. Hayashi, T. Hiroshima, A. Tanaka, A. Tomita, arXiv:0705.3081.
\bibitem{scaren1} V. Scarani, R.Renner, Phys. Rev. Lett. \textbf{100}, 200501 (2008)
\bibitem{scaren2} V. Scarani, R. Renner, arXiv:0806.0120
\bibitem{zhao07} Y. Zhao, C.-H. F. Fung, B. Qi, C. Chen, H.-K. Lo, arXiv:0704.3253
\bibitem{bb84side} F. Magniez et al., quant-ph/0512111, Appendix A; A. Ac\'{\i}n, N. Gisin, L. Masanes,
Phys. Rev. Lett. {\bf 97}, 120405 (2006); V. Scarani et al.,
Phys. Rev. A {\bf 74}, 042339 (2006).
\end{thebibliography}
\end{document} |
\begin{document}
\title{A gradient descent akin method for inequality constrained optimization
}
\author{Long Chen \and
Wenyi Chen \and
Kai-Uwe Bletzinger
}
\institute{L. Chen \at
Technical University of Munich, Germany \\
Tel.: +49 (89) 289 - 22462\\
Fax: +49 (89) 289 - 22421\\
\email{[email protected]}
\and
W. Chen \at
Wuhan University, China \\
\email{[email protected]} \\
\and
K.-U. Bletzinger \at
Technical University of Munich, Germany \\
\email{[email protected]}
}
\date{Submitted}
\maketitle
\begin{abstract}
We propose a first-order method for solving inequality constrained optimization problems. The method is derived from our previous work \cite{Chen}, a modified search direction method (MSDM) that applies the singular-value decomposition of normalized gradients. In this work, we simplify its computational framework to a ``gradient descent akin'' method, i.e., the search direction is computed as a linear combination of the negative, normalized objective and constraint gradients. The main focus of this work is to provide a mathematical foundation for the method. We analyze the global behavior and convergence of the method using a dynamical systems approach. We then prove that the resulting trajectories find local solutions by asymptotically converging to the central path(s) of the logarithmic barrier interior-point method under the so-called \textit{relative convex condition}. Numerical examples are reported, which include both common test examples and applications in shape optimization.
\keywords{first-order method; negative and normalized gradients; inequality constrained optimization; central path; nonlinear programming}
\subclass{65K05 \and 90C30 \and 90C51 \and 90C90}
\end{abstract}
\section{Introduction}
\label{sec:intro}
In this paper, we propose a first-order method for solving inequality constrained optimization problems. The problem of interest is
\begin{equation}
\begin{split}
&\textnormal{minimize}~~~ f(x), \\
&\textnormal{subject to}~~ g_i(x) \leq 0, ~ i = 1,...,m, \\
\end{split}
\label{eq:Optimization_Problem}
\end{equation}
where $f, g_1,..., g_m: \mathbb{R}^{n} \rightarrow \mathbb{R}$ are twice differentiable.
The present method is derived from our previous work \cite{Chen}, a modified search direction method (MSDM) for inequality constrained optimization problems. MSDM computes a descent direction of the objective function using a singular-value decomposition that exploits the normalized gradient information of a constrained optimization problem. Numerical experiments show that MSDM finds local solutions by traversing along the central path(s) for the logarithmic barrier method. However, a mathematical theory of the method has so far been lacking, and the intrinsic optimization parameter in MSDM has remained heuristic. In this work, we simplify the computational framework of MSDM to a ``gradient descent akin'' method, i.e., we compute the search direction as a linear combination of the negative, normalized objective and constraint gradients. We analyze the global behavior and convergence from a dynamical systems perspective. We prove that the resulting trajectories find local solutions by asymptotically converging to the central path of the logarithmic barrier interior-point method. We should note that in this paper, we do not address the design and analysis of a practical implementation of the method, which is currently an active area of research.
\subsection{Main results}
\label{sec:results}
For problem (\ref{eq:Optimization_Problem}), we make the following assumptions:
\noindent (A1) Coercive condition for the objective function $f(x)$,
$$\lim_{|x|\rightarrow \infty}f(x)=+\infty;$$
\noindent (A2) \textcolor[rgb]{0,0,0}{$\nabla f(x) \neq 0$ in the feasible set $\Omega=\{x: g_i(x) \leq 0, ~ i = 1,...,m\}$;}
\noindent (A3) $\nabla g_i(x) \neq 0, ~ i = 1,...,m$ in the feasible set $\Omega$;
\noindent (A4) $f$ and $g_i$ are twice continuously differentiable functions.
Under assumptions (A1)$-$(A4), we propose a dynamical system for solving problem (\ref{eq:Optimization_Problem}) with a single inequality constraint $g(x)$,
\begin{equation}
\frac{d x}{d t} =-\frac{\nabla f}{|\nabla f|}
-\zeta\frac{\nabla g}{|\nabla g|}, ~~ \zeta \in [0,1),
\label{eq:modified_search_direction}
\end{equation}
where ``$|\cdot|$'' denotes the Euclidean norm. $\nabla f$ and $\nabla g$ are the gradient row vectors of the objective function and the constraint function, respectively.
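For concreteness, the search direction field above can be evaluated in a few lines of code. The following sketch is our own illustration and not part of the method's formal development; the quadratic objective, the linear constraint and the value of $\zeta$ in the example call are placeholders.
\begin{verbatim}
import numpy as np

def s_zeta(grad_f, grad_g, zeta):
    """Search direction of the proposed system: the negative, normalized
    objective gradient plus zeta times the negative, normalized
    constraint gradient, with zeta in [0, 1)."""
    return (-grad_f / np.linalg.norm(grad_f)
            - zeta * grad_g / np.linalg.norm(grad_g))

# Illustrative evaluation at one point for f(x) = 0.5*|x|^2 (grad f = x)
# and a linear constraint g(x) = -x_2 + 10 (grad g = (0, -1)).
x = np.array([3.0, 12.0])
print(s_zeta(grad_f=x, grad_g=np.array([0.0, -1.0]), zeta=0.9))
\end{verbatim}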
We analyze the global behavior of the method. Among other results, we propose an error measure $\epsilon$ for the optimization solution, and show that the time in which the present system finds a local solution is $\mathcal{O}(1/\epsilon)$, which is the ergodic convergence rate for first-order methods. We prove global and local convergence of the method. The present system results in a trajectory which, under the so-called \textit{relative convex condition}, converges to the central path of interior-point methods as $\zeta \rightarrow 1^-$.
Given the well-known logarithmic barrier function
\begin{equation}
\Phi(x) = - \sum_{i = 1}^{m} \log (-g_i(x)),
\label{eq:log_barrier}
\end{equation}
where $\log(\cdot)$ denotes the natural logarithm,
we propose a generalization of the system (\ref{eq:modified_search_direction}) to the problem with multiple constraints, with $\zeta\in [0,1)$, as
\begin{equation}
\frac{d x}{d t}=\left\{
\begin{split}
&-\frac{\nabla f}{|\nabla f|}
-\zeta\frac{\nabla \Phi}{|\nabla \Phi|}, ~~~~~&\text{if}~~\nabla \Phi \neq 0;\\
& -\nabla f , &\text{if}~~\nabla \Phi = 0.
\end{split}\right.
\label{eq: vector_s_multiple}
\end{equation}
Compared to our previous work \cite{Chen}, we simplify the framework for computing the search direction, so that the implementation effort and computational cost are reduced significantly. We report computational experiences with various test examples that include both common benchmark problems and applications in shape optimization. Finally, we provide a large-scale real-world shape optimization example, which, to the best of our knowledge, is unlike any other case previously presented in the literature.
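To indicate how small the implementation effort for the simplified search direction is, a minimal sketch of the right-hand side of (\ref{eq: vector_s_multiple}) could read as follows. This is our own illustration, not a reference implementation; the caller is assumed to supply the constraint values and gradients at a strictly feasible point, and the example data are placeholders.
\begin{verbatim}
import numpy as np

def barrier_gradient(grad_gs, g_vals):
    """Gradient of Phi(x) = -sum_i log(-g_i(x)) in the strict interior
    (every g_i(x) < 0):  grad Phi = -sum_i grad g_i / g_i."""
    return -sum(gg / gv for gg, gv in zip(grad_gs, g_vals))

def s_zeta_multi(grad_f, grad_gs, g_vals, zeta):
    """Right-hand side of the multi-constraint system; falls back to the
    plain negative objective gradient when grad Phi vanishes."""
    gphi = barrier_gradient(grad_gs, g_vals)
    nphi = np.linalg.norm(gphi)
    if nphi == 0.0:
        return -grad_f
    return -grad_f / np.linalg.norm(grad_f) - zeta * gphi / nphi

# Illustrative call: f(x) = 0.5*|x|^2 (grad f = x) with two linear
# constraints g_1 = x_1 - 1 and g_2 = x_2 - 1 at a strictly feasible x.
x = np.array([0.2, 0.5])
d = s_zeta_multi(grad_f=x,
                 grad_gs=[np.array([1.0, 0.0]), np.array([0.0, 1.0])],
                 g_vals=[x[0] - 1.0, x[1] - 1.0], zeta=0.9)
print(d)
\end{verbatim}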
\subsection{Organization of paper}
In section \ref{sec:related}, we first review related works with a focus on theoretical aspects. In section \ref{sec:derivation}, we derive the method for single inequality constrained problems. Preliminary studies are then presented in section \ref{sec:preliminary}. Our main results are presented in sections \ref{sec: global_behavior} to \ref{sec:multiple_constraint}. We report numerical examples in section \ref{sec:experiments}, and conclude in section \ref{sec:conclusion}.
\section{Related works}
\label{sec:related}
We review related works to place the present method in the context of different well-established perspectives. Here, the emphasis is on theoretical connections and differences, as the main focus of this paper is to provide a theoretical foundation for the present method. We also briefly review shape optimization, as it has motivated the development of the method in this work.
\subsection{Gradient descent method}
The gradient descent method, originally proposed by Cauchy, is a first-order method for unconstrained optimization that uses the negative gradient of the objective function for the design update. Written as a dynamical system, it reads
$$ \frac{dx}{dt} = -\nabla f.$$
The direction of gradient descent is the steepest descent direction in the Euclidean norm. A first-order Taylor expansion of the objective function at the current iterate $x$ reads
\begin{equation*}
f(x + d) \approx f(x) + \nabla f(x) d.
\end{equation*}
The steepest descent direction $d$ is found by the optimization problem
\begin{equation*}
\begin{split}
&\textnormal{minimize}~~~ \nabla f(x) d, \\
&\textnormal{subject to}~~~ | d| = 1.
\end{split}
\end{equation*}
From the Cauchy-Schwarz inequality, we obtain
\begin{equation}
d = - \frac{\nabla f}{|\nabla f|},
\label{eq:steepest_descent}
\end{equation}
which is the negative and normalized objective gradient.
Comparing (\ref{eq:steepest_descent}) with (\ref{eq:modified_search_direction}), which linearly combines the negative, normalized objective and constraint gradients, we regard the present system as a gradient descent akin method for inequality constrained optimization problems. We especially note that the present method results in trajectories that are homotopic with the gradient descent trajectory (see Remark \ref{remark1}).
\subsection{Dynamical systems approaches}
Dynamical systems approaches have been used to study optimization methods in many works in the literature. Extensive studies on the connections between interior-point flows and linear programming methods can be found in \cite[Chapter~4]{helmke2012optimization} and the references therein. For quadratic programming problems, \cite{dorr2012smooth} proposes a dynamical system, which results in trajectories that converge to the saddle point of the associated Lagrangian function. \cite{su2014differential} studies Nesterov's celebrated accelerated gradient method using a dynamical system as the analysis tool. In \cite{lessard2016analysis}, a framework based on dynamical systems is proposed to analyze and design first-order unconstrained optimization methods. In this work, we study the behavior of the present method using the dynamical systems approach. The ODE interpretation of the method allows us to give a rigorous analysis of its global and local convergence.
In some of the literature, optimization methods that use dynamical systems are called trajectory methods. These methods construct optimization paths in such a way that one or all solutions to the optimization problem are \textit{a priori} known to lie on these paths \cite{diener1995trajectory}. Typically, these optimization paths are solution trajectories of first- or second-order ODEs. Trajectory methods are mainly studied for unconstrained optimization, both for finding local solutions \cite{behrman1998efficient}\cite{botsaris1978differential} and global solutions \cite{griewank1981generalized}\cite{snyman1987multi}. Studies for constrained optimization are, however, very limited; see \cite{ali2018trajectory} and the references therein. In this work, we propose a new dynamical system for inequality constrained optimization problems. The resulting trajectories find local solutions through the limiting behavior as $\zeta \rightarrow 1^-$.
\subsection{Interior-point methods}
Interior-point methods (IPMs), which are based on the Newton method, are among the most competitive methods for constrained optimization problems. The signature of IPMs is the existence of continuously parameterized families of approximate solutions that converge to the exact solution asymptotically \cite{Forsgren}. IPMs find a wide variety of applications in convex and nonconvex optimization across broad fields. A vast amount of excellent work has been devoted to IPMs. A comprehensive review of this class of methods is certainly beyond the scope of the present paper; we refer the interested reader to \cite{Forsgren}\cite{Potra}, more recently \cite{gondzio2012interior}, and many other excellent optimization books.
The present method results in trajectories that asymptotically converge to the central path of a particular IPM, the logarithmic barrier method \cite{fiacco1990nonlinear}. We describe its connections to and differences from the present method in the following. Considering a convex optimization problem of the form (\ref{eq:Optimization_Problem}), we start with its unconstrained approximation using the logarithmic barrier function $\Phi$,
\begin{equation}
\textnormal{minimize} ~~~~ f(x) + \eta \Phi(x),
\label{eq:logarithmic_barrier}
\end{equation}
where $\eta > 0$ is the barrier parameter. As $\eta \rightarrow 0$, the solution of the approximated problem converges to that of the original one. The central path is characterized by the set of points that satisfy the necessary and sufficient conditions \cite{boyd}:
\begin{equation}
0 = \nabla f(x^*) + \eta \nabla \Phi(x^*), ~~x^* \in \Omega_-,
\label{eq:barrier_central_point}
\end{equation}
where $\Omega_-=\{x: g_i(x) < 0, ~ i = 1,...,m\}$. The conditions (\ref{eq:barrier_central_point}) are interpreted as a modified KKT system in the literature \cite{boyd}\cite{byrd2000trust}\cite{Forsgren}. The barrier method finds an approximate solution of the original problem by 1) iteratively decreasing the barrier parameter $\eta$, and 2) in each iteration, solving the subproblem (the modified KKT system) defined by (\ref{eq:barrier_central_point}) using the Newton method.
Therefore, the barrier parameter $\eta$ can be considered a central path parameter.
In the present method, we do not parameterize the central path. Instead, we normalize the objective and constraint gradients. For optimization problems with a single inequality constraint, we propose the \textit{normalized central path condition} as
\begin{equation}
\frac{\nabla f}{|\nabla f|} + \frac{\nabla g}{|\nabla g|} = 0.
\label{eq:normalized_central_path_condition_single_constraint}
\end{equation}
The condition (\ref{eq:normalized_central_path_condition_single_constraint}) is used in section \ref{sec: global_behavior} and \ref{sec:local_analysis} for the analysis. A generalization of the \textit{normalized central path condition} from single inequality to multiple inequalities is introduced by the use of the logarithmic barrier function (see section \ref{sec:multiple_constraint}) as
\begin{equation}
\frac{\nabla f}{|\nabla f|} + \frac{\nabla \Phi}{|\nabla \Phi|} = 0.
\label{eq:normalized_central_path_condition}
\end{equation}
Compared with the barrier method, the path parameter $\eta$ has vanished. The condition (\ref{eq:normalized_central_path_condition}) characterizes the central path as a whole, whereas (\ref{eq:barrier_central_point}) characterizes a single point on the central path.
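As a small aside (our own sketch, not part of the original formulation), the deviation from (\ref{eq:normalized_central_path_condition}) can be monitored numerically through the angle $\theta$ between $\nabla f$ and $\nabla \Phi$, using the identity $\left|\frac{\nabla f}{|\nabla f|}+\frac{\nabla \Phi}{|\nabla \Phi|}\right|^2=2(1+\cos\theta)$; the residual vanishes exactly when the two normalized gradients are antiparallel.
\begin{verbatim}
import numpy as np

def centrality_gap(grad_f, grad_phi):
    """Residual of the normalized central path condition.  Its squared
    value equals 2*(1 + cos(theta)), so it vanishes exactly when the
    normalized objective and barrier gradients are antiparallel."""
    u = grad_f / np.linalg.norm(grad_f)
    v = grad_phi / np.linalg.norm(grad_phi)
    return np.linalg.norm(u + v)

# Illustrative check of the identity at a random point.
rng = np.random.default_rng(1)
gf, gphi = rng.normal(size=3), rng.normal(size=3)
cos_theta = gf @ gphi / (np.linalg.norm(gf) * np.linalg.norm(gphi))
assert np.isclose(centrality_gap(gf, gphi) ** 2, 2.0 * (1.0 + cos_theta))
\end{verbatim}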
To illustrate the behavior of the present method in a very rough way, we rewrite the first ODE of system (\ref{eq: vector_s_multiple}) as
\begin{equation}
\frac{d x}{d t} =-\frac{\nabla f}{|\nabla f|}(1- \zeta)
+ \left( -\frac{\nabla f}{|\nabla f|} - \frac{\nabla \Phi}{|\nabla \Phi|}\right) \zeta, ~~ \zeta \in [0,1).
\label{eq:homotopy}
\end{equation}
While $-\frac{\nabla f}{|\nabla f|} $ is the steepest descent direction of the objective function, the term $\left( -\frac{\nabla f}{|\nabla f|} - \frac{\nabla \Phi}{|\nabla \Phi|}\right)$ contributes to the centering behavior (see Theorem \ref{theorem3}). There are three major differences compared to the barrier method:
\begin{itemize}
\item[1)] While $\eta$ is a path parameter that controls the asymptotic convergence progress for the barrier method, $\zeta$ is a homotopy parameter that determines the shape of the optimization trajectory in the present method;
\item[2)] There is no subproblem (modified KKT system) defined and solved in the present method. Instead, we solve directly for the trajectory of the proposed dynamical system. The optimization solutions are known \textit{a priori} to lie on the resulting trajectory;
\item[3)] The local and global convergence behaviors of the present method are markedly different from those of the barrier method.
\end{itemize}
The asymptotic convergence of the barrier method along the central path has been extensively studied in {\cite{fiacco1990nonlinear}}. For general inequality-constrained problems (\ref{eq:Optimization_Problem}) under the mild assumptions (A1)-(A4), IPMs usually employ a merit-function-based line search or a trust-region framework to force global convergence \cite{byrd2000trust}\cite{vanderbei1999interior}\cite{wachter2006implementation}. To provide a global and local convergence theory for the present method, we prove the following:
\begin{itemize}
\item[-] The resulting trajectory converges to a critical point of the objective function as $\zeta \in [0,1)$ (see Theorem \ref{theorem2});
\item[-] The trajectory finds a KKT solution upon reaching the boundary of the feasible set $\Omega$ as $\zeta \rightarrow 1^-$, provided that there is no critical point of the objective function in $\Omega$ (see Theorems \ref{theorem1} and \ref{theorem5}, and Remark \ref{remark_fo});
\item[-] As $\zeta \rightarrow 1^-$, the second-order optimality conditions are automatically satisfied at KKT solutions (see section \ref{sec:local_analysis});
\item[-] The trajectory is able to switch between central paths in order to maintain a descent direction of the objective function, without using an additional framework (see Theorem \ref{theorem8} and Lemma \ref{lemma2}).
\end{itemize}
From a more abstract point of view, the difference between the present method and the barrier method may be analogous to the difference between the gradient descent method and Newton-based methods for unconstrained minimization. While the former seek critical points of the objective function, or KKT solutions that satisfy the respective second-order optimality conditions, using only negative gradient information, the latter seek the respective solutions using the Newton method.
\subsection{Feasible direction methods}
The method of feasible directions (MFD) dates back to the 1960s with the work of Zoutendijk \cite{zoutendijk1960methods} and has enjoyed fruitful developments for decades. MFDs have been especially popular in the engineering community because of the importance of ending up with a design that satisfies the hard specifications expressed by a set of inequalities \cite{chen2000methods}. The general idea behind MFD is to move iteratively from one feasible design to an improved feasible design, so that a local solution can be found \cite{Arora}. In the present method, maintaining feasibility is not a mandatory mechanism. The idea behind the method design is to find a search direction that approaches the central path while maintaining a descent direction of the objective function. Due to this major difference, we do not categorize our method as a method of feasible directions.
In the present method, we compute a search direction that uses normalized gradients. In \cite{stander1993new}, the authors also present a feasible direction method that applies normalized gradients. Their work is then continued and further developed in \cite{de1994feasible} and \cite{stander1995robustness}. In these works, the active-set strategy is used. The common idea is to formulate a linear system under given input criteria on a chosen working set of (active) constraints, and a feasible descent search direction is obtained by solving the linear system. In the present method, we use a barrier function based formulation to treat multiple inequality constraints. The search direction is computed as a linear combination of the negative and normalized objective and barrier gradient.
\subsection{Shape optimization}
As a subset of structural optimization, shape optimization is characterized by a very large or even infinite number of design variables that describe the varying boundary in the optimization process. Introductions to shape optimization are given in \cite{haslinger2003introduction}\cite{sokolowski1992introduction}. Shape optimization is distinct from another well-known problem in structural engineering: topology optimization \cite{bendsoe2013topology} (sometimes referred to as the homogenization method \cite{allaire2012shape}). The main difference is that the topology optimization method removes smoothness and topological constraints in shape optimization, which results in different optimization formulations. Many topology optimization problems can be formulated as an (equivalent) convex optimization problem, while shape optimization problems are typically nonlinear and
often nonconvex \cite{hoppe2007adaptive}. This difference partially contributes to the fact that there are successful implementations of IPM for large-scale topology optimization problems \cite{jarre1998optimal}\cite{kocvara2016primal}\cite{maar2000interior}, but only a few works have presented a shape optimization that uses an IPM as the optimizer \cite{antil2007path}\cite{herskovits2000shape}. In the latter works, the size of the shape optimization problem is only moderate so that the power of IPM is not fully exploited. One of the most successful methods for nonlinear topology optimization is the method of moving asymptotes (MMA) that was introduced by Svanberg in 1987 \cite{svanberg1987method}. In each iteration, MMA generates and solves an approximated convex problem related to the original one. For shape optimization, however, there is as yet no literature that discusses a large-scale problem using MMA.
A notable difficulty for shape optimization is the computation of shape Hessians, which are complex objects even for moderate problems. Analysis of aerodynamic optimization in \cite{arian1999analysis} shows that shape Hessians are ill-conditioned for three-dimensional problems. Recently, several works compute approximated shape Hessians and use a Newton-based method for the design optimization \cite{schillings2011efficient}\cite{schmidt2013three}. For some disciplines, such as computational fluid dynamics or transient coupled problems, even the computation of shape gradient can be a challenge. See for example \cite{albring2016efficient}\cite{korelc2009automation}\cite{reuther1999constrained}, which are actively undergoing investigation.
In general, large-scale shape optimization has so far mainly been performed using gradient descent type methods \cite{schulz2015}. In engineering practice, a large number of constraints may be considered. The lack of literature in this regard has motivated our development of MSDM in \cite{Chen}. In the present work, we further simplify the computational framework of MSDM to a gradient descent akin method. We provide a mathematical basis for the method's optimization behavior. As a result, the implementation effort and computational cost are reduced significantly. This opens up shape optimization to a wider range of applications.
\section{Deriving the search direction for single inequality constrained optimizations}
\label{sec:derivation}
In this section, we derive the system (\ref{eq:modified_search_direction}) consistently from our previous work \cite{Chen}, considering a single inequality constraint. First, we review the basic ideas of the modified search direction method and then show the derivation.
In MSDM, at each iteration, we construct a sensitivity matrix
\begin{equation}
\mathbf{m} = \begin{pmatrix} \frac{\nabla f}{|\nabla f|} \\ \frac{\nabla g}{|\nabla g|} \end{pmatrix}.
\label{eq: sensitivity_matrix}
\end{equation}
The changes in the objective function, $df$, and in the constraint function, $dg$, resulting from an arbitrary design change $dx \in \mathbb{R}^n$ read
\begin{equation}
\begin{split}
df = \nabla f dx, \\
dg = \nabla g dx.
\end{split}
\end{equation}
A perspective from an input-output system established by the sensitivity matrix $\mathbf{m}$ gives
\begin{equation}
\begin{pmatrix} \frac{df}{|\nabla f|} \\\frac{dg}{|\nabla g|} \end{pmatrix} = \mathbf{m} dx.
\label{eq:input_output_smatrix}
\end{equation}
Applying singular-value decomposition to the sensitivity matrix $\mathbf{m}$,
\begin{equation}
\mathbf{m} = \mathbf{U} \mathbf{\Sigma} \mathbf{V}^T = \sum_{i = 1}^{\min(2,n)} \mathbf{\sigma}_{i} \mathbf{u}_i \mathbf{v}_i^T.
\label{eq: svd_m}
\end{equation}
Thus, an orthonormal basis set $\mathbf{v}_i, ~i = 1,2$ is obtained. Each $\mathbf{v}_i$ can be used as a base search direction for the design update, and $\mathbf{v}_1$ and $\mathbf{v}_2$ are defined as follows:
\begin{itemize}
\item[-] $\mathbf{v}_1$: by taking $\delta \mathbf{v}_{1}$ as the design change, we obtain a change in objective as well as in constraint function $[\frac{df}{|\nabla f|}, \frac{dg}{|\nabla g|}]^T = \sigma_{1} \delta \mathbf{u}_{1} $, which is a \textbf{decrease} in the objective function and an \textbf{increase} in the constraint function.
\item[-] $\mathbf{v}_2$: by taking $\delta \mathbf{v}_{2}$ as the design change, we obtain a change in the objective as well as in the constraint function $[\frac{df}{|\nabla f|}, \frac{dg}{|\nabla g|}]^T = \sigma_{2} \delta \mathbf{u}_{2} $, which is a \textbf{decrease} in the objective function and a \textbf{decrease} in the constraint function.
\end{itemize}
It is worth mentioning that the design vector $\delta \mathbf{v}_1$ provides a similar result to the filter approach presented in \cite{fletcher2002nonlinear}, which considers the so-called bi-objective optimization problem with the two goals of minimizing the objective function $f$ and the constraint violation $|g|$ (with the difference that $g$ takes different signs in the two cases).
With $\mathbf{v}_1$ and $\mathbf{v}_2$, we can then rewrite the normalized steepest descent direction $-\frac{\nabla f}{|\nabla f|}$ as
\begin{equation}
-\frac{\nabla f}{|\nabla f|} = \cos \alpha_1 \mathbf{v}^T_{1} + \cos \alpha_2 \mathbf{v}^T_{2},
\label{eq:rewrite_steepest_descent}
\end{equation}
where $\alpha_1$ is the angle between $\mathbf{v}^T_{1}$ and $-\frac{\nabla f}{|\nabla f|}$, and $\alpha_2$ is the angle between $\mathbf{v}^T_{2}$ and $-\frac{\nabla f}{|\nabla f|}$. The modified search direction proposed in \cite{Chen} reads
\begin{equation}
\mathbf{s}_c = \cos \alpha_1 \mathbf{v}^T_{1} + c \cdot \cos \alpha_2 \mathbf{v}^T_{2},
\label{eq:modified_search_direction_svd}
\end{equation}
where $c \geq 1$ is introduced to enlarge the contribution of the design mode $\mathbf{v}_2$.
In the following, we show the derivation of (\ref{eq:modified_search_direction}) from (\ref{eq:modified_search_direction_svd}). According to SVD and the definition of $\mathbf{v}_1$ and $\mathbf{v}_2$, we have
\begin{equation}
\begin{split}
\mathbf{v}^T_1&=\frac{1}{\sqrt{2-2\cos\theta}}\left( -\frac{\nabla f}{|\nabla f|}+\frac{\nabla g}{|\nabla g|}\right),
\\
\mathbf{v}^T_2&=\frac{1}{\sqrt{2+2\cos\theta}} \left(-\frac{\nabla f}{|\nabla f|}-\frac{\nabla g}{|\nabla g|}\right),
\end{split}
\label{eq:base_vectors}
\end{equation}
where $\theta$ is the angle between the objective function gradient $\nabla f$ and the constraint function gradient $\nabla g$. With $\theta$ we also have
\begin{equation}
\begin{split}
\cos \alpha_1=-\left\langle\frac{\nabla f}{|\nabla f|},\mathbf{v}^T_1\right\rangle=\frac{\sqrt{1-\cos\theta}}{\sqrt{2}},\\ \cos \alpha_2=-\left\langle\frac{\nabla f}{|\nabla f|},\mathbf{v}^T_2\right\rangle=\frac{\sqrt{1+\cos\theta}}{\sqrt{2}}.
\end{split}
\label{eq:cos_alphas}
\end{equation}
Inserting (\ref{eq:base_vectors}) and (\ref{eq:cos_alphas}) into (\ref{eq:modified_search_direction_svd}), we have
\begin{equation}
\mathbf{s}_c= -\frac{\nabla f}{|\nabla f|}-\frac{(c-1)}{2} \left(\frac{\nabla f}{|\nabla f|}+\frac{\nabla g}{|\nabla g|}\right).
\end{equation}
As we are mainly interested in the direction of the vector field $\mathbf{s}_c$, we can rewrite it as
\begin{equation}
\mathbf{s}_\zeta =-\frac{\nabla f}{|\nabla f|}
-\zeta\frac{\nabla g}{|\nabla g|},
\label{vector_s}
\end{equation}
where $\zeta = \frac{c-1}{c+1}$. With $c \in [1,+\infty)$ we have $\zeta \in [0, 1)$, and thus we obtain the present dynamical system (\ref{eq:modified_search_direction}).
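The equivalence of the two formulations is easy to verify numerically. The sketch below is our own check (the random stand-ins for the gradients and the value of $c$ are illustrative choices); it forms $\mathbf{s}_c$ from (\ref{eq:modified_search_direction_svd}) using (\ref{eq:base_vectors}) and (\ref{eq:cos_alphas}), and confirms that $\mathbf{s}_c$ equals the positive multiple $\frac{c+1}{2}\,\mathbf{s}_\zeta$ of the simplified direction.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, c = 5, 7.0                      # illustrative dimension and parameter c >= 1
gf, gg = rng.normal(size=n), rng.normal(size=n)  # stand-ins for grad f, grad g
nf, ng = gf / np.linalg.norm(gf), gg / np.linalg.norm(gg)

cos_t = nf @ ng
v1 = (-nf + ng) / np.sqrt(2 - 2 * cos_t)  # base vectors of the SVD derivation
v2 = (-nf - ng) / np.sqrt(2 + 2 * cos_t)
cos_a1 = np.sqrt((1 - cos_t) / 2)         # -<nf, v1>
cos_a2 = np.sqrt((1 + cos_t) / 2)         # -<nf, v2>

s_c = cos_a1 * v1 + c * cos_a2 * v2       # modified search direction of MSDM
zeta = (c - 1) / (c + 1)
s_zeta = -nf - zeta * ng                  # simplified direction

# s_c is a positive multiple of s_zeta: s_c = ((c + 1) / 2) * s_zeta
assert np.allclose(s_c, (c + 1) / 2 * s_zeta)
\end{verbatim}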
\section{Preliminary studies and intuition}
\label{sec:preliminary}
In this section, we demonstrate the present method on two simple examples and use them to build intuition for, and conjecture, the behavior of the resulting optimization trajectory.
\subsection{An analytical 2D optimization example}
We first show a 2D optimization problem and solve it analytically. The optimization problem reads,
\begin{equation}
\begin{split}
&\textnormal{minimize}~~~ f(x_1,x_2)=\frac{1}{2}(x_1^2 + x_2^2), \\
&\textnormal{subject to}~~ g(x_1,x_2)= - x_2 + 10 \leq 0. \\
\end{split}
\label{eq:example_proof}
\end{equation}
\begin{figure}
\caption{Optimization trajectories for the 2D linearly constrained optimization problem (\ref{eq:example_proof}).}
\label{fig:2d_linear_constraint_analysis}
\end{figure}
The present search direction field $\mathbf{s}_\zeta$ reads
\begin{equation}
\mathbf{s}_\zeta = \frac{-1}{\sqrt{x_1^2 + x_2^2}}\left\lbrace x_1, x_2 - \zeta \sqrt{x_1^2 + x_2^2} \right\rbrace.
\end{equation}
We define an initial design as $(x_1^0, x_2^0)$. Let $\bar{x}_2 = \frac{1}{2} \left(x_2^0 + \sqrt{(x_1^0)^2 + (x_2^0)^2}\right)$, then, the trajectory $\Gamma^\zeta$ of the present search direction field $\mathbf{s}_\zeta$ is
\begin{equation}
x_2 + \sqrt{x_1^2 + x_2^2} = 2 \bar{x}_2 \left|\frac{x_1}{x_1^0}\right|^{1- \zeta}.
\end{equation}
Let $(x_{1,\zeta}, x_{2,\zeta})$ be a point on the trajectory $\Gamma^\zeta$ with a maximal $x_2$ component, then we have
\begin{equation}
(x_{2,\zeta})^\zeta=\frac{2}{1+\zeta}\frac{\bar{x}_2}{|x_1^0|^{1-\zeta}}
\left(\frac{\sqrt{1-\zeta^2}}{\zeta}\right)^{1-\zeta},
\end{equation}
and
\begin{equation}
|x_{1,\zeta}|=\frac{\bar{x}_2}{\zeta}\sqrt{1-\zeta^2}.
\end{equation}
Let $\zeta \rightarrow 1^{-}$; then $x_{1,\zeta} \rightarrow 0$, $x_{2,\zeta} \rightarrow \bar{x}_2$, and the trajectory $\Gamma^\zeta$ converges to the curve
$\Gamma$ that is the union of the parabola
\begin{equation}
x_1^2 = 4 \bar{x}_2^2 - 4 x_2 \bar{x}_2, ~~~~ x_1\in (0, x_1^0) \mbox{ (or } (x_1^0,0)\mbox{)},
\end{equation}
and the interval $(0,\bar{x}_2)$ on the $x_2$-axis, as shown in figure \ref{fig:2d_linear_constraint_analysis}.
As $\zeta \rightarrow 1^{-} $, a part of the trajectory $\Gamma^\zeta$ converges to the central path. For $\zeta \in [0,1)$, the resulting trajectories are homotopic relative to their endpoints, which are the initial design and the critical point of the objective function.
By $\bar{x}_2 = \frac{1}{2} \left(x_2^0 + \sqrt{(x_1^0)^2 + (x_2^0)^2}\right)$, we have $\bar{x}_2 \geq x_2^0$. This means that for any feasible initial design, as $\zeta \rightarrow 1^-$, the resulting optimization trajectory always first reaches a close neighborhood of the central path (at the point $(0, \bar{x}_2)$), where it is at a larger distance to the boundary of the feasible set than the initial design. It then follows the central path and reaches the optimal solution.
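The analytic expression for $\Gamma^\zeta$ can also be checked numerically. The sketch below is our own illustration; the initial design, the value of $\zeta$, the step size and the number of steps are assumptions of the sketch. It integrates the system with an explicit Euler scheme and monitors the deviation from the trajectory formula above, which should remain at the level of the discretization error.
\begin{verbatim}
import numpy as np

# Illustrative check of the analytic trajectory formula; the initial design,
# zeta, the step size and the number of steps are assumptions of this sketch.
zeta, dt, steps = 0.9, 1e-4, 50_000
x = np.array([4.0, 12.0])                 # feasible initial design (x2 >= 10)
x1_0 = x[0]
xbar2 = 0.5 * (x[1] + np.linalg.norm(x))  # \bar{x}_2 of the text

def s_zeta(x):
    r = np.linalg.norm(x)                 # grad f = (x1, x2); grad g = (0, -1)
    return np.array([-x[0] / r, -x[1] / r + zeta])

deviation = 0.0
for _ in range(steps):
    x = x + dt * s_zeta(x)                # explicit Euler step
    lhs = x[1] + np.linalg.norm(x)
    rhs = 2.0 * xbar2 * abs(x[0] / x1_0) ** (1.0 - zeta)
    deviation = max(deviation, abs(lhs - rhs) / rhs)

print(f"max relative deviation from the analytic trajectory: {deviation:.2e}")
\end{verbatim}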
\subsection{A nonconvex optimization example} We show a second example that includes a nonconvex constraint,
\begin{equation}
\begin{split}
&\textnormal{minimize}~~~ f(x_1,x_2)= (x_1 - 2)^2 + (x_2 -2)^2, \\
&\textnormal{subject to}~~ g(x_1,x_2)= -\frac{1}{10}(x_1 - 3)^2 - x_2 + 3 \leq 0. \\
\end{split}
\label{eq:2d_quadrati_obj_nonconvex_constraint}
\end{equation}
In figure \ref{fig:NGopt_non_convex_contours_new}, we plot the optimization trajectory with $\zeta = 0.9999$ together with a few contours of both the objective function and the constraint function. We mark three points A, B, and C on the central path, where the plotted contours of the objective and constraint function are tangent to each other. We choose an initial design $\mathbf{x}^0$ that is close to point A. It can be observed that, instead of heading to the left side of the central path, the optimization trajectory finds its way to the right side. It then follows the central path, but leaves it at point C and reaches the other central path. It eventually finds the optimal solution by following this central path. Along the optimization trajectory, the objective function value decreases steadily. The question is now: why does the optimization trajectory choose one side (the point B side) of the same central path over the other (the point A side)? The answer may lie in the difference in the curvatures of the contours of the objective and constraint function between points A and B. We observe the following fact:
Let $\kappa_f$ and $\kappa_g$ be the curvatures of the contours of the objective and constraint functions on the central path, respectively, both with respect to the normal vector $\frac{\nabla f}{|\nabla f|}$; then
\begin{itemize}
\item[-] at point A: $\kappa_f - \kappa_g > 0$;
\item[-] at point B: $\kappa_f - \kappa_g < 0$.
\end{itemize}
\begin{figure}
\caption{A study on the behavior of the optimization trajectory for problem (\ref{eq:2d_quadrati_obj_nonconvex_constraint}).}
\label{fig:NGopt_non_convex_contours_new}
\end{figure}
Based on this observation, we conjecture the following behavior of the optimization trajectory: as $\zeta \rightarrow 1^-$, the optimization trajectory is able to approach and follow a central path on which the central points satisfy the condition
\begin{equation}
\kappa_f - \kappa_g < 0.
\label{eq: relative_convex}
\end{equation}
We call condition (\ref{eq: relative_convex}) \textit{the relative convex condition}. This behavior can also be used to explain why the optimization trajectory leaves the central path at point C, where $\kappa_f = \kappa_g$, and heads towards another central path. In section \ref{sec:local_analysis}, we give a local convergence analysis starting with this conjecture.
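The behavior described in this section can also be reproduced numerically. The following sketch is our own illustration; the value of $\zeta$, the initial design, the step size and the step budget are assumptions of the sketch, chosen to keep the computation cheap. It integrates the system for problem (\ref{eq:2d_quadrati_obj_nonconvex_constraint}) with an explicit Euler scheme until the trajectory first leaves the feasible set, and reports the crossing point, the largest single-step increase of the objective (expected to be of the order of the discretization error) and $\cos\theta$ at the crossing point (expected to be close to $-1$).
\begin{verbatim}
import numpy as np

# Illustrative Euler integration for the nonconvex example; zeta, the initial
# design, the step size and the step budget are assumptions of this sketch.
zeta, dt, max_steps = 0.99, 0.01, 300_000

def grad_f(x):            # f(x) = (x1 - 2)^2 + (x2 - 2)^2
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 2.0)])

def grad_g(x):            # g(x) = -0.1*(x1 - 3)^2 - x2 + 3
    return np.array([-0.2 * (x[0] - 3.0), -1.0])

def g(x):
    return -0.1 * (x[0] - 3.0) ** 2 - x[1] + 3.0

def f(x):
    return (x[0] - 2.0) ** 2 + (x[1] - 2.0) ** 2

x = np.array([0.5, 4.0])                  # feasible initial design
max_increase, f_prev = 0.0, f(x)
for _ in range(max_steps):
    df, dg = grad_f(x), grad_g(x)
    x = x + dt * (-df / np.linalg.norm(df) - zeta * dg / np.linalg.norm(dg))
    max_increase = max(max_increase, f(x) - f_prev)
    f_prev = f(x)
    if g(x) >= 0.0:                       # first boundary crossing
        break

df, dg = grad_f(x), grad_g(x)
cos_theta = df @ dg / (np.linalg.norm(df) * np.linalg.norm(dg))
print("boundary reached:", g(x) >= 0.0)
print("crossing point:", x, " cos(theta) there:", cos_theta)
print("largest single-step increase of f:", max_increase)
\end{verbatim}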
\section{Global behavior and convergence under coercive condition}
\label{sec: global_behavior}
We analyze the global behavior of the present dynamical system (\ref{eq:modified_search_direction}) under assumptions (A1)$-$(A4). For this purpose, we partly use the geometric analysis in \cite{jost2011riemannian}.
Consider a $\mu-$neighborhood of a central path with $\mu \in [0,1]$,
$$\Theta_\mu=\{x: g(x) \leq 0, \cos\theta<-\mu\}.$$
Obviously, $\Theta_\mu$ shrinks to the central path as $\mu \rightarrow 1^-.$
Recall the present search direction field $\textbf{s}_\zeta$,
\begin{equation*}
\textbf{s}_\zeta =-\frac{\nabla f}{|\nabla f|}
-\zeta\frac{\nabla g}{|\nabla g|}.
\label{vector-s}
\end{equation*}
Let $x(t; \zeta,x_0)$ be the solution of the system:
\begin{equation}\label{system}
\left\{
\begin{split}
&\frac{dx}{dt}= \textbf{s}_\zeta(x),\\
& x|_{t=0} = x_0.
\end{split}\right.
\end{equation}
with $x_0$ as the initial design in the feasible set and $T_{\zeta,x_0}$ as the maximal existence interval of $x(t; \zeta,x_0)$ in the whole $\mathbb{R}^n$. We show some properties of the trajectory $x(t; \zeta,x_0)$.
\vskip 2mm
\begin{mylemma}
Let $f$ and $g$ satisfy assumptions (A1), (A3), and (A4). Then
(i) The trajectory $x(t; \zeta,x_0)$ always stays within a bounded domain, i.e.,
$$x(t; \zeta,x_0)\in \Omega_{f(x_0)}=\{x: f(x) \leq f(x_0)\}.$$
(ii) The constraint function $g$ decreases along the trajectory $x(t; \zeta,x_0)$ outside the cone neighborhood $\Theta_{\zeta}$, and increases inside $\Theta_{\zeta}$.
\label{lemma1}
\end{mylemma}
\begin{proof}
~~The rates of change of the objective function $f$ and the constraint function $g$ along the trajectory $x(t; \zeta,x_0)$ are:
\begin{equation}\label{deform}
\left\{
\begin{split}&
\frac{d }{dt}f(x(t; \zeta,x_0))= -|\nabla f|(1+\zeta \cos \theta),
\\ &
\frac{d }{dt}g(x(t; \zeta,x_0))= -|\nabla g|(\zeta +\cos \theta).
\end{split}
\right.
\end{equation}
Both claims follow directly from (\ref{deform}).
\end{proof}
\begin{mytheorem}
Suppose that assumptions (A1) - (A4) hold. Then
(i) the trajectory $x(t; \zeta,x_0)$ must leave the feasible set for any $\zeta \in [0,1)$;
(ii) the time at which the trajectory first reaches the boundary of the feasible set is at most $\frac{C}{1-\zeta}$, with $C$ independent of $\zeta$.
\label{theorem1}
\end{mytheorem}
\begin{proof}
\textcolor[rgb]{0,0,0}{~~We prove (i) by contradiction. Suppose that $x(t; \zeta,x_0)$ stays in the feasible set $\Omega $ for all $t<T_{\zeta, x_0}$; then the vector field $\textbf{s}_{\zeta}$ remains $C^1$ in a neighborhood of the trajectory $x(t; \zeta,x_0)|_{[0,T_{\zeta, x_0})}$. Lemma \ref{lemma1} ensures that }
\begin{equation}
|\nabla f(x(t; \zeta,x_0))|\geq A, ~~|f(x(t; \zeta,x_0))|\leq B, ~~~ \forall t<T_{\zeta, x_0},
\label{lowerbound}
\end{equation}
with some positive numbers $A$ and $B$. On the other hand, Picard's existence theorem implies that $T_{\zeta, x_0}=+\infty.$
The integral of the first formula of (\ref{deform}) shows
$$
\int_0^\infty|\nabla f(x(t; \zeta,x_0))|(1+\zeta \cos \theta)dt<\infty.
$$
This is to say
\begin{equation}
\int_0^\infty|\nabla f(x(t; \zeta,x_0))|dt<\infty.
\label{finit integral}
\end{equation}
Notice that
\begin{equation*}
\begin{split}
&\frac{d}{dt}|\nabla f(x(t; \zeta,x_0))|=\nabla |\nabla f(x(t; \zeta,x_0))|\cdot \frac{dx}{dt}\\
& \hskip 3mm=\nabla |\nabla f(x(t; \zeta,x_0))|\cdot \textbf{s}_\zeta(x)\\
& \hskip 3mm=\frac{-1}{|\nabla f|}\sum\limits_{k=1,j=1}^{n}\frac{\partial f}{\partial x_k } \frac{\partial^2 f}{\partial x_k \partial x_j} \left(\frac{1}{|\nabla f|}\frac{\partial f}{\partial x_j }+\frac{\zeta}{|\nabla g|}\frac{\partial g}{\partial x_j }\right).
\end{split}
\end{equation*}
\textcolor[rgb]{0,0,0}{
So
\begin{equation*}
\left|\frac{d}{dt}|\nabla f(x(t; \zeta,x_0))|\right|\leq 2 \sqrt{\sum\limits_{k=1,j=1}^{n} \left|\frac{\partial^2 f}{\partial x_k \partial x_j} \right|^2}.
\end{equation*}}
By assumption (A4) and Lemma \ref{lemma1}, there is a constant $l$ so that
\begin{equation*}
\left|\frac{d}{dt}|\nabla f(x(t; \zeta,x_0))|\right|\leq l, ~\forall t.
\end{equation*}
Hence, we have a Lipschitz continuity for $|\nabla f(x(t; \zeta,x_0))|$:
\begin{equation}
\left||\nabla f(x(t^\prime; \zeta,x_0))|-|\nabla f(x(t^{\prime\prime}; \zeta,x_0))|\right| \leq l |t^\prime-t^{\prime\prime}|,~~~~~ \forall t^\prime,t^{\prime\prime}.
\label{lipschitz}
\end{equation}
Now, we claim
\begin{equation}
\lim_{t\rightarrow +\infty}|\nabla f(x(t;\zeta,x_0))|=0.
\label{claim}
\end{equation}
Otherwise, there is a sequence of $t_j\rightarrow +\infty$ and a positive constant $b$ so that
\begin{equation*}
|\nabla f(x(t_j;\zeta,x_0))|\geq b>0.
\end{equation*}
Choosing $\delta=\frac{b}{2l}$, then
\begin{equation*}
\begin{split}
&|\nabla f(x(t; \zeta,x_0))|\geq |\nabla f(x(t_j; \zeta,x_0))|\\
&\hskip 3mm -\left||\nabla f(x(t_j; \zeta,x_0))|-|\nabla f(x(t; \zeta,x_0))|\right| \\
& \hskip 3mm \geq |\nabla f(x(t_j; \zeta,x_0))|-l|t_j-t|\geq b-\delta l=\frac{b}{2}
\end{split}
\end{equation*}
for any $|t_j-t|\leq \delta.$ Therefore,
\begin{equation*}
\begin{split}
&\int_0^\infty|\nabla f(x(t; \zeta,x_0))|dt\geq \sum\limits_{j=1}^{\infty} \int_{t_j-\delta}^{t_j+\delta}|\nabla f(x(t; \zeta,x_0))|dt\\
&\hskip 3mm \geq \sum\limits_{j=1}^{\infty} \int_{t_j-\delta}^{t_j+\delta}\frac{b}{2}dt=\sum\limits_{j=1}^{\infty} \delta b=+\infty.
\end{split}
\end{equation*}
\textcolor[rgb]{0,0,0}{ This is a contradiction to (\ref{finit integral}). Hence
\begin{equation*}
\lim_{t\rightarrow +\infty}|\nabla f(x(t;\zeta,x_0))|=0.
\end{equation*}
Notice that this convergence contradicts (\ref{lowerbound}).
This completes the proof for (i).}
\vskip 2mm
To prove (ii), let $T_{\zeta, x_0}^\sharp$ be the time at which the trajectory first reaches the boundary of the feasible set; then (\ref{lowerbound}) holds for $0<t<T_{\zeta, x_0}^\sharp$. So
$$
\int_0^{T_{\zeta, x_0}^\sharp}|\nabla f|(1+\zeta \cos \theta)dt=f(x_0)-f(x(T_{\zeta, x_0}^\sharp; \zeta,x_0))\leq 2B.
$$
Hence
$$
T_{\zeta, x_0}^\sharp A(1-\zeta)\leq 2B,
$$
which implies
$$
T_{\zeta, x_0}^\sharp\leq \frac{2B}{A(1-\zeta)},
$$
so that $T_{\zeta, x_0}^\sharp \leq \frac{C}{1-\zeta}$ holds. This ends the proof.
\end{proof}
\begin{mytheorem}
Suppose that assumptions (A1), (A3), and (A4) hold and $\nabla g\neq 0$ in the whole $\mathbb{R}^n$. The trajectory $x(t; \zeta,x_0)$ converges to a connected subset of critical points of the objective function for $\zeta \in [0,1)$. In particular, if the critical points of the objective function are isolated, then
\begin{equation}\label{convergencetocritical}
\lim_{t\rightarrow T_{\zeta,x_0}^-}x(t; \zeta,x_0)=x_c,~~~~~~~~~~ \forall \zeta \in [0,1).
\end{equation}
\label{theorem2}
\end{mytheorem}
\begin{proof}
~~First notice that the system (\ref{system}) is not well defined when $\nabla f=0$. The trajectory $x(t; \zeta,x_0)$ will terminate at these points
upon finding them. Therefore, the maximal interval $T_{\zeta,x_0}$ may be finite. To overcome this difficulty, we use an equivalent Y$-$system:
\begin{equation}
\left\{
\begin{split}
&\frac{dy}{d\tau}=|\nabla f| \textbf{s}_\zeta(x),\\
& y|_{\tau=0} = x_0.
\end{split}
\right.
\label{Y-system}
\end{equation}
The system (\ref{Y-system}) has the same orbit as (\ref{system}) but different parameterization. It has the solution $y(\tau; \zeta,x_0)$ with infinite existence interval in the whole $\mathbb{R}^n$ by Picard's existence theorem.
For $ \zeta \in [0,1) $, consider an integral along $y(\tau; \zeta, x_0)$, \begin{equation*}
f(y(T; \zeta,x_0))= f(x_0)-\int_0^{T}|\nabla f|^2(1 +\zeta\cos \theta)(y(\tau; \zeta,x_0))d\tau.
\end{equation*}
Lemma \ref{lemma1} ensures
\begin{equation*}
\int_0^{T}|\nabla f|^2(1 +\zeta\cos \theta)(y(\tau; \zeta,x_0))d\tau= f(x_0)-f(y(T; \zeta,x_0))\leq M \end{equation*} with some positive number $M$ independent of $T$. Hence
\begin{equation*}
\int_0^{\infty}|\nabla f|^2(1 +\zeta\cos \theta)(y(\tau; \zeta,x_0))d\tau\leq M.
\end{equation*}
\textcolor[rgb]{0,0,0} {With $1+\zeta\cos \theta\geq 1-\zeta$, a similar method to the proof for Theorem \ref{theorem1}(i) shows that}
\begin{equation*}
\lim_{\tau\rightarrow +\infty}|\nabla f|^2=0.
\end{equation*}
This proves that $y(\tau; \zeta, x_0)$ approaches a connected subset of the critical points. The isolatedness assumption ensures that
\begin{equation*}
\lim_{\tau\rightarrow +\infty}y(\tau; \zeta, x_0)=x_c,
\end{equation*}
for some critical point $x_c$ of the objective function.
Notice that along the trajectory,
\begin{equation*}
t=\int_0^{\tau}|\nabla f|d \tau,
\end{equation*}
hence
\begin{equation*}
\lim_{t\rightarrow T_{\zeta,x_0}^-}x(t; \zeta,x_0)=x_c,~~~~~~~~~~ \forall \zeta \in [0,1).
\end{equation*}
\end{proof}
\begin{myremark}
The global behavior of the system (\ref{system}) resembles a gradient flow. Obviously, for $\zeta = 0$, the system reduces to the negative, normalized gradient flow of the objective function. In particular, the present trajectory is homotopic with the gradient descent trajectory for $\zeta \in (0,1)$, by Theorem \ref{theorem2} and the continuous dependence of solutions of differential equations on parameters.
\label{remark1}
\end{myremark}
\begin{myremark}
Without condition (A3), the analytical trajectory may run into a critical point of the constraint $g$. In practical implementations, we suggest using the gradient descent direction $-\nabla f$ to escape such critical points.
\end{myremark}
\vskip 2mm
\begin{mylemma}
Under assumptions (A1)$-$(A4), the trajectory $x(t; \zeta=1, x_0)$ eventually enters, as $t\rightarrow+\infty$, any $\mu$-neighborhood $\Theta_\mu$ of the central path $L$ with $\mu \in [0,1)$, provided that the initial design $x_0 \in \Omega$.
\label{lemma2}
\end{mylemma}
\begin{proof}
~~Along the trajectory $x(t; 1,x_0)$, we differentiate the constraint function $g$ with respect to $t$ and obtain
\begin{equation}\label{zeta1}
\frac{d }{dt}g(x(t; 1,x_0))= -|\nabla g|(1 +\cos \theta)\leq 0.
\end{equation}
The vector field $\textbf{s}_1$ remains $C^1$ in a neighborhood of the trajectory $x(t; 1,x_0)$, and by Picard's existence theorem $T_{1,x_0}=+\infty$.
Integrating (\ref{zeta1}) gives
\begin{equation}
\int_0^{+\infty}|\nabla g|(1 +\cos \theta)dt = g(x(0; 1, x_0)) - \lim_{t\rightarrow+\infty} g(x(t; 1, x_0)) <+\infty.
\end{equation}
Based on Lemma \ref{lemma1} and assumption (A4), we know that the integrand $|\nabla g|(1 +\cos \theta)$ is continuously differentiable with respect to $t$. The same procedure as in the proof of Theorem \ref{theorem1} shows that
\begin{equation}
\lim_{t\rightarrow +\infty}|\nabla g|(1 +\cos \theta)=0,
\label{eq:continuous_argument}
\end{equation}
which, together with (A3), yields $(1 + \cos \theta) \rightarrow 0$ as $t \rightarrow + \infty$. In other words, the trajectory $x(t; 1,x_0)$ eventually enters any $\mu$-neighborhood $\Theta_\mu$ of the central path $L$, $\mu \in [0,1)$.
\end{proof}
\begin{mytheorem}
Under assumptions (A1)$-$(A4),
the trajectory $x(t;1,x_0)$ generically converges to a point $\widehat{x}$ on the central path $L$, i.e.,
\begin{equation}
\lim_{t\rightarrow +\infty}x(t; 1,x_0)=\widehat{x}\in L,
\end{equation}
provided $x_0 \in \Omega$.
\label{theorem3}
\end{mytheorem}
\begin{proof}
~~According to (\ref{eq:continuous_argument}) we have
$$
\lim_{t\rightarrow +\infty}\cos\theta=-1.
$$
Choose a sequence $t_j\rightarrow +\infty$ as $j\rightarrow +\infty$ with
\begin{equation}
\lim_{j\rightarrow +\infty}x(t_j; 1,x_0)=\widehat{x},
\label{sequence}
\end{equation}
for some point $\widehat{x}$. Hence
$$
\cos\theta|_{\widehat{x}}=-1,
$$
and
\begin{equation}
\left\{
\begin{split}
& f(\widehat{x})= f(x_0)-\int_0^{+\infty}|\nabla f|(1 +\cos \theta)(x(t; 1,x_0))dt, \\
& g(\widehat{x})= g(x_0)-\int_0^{+\infty}|\nabla g|(1 +\cos \theta)(x(t; 1,x_0))dt.\\
\end{split}
\right.
\label{eq:property_central_point}
\end{equation}
This says that $\widehat{x}\in L$, and that the values $f(\widehat{x}), g(\widehat{x})$ depend only on $x_0$. In general, the point with these properties is unique\footnote{If the central path and the contour of the constraint function $g$ intersect transversally, then the point $\widehat{x}$ is unique. By Sard's theorem, transversal intersection occurs with probability one.}. Therefore
\begin{equation}
\lim_{t\rightarrow +\infty}x(t; 1,x_0)=\widehat{x}\in L.
\label{eq:converge_to_central_point}
\end{equation}
\end{proof}
\begin{mytheorem}
Under assumptions (A1)$-$(A4), the trajectory $x(t; \zeta,x_0)$ converges, as $\zeta \rightarrow 1^-$, to the trajectory $x(t; 1, x_0)$ uniformly for $t\in (0, \alpha\log \frac{1}{1-\zeta})$, for some $\alpha>0$.
\label{theorem4}
\end{mytheorem}
\begin{proof}
~~By system (\ref{system}), we have the integral equation:
\begin{equation*}
x(T;\zeta,x_0)=x_0+\int_0^T \textbf{s}_\zeta(x(t;\zeta,x_0))dt.
\end{equation*}
So
\begin{equation}
\begin{split}
& x(T;1,x_0)-x(T;\zeta,x_0)= \int_0^T \left[\textbf{s}_1(x(t;1,x_0))- \textbf{s}_\zeta(x(t;\zeta,x_0))\right]dt \\
&\hskip 15mm = -\int_0^T \left[\frac{\nabla f}{|\nabla f|}(x(t;1,x_0))-\frac{\nabla f}{|\nabla f|}(x(t;\zeta,x_0))\right]dt\\
&\hskip 19mm-\zeta\int_0^T \left[\frac{\nabla g}{|\nabla g|}(x(t;1,x_0))-\frac{\nabla g}{|\nabla g|}(x(t;\zeta,x_0))\right]dt\\
&\hskip 19mm-(1-\zeta)\int_0^T \left[\frac{\nabla g}{|\nabla g|}(x(t;1,x_0))\right]dt.
\end{split}
\end{equation}
In the bounded domain $\{x\in \mathbb{R}^n: f(x)\leq f(x_0), g(x) \leq 0\}$, there is a constant $M$ such that
\begin{equation}
\begin{split}
&\left|\frac{\nabla f}{|\nabla f|}(x(t;1,x_0))- \frac{\nabla f}{|\nabla f|}(x(t;\zeta,x_0))\right|\leq M \left|x(t;1,x_0)- x(t;\zeta,x_0)\right|, \\
&\left|\frac{\nabla g}{|\nabla g|}(x(t;1,x_0))- \frac{\nabla g}{|\nabla g|}(x(t;\zeta,x_0))\right|\leq M \left|x(t;1,x_0)- x(t;\zeta,x_0)\right|.
\end{split}
\end{equation}
Set
$$\psi_\zeta(t)=\left|x(t;1,x_0)- x(t;\zeta,x_0)\right|,~~ \Psi_\zeta(T)=\int_0^T\psi_\zeta(t)dt.$$
Then
\begin{equation}
\psi_\zeta(T)\leq 2M\int_0^T\psi_\zeta(t)dt+(1-\zeta)T.
\end{equation}
Hence
\begin{equation}
\Psi_\zeta^\prime(t)\leq 2M\Psi_\zeta(t)+(1-\zeta)t.
\end{equation}
Or
\begin{equation}
\left(e^{-2Mt}\Psi_\zeta(t)\right)^\prime\leq (1-\zeta)te^{-2Mt}.
\end{equation}
Integrating this inequality gives
\begin{equation}
\Psi_\zeta(T)\leq (1-\zeta)e^{2MT}\int_0^T te^{-2Mt}dt.
\end{equation}
It follows that for $T\leq -\alpha\log (1-\zeta),$ we have
\begin{equation}
\begin{split}
&\psi_\zeta(T)\leq 2M(1-\zeta)\int_0^T te^{2M(T-t)}dt+(1-\zeta)T\\
&\hskip 10mm \leq (1-\zeta)Te^{2MT}\leq -\alpha(1-\zeta)^{1-2M\alpha}\log (1-\zeta).
\end{split}
\end{equation}
Therefore, for $t\in (0,-\alpha \log (1-\zeta))$
\begin{equation}
\left|x(t;1,x_0)- x(t;\zeta,x_0)\right|\leq -\alpha(1-\zeta)^{1-2M\alpha}\log (1-\zeta).
\end{equation}
A choice of $\alpha>0$ with $1-2M\alpha>0$ completes the proof of the theorem.
\end{proof}
\begin{myremark}
Theorem \ref{theorem4} indicates that the time in which the trajectory needs to arrive at the boundary of the feasible set must be larger than $\alpha \log \frac{1}{1-\zeta}$.
\end{myremark}
\begin{mytheorem}
Let $x_\zeta^\sharp$ be the first point where the trajectory reaches the boundary of the feasible set, then
\begin{equation}
x_\zeta^\sharp\in \{x: g(x) = 0, \cos\theta\leq -\zeta \}.
\label{eq:theorem_5}
\end{equation}
That is, the point $x_\zeta^\sharp$ belongs to the closure of the $\zeta$-neighborhood of the central path. In particular, the limit of $x_\zeta^\sharp$ as $\zeta\rightarrow 1^-$ lies at the intersection of the central path and the boundary of the feasible set.
\label{theorem5}
\end{mytheorem}
\begin{proof}
~~Let $x_\zeta^\sharp=x(t^\sharp;\zeta,x_0)$. By the choice of the point $x_\zeta^\sharp$,
\begin{equation*}
\left\{
\begin{split}
&g(x(t^\sharp;\zeta,x_0))=0,\\
& g(x(t;\zeta,x_0))<0,~~ t<t^\sharp.
\end{split}\right.
\end{equation*}
Hence
$$
\frac{d }{dt}g(x(t^\sharp; \zeta,x_0))\geq 0.
$$
By (\ref{deform}),
$$ |\nabla g|(\zeta +\cos \theta)\leq 0. $$
Therefore,
$$ \cos \theta\leq -\zeta. $$
This ends the proof.
\end{proof}
\begin{myremark}
According to Theorem \ref{theorem5}, obviously, we have $\cos \theta \rightarrow -1^+$ as $\zeta \rightarrow 1^-$. The normalized centrality condition (\ref{eq:normalized_central_path_condition_single_constraint})
$$\frac{\nabla f}{|\nabla f|} + \frac{\nabla g}{|\nabla g|} = 0 $$
is then satisfied in the limit. Given that $x_\zeta^\sharp$ is a point on the boundary of the feasible set, the first-order necessary conditions (KKT conditions) are therefore satisfied. This is straightforward, as the Lagrange multiplier $\lambda^\star$ associated with the Lagrangian function $\mathcal{L}(x,\lambda)$ of the considered problem is
$$ \lambda^\star = \frac{|\nabla f|}{|\nabla g|},$$
and
$$\textcolor[rgb]{0,0,0}{\nabla_x \mathcal{L}}(x_\zeta^\sharp, \lambda^\star) = \nabla f + \lambda^\star \nabla g = 0.$$
Under assumptions (A2) and (A3), we have $\lambda^\star > 0$. With $g(x_\zeta^\sharp) = 0$, strict complementarity holds. To find out whether the point $x_\zeta^\sharp$ is a local solution, one needs to check the second-order sufficient conditions. We discuss this in the next section.
\label{remark_fo}
\end{myremark}
\textcolor[rgb]{0,0,0}{
\begin{myremark}
Based on Theorem \ref{theorem5}, we propose an error measure $\epsilon > 0$ for the optimization solutions
\begin{equation}
\epsilon = 1-\zeta.
\end{equation}
According to (\ref{eq:theorem_5}), we then have $$x_\zeta^\sharp\in \{x: g(x) = 0, \cos\theta\leq -1 + \epsilon \}.$$
The KKT conditions are satisfied for $x_\zeta^\sharp$ when $\cos \theta = -1$, as mentioned in Remark \ref{remark_fo}. A small value $\epsilon>0$ seems to be a natural error measure for the present method. Additionally, $\epsilon$ is defined using the intrinsic parameter $\zeta$ that determines the shape of the optimization trajectory. With the error measure $\epsilon$ we can interpret Theorem \ref{theorem1}(ii) and Theorem \ref{theorem4} as follows. Theorem \ref{theorem1}(ii) implies that the time for the trajectory to find a first-order $\epsilon$-optimal solution is $\mathcal{O}(1/\epsilon)$, which is the ergodic rate of convergence for first-order methods; Theorem \ref{theorem4} implies that the trajectory converges to the trajectory $x(t;1, x_0)$ uniformly for $t\in (0, \alpha \log(1/\epsilon))$.
\label{remark_error}
\end{myremark}
}
\section{Local convergence analysis}
\label{sec:local_analysis}
Consider optimization problem (\ref{eq:Optimization_Problem}) with a single inequality constraint. For simplicity, let the origin be a point on the central path, and let $x_n$ lie in the direction of $\frac{\nabla f}{|\nabla f|}$, i.e.,
\begin{equation}
\nabla f(0,...,0) = \left[ 0,...,0,\frac{\partial f}{\partial x_n}\right] \ne \mathbf{0}.
\end{equation}
By the implicit function theorem, we have a function $x_n = \phi(x_1,...,x_{n-1})$, which satisfies
\begin{equation}
f(x_1, \cdots, x_{n-1}, \phi(x_1, \cdots, x_{n-1}))\equiv f(0, \cdots, 0, 0).
\end{equation}
Furthermore:
\begin{itemize}
\item[\textcircled{1}] At the origin,
\begin{equation}
\phi(0,...,0) = 0;
\label{eq:phi_condition_1}
\end{equation}
\item[\textcircled{2}] In a neighborhood of the origin,
\begin{equation}
\frac{\partial \phi}{\partial x_k} = - \frac{\partial f}{\partial x_k} / \frac{\partial f}{\partial x_n}.
\label{eq:phi_condition_2}
\end{equation}
Thus we have
\begin{equation}
\frac{\partial \phi(\mathbf{0})}{\partial x_k} = 0, ~~ 1 \leq k \leq n-1.
\end{equation}
\end{itemize}
\textcolor[rgb]{0,0,0}{The contour of the objective function through the origin is the graph of the implicit function
$x_n=\phi(x_1,...,x_{n-1})$. For $n=2$, the curvature formula gives
$$\kappa_f=\left.\frac{\phi^{\prime\prime}(x_1)}
{(1+(\phi^\prime(x_1))^2)^\frac{3}{2}}\right|_{x_1=0}=\phi^{\prime\prime}(0).
$$
}
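As a simple illustration (this example is ours and not part of the original derivation), take $n=2$ and $f(x_1,x_2)=x_2+x_1^2$, so that $\nabla f(0,0)=(0,1)$ as assumed above. Solving $f(x_1,\phi(x_1))\equiv f(0,0)$ gives $\phi(x_1)=-x_1^2$, hence $\phi(0)=0$, $\phi^\prime(0)=0$ and
$$\kappa_f=\phi^{\prime\prime}(0)=-2=-\left(\frac{\partial f}{\partial x_2}\right)^{-1}\frac{\partial^2 f}{\partial x_1^2}(\mathbf{0}),$$
consistent with the Hessian formula for $H_{\phi}(\mathbf{0})$ derived below.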
\textcolor[rgb]{0,0,0}{For $n>2$, we choose a direction $\omega=(\omega_1,\cdots, \omega_{n-1})$ with $|\omega|=1$.
\textcolor[rgb]{0,0,0}{Then, in the two dimensional subspace defined by $\mathbb{R}^2=\{\omega t,x_n\}= \{\omega_1 t,\cdots, \omega_{n-1} t, x_n\}$,} the function $x_n=\phi(\omega t)$ has curvature in direction $\omega$,
\begin{equation}\kappa_f^\omega=\left.\frac{d^2\phi(\omega t)}{dt^2}\right|_{t=0}
=\sum\limits_{k=1,j=1}^{n-1} \frac{\partial^2 \phi(\mathbf{0})}{\partial x_k \partial x_j} \omega_k \omega_j.
\label{directioncurvaturephi}
\end{equation}
}
A similar analysis of $\psi$, which is deduced from the implicit equation $g=\text{constant}$, gives:
\begin{equation}
\kappa_g^\omega=\left.\frac{d^2\psi(\omega t)}{dt^2}\right|_{t=0}
=\sum\limits_{k=1,j=1}^{n-1} \frac{\partial^2 \psi(\mathbf{0})}{\partial x_k \partial x_j} \omega_k \omega_j.
\label{directioncurvaturepsi}
\end{equation}\vskip 2mm
Now the conjectured \textit{relative convex condition} (\ref{eq: relative_convex}) in the two dimensional subspace $\mathbb{R}^2=\{\omega t,x_n\}$ would be $\kappa_f^\omega < \kappa_g^\omega$, i.e.,
\begin{equation}
\sum\limits_{k=1,j=1}^{n-1} \frac{\partial^2 \phi(\mathbf{0})}{\partial x_k \partial x_j} \omega_k \omega_j <
\sum\limits_{k=1,j=1}^{n-1} \frac{\partial^2 \psi(\mathbf{0})}{\partial x_k \partial x_j} \omega_k \omega_j
\label{Hessiancomp}
\end{equation}
for any direction $\omega$.
Let $H_{\phi} (\mathbf{0})$ and $H_{\psi} (\mathbf{0})$ be the Hessian matrices of $\phi$ and $\psi$ with respect to the variables $\tilde{x}=(x_1,...,x_{n-1})$, respectively. Then, in the sense that the difference is a positive definite matrix, we have
\begin{equation}\label{semi-positive definition}
H_{\phi} (\mathbf{0}) \prec H_{\psi} (\mathbf{0}).
\end{equation}
At $(x_1,...,x_{n-1}) = (0,...,0)$, we have
\begin{equation}
\begin{split}
\frac{\partial^2 \phi}{\partial x_k \partial x_j} &= \frac{\partial}{\partial x_k} \left(\frac{\partial \phi}{\partial x_j}\right)
= -\frac{\partial}{\partial x_k} \left(\frac{\partial f}{\partial x_j} / \frac{\partial f}{\partial x_n}\right) \\
&= - \left( \frac{\partial f}{\partial x_n} \right)^{-1} \frac{\partial^2 f}{\partial x_k \partial x_j}.
\end{split}
\end{equation}
Notice that
\begin{equation}
\nabla^2_{\tilde{x}} f = \left( \frac{\partial^2 f}{\partial x_k \partial x_j} \right)_{ 1 \leq k,j \leq n-1}.
\end{equation}
Thus, we have
\begin{equation}
H_{\phi}(\mathbf{0}) = - \left( \frac{\partial f}{\partial x_n} \right)^{-1} \nabla^2_{\tilde{x}} f(\mathbf{0}).
\end{equation}
Similarly, we have for $H_{\psi} (\mathbf{0})$
\begin{equation}
H_{\psi} (\mathbf{0}) = - \left( \frac{\partial g}{\partial x_n} \right)^{-1} \nabla^2_{\tilde{x}} g(\mathbf{0}).
\end{equation}
Due to the positive definiteness (\ref{semi-positive definition}), we have
\begin{equation}
- \left( \frac{\partial f}{\partial x_n} \right)^{-1} \nabla^2_{\tilde{x}} f(\mathbf{0}) \prec - \left( \frac{\partial g}{\partial x_n} \right)^{-1} \nabla^2_{\tilde{x}} g(\mathbf{0}).
\end{equation}
Recall at the origin we have
$$ \frac{\partial f}{\partial x_n} = | \nabla f|, ~~\frac{\partial g}{\partial x_n} = -| \nabla g|.$$
Therefore we have
\begin{equation}
\frac{1}{|\nabla f|} \nabla^2_{\tilde{x}} f(\mathbf{0}) + \frac{1}{|\nabla g|} \nabla^2_{\tilde{x}} g(\mathbf{0}) \succ 0.
\label{eq:relative_semi_convex}
\end{equation}
Set
\begin{equation}
\tilde{C} = \frac{1}{|\nabla f|} \nabla^2_{\tilde{x}} f(\mathbf{0}) + \frac{1}{|\nabla g|} \nabla^2_{\tilde{x}} g(\mathbf{0}).
\label{eq:relative_convex_matrix}
\end{equation}
\begin{mydefinition}
A point $x$ on the central path is called nondegenerate if $\tilde{C}(x)$ is invertible; it is \textit{relative convex} if $\tilde{C}(x)$ is a positive definite matrix.
\end{mydefinition}
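For readers who want a concrete computational check, the following short Python sketch (ours, not part of the original text; the toy functions and helper names are illustrative assumptions) assembles the matrix $\tilde{C}$ of (\ref{eq:relative_convex_matrix}) from the gradients and Hessians at a point where the last coordinate is aligned with $\frac{\nabla f}{|\nabla f|}$, and tests the relative convex condition through the eigenvalues.
\begin{verbatim}
import numpy as np

def relative_convexity_matrix(hess_f, hess_g, grad_f, grad_g):
    # C~ = (1/|grad f|) Hf~ + (1/|grad g|) Hg~, where Hf~, Hg~ are the
    # leading (n-1)x(n-1) sub-Hessians; the coordinates are assumed to be
    # rotated so that x_n points along grad f / |grad f| (as in the text).
    n = len(grad_f)
    C = (hess_f[:n-1, :n-1] / np.linalg.norm(grad_f)
         + hess_g[:n-1, :n-1] / np.linalg.norm(grad_g))
    return 0.5 * (C + C.T)          # symmetrize against round-off

def is_relative_convex(C):
    return bool(np.all(np.linalg.eigvalsh(C) > 0.0))

# Toy example: f = x2 + x1^2, g = x1^2 - x2; the origin lies on the central path.
grad_f = np.array([0.0, 1.0]);  hess_f = np.diag([2.0, 0.0])
grad_g = np.array([0.0, -1.0]); hess_g = np.diag([2.0, 0.0])
C = relative_convexity_matrix(hess_f, hess_g, grad_f, grad_g)
print(C, is_relative_convex(C))     # [[4.]] True
\end{verbatim}
In this toy example $\kappa_f^\omega = -2 < 2 = \kappa_g^\omega$, so the point is relative convex, in agreement with the output.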
\vskip 2mm
\begin{myremark}
Let a point $x^\star$ satisfy the KKT conditions. The relative convex condition $\tilde{C}(x^\star) \succ 0 $ is equivalent to the second-order sufficient conditions for constrained optimization. This is straightforward as the matrix
$$ \tilde{H} = |\nabla f(x^\star) | \tilde{C}(x^\star) = \nabla^2_{\tilde{x}} f(\mathbf{0}) + \frac{|\nabla f|}{|\nabla g|} \nabla^2_{\tilde{x}} g(\mathbf{0}) \succ 0$$
is equivalent to the \textit{projected Hessian} being positive definite \cite[p.~348]{Nocedal}, with a Lagrange multiplier $\lambda^\star$ satisfying the KKT conditions and strict complementarity holding,
$$ \lambda^\star = \frac{|\nabla f|}{|\nabla g|} > 0, ~g(x^\star) = 0.$$
Here, we deduce the relative convex condition from the perspective of the difference in the curvatures of the function contours in the feasible set. It is defined on the central path and may be seen as a perturbed version of the second-order sufficient conditions.
\label{remark_so}
\end{myremark}
\vskip 2mm
At the origin, the Jacobian matrix for $\frac{\nabla f}{|\nabla f|}$ reads
\begin{equation}
J_f^n = \left( \frac{\partial}{\partial x_j} \left( \frac{\partial_{x_i} f}{|\nabla f|} \right)\right)_{1\leq i,j\leq n}.
\end{equation}
For $ 1 \leq i \leq n-1$, $\partial_{x_i} f = 0$ at the origin, thus
\begin{equation}
\left( \frac{\partial}{\partial x_j} \left( \frac{\partial_{x_i} f}{|\nabla f|} \right)\right) = \frac{\partial_{x_i} \partial_{x_j} f}{|\nabla f|}.
\end{equation}
Recall also at the origin we have
\begin{equation}\left\{
\begin{split}
\frac{\nabla f}{|\nabla f|} &= (\frac{\partial_{x_1} f}{|\nabla f|},...,\frac{\partial_{x_{n-1}} f}{|\nabla f|},\frac{\partial_{x_n} f}{|\nabla f|}) = (0,...,0,1), \\
\frac{\nabla g}{|\nabla g|} &= (\frac{\partial_{x_1} g}{|\nabla g|},...,\frac{\partial_{x_{n-1}} g}{|\nabla g|},\frac{\partial_{x_n} g}{|\nabla g|}) = (0,...,0,-1).
\end{split}
\right.
\end{equation}
At the origin, $\frac{\partial_{x_n} f}{|\nabla f|} $ has a maximum value $1$, resulting in its derivatives being zero:
\begin{equation}
\frac{\partial}{\partial x_k} \left( \frac{\partial_{x_n} f}{|\nabla f|}\right) = 0, ~ 1\leq k \leq n.
\end{equation}
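For completeness (this short verification is ours), differentiating directly gives
$$\frac{\partial}{\partial x_k}\left(\frac{\partial_{x_n} f}{|\nabla f|}\right) = \frac{\partial_{x_k}\partial_{x_n} f}{|\nabla f|} - \frac{\partial_{x_n} f\,\sum_{m=1}^{n}\partial_{x_m} f\;\partial_{x_k}\partial_{x_m} f}{|\nabla f|^{3}},$$
and at the origin, where $\partial_{x_m} f=0$ for $m<n$ and $\partial_{x_n} f=|\nabla f|$, the two terms cancel.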
Thus, the Jacobian matrix $J_f^n $ for $\frac{\nabla f}{|\nabla f|}$ at the origin reads
\begin{equation}
J_f^n =
\begin{bmatrix}
\begin{array}{ccccc|c}
& & & & & \frac{\partial}{\partial x_n} \left( \frac{\partial_{x_1} f}{|\nabla f|}\right) \\
& & J_f^{n-1} & & & \vdots \\
& & & & & \frac{\partial}{\partial x_n} \left( \frac{\partial_{x_{n-1}} f}{|\nabla f|}\right) \\
\hline
0 & & \cdots & & 0 & 0
\end{array}
\end{bmatrix}
\end{equation}
where $ J_f^{n-1}$ is an $(n-1) \times (n-1)$ matrix:
\begin{equation}
J_f^{n-1} = \frac{1}{|\nabla f|} \left( \partial_{x_i} \partial_{x_j} f\right)_{ 1\leq i,j \leq n-1}.
\end{equation}
Similarly, we have the Jacobian matrix $J_g^n$ for $\frac{\nabla g}{|\nabla g|}$ at the origin that reads
\begin{equation}
J_g^n=
\begin{bmatrix}
\begin{array}{ccccc|c}
& & & & & \frac{\partial}{\partial x_n} \left( \frac{\partial_{x_1} g}{|\nabla g|}\right) \\
& & J_g^{n-1} & & & \vdots \\
& & & & & \frac{\partial}{\partial x_n} \left( \frac{\partial_{x_{n-1}} g}{|\nabla g|}\right) \\
\hline
0 & & \cdots & & 0 & 0
\end{array}
\end{bmatrix}
\end{equation}
where $ J_g^{n-1}$ is also an $(n-1) \times (n-1)$ matrix:
\begin{equation}
J_g^{n-1} = \frac{1}{|\nabla g|} \left( \partial_{x_i} \partial_{x_j} g\right)_{1\leq i,j \leq n-1}.
\end{equation}
So we obtain the matrix $\tilde{C}=J_f^{n-1} + J_g^{n-1}$ once again.
By choosing a new coordinate system, say $\tilde{x}= (x_1, ...,x_{n-1})$ again, whose basis vectors are the eigenvectors $\mathbf{v}$ of $\tilde{C}$, the matrix $\tilde{C}$ transforms to a diagonal matrix $\tilde{C}_\lambda$ in the new coordinate system:
\begin{equation}
\tilde{C}_\lambda =
\begin{bmatrix}
\lambda_1 & & 0\\
& \ddots & \\
0 & & \lambda_{n-1}\\
\end{bmatrix}
\end{equation}
where $\lambda_1, ..., \lambda_{n-1}$ are the eigenvalues of the symmetric matrix $\tilde{C}$. Hence
\begin{equation}
J_f^n + J_g^n =
\begin{bmatrix}
\begin{array}{ccccc|c}
\lambda_1 & & & & & \mu_1 \\
& & \ddots & & & \vdots \\
& & & & \lambda_{n-1} & \mu_{n-1} \\
\hline
0 & & \cdots & & 0 & 0
\end{array}
\end{bmatrix}
\end{equation}
where $\mu_i = \frac{\partial}{\partial x_n} \left( \frac{\partial_{x_i} f}{|\nabla f|}\right) + \frac{\partial}{\partial x_n} \left( \frac{\partial_{x_i} g}{|\nabla g|}\right)$.
Assume that at the origin, the tangent of the central path $\mathcal{L}: \frac{\nabla f}{|\nabla f|} + \frac{\nabla g}{|\nabla g|} = 0$ writes as
\begin{equation}
x_i = l_i x_n, ~ 1 \leq i \leq n-1.
\label{eq:cp_tangent}
\end{equation}
The directional derivative of $\frac{\nabla f}{|\nabla f|} + \frac{\nabla g}{|\nabla g|}$ along the tangent of the central path $\mathcal{L}$ is zero; hence
\begin{equation}
\lambda_i l_i + \mu_i = 0, ~ 1 \leq i \leq n-1.
\end{equation}
We denote at origin:
\begin{equation}\left\{
\begin{split}&a_{ij} = \frac{\partial_{x_i} \partial_{x_j} g}{|\nabla g|}, ~1 \leq i,j \leq n-1,\\
&b_i = \partial_{x_n} \left( \frac{\partial_{x_i} g}{|\nabla g|}\right).
\end{split}
\right.
\end{equation}
The Taylor formula of the negative search direction $\frac{\nabla f}{|\nabla f|} + \zeta \frac{\nabla g}{|\nabla g|}$ reads
\begin{equation}
\begin{split}
\textcolor[rgb]{0,0,0}{\left(
\frac{\nabla f}{|\nabla f|} + \zeta \frac{\nabla g}{|\nabla g|}\right)^T} =
&\begin{bmatrix}
\begin{array}{ccccc|c}
\lambda_1 & & & & & -\lambda_1 l_1 \\
& & \ddots & & & \vdots \\
& & & & \lambda_{n-1} & -\lambda_{n-1} l_{n-1} \\
\hline
0 & & \cdots & & 0 & 0
\end{array}
\end{bmatrix}
\begin{bmatrix}
x_1\\
\vdots\\
x_{n-1}\\
x_n
\end{bmatrix}\\
+ (\zeta -1) &\begin{bmatrix}
\begin{array}{ccc|c}
& & & b_1 \\
& (a_{ij})_{(n-1)\times(n-1)} & & \vdots \\
& & & b_{n-1} \\
\hline
0 & \cdots & 0 & 0
\end{array}
\end{bmatrix}
\begin{bmatrix}
x_1\\
\vdots\\
x_{n-1}\\
x_n
\end{bmatrix}\\
+ &\begin{bmatrix}
0\\
\vdots\\
0\\
1-\zeta
\end{bmatrix} + o(\rho).
\end{split}
\label{Taylor}
\end{equation}
Summarizing the above analysis, we have
\vskip 2mm
\begin{mylemma}
If a point on the central path $L$ is nondegenerate, then the system (\ref{system})
may locally be considered as a perturbation of the linear equations:
\begin{equation}
\left\{
\begin{split}
&\frac{dx_i}{dt} = -\lambda_i x_i + \lambda_i l_i x_n + (1- \zeta) \left(\sum\limits_{j = 1}^{n-1}a_{ij} x_j + b_i x_n\right),~ 1 \leq i \leq n-1 \\
&\frac{dx_n}{dt} = -(1 - \zeta).
\end{split}
\right.
\label{linearized}
\end{equation}
\end{mylemma}
In the following, we use the matrix representations
\begin{equation}\left\{
\begin{split}
&\Lambda_\zeta = \left( \lambda_i \delta_{ij} -(1-\zeta) a_{ij} \right)_{(n-1) \times (n-1)}, \\
&B_\zeta = \left(\lambda_i l_i + (1-\zeta) b_i \right)_{(n-1)\times 1}, \\
\end{split}\right.
\end{equation}
where $\delta_{ij}$ is the Kronecker-delta.
\vskip 2mm
\begin{mylemma}
Given an initial point $(x_1^0, x_2^0,\cdots,x_n^0), ~x_n^0 >0$, the linearized equation system (\ref{linearized}) has the solution
\begin{equation}
\left\{
\begin{split}
&\tilde{x} = C_0 e^{-\Lambda_\zeta t} + \Lambda_\zeta^{-1} B_\zeta x_n(t) + (1-\zeta) \Lambda_\zeta^{-2} B_\zeta, \\
&x_n(t) = x_n^0 - (1-\zeta)t
\end{split}
\right.
\label{limitation}
\end{equation}
with
$$C_0=\tilde{x}_0-\Lambda_\zeta^{-1} B_\zeta x_n^0 -(1-\zeta) \Lambda_\zeta^{-2} B_\zeta.$$
\end{mylemma}
\vskip 2mm
\begin{proof}
~~Given an initial point $(x_1^0, x_2^0,\cdots,x_n^0), ~x_n^0 >0$, we have
$$
x_n(t) = x_n^0 - (1-\zeta)t.
$$
Therefore, for $1\leq i \leq n-1$ we have
\begin{equation}
\frac{dx_i}{dt} = -\lambda_i x_i + (1-\zeta) \sum\limits_{j=1}^{n-1} a_{ij} x_j + [\lambda_i l_i + (1-\zeta) b_i] [x_n^0 -(1-\zeta)t].
\label{eq:ODE_index_form}
\end{equation}
We write the ODE (\ref{eq:ODE_index_form}) in matrix form,
\begin{equation}
\frac{d\tilde{x}}{dt} = - \Lambda_\zeta \tilde{x} + (x_n^0 -(1-\zeta) t) B_\zeta.
\label{eq:ode_matrix}
\end{equation}
Equation (\ref{eq:ode_matrix}) has the solution
\begin{equation}
\tilde{x} = C_0 e^{-\Lambda_\zeta t} + \Lambda_\zeta^{-1} B_\zeta x_n(t) + (1-\zeta) \Lambda_\zeta^{-2} B_\zeta,
\label{eq:ode_solution_hd}
\end{equation}
where $C_0$ is a constant vector depending on the initial point $(x_1^0,\cdots,x_n^0)$; evaluating (\ref{eq:ode_solution_hd}) at $t=0$ gives the expression for $C_0$ stated in the lemma.
\end{proof}
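As a sanity check of the closed-form solution (\ref{eq:ode_solution_hd}) (this numerical sketch is ours and not part of the original analysis; the random data and matrix size are arbitrary), one can integrate (\ref{eq:ode_matrix}) with a standard ODE solver and compare against the formula:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

rng = np.random.default_rng(0)
m, zeta, xn0, T = 4, 0.9, 1.0, 2.0          # m = n-1 reduced variables
lam = rng.uniform(0.5, 2.0, m)              # lambda_i > 0 (relative convex case)
a = rng.standard_normal((m, m)); a = 0.5 * (a + a.T)
b, l = rng.standard_normal(m), rng.standard_normal(m)
Lam = np.diag(lam) - (1 - zeta) * a         # Lambda_zeta
B = lam * l + (1 - zeta) * b                # B_zeta
x0 = rng.standard_normal(m)

def rhs(t, x):                              # matrix form of the linearized ODE
    return -Lam @ x + (xn0 - (1 - zeta) * t) * B

def closed_form(t):
    LinvB, Linv2B = np.linalg.solve(Lam, B), np.linalg.solve(Lam @ Lam, B)
    C0 = x0 - LinvB * xn0 - (1 - zeta) * Linv2B
    return expm(-Lam * t) @ C0 + LinvB * (xn0 - (1 - zeta) * t) + (1 - zeta) * Linv2B

num = solve_ivp(rhs, (0.0, T), x0, rtol=1e-10, atol=1e-12).y[:, -1]
print(np.max(np.abs(num - closed_form(T))))  # should be near solver tolerance
\end{verbatim}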
\begin{mytheorem}
If $\tilde{C}_\lambda$ is positive definite, then there is a positive $\zeta_0<1$ such that the solution of the system (\ref{linearized}) has the asymptotic line
\begin{equation}
\tilde{x} = \Lambda_\zeta^{-1} B_\zeta x_n + (1-\zeta) \Lambda_\zeta^{-2} B_\zeta,
\label{infty}
\end{equation}
as $t\rightarrow +\infty$
for any $\zeta_0<\zeta<1$.
\end{mytheorem}
\vskip 2mm
\begin{proof}
~~The distance of a point on the trajectory (\ref{eq:ode_solution_hd}) to the asymptotic line (\ref{infty}) is at most $|C_0 e^{-\Lambda_\zeta t} |$. Since $\tilde{C}_\lambda$ is positive definite, one can choose $\zeta_0<1$ such that $\Lambda_\zeta$ is a positive definite matrix for all $\zeta_0<\zeta<1$; for such $\zeta$ the distance goes to zero as
$t\rightarrow +\infty$. This proves the theorem.
\end{proof}
To observe the behavior of the linearized equation (\ref{linearized}) when $\zeta\rightarrow 1^-$, we eliminate the variable $t$ with $t = \frac{x_n^0-x_n}{1-\zeta}$, and rewrite the solution (\ref{eq:ode_solution_hd}) as
\begin{equation}
\tilde{x} = C_0 e^{-\Lambda_\zeta \frac{(x_n^0-x_n)}{(1-\zeta)}} + \Lambda_\zeta^{-1} B_\zeta x_n + (1-\zeta) \Lambda_\zeta^{-2} B_\zeta.
\label{non-t}
\end{equation}
Notice that $x_n^\prime(t)<0$, and we have $x_n(t) = x_n^0 - (1-\zeta)t<x_n^0.$
\vskip 2mm
\begin{mytheorem}
Let $\zeta\rightarrow 1^-$ and let the point on the central path be relative convex. Then the solution of system (\ref{linearized}) converges to the tangent of the central path $\mathcal{L}$ defined in (\ref{eq:cp_tangent}).
\end{mytheorem}
\vskip 2mm
\begin{proof}
~~Following the observation described above, $x_n<x_n^0.$ Let $\zeta \rightarrow 1^-$; then $ (x_n^0-x_n)/(1-\zeta) \rightarrow +\infty$. By the relative convex condition, we have $\lambda_i > 0$. For $\zeta$ sufficiently close to $1^-$, the eigenvalues of $\Lambda_\zeta$ exceed $\frac{1}{2} \min_i (\lambda_i) > 0$; in particular, $\Lambda_1 \succ 0$, where we denote $\Lambda_{\zeta = 1} = \Lambda_1$.
Therefore,
\begin{equation*}
e^{-\Lambda_\zeta \frac{(x_n^0-x_n)}{(1-\zeta)}} \rightarrow 0,~~~ \mbox{if}~ \zeta \rightarrow 1^-.
\end{equation*}
The solution (\ref{non-t}) converges to
\begin{equation}
\tilde{x}= \Lambda_1^{-1} B_1 x_n + (1-1) \Lambda_1^{-2} B_1 = \left( l_i x_n \right)_{(n-1) \times 1},
\end{equation}
which is the tangent of the central path $\mathcal{L}$ as is given in (\ref{eq:cp_tangent}).
\end{proof}
Similar method gives:
\vskip 2mm
\begin{mytheorem}
Suppose a point $x$ on the central path is nondegenerate and $\tilde{C}_\lambda(x)$ has at least one negative eigenvalue. Let $\zeta\rightarrow 1^-$, then the solution of system (\ref{linearized}) will leave a neighborhood of the central path.
\label{theorem8}
\end{mytheorem}
\vskip 2mm
\begin{myremark}
The present trajectory converges to a central path as $\zeta \rightarrow 1^-$ under the relative convex condition. In Remark \ref{remark_fo}, we state that the first-order necessary conditions can be obtained as the trajectory reaches the boundary of the feasible set as $\zeta \rightarrow 1^-$. In Remark \ref{remark_so}, we state that the second-order sufficient conditions are equivalent to the relative convex condition at the boundary of the feasible set. Combining both statements we conclude: by choosing $\zeta \rightarrow 1^-$, the present trajectory approaches a local solution by traversing along a central path.
\end{myremark}
\section{The method for multiple constraints: a formulation based on the logarithmic barrier function}
\label{sec:multiple_constraint}
Recall the logarithmic barrier function for the problem (\ref{eq:Optimization_Problem}),
$$\Phi(x) = - \sum_{i = 1}^{m} \log (-g_i(x)).$$
We first notice that the barrier function grows without bound if $g_i(x) \rightarrow 0^-$ and it is not differentiable at the boundary of the feasible set. To overcome this difficulty, we consider a subset of the feasible set,
\begin{equation}
\Omega_M=\{x: \Phi(x) \leq M \},
\end{equation}
where $M$ is a sufficiently large positive number so that $\Omega_M$ approximates the original feasible set $\Omega$.
The barrier function is twice continuously differentiable in $\Omega_M$ and thus fulfills the assumption (A4).
Set $$ G(x) = \Phi(x) - M, ~ x \in \Omega_M. $$
The original problem (\ref{eq:Optimization_Problem}) can be approximately reformulated as
\begin{equation}
\begin{split}
&\textnormal{minimize}~~~ f(x), \\
&\textnormal{subject to}~~ G(x) \leq 0, \\
\end{split}
\label{eq:Log_barrier_approximation}
\end{equation}
so that the present system (\ref{eq:modified_search_direction}) may be applied.
Notice that the global behavior shown in section \ref{sec: global_behavior} is based on the assumption that $\nabla g \neq 0$ in the feasible set $\Omega$. In general, this may not always be the case for the function $G(x)$ at every $x \in \Omega_M$. To escape the points where $\nabla \Phi(x) = 0$, we suggest using the gradient descent direction $-\nabla f(x)$. Thus we obtain the system (\ref{eq: vector_s_multiple}) for solving optimization problems with multiple inequality constraints, with $\zeta\in [0,1)$:
\begin{equation*}
\frac{d x}{d t}=\left\{
\begin{split}
&-\frac{\nabla f}{|\nabla f|}
-\zeta\frac{\nabla \Phi}{|\nabla \Phi|}, ~~~~~&\text{if}~~\nabla \Phi \neq 0;\\
& -\nabla f ~~~~~~~~~~~~~~~~~~~~&\text{if}~~\nabla \Phi = 0.
\end{split}\right.
\end{equation*}
Note that in computational practice, $\nabla \Phi = 0$ rarely occurs. The second ODE of the above system serves as a safeguard for the present method.
\vskip 2mm
\begin{mycorollary}
Let $x_\zeta^\sharp$ be the first point at which the resulting trajectory of the system (\ref{eq: vector_s_multiple}) reaches the boundary of $\Omega_M$. Then
$$ x_\zeta^\sharp\in \{x:\Phi(x) = M , \cos\theta\leq -\zeta \}.$$
That is, the point $x_\zeta^\sharp$ belongs to the closure of the $\zeta$-neighborhood of the central path. In particular, the limit of $x_\zeta^\sharp$ as $\zeta\rightarrow 1^-$ is at the intersection of the central path and the boundary of the subset $\Omega_M$.
\end{mycorollary}
\begin{proof}
~~A method similar to that used in the proof of Theorem \ref{theorem5} gives the proof.
\end{proof}
Let $\mathbf{x}^\star \in \Omega_M$ be a point on the central path of the barrier function method, set
\begin{equation}
\tilde{C}_\Phi = \frac{1}{|\nabla f|} \nabla^2_{\tilde{x}} f(\mathbf{x}^\star) + \frac{1}{|\nabla \Phi |} \nabla^2_{\tilde{x}} \Phi(\mathbf{x}^\star).
\label{eq:relative_convex_log_barrier}
\end{equation}
\begin{mydefinition}
A point $\mathbf{x}^\star \in \Omega_M$ on the central path is \textit{relative convex} if $\tilde{C}_\Phi(\mathbf{x}^\star)$ is a positive definite matrix.
\end{mydefinition}
\vskip 2mm
\begin{mycorollary}
Let $\zeta\rightarrow 1^-$, and the point $\mathbf{x}^\star$ on the central path be relative convex. Then, the solution of the linearized system of (\ref{eq: vector_s_multiple}) converges to the tangent line of the central path $\mathcal{L}$ at $\mathbf{x}^\star$.
\end{mycorollary}
\begin{proof}
~~In the subset $\Omega_M$, $\nabla \Phi$ is Lipschitz continuous. Under the assumption (A2), we have $\nabla \Phi \neq 0$ along a central path. A uniform continuity argument shows that $\nabla \Phi \neq 0$ in a small neighborhood of the central path. The method of section \ref{sec:local_analysis} then gives a proof.
\end{proof}
\textcolor[rgb]{0,0,0}{
To conclude, using the barrier function formulation for problem (\ref{eq:Optimization_Problem}), the resulting trajectory achieves an approximate local solution that lies on the boundary of the subset $\Omega_M$. Notice, too, that the system (\ref{eq: vector_s_multiple}) does not depend on the choice of $M$, and the sets $\Omega_M$ exhaust $\Omega$, i.e., $\Omega = \cup_{M>0} \Omega_M$. This means that the resulting trajectory keeps approaching the boundary of the original feasible set $\Omega$ by crossing the boundary of any subset $\Omega_M$, whatever the value of $M$. Therefore, it eases the practical implementation of the method, since no extra parameter $M$ needs to be specified for the stopping criterion; rather, a check of each constraint for violation is sufficient.}
\section{Numerical experiments}
\label{sec:experiments}
When implementing the present method numerically, the canonical first-order optimization procedure is defined by the differential equation
\begin{equation}
\frac{d x}{d t} = \mathbf{s}_\zeta,
\label{eq:GD_trajectory}
\end{equation}
where
\begin{equation*}
\mathbf{s}_\zeta=\left\{
\begin{split}
&-\frac{\nabla f}{|\nabla f|}
-\zeta\frac{\nabla \Phi}{|\nabla \Phi|}, ~~~~~&\text{if}~~\nabla \Phi \neq 0;\\
& -\nabla f ~~~~~~~~~~~~~~~~~~~~&\text{if}~~\nabla \Phi = 0.
\end{split}\right.
\end{equation*}
In practical applications, this procedure might result in poor performance. We first notice that, according to Theorem \ref{theorem5}, the accuracy of the solutions increases as the parameter $\zeta \rightarrow 1^-$. However, as $\zeta \rightarrow 1^-$, $| \mathbf{s}_\zeta| \downarrow 0$ at the central path, resulting in a slow convergence rate.
Another difficulty stems from the potential poor scaling of the logarithmic barrier function, which leads to an ill-conditioned Hessian near the boundary of the feasible set \cite[p.~500-502]{Nocedal}. We will show test examples that suffer from this problem in the following.
To overcome these difficulties, a step size rule that considers and adapts the parameter $\zeta$ may be needed.
A good practical self-adaptive $\zeta$ is expected to result in an optimization trajectory that behaves similarly to Mehrotra's practical implementation for IPM \cite{Mehrotra}, which is well known for its very good performance and for the difficulty of deriving a convergence theory for it. Designing and analyzing such a self-adaptive $\zeta$ requires studies and analysis that exceed the scope of this manuscript.
Another way to improve the performance might be the implementation of a momentum method or the Nesterov Accelerated Gradient \cite{nesterov1983method} for the present system. Although our first implementations have shown promising performance on some test problems, we leave a systematic study together with the design of a step size rule (that considers a self-adaptive $\zeta$) for future work.
In this work, rather, we propose a more fundamental framework so that the focus is on the numerical behavior of the present trajectory, neglecting any advanced modification. In this framework, we use a fixed parameter $\zeta$. In doing so, we can only obtain solutions that are $(1-\zeta)$-suboptimal (referred to in Theorem \ref{theorem5} and Remark \ref{remark_error}). We let $\mathcal{X}_\zeta$ denote the set of all $(1-\zeta)$-suboptimal solutions,
\begin{equation}
\mathcal{X}_\zeta = \{x^\sharp : x^\sharp \in \Theta_{\zeta}, \max(g_i(x^\sharp)) = 0 \},
\end{equation}
given that the solutions exist only on the boundary of the feasible set. A different way of defining the suboptimal solutions is given in \cite{skaf2010techniques} and some useful applications using suboptimal solutions are shown.
To get better performance compared to the procedure (\ref{eq:GD_trajectory}), we suggest the following modified dynamical system:
\begin{equation}
\frac{d x}{d t} = \frac{\mathbf{s}_\zeta}{|\mathbf{s}_\zeta|}.
\label{eq:NGD_trajectory}
\end{equation}
Referring to the work \cite{murray2019revisiting}, we can say that the systems (\ref{eq:NGD_trajectory}) and (\ref{eq:GD_trajectory}) are topologically equivalent and solutions of (\ref{eq:NGD_trajectory}) are merely arc-length reparameterizations of solutions of (\ref{eq:GD_trajectory}). While the system (\ref{eq:GD_trajectory}) may move slowly in a close neighborhood of a central path, the system (\ref{eq:NGD_trajectory}) moves along the same orbit with constant speed. In \cite{murray2019revisiting}, the authors show that the normalized gradient descent method escapes saddle points ``quickly''. This might be beneficial when solving nonconvex optimization problems, where the gradient of the function $\nabla f(x)$ vanishes at saddle points. In fact, this phenomenon may be similar to the present system (\ref{eq:GD_trajectory}), where $|\mathbf{s}_\zeta| \downarrow 0$ at the central path as $\zeta \rightarrow 1^-$. In the following, we show numerical experiments applying the system (\ref{eq:NGD_trajectory}) with appropriate constant step sizes. Note that, by using the logarithmic barrier function formulation for multiple constraints in the system (\ref{eq:NGD_trajectory}), we may still suffer from potential poor scaling behavior near feasible set boundaries.
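For concreteness, a minimal sketch of the fixed-step discretization of (\ref{eq:NGD_trajectory}) reads as follows (this is our illustrative pseudo-implementation and not the exact code behind the reported results; the gradient callables and the constraint-violation stopping test are assumptions, the latter following the suggestion at the end of section \ref{sec:multiple_constraint}).
\begin{verbatim}
import numpy as np

def s_zeta(x, grad_f, grad_Phi, zeta):
    # Search direction of (eq: vector_s_multiple): normalized objective and
    # barrier gradients, with plain gradient descent as a safeguard.
    gf, gP = grad_f(x), grad_Phi(x)
    nP = np.linalg.norm(gP)
    if nP == 0.0:                         # safeguard branch (rare in practice)
        return -gf
    return -gf / np.linalg.norm(gf) - zeta * gP / nP

def optimize(x0, grad_f, grad_Phi, constraints, zeta=0.98, step=0.01, max_iter=5000):
    # Explicit Euler steps of dx/dt = s_zeta / |s_zeta| with a fixed step size;
    # stop at the first iterate that violates a constraint, i.e. an approximate
    # (1 - zeta)-suboptimal point on the boundary of the feasible set.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        if max(g(x) for g in constraints) >= 0.0:
            return x, k
        s = s_zeta(x, grad_f, grad_Phi, zeta)
        x = x + step * s / np.linalg.norm(s)
    return x, max_iter
\end{verbatim}
Note that for $\zeta<1$ the norm $|\mathbf{s}_\zeta|$ is bounded below by $1-\zeta$, so, away from stationary points of $f$, the normalization in the update does not divide by zero.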
\subsection{Experiments with common benchmarks}
We first show numerical experiments for the inequality constrained problems in the EA competition at the 2006 IEEE Congress on Evolutionary Computation \cite{liang2006problem}. These benchmarks are widely used among the community of evolutionary algorithms. We choose them as test examples for three reasons. First, they are well-defined constrained optimization problems and have different characteristics \cite{rao2016jaya}. Second, they are nontrivial to solve with first-order methods. Third, the relatively simple formulation of the optimization problems allows us to gain deeper insight into the numerical behavior of the present method. We choose the inequality constrained optimization problems for the experimentation. The problem $G12$ is excluded because it has a feasible set consisting of $9^3$ disjoint spheres. For the problem $G24$, which has a feasible set consisting of two disconnected sub-regions, we choose initial designs in the sub-region that contains the reported optimal solution.
We conduct numerical experiments with the parameter $\zeta = 0.98$ and tuned fixed-step sizes. All bound constraints are treated as inequalities. Random initializations, that are away from the reported optimal solution, are selected in the feasible sets.
As shown in table \ref{tb:stepsize_0.98}, apart from the problem G19, we find solutions for the test problems that have a relative error of less than 2e-2 with respect to the reported optima. More accurate results can be obtained when we choose shorter step sizes and a larger parameter $\zeta$.
For the problem G19, the reported optimal solution $ \mathbf{x}^\star = $ (1.6699e-17, 3.9637e-16, 3.9459, 1.060e-16, 3.2831, 9.9999, 1.1283e-17, 1.2026e-17, 2.5071e-15, 2.2462e-15, 0.3708, 0.2785, 0.5238, 0.3886, 0.2982) is not achievable with a fixed-step size that has a Euclidean norm of $5e-2$.
For the problems G01, G04, G06, G07, G08, G24, we find sub-optimal solutions that are close to the reported optimal designs. For the problems G09, G10, G18, G19, sub-optimal solutions close to local minima are found. It appears that the ``no free lunch theorem" \cite{wolpert1997no} may apply to the present method implementation: while we solve some of the test problems in no more than a few hundred iterations, a much larger number of iterations is needed for the remaining problems.
\begin{table} [h]
\begin{center}
\caption{Results with the parameter $\zeta = 0.98$ and tuned step sizes}
\captionsetup{justification=centering}
\begin{tabular}{|c | c | c | c | c | c |}
\hline
Prob. & step size & iters. & $f(\mathbf{x}^\star)$ & $f(\mathbf{x}_\zeta)$ & rel. error \\ [0.5ex]
\hline\hline
G01 & 0.002 & 2362 & -15 & -14.7215 & 1.86e-2\\
\hline
G04 & 0.2 & 136 & -3.0665e+4 & -3.0657e+4 & 2.85e-4 \\
\hline
G06 & 0.002 & 4826 & -6.9618e+3 & -6.8371e+3 & 1.79e-2\\
\hline
G07 & 0.0027 & 3009 & 24.3062 & 24.7876 & 1.98e-2 \\
\hline
G08 & 0.01 & 66 & -9.5825e-2& -9.5063e-2 & 0.80e-2\\
\hline
G09 & 0.05 & 120 & 6.8063e+2 & 6.9238e+2 & 1.73e-2\\
\hline
G10 & 0.35 & 5319 & 7.0492e+3 & 7.1898e+3 & 1.99e-2 \\
\hline
G18 & 0.01 & 257 & -0.8660 & -0.8546 & 1.32e-2 \\
\hline
G19 & 0.05 & 294 & 32.6556 & 2.7120e+2 & 7.30 \\
\hline
G24 & 0.02 & 268 & -5.5080 & -5.4147 & 1.69e-2 \\
\hline
\end{tabular}
\label{tb:stepsize_0.98}
\end{center}
\end{table}
Still, to get more insight into the numerical behavior of the method implementation, we plot the centrality measure $\cos \theta$ over the optimization process for each test problem in figure \ref{fig:convergence_plots}. The dashed lines indicate the $\zeta$-neighborhood.
The results may be summarized in three categories:
\textit{Category 1.} G04 and G09: optimization traverses within the $\zeta$-neighborhood;
\textit{Category 2.} G01, G07, G08, G18, G19, and G24: optimization traverses to the $\zeta$-neighborhood but zigzags when it gets close to the solutions;
\textit{Category 3.} G06 and G10: optimization zigzags around the $\zeta$-neighborhood during the optimization process.
The reasons for the zigzagging may be twofold. First, it may be due to the poor scaling of the logarithmic barrier function near the boundary of the feasible set. The second reason may be ``overshooting": a large fixed step size is unable to achieve a small $\zeta$-neighborhood when close to an optimal solution. In problems G06 and G10, the central paths lie close to the boundaries of the respective feasible sets, thus resulting in zigzagging throughout the whole optimization process. In figure \ref{fig:G06_feasibleset}, we show the narrow feasible set of the test problem G06. The central path traverses close to the two boundaries of the feasible set. A similar phenomenon can be observed in problem G10. In problem G08, the zigzagging disappears when sufficiently short step sizes are chosen. This supports our ``overshooting" argument.
\begin{figure}
\caption{G01}
\caption{G04}
\caption{G06}
\caption{G07}
\caption{G08}
\caption{G09}
\caption{G10}
\caption{G18}
\caption{G19}
\caption{G24}
\caption{$\cos \theta$ plots}
\label{fig:convergence_plots}
\end{figure}
\begin{figure}
\caption{Feasible set for problem G06}
\label{fig:G06_feasibleset}
\end{figure}
\subsection{Test example in shape optimization}
Although it is shown that the potential poor scaling of the logarithmic barrier function may result in increasing computational effort, we want to point out that, for the problems of \textit{categories 1 and 2}, the logarithmic barrier function formulation can be very efficient in treating a large number of constraints. We support our argument with a shape optimization problem.
Here, we show an academic convex problem. The objective is to maximize the volume of a small sphere. This small sphere is located inside a bigger sphere that acts as the geometric constraint. The shapes of both spheres are represented with finite element meshes \cite{zienkiewicz1977finite}. The optimization problem reads
\begin{equation}
\begin{split}
& \textnormal{minimize} ~~~ -V(x), \\
& \textnormal{subject to} ~~~ g_i(x) \leq 0, ~ i = 1,...,m,\\
\end{split}
\end{equation}
where $V(x)$ is the volume function, $g_i(x)$ is a point-wise defined geometric constraint for the $i$-th design node, $m$ is the number of nodes of the design mesh, and $x \in \mathbb{R}^{3m}$ is the field of nodal coordinates of the design sphere mesh. The number of nodes of the small sphere (design sphere) is 19897. Thus, the total number of design variables is 59691 and the total number of constraints is 19897. We use the logarithmic barrier function for multiple constraints and choose the parameter $\zeta = 0.95$. In figure \ref{fig:sphere_iters}, the shape variation process is shown at selected iterations. Initially, the design sphere is located close to the boundary of the constraint sphere. During the shape variation process, it moves towards the center of the constraint sphere, while adapting its shape at each iteration. One can easily recognize that the central path of this optimization problem is approached and followed until the solution is found, when the constraints become active.
\begin{figure}
\caption{Initial design}
\label{fig:1}
\caption{Iteration 15}
\label{fig:2}
\caption{Iteration 30}
\label{fig:3}
\caption{Iteration 45}
\label{fig:4}
\caption{Iteration 60}
\label{fig:5}
\caption{Iteration 70}
\label{fig:6}
\caption{Iteration 80}
\label{fig:7}
\caption{Iteration 101}
\label{fig:8}
\caption{Design updates}
\label{fig:sphere_iters}
\end{figure}
\subsection{Real-world application to shape optimization}
We consider a real-world application to shape optimization. The present method is implemented in ShapeModule, which is a flexible solver-agnostic optimization platform and provides optimization algorithms as well as shape control methods, such as Vertex Morphing \cite{Hojjat}. The optimization problem is to minimize the mass of a frame structure under a load-displacement constraint (i.e., the displacement of every surface node is bounded). The optimization problem reads
\begin{equation}
\begin{split}
& \textnormal{minimize} ~~~~ M(x), \\
& \textnormal{subject to} ~~~ g_i(x) \leq 0, ~ i = 1,...,m,
\end{split}
\end{equation}
where $M(x)$ is the function for the mass, $g_i(x)$ is a point-wise formulated displacement constraint for the $i$-th node, $m$ is the number of nodes of the design surface mesh, and $x \in \mathbb{R}^{3m}$ is the field of nodal coordinates of the design surface mesh. The number of design variables is 144423, and the number of constraints is 48141. Note that for multiple constraints, we can use the logarithmic barrier function formulation as in the previous test examples. Each single displacement constraint gradient can be efficiently computed using the adjoint sensitivity analysis. In this application, we use the load-displacement sensitivity provided by the software OptiStruct to conform with a standard industrial design chain. We choose the parameter $\zeta = 0.95$.
\begin{figure}
\caption{The initial frame design}
\label{fig:frame_iter_1}
\caption{The optimized frame design}
\label{fig:frame_iter_193}
\caption{Design optimization of a real-world frame structure}
\label{fig:frame}
\end{figure}
\begin{figure}
\caption{Plot of the frame objective}
\label{fig:frame_objective}
\end{figure}
\begin{figure}
\caption{Plot of the frame constraint}
\label{fig:frame_constraint_plot}
\end{figure}
\begin{figure}
\caption{Plot of the centrality measure}
\label{fig:frame_centrality}
\end{figure}
In figure \ref{fig:frame} we show the initial frame design and the shape-optimized design after 194 iterations. The mass of the structure is reduced by $ 41\% $ as shown in figure \ref{fig:frame_objective}. In figure \ref{fig:frame_constraint_plot}, we show a plot of the maximum constraint value $g = \max_i\{g_i\}$ at each iteration. In figure \ref{fig:frame_centrality}, we show that the optimization is able to approach and follow a central path within the $\zeta$-neighborhood.
\begin{myremark}
By following a central path, an intermediate design improves not only the objective function but also the constraint function. Take the design of iteration 80 as an example: the mass is reduced by $20.5\%$, and the displacement is reduced by $12.1 \%$. These designs may enrich the design options if the original problem is reformulated as a bi-objective optimization problem, in which both the mass and the maximum displacement are set as objectives. The resulting intermediate designs along a central path are approximate Pareto solutions.
\end{myremark}
\section{Conclusion}
\label{sec:conclusion}
This paper proposes a gradient-descent-like method for solving inequality constrained nonlinear programming problems. We show the global behavior and convergence of the method and prove local convergence under the introduced \textit{relative convex condition}. Robustness is shown in various computational test problems. It is also shown that the method exhibits the potential for solving nonconvex nonlinear problems. The method may be especially suited for large-scale problems due to the very low computational cost of each iteration. Practical implementations based on the present method are of interest for future work.
\section*{Acknowledgments.}
The work by the author LC was done during his work as one of the coordinators at the Bavarian Graduate School of Computational Engineering (BGCE). The working experiences and financial support are gratefully acknowledged. The authors are grateful to the ShapeModule team at the BMW Group for providing a real-world model and their framework. The authors also thank Jian Cui from the Helmholtz Pioneer Campus for his generosity in proofreading this manuscript.
\end{document} |
\begin{document}
\setcounter{pac}{0}
\setcounter{footnote}0
\begin{center}
\phantom.
{\Large\bf On explicit realization of algebra of complex powers of generators of $U_{q}(\mathfrak{sl}(3))$}
{\large Pavel Sultanich \footnote {E-mail: [email protected]}},\\
{\it Moscow Center for Continuous Mathematical Education, 119002, Bolshoy Vlasyevsky Pereulok 11, Moscow, Russia
}\\
\end{center}
\begin{abstract}
\noindent
In this note we prove an integral identity involving complex powers of generators of the quantum group $U_{q}(\mathfrak{sl}(3))$, considered as certain positive operators in the setting of positive principal series representations. This identity represents a continuous analog of one of Lusztig's relations between divided powers of generators of quantum groups, which play an important role in the study of irreducible modules \cite{Lu 1}. We also give definitions of arbitrary functions of the $U_{q}(\mathfrak{sl}(3))$ generators and give alternative proofs of some of the known results concerning positive principal series representations of $U_{q}(\mathfrak{sl}(3))$.
\end{abstract}
\section{Introduction}
The notion of the modular double of a quantum group $U_{q}(\mathfrak{g})$ plays an important role in different areas of mathematical physics such as Liouville theory \cite{PT},\cite{FKV}, the relativistic Toda model \cite{KLSTS} and others. It was introduced by Faddeev in \cite{F2}, who noticed that certain representations of the quantum group $U_{q}(\mathfrak{sl}(2))$, $q = e^{\pi\imath b^{2}}$, have a remarkable duality under $b\leftrightarrow b^{-1}$ and proposed to consider, instead of a single quantum group, an enlarged object generated by two sets of generators $K$, $E$, $F\in U_{q}(\mathfrak{sl}(2))$ and $\tilde{K}$, $\tilde{E}$, $\tilde{F}\in U_{\tilde{q}}(\mathfrak{sl}(2))$, $\tilde{q} = e^{\pi\imath b^{-2}}$. In \cite{BT} it was shown that in a special class of representations of the modular double the rescaled generators defined by
$K$, $\mathcal{E} = -\imath(q-q^{-1})E$, $\mathcal{F} = -\imath(q-q^{-1})F$ of $U_{q}(\mathfrak{sl}(2))$ are positive operators. This allows one to use functional calculus and consider arbitrary functions of them. Moreover, the generators of the dual group $\tilde{K}$, $\tilde{\mathcal{E}} = -\imath(\tilde{q}-\tilde{q}^{-1})\tilde{E}$, $\tilde{\mathcal{F}} = -\imath(\tilde{q}-\tilde{q}^{-1})\tilde{F}$ are expressed as non-integer powers of the original generators
\begin{equation}
\tilde{K} = K^{b^{-2}},
\end{equation}
\begin{equation}
\tilde{\mathcal{E}} = \mathcal{E}^{b^{-2}},
\end{equation}
\begin{equation}
\tilde{\mathcal{F}} = \mathcal{F}^{b^{-2}}.
\end{equation}
These relations were called transcendental relations. This kind of representations, admitting the transcendental relations, has been generalized to higher ranks \cite{FrIp}, \cite{Ip2} and has been called positive principal series representations. The introduction of particular non-integer powers of generators of the quantum group naturally leads to the consideration of arbitrary powers of generators. Thus, the modular double becomes a discrete subalgebra in the algebra generated by arbitrary powers of generators $K^{\imath p}$, $\mathcal{E}^{\imath s}$, $\mathcal{F}^{\imath t}$.
In \cite{Lu 1}, eq.(4.1a)-eq.(4.1j), Lusztig summarized the relations between the divided powers of generators of $U_{q}(\mathfrak{g})$ for simply-laced $\mathfrak{g}$. He used these identities in the study of finite-dimensional modules of $U_{q}(\mathfrak{g})$ in the case where $q$ is a root of unity.
So in the study of the algebra of arbitrary complex powers of quantum group generators, the question of the generalization of these relations arises. Some of the relations were found in \cite{Ip1}, eq.(6.16), eq.(6.17). Another integral relation, which is a generalization of Kac's identity \cite{Lu 1}, eq.(4.1a), appeared in \cite{Su1} and was proved in the explicit representation for the case of $U_{q}(\mathfrak{sl}(2))$ in \cite{Su2}. To write it down explicitly, let $G_{b}(x)$ be the quantum dilogarithm \cite{F1}, which is a special function playing an important role in the study of the algebra of complex powers of generators of $U_{q}(\mathfrak{g})$. Its properties will be outlined in Section 2. Let $K_{j} = q^{H_{j}}$, $\mathcal{E}_{j}$, $\mathcal{F}_{j}$ be $U_{q}(\mathfrak{g})$ generators which are assumed to be positive operators, so that functions of them are defined. Let continuous analogs of divided powers be defined by $A^{(\imath s)}_{i} = G_{b}(-\imath bs)A^{\imath s}$. Explicit expressions for the powers of the operators under consideration will be given later. Then the generalized Kac's identity reads
\begin{equation}\begin{split}
\mathcal{E}_{j}^{(\imath s)}\mathcal{F}_{j}^{(\imath t)} = \int\limits_{\mathcal{C}} d\tau e^{\pi bQ\tau}\mathcal{F}_{j}^{(\imath t+\imath\tau)}K_{j}^{-\imath \tau}
\frac{G_{b}(\imath b\tau)G_{b}(-bH_{j} + \imath b(s+t+\tau))}{G_{b}(-bH_{j}+\imath b(s+t+2\tau))}\mathcal{E}_{j}^{(\imath s + \imath \tau)},
\end{split}\end{equation}
where the contour $\mathcal{C}$ goes slightly above the real axis but passes below the pole at $\tau = 0$.
In this note we prove this identity for the case of positive principal series representations of $U_{q}(\mathfrak{sl}(3))$.
The paper is organized as follows. In Section 2, we recall the definition of the quantum group $U_{q}(\mathfrak{g})$ and outline the definition and basic properties of the quantum dilogarithm $G_{b}(x)$ and the related function $g_{b}(x)$. In Section 3 we recall the construction of arbitrary functions of generators and the generalized Kac's identity in the case of positive principal series representations of $U_{q}(\mathfrak{sl}(2))$. The main result of the paper is formulated in Theorem 4.1 in Section 4. We define arbitrary functions of the $U_{q}(\mathfrak{sl}(3))$ generators in the positive principal series representations. We prove the generalized Kac's identity using a unitary transform intertwining the formulas for functions of generators of the $U_{q}(\mathfrak{sl}(2))_{i}$ subalgebra corresponding to simple root $i$ with the formulas for $U_{q}(\mathfrak{sl}(2))$ defined in Section 3. This calculation also represents another proof of Theorem 4.7 in \cite{Ip1}, which states that a positive principal series representation of $U_{q}(\mathfrak{sl}(3))$ decomposes into a direct integral of positive principal series representations of its $U_{q}(\mathfrak{sl}(2))$ subalgebra corresponding to each simple root.
{\bf Acknowledgements:} The research was supported by RSF (project 16-11-10075). I am grateful to A.A.Gerasimov and D.R.Lebedev for helpful discussions and interest in this work.
\section{Preliminaries}
We start with the definition of quantum groups following \cite{ChPr},\cite{Lu book}.
Let $(a_{ij})_{1\le i,j\le r}$ be the Cartan matrix of a semisimple Lie algebra $\mathfrak{g}$ of rank $r$. Let $\mathfrak{b}_{\pm}\subset \mathfrak{g}$ be opposite Borel subalgebras. For simplicity let us restrict ourselves to the simply-laced case $a_{ii} = 2$, $a_{ij} = a_{ji} \in \{0,-1\}$, $i\ne j$. Let $U_{q}(\mathfrak{g})$ $(q = e^{\pi\imath b^{2}}$, $b^{2}\in \mathbb{R}\setminus \mathbb{Q})$ be the quantum group with generators $E_{j}$, $F_{j}$, $K_{j} = q^{H_{j}}$, $1\le j \le r$ and relations
\begin{equation}
K_{i}K_{j} = K_{j}K_{i},
\end{equation}
\begin{equation}
K_{i}E_{j} = q^{a_{ij}}E_{j}K_{i},
\end{equation}
\begin{equation}
K_{i}F_{j} = q^{-a_{ij}}F_{j}K_{i},
\end{equation}
\begin{equation}
E_{i}F_{j} - F_{j}E_{i} = \delta_{ij}\frac{K_{i} - K_{i}^{-1}}{q-q^{-1}}.
\end{equation}
For $a_{ij} = 0$ we have
\begin{equation}
E_{i}E_{j} = E_{j}E_{i},
\end{equation}
\begin{equation}
F_{i}F_{j} = F_{j}F_{i}.
\end{equation}
For $a_{ij} = -1$ we have
\begin{equation}
E_{i}^{2}E_{j} - (q+q^{-1})E_{i}E_{j}E_{i} + E_{j}E_{i}^{2} = 0,
\end{equation}
\begin{equation}
F_{i}^{2}F_{j} - (q+q^{-1})F_{i}F_{j}F_{i} + F_{j}F_{i}^{2} = 0.
\end{equation}
Coproduct is given by
\begin{equation}
\Delta E_{j} = E_{j}\otimes 1 + K_{j}^{-1}\otimes E_{j},
\end{equation}
\begin{equation}
\Delta F_{j} = 1\otimes F_{j} + F_{j}\otimes K_{j},
\end{equation}
\begin{equation}
\Delta K_{j} = K_{j}\otimes K_{j}.
\end{equation}
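As a quick consistency check (this verification is ours and not part of the original text), one sees directly that the coproduct is compatible with the relation $E_{i}F_{j} - F_{j}E_{i} = \delta_{ij}\frac{K_{i}-K_{i}^{-1}}{q-q^{-1}}$: for $i=j$,
\begin{equation*}
[\Delta E_{j},\Delta F_{j}] = [E_{j},F_{j}]\otimes K_{j} + K_{j}^{-1}\otimes [E_{j},F_{j}]
= \frac{K_{j}\otimes K_{j} - K_{j}^{-1}\otimes K_{j}^{-1}}{q-q^{-1}} = \frac{\Delta K_{j} - \Delta K_{j}^{-1}}{q-q^{-1}},
\end{equation*}
since the cross terms $[E_{j}\otimes 1, 1\otimes F_{j}]$ and $[K_{j}^{-1}\otimes E_{j}, F_{j}\otimes K_{j}]$ vanish, the latter because $K_{j}^{-1}F_{j}\otimes E_{j}K_{j} = F_{j}K_{j}^{-1}\otimes K_{j}E_{j}$ by the commutation relations between $K_{j}$ and $E_{j}$, $F_{j}$.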
The non-compact quantum dilogarithm $G_{b}(z)$ is a special function introduced in \cite{F1} (see also \cite{F0}, \cite{FKV}, \cite{V}, \cite{Ka1}, \cite{KLSTS}, \cite{BT}). It is defined as follows
\begin{equation}
\log G_{b}(z) = \log\bar{\zeta}_{b} - \int\limits_{\mathbb{R}+\imath 0} \frac{dt}{t}\frac{e^{zt}}{(1-e^{bt})(1-e^{b^{-1}t})},
\end{equation}
where $Q = b+b^{-1}$ and $\zeta_{b} = e^{\frac{\pi\imath}{4} + \frac{\pi\imath(b^{2}+b^{-2})}{12}}$. Note that $G_{b}(z)$ is closely related to the double sine function $S_{2}(z|\omega_{1},\omega_{2})$, see eq.(A.22) in \cite{KLSTS}.
Below we outline some properties of $G_{b}(z)$.\\*
1. The function $G_{b}(z)$ has simple poles and zeros at the points
\begin{equation}
z = -n_{1}b -n_{2}b^{-1},
\end{equation}
\begin{equation}
z = Q +n_{1}b + n_{2}b^{-1},
\end{equation}
respectively, where $n_{1}$,$n_{2}$ are nonnegative integer numbers.\\*
2. $G_{b}(z)$ has the following asymptotic behavior:
\begin{equation}
G_{b}(z) \sim
\begin{cases} \bar{\zeta}_{b}, & {\rm Im}\, z \rightarrow +\infty ,\\ \zeta_{b} e^{\pi\imath z(z-Q)}, & {\rm Im}\, z \rightarrow -\infty . \end{cases}
\end{equation}
3. Functional equation:
\begin{equation}
G_{b}(z +b^{\pm 1}) = (1-e^{2\pi\imath b^{\pm 1}z})G_{b}(z).
\end{equation}
4. Reflection formula:
\begin{equation}
G_{b}(z)G_{b}(Q-z) = e^{\pi\imath z(z-Q)}.
\end{equation}\\*
5. 4-5 integral identity, \cite{V}:
\begin{equation}
\int d\tau
e^{-2\pi\gamma\tau}
\frac{G_{b}(\alpha+\imath\tau)G_{b}(\beta+\imath\tau)}{G_{b}(\alpha+\beta+\gamma+\imath\tau)G_{b}(Q+\imath\tau)} =
\frac{G_{b}(\alpha)G_{b}(\beta)G_{b}(\gamma)}{G_{b}(\alpha+\gamma)G_{b}(\beta+\gamma)}.
\end{equation}\\*
Define also the function $g_{b}(x)$ by
\begin{equation}
g_{b}(x) = \frac{\bar{\zeta}_{b}}{G_{b}(\frac{Q}{2} +\frac{1}{2\pi\imath b}\log x)}.
\end{equation}
It has the following properties:\\*
1. $|g_{b}(x)| = 1$, if $x\in \mathbb{R}_{+}$. So if $A$ is a positive self-adjoint operator, then $g_{b}(A)$ is unitary. \\*
2. Fourier transform:
\begin{equation}
g_{b}(x) = \int d\tau x^{\imath b^{-1}\tau}e^{\pi Q\tau}G_{b}(-\imath\tau).
\end{equation}\\*
Let $U$, $V$ be positive self-adjoint operators satisfying the relation $UV = q^{2}VU$. Then the following non-commutative identities hold:\\*
3. Quantum exponential relation, \cite{F2}:
\begin{equation}
g_{b}(U)g_{b}(V) = g_{b}(U+V).
\end{equation}
4. Quantum pentagon relation, \cite{Ka0}:
\begin{equation}
g_{b}(V)g_{b}(U) = g_{b}(U)g_{b}(q^{-1}UV)g_{b}(V).
\end{equation}
5. Another useful relation, \cite{BT}:
\begin{equation}
U + V = g_{b}(qU^{-1}V)Ug^{\ast}_{b}(qU^{-1}V),
\end{equation}
where the star means hermitian conjugation.
\section{Algebra of complex powers of generators of $U_{q}(\mathfrak{sl}(2))$}
Let $q = e^{\pi\imath b^{2}}$, $(b^{2}\in \mathbb{R}\setminus \mathbb{Q})$ and let $K = q^{H}$, $E$, $F$ be the generators of $U_{q}(\mathfrak{sl}(2))$ subject to the relations
\begin{equation}
KE = q^{2}EK,
\end{equation}
\begin{equation}
KF = q^{-2}FK,
\end{equation}
\begin{equation}
EF-FE = \frac{K-K^{-1}}{q-q^{-1}}.
\end{equation}
Define the rescaled versions of generators $E$, $F$ by
\begin{equation}\label{rescaled E}
\mathcal{E} = -\imath(q-q^{-1})E,
\end{equation}
\begin{equation}
\mathcal{F} = -\imath(q-q^{-1})F.
\end{equation}
Let $\nu$ be a positive real number. There is a well-known representation of $U_{q}(\mathfrak{sl}(2))$ (see e.g. \cite{PT1}):
\begin{equation}
H = -2\imath b^{-1}u,
\end{equation}
\begin{equation}
K = q^{H} = e^{2\pi bu},
\end{equation}
\begin{equation}
\mathcal{E} = q^{-\frac{1}{2}}e^{\pi b\nu +\pi bu}e^{-\imath b\partial_{u}} + q^{\frac{1}{2}}e^{-\pi b\nu -\pi bu}e^{-\imath b\partial_{u}},
\end{equation}
\begin{equation}
\mathcal{F} = q^{-\frac{1}{2}}e^{\pi b\nu-\pi bu}e^{\imath b\partial_{u}} +q^{\frac{1}{2}}e^{-\pi b\nu+\pi bu}e^{\imath b\partial_{u}}.
\end{equation}
This representation is a particular example of positive principal series representations of $U_{q}(\mathfrak{g})$ \cite{Ip2}.\\*
The following lemma was proven for a slightly different representation of $U_{q}(\mathfrak{sl}(2))$ in \cite{BT} and for the representation we use in this paper in \cite{Ip1}. It gives the expressions of the generators of $U_{q}(\mathfrak{sl}(2))$ in a form convenient for the definition of functions of them. It is based on the formula eq.(B.2) in \cite{BT}, stating that given positive self-adjoint operators $U$, $V$ satisfying $UV = q^{2}VU$, one can write
\begin{equation}
U+V = g_{b}(qU^{-1}V)U(g_{b}(qU^{-1}V))^{-1}.
\end{equation}
\begin{lem}
Let $\mathcal{E}$, $\mathcal{F}$ be the rescaled positive generators of $U_{q}(\mathfrak{sl}(2))$ defined above. They can be written in the following form:
\begin{equation}
\mathcal{E} = g_{b}(e^{-2\pi b\nu-2\pi bu})e^{\pi b\nu +\pi bu - \imath b\partial_{u}}g^{\ast}_{b}(e^{-2\pi b\nu-2\pi bu}),
\end{equation}
\begin{equation}
\mathcal{F} = g_{b}(e^{-2\pi b\nu+2\pi bu})e^{\pi b\nu-\pi bu+\imath b\partial_{u}}g^{\ast}_{b}(e^{-2\pi b\nu+2\pi bu}).
\end{equation}
\end{lem}
$\noindent {\it Proof}. $
Let $U$, $V$ be positive self-adjoint operators such that
$$
UV = q^{2}VU.
$$
Then, \cite{BT}:
$$
U+V = g_{b}(qU^{-1}V)U(g_{b}(qU^{-1}V))^{-1}.
$$
For $\mathcal{E}$ we have $U = q^{-\frac{1}{2}}e^{\pi b\nu +\pi bu}e^{-\imath b\partial_{u}}$, $V = q^{\frac{1}{2}}e^{-\pi b\nu -\pi bu}e^{-\imath b\partial_{u}}$ and
$$
qU^{-1}V = q q^{\frac{1}{2}}e^{\imath b\partial_{u}}e^{-\pi b\nu-\pi bu}q^{\frac{1}{2}}e^{-\pi b\nu -\pi bu}e^{-\imath b\partial_{u}} = e^{-2\pi b\nu-2\pi bu},
$$
so
$$
\mathcal{E} = g_{b}(e^{-2\pi b\nu-2\pi bu})q^{-\frac{1}{2}}e^{\pi b\nu +\pi bu}e^{-\imath b\partial_{u}}(g_{b}(e^{-2\pi b\nu-2\pi bu}))^{-1}.
$$
For $\mathcal{F}$ we have $U = q^{-\frac{1}{2}}e^{\pi b\nu-\pi bu}e^{\imath b\partial_{u}}$, $V = q^{\frac{1}{2}}e^{-\pi b\nu+\pi bu}e^{\imath b\partial_{u}}$,
$$
qU^{-1}V = qq^{\frac{1}{2}}e^{-\imath b\partial_{u}}e^{-\pi b\nu+\pi bu}q^{\frac{1}{2}}e^{-\pi b\nu+\pi bu}e^{\imath b\partial_{u}} = e^{-2\pi b\nu +2\pi bu},
$$
$$
\mathcal{F} = g_{b}(e^{-2\pi b\nu+2\pi bu})q^{-\frac{1}{2}}e^{\pi b\nu-\pi bu}e^{\imath b\partial_{u}}(g_{b}(e^{-2\pi b\nu+2\pi bu}))^{-1}.
$$
$\Box$
Multiplication by $g_{b}(e^{-2\pi b\nu-2\pi bu})$ and $g_{b}(e^{-2\pi b\nu+2\pi bu})$ is a unitary transformation, since $|g_{b}(x)| = 1$ for $x\in\mathbb{R}_{+}$. As a consequence, following \cite{BT}, eq.(3.15), eq.(3.21), one can define functions of the generators $\mathcal{E}$ and $\mathcal{F}$ as follows:
\begin{de}
Let $\varphi(x)$ be a complex-valued function and let $K$, $\mathcal{E}$, $\mathcal{F}$ be $U_{q}(\mathfrak{sl}(2))$ generators in the positive principal series representation. The functions of these operators are defined as follows
\begin{equation}
\varphi(K) = \varphi(e^{2\pi bu}),
\end{equation}
\begin{equation}\label{function of E}
\varphi(\mathcal{E}) = g_{b}(e^{-2\pi b\nu-2\pi bu})\varphi(e^{\pi b\nu +\pi bu - \imath b\partial_{u}})g^{\ast}_{b}(e^{-2\pi b\nu-2\pi bu}),
\end{equation}
\begin{equation}\label{function of F}
\varphi(\mathcal{F}) = g_{b}(e^{-2\pi b\nu+2\pi bu})\varphi(e^{\pi b\nu-\pi bu+\imath b\partial_{u}})g^{\ast}_{b}(e^{-2\pi b\nu+2\pi bu}).
\end{equation}
\end{de}
In particular, the powers of $\mathcal{E}$ and $\mathcal{F}$ are given by
\begin{eqnarray}\new\begin{array}{cc}gin{equation}\lambdaambdabel{Imaginary power E Uqsl(2)}
\mathcal{E}^{\imathmath s} = g_{b}(e^{-2\pi b\nu-2\pi bu})e^{\pi\imathmath bs\nu +\pi\imathmath bsu + bs\partial_{u}}g^{\ast}_{b}(e^{-2\pi b\nu-2\pi bu}),
\end{pmatrix}silonnd{equation}
\begin{eqnarray}\new\begin{array}{cc}gin{equation}\lambdaambdabel{Imaginary power F Uqsl(2)}
\mathcal{F}^{\imathmath t} = g_{b}(e^{-2\pi b\nu+2\pi bu})e^{\pi\imathmath bt\nu-\pi\imathmath btu- bt\partial_{u}}g^{\ast}_{b}(e^{-2\pi b\nu+2\pi bu}).
\end{pmatrix}silonnd{equation}
The formulas for the powers in this particular representation were obtained in \cdotite{Ip1}.\\*
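Indeed, taking $\varphi(x) = x^{\mathrm{i} s}$ in eq.(\ref{function of E}) and using the functional calculus for the (essentially self-adjoint) exponent, one finds
$$
\varphi\big(e^{\pi b\nu+\pi bu-\mathrm{i} b\partial_{u}}\big) = e^{\mathrm{i} s(\pi b\nu+\pi bu-\mathrm{i} b\partial_{u})} = e^{\pi\mathrm{i} bs\nu+\pi\mathrm{i} bsu+bs\partial_{u}},
$$
which reproduces eq.(\ref{Imaginary power E Uqsl(2)}); eq.(\ref{Imaginary power F Uqsl(2)}) is obtained in the same way.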
Define the arbitrary divided powers of $A$ by
\begin{equation}\label{complex devided power}
A^{(\mathrm{i} s)} = G_{b}(-\mathrm{i} bs)A^{\mathrm{i} s}.
\end{equation}
\begin{te}\cite{Su2}
The following generalized Kac's identity holds:
\begin{equation}
\mathcal{E}^{(\mathrm{i} s)}\mathcal{F}^{(\mathrm{i} t)} = \int\limits_{\mathcal{C}} d\tau e^{\pi bQ\tau}\mathcal{F}^{(\mathrm{i} t+\mathrm{i}\tau)}K^{-\mathrm{i}\tau}\frac{G_{b}(\mathrm{i} b\tau)G_{b}(-bH+\mathrm{i} b(s+t+\tau))}{G_{b}(-bH+\mathrm{i} b(s+t+2\tau))}\mathcal{E}^{(\mathrm{i} s+\mathrm{i}\tau)},
\end{equation}
where the contour $\mathcal{C}$ goes slightly above the real axis but passes below the pole at $\tau = 0$.
\end{te}
\section{Algebra of complex powers of generators of $U_{q}(\mathfrak{sl}(3))$}
In this section we prove the Generalized Kac's identity in the case of positive principal series representations of $U_{q}(\mathfrak{sl}(3))$.
Let $q = e^{\pi\mathrm{i} b^{2}}$ with $b^{2}\in \mathbb{R}\setminus \mathbb{Q}$. Let $a_{ij}$ ($i$, $j = 1$, $2$) be the Cartan matrix corresponding to the $\mathfrak{sl}(3)$ Lie algebra, i.e. $a_{11} = a_{22} = 2$, $a_{12} = a_{21} = -1$. The algebra $U_{q}(\mathfrak{sl}(3))$ is defined by generators $E_{j}$, $F_{j}$, $K_{j} = q^{H_{j}}$, $1\le j \le 2$ and relations
\begin{equation}
K_{i}K_{j} = K_{j}K_{i},
\end{equation}
\begin{equation}
K_{i}E_{j} = q^{a_{ij}}E_{j}K_{i},
\end{equation}
\begin{equation}
K_{i}F_{j} = q^{-a_{ij}}F_{j}K_{i},
\end{equation}
\begin{equation}
E_{i}F_{j} - F_{j}E_{i} = \delta_{ij}\frac{K_{i} - K_{i}^{-1}}{q-q^{-1}}.
\end{equation}
For $i\ne j$ we have
\begin{equation}
E_{i}^{2}E_{j} - (q+q^{-1})E_{i}E_{j}E_{i} + E_{j}E_{i}^{2} = 0,
\end{equation}
\begin{equation}
F_{i}^{2}F_{j} - (q+q^{-1})F_{i}F_{j}F_{i} + F_{j}F_{i}^{2} = 0.
\end{equation}
The general construction of the positive principal series representations of $U_{q}(\mathfrak{g})$ in the simply-laced case using Lusztig's data was given in \cite{Ip2}. Let $w_{0}$ be the longest element of the Weyl group. There are different realizations of positive principal series representations corresponding to different reduced expressions of $w_{0}$. In the case of $U_{q}(\mathfrak{sl}(3))$ there are two options, $w_{0} = s_{1}s_{2}s_{1}$ and $w_{0} = s_{2}s_{1}s_{2}$. In the following we give the explicit formulas for both of these cases.
Let $\mathcal{E}_{j} = -\mathrm{i}(q-q^{-1})E_{j}$, $\mathcal{F}_{j} = -\mathrm{i}(q-q^{-1})F_{j}$, $j = 1$, $2$ be the rescaled versions of the $U_{q}(\mathfrak{sl}(3))$ generators.
\begin{prop} \cite{Ip2}.
Let $K_{j}$, $\mathcal{E}_{j}$, $\mathcal{F}_{j}$ be the rescaled generators of $U_{q}(\mathfrak{sl}(3))$. Let $w_{0} = s_{1}s_{2}s_{1}$ be a reduced expression of the longest Weyl element. Let $\nu_{1}$, $\nu_{2}$ be positive real numbers. The positive principal series representation of $U_{q}(\mathfrak{sl}(3))$ corresponding to these data is given by:
\begin{equation}
K_{1} = e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+2\pi bw},
\end{equation}
\begin{equation}
K_{2} = e^{-2\pi b\nu_{2}-\pi bu+2\pi bv-\pi bw},
\end{equation}
\begin{equation}
\mathcal{E}_{1} = e^{\pi bw-\mathrm{i} b\partial_{w}} + e^{-\pi bw-\mathrm{i} b\partial_{w}},
\end{equation}
\begin{equation}
\mathcal{E}_{2} = e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}}+ e^{\pi bu -\mathrm{i} b\partial_{u} -\mathrm{i} b\partial_{v}+\mathrm{i} b\partial_{w}} +
e^{-\pi bu-\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{v}+\mathrm{i} b\partial_{w}} +
e^{-\pi bv+\pi bw-\mathrm{i} b\partial_{v}},
\end{equation}
\begin{equation}
\mathcal{F}_{1} =
e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}} + e^{2\pi b\nu_{1}-\pi bu+\mathrm{i} b\partial_{u}} +
e^{-2\pi b\nu_{1}+\pi bu+\mathrm{i} b\partial_{u}} + e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{w}},
\end{equation}
\begin{equation}
\mathcal{F}_{2} = e^{2\pi b\nu_{2}+\pi bu-\pi bv +\mathrm{i} b\partial_{v}} + e^{-2\pi b\nu_{2}-\pi bu +\pi bv+\mathrm{i} b\partial_{v}}.
\end{equation}
\end{prop}
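As a quick check of these formulas, note that $e^{-\mathrm{i} b\partial_{w}}$ shifts $w\mapsto w-\mathrm{i} b$, which for both summands of $\mathcal{E}_{1}$ gives
$$
\mathcal{E}_{1}K_{1} = e^{-2\pi\mathrm{i} b^{2}}K_{1}\mathcal{E}_{1} = q^{-2}K_{1}\mathcal{E}_{1},
\qquad
\mathcal{E}_{1}K_{2} = e^{\pi\mathrm{i} b^{2}}K_{2}\mathcal{E}_{1} = q\,K_{2}\mathcal{E}_{1},
$$
in agreement with the relations $K_{i}E_{j} = q^{a_{ij}}E_{j}K_{i}$, since $K_{1}$ contains the factor $e^{2\pi bw}$ and $K_{2}$ the factor $e^{-\pi bw}$.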
Similarly to the $U_{q}(\mathfrak{sl}(2))$ case, using eq.(B.2) of \cite{BT} we represent the generators in a form convenient for the definition of functions of them.
\begin{lem}
Let $\mathcal{E}_{i}$, $\mathcal{F}_{i}$, $(i = 1,2)$ be the generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{1}s_{2}s_{1}$. They can be represented in the following form:
\begin{equation}
\mathcal{E}_{1} = g_{b}(e^{-2\pi bw})e^{\pi bw-\mathrm{i} b\partial_{w}}g_{b}^{\ast}(e^{-2\pi bw}),
\end{equation}
\begin{multline}
\mathcal{E}_{2} =
g_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})g_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})g_{b}(e^{-2\pi bv+2\pi bw})\times \\
e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}}\times \\
g^{\ast}_{b}(e^{-2\pi bv+2\pi bw})g^{\ast}_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g^{\ast}_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}),
\end{multline}
\begin{multline}
\mathcal{F}_{1} = g_{b}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})\times \\
e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}}\times \\
g_{b}^{\ast}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
g_{b}^{\ast}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}),
\end{multline}
\begin{equation}
\mathcal{F}_{2} = g_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv})e^{2\pi b\nu_{2}+\pi bu-\pi bv+\mathrm{i} b\partial_{v}}
g^{\ast}_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv}).
\end{equation}
\end{lem}
\noindent {\it Proof}.
Let $q=e^{\pi\mathrm{i} b^{2}}$ and let $U$, $V$ be positive essentially self-adjoint operators subject to the relation $UV=q^{2}VU$. We will need the following identity \cite{BT}:
$$
U+V = g_{b}(qU^{-1}V)Ug^{\ast}_{b}(qU^{-1}V),
$$
and the quantum exponential relation \cite{F2}:
$$
g_{b}(U+V) = g_{b}(U)g_{b}(V).
$$
Let us start with $\mathcal{E}_{1}$. It has the following form
$$
\mathcal{E}_{1} = U+V,
$$
where $U = e^{\pi bw-\mathrm{i} b\partial_{w}}$, $V = e^{-\pi bw-\mathrm{i} b\partial_{w}}$. Using the identities $e^{A}e^{B} = e^{\frac{[A,B]}{2}}e^{A+B}$ and $e^{A}e^{B} = e^{[A,B]}e^{B}e^{A}$, valid when the commutator $[A,B]$ commutes with both $A$ and $B$, and also the identity $[x,\partial_{x}] = -1$, one checks that
$$
UV = e^{\pi bw-\mathrm{i} b\partial_{w}}e^{-\pi bw-\mathrm{i} b\partial_{w}} =
e^{[\pi bw-\mathrm{i} b\partial_{w},-\pi bw-\mathrm{i} b\partial_{w}]}e^{-\pi bw-\mathrm{i} b\partial_{w}}e^{\pi bw-\mathrm{i} b\partial_{w}} =
$$
$$
e^{2\pi\mathrm{i} b^{2}}e^{-\pi bw-\mathrm{i} b\partial_{w}}e^{\pi bw-\mathrm{i} b\partial_{w}} = q^{2}VU,
$$
and
$$
qU^{-1}V = e^{\pi\mathrm{i} b^{2}}e^{-\pi bw+\mathrm{i} b\partial_{w}}e^{-\pi bw-\mathrm{i} b\partial_{w}}=
e^{\pi\mathrm{i} b^{2}}e^{\frac{1}{2}[-\pi bw+\mathrm{i} b\partial_{w},-\pi bw-\mathrm{i} b\partial_{w}]}e^{-2\pi bw} = e^{-2\pi bw},
$$
so we have
$$
\mathcal{E}_{1} = g_{b}(qU^{-1}V)Ug^{\ast}_{b}(qU^{-1}V) = g_{b}(e^{-2\pi bw})e^{\pi bw-\mathrm{i} b\partial_{w}}g_{b}^{\ast}(e^{-2\pi bw}).
$$
For $\mathcal{E}_{2}$ we have
$$
\mathcal{E}_{2} = U_{1} + U_{2} + U_{3} + U_{4},
$$
where $U_{1} = e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}}$, $U_{2} = e^{\pi bu -\mathrm{i} b\partial_{u} -\mathrm{i} b\partial_{v}+\mathrm{i} b\partial_{w}}$, $U_{3} = e^{-\pi bu-\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{v}+\mathrm{i} b\partial_{w}}$, $U_{4} = e^{-\pi bv+\pi bw-\mathrm{i} b\partial_{v}}$.
These operators satisfy the relations
$$
U_{i}U_{j} = q^{2}U_{j}U_{i},
$$
if $i< j$. We obtain
$$
\mathcal{E}_{2} = g_{b}(qU_{1}^{-1}(U_{2}+U_{3}+U_{4}))U_{1}g^{\ast}_{b}(qU_{1}^{-1}(U_{2}+U_{3}+U_{4})) =
$$
$$
g_{b}(qU_{1}^{-1}U_{2})g_{b}(qU_{1}^{-1}U_{3})g_{b}(qU_{1}^{-1}U_{4})U_{1}g^{\ast}_{b}(qU_{1}^{-1}U_{4})g^{\ast}_{b}(qU_{1}^{-1}U_{3})g^{\ast}_{b}(qU_{1}^{-1}U_{2}),
$$
where in the second equality we have used the quantum exponential relation, provided that the operators $qU_{1}^{-1}U_{i}$ are positive and satisfy $(qU_{1}^{-1}U_{i})(qU_{1}^{-1}U_{j}) = q^{2}(qU_{1}^{-1}U_{j})(qU_{1}^{-1}U_{i})$ for $1<i<j$.
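More explicitly, writing $X_{i} = qU_{1}^{-1}U_{i}$, the quantum exponential relation is applied twice,
$$
g_{b}(X_{2}+X_{3}+X_{4}) = g_{b}(X_{2}+X_{3})\,g_{b}(X_{4}) = g_{b}(X_{2})\,g_{b}(X_{3})\,g_{b}(X_{4}),
$$
which is legitimate because the pairwise relations also imply $(X_{2}+X_{3})X_{4} = q^{2}X_{4}(X_{2}+X_{3})$. Explicitly, the arguments $qU_{1}^{-1}U_{i}$ are computed as follows.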
$$
qU_{1}^{-1}U_{2} = e^{\pi\mathrm{i} b^{2}}e^{-\pi bv+\pi bw+\mathrm{i} b\partial_{v}}e^{\pi bu -\mathrm{i} b\partial_{u} -\mathrm{i} b\partial_{v}+\mathrm{i} b\partial_{w}} =
$$
$$
e^{\pi\mathrm{i} b^{2}}e^{\frac{1}{2}[-\pi bv+\pi bw+\mathrm{i} b\partial_{v},\pi bu -\mathrm{i} b\partial_{u} -\mathrm{i} b\partial_{v}+\mathrm{i} b\partial_{w}]}e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}} =
$$
$$
e^{\pi\mathrm{i} b^{2}}e^{-\pi\mathrm{i} b^{2}}e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}} =
e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}.
$$
Analogously we obtain
$$
qU_{1}^{-1}U_{3} = e^{-\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}},
$$
$$
qU_{1}^{-1}U_{4}= e^{-2\pi bv+2\pi bw}.
$$
Substituting these expressions into the formula for $\mathcal{E}_{2}$ we obtain
$$
\mathcal{E}_{2} = g_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-2\pi bv+2\pi bw})e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}}\times
$$
$$
g^{\ast}_{b}(e^{-2\pi bv+2\pi bw})g^{\ast}_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g^{\ast}_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}).
$$
For the generator $\mathcal{F}_{1}$ we have
$$
\mathcal{F}_{1} = U_{1} + U_{2} + U_{3} + U_{4},
$$
where we used the notations
$$
U_{1} = e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}},$$
$$
U_{2} = e^{2\pi b\nu_{1}-\pi bu+\mathrm{i} b\partial_{u}},
$$
$$
U_{3} = e^{-2\pi b\nu_{1}+\pi bu+\mathrm{i} b\partial_{u}},
$$
$$
U_{4} = e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{w}}.
$$
Again, for $i<j$ we have the relations
$$
U_{i}U_{j} = q^{2}U_{j}U_{i},
$$
and for $1<i<j$
$$
(qU^{-1}_{1}U_{i})(qU^{-1}_{1}U_{j}) = q^{2}(qU^{-1}_{1}U_{j})(qU^{-1}_{1}U_{i}).
$$
Explicit expressions for the operators $qU^{-1}_{1}U_{i}$ are given by
$$
qU_{1}^{-1}U_{2} = e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}},
$$
$$
qU_{1}^{-1}U_{3} = e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}},
$$
$$
qU_{1}^{-1}U_{4} = e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}.
$$
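For instance, the first of these expressions follows from the identity $e^{A}e^{B} = e^{\frac{[A,B]}{2}}e^{A+B}$:
$$
qU_{1}^{-1}U_{2} = e^{\pi\mathrm{i} b^{2}}\,e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{w}}\,e^{2\pi b\nu_{1}-\pi bu+\mathrm{i} b\partial_{u}} =
e^{\pi\mathrm{i} b^{2}}\,e^{-\pi\mathrm{i} b^{2}}\,e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}},
$$
since the only nonvanishing commutator between the exponents is $[2\pi bu,\mathrm{i} b\partial_{u}] = -2\pi\mathrm{i} b^{2}$; the other two expressions are obtained in the same way.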
Using the formulas $U+V = g_{b}(qU^{-1}V)Ug^{\ast}_{b}(qU^{-1}V)$ and $g_{b}(U+V) = g_{b}(U)g_{b}(V)$ for positive operators satisfying the relation $UV = q^{2}VU$ we obtain
$$
\mathcal{F}_{1} = g_{b}(qU_{1}^{-1}(U_{2}+U_{3}+U_{4}))U_{1}g^{\ast}_{b}(qU_{1}^{-1}(U_{2}+U_{3}+U_{4})) =
$$
$$
g_{b}(qU_{1}^{-1}U_{2})g_{b}(qU_{1}^{-1}U_{3})g_{b}(qU_{1}^{-1}U_{4})U_{1}g^{\ast}_{b}(qU_{1}^{-1}U_{4})g^{\ast}_{b}(qU_{1}^{-1}U_{3})g^{\ast}_{b}(qU_{1}^{-1}U_{2}) =
$$
$$
g_{b}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})\times
$$
$$
e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}}\times
$$
$$
g_{b}^{\ast}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
g_{b}^{\ast}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}).
$$
Finally, for the generator $\mathcal{F}_{2}$ we write
$$
\mathcal{F}_{2} = A_{1} + A_{2},
$$
where
$$
A_{1} = e^{2\pi b\nu_{2}+\pi bu-\pi bv +\mathrm{i} b\partial_{v}},
$$
$$
A_{2} = e^{-2\pi b\nu_{2}-\pi bu +\pi bv+\mathrm{i} b\partial_{v}}.
$$
Then
$$
qA_{1}^{-1}A_{2} = e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv},
$$
$$
A_{1}A_{2} = q^{2}A_{2}A_{1},
$$
and using the identity
$$
A_{1} + A_{2} = g_{b}(qA_{1}^{-1}A_{2})A_{1}g_{b}^{\ast}(qA_{1}^{-1}A_{2}),
$$
we obtain
$$
\mathcal{F}_{2} = g_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv})e^{2\pi b\nu_{2}+\pi bu-\pi bv+\mathrm{i} b\partial_{v}}
g^{\ast}_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv}).
$$
$\Box$
Similarly to eq.(3.15) and eq.(3.21) of \cite{BT}, we define the functions of the operators in the following way:
\begin{de}
Let $\varphi(x)$ be a complex-valued function of one variable and let $K_{i}$, $\mathcal{E}_{i}$, $\mathcal{F}_{i}$, $(i = 1,2)$ be the generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{1}s_{2}s_{1}$. The functions of these operators are defined as follows:
\begin{equation}
\varphi(K_{1}) = \varphi(e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+2\pi bw}),
\end{equation}
\begin{equation}
\varphi(K_{2}) = \varphi(e^{-2\pi b\nu_{2}-\pi bu+2\pi bv-\pi bw}),
\end{equation}
\begin{equation}
\varphi(\mathcal{E}_{1}) = g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\mathrm{i} b\partial_{w}})g_{b}^{\ast}(e^{-2\pi bw}),
\end{equation}
\begin{multline}
\varphi(\mathcal{E}_{2}) = g_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-2\pi bv+2\pi bw})\times \\
\varphi(e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}})\times \\
g^{\ast}_{b}(e^{-2\pi bv+2\pi bw})g^{\ast}_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g^{\ast}_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}),
\end{multline}
\begin{multline}
\varphi(\mathcal{F}_{1}) = g_{b}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})\times \\
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})\times \\
g_{b}^{\ast}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
g_{b}^{\ast}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}),
\end{multline}
\begin{equation}
\varphi(\mathcal{F}_{2}) = g_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv})\varphi(e^{2\pi b\nu_{2}+\pi bu-\pi bv+\mathrm{i} b\partial_{v}})
g^{\ast}_{b}(e^{-4\pi b\nu_{2}-2\pi bu+2\pi bv}).
\end{equation}
\end{de}
Choosing the function in this definition to be $\varphi(x) = x^{\mathrm{i} s}$, we obtain expressions for arbitrary powers of the generators.
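For example, for the generator $\mathcal{E}_{1}$ this choice gives
$$
\mathcal{E}_{1}^{\mathrm{i} s} = g_{b}(e^{-2\pi bw})\,e^{\pi\mathrm{i} bsw + bs\partial_{w}}\,g_{b}^{\ast}(e^{-2\pi bw}),
$$
since $\big(e^{\pi bw-\mathrm{i} b\partial_{w}}\big)^{\mathrm{i} s} = e^{\mathrm{i} s(\pi bw-\mathrm{i} b\partial_{w})}$.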
In the following we repeat the same steps for the representation corresponding to the other choice of reduced expression, $w_{0} = s_{2}s_{1}s_{2}$.
\begin{prop}\cite{Ip2}.
The positive principal series representation of $U_{q}(\mathfrak{sl}(3))$ corresponding to the reduced expression of the longest Weyl element $w_{0} = s_{2}s_{1}s_{2}$ and positive real parameters $\nu_{1}$, $\nu_{2}$ is given by
\begin{equation}
K_{1} = e^{-2\pi b\nu_{1}-\pi bu+2\pi bv-\pi bw},
\end{equation}
\begin{equation}
K_{2} = e^{-2\pi b\nu_{2}+2\pi bu-\pi bv+2\pi bw},
\end{equation}
\begin{equation}
\mathcal{E}_{1} = e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}}+ e^{\pi bu -\mathrm{i} b\partial_{u} -\mathrm{i} b\partial_{v}+\mathrm{i} b\partial_{w}} +
e^{-\pi bu-\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{v}+\mathrm{i} b\partial_{w}} +
e^{-\pi bv+\pi bw-\mathrm{i} b\partial_{v}},
\end{equation}
\begin{equation}
\mathcal{E}_{2} = e^{\pi bw-\mathrm{i} b\partial_{w}} + e^{-\pi bw-\mathrm{i} b\partial_{w}},
\end{equation}
\begin{equation}
\mathcal{F}_{1} = e^{2\pi b\nu_{1}+\pi bu-\pi bv +\mathrm{i} b\partial_{v}} + e^{-2\pi b\nu_{1}-\pi bu +\pi bv+\mathrm{i} b\partial_{v}},
\end{equation}
\begin{equation}
\mathcal{F}_{2} =
e^{2\pi b\nu_{2}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}} + e^{2\pi b\nu_{2}-\pi bu+\mathrm{i} b\partial_{u}} +
e^{-2\pi b\nu_{2}+\pi bu+\mathrm{i} b\partial_{u}} + e^{-2\pi b\nu_{2}+2\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{w}}.
\end{equation}
\end{prop}
\begin{lem}
Let $\mathcal{E}_{i}$, $\mathcal{F}_{i}$, $(i = 1,2)$ be the generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{2}s_{1}s_{2}$. They can be represented in the following form:
\begin{multline}
\mathcal{E}_{1} = g_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-2\pi bv+2\pi bw})\times \\
e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}}\times \\
g^{\ast}_{b}(e^{-2\pi bv+2\pi bw})g^{\ast}_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g^{\ast}_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}),
\end{multline}
\begin{equation}
\mathcal{E}_{2} = g_{b}(e^{-2\pi bw})e^{\pi bw-\mathrm{i} b\partial_{w}}g_{b}^{\ast}(e^{-2\pi bw}),
\end{equation}
\begin{equation}
\mathcal{F}_{1} = g_{b}(e^{-4\pi b\nu_{1}-2\pi bu+2\pi bv})e^{2\pi b\nu_{1} +\pi bu-\pi bv+\mathrm{i} b\partial_{v}}
g^{\ast}_{b}(e^{-4\pi b\nu_{1}-2\pi bu+2\pi bv}),
\end{equation}
\begin{multline}
\mathcal{F}_{2} = g_{b}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{2}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{2}+4\pi bu-2\pi bv+2\pi bw})\times \\
e^{2\pi b\nu_{2}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}}\times \\
g_{b}^{\ast}(e^{-4\pi b\nu_{2}+4\pi bu-2\pi bv+2\pi bw})
g_{b}^{\ast}(e^{-4\pi b\nu_{2}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}).
\end{multline}
\end{lem}
\begin{de}
Let $\varphi(x)$ be a complex-valued function of one variable and let $K_{i}$, $\mathcal{E}_{i}$, $\mathcal{F}_{i}$, $(i = 1,2)$ be the generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{2}s_{1}s_{2}$. The functions of the generators are defined as follows:
\begin{equation}
\varphi(K_{1}) = \varphi(e^{-2\pi b\nu_{1}-\pi bu+2\pi bv-\pi bw}),
\end{equation}
\begin{equation}
\varphi(K_{2}) = \varphi(e^{-2\pi b\nu_{2}+2\pi bu-\pi bv+2\pi bw}),
\end{equation}
\begin{multline}
\varphi(\mathcal{E}_{1}) = g_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-2\pi bv+2\pi bw})\times \\
\varphi(e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}})\times \\
g^{\ast}_{b}(e^{-2\pi bv+2\pi bw})g^{\ast}_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g^{\ast}_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}),
\end{multline}
\begin{equation}
\varphi(\mathcal{E}_{2}) = g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\mathrm{i} b\partial_{w}})g_{b}^{\ast}(e^{-2\pi bw}),
\end{equation}
\begin{equation}
\varphi(\mathcal{F}_{1}) = g_{b}(e^{-4\pi b\nu_{1}-2\pi bu+2\pi bv})\varphi(e^{2\pi b\nu_{1} +\pi bu-\pi bv+\mathrm{i} b\partial_{v}})
g^{\ast}_{b}(e^{-4\pi b\nu_{1}-2\pi bu+2\pi bv}),
\end{equation}
\begin{multline}
\varphi(\mathcal{F}_{2}) = g_{b}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{2}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{2}+4\pi bu-2\pi bv+2\pi bw})\times \\
\varphi(e^{2\pi b\nu_{2}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})\times \\
g_{b}^{\ast}(e^{-4\pi b\nu_{2}+4\pi bu-2\pi bv+2\pi bw})
g_{b}^{\ast}(e^{-4\pi b\nu_{2}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}).
\end{multline}
\end{de}
Setting $\varphi(x) = x^{\mathrm{i} s}$, we obtain expressions for arbitrary powers of the generators.\\*
Recall the definition of the divided powers of $A$:
\begin{equation}
A^{(\mathrm{i} s)} = G_{b}(-\mathrm{i} bs)A^{\mathrm{i} s}.
\end{equation}
Now that we have defined arbitrary divided powers of the generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representations corresponding to both reduced expressions of the Weyl element, we can state the main theorem.
\begin{te}
Let $q = e^{\pi\mathrm{i} b^{2}}$, $(b^{2}\in \mathbb{R}\setminus \mathbb{Q})$ and let $K_{j} = q^{H_{j}}$, $\mathcal{E}_{j} = -\mathrm{i} (q-q^{-1})E_{j}$, $\mathcal{F}_{j} = -\mathrm{i} (q-q^{-1})F_{j}$, $1\le j\le 2$ be the $U_{q}(\mathfrak{sl}(3))$ generators in the positive principal series representation corresponding to any reduced expression of the Weyl element. Then the following generalized Kac's identity holds:
\begin{equation}\begin{split}
\mathcal{E}_{j}^{(\mathrm{i} s)}\mathcal{F}_{j}^{(\mathrm{i} t)} = \int\limits_{\mathcal{C}} d\tau e^{\pi bQ\tau}\mathcal{F}_{j}^{(\mathrm{i} t+\mathrm{i}\tau)}K_{j}^{-\mathrm{i} \tau}
\frac{G_{b}(\mathrm{i} b\tau)G_{b}(-bH_{j} + \mathrm{i} b(s+t+\tau))}{G_{b}(-bH_{j}+\mathrm{i} b(s+t+2\tau))}\mathcal{E}_{j}^{(\mathrm{i} s + \mathrm{i} \tau)},
\end{split}\end{equation}
where the contour $\mathcal{C}$ goes slightly above the real axis but passes below the pole at $\tau = 0$.
\end{te}
\noindent {\it Proof}.
The proof follows from the results stated in Proposition 4.3, Lemma 4.3, Proposition 4.4, Corollary 4.1, Corollary 4.2, Corollary 4.3.
The next statement (Theorem 5.7 in \cite{Ip2}) establishes the unitary equivalence of the positive principal series representations corresponding to different reduced expressions of the longest Weyl element. We give here another proof of this result for the case of $U_{q}(\mathfrak{sl}(3))$, which allows us to illustrate explicitly its validity for arbitrary functions of the generators. The proof makes extensive use of the pentagon identity \cite{Ka0}, which states that for positive self-adjoint operators $U$, $V$ satisfying the relation $UV = q^{2}VU$ we have
\begin{equation}
g_{b}(V)g_{b}(U) = g_{b}(U)g_{b}(q^{-1}UV)g_{b}(V).
\end{equation}
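Note that, combined with the quantum exponential relation $g_{b}(U+V) = g_{b}(U)g_{b}(V)$, the pentagon identity can equivalently be written as
$$
g_{b}(V)g_{b}(U) = g_{b}\big(U+q^{-1}UV+V\big),
$$
since $U$, $q^{-1}UV$ and $V$, taken pairwise in this order, again satisfy the Weyl-type relation $XY = q^{2}YX$.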
\begin{prop}
Let $X_{s_{1}s_{2}s_{1}}$ be any generator of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{1}s_{2}s_{1}$. Let $X_{s_{2}s_{1}s_{2}}$ be the same generator in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{2}s_{1}s_{2}$ and let $\varphi(x)$ be a complex-valued function.
The unitary transformation defined by
\begin{equation}
U = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
\end{equation}
relates the functions of the generators in these two representations by
$$
\varphi(X_{s_{2}s_{1}s_{2}}) = U\varphi(X_{s_{1}s_{2}s_{1}})U^{\ast}.
$$
\end{prop}
\noindent {\it Proof}.
Let $\mathcal{E}_{1}$ be the generator in the $s_{1}s_{2}s_{1}$ representation. Its unitary transform is given by
$$
U\varphi(\mathcal{E}_{1})U^{\ast} =
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})\times
$$
$$
g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\mathrm{i} b\partial_{w}})(h.c.) =
$$
$$
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
\times
$$
$$
g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\mathrm{i} b\partial_{w}})(h.c.).
$$
We have used the commutation of the factors $g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})$ and $g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})$. Now, according to the identity $g_{b}(x)g_{b}(\frac{1}{x}) = e^{\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}\log^{2}x}$ we have
$$
g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}) = e^{-\frac{\pi\mathrm{i}}{4\pi^2 b^{2}}(\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w})^{2}}
g_{b}(e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}).
$$
Let
$$
A_{1} = e^{-2\pi bw},
$$
$$
A_{2} = e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}.
$$
Then
$$
q^{-1}A_{1}A_{2} = e^{-\pi bu+\pi bv-3\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}
$$
and $A_{1}A_{2} = q^{2}A_{2}A_{1}$. Using the pentagon identity \cite{Ka0}:
$$
g_{b}(A_{2})g_{b}(A_{1}) = g_{b}(A_{1})g_{b}(q^{-1}A_{1}A_{2})g_{b}(A_{2}),
$$
we have
$$
g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})g_{b}(e^{-2\pi bw}) = g_{b}(e^{-2\pi bw})g_{b}(e^{-\pi bu+\pi bv-3\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}).
$$
Substituting all these into the expression for $U\varphi(\mathcal{E}_{1})U^{\ast}$ we obtain
$$
U\varphi(\mathcal{E}_{1})U^{\ast} =
$$
$$
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}
e^{-\frac{\pi\mathrm{i}}{4\pi^2 b^{2}}(\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w})^{2}}
g_{b}(e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})\times
$$
$$
g_{b}(e^{-2\pi bw})g_{b}(e^{-\pi bu+\pi bv-3\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
\varphi(e^{\pi bw-\mathrm{i} b\partial_{w}})(h.c.).
$$
Note that $g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})$ and
$\varphi(e^{\pi bw-\mathrm{i} b\partial_{w}})$ commute, so the quantum dilogarithm passes through and cancels with its hermitian conjugate:
$$
U\varphi(\mathcal{E}_{1})U^{\ast} =
$$
$$
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}
e^{-\frac{\pi\mathrm{i}}{4\pi^2 b^{2}}(\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w})^{2}}
g_{b}(e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})\times
$$
$$
g_{b}(e^{-2\pi bw})g_{b}(e^{-\pi bu+\pi bv-3\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
\varphi(e^{\pi bw-\mathrm{i} b\partial_{w}})(h.c.).
$$
Let $A$, $B$ be self-adjoint operators satisfying the relation $[A,B] = c$, where $c$ is a number. Let $f(x)$ be a function and $\alpha$ a number. Then
$$
e^{\alpha B^{2}}f(A) = f(A -2c\alpha B)e^{\alpha B^{2}}.
$$
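This identity follows from a conjugation argument: since $[B^{2},A] = B[B,A]+[B,A]B = -2cB$ commutes with $B^{2}$, the Baker-Campbell-Hausdorff series truncates and
$$
e^{\alpha B^{2}}A\,e^{-\alpha B^{2}} = A + \alpha[B^{2},A] = A - 2c\alpha B,
$$
so the same conjugation applied to any function $f(A)$ yields $f(A-2c\alpha B)$.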
Using this identity to push the exponent $e^{-\frac{\pi\mathrm{i}}{4\pi^2 b^{2}}(\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w})^{2}}$ to the right we obtain
$$
U\varphi(\mathcal{E}_{1})U^{\ast} =
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}
g_{b}(e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})\times
$$
$$
g_{b}(e^{-\pi bu+\pi bv-3\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-2\pi bu+2\pi bv-4\pi bw})
\varphi(e^{\pi bu-\pi bv+2\pi bw-\mathrm{i} b\partial_{u}})(h.c.)
$$
Now use the relation
$$
e^{\alpha x\partial_{y}}f(y,\partial_{x}) = f(y+\alpha x,\partial_{x}-\alpha\partial_{y})e^{\alpha x\partial_{y}},
$$
to push the exponents $e^{-w\partial_{u}}$ and $e^{w\partial_{v}}$ to the right:
$$
U\varphi(\mathcal{E}_{1})U^{\ast} =
(uv)(vw)g_{b}(e^{-\pi bu+\pi bv+\pi bw+\mathrm{i} b\partial_{v}-\mathrm{i} b\partial_{w}})\times
$$
$$
g_{b}(e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{v}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-2\pi bu+2\pi bv})
\varphi(e^{\pi bu-\pi bv-\mathrm{i} b\partial_{u}})(h.c.) =
$$
$$
g_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-2\pi bv+2\pi bw})
\varphi(e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}})(h.c.).
$$
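The relation used here to move $e^{-w\partial_{u}}$ and $e^{w\partial_{v}}$ to the right can be verified directly: $e^{\alpha x\partial_{y}}$ implements the substitution $y\mapsto y+\alpha x$, under which
$$
e^{\alpha x\partial_{y}}\,y\,e^{-\alpha x\partial_{y}} = y+\alpha x,
\qquad
e^{\alpha x\partial_{y}}\,\partial_{x}\,e^{-\alpha x\partial_{y}} = \partial_{x}-\alpha\partial_{y},
$$
while $x$ and $\partial_{y}$ are left unchanged.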
Let $\mathcal{E}_{2}$ be the generator in the $s_{1}s_{2}s_{1}$ representation. Then
$$
U\varphi(\mathcal{E}_{2})U^{\ast} =
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})\times
$$
$$
g_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-2\pi bv+2\pi bw})
\varphi(e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}})\times (h.c.),
$$
where by $(h.c.)$ we denote the hermitian conjugate of everything that stands before $\varphi(e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}})$. Noticing that
$$
g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{\pi bu-\pi bv +\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}) = 1,
$$
we obtain
$$
U\varphi(\mathcal{E}_{2})U^{\ast} =
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}\times
$$
$$
g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{-2\pi bv+2\pi bw})
\varphi(e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}})\times (h.c.).
$$
Let $A_{1}$, $A_{2}$ be as follows
$$
A_{1} = e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}},
$$
$$
A_{2} = e^{-2\pi bv+2\pi bw},
$$
Then
$$
q^{-1}A_{1}A_{2} = e^{-\pi bu -\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}},
$$
moreover
$$
A_{1}A_{2} = q^{2}A_{2}A_{1},
$$
and we can apply the pentagon identity \cite{Ka0}:
$$
g_{b}(A_{1})g_{b}(q^{-1}A_{1}A_{2})g_{b}(A_{2}) = g_{b}(A_{2})g_{b}(A_{1}),
$$
which leads to the following result
$$
U\varphi(\mathcal{E}_{2})U^{\ast} = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}
g_{b}(e^{-2\pi bv+2\pi bw})g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
\varphi(e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}})\times (h.c.).
$$
The operators $g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})$ and
$\varphi(e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}})$ commute, so the quantum dilogarithm passes through and cancels with its conjugate. We obtain
$$
U\varphi(\mathcal{E}_{2})U^{\ast} = (uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}
g_{b}(e^{-2\pi bv+2\pi bw})\varphi(e^{\pi bv-\pi bw-\mathrm{i} b\partial_{v}})\times (h.c.) =
$$
$$
(uv)(vw)g_{b}(e^{-2\pi bv})\varphi(e^{\pi bv-\mathrm{i} b\partial_{v}})g^{\ast}_{b}(e^{-2\pi bv})(vw)(uv) =
g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\mathrm{i} b\partial_{w}})g^{\ast}_{b}(e^{-2\pi bw}).
$$
Consider now the case of $\mathcal{F}_{1}$:
$$
U\varphi(\mathcal{F}_{1})U^{\ast} =
$$
$$
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})\times
$$
$$
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})(h.c.)
$$
The factors $g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})$ and $g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})$ commute, so we can change their order. After that we observe that
$$
g_{b}(e^{-\pi bu+\pi bv-\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})
g_{b}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}) =
e^{\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w})^{2}},
$$
which follows from the identity $g_{b}(x)g_{b}(\frac{1}{x}) = e^{\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}\log^{2}x}$. Doing this we obtain
$$
U\varphi(\mathcal{F}_{1})U^{\ast} =
$$
$$
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}
e^{\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w})^{2}}
g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}})\times
$$
$$
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})(h.c.).
$$
Again using the identity $g_{b}(x)g_{b}(\frac{1}{x}) = e^{\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}\log^{2}x}$ to rewrite
$$
g^{\ast}_{b}(e^{\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w}}) =
e^{-\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w})^{2}}
g_{b}(e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}),
$$
we have
$$
U\varphi(\mathcal{F}_{1})U^{\ast} =
$$
$$
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}
e^{\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w})^{2}}
e^{-\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w})^{2}}
g_{b}(e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})\times
$$
$$
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})(h.c.).
$$
Let
$$
A_{1} = e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}},
$$
$$
A_{2} = e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}.
$$
Then
$$
q^{-1}A_{1}A_{2} = e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}},
$$
$$
A_{1}A_{2} = q^{2}A_{2}A_{1},
$$
and we can apply the pentagon identity $g_{b}(A_{1})g_{b}(q^{-1}A_{1}A_{2})g_{b}(A_{2}) = g_{b}(A_{2})g_{b}(A_{1})$:
$$
g_{b}(e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw}) =
$$
$$
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})g_{b}(e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}).
$$
Note also that $g_{b}(e^{-\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})$ commutes with $\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})$, so we obtain
$$
U\varphi(\mathcal{F}_{1})U^{\ast} =
$$
$$
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}
e^{\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w})^{2}}
e^{-\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw-\mathrm{i} b\partial_{u}+\mathrm{i} b\partial_{w})^{2}}\times
$$
$$
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})(h.c.)
$$
To push the quadratic exponents to the right we use the formula $e^{\alpha B^{2}}f(A) = f(A -2c\alpha B)e^{\alpha B^{2}}$ for self-adjoint $A$ and $B$ satisfying the relation $[A,B] = c$, where $c$, $\alpha$ are numbers:
$$
U\varphi(\mathcal{F}_{1})U^{\ast} =
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}
e^{\frac{\pi\mathrm{i}}{4\pi^{2}b^{2}}(\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w})^{2}}\times
$$
$$
g_{b}(e^{-4\pi b\nu_{1} +3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})(h.c.) =
$$
$$
(uv)(vw)e^{-w\partial_{u}}e^{w\partial_{v}}
g_{b}(e^{-4\pi b\nu_{1}+2\pi bu})
\varphi(e^{2\pi b\nu_{1}-\pi bu+\mathrm{i} b\partial_{u}})(h.c.) =
$$
$$
(uv)(vw)
g_{b}(e^{-4\pi b\nu_{1}+2\pi bu-2\pi bw})
\varphi(e^{2\pi b\nu_{1}-\pi bu+\pi bw+\mathrm{i} b\partial_{u}})(h.c.) =
$$
$$
g_{b}(e^{-4\pi b\nu_{1}-2\pi bu+2\pi bv})
\varphi(e^{2\pi b\nu_{1}+\pi bu-\pi bv+\mathrm{i} b\partial_{v}})(h.c.).
$$
$\Box$
Let $K_{1}$, $\mathcal{E}_{1}$, $\mathcal{F}_{1}$ be the subset of $U_{q}(\mathfrak{sl}(3))$ generators in the $s_{1}s_{2}s_{1}$ positive principal series representation. Recall that for a complex-valued function $\varphi(x)$ we have
$$
\varphi(K_{1}) = \varphi(e^{-2\pi b\nu_{1}+2\pi bu-\pi bv+2\pi bw}),
$$
$$
\varphi(\mathcal{E}_{1}) = g_{b}(e^{-2\pi bw})\varphi(e^{\pi bw-\mathrm{i} b\partial_{w}})g_{b}^{\ast}(e^{-2\pi bw}),
$$
$$
\varphi(\mathcal{F}_{1}) = g_{b}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})\times
$$
$$
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})\times
$$
$$
g_{b}^{\ast}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
g_{b}^{\ast}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}^{\ast}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}).
$$
This subset generates a $U_{q}(\mathfrak{sl}(2))$ subalgebra.
The unitary transform in the following lemma was given in the proof of Theorem 4.7 in \cite{Ip1}. It was used as the first step in mapping the generators $K_{i}$, $\mathcal{E}_{i}$, $\mathcal{F}_{i}$ of the $U_{q}(\mathfrak{sl}(2))_{i}$ subalgebra of $U_{q}(\mathfrak{g})$ to the formulas corresponding to positive principal series representations of $U_{q}(\mathfrak{sl}(2))$. We explicitly check its action on functions of the generators.
\begin{lem}
Let $K_{1}$, $\mathcal{E}_{1}$, $\mathcal{F}_{1}$ be as above. Let $\varphi(x)$ be a complex-valued function. Let
\begin{equation}
V = e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}}
e^{-\frac{\pi\mathrm{i} u^{2}}{2}+2\pi\mathrm{i}\nu_{1}u}e^{-u\partial_{w}}e^{-\frac{\pi\mathrm{i} w^{2}}{2}}g_{b}(e^{2\pi bw})
g_{b}(e^{4\pi b\nu_{1}-2\pi bu})
\end{equation}
be a unitary transform. Then
\begin{equation}
V\varphi(K_{1})V^{\ast} = \varphi(e^{2\pi bw}),
\end{equation}
\begin{equation}
V\varphi(\mathcal{E}_{1})V^{\ast} = \varphi(e^{-\mathrm{i} b\partial_{w}}),
\end{equation}
\begin{multline}
V\varphi(\mathcal{F}_{1})V^{\ast} = g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\mathrm{i} b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\times \\
\varphi(e^{-2\pi bw+\mathrm{i} b\partial_{w}})\times \\
g^{\ast}_{b}(e^{2\pi bw}e^{2\pi bu})g^{\ast}_{b}(e^{2\pi bw}e^{\mathrm{i} b\partial_{u}})g^{\ast}_{b}(e^{2\pi bw}e^{-2\pi bu}).
\end{multline}
\end{lem}
\noindent {\it Proof}.
$$
V\varphi(\mathcal{F}_{1})V^{\ast} =
e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}}
e^{-\frac{\pi\mathrm{i} u^{2}}{2}+2\pi\mathrm{i}\nu_{1}u}e^{-u\partial_{w}}e^{-\frac{\pi\mathrm{i} w^{2}}{2}}g_{b}(e^{2\pi bw})\times
$$
$$
g_{b}(e^{4\pi b\nu_{1}-2\pi bu})
g_{b}(e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})\times
$$
$$
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})\times (h.c.).
$$
Let
$$
U_{1} = e^{4\pi b\nu_{1}-2\pi bu},
$$
$$
U_{2} = e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}}.
$$
Then
$$
q^{-1}U_{1}U_{2} = e^{\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}},
$$
and the following relation holds
$$
U_{1}U_{2} = q^{2}U_{2}U_{1},
$$
which allows us to use the quantum pentagon identity
$$
g_{b}(U_{1})g_{b}(q^{-1}U_{1}U_{2})g_{b}(U_{2}) = g_{b}(U_{2})g_{b}(U_{1}).
$$
We obtain
$$
V\varphi(\mathcal{F}_{1})V^{\ast} =
e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}}
e^{-\frac{\pi\mathrm{i} u^{2}}{2}+2\pi\mathrm{i}\nu_{1}u}e^{-u\partial_{w}}e^{-\frac{\pi\mathrm{i} w^{2}}{2}}g_{b}(e^{2\pi bw})\times
$$
$$
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{4\pi b\nu_{1}-2\pi bu})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})\times (h.c.)
$$
Note that the operator $g_{b}(e^{4\pi b\nu_{1}-2\pi bu})$ commutes with $g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})$ and $\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})$, so it passes through and cancels with its hermitian conjugate.
$$
V\varphi(\mathcal{F}_{1})V^{\ast} =
e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}}
e^{-\frac{\pi\mathrm{i} u^{2}}{2}+2\pi\mathrm{i}\nu_{1}u}e^{-u\partial_{w}}e^{-\frac{\pi\mathrm{i} w^{2}}{2}}g_{b}(e^{2\pi bw})\times
$$
$$
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-\pi bw+\mathrm{i} b\partial_{w}})\times (h.c.).
$$
Using the following operator relations
$$
e^{\alpha x^{2} +\beta x}f(\partial_{x}) = f(\partial_{x}-2\alpha x -\beta)e^{\alpha x^{2} +\beta x},
$$
$$
e^{\alpha x\partial_{y}}f(y,\partial_{x}) = f(y+\alpha x,\partial_{x}-\alpha\partial_{y})e^{\alpha x\partial_{y}},
$$
we move all the exponents to the right
$$
V\varphi(\mathcal{F}_{1})V^{\ast} =
e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}}
e^{-\frac{\pi\mathrm{i} u^{2}}{2}+2\pi\mathrm{i}\nu_{1}u}e^{-u\partial_{w}}g_{b}(e^{2\pi bw})\times
$$
$$
g_{b}(e^{-4\pi b\nu_{1}+3\pi bu-\pi bv+2\pi bw+\mathrm{i} b\partial_{u}-\mathrm{i} b\partial_{w}})
g_{b}(e^{-4\pi b\nu_{1}+4\pi bu-2\pi bv+2\pi bw})
\varphi(e^{2\pi b\nu_{1}-2\pi bu+\pi bv-2\pi bw+\mathrm{i} b\partial_{w}})\times (h.c.) =
$$
$$
e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}}
e^{-\frac{\pi\mathrm{i} u^{2}}{2}+2\pi\mathrm{i}\nu_{1}u}g_{b}(e^{2\pi bw-2\pi bu})\times
$$
$$
g_{b}(e^{-4\pi b\nu_{1}+\pi bu-\pi bv+2\pi bw+\mathrm{i} b\partial_{u}})
g_{b}(e^{-4\pi b\nu_{1}+2\pi bu-2\pi bv+2\pi bw})
\varphi(e^{2\pi b\nu_{1}+\pi bv-2\pi bw+\mathrm{i} b\partial_{w}})\times (h.c.) =
$$
$$
e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}}
g_{b}(e^{2\pi bw-2\pi bu})
g_{b}(e^{-2\pi b\nu_{1}-\pi bv+2\pi bw+\mathrm{i} b\partial_{u}})\times
$$
$$
g_{b}(e^{-4\pi b\nu_{1}+2\pi bu-2\pi bv+2\pi bw})
\varphi(e^{2\pi b\nu_{1}+\pi bv-2\pi bw+\mathrm{i} b\partial_{w}})\times (h.c.) =
$$
$$
g_{b}(e^{2\pi bw-2\pi bu})g_{b}(e^{2\pi bw+\mathrm{i} b\partial_{u}})
g_{b}(e^{2\pi bw+2\pi bu})
\varphi(e^{-2\pi bw+\mathrm{i} b\partial_{w}})
g^{\ast}_{b}(e^{2\pi bw+2\pi bu})g^{\ast}_{b}(e^{2\pi bw+\mathrm{i} b\partial_{u}})
g^{\ast}_{b}(e^{2\pi bw-2\pi bu}).
$$
$\Box$
To finish the mapping $\varphi(K_{1})$,$\varphi(\mathcal{E}_{1})$,$\varphi(\mathcal{F}_{1})\rightarrow$ $\varphi(K)$,$\varphi(\mathcal{E})$,$\varphi(\mathcal{F})$, where the second set of operators is defined by the equations (\ref{function of E})-(\ref{function of F}), we need to perform a certain integral transformation which will be defined shortly.
Let $\lambda$ be a positive real number. Define the following set of functions \cite{Ka2}, \cite{KLSTS}:
\begin{equation}
\Phi_{\lambda}(u) = e^{\pi\mathrm{i} u^{2}+\pi Qu}G_{b}(-\mathrm{i} u+\mathrm{i}\lambda)G_{b}(-\mathrm{i} u-\mathrm{i}\lambda).
\end{equation}
The integral transform $\Phi$ is defined by
$$
\Phi: L^{2}(\mathbb{R}) \rightarrow L^{2}(\mathbb{R}^{+},d\mu(\lambda)),
$$
\begin{equation}\label{Kashaev integral transform}
\Phi: f(u) \rightarrow F(\lambda) = \int\limits_{\mathbb{R}-\mathrm{i} 0} du f(u)\Phi^{\ast}_{\lambda}(u).
\end{equation}
This transform is an isometry, see \cite{Ka2}. The inverse is given by
$$
\Phi^{-1}: L^{2}(\mathbb{R}^{+},d\mu(\lambda)) \rightarrow L^{2}(\mathbb{R}),
$$
\begin{equation}\label{Kashaev inverse integral transform}
\Phi^{-1} : F(\lambda) \rightarrow f(u) = \lim\limits_{\epsilon\rightarrow 0}\int\limits_{0}^{+\infty}F(\lambda)\Phi_{\lambda}(u+\mathrm{i}\epsilon)e^{-2\pi\epsilon u} d\mu(\lambda),
\end{equation}
with the measure given by $d\mu(\lambda) = 4\sinh(\pi b\lambda)\sinh(\pi b^{-1}\lambda)\,d\lambda$.
\begin{prop}
The function $\Phi_{\lambda}(u)$ is an eigenfunction of the operator $g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\mathrm{i} b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})$:
\begin{equation}
g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\mathrm{i} b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\Phi_{\lambda}(u) =
g_{b}(e^{2\pi b\lambda+2\pi bw})g_{b}(e^{-2\pi b\lambda+2\pi bw})\Phi_{\lambda}(u).
\end{equation}
\end{prop}
\noindent {\it Proof}.
Recalling the definition of $g_{b}(x)$,
$$
g_{b}(x) = \frac{\bar{\zeta}_{b}}{G_{b}(\frac{Q}{2} + \frac{1}{2\pi\mathrm{i} b}\log x)},
$$
and the Fourier transform
$$
g_{b}(x) = \int d\tau x^{\mathrm{i} b^{-1}\tau}e^{\pi Q\tau}G_{b}(-\mathrm{i}\tau),
$$
we obtain
$$
g_{b}(e^{2\pi bw}e^{-2\pi bu}) = \frac{\bar{\zeta}_{b}}{G_{b}(\frac{Q}{2} -\mathrm{i} w + \mathrm{i} u)},
$$
$$
g_{b}(e^{2\pi bw}e^{2\pi bu}) = \frac{\bar{\zeta}_{b}}{G_{b}(\frac{Q}{2} -\mathrm{i} w - \mathrm{i} u)},
$$
$$
g_{b}(e^{2\pi bw}e^{\mathrm{i} b\partial_{u}}) = \int d\tau e^{\pi Q\tau+2\pi\mathrm{i} w\tau}G_{b}(-\mathrm{i}\tau)e^{-\tau\partial_{u}}.
$$
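The last expression follows from the Fourier representation applied to the operator argument: since $e^{2\pi bw}$ and $e^{\mathrm{i} b\partial_{u}}$ commute,
$$
\big(e^{2\pi bw}e^{\mathrm{i} b\partial_{u}}\big)^{\mathrm{i} b^{-1}\tau} = e^{2\pi\mathrm{i} w\tau}\,e^{-\tau\partial_{u}},
$$
so the integrand contains the shift operator $e^{-\tau\partial_{u}}$ acting on functions of $u$.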
Substituting these expressions into the left-hand side of the eigenvalue equation we obtain
$$
g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\mathrm{i} b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\Phi_{\lambda}(u) =
$$
$$
\frac{1}{G_{b}(\frac{Q}{2}-\mathrm{i} w+\mathrm{i} u)}\int d\tau e^{\pi Q\tau + 2\pi\mathrm{i} w\tau}G_{b}(-\mathrm{i}\tau)e^{-\tau\partial_{u}}
\frac{e^{\pi\mathrm{i} u^{2}+\pi Qu}G_{b}(-\mathrm{i} u+\mathrm{i}\lambda)G_{b}(-\mathrm{i} u-\mathrm{i}\lambda)}{G_{b}(\frac{Q}{2}-\mathrm{i} w-\mathrm{i} u)} =
$$
$$
\frac{1}{G_{b}(\frac{Q}{2}-\mathrm{i} w+\mathrm{i} u)}\int d\tau e^{\pi Q\tau+2\pi\mathrm{i} w\tau+\pi\mathrm{i}(u-\tau)^{2}+\pi Q(u-\tau)}
\frac{G_{b}(-\mathrm{i}\tau)G_{b}(-\mathrm{i} u+\mathrm{i}\lambda+\mathrm{i}\tau)G_{b}(-\mathrm{i} u-\mathrm{i}\lambda+\mathrm{i}\tau)}{G_{b}(\frac{Q}{2}-\mathrm{i} w-\mathrm{i} u+\mathrm{i}\tau)}
$$
Applying the reflection formula
$$
G_{b}(-\mathrm{i}\tau) = \frac{e^{-\pi\mathrm{i}\tau^{2}-\pi Q\tau}}{G_{b}(Q+\mathrm{i}\tau)},
$$
we get
$$
g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\mathrm{i} b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\Phi_{\lambda}(u) =
$$
$$
\frac{e^{\pi\mathrm{i} u^{2}+\pi Qu}}{G_{b}(\frac{Q}{2}-\mathrm{i} w+\mathrm{i} u)}\int d\tau
e^{-2\pi(\frac{Q}{2}+\mathrm{i} u-\mathrm{i} w)\tau}
\frac{G_{b}(-\mathrm{i} u+\mathrm{i}\lambda+\mathrm{i}\tau)G_{b}(-\mathrm{i} u-\mathrm{i}\lambda+\mathrm{i}\tau)}{G_{b}(\frac{Q}{2}-\mathrm{i} w-\mathrm{i} u+\mathrm{i}\tau)G_{b}(Q+\mathrm{i}\tau)}.
$$
Let $\alpha = -\mathrm{i} u+\mathrm{i}\lambda$, $\beta = -\mathrm{i} u-\mathrm{i}\lambda$, $\gamma = \frac{Q}{2}+\mathrm{i} u-\mathrm{i} w$. Then
$$
g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\mathrm{i} b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\Phi_{\lambda}(u) =
$$
$$
\frac{e^{\pi\mathrm{i} u^{2}+\pi Qu}}{G_{b}(\frac{Q}{2}-\mathrm{i} w+\mathrm{i} u)}\int d\tau
e^{-2\pi\gamma\tau}
\frac{G_{b}(\alpha+\mathrm{i}\tau)G_{b}(\beta+\mathrm{i}\tau)}{G_{b}(\alpha+\beta+\gamma+\mathrm{i}\tau)G_{b}(Q+\mathrm{i}\tau)} =
$$
$$
\frac{e^{\pi\mathrm{i} u^{2}+\pi Qu}}{G_{b}(\frac{Q}{2}-\mathrm{i} w+\mathrm{i} u)}
\frac{G_{b}(\alpha)G_{b}(\beta)G_{b}(\gamma)}{G_{b}(\alpha+\gamma)G_{b}(\beta+\gamma)},
$$
where the $4$-$5$ integral identity \cite{V} has been used. Substituting $\alpha$, $\beta$, $\gamma$ back we obtain
$$
g_{b}(e^{2\pi bw}e^{-2\pi bu})g_{b}(e^{2\pi bw}e^{\mathrm{i} b\partial_{u}})g_{b}(e^{2\pi bw}e^{2\pi bu})\Phi_{\lambda}(u) =
e^{\pi\mathrm{i} u^{2}+\pi Qu}\frac{G_{b}(-\mathrm{i} u+\mathrm{i}\lambda)G_{b}(-\mathrm{i} u-\mathrm{i}\lambda)}{G_{b}(\frac{Q}{2}+\mathrm{i}\lambda-\mathrm{i} w)G_{b}(\frac{Q}{2}-\mathrm{i}\lambda-\mathrm{i} w)} =
$$
$$
\frac{1}{G_{b}(\frac{Q}{2}+\mathrm{i}\lambda-\mathrm{i} w)G_{b}(\frac{Q}{2}-\mathrm{i}\lambda-\mathrm{i} w)}\Phi_{\lambda}(u) =
g_{b}(e^{2\pi b\lambda+2\pi bw})g_{b}(e^{-2\pi b\lambda+2\pi bw})\Phi_{\lambda}(u).
$$
$\Box$
\begin{cor}
Let $\varphi(K_{1})$, $\varphi(\mathcal{E}_{1})$, $\varphi(\mathcal{F}_{1})$ be the functions of the subset of generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{1}s_{2}s_{1}$ of the longest Weyl element. Let $\Phi$, $\Phi^{-1}$ be the integral transform and its inverse defined in (\ref{Kashaev integral transform})-(\ref{Kashaev inverse integral transform}). Let $\Omega_{1}$ be the unitary transform defined by
\begin{equation}
\Omega_{1} =
e^{-\frac{\pi\mathrm{i}}{2}(\lambda+w)^{2}}g_{b}(e^{-2\pi b\lambda-2\pi bw})
\circ\Phi \circ e^{(\nu_{1}+\frac{v}{2})\partial_{w}}e^{(\nu_{1}+\frac{v}{2})\partial_{u}}
e^{-\frac{\pi\mathrm{i} u^{2}}{2}+2\pi\mathrm{i}\nu_{1}u}e^{-u\partial_{w}}e^{-\frac{\pi\mathrm{i} w^{2}}{2}}g_{b}(e^{2\pi bw})
g_{b}(e^{4\pi b\nu_{1}-2\pi bu}).
\end{equation}
Then
\begin{equation}
\Omega_{1}\varphi(K_{1})\Omega_{1}^{\ast} = \varphi(e^{2\pi bw}),
\end{equation}
\begin{equation}
\Omega_{1}\varphi(\mathcal{E}_{1})\Omega_{1}^{\ast} = g_{b}(e^{-2\pi b\lambda-2\pi bw})\varphi(e^{\pi b\lambda +\pi bw - \mathrm{i} b\partial_{w}})g^{\ast}_{b}(e^{-2\pi b\lambda-2\pi bw}),
\end{equation}
\begin{equation}
\Omega_{1}\varphi(\mathcal{F}_{1})\Omega_{1}^{\ast} = g_{b}(e^{-2\pi b\lambda+2\pi bw})\varphi(e^{\pi b\lambda-\pi bw+\mathrm{i} b\partial_{w}})g^{\ast}_{b}(e^{-2\pi b\lambda+2\pi bw}).
\end{equation}
\end{cor}
\begin{cor}
Let $\varphi(K_{2})$, $\varphi(\mathcal{E}_{2})$, $\varphi(\mathcal{F}_{2})$ be the functions of the subset of generators of $U_{q}(\mathfrak{sl}(3))$ in the positive principal series representation corresponding to the reduced expression $w_{0} = s_{2}s_{1}s_{2}$ of the longest Weyl element. Let $\Omega_{2}$ be a unitary transform defined by
\begin{equation}
\Omega_{2} =
e^{-\frac{\pi\mathrm{i}}{2}(\lambda+w)^{2}}g_{b}(e^{-2\pi b\lambda-2\pi bw})
\circ\Phi \circ e^{(\nu_{2}+\frac{v}{2})\partial_{w}}e^{(\nu_{2}+\frac{v}{2})\partial_{u}}
e^{-\frac{\pi\mathrm{i} u^{2}}{2}+2\pi\mathrm{i}\nu_{2}u}e^{-u\partial_{w}}e^{-\frac{\pi\mathrm{i} w^{2}}{2}}g_{b}(e^{2\pi bw})
g_{b}(e^{4\pi b\nu_{2}-2\pi bu}).
\end{equation}
Then
\begin{equation}
\Omega_{2}\varphi(K_{2})\Omega_{2}^{\ast} = \varphi(e^{2\pi bw}),
\end{equation}
\begin{equation}
\Omega_{2}\varphi(\mathcal{E}_{2})\Omega_{2}^{\ast} = g_{b}(e^{-2\pi b\lambda-2\pi bw})\varphi(e^{\pi b\lambda +\pi bw - \mathrm{i} b\partial_{w}})g^{\ast}_{b}(e^{-2\pi b\lambda-2\pi bw}),
\end{equation}
\begin{equation}
\Omega_{2}\varphi(\mathcal{F}_{2})\Omega_{2}^{\ast} = g_{b}(e^{-2\pi b\lambda+2\pi bw})\varphi(e^{\pi b\lambda-\pi bw+\mathrm{i} b\partial_{w}})g^{\ast}_{b}(e^{-2\pi b\lambda+2\pi bw}).
\end{equation}
\end{cor}
\noindent {\it Proof}.
Note that swapping the generators and parameters $K_{1}\leftrightarrow K_{2}$, $\mathcal{E}_{1}\leftrightarrow \mathcal{E}_{2}$, $\mathcal{F}_{1}\leftrightarrow \mathcal{F}_{2}$, $\nu_{1}\leftrightarrow \nu_{2}$ in the representation of $U_{q}(\mathfrak{sl}(3))$ corresponding to one choice of reduced expression of the longest Weyl element gives the representation for the other choice. So, from the statement that $\Omega_{1}$ transforms the action of the operators
$\varphi(K_{1})$, $\varphi(\mathcal{E}_{1})$, $\varphi(\mathcal{F}_{1})$ in the $s_{1}s_{2}s_{1}$ representation to the $U_{q}(\mathfrak{sl}(2))$ formulas (\ref{function of E})-(\ref{function of F}), it automatically follows that $\Omega_{2}$, which is obtained from $\Omega_{1}$ by replacing the parameter $\nu_{1}$ by $\nu_{2}$, transforms the action of the operators
$\varphi(K_{2})$, $\varphi(\mathcal{E}_{2})$, $\varphi(\mathcal{F}_{2})$ in the $s_{2}s_{1}s_{2}$ representation to the $U_{q}(\mathfrak{sl}(2))$ formulas.
$\Box$
\begin{cor}
In the positive principal series representation corresponding to any reduced expression of the longest Weyl element, the generalized Kac's identity holds.
\end{cor}
\noindent {\it Proof}.
As follows from Corollaries 4.2 and 4.3, there is a unitary transformation which transforms the operators $\varphi(K_{i})$, $\varphi(\mathcal{E}_{i})$, $\varphi(\mathcal{F}_{i})$ defined in the positive principal series representation of $U_{q}(\mathfrak{sl}(3))$ to the operators $\varphi(K)$, $\varphi(\mathcal{E})$, $\varphi(\mathcal{F})$ defined in the positive principal series representation of $U_{q}(\mathfrak{sl}(2))$. Since the generalized Kac's identity is valid in the $U_{q}(\mathfrak{sl}(2))$ case, it is valid in the $U_{q}(\mathfrak{sl}(3))$ case as well.
$\Box$\\*
This completes the proof of Theorem 4.1.
Note that we have also given, in the $U_{q}(\mathfrak{sl}(3))$ case, another proof of Theorem 4.7 in \cite{Ip1}, which states that
the positive principal series representation of $U_{q}(\mathfrak{sl}(3))$ decomposes into a direct integral of positive principal series representations of its $U_{q}(\mathfrak{sl}(2))$ subalgebra corresponding to any simple root.
\newpage
\begin{thebibliography}{}
\bibitem{BT}
A.~Bytsko, J.~Teschner, R-operator, co-product and Haar-measure for the modular double of $U_q (sl(2;R))$, arXiv:math/0208191v2.
\bibitem{ChPr}
V.~Chari, A.~Pressley, A guide to quantum groups, Cambridge University Press, 1994.
\bibitem{F0}
L.~Faddeev, Current-Like Variables in Massive and Massless Integrable Models, arXiv:hep-th/9408041v1.
\bibitem{F1}
L.~Faddeev, Discrete Heisenberg-Weyl group and modular group, Lett. Math. Phys. v.34 (1995), 249.
\bibitem{F2}
L.~Faddeev, Modular Double of Quantum Group, arXiv:math/9912078v1.
\bibitem{FKV}
L.~Faddeev, R.~Kashaev, A.~Volkov, Strongly coupled quantum discrete Liouville theory I: Algebraic approach and duality, arXiv:hep-th/0006156.
\bibitem{FrIp}
I.~Frenkel, I.~Ip, Positive representations of split real quantum groups and future perspectives, arXiv:1111.1033v1 [math.RT].
\bibitem{Ip1}
I.~Ip, Positive Representations of Split Real Quantum Groups: The Universal R Operator, arXiv:1212.5149v1.
\bibitem{Ip2}
I.~Ip, Positive Representations of Split Real simply-laced Quantum Groups, arXiv:1203.2018v4.
\bibitem{Ka0}
R.~M.~Kashaev, On the spectrum of Dehn twists in quantum Teichmuller theory, arXiv:math/0008148v1.
\bibitem{Ka1}
R.~M.~Kashaev, The non-compact quantum dilogarithm and the Baxter equations, J. Stat. Phys. 102 (2001), 923--936.
\bibitem{Ka2}
R.~M.~Kashaev, The quantum dilogarithm and Dehn twist in quantum Teichmuller theory, Integrable Structures of Exactly Solvable Two-Dimensional Models of Quantum Field Theory (Kiev, Ukraine, September 25-30, 2000), NATO Sci. Ser. II Math. Phys. Chem., vol. 35, Kluwer, Dordrecht, 211--221 (2001).
\bibitem{KLSTS}
S.~Kharchev, D.~Lebedev, M.~Semenov-Tian-Shansky, Unitary representations of $U_{q}(\mathfrak{sl}(2,\mathbb{R}))$, the modular double, and the multiparticle $q$-deformed Toda chains, Commun. Math. Phys. v.225 (2002), 573--609.
\bibitem{Lu book}
G.~Lusztig, Introduction to quantum groups, Progress in Mathematics, 110, Boston, MA, 1993.
\bibitem{Lu 1}
G.~Lusztig, Modular representations and quantum groups, Contemporary Mathematics, v.82 (1989), 59--77.
\bibitem{PT}
B.~Ponsot, J.~Teschner, Liouville bootstrap via harmonic analysis on a noncompact quantum group, arXiv:hep-th/9911110v2.
\bibitem{PT1}
B.~Ponsot, J.~Teschner, Clebsch-Gordan and Racah-Wigner coefficients for a continuous series of representations of $U_q(\mathfrak{sl}(2,R))$, arXiv:math/0007097v2.
\bibitem{Su1}
P.~Sultanich, On modular double of semisimple quantum groups, arXiv:1811.10934v1.
\bibitem{Su2}
P.~Sultanich, On explicit realization of algebra of complex divided powers of $U_{q}(\mathfrak{sl}(2))$, arXiv:1911.12902v1.
\bibitem{V}
A.~Yu.~Volkov, Noncommutative Hypergeometry, Commun. Math. Phys. 258 (2005), 257--273.
\end{thebibliography}
\end{document}
\begin{document}
\title{\bf \Large Customer sojourn time in $GI/GI/1$ feedback queue\\ in the presence of heavy tails}
\author{Sergey Foss\\Heriot-Watt University and \\ Novosibirsk State University\footnote{Research of S.~Foss is supported
by RSF research grant No. 17-11-01173. He also thanks EPFL, Lausanne for their hospitality}\\ \and Masakiyo Miyazawa\\ Tokyo University of Science\footnote{Research of M.~Miyazawa is supported in part by
JSPS KAKENHI Grant No. JP16H02786}}
\date{June 17, 2018 (to appear in the Journal of Statistical Physics)}
\maketitle
\begin{abstract}
We consider a single-server $GI/GI/1$ queueing system with feedback.
We assume the service time distribution to be (intermediate) regularly varying.
We find the tail asymptotics for a customer's sojourn time in two cases: the customer arrives in an empty system, and the customer arrives in the system
in the stationary regime. In particular, in the case of Poisson input we
obtain more explicit formulae than those in the general case. As auxiliary results, we find the tail asymptotics for the busy period
distribution in a single-server queue with an intermediate regularly varying service time distribution, and establish the principle-of-a-single-big-jump equivalences that characterise the asymptotics.
\end{abstract}
\begin{quotation}
\noindent {\bf Keywords:} single-server queue, feedback, heavy-tailed and intermediate regularly varying distributions, sojourn time, tail asymptotics, principle of a single big jump
\end{quotation}
\section{Introduction}
\label{sect:introduction}
In queueing theory, the sojourn time $U$ of a customer in a queueing system is one of the most important characteristics; it is the time from the customer's arrival instant to its departure instant.
In general, the distribution of $U$ is hard to find analytically, and research interest is directed to the asymptotics of the tail probability,
$\dd{P}(U > x)$, as $x \to \infty$ under various stochastic assumptions. Among them, the following assumption is typically used.
\begin{itemize}
\item [(\sect{introduction})] Delaying arrivals leads to an increase of the characteristic.
\end{itemize}
The assumption (\sect{introduction}) is known as a {\it monotonicity} property; it plays an important role in the asymptotic analysis of various characteristics. Another important factor for the tail asymptotic problem is the heaviness of service time distributions. A nonnegative random variable $X$ is said to have a heavy tail distribution if $\dd{E} \exp (sX) = \infty$ for all $s > 0$, and a light tail distribution, otherwise. If service time distributions have heavy tails (or light tails), we talk about a heavy (or light) tail regime of the system.
Under the heavy tail regime, the sojourn time problem is relatively well studied for the system satisfying the monotonicity assumption (\sect{introduction}) (see e.g. \cite{BaFo2004}, \cite{JeMoZw2004}, \cite{FoKo2012}, \cite{FoMy2014} and references therein).
However, if a customer separately takes multiple services, then the monotonicity is violated, in general. This is a common phenomenon in queueing networks. Clearly, {\it non-monotone} characteristics are also important, but we are unaware of any asymptotic results for them in the presence of heavy tails.
So we tackle the tail asymptotics problem for a non-monotone characteristic under the heavy tail regime, using a relatively simple model as one of the first attempts in this direction.
We consider the following single server system. Exogenous customers arrive at the system subject to $i.i.d.$ inter-arrival times $\{t_n\}$ with finite mean $a$, join at the end of a queue, and are served in the first-in first-out order by a single server with $i.i.d.$ service times $\{\sigma_n\}$. When a customer completes service, it returns to the end of the queue for another service, with probability $p\in (0,1)$, or leaves the system, with probability $q=1-p$. Both transitions are independent of everything else. Note that if a customer requires more than one service, its sojourn time is the sum of several periods of waiting in the queue, each followed by a service. One can see that the sojourn time $U$ does not have natural monotonicity properties with respect to inter-arrival times. We refer to this system as a $GI/GI/1$ feedback queue. If $p=0$ (no customer returns), then it is reduced to a standard $GI/GI/1$ queue (e.g., see \cite{As2003}).
Because of independent feedback of customers completing service, the $i$-th arriving customer, customer $i$ for short, requires a geometric number of services, say $K_i$, and their total duration is $\sum_{j=1}^{K_i} \sigma_i^{(j)}$ where $\{\sigma_i^{(j)}\}$ are $i.i.d.$ with finite mean $b$ and do not depend on $K_{i}$. Throughout the paper, we assume that the system is stable, that is, $\rho \equiv \lambda b/q < 1$, where $\lambda = 1/a$, and the service-time distribution is heavy-tailed. More precisely, we
assume the distribution to be {\it intermediate regularly varying} -- see the list of heavy-tailed distributions at the end of this section and in \app{properties}.
For this $GI/GI/1$ feedback queue, we derive the asymptotics of the probability $\dd{P}(U>x)$ for the sojourn time $U$ of customer $1$ as $x\to\infty$ under the heavy tail regime. This $U$ depends on the state of the system just before the arrival instant of customer $1$. We consider two scenarios for the system state found by customer $1$. The first one is that customer $1$ enters an empty queue. This is the most difficult case because
coefficients appearing in the asymptotics are sensitive not only to the first moments, but to the whole distribution of inter-arrival and service times, in general. The second one is that customer $1$ enters a stationary queue. In both cases, we also obtain simpler formulae in the case when the input is Poisson. In particular, the first moment of the sojourn time can be obtained in a closed form in this special case. A part of this information will be used for the general renewal arrival case.
Our model may be considered as a particular case of a two-server {\it generalised Jackson network} where server $i=1,2$ has service times $\{\sigma_n^{(i)}\}$. Each family of service times is $i.i.d.$ and they are mutually independent.
Customers arrive in a renewal input to server 1 and join the queue there. After service completion at server 1, a customer either leaves the network, with probability $p_{10}$, or joins the queue to the second server, with probability $p_{12}=1-p_{10}$. Similarly, after service completion at server 2, a customer either leaves the
network, with probability $p_{20}$, or joins the queue to server 1, with probability $p_{21}=1-p_{20}$. Customers are served in the order of their (external and internal) arrivals
to the servers.
If we let $\sigma_n^{(2)}\equiv 0$ and $p_{12}p_{21}=p$, we obtain our
model as a particular case indeed. So the study of our model is not only of interest in itself, but also opens a window to analysing a broad class of more general models.
In the $GI/GI/1$ feedback queue, one may change the service order in such a way that each customer continuously gets service without interruption when it completes service and returns to the queue. Then such a system is nothing else than the standard $GI/GI/1$ queue with ``new'' $i.i.d.$ service times $\sum_{j=1}^{K_i} \sigma_i^{(j)}$, and the sojourn time is again the sum of the waiting time and of the (new) service time. Although the busy period, which is the time from the moment when the system becomes non-empty to the moment when it is again empty, is unchanged by this modification, the sojourn time does change. We will use this modified system to study the tail asymptotic of the busy period.
Thus, our analysis is connected to the standard $GI/GI/1$ queue. In this case, there is no feedback, and the monotonicity is satisfied. Hence, the waiting time of a tagged customer is a key characteristic because the sojourn time $U$ is the sum of the waiting and service times, which are independent. In particular, the stationary waiting time is a major target for the tail asymptotic analysis. Let $u_{0}$ be the unfinished work found by the ``initial'' customer $1$ that arrives at the system at time $0$, and let $W_{n}$ be the waiting time of the $n$-th arriving customer. Then, $W_{1} = u_{0}$ and we have the {\it Lindley recursion}:
\begin{align}\label{eq:lindley}
W_{n+1}= \max (0, W_n+\sigma_{n}-t_n), \qquad n \ge 1.
\end{align}
We assume both inter-arrival and service times to have finite means, $a=\dd{E}t_n$ and $b=\dd{E}\sigma_n$.
Here $W_n$ forms a Markov chain which is {\it stable} (i.e. converges in distribution to the limiting/stationary random variable $W=W_{\infty}$) if the {\it traffic intensity} $\rho :=b/a $ is less than 1. It is well-known (see e.g. \cite{As2003}) that if $u_0=0$, then $W$ coincides in distribution
with the supremum $M=\sup_{n\ge 0} \sum_{i=1}^n (\sigma_i-t_i)$ of a random walk with increments $\sigma_n-t_n$.
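For illustration, the recursion \eq{lindley} can be simulated directly. The following Python sketch estimates the stationary tail $\dd{P}(W>x)$ by iterating the recursion along one long trajectory started from $W_{1}=0$; the exponential inter-arrival times, the Pareto service times and all numerical parameters are illustrative choices only and are not part of the assumptions of this paper.
\begin{verbatim}
import random

def pareto(beta, scale):
    # Pareto tail: P(sigma > x) = (scale/x)**beta for x >= scale
    return scale / random.random() ** (1.0 / beta)

def lindley_tail(x, n_steps=10**6, a=1.0, beta=2.5, scale=0.5, seed=1):
    """Estimate P(W > x) by iterating W_{n+1} = max(0, W_n + sigma_n - t_n)."""
    random.seed(seed)
    w, hits = 0.0, 0
    for _ in range(n_steps):
        sigma = pareto(beta, scale)        # heavy-tailed service time, mean 5/6
        t = random.expovariate(1.0 / a)    # inter-arrival time, mean a
        w = max(0.0, w + sigma - t)
        hits += (w > x)
    return hits / n_steps                  # time average along the trajectory

print(lindley_tail(x=20.0))
\end{verbatim}
Since $\rho = 5/6 < 1$ for these parameters, the recursion is stable and, by ergodicity, the time average converges to the stationary probability $\dd{P}(W>x)$.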
The tail asymptotics for $\dd{P}(M>x)$ as $x\to\infty$ is known under the {\it light-tail} and {\it heavy-tail} regimes. In the case of light tails, there are three types of the tail asymptotics, depending on properties
of the moment generating function $\varphi (s) = \dd{E} \exp (s\sigma)$ -- see e.g. \cite{FoPu(2011)} and references therein. In the case of heavy tails, the tail asymptotics
are known in the class of so-called {\it subexponential distributions} and are based on the {\it principle of a single big jump (PSBJ)}:
$M$ takes a large value if one of the service times is large. This PSBJ has been used for the asymptotic analysis in several other (relatively simple) stable queueing models, for
a number of characteristics (waiting time, sojourn time, queue length, busy cycle/period, maximal data, etc.) that possess the monotonicity property (\sect{introduction}) (see e.g. \cite{BaFo2004}).
Our proofs rely on the tail asymptotics for the first and stationary busy periods of the system. We establish the PSBJ for the busy period
first. This allows us to establish the principle for the sojourn time, since the tail asymptotics of the busy period is of the same order as that of the sojourn time. Then insensitivity properties of {\it intermediate regularly varying distributions} (see Appendix A again) allow us to compute the exact tail asymptotics for the sojourn time. The main result from \cite{FoZa2003} is a key tool
in our analysis.
The paper is organised as follows. \sectn{main} formally introduces the model and presents the main results. \sectn{psbj} states the tail asymptotics of the busy period and the PSBJ. All theorems from Sections \sect{main} and \sect{psbj} are proved in \sectn{proofs}. The Appendix consists of three parts. Part A contains an overview of basic properties of heavy-tailed distributions, and Part B the proof of Corollary 3.1. In Part C, we propose an alternative
approach to the proof of Corollary 3.2.
Throughout the paper, we use the following notation: $1(\cdot)$ is the indicator function of the event ``$\cdot$''. For two positive functions $f$ and $g$, we write $f(x)\sim g(x)$ if $f(x)/g(x)\to 1$ as $x\to\infty$, $f(x) \gtrsim g(x)$ if
$\liminf_{x\to\infty} f(x)/g(x) \ge 1$ and $f(x)\lesssim g(x)$ if
$\limsup_{x\to\infty} f(x)/g(x) \le 1$. For a distribution function $F$, its tail $\overline{F}$ is defined as
\begin{align*}
\overline{F}(x) = 1 - F(x).
\end{align*}
For random variables $X, Y$ with distributions $F, G$, respectively, $X =_{st} Y$ if $F = G$, and $X \le_{st} Y$ if $\overline{F}(x) \le \overline{G}(x)$ for all real $x$. Two families of events $A_x$ and $B_x$ of non-zero probabilities are equivalent, $A_x \simeq B_x$, if
\begin{align}\label{strongeq}
\dd{P} (A_x\Delta B_x) = o (\dd{P}(A_x)), \quad \mbox{as} \quad x\to\infty,
\end{align}
where $A_x\Delta B_x= (A_x\setminus B_x)\cup (B_x\setminus A_x)$ is the symmetric difference of $A_x$ and $B_x$. Note that equivalence $A_x\simeq B_x$ is symmetric, since
\begin{align*}
|\dd{P}(B_x) - \dd{P}(A_x)| \le \dd{P}(A_x \Delta B_x).
\end{align*}
Note also that $A_x\simeq B_x$ is stronger than equivalence $\dd{P}(A_{x}) \sim \dd{P}(B_{x})$.
We complete the Introduction by a short\\
\noindent {\large \it Summary of main classes of heavy tail distributions}
\\
\indent In this paper, we are concerned with several classes of heavy tail distributions. We list their definitions below. Their basic properties are discussed in \app{properties}.
In all definitions below, we assume that $\overline{F}(x)>0$ for all $x$.
\begin{enumerate}
\item Distribution $F$ on the real line belongs to the class ${\cal L}$ of {\it long-tailed} distributions if,
for some $y>0$ and
as $x\to\infty$,
\begin{equation}
\label{eq:long}
\frac{\overline{F}(x+y)}{\overline{F}(x)} \rightarrow 1
\end{equation}
(we may write equivalently $\overline{F}(x+y) \sim \overline{F}(x)$).
\item Distribution $F$ on the positive half-line belongs to the class ${\cal S}$ of {\it subexponential} distributions if
\begin{align*}
\int_0^x F(dt) \overline{F}(x-t) \sim \overline{F}(x)
\quad \mbox{as} \quad x\to\infty ,
\end{align*}
which is equivalent to $\overline{F\ast F}(x) = \int_{0}^{\infty} F(dt)\, \overline{F}(x-t) \sim 2 \overline{F}(x)$. Distribution $F$ of a real-valued random variable $\xi$ is subexponential if distribution $F^+(x) = F(x) 1(x\ge 0)$ of random variable $\xi^+ =\max (0,\xi )$ is subexponential.
\item Distribution $F$ on the real line belongs to the class ${\cal S}^{*}$ of {\it strong subexponential} distributions if
$m^+(F) \equiv \int_{0}^{\infty} \overline{F}(x) dx$ is finite and
$$
\int_0^x \overline{F}(y) \overline{F}(x-y) dy \sim
2m^+(F) \overline{F}(x) \quad \mbox{as} \quad x\to\infty .
$$
\item Distribution $F$ on the real line belongs to the class ${\cal D}$ of {\it dominantly varying} distributions if there exists
$\alpha >1$ (or, equivalently, for all $\alpha >1$) such that
\begin{align}
\label{eq:DV}
\liminf_{x\to\infty}
\frac{\overline{F}(\alpha x)}{\overline{F}(x)} > 0
\end{align}
\item Distribution $F$ on the real line belongs to the class ${\cal IRV}$ of {\it intermediate regularly varying} distributions if
\begin{align}
\label{eq:IRV}
\lim_{\alpha \downarrow 1} \liminf_{x\to\infty}
\frac{\overline{F}(\alpha x)}{
\overline{F}(x)} = 1.
\end{align}
\item Distribution $F$ on the real line belongs to the class ${\cal RV}$ of {\it regularly varying} distributions if, for some
$\beta >0$,
\begin{align}
\label{eq:RV}
\overline{F}(x) = x^{-\beta} L(x),
\end{align}
where $L(x)$ is a {\it slowly varying} function, i.e. $L(cx)\sim L(x)$ as
$x\to\infty$, for any $c>0$.
\end{enumerate}
The following relations between the classes introduced above may be found,
say, in the books \cite{EKM1997} or
\cite{FKZ2013}: in the class of distributions $F$ with finite $m^+(F)$,
\begin{align}
\label{eq:class order}
{\cal RV} \subset {\cal IRV} \subset {\cal L\cap D} \subset {\cal S^*} \subset {\cal S} \subset {\cal L}.
\end{align}
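For example, the pure Pareto tail $\overline{F}(x) = \min(1, x^{-\beta})$ with $\beta>1$ belongs to all of the above classes: \eq{RV} holds with $L \equiv 1$ and, for $x \ge 1$ and any $\alpha > 1$,
\begin{align*}
\frac{\overline{F}(\alpha x)}{\overline{F}(x)} = \alpha^{-\beta},
\end{align*}
so the limit in \eq{IRV} equals $\lim_{\alpha \downarrow 1} \alpha^{-\beta} = 1$ and \eq{DV} holds as well. By contrast, the Weibull-type tail $\overline{F}(x) = e^{-\sqrt{x}}$, $x \ge 0$, is long-tailed and subexponential but not dominantly varying, since $\overline{F}(\alpha x)/\overline{F}(x) = e^{-(\sqrt{\alpha}-1)\sqrt{x}} \to 0$ for every $\alpha>1$; in particular, ${\cal S} \not\subset {\cal D}$, so the assumption of intermediate regular variation is genuinely stronger than subexponentiality.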
\section{The modelling assumptions and main results}
\label{sect:main}
\setnewcounter
In this section, we describe the dynamics of the sojourn time of a tagged customer (customer $1$), and present main results on the tail asymptotics of its sojourn time. In Subsection 2.1, we formally introduce the $GI/GI/1$ feedback queue and then, in Subsection 2.2, a particular $M/GI/1$ feedback queue, which has Poisson arrivals. Then the main results are presented in Subsection 2.3.
\subsection{$GI/GI/1$ feedback queue}
\label{sect:GI/GI/1}
Let $K$ be the number of services of the tagged customer until its departure. By the feedback assumption, $K$ is geometrically distributed with parameter $p$, that is,
\begin{align}
\label{eq:K 1}
\dd{P}(K=k) = q p^{k-1}, \qquad k=1,2,\ldots,
\end{align}
and independent of everything else. Throughout the paper, we make the following assumptions:
\begin{itemize}
\item [(i)] The exogenous arrival process is a renewal process with a finite mean interarrival time $a > 0$.
\item [(ii)] All the service times that start after time $0$ are $i.i.d.$ with finite mean $b > 0$, they are jointly independent of the arrival process.
\item [(iii)] The system is stable, that is,
\begin{align}
\label{eq:stability 1}
\rho \equiv \lambda b/q < 1,
\end{align}
where $\lambda = 1/a$.
\end{itemize}
We denote the counting process of the exogenous arrivals by $N^{e}(\cdot) \equiv \{N^{e}(t); t \ge 0\}$. We use the notation $G$ for the service time distribution, and use $\sigma$ for a random variable subject to $G$.
Let $(X_{0},R^{s}_{0})$ be the pair of the number of earlier customers and the remaining service time of a customer being served at time 0, where $R^{s}_{0} = 0$ if there is no customer in the system. Let $u_0$
be the waiting time of the tagged customer before the start of its first
service. Then
\begin{align}\label{u0}
u_{0} = R^{s}_{0} + \sum_{i=1}^{X_{0}-1} \sigma_{0,i+1},
\end{align}
where $\sigma_{0,i}$'s for $i \ge 2$ are $i.i.d.$ random variables each of which has the same distribution as $\sigma$. There are two typical scenarios for the initial distribution, that is, the distribution of $(X_{0},R^{s}_{0})$.
\begin{itemize}
\item [(\sect{main}a)] A tagged arriving customer finds the system empty. That is, $(X_{0},R^{s}_{0}) = (0,0)$.
\item [(\sect{main}b)] A tagged arriving customer finds $X_{0}$ customers and the remaining service time $R^{s}_{0}$ of the customer being served. Thus, the initial state $(X_{0},R^{s}_{0}) \ne (0,0)$.
\end{itemize}
In this paper, we assume that the service time distribution is heavy tailed, and mainly consider the tail asymptotic of the sojourn time distribution of the $GI/GI/1$ feedback queue under the scenario (\sect{main}a). The case (\sect{main}b) when $X_{0}$ and $R_0^s$ are bounded by a constant may be studied very similarly to the case (\sect{main}a), therefore we do not analyse it. We consider the case (\sect{main}b) when $(X_{0},R^{s}_{0})$ is subject to the stationary distribution embedded at the arrival instants.
For given $(X_{0},R^{s}_{0})$, we have defined $u_{0}$. Let $X_{k}$ be the queue length behind the tagged customer when it finishes its $k$th service, $k \ge 1$, provided the tagged customer gets service at least $k$ times. Similarly, let $U_{k}$ be the sojourn time of the tagged customer measured from its $(k-1)$th service completion to its $k$th service completion, and let $T_{k}$ be the sojourn time of the tagged customer just after its $k$th service completion.
We now formally define random variables $X_{k}$, $U_{k}$ and $T_{k}$ by induction.
Let $T_0=0$. Denote the $k$th service time of the tagged customer by $\sigma_{k,0}$, while $\sigma_{k,i}, i=1,\ldots, X_{k-1}$ are the service times of the customers waiting before the tagged one on its $k$th return. Note that $\sigma_{k,i}$'s for $k \ge 1, i \ge 0$ are $i.i.d.$ random variables subject to the same distribution as $\sigma$. Then, $X_{k}$, $U_{k}$ and $T_{k}$ for $k \ge 1$ are defined as
\begin{align}
\label{eq:U k}
& U_{k} = \begin{cases}
\sigma_{1,0} +u_{0}& k = 1, \\
\sum_{i=0}^{X_{k-1}} \sigma_{k,i} & k \ge 2,
\end{cases},\\
\label{eq:t k}
& T_k = T_{k-1} + U_{k},\\
\label{eq:X k}
& X_{k} = N^{e}(T_k) - N^{e}(T_{k-1}) + N^{B}_{k}(X_{k-1}),
\end{align}
where $u_0$ is given by \eqref{u0}, and $N^{B}_{k}(n)$'s are $i.i.d.$ random variables each of which is subject to the Binomial distribution with parameters $n, p$. The dynamics of the sojourn time is depicted below when $X_{0} = 0$, that is, a tagged customer finds the system empty.
\begin{figure}
\caption{Sample path of the queue length process $L(t)$ and of $(X_{k})$.}
\label{fig:Feedback_dynamics_1}
\end{figure}
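As an illustration, the recursion \eq{U k}--\eq{X k} translates directly into a simulation. The following Python sketch generates realisations of $U$ for a tagged customer that finds the system empty ($u_{0}=X_{0}=0$), with renewal (here exponential) inter-arrival times and Pareto service times; all distributional choices and numerical parameters are for illustration only.
\begin{verbatim}
import random

def pareto(beta, scale):
    # P(sigma > x) = (scale/x)**beta for x >= scale
    return scale / random.random() ** (1.0 / beta)

def sojourn_time(p=0.3, a=1.0, beta=2.5, scale=0.3):
    """One realisation of U via the recursion for (U_k, T_k, X_k),
    for a tagged customer arriving at an empty system."""
    K = 1
    while random.random() < p:                  # geometric number of services
        K += 1
    T, X_prev = 0.0, 0
    next_arrival = random.expovariate(1.0 / a)  # exogenous renewal arrivals
    for k in range(1, K + 1):
        # U_k: own k-th service plus services of the X_{k-1} customers ahead
        U_k = pareto(beta, scale) + sum(pareto(beta, scale)
                                        for _ in range(X_prev))
        T += U_k                                # T_k = T_{k-1} + U_k
        arrivals = 0                            # N^e(T_k) - N^e(T_{k-1})
        while next_arrival <= T:
            arrivals += 1
            next_arrival += random.expovariate(1.0 / a)
        feedbacks = sum(random.random() < p for _ in range(X_prev))  # N^B_k(X_{k-1})
        X_prev = arrivals + feedbacks           # X_k
    return T                                    # U = T_K since u_0 = 0

samples = [sojourn_time() for _ in range(20000)]
print(sum(samples) / len(samples),
      sum(u > 30.0 for u in samples) / len(samples))
\end{verbatim}
Here $\rho = \lambda b/q \approx 0.71 < 1$, so the system is stable; such a direct simulation can serve as a numerical check of the asymptotic formulae below, although very long runs are needed to resolve the power tail.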
To make the dependence structure of $X_{k}, U_{k}, T_{k}$ clear, we introduce a filtration $\{\sr{F}_{t}; t \ge 0\}$ as
\begin{align*}
\sr{F}_{t} = \mbox{$\sigma$-field generated by } \{X_{0}, R^{s}_{0}, (N^{e}(u), N^{s}(u), N^{r}(u)), u \le t\},
\end{align*}
where $N^{s}(t)$ and $N^{r}(t)$ are the numbers of customers who completed service and who return to the queue, respectively, up to time $t$. Clearly, $T_{k}$ is a $\sr{F}_{t}$-stopping time, and $X_{k}$ and $U_{k}$ are $\sr{F}_{T_{k}}$-measurable. Furthermore, $\sigma_{k,0}$ and $\sigma_{k,i}$ for $i \ge 1$ are independent of $\sr{F}_{T_{k-1}}$. Then $U$, the sojourn time of the tagged customer,
may be represented as
\begin{align}
\label{eq:U 1}
U = u_{0} + \sum_{k=1}^{K} U_{k} = u_{0} + \sum_{k=1}^{K} \sum_{i=0}^{X_{k-1}} \sigma_{k, i}.
\end{align}
For $k\ge 0$, let $Y_{k} = \sum_{\ell = 0}^{k} X_{\ell}$, which is the total number of external and internal arrivals to the queue up to time $T_k$ plus the number of customers in the system at time $0$. Then
\begin{align}
\label{eq:Y recursive 1}
Y_{k} & = X_{0} + N^{e}(T_{k}) + \sum_{\ell=1}^{k} N^{B}_{\ell}\Big(X_{\ell-1}\Big) =_{st} X_{0} + N^{e}(T_{k}) + N^{B}_{k}\Big(Y_{k-1}\Big).
\end{align}
Hence, under scenario (\sect{main}a), we have $u_{0}= X_{0} = 0$, so
\begin{align}
\label{eq:U 1 (a)}
U = \sum_{k=1}^{K} \sum_{i=0}^{X_{k-1}} \sigma_{k, i} =_{st} \sum_{i=1}^{K+Y_{K-1}} \sigma_{i},
\end{align}
while, under scenario (\sect{main}b),
\begin{align}
\label{eq:U 1 (b)}
U =_{st} u_{0}+ \sum_{i=1}^{K+Y_{K-1}} \sigma_{i},
\end{align}
where $\sigma_{i}$'s are $i.i.d.$ random variables each of which has the same distribution as $\sigma$. Note that $K+Y_{K-1}$ is ${\cal F}_{T_{K-1}}$-measurable and depends, in general, on all
$\sigma_i$'s of customers who arrive before $T_{K-1}$.
This causes considerable difficulty in the asymptotic analysis of $U$.
Thus, we need to take the dependence structure in this representation of $U$ into account. Furthermore, for renewal input, $\{(U_{k}, X_{k}); k \ge 0\}$ is not a Markov chain in general.
On the other hand, if the arrival process $N^{e}(\cdot)$ is Poisson, then not only $\{(U_{k},X_{k}); k \ge 0\}$ but also $\{X_{k}; k \ge 0\}$ is a Markov chain with respect to the filtration $\{\sr{F}_{T_{k}}; k \ge 0\}$. In this case, we may obtain exact expressions for $\dd{E}X_k$ and then an explicit form for the tail asymptotics.
\subsection{$M/GI/1$ feedback queue and branching process}
\label{sect:M/GI/1}
In this subsection, we assume that the exogenous arrival process is Poisson with rate $\lambda > 0$. This model is analytically studied using Laplace transforms in \cite{Taka1963}, but no asymptotic results are given there. Note that we may consider $\{X_{k}; k\ge 0\}$ as a branching process and directly compute $\dd{E}(X_{k})$, which then will be used for the general renewal input case.
Since the Poisson process $N^{e}(\cdot)$ has independent increments, \eq{X k} is simplified to
\begin{align}
\label{eq:X recursive 1}
X_{k} & = N^{e}_{k}(U_{k}) + N^{B}_{k}(X_{k-1}), \qquad 1 \le k \le K,
\end{align}
using independent Poisson processes $N^{e}_{k}$ and independent Binomial random variables $N^{B}_{k}(n)$. Furthermore, \eq{X recursive 1} can be written as
\begin{align}
\label{eq:X recursive 2}
& X_{k} = \begin{cases}
N_{1,1}^{e}(\sigma_{1,0} +u_{0}) + N^{B}_{1}(X_{0}), & k=1, \\
N_{k,0}^{e}(\sigma_{k,0}) + \sum_{i=1}^{X_{k-1}} (N_{k,i}^{e}(\sigma_{k,i}) + N^{B}_{k,i}(1)), & k \ge 2,
\end{cases}
\end{align}
where $N_{k,i}^{e}(\cdot)$'s are independent Poisson processes with rate $\lambda$. Hence, $\{X_{k}; k \ge 1\}$ is a branching process with immigration.
Due to the branching structure, we can compute the moments of $X_{k}$ explicitly. We are particularly interested in their means.
From \eq{X recursive 2}, we have
\begin{align*}
& \dd{E}(X_{k}) = \begin{cases}
\lambda (b + \dd{E}(u_{0})) + p \dd{E}(X_{0}), \quad & k = 1,\\
\lambda b + \dd{E}(X_{k-1}) r, \qquad & k \ge 2,
\end{cases}
\end{align*}
where $r = \lambda b + p$. By the stability condition \eq{stability 1}, $r < 1$, and we have
\begin{align}
\label{eq:X k mean}
\dd{E}(X_{k}) = \frac {1 - r^{k-1}} {1 - r} \lambda b + \dd{E}(X_{1}) r^{k-1}, \qquad k \ge 1.
\end{align}
Hence, we have a uniform bound:
\begin{align}
\label{eq:Xk bound 2}
\dd{E}(X_{k}) \le \frac {1} {1 - r} \lambda b + \dd{E}(X_{1}), \qquad k \ge 1.
\end{align}
Furthermore, we have
\begin{align}
\label{eq:Z mean 1}
\dd{E}(K + Y_{K-1}) &= \dd{E}(K) + \sum_{k=1}^{\infty} \sum_{\ell=0}^{k-1} \dd{E}(X_{\ell}) \dd{P}(K = k) \nonumber\\
&= \dd{E}(K) + \dd{E}(X_{0}) + \sum_{\ell=1}^{\infty} \dd{E}(X_{\ell}) \dd{P}(K \ge \ell+1) \nonumber\\
&=\frac 1q + \dd{E}(X_{0}) + \frac {\lambda b p} {(1 - r)q} + \frac {(1-r) \dd{E}(X_{1}) - \lambda b } {(1 - r)(1 - p r)} p.
\end{align}
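The last equality in \eq{Z mean 1} is obtained by substituting \eq{X k mean} and summing geometric series: since $\dd{P}(K \ge \ell+1) = p^{\ell}$,
\begin{align*}
\sum_{\ell=1}^{\infty} \dd{E}(X_{\ell})\, p^{\ell}
&= \sum_{\ell=1}^{\infty} \left( \frac{1-r^{\ell-1}}{1-r}\, \lambda b + \dd{E}(X_{1})\, r^{\ell-1} \right) p^{\ell}
= \frac{\lambda b}{1-r}\left( \frac{p}{1-p} - \frac{p}{1-rp} \right) + \frac{\dd{E}(X_{1})\, p}{1-rp} \\
&= \frac{\lambda b\, p^{2}}{q(1-rp)} + \frac{\dd{E}(X_{1})\, p}{1-rp},
\end{align*}
and the elementary identity
\begin{align*}
\frac{\lambda b\, p^{2}}{q(1-rp)} = \frac{\lambda b\, p}{(1-r)q} - \frac{\lambda b\, p}{(1-r)(1-pr)}
\end{align*}
together with $\dd{E}(K) = 1/q$ yields the last line of \eq{Z mean 1}.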
Under the scenario (\sect{main}a), $\dd{E}(X_{k})$ of the $M/GI/1$ feedback queue will be used for the tail asymptotics of the sojourn time in the $GI/GI/1$ feedback queue. Thus, we introduce notation for these quantities. Let $X_{k}^{(0)}(M/GI/1)$ be the $X_{k}$ of the $M/GI/1$ feedback queue for $X_{0} = 0$, then define $m^{(0)}_{k}$ as
\begin{align*}
m^{(0)}_{0} = 0, \qquad m^{(0)}_{k} = \dd{E}(X_{k}^{(0)}(M/GI/1)), \quad k \ge 1.
\end{align*}
From \eq{X k mean}, we have
\begin{align}
\label{eq:m k}
& m^{(0)}_{k} = \frac {1 - r^{k}} {1 - r} \lambda b = (1+r+\ldots + r^{k-1}) \lambda b .
\end{align}
We will use $m^{(0)}_{K-1}$ and $m^{(0)}_{K}$ in the main results of the next section. It should be noticed that they are random variables, obtained by substituting $K-1$ and $K$ for $k$ in $m^{(0)}_{k}$.
\subsection{Main results}
\label{sect:main results}
We are ready to present the main results of this paper. They are proved in \sectn{proofs}.
\begin{theorem}
\label{thr:U asym 1}
For the stable $GI/GI/1$ feedback queue, assume that its service time distribution is intermediate regularly varying ($\sr{IRV}$). If the tagged customer finds the system empty, then
\begin{align}
\label{eq:U asym 1}
\dd{P}\left(U > x\right) & \sim \frac 1q \dd{E}\left( 1 + X^{(0)}_{K-1} \right) \dd{P}\left((1+ m^{(0)}_{K-1}) \sigma > x \right) \quad \mbox{ as } x \to \infty,
\end{align}
where $X^{(0)}_{k}$ is $X_{k}$ for $X_{0} = 0$, $m^{(0)}_{k} = \frac {1-r^{k}}{1-r} \lambda b$ for $r = p + \lambda b$ by \eq{m k}. Here the random
variable $K$ does not depend on $\{X_k^{(0)}\}$ and $\sigma$.
\end{theorem}
\begin{remark}
\label{rem:U asym 1}
In the case of a Poisson input, $\dd{E}\big(1+X^{(0)}_{K-1}\big) = \frac {q(1+p)} {1-rp}$. In general, it is hard to evaluate $\dd{E}\big(1+X^{(0)}_{K-1}\big)$ because this requires computing $\dd{E}(N^{e}(T_{k}))$.
\end{remark}
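The value quoted in \rem{U asym 1} for Poisson input can be verified directly from \eq{m k}: in this case $\dd{E}(X^{(0)}_{k-1}) = m^{(0)}_{k-1}$ and $K$ is independent of $\{X^{(0)}_{k}\}$, so
\begin{align*}
\dd{E}\big(1+X^{(0)}_{K-1}\big)
&= 1 + \sum_{k=1}^{\infty} q p^{k-1} m^{(0)}_{k-1}
= 1 + \frac{\lambda b\, q}{1-r} \left( \frac{1}{1-p} - \frac{1}{1-rp} \right) \\
&= 1 + \frac{\lambda b\, p}{1-rp}
= \frac{1-p^{2}}{1-rp} = \frac{q(1+p)}{1-rp},
\end{align*}
where the penultimate equality uses $r = p + \lambda b$.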
We prove this theorem in \sectn{proof U asym} using the PSBJ that is established in \thr{GG1final}.
\begin{corollary}
\label{cor:U finite 1}
Under the assumptions of \thr{U asym 1}, for each $k \ge 1$,
\begin{align}
\label{eq:U finite 1}
\dd{P}\left(T_{k} > x\right) & \sim \sum_{\ell=0}^{k-1} \dd{E}\left( 1 + X^{(0)}_{k-\ell-1} \right) \dd{P}\left((1+ m^{(0)}_{\ell}) \sigma > x \right) \quad \mbox{ as } x \to \infty.
\end{align}
\end{corollary}
This corollary is easily obtained from arguments used in the proof of \thr{U asym 1}. On the other hand, if we take the geometrically weighted sum of \eq{U finite 1} and if the interchange of this sum and the asymptotic limit is allowed, then we have \eq{U asym 1}. This interchange of the limits is justified by \thr{GG1final}. However, \cor{U finite 1} itself can be proved directly. We provide such a proof for a slightly extended version of \cor{U finite 1} in \app{alternative}.
\begin{corollary}
\label{cor:M/GI/1}
Assume that the assumptions of \thr{U asym 1} hold. \\
(a) If the distribution of service times is $\sr{RV}$,
${\overline G}(x) = L(x)/x^{\alpha+1}$ with $\alpha >0$ and slowly varying function $L(x)$, then
$$
\dd{P}\left((1+ m^{(0)}_{K-1}) \sigma > x \right) \sim \frac{qL(x)}{x^{\alpha+1}}
\sum_{k=0}^{\infty} p^k \left(\frac {1-(p+ r^{k}\lambda b)} {1-r} \right)^{\alpha+1},
$$
where $r= p+\lambda b$.\\
(b) Under the assumption in (a), if the input stream is Poisson with parameter $\lambda$, then
$$
\dd{P} (U>x) \sim CL(x)x^{-(\alpha+1)}
$$
where
$$
C= \frac{q(1+p)}{(1-r)^{\alpha+1}(1-rp)} \sum_{k=0}^{\infty}
p^k (1-(p+r^k\lambda b))^{\alpha+1}.
$$
\end{corollary}
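Part (a) of \cor{M/GI/1} can be seen by conditioning on $K$: since $1+m^{(0)}_{k-1} = \big(1-(p+r^{k-1}\lambda b)\big)/(1-r) \le 1/(1-r)$ and $\overline{G}$ is regularly varying,
\begin{align*}
\dd{P}\big((1+ m^{(0)}_{K-1})\sigma > x\big)
= \sum_{k=1}^{\infty} q p^{k-1}\, \overline{G}\!\left(\frac{x}{1+m^{(0)}_{k-1}}\right)
\sim \frac{q L(x)}{x^{\alpha+1}} \sum_{k=1}^{\infty} p^{k-1} \big(1+m^{(0)}_{k-1}\big)^{\alpha+1},
\end{align*}
where the interchange of the sum and the limit $x\to\infty$ is justified by dominated convergence, the summands being bounded by a constant multiple of $p^{k-1}$ thanks to the dominated variation of $G$. Re-indexing the sum gives the formula in (a), and (b) follows by combining (a) with \eq{U asym 1} and \rem{U asym 1}.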
We next present the tail asymptotics for a tagged customer that arrives in the stationary system. By ``stationary'' we mean stationary in discrete time, i.e. at embedded arrival epochs; this is detailed in \sectn{psbj-st}.
\begin{theorem}
\label{thr:stationary case}
Let $U^0$ be the sojourn time of a typical customer in the stationary $GI/GI/1$ feedback queue with $\sr{IRV}$ distribution
$G$ of service times with mean $b$, $i.i.d.$ inter-arrival times with mean $a$ and probability of feedback $p=1-q \in (0,1)$. Let $\sigma_{I}$ be a random variable having the distribution function $G_{I}(x) \equiv 1 - \min\left(1,\int_{x}^{\infty} \overline{G}(u) du\right)$.\\
(a) Then, as $x\to\infty$,
\begin{align}
\label{eq:final2-stat 2}
\dd{P}(U^0>x) & \sim \frac{\lambda}{1 - \lambda b} \left( \dd{P}(m^{(0)}_{K} \sigma_{I} > x) + \frac {(1-q) \rho} {1 - \rho} \dd{P}((\lambda b\, m^{(0)}_{K}) \sigma_{I} > x) \right),
\end{align}
where $m^{(0)}_{k}$ is defined by \eq{m k}, and $\sigma_{I}$ is independent of $K$.\\
(b) In particular, if $G$ is an $\sr{RV}$ distribution, $\overline{G}(x)=L(x)/x^{\alpha +1}$
with $\alpha >0$, where $L(x)$ is a slowly varying function, then
\begin{align}
\label{eq:final3-stat}
\dd{P}(U^0>x) & \sim \frac{L(x)}{x^{\alpha}} \cdot \frac{q(1-r)^{{-\alpha}}}{\alpha (a-b)}\left(
1+\frac{b(1-q)}{aq-b} \left(\frac{b}{a}\right)^{\alpha}\right)
\sum_{k=1}^{\infty} p^{k-1} (1-r^k)^{{\alpha}}.
\end{align}
\end{theorem}
\begin{remark}
\label{rem:stationary case}
(a) The tail function $\overline{G_{I}}(x) \equiv 1 - G_{I}(x)$ is called the ``integrated tail'' of $G$. Instead of $G_{I}$, we may use the stationary excess distribution $G_{e}(x) \equiv \frac 1b \int_{0}^{x} \overline{G}(u) du$ since $\overline{G}_{I}(x) = b \, \overline{G_{e}}(x)$ for sufficiently large $x$. In this case, let $\sigma_{e}$ be a random variable subject to $G_{e}$, then we can replace $\sigma_{I}$ by $\sigma_{e}$ in \eq{final2-stat 2}, multiplying its right-hand side by $b$.\\
(b) It is notable that no $X^{(0)}_{K-1}$ for the renewal arrivals is involved in \eq{final2-stat 2}. This is different from \eq{U asym 1}, and may come from averaging in the steady state.\\
(c) It may be interesting to compare the asymptotics in \eq{final2-stat 2} with those without feedback, which are well known (e.g., see \cite{BaFo2004}). Namely, let $\widetilde{U}^0$ be the stationary sojourn time in the standard $GI/GI/1$ queue with inter-arrival times $\{t_n\}$ and with service times $\{\sigma_n^{H}\}$, where $\sigma_n^{H}$ has the same distribution as $\sum_{i=1}^{K} \sigma_{i}$. If $\sigma_{I}$ has a subexponential distribution, then
\begin{align}
\label{eq:final2-stat 3}
\dd{P}(\widetilde{U}^0>x) \sim \frac{\lambda}{1-\rho} \int_{x}^{\infty} \dd{P}\Big(\sum_{i=1}^{K} \sigma_{i} > u\Big) du \sim \frac{\lambda}{q(1-\rho)} \dd{P}(\sigma_{I} > x),
\end{align}
where the second asymptotic equivalence follows from L'H\^{o}pital's rule and \eqref{Kesten}. Thus, one can see that \eq{final2-stat 2} is asymptotically compatible with \eq{final2-stat 3} as $q \uparrow 1$, because $\rho \to \lambda b$ and $m^{(0)}_{K} \to 1$ almost surely as $q \uparrow 1$.
\end{remark}
\section{Busy period and the principle of a single big jump}
\label{sect:psbj}
\setnewcounter
In this section, we present the Principle of a Single Big Jump (PSBJ) in \thr{GG1final} below, which will be used in the proof of \thr{U asym 1}. For that, we first provide an auxiliary result on the tail asymptotics of the busy period in the $GI/GI/1$ queue without feedback. Denote its service time distribution by $H$ and let $\sigma^{H}_{i}$ be the $i$th service time. It is assumed that the arrivals are subject to the renewal process $N^{e}$ with interarrival times $t_i$ having mean $a$, and that $H$ has a finite and positive mean $b_{H} > 0$. Denote the traffic intensity by $\rho \equiv b_{H}/a < 1$. Let $B$ be the (duration of the) first busy period in this $GI/GI/1$ queue, which is the time from the instant when the system becomes non-empty to the instant when it again becomes empty. We here omit the subscript $_{H}$ for $\rho, B$, because they will be unchanged for the $GI/GI/1$ feedback queue. We finally let $\tau^{H}$ be the number of customers served in the first busy period.
We let $\xi^{H}_i=\sigma^{H}_{i}-t_{i}$, and let $S^{H}_{n} = \sum_1^n \xi^{H}_i$. Then $\tau^{H} = \min \{n\ge 1 : S^{H}_{n} \le 0\}$. Recall the definitions of classes
of heavy-tailed distributions ${\cal L}, {\cal S^*}, {\cal IRV}$ and ${\cal RV}$ at the end of \sectn{introduction}. The following theorem is proved in \sectn{proof gg1}.
\begin{theorem}
\label{thr:gg1}
Consider a stable $GI/GI/1$ queue, $\rho <1$, with the service time distribution ${H}$.\\
If $H\in {\cal L}$, then
\begin{align}
\label{eq:B1}
\liminf_{x\to\infty}
\frac{\dd{P} (B>x)}{\dd{E} \tau^{H} \overline{H}(x(1-\rho ))}
\geq 1 \quad \mbox{and}
\quad \liminf_{x\to\infty}\frac{\dd{P} (\tau^{H} >x)}{\dd{E} \tau^{H} \overline{H}(x(a-b_{H}))}
\geq 1.
\end{align}
If, in addition, $H \in {\cal S}^{*}$, then,
for any $0<c<1$,
\begin{align}
\label{eq:B2}
\limsup_{x\to\infty}
\frac{\dd{P} (B>x)}{\dd{E} \tau^{H} \overline{H}(c x(1-\rho ))}
\leq 1 \quad \mbox{and}
\quad \limsup_{x\to\infty}\frac{\dd{P} (\tau^{H} >x)}{\dd{E} \tau^{H} \overline{H}(c x(a-b_{H}))}
\leq 1.
\end{align}
Finally, if $H \in {\cal IRV}$, then, as $x\to\infty$,
\begin{align}
\label{eq:B3}
\dd{P} (B>x) \sim \dd{E} (\tau^{H}) \overline{H}(x(1-\rho ))
\quad \mbox{and}
\quad
\dd{P} (\tau^{H} >x) \sim \dd{E} (\tau^{H}) \overline{H}(x(a-b_{H})).
\end{align}
\end{theorem}
\begin{remark}
For the class of regularly varying tails, the equivalence \eq{B3}
was proved by Zwart in \cite{Zwar2001}. We provide a different proof which
is shorter and works for a broader class of distributions. Our proof
is based on probabilistic intuition related to the principle of a single big jump. A similar result holds for another class of distributions that overlaps
with the ${\cal IRV}$ class but does not contain it, see
e.g. \cite{JeMoZw2004}.
\end{remark}
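As a numerical illustration of \eq{B3}, the following Python sketch compares a Monte Carlo estimate of $\dd{P}(B>x)$ with $\dd{E}(\tau^{H})\,\overline{H}(x(1-\rho))$ for an $M/GI/1$ queue in which the service time distribution $H$ is Pareto; the distribution, the parameters and the level $x$ are illustrative only, and $\dd{E}(\tau^{H})$ is estimated from the same simulation.
\begin{verbatim}
import random

def pareto(beta, scale):
    # P(sigma > x) = (scale/x)**beta for x >= scale
    return scale / random.random() ** (1.0 / beta)

def busy_period(lam, beta, scale):
    """One busy period of an M/GI/1 queue: (duration B, number served)."""
    end_of_work = pareto(beta, scale)      # work brought by the initiating customer
    next_arr = random.expovariate(lam)
    served = 1
    while next_arr < end_of_work:          # arrival while the server is still busy
        end_of_work += pareto(beta, scale)
        served += 1
        next_arr += random.expovariate(lam)
    return end_of_work, served

lam, beta, scale, x = 1.0, 2.5, 0.4, 30.0
rho = lam * scale * beta / (beta - 1.0)    # traffic intensity (here 2/3)
runs = [busy_period(lam, beta, scale) for _ in range(200000)]
p_emp = sum(b > x for b, _ in runs) / len(runs)
tau_mean = sum(t for _, t in runs) / len(runs)   # for M/GI/1 this equals 1/(1-rho)
p_approx = tau_mean * (scale / (x * (1.0 - rho))) ** beta
print(p_emp, p_approx)
\end{verbatim}
Because the tail index is only moderate here, rather long runs are needed before the empirical tail and the asymptotic expression agree closely.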
Recall the equivalence $A_{x} \simeq B_{x}$ for two families of events $A_{x}$ and $B_{x}$ with variable $x$. We have the following corollary, which is proved in \app{gg2}.
\begin{corollary}
\label{cor:gg2}
Consider the same $GI/GI/1$ queue as in \thr{gg1}. Let $\sigma^{H}_{n}$ be the service time of the $n$th arriving customer.
If $H\in {\cal IRV}$, then, for the busy period $B$, as $x \to \infty$,
\begin{align}
\label{eq:PSBJ-B1}
& \dd{P} (B>x) \sim \sum_{n\ge 1} \dd{P} (\tau^{H}\ge n) \dd{P}(\sigma^{H}_n>x(1-\rho)) = \dd{E}\tau^{H} \dd{P}(\sigma^{H}_1>x(1-\rho)),
\end{align}
and, for any $\varepsilon > 0$, one can choose $N = N(\varepsilon) \ge 1$ such that, as $x \to \infty$,
\begin{align}
\label{eq:PSBJ-B2}
& \dd{P} (B>x) \gtrsim \sum_{n=1}^N\dd{P} (\tau^{H} \ge n, \sigma^{H}_{n}\ge x(1-\rho ))
\gtrsim (1-\varepsilon ) \dd{P} (B>x).
\end{align}
Furthermore, the following PSBJ holds:
\begin{equation}
\label{eq:PSBJ-B3}
\{B>x\}
\simeq
\cup_{n\ge 1} \{\tau^{H}\ge n, \sigma^{H}_{n} > x(1-\rho ) \}, \qquad x \to \infty.
\end{equation}
\end{corollary}
We now return to the $GI/GI/1$ feedback queue with the service time distribution $G$. Assume that the first customer arrives at the system at time instant $T_0=0$ and finds it empty.
Recall that $K_i$ is the number of services the $i$th customer has in the system; the $K_{i}$'s are independent of everything else and $i.i.d.$ with the same geometric distribution as $K$ (see \eq{K 1}). For convenience, we let $K=K_1$. Denote by $\sigma_{i}^{(j)}$ the $j$th service time of the $i$th customer in the $GI/GI/1$ feedback queue. Recall that $\sigma_{i}^{(j)}$ has the same distribution $G$.
Consider the $GI/GI/1$ queue without feedback and with service times $\sigma^{H}_i$ where
\begin{align*}
\sigma^{H}_{i} = \sum_{j=1}^{K_{i}} \sigma_{i}^{(j)},
\end{align*}
and denote its distribution by $H$. Since the length of the busy period, $B$,
does not depend on the order of services, we may allow the server to proceed with services of lengths $\sigma_i^{(j)}$, like in the queue with feedback, and conclude that the (lengths of the) busy periods are the same in both queues. Similarly, the traffic intensity $\rho$ in the new queue without feedback coincides with that in the $GI/GI/1$ queue with feedback. Furthermore, let $\tau$ be the number of service times in the first busy period of this feedback queue. Then, $\tau = \sum_{i=1}^{\tau^{H}} K_{i}$, and therefore we have
\begin{align*}
\dd{E}(\tau) = \dd{E}(K) \dd{E}(\tau^{H}), \qquad b_{H} = \dd{E}(K) b.
\end{align*}
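In particular, $\dd{E}(K) = 1/q$, so that $b_{H} = b/q$ and the traffic intensity of this auxiliary queue equals
\begin{align*}
\rho = \frac{b_{H}}{a} = \frac{\lambda b}{q},
\end{align*}
which is exactly the quantity in the stability condition \eq{stability 1}; thus the feedback queue and the auxiliary queue without feedback are stable simultaneously.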
We now consider the $GI/GI/1$ feedback queue introduced in \sectn{GI/GI/1}.
We establish the PSBJ, i.e. show that, for large $x$, the rare event $\{U>x\}$ occurs mostly due to a big value of one of the service times.
Our proof of Theorem \ref{thr:GG1final} is based on \thr{gg1} and is given in \sectn{psbj 1}.
\begin{theorem}
\label{thr:GG1final}
Consider a stable single-server queue $GI/GI/1$ with feedback. Assume that the service time distribution is intermediate regularly varying. Denote by $U$ the sojourn time of the first customer, and let
\begin{align}
\label{eq:P(x) 1}
P_{k,\ell,i,j}(x) = \dd{P} (U>x, K=k+\ell, X_{k-1}=j,\sigma_{k,i}>x(1-\rho )).
\end{align}
If there exists a collection of positive functions $\{g_{k,\ell,i,j}(x)\}$ such that, as $x \to \infty$,
\begin{align}
\label{eq:GG1final g}
P_{k,\ell,i,j}(x) \sim g_{k,\ell,i,j}(x), \qquad \forall k \ge 1, \ell \ge 0, 0\le i \le j,
\end{align}
and constants $C_{k,\ell,i,j}$ such that, for any $k \ge 1,\ell \ge 0, j \ge 0, 0 \le i \le j, x \ge 0$,
\begin{align}
\label{eq:g C}
& g_{k,\ell,i,j}(x) \le C_{k,\ell,i,j} \cdot \dd{P}(\sigma > x),\\
\label{eq:C finite}
& C:= \sum_{k=1}^{\infty}\sum_{\ell=0}^{\infty}\sum_{j=0}^{\infty}\sum_{i=0}^{j} C_{k,\ell,i,j} < \infty,
\end{align}
then
\begin{equation}
\label{eq:GG1final2}
\dd{P} (U>x) \sim \sum_{k=1}^{\infty}\sum_{\ell=0}^{\infty}\sum_{j=0}^{\infty}\sum_{i=0}^{j} g_{k,\ell,i,j}(x).
\end{equation}
\end{theorem}
\section{Proofs of the theorems}
\label{sect:proofs}
\setnewcounter
\subsection{Proof of \thr{gg1}}
\label{sect:proof gg1}
We will prove \thr{gg1} for the tail asymptotics of the busy period $B$ only.
The proof for $\tau^{H}$, the number of arriving customers in the busy period, is similar.
It is enough to prove the lower and upper bounds in \eq{B1} and \eq{B2}. Then the equivalences in \eq{B3}
follow by letting $c$ tend to 1 and using the property of ${\cal IRV}$
distributions.
\noindent {\bf Lower bound.}
Since $\xi^{H}_{1} = \sigma^{H}_{1} - t_{1}$, $d_0 \equiv b_{H}/(a-b_{H})$ is the solution to the equation
\begin{align*}
\dd{E} \sigma^{H}_1 + d_0 \dd{E} {\xi}^{H}_1 =0.
\end{align*}
We put $\widehat{\psi}^{H}_n = \sigma^{H}_n + d_{1} \xi^{H}_n$ for any positive number $d_{1} < d_{0}$.
Then $\{\widehat{\psi}^{H}_n\}$ are i.i.d. random variables with common mean $\dd{E} \widehat{\psi}^{H}_1 >0$. Recall that $\tau^{H} = \min \{ n \ge 1 : S^{H}_n \leq 0\}$.
For any fixed real $C>0$, $R>0$, integer $N\ge 1$, and for $x\ge 0$, define events $D_{i}$ and $A_{i}$ for $i \ge 1$ as
\begin{align*}
& D_i = \left\{ \sum_{j=1}^{i-1} |\widehat{\psi}^{H}_j|\le C, \tau^{H} \ge i, \widehat{\psi}^{H}_i>x+C+R \right\},
\qquad A_i = \bigcap_{\ell \ge 1} \left\{\sum_{j=1}^{\ell} \widehat{\psi}^{H}_{i+j}\ge -R \right\}.
\end{align*}
Then, we have
\begin{equation}
\label{double}
\dd{P} (B>x) \ge
\dd{P} \left(\sum_{i=1}^{\tau^{H}} \widehat{\psi}^{H}_i >x\right) \ge \sum_{i=1}^N \dd{P} \left(D_i \cap A_i\right).
\end{equation}
Here, the first inequality in \eqref{double} holds since $S^{H}_{\tau^{H}}$ is non-positive, and the second inequality comes from the following facts. Events $D_i$
are disjoint and, given the event $D_i$, we have $\sum_{j=1}^i \widehat{\psi}^{H}_j >x+R$. Then, given the event $D_i\cap A_i$,
we have $\sum_{j=1}^k \widehat{\psi}^{H}_j \ge x$ for all $k\ge i$ and, in particular, $\sum_{j=1}^{\tau^{H}} \widehat{\psi}^{H}_j>x$. Thus, \eqref{double} holds.
The events $\{A_i\}$ form a stationary sequence. Due to the SLLN, for any $\varepsilon >0$,
one can choose $R$ so large that $\dd{P}(A_i) \ge 1-\varepsilon$. For this $\varepsilon$ and any $N \ge 1$, we can choose sufficiently large $C$ such that
\begin{align*}
\dd{P} \left(\sum_{j=1}^{i-1} |\widehat{\psi}^{H}_j|\le C, \tau^{H} \ge i\right) \ge (1-\varepsilon) \dd{P} (\tau^{H} \ge i), \qquad i \le N.
\end{align*}
Hence, \eqref{double} implies that, as $x\to\infty$,
\begin{align*}
\dd{P} (B>x) &\ge
\sum_{i=1}^N \dd{P} \left(\sum_{j=1}^{i-1} |\widehat{\psi}^{H}_j|\le C, \tau^{H} \ge i\right)
\dd{P}(\widehat{\psi}^{H}_i>x+C+R) \dd{P}(A_i) \\
&\ge
(1-\varepsilon )^2 \dd{P} (\widehat{\psi}^{H}_1 >x+C+R) \sum_{i=1}^N \dd{P} (\tau^{H} \ge i),
\end{align*}
and therefore the long-tailedness of distribution $H$ and (iii) of \rem{tailprop} yield
\begin{align*}
\liminf_{x \to \infty} \frac {\dd{P} (B>x)}{\dd{P} (\widehat{\psi}^{H}_1 >x)} = \liminf_{x \to \infty} \frac {\dd{P} (B>x)}{\overline{H}(x/(1+d_{1}))} \ge (1-\varepsilon )^2 \sum_{i=1}^N \dd{P} (\tau^{H} \ge i).
\end{align*}
Letting first $N$ tend to infinity and then $\varepsilon$ to zero completes the proof of the first inequality of \eq{B1}.
\noindent {\bf Upper bound.}
Take $L>0$ and put $\tilde{t}_n = \min (t_n,L)$,
$\tilde{\xi}^{H}_n = \sigma^{H}_n-\tilde{t}_n$,
$\tilde{S}^{H}_n = \sum_1^n \tilde{\xi}^{H}_i$,
$\tilde{\tau}^{H} = \min \{ n \ge 1: \tilde{S}^{H}_n \leq 0\}$.
Put also $\psi^{H}_n = \sigma^{H}_n + d_{2}\tilde{\xi}^{H}_n$ where $d_{2}>d_0$ is any number.
Note that $\tilde{\xi}^{H}_1$ converges to $\xi^{H}_1$ in distribution and in mean, as $L\to\infty$.
We may choose $L$ so large that both $\dd{E} \tilde{\xi}^{H}_1$ and
$\dd{E} \psi^{H}_1$ are negative. Then $\tilde{\tau}^{H}$ is finite and
$\tilde{S}^{H}_{\tilde{\tau}^{H}} \in (-L,0]$ a.s.
Further, as $L$ grows, $\tilde{\tau}^{H}$ converges to $\tau^{H}$ in distribution and
in mean and, for any $\varepsilon >0$, we may choose $L$ so large that
$\dd{E} \tau^{H} \le \dd{E}\tilde{\tau}^{H} \le (1+\varepsilon ) \dd{E} \tau^{H}$.
By \rem{tailprop}, the distribution of $\psi^{H}_1$ also belongs to the class ${\cal S}^*$.
We have
\begin{align*}
\dd{P} (B>x) &\leq
\dd{P} \left(\sum_{i=1}^{\tilde{\tau}^{H}} \sigma^{H}_i > x\right) =
\dd{P} \left(\sum_{i=1}^{\tilde{\tau}^{H}} \psi^{H}_i > x +
d_{2}\tilde{S}^{H}_{\tilde{\tau}^{H}} \right)\\
&\leq
\dd{P} \left(\sum_{i=1}^{\tilde{\tau}^{H}} \psi^{H}_i > x -d_{2}L\right) \le \dd{P} \left(\max_{1\le j\le \tilde{\tau}^{H}} \sum_{i=1}^j \psi^{H}_i > x-d_{2}L \right)\\
&\sim \dd{E} \tilde{\tau}^{H} \dd{P} (\psi^{H}_1 > x-d_{2}L)
\leq (1+\varepsilon )\dd{E} \tau^{H} \dd{P} (\psi^{H}_1 > x-d_{2}L),
\end{align*}
where the equivalence follows from \thr{FZ1}. Further,
$$
\dd{P} (\psi^{H}_1 > x-d_{2}L)
\sim
\dd{P} (\psi^{H}_1 > x) \\
\sim
\dd{P} ((d_{2}+1)\sigma^{H}_1 > x) =
\overline{H}(x/(d_{2}+1))
$$
where the first equivalence follows from the long-tailedness of the distribution of $\psi^{H}_1$ and the second from \rem{tailprop}.
Letting $\varepsilon$ tend to zero, we have
$$
\limsup_{x\to\infty}
\frac{\dd{P} (B>x)}{\dd{E} \tau^{H} \overline{H}(x/(d_{2}+1))}
\leq 1
$$
for any $d_{2} > d_0$. Let $c = (d_0+1)/(d_{2}+1) <1$. This proves the first inequality in \eq{B2}.
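Indeed, since $1-\rho = (a-b_{H})/a = 1/(1+d_{0})$, this choice of $c$ gives
\begin{align*}
c\, x\, (1-\rho ) = \frac{d_{0}+1}{d_{2}+1}\cdot\frac{x}{1+d_{0}} = \frac{x}{d_{2}+1},
\end{align*}
so the bound just obtained coincides with the first inequality in \eq{B2}, and $c$ can be made arbitrarily close to $1$ by choosing $d_{2}$ close to $d_{0}$.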
\subsection{Proof of \thr{GG1final}}
\label{sect:psbj 1}
Recall that we consider the scenario where the initial customer $1$
arrives at the empty system. Clearly, $\sigma^{H}_1 \le U \le B$ a.s. where
$\sigma^{H}_1$
is the total service time of customer $1$, and $B$ is the duration of the first busy period.
Equivalence relation \eqref{Kesten} and \rem{tailprop}
from the Appendix
imply that, given $\sigma$ has an intermediate regularly varying distribution, random variables $\sigma^{H}_i$ have
a common intermediate varying distribution too. Since
any intermediate varying distribution is dominantly varying (see Property \eq{class order}),
we get from \thr{gg1} that
\begin{equation}\label{weakTE}
\limsup_{x\to\infty} \dd{P}(B>x)/\dd{P}(\sigma^{H}_1 >x) <\infty.
\end{equation}
Relations
\eqref{weakTE} and \eqref{Kesten} lead then to
the logarithmic asymptotics:
\begin{lemma}
Assume that the distribution of the service time $\sigma_{1}^{(1)}$ belongs to the class ${\cal IRV}$. Then,
as $x\to\infty$,
\begin{equation}\label{heavy-log}
\log \dd{P} (U>x) \sim \log \dd{P} (\sigma^{H}_1>x) \sim
\log \dd{P} (\sigma_{1,1}>x).
\end{equation}
\end{lemma}
Further, by \cor{gg2}, the PSBJ for $B$ holds:
\begin{equation}\label{BP}
\dd{P} (B>x) \sim
\dd{P} \left(B >x, \cup_{i=1}^{\tau^{H}} \{\sigma^{H}_i >x (1-\rho )\}\right)
\sim \dd{P} \left(\cup_{i=1}^{\tau^{H}} \{\sigma^{H}_i >x (1-\rho )\}\right)
\end{equation}
Here, recall, $\tau^{H}$ is the number of customers served within the first busy period.
Combining \eqref{BP} and \eqref{Kesten}, we arrive at the following result:
\begin{lemma}\label{PSBJJ}
Consider a stable single-server queue $GI/GI/1$ with feedback. Let $B$ be the duration of the first busy period and
$U$ the sojourn time of the first customer. Assume that the service times distribution is intermediate regularly varying.
Then
\begin{equation}\label{IRVEA1}
\dd{P} (U>x) = \dd{P} (U>x,B>x)\sim \dd{P} \left(\{U>x\}
\bigcap \bigcup_{i=1}^{\tau^{H}}
\bigcup_{j=1}^{K_i}\{\sigma_{i}^{(j)}>x (1-\rho)\}\right).
\end{equation}
\end{lemma}
To derive the exact asymptotics for $\dd{P} (U>x)$, we recall that, for $1\le k <K \equiv K_{1}$, $X_{k} \ge 0$ is the total number of services of other customers between the $k$th and the $(k+1)$st services
of customer 1, and let $\sigma_{k,i}$ be the service time of the $i$th service there, $1\le i \le X_{k}$. Further, under the scenario (\sect{main}a), $X_0=0$. Then
let $\nu \ge 0$ be the total number of services of other customers after the departure of the first customer within the
busy period, and let $\sigma_{i}^{*}$ be the $i$th service time there, $1\le i \le \nu$.
Then
random variables $\sigma_{k,i}$ and $\sigma_{i}^{*}$ are $i.i.d.$ with the same distribution as $\sigma$, and $U$ is given by \eq{U 1 (a)}. From \eqref{IRVEA1}, we get
\begin{align*}
\dd{P} (U>x) \sim \dd{P} \left(\{U>x\}
\bigcap \left( \bigcup_{k=1}^{K_1}\bigcup_{i=0}^{X_{k-1}}
\{\sigma_{k,i}>x (1-\rho)\}\bigcup\bigcup_{j=1}^{\nu} \{\sigma_j^{*}>x(1-\rho )\}\right)\right).
\end{align*}
On the other hand, we have
\begin{align*}
\dd{P} \left(\{U>x\}
\bigcap \bigcup_{i=1}^{\nu} \{\sigma_i^{*}>x(1-\rho )\}\right) & =
\sum_{n\ge 1} \dd{P} (U>x,\nu =n) \cdot \dd{P}\left(
\bigcup_{i=1}^{n} \{\sigma_i^{*}>x(1-\rho )\}\right)\\
&\le C \sum_{n\ge 1} \dd{P} (U>x,\nu =n) n \overline{G}(x)\\
&= C \dd{E} \left( \nu \cdot {1} (U>x) \right)
\overline{G}(x)\\
&\le C \dd{E} \left(\tau \cdot {1} (U>x)\right)
\overline{G}(x) = o(\overline{G} (x)),
\end{align*}
where $C=\sup_x \overline{G}(x(1-\rho))/\overline{G}(x)<\infty$ is a constant, and recall that $\tau$ is the total number of services within the busy cycle. The last line follows since
$\dd{E} \tau = \dd{E} \tau^{H} /q$ is finite.
Therefore, we have
\begin{lemma}\label{PSBJJ2}
In the conditions of Lemma \ref{PSBJJ}, we have
\begin{equation}\label{IRVEA2}
\dd{P} (U>x) \sim \dd{P} \left(\{U>x\}
\bigcap \bigcup_{k =1}^{K_1} \bigcup_{i=0}^{X_{k-1}}
\{\sigma_{k,i}>x (1-\rho)\}\right).
\end{equation}
\end{lemma}
Moreover, the following result holds:
\begin{lemma}\label{PSBJJ3}
Assume the conditions of Lemma \ref{PSBJJ} hold.
Then, for any $\varepsilon >0$, one can find $N$ such that
\begin{equation}\label{IRVEA3}
\dd{P} (U>x) \gtrsim \dd{P} (D_{N}(x))\gtrsim (1-\varepsilon ) \dd{P} (U>x)
\end{equation}
where
\begin{equation}\label{DNL}
D_{N}(x) =
\bigcup_{k=1}^N
\bigcup_{\ell=0}^{2N}
\left\{\{U>x, K_1=k+\ell\}
\bigcap \bigcup_{i=0}^{X_{k-1}}
\{\sigma_{k,i}>x (1-\rho)\} \right\}.
\end{equation}
\end{lemma}
{\sc Proof.} Indeed, the term on the right-hand side of \eqref{IRVEA2} is bigger
than $\dd{P} (D_{N}(x))$ and smaller than the sum
$\dd{P} (D_{N}(x)) + \dd{P} (U>x, K_1 >N)$, where
\begin{equation}\label{IRVEA4}
\dd{P} (U>x, K_1 >N) \le \dd{P} (B>x, K_1>N).
\end{equation}
Consider again the auxiliary $GI/GI/1$ queue with service times $\sigma^{H}_i
=\sum_{j=1}^{K_i}\sigma_{i}^{(j)}$ and the first-come-first-served service discipline.
Consider the following majorant: assume that at the beginning of the first cycle, in addition to customer 1, an extra $K-1$ new customers arrive, so there are $K$ arrivals in total. Here $K$ is a geometric random variable with parameter $p$ that does not depend on service times.
Then the first busy period in this queue has the same
distribution as $\sum_{i=1}^K B_i$ where $B_i$ are $i.i.d.$ random variables that have the same distribution as $B$ and do not depend on $K$.
By monotonicity,
$$
\dd{P} (B>x, K_1>N) \le \dd{P} \left(\sum_{i=1}^K B_i>x, K>N\right)
=\dd{P} \left(\sum_{i=1}^{K{1}(K>N)} B_i >x\right).
$$
Due to \eqref{Kesten}, the latter probability is equivalent, as $x\to\infty$, to
$$
\dd{E} (K{1} (K>N)) \dd{P}(B>x)\le
C_0\dd{E} (K{1} (K>N))\dd{E} K \overline{G}(x)
$$
where $C_0$ is from \eqref{weakTE}. Now choose $N$ such that
$C_0 \dd{E} (K{1} (K>N))\dd{E} K \le \varepsilon$.
Since $\dd{P}(U>x) \ge \overline{G}(x)$, \eqref{IRVEA3} follows.
We can go further and obtain the following result.
\begin{lemma}
\label{lem:PSBJJ4}
Assume the conditions of Lemma \ref{PSBJJ} hold, and let
\begin{equation}\label{GNLR}
G_{N,R}(x) = \bigcup_{k=1}^N \bigcup_{\ell=0}^{2N}
\left\{\{U>x, K_1=k+\ell\} \bigcap \bigcup_{i=0}^{R} \{\sigma_{k,i}>x (1-\rho)\} \right\}.
\end{equation}
Then, for any $\varepsilon >0$,
one can choose a positive integer $R$ such that
\begin{equation}
\label{IRVEA5}
\dd{P} (D_{N}(x))
\ge
\dd{P} (G_{N,R}(x))
\ge
(1-\varepsilon)\dd{P} (D_{N}(x))
\end{equation}
where the event $D_{N}(x)$ was defined in \eqref{DNL}.
Further,
\begin{equation}
\label{IRVEA6}
\dd{P} (G_{N,R}(x)) \sim
\sum_{k=1}^N\sum_{\ell=0}^{2N}\sum_{j=0}^R\sum_{i=0}^{j}
\dd{P} (U>x, K_1=k+\ell, X_{k-1}=j,\sigma_{k,i}>x(1-\rho )).
\end{equation}
\end{lemma}
{\sc Proof}.
Indeed,
\begin{align*}
\dd{P} & \left(D_{N}(x) \setminus G_{N,R}(x)\right)\\
&\le \sum_{k=1}^{N} \sum_{\ell=0}^{2N} \sum_{j > R}
\dd{P} \left( \{U>x, K_1=k+\ell, X_{k-1} = j\} \bigcap \bigcup_{i=R+1}^{j} \{\sigma_{k,i}>x (1-\rho)\} \right) \\
&\le
\sum_{k=1}^{N} \sum_{j>R} \sum_{i=0}^{j}
\dd{P} ( X_{k-1}=j, \sigma_{k,i}>x(1-\rho))\\
&=
\sum_{k=1}^{N}\sum_{j>R} (j+1) \dd{P} ( X_{k-1}=j)
\overline{G}(x(1-\rho ))\\
&= \sum_{k=1}^{N}\dd{E} (( X_{k-1}+1) {1}( X_{k-1} >R))
\overline{G}(x(1-\rho )) \\
&\le
N \dd{E} ((\tau+1){1} (\tau>R)) \overline{G}(x(1-\rho ))
\end{align*}
where the term $\dd{E} ((\tau+1){1} (\tau>R))$ may be made
as small as possible by taking a sufficiently large $R$.
Then \eqref{IRVEA6} follows
since the probability of a union of events is always smaller than
the sum of their probabilities, and is bigger than the sum of probabilities of events minus the sum of probabilities of pairwise
intersections of events. Each probability of intersection of two independent
events is smaller than
$$
\dd{P} (\sigma_1^{(1)}>x(1-\rho),\sigma_2^{(1)}>x(1-\rho)) \le C\overline{G}^2(x) = o(\overline{G}(x)),
$$
therefore their finite sum is $o(\overline{G}(x))$ and \eqref{IRVEA6}
follows.
We are now at the final step of the proof of \thr{GG1final}. For $k \ge 1$ and $\ell, j \ge 0$, define $D_{k,\ell,j}$ as
\begin{align}
\label{eq:D klj}
D_{k,\ell,j} = \dd{P} (K=k+\ell, X_{k-1}=j) = \dd{P} (K>k, X_{k-1}=j) \dd{P} (K=\ell),
\end{align}
where the second equality holds because $K$ is geometrically distributed. Then, \lem{PSBJJ4} implies \eq{GG1final2} for $g_{k,\ell,i,j}(x) = P_{k,\ell,i,j}(x)$
since, for any $k,\ell,j\ge i$,
$$
g_{k,\ell,i,j}(x) \le \dd{P} (K=k+\ell, X_{k-1}=j, \sigma_{k,i}>x(1-\rho)) = D_{k,\ell,j} \dd{P}(\sigma >x(1-\rho)),
$$
and
\begin{align}
\label{eq:D finite}
\sum_{k,\ell,i \le j} D_{k,\ell,j} = \sum_{k,\ell,j} jD_{k,\ell,j} = \sum_{k,j} j \dd{P}(X_{k-1}=j, K>k) \le \dd{E} \tau/q <\infty,
\end{align}
where, recall, $\tau$ is the total number of customers served in the first busy period. Clearly, \eq{GG1final2} is also valid for a general $\{g_{k,\ell,i,j}(x)\}$ because of the conditions \eq{g C} and \eq{C finite}. This completes the proof of \thr{GG1final}.
\subsection{Proof of \thr{U asym 1}}
\label{sect:proof U asym}
We first recall the notation: $U_{1}, U_{2},\ldots$ and $X_{0}, X_{1}, \ldots$ are the service cycles and the number of customers other than the tagged customer served in the cycles, respectively. Here $u_{0} = X_{0} = 0$.
In general, the sojourn time is a randomly stopped sum of $i.i.d.$ positive random variables in which both the summands and the counting random variable have heavy-tailed distributions. It is known that,
in this case, the tail asymptotics are hard to analyse for general heavy-tailed distributions (see, e.g., \cite{Gree1973}). We proceed under the assumption that the service time distribution is intermediate regularly varying.
Recall that $\sigma_{k,0}$ is the $k$th service time of the tagged customer and, for $i=1,\ldots, X_k$, $\sigma_{k,i}$ is the $i$th service time in the queue $X_k$. Further, let $T_{k}=\sum_{\ell=1}^k U_{\ell}$ be the time instant when the $k$th service of the tagged customer is completed, where $U_{1} = \sigma_{1,0}$. Introduce the notation
\begin{align*}
U_k^+ = \sum_{\ell=k+1}^K U_\ell, \qquad k \ge 1,
\end{align*}
which is the remaining time the tagged customer spends in the system after the completion of the $k$th service, and let $v_{k}$ be the residual inter-arrival time of the input when the $k$th service of the tagged customer ends.
In what follows, we will say that an event involving some constants and functions/sequences
occurs ``with high probability'' if, for any $\varepsilon >0$, there exist constants and functions/sequences (that depend on $\varepsilon$) with the desired properties such that the event occurs with probability at least $1-\varepsilon$.
For example, let $S^{\sigma}_n = \sum_1^n \sigma_i$ be the sum of $i.i.d.$ random variables with finite mean $b$. Then the phrase
``with high probability (WHP), for all $n=1,2,\ldots$,
$$
S^{\sigma}_n \in (n(b-\delta_n)-C, n(b+\delta_n)+C)
$$
with $C>0$ and $\delta_n\downarrow 0$''
means that
``for any $\varepsilon >0$, there exist a constant $C\equiv C_{\varepsilon}>0$ and a sequence $\delta_n\equiv\delta_n(\varepsilon) \downarrow 0$ such that the probability of the event
$$
\{ S^{\sigma}_n \in (n(b-\delta_n)-C, n(b+\delta_n)+C), \ \mbox{for all} \ \ n\ge 1\}
$$
is at least $1-\varepsilon$''. We can say equivalently that
``WHP, for all $n=1,2,\ldots$,
$
S^{\sigma}_n \in (nb - o(n), nb+ o(n))
$
'', or, simply, ``WHP, $S^{\sigma}_n \sim bn$'',
and this means that
``for any $\varepsilon >0$, there exists a positive function $h(n)=h_{\varepsilon}(n)$ which is an $o(n)$-function
(it may tend to infinity, but slower than $n$) and is such that
the probability of the event
$$
\{ S^{\sigma}_n \in (nb - h(n), nb+ h(n)), \ \mbox{for all} \ n\}
$$
is at least $1-\varepsilon$.''
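For illustration only (this is not part of the argument), the following Python sketch estimates the probability that the partial sums $S^{\sigma}_n$ stay inside a band $nb\pm(n\delta_n+C)$ simultaneously for all $n$ up to a finite horizon; the Pareto summands, the band parameters and the horizon are arbitrary choices made for this example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def whp_band_probability(n_max=10_000, n_runs=200, C=50.0, mean=1.0):
    """Estimate P(|S_n - n*mean| <= n*delta_n + C for all n <= n_max)
    for i.i.d. Pareto-type summands with the given mean (illustration only)."""
    alpha = 2.5
    scale = mean * (alpha - 1) / alpha               # so that E[sigma_i] = mean
    delta = 1.0 / np.sqrt(np.arange(1, n_max + 1))   # a choice of delta_n -> 0
    n = np.arange(1, n_max + 1)
    hits = 0
    for _ in range(n_runs):
        sigma = scale * (rng.pareto(alpha, size=n_max) + 1.0)
        S = np.cumsum(sigma)
        hits += bool(np.all(np.abs(S - n * mean) <= n * delta + C))
    return hits / n_runs

print(whp_band_probability())   # close to 1 for large C, smaller otherwise
\end{verbatim}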
Now we show \eq{GG1final g} for
\begin{align}
\label{eq:g 1}
g_{k,\ell,i,j}(x) & = \dd{P}\left( K = k+\ell, X_{k-1} = j\right) \dd{P}((1+m^{(0)}_{\ell}) \sigma_{k,i} > x), \nonumber\\
& \hspace{30ex} k\ge 1, \ell \ge 0, j\ge 0, 0\le i\le j.
\end{align}
Namely, we show that, for all $k \ge 1, \ell \ge 0, 0\le i \le j$,
\begin{align}
\label{eq:g 2}
\dd{P} (U>x, K=k+\ell, X_{k-1}=j,\sigma_{k,i}>x(1-\rho )) \sim g_{k,\ell,i,j}(x).
\end{align}
We prove \eq{g 2} by induction on $\ell \ge 0$, for each fixed $k\ge 1, 0 \le i \le j$.
\noindent {\bf Lower bound, $\ell=0.$}\\
Since $\sigma_{k,i} > x$ implies that $U > x$ and $\sigma_{k,i} > (1-\rho)x$, the lower bound for the LHS of \eq{g 2} is
\begin{align*}
\dd{P} (K=k, X_{k-1}=j) \overline{G}(x) = g_{k,0,i,j}(x),
\end{align*}
because $m^{(0)}_{0} = 0$.
\noindent {\bf Upper bound, $\ell=0$.} \\
There is a constant $w > 0$ such that $T_{k-1}\le w$ and $\sum_{0\le i' \le j, i' \ne i} \sigma_{k,i'} \le w$ WHP.
Then $U\le 2w+\sigma_{k,i}$, so the upper bound for the LHS of \eq{g 2} is
\begin{align*}
&
\varepsilon\overline{G}(x(1-\rho)) + (1+o(1)) \dd{P} (K=k, X_{k-1}=j, \sigma_{k,i}+2w>x)\\
& \quad \sim
\varepsilon\overline{G}(x(1-\rho)) + \dd{P} (K=k, X_{k-1}=j) \overline{G}(x).
\end{align*}
Letting $\varepsilon$ tend to zero in this upper bound shows that the lower and upper bounds are asymptotically identical. Since $m^{(0)}_{0} = 0$, they further coincide with $g_{k,0,i,j}(x)$ of \eq{g 1}. Thus, \eq{g 2} is verified.
We now turn to the case $\ell=1$.
\noindent {\bf Lower bound, $\ell=1.$}\\
Like in the case $\ell=0$, replace all other service times $\sigma_{k,i'}, i'\ne i$
by zero. Assume that all $j$ customers from the group $X_{k-1}$ leave the system after their service completions.
WHP, $v_{k-1}\le w$. Given $y=\sigma_{k,i}$ is large and much bigger than $w$, we have that at least
$N^{e}(y-w)$ customers arrive during time $U_{k} \ge \sigma_{k,i} = y$. Again WHP,
$$
N^{e}(y-w) \in (\lambda y - o(y), \lambda y + o(y))
$$
and, again WHP, their total service time is within the time interval
$(\lambda b y - o(y), \lambda b y + o(y))$. Therefore,
$$
U\ge U_{k}+U_{k+1} \ge y + \lambda b y - o(y)
$$
and the RHS is bigger than $x$ if $y>x/(1+\lambda b) + o(x)$.
Therefore, the lower bound for the LHS of \eq{g 2}
is
\begin{align*}
&(1+o(1)) \dd{P} (K=k+1, X_{k-1}=j, \sigma_{k,i}>x/(1+\lambda b) + o(x)) - \varepsilon\overline{G}(x(1-\rho))\\
& \quad \sim
\dd{P} (K=k+1, X_{k-1}=j) \overline{G}(x/(1+\lambda b)) - \varepsilon\overline{G}(x(1-\rho)).
\end{align*}
\noindent {\bf Upper bound, $\ell=1.$} \\
WHP, $T_{k-1}\le w$, $v_k\le w$ and $\sum_{0 \le i' \le j, i'\ne i} \sigma_{k,i'} \le w$. Let $\sigma_{k,i}=y \gg 1$.
Then, WHP, $U_{k}\le y+2w$ and the number of external arrivals within
$U_{k}$ is bounded above by $1+N^{e}(y+2w) = \lambda y + o(y)$, again WHP.
Assume that all $X_{k-1}=j$ customers stay in the system after their services. Then again
$j+1+N^{e}(y+w) = \lambda y +o(y)$, WHP. Therefore,
$U_{k+1} = b \lambda y + o(y).$ Then we arrive at the upper bound that meets the
lower bound.
Thus, \eq{g 2} is verified for $g_{k,1,i,j}(x)$ because $m^{(0)}_{1} = \lambda b $ by \eq{m k}.
\noindent {\bf Induction step.}\\
We can carry out the induction for any finite number of steps. Here is the induction step.
Assume that $\sigma_{k,i}=y \gg 1$ and that, after $\ell \ge 1$ steps, $T_{k+\ell'} \sim (1+m^{(0)}_{\ell'}) y$ for $0 \le \ell' \le \ell$, and there are
$X_{k+\ell-1}$ customers in the queue and that $X_{k+\ell-1}=wy + o(y)$, WHP,
where $w>0$. Then, combining upper and lower bounds, we may conclude that,
again WHP,
$U_{k+\ell} = b wy + o(y)$ and then
\begin{align}
\label{eq:Tk 1}
& T_{k+\ell} = T_{k+\ell-1} + U_{k+\ell} \sim (1+m^{(0)}_{\ell-1} + bw) y,\\
\label{eq:Xk 1}
& X_{k+\ell} = pX_{k+\ell-1} + \lambda U_{k+\ell} + o(y) = wy r + o(y),
\end{align}
where we recall that $r = p + \lambda b$. By the induction hypothesis, \eq{Tk 1} implies that
\begin{align*}
1+m^{(0)}_{\ell} = 1+m^{(0)}_{\ell-1} + bw,
\end{align*}
which, with \eq{m k}, yields that
\begin{align*}
bw = m^{(0)}_{\ell} - m^{(0)}_{\ell-1} = r^{\ell-1} \lambda b.
\end{align*}
Hence, by \eq{Xk 1},
\begin{align*}
T_{k+\ell+1} = T_{k+\ell} + U_{k+\ell+1} & \sim (1+m^{(0)}_{\ell}) y + bw ry \nonumber\\
& = (1 + m^{(0)}_{\ell} + r^{\ell} \lambda b) y = (1 + m^{(0)}_{\ell+1}) y.
\end{align*}
This completes the induction step for $\ell+1$.
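As a purely numerical sanity check (not part of the proof), the recursion $m^{(0)}_{\ell}=m^{(0)}_{\ell-1}+r^{\ell-1}\lambda b$ with $m^{(0)}_{0}=0$ derived above can be iterated and compared with the geometric-series closed form $\lambda b(1-r^{\ell})/(1-r)$; the numerical values of $p$, $\lambda$ and $b$ below are arbitrary.
\begin{verbatim}
# Iterate m[0] = 0, m[l] = m[l-1] + r**(l-1)*lam*b with r = p + lam*b,
# and compare with the closed form lam*b*(1 - r**l)/(1 - r).
p, lam, b = 0.3, 0.5, 0.8          # arbitrary example values with r < 1
r = p + lam * b
assert r < 1

m = [0.0]
for l in range(1, 11):
    m.append(m[-1] + r ** (l - 1) * lam * b)

closed = [lam * b * (1 - r ** l) / (1 - r) for l in range(11)]
print(max(abs(u - v) for u, v in zip(m, closed)))   # ~ 1e-16
\end{verbatim}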
We finally check the conditions \eq{g C} and \eq{C finite}. Since an intermediate regularly varying distribution is dominantly varying, it follows from \eq{g 1} that
\begin{align*}
g_{k,\ell,i,j}(x) & \le \dd{P}\left( K = k+\ell, X_{k-1} = j\right) \dd{P}((1+m^{(0)}_{\infty}) \sigma > x)\\
& \le \dd{P}\left( K = k+\ell, X_{k-1} = j\right) c \dd{P}(\sigma > x)
\end{align*}
for some $c > 0$. Hence, letting
\begin{align*}
C_{k,\ell,i,j} = \dd{P}\left( K = k+\ell, X_{k-1} = j\right) c,
\end{align*}
\eq{g C} is verified, while \eq{C finite} follows from \eq{D finite}. Thus, by \thr{GG1final},
\begin{align*}
\dd{P}(U > x) & \sim \sum_{k=1}^{\infty}\sum_{\ell=0}^{\infty}\sum_{j=0}^{\infty}\sum_{i=0}^{j} \dd{P}\left( K = k+\ell, X_{k-1} = j\right) \dd{P}\left((1+m^{(0)}_{\ell}) \sigma > x \right)\\
& = \frac 1q \sum_{k=1}^{\infty}\sum_{\ell=0}^{\infty} qp^{k}\dd{E}\left( 1 + X_{k-1}\right) qp^{\ell-1} \dd{P}\left((1+m^{(0)}_{\ell}) \sigma > x \right),
\end{align*}
which implies \eq{U asym 1}, and \thr{U asym 1} is proved.
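The asymptotics just proved can be probed numerically. The following Monte Carlo sketch is purely illustrative and rests on extra modelling assumptions not made in the theorem: Poisson external arrivals, Pareto service times, and a first-come-first-served order in which fed-back customers rejoin the end of the queue. It only produces an empirical estimate of $\dd{P}(U>x)$ for a customer arriving at the empty system; it is not an implementation of the formula in \thr{U asym 1}.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(1)

def sample_sojourn(lam=0.5, p=0.3, alpha=2.5, scale=0.48):
    """Sojourn time of customer 1 arriving at an empty queue with
    Bernoulli(p) feedback; Poisson(lam) arrivals and Pareto services
    are assumptions made only for this illustration.
    Stability: lam * E[sigma] / (1 - p) < 1 for these values."""
    service = lambda: scale * (rng.pareto(alpha) + 1.0)
    t, queue, next_id = 0.0, [1], 2
    next_arr = rng.exponential(1.0 / lam)
    while True:
        cust = queue.pop(0)
        t += service()
        while next_arr <= t:                 # external arrivals during service
            queue.append(next_id); next_id += 1
            next_arr += rng.exponential(1.0 / lam)
        if rng.random() < p:
            queue.append(cust)               # fed back to the end of the queue
        elif cust == 1:
            return t                         # tagged customer leaves for good

U = np.array([sample_sojourn() for _ in range(20_000)])
for x in (5.0, 10.0, 20.0, 40.0):
    print(x, (U > x).mean())
\end{verbatim}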
\subsection{PSBJ for the stationary queue}
\label{sect:psbj-st}
We now consider the case where customer $1$ arrives to the stationary queue and denote by $U^0$ its sojourn time.
In this Section, we frequently use the following notation: for a distribution $F$ having a finite mean, $\overline{F_{I}}(x) = \min (1, \int_x^{\infty} \overline{F}(y) dy)$ is its integrated tail distribution (see (a) of \rem{stationary case}).
By ``stationarity'' we mean stationarity in discrete time, i.e. at embedded arrival epochs.
So we assume that the system has started from time $-\infty$ and that customer $1$ arrives at time $\widetilde{t}_{1} \equiv 0$, customers with indices $k \le 0$
enter the system at time instants $\widetilde{t}_k=-\sum_{j=k}^0 t_j$ and customers with indices $k \ge 2$
at time instants $\widetilde{t}_k=\sum_{j=1}^{k-1} t_j$.
For $k \le \ell$, let $S^{H}_{k,\ell} = \sum_{j=k}^{\ell}\xi^{H}_j$, where $\xi^{H}_{j} = \sigma^{H}_{j} - t_{j}$. Then the stationary busy cycle covering $0$ starts at $\widetilde{t}_k$,
$k \le 0$ if
$$
\{\sup_{j<k} S^{H}_{j,k}\le 0, \ \min_{k \le \ell \le 0} S^{H}_{k,\ell }>0\}.
$$
So, if $B^0$ is the remaining duration of the busy period viewed at time $0$, then
\begin{align}\label{eq:decomp}
\{B^0>x\} & = \bigcup_{k = -\infty}^{0} \left\{\sup_{j<k} S^{H}_{j,k}\le 0, B_k > -\widetilde{t}_k + x\right\},
\end{align}
where $B_k$ is the duration of the period that starts at time $\widetilde{t}_k$ given that customer
$k$ arrives in the empty system (then, in particular, $B=B_0$). See \fig{Feedback_time_indeces}.
\begin{figure}
\caption{Workload and time indexes under the stationary regime: $B^{0}$}
\label{fig:Feedback_time_indeces}
\end{figure}
Let
\begin{align*}
\tau^{H}_{k} = \min\left\{ n \ge 1; S^{H}_{k,n} \le 0 \right\}, \qquad k \le 0,
\end{align*}
which is the number of customers arriving at or after time $0$ in the busy period when it starts at time $\widetilde{t}_{k}$, and let
\begin{align*}
A^{H}_{k} = \{\sup_{j <k} S^{H}_{j,k}\le 0\}, \qquad k \le 0,
\end{align*}
then $A^{H}_{k} \cap \{\tau^{H}_{k} \ge \ell\}$'s for $k \le 0, \ell \ge -k+1$ are disjoint sets.
Thus, \eq{decomp} may be written as
\begin{align}
\label{eq:decomp2}
\{B^0>x\} & = \bigcup_{k = 0}^{\infty}
\left(A^{H}_{-k} \cap \left\{\tau^{H}_{-k} \ge k+1, B_{-k} > -\widetilde{t}_{-k} + x\right\}\right),
\end{align}
and therefore, applying the PSBJ of \cor{gg2} to each busy period $B_{-k}$,
\begin{align*}
& \{B^0>x\} = \bigcup_{k = 0}^{\infty} \left(A^{H}_{-k} \cap \left\{\tau^{H}_{-k} \ge k+1, B_{-k} > -\widetilde{t}_{-k} + x\right\}\right) \nonumber\\
& \quad \simeq \bigcup_{k = 0}^{\infty} \bigcup_{i = 1}^{\infty} \left(A^{H}_{-k} \cap \left\{\tau^{H}_{-k} \ge \max(k+1,i), \sigma^{H}_{-k+i} > (-\widetilde{t}_{-k} + x)(1-\rho) \right\}\right),
\end{align*}
where $\sigma^{H}_{-k+i}$, $i\ge 0$ is the service time of the $i$-th customer arriving in the busy period that starts at time $\widetilde{t}_{-k}$.
Hence, letting
\begin{align*}
A^{0}_{-}(x) & = \bigcup_{k=1}^{\infty} \bigcup_{i=1}^{k} \left(A^{H}_{-k} \cap \left\{\tau^{H}_{-k} \ge k+1, \sigma^{H}_{-k+i} > (-\widetilde{t}_{-k}+x)(1-\rho)\right\} \right) \nonumber\\
A^{0}_{+}(x) & = \bigcup_{k=0}^{\infty} \bigcup_{i=k+1}^{\infty} \left(A^{H}_{-k} \cap \left\{\tau^{H}_{-k} \ge i,\sigma^{H}_{-k+i} > (-\widetilde{t}_{-k}+x)(1-\rho)\right\} \right),
\end{align*}
we have
\begin{align}
\label{eq:decomp3}
\{B^0>x\} & \simeq A^{0}_{-}(x) \cup A^{0}_{+}(x).
\end{align}
We first consider the event $A^{0}_{+}(x)$, which is a contribution of big jumps at or after time $0$, and show that its probability is negligible with respect to
$\overline{H_I}(x)$, as $x\to\infty$.
Clearly, for any positive function $h(x)$ and
for any $\varepsilon\in (0,a)$,
\begin{align*}
A_+^0(x) & \subseteq \bigcup_{k\ge 0} \{-\widetilde{t}_{-k} < (a-\varepsilon )k - h(x)\}\\
& \qquad \bigcup \bigcup_{k\ge 0} \bigcup_{i\ge k+1}
\{\tau_{-k}^H\ge i, \sigma_{-k+i}^H > ((a-\varepsilon)k+x-h(x))(1-\rho)\} \right\}.
\end{align*}
Then
\begin{align*}
\dd{P}(A_+^0(x)) & \le \sum_{k=0}^{\infty} \sum_{i=k+1}^{\infty}
\dd{P}(\tau_{-k}^H\ge i, \sigma_{-k+i}^H > (-\widetilde{t}_{-k}+x)(1-\rho ))\\
& \le \sum_{k\ge 0} \dd{P}(-\widetilde{t}_{-k}<(a-\varepsilon )k - h(x)) \\
& \qquad + \sum_{k\ge 0} \sum_{i\ge k+1} \dd{P} (\tau_{-k}^H\ge i) \dd{P} (\sigma^H_{-k+i}>
((a-\varepsilon)k + x - h(x))(1-\rho))\\
& \le C e^{-\alpha h(x)} + \sum_{k\ge 0} \dd{E} ((\tau^H-k+1)^+) \dd{P}
(\sigma^H_{-k+i}>
((a-\varepsilon)k + x - h(x))(1-\rho))\\
& = o(\overline{H_I}(x(1-\rho)))= o(\overline{G_I}(x(1-\rho))),
\end{align*}
if one takes, say, $h(x) = x^c$ for some $c\in (0,1)$. Here the second inequality follows since
$\left\{\tau^{H}_{-k} \ge i\right\} = \left\{\tau^{H}_{-k} \le i-1\right\}^{c}$ is independent of $\sigma^{H}_{-k+i}$, the
third inequality follows from Chernoff's inequality with a small $\alpha>0$, and the final conclusion follows from
property \eqref{PSBJ101} in the Appendix.
Thus, we only need to evaluate the contribution of big jumps that occur before time $0$. Namely, we analyse $A^{0}_{-}(x)$.
Note that, for any $k_0>0$, the probability of the event
\begin{align*}
A_-^{(0, k_0)}(x) & = \bigcup_{k=1}^{k_0} \bigcup_{i=1}^{k} \left(A^{H}_{-k} \bigcap \left\{\tau^{H}_{-k} \ge k+1, \sigma^{H}_{-k+i} > (-\widetilde{t}_{-k}+x)(1-\rho)\right\} \right)
\end{align*}
is of order $O(\overline{G}(x(1-\rho)))$ which is negligible with respect to
$\overline{G_I}(x(1-\rho))$. Therefore, one can choose an integer-valued
$h(x)\to \infty$ such that $\dd{P}\left(A_-^{(0,h(x))}(x)\right)=o(\overline{G_I}(x(1-\rho)))$. So we may again apply the SLLN,
$-\widetilde{t}_{-k} \sim ak$ for sufficiently large $k$, to get
\begin{align*}
& \bigcup_{k=1}^{\infty} \left(A^{H}_{-k} \bigcap \left(\bigcup_{i=1}^{k} \left\{\tau^{H}_{-k} \ge k+1, \sigma^{H}_{-k+i} > (-\widetilde{t}_{-k}+x)(1-\rho)\right\}\right)\right) \\
& \quad \simeq \bigcup_{k=1}^{\infty} \bigcup_{\ell=0}^{k-1} \left(A^{H}_{-k} \bigcap \left\{\tau^{H}_{-k} \ge k+1, \sigma^{H}_{-\ell} > (ak+x)(1-\rho)\right\} \right)\\
& \quad = \bigcup_{\ell=0}^{\infty} \bigcup_{k=\ell+1}^{\infty} \left(A^{H}_{-k} \bigcap \left\{\tau^{H}_{-k} \ge k+1, \sigma^{H}_{-\ell} > (ak+x)(1-\rho)\right\} \right)\\
& \quad \subseteq
\bigcup_{\ell=0}^{\infty} \left\{\sigma^{H}_{-\ell} > (a(\ell+1) +x)(1-\rho)\right\}.
\end{align*}
On the other hand, for $h(x)\uparrow\infty$ sufficiently slowly and for an appropriate sequence $\varepsilon_{\ell}\downarrow 0$ (that comes from the SLLN), we have
\begin{align*}
& \ \ \quad \bigcup_{\ell=0}^{\infty} \left\{\sigma^{H}_{-\ell} > (a(\ell+1) +x)(1-\rho)\right\}
\sim \bigcup_{\ell=h(x)}^{\infty} \left\{\sigma^{H}_{-\ell} > (a(\ell+1) +x)(1-\rho)\right\}\\
& \sim \bigcup_{\ell=h(x)}^{\infty} \left(\left\{\sigma^{H}_{-\ell} > ((a+2\varepsilon_{\ell})(\ell+1) +x+h(x))(1-\rho)+h(x)\right\}\bigcap D_{-(\ell+1)}\right) \equiv E(x),
\end{align*}
where
\begin{align*}
D_{-\ell} = \bigcap_{j=1}^{\infty}\left\{\sum_{i=-\ell+1}^{-\ell+j} t_i \le (a+\varepsilon_j)j+h(x), \sum_{i=-\ell+1}^{-\ell+j} \sigma^H_i\ge (b^h-\varepsilon_j)j-h(x)\right\}.
\end{align*}
Since $B^0>x$ on the event $E(x)$, we arrive at the following PSBJ for the stationary busy period.
\begin{lemma}
\label{lem:psbj-st1}
If the $GI/GI/1$ feedback queue is stable and its service time distribution has an $\sr{IRV}$ distribution with a finite mean, then
\begin{align}
\label{eq:psbj-st4}
\{B^{0} > x\} \simeq \bigcup_{k=0}^{\infty} \left\{\sigma^{H}_{-k} > (x+a(k+1))(1-\rho)\right\}.
\end{align}
\end{lemma}
The lemma implies that
\begin{align}\label{eq:psbj-st3}
\dd{P}(B^0>x) \sim
\sum_{k=0}^{\infty} \dd{P}(\sigma_{-k}^H > (x+a(k+1))(1-\rho ))
\end{align}
since the sum of the probabilities of pairwise intersections is of order $O(\overline{G_{I}}^2(x))=o(\overline{G_{I}}(x))$.
Then we may conclude that the principle of a single big jump can be applied to the stationary sojourn time too:
\begin{align}\label{eq:negl-h}
\dd{P} (U^0>x) & \sim \sum_{n=0}^{\infty} \dd{P}(U^0>x, \sigma_{-n}^H >(x+(n+1)a)(1-\rho)) \nonumber\\
& \sim \sum_{n=h(x)}^{\infty} \dd{P}(U^0>x, \sigma_{-n}^H >(x+(n+1)a)(1-\rho))
\end{align}
where the second equivalence is valid for any integer-valued function $h(x)\uparrow \infty$, $h(x)=o(x)$ and follows from \eq{decomp} and from the properties of ${\cal IRV}$ and integrated tail distributions, see \app{properties}.
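We also note that the right-hand side of \eq{psbj-st3} is a Riemann-type sum which, by \eqref{PSBJ100} in the Appendix, is asymptotically $\overline{H_I}(x(1-\rho))/(a(1-\rho))$. The following sketch, with a Pareto tail and arbitrary parameter values, is only a numerical illustration of this approximation.
\begin{verbatim}
# Illustration of the Riemann-type sum in (psbj-st3):
#   sum_{k>=0} H_bar((x + a(k+1))(1-rho))  ~  H_I_bar(x(1-rho)) / (a(1-rho))
# for a Pareto tail H_bar(y) = min(1, y**(-alpha)); all values are arbitrary.
alpha, a, rho = 2.2, 2.0, 0.6
H_bar   = lambda y: min(1.0, y ** (-alpha))
H_I_bar = lambda y: min(1.0, y ** (1.0 - alpha) / (alpha - 1.0))  # for y >= 1
for x in (50.0, 200.0, 1000.0):
    lhs = sum(H_bar((x + a * (k + 1)) * (1 - rho)) for k in range(200_000))
    rhs = H_I_bar(x * (1 - rho)) / (a * (1 - rho))
    print(x, lhs, rhs)     # the agreement improves as x grows
\end{verbatim}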
\subsection{Proof of \thr{stationary case}}
\label{sect:tail-st}
First, we comment that it is easy to obtain the logarithmic asymptotics for the stationary sojourn time.
Since the sojourn time of the customer entering the stationary queue at time $0$ is not bigger than the stationary busy period
and is not smaller than the stationary sojourn time in the auxiliary queue without feedback, and since both bounds have tail distributions that are proportional to the integrated tail distribution of a single service time (see the Appendix for definitions),
we immediately get the logarithmic tail asymptotics:
\begin{align}\label{eq:log-stat}
\log \dd{P}(U^0>x) \sim \log \overline{H_{I}}(x) \sim \log \overline{G_{I}}(x).
\end{align}
Now we outline how to obtain the exact tail asymptotics for the stationary sojourn time distribution and give the final answer. For this, we use the following simplifications, which can be made rigorous using the ``WHP'' terminology and the $o(x)$-insensitivity of the service-time distribution.
\begin{itemize}
\item [(1)] We observe that the order of services prior to time $0$ is not important for the customer that enters the stationary queue
at time $0$: the joint distribution of
the residual service time and of the queue length at time 0 stays the same for all reasonable service disciplines (that do not allow processor sharing).
So we may assume that,
up to time $0$, all arriving customers are served in order of their external arrival: the system
serves the ``oldest'' customer a geometric number of times and then turns to the service of the next customer.
\item [(2)] We simplify the model by assuming that all inter-arrival times are deterministic and equal to $a=\lambda^{-1}$.
\item [(3)] We further assume that all service times of all customers but one are equal to $b$, so every customer
but one has a geometric number of services of length $b$. The ``exceptional'' customer may be any customer
$-n\le 0$; it has a geometric number of services, one of which is random and large, while all the others are equal to $b$.
So the total service time of the ``exceptional'' customer has the tail distribution equivalent to
$$
\dd{E}K \cdot \overline{G}(x) = \frac{1}{q}\overline{G}(x).
$$
\item [(4)] We assume that the ``exceptional'' customer arrives at an empty queue, that is, the workload found by this customer is negligible compared with his exceptional service time.
\end{itemize}
Due to the arguments explained above, we can show that the tail asymptotics of the sojourn time of customer $1$ in the original and in the auxiliary
system are equivalent. We start by repeating our calculations from the proof of \thr{U asym 1}, but in two slightly different settings.
Assume that, for the exceptional customer arriving at or before time $0$, all service times but the very first one are equal to $b$. Assume that, if customer $1$ arriving at time $0$ is not exceptional, then it finds $X_0=N$ customers in the queue, and otherwise it finds a negligible number of customers compared with $N$ while its first service time is $Nb$. Assume customer $1$ leaves the system after $K=k$ services. Denote, as before, by $U_i$ the time between its $(i-1)$st and $i$th service completions and by $X_i$ the queue length behind customer $1$ after its $i$th service completion.
How large should $N$ be for the sojourn
time of customer $1$ to be bigger than $x$ where $x$ is large?
(A) Assume that the (residual) service time, $z$, of the very first customer in the queue
is not bigger than $b$ (so we may neglect it).
When $N$ is large, we get that $U_{1} \sim Nb$. Then we have
\begin{align*}
X_{i} \sim X_{i-1} p + \lambda U_{i}, \qquad U_{i} \sim X_{i-1} b, \qquad i =1,2,\ldots,k.
\end{align*}
Hence, $X_i \sim X_{i-1}p + \lambda U_{i} \sim N r^i$ and $U_{i} \sim Nbr^{i-1}$ where $r=p+\lambda b<1$. Then $U_1+\ldots+U_{k}\sim Nb(1-r^k)/(1-r)$. Thus, we may conclude that
\begin{align}\label{eq:XU}
\dd{P}(U>x, K=k) \sim qp^{k-1}\dd{P}(Nb> x_k),
\end{align}
where $x_k = x(1-r)/(1-r^k)$.
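The closed form in case (A) can be checked directly: a minimal sketch iterating the deterministic recursion $U_{i}\sim X_{i-1}b$, $X_{i}\sim pX_{i-1}+\lambda U_{i}$ from $X_{0}=N$ reproduces $U_{1}+\ldots+U_{k}\sim Nb(1-r^{k})/(1-r)$ exactly (the numerical values below are arbitrary).
\begin{verbatim}
# Fluid recursion of case (A): U_i = b*X_{i-1}, X_i = p*X_{i-1} + lam*U_i = r*X_{i-1}.
p, lam, b, N, k = 0.3, 0.5, 0.8, 1000.0, 6
r = p + lam * b                     # r < 1
X, total = N, 0.0
for _ in range(k):
    U = b * X
    X = p * X + lam * U             # = r * X
    total += U
print(total, N * b * (1 - r ** k) / (1 - r))   # the two values coincide
\end{verbatim}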
(B) Assume now that both $X_0=N$ and $z$ are large. Then $U_{1} \sim z+Nb$ and $X_1 \sim Np+\lambda (z+Nb)$ and, further, $X_{i}\sim X_{i-1}r \sim X_1 r^{i-1}$ and
$U_{i+1} \sim X_ib \sim X_1b r^{i-1} \sim Nbr^i + zr^i$. Thus,
$U_1+\ldots+U_{k}\sim Nb(1-r^k)/(1-r)+ z(1-r^k)/(1-r)$ and
\begin{align}\label{eq:XU2}
\dd{P}(U>x, K=k) \sim qp^{k-1}\dd{P}(Nb+z> x(1-r)/(1-r^k)).
\end{align}
Let $W(t)$ be the total work in the system at time $t$. We illustrate $W(t)$ below to see how the cases (A) and (B) occur.
\begin{figure}
\caption{Sample path of the workload $W(t)$ for the cases (A) and (B) with $K=2$}
\label{fig:Feedback_stationary}
\end{figure}
We will see now that if $K=k$ and if there is a big service time of the $(-n)$th ``exceptional''
customer, then the case (A) occurs if $n>x_k/b$ and the case (B) if $n<x_k/b$.
Let the big service time take value $y \gg 1$. Recall from \eq{negl-h} that it is enough to consider values of $n\ge h(x)$ only,
where $h(x)\uparrow\infty$, $h(x)=o(x)$.
For any $k\ge 1$, assume that $K=k$ and $y \le na$; then the exceptional service is completed before or at time $0$, and the situation (A) occurs. Hence, $X_{0} \equiv N = n - j$ for some nonnegative $j \le n$, and $y+jb/q \approx na$ because approximately $j$ further customers leave the system prior to time $0$. Then $U \sim Nb (1-r^{k})/(1-r)$, and $U > x$ is asymptotically equivalent to $Nb (1-r^{k})/(1-r) > x$, where the last inequality is identical with $(n-j)b > x_{k}$. This together with $j \approx (na-y) q/b$ implies that
\begin{align*}
y \gtrsim x_{k}/q + n(a-b/q).
\end{align*}
Since $na \ge y$, this further implies that $n \gtrsim x_{k}/b$.
We next assume $K=k$ and $n<x_k/b$. Then the contrapositive of the above implication shows that $y > na$, and the situation (B) occurs. Therefore, we should take $y=z+na$ and $U_{1} = nb + z$, where $N=n$. Since $U \sim (nb+z) (1-r^{k})/(1-r)$, $U > x$ is equivalent to $nb + z \gtrsim x_{k}$, and therefore
\begin{align*}
y \gtrsim x_{k} + n(a-b).
\end{align*}
Combining together both cases, we obtain the following result:
\begin{align}\label{eq:final1-stat}
\dd{P}(U^0>x) & \sim q\sum_{k=1}^{\infty} p^{k-1}
\Bigg( \sum_{n=1}^{x_k/b - 1} \dd{P}(\sigma>x_k+n(a-b)) \nonumber\\
& \hspace{16ex} +
\sum_{n=x_k/b}^{\infty} \dd{P}(\sigma > x_k/q + n(a-b/q))\Bigg).
\end{align}
Clearly, the second sum in the parentheses is equivalent to
\begin{align*}
\sum_{n=0}^{\infty} \dd{P}(\sigma > x_ka/b + n(a-b/q)) & \sim
(a-b/q)^{-1} \overline{G_{I}}(x_ka/b)
\end{align*}
while the first sum in the parentheses is
\begin{align*}
\sum_{n=1}^{\infty} \dd{P}(\sigma>x_k+n(a-b)) - \sum_{n=x_k/b}^{\infty} \dd{P}(\sigma>x_k+n(a-b)) & \sim
(a-b)^{-1} \left(\overline{G_{I}}(x_k) - \overline{G_{I}}(x_ka/b)\right).
\end{align*}
Hence, we have
\begin{align}
\label{eq:final2-stat}
\dd{P}(U^0>x) & \sim \frac{q}{a-b}\sum_{k=1}^{\infty} p^{k-1}
\left(\overline{G_{I}}(x_k) + \frac{b(1-q)}{aq-b}\overline{G_{I}}(x_ka/b)
\right),
\end{align}
where we recall that $x_k=\frac{x(1-r)} {1-r^k}$, for $k\ge 1$. Since $m^{(0)}_{K} = \dd{E}\left( \frac {1-r^{K}} {1-r}\right)$,
we arrive at \eq{final2-stat 2} and \eq{final3-stat}.
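For completeness, here is a small numerical sketch that evaluates the truncated series on the right-hand side of \eq{final2-stat} for a Pareto service-time tail; the parameter values are arbitrary (chosen so that $r<1$ and $aq>b$ hold), and $\overline{G_{I}}$ is taken in closed form for the Pareto tail.
\begin{verbatim}
# Truncated evaluation of the right-hand side of (final2-stat) for a Pareto
# service-time tail G_bar(y) = (xm/y)**alpha, y >= xm; illustration only.
alpha, xm = 2.5, 0.48
a, b, p = 2.0, 0.8, 0.3            # mean inter-arrival, mean service, feedback prob.
q, lam = 1.0 - p, 1.0 / a
r = p + lam * b                    # r < 1 and a*q > b are assumed

def G_I_bar(y):                    # integrated tail min(1, int_y^infty G_bar), y >= xm
    return min(1.0, xm ** alpha / ((alpha - 1.0) * max(y, xm) ** (alpha - 1.0)))

def U0_tail(x, k_max=200):
    s = 0.0
    for k in range(1, k_max + 1):
        xk = x * (1.0 - r) / (1.0 - r ** k)
        s += p ** (k - 1) * (G_I_bar(xk)
                             + b * (1 - q) / (a * q - b) * G_I_bar(xk * a / b))
    return q / (a - b) * s

for x in (10.0, 50.0, 250.0):
    print(x, U0_tail(x))
\end{verbatim}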
\appendix
\section*{Appendix}
\label{sect:appendix}
\setnewcounter
\section{Properties of heavy-tailed distributions}
\label{app:properties}
We review basic properties of several classes of heavy-tailed distributions listed at the end of \sectn{introduction} (see \cite{FKZ2013} for the modern theory of heavy-tailed distributions, and the books \cite{As2003}, \cite{EKM1997} for more details), and formulate a part of the main result from \cite{FoZa2003} that plays
an important role in our analysis.
Let $\{ \xi_n\}_{-\infty}^{\infty}$ be $i.i.d.$ r.v.'s
with finite mean $\dd{E} \xi_1$ and with $\dd{P} (\xi_1>0)>0$ and $\dd{P} (\xi_1<0)>0$.
Let $F(x) = \dd{P} (\xi_1 \leq x)$ be their common distribution and $\overline{F}(x) =
1-F(x) = \dd{P} (\xi_1>x)$ its tail.
Let $m^+ \equiv m^+(F) = \dd{E} \max (0,\xi_1) = \int_0^{\infty} \overline{F}(t) dt$.
Let $S_0 =0$, $S_n = \sum_1^n \xi_i$, and
$\tau = \min \{ n \ge 1 ~:~ S_n \leq 0\}$.
If $F \in \sr{L}$ (long tailed), that is, \eq{long} holds for some $y>0$, then it holds for all $y$ and, moreover, uniformly in $|y|\le C$, for any fixed $C$.
Therefore, if $F\in {\cal L}$, then there exists a positive function $h(x)\to\infty$ such
that $\overline{F}(x-h(x))\sim \overline{F}(x) \sim \overline{F}(x+h(x))$. In this case we say that
the tail distribution $\overline{F}$ {\it is} $h$-{\it insensitive}.
In what follows, we make use of the following characterisation (see Theorem 2.47 in \cite{FKZ2013}):
\begin{equation}
\label{charIRV}
F\in {\cal IRV} \ \mbox{if and only if} \ F \ \mbox{is $h$-insensitive for any} \ h(x)=o(x).
\end{equation}
In particular, if $F$ is $h$-insensitive, then $F$ is $h_c$-insensitive for any $c>0$, where $h_c(x)=h(cx).$
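For instance, the characterisation \eqref{charIRV} is easy to observe numerically: for a regularly varying (hence intermediate regularly varying) tail $\overline{F}(x)=x^{-\alpha}$ the ratio $\overline{F}(x+h(x))/\overline{F}(x)$ tends to $1$ when $h(x)=\sqrt{x}$, whereas an exponential tail is not $h$-insensitive. The sketch below is illustrative only.
\begin{verbatim}
import math
# h-insensitivity check with h(x) = sqrt(x) = o(x):
alpha, mu = 2.5, 1.0
pareto_tail = lambda x: x ** (-alpha)
exp_tail    = lambda x: math.exp(-mu * x)
for x in (1e2, 1e4, 1e6):
    h = math.sqrt(x)
    print(x, pareto_tail(x + h) / pareto_tail(x), exp_tail(x + h) / exp_tail(x))
# The first ratio tends to 1; the second tends to 0.
\end{verbatim}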
We also use another characterisation, which is a straightforward minor extension of Theorem 2.48 from \cite{FKZ2013}:
\begin{equation}
\label{char2IRV}
F\in {\cal IRV} \ \ \mbox{if and only if} \ \ \overline{F}(V_n)\sim \overline{F}(v_n),
\end{equation}
for any sequence of non-negative random variables $V_n$ with corresponding means $v_n=\dd{E} V_n$ satisfying
\begin{equation*}
V_n\to\infty \ \mbox{and} \ V_n/v_n\to 1 \ \ \ \mbox{in probability}.
\end{equation*}
Here is another good property of ${\cal IRV}$ distributions. Let random variables $X$ and $Y$ have arbitrary joint distribution, with the distribution of $X$ being ${\cal IRV}$ and
$\dd{P} (|Y|>x) =o(\dd{P}(X>x))$. Then
\begin{equation}\label{o-little}
\dd{P} (X+Y>x) \sim \dd{P} (X>x) \ \mbox{as} \ x\to\infty.
\end{equation}
If $F$ is an $\sr{IRV}$ distribution with finite mean, then the distribution with the integrated tail
$\overline{F_I}(x)=\min (1,\int_x^{\infty}\overline{F}(y) dy)$ is also $\sr{IRV}$ and $\overline{F}(x) = o(\overline{F_I}(x))$ and, moreover, $\int_x^{x+h(x)}\overline{F}(y) dy
= o(\overline{F_I}(x))$ if $F_I$ is $h$-insensitive.
We use the following well-known result: if $\{\sigma_{1,j}\}$ is an $i.i.d.$ sequence of random
variables with common subexponential distribution $F$ and if the counting random variable $K$ does
not depend on the sequence and has a light-tailed distribution, then
\begin{equation}\label{Kesten}
\left\{ \sum_1^K \sigma_{1,j} >x \right\} \simeq \cup_{1}^K\{\sigma_{1,j}>x\} \ \ \mbox{and} \ \
\dd{P} \left( \sum_1^K \sigma_{1,j} >x\right) \sim \dd{E} K \overline{F}(x), \quad x\to\infty.
\end{equation}
Here is the principle of a single big jump again: the sum is large when one of the summands is large.
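A quick Monte Carlo illustration of \eqref{Kesten} (not a proof): with Pareto summands and an independent geometric counting variable $K$, the empirical tail of the random sum is compared with $\dd{E}K\,\overline{F}(x)$; the parameters are arbitrary and the estimate for the largest $x$ is noisy.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(2)

alpha, q, n_runs = 2.5, 0.7, 200_000
K = rng.geometric(q, size=n_runs)                 # K in {1,2,...}, E[K] = 1/q
sums = np.array([(rng.pareto(alpha, size=k) + 1.0).sum() for k in K])
F_bar = lambda x: x ** (-alpha)                   # tail of Pareto(alpha,1) summands
for x in (10.0, 20.0, 40.0):
    print(x, (sums > x).mean(), F_bar(x) / q)     # empirical vs E[K]*F_bar(x)
\end{verbatim}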
Let $M =\sup_{n\ge 0} \sum_{i=1}^n \xi_i$ where $\{\xi_i\}$ are $i.i.d.$ r.v.'s with negative mean $-m$
and with common distribution function $F$ such that $F_I$ is subexponential. Then
\begin{align}
\label{MRSBJ}
& \{M>x\} \simeq \cup_{n\ge 1}\{\xi_n> x+mn\}, \nonumber\\
& \dd{P}(M>x) \sim \sum_{n\ge 1} \dd{P}(\xi_1>x+mn) \sim \frac{1}{m}\overline{F_I}(x).
\end{align}
Further, if $F_I$ is subexponential, then, for any sequence $m_n\to m>0$ and any function
$h(x)=o(x)$,
\begin{align}\label{PSBJ100}
\sum_{n\ge 0} \overline{F}(x+m_nn+h(x))\sim \frac{1}{m}\overline{F_I}(x)
\end{align}
and, for any sequence $c_n\to 0$,
\begin{align}\label{PSBJ101}
\sum_{n\ge 0} c_n\overline{F}(x+m_nn+h(x)) =o\left(\overline{F_I}(x)\right).
\end{align}
\begin{remark}
\label{rem:tailprop}
Let ${\cal K}$ be any of the classes ${\cal L},{\cal RV}, {\cal IRV},
{\cal D},
{\cal S}, {\cal S}^*$.
The property of belonging to class ${\cal K}$ is
a {\it tail property}: if $F\in {\cal K}$ and if $\overline{G}(x) \sim C\overline{F}(x)$ where $C$ is a positive constant, then
$G\in {\cal K}$. In particular,\\
(i) if $F\in {\cal K}$, then $F_+\in {\cal K}$;\\
(ii) if the random variable $\xi$ has distribution $F\in {\cal K}$ and $c_1 > 0$ and $c_2$ are any constants,
then the distribution of the random variable $\eta = c_1\xi + c_2$ also belongs to ${\cal K}$; \\
(iii) if the random variable $\xi$ may be represented as $\xi = \sigma - t$ where $\sigma$ and $t$ are mutually independent random variables and $t$ is non-negative (or, slightly more generally, bounded from below), and if the distribution of
$\sigma$ belongs to class ${\cal K}$, then $\dd{P} (\xi >x) \sim
\dd{P} (\sigma > x)$, so
the distribution of $\xi$ belongs to ${\cal K}$ too.
\end{remark}
The following result is a part of Theorem 1 in \cite{FoZa2003}, see also
\cite{FoPaZa2005} for a more general statement.
\begin{theorem}
\label{thr:FZ1}
Let $S_n=\sum_1^n \xi_i$, $S_0=0$ be a random walk with $i.i.d.$ increments with distribution function $F$ and finite negative mean
\begin{equation}\label{NEG}
\dd{E} \xi_1 = -m < 0.
\end{equation}
Assume $F\in {\cal S}^*$. Let $T \le \infty$ be any stopping time (with respect to
$\{ \xi_n\}$). Let $M_{T} = \max_{0\le n\le T} S_n$. Then
\begin{equation}\label{si}
\lim_{x\to\infty} \frac{\dd{P} (M_{T} >x)}{\overline{F}(x)} = \dd{E} T.
\end{equation}
\end{theorem}
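A rough Monte Carlo illustration of \thr{FZ1} (illustrative only), with the stopping time $T=\tau$, the first entry of the walk into $(-\infty,0]$, and increments $\xi=\sigma-c$ for Pareto $\sigma$: the ratio $\dd{P}(M_{T}>x)/\overline{F}(x)$ should be comparable to the empirical mean of $T$, with the agreement improving as $x$ grows (at the price of more runs). All numerical choices are arbitrary.
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(3)

alpha, c = 1.8, 3.0                     # E[xi] = alpha/(alpha-1) - c < 0
F_bar = lambda x: (x + c) ** (-alpha)   # P(xi > x) = P(sigma > x + c)
n_runs, x = 100_000, 30.0
M_T = np.empty(n_runs); T = np.empty(n_runs)
for j in range(n_runs):
    s, m, n = 0.0, 0.0, 0
    while True:
        n += 1
        s += (rng.pareto(alpha) + 1.0) - c
        m = max(m, s)
        if s <= 0.0:
            break
    M_T[j], T[j] = m, n
print((M_T > x).mean() / F_bar(x), T.mean())
\end{verbatim}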
\section {Proof of \cor{gg2}}
\label{app:gg2}
We first note that the event $\{\tau^{H} \ge n \}$ is independent of $\sigma^{H}_{n}$ and $\xi^{H}_{n}$ because $\{\tau^{H} \ge n \} = \{\tau^{H} \le n-1 \}^{c}$ is $\sigma(\{\xi^{H}_{\ell}; 1 \le \ell \le n-1\})$-measurable. Hence,
\begin{align*}
\dd{P}(\tau^{H} \ge n, \sigma^{H}_{n} > x(1-\rho)) = \dd{P}(\tau^{H} \ge n) \dd{P}(\sigma^{H}_{n} > x(1-\rho)),
\end{align*}
and therefore the equivalence \eq{PSBJ-B1} is immediate from \eq{B3} of \thr{gg1}, while \eq{PSBJ-B2} easily follows from \eq{PSBJ-B1}.
Thus, it remains to prove \eq{PSBJ-B3}. For this, we introduce some notation. Let $S^{H}_{n} = \sum_{i=1}^n \xi^{H}_{i}$ and let $S^{\sigma^{H}}_n = \sum_{i=1}^n \sigma^{H}_i$. Define a sequence of events $E_{n}$, $n=0,1,\ldots$, as
$$
E_{n} = \cap_{\ell\ge 1}\{(S^{H}_{\ell+n}-S^{H}_{n})\ge (b_{H}-a){\ell}-\delta_{\ell} \ell - C,
(S^{\sigma^{H}}_{\ell+n}-S^{\sigma^{H}}_{n})\ge b_{H}{\ell} - \delta_{\ell} \ell -C\}
$$
which is stationary in $n$ (here, by convention, $S^{H}_0=S^{\sigma^{H}}_0=0$). Due to the SLLN,
there exists a sequence $\delta_{\ell} \downarrow 0$ such that
$$
\dd{P} (|S^{H}_{\ell}/\ell-(b_{H}-a)|\le \delta_{\ell} \ \mbox{and} \
|S^{\sigma^{H}}_{\ell}/{\ell} - b_{H}|\le \delta_{\ell}, \ \mbox{for all} \ {\ell}\ge n)\to 1,
$$
as $n\to\infty$. Therefore, for any $\varepsilon >0$, there exists $C=C_{\varepsilon}>0$ (we then write $E_{n,\varepsilon}$ for the corresponding event $E_{n}$) such that
\begin{align}
\label{eq:En 1}
\dd{P}(E_{n,\varepsilon}) = \dd{P}(E_{0,\varepsilon}) \ge 1- \varepsilon.
\end{align}
Introduce a function $h_{\varepsilon}(x)$ by
$$
h_{\varepsilon}(x)= \max_{i \le [x/a]} \left(i \delta_{i}\right) + C_{\varepsilon} + b_{H},
$$
where $[x/a]$ is the integer part of the ratio $x/a$. Then, for this $\varepsilon$ and $n \ge 1$, define $J_{n,\varepsilon}(x)$ as
\begin{align*}
J_{n,\varepsilon}(x) = \left\{ \tau^{H} \ge n, \xi^{H}_{n} > x(1-\rho) + h_{\varepsilon}(x)\right\}, \qquad x > 0.
\end{align*}
Then, on the event $J_{n,\varepsilon}(x) \cap E_{n,\varepsilon}$, we have $S^{H}_{n-1} > 0$, $S^{H}_{n} > \xi^{H}_{n} > x(1-\rho) + h_{\varepsilon}(x)$, and therefore
\begin{align*}
S^{H}_{n+\ell} & > x(1-\rho) + h_{\varepsilon}(x) - (1-\rho) a \ell - \delta_{\ell} \ell - C_{\varepsilon}\\
& = (1-\rho) (x-a \ell) + \max_{i \le [x/a]} \left(i \delta_{i}\right) - \delta_{\ell} \ell + b_{H} > 0, \qquad 0 \le \ell \le [x/a].
\end{align*}
Hence, letting $\ell_{0} = [x/a]$, we have, on the same event,
\begin{align*}
B \ge \sum_{i = n}^{n + \ell_{0}} \sigma^{H}_{i} & = \sigma^{H}_{n} + S^{\sigma^{H}}_{n+\ell_{0}} - S^{\sigma^{H}}_{n} > x(1-\rho) + h_{\varepsilon}(x) + b_{H} \ell_{0} - \delta_{\ell_{0}} \ell_{0} - C_{\varepsilon} > x.
\end{align*}
For any integer $N \ge 1$, let
\begin{align*}
L_{N,\varepsilon}(x) = \cup_{n=1}^N \{\max_{i<n} \xi^{H}_i \le x(1-\rho)\} \cap J_{n,\varepsilon}(x) \cap E_{n,\varepsilon},
\end{align*}
then we have
\begin{align}
\label{eq:DN 1}
\{B>x\} \supseteq L_{N,\varepsilon}(x).
\end{align}
Let $F^{H}$ be the distribution of $\xi^{H}$, and recall that $H$ is the distribution of $\sigma^{H}$. Both of them are intermediate regularly varying, they are tail-equivalent and $h$-insensitive (see \rem{tailprop} and \eqref{charIRV}). Since $L_{N,\varepsilon}(x)$ is a union of $N$ disjoint events and $\{\tau^{H} \ge n, \max_{i<n} \xi^{H}_i \le x(1-\rho) \}$ is independent of $\xi^{H}_{n}$ and $E_{n,\varepsilon}$, \eq{En 1} yields, as $x\to\infty$,
\begin{align}
\label{eq:LN 1}
\dd{P} (L_{N,\varepsilon}(x))
&= \sum_{n=1}^N \dd{P} (\tau^{H} \ge n, \max_{i<n} \xi^{H}_i \le x(1-\rho)) \cdot
\overline{F^{H}}( x(1-\rho ) + h_{\varepsilon}(x))\cdot \dd{P} (E_{n,\varepsilon}) \nonumber\\
&\sim
\sum_{n=1}^N \dd{P} (\tau^{H} \ge n) \overline{H}(x(1-\rho )) \dd{P} (E_{n,\varepsilon}) \nonumber\\
&\ge
(1-\varepsilon ) \overline{H}(x(1-\rho )) \sum_{n=1}^N \dd{P} (\tau^{H} \ge n).
\end{align}
Let $L_{\varepsilon}(x) = \lim_{N \to \infty} L_{N,\varepsilon}(x)$, then
$L_{\varepsilon}(x) \subset \{B>x\}$ by \eq{DN 1}, and, for any $N$, by \eq{B3} of \thr{gg1},
$$
\dd{P} (B>x) - \dd{P} (L_{\varepsilon}(x)) \lesssim
{\overline H}(x(1-\rho))\left(\varepsilon \sum_{n=1}^N \dd{P}(\tau^{H} \ge n) + \sum_{n=N+1}^{\infty}
\dd{P} (\tau^{H} >n)\right).
$$
Choosing $N$ such that $\sum_{n=N+1}^{\infty} \dd{P} (\tau^{H} >n) \le \varepsilon \dd{E} \tau^{H}$, we get
\begin{align}
\label{eq:BL 1}
0 \le \dd{P} (B>x) - \dd{P} (L_{\varepsilon}(x)) \lesssim
2 {\overline H}(x(1-\rho))\varepsilon \dd{E} \tau^{H}.
\end{align}
For $x > 0$, define events $J(x)$ and $\overline{J}_{\varepsilon}(x)$ as
\begin{align*}
& J(x) = \bigcup_{n=1}^{\infty} \{\tau^{H} \ge n, \xi^{H}_{n} > x(1-\rho)\},\\
& \overline{J}_{\varepsilon}(x) = \bigcup_{n=1}^{\infty} \{\max_{i<n} \xi^{H}_i \le x(1-\rho)\} \cap J_{n,\varepsilon}(x).
\end{align*}
Since $L_{\varepsilon}(x) = \cup_{n=1}^{\infty} \{\max_{i<n} \xi^{H}_i \le x(1-\rho)\} \cap J_{n,\varepsilon}(x) \cap E_{n,\varepsilon}$, we have
\begin{align*}
0 \le \dd{P} (\overline{J}_{\varepsilon}(x)) - \dd{P} (L_{\varepsilon}(x)) & = \sum_{n=1}^{\infty} \dd{P}(\tau^{H} \ge n, \max_{i<n} \xi^{H}_i \le x(1-\rho)) \overline{F^{H}}(x(1-\rho)) (1- \dd{P}(E_{n,\varepsilon}))\\
& \le
\varepsilon \dd{E}\tau^{H} \overline{F^{H}}(x(1-\rho)).
\end{align*}
Further, $\overline{J}_{\varepsilon}(x) \subset J(x)$ and
$$
0 \le \dd{P} (J(x))-\dd{P}(\overline{J}_{\varepsilon}(x)) \le
\dd{E}\tau^{H} \dd{P} (\xi^{H}_1\in (x(1-\rho), x(1-\rho)+h_{\varepsilon}(x)]) =
o(\overline{F^{H}}(x(1-\rho))).
$$
Combining those with \eq{BL 1}, we obtain that
\begin{align*}
\dd{P} \left(\{B>x\}\setminus J(x)\right) & \le \dd{P}(B>x) - \dd{P}(L_{\varepsilon}(x) \cap J(x))\\
& = \dd{P}(B>x) - \dd{P}(L_{\varepsilon}(x)) \le 2\varepsilon \dd{E}\tau^{H} \overline{F^{H}}(x(1-\rho)),\\
\dd{P} \left(J(x)\setminus \{B>x\}\right) & \le \dd{P} \left(J(x)\right) - \dd{P}\left(L_{\varepsilon}(x)\right)\\& \lesssim \varepsilon \dd{E}\tau^{H} \overline{F^{H}}(x(1-\rho)) + o(\overline{F^{H}}(x(1-\rho))).
\end{align*}
Since $\varepsilon >0$ may be taken arbitrarily small, we arrive at \eq{PSBJ-B3}.
\section{Alternative proof of \cor{U finite 1}}
\label{app:alternative}
In this section, we give an alternative proof of \cor{U finite 1}, which is based on the result from \cite{FoZa2003} and does not use the PSBJ. Instead, our basic tools are \thr{FZ1} and the law of large numbers. We also slightly generalise \cor{U finite 1}.
\begin{theorem}
\label{thr:U finite a1}
For the stable $GI/GI/1$ feedback queue, assume that its service time distribution is intermediate regularly varying and has a finite mean. If the first customer arriving at the empty system has an exceptional first service time $\eta$ instead of $\sigma_{1,0}$, that is, $U_{1} = \eta$, such that
\begin{itemize}
\item [(I)] $\eta$ has an intermediate regularly varying distribution with $\dd{E}(\eta) < \infty$;
\item [(II)] $\displaystyle \frac {\dd{P}(\eta > x)} {\dd{P}(\sigma > x)}$ converges to a constant from $[0,\infty]$, as $x \to \infty$;
\item [(III)] $\dd{E}(X^{(0)}_{k-1}) < \infty$ for $k \ge 1$;
\end{itemize}
then, for each $k \ge 1$, as $x \to \infty$,
\begin{align}
\label{eq:U finite a1}
\dd{P}\left(T_{k} > x\right) \sim \sum_{\ell=0}^{k-2} & \dd{E}\left( 1 + X^{(0)}_{k-\ell-1} \right) \dd{P}\left((1+ m^{(0)}_{\ell}) \sigma > x \right) + \dd{P}\left((1+ m^{(0)}_{k-1}) \eta > x \right).
\end{align}
\end{theorem}
\begin{remark}
\label{rem:U finite a1}
If $\eta = \sigma_{1,0}$, then the conditions (I)--(III) are satisfied, and this theorem is just \cor{U finite 1}.
\end{remark}
\begin{proof}
Recall the definitions of $X_{\ell-1}, U_{\ell}$ under $\sigma_{1,0} = \eta$. We have $X_{0} = u_{0}= 0$, $U_{1} = \eta$, and, for $\ell \ge 2$,
\begin{align}
\label{eq:X0 1}
& X^{(0)}_{\ell-1} = N^{e}(T_{\ell-2}+U_{\ell-1}) - N^{e}(T_{\ell-2}) + N^{B}_{\ell-2}(X^{(0)}_{\ell-2}),\\
\label{eq:X0 sum}
& \sum_{j=1}^{\ell-1}X^{(0)}_{j} = N^{e}(T_{\ell-1}) + \sum_{j=2}^{\ell-1} N^{B}_{j-1}(X^{(0)}_{j-1}),\\
\label{eq:T0 1}
& T_{\ell} = \sum_{j=1}^{\ell} U_{j} = \sum_{j=1}^{\ell} \sum_{i=0}^{X^{(0)}_{j-1}} \sigma_{j,i},
\end{align}
where $X^{(0)}_{j-1}$ and $\sigma_{j, i}$ are independent. Since \eq{U finite a1} is an identity for $k=1$, we assume that $k \ge 2$. We partition the event $\{T_{k} > x\}$ into the following $k$ disjoint sets for each $\vc{y} \equiv( y_{1}, y_{2}, \ldots, y_{k-1}) > \vc{0}$.
\begin{align*}
I_{k}^{(\ell)}(\vc{y},x) = \begin{cases}
\{T_{k} > x, U_{j} \le y_{j}, 1 \le j \le k-1\}, & \ell=1,\\
\{T_{k} > x, U_{j} \le y_{j}, 1 \le j \le k - \ell, U_{k-\ell+1} > y_{k-\ell+1}\}, & 2 \le \ell \le k.
\end{cases}
\end{align*}
We prove that
\begin{align}
\label{eq:Ik 1}
& \dd{P}\left(I_{k}^{(\ell)}(\vc{y},x)\right) \sim \begin{cases}
\dd{E}\left( 1 + X^{(0)}_{k-\ell} \right) \dd{P}\left((1+ m^{(0)}_{\ell-1}) \sigma > x \right), & 1 \le \ell \le k-1,\\
\dd{P}\left((1+ m^{(0)}_{k-1}) \eta > x \right), & \ell=k,
\end{cases}
\end{align}
as $x \to \infty$ and then $y_{1}, \ldots, y_{k-1} \to \infty$. By the assumptions (I)--(III), these asymptotics yield \eq{U finite a1}, and therefore the theorem follows.
We prove \eq{Ik 1} by deriving upper and lower bounds. We first consider the case $\ell = 1$. Since $U_{j} \le y_{j}$ for $1 \le j \le k-1$, we have that $T_{j} \le \sum_{j'=1}^{j} y_{j'}$ for $1 \le j \le k-1$, and
\begin{align*}
\sum_{j=1}^{\ell-1}X^{(0)}_{j} \le N^{e}\left(\sum_{j'=1}^{\ell-1} y_{j'}\right) + \sum_{j=2}^{\ell-1} N^{B}_{j-1}\left(X^{(0)}_{j-1}\right) \quad \mbox{ on } I_{k}^{(1)}(\vc{y},x).
\end{align*}
This inductively shows that $X^{(0)}_{k-1}$ has a light-tailed distribution on $I_{k}^{(1)}(\vc{y},x)$, and therefore \eq{T0 1} implies that
\begin{align*}
\limsup_{x \to \infty} \frac {\dd{P}\left( I_{k}^{(1)}(\vc{y},x) \right)} {\dd{P}(\sigma > x)} \le \limsup_{x \to \infty} \frac {\dd{P}\left( \sum_{i=0}^{X^{(0)}_{k-1}} \sigma_{k,i} > x - \sum_{j=1}^{k-1} y_{j}\right)} {\dd{P}(\sigma > x)} = \dd{E}\left( 1 + X^{(0)}_{k-1} \right).
\end{align*}
The corresponding lower bound is obvious. That is,
\begin{align*}
\liminf_{x \to \infty} \frac {\dd{P}\left( I_{k}^{(1)}(\vc{y},x) \right)} {\dd{P}(\sigma > x)} & \ge \liminf_{x \to \infty} \frac {\dd{P}\left( \sum_{i=0}^{X^{(0)}_{k-1}1(U_{j} <y_{j}, 1\le j \le k-1)} \sigma_{k,i} > x\right)} {\dd{P}(\sigma > x)}\\
& = \dd{E}\left( 1(U_{j} <y_{j}, 1\le j \le k-1) \left( 1 + X^{(0)}_{k-1} \right) \right).
\end{align*}
Hence, letting $y_{j} \to \infty$ for $j=1,2,\ldots,k-1$, we obtain \eq{Ik 1} for $\ell=1$.
We next consider the case $\ell=2$. Let $c_{1}$ be a positive constant, which will be appropriately determined for each sufficiently small $\varepsilon > 0$.
\begin{align*}
\dd{P}\left( I_{k}^{(2)}(\vc{y},x) \right) \le A^{(2)}_{+}(y_{1},\ldots,y_{k-2},x) + B^{(2)}_{+}(y_{1},\ldots,y_{k-2},x),
\end{align*}
where
\begin{align*}
A^{(2)}_{+}(y_{1},\ldots,y_{k-2},x) & = \dd{P}\Bigg( U_{j} \le y_{j}, 1 \le j \le k-2, U_{k-1} > c_{1} x\Bigg),\\
B^{(2)}_{+}(y_{1},\ldots,y_{k-2},x) & = \dd{P}\Bigg( \sum_{i=0}^{X^{(0)}_{k-1}} \sigma_{k,i} > (1-c_{1}) x - (y_{1}+\ldots+y_{k-2}),\\
& \hspace{10ex} U_{j} \le y_{j}, 1 \le j \le k-2, y_{k-1} < U_{k-1} \le c_{1} x\Bigg).
\end{align*}
Since $U_{k-1} = \sum_{i=1}^{X^{(0)}_{k-2}} \sigma_{k-1,i}$, the term $A^{(2)}_{+}(y_{1},\ldots,y_{k-2},x)$ has the same asymptotics as $I^{(1)}_{k-1}(\vc{y},c_{1}x)$. Hence, it follows from the asymptotics of $I^{(1)}_{k}(\vc{y},x)$ that
\begin{align}
\label{eq:A2 1}
A^{(2)}_{+}(y_{1},\ldots,y_{k-2},x) \sim \dd{E}\left( 1 + X^{(0)}_{k-1} \right) 1(k=2) \dd{P}(\eta > c_{1}x) + 1(k \ge 3) \dd{P}(\sigma > c_{1}x).
\end{align}
Thus, if we show that $B^{(2)}_{+}(y_{1},\ldots,y_{k-2},x)$ is asymptotically negligible, then the right-hand side of \eq{Ik 1} is obtained as an upper bound for $\ell=2$. To see this, we consider $X^{(0)}_{k-1}$ on the event
\begin{align*}
E^{U}_{k-1}(\vc{y},x) \equiv \{U_{j} \le y_{j}, 1 \le j \le k-2, y_{k-1} < U_{k-1} \le c_{1} x\},
\end{align*}
on which we have
\begin{align*}
X^{(0)}_{k-1} = N^{e}(T_{k-2}+U_{k-1}) - N^{e}(T_{k-2}) + N^{B}_{k-1}(X^{(0)}_{k-2}) \lesssim N^{e}(c_{1}x), \qquad x \to \infty.
\end{align*}
Hence, letting
\begin{align*}
& z_{k-2} = y_{1}+\ldots+y_{k-2},\\
& \widetilde{S}^{\sigma}_{n} = \sum_{i=1}^{n} (\sigma_{k,i} - (b+\varepsilon)), \qquad \widetilde{M}_{n} = \max_{1 \le j \le n} \widetilde{S}^{\sigma}_{j}, \qquad n \ge 1,
\end{align*}
and applying \thr{FZ1}, we have
\begin{align*}
B^{(2)}_{+}(y_{1},\ldots,y_{k-2},x) & = \dd{P}\Big( \widetilde{S}^{\sigma}_{X^{(0)}_{k-1}+1} + (b+\varepsilon) (X^{(0)}_{k-1}+1) > (1-c_{1}) x - z_{k-2}, E^{U}_{k-1}(\vc{y},x) \Big)\\
& \le \dd{P}\Big( \widetilde{M}_{X^{(0)}_{k-1}+1} + (b+\varepsilon) (X^{(0)}_{k-1}+1) > (1-c_{1}) x - z_{k-2}, E^{U}_{k-1}(\vc{y},x) \Big)\\
& \le \dd{P}\Big( \widetilde{M}_{X^{(0)}_{k-1}+1} > (1-c_{1}(1+(b+\varepsilon)(\lambda+\varepsilon))) x, E^{U}_{k-1}(\vc{y},x) \Big)\\
& \quad + \dd{P}\Big( X^{(0)}_{k-1}+1 > c_{1} (\lambda+\varepsilon)x - z_{k-2}, E^{U}_{k-1}(\vc{y},x) \Big)\\
& \lesssim \dd{E}\left( (X^{(0)}_{k-1}+1)1_{E^{U}_{k-1}(\vc{y},x)} \right) \dd{P}\left( \sigma > (1-c_{1}(1+(b+\varepsilon)(\lambda+\varepsilon))) x \right)\\
& \quad + \dd{P}\Big( N^{e}(c_{1}x) > c_{1} (\lambda+\varepsilon)x - z_{k-2} \Big),
\end{align*}
where the last probability term decays super-exponentially fast, so it is negligible. Thus, if we choose $c_{1} > 0$ such that $c_{1}(1+(b+\varepsilon)(\lambda+\varepsilon)) < 1$, then $B^{(2)}_{+}(y_{1},\ldots,y_{k-2},x)$ is asymptotically negligible because
\begin{align*}
\lim_{x \to \infty} \dd{E}\left( (X^{(0)}_{k-1}+1)1_{E^{U}_{k-1}(\vc{y},x)} \right) = 0.
\end{align*}
Consequently, we choose $c_{1} = \frac {1 - \varepsilon}{1 + (b+\varepsilon)(\lambda+\varepsilon)}$, which converges to $(1 + \lambda b)^{-1}$ as $\varepsilon \downarrow 0$. Thus, we have proved that the right-hand side of \eq{Ik 1} is an upper bound for $\ell=2$.
For the lower bound for $\ell=2$, we take another decomposition. Let $d_{1} = \frac {1+\varepsilon}{1+\lambda b} < 1$ for a sufficiently small $\varepsilon > 0$, then, for $d_{1} x > y_{k-1}$,
\begin{align*}
\dd{P}\left( I_{k}^{(2)}(y_{1},\ldots,y_{k-2},d_{1}x,x) \right) & \ge \dd{P}(U_{k-1} > d_{1} x, U_{j} \le y_{j}, 1 \le j \le k - 2)\\
& \qquad - \dd{P}(T_{k} \le x, U_{j} \le y_{j}, 1 \le j \le k - 2, U_{k-1} > d_{1} x).
\end{align*}
Similar to \eq{A2 1}, we have
\begin{align*}
& \dd{P}(U_{k-1} > d_{1} x, U_{j} \le y_{j}, 1 \le j \le k - 2) \\
& \quad \sim \dd{E}\left( 1 + X^{(0)}_{k-1} \right) 1(k=2) \dd{P}(\eta > d_{1}x) + 1(k \ge 3) \dd{P}(\sigma > d_{1}x).
\end{align*}
On the other hand, by the law of large numbers,
\begin{align*}
& \dd{P}(T_{k} \le x, U_{j} \le y_{j}, 1 \le j \le k - 2, U_{k-1} > d_{1}x)\\
& \quad \le \dd{P}\left(\sum_{i=0}^{X^{(0)}_{k-1}} \sigma_{k,i} \le (1-d_{1}) x, U_{j} \le y_{j}, 1 \le j \le k - 2, U_{k-1} > d_{1} x\right)\\
& \quad \le \dd{P}\left(\sum_{i=0}^{N^{e}(z_{k-2}+d_{1}x)-N^{e}(z_{k-2}) + N^{B}_{k-1}(X^{(0)}_{k-2})} \hspace{-10ex} \sigma_{k,i} \le (1-d_{1}) x, U_{j} \le y_{j}, 1 \le j \le k - 2, U_{k-1} > d_{1} x\right)\\
& \quad = o(1) \dd{P}\left(U_{k-1} > d_{1} x\right).
\end{align*}
Hence, this term is asymptotically negligible, and therefore we have the asymptotic lower bound for $I_{k}^{(2)}(\vc{y},x)$, which agrees with the upper bound, by letting $\varepsilon \downarrow 0$. Thus, we have proved \eq{Ik 1} for $\ell=2$. For $\ell = 3,\ldots,k$, \eq{Ik 1} is proved similarly (we omit the details). This completes the proof of the theorem.
\end{proof}
\end{document}
\begin{document}
\title{Hermitian
structures on the derived category of coherent sheaves}
\author{
\\
\\
Jos\'e I. Burgos Gil\footnote{Partially supported by grant
MTM2009-14163-C02-01 and CSIC research project 2009501001.}\\
\small{Instituto de Ciencias Matem\'aticas (ICMAT-CSIC-UAM-UCM-UC3)}\\
\small{Consejo Superior de Investigaciones Cient\'ificas (CSIC)}\\
\small{Spain}\\
\small{\texttt{[email protected]}}\\
\and
Gerard Freixas i Montplet\footnote{Partially supported by
grant MTM2009-14163-C02-01}\\
\small{Institut de Math\'ematiques de Jussieu (IMJ)}\\
\small{Centre National de la Recherche Scientifique (CNRS)}\\
\small{France}\\
\small{\texttt{[email protected]}}\\
\and
R\u azvan Li\c tcanu\footnote{Supported by CNCSIS -UEFISCSU,
project number PNII - IDEI 2228/2008 .}\\
\small{Faculty of Mathematics}\\
\small{University Al. I. Cuza Iasi}\\
\small{Romania}\\
\small{\texttt{[email protected]}}
}
\maketitle
\begin{abstract}
The main objective of the present paper is to set up the theoretical
basis and the language needed to deal with the problem of direct
images of hermitian vector bundles for projective non-necessarily
smooth morphisms. To this end, we first define hermitian structures on the
objects of the bounded derived category of coherent sheaves on a
smooth complex variety. Secondly we extend the theory of Bott-Chern
classes to these hermitian structures. Finally we introduce the
category $\oSm_{*/{\mathbb C}}$ whose morphisms are projective morphisms
with a hermitian structure on the relative tangent complex.
\end{abstract}
\section{Introduction}
\label{sec:introduction}
Derived categories were introduced in the 60's of the last century by
Grothendieck and Verdier in order to study and generalize duality
phenomenons in Algebraic Geometry (see \cite{Hartshorne:rd},
\cite{Verdier:MR1453167}). Since
then, derived categories had become a standard tool in Algebra and Geometry
and the right framework to define derived functors and to study
homological properties. A paradigmatic example is the definition of
direct image of sheaves. Given a map $\pi\colon X\to Y$ between
varieties and a sheaf $\mathcal{F}$ on $X$, there is a notion of
direct image $\pi _{*}\mathcal{F}$. We are not specifying what kind
of variety or sheaf we are talking about because the same circle of
ideas can be used in many different settings. This direct image is not
exact in the sense that if $f\colon \mathcal{F}\to \mathcal{G}$ is a
surjective map of sheaves, the induced morphism $\pi _{*}f\colon
\pi _{*}\mathcal{F}\to \pi _{*}\mathcal{G}$ is not necessarily
surjective. One then can define a \emph{derived} functor $R\pi
_{*}$ that takes values in the derived category of sheaves on $Y$
and that is exact in an appropriate sense. This functor encodes a lot
of information about the topology of the fibres of the map $\pi $.
The interest for the derived category of coherent sheaves on a variety
exploded with the celebrated 1994 lecture by Kontsevich
\cite{Kontsevich:MR1403918}, interpreting
mirror symmetry as an equivalence between the derived category of the
Fukaya category of certain symplectic manifold and the derived category of
coherent sheaves of a dual complex manifold. In the last decades, many
interesting results about the derived category of coherent sheaves
have been obtained, like Bondal-Orlov
Theorem \cite{BondalOrlov:MR1818984} that shows that a projective
variety with ample canonical or anti-canonical bundle can be recovered
from its derived category of coherent sheaves. Moreover, new tools for
studying algebraic varieties have been developed in the context of
derived categories like the
Fourier-Mukai transform \cite{Mukai:FT}. The interested reader is
referred to books like \cite{Huybrechts:FM} and
\cite{Bartoccietal:MR2511017} for a thorough exposition of recent
developments in this area.
Hermitian vector bundles are ubiquitous in Mathematics. An
interesting problem is to define the direct image of hermitian vector
bundles. More concretely, let $\pi \colon X\to Y$ be a proper
holomorphic map of complex manifolds and let $\overline E=(E,h)$ be a
hermitian holomorphic vector bundle on $X$. We would like to define
the direct image
$\pi_{*}\overline E$ as something as close as possible to a hermitian
vector bundle on $Y$. The information that would be easier to extract from
such a direct image
is encoded in the determinant of the cohomology
\cite{Deligne:dc}, that can be defined directly. Assume that $\pi $ is a
submersion and that we have chosen a hermitian metric on the relative
tangent bundle $T_{\pi }$ of $\pi $ satisfying certain technical
conditions. Then the determinant line bundle $\lambda
(E)=\det(R\pi _{*}E)$ can be equipped with the Quillen metric
(\cite{Quillen:dCRo}, \cite{BismutFreed:EFI}, \cite{BismutFreed:EFII}), that
depends on the metrics on $E$ and $T_{\pi }$ and is constructed using
the analytic torsion \cite{RaySinger:ATCM}. The Quillen metric has
applications in Arithmetic Geometry (\cite{Faltings:cas}, \cite{Deligne:dc},
\cite{GilletSoule:aRRt}) and also in String Theory
(\cite{Yau:MR915812}, \cite{AlvarezGaumeetals:MR908551}). Assume
furthermore that the higher direct image sheaves $R^{i}\pi _{*}E$
are locally free. In general it is not possible to define an
analogue of the Quillen metric as a hermitian metric on each vector
bundle $R^{i}\pi
_{*}E$. But following Bismut and K\"ohler \cite{Bismut-Kohler}, one
can do something almost as good. We can define the $L^{2}$-metric on $R^{i}\pi
_{*}E$ and \emph{correct} it using the higher analytic torsion
form. Although this \emph{corrected metric} is not properly a
hermitian metric, it is enough for constructing
characteristic forms and it appears in the Arithmetic
Grothendieck-Riemann-Roch
Theorem in higher degrees~\cite{GilletRoesslerSoule:_arith_rieman_roch_theor_in_higher_degrees_p}.
The main objective of the present paper is to set up the theoretical
basis and the language needed to deal with the problem of direct
images of hermitian vector bundles for projective non-necessarily
smooth morphisms.
This program will be continued in the
subsequent paper \cite{BurgosFreixasLitcanu:GenAnTor} where we
give an axiomatic characterization of analytic torsion forms and we
generalize them to projective morphisms.
The ultimate goal of this program is to state and prove an Arithmetic
Grothendieck-Riemann-Roch Theorem for general projective
morphisms. This last result will be the topic of a forthcoming paper.
When dealing with direct images of hermitian vector bundles for non
smooth morphisms, one is naturally
led to consider hermitian structures on objects of the bounded derived
category of coherent sheaves $\Db$. One reason for this is that, for
a non-smooth projective morphism $\pi $, instead
of the relative tangent
bundle one should consider the relative tangent complex, that defines
an object of $\Db(X)$. Another reason is that, in general, the higher
direct images $R^{i}\pi _{*}E$ are coherent sheaves and the derived
direct image $R\pi _{*}E$ is an object of $\Db(Y)$.
Thus the first goal of this paper is to define hermitian structures.
A possible starting point is to
define a hermitian metric on an object
$\mathcal{F}$ of $\Db(X)$ as an isomorphism $E\dashrightarrow \mathcal{F}$ in
$\Db(X)$, with $E$ a bounded complex of vector bundles, together
with a choice of a hermitian metric on each constituent vector bundle
of $E$.
Here we find a problem because, even when $X$ is smooth,
in the bounded derived category of
coherent sheaves of $X$, not every object can be
represented by a bounded complex of locally free sheaves (see
\cite{Voisin:counterHcK} and Remark \ref{rem:3}). Thus the previous
idea does not work for general complex manifolds.
To avoid this
problem we will restrict ourselves to the
algebraic category. Thus, from now on the letters $X,\ Y,\dots$ will
denote smooth algebraic varieties over ${\mathbb C}$, and all sheaves
will be algebraic.
With the previous definition of hermitian metric, for each object of
$\Db(X)$ we obtain a class of
metrics that is too wide. Different constructions
that ought to produce the same metric produce in fact different
metrics. This indicates that we may define a hermitian structure as an equivalence
class of hermitian metrics.
Let us be more precise. Since $\Db(X)$ is a triangulated category, to
every morphism
$\mathcal{F}\overset{f}{\dashrightarrow}\mathcal{G}$ in $\Db(X)$ we can associate
its cone, which
is defined up to a (not unique) isomorphism by the fact that
\begin{displaymath}
\mathcal{F}\dashrightarrow \mathcal{G}\dashrightarrow \cone(f)\dashrightarrow \mathcal{F}[1]
\end{displaymath}
is a distinguished triangle. If now $\mathcal{F}$ and $\mathcal{G}$
are provided with hermitian metrics, we want $\cone(f)$ to have an
induced hermitian structure that is well defined up to
\emph{isometry}. By choosing a representative of the map $f$ by means
of morphisms of complexes of vector bundles, we can induce a hermitian metric on
$\cone(f)$, but this hermitian metric depends on the choices. The idea
behind the definition of hermitian structures is to introduce the
finest equivalence relation between metrics such that all possible
induced hermitian metrics on $\cone(f)$ are equivalent.
Once we have defined hermitian structures a new invariant of $X$
can be naturally defined. Namely, the set of hermitian structures on a zero object of
$\Db(X)$ is an abelian group that we denote $\KA(X)$ (Definition
\ref{def:KA}). In the same way that $K_{0}(X)$ is the universal
abelian group for additive characteristic classes of vector bundles,
$\KA(X)$ is the universal abelian group for secondary characteristic
classes of acyclic complexes of hermitian vector bundles (Theorem~\ref{thm:8}).
Secondary characteristic classes constitute other of the central topics of
this paper. Recall that to each vector bundle we can associate its
Chern character, that is an additive characteristic class. If the
vector bundle is provided with a hermitian metric, we can use
Chern-Weil theory to construct a
concrete representative of the Chern character, that is a differential
form. This characteristic
form is additive only for orthogonally split short exact sequences and
not for general short exact sequences. Bott-Chern classes were
introduced in \cite{BottChern:hvb} and are secondary classes that
measure the lack of additivity of the characteristic forms.
The Bott-Chern classes have been extensively used in Arakelov Geometry
(\cite{GilletSoule:vbhm}, \cite{BismutGilletSoule:at}) and they can be
used to construct characteristic classes in higher $K$-theory
(\cite{Burgos-Wang}).
The second goal of this paper is to
extend the definition of
additive Bott-Chern
classes to the derived category. This is the most general definition
of additive Bott-Chern classes and encompasses both the Bott-Chern
classes defined in \cite{BismutGilletSoule:at} and the ones defined in
\cite{Ma:MR1765553} (Example \ref{exm:1}).
Finally, recall that the hermitian structure on the direct image of a
hermitian vector bundle should also depend on a hermitian structure on
the relative tangent complex. Thus the last goal of this paper is to
introduce the category $\overline{\Sm}_{*/{\mathbb C}}$ (Definition
\ref{def:16}, Theorem \ref{thm:17}). The objects of this category are smooth
algebraic varieties over ${\mathbb C}$ and the morphisms are pairs
$\overline{f}=(f,\overline{T}_{f})$ formed by a
projective morphism of smooth complex varieties $f$, together with a
hermitian structure on the relative tangent complex $T_{f}$. The main
difficulty here is to define the composition of two such morphisms.
The remarkable
fact is that the hermitian cone construction enables us to define a
composition
rule for these morphisms.
We now describe the contents of each section in more detail.
In Section \ref{sec:meager-complexes} we
define and characterize the notion of \textit{meager complex}
(Definition \ref{def:9} and Theorem \ref{thm:3}). Roughly speaking,
meager complexes are bounded acyclic complexes of hermitian vector
bundles whose Bott-Chern classes vanish for structural
reasons. We then introduce the concept of tight morphism (Definition
\ref{def:tight_morphism}) and tight equivalence relation (Definition
\ref{def:17}) between bounded complexes of hermitian vector
bundles. We establish a series of useful computational rules on the
monoid of bounded complexes of hermitian vector bundles modulo the tight
equivalence relation, which we call the \textit{acyclic calculus} (Theorem \ref{thm:7}). We prove
that the submonoid of acyclic complexes modulo meager
complexes has the structure of an abelian group; this is the group
$\KA(X)$ mentioned previously.
With these tools at hand, in Section
\ref{sec:oDb} we define hermitian
structures on objects of $\Db(X)$ and we introduce the category
$\oDb(X)$. The objects of the category
$\oDb(X)$ are objects of $\Db(X)$ together with a hermitian structure,
and the morphisms are just morphisms in $\Db(X)$. Theorem \ref{thm:13}
is devoted to describing the structure of the forgetful functor
$\oDb(X)\to\Db(X)$. In particular, we show that the group $\KA(X)$
acts on the fibers of this functor, freely and
transitively.
An important example of use of hermitian structures is
the construction of the \textit{hermitian cone} of a morphism in
$\oDb(X)$ (Definition \ref{def:her_cone}), which is well defined only
up to tight isomorphism. We also study several elementary
constructions in $\oDb(X)$. Here we mention the classes of
isomorphisms and distinguished triangles in $\oDb(X)$. These classes
lie in the group $\KA(X)$ and their properties are listed in Theorem
\ref{thm:10}. As an application we show that $\KA(X)$ receives classes
from $K_{1}(X)$ (Proposition \ref{prop:K1_to_KA}).
Section \ref{sec:bott-chern-classes} is devoted to the extension of
Bott-Chern classes to the derived category. For every additive
genus, we associate to each isomorphism or
distinguished triangle in $\oDb(X)$ a Bott-Chern class satisfying
properties analogous to the classical ones.
We conclude the paper with Section \ref{sec:multiplicative-genera},
where we extend the definition of Bott-Chern classes to
multiplicative genera and in particular to the Todd genus. In this
section
we also define the category $\overline{\Sm}_{*/{\mathbb C}}$.
\textbf{Acknowledgements:}
We would like to thank the following institutions where part of the
research leading to this paper was done: the CRM in Bellaterra
(Spain), the CIRM in Luminy (France), the Morningside Institute of Beijing
(China), the University of Barcelona and the IMUB, the Alexandru Ioan
Cuza University of Iasi,
the Institut de Math\'ematiques de Jussieu and the ICMAT (Madrid).
We would also like to thank D. Eriksson and D. R\"ossler for several
discussions on the subject of this paper.
\section{Meager complexes and acyclic calculus}
\label{sec:meager-complexes}
The aim of this section is to construct a universal group for
additive Bott-Chern classes of acyclic complexes of
hermitian vector bundles. To this end we first introduce and study the
class of meager complexes. Any Bott-Chern class that is
additive for certain short exact sequences of acyclic complexes (see
\ref{thm:8}) and that
vanishes on orthogonally split complexes, necessarily vanishes on
meager complexes. Then we develop an acyclic calculus that will ease
the task to check if a particular complex is meager. Finally we
introduce the group $\KA$, which is the universal group for additive
Bott-Chern classes.
Let $X$ be a complex algebraic variety over ${\mathbb C}$, namely a reduced
and separated scheme of finite type over ${\mathbb C}$. We denote by $\Vb(X)$
the exact category of bounded complexes of algebraic vector bundles on
$X$. Assume in addition that $X$ is smooth over ${\mathbb C}$. Then $\oV(X)$
is defined as the category of pairs $\overline E=(E,h)$, where $E\in \Ob
\Vb(X)$ and $h$ is a smooth hermitian metric on the complex of
analytic vector bundles $E^{\text{{\rm an}}}$. From now on we shall make no
distinction between $E$ and $E^{\text{{\rm an}}}$. The complex $E$ will be called
\emph{the underlying complex of } $\overline E$. We will denote by the
symbol $\sim$ the quasi-isomorphisms in any of the above categories.
A basic construction in $\Vb(X)$ is the cone of a morphism of
complexes. Recall that, if $f\colon E\to F$ is such a morphism, then,
as a graded vector bundle $\cone(f)=E[1]\text{{\rm op}}lus F$ and the differential
is given by $\dd(x,y)=(-\dd x,f(x)+\dd y).$
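As a quick sanity check, spelled out here for the reader's convenience, these sign conventions do give $\dd^{2}=0$ on $\cone(f)$: using $\dd^{2}=0$ on $E$ and $F$ and the fact that $f$ commutes with the differentials,
\begin{displaymath}
\dd(\dd(x,y))=\dd(-\dd x,f(x)+\dd y)=(\dd^{2}x,-f(\dd x)+\dd f(x)+\dd^{2}y)=(0,0).
\end{displaymath}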
We can extend the cone construction easily to $\oV(X)$ as follows.
\begin{definition} \label{def:14}
If $f\colon \overline E\to \overline F$ is a morphism in $\oV(X)$,
\emph{the hermitian cone} of $f$,
denoted by $\ocone(f)$, is defined as the cone of $f$
provided with the orthogonal sum hermitian metric.
When the morphism
is clear from the context we will sometimes
denote $\ocone(f)$ by $\ocone(\overline E,\overline F)$.
\end{definition}
\begin{remark} \label{rem:2} Let $f\colon \overline E\to \overline F$ be a
morphism in $\oV(X)$. Then
there is an exact sequence of complexes
\begin{displaymath}
0\longrightarrow \overline F\longrightarrow \ocone(f)
\longrightarrow \overline E[1]\longrightarrow 0,
\end{displaymath}
whose constituent short exact sequences are orthogonally
split. Conversely, if
\begin{displaymath}
0\longrightarrow \overline F\longrightarrow \overline G
\longrightarrow \overline E[1]\longrightarrow 0
\end{displaymath}
is a short exact sequence all of whose constituent exact sequences are
orthogonally split, then there is a natural section $s\colon E[1]\to
G$. The image of $\dd s-s\dd$ belongs to
$F$ and, in fact, determines a morphism
of complexes
\begin{displaymath}
f_{s}:=\dd s-s\dd\colon \overline E
\longrightarrow \overline F.
\end{displaymath}
Moreover, there is a natural isometry $\overline G \cong \ocone
(f_{s})$.
\end{remark}
The hermitian cone has the following useful property.
\begin{lemma}\label{lemm:13}
Consider a diagram in $\oV(X)$
\begin{displaymath}
\xymatrix{
\overline{E}'\ar[r]^{f'}\ar[d]_{g'} &\overline{F}'\ar[d]^{g}\\
\overline{E}\ar[r]^{f} &\overline{F}.
}
\end{displaymath}
Assume that the diagram is commutative up to homotopy and fix a
homotopy $h$. The homotopy $h$ induces
morphisms of complexes
\begin{align*}
&\psi \colon\ocone(f')\longrightarrow\ocone(f)\\
&\phi \colon\ocone(-g')\longrightarrow\ocone(g)
\end{align*}
and there is a natural isometry of complexes
\begin{displaymath}
\ocone(\phi )\overset{\sim}{\longrightarrow}\ocone(\psi ).
\end{displaymath}
Moreover, let $h'$ be a second homotopy between $g\circ f'$ and
$f\circ g'$ and let $\psi '$ be the induced morphism. If there
exists a higher homotopy between $h$ and $h'$, then $\psi
$ and $\psi '$ are homotopically equivalent.
\end{lemma}
\begin{proof}
Since $h\colon E'\to F[-1]$ is a homotopy between $gf'$ and $fg'$,
we have
\begin{equation}\label{eq:homotopy_square_1}
gf'-fg'=\dd h+h\dd.
\end{equation}
First of all, define the arrow $\psi \colon\ocone(f')\to\ocone(f)$ by
the following rule:
\begin{displaymath}
\psi (x',y')=(g'(x'),g(y')+h(x')).
\end{displaymath}
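Commutation of $\psi$ with the differentials is a routine check, which we spell out once: for $(x',y')\in\cone(f')$,
\begin{align*}
\dd\psi (x',y')&=\bigl(-\dd g'(x'),\ fg'(x')+\dd g(y')+\dd h(x')\bigr),\\
\psi (\dd(x',y'))&=\bigl(-g'(\dd x'),\ gf'(x')+g(\dd y')-h(\dd x')\bigr),
\end{align*}
and the two expressions agree because $g$ and $g'$ are morphisms of complexes and because of the homotopy relation (\ref{eq:homotopy_square_1}).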
In particular, $\psi $
is a morphism of complexes. Now apply the same construction to the
diagram
\begin{equation}\label{eq:homotopy_square_2}
\xymatrix{
\overline{E}'\ar[r]^{-g'}\ar[d]_{-f'} &\overline{E}\ar[d]^{f}\\
\overline{F'}\ar[r]^{g} &\overline{F}.
}
\end{equation}
The diagram (\ref{eq:homotopy_square_2}) is still commutative up
to homotopy and $h$ provides such a homotopy. We obtain a
morphism of complexes $\phi :\ocone(-g')\to\ocone(g)$, defined by the
rule
\begin{displaymath}
\phi (x',x)=(-f'(x'),f(x)+h(x')).
\end{displaymath}
One easily checks that a suitable reordering of factors defines an
isometry of complexes between $\ocone(\phi )$ and $\ocone(\psi )$.
Assume now that $h'$ is a second homotopy and that there is a
higher homotopy $s\colon \overline E' \to \overline F[-2]$ such that
\begin{displaymath}
h'-h=\dd s- s \dd.
\end{displaymath}
Let $H\colon \ocone(f')\to \ocone(f)[-1]$ be given by
$H(x',y')=(0,s(x'))$. Then
\begin{displaymath}
\psi '-\psi =\dd H + H \dd.
\end{displaymath}
Hence $\psi $ and $\psi' $ are homotopically equivalent.
\end{proof}
Recall that, given a morphism of complexes $f\colon \overline E \to \overline
F$, we use the abuse of notation $\ocone(f)=\ocone(\overline E, \overline
F)$. As seen in the previous lemma, sometimes it is natural to consider
$\ocone(-f)$. With the notation above it will be denoted also by
$\ocone(\overline E,\overline F)$. Note that this ambiguity is harmless because
there is a natural isometry between $\ocone(f)$ and $\ocone(-f)$. Of
course, when more than one morphism between $\overline E$ and $\overline F$ is
considered, the above notation should be avoided.
With this convention, Lemma \ref{lemm:13} can
be written as
\begin{equation}
\label{eq:53}
\ocone(\ocone(\overline E',\overline E),\ocone(\overline F',\overline F))\cong
\ocone(\ocone(\overline E',\overline F'),\ocone(\overline E,\overline F)).
\end{equation}
\begin{definition}\label{def:11}
We will denote by $\mathscr{M}_{0}=\mathscr{M}_{0}(X)$ the subclass of
$\oV(X)$ consisting of
\begin{enumerate}
\item the orthogonally split complexes;
\item all objects $\overline E$ such that there
is an acyclic complex $\overline F$ of $\oV(X)$, and an isometry $\overline E
\to \overline F\oplus \overline F[1]$.
\end{enumerate}
\end{definition}
We want to stabilize $\mathscr{M}_{0}$ with respect to hermitian cones.
\begin{definition}\label{def:9}
We will denote by $\mathscr{M}=\mathscr{M}(X)$ the smallest subclass of
$\oV(X)$ that satisfies the following
properties:
\begin{enumerate}
\item \label{item:12} it contains $\mathscr{M}_{0}$;
\item \label{item:13} if $f\colon \overline
E\to \overline F$ is a morphism and two
of $\overline E$, $\overline F$ and $\ocone(f)$ belong
to $\mathscr{M}$, then so does the third.
\end{enumerate}
The elements of $\mathscr{M}(X)$ will be called \emph{meager
complexes}.
\end{definition}
We next give a characterization of meager complexes.
For this, we introduce two auxiliary classes.
\begin{definition} \label{def:12}
\begin{enumerate}
\item \label{item:29} Let $\mathscr{M}_{F}$ be the subclass of
$\oV(X)$ that contains all complexes $\overline E$ that have a
finite filtration $\Fil$ such that
\begin{enumerate}
\item[({\bf A})] \label{item:14} for every $p,n\in {\mathbb Z}$, the exact
sequences
\begin{displaymath}
0\to \Fil^{p+1}\overline E^{n}\to \Fil^{p}\overline E^{n}\to
\Gr^{p}_{\Fil}\overline E^{n}\to 0,
\end{displaymath}
with the induced metrics, are orthogonally split short exact
sequences of vector bundles;
\item[({\bf B})] \label{item:15} the complexes
$\Gr^{\bullet}_{\Fil}\overline E$ belong to $\mathscr{M}_{0}$.
\end{enumerate}
\item \label{item:32} Let $\mathscr{M}_{S}$ be the subclass of
$\oV(X)$ that contains all complexes $\overline E$ such that there
is a morphism of complexes $f\colon \overline E\to \overline F$
and both $\overline F$ and $\ocone (f)$ belong to
$\mathscr{M}_{F}$.
\end{enumerate}
\end{definition}
\begin{lemma} \label{lemm:10} Let $0\to \overline E\to \overline F\to \overline G\to
0$ be an exact sequence in $\oV(X)$ whose
constituent rows are orthogonally split. Assume $\overline E$ and $\overline
G$ are in $\mathscr{M}_{F}$. Then $\overline F\in\mathscr{M}_{F}$. In
particular, $\mathscr{M}_{F}$ is closed under cone formation.
\end{lemma}
\begin{proof}
For the first claim, notice that the filtrations of $\overline E$ and $\overline G$
induce a filtration on $\overline F$ satisfying conditions
\ref{def:12}~({\bf A}) and \ref{def:12}~({\bf B}). The second
claim then follows by Remark \ref{rem:2}.
\end{proof}
\begin{example} \label{exm:2}
Given any complex $\overline E\in \Ob \oV(X)$, the complex
$\ocone(\Id_{\overline E})$ belongs to
$\mathscr{M}_{F}$. This can be seen by induction on the length of
$\overline E$ using Lemma \ref{lemm:10} and the b\^ete filtration of $\overline
E$. For the starting point of the induction one takes into account
that, if $\overline E$ has only one non-zero
degree, then $\ocone(\Id_{\overline E})$ is orthogonally split. In fact,
this argument shows something slightly stronger. Namely, the complex
$\ocone(\Id_{\overline E})$ admits a finite filtration $\Fil$ satisfying
\ref{def:12}~({\bf A}) and such that the complexes
$\Gr^{\bullet}_{\Fil} \ocone(\Id_{\overline E})$ are orthogonally
split.
\end{example}
\begin{theorem}\label{thm:3} The equality
\begin{math}
\mathscr{M}=\mathscr{M}_{S}
\end{math}
holds.
\end{theorem}
\begin{proof}
We start by proving that $\mathscr{M}_{F}\subset \mathscr{M}$.
Let $\overline E\in \mathscr{M}_{F}$ and let $\Fil$ be any
filtration that satisfies conditions \ref{def:12}~({\bf A}) and
\ref{def:12}~({\bf B}). We show that
$\overline E\in \mathscr{M}$ by induction
on the length of $\Fil$. If
$\Fil$ has length one, then $\overline E$ belongs to
$\mathscr{M}_{0}\subset \mathscr{M}$. If the length of $\Fil$ is
$k>1$, let $p$ be
such that $\Fil^{p}\overline E=\overline E$ and
$\Fil^{p+1}\overline E\not = \overline E$. On the one hand, $\Gr
^{p}_{\Fil}\overline E [-1]\in
\mathscr{M}_{0}\subset \mathscr{M}$ and, on the other hand, the
filtration $\Fil$
induces a filtration on $\Fil^{p+1}\overline E$ fulfilling
conditions \ref{def:12}~({\bf A}) and \ref{def:12}~({\bf B}) and having
length $k-1$. Thus, by the induction hypothesis, $\Fil^{p+1}\overline
E\in \mathscr{M}$. Then, by
Lemma \ref{lemm:10}, we deduce that $\overline E\in
\mathscr{M}$.
Clearly, the fact that $\mathscr{M}_{F}\subset \mathscr{M}$
implies that $\mathscr{M}_{S}\subset \mathscr{M}$. Thus, to
prove the theorem, it only remains to show that $\mathscr{M}_{S}$
satisfies the condition \ref{def:9}~\ref{item:13}.
The content of the next result is that
the apparent asymmetry in the definition of
$\mathscr{M}_{S}$ is not real.
\begin{lemma}\label{lemm:9}
Let $\overline E\in \Ob \oV(X)$. Then there is a morphism $f\colon
\overline E\to \overline F$ with $\overline F$ and
$\ocone(f)$ in $\mathscr{M}_{F}$ if and only if there is a
morphism $g\colon \overline G\to \overline E$ with
$\overline G$ and
$\ocone(g)$ in $\mathscr{M}_{F}$.
\end{lemma}
\begin{proof}
Assume that there is a morphism $f\colon \overline
E\to \overline F$ with $\overline F$ and
$\ocone(f)$ in $\mathscr{M}_{F}$. Then, write $\overline
G= \ocone(f)[-1]$ and let $g\colon \overline G\to
\overline E$ be the natural map. By hypothesis, $\overline
G\in \mathscr{M}_{F}$. Moreover, since there is a natural isometry
\begin{displaymath}
\ocone(\ocone(\overline E,\overline
F)[-1],\overline E)\cong
\ocone(\ocone(\Id_{\overline E})[-1],\overline
F),
\end{displaymath}
by Example \ref{exm:2} and Lemma \ref{lemm:10} we obtain that
$\ocone(g)\in \mathscr{M}_{F}$. Thus we have proved one
implication. The proof of the other implication is analogous.
\end{proof}
Let now $f\colon \overline E\to \overline F$ be a morphism of
complexes with $\overline E, \overline F\in
\mathscr{M}_{S}$. We want to show that $\ocone(f)\in
\mathscr{M}_{S}$. By Lemma \ref{lemm:9}, there are morphisms of
complexes
$g\colon \overline G\to \overline E$ and
$h\colon \overline H\to \overline F$ with $\overline
G,\ \overline H,\ \ocone(g),\ \ocone(h)\in
\mathscr{M}_{F}$. We consider the map $\overline G\to
\ocone(h)$ induced by $f\circ g$. Then we write
\begin{displaymath}
\overline{G'}=\ocone(\overline G, \ocone(h))[-1].
\end{displaymath}
By Lemma \ref{lemm:10}, we have that $\overline{G'}\in
\mathscr{M}_{F}$. We denote by $g'\colon G'\to E$ and $k\colon G'\to H$ the
maps $g'(a,b,c)=g(a)$ and $k(a,b,c)=-b$.
There is an exact sequence
\begin{displaymath}
0\to \ocone(h)\to \ocone(g')\to \ocone(g)\to 0
\end{displaymath}
whose constituent short exact sequences are orthogonally split.
Since $\ocone(h)$ and $\ocone(g)$ belong to $\mathscr{M}_{F}$,
Lemma \ref{lemm:10} insures that $\ocone(g')$ belongs to
$\mathscr{M}_{F}$ as well.
There is a diagram
\begin{equation}\label{eq:51}
\xymatrix{
\overline {G'} \ar[d]_{g'} \ar[r]^{k} & \overline H\ar[d]^{h} \\
\overline E \ar[r]^{f} & \overline F
}
\end{equation}
that commutes up to homotopy. We fix the homotopy $s\colon \overline G'\to
F$ given by $s(a,b,c)=c$. By Lemma \ref{lemm:13} there is
a natural isometry
\begin{displaymath}
\ocone(\ocone(g'),\ocone(h))\cong \ocone(\ocone(-k),\ocone(f)).
\end{displaymath}
Applying Lemma \ref{lemm:10} again, we have that $\ocone(-k)$ and
$\ocone(\ocone(g'),\ocone(h))$ belong to
$\mathscr{M}_{F}$. Therefore $\ocone(f)$ belongs to
$\mathscr{M}_{S}$.
\begin{lemma} \label{lemm:11}
Let $f\colon \overline E\to \overline F$ be a morphism in $\oV(X)$.
\begin{enumerate}
\item \label{item:18} If $\overline E\in \mathscr{M}_{S}$ and $\ocone(f)\in
\mathscr{M}_{F}$ then $\overline F\in \mathscr{M}_{S}$.
\item \label{item:19} If $\overline F\in \mathscr{M}_{S}$ and $\ocone(f)\in
\mathscr{M}_{F}$ then $\overline E\in \mathscr{M}_{S}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume that $\overline E\in \mathscr{M}_{S}$ and $\ocone(f)\in
\mathscr{M}_{F}$. Let $g\colon \overline G\to \overline E$ with $\overline G\in
\mathscr{M}_{F}$ and $\ocone(g)\in \mathscr{M}_{F}$. By Lemma
\ref{lemm:10} and Example \ref{exm:2}, $\ocone(\ocone(\Id_{\overline
G}),\ocone(f))\in \mathscr{M}_{F}$. But
there is a natural isometry of complexes
\begin{displaymath}
\ocone(\ocone(\Id_{\overline G}),\ocone(f))\cong
\ocone(\ocone(\ocone(g)[-1],\overline G),\overline F).
\end{displaymath}
Since, by Lemma \ref{lemm:10}, $\ocone(\ocone(g)[-1],\overline G)\in
\mathscr{M}_{F}$, then $\overline F\in \mathscr{M}_{S}$.
The second statement of the lemma is proved using the dual
argument.
\end{proof}
\begin{lemma} \label{lemm:12}
Let $f\colon \overline E\to \overline F$ be a morphism in $\oV(X)$.
\begin{enumerate}
\item \label{item:20} If $\overline E\in \mathscr{M}_{F}$ and $\ocone(f)\in
\mathscr{M}_{S}$ then $\overline F\in \mathscr{M}_{S}$.
\item \label{item:21} If $\overline F\in \mathscr{M}_{F}$ and $\ocone(f)\in
\mathscr{M}_{S}$ then $\overline E\in \mathscr{M}_{S}$.
\end{enumerate}
\end{lemma}
\begin{proof}
Assume that $\overline E\in \mathscr{M}_{F}$ and $\ocone(f)\in
\mathscr{M}_{S}$. Let $g\colon \overline G\to \ocone(f)$ with $\overline G$
and $\ocone(\overline G, \ocone(f))$ in $\mathscr{M}_{F}$. There is a
natural isometry of complexes
\begin{displaymath}
\ocone(\overline G,\ocone(f))\cong \ocone(\ocone(\overline G[-1],\overline
E),\overline F)
\end{displaymath}
that shows $\overline F\in \mathscr{M}_{S}$.
The second statement of the lemma is proved by a dual argument.
\end{proof}
Assume now that $f\colon \overline E\to \overline F$ is a morphism in $\oV(X)$
and $\overline E,\ \ocone(f)\in
\mathscr{M}_{S}$. Let $g\colon \overline G \to \overline E$ with $\overline G,\
\ocone(g)\in \mathscr{M}_{F}$. There is a natural isometry
\begin{displaymath}
\ocone(\ocone(\overline G,\overline E),\ocone(\Id_{\overline F}))
\cong
\ocone(\ocone(\overline G,\overline F),\ocone(\overline E, \overline F)),
\end{displaymath}
that implies $\ocone(\ocone(\overline G,\overline F),\ocone(\overline E, \overline
F))\in \mathscr{M}_{F}$. By Lemma \ref{lemm:11}, we deduce that
$\ocone(\overline G,\overline F)\in \mathscr{M}_{S}$. By Lemma \ref{lemm:12},
$\overline F\in \mathscr{M}_{S}$.
With $f$ as above, the fact that $\overline E\in\mathscr{M}_{S}$ whenever $\overline F$ and $\ocone(f)$ belong
to $\mathscr{M}_{S}$ is proved by a similar
argument. In conclusion, $\mathscr{M}_{S}$ satisfies the condition
\ref{def:9}~\ref{item:13}, hence $\mathscr{M}\subset
\mathscr{M}_{S}$, which completes the proof of the
theorem.
\end{proof}
The class of meager complexes satisfies the following list of properties,
which follow almost directly from Theorem \ref{thm:3}.
\begin{theorem} \label{thm:4}
\begin{enumerate}
\item \label{item:22} If $\overline E$ is a meager complex and $\overline
F$ is a hermitian vector bundle, then the complexes
$\overline F \otimes \overline E$, $\Hom(\overline F,\overline E)$ and $\Hom(\overline E,\overline
F)$, with the induced metrics, are meager.
\item \label{item:23} If $\overline E^{*,*}$ is a bounded
double complex of hermitian vector bundles and all rows (or
columns) are meager complexes,
then the complex $\Tot(\overline E^{*,*})$ is meager.
\item \label{item:24} If $\overline E$ is a meager complex and $\overline F$ is another
complex of hermitian vector bundles, then the complexes
\begin{align*}
\overline E\otimes \overline F&=\Tot((\overline F^{i}\otimes \overline E^{j})_{i,j}),\\
\uHom (\overline E,\overline F)&= \Tot((\Hom (\overline E^{-i},\overline F^{j}))_{i,j})\text{
and }\\
\uHom (\overline F,\overline E)&= \Tot((\Hom (\overline F^{-i},\overline E^{j}))_{i,j}),
\end{align*}
are meager.
\item \label{item:25} If $f\colon X\to Y$ is a morphism of smooth complex
varieties and $\overline E$ is a meager complex on $Y$, then $f^{*}
\overline E$ is a meager complex on $X$.
\end{enumerate}
\end{theorem}
We now introduce the notion of tight morphism.
\begin{definition}\label{def:tight_morphism}
A morphism $f\colon\overline E\to \overline F$ in $\oV(X)$ is said to be
\emph{tight} if
$\ocone(f)$ is a meager complex.
\end{definition}
\begin{proposition} \label{prop:8}
\begin{enumerate}
\item Every meager complex is acyclic.
\item Every tight morphism is a quasi-isomorphism.
\end{enumerate}
\end{proposition}
\begin{proof}
Let $\overline E\in \mathscr{M}_{F}(X)$. Let $\Fil$ be any
filtration that satisfies conditions
\ref{def:12}~({\bf A}) and \ref{def:12}~({\bf B}).
By definition, the
complexes $\Gr_{\Fil}^{p}\overline E$ belong to $\mathscr{M}_{0}$,
so they are acyclic. Hence
$\overline E$ is acyclic.
If $\overline E\in
\mathscr{M}_{S}(X)$, let $\overline F$ and $\ocone(f)$
be as in Definition \ref{def:12}~\ref{item:32}. Then $\overline F$
and
$\ocone(f)$ are acyclic, hence $\overline E$ is also
acyclic. Thus we have proved the first statement.
The second statement is a direct consequence of the first one.
\end{proof}
Many arguments used for proving that a certain complex is meager or a certain
morphism is tight involve
cumbersome diagrams. In order to ease these arguments we will
develop a calculus of acyclic complexes.
Before starting we need some preliminary lemmas.
\begin{lemma} \label{lemm:16}
Let $\overline E$, $\overline F$ be objects of $\oV(X)$. Then the following
conditions are equivalent.
\begin{enumerate}
\item \label{item:37} There exists an object $\overline G$ and a diagram
\begin{displaymath}
\xymatrix{
& \overline G \ar[dl]^{\sim}_{f} \ar[dr]^{g}&\\
\overline E && \overline F,}
\end{displaymath}
such that $\ocone(g)\oplus \ocone(f)[1]$ is meager.
\item \label{item:38}
There exists an object $\overline G$ and a diagram
\begin{displaymath}
\xymatrix{
& \overline G \ar[dl]^{\sim}_{f} \ar[dr]^{g}&\\
\overline E && \overline F,}
\end{displaymath}
such that $f$ and $g$ are tight morphisms.
\end{enumerate}
\end{lemma}
\begin{proof}
Clearly, \ref{item:38} implies \ref{item:37}. To prove the converse
implication, if $\overline G$ satisfies the conditions of \ref{item:37},
we put $G'=G\oplus \ocone(f)$ and consider the morphisms $f'\colon \overline G'\to
E$ and $g'\colon G'\to F$ induced by the first projection $G'\to
G$. Then
\begin{displaymath}
\ocone(f')=\ocone(f)\oplus \ocone(f)[1],
\end{displaymath}
that is meager because $\ocone(f)$ is acyclic, and
\begin{displaymath}
\ocone(g')=\ocone(g)\oplus \ocone(f)[1],
\end{displaymath}
that is meager by hypothesis.
\end{proof}
\begin{lemma}\label{lemm:15}
Any diagram of tight morphisms of one of the following two types:
\begin{equation}\label{diag:1}
\begin{array}{ccc}
\xymatrix{
\overline E\ar[rd]_{f} & &\overline G\ar[ld]^{g}\\
&\overline F
} &
\quad \quad &
\xymatrix{
&\overline H\ar[rd]^{g'}\ar[ld]_{f'} &\\
\overline E & &\overline G
}\\
(i) && (ii)
\end{array}
\end{equation}
can be completed into a diagram of tight morphisms
\begin{equation}\label{eq:meager_1}
\xymatrix{
&\overline H\ar[ld]_{f'}\ar[rd]^{g'} &\\
\overline E \ar[rd]_{f}& &\overline G\ar[ld]^{g}\\
&\overline F, &
}
\end{equation}
which commutes up to homotopy.
\end{lemma}
\begin{proof}
We prove the statement only for the case (i), the other one being analogous.
Note that there is a natural arrow $\overline
G\to\ocone(f)$. Define
\begin{displaymath}
\overline H=\ocone(\overline G,\ocone(f))[-1].
\end{displaymath}
With this choice, diagram (\ref{eq:meager_1}) becomes commutative up
to homotopy, taking the projection $H\to F[-1]$ as homotopy. We
first show that $\ocone(\overline H, \overline G)$ is meager. Indeed, there is a
natural isometry
\begin{displaymath}
\ocone(\overline H,\overline G)
\cong\ocone(\ocone(\Id_{\overline G}), \ocone(\overline E,\overline F)[-1])
\end{displaymath}
and the right hand side complex is meager. We now turn to $\ocone(\overline H,
\overline E)$. By Lemma \ref{lemm:13}, there is an isometry
\begin{equation}
\ocone(\ocone(\overline H, \overline E),\ocone(\overline G, \overline F))
\cong
\ocone(\ocone(\overline H, \overline G),\ocone(\overline E, \overline F)).
\end{equation}
The right hand side complex is meager, hence the left hand side is
meager as well. Since, by hypothesis, $\ocone(\overline G, \overline F)$ is
meager, the same is true for $\ocone(\overline H, \overline E)$.
\end{proof}
\begin{definition} \label{def:17}
We will say that two complexes $\overline E$ and $\overline F$ are \emph{tightly
related} if any of the equivalent conditions of Lemma
\ref{lemm:16} holds.
\end{definition}
It is easy to see, using Lemma \ref{lemm:15}, that being tightly
related is an equivalence relation.
\begin{definition} \label{def:10}
We denote by $\oV(X)/\mathscr{M}$ the set of classes of
tightly related complexes. The class of a complex $\overline E$ will be
denoted $[\overline E]$.
\end{definition}
\begin{theorem}[Acyclic calculus] \label{thm:7}
\begin{enumerate}
\item \label{item:34} For a complex $\overline E\in \Ob\oV(X)$, the class
$[\overline E]=0$ if and only if $\overline E\in \mathscr{M}$.
\item \label{item:33} The operation $\oplus$ induces an operation,
that we denote $+$, in $\oV(X)/\mathscr{M}$. With this operation
$\oV(X)/\mathscr{M}$ is an associative abelian semigroup.
\item \label{item:39} For a complex $\overline E$, there exists a complex
$\overline F$ such that $[\overline F]+[\overline E]=0$ if and only if $\overline E$ is acyclic. In
this case $[\overline E[1]]=-[\overline E]$.
\item \label{item:35} For every morphism $f\colon \overline E \to \overline F$,
if $E$ is acyclic, then the equality
\begin{displaymath}
[\ocone(\overline E,\overline F)]=[\overline F]-[\overline E]
\end{displaymath}
holds.
\item \label{item:40} For every morphism $f\colon \overline E \to \overline F$,
if $F$ is acyclic, then the equality
\begin{displaymath}
[\ocone(\overline E,\overline F)]=[\overline F]+[\overline E[1]]
\end{displaymath}
holds.
\item \label{item:36} Given a diagram
\begin{displaymath}
\xymatrix{
\overline{E}'\ar[r]^{f'}\ar[d]_{g'} &\overline{F}'\ar[d]^{g}\\
\overline{E}\ar[r]^{f} &\overline{F}
}
\end{displaymath}
in $\oV(X)$ that commutes up to homotopy, for every choice
of homotopy we have
\begin{displaymath}
[\ocone(\ocone(f'),\ocone(f))]=[\ocone(\ocone(-g'),\ocone(g))].
\end{displaymath}
\item \label{item:41} Let $f\colon \overline E\to \overline F$, $g\colon \overline
F\to \overline G$ be morphisms of complexes. Then
\begin{align*}
[\ocone(\ocone(g\circ f),\ocone(g))]&=[\ocone(f)[1]],\\
[\ocone(\ocone(f),\ocone(g\circ f))]&=[\ocone(g)].
\end{align*}
If one of $f$ or $g$ is a quasi-isomorphism, then
\begin{displaymath}
[\ocone(g\circ f)]=[\ocone(g)]+[\ocone(f)].
\end{displaymath}
If $g\circ f$ is a quasi-isomorphism, then
\begin{displaymath}
[\ocone(g)]=[\ocone(f)[1]]+[\ocone(g\circ f)].
\end{displaymath}
\end{enumerate}
\end{theorem}
\begin{proof}
The statements \ref{item:34} and \ref{item:33} are immediate.
For assertion \ref{item:39}, observe that, if $\overline E$ is
acyclic, then $\overline E\oplus \overline E[1]$ is meager. Thus
\begin{displaymath}
[\overline E]+[\overline E[1]]=[\overline E\oplus \overline E[1]]=0.
\end{displaymath}
Conversely, if $[\overline F]+[\overline E]=0$, then $\overline F\oplus \overline E$ is
meager, hence acyclic. Thus $\overline E$ is acyclic.
For property \ref{item:35} we consider the map $\overline F\oplus \overline E[1]\to
\ocone(f)$ defined by the map $\overline F\to \ocone(f)$. There is a
natural isometry
\begin{displaymath}
\ocone(\overline F\oplus \overline E[1],\ocone(f))\cong
\ocone(\overline E\oplus \overline E[1],\ocone(\Id_{F})).
\end{displaymath}
Since the right hand complex is meager, so is the left hand one. Consequently
\begin{displaymath}
[\ocone(f)]=[\overline F\oplus \overline E[1]]=[\overline F]+[\overline E[1]]=[\overline F]-[\overline E].
\end{displaymath}
\end{displaymath}
Statement \ref{item:40} is proved analogously.
Statement \ref{item:36} is a direct consequence of Lemma
\ref{lemm:13}.
Statement \ref{item:41} is an easy consequence of the previous
properties.
\end{proof}
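To illustrate how these rules combine in practice, here is one way to obtain the first formula of Theorem \ref{thm:7}~\ref{item:41}. The square with horizontal arrows $g\circ f\colon \overline E\to \overline G$ and $g\colon \overline F\to \overline G$ and vertical arrows $f$ and $\Id_{G}$ commutes strictly, so Theorem \ref{thm:7}~\ref{item:36}, applied with the zero homotopy, gives
\begin{displaymath}
[\ocone(\ocone(g\circ f),\ocone(g))]=[\ocone(\ocone(-f),\ocone(\Id_{G}))].
\end{displaymath}
Since $\ocone(\Id_{G})$ is meager, hence acyclic, Theorem \ref{thm:7}~\ref{item:40} and \ref{item:34} yield
\begin{displaymath}
[\ocone(\ocone(-f),\ocone(\Id_{G}))]=[\ocone(\Id_{G})]+[\ocone(-f)[1]]=[\ocone(f)[1]],
\end{displaymath}
the last equality using the natural isometry $\ocone(-f)\cong \ocone(f)$.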
\begin{remark}\label{rem:9}
If $f\colon \overline E \to \overline F$ is a morphism and neither $\overline E$ nor
$\overline F$ is acyclic, then $[\ocone(f)]$ depends on the homotopy
class of $f$ and not only on $\overline E$ and $\overline F$. For instance, let
$\overline E$ be a non-acyclic complex of hermitian bundles. Consider the
zero map and the identity map $0,\Id\colon \overline E\to \overline E$. Since,
by Example \ref{exm:2}, we know that $\ocone (\Id)$ is
meager, then $[\ocone (\Id)]=0$. By contrast,
\begin{displaymath}
[\ocone(0)]=[\overline E]+[\overline E[1]]\not = 0
\end{displaymath}
because $\overline E$ is not acyclic. This implies that we cannot extend
Theorem \ref{thm:7}~\ref{item:35} or \ref{item:40} to the case
when neither of the complexes is acyclic.
\end{remark}
\begin{corollary}\label{cor:3}
\begin{enumerate}
\item \label{item:42}
Let
\begin{displaymath}
0\longrightarrow \overline E\longrightarrow \overline F\longrightarrow \overline G
\longrightarrow 0
\end{displaymath}
be a short exact sequence in $\oV(X)$ all of whose constituent short
exact sequences are orthogonally split. If either $\overline E$ or
$\overline G$ is acyclic, then
\begin{displaymath}
[\overline F]=[\overline E]+[\overline G].
\end{displaymath}
\item \label{item:58} Let $\overline E^{*,*}$ be a bounded double complex of
hermitian vector bundles. If the columns of $\overline E^{*,*}$
are acyclic, then
\begin{displaymath}
[\Tot(\overline E^{*,*})]=\sum_{k}(-1)^{k} [\overline E^{k,*}].
\end{displaymath}
If the rows are acyclic, then
\begin{displaymath}
[\Tot(\overline E^{*,*})]=\sum_{k}(-1)^{k} [\overline E^{*,k}].
\end{displaymath}
In particular, if rows and columns are acyclic
\begin{displaymath}
\sum_{k}(-1)^{k} [\overline E^{k,*}]=\sum_{k}(-1)^{k} [\overline E^{*,k}].
\end{displaymath}
\end{enumerate}
\end{corollary}
\begin{proof}
The first item follows from Theorem \ref{thm:7}~\ref{item:35}
and \ref{item:40}, by using Remark \ref{rem:2}. The second assertion
follows from the first by induction on the size of the complex,
using the usual filtration of $\Tot(E^{*,*})$.
\end{proof}
As an example of the use of the acyclic calculus we prove
\begin{proposition}\label{prop:6}
Let $f\colon \overline E\to\overline F$ and $g\colon \overline F\to\overline G$ be morphisms
of complexes. If two of $f,\ g,\ g\circ f$ are tight, then so is
the third.
\end{proposition}
\begin{proof}
Since tight morphisms are quasi-isomorphisms, by Theorem
\ref{thm:7}~\ref{item:41}
\begin{displaymath}
[\ocone(g\circ f)]=[\ocone(f)]+[\ocone(g)].
\end{displaymath}
Hence the result follows from Theorem \ref{thm:7}~\ref{item:34}.
\end{proof}
\begin{definition}\label{def:KA}
We will denote by $\KA(X)$ the set of invertible elements of
$\oV(X)/\mathscr{M}$. This is an abelian group. By Theorem
\ref{thm:7}~\ref{item:39} the group $\KA(X)$
agrees with the image in $\oV(X)/\mathscr{M}$ of the class of
acyclic complexes.
\end{definition}
The group $\KA(X)$ is a universal abelian group for additive Bott-Chern
classes. More precisely, let us denote by $\oVo(X)$ the
full subcategory of $\oV(X)$ of acyclic complexes.
\begin{theorem} \label{thm:8}
Let $\mathscr{G}$ be an abelian group and let $\varphi\colon \Ob \oVo(X) \to
\mathscr{G}$ be an assignment such that
\begin{enumerate}
\item (Normalization) Every complex of the form
\begin{displaymath}
\overline E\colon\quad 0\longrightarrow \overline A
\overset{\Id}{\longrightarrow}
\overline A \longrightarrow 0
\end{displaymath}
satisfies $\varphi(\overline E)=0$.
\item (Additivity for exact sequences) For every short exact
sequence in $\oVo(X)$
\begin{displaymath}
0\longrightarrow \overline E\longrightarrow \overline F\longrightarrow \overline G
\longrightarrow 0,
\end{displaymath}
all of whose constituent short
exact sequences are orthogonally split, we have
\begin{displaymath}
\varphi(\overline F)=\varphi(\overline E)+\varphi(\overline G).
\end{displaymath}
\end{enumerate}
Then $\varphi$ factorizes through a group homomorphism $\widetilde
\varphi\colon \KA(X)\to \mathscr{G}$.
\end{theorem}
\begin{proof}
The second condition tells us that $\varphi$ is a morphism of
semigroups. Thus we only need to show that it vanishes on meager
complexes.
Again by the second condition, it is enough to prove that $\varphi$
vanishes on the class $\mathscr{M}_{0}$. Both conditions together
imply that $\varphi$ vanishes on orthogonally split
complexes. Therefore, by Example
\ref{exm:2}, it vanishes on complexes of the form
$\ocone(\Id_{E})$. Once more by the second condition, if $E$ is acyclic,
\begin{displaymath}
\varphi(E)+\varphi(E[1])=\varphi(\ocone(\Id_{E}))=0.
\end{displaymath}
Thus $\varphi$ vanishes also on the complexes described in Definition \ref{def:11} (ii).
Hence $\varphi$ vanishes on the class $\mathscr{M}$.
\end{proof}
\begin{remark}
The considerations of this section carry over to the category of
complex analytic varieties. If $M$ is a complex analytic variety, one thus obtains for
instance a group $\KA^{\text{{\rm an}}}(M)$. Observe that, by the GAGA principle, whenever $X$ is a
proper smooth algebraic variety over ${\mathbb C}$, the group $\KA^{\text{{\rm an}}}(X^{\text{{\rm an}}})$ is
canonically isomorphic to $\KA(X)$.
\end{remark}
As an example, we consider the simplest case $\Spec {\mathbb C}$ and we
compute the group $\KA(\Spec {\mathbb C})$. Given an acyclic
complex $E$ of ${\mathbb C}$-vector spaces, there is a canonical isomorphism
\begin{displaymath}
\alpha :\det E\longrightarrow {\mathbb C}.
\end{displaymath}
If we have an acyclic complex of hermitian vector bundles $\overline E$,
there is an induced metric on $\det E$. If we put on ${\mathbb C}$ the trivial hermitian metric, then there is a well
defined positive real number $\|\alpha \|$, namely the norm of the isomorphism $\alpha$.
\begin{theorem}
The assignment $\overline E\mapsto \log\|\alpha \|$ induces an isomorphism
\begin{displaymath}
\widetilde{\tau }\colon \KA(\Spec
{\mathbb C})\overset{\simeq}{\longrightarrow}{\mathbb R}.
\end{displaymath}
\end{theorem}
\begin{proof}
First, we observe that the assignment in the theorem
satisfies the hypothesis of Theorem \ref{thm:8}. Thus,
$\widetilde{\tau }$ exists and is a group morphism.
Second, for every $a\in {\mathbb R}$ we consider the acyclic complex
\begin{displaymath}
e^{a}:= 0\longrightarrow
\overline{{\mathbb C}}\overset{e^{a}}{\longrightarrow}
\overline{{\mathbb C}}\longrightarrow 0,
\end{displaymath}
where $\overline {{\mathbb C}}$ has the standard metric and the left copy of
$\overline {{\mathbb C}}$ sits in degree $0$.
Since $\widetilde {\tau }([e^{a}])=a$ we deduce that $\widetilde{\tau }$ is
surjective.
Next we prove that the classes of the form $[e^{a}]$ form a set of
generators of $\KA(\Spec {\mathbb C})$.
Let
$\overline E=(\overline E^{*},f^{*})$ be an acyclic complex. Let $r=\sum_{i}
\rk(E^{i})$. We will show by induction on $r$ that
$[\overline E]=\sum_{k}(-1)^{i_{k}}[e^{a_{k}}]$ for certain integers
$i_{k}$ and real numbers $a_{k}$. Let $n$ be the
smallest integer such that $f^{n}\colon E^{n}\to E^{n+1}$ is
non-zero. Let $v\in E^{n}\setminus \{0\}$. By acyclicity, $f^{n}$ is
injective, hence $\|f^{n}(v)\|\not=0.$ Set $i_{1}=n$ and
$a_{1}=\log(\|f^{n}(v)\|/\|v\|)$ and consider the diagram
\begin{displaymath}
\xymatrix{
& &0\ar[d] &0\ar[d] & &\\
&0\ar[r] &\overline{{\mathbb C}}\ar[r]^{e^{a_{1}}}\ar[d]^{\gamma ^{n}}
&\overline{{\mathbb C}}\ar[r]\ar[d]^{\gamma ^{n+1}}
&0 \ar[r]\ar[d]
& \dots
\\
&0\ar[r] &\overline E^{n}\ar[r]\ar[d] &\overline E^{n+1}\ar[r]\ar[d] &\overline E^{n+2}\ar[r]\ar[d] &\dots
\\
&0\ar[r] &\overline F^{n}\ar[r]\ar[d] &\overline F^{n+1}\ar[r]\ar[d] &\overline F^{n+2}\ar[r]\ar[d] &\dots
\\
& &0 &0 &0 & &
}
\end{displaymath}
where $\gamma ^{n}(1)=v/\|v\|$, $\gamma ^{n+1}(1)=f^{n}(v)/\|f^{n}(v)\|$ and all the
columns are orthogonally split short exact sequences. By Corollary
\ref{cor:3}~\ref{item:42} and Theorem \ref{thm:7}~\ref{item:39}, we have
\begin{displaymath}
[\overline E]=(-1)^{i_{1}}[e^{a_{1}}]+[\overline F].
\end{displaymath}
Thus we deduce the claim.
Considering now the diagram
\begin{displaymath}
\xymatrix{ \overline {\mathbb C} \ar[r]^{e^{a}}\ar[d]_{\Id}& \overline {\mathbb C}\ar[d]^{e^{b}}\\
\overline {\mathbb C} \ar[r]_{e^{a+b}}& \overline {\mathbb C}
}
\end{displaymath}
and using Corollary \ref{cor:3}~\ref{item:58} we deduce that
$[e^{a}]+[e^{b}]=[e^{a+b}]$ and $[e^{-a}]=-[e^{a}]$. Therefore
every element of $\KA(\Spec {\mathbb C})$ is of the form $[e^{a}]$. Hence
$\widetilde{\tau }$ is also injective.
\end{proof}
\section{Definition of $\oDb(X)$ and basic
constructions}\label{sec:oDb}
Let $X$ be a smooth algebraic variety over ${\mathbb C}$. We denote by
$\Coh(X)$ the abelian category of coherent sheaves on $X$ and by
$\Db(X)$ its bounded derived category.
The objects of $\Db(X)$ are complexes of quasi-coherent sheaves
with bounded coherent cohomology.
The reader is referred to \cite{GelfandManin:MR1950475} for an introduction to derived categories.
For notational convenience, we also introduce $\Cb(X)$,
the abelian category of bounded cochain complexes of coherent sheaves
on $X$. Arrows in $\Db(X)$ will be written as $\dashrightarrow$, while
arrows in $\Cb(X)$ will be denoted by $\rightarrow$. The symbol $\sim$
will mean either quasi-isomorphism in $\Cb(X)$ or isomorphism in
$\Db(X)$. Every functor from $\Db(X)$ to another category will tacitly
be assumed to be the derived functor. Therefore we will denote just by
$\Rd f_{*}$, $\Ld f^{*}$, $\otimes^{\Ld}$ and $\Rd\uHom$ the derived
direct image, inverse image, tensor product and internal Hom.
Finally, we will refer to (complexes of) locally free sheaves by normal upper case
letters (such as $F$) whereas we reserve script upper case letters for
(complexes of) quasi-coherent sheaves in general (for instance $\mathcal{F}$).
\begin{remark}\label{rem:3}
Because $X$ is in particular a smooth noetherian scheme over ${\mathbb C}$,
every object $\mathcal{F}$ of $\Cb(X)$ admits a quasi-isomorphism
$F\to \mathcal{F}$, with $F$ an object of $\Vb(X)$. Hence, if
$\mathcal{F}$ is an object in $\Db(X)$, then there is an
isomorphism $F\dashrightarrow\mathcal{F}$ in $\Db(X)$, for some
object $F\in\Vb(X)$. In general, the analogous
statement is no longer true if we work with complex manifolds, as
shown by the counterexample \cite[Appendix,
Cor. A.5]{Voisin:counterHcK}.
\end{remark}
For the sake of completeness, we recall how morphisms in $\Db(X)$
between bounded complexes of vector bundles can be represented.
\begin{lemma}\label{lemma:morphisms_Db}
\begin{enumerate}
\item \label{item:26} Let $F, G$ be bounded complexes of vector
bundles on $X$. Every morphism $F\dashrightarrow G$ in $\Db(X)$ may be
represented by a diagram in $\Cb(X)$
\begin{displaymath}
\xymatrix{
&E\ar[ld]_{f}\ar[rd]^{g} &\\
F & &G,
}
\end{displaymath}
where $E\in \Ob \Vb(X)$ and $f$ is a quasi-iso\-morphism.
\item \label{item:27} Let $E$, $E'$, $F$, $G$ be bounded
complexes of
vector bundles on $X$. Let $f$, $f'$, $g$, $g'$
be morphisms in $\Cb(X)$ as in the diagram below, with $f$, $f'$
quasi-isomorphisms. These data define the same morphism $F \dashrightarrow
G$ in $\Db(X)$ if, and only if, there exists a bounded complex of
vector bundles $E''$ and a diagram
\begin{displaymath}
\xymatrix{
& & E''\ar[ld]_{\alpha}\ar[rd]^{\beta} & &\\
&E\ar[ld]_{f}\ar[rrrd] &{}_g\hspace{2cm} {}_{f'}
&E'\ar[llld]\ar[rd]^{g'} &\\
F & & & &G,
}
\end{displaymath}
whose squares are commutative up to homotopy and where $\alpha$
and $\beta$ are quasi-isomorphisms.
\end{enumerate}
\end{lemma}
\begin{proof}
This follows from the equivalence of $\Db(X)$ with the localization
of the homotopy category of $\Cb(X)$ with respect to the class of
quasi-isomorphisms and Remark \ref{rem:3}.
\end{proof}
\begin{proposition} \label{prop:7} Let $f\colon \overline E\to \overline E$ be an
endomorphism in $\oV(X)$ that represents $\Id_{E}$ in $\Db(X)$. Then
$\ocone(f)$ is meager.
\end{proposition}
\begin{proof}
By Lemma \ref{lemma:morphisms_Db}~\ref{item:27}, the fact that $f$
represents the identity in $\Db(X)$ means that there are diagrams
\begin{displaymath}
\xymatrix{
\overline E' \ar[r]^{\alpha }_{\sim} \ar[d]_{\beta }^{\sim}& \overline E \ar[d]^{\Id_{E}} &&
\overline E' \ar[r]^{\alpha }_{\sim} \ar[d]_{\beta }^{\sim}& \overline E\ar[d]^{f}\\
\overline E \ar[r]_{\Id_{E}}& \overline E, && \overline E \ar[r]_{\Id_{E}}& \overline E,
}
\end{displaymath}
that commute up to homotopy. By Theorem \ref{thm:7}~\ref{item:35}
and \ref{item:36} the equalities
\begin{align*}
[\ocone(\alpha )]-[\ocone(\Id_{E})]&=
[\ocone(\beta )]-[\ocone(\Id_{E})]\\
[\ocone(\alpha )]-[\ocone(\Id_{E})]&= [\ocone(\beta )]-[\ocone(f)]
\end{align*}
hold in the group $\KA(X)$ (observe that these relations do not
depend on the choice of homotopies). Therefore
\begin{displaymath}
[\ocone(f)]=[\ocone(\Id_{E})]=0.
\end{displaymath}
Hence $\ocone(f)$ is meager.
\end{proof}
\begin{definition}\label{definition:hermitian_structure}
Let $\mathcal{F}$ be an object of $\Db(X)$. A \textit{hermitian
metric on} $\mathcal{F}$ consists of the following data:
\begin{enumerate}
\item[--] an isomorphism
$E\overset{\sim}{\dashrightarrow}\mathcal{F}$ in $\Db(X)$, where
$E\in \Ob \Vb(X)$;
\item[--] an object $\overline E\in \Ob \oV(X)$, whose underlying complex
is $E$.
\end{enumerate}
We write $\overlineerline{E}\dashrightarrow\mathcal{F}$ to refer to the
data above and we call it a \textit{metrized object of} $\Db(X)$.
\end{definition}
Our next task is to define the category $\oDb(X)$, whose objects are
objects of $\Db(X)$ provided with equivalence classes of metrics. We
will show that in this category there is a hermitian cone well defined
up to isometries.
\begin{lemma}\label{lemm:14}
Let $\overline E, \overline E'\in \Ob(\oV(X))$ and consider an arrow $E\dashrightarrow E'$ in
$\Db(X)$.
Then the following statements are equivalent:
\begin{enumerate}
\item \label{item:30} for any diagram
\begin{equation}\label{diag:2}
\xymatrix{
& E'' \ar[dl]_{\sim} \ar[dr]&\\
E && E',}
\end{equation}
that represents $E \dashrightarrow E'$, and any choice of hermitian metric on
$E''$, the complex
\begin{equation}\label{eq:54}
\ocone(\overline E'',\overline E)[1]\oplus \ocone(\overline E'',\overline E')
\end{equation}
is meager;
\item \label{item:31} there is a diagram (\ref{diag:2})
that represents $E \dashrightarrow E'$, and a choice of hermitian metric on
$E''$, such that the complex (\ref{eq:54}) is meager;
\item \label{item:31bis}there is a diagram (\ref{diag:2})
that represents $E \dashrightarrow E'$, and a choice of hermitian metric on
$E''$, such that the arrows $\overline E''\to \overline E$ and $\overline E''\to \overline
E'$ are tight morphisms.
\end{enumerate}
\end{lemma}
\begin{proof}
Clearly \ref{item:30} implies \ref{item:31}. To prove the converse
we assume the existence of a $\overline E''$ such that the complex
\eqref{eq:54} is meager, and let $\overline E'''$ be any complex that satisfies the
hypothesis of \ref{item:30}. Then there is a diagram
\begin{displaymath}
\xymatrix{
& & E^{''''}\ar[ld]_{\alpha}\ar[rd]^{\beta} & &\\
&E''\ar[ld]_{f}\ar[rrrd] &{}_g\hspace{2cm} {}_{f'}
&E'''\ar[llld]\ar[rd]^{g'} &\\
E & & & &E'
}
\end{displaymath}
whose squares commute up to homotopy. Using acyclic calculus we have
\begin{multline*}
[\ocone(g')]-[\ocone(f')]=\\
[\ocone(\beta )]+[\ocone(g)]-[\ocone(\alpha )]
-[\ocone(\beta )]-[\ocone(f)]+[\ocone(\alpha )]=\\
[\ocone(g)]-[\ocone(f)]=0.
\end{multline*}
Now repeat the argument of Lemma \ref{lemm:16} to prove that
\ref{item:31} and \ref{item:31bis} are equivalent. The only point is
to observe that the diagram constructed in Lemma \ref{lemm:16}
represents the same morphism in the derived category as the
original diagram.
\end{proof}
\begin{definition}\label{def:13}
Let $\mathcal{F}\in \Ob\Db(X)$ and let $\overline E\dashrightarrow \mathcal{F}$ and
$\overline E'\dashrightarrow \mathcal{F}$ be two hermitian metrics on
$\mathcal{F}$. We say that they \emph{fit tight} if the induced
arrow $\overline E\dashrightarrow\overline E'$ satisfies any of the equivalent conditions
of Lemma \ref{lemm:14}.
\end{definition}
\begin{theorem} \label{thm:5} The relation ``to fit tight'' is an
equivalence relation.
\end{theorem}
\begin{proof}
The reflexivity and the symmetry are obvious. To prove the
transitivity, consider a diagram
\begin{displaymath}
\xymatrix{
& \overline F \ar[dl]_{f} \ar[dr]^{g}& & \overline F' \ar[dl]_{f'} \ar[dr]^{g'} &\\
\overline E & & \overline E' & & \overline E'' ,
}
\end{displaymath}
where all the arrows are tight morphisms and $f$, $f'$ are quasi-isomorphisms. By Lemma \ref{lemm:15},
this diagram can be completed into a diagram
\begin{displaymath}
\xymatrix{
& & \overline F'' \ar[dl]_{\alpha } \ar[dr]^{\beta }& & \\
& \overline F \ar[dl]_{f} \ar[dr]^{g}& & \overline F' \ar[dl]_{f'} \ar[dr]^{g'} &\\
\overline E & & \overline E' & & \overline E'' ,
}
\end{displaymath}
where all the arrows are tight morphisms and the square commutes up
to homotopy. Now observe that $f\circ\alpha$ and $g'\circ\beta$
represent the morphism $E\dashrightarrow E''$ in $\Db(X)$ and are both tight
morphisms by Proposition \ref{prop:6}. This finishes the proof.
\end{proof}
\begin{definition}\label{def:category_oDb}
We denote by $\oDb(X)$ the category whose objects are pairs
$\ocF=(\mathcal{F},h)$ where $\mathcal{F}$ is an object of $\Db(X)$
and $h$ is an equivalence class of metrics that fit tight, and with
morphisms
\begin{displaymath}
\Hom_{\oDb(X)}(\overline{\mathcal{F}},\overline{\mathcal{G}})
=
\Hom_{\Db(X)}(\mathcal{F},\mathcal{G}).
\end{displaymath}
A class $h$ of metrics will be called \emph{a hermitian structure},
and may be referenced by any representative $\overline E\dashrightarrow \mathcal{F}$
or, if the arrow is clear, by the complex $\overline E$. We will denote by
$\overline 0\in \Ob \oDb(X)$ a zero object of $\Db(X)$ provided with a
trivial hermitian structure given by any meager complex.
If the complex underlying an object $\overline{\mathcal{F}}$ is
acyclic, then its hermitian structure has a well defined class in
$\KA(X)$. We will use the notation $[\overline{\mathcal{F}}]$ for this
class.
\end{definition}
\begin{definition}
A morphism in $\oDb(X)$, $f\colon (\overline E\dashrightarrow\mathcal{F})
\dashrightarrow(\overline F\dashrightarrow \mathcal{G})$, is called \emph{a tight
isomorphism} whenever the underlying morphism $f\colon
\mathcal{F}\dashrightarrow \mathcal{G}$ is an isomorphism and the metric on
$\mathcal{G}$ induced by $f$ and $\overline E$ fits tight with $\overline F$. An
object of $\oDb(X)$ will be called \emph{meager} if it is tightly
isomorphic to the zero object with the trivial metric.
\end{definition}
\begin{remark} \label{rem:5} A word of warning is in order about
the use of the acyclic calculus to show that a particular map is a tight
isomorphism. There is an assignment $\Ob\oDb(X)\to
\oV(X)/\mathscr{M}$ that sends $\overline E\dashrightarrow \mathcal{F}$ to $[\overline
E]$. This assignment is not injective. For instance, let $r>0$ be a
real number and consider the trivial bundle $\mathcal{O}_{X}$ with
the trivial metric $\|1\|=1$ and with the metric $\|1\|'=1/r$. Then
the product by $r$ induces an isometry between both bundles. Hence,
if $\overline E$ and $\overline E'$ are the complexes that have
$\mathcal{O}_{X}$ in degree $0$ with the above hermitian metrics,
then $[\overline E]=[\overline E']$, but they define different hermitian
structures on $\mathcal{O}_{X}$ because the product by $r$ does not
represent $\Id_{\mathcal{O}_{X}}$.
Thus the right procedure to show that a morphism $f\colon (\overline E\dashrightarrow
\mathcal{F})\dashrightarrow (\overline F\dashrightarrow \mathcal{G})$ is a tight isomorphism, is
to construct a diagram
\begin{displaymath}
\xymatrix{
& \overline G \ar[dl]^{\sim}_{\alpha } \ar[dr]^{\beta }&\\
\overline E && \overline F}
\end{displaymath}
that represents $f$ and use the acyclic calculus to show that
$[\ocone(\beta )]-[\ocone(\alpha )]=0$.
\end{remark}
By definition, the forgetful functor $\mathfrak{F}\colon \oDb(X)\to
\Db(X)$ is fully faithful. The structure of this functor will be given
in the next result that we suggestively summarize by saying that
$\oDb(X)$ is a principal fibered category over $\Db(X)$ with
structural group $\KA(X)$ provided with a flat connection.
\begin{theorem}\label{thm:13}
The functor $\mathfrak{F}\colon \oDb(X)\to \Db(X)$ defines a
structure of category fibered in groupoids. Moreover
\begin{enumerate}
\item \label{item:43} The fiber $\mathfrak{F}^{-1}(0)$ is the
groupoid associated to the abelian group
$\KA(X)$. The object $\overline 0$ is the neutral element of $\KA(X)$.
\item \label{item:44} For any object $\mathcal{F}$ of $\Db(X)$, the
fiber $\mathfrak{F}^{-1}(\mathcal{F})$ is the groupoid associated
to a torsor over $\KA(X)$. The action of $\KA(X)$ on
$\mathfrak{F}^{-1}(\mathcal{F})$ is given by orthogonal direct
sum. We will denote this action by $+$.
\item \label{item:45} Every isomorphism $f\colon \mathcal{F}\dashrightarrow
\mathcal{G}$ in $\Db(X)$ determines an isomorphism of
$\KA(X)$-torsors
\begin{displaymath}
\mathfrak{t}_{f}\colon \mathfrak{F}^{-1}(\mathcal{F})\longrightarrow
\mathfrak{F}^{-1}(\mathcal{G}),
\end{displaymath}
that sends the hermitian structure $\overline E\overset{\epsilon }{\dashrightarrow}
\mathcal{F}$ to the hermitian structure $\overline E\overset{f\circ
\epsilon }{\dashrightarrow} \mathcal{G}$. This isomorphism will be called
the parallel transport along $f$.
\item \label{item:46} Given two isomorphisms $f\colon
\mathcal{F}\dashrightarrow \mathcal{G}$ and $g\colon \mathcal{G}\dashrightarrow
\mathcal{H}$, the equality
$$\mathfrak{t}_{g\circ f}=\mathfrak{t}_{g}\circ \mathfrak{t}_{f}$$
holds.
\end{enumerate}
\end{theorem}
\begin{proof} Recall that $\mathfrak{F}^{-1}(\mathcal{F})$ is the
subcategory of $\oDb(X)$ whose objects satisfy
$\mathfrak{F}(A)=\mathcal{F}$ and whose morphisms satisfy
$\mathfrak{F}(f)=\Id_{\mathcal{F}}$. The first assertion is
trivial. To prove that $\mathfrak{F}^{-1}(\mathcal{F})$ is a torsor
under $\KA(X)$, we need to show that $\KA(X)$ acts freely and
transitively on this fiber. For the freeness, it is enough to
observe that if, for $\overline E\in\oV(X)$ and $\overline M\in\oVo(X)$, the
complexes $\overline E$ and $\overline E\oplus \overline M$ represent the same
hermitian structure, then the inclusion $\overline E\hookrightarrow \overline
E\oplus\overline M$ is tight. Hence $\ocone(\overline E, \overline E\oplus \overline M)$ is
meager. Since
\begin{displaymath}
\ocone(\overline E, \overline E\oplus \overline M)=\ocone(\overline E, \overline E)\oplus \overline M
\end{displaymath}
and $\ocone(\overline E, \overline E)$ is meager, we deduce that $\overline M$ is
meager. For the transitivity, any two hermitian structures on
$\mathcal{F}$ are related by a diagram
\begin{displaymath}
\xymatrix{
&\overline E''\ar[ld]_{\sim}^{f}\ar[rd]^{\sim}_{g} &\\
\overline E & &\overline E'.
}
\end{displaymath}
After possibly replacing $\overline E''$ by $\overline E''\oplus\ocone(f)$, we
may assume that $f$ is tight. We consider the natural arrow $\overline
E''\to \overline E'\oplus\ocone(g)[1]$ induced by $g$. Observe that
$\ocone(g)[1]$ is acyclic. Finally, we find
\begin{displaymath}
\ocone(\overline E'', \overline E'\oplus\ocone(g)[1])=\ocone(g)\oplus\ocone(g)[1],
\end{displaymath}
that is meager. Thus the hermitian structure represented by $\overline
E''$ agrees with the hermitian structure represented by $\overline
E'\oplus\ocone(g)[1]$.
The remaining properties are straightforward.
\end{proof}
Our next objective is to define the cone of a morphism in
$\oDb(X)$. This will be an object of $\oDb(X)$ uniquely defined up to
tight isomorphism. Let $f\colon
(\overline{E}\dashrightarrow\mathcal{F})\dashrightarrow
(\overline{E}'\dashrightarrow\mathcal{G})$
be a morphism in $\oDb(X)$, where $\overline E$ and $\overline E'$ are
representatives of the hermitian structures.
\begin{definition}\label{def:her_cone}
A \textit{hermitian cone} of $f$, denoted $\ocone(f)$, is an
object $(\cone(f),h_{f})$ of $\oDb(X)$ where:
\begin{enumerate}
\item[--] $\cone(f)\in\Ob\Db(X)$ is a choice of cone of $f$, namely
an object of $\Db(X)$ completing $f$ into a distinguished
triangle;
\item[--] $h_{f}$ is a hermitian structure on $\cone(f)$
constructed as follows. The morphism $f$ induces an arrow $E\dashrightarrow
E'$. Choose any bounded complex $E''$ of vector bundles with a
diagram
\begin{displaymath}
\xymatrix{
&E''\ar[ld]^{\sim}\ar[rd] &\\
E & &E'
}
\end{displaymath}
that represents $E\dashrightarrow E'$, and an arbitrary hermitian metric on
$E''$. Put
\begin{equation}\label{eq:her_cone}
\overline{C}(f)=\ocone(\overline{E}'',\overline{E})[1]\oplus\ocone(\overline{E}'',\overline{E}').
\end{equation}
There are morphisms defined as compositions
\begin{displaymath}
\overline{E}'\longrightarrow \ocone(\overline{E}'',\overline{E}')
\longrightarrow \overline{C}(f),
\end{displaymath}
where the second arrow is the natural inclusion, and
\begin{displaymath}
\overline{C}(f) \longrightarrow \ocone(\overline{E}'',\overline{E}')
\longrightarrow \overline{E}''[1] \longrightarrow
\overline{E}[1],
\end{displaymath}
where the first arrow is the natural projection.
These morphisms fit into a natural distinguished triangle completing
$\overline{E}\dashrightarrow\overline{E}'$. By the axioms of a triangulated category,
there is a quasi-isomorphism $\overline{C}(f)\dashrightarrow\cone(f)$ such that
the following diagram (where the rows are distinguished triangles)
\begin{displaymath}
\xymatrix{
\overline{E}\ar@{-->}[r]\ar@{-->}[d] &\overline{E}'\ar@{-->}[r]\ar@{-->}[d]
&\overline{C}(f)\ar@{-->}[d]\ar@{-->}[r] &\overline{E}[1]\ar@{-->}[d]\\
\mathcal{F}\ar@{-->}[r] &\mathcal{G}\ar@{-->}[r]
&\cone(f)\ar@{-->}[r] &\mathcal{F}[1]
}
\end{displaymath}
commutes. We take the hermitian structure that
$\overline{C}(f)\dashrightarrow\cone(f)$ defines on $\cone(f)$. By Theorem
\ref{thm:6bis} below, this hermitian structure does not depend on
the particular choice of arrow $\overline{C}(f)\dashrightarrow\cone(f)$. Moreover,
by Theorem \ref{thm:6}, the hermitian structure will not depend
on the choices of representatives of hermitian structures nor on
the choice of $\overline{E}''$.
\end{enumerate}
\end{definition}
\begin{remark}\label{rem:8}
The factor $\ocone(\overline{E}'',\overline{E})[1]$ should be seen as a
correction term that takes into account the difference between the metrics of
$\overline E$ and $\overline E''$. We would have obtained an equivalent
definition using the factor $\ocone(\overline{E}'',\overline{E})[-1]$.
\end{remark}
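For instance, when $f$ is realized by an actual morphism of complexes of
hermitian vector bundles $\overline E\to \overline E'$, one may take $\overline E''=\overline E$,
with the identity as the left arrow of the diagram above; equation
\eqref{eq:her_cone} then specializes to
\begin{displaymath}
\overline{C}(f)=\ocone(\overline{E},\overline{E})[1]\oplus\ocone(\overline{E},\overline{E}'),
\end{displaymath}
where, as observed in the proof of the torsor property above, the correction
term $\ocone(\overline{E},\overline{E})[1]$ is meager.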
\begin{theorem}\label{thm:6bis}
Let
\begin{displaymath}
\xymatrix{
\mathcal{F}\ar@{-->}[r]\ar[d]^{\Id}
&\mathcal{G}\ar@{-->}[r]\ar[d]^{\Id}
&\mathcal{H}\ar@{-->}[r]\ar@{-->}[d]^{\alpha }
& \mathcal{F}[1]\ar@{-->}[r] \ar[d]^{\Id}
&\dots\\
\mathcal{F}\ar@{-->}[r]
&\mathcal{G}\ar@{-->}[r]
&\mathcal{H}\ar@{-->}[r]
& \mathcal{F}[1]\ar@{-->}[r]
&\dots
}
\end{displaymath}
be a commutative diagram in $\Db(X)$, where the rows are the same
distinguished triangle. Let $\overline H\dashrightarrow \mathcal{H}$ be any
hermitian structure. Then $\alpha\colon (\overline H\dashrightarrow
\mathcal{H})\dashrightarrow (\overline H\dashrightarrow \mathcal{H}) $ is a tight
isomorphism.
\end{theorem}
\begin{proof}
First of all, we claim that if
$\gamma:\overline{\mathcal{B}}\dashrightarrow\overline{\mathcal{H}}$ is any isomorphism,
then $\gamma^{-1}\circ\alpha\circ\gamma$ is tight if, and only if,
$\alpha$ is tight. Indeed, denote by $\overline G\dashrightarrow\mathcal{B}$ a
representative of the hermitian structure on
$\overline{\mathcal{B}}$. Then there is a diagram
\begin{displaymath}
\xymatrix{
& & &\overline
R\ar[ld]^{t_{1}}_{\sim}\ar[rd]_{t_{2}}^{\sim} & &\\
& &\overline P\ar[ld]^{w_{1}}_{\sim}\ar[rd]_{w_{2}}^{\sim}
& &\overline Q\ar[ld]^{w_{3}}_{\sim}\ar[rd]_{w_{4}}
^{\sim}&\\
&\overline{G}'\ar[ld]_{\sim}^{u}\ar[rd]^{\sim}_{v} &
&\overline{H}'\ar[rd]^{\sim}_{f}\ar[ld]_{\sim}^{g} &
&\overline{G}'\ar[ld]_{\sim}^{v}\ar[rd]^{\sim}_{u}&\\
\overline G\ar@{-->}[rr] & &\overline H\ar@{-->}[rr] &
&\overline H\ar@{-->}[rr] & &\overline G
}
\end{displaymath}
for the liftings of $\gamma^{-1}$, $\alpha$, $\gamma$ to
representatives, as well as for their composites, all of whose squares
are commutative up to homotopy. By acyclic calculus, we have the
following chain of equalities
\begin{multline*}
[\ocone(u\circ w_{1}\circ t_{1})[1]]+[\ocone(u\circ w_{4}\circ
t_{2})]=\\
[\ocone(u)[1]]+[\ocone(v)]+[\ocone(g)[1]]+
[\ocone(f)]+[\ocone(v)[1]]+[\ocone(u)]=\\
[\ocone(g)[1]]+[\ocone(f)].
\end{multline*}
Thus, the right hand side vanishes if and only if the left hand
side vanishes, proving the claim. This observation allows us to reduce
the proof of the theorem to the following situation: consider a
diagram of complexes of hermitian vector bundles
\begin{displaymath}
\xymatrix{
\overline{E}\ar[d]^{\Id}\ar[r]^{f}
&\overline{F}\ar[d]^{\Id}\ar[r]^{\iota}
&\overline{\cone}(f)\ar[r]^{\pi}\ar@{-->}[d]^{\phi}_{\sim} &
\overline{E}[1]\ar[d]^{\Id}\ar[r] &\dots\\
\overline{E}\ar[r]^{f} &\overline{F}\ar[r]^{\iota}
&\overline{\cone}(f)\ar[r]^{\pi} &\overline{E}[1]\ar[r]
&\dots,
}
\end{displaymath}
which commutes in $\Db(X)$. We need to show that $\phi$ is a tight
isomorphism. The commutativity of the diagram translates into the
existence of bounded complexes of hermitian vector bundles
$\overline{P}$ and $\overline{Q}$ and a diagram
\begin{displaymath}
\xymatrix{
& &
&\overline{\cone}(f)\ar[rd]^{\pi}\ar@{-->}[dd]^{\phi}_{\sim}
&\\
\overline{F}\ar@/^/[rrru]^{\iota}\ar@/_/[rrrd]_{\iota}
&\overline{P}\ar[l]_{\hspace{0.2cm} j}^{\sim}\ar[r]^{g}
&\overline{Q}
\ar[ru]_{u}^{\sim}\ar[rd]^{v}_{\sim} & &\overline{E}[1]\\
& &
&\overline{\cone}(f)\ar[ru]_{\pi} &
}
\end{displaymath}
fulfilling the following properties: (a) $j$, $u$, $v$ are
quasi-isomorphisms; (b) the squares formed by $\iota, j, g, u$ and
$\iota, j, g, v$ are commutative up to homotopy; (c) the morphisms
$u$, $v$ induce $\phi$ in the derived category. We deduce a
commutative up to homotopy square
\begin{displaymath}
\xymatrix{
\ocone(g)\ar[d]_{\tilde{v}}^{\sim}\ar[r]^{\tilde{u}}_{\sim}
&\ocone(\iota)\ar[d]^{\tilde{\pi}}_{\sim}\\
\ocone(\iota)\ar[r]^{\tilde{\pi}}_{\sim} &\overline{E}[1].
}
\end{displaymath}
The arrows $\tilde{u}$, $\tilde{v}$ are induced by $j,u$ and $j, v$
respectively. Observe they are quasi-isomorphisms. Also the natural
projection $\tilde{\pi}$ is a quasi-isomorphism. By acyclic
calculus, we have
\begin{displaymath}
[\ocone(\tilde{\pi})]+[\ocone(\tilde{u})]=[\ocone(\tilde{\pi})]+[\ocone(\tilde{v})].
\end{displaymath}
Therefore we find
\begin{equation}\label{eq:59}
[\ocone(\tilde{u})]=[\ocone(\tilde{v})].
\end{equation}
Finally, notice there is an exact sequence
\begin{displaymath}
0\longrightarrow\ocone(u)\longrightarrow
\ocone(\tilde{u})\longrightarrow \ocone(j[1]) \longrightarrow 0,
\end{displaymath}
whose rows are orthogonally split. Therefore,
\begin{equation}\label{eq:60}
[\ocone(\tilde{u})]=[\ocone(u)]+[\ocone(j[1])].
\end{equation}
Similarly we prove
\begin{equation}\label{eq:61}
[\ocone(\tilde{v})]=[\ocone(v)]+[\ocone(j[1])].
\end{equation}
From equations (\ref{eq:59})--(\ref{eq:61}) we infer that
$[\ocone(u)]=[\ocone(v)]$; since the shift changes the sign of a class,
this gives
\begin{displaymath}
[\ocone(u)[1]]+[\ocone(v)]=0,
\end{displaymath}
as was to be shown.
\end{proof}
\begin{theorem} \label{thm:6} The object $\overline{C}(f)$ of equation
\eqref{eq:her_cone} is well defined up to tight isomorphism.
\end{theorem}
\begin{proof}
We first show the independence of the choice of $\overline E''$, up to
tight isomorphism. To this end, it is enough to assume that there
is a diagram
\begin{displaymath}
\xymatrix{
&& \overline E''' \ar[dl]_{\sim} \ar[rdd]&\\
& \overline E''\ar[dl]_{\sim}\ar[rrd]&&\\
\overline E &&&\overline E'
}
\end{displaymath}
such that the triangle commutes up to homotopy. Fix such a homotopy. Then
\begin{align*}
[\ocone(\ocone(\overline E''',\overline E'),\ocone(\overline E'',\overline E'))]&=
-[\ocone(E''', E'')],\\
[\ocone(\ocone(\overline E''',\overline E),\ocone(\overline E'',\overline E))]&=
-[\ocone(E''', E'')].
\end{align*}
By Lemma \ref{lemm:13}, the left hand sides of these relations
agree, and hence the hermitian structure does not
depend on the choice of $\overline E''$.
We now prove the independence of the choice of the representative
$\overline E$. Let $\overline F\to \overline E$ be a tight morphism. Then we can
construct a diagram
\begin{displaymath}
\xymatrix{
& \overline E''' \ar[ddl]_{\sim} \ar[rd]^{\sim}&&\\
&& \overline E''\ar[dl]_{\sim}\ar[rd]&\\
\overline F \ar[r]^{\sim} &\overline E &&\overline E',
}
\end{displaymath}
where the square commutes up to homotopy. Choose one
homotopy. Taking into account Lemma \ref{lemm:13}, we find
\begin{align*}
[\ocone(\ocone(\overline E''',\overline E'),\ocone(\overline E'',\overline E'))]&=
-[\ocone(E''', E'')],\\
[\ocone(\ocone(\overline E''',\overline F),\ocone(\overline E'',\overline E))]&=
-[\ocone(E''', E'')]+[\ocone(\overline F,\overline
E)]\\
&=-[\ocone(E''', E'')].
\end{align*}
Hence the definitions of $\overline{C}(f)$ using $\overline E$ or $\overline F$ agree
up to tight isomorphism. The remaining possible choices of
representatives are treated analogously.
\end{proof}
\begin{remark}
The construction of $\ocone(f)$ involves the choice of $\cone(f)$,
which is unique up to isomorphism. Since the construction of
$\overline{C}(f)$ \eqref{eq:her_cone} does not depend on the choice of
$\cone(f)$, by Theorem \ref{thm:6bis}, we see that different
choices of $\cone(f)$ give rise to tightly isomorphic hermitian
cones. Therefore $\ocone(f)$ is well defined up to tight isomorphism
and we will usually call it \emph{the}
hermitian cone of $f$. When the morphism is clear, we will also write
$\ocone(\overline{\mathcal{F}},\overline{\mathcal{G}})$ to refer to it.
\end{remark}
The hermitian cone satisfies the same relations as the usual cone.
\begin{proposition} \label{prop:10} Let $f\colon \overline {\mathcal{F}}\dashrightarrow
\overline {\mathcal{G}}$ be a morphism in $\oDb(X)$. Then, the natural
morphisms
\begin{gather*}
\ocone(\overline{\mathcal{G}},\ocone(f))\dashrightarrow
\overline{\mathcal{F}}[1],\\
\overline{\mathcal{G}}\dashrightarrow \ocone(\ocone(f)[-1],\overline{\mathcal{F}})
\end{gather*}
are tight isomorphisms.
\end{proposition}
\begin{proof}
After choosing representatives, there are isometries
\begin{align*}
\ocone(\ocone(\overline{\mathcal{G}},\ocone(f)),
\overline{\mathcal{F}}[1])\cong &
\ocone(\ocone(\Id_{\mathcal{F}}),\ocone(\Id_{\mathcal{G}}))\cong\\
&\ocone(\overline{\mathcal{G}}, \ocone(\ocone(f)[-1],\overline{\mathcal{F}})).
\end{align*}
Since the middle term is meager, the same is true for the other two.
\end{proof}
We next extend some basic constructions in $\Db(X)$ to $\oDb(X)$.
\noindent\textbf{Derived tensor product.} Let
$\overline{\mathcal{F}}_{i}=(
\overline{E}_{i}\dashrightarrow\mathcal{F}_{i})$, $i=1,2$, be objects
of $\oDb(X)$. The derived tensor product $\mathcal{F}_{1}\otimes^{\Ld}
\mathcal{F}_{2}$ is endowed with a natural hermitian structure
\begin{equation}
\label{eq:29}
\overline{E}_{1}\otimes\overline{E}_{2}\dashrightarrow
\mathcal{F}_{1}\otimes^{\Ld} \mathcal{F}_{2},
\end{equation}
that is well defined by Theorem \ref{thm:4}~\ref{item:24}. We write
$\overline{\mathcal{F}}_{1}\otimes^{\Ld} \overline{\mathcal{F}}_{2}$ for
the resulting object in $\oDb(X)$.
\noindent\textbf{Derived internal $\Hom$ and dual objects.} Let
$\overline{\mathcal{F}}_{i}=(
\overline{E}_{i}\dashrightarrow\mathcal{F}_{i})$, $i=1,2$, be objects
of $\oDb(X)$. The derived internal $\Hom$, $
\Rd\uHom(\mathcal{F}_{1},\mathcal{F}_{2})$ is endowed with a natural
hermitian structure
\begin{equation}
\label{eq:22}
\uHom(\overline{E}_{1},\overline{E}_{2})\dashrightarrow
\Rd\uHom(\mathcal{F}_{1},\mathcal{F}_{2}),
\end{equation}
that is well defined by Theorem \ref{thm:4}~\ref{item:24}. We write
$\Rd\uHom(\overline{\mathcal{F}}_{1},\overline{\mathcal{F}}_{2})$ for the resulting
object in $\oDb(X)$.
In particular, denote by $\overline {\mathcal{O}}_{X}$ the structural sheaf
with the metric $\|1\|=1$. Then, for every object
$\overline{\mathcal{F}} \in \oDb(X)$, the \emph{dual object} is defined to
be
\begin{equation}
\label{eq:30}
\overline{\mathcal{F}}^{\vee}=\Rd\uHom(\overline
{\mathcal{F}},\overline{\mathcal{O}}_{X}).
\end{equation}
\noindent\textbf{Left derived inverse image.} Let
$g\colon X^{\prime}\rightarrow X$ be a morphism of smooth algebraic
varieties over ${\mathbb C}$ and $\overline{\mathcal{F}}=(\overline E\dashrightarrow \mathcal{F})\in
\Ob \oDb(X)$. Then the left derived inverse image $\Ld
g^{\ast}(\mathcal{F})$ is equipped with the hermitian structure
$g^{\ast}(\overline{E})\dashrightarrow\Ld g^{\ast}(\mathcal{F})$,
that is well defined up to tight isomorphism by Theorem
\ref{thm:4}~\ref{item:25}. As is customary, we will pretend that
$\Ld g^{\ast}$ is a functor. The notation for the corresponding
object in $\oDb(X^{\prime})$ is $\Ld
g^{\ast}(\overline{\mathcal{F}})$. If $f\colon
\overline{\mathcal{F}}_{1}\dashrightarrow \overline{\mathcal{F}}_{2}$
is a morphism in $\oDb(X)$, we denote by $\Ld g^{\ast}(f)\colon \Ld
g^{\ast}(\overline{\mathcal{F}}_{1})\dashrightarrow\Ld g^{\ast}
(\overline{\mathcal{F}}_{2})$ its left derived inverse image by $g$.
The functor $\Ld g^{\ast}$ preserves the structure of principal
fibered category with flat connection and the formation of hermitian
cones. Namely, we have the following result, which is easily proved.
\begin{theorem} \label{thm:9} Let $g\colon X^{\prime}\rightarrow X$
be a morphism of smooth algebraic varieties over ${\mathbb C}$ and let
$f\colon \overline {\mathcal{F}}_{1}\dashrightarrow \overline {\mathcal{F}}_{2}$ be a
morphism in $\oDb(X)$.
\begin{enumerate}
\item The functor $\Ld g^{\ast}$ preserves the forgetful functor:
\begin{displaymath}
\mathfrak{F}\circ \Ld g^{\ast}=\Ld g^{\ast}\circ \mathfrak{F}.
\end{displaymath}
\item The restriction $\Ld g^{\ast}\colon\KA(X)\to \KA(X')$ is a
group homomorphism.
\item The functor $\Ld g^{\ast}$ is equivariant with respect to the
actions of $\KA(X)$ and $\KA(X')$.
\item The functor $\Ld g^{\ast}$ preserves parallel transport: if
$f$ is an isomorphism, then
\begin{displaymath}
\Ld g^{\ast}\circ \mathfrak{t}_{f}=\mathfrak{t}_{\Ld
g^{\ast}(f)}\circ \Ld g^{\ast}.
\end{displaymath}
\item The functor $\Ld g^{\ast}$ preserves hermitian cones:
\begin{displaymath}
\Ld g^{\ast}(\ocone(f))=\ocone(\Ld g^{\ast}(f)).
\end{displaymath}
\end{enumerate}
\end{theorem}
$\square$
\noindent\textbf{Classes of isomorphisms and distinguished
triangles.}
Let $f\colon \overline {\mathcal{F}}\overset{\sim}{\dashrightarrow}
\overline{\mathcal{G}}$ be an isomorphism in $\oDb(X)$. To it, we attach a
class $[f]\in \KA(X)$ that measures the failure of $f$ to be a tight
isomorphism. This class is defined using the hermitian cone:
\begin{equation}
\label{eq:57}
[f]=[\ocone(f)].
\end{equation}
Observe the abuse of notation: we wrote $[\ocone(f)]$ for the class
in $\KA(X)$ of the hermitian structure of a hermitian cone of
$f$. This is well defined, since the hermitian cone is unique up to
tight isomorphism. Alternatively, we can construct $[f]$ using
parallel transport as follows. There is a unique element $\overline
A\in\KA(X)$ such that
\begin{displaymath}
\overline {\mathcal{G}}=\mathfrak{t}_{f}\overline {\mathcal{F}}+\overline A.
\end{displaymath}
We denote this element by $\overline {\mathcal{G}}-\mathfrak{t}_{f}\overline
{\mathcal{F}}$. Then
\begin{displaymath}
[f]=\overline {\mathcal{G}}-\mathfrak{t}_{f}\overline
{\mathcal{F}}.
\end{displaymath}
By the very definition of parallel transport, both definitions
clearly agree.
\begin{definition}\label{def:6}
A \textit{distinguished triangle in} $\oDb(X)$ consists of a
diagram
\begin{equation}\label{eq:64}
\overline{\tau}=(u,v,w):
\overline{\mathcal{F}}\overset{u}{\dashrightarrow}
\overline{\mathcal{G}}
\overset{v}{\dashrightarrow}\overline{\mathcal{H}}
\overset{w}{\dashrightarrow}\overline{\mathcal{F}}[1]
\overset{u}{\dashrightarrow}\dots
\end{equation}
in $\oDb(X)$, whose underlying morphisms in $\Db(X)$ form a
distinguished triangle. We will say that it is \emph{tightly
distinguished} if there is a commutative diagram
\begin{equation}\label{eq:63}
\xymatrix{
\overline{\mathcal{F}}\ar@{-->}[r]\ar[d]^{\Id}
&\overline{\mathcal{G}}\ar@{-->}[r]\ar[d]^{\Id}
&\ocone(\overline {\mathcal{F}},\overline{\mathcal{G}})
\ar@{-->}[r]\ar@{-->}[d]^{\alpha }
& \overline{\mathcal{F}}[1]\ar@{-->}[r] \ar[d]^{\Id}
&\dots\\
\overline{\mathcal{F}}\ar@{-->}[r]
&\overline{\mathcal{G}}\ar@{-->}[r]
&\overline{\mathcal{H}}\ar@{-->}[r]
& \overline{\mathcal{F}}[1]\ar@{-->}[r]
&\dots,
}
\end{equation}
with $\alpha $ a tight isomorphism.
\end{definition}
To every distinguished triangle in $\oDb(X)$ we can associate a class
in $\KA(X)$ that measures the failure to be tightly distinguished.
Let $\overline {\tau }$ be a distinguished triangle as in
\eqref{eq:64}. Then there is a diagram as in \eqref{eq:63}, but with
$\alpha $ an isomorphism that is not necessarily tight. Then we define
\begin{equation}
\label{eq:65}
[\overline{\tau }]=[\alpha ].
\end{equation}
By Theorem \ref{thm:6bis}, the class $[\alpha]$ does not depend on
the particular choice of morphism $\alpha$ in $\oDb(X)$ for which
\eqref{eq:63} commutes. Hence \eqref{eq:65} only depends on
$\overline{\tau}$.
\begin{theorem}\label{thm:10}\
\begin{enumerate}
\item \label{item:51} Let $f$ be an isomorphism in $\oDb(X)$
(respectively $\overline \tau $ a distinguished triangle). Then $[f]=0$
(respectively $[\overline \tau ]=0$) if and only if $f$ is a tight isomorphism
(respectively $\overline \tau $ is tightly distinguished).
\item \label{item:50} Let $g\colon X'\to X$ be a morphism of smooth
complex varieties, let $f$ be an isomorphism in $\oDb(X)$ and
$\overline \tau $ a distinguished triangle in $\oDb(X)$. Then
\begin{displaymath}
\Ld g^{\ast}[f]=[\Ld g^{\ast}f],\qquad \Ld g^{\ast}[\overline \tau
]=[\Ld g^{\ast}\overline \tau ].
\end{displaymath}
In particular, tight isomorphisms and tightly distinguished
triangles are preserved under left derived inverse images.
\item \label{item:16}
Let $f\colon
\overline{\mathcal{F}}\dashrightarrow\overline{\mathcal{G}}$ and
$h\colon \overline{\mathcal{G}}\dashrightarrow
\overline{\mathcal{H}}$ be two isomorphisms in $\oDb(X)$. Then:
\begin{displaymath}
[h\circ f]=[h]+[f].
\end{displaymath}
In particular, $[f^{-1}]=-[f]$.
\item \label{item:17} For any distinguished triangle $\overline \tau $ in
$\oDb(X)$ as in Definition \ref{def:6}, the rotated triangle
\begin{displaymath}
\overline{\tau}'\colon\
\overline{\mathcal{G}}\overset{v}{\dashrightarrow}\overline{\mathcal{H}}
\overset{w}{\dashrightarrow}\overline{\mathcal{F}}[1]
\overset{-u[1]}{\dashrightarrow}\overline{\mathcal{G}}[1]
\overset{v[1]}{\dashrightarrow}\dots
\end{displaymath}
satisfies
\begin{math}
[\overline \tau ']=-[\overline \tau ].
\end{math}
In particular, rotating preserves tightly distinguished
triangles.
\item \label{item:28} For any acyclic complex $\overline{\mathcal{F}}$,
we have
\begin{displaymath}
[\overline{\mathcal{F}}\to 0\to
0\to\dots]=
[\overline{\mathcal{F}}].
\end{displaymath}
\item \label{item:47} If
$f\colon\overline{\mathcal{F}}\dashrightarrow\overline{\mathcal{G}}$ is an isomorphism
in $\oDb(X)$, then
\begin{displaymath}
[0\to\overline{\mathcal{F}}\dashrightarrow\overline{\mathcal{G}}
\to\dots]=[f].
\end{displaymath}
\item \label{item:48} For a commutative diagram of distinguished
triangles
\begin{displaymath}
\xymatrix{
\overline \tau \ar@{-->}[d]
&\overline{\mathcal{F}}\ar@{-->}[r]\ar@{-->}[d]^{f}_{\sim}
&\overline{\mathcal{G}}\ar@{-->}[r]\ar@{-->}[d]^{g}_{\sim}
&\overline{\mathcal{H}}
\ar@{-->}[r]\ar@{-->}[d]^{h} _{\sim}
& \overline{\mathcal{F}}[1]\ar@{-->}[r] \ar@{-->}[d]^{f[1]}_{\sim}
&\dots\\
\overline \tau ' &\overline{\mathcal{F}}'\ar@{-->}[r]
&\overline{\mathcal{G}}'\ar@{-->}[r]
&\overline{\mathcal{H}}'\ar@{-->}[r]
& \overline{\mathcal{F}}'[1]\ar@{-->}[r]
&\dots,
}
\end{displaymath}
the following relation holds:
\begin{displaymath}
[\overline \tau ']
-[\overline \tau ]=
[f]-[g]+[h].
\end{displaymath}
\item \label{item:49} For a commutative diagram of distinguished
triangles
\begin{equation}\label{eq:66}
\xymatrix{
\overline \tau \ar@{-->}[d]
&\overline{\mathcal{F}}\ar@{-->}[r]\ar@{-->}[d]
&\overline{\mathcal{G}}\ar@{-->}[r]\ar@{-->}[d]
&\overline{\mathcal{H}}
\ar@{-->}[r]\ar@{-->}[d]
& \overline{\mathcal{F}}[1]\ar@{-->}[r] \ar@{-->}[d]
&\dots\\
\overline \tau' \ar@{-->}[d]
&\overline{\mathcal{F}}'\ar@{-->}[r]\ar@{-->}[d]
&\overline{\mathcal{G}}'\ar@{-->}[r]\ar@{-->}[d]
&\overline{\mathcal{H}}'
\ar@{-->}[r]\ar@{-->}[d]
& \overline{\mathcal{F}}'[1]\ar@{-->}[r] \ar@{-->}[d]
&\dots\\
\overline \tau '' &\overline{\mathcal{F}}''\ar@{-->}[r]\ar@{-->}[d]
&\overline{\mathcal{G}}''\ar@{-->}[r] \ar@{-->}[d]
&\overline{\mathcal{H}}''\ar@{-->}[r] \ar@{-->}[d]
& \overline{\mathcal{F}}''[1]\ar@{-->}[r] \ar@{-->}[d]
&\dots\\
&\overline{\mathcal{F}}[1]\ar@{-->}[r]\ar@{-->}[d]
&\overline{\mathcal{G}}[1]\ar@{-->}[r]\ar@{-->}[d]
&\overline{\mathcal{H}}[1]
\ar@{-->}[r]\ar@{-->}[d] &\overline{\mathcal{F}}[2]\ar@{-->}[r]
\ar@{-->}[d]
&\dots&\\
&\vdots &\vdots &\vdots & \vdots &&\\
& \overline \eta \ar@{-->}[r] & \overline \eta' \ar@{-->}[r] & \overline \eta''
&&&
}
\end{equation}
the following relation holds:
\begin{displaymath}
[\overline \tau ]-[\overline \tau' ]
+[\overline \tau'' ]=
[\overline \eta ]-[\overline \eta' ]
+[\overline \eta'' ].
\end{displaymath}
\end{enumerate}
\end{theorem}
\begin{proof}
The first two statements are clear. For the third, we may assume
that $f$ and $g$ are realized by quasi-isomorphisms
\begin{displaymath}
f\colon \overline F\longrightarrow \overline G,
\quad g\colon \overline G\longrightarrow \overline H.
\end{displaymath}
Then the result follows from Theorem \ref{thm:7}~\ref{item:41}.
The fourth assertion is a consequence of Proposition
\ref{prop:10}. Then \ref{item:28}, \ref{item:47} and \ref{item:48}
follow from equation \eqref{eq:65} and the fourth statement. The
last property is derived from \ref{item:48} by comparing the
diagram to a diagram of tightly distinguished triangles.
\end{proof}
As an application of the class in $\KA(X)$ attached to a
distinguished triangle, we exhibit a natural morphism
$K_{1}(X)\to\KA(X)$. This is included for the sake of completeness,
but will not be needed in the sequel.
\begin{proposition}\label{prop:K1_to_KA}
There is a natural morphism of groups $K_{1}(X)\to\KA(X)$.
\end{proposition}
\begin{proof}
We follow the definitions and notations of \cite{Burgos-Wang}. From
\emph{loc. cit.} we know it is enough to construct a morphism of
groups
\begin{equation}\label{eq:K1-KA}
H_{1}(\widetilde{\mathbb{Z}}C(X))\to\KA(X).
\end{equation}
By definition, the piece of degree $n$ of the homological complex
$\widetilde{\mathbb{Z}}C(X)$ is
\begin{displaymath}
\widetilde{\mathbb{Z}}C_{n}(X)=\mathbb{Z}C_{n}(X)/D_{n}.
\end{displaymath}
Here $\mathbb{Z}C_{n}(X)$ stands for the free abelian group on
metrized exact $n$-cubes and $D_{n}$ is the subgroup of degenerate
elements. A metrized exact $1$-cube is a short exact sequence of
hermitian vector bundles. Hence, for such a $1$-cube
$\overline{\varepsilon}$, there is a well defined class in
$\KA(X)$. Observe that this class coincides with the class of
$\overline{\varepsilon}$ thought of as a distinguished triangle in
$\oDb(X)$. Because $\KA(X)$ is an abelian group, the existence of a
morphism of groups follows:
\begin{displaymath}
\mathbb{Z}C_{1}(X)\longrightarrow\KA(X).
\end{displaymath}
From the definition of degenerate cube \cite[Def. 3.3]{Burgos-Wang}
and the construction of $\KA(X)$, this morphism clearly factors
through $\widetilde{\mathbb{Z}}C_{1}(X)$. The definition of the
differential $d$ of the complex $\widetilde{\mathbb{Z}}C(X)$
\cite[(3.2)]{Burgos-Wang} and Theorem \ref{thm:10} \ref{item:49}
ensure that $d\mathbb{Z}C_{2}(X)$ is in the kernel of the
morphism. Hence we derive the existence of a morphism
\eqref{eq:K1-KA}.
\end{proof}
\noindent\textbf{Classes of complexes and of direct images of
complexes.} In
\cite[Section 2]{BurgosLitcanu:SingularBC} the notion of homological
exact sequences of metrized coherent sheaves is treated. In the
present article, this situation will arise in later
considerations. Therefore we provide the link between the point of
view of \emph{loc. cit.} and the formalism adopted here. The reader
will have no difficulty in translating it to cohomological complexes.
Consider a homological complex
\begin{displaymath}
\overline{\varepsilon }:\quad
0\to \overline{\mathcal{F}}_{m}\to \dots \to
\overline{\mathcal{F}}_{l}\to 0
\end{displaymath}
of metrized coherent sheaves, namely coherent sheaves provided with
hermitian structures $\overline {\mathcal{F}}_{i}=(\mathcal{F}_{i},\overline
F_{i}\dashrightarrow \mathcal{F}_{i})$. We may equivalently see
$\overline{\varepsilon}$ as a cohomological complex, by the usual
relabeling $\overline{\mathcal{F}}^{-i}=\overline{\mathcal{F}}_{i}$. This will be
freely used in the sequel, especially in cone constructions.
\begin{definition}\label{def:1}
The complex $\overline \varepsilon $ defines an object $[\overline \varepsilon
]\in \Ob \oDb(X)$ that is determined inductively by the condition
\begin{displaymath}
[\overline \varepsilon ]=\ocone(\overline{\mathcal{F }}_{m}[m],[\sigma _{<m}\overline
\varepsilon ]).
\end{displaymath}
Here $\sigma _{<m}$ is the homological b\^ete filtration and
$\overline{\mathcal{F
}}_{m}$ denotes a cohomological complex concentrated in degree
zero. Hence, $\overline{\mathcal{F }}_{m}[m]$ is a cohomological complex
concentrated in degree $-m$.
\end{definition}
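For instance, reading the recursion with a one-term complex as base case (so
that $[\overline \varepsilon ]=\overline{\mathcal{F}}_{l}[l]$ when $\overline{\mathcal{F}}_{l}$ is
the only non-zero term), a complex concentrated in homological degrees $1$
and $0$ yields
\begin{displaymath}
[\overline \varepsilon ]=\ocone\bigl(\overline{\mathcal{F}}_{1}[1],\overline{\mathcal{F}}_{0}\bigr),
\end{displaymath}
and a complex concentrated in degrees $2$, $1$, $0$ yields
$[\overline \varepsilon ]=\ocone\bigl(\overline{\mathcal{F}}_{2}[2],
\ocone(\overline{\mathcal{F}}_{1}[1],\overline{\mathcal{F}}_{0})\bigr)$.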
If $\overline{E}$ is a hermitian vector bundle on $X$, then
\begin{math}
[\overline{\varepsilon}\otimes \overline{E}]=[\overline{\varepsilon}]\otimes \overline{E}
\end{math}.
According to Definition \ref{def:category_oDb}, if $\varepsilon $ is
an acyclic complex, then we also have the corresponding class
$[[\overline{\varepsilon}]]$ in $\KA(X)$. We will employ the lighter
notation $[\overline{\varepsilon}]$ for this class.
Given a morphism $\varphi\colon \overline{\varepsilon}\to\overline{\mu}$ of
bounded complexes of metrized coherent sheaves, the pieces of the
complex $\cone(\varepsilon,\mu)$ are naturally endowed with hermitian
metrics. We thus get a complex of metrized coherent sheaves
$\overline{\cone(\varepsilon,\mu)}$. Hence Definition \ref{def:1} provides
an object $[\overline{\cone(\varepsilon,\mu)}]$ in $\oDb(X)$. On the other
hand, Definition \ref{def:her_cone} attaches to $\varphi$ the
hermitian cone $\ocone([\overline{\varepsilon}],[\overline{\mu}])$, which is well
defined up to tight isomorphism. Both constructions actually agree.
\begin{lemma}\label{lemm:3}
Let $\overline{\varepsilon}\to\overline{\mu}$ be a morphism of bounded complexes
of metrized coherent sheaves on $X$. Then there is a tight
isomorphism
\begin{displaymath}
\ocone([\overline{\varepsilon}],[\overline{\mu}])\cong [\overline{\cone(\varepsilon,\mu)}].
\end{displaymath}
\end{lemma}
\begin{proof}
The case when $\varepsilon$ and $\mu$ are both concentrated in a
single degree $d$ is clear. The general case follows by induction
taking into account Definition \ref{def:1}.
\end{proof}
Assume now that $f\colon X\to Y$ is a morphism of smooth complex
varieties and, for each complex $\Rd f_{\ast}\mathcal{F}_{i}$, we have
chosen a hermitian structure $\overline{\Rd f_{\ast} \mathcal{F}_{i}}=(\overline
E_{i}\dashrightarrow \Rd f_{\ast} \mathcal{F}_{i})$. Denote by $\overline {\Rd
f_{\ast}\varepsilon }$ this choice of metrics.
\begin{definition} \label{def:5} The family of hermitian structures
$\overline {\Rd f_{\ast}\varepsilon }$ defines an object $[\overline{\Rd
f_{\ast}\varepsilon }]\in \Ob \oDb(Y)$ that is determined
inductively by the condition
\begin{displaymath}
[\overline {\Rd f_{\ast}\varepsilon }]=\ocone (\overline {\Rd f_{\ast}
\mathcal{F}_{m}}[m],[\overline {\Rd f_{\ast}\sigma _{< m} \varepsilon} ]).
\end{displaymath}
\end{definition}
We remark that the notation $\overline {\Rd
f_{\ast}\varepsilon }$ means that the hermitian structure is
chosen after taking the direct image, and is not determined by the
hermitian structure on $\overline \varepsilon $.
If $\overline{F}$ is a hermitian vector bundle on $Y$, then the
object $[\overline{\Rd f_{\ast}(\varepsilon\otimes f^{\ast} F)}]$ (whose
definition is obvious)
satisfies
\begin{displaymath}
[\overline{\Rd f_{\ast}(\varepsilon\otimes f^{\ast} F)}]
=[\overline{\Rd f_{\ast}\varepsilon}]\otimes\overline{F}.
\end{displaymath}
Notice also that if $\varepsilon$ is an acyclic complex on $X$,
we have the class $[\overline {\Rd f_{\ast}\varepsilon}]\in \KA(Y)$.
Let $\varepsilon\to\mu$ be a morphism of bounded complexes of coherent
sheaves on $X$ and $f\colon X\to Y$ a morphism of smooth complex
varieties. Fix choices of metrics $\overline{\Rd f_{\ast}\varepsilon}$ and
$\overline{\Rd f_{\ast}\mu}$. Then there is an obvious choice of metrics on
$\Rd f_{\ast}\cone(\varepsilon,\mu)$, that we denote $\overline{\Rd
f_{\ast}\cone(\varepsilon,\mu)}$, and hence an object $[\overline{\Rd
f_{\ast}\cone(\varepsilon,\mu)}]$ in $\oDb(Y)$. On the other hand,
we also have the hermitian cone $\ocone([\overline{\Rd
f_{\ast}\varepsilon}],[\overline{\Rd f_{\ast}\mu}])$. Again both
definitions agree.
\begin{lemma}\label{lemm:3bis}
Let $\varepsilon\to\mu$ be a morphism of bounded complexes of
coherent sheaves on $X$ and $f\colon X\to Y$ a morphism of smooth
complex varieties. Assume that families of metrics $\overline{\Rd
f_{\ast}\varepsilon}$ and $\overline{\Rd f_{\ast}\mu}$ are chosen. Then there is a
tight isomorphism
\begin{displaymath}
\ocone([\overline{\Rd f_{\ast}\varepsilon}],[\overline{\Rd f_{\ast}\mu}])\cong [\overline{\Rd f_{\ast}\cone(\varepsilon,\mu)}].
\end{displaymath}
\end{lemma}
\begin{proof}
If $\varepsilon$ and $\mu$ are concentrated in a single degree $d$,
then the statement is obvious. The proof follows by induction and
Definition \ref{def:5}.
\end{proof}
The objects we have defined are compatible with short exact sequences,
in the sense of the following statement.
\begin{proposition} \label{prop:4} Consider a commutative diagram of
exact sequences of coherent sheaves on $X$
\begin{displaymath}
\xymatrix{
& & &0\ar[d] & &0\ar[d] & \\
&\mu' &0\ar[r] &\mathcal{F}_{m}'\ar[r]\ar[d] &\dots\ar[r] &\mathcal{F}_{l}'\ar[r]\ar[d] &0
\\
&\mu &0\ar[r] &\mathcal{F}_{m}\ar[r]\ar[d] &\dots\ar[r] &\mathcal{F}_{l}\ar[r]\ar[d] &0
\\
&\mu'' &0\ar[r] &\mathcal{F}_{m}''\ar[r]\ar[d] &\dots\ar[r] &\mathcal{F}_{l}''\ar[r]\ar[d] &0
\\
& & &0 & &0 & \\
& & &\xi_{m} &\dots &\xi_{l}. &
}
\end{displaymath}
Let $f\colon X\to Y$ be a morphism of smooth complex varieties and
choose hermitian structures on the sheaves $\mathcal{F}_{j}'$,
$\mathcal{F}_{j}$, $\mathcal{F}_{j}''$ and on the objects $\Rd
f_{\ast}\mathcal{F}_{j}'$, $\Rd f_{\ast}\mathcal{F}_{j}$ and $\Rd
f_{\ast}\mathcal{F}_{j}''$, $j=l,\dots, m$. Then the following
equalities hold in $\KA(X)$ and $\KA(Y)$, respectively:
\begin{align*}
&\sum_{j}(-1)^{j}[\overline{\xi}_{j}]=[\overline{\mu}']-[\overline{\mu}]+[\overline{\mu}''],\\
&\sum_{j}(-1)^{j}[\overline{\Rd f_{\ast}\xi}_{j}]=[\overline{\Rd
f_{\ast}\mu}']-[\overline{\Rd f_{\ast} \mu}]+[\overline{\Rd f_{\ast}\mu}''].
\end{align*}
\end{proposition}
\begin{proof}
The proposition follows inductively, taking into account Definitions
\ref{def:1} and \ref{def:5} and Theorem \ref{thm:10}~\ref{item:49}.
\end{proof}
\begin{corollary}\label{cor:5}
Let $\overline{\varepsilon}\to\overline{\mu} $ be a morphism of exact sequences
of metrized coherent sheaves. Let $f\colon X\to Y$ be a morphism of
smooth complex varieties and fix families of metrics $\overline{\Rd
f_{\ast}\varepsilon}$ and $\overline{\Rd f_{\ast}\mu}$. Then the following
equalities in $\KA(X)$ and $\KA(Y)$, respectively, hold
\begin{align}
&[\overline{\cone(\varepsilon,\mu)}]=[\overline{\mu}]-[\overline{\varepsilon}],\\
&[\overline{\Rd f_{\ast}\cone(\varepsilon,\mu)}]=[\overline{\Rd
f_{\ast}\mu}]-[\overline{\Rd f_{\ast}\varepsilon}].
\end{align}
\end{corollary}
\begin{proof}
The result readily follows from Lemmas \ref{lemm:3}, \ref{lemm:3bis}
and Proposition \ref{prop:4}.
\end{proof}
\noindent\textbf{Hermitian structures on cohomology.}
Let $\mathcal{F}$ be an object of $\Db(X)$ and denote by $\mathcal{H}$
its cohomology complex. Observe that $\mathcal{H}$ is a bounded
complex with 0 differentials. By the preceding discussion and because
$\KA(X)$ acts transitively on hermitian structures, giving a hermitian
structure on $\mathcal{H}$ amounts to giving hermitian structures on the
individual pieces $\mathcal{H}^{i}$. We show that these data determine
a natural hermitian structure on the complex
$\mathcal{F}$. This situation will arise when considering cohomology
sheaves endowed with $L^2$ metric structures. The construction is
recursive. If the cohomology complex is trivial, then we endow
$\mathcal{F}$ with the trivial hermitian structure. Otherwise, let
$\mathcal{H}^{m}$ be the highest non-zero cohomology sheaf. The
canonical filtration $\tau ^{\leq m}$ is given by
\begin{displaymath}
\tau^{\leq
m}\mathcal{F}\colon\quad\dots\to\mathcal{F}^{m-2}\to\mathcal{F}^{m-1}
\to\ker(\dd^{m})\to 0.
\end{displaymath}
By the condition on the highest non-vanishing cohomology sheaf, the
natural inclusion is a quasi-isomorphism:
\begin{equation}\label{eq:her_coh_1}
\tau^{\leq m}\mathcal{F}\overset{\sim}{\longrightarrow}\mathcal{F}.
\end{equation}
We also introduce the subcomplex
\begin{displaymath}
\widetilde{\mathcal{F}}\colon\quad\dots\to\mathcal{F}^{m-2}\to
\mathcal{F}^{m-1}\to{\im}(\dd^{m-1})\to 0.
\end{displaymath}
Observe that the cohomology complex of $\widetilde{\mathcal{F}}$ is
the b\^ete truncation $\mathcal{H}/\sigma^{\ge m}\mathcal{H}$. By induction,
$\widetilde{\mathcal{F}}$ carries an induced hermitian
structure. We also have an exact sequence
\begin{equation}\label{eq:her_cor_2}
0\to\widetilde{\mathcal{F}}\to\tau^{\leq m}\mathcal{F}\to\mathcal{H}^{m}[-m]\to 0.
\end{equation}
Taking into account the quasi-isomorphism \eqref{eq:her_coh_1} and the
exact sequence \eqref{eq:her_cor_2}, we construct a natural
commutative diagram of distinguished triangles in $\Db(X)$
\begin{displaymath}
\xymatrix{
\mathcal{H}^{m}[-m-1]\ar@{-->}[r]^{\hspace{0.6cm} 0}\ar[d]^{\Id} &\widetilde{\mathcal{F}}\ar@{-->}[r]\ar[d]^{\Id}
&\mathcal{F}\ar@{-->}[r]\ar@{-->}[d]^{\sim} &\mathcal{H}^{m}[-m]\ar[d]^{\Id}\\
\mathcal{H}^{m}[-m-1]\ar@{-->}[r]^{\hspace{0.6cm} 0} &\widetilde{\mathcal{F}}\ar@{-->}[r]
&\cone(\mathcal{H}^{m}[-m-1],\widetilde{\mathcal{F}})\ar@{-->}[r] &\mathcal{H}^{m}[-m].
}
\end{displaymath}
By the hermitian cone construction and Theorem \ref{thm:6bis}, we see
that hermitian structures on $\widetilde{\mathcal{F}}$ and
$\mathcal{H}^{m}$ induce a well defined hermitian structure on
$\mathcal{F}$.
\begin{definition}\label{def:her_coh}
Let $\mathcal{F}$ be an object of $\Db(X)$ with cohomology complex
$\mathcal{H}$. Assume the pieces $\mathcal{H}^{i}$ are endowed with
hermitian structures. The hermitian structure on $\mathcal{F}$
constructed above will be called the \emph{hermitian structure induced by
the hermitian structure on the cohomology complex} and will be
denoted $(\mathcal{F},\overline{\mathcal{H}})$.
\end{definition}
The following proposition is a direct consequence of the definitions.
\begin{proposition}\label{prop:her_coh}
Let $\varphi\colon\mathcal{F}_{1}\dashrightarrow\mathcal{F}_{2}$ be an
isomorphism in $\Db(X)$. Assume the pieces of the cohomology
complexes $\mathcal{H}_{1}$, $\mathcal{H}_{2}$ of $\mathcal{F}_{1}$,
$\mathcal{F}_{2}$ are endowed with hermitian structures. If the
induced isomorphism in cohomology
$\varphi_{\ast}\colon\mathcal{H}_{1}\to\mathcal{H}_{2}$ is tight,
then $\varphi$ is tight for the induced hermitian structures on
$\mathcal{F}_{1}$ and $\mathcal{F}_{2}$.
\end{proposition}
$\square$
\section{Bott-Chern classes for
isomorphisms and distinguished triangles in $\oDb(X)$}
\label{sec:bott-chern-classes}
In this section we will define Bott-Chern classes for
isomorphisms and distinguished triangles in $\oDb(X)$.
The natural context where one can define the Bott-Chern
classes is that of Deligne
complexes. For details about Deligne complexes the reader is referred
to \cite{Burgos:CDB} and \cite{BurgosKramerKuehn:cacg}. In this
section we will use the same notations as in
\cite{BurgosLitcanu:SingularBC} \S1. In particular,
the \emph{Deligne algebra of differential forms} on $X$
is denoted by
\begin{math}
\mathcal{D}^{\ast}(X,\ast),
\end{math}
and we use the notation
\begin{displaymath}
\widetilde
{\mathcal{D}}^{n}(X,p)=\left. \mathcal{D}^{n}(X,p)\right/
\dd_{\mathcal{D}}\mathcal{D}^{n-1}(X,p).
\end{displaymath}
When characterizing Bott-Chern classes axiomatically, the
basic tool for exploiting the functoriality axiom is a
deformation parametrized by ${\mathbb P}^{1}$. This argument leads to the
following lemma, which will be used to prove the uniqueness of the
Bott-Chern classes introduced in this section.
\begin{lemma} \label{lemm:1}
Let $X$ be a smooth complex variety. Let
$\widetilde \varphi$ be an assignment that, to each smooth morphism
of complex
varieties $g\colon X'\to X$ and each acyclic complex $\overline A$ of
hermitian vector bundles on $X'$
assigns a class
\begin{displaymath}
\widetilde \varphi(\overline A)\in \bigoplus
_{n,p}\widetilde{\mathcal{D}}^{n}(X',p)
\end{displaymath}
fulfilling the following properties:
\begin{enumerate}
\item (Differential equation) the equality
\begin{math}
\dd_{\mathcal{D}}\widetilde \varphi(\overline A)=0
\end{math}
holds;
\item (Functoriality) for each morphism of smooth complex varieties
$h\colon X''\to X'$ with $g\circ h $ smooth, we have
\begin{math}
h^{\ast} \widetilde \varphi(\overline A)=\widetilde \varphi(h^{\ast}\overline A)
\end{math};
\item (Normalization) if $\overline A$ is orthogonally split, then $\widetilde
\varphi(\overline A)=0$.
\end{enumerate}
Then $\widetilde \varphi=0$.
\end{lemma}
\begin{proof}
The argument of the proof of \cite[Thm.
2.3]{BurgosLitcanu:SingularBC} applies \emph{mutatis mutandis} to
the present situation.
\end{proof}
\begin{definition} \label{def:24}
An \emph{additive genus in Deligne cohomology} is a
characteristic class $\varphi$ for vector bundles of any rank in the sense of
\cite[Def. 1.5]{BurgosLitcanu:SingularBC} that satisfies the
equation
\begin{equation}
\label{eq:75}
\varphi(E_{1}\oplus E_{2})=\varphi(E_{1})+\varphi(E_{2}).
\end{equation}
\end{definition}
Let ${\mathbb D}$ denote the base ring for Deligne cohomology (see
\cite{BurgosLitcanu:SingularBC} before Definition 1.5). A consequence
of \cite[Thm. 1.8]{BurgosLitcanu:SingularBC} is that there is a
bijection between the set of additive genera in Deligne cohomology and
the set of power series in one variable ${\mathbb D}[[x]]$. To each power
series $\varphi \in {\mathbb D}[[x]]$ there corresponds a unique additive genus
such that
\begin{displaymath}
\varphi(L)=\varphi (c_{1}(L))
\end{displaymath}
for every line bundle $L$.
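For instance, both the first Chern class and the Chern character are
additive genera; under the above correspondence they are associated to the
power series
\begin{displaymath}
c_{1}\longleftrightarrow x,\qquad \ch \longleftrightarrow e^{x},
\end{displaymath}
since $c_{1}(L)$ and $\ch(L)=\exp(c_{1}(L))$ are their respective values on
a line bundle $L$.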
\begin{definition} \label{def:25}
A \emph{real additive genus} is an additive genus such that the
corresponding power series belongs to ${\mathbb R}[[x]]$.
\end{definition}
\begin{remark}\label{rem:7}
It is clear that, if $\varphi$ is a real additive genus, then for each
vector bundle $E$ we have
\begin{displaymath}
\varphi(E)\in \bigoplus_{p}H_{\mathcal{D}}^{2p}(X,{\mathbb R}(p)).
\end{displaymath}
\end{remark}
We now focus on additive genera; for instance, the Chern character is a
real additive genus. Let
$\varphi $ be such a genus. Using Chern-Weil theory, to each hermitian
vector bundle
$\overline E$
on $X$ we can attach a closed characteristic form
\begin{displaymath}
\varphi (\overline E)\in \bigoplus_{n,p}\mathcal{D}^{n}(X,p).
\end{displaymath}
If $\overline E$ is an object of
$\oV(X)$, then
we define
$$\varphi (\overline E)=\sum_{i}
(-1)^{i} \varphi (\overline E^{i}).$$
If $\overline E$ is acyclic, following \cite[Sec.
2]{BurgosLitcanu:SingularBC},
we associate to it a Bott-Chern characteristic class
\begin{displaymath}
\widetilde \varphi (\overline E)\in
\bigoplus_{n,p}\widetilde {\mathcal{D}}^{n-1}(X,p)
\end{displaymath}
that satisfies the differential equation
\begin{displaymath}
\dd_{\mathcal{D}}\widetilde \varphi (\overline E)=\varphi
(\overline E).
\end{displaymath}
In fact, \cite[Thm. 2.3]{BurgosLitcanu:SingularBC} for additive
genera can be restated as follows.
\begin{proposition} \label{prop:9}
Let $\varphi $ be an additive genus. Then there
is a unique group homomorphism
\begin{displaymath}
\widetilde \varphi \colon \KA(X)\to \bigoplus_{n,p}\widetilde
{\mathcal{D}}^{n-1}(X,p)
\end{displaymath}
satisfying the properties:
\begin{enumerate}
\item (Differential equation) $\dd_{\mathcal{D}}
\widetilde\varphi(\overline E)= \varphi(\overline E).$
\item (Functoriality) If $f\colon X\to Y$ is a morphism of smooth
complex varieties, then
$
\widetilde{\varphi}(\Ld f^{\ast}(\overline{E}))=f^{\ast}(\widetilde{\varphi}(\overline{E})).
$
\end{enumerate}
\end{proposition}
\begin{proof}
For the uniqueness, we observe that, if $\widetilde \varphi$ is a group
homomorphism, then $\widetilde\varphi (\overline 0)=0$. Hence, if $\overline E$
is an orthogonally split complex, then it is meager and therefore
$\widetilde\varphi (\overline E)=0$. Thus, the assignment that, to each bounded
acyclic complex $\overline E$, associates the class $\widetilde
\varphi([\overline E])$ satisfies the
conditions of \cite[Thm. 2.3]{BurgosLitcanu:SingularBC}, hence is
unique. For the existence, we note that
Bott-Chern classes for additive genera satisfy the
hypothesis of Theorem \ref{thm:8}. Hence the result follows.
\end{proof}
\begin{remark} \label{rem:6}
If
\begin{displaymath}
\overline{\varepsilon }:\quad
0\to \overline{\mathcal{F}}_{m}\to \dots \to
\overline{\mathcal{F}}_{l}\to 0
\end{displaymath}
is an acyclic complex of coherent sheaves on $X$ provided with
hermitian structures
$\overline {\mathcal{F}}_{i}=(\mathcal{F}_{i},\overline F_{i}\dashrightarrow
\mathcal{F}_{i})$, by Definition \ref{def:1} we have a class $[\overline
\varepsilon ]\in \KA(X)$, hence a class $\widetilde \varphi([\overline
\varepsilon ])$. In the case of the Chern character, in \cite[Thm.
2.24]{BurgosLitcanu:SingularBC}, a class $\widetilde
{\ch}(\overline \varepsilon )$ is defined. It follows from \cite[Thm.
2.24]{BurgosLitcanu:SingularBC} that both classes agree. That is,
$\widetilde{\ch}([\overline \varepsilon ])=\widetilde{\ch}(\overline \varepsilon
)$. For this reason we will denote $\widetilde \varphi([\overline
\varepsilon ])$ by $\widetilde \varphi(\overline
\varepsilon)$.
\end{remark}
\begin{definition}\label{definition:forms_complexes}
Let
$\overline{\mathcal{F}}=(\overline E\overset{\sim}{\dashrightarrow}\mathcal{F})$
be an object of $\oDb(X)$. Let $\varphi$ denote
an additive genus. We define the form
\begin{equation*}
\varphi(\overline{\mathcal{F}})=\varphi(\overline E)\in \bigoplus_{n,p}\mathcal{D}^{n}(X,p)
\end{equation*}
and the class
\begin{equation*}
\varphi(\mathcal{F})=[\varphi(\overline E)]\in
\bigoplus_{n,p}H_{\mathcal{D}}^{n}(X,{\mathbb R}(p)).
\end{equation*}
Note that the form $\varphi(\overline{\mathcal{F}})$ only depends on the hermitian structure
and not on a particular representative thanks to Proposition
\ref{prop:7} and Proposition \ref{prop:9}. The class
$\varphi(\mathcal{F})$ only depends on the object $\mathcal{F}$ and
not on the hermitian structure.
\end{definition}
\begin{remark}
The reason to restrict to additive genera when working with the
derived category is now clear: there is no
canonical way to attach a rank to $\oplus_{i\even}\mathcal{F}^{i}$
(respectively $\oplus_{i\odd} \mathcal{F}^{i}$). The naive choice
$\rk(\oplus_{i\even} E^{i})$ (respectively $\rk(\oplus_{i\odd} E^{i})$)
does depend on $E\dashrightarrow\mathcal{F}$. Thus we cannot define
Bott-Chern classes by the general rule from
\cite{BurgosLitcanu:SingularBC}. The case of a multiplicative genus
such as the Todd genus will be considered later.
\end{remark}
Next we will construct Bott-Chern classes for isomorphisms
in $\oDb(X)$.
\begin{definition}
Let $f\colon \overline{\mathcal{F}}\dashrightarrow\overline{\mathcal{G}}$
be a morphism in $\oDb(X)$ and $\varphi$ an additive
genus. We define the differential form
\begin{equation*}
\varphi(f)=\varphi(\overline{\mathcal{G}})-
\varphi(\overline{\mathcal{F}}).
\end{equation*}
\end{definition}
\begin{theorem}\label{theorem:ch_tilde_qiso}
Let $\varphi$ be an additive genus. There is a unique way to attach to
every isomorphism in $\oDb(X)$
\begin{math}
f\colon (\overline{F}\dashrightarrow\mathcal{F})
\overset{\sim}{\dashrightarrow}
(\overline{G}\dashrightarrow\mathcal{G})
\end{math}
a Bott-Chern class
\begin{displaymath}
\widetilde{\varphi}(f)\in\bigoplus_{n,p}\widetilde{\mathcal{D}}^{n-1}(X,p)
\end{displaymath}
such that the following axioms are satisfied:
\begin{enumerate}
\item (Differential equation)
\begin{math}
\dd_{\mathcal{D}}\widetilde{\varphi}(f)=\varphi(f).
\end{math}
\item (Functoriality) If $g\colon X'\rightarrow X$ is a morphism of smooth
noetherian schemes over ${\mathbb C}$, then
\begin{displaymath}
\widetilde{\varphi}(\Ld g^{\ast}(f))=g^{\ast}(\widetilde{\varphi}(f)).
\end{displaymath}
\item (Normalization) If $f$ is a tight isomorphism, then
\begin{math}
\widetilde{\varphi}(f)=0.
\end{math}
\end{enumerate}
\end{theorem}
\begin{proof}
For the existence we define
\begin{equation}
\label{eq:56}
\widetilde \varphi(f)=\widetilde \varphi([f]),
\end{equation}
where $[f]\in\KA(X)$ is the class of $f$ given by equation
\eqref{eq:57}.
That $\widetilde \varphi$ satisfies the axioms
follows from Proposition \ref{prop:9} and Theorem
\ref{thm:9}.
We now focus on the uniqueness. Assume such a theory
$f\mapsto\widetilde{\varphi}_{0}(f)$ exists. Fix $f$ as in the statement.
Since $\widetilde \varphi_{0}$ is well defined, by replacing $\overline F$
by one that is tightly related, we may assume that $f$ is
realized by a morphism of complexes
\begin{displaymath}
f\colon \overline F\longrightarrow \overline G.
\end{displaymath}
We factorize $f$ as
\begin{displaymath}
\overline F\overset{\alpha }{\longrightarrow }
\overline G\oplus \ocone(\overline F,\overline G)[-1]
\overset{\beta }{\longrightarrow}
\overline G,
\end{displaymath}
where both arrows are zero on the second factor of the middle
complex. Since $\alpha $ is a tight morphism and $\ocone(\overline F,\overline
G)[-1]$ is acyclic, we are reduced to the case when $\overline F=\overline
G\oplus \overline A$, with $\overline A$ an acyclic complex and $f$ is the
projection onto the first factor.
For each smooth
morphism $g\colon X'\to X$ and each acyclic complex of vector bundles
$\overline E$ on $X'$, we denote
\begin{displaymath}
\widetilde \varphi_{1}(\overline E)=\widetilde \varphi_{0} (g^{\ast}\overline G\oplus \overline
E\to g^{\ast}\overline G)+\widetilde \varphi(\overline E),
\end{displaymath}
where $\widetilde \varphi$ is the usual Bott-Chern form for acyclic
complexes of hermitian vector bundles associated to
$\varphi$.
Then $\widetilde \varphi_{1}$ satisfies the
hypothesis of Lemma \ref{lemm:1}, so $\widetilde \varphi_{1}=0$. Therefore
\begin{math}
\widetilde \varphi_{0}(f)=-\widetilde \varphi(\overline A),
\end{math}
which shows that $\widetilde \varphi_{0}$ is uniquely determined.
\end{proof}
\begin{proposition}\label{proposition:ch_tilde_comp}
Let
$f\colon \overline{\mathcal{F}}\dashrightarrow\overline{\mathcal{G}}$
and $g\colon \overline{\mathcal{G}}\dashrightarrow
\overline{\mathcal{H}}$ be two isomorphisms in $\oDb(X)$. Then:
\begin{displaymath}
\widetilde{\varphi}(g\circ f)=
\widetilde{\varphi}(g)+\widetilde{\varphi}(f).
\end{displaymath}
In particular, $\widetilde{\varphi}(f^{-1})=-\widetilde{\varphi}(f)$.
\end{proposition}
\begin{proof}
The statement follows from Theorem \ref{thm:10}~\ref{item:16}.
\end{proof}
The Bott-Chern classes behave well under shift.
\begin{proposition}
Let $f\colon \overline{\mathcal{F}}\dashrightarrow \overline{\mathcal{G}}$ be an
isomorphism in $\oDb(X)$. Let $f[i]\colon \overline{\mathcal{F}}[i]\dashrightarrow
\overline{\mathcal{G}}[i]$ be the shifted isomorphism. Then
\begin{displaymath}
(-1)^{i}\widetilde{\varphi}(f[i])=\widetilde{\varphi}(f).
\end{displaymath}
\end{proposition}
\begin{proof}
The assignment $f\mapsto (-1)^{i}\widetilde{\varphi}(f[i])$
satisfies the characterizing properties of Theorem
\ref{theorem:ch_tilde_qiso}. Hence it agrees with $\widetilde
\varphi$.
\end{proof}
The following notation will be sometimes used.
\begin{notation}
Let $\mathcal{F}$ be an object of $\text{{\rm cur}}b(X)$ and consider two choices of
hermitian structures $\overline{\mathcal{F}}$ and $\overline{\mathcal{F}}'$. Then
we write
\begin{displaymath}
\widetilde{\varphi}(\overline{\mathcal{F}},\overline{\mathcal{F}}')=
\widetilde{\varphi}(\overline{\mathcal{F}}
\overset{\Id}{\dashrightarrow}\overline{\mathcal{F}}').
\end{displaymath}
Thus $\dd_{\mathcal{D}}\widetilde{\varphi}
(\overline{\mathcal{F}},\overline{\mathcal{F}}') =
\varphi(\overline{\mathcal{F}}')-\varphi(\overline{\mathcal{F}}).
$
\end{notation}
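With this notation, Proposition \ref{proposition:ch_tilde_comp}, applied to
the identity morphisms between three hermitian structures
$\overline{\mathcal{F}}$, $\overline{\mathcal{F}}'$, $\overline{\mathcal{F}}''$ on the same
object $\mathcal{F}$, gives the cocycle relation
\begin{displaymath}
\widetilde{\varphi}(\overline{\mathcal{F}},\overline{\mathcal{F}}'')=
\widetilde{\varphi}(\overline{\mathcal{F}},\overline{\mathcal{F}}')+
\widetilde{\varphi}(\overline{\mathcal{F}}',\overline{\mathcal{F}}''),
\end{displaymath}
and in particular
$\widetilde{\varphi}(\overline{\mathcal{F}}',\overline{\mathcal{F}})=
-\widetilde{\varphi}(\overline{\mathcal{F}},\overline{\mathcal{F}}')$.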
\begin{example}\label{exm:1}
Let $\overline
{\mathcal{F}}=(\mathcal{F},\overline E\dashrightarrow\mathcal{F})$
be an object of $\oDb(X)$. Let $\mathcal{H}^{i}$ denote the
cohomology sheaves of $\mathcal{F}$ and assume that we have chosen
hermitian structures $\overline{\mathcal{H}}^{i} $ of each
$\mathcal{H}^{i}$. In the case when the sheaves $\mathcal{H}^{i}$
are vector bundles and the hermitian structures are hermitian
metrics, X. Ma, in the paper \cite{Ma:MR1765553}, has associated to
these data a Bott-Chern class, that we
denote $M(\overline {\mathcal{F}},\overline {\mathcal{H}})$. By
the characterization given by Ma of $M(\overline {\mathcal{F}},\overline
{\mathcal{H}})$, it is immediate that
\begin{displaymath}
M(\overline {\mathcal{F}},\overline {\mathcal{H}})=
\cht(\overline {\mathcal{F}},(\mathcal{F},\overline {\mathcal{H}})),
\end{displaymath}
where $(\mathcal{F},\overline {\mathcal{H}})$ is as in Definition
\ref{def:her_coh}.
\end{example}
Our next aim is to construct Bott-Chern classes for
distinguished triangles.
\begin{definition}
Let $\overline \tau $ be a distinguished triangle in $\oDb(X)$,
\begin{displaymath}
\overline{\tau}\colon\
\overline{\mathcal{F}}\overset{u}{\dashrightarrow}\overline{\mathcal{G}}
\overset{v}{\dashrightarrow}\overline{\mathcal{H}}
\overset{w}{\dashrightarrow}\overline{\mathcal{F}}[1]
\overset{u}{\dashrightarrow}\dots
\end{displaymath}
For an additive genus
$\varphi$, we attach the differential form
\begin{displaymath}
\varphi(\overline{\tau})=\varphi(\overline{\mathcal{F}})-
\varphi(\overline{\mathcal{G}})+\varphi(\overline{\mathcal{H}}).
\end{displaymath}
\end{definition}
Notice that if $\overline \tau$ is tightly distinguished, then $\varphi(\overline
\tau )=0$. Moreover, for any distinguished triangle $\overline \tau $ as above, the
rotated triangle
\begin{displaymath}
\overline{\tau}'\colon\
\overline{\mathcal{G}}\overset{v}{\dashrightarrow}\overline{\mathcal{H}}
\overset{w}{\dashrightarrow}\overline{\mathcal{F}}[1]
\overset{-u[1]}{\dashrightarrow}\overline{\mathcal{G}}[1]
\overset{v[1]}{\dashrightarrow}\dots
\end{displaymath}
satisfies
\begin{math}
\varphi(\overline \tau ')=-\varphi(\overline \tau).
\end{math}
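Indeed, since $\varphi(\overline{\mathcal{F}}[1])=-\varphi(\overline{\mathcal{F}})$
(the shift changes the sign of the alternating sum defining the
characteristic form of a complex), applying the definition to the rotated
triangle gives
\begin{displaymath}
\varphi(\overline{\tau}')=\varphi(\overline{\mathcal{G}})-\varphi(\overline{\mathcal{H}})
+\varphi(\overline{\mathcal{F}}[1])
=-\bigl(\varphi(\overline{\mathcal{F}})-\varphi(\overline{\mathcal{G}})
+\varphi(\overline{\mathcal{H}})\bigr)=-\varphi(\overline{\tau}).
\end{displaymath}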
\begin{theorem}\label{thm:16}
Let $\varphi$ be an additive genus. There is a unique way to attach to
every distinguished triangle in $\oDb(X)$
\begin{displaymath}
\overline{\tau}:\quad
\overline{\mathcal{F}}\overset{u}{\dashrightarrow}\overline{\mathcal{G}}
\overset{v}{\dashrightarrow}\overline{\mathcal{H}}
\overset{w}{\dashrightarrow}\overline{\mathcal{F}}[1]
\overset{u[1]}{\dashrightarrow}\dots
\end{displaymath}
a Bott-Chern class
\begin{displaymath}
\widetilde{\varphi}(\overline{\tau})
\in\bigoplus_{n,p}\widetilde{\mathcal{D}}^{n-1}(X,p)
\end{displaymath}
such that the following axioms are satisfied:
\begin{enumerate}
\item (Differential equation)
\begin{math}
\dd_{\mathcal{D}}\widetilde{\varphi}(\overline{\tau})=
\varphi(\overline{\tau}).
\end{math}
\item (Functoriality) If $g\colon X^{\prime}\rightarrow X$ is a morphism of smooth noetherian schemes over ${\mathbb C}$, then
\begin{displaymath}
\widetilde{\varphi}(\Ld g^{\ast}(\overline{\tau}))=g^{\ast}\widetilde{\varphi}(\overline{\tau}).
\end{displaymath}
\item (Normalization) If $\overline{\tau}$ is tightly distinguished, then
\begin{math}
\widetilde{\varphi}(\overline{\tau})=0.
\end{math}
\end{enumerate}
\end{theorem}
\begin{proof}
To show the existence we write
\begin{equation}
\label{eq:58}
\widetilde \varphi(\overline \tau)=\widetilde \varphi([\overline \tau ]).
\end{equation}
Theorem \ref{thm:10} implies that it satisfies the axioms.
To prove the uniqueness, observe that, by replacing representatives of
the hermitian structures by tightly related ones, we may assume that
the distinguished triangle is represented by
\begin{displaymath}
\overline F \longrightarrow \overline G \longrightarrow \ocone(\overline F,\overline
G)\oplus \overline K \longrightarrow \overline F[1],
\end{displaymath}
with $\overline K$ acyclic.
Then Lemma \ref{lemm:1} shows that the axioms imply
\begin{math}
\widetilde \varphi(\overline \tau )=\widetilde \varphi(\overline K).
\end{math}
\end{proof}
\begin{remark}\label{rem:1} The normalization axiom can be replaced by
the apparently weaker condition that $\widetilde \varphi(\overline \tau
)=0$ for all distinguished triangles of the form
\begin{displaymath}
\overline {\mathcal{F}}\dashrightarrow \overline {\mathcal{F}}\overset{\perp}{\oplus}
\overline {\mathcal{G}}\dashrightarrow \overline {\mathcal{G}} \dashrightarrow
\end{displaymath}
where the maps are the natural inclusion and projection.
\end{remark}
Theorem
\ref{thm:10}~\ref{item:17}--\ref{item:49} can be easily translated to Bott-Chern
classes.
\section{Multiplicative genera, the Todd genus and the category
$\oSm_{\ast/{\mathbb C}}$}
\label{sec:multiplicative-genera}
Let $\psi $ be a multiplicative genus whose piece of degree
zero is $\psi ^{0}=1$, and set
\begin{displaymath}
\varphi=\log(\psi ).
\end{displaymath}
This is a well defined additive genus because, by the condition above,
the power series $\log (\psi )$ contains only finitely many terms in each degree.
If $\overline \theta $ is either a hermitian vector bundle, a complex
of hermitian vector bundles, a morphism in $\oDb(X)$ or a
distinguished triangle in $\oDb(X)$ we can write
\begin{displaymath}
\psi (\overline \theta )=\exp(\varphi(\overline \theta )).
\end{displaymath}
All the results of the previous sections can be translated to the
multiplicative genus $\psi $. In particular, if $\overline \theta $ is an
acyclic complex of hermitian vector bundles, an isomorphism in
$\oDb(X)$ or a distinguished triangle in $\oDb(X)$, we define a
Bott-Chern class
\begin{displaymath}
\widetilde \psi_{m} (\overline \theta)=
\frac{\exp(\varphi(\overline \theta))-1}{\varphi(\overline \theta)}
\widetilde\varphi(\overline \theta).
\end{displaymath}
\begin{theorem}\label{thm:11} The characteristic class $\widetilde
\psi_{m} (\overline \theta)$ satisfies:
\begin{enumerate}
\item (Differential equation)
\begin{math}
\dd_{\mathcal{D}}\widetilde{\psi}_{m}(\overline{\theta
})=\psi(\overline{\theta })-1.
\end{math}
\item (Functoriality) If $g\colon X^{\prime}\rightarrow X$ is a morphism
of smooth noetherian schemes over ${\mathbb C}$, then
\begin{displaymath}
\widetilde{\psi}_{m}(\Ld g^{\ast}(\overline{\theta }))
=g^{\ast}\widetilde{\psi}_{m}(\overline{\theta }).
\end{displaymath}
\item (Normalization) If $\overline{\theta }$ is either a meager
complex, a tight isomorphism or a tightly distinguished triangle,
then
\begin{math}
\widetilde{\psi}_{m}(\overline{\theta })=0.
\end{math}
\end{enumerate}
Moreover $\widetilde \psi_{m} $ is uniquely characterized by these
properties.
\end{theorem}
\begin{remark}
For an acyclic complex of vector bundles $\overline E$, using the general
procedure for arbitrary symmetric power series, we can associate a
Bott-Chern class $\widetilde \psi (\overline E)$ (see for instance
\cite[Thm. 2.3]{BurgosLitcanu:SingularBC}) that satisfies the
differential equation
\begin{displaymath}
\dd_{\mathcal{D}} \widetilde \psi (\overline E)= \prod _{k \text{ even }}\psi (\overline
E^{k})-\prod _{k \text{ odd }}\psi (\overline
E^{k}),
\end{displaymath}
whereas $\widetilde \psi _{m}$ satisfies the differential equation
\begin{equation} \label{eq:62}
\dd_{\mathcal{D}} \widetilde \psi _{m}(\overline E)= \prod _{k}\psi (\overline
E^{k})^{(-1)^{k}}-1.
\end{equation}
In fact both Bott-Chern classes are related by
\begin{equation}\label{eq:76}
\widetilde \psi _{m}(\overline E)=
\widetilde \psi (\overline E)\prod_{k \text{ odd}}\psi (\overline E^{k})^{-1}.
\end{equation}
\end{remark}
The main example of a multiplicative genus with the above properties
is the Todd genus $\Td$. From now on we will treat only this
case. Following the above procedure, to the Todd genus we can
associate two Bott-Chern classes for acyclic complexes
of vector bundles: the one given by the
general theory, denoted by $\widetilde {\Td}$, and the one given by the
theory of multiplicative genera, denoted $\widetilde {\Td}_{m}$. Both
are related by the equation \eqref{eq:76}. Note however that, for
isomorphisms and distinguished triangles in $\oDb(X)$, we only
have the multiplicative version.
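For orientation, we recall the first terms of the underlying power series in one variable (a standard expansion, recorded here only for convenience):
\begin{displaymath}
\Td(x)=\frac{x}{1-e^{-x}}=1+\frac{x}{2}+\frac{x^{2}}{12}+\dots,
\qquad
\log \Td(x)=\frac{x}{2}-\frac{x^{2}}{24}+\dots,
\end{displaymath}
so the piece of degree zero of $\Td$ is indeed $1$ and $\varphi=\log(\Td)$ is a well defined additive genus.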
We now consider morphisms between smooth complex varieties and
relative hermitian structures.
\begin{definition}
Let $f\colon X\rightarrow Y$ be a morphism of smooth complex varieties. The
\textit{tangent complex} of $f$ is the complex
\begin{displaymath}
T_{f}\colon \quad 0\longrightarrow T_{X}
\overset{df}{\longrightarrow} f^{\ast}T_{Y}\longrightarrow 0
\end{displaymath}
where $T_{X}$ is placed in degree 0 and $f^{\ast}T_{Y}$ is placed in
degree 1. It defines an object $T_{f}\in \Ob \Db(X)$. A
\emph{relative hermitian structure on} $f$ is the choice of an object $\overline T_{f}\in
\oDb(X)$ over $T_{f}$.
\end{definition}
The
following particular situations are of special interest:
\begin{enumerate}
\item[--] suppose $f\colon X\hookrightarrow Y$ is a closed immersion. Let
$N_{X/Y}[-1]$ be the normal bundle to $X$ in $Y$, considered as a
complex concentrated in degree 1. By definition, there is a natural
quasi-isomorphism $p\colon T_{f}\overset{\sim}{\rightarrow} N_{X/Y}[-1]$ in
$\Cb(X)$, and hence an isomorphism $p^{-1}\colon N_{X/Y}[-1]\overset{\sim}
{\dashrightarrow} T_{f}$ in $\Db(X)$. Therefore, a hermitian metric $h$
on the vector bundle $N_{X/Y}$ naturally induces a hermitian
structure $p^{-1}\colon (N_{X/Y}[-1],h) \dashrightarrow T_{f}$ on $T_{f}$. Let
$\overline{T}_{f}$ be the corresponding object in $\oDb(X)$. Then we have
\begin{displaymath}
\Td(\overline{T}_{f})=\Td(N_{X/Y}[-1],h)=\Td(N_{X/Y},h)^{-1};
\end{displaymath}
\item[--] suppose $f\colon X\rightarrow Y$ is a smooth morphism. Let
$T_{X/Y}$ be the relative tangent bundle on $X$, considered as a
complex concentrated in degree $0$. By definition, there is a
natural quasi-isomorphism $\iota\colon T_{X/Y}\overset{\sim}{\rightarrow}
T_{f}$ in $\Cb(X)$. Any choice of hermitian metric $h$ on $T_{X/Y}$
naturally induces a hermitian structure
$\iota\colon (T_{X/Y},h)\dashrightarrow T_{f}$. If $\overline{T}_{f}$ denotes
the corresponding object in $\oDb(X)$, then we find
\begin{displaymath}
\Td(\overline{T}_{f})=\Td(T_{X/Y},h).
\end{displaymath}
\end{enumerate}
Let now $g\colon Y\rightarrow Z$ be another morphism of smooth
varieties over ${\mathbb C}$. The tangent complexes $T_{f}$, $T_{g}$ and
$T_{g\circ f}$ fit into a distinguished triangle in $\Db(X)$
\begin{displaymath}
\mathcal{T}\colon T_f\dashrightarrow T_{g\circ f}\dashrightarrow
\Ld f^{\ast}T_g\dashrightarrow T_f[1].
\end{displaymath}
\begin{definition}\label{def:16}
We denote by $\oSm_{\ast/{\mathbb C}}$ the following data:
(i) The class $\Ob\,\oSm_{\ast/{\mathbb C}}$ of smooth complex varieties.
(ii) For each $X,Y\in \Ob\,\oSm_{\ast/{\mathbb C}}$, a set of morphisms $\oSm_{\ast/{\mathbb C}}(X,Y)$ whose elements are
pairs $\overline f=(f,\overline T_{f})$, where $f\colon X\to Y$ is a projective morphism and $\overline T_{f}$
is a hermitian structure on $T_{f}$. When $\overline f$ is given we will
denote the hermitian structure by $T_{\overline f}$. A hermitian
structure on $T_{f}$ will also be called a hermitian structure on
$f$.
(iii) For each pair of morphisms $\overline f\colon X\to Y$
and $\overline g\colon Y\to Z$, the composition defined as
\begin{displaymath}
\overline g\circ \overline f=(g\circ f,\ocone(\Ld f^{\ast} T_{\overline g}[-1], T_{\overline f})).
\end{displaymath}
\end{definition}
We shall prove (Theorem \ref{thm:17}) that $\oSm_{\ast/{\mathbb C}}$ is a
category. Before this, we proceed with some examples emphasizing some
properties of the composition rule.
\begin{example}\label{exm:3}
Let $f\colon X\to Y$
and $g\colon Y\to Z$ be projective morphisms of smooth complex
varieties. Assume
that we have chosen hermitian metrics on the tangent vector bundles
$T_{X}$, $T_{Y}$ and $T_{Z}$. Denote by $\overline f$, $\overline g$ and
$\overline{g\circ f}$ the morphisms of $\oSm_{\ast/{\mathbb C}}$ determined by
these metrics. Then
\begin{displaymath}
\overline g\circ \overline f=\overline{g\circ f}.
\end{displaymath}
This is seen as follows. By the choice of metrics, there is a tight
isomorphism
\begin{displaymath}
\ocone(T_{\overline f},T_{\overline{g\circ f}})\to \Ld
f^{\ast}T_{\overline g}.
\end{displaymath}
Then the natural maps
\begin{displaymath}
T_{\overline g\circ \overline f}\to \ocone(\Ld
f^{\ast}T_{\overline g}[-1],T_{\overline f})\to
\ocone(\ocone(T_{\overline f},T_{\overline{g\circ f}})[-1],T_{\overline f})\to
T_{\overline{g\circ f}}
\end{displaymath}
are tight isomorphisms.
\end{example}
\begin{example} \label{exm:4}
Let $f\colon X\to Y$
and $g\colon Y\to Z$ be smooth projective morphisms of smooth complex
varieties. Choose hermitian metrics on the relative tangent
vector bundles
$T_{f}$, $T_{g}$ and $T_{g\circ f}$. Denote by $\overline f$, $\overline g$ and
$\overline{g\circ f}$ the morphisms of $\oSm_{\ast/{\mathbb C}}$ determined by
these metrics. There is a short exact sequence of hermitian vector
bundles
\begin{displaymath}
\overline \varepsilon \colon
0\longrightarrow \overline T_{f}
\longrightarrow \overline T_{g\circ f}
\longrightarrow f^{\ast} \overline T_{g}
\longrightarrow 0,
\end{displaymath}
that we consider as an acyclic complex declaring $f^{\ast} \overline T_{g}$
of degree $0$. The morphism $f^{\ast}T_{\overline g}[-1]\dashrightarrow T_{\overline f}$ is
represented by the diagram
\begin{displaymath}
\xymatrix{
& \ocone(T_{\overline f},T_{\overline{g\circ f}})[-1]
\ar[dl]_{\sim} \ar[rd] &\\
f^{\ast}T_{\overline g}[-1] && T_{\overline f}.
}
\end{displaymath}
Thus, by the definition of a composition we have
\begin{displaymath}
T_{\overline g \circ \overline f}=
\ocone(\ocone(T_{\overline f},T_{\overline{g\circ f}})[-1],f^{\ast}T_{\overline
g}[-1])[1]\oplus
\ocone(\ocone(T_{\overline f},T_{\overline{g\circ f}})[-1],T_{\overline f}).
\end{displaymath}
In general this hermitian structure is different from $T_{\overline{g\circ
f}}$.
\emph{Claim.} The equality of hermitian structures
\begin{equation}
\label{eq:84}
T_{\overline g \circ \overline f}=
T_{\overline{g\circ f}}+ [\overline \varepsilon ]
\end{equation}
holds.
\begin{proof}[Proof of the claim]
We have a commutative diagram of distinguished triangles
\begin{displaymath}
\xymatrix{
\overline{\varepsilon} &T_{\overline{f}}\ar[r]\ar[d]_{\Id} &T_{\overline{g\circ f}}\ar[r]\ar[d]
&f^{\ast}T_{\overline{g}}\ar[d]_{\Id}\ar@{-->}[r] &T_{\overline{f}}[1]\ar[d]_{\Id}\\
\overline{\tau} &T_{\overline{f}}\ar[r] &T_{\overline{g}\circ \overline{f}}\ar[r]
&f^{\ast}T_{\overline{g}}\ar@{-->}[r] &T_{\overline{f}}[1].
}
\end{displaymath}
By construction the triangle $\overline{\tau}$ is tightly distinguished,
hence $[\overline{\tau}]=0$. Therefore, according to Theorem \ref{thm:10}
\ref{item:48}, we have
\begin{displaymath}
[T_{\overline{g\circ f}}\rightarrow T_{\overline{g}\circ\overline{f}}]=[\overline{\varepsilon}].
\end{displaymath}
The claim follows.
\end{proof}
\end{example}
\begin{theorem}\label{thm:17}
$\oSm_{{\text{\rm l,ll,a}}t/{\mathbb C}}$ is a category.
\end{theorem}
\noindent\emph{Proof}
The only non-trivial fact to prove is the associativity of the composition, given by the
following lemma:
\begin{lemma}\label{lem:Sm_cat}
Let $\overline{f}:X\to Y$, $\overline{g}:Y\to Z$ and $\overline{h}:Z\to W$ be
projective morphisms together with hermitian structures. Then
$\overline{h}\circ(\overline{g}\circ\overline{f})=(\overline{h}\circ\overline{g})\circ\overline{f}$.
\end{lemma}
\begin{proof}
First of all we observe that if the hermitian structures on
$\overline{f}$, $\overline{g}$ and $\overline{h}$ come from fixed hermitian metrics on
$T_{X}$, $T_{Y}$, $T_{Z}$ and $T_{W}$, Example \ref{exm:3} ensures
that the proposition holds. For the general case, it is enough to
see that if the proposition holds for a fixed choice of hermitian
structures $\overline{f}$, $\overline{g}$, $\overline{h}$, and we change the metric on
$f$, $g$ or $h$, then the proposition holds for the new choice of
metrics. We treat, for instance, the case when we change the
hermitian structure on $g$, the proof of the other cases being analogous.
Denote by $\overline{g}'$ the new hermitian structure on
$g$. Then there exists a unique class $\varepsilon\in\KA(Y)$ such
that $T_{\overline{g}'}=T_{\overline{g}}+\varepsilon$. According to the
definitions, we have
\begin{displaymath}
T_{\overline{h}\circ(\overline{g}'\circ\overline{f})}=
\ocone((g\circ
f)^{\ast}T_{\overline{h}}[-1],\ocone(f^{\ast}(T_{\overline{g}}+\varepsilon)[-1],T_{\overline{f}}))
=T_{\overline{h}\circ(\overline{g}\circ\overline{f})}+f^{\ast}\varepsilon.
\end{displaymath}
Similarly, we find
\begin{displaymath}
T_{(\overline{h}\circ\overline{g}')\circ\overline{f}}=
\ocone(f^{\ast}\ocone(g^{\ast}T_{\overline{h}}[-1],T_{\overline{g}})[-1]+
f^{\ast}(-\varepsilon),
T_{\overline{f}})
=T_{(\overline{h}\circ\overline{g})\circ\overline{f}}+f^{\ast}\varepsilon.
\end{displaymath}
By assumption,
$T_{\overline{h}\circ(\overline{g}\circ\overline{f})}=T_{(\overline{h}\circ\overline{g})\circ\overline{f}}$. Hence
the relations above show
\begin{displaymath}
T_{\overline{h}\circ(\overline{g}'\circ\overline{f})}=T_{(\overline{h}\circ\overline{g}')\circ\overline{f}}.
\end{displaymath}
This concludes the proofs of Lemma \ref{lem:Sm_cat} and of Theorem \ref{thm:17}.
\end{proof}
Let $f\colon X\to Y$ and $g\colon Y\to Z$ be projective morphisms of
smooth complex varieties. By the definition of composition, hermitian
structures on $f$ and $g$ determine a hermitian structure on $g\circ
f$. Conversely we have the following result.
\begin{lemma}\label{lemm:4}
Let $\overline {g}$ and $\overline {g\circ f}$ be hermitian structures on $g$
and $g\circ f$. Then there is a unique hermitian structure $\overline f$ on
$f$ such that
\begin{equation}
\label{eq:85}
\overline{g\circ f}=\overline g\circ \overline f.
\end{equation}
\end{lemma}
\begin{proof}
We have the distinguished triangle
\begin{displaymath}
T_{f}\dashrightarrow T_{g\circ f}\dashrightarrow f^{\ast}T_{g}\dashrightarrow T_{f}[1].
\end{displaymath}
The unique hermitian structure that satisfies equation
\eqref{eq:85} is $\ocone(T_{\overline{g\circ
f}},f^{\ast}T_{\overline{g}})[-1]$.
\end{proof}
\begin{remark}
By contrast with the preceding result, it is not true in general that
hermitian structures $\overline f$ and $\overline{g\circ f}$ determine a
unique hermitian structure $\overline g$ that satisfies equation
\eqref{eq:85}. For instance, if $X=\emptyset$, then any hermitian
structure on $g$ will satisfy this equation.
\end{remark}
If $\Sm_{\ast/{\mathbb C}}$ denotes the category of smooth complex varieties and
projective morphisms and $\mathfrak{F}\colon \oSm_{\ast/{\mathbb C}}\to
\Sm_{\ast/{\mathbb C}}$ is the forgetful functor, for any object $X$ we have
that
\begin{align*}
\Ob \mathfrak{F}^{-1}(X)&=\{X\},\\
\Hom_{\mathfrak{F}^{-1}(X)}(X,X)&=\KA(X).
\end{align*}
To any arrow $\overline f\colon X\to Y$ in $\oSm_{\ast/{\mathbb C}}$ we associate a
Todd form
\begin{equation}\label{eq:1}
\Td(\overline f):=\Td(T_{\overline f})\in \bigoplus_{p}\mathcal{D}^{2p}(X,p).
\end{equation}
The following simple properties of $\Td(\overline f)$ follow directly from
the definitions.
\begin{proposition}
\begin{enumerate}
\item Let $\overline f\colon X\to Y$ and $\overline{g}\colon Y\to Z$ be
morphisms in $\oSm_{\ast/{\mathbb C}}$. Then
\begin{displaymath}
\Td(\overline g\circ\overline f)=f^{\ast}\Td(\overline{g})\bullet\Td(\overline{f}).
\end{displaymath}
\item Let $\overline f,\overline f'\colon X\to Y$ be two morphisms in $\oSm_{\ast/{\mathbb C}}$
with the same underlying algebraic morphism. There is an
isomorphism $\overline \theta \colon T_{\overline f}\to T_{\overline f'}$ whose
Bott-Chern class $\widetilde{\Td}_{m}(\overline\theta)$ satisfies
\begin{displaymath}
\dd_{\mathcal{D}}\widetilde {\Td}_{m}(\overline \theta )=
\Td(T_{\overline f'}) \Td(T_{\overline f})^{-1}-1.
\end{displaymath}
\end{enumerate}
\end{proposition}
\end{document} |
\begin{document}
\preprint{APS/123-QED}
\title{Dynamics simulation and numerical analysis of arbitrary time-dependent $\mathcal{PT}$-symmetric system based on density operators\\}
\author{Xiaogang Li}
\affiliation{State Key Laboratory of Low-Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing 100084, China}
\author{Chao Zheng}
\affiliation{Department of Physics, College of Science, North China University of Technology, Beijing 100144, China}
\author{Jiancun Gao}
\affiliation{State Key Laboratory of Low-Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing 100084, China} \affiliation{Frontier Science Center for Quantum Information, Beijing 100084, China}
\author{Guilu Long}
\email{[email protected]}
\affiliation{State Key Laboratory of Low-Dimensional Quantum Physics and Department of Physics, Tsinghua University, Beijing 100084, China}
\affiliation{Frontier Science Center for Quantum Information, Beijing 100084, China}
\affiliation{Beijing Academy of Quantum Information Sciences, Beijing 100193, China}
\affiliation{Beijing National Research Center for Information Science and Technology and School of Information, Tsinghua University,
Beijing 100084, China}
\date{\today}
\begin{abstract}
$\mathcal{PT}$-symmetric systems have attracted extensive attention in recent years because of their unique properties and applications. Simulating a $\mathcal{PT}$-symmetric system within a conventional quantum mechanical system is of both fundamental theoretical significance and practical value. We propose a dynamics simulation scheme for arbitrary time-dependent $\mathcal{PT}$-symmetric systems based on density operators, whose results are compatible with previous methods based on pure-state vectors.
Building on this, we are able to study the influence of quantum noises on the simulation results with the technique of vectorization of density operators and matrixization of superoperators (VDMS), and we show that the depolarizing (Dep) noise is the most damaging and should be avoided as much as possible. We also give a numerical analysis. We find that the problem of the chronological product usually has to be solved not only in the numerical calculation but even in the experiment, because the dilated higher-dimensional Hamiltonian is usually time-dependent. Through theoretical analysis and numerical calculation, we find that, in order to meet the target calculation accuracy while saving computing resources, the time step of the calculation and the cut-off term of the Magnus series have to be carefully balanced.
\end{abstract}
\maketitle
\section{\label{Introduction}Introduction}
That all physical observables, including Hamiltonians, must be Hermitian operators has long been regarded as one of the axioms of \emph{conventional quantum mechanics} (CQM) \cite{Griffiths2018}, because Hermitian operators have real eigenspectra. However, in 1998 Bender et al. found that some non-Hermitian Hamiltonians, which are parity-time ($\mathcal{PT}$)-reversal symmetric, may also have real eigenspectra \cite{Bender1998}, and then established \emph{$\mathcal{PT}$-symmetric quantum mechanics} ($\mathcal{PT}$-QM) \cite{Bender1999,Bender2002,Bender2004}. In the last two decades, $\mathcal{PT}$-symmetry theory has developed rapidly \cite{Mostafazadeh2002b,Mostafazadeh2002a,Mostafazadeh2002,Mostafazadeh2003a,Mostafazadeh2007a,Curtright2007,Brody2013,Zhang2020} and has aroused wide attention \cite{Bender2007,Mostafazadeh2010}. Phenomena related to $\mathcal{PT}$-symmetry exist widely, not only in classical systems, such as optical systems \cite{Makris2008,Rueter2010}, microcavities \cite{Peng2014} and circuits \cite{Assawaworrarit2017}, but also in quantum systems, such as strongly correlated many-body systems \cite{Ashida2017}, quantum critical spin chains \cite{Couvreur2017}, and ultracold atoms \cite{li2019observation}. In addition, $\mathcal{PT}$-symmetric systems also show practical value in quantum sensors \cite{Zhang2019a,Chu2020,Yu2020}, which can exploit the sensitivity of a $\mathcal{PT}$-symmetric system near its exceptional points (EPs) to amplify small signals. In particular, it is worth noting that, with the increasing interest in $\mathcal{PT}$-symmetric systems, some new phenomena have emerged \cite{Croke2015,Kawabata2017} that are striking because they seem to conflict with the theory of conventional quantum mechanics or the theory of relativity, such as the instantaneous quantum brachistochrone problem \cite{Mostafazadeh2007,Bender2007a,Guenther2008,Guenther2008a,Mostafazadeh2010,Ramezani2012,ZhengChao2013,Beygi2018,Brody2021}, the discrimination of nonorthogonal quantum states \cite{Bender2013,Wang2020}, and the violation of the no-signaling principle \cite{Barnett2002,Lee2014,Brody2016,Tang2016,feng2017non,Beygi2018,Bagchi2020}. However, some of these anomalies actually come from not knowing how to simulate a $\mathcal{PT}$-symmetric system in a conventional quantum system; for instance, if the probability discarded during the simulation of a $\mathcal{PT}$-symmetric system is taken into account, the no-signaling principle still holds \cite{Huang2018}. Therefore, finding a way to simulate $\mathcal{PT}$-symmetric systems in conventional quantum systems has not only practical value, but also theoretical significance.
At present, there are at least three technical routes to simulating a $\mathcal{PT}$-symmetric system, and all of them can be realized in experiments \cite{ZhengChao2013,Yu2020,wu2019observation}. The method of linear combination of unitaries (LCU) \cite{GuiLu2006} can be used to simulate various non-Hermitian $\mathcal{PT}$-symmetric systems in discrete time, whether they are in the unbroken or broken phase \cite{GuiLu2006,ZhengChao2013,Zheng2018,Gao2021}, and even anti-$\mathcal{PT}$-symmetric systems \cite{Zheng2019a}. The method of weak measurement \cite{Huang2019,Yu2020} can be used to simulate various time-independent unbroken and broken $\mathcal{PT}$-symmetric systems in restricted continuous time under the condition of weak interaction. The methods based on embedding \cite{Huang2018,Li2022,wu2019observation} can be used to simulate the dynamics of an unbroken time-independent $\mathcal{PT}$-symmetric system (in the pure-state case \cite{Huang2018} or the mixed-state case \cite{Li2022}) in unrestricted continuous time with only one qubit as an auxiliary system.
What deserves special attention is that, in 2019, Wu et al. proposed a general scheme for simulating the dynamics of an arbitrary time-dependent (TD) $\mathcal{PT}$-symmetric system based on pure-state vectors through the dilation method, and realized it with a single nitrogen-vacancy center in diamond \cite{wu2019observation}. Specifically, their method works by dilating a general TD $\mathcal{PT}$-symmetric Hamiltonian into a higher-dimensional TD Hermitian one with the help of an auxiliary qubit, evolving the state in the dilated Hermitian system for a period of time, and then performing a fixed projection measurement on the auxiliary system; after that, the remaining main system is equivalent to one that has gone through the evolution process governed by the $\mathcal{PT}$-symmetric Hamiltonian. However, the dilated Hamiltonian is usually time-dependent, which means the system is actually an open quantum system. From one point of view, in an open quantum system the influence of quantum noises in the environment is usually inevitable, so they have to be considered \cite{nielsen2002quantum}. The time evolution of an open quantum system interacting with a memoryless environment can be described by the Lindblad master equation \cite{nielsen2002quantum,Minganti2019}, which is usually formulated in terms of density operators (matrices) rather than state vectors. From another point of view, a pure state evolves into a mixed state under quantum noises, while a theory based on pure-state vectors cannot conveniently deal with questions related to mixed states; in this situation, density operators are also the better tool.
In this paper, we first generalize the outstanding work of Wu et al. based on the dilation method from the case of pure-state vectors to the case of mixed-state density operators, using density operators as the basic tool \cite{wu2019observation}, and provide more mathematical and physical completeness. It is worth emphasizing that generalizing the quantum state from pure-state vectors to mixed-state density operators in the simulation of a $\mathcal{PT}$-symmetric system is not a trivial process; the difficulty mainly comes from the flexibility of characterizing quantum states in a $\mathcal{PT}$-symmetric system and the uncertainty of mapping them to higher-dimensional quantum states in a conventional quantum system \cite{Li2022,Ohlsson2021}. Only based on this groundwork can we deal with problems in open quantum systems, which we do using the technique of vectorization of density operators and matrixization of superoperators (VDMS). We then study the influence of quantum noises on the dynamics of an arbitrary time-dependent $\mathcal{PT}$-symmetric system, and we also give a numerical analysis. Through theoretical analysis and numerical calculation, we find that, in order to meet the target calculation accuracy while saving computing resources, the time step of the calculation and the cut-off term of the Magnus series have to be carefully balanced. In the numerical analysis, we also find that the time step $h$ of each numerical calculation step shall be limited by the corresponding critical time $T_c$ of the convergence of the Magnus series, especially when the high-order terms of the Magnus series are considered. This phenomenon occurs because the dilated higher-dimensional Hamiltonian is usually time-dependent, so the problem of the chronological product usually has to be dealt with and the Magnus series may have to be calculated \cite{Magnus1954,Blanes1998,Blanes2009}; the series may diverge when $t\rightarrow T_c$, so the error may be amplified after the Magnus series is truncated at a high-order term in the calculation. Meanwhile, the duration of an experimental run is actually bounded by the critical time $T_l$ of the legitimacy of the dilation method. This phenomenon occurs because the energy may diverge when $t\rightarrow T_l$. In fact, the problem of the chronological product may have to be solved not only in the numerical calculation but even in the experiment, because the dilated $\hat{H}_{AS}(t)$ has to be parameterized in advance by numerically calculating the chronological product caused by the $H_S(t)$ to be dilated. In addition, when considering the influence of quantum noises, we find that the depolarizing (Dep) noise (channel) is the most damaging to the simulation of a $\mathcal{PT}$-symmetric system among the three kinds of quantum noises we consider, and it should be avoided as much as possible. It is worth noting that when the system considered is time-independent and $\mathcal{PT}$-symmetry unbroken, the results of the dynamics simulation in this work are consistent with our previous results in Ref.\cite{Li2022}, and when the state considered is a pure state, the results of this work are consistent with the theoretical results given in Ref.\cite{wu2019observation}. In summary, this work provides a general theoretical framework based on density operators to analytically and numerically analyze the dynamics of an arbitrary time-dependent $\mathcal{PT}$-symmetric system and the influence of quantum noises.
The rest of this paper is organized as follows. In Sec.\ref{Theoretical_preparations}, we give some necessary basic theory of $\mathcal{PT}$-symmetric systems. In Sec.\ref{dilation_method}, we give a universal Hermitian dilation method for non-Hermitian Hamiltonians based on density operators and, building on that, a universal simulation scheme for the dynamics of an arbitrary TD $\mathcal{PT}$-symmetric system. To be able to solve problems in open quantum systems, we vectorize density operators and matrixize the Liouvillian superoperators in Sec.\ref{VD_and_ML}. In addition, we discuss the numerical calculation methods for the time-dependent linear matrix differential equations involved in this paper in Sec.\ref{numerical_calculation}. In Sec.\ref{example}, we give an example of a two-dimensional $\mathcal{PT}$-symmetric system and numerically analyze its dynamics, also considering the influence of three kinds of quantum noises. In Sec.\ref{conclusions}, we give conclusions and discussions. Furthermore, Appendix \ref{appendix_derivation-H_24} shows the details of the dilated Hamiltonians, and Appendix \ref{chronological product} introduces the problem of the chronological product.
\section{Theoretical preparations\label{Theoretical_preparations}}
Consider an $n$-dimensional non-Hermitian $\mathcal{PT}$-symmetric Hamiltonian $\mathcal{H}$, the parity operator $\mathcal{P}$
and the time-reversal operator $\mathcal{T}$, where $\mathcal{T}$ is an anti-linear operator, and let $H$, $P$, $T$ denote their matrix representations, respectively. They have the following properties:
\begin{align}
P^2=I, T\overline{T}=&I, PT=T\overline{P}, \nonumber \\
PT\overline{H}=&HPT,
\end{align}
where $\overline{[\cdot]}$ denotes the complex conjugate of $[\cdot]$, which appears because $\mathcal{T}$ is an anti-linear operator. If $H$ is similar to a real diagonal matrix, $H$ is $\mathcal{PT}$-symmetry unbroken; otherwise, $H$ is called $\mathcal{PT}$-symmetry broken,
which is the case if and only if it satisfies either of these two conditions \cite{Mostafazadeh2002a,Mostafazadeh2002b,huang2021solvable,Li2022}: (1) it cannot be diagonalized, (2) it has complex eigenvalues that appear in complex conjugate pairs.
For a time-independent $\mathcal{PT}$-symmetric $H$, there is a time-independent operator $\eta$ that satisfies:
\begin{equation}\label{metric1}
\eta H=H^\dag\eta,
\end{equation}
where $\eta$ is called the metric operator of $H$; it is a reversible operator (usually Hermitian), and when $H$ is $\mathcal{PT}$-symmetry unbroken it can be chosen positive. The metric operator is usually not unique; for instance, if $\eta$ is a metric operator of $H$, so is $r\eta$ ($r\in \mathbb{R}$). The above Eq.\eqref{metric1} is also referred to as the pseudo-Hermiticity relation, and $H$ is also referred to as pseudo-Hermitian (Hamiltonian) \cite{Mostafazadeh2002b,Zhang2019}. It is worth mentioning that in 2002 Mostafazadeh pointed out that all $\mathcal{PT}$-symmetric non-Hermitian Hamiltonians belong to the class of pseudo-Hermitian Hamiltonians \cite{Mostafazadeh2002b}; recently, in 2020, this conclusion was proved more rigorously and strengthened by Ruili Zhang et al., i.e., $\mathcal{PT}$-symmetry entails pseudo-Hermiticity regardless of diagonalizability \cite{Zhang2020}. The theory of pseudo-Hermiticity provides more convenience in dealing with questions related to mixed states, so hereafter we use the tools usually employed in the framework of pseudo-Hermiticity, such as the biorthonormal eigenbasis, metric operators, etc.
The elements of $\mathcal{PT}$-QM in the unbroken phase of $H$ can be represented by the biorthogonal basis of $H$, \{$|\chi_n,a\rangle, |\phi_n,a\rangle$\}, which has the following properties \cite{Mostafazadeh2002b}:
\begin{subequations}\label{biorthononal_basis1}
\begin{align}
\langle\chi_m,a|\phi_n,b\rangle=&\delta_{mn}\delta_{ab} \\
H|\phi_n,a\rangle=E_n|\phi_n,a\rangle,\quad &H^\dag|\chi_n,a\rangle=E_n|\chi_n,a\rangle \\
\sum\limits_n\sum\limits_{a=1}^{d_n}|\chi_n,a\rangle\langle\phi_n,a|=&\sum\limits_n\sum\limits_{a=1}^{d_n}|\phi_n,a\rangle\langle\chi_n,a|=I,\\
|\chi_n,a\rangle=&\eta|\phi_n,a\rangle,\\
\eta=\sum_n\sum_{a=1}^{d_n}&|\chi_n,a\rangle\langle\chi_n,a|,\\
\eta^{-1}=\sum_n\sum_{a=1}^{d_n}&|\phi_n,a\rangle\langle\phi_n,a|,
\end{align}
\end{subequations}
where $d_n$ is the degree of degeneracy of the eigenvalue $E_n$ (in the $\mathcal{PT}$-unbroken case, $E_n$ is real), $a$ and $b$ are degeneracy labels, and the $|\phi_k\rangle$s ($|\chi_k\rangle$s) are usually not orthogonal to each other. The possible real coefficients in front of $\eta$ have been absorbed into the biorthogonal basis. It is worth noting that in the extreme case when $H$ becomes Hermitian, the biorthogonal basis becomes an orthogonal basis because $H=H^\dag$, and then $\{|\phi_k\rangle\}=\{|\chi_k\rangle\}$. For convenience, we set $d_n=1$ hereafter. If we record $\Phi=[|\phi_1\rangle,...|\phi_i\rangle,...|\phi_n\rangle]$, $\Xi=[|\chi_1\rangle,...|\chi_i\rangle,...|\chi_n\rangle]$, $E=$diag$(E_1,...E_i,...E_n)$, then according to Eqs.\eqref{biorthononal_basis1} we get \cite{Huang2018,Li2022}:
\begin{equation}\label{jordan_block_H1}
\begin{split}
&\Phi^{-1}H\Phi=E, \quad \Xi^{-1}H^\dag\Xi=E \\
&\eta=\Xi\Xi^\dag, \quad \eta^{-1}=\Phi\Phi^\dag \\
&\Xi=\eta\Phi, \quad \Xi^\dag\Phi=I_n.
\end{split}
\end{equation}
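As a concrete illustration of Eqs.\eqref{biorthononal_basis1} and \eqref{jordan_block_H1}, the following short numerical sketch (our own illustration, assuming NumPy and a commonly used two-level $\mathcal{PT}$-symmetric Hamiltonian in the unbroken regime; it is not taken from the references) builds $\Phi$, $\Xi=(\Phi^\dag)^{-1}$ and $\eta=\Xi\Xi^\dag$ and checks the pseudo-Hermiticity relation of Eq.\eqref{metric1}:
\begin{verbatim}
import numpy as np

# Two-level PT-symmetric Hamiltonian (assumed example);
# the unbroken phase requires s > r*sin(theta).
r, s, theta = 1.0, 2.0, 0.3
H = np.array([[r*np.exp(1j*theta), s],
              [s, r*np.exp(-1j*theta)]])

E, Phi = np.linalg.eig(H)            # columns of Phi are |phi_n>
Xi = np.linalg.inv(Phi).conj().T     # biorthogonal partners, Xi^dag Phi = I
eta = Xi @ Xi.conj().T               # metric operator

print(np.allclose(E.imag, 0))                  # real spectrum (unbroken phase)
print(np.allclose(eta @ H, H.conj().T @ eta))  # eta H = H^dag eta
print(np.all(np.linalg.eigvalsh(eta) > 0))     # eta is positive definite
\end{verbatim}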
Through the positive Hermitian metric operator $\eta$, the representations of a quantum observable $\mathcal{O}$ under the framework of CQM and the framework of $\mathcal{PT}$-QM can be connected by a similarity transformation, i.e., the Dyson map \cite{Fring2016,Luiz2020}:
\begin{align}\label{Dyson_map1}
O_c=\eta^{\frac{1}{2}}\cdot O_{\mathcal{PT}}\cdot \eta^{-\frac{1}{2}},
\end{align}
where $O_c$ is the observable in the CQM framework and $O_{\mathcal{PT}}$ is the corresponding observable in $\mathcal{PT}$-QM. Similarly, there is a relation between the quantum state $\rho_c$ in CQM and the state $\rho_{\mathcal{PT}}$ in $\mathcal{PT}$-QM:
\begin{equation}\label{rhoc_rhopt}
\rho_c=\sum_{mn}{\rho_c}_{mn}|m\rangle\langle n| \Leftrightarrow \rho_{\mathcal{PT}}=\sum_{mn}{\rho_c}_{mn}|\phi_m\rangle\langle \chi_n|,
\end{equation}
where $\{|n\rangle\}$ is a mutually orthogonal basis in CQM, and ${\rho_c}_{mn}$ are the matrix elements; $\rho_c$ and $\rho_{\mathcal{PT}}$ are connected by the Dyson map given above in Eq.\eqref{Dyson_map1}. Therefore, there is a fixed relation between the (unnormalized) quantum state $\rho_S$ in the framework of CQM and the quantum state $\rho_{\mathcal{PT}}$ in the framework of $\mathcal{PT}$-QM (for simplicity, we only use this conclusion without further derivation; more details can be found in our previous work in Ref.\cite{Li2022}):
\begin{align}\label{rho_simulation1}
\rho_S&=\rho_{\mathcal{PT}}\cdot\eta^{-1} \nonumber \\
&=\sum_{mn}{\rho_c}_{mn}|\phi_m\rangle\langle \phi_n|,
\end{align}
where $\rho_S$ can be normalized by $\rm{Tr}(\eta\rho_S)=1$. Clarifying the relation between the density operators in $\mathcal{PT}$-QM and the density operators in CQM is an important step to extend the dynamics simulation scheme of $\mathcal{PT}$-symmetric system from the pure-state vectors case to the mixed-state density operators case \cite{Li2022}. From the above, we know that this process is not trivial.
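In matrix notation, with $\Phi$ as in Eq.\eqref{jordan_block_H1} and $\rho_c=({\rho_c}_{mn})$, the relation \eqref{rho_simulation1} can be written compactly as $\rho_S=\Phi\rho_c\Phi^\dag$ and $\rho_{\mathcal{PT}}=\rho_S\,\eta=\Phi\rho_c\Phi^\dag\eta$; this is simply a rewriting of the sums above, which we note here because it is convenient for numerical implementation.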
Now we introduce the concept of $\eta$-inner product \cite{Mostafazadeh2002b,Li2022}:
\begin{align}\label{eta-inner-product1}
(|\psi_{1}\rangle,|\psi_{2}\rangle)_{\eta}\equiv\langle\psi_{1} \mid \psi_{2}\rangle_{\eta}:=\left\langle\psi_{1}|\eta| \psi_{2}\right\rangle& \nonumber\\
\forall\left|\psi_{1}\right\rangle,\left|\psi_{2}\right\rangle \in L(\mathcal{H}),&
\end{align}
where $L(\mathcal{H})$ denotes the Hilbert space, and $\eta$ is a reversible Hermitian metric operator of this $\eta$-inner product space; in particular, in the unbroken phase of $\mathcal{PT}$-QM it can be a positive operator \cite{Mostafazadeh2002b}.
Next we discuss the situation in which the non-Hermitian $\mathcal{PT}$-symmetric Hamiltonian $H(t)$ is time-dependent. Considering two evolving states $|\psi_1(t)\rangle$ and $|\psi_2(t)\rangle$, we assume that they satisfy the Schr\"{o}dinger-like equation:
\begin{align}
\frac{\mathrm{d}|\psi(t)\rangle}{\mathrm{d}t}=-iH(t)|\psi(t)\rangle,
\end{align}
where we have set $\hbar=1$ here and hereafter. According to probability conservation in the inner product space defined as in Eq.\eqref{eta-inner-product1}, we obtain
\begin{align}\label{probability_conservation2}
&\frac{\mathrm{d}}{\mathrm{d}t}\langle\psi_{1}(t)|\psi_{2}(t)\rangle_{\eta(t)} \nonumber\\
\equiv& \frac{\mathrm{d}}{\mathrm{d}t}\langle\psi_{1}(t)|\eta(t)|\psi_{2}(t)\rangle \nonumber\\
=&\langle\psi_{1}(t)|-i\eta(t)H(t)+iH^\dag(t) \eta(t)+\eta'(t)| \psi_{2}(t)\rangle \nonumber\\
=&0,
\end{align}
where we denote the differential operator $\frac{\mathrm{d}}{\mathrm{d}t}$ by the symbol "$'$". Then we obtain:
\begin{align}\label{TD-pseudo-Hermiticity-relation}
\eta'(t)=i[\eta(t)H(t)-H^\dag(t) \eta(t)].
\end{align}
The above Eq.\eqref{TD-pseudo-Hermiticity-relation} is referred to as the time-dependent (TD) pseudo-Hermiticity relation, and $\eta(t)$ is the time-dependent (TD) metric operator of the corresponding inner product space, i.e., the $\eta(t)$-inner product space, which leads to probability conservation \cite{Fring2016,Luiz2020}. Note that the $\eta(t)$-inner product space is time-dependent.
The solution of Eq.\eqref{TD-pseudo-Hermiticity-relation} can be obtained as:
\begin{align}\label{eta_TD}
\eta(t)=\mathbb{T}e^{-i\int_{0}^{t}{H}^\dag(\tau)d\tau}\eta(0)\mathbb{\overline{T}}e^{i\int_{0}^{t}H(\tau)d\tau},
\end{align}
where $\mathbb{T}$ is the time-ordering operator and $\mathbb{\overline{T}}$ is the anti-time-ordering operator; moreover, $\eta(0)$ can be an arbitrary Hermitian operator. If we take $\eta(0)>0$, then there must exist a period of time $T_t$ such that $\eta(t)>0$ during $t\in[0,T_t)$. It is worth noting that when $H$ is time-independent, $\eta(t)$ may be time-independent so that $\eta'=0$ (for instance, when $H$ is $\mathcal{PT}$-symmetry unbroken, $\eta(0)$ can be taken as a metric operator like the one in Eq.\eqref{metric1}), and then the TD pseudo-Hermiticity relation given above in Eq.\eqref{TD-pseudo-Hermiticity-relation} reduces to the pseudo-Hermiticity relation given in Eq.\eqref{metric1}.
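Since the time-ordered exponentials in Eq.\eqref{eta_TD} are rarely available in closed form, in practice $\eta(t)$ can be obtained by directly integrating Eq.\eqref{TD-pseudo-Hermiticity-relation}. The following minimal sketch (our own illustration, assuming NumPy and a hypothetical time-dependent two-level $H(t)$) uses a classical fourth-order Runge-Kutta step and checks that Hermiticity and positivity are preserved on the integration interval:
\begin{verbatim}
import numpy as np

def H(t):
    # Hypothetical time-dependent non-Hermitian Hamiltonian (illustration only).
    return np.array([[0.3j*np.cos(t), 1.0],
                     [1.0, -0.3j*np.cos(t)]])

def deta_dt(t, eta):
    # TD pseudo-Hermiticity relation: eta' = i(eta H - H^dag eta).
    return 1j*(eta @ H(t) - H(t).conj().T @ eta)

def evolve_eta(eta0, T, n_steps=2000):
    h, t, eta = T/n_steps, 0.0, eta0.astype(complex)
    for _ in range(n_steps):
        k1 = deta_dt(t, eta)
        k2 = deta_dt(t + h/2, eta + h/2*k1)
        k3 = deta_dt(t + h/2, eta + h/2*k2)
        k4 = deta_dt(t + h, eta + h*k3)
        eta = eta + h/6*(k1 + 2*k2 + 2*k3 + k4)
        t += h
    return eta

eta_T = evolve_eta(2.0*np.eye(2), T=1.0)     # eta(0) = 2 I > 0
print(np.allclose(eta_T, eta_T.conj().T))    # eta(T) remains Hermitian
print(np.linalg.eigvalsh(eta_T))             # check positivity (t < T_t)
\end{verbatim}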
In addition, according to Eq.\eqref{probability_conservation2}, if we set $|\psi_1(t)\rangle=|\psi_2(t)\rangle=|\psi(t)\rangle$ and $\rho_S(t)=|\psi(t)\rangle\langle\psi(t)|$, then we know:
\begin{align}
\frac{\mathrm{d}\mathrm{Tr}[\rho_S(t)\eta(t)]}{\mathrm{d}t}\equiv\frac{\mathrm{d}\mathrm{Tr}[\rho_{\mathcal{PT}}(t)]}{\mathrm{d}t}\equiv0.
\end{align}
Here $\rho_{\mathcal{PT}}(t)\equiv\rho_S(t)\eta(t)$ can be seen as a quantum state in TD $\mathcal{PT}$-QM, and can be normalized by $\mathrm{Tr}[\rho_{\mathcal{PT}}(t)]=\mathrm{Tr}[\rho_S(t)\eta(t)]\equiv1$. The form of $\rho_{\mathcal{PT}}(t)$ can be easily generalized, and can be mapped to a quantum state $\rho_c(t)$ in (TD) CQM through a time-dependent (TD) similarity transformation, i.e., the TD Dyson map similar to Eq.\eqref{Dyson_map1} \cite{Luiz2020}:
\begin{align}\label{Dyson_map1-TD}
O_c(t)=\eta^{\frac{1}{2}}(t)\cdot O_{\mathcal{PT}}(t)\cdot \eta^{-\frac{1}{2}}(t),
\end{align}
and $\rho_c(t)$ is similar to Eq.\eqref{rhoc_rhopt}:
\begin{align}\label{rhoc_rhopt-TD}
\rho_c(t)=&\sum_{mn}{\rho_c}_{mn}(t)|m(t)\rangle\langle n(t)| \Leftrightarrow \nonumber\\ \rho_{\mathcal{PT}}(t)=&\sum_{mn}{\rho_c}_{mn}(t)|\phi_m(t)\rangle\langle \chi_n(t)|, \nonumber\\
=&\sum_{mn}{\rho_c}_{mn}(t)|\phi_m(t)\rangle\langle \phi_n(t)|\cdot\eta(t),
\end{align}
where $\rho_c(t)$ is a quantum state in CQM with a TD mutually orthogonal basis $\{|m(t)\rangle\}$ in CQM, and $\{|\phi(t)\rangle,|\chi(t)\rangle\}$ is a TD biorthogonal basis in $\mathcal{PT}$-QM with $|\chi(t)\rangle=\eta(t)|\phi(t)\rangle$. Therefore, if we take $\eta(0)>0$ as given in Eq.\eqref{TD-pseudo-Hermiticity-relation}, then, similar to Eq.\eqref{rhoc_rhopt}, we can obtain
\begin{align}\label{rho_simulation1-TD}
\rho_S(t)&=\rho_{\mathcal{PT}}(t)\cdot\eta^{-1}(t) \nonumber \\
&=\sum_{mn}{\rho_c}_{mn}(t)|\phi_m(t)\rangle\langle \phi_n(t)|,
\end{align}
which means that on the premise that $\eta(t)>0$, we can always find an unnormalized quantum state $\rho_S(t)$ in TD CQM related to a quantum state $\rho_{\mathcal{PT}}(t)$ in TD $\mathcal{PT}$-QM (refer to Appendix B in Ref.\cite{Li2022} for the proof that $\rho_S$ is actually an unnormalized state in (TD) CQM). According to the relation between the unnormalized state $\rho_S(t)$ in TD CQM and the quantum state $\rho_{\mathcal{PT}}(t)$ in TD $\mathcal{PT}$-QM given above in Eq.\eqref{rho_simulation1-TD}, we know that once $\eta(t)$ is known (given, or calculated), $\rho_S$ can be used to represent $\rho_{\mathcal{PT}}$ for the purpose of simulation. It is worth noting that the two are actually different in the physical sense, because they are not related by a similarity transformation. Fortunately, for a simulation task, it is not necessary to pursue the absolute equivalence of the two physical meanings, but only to ensure that their form is appropriate and can be realized physically \cite{Li2022,Guenther2008a,Brody2012}. Eq.\eqref{rho_simulation1-TD} is actually the prerequisite for the implementation of the dilation method based on density operators that we discuss next.
\section{\label{dilation_method}Universal Hermitian dilation method of non-Hermitian Hamiltonians based on density operators}
One of the methods to simulate the dynamics of a $\mathcal{PT}$-symmetric system is to find a dilated higher-dimensional Hermitian system (marked by "$AS$", where "$A$", "$S$", "$AS$" represent the auxiliary system, the main system used to generate the dynamics of the non-Hermitian system, and the composite system, respectively), which obeys the von Neumann equation (it reduces to the Schr\"{o}dinger equation when restricted to pure-state vectors), and to use it to simulate the dynamics of the non-Hermitian system, which obeys the von Neumann-like equation (the Schr\"{o}dinger-like equation in the pure-state case). We assume the evolution equation (the von Neumann-like equation) of the unnormalized state $\rho_S$ mentioned in Eq.\eqref{rho_simulation1-TD} is (hereafter, we set $\hbar=1$) \cite{Ohlsson2021,Brody2012,Kawabata2017,Xiao2019}:
\begin{align}\label{H_dyn}
\frac{\mathrm{d}{\rho_S(t)}}{\mathrm{d} t}&=-i[H_S(t), \rho_S(t)]_{\dag}\nonumber\\
&\equiv-i[H_S(t)\rho_S(t)-\rho_S(t){H_S}^\dag(t)],
\end{align}
where $H_S(t)$ is a non-Hermitian Hamiltonian of the system $S$ and can be a $\mathcal{PT}$-symmetric Hamiltonian; for generality, we assume it is time-dependent, so a time-independent Hamiltonian can be regarded as a special case of it. It is worth noting that because $H_S(t)$ is non-Hermitian, Eq.\eqref{H_dyn} cannot usually be realized directly in physics. $\rho_S(t)$ is the unnormalized state of the system $S$ at time $t$; its normalized form is
\begin{align}\label{rho_S_N}
\rho_{S_\mathrm{N}}(t)=\frac{\rho_S(t)}{\mathrm{Tr}[\rho_S(t)]},
\end{align}
where $\rho_{S_\mathrm{N}}(t)$ is a legal quantum state (i.e., a positive-semidefinite operator with unit trace) actually measured in the experiment, and $P_0(t)\equiv\mathrm{Tr}[\rho_S(t)]$ represents the corresponding measured probability of the state $\rho_{S_\mathrm{N}}(t)$ at the time moment $t$. Because both $P_0(t)$ and $\rho_{S_\mathrm{N}}(t)$ can be measured in an experiment, $\rho_S(t)$ can be used to represent $\rho_{S_\mathrm{N}}(t)$; the probability $P_0(t)$ is simply
absorbed in the unnormalized density matrix $\rho_S(t)$ (see Ref.\cite{Li2022} and its Appendix B for details), and this practice meets the common usage \cite{Bender2007a,Guenther2008,Guenther2008a,wu2019observation,Li2022}. When no confusion arises, we treat them as equivalent and distinguish them strictly only when calculating probabilities. It should be noted that since $H_S$ is non-Hermitian, $\mathrm{Tr}[\rho_S(t)]$ is usually not constant and may become greater than one, in which case it loses its physical meaning. Therefore, we impose the constraint condition $\mathrm{Tr}[\rho_S(t)]\leqslant1$, which can be satisfied by choosing an appropriate unnormalized initial state $\rho_S(0)$ and duration time $T$ of the evolution.
However, because $H_S$ is non-Hermitian, a state evolution satisfying a von Neumann-like equation such as Eq.\eqref{H_dyn} cannot be realized directly in CQM. Fortunately, we can map it into the following von Neumann equation, which is related to the von Neumann-like equation in Eq.\eqref{H_dyn}:
\begin{align}\label{H_hat_dyn}
\frac{\mathrm{d}\rho_{AS}(t)}{\mathrm{d} t}=-i[\hat{H}_{AS}(t), \rho_{AS}(t)],
\end{align}
where $\hat{H}_{AS}(t)$ is a Hermitian Hamiltonian of the system $AS$, so Eq.\eqref{H_hat_dyn}, in contrast to Eq.\eqref{H_dyn}, can be realized directly in physics, and $\rho_{AS}(t)$ is the density operator of this system.
We assume that
\begin{align}\label{rho_rhoS}
\rho_{AS}(t)&=\begin{bmatrix}
I_S \\
\xi(t)
\end{bmatrix}\cdot\rho_S(t)\cdot
\begin{bmatrix}
I_S &\xi^\dag(t)
\end{bmatrix}\nonumber\\
&=\begin{bmatrix}
\rho_S(t) &\rho_S(t)\cdot\xi^\dag(t)\\
\xi(t)\rho_S(t)&\xi(t)\cdot\rho_S(t)\cdot\xi^\dag(t)
\end{bmatrix},
\end{align}
where $\begin{bmatrix}
I_S \\
\xi(t)
\end{bmatrix}=|0\rangle_A\otimes I_S+|1\rangle_A\otimes\xi(t)$, and we can see that the state $\rho_{AS}(t)$ is usually an entangled state.
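To make the block structure of Eq.\eqref{rho_rhoS} concrete, the following fragment (a sketch under the assumption that $\rho_S(t)$ and a Hermitian $\xi(t)$ are already available as NumPy arrays; the numbers are toy values) assembles $\rho_{AS}(t)$ and verifies the trace identity $\mathrm{Tr}[\rho_{AS}]=\mathrm{Tr}[(\xi^\dag\xi+I_S)\rho_S]$ used below:
\begin{verbatim}
import numpy as np

def dilate_state(rho_S, xi):
    # Embed rho_S into the composite system AS, Eq. (rho_rhoS).
    n = rho_S.shape[0]
    V = np.vstack([np.eye(n), xi])       # the column [I_S; xi(t)]
    return V @ rho_S @ V.conj().T        # 2n x 2n, generally entangled

rho_S = np.array([[0.3, 0.1], [0.1, 0.2]], dtype=complex)   # toy data
xi = np.array([[0.5, 0.2], [0.2, 0.4]], dtype=complex)      # Hermitian xi(t)

rho_AS = dilate_state(rho_S, xi)
M = xi.conj().T @ xi + np.eye(2)
print(np.isclose(np.trace(rho_AS), np.trace(M @ rho_S)))    # trace identity holds
\end{verbatim}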
Meanwhile, we assume
\begin{equation}\label{H_dag}
\hat{H}_{AS}(t)=
\begin{bmatrix}
H_1(t) & H_2(t)\\
H_2^\dag(t) &H_4(t)
\end{bmatrix},
\end{equation}
where $H_1(t),H_2(t),H_4(t)$ are all operators, and it is obvious that $H_1^\dag(t)=H_1(t),H_4^\dag(t)=H_4(t)$, i.e., they are Hermitian, because $\hat{H}_{AS}(t)$ is Hermitian.
Then according to the probability conservation principle, we know that
\begin{align}\label{probability conservation}
&\frac{\mathrm{d}{\mathrm{Tr}[\rho_{AS}(t)]}}{\mathrm{d}t}\nonumber\\
&=\frac{\mathrm{d}\mathrm{Tr}[\rho_S(t)+\xi(t)\rho_S(t)\xi^\dag(t)]}{\mathrm{d}t}\nonumber\\
&=\frac{\mathrm{d}\mathrm{Tr}[(\xi^\dag(t)\xi(t)+I_S)\rho_S(t)]}{\mathrm{d}t} \nonumber \\
&\equiv\frac{\mathrm{d}\mathrm{Tr}[M(t)\rho_S(t)]}{\mathrm{d}t} \nonumber \\
&=\mathrm{Tr}[M'(t)\rho_S(t)+M(t)\rho_S'(t)] \nonumber \\
&=\mathrm{Tr}\{M'(t)\rho_S(t)-iM(t)[H_S(t)\rho_S(t)-\rho_S(t){H_S}^\dag(t)]\} \nonumber \\
&=\mathrm{Tr}\{[M'(t)-i(M(t)H_S(t)-{H_S}^\dag(t)M(t))]\rho_S(t)\} \nonumber \\
&\equiv0,
\end{align}
where we have defined
\begin{equation}\label{M_xi}
M(t)=\xi^\dag(t)\xi(t)+I_S.
\end{equation}
It is obvious that $M(t)$ is Hermitian and $M(t)\geqslant I_S$, so $M(t)$ is reversible. For convenience, we set
\begin{align}\label{unit_trace}
\mathrm{Tr}[\rho_{AS}(t)]=\mathrm{Tr}[M(t)\rho_{S}(t)]\equiv1.
\end{align}
Then according to Eq.\eqref{probability conservation} we get the result
\begin{equation}\label{M'_TD}
M'(t)=-i[{H_S}^\dag(t)M(t)-M(t)H_S(t)].
\end{equation}
It is worth noting that the above Eq.\eqref{M'_TD} has the same form as Eq.\eqref{TD-pseudo-Hermiticity-relation}, so this relation can also serve as a TD pseudo-Hermiticity relation, which can replace the (time-independent) pseudo-Hermiticity relation of the form ${H_S}^\dag M= M H_S$ \cite{Mostafazadeh2007}. In general, $M'(t)\neq0$, so ${H_S}^\dag(t)M(t)\neq M(t)H_S(t)$, which means $M(t)$ is not the metric operator of $H_S(t)$, but only the TD metric operator of the inner product space (i.e., the $M(t)$-inner product space) that leads to probability conservation. We find that $M(t)$ is actually the metric operator of the TD pseudo-Hermitian Hamiltonian $h_S(t)$ that will be introduced next.
Then, multiplying both sides of the above Eq.\eqref{M'_TD} by $M^{-1}(t)$ on the left and on the right, we get
\begin{equation}\label{K_mid}
iM^{-1}(t)M'(t)M^{-1}(t)=M^{-1}(t){H_S}^\dag(t)-H_S(t)M^{-1}(t).
\end{equation}
If we introduce an operator:
\begin{equation}\label{K}
K(t)=H_S(t)M^{-1}(t)+\frac{i}{2}M^{-1}(t)M'(t)M^{-1}(t),
\end{equation}
then, according to the above Eq.\eqref{K_mid}, we find $K^\dag(t)=K(t)$, which means $K(t)$ is Hermitian. The introduction of the operator $K(t)$ will be very useful in what follows. We then introduce a new quantity $h_S(t)$:
\begin{align}
h_S(t)&\equiv K(t)M(t) \nonumber\\
&=H_S(t)+\frac{i}{2}M^{-1}(t)M'(t)\nonumber\\
&=\frac{1}{2}H_S(t)+\frac{1}{2}M^{-1}(t)H_S^\dag(t)M(t),
\end{align}
where the Eq.\eqref{M'_TD} is used in the derivation. Obviously, $h_S(t)$ is non-Hermitian. At the same time, it is easy to verify that
\begin{align}\label{Mh_s}
M(t)h_S(t)=h_S^\dag(t)M(t),
\end{align}
which is exactly the pseudo-Hermiticity relation mentioned in Eq.\eqref{metric1}; this means that $h_S(t)$ is actually a pseudo-Hermitian quantity and $M(t)$ is actually its metric operator. As we have seen in Eq.\eqref{M_xi}, $M(t)\geqslant I_S$, so according to the above Eq.\eqref{Mh_s} there is a similarity transformation such that
\begin{align}
h_{\mathrm{phys}}(t)&\equiv M^{\frac{1}{2}}(t)\cdot h_S(t)\cdot M^{-\frac{1}{2}}(t)\nonumber\\
&=M^{-\frac{1}{2}}(t)\cdot M(t)h_S(t)\cdot M^{-\frac{1}{2}}(t) \nonumber\\
&=h_{\mathrm{phys}}^\dag(t) \nonumber \\
&=\frac{1}{2}[M^{\frac{1}{2}}(t)H_S(t)M^{-\frac{1}{2}}(t)\!+\!M^{-\frac{1}{2}}(t)H_S^\dag(t)M^{\frac{1}{2}}(t)],
\end{align}
which means $h_{\mathrm{phys}}(t)$ is a Hermitian operator with a real eigenspectrum and can be a legal observable in physics, so $h_S(t)$ is also a legal observable with a real eigenspectrum \cite{Bender2003,Bender2007,Mostafazadeh2010,Fring2016} and can be a physically permissible Hamiltonian. We thus reveal the physical meaning of $M(t)$ and find the physical observables $h_S(t)$ and $h_{\mathrm{phys}}(t)$ related to it, thereby providing more physical completeness \cite{wu2019observation}.
By solving the Eq.\eqref{M'_TD}, we obtain that
\begin{align}\label{M_TD}
M(t)=\mathbb{T}e^{-i\int_{0}^{t}{H_S}^\dag(\tau)d\tau}M(0)\mathbb{\overline{T}}e^{i\int_{0}^{t}H_S(\tau)d\tau},
\end{align}
where $M(0)$ can be any Hermitian operator satisfying the condition $M(0)>I_S$. The above Eq.\eqref{M_TD} is also obtained in Ref.\cite{wu2019observation}. It is worth noting that if we take $M(0)=\eta(0)>I_S$ as given in Eq.\eqref{eta_TD}, then $M(t)$ will be $\eta(t)$. This means that $M(t)$ is actually a TD metric operator like the $\eta(t)$ given in Eq.\eqref{eta_TD}.
In addition, when $H_S$ is given, some eigenvalues of $M(t)$ may decrease with time in some cases, such as in the $\mathcal{PT}$-symmetry broken phase of $H_S$, so we can define a critical time $T_l$ of legitimacy that keeps $M(t)$ legal, i.e., $M(t)>I_S$ when $t\in[0, T_l)$, while at $t=T_l$ at least one of the eigenvalues of $M(T_l)$ becomes one. It should be noted that $T_l$ may be infinite in some cases, such as in the $\mathcal{PT}$-symmetry unbroken phase of $H_S$. In addition, in the cases where $T_l\neq \infty$, $T_l$ depends on the initial setting of $M(0)$; in general, $T_l$ increases with the eigenvalues of $M(0)$, and a big enough $T_l$ can always be obtained by scaling $M(0)$ to meet the requirements of experimental or numerical calculation tasks.
It is worth noting that when the system is time-independent and $\mathcal{PT}$-symmetry unbroken, there must be a metric operator $\eta$ that satisfies $\eta>I_S$; then $M(0)$ can be chosen as $M(0)=\eta$, after which
\begin{align}\label{Mt-eta}
M(t)&=e^{-i\int_{0}^{t}H^\dag \mathrm{d}\tau}\eta e^{i\int_{0}^{t}H\mathrm{d}\tau} \nonumber \\
&=\eta\cdot e^{-i\int_{0}^{t}H\mathrm{d}\tau}\cdot e^{i\int_{0}^{t}H \mathrm{d}\tau} \nonumber \\
&=\eta,
\end{align}
which means $M(t)$ will be the metric operator and is time-independent.
By the above Eq.\eqref{rho_rhoS}, we have established a map between $\rho_S(t)$ and $\rho_{AS}(t)$; now we try to establish the map between $H_S(t)$ and $\hat{H}_{AS}(t)$. Specifically, we need to find the solutions for $H_1(t), H_2(t), H_4(t)$ in Eq.\eqref{H_dag}.
Substituting the Eqs.\eqref{rho_rhoS} and \eqref{H_dyn} into the Eq.\eqref{H_hat_dyn}, we obtain that
\begin{widetext}
\begin{align}\label{H_hat_map_H}
\frac{\mathrm{d}{\rho_{AS}(t)}}{\mathrm{d}t}&=
\begin{bmatrix}
0 \\ \xi'(t)
\end{bmatrix}\cdot\rho_S(t)\cdot
\begin{bmatrix}
I_S &\xi^\dag(t)
\end{bmatrix}+
\begin{bmatrix}
I_S \\
\xi(t)
\end{bmatrix}\cdot\rho_S'(t)\cdot
\begin{bmatrix}
I_S &\xi^\dag(t)
\end{bmatrix}+
\begin{bmatrix}
I_S \\ \xi(t)
\end{bmatrix}\cdot\rho_S\cdot
\begin{bmatrix}
0 &\xi'^\dag(t)
\end{bmatrix} \nonumber\\
&=-i\left\{\begin{bmatrix}
H_S(t) \\
\xi(t)H_S(t)+i\xi'(t)
\end{bmatrix}\cdot\rho_S(t)\cdot
\begin{bmatrix}
I_S &\xi^\dag(t)
\end{bmatrix}-
\begin{bmatrix}
I_S \\
\xi(t)
\end{bmatrix}\cdot\rho_S(t)\cdot
\begin{bmatrix}
{H_S}^\dag(t) &{H_S}^\dag(t)\xi^\dag(t)-i\xi'^\dag(t)
\end{bmatrix}\right\} \nonumber \\
&=-i[\hat H_{AS}(t)\rho_{AS}(t)-\rho_{AS}(t)\hat H_{AS}(t)] \nonumber\\
&=-i\left\{\begin{bmatrix}
H_1(t) & H_2(t)\\
H_2^\dag(t) &H_4(t)
\end{bmatrix}\cdot
\begin{bmatrix}
I_S \\
\xi(t)
\end{bmatrix}\cdot \rho_S\cdot
\begin{bmatrix}
I_S &\xi^\dag(t)
\end{bmatrix}-
\begin{bmatrix}
I_S \\
\xi(t)
\end{bmatrix}\cdot \rho_S\cdot
\begin{bmatrix}
I_S &\xi^\dag(t)
\end{bmatrix}\cdot
\begin{bmatrix}
H_1(t) & H_2(t)\\
H_2^\dag(t) &H_4(t)
\end{bmatrix}\right\} \nonumber\\
&=-i\left\{\begin{bmatrix}
H_1(t)+H_2(t)\xi(t) \\
H_2^\dag(t)+H_4(t)\xi(t)
\end{bmatrix} \cdot \rho_S\cdot
\begin{bmatrix}
I_S &\xi^\dag(t)
\end{bmatrix}-
\begin{bmatrix}
I_S \\
\xi(t)
\end{bmatrix}\cdot\rho_S(t)\cdot
\begin{bmatrix}
H_1(t)+\xi^\dag(t)H_2^\dag(t) &H_2(t)+\xi^\dag(t)H_4
\end{bmatrix}\right\}.
\end{align}
\end{widetext}
According to the above Eq.\eqref{H_hat_map_H}, we can get the following relation:
\begin{subequations}\label{relation1}
\begin{align}
H_1(t)+H_2(t)\xi(t)&=H_S(t) \label{eq1}\\
H_2(t)+H_4(t)\xi(t)&=i\xi'(t)+\xi(t)H_S(t). \label{eq2}
\end{align}
\end{subequations}
Observing the above equations, we see that the solutions for $H_1(t), H_2(t), H_4(t)$ are not unique, and $H_1$ can be taken as the only free quantity. Therefore, we can add a gauge in order to obtain unique solutions. However, the value of $H_1$ is artificially assigned in Ref.\cite{wu2019observation}, so some unique properties of the resulting dilated $\hat{H}_{AS}$ may be masked. We will see this in Sec.\ref{example}; in particular, in Fig.\ref{magnus3_eigenvalue} we will see that the eigenspectrum of $\hat{H}_{AS}$ actually has a symmetric property, and that the $\hat{H}_{AS}$ obtained in Ref.\cite{wu2019observation} is actually the result of the application of a symmetric gauge. We thereby provide more mathematical completeness.
By observing Eq.\eqref{H_dag}, we know that the space on which $\hat{H}_{AS}(t)$ acts is $2n$-dimensional; however, $\rho_{AS}$ as defined in Eq.\eqref{rho_rhoS} is supported on only an $n$-dimensional subspace. Consequently, similar to Eq.\eqref{rho_rhoS}, we can assume that
\begin{align}\label{rho_rhoS-2}
\rho_{AS}^\bot(t)&=
\begin{bmatrix}
-\xi^\dag(t) \\
I_S
\end{bmatrix}\cdot\rho_S(t)\cdot
\begin{bmatrix}
-\xi(t) &I_S
\end{bmatrix}\nonumber\\
&=\begin{bmatrix}
\xi^\dag(t)\cdot\rho_S(t)\cdot\xi(t) &-\xi^\dag(t)\cdot\rho_S(t)\\
-\rho_S(t)\xi(t)&\rho_S(t)
\end{bmatrix}.
\end{align}
It is easy to check that $\mathrm{Tr}[\rho_{AS}(t)\rho_{AS}^\bot(t)]\equiv0$, which means that the supports of $\rho_{AS}$ and $\rho_{AS}^\bot$ are mutually orthogonal; hence $\rho_{AS}$ and $\rho_{AS}^\bot$ are located in two different orthogonal subspaces of the space on which $\hat{H}_{AS}(t)$ acts. Therefore, we can adopt the following symmetric gauge (there are also some other valid gauges \cite{Zhang2019,Huang2019,Luiz2020}):
\begin{equation}\label{gauge}
\frac{\mathrm{d}{\rho_{AS}^\bot(t)}}{\mathrm{d}t}=-i[\hat{H}_{AS}(t), \rho_{AS}^\bot(t)].
\end{equation}
With this gauge, according to the probability conservation principle again, similar to Eq.\eqref{probability conservation}, we can also derive another relation between $M(t)$ and $\xi(t)$ as follows:
\begin{equation}\label{M_xi-2}
M(t)=\xi(t)\xi^\dag(t)+I_S,
\end{equation}
where $M(t)$ has been given in Eq.\eqref{M_TD}. Comparing the Eq.\eqref{M_xi-2} with the Eq.\eqref{M_xi}, we get
\begin{equation}\label{xi_xi}
\xi(t)\xi^\dag(t)=\xi^\dag(t)\xi(t),
\end{equation}
which means $\xi(t)$ is a normal operator under the symmetric gauge in Eq.\eqref{gauge}; therefore, for convenience, we take $\xi(t)$ to be Hermitian, and then
\begin{equation}\label{xi}
\xi(t)=[M(t)-I_S]^\frac{1}{2}.
\end{equation}
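Numerically, Eq.\eqref{xi} amounts to a Hermitian matrix square root; a minimal sketch (our own illustration, assuming NumPy and a precomputed $M(t)\geqslant I_S$) is:
\begin{verbatim}
import numpy as np

def xi_from_M(M_t):
    # xi(t) = [M(t) - I_S]^{1/2} via the spectral decomposition of M(t) - I_S.
    w, U = np.linalg.eigh(M_t - np.eye(M_t.shape[0]))
    return (U * np.sqrt(np.clip(w, 0.0, None))) @ U.conj().T
\end{verbatim}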
At the same time, by adopting a method similar to that of Eq.\eqref{H_hat_map_H}, we can obtain the following relations:
\begin{subequations}\label{relation2}
\begin{align}
-H_1(t)\xi^\dag(t)+H_2(t)&=-i\xi^\dag{'}(t)-\xi^\dag(t)H_S(t) \label{eq3}\\
-H_2^\dag(t)\xi^\dag(t)+H_4(t)&=H_S(t). \label{eq4}
\end{align}
\end{subequations}
Then, multiplying Eq.\eqref{eq1} on the right by $\xi(t)$ and substituting it into Eq.\eqref{eq3}, we can obtain
\begin{align}\label{H_2}
H_2(t)&=[-i\xi'(t)+H_S(t)\xi(t)-\xi(t)H_S(t)]M^{-1}(t) \nonumber \\
&=K(t)\xi(t)-\xi(t)K(t)-\frac{i}{2}[\xi'(t)M^{-1}(t)+\nonumber\\
&\quad M^{-1}(t)\xi'(t)].
\end{align}
It is obvious that $H_2(t)$ is anti-Hermitian. The details of the derivation are given in Appendix \ref{appendix_derivation-H_24}.
Similarly, multiplying Eq.\eqref{eq2} on the right by $\xi(t)$ and combining the result with Eq.\eqref{eq4}, we obtain
\begin{align}\label{H_4}
H_4(t)=&[i\xi'(t)\xi(t)+\xi(t)H_S(t)\xi(t)+H_S(t)]M^{-1}(t) \nonumber \\
=&K(t)\!+\!\xi(t)K(t)\xi(t)\!+\!\frac{i}{2}[\xi'(t)\xi(t)M^{-1}(t)\!-\nonumber\\
&M^{-1}(t)\xi(t)\xi'(t)],
\end{align}
Obviously, $H_4(t)$ is Hermitian. The details of the derivation are also given in Appendix \ref{appendix_derivation-H_24}.
Next, according to Eq.\eqref{H_2} and Eq.\eqref{eq1}, we obtain
\begin{align}\label{H_1}
H_1(t)=&H_S(t)-H_2(t)\xi(t) \nonumber \\
=&H_S(t)-[-i\xi'(t)+H_S(t)\xi(t)-\xi(t)H_S(t)]\cdot \nonumber\\
&M^{-1}(t)\cdot\xi(t) \nonumber \\
=&[i\xi'(t)\xi(t)+\xi(t)H_S(t)\xi(t)+H_S(t)]M^{-1}(t) \nonumber \\
=&K(t)+\xi(t)K(t)\xi(t)+\frac{i}{2}[\xi'(t)\xi(t)M^{-1}-\nonumber\\
&M^{-1}\xi(t)\xi'(t)] \nonumber \\
=&H_4(t).
\end{align}
Consequently, according to Eq.\eqref{H_1} and Eq.\eqref{H_2} we can easily get
\begin{small}
\begin{widetext}
\begin{subequations}\label{H1_add_iH2}
\begin{align}
H_1(t)+iH_2(t)&=[I_S-i\xi(t)]H_S(t)M^{-1}(t)[I_S+i\xi(t)]+\xi'(t)M^{-1}(t) \nonumber \\
&=[I_S-i\xi(t)]K(t)[I_S+i\xi(t)]+\frac{1}{2}[\xi'(t)M(t)^{-1}[I_S+i\xi(t)]+[I_S-i\xi(t)]M(t)^{-1}\xi'(t)] \nonumber \\
&=[I_S-i\xi(t)]\left\{K(t)+\frac{1}{2}[I_S-i\xi(t)]^{-1}[\xi'(t)[I_S-i\xi(t)]^{-1}+[I_S+i\xi(t)]^{-1}\xi'(t)][I_S+i\xi(t)]^{-1}\right\}[I_S+i\xi(t)], \\
H_1(t)-iH_2(t)&=[I_S+i\xi(t)]H_S(t)M^{-1}(t)[I_S-i\xi(t)]-\xi'(t)M^{-1}(t) \nonumber \\
&=[I_S+i\xi(t)]K(t)[I_S-i\xi(t)]-\frac{1}{2}[\xi'(t)M(t)^{-1}[I_S-i\xi(t)]+[I_S+i\xi(t)]M(t)^{-1}\xi'(t)] \nonumber \\
&=[I_S+i\xi(t)]\left\{K(t)-\frac{1}{2}[I_S+i\xi(t)]^{-1}[\xi'(t)[I_S+i\xi(t)]^{-1}+[I_S-i\xi(t)]^{-1}\xi'(t)][I_S-i\xi(t)]^{-1}\right\}[I_S-i\xi(t)],
\end{align}
\end{subequations}
\end{widetext}
\end{small}
where $M(t)=\xi^2(t)+I_S=[I_S\pm i\xi(t)]\cdot[I_S\mp i\xi(t)]$. Both $H_1(t)+iH_2(t)$ and $H_1(t)-iH_2(t)$ are Hermitian.
Finally, according to Eq.\eqref{H_dag}, we obtain the final form of $\hat{H}_{AS}(t)$:
\begin{align}\label{H_hat_t}
\hat{H}_{AS}(t)=&I_A\otimes H_1(t)+i\sigma_y\otimes H_2(t) \nonumber\\
=&|+_y\rangle\langle+_y|\otimes[H_1(t)+iH_2(t)]\nonumber\\
+&|-_y\rangle\langle-_y|\otimes[H_1(t)-iH_2(t)],
\end{align}
where $|+_y\rangle=\frac{1}{\sqrt{2}}(|0\rangle_A+i|1\rangle_A)$ and $|-_y\rangle=\frac{1}{\sqrt{2}}(|0\rangle_A-i|1\rangle_A)$ are the eigenstates of $\sigma_y$ corresponding to the eigenvalues $+1$ and $-1$, respectively. The equation above has a form similar to the pure-state result given in Ref.\cite{wu2019observation} (see Eqs.(19)-(21) in their Supplementary Materials); the differences are mainly due to the fact that they actually use the basis $\{|-_y\rangle,-i|+_y\rangle\}$, while we use the basis $\{|0\rangle,|1\rangle\}$. According to Eq.\eqref{H_hat_t}, $\hat{H}_{AS}(t)$ is highly symmetric under the symmetric gauge of Eq.\eqref{gauge}, and it uses only a single qubit as the auxiliary system. We have to point out that, whether in experiments or in numerical calculations, as long as the time-dependent $\hat{H}_{AS}(t)$ given in Eq.\eqref{H_hat_t} is used, the problem of the chronological product can hardly be avoided. For numerical calculations the reason is obvious; in an experiment, $\hat{H}_{AS}(t)$ usually has to be parameterized in advance by numerically computing $M(t)$ given in Eq.\eqref{M_TD}, which requires numerically evaluating the chronological product generated by $H_S(t)$. Hence the chronological product must be dealt with unless $\hat{H}_{AS}(t)$ at any two times commutes with itself (for example, when $H_S$ is time-independent and $\mathcal{PT}$-symmetry unbroken, $M(0)$ can be chosen as its metric operator $\eta$, as in Eq.\eqref{Mt-eta} \cite{wu2019observation}). We will see the impact of the chronological product on the simulation accuracy more clearly in Fig.\ref{Magnus_expansion—figs} and Fig.\ref{Magnus2-fig} in the example of Sec.\ref{example}.
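As a concrete illustration of Eq.\eqref{H_hat_t}, the dilated Hamiltonian can be assembled from given matrices $H_1(t)$ and $H_2(t)$ at a fixed time as in the following sketch (the function name is ours; $H_1$ is assumed Hermitian and $H_2$ anti-Hermitian, as derived above, so that the result is Hermitian):
\begin{verbatim}
import numpy as np

def dilated_hamiltonian(H1, H2):
    """H_AS = I_A (x) H1 + i sigma_y (x) H2, cf. Eq. (H_hat_t)."""
    I_A = np.eye(2)
    sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    H_AS = np.kron(I_A, H1) + 1.0j * np.kron(sigma_y, H2)
    assert np.allclose(H_AS, H_AS.conj().T)   # Hermiticity check
    return H_AS
\end{verbatim}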
For convenience of expression, we define the "$\circ$" operation as $C\circ[\cdot]=C[\cdot]C^\dag$. Looking back at Eqs.\eqref{H_dyn} and \eqref{H_hat_dyn}, their solutions can be expressed as:
\begin{subequations}\label{H_hat_H_dyn-solutions}
\begin{align}
\rho_S(t)&=\mathcal{U}_S\circ\rho_S(0)\equiv\mathbb{T}e^{-i\int^t_0 H_S(\tau)\mathrm{d}\tau}\circ\rho_S(0),\label{H_hat_H_dyn-solutions1}\\
\rho_{AS}(t)&=U_{AS}\circ\rho_{AS}(0)\equiv\mathbb{T}e^{-i\int^t_0 \hat{H}_{AS}(\tau)\mathrm{d}\tau}\circ\rho_{AS}(0), \label{H_hat_H_dyn-solutions2}
\end{align}
\end{subequations}
where $\hat{H}_{AS}(t)$ has been obtained in Eq.\eqref{H_hat_t}, $\mathcal{U}_S$ is the non-unitary evolution operator associated with the non-Hermitian Hamiltonian $H_S$, and $U_{AS}$ is the unitary evolution operator associated with the Hermitian Hamiltonian $\hat{H}_{AS}$. We will call the method of obtaining $\rho_{AS}(t)$ directly through Eq.\eqref{H_hat_H_dyn-solutions2} the dilation method, and the method of first obtaining $\rho_S(t)$ from Eq.\eqref{H_hat_H_dyn-solutions1} and then assembling $\rho_{AS}(t)$ via Eq.\eqref{rho_rhoS} the combination method. The difference between the two methods is that the latter avoids computing the dilated Hamiltonian $\hat{H}_{AS}$, which contains the term $\xi'(t)$ and may increase the error of the numerical calculation; it only requires $M(t)$ given in Eq.\eqref{M_TD} and $\xi(t)$ given in Eq.\eqref{xi}. The error between the numerical results of the dilation method and the combination method can be defined as follows:
\begin{align}\label{norm_Delta}
\Delta_\rho(t)=\|{\rho_{AS}}_\mathrm{dilation}(t)-{\rho_{AS}}_\mathrm{combination}(t)\|_F,
\end{align}
where ${\rho_{AS}}_\mathrm{dilation}(t)$ denotes the result calculated by the dilation method, ${\rho_{AS}}_\mathrm{combination}(t)$ denotes the result calculated by the combination method, and the symbol "$F$" denotes the Frobenius norm.
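Eq.\eqref{norm_Delta} is straightforward to evaluate; a short sketch (assuming the two density matrices are available as arrays at the same time $t$) is
\begin{verbatim}
import numpy as np

def delta_rho(rho_dilation, rho_combination):
    """Frobenius-norm error of Eq. (norm_Delta)."""
    return np.linalg.norm(rho_dilation - rho_combination, ord='fro')
\end{verbatim}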
We define the measurement operators $\Pi_k=|k\rangle_A\langle k|\otimes I_S$, $k\in \{0,1\}$, and the maps $\mathcal{M}_k[\rho_{AS}]=\mathrm{Tr}_A[\Pi_k\circ\rho_{AS}]$. Then Eq.\eqref{H_hat_H_dyn-solutions2} can be mapped to Eq.\eqref{H_hat_H_dyn-solutions1} by the map $\mathcal{M}_0$, which is realized experimentally by performing the projection measurement $\Pi_0\equiv|0\rangle_A\langle0|$ on the auxiliary qubit:
\begin{align}\label{M_Map}
\mathcal{M}_0[\rho_{AS}(t)]=\rho_S(t)=\mathbb{T}e^{-i\int^t_0 H_S(\tau)\mathrm{d}\tau}\circ\rho_S(0),
\end{align}
which means that we can simulate the non-unitary evolution (the dynamics) of the non-Hermitian system $H_S$ within the higher-dimensional system $\hat{H}_{AS}$. It is worth noting that the map $\mathcal{M}_0$ is realized by a fixed projection measurement (post-selection) $\Pi_0\equiv|0\rangle_A\langle0|$, which is time-independent and therefore easy to realize in experiment \cite{wu2019observation}. However, this process $\mathcal{M}_0$ is probabilistic, and the corresponding success probability $P_0(t)$ is
\begin{align}
P_0(t)=\mathrm{Tr}[\rho_S(t)].
\end{align}
Obviously, in general $P_0(t)<1$. After the measurement $\Pi_0$, the state of the main system $S$ is ${\rho_S}_\mathrm{N}(t)$ in Eq.\eqref{rho_S_N}, obtained with the success probability $P_0(t)$ given above. It is worth mentioning that the success probability may be optimized by technical means, such as the local-operations-and-classical-communication (LOCC) protocol proposed in Ref.\cite{Li2022}, which may be of significance for experiments.
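For completeness, a minimal sketch of the post-selection map $\mathcal{M}_0$ and of the success probability (assuming the ordering $A\otimes S$ used throughout, so that $\rho_{AS}$ is stored as a $2n\times 2n$ array whose upper-left $n\times n$ block corresponds to the ancilla state $|0\rangle_A$; the function name is ours):
\begin{verbatim}
import numpy as np

def postselect_ancilla_zero(rho_AS, n):
    """Return rho_S(t), the renormalized state and P_0(t) after Pi_0."""
    rho_S = rho_AS[:n, :n]          # |0><0|_A block equals rho_S(t)
    P0 = np.real(np.trace(rho_S))   # success probability Tr[rho_S(t)]
    return rho_S, rho_S / P0, P0
\end{verbatim}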
In particular, when $H_S$ is $\mathcal{PT}$-symmetry unbroken and time-independent, according to Eq.\eqref{Mt-eta} we can take $M(t)\equiv \eta$, where $\eta$ is a positive metric operator with $\eta>1$. Then, according to Eqs.\eqref{K}, \eqref{H_1}, \eqref{H_2} and \eqref{H_4}, the operators $K, H_1, H_2, H_4$ all become time-independent; specifically, using Eqs.\eqref{jordan_block_H1}, they become \cite{Huang2018,Huang2019,Li2022}:
\begin{subequations} \label{H124_TID}
\begin{align}
M(t)&\equiv\eta\Rightarrow\xi=(\eta-I_S)^\frac{1}{2},\\
K&\equiv H_S\cdot\eta^{-1}=\Phi\cdot E_S\cdot\Phi^\dag, \\
H_1&=H_S\eta^{-1}+\xi H_S\eta^{-1}\xi \nonumber\\
&=\Phi E_S\Phi^\dag+\xi\cdot\Phi E_S\Phi^\dag\cdot\xi \\
H_2&=H_S\eta^{-1}\xi-\xi H_S\eta^{-1}, \nonumber \\
&=\Phi E_S\Phi^\dag\cdot\xi-\xi\cdot\Phi E_S\Phi^\dag \nonumber \\
&=-H_2^\dag, \\
H_4&=H_S\eta^{-1}+\xi H_S\eta^{-1}\xi \nonumber \\
&=\xi^{-1}(H_S\eta^{-1}+\eta H_S-H_S-H_S^\dag)\xi^{-1}+H_S\eta^{-1} \nonumber \\
&=H_1.
\end{align}
\end{subequations}
They are the same as the results we obtained in Ref.\cite{Li2022}. Therefore, according to Eq.\eqref{H1_add_iH2}, $H_1\pm iH_2$ will also become time-independent:
\begin{subequations}
\begin{align}
H_1+iH_2&=(I_S-i\xi)\Phi\circ E_S=\Phi_{+}\circ E_S, \\
H_1-iH_2&=(I_S+i\xi)\Phi\circ E_S=\Phi_{-}\circ E_S,
\end{align}
\end{subequations}
where $\Phi_{\pm}=(I_S\mp i\xi)\Phi$.
At the same time, the dilated higher-dimensional system $\hat{H}_{AS}$ will also become time-independent, and according to the result of Eq.\eqref{H_hat_t}:
\begin{align}\label{H_hat_unique_TID}
\hat H_{AS}&=I_A\otimes H_1+i\sigma_y\otimes H_2 \nonumber\\
&=\!I_A\!\!\otimes\!(\!H_S\eta^{-1}\!\!+\!\!\xi H_S\eta^{-1}\xi)\!+\!i\sigma_y\!\!\otimes\!(H_S\eta^{-1}\xi\!-\!\xi H_S\eta^{-1}\!)\nonumber \\
&=V_{AS}\circ [I_A\otimes E_S],
\end{align}
where
\begin{equation*}\label{V}
\begin{split}
V_{AS}&=\frac{1}{\sqrt{2}}[(I_A+i\sigma_x)\otimes I_S-i(\sigma_y+\sigma_z)\otimes\xi]\cdot I_A\otimes\Phi \\
&=\frac{1}{\sqrt{2}}
\begin{bmatrix}
\Phi_{+} &i\Phi_{-} \\
i\Phi_{+} &\Phi_{-}
\end{bmatrix} \\
&=\frac{1}{\sqrt{2}}
\begin{bmatrix}
(I_S-i\xi)\Phi &i(I_S+i\xi)\Phi \\
i(I_S-i\xi)\Phi &(I_S+i\xi)\Phi
\end{bmatrix}
\end{split}
\end{equation*}
is a unitary operator, although $V_{AS}$ is not unique. This result also coincides with that in Ref.\cite{Li2022}. From Eq.\eqref{H_hat_unique_TID} above, it can be seen that in this situation the degeneracy of the higher-dimensional dilated system $\hat{H}_{AS}$ is twice that of the lower-dimensional $\mathcal{PT}$-symmetric system $H_S$.
\section{Vectorization of density operators and matrixization of Liouvillian superoperators in open quantum system\label{VD_and_ML}}
The evolution equation of an open quantum system under the Markovian approximation (i.e., memoryless) can be expressed by the Lindblad master equation \cite{Minganti2019}:
\begin{equation}\label{master equation}
\frac{\mathrm{d}\rho_{AS}(t)}{\mathrm{d} t}=\mathcal{L}\rho_{AS}(t)=-i[\hat H_{AS}(t),\rho_{AS}(t)]+\sum_{\mu}\mathcal{D}[\Gamma_\mu]\rho_{AS}(t),
\end{equation}
where $\rho_{AS}(t)$ is the density operator of the system, $\Gamma_\mu$ is the jump operator, $\mathcal{L}$ is the Liouvillian superoperator, and $\mathcal{D}[\Gamma_\mu]$ is the dissipator related to the $\Gamma_\mu$, which is used to describe the dissipation:
\begin{equation}\label{Lindblad}
\mathcal{D}\left[\Gamma_\mu\right] \rho_{AS}(t)=\Gamma_\mu \rho_{AS}(t) \Gamma_\mu^\dag-\frac{\Gamma_\mu^\dag \Gamma_\mu}{2} \rho_{AS}(t)-\rho_{AS}(t) \frac{\Gamma_\mu^\dag \Gamma_\mu}{2}.
\end{equation}
The operator-sum representation is a convenient tool for describing open systems, and various models of decoherence and dissipation in open quantum systems have been widely studied within it, such as the amplitude damping (AD) channel, the phase damping (PD) channel, and the depolarizing (Dep) channel. In some simple decoherence cases (such as the case of vanishing Hamiltonian, $H=0$), these models have a concise form in the framework of the operator-sum representation; however, when the Hamiltonian becomes complicated, like the one in Eq.\eqref{H_hat_t}, the application of the operator-sum representation becomes indirect and inconvenient. Therefore, the vectorization of density operators and matrixization of superoperators (VDMS) technique may be a more convenient tool (see the Appendix of Ref.\cite{Minganti2019} for details).
We adopt the VDMS technique to carry out the Kraus decomposition of density operators in the matrix basis $\{|\alpha\rangle\langle\beta|, \alpha, \beta=1,2,\cdots,n\}$ \cite{andersson2007finding}. In this technique, a matrix $A$ is mapped to a vector $\vec{A}$ by stacking all the rows of $A$ into a column in order:
\begin{align*}
A=
\begin{bmatrix}
a_1 &a_2\\
a_3 &a_4
\end{bmatrix}\rightarrow
\vec{A}=
\begin{bmatrix}
a_1\\a_2\\a_3\\a_4
\end{bmatrix},
\end{align*}
In the same way, the density operator $\rho$ is mapped to the vector $\vec\rho$, and a superoperator $\mathcal{A}[\cdot]=A[\cdot]B$ is mapped to a matrix, which can be written as $\overline{\overline{\mathcal{A}}}=A\otimes B^{\mathrm{T}}$, so that
$\overrightarrow {{\cal A}[ \cdot ]} = \overrightarrow {A[ \cdot ]B} = \overline {\overline {\mathcal{A} }}\, \overrightarrow{[ \cdot ]} = (A \otimes {B^{\rm{T}}})\, \overrightarrow{[ \cdot ]}$, where the superscript "$\mathrm{T}$" denotes the transpose operation.
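The row-stacking identity above can be checked directly with random matrices; a small self-contained sketch:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.standard_normal((n, n))
X = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

vec = lambda Y: Y.reshape(-1)        # row-stacking (C order)
lhs = vec(A @ X @ B)
rhs = np.kron(A, B.T) @ vec(X)
print(np.allclose(lhs, rhs))         # True
\end{verbatim}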
Meanwhile, given a matrix
\begin{align*}
B=
\begin{bmatrix}
b_1 &b_2 \\
b_3 &b_4
\end{bmatrix},
\end{align*}
in the VDMS technique, the Hilbert-Schmidt inner product can be introduced as follows:
\begin{align}
\langle B|A\rangle=
\begin{bmatrix}
b_1^*&b_2^*&b_3^*&b_4^*
\end{bmatrix}
\begin{bmatrix}
a_1\\a_2\\a_3\\a_4
\end{bmatrix}
=\mathrm{Tr}[B^\dag A],
\end{align}
where $|A\rangle$ and $\langle B|$ are just the $\vec{A}$ and $\vec{B}^\dag$ mentioned above. From this point of view, it is easy to see that the vectors of $\rho_{AS}$ given in Eq.\eqref{rho_rhoS} and of $\rho^\bot_{AS}$ given in Eq.\eqref{rho_rhoS-2} are still mutually orthogonal.
Therefore, the Lindblad superoperator $\mathcal{L}$ in Eq.\eqref{master equation} will be mapped to
\begin{align}\label{Lindblad operator_matrix}
\overline{\overline{\mathcal{L}}}(t)=&-i[\hat{H}_{AS}(t)\otimes I_{AS}-I_{AS}\otimes\hat{H}^{\mathrm{T}}_{AS}(t)]\nonumber\\
&+\sum_\mu{ [\Gamma_\mu\otimes({\Gamma^\dag_\mu})^\mathrm{T}-\frac{\Gamma^\dag_\mu\Gamma_\mu\otimes I_{AS}}{2}-\frac{I_{AS}\otimes({\Gamma^\dag_\mu\Gamma_\mu})^\mathrm{T}}{2} ] },
\end{align}
After that, Eq.\eqref{H_hat_dyn} will be mapped to
\begin{equation}\label{Lindblad equation vec}
\frac{\mathrm{d}\vec{\rho}_{AS}(t)}{\mathrm{d} t}=\overline{\overline{\mathcal{L}}}(t)\vec{\rho}_{AS}(t),
\end{equation}
and then we can get the solution of the above equation:
\begin{equation}\label{solution_master_equation}
\vec{\rho}_{AS}(t)=\mathbb{T}e^{\int_0^t \overline{\overline{\mathcal{L}}}(\tau)\mathrm{d}\tau }\vec{\rho}_{AS}(0),
\end{equation}
where $\mathbb{T}$ is the time-ordering operator mentioned above in Eq.\eqref{eta_TD}, and $\vec{\rho}_{AS}(0)$ is the vector representation of the initial density operator $\rho_{AS}(0)$. In general, the dilated $\hat{H}_{AS}(t)$ is time-dependent, so $\overline{\overline{\mathcal{L}}}$ is also time-dependent, and the problem of the chronological product may again have to be dealt with (see Appendix \ref{chronological product} and the example in Sec.\ref{example} for more details).
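As an illustration of Eqs.\eqref{Lindblad operator_matrix}--\eqref{solution_master_equation}, the following sketch builds the matrixized Lindbladian and propagates a vectorized state for a time-independent generator, where the time-ordering is trivial and a single matrix exponential suffices (function names are ours):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def liouvillian_matrix(H, jump_ops):
    """Row-stacking matrixization, cf. Eq. (Lindblad operator_matrix)."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(H, I) - np.kron(I, H.T))
    for G in jump_ops:
        GdG = G.conj().T @ G
        # Gamma (x) (Gamma^dag)^T = Gamma (x) Gamma^*
        L += np.kron(G, G.conj()) \
             - 0.5 * np.kron(GdG, I) - 0.5 * np.kron(I, GdG.T)
    return L

def evolve_vectorized(rho0, H, jump_ops, t):
    """vec(rho(t)) = exp(Lbar t) vec(rho0), reshaped back to a matrix."""
    d = H.shape[0]
    Lbar = liouvillian_matrix(H, jump_ops)
    return (expm(Lbar * t) @ rho0.reshape(-1)).reshape(d, d)
\end{verbatim}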
\section{Numerical calculation of the relevant linear time-dependent matrix differential equations\label{numerical_calculation}}
When the Hamiltonians $H_S$ and $\hat{H}_{AS}$ are time-independent, the related calculations, including Eq.\eqref{M_TD}, Eq.\eqref{H_hat_t} and Eqs.\eqref{H_hat_H_dyn-solutions}, are trivial. However, when they are time-dependent, the problem of the chronological product may have to be considered. According to Magnus's theory \cite{Magnus1954,Blanes2009}, the solution of a linear matrix differential equation (including the time-dependent Schr\"{o}dinger equation):
\begin{align}\label{matrix-differential-equation2}
Y^{\prime}(t)=A(t) Y(t), \quad Y\left(t_{0}\right)=Y_{0},
\end{align}
can be expressed by the form of exponential matrix like:
\begin{align}\label{Magnus2}
Y(t)=\exp \left(\Omega\left(t, t_{0}\right)\right) Y_{0},
\end{align}
where $Y_{0}$ is the initial vector (state), and
\begin{align}\label{Omega2}
\Omega(t,t_0)=\sum_{k=1}^{\infty} \Omega_{k}(t,t_0)
\end{align}
is the matrix series known as the Magnus expansion or Magnus series, and $\Omega_n(t,t_0)$ is its $n$-th term (see Appendix \ref{chronological product} for more details).
However, for an arbitrary TD linear operator (matrix) $A(t)$, the above Magnus series may diverge \cite{Blanes2009}; a sufficient but not necessary condition for it to converge for $t\in[t_0, t_f)$ is:
\begin{align}\label{convergence2}
\int_{t_0}^{t_f}\|A(s)\|_{2} \mathrm{d}s<\pi,
\end{align}
where $\|A\|_{2}$ denotes the 2-norm of $A$. The Magnus series is guaranteed to converge in the interval $t\in[t_0, t_0+T_c)$ (see Appendix \ref{chronological product} for more details), where $T_c$, the critical time of convergence, is defined as the time at which the above integral reaches $\pi$; it depends only on the operator $A$ itself. This places a restriction on the time step $h$ in numerical calculations: when $h<T_c$ the results can be trusted, whereas when $h>T_c$ the results become untrustworthy and the error may be amplified once the Magnus series is truncated at some term $\Omega_n$; the higher the order of the retained terms $\Omega_n$, the more pronounced this effect is.
A way to overcome the finite convergence interval is to divide the interval into $N$ segments such that the Magnus series in each segment $[t_k,t_{k+1}]$ satisfies the convergence condition Eq.\eqref{convergence2}, i.e., $t_{k+1}-t_k<T_c$; then Eq.\eqref{Magnus2} can be replaced by (see Eq.(240) in Ref.\cite{Blanes2009}):
\begin{align}\label{Magnus_product}
Y\left(t_N\right)=\prod_{k=0}^{N-1} \exp \left(\Omega\left(t_{k+1}, t_{k}\right)\right) Y_0,
\end{align}
where $\Omega\left(t_{k+1}, t_{k}\right)$ can be appropriately truncated to $\Omega_n\left(t_{k+1}, t_{k}\right)$ in practical use.
Suppose a fixed step size $h$ is adopted in the computation. According to the analysis in Ref.\cite{Blanes2009} (see around Eq.(66) and Eq.(242) for details), the error is $O(h^3)$ when the Magnus series is computed up to the first term $\Omega_1$, and $O(h^5)$ when it is computed up to the second term $\Omega_2$. It is worth noting that in numerical calculations we usually also have to consider the cost of the matrix exponential $\exp \left(\Omega\left(t, t_{0}\right)\right)$, which is expensive, especially when the matrix becomes large, so its frequent computation should be avoided. Therefore, to meet a target accuracy while saving computing resources, the time step and the cut-off term of the Magnus series $\Omega_n$ have to be carefully balanced: we can reduce the number of matrix-exponential evaluations by selecting a large enough $h$, and compensate for the loss of accuracy by keeping higher-order terms of the Magnus series in Eq.\eqref{Magnus_product}. We will see this compensation in the comparison between Fig.\ref{Magnus_expansion—figs} and Fig.\ref{Magnus2-fig} in the next section.
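A minimal sketch of the piecewise scheme of Eq.\eqref{Magnus_product}, truncated at $\Omega_1$ and with the integral over each segment approximated by the midpoint rule (consistent with the $O(h^3)$ estimate quoted above; $A$ is any user-supplied matrix-valued function, e.g. $A(t)=-i\hat{H}_{AS}(t)$ or $A(t)=\overline{\overline{\mathcal{L}}}(t)$):
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def magnus1_propagate(A, Y0, t0, tN, N):
    """Y(tN) ~ prod_k exp(Omega_1(t_{k+1}, t_k)) Y0, Omega_1 ~ h A(midpoint)."""
    h = (tN - t0) / N
    Y = Y0.copy()
    for k in range(N):
        Y = expm(h * A(t0 + (k + 0.5) * h)) @ Y
    return Y
\end{verbatim}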
\section{An example: 2-dimensional $\mathcal{PT}$-symmetric system\label{example}}
In this section, we analyze an example: 2-dimensional $\mathcal{PT}$-symmetric system:
\begin{equation}\label{example_H}H_S=
\begin{bmatrix}
re^{i\theta}&s \\
s &re^{-i\theta}
\end{bmatrix}, r, s\in\mathbb{R},\theta\in[-\pi/2, \pi/2],
\end{equation}
Here $\theta$ can be understood as a parameter representing the degree of non-Hermiticity of the Hamiltonian $H_S$, which increases with $|\theta|$ (when $\theta=0$, $H_S$ is Hermitian; when $\theta=\pi/2$, the non-Hermiticity is maximal; see Sec.V in Ref.\cite{Li2022} for details). The eigenvalues of $H_S$ are $E_\pm=r\cos{\theta}\pm\sqrt{s^2-r^2\sin^2{\theta}}$: when $s^2-r^2\sin^2{\theta}>0$, $H_S$ is $\mathcal{PT}$-symmetry unbroken, whereas when $s^2-r^2\sin^2{\theta}<0$, $H_S$ is $\mathcal{PT}$-symmetry broken and the two eigenvalues form a complex-conjugate pair.
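The two phases of Eq.\eqref{example_H} can be checked numerically; a short sketch comparing the closed-form eigenvalues $E_\pm$ with a direct diagonalization (parameter values are the ones used below):
\begin{verbatim}
import numpy as np

def H_S(r, s, theta):
    return np.array([[r * np.exp(1j * theta), s],
                     [s, r * np.exp(-1j * theta)]])

r, s, theta = 0.6, 1.0, np.pi / 2        # unbroken: s^2 > r^2 sin^2(theta)
E_num = np.sort_complex(np.linalg.eigvals(H_S(r, s, theta)))
E_ana = np.sort_complex(r * np.cos(theta)
        + np.array([-1, 1]) * np.sqrt(s**2 - r**2 * np.sin(theta)**2 + 0j))
print(E_num, E_ana)                      # both approximately -0.8 and +0.8
\end{verbatim}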
The results for the pure-state-vector case have been given in Ref.\cite{wu2019observation}. In order to facilitate comparison, we set $\theta=\pi/2,s=1$, which gives $H_S$ the same form as the one in Ref.\cite{wu2019observation}; when $r<1$ this parameter configuration lies in the $\mathcal{PT}$-symmetry unbroken phase, and the eigenvalues of $H_S$ are $E_\pm=\pm\sqrt{1-r^2}$. At the same time, we set $M(0)=5I_S$ in Eq.\eqref{M_TD}. We set the initial density operator of the pure-state case as $\rho_S(0)=\frac{1}{5}|0\rangle_S\langle0|$, which corresponds to the initial state $|0\rangle_S$ in Ref.\cite{wu2019observation}; the coefficient $1/5$ is required by $\mathrm{Tr}[\rho_{AS}]\equiv1$ according to Eq.\eqref{unit_trace}. We set the initial density operator of the mixed-state case as ${\rho_S}_{\mathrm{mixed}}(0)=\frac{1}{30}\begin{pmatrix}
4 & 1 \\
1 & 2
\end{pmatrix}$, which cannot be described by the pure-state vectors used in Ref.\cite{wu2019observation}.
\subsection{The effectiveness of the density operators tool and the eigenspectrum of Hamiltonian before and after dilation}
\begin{figure}
\caption{The eigenvalues of the $\mathcal{PT}$-symmetric Hamiltonian $H_S$ ($E_\pm$) and of its dilated Hermitian Hamiltonian $\hat{H}_{AS}$ ($E_1,E_2,E_3,E_4$, in descending order): (a) the $\mathcal{PT}$-symmetry unbroken phase ($r=0.6$); (b) the broken phase ($r=1.4$).}
\label{unbroken1_magnus3_eigenvalue}
\label{broken1_magnus3_eigenvalue}
\label{magnus3_eigenvalue}
\end{figure}
The eigenvalues of the $\mathcal{PT}$-symmetric Hamiltonian $H_S$ ($E_\pm$) and the eigenvalues of its dilated Hermitian Hamiltonian $\hat{H}_{AS}$ ($E_1,E_2,E_3,E_4$, arranged in descending order) are given in Fig.\ref{magnus3_eigenvalue}. Fig.\ref{magnus3_eigenvalue}(a) refers to the $\mathcal{PT}$-symmetry unbroken phase ($r=0.6$, $E_\pm=\pm\sqrt{1-0.6^2}=\pm0.8$), while Fig.\ref{magnus3_eigenvalue}(b) refers to the broken phase ($r=1.4$, where $E_\pm=\pm\sqrt{1-1.4^2}=\pm0.98i$ are complex conjugate, exactly as $\mathcal{PT}$-symmetry theory predicts \cite{Mostafazadeh2002b}). From Fig.\ref{magnus3_eigenvalue}(a) we can see that $E_\pm$ remain unchanged, while $E_1$ and $E_2$ oscillate around $E_+$, and $E_3$ and $E_4$ oscillate around $E_-$, changing periodically with time $t$ (in this case, the critical time $T_l$ of legitimacy given below Eq.\eqref{convergence2} is infinite for the setting $r=0.6$, i.e., $T_l=\infty$). Meanwhile, $E_1$ and $E_4$, as well as $E_2$ and $E_3$, are symmetric about $E=0$ (black dashed line). From Fig.\ref{magnus3_eigenvalue}(b) we can see that the $E_k$ ($k=1,2,3,4$) are also symmetric about $E=0$ (black dashed line), similarly to the unbroken phase. In fact, the symmetry in both the unbroken and the broken phase is caused by the symmetric gauge adopted in Eq.\eqref{gauge}. However, the periodic oscillation of the $E_k$ is destroyed, and each $E_k$ increases (or decreases) monotonically with time $t$. In particular, when $t\rightarrow T_l$ (the critical time of legitimacy for the setting $r=1.4$ is about $0.604$), $E_1$ ($E_4$) increases (decreases) sharply towards infinity, which is caused by one of the eigenvalues of $M(t)$ given in Eq.\eqref{M_TD} tending to one. At this moment the energy of the system $\hat{H}_{AS}$ may diverge, and hence it cannot be realized in an experiment. In summary, the critical time $T_l$ of legitimacy actually bounds the achievable duration of an experimental run.
We also characterize the evolution using the renormalized population ${P_\mathrm{N}}_0(t)$ of the state $\rho_S(0)$ in the main system $S$ \cite{wu2019observation}. In this situation, the renormalized population ${P_\mathrm{N}}_0(t)$ can be obtained by
\begin{align}\label{P_N0}
{P_\mathrm{N}}_0(t)=\frac{\mathrm{Tr}[|0\rangle_S\langle0|\cdot\rho_S(t)]}{\mathrm{Tr}[\rho_S(t)]}=\frac{\langle0|\rho_S(t)|0\rangle_S}{P_0(t)},
\end{align}
where $P_0(t)=\mathrm{Tr}[\rho_S(t)]$ has been given below Eq.\eqref{rho_S_N}. The result based on the pure-state vectors has been given analytically in Ref.\cite{wu2019observation} as follows:
\begin{widetext}
\begin{align}\label{reference}
{P_\mathrm{N}}_0(t)=
\begin{cases}
\frac{\left|e^{t \sqrt{r^{2}-1}}\left(r+\sqrt{r^{2}-1}\right)-e^{-t \sqrt{r^{2}-1}}\left(r-\sqrt{r^{2}-1}\right)\right|^{2}}{\left|e^{t \sqrt{r^{2}-1}}\left(r+\sqrt{r^{2}-1}\right)-e^{-t \sqrt{r^{2}-1}}\left(r-\sqrt{r^{2}-1}\right)\right|^{2}+\left|i e^{-t \sqrt{r^{2}-1}}-i e^{t \sqrt{r^{2}-1}}\right|^{2}}, &r \neq 1 \\
\frac{(t+1)^{2}}{(t+1)^{2}+t^{2}}, &r=1.
\end{cases}
\end{align}
\end{widetext}
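For completeness, a sketch evaluating Eq.\eqref{reference} numerically (complex arithmetic is used throughout so that the same branch also covers $r<1$, where $\sqrt{r^2-1}$ is imaginary):
\begin{verbatim}
import numpy as np

def P_N0_reference(t, r):
    """Renormalized population of Eq. (reference)."""
    if np.isclose(r, 1.0):
        return (t + 1)**2 / ((t + 1)**2 + t**2)
    w = np.sqrt(complex(r**2 - 1))
    a = np.exp(t * w) * (r + w) - np.exp(-t * w) * (r - w)
    b = 1j * np.exp(-t * w) - 1j * np.exp(t * w)
    return abs(a)**2 / (abs(a)**2 + abs(b)**2)

print(P_N0_reference(0.0, 0.6))   # 1.0 at t = 0, as expected
\end{verbatim}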
According to Eq.\eqref{H_hat_H_dyn-solutions1} and Eq.\eqref{P_N0} above, we can calculate the renormalized population ${P_\mathrm{N}}_0(t)$ for the initial density operator $\rho_S(0)$. To verify the effectiveness of the density-operator method of Sec.\ref{dilation_method}, we take $r=0.6$ and draw Fig.\ref{comparisons}. The green curve is drawn according to the analytical result of the pure-state-vector method given in Eq.\eqref{reference}, while the blue dotted curve is drawn according to Eqs.\eqref{H_hat_H_dyn-solutions1} and \eqref{P_N0}; the two results coincide completely, which means that the density-operator method is compatible with the pure-state-vector method, just as one would expect. It should be pointed out that the Hamiltonian $H_S$ in this example is time-independent, so the problem of the chronological product can be avoided in the calculation of $M(t)$ according to Eq.\eqref{M_TD}.
\begin{figure}
\caption{Comparison of the results for the 2-dimensional $\mathcal{PT}$-symmetric system ($r=0.6$): the renormalized population ${P_\mathrm{N}}_0(t)$ from the analytical pure-state-vector result of Eq.\eqref{reference} (green) and from the density-operator method of Eqs.\eqref{H_hat_H_dyn-solutions1} and \eqref{P_N0} (blue dotted).}
\label{comparisons}
\end{figure}
\subsection{The influence of three kinds of quantum noises generated in main system}
In the experimental simulation of $\mathcal{PT}$-symmetric dynamics, the quantum state will inevitably be disturbed by quantum noises; especially when the state is entangled, as in Eq.\eqref{rho_rhoS}, the damage that quantum noise does to the entanglement can be severe and has to be studied carefully. In this section, we introduce three common quantum noise (channel) models and use the VDMS technique of Sec.\ref{VD_and_ML} to write down the corresponding Lindblad master equations. We only consider the situation where the noises act on the main system (the situation where the noises act on the auxiliary system can be studied similarly). Based on that, we analyze their effects on the simulation of the $\mathcal{PT}$-symmetric dynamics.
In the AD and PD channels mentioned below Eq.\eqref{Lindblad}, there is only one jump operator (for convenience, all decay rates have been set to $\gamma$): $\Gamma_{\mathrm{AD}}=\sqrt{\gamma}\sigma^S_{-}=\sqrt{\gamma}|0\rangle_S\langle1|$ and $\Gamma_{\mathrm{PD}}=\sqrt{\gamma}\sigma^S_z$, respectively; in the Dep channel, there are three jump operators
${\Gamma_{\mathrm{Dep}}}_k=\sqrt{\gamma}\sigma^S_k$ with $k=x,y,z$, where $\{\sigma^S_k\}$ are Pauli operators and the superscript "$S$" indicates that the operator acts on the main system; the complete form of an operator $X$ with such a superscript is $X^S=I_A\otimes X$ and $X^A=X\otimes I_S$.
Considering the system $\hat{H}_{AS}(t)$ of Eq.\eqref{H_hat_t} used to simulate the $\mathcal{PT}$-symmetric dynamics, and substituting the jump operators into Eq.\eqref{Lindblad operator_matrix}, we obtain
\begin{widetext}
\begin{subequations}\label{AD PD_Dep}
\begin{align}
\overline{\overline{\mathcal{L}}}_{\mathrm{AD}}(t)&=-i[\hat{H}_{AS}(t)\otimes I_{AS}-I_{AS}\otimes\hat{H}^{\mathrm{T}}_{AS}(t)]+\gamma[ |0\rangle_S\langle1|\otimes(|1\rangle_S\langle0|)^\mathrm{T}-\frac{|1\rangle_S\langle1|}{2}\otimes I_S-I_S\otimes\frac{(|1\rangle_S\langle1|)^\mathrm{T}}{2} ],\label{AD channel}\\
\overline{\overline{\mathcal{L}}}_{\mathrm{PD}}(t)&=-i[\hat{H}_{AS}(t)\otimes I_{AS}-I_{AS}\otimes\hat{H}^{\mathrm{T}}_{AS}(t)]+\gamma[ \sigma^S_z\otimes({\sigma^S_z})^\mathrm{T}-I_S\otimes I_S],\label{PD channel}\\
\overline{\overline{\mathcal{L}}}_{\mathrm{Dep}}(t)&=-i[\hat{H}_{AS}(t)\otimes I_{AS}-I_{AS}\otimes\hat{H}^{\mathrm{T}}_{AS}(t)]+\gamma\sum_{k\in\{x,y,z\}}[ \sigma^S_k\otimes({\sigma}^S_k)^\mathrm{T}-I_S\otimes I_S],\label{Dep channel}
\end{align}
\end{subequations}
\end{widetext}
where $\hat{H}_{AS}(t)$ is the dilated Hamiltonian in Eq.\eqref{H_hat_t}, and $I_{AS}$ and $I_S$ represent the identity operators on the composite system $AS$ and on the main system $S$ respectively (they coincide when written in complete form). We calculate Eqs.\eqref{AD PD_Dep} and Eqs.\eqref{H_hat_H_dyn-solutions} by the Magnus series given in Eq.\eqref{Magnus2}, where only the first two terms of the series, i.e., $\Omega_1$ and $\Omega_2$, are considered. Specifically, in the calculation the operator $A(t)$ of Eq.\eqref{matrix-differential-equation2} is replaced by $-i\hat{H}_{AS}(t)$ from Eq.\eqref{H_hat_t} or by $\overline{\overline{\mathcal{L}}}_{\mathrm{AD}}(t), \overline{\overline{\mathcal{L}}}_{\mathrm{PD}}(t), \overline{\overline{\mathcal{L}}}_{\mathrm{Dep}}(t)$ from Eqs.\eqref{AD PD_Dep}, respectively.
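A sketch of the jump operators entering Eqs.\eqref{AD PD_Dep}, written in their complete form $I_A\otimes(\cdot)$ on the composite space (the value of $\gamma$ and the qubit dimensions follow the parameter choices of this section; these lists can be fed directly into a vectorized-Lindbladian routine such as the one sketched in Sec.\ref{VD_and_ML}):
\begin{verbatim}
import numpy as np

gamma = 0.25
I_A = np.eye(2)
sigma_minus = np.array([[0.0, 1.0], [0.0, 0.0]])   # |0><1| on the main qubit
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
sigma_z = np.diag([1.0, -1.0])

jump_AD  = [np.sqrt(gamma) * np.kron(I_A, sigma_minus)]
jump_PD  = [np.sqrt(gamma) * np.kron(I_A, sigma_z)]
jump_Dep = [np.sqrt(gamma) * np.kron(I_A, s) for s in (sigma_x, sigma_y, sigma_z)]
\end{verbatim}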
Based on the density operators, we are able to consider the effect of quantum noise on the simulation of the dynamics of the $\mathcal{PT}$-symmetric system $H_S$. According to Eq.\eqref{M_Map}, Eqs.\eqref{AD PD_Dep} and Eq.\eqref{P_N0}, we can calculate the renormalized population ${P_\mathrm{N}}_0(t)$ under the AD, PD and Dep channels and without noise, respectively. Under the parameter settings of Fig.\ref{Magnus_expansion—figs}, Fig.\ref{Magnus2-fig} and Fig.\ref{Magnus_expansion—figs_mixed}, i.e., $\gamma=0.25,\theta=\pi/2,s=1,r=0.6$, the critical time of legitimacy mentioned below Eq.\eqref{M_TD} is $T_l=\infty$, because in the $\mathcal{PT}$-symmetry unbroken phase all eigenvalues of $H_S$ are real. In addition, $\max_{t\in[0, 8]}\|H_{AS}(t)\|_2\approx1.08$, $\max_{t\in[0, 8]}\|\overline{\overline{\mathcal{L}}}_{AD}(t)\|_2\approx2.19$, $\max_{t\in[0, 8]}\|\overline{\overline{\mathcal{L}}}_{PD}(t)\|_2\approx2.34$ and $\max_{t\in[0, 8]}\|\overline{\overline{\mathcal{L}}}_{Dep}(t)\|_2\approx2.38$, so when we set the time step to $h=0.02$ (Fig.\ref{Magnus_expansion—figs}(b)(d) and Fig.\ref{Magnus_expansion—figs_mixed}) or $h=0.2$ (Fig.\ref{Magnus_expansion—figs}(a)(c) and Fig.\ref{Magnus2-fig}), the convergence condition Eq.\eqref{convergence2} is always satisfied in every step because $h<T_c$.
The relation between the renormalized population ${P_\mathrm{N}}_0$ and time $t$, with and without quantum noise, is shown in Fig.\ref{Magnus_expansion—figs} and Fig.\ref{Magnus2-fig}. At first, we focus on the noiseless cases, which correspond to the green curves (the combination method) and the blue dotted curves (the dilation method) in Fig.\ref{Magnus_expansion—figs}. In Fig.\ref{Magnus_expansion—figs}, subfigures (a) and (b) show the relation between ${P_\mathrm{N}}_0$ and $t$ with the Magnus series computed up to the first term $\Omega_1$, with time steps $h=0.2$ and $h=0.02$ respectively, according to Eqs.\eqref{H_hat_H_dyn-solutions}, Eqs.\eqref{AD PD_Dep} and Eq.\eqref{Magnus_product}; in Fig.\ref{Magnus2-fig}(a), the Magnus series is computed up to the second term $\Omega_2$ with time step $h=0.2$. In Fig.\ref{Magnus_expansion—figs}(a-b) and Fig.\ref{Magnus2-fig}(a), the green lines represent the noiseless results computed by the combination method (acting as the theoretical result), which only involves calculations with $H_S(t)$ and not with $\hat{H}_{AS}(t)$, while the blue dotted lines represent the noiseless results computed by the dilation method (acting as the simulation result), which involves calculations with both $H_{S}(t)$ and $\hat{H}_{AS}(t)$. Fig.\ref{Magnus_expansion—figs}(c-d) and Fig.\ref{Magnus2-fig}(b) show the errors between the green lines and the blue lines.
Comparing Fig.\ref{Magnus_expansion—figs}(c) with (d), we find that when the time step is decreased from $h=0.2$ to $h=0.02$, the error is reduced by two orders of magnitude, which is due to the error being $O(h^3)$ when the Magnus series is computed up to the first term $\Omega_1$ according to Eq.\eqref{Magnus_product} \cite{Blanes2009}. Meanwhile, comparing Fig.\ref{Magnus_expansion—figs}(c) with Fig.\ref{Magnus2-fig}(b), we find that by computing the Magnus series up to the second term $\Omega_2$ the error in Fig.\ref{Magnus2-fig}(b) is also reduced by two orders of magnitude even though both are obtained with the same time step $h=0.2$, which is due to the error being $O(h^5)$ when the Magnus series is computed up to the second term $\Omega_2$ \cite{Blanes2009}. It is worth pointing out that although computing the Magnus series up to high-order terms $\Omega_n (n\geqslant2)$ may lead to higher accuracy, the $\Omega_n$ are usually difficult to compute, especially when $\hat{H}_{AS}$ is a high-dimensional multivariable symbolic matrix (see Eq.\eqref{Magnus series1-4} in Appendix \ref{chronological product}). On the contrary, computing only $\Omega_1$ is very convenient, but the improvement of accuracy then has to be achieved by reducing the time step $h$, which means that more matrix exponentials have to be evaluated, which is usually expensive, especially when the matrix is high-dimensional. Therefore, considering the computational cost of achieving a specific accuracy, we usually have to find a compromise between keeping fewer terms of the Magnus series and using a larger step size.
\begin{figure*}
\caption{The effects of quantum noises on the dynamics simulation of the $\mathcal{PT}$-symmetric system (unbroken phase, pure-state initial density operator), with the Magnus series truncated at the first term $\Omega_1$: (a) ${P_\mathrm{N}}_0(t)$ for time step $h=0.2$; (b) ${P_\mathrm{N}}_0(t)$ for $h=0.02$; (c)-(d) the corresponding errors between the combination method and the dilation method.}
\label{unbroken1_magnus1}
\label{unbroken1_magnus1_norm}
\label{unbroken1_magnus2}
\label{unbroken1_magnus2_norm}
\label{Magnus_expansion—figs}
\end{figure*}
\begin{figure*}
\caption{The effects of quantum noises on the dynamics simulation of the $\mathcal{PT}$-symmetric system (unbroken phase, pure-state initial density operator), with the Magnus series truncated at the second term $\Omega_2$ and time step $h=0.2$: (a) ${P_\mathrm{N}}_0(t)$; (b) the error between the combination method and the dilation method.}
\label{unbroken1_magnus3}
\label{unbroken1_magnus3_norm}
\label{Magnus2-fig}
\end{figure*}
Now we focus on the effects of the quantum noises on the renormalized population ${P_\mathrm{N}}_0$. From Fig.\ref{Magnus_expansion—figs}(a-b) and Fig.\ref{Magnus2-fig}, we find that the curves related to the Dep channel drop rapidly, faster than those of the AD and PD channels, which is caused by the tendency of the Dep channel to transform any quantum state into the maximally mixed state, for which ${P_\mathrm{N}}_0$ stays at $1/2$. A similar phenomenon can also be seen in the mixed-state case in Fig.\ref{Magnus_expansion—figs_mixed}, where the initial density operator of the mixed state is ${\rho_S}_{\mathrm{mixed}}(0)=\frac{1}{30}\begin{pmatrix}
4 & 1 \\
1 & 2
\end{pmatrix}$.
Meanwhile, in the initial short time, the red curves related to the AD channel almost coincide with the cyan curves related to the PD channel, while after a long time they separate from each other. A similar phenomenon can also be seen in Fig.\ref{Magnus_expansion—figs_mixed}. However, this phenomenon is accidental: the initial pure-state density operator $\rho_S(0)=\frac{1}{5}|0\rangle_S\langle0|$ happens to be an eigenstate (a steady state, or fixed point of the superoperator of the noise channel) of the dissipation terms of the AD channel in Eq.\eqref{AD channel} and of the PD channel in Eq.\eqref{PD channel} \cite{Minganti2019,andersson2007finding}, so their roles can be ignored at short times and only the remaining terms containing $\hat{H}_{AS}(t)$ play a role; in the long run, the evolved state $\rho_S(t)$ is far away from the initial pure state $|0\rangle_S$, so it is affected by the dissipation terms and their roles can no longer be ignored. Comparing with the mixed-state case makes this phenomenon clearer. From Fig.\ref{Magnus_expansion—figs_mixed} we can see that at short times all the curves, with and without noise, rise, which is the result of the driving by the Hamiltonian $\hat{H}_{AS}(t)$. However, the curve for the AD channel rises faster than all other curves, because the AD channel has a tendency to drive every state towards the state $|0\rangle_S$, which contributes to the renormalized population ${P_\mathrm{N}}_0$ (more strictly, $|0\rangle_S$ is the fixed point (steady state) of the AD channel \cite{Minganti2019,andersson2007finding}).
\begin{figure*}
\caption{The effects of quantum noises on the dynamics simulation of the $\mathcal{PT}$-symmetric system for the mixed-state initial density operator ${\rho_S}_{\mathrm{mixed}}(0)$ (unbroken phase, Magnus series truncated at $\Omega_1$, time step $h=0.02$).}
\label{mixed_unbroken1_magnus1}
\label{mixed_unbroken1_magnus1_norm}
\label{Magnus_expansion—figs_mixed}
\end{figure*}
The effects of quantum noises on the dynamics simulation of the $\mathcal{PT}$-symmetry broken system with the initial pure-state density operator $\rho_S=\frac{1}{5}|0\rangle_S\langle0|$ are given in Fig.\ref{Magnus_expansion—figs_broken}, where the renormalized population ${P_\mathrm{N}}_0$ is calculated with the Magnus series truncated at the first term $\Omega_1$. In this figure, we set the parameters to $\gamma=0.25,\theta=\pi/2,s=1,r=1.4$, so that the legitimacy is lost after the critical time $T_l\approx0.604$. Under this parameter configuration, $\max_{t\in[0, 0.6]}\|H_{AS}(t)\|_2\approx14.20$, $\max_{t\in[0, 0.6]}\|\overline{\overline{\mathcal{L}}}_{AD}(t)\|_2\approx28.40$, $\max_{t\in[0, 0.6]}\|\overline{\overline{\mathcal{L}}}_{PD}(t)\|_2\approx28.40$ and $\max_{t\in[0, 0.6]}\|\overline{\overline{\mathcal{L}}}_{Dep}(t)\|_2\approx28.41$, so when we set the time step to $h=0.02$, the convergence condition Eq.\eqref{convergence2} is always satisfied in every step because $h<T_c$ (both with and without noise). From Fig.\ref{Magnus_expansion—figs_broken}(a) we can see that all curves decrease monotonically because the eigenvalues of $H_S$ are complex, as can be seen from Fig.\ref{magnus3_eigenvalue}, so the evolution operator $\mathbb{T}e^{-i\int_{0}^{t}H_S(\tau)d\tau}$ causes the decay of the state $|0\rangle_S$. In addition, the blue curve and the green curve almost completely coincide over the whole interval $t\in[0,T_l)$, which can be seen more clearly in the error plot in Fig.\ref{Magnus_expansion—figs_broken}(b), showing high accuracy.
\begin{figure*}
\caption{The effects of quantum noises on the dynamics simulation of the $\mathcal{PT}$-symmetry broken system ($r=1.4$, pure-state initial density operator, Magnus series truncated at $\Omega_1$, time step $h=0.02$): (a) ${P_\mathrm{N}}_0(t)$; (b) the error between the combination method and the dilation method.}
\label{broken1_magnus1}
\label{broken1_magnus1_norm}
\label{Magnus_expansion—figs_broken}
\end{figure*}
\section{Conclusions and discussions\label{conclusions}}
In this work, we generalized the results of Wu et al. in Ref.\cite{wu2019observation}, which are based on the dilation method, from the pure-state-vector case to the mixed-state case with the help of density operators, and provided a general density-operator framework for analyzing, both analytically and numerically, the dynamics of an arbitrary TD $\mathcal{PT}$-symmetric system and the influence of quantum noises. We draw conclusions from the analytical and the numerical perspective, respectively.
First, from the perspective of analytical analysis, more physical completeness was provided. In the derivation we discussed the physical meaning of $M(t)$, which was ignored in Ref.\cite{wu2019observation}. Specifically, we proved that $M(t)$ is not the metric operator of the $\mathcal{PT}$-symmetric system $H_S$, but the TD metric operator of the $M(t)$-inner-product space, which guarantees probability conservation. Meanwhile, we also introduced a quantity $h_{S}(t)$ related to $M(t)$ and proved that it is actually a physical observable, because it can be mapped to the Hermitian quantity $H_{\mathrm{phys}}$, which has a real eigenspectrum, through a TD similarity transformation, i.e., the TD Dyson map. In addition, more mathematical completeness was also provided. Specifically, in the derivation we obtained the dilated Hamiltonian $\hat{H}_{AS}(t)$ by imposing a symmetric gauge instead of artificially assigning a value to the free variable as in Ref.\cite{wu2019observation}. As a result, the hidden symmetry of the eigenspectrum of the dilated Hamiltonian $\hat{H}_{AS}(t)$ could be revealed. It is worth noting that when the system considered is time-independent and $\mathcal{PT}$-symmetry unbroken, the results of the dynamics simulation in this work are consistent with our previous results in Ref.\cite{Li2022}, and when the state considered is a pure state, the results of this work are consistent with those of Ref.\cite{wu2019observation}. Because the dilated system $\hat{H}_{AS}(t)$ is in practice an open quantum system, the influence of the environment is inevitable; we therefore introduced the VDMS tool to solve the Lindblad master equation under three kinds of quantum noises, so that the influence of quantum noises on the dynamics simulation of the $\mathcal{PT}$-symmetric system can be studied.
Then, from the perspective of numerical analysis, we pointed out that, in order to meet a target calculation accuracy while saving computing resources, the time step $h$ and the cut-off term of the Magnus series have to be carefully balanced. In addition, the time step $h$ of the numerical calculation is restricted by the critical time $T_c$ of convergence of the Magnus series in every step: since the dilated higher-dimensional Hamiltonian $\hat{H}_{AS}(t)$ is usually time-dependent, the problem of the chronological product has to be solved and the Magnus series has to be calculated, which may diverge when $t>T_c$, so that the error may be amplified once the Magnus series is truncated at high-order terms. Meanwhile, the achievable duration of an experimental run is bounded by the critical time $T_l$ of the legitimacy of the dilation method. This happens because when $t\rightarrow T_l$ at least one of the eigenvalues of $M(t)$ given in Eq.\eqref{M_TD} approaches one, the corresponding eigenvalue of $\xi(t)$ given in Eq.\eqref{xi} approaches zero, and the energy may diverge (see Fig.\ref{magnus3_eigenvalue}(b)). In fact, the problem of the chronological product has to be solved not only in the numerical calculation but even in the experiment, because $\hat{H}_{AS}(t)$ has to be parameterized in advance by calculating it. Concerning the influence of quantum noises, according to the numerical results of our example, the depolarizing noise is the most detrimental to the dynamics simulation of the $\mathcal{PT}$-symmetric system among the three kinds of quantum noises considered, and it should be avoided as much as possible.
Finally, we have to mention that in Sec.\ref{example} we actually give an example with a time-independent Hamiltonian rather than a TD one, just as in Ref.\cite{wu2019observation}, in order to facilitate display and comparison. However, the method of analysis and calculation is universal; the only additional point requiring attention is the convergence of the chronological product when calculating $M(t)$ in Eq.\eqref{M_TD}, which is avoided in our example.
\section{Derivations of $H_2(t)$ and $H_4(t)$ \label{appendix_derivation-H_24}}
Here we derive the expressions of $H_2(t)$ and $H_4(t)$ in terms of the operator $K(t)$ used in the main text. First of all, we know that
\begin{equation}
M'(t)=(\xi^2(t)+I)'=\xi'(t)\xi(t)+\xi(t)\xi'(t),
\end{equation}
then according to Eq.\eqref{H_2},
\begin{widetext}
\begin{align}\label{H_2-app}
H_2(t)=&[-i\xi'(t)+H_S(t)\xi(t)-\xi(t)H_S(t)]M^{-1}(t) \nonumber \\
=&-i\xi'(t)M^{-1}(t)+H_S(t)M^{-1}(t)\cdot\xi(t)-\xi(t)H_S(t)M^{-1}(t) \nonumber \\
=&-i\xi'(t)M^{-1}(t)+[K(t)-\frac{i}{2}M^{-1}M'(t)M^{-1}(t)]\xi(t)-\xi(t)[K(t)-\frac{i}{2}M^{-1}M'(t)M^{-1}(t)] \nonumber \\
=&K(t)\xi(t)-\xi(t)K(t)-\frac{i}{2}[2\xi'(t)M^{-1}(t)+M^{-1}(t)M'(t)\xi(t)M^{-1}(t)-M^{-1}\xi(t)M'(t)M^{-1}(t)] \nonumber \\
=&K(t)\xi(t)\!-\!\xi(t)K(t)\!-\!\frac{i}{2}\{2\xi'(t)M^{-1}(t)\!+\!M^{-1}(t)[\xi'(t)\xi(t)\!+\!\xi(t)\xi'(t)]\xi(t)M^{-1}(t)-\nonumber\\
&M^{-1}\xi(t)[\xi'(t)\xi(t)\!+\!\xi(t)\xi'(t)]M^{-1}(t)\}\nonumber\\
=&K(t)\xi(t)-\xi(t)K(t)-\frac{i}{2}\{2\xi'(t)M^{-1}(t)+M^{-1}(t)\xi'(t)(M(t)-I)M^{-1}(t)-M^{-1}(t)(M(t)-I)\xi'(t)M^{-1}(t)\} \nonumber\\
=&K(t)\xi(t)-\xi(t)K(t)-\frac{i}{2}[\xi'(t)M^{-1}(t)+M^{-1}(t)\xi'(t)].
\end{align}
\end{widetext}
In a similar way, according to Eq.\eqref{H_4},
\begin{widetext}
\begin{align}\label{H_4-app}
H_4(t)=&[i\xi'(t)\xi(t)+\xi(t)H_S(t)\xi(t)+H_S(t)]M^{-1}(t) \nonumber \\
=&i\xi'(t)\xi(t)M^{-1}(t)+H_S(t)M^{-1}(t)+\xi(t)H_S(t)M^{-1}(t)\xi(t) \nonumber \\
=&i\xi'(t)\xi(t)M^{-1}(t)+K(t)-\frac{i}{2}M^{-1}M'(t)M^{-1}(t)+\xi(t)[K(t)-\frac{i}{2}M^{-1}M'(t)M^{-1}(t)]\xi(t) \nonumber \\
=&K(t)+\xi(t)K(t)\xi(t)+\frac{i}{2}M^{-1}(t)[2M(t)\xi'(t)\xi(t)-M'(t)-\xi(t)M'(t)\xi(t)]M^{-1}(t) \nonumber \\
=&K(t)+\xi(t)K(t)\xi(t)+\frac{i}{2}M^{-1}(t)\{2M(t)\xi'(t)\xi(t)-[\xi'(t)\xi(t)+\xi(t)\xi'(t)]-\xi(t)[\xi'(t)\xi(t)+\xi(t)\xi'(t)]\xi(t)\}M^{-1}(t) \nonumber \\
=&K(t)+\xi(t)K(t)\xi(t)\!+\!\frac{i}{2}M^{-1}(t)\{2M(t)\xi'(t)\xi(t)\!-\![\xi'(t)\xi(t)+\xi(t)\xi'(t)]\!-\!\xi(t)\xi'(t)[M(t)\!-\!I]-\nonumber\\
&[M(t)\!-\!I]\xi'(t)\xi(t)\}M^{-1}(t) \nonumber \\
=&K(t)+\xi(t)K(t)\xi(t)+\frac{i}{2}M^{-1}(t)[M(t)\xi'(t)\xi(t)-\xi(t)\xi'(t)M(t)]M^{-1}(t) \nonumber \\
=&K(t)+\xi(t)K(t)\xi(t)+\frac{i}{2}[\xi'(t)\xi(t)M^{-1}-M^{-1}\xi(t)\xi'(t)].
\end{align}
\end{widetext}
\section{Problem of chronological product\label{chronological product}}
Considering a matrix differential equation \cite{Magnus1954}:
\begin{align}\label{matrix-differential-equation}
Y^{\prime}(t)=A(t) Y(t), \quad Y\left(t_{0}\right)=Y_{0},
\end{align}
where $A(t)$ is a known time-dependent matrix, $Y_{0}$ is the initial value of $Y(t)$, and $Y(t)$ is the matrix to be solved. The formal solution of the above equation is \cite{Blanes1998,Blanes2009}:
\begin{align}\label{formal solution}
Y(t)=\mathbb{T}\exp \left(\int_{t_{0}}^{t} A(s) d s\right) Y_{0},
\end{align}
where $\mathbb{T}$ is the time-ordering operator. For two arbitrary times $t_1$ and $t_2$ ($t_1\neq t_2$), in general $[A(t_1), A(t_2)]\neq0$, so that $e^{A(t_1)+A(t_2)}\neq e^{A(t_1)}\cdot e^{A(t_2)}$ and the symbol $\mathbb{T}$ cannot be ignored. When $[A(t_1), A(t_2)]=0$ for any two times $t_1$ and $t_2$, in particular when $A$ is time-independent, the symbol $\mathbb{T}$ can be omitted.
The Eq.\eqref{formal solution} can be expressed as \cite{Blanes2009}:
\begin{align}
Y(t)=\exp \left(\Omega\left(t, t_{0}\right)\right) Y_{0},
\end{align}
where $\Omega(t)$ can be written as the sum of series:
\begin{align}\label{Magnus series}
\Omega(t)=\sum_{k=1}^{\infty} \Omega_{k}(t),
\end{align}
and $\Omega_n(t)$ is the $n$-th term of the Magnus series. Magnus pointed out that the derivative of $\Omega$ with respect to $t$ can be written as:
\begin{align}
\Omega^{\prime}=\frac{\operatorname{ad}_{\Omega}}{\exp \left(\operatorname{ad}_{\Omega}\right)-1} A,
\end{align}
so the solution of the above equation constitutes the Magnus expansion, or Magnus series.
The term $\Omega_{n}$ can be obtained from the matrices $S_{n}^{(j)}$, which are generated by the following recursive formula:
\begin{align}
S_{n}^{(j)}&=\sum_{m=1}^{n-j}\left[\Omega_{m}, S_{n-m}^{(j-1)}\right], \quad 2 \leq j \leq n-1 \nonumber\\
S_{n}^{(1)}&=\left[\Omega_{n-1}, A\right], \quad S_{n}^{(n-1)}=\operatorname{ad}_{\Omega_{1}}^{n-1}(A),
\end{align}
where $\operatorname{ad}_{\Omega}^{n}$ is a shorthand for an iterated commutator, and
\begin{align}
\operatorname{ad}_{\Omega}^{0} A=A, \quad \operatorname{ad}_{\Omega}^{k+1} A=\left[\Omega, \operatorname{ad}_{\Omega}^{k} A\right].
\end{align}
For convenience, we set $t_0=0$. Therefore, we can get
\begin{align}
\Omega_{1} &=\int_{0}^{t} A(\tau) d \tau \nonumber \\
\Omega_{n} &=\sum_{j=1}^{n-1} \frac{B_{j}}{j !} \int_{0}^{t} S_{n}^{(j)}(\tau) d \tau, \quad n \geq 2,
\end{align}
where $B_j$ are the Bernoulli numbers, with $B_1=-1/2$. For convenience, we write out the first four terms $\Omega_{n}$ as follows:
\begin{align}\label{Magnus series1-4}
\begin{aligned}
\Omega_{1}(t)=& \int_{0}^{t} A\left(t_{1}\right) d t_{1} \\
\Omega_{2}(t)=& \frac{1}{2} \int_{0}^{t} d t_{1} \int_{0}^{t_{1}} d t_{2}\left[A\left(t_{1}\right), A\left(t_{2}\right)\right] \\
\Omega_{3}(t)=& \frac{1}{6} \int_{0}^{t} d t_{1} \int_{0}^{t_{1}} d t_{2} \int_{0}^{t_{2}} d t_{3}\cdot\\
&\left(\left[A\left(t_{1}\right),\left[A\left(t_{2}\right), A\left(t_{3}\right)\right]\right]+\left[A\left(t_{3}\right),\left[A\left(t_{2}\right), A\left(t_{1}\right)\right]\right]\right), \\
\Omega_{4}(t)=& \frac{1}{12} \int_{0}^{t} d t_{1} \int_{0}^{t_{1}} d t_{2} \int_{0}^{t_{2}} d t_{3} \int_{0}^{t_{3}} d t_{4}\left(\left[\left[\left[A_{1}, A_{2}\right], A_{3}\right], A_{4}\right]\right.\\
&+\left[A_{1},\left[\left[A_{2}, A_{3}\right], A_{4}\right]\right]+\left[A_{1},\left[A_{2},\left[A_{3}, A_{4}\right]\right]\right]+\\
&\left[A_{2},\left[A_{3},\left[A_{4}, A_{1}\right]\right]\right]).
\end{aligned}
\end{align}
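As a crude numerical cross-check of the second term, $\Omega_2$ can be approximated by a nested midpoint rule over the region $t_2<t_1$ (a sketch; $A$ is any user-supplied matrix-valued function and $m$ controls the grid resolution):
\begin{verbatim}
import numpy as np

def omega2_numeric(A, t, m=200):
    """Omega_2(t) = 1/2 int_0^t dt1 int_0^t1 dt2 [A(t1), A(t2)]."""
    h = t / m
    nodes = (np.arange(m) + 0.5) * h
    Avals = [A(tk) for tk in nodes]
    Om2 = np.zeros_like(Avals[0], dtype=complex)
    for i in range(m):
        for j in range(i):                  # only t2 < t1 contributes
            Om2 += (Avals[i] @ Avals[j] - Avals[j] @ Avals[i]) * h * h
    return 0.5 * Om2
\end{verbatim}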
It is worth noting that the Magnus series in Eq.\eqref{Magnus series} may diverge \cite{Blanes2009}; a sufficient condition for it to converge for $t\in[0, T)$ is:
\begin{align}
\int_{0}^{T}\|A(s)\|_{2} \mathrm{d}s<\pi,
\end{align}
where $\|A\|_{2}$ denotes the 2-norm of $A$.
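In practice, the critical time $T_c$ can be estimated by accumulating the integral until it reaches $\pi$; a minimal sketch (the function name and the simple midpoint accumulation are ours):
\begin{verbatim}
import numpy as np

def critical_time(A, t0=0.0, dt=1e-3, t_max=100.0):
    """Smallest T_c with int_{t0}^{t0+T_c} ||A(s)||_2 ds = pi (or inf)."""
    acc, t = 0.0, t0
    while acc < np.pi and t < t0 + t_max:
        acc += np.linalg.norm(A(t + 0.5 * dt), ord=2) * dt
        t += dt
    return t - t0 if acc >= np.pi else np.inf
\end{verbatim}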
\end{document} |
\begin{document}
\title{A regularizing property of the $2D$-eikonal equation}
\begin{abstract}
We prove that any $2$-dimensional solution $\psi\in W_{loc}^{1+\frac 1 3, 3}$ of the eikonal equation has locally Lipschitz gradient $\nabla \psi$ except at a locally finite number of vortices.
\end{abstract}
\section{Introduction}
\label{chap_div}
Let $\Omega\subset \mathbb{R}^2$ be an open set. We will focus on (locally) Lipschitz solutions $\psi:\Omega \to \mathbb{R}$ of the eikonal equation, namely such that
\begin{equation}
\label{eiko}
|\nabla \psi|=1 \quad \textrm{a.e. in}\quad \Omega\, .
\end{equation}
Since all our results will have a local nature, this amounts to investigating curl-free $L^1$ vector fields $w:\Omega\to \mathbb{R}^2$ of unit length or,
equivalently, $L^1$ vector fields $u=w^\perp=(-w_2, w_1):\Omega\to \mathbb{R}^2$ that satisfy \begin{equation}
\label{contraintes}
|u|=1 \, \, \textrm{a.e. in } \Omega \quad \textrm{and} \quad \nabla \cdot u=0\quad \textrm{ in }\, \, {\cal D}'(\Omega).
\end{equation}
Typical examples of stream functions $\psi$ satisfying \eqref{eiko} are distance functions $\psi=\mathrm{dist}(\cdot, K)$ to some closed nonempty set $K\subset \mathbb{R}^2$ (see Figure \ref{Landau}).
In general such $\psi$ are not smooth and generate line singularities or vortex-point singularities for the gradient $\nabla \psi$.
\begin{figure}[ht]
\begin{minipage}{0.44\linewidth}
\centering
\includegraphics[height=2.5cm]{vortex_nancy.eps}
\end{minipage}
\begin{minipage}{0.52\linewidth}
\centering
\includegraphics[height=1.5cm]{landau.eps}
\end{minipage}
\caption{Vector fields $\nabla^\perp \mathrm{dist}(\cdot, K)$ when $K$ is a point (left) and a rectangle (right).}
\label{Landau}
\end{figure}
We denote by $W_{div}^{s,p} (\Omega, \mathbb{S}^1)$ the Sobolev space of order $s>0$ and $p\geq 1$ of divergence-free unit-length vector fields, namely $$W_{div}^{s,p}(\Omega, \mathbb{S}^1)=\{u\in W_{loc}^{s,p}(\Omega, \mathbb{R}^2)\, :\, u \textrm{ satisfies } \eqref{contraintes} \, \} ,$$
and we show that elements in the critical spaces $u\in W_{div}^{1/p,p} (\Omega, \mathbb{S}^1)$ have, for $p\in [1,3]$, only vortex-point singularities,
i.e. they gain more regularity.
\begin{thm}
\label{teo}
If $u\in W_{div}^{1/p,p}(\Omega, \mathbb{S}^1)$ with $p\in [1, 3]$ then
$u$ is locally Lipschitz continuous inside $\Omega$ except at a locally finite number of singular points. Moreover, every singular point $P$ of $u$ corresponds to a vortex-point singularity of degree $1$ of $u$, i.e., there
exists a sign $\alpha=\pm 1$ such that $$u(x)=\alpha \frac{(x-P)^\perp}{|x-P|} \quad \textrm{for every $x\neq P$ in any convex neighborhood of $P$ in $\Omega$}.$$
\end{thm}
Following the same strategy we can also show a related regularizing effect for solutions of the Burgers' equation
\begin{equation}\label{e:Burgers}
v_t + \left(\textstyle{\frac{v^2}{2}}\right)_s = 0\, ,
\end{equation}
where $(t,s) = (x_1, x_2)$ will be used for the time-space variables. The link between \eqref{contraintes} and \eqref{e:Burgers} is discussed in the next section.
\begin{thm}
\label{teo_2}
Let $\Omega = I \times J$ with $I, J\subset \mathbb{R}$ two intervals and $v\in L^4(\Omega)$ be a distributional solution of \eqref{e:Burgers} which belongs to the space $L^3 (I, W^{1/3, 3} (J))$, namely
\begin{equation}\label{e:time-space}
\int_I \int_{J\times J} \frac{|v(t,s) - v (t, \sigma)|^3}{|s-\sigma|^2} \, ds\, d\sigma\, dt < \infty\, .
\end{equation}
Then $v$ is locally Lipschitz.
\end{thm}
\begin{rem}
\begin{enumerate}
\item[i)] Theorem \ref{teo} was proved by Ignat \cite{Ignat_JFA} for $p\in [1,2]$. Moreover, by standard interpolation we have the inclusion $W^{1/p, p}_{div} (\Omega, \mathbb{S}^1) \subset W^{1/q, q} (\Omega, \mathbb{S}^1)$ for any $p<q$ (the target of such maps being $\mathbb{S}^1$, they are always in $L^\infty$).
\item[ii)] Note that the $W^{1/p,p}$ assumption naturally excludes ``jump-singularities'' but allows ``oscillations''. For instance, if $p=2$, then the function $\varphi:(-\frac 1 2, \frac 1 2)\to \mathbb{R}$ defined as $\varphi(x_1)=\log|\log |x_1||$ for $x_1\neq 0$ belongs to $H^{1/2}((-\frac 1 2, \frac 1 2)) = W^{1/2,2} ((-\frac 1 2, \frac 1 2))$. So, setting $\Omega=(-\frac 1 2, \frac 1 2)^2\subset \mathbb{R}^2$, then the function $u(x_1, x_2):=e^{i\varphi(x_1)}$ belongs to $H^{1/2}(\Omega, \mathbb{S}^1)$ and obviously, $L=\{(0, x_2)\, :\, x_2\in (-\frac 1 2, \frac 1 2)\}\subset \Omega$ is an ``oscillating'' line-singularity of $u$. Theorem \ref{teo} excludes, however, this type of behavior exploiting the additional assumption that $u$ is divergence-free.
\end{enumerate}
\end{rem}
One interesting point is the fundamental role played by a commutator estimate from Constantin, E and Titi, which was used in \cite{CET} to prove that $C^{1/3+\varepsilon}$ solutions of the incompressible Euler equations preserve the kinetic energy. Our proof uses a similar argument to show that the $u$ of Theorem \ref{teo} and the $v$ of Theorem \ref{teo_2} both satisfy some additional balance laws. Such laws hold obviously for smooth solutions but are false in general for distributional solutions. The question of which threshold regularity ensures their validity can be surprisingly subtle. In the case of the incompressible Euler's equations a well-known conjecture of Onsager in the theory of turbulence claims that $1/3$ is the critical H\"older exponent for energy conservation: the ``positive side'' of this conjecture was indeed proved in \cite{CET} (see also \cite{Eyink}), whereas the ``negative side'' is still open, although there have been recently many results in that direction (see for instance \cite{BDS, DS2, DS1, Isett}).
\section{Entropies and kinetic formulations}
\noindent The main feature of both problems relies on the concept of characteristic. Assume for the moment that $u$ is a smooth solution of \eqref{contraintes} and fix a point $x\in \Omega$; then the characteristic of $u = \nabla^\perp \psi$ at $x$ is given by
\begin{equation}
\label{syst_dyn}
\dot{X}(t,x)=u^\perp(X(t,x))
\end{equation} with the initial condition $X(0,x)=x$. The orbit $\{X(t, x)\}_t$ is a straight line (i.e., $X(t,x)=x+tu^\perp(x)$ for $t$ in some interval around $0$) along which $u$ is perpendicular and constant. A similar conclusion can be drawn for smooth solutions of Burgers' equation, considering the corresponding characteristics, cf. for instance \cite{Dafermos}.
Observe that \eqref{syst_dyn} does not have a direct proper meaning in the case $u\in W^{1/p,p}$ because \`a-priori there is no trace of $u$ defined ${\mathcal{H}}^1$-a.e. on curves $\{X(t, x)\}_t$. To overcome this difficulty, the following notion of weak characteristic was introduced (see e.g. Jabin-Perthame \cite{Jabin-Perthame}): for every direction $\xi\in \mathbb{S}^1$, the function
$\chi(\cdot, \xi):\Omega\to\{0, 1\}$ is defined as
\begin{equation}
\label{weak_cara}
\chi(x, \xi)=\begin{cases} 1 &\quad \textrm{ for }\, \, u(x)\cdot \xi>0, \\
0 &\quad \textrm{ for }\, \, u(x)\cdot \xi\leq 0.
\end{cases}\end{equation}
When $u$ is smooth around a point $x\in \Omega$, then for the choice $\xi:=u^\perp(x)$ either $\nabla \chi(\cdot, \xi)$ locally vanishes
(if $u$ is constant in a neighborhood of $x$), or $\nabla \chi(\cdot, \xi)$ is a measure concentrated on the characteristic $\{X(t, x)\}_t$ and oriented by $\xi^\perp$ (see Figure \ref{chara}).
In other words, we have the following ``kinetic formulation" of the problem:
$$\xi\cdot \nabla \chi(x, \xi)=0.$$
Note that the knowledge of $\chi(\cdot, \xi)$ in every direction $\xi\in \mathbb{S}^1$ determines completely the vector field $u$ due to the averaging formula
\begin{equation}
\label{aver_form}
u(x)=\frac 1 2 \int_{\mathbb{S}^1} \xi \chi(x, \xi)\, d\xi \quad \textrm{ for a.e. }\, \, x\in \Omega.\end{equation}
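To illustrate \eqref{aver_form}, here is the elementary computation behind it at a point $x$ where $u(x)$ is a fixed unit vector; we parametrize $\xi=\cos s\, u(x)+\sin s\, u(x)^\perp$ and integrate in the arclength variable $s$:
$$
\frac12\int_{\mathbb{S}^1}\xi\,{\bf 1}_{\{u(x)\cdot\xi>0\}}\,d\xi
=\frac12\int_{-\pi/2}^{\pi/2}\big(\cos s\; u(x)+\sin s\; u(x)^\perp\big)\,ds
=\frac12\big(2\,u(x)+0\big)=u(x).
$$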
A similar approach can be used to capture the corresponding characteristics for solutions of Burgers' equation and in fact the work of \cite{Jabin-Perthame} originated from ideas applied first in the theory of scalar conservation laws: inspired by the classical work of Kru\v{z}kov, cf. \cite[Section 6.2]{Dafermos}, a similar ``kinetic formulation'' was introduced first by Lions, Perthame and Tadmor in \cite{LPT} for
entropy solutions of scalar conservation laws (in any dimension).
\begin{figure}[htbp]
\centering
\includegraphics[width=0.3\textwidth]{charact.eps} \caption{Characteristics of $u$.} \label{chara}
\end{figure}
The key point in the proof of Theorem \ref{teo} consists in showing an appropriate ``kinetic formulation'' for $W_{div}^{1/p,p}(\Omega, \mathbb{S}^1)$-vector fields.
Indeed Theorem \ref{teo} follows from the following Proposition via an argument of Jabin, Otto and Perthame \cite{JOP02}.
\begin{pro}[Kinetic formulation]
\label{kinet}
Let $u\in W_{div}^{1/p,p}(\Omega, \mathbb{S}^1)$ with $p\in [1,3]$. For every direction $\xi\in \mathbb{S}^1$, the function $\chi(\cdot, \xi)$ defined at \eqref{weak_cara} satisfies the following kinetic equation:
\begin{equation}
\label{eqkine}
\xi\cdot \nabla \chi(\cdot, \xi)=0 \quad \textrm{in} \quad {\cal D}'(\Omega). \end{equation}
\end{pro}
\begin{rem}
\label{remh12}
\begin{enumerate}
\item[(i)] In Ignat \cite{Ignat_JFA}, the above result was proved for $p\in [1,2]$ and it was conjectured that \eqref{eqkine} still holds for any $p>2$. Proposition \ref{kinet} partially answers that question, covering the case $p\leq 3$.
\item[(ii)] A ``kinetic averaging lemma'' (see e.g. Golse-Lions-Perthame-Sentis \cite{Golse}) shows that a measurable vector field $u:\Omega\to \mathbb{S}^1$ satisfying \eqref{eqkine} belongs to $H^{1/2}_{loc}$ (due to \eqref{aver_form}). This property can be read as the converse of Proposition \ref{kinet} in the case $u\in H^{1/2}(\Omega, \mathbb{S}^1)$. A posteriori, such a vector field has stronger regularity since it shares the structure described in Theorem \ref{teo}.
\end{enumerate}
\end{rem}
The main concept that is hidden in the kinetic formulation \eqref{eqkine} is that of entropy coming from scalar conservation laws. Indeed,
for each direction $\xi\in \mathbb{S}^1$ we introduce the maps
$\Phi^\xi:\mathbb{S}^1\to \mathbb{R}^2$ defined by
\begin{equation}
\label{element}
\Phi^\xi(z):=\begin{cases} \xi &\quad \textrm{ for }\, \, z\in \mathbb{S}^1, \, z\cdot \xi>0, \\
0 &\quad \textrm{ for }\, \, z\in \mathbb{S}^1, \, z\cdot \xi\leq 0,
\end{cases}\end{equation}
which will be called ``elementary entropies''. Clearly
$$\Phi^\xi(u(x))=\xi \chi(x, \xi) \quad \textrm{ for a.e. } \quad x\in \Omega$$
and \eqref{eqkine} can be regarded as a vanishing entropy production:
$$\nabla \cdot [ \Phi^\xi(u)]=\xi\cdot \nabla \chi(\cdot, \xi)=0 \quad \textrm{ in } \quad {\cal D}'(\Omega).$$
\noindent The link between \eqref{contraintes} and scalar conservation laws is the following.
If $u$ is a solution of \eqref{contraintes} of the form $u=(v,h(v))$ (for the flux $h(v)=\pm\sqrt{1-v^2}$) then the divergence-free constraint turns into the
scalar conservation law
\begin{equation}
\label{conserv}
v_t + (h(v))_s=0\, .
\end{equation}
From the theory of scalar conservation laws, it is known that, when $h$ is not linear, there is in general no global smooth solution of the Cauchy problem associated to \eqref{conserv}. This leads naturally to consider weak (distributional) solutions of \eqref{conserv}, but in this class there are often infinitely many solutions for the same initial data. The concept of entropy solution restores uniqueness, together with good approximation properties under suitable regularizations (see Kru{\v{z}}kov \cite{Kru}). To clarify this notion we recall that an
entropy - entropy flux pair for \eqref{conserv} is a couple of scalar (Lipschitz) functions $(\eta, q)$ such that $\frac{dq}{dv}=\frac{dh}{dv} \frac{d\eta}{dv}$, which entails that
every smooth solution $v$ of \eqref{conserv} satisfies the balance law $(\eta(v))_t + (q(v))_s=\eta'(v)\big(v_t+(h(v))_s\big)=0$.
A solution $v$ of \eqref{conserv} (in the sense of distributions) is called an entropy solution if, for every convex entropy $\eta$,
the entropy production $(\eta(v))_t + (q(v))_s$ is a nonpositive measure. We summarize all these
concepts in the following definition for the particular case of Burgers' equation:
\begin{df}\label{d:entropy_Burg}
An entropy - entropy flux pair $(\eta, q)$ for \eqref{e:Burgers} consists of two (locally) Lipschitz functions $(\eta, q):\mathbb R \to \mathbb R^2$ such that $q' (w) = w \eta' (w)$ for a.e. $w\in \mathbb R$. A distributional solution $v\in L^\infty_{loc}(\Omega)$ of \eqref{e:Burgers} is an
{\em entropy} solution if $(\eta (v))_t + (q(v))_s \leq 0$ for every such pair $(\eta, q)$ with $\eta$ convex.
\end{df}
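To have a concrete instance of Definition \ref{d:entropy_Burg} in mind (this is only an illustration of the definition, not an additional assumption), take $\eta(w)=\frac{w^2}{2}$; then the compatibility condition forces
$$
q'(w)=w\,\eta'(w)=w^2, \qquad\mbox{so that}\qquad q(w)=\frac{w^3}{3}\ \mbox{(up to an additive constant)},
$$
which is exactly the pair appearing in \eqref{e:no_shocks} below.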
The main point of Theorem \ref{teo_2} is to show that $W^{1/p,p}$ {\em weak} solutions of Burgers' equation are in fact
{\em entropy} solutions. \footnote{Heuristically, the link between \eqref{contraintes} and \eqref{e:Burgers} can be understood by approximating $h(v)=-\sqrt{1-v^2}=-1+\frac{v^2}{2}+O(v^4)$ for small $v$ in \eqref{conserv}. Therefore, the link between Theorems \ref{teo} and \ref{teo_2} is the following: in the framework of Theorem \ref{teo_2}, if $\psi$ is a function with
$\psi_t=1-\frac{v^2}{2}$ and $\psi_s=v$, then $\psi$ is a $C^{1,1}$ viscosity solution of the Hamilton-Jacobi equation $\psi_t+\frac{(\psi_s)^2}{2}=1$. Obviously, in the regime where $v$ is very small, the last equation approximates the eikonal equation $|\nabla \psi|=1$.}
\begin{pro}[Entropy solutions]\label{p:entropy}
Let $v$ be as in Theorem \ref{teo_2}. Then $v$ is a (locally bounded) {\em entropy solution} and moreover
\begin{equation}\label{e:no_shocks}
\left(\frac{v^2}{2}\right)_t + \left(\frac{v^3}{3}\right)_s = 0.
\end{equation}
\end{pro}
Indeed we will focus on showing only the identity \eqref{e:no_shocks}, since it implies that $v$ is an entropy solution by
\cite[Theorem 2.4]{DOW2} (see also \cite{Panov}).
In the case of Burgers' (or more generally for conservation laws $v_t + (h(v))_x =0$ with a uniformly convex $h$), entropy solutions $v$ are functions of bounded variation by Oleinik's estimate (see \cite{Dafermos}). The
chain rule of Volpert (cf. \cite[Theorem 3.99]{AFP}) shows then that the entropy production measure $\mu:= (\eta (v))_t + (q (v))_x$
concentrates on lines (corresponding to ``shocks'' of $v$): in fact we can use such a chain rule to show that \eqref{e:no_shocks} rules out the existence of shocks, and then Theorem \ref{teo_2} can be concluded from the classical theory of hyperbolic conservation laws, cf. \cite[Section 11.3]{Dafermos}.
Alternatively we could argue as for Theorem \ref{teo} using the corresponding kinetic formulation, as is done in \cite[Proposition 3.3]{COW}.
The link between \eqref{contraintes} and \eqref{conserv} suggests using quantities similar to the entropy - entropy flux pairs $(\eta, q)$ to detect ``local'' line-singularities of $u$. This idea, which we will explain in a moment, has been used when dealing with reduced models in micromagnetics, e.g., Jin-Kohn \cite{JK00}, Aviles-Giga~\cite{AG99},
DeSimone-Kohn-M\"uller-Otto \cite{DKMO01}, Ambrosio-DeLellis-Mantegazza \cite{AdLM99}, Alouges-Rivi\`ere-Serfaty~\cite{ARS02}, Ignat-Merlet \cite{IM09}, \cite{IMpre}, Ignat-Moser \cite{Ignat_Moser}. However, in these cases the corresponding entropy production measures usually change sign.
This raised the question of proving the concentration of the entropy production measures on $1$-dimensional sets for those weak solutions with entropy productions which are {\em signed} Radon measures. Partial results are available, see \cite{AKLR,DOW1,DR}, but the general problem is still widely open.
\noindent In the sequel we will always use the following notion of entropy introduced in~\cite{DKMO01} for solutions of the eikonal equation (see also \cite{DLO03, IMpre, JK00}). It corresponds to the entropy - entropy flux pair from the scalar conservation laws, but here the pair is defined in terms of the couple $(v, h(v))$ and not only in terms of $v$.
\begin{df}[DKMO \cite{DKMO01}]
\label{defentrop}
We will say that $\Phi\in C^\infty(\mathbb{S}^1,\mathbb{R}^2)$ is an entropy if
\begin{equation}gin{equation}
\label{condentrop}
\frac{d}{d\theta}\Phi(z)\cdot z \ =\ 0, \quad \textrm{for every $z=e^{i\theta}=(\cos\theta,\sin\theta)\in \mathbb{S}^1$.}
\end{equation}
Here, $\frac{d}{d\theta}\Phi(z):=\frac{d}{d\theta}[\Phi(e^{i\theta})]$ stands for the angular derivative of $\Phi$. The set of all entropies is denoted by $\, ENT$.
\end{df}
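As a simple illustration of Definition \ref{defentrop} (used here only as an example), the identity map $\Phi(z)=z$ is an entropy: indeed,
$$
\frac{d}{d\theta}\Phi(e^{i\theta})=\frac{d}{d\theta}(\cos\theta,\sin\theta)=(-\sin\theta,\cos\theta)=z^\perp,
\qquad\mbox{hence}\qquad \frac{d}{d\theta}\Phi(z)\cdot z=z^\perp\cdot z=0,
$$
and for this choice the vanishing entropy production \eqref{propentrop} below is nothing but the divergence-free constraint $\nabla\cdot u=0$; it corresponds to $\varphi\equiv1$ in \eqref{prop_ent} below.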
\noindent The following two characterizations of entropies are proved in \cite{DKMO01}:
\begin{enumerate}
\item[1.] A map $\Phi\in C^\infty(\mathbb{S}^1,\mathbb{R}^2)$ is an entropy if and only if every $u\in C^\infty(\Omega, \mathbb{R}^2)$ as in \eqref{contraintes} has no entropy production:
\begin{equation} \label{propentrop}
\nabla\cdot [\Phi(u)]=0 \quad \textrm{in} \quad {\cal D}'(\Omega). \end{equation}
\item[2.] A map $\Phi\in C^\infty(\mathbb{S}^1,\mathbb{R}^2)$ is an entropy if and only if there exists a (unique) $2\pi$-periodic function $\varphi\in C^\infty(\mathbb{R})$ such that for every $z=e^{i\theta}\in \mathbb{S}^1$,
\begin{equation}
\label{prop_ent}
\Phi(z)=\varphi(\theta)z+ \frac{d\varphi}{d\theta}(\theta)z^\perp.
\end{equation}
In this case,
\begin{equation}
\label{prop_ent2}
\frac{d}{d\theta}\Phi(z)=\gamma(\theta) z^\perp,\end{equation} where $\gamma \in C^\infty(\mathbb{R})$ is the $2\pi$-periodic function defined by
$\gamma=\varphi+\frac{d^2\varphi}{d\theta^2}$ in $\mathbb{R}$ (a direct verification of \eqref{prop_ent2} is sketched right after this list).
\end{enumerate}
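For the reader's convenience, here is the short computation showing that a map of the form \eqref{prop_ent} satisfies \eqref{condentrop} and \eqref{prop_ent2}; we only use $\frac{dz}{d\theta}=z^\perp$ and $\frac{dz^\perp}{d\theta}=-z$ for $z=e^{i\theta}$:
$$
\frac{d}{d\theta}\Phi(e^{i\theta})
=\varphi'(\theta)z+\varphi(\theta)z^\perp+\varphi''(\theta)z^\perp-\varphi'(\theta)z
=\big(\varphi(\theta)+\varphi''(\theta)\big)z^\perp=\gamma(\theta)z^\perp,
$$
which is orthogonal to $z$, i.e. \eqref{condentrop} holds.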
\noindent As shown in Ignat-Merlet \cite{IMpre}, these properties can be extended to nonsmooth entropies, in particular to the special class of elementary entropies $\Phi^\xi$ of \eqref{element}, which are maps of bounded variation. Although $\Phi^\xi$ is not a smooth entropy (in fact, $\Phi^\xi$ has a jump at the points $\pm \xi^\perp\in \mathbb{S}^1$), the equality \eqref{condentrop} trivially holds in ${\cal D}'(\mathbb{S}^1)$.
Moreover, as shown in \cite{DKMO01}, there exists a sequence of smooth entropies $\{\Phi_k\}\subset ENT$ such that
$\{\Phi_k\}$ is uniformly bounded and $\lim_k \Phi_k(z)=\Phi^\xi(z)$ for every $z\in \mathbb{S}^1$ (this approximation result follows via \eqref{prop_ent}).
Therefore, in order to have the kinetic formulation in Proposition \ref{kinet}, we will prove the following result:
\begin{pro}
\label{pro_equi}
Let $\Phi\in C^\infty(\mathbb{S}^1,\mathbb{R}^2)$ be an entropy. Then for every $u\in W_{div}^{1/p,p}(\Omega, \mathbb{S}^1)$, $p\in [1, 3]$, the identity \eqref{propentrop} holds true.
\end{pro}
\noindent Note that this result extends the characterization \eqref{propentrop} of an entropy to the class of $W^{1/p, p}$-vector fields.
\section{Proofs of Proposition \ref{p:entropy} and Proposition \ref{pro_equi}}
Proposition \ref{pro_equi} was proved in \cite{Ignat_JFA} (see also Ignat \cite{Confluentes}) for $p\in [1,2]$ using a duality argument that cannot be adapted to the case $p>2$. We will present the strategy used in \cite{Confluentes} for the case $p=2$, together with a very elementary argument for $p=1$ (cf. Steps 4 and 5 in the proof below), and then we will present a new method
that enables us to conclude in the case $p\leq 3$. However, the easier cases $p\in (1,2]$ can be concluded directly from the latter (cf. Step 7 in the proof below).
\noindent {\bf Proof of Proposition \ref{pro_equi}.} Let $\Phi\in C^\infty(\mathbb{S}^1,\mathbb{R}^2)$ be an entropy, i.e., \eqref{condentrop} holds. Let $B\subset\subset \Omega$ be a ball and $\{\rho_\varepsilon\}_{\varepsilon>0}$ be a family of standard mollifiers in $\mathbb{R}^2$ of the form
$$\rho_\varepsilon(x)=\frac 1 {\varepsilon^2} \rho\left(\frac x \varepsilon\right)$$ with $\rho:\mathbb{R}^2\to \mathbb{R}_+$ smooth, $\int_{\mathbb{R}^2} \rho(x)\, dx=1$ and $\operatorname{supp} \rho \subset B_1$, where $B_1$ is the unit ball in $\mathbb{R}^2$. For $\varepsilon>0$ small enough, we consider the approximation of $u\in W_{div}^{1/p,p}(\Omega, \mathbb{S}^1)$ in $B$ by convolution with $\rho_\varepsilon$:
$$u_\varepsilon=u\star \rho_\varepsilon \quad \textrm{ in } \quad B.$$ Then $u_\varepsilon\in C^\infty(B, \mathbb{R}^2)$, $\nabla \cdot u_\varepsilon=0$ and $|u_\varepsilon|\leq 1$ in $B$.
\noindent {\it Step 1. Extension $\tilde \Phi$ of the entropy $\Phi$ to $\mathbb{R}^2$.} We extend the entropy $\Phi$ to a ``generalized'' entropy $\tilde \Phi$ on $\mathbb{R}^2$. For that, we consider a smooth function $\eta:[0, \infty)\to \mathbb{R}$ such that $\eta=0$ on $[0, 1/2]\cup [2, \infty)$ and $\eta(1)=1$ and define $\tilde
\Phi\in C_c^\infty(\mathbb{R}^2, \mathbb{R}^2)$ by
$$
\tilde \Phi(z):=\eta(|z|) \Phi\Big(\frac{z}{|z|}\Big) \, \, \textrm{ for every } \, \, z\in \mathbb{R}^2\setminus\{0\}.$$ By \eqref{condentrop}, we have that
\begin{equation}
\label{ref_extension}
z\cdot D\tilde \Phi(z)z^\perp=|z|z\cdot \frac{\partial \tilde \Phi}{\partial \theta}(z)=
|z|\eta(|z|)z\cdot \frac{d \Phi}{d \theta}\Big(\frac{z}{|z|}\Big)\stackrel{\eqref{condentrop}}{=}0, \quad z\in \mathbb{R}^2,\end{equation}
with the usual notation $(D\tilde \Phi)_{i,j}=\frac{\partial \tilde \Phi_i}{\partial x_j}$.
\noindent {\it Step 2. Decomposition of $D \tilde \Phi$.} We show that there exist $\Psi \in C^\infty_c( \mathbb{R}^2,\mathbb{R}^2)$
and $\gamma \in C^\infty_c(\mathbb{R}^2,\mathbb{R})$ such that
$$
D\tilde \Phi (z)= -2\Psi(z) \otimes z + \gamma(z) Id\quad \textrm{for every } \, \, z\in \mathbb{R}^2,$$
where $Id$ is the identity matrix (see \cite{DKMO01}). Indeed, one considers
$$\gamma(z)=\frac{z^\perp\cdot D\tilde \Phi(z)z^\perp}{|z|^2}\quad \textrm{ and }\quad \Psi(z)=\frac{-D\tilde \Phi(z)z+\gamma(z)z}{2|z|^2}, \quad z\in \mathbb{R}^2.
$$
(Here, $\gamma$ is indeed an extension to the whole plane $\mathbb{R}^2$ of the function given in \eqref{prop_ent2}.) Denoting $\vec{r}=\frac{z}{|z|}$ and
$\vec{\theta}=\frac{z^\perp}{|z|}$ for $z\neq 0$, one checks, decomposing a matrix along the orthonormal basis $\{\vec{r},\vec{\theta}\}$, that
\begin{align*}
D\tilde \Phi (z)-\gamma(z)Id&=\bigg(D\tilde \Phi (z)\vec{r}-\gamma(z)\vec{r}\bigg)\otimes \vec{r} +
\underbrace{\bigg(D\tilde \Phi (z)\vec{\theta}-\gamma(z)\vec{\theta}\bigg)}_{=0 \, \, \textrm{by}\, \, \eqref{ref_extension}}\otimes \, \vec{\theta}=-2\Psi(z) \otimes z \quad \forall z\neq 0.
\end{align*}
\noindent {\it Step 3. The entropy production $\nabla\cdot [\Phi (u_\varepsilon)]$.}
For the smooth approximation $u_\varepsilon$, we obtain the entropy production (as in \cite{DKMO01}):
\begin{align}
\nonumber
\nabla\cdot [\tilde \Phi (u_\varepsilon)]&={\rm Tr} \bigg(D\tilde \Phi(u_\varepsilon)Du_\varepsilon\bigg)\stackrel{Step \, \, 2}{=}-2{\rm Tr} \bigg(\Psi(u_\varepsilon)\otimes u_\varepsilon \, Du_\varepsilon\bigg)+\gamma(u_\varepsilon)
\underbrace{\nabla \cdot u_\varepsilon}_{=0}\\
\nonumber
&=-2\Psi(u_\varepsilon) \cdot (Du_\varepsilon)^T u_\varepsilon=-\Psi(u_\varepsilon) \cdot \nabla |u_\varepsilon|^2\\
\label{decomp_eps}
&=\Psi(u_\varepsilon) \cdot \nabla \big(1-|u_\varepsilon|^2\big) \quad \textrm{in} \quad B.
\end{align}
\noindent {\it Step 4. Proof of \eqref{propentrop} for $p=1$.} The final issue consists in passing to the limit in \eqref{decomp_eps} as $\varepsilon\to 0$.
On one hand, the chain rule implies that $\tilde \Phi(u_\varepsilon)\to \tilde \Phi(u)=\Phi(u)$ in $W^{1,1}(B)$, in particular,
\begin{equation}
\label{p=1}
\nabla \cdot [\tilde \Phi(u_\varepsilon)]\to \nabla \cdot [\Phi(u)] \textrm{ in } L^1(B).\end{equation} On the other hand, the chain rule leads to $1-|u_\varepsilon|^2\to 1-|u|^2=0$ in
$W^{1,1}(B)$, in particular, $$\nabla (1-|u_\varepsilon|^2) \to 0 \textrm{ in } L^1(B).$$ Since $\{\Psi(u_\varepsilon)\}$ is uniformly bounded, the duality $<\cdot, \cdot>_{L^{\infty}(B), L^1(B)}$ leads to
$$\Psi(u_\varepsilon)\cdot \nabla (1-|u_\varepsilon|^2) \to 0 \textrm{ in } L^1(B),$$ which by \eqref{decomp_eps} and \eqref{p=1} yields $\nabla\cdot [\Phi(u)]=0$ (in $L^1(B)$).
\noindent {\it Step 5. Proof of \eqref{propentrop} for $p=2$.} We repeat the above argument using the duality $$<\cdot, \cdot>_{H^{-1/2}(B), H^{1/2}_{00}(B)}$$ where $H^{-1/2}(B)$ denotes the dual space of $H^{1/2}_{00}(B)$:
$$H_{00}^{1/2}(B)=
\{\zeta\in H^{1/2}(B)\,: \, \int_B \int_B \frac{|\zeta(x)-\zeta(y)|^2}{|x-y|^{3}}\, dxdy+\int_B \frac{|\zeta(x)|^2}{d(x)}\, dx<\infty\}$$
with $d(x)={\rm dist}(x, \partial B)$. In fact, $H_{00}^{1/2}(B)$ can be seen as the closure of $C_c^\infty(B)$ in $H^{1/2}(\mathbb{R}^2)$ (see e.g. \cite{Ignat_JFA} for more details). More precisely, on one hand, the chain rule implies that $\tilde \Phi(u_\varepsilon)\to \tilde \Phi(u)=\Phi(u)$ in $H^{1/2}(B)$, in particular,
\begin{equation}
\label{p=2}
\nabla \cdot [\tilde \Phi(u_\varepsilon)]\to \nabla \cdot [\Phi(u)] \textrm{ in } H^{-1/2}(B).\end{equation} On the other hand, the chain rule leads to $1-|u_\varepsilon|^2\to 1-|u|^2=0$ in
$H^{1/2}(B)$, in particular, $$\nabla (1-|u_\varepsilon|^2) \to 0 \textrm{ in } H^{-1/2}(B).$$ Since $\Psi(u_\varepsilon)\to \Psi(u)$ in $H^{1/2}(B)$, we conclude that for every $\zeta\in C^\infty_c(B)$,
$$<\nabla (1-|u_\varepsilon|^2), \zeta \Psi(u_\varepsilon)>_{H^{-1/2}(B), H^{1/2}_{00}(B)} \to 0,$$ which by \eqref{decomp_eps} and \eqref{p=2} yields
$$<\nabla\cdot [\Phi(u)], \zeta>_{H^{-1/2}(B), H^{1/2}_{00}(B)}=0.$$ Hence, $\nabla\cdot [\Phi(u)]=0$ in ${\cal D}'(B)$.
\noindent {\it Step 6. Proof of \eqref{propentrop} for $p=3$.} In this case, we use the estimate of Constantin, E and Titi, cf. \cite{CET}. Let $\zeta\in C^\infty_c(B)$.
By \eqref{decomp_eps}, we write:
\begin{align*}
\int_{B} \zeta(x) \nabla\cdot [\tilde \Phi (u_\varepsilon)]\, dx&=\int_{B} \zeta(x) \Psi(u_\varepsilon) \cdot \nabla \big(1-|u_\varepsilon|^2\big)\, dx\\
&=\underbrace{\int_{B} \zeta(x) \nabla\cdot \big[\Psi(u_\varepsilon) (1-|u_\varepsilon|^2)\big]\, dx}_{=I_\varepsilon}-\underbrace{\int_{B} \zeta(x) (1-|u_\varepsilon|^2) \nabla\cdot [\Psi(u_\varepsilon)] \, dx}_{=II_\varepsilon}.
\end{align*}
\noindent {\it Passing to the limit for $I_\varepsilon$ as $\varepsilon\to 0$.} By the dominated convergence theorem, we have that $\Psi(u_\varepsilon) (1-|u_\varepsilon|^2)\to 0$ in $L^1(B)$ so that, after integrating by parts, we conclude $I_\varepsilon\to 0$ as $\varepsilon\to 0$.\\
\noindent {\it Passing to the limit for $II_\varepsilon$ as $\varepsilon\to 0$.} This part is subdivided into three further steps.
\noindent {\it (i)} First, we write for $x\in B$ and for small $\varepsilon$:
\begin{align*}
1-|u_\varepsilon(x)|^2&=|u|^2\star \rho_\varepsilon(x)-|u\star \rho_\varepsilon(x)|^2\\
&=\int_{\mathbb{R}^2} |u(x-z)|^2 \rho_\varepsilon(z)\, dz-\bigg(\int_{\mathbb{R}^2} u(x-z) \rho_\varepsilon(z)\, dz\bigg)\cdot \bigg(\int_{\mathbb{R}^2} u(x-w) \rho_\varepsilon(w)\, dw\bigg)\\
&=\int_{\mathbb{R}^2} \int_{\mathbb{R}^2} u(x-z)\cdot(u(x-z)-u(x-w)) \rho_\varepsilon(z) \rho_\varepsilon(w)\, dz\, dw\\
&\stackrel{z:=w, \, w:=z}{=}\frac 1 2 \int_{\mathbb{R}^2} \int_{\mathbb{R}^2} \big|u(x-z)-u(x-w)\big|^2 \rho_\varepsilon(z) \rho_\varepsilon(w)\, dz\, dw\\
&\leq 2\int_{\mathbb{R}^2} \big|u(x-z)-u(x)\big|^2 \rho_\varepsilon(z) \, dz\\
&\leq \frac{2\|\rho\|_{L^\infty}} {\varepsilon^2} \int_{B_\varepsilon} \big|u(x-z)-u(x)\big|^2 \, dz,
\end{align*}
where we used the inequality $\frac 1 2 \big|u(x-z)-u(x-w)\big|^2\leq |u(x-z)-u(x)|^2+|u(x-w)-u(x)|^2$ and the properties of the mollifiers, i.e., $\operatorname{supp} \rho_\varepsilon\subset B_\varepsilon$ (that is the ball of radius $\varepsilon$ centered at the origin) and $\int_{B_\varepsilon} \rho_\varepsilon(z)\, dz=1$.
\noindent {\it (ii)} Second, we write the last term in $II_\varepsilon$ as $\nabla\cdot [\Psi(u_\varepsilon)]={\rm Tr} \big(D\Psi(u_\varepsilon)\nabla u_\varepsilon\big)$. Moreover, since
$\int_{B_\varepsilon} \partial_j \rho(\frac z \varepsilon)\, dz=0$ for $j=1,2$, we observe that
\begin{align*}
\partial_j u_\varepsilon(x)&=u\star \partial_j \rho_\varepsilon(x)=\frac 1 {\varepsilon^3} \int_{B_\varepsilon}u(x-z) \partial_j \rho\Big(\frac{z} \varepsilon\Big)\, dz=\frac 1 {\varepsilon^3} \int_{B_\varepsilon}\big(u(x-z)-u(x)\big) \partial_j
\rho\Big(\frac{z} \varepsilon\Big)\, dz,
\end{align*}
so that
$$|\partial_j u_\varepsilon(x)|\leq \frac{\|\nabla \rho\|_{L^\infty}} {\varepsilon^3} \int_{B_\varepsilon} \big|u(x-z)-u(x)\big| \, dz \quad \textrm{ for } j=1,2.$$
\noindent {\it (iii)} Third, using Jensen's inequality, we deduce by {\it (i)} and {\it (ii)}:
\begin{align}
|II_\varepsilon|&\leq \frac C \varepsilon \int_B \Big(\int_{B_\varepsilon} \hspace{-6mm}-\, \, \, |u(x-z)-u(x)|\, dz\Big) \Big(\int_{B_\varepsilon} \hspace{-6mm}-\, \, \, |u(x-z)-u(x)|^2\, dz\Big) \, dx \nonumber\\
&\leq \frac C \varepsilon \int_B \Big(\int_{B_\varepsilon} \hspace{-6mm}-\, \, \, |u(x-z)-u(x)|^3\, dz\Big)^{1/3}
\Big(\int_{B_\varepsilon} \hspace{-6mm}-\, \, \, |u(x-z)-u(x)|^3\, dz\Big)^{2/3}\, dx \nonumber\\
&= \frac C \varepsilon \int_B \int_{B_\varepsilon} \hspace{-6mm}-\, \, \, |u(x-z)-u(x)|^3\, dz\, dx \nonumber\\
&= \frac C {\varepsilon^3} \int_B \int_{B_\varepsilon} |u(x-z)-u(x)|^3\, dz\, dx \nonumber\\
&\stackrel{|z|\leq \varepsilon}{\leq} C\int_B \int_{B_\varepsilon} \frac{|u(x-z)-u(x)|^3}{|z|^3}\, dz\, dx = C\int_B \int_{B_\varepsilon (x)} \frac{|u(x)-u(y)|^3}{|y-x|^3}\, dy\, dx\label{e:vanishing}\, .
\end{align}
Since $u\in W^{1/3, 3}(B)$, the integral
\[
\int_{B\times B} \frac{|u(x)-u(y)|^3}{|y-x|^3}\, dy\, dx
\]
is finite and thus the last integral in \eqref{e:vanishing} converges to $0$ as $\varepsilon\downarrow 0$.
Therefore, we conclude that \eqref{propentrop} holds for $p=3$.
\noindent {\it Step 7. Proof of \eqref{propentrop} for $p\in (1,3)$.} By the
Gagliardo-Nirenberg embedding $L^\infty\cap W^{1/p, p}\subset W^{1/3,3}$ (see \cite{BBM_lifting}, Lemma D.1), one concludes by Step 6.
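In the present two-dimensional setting this embedding admits the following elementary justification, which we record only for the reader's convenience (it is a special case of the interpolation quoted above): for $1\leq p\leq 3$ and bounded $u$, the pointwise bound
$$
\frac{|u(x)-u(y)|^3}{|x-y|^{3}}\leq \big(2\|u\|_{L^\infty}\big)^{3-p}\,\frac{|u(x)-u(y)|^p}{|x-y|^{3}}
$$
shows, after integration in $x$ and $y$, that the $W^{1/3,3}$-seminorm is controlled by the $W^{1/p,p}$-seminorm, since the exponents satisfy $2+\frac1p\cdot p=2+\frac13\cdot 3=3$.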
\noindent Since $B\subset \Omega$ was an arbitrarily chosen ball, \eqref{propentrop} follows in $\Omega$.
$\square$
\begin{proof}[Proof of Proposition \ref{p:entropy}]
We use computations very similar to those of Step 6 in the previous proof to show that \eqref{e:no_shocks} holds. More precisely,
consider a family of standard mollifiers $\rho_\varepsilon$, but this time in the space variable $s$ only: $\rho \in C^\infty_c (]-1,1[)$ and $\rho_\varepsilon (s) = \frac{1}{\varepsilon} \rho (\frac{s}{\varepsilon})$. We still use the notation $v_\varepsilon = v\star \rho_\varepsilon$ for the convolution of $v$ and $\rho_\varepsilon$ in the space variable {\em only}, namely
\[
v\star \rho_\varepsilon (t,s) = \int v (t, s-\sigma) \rho_\varepsilon (\sigma)\, d\sigma\, .
\]
Fix a smooth test function $\zeta\in C^\infty_c (\Omega)$. Our goal is to show that
\begin{align}\label{e:regularized}
\lim_{\varepsilon\downarrow 0} \underbrace{\int_\Omega \left(\textstyle{\frac{v_\varepsilon^2}{2}} \zeta_t + \textstyle{\frac{v_\varepsilon^3}{3}} \zeta_s\right)}_{=:J_\varepsilon} = 0\, .
\end{align}
This in turn would imply that \eqref{e:no_shocks} holds and the Proposition would then follow from \cite[Theorem 2.4]{DOW2}. Observe that, although
we are only mollifying in space, we can conclude from \eqref{e:Burgers} that
\begin{equation}\label{e:Burgers_mollified}
(v_\varepsilon)_t + \left(\textstyle{\frac{v^2 \star \rho_\varepsilon}{2}}\right)_s = 0 \qquad \mbox{in $\Omega_\varepsilon = \{(s,t)\in \Omega : {\rm dist}\, ((s,t), \partial \Omega) > \varepsilon\}$.}
\end{equation}
In particular, for $\varepsilon$ sufficiently small, $v_\varepsilon$ turns out to be $C^1$ on the support of $\zeta$.
Integrating by parts, using the chain rule and then the mollified equation \eqref{e:Burgers_mollified}, we easily reach
\begin{align}
J_\varepsilon &= - \int v_\varepsilon \zeta \left((v_\varepsilon)_t + \left(\textstyle{\frac{v_\varepsilon^2}{2}}\right)_s\right) \stackrel{\eqref{e:Burgers_mollified}}{=} - \frac{1}{2} \int v_\varepsilon \zeta (v_\varepsilon^2 - v^2\star\rho_\varepsilon)_s\nonumber\\
&= \frac{1}{2} \underbrace{\int (v_\varepsilon)_s \zeta (v_\varepsilon^2 - v^2 \star \rho_\varepsilon)}_{=: I_\varepsilon} + \frac{1}{2} \int v_\varepsilon \zeta_s (v_\varepsilon^2 - v^2\star \rho_\varepsilon)\, .\label{e:identita}
\end{align}
Observe that the second integral in \eqref{e:identita} goes to $0$ because $v_\varepsilon$ is uniformly bounded in $L^3$ (indeed by assumption it is bounded in $L^4$) and $v_\varepsilon^2 - v^2\star \rho_\varepsilon$ converges to $0$ strongly in $L^{3/2}$ (in fact by assumption it converges even in $L^2$). We thus need to show that $I_\varepsilon$ converges to $0$. Following the same computations as in Steps 6 and 7 of the previous proof, we can easily show that:
\begin{align*}
|(v_\varepsilon)_s (t,s)| &= \frac{2}{\varepsilon} \left|\int_{-\varepsilon}^\varepsilon \hspace{-6mm}-\, \, \, (v (t,s -\sigma) - v(t,s)) \rho' \left(\frac{\sigma}{\varepsilon}\right)\, d\sigma\right|\nonumber\\
&\leq \frac{C}{\varepsilon} \left(\int_{-\varepsilon}^\varepsilon \hspace{-6mm}-\, \, \, |v (t, s-\sigma) - v(t,s)|^3\, d\sigma\right)^{1/3}\\
|v_\varepsilon^2 - v^2\star\rho_\varepsilon| (t,s) &= \frac{1}{2\varepsilon^2} \left|\int\int (v (t, s-\sigma) - v (t,s-\sigma'))^2 \rho \left(\frac{\sigma}{\varepsilon}\right)
\rho \left(\frac{\sigma'}{\varepsilon}\right)\, d\sigma d\sigma'\right|\nonumber\\
&\leq {C} \int_{-\varepsilon}^\varepsilon \hspace{-6mm}-\, \, \, |v(t, s-\sigma)- v(t,s)|^2\, d\sigma\\
&\leq C \left(\int_{-\varepsilon}^\varepsilon \hspace{-6mm}-\, \, \, |v (t, s-\sigma) - v(t,s)|^3\, d\sigma\right)^{2/3}\, .
\end{align*}
Recalling that ${\rm supp}\, (\zeta) \subset I \times K \subset\subset I \times J$ for some closed interval $K$, we conclude
\begin{align*}
|I_\varepsilon| &\leq \frac{C}{\varepsilon^2} \int_I \int_K \int_{-\varepsilon}^\varepsilon |v(t, s-\sigma) - v (t,s)|^3\, d\sigma\, ds\, dt\\
&\leq C\int_I \int_K \int_{s-\varepsilon}^{s+\varepsilon} \frac{|v(t,s)-v(t,\sigma)|^3}{|s-\sigma|^2}\, d\sigma\, ds\, dt\, .
\end{align*}
Since by assumption
\[
\int_I \int_{K\times K} \frac{|v(t,s)-v(t,\sigma)|^3}{|s-\sigma|^2}\, d\sigma\, ds\, dt < \infty\, ,
\]
we obviously conclude that $I_\varepsilon\to 0$.
$\square$
\end{proof}
\section{Proofs of Proposition \ref{kinet}, Theorem \ref{teo} and Theorem \ref{teo_2}}
\begin{proof}[Proof of Proposition \ref{kinet}] For every $\xi\in \mathbb{S}^1$,
the non-smooth ``elementary entropies'' $\Phi^\xi:\mathbb{S}^1\to \mathbb{R}^2$ given by \eqref{element}
can be approximated by a sequence of smooth entropies $\{\Phi_k\}\subset ENT$ such that
$\{\Phi_k\}$ is uniformly bounded and with $\lim_k \Phi_k(z)=\Phi^\xi(z)$ for every $z\in \mathbb{S}^1$. Indeed, this smoothing result follows by \eqref{prop_ent}: if one writes $\xi=e^{i\theta_0}$ with $\theta_0\in(-\pi, \pi]$, then the unique
$2\pi$-periodic function $\varphi\in C(\mathbb{R})$ satisfying \eqref{prop_ent} for $\Phi^\xi$ is given by:
$$\varphi(\theta)=\xi\cdot z \, {\bf 1}_{\{z\cdot\xi>0\}}=\cos(\theta-\theta_0) {\bf 1}_{\{\theta-\theta_0\in (-\pi/2, \pi/2)\}} \quad \textrm{ for }\, \, z=e^{i\theta}, \, \theta\in (-\pi+\theta_0, \pi+\theta_0).$$
By \eqref{prop_ent} for $\Phi^\xi$, the choice of $\varphi'$ is fixed at the jump points $\pm \xi^\perp\in \mathbb{S}^1$:
$$\varphi'(\theta)=-\sin (\theta-\theta_0) {\bf 1}_{\{\theta-\theta_0\in (-\pi/2, \pi/2)\}} \quad \Thetaxtrm{ for}\, \, \theta\in (-\pi+\theta_0, \pi+\theta_0).$$
Now, one regularizes $\varphi$ by $2\pi$-periodic functions $\varphi_k\in C^\infty(\mathbb{R})$ that are uniformly bounded in $W^{1, \infty}(\mathbb{R})$ and $\lim_k \varphi_k(\theta)=\varphi(\theta)$ as well as
$\lim_k \varphi'_k(\theta)=\varphi'(\theta)$ for every $\theta\in \mathbb{R}$. Thus, the desired (smooth) approximating entropies $\Phi_k$ are given by $\varphi_k$ via \eqref{prop_ent}.
Therefore, Proposition \ref{pro_equi} implies that for every $u\in W_{div}^{1/p,p}(\Omega, \mathbb{S}^1)$ (with $p\in [1,3]$),
one has $\int_\Omega \Phi_k(u)\cdot \nabla \zeta\, dx=0$ for every $\zeta\in C^\infty_c(\Omega)$ and by the dominated convergence theorem, we pass to the limit $k\to \infty$ and conclude that $$0=\nabla\cdot [\Phi^\xi(u)]=\nabla\cdot [\xi \chi(\cdot, \xi)]=\xi\cdot \nabla \chi(\cdot, \xi) \quad \textrm{in} \quad {\cal D}'(\Omega).$$
$\square$
\end{proof}
\noindent {\bf Proof of Theorem \ref{teo}.}
It is a consequence of Proposition \ref{kinet} combined with the strategy of Jabin-Otto-Perthame (see Theorem 1.3 in \cite{JOP02}). For completeness, let us recall the main steps of that argument:
let $u:\Omega\to \mathbb{S}^1$ be a measurable function that satisfies \eqref{eqkine} for every $\xi\in \mathbb{S}^1$. Notice that the divergence-free condition is automatically satisfied (in ${\cal D}'(\Omega)$) because
of \eqref{aver_form}. The first step consists in defining an $L^\infty$-trace of $u$ on each segment $\Sigma\subset \Omega$. More precisely, if $\Sigma:=\{0\}\times [-1,1]\subset \Omega$, then there exists a trace
$\tilde u\in L^\infty(\Sigma, \mathbb{S}^1)$ such that
$$\lim_{r\to 0} \frac 1 r \int_{-r}^r \int_{-1}^1 |u(x_1, x_2)-\tilde u(x_2)|\, dx_2 dx_1=0$$ and for each Lebesgue point $(0,x_2)\in \Sigma$ of $u$, one has $u(0,x_2)=\tilde u(x_2)$. Observe that this step is straightforward in the case $u\in W_{div}^{1,1}(\Omega, \mathbb{S}^1)$; however, it is essential, for example, in the case $p>1$. The second step is to prove that if the trace $\tilde u$ of $u$ is orthogonal to $\Sigma$ at some point, then $\tilde u$ is orthogonal to $\Sigma$ almost everywhere (which coincides with the classical principle of characteristics for smooth vector fields $u$). The key point here is an ordering relation along the characteristics of $u$: for every two Lebesgue points $x, y\in \Omega$ of $u$ with the segment $[x,y]\subset \Omega$, the following implication holds:
$$u(x)\cdot (y-x)>0 \Rightarrow u(y)\cdot (y-x)>0.$$ The final step consists in proving that on any open convex subset $\omega \subset \Omega$ with $d={\rm dist}(\omega, \partial \Omega)>0$, only two situations may occur: either two
characteristics of $u$
intersect at $P\in \Omega$ with ${\rm dist}(P, \omega)<d$ and $u(x)=\pm \frac{(x-P)^\perp}{|x-P|}$ for $x\in \omega\setminus \{P\}$, or $u$ is $1/d$-Lipschitz in $\omega$, i.e.,
$$|u(x)-u(y)|\leq \frac 1 d |x-y|, \quad \textrm{ for every } \, x,y\in \omega.$$
(In this last case, any two characteristics passing through $\omega$ may intersect only at distance $\geq d$ outside $\omega$.) Note that $u$ may have infinitely many vortex points $P_k$ and any vortex point has
degree one, but the orientation $\alpha_k$ of the vortex point $P_k$ may vary from one vortex to another in $\Omega$.
$\square$
\begin{proof}[Proof of Theorem \ref{teo_2}] As shown in Proposition \ref{p:entropy}, $v$ is an entropy solution. As such, we conclude from the classical Oleinik estimate (cf. \cite[Theorem 11.2.1]{Dafermos}) that $v_s$ is a Radon measure and hence that $v$ is in fact $L^\infty_{loc}$ and $BV_{loc}$. On the other hand, the equality \eqref{e:no_shocks} implies that $v$ is shock-free in $\Omega$ (cf. for instance the proof of \cite[Corollary 2.5]{DOW2}). In particular it follows from \cite[Theorem 11.3.2]{Dafermos} that $v$ is everywhere continuous and therefore from \cite[Theorem 11.3.5]{Dafermos} that it is locally Lipschitz.
$\square$
\end{proof}
\begin{thebibliography}{10}
\bibitem{ARS02}
Fran{\c{c}}ois Alouges, Tristan Rivi{\`e}re, and Sylvia Serfaty.
\newblock N\'eel and cross-tie wall energies for planar micromagnetic
configurations.
\newblock {\em ESAIM Control Optim. Calc. Var.}, 8:31--68 (electronic), 2002.
\newblock A tribute to J. L. Lions.
\bibitem{AdLM99}
Luigi Ambrosio, Camillo De~Lellis, and Carlo Mantegazza.
\newblock Line energies for gradient vector fields in the plane.
\newblock {\em Calc. Var. Partial Differential Equations}, 9(4):327--355, 1999.
\bibitem{AFP}
Luigi Ambrosio, Nicola Fusco, and Diego Pallara.
\newblock {\em Functions of bounded variation and free discontinuity problems}.
\newblock Oxford Mathematical Monographs. The Clarendon Press Oxford University
Press, New York, 2000.
\bibitem{AKLR}
Luigi Ambrosio, Bernd Kirchheim, Myriam Lecumberry, and Tristan Rivi{\`e}re.
\newblock On the rectifiability of defect measures arising in a micromagnetics
model.
\newblock In {\em Nonlinear problems in mathematical physics and related
topics, {II}}, volume~2 of {\em Int. Math. Ser. (N. Y.)}, pages 29--60.
Kluwer/Plenum, New York, 2002.
\bibitem{AG99}
Patricio Aviles and Yoshikazu Giga.
\newblock On lower semicontinuity of a defect energy obtained by a singular
limit of the {G}inzburg-{L}andau type energy for gradient fields.
\newblock {\em Proc. Roy. Soc. Edinburgh Sect. A}, 129(1):1--17, 1999.
\bibitem{BBM_lifting}
Jean Bourgain, Haim Brezis, and Petru Mironescu.
\newblock Lifting in {S}obolev spaces.
\newblock {\em J. Anal. Math.}, 80:37--86, 2000.
\bibitem{BDS}
T.~{Buckmaster}, C.~{De Lellis}, and L.~{Sz{\'e}kelyhidi}, Jr.
\newblock {Dissipative Euler flows with Onsager-critical spatial regularity}.
\newblock {\em ArXiv e-prints}, April 2014.
\bibitem{CET}
P.~Constantin, W.~E, and E.~S. Titi.
\newblock Onsager's conjecture on the energy conservation for solutions of
{E}uler's equation.
\newblock {\em Comm. Math. Phys.}, 165(1):207--209, 1994.
\bibitem{COW}
Gianluca Crippa, Felix Otto, and Michael Westdickenberg.
\newblock Regularizing effect of nonlinearity in multidimensional scalar
conservation laws.
\newblock In {\em Transport equations and multi-{D} hyperbolic conservation
laws}, volume~5 of {\em Lect. Notes Unione Mat. Ital.}, pages 77--128.
Springer, Berlin, 2008.
\bibitem{Dafermos}
Constantine~M. Dafermos.
\newblock {\em Hyperbolic conservation laws in continuum physics}, volume 325
of {\em Grundlehren der Mathematischen Wissenschaften [Fundamental Principles
of Mathematical Sciences]}.
\newblock Springer-Verlag, Berlin, 2000.
\bibitem{DS2}
C.~De~Lellis and L.~Sz{\'e}kelyhidi, Jr.
\newblock Dissipative {E}uler flows and {O}nsager's conjecture.
\newblock {\em To appear in {\em JEMS}}, pages 1--40, 2012.
\bibitem{DLO03}
Camillo De~Lellis and Felix Otto.
\newblock Structure of entropy solutions to the eikonal equation.
\newblock {\em J. Eur. Math. Soc. (JEMS)}, 5(2):107--145, 2003.
\bibitem{DOW1}
Camillo De~Lellis, Felix Otto, and Michael Westdickenberg.
\newblock Structure of entropy solutions for multi-dimensional scalar
conservation laws.
\newblock {\em Arch. Ration. Mech. Anal.}, 170(2):137--184, 2003.
\bibitem{DOW2}
Camillo De~Lellis, Felix Otto, and Michael Westdickenberg.
\newblock Minimal entropy conditions for {B}urgers equation.
\newblock {\em Quart. Appl. Math.}, 62(4):687--700, 2004.
\bibitem{DR}
Camillo De~Lellis and Tristan Rivi{\`e}re.
\newblock The rectifiability of entropy measures in one space dimension.
\newblock {\em J. Math. Pures Appl. (9)}, 82(10):1343--1367, 2003.
\bibitem{DS1}
Camillo De~Lellis and L{\'a}szl{\'o} Sz{\'e}kelyhidi, Jr.
\newblock Dissipative continuous {E}uler flows.
\newblock {\em Invent. Math.}, 193(2):377--407, 2013.
\bibitem{DKMO01}
Antonio DeSimone, Stefan M{\"u}ller, Robert~V. Kohn, and Felix Otto.
\newblock A compactness result in the gradient theory of phase transitions.
\newblock {\em Proc. Roy. Soc. Edinburgh Sect. A}, 131(4):833--844, 2001.
\bibitem{Eyink}
G.~L. Eyink.
\newblock Energy dissipation without viscosity in ideal hydrodynamics. {I}.
{F}ourier analysis and local energy transfer.
\newblock {\em Phys. D}, 78(3-4):222--240, 1994.
\bibitem{Golse}
Fran{\c{c}}ois Golse, Pierre-Louis Lions, Beno{\^{\i}}t Perthame, and R{\'e}mi
Sentis.
\newblock Regularity of the moments of the solution of a transport equation.
\newblock {\em J. Funct. Anal.}, 76(1):110--125, 1988.
\bibitem{Confluentes}
Radu Ignat.
\newblock Singularities of divergence-free vector fields with values into
{$S^1$} or {$S^2$}. {A}pplications to micromagnetics.
\newblock {\em Confluentes Math.}, 4(3):1230001, 80, 2012.
\bibitem{Ignat_JFA}
Radu Ignat.
\newblock Two-dimensional unit-length vector fields of vanishing divergence.
\newblock {\em J. Funct. Anal.}, 262(8):3465--3494, 2012.
\bibitem{IM09}
Radu Ignat and Beno{\^{\i}}t Merlet.
\newblock Lower bound for the energy of {B}loch walls in micromagnetics.
\newblock {\em Arch. Ration. Mech. Anal.}, 199(2):369--406, 2011.
\bibitem{IMpre}
Radu Ignat and Beno{\^{\i}}t Merlet.
\newblock Entropy method for line-energies.
\newblock {\em Calc. Var. Partial Differential Equations}, 44(3-4):375--418,
2012.
\bibitem{Ignat_Moser}
Radu Ignat and Roger Moser.
\newblock A zigzag pattern in micromagnetics.
\newblock {\em J. Math. Pures Appl. (9)}, 98(2):139--159, 2012.
\bibitem{Isett}
P.~Isett.
\newblock H{\"o}lder continuous {E}uler flows in three dimensions with compact
support in time.
\newblock {\em Preprint}, pages 1--173, 2012.
\bibitem{JOP02}
Pierre-Emmanuel Jabin, Felix Otto, and Beno{\^{\i}}t Perthame.
\newblock Line-energy {G}inzburg-{L}andau models: zero-energy states.
\newblock {\em Ann. Sc. Norm. Super. Pisa Cl. Sci. (5)}, 1(1):187--202, 2002.
\bibitem{Jabin-Perthame}
Pierre-Emmanuel Jabin and Beno{\^{\i}}t Perthame.
\newblock Compactness in {G}inzburg-{L}andau energy by kinetic averaging.
\newblock {\em Comm. Pure Appl. Math.}, 54(9):1096--1109, 2001.
\bibitem{JK00}
Weimin Jin and Robert~V. Kohn.
\newblock Singular perturbation and the energy of folds.
\newblock {\em J. Nonlinear Sci.}, 10(3):355--390, 2000.
\bibitem{Kru}
Stanislav~N. Kru{\v{z}}kov.
\newblock First order quasilinear equations with several independent variables.
\newblock {\em Mat. Sb. (N.S.)}, 81 (123):228--255, 1970.
\bibitem{LPT}
P.-L. Lions, B.~Perthame, and E.~Tadmor.
\newblock A kinetic formulation of multidimensional scalar conservation laws
and related equations.
\newblock {\em J. Amer. Math. Soc.}, 7(1):169--191, 1994.
\bibitem{Panov}
E.~Yu. Panov.
\newblock Uniqueness of the solution of the {C}auchy problem for a first-order
quasilinear equation with an admissible strictly convex entropy.
\newblock {\em Mat. Zametki}, 55(5):116--129, 159, 1994.
\end{thebibliography}
\end{document} |
\begin{document}
\pagestyle{plain}
\title{\large
{\textbf{REAL HYPERSURFACES EQUIPPED WITH $\xi$-PARALLEL STRUCTURE JACOBI OPERATOR IN $\mathbb{C}P^{2}$ OR
$\mathbb{C}H^{2}$}}}
\author{ \textbf{\normalsize{Konstantina Panagiotidou and Philippos J. Xenos}}\\
\small \emph{Mathematics Division-School of Technology, Aristotle University of Thessaloniki, Greece}\\
\small \emph{E-mail: [email protected], [email protected]}}
\date{}
\maketitle
\begin{flushleft}
\small {\textsc{Abstract}. The $\xi$-parallelness condition on the structure Jacobi operator of real hypersurfaces has been studied in combination with additional conditions. In the present paper we study three-dimensional real hypersurfaces in $\mathbb{C}P^{2}$ or
$\mathbb{C}H^{2}$ equipped with $\xi$-parallel structure Jacobi operator. We prove that they are Hopf hypersurfaces and, if in addition $\eta(A\xi)\neq0$, we classify them.}
\end{flushleft}
\begin{flushleft}
\small{\emph{Keywords}: Real hypersurface, $\xi$-parallel structure Jacobi operator, Complex projective space, Complex hyperbolic space.\\}
\end{flushleft}
\begin{flushleft}
\small{\emph{Mathematics Subject Classification }(2000): Primary 53B25; Secondary 53C15, 53D15.}
\end{flushleft}
\section{Introduction}
A complex $n$-dimensional Kaehler manifold of constant holomorphic sectional curvature $c$ is called a complex space form, which is denoted by $M_{n}(c)$. A complete and simply connected complex space form is complex analytically isometric to a complex projective space $\mathbb{C}P^{n}$, a complex Euclidean space $\mathbb{C}^{n}$ or a complex hyperbolic space $\mathbb{C}H^{n}$ if $c>0$, $c=0$ or $c<0$, respectively.
The study of real hypersurfaces in a nonflat complex space form is a classical problem in Differential Geometry. Let $M$ be a real
hypersurface in $M_{n}(c)$. Then $M$ has an almost contact metric structure $(\varphi,\xi,\eta,g)$. The structure vector field $\xi$
is called principal if $A\xi=\alpha\xi$ holds on $M$, where $A$ is the shape operator of $M$ in $M_{n}(c)$ and $\alpha$ is a smooth function. A real hypersurface is called a \textit{Hopf hypersurface} if $\xi$ is principal.
Takagi in \cite{T2} classified homogeneous real hypersurfaces in $\mathbb{C}P^{n}$ and Berndt in \cite{Ber} classified Hopf hypersurfaces with constant principal curvatures in $\mathbb{C}H^{n}$. Let $M$ be a real hypersurface in $M_{n}(c)$, $c\neq0$. We state the following theorem, due to Okumura \cite{Ok} for $\mathbb{C}P^{n}$ and to Montiel and Romero \cite{MR} for $\mathbb{C}H^{n}$.\\
\begin{theorem}
Let $M$ be a real hypersurface of $M_{n}(c)$, $n\geq2$, $c\neq0$. If it satisfies
$A\varphi-\varphi A=0$, then $M$ is locally congruent to one of the
following hypersurfaces:
\begin{itemize}
\item In case $\mathbb{C}P^{n}$\\
$(A_{1})$ a geodesic hypersphere of radius $r$, where
$0<r<\frac{\pi}{2}$,\\
$(A_{2})$ a tube of radius $r$ over a totally geodesic
$\mathbb{C}P^{k}$, $(1\leq k\leq n-2)$, where $0<r<\frac{\pi}{2}$.
\item In case $\mathbb{C}H^{n}$\\
$(A_{0})$ a horosphere in $ \mathbb{C}H^{n}$, i.e. a Montiel tube,\\
$(A_{1})$ a geodesic hypersphere or a tube over a hyperplane $\mathbb{C}H^{n-1}$,\\
$(A_{2}) $ a tube over a totally geodesic $\mathbb{C}H^{k}$ $(1\leq k\leq n-2)$.
\end{itemize}
\end{theorem}
Since 2006 many authors have studied real hypersurfaces whose structure Jacobi operator is parallel $(\nabla l=0)$. Ortega, Perez and
Santos \cite{OPS} proved the nonexistence of real hypersurfaces in non-flat complex space forms with parallel structure Jacobi operator
$(\nabla l=0)$. Perez, Santos and Suh \cite{PSaSuh}, continuing the work of \cite{OPS}, considered a weaker condition ($\mathbb{D}$-parallelness),
that is, $\nabla_{X}l=0$ for any vector field $X$ orthogonal to $\xi$. They proved the non-existence of such real hypersurfaces in
$\mathbb{C}P^{m}$, $m\geq3$.
Kim and Ki in \cite{KK} classified real hypersurfaces satisfying $\nabla_{\xi}l=0$ and $S\varphi=\varphi S$. Ki and Liu \cite{KL} proved that real
hypersurfaces satisfying $\nabla_{\xi}l=0$ and $lS=Sl$ are Hopf hypersurfaces provided that the scalar curvature is non-negative.
Ki et al. in \cite{KPSaSuh} classified real hypersurfaces satisfying $\nabla_{\xi}l=0$ and $\nabla_{\xi}S=0$. Kim et al. in \cite{KKK} studied the real hypersurfaces
satisfying $g(\nabla_{\xi}\xi,\nabla_{\xi}\xi)=\mu^{2}=\mathrm{const}$, $6\mu^{2}+\frac{c}{4}\neq0$, and classified those whose $l$ is
$\xi$-parallel. Cho and Ki \cite{CK1} classified real hypersurfaces satisfying $Al=lA$ and $\nabla_{\xi}l=0$.
Recently, Ivey and Ryan \cite{IR} studied real hypersurfaces in $M_{2}(c)$.
Motivated by all the above results, we study real hypersurfaces in $\mathbb{C}P^{2}$ or
$\mathbb{C}H^{2}$ equipped with $\xi$-parallel structure Jacobi operator, i.e. $\nabla_{\xi}l=0$. More precisely, the following relation holds:
\begin{eqnarray}
(\nabla_{\xi}l)X=0.
\end{eqnarray}
We prove the following result:
\begin{pro}
Let M be a connected real hypersurface in $\mathbb{C}P^{2}$
or $\mathbb{C}H^{2}$ with $\xi$-parallel structure Jacobi
operator. Then M is a Hopf hypersurface. Further, if
$\eta(A\xi)\neq0$, then:
\begin{itemize}
\item in the case of $\mathbb{C}P^{2}$, $M$ is locally congruent to\\
a geodesic sphere of radius $r$, where
$0<r<\frac{\pi}{2}$ and $r\neq\frac{\pi}{4}$,
\item in the case of $\mathbb{C}H^{2}$, $M$ is locally congruent \\
to a horosphere,\\
or to a geodesic sphere\\
or to a tube over the hyperplane $\mathbb{C}H^{1}$.\\
\end{itemize}
\end{pro}
\section{Preliminaries}
Throughout this paper all manifolds, vector fields, etc., are assumed to be of class $C^{\infty}$ and all manifolds are assumed to be connected. Furthermore, the real hypersurfaces are supposed to be oriented and without boundary.
Let $M$ be a real hypersurface immersed in a nonflat complex space form $(M_{n}(c),G)$ with almost complex structure $J$
of constant holomorphic sectional curvature $c$. Let $N$ be a unit normal vector field on $M$ and $\xi=-JN$. For a vector field $X$ tangent to $M$ we can write $JX=\varphi X+\eta(X)N$, where $\varphi X$ and $\eta(X)N$ are the tangential and the normal
component of $JX$ respectively. The Riemannian connection $\overline{\nabla}$ in $M_{n}(c)$ and the connection $\nabla$ in $M$ are related, for any vector fields $X$, $Y$ on $M$, by
$$\overline{\nabla}_{Y}X=\nabla_{Y}X+g(AY,X)N$$
$$\overline{\nabla}_{X}N=-AX$$
where $g$ is the Riemannian metric on $M$ induced from $G$ of $M_{n}(c)$ and $A$ is the shape operator of $M$ in $M_{n}(c)$. $M$ has an almost
contact metric structure $(\varphi,\xi,\eta,g)$ induced from $J$ on $M_{n}(c)$, where $\varphi$ is a $(1,1)$ tensor field and $\eta$ a
1-form on $M$ such that (see \cite{Bl}) $$g(\varphi X,Y)=G(JX,Y),\hspace{20pt}\eta(X)=g(X,\xi)=G(JX,N).$$
Then we have
\begin{eqnarray}
\varphi^{2}X=-X+\eta(X)\xi,\hspace{20pt}
\eta\circ\varphi=0,\hspace{20pt} \varphi\xi=0,\hspace{20pt}
\eta(\xi)=1
\end{eqnarray}
\begin{eqnarray}\hspace{20pt}
g(\varphi X,\varphi
Y)=g(X,Y)-\eta(X)\eta(Y),\hspace{10pt}g(X,\varphi Y)=-g(\varphi
X,Y)
\end{eqnarray}
\begin{eqnarray}
\nabla_{X}\xi=\varphi
AX,\hspace{20pt}(\nabla_{X}\varphi)Y=\eta(Y)AX-g(AX,Y)\xi
\end{eqnarray}
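For completeness, let us indicate a short derivation of the first relation in (2.3); this is a standard computation (it uses only the Gauss and Weingarten formulas above and the fact that $J$ is parallel on $M_{n}(c)$), which we record as a sketch:
$$
\nabla_{X}\xi+g(AX,\xi)N=\overline{\nabla}_{X}\xi=\overline{\nabla}_{X}(-JN)=-J\overline{\nabla}_{X}N=J(AX)=\varphi(AX)+\eta(AX)N,
$$
and comparing the tangential parts gives $\nabla_{X}\xi=\varphi AX$.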
Since the ambient space is of constant holomorphic sectional curvature $c$, the equations of Gauss and Codazzi for any vector
fields $X$, $Y$, $Z$ on $M$ are respectively given by
\begin{eqnarray}
R(X,Y)Z&=&\frac{c}{4}[g(Y,Z)X-g(X,Z)Y+g(\varphi Y ,Z)\varphi X-g(\varphi X,Z)\varphi Y\nonumber\\
&&-2g(\varphi X,Y)\varphi Z]+g(AY,Z)AX-g(AX,Z)AY
\end{eqnarray}
\begin{eqnarray}
\hspace{10pt}
(\nabla_{X}A)Y-(\nabla_{Y}A)X=\frac{c}{4}[\eta(X)\varphi
Y-\eta(Y)\varphi X-2g(\varphi X,Y)\xi]
\end{eqnarray}
where $R$ denotes the Riemannian curvature tensor on $M$. The structure Jacobi operator $l$ is the Jacobi operator associated with the structure vector field $\xi$, that is, $lX=R(X,\xi)\xi$ for any vector field $X$ tangent to $M$.
For every point $P\in M$, the tangent space $T_{P}M$ can be decomposed as follows:
$$T_{P}M=\mathrm{span}\{\xi\}\oplus\ker(\eta)$$
where $\ker(\eta)=\{X\in T_{P}M:\eta(X)=0\}$.
Due to the above decomposition, the vector field $A\xi$ can be written as
\begin{eqnarray}
A\xi=\alpha\xi+\beta U
\end{eqnarray}
where $\beta=|\varphi\nabla_{\xi}\xi|$ and
$U=-\frac{1}{\beta}\varphi\nabla_{\xi}\xi\in\ker(\eta)$, provided
that $\beta\neq0$.
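Let us record the elementary consistency check behind this decomposition (a short verification, stated here only for the reader's convenience): by (2.1), (2.3) and (2.6),
$$
\nabla_{\xi}\xi=\varphi A\xi=\varphi(\alpha\xi+\beta U)=\beta\varphi U,
\qquad
\varphi\nabla_{\xi}\xi=\beta\varphi^{2}U=-\beta U,
$$
so that indeed $\beta=|\varphi\nabla_{\xi}\xi|$, $U=-\frac{1}{\beta}\varphi\nabla_{\xi}\xi$ and, moreover, $\alpha=\eta(A\xi)=g(A\xi,\xi)$.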
\section{Auxiliary relations}
Let $M$ be a real hypersurface in $\mathbb{C}P^{2}$ or $\mathbb{C}H^{2}$, i.e. $M_{2}(c)$, $c\neq0$. We consider the open subset $\mathcal{N}$ of $M$ such that:
$$\mathcal{N}=\{P\in M:\;\beta\neq0,\;\;\mbox{in a neighborhood of $P$}\}.$$
Furthermore, we consider $\mathcal{V}$, $\Omega$ open subsets of $\mathcal{N}$ such that:
$$\mathcal{V}=\{P\in\mathcal{N}:\alpha=0,\;\;\mbox{in a neighborhood of $P$}\},$$
$$\Omega=\{P\in\mathcal{N}:\alpha\neq0,\;\;\mbox{in a neighborhood of $P$}\},$$
where $\mathcal{V}\cup\Omega$ is open and dense in the closure of $\mathcal{N}$.
\begin{lemma}
Let M be a real hypersurface in $M_{2}(c)$, equipped with $\xi$-parallel structure Jacobi operator. Then
$\mathcal{V}$ is empty.
\end{lemma}
\textbf{Proof:} Let $\{U,\varphi U,\xi\}$ be a local orthonormal basis on $\mathcal{V}$. The
relation (2.6) takes the form $A\xi=\beta U$. The first relation of (2.3) for $X=\xi$, taking into account the latter, implies
\begin{eqnarray}
\nabla_{\xi}\xi=\beta\varphi U.\nonumber\
\end{eqnarray}
Relation (1.1) for $X=\xi$, because of the above relation, yields:
\begin{eqnarray}
\nabla_{\xi}(l\xi)=l\nabla_{\xi}\xi=\beta\, l\varphi U.\nonumber
\end{eqnarray}
Since $l\xi=R(\xi,\xi)\xi=0$ and, on $\mathcal{V}$, relation (2.4) for $X=\varphi U$ and $Y=Z=\xi$ gives $l\varphi U=\frac{c}{4}\varphi U$, we obtain $\frac{c\beta}{4}\varphi U=0$, which leads to a contradiction (since $c\neq0$ and $\beta\neq0$) and this completes the proof of Lemma 3.1.
{$
\Box$}
\\
In what follows we work on $\Omega$, where $\alpha\neq0$ and $\beta\neq0$.
\begin{lemma}
Let M be a real hypersurface in $M_{2}(c)$, equipped with $\xi$-parallel structure Jacobi operator. Then the following relations hold in $\Omega$:
\begin{eqnarray}
\hspace{-140pt}AU=(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})U+\beta\xi,\;\;\;\;
A\varphi U=-\frac{c}{4\alpha}\varphi U
\end{eqnarray}
\begin{eqnarray}
\hspace{-100pt}\nabla_{\xi}\xi=\beta\varphi U,\;\;\;
\nabla_{U}\xi=(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})\varphi
U,\;\;\; \nabla_{\varphi U}\xi=\frac{c}{4\alpha}U
\end{eqnarray}
\begin{eqnarray}
\hspace{-100pt}\nabla_{\xi}U=\kappa_{1}\varphi U,\;\;\;
\nabla_{U}U=\kappa_{2}\varphi U,\;\;\; \nabla_{\varphi
U}U=\kappa_{3}\varphi U-\frac{c}{4\alpha}\xi
\end{eqnarray}
\begin{equation}
\nabla_{\xi}\varphi U=-\kappa_{1}U-\beta\xi,\;\;\;
\nabla_{U}\varphi
U=-\kappa_{2}U-(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})\xi,\;\;\;
\nabla_{\varphi U}\varphi U=-\kappa_{3}U
\end{equation}
\begin{eqnarray}
\hspace{-180pt}\kappa\kappa_{1}=0,\;\;\; (\xi\kappa)=0,
\end{eqnarray}
where $\kappa,\kappa_{1},\kappa_{2},\kappa_{3}$ are smooth
functions on M.
\end{lemma}
\textbf{Proof:} Let $\{U,\varphi U,\xi\}$ be a local orthonormal basis of $\Omega$.
The first relation of (2.3) for $X=\xi$ implies $\nabla_{\xi}\xi=\beta\varphi U$, and so relation (1.1) for $X=\xi$ (recall that $l\xi=R(\xi,\xi)\xi=0$), taking into account the latter, gives:
\begin{eqnarray}
l\varphi U=0.
\end{eqnarray}
Relation (2.4) for $X=\varphi U$ and $Y=Z=\xi$ gives: $l\varphi U=\frac{c}{4}\varphi U+\alpha A\varphi U$, which because of (3.6) implies the second relation of (3.1). From relation (2.4) for $X=U$ and $Y=Z=\xi$ we obtain:
\begin{eqnarray}
lU=\frac{c}{4}U+\alpha AU-\beta A\xi\
\end{eqnarray}
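For the reader's convenience, we spell out how (3.7) follows from the Gauss equation (2.4); this is a routine expansion, using (2.1) and (2.6): taking $X=U$ and $Y=Z=\xi$ in (2.4), and using $\varphi\xi=0$ and $\eta(U)=\eta(\varphi U)=0$,
$$
lU=R(U,\xi)\xi=\frac{c}{4}\,U+g(A\xi,\xi)AU-g(AU,\xi)A\xi=\frac{c}{4}\,U+\alpha AU-\beta A\xi,
$$
since $g(A\xi,\xi)=\alpha$ and $g(AU,\xi)=g(U,A\xi)=\beta$.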
The scalar products of (3.7) with $\varphi U$ and $U$, because of (2.6) and the second of (3.1) imply the first of (3.1), where $\kappa=g(lU,U)$.
The first relation of (2.3), for $X=U$ and $X=\varphi U$, taking into consideration relations (3.1), gives the remaining relations in (3.2).
From the well-known relation $Xg(Y,Z)=g(\nabla_{X}Y,Z)+g(Y,\nabla_{X}Z)$ for $X,Y,Z\in\{\xi,U,\varphi U\}$ we obtain (3.3) and (3.4), where $\kappa_{1},\kappa_{2},\kappa_{3}$ are
smooth functions in $\Omega$.
On the other hand
\begin{eqnarray}
&&\xi\kappa=\xi g(lU,U)\nonumber\\
&&\Rightarrow \xi\kappa=g(\nabla_{\xi}(lU),U)+g(lU,\nabla_{\xi}U)\nonumber\\
&&\Rightarrow \xi\kappa=g((\nabla_{\xi}l)U+l(\nabla_{\xi}U),U)+g(lU,\nabla_{\xi}U)\nonumber\\
&&\Rightarrow \xi\kappa=g(l(\nabla_{\xi}U),U)+g(lU,\nabla_{\xi}U)\nonumber\
\end{eqnarray}
The above relation because of (3.3), (3.6) and (3.7) yields:
\begin{eqnarray}
\xi\kappa=g(\kappa_{1}l\varphi U,U)+g(lU,\kappa_{1}\varphi
U)\Rightarrow \xi\kappa=0\nonumber\
\end{eqnarray}
On the other hand:
\begin{eqnarray}
&&\xi g(l\varphi U,U)=0\nonumber\\
&&\Rightarrow g(\nabla_{\xi}(l\varphi U),U)+g(l\varphi U,\nabla_{\xi}U)=0\nonumber\\
&&\Rightarrow g((\nabla_{\xi}l)\varphi U+l(\nabla_{\xi}\varphi U),U)+g(l\varphi U,\nabla_{\xi}U)=0\nonumber\
\end{eqnarray}
From the above equation because of (1.1), (2.6), (3.4), (3.6) and $\kappa=g(lU,U)$ we obtain:
\begin{eqnarray}
&&g(l(-\kappa_{1}U-\beta\xi),U)=0\Rightarrow \kappa\kappa_{1}=0\nonumber\
\end{eqnarray}
{$
\Box$}
Relation (2.5) for $X\in\{U,\varphi U\}$ and $Y=\xi$, because of Lemma 3.2, yields:
\begin{eqnarray}
U\beta&=&\xi(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})\\
U\alpha&=&\xi\beta\\
\frac{\beta^{2}\kappa_{1}}{\alpha}&=&\kappa+\beta\kappa_{2}+\frac{c}{4\alpha}(\frac{\kappa}{\alpha}-\frac{c}{4\alpha}+\frac{\beta^{2}}{\alpha})\\
(\varphi U)\beta&=&\frac{\kappa_{1}\beta^{2}}{\alpha}+\beta^{2}+\frac{c}{4\alpha}(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})\\
\xi\alpha&=&\frac{4\alpha^{2}\kappa_{3}\beta}{c}\\
(\varphi U)\alpha&=&\beta(\kappa_{1}+\alpha+\frac{3c}{4\alpha})
\end{eqnarray}
Furthermore, relation (2.5), for $X=U$ and $Y=\varphi U$, due to Lemma 3.2 and (3.10), implies:
\begin{eqnarray}
(\varphi U)\kappa&=&-\frac{c\beta\kappa_{1}}{4\alpha}+\kappa\beta+\kappa\kappa_{2}-c\beta\\
U\alpha&=&\frac{4\kappa_{3}\alpha}{c}(\beta^{2}+\kappa)
\end{eqnarray}
Using the relations (3.9)-(3.15) and Lemma 3.2 we obtain:
\begin{eqnarray}
&&[U,\xi](\frac{c}{4\alpha})=(\nabla_{U}\xi-\nabla_{\xi}U)\frac{c}{4\alpha}\nonumber\\
&&\Rightarrow [U,\xi](\frac{c}{4\alpha}) =-\frac{c\beta}{4\alpha^{2}}(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha}-\kappa_{1})(\kappa_{1}+\alpha+\frac{3c}{4\alpha})
\end{eqnarray}
\begin{eqnarray}
&& [U,\xi](\frac{c}{4\alpha})=(U(\xi\frac{c}{4\alpha}))-(\xi(U\frac{c}{4\alpha}))\nonumber\\
&&\Rightarrow [U,\xi](\frac{c}{4\alpha})=-\beta\kappa_{3}^{2}-\beta(U\kappa_{3})+(\frac{\beta^{2}}{\alpha}+\frac{\kappa}{\alpha})(\xi\kappa_{3})
\end{eqnarray}
Similarly:
\begin{eqnarray}
&&[U,\varphi
U](\frac{c}{4\alpha})=\kappa_{2}\kappa_{3}(\frac{\beta^{2}}{\alpha}+\frac{\kappa}{\alpha})+\beta\kappa_{3}(\frac{\kappa}{\alpha}-\frac{c}{2\alpha}+\frac{\beta^{2}}{\alpha})\nonumber\\
&&+\frac{c\beta\kappa_{3}}{4\alpha^{2}}(\kappa_{1}+\alpha+\frac{3c}{4\alpha})
\end{eqnarray}
\begin{eqnarray}
&&[U,\varphi
U](\frac{c}{4\alpha})=\frac{2\kappa_{3}\beta^{3}\kappa_{1}}{\alpha^{2}}+\frac{\kappa_{3}\beta^{3}}{\alpha}+\frac{5c\kappa_{3}\beta}{4\alpha^{3}}(\beta^{2}+\kappa)-\frac{c\beta\kappa_{1}\kappa_{3}}{4\alpha^{2}}\nonumber\\
&&-\frac{c\beta\kappa_{3}}{4\alpha}-\frac{5c^{2}\beta\kappa_{3}}{16\alpha^{3}}-\frac{c\beta}{4\alpha^{2}}(U\kappa_{1})-\frac{\beta\kappa\kappa_{3}}{\alpha}+\frac{\kappa_{3}}{\alpha}((\varphi U)\kappa)\nonumber\\
&&+(\frac{\beta^{2}}{\alpha}+\frac{\kappa}{\alpha})((\varphi U)\kappa_{3})
\end{eqnarray}
\begin{eqnarray}
[\varphi
U,\xi](\frac{c}{4\alpha})=-\kappa_{3}(\kappa_{1}+\frac{c}{4\alpha})(\frac{\beta^{2}}{\alpha}+\frac{\kappa}{\alpha})-\beta^{2}\kappa_{3}
\end{eqnarray}
\begin{eqnarray}
&&[\varphi
U,\xi](\frac{c}{4\alpha})=-\frac{2\kappa_{1}\kappa_{3}\beta^{2}}{\alpha}-\kappa_{3}\beta^{2}-\frac{7c\beta^{2}\kappa_{3}}{4\alpha^{2}}+\frac{c^{2}\kappa_{3}}{16\alpha^{2}}+\frac{c\kappa\kappa_{3}}{2\alpha^{2}}\nonumber\\
&&-\beta(\varphi U)\kappa_{3}+\kappa\kappa_{3}+\frac{c\beta}{4\alpha^{2}}(\xi\kappa_{1}).
\end{eqnarray}
Due to the first relation of (3.5), we consider $\Omega_{1}$ the open subset of $\Omega$ such that:
$$\Omega_{1}=\{P\in\Omega:\kappa_{1}\neq0,\;\;\mbox{in a neighborhood of $P$}\}.$$
So in $\Omega_{1}$, we have: $\kappa=0$.
In $\Omega_{1}$ relation (3.14), since $\kappa=0$, yields:
\begin{eqnarray}
\kappa_{1}=-4\alpha
\end{eqnarray}
and from relation (3.10), taking into account (3.22), we get:
\begin{eqnarray}
\kappa_{2}=-4\beta-\frac{c\beta}{4\alpha^{2}}+\frac{c^{2}}{16\alpha^{2}\beta}
\end{eqnarray}
From (3.20) and (3.21), using (3.12), (3.22) and (3.23) we obtain:
\begin{eqnarray}
\beta(\varphi U)\kappa_{3}=-\frac{3c\beta^{2}\kappa_{3}}{2\alpha^{2}}+\frac{c^{2}\kappa_{3}}{16\alpha^{2}}
\end{eqnarray}
From (3.18), (3.19), using (3.15), (3.22), (3.23) and (3.24), we obtain:
\begin{eqnarray}
\kappa_{3}(4\alpha^{2}-c)=0.
\end{eqnarray}
Because of (3.25), let $\Omega'_{1}$ be the open subset of $\Omega_{1}$ such that:
$$\Omega'_{1}=\{P\;\;\epsilon\;\;\Omega_{1}:\kappa_{3}\neq0,\;\;\mbox{in a neighborhood of P}\}.$$
So in $\Omega'_{1}$ we obtain: $c=4\alpha^{2}$. Differentiation of the latter with respect to
$\xi$ implies $\xi\alpha=0$, which, because of (3.12), leads to $\kappa_{3}=0$, a contradiction. So $\Omega'_{1}$ is empty and $\kappa_{3}=0$ in $\Omega_{1}$.
\begin{lemma}
Let M be a real hypersurface in $M_{2}(c)$, equipped with $\xi$-parallel structure Jacobi operator. Then $\Omega_{1}$ is empty.
\end{lemma}
\textbf{Proof:} We recall that in $\Omega_{1}$ we have:
\begin{eqnarray}
\kappa=\kappa_{3}=0
\end{eqnarray}
and relations (3.22), (3.23) and (3.24) hold.
Relations (3.8), (3.9), (3.12) and (3.15), because of (3.5) and (3.26), yield:
\begin{eqnarray}
U\alpha=U\beta=\xi\alpha=\xi\beta=0
\end{eqnarray}
In $\Omega_{1}$, combining (3.16) and (3.17) and taking into account (3.22) and (3.26), we obtain:
\begin{eqnarray}
(\frac{c}{4\alpha}-\alpha)(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+4\alpha)=0
\end{eqnarray}
Owing to (3.28), let $\Omega_{11}$ be the open subset of $\Omega_{1}$, such that:
$$\Omega_{11}=\{P\;\;\epsilon\;\;\Omega_{1}:c\neq4\alpha^{2},\;\;\mbox{in a neighborhood of P}\}.$$
From (3.28) in $\Omega_{11}$, we have: $4\alpha=-\frac{\beta^{2}}{\alpha}+\frac{c}{4\alpha}$. Differentiation of the latter along $\varphi U$, because of (3.11), (3.13), (3.22), (3.26) and the relation itself, yields $c=0$, which is impossible. Hence, $\Omega_{11}$ is empty.
So in $\Omega_{1}$ the relation $c=4\alpha^{2}$ holds. Due to the last relation and (3.22), the relation (3.11) becomes:
\begin{eqnarray}
(\varphi U)\beta=-(\alpha^{2}+2\beta^{2}).
\end{eqnarray}
From (3.27) we have $[U,\xi]\beta=U(\xi\beta)-\xi(U\beta)\Rightarrow [U,\xi]\beta=0$. On the other hand, from (3.2) and (3.3) we obtain
$[U,\xi]\beta=(\nabla_{U}\xi-\nabla_{\xi}U)\beta\Rightarrow [U,\xi]\beta=\frac{1}{\alpha}(3\alpha^{2}+\beta^{2})(\varphi U)\beta$. The
last two relations imply $(\varphi U)\beta=0$. Therefore, from (3.29) we obtain $\alpha^{2}+2\beta^{2}=0$, which is a contradiction. Hence, $\Omega_{1}$ is empty.
{$
\Box$}
\\
Since $\Omega_{1}$ is empty, in $\Omega$ we have $\kappa_{1}=0$. So from relations (3.20) and (3.21) we obtain:
$$\beta(\varphi
U)\kappa_{3}=\frac{\kappa_{3}}{16\alpha^{2}}[c^{2}-24c\beta^{2}+12c\kappa+16\alpha^{2}\kappa].$$
Furthermore, the combination of relations (3.18) and (3.19), using (3.10) and (3.14), implies:
$$(\beta^{2}+\kappa)(\varphi
U)\kappa_{3}=\frac{c\beta\kappa_{3}}{16\alpha^{2}}[16\alpha^{2}-24(\beta^{2}+\kappa)+9c].$$
From the last two relations we obtain:
\begin{eqnarray}
\kappa_{3}[c^{2}\kappa+12c\kappa^{2}+12c\beta^{2}\kappa+16\alpha^{2}\beta^{2}\kappa-16c\alpha^{2}\beta^{2}+16\alpha^{2}\kappa^{2}-8c^{2}\beta^{2}]=0\nonumber
\end{eqnarray}
Due to the above relation, we consider the open subset $\Omega_{2}$ of $\Omega$ such that:
$$\Omega_{2}=\{P\;\;\epsilon\;\;\Omega:\kappa_{3}\neq0,\;\;\mbox{in a neighborhood of P}\},$$
so in $\Omega_{2}$ the following relation holds:
\begin{eqnarray}
c^{2}\kappa+12c\kappa^{2}+12c\beta^{2}\kappa+16\alpha^{2}\beta^{2}\kappa-16c\alpha^{2}\beta^{2}+16\alpha^{2}\kappa^{2}-8c^{2}\beta^{2}=0.
\end{eqnarray}
Differentiating (3.30) with respect to $\xi$ and using (3.5), (3.9), (3.12) and (3.15) we obtain:
\begin{eqnarray}
&&8\alpha^{2}\beta^{2}\kappa-8c\alpha^{2}\beta^{2}+8\alpha^{2}\kappa^{2}+3c\beta^{2}\kappa-2c^{2}\beta^{2}+3c\kappa^{2}-4c\alpha^{2}\kappa\nonumber\\
&&-2c^{2}\kappa=0
\end{eqnarray}
From (3.30) and (3.31) we obtain:
\begin{eqnarray}
5c\kappa+6\kappa^{2}+6\beta^{2}\kappa-4c\beta^{2}+8\alpha^{2}\kappa=0.
\end{eqnarray}
Differentiating (3.32) with respect to $\xi$ and using (3.5), (3.9), (3.12) and (3.15) we have:
$4\kappa\alpha^{2}=(2c-3\kappa)(\beta^{2}+\kappa).$ The last relation combined with (3.32) implies $\kappa=0$. Substituting the latter in (3.30) gives $c=-2\alpha^{2}$. Differentiation of the last relation with respect to $\varphi U$, taking into account (3.13), $c=-2\alpha^{2}$ and $\kappa_{1}=0$, results in $\alpha=0$, which is impossible.
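Explicitly, using that $c\neq0$ and that $\beta$ does not vanish on $\Omega$: substituting $8\alpha^{2}\kappa=(4c-6\kappa)(\beta^{2}+\kappa)$ into (3.32) gives $9c\kappa=0$, so $\kappa=0$; then (3.30) reduces to $-8c\beta^{2}(2\alpha^{2}+c)=0$, so $c=-2\alpha^{2}$; finally, (3.13) with $\kappa_{1}=0$ and $c=-2\alpha^{2}$ gives $(\varphi U)\alpha=-\frac{\alpha\beta}{2}$, whereas differentiating $c=-2\alpha^{2}$ along $\varphi U$ gives $(\varphi U)\alpha=0$, forcing $\alpha=0$.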
So $\Omega_{2}$ is empty and in $\Omega$ we get: $\kappa_{3}=0$.
\begin{lemma}
Let M be a real hypersurface in $M_{2}(c)$, equipped with $\xi$-parallel structure Jacobi operator. Then $\Omega$ is empty.
\end{lemma}
\textbf{Proof:} We recall that in $\Omega$ the following relation holds:
\begin{eqnarray}
\kappa_{1}=\kappa_{3}=0.
\end{eqnarray}
Relations (3.8), (3.9), (3.12), (3.15), because of (3.5) and (3.33) yield:
\begin{eqnarray}
U\alpha=U\beta=\xi\alpha=\xi\beta=0
\end{eqnarray}
In $\Omega$, combining (3.16) and (3.17) and taking into account (3.33), we obtain:
\begin{eqnarray}
(4\alpha^{2}+3c)(\beta^{2}+\kappa-\frac{c}{4})=0
\end{eqnarray}
Due to (3.35), we consider the open subset $\Omega_{3}$ of $\Omega$ such that:
$$\Omega_{3}=\{P\;\;\epsilon\;\;\Omega:\beta^{2}+\kappa\neq\frac{c}{4},\;\;\mbox{in a neighborhood of P}\}.$$
So in $\Omega_{3}$ the following relation holds:
\begin{eqnarray}
c=-\frac{4\alpha^{2}}{3}.
\end{eqnarray}
Differentiation of (3.36) with respect to $\varphi U$ implies:
\begin{eqnarray}
(\varphi U)\alpha=0.
\end{eqnarray}
Because of (3.34) we have $[U,\xi]\beta=U(\xi\beta)-\xi(U\beta)\Rightarrow [U,\xi]\beta=0$. On the other hand due to
(3.2), (3.3) and (3.33) we get $[U,\xi]\beta=(\nabla_{U}\xi-\nabla_{\xi}U)\beta\Rightarrow [U,\xi]\beta=(\frac{\beta^{2}}{\alpha}-\frac{c}{4\alpha}+\frac{\kappa}{\alpha})(\varphi
U)\beta$. The combination of the last relations implies:
\begin{eqnarray}
(\varphi U)\beta=0
\end{eqnarray}
Relation (3.11), owing to (3.33), (3.36) and (3.38), yields $2\beta^{2}=\kappa+\frac{\alpha^{2}}{3}$. Differentiation of the
last relation with respect to $\varphi U$, taking into account (3.37) and (3.38), implies
$(\varphi U)\kappa=0$. So from (3.14), because of the latter and (3.33), we obtain $\kappa(\beta+\kappa_{2})=c\beta$. The combination of the latter with (3.10), taking into account (3.33), (3.36) and $2\beta^{2}=\kappa+\frac{\alpha^{2}}{3}$, implies:
\begin{eqnarray}
\alpha^{2}=18\beta^{2}\hspace{20pt}\kappa_{2}=5\beta\hspace{20pt}\kappa=-4\beta^{2}
\end{eqnarray}
The relations of Lemma 3.2 in $\Omega_{3}$, because of (3.36) and (3.39) become:
\begin{eqnarray}
&&AU=\frac{\alpha}{6}U+\beta\xi,\;\;\;A\varphi U=\frac{\alpha}{3}\varphi U\\
&&\nabla_{\xi}\xi=\beta\varphi U,\;\;\;\nabla_{U}\xi=\frac{\alpha}{6}\varphi U,\;\;\;\nabla_{\varphi U}\xi=-\frac{\alpha}{3}U,\\
&&\nabla_{\xi}U=0,\;\;\;\nabla_{U}U=5\beta\varphi U,\;\;\;\nabla_{\varphi U}U=\frac{\alpha}{3}\xi,\\
&&\nabla_{\xi}\varphi U=-\beta\xi,\;\;\;\nabla_{U}\varphi U=-5\beta U-\frac{\alpha}{6}\xi,\;\;\;\nabla_{\varphi U}\varphi U=0.
\end{eqnarray}
The relation (2.4), because of (3.36), (3.39) and (3.40), implies: $R(U,\varphi U)U=23\beta^{2}\varphi U$. On the other hand
$R(X,Y)Z=\nabla_{X}\nabla_{Y}Z-\nabla_{Y}\nabla_{X}Z-\nabla_{[X,Y]}Z$, because of (3.34), (3.36), (3.39) and (3.41)-(3.43), yields: $R(U,\varphi
U)U=26\beta^{2}\varphi U$. The combination of the last two relations implies $\beta=0$, which is impossible in $\Omega_{3}$.
So $\Omega_{3}$ is empty and in $\Omega$ the following relation holds
\begin{eqnarray}
\beta^{2}+\kappa=\frac{c}{4}.
\end{eqnarray}
In $\Omega$ (3.10) becomes:
\begin{eqnarray}
\kappa+\beta\kappa_{2}=0.
\end{eqnarray}
Differentiating (3.44) with respect to $\varphi U$ and using (3.11), (3.14), (3.33), (3.44) and (3.45) we obtain:
$\beta^{2}=-\frac{c}{4}$. Differentiation of the last relation along $\varphi U$ implies $(\varphi U)\beta=0$, which because of (3.11), (3.33) and (3.44) yields $\beta=0$, which is a contradiction. Therefore, $\Omega$ is empty and this completes the proof of Lemma 3.4.
{$
\Box$}
\\
From Lemmas 3.1 and 3.4, we conclude that $\mathcal{N}$ is empty and we are led to the following result:
\begin{proposition}
Every real hypersurface in $M_{2}(c)$, equipped with $\xi$-parallel structure Jacobi operator, is a Hopf hypersurface.
\end{proposition}
\section{\hspace{-15pt}.\hspace{10pt}Proof of Main Theorem}
Since $M$ is a Hopf hypersurface, due to Theorem 2.1 of \cite{NR1} we have that $\alpha$ is a constant. We suppose that $\alpha\neq0$. We consider a unit vector field $e$ $\epsilon$ $\mathbb{D}$ such that $Ae=\lambda e$; then $A\varphi e=\nu\varphi e$ at some point $P$ $\epsilon$ $M$, where $\{ e, \varphi e, \xi\}$ is a local orthonormal basis. Then the following relation holds on $M$ (Corollary 2.3 of \cite{NR1}):
\begin{eqnarray}
\lambda\nu=\frac{\alpha}{2}(\lambda+\nu)+\frac{c}{4}.
\end{eqnarray}
The first relation of (2.3) for $X=e$ implies:
\begin{eqnarray}
\nabla_{e}\xi=\lambda\varphi e.
\end{eqnarray}
Relation (2.4) for $X=e$ and $Y=Z=\xi$ yields:
\begin{eqnarray}
le=\frac{c}{4}e+\alpha Ae.
\end{eqnarray}
From relation (1.1) for $X=e$, we obtain:
\begin{eqnarray}
\nabla_{\xi}(le)=l\nabla_{\xi}e.
\end{eqnarray}
From (2.4) for $X=\nabla_{\xi}e$ and $Y=Z=\xi$, we get:
\begin{eqnarray}
l\nabla_{\xi}e=\frac{c}{4}\nabla_{\xi}e+\alpha A(\nabla_{\xi}e).
\end{eqnarray}
Substituting (4.3) and (4.5) into (4.4) yields:
\begin{eqnarray}
(\nabla_{\xi}A)e=0.
\end{eqnarray}
Relation (2.5) for $X=\xi$ and $Y=e$, taking into account (4.6), gives:
\begin{eqnarray}
(\nabla_{e}A)\xi=-\frac{c}{4}\varphi e
\end{eqnarray}
Finally, the scalar product of (4.7) with $\varphi e$, taking into consideration (4.1), (4.2) and $A\varphi e=\nu\varphi e$ yields:
\begin{eqnarray}
&&g(\nabla_{e}(A\xi)-A\nabla_{e}\xi, \varphi e)=-\frac{c}{4}\nonumber\\
&& \Rightarrow \alpha\lambda=-\frac{c}{4}+\lambda\nu\Rightarrow \lambda=\nu.\nonumber
\end{eqnarray}
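Here the last implication uses (4.1) and $\alpha\neq0$: since $\lambda\nu=\frac{\alpha}{2}(\lambda+\nu)+\frac{c}{4}$, the relation $\alpha\lambda=-\frac{c}{4}+\lambda\nu$ becomes $\alpha\lambda=\frac{\alpha}{2}(\lambda+\nu)$, and hence $\lambda=\nu$.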
Then $Ae=\lambda e$ and $A\varphi e=\lambda\varphi e$; therefore we obtain: $$(A\varphi-\varphi A)X=0,\;\;\forall\;\;X\;\;\epsilon\;\;TM.$$
From the above relation, Theorem 1.1 holds. Since $\alpha\neq0$, we cannot have the geodesic sphere of radius $r=\frac{\pi}{4}$, and this completes the proof of the Main Theorem.
\end{document} |
\begin{document}
\def\lfhook#1{\setbox0=\hbox{#1}{\ooalign{\hidewidth
\lower1.5ex\hbox{'}\hidewidth\crcr\unhbox0}}}
\title{Graph products of completely positive maps}
\author{Scott Atkinson}
\thanks{The author received partial support from NSF Grant \# DMS-1362138.}
\address{Vanderbilt University, Nashville, TN, USA}
\email{[email protected]}
\begin{abstract}
We define the graph product of unital completely positive maps on a universal graph product of unital $C^*$-algebras and show that it is unital completely positive itself. To accomplish this, we introduce the notion of the non-commutative length of a word, and we obtain a Stinespring construction for concatenation. This result yields the following consequences. The graph product of positive-definite functions is positive-definite. A graph product version of von Neumann's Inequality holds. Graph independent contractions on a Hilbert space simultaneously dilate to graph independent unitaries.
\end{abstract}
\maketitle
\section{Introduction}
In operator algebras, graph products unify the notions of free products and tensor products. In particular, given a simplicial graph $\Gamma = (V,E)$, assign an algebra to each vertex. If there is an edge between two vertices then the two corresponding algebras commute with each other in the graph product; if there is no edge between two vertices then the two corresponding algebras have no relations with each other within the graph product. Thus free products are given by edgeless graphs, and tensor products are given by complete graphs.
Such products were initially studied in the group theory context where the most prominent examples are the so-called right-angled Artin groups (RAAGs), first introduced by Baudisch in \cite{baudisch}, and right-angled Coxeter groups, first introduced by Chiswell in \cite{chiswell}. One of the most high-profile appearances of RAAGs is their role in the article \cite{hagwis} by Haglund-Wise whose results are utilized in Agol's celebrated resolution of the virtual Haken conjecture \cite{agol}. There has been extensive work on this subject in group theory, and we cannot possibly acknowledge all of the significant contributions to the topic. A very incomplete list of some notable references in the group context are Droms's series of papers \cite{droms3,droms2,droms1}, Green's general treatment \cite{green}, Januskiewicz's representation theoretic result \cite{janus}, Valette's weak amenability result \cite{valette}, Charney's survey \cite{charney}, and Wise's book \cite{wise}.
Graph products have been recently imported into operator algebras by several authors under just about as many names. M\l{}otkowski developed some of the theory under the name \textquotedblleft$\Lambda$-free probability\textquotedblright in the context of non-commutative probability in \cite{mlot}. In \cite{spewys}, Speicher-Wysocza\'nski revived M\l{}otkowski's work, looking at the related cumulant combinatorics and calling the idea \textquotedblleft$\varepsilon$-independence.\textquotedblright\; Independently, in \cite{casfim}, Caspers-Fima drew inspiration directly from Green's thesis \cite{green} and took a foundational approach to graph products from both operator algebraic and quantum group theoretic perspectives.
The purpose of the present paper is to write down a graph product of unital completely positive maps and show that it is again unital completely positive in the spirit of \cite{boca}. This was done particularly for graph products of finite von Neumann algebras in Proposition 2.30 of \cite{casfim} in order to prove that the Haagerup property is preserved under taking graph products. This article gives the result for the much more general $C^*$-algebraic setting.
The strategy for proving the main result, Theorem \ref{mainthm}, is largely combinatorial. While there are alternative avenues potentially available (especially in light of the recent preprint \cite{davkak}), the appeal of the approach in this article is the development of some tools addressing the less-familiar combinatorics presented by graph products. In particular, in \S\S\ref{ncl}, we introduce the notion of the \emph{non-commutative length} of a reduced word in a graph product (see Definition \ref{nclength}). Just as the length of a word is an indispensable tool in the theory of free products, the non-commutative length of a word in a graph product can be used analogously to organize arguments by ignoring, in a sense, letters that commute. In fact, in the free product (edgeless graph) case, the two notions essentially coincide--see Remark \ref{lengths}. Additionally, in \S\S\ref{stine}, we develop a Stinespring construction for concatenation within a finite subset of words in a graph product. This construction yields a version of Schwarz's Inequality for our setting. Immediately after the proof of Theorem \ref{mainthm}, in \S\S\ref{tpe}, we illustrate how our proof strategy applies in the complete graph case; this gives a new combinatorial proof of the fact that the tensor product of ucp maps on a max tensor product is again ucp.
Following the festival of induction in \S\ref{gpmaps}, we record several consequences in \S\ref{cons}. The first is Corollary \ref{choda}, giving the graph product analog of Choda's main result from \cite{choda}. Next, we present Theorem \ref{posdef}, which states that the graph product of positive-definite functions on a graph product of groups is itself positive-definite. We conclude the paper with some results regarding unitary dilation in the graph product context. In particular, we obtain graph product versions of the Sz.-Nagy-Foia\lfhook{s} dilation theorem (Theorem \ref{dil}), von Neumann's Inequality (Corollary \ref{vnineq}), and unitary dilation of graph independent contractions (Theorem \ref{gpdilation}).
\section{Preliminaries}
Fix a simplicial (i.e. undirected, no single-vertex loops, at most one edge between vertices) graph $\Gamma = (V, E)$, where $V$ denotes the set of vertices of $\Gamma$ and $E \subset V \times V$ denotes the set of edges of $\Gamma$. Given discrete groups $\left\{G_v\right\}_{v \in V}$ one can define the graph product of the $G_v$'s as follows.
\begin{dfn}[\cite{green, casfim}]
The graph product $\bigstar_\Gamma G_v$ is given by the free product $* G_v$ modulo the relations $[g,h] =1$ whenever $g \in G_v, h \in G_w$ and $(v,w) \in E$.
\end{dfn}
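For example, taking $G_v = \mathbb{Z}$ for every $v \in V$ yields the right-angled Artin group associated to $\Gamma$, and taking $G_v = \mathbb{Z}/2\mathbb{Z}$ yields the corresponding right-angled Coxeter group; the edgeless graph recovers the free product of the $G_v$'s, while (for finite $\Gamma$) the complete graph recovers their direct product.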
In the context of $C^*$-algebras, per usual, there are two flavors of graph products: universal and reduced. Some set-up is in order before presenting these constructions. Both \cite{mlot} and \cite{casfim} present cosmetically differing constructions of the same objects, but since we are adhering to the language of graphs, we will draw primarily from the discussion in \cite{casfim}.
When working with graph products, the bookkeeping can be done by considering words with letters from the vertex set $V$. Such words are given by finite sequences of elements from $V$ and will be denoted with bold letters. In order to encode the commuting relations given by $\Gamma$, we consider the equivalence relation generated by the following relations.
\begin{align*}
(v_1,\dots, v_i, v_{i+1},\dots, v_n) &\sim (v_1,\dots, v_i, v_{i+2}, \dots, v_n) &\text{if} &&v_i = v_{i+1}\\
(v_1,\dots, v_i, v_{i+1},\dots, v_n) &\sim (v_1,\dots, v_{i+1}, v_i,\dots, v_n) &\text{if} &&(v_i,v_{i+1}) \in E.
\end{align*}
The concept of a reduced word is central to the theory of graph products. The following definition is Definition 3.2 of \cite{spenic} in graph language; the equivalent definition in \cite{casfim} appears differently.
\begin{dfn}\label{redv}
A word $\mathbbf{v} = (v_1,\dots,v_n)$ is \emph{reduced} if whenever $v_k = v_l, k < l$, then there exists a $p$ with $k< p < l$ such that $(v_k, v_p) \notin E$. Let $\mathcal{W}_\text{red}$ denote the set of all reduced words. We take the convention that the empty word is reduced.
\end{dfn}
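Purely as an illustration, and with the ad hoc conventions that a word is a tuple of vertex labels and that $E$ is given as a list of unordered pairs (the helper function below is ours and plays no role in the formal development), the reducedness condition of Definition \ref{redv} can be checked mechanically:
\begin{verbatim}
def is_reduced(word, edges):
    # A word (v_1,...,v_n) is reduced iff for every pair of equal letters
    # v_k = v_l (k < l) some intermediate letter v_p (k < p < l) fails to
    # commute with v_k, i.e. (v_k, v_p) is not an edge.
    E = {frozenset(e) for e in edges}
    n = len(word)
    for k in range(n):
        for l in range(k + 1, n):
            if word[k] == word[l]:
                if not any(frozenset((word[k], word[p])) not in E
                           for p in range(k + 1, l)):
                    return False
    return True

# Path graph u -- v -- w (so (u, w) is not an edge):
edges = [("u", "v"), ("v", "w")]
print(is_reduced(("u", "w", "u"), edges))  # True: w separates the two u's
print(is_reduced(("u", "v", "u"), edges))  # False: only v separates them
\end{verbatim}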
\begin{prop}[\cite{green,casfim}]\label{reducedlemma}\hspace*{\fill}
\begin{enumerate}
\item Every word $\mathbbf{v}$ is equivalent to a reduced word $\mathbbf{w} = (w_1,\dots, w_n)$. (We let $|\mathbbf{w}|=n$ denote the \emph{length} of the reduced word.)
\item If $\mathbbf{v} \sim \mathbbf{w}\sim\mathbbf{w}'$ with both $\mathbbf{w}$ and $\mathbbf{w}'$ reduced, then the lengths of $\mathbbf{w}$ and $\mathbbf{w}'$ are equal and $\mathbbf{w}' = (w_{\sigma(1)},\dots,w_{\sigma(n)})$ is a permutation of $\mathbbf{w}$. Furthermore, this permutation $\sigma$ is unique if we insist that whenever $w_k = w_l, k< l$ then $\sigma(k) < \sigma(l)$.
\end{enumerate}
\end{prop}
\noindent Let $\mathcal{W}_\text{min}$ be a set of representatives of every reduced word such that each equivalence class has exactly one representative in $\mathcal{W}_\text{min}$. An element of $\mathcal{W}_\text{min}$ is called a \emph{minimal word}.
\subsection{Universal graph products} To define universal graph products we follow the discussion from \cite{mlot} which gives a more constructive definition compared to the equivalent definition appearing in \cite{casfim}.
\begin{dfn}\label{unidef}
Given a graph $\Gamma = (V,E)$ and unital $C^*$-algebras $\mathcal{A}_v$ for every $v \in V$, the \emph{universal graph product $C^*$-algebra} is the unique unital $C^*$-algebra $\bigstar_\Gamma \mathcal{A}_v$ together with unital $*$-homomorphisms $\iota_v: \mathcal{A}_v \rightarrow \bigstar_\Gamma \mathcal{A}_v$ satisfying the following universal property.
\begin{enumerate}
\item $\iota_v(a)\iota_w(b) = \iota_w(b)\iota_v(a)$ whenever $a \in \mathcal{A}_v, b \in \mathcal{A}_w, (v,w) \in E$;
\item For any unital $C^*$-algebra $\mathcal{B}$ with $*$-homomorphisms $f_v: \mathcal{A}_v \rightarrow \mathcal{B}$ such that $f_v(a)f_w(b) = f_w(b)f_v(a)$ whenever $a \in \mathcal{A}_v, b \in \mathcal{A}_w, (v,w) \in E$, there exists a unique $*$-homomorphism $\bigstar_\Gamma f_v: \bigstar_\Gamma \mathcal{A}_v \rightarrow \mathcal{B}$ such that $\bigstar_\Gamma f_v \circ \iota_{v_0} = f_{v_0}$ for every $v_0 \in V$.
\end{enumerate}
\end{dfn}
The graph product $\bigstar_\Gamma \mathcal{A}_v$ is the universal $C^*$-algebraic free product $*_{v \in V} \mathcal{A}_v$ modulo the ideal generated by the commutation relations encoded in the graph $\Gamma$.
The following constructive description of universal graph product $C^*$-algebras also appears in \cite{mlot}. Ignoring the norm topology, we can consider the universal $*$-algebraic graph product of the $\mathcal{A}_v$'s, $\text{\ding{73}}_\Gamma \mathcal{A}_v$, as the universal $*$-algebraic free product of the $\mathcal{A}_v$'s modulo the ideal generated by the commutation relations coming from the graph $\Gamma$. For each $v \in V$ fix a state $\varphi_v \in S(\mathcal{A}_v)$, and let $\mathring{\mathcal{A}}_v = \ker(\varphi_v)$. For each $\mathbbf{v} = (v_1,\dots,v_n) \in \mathcal{W}_\text{min}$ let $\mathcal{A}_\mathbbf{v} = \mathring{\mathcal{A}}_{v_1}\otimes \cdots \otimes \mathring{\mathcal{A}}_{v_n}$ with $\mathcal{A}_e = \mathbb{C}1$ where $e$ is the empty word. We can identify $\text{\ding{73}}_\Gamma \mathcal{A}_v$ (as a vector space) with the following direct sum of tensor products. \[\text{\ding{73}}_\Gamma \mathcal{A}_v = \bigoplus_{\mathbbf{v} \in \mathcal{W}_\text{min}} \mathcal{A}_\mathbbf{v}\] Then the $C^*$-algebraic graph product $\bigstar_\Gamma \mathcal{A}_v$ is the $C^*$-envelope of the $*$-algebraic graph product $\text{\ding{73}}_\Gamma \mathcal{A}_v$. Compare this with the discussion in Sections 1.2 and 1.4 of \cite{vodyni}.
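For instance, if $\Gamma$ is the complete graph on two vertices $v_{1}$ and $v_{2}$, then $\mathcal{W}_\text{min}$ consists of representatives of the words $e,(v_{1}),(v_{2}),(v_{1},v_{2})$, and the identification above reads \[\text{\ding{73}}_\Gamma \mathcal{A}_v = \mathbb{C}1 \oplus \mathring{\mathcal{A}}_{v_{1}}\oplus\mathring{\mathcal{A}}_{v_{2}}\oplus(\mathring{\mathcal{A}}_{v_{1}}\otimes \mathring{\mathcal{A}}_{v_{2}}),\] which is the familiar decomposition of the algebraic tensor product $\mathcal{A}_{v_{1}}\otimes\mathcal{A}_{v_{2}}$ determined by the states $\varphi_{v_{1}}$ and $\varphi_{v_{2}}$.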
\begin{dfn}
A \emph{reduced word} $a \in \text{\ding{73}}_\Gamma \mathcal{A}_v$ is an element of the form $a = a_1\cdots a_m$ where $a_k \in \mathring{\mathcal{A}_{v_k}}$ and $(v_1,\dots,v_m) \in \mathcal{W}_\text{red}$. In such an instance we write $(v_1,\dots, v_m) = \mathbbf{v}_a$ and say $|a| = m$--denoting the \emph{length} of $a$ (well-defined by Proposition \ref{reducedlemma}). Accepting the common risks of abusing notation, we let $\mathcal{W}_\text{red}$ also denote the set of reduced words in $\text{\ding{73}}_\Gamma \mathcal{A}_v$. The linear span of $\mathcal{W}_\text{red} \cup \left\{1\right\}$ is dense in $\bigstar_\Gamma \mathcal{A}_v$ (see \cite{mlot}).
\end{dfn}
\subsection{Reduced graph products} The following construction can be found in \cite{casfim}. The reduced graph product of $C^*$-algebras is defined in the presence of states and depends on the construction of a graph product of Hilbert spaces, defined in a way similar to that of the definition of a free product of Hilbert spaces.
For each $v \in V$ let $\mathcal{H}_v$ be a Hilbert space with a distinguished unit vector $\xi_v \in \mathcal{H}_v$. Put $\mathring{\mathcal{H}}_v := \mathcal{H}_v \ominus \mathbb{C}\xi_v$. Given $\mathbbf{v} = (v_1,\dots,v_n) \in \mathcal{W}_\text{red}$, define \[\mathcal{H}_\mathbbf{v} := \mathring{\mathcal{H}}_{v_1} \otimes \cdots \otimes \mathring{\mathcal{H}}_{v_n}.\] If $\mathbbf{v}, \mathbbf{w} \in \mathcal{W}_\text{red}$ with $\mathbbf{v}\sim\mathbbf{w}$ then by Proposition \ref{reducedlemma} there is a uniquely determined unitary $\mathcal{Q}_{\mathbbf{v},\mathbbf{w}}: \mathcal{H}_\mathbbf{v} \rightarrow \mathcal{H}_\mathbbf{w}$. Since each reduced word $\mathbbf{v}$ has a unique representative $\mathbbf{v}' \in \mathcal{W}_\text{min}$, we write $\mathcal{Q}_\mathbbf{v}$ instead of $\mathcal{Q}_{\mathbbf{v},\mathbbf{v}'}$.
\begin{dfn}
Define the graph product Hilbert space $(\bigstar_\Gamma\mathcal{H}_v,\Omega)$ as follows. \[\bigstar_\Gamma\mathcal{H}_v := \mathbb{C}\Omega \oplus \bigoplus_{\mathbbf{w} \in \mathcal{W}_\text{min}} \mathcal{H}_\mathbbf{w}\]
\end{dfn}
Next, given $v_0 \in V$ we define a canonical (left) representation of $B(\mathcal{H}_{v_0})$ in $B(\bigstar_\Gamma \mathcal{H}_v)$. Let $\mathcal{W}_l(v_0)\subset \mathcal{W}_\text{min}$ be the set of minimal words $\mathbbf{w}$ such that $v_0\mathbbf{w}$ is still reduced. Put \[\mathcal{H}_l(v_0):= \mathbb{C}\Omega \oplus \bigoplus_{\mathbbf{w} \in \mathcal{W}_l(v_0)} \mathcal{H}_\mathbbf{w}.\] We have that $\bigstar_\Gamma \mathcal{H}_v \cong \mathcal{H}_{v_0} \otimes \mathcal{H}_l(v_0)$ via the unitary $U_l(v_0)$ defined as follows.
\begin{align*}
U_l(v_0): \mathcal{H}_{v_0} \otimes \mathcal{H}_l(v_0) &\rightarrow \bigstar_\Gamma \mathcal{H}_v\\
\xi_{v_0} \otimes \Omega &\mapsto \Omega\\
\mathring{\mathcal{H}}_{v_0} \otimes \Omega &\mapsto \mathring{\mathcal{H}}_{v_0}\\
\xi_{v_0} \otimes \mathcal{H}_\mathbbf{w} &\mapsto \mathcal{H}_\mathbbf{w}\\
\mathring{\mathcal{H}}_{v_0}\otimes \mathcal{H}_\mathbbf{w} &\mapsto \mathcal{Q}_{v_0\mathbbf{w}}(\mathring{\mathcal{H}}_{v_0} \otimes \mathcal{H}_\mathbbf{w})
\end{align*}
\noindent Then we define $\lambda_{v_0}: B(\mathcal{H}_{v_0}) \rightarrow B(\bigstar_\Gamma \mathcal{H}_v)$ by \[\lambda_{v_0}(x) = U_l(v_0)(x\otimes 1)U_l(v_0)^*.\]
\begin{dfn}
For each $v \in V$ let $\mathcal{A}_v$ be a unital $C^*$-algebra, let $\varphi_v \in S(\mathcal{A}_v)$ be a state, and let $(\pi_v, \mathcal{H}_v, \xi_v)$ be the corresponding GNS triple. The \emph{(left) reduced graph product $C^*$-algebra} is denoted $\bigstar_\Gamma (\mathcal{A}_v, \varphi_v)$ and is defined to be the $C^*$-subalgebra in $B(\bigstar_\Gamma \mathcal{H}_v)$ generated by $\left\{\lambda_v(\pi_v(\mathcal{A}_v))\right\}_{v \in V}$. The vector state $\left\langle \cdot \Omega| \Omega\right\rangle$ on $\bigstar_\Gamma (\mathcal{A}_v, \varphi_v)$ is the reduced graph product state denoted $\bigstar_\Gamma \varphi_v$.
\end{dfn}
\begin{rmk}\label{B}
As outlined in \cite{casfim}, one can analogously construct right representations $\rho_{v_0}: B(\mathcal{H}_{v_0}) \rightarrow B(\bigstar_\Gamma \mathcal{H}_v)$ and subsequently define a right reduced graph product $C^*$-algebra.
\end{rmk}
\subsection{Graph independence}
We briefly discuss graph products in the context of non-commutative probability. Compare this discussion with \cite{mlot,spewys}.
\begin{dfn}
A \emph{non-commutative probability space} is given by a pair $(\mathcal{A}, \varphi)$ where $\mathcal{A}$ is a unital $C^*$-algebra and $\varphi \in S(\mathcal{A})$ is a state on $\mathcal{A}$.
\end{dfn}
\begin{dfn}
Given a non-commutative probability space $(\mathcal{A},\varphi)$ and a graph $\Gamma = (V,E)$, let $\left\{\mathcal{A}_v \right\}_{v\in V} \subset \mathcal{A}$ be a family of unital $C^*$-subalgebras. Put $\mathring{\mathcal{A}}_v:= \ker(\varphi|_{\mathcal{A}_v})$. An element $a \in C^*(\cup_{v\in V} \mathcal{A}_v)$ is \emph{reduced with respect to $\varphi$} if $a = a_1\cdots a_m$ where $a_j \in \mathring{\mathcal{A}}_{v_j}$ for $1 \leq j \leq m$ and $(v_1,\dots,v_m)$ is reduced in the sense of Definition \ref{redv}.
\end{dfn}
\begin{dfn}\cite{mlot,spewys}
Given a non-commutative probability space $(\mathcal{A},\varphi)$ and a graph $\Gamma = (V,E)$, a family of unital $C^*$-subalgebras $\left\{\mathcal{A}_v\right\}_{v\in V}\subset (\mathcal{A},\varphi)$ is \emph{$\Gamma$ independent} (or \emph{graph independent} when context is clear) if
\begin{enumerate}
\item $(v,v') \in E \Rightarrow \mathcal{A}_v$ and $\mathcal{A}_{v'}$ commute;
\item for any $a \in C^*(\cup_{v\in V} \mathcal{A}_v)$ such that $a$ is reduced with respect to $\varphi, \varphi(a) = 0$.
\end{enumerate}
A family of random variables $\left\{x_v\right\}_{v\in V} \subset \mathcal{A}$ is $\Gamma$ independent if the family of their generated unital $C^*$-algebras $\left\{C^*(1,x_v)\right\}_{v \in V}$ is $\Gamma$ independent.
\end{dfn}
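In particular, if $\Gamma$ is the complete graph then a word is reduced precisely when its letters come from pairwise distinct (and hence mutually commuting) algebras, so $\Gamma$ independence reduces to classical (tensor) independence of commuting subalgebras; if $\Gamma$ is edgeless, it reduces to free independence.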
\begin{exmpl}
By construction, $\left\{\lambda_v(\pi_v(\mathcal{A}_v))\right\}_{v\in V} \subset (\bigstar_\Gamma (\mathcal{A}_v,\varphi_v), \bigstar_\Gamma \varphi_v)$ is $\Gamma$ independent.
\end{exmpl}
Consider the following analog of Lemma 5.13 of \cite{spenic}.
\begin{lem}\label{stres}
Let $(\mathcal{A},\varphi)$ be a non-commutative probability space. Let $\Gamma = (V,E)$ be a graph, and let the unital subalgebras $\mathcal{A}_v, v \in V$, be $\Gamma$ independent in $(\mathcal{A}, \varphi)$. Let $\mathcal{B}$ be the $C^*$-algebra generated by the $\mathcal{A}_v$'s. Then $\varphi|_\mathcal{B}$ is uniquely determined by $\varphi|_{\mathcal{A}_v}$ for all $v \in V$.
\end{lem}
\begin{proof}
This follows directly from (the proof of) Lemma 1 in \cite{mlot}.
\end{proof}
\begin{rmk}
Although this is not the topic of the present paper, we note that the existence of left and right (cf. Remark \ref{B}) representations on graph product Hilbert spaces sets the stage for an investigation into \textquotedblleft bi-graph independence\textquotedblright--see \cite{pof1}.
\end{rmk}
\section{Graph products of maps}\label{gpmaps}
This section presents the main result of the present article, establishing the existence of graph products of unital completely positive maps. The max tensor product and the universal free product are both examples of universal graph products; so the following result is a generalization and unification of the max tensor product and Boca's universal free product of completely positive maps appearing in \cite{boca}.
Let $\Gamma = (V, E)$ be a graph. Let $\mathcal{B}$ be a unital $C^*$-algebra. For each $v \in V$, let $\mathcal{A}_v$ be a unital $C^*$-algebra, and let $\theta_v: \mathcal{A}_v \rightarrow \mathcal{B}$ be a unital completely positive map with the property that if $(v,v') \in E$ then $\theta_v(\mathcal{A}_v)$ commutes with $\theta_{v'}(\mathcal{A}_{v'})$. Furthermore, for each $v_0 \in V$, fix a state $\varphi_{v_0} \in S(\mathcal{A}_{v_0})$, and let $\iota_{v_0}: \mathcal{A}_{v_0} \rightarrow \bigstar_\Gamma \mathcal{A}_v$ be the inclusion given in Definition \ref{unidef}. We densely define the unital graph product map $\bigstar_\Gamma \theta_v$ with respect to the states $\varphi_v$ on $\mathcal{W}_\text{red} \cup \left\{1\right\}$ and extend linearly. For $a_j \in \mathring{\mathcal{A}}_{v_j}:=\ker(\varphi_{v_j}), 1 \leq j \leq n, (v_1,\dots, v_n) \in \mathcal{W}_\text{red}$,
\begin{align}
\bigstar_\Gamma \theta_v \Big(\prod_{j=1}^n \iota_{v_j}(a_j)\Big) := \prod_{j=1}^n \theta_{v_j}(a_j).\label{ucpdef}
\end{align}
From now on, we suppress the $\iota_v$'s.
\begin{thm}\label{mainthm}
The map $\bigstar_\Gamma \theta_v$ densely defined on the linear span of $\mathcal{W}_\text{red}\cup\left\{1\right\}$ by the relation \eqref{ucpdef} extends by continuity to a unital completely positive map $\bigstar_\Gamma \mathcal{A}_v \rightarrow \mathcal{B}$.
\end{thm}
The proof we present at the end of this section is an adaptation of Boca's original proof in \cite{boca}. It deserves mentioning that a recently posted preprint (\cite{davkak}) by Davidson-Kakariadis exhibits an alternative proof of the corresponding result in the amalgamated free product case using a dilation theoretic approach. While a graph product companion to Davidson-Kakariadis's technique is worth pursuing, generalizing Boca's strategy to the graph product setting has the benefit of developing some tools and facts regarding the less familiar--and sometimes frustrating--combinatorics of graph products. Due to the subtlety of the combinatorics, some preparation is in order.
For the sake of simpler notation we will denote $\bigstar_\Gamma \theta_v$ by $\Theta$. As in \cite{boca} assume that $\mathcal{B} \subset B(\mathcal{H})$ for some Hilbert space $\mathcal{H}$ and that $I_\mathcal{H} \in \mathcal{B}$. It is well-known that it suffices to show that for any $n \in \mathbb{N}, x_1, \dots, x_n \in \bigstar_\Gamma \mathcal{A}_v, \xi_1, \dots, \xi_n \in \mathcal{H}$, \[\sum_{i,j =1}^n \langle \Theta(x_i^*x_j)\xi_j | \xi_i \rangle \geq 0.\] By an argument identical to the one in \cite{boca}, we can further reduce the required inequality to the following. It is enough to check that for any finite set $X$ in $\mathcal{W}_\text{red} \cup \left\{1\right\}$ and any function $\xi: X \rightarrow \mathcal{H}$, we have \[\sum_{x,y \in X} \langle \Theta(x^*y)\xi(y) | \xi(x)\rangle \geq 0.\]
Although the following fact is very simple, it deserves to be recorded separately because it is so fundamental to the arguments that follow.
\begin{prop}
Let $x, y \in \mathcal{W}_\text{red}$. If $x^*y$ is not reduced, there exist orderings of $\mathbbf{v}_x = (v_1,\dots, v_n)$ and $\mathbbf{v}_y= (v'_1,\dots, v'_m)$ such that $v_1 = v'_1$.
\end{prop}
\subsection{Non-commutative length}\label{ncl}
We now discuss useful tools for the relevant combinatorics of this question.
\begin{dfn}\label{complete}
A finite subset $X \subset \mathcal{W}_\text{red} \cup \left\{ 1 \right\}$ is \emph{complete} if $1 \in X$ and whenever $a_1\cdots a_m \in X$ we have $a_{\sigma(2)}\cdots a_{\sigma(m)} \in X$ and $a_{\sigma(1)} \cdots a_{\sigma(m-1)} \in X$ for every permutation $\sigma \in S_m$ such that $a_1 \cdots a_m = a_{\sigma(1)} \cdots a_{\sigma(m)}$. In other words $X$ is complete if it contains the unit and is closed under left and right truncations of any equivalent rearrangements. Compare this to Boca's definition of a complete set in \cite{boca}. Let $\mathbbf{v}_X:= \left\{\mathbbf{v} \in \mathcal{W}_\text{red} | \mathbbf{v} = \mathbbf{v}_a \text{ for some } a \in X\right\}$.
\end{dfn}
Since every finite set in $\mathcal{W}_\text{red} \cup \left\{ 1\right\}$ is contained in a complete set, we can make one final reduction of the desired inequality as follows. For any complete set $X \subset \mathcal{W}_\text{red} \cup \left\{ 1 \right\}$ and any function $\xi: X \rightarrow \mathcal{H}$, we have
\begin{align}
\sum_{x,y \in X} \langle \Theta(x^*y)\xi(y) | \xi(x)\rangle &\geq 0. \label{goal}
\end{align}
\begin{dfn}
We can place a partial order $\preceq$ on $\mathcal{W}_\text{red} \cup \left\{ 1\right\}$ with respect to truncation as follows. For every $x \in \mathcal{W}_\text{red}, 1 \preceq x$; and given $x, y \in \mathcal{W}_\text{red}, y \preceq x$ if either $x = y$ or $x$ truncates (as in Definition \ref{complete}) to $y$. This order also applies to the words in $V$.
\end{dfn}
Let $Y \subset \mathcal{W}_\text{red}\cup\left\{1\right\}$ be any finite nonempty subset. Put \[Y^\preceq := \left\{x \in \mathcal{W}_\text{red}\cup\left\{1\right\} | \exists y \in Y: x \preceq y \right\}.\] Clearly, $Y^\preceq$ is complete.
\begin{dfn}\label{nclength}
Fix $v_0 \in V$. Let $\mathbbf{v} = (v_1,\dots,v_n,v_0)$ be reduced. We let $\Shortstack{. . . .} \mathbbf{v} \Shortstack{. . . .}_{v_0}$ denote the \emph{(right-hand) non-commutative length of $\mathbbf{v}$ with respect to $v_0$}, given by \[\Shortstack{. . . .} \mathbbf{v} \Shortstack{. . . .}_{v_0} := \text{Card}\Big(\left\{i | 1\leq i \leq n, (v_i,v_0) \notin E\right\}\Big).\] Note that this counts when $v_0$ is repeated. If $\mathbbf{v}$ cannot be written with $v_0$ at the right-hand end, put $\Shortstack{. . . .} \mathbbf{v} \Shortstack{. . . .}_{v_0} = -1$. If $w \in \bigstar_\Gamma \mathcal{A}_v$ is reduced, let $\Shortstack{. . . .} w \Shortstack{. . . .}_{v_0} = \Shortstack{. . . .} \mathbbf{v}_w\Shortstack{. . . .}_{v_0}$. Given a finite set $X$ of reduced words (of vertices or algebra elements), we define the \emph{(right-hand) non-commutative length of $X$ with respect to $v_0$}, denoted $\Shortstack{. . . .} X \Shortstack{. . . .}_{v_0}$ to be given by \[ \Shortstack{. . . .} X \Shortstack{. . . .}_{v_0} := \max_{w \in X} \Shortstack{. . . .} w \Shortstack{. . . .}_{v_0}.\]
\end{dfn}
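With the same ad hoc computational conventions introduced after Definition \ref{redv} (words as tuples of vertex labels, $E$ as a list of unordered pairs), the following sketch, again our own illustration, computes the non-commutative length of a reduced word; it uses the observation that a reduced word admits an equivalent rearrangement ending in $v_{0}$ exactly when every letter to the right of its last occurrence of $v_{0}$ commutes with $v_{0}$.
\begin{verbatim}
def nc_length(word, v0, edges):
    # Right-hand non-commutative length of a reduced word with respect
    # to v0; returns -1 if no equivalent rearrangement ends in v0.
    E = {frozenset(e) for e in edges}
    if v0 not in word:
        return -1
    last = max(i for i, v in enumerate(word) if v == v0)
    if any(frozenset((v, v0)) not in E for v in word[last + 1:]):
        return -1
    rest = word[:last] + word[last + 1:]   # the letters v_1,...,v_n
    # repeated occurrences of v0 in rest are counted, since {v0} is
    # never an edge of a simplicial graph
    return sum(1 for v in rest if frozenset((v, v0)) not in E)

# With the single edge (u, v0):
print(nc_length(("w", "u", "v0"), "v0", [("u", "v0")]))  # 1
print(nc_length(("u", "v0"), "v0", [("u", "v0")]))       # 0
\end{verbatim}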
\begin{rmk}\label{lengths}
Observe that in a free product (graph product over a graph with no edges), the length of a reduced word is always one more than the non-commutative length of a reduced word.
\end{rmk}
\begin{dfn}\label{stdform}
Fix $v_0 \in V$. Let $\mathbbf{x} \in \mathcal{W}_\text{red}$ be such that $v_0 \in \mathbbf{x}$. Suppose $\mathbbf{y},\mathbbf{c},\mathbbf{b} \in \mathcal{W}_\text{red}$, satisfy the following properties.
\begin{itemize}
\item $\mathbbf{x} = \mathbbf{y}\mathbbf{c}(v_0)\mathbbf{b}$;
\item $\mathbbf{b}$ is the word of smallest length so that $\mathbbf{y}\mathbbf{c}(v_0) \preceq \mathbbf{x}$ and $\Shortstack{. . . .} \mathbbf{y}\mathbbf{c}(v_0)\Shortstack{. . . .}_{v_0} = \Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0}$;
\item $\mathbbf{y}$ is the word of smallest length so that $\mathbbf{y}(v_0) \preceq \mathbbf{x}$ and $\Shortstack{. . . .} \mathbbf{y}(v_0) \Shortstack{. . . .}_{v_0} = \Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0}$.
\end{itemize}
Then we say that $\mathbbf{x} = \mathbbf{y}\mathbbf{c}(v_0)\mathbbf{b}$ is in \emph{standard form with respect to $v_0$}.
We extend this definition to reduced words of algebra elements.
\end{dfn}
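For instance, suppose $V=\{u,v_{0},w\}$ with the single edge $(u,v_{0})$, and consider the reduced word $\mathbbf{x}=(w,u,v_{0})$. Every word in $\left\{\mathbbf{x}\right\}^\preceq$ has non-commutative length at most $1$ with respect to $v_{0}$, and this value is attained by $\mathbbf{x}$ itself, so $\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0}=1$. Since $\mathbbf{x}$ already ends in $v_{0}$, we may take $\mathbbf{b}$ to be the empty word; the shortest $\mathbbf{y}$ with $\mathbbf{y}(v_{0})\preceq\mathbbf{x}$ and $\Shortstack{. . . .} \mathbbf{y}(v_{0})\Shortstack{. . . .}_{v_0}=1$ is $\mathbbf{y}=(w)$ (both the empty word and $(u)$ give non-commutative length $0$), and then $\mathbbf{c}=(u)$. Thus $\mathbbf{x}=(w)(u)(v_{0})$ is in standard form with respect to $v_{0}$.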
The following proposition follows from a straightforward induction argument using the fact that truncation preserves standard form; the proof is left as an exercise.
\begin{prop}
If $\mathbbf{x} = \mathbbf{y}\mathbbf{c}(v_0)\mathbbf{b}$ is in standard form with respect to $v_0$, then the words $\mathbbf{y}, \mathbbf{c},$ and $\mathbbf{b}$ are unique.
\end{prop}
\noindent Given $a \in \mathcal{A}_v$, let $\mathring{a}:= a - \varphi_v(a)1$. We have the following lemma.
\begin{lem}\label{X1crossterms}
Fix $v_0 \in V$. Let $\mathbbf{x} \in \mathcal{W}_\text{red}$ be such that $v_0 \in \mathbbf{x}$. Say $\mathbbf{x} = \mathbbf{y}\mathbbf{c}(v_0)\mathbbf{b}$ is in standard form with respect to $v_0$. Let $y, c, a, b \in \mathcal{W}_\text{red} \cup \left\{1\right\}$ be such that $\mathbbf{v}_y = \mathbbf{y}, \mathbbf{v}_c = \mathbbf{c}, \mathbbf{v}_b = \mathbbf{b},$ and $a \in \mathring{\mathcal{A}}_{v_0}$. If $\mathbbf{x}'$ is such that $\Shortstack{. . . .} \left\{\mathbbf{x}'\right\}^\preceq \Shortstack{. . . .}_{v_0} < \Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0}$, then for every $x' \in \mathcal{W}_\text{red}$ such that $\mathbbf{x}' = \mathbbf{v}_{x'}$, \[\Theta(b^*a^*c^*y^*x') = \Theta(b^*a^*)\Theta(c^*y^*x').\]
\end{lem}
\begin{proof}
We proceed by induction on $\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0}$.
\begin{itemize}
\item $\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0} = 0$: We proceed by further induction on $|\mathbbf{x}'|$.
\begin{itemize}
\item[$\bullet\bullet$] $|\mathbbf{x}'| = 0$: $x' = 1$, and the statement is obviously true.
\item [$\bullet\bullet$] $|\mathbbf{x}'| = k >0$: if $b^*a^*c^*y^*x'$ is reduced then the equality holds. Suppose $b^*a^*c^*y^*x'$ is not reduced. In this case $y =1$. Let $c = c_1\cdots c_m$ and $x' = x'_1\cdots x'_k$. By the definition of standard form, we have that we can rearrange the $c_i$'s and $x'_i$'s so that $\mathbbf{v}_{c_1} = \mathbbf{v}_{x'_1}$. That is, none of the $b$ terms can cross past $a$; otherwise the minimality of $|b|$ would be contradicted. So we have
\begin{align}
&\Theta(b^* a^* c_m^* \cdots c_1^* x'_1 \cdots x'_k)\notag\\
& = \Theta(b^*a^*c_m^*\cdots c_2^*(\mathring{c_1^*x'_1})x'_2\cdots x'_k)\notag\\
& + \varphi_{\mathbbf{v}_{x'_1}}(c_1^*x'_1)\Theta(b^*a^*c_m^*\cdots c_2^*x'_2\cdots x'_k) \notag \\
&= \Theta(b^*a^*)\Theta(c_m^*\cdots c_2^*(\mathring{c_1^*x'_1})x'_2\cdots x'_k) \label{1}\\
&+ \varphi_{\mathbbf{v}_{x'_1}}(c_1^*x'_1)\Theta(b^*a^*)\Theta(c_m^*\cdots c_2^*x'_2\cdots x'_k)\notag \\
&= \Theta(b^*a^*)\Theta(c_m^* \cdots c_1^* x'_1 \cdots x'_k) \notag
\end{align}
where \eqref{1} follows from the fact that $\Shortstack{. . . .} \left\{x_2' \cdots x_k'\right\}^\preceq\Shortstack{. . . .}_{v_0}$ is less than $\Shortstack{. . . .} \left\{(\mathring{c_1^*x_1'})^* c_2 \cdots c_m a b \right\}^\preceq\Shortstack{. . . .}_{v_0}$ and $\Shortstack{. . . .} \left\{c_2\cdots c_m a b\right\}^\preceq\Shortstack{. . . .}_{v_0}$, and thus the inductive hypothesis gives the desired equality.
\end{itemize}
\item $\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0} >0$: Again we induct further on $|\mathbbf{x}'|$.
\begin{itemize}
\item[$\bullet\bullet$] $|\mathbbf{x}'| =0$: Trivial.
\item[$\bullet\bullet$] $|\mathbbf{x}'| = k>0$: If $b^*a^*c^*y^*x'$ is reduced then the equality holds. Suppose $b^*a^*c^*y^*x'$ is not reduced, and let $cy = z_1 \cdots z_m$ and $x' = x'_1 \cdots x'_k$. As before, we can rearrange the $z_i$'s and $x'_i$'s so that $\mathbbf{v}_{z_1} = \mathbbf{v}_{x'_1}$. If $(\mathbbf{v}_{z_1}, v_0) \in E$ then the argument in the $\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0} = 0$ case holds. Assume that $(\mathbbf{v}_{z_1}, v_0) \notin E$. Then $\Shortstack{. . . .} z_2 \cdots z_m a\Shortstack{. . . .}_{v_0} = \Shortstack{. . . .} yca\Shortstack{. . . .}_{v_0} - 1 \geq 0$. It is a quick check to see that if $\Shortstack{. . . .} \left\{\mathbbf{x}'\right\}^\preceq\Shortstack{. . . .}_{v_0} \neq -1$ then deleting $x'_1$ from the left decreases the non-commutative length by one, and if $\Shortstack{. . . .} \left\{\mathbbf{x}'\right\}^\preceq\Shortstack{. . . .}_{v_0} = -1$, then deleting $x'_1$ leaves the non-commutative length alone. In either case, the inductive hypothesis applies, yielding the equality as illustrated above.\qedhere
\end{itemize}
\end{itemize}
\end{proof}
\begin{lem}\label{Y1crossterms}
Fix $v_0 \in V$. Let $\mathbbf{x}, \mathbbf{x}' \in \mathcal{W}_\text{red}$ be such that $\Shortstack{. . . .} \left\{ \mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0} = \Shortstack{. . . .} \left\{ \mathbbf{x}'\right\}^\preceq \Shortstack{. . . .}_{v_0} >0$. Let $\mathbbf{y}, \mathbbf{y}', \mathbbf{c}, \mathbbf{c}', \mathbbf{b}, \mathbbf{b}' \in \mathcal{W}_\text{red}$ be such that $\mathbbf{x} = \mathbbf{y} \mathbbf{c} (v_0) \mathbbf{b}$ and $\mathbbf{x}' = \mathbbf{y}' \mathbbf{c}' (v_0) \mathbbf{b}'$ are both in standard form with respect to $v_0$. If $\mathbbf{y} \neq \mathbbf{y}'$ then for every $y, y', c, c', a, a', b, b' \in \mathcal{W}_\text{red}\cup \left\{1\right\}$ such that $\mathbbf{v}_y = \mathbbf{y}, \mathbbf{v}_{y'} = \mathbbf{y}', \mathbbf{v}_c = \mathbbf{c}, \mathbbf{v}_{c'} = \mathbbf{c}', \mathbbf{v}_b = \mathbbf{b}, \mathbbf{v}_{b'} = \mathbbf{b}',$ and $a, a' \in \mathring{\mathcal{A}}_{v_0}$ we have \[\Theta(b^*a^*c^*y^*y'c'a'b') = \Theta(b^*a^*)\Theta(c^*y^*y'c'a'b') (= \Theta(b^*a^*)\Theta(c^*y^*y'c')\Theta(a'b') ) .\]
\end{lem}
\begin{proof}
Let $yc = z_1\cdots z_m$ and $y'c' = z'_1\cdots z'_{m'}$. We proceed by induction on $\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq\Shortstack{. . . .}_{v_0}$.
\begin{itemize}
\item $\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0} = 1$: We induct further on $m + m'$.
\begin{itemize}
\item[$\bullet \bullet$] $m+m' = 2$: Since $\mathbbf{y} \neq \mathbbf{y}'$, we immediately get that $b^*a^*z_1^*z_1'a'b'$ is reduced. So the equality follows.
\item[$\bullet \bullet$] $m + m' > 2$: If $b^*a^*c^*y^*y'c'a'b'$ is reduced then we are done. Suppose $b^*a^*c^*y^*y'c'a'b'$ is not reduced. Then we can rearrange the $z$ and $z'$ terms so that $\mathbbf{v}_{z_1} = \mathbbf{v}_{z'_1}$. Then we have
\begin{align}
&\Theta(b^* a^* z_m^* \cdots z_1^* z'_1 \cdots z'_{m'} a' b')\notag\\
& = \Theta(b^*a^*z_m^*\cdots z_2^*(\mathring{z_1^*z'_1})z'_2\cdots z'_{m'} a' b') \label{2} \\
&+ \varphi_{\mathbbf{v}_{z_1}}(z_1^*z'_1)\Theta(b^*a^*z_m^*\cdots z_2^*z'_2\cdots z'_{m'} a' b') \notag
\end{align}
Since $\mathbbf{y} \neq \mathbbf{y}'$ we have that $(\mathbbf{v}_{z_1}, v_0) \in E$. The inductive hypothesis on $m + m'$ applies, yielding the desired equality.
\end{itemize}
\item $\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0} >1$: Again, induct further on $m + m'$.
\begin{itemize}
\item[$\bullet \bullet$] $m + m' = 2\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0}$: Suppose $b^*a^*c^*y^*y'c'a'b'$ is not reduced and that $\mathbbf{v}_{z_1} = \mathbbf{v}_{z'_1}$. Then we obtain the same decomposition as in \eqref{2}. Then by applying Lemma \ref{X1crossterms} to the first term on the right-hand side of \eqref{2} and the inductive hypothesis on $\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq\Shortstack{. . . .}_{v_0}$ to the second term, we obtain the desired equality.
\item[$\bullet \bullet$] $m + m' > 2\Shortstack{. . . .}\left\{\mathbbf{x}\right\}^\preceq\Shortstack{. . . .}_{v_0}$: Suppose $b^*a^*c^*y^*y'c'a'b'$ is not reduced and that $\mathbbf{v}_{z_1} = \mathbbf{v}_{z'_1}$; consider the decomposition from \eqref{2}. If $(\mathbbf{v}_{z_1}, v_0) \notin E$, then as in the $m + m' = 2\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq \Shortstack{. . . .}_{v_0}$ case, apply Lemma \ref{X1crossterms} to the first term on the right-hand side of \eqref{2} and apply the inductive hypothesis on $\Shortstack{. . . .} \left\{\mathbbf{x}\right\}^\preceq\Shortstack{. . . .}_{v_0}$ to the second term. If $(\mathbbf{v}_{z_1}, v_0) \in E$, apply the inductive hypothesis on $m + m'$ to both terms on the right-hand side of \eqref{2}.\qedhere
\end{itemize}
\end{itemize}
\end{proof}
\subsection{A Stinespring construction for concatenation}\label{stine}
The goal of this subsection is to show the following generalization of Schwarz's Inequality.
\begin{prop}\label{schwarz}
Let $X \subset \mathcal{W}_\text{red}\cup \left\{1\right\}$ be a complete set, and assume that for every function $\xi: X \rightarrow \mathcal{H}$, \eqref{goal} holds. For $1 \leq i \leq N$, let $b_i, c_i, b_ic_i \in X$. Then we have the following matrix inequality. \[ \left[ \Theta(b_i^*c_i^*c_jb_j)\right]_{ij} \geq \left[\Theta(b_i^*)\Theta(c_i^*c_j)\Theta(b_j)\right]_{ij}\]\end{prop}
\noindent We will prove Proposition \ref{schwarz} by making use of a Stinespring construction for (left-hand) concatenation. Consider $\mathbb{C}^{|X|}$ with standard basis $\left\{e_x\right\}_{x\in X}$. The inequality \eqref{goal} implies that we can define a positive semi-definite sesquilinear form on $\mathcal{H} \otimes \mathbb{C}^{|X|}$ given by \[ \langle \xi \otimes e_y | \eta \otimes e_x\rangle = \langle \Theta(x^*y) \xi | \eta\rangle.\] By standard arguments this yields a Hilbert space that we will denote by $\mathcal{H} \otimes_\Theta \mathbb{C}^{|X|}$. For each $x \in X$ let $V_x: \mathcal{H} \rightarrow \mathcal{H}\otimes_\Theta \mathbb{C}^{|X|}$ be given by $V_x(\xi) = \xi \otimes_\Theta e_x$. Observe that $V_1$ is an isometry:
\begin{align*}
||V_1 \xi||^2_{\mathcal{H}\otimes_\Theta \mathbb{C}^{|X|}} &= \langle \xi \otimes_\Theta e_1 | \xi \otimes_\Theta e_1\rangle \\
& = \langle \Theta(1) \xi | \xi\rangle\\
&= ||\xi||^2_\mathcal{H}.
\end{align*}
Given $x \in X$ with $|x| = 1$, we define the \emph{left-concatenation operator} $L_x: \mathcal{H}\otimes_\Theta \mathbb{C}^{|X|} \rightarrow \mathcal{H} \otimes_\Theta \mathbb{C}^{|X|}$ as follows. \[L_x(\xi \otimes_\Theta e_y) = \left\{\begin{array}{lcr}
0 & \text{if} & xy \notin X \\
&&\\
\xi \otimes_\Theta e_{xy} & \text{if} & xy \in X
\end{array}\right.\]
\begin{prop}\label{Lxbdd}
Let $X \subset \mathcal{W}_\text{red}\cup \left\{1\right\}$ be a complete set, and assume that for every function $\xi: X \rightarrow \mathcal{H}$, \eqref{goal} holds. Given $x \in X$ with $|x|=1$, the left-concatenation operator $L_x$ is bounded.
\end{prop}
\noindent Proposition \ref{Lxbdd} is all we need to prove Proposition \ref{schwarz}:
\begin{proof}[Proof of Proposition \ref{schwarz}]
Given $a = a_1 \cdots a_m \in X$, Proposition \ref{Lxbdd} provides that the corresponding left-concatenation operator $L_a := L_{a_1}\cdots L_{a_m}$ is bounded. Evidently, given $x,y \in X,$ \[\Theta(x^*y) = V_1^*L_x^*L_yV_1.\] Thus we have
\begin{align*}
&\left[ \Theta(b_i^*c_i^*c_jb_j) \right]_{ij} \\
& = \left[V_1^*L_{b_i}^*L_{c_i}^*L_{c_j}L_{b_j}V_1 \right]_{ij}\\
&=\text{Dg}(V_1^*L_{b_i}^*) \left[
\begin{matrix}
L_{c_1}^*\\
\vdots \\
L_{c_N}^*
\end{matrix}\right]
\left[\begin{matrix}
L_{c_1} & \cdots & L_{c_N}
\end{matrix}\right]
\text{Dg}(L_{b_j}V_1)\\
&\geq \text{Dg}(V_1^*L_{b_i}^*) \left(\text{Dg}(V_1V_1^*)\left[
\begin{matrix}
L_{c_1}^*\\
\vdots \\
L_{c_N}^*
\end{matrix}\right]
\left[\begin{matrix}
L_{c_1} & \cdots & L_{c_N}
\end{matrix}\right]
\text{Dg}(V_1V_1^*)\right) \text{Dg}(L_{b_j}V_1)\\
&= \left[V_1^*L_{b_i}^*V_1V_1^*L_{c_i}^*L_{c_j}V_1V_1^*L_{b_j}V_1\right]_{ij}\\
&= \left[\Theta(b_i^*)\Theta(c_i^*c_j)\Theta(b_j)\right]_{ij}
\end{align*}
where $\text{Dg}(\bullet)$ denotes the $N \times N$ diagonal matrix with diagonal entries given by $\bullet$.
\end{proof}
\noindent Thus we have reduced the goal of the current subsection to proving Proposition \ref{Lxbdd}. We accomplish this by making one last reduction. The following technical lemma can be used to prove Proposition \ref{Lxbdd}.
\begin{lem}\label{techlem}
Let $X \subset \mathcal{W}_\text{red}\cup \left\{1\right\}$ be a complete set with $|X|\geq 2$, and assume that for every function $\xi: X \rightarrow \mathcal{H}$, \eqref{goal} holds. Let $(v_0) \in \mathbbf{v}_X$ and let $y \in X$ be such that $(v_0)\cdot\mathbbf{v}_y \in \mathbbf{v}_X$. For \emph{any} $a \in \mathcal{A}_{v_0}$, \[\Theta(y^*a^*ay) \geq \Theta(y^*)\Theta(a^*a)\Theta(y) = \Theta(y^*)\theta_{v_0}(a^*a)\Theta(y) \geq 0.\]
\end{lem}
\begin{proof}[Proof of Proposition \ref{Lxbdd}]
Let $x \in X$ be such that $|x| = 1$, and let $y \in X$ be such that $xy \in X$. We have that $||x^*x|| - x^*x \geq 0$, so there is some $a \in \mathcal{A}_{\mathbbf{v}_x}$ such that $a^*a = ||x^*x||-x^*x$. Then by Lemma \ref{techlem},
\[
\Theta(y^*(||x^*x||-x^*x)y) = \Theta (y^*a^*ay) \geq \Theta(y^*)\Theta(a^*a)\Theta(y) \geq 0.\]
Thus
\begin{align*}
||L_x(\xi \otimes_\Theta e_y)||^2_{\mathcal{H}\otimes_\Theta \mathbb{C}^{|X|}} &= ||\xi \otimes_\Theta e_{xy}||^2_{\mathcal{H}\otimes_\Theta \mathbb{C}^{|X|}}\\
& = \langle \Theta(y^*x^*xy)\xi | \xi\rangle \\
& \leq \langle \Theta(y^*||x^*x||y) \xi | \xi \rangle\\
& = ||x||^2 ||\xi \otimes_\Theta e_y||^2_{\mathcal{H}\otimes_\Theta \mathbb{C}^{|X|}}.\qedhere
\end{align*}
\end{proof}
\begin{proof}[Proof of Lemma \ref{techlem}]
We proceed by induction on $|X|$.
\begin{itemize}
\item $|X| =2$: Then $X = \left\{1, a\right\}$, and for $y$ to satisfy the hypothesis, $y =1$. So the statement holds trivially.
\item $|X| > 2$: We induct further on $|y|$.
\begin{itemize}
\item[$\bullet\bullet$] $|y| = 0$: Trivial.
\item[$\bullet\bullet$] $|y| >0$: Let $y = y_1\cdots y_m$ so that $y_j \in \mathring{\mathcal{A}}_{v_j}, 1 \leq j \leq m$. If for every $1 \leq j \leq m, (v_0,v_j) \in E$, then \[\Theta(y^*a^*ay) = \theta_{v_0}(a^*a)\Theta(y^*y).\] Consider the complete set $X' := \left\{y\right\}^\preceq$. Since $\left\{1\right\} \subsetneq X' \subsetneq X$, we have that $X'$ is a complete set with $|X'|\geq 2$ such that for every function $\xi: X' \rightarrow \mathcal{H}$, \eqref{goal} holds. By the inductive hypothesis on the cardinality of the complete set and the proofs of Propositions \ref{Lxbdd} and \ref{schwarz}, we have that \[\Theta(y^*y) \geq \Theta(y^*)\Theta(y).\] Because $\theta_{v_0}(a^*a)$ is positive and $\theta_{v_0}(a^*a)$ and $\Theta(y^*y) - \Theta(y^*)\Theta(y)$ commute, we have that
\begin{align*}
\Theta(y^*a^*ay) &= \theta_{v_0}(a^*a)\Theta(y^*y)\\
&\geq \theta_{v_0}(a^*a)\Theta(y^*)\Theta(y) \\
&= \Theta(y^*)\theta_{v_0}(a^*a)\Theta(y).
\end{align*}
If there exists $1 \leq j \leq m$ such that $(v_0,v_j) \notin E$, let $1 \leq J \leq m$ be the largest index (among all equivalent permutations) such that $v_J = v_j$. Consider
\begin{align*}
\hspace{.8cm}\Theta(y_m^* &\cdots y_1^*a^*ay_1 \cdots y_m)\\
\hspace{.8cm} = \Theta(y_m^* \cdots y_1^* (\mathring{a^*a})y_1\cdots y_m) &+ \varphi_{v_0}(a^*a)\Theta(y_m^*\cdots y_1^*y_1\cdots y_m).
\end{align*}
Notice that $\Shortstack{. . . .} (\mathring{a^*a})y_1\cdots y_J\Shortstack{. . . .}_{v_J} > \Shortstack{. . . .} y_1\cdots y_J\Shortstack{. . . .}_{v_J}$. Since we chose the largest possible $J$, \[(\mathring{a^*a})y_1\cdots y_{J-1} (y_J)(y_{J+1} \cdots y_m)\] is in standard form with respect to $v_J$. So applying Lemma \ref{X1crossterms} twice, we get that
\begin{align*}
& \Theta(y_m^* \cdots y_1^*a^*ay_1 \cdots y_m)\\
& = \Theta(y_m^*\cdots y_J^*)\Theta(y_{J-1}^*\cdots y_1^*(\mathring{a^*a})y_1\cdots y_{J-1})\Theta(y_J \cdots y_m)\\
&+ \varphi_{v_0}(a^*a) \Theta(y_m^*\cdots y_J^* y_{J-1}^*\cdots y_1^*y_1\cdots y_{J-1} y_J \cdots y_m).
\end{align*}
By the same inductive argument as in the commuting case, the generalized Schwarz's Inequality for the strictly smaller complete set $\left\{y\right\}^\preceq$ gives
\begin{align}
\hspace{.6cm}&\Theta(y_m^*\cdots y_J^*)\Theta(y_{J-1}^*\cdots y_1^*(\mathring{a^*a})y_1\cdots y_{J-1})\Theta(y_J \cdots y_m)\notag\\
\hspace{.6cm} &+ \varphi_{v_0}(a^*a) \Theta(y_m^*\cdots y_J^* y_{J-1}^*\cdots y_1^*y_1\cdots y_{J-1} y_J \cdots y_m)\notag\\
\hspace{.6cm}&\geq \Theta(y_m^*\cdots y_J^*)\Theta(y_{J-1}^*\cdots y_1^*(\mathring{a^*a})y_1\cdots y_{J-1})\Theta(y_J \cdots y_m)\notag\\
\hspace{.6cm}& + \varphi_{v_0}(a^*a) \Theta(y_m^*\cdots y_J^*)\Theta(y_{J-1}^*\cdots y_1^*y_1\cdots y_{J-1})\Theta(y_J \cdots y_m)\notag\\
\hspace{.6cm}&= \Theta(y_m^* \cdots y_J^*)\Theta(y_{J-1}^*\cdots y_1^* a^*a y_1 \cdots y_{J-1})\Theta(y_J \cdots y_m)\notag\\
\hspace{.6cm}&\geq \Theta(y_m^* \cdots y_J^*)\Theta(y_{J-1}^*\cdots y_1^*)\theta_{v_0}(a^*a)\Theta(y_1\cdots y_{J-1})\Theta(y_J\cdots y_m) \label{3}\\
\hspace{.6cm}& = \Theta(y_m^*\cdots y_1^*)\theta_{v_0}(a^*a)\Theta(y_1\cdots y_m)\notag
\end{align}
where \eqref{3} follows from the inductive hypothesis on $|y|$.\qedhere
\end{itemize}
\end{itemize}
\end{proof}
We use our generalized Schwarz's Inequality to prove the following lemma.
\begin{lem}\label{Y1square}
Let $\left\{ x_i\right\}_{i=1}^N \in (\mathcal{W}_\text{red}\cup \left\{1\right\})^N$ be a finite sequence such that for every $1 \leq i \leq N,$ we have $v_0 \in \mathbbf{v}_{x_i}$. For each $1 \leq i \leq N$, let $x_i = y_ic_ia_ib_i$ be in standard form with respect to $v_0$ ($a_i \in \mathring{\mathcal{A}}_{v_0}$). Assume the following.
\begin{enumerate}
\item For every $1 \leq i,j \leq N, \mathbbf{v}_{y_i} = \mathbbf{v}_{y_j}$;
\item\label{sc} For every complete set $X \subsetneq (\left\{ x_i\right\}_{i=1}^N)^\preceq$ and any function $\xi: X \rightarrow \mathcal{H}$, \eqref{goal} holds.
\end{enumerate}
Then \[\left[\Theta(x_i^*x_j)\right]_{ij} \geq \left[\Theta(b_i^*a_i^*)\Theta(c_i^*y_i^*y_jc_j)\Theta(a_jb_j)\right]_{ij}.\]
\end{lem}
\begin{proof}\hspace*{\fill}
\begin{itemize}
\item First suppose $\Shortstack{. . . .} (\left\{x_i\right\}_{i=1}^N)^\preceq\Shortstack{. . . .}_{v_0} = 0$. Then for every $1 \leq i \leq N, y_i = 1$. So $x_i = c_ia_ib_i$. Standard form implies that for each $1 \leq i \leq N, c_i$ commutes with $a_i$. Thus,
\begin{align}
&\Theta(b_i^*a_i^*c_i^*c_ja_jb_j) \notag\\
&=\Theta(b_i^*a_i^*a_jc_i^*c_jb_j)&\notag\\
&= \Theta(b_i^*(\mathring{a_i^*a_j})c_i^*c_jb_j) + \varphi_{v_0}(a_i^*a_j) \Theta(b_i^*c_i^*c_jb_j)\notag\\
&= \Theta(b_i^*(\mathring{a_i^*a_j}))\Theta(c_i^*c_jb_j) + \varphi_{v_0}(a_i^*a_j)\Theta(b_i^*c_i^*c_jb_j) \label{4}\\
&=\Theta(b_i^*)\Theta(\mathring{a_i^*a_j})\Theta(c_i^*c_jb_j) + \varphi_{v_0}(a_i^*a_j)\Theta(b_i^*c_i^*c_jb_j)\notag\\
&=\Theta(b_i^*)\Theta((\mathring{a_i^*a_j})c_i^*c_jb_j) + \varphi_{v_0}(a_i^*a_j)\Theta(b_i^*c_i^*c_jb_j)\notag\\
&=\Theta(b_i^*)\Theta(c_i^*c_j(\mathring{a_i^*a_j})b_j) + \varphi_{v_0}(a_i^*a_j)\Theta(b_i^*c_i^*c_jb_j)\notag\\
&=\Theta(b_i^*)\Theta(c_i^*c_j)\Theta((\mathring{a_i^*a_j})b_j) + \varphi_{v_0}(a_i^*a_j)\Theta(b_i^*c_i^*c_jb_j)\label{5}\\
&=\Theta(b_i^*)\Theta(c_i^*c_j)\Theta(\mathring{a_i^*a_j})\Theta(b_j) + \varphi_{v_0}(a_i^*a_j)\Theta(b_i^*c_i^*c_jb_j)\notag
\end{align}
where \eqref{4} and \eqref{5} follow from Lemma \ref{X1crossterms}. Now, $\left\{c_ib_i\right\}_{i=1}^N$ is a sequence of elements from a complete set $X$ strictly contained in $(\left\{x_i\right\}_{i=1}^N)^\preceq$. So by assumption \eqref{sc}, we have that \eqref{goal} holds for $X$, and so by Proposition \ref{schwarz}, we have \[\left[\Theta(b_i^*c_i^*c_jb_j)\right]_{ij} \geq \left[\Theta(b_i^*)\Theta(c_i^*c_j)\Theta(b_j)\right]_{ij}.\] And since $[\varphi_{v_0}(a_i^*a_j)]_{ij}$ is positive and the $\varphi_{v_0}(a_i^*a_j)$'s are central, we have by Lemma IV.4.24 in \cite{takesaki} that \[\left[\varphi_{v_0}(a_i^*a_j)\Theta(b_i^*c_i^*c_jb_j)\right]_{ij} \geq \left[\varphi_{v_0}(a_i^*a_j)\Theta(b_i^*)\Theta(c_i^*c_j)\Theta(b_j)\right]_{ij}.\] Also, we have that $[\Theta(c_i^*c_j)]_{ij} \geq 0$ by \eqref{sc} and Proposition \ref{schwarz}, and again by the classical version of Schwarz's Inequality,
\begin{align*}
\left[\Theta(a_i^*a_j)\right]_{ij} & = \left[\theta_{v_0}(a_i^*a_j)\right]_{ij}\\
&\geq \left[\theta_{v_0}(a_i^*)\theta_{v_0}(a_j)\right]_{ij}\\
&= \left[\Theta(a_i^*)\Theta(a_j)\right]_{ij}.
\end{align*}
Since the $\Theta(c_i^*c_j)$'s commute with the $\Theta(a_i^*a_j)$'s and the $\Theta(a_i^*)\Theta(a_j)$'s, again by \cite{takesaki},
\[\left[\Theta(c_i^*c_j)\Theta(a_i^*a_j)\right]_{ij} \geq \left[\Theta(c_i^*c_j)\Theta(a_i^*)\Theta(a_j)\right]_{ij}.\]
Thus we have
\begin{align*}
&\left[\Theta(x_i^*x_j)\right]_{ij} \\
&=\left[ \Theta(b_i^*)\Theta(c_i^*c_j)\Theta(\mathring{a_i^*a_j})\Theta(b_j)\right]_{ij} + \left[ \varphi_{v_0}(a_i^*a_j)\Theta(b_i^*c_i^*c_jb_j)\right]_{ij}\\
&\geq \left[ \Theta(b_i^*)\Theta(c_i^*c_j)\Theta(\mathring{a_i^*a_j})\Theta(b_j)\right]_{ij} + \left[ \varphi_{v_0}(a_i^*a_j)\Theta(b_i^*)\Theta(c_i^*c_j)\Theta(b_j)\right]_{ij}\\
& = \left[ \Theta(b_i^*)\Big(\Theta(c_i^*c_j)\Theta(a_i^*a_j)\Big)\Theta(b_j)\right]_{ij}\\
& \geq \left[ \Theta(b_i^*)\Big(\Theta(c_i^*c_j)\Theta(a_i^*)\Theta(a_j)\Big)\Theta(b_j)\right]_{ij}\\
& = \left[ \Theta(b_i^*)\Theta(a_i^*)\Theta(c_i^*c_j)\Theta(a_j)\Theta(b_j)\right]_{ij} \\
& = \left[ \Theta(b_i^*a_i^*)\Theta(c_i^*c_j)\Theta(a_jb_j) \right]_{ij}.
\end{align*}
\item Now suppose that $|\!|\!| (\left\{x_i\right\}_{i=1}^N)^\preceq |\!|\!|_{v_0} >0$. Say that $y_i = y_1(i) \cdots y_m(i)$. Observe that
\begin{align}
&\Theta(b_i^*a_i^*c_i^*y_i^*y_jc_ja_jb_j) \notag\\
&=\Theta(b_i^*a_i^*c_i^*y_m(i)^*\cdots y_2(i)^*(\mathring{y_1(i)^*y_1(j)})y_2(j)\cdots y_m(j)c_ja_jb_j)\notag \\
& + \varphi_{\mathbbf{v}_{y_1(1)}}(y_1(i)^*y_1(j))\Theta(b_i^*a_i^*c_i^*y_m(i)^*\cdots y_2(i)^*y_2(j)\cdots y_m(j) c_j a_j b_j)\notag\\
&= \Theta(b_i^*a_i^*)\Theta(c_i^*y_m(i)^*\cdots y_2(i)^*(\mathring{y_1(i)^*y_1(j)})y_2(j)\cdots y_m(j)c_j)\Theta(a_jb_j)\label{6} \\
& + \varphi_{\mathbbf{v}_{y_1(1)}}(y_1(i)^*y_1(j))\Theta(b_i^*a_i^*c_i^*y_m(i)^*\cdots y_2(i)^*y_2(j)\cdots y_m(j) c_j a_j b_j)\notag
\end{align}
where \eqref{6} follows from Lemma \ref{Y1crossterms}. Since $(\left\{ y_2(i)\cdots y_m(i)c_ia_ib_i\right\}_{i=1}^N)^\preceq$ is a strictly smaller complete set, assumption \eqref{sc} combined with Proposition \ref{schwarz} and \cite{takesaki} gives that
\begin{small}
\begin{align*}
\hspace{1cm}&\left[\varphi_{\mathbbf{v}_{y_1(1)}}(y_1(i)^*y_1(j))\Theta(b_i^*a_i^*c_i^*y_m(i)^*\cdots y_2(i)^*y_2(j)\cdots y_m(j) c_j a_j b_j)\right]_{ij}\\
\hspace{1cm}&\geq \left[\varphi_{\mathbbf{v}_{y_1(1)}}(y_1(i)^*y_1(j))\Theta(b_i^*a_i^*)\Theta(c_i^*y_m(i)^*\cdots y_2(i)^*y_2(j)\cdots y_m(j) c_j)\Theta(a_j b_j)\right]_{ij}.
\end{align*}
\end{small}
The desired inequality follows.\qedhere
\end{itemize}
\end{proof}
\subsection{Proof of the Main Theorem}
We are now adequately prepared to prove Theorem \ref{mainthm}.
\begin{proof}[Proof of Theorem \ref{mainthm}]
It will suffice to show that $\Theta$ is completely positive on the linear span of $\mathcal{W}_\text{red} \cup \left\{1\right\}$. Indeed, Proposition 2.1 of \cite{paulsen} would then give that $\Theta$ is bounded and thus extends by continuity to a completely positive map on $\bigstar_\Gamma \mathcal{A}_v$.
As discussed above, this problem reduces to showing that given a complete set $X \subseteq \mathcal{W}_\text{red}\cup \left\{1\right\}$ and any function $\xi: X \rightarrow \mathcal{H}$ the inequality \eqref{goal} holds. We proceed by induction on $|X|$.
\begin{itemize}
\item $|X| =1$: Trivial.
\item $|X| \geq 2$: Let $(v_0) \in \mathbbf{v}_X$. Put \[X_1:=\left\{ x \in X \,\Big|\, |\!|\!| \left\{x\right\}^\preceq |\!|\!|_{v_0} = |\!|\!| X |\!|\!|_{v_0}\right\},\] and let $x_0 \in X_1$ be an element of longest length in $X_1$. Say that $x_0 = y_0c_0a_0b_0$ is in standard form with respect to $v_0$ (and so $a_0 \in \mathring{\mathcal{A}}_{v_0}$). Define \[Y_1 := \left\{x \in X_1 \,\big|\, \text{ in standard form } x = ycab \,(a \in \mathring{\mathcal{A}}_{v_0}), \mathbbf{v}_y = \mathbbf{v}_{y_0}\right\}.\] Note the following decomposition.
\begin{align*}
&\sum_{x,y \in X} \langle \Theta(x^*y)\xi(y) | \xi(x)\rangle \\
& =\sum_{w,z \in X\setminus Y_1} \langle \Theta(w^*z)\xi(z) | \xi(w)\rangle\\
& + \sum_{x, x' \in Y_1} \langle \Theta(x^*x')\xi(x') | \xi(x)\rangle\\
& + \sum_{x \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle \Theta(x^*z)\xi(z) | \xi(x)\rangle.
\end{align*}
Consider $X \setminus Y_1 \subset (X\setminus Y_1)^\preceq$. By our choice of $x_0$, we have that $x_0 \notin (X\setminus Y_1)^\preceq$, so the inductive hypothesis on $|X|$ applies to the strictly smaller complete set $(X\setminus Y_1)^\preceq$. By the discussion in \S\S\ref{stine}, there is a Hilbert space $\mathcal{K}$ and operators $V_w \in B(\mathcal{H},\mathcal{K})$ for every $w \in X\setminus Y_1$ such that $V_w^*V_z = \Theta(w^*z)$ for every $w,z \in X \setminus Y_1$.
For $x, x' \in Y_1$, let $x = ycab$ and $x' = y'c'a'b'$ be their standard forms with respect to $v_0$. By Lemmas \ref{X1crossterms} and \ref{Y1crossterms}, we have that
\begin{align*}
&\sum_{x \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle \Theta(x^*z)\xi(z) | \xi(x)\rangle\\
&= \sum_{ycab \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle \Theta(b^*a^*)\Theta(c^*y^*z)\xi(z) | \xi(ycab)\rangle\\
&= \sum_{ycab \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle V_z\xi(z) | V_{yc}\Theta(ab)\xi(ycab)\rangle.
\end{align*}
By Lemma \ref{Y1square}, we have that
\begin{align*}
&\sum_{x, x' \in Y_1} \langle \Theta(x^*x')\xi(x') | \xi(x)\rangle\\
& \geq \sum_{x=ycab, x'=y'c'a'b' \in Y_1} \langle \Theta(b^*a^*)\Theta(c^*y^*y'c')\Theta(a'b')\xi(y'c'a'b') | \xi(ycab)\rangle\\
&= \sum_{ycab, y'c'a'b' \in Y_1} \langle V_{y'c'}\Theta(a'b')\xi(y'c'a'b') | V_{yc}\Theta(ab)\xi(ycab)\rangle\\
&= \Big|\Big|\sum_{ycab \in Y_1} V_{yc}\Theta(ab)\xi(ycab)\Big|\Big|^2.
\end{align*}
We also have
\begin{align*}
\sum_{w,z \in X\setminus Y_1} \langle \Theta(w^*z)\xi(z) | \xi(w)\rangle &= \sum_{w,z \in X\setminus Y_1} \langle V_w^*V_z \xi(z) | \xi(w)\rangle\\
& = \sum_{w,z \in X\setminus Y_1} \langle V_z \xi(z) | V_w\xi(w)\rangle\\
&= \Big|\Big|\sum_{w \in X\setminus Y_1} V_w \xi(w)\Big|\Big|^2
\end{align*}
Thus we have
\begin{align*}
&\sum_{x,y \in X} \langle \Theta(x^*y)\xi(y) | \xi(x)\rangle \\
& =\sum_{w,z \in X\setminus Y_1} \langle \Theta(w^*z)\xi(z) | \xi(w)\rangle + \sum_{x, x' \in Y_1} \langle \Theta(x^*x')\xi(x') | \xi(x)\rangle \\
&+ \sum_{x \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle \Theta(x^*z)\xi(z) | \xi(x)\rangle\\
&\geq \Big|\Big|\sum_{w \in X\setminus Y_1} V_w \xi(w)\Big|\Big|^2 + \Big|\Big|\sum_{x=ycab \in Y_1} V_{yc}\Theta(ab)\xi(ycab)\Big|\Big|^2 \\
&+ \sum_{x=ycab \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle V_z\xi(z) | V_{yc}\Theta(ab)\xi(ycab)\rangle\\
&= \Big|\Big|\sum_{w \in X \setminus Y_1} V_w \xi(w) + \sum_{x=ycab \in Y_1} V_{yc}\Theta(ab) \xi(ycab)\Big|\Big|^2\\
&\geq 0. \qedhere
\end{align*}
\end{itemize}
\end{proof}
\subsection{Tensor Product Example}\label{tpe} Due to the technical nature of the above proof, it is illustrative to write out the case where $\Gamma$ is a complete graph. This gives a new combinatorial proof of the fact that the tensor product of ucp maps on the maximal tensor product of unital $C^*$-algebras is ucp.
Let $\mathcal{A}_v, \varphi_v, \theta_v, \mathcal{B}\subset B(\mathcal{H})$ be as in the statement of Theorem \ref{mainthm}, and suppose that $\Gamma$ is a complete graph. Let $\Theta:= \bigstar_\Gamma \theta_v$. We wish to show that for any complete set $X \subset \mathcal{W}_\text{red}\cup\left\{1\right\}$ and any function $\xi: X \rightarrow \mathcal{H}$ we have the following inequality. \[\sum_{x,y \in X} \langle \Theta(x^*y)\xi(y) | \xi(x)\rangle \geq 0\] Let $v_0 \in V$ be such that $(v_0) \in \mathbbf{v}_X$. We proceed by induction on $|X|$. The base case is again trivial. Following the definitions in the proof of Theorem \ref{mainthm}, we have \[X_1 = Y_1 = \left\{ x \in X | v_0 \in \mathbbf{v}_x\right\};\] furthermore, for any $x \in Y_1, \mathbbf{v}_x = (\cdots, v_0)$ because $\Gamma$ is complete. So for any $x \in Y_1,$ we can write $x$ in standard form with respect to $v_0$ as follows.
\begin{align}
x = ca \text{ where } &a \in \mathring{\mathcal{A}}_{v_0} \text{ and } v_0 \notin \mathbbf{v}_{c} \label{deco}
\end{align} Again, consider the decomposition given by
\begin{align*}
&\sum_{x,y \in X} \langle \Theta(x^*y)\xi(y) | \xi(x)\rangle\\
& =\sum_{w,z \in X\setminus Y_1} \langle \Theta(w^*z)\xi(z) | \xi(w)\rangle \\
&+ \sum_{x, x' \in Y_1} \langle \Theta(x^*x')\xi(x') | \xi(x)\rangle \\
&+ \sum_{x \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle \Theta(x^*z)\xi(z) | \xi(x)\rangle.
\end{align*}
As before, we have
\begin{align*}
\sum_{w,z \in X\setminus Y_1} \langle \Theta(w^*z)\xi(z) | \xi(w)\rangle &= \sum_{w,z \in X \setminus Y_1} \langle V_w^*V_z \xi(z) | \xi(w) \rangle\\
&= \Big|\Big| \sum_{w \in X \setminus Y_1} V_w \xi(w)\Big|\Big|^2.
\end{align*}
By \eqref{deco}, it is clear that
\begin{align*}
\sum_{x \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle \Theta(x^*z)\xi(z) | \xi(x)\rangle & = \sum_{x=ca \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle \Theta(a^*c^*z)\xi(z) | \xi(ca)\rangle\\
&= \sum_{ca \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle V_z\xi(z) | V_c \Theta(a) \xi(ca)\rangle.
\end{align*}
Lastly we have
\begin{align}
\sum_{x, x' \in Y_1} \langle \Theta(x^*x')\xi(x') | \xi(x)\rangle & = \sum_{x=ca, x'=c'a' \in Y_1} \langle \Theta(a^*c^*c'a') \xi(c'a') | \xi(ca)\rangle \notag\\
& = \sum_{ca,c'a' \in Y_1} \langle \Theta(a^*a')\Theta(c^*c') \xi(c'a') | \xi(ca)\rangle \label{11}\\
& \geq \sum_{ca,c'a' \in Y_1} \langle \Theta(a^*)\Theta(a')\Theta(c^*c') \xi(c'a') | \xi(ca)\rangle \label{12}\\
&= \Big|\Big|\sum_{ca \in Y_1} V_c\Theta(a) \xi(ca)\Big|\Big|^2 \notag
\end{align}
where \eqref{11} follows from the fact that $\Gamma$ is complete, and \eqref{12} follows from the classical Schwarz Inequality applied to the ucp map $\theta_{v_0}$ combined with Lemma IV.4.24 in \cite{takesaki}. Combining these observations yields
\begin{align*}
&\sum_{x,y \in X} \langle \Theta(x^*y)\xi(y) | \xi(x)\rangle\\
&\geq \Big|\Big| \sum_{w \in X \setminus Y_1} V_w \xi(w)\Big|\Big|^2 + \Big|\Big|\sum_{ca \in Y_1} V_c\Theta(a) \xi(ca)\Big|\Big|^2 \\
&+ \sum_{ca \in Y_1, z \in X \setminus Y_1} 2\mathfrak{Re} \langle V_z\xi(z) | V_c \Theta(a) \xi(ca)\rangle\\
&= \Big|\Big|\sum_{w \in X \setminus Y_1} V_w \xi(w) + \sum_{ca \in Y_1} V_c\Theta(a) \xi(ca)\Big|\Big|^2\\
&\geq 0.
\end{align*}
\section{Consequences}\label{cons}
\subsection{Reduced version}
We record the graph product version of Proposition 2.1 in \cite{choda}. As in the amalgamated free product case, this result follows directly from Theorem \ref{mainthm}. It should be noted that although the reduced version follows directly from Boca's result in the amalgamated free product setting, Choda's approach explicitly constructs a dilation on a Hilbert space containing the free product Hilbert space. We present the graph product version as a direct corollary to Theorem \ref{mainthm}, but it is not unreasonable to expect that one can give a graph product adaptation of Choda's proof.
\begin{cor}\label{choda}
Let $\Gamma = (V, E)$ be a graph, and for each $v\in V$ let $\mathcal{A}_v$ and $\mathcal{B}_v$ be unital $C^*$-algebras with states $\varphi_v \in S(\mathcal{A}_v)$ and $\psi_v \in S(\mathcal{B}_v)$. For each $v \in V$ let $\theta_v: \mathcal{A}_v \rightarrow \mathcal{B}_v$ be a unital completely positive map with $\psi_v \circ \theta_v = \varphi_v$. Then there exists a unital completely positive map $\bigstar_\Gamma \theta_v: \bigstar_\Gamma \mathcal{A}_v \rightarrow \bigstar_\Gamma (\mathcal{B}_v, \psi_v)$ such that
\begin{enumerate}
\item\label{A} $\bigstar_\Gamma \psi_v \circ \bigstar_\Gamma \theta_v = \bigstar_\Gamma \varphi_v$;
\item $\bigstar_\Gamma \theta_v (a_1\cdots a_n) = \theta_{v_1}(a_1)\cdots \theta_{v_n}(a_n)$ for $a_j \in \mathring{\mathcal{A}}_{v_j}, (v_1,\dots,v_n) \in \mathcal{W}_\text{red}$.
\end{enumerate}
\end{cor}
\begin{proof}
Take $\bigstar_\Gamma \theta_v$ to be the graph product ucp map as in Theorem \ref{mainthm}, defined with respect to the states $\varphi_v$. Part \eqref{A} follows from Lemma \ref{stres}.
\end{proof}
\subsection{Graph products of positive-definite functions}
We show here that the graph product of positive-definite functions is positive-definite itself. This is a graph product version of Theorem 7.1 in \cite{boz}.
\begin{dfn}
Let $G$ be a group and $\mathcal{H}$ be a Hilbert space. A function $f: G \rightarrow B(\mathcal{H})$ is \emph{positive-definite} if for every finite subset $\left\{g_1,\dots, g_n\right\} \subset G$, the $n\times n$ matrix \[\left[ f(g_i^{-1}g_j)\right]_{ij}\] is positive.
\end{dfn}
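For a simple illustration (a standard example, not needed in what follows): every unitary representation $\pi: G \rightarrow B(\mathcal{H})$ yields a positive-definite function $f(g):=\pi(g)$, since with the row operator $R := \begin{bmatrix} \pi(g_1) & \cdots & \pi(g_n)\end{bmatrix}$ we have
\[\left[ f(g_i^{-1}g_j)\right]_{ij} = \left[ \pi(g_i)^*\pi(g_j)\right]_{ij} = R^*R \geq 0.\]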
\begin{dfn}\label{gpposdef}
Let $\mathcal{H}$ be a Hilbert space, and for each $v \in V$, let $G_v$ be a group and $f_v: G_v \rightarrow B(\mathcal{H})$ be positive-definite with $f_v(e) = 1$. If $(v,v') \in E \Rightarrow f_v(G_v)$ and $f_{v'}(G_{v'})$ commute, then we define the graph product of the $f_v$'s, $\bigstar_\Gamma f_v: \bigstar_\Gamma G_v \rightarrow B(\mathcal{H})$, as follows.
\begin{enumerate}
\item $\bigstar_\Gamma f_v (e) = 1$;
\item if for $1 \leq k \leq n,\; g_k \in G_{v_k}\setminus\left\{e\right\}$ and $(v_1,\dots,v_n) \in \mathcal{W}_\text{red}$, then \[\bigstar_\Gamma f_v (g_1\cdots g_n) := f_{v_1}(g_1)\cdots f_{v_n}(g_n).\]
\end{enumerate}
\end{dfn}
It is well-known that there is a 1-1 correspondence between positive-definite functions $f:G \rightarrow B(\mathcal{H}), f(e) =1$ and ucp maps $\theta: C^*(G) \rightarrow B(\mathcal{H})$ in the following sense. If $u_g \in C^*(G)$ denotes the unitary corresponding to the group element $g \in G$, then
\begin{align*}
f &\rightarrow \theta_f(u_g) := f(g)\\
f_\theta(g):= \theta(u_g) &\leftarrow \theta.
\end{align*}
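For instance (a standard scalar-valued example, included only for illustration): for $G = \mathbb{Z}$ and a fixed $0 \leq r < 1$, the function $f(n) = r^{|n|}$ is positive-definite, being the Fourier coefficient sequence of the nonnegative Poisson kernel $P_r(\theta) = \sum_{m \in \mathbb{Z}} r^{|m|}e^{im\theta}$. Identifying $C^*(\mathbb{Z}) \cong C(\mathbb{T})$ via $u_n \leftrightarrow z^n$, the associated ucp map is the state
\[\theta_f(h) = \frac{1}{2\pi}\int_0^{2\pi} h(e^{i\theta})P_r(\theta)\, d\theta, \qquad h \in C(\mathbb{T}),\]
which indeed satisfies $\theta_f(u_n) = r^{|n|} = f(n)$.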
\begin{thm}\label{posdef}
Let $G_v, f_v$ and $\mathcal{H}$ be as in Definition \ref{gpposdef}. Then $\bigstar_\Gamma f_v$ is positive-definite.
\end{thm}
\begin{proof}
Let $\bigstar_\Gamma \theta_{f_v}$ be the graph product of the ucp maps on $C^*(G_v)$ corresponding to $f_v$ defined with respect to states given by the canonical traces (from the left-regular representation) on $C^*(G_v)$. By Theorem \ref{mainthm}, $\bigstar_\Gamma \theta_{f_v}$ is ucp. Then it is easy to check that $f_{\bigstar_\Gamma \theta_{f_v}} = \bigstar_\Gamma f_v$.
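Indeed, here is the brief check: if $g_1\cdots g_n$ is reduced with $g_k \in G_{v_k}\setminus\left\{e\right\}$ and $(v_1,\dots,v_n) \in \mathcal{W}_\text{red}$, then each $u_{g_k}$ has vanishing canonical trace, so $u_{g_1}\cdots u_{g_n}$ is a reduced word in the sense used above, and therefore \[ f_{\bigstar_\Gamma \theta_{f_v}}(g_1\cdots g_n) = \bigstar_\Gamma \theta_{f_v}(u_{g_1}\cdots u_{g_n}) = \theta_{f_{v_1}}(u_{g_1})\cdots \theta_{f_{v_n}}(u_{g_n}) = f_{v_1}(g_1)\cdots f_{v_n}(g_n) = \bigstar_\Gamma f_v(g_1\cdots g_n),\] where the second equality is the multiplicativity of the graph product map on reduced words (cf.\ Corollary \ref{choda}).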
\end{proof}
\subsection{Unitary dilation}
We conclude the paper with some results on unitary dilation in the graph product context. Consider the following version of the Sz.-Nagy-Foia\c{s} dilation theorem.
\begin{thm}\label{dil}
Let $\Gamma = (V,E)$ be a graph. Let $\mathcal{H}$ be a Hilbert space and $\left\{T_v \right\}_{v \in V} \subset B(\mathcal{H})$ be contractions such that if $(v,v') \in E$ then $T_v$ and $T_{v'}$ doubly commute ($[T_v,T_{v'}] = [T_v^*,T_{v'}] = 0$). Then there exist a Hilbert space $\mathcal{K}$ containing $\mathcal{H}$ and unitaries $U_v \in B(\mathcal{K})$ for each $v \in V$ such that for any polynomial $p \in \mathbb{C}\langle X_v \rangle_{v\in V}$ in $|V|$ non-commuting indeterminates we have \[ p(\left\{T_v\right\}_{v\in V}) = P_\mathcal{H} p(\left\{U_v\right\}_{v\in V}) |_\mathcal{H}.\]
\end{thm}
\begin{proof}
By Stinespring, we will be done if we obtain a ucp map $\Theta: \bigstar_\Gamma C^*(\mathbb{Z}) \rightarrow B(\mathcal{H})$ such that $\Theta(p(\left\{x_v\right\}_{v\in V})) = p(\left\{T_v\right\}_{v\in V})$. Indeed, let $U_v$ be the image of $x_v$ under the resulting Stinespring representation.
Define the ucp map $\theta_{v}$ on the $v^\text{th}$ copy of $C^*(\mathbb{Z})$ as follows. \[\theta_{v}(x_v^m) = \left\{ \begin{array}{lcr} T_v^m & \text{if} & m \geq 0\\ (T_v^*)^{-m} & \text{if} & m <0\end{array}\right.\] (This map is ucp by Sz.-Nagy's unitary dilation theorem.) Then the map $\Theta = \bigstar_\Gamma \theta_{v} : \bigstar_\Gamma C^*(\mathbb{Z}) = C^*(\bigstar_\Gamma \mathbb{Z}) \rightarrow B(\mathcal{H})$ defined with respect to the canonical trace on $C^*(\mathbb{Z})$ does the job.
\end{proof}
\begin{rmk}
It should be emphasized that the doubly commuting assumption is important for the above theorem. In particular, Op\v{e}la showed in Theorem 2.3 of \cite{opela} that if $\Gamma = (V, E)$ is a graph with $n \in \mathbb{N}$ vertices containing a cycle (a closed path of edges) then there are contractions $T_1, \dots, T_n$ such that if $(v_i,v_j) \in E$ then $[T_i,T_j] =0$ (not doubly commuting) with no simultaneous unitary dilation. On the other hand, if $\Gamma$ has no cycles, then plain (single) commutation relations according to $\Gamma$ can be dilated.
\end{rmk}
The following corollary is a graph product version of Theorem 8.1 of \cite{boz} and follows immediately from Theorem \ref{dil}. First a definition is in order.
\begin{dfn}
Given a graph $\Gamma = (V,E)$, let $\bigstar_\Gamma \mathbb{Z}$ denote the graph product group $\bigstar_\Gamma G_v$ where $G_v = \mathbb{Z}$ for every $v \in V$. This is the graph product analog of $\mathbb{F}_n$.
\end{dfn}
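To fix ideas (these two extreme cases are standard): if $\Gamma$ is a complete graph then $\bigstar_\Gamma \mathbb{Z} \cong \mathbb{Z}^{|V|}$, while if $\Gamma$ has no edges then $\bigstar_\Gamma \mathbb{Z} \cong \mathbb{F}_{|V|}$.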
\begin{cor}\label{vnineq}
Let $\Gamma = (V,E)$ be a graph. Let $\mathcal{H}$ be a Hilbert space and $\left\{T_v \right\}_{v \in V} \subset B(\mathcal{H})$ be contractions such that if $(v,v') \in E$ then $T_v$ and $T_{v'}$ doubly commute ($[T_v,T_{v'}] = [T_v^*,T_{v'}] = 0$). Let $p \in \mathbb{C}\langle X_v\rangle_{v\in V}$ be a polynomial in $|V|$ non-commuting indeterminates. Then \[||p(\left\{T_v\right\}_{v \in V})|| \leq ||p(\left\{x_v\right\}_{v\in V})||_{C^*(\bigstar_\Gamma \mathbb{Z})}\] where for each $v \in V$, $x_v$ denotes the unitary corresponding to the canonical generator of the $v^\text{th}$ copy of $\mathbb{Z}$.
\end{cor}
\begin{rmk}
Note that by the universality of $C^*(\mathbb{F}_{|V|})$ we have \[||p||_{C^*(\bigstar_\Gamma \mathbb{Z})} \leq ||p||_{C^*(\mathbb{F}_{|V|})}.\]
\end{rmk}
Lastly, we have a version of Theorem \ref{dil} viewed through the lens of non-commutative probability. The statement and proof are simple adaptations of the free versions presented in \cite{sacr}.
\begin{thm}\label{gpdilation}
Given a graph $\Gamma = (V, E)$ and $\Gamma$ independent contractions $\left\{T_v\right\}_{v \in V}$ in the noncommutative probability space $(B(\mathcal{H}), \varphi)$, there exist a Hilbert space $\mathcal{K}$ containing $\mathcal{H}$ and unitaries $\left\{U_v\right\}_{v\in V} \subset B(\mathcal{K})$ that are $\Gamma$ independent with respect to $\varphi \circ \text{Ad}(P_\mathcal{H})$ such that for any polynomial $p \in \mathbb{C}\langle X_v \rangle_{v\in V}$ in $|V|$ non-commuting indeterminates we have \[ p(\left\{T_v\right\}_{v\in V}) = P_\mathcal{H} p(\left\{U_v\right\}_{v\in V}) |_\mathcal{H}.\] Furthermore, this dilation is unique up to unitary equivalence if $\mathcal{K}$ is minimal.
\end{thm}
\begin{proof}
We use the same dilation as in Theorem \ref{dil}, letting $\pi: \bigstar_\Gamma C^*(\mathbb{Z}) \rightarrow B(\mathcal{K})$ denote the corresponding Stinespring representation; and for every $v \in V$ let $U_v = \pi(x_v)$, where $x_v$ is the unitary corresponding to the canonical generator of the $v^\text{th}$ copy of $\mathbb{Z}$. It remains to show the $\Gamma$ independence of $\left\{ U_v \right\}_{v\in V} \subset B(\mathcal{K})$ and uniqueness in the case that $\mathcal{K}$ is minimal.
To show that the random variables in $\left\{U_v\right\}_{v \in V}$ are $\Gamma$ independent with respect to $\varphi \circ \text{Ad}(P_\mathcal{H})$, let $a = a_1\cdots a_m$ where $a_j \in \mathring{C^*(U_{v_j})}$ for $1 \leq j \leq m$ be reduced with respect to $\varphi \circ \text{Ad}(P_\mathcal{H})$. For $1 \leq j \leq m$, let $b_j$ be an element of the $v_j^\text{th}$ copy of $C^*(\mathbb{Z})$ such that $\pi(b_j) = a_j$. It follows that \[\bigstar_\Gamma \theta_v(b_1\cdots b_m) = \theta_{v_1}(b_1)\cdots \theta_{v_m}(b_m)\] is reduced with respect to $\varphi$. Then by the $\Gamma$ independence of $\left\{T_v\right\}_{v\in V}$, we have
\begin{align*}
\varphi(P_\mathcal{H} a_1\cdots a_m |_\mathcal{H}) &= \varphi(P_\mathcal{H} \pi(b_1\cdots b_m)|_\mathcal{H}) \\
&= \varphi(\bigstar_\Gamma \theta_v (b_1\cdots b_m))\\
&= \varphi(\theta_{v_1}(b_1)\cdots \theta_{v_m}(b_m))\\
&=0.
\end{align*}
The uniqueness statement in the case of minimal $\mathcal{K}$ follows by the same argument as in the proof of Theorem 3.2 in \cite{sacr}, using Lemma \ref{stres} in place of Lemma 5.13 from \cite{spenic}.
\end{proof}
\begin{rmk}\hspace*{\fill}
\begin{enumerate}
\item If $\Gamma$ is complete then, as shown in \cite{nagy-foias, sacr}, we can take $p$ to be a $*$-polynomial.
\item By Theorem 1 in \cite{mlot}, we have that $\varphi \circ \text{Ad}(P_\mathcal{H})$ is tracial on $C^*(\left\{U_v\right\}_{v \in V})$.
\end{enumerate}
\end{rmk}
\subsection*{Acknowledgments}
Gratitude is due to Ben Hayes for initiating the author's interest in this subject and to David Sherman for valuable conversations about this project. Also, the author would like to thank Andrew Sale for providing helpful information on the relevant group theoretic literature. Because of a gracious invitation, a portion of this article was completed during a June 2017 visit to the Centre de Recerca Matem\`{a}tica in Barcelona, Spain.
\end{document}
\begin{document}
\title{Global existence, boundedness and stabilization in a high-dimensional chemotaxis system with consumption}
\begin{abstract}
\noindent This paper deals with the homogeneous Neumann boundary-value problem for the
chemotaxis-consumption system
\begin{eqnarray*}
\left\{
\begin{array}{llc}
u_t=\Delta u-\chi\nabla\cdot\big(u\nabla v\big)+\kappa u-\mu u^2,
&x\in \Omega, \,t>0,\\
\displaystyle v_t=\Delta v-uv , &x\in \Omega,\, t>0,
\end{array}
\right.
\end{eqnarray*}
in $N$-dimensional bounded smooth domains for suitably regular positive initial data. \\
We shall establish the existence of a global bounded
classical solution for suitably large $\mu$ and prove that for any $\mu>0$ there
exists a weak solution.\\
Moreover, in the case of $\kappa>0$ convergence to the constant equilibrium $(\frac{\kappa}{\mu },0)$ is shown. \\
\noindent{\bf Keywords:} Chemotaxis; logistic source; global existence; boundedness; asymptotic stability; weak solution\\
\noindent {\bf MSC:} 35Q92; 35K55; 35A01; 35B40; 35D30; 92C17
\end{abstract}
\section{Introduction}
Chemotaxis is the adaptation of the direction of movement to an external chemical signal. This signal can be a substance produced by the biological agents (cells, bacteria) themselves, as is the case in the celebrated Keller-Segel model (\cite{KS}, \cite{horstmann_I}) or -- in the case of even simpler organisms -- by a nutrient that is consumed. A prototypical model taking into account random and chemotactically directed movement of bacteria alongside death effects at points with high population densities and population growth together with diffusion and consumption of the nutrient is given by
\begin{align}\label{sys.intro}
u_t&=\Delta u -\chi \nabla \cdot (u\nabla v) +\kappa u-\mu u^2\\
v_t&=\Delta v - uv,\nonumber
\end{align}
considered in a smooth, bounded domain $\Omega\subset \mathbb{R} ^N$ together with homogeneous Neumann boundary conditions and suitable initial data. Herein, $\chi > 0$, $\kappa\in\mathbb{R} $, $\mu >0$ denote chemotactic sensitivity, growth rate (or death rate, if negative) and strength of the overcrowding effect, respectively.
The system \eqref{sys.intro}, in a basic form often with $\kappa=\mu =0$, appears as part of chemotaxis-fluid models intensively studied over the past few years (see e.g. the survey \cite[sec. 4.1]{BBTW} or \cite{cao_lankeit} for a recent contribution with an extensive bibliography).
Compared with the classical Keller-Segel model
\begin{align}\label{KS}
u_t&=\Delta u-\chi \nabla \cdot(u\nabla v)+\kappa u-\mu u^2\\
v_t&=\Delta v-v+u, \nonumber
\end{align}
which we have given in the form with logistic source terms paralleling that in \eqref{sys.intro}, at first glance, \eqref{sys.intro} seems much more amenable to the global existence (and boundedness) of solutions -- after all, the second equation by comparison arguments immediately provides an $L^\infty$-bound for $v$.
However, such a bound is not sufficient for dealing with the chemotaxis term, and accordingly global existence and boundedness of solutions to \eqref{sys.intro} with $\kappa=\mu =0$ is only known under the smallness condition
\begin{equation}\label{smallnessconditionforsourcefreemodel}
\chi \norm[\Lom\infty]{v(\cdot,0)}\leq \frac1{6(N+1)}
\end{equation}
on the initial data (\cite{tao_consumption_bdness}) or in a
two-dimensional setting (\cite{win_ctfluid}, \cite{win_arma} and
also \cite{xieli}). Their rate of convergence has been treated in
\cite{zhang_li}. In three-dimensional domains, weak solutions have
been constructed that eventually become smooth
\cite{taowin_ev_consumption}.
For \eqref{KS}, the presence of logistic terms has been shown to exclude otherwise possible finite-time blow-up phenomena (cf. \cite{win_blowuphigherdim}, \cite{mizoguchi_winkler_13}) -- at least as long as $\mu $ is sufficiently large if compared to the strength of the chemotactic effects (\cite{winkler_10_boundedness})
or if the dimension is $2$ (\cite{osaki_yagi_02}). If the quotient $\frac{\mu }{\chi }$ is sufficiently large, solutions to \eqref{KS} uniformly converge to the constant equilibrium (\cite{win_stability}); convergence rates have been considered in \cite{he_zheng}. Explicit largeness conditions on $\frac{\mu }{\chi }$ that ensure convergence, also for slightly more general source terms, can be found in \cite{lin_mu}, see also \cite{win_ksns_logsource}. For small $\mu >0$, at least global weak solutions are known to exist (\cite{lankeit_ev_smooth}), and in $3$-dimensional domains and for small $\kappa$, their large-time behaviour has been investigated (\cite{lankeit_ev_smooth}).
Also the chemotaxis-consumption model \eqref{sys.intro} has
already been considered with nontrivial source terms in
\cite{wang_khan_khan}. There it was proved that classical
solutions exist globally and are bounded as long as
\eqref{smallnessconditionforsourcefreemodel} holds -- which is the
same condition as for $\kappa=\mu =0$, thus shedding no light on
any possible interplay between chemotaxis and the population
kinetics.
In a three-dimensional setting and in the presence of a Navier-Stokes fluid, in \cite{lankeit_fluid} it was recently possible to construct global weak solutions for any positive $\mu $, which moreover eventually become classical and uniformly converge to the constant equilibrium in the large-time limit.
It is the aim of the present article to prove the existence of global classical solutions provided that $\mu $ is suitably large, and to determine their large-time behaviour. For the case of small $\mu >0$, we will prove the existence of global weak solutions (in the sense of Definition \ref{def:weaksol}). \\
\noindent\textbf{What largeness condition on $\mu $ might be sufficient for boundedness?}
For the Keller-Segel type model \eqref{KS} the typical condition
reads: 'If $\mu $ is large compared to $\chi $, then the
solution is global and bounded, independent of initial data.' In
order to see why this condition would be far less natural for
\eqref{sys.intro}, let us suppose we are given suitably regular
initial data $u_0$, $v_0$ and a corresponding solution $(u,v)$ of
\[
\begin{cases} u_t=\Delta u - \chi \nabla\cdot(u\nabla v) + \kappa u-\mu u^2\\
v_t=\Delta v - uv\\
\partial_{\nu} u\big\rvert_{\partial\Omega}=\partial_{\nu} v\big\rvert_{\partial\Omega}=0\\
u(\cdot,0)=u_0, v(\cdot,0)=v_0,
\end{cases}
\]
and let us define
\[
w:=\chi v.
\]
Then $(u,w)$ solves
\[
\begin{cases} u_t=\Delta u - \nabla\cdot(u\nabla w) + \kappa u-\mu u^2\\
w_t=\Delta w - uw\\
\partial_{\nu} u\big\rvert_{\partial\Omega}=\partial_{\nu} w\big\rvert_{\partial\Omega}=0\\
u(\cdot,0)=u_0, w(\cdot,0)=\chi v_0,
\end{cases}
\]
which is the same system, only with different chemotaxis
coefficient and rescaled initial data for the second component.
Consequently, \textit{in
\eqref{sys.intro}, large initial data equal high chemotactic
strength}. Hence, there cannot be any condition for global
existence which includes $\mu $ and $\chi $, but not $\norm[\Lom
\infty]{v_0}$. In light of this discussion, the requirement in
Theorem \ref{thm1} that $\mu $ be large with respect to $\chi
\norm[\Lom \infty]{v_0}$ seems natural. On the other hand, this
observation does not preclude conditions that involve neither
$\chi $ nor $\norm[\Lom \infty]{v_0}$, and indeed $\mu >0$ is
sufficient for the global existence of weak solutions.
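For the reader's convenience we record the elementary computation behind this rescaling: with $w=\chi v$ we have
\[
w_t=\chi v_t=\chi\big(\Delta v-uv\big)=\Delta w-uw
\qquad\text{and}\qquad
\chi \nabla\cdot(u\nabla v)=\nabla\cdot(u\nabla w),
\]
while the boundary and initial conditions transform accordingly.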
The first main result of the present article is global existence of classical solutions, provided that $\mu $ is sufficiently large as compared to $\norm[\Lom\infty]{\chi v_0}$:
\begin{thm}\label{thm1}
Let $N\in \mathbb{N} $ and let $\Omega\subset \mathbb{R} ^N$ be a smooth, bounded domain. There are constants $k_1=k_1(N)$ and $k_2=k_2(N)$ such that the following holds: Whenever $\kappa\in\mathbb{R} $, $\chi >0$, and $\mu >0$ and initial data
\begin{eqnarray}\label{id}
\left\{
\begin{array}{llc}
u_0\in C^0(\overline\Omega), \quad { u_0> 0} \,\,\text{in}\,\,\overline{\Omega},\\
\displaystyle
v_0\in C^1(\overline{\Omega}), \quad { v_0> 0 } \,\,\text{in}\,\,\overline{\Omega}
\end{array}
\right.
\end{eqnarray}
are such that
\[
\mu >k_1(N)\norm[\Lom\infty]{\chi v_0}^{\frac1N} + k_2(N)\norm[\Lom\infty]{\chi v_0}^{2N},
\]
then the system
\begin{equation}\label{a}
\left\{
\begin{array}{llc}
u_t=\Delta u-\nabla\cdot\big(u\nabla v\big)+\kappa u-\mu u^2,
&x\in \Omega, \,t>0,\\
\displaystyle v_t=\Delta v-uv , &x\in \Omega,\, t>0,\\
\displaystyle
\partial_{\nu} u=\partial_{\nu} v=0, &x\in\partial\Omega,\, t>0,\\
\displaystyle u(x,0)=u_0(x),\,\, v(x,0)=v_0(x),
&x\in\Omega,
\end{array}
\right. \end{equation}
has a unique global classical solution $(u,v)$ which is uniformly bounded in the sense that there is some constant $C>0$ such that
\begin{eqnarray}\label{B}
\|u(\cdot,t)\|_{L^{\infty}(\Omega)}+\|v(\cdot,t)\|_{W^{1,\infty}(\Omega)}
\le C \qquad
\mathrm{for}\,\,\mathrm{all} \quad t\in(0,\infty). \end{eqnarray}
\end{thm}
\begin{remark}\label{R1}
Here we have to leave open the question whether, for small values of $\mu >0$ and large $\chi v_0$, blow-up of solutions is possible at all. Consequently, the range of $\mu$ in this
result is not necessarily an optimal one. Nevertheless, the present condition can easily be made explicit (see Lemma \ref{lem:ge.for.positive.eps.or.large.mu}
and \eqref{eq:defk1k2}
for the values of $k_1$ and $k_2$). It seems worth pointing out that, in contrast to the condition \eqref{smallnessconditionforsourcefreemodel}, Theorem \ref{thm1} admits large values of $\chi v_0$, if only $\mu $ is appropriately large.
\end{remark}
The second outcome of our analysis is concerned with the large time behaviour of global solutions and reads as follows:
\begin{thm}\label{thm3}
Let $N\in \mathbb{N} $ and let $\Omega\subset\mathbb{R} ^N$ be a bounded smooth domain.
Suppose that $\chi >0$, $\kappa>0$ and $\mu>0$. Let $(u,v)\in C^{2,1}(\overline{\Omega}\times(0,\infty ))\cap C^0(\overline{\Omega}\times[0,\infty))$ be any global bounded solution to \eqref{a} (in the sense that \eqref{B} is fulfilled) which obeys \eqref{id}. Then
\begin{eqnarray}\label{stability1}
\Big\|u(\cdot,t)-\frac{\kappa}{\mu }\Big\|_{L^{\infty}(\Omega)}\rightarrow
0
\end{eqnarray}
and
\begin{eqnarray}\label{stability2}
\|v(\cdot,t)\|_{L^{\infty}(\Omega)}\rightarrow 0
\end{eqnarray}
as $t\rightarrow \infty$.
\end{thm}
\begin{remark}
This theorem in particular applies to the solutions considered in Theorem \ref{thm1}.
\end{remark}
\begin{remark}
Boundedness is not necessary in the sense of \eqref{B}; in light of Lemma \ref{lem:ulp.to.boundedness}, the existence of $C>0$ and $p>N$ such that
\[
\norm[\Lom p]{u(\cdot,t)}\leq C\qquad \text{for all } t>0
\]
would be sufficient.
\end{remark}
\noindent\textbf{Unconditional global weak solvability.}
As in the context of the classical Keller-Segel model \eqref{KS} (\cite{lankeit_ev_smooth}), global weak solutions to \eqref{a} can be shown to exist regardless of the size of initial data and for any positive $\mu$:
\begin{thm}\label{thm:weaksol}
Let $N\in\mathbb{N} $ and let $\Omega\subset \mathbb{R} ^N$ be a bounded smooth domain. Let $\chi >0$, $\kappa\in \mathbb{R} $, $\mu >0$ and assume that $u_0$, $v_0$ satisfy \eqref{id}. Then
system \eqref{a} has a global weak solution (in the sense of Definition \ref{def:weaksol} below).
\end{thm}
These solutions, too, stabilize toward $(\frac{\kappa}{\mu },0)$ as $t\to\infty$, even though in a weaker sense than guaranteed by Theorem \ref{thm3} for classical solutions:
\begin{thm}\label{thm:weaksol-limit}
Let $N\in\mathbb{N} $ and let $\Omega\subset \mathbb{R} ^N$ be a bounded smooth domain. Let $\chi >0$, $\kappa>0$, $\mu >0$ and assume that $u_0$, $v_0$ satisfy \eqref{id}. Then for any $p\in[1,\infty)$ the weak solution $(u,v)$ to \eqref{a} that has been constructed during the proof of Theorem \ref{thm:weaksol} satisfies
\[
\norm[\Lom p]{v(\cdot,t)}\to 0 \quad \text{ and } \int_t^{t+1} \norm[\Lom 2]{u(\cdot,s)-\frac{\kappa}{\mu }} ds \to 0
\]
as $t\to \infty$.
\end{thm}
\begin{remark}
Under the restriction $N=3$, the existence of global weak solutions that eventually become smooth and uniformly converge to $(\frac{\kappa}{\mu },0)$ has been proven in \cite{lankeit_fluid}, where a coupled chemotaxis-fluid model is treated.
\end{remark}
{\textbf{Plan of the paper.}} In Section \ref{sec:prelim} we will prepare some general calculus inequalities.
In the following for some $a>0$ we will then consider
\begin{equation}\label{epssys}
\begin{cases} u_{\varepsilon t}=\Delta u_{\varepsilon } - \chi \nabla\cdot(u_{\varepsilon }\nabla v_{\varepsilon }) + \kappa u_{\varepsilon } - \mu u_{\varepsilon }^2 - \varepsilon u_{\varepsilon }^2\ln (au_{\varepsilon }) \\
v_{\varepsilon t}=\Delta v_{\varepsilon } - u_{\varepsilon }v_{\varepsilon }\\
\partial_{\nu} u_{\varepsilon }\big\rvert_{\partial\Omega}=\partial_{\nu} v_{\varepsilon }\big\rvert_{\partial\Omega}=0\\
u_{\varepsilon }(\cdot,0)=u_0, v_{\varepsilon }(\cdot,0)=v_0.
\end{cases}
\end{equation}
For $\varepsilon =0$, this system reduces to \eqref{a}; for $\varepsilon \in(0,1)$ we will be able to derive global existence of solutions without any concern for the size of initial data and hence obtain a suitable stepping stone for the construction of weak solutions. Beginning the study of solutions to this system in Section \ref{sec:locex-andbasic} with a local existence result and elementary properties of the solutions, we will in Section \ref{sec:bdclasssol} consider a functional of the type $\int_\Omega u^p+\int_\Omega |\nabla v|^{2p}$ and finally, aided by estimates for the heat semigroup, obtain globally bounded solutions, thus proving Theorem \ref{thm1}.
In Section \ref{sec:stabilization}, where $\kappa$ is assumed to be positive, we will let $a:=\frac{\mu }{\kappa}$ and employ the functional
\[
\mathcal{F}_{\varepsilon }(t)=\int_\Omega u_{\varepsilon }(\cdot,t)-\frac{\kappa}{\mu }\int_\Omega \ln u_{\varepsilon }(\cdot,t) + \frac{\kappa}{2\mu }\int_\Omega v_{\varepsilon }^2(\cdot,t)
\]
in order to derive the stabilization result in Theorem \ref{thm3} and already prepare Theorem \ref{thm:weaksol-limit}. Section \ref{sec:weaksol}, finally, will be devoted to the construction of weak solutions to \eqref{a}, and to the proofs of Theorem \ref{thm:weaksol} and Theorem \ref{thm:weaksol-limit}.
\begin{remark}
In \eqref{epssys}, the additional term $-\varepsilon u_{\varepsilon }^2\ln (au_{\varepsilon })$ could be replaced by $-\varepsilon \Phi(u_{\varepsilon })$ with some other continuous function $\Phi$ which satisfies: $\Phi(s)\to 0$ as $s\searrow 0$, $\frac{\Phi(s)}{s^2}\to \infty$ as $s\to\infty$ and, for the stabilization results in Section \ref{sec:stabilization}, $\Phi<0$ on $(0,\frac{\kappa}{\mu})$ as well as $\Phi>0$ on $(\frac{\kappa}{\mu},\infty)$. \\
We will always let
\begin{equation}\label{defa}
a:=\begin{cases} \frac{\mu}{\kappa},& \text{if }\; \kappa>0\\ \mu&\text{if }\; \kappa\leq 0\end{cases}
\end{equation}
and note that the choice for the case $\kappa\leq 0$ was arbitrary and that in Sections \ref{sec:bdclasssol} and \ref{sec:weaksol}, the precise value of $a$ plays no important role.
\end{remark}
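As a brief consistency check of the preceding remark: for $\kappa>0$ and $a=\frac{\mu}{\kappa}$, the function $\Phi(s):=s^2\ln(as)$ indeed has the required properties, since $s^2\ln(as)\to 0$ as $s\searrow 0$, $\frac{\Phi(s)}{s^2}=\ln(as)\to\infty$ as $s\to\infty$, and
\[
\ln(as)<0 \iff s<\tfrac1a=\tfrac{\kappa}{\mu},
\]
so that $\Phi<0$ on $(0,\frac{\kappa}{\mu})$ and $\Phi>0$ on $(\frac{\kappa}{\mu},\infty)$.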
\textbf{Notation.} For solutions of PDEs we will use $T_{\rm max}$ to denote their maximal time of existence (cf. also Lemma \ref{criterion}). Throughout the article we fix $N\in\mathbb{N}$ and a bounded, smooth domain $\Omega\subset\mathbb{R}^N$.
\section{General preliminaries}\label{sec:prelim}
In this section we provide some estimates that are valid for all suitably regular functions and not only for solutions of the PDE under consideration.
\begin{lem}\label{lem:elementary.estimates}
a) For any $c\in C^2(\Omega)$:
\begin{equation}\label{eq:Delta.Hessian}
|\Delta c|^2\leq N|D^2 c|^2 \quad \text{throughout } \Omega.
\end{equation}
b) There are $C>0$ and $k>0$ such that every positive $c\in C^2(\overline{\Omega})$ fulfilling $\partial_{\nu} c=0$ on $\partial\Omega$ satisfies
\begin{equation}\label{eq:lembdrytermvi}
-2\int_\Omega \frac{|\Delta c|^2}{c} +\int_\Omega \frac{|\nabla c|^2\Delta c}{c^2} \leq -k \int_\Omega c|D^2\ln c|^2 -k \int_\Omega\frac{|\nabla c|^4}{c^3} + C\int_\Omega c.
\end{equation}
\end{lem}
\begin{proof}
a) Straightforward calculations yield
\begin{align*}
\kl{\sum_{i=1}^N c_{x_ix_i}}^2=\sum_{i,j=1}^N c_{x_ix_i}c_{x_jx_j}\leq \sum_{i,j=1}^N \kl{\frac12 c_{x_ix_i}^2 + \frac12 c_{x_jx_j}^2}
= N\sum_{i=1}^N c_{x_ix_i}^2 \leq N\sum_{i,j=1}^N c_{x_ix_j}^2.
\end{align*}
b) This is \cite[Lemma 2.7 vi)]{lankeit_fluid}.
\end{proof}
Let us now derive the following interpolation inequality on which we will rely in obtaining an estimate for $\int_\Omega u^p+\int_\Omega |\nabla v|^{2p}$ in Section \ref{sec:bdclasssol}.
\begin{lem}\label{l_interpolation}
Let $q\in[1,\infty)$. Then for any $c\in C^2(\overline{\Omega} )$
satisfying $c\frac{\partial c}{\partial \nu}=0$ on $\partial
\Omega$, the inequality
\begin{eqnarray}\label{inter0}
\|\nabla c\|^{2q+2}_{L^{2q+2}(\Omega)} \le
2(4q^2+N)\|c\|^2_{L^{\infty}} \big\||\nabla
c|^{q-1}D^2c\big\|^{2}_{L^2(\Omega)}
\end{eqnarray}
holds, where $D^2c$ denotes the Hessian of $c$.
\end{lem}
\begin{proof} Since $c\frac{\partial c}{\partial \nu}=0$ on
$\partial \Omega$, an integration by parts yields
$$\|\nabla c\|^{2q+2}_{L^{2q+2}(\Omega)}=-\int_{\Omega}c|\nabla c|^{2q}\Delta c-2q\int_{\Omega}c |\nabla c|^{2q-2} \nabla c\cdot (D^2 c\cdot \nabla c).$$
Using Young's inequality and \eqref{eq:Delta.Hessian} we can estimate
\begin{eqnarray}\label{inter1}
\Big|-\int_{\Omega}c|\nabla c|^{2q}\Delta c\Big| &\le &
\frac{1}{4}\int_{\Omega}|\nabla
c|^{2q+2}+\int_{\Omega}c^2|\nabla
c|^{2q-2}|\Delta c|^2\nonumber\\
&\le&
\frac{1}{4}\int_{\Omega}|\nabla
c|^{2q+2}+N\|c\|^2_{L^{\infty}(\Omega)}\int_{\Omega}|\nabla
c|^{2q-2}|D^2 c|^2.
\end{eqnarray}
Likewise, we see that
\begin{eqnarray}\label{inter2}
\Big|-2q\int_{\Omega}c |\nabla c|^{2q-2} \nabla c\cdot (D^2 c\cdot
\nabla c)\Big| &\le &
\frac{1}{4}\int_{\Omega}|\nabla
c|^{2q+2}+4q^2\|c\|^2_{L^{\infty}(\Omega)}\int_{\Omega}|\nabla
c|^{2q-2}|D^2 c|^2.
\end{eqnarray}
In consequence, \eqref{inter1} and \eqref{inter2} prove
\eqref{inter0}.
\end{proof}
\section{Local existence and basic properties of solutions}\label{sec:locex-andbasic}
We first recall a result on local solvability of \eqref{epssys}:
\begin{lem}\label{criterion}
Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $,
$\mu >0$, $\chi >0$ and $q>N$. Then for any $\varepsilon
\in[0,1)$ there exist $T_{max}\in (0,\infty]$ and a unique classical
solution $(u_{\varepsilon },v_{\varepsilon })$ of system \eqref{epssys} with $a$ as in \eqref{defa} in
$\Omega\times(0,T_{max})$ such that
\begin{eqnarray*}
&&u_{\varepsilon }\in C^{0}\big(\overline{\Omega} \times[0,T_{\rm max})\big)\cap
C^{2,1}\big(\overline{\Omega} \times(0,T_{\rm max})\big),\\
&&v_{\varepsilon }\in C^{0}\big(\overline{\Omega} \times[0,T_{\rm max})\big)\cap
C^{2,1}\big(\overline{\Omega} \times(0,T_{\rm max})\big).
\end{eqnarray*}
Moreover, we have $u_{\varepsilon }> 0$ and $v_{\varepsilon } > 0$ in $\overline{\Omega} \times
[0, T_{\rm max})$, and
\begin{equation}\label{extcrit}
\mathrm{if} \,\,\, T_{\rm max}<\infty,\,\,\, \mathrm{then}
\,\,\,\limsup_{t\nearrow T_{\rm max}}\kl{\|u_{\varepsilon }(\cdot,t)\|_{L^{\infty}(\Omega)}+\|v_{\varepsilon }(\cdot,t)\|_{W^{1,q}(\Omega)}}=\infty.
\end{equation}
\end{lem}
\begin{proof}
Apart from minor adaptions necessary if $\varepsilon >0$ (see also \cite[Lemma 3.1]{win_ksns_logsource}), this lemma is contained in \cite[Lemma 2.1]{win_ctfluid}.
\end{proof}
Even though the total mass is not conserved, an upper bound for it can be obtained easily:
\begin{lem}\label{lu1}
Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$. Then for any $\varepsilon \in[0,\infty)$ the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\begin{eqnarray}\label{2.1}
\int_\Omega u_{\varepsilon }(x,t) dx\le \max\Big\{\frac{\kappa|\Omega|}{2\mu }+\sqrt{\kl{\frac{\kappa_+|\Omega|}{2\mu }}^2+\varepsilon \frac{|\Omega|}{2a^2e\mu }},\int_\Omega u_0 \Big\}=:m_{\varepsilon } \quad\mathrm{ for\,\,\, all }\quad
t\in(0,T_{\rm max}).
\end{eqnarray}
\end{lem}
\begin{proof}
Because $s^2\ln(as)\geq -\frac1{2a^2e}$ for all $s>0$, integrating the first equation in \eqref{epssys} over $\Omega$ and applying Hölder's inequality shows that
\[
\ddt \int_\Omega u_{\varepsilon }\leq \kappa\int_\Omega u_{\varepsilon } - \frac{\mu }{|\Omega|}\kl{\int_\Omega u_{\varepsilon }}^2+\frac{\varepsilon |\Omega|}{2a^2e}\quad \text{ on } (0,T_{\rm max})
\]
and the claim results from an ODI-comparison argument.
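For completeness we verify the elementary inequality used above: the function $g(s):=s^2\ln(as)$, $s>0$, satisfies
\[
g'(s)=2s\ln(as)+s=s\big(2\ln(as)+1\big),
\]
so $g$ has its unique critical point at $s_0=\frac{1}{a\sqrt e}$ and attains there its minimum $g(s_0)=-\frac{1}{2a^2e}$.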
\end{proof}
For the second component, even uniform boundedness can be deduced instantly:
\begin{lem}\label{lem:normvdecreases}
Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$.
Then for any $\varepsilon \in[0,1)$ the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\begin{eqnarray}\label{2.2}
\|v_{\varepsilon }(\cdot,t)\|_{L^{\infty}(\Omega)}\le \|v_0\|_{L^{\infty}(\Omega)} \qquad\mathrm{ for\,\,\, all }\quad
t\in(0,T_{\rm max})
\end{eqnarray}
and
\[
(0,T_{\rm max})\ni t\mapsto \norm[\Lom\infty]{v_{\varepsilon }(\cdot,t)}
\]
is monotone decreasing.
\end{lem}
\begin{proof}
This is a consequence of the maximum principle and the nonnegativity of the solution.
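In more detail (a standard comparison argument): since $u_{\varepsilon }\geq 0$, for any $t_0\in[0,T_{\rm max})$ the constant $\overline v:=\norm[\Lom\infty]{v_{\varepsilon }(\cdot,t_0)}$ satisfies
\[
\overline v_t-\Delta \overline v+u_{\varepsilon }\overline v = u_{\varepsilon }\overline v\geq 0 = v_{\varepsilon t}-\Delta v_{\varepsilon }+u_{\varepsilon }v_{\varepsilon } \quad\text{in } \Omega\times(t_0,T_{\rm max}),
\]
along with $\partial_\nu \overline v=0$ on $\partial\Omega$ and $\overline v\geq v_{\varepsilon }(\cdot,t_0)$, so that $v_{\varepsilon }\leq \overline v$ on $(t_0,T_{\rm max})$; taking $t_0=0$ gives \eqref{2.2}, and varying $t_0$ yields the claimed monotonicity.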
\end{proof}
Also the gradient of $v$ can be controlled in an $L^2(\Omega)$-sense:
\begin{lem}\label{ltv2} Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$.
There exists a positive constant $M$ such that for all $\varepsilon \in[0,1)$ the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\begin{eqnarray}\label{2.tv2}
\int_{\Omega}|\nabla
v_{\varepsilon }(\cdot,t)|^2\le M
\qquad\mathrm{ for\,\,\, all }\quad t\in(0,T_{\rm max}).
\end{eqnarray}
\end{lem}
\begin{proof} Integration by parts and the Young inequality result
in
\begin{eqnarray}\label{2.tv1}
\frac{d}{dt}\int_{\Omega}|\nabla v_{\varepsilon }|^2&=& 2\int_{\Omega} \nabla
v_{\varepsilon }\cdot \nabla (\Delta v_{\varepsilon }-u_{\varepsilon }v_{\varepsilon })\nonumber \\
&\le& -2\int_{\Omega}|\Delta v_{\varepsilon }|^2 -2\int_{\Omega}|\nabla
v_{\varepsilon }|^2+2\int_{\Omega} v_{\varepsilon }(u_{\varepsilon }-1)\Delta v_{\varepsilon } \nonumber\\
&\le& -\int_{\Omega}|\Delta v_{\varepsilon }|^2 -2\int_{\Omega}|\nabla v_{\varepsilon }|^2
+\int_{\Omega} v_{\varepsilon }^2(u_{\varepsilon }-1)^2\nonumber\\
&\le& -\int_{\Omega}|\Delta v_{\varepsilon }|^2 -2\int_{\Omega}|\nabla
v_{\varepsilon }|^2+\|v_0\|_{L^{\infty}(\Omega)}^2\int_{\Omega}
u_{\varepsilon }^2+2\|v_0\|_{L^{\infty}(\Omega)}^2\int_{\Omega}
u_{\varepsilon }+\|v_0\|_{L^{\infty}(\Omega)}^2
\end{eqnarray}
on $(0,T_{\rm max})$. Furthermore,
\begin{eqnarray}\label{2.tv3}
\frac{\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\frac{d}{dt}\int_{\Omega}
u_{\varepsilon }\le \frac{\kappa_{+}\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\int_{\Omega}
u_{\varepsilon }-\|v_0\|_{L^{\infty}(\Omega)}^2\int_{\Omega} u_{\varepsilon }^2-\frac{\varepsilon \norm[\Lom\infty]{v_0}^2}{\mu }\int_\Omega u_{\varepsilon }^2\ln (au_{\varepsilon }).
\end{eqnarray}
Adding \eqref{2.tv1} to \eqref{2.tv3} and taking into account that
\[
-\frac{\varepsilon \norm[\Lom\infty]{v_0}^2}{\mu } s^2\ln (as)\leq \frac{\norm[\Lom\infty]{v_0}^2}{2ea^2\mu }
\]
for any $\varepsilon \in[0,1)$ and $s\geq 0$, we obtain that
\begin{eqnarray*}
&&\frac{d}{dt}\Big\{\int_{\Omega}|\nabla
v_{\varepsilon }|^2+\frac{\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\int_{\Omega}u_{\varepsilon }\Big\}\\
&& \le -\left(\int_{\Omega}|\nabla
v_{\varepsilon }|^2+\frac{\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\int_{\Omega}u_{\varepsilon }\right)+\left(2\|v_0\|_{L^{\infty}(\Omega)}^2+\frac{\kappa_{+}+1}{\mu}\|v_0\|_{L^{\infty}(\Omega)}^2\right)\int_{\Omega}
u_{\varepsilon }+\|v_0\|_{L^{\infty}(\Omega)}^2+\frac{|\Omega|\norm[\Lom\infty]{v_0}^2}{2\mu ea^2}.
\end{eqnarray*}
Since Lemma \ref{lu1} shows that $\int_\Omega u_{\varepsilon }(x,t) dx\le m_1$ for any $\varepsilon \in[0,1)$ and $t\in(0,T_{\rm max})$,
a comparison argument leads to
\begin{eqnarray*}
&&\int_{\Omega}|\nabla
v_{\varepsilon }|^2+\frac{\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\int_{\Omega}u_{\varepsilon } \\
&& \le \max \left\{\int_{\Omega}|\nabla
v_0|^2+\frac{\|v_0\|_{L^{\infty}(\Omega)}^2}{\mu}\int_{\Omega}u_0,\,\kl{1+\frac{|\Omega|}{2ea^2\mu }}\|v_0\|_{L^{\infty}(\Omega)}^2+
\left(2\|v_0\|_{L^{\infty}(\Omega)}^2+\frac{\kappa_{+}+1}{\mu}\|v_0\|_{L^{\infty}(\Omega)}^2\right)m_1\right\},
\end{eqnarray*}
holding true on $(0,T_{\rm max})$, which in particular implies \eqref{2.tv2}.
\end{proof}
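For later reference we note the elementary comparison fact underlying the above (and several subsequent) ODI arguments: if $y\colon[0,T)\to[0,\infty)$ is continuously differentiable and satisfies $y'(t)\le -c\,y(t)+K$ on $(0,T)$ with constants $c>0$ and $K\ge 0$, then
\[
y(t)\le \max\Big\{y(0),\frac{K}{c}\Big\}\qquad\text{for all } t\in(0,T),
\]
since $y$ is strictly decreasing at any time at which $y>\frac{K}{c}$.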
\section{Existence of a bounded classical solution}\label{sec:bdclasssol}
We now turn to the analysis of the coupled functional of
$\int_{\Omega}u^p$ and $\int_{\Omega}|\nabla v|^{2p}$. We first
apply standard testing procedures to gain the time evolution of
each quantity.
\begin{lem}\label{l3.1} Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$. For any $p\in[1,\infty)$, any $\varepsilon \in[0,1)$, we have that the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\begin{equation}\label{3.1}
\frac{d}{dt}\int_{\Omega}u_{\varepsilon }^p+\frac{2(p-1)}{p}\int_{\Omega}|\nabla
u_{\varepsilon }^{\frac{p}{2}}|^2\le \frac{p(p-1)}{2} \chi
^2\int_{\Omega}u_{\varepsilon }^p|\nabla v_{\varepsilon }|^2+p\kappa\int_{\Omega}u_{\varepsilon }^p-p\mu
\int_{\Omega}u_{\varepsilon }^{p+1} -\varepsilon p\int_\Omega u_{\varepsilon }^{p+1}\ln (au_{\varepsilon })
\end{equation}
on $(0,T_{\rm max})$.
\end{lem}
\begin{proof} Testing the first equation in \eqref{epssys} against
$u_{\varepsilon }^{p-1}$ and using Young's inequality, we can obtain
\begin{align}\label{3.1''}
\frac{1}{p}\frac{d}{dt}\int_{\Omega}u_{\varepsilon }^p
&=-(p-1)\int_{\Omega}u_{\varepsilon }^{p-2}|\nabla
u_{\varepsilon }|^2+(p-1)\chi \int_{\Omega}u_{\varepsilon }^{p-1}\nabla u_{\varepsilon } \cdot \nabla
v_{\varepsilon }+\kappa\int_{\Omega}u_{\varepsilon }^p-\mu\int_{\Omega}u_{\varepsilon }^{p+1}\nonumber\\
&\quad -\varepsilon \int_\Omega u_{\varepsilon }^{p+1}\ln (au_{\varepsilon })\nonumber\\
&\le -(p-1)\int_{\Omega}u_{\varepsilon }^{p-2}|\nabla
u_{\varepsilon }|^2+\frac{p-1}{2}\int_{\Omega}u_{\varepsilon }^{p-2}|\nabla
u_{\varepsilon }|^2+\frac{p-1}{2} \chi^2\int_{\Omega}u_{\varepsilon }^{p}|\nabla
v_{\varepsilon }|^2
\nonumber\\
&\quad+\kappa\int_{\Omega}u_{\varepsilon }^p-\mu\int_{\Omega}u_{\varepsilon }^{p+1}-\varepsilon \int_\Omega u_{\varepsilon }^{p+1}\ln (au_{\varepsilon })
\end{align}
on $(0,T_{\rm max})$, which by using the fact that
$$\int_{\Omega}u_{\varepsilon }^{p-2}|\nabla
u_{\varepsilon }|^2=\frac{4}{p^2}\int_{\Omega}|\nabla u_{\varepsilon }^{\frac{p}{2}}|^2 \quad \text{ on } (0,T_{\rm max})$$
directly results in
\eqref{3.1}.
\end{proof}
\begin{lem}\label{l3.2} Let $u_0$, $v_0$ satisfy \eqref{id}, let $\kappa\in\mathbb{R} $, $\mu >0$, $\chi >0$.
For any $p\in[1,\infty)$, any $\varepsilon \in[0,1)$, we have that the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\begin{eqnarray}\label{3.1'}
\frac{d}{dt}\int_{\Omega} |\nabla v_{\varepsilon }|^{2p}+p\int_{\Omega}|\nabla
v_{\varepsilon }|^{2p-2}|D^2 v_{\varepsilon }|^2 \le
p(p+N-1)\|v_0\|_{L^{\infty}(\Omega)}^2\int_{\Omega}u_{\varepsilon }^2|\nabla
v_{\varepsilon }|^{2p-2} \quad \text{on } (0,T_{\rm
max}).
\end{eqnarray}
\end{lem}
\begin{proof} We differentiate the second equation in \eqref{a} to
compute
$$(|\nabla v_{\varepsilon }|^2)_t=2\nabla v_{\varepsilon }\cdot \nabla \Delta v_{\varepsilon }-2\nabla v_{\varepsilon }\cdot \nabla (u_{\varepsilon }v_{\varepsilon })=\Delta |\nabla v_{\varepsilon }|^2-2|D^2v_{\varepsilon }|^2-2\nabla v_{\varepsilon }\cdot \nabla (u_{\varepsilon }v_{\varepsilon })
\quad \text{in } \Omega\times (0,T_{\rm max}).$$ Upon multiplication
by $(|\nabla v_{\varepsilon }|^2)^{p-1}$ and integration, this leads to
\begin{equation}\label{3.2}
\frac{1}{p}\frac{d}{dt} \int_{\Omega}|\nabla
v_{\varepsilon }|^{2p}+(p-1)\int_{\Omega} |\nabla v_{\varepsilon }|^{2p-4}\big|\nabla|\nabla
v_{\varepsilon }|^2\big|^2+2\int_{\Omega}|\nabla v_{\varepsilon }|^{2p-2}|D^2v_{\varepsilon }|^2\le
-2\int_{\Omega}|\nabla v_{\varepsilon }|^{2p-2} \nabla v_{\varepsilon } \cdot \nabla(u_{\varepsilon }v_{\varepsilon })
\end{equation}
on $(0,T_{\rm max})$. Then integrating by parts, we
achieve
\begin{align*}
-2\int_{\Omega}|\nabla v_{\varepsilon }|^{2p-2} \nabla v_{\varepsilon } \cdot
\nabla(u_{\varepsilon }v_{\varepsilon })&=2\int_{\Omega} u_{\varepsilon }v_{\varepsilon }|\nabla v_{\varepsilon }|^{2p-2} \Delta
v_{\varepsilon }+2(p-1)\int_{\Omega} u_{\varepsilon }v_{\varepsilon }|\nabla v_{\varepsilon }|^{2p-4}\nabla v_{\varepsilon }\cdot
\nabla|\nabla v_{\varepsilon }|^2\\
&\le 2\|v_0\|_{L^{\infty}(\Omega)}\int_{\Omega} u_{\varepsilon }|\nabla
v_{\varepsilon }|^{2p-2} |\Delta
v_{\varepsilon }|+2(p-1)\|v_0\|_{L^{\infty}(\Omega)}\int_{\Omega} u_{\varepsilon }|\nabla
v_{\varepsilon }|^{2p-3}\cdot \big|\nabla|\nabla v_{\varepsilon }|^2\big|
\end{align*}
throughout $(0,T_{\rm max})$, where we have used Lemma \ref{lem:normvdecreases}.
Next by Young's inequality and Lemma \ref{lem:elementary.estimates} a)
we have that
$$2\|v_0\|_{L^{\infty}(\Omega)}\int_{\Omega} u_{\varepsilon }|\nabla
v_{\varepsilon }|^{2p-2} |\Delta v_{\varepsilon }|\le \int_{\Omega} |\nabla v_{\varepsilon }|^{2p-2} |D^2
v_{\varepsilon }|^2+N\|v_0\|^2_{L^{\infty}(\Omega)}\int_{\Omega}u_{\varepsilon }^2 |\nabla
v_{\varepsilon }|^{2p-2},$$
and
\begin{eqnarray*}
&& 2(p-1)\|v_0\|_{L^{\infty}(\Omega)}\int_{\Omega} u_{\varepsilon }|\nabla
v_{\varepsilon }|^{2p-3}\cdot \big|\nabla|\nabla v_{\varepsilon }|^2\big|\\
&&\le (p-1)\int_{\Omega} |\nabla v_{\varepsilon }|^{2p-4}\big|\nabla|\nabla
v_{\varepsilon }|^2\big|^2+(p-1)\|v_0\|^2_{L^{\infty}(\Omega)}\int_{\Omega}u_{\varepsilon }^2
|\nabla v_{\varepsilon }|^{2p-2}.
\end{eqnarray*}
Thereupon, \eqref{3.2} implies that
$$\frac{d}{dt} \int_{\Omega}|\nabla
v_{\varepsilon }|^{2p}+p\int_{\Omega}|\nabla v_{\varepsilon }|^{2p-2}|D^2v_{\varepsilon }|^2\le
p(p+N-1)\|v_0\|^2_{L^{\infty}(\Omega)}\int_{\Omega}u_{\varepsilon }^2 |\nabla
v_{\varepsilon }|^{2p-2}$$ on $(0,T_{\rm max})$.
\end{proof}
Next we will show that if $\mu$ is suitably large, then all
integrals on the right side in \eqref{3.1} and \eqref{3.1'}
can adequately be estimated in terms of the respective dissipated
quantities on the left, in consequence implying the $L^p$ estimate
of $u$ and the boundedness estimate for $|\nabla v|$.
\begin{lem}\label{l3.5}
Let $p>1$. With
\begin{align}\label{eq:defk1k2}
k_1(p,N):=\frac{p(p-1)}{(p+1)}\left(\frac{4(p-1)(4p^2+N)}{p+1}\right)^{\frac{1}{p}}\nonumber\\
k_2(p,N):=\frac{4(p+N-1)}{p+1}\left(\frac{8(p-1)(p+N-1)(4p^2+N)}{p+1}\right)^{\frac{p-1}{2}}
\end{align}
the following holds:
If $\mu >0$, $\chi >0$ and the positive function $v_0\in C^1(\overline{\Omega})$ fulfil
\begin{equation}\label{cond:mularge}
\mu\ge k_1(p,N)\|\chi v_0\|^{\frac{2}{p}}_{L^\infty(\Omega)}+k_2(p,N)\|\chi v_0\|^{2p}_{L^\infty(\Omega)},
\end{equation}
then
for every $\kappa\in \mathbb{R} $, $0<u_0\in C^0(\overline{\Omega})$ there is $C>0$ such that for every $\varepsilon \in[0,1)$ the solution $(u_{\varepsilon },v_{\varepsilon })$ of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\[
\int_\Omega u_{\varepsilon }^p(\cdot,t) + \int_\Omega |\nabla v_{\varepsilon }(\cdot,t)|^{2p}\leq C \qquad \text{on } (0,T_{\rm max}).
\]
If, however, $\mu >0$, $\chi >0$ and $0<v_0\in C^1(\overline{\Omega})$ do not satisfy \eqref{cond:mularge}, then
for every $\varepsilon \in(0,1)$, $\kappa\in \mathbb{R} $, $0<u_0\in C^0(\overline{\Omega})$ there is $c_\varepsilon>0$ such that the solution $(u_{\varepsilon },v_{\varepsilon })$ of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\[
\int_\Omega u_{\varepsilon }^p(\cdot,t) + \int_\Omega |\nabla v_{\varepsilon }(\cdot,t)|^{2p}\leq c_\varepsilon \qquad \text{on } (0,T_{\rm max}).
\]
\end{lem}
\begin{proof} Lemmas \ref{l3.1} and \ref{l3.2} show that
\begin{align}\label{3.12}
& \frac{d}{dt}\Big(\int_{\Omega}u_{\varepsilon }^p+\chi ^{2p}\int_{\Omega}|\nabla
v_{\varepsilon }|^{2p}\Big)+\frac{2(p-1)}{p}\int_{\Omega}|\nabla
u_{\varepsilon }^{\frac{p}{2}}|^2+p\chi ^{2p}\int_{\Omega}|\nabla v_{\varepsilon }|^{2p-2}|D^2 v_{\varepsilon }|^2 \nonumber\\
&\le \frac{p(p-1)}{2} \chi ^2\int_{\Omega}u_{\varepsilon }^p|\nabla
v_{\varepsilon }|^2+p(p+N-1)\|v_0\|_{L^{\infty}(\Omega)}^2\chi ^{2p}\int_{\Omega}u_{\varepsilon }^2|\nabla
v_{\varepsilon }|^{2p-2}\nonumber\\
&+p\kappa\int_{\Omega}u_{\varepsilon }^p-p\mu \int_{\Omega}u_{\varepsilon }^{p+1}- \varepsilon p\int_\Omega u_{\varepsilon }^{p+1}\ln (au_{\varepsilon })
\end{align}
throughout $(0,T_{\rm{max}})$. Using Young's inequality, we can assert that for any
$\delta_1>0$,
\begin{eqnarray}\label{3.13}
\frac{p(p-1)}{2} \chi ^{2}\int_{\Omega}u_{\varepsilon }^p|\nabla
v_{\varepsilon }|^2\le \frac{p(p-1)\delta_1 ^{p+1}}{2(p+1)}\chi ^{2p}\int_{\Omega}|\nabla
v_{\varepsilon }|^{2(p+1)}+\frac{p^2(p-1)}{2(p+1)}\Big(\frac{1}{\delta_1}\Big)^{\frac{p+1}{p}}\chi ^{\frac2p}\int_{\Omega}u_{\varepsilon }^{p+1}
\end{eqnarray}
on $(0,T_{\rm max})$. We then apply Lemma
\ref{l_interpolation} and $\|v_{\varepsilon }(\cdot,t)\|_{L^{\infty}(\Omega)}\le
\|v_0\|_{L^{\infty}(\Omega)}$ to obtain
$$ \frac{p(p-1)\delta_1 ^{p+1}}{2(p+1)}\chi ^{2p}\int_{\Omega}|\nabla
v_{\varepsilon }|^{2(p+1)}\le \frac{p(p-1)(4p^2+N)\delta_1
^{p+1}\|v_0\|^2_{L^\infty(\Omega)}}{(p+1)}\chi ^{2p}\int_{\Omega}|\nabla
v_{\varepsilon }|^{2p-2}|D^2v_{\varepsilon }|^2 $$ for all $t\in (0,T_{\rm max})$. If we let
$\delta_1=\left(\frac{p+1}{4(p-1)(4p^2+N)\|v_0\|^2_{L^\infty(\Omega)}}\right)^{\frac{1}{p+1}}$,
\eqref{3.13} shows that
\begin{equation}\label{3.14}
\frac{p(p-1)}{2}\chi ^2 \int_{\Omega}u_{\varepsilon }^p|\nabla v_{\varepsilon }|^2\le
\frac{p}{4}\chi ^{2p}\int_{\Omega}|\nabla
v_{\varepsilon }|^{2p-2}|D^2v_{\varepsilon }|^2+\frac{p^2(p-1)}{2(p+1)}\left(\frac{4(p-1)(4p^2+N)}{p+1}\right)^{\frac{1}{p}}\|v_0\|^{\frac{2}{p}}_{L^\infty(\Omega)}\chi ^{\frac2p}\int_{\Omega}u_{\varepsilon }^{p+1}
\end{equation}
on $(0,T_{\rm max})$. Similarly, for any $\delta_2>0$ we have
\begin{eqnarray}\label{3.15}
&& p(p+N-1)\|v_0\|_{L^{\infty}(\Omega)}^2\chi ^{2p}\int_{\Omega}u_{\varepsilon }^2|\nabla
v_{\varepsilon }|^{2p-2} \nonumber\\
&& \le \frac{p(p-1)(p+N-1)\delta_2
^{\frac{p+1}{p-1}}\|v_0\|^2_{L^\infty(\Omega)}}{p+1}\chi
^{2p}\int_{\Omega}|\nabla
v_{\varepsilon }|^{2(p+1)}\nonumber\\
&&\quad
+\frac{2p(p+N-1)\|v_0\|^2_{L^\infty(\Omega)}}{p+1}\Big(\frac{1}{\delta_2}\Big)^{\frac{p+1}{2}}\chi
^{2p}\int_{\Omega}u_{\varepsilon }^{p+1}
\end{eqnarray}
on $(0,T_{\rm max})$. Using Lemma \ref{l_interpolation} once more and taking
$\delta_2=\left(\frac{p+1}{8(p-1)(p+N-1)(4p^2+N)\|v_0\|^4_{L^\infty(\Omega)}}\right)^{\frac{p-1}{p+1}}$,
we can obtain from \eqref{3.15} that
\begin{eqnarray}\label{3.17}
&& p(p+N-1)\|v_0\|_{L^{\infty}(\Omega)}^2\chi ^{2p}\int_{\Omega}u_\varepsilon^2|\nabla
v_\varepsilon|^{2p-2}\nonumber\\
&& \le \frac{p}{4}\chi ^{2p}\int_{\Omega}|\nabla
v_\varepsilon|^{2p-2}|D^2v_\varepsilon|^2\nonumber\\
&&\quad+\frac{2p(p+N-1)}{p+1}\left(\frac{8(p-1)(p+N-1)(4p^2+N)}{p+1}\right)^{\frac{p-1}{2}}\|v_0\|^{2p}_{L^\infty(\Omega)}\chi
^{2p}\int_{\Omega}u_\varepsilon^{p+1}
\end{eqnarray}
on $(0,T_{\rm max})$.
Combining inequalities \eqref{3.12}, \eqref{3.14}
and \eqref{3.17}, we arrive at
\begin{align}\label{eq:diffineq}
&\frac{d}{dt}\kl{\int_{\Omega}u_\varepsilon^p+\chi ^{2p}\int_{\Omega}|\nabla
v_\varepsilon|^{2p}}+\frac{2(p-1)}{p}\int_{\Omega}|\nabla
u_\varepsilon^{\frac{p}{2}}|^2+\frac{p}{2}\chi ^{2p}\int_{\Omega}|\nabla
v_\varepsilon|^{2p-2}|D^2 v_\varepsilon|^2 \\\nonumber &\le
\frac{p}2\kl{k_1(p,N)\norm[\Lom\infty]{\chi v_0}^{\frac2p} +
k_2(p,N)\norm[\Lom\infty]{\chi v_0}^{2p} - \mu }\int_\Omega
u_\varepsilon^{p+1} - \varepsilon p\int_\Omega u_\varepsilon^{p+1}\ln au_\varepsilon + p\kappa\int_\Omega u_\varepsilon^p -
\frac{\mu p}2\int_\Omega u_\varepsilon^{p+1}
\end{align}
on $(0,T_{\rm max})$.
We can moreover invoke the Poincar\'{e} inequality along with Lemma \ref{lu1} to
estimate
\begin{eqnarray*}
\int_{\Omega}u_\varepsilon^p = \|u_\varepsilon^{\frac{p}{2}}\|^2_{L^2(\Omega)}\le
c_1\big(\|\nabla
u_\varepsilon^{\frac{p}{2}}\|^2_{L^2(\Omega)}+\|u_\varepsilon^{\frac{p}{2}}\|^2_{L^{\frac{2}{p}}(\Omega)}\big)\le
c_2\Big(\int_{\Omega}|\nabla u_\varepsilon^{\frac{p}{2}}|^2+1\Big) \qquad
\text{on } (0,T_{\rm max})
\end{eqnarray*}
with some $c_1>0$ and $c_2>0$. In a quite similar way, using Lemma
\ref{ltv2} we obtain constants $c_3>0$ and $c_4>0$ such that
\begin{eqnarray*}
\int_{\Omega}|\nabla v_\varepsilon|^{2p} &=& \big\||\nabla
v_\varepsilon|^{p}\big\|^2_{L^2(\Omega)}\\
&\le& c_3\big(\big\|\nabla |\nabla v_\varepsilon|^{p}
\big\|^2_{L^2(\Omega)}+\big\||\nabla
v_\varepsilon|^{p}\big\|^2_{L^{\frac{2}{p}}(\Omega)}\big)\\
&\le& c_4\Big(\int_{\Omega}\big|\nabla |\nabla
v_\varepsilon|^{p}\big|^2+1\Big)\\
&=& c_4\Big(p^2\int_{\Omega}|\nabla v_\varepsilon|^{2p-4}|D^2v_\varepsilon \nabla
v_\varepsilon|^2+1\Big)\\
&\le& c_4\Big(p^2\int_{\Omega}|\nabla v_\varepsilon|^{2p-2}|D^2v_\varepsilon |^2+1\Big)
\qquad \text{on }
(0,T_{\rm max}).
\end{eqnarray*}
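Here the penultimate step rests on the pointwise gradient identity, and the final estimate on the Cauchy--Schwarz inequality:
\[
 \nabla |\nabla v_\varepsilon|^p = p|\nabla v_\varepsilon|^{p-2}D^2v_\varepsilon\,\nabla v_\varepsilon
 \qquad\text{and}\qquad
 |D^2v_\varepsilon\,\nabla v_\varepsilon|\le |D^2v_\varepsilon|\,|\nabla v_\varepsilon|.
\]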
Introducing $c_5:=\min\set{\frac{2(p-1)}{c_2p},\frac{p}{2c_4}}$ and abbreviating $y_\varepsilon(t):=\int_\Omega u_\varepsilon^p+\int_\Omega |\nabla v_\varepsilon|^{2p}$, we thus obtain from \eqref{eq:diffineq} that
\[
y_\varepsilon'(t)+c_5 y_\varepsilon(t) \leq K \qquad \text{for all } t\in(0,T_{\rm max}),
\]
where
\[
K:=\begin{cases}
p|\Omega|\cdot\kl{\sup_{s>0} (\kappa s^p -\frac{\mu }2s^{p+1}) + |\inf_{s>0} s^{p+1} \ln s|}, &\text{ if \eqref{cond:mularge}},\\
p|\Omega| \sup_{s>0} \kl{\kl{\frac12 k_1(p,N)\norm[\Lom\infty]{\chi v_0}^{\frac2p} + \frac12 k_2(p,N)\norm[\Lom\infty]{\chi v_0}^{2p}-\mu } s^{p+1} + \kappa s^p -\varepsilon s^{p+1}\ln as}&\text{ else}.
\end{cases}
\]
In consequence,
\[
y_\varepsilon(t)\leq \max\set{y_\varepsilon(0);\frac{K}{c_5}}
\]
for all $t\in(0,T_{\rm max})$. We note that $K$ depends on $\varepsilon $ if and only if \eqref{cond:mularge} is not satisfied.
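The latter bound results from an elementary ODE comparison: since $y_\varepsilon'+c_5y_\varepsilon\le K$ on $(0,T_{\rm max})$, we have
\[
 y_\varepsilon(t)\le y_\varepsilon(0)e^{-c_5t}+\frac{K}{c_5}\big(1-e^{-c_5t}\big)\le \max\set{y_\varepsilon(0);\frac{K}{c_5}} \qquad \text{for all } t\in(0,T_{\rm max}).
\]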
\end{proof}
The previous lemma ensures boundedness of $u$ in some $L^p$-space for finite $p$ only. Fortunately, this is already sufficient for the solution to be bounded -- and global.
\begin{lem}\label{lem:ulp.to.boundedness}
Let $T\in(0,\infty]$, $p>N$, $M>0$, $a>0$, $\kappa\in\mathbb{R} $, $\mu >0$. Then there is $C>0$ with the following property:
If for some $\varepsilon \in[0,1)$, the pair $(u_\varepsilon,v_\varepsilon)\in (C^0(\overline{\Omega}\times[0,T))\cap C^{2,1}(\overline{\Omega}\times(0,T)))^2$ is a solution to \eqref{epssys} with $a$ as in \eqref{defa} such that
\[
0\leq u_\varepsilon,\,\, 0\leq v_\varepsilon \text{ in }\Omega\times(0,T)\text{ and } \int_\Omega u_\varepsilon^p(\cdot,t)\leq M \text{ for all } t\in (0,T),
\]
then
\[
\norm[\Lom\infty]{u_\varepsilon(\cdot,t)} + \norm[\Lom\infty]{\nabla v_\varepsilon(\cdot,t)}\leq C \qquad \text{for all } t\in(0,T).
\]
\end{lem}
\begin{proof}
We use the standard estimate for the Neumann heat
semigroup (\cite[Lemma 1.3]{win_aggregationvs}) to conclude that with some $c_1>0$
\begin{align*}
\|\nabla v_\varepsilon(\cdot,t)\|_{L^{\infty}(\Omega)}&\le \|\nabla
e^{t\triangle} v_\varepsilon(\cdot,0)\|_{L^{\infty}(\Omega)}+\int^t_0 \|\nabla
e^{(t-s)\triangle}u_\varepsilon(\cdot,s)v_\varepsilon(\cdot,s)\|_{L^{\infty}(\Omega)}\,ds\\
&\le \norm[\Lom\infty]{\nabla v_0}+\int^t_0
c_1\Big(1+(t-s)^{-\frac{1}{2}-\frac N{2p}}\Big)e^{-\lambda_1(t-s)}\|u_\varepsilon(\cdot,s)v_\varepsilon(\cdot,s)\|_{L^{p}(\Omega)}\,ds \quad \text{for all } t\in(0,T),
\end{align*}
where $\lambda_1$ denotes the first nonzero eigenvalue of
$-\Delta$ in $\Omega$ under the homogeneous Neumann boundary
conditions. Due to Lemma \ref{lem:normvdecreases} and the condition on $u_\varepsilon$, we obtain $c_2>0$ such that
\begin{equation}\label{vlinftybound}
\norm[\Lom \infty]{\nabla v_\varepsilon(\cdot,t)}\leq c_2 \quad \text{for all } t\in (0,T).
\end{equation}
In order to obtain a bound for $u_\varepsilon$, we use the
variation-of-constants formula to represent $u_\varepsilon(\cdot,t)$ as
\begin{eqnarray}\label{duhamel.u}
u_\varepsilon(\cdot,t)&=&e^{(t-t_0)\Delta}u_\varepsilon(\cdot,t_0)-\int^t_{t_0}
e^{(t-s)\Delta}\nabla\cdot(u_\varepsilon(\cdot,s)\nabla v_\varepsilon(\cdot,s))\,ds\nonumber\\
&\quad&+\int^t_{t_0}e^{(t-s)\Delta}(\kappa u_\varepsilon(\cdot,s)-\mu
u_\varepsilon^2(\cdot,s)-\varepsilon u_\varepsilon^2(\cdot,s)\ln au_\varepsilon(\cdot,s))\,ds
\end{eqnarray}
for each $t\in (0,T)$, where $t_0=(t-1)_{+}$.
Due to the estimate
\[
\kappa s-\mu s^2-\varepsilon s^2\ln as\leq \frac1{2a^2e} + \sup_{\xi >0} (\kappa\xi -\mu \xi ^2)=:c_3,
\]
positivity of the heat semigroup ensures that
\begin{equation}\label{uestimate:inhomogeneity}
\int_{t_0}^t e^{(t-s)\Delta }(\kappa u_\varepsilon(\cdot,s)-\mu u_\varepsilon^2(\cdot,s)-\varepsilon u_\varepsilon^2(\cdot,s)\ln au_\varepsilon(\cdot,s))\,ds \leq c_3(t-t_0)\leq c_3.
\end{equation}
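Here we have used that, due to $\varepsilon <1$, the contribution of the logarithmic term is controlled by the minimum of $s\mapsto s^2\ln(as)$, which is attained at $s=(a\sqrt{e})^{-1}$:
\[
 -\varepsilon s^2\ln (as)\le \varepsilon\cdot\frac{1}{2a^2e}\le \frac{1}{2a^2e} \qquad \text{for all } s>0.
\]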
Moreover, from the maximum principle we can easily infer that
\begin{equation}\label{uestimate:initdata.smallt}
\norm[\Lom\infty]{e^{(t-t_0)\Delta} u_\varepsilon(\cdot,t_0)}=\norm[\Lom\infty]{e^{t\Delta} u_0} \leq \norm[\Lom\infty]{u_0} \text{ if } t\in[0,1]
\end{equation}
and that with $c_4>0$ taken from \cite[Lem. 1.3]{win_aggregationvs}
\begin{equation}\label{uestimate:initdata.larget}
\norm[\Lom\infty]{e^{(t-t_0)\Delta}u_\varepsilon(\cdot,t_0)}= \norm[\Lom\infty]{e^{1\cdot\Delta} u_\varepsilon(\cdot,t-1)} \leq c_4 (1+1^{-\frac N2})
\norm[\Lom 1]{u_\varepsilon(\cdot,t-1)}\leq 2c_4 m_1,
\end{equation}
whenever $t>1$, with $m_1$ as in \eqref{2.1}.
Finally, we estimate the second integral on the right-hand side of
\eqref{duhamel.u}. \cite[Lemma 1.3]{win_aggregationvs} provides $c_5>0$ fulfilling
\begin{align}\label{uestimate:chemotaxisterm}
\Big\|\int^t_{t_0} e^{(t-s)\Delta}\nabla\cdot(u_\varepsilon(\cdot,s)\nabla
v_\varepsilon(\cdot,s))\,ds\Big\|_{L^{\infty}(\Omega)}&\le c_5\int^t_{t_0}
(t-s)^{-\frac{1}{2}-\frac{N}{2p}}\|u_\varepsilon(\cdot,s)\nabla
v_\varepsilon(\cdot,s)\|_{L^{p}(\Omega)}\,ds\nonumber\\
&\leq c_5M^{\frac1p} c_2\int_0^1\sigma^{-\frac12-\frac N{2p}}\, d\sigma
=:c_6
\end{align}
for $t\in(0,T)$. In view of \eqref{duhamel.u}, \eqref{uestimate:initdata.smallt}, \eqref{uestimate:initdata.larget} and \eqref{uestimate:chemotaxisterm}, we have obtained that
\[
0\leq u_\varepsilon(\cdot,t)\leq \max\set{\norm[\Lom\infty]{u_0}, 2c_4m_1} + c_6 + c_3
\]
holds for any $t\in(0,T)$, which combined with \eqref{vlinftybound} is the desired conclusion.
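We remark that the integral appearing in \eqref{uestimate:chemotaxisterm} is finite precisely because $p>N$:
\[
 \int_0^1\sigma^{-\frac12-\frac N{2p}}\,d\sigma=\frac{1}{\frac12-\frac{N}{2p}}<\infty, \qquad\text{since } \frac12+\frac{N}{2p}<1.
\]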
\end{proof}
In fact, the assumption of Lemma \ref{lem:ulp.to.boundedness} suffices for even higher regularity, as we will see in Lemma \ref{lem:holder}.
For the moment we return to the proof of global existence of solutions.
\begin{lem}\label{lem:ge.for.positive.eps.or.large.mu}
Let $\varepsilon \in(0,1)$ and let $a$ be as in \eqref{defa} or let $\varepsilon =0$ and $\mu >k_1(N,N)\norm[\Lom\infty]{\chi v_0}^{\frac2N}+k_2(N,N)\norm[\Lom\infty]{\chi v_0}^{2N}$, where $k_1$, $k_2$ are as in Lemma \ref{l3.5}. Then the classical solution to \eqref{epssys} given by Lemma \ref{criterion} is global and bounded.
\end{lem}
\begin{proof}
By continuity, there is $p>N$ such that $\mu >k_1(p,N)\norm[\Lom\infty]{\chi v_0}^{\frac2p}+k_2(p,N)\norm[\Lom\infty]{\chi v_0}^{2p}$, and Lemma \ref{l3.5} shows that $\int_\Omega u_\varepsilon^p$ is bounded on $(0,T_{\rm max})$. Lemma \ref{lem:ulp.to.boundedness} together with Lemma \ref{lem:normvdecreases} turns this into a uniform bound on $\norm[\Lom \infty]{u_\varepsilon(\cdot,t)}+\norm[W^{1,\infty}(\Omega)]{v_\varepsilon(\cdot,t)}$ on $(0,T_{\rm max})$, so that the extensibility criterion \eqref{extcrit} shows that $T_{\rm max}=\infty$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm1}] Theorem \ref{thm1} is the case $\varepsilon =0$ in Lemma \ref{lem:ge.for.positive.eps.or.large.mu}.\end{proof}
\section{Stabilization}\label{sec:stabilization}
In this section, we shall consider the large time asymptotic
stabilization of any global classical bounded solution.
In a first step we derive uniform Hölder bounds that will facilitate convergence. After that, we have to ensure that solutions actually converge, and in particular must identify their limit.
In the spirit of the persistence-of-mass result in \cite{taowin_persistence}, showing that $v\to 0$ as $t\to \infty$ would be possible by relying on a uniform lower bound for $\int_\Omega u$ and finiteness of $\int_0^\infty\int_\Omega uv$ (see also \cite[Lemmata 3.2 and 3.3]{lankeit_fluid}). We will instead focus on other information that can be obtained from the following functional of a type already employed in \cite{lankeit_fluid} (after the example of \cite{win_ksns_logsource}), namely
\begin{equation}\label{eq:defF}
\mathcal{F}_{\varepsilon}(t):=\int_{\Omega}u_\varepsilon(\cdot,t)-\frac{\kappa}{\mu }\int_{\Omega}\ln u_\varepsilon(\cdot,t)+\frac{\kappa}{2\mu }\int_{\Omega}v_\varepsilon^2(\cdot,t).
\end{equation}
This way, in Lemma \ref{lem:vtozero} we will achieve a convergence result for $v_\varepsilon$ that will also be useful in the investigation of the large time behaviour of weak solutions in Section \ref{sec:weaksol}.
\begin{lem}\label{lem:holder}
Let $\varepsilon\in[0,1)$, $\mu >0$, $\chi >0$, $\kappa\in\mathbb{R} $. Let $(u_\varepsilon,v_\varepsilon)\in C^0(\overline{\Omega}\times [0,\infty))\cap C^{2,1}(\overline{\Omega}\times(0,\infty))$ be a solution to \eqref{epssys} with $a$ as in \eqref{defa} which is bounded in the sense that there exists $M>0$ such that
\begin{equation}\label{eq:boundednessconditionforregularity}
\norm[\Lom\infty]{u_\varepsilon(\cdot,t)}+\norm[W^{1,\infty}(\Omega)]{v_\varepsilon(\cdot,t)}\leq M \quad \text{for all } t\in(0,\infty).
\end{equation}
Then there are $\alpha\in(0,1)$ and $C>0$ such that
\[
\normm{C^{\alpha,\frac{\alpha}2}(\overline{\Omega}\times[t,t+1])}{u_\varepsilon} + \normm{C^{2+\alpha,1+\frac\alpha2}(\overline{\Omega}\times[t,t+1])}{v_\varepsilon}\leq C \qquad \text{for all } t\in (2,\infty).
\]
\end{lem}
\begin{proof}
Due to the time-uniform ($L^\infty(\Omega)$-)bound on $v_\varepsilon$ and on the right-hand side of
\[
v_{\varepsilon t}-\Delta v_\varepsilon = -u_\varepsilon v_\varepsilon \text{ in } \Omega\times[t,t+2], \quad \partial_{\nu} v_\varepsilon\big\rvert_{\partial\Omega}=0
\]
of which $v_\varepsilon$ is a weak solution, \cite[Thm. 1.3]{porzio_vespri} immediately yields $\alpha _1\in(0,1)$ and $c_1>0$ such that
\[
\normm{C^{\alpha _1,\frac{\alpha _1}2}(\overline{\Omega}\times[t+1,t+2])}{v_\varepsilon}\leq c_1
\]
for any $t>0$.
Similarly, \eqref{eq:boundednessconditionforregularity} provides $t$-independent bounds on the functions $\psi _0:=\frac12\chi ^2u_\varepsilon^2|\nabla v_\varepsilon|^2$, $\psi _1:=\chi u_\varepsilon|\nabla v_\varepsilon|$, $\psi _2:=|\kappa|u_\varepsilon-\mu u_\varepsilon^2-\varepsilon u_\varepsilon^2\ln au_\varepsilon$ in conditions (A$_1$), (A$_2$), (A$_3$) of \cite{porzio_vespri}. An application of \cite[Thm. 1.3]{porzio_vespri} to solutions of
\[
u_{\varepsilon t} - \nabla\cdot(\nabla u_\varepsilon-\chi u_\varepsilon\nabla v_\varepsilon) = \kappa u_\varepsilon-\mu u_\varepsilon^2-\varepsilon u_\varepsilon^2\ln au_\varepsilon \text{ in } \Omega\times[t,t+2], \quad \partial_{\nu} u_\varepsilon\big\rvert_{\partial\Omega}=0
\]
therefore provides $\alpha _2\in(0,1)$, $c_2>0$ such that
\[
\normm{C^{\alpha _2,\frac{\alpha _2}2}(\overline{\Omega}\times[t+1,t+2])}{u_\varepsilon}\leq c_2
\]
for any $t>0$.
We pick a monotone increasing function $\zeta \in C^\infty(\mathbb{R} )$ such that $\zeta |_{(-\infty,\frac12)}\equiv 0$, $\zeta|_{(1,\infty)}\equiv 1$ and note that, for any $t_0>1$, the function $(x,t)\mapsto \zeta (t-t_0)v_\varepsilon(x,t)$ belongs to $C^{2,1}(\overline{\Omega}\times[t_0,t_0+2])$ and satisfies
\[
(\zeta v_\varepsilon)_t=\Delta (\zeta v_\varepsilon) - u_\varepsilon\zeta v_\varepsilon + \zeta 'v_\varepsilon, \quad (\zeta v_\varepsilon)(\cdot,t_0)=0, \quad \partial_{\nu}(\zeta v_\varepsilon)\big\rvert_{\partial\Omega}=0.
\]
Due to the uniform bound for $u_\varepsilon\zeta v_\varepsilon + \zeta 'v_\varepsilon$ in some Hölder space, an application of \cite[Thm. IV.5.3]{LSU} (together with \cite[Thm. III.5.1]{LSU}) ensures the existence of $\alpha _3\in(0,1)$ and $c_3>0$ such that
\[
\normm{C^{2+\alpha _3,1+\frac{\alpha _3}2}(\overline{\Omega}\times[t_0+1,t_0+2])}{v_\varepsilon}=\normm{C^{2+\alpha _3,1+\frac{\alpha _3}2}(\overline{\Omega}\times[t_0+1,t_0+2])}{\zeta v_\varepsilon}\leq\normm{C^{2+\alpha _3,1+\frac{\alpha _3}2}(\overline{\Omega}\times[t_0,t_0+2])}{\zeta v_\varepsilon}\leq c_3
\]
for any $t_0>1$.
\end{proof}
\begin{lem}\label{lem:ddtF}
Let $u_0$, $v_0$ satisfy \eqref{id} and assume that $\mu >0$, $\kappa>0$, $\chi >0$, $a=\frac{\mu }{\kappa}$ (as in \eqref{defa}). Then for any $\varepsilon \in[0,1)$ any solution $(u_\varepsilon,v_\varepsilon)\in C^{2,1}(\overline{\Omega}\times(0,\infty))\cap C^0(\overline{\Omega}\times[0,\infty))$ of \eqref{epssys} satisfies
\begin{equation}\label{FODI}
\mathcal{F}_\varepsilon'(t)+\mu \int_\Omega\kl{u_\varepsilon-\frac{\kappa}{\mu }}^2\leq 0 \qquad \text{for all } t\in(0,\infty)
\end{equation}
and, consequently, there is $C>0$ such that for any $\varepsilon \in [0,1)$
\begin{equation}\label{eq:ueminlimit2.bounded}
\int_0^\infty \int_\Omega \kl{u_\varepsilon-\frac{\kappa}{\mu }}^2 \leq C.
\end{equation}
\end{lem}
\begin{proof}
In fact, on $(0,\infty)$
\begin{align*}
\mathcal{F}_\varepsilon'&=\int_{\Omega}
u_{\varepsilon t}-\frac{\kappa}{\mu }\int_{\Omega}\frac{u_{\varepsilon t}}{u_\varepsilon}+\frac{\kappa}{\mu }\int_{\Omega}v_\varepsilon v_{\varepsilon t}
\nonumber \\
&= \kappa\int_{\Omega}
u_\varepsilon-\mu\int_{\Omega}u_\varepsilon^2 - \varepsilon \int_\Omega u_\varepsilon^2\ln au_\varepsilon -\frac{\kappa}{\mu }\int_{\Omega}\frac{\Delta
u_\varepsilon}{u_\varepsilon}+\frac{\kappa}{\mu }\int_{\Omega}\frac{\nabla u_\varepsilon\cdot \nabla
v_\varepsilon}{u_\varepsilon}-\frac{\kappa^2}{\mu }\int_{\Omega} 1+\kappa\int_{\Omega}
u_\varepsilon\\
&\quad+\frac{\kappa}{\mu }\int_{\Omega}v_\varepsilon\Delta
v_\varepsilon-\frac{\kappa}{\mu }\int_{\Omega}u_\varepsilon v_\varepsilon^2
+\varepsilon \frac{\kappa}{\mu }\int_\Omega u_\varepsilon\ln au_\varepsilon.
\end{align*}
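For the subsequent estimate we use the integration-by-parts identities and the pointwise Young inequality
\[
 -\int_\Omega\frac{\Delta u_\varepsilon}{u_\varepsilon}=-\int_\Omega\frac{|\nabla u_\varepsilon|^2}{u_\varepsilon^2},\qquad
 \int_\Omega v_\varepsilon\Delta v_\varepsilon=-\int_\Omega|\nabla v_\varepsilon|^2,\qquad
 \frac{\nabla u_\varepsilon\cdot\nabla v_\varepsilon}{u_\varepsilon}\le\frac{|\nabla u_\varepsilon|^2}{2u_\varepsilon^2}+\frac{|\nabla v_\varepsilon|^2}{2}.
\]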
Because $\varepsilon s(\frac{\kappa}{\mu }-s)\ln (\frac{\mu }{\kappa}s)$ is nonpositive for any $s>0$, we obtain
\begin{align*}
\mathcal{F}_\varepsilon'&\le
-\mu\int_{\Omega}\Big(u_\varepsilon-\frac{\kappa}{\mu }\Big)^2-\frac{\kappa}{\mu }\int_{\Omega}\frac{|\nabla
u_\varepsilon|^2}{u_\varepsilon^2}+\frac{\kappa}{2\mu }\int_{\Omega}\frac{|\nabla
u_\varepsilon|^2}{u_\varepsilon^2}+\frac{\kappa}{2\mu }\int_{\Omega}|\nabla
v_\varepsilon|^2-\frac{\kappa}{\mu }\int_{\Omega}|\nabla
v_\varepsilon|^2-\frac{\kappa}{\mu }\int_{\Omega}u_\varepsilon v_\varepsilon^2\nonumber\\
&=
-\mu\int_{\Omega}\Big(u_\varepsilon-\frac{\kappa}{\mu }\Big)^2-\frac{\kappa}{2\mu }\int_{\Omega}\frac{|\nabla
u_\varepsilon|^2}{u_\varepsilon^2}-\frac{\kappa}{2\mu }\int_{\Omega}|\nabla
v_\varepsilon|^2-\frac{\kappa}{\mu }\int_{\Omega}u_\varepsilon v_\varepsilon^2
\end{align*}
on $(0,\infty)$, which implies \eqref{FODI}.
\end{proof}
Building upon \eqref{eq:ueminlimit2.bounded} and the second equation of \eqref{epssys}, we can now acquire decay information about $v_{\varepsilons }$:
\begin{lem}\label{lem:vtozero}
Let $\chi >0$, $\kappa>0$, $\mu >0$, let $u_0$ and $v_0$ satisfy \eqref{id} and moreover set $a:=\frac{\mu }{\kappa}$. Then for every $p\in[1,\infty)$ and every $\eta >0$ there is $T>0$ such that for every $t>T$ and every $\varepsilon \in[0,1)$ every global classical solution $(u_\varepsilon,v_\varepsilon)$ of \eqref{epssys} satisfies
\begin{equation}\label{eq:vtozerolp}
\norm[\Lom p]{v_\varepsilon(\cdot,t)}<\eta .
\end{equation}
\end{lem}
\begin{proof}
By Lemma \ref{lem:ddtF} we find $c_1>0$ such that for any $\varepsilon\in[0,1)$ we have $\int_0^\infty\int_\Omega \kl{\frac{\kappa}{\mu }-u_\varepsilon}^2<c_1$.
Integrating the second equation of \eqref{epssys} shows that
\[
(0,\infty)\ni t\mapsto \int_\Omega v_\varepsilon(\cdot,t)
\]
is decreasing, and that, moreover,
\[
\frac{\kappa}{\mu }\int_0^t\int_\Omega v_\varepsilon + \int_0^t\int_\Omega\kl{u_\varepsilon-\frac{\kappa}{\mu }}v_\varepsilon = \int_0^t \int_\Omega u_\varepsilon v_\varepsilon \leq \int_\Omega v_0
\]
for any $t>0$.
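Indeed, since $\frac{d}{dt}\int_\Omega v_\varepsilon=-\int_\Omega u_\varepsilon v_\varepsilon\le 0$ by the second equation of \eqref{epssys} and the Neumann boundary condition, an integration in time together with the nonnegativity of $v_\varepsilon$ shows that
\[
 \int_0^t\int_\Omega u_\varepsilon v_\varepsilon = \int_\Omega v_0-\int_\Omega v_\varepsilon(\cdot,t)\leq \int_\Omega v_0 \qquad \text{for all } t>0.
\]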
We conclude that for any $t>0$ and any $\varepsilon \in[0,1)$
\begin{align*}
\int_\Omega v_\varepsilon(\cdot,t) \leq \frac1t\int_0^t \int_\Omega v_\varepsilon&\leq\frac{\mu }{\kappa t} \int_\Omega v_0 + \frac{\mu }{\kappa t}\int_0^t\int_\Omega \kl{\frac{\kappa}{\mu }-u_\varepsilon}v_\varepsilon\\
&\leq \frac{\mu }{\kappa t}\int_\Omega v_0 + \frac{\mu }{\kappa t}\sqrt{\int_0^t\int_\Omega v_\varepsilon^2}\sqrt{\int_0^t\int_\Omega \kl{\frac{\kappa}{\mu }-u_\varepsilon}^2}\\
&\leq \frac{\mu }{\kappa t}\int_\Omega v_0 +\frac{\mu }{\kappa t} \sqrt{\norm[\Lom\infty]{v_0}^2|\Omega|t} \sqrt{\int_0^\infty\int_\Omega\kl{\frac{\kappa}{\mu }-u_\varepsilon}^2}\\
&\leq \frac{\mu }{\kappa t}\int_\Omega v_0 +\frac{\mu \sqrt{|\Omega|}\norm[\Lom\infty]{v_0}\sqrt{c_1}}{\kappa\sqrt{t}}
\end{align*}
and hence already have that $\int_\Omega v_\varepsilon(\cdot,t)$ converges to $0$ as $t\to\infty$, uniformly with respect to $\varepsilon $. In order to obtain \eqref{eq:vtozerolp}, we invoke the additional interpolation
\[
\norm[\Lom p]{v_\varepsilon(\cdot,t)}\leq \norm[\Lom\infty]{v_\varepsilon(\cdot,t)}^{\frac{p-1}p}\norm[\Lom 1]{v_\varepsilon(\cdot,t)}^{\frac1p}\leq \norm[\Lom\infty]{v_0}^{\frac{p-1}p}\norm[\Lom 1]{v_\varepsilon(\cdot,t)}^{\frac1p},
\]
valid for any $t>0$.
\end{proof}
A combination of the previous lemmata in this section reveals the large time behaviour of bounded classical solutions:
\begin{lem}\label{lem:convergence.classical}
Let $\kappa>0$, $\mu >0$, $\chi >0$ and let $u_0$, $v_0$ satisfy \eqref{id}. For any solution $(u,v)\in C^{2,1}(\overline{\Omega}\times(0,\infty))\cap C^0(\overline{\Omega}\times[0,\infty))$ of \eqref{a} that satisfies the boundedness condition \eqref{eq:boundednessconditionforregularity}, we have
\begin{equation}\label{eq:convergencestatement}
u(\cdot,t)\to \frac{\kappa}{\mu } \quad \text{in } C^0(\overline{\Omega}),\quad v(\cdot,t)\to 0\quad \text{in }C^2(\overline{\Omega})
\end{equation}
as $t\to \infty $.
\end{lem}
\begin{proof} For $j\in\mathbb{N} $ we define
\[
u_j(x,\tau ):=u(x,j+\tau ), \qquad v_j(x,\tau ):=v(x,j+\tau ),\qquad x\in\overline{\Omega}, \tau \in[0,1].
\]
We let $(j_k)_{k\in \mathbb{N}}\subset\mathbb{N} $ be a sequence
satisfying $j_k\to \infty $ as $k\to \infty $. By Lemma
\ref{lem:holder} there are $\alpha \in(0,1)$, $C>0$ such that
\[
\normm{C^{\alpha,\frac{\alpha }2}(\overline{\Omega}\times[0,1])}{u_{j_k}}\leq C, \quad \normm{C^{2+\alpha,1+\frac{\alpha}2}(\overline{\Omega}\times[0,1])}{v_{j_k}}\leq C
\]
for all $k\in \mathbb{N} $ and hence there are $\tilde u,\tilde v\in C^{\alpha
,\frac{\alpha }2}(\overline{\Omega}\times[0,1])$ such that $u_{j_{k_l}}\to \tilde u$
in $C^0(\overline{\Omega}\times[0,1])$ and $v_{j_{k_l}}\to \tilde v$ in
$C^2(\overline{\Omega}\times[0,1])$ as $l\to \infty $ along a suitable
subsequence. According to \eqref{eq:ueminlimit2.bounded} and
Lemma \ref{lem:vtozero}, $\tilde u\equiv \frac{\kappa}{\mu }$ and $\tilde v\equiv
0$. Because every subsequence of $((u_j,v_j))_{j\in\mathbb{N} }$
contains a subsequence converging to $(\frac{\kappa}{\mu },0)$, we
conclude that $(u_j,v_j)\to(\frac{\kappa}{\mu },0)$ in
$C^0(\overline{\Omega}\times[0,1])\times C^2(\overline{\Omega}\times[0,1])$ and hence, a
fortiori, \eqref{eq:convergencestatement}.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm3}]
The statement of Lemma \ref{lem:convergence.classical} is even slightly stronger than that of Theorem \ref{thm3}.
\end{proof}
\section{Weak solutions}\label{sec:weaksol}
The purpose of this section is the construction of weak solutions to \eqref{a} in those cases where Theorem \ref{thm1} is not applicable. To this end, let us first state what a weak solution is supposed to be:
\begin{dnt}\label{def:weaksol}
A weak solution to \eqref{a} for initial data $(u_0,v_0)$ as in \eqref{id} is a pair $(u,v)$ of functions
\begin{align*}
u\in L^2_{loc}(\overline{\Omega}\times[0,\infty)) \quad \text{ with } \quad \nabla u \in L^1_{loc}(\overline{\Omega}\times [0,\infty)),\\
v\in L^\infty(\Omega\times(0,\infty))\quad \text{ with } \quad \nabla v \in L^2(\Omega\times(0,\infty))
\end{align*}
such that, for every $\varphi \in C_0^{\infty}(\overline{\Omega}\times[0,\infty))$,
\begin{align*}
-\int_0^\infty\int_\Omega u\varphi _t -\int_\Omega u_0\varphi (\cdot,0) &= -\int_0^\infty\int_\Omega \nabla u\cdot \nabla \varphi + \chi \int_0^\infty\int_\Omega u\nabla v\cdot\nabla\varphi +\kappa\int_0^\infty\int_\Omega u\varphi -\mu \int_0^\infty\int_\Omega u^2\varphi \\
-\int_0^\infty\int_\Omega v\varphi _t -\int_\Omega v_0\varphi (\cdot,0) &= -\int_0^\infty\int_\Omega \nabla v\cdot \nabla \varphi - \int_0^\infty \int_\Omega uv\varphi
\end{align*}
hold true.
\end{dnt}
Some of the estimates needed for the compactness arguments in the construction of these weak solutions will spring from the following quasi-energy inequality:
\begin{lem}\label{lem:energyfunctional}
Let $\mu ,\chi \in(0,\infty)$, $\kappa\in\mathbb{R} $ and let $(u_0,v_0)$ satisfy \eqref{id}.
There are constants $k_1>0$, $k_2>0$ and $k_3>0$ such that for any $\varepsilon \in(0,1)$ the solution of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\begin{align}\label{eq:ddtuloguplusnavdv}
\ddt&\kl{\int_\Omega u_\varepsilon\ln u_\varepsilon + \frac{\chi }{2} \int_\Omega \frac{|\nabla v_\varepsilon|^2}{v_\varepsilon}}\nonumber\\
& + \int_\Omega \frac{|\nabla u_\varepsilon|^2}{u_\varepsilon} + k_1 \int_\Omega \frac{|\nabla v_\varepsilon|^4}{v_\varepsilon^3} + k_1 \int_\Omega v_\varepsilon|D^2\ln v_\varepsilon|^2 + \frac{\mu }2\int_\Omega u_\varepsilon^2\ln u_\varepsilon+ \varepsilon \int_\Omega u_\varepsilon^2\ln au_\varepsilon\ln u_\varepsilon \nonumber\\
&\leq k_2\int_\Omega v_\varepsilon+k_3
\end{align}
on $(0,\infty)$.
\end{lem}
\begin{proof}
According to Lemma \ref{lem:ge.for.positive.eps.or.large.mu}, for any $\varepsilon \in(0,1)$, the solution to \eqref{epssys} is global, and from the second equation of \eqref{epssys} we obtain that
\begin{align}\label{eq:ddtnavdv}
\ddt \int_\Omega \frac{|\nabla v_\varepsilon|^2}{v_\varepsilon} &= 2\int_\Omega \frac{\nabla v_\varepsilon\cdot \nabla v_{\varepsilon t}}{v_\varepsilon} - \int_\Omega \frac{|\nabla v_\varepsilon|^2}{v_\varepsilon^2}v_{\varepsilon t}\nonumber\\
&=-2\int_\Omega \frac{\Delta v_\varepsilon\, v_{\varepsilon t}}{v_\varepsilon} + 2\int_\Omega \frac{|\nabla v_\varepsilon|^2}{v_\varepsilon^2} v_{\varepsilon t} -\int_\Omega \frac{|\nabla v_\varepsilon|^2}{v_\varepsilon^2}v_{\varepsilon t}\nonumber\\
&= -2\int_\Omega \frac{|\Delta v_\varepsilon|^2}{v_\varepsilon} + 2\int_\Omega u_\varepsilon\Delta v_\varepsilon +\int_\Omega \frac{|\nabla v_\varepsilon|^2}{v_\varepsilon^2}\Delta v_\varepsilon - \int_\Omega \frac{|\nabla v_\varepsilon|^2}{v_\varepsilon}u_\varepsilon\nonumber\\
&\leq -2\int_\Omega \frac{|\Delta v_\varepsilon|^2}{v_\varepsilon} - 2\int_\Omega \nabla u_\varepsilon\cdot \nabla v_\varepsilon + \int_\Omega \frac{|\nabla v_\varepsilon|^2}{v_\varepsilon^2}\Delta v_\varepsilon\quad\text{on } (0,\infty).
\end{align}
Here we may rely on Lemma \ref{lem:elementary.estimates} b) to obtain $k_1>0$, $k_2>0$ such that
\[
\ddt \int_\Omega \frac{|\nabla v_\varepsilon|^2}{v_\varepsilon}\leq -2\int_\Omega \nabla u_\varepsilon\cdot \nabla v_\varepsilon - \frac{2k_1}{\chi }\int_\Omega v_\varepsilon|D^2\ln v_\varepsilon|^2-\frac{2k_1}{\chi }\int_\Omega \frac{|\nabla v_\varepsilon|^4}{v_\varepsilon^3} + \frac{2k_2}{\chi } \int_\Omega v_\varepsilon \quad \text{on } (0,\infty).
\]
Concerning the entropy term, we compute
\begin{align}\label{eq:ddtulogu}
\ddt \int_\Omega u_\varepsilon\ln u_\varepsilon &= \int_\Omega u_{\varepsilon t} \ln u_\varepsilon + \kappa\int_\Omega u_\varepsilon - \mu \int_\Omega u_\varepsilon^2 -\varepsilon \int_\Omega u_\varepsilon^2\ln au_\varepsilon\nonumber\\
&= -\int_\Omega \frac{|\nabla u_\varepsilon|^2}{u_\varepsilon} + \chi \int_\Omega \nabla u_\varepsilon\cdot \nabla v_\varepsilon + \kappa\int_\Omega u_\varepsilon\ln u_\varepsilon - \mu \int_\Omega u_\varepsilon^2\ln u_\varepsilon-\varepsilon \int_\Omega u_\varepsilon^2\ln au_\varepsilon\ln u_\varepsilon\nonumber\\
&\quad+\kappa\int_\Omega u_\varepsilon-\mu \int_\Omega u_\varepsilon^2-\varepsilon \int_\Omega u_\varepsilon^2\ln au_\varepsilon \quad \text{on } (0,\infty ).
\end{align}
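Here the first identity combines $\ddt\int_\Omega u_\varepsilon\ln u_\varepsilon=\int_\Omega u_{\varepsilon t}(\ln u_\varepsilon+1)$ with the mass balance obtained by integrating the first equation of \eqref{epssys} over $\Omega$, the flux terms vanishing due to the boundary conditions:
\[
 \int_\Omega u_{\varepsilon t} = \kappa\int_\Omega u_\varepsilon-\mu \int_\Omega u_\varepsilon^2-\varepsilon \int_\Omega u_\varepsilon^2\ln au_\varepsilon \qquad \text{on } (0,\infty).
\]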
Additionally, $s^2\ln as\geq-\frac{1}{2a^2e}$ for all $s\in(0,\infty)$, so that for all $\varepsilon \in(0,1)$
we have $-\varepsilon (s^2\ln as)<\frac1{2a^2e}$. Since moreover
$\lim_{s\to \infty}(\kappa s-\mu s^2+\kappa s\ln s-\frac{\mu
}2s^2\ln s)=-\infty $, we can find $k_3>0$ such that
\[
\kappa s\ln s -\frac{\mu }{2}s^2\ln s +\kappa s-\mu s^2-\varepsilon s^2\ln as \leq \frac{k_3}{|\Omega|}
\]
for any $s\geq 0$ and $\varepsilon \in(0,1)$. Inserting this into the sum of \eqref{eq:ddtulogu} and a multiple of \eqref{eq:ddtnavdv}, we obtain \eqref{eq:ddtuloguplusnavdv}.
\end{proof}
The following lemma serves as collection of the bounds we have prepared:
\begin{lem}\label{lem:bounds}
Let $\mu >0$, $\chi >0$, $\kappa\in\mathbb{R} $ and suppose that $u_0$, $v_0$ satisfy \eqref{id}.
Then there is $C>0$ and for any $T>0$ and $q>N$ there is $C(T)>0$ such that for any $\varepsilon \in(0,1)$ the solution $(u_\varepsilon,v_\varepsilon)$ of \eqref{epssys} with $a$ as in \eqref{defa} satisfies
\begin{align}
\int_0^T \int_\Omega u_\varepsilon^2\leq C(T)\label{bd:ul2}\\
\int_0^T \int_\Omega \frac{|\nabla u_\varepsilon|^2}{u_\varepsilon}\leq C(T)\label{bd:nau2u}\\
\int_0^T \int_\Omega |\nabla u_\varepsilon|^\frac43\leq C(T)\label{bd:nau}\\
\int_0^T \int_\Omega u_\varepsilon^2\ln au_\varepsilon \leq C(T)\label{bd:u2logu}\\
\int_0^T \int_\Omega \varepsilon u_\varepsilon^2(\ln u_\varepsilon)\ln au_\varepsilon\leq C(T)\label{bd:epsu2logu2}\\
\int_0^T \int_\Omega |\nabla v_\varepsilon|^4 \leq C\label{bd:nav4}\\
\int_0^{\infty } \int_\Omega |\nabla v_\varepsilon|^2\leq C\label{bd:nav}\\
\norm[ L^\infty(\Omega\times (0,\infty ))]{v_\varepsilon}\leq C\label{bd:v}\\
\norm[L^2((0,T);(W_0^{1,2}(\Omega))^\ast)]{v_{\varepsilon t}}\leq C(T)\label{bd:vt}\\
\norm[L^1((0,T);(W_0^{2,q}(\Omega))^\ast)]{u_{\varepsilon t}}\leq C(T)\label{bd:ut}
\end{align}
If, moreover, $\kappa>0$, then there is $C>0$ such that for any $\varepsilon \in(0,1)$ the solution $(u_\varepsilon,v_\varepsilon)$ of \eqref{epssys} with $a=\frac{\mu}{\kappa}$ as in \eqref{defa} satisfies
\begin{equation}
\int_0^\infty\int_\Omega \kl{u_\varepsilon-\frac{\kappa}{\mu }}^2 \leq C.\label{bd:uminlimit2}
\end{equation}
\end{lem}
\begin{proof}
Boundedness of $v_\varepsilon$ as in \eqref{bd:v} has been shown in Lemma \ref{lem:normvdecreases}; \eqref{bd:ul2}, \eqref{bd:nau2u}, \eqref{bd:u2logu}, \eqref{bd:epsu2logu2} result from Lemma \ref{lem:energyfunctional} by straightforward integration, as does \eqref{bd:nav4} if Lemma \ref{lem:normvdecreases} is taken into account. Testing the second equation in \eqref{epssys} by $v_\varepsilon$, we readily obtain \eqref{bd:nav}. By an application of Hölder's inequality, \eqref{bd:nau} immediately follows from \eqref{bd:ul2} and \eqref{bd:nau2u}. Moreover, \eqref{bd:uminlimit2} is a consequence of \eqref{FODI}.
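For instance, \eqref{bd:nau} results from Hölder's inequality with the exponents $\frac32$ and $3$:
\[
 \int_0^T\int_\Omega |\nabla u_\varepsilon|^{\frac43}=\int_0^T\int_\Omega \Big(\frac{|\nabla u_\varepsilon|^2}{u_\varepsilon}\Big)^{\frac23}u_\varepsilon^{\frac23}
 \leq \Big(\int_0^T\int_\Omega \frac{|\nabla u_\varepsilon|^2}{u_\varepsilon}\Big)^{\frac23}\Big(\int_0^T\int_\Omega u_\varepsilon^2\Big)^{\frac13}.
\]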
For any $\varphi \in C_0^\infty(\overline{\Omega}\times[0,T))$ we have
\begin{align*}
\int_0^T\int_\Omega v_{\varepsilon t}\varphi &= -\int_0^T\int_\Omega \nabla \varphi \cdot \nabla v_\varepsilon - \int_0^T\int_\Omega \varphi u_\varepsilon v_\varepsilon \\
&\leq \norm[L^2(\Omega\times(0,T))]{\nabla \varphi }\norm[L^2(\Omega\times(0,T))]{\nabla v_\varepsilon} + \norm[L^\infty(\Omega\times(0,T))]{v_\varepsilon}\norm[L^2(\Omega\times(0,T))]{u_\varepsilon}\norm[L^2(\Omega\times(0,T))]{\varphi }
\end{align*}
and hence -- by \eqref{bd:ul2}, \eqref{bd:nav} and \eqref{bd:v} -- \eqref{bd:vt}. In order to obtain \eqref{bd:ut}, we let $\varphi \in L^\infty((0,T);W_0^{2,q}(\Omega))$ with $\norm[L^\infty((0,T);W_0^{2,q}(\Omega))]{\varphi }\leq 1$ and have
\begin{align*}
\int_0^T\int_\Omega u_{\varepsilon t} \varphi &=\int_0^T\int_\Omega u_\varepsilon \Delta \varphi + \chi \int_0^T\int_\Omega u_\varepsilon\nabla v_\varepsilon\cdot\nabla\varphi + \kappa\int_0^T\int_\Omega u_\varepsilon\varphi -\mu \int_0^T\int_\Omega u_\varepsilon^2\varphi +\varepsilon \int_0^T\int_\Omega \varphi u_\varepsilon^2\ln au_\varepsilon\\
&\leq \norm[L^2(\Omega\times(0,T))]{u_\varepsilon}\norm[L^2(\Omega\times(0,T))]{\Delta \varphi }+\chi \norm[L^2(\Omega\times(0,T))]{u_\varepsilon}\norm[L^2(\Omega\times(0,T))]{\nabla v_\varepsilon}\norm[L^\infty(\Omega\times(0,T))]{\nabla \varphi } \\
&\quad+ |\kappa| \norm[L^2(\Omega\times(0,T))]{u_\varepsilon}\norm[L^2(\Omega\times(0,T))]{\varphi }+\mu \norm[L^2(\Omega\times(0,T))]{u_\varepsilon}^2\norm[L^\infty(\Omega\times(0,T))]{\varphi } \\
&\quad+ \norm[L^\infty(\Omega\times(0,T))]{\varphi }\varepsilon \int_0^T\int_\Omega u_\varepsilon^2|\ln au_\varepsilon|,
\end{align*}
which, due to \eqref{bd:ul2}, \eqref{bd:nav}, \eqref{bd:u2logu}, proves \eqref{bd:ut}.
\end{proof}
By means of compactness arguments, these estimates allow for the construction of weak solutions. This is to be our next undertaking:
\begin{lem}\label{lem:weaksol}
Let $\mu >0$, $\chi >0$, $\kappa\in\mathbb{R} $ and assume that $u_0$, $v_0$ satisfy \eqref{id}.
There are a sequence $(\varepsilon _j)_{j\in \mathbb{N}}$, $\varepsilon _j\searrow 0$ and functions
\begin{align*}
u&\in L^2_{loc}(\overline{\Omega}\times[0,\infty)) \quad \text{ with } \quad \nabla u\in L^{\frac43}_{loc}(\overline{\Omega}\times [0,\infty)),\\
v&\in L^\infty(\Omega\times(0,\infty)) \quad \text{ with }\quad \nabla v \in L^2(\Omega\times(0,\infty))
\end{align*}
such that the solutions $(u_\varepsilon,v_\varepsilon)$ of \eqref{epssys} with $a$ as in \eqref{defa} satisfy
\begin{align}
u_\varepsilon&\to u & &\text{in } L^{\frac 43}_{loc}([0,\infty);L^{\frac43}(\Omega)) \quad \text{ and a.e. in } \Omega\times(0,\infty)\label{conv:u}\\
\nabla u_\varepsilon&\rightharpoonup \nabla u&&\text{ in } L^{\frac 43}_{loc}([0,\infty);L^{\frac43}(\Omega))\label{conv:nau}\\
u_\varepsilon^2&\to u^2&&\text{ in } L^1_{loc}(\overline{\Omega}\times[0,\infty))\label{conv:u2}\\
\varepsilon u_\varepsilon^2\ln (au_\varepsilon)&\to 0&&\text{ in } L^1_{loc}(\overline{\Omega}\times[0,\infty))\label{conv:epsu2logu}\\
v_\varepsilon &\to v & & \text{a.e. in } \Omega\times (0,\infty)\label{conv:v}\\
v_\varepsilon &\weakstarto v && \text{in } L^\infty((0,\infty);\Lom p)\quad \text{for any } p\in[1,\infty]\label{conv:vweakstar}\\
\nabla v_\varepsilon&\rightharpoonup \nabla v && \text{in } L^4_{loc}([0,\infty);\Lom4)\label{conv:nav4}\\
\nabla v_\varepsilon&\rightharpoonup \nabla v && \text{in } L^2((0,\infty);\Lom2)\label{conv:nav}
\end{align}
as $\varepsilon =\varepsilon _j\searrow 0$ and such that $(u,v)$ is a weak solution to \eqref{a}.\\
If additionally $\kappa>0$ and $a=\frac{\mu }{\kappa}$ as in \eqref{defa}, then $\varepsilon _j$ can be chosen such that moreover
\begin{equation}
u_\varepsilon-\frac{\kappa}{\mu }\rightharpoonup u-\frac{\kappa}{\mu }\qquad\text{ in } L^2(\Omega\times(0,\infty))\label{conv:uminlimit}
\end{equation}
as $\varepsilon =\varepsilon _j\searrow 0$, and
\begin{equation}\label{uminlimitinL2}
\kl{u-\frac{\kappa}{\mu }}\in L^2(\Omega\times(0,\infty)).
\end{equation}
\end{lem}
\begin{proof}
\cite[Cor. 8.4]{simon} transforms \eqref{bd:ul2}, \eqref{bd:nau} and \eqref{bd:ut} into \eqref{conv:u} along a suitable sequence $(\varepsilon_j)_j\searrow 0$; the bound in \eqref{bd:nau} enables us to find a further subsequence such that \eqref{conv:nau} holds. Similarly, \eqref{bd:nav} facilitates the extraction of a subsequence satisfying \eqref{conv:nav}, and an analogous application of \cite[Cor. 8.4]{simon}, now based on \eqref{bd:nav}, \eqref{bd:v} and \eqref{bd:vt}, provides a (non-relabeled) subsequence such that $v_{\varepsilon _j}\to v$ in $L^2(\Omega\times(0,\infty))$ and, along another subsequence thereof, establishes \eqref{conv:v}. Also \eqref{conv:vweakstar} is immediately obtained from \eqref{bd:v}, as is \eqref{conv:nav4} from \eqref{bd:nav4}; \eqref{conv:uminlimit} results from \eqref{bd:uminlimit2}. For the $L^1$-convergence statements in \eqref{conv:u2} and \eqref{conv:epsu2logu}, mere boundedness, as obtainable from \eqref{bd:nau2u} and \eqref{bd:u2logu}, even if combined with the a.e. convergence provided by \eqref{conv:u}, is insufficient for the existence of a convergent subsequence; we must, in addition,
check for equi-integrability on $\Omega\times(0,T)$ for any
finite $T>0$. To this end we note that with $C(T)$ from \eqref{bd:epsu2logu2}
\begin{align*}
\inf_{b\geq0} \sup_{\varepsilon \in(0,1)} \int_0^T\int_{\set{\varepsilon u_\varepsilon^2\ln a u_\varepsilon>b}} |\varepsilon u_\varepsilon^2\ln a u_\varepsilon| &\leq \inf_{b>a }\sup_{\varepsilon \in(0,1)} \int_0^T\int_{\set{\varepsilon u_\varepsilon^2\ln a u_\varepsilon>b}} \varepsilon u_\varepsilon^2\ln a u_\varepsilon\\
&\leq \inf_{b>a }\sup_{\varepsilon \in(0,1)} \int_0^T\int_{\set{a u_\varepsilon^3>b}} \varepsilon u_\varepsilon^2\ln a u_\varepsilon\\
&\leq \inf_{b>a }\sup_{\varepsilon \in(0,1)} \int_0^T\int_{\set{\ln u_\varepsilon>\frac13 \ln \frac ba}} \varepsilon u_\varepsilon^2(\ln au_\varepsilon)\ln u_\varepsilon\cdot \frac 3{\ln \frac{b}{a}}\\
&\leq \inf_{b>a } \frac{3C(T)}{\ln \frac{b}{a}} =0
\end{align*}
and, due to \eqref{bd:u2logu},
\begin{align*}
\inf_{b\geq 0} \sup_{\varepsilon \in(0,1)} \int_0^T\int_{\set{u_\varepsilon^2>b}}u_\varepsilon^2 \leq \inf_{b>1}\sup_{\varepsilon \in(0,1)}\int_0^T\int_{\set{u_\varepsilon^2>b}} u_\varepsilon^2\ln u_\varepsilon \cdot\frac{1}{\ln b} \leq \inf_{b>1} \frac{C(T)}{\ln b} = 0.
\end{align*}
Accordingly, $\set{ \varepsilon u_\varepsilon^2\ln au_\varepsilon; \varepsilon \in(0,1)}$ and $\set{u_\varepsilon^2; \varepsilon \in(0,1)}$ are uniformly integrable, hence
by \eqref{conv:u} and the Vitali convergence theorem we can extract subsequences such that \eqref{conv:epsu2logu} and \eqref{conv:u2} hold; \eqref{conv:u2} also proves that $u\in L^2_{loc}(\overline{\Omega}\times[0,\infty ))$. Passing to the limit in each of the integrals making up a weak formulation of \eqref{epssys} with $\varepsilon >0$, which is possible due to \eqref{conv:u}, \eqref{conv:nau}, \eqref{conv:nav4}, \eqref{conv:u2}, \eqref{conv:epsu2logu} and \eqref{conv:vweakstar}, shows that $(u,v)$ is a weak solution to \eqref{epssys} with $\varepsilon =0$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:weaksol}]
The assertion of Theorem \ref{thm:weaksol} is part of Lemma \ref{lem:weaksol}.
\end{proof}
We will finally prove that one can expect at least some stabilization of weak solutions as well. Here, the preparation in Lemma \ref{lem:vtozero} obtained from the energy inequality for $\mathcal{F}$ will be crucial.
\begin{lem}\label{lem:weaklimit} Let $\mu >0$, $\chi >0$, $\kappa>0$ and assume that $u_0$, $v_0$ satisfy \eqref{id}.
The weak solution $(u,v)$ to \eqref{a} obtained in Lemma \ref{lem:weaksol} satisfies
\begin{equation}\label{eq:weaksolvtozero}
\norm[\Lom p]{v(\cdotot,t)}\to 0
\end{equation}
for any $p\in[1,\infty)$ and
\begin{equation}\label{eq:weaksoluconv}
\int_t^{t+1} \norm[\Lom 2]{u-\frac{\kappa}{\mu }}\to0
\end{equation}
as $t\to \infty$.
\end{lem}
\begin{proof}
Using characteristic functions of sets $\Omega\times(t,t+1)$ for sufficiently large $t$ as test functions in the weak-$*$-convergence statement \eqref{conv:vweakstar}, from Lemma \ref{lem:vtozero} we obtain that for every $\eta>0$ there is $T>0$ such that $\norm[L^\infty((T,\infty);\Lom p)]{v}<\eta$,
whereas \eqref{eq:weaksoluconv} is implied by \eqref{uminlimitinL2}.
\end{proof}
\begin{remark}
If $N\leq 3$, the uniform bound on $\int_t^{t+1}\int_\Omega |\nabla v|^4$ contained in Lemma \ref{lem:energyfunctional} proves to be sufficient for \eqref{eq:weaksolvtozero} even to hold for $p=\infty$, which can be used as a starting point for the derivation of eventual smoothness of solutions via a quasi-energy-inequality for $\int_\Omega \frac{u^p}{(\eta -v)^\theta}$ with suitable numbers $\theta$ and $\eta $. This result is already contained in \cite{lankeit_fluid}.
\end{remark}
\begin{proof}[Proof of Theorem \ref{thm:weaksol-limit}]
Lemma \ref{lem:weaklimit} is identical with Theorem \ref{thm:weaksol-limit}.
\end{proof}
\section*{Acknowledgment}
J.~Lankeit acknowledges support of the {\em Deutsche Forschungsgemeinschaft} within the project {\em Analysis of chemotactic cross-diffusion in complex
frameworks}. Y.~Wang was supported by the NNSF of China (no. 11501457).
\end{document} |
\begin{document}
\begin{abstract}
We consider the moduli space $\mathfrak{M}_{g,n}$ of Riemann surfaces of genus $g\ge0$ with $n\ge1$
ordered and directed marked points. For $d\ge 2g+n-1$, we show that $\mathfrak{M}_{g,n}$ is homotopy equivalent to a component
of the generalised Hurwitz space $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$,
associated with the partially multiplicative quandle $\mathfrak{S}_d^{\mathrm{geo}}$.
As an application, we give a new proof of the Mumford conjecture on the stable rational cohomology of moduli spaces of Riemann surfaces. We also provide a combinatorial model for the infinite loop space $\Omega^{\infty-2}\mathrm{MTSO}(2)$ of Hurwitz flavour.
\end{abstract}
\maketitle
\section{Introduction}
The aim of this article is to use the theory of generalised Hurwitz spaces from \cite{Bianchi:Hur1,Bianchi:Hur2,Bianchi:Hur3} to give
a combinatorial model for the moduli space $\mathfrak{M}_{g,n}$ of closed, connected Riemann surfaces of genus $g\ge0$ with $n\ge1$ ordered and directed
marked points; a \emph{directed} marked point is endowed with a non-zero tangent vector.
As an application, we reprove
the Mumford conjecture on the stable rational cohomology ring of moduli spaces \cite{Mumford}, originally proved by Madsen and Weiss \cite{MadsenWeiss}, which can be formulated as follows: there is an isomorphism of graded commutative $\mathbb{Q}$-algebras
\[
\lim_{g\to\infty}H^*(\mathfrak{M}_{g,1};\mathbb{Q})\cong\mathbb{Q}[x_1,x_2,\dots]
\]
between the ring of stable rational cohomology classes of moduli spaces $\mathfrak{M}_{g,1}$, and a polynomial ring in infinitely many variables $x_i$, one in each even degree $2i$.
Our argument provides also a ``Hurwitz model'' for the space $\Omega^{\infty-2}\mathrm{MTSO}(2)$, where $\mathrm{MTSO}(2)$ is the spectrum used by Madsen and Weiss to prove the Mumford conjecture.
\subsection{Classical Hurwitz spaces and moduli spaces}
The connection between Hurwitz spaces and moduli spaces goes back to Hurwitz himself \cite{Hurwitz}: for $d\ge2$ and $k\ge d-1$, the classical Hurwitz space $\mathbb{C}Hur_{\mathfrak{S}_d,k}^c$ parametrises equivalence classes $[\mathcal{S},f,\mathfrak{u}]$, where
\begin{itemize}
\item $\mathcal{S}$ is a connected, punctured Riemann surface;
\item $f\colon\mathcal{S}\to\mathbb{C}$ is a proper map and is a branched cover of degree $d$, admitting exactly $k$ distinct branch values in $\mathbb{C}$, all of which are \emph{simple};
\item $\mathfrak{u}\colon f^{-1}(\mathbb{H}_{\le\epsilon})\cong\set{1,\dots,d}\times\mathbb{H}_{\le\epsilon}$ is a trivialisation of $f$ over some lower
half-plane $\mathbb{H}_{\le\epsilon}:=\set{z\in\mathbb{C}\,|\,\Im(z)\le\epsilon}$ that is disjoint from all branch values, for a suitable $\epsilon\in\mathbb{R}$.
\end{itemize}
The equivalence relation is given by declaring $(\mathcal{S},f,\mathfrak{u})$ and $(\mathcal{S}',f',\mathfrak{u}')$ equivalent if there is a biholomorphism $\chi\colon\mathcal{S}\cong\mathcal{S}'$ such that $f=f'\circ\chi$ and $\mathfrak{u}\equiv\mathfrak{u}'\circ\chi$ on $f^{-1}(\mathbb{H}_{\le\min(\epsilon,\epsilon')})$.
Our notation here is the same used in \cite{EVW:homstabhur}: the ``$\mathrm{C}$'' indicates that we restrict to \emph{connected} branched covers, i.e. we require $\mathcal{S}$ to be connected; the ``$c$'' denotes the conjugacy class of transpositions in the symmetric group $\mathfrak{S}_d$, where local monodromies take value.
Given $[\mathcal{S},f,\mathfrak{u}]\in \mathbb{C}Hur_{\mathfrak{S}_d,k}^c$, represented by $(\mathcal{S},f,\mathfrak{u})$, we let $P=\set{z_1,\dots,z_k}\subset\mathbb{C}$ be the set of simple branch values of $f$, and restrict $f$ to a genuine $d$-fold cover
\[
f\colon f^{-1}(\mathbb{C}\smallsetminus P)\to \mathbb{C}\smallsetminus P.
\]
Choosing a basepoint $*_P\in\mathbb{C}\smallsetminus P$ ``below $P$'', i.e. inside the lower half-plane $\mathbb{H}_{\le\epsilon}$ over which the covering is trivialised, we can consider the action of $\pi_1(\mathbb{C}\smallsetminus P,*_P)$ on the fibre $f^{-1}(*_P)\cong\set{1,\dots,d}$, and thus obtain a homomorphism of groups $\varphi\colon\pi_1(\mathbb{C}\smallsetminus P,*_P)\to\mathfrak{S}_d$. In particular we can evaluate $\varphi$ at the class of a simple loop $\gamma$ in $\mathbb{C}\smallsetminus P$ spinning clockwise around all points of $P$, obtaining the \emph{total monodromy of $[\mathcal{S},f,\mathfrak{u}]$}, which is a permutation in $\mathfrak{S}_d$: by \emph{loop} we mean a continuous map $\gamma\colon[0,1]\to\mathbb{C}\smallsetminus P$ sending both $0$ and $1$ to $*_P$, and by \emph{simple} we mean that $\gamma$ induces an injective map $[0,1]/\set{0,1}\to\mathbb{C}\smallsetminus P$.
Hurwitz proved that a complete invariant for connected components of $\mathbb{C}Hur_{\mathfrak{S}_d,k}^c$ is given by the total monodromy. More precisely, for $\sigma\in\mathfrak{S}_d$
denote by $\mathbb{C}Hur_{\mathfrak{S}_d,k,\sigma}^c$ the subspace of $\mathbb{C}Hur_{\mathfrak{S}_d,k}^c$ of configurations with total monodromy equal to $\sigma$.
Moreover, let $N(\sigma)$ be the word length norm of $\sigma$ with respect to all transpositions in $\mathfrak{S}_d$, i.e. the minimal $r\ge0$ such that $\sigma$ can be written as a product of $r$ transpositions. Then $\mathbb{C}Hur_{\mathfrak{S}_d,k,\sigma}^c$ is connected and non-empty if $k-(2d-2-N(\sigma))$ is even and non-negative, and $\mathbb{C}Hur_{\mathfrak{S}_d,k,\sigma}^c$ is empty otherwise.
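We note that the parity constraint simply reflects the sign homomorphism: the total monodromy $\sigma$ is the product of the $k$ local monodromies, each of which is a transposition, so that
\[
 \mathrm{sgn}(\sigma)=(-1)^k=(-1)^{N(\sigma)},
\]
whence $k\equiv N(\sigma)\pmod 2$ whenever $\mathbb{C}Hur_{\mathfrak{S}_d,k,\sigma}^c$ is non-empty.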
In the following discussion we restrict for simplicity to the case $n=1$, i.e. we focus on the connection between Hurwitz spaces and moduli spaces of Riemann surfaces with a single marked point. Let $\mathrm{lc}_d$ denote the permutation $(1,2,\dots,d)\in\mathfrak{S}_d$ consisting of a unique \emph{long cycle}, and note that $N(\mathrm{lc}_d)=d-1$. There is a continuous, forgetful map
\[
\theta_{\mathbb{C}Hur}\colon \mathbb{C}Hur_{\mathfrak{S}_d,d-1+2g,\mathrm{lc}_d}^c\to\mathfrak{M}_{g,1},
\]
sending the class $[\mathcal{S},f,\mathfrak{u}]$ of a branched cover $f\colon\mathcal{S}\to\mathbb{C}$ to the total space $\bar\mathcal{S}$ of its \emph{completion} $f\colon \bar\mathcal{S}\to\mathbb{C} P^1$; the smooth, closed Riemann surface $\bar\mathcal{S}$ can be naturally equipped with a directed marked point $Q$, namely the (unique) preimage along $f$ of $\infty\in\mathbb{C} P^1$, or in other words, the unique point in $\bar\mathcal{S}\setminus\mathcal{S}$; a tangent vector $X$ at $Q\in\bar\mathcal{S}$ can also be chosen in a canonical way, pointing in the direction of $\set{1}\times\mathbb{H}_{\le\epsilon}\subset\mathcal{S}\subset\bar\mathcal{S}$, where we use $\mathfrak{u}$, the trivialisation of $f$ over some lower half-plane in $\mathbb{C}$, to embed $\set{1,\dots,d}\times\mathbb{H}_{\le\epsilon}$ into $\mathcal{S}$. The fact that both the source and the target of the map $\theta$ are connected spaces implies that $\theta$ is a $0$-connected map.\footnote{In fact,
as mentioned in \cite{EVW:homstabhur}, Severi first proved that $\mathfrak{M}_g$ is connected by constructing a surjective map similar to $\theta$ with source a connected component of a Hurwitz space. At the time of Severi, Teichm\"uller theory, and in particular the contractibility of $\mathfrak{T}_g$, was not available yet!} A natural question arises: \emph{how highly connected is the map $\theta$?}
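Before addressing this question, we observe that the number $d-1+2g$ of simple branch values is dictated by the Riemann--Hurwitz formula: for the completed cover $f\colon\bar{\mathcal{S}}\to\mathbb{C} P^1$, each of the $k$ simple branch values contributes $1$ to the total ramification, while the point $\infty$, with local monodromy $\mathrm{lc}_d$, contributes $d-1$, so that
\[
 2-2g=\chi(\bar{\mathcal{S}})=d\cdot\chi(\mathbb{C} P^1)-k-(d-1)=d+1-k,
\]
i.e. $k=d-1+2g$.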
For $d=2$ and $g=1$, the map $\theta_{\mathbb{C}Hur}\colon \mathbb{C}Hur_{\mathfrak{S}_2,3,\mathrm{lc}_2}^c\to\mathfrak{M}_{1,1}$ happens to be a homotopy equivalence: this follows from the well-known fact that every closed Riemann surface $\bar\mathcal{S}$ of genus $1$ with a marked point $Q$ admits a unique hyperelliptic involution fixing the point $Q$. However, it turns out that the previous one is a very special case, and in fact $\theta_{\mathbb{C}Hur}$ is not even a $1$-connected map for $g\ge2$:
note for instance that $H_1(\mathbb{C}Hur_{\mathfrak{S}_d,d-1+2g,\mathrm{lc}_d}^c)$ is infinite, as it admits a surjection onto $\mathbb{Z}$, regarded as $H_1$ of the unordered configuration space of $d-1+2g$ points in $\mathbb{C}$; on the other hand
$H_1(\mathfrak{M}_{g,1})$ is finite for $g\ge2$.
In fact $\theta_{\mathbb{C}Hur}\colon \mathbb{C}Hur_{\mathfrak{S}_d,d-1+2g,\mathrm{lc}_d}^c\to\mathfrak{M}_{g,1}$ is the restriction of another map, whose target is still the moduli space $\mathfrak{M}_{g,1}$, but whose domain of definition is a larger space, which we can think of as a \emph{completion} of $\mathbb{C}Hur_{\mathfrak{S}_d,d-1+2g,\mathrm{lc}_d}^c$. In this article, for $g\ge0$ and $d\ge1$, we introduce the space $\bar{\cO}_{g,1}[d]$: it contains
equivalence classes $[\mathcal{S},f,\mathfrak{u}]$ of $d$-fold branched covers $f\colon\mathcal{S}\to\mathbb{C}$ equipped with a trivialisation $\mathfrak{u}$ over some lower half-plane $\mathbb{H}_{\le\epsilon}$,
such that the total monodromy is equal to $\mathrm{lc}_d\in\mathfrak{S}_d$, and such that the total space $\bar\mathcal{S}$ of the compactified covering $f\colon\bar\mathcal{S}\to\mathbb{C} P^1$ is a smooth Riemann surface of genus $g$. Comparing with the definition of $\mathbb{C}Hur_{\mathfrak{S}_d,d-1+2g,\mathrm{lc}_d}^c$, there is only one requirement which we drop: we no longer require that all branch values of $f$ in $\mathbb{C}$ be \emph{simple}.
As we will see, it is possible to identify $\mathbb{C}Hur_{\mathfrak{S}_d,d-1+2g,\mathrm{lc}_d}^c$ with an open dense subspace of $\bar{\cO}_{g,1}[d]$, and there is a forgetful
map $\theta_{\bar{\cO}}\colon \bar{\cO}_{g,1}[d]\to\mathfrak{M}_{g,1}$ extending $\theta_{\mathbb{C}Hur}$.
The space $\bar{\cO}_{g,1}[d]$ is introduced, in a similar way as in \cite{Bianchi:PhD},
as a closed subspace of a space $\mathcal{H}arm_{g,1}[d]$, which is a mild variation of the \emph{slit configuration space}
$\mathfrak{H}_{g,1}[d]$. The latter space $\mathfrak{H}_{g,1}[d]$ was introduced by B\"odigheimer in \cite{Boedigheimer90}, in the case $d=1$, to give a combinatorial model of $\mathfrak{M}_{g,1}$; the construction was generalised in \cite{BH} to higher values of $d$
(see also \cite[Chapter 6]{Bianchi:PhD} for a short discussion).
The space $\mathcal{H}arm_{g,1}[d]$ is also endowed with a natural forgetful map $\check\theta_{\mathfrak{H}}$ towards $\mathfrak{M}_{g,1}$, which happens to be a homotopy equivalence for all $d\ge1$; the restriction of this map to $\bar{\cO}_{g,1}[d]$ is precisely the map $\theta_{\bar{\cO}}$ considered above, which further restricts to $\theta_{\mathbb{C}Hur}$ on $\mathbb{C}Hur_{\mathfrak{S}_d,d-1+2g,\mathrm{lc}_d}^c$.
\subsection{Statement of results}
We highlight five results from this article, focusing for simplicity on the case $n=1$.
\begin{thm}[Theorem \ref{thm:main1} for $n=1$]
\label{thm:main1intro}
Let $d\ge 2g\ge0$; then the canonical map
\[
\theta_{\bar{\cO}}\colon \bar{\cO}_{g,1}[d]\to \mathfrak{M}_{g,1}
\]
is a homotopy equivalence: it starts in the completion $\bar{\cO}_{g,1}[d]$ of the classical Hurwitz space $\mathbb{C}Hur_{\mathfrak{S}_d,d-1+2g,\mathrm{lc}_d}^c$ considered above; and it ends in the moduli space $\mathfrak{M}_{g,1}$.
\end{thm}
Before stating the second result, we recall the notion of partially multiplicative quandle (PMQ), introduced in \cite[Section 2]{Bianchi:Hur1}: roughly speaking, a PMQ is a set with well-behaved binary operations of conjugation and of partial product, satisfying conditions similar to those that the usual conjugation and product operations in a group satisfy.
For an \emph{augmented} PMQ $\mathcal{Q}$ (see \cite[Definition 4.9]{Bianchi:Hur1}), a generalised Hurwitz space $\mathrm{Hur}^{\Delta}(\mathcal{Q})$ was constructed
in \cite[Definition 6.13]{Bianchi:Hur1}. The augmented PMQ $\mathfrak{S}_d^{\mathrm{geo}}$ was introduced in \cite[Definition 7.1]{Bianchi:Hur1}, and plays a central role in this article: its underlying set is the symmetric group $\mathfrak{S}_d$, the operation of conjugation coincides with group conjugation, the partial product is only defined for certain couples of permutations and coincides with the usual product of permutations.
\begin{thm}[Theorem \ref{thm:main2}]
\label{thm:main2intro}
Let $d\ge1$; then the PMQ $\mathfrak{S}_d^{\mathrm{geo}}$ is a \emph{Poincar\'e} PMQ. In other words, all components of the generalised Hurwitz space $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$ are topological manifolds.
\end{thm}
The connection between generalised Hurwitz spaces and moduli spaces is established combining Theorem \ref{thm:main1intro} and the following theorem.
\begin{thm}[Theorem \ref{thm:main3} for $n=1$]
\label{thm:main3intro}
The space $\bar{\cO}_{g,1}[d]$ is homeomorphic to the
connected component $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})(\kld{d}_g)$
of the space $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$.
\end{thm}
Here $\kld{d}_g$ is a certain element of the completion $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ of the PMQ $\mathfrak{S}_d^{\mathrm{geo}}$. Recall that for a generic augmented PMQ $\mathcal{Q}$ we have a completion $\hat{\mathcal{Q}}$, and the connected components of $\mathrm{Hur}^{\Delta}(\mathcal{Q})$ are classified by the set $\hat{\mathcal{Q}}$, via the $\hat{\mathcal{Q}}$-valued total monodromy. In our case, we are considering the connected component of $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$
of configurations whose $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$-valued total monodromy is equal to $\kld{d}_g$.
The element $\kld{d}_g$ can for instance be expressed as the product
\[
\kld{d}_g=\widehat{(1,2)}\widehat{(1,2)}\dots\widehat{(1,2)}\widehat{(2,3)}\widehat{(3,4)}\dots\cdot\widehat{(d-1,d)}\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}},
\]
where at the beginning the factor $\widehat{(1,2)}$ is repeated in total $2g+1$ times. The element
$\kld{d}_g$ can also be expressed in the notation of \cite[Proposition 7.13]{Bianchi:Hur1} as
\[
\kld{d}_g=\pa{\mathrm{lc}_d\,;\,\set{1,\dots,d}\,;\,d-1+2g}.
\]
We remark that the single element $\kld{d}_g\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ contains three pieces of information:
\begin{itemize}
\item the classical $\mathfrak{S}_d$-valued total monodromy: in our case, the permutation $\mathrm{lc}_d$;
\item a partition of the set $\set{1,\dots,d}$ into subsets: in our case, the trivial partition with a single subset,
corresponding to the requirement that total spaces of branched covers be connected surfaces;
\item the number of simple branch values seen in the generic case, split as a sum of numbers $\ge0$ corresponding to the pieces of the partition: in our case, the single number $d-1+2g$.
\end{itemize}
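As a consistency check, note that in the first displayed expression for $\kld{d}_g$ the underlying permutation in $\mathfrak{S}_d$ is $(1,2)(2,3)\cdots(d-1,d)$, a long cycle, since the first $2g$ factors $\widehat{(1,2)}$ multiply to the identity of $\mathfrak{S}_d$; moreover the total number of transposition factors is
\[
 (2g+1)+(d-2)=d-1+2g,
\]
in accordance with the last of the three data listed above.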
The main result of the article is the following.
\begin{thm}[Theorem \ref{thm:main4}]
\label{thm:main4intro}
The stable homology groups $\mathrm{colim}_{g\to\infty}H_*(\mathfrak{M}_{g,1})$ are isomorphic to $H_*(\Omega_0^2\mathbb{B}_\infty)$,
where $\mathbb{B}_\infty$ is a certain space obtained as a homotopy colimit of a sequence of \emph{relative} Hurwitz spaces, and $\Omega^2_0$ is a component of the double loop space of $\mathbb{B}_\infty$.
\end{thm}
We summarise here the argument, providing also the definition of $\mathbb{B}_\infty$. We replace each of the spaces $\bar{\cO}_{g,1}[d]\cong\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})_{\kld{d}_g}$ with the homotopy equivalent space
$\mathring{\mathrm{HM}}(\mathfrak{S}_d^{\mathrm{geo}})_{\kld{d}_g}$; here, following \cite[Definition 2.4]{Bianchi:Hur2}, we consider the space $\mathring{\mathrm{HM}}(\mathfrak{S}_d^{\mathrm{geo}})$, which is a strict topological monoid version of the generalised Hurwitz space $\mathrm{Hur}^\Delta(\mathfrak{S}_d^{\mathrm{geo}})$.
We consider then a double-indexed diagram of spaces $\mathring{\mathrm{HM}}(\mathfrak{S}_d^{\mathrm{geo}})_{\kld{d}_g}$, for $d\ge2 $ and $g\ge0$, with stabilising maps increasing either $g$ or $d$ by 1. We identify the following colimits of homology groups
\[
\mathrm{colim}_{g\to\infty} H_*(\mathfrak{M}_{g,1})\cong\mathrm{colim}_{g,d\to\infty}H_*\pa{\mathring{\mathrm{HM}}(\mathfrak{S}_d^{\mathrm{geo}})_{\kld{d}_g}}
\]
using Theorem \ref{thm:main1intro} and a diagonal argument. We then use the group-completion theorem \cite{SegalMcDuff, FM94}
together with \cite[Theorem 4.19]{Bianchi:Hur3}, and we rewrite the second colimit as
\[
\begin{split}
\mathrm{colim}_{g,d\to\infty}H_*\pa{\mathring{\mathrm{HM}}(\mathfrak{S}_d^{\mathrm{geo}})_{\kld{d}_g}}&=\mathrm{colim}_{d\to\infty}
\pa{\mathrm{colim}_{g\to\infty}H_*\pa{\mathring{\mathrm{HM}}(\mathfrak{S}_d^{\mathrm{geo}})_{\kld{d}_g}}}\\
&\cong\mathrm{colim}_{d\to\infty} H_*\pa{\Omega_0^2\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}}\\
&\cong H_*(\Omega^2_0\mathbb{B}_\infty).
\end{split}
\]
Here $\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$ is a certain \emph{relative} Hurwitz space, see Section \ref{sec:mumford}
for a brief explanation, and \cite{Bianchi:Hur2} for the precise construction of relative Hurwitz spaces.
The space $\mathbb{B}_\infty$ is defined as the homotopy colimit
\[
\mathbb{B}_\infty:=\mathrm{hocolim}_{d\to\infty}\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}},
\]
and the last isomorphism follows immediately.
We exploit Theorem \ref{thm:main4intro} to make a further computation, passing to rational coefficients and to cohomology. We use \cite[Theorem 6.1]{Bianchi:Hur3} (which is applicable in our situation thanks to Theorem \ref{thm:main3intro}) and a standard argument in rational homotopy theory to
compute the cohomology ring
\[
H^*(\Omega^2_0\mathbb{B}_\infty;\mathbb{Q})\cong\lim_{d\to\infty} H^*(\Omega_0^2\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}} ;\mathbb{Q})\cong\mathbb{Q}[x_1,x_2,\dots],
\]
where $x_i$ has degree $2i$.
For the first isomorphism we check that the Mittag-Leffler condition is satisfied, so that no $\lim^1$ terms occur.
Thus $\lim_{g\to\infty} H^*(\mathfrak{M}_{g,1};\mathbb{Q})$ is isomorphic
to the polynomial ring $\mathbb{Q}[x_1,x_2,\dots]$, which is a formulation of the Mumford conjecture;
see Corollary \ref{cor:mumford}.
We conclude the article with the following theorem, which compares our proof of the Mumford conjecture with the original one, due to Madsen and Weiss \cite{MadsenWeiss}, which relies on the spectrum $\mathrm{MTSO}(2)$.
\begin{thm}[Theorem \ref{thm:main5}]
\label{thm:main5intro}
The space $\mathbb{B}_\infty$ is homotopy equivalent to the infinite loop space $\Omega^{\infty-2}\mathrm{MTSO}(2)$,
where $\mathrm{MTSO}(2)$ is the spectrum used by Madsen and Weiss in \cite{MadsenWeiss}.
\end{thm}
Thus $\mathbb{B}_\infty$ provides a ``Hurwitz model'' for the infinite loop space of the double suspension of $\mathrm{MTSO}(2)$.
\subsection{Motivation}
This is the fourth and final article in a series about Hurwitz spaces. We apply the machinery of generalised Hurwitz spaces $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$ with monodromies in the PMQ $\mathfrak{S}_d^{\mathrm{geo}}$ to give a new combinatorial model of moduli spaces $\mathfrak{M}_{g,n}$ of Riemann surfaces. The model is handy enough to allow a new proof of the Mumford conjecture.
Our hope is to exploit further the generalised Hurwitz spaces $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$ to obtain information about the \emph{unstable} homology of moduli spaces. As discussed in \cite[Section 6]{Bianchi:Hur1}, leveraging the fact that $\mathfrak{S}_d^{\mathrm{geo}}$ is a Poincare PMQ, one can in principle compute the homology of any component of $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$ by using an explicit, finite chain complex of \emph{arrays}, which is similar in spirit to the Fadell-Neuwirth-Fuchs chain complex computing the homology of classical configuration spaces. We hope that some new, interesting homological information about moduli spaces of Riemann surfaces can be extracted in the future from this chain model.
\subsection{Acknowledgments}
This series of articles is a generalisation and a further development of my PhD thesis \cite{Bianchi:PhD}. I would like to dedicate this article to my PhD supervisor Carl-Friedrich B\"odigheimer for his advice and encouragement during and also after my PhD, and for introducing me to the world of slit configurations.
I am also grateful to Felix Boes for several explanations about slit configurations, and to Jesper Grodal, Ib Madsen and Nathalie Wahl for the suggestion of deriving Theorem \ref{thm:main5intro} from the argument of Theorem \ref{thm:main4intro}. Finally, I would like to thank Nathalie Wahl also for some helpful comments on a first draft of the article.
\tableofcontents
\section{Preliminaries on moduli spaces}
\label{sec:slit}
\subsection{Moduli spaces of surfaces with directed marked points}
\label{sec:intro}
We fix $g\geq 0$ and $n\geq 1$ throughout the section;
the discussion in the case $g=0$ and $n=1$ is slightly different and we will add some remarks when needed.
We fix a closed, smooth, connected, orientable surface $\Sigma_g$ of genus $g$.
We fix $n$ distinct points $Q_1,\dots, Q_n\in\Sigma_g$, called \emph{marked points},
and for each $Q_i$ we fix a non-zero tangent vector $X_i\in T_{Q_i}\Sigma_g$ (see Figure \ref{fig:moduli_1}). We denote by $\Sigma_{g,n}$ the surface $\Sigma_g$ equipped with these $n$ directed marked points.
We abbreviate $\underline{Q}=(Q_1,\dots,Q_n)$ and $\underline{X}=(X_1,\dots,X_n)$. The open surface
$\Sigma_{g,n}\setminus\set{Q_1,\dots,Q_n}$ is usually denoted by $\Sigma_{g,n}\setminus\underline{Q}$.
A sequence $\underline{d}=(d_1,\dots,d_n)$ of numbers $d_i\geq 1$ is fixed throughout the section, and we denote by $d$ the sum $\sum_{i=1}^nd_i$. Finally, we set $h:=2g+n+d-2$ throughout the section.
\begin{figure}
\caption{A surface of genus 3 with 4 marked points.}
\label{fig:moduli_1}
\end{figure}
\begin{defn}
\label{defn:mcg}
We denote by $\Diff_{g,n}$ the topological group of diffeomorphisms of $\Sigma_g$ that preserve the orientation and fix all points $Q_i$ and all vectors $X_i$.
The mapping class group $\Gamma_{g,n}$ is the group $\pi_0(\Diff_{g,n})$ of isotopy classes of such diffeomorphisms.
\end{defn}
\begin{defn}
\label{defn:fM}
The moduli space $\mathfrak{M}_{g,n}$ contains equivalence classes of Riemann structures $\mathfrak{r}$ on $\Sigma_{g,n}$: two Riemann structures $\mathfrak{r}$ and $\mathfrak{r}'$
on $\Sigma_{g,n}$ are \emph{equivalent} if there exists a diffeomorphism $\psi\in\Diff_{g,n}$ which is a biholomorphism from $(\Sigma_{g,n},\mathfrak{r})$ to $(\Sigma_{g,n},\mathfrak{r}')$.
More formally, we consider the action of the topological group $\Diff_{g,n}$ on the space
$\mathfrak{Riem}(\Sigma_{g,n})$ of Riemann structures on $\Sigma_{g,n}$;
the moduli space $\mathfrak{M}_{g,n}$ is the orbit space $\mathfrak{Riem}(\Sigma_{g,n})/\Diff_{g,n}$.
\end{defn}
In a two-step process, one usually first defines the Teichm\"uller space $\mathfrak{T}_{g,n}=\mathfrak{Riem}(\Sigma_{g,n})/\Diff^{\sim\Id}_{g,n}$ as the quotient by the subgroup of $\Diff_{g,n}$ of diffeomorphisms isotopic to the identity.
For $g>0$ or $n>1$, the space $\mathfrak{T}_{g,n}$ is homeomorphic to $\mathbb{R}^{6g-6+4n}$, and one then defines $\mathfrak{M}_{g,n}$ as the quotient of $\mathfrak{T}_{g,n}$ by the free, properly discontinuous and orientation-preserving action of $\Gamma_{g,n}$; thus $\mathfrak{M}_{g,n}$ is a classifying space for the group $\Gamma_{g,n}$ and it is an orientable manifold of real dimension $6g-6+4n$.
The case $g=0$ and $n=1$ is exceptional: $\mathfrak{T}_{0,1}$ is a point, $\Gamma_{0,1}$ is the trivial group, and $\mathfrak{M}_{0,1}=\mathfrak{T}_{0,1}/\Gamma_{0,1}$ is also just a point.
\begin{nota}
\label{nota:modulus}
We usually denote by $\mathfrak{m}=[\mathfrak{r}]\in\mathfrak{M}_{g,n}$ a \emph{modulus} on $\Sigma_{g,n}$, i.e. the equivalence class of a Riemann structure $\mathfrak{r}$ on $\Sigma_{g,n}$.
\end{nota}
\begin{defn}
\label{defn:normalchart}
A \emph{normal chart} at $(Q_i,X_i)$ for a Riemann structure $\mathfrak{r}$ on $\Sigma_{g,n}$
is a holomorphic chart $w_i\colon U_i\to\mathbb{C}$ defined on a neighbourhood $Q_i\in U_i\subset\Sigma_{g,n}$ with the following properties:
\begin{itemize}
\item $w_i\colon Q_i\mapsto 0\in\mathbb{C}$;
\item $Dw_i\colon X_i\mapsto \partial/\partial x$, where $x=\mathrm{Re}(z)$ and $y=\Im(z)$ are the standard real coordinates on $\mathbb{C}\simeq\mathbb{R}^2$, and $\partial/\partial x$ is the dual vector of $dx$ (informally, it is the horizontal unit vector pointing to the right).
\end{itemize}
\end{defn}
Note that the composition of a normal chart with the inverse of another normal chart
is a holomorphic function $\mathfrak{w}=\mathfrak{w}(z)$ defined on some neighbourhood of $0\in\mathbb{C}$;
its Taylor expansion has the form $\mathfrak{w}(z)=z+\mathrm{l.o.t.},$
where the lower order terms correspond to powers of $z$ with exponent higher than $1$. In particular
the differential of a normal chart $Dw_i\colon T_{Q_i}\Sigma_{g,n}\to T_0\mathbb{C}$ does not depend on the normal chart.
Note also that $X_i$ can be recovered from a normal chart $w_i$ as $Dw_i^{-1}(\partial/\partial x)$.
\begin{nota}
\label{nota:CP1}
Our preferred model of the surface $\Sigma_{0,1}$ is $\mathbb{C} P^1$:
the unique marked point $Q$ is the point at infinity $\infty=[1:0]$;
the vector $X\in T_{\infty} \mathbb{C} P^1$ is the unique vector such that the
chart $[a:b]\mapsto b/a\in\mathbb{C}$, defined on a neighbourhood of $\infty$,
is normal. The complement $\mathbb{C} P^1\setminus \set{Q}$ is identified with $\mathbb{C}$ through the usual map $[a:b]\mapsto a/b$. The standard Riemann structure on $\mathbb{C} P^1$ is denoted $\mathfrak{r}_{\mathrm{st}}$.
\end{nota}
The action of $\Diff_{0,1}$ on $\mathfrak{Riem}(\Sigma_{0,1})$ is transitive but not free. The isotropy group of $\mathfrak{r}_{\mathrm{st}}$ is the group of biholomorphisms $f\colon \mathbb{C} P^1\to\mathbb{C} P^1$ that fix $\infty$ and induce the identity on $T_{\infty}\mathbb{C} P^1$: this group is isomorphic to the additive group $\mathbb{C}$, where $\lambda\in\mathbb{C}$ corresponds to the translation $z\mapsto z+\lambda$ on $\mathbb{C}$, extended by $\infty\mapsto\infty$.
In the case $g> 0$ or $n> 1$, instead, the action of $\Diff_{g,n}$ on
$\mathfrak{Riem}(\Sigma_{g,n})$ is free. See \cite{EarleSchatz} and \cite{Gardiner} for these classical results in Teichm\"uller theory.
In order to study the homotopy type of $\mathfrak{M}_{g,n}$
it is convenient to have a combinatorial model for this space;
by \emph{combinatorial model}
we mean in great generality a pair of finite CW-complexes $(\mathcal{X},\mathcal{X}')$ whose definition (cells and
attaching maps) is combinatorial in flavour, and such that $\mathcal{X}\setminus\mathcal{X}'$ is homotopy equivalent to $\mathfrak{M}_{g,n}$.
In Subsection \ref{subsec:harm} we briefly recall the model $\mathfrak{H}ud$ of slit configurations, and in Section \ref{sec:ccO} we will describe the Hurwitz model $\bar{\cO}ud$.
\subsection{The theorem of Riemann-Roch}
\label{subsec:RiemannRoch}
\begin{defn}
\label{defn:stddivisor}
A divisor $\mathcal{D}$ on $\Sigma_{g,n}$ is an element of the free abelian group generated by points of $\Sigma_{g,n}$.
The degree of $\mathcal{D}=\sum_{i=1}^k\lambda_iP_i$ is $\deg(\mathcal{D})=\sum_{i=1}^k\lambda_i$.
We say that $\mathcal{D}$ is $\underline{Q}$-supported if it is of the form $\sum_{i=1}^n\lambda_i Q_i$, with coefficients $\lambda_i\geq 1$.
We denote by $D=D(\underline{d})=\sum_{i=1}^n d_i Q_i$
the $\underline{Q}$-supported divisor of $\Sigma_{g,n}$ associated with the sequence $\underline{d}$; note that
$\deg(D)=d=\sum_{i=1}^nd_i$. Similarly, we denote by $D+\underline{Q}$ the $\underline{Q}$-supported divisor $\sum_{i=1}^n (d_i+1)Q_i$.
\end{defn}
Let $\mathfrak{r}$ be a Riemann structure on $\Sigma_{g,n}$, let $\mathcal{D}=\sum_{i=1}^k\lambda_iP_i$ be a divisor and consider the complex vector space
$\mathcal{O}(\mathfrak{r},\mathcal{D})$ containing all meromorphic functions $f\colon(\Sigma_g,\mathfrak{r})\to\mathbb{C} P^1$ such that
$f$ is holomorphic away from $\set{P_1,\dots,P_k}$ and,
for all $1\leq i\leq k$,
$f$ has a pole of order at most $\lambda_i$ at $P_i$. This means that if $\lambda_i<0$, then $f$ must have a zero of
order at least $-\lambda_i$ at $P_i$.
The complex dimension of $\mathcal{O}(\mathfrak{r},\mathcal{D})$ depends in general on $\mathfrak{r}$ and $\mathcal{D}$.
Recall that we can associate with every Riemann surface $(\Sigma_{g,n},\mathfrak{r})$
a \emph{canonical divisor} $K(\mathfrak{r})$ of degree $2g-2$, whose corresponding holomorphic line bundle is isomorphic to the holomorphic cotangent bundle $T^*\Sigma_{g,n}$. The divisor $K(\mathfrak{r})$ is uniquely determined up to principal divisors.
\begin{thm}[Riemann-Roch]
\label{thm:RiemannRoch}
Let $\mathfrak{r}$ be a Riemann structure on $\Sigma_{g,n}$, let $\mathcal{D}$ be any divisor on $\Sigma_{g,n}$, and let $K(\mathfrak{r})$ be a canonical divisor associated with the Riemann surface $(\Sigma_{g,n},\mathfrak{r})$.
Then
\[
\dim_{\mathbb{C}}\mathcal{O}(\mathfrak{r},\mathcal{D})-\dim_{\mathbb{C}}\mathcal{O}(\mathfrak{r},K(\mathfrak{r})-\mathcal{D})=\deg(\mathcal{D})-g+1.
\]
\end{thm}
Note that the inequality $\dim_{\mathbb{C}}\mathcal{O}(\mathfrak{r},\mathcal{D})\geq \deg(\mathcal{D})-g+1$ always holds;
if moreover $\deg(\mathcal{D})\geq 2g-1$, then
$K(\mathfrak{r})-\mathcal{D}$ has degree $\leq -1$, so that $\mathcal{O}(\mathfrak{r},K(\mathfrak{r})-\mathcal{D})=0$ and we obtain the
equality $\dim_{\mathbb{C}}\mathcal{O}(\mathfrak{r},\mathcal{D})= \deg(\mathcal{D})-g+1$.
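For instance, if $g=1$ and $\mathcal{D}=2Q$ for a single point $Q$, then $\deg(\mathcal{D})=2\geq 2g-1$ and the formula gives $\dim_{\mathbb{C}}\mathcal{O}(\mathfrak{r},2Q)=2-1+1=2$: a basis consists of the constant function $1$ together with any meromorphic function having a pole of order exactly $2$ at $Q$ (for instance the Weierstrass function $\wp$, if the Riemann surface is presented as $\mathbb{C}/\Lambda$ with $Q$ the image of $0$).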
\subsection{The parallel slit complex}
\label{subsec:harm}
The model $\mathfrak{H}ud$ is constructed by considering Riemann surfaces $(\Sigma_{g,n},\mathfrak{r})$ equipped with a harmonic
function $u\colon\Sigma_{g,n}\setminus\underline{Q}\to\mathbb{R}$ having some prescribed behaviour near the marked points.
One can cut the surface along the critical lines of the flow associated
with $-\nabla u$, the negative gradient of $u$; the cut surface consists of $d$ contractible components, each of which can be biholomorphically embedded in $\mathbb{C}$
as the complement of a finite collection of slits. A \emph{slit} is a horizontal halfline in $\mathbb{C}$ running towards the left,
i.e. of the form $\set{z_0-t\,|\, t\geq 0}$ for some $z_0\in\mathbb{C}$ (see Figure \ref{fig:moduli_2}).
\begin{figure}
\caption{On the left, the Riemann surface $(\Sigma_{1,2},\mathfrak{r})$.}
\label{fig:moduli_2}
\end{figure}
The process of dissecting a Riemann surface as described above, in order to represent its parts as tame open subsets of the plane, is called \emph{Hilbert uniformisation} \cite{Hilbert}. B\"{o}digheimer \cite{Boedigheimer90} considers all moduli of Riemann surfaces at the same time and describes the model $\mathfrak{H}_{g,1}[1]$ for the moduli space $\mathfrak{M}_{g,1}$;
the same technique works for surfaces with more than one marked point and with higher values for the numbers $d_i$. The formal definition of $\mathfrak{H}ud$ requires some preparation.
\begin{defn}
\label{defn:udharm}
Let $\mathfrak{r}$ be a Riemann structure on $\Sigma_{g,n}$. A harmonic function $u\colon\Sigma_{g,n}\setminus\underline{Q}\to\mathbb{R}$ is \emph{$\underline{d}$-directed} if for all $1\leq i\leq n$ it has a \emph{directed} pole of order $d_i$ at $Q_i$ in the following sense: in a normal chart $w_i\colon U_i\to\mathbb{C}$ around $Q_i$ we can write $u$ as
\[
u(w_i)=\mathrm{Re}\pa{\frac{1}{w_i^{d_i}}+\mathrm{l.o.t.}}-B_i\log\abs{w_i},
\]
for some real number $B_i$; the lower order terms correspond to powers of $w_i$ with exponent higher than $-d_i$. The word ``directed'' refers to the fact that the coefficient of $w_i^{-d_i}$ is required to be $1$, and not another number in $\mathbb{C}^*$.
\end{defn}
\begin{defn}
\label{defn:critgraph}
The critical graph of a harmonic function $u\colon(\Sigma_{g,n},\mathfrak{r})\setminus\underline{Q}\to\mathbb{R}$, denoted $\mathcal{K}_0(u)\subset\Sigma_{g,n}$, is defined by considering the flow lines of $-\nabla u$, the negative gradient of $u$ on $\Sigma_{g,n}\setminus\underline{Q}$. The vector field $-\nabla u$, and hence the parametrisation of the flow lines, depend on a choice of Riemannian metric in the conformal class $\mathfrak{r}$; but the decomposition of $\Sigma_{g,n}\setminus\underline{Q}$ into maximal flow lines is independent of this choice.
The graph $\mathcal{K}_0$ is the union of all critical points of $u$ (points where $du=0$), all flow lines having a critical point of $u$ as negative limit, and all marked points $Q_i$. See Figure \ref{fig:moduli_5}.
\end{defn}
\begin{figure}
\caption{On the left, the critical graph $\mathcal{K}_0(u)$.}
\label{fig:moduli_5}
\end{figure}
For a $\underline{d}$-directed harmonic function $u\colon(\Sigma_{g,n},\mathfrak{r})\setminus\underline{Q}\to \mathbb{R}$, the complement of the critical graph $\Sigma_{g,n}\setminus\mathcal{K}_0(u)$ consists of $d$ contractible, open regions. Let
$L_{i,j}\colon[0,\varepsilon)\to\Sigma_{g,n}$, for $1\le i\le n$ and $0\le j\le d_i-1$, be a short, smooth path exiting from $Q_i$ with velocity
$e^{2\pi j\sqrt{-1}/d_i}X_i$, where we use the Riemann structure $\mathfrak{r}$ to rotate $X_i$ by an angle $2\pi j/d_i$. Each arc $L_{i,j}$ is contained, for small positive times, in a single region of $\Sigma_{g,n}\setminus\mathcal{K}_0(u)$; vice versa, each of the $d$ regions of $\Sigma_{g,n}\setminus\mathcal{K}_0(u)$
contains the germ of precisely one arc $L_{i,j}$. See Figure \ref{fig:moduli_3}.
\begin{figure}
\caption{An example of choices of the germs $L_{i,j}$.}
\label{fig:moduli_3}
\end{figure}
\begin{defn}
\label{defn:Harmudfr}
Let $\mathfrak{r}$ be a Riemann structure on $\Sigma_{g,n}$. The space $\mathfrak{H}(\mathfrak{r},\underline{d})$ contains couples $(u,v)$, where $u\colon(\Sigma_{g,n},\mathfrak{r})\setminus\underline{Q}\to\mathbb{R}$ is a $\underline{d}$-directed harmonic function, and $v\colon\Sigma_{g,n}\setminus\mathcal{K}_0(u)\to\mathbb{R}$ is a conjugate harmonic function to $u$ defined on the complement of the critical graph of $u$, i.e. $u+\sqrt{-1}v$ is a holomorphic function on $\Sigma_{g,n}\setminus\mathcal{K}_0(u)$.
\end{defn}
We note that $\mathfrak{H}(\mathfrak{r},\underline{d})$ has the structure of a real affine space. If $(u,v)$ and $(u',v')$ are in $\mathfrak{H}(\mathfrak{r},\underline{d})$ and $t\in\mathbb{R}$, then $u'':=tu+(1-t)u'$ is also a $\underline{d}$-directed harmonic function on $\Sigma_{g,n}\setminus\underline{Q}$; the complement
$\Sigma_{g,n}\setminus\mathcal{K}_0(u'')$ consists of $d$ connected components, each containing the germ of one of the paths $L_{i,j}$; the function $tv+(1-t)v'$ is well-defined on the image of $L_{i,j}$, at least for small positive times, and can be extended uniquely to a conjugate harmonic function to $u''$ on the component of $\Sigma_{g,n}\setminus\mathcal{K}_0(u'')$ containing $L_{i,j}$; these $d$ extensions give a harmonic function $v''\colon(\Sigma_{g,n},\mathfrak{r})\setminus\mathcal{K}_0(u'')\to\mathbb{R}$, and we may define the affine combination $t(u,v)+(1-t)(u',v')$ to be $(u'',v'')$.
To prove that $\mathfrak{H}(\mathfrak{r},\underline{d})$ is non-empty, and to compute its real dimension,
we first give a definition.
\begin{defn}
\label{defn:uddf}
Let $df$ be a meromorphic differential on $\Sigma_{g,n}$. We say that $df$ is $\underline{d}$-directed if $df$ is holomorphic away from $\underline{Q}$, and if for all $1\le i\le n$ the Laurent expansion of $df$ in a normal chart $w_i$ around $Q_i$ has the form
\[
df(w_i)=\frac{-d_i}{w_i^{d_i+1}}dw_i+\mathrm{l.o.t.}.
\]
\end{defn}
Note the different exponents in comparison with Definition \ref{defn:udharm}.
We then argue as follows: see \cite{Boedigheimer90} for more details.
\begin{itemize}
\item Given $(u,v)\in\mathfrak{H}(\mathfrak{r},\underline{d})$, the complex differential $df:=du+\sqrt{-1}dv$ can be continuously extended to a $\underline{d}$-directed meromorphic differential on $\Sigma_{g,n}$;
moreover, for all loops $\gamma$ in $\Sigma_{g,n}\setminus\underline{Q}$ we have $\mathrm{Re}(\int_\gamma df)=0$: we call this the \emph{$\mathrm{Re}\!\int\!0$-condition}.
\item Vice versa, given a $\underline{d}$-directed meromorphic differential $df$ on $\Sigma_{g,n}$
satisfying the $\mathrm{Re}\!\int\!0$-condition, we can choose
an integral $u=\int \mathrm{Re}(df)\colon\Sigma_{g,n}\setminus\underline{Q}\to \mathbb{R}$; $u$ is a $\underline{d}$-directed harmonic function, uniquely determined up to one real integration constant;
on $\Sigma_{g,n}\setminus\mathcal{K}_0(u)$ we can choose an integral $v=\int\Im(df)$, which is uniquely determined up to $d$ real integration constants.
\item The space of \emph{all} $\underline{d}$-directed meromorphic differentials on $(\Sigma_{g,n},\mathfrak{r})$ is an affine translation of $\mathcal{O}(\mathfrak{r},K(\mathfrak{r})+D)$ inside $\mathcal{O}(\mathfrak{r},K(\mathfrak{r})+D+\underline{Q})$; since $\deg(K(\mathfrak{r})+D)\ge 2g-1$, the complex dimension of $\mathcal{O}(\mathfrak{r},K(\mathfrak{r})+D)$ is $g-1+d$.
\item The space of $\underline{d}$-directed meromorphic differentials $df$ on $(\Sigma_{g,n},\mathfrak{r})$
satisfying the $\mathrm{Re}\!\int\!0$-condition is non-empty by an application of the Dirichlet principle; it has real dimension $2g-2+2d-(2g+n-1)=2d-n-1$
because it is obtained from the affine space of all $\underline{d}$-directed meromorphic differentials by imposing the vanishing of $2g+n-1$ real linear functionals.
\item Summing the real dimension of the space of $\underline{d}$-directed meromorphic differentials $df$ satisfying the $\mathrm{Re}\!\int\!0$-condition with the number $d+1$ of integration constants, we obtain that the real dimension of $\mathfrak{H}(\mathfrak{r},\underline{d})$ is
$3d-n=3h-(6g+4n-6)$. This is equal to $3h-\dim\mathfrak{M}_{g,n}$ unless $g=0$ and $n=1$,
in which case we get $3h+2$.
\end{itemize}
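As a concrete check of the dimension count above, take $g=1$, $n=1$ and $\underline{d}=(2)$, so that $h=2g+n+d-2=3$: the space of all $\underline{d}$-directed meromorphic differentials has real dimension $2(g-1+d)=4$, the $\mathrm{Re}\!\int\!0$-condition imposes $2g+n-1=2$ real linear conditions, and adding the $d+1=3$ integration constants yields
\[
\dim_{\mathbb{R}}\mathfrak{H}(\mathfrak{r},\underline{d})=(4-2)+3=5=3d-n=3h-(6g+4n-6).
\]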
The spaces $\mathfrak{H}(\mathfrak{r},\underline{d})$, for varying $\mathfrak{r}\in\mathfrak{Riem}(\Sigma_{g,n})$, assemble into a space $\tilde{\Harm}ud$; the forgetful map
$\tilde\theta_{\mathfrak{H}}\colon\tilde{\Harm}ud\to\mathfrak{Riem}(\Sigma_{g,n})$, sending $(\mathfrak{r},u,v)$ to $\mathfrak{r}$, is a bundle map.
\begin{defn}
\label{defn:Harmud}
The group $\Diff_{g,n}$ acts freely on $\tilde{\Harm}ud$ by pulling back both Riemann structures and harmonic functions. The orbit space is denoted
\[
\mathfrak{H}ud:=\tilde{\Harm}ud/\Diff_{g,n};
\]
the forgetful map $\tilde\theta_{\mathfrak{H}}$ gives rise to a bundle map $\theta_{\mathfrak{H}}\colon\mathfrak{H}ud\to\mathfrak{M}_{g,n}$.
\end{defn}
The space $\mathfrak{H}ud$ is an orientable manifold of dimension $3h$, and the map $\theta_{\mathfrak{H}}$ is a bundle map. If $g>0$ or $n>1$, the fibre $\theta_{\mathfrak{H}}^{-1}(\mathfrak{m})$ is canonically identified with $\mathfrak{H}(\mathfrak{r},\underline{d})$, for any $\mathfrak{m}=[\mathfrak{r}]\in\mathfrak{M}_{g,n}$. In the case $g=0$ and $n=1$, both $\theta_{\mathfrak{H}}$ and $\tilde\theta_{\mathfrak{H}}$ have only one fibre, $\tilde{\Harm}(\underline{d})$ is itself a real affine space of dimension $3h+2$, and
$\mathfrak{H}ud$ is the quotient of $\tilde{\Harm}(\underline{d})$ by a free, affine action of $\mathbb{C}$ by translations. In all cases $\theta_{\mathfrak{H}}$ has contractible fibres and is therefore a homotopy equivalence.
Given a class $[\mathfrak{r},u,v]\in\tilde{\Harm}ud$, we can consider the holomorphic function $f=u+\sqrt{-1}v\colon(\Sigma_{g,n}\setminus\mathcal{K}_0(u),\mathfrak{r})\to\mathbb{C}$.
The map $f$ restricts to an open embedding on each of the $d$ connected components of $\Sigma_{g,n}\setminus\mathcal{K}_0(u)$, with image given by the complement of a finite configuration of horizontal, left-oriented halflines, called \emph{slits}.
The graph $\mathcal{K}_0(u)$ has the following decorations:
\begin{itemize}
\item it is a \emph{fat graph}, i.e. at each vertex there is a cyclic order of the incident edges; the associated surface, obtained by fattening edges to strips, has genus $g$ and $d$ boundary components (it is indeed a small neighbourhood of $\mathcal{K}_0(u)$ in $\Sigma_{g,n}$);
\item all edges are oriented (by $-\nabla u$);
\item $n$ of the vertices are labelled $Q_1,\dots, Q_n$; let $\mathcal{P}_1,\dots,\mathcal{P}_k$ be the other vertices, for some $k\ge0$;
\item all edges incident to $Q_i$ are incoming; there is a partition of the
edges incident to $Q_i$ into $d_i$ parts; each part has an internal order, and the set of parts is also totally ordered; the cyclic order at $Q_i$ is induced by the lexicographic order given by this partition (use the vectors $e^{2\pi j\sqrt{-1}/d_i}X_i$ to split the parts of this partition);
\item no two edges incident to $\mathcal{P}_i$ can be consecutive in the cyclic order and both incoming;
\item the vertices $\mathcal{P}_1,\dots,\mathcal{P}_k$ are endowed with a preorder, i.e.
an equivalence relation together with a total order on the set of equivalence classes; $\mathcal{P}_i$ strictly precedes $\mathcal{P}_j$ in the preorder as soon as there is an edge oriented from $\mathcal{P}_j$ to $\mathcal{P}_i$ (evaluate the function $u$ to obtain the preorder).
\end{itemize}
If we fix a combinatorial type for $\mathcal{K}_0$ as a decorated graph, the subspace of $\mathfrak{H}ud$ of classes $[\mathfrak{r},u,v]$ realising this combinatorial type is homeomorphic to an open multisimplex, whose coordinates correspond to the horizontal and vertical positions of the slits in the associated slit configurations. The one-point compactification of $\mathfrak{H}ud$ admits a cell decomposition with cells in bijection with combinatorial types for $\mathcal{K}_0$ (together with the 0-cell given by the point at infinity).
The cellular structure of $\mathfrak{H}ud^{\infty}$ is described in detail in \cite{BH} and \cite{Bianchi:PhD}.
\section{The moduli space of branched coverings of \texorpdfstring{$\mathbb{C} P^1$}{CP1}}
\label{sec:ccO}
We fix $g\geq 0$, $n\geq 1$ and a sequence $\underline{d}=(d_1,\dots,d_n)$ as in Section \ref{sec:slit};
we moreover assume $d\ge2$, as in this section the case $d=1$ would be of little interest.
We still denote $d=\sum_{i=1}^nd_i$ and $h=2g+n+d-2$.
In this section we introduce $\bar{\cO}ud$, the moduli space of Riemann surfaces $(\Sigma_{g,n},\mathfrak{r})$ endowed with a $\underline{d}$-directed meromorphic function. In fact $\bar{\cO}ud$ can be considered as a closed subspace of $\mathfrak{H}ud$, and the restricted map
\[
\theta_{\bar{\cO}}=\theta_{\mathfrak{H}}|_{\bar{\cO}ud}\colon\bar{\cO}ud\to\mathfrak{M}_{g,n}
\]
is a homotopy equivalence whenever $d\ge 2g+n-1$. It will however be convenient to embed $\bar{\cO}ud$ directly
into a certain \emph{quotient} $\mathcal{H}armud$ of $\mathfrak{H}ud$: roughly speaking, whereas elements of $\mathfrak{H}ud$ carry the information of $d$ integration constants for the harmonic form $dv$, elements of $\mathcal{H}armud$ only carry the information of a single integration constant.
\subsection{Definition of \texorpdfstring{$\bar{\cO}ud$}{Ogn[ud]} as a subspace of \texorpdfstring{$\mathcal{H}armud$}{cHgn[ud]}}
\begin{defn}
\label{defn:cHarmr}
Recall Definition \ref{defn:udharm} and \ref{defn:Harmudfr}. For a Riemann structure $\mathfrak{r}$ on $\Sigma_{g,n}$ we denote by $\mathcal{H}arm(\mathfrak{r},\underline{d})$ the space of triples $(u,dv,v_{1,0})$, where $u\colon(\Sigma_{g,n},\mathfrak{r})\setminus\underline{Q}\to\mathbb{R}$ is a $\underline{d}$-directed harmonic function, $dv$ is the conjugate harmonic 1-form to $du$ on $\Sigma_{g,n}\setminus\underline{Q}$ (i.e. $dv$ is obtained from $du$ by applying the Hodge star operator), and $v_{1,0}$ is an integral of $dv$ on the component
of $\Sigma_{g,n}\setminus\mathcal{K}_0(u)$ containing the germ of a path $L_{1,0}$ exiting from $Q_1$ with velocity $X_1$, compare with Figure \ref{fig:moduli_3}.
\end{defn}
The space $\mathcal{H}arm(\mathfrak{r},\underline{d})$ is an affine quotient of $\mathfrak{H}(\mathfrak{r},\underline{d})$, by considering the surjective map of affine spaces sending $(u,v)$ to $(u,dv,v_{1,0})$, where $v_{1,0}$ is the restriction of $v$ to the component
of $\Sigma_{g,n}\setminus\mathcal{K}_0(u)$ containing the germ of $L_{1,0}$. The real dimension of $\mathcal{H}arm(\mathfrak{r},\underline{d})$ is
$3d-n-(d-1)=2d-n+1$.
\begin{defn}
\label{defn:cHarmud}
For $g>0$ or $n>1$ we define $\mathcal{H}armud$ as the real affine bundle of dimension $2d-n+1$ over $\mathfrak{M}_{g,n}$
whose fibre over $[\mathfrak{r}]=\mathfrak{m}\in\mathfrak{M}_{g,n}$ is $\mathcal{H}arm(\mathfrak{r},\underline{d})$. Formally, we first consider the affine bundle over $\mathfrak{Riem}(\Sigma_{g,n})$ with fibre $\mathcal{H}arm(\mathfrak{r},\underline{d})$ over $\mathfrak{r}$, and then we take the quotient of the total space of this affine bundle by the group $\Diff_{g,n}$, which acts by pulling back simultaneously Riemann structures, harmonic functions and harmonic forms.
We denote by $\check\theta_{\mathfrak{H}}\colon\mathcal{H}armud\to\mathfrak{M}_{g,n}$ the associated bundle map.
\end{defn}
For $g=0$ and $n=1$ we will not need to consider a space $\mathcal{H}armud$. The following definition should be compared with Definitions \ref{defn:udharm} and \ref{defn:uddf}.
\begin{defn}
\label{defn:ccOr}
Let $\mathfrak{r}$ be a Riemann structure on $\Sigma_{g,n}$.
A meromorphic function $f\colon(\Sigma_{g,n},\mathfrak{r})\to \mathbb{C} P^1$ is \emph{$\underline{d}$-directed} if it is holomorphic on $\Sigma_{g,n}\setminus\underline{Q}$ and if
for all $1\leq i\leq n$ and for any normal chart $w_i$ around $Q_i$, the Laurent expansion of $f$ has the form
\[
f(w_i)=\frac{1}{w_i^{d_i}}+\mathrm{l.o.t.},
\]
where the lower order terms correspond to powers of $w_i$ with exponent higher than $-d_i$:
we say that $f$ has a \emph{directed pole} of order $d_i$ at $Q_i$.
We let $\bar{\cO}(\mathfrak{r},\underline{d})$ be the space of $\underline{d}$-directed meromorphic functions on $(\Sigma_{g,n},\mathfrak{r})$.
\end{defn}
Using normal charts, one can note that if $f,g\in\bar{\cO}(\mathfrak{r},\underline{d})$ and $\lambda\in\mathbb{C}$, then
also $\lambda f+(1-\lambda)g\in\bar{\cO}(\mathfrak{r},\underline{d})$; therefore $\bar{\cO}(\mathfrak{r},\underline{d})$ is a \emph{complex affine} subspace
of $\mathcal{O}(\mathfrak{r},D)$ (see Definition \ref{defn:stddivisor}).
Given $f\in\bar{\cO}(\mathfrak{r},\underline{d})$, we can define a harmonic
function $u=\mathrm{Re}(f)$ on $(\Sigma_{g,n},\mathfrak{r})\setminus\underline{Q}$, which admits a conjugate harmonic function $v=\Im(f)$ on the entire subspace $\Sigma_{g,n}\setminus\underline{Q}$, and in particular on the component of $\Sigma_{g,n}\setminus\mathcal{K}_0(u)$ containing the germ of a path $L_{1,0}$ as in Definition \ref{defn:cHarmr}: thus we determine a triple $(u,dv,v_{1,0})\in\mathcal{H}arm(\mathfrak{r},\underline{d})$.
We obtain an inclusion of real affine spaces $\bar{\cO}(\mathfrak{r},\underline{d})\subset\mathcal{H}arm(\mathfrak{r},\underline{d})$;
the image of this inclusion consists of all triples $(u,dv,v_{1,0})\in\mathcal{H}arm(\mathfrak{r},\underline{d})$ such that $v_{1,0}$ can be continuously extended over $\Sigma_{g,n}\setminus\underline{Q}$ to an integral of $dv$.
\begin{defn}
\label{defn:ccOud}
We define $\bar{\cO}ud$ as the moduli space of Riemann surfaces $(\Sigma_{g,n},\mathfrak{r})$ endowed with a $\underline{d}$-directed meromorphic function
$f\in\bar{\cO}(\mathfrak{r},\underline{d})$. Two couples $(\mathfrak{r},f)$ and $(\mathfrak{r}',f')$ are equivalent if there is a diffeomorphism $\psi\in\Diff_{g,n}$
pulling back $\mathfrak{r}'$ to $\mathfrak{r}$ and $f'$ to $f$.
Formally, we first consider the space $\tilde{\bar{\cO}}_{g,n}[\underline{d}]$ of couples $(\mathfrak{r},f)$,
with $\mathfrak{r}\in\mathfrak{Riem}(\Sigma_{g,n})$ and $f\colon(\Sigma_{g,n},\mathfrak{r})\to\mathbb{C} P^1$ a $\underline{d}$-directed meromorphic function; the group
$\Diff_{g,n}$ acts freely on this space by pulling back Riemann structures and meromorphic functions, and
we define $\bar{\cO}ud$ as the quotient $\tilde{\bar{\cO}}_{g,n}[\underline{d}]/\Diff_{g,n}$.
We denote by $\theta_{\bar{\cO}}\colon\bar{\cO}ud\to\mathfrak{M}_{g,n}$ the forgetful map $\theta_{\bar{\cO}}\colon [\mathfrak{r},f]\mapsto[\mathfrak{r}]$.
\end{defn}
We used the same notation ``$\theta$'' in Definitions \ref{defn:Harmud}, \ref{defn:cHarmud} and \ref{defn:ccOud}: in fact, for $g>0$ or $n>1$, the space
$\bar{\cO}ud$ can be embedded into $\mathcal{H}armud$ as the closed subspace of classes of quadruples $[\mathfrak{r},u,dv,v_{1,0}]$ such that $v_{1,0}$ can be continuously extended to an integral $v\colon(\Sigma_{g,n},\mathfrak{r})\setminus\underline{Q}\to\mathbb{R}$ of $dv$.
Under this embedding, the map $\theta_{\bar{\cO}}$ from Definition \ref{defn:ccOud} is the restriction of the map
$\check\theta_{\mathfrak{H}}$ from Definition \ref{defn:cHarmud}. In the following we reformulate the characterisation of $\bar{\cO}ud$ as a subspace of $\mathcal{H}armud$.
\begin{defn}
\label{defn:cFgn}
For $g>0$ or $n>1$ we denote by $p\colon\mathscr{F}_{g,n}\to\mathfrak{M}_{g,n}$ the tautological \emph{closed} surface bundle over $\mathfrak{M}_{g,n}$, with fibre $\Sigma_{g,n}$. Formally, $\mathscr{F}_{g,n}=(\Sigma_{g,n}\times\mathfrak{Riem}(\Sigma_{g,n}))/\Diff_{g,n}$. Removing marked points fibrewise yields a tautological \emph{open} surface bundle $\mathring{\pr}\colon\mathring{\scF}_{g,n}\to\mathfrak{M}_{g,n}$, with fibre $\Sigma_{g,n}\setminus\underline{Q}$.
We can then compute fibrewise the first real cohomology and obtain a vector bundle
$H^1(\mathring{\pr})\colon H^1(\mathring{\scF}_{g,n})\to\mathfrak{M}_{g,n}$ with fibre $H^1(\Sigma_{g,n}\setminus\underline{Q})\cong\mathbb{R}^{2g+n-1}$.
We define a map of real affine bundles over $\mathfrak{M}_{g,n}$
\[
\int\colon\mathcal{H}armud\to H^1(\mathring{\scF}_{g,n})
\]
by sending the class $[\mathfrak{r},u,dv,v_{1,0}]$ to the cohomology class on $\mathring{\pr}^{-1}([\mathfrak{r}])\cong(\Sigma_{g,n},\mathfrak{r})\setminus\underline{Q}$ given by integrating the closed real 1-form $dv$ along 1-cycles.
\end{defn}
After Definition \ref{defn:cFgn}, for $g>0$ or $n>1$ we can then characterise $\bar{\cO}ud\subset\mathcal{H}armud$ as the kernel of the map $\int$, i.e. the preimage of the zero section of $H^1(\mathring{\scF}_{g,n})$. Comparing with the discussion after Definition \ref{defn:uddf}, we may say that for $f=(u,v)\in\bar{\cO}(\mathfrak{r},\underline{d})$ the $\underline{d}$-directed meromorphic form $df=du+\sqrt{-1}dv$ not only satisfies the ``$\mathrm{Re}\!\int\!0$-condition'', but also the analogous ``$\Im\!\int\!0$-condition''.
If $g> 0$ or $n> 1$, for all $\mathfrak{m}=[\mathfrak{r}]\in\mathfrak{M}_{g,n}$
there is a natural identification between $\theta^{-1}_{\bar{\cO}}(\mathfrak{m})$ and $\bar{\cO}(\mathfrak{r},\underline{d})$.
In the case $g=0$ and $n=1$, the space $\bar{\cO}(\mathfrak{r}_{\mathrm{st}},d)$
coincides with the space $\mathfrak{MonPol}_d$ of monic polynomials $z^d+a_{d-1}z^{d-1}+\dots+a_0$ of degree $d$ with complex coefficients. The space
$\bar{\cO}_{0,1}[d]$ is then the quotient of $\bar{\cO}(\mathfrak{r}_{\mathrm{st}},d)$ by the free action of $\mathbb{C}$ given by precomposing polynomials with translations of the complex plane: the element $\lambda\in \mathbb{C}$ acts on the polynomial $z^d+a_{d-1}z^{d-1}+\dots+a_0$ by sending it to the polynomial $(z+\lambda)^d+a_{d-1}(z+\lambda)^{d-1}+\dots+a_0$, which is again a monic polynomial of degree $d$.
\begin{defn}
\label{defn:NMonPol}
We denote by $\mathfrak{NMonPol}_d$ the space of \emph{normalised} monic polynomials of degree $d$, i.e. the quotient
of $\mathfrak{MonPol}_d$ by the free action of $\mathbb{C}$ described above.
The projection $\mathfrak{MonPol}_d\to\mathfrak{NMonPol}_d$ is a surjective map,
admitting a section given by taking polynomials of the form $z^d+a_{d-2}z^{d-2}+\dots+a_0$, i.e. with vanishing coefficient of $z^{d-1}$. We will therefore regard $\mathfrak{NMonPol}_d$ also as an affine subspace of $\mathfrak{MonPol}_d$.
\end{defn}
Note that in general the projection $\mathfrak{MonPol}_d\to\mathfrak{NMonPol}_d$ is not a map of affine spaces.
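Concretely, the normalised representative of a monic polynomial $f(z)=z^d+a_{d-1}z^{d-1}+\dots+a_0$ is $f(z-a_{d-1}/d)$, obtained by acting with the unique translation killing the coefficient of $z^{d-1}$; already for $d=2$ this assignment is not affine, as it sends $z^2+a_1z+a_0$ to $z^2+\pa{a_0-a_1^2/4}$.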
\subsection{Moduli spaces as classifying spaces}
\label{subsec:classifyingspaces}
Let $g>0$ or $n>1$. The tautological fibre bundle $p\colon\mathscr{F}_{g,n}\to\mathfrak{M}_{g,n}$ is a fibre bundle with fibres Riemann surfaces of type $\Sigma_{g,n}$: that is, each fibre of $p$ is a Riemann surface of genus $g$ endowed with $n$ ordered and directed marked points.
In fact, $p$ is \emph{universal} among fibre bundles with this property, in the following strong sense. Let $\mathcal{X}$ be a paracompact space and let $p_{\mathcal{Y}}\colon\mathcal{Y}\to\mathcal{X}$ be a fibre bundle over $\mathcal{X}$ with fibres smooth surfaces of genus $g$ endowed with the following additional structure:
\begin{itemize}
\item $n$ disjoint, continuous sections $Q_1,\dots,Q_n\colon \mathcal{X}\to\mathcal{Y}$ of $p_{\mathcal{Y}}$ are chosen, yielding $n$ marked points on each fibre;
\item for each $1\le i\le n$, a nowhere-vanishing section $X_i\colon Q_i(\mathcal{X})\to VT(\mathcal{Y})$ is chosen, where $VT(\mathcal{Y})$ denotes the vertical tangent bundle of $\mathcal{Y}$, making the $n$ marked points of each fibre into directed marked points;
\item each fibre of $p_{\mathcal{Y}}$ is endowed with a Riemann structure; Riemann structures change continuously
in $\mathfrak{Riem}(\Sigma_{g,n})$, i.e. for any fibre-wise smooth trivialisation $p_{\mathcal{Y}}^{-1}(U)\cong U\times\Sigma_{g,n}$ the natural map of sets $U\to\mathfrak{Riem}(\Sigma_{g,n})$ is continuous.
\end{itemize}
Then there is a unique, continuous map $\kappa\colon\mathcal{X}\to\mathfrak{M}_{g,n}$ such that $\mathcal{Y}$ is the pull back of $\mathscr{F}_{g,n}$ along $\kappa$.
Consider now the projection $\check\theta\colon\mathcal{H}armud\to\mathfrak{M}_{g,n}$, and let $\checkp_{\mathfrak{H}}\colon\check\mathscr{F}_{g,n}^{\mathfrak{H}}\to\mathcal{H}armud$ be the pull back of the tautological surface bundle.
Similarly, let $p_{\bar{\cO}}\colon\mathscr{F}^{\bar{\cO}}_{g,n}\to\bar{\cO}ud$ be the restriction of $\checkp_{\mathfrak{H}}$
over $\bar{\cO}ud\subset\mathcal{H}armud$.
Then $\checkp_{\mathfrak{H}}$ is universal among fibre bundles $p_{\mathcal{Y}}\colon\mathcal{Y}\to\mathcal{X}$ over paracompact spaces with fibres Riemann surfaces of type $\Sigma_{g,n}$ and endowed with the following:
\begin{itemize}
\item there is a continuous function $u\colon\mathcal{Y}\setminus(Q_1(\mathcal{X})\sqcup\dots\sqcup Q_n(\mathcal{X}))\to\mathbb{R}$ restricting to a $\underline{d}$-directed harmonic function on each fibre;
\item there is a continuous function $v\colon \mathcal{Y}_{1,0}\to\mathbb{R}$, where $\mathcal{Y}_{1,0}\subset\mathcal{Y}$ is the union, over the fibres of $p_{\mathcal{Y}}$, of the connected components of the complements of the critical graphs of $u$ containing germs of paths exiting from the section $Q_1$ with velocity the vector field $X_1$; $v$ is moreover a harmonic conjugate to $u$ on each portion of fibre where it is defined.
\end{itemize}
Indeed the datum of such a fibre bundle is equivalent to a map $\kappa\colon\mathcal{X}\to\mathfrak{M}_{g,n}$ together with a section of the pull back affine bundle $\kappa^*(\check\theta_{\mathfrak{H}})$; in other words, the datum of such a fibre bundle is equivalent to a map $\check\kappa_{\mathfrak{H}}\colon\mathcal{X}\to\mathcal{H}armud$.
Finally, by restriction of the previous case, $p_{\bar{\cO}}$ is universal among fibre bundles over paracompact spaces with fibres Riemann surfaces $\Sigma_{g,n}$ endowed with a $\underline{d}$-directed meromorphic function.
\subsection{Homotopy equivalence between \texorpdfstring{$\bar{\cO}ud$}{Ogn[ud]} and \texorpdfstring{$\mathfrak{M}_{g,n}$}{Mgn}}
\label{subsec:ccOhtequivmgn}
In this subsection we prove the following theorem, which is the first main result of the article.
\begin{thm}
\label{thm:main1}
Suppose $d\geq 2g+n-1$. Then the map $\theta_{\bar{\cO}}\colon\bar{\cO}ud\to\mathfrak{M}_{g,n}$ is a fibre bundle map with contractible fibres,
hence a homotopy equivalence; moreover the space $\bar{\cO}ud$ is an orientable manifold of dimension $2h=2(2g+n+d-2)$.
\end{thm}
The condition $d\geq 2g+n-1$ is needed in the following lemma, in which we apply Theorem \ref{thm:RiemannRoch} to prove that $\bar{\cO}(\mathfrak{r},\underline{d})$ is non-empty, for all Riemann structures $\mathfrak{r}$ on $\Sigma_{g,n}$.
\begin{lem}
\label{lem:ccOnonempty}
Suppose $d\geq 2g+n-1$. Then for every Riemann structure $\mathfrak{r}$ on $\Sigma_{g,n}$,
$\bar{\cO}(\mathfrak{r},\underline{d})$ is non-empty and is a complex affine space of complex dimension $d-g+1-n$.
\end{lem}
\begin{proof}
We consider the divisor $\mathcal{D}:=D-\sum_{i=1}^nQ_i$, whose degree is $\geq 2g-1$: by Theorem \ref{thm:RiemannRoch} we have
\[
\dim_{\mathbb{C}}\mathcal{O}\pa{\mathfrak{r},\mathcal{D}}=d-g+1-n.
\]
We note that $\mathcal{O}\pa{\mathfrak{r},\mathcal{D}}$ is a sub-vector space of $\mathcal{O}(\mathfrak{r},D)$, and that
$\bar{\cO}(\mathfrak{r},\underline{d})$ is either empty or a translate of $\mathcal{O}\pa{\mathfrak{r},\mathcal{D}}$ in $\mathcal{O}(\mathfrak{r},D)$:
indeed the difference of two meromorphic functions in $\mathcal{O}(\mathfrak{r},D)$ lies in $\mathcal{O}\pa{\mathfrak{r},\mathcal{D}}$.
To prove that $\bar{\cO}(\mathfrak{r},\underline{d})$ is non-empty, consider for all $1\leq j\leq n$ the divisor $\mathcal{D}+Q_j$, whose
degree is $\geq 2g$: again by Theorem \ref{thm:RiemannRoch} we have
\[
\dim_{\mathbb{C}}\mathcal{O}\pa{\mathfrak{r},\mathcal{D}+Q_j}=d-g+2-n,
\]
so that, by comparing dimensions, we can find a function
\[
f_j\in\mathcal{O}\pa{\mathfrak{r},\mathcal{D}+Q_j}\setminus \mathcal{O}\pa{\mathfrak{r},\mathcal{D}}.
\]
Note that $f_j$ has a pole of order exactly $d_j$ at $Q_j$;
up to multiplying by a suitable constant in $\mathbb{C}^*$ we may assume that the Laurent expansion of $f_j$ around $Q_j$, read in a normal
chart $w_j$, has the form $1/w_j^{d_j}+\mathrm{l.o.t.}$.
We can now consider the sum $f=\sum_{j=1}^nf_j$, which belongs to $\bar{\cO}(\mathfrak{r},\underline{d})$ and witnesses that the latter is non-empty.
\end{proof}
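As a consistency check, for $g=0$ and $n=1$ Lemma \ref{lem:ccOnonempty} gives $\dim_{\mathbb{C}}\bar{\cO}(\mathfrak{r}_{\mathrm{st}},d)=d-0+1-1=d$, in agreement with the identification of $\bar{\cO}(\mathfrak{r}_{\mathrm{st}},d)$ with the space $\mathfrak{MonPol}_d$ of monic polynomials of degree $d$, whose coordinates are the $d$ complex coefficients $a_0,\dots,a_{d-1}$.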
Using Lemma \ref{lem:ccOnonempty}, we are ready to prove Theorem \ref{thm:main1}.
\begin{proof}[Proof of Theorem \ref{thm:main1}]
Assume first $g> 0$ or $n>1$.
Recall that $\check\theta_{\mathfrak{H}}\colon\mathcal{H}armud\to\mathfrak{M}_{g,n}$ is a real affine bundle of dimension $2d-n+1$, whereas $H^1(\mathring{\pr})\colon H^1(\mathring{\scF}_{g,n})\to\mathfrak{M}_{g,n}$ is a vector bundle of dimension $2g+n-1$. Lemma \ref{lem:ccOnonempty}
implies that $\int\colon\mathcal{H}armud\to H^1(\mathring{\scF}_{g,n})$ is surjective on fibres:
indeed the difference of dimensions $2d-n+1-(2g+n-1)=2d-2g+2-2n$ coincides with the dimension of the fibres of $\theta_{\bar{\cO}}=\ker\int$. Since $\bar{\cO}ud$ is the kernel of a surjective map of a real affine bundle onto a real vector bundle, we conclude that
$\theta_{\bar{\cO}}\colon\bar{\cO}ud\to\mathfrak{M}_{g,n}$ is also an affine bundle map, with fibres of real dimension $2(d-g+1-n)$.
Since fibres of $\theta_{\bar{\cO}}$ have a complex structure, they are equipped with a canonical orientation.
Since $\theta_{\bar{\cO}}\colon\bar{\cO}ud\to\mathfrak{M}_{g,n}$ is a fibre bundle map with contractible
fibres, we obtain the first statement of Theorem \ref{thm:main1}. Moreover $\bar{\cO}ud$ is the total space of a fibre bundle over an orientable manifold of real dimension $6g-6+4n$, with fibres being oriented manifolds of real dimension $2(d-g+1-n)$: it follows that $\bar{\cO}ud$ is an orientable manifold of real dimension $6g-6+4n + 2(d-g+1-n)=2h$.
In the case $g=0$ and $n=1$, the map $\theta_{\bar{\cO}}\colon\bar{\cO}ud\to\mathfrak{M}_{g,n}$ is also a homotopy equivalence, as
$\bar{\cO}_{0,1}[d]\cong\mathfrak{NMonPol}_d$ is contractible; the real dimension of
the unique fibre of $\theta_{\bar{\cO}}$ is $2(d-1)=2h$; note again that this is $2$ \emph{less} than the real dimension of
$\bar{\cO}(\mathfrak{r}_{\mathrm{st}},d)\cong\mathfrak{MonPol}_d$, which is $2d$.
\end{proof}
In the case $d<2g+n-1$ our argument does not prove the surjectivity of $\theta_{\bar{\cO}}\colon\bar{\cO}ud\to\mathfrak{M}_{g,n}$, but it is still true that the fibres $\theta_{\bar{\cO}}^{-1}(\mathfrak{m})$ are either empty or contractible, so one could expect $\bar{\cO}ud$ to capture the homotopy type of a subspace of $\mathfrak{M}_{g,n}$. For example, when $n=1$ and $d=2$, the image of
$\theta_{\bar{\cO}}\colon\bar{\cO}_{g,1}[d]\to\mathfrak{M}_{g,1}$ is precisely the \emph{hyperelliptic moduli space}, i.e. the subspace of $\mathfrak{M}_{g,1}$ containing moduli $\mathfrak{m}=[\mathfrak{r}]$ such that $(\Sigma_{g,1},\mathfrak{r})$ admits a hyperelliptic involution fixing its marked point $Q$ (and sending $X$ to $-X$). It would be interesting to generalise this observation to higher values of $d$.
\section{The Poincare property for the PMQ \texorpdfstring{$\mathfrak{S}_d^{\mathrm{geo}}$}{Sdgeo}}
\label{sec:Poincare}
Let $d\ge1$ be fixed throughout the section and let $\mathfrak{S}_d$ denote the group of permutations of the set $\set{1,\dots,d}$. In \cite{Bianchi:Hur1}
the word length norm on $\mathfrak{S}_d$ with respect to the generating set of all transpositions was used to define a \emph{partially multiplicative quandle} (PMQ for short) called $\mathfrak{S}_d^{\mathrm{geo}}$. The underlying set of $\mathfrak{S}_d^{\mathrm{geo}}$ is $\mathfrak{S}_d$, and conjugation is defined using the conjugation of the group $\mathfrak{S}_d$; also the partial multiplication coincides, whenever defined, with the multiplication in $\mathfrak{S}_d$; a couple of permutations $(\sigma,\tau)$ admits a product in $\mathfrak{S}_d^{\mathrm{geo}}$ if and only if it is \emph{geodesic}, i.e. the permutation $\sigma\tau\in\mathfrak{S}_d$ has norm equal to the sum of the norms of $\sigma$ and $\tau$. By construction, $\mathfrak{S}_d^{\mathrm{geo}}$ is a \emph{normed} PMQ, i.e. it is endowed with a morphism of PMQs $N\colon\mathfrak{S}_d^{\mathrm{geo}}\to\mathbb{N}$, sending
$(\mathfrak{S}_d^{\mathrm{geo}})_+:=\mathfrak{S}_d^{\mathrm{geo}}\setminus\set{\mathbbl{1}}$ to positive natural numbers. We remark that $\mathfrak{S}_d^{\mathrm{geo}}$
is augmented (a product of elements $\neq\mathbbl{1}$ is $\neq \mathbbl{1}$) and locally finite (every element can be written in only finitely many ways as a finite product of elements $\neq\mathbbl{1}$). In this section we show the following theorem.
\begin{thm}
\label{thm:main2}
For $d\ge1$ the partially multiplicative quandle $\mathfrak{S}_d^{\mathrm{geo}}$ is Poincare, i.e.
all components of $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$ are topological manifolds.
\end{thm}
We immediately remark that for $d=1$ the PMQ $\mathfrak{S}_1^{\mathrm{geo}}$ is isomorphic to the trivial PMQ $\set{\mathbbl{1}}$, so that $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$ is a point, and in particular a manifold of dimension 0. We will henceforth focus on the case $d\ge2$.
\begin{nota}
\label{nota:cR}
For $t\ge0$ we denote by $\mathcal{R}_t$ the closed rectangle $[0,t]\times[0,1]\subset\mathbb{C}$, which for $t=0$ is a vertical segment; we denote by $\mathring{\cR}_t$ the interior of $\mathcal{R}_t$, which for $t=0$ is empty. For $t=1$ we abbreviate $\mathcal{R}_1=\mathcal{R}$ and $\mathring{\cR}_1=\mathring{\cR}$.
\end{nota}
The simplicial Hurwitz space $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$ was introduced in \cite{Bianchi:Hur1}; since $\mathfrak{S}_d^{\mathrm{geo}}$ is a locally finite PMQ, by \cite[Theorem 9.1]{Bianchi:Hur2} the space $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$ is homeomorphic to the \emph{coordinate-free Hurwitz space} $\mathrm{Hur}(\mathring{\cR},(\mathfrak{S}_d^{\mathrm{geo}})_+)$: points in the latter space are couples $(P,\psi)$ such that $P\subset\mathring{\cR}$ is a finite subset, and $\psi\colon\mathfrak{Q}(P)\to\mathfrak{S}_d^{\mathrm{geo}}$ is an augmented map of PMQs (roughly speaking, a consistent assignment of an element in $(\mathfrak{S}_d^{\mathrm{geo}})_+$ for each based loop in $\pi_1(\mathbb{C}\setminus P)$ that spins clockwise around exactly one point of $P$).
In turn, by \cite[Proposition 7.3]{Bianchi:Hur2} and \cite[Lemma 2.7]{Bianchi:Hur3}, there is a homotopy equivalence between $\mathrm{Hur}(\mathring{\cR},(\mathfrak{S}_d^{\mathrm{geo}})_+)$ and the Hurwitz-Moore topological monoid $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$: the latter space contains triples $(t,P,\psi)$, where $t\ge0$ and $(P,\psi)$ has a similar description as above, but $P$ is required to be a finite subset of $\mathring{\cR}_t$.
By \cite[Theorem 2.15]{Bianchi:Hur3}, the connected components of $\mathrm{Hur}(\mathring{\cR},(\mathfrak{S}_d^{\mathrm{geo}})_+)$ are in bijection with the elements of the completion $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ of the PMQ $\mathfrak{S}_d^{\mathrm{geo}}$: the bijection is given by sending configurations in $\mathrm{Hur}(\mathring{\cR},(\mathfrak{S}_d^{\mathrm{geo}})_+)$ to their $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$-valued total monodromy. Putting these results together, and noting that all above homeomorphisms respect the total monodromy, we obtain that $\pi_0(\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}}))$ is in bijection with $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$.
The complete PMQ $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ was described in \cite[Proposition 7.13]{Bianchi:Hur1}: its elements are given by sequences $(\sigma;\mathfrak{P}_1,\dots,\mathfrak{P}_\ell;r_1,\dots,r_\ell)$, where $\sigma\in\mathfrak{S}_d$, the sets $\mathfrak{P}_1,\dots,\mathfrak{P}_\ell$ form an unordered partition of $\set{1,\dots,d}$, and $r_1,\dots,r_\ell$ are non-negative integers, each corresponding to one piece of the partition; moreover, certain mild combinatorial conditions must be satisfied. By \cite[Lemma 7.12]{Bianchi:Hur1}, each element in $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ can be also represented as a product $\hat\mathbf{t}_1\dots\hat\mathbf{t}_r$, where $\hat\mathbf{t}_i\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ is the image of the transposition $\mathbf{t}_i\in\mathfrak{S}_d$ under the canonical inclusion of PMQs $\mathfrak{S}_d^{\mathrm{geo}}\hookrightarrow\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$; the decomposition of an element in $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ into transpositions is unique up to standard moves \cite[Definition 3.6]{Bianchi:Hur1}.
In order to prove Theorem \ref{thm:main2} we thus have to show that for all $a\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ the space
$\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})(a)$, or equivalently the homeomorphic space $\mathrm{Hur}(\mathring{\cR},(\mathfrak{S}_d^{\mathrm{geo}})_+)_a$, is a topological manifold. By \cite[Theorem 9.3]{Bianchi:Hur2} we can restrict to the case $a=\hat\sigma$, i.e. $a$ is the image of an element $\sigma\in\mathfrak{S}_d^{\mathrm{geo}}$ along the canonical inclusion $\mathfrak{S}_d^{\mathrm{geo}} \hookrightarrow\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$; in the notation of \cite[Proposition 7.13]{Bianchi:Hur1} we have
$\hat\sigma=(\sigma;c_1,\dots,c_n;0,\dots,0)$, where $c_1,\dots,c_n$ are the cycles of a permutation $\sigma\in\mathfrak{S}_d$, considered as subsets of $\set{1,\dots,d}$. Using the action by global conjugation \cite[Subsection 6.2]{Bianchi:Hur2}, it suffices to consider one representative $\sigma$ in each conjugacy class of $\mathfrak{S}_d^{\mathrm{geo}}$.
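For instance, for $d=4$ and $\sigma=(1,2)(3,4)$ the corresponding element is $\hat\sigma=\pa{(1,2)(3,4)\,;\,\set{1,2},\set{3,4}\,;\,0,0}$, the two cycles of $\sigma$ giving the two pieces of the partition.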
We will first address the case $\sigma=\mathrm{lc}_d\in\mathfrak{S}_d^{\mathrm{geo}}$, where $\mathrm{lc}_d=(1,\dots,d)$ is the \emph{long cycle}, and then we will deduce the case of a generic permutation $\sigma\in\mathfrak{S}_d^{\mathrm{geo}}$.
\subsection{The case \texorpdfstring{$\sigma=\mathrm{lc}_d$}{sigma=lcd}}
\label{subsec:monic}
Recall Definition \ref{defn:NMonPol}, and let $f(z)=z^d+a_{d-2}z^{d-2}+\dots+a_0$ be a normalised monic polynomial of degree $d$.
The derivative of $f$ is the polynomial $f'(z)=dz^{d-1}+(d-2)a_{d-2}z^{d-3}+\dots+a_1$; denote the $d-1$ roots of $f'$ (counted with multiplicity) by $\zeta_1,\dots,\zeta_{d-1}$. The images $f(\zeta_1),\dots,f(\zeta_{d-1})$ are the \emph{critical values} of $f$ when considered as a branched cover $f\colon\mathbb{C}\to\mathbb{C}$, and they form an element of the $(d-1)$-fold symmetric power of $\mathbb{C}$, denoted $\mathrm{SP}^{d-1}(\mathbb{C})=\mathbb{C}^{d-1}/\mathfrak{S}_{d-1}$. We obtain a continuous and proper map $\mathfrak{cv}\colon\mathfrak{NMonPol}_d\to\mathrm{SP}^{d-1}(\mathbb{C})$, with finite fibres.
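In the simplest case $d=2$, for instance, a normalised monic polynomial has the form $f(z)=z^2+a_0$, its derivative $f'(z)=2z$ has the single root $\zeta_1=0$, and the unique critical value is $f(0)=a_0$; hence $\mathfrak{cv}$ identifies $\mathfrak{NMonPol}_2$ with $\mathrm{SP}^{1}(\mathbb{C})=\mathbb{C}$, and in particular $\mathfrak{NMonPol}_2(\mathcal{R})$ (defined below) with $\mathcal{R}$.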
\begin{defn}
We denote by $\mathfrak{NMonPol}_d(\mathcal{R})$ (respectively $\mathfrak{NMonPol}_d(\mathring{\cR})$) the preimage under $\mathfrak{cv}$ of $\mathrm{SP}^{d-1}(\mathcal{R})$ (respectively of $\mathrm{SP}^{d-1}(\mathring{\cR})$).
\end{defn}
The space $\mathfrak{NMonPol}_d(\mathcal{R})$ is compact. In this subsection we prove the following lemma.
\begin{lem}
\label{lem:tildecv}
There exists a continuous bijection
\[
\check\mathfrak{cv}^{\mathcal{R}}\colon\mathfrak{NMonPol}_d(\mathcal{R})\to\mathrm{Hur}(\mathcal{R};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathrm{lc}_d},
\]
restricting to a bijection $\mathfrak{NMonPol}_d(\mathring{\cR})\to\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathrm{lc}_d}$.
\end{lem}
Let us assume for a moment that Lemma \ref{lem:tildecv} holds. Both $\mathfrak{NMonPol}_d(\mathcal{R})$
and $\mathrm{Hur}(\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}})_{\hat\mathrm{lc}_d}$ are compact Hausdorff spaces: the second, in particular, is homeomorphic
to the realisation of a finite semi-bisimplicial set $\mathrm{Arr}^{\mathrm{ndeg}}(\mathfrak{S}_d^{\mathrm{geo}})(\hat\mathrm{lc}_d)$.
It follows that $\check\mathfrak{cv}^{\mathcal{R}}$ is a homeomorphism, hence, by restriction, $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathrm{lc}_d}$
is homeomorphic to the open subspace $\mathfrak{NMonPol}_d(\mathring{\cR})$ of $\mathfrak{NMonPol}_d$, which is a manifold.
The map $\check\mathfrak{cv}^{\mathcal{R}}$ will be a lift of the map $\mathfrak{cv}\colon\mathfrak{NMonPol}_d(\mathcal{R})\to\mathrm{Hur}(\mathcal{R};\mathbb{N}_+)_{d-1}\cong\mathrm{SP}^{d-1}(\mathcal{R})$
along the map $N_*\colon \mathrm{Hur}(\mathcal{R};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathrm{lc}_d}\to\mathrm{Hur}(\mathcal{R};\mathbb{N}_+)_{d-1}$ induced by the augmented map of PMQs $N\colon\mathfrak{S}_d^{\mathrm{geo}}\to \mathbb{N}$; therefore we use the notation ``$\check\mathfrak{cv}$''. The superscript ``$\mathcal{R}$'' is to distinguish this map from a similar one that we will consider in Section \ref{sec:homeo}.
\begin{defn}
\label{defn:f0}
We denote by $f_0\in\mathfrak{NMonPol}_d(\mathcal{R})$ the polynomial $f_0(z)=z^d$. We define a deformation retraction $\rho\colon\mathfrak{NMonPol}_d(\mathcal{R})\times[0,1]\to\mathfrak{NMonPol}_d(\mathcal{R})$ onto $f_0$ by the formula $\rho(f,t)=f_0$ for $t=0$, and
$\rho(f,t)=t^d f(z/t)$ for $t>0$. Note that the critical values of $t^d f(z/t)$ are obtained by multiplying the critical values of $f(z)$ by $t^d$, and hence they are contained in $\mathcal{R}$ as well.
\end{defn}
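Explicitly, writing $f(z)=z^d+a_{d-2}z^{d-2}+\dots+a_0$, one has $\rho(f,t)(z)=z^d+t^2a_{d-2}z^{d-2}+\dots+t^da_0$, which is again a normalised monic polynomial and depends continuously on $(f,t)$, also at $t=0$; moreover, if $f'(\zeta)=0$, then $\rho(f,t)'(t\zeta)=t^{d-1}f'(\zeta)=0$ and $\rho(f,t)(t\zeta)=t^df(\zeta)$, confirming the claim on critical values.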
Let $f(z)\in\mathfrak{NMonPol}_d(\mathcal{R})$, and let $P=\set{f(\zeta_1),\dots,f(\zeta_{d-1})}$, considered as a finite subset of $\mathcal{R}$ of cardinality at most $d-1$, where $\zeta_1,\dots,\zeta_{d-1}$ are the roots of $f'(z)$, as above. The branched cover $f\colon\mathbb{C}\to\mathbb{C}$ restricts to a genuine $d$-fold cover $f\colon f^{-1}(\mathbb{C}\setminus P)\to\mathbb{C}\setminus P$. In particular, the restriction $f\colon f^{-1}(\mathbb{C}\setminus\mathcal{R})\to\mathbb{C}\setminus\mathcal{R}$ is a $d$-fold cyclic covering of the annulus
$\mathbb{C}\setminus\mathcal{R}$. Varying $f$ in $\mathfrak{NMonPol}_d(\mathcal{R})$, we obtain a family of $d$-fold cyclic coverings of $\mathbb{C}\setminus\mathcal{R}$.
We fix once and for all a trivialisation $f_0^{-1}(\mathbb{H}_{<0})\cong\set{1,\dots,d}\times \mathbb{H}_{<0}$, where $\mathbb{H}_{<0}=\set{z\,|\,\Im(z)<0}$; later, we will add a mild combinatorial assumption on this fixed trivialisation.
Through the homotopy $\rho$, we obtain for all $f\in\mathfrak{NMonPol}_d(\mathcal{R})$ a trivialisation
\[
f^{-1}(\mathbb{H}_{<0})\cong\set{1,\dots,d}\times \mathbb{H}_{<0}.
\]
Taking all $f\in\mathfrak{NMonPol}_d(\mathcal{R})$ at the same time, we obtain an embedding
\[
\iota\colon \mathfrak{NMonPol}_d(\mathcal{R})\times\set{1,\dots,d}\times\mathbb{H}_{<0}\hookrightarrow \mathfrak{NMonPol}_d(\mathcal{R})\times\mathbb{C}
\]
that is compatible with the projection onto $\mathfrak{NMonPol}_d(\mathcal{R})$, and that satisfies the equality
$f(\pi_{\mathbb{C}}(\iota(f,i,z)))=z$ for all $f\in\mathfrak{NMonPol}_d(\mathcal{R})$, $i\in\set{1,\dots,d}$ and $z\in\mathbb{H}_{<0}$. Here $\pi_{\mathbb{C}}$ denotes the projection onto the factor $\mathbb{C}$.
Let $*=-\sqrt{-1}$ be the basepoint of $\mathbb{C}mP$, as well as of its subspace $\mathbb{C}\setminus\mathcal{R}$. If a $d$-fold cyclic covering $f\colon f^{-1}(\mathbb{C}\setminus\mathcal{R})\to\mathbb{C}\setminus\mathcal{R}$ of the annulus $\mathbb{C}\setminus\mathcal{R}$ is endowed with a trivialisation over $\mathbb{H}_{<0}$, and in particular over $*$, we can compute the monodromy $\omega(f)\in\mathfrak{S}_d$ as the action on $f^{-1}(*)\cong\set{1,\dots,d}$ induced by a based loop in $\mathbb{C}\setminus\mathcal{R}$ spinning once clockwise around $\mathcal{R}$. The monodromy $\omega(f)\in\mathfrak{S}_d$ is a permutation consisting of a single cycle of length $d$, and is independent of $f\in\mathbb{N}MonPol_d(\mathcal{R})$; up to changing the above ``once and for all fixed'' trivialisation $f_0^{-1}(\mathbb{H}_{<0})\cong\set{1,\dots,d}\times \mathbb{H}_{<0}$, we can assume $\omega(f)=\mathrm{lc}_d$ for all $f\in\mathbb{N}MonPol_d(\mathcal{R})$.
More generally, the covering $f\colon f^{-1}(\mathbb{C}mP)\to\mathbb{C}mP$ gives rise to an action
of the group $\pi_1(\mathbb{C}mP,*)$ on the set $f^{-1}(*)\cong\set{1,\dots,d}$. We thus obtain a map of groups $\varphi\colon\pi_1(\mathbb{C}mP,*)\to\mathfrak{S}_d$, which is the $\mathfrak{S}_d$-valued monodromy associated with $f$. Restricting to the fundamental PMQ $\mathfrak{Q}(P)$, and interpreting elements of $\mathfrak{S}_d$ as the corresponding elements of the PMQ $\mathfrak{S}_d^{\mathrm{geo}}$, we obtain a map of PMQs $\psi\colon\mathfrak{Q}(P)\to\mathfrak{S}_d^{\mathrm{geo}}$, which is the $\mathfrak{S}_d^{\mathrm{geo}}$-valued monodromy of $f$. We thus obtain a configuration $(P,\psi)\in\mathrm{Hur}(\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}})$.
By construction, if $f\in\mathbb{N}MonPol_d(\mathring{\cR})$, then the set $P$ of critical values of $f$ is contained in $\mathring{\cR}$, so that $(P,\psi)$ is contained in the subspace $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})$.
Our next goal is to show the following two statements:
\begin{itemize}
\item the $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$-valued total monodromy $\omega(P,\psi)$ is precisely $\hat\mathrm{lc}_d$, i.e. the image of $\mathrm{lc}_d\in\mathfrak{S}_d^{\mathrm{geo}}$ under the inclusion $\mathfrak{S}_d^{\mathrm{geo}}\hookrightarrow\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$;
\item the map $\psi$ sends non-trivial loops of $\mathfrak{Q}(P)$ to elements of $(\mathfrak{S}_d^{\mathrm{geo}})_+$.
\end{itemize}
Let $|P|=k$ and write $P=\set{z_1,\dots,z_k}$. Let $\alpha_1,\dots,\alpha_k\colon[0,1]\to\mathbb{C}mP$ be loops based at $*$ representing an admissible generating set for $\pi_1(\mathbb{C}mP,*)$: the loops are embedded and disjoint away from the endpoints, and $\alpha_i$ only spins once clockwise around $z_i$. Let $D_i\subset\mathbb{C}$ be the open disc bounded by $\alpha_i$. Let $\sigma_i=\varphi(\alpha_i)\in\mathfrak{S}_d$, and let $c_{i,1},\dots,c_{i,\lambda_i}$ be the cycles in the cycle decomposition of $\sigma_i$, where $\lambda_i= d-N(\sigma_i)$. Then $f^{-1}(D_i)$ is the union of $\lambda_i$ open discs in $\mathbb{C}$, which are in canonical bijection with the cycles $c_{i,1},\dots,c_{i,\lambda_i}$; moreover the disc corresponding to $c_{i,j}$ covers $D_i$ as a cyclic branched cover of degree $|c_{i,j}|$ with a unique branch point $\zeta_{i,j}$ over $z_i$. The point $\zeta_{i,j}\in\mathbb{C}$ is a zero of $f'$ of order $|c_{i,j}|-1=N(c_{i,j})$, and all zeroes of $f'$ arise in this way. It follows that $N(\sigma_i)=\sum_{j=1}^{\lambda_i}N(c_{i,j})$ is also equal to the number of zeroes of $f'$ lying over the critical value $z_i$ of $f$, counted with multiplicity. Since $z_i$ is assumed to be a critical value of $f$, we obtain that $N(\sigma_i)\ge1$, i.e. $\sigma_i\in(\mathfrak{S}_d^{\mathrm{geo}})_+$. Moreover, summing over $1\le i\le k$, we obtain the equality $N(\sigma_1)+\dots+N(\sigma_k)=d-1$, which is the degree of the polynomial $f'$, but also the norm of $\mathrm{lc}_d$. Up to rearranging the indices from $1$ to $k$, we can assume that the product $\alpha_1\dots\alpha_k$ represents the same element of $\pi_1(\mathbb{C}mP,*)$ as an embedded loop spinning once clockwise around $\mathcal{R}$. It follows that the product $\sigma_1\dots\sigma_k$ is defined in $\mathfrak{S}_d^{\mathrm{geo}}$, and is equal to $\mathrm{lc}_d\in\mathfrak{S}_d^{\mathrm{geo}}$ (a priori we only know that $\sigma_1\dots\sigma_k$ is an element of $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ mapping to the group element $\mathrm{lc}_d\in\mathfrak{S}_d$ under the natural map of PMQs $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}\to\mathfrak{S}_d$ having the group $\mathfrak{S}_d$ as target).
In fact a bit more is true, and will be used later: each partial product $\sigma_i\dots\sigma_{i'}$, for $i<i'$, is defined in $\mathfrak{S}_d^{\mathrm{geo}}$.
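For illustration, suppose $d=4$ and $f$ has exactly two critical values $z_1,z_2$, with $\sigma_1$ a transposition and $\sigma_2$ a $3$-cycle: then $f^{-1}(D_1)$ consists of $\lambda_1=3$ discs, one of which covers $D_1$ twice while the other two map homeomorphically, and $f^{-1}(D_2)$ consists of $\lambda_2=2$ discs, one of which is a $3$-fold cyclic branched cover of $D_2$; the polynomial $f'$ has a simple zero over $z_1$ and a double zero over $z_2$, and $N(\sigma_1)+N(\sigma_2)=1+2=3=d-1$.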
\begin{defn}
\label{defn:tildecv}
The above assignment $f\mapsto(P,\psi)$ gives a map of sets
\[
\check\mathfrak{cv}^{\mathcal{R}}\colon\mathbb{N}MonPol_d(\mathcal{R})\to\mathrm{Hur}(\mathcal{R};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathrm{lc}_d}.
\]
\end{defn}
Our next goal is to prove continuity of $\check\mathfrak{cv}^{\mathcal{R}}$. Let $f\in\mathbb{N}MonPol_d(\mathcal{R})$ and denote $\check\mathfrak{cv}^{\mathcal{R}}(f)=(P,\psi)\in \mathrm{Hur}(\mathcal{R};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathrm{lc}_d}$; we write $P=\set{z_1,\dots,z_k}$ as above, and fix loops $\alpha_1,\dots,\alpha_k$ as above. Let $U_1,\dots,U_k$ be disjoint, convex open sets contained in $\mathbb{C}\setminus\pa{\bigcup_{i=1}^k\alpha_i}$, such that $U_i$ intersects $P$ in $z_i$; the sets $U_i$ give an \emph{adapted covering} $\underline{U}=(U_1,\dots,U_k)$ of $P$. Take now a connected neighbourhood $f\in V\subset\mathbb{N}MonPol_d(\mathcal{R})$ such that $\mathfrak{cv}(V)$ is contained in $\mathrm{SP}^{d-1}(\underline{U}\cap\mathcal{R})\subset\mathrm{SP}^{d-1}(\mathcal{R})$: such a neighbourhood exists because $\mathfrak{cv}$ is continuous. Let $\bar f\in V$ and denote $(\bar P,\bar\psi)=\check\mathfrak{cv}^{\mathcal{R}}(\bar f)$.
Note that for all $1\le i\le k$ the map $\bar\psi\colon\mathfrak{Q}(\bar P)\to\mathfrak{S}_d^{\mathrm{geo}}$ can be extended over $\alpha_i\in\pi_1(\mathbb{C}\setminus \bar P,*)$ in the sense of \cite[Definition 2.13]{Bianchi:Hur2}: we can factor $\alpha_i$ as a product of admissible generators $\bar\alpha_{i,1}\dots\bar\alpha_{i,k_i}$ and use the remark before Definition \ref{defn:tildecv}.
Then $\bar P\subset\underline{U}$ by the condition $\mathfrak{cv}(\bar f)\in \mathrm{SP}^{d-1}(\underline{U}\cap\mathcal{R})$; moreover, for any path of polynomials $(f_s)_{0\le s\le 1}$ in $V$ joining $f$ with $\bar f$,
the $\mathfrak{S}_d^{\mathrm{geo}}$-valued monodromy of $f_s$ along $\alpha_i$ is well-defined and locally constant in $s$, since it is determined by the $\mathfrak{S}_d$-valued monodromy of $f_s$ along $\alpha_i$, which is locally constant on the path. It follows that $\bar\psi(\alpha_i)=\psi(\alpha_i)$ as elements of $\mathfrak{S}_d^{\mathrm{geo}}$. This means precisely that $(\bar P,\bar \psi)$ belongs to the normal neighbourhood $fU(P,\psi;\underline{U})$ of $(P,\psi)$ in $\mathrm{Hur}(\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}})_{\hat\mathrm{lc}_d}$, see \cite[Definition 3.7]{Bianchi:Hur2}; we have thus proved that $\check\mathfrak{cv}^{\mathcal{R}}$ is continuous.
The last thing to check in this subsection is that $\check\mathfrak{cv}^{\mathcal{R}}$ is a bijection of sets. For this we will define a map of sets
$\mathfrak{bc}^{\mathcal{R}}\colon\mathrm{Hur}(\mathcal{R};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathrm{lc}_d}\to\mathbb{N}MonPol_d(\mathcal{R})$ which is both a left and right inverse to $\check\mathfrak{cv}^{\mathcal{R}}$. The notation ``$\mathfrak{bc}$'' stands for ``branched cover''; the superscript ``$\mathcal{R}$'' is to distinguish this map from a similar one used in Section \ref{sec:homeo}.
Let $(P,\psi)\in\mathrm{Hur}(\mathcal{R},(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathrm{lc}_d}$. The map of PMQs $\psi\colon\mathfrak{Q}(P)\to\mathfrak{S}_d^{\mathrm{geo}}$ gives rise to a map of groups $\varphi\colon\pi_1(\mathbb{C}mP,*)\to\mathfrak{S}_d$ using \cite[Theorem 3.3]{Bianchi:Hur1}; the action of $\mathfrak{S}_d$ on the set $\set{1,\dots,d}$ gives rise to a covering $f\colon \mathcal{F}\to\mathbb{C}mP$ of degree $d$, with fibre $f^{-1}(*)$ canonically identified with the set $\set{1,\dots,d}$. The total space $\mathcal{F}$ of the covering is connected, as the image of $\varphi$ contains $\mathrm{lc}_d$ and thus acts transitively on $\set{1,\dots,d}$. The Euler characteristic of $\mathcal{F}$ is computed as
\[
\chi(\mathcal{F})=d\chi(\mathbb{C}mP)=d(1-k),
\]
using again the notation $P=\set{z_1,\dots,z_k}$, hence $|P|=k$. We can compactify $\mathcal{F}$ to a smooth closed Riemann surface $\bar\mathcal{F}$ endowed with a branched covering map $f\colon\bar\mathcal{F}\to\mathbb{C} P^1$ by adjoining one point $Q$ lying over $\infty\in\mathbb{C} P^1$, and several other points, lying over the points of $P$. More precisely, fix loops $\alpha_1,\dots,\alpha_k$ as above, and use the same notation as above: then for each $1\le i\le k$, the preimage $f^{-1}(D_i\setminus z_i)$ is a disjoint union of cyclic covers of the punctured disc $D_i\setminus z_i$, one of degree $|c_{i,j}|$ for every $1\le j\le \lambda_i=d- N(\sigma_i)$; we adjoin a point $\zeta_{i,j}$ to compactify each of these cyclic covers over $z_i$.
The Euler characteristic of the resulting surface is $\chi(\mathcal{F})$ plus the number of adjoined points, i.e.
\[
\chi(\bar\mathcal{F})=d\chi(\mathbb{C}mP)+1+\sum_{i=1}^k (d-N(\sigma_i))=d-kd+1+kd-(d-1)=2.
\]
This implies that $\bar\mathcal{F}$ is biholomorphic to the Riemann sphere; we note that $\infty\in \mathbb{C} P^1$ has a unique preimage $Q\in\bar\mathcal{F}$. Moreover the meromorphic map
$f\colon\bar\mathcal{F}\to\mathbb{C} P^1$ restricts to a trivial $d$-fold cover over $\mathbb{H}_{<0}\subset\mathbb{C} P^1$, and we can uniquely extend the trivialisation $f^{-1}(*)\cong\set{1,\dots,d}$ to a trivialisation $f^{-1}(\mathbb{H}_{<0})\cong\set{1,\dots,d}\times\mathbb{H}_{<0}$.
We then let $X\in T_Q\bar\mathcal{F}$ be the unique non-zero vector such that $f$ has an $X$-directed pole of order $d$ at $Q$, and such that a germ of path $L$ exiting from $Q$ with velocity $X$ lies in $\set{1}\times\mathbb{H}_{<0}\subset\bar\mathcal{F}$ for small positive times.
We obtain therefore a surface $(\bar\mathcal{F},Q,X)$ of type $\Sigma_{0,1}$, that can be identified with $\mathbb{C} P^1$; the map $f\colon\bar\mathcal{F}\to\mathbb{C} P^1$ corresponds to the map $f\colon \mathbb{C} P^1\to\mathbb{C} P^1$ represented by a polynomial $f(z)$ of degree $d$. By construction, the polynomial $f(z)$ is monic, i.e. it belongs to $\mathfrak{MonPol}_d$, and its projection
to $\mathbb{N}MonPol_d$ is independent of the identification $(\bar\mathcal{F},Q,X)\cong\mathbb{C} P^1$. Moreover the set of critical values of $f$ is precisely $P$ (every point of $P$ occurs as a critical value, because the assumption that $\psi$ is an augmented map of PMQs implies that no $\sigma_i$ is the identity permutation); in particular the critical values lie in $\mathcal{R}$.
\begin{defn}
\label{defn:bccR}
The above assignment $(P,\psi)\mapsto f$ gives a map of sets
\[
\mathfrak{bc}^{\mathcal{R}}\colon\mathrm{Hur}(\mathcal{R};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathrm{lc}_d}\to\mathbb{N}MonPol_d(\mathcal{R}).
\]
\end{defn}
The fact that $\check\mathfrak{cv}^{\mathcal{R}}$ and $\mathfrak{bc}^{\mathcal{R}}$ are inverses of each other is straightforward.
\subsection{The generic case \texorpdfstring{$\sigma\in\mathfrak{S}_d^{\mathrm{geo}}$}{sigma in Sdgeo}}
\label{subsec:hNs}
Let $\sigma\in\mathfrak{S}_d^{\mathrm{geo}}$ be generic, and let $\sigma=c_1\dots c_{\lambda}$ be the cycle decomposition of $\sigma$, with $\lambda=d-N(\sigma)$. Let $\mathfrak{S}_{c_i}$ be the symmetric group on the set $c_i\subset\set{1,\dots,d}$. There is an inclusion of PMQs (see \cite[Definition 2.16]{Bianchi:Hur1} for the product of PMQs)
\[
\mu\colon \mathfrak{S}_{\underline{c}}^{\mathrm{geo}}:=\mathfrak{S}_{c_1}^{\mathrm{geo}}\times\dots\times\mathfrak{S}_{c_\lambda}^{\mathrm{geo}}\hookrightarrow\mathfrak{S}_d^{\mathrm{geo}}
\]
with image those permutations $\tau\in\mathfrak{S}_d^{\mathrm{geo}}$ such that each cycle of $\tau$, considered as a subset of $\set{1,\dots,d}$, is contained in a single cycle $c_i$. By \cite[Corollary 7.5]{Bianchi:Hur1}, the image of $\mu$ is closed under factorisations in $\mathfrak{S}_d^{\mathrm{geo}}$, or in other words, the complement
$\mathfrak{S}_d^{\mathrm{geo}}\setminus\mu\pa{\mathfrak{S}_{\underline{c}}^{\mathrm{geo}}}$ is an ideal
of $\mathfrak{S}_d^{\mathrm{geo}}$ (see \cite[Definition 2.20]{Bianchi:Hur1} for the notion of ideal). Moreover, the image of $\mu$ contains $\sigma$.
It follows that $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\sigma}$ is homeomorphic to $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{\underline{c}}^{\mathrm{geo}})_+)_{\hat\sigma}$.
Moreover $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{\underline{c}}^{\mathrm{geo}})_+)_{\hat\sigma}$ is homeomorphic to the product of spaces
\[
\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{\underline{c}}^{\mathrm{geo}})_+)_{\hat\sigma}\cong \prod_{i=1}^\lambda \mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{c_i}^{\mathrm{geo}})_+)_{\hat c_i}
\]
as can be directly seen using continuity of the external product \cite[Definition 5.7]{Bianchi:Hur2} and functoriality of Hurwitz spaces in the PMQ \cite[Subsection 4.1]{Bianchi:Hur2}.
Each space $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{c_i}^{\mathrm{geo}})_+)_{\hat c_i}$ is a manifold, since $c_i$ is a \emph{long cycle} (a permutation with a single cycle) in $\mathfrak{S}_{c_i}$; we conclude that $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\sigma}$, being homeomorphic to a product of manifolds, is itself a manifold.
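For instance, for $d=5$ and $\sigma=(1,2)(3,4,5)$, with $c_1=(1,2)$ and $c_2=(3,4,5)$, the above homeomorphisms give
\[
\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_5^{\mathrm{geo}})_+)_{\hat\sigma}\cong
\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{c_1}^{\mathrm{geo}})_+)_{\hat c_1}\times
\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{c_2}^{\mathrm{geo}})_+)_{\hat c_2},
\]
and by Lemma \ref{lem:tildecv} (and the discussion following it, applied after relabelling the sets $c_1$ and $c_2$) the two factors are homeomorphic to $\mathbb{N}MonPol_2(\mathring{\cR})$ and $\mathbb{N}MonPol_3(\mathring{\cR})$ respectively; in particular $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_5^{\mathrm{geo}})_+)_{\hat\sigma}$ is a manifold of dimension $2+4=6=2N(\sigma)$.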
\section{Homeomorphism between \texorpdfstring{$\bar{\cO}ud$}{Ogn[d]} and generalised Hurwitz spaces}
\label{sec:homeo}
In this section we prove Theorem \ref{thm:main3}, which is the third main result of the article and establishes a connection between moduli spaces $\bar{\cO}ud$ and generalised Hurwitz spaces $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})$.
Some of the arguments used in this section are similar to the ones used in Section \ref{sec:Poincare}.
We fix $g\ge0$, $n\ge1$ and $\underline{d}$ as in the previous sections; we further assume $d=\sum_{i=1}^nd_i\ge2$ throughout this section. We still denote $h=2g+n+d-2$.
\begin{defn}
\label{defn:klud}
For $1\le i\le n$ we set $\mathbf{l}_i=1+\sum_{j=1}^{i-1}d_j$; in particular $\mathbf{l}_1=1<\mathbf{l}_2<\dots<\mathbf{l}_n$. We let $(\!(\ud)\!)\in\mathfrak{S}_d$ be the permutation with cycle decomposition
\[
(\!(\ud)\!)=(\mathbf{l}_1,\dots,\mathbf{l}_2-1)(\mathbf{l}_2,\dots,\mathbf{l}_3-1)\cdots(\mathbf{l}_n,\dots,d).
\]
Note that $(\!(\ud)\!)$ is a permutation with $n$ cycles, of lengths $d_1,\dots,d_n$. We consider $(\!(\ud)\!)$ also as an element of $\mathfrak{S}_d^{\mathrm{geo}}$.
We denote by $\widehat{(\!(\ud)\!)}$ the image of $(\!(\ud)\!)\in\mathfrak{S}_d^{\mathrm{geo}}$ under the inclusion
$\mathfrak{S}_d^{\mathrm{geo}}\hookrightarrow\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$, and for a transposition $\mathbf{t}=(i,j)\in\mathfrak{S}_d$, similarly, we denote by $\hat\mathbf{t}=\widehat{(i,j)}$ the corresponding element in
$\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$. For $2\le i\le n$, we denote by $\mathbf{t}^{\underline{d}}_i$ the transposition $(\mathbf{l}_i-1,\mathbf{l}_i)$. We define $(\!(\ud)\!)_g\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ as the product
\[
(\!(\ud)\!)_g=\widehat{(1,2)}\cdot\dots\cdot\widehat{(1,2)}\cdot\widehat{(\!(\ud)\!)}
\cdot\hat\mathbf{t}^{\underline{d}}_2\cdot\hat\mathbf{t}^{\underline{d}}_2\cdot\hat\mathbf{t}^{\underline{d}}_3\cdot\hat\mathbf{t}^{\underline{d}}_3\cdot\dots\cdot\hat\mathbf{t}^{\underline{d}}_n\cdot\hat\mathbf{t}^{\underline{d}}_n,
\]
where $\widehat{(1,2)}$ is repeated $2g$ times at the beginning, and each $\hat\mathbf{t}^{\underline{d}}_i$ is repeated twice at the end.
\end{defn}
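For example, for $\underline{d}=(3,2)$ (so $n=2$ and $d=5$) we have $\mathbf{l}_1=1$ and $\mathbf{l}_2=4$, hence $(\!(\ud)\!)=(1,2,3)(4,5)$ and $\mathbf{t}^{\underline{d}}_2=(3,4)$; for $g=1$ this gives
\[
(\!(\ud)\!)_1=\widehat{(1,2)}\cdot\widehat{(1,2)}\cdot\widehat{(\!(\ud)\!)}\cdot\widehat{(3,4)}\cdot\widehat{(3,4)}\in\widehat{\mathfrak{S}_5^{\mathrm{geo}}}.
\]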
Using the notation of \cite[Proposition 7.13]{Bianchi:Hur1}, we can also write
\[
(\!(\ud)\!)_g=((\!(\ud)\!);\set{1,\dots,d};h),
\]
where ``$\set{1,\dots,d}$'' denotes the trivial partition of the set $\set{1,\dots,d}$,
and ``$h$'' denotes a splitting of $h=N((\!(\ud)\!)_g)$ into several summands, one for each
piece of the partition (in this case a single summand). Here we consider the evaluation at $(\!(\ud)\!)_g$ of the extension $N\colon\widehat{\mathfrak{S}_d^{\mathrm{geo}}}\to \mathbb{N}$ of the norm $N\colon\mathfrak{S}_d^{\mathrm{geo}}\to\mathbb{N}$.
\begin{thm}
\label{thm:main3}
There is a homeomorphism
\[
\mathfrak{bc}\colon\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})((\!(\ud)\!)_g)\overset{\cong}{\to}\bar{\cO}ud.
\]
\end{thm}
In the case $g=0$ and $n=1$, Theorem \ref{thm:main3} reduces to the statement that $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})(\hat\mathrm{lc}_d)$ is homeomorphic to $\mathbb{N}MonPol_d\cong\mathbb{C}^{d-1}$; this follows directly from the arguments of Subsection \ref{subsec:monic}, where we prove that
$\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})(\hat\mathrm{lc}_d)$ is homeomorphic to the open subset $\mathbb{N}MonPol_d(\mathring{\cR})$ of $\mathbb{N}MonPol_d$: since $\mathbb{N}MonPol_d(\mathring{\cR})$ is the interior of a contractible, semi-algebraic $2(d-1)$-dimensional submanifold with boundary
$\mathbb{N}MonPol_d(\mathcal{R})$ of $\mathbb{N}MonPol_d$, we conclude that $\mathbb{N}MonPol_d(\mathring{\cR})$ is homeomorphic to an open $2(d-1)$-dimensional ball, i.e. to $\mathbb{C}^{d-1}$. Therefore, from now on we will focus on the case $g>0$ or $n>1$.
To prove Theorem \ref{thm:main3} we will construct the map $\mathfrak{bc}$ and another map
\[
\check\mathfrak{cv}\colon\bar{\cO}ud\to\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})((\!(\ud)\!)_g),
\]
such that $\check\mathfrak{cv}$ and $\mathfrak{bc}$ are inverse bijections, and prove continuity of the two maps.
It will in fact be convenient to replace $\mathrm{Hur}^{\Delta}(\mathfrak{S}_d^{\mathrm{geo}})((\!(\ud)\!)_g)$ by the homeomorphic space $\mathrm{Hur}(\mathring{\cR},(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}$, using \cite[Theorem 9.1]{Bianchi:Hur2}, and to replace further
$\mathrm{Hur}(\mathring{\cR},(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}$ with a homeomorphic space $\mathrm{Hur}(\mathbb{C},(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}$ that we will introduce in Subsection \ref{subsec:HurC}; we will then construct the maps $\mathfrak{bc}$ and $\check\mathfrak{cv}$ directly between $\bar{\cO}ud$
and the new space $\mathrm{Hur}(\mathbb{C},(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}$.
\subsection{Hurwitz spaces with \texorpdfstring{$\mathbb{C}$}{C} as background}
\label{subsec:HurC}
Let $\mathcal{Q}$ be a PMQ throughout the subsection (see \cite[Section 2]{Bianchi:Hur1}).
For each subspace $\mathcal{X}\subset\mathbb{H}$ of the closed upper half plane,
a coordinate-free Hurwitz space $\mathrm{Hur}(\mathcal{X},\mathcal{Q})$ was introduced in \cite{Bianchi:Hur2}; a point in
$\mathrm{Hur}(\mathcal{X},\mathcal{Q})$ can be described as a couple $(P,\psi)$, where $P\subset\mathcal{X}$ is a finite subset, and
$\psi\colon\mathfrak{Q}(P)\to\mathcal{Q}$ is a map of PMQs. The fundamental PMQ $\mathfrak{Q}(P)$ is defined as a subset of the fundamental group $\pi_1(\mathbb{C}mP,*)$, which in turn is defined after choosing a basepoint $*\in\mathbb{C}mP$. The choice $*=-\sqrt{-1}$, together with the hypothesis $\mathcal{X}\subset\mathbb{H}$, ensures that $*\not\in P$ for all $P$, and gives thus a ``constant'' choice of basepoint for the spaces $\mathbb{C}mP$, for $P$ varying among finite subsets of $\mathcal{X}$.
\begin{defn}
For a relatively compact subset $\mathcal{K}\subset\mathbb{C}$ we define the point $*_{\mathcal{K}}\in\mathbb{C}\setminus\mathcal{K}$ as
\[
*_{\mathcal{K}}=\pa{\mathrm{inf}\set{\Im(z)\,|\, z\in \mathcal{K}\cup\set{0}}-1}\cdot\sqrt{-1}.
\]
\end{defn}
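For instance, if $\mathcal{K}$ consists of the single point $2-3\sqrt{-1}$, then $*_{\mathcal{K}}=(-3-1)\cdot\sqrt{-1}=-4\sqrt{-1}$.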
Note that if $\mathcal{K}\subset\mathbb{H}$, then $*_{\mathcal{K}}=-\sqrt{-1}$. We can now generalise the definition of the fundamental PMQ $\mathfrak{Q}(P)$ to all finite subsets $P\subset\mathbb{C}$; we treat the more general case of a finite union of disjoint convex subsets of $\mathbb{C}$: a finite collection of points is a special case.
\begin{defn}
\label{defn:fQP}
Let $\mathcal{K}$ be a finite union in $\mathbb{C}$ of relatively compact, convex subsets with disjoint closures. We denote by $\mathfrak{Q}(\mathcal{K})\subset\pi_1(\mathbb{C}\setminus\mathcal{K},*_{\mathcal{K}})$ the union of conjugacy classes corresponding to simple closed curves in $\mathbb{C}\setminus\mathcal{K}$ spinning clockwise around at most one component of $\mathcal{K}$, and by $\mathfrak{Q}ext(\mathcal{K})$ the union of all conjugacy classes corresponding to simple closed curves in $\mathbb{C}\setminus\mathcal{K}$ spinning clockwise.
\end{defn}
We note that $\mathfrak{Q}(\mathcal{K})\subset\mathfrak{Q}ext(\mathcal{K})$ are augmented PMQs
as can be easily checked by considering the abelianisation of $\pi_1(\mathbb{C}\setminus\mathcal{K},*_{\mathcal{K}})$.
See the proof of \cite[Lemma 2.16]{Bianchi:Hur2} for an analogous argument.
\begin{defn}
\label{defn:HurC}
An element of the space $\mathrm{Hur}(\mathbb{C},\mathcal{Q})$ has the form $(P,\psi)$, where $P\subset\mathbb{C}$ is finite and $\psi\colon\mathfrak{Q}(P)\to\mathcal{Q}$ is a map of PMQs.
Given $(P,\psi)\in\mathrm{Hur}(\mathbb{C},\mathcal{Q})$, write $P=\set{z_1,\dots,z_k}$ and let $\underline{U}=(U_1,\dots,U_k)$
be a collection of disjoint, convex open sets with compact, disjoint closures, such that $z_i\in U_i$ (an \emph{adapted covering}). Translating basepoints along straight segments in $\mathbb{C}$ gives a natural identification $\mathfrak{Q}(P)\cong\mathfrak{Q}(\underline{U})$ and a natural inclusion $\mathfrak{Q}(\underline{U})\to\mathfrak{Q}ext(P')$, for any $P'\subset\underline{U}$ with $P'$ intersecting all components of $\underline{U}$.
We define the \emph{normal neighbourhood} $fU(P,\psi;\underline{U})$ as the subset of $\mathrm{Hur}(\mathbb{C},\mathcal{Q})$ of configurations $(P',\psi')$ such that $P'\subset \underline{U}$, $P'$ intersects all components of $\underline{U}$, $\psi'$ can be extended over a sub-PMQ of $\mathfrak{Q}ext(P')$ containing $\mathfrak{Q}(\underline{U})$, and the restriction of $(\psi')^{\mathrm{ext}}$ to $\mathfrak{Q}(\underline{U})$ coincides, up to the natural identification $\mathfrak{Q}(P)\cong\mathfrak{Q}(\underline{U})$, with $\psi\colon\mathfrak{Q}(P)\to\mathcal{Q}$.
The sets $fU(P,\psi;\underline{U})$ define a Hausdorff topology on $\mathrm{Hur}(\mathbb{C},\mathcal{Q})$.
For an element $a$ in the completion $\hat\mathcal{Q}$ of $\mathcal{Q}$, we denote by $\mathrm{Hur}(\mathbb{C},\mathcal{Q})_a$ the closed subspace of configurations $(P,\psi)$ such that the natural extension $\mathfrak{Q}ext(P)\to\hat\mathcal{Q}$ of $\psi$ sends the class of a simple loop $\gamma$ spinning clockwise around $P$ to the element $a$: we say that $a$ is the ($\hat\mathcal{Q}$-valued) total monodromy of $(P,\psi)$.
If $\mathcal{Q}$ is augmented, we denote by $\mathrm{Hur}(\mathbb{C};\mathcal{Q}_+)\subset\mathrm{Hur}(\mathbb{C},\mathcal{Q})$ the subspace of configurations $(P,\psi)$ such that $\psi\colon\mathfrak{Q}(P)\to\mathcal{Q}$ is an \emph{augmented} map of augmented PMQs,
i.e. the preimage of $\mathbbl{1}\in\mathcal{Q}$ is $\set{\mathbbl{1}}\subseteq\mathfrak{Q}(P)$.
\end{defn}
Compare with \cite[Sections 2 and 3]{Bianchi:Hur2}. Our next goal is to identify the spaces
$\mathrm{Hur}(\mathring{\cR};\mathcal{Q})$ and $\mathrm{Hur}(\mathbb{C},\mathcal{Q})$. Let $\xi\colon\mathbb{C}=\mathbb{R}^2\overset{\cong}{\to}\mathring{\cR}=(0,1)^2$ be the homeomorphism given by $\xi(x,y)=(e^x/(e^x+1),e^y/(e^y+1))$. Given $(P,\psi)\in\mathrm{Hur}(\mathbb{C},\mathcal{Q})$, let $P'=\xi(P)\subset\mathring{\cR}$. The fundamental groups $\pi_1(\mathbb{C}mP',*)$ and $\pi_1(\mathbb{C}mP',\xi(*_P))$ can be identified by translating $*=-\sqrt{-1}$ to $\xi(*_P)$ along a straight segment in $\mathbb{C}mP'$. Thus $\xi$ yields an identification of groups
\[
\pi_1(\mathbb{C}mP,*_P)\cong \pi_1(\mathring{\cR}\setminus P',\xi(*_P))\cong \pi_1(\mathbb{C}mP',\xi(*_P))\cong
\pi_1(\mathbb{C}mP',*)
\]
and this identification restricts to an identification $\mathfrak{Q}(P)\cong\mathfrak{Q}(P')$. We can then define $\psi'\colon \mathfrak{Q}(P')\to\mathcal{Q}$ as the map of PMQs corresponding to $\psi$. The assignment $\xi_*\colon(P,\psi)\mapsto(P',\psi')$ is a bijection between $\mathrm{Hur}(\mathbb{C},\mathcal{Q})$ and $\mathrm{Hur}(\mathring{\cR},\mathcal{Q})$. Moreover $\xi_*$ induces a bijection between the bases of the two topologies given by normal neighbourhoods corresponding to coverings $\underline{U}$ consisting of disjoint open rectangles compactly contained in $\mathbb{C}$ and in $\mathring{\cR}$, respectively. Hence $\xi_*\colon \mathrm{Hur}(\mathbb{C},\mathcal{Q})\to\mathrm{Hur}(\mathring{\cR},\mathcal{Q})$ is a homeomorphism.
Finally, $\xi_*$ respects the $\hat\mathcal{Q}$-valued total monodromy; and when $\mathcal{Q}$ is augmented, $\xi_*$ restricts to a homeomorphism $\mathrm{Hur}(\mathbb{C},\mathcal{Q}_+)\to\mathrm{Hur}(\mathring{\cR},\mathcal{Q}_+)$.
In the rest of the section we will identify the spaces $\mathrm{Hur}(\mathbb{C},\mathfrak{S}_d^{\mathrm{geo}})$ and
$\mathrm{Hur}(\mathring{\cR},\mathfrak{S}_d^{\mathrm{geo}})$ and their corresponding pairs of subspaces without further mention.
\subsection{The map of sets \texorpdfstring{$\check{\mathfrak{cv}}$}{tildecv}}
\label{subsec:tildecv}
Denote again $h=2g+n+d-2$. The map of PMQs $N\colon\mathfrak{S}_d^{\mathrm{geo}}\to \mathbb{N}$ given by the norm is augmented, and can be extended to an augmented map of complete PMQs $N\colon\widehat{\mathfrak{S}_d^{\mathrm{geo}}}\to\mathbb{N}$. The element $(\!(\ud)\!)_g$ considered in the statement
of Theorem \ref{thm:main3} has norm precisely $h$, i.e. $N((\!(\ud)\!)_g)=h$.
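Indeed, since $N\colon\widehat{\mathfrak{S}_d^{\mathrm{geo}}}\to\mathbb{N}$ is a map of complete PMQs, it is additive on products, so
\[
N((\!(\ud)\!)_g)=2g\cdot N((1,2))+N((\!(\ud)\!))+2\sum_{i=2}^nN(\mathbf{t}^{\underline{d}}_i)=2g+(d-n)+2(n-1)=2g+n+d-2=h.
\]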
As in Subsection \ref{subsec:monic}, we first consider a map of sets
\[
\mathfrak{cv}\colon\bar{\cO}ud\to\mathrm{SP}^h(\mathbb{C})\cong\mathrm{Hur}(\mathbb{C},\mathbb{N}_+)_h,
\]
and later construct $\check\mathfrak{cv}$ as a lift of $\mathfrak{cv}$ along
the map $N_*\colon\mathrm{Hur}(\mathbb{C},(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}\to \mathrm{Hur}(\mathbb{C},\mathbb{N}_+)_h$. In this subsection we only define $\mathfrak{cv}$ and $\check\mathfrak{cv}$ as maps of sets; we will deal in Subsections \ref{subsec:ccvcontinuous} and \ref{subsec:bccontinuous} of the appendix with the continuity.
Let $[\mathfrak{r},f]\in\bar{\cO}ud$ be represented by $(\mathfrak{r},f)$, where $\mathfrak{r}$ is a Riemann structure on $\Sigma_{g,n}$ and $f\colon (\Sigma_{g,n},\mathfrak{r})\to\mathbb{C} P^1$ is an $\underline{d}$-directed meromorphic function. Denote by $P=\set{z_1,\dots,z_k}$ the critical values of $f$ lying in $\mathbb{C}$, and for each $1\le i\le k$ let $\zeta_{i,1},\dots,\zeta_{i,\lambda_i}$ be the points in the fibre $f^{-1}(z_i)\subset\Sigma_{g,n}$; denote by $\underline{\zeta}$ the set $\set{\zeta_{i,j}\,|\, 1\le i\le k,1\le j\le \lambda_i}$. Since
$f\colon\Sigma_{g,n}\setminus\pa{\underline{Q}\cup\underline{\zeta}}\to\mathbb{C}mP$ is a genuine $d$-fold cover, we have
\[
2-2g-n-\sum_{i=1}^k\lambda_i=\chi(\Sigma_{g,n}\setminus\pa{\underline{Q}\cup\underline{\zeta}})=d\chi(\mathbb{C}mP)=d(1-k),
\]
implying the equality
\[
\sum_{i=1}^k(d-\lambda_i)=2g+n+d-2=h.
\]
Thus the formal sum $\sum_{i=1}^k(d-\lambda_i)\cdot z_i$ represents an element in $\mathrm{SP}^h(\mathbb{C})$. If we change representative of $[\mathfrak{r},f]\in\bar{\cO}ud$, we obtain the same element in $\mathrm{SP}^h(\mathbb{C})$.
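For $g=0$ and $n=1$ we have $h=d-1$, and the element $\sum_{i=1}^k(d-\lambda_i)\cdot z_i\in\mathrm{SP}^{d-1}(\mathbb{C})$ records the critical values of a degree-$d$ polynomial counted with multiplicity, consistently with the map $\mathfrak{cv}$ of Subsection \ref{subsec:monic}.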
\begin{defn}
We define a map of sets $\mathfrak{cv}\colon \bar{\cO}ud\to \mathrm{SP}^h(\mathbb{C})$ by the assignment $[\mathfrak{r},f]\mapsto \sum_{i=1}^k(d-\lambda_i)\cdot z_i$.
\end{defn}
We can say a bit more: let $\alpha\subset\mathbb{C}mP$ be a simple closed curve bounding a disc $D\subset\mathbb{C}$; then we can repeat the above argument and compute the Euler characteristic of $f^{-1}(D)$, which is a subsurface of $\Sigma_{g,n}$:
\[
\chi(f^{-1}(D))=\chi\pa{f^{-1}(D)\setminus\underline{\zeta}}+\sum_{z_i\in D\cap P}\lambda_i=d-\sum_{z_i\in D\cap P}(d-\lambda_i).
\]
Next, for fixed $[\mathfrak{r},f]\in\bar{\cO}ud$ represented by $(\mathfrak{r},f)$, and using the notation above, consider the closed segment $I_P=\set{*_P+t\,|\, t\in\mathbb{R}_{\ge0}}\cup\set{\infty}\subset\mathbb{C} P^1$, i.e. the closure in $\mathbb{C} P^1$ of the horizontal ray in $\mathbb{C}$ starting at $*_P$ and running towards the right. Note that $I_P$ intersects the set of critical values of $f$ at most in the point $\infty$. The preimage $f^{-1}(I_P)$ is in fact a union of $d$ segments $I_{P,1},\dots,I_{P,d}\subset\Sigma_{g,n}$, where the labeling is given as follows: for all $1\le i\le n$ and $0\le j\le d_i-1$, the segment $I_{P,\mathbf{l}_i+j}$ has an endpoint at $Q_i$ and is tangent to $e^{2\pi j\sqrt{-1}/d_i}X_i$; compare with Figure \ref{fig:moduli_3}.
We obtain a trivialisation $f^{-1}(*_P)\cong\set{1,\dots,d}$ by declaring the endpoint of $I_{P,i}$ lying over $*_P$ to be the $i$\textsuperscript{th} point of $f^{-1}(*_P)$. In fact, using that $f$ restricts to a trivial $d$-fold covering over the lower half-plane $\mathbb{H}_{\le\Im(*_P)}=\set{\Im\le\Im(*_P)}\subset\mathbb{C}$, we also obtain a trivialisation
$f^{-1}(\mathbb{H}_{\le\Im(*_P)})\cong \set{1,\dots,d}\times \mathbb{H}_{\le\Im(*_P)}$.
We can now consider the map of groups $\varphi\colon\pi_1(\mathbb{C}mP,*_P)\to\mathfrak{S}_d$ given by the monodromy of the $d$-fold cover $f^{-1}(\mathbb{C}mP)\overset{f}{\to}\mathbb{C}mP$, with trivialised fibre over $*_P$. Restricting $\varphi$ over
$\mathfrak{Q}(P)$, and interpreting permutations as elements of $\mathfrak{S}_d^{\mathrm{geo}}$, we obtain a map of PMQs
$\psi\colon\mathfrak{Q}(P)\to\mathfrak{S}_d^{\mathrm{geo}}$: here we also use \cite[Theorem 3.3]{Bianchi:Hur1}. The couple $(P,\psi)$ is an element of $\mathrm{Hur}(\mathbb{C},\mathfrak{S}_d^{\mathrm{geo}})$, and it is independent of the representative of $[\mathfrak{r},f]\in\bar{\cO}ud$.
\begin{defn}
\label{defn:tcv}
We define a map of sets $\check\mathfrak{cv}\colon \bar{\cO}ud\to \mathrm{Hur}(\mathbb{C},\mathfrak{S}_d^{\mathrm{geo}})$ by the above assignment $[\mathfrak{r},f]\mapsto (P,\psi)$.
\end{defn}
We note now that if $\alpha_i$ is a simple loop in $\mathbb{C}mP$ bounding a disc $D_i\subset\mathbb{C}$ with $D_i\cap P=\set{z_i}$, then $\psi(\alpha_i)$ is a permutation with precisely $\lambda_i$ cycles, one for each preimage $\zeta_{i,j}$ of $z_i$ along $f$. In particular, since $z_i$ is assumed to be a critical value of $f$, we have $\lambda_i<d$ and thus $N(\psi(\alpha_i))>0$. This implies $\psi(\alpha_i)\in(\mathfrak{S}_d^{\mathrm{geo}})_+$, and therefore
$\psi\colon\mathfrak{Q}(P)\to\mathfrak{S}_d^{\mathrm{geo}}$ is an augmented map of PMQs. Moreover we have the equality
\[
\chi(f^{-1}(D_i))=d-(d-\lambda_i)=d-N(\psi(\alpha_i)).
\]
Given now a simple loop $\alpha$ representing a class in $\mathfrak{Q}ext(P)$, we can also consider the natural extension $\hat\psi\colon\mathfrak{Q}ext(P)\to\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ and evaluate $\hat\psi(\alpha)$: for instance, up to renumbering the points of $P$, we can find an admissible generating set $\alpha_1,\dots,\alpha_k$ of $\pi_1(\mathbb{C}mP,*_P)$ such that $\alpha$ decomposes as product $\alpha_i\dots\alpha_j$, for some $1\le i,j\le k$ with $i\le j+1$ (to allow an empty product), and such that the disc $D$ bounded by $\alpha$ contains the discs $D_i,\dots,D_j$ bounded by $\alpha_i,\dots,\alpha_j$; then we have
\[
\hat\psi(\alpha)=\widehat{\psi(\alpha_i)}\dots\widehat{\psi(\alpha_j)},
\]
implying
\[
N(\hat\psi(\alpha))=N(\psi(\alpha_i))+\dots+N(\psi(\alpha_j));
\]
on the other hand we have
\[
\begin{split}
\chi(f^{-1}(D)) & =d\,\chi\pa{D\setminus\set{z_i,\dots,z_j}}+\lambda_i+\dots+\lambda_j\\
&=d-\pa{N(\psi(\alpha_i))+\dots+N(\psi(\alpha_j))}=d- N(\hat\psi(\alpha)).
\end{split}
\]
This holds in particular in the case in which $\alpha$ is a simple loop in $\mathbb{C}mP$ spinning clockwise around all points of $P$: in this case $\hat\psi(\alpha)$ is the $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$-valued total monodromy of $(P,\psi)$, which we denote by $\omega(P,\psi)\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$.
The above argument yields the equality $2-2g-n=d-N(\hat\psi(\alpha))$, i.e. $N(\omega(P,\psi))=h$.
Moreover the permutations $\psi(\alpha_1),\dots,\psi(\alpha_k)$ must generate a subgroup of $\mathfrak{S}_d$ acting transitively on the set $\set{1,\dots,d}$, as the space $f^{-1}(\mathbb{C}mP)=\Sigma_{g,n}\setminus\pa{\underline{Q}\cup\underline{\zeta}}$ is connected. Finally, the identity of sets $\mathfrak{S}_d^{\mathrm{geo}}\to\mathfrak{S}_d$ is a map of PMQs, where the second is a group and hence a complete PMQ;
the image of $\omega(P,\psi)$ under the natural extension $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}\to\mathfrak{S}_d$ is the permutation $(\!(\ud)\!)$, which is the $\mathfrak{S}_d$-valued total monodromy of the covering $f^{-1}(\mathbb{C}mP)\to\mathbb{C}mP$. Using \cite[Proposition 7.13]{Bianchi:Hur1}, we obtain the equality
\[
\omega(P,\psi)=\pa{(\!(\ud)\!);\set{1,\dots,d};h}=(\!(\ud)\!)_g.
\]
We obtain the following proposition.
\begin{prop}
The map of sets $\check\mathfrak{cv}$ from Definition \ref{defn:tcv} has values in the subspace
$\mathrm{Hur}(\mathbb{C},(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}$.
\end{prop}
\subsection{The map of sets \texorpdfstring{$\mathfrak{bc}$}{bc}}
We now construct a map of sets
\[
\mathfrak{bc}\colon\mathrm{Hur}(\mathbb{C};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}\to\bar{\cO}ud.
\]
Let $(P,\psi)\in\mathrm{Hur}(\mathbb{C};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}$; the map $\psi\colon\mathfrak{Q}(P)\to\mathfrak{S}_d^{\mathrm{geo}}$ gives rise to a map of groups $\varphi\colon\pi_1(\mathbb{C}mP,*)\to\mathfrak{S}_d$ using \cite[Theorem 3.3]{Bianchi:Hur1}, and as in Subsection
\ref{subsec:monic} this gives in turn a $d$-fold covering $f\colon\mathcal{F}\to\mathbb{C}mP$ with trivialised fibre $f^{-1}(*_P)\cong\set{1,\dots,d}$:
in fact the covering $f$ is trivial, and hence trivialised, over the lower half-plane $\mathbb{H}_{\le\Im(*_P)}$. Let $V\subset \set{z\,|\,\Im(z)>\Im(*_P)}\subset \mathbb{C}$ be a convex, open set containing $P$; then the restriction $f\colon f^{-1}(\mathbb{C}\setminus V)\to\mathbb{C}\setminus V$ is a disjoint union of $n$ cyclic covers of the annulus $\mathbb{C}\setminus V$, since the monodromy of this restricted cover along a loop spinning once clockwise around $V$ is the image of $(\!(\ud)\!)_g$ in $\mathfrak{S}_d$, namely the permutation $(\!(\ud)\!)$, whose cycles have lengths $d_1,\dots,d_n$:
more precisely, for all $1\le i\le n$, there is a component of $f^{-1}(\mathbb{C}\setminus V)$ containing the points labeled $\mathbf{l}_i,\dots,\mathbf{l}_i+d_i-1$ of the trivialised fibre $f^{-1}(*_P)\cong\set{1,\dots,d}$.
We can compactify $f^{-1}(\mathbb{C}\setminus V)$ by adjoining $n$ points $Q_1^\mathcal{F},\dots,Q_n^\mathcal{F}$: the point $Q_i^\mathcal{F}$ compactifies the $i$\textsuperscript{th} component of $f^{-1}(\mathbb{C}\setminus V)$. We can also extend $f$ continuously by declaring $f(Q_i^\mathcal{F})=\infty\in\mathbb{C} P^1$. Similarly, we adjoin points $\zeta_{i,j}$ to compactify $\mathcal{F}$ over the points $z_i\in P$, and extend $f$ continuously by declaring $f(\zeta_{i,j})=z_i$. The number $\lambda_i$ of points to adjoin over $z_i$ is equal to the number of cycles of the permutation $\varphi(\alpha_i)$, where $\alpha_1,\dots,\alpha_k$ is an admissible generating set for $\pi_1(\mathbb{C}mP,*_P)$; in other words, we have $\lambda_i=d-N(\psi(\alpha_i))$. Since $\psi$ is an augmented map, we have moreover $\lambda_i<d$.
The result of the compactification process is a smooth Riemann surface $\bar\mathcal{F}$ endowed with a branched covering map $f\colon\bar\mathcal{F}\to \mathbb{C} P^1$. The Euler characteristic of $\bar\mathcal{F}$ is easily computed as
\[
\chi(\bar\mathcal{F})=d\chi(\mathbb{C}mP)+n+\sum_{i=1}^k\lambda_i=d+n-\sum_{i=1}^k N(\psi(\alpha_i))=d+n-h=2-2g,
\]
implying that $\bar\mathcal{F}$ is a closed Riemann surface of genus $g$. Moreover, for each $1\le i\le n$ we can uniquely define a vector $X_i^\mathcal{F}\in T_{Q_i^\mathcal{F}}\bar\mathcal{F}$ satisfying the following properties:
\begin{itemize}
\item $X_i^\mathcal{F}$ is tangent to the unique segment in $\bar\mathcal{F}$ projecting homeomorphically to $I_P$ and having as endpoints $Q_i^\mathcal{F}\in\bar\mathcal{F}$ and the point of $f^{-1}(*_P)$ labelled $\mathbf{l}_i$;
\item $f$ has an $X_i^\mathcal{F}$-\emph{directed} pole of order $d_i$ at $Q_i^\mathcal{F}$, i.e.
for every holomorphic chart $w_i$ defined on a neighbourhood of $Q_i^\mathcal{F}\in\bar\mathcal{F}$ such that $w_i(Q_i^\mathcal{F})=0\in\mathbb{C}$ and $Dw_i(X_i^\mathcal{F})=\partial/\partial x\in T_0\mathbb{C}$,
the function $f$ has the form
\[
f(w_i)=\frac{1}{w_i^{d_i}}+\mathrm{l.o.t.}
\]
\end{itemize}
Note that the first property determines $X_i^\mathcal{F}$ up to real positive multiples, whereas the second determines it up to a rotation by an integral multiple of $2\pi/d_i$.
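Indeed, if $w_i$ is a chart as in the second property and $X_i^\mathcal{F}$ is replaced by $c\cdot X_i^\mathcal{F}$ for some $c\in\mathbb{C}^*$, then, to first order, $w_i'=w_i/c$ is a chart adapted to the new vector, and $f=c^{-d_i}(w_i')^{-d_i}+\mathrm{l.o.t.}$; hence the directed-pole condition is preserved exactly when $c^{d_i}=1$. Together with the first property, which only allows $c\in\mathbb{R}_{>0}$, this forces $c=1$, so the two properties together determine $X_i^\mathcal{F}$ uniquely.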
We can now identify $(\bar\mathcal{F};Q_1^\mathcal{F},\dots,Q_n^\mathcal{F};X_1^\mathcal{F},\dots,X_n^\mathcal{F})$ with $\Sigma_{g,n}=(\Sigma_g;\underline{Q};\underline{X})$ by a diffeomorphism; the Riemann structure on $\bar\mathcal{F}$ gives a Riemann structure $\mathfrak{r}$ on $\Sigma_{g,n}$, and the function $f$ can be considered as a $\underline{d}$-directed meromorphic function on $\Sigma_{g,n}$. We thus obtain a class $[\mathfrak{r},f]\in\bar{\cO}ud$.
\begin{defn}
\label{defn:bc}
We define a map of sets $\mathfrak{bc}\colon \mathrm{Hur}(\mathbb{C},(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}\to \bar{\cO}ud$
by the above assignment $(P,\psi)\mapsto [\mathfrak{r},f]$.
\end{defn}
The maps of sets $\check\mathfrak{cv}$ and $\mathfrak{bc}$ are inverse bijections, as is straightforward from the two constructions. It is left to prove that $\check\mathfrak{cv}$ and $\mathfrak{bc}$ are continuous: this is done in Subsections \ref{subsec:ccvcontinuous} and \ref{subsec:bccontinuous} of the appendix, respectively.
\section{Stable rational cohomology of moduli spaces}
\label{sec:mumford}
\subsection{Surfaces with boundary}
The moduli space $\mathfrak{M}_{g,n}$ parametrises \emph{closed} Riemann surfaces of genus $g$ with $n$ directed marked points. It is convenient in this section to replace $\mathfrak{M}_{g,n}$ by the homotopy equivalent space $\mathfrak{M}_{g,n}^\partial$.
\begin{defn}
We denote by $\Sigma_{g,n}^\partial$ a compact, oriented, smooth surface of genus $g$ with $n$ ordered boundary components. Each boundary component $\partial_i\Sigma_{g,n}^\partial$ is endowed with a fixed parametrisation by $S^1\subset\mathbb{C}$ which is compatible with the orientation induced on the boundary by the orientation of $\Sigma_{g,n}^\partial$. We also fix a germ of parametrisation of a collar neighbourhood of $\partial_i\Sigma_{g,n}^\partial$ by a collar neighbourhood of $S^1\subset\mathbb{D}$, where $\mathbb{D}=\set{z\in \mathbb{C}\,\colon\, |z|\le 1}$ is the unit disc in $\mathbb{C}$; this germ of parametrisation of a collar neighbourhood is automatically orientation-preserving.
We denote by $\Diff(\Sigma_{g,n}^\partial,\partial)$ the topological group of diffeomorphisms of $\Sigma_{g,n}^\partial$ fixing pointwise some neighbourhood of $\partial\Sigma_{g,n}^\partial$.
We denote by $\mathbb{R}iem(\Sigma_{g,n}^\partial)$ the space of Riemann structures $\mathfrak{r}$ on $\Sigma_{g,n}^\partial$ for which the germ of parametrisation of a collar neighbourhood of each boundary curve of $\Sigma_{g,n}^\partial$ admits a holomorphic representative. The moduli space $\mathfrak{M}_{g,n}^\partial$ contains equivalence classes $[\mathfrak{r}]$ of Riemann structures
in $\mathbb{R}iem(\Sigma_{g,n}^\partial)$:
two Riemann structures $\mathfrak{r}$ and $\mathfrak{r}'$ are considered equivalent if there is a diffeomorphism $\varphi\in\Diff(\Sigma_{g,n}^\partial,\partial)$ that pulls back $\mathfrak{r}$ to $\mathfrak{r}'$. Formally, $\mathfrak{M}_{g,n}^\partial$ is the quotient of the space $\mathbb{R}iem(\Sigma_{g,n}^\partial)$ by the action of the topological group $\Diff(\Sigma_{g,n}^\partial,\partial)$.
\end{defn}
We can sew a copy $\mathbb{D}_i$ of the standard disc $\mathbb{D}=\set{z\in \mathbb{C}\,\colon\, |z|\le 1}$ along each boundary curve of $\Sigma_{g,n}^\partial$ by the map $S^1\to S^1$ given by $z\mapsto z^{-1}$; the result is a closed surface of genus $g$, endowed with $n$ marked points (the centres $Q_i$ of the discs $\mathbb{D}_i$); moreover, each marked point is endowed with a non-zero tangent vector $X_i:=\frac{d}{dx}\in T_{Q_i}\mathbb{D}_i\cong T_0\mathbb{D}$. We can thus obtain $\Sigma_{g,n}$ from $\Sigma_{g,n}^\partial$ by sewing $n$ standard discs; by using germs of parametrised collar neighbourhoods of boundary components, one can also ensure that the resulting surface has a well-defined smooth structure along the sewing locus. Compare also with Definition \ref{defn:scM} in the next section.
If we have a Riemann structure $\mathfrak{r}\in\mathbb{R}iem(\Sigma_{g,n}^\partial)$, we can extend $\mathfrak{r}$ to a Riemann structure on $\Sigma_{g,n}$ by adjoining the standard Riemann structure on the discs $\mathbb{D}_i$: this gives a map $\mathbb{R}iem(\Sigma_{g,n}^\partial)\to\mathbb{R}iem(\Sigma_{g,n})$. Similarly, a diffeomorphism of $\Sigma_{g,n}^\partial$ fixing pointwise a neighbourhood of the boundary can be extended to a diffeomorphism of $\Sigma_{g,n}$ by the identity of each disc $\mathbb{D}_i$: this gives a homomorphism of groups $\Diff(\Sigma_{g,n}^\partial,\partial)\to\Diff_{g,n}$. Finally, the map $\mathbb{R}iem(\Sigma_{g,n}^\partial)\to\mathbb{R}iem(\Sigma_{g,n})$ is equivariant with respect to the two actions of
$\Diff(\Sigma_{g,n}^\partial,\partial)$ and $\Diff_{g,n}$ respectively, compared along the given group homomorphism: therefore we obtain a map of spaces $\mathfrak{M}_{g,n}^\partial\to\mathfrak{M}_{g,n}$, which is known to be a weak homotopy equivalence.
The advantage of the space $\mathfrak{M}_{g,n}^\partial$ is that it allows us to define the genus stabilisation map.
Let $\bar\mathfrak{r}_{1,2}$ be a fixed Riemann structure in $\mathbb{R}iem(\Sigma_{1,2}^\partial)$ and let $1\le i\le n$.
Given a Riemann structure $\mathfrak{r}\in\mathbb{R}iem(\Sigma_{g,n}^\partial)$, we can obtain a Riemann surface of genus $g+1$ with $n$ boundary curves as follows: for a fixed $1\le i\le n$,
we sew $(\Sigma_{g,n}^\partial,\mathfrak{r})$ and $(\Sigma_{1,2}^\partial,\bar\mathfrak{r}_{1,2})$ by identifying $\partial_i\Sigma_{g,n}^\partial$ with $\partial_1\Sigma_{1,2}^\partial$ along the map $S^1\to S^1$ given by $z\mapsto z^{-1}$.
The new surface has $n-1$ boundary curves coming from $\Sigma_{g,n}^\partial$, and one boundary curve
coming from $\Sigma_{1,2}^\partial$, which is declared to be the $i$\textsuperscript{th} boundary curve of the new surface. We can identify the new surface with $\Sigma_{g+1,n}^\partial$.
\begin{defn}
\label{defn:stabi}
The above construction gives rise, for all $g\ge0$, $n\ge1$ and $1\le i\le n$, to a map
\[
\mathrm{stab}_i\colon\mathfrak{M}_{g,n}^\partial\to\mathfrak{M}_{g+1,n}^\partial.
\]
\end{defn}
\subsection{A brief historical note}
A classical theorem of Harer \cite{Harer} (with later improvements in the stability ranges, see \cite{Ivanov, Boldsen, ORW:resolutions_homstab})
ensures that $\mathrm{stab}_i$ induces an isomorphism in integral homology $H_*(\mathfrak{M}_{g,n}^\partial)\to H_*(\mathfrak{M}_{g+1,n}^\partial)$ in the range of degrees $*\le\frac{2}{3}g-1$.
For simplicity, in
the rest of the section we shall restrict to surfaces with one boundary component, i.e. we set $n=1$.
It is interesting to compute the homology groups of the homotopy colimit
\[
\mathfrak{M}_{\infty,1}\colon =\mathrm{hocolim}\pa{\mathfrak{M}_{0,1}^\partial\overset{\mathrm{stab}_1}{\to}\mathfrak{M}_{1,1}^\partial\overset{\mathrm{stab}_1}{\to}\mathfrak{M}_{2,1}^\partial\overset{\mathrm{stab}_1}{\to}\dots},
\]
because these homology groups coincide, in a range depending on $g$, with the actual homology groups of $\mathfrak{M}_{g,1}$.
Mumford \cite{Mumford} conjectured that the cohomology ring $H^*(\mathfrak{M}_{\infty,1};\mathbb{Q})$ is isomorphic to the polynomial ring $\mathbb{Q}[\kappa_1,\kappa_2,\dots]$ generated by variables $\kappa_i$ of degree $2i$, and he also gave a construction of candidates for such free multiplicative generators. Miller \cite{Miller} and Morita \cite{Morita87} proved independently the algebraic independence over $\mathbb{Q}$ of the classes $\kappa_i\in H^*(\mathfrak{M}_{\infty,1};\mathbb{Q})$.
Successively, Tillmann \cite{Tillmann97} proved that the integral homology $H_*(\mathfrak{M}_{\infty,1})$ coincides with the homology of a component of an infinite loop space, and Madsen and Weiss \cite{MadsenWeiss} finally proved that $H_*(\mathfrak{M}_{\infty,1})$ is isomorphic to the homology $H_*(\Omega^\infty_0\mathrm{MTSO}(2))$ of a component of the infinite loop space associated with the spectrum $\mathrm{MTSO}(2)$. As a corollary of their theorem, Madsen and Weiss proved the Mumford conjecture.
\subsection{Double loop spaces of Hurwitz spaces}
In order to state the main result of this section, we need to recollect some definitions. For $d\ge2$ the \emph{relative} Hurwitz space
$\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$, introduced in \cite{Bianchi:Hur2},
is the space of triples $(P,\psi,\varphi)$ where
\begin{itemize}
\item $P$ is a \emph{non-empty}, finite subset of $\mathcal{R}=[0,1]^2$ (whence the ``$_+$'');
\item $\varphi\colon \pi_1(\mathbb{C}mP,*_P)\to\mathfrak{S}_d$ is a morphism of groups (the $\mathfrak{S}_d$-valued monodromy);
\item $\psi\colon \mathfrak{Q}_{(\mathcal{R},\partial\mathcal{R})}(P)\to\mathfrak{S}_d^{\mathrm{geo}}$ is a morphism of PMQs, only defined on $\mathfrak{Q}_{(\mathcal{R},\partial\mathcal{R})}(P)\subset\pi_1(\mathbb{C}mP,*_P)$, the relative fundamental PMQ of $P$ with respect to the \emph{nice couple} $(\mathcal{R},\partial\mathcal{R})$ (see \cite[Definition 2.9]{Bianchi:Hur2}).
\end{itemize}
A compatibility condition between $\varphi$ and $\psi$ is required:
the pair $(\psi,\varphi)$ is required to be a morphism of PMQ-group pairs
\[
(\psi,\varphi)\colon(\mathfrak{Q}_{(\mathcal{R},\partial\mathcal{R})}(P),\pi_1(\mathbb{C}mP,*_P))\to(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d).
\]
Moreover the total monodromy $\omega(P,\psi,\varphi)$ is required to be equal to $\mathbbl{1}\in\mathfrak{S}_d$; in the relative setting, the only well-defined total monodromy is the $\mathfrak{S}_d$-valued total monodromy, and there is no total monodromy with values in $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$.
For $d\ge2$ we have an inclusion of PMQ-group pairs $(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)\hookrightarrow(\mathfrak{S}_{d+1}^{\mathrm{geo}},\mathfrak{S}_{d+1})$ coming from the standard inclusion of groups $\mathfrak{S}_d\hookrightarrow\mathfrak{S}_{d+1}$; by functoriality of Hurwitz spaces in the PMQ-group pair (see \cite[Subsection 4.1]{Bianchi:Hur2}), we obtain an inclusion
\[
\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}\hookrightarrow\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_{d+1}^{\mathrm{geo}},\mathfrak{S}_{d+1})_{\mathbbl{1}}.
\]
The following is the main result of this section.
\begin{thm}
\label{thm:main4}
Let $\mathbb{B}_\infty=\mathrm{hocolim}_{d\to\infty}\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$.
Then there is a zig-zag of integral homology equivalences of spaces starting from
$\mathfrak{M}_{\infty,1}$ and ending with $\Omega^2_0 \mathbb{B}_\infty$; in particular
\[
H_*(\mathfrak{M}_{\infty,1})\cong H_*\pa{\Omega^2_0 \mathbb{B}_\infty}.
\]
\end{thm}
\subsection{A new proof of the Mumford conjecture}
As a corollary of Theorem \ref{thm:main4}, we obtain a new proof of the Mumford conjecture.
\begin{cor}
\label{cor:mumford}
The rational cohomology ring $H^*(\mathfrak{M}_{\infty,1};\mathbb{Q})$ is isomorphic to a polynomial ring $\mathbb{Q}[y_1,y_2,\dots]$ generated by a sequence of variables $y_i$, one in each even degree $2i>0$.
\end{cor}
\begin{proof}
By Theorem \ref{thm:main4} it suffices to prove an isomorphism of rings $H^*(\Omega_0^2\mathbb{B}_\infty;\mathbb{Q})\cong\mathbb{Q}[y_1,y_2,\dots]$.
By Theorem \ref{thm:main2} the PMQ $\mathfrak{S}_d^{\mathrm{geo}}$ is Poincaré for all $d\ge2$. We can thus apply \cite[Theorem 6.1]{Bianchi:Hur3} and obtain an isomorphism of graded rings
\[
H^*\pa{\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}};\mathbb{Q}}\cong \mathcal{A}(\mathfrak{S}_d^{\mathrm{geo}}),
\]
where we denote by $\mathcal{A}(\mathfrak{S}_d^{\mathrm{geo}}):=\mathbb{Q}[\mathfrak{S}_d^{\mathrm{geo}}]^{\mathfrak{S}_d}$ the sub-$\mathbb{Q}$-algebra of conjugation-invariants in the PMQ-algebra $\mathbb{Q}[\mathfrak{S}_d^{\mathrm{geo}}]$. A basis of $\mathcal{A}(\mathfrak{S}_d^{\mathrm{geo}})$ as a $\mathbb{Q}$-vector space is given by the elements $\sca{S}=\sum_{\sigma\in S}\sca{\sigma}$, for $S$ varying among conjugacy classes of $\mathfrak{S}_d^{\mathrm{geo}}$.
Here $\sca{\sigma}$ denotes the basis element of $\mathbb{Q}[\mathfrak{S}_d^{\mathrm{geo}}]$ corresponding to $\sigma\in\mathfrak{S}_d^{\mathrm{geo}}$.
A conjugacy class in $\mathfrak{S}_d^{\mathrm{geo}}$ is uniquely determined by a sequence
$\underline{\lambda}=(\lambda_i)_{i\ge2}$ of integers $\lambda_i\ge0$, such that $\sum_{i\ge2}i\lambda_i\le d$: we associate with $\underline{\lambda}$ the conjugacy class of permutations whose cycle decomposition contains precisely $\lambda_i$ cycles of length $i$, for all $i\ge2$, and a suitable number of fixpoints. We shall denote by $S_{\underline{\lambda},d}\subseteq\mathfrak{S}_d^{\mathrm{geo}}$ the conjugacy class corresponding to $\underline{\lambda}$, if $\sum_{i\ge2}i\lambda_i\le d$, and the empty set, if $\sum_{i\ge2}i\lambda_i> d$.
The element $\sca{S_{\underline{\lambda},d}}\in\mathcal{A}(\mathfrak{S}_d^{\mathrm{geo}})$ lies in cohomological degree $2N(\underline{\lambda})$, where $N(\underline{\lambda}):=\sum_{i\ge2}(i-1)\lambda_i$.
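For instance, the class of transpositions (i.e. $\lambda_2=1$ and $\lambda_i=0$ for $i\ge3$) has $N(\underline{\lambda})=1$ and contributes a basis element in degree $2$, while for $d\ge4$ the class of $3$-cycles and the class of products of two disjoint transpositions both have $N(\underline{\lambda})=2$ and contribute two distinct basis elements in degree $4$.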
The inclusion $\mathfrak{S}_d^{\mathrm{geo}}\hookrightarrow\mathfrak{S}_{d+1}^{\mathrm{geo}}$ is norm-preserving and induces an injective map between sets of conjugacy classes, sending $S_{\underline{\lambda},d}\mapsto S_{\underline{\lambda},d+1}$. Moreover, all conjugacy classes $S_{\underline{\lambda},d+1}$ with $N(\underline{\lambda})\le d/2$ are hit by a (non-empty) conjugacy class $S_{\underline{\lambda},d}\subset\mathfrak{S}_d^{\mathrm{geo}}$.
The argument of the proof of
\cite[Theorem 6.1]{Bianchi:Hur3} ensures that the map of cohomology rings induced by the inclusion
\[
\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}\hookrightarrow\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_{d+1}^{\mathrm{geo}},\mathfrak{S}_{d+1})_{\mathbbl{1}}
\]
corresponds to the map of rings $\mathcal{A}(\mathfrak{S}_{d+1}^{\mathrm{geo}})\to\mathcal{A}(\mathfrak{S}_d^{\mathrm{geo}})$ sending $\sca{S_{\underline{\lambda},d+1}}\mapsto\sca{S_{\underline{\lambda},d}}$. In particular the map $\mathcal{A}(\mathfrak{S}_{d+1}^{\mathrm{geo}})\to\mathcal{A}(\mathfrak{S}_d^{\mathrm{geo}})$ is surjective,
and it is an isomorphism in cohomological degree $*\le d$.
We can thus compute
\[
H^*\pa{\mathbb{B}_\infty;\mathbb{Q}}\cong\lim_{d\to\infty}\mathcal{A}(\mathfrak{S}_d^{\mathrm{geo}}),
\]
using the fact that the Mittag-Leffler condition is satisfied and no $\lim^1$ terms occur.
We now recall from \cite[Remark 6.3]{LehnSorger} that $\mathcal{A}(\mathfrak{S}_d^{\mathrm{geo}})$ is isomorphic in cohomological degrees $*\le d$ to the polynomial algebra $\mathbb{Q}[x_1,x_2,x_3,\dots]$ generated by a sequence of variables $x_i$, one in each even degree $2i>0$; see also \cite[Proposition 3.3]{BCP:Hilb}. We thus obtain an isomorphism of rings
\[
H^*\pa{\mathbb{B}_\infty;\mathbb{Q}}\cong \mathbb{Q}[x_1,x_2,x_3,\dots].
\]
To conclude, by \cite[Theorem 4.19]{Bianchi:Hur3} the spaces $\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$
are simply connected, and thus also $\mathbb{B}_{\infty}$ is simply connected. We can then use a standard argument in rational homotopy theory: the map $\mathbb{B}_\infty\to\prod_{i=1}^\infty K(2i,\mathbb{Q})$ classifying the cohomology classes $x_i$ is a rational cohomology equivalence between simply connected spaces, hence a rational equivalence. Its double looping is thus also a rational equivalence, and in particular a rational cohomology equivalence. Taking one component of the double loop spaces, we obtain
\[
\begin{split}
H^*(\Omega_0^2\mathbb{B}_\infty;\mathbb{Q}) & \cong H^*\pa{\prod_{i=1}^\infty \Omega_0^2K(2i,\mathbb{Q});\mathbb{Q}}\\
&\cong H^*\pa{\prod_{i=1}^\infty K(2i,\mathbb{Q});\mathbb{Q}}\cong
\mathbb{Q}[y_1,y_2,y_3,\dots].
\end{split}
\]
\end{proof}
The rest of the section is devoted to the proof of Theorem \ref{thm:main4}.
\subsection{The main diagram}
For all $d\ge2$ we have a PMQ $\mathfrak{S}_d^{\mathrm{geo}}$, giving rise to a Hurwitz-Moore topological monoid $\mathring{\mathrm{HM}}(\mathfrak{S}_d^{\mathrm{geo}})$, see \cite[Definition 2.4]{Bianchi:Hur3}. It will be convenient to consider in this section the submonoid
$\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$, containing configurations $(t,(P,\psi))$ with $\psi$ an \emph{augmented} map of PMQs.
The induced map on group completions
$\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)\hookrightarrow\Omega B\mathring{\mathrm{HM}}(\mathfrak{S}_d^{\mathrm{geo}})$ is a weak equivalence, so for our purposes the two monoids are interchangeable.
The inclusion of PMQs $\mathfrak{S}_d^{\mathrm{geo}}\hookrightarrow\mathfrak{S}_{d+1}^{\mathrm{geo}}$ gives rise to an inclusion of monoids $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)\hookrightarrow\mathring{\mathrm{HM}}((\mathfrak{S}_{d+1}^{\mathrm{geo}})_+)$.
\begin{nota}
Recall from \cite[Theorem 2.15]{Bianchi:Hur3} that
$\pi_0(\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+))$ is in bijection with $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$.
For every transposition $\mathbf{t}\in\mathfrak{S}_d^{\mathrm{geo}}$ we fix once and for all a configuration in $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathbf{t}}$ of ``width'' $1$, i.e. of the form $(1,\mathfrak{c}_\mathbf{t})$ for some $\mathfrak{c}_\mathbf{t}\in\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\mathbf{t}}$.
We denote by $\mathbf{l}_\mathbf{t}\colon\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)\to\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$ the map $(t,\mathfrak{c})\mapsto (1,\mathfrak{c}_\mathbf{t})\cdot(t,\mathfrak{c})$.
We denote by $\mathbf{r}_\mathbf{t}\colon\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)\to\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$ the map $(t,\mathfrak{c})\mapsto (t,\mathfrak{c})\cdot (1,\mathfrak{c}_\mathbf{t})$.
\end{nota}
Recall Definition \ref{defn:klud}, and note that for $n=1$ the element $\kld{d}_g:=(\!(\ud)\!)_g$ can be factored in $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ as
\[
\kld{d}_g=\widehat{(1,2)}\dots \widehat{(1,2)}\ \cdot\ \hat\mathrm{lc}_d=\widehat{(1,2)}\dots \widehat{(1,2)}\ \cdot\ \widehat{(1,2)}\widehat{(2,3)}\dots\widehat{(d-1,d)},
\]
where $\widehat{(1,2)}$ is repeated $2g$ times at the beginning.
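For instance, $\kld{2}_1=\widehat{(1,2)}\cdot\widehat{(1,2)}\cdot\widehat{(1,2)}\in\widehat{\mathfrak{S}_2^{\mathrm{geo}}}$, an element of norm $3=h$.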
For all $d\ge2$ and $g\ge0$, we consider the following two maps:
\begin{itemize}
\item the map $\mathbf{l}_{(1,2)}^2\colon \mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_g}\to\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_{g+1}}$,
obtained by iterating twice the map $\mathbf{l}_{(1,2)}$;
\item the map $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_g}\to\mathring{\mathrm{HM}}((\mathfrak{S}_{d+1}^{\mathrm{geo}})_+)_{\kld{d+1}_g}$ obtained by composing the inclusion
$\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_g}\hookrightarrow\mathring{\mathrm{HM}}((\mathfrak{S}_{d+1}^{\mathrm{geo}})_+)_{\kld{d}_g}$ and the right multiplication map
$\mathbf{r}_{(d,d+1)} \colon \mathring{\mathrm{HM}}((\mathfrak{S}_{d+1}^{\mathrm{geo}})_+)_{\kld{d}_g}\to \mathring{\mathrm{HM}}((\mathfrak{S}_{d+1}^{\mathrm{geo}})_+)_{\kld{d+1}_g}$.
\end{itemize}
We obtain a strictly commutative diagram of spaces, which we will refer to as the \emph{main diagram}
\[
\begin{tikzcd}[column sep=15pt]
\mathring{\mathrm{HM}}((\mathfrak{S}_2^{\mathrm{geo}})_+)_{\kld{2}_0}\ar[r,"\mathbf{l}^2_{(1,2)}"]\ar[d,"\mathbf{r}_{(2,3)}"] &
\pa{\mathring{\mathrm{HM}}((\mathfrak{S}_2^{\mathrm{geo}})_+)_{\kld{2}_1}}\ar[r,"\mathbf{l}^2_{(1,2)}"]\ar[d,"\mathbf{r}_{(2,3)}"] &
\mathring{\mathrm{HM}}((\mathfrak{S}_2^{\mathrm{geo}})_+)_{\kld{2}_2}\ar[r,"\mathbf{l}^2_{(1,2)}"]\ar[d,"\mathbf{r}_{(2,3)}"] & \dots\\
\mathring{\mathrm{HM}}((\mathfrak{S}_3^{\mathrm{geo}})_+)_{\kld{3}_0}\ar[r,"\mathbf{l}^2_{(1,2)}"]\ar[d,"\mathbf{r}_{(3,4)}"] &
\mathring{\mathrm{HM}}((\mathfrak{S}_3^{\mathrm{geo}})_+)_{\kld{3}_1}\ar[r,"\mathbf{l}^2_{(1,2)}"]\ar[d,"\mathbf{r}_{(3,4)}"] &
\mathring{\mathrm{HM}}((\mathfrak{S}_3^{\mathrm{geo}})_+)_{\kld{3}_2}\ar[r,"\mathbf{l}^2_{(1,2)}"]\ar[d,"\mathbf{r}_{(3,4)}"] & \dots\\
\mathring{\mathrm{HM}}((\mathfrak{S}_4^{\mathrm{geo}})_+)_{\kld{4}_0}\ar[r,"\mathbf{l}^2_{(1,2)}"]\ar[d,"\mathbf{r}_{(4,5)}"] &
\mathring{\mathrm{HM}}((\mathfrak{S}_4^{\mathrm{geo}})_+)_{\kld{4}_1}\ar[r,"\mathbf{l}^2_{(1,2)}"]\ar[d,"\mathbf{r}_{(4,5)}"] &
\pa{\mathring{\mathrm{HM}}((\mathfrak{S}_4^{\mathrm{geo}})_+)_{\kld{4}_2}}\ar[r,"\mathbf{l}^2_{(1,2)}"]\ar[d,"\mathbf{r}_{(4,5)}"] & \dots\\
\vdots &\vdots &\vdots &\ddots
\end{tikzcd}
\]
\begin{defn}
We denote by $\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{\kld{\infty}_\infty}$ the homotopy colimit of the main diagram.
\end{defn}
We have highlighted some of the objects in the main diagram by putting them in parentheses: these are the spaces $\mathring{\mathrm{HM}}((\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}$, for varying $g\ge1$, and they form a diagonal of the diagram; this diagonal is cofinal, since the entry in the row indexed by $d$ and the column indexed by $g$ maps, by a finite composition of horizontal and vertical maps, to the highlighted entry $\mathring{\mathrm{HM}}((\mathfrak{S}_{2g'}^{\mathrm{geo}})_+)_{\kld{2g'}_{g'}}$ for every $g'\ge\max(g,d/2)$. We can thus also compute $\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{\kld{\infty}_\infty}$ as
\[
\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{\kld{\infty}_\infty}\simeq \mathrm{hocolim}_{g\to \infty} \mathring{\mathrm{HM}}((\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g},
\]
where now we have a sequential colimit, and the stabilisation maps are given by composing two vertical maps and one horizontal map in the previous diagram.
By \cite[Lemma 2.7]{Bianchi:Hur3}, each space $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}$ admits an inclusion into
$\mathring{\mathrm{HM}}((\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}$ which is a homotopy equivalence, with inverse an explicit deformation retraction, which in particular is natural in the PMQ. Using this together with Theorem \ref{thm:main3}, we obtain that each space $\mathring{\mathrm{HM}}((\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}$ is homotopy equivalent to the corresponding moduli space $\mathfrak{M}_{g,1}$.
Moreover, for each $g\ge1$, the following diagram commutes up to homotopy:
\[
\small
\begin{tikzcd}[column sep=7pt]
\!\!\mathring{\mathrm{HM}}((\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}\! \ar[d,"\mathbf{l}^2_{(1,2)}"] & \!\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}\!\ar[l,hook,"\simeq"']
\ar[r,"\mathfrak{bc}","\cong"'] &
\bar{\cO}_{g,1}[2g] \ar[r,"\theta_{\bar{\cO}}","\simeq"'] & \mathfrak{M}_{g,1}\! &\mathfrak{M}_{g,1}^\partial \ar[l,"\simeq"']\ar[d,"\mathrm{stab}_1"']\\
\!\!\mathring{\mathrm{HM}}((\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_{g+1}}\! \ar[d,"\mathbf{r}_{(2g,2g+1)}"] & \!\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}\!\ar[l,hook,"\simeq"'] \ar[r,"\mathfrak{bc}","\cong"'] &
\bar{\cO}_{g+1,1}[2g] \ar[r,"\theta_{\bar{\cO}}"] & \mathfrak{M}_{g+1,1}\! &\mathfrak{M}_{g+1,1}^\partial \ar[l,"\simeq"']\ar[d,equal]\\
\!\!\mathring{\mathrm{HM}}((\mathfrak{S}_{2g+1}^{\mathrm{geo}})_+)_{\kld{2g+1}_{g+1}}\! \ar[d,"\mathbf{r}_{(2g+1,2g+2)}"]& \!\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}\!\ar[l,hook,"\simeq"'] \ar[r,"\mathfrak{bc}","\cong"'] &
\bar{\cO}_{g+1,1}[2g\!+\!1] \ar[r,"\theta_{\bar{\cO}}"] & \mathfrak{M}_{g+1,1}\! &\mathfrak{M}_{g+1,1}^\partial \ar[l,"\simeq"']\ar[d,equal]\\
\!\!\mathring{\mathrm{HM}}((\mathfrak{S}_{2g+2}^{\mathrm{geo}})_+)_{\kld{2g+2}_{g+1}}\! & \!\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}\ar[l,hook,"\simeq"']\!\ar[r,"\mathfrak{bc}","\cong"'] &
\bar{\cO}_{g+1,1}[2g\!+\!2] \ar[r,"\theta_{\bar{\cO}}","\simeq"'] & \mathfrak{M}_{g+1,1}\! &\mathfrak{M}_{g+1,1}^\partial \ar[l,"\simeq"'].
\end{tikzcd}
\]
The top and bottom rows are zig-zags of weak equivalences: using that $\mathfrak{M}_{g,1}$ is a complex variety, and in particular is homeomorphic to a CW-complex, we could in fact invert up to homotopy the arrows
$\mathfrak{M}_{g,1}^\partial\to\mathfrak{M}_{g,1}$. Similarly, we could invert up to homotopy the inclusions
$\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}\hookrightarrow \mathring{\mathrm{HM}}((\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g}$ appearing on the left, using the deformation retractions provided by \cite[Lemma 2.7]{Bianchi:Hur3}.
It follows that the homotopy colimit of the left-hand vertical column, for varying $g$, is homotopy equivalent to the homotopy colimit of the right-hand vertical column. In other words, we have a weak equivalence
\[
\mathrm{hocolim}_{g\to \infty} \mathring{\mathrm{HM}}((\mathfrak{S}_{2g}^{\mathrm{geo}})_+)_{\kld{2g}_g} \simeq
\mathrm{hocolim}_{g\to \infty} \mathfrak{M}_{g,1}^\partial.
\]
Putting together this equivalence and the previous one, we obtain the following lemma.
\begin{lem}
\label{lem:zigzag}
There is a zig-zag of weak equivalences of topological spaces between $\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{\kld{\infty}_\infty}$ and
$\mathfrak{M}_{\infty,1}$.
\end{lem}
Our final goal is to prove the existence of a homology equivalence
\[
\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{\kld{\infty}_\infty}\to \Omega^2_0\mathbb{B}_\infty.
\]
\begin{nota}
For $d\ge2$ we denote by $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_\infty}$ the homotopy colimit of the $d$\textsuperscript{th} row of the main diagram, i.e.
\[
\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_\infty}:=\mathrm{hocolim}_{g\to \infty}\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_g}.
\]
\end{nota}
We can then write $\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{\kld{\infty}_\infty}$ as $\mathrm{hocolim}_{d\to\infty}\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_\infty}$.
Similarly,
$\Omega^2_0\mathbb{B}_\infty$ is homotopy equivalent to the homotopy colimit
\[
\Omega^2_0\mathbb{B}_\infty\simeq\mathrm{hocolim}_{d\to\infty}\Omega^2_0 \mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}},
\]
where we use the maps induced on double loop spaces by the inclusions of spaces
$\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}\hookrightarrow\mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_{d+1}^{\mathrm{geo}},\mathfrak{S}_{d+1})_{\mathbbl{1}}$,
which in turn are induced by the inclusions
of PMQ-group pairs $(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)\hookrightarrow(\mathfrak{S}_{d+1}^{\mathrm{geo}},\mathfrak{S}_{d+1})$.
The proof of \cite[Theorem 4.1]{Bianchi:Hur3} is natural in the PMQ-group pair: hence we can replace each term
$\Omega^2_0 \mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$ in the last homotopy colimit
by the homotopy equivalent term $\Omega_0B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$, and thus obtain a homotopy
equivalence
\[
\Omega^2_0\mathbb{B}_\infty\simeq \mathrm{hocolim}_{d\to\infty} \Omega^2_0 \mathrm{Hur}_+(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}} \simeq \mathrm{hocolim}_{d\to\infty} \Omega_0B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)
\]
We would therefore like to prove the existence of a homology equivalence
\[
\Xi\colon \mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{\kld{\infty}_\infty}\to \mathrm{hocolim}_{d\to\infty} \Omega_0B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+).
\]
In order to do so, we will prove the following proposition.
\begin{prop}
\label{prop:Xid}
There exists a family of homology equivalences
\[
\Xi_d\colon \mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_\infty}\to \Omega_0 B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+),\quad\quad d\ge2,
\]
such that for all $d\ge2$ the following square commutes up to homotopy
\[
\begin{tikzcd}
\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_\infty}\ar[r,"\Xi_d"] \ar[d] &\Omega_0 B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+) \ar[d]\\
\mathring{\mathrm{HM}}((\mathfrak{S}_{d+1}^{\mathrm{geo}})_+)_{\kld{d+1}_\infty}\ar[r,"\Xi_{d+1}"]& \Omega_0 B\mathring{\mathrm{HM}}((\mathfrak{S}_{d+1}^{\mathrm{geo}})_+).\\
\end{tikzcd}
\]
\end{prop}
\subsection{Mapping telescopes}
\label{subsec:maptel}
Let $M$ be a unital topological monoid, and let $\bar m\in M$ be an element. We can consider the mapping telescope
\[
\Tel(M;\bar m):=\mathrm{hocolim}\pa{M\overset{\bar m\cdot-}{\to} M\overset{\bar m\cdot-}{\to}M \overset{\bar m\cdot-}{\to}\dots},
\]
obtained by iterating left multiplication by $\bar m$;
the argument of the proof of
the group-completion theorem \cite{SegalMcDuff,FM94} provides a map $\Xi_{M;\bar m}$
from $\Tel(M;\bar m)$ to $\Omega BM$. First, one defines a map $\mathbf{s}\colon M\to \Omega BM$,
sending $m\in M$ to the loop in $BM$ represented by the 1-simplex $m$; then one defines
$\Xi_{M;\bar m}$ as the map induced on homotopy colimits of the rows of the following diagram,
where squares can be filled by homotopies in a way that is natural in $M$ and $\bar m$:
\[
\begin{tikzcd}[column sep=40pt]
M\ar[r,"\bar m\cdot-"]\ar[d,"\mathbf{s}(-)"] &
M\ar[r,"\bar m\cdot-"]\ar[d,"{\mathbf{s}(\bar m)^{-1}\cdot\mathbf{s}(-)}"] & M \ar[r,"\bar m\cdot-"]\ar[d,"{\mathbf{s}(\bar m^2)^{-1}\cdot\mathbf{s}(-)}"] & \dots\\
\Omega BM\ar[r,equal] &\Omega BM\ar[r,equal] &\Omega BM\ar[r,equal] &\dots.
\end{tikzcd}
\]
Here, for $m\in M$, the loop $\mathbf{s}(m)^{-1}\in\Omega BM$ is the inverse of the loop $\mathbf{s}(m)$, and we fix once and for all a representative $-\cdot-\colon\Omega BM\times\Omega BM\to\Omega BM$ of the loop concatenation.
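For orientation, we sketch one way to obtain the homotopies filling the squares, up to the ordering conventions for $-\cdot-$: for $m_1,m_2\in M$, the $2$-simplex of $BM$ associated with the pair $(m_1,m_2)$ provides a homotopy $\mathbf{s}(m_1m_2)\simeq\mathbf{s}(m_1)\cdot\mathbf{s}(m_2)$; applying this to the pairs $(\bar m,m)$ and $(\bar m,\bar m^{k})$, the square between the $k$\textsuperscript{th} and the $(k+1)$\textsuperscript{st} column is filled by the chain of homotopies
\[
\mathbf{s}(\bar m^{k+1})^{-1}\cdot\mathbf{s}(\bar m\cdot m)\ \simeq\ \mathbf{s}(\bar m^{k})^{-1}\cdot\mathbf{s}(\bar m)^{-1}\cdot\mathbf{s}(\bar m)\cdot\mathbf{s}(m)\ \simeq\ \mathbf{s}(\bar m^{k})^{-1}\cdot\mathbf{s}(m),
\]
and all homotopies involved can be chosen naturally in $M$ and $\bar m$.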
We remark that, in order to prove that the map $\Xi_{M;\bar m}$ is a homology
equivalence, one requires additional conditions on $M$ and $\bar m$; but the map $\Xi_{M;\bar m}$ itself can be defined in general.
If we fix an element $a\in M$, representing a path component $\pi_0(a)\in\pi_0(M)$,
we can also restrict the previous construction to a map
\[
\Tel(M,a;\bar m):=\mathrm{hocolim}\pa{M_{\pi_0(a)}\overset{\bar m\cdot-}{\to} M_{\pi_0(\bar m a)}\overset{\bar m\cdot-}{\to}M_{\pi_0(\bar m^2a)}\overset{\bar m\cdot-}{\to}\!\dots\!}\!\to\Omega_{\pi_0(a)}BM.
\]
We can then postcompose the previous map with the weak equivalence $\Omega_{\pi_0(a)}BM\overset{\simeq}{\to}\Omega_0BM$
given by right multiplication by $\mathbf{s}(a)^{-1}$.
We obtain a map
\[
\Xi_{M,a;\bar m}\colon \Tel(M,a;\bar m)\to \Omega_0BM.
\]
The construction above is natural with respect to triples $(M,a,\bar m)$ consisting of a topological monoid with two chosen elements. Moreover, if $b\in M$ is another element of $M$,
we can consider the map $\mathbf{r}^{\Tel}_b\colon \Tel(M,a;\bar m)\to \Tel(M,ab;\bar m)$ induced levelwise by right multiplication by $b$: then the following diagram commutes up to a homotopy
which is again natural in the quadruple $(M,a,\bar m,b)$:
\[
\begin{tikzcd}
\Tel(M,a;\bar m)\ar[r,"\mathbf{r}^{\Tel}_b"] \ar[dr,"\Xi_{M,a;\bar m}"'] & \Tel(M,ab;\bar m)\ar[d,"\Xi_{M,ab;\bar m}"]\\
& \Omega_0BM.
\end{tikzcd}
\]
In homology, we have a canonical identification of $H_*(\Tel(M,a;\bar m))$ with the colimit
\[
\colim\pa{H_*(M_{\pi_0(a)})\overset{\bar m\cdot-}{\to} H_*(M_{\pi_0(\bar ma)})\overset{\bar m\cdot-}{\to}H_*(M_{\pi_0(\bar m^2a)})\overset{\bar m\cdot-}{\to}\dots},
\]
where we consider $\bar m\cdot -$ as left multiplication by the ``ground class'' of $\bar m$ in $H_0(M_{\pi_0(\bar m)})$, using the Pontryagin ring structure on $H_*(M)$.
The map $H_*\pa{\Tel(M,a;\bar m)}\to H_*(\Omega_0BM)$ induced by $\Xi_{M,a;\bar m}$ coincides with the map induced on colimits of rows by the following diagram:
\[
\begin{tikzcd}[column sep =45pt]
H_*(M_{\pi_0(a)})\ar[r,"\bar m\cdot-"]\ar[d,"{\mathbf{s}(a)^{-1}\cdot\mathbf{s}(-)}"] & H_*(M_{\pi_0(\bar ma) })\ar[r,"\bar m\cdot-"] \ar[d,"{\mathbf{s}(\bar m a)^{-1}\cdot\mathbf{s}(-)}"] &
H_*(M_{\pi_0(\bar m^2a)})\ar[r,"\bar m\cdot-"] \ar[d,"{\mathbf{s}(\bar m^2a)^{-1}\cdot\mathbf{s}(-)}"] &\dots \\
H_*(\Omega_0BM)\ar[r,equal] &H_*(\Omega_0BM)\ar[r,equal] &H_*(\Omega_0BM)\ar[r,equal] &\dots
\end{tikzcd}
\]
We can now define $\Xi_d$ to be the map $\Xi_{M,a;\bar m}$ corresponding
to the data:
\begin{itemize}
\item $M=\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$;
\item $a=\pa{1,\mathfrak{c}_{(1,2)}}\pa{1,\mathfrak{c}_{(2,3)}}\dots\pa{1,\mathfrak{c}_{(d-1,d)}}$;
\item $\bar m=\pa{1,\mathfrak{c}_{(1,2)}}^2$.
\end{itemize}
Note that $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{\kld{d}_\infty}$ is defined as the same homotopy colimit as $\Tel(M,a;\bar m)$, for $(M,a,\bar m)$ as above.
The diagram in the statement of Proposition \ref{prop:Xid} commutes up to homotopy: this is seen by considering the inclusion of monoids $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)\hookrightarrow\mathring{\mathrm{HM}}((\mathfrak{S}_{d+1}^{\mathrm{geo}})_+)$ together with the map $\mathbf{r}^{\Tel}_b$
induced by the element $b=(1,\mathfrak{c}_{(d,d+1)})$.
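Concretely, write momentarily $a_d$ and $a_{d+1}$ for the elements $a$ associated with $d$ and with $d+1$, and assume (as we may) that the configurations $\mathfrak{c}_\mathbf{t}$ for transpositions $\mathbf{t}\in\mathfrak{S}_d^{\mathrm{geo}}\subset\mathfrak{S}_{d+1}^{\mathrm{geo}}$ are chosen compatibly with the inclusions of monoids; then
\[
a_{d+1}=\pa{1,\mathfrak{c}_{(1,2)}}\dots\pa{1,\mathfrak{c}_{(d-1,d)}}\cdot\pa{1,\mathfrak{c}_{(d,d+1)}}=a_d\cdot b,
\]
so that the left vertical map of the square is, levelwise, the inclusion of monoids followed by right multiplication by $b$; the naturality of the construction of $\Xi_{M,a;\bar m}$ in the triple $(M,a,\bar m)$ and the triangle involving $\mathbf{r}^{\Tel}_b$ then give the desired homotopy.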
In order to prove Proposition \ref{prop:Xid}, and hence Theorem \ref{thm:main4}, it only remains to prove that $\Xi_d$ is a homology equivalence.
\subsection{Propagators}
\label{subsec:propagators}
In this subsection, we fix $d\ge2$ and aim at proving that $\Xi_d$ is a homology equivalence. We abbreviate $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$ by $\mathring{\mathrm{HM}}_+$ for simplicity (note that this notation is not compatible with a similar notation
used in \cite{Bianchi:Hur3}, for example in \cite[Theorem 2.19]{Bianchi:Hur3}).
We denote $a=\pa{1,\mathfrak{c}_{(1,2)}}\pa{1,\mathfrak{c}_{(2,3)}}\dots\pa{1,\mathfrak{c}_{(d-1,d)}}$ as at the end of the previous subsection.
\begin{nota}
\label{nota:propagators}
We denote by $e$ the element $\pa{1,\mathfrak{c}_{(1,2)}}\dots \pa{1,\mathfrak{c}_{(1,2)}}\in\mathring{\mathrm{HM}}_+$, where $\pa{1,\mathfrak{c}_{(1,2)}}$ is repeated $2d-2$ times; we consider $\pi_0(e)\in\pi_0(\mathring{\mathrm{HM}}_+)$,
and denote by $\hat\omega(e)$ the $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$-valued total monodromy of $e$. We denote by $\pi_0(e)$ also the corresponding ``ground'' class
in $H_0\pa{\mathring{\mathrm{HM}}_{+,\hat\omega(e)}}\cong\mathbb{Z}$.
Similarly, we denote by $e'\in\mathring{\mathrm{HM}}_+$ the element
\[
e'=\pa{1,\mathfrak{c}_{(1,2)}}\pa{1,\mathfrak{c}_{(1,2)}}\pa{1,\mathfrak{c}_{(2,3)}}\pa{1,\mathfrak{c}_{(2,3)}}\dots \pa{1,\mathfrak{c}_{(d-1,d)}}\pa{1,\mathfrak{c}_{(d-1,d)}},
\]
and consider $\pi_0(e')$ also as an element in $H_0\pa{\mathring{\mathrm{HM}}_{+,\hat\omega(e')}}\cong\mathbb{Z}$.
\end{nota}
Notice that $\hat\omega(e)=(\mathbbl{1};\set{1,2},\set{3},\dots,\set{d};2d-2,0,\dots,0)$, using the notation of \cite[Proposition 7.13]{Bianchi:Hur1}, whereas $\hat\omega(e')=(\mathbbl{1};\set{1,\dots,d};2d-2)$: these are different elements in $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$.
Recall from \cite[Lemma 7.2]{Bianchi:Hur1} that the enveloping group $\mathcal{G}(\mathfrak{S}_d^{\mathrm{geo}})$ of $\mathfrak{S}_d^{\mathrm{geo}}$ is the index two subgroup $\tilde\mathfrak{S}_d\subset\mathbb{Z}\times\mathfrak{S}_d$ containing pairs $(r,\sigma)$ with $r$ and $\sigma$ of the same parity. We remark that $\hat\omega(e)$ and $\hat\omega(e')$
are sent to the same element $(2d-2,\mathbbl{1})\in\tilde\mathfrak{S}_d\cong \mathcal{G}(\mathfrak{S}_d^{\mathrm{geo}})$ under the natural map
$\widehat{\mathfrak{S}_d^{\mathrm{geo}}}\to\mathcal{G}(\mathfrak{S}_d^{\mathrm{geo}})$. We thus have
$\omega(e)=\omega(e')=(2d-2,\mathbbl{1})$, i.e. the $\tilde\mathfrak{S}_d$-valued total monodromies of $e$ and $e'$ are equal.
There is yet another, more relevant difference between $e$ and $e'$: the element $e'$ is a \emph{propagator} of the topological monoid
$\mathring{\mathrm{HM}}_+$, whereas $e$ is not, in the following sense.
\begin{defn}
Let $M$ be a weakly braided topological monoid (see \cite[Definition 3.5]{Bianchi:Hur3}) and let $m\in M$. We say that $m$ is a \emph{propagator} if for every $x\in M$ there exist $y\in M$ and $k\ge1$ such that $x\cdot y$ and $m^k$ belong to the same path component of $M$.
\end{defn}
The fact that $e'$ is a propagator follows from the fact that $\pi_0(\mathring{\mathrm{HM}}_+)\cong\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$
is generated by the elements $\widehat\mathbf{t}$ corresponding to transpositions $\mathbf{t}\in\mathfrak{S}_d^{\mathrm{geo}}$; for any fixed transposition $\mathbf{t}$, the element $\pi_0(e')\in\pi_0(\mathring{\mathrm{HM}}_+)$ admits a factorisation $\hat\mathbf{t}_1\dots\hat\mathbf{t}_{2d-2}$ in $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$, with $\mathbf{t}_1=\mathbf{t}$, see \cite[Proposition 7.11]{Bianchi:Hur1}.
The group completion theorem ensures that the map
\[
\Xi_{\mathring{\mathrm{HM}}_+,a;e'}\colon \Tel(\mathring{\mathrm{HM}}_+,a;e') \to \Omega_0B\mathring{\mathrm{HM}}_+
\]
is a homology equivalence. We then note the following fact.
\begin{lem}
\label{lem:braiding}
Let $g\ge0$ and consider the two maps
\[
e\cdot-\,,\,e'\cdot-\colon \mathring{\mathrm{HM}}_{+,\kld{d}_g}\to\mathring{\mathrm{HM}}_{+,\kld{d}_{g+d-1}};
\]
given by left multiplication by $e$ and $e'$ respectively. Then $e\cdot-$ and $e'\cdot-$ are homotopic maps.
\end{lem}
\begin{proof}
We can write $e\cdot-$ as the composition $\mathbf{l}_{(1,2)}^{2d-2}$, whereas $e'\cdot-$ can be written as the composition $\mathbf{l}_{(1,2)}^2\mathbf{l}_{(2,3)}^2\dots\mathbf{l}_{(d-1,d)}^2$. It therefore suffices to prove that, for all $g\ge0$ and all $1\le j\le d-2$, the maps
$\mathbf{l}_{(j,j+1)}^2$ and $\mathbf{l}_{(j+1,j+2)}^2$ are homotopic as maps $\mathring{\mathrm{HM}}_{+,\kld{d}_g}\to\mathring{\mathrm{HM}}_{+,\kld{d}_{g+1}}$.
For this, recall from the proof of \cite[Lemma 3.6]{Bianchi:Hur3} that the map
\[
\mathbf{braiding}\colon\mathring{\mathrm{HM}}\times\mathring{\mathrm{HM}}\to\mathring{\mathrm{HM}}\times\mathring{\mathrm{HM}},\quad\quad \pa{(t,\mathfrak{c}),(t',\mathfrak{c}')}\mapsto\pa{(t',\mathfrak{c}'),(t,\mathfrak{c}^{\omega(\mathfrak{c}')})}
\]
is a weak braiding, i.e. the composite map
\[
\begin{tikzcd}
\mathring{\mathrm{HM}}\times \mathring{\mathrm{HM}}\ar[r,"\mathbf{braiding}"] & \mathring{\mathrm{HM}}\times \mathring{\mathrm{HM}}\ar[r,"-\cdot-"] &\mathring{\mathrm{HM}}
\end{tikzcd}
\]
is homotopic to the multiplication map $-\cdot-\colon \mathring{\mathrm{HM}}\times\mathring{\mathrm{HM}}\to \mathring{\mathrm{HM}}$. It follows that the composite
\[
\begin{tikzcd}
\mathring{\mathrm{HM}}\times \mathring{\mathrm{HM}}\ar[r,"\mathbf{braiding}"] &\mathring{\mathrm{HM}}\times \mathring{\mathrm{HM}}\ar[r,"\mathbf{braiding}"] & \mathring{\mathrm{HM}}\times \mathring{\mathrm{HM}}\ar[r,"-\cdot-"] &\mathring{\mathrm{HM}},
\end{tikzcd}
\]
sending $((t,\mathfrak{c}),(t',\mathfrak{c}'))\mapsto \pa{t,\mathfrak{c}^{\omega(\mathfrak{c}')}}\cdot \pa{t',(\mathfrak{c}')^{\omega(\mathfrak{c})^{\omega(\mathfrak{c}')}}}$
is also homotopic to $-\cdot-$. The same properties hold for the restricted map $\mathbf{braiding}\colon\mathring{\mathrm{HM}}_+\times\mathring{\mathrm{HM}}_+\to\mathring{\mathrm{HM}}_+\times\mathring{\mathrm{HM}}_+$.
We can now set $(t,\mathfrak{c})=\pa{1,\mathfrak{c}_{(j+1,j+2)}}\cdot\pa{1,\mathfrak{c}_{(j+1,j+2)}}$; using that the element
\[
\omega\big((1,\mathfrak{c}_{(j+1,j+2)})(1,\mathfrak{c}_{(j+1,j+2)})\big)=(2,\mathbbl{1})\in\tilde\mathfrak{S}_d=\mathcal{G}(\mathfrak{S}_d^{\mathrm{geo}})
\]
conjugates trivially all elements in $\mathfrak{S}_d^{\mathrm{geo}}$,
and using instead that
\[
(j+1,j+2)^{(d-1+2g,\mathrm{lc}_d)}=(j+1,j+2)^{\mathrm{lc}_d}=(j,j+1)\in\mathfrak{S}_d^{\mathrm{geo}},
\]
we obtain precisely that
the map $\mathbf{l}^2_{(j,j+1)}=\pa{1,\mathfrak{c}_{(j,j+1)}}\cdot\pa{1,\mathfrak{c}_{(j,j+1)}}\cdot-$ is homotopic to the map
$\mathbf{l}^2_{(j+1,j+2)}=\pa{1,\mathfrak{c}_{(j+1,j+2)}}\cdot\pa{1,\mathfrak{c}_{(j+1,j+2)}}\cdot-$, when considered as maps
$\mathring{\mathrm{HM}}_{+,\kld{d}_g}\to\mathring{\mathrm{HM}}_{+,\kld{d}_{g+1}}$.
\end{proof}
We now notice that, by Lemma \ref{lem:braiding}, both maps induced in homology by $\Xi_{\mathring{\mathrm{HM}}_+,a;e'}$ and by $\Xi_{\mathring{\mathrm{HM}}_+,a;e}$, namely $H_*(\Tel(\mathring{\mathrm{HM}}_+,a;e')) \to H_*(\Omega_0B\mathring{\mathrm{HM}}_+)$ and $H_*(\Tel(\mathring{\mathrm{HM}}_+,a;e)) \to H_*(\Omega_0B\mathring{\mathrm{HM}}_+)$, can be described as the map induced on colimits of rows by the following commutative diagram of homology groups, where we use $\hat\omega(a)=\kld{d}_0$:
\[
\begin{tikzcd}[column sep =30pt, row sep=40pt]
H_*(\mathring{\mathrm{HM}}_{+,\kld{d}_0})\ar[r,"e\cdot-=e'\cdot-"]\ar[d,"{\mathbf{s}(-)\cdot\mathbf{s}(a)^{-1}}"] & H_*(\mathring{\mathrm{HM}}_{+,\kld{d}_{d-1}})\ar[r,"e\cdot-=e'\cdot-"]
\ar[d,"{\mathbf{s}(-)\cdot\mathbf{s}(e\cdot a)^{-1}}=", near start]
\ar[d,"{\mathbf{s}(-)\cdot\mathbf{s}(e'\cdot a)^{-1}}", near end] &
H_*(\mathring{\mathrm{HM}}_{+,\kld{d}_{2d-2}})\ar[r,"e\cdot-=e'\cdot-"]
\ar[d,"{\mathbf{s}(-)\cdot\mathbf{s}(e\cdot e\cdot a)^{-1}}=", near start]
\ar[d,"{\mathbf{s}(-)\cdot\mathbf{s}(e'\cdot e'\cdot a)^{-1}}", near end]
&\dots \\
H_*(\Omega_0B\mathring{\mathrm{HM}}_+)\ar[r,equal] &H_*(\Omega_0B\mathring{\mathrm{HM}}_+)\ar[r,equal] &H_*(\Omega_0B\mathring{\mathrm{HM}}_+)\ar[r,equal] &\dots
\end{tikzcd}
\]
Since $\Xi_{\mathring{\mathrm{HM}}_+,a;e'}$ induces a homology isomorphism, $\Xi_{\mathring{\mathrm{HM}}_+,a;e}$ also induces a homology isomorphism.
To conclude, let $\bar m= \pa{1,\mathfrak{c}_{(1,2)}}^2$ be as in the previous subsection. We note that there is a weak equivalence $\Tel(\mathring{\mathrm{HM}}_+,a;e)\overset{\simeq}{\to}\Tel\pa{\mathring{\mathrm{HM}}_+,a;\bar m}$:
indeed $e=\bar m^{d-1}$, and hence the first telescope is the homotopy colimit of a cofinal subsequence of the sequence of spaces whose homotopy colimit is the second telescope. Moreover, the following triangle commutes up to homotopy:
\[
\begin{tikzcd}
\Tel(\mathring{\mathrm{HM}}_+,a;e)\ar[r,"\simeq"] \ar[dr,"{\Xi_{\mathring{\mathrm{HM}}_+,a;e}}"']& \Tel\pa{\mathring{\mathrm{HM}}_+,a;\bar m}\ar[d,"{\Xi_{\mathring{\mathrm{HM}}_+,a;\bar m}}"]\\
& \Omega_0B\mathring{\mathrm{HM}}_+.
\end{tikzcd}
\]
Since the horizontal and the diagonal arrows are homology equivalences, the vertical arrow is also a homology equivalence.
This concludes the proof of Proposition \ref{prop:Xid}, and hence of Theorem \ref{thm:main4}.
\section{A Hurwitz model for \texorpdfstring{$\mathrm{MTSO}(2)$}{MTSO(2)}}
\label{sec:MTSO}
In this section we compare Theorem \ref{thm:main4}, asserting that
$H_*(\mathfrak{M}_{\infty,1})\cong H_*\pa{\Omega^2_0 \mathbb{B}_\infty}$, with the isomorphism
$H_*(\mathfrak{M}_{\infty,1})\cong H_*\pa{\Omega^\infty_0\mathrm{MTSO}(2)}$ proved
by Madsen and Weiss \cite{MadsenWeiss}. The following is the main result of the section.
\begin{thm}
\label{thm:main5}
Let $\mathbb{B}_\infty$ be the space occurring in the statement of Theorem \ref{thm:main4}.
There is a weak homotopy equivalence of spaces
\[
\mathbb{B}_\infty\simeq\Omega^{\infty-2}\mathrm{MTSO}(2).\]
\end{thm}
We note that both spaces are simply connected:
in particular, by \cite[Theorem 4.19]{Bianchi:Hur3}, $\mathbb{B}_\infty$ is a sequential homotopy colimit of simply connected spaces.
Therefore it suffices to construct a homology equivalence between the two spaces.
\subsection{\texorpdfstring{$E_2$}{E2}-algebras from Hurwitz spaces}
For $d\ge2$, the space $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})$ can be considered as an $E_1$-algebra,
i.e. an algebra over the operad of little $1$-cubes,
by squeezing horizontally configurations supported in $\mathring{\cR}=(0,1)^2$ and by embedding them into disjoint sub-rectangles of $\mathring{\cR}$ of the form
$(t_1,t_2)\times(0,1)$. The Hurwitz-Moore topological monoid $\mathring{\mathrm{HM}}(\mathfrak{S}_d^{\mathrm{geo}})$ is a strictification of the $E_1$-algebra
$\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})$.
Unfortunately, for $d\ge3$, the monoid $\pi_0(\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}}))\cong \pi_0(\mathring{\mathrm{HM}}(\mathfrak{S}_d^{\mathrm{geo}}))$ is not commutative, and therefore we cannot upgrade $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})$ to an $E_2$-algebra.
\begin{nota}
For $r\ge1$ we denote by $E_2(r)$ the space of $r$-tuples $(\iota_1,\dots,\iota_r)$ of self-embeddings $\iota_i\colon\mathring{\cR}\to\mathring{\cR}$ satisfying the following properties:
\begin{itemize}
\item the embeddings are \emph{rectilinear} in the following strong sense: for each $1\le i\le r$ there are $\bar z_i\in\mathbb{C}$ and $\lambda_i>0$ such that $\iota_i(z)=\lambda_i z+\bar z_i$ for all $z\in\mathring{\cR}$;
\item the closures in $\mathbb{C}$ of the open squares $\iota_i(\mathring{\cR})$ are pairwise disjoint and are contained in $\mathring{\cR}$.
\end{itemize}
\end{nota}
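To fix ideas, here is a simple element of $E_2(2)$, where the specific numerical values are chosen purely for illustration: identifying $\mathring{\cR}=(0,1)^2$ with the set of complex numbers $x+y\sqrt{-1}$ with $0<x,y<1$, the pair of rectilinear embeddings
\[
\iota_1(z)=\tfrac14 z+\tfrac18+\tfrac38\sqrt{-1},\qquad \iota_2(z)=\tfrac14 z+\tfrac58+\tfrac38\sqrt{-1}
\]
lies in $E_2(2)$: both embeddings have rescaling factor $\lambda_1=\lambda_2=\tfrac14$, and the closed squares $[\tfrac18,\tfrac38]\times[\tfrac38,\tfrac58]$ and $[\tfrac58,\tfrac78]\times[\tfrac38,\tfrac58]$ are disjoint and contained in $\mathring{\cR}$.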
The spaces $E_2(r)$ assemble into a version of the little $2$-cubes operad; note, however, that we insist that a rectilinear embedding squeezes $\mathring{\cR}$ by the same factor horizontally and vertically, and that the boundaries of the little $2$-squares are disjoint from each other and from the boundary of the ambient square.
Recall that the enveloping group $\mathcal{G}(\mathfrak{S}_d^{\mathrm{geo}})$ of $\mathfrak{S}_d^{\mathrm{geo}}$ is the index-2 subgroup $\tilde\mathfrak{S}_d\subset\mathbb{Z}\times\mathfrak{S}_d$ containing pairs $(r,\sigma)$ with $r$ and $\sigma$ of the same parity \cite[Lemma 7.2]{Bianchi:Hur1}.
Recall also that the $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$-valued total monodromy $\hat\omega\colon\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})\to \widehat{\mathfrak{S}_d^{\mathrm{geo}}}$
can be postcomposed with the natural map of PMQs $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}\to\mathcal{G}(\mathfrak{S}_d^{\mathrm{geo}})$, yielding the
$\tilde\mathfrak{S}_d$-valued total monodromy $\omega\colon\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})\to\tilde\mathfrak{S}_d$.
\begin{defn}
We denote by $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}\subset\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})$ the subspace of configurations $(P,\psi)$
whose $\tilde\mathfrak{S}_d$-valued total monodromy has the form $(h,\mathbbl{1})$ for some even integer $h\in2\mathbb{Z}$.
\end{defn}
Note that $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}$ is a union of connected components of $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})$,
and it is a sub-$E_1$-algebra of $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})$.
\begin{lem}
\label{lem:E2structure}
The $E_1$-algebra structure on $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}$ can be extended to an $E_2$-algebra structure.
\end{lem}
\begin{proof}
Let $(\iota_1,\dots,\iota_r)\in E_2(r)$ be a configuration of little $2$-cubes, i.e. rectilinear embeddings $\iota_i\colon\mathring{\cR}\hookrightarrow\mathring{\cR}$, and choose an extension of each $\iota_i$ to a semi-algebraic
homeomorphism $\tilde\iota_i\colon\mathbb{C}\to\mathbb{C}$ fixing $*=-\sqrt{-1}\in\mathbb{C}$.
Let $\mathfrak{c}_1,\dots,\mathfrak{c}_r\in\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}$
be configurations, and write $\mathfrak{c}_i=(P_i,\psi_i)$. We let $P=\iota_1(P_1)\sqcup\dots\sqcup \iota_r(P_r)$, and write
$P=\set{z_{i,j}}_{1\le i\le r,1\le j\le k_i}$, with $z_{i,j}\in\iota_i(P_i)$, for suitable $k_i=|P_i|\ge0$.
Fix an admissible generating set of $\pi_1(\mathbb{C}mP,*)$ represented by loops $\alpha_{i,j}$ for $1\le i\le r$ and $1\le j\le k_i$, such that $\alpha_{i,j}$ spins clockwise around $z_{i,j}$, and such that for each $1\le i\le r$
the product $\alpha_{i,1}\dots\alpha_{i,k_i}$ is homotopic in $\mathbb{C}mP$ to a simple loop spinning clockwise around $\iota_i(\mathring{\cR})\subset\mathbb{C}$ and disjoint from $\iota_1(\mathring{\cR})\sqcup\dots\sqcup\iota_r(\mathring{\cR})$.
We define a map of PMQs $\psi\colon\mathfrak{Q}(P)\to\mathfrak{S}_d^{\mathrm{geo}}$ by sending $\alpha_{i,j}\mapsto \psi_i(\tilde\iota_i^{-1}(\alpha_{i,j}))$.
The map $\psi$ does not depend on the choice of admissible generating set $\set{\alpha_{i,j}}_{1\le i\le r,1\le j\le k_i}$ with the properties above, nor on the choice of extensions $\tilde\iota_i\colon \mathbb{C}\to \mathbb{C}$ of $\iota_i\colon\mathring{\cR}\to \mathbb{C}$ with the properties above:
this is a consequence of the fact that each configuration $\mathfrak{c}_i$ has a $\tilde\mathfrak{S}_d$-valued total monodromy $\omega(\mathfrak{c}_i)$ that acts trivially by conjugation on $\mathfrak{S}_d^{\mathrm{geo}}$, and hence $\omega(\mathfrak{c}_i)$ also acts trivially by global conjugation on
the space $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})$, and in particular on its subspace $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}$.
We then let the operation $(\iota_1,\dots,\iota_r)\in E_2(r)$ act on the $r$-tuple of inputs
$(\mathfrak{c}_1,\dots,\mathfrak{c}_r)$ by giving the constructed output $(P,\psi)$. This defines an action of the operad $E_2$ on $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}$.
See also Figure \ref{fig:action_7}.
\end{proof}
\begin{figure}
\caption{
On top, a configuration $(\iota_1,\iota_2)\in E_2(2)$
and two configurations in $\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}$.}
\label{fig:action_7}
\end{figure}
We can consider the subspace $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)$, containing configurations $(P,\psi)$ with $\psi\colon\mathfrak{Q}(P)\to\mathfrak{S}_d^{\mathrm{geo}}$ being an augmented map of PMQs; the $E_2$-algebra structure from
Lemma \ref{lem:E2structure} restricts to an $E_2$-algebra structure on the corresponding subspace
$\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}$.
The standard inclusion of augmented PMQs $\mathfrak{S}_d^{\mathrm{geo}}\subset\mathfrak{S}_{d+1}^{\mathrm{geo}}$ gives rise to an inclusion of $E_2$-algebras
\[
\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}\subset \mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{d+1}^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})},
\]
with image a union of connected components.
\begin{nota}
We denote by $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}$ the $E_2$-algebra
\[
\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}:= \bigcup_{d\ge2}\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}.
\]
\end{nota}
Our next goal is to identify the homology of the group completion
\[
H_*(\Omega B \mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}),
\]
where we consider $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}$ as an $E_1$-algebra.
We will first compute
$H_*(\Omega B \mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})})$ for all $d\ge2$, and then take the colimit for $d\to\infty$.
\subsection{Comparison between mapping telescopes}
For each $d\ge2$, we replace the $E_2$-algebra $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}$ by the corresponding sub-topological monoid
$\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}$ of $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$, in order to be able to use again part of the arguments of Section \ref{sec:mumford}. We fix $d \ge2$ and abbreviate $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$ by $\mathring{\mathrm{HM}}_+$ throughout the subsection, to simplify formulas.
Let $e$ and $e'$ be as in Notation \ref{nota:propagators}. Let $a=\pa{1,\mathfrak{c}_{(1,2)}}\pa{1,\mathfrak{c}_{(2,3)}}\dots\pa{1,\mathfrak{c}_{(d-1,d)}}$ be as in Subsection \ref{subsec:propagators}, and let similarly $\dot a:=\pa{1,\mathfrak{c}_{(d-1,d)}}\pa{1,\mathfrak{c}_{(d-2,d-1)}}\dots \pa{1,\mathfrak{c}_{(1,2)}}$. Let the neutral element $\pa{0,(\emptyset,\mathbbl{1})}\in\mathring{\mathrm{HM}}_+$ be abbreviated by $0$. Note that $\pi_0(a\dot a)=\pi_0(\dot a a)=\pi_0(e')\in\pi_0(\mathring{\mathrm{HM}}_+)$, as follows from the equalities
\[
\begin{split}
\hat\omega(a\dot a)&=\hat\omega(a)\hat\omega(\dot a)=\hat{\mathrm{lc}_d}\,\widehat{\mathrm{lc}_d^{-1}}= (\mathbbl{1},\set{1,\dots,d},2d-2)\\
&=\widehat{\mathrm{lc}_d^{-1}}\,\hat{\mathrm{lc}_d}=\hat\omega(\dot a)\hat\omega(a)=\hat\omega(\dot aa)=\hat\omega(e')\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}}.
\end{split}
\]
Passing to $\tilde\mathfrak{S}_d$-valued total monodromies, we obtain the equality $\omega(a\dot a)=\omega(\dot aa)=\omega(e')=(2d-2,\mathbbl{1})\in\tilde\mathfrak{S}_d$. In particular, using that the element $(2d-2,\mathbbl{1})$ acts trivially by conjugation on $\mathfrak{S}_d^{\mathrm{geo}}$, and thanks to the weak braiding of $\mathring{\mathrm{HM}}_+$ already recalled in the proof of Lemma \ref{lem:braiding}, we obtain that the six maps $e'\cdot-$, $-\cdot e'$, $a\dot a\cdot -$, $-\cdot a\dot a$, $\dot a a\cdot -$ and $-\cdot\dot aa$ are all homotopic to each other when considered as maps $\mathring{\mathrm{HM}}_+\to\mathring{\mathrm{HM}}_+$.
The mapping telescopes
$\Tel(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})},0;e')$ and $\Tel(\mathring{\mathrm{HM}}_+,a;e')$
can be compared as follows:
\begin{itemize}
\item there is a map, as in Subsection \ref{subsec:maptel},
\[
\mathbf{r}^{\Tel}_a\colon
\Tel(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})},0;e')\to\Tel(\mathring{\mathrm{HM}}_+,a;e');
\]
\item there is a map
\[
\mathbf{r}^{\Tel}_{\dot a}\colon
\Tel(\mathring{\mathrm{HM}}_+,a;e')\to\Tel(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})},a\dot a;e');
\]
we may then consider $\Tel(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})},a\dot a;e')$ as a subspace of
$\Tel(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})},0;e')$, since both spaces are homotopy colimits of sequences of spaces, and the first sequence is obtained from the second by deleting the very first term.
\end{itemize}
We obtain a commutative diagram of homology groups as follows, whose rows have
$H_*(\Tel(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})},0;e'))$ and $H_*(\Tel(\mathring{\mathrm{HM}}_+,a;e'))$
as colimits, respectively
\[
\begin{tikzcd}[row sep=30pt]
H_*(\mathring{\mathrm{HM}}_{+,\mathbbl{1}})\ar[r,"e'\cdot-"]\ar[d,"-\cdot a"] & H_*(\mathring{\mathrm{HM}}_{+,\hat\omega(e')})\ar[r,"e'\cdot-"] \ar[d,"-\cdot a"] &
H_*(\mathring{\mathrm{HM}}_{+,\hat\omega(e'\cdot e')})\ar[r,"e'\cdot-"] \ar[d,"-\cdot a"] &\dots \\
H_*(\mathring{\mathrm{HM}}_{+,\hat\mathrm{lc}_d})\ar[r,"e'\cdot-"] \ar[ur,"-\cdot \dot a"] & H_*(\mathring{\mathrm{HM}}_{+,\hat\mathrm{lc}_d\cdot\hat\omega(e')})\ar[r,"e'\cdot-"] \ar[ur,"-\cdot \dot a"]&
H_*(\mathring{\mathrm{HM}}_{+,\hat\mathrm{lc}_d\cdot \hat\omega(e'\cdot e')})\ar[r,"e'\cdot-"] \ar[ur,"-\cdot \dot a"] &\dots \\
\end{tikzcd}
\]
It follows that the maps $\mathbf{r}^{\Tel}_{a}$ and $\mathbf{r}^{\Tel}_{\dot a}$ induce homology isomorphisms between $H_*(\Tel(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})},0;e'))$ and $H_*(\Tel(\mathring{\mathrm{HM}}_+,a;e'))$.
The group completion theorem can be applied to the (weakly) braided topological monoid $\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})}$, for which $e'$ is a propagator:
indeed $\pi_0(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})})$ is generated by the elements $\hat\mathbf{t}\cdot\hat\mathbf{t}\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ for $\mathbf{t}$ ranging among transpositions in $\mathfrak{S}_d^{\mathrm{geo}}$,
and again by \cite[Proposition 7.11]{Bianchi:Hur1}, for each transposition $\mathbf{t}$ we can factor $e'$ inside $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ as a product of $\hat\mathbf{t}\cdot\hat\mathbf{t}$ and some other element, whose image in $\tilde\mathfrak{S}_d$ along the natural map $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}\to\tilde\mathfrak{S}_d$ is $(2d-4,\mathbbl{1})$.
We obtain a commutative diagram
\[
\begin{tikzcd}
H_*(\Tel(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})},0;e'))\ar[r,"\mathbf{r}^{\Tel}_{a}","\cong"']\ar[d,"\Xi_{\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})},0;e'}","\cong"'] & H_*(\Tel(\mathring{\mathrm{HM}}_+,a;e')) \ar[d,"\Xi_{\mathring{\mathrm{HM}}_+,a;e'}","\cong"']\\
H_*(\Omega_0 B\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})})\ar[r]& H_*(\Omega_0B\mathring{\mathrm{HM}}_+),
\end{tikzcd}
\]
where the bottom horizontal map is the map induced by the inclusion of monoids $\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})}\hookrightarrow\mathring{\mathrm{HM}}_+$. We therefore obtain the following lemma.
\begin{lem}
\label{lem:Hiso0mHurmonetomHurm}
The inclusion of monoids $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}\hookrightarrow\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$ induces a homology isomorphism
$H_*(\Omega_0 B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})})\cong H_*(\Omega_0B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+))$.
\end{lem}
\subsection{Enveloping groups of \texorpdfstring{$\pi_0$}{pi0}}
We fix $d\ge2$ also in this subsection, and abbreviate $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$ by $\mathring{\mathrm{HM}}_+$.
We now want to determine the enveloping group of
the abelian monoid $\pi_0(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})})$; in other words, we want to compute the fundamental group
$\pi_1(B \mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})})$.
\begin{lem}
\label{lem:cGmHurmone}
The map $\pi_1(B \mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})})\to\pi_1(B\mathring{\mathrm{HM}}_+)\cong\tilde\mathfrak{S}_d$ induced by the inclusion
of monoids $\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})}\hookrightarrow\mathring{\mathrm{HM}}_+$ is injective, with image the subgroup
$2\mathbb{Z}\subset\tilde\mathfrak{S}_d$.
\end{lem}
\begin{proof}
Using \cite[Propositions 7.11 and 7.13]{Bianchi:Hur1} we can
identify $\pi_0(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})})$ with the subset of
$\widehat{\mathfrak{S}_d^{\mathrm{geo}}}=\pi_0(\mathring{\mathrm{HM}}_+)$ containing tuples of the following form,
see the notation of \cite[Proposition 7.13]{Bianchi:Hur1}:
\[
(\mathbbl{1};\mathfrak{P}_1,\dots,\mathfrak{P}_\ell;r_1,\dots,r_\ell).
\]
Note that all numbers $r_i$ are automatically even, by the conditions
imposed by \cite[Proposition 7.13]{Bianchi:Hur1}.
Multiplication by $\pi_0(e')$ transforms the above element of $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ into the element
\[
(\mathbbl{1};\set{1,\dots,d};r_1+\dots+r_\ell+2d-2)\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}};
\]
it follows that $\mathcal{G}(\pi_0(\mathring{\mathrm{HM}}_{+,(\bullet,\mathbbl{1})}))$ coincides with the enveloping group of the submonoid of $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$
spanned by elements of the form $(\mathbbl{1},\set{1,\dots,d},h)$ for $h\ge2$ even; on this submonoid the composition
is just given by summing the third components, and the enveloping group of this submonoid can be identified
with the subgroup of $\tilde\mathfrak{S}_d$ of elements of the form $(h,\mathbbl{1})$, with $h\in2\mathbb{Z}$.
\end{proof}
In order to put together Lemmas \ref{lem:Hiso0mHurmonetomHurm} and \ref{lem:cGmHurmone}, we need to replace
$\Omega B\mathring{\mathrm{HM}}_+$ by its sub-$E_1$-algebra of connected components mapping to $2\mathbb{Z}$ under the map
$\pi_0\colon \Omega B\mathring{\mathrm{HM}}_+\to \pi_0(\Omega B\mathring{\mathrm{HM}}_+)\cong\tilde\mathfrak{S}_d$:
roughly speaking, this is a sub-$E_1$-algebra of ``index $\mathfrak{S}_d$'' in $\Omega B\mathring{\mathrm{HM}}_+$.
\begin{nota}
We denote by $\Omega_{(\bullet,\mathbbl{1})} B\mathring{\mathrm{HM}}_+\subset \Omega B\mathring{\mathrm{HM}}_+$ the aforementioned sub-$E_1$-algebra of $\Omega B\mathring{\mathrm{HM}}_+$.
\end{nota}
The $E_1$-algebra $\Omega_{(\bullet,\mathbbl{1})}B\mathring{\mathrm{HM}}_+$ is the loop space of some
covering space of $B\mathring{\mathrm{HM}}_+$ with $\mathfrak{S}_d$ as group of deck transformations,
and this covering space can be identified, up to homotopy,
with the space $\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$ from \cite[Definition 2.4]{Bianchi:Hur3}:
\begin{itemize}
\item \cite[Theorem 4.1]{Bianchi:Hur3} provides a weak homotopy equivalence
\[
B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)\simeq \mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathfrak{S}_d,\mathfrak{S}_d^{op}};
\]
\item the space $\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathfrak{S}_d,\mathfrak{S}_d^{op}}$ is by definition the quotient of another space, denoted
$\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}}}$, by a free, properly discontinuous action of the group $\mathfrak{S}_d\times\mathfrak{S}_d^{op}$;
\item the action of the subgroup $\mathfrak{S}_d^{op}\subset\mathfrak{S}_d\times\mathfrak{S}_d^{op}$ is free and transitive on connected
components of $\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}}}$;
\item the quotient
\[
\pa{\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}}}}/\mathfrak{S}_d^{op}
\]
is thus connected and carries a residual action of $\mathfrak{S}_d$; we can identify this partial quotient with
the component $\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}};\mathbbl{1}}$ of configurations in
$\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}}}$ with $\mathfrak{S}_d$-valued total monodromy equal to $\mathbbl{1}\in\mathfrak{S}_d$;
the corresponding action of $\mathfrak{S}_d$ can be identified with the action by global conjugation of $\mathfrak{S}_d$ on
$\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}};\mathbbl{1}}$;
\item the composite map of monoids
\[
\begin{tikzcd}[column sep=10pt]
\widehat{\mathfrak{S}_d^{\mathrm{geo}}}\ar[r,"\cong"] & \pi_0(\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+))\ar[r,"\mathbf{s}"]& \pi_0(\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+))
\ar[d,"\cong"] & \\
& & \pi_1\pa{\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathfrak{S}_d,\mathfrak{S}_d^{op}}}\ar[r,"\mathrm{deck}"] & \mathfrak{S}_d
\end{tikzcd}
\]
coincides with the canonical map of PMQs $\widehat{\mathfrak{S}_d^{\mathrm{geo}}}\to\mathfrak{S}_d$; here
``$\mathrm{deck}$'' denotes the map of groups corresponding to the action of $\pi_1(\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathfrak{S}_d,\mathfrak{S}_d^{op}})$ by deck transformations on the covering space
\[
\begin{tikzcd}
\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}};\mathbbl{1}}\ar[d] & \\
\pa{\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}};\mathbbl{1}}}/\mathfrak{S}_d\ar[r,equal]&\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathfrak{S}_d,\mathfrak{S}_d^{op}};
\end{tikzcd}
\]
this fact can be checked by analysing the behaviour of the generators $\hat\mathbf{t}\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ along the composition, where $\mathbf{t}\in\mathfrak{S}_d^{\mathrm{geo}}$ is a transposition;
\item it follows that the weak homotopy equivalence obtained by looping \cite[Theorem 4.1]{Bianchi:Hur3}
\[
\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)\simeq \Omega\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathfrak{S}_d,\mathfrak{S}_d^{op}}
\]
restricts to a weak homotopy equivalence
\[
\Omega_{(\bullet,\mathbbl{1})} B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)\simeq \Omega\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}};\mathbbl{1}}
\]
\item the space $\mathrm{Hur}(\breve{\diamo}^{\mathrm{lr}},\breve{\partial};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}};\mathbbl{1}}$ is weakly equivalent to the topological mo\-noid
$\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$, as already observed in \cite[Subsection 4.4]{Bianchi:Hur3}.
\end{itemize}
We obtain the following lemma.
\begin{lem}
\label{lem:firstHequiv}
The map $\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}\to\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$ induced by the inclusion
of monoids $\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}\hookrightarrow\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)$ has image in the subspace
$\Omega_{(\bullet,\mathbbl{1})}B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)\simeq \Omega\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$,
and induces an isomorphism on homology groups
\[
H_*(\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})})\overset{\cong}{\to}
H_*(\Omega_{(\bullet,\mathbbl{1})}B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+))\cong H_*(\Omega\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}).
\]
\end{lem}
In fact both loop spaces $\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}$
and $\Omega\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$
are double loop spaces:
\begin{itemize}
\item on the one hand we have $\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}\simeq \Omega^2B^2(\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})})$,
where $B^2(\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})})$ denotes the double bar construction
of the $E_2$-algebra $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}$;
\item on the other hand we have
$\Omega\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}\simeq \Omega^2 B\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$,
since the topological monoid $\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$
is connected, hence group-like.
\end{itemize}
The map $\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}\to
\Omega\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}$ from Lemma \ref{lem:firstHequiv} is in fact a map of double loop spaces.
We can now use again \cite[Theorem 4.1]{Bianchi:Hur3} and write a sequence of weak equivalences,
homeomorphisms and equalities of double loop spaces
\[
\begin{split}
\Omega\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}&=\Omega\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)\simeq
\Omega(\Omega B\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d))\\
&=\Omega^2 (B\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d))
\simeq
\Omega^2(\mathrm{Hur}(\diamond,\partial\diamond;\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathfrak{S}_d,\mathfrak{S}_d^{op}})\\
&\cong
\Omega^2(\mathrm{Hur}(\diamond,\partial\diamond;\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}};\mathbbl{1}})\simeq
\Omega^2\mathrm{Hur}(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}.
\end{split}
\]
We have used that the monoid $\breve{\mathrm{HM}}_+(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)$ is group-like \cite[Theorem 2.19]{Bianchi:Hur3},
that a loop space does not change by adding connected components to a space, and that a double loop space
does not change up to homeomorphism by replacing a space with a covering of it: in fact
$\mathrm{Hur}(\diamond,\partial\diamond;\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{z_{\diamond}^{\mathrm{l}},z_{\diamond}^{\mathrm{r}};\mathbbl{1}}$ is a covering space
of $\mathrm{Hur}(\diamond,\partial\diamond;\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathfrak{S}_d,\mathfrak{S}_d^{op}}$, with $\mathfrak{S}_d$ as group of deck transformations.
Using Lemma \ref{lem:firstHequiv} we thus obtain a map of double loop spaces which is a homology equivalence
\[
\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}\to \Omega^2\mathrm{Hur}(\mathcal{R},\partial\mathcal{R};\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)_{\mathbbl{1}}.
\]
The arguments of this subsection (in particular those relying on \cite{Bianchi:Hur3}) are natural with respect
to the inclusions of PMQ-group pairs $(\mathfrak{S}_d^{\mathrm{geo}},\mathfrak{S}_d)\hookrightarrow(\mathfrak{S}_{d+1}^{\mathrm{geo}},\mathfrak{S}_{d+1})$. Letting $d$ tend
to infinity, we thus obtain a map of double loop spaces which is a homology equivalence
\[
\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}\to \Omega^2\mathbb{B}_\infty.
\]
In the last formula $\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}$ denotes the homotopy colimit
of the sequence $\Omega B\mathring{\mathrm{HM}}((\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}$, for $d\to\infty$;
we can compute the same homotopy colimit using the double loop spaces
$\Omega^2 B^2(\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})})$, thus obtaining a map of double loop spaces
which is a homology equivalence
\[
\Omega^2 B^2(\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})})\to \Omega^2\mathbb{B}_\infty.
\]
Here we use that both $\Omega^2$ and $B^2$ are functors that preserve sequential homotopy colimits.
We can then deloop the previous map twice, i.e. apply again the functor $B^2$ from $E_2$-algebras to simply connected spaces, which sends homology equivalences
to homology equivalences; we obtain a homology equivalence of simply connected spaces
\[
B^2(\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})})\to \mathbb{B}_\infty,
\]
which is therefore a weak homotopy equivalence;
a posteriori we understand that we have been working with weak equivalences, instead of just homology equivalences, essentially in the entire subsection!
\subsection{\texorpdfstring{$E_2$}{E2}-algebras from moduli spaces}
\begin{defn}
\label{defn:scM}
Let $d\ge1$. A \emph{$d$-little Riemann surface} is a compact, possibly disconnected Riemann surface $\mathcal{S}$ with $d$ ordered boundary components $\partial_1\mathcal{S},\dots,\partial_d\mathcal{S}$, such that each connected component of $\mathcal{S}$ touches $\partial\mathcal{S}$, and such that
each $\partial_i\mathcal{S}$ is endowed with a \emph{little parametrisation}: by this we mean that there is a collar neighbourhood $\partial_i\mathcal{S}\subset U_i\subset\mathcal{S}$ and a continuous map $\mathfrak{u}_i\colon U_i\to\mathcal{R}=[0,1]^2\subset\mathbb{C}$ which is a homeomorphism onto its image, sends $\partial_i\mathcal{S}$ homeomorphically to $\partial\mathcal{R}$,
and restricts to a holomorphic map on $U_i\setminus\partial_i\mathcal{S}$; moreover, we only consider $[U_i,\mathfrak{u}_i]$ as a germ of parametrised collar neighbourhood of $\partial_i\mathcal{S}$, i.e. as an equivalence class of parametrised collar neighbourhoods of $\partial_i\mathcal{S}$, where two such neighbourhoods are identified if their parametrisations agree on some smaller collar neighbourhood: such a germ is what we mean by a \emph{little parametrisation} of a boundary component of $\mathcal{S}$.
We denote by $\mathscr{M}_d$ the moduli space of $d$-little Riemann surfaces. Two $d$-little Riemann surfaces are considered equivalent if there is a biholomorphism between them which is compatible with the ordering of boundary components and with the little parametrisations of boundary components.
\end{defn}
The space $\mathscr{M}_d$ is disconnected, and there is a connected component for every tuple $(\mathfrak{P}_1,\dots,\mathfrak{P}_\ell;2g_1,\dots,2g_\ell)$,
where $\mathfrak{P}_1,\dots,\mathfrak{P}_\ell$ is a partition of $\set{1,\dots,d}$ into $1\le \ell\le d$ non-empty subsets, and $2g_1,\dots,2g_\ell$ are even numbers $\ge0$: this connected component contains $d$-little Riemann surfaces $\mathcal{S}$ having precisely $\ell$ components $\mathcal{S}(1),\dots,\mathcal{S}(\ell)$ of genera $g_1,\dots,g_\ell$, and such that $\partial_i\mathcal{S}\subset\mathcal{S}(j)$ if and only if $i\in \mathfrak{P}_j$. The use of even numbers instead of just natural numbers will become clear later.
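For example, for $d=2$ the connected components of $\mathscr{M}_2$ are indexed by the tuples $(\set{1},\set{2};2g_1,2g_2)$ with $g_1,g_2\ge0$, corresponding to surfaces with two connected components of genera $g_1$ and $g_2$, each carrying one boundary curve, and by the tuples $(\set{1,2};2g)$ with $g\ge0$, corresponding to connected surfaces of genus $g$ with two boundary curves.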
Given an operation $(\iota_1,\dots,\iota_r)\in E_2(r)$ and given $d$-little surfaces $\mathcal{S}_1,\dots,\mathcal{S}_r$, we can create a new $d$-little surface $\mathcal{S}$ by sewing
$\mathcal{S}_1\sqcup\dots\sqcup\mathcal{S}_r$ together with $d$ ordered copies of $\mathcal{R}\setminus\pa{\iota_1(\mathring{\cR})\sqcup\dots\sqcup\iota_r(\mathring{\cR})}$: we sew $\partial_i\mathcal{S}_j$ with the $i$\textsuperscript{th} copy of $\partial(\iota_j(\mathring{\cR}))$, using the fact that both curves are identified with $\partial\mathcal{R}$, either by translation and dilation, or by the little parametrisation. The Riemann structure on the sewing locus is defined in the evident way. The outer boundary $\partial\mathcal{R}$
of each copy of $\mathcal{R}\setminus\pa{\iota_1(\mathring{\cR})\sqcup\dots\sqcup\iota_r(\mathring{\cR})}$ is endowed with a tautological little parametrisation. The previous construction leads to the following, which we state as a lemma.
\begin{lem}
For all $d\ge1$ the space $\mathscr{M}_d$ is an $E_2$-algebra.
\end{lem}
Adjoining a copy of $\mathcal{R}$, whose boundary is labeled $d+1$, gives an injective map of $E_2$-algebras $\mathscr{M}_d\hookrightarrow\mathscr{M}_{d+1}$.
\begin{nota}
We denote by $\mathscr{M}_\infty$ a homotopy colimit in the category of $E_2$-algebras of the
sequence $\mathscr{M}_1\hookrightarrow\mathscr{M}_2\hookrightarrow\mathscr{M}_3\hookrightarrow\dots$.
\end{nota}
For all $d\ge2$ we can construct a map of $E_2$-algebras
\[
\mathfrak{bc}_d\colon \mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}\to \mathscr{M}_d
\]
as follows: given a configuration $(P,\psi)\in\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}$, we consider the $d$-fold, possibly disconnected cover $f\colon\mathring{\mathcal{S}}\to\mathbb{C}mP$ with monodromy induced by $\psi$, we compactify $\mathring{\mathcal{S}}$
to a closed, possibly disconnected, smooth Riemann surface $\bar\mathcal{S}$ admitting a $d$-fold branched cover map $f\colon\bar\mathcal{S}\to\mathbb{C} P^1$,
and finally we take $\mathcal{S}=f^{-1}(\mathcal{R})$: the fact that $\omega(P,\psi)\in\tilde\mathfrak{S}_d$ has the form $(h,\mathbbl{1})$, for some $h\in2\mathbb{Z}$,
implies that $\mathcal{S}$ has $d$ boundary curves; these boundary curves have a canonical order, and the projection $f$ itself endows each of them with a little parametrisation.
The explicit description of $\pi_0(\mathscr{M}_d)$ given above can be compared with the description of $\pi_0(\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})})$ derived from \cite[Proposition 7.13]{Bianchi:Hur1}: the conclusion is that, for all $d\ge2$, the map $\mathfrak{bc}_d$ induces a bijection on path components
$\pi_0(\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})})\cong\pi_0(\mathscr{M}_d)$.
For all $d\ge2$ the following diagram of $E_2$-algebras commutes strictly
\[
\begin{tikzcd}
\mathrm{Hur}(\mathring{\cR};\mathfrak{S}_d^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}\ar[r,hook]\ar[d,"\mathfrak{bc}_d"] & \mathrm{Hur}(\mathring{\cR};\mathfrak{S}_{d+1}^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}\ar[d,"\mathfrak{bc}_{d+1}"]\\
\mathscr{M}_d\ar[r,hook] &\mathscr{M}_{d+1}
\end{tikzcd}
\]
\begin{nota}
We denote by $\mathfrak{bc}_\infty\colon \mathrm{Hur}(\mathring{\cR};\mathfrak{S}_\infty^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}\to\mathscr{M}_\infty$ the map of $E_2$-algebras between the homotopy colimits induced by the sequence of maps $\mathfrak{bc}_d$ for $d\ge2$.
\end{nota}
The theorem of Madsen and Weiss identifies $B^2\mathscr{M}_d$ up to weak equivalence with
$\Omega^{\infty-2}\mathrm{MTSO}(2)$ for all $d\ge2$,
in such a way that the map of spaces $B^2\mathscr{M}_d\to B^2\mathscr{M}_{d+1}$ induced by the inclusion of $E_2$-algebras
$\mathscr{M}_d\hookrightarrow\mathscr{M}_{d+1}$ can be identified up to homotopy with the identity of $\Omega^{\infty-2}\mathrm{MTSO}(2)$.
In particular we also have $B^2\mathscr{M}_\infty\simeq\Omega^{\infty-2}\mathrm{MTSO}(2)$, and looping twice we have
a weak equivalence of double loop spaces
\[
\Omega^2B^2\mathscr{M}_\infty\simeq \Omega^\infty\mathrm{MTSO}(2).
\]
Our next aim is to prove that the map of double loop spaces
\[
\Omega^2B^2\mathfrak{bc}_\infty \colon \Omega^2B^2 \mathrm{Hur}(\mathring{\cR};\mathfrak{S}_\infty^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}\to\Omega^2B^2\mathscr{M}_\infty
\]
is a weak equivalence. The fact that each map $\mathfrak{bc}_d$ is a bijection on path components
implies that also $\mathfrak{bc}_\infty$ is a bijection on path components, and hence also $\Omega^2B^2\mathfrak{bc}_\infty$
is a bijection on path components.
We thus only have to prove that
$\Omega^2_0B^2\mathfrak{bc}_\infty \colon \Omega^2_0B^2 \mathrm{Hur}(\mathring{\cR};\mathfrak{S}_\infty^{\mathrm{geo}})_{(\bullet,\mathbbl{1})}\to\Omega^2_0B^2\mathscr{M}_\infty$
is a weak equivalence.
We can identify up to homotopy the previous map with the composition of weak homotopy equivalences
\[
\begin{split}
\Omega^2_0B^2 \mathrm{Hur}(\mathring{\cR};\mathfrak{S}_\infty^{\mathrm{geo}})_{(\bullet,\mathbbl{1})} &\simeq \Omega_0B\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})}\\
&\simeq \Omega_0B\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)\\
&\simeq \pa{\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{\kld{\infty}_\infty}}^+ \simeq \pa{\mathfrak{M}_{\infty,1}}^+\\
&\simeq \Omega^2_0B^2\mathscr{M}_1\simeq \Omega^2_0B^2\mathscr{M}_\infty.
\end{split}
\]
Here $(-)^+$ denotes the Quillen plus construction of a space, and in particular the weak equivalence
$\pa{\mathring{\mathrm{HM}}((\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{\kld{\infty}_\infty}}^+ \simeq \pa{\mathfrak{M}_{\infty,1}}^+$ is induced
by the zig-zag of weak equivalences from Lemma \ref{lem:zigzag}.
Delooping twice we obtain a weak equivalence
\[
B^2 \mathrm{Hur}(\mathring{\cR};\mathfrak{S}_\infty^{\mathrm{geo}})_{(\bullet,\mathbbl{1})} \simeq B^2\mathscr{M}_\infty\simeq \Omega^{\infty-2}\mathrm{MTSO}(2).
\]
Finally, using the weak equivalence $B^2(\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_\infty^{\mathrm{geo}})_+)_{(\bullet,\mathbbl{1})})\to \mathbb{B}_\infty$
from the previous subsection, we have completed the proof of Theorem \ref{thm:main5}.
\appendix
\section{Deferred proofs}
\subsection{Continuity of \texorpdfstring{$\check{\mathfrak{cv}}$}{ccv}}
\label{subsec:ccvcontinuous}
Recall Definition \ref{defn:ccOud}. To prove continuity of $\check\mathfrak{cv}$, it suffices to prove continuity of the composite
\[
\tilde{\check{\mathfrak{cv}}}\colon \tilde{\ccO}_{g,n}[\ud]\to\bar{\cO}ud\overset{\check\mathfrak{cv}}{\to} \mathrm{Hur}(\mathbb{C},(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}
\]
where the first map is the quotient map $(\mathfrak{r},f)\mapsto [\mathfrak{r},f]$.
Recall from Subsection \ref{subsec:classifyingspaces} that there is a surface bundle $p_{\bar{\cO}}\colon\mathscr{F}^{\bar{\cO}}_{g,n}\to\bar{\cO}ud$
with fibres Riemann surfaces of type $\Sigma_{g,n}$ (i.e., each fibre has genus $g$ and is endowed with $n$ directed marked points), and there is a continuous map $f_{\bar{\cO}}\colon\mathscr{F}^{\bar{\cO}}_{g,n}\to\mathbb{C} P^1$ restricting to $\underline{d}$-directed meromorphic functions on fibres of $p_{\bar{\cO}}$. In fact, the surface bundle $p_{\bar{\cO}}$ is obtained from the surface bundle
\[
\tilde p_{\bar{\cO}}\colon \tilde\mathscr{F}^{\bar{\cO}}_{g,n}:=\Sigma_{g,n}\times \tilde{\ccO}_{g,n}[\ud]\to\tilde{\ccO}_{g,n}[\ud],
\]
where the bundle map is projection on the second factor: more precisely,
one quotients the total space by the diagonal action of $\Diff_{g,n}$, and the base space by the action of $\Diff_{g,n}$.
Each fibre of $\tilde p_{\bar{\cO}}$ is a Riemann surface of type $\Sigma_{g,n}$ (in fact, it is a copy of $\Sigma_{g,n}$ with a Riemann structure $\mathfrak{r}$ dictated by the selected point $(\mathfrak{r},f)$ on the base space).
The composite $\tilde\mathscr{F}^{\bar{\cO}}_{g,n}\to\mathscr{F}^{\bar{\cO}}_{g,n}\overset{f_{\bar{\cO}}}{\to}\mathbb{C} P^1$ is a continuous map $\tilde f_{\bar{\cO}}$ restricting to a $\underline{d}$-directed meromorphic function on fibres of $\tilde p_{\bar{\cO}}$: more precisely, $\tilde f_{\bar{\cO}}$ restricts to the map $f\colon\Sigma_{g,n}\to\mathbb{C} P^1$ on the fibre $\Sigma_{g,n}\times(\mathfrak{r},f)=\tilde p_{\bar{\cO}}^{-1}(\mathfrak{r},f)$.
Let now $(\mathfrak{r},f)$ be a point in $\tilde{\ccO}_{g,n}[\ud]$, and let $(P,\psi)=\tilde{\check{\mathfrak{cv}}}(\mathfrak{r},f)$.
We want to show that the preimage along $\tilde{\check{\mathfrak{cv}}}$ of a sufficiently small neighbourhood of $(P,\psi)$ contains a neighbourhood of $(\mathfrak{r},f)$.
Write $P=\set{z_1,\dots,z_k}$ and let $\underline{U}=(U_1,\dots,U_k)$ be an adapted covering of $P$: the open sets $U_i\subset\mathbb{C}$ are convex, relatively compact and have disjoint closures.
Let also $V\subset\set{z\in\mathbb{C}\,|\,\Im(z)>\Im(*_P)}$ be a convex, compact set containing all points of $P\cup\set{0}$ in its interior; up to shrinking the sets $U_i$, let us assume that $U_i\subset V$ for all $1\le i\le k$. Let
\[
\underline{\zeta}=\set{\zeta_{i,j}\,|\,1\le i\le k,1\le j\le\lambda_i}\subset\Sigma_{g,n}\setminus\underline{Q}
\]
be the set of critical points of $f$ lying over $\mathbb{C}$, with $\zeta_{i,j}$ lying over $z_i$, and let $\mathcal{S}\subset\Sigma_{g,n}$ be a compact subsurface of $\Sigma_{g,n}$ with the following property: the image along $f$ of $\Sigma_{g,n}\setminus\mathcal{S}$ is relatively compactly contained in $\underline{U}\cup \mathbb{C} P^1\setminus V$. Such $\mathcal{S}$ can be taken as the complement in $\Sigma_{g,n}$ of a small neighbourhood of $\underline{Q}\cup\underline{\zeta}$; in particular, let us assume that $\mathcal{S}$ is a surface of genus $g$ with $n+\sum_{i=1}^k\lambda_i$ boundary curves. We denote by $\beta_{i,j}\subset\partial\mathcal{S}$ the curve bounding a disc $D_{i,j}\subset\Sigma_{g,n}$ that contains $\zeta_{i,j}$, for $1\le i\le k$ and $1\le j\le \lambda_i$; and we denote by $\beta_{\infty,i}\subset\partial\mathcal{S}$ the curve bounding a disc $D_i\subset\Sigma_{g,n}$ that contains $Q_i$, for $1\le i\le n$.
Let $\tilde\mathscr{F}^{\bar{\cO}}_{\mathcal{S}}:=\mathcal{S}\times\tilde{\ccO}_{g,n}[\ud] \subset \tilde\mathscr{F}^{\bar{\cO}}_{g,n}$, and consider
$df_{\bar{\cO}}$ as a section of the vertical cotangent bundle of $\tilde p_{\bar{\cO}}$, defined only over the subspace
$\tilde\mathscr{F}^{\bar{\cO}}_{\mathcal{S}}$: the restriction of $df_{\bar{\cO}}$ over $\mathcal{S}\times(\mathfrak{r}',f')$
is defined to be the differential $df'$, for all $(\mathfrak{r}',f')\in\tilde{\ccO}_{g,n}[\ud]$.
Note that both $f'$ and $df'$ have singularities near the points $Q_i$, and that is why we restrict $df_{\bar{\cO}}$ to $\mathcal{S}$. More precisely, for $p\in\Sigma_{g,n}$, $df'_p$ is really a $\mathbb{C}$-linear map $T_p\Sigma_{g,n}\to T_{f'(p)}\mathbb{C} P^1$, and only when $f'(p)\neq \infty$ can we identify $T_{f'(p)}\mathbb{C} P^1= T_{f'(p)}\mathbb{C}\cong \mathbb{C}$, and thus treat $df'_p$ as an element of the dual of $T_p\Sigma_{g,n}$.
The section $df_{\bar{\cO}}$ does not vanish on any point of $\mathcal{S}\times(\mathfrak{r},f)$; similarly, the function $f_{\bar{\cO}}$ admits no zero on any point of $D_i\times(\mathfrak{r},f)$, for all $1\le i\le n$.
Using compactness of $\mathcal{S}$ and the discs $D_i$, we can find a neighbourhood
$(\mathfrak{r},f)\in \mathscr{U}\subset\tilde{\ccO}_{g,n}[\ud]$ such that, for all $(\mathfrak{r}',f')\in\mathscr{U}$, the following hold:
\begin{itemize}
\item $df_{\bar{\cO}}$ does not vanish on any point of $\mathcal{S}\times(\mathfrak{r}',f')$;
\item for all $1\le i\le n$, $f_{\bar{\cO}}$ does not vanish on any point of $D_i\times (\mathfrak{r}',f')$.
\end{itemize}
This implies that, for $(\mathfrak{r}',f')\in\mathscr{U}$, the function $f'\colon(\Sigma_{g,n},\mathfrak{r}')\to\mathbb{C} P^1$ admits no critical points in $\mathcal{S}\subset\Sigma_{g,n}$.
Up to shrinking $\mathscr{U}$, we can also assume the following, again using the compactness of $\mathcal{S}$: for
$(\mathfrak{r}',f')\in\mathscr{U}$, the image of $\mathcal{S}$ along $f'$ is relatively compactly contained in $\underline{U}\cup \mathbb{C} P^1\setminus V$. It follows
that for $(\mathfrak{r}',f')\in\mathscr{U}$, the critical values of $f'\colon(\Sigma_{g,n},\mathfrak{r}')\to\mathbb{C} P^1$ are contained in $\underline{U}\cup \mathbb{C} P^1\setminus V$.
The next step is to prove that for $(\mathfrak{r}',f')\in\mathscr{U}$ and for all $1\le i\le n$, the function $f'\colon(\Sigma_{g,n},\mathfrak{r}')\to\mathbb{C} P^1$ admits no critical point $p\in D_i$, except, possibly, the point $Q_i$. Suppose instead that $\set{p_1,\dots,p_l}\subset D_i\setminus\set{Q_i}\subset\Sigma_{g,n}\setminus\set{Q_i}$ are the critical points of $f'$ contained in $D_i$ and different from $Q_i$. Then $1/f'$ is a holomorphic function on $D_i$ (here we use that $f'$ admits no zero on $D_i$), and we can apply the Riemann-Hurwitz formula to compute
\[
1=\chi(D_i)=\mathrm{wind}_{d(1/f')}(\beta_{\infty,i})+\mathrm{ind}_{d(1/f')}(Q_i)+\sum_{j=1}^l\mathrm{ind}_{d(1/f')}(p_j).
\]
Here $\mathrm{wind}_{d(1/f')}$ is the winding number of $d(1/f')=-df'/(f')^2$ along the curve $\beta_{\infty,i}$, which is well-defined because $d(1/f')$ does not vanish along $\beta_{\infty,i}$; similarly, we denote by $\mathrm{ind}_{d(1/f')}(p_j)$ and $\mathrm{ind}_{d(1/f')}(Q_i)$ the indices of $d(1/f')$ at the critical points $p_j$ and at $Q_i$. We can compare the above computation with the similar one obtained using $1/f$, recalling that the only critical point of $f$ in $D_i$ is, possibly, $Q_i$:
\[
1=\chi(D_i)=\mathrm{wind}_{d(1/f)}(\beta_{\infty,i})+\mathrm{ind}_{d(1/f)}(Q_i)=-(d_i-2)+(d_i-1)
\]
We can now use the continuity of the winding number along the curve $\beta_{\infty,i}\subset\Sigma_{g,n}$ with respect to a nowhere-vanishing section of the cotangent bundle of $\Sigma_{g,n}$ along $\beta_{\infty,i}$: up to shrinking $\mathscr{U}$, we can assume the equality
\[
\mathrm{wind}_{d(1/f')}(\beta_{\infty,i})=\mathrm{wind}_{d(1/f)}(\beta_{\infty,i})=-(d_i-2).
\]
Moreover, since $f'$ is $\underline{d}$-adapted, we have $\mathrm{ind}_{d(1/f')}(Q_i)=d_i-1$. It follows that
$\sum_{j=1}^l\mathrm{ind}_{d(1/f')}(p_j)=0$, and since each summand is strictly positive, we conclude that this is an empty sum, i.e. $l=0$.
This implies that for all $(\mathfrak{r}',f')\in\mathscr{U}$, the set $P'$ of critical values of $f'$ in $\mathbb{C}$ is contained in $\underline{U}\subset\mathbb{C}$.
Write $P'=\set{z'_{i,m}\,|\, 1\le i\le k,1\le m\le k'_i}$, for some integers $k'_i\ge0$, such that $z'_{i,m}$ is contained in $U_i$ (soon we will see that actually $k'_i\ge1$).
For all $1\le i\le k$, $1\le m\le k'_i$ and $1\le j\le \lambda_i$ we denote by
$\zeta'_{i,m,j,1},\dots,\zeta'_{i,m,j,\lambda'_{i,m,j}}$ the critical points of $f'$ lying in $D_{i,j}$ and lying over $z'_{i,m}$. We can then compute
\[
1=\chi(D_{i,j})=\mathrm{wind}_{d(1/f')}(\beta_{i,j})+\sum_{m=1}^{k'_i}\sum_{r=1}^{\lambda'_{i,m,j}}\mathrm{ind}_{d(1/f')}(\zeta'_{i,m,j,r}).
\]
Similarly as above, up to shrinking $\mathscr{U}$ we can assume the equality $\mathrm{wind}_{d(1/f')}(\beta_{i,j})=\mathrm{wind}_{d(1/f)}(\beta_{i,j})$ for all $1\le i\le k$ and $1\le j\le\lambda_i$; we thus obtain, for all $1\le i\le k$ and $1\le j\le \lambda_i$ the equality
\[
\sum_{m=1}^{k'_i}\sum_{r=1}^{\lambda'_{i,m,j}}\mathrm{ind}_{d(1/f')}(\zeta'_{i,m,j,r})=\mathrm{ind}_{d(1/f)}(\zeta_{i,j}).
\]
Summing over $j$, and setting $\lambda'_{i,m}=\sum_{j=1}^{\lambda_i}\lambda'_{i,m,j}$, we obtain
\[
\sum_{m=1}^{k'_i}\pa{d-\lambda'_{i,m}}=d-\lambda_i.
\]
In particular, since for all $1\le i\le k$ we have $d-\lambda_i>0$, we must also have $k'_i>0$.
If $(P',\psi')=\tilde{\check{\mathfrak{cv}}}(\mathfrak{r}',f')$, then we can let $(\alpha'_{i,m})_{1\le i\le k,1\le m\le k'_i}$ be loops representing an admissible generating set of $\pi_1(\mathbb{C}mP',*_{P'})$. The inclusion $\mathbb{C}\setminus\underline{U}\subset\mathbb{C}mP'$ induces an inclusion of fundamental groups $\pi_1(\mathbb{C}\setminus\underline{U},*_{\underline{U}})\subseteq\pi_1(\mathbb{C}mP',*_{\underline{U}})$. The former group can be identified with $\pi_1(\mathbb{C}mP,*_P)$, whereas the second can be identified with $\pi_1(\mathbb{C}mP',*_{P'})$: both identifications come from inclusions of subspaces of $\mathbb{C}$ and translations of basepoints along straight segments. We thus get an inclusion
$\pi_1(\mathbb{C}mP,*_P)\subset\pi_1(\mathbb{C}mP',*_{P'})$. We can assume that for all $1\le i\le k$, the product
$\alpha'_{i,1}\dots\alpha'_{i,k'_i}\in\pi_1(\mathbb{C}mP',*_{P'})$ corresponds to the image, under the above inclusion, of an element $\alpha_i\in\pi_1(\mathbb{C}mP,*_P)$, such that $\alpha_1,\dots,\alpha_k$ form an admissible generating set of
$\pi_1(\mathbb{C}mP,*_P)$. The above formula becomes, for all $1\le i\le k$, the equality
\[
\sum_{m=1}^{k'_i}N(\psi'(\alpha'_{i,m}))=N(\psi(\alpha_i)).
\]
If we prove that the product of permutations $\psi'(\alpha'_{i,1})\dots\psi'(\alpha'_{i,k'_i})$ equals $\psi(\alpha_i)$ in the symmetric group $\mathfrak{S}_d$, then the equality
\[
\psi'(\alpha'_{i,1})\dots\psi'(\alpha'_{i,k'_i})=\psi(\alpha_i)
\]
also holds in $\mathfrak{S}_d^{\mathrm{geo}}$, and we could conclude that $(P',\psi')$ lies in the neighbourhood $fU(P,\psi;\underline{U})$ of $(P,\psi)$ in $\mathrm{Hur}(\mathbb{C},(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}$;
we would then have that $\tilde{\check{\mathfrak{cv}}}$ sends the entire neighbourhood $\mathscr{U}$ of $(\mathfrak{r},f)$ inside $fU(P,\psi;\underline{U})$,
thus concluding the proof of continuity of $\tilde{\check{\mathfrak{cv}}}$, and hence the proof of continuity of $\check{\mathfrak{cv}}$.
We use translations of basepoints along straight segments to identify, for all $P'\subset\underline{U}$ (including $P$), $\pi_1(\mathbb{C}mP',*_{P'})\cong \pi_1(\mathbb{C}mP',*_{\underline{U}})$. We denote by $\bar\underline{U}$ the closure of $\underline{U}$ in $\mathbb{C}$, and
suppose that $\alpha_i$ is a smooth, immersed simple loop $[0,1]\to\mathbb{C}\setminus\bar\underline{U}$ based at $*_{\underline{U}}$; denote by $\sigma_i=\psi(\alpha_i)\in\mathfrak{S}_d^{\mathrm{geo}}$, and regard $\psi(\alpha_i)$ as a permutation in $\mathfrak{S}_d$. For all $1\le j\le d$ we can lift $\alpha_i$ along $f$ to a smooth, immersed path $\alpha_{i,(j)}\colon[0,1]\to\Sigma_{g,n}$, starting at the point labeled $j$ in $f^{-1}(*_{\underline{U}})$ and ending at the point labeled $\sigma_i(j)$: here we use that $\Im(*_{\underline{U}})\le\Im(*_{P})$, so $f^{-1}(*_{\underline{U}})$ also comes with an identification with $\set{1,\dots,d}$.
Let $I_{\underline{U}}\subset\mathbb{C} P^1$ be the segment starting at $*_{\underline{U}}$ and running horizontally to the right in $\mathbb{C}$ towards $\infty$. For all $1\le j\le d$
denote by $I_{\underline{U},j}\subset\Sigma_{g,n}$ the lift of $I_{\underline{U}}$ along $f$ starting at the point $*_{\underline{U},j}\in f^{-1}(*_{\underline{U}})$ labelled $j$. We parametrise $I_{\underline{U},j}$ as a smooth path $[0,1]\to\Sigma_{g,n}$ exiting from $Q_{\ell(j)}$ with velocity $\mu(j)\cdot X_{\ell(j)}$,
for some $1\le\ell(j)\le n$ and some $0\le\mu(j)\le d_{\ell(j)}$: compare with Figure \ref{fig:moduli_3}.
We define a piecewise smooth path $\tilde\alpha_{i,(j)}\colon[0,3]\to\Sigma_{g,n}\setminus f^{-1}(\bar\underline{U})$ by concatenating $I_{\underline{U},j}$, $\alpha_{i,(j)}$ and the inverse of $I_{\underline{U},\sigma_i(j)}$, on the segments $[0,1]$, $[1,2]$ and $[2,3]$ respectively.
Using that $\Sigma_{g,n}\setminus f^{-1}(\bar\underline{U})$ is open in $\Sigma_{g,n}$ and $[0,3]$ is compact, up to shrinking $\mathscr{U}$ we may assume that
for all $(\mathfrak{r}',f')\in\mathscr{U}$ the following hold:
\begin{itemize}
\item $f'\circ\tilde\alpha_{i,(j)}\colon [0,3]\to\mathbb{C} P^1$ is a piecewise smooth loop based at $\infty$ and taking values in $\mathbb{C} P^1\setminus \bar\underline{U}$.
\end{itemize}
Moreover, up to shrinking $\mathscr{U}$, and using again compactness of $[0,3]$, we can assume that the smooth functions $f,f'\colon\Sigma_{g,n}\to\mathbb{C} P^1$ are close enough in the $C^{\infty}$-topology, and that the Riemann structures $\mathfrak{r},\mathfrak{r}'$ are also close enough with respect to the $C^{\infty}$-topology, that there is an oblique half-plane of the form
\[
\set{\Im-\mathrm{Re}<\varepsilon}=\set{z\in\mathbb{C}\,|\,\Im(z)-\mathrm{Re}(z)<\varepsilon}\subset\mathbb{C},
\]
containing $\bar\underline{U}$ and such that the following holds:
\begin{itemize}
\item for all $1\le i\le k$ and $1\le j\le d$, the loops $f'\circ\tilde\alpha_{i,(j)}\colon[0,3]\to\mathbb{C} P^1\setminus\bar\underline{U}$ and
$f\circ\tilde\alpha_{i,(j)}\colon[0,3]\to\mathbb{C} P^1\setminus\bar\underline{U}$, which are loops based at $\infty$, actually take values inside $\set{\Im-\mathrm{Re}<\varepsilon}\setminus\bar\underline{U}$,
and they are based-homotopic loops inside $\set{\Im-\mathrm{Re}<\varepsilon}\setminus\bar\underline{U}$.
\end{itemize}
It follows that, along all identifications, $\psi'(\alpha_i)$ is a permutation sending $j\mapsto\sigma_i(j)$ for all $1\le j\le d$, just as $\psi(\alpha_i)$ does; in other words, $\psi'(\alpha_i)=\psi(\alpha_i)\in\mathfrak{S}_d^{\mathrm{geo}}$. Since $\alpha_i=\alpha'_{i,1}\dots\alpha'_{i,k'_i}$ in $\pi_1(\mathbb{C}mP',*_{\underline{U}})$, we also conclude the equality $\psi'(\alpha_i)=\psi'(\alpha'_{i,1})\dots\psi'(\alpha'_{i,k'_i})$ in $\mathfrak{S}_d$.
This concludes the proof of continuity of $\tilde{\check{\mathfrak{cv}}}$, and hence of $\check{\mathfrak{cv}}$.
\subsection{Continuity of \texorpdfstring{$\mathfrak{bc}$}{bc}}
\label{subsec:bccontinuous}
We fix $(P,\psi)\in\mathrm{Hur}(\mathbb{C};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}$, we write $P=\set{z_1,\dots,z_k}$, we choose $\varepsilon>0$ such that the open squares
\[
z_i+(-5\varepsilon,5\varepsilon)^2\subset\mathbb{C}
\]
centred at $z_i$ and of side $10\varepsilon$ are pairwise disjoint for $1\le i\le k$, we set $U_i=z_i+(-\varepsilon,\varepsilon)^2\subset\mathbb{C}$ and thus obtain an adapted covering $\underline{U}=(U_1,\dots,U_k)$ of $P$. Our aim is to prove that $\mathfrak{bc}$ is continuous on the normal neighbourhood $fU(P,\psi;\underline{U})\subset \mathrm{Hur}(\mathbb{C};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{(\!(\ud)\!)_g}$. We
consider $*_{\underline{U}}$ as preferred basepoint for fundamental groups of subsets of $\mathbb{C}$ containing $\mathbb{C}\setminus\underline{U}$: in all of the following arguments, basepoints for fundamental groups can always be translated to $*_{\underline{U}}$ along a straight segment.
Fix simple loops $\alpha_1,\dots,\alpha_k\subseteq\mathbb{C}\setminus\underline{U}$ that are disjoint away from $*_{\underline{U}}$ and represent an admissible generating set of $\pi_1(\mathbb{C}mP,*_{\underline{U}})$, and let $D_i\subset\mathbb{C}$ be the disc bounded by $\alpha_i$. Define also
$\sigma_i=\psi(\alpha_i)\in\mathfrak{S}_d^{\mathrm{geo}}$, and consider $\sigma_i$ also as element in $\mathfrak{S}_d$. The permutation $\sigma_i$ decomposes as a product of cycles $c_{i,1}\dots c_{i,\lambda_i}$, and we denote by $1\le d_{i,j}\le d$ the length of $c_{i,j}$.
Let $\mathring{\mathcal{S}}$ be the total space of the $d$-fold
covering of $\mathbb{C}\setminus\underline{U}$ associated with the monodromy $\varphi\colon\pi_1(\mathbb{C}mP,*_{\underline{U}})\to\mathfrak{S}_d$ induced by $\psi\colon\mathfrak{Q}(P)\to\mathfrak{S}_d^{\mathrm{geo}}$, and compactify $\mathring{\mathcal{S}}$ over $\infty$ by adjoining $n$ points $Q_1,\dots,Q_n$, in order to obtain a compact surface $\mathcal{S}$ of genus $g$ with $kd-h$ boundary components. There is a branched cover map $f\colon\mathcal{S}\to\mathbb{C} P^1\setminus\underline{U}$ of degree $d$, branching, possibly, only over $\infty$. There are $\lambda_i$ boundary components of $\mathcal{S}$ lying over $\partial U_i$:
more precisely, $\partial\mathcal{S}$ contains a component $\partial_{i,j}\mathcal{S}$ covering $\partial U_i$ with degree $d_{i,j}$, for all $1\le i\le k$ and $1\le j\le \lambda_i$: namely, $\partial_{i,j}\mathcal{S}$ is the only component in $\partial\mathcal{S}$ that can be connected inside $f^{-1}(D_i\setminus U_i)$ to the $d_{i,j}$ points of $f^{-1}(*_{\underline{U}})$ labeled by the elements of the cycle $c_{i,j}$.
We denote by $5U_i$ the square $z_i+(-5\varepsilon,5\varepsilon)^2$, and we regard $5U_i\setminus U_i$ as a collar neighbourhood of $\partial U_i$ in $\mathbb{C}\setminus\underline{U}$.
We denote by $V_{i,j}$ the connected component of $f^{-1}(5U_i\setminus U_i)\subset\mathcal{S}$ containing $\partial_{i,j}\mathcal{S}$. Up to translations and dilations in $\mathbb{C}$, we can identify $5U_i\setminus U_i$ with $(-2,3)^2\setminus\mathring{\cR}$, where we recall that $\mathring{\cR}$ denotes the unit square $(0,1)^2\subset\mathbb{C}$.
Similarly, after choosing an element in the cycle $c_{i,j}$, we can identify $V_{i,j}$ with the standard $d_{i,j}$-fold cyclic cover of $(-2,3)^2\setminus\mathring{\cR}$, with trivialised fibre over $(0,-1)=-\sqrt{-1}$.
We can then consider $\mathcal{S}\times fU(P,\psi;\underline{U})$ as a trivial bundle over $fU(P,\psi;\underline{U})$ with fibre a surface of genus $g$ with $(\sum_{i=1}^k\lambda_i)$ boundary components; moreover each boundary component of each fibre has a collar neighbourhood parametrised by a suitable cyclic cover of $(-2,3)^2\setminus\mathring{\cR}$.
Now, similarly as in \cite[Definition 3.15]{Bianchi:Hur2}, we define for all $1\le i\le k$ a continuous map $\mathfrak{i}^{\mathbb{C}}_{D_i}\colon fU(P,\psi;\underline{U})\to\mathrm{Hur}(U_i;(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\sigma_i}$, where $\mathrm{Hur}(U_i;(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\sigma_i}$
denotes the subspace of $\mathrm{Hur}(\mathbb{C};(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\sigma_i}$ of configurations supported on $U_i$, and $\hat\sigma_i\in\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$ is the image of $\sigma_i\in\mathfrak{S}_d^{\mathrm{geo}}$ along the canonical inclusion $\mathfrak{S}_d^{\mathrm{geo}}\hookrightarrow\widehat{\mathfrak{S}_d^{\mathrm{geo}}}$.
Given $(P',\psi')\in fU(P,\psi;\underline{U})$, we send it to the pair $(P'',\psi'')$, where
\begin{itemize}
\item $P''=P'\cap U_i$;
\item $\psi''\colon\mathfrak{Q}(P'')\to\mathfrak{S}_d^{\mathrm{geo}}$ is the unique (augmented) map of PMQs that agrees with $\psi'$ on all loops in $D_i\setminus P''$ representing classes both in $\mathfrak{Q}(P'')$ and $\mathfrak{Q}(P')$.
\end{itemize}
We can then identify the spaces
\[
\mathrm{Hur}(U_i;(\mathfrak{S}_d^{\mathrm{geo}})_+)_{\hat\sigma_i}\cong \prod_{j=1}^{\lambda_i}\mathrm{Hur}(U_i;(\mathfrak{S}_{c_{i,j}}^{\mathrm{geo}})_+)_{\hat c_{i,j}},
\]
and thus define maps $\mathfrak{i}^{\mathbb{C}}_{D_i,j}\colon fU(P,\psi;\underline{U})\to\mathrm{Hur}(U_i;(\mathfrak{S}_{c_{i,j}}^{\mathrm{geo}})_+)_{\hat c_{i,j}}$ by composing $\mathfrak{i}^{\mathbb{C}}_{D_i}$ with the $j$\textsuperscript{th} projection, for all $1\le j\le \lambda_i$.
By rescaling and translating, we can identify $U_i$ with $\mathring{\cR}$, and thus obtain maps $\mathfrak{i}^{\mathbb{C}}_{\mathring{\cR},i,j}\colon fU(P,\psi;\underline{U})\to\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{c_{i,j}}^{\mathrm{geo}})_+)_{\hat c_{i,j}}$ for all $1\le j\le \lambda_i$. We can then use the argument from Subsection \ref{subsec:monic} and identify $\mathrm{Hur}(\mathring{\cR};(\mathfrak{S}_{c_{i,j}}^{\mathrm{geo}})_+)_{\hat c_{i,j}}$ with the space $\mathbb{N}MonPol_{d_{i,j}}(\mathring{\cR})$ of monic polynomials of degree $d_{i,j}$ whose critical values in $\mathbb{C}$ actually lie in $\mathring{\cR}$.
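As an illustration of the latter identification, membership of a monic polynomial in the space $\mathbb{N}MonPol_{d}(\mathring{\cR})$ can be checked numerically by evaluating the polynomial at the roots of its derivative; the following sketch (the function names are ours and purely illustrative) uses the identification of $\mathring{\cR}$ with the open unit square $(0,1)^2\subset\mathbb{C}$.
\begin{verbatim}
import numpy as np

def critical_values(coeffs):
    """Critical values in C of the polynomial with coefficient vector
    `coeffs` (numpy convention: highest degree first, leading entry 1)."""
    p = np.poly1d(coeffs)
    return p(p.deriv().roots)

def lies_in_unit_square(coeffs):
    """Check that all finite critical values lie in the open square (0,1)^2."""
    cv = critical_values(coeffs)
    return bool(np.all((cv.real > 0) & (cv.real < 1)
                       & (cv.imag > 0) & (cv.imag < 1)))

# a monic cubic whose two critical values land near 0.5 + 0.5i
print(lies_in_unit_square([1.0, 0.0, -0.01, 0.5 + 0.5j]))
\end{verbatim}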
Define
\[
\mathscr{D}_d=\set{(f,z)\in\mathbb{N}MonPol_d(\mathring{\cR})\times\mathbb{C}\,|\, f(z)\in [-2,3]^2};
\]
then the projection $\pi_d\colon \mathscr{D}_d\to \mathbb{N}MonPol_d$ given by $\pi_d\colon (f,z)\mapsto f$ is a holomorphic disc bundle over $\mathbb{N}MonPol_d$; moreover each fibre $\pi_d^{-1}(f)$ has a collar neighbourhood of its boundary, namely the subspace
$V_f=\set{z\in\mathbb{C}\,|\,f(z)\in [-2,3]^2\setminus\mathcal{R}}$. This subspace is canonically identified with the standard $d$-fold cyclic
cover of $[-2,3]^2\setminus\mathcal{R}$, for varying $f\in\mathbb{N}MonPol_d(\mathring{\cR})$.
For $(P',\psi')\in fU(P,\psi;\underline{U})$ we can now retrieve $\mathfrak{bc}(P',\psi')$ as follows:
\begin{itemize}
\item we take a copy of $\mathcal{S}$, which is a Riemann surface with $n$ directed marked points and with $(\sum_{i=1}^k\lambda_i)$ boundary curves, endowed with parametrised collar neighbourhoods;
\item we consider, for each $1\le i\le k$ and $1\le j\le \lambda_i$, the polynomial $f_{i,j}(P',\psi'):=\mathfrak{i}^{\mathbb{C}}_{\mathring{\cR},i,j}(P',\psi')$, and take a copy of the disc $\pi_{d_{i,j}}^{-1}(f_{i,j}(P',\psi'))$, which is endowed with a parametrised collar neighbourhood $V_{f_{i,j}(P',\psi')}$ of its boundary;
\item we use these collar neighbourhoods to glue $\mathcal{S}$ with the $(\sum_{i=1}^k\lambda_i)$ discs: more precisely, we identify, for each $1\le i\le k$ and $1\le j\le \lambda_i$, the points of $V_{i,j}$ and $V_{f_{i,j}(P',\psi')}$ with an equal image in the standard $d_{i,j}$-fold cyclic cover of $(-2,3)^2\setminus\mathcal{R}$ (note that we glue along an annulus which is open on both sides);
\item we obtain a Riemann surface of type $\Sigma_{g,n}$, represented by a Riemann structure $\mathfrak{r}'\in\Riem(\Sigma_{g,n})$; moreover we have a $\underline{d}$-directed meromorphic function $f'\colon(\Sigma_{g,n},\mathfrak{r}')\to \mathbb{C} P^1$ which is defined by $f\colon\mathcal{S}\to\mathbb{C} P^1\setminus\underline{U}$ on $\mathcal{S}$, and is defined by suitably translating and rescaling the polynomial $f_{i,j}(P',\psi')$ on $\pi_{d_{i,j}}^{-1}(f_{i,j}(P',\psi'))$.
\end{itemize}
We note that all operations are continuous; in particular, gluing a family of discs to a fixed surface with boundary along parametrised collar neighbourhoods of the boundary curves yields a family of closed Riemann surfaces. This concludes the proof
that $\mathfrak{bc}$ is continuous.
\end{document} |
\begin{document}
\title{Hidden convexity, optimization, and algorithms on rotation matrices}
\author[1]{Akshay Ramachandran}
\author[2]{Kevin Shu}
\author[1,3]{Alex L. Wang}
\affil[1]{Centrum Wiskunde \& Informatica, Amsterdam, The Netherlands}
\affil[2]{Georgia Institute of Technology, Atlanta, GA}
\affil[3]{Purdue University, West Lafayette, IN}
\maketitle
\begin{abstract}
This paper studies
hidden convexity properties associated with
constrained optimization problems over the set of rotation matrices ${\mathbb{S}}O(n)$.
Such problems are nonconvex due to the constraint $X\in{\mathbb{S}}O(n)$. Nonetheless, we show that certain linear images of ${\mathbb{S}}O(n)$ are convex, opening up the possibility for convex optimization algorithms with provable guarantees for these problems.
Our main technical contributions show that any two-dimensional image of ${\mathbb{S}}O(n)$ is convex and that the projection of ${\mathbb{S}}O(n)$ onto its strict upper triangular entries is convex. These results allow us to construct exact convex reformulations for constrained optimization problems over ${\mathbb{S}}O(n)$ with a single constraint or with constraints defined by low-rank matrices.
Both of these results are optimal in a formal sense. \end{abstract}
\section{Introduction}
This paper studies a general class of optimization problems over rotations and orthogonal bases.
This class of problems covers applications such as the point registration problem in computer graphics \cite{ConvexPointRegistration, FunctionalMatching}, Wahba's problem of satellite attitude determination \cite{wahba1965least}, spacecraft orientation \cite{lee2011spacecraft}, and obstacle avoidance in robotics \cite{chen2020active}.
The main goal of this paper is to show that in certain cases of interest, we can produce natural \emph{convex relaxations} that exactly recover the optimal solutions for such problems.
Recall, ${\mathbb{O}}(n)$ is the set of orthogonal matrices on ${\mathbb{R}}^n$ (equivalently, matrices whose columns form an ordered orthonormal basis of ${\mathbb{R}}^n$), or more explicitly,
\[
{\mathbb{O}}(n) \coloneqq \{X \in {\mathbb{R}}^{n\times n} : X^{\intercal}X = I\}.
\]
On the other hand, ${\mathbb{S}}O(n)$ is the set of (orientation-preserving) rotations on ${\mathbb{R}}^n$, defined as
\[
{\mathbb{S}}O(n) \coloneqq \set{X \in {\mathbb{R}}^{n\times n} :\, \begin{array}{l}
X^\intercal X = I\\
\det(X) = 1
\end{array}}.
\]
These groups are referred to as the orthogonal and special orthogonal groups.
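As a quick computational illustration of these definitions (a minimal sketch; the QR-based recipe and the function name are our own and not taken from the text), one can generate elements of ${\mathbb{S}}O(n)$ and verify the two defining conditions numerically.
\begin{verbatim}
import numpy as np

def random_rotation(n, seed=0):
    """Return some element of SO(n): orthogonalize a Gaussian matrix by QR
    and flip one column if the determinant is -1 (not Haar-uniform, but
    adequate for generating test points)."""
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
    if np.linalg.det(Q) < 0:
        Q[:, 0] *= -1.0
    return Q

X = random_rotation(5)
print(np.allclose(X.T @ X, np.eye(5)), np.isclose(np.linalg.det(X), 1.0))
\end{verbatim}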
We consider optimization problems of the form
\begin{align}
\label{eq:general_problem_format}
\sup_{X\in{\mathbb{S}}O(n)}\set{\ip{A,X}:\, {\cal B}(X) \in {\cal C}},
\end{align}
and their ${\mathbb{O}}(n)$ counterparts.
Here, the objective function is defined by $A\in{\mathbb{R}}^{n\times n}$, and the constraint is defined by a linear operator ${\cal B}:{\mathbb{R}}^{n\times n}\to{\mathbb{R}}^m$ and some convex set ${\cal C}\subseteq {\mathbb{R}}^m$.
The notation $\ip{A,B}$ denotes the trace inner product $\tr(A^\intercal B)$.
We will also study feasibility variants of \eqref{eq:general_problem_format} where the goal is to identify an $X\in{\mathbb{S}}O(n)$ or $X\in{\mathbb{O}}(n)$ satisfying ${\cal B}(X)\in{\cal C}$, or to declare that no such $X$ exists.
We give additional motivation for these problems in \cref{subsec:motivation}, where we discuss constrained versions of Wahba's problem and point registration~\cite{ConvexPointRegistration,wahba1965least}.
Problems of the form \eqref{eq:general_problem_format} are ostensibly \emph{nonconvex} due to the constraint $X\in{\mathbb{S}}O(n)$ or $X\in{\mathbb{O}}(n)$.
Nevertheless, we will show that certain families of such problems admit exact convex reformulations.
To achieve this, our main technical contributions show that \emph{the images of ${\mathbb{S}}O(n)$ or ${\mathbb{O}}(n)$ under certain linear maps are convex}.
Such results---showing that certain transformations of nonconvex sets are convex---are often referred to as hidden convexity results and enable the application of convex optimization algorithms to nonconvex problems~\cite{xia2020survey,polik2007survey}.
To see how such a result might be useful in solving problems of the form \eqref{eq:general_problem_format}, suppose that ${\cal L}:{\mathbb{R}}^{n\times n}\to{\mathbb{R}}^{1+m}$ is the linear map
\[
{\cal L}(X) \coloneqq \begin{pmatrix}
\ip{A,X}\\
{\cal B}(X)
\end{pmatrix}.
\]
If the image of ${\mathbb{S}}O(n)$ under ${\cal L}$ is convex, then we would have that ${\cal L}({\mathbb{S}}O(n)) = \conv({\cal L}({\mathbb{S}}O(n))) = {\cal L}(\conv({\mathbb{S}}O(n)))$.
Here, $\conv(\cdot)$ represents the convex hull of the argument.
In this case, it would then follow that
\begin{align*}
\sup_{X\in{\mathbb{S}}O(n)}\set{\ip{A,X}:\, {\cal B}(X)\in{\cal C}} = \sup_{X\in\conv({\mathbb{S}}O(n))}\set{\ip{A,X}:\,{\cal B}(X)\in{\cal C}}.
\end{align*}
In other words, convexity of the image ${\cal L}({\mathbb{S}}O(n))$ implies that the convex relaxation of \eqref{eq:general_problem_format} that simply replaces ${\mathbb{S}}O(n)$ with $\conv({\mathbb{S}}O(n))$ is exact. This exactness is in terms of the objective value; however, we will also see how to numerically recover an actual optimizer in ${\mathbb{S}}O(n)$ or ${\mathbb{O}}(n)$ in the settings we consider.
Note that it does not suffice simply for ${\cal B}({\mathbb{S}}O(n))$ to be convex (see for example \cref{fig:b_convex_not_sufficient}).
Similar results can be derived for ${\mathbb{O}}(n)$ and/or feasibility variants of \eqref{eq:general_problem_format} under corresponding convexity results.
\begin{figure}
\caption{A set $S\subseteq{\mathbb{R}}^2$ illustrating that convexity of the image ${\cal B}({\mathbb{S}}O(n))$ alone does not suffice for exactness of the convex relaxation.}
\label{fig:b_convex_not_sufficient}
\end{figure}
The convex hulls of both ${\mathbb{O}}(n)$ and ${\mathbb{S}}O(n)$ can be described via linear matrix inequalities (LMIs). The first fact is well-known while the latter fact is due to \citet{saunderson2015semidefinite}. For ease of reference, we collect both facts in the following proposition.
\begin{proposition}[Classical/\cite{saunderson2015semidefinite}]
\label{thm:convSOn}
The convex hull of ${\mathbb{O}}(n)$ is equal to the operator norm ball and can be written as
\begin{align*}
\conv({\mathbb{O}}(n)) = {\mathbb{B}}_\textup{op}(n)=\set{X\in{\mathbb{R}}^{n\times n}:\, \begin{pmatrix}
I & X\\
X^\intercal & I
\end{pmatrix}\succeq 0}.
\end{align*}
There exist symmetric matrices $A_{i,j}\in{\mathbb{S}}^{2^{n-1}}$ indexed by $(i,j)\in[n]^2$ such that
\begin{align*}
\conv({\mathbb{S}}O(n))=\set{X\in{\mathbb{R}}^{n\times n}:\, \sum_{i=1}^n\sum_{j=1}^nX_{i,j}A_{i,j}\preceq I_{2^{n-1}}}.
\end{align*}
\end{proposition}
In light of this fact, the convex relaxations we consider in the ${\mathbb{O}}(n)$ setting can be directly solved with semidefinite programs~(SDPs), assuming that ${\cal C}$ is itself efficiently SDP-representable.
In contrast, the convex relaxations we consider in the ${\mathbb{S}}O(n)$ setting result in exponentially sized SDPs.
For this reason, we also offer new algorithms for optimizing over $\conv({\mathbb{S}}O(n))$ in the settings we consider that do not rely on the description of $\conv({\mathbb{S}}O(n))$ given in \cref{thm:convSOn}.
\subsection{Motivation}
\label{subsec:motivation}
We first discuss \emph{constrained} variants of Wahba's problem.
To set up Wahba's problem, imagine that a satellite in space wants to determine its relative rotation (with respect to a reference rotation) given the observed direction of some number of far-away stars (or other objects).
Formally, we are given a set of (unit) vectors $v_1 ,\dots, v_k \in {\mathbb{R}}^3$, corresponding to the known directions of the $k$ stars in the reference rotation, and (unit) vectors $u_1 ,\dots, u_k \in {\mathbb{R}}^3$, corresponding to the observed directions of the $k$ stars in the satellite's frame.
Our goal is to find a rotation minimizing the observation error
\begin{align*}
\min_{X\in{\mathbb{S}}O(3)} \sum_{i=1}^k \norm{X u_i - v_i}_2^2.
\end{align*}
In \cite{wahba1965least}, it was observed that this is equivalent to a linear optimization problem over ${\mathbb{S}}O(3)$
\begin{align*}
\max_{X\in{\mathbb{S}}O(3)} \ip{\sum_{i=1}^k u_i v_i^{\intercal},\, X},
\end{align*}
and that this problem can be solved in turn using a single SVD computation.
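To make this observation concrete, here is a minimal numerical sketch (ours, not taken from the cited references; the helper name is hypothetical) of the standard SVD recipe: writing the data matrix as $A$ and using $\ip{A,X}=\tr(A^\intercal X)$, an SVD $A=U\Sigma V^\intercal$ yields a maximizer of $\ip{A,X}$ over ${\mathbb{S}}O(n)$ after flipping the direction of the smallest singular value whenever $\det(UV^\intercal)=-1$.
\begin{verbatim}
import numpy as np

def max_rotation(A):
    """Return an argmax over X in SO(n) of <A, X> = tr(A^T X), via one SVD."""
    U, s, Vt = np.linalg.svd(A)          # singular values in decreasing order
    D = np.eye(A.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))
    return U @ D @ Vt

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))          # stand-in for the data matrix above
X = max_rotation(A)
assert np.isclose(np.linalg.det(X), 1.0)
print(np.trace(A.T @ X))                 # optimal value of the linear problem
\end{verbatim}
The value printed on the last line is exactly the quantity called the special trace of $A$ in \cref{sec:prelim}.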
Now, suppose we are given additional information about the true rotation $X^*$. We will incorporate this additional information as hard constraints into Wahba's problem to get a constrained optimization problem over ${\mathbb{S}}O(3)$.
For example, we may
know that the true rotation $X^*$ is within some angle, $\delta$, of another rotation $X_0\in{\mathbb{S}}O(3)$.
In this case, we would need to solve the problem
\begin{align*}
\max_{X\in{\mathbb{S}}O(3)}\set{\ip{\sum_{i=1}^k u_i v_i^{\intercal}, X}:\, \langle X_0, X\rangle \ge 1 + 2 \cos(\delta)}.
\end{align*}
\cref{thm:oneconnected} below implies that this problem has the same optimal value as its convex relaxation.
We will additionally show that
the optimum value of this problem can be efficiently computed using convex optimization techniques, even for $n\geq 3$.
As a second example, we may have a few high-fidelity observations, in which case we could introduce constraints for those observations:
\begin{align*}
\ip{X u_i, v_i} = \ip{u_iv_i^\intercal, X} \geq \cos(\delta_i).
\end{align*}
\cref{thm:opt_so_with_sut} below implies that feasibility problems with at most $n - 1$ such constraints and certain optimization problems over such constraints can be solved efficiently using convex optimization techniques.
High-dimensional settings, where $n\geq 3$, have found use in modeling nonlinear transformations on manifolds~\cite{FunctionalMatching}.
In this setting, the goal is to learn a nonlinear transformation mapping one manifold to another based on given point--point correspondences.
\citet{FunctionalMatching} suggest modeling this problem as that of finding an orthogonal (linear) transformation in the \emph{space of functions} on the manifolds.\footnote{The function spaces are endowed with bases corresponding to (possibly a truncated set of) eigenfunctions of the Laplace--Beltrami operators.}
Note, this space of functions may be high dimensional.
In this function space, point--point correspondence constraints or symmetry constraints can be naturally expressed as linear constraints on the linear transformation.
Additional desirable properties of the nonlinear transformation can be encoded as an orthogonality constraint.
Thus, this problem has a natural interpretation as an optimization problem of the form \eqref{eq:general_problem_format} over ${\mathbb{O}}(n)$.
\subsection{Statement of Results}
Our main contributions show that certain linear images of ${\mathbb{S}}O(n)$ are convex and that certain problems of the form \eqref{eq:general_problem_format} and its variants can be solved efficiently. We give an overview of these results in the order of the sections they appear in; see also \cref{tab:summary}.
\begin{table}
\centering
\begin{tabular}{|l|ll|}
\hline
& Hidden convexity & Algorithms\\\hline
Diag.\ constraints & \cite[Theorem 8]{horn1954doubly} & \cref{thm:diagonal}\\
One constraint & \cref{thm:oneconnected,thm:twoconvex} & \cref{thm:2dAlgo}\\
SUT constraints & \cref{thm:opt_so_with_sut,thm:hidden_convexity_so_with_sut,thm:feasibility} & \cref{thm:ut_structure_dt}\\\hline
\end{tabular}
\caption{Summary of our hidden convexity results and algorithms for constrained optimization over ${\mathbb{S}}O(n)$.
We present accompanying results (\cref{thm:obstructions}) showing that our hidden convexity results are each ``maximal'' in certain senses.}
\label{tab:summary}
\end{table}
\subsubsection{Feasibility problems on ${\mathbb{S}}O(n)$ with diagonal constraints}
A classical theorem of Horn, \cite[Theorem 8]{horn1954doubly}, gives a first example of a hidden convexity result on ${\mathbb{S}}O(n)$.
\begin{theorem}
\label{thm:diagconv}
Let $\diag:{\mathbb{R}}^{n\times n}\to{\mathbb{R}}^n$ be the map sending a square matrix to its vector of diagonal elements. Then $\diag({\mathbb{S}}O(n))$ is convex (in fact, polyhedral) and is given by the \emph{parity polytope} $\textup{PP}_n$.
\end{theorem}
See \cref{sec:feas_diag} for a definition of $\textup{PP}_n$ and \cref{app:separation_pp} for efficient separation and optimization oracles for $\textup{PP}_n$.
It follows that feasibility problems on ${\mathbb{S}}O(n)$ with convex constraints on the diagonal elements can be reduced to convex feasibility problems: given convex ${\cal C}\subseteq {\mathbb{R}}^n$, the set
\begin{align}
\label{eq:diagX_feas}
\set{X\in{\mathbb{S}}O(n):\, \diag(X)\in{\cal C}},
\end{align}
is nonempty if and only if $\textup{PP}_n\cap {\cal C}$ is nonempty.
In \cref{sec:feas_diag}, we complete this picture by showing how to construct an element of \eqref{eq:diagX_feas} given $d\in \textup{PP}_n\cap {\cal C}$.
\begin{restatable}{theorem}{thmdiagonal}
\label{thm:diagonal}
Given $d\in\textup{PP}_n$, it is possible to construct $X\in{\mathbb{S}}O(n)$ satisfying $\diag(X) = d$ in time $O(n^2)$.
\end{restatable}
\subsubsection{Optimization on ${\mathbb{S}}O(n)$ with one constraint}
\label{subsubsec:results_one_constraint}
The main result of \cref{sec:son_one_constraint} is that the intersection of ${\mathbb{S}}O(n)$ with any codimension-one affine space is connected.
\begin{restatable}{theorem}{thmoneconnected}
\label{thm:oneconnected}
Let $n\geq 3$, $A \in {\mathbb{R}}^{n\times n}$, and $c\in{\mathbb{R}}$. Then, the set ${\mathbb{S}}O(n) \cap \set{X\in{\mathbb{R}}^{n\times n}:\, \ip{A,X} = c}$ is connected.
\end{restatable}
An immediate corollary of \cref{thm:oneconnected} is the following:
\begin{restatable}{corollary}{thmtwoconvex}
\label{thm:twoconvex}
Let $n\geq 3$ and let $\pi : {\mathbb{S}}O(n) \rightarrow {\mathbb{R}}^2$ be linear, then $\pi({\mathbb{S}}O(n))$ is convex.
\end{restatable}
This follows since a subset of the plane is convex if and only if its intersection with every line is connected, and the preimage of a line under a rank-two linear map is a codimension-one affine subspace, to which \cref{thm:oneconnected} applies; if $\pi$ has rank at most one, then $\pi({\mathbb{S}}O(n))$ is a connected subset of a line and hence convex.
\cref{thm:oneconnected} may be of importance in its own right
as it suggests the possibility of local optimization methods on the level set ${\mathbb{S}}O(n)\cap\set{X\in{\mathbb{R}}^{n\times n}:\, f(X) = c}$.
While these results imply that it is possible to use convex optimization techniques to solve problems of the form
\begin{equation}
\max_{X \in {\mathbb{S}}O(n)} \left\{\ip{A, X} : \ip{B, X} \in [a,b]\right\}\label{eq:ndmax}
\end{equation}
(for example by replacing ${\mathbb{S}}O(n)$ with $\conv({\mathbb{S}}O(n))$), it is not clear how to do so \emph{efficiently} when $n$ is large.
This is because \Cref{thm:convSOn} only guarantees an exponentially sized LMI representation for $\conv({\mathbb{S}}O(n))$.
To address this issue, we give an efficient algorithm for problems of the form \eqref{eq:ndmax} based on running the ellipsoid algorithm on the two-dimensional image of ${\mathbb{S}}O(n)$.
It is noteworthy that because we use the ellipsoid algorithm in a constant-dimensional space, we do not face the infamously high running times of the ellipsoid method in high-dimensional spaces.
\begin{restatable}{theorem}{thmtwodalgo}
\label{thm:2dAlgo}
Let $n\geq 3$, $A,\, B\in{\mathbb{R}}^{n\times n}$ with $\norm{A}_{\tr} = \norm{B}_{\tr} = 1$.
Here $\|\cdot\|_{\tr}$ is the trace norm, defined formally in \Cref{sec:prelim}.
Let $X^*$ be the optimal solution to \eqref{eq:ndmax}. We can compute $\ip{A,X^*}$ and $\ip{B,X^*}$ within an additive error of $\epsilon$ in time
\[
O\left( n^3\log\left(\frac{1}{\epsilon}\right)^2\right).
\]
Here, $n^3$ is the time complexity of computing the SVD of an $n\times n$ matrix.
Moreover, we will return $\alpha, \beta \in {\mathbb{R}}$ so that
$|\alpha| + |\beta| = 1$ and
\[
\ip{\alpha A + \beta B, X^*} + \epsilon \ge \max \{\ip{\alpha A + \beta B, X} : X \in {\mathbb{S}}O(n)\}.
\]
\end{restatable}
\begin{remark}
Let $\alpha,\,\beta$ denote the quantities returned in \cref{thm:2dAlgo}. While \Cref{thm:2dAlgo} does not directly return a maximizer of \eqref{eq:ndmax}, we believe that
any element of
\begin{align*}
\argmax_{X\in{\mathbb{S}}O(n)}\ip{\alpha A + \beta B, X}
\end{align*}
should be a good approximation of a true maximizer under mild conditions. Such an element can be computed from $\alpha A + \beta B$ in the time of a single SVD.
Analyzing this procedure is outside the scope of the current paper and we leave this question for future work.
\end{remark}
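To indicate why a constant-dimensional cutting-plane method is available, observe that linear optimization over the two-dimensional image $K=\set{(\ip{A,X},\ip{B,X}):\, X\in{\mathbb{S}}O(n)}$ reduces to a single SVD; the sketch below (our own illustration, not the algorithm of \cref{thm:2dAlgo}) evaluates such a linear-optimization oracle, from which support values and separating hyperplanes for the convex set $K$ can be extracted.
\begin{verbatim}
import numpy as np

def so_argmax(M):
    """argmax over X in SO(n) of <M, X>, computed from one SVD."""
    U, s, Vt = np.linalg.svd(M)
    D = np.eye(M.shape[0])
    D[-1, -1] = np.sign(np.linalg.det(U @ Vt))
    return U @ D @ Vt

def lo_oracle(A, B, alpha, beta):
    """Maximize alpha*y1 + beta*y2 over K = {(<A,X>, <B,X>) : X in SO(n)};
    return the optimal value together with an attaining point of K."""
    X = so_argmax(alpha * A + beta * B)
    y = (np.trace(A.T @ X), np.trace(B.T @ X))
    return alpha * y[0] + beta * y[1], y

rng = np.random.default_rng(2)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
print(lo_oracle(A, B, 0.3, -0.7))
\end{verbatim}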
\subsubsection{Feasibility and optimization on ${\mathbb{S}}O(n)$ with strictly upper triangular constraints}
The last class of constraints we consider are constraints on the strictly upper triangular (SUT) entries of $X\in{\mathbb{S}}O(n)$.
Our main result in this direction shows that not only is the projection of ${\mathbb{S}}O(n)$ onto its SUT entries convex, but furthermore, it is possible to optimize certain linear functions subject to convex constraints on the SUT entries using convex optimization.
Let $\pi_{\cal T}(X) = (X_{ij})_{i<j}\in{\mathbb{R}}^{\binom{n}{2}}$ denote the projection of $X$ onto its SUT entries (i.e., those entries $X_{ij}$ such that $i<j$).
We will consider constraining the value of $\pi_{\cal T}(X)$ for $X\in{\mathbb{S}}O(n)$ and then optimizing a linear function over this set.
Let $A\in{\mathbb{R}}^{n\times n}$ and let ${\cal C}$ be a nonempty closed convex subset of $\pi_{\cal T}({\mathbb{B}}_\textup{op}(n))$.
Recall ${\mathbb{B}}_\textup{op}(n)$ is the operator norm ball and is the same as $\conv({\mathbb{O}}(n))$ by \Cref{thm:convSOn}.
We consider the problems
\begin{align}
&\sup_{X\in{\mathbb{S}}O(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X)\in {\cal C}} \label{eq:opt_over_so_with_sut}\\
&\qquad\leq \sup_{X\in{\mathbb{O}}(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X)\in {\cal C}}\label{eq:opt_over_o_with_sut}\\
&\qquad\leq
\max_{X\in{\mathbb{B}}_\textup{op}(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X)\in {\cal C}}. \label{eq:opt_over_bop_with_sut}
\end{align}
Our main result on this topic is:
\begin{restatable}{theorem}{thmoptsosut}
\label{thm:opt_so_with_sut}
Let $A\in{\mathbb{R}}^{n\times n}$ be a diagonal matrix and let ${\cal C}\subseteq\pi_{\cal T}({\mathbb{B}}_\textup{op}(n))$ be a nonempty closed convex set. Then, equality holds between \eqref{eq:opt_over_o_with_sut} and \labelcref{eq:opt_over_bop_with_sut}.
If additionally $\det(A)\geq 0$, then equality holds between \labelcref{eq:opt_over_so_with_sut,eq:opt_over_o_with_sut,eq:opt_over_bop_with_sut}.
\end{restatable}
An immediate corollary of \cref{thm:opt_so_with_sut} is the following:
\begin{restatable}{corollary}{thmfeasibility}
\label{thm:feasibility}
It holds that $\pi_{\cal T}({\mathbb{S}}O(n)) = \pi_{\cal T}({\mathbb{O}}(n)) = \pi_{\cal T}({\mathbb{B}}_\textup{op}(n))$. In particular, all three sets are convex.
\end{restatable}
We remark that any $n-1$ rank-one matrices $u_1v_1^\intercal,\dots,u_{n-1}v_{n-1}^\intercal$ can be made strictly upper triangular by left- and right-multiplying by ${\mathbb{S}}O(n)$ matrices using Gram--Schmidt. In particular, optimization problems or feasibility problems with constraints on $\ip{u_iv_i^\intercal, X}$ are a special case of problems with SUT constraints (see \cref{prop:rank_one}).
This holds too for any $n-1$ coordinate constraints. See \cref{subsec:rank_one_coordinate} for a more detailed explanation.
Optimization over ${\mathbb{B}}_\textup{op}(n)$ is tractable using a linearly sized SDP by \Cref{thm:convSOn}, so this theorem can also be turned into an efficient algorithm for performing such optimization whenever ${\cal C}$ is itself efficiently SDP-representable.
We will further show strong structural results about the matrices in ${\mathbb{S}}O(n)$ with fixed SUT entries. These structural results allow us to explicitly construct an optimizer of \eqref{eq:opt_over_so_with_sut} given an optimizer of \eqref{eq:opt_over_bop_with_sut} under the assumptions of \cref{thm:opt_so_with_sut}.
They will additionally allow us to extend \cref{thm:opt_so_with_sut} to an approximation result for ${\mathbb{S}}O(n)$ with $\det(A)<0$. These structural results are summarized below and proven in parts throughout \Cref{sec:utconstructions}.
\begin{theorem}
\label{thm:ut_structure_dt}
Let $\sigma \in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))\subseteq {\mathbb{R}}^{\binom{n}{2}}$ and let $V_\sigma = \{X \in {\mathbb{O}}(n) : \pi_{\cal T}(X) = \sigma\}$.
The following assertions hold:
\begin{enumerate}
\item $|V_\sigma| = 2^n$.
\item For each $i \in [n]$, there exist functions $\alpha_i(\sigma) < \beta_i(\sigma)$, so that $X_{i,i} \in \{\alpha_i, \beta_i\}$ for each $X \in V_\sigma$. We will suppress the dependence of $\alpha_i$ and $\beta_i$ on $\sigma$ for convenience of notation.
\item No two elements in $V_\sigma$ have the same diagonal entries.
That is, for each $d \in \{\alpha_1, \beta_1\} \times \{\alpha_2, \beta_2\} \times\dots\times \{\alpha_n, \beta_n\}$, there is a unique $X \in V_\sigma$ so that $\diag(X) = d$.
\item For each $i\in[n]$, $\alpha_i$ and $\beta_i$ are continuous functions of $\sigma$. The function $\beta_i$ is convex in $\sigma$, and the function $\alpha_i$ is concave in $\sigma$.
\item $X \in V_\sigma$ is in ${\mathbb{S}}O(n)$ if and only if the number of $i$ so that $X_{i,i} = \alpha_i$ is even.
\item Given $\rho \in \{-1, 1\}^n$, we can construct a matrix $X \in V_\sigma$ so that $X_{i,i} = \begin{cases} \alpha_i \text{ if }\rho_i = -1\\\beta_i \text{ if }\rho_i = 1\end{cases}$ in time $O(n^3)$.
\end{enumerate}
\end{theorem}
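To give a feel for constructions of this kind, the following back-substitution sketch (our own illustration; it is not the construction referenced in the theorem, and the function name and sign convention are ours) attempts to complete a prescribed strict upper triangle to an orthogonal matrix, one column at a time from right to left. At each column the orthogonality conditions leave a one-parameter affine family, which is intersected with a sphere; the two resulting roots per column mirror the $2^n$ count in the theorem.
\begin{verbatim}
import numpy as np

def complete_from_sut(sut, signs=None):
    """Try to complete the strict upper triangle of `sut` (entries on and
    below the diagonal are ignored) to an orthogonal matrix; `signs[j]` in
    {+1, -1} picks one of the two solutions at column j.  Illustrative
    sketch only; assumes the prescribed data lies in the interior."""
    n = sut.shape[0]
    X = np.triu(np.asarray(sut, dtype=float), k=1)
    signs = np.ones(n) if signs is None else np.asarray(signs, dtype=float)
    for j in range(n - 1, -1, -1):
        known = X[:j, j]                 # prescribed entries above the diagonal
        m = n - j                        # unknowns X[j:, j]
        A = X[j:, j + 1:].T              # orthogonality with later columns
        b = -X[:j, j + 1:].T @ known
        if A.shape[0] == 0:              # last column: no orthogonality yet
            y0, v = np.zeros(m), np.eye(m)[0]
        else:
            y0 = np.linalg.lstsq(A, b, rcond=None)[0]   # particular solution
            v = np.linalg.svd(A)[2][-1]                 # null direction of A
        # intersect the line {y0 + t v} with the sphere |y|^2 = 1 - |known|^2
        r2 = 1.0 - known @ known
        a2, a1, a0 = v @ v, 2.0 * (y0 @ v), y0 @ y0 - r2
        disc = a1 * a1 - 4.0 * a2 * a0
        if disc < 0:
            raise ValueError("no orthogonal completion found")
        X[j:, j] = y0 + ((-a1 + signs[j] * np.sqrt(disc)) / (2.0 * a2)) * v
    return X

rng = np.random.default_rng(3)
sigma = np.triu(0.1 * rng.standard_normal((4, 4)), k=1)
X = complete_from_sut(sigma)
print(np.allclose(X.T @ X, np.eye(4)), np.linalg.det(X))
\end{verbatim}
Flipping entries of the sign vector selects different completions with the same strict upper triangle; by the theorem, exactly half of the elements of $V_\sigma$ lie in ${\mathbb{S}}O(n)$, and in practice one simply checks the determinant of the output.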
\subsubsection{Obstructions to Progress}
Finally, \cref{sec:obstructions} provides constructions showing that \Cref{thm:diagconv}, \Cref{thm:twoconvex}, and \Cref{thm:hidden_convexity_so_with_sut} above are \emph{optimal} in specific senses. We summarize the results here:
\begin{theorem}
\label{thm:obstructions}
The following assertions hold:
\begin{enumerate}
\item For any $n\geq 3$, the images of ${\mathbb{S}}O(n)$ under linear maps to ${\mathbb{R}}^2$ are ``maximally convex'' in the following sense: There exists $\pi : {\mathbb{R}}^{n\times n}\rightarrow {\mathbb{R}}^3$ so that $\pi({\mathbb{S}}O(n))$ is nonconvex.
\item The projection of ${\mathbb{S}}O(n)$ onto its diagonal is ``maximally convex'' in the following sense:
For $A \in {\mathbb{R}}^{n\times n}$, let $\pi(X) = (X_{11}, X_{22}, \dots, X_{nn}, \ip{A, X})$. If $A$ is not itself diagonal, then $\pi({\mathbb{S}}O(n))$ is not convex.
\item For any $n\geq 3$, the projection of ${\mathbb{S}}O(n)$ onto its SUT entries is ``maximally convex'' in the following sense:
If $\pi : {\mathbb{R}}^{n\times n} \rightarrow {\mathbb{R}}^m$ is any linear map with $\rank(\pi)> \binom{n}{2}$, then $\pi({\mathbb{S}}O(n))$ is not convex.
\item \label{itm:necessary} The assumption $\det(A)\geq 0$ in \cref{thm:opt_so_with_sut} is necessary in the following sense:
There exists $\sigma \in {\mathbb{R}}^{\binom{n}{2}}$ and a diagonal matrix $C$ so that
\begin{align*}
\max_{X \in {\mathbb{S}}O(n)} \{\ip{C, X} : \pi_{\cal T}(X) = \sigma\} &<
\max_{X \in \conv({\mathbb{S}}O(n))} \{\ip{C, X} : \pi_{\cal T}(X) = \sigma\}.
\end{align*}
\end{enumerate}
\end{theorem}
This theorem is proven in parts throughout \Cref{sec:obstructions}.
\subsection{Related literature}
Hidden convexity results are scattered throughout the literature on optimization, numerical linear algebra, and matrix analysis.
We recommend the following surveys/chapters for introductions to this subject~\cite{xia2020survey,polik2007survey,barvinok2002course}.
Along these lines, our hidden convexity results and their subsequent applications in deriving convex SDP relaxations of nonconvex problems parallel Dines' Theorem~\cite{dines1941mapping} and its application in deriving the S-lemma~\cite{fradkov1979thes}, a fundamental result in control theory and nonlinear optimization.
Our results extend existing hidden convexity results related to the (special) orthogonal group.
Some of the earliest work in this line is \cite[Theorem 8]{horn1954doubly} stating that $\diag({\mathbb{S}}O(n)) = \textup{PP}_n$.
There are other similar results concerning the convexity of the image of ${\mathbb{S}}O(n)$ under various nonlinear maps, for example the famous Schur--Horn theorem~\cite{horn1954doubly}.
Another paper along these lines is \cite{fiedler2009suborthogonality}, which characterizes the possible projections of ${\mathbb{S}}O(n)$ onto its rectangular submatrices.
In particular, it is not hard to show using their results that the projection of ${\mathbb{S}}O(n)$ onto a $k \times \ell$ rectangular submatrix is convex if and only if $k + \ell \le n$. Our results extend \cite{fiedler2009suborthogonality} to nonrectangular coordinate patterns.
For further work in this direction, see \cite{TamSurvey,GSBook}.
Another important piece of related work is \cite{saunderson2015semidefinite}, which gives an LMI description of $\conv({\mathbb{S}}O(n))$.
This LMI description is constructed using Lie group theory applied to ${\mathbb{S}}O(n)$
and is related to the fact that the fundamental group of ${\mathbb{S}}O(n)$ is ${\mathbb{Z}}/2{\mathbb{Z}}$.
This fact will also be crucial in our proof of \cref{thm:oneconnected}.
Inspired by techniques from \cite{saunderson2015semidefinite}, we can view our hidden convexity results as new quadratic convexity results on the sphere in the spirit of Brickman's Theorem~\cite{brickman1961field}. Recall, Brickman's Theorem states that for any $A,B\in{\mathbb{S}}^n$ and $n\geq 3$, the set
\begin{align*}
\set{\begin{pmatrix}
x^\intercal Ax\\
x^\intercal Bx
\end{pmatrix}\in{\mathbb{R}}^2:\, x\in{\mathbf{S}}^{n-1}}
\end{align*}
is convex. Here, ${\mathbf{S}}^{n-1}$ is the sphere in ${\mathbb{R}}^n$.
We elaborate on this connection in \Cref{sec:quadratic_convexity}.
Optimization over the special orthogonal group also appears in many other settings.
Such problems were implicitly studied for ${\mathbb{S}}O(3)$ in \cite{lee2011spacecraft}, which considers a formulation in terms of quadratic maps of quaternions.
In another instance, \cite{brynte2022tightness} shows that certain standard semidefinite programming approaches to quadratic optimization problems applied to ${\mathbb{S}}O(n)$ do not always produce the correct result.
For this, they use the theory of nonnegative quadratic forms over real varieties developed in \cite{blekherman2016sums}.
Some recent work of \citet{gilman2022semidefinite}
considers the exactness of SDP relaxations of quadratic optimization problems with variables in the Stiefel manifold $\set{X \in {\mathbb{R}}^{n\times k}:\, X^\intercal X = I_k}$ for some $k \leq n$. Note that when $k=n$, this set is identical to ${\mathbb{O}}(n)$. \citet{gilman2022semidefinite} show that the natural SDP relaxation is exact for such problems when the operator defining the quadratic form is close enough to being diagonalizable.
\section{Preliminaries}
\label{sec:prelim}
We will need to define a \emph{maximal torus} in ${\mathbb{S}}O(n)$.
Fix some $n$ for this section.
Let $k = \lfloor \frac{n}{2}\rfloor$ and
let $R(\theta_1,\dots,\theta_k)$ denote the matrix in ${\mathbb{S}}O(n)$ given by
\begin{align}
R(\theta_1,\dots,\theta_k) &\coloneqq \begin{pmatrix}
\begin{smallmatrix}
\cos(\theta_1) & \sin(\theta_1)\\
-\sin(\theta_1) & \cos(\theta_1)
\end{smallmatrix} & \\
& \ddots\\
& & \begin{smallmatrix}
\cos(\theta_k) & \sin(\theta_k)\\
-\sin(\theta_k) & \cos(\theta_k)
\end{smallmatrix}
\end{pmatrix}
\label{eq:torus_even}
\end{align}
if $n$ is even, and
\begin{align}
R(\theta_1,\dots,\theta_k) &\coloneqq \begin{pmatrix}
1&&\\
& \begin{smallmatrix}
\cos(\theta_1) & \sin(\theta_1)\\
-\sin(\theta_1) & \cos(\theta_1)
\end{smallmatrix} & \\
& & \ddots\\
& & & \begin{smallmatrix}
\cos(\theta_k) & \sin(\theta_k)\\
-\sin(\theta_k) & \cos(\theta_k)
\end{smallmatrix}
\end{pmatrix}
\label{eq:torus_odd}
\end{align}
if $n$ is odd.
We define the maximal torus ${\mathbb{T}}$ to be the set of matrices of the form $R(\theta_1, \dots, \theta_k)$ as the $\theta_i$ range over $[0,2\pi)$.
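For instance, an element of ${\mathbb{T}}$ can be assembled numerically from its angles (a small illustrative sketch; the helper name is ours).
\begin{verbatim}
import numpy as np

def torus_element(thetas, odd=False):
    """Assemble R(theta_1, ..., theta_k) as a block-diagonal matrix, with a
    leading 1-by-1 block equal to 1 when n is odd."""
    blocks = [np.array([[1.0]])] if odd else []
    for t in thetas:
        c, s = np.cos(t), np.sin(t)
        blocks.append(np.array([[c, s], [-s, c]]))
    n = sum(b.shape[0] for b in blocks)
    R = np.zeros((n, n))
    i = 0
    for b in blocks:
        R[i:i + b.shape[0], i:i + b.shape[0]] = b
        i += b.shape[0]
    return R

R = torus_element([0.3, 1.1], odd=True)   # an element of the torus in SO(5)
print(np.allclose(R.T @ R, np.eye(5)), np.isclose(np.linalg.det(R), 1.0))
\end{verbatim}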
The following result is a special case of what is known as the Maximal Torus Theorem, but is a simple corollary of the real spectral theorem~\cite[Theorem
2.5.8]{horn2012matrix} in our setting.
\begin{theorem}
\label{lem:max_torus}
For any $X\in{\mathbb{S}}O(n)$, there exists $U\in{\mathbb{S}}O(n)$ so that $U^{\intercal} X U \in {\mathbb{T}}$. That is, $U^{\intercal}XU = R(\theta_1 ,\dots, \theta_k)$ for some $\theta_i \in [0, 2\pi)$.
\end{theorem}
For $A\in{\mathbb{R}}^{n\times n}$, let $\|A\|_{\tr}$ and $\|A\|_\textup{op}$ denote the \emph{trace norm} and \emph{operator norm} of $A$. These are defined as the sum of the singular values of $A$ and the maximum singular value of $A$ respectively.
Define the \emph{special trace} of $A\in{\mathbb{R}}^{n\times n}$ to be
\[
\str(A) \coloneqq \max_{X \in {\mathbb{S}}O(n)} \ip{A, X}.
\]
This function is well-defined as ${\mathbb{S}}O(n)$ is compact.
Furthermore, $\str(\cdot)$ is convex and $1$-Lipschitz with respect to the trace norm.
This holds because $\str(\cdot)$ is defined as the pointwise maximum of the linear functions $A\mapsto\ip{A,X}$ over $X\in{\mathbb{S}}O(n)$, each of which is $1$-Lipschitz with respect to the trace norm since $\abs{\ip{A,X}}\leq\norm{A}_{\tr}\norm{X}_\textup{op}=\norm{A}_{\tr}$.
Finally, $\str(A)$ can be computed exactly given an SVD of $A$.
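Concretely, if $A=U\Sigma V^\intercal$ is an SVD with singular values $\sigma_1\geq\dots\geq\sigma_n\geq 0$, then (by a standard argument, recorded here for convenience, which reduces to the case of diagonal $\Sigma$)
\[
\str(A)=\sum_{i=1}^{n-1}\sigma_i+\det(UV^\intercal)\,\sigma_n,
\]
and a maximizer is $X^*=U\Diag(1,\dots,1,\det(UV^\intercal))V^\intercal$.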
\section{Feasibility problems on ${\mathbb{S}}O(n)$ with diagonal constraints}
\label{sec:feas_diag}
This section considers the feasibility problem
\begin{align}
\label{eq:feasibility_diagonal}
\set{X\in{\mathbb{S}}O(n):\, \diag(X)\in{\cal C}},
\end{align}
where ${\cal C}\subseteq{\mathbb{R}}^n$ is convex. We will assume that ${\cal C}$ has an efficient separation oracle.
Recall, the parity polytope is defined as
\begin{align*}
\textup{PP}_n \coloneqq \conv\set{x\in\set{\pm 1}^n:\, \prod_{i=1}^n x_i = 1}.
\end{align*}
\citet[Theorem 8]{horn1954doubly} shows that $\diag({\mathbb{S}}O(n))=\textup{PP}_n$.
As an immediate corollary, \eqref{eq:feasibility_diagonal} is feasible if and only if
$\textup{PP}_n\cap{\cal C}$ is nonempty.
\cref{app:separation_pp} shows how to efficiently separate from $\textup{PP}_n$. Combined with a separation oracle for ${\cal C}$, we may then run an ellipsoid-style algorithm for deciding feasibility of $\textup{PP}_n\cap{\cal C}$ (up to the usual errors).
Supposing that $d\in\textup{PP}_n\cap{\cal C}$ is found, it remains to see how to construct a witness $X\in{\mathbb{S}}O(n)$ with $\diag(X) = d$.
We will need the following description of $\textup{PP}_n$ given in \cite{Lancia2018,jeroslow1975defining}:
\[ \textup{PP}_{n} = \set{ x \in [-1,1]^{n} :\, \ip{x, 1_{n} - 2 \cdot 1_{S}} \leq n-2 ,\,\quad \forall \text{ odd } S \subseteq [n]}. \]
Here, $1_n$ is the all-ones vector, $1_S$ is the indicator vector of the set $S$ and $S$ is odd if $\abs{S}$ is odd.
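To illustrate how such a description can be used algorithmically, the following \texttt{numpy} sketch (ours, and not necessarily the routine of \cref{app:separation_pp}) searches for a violated inequality among the box and odd-set constraints above. For the odd-set constraints it suffices to check, for each odd $k$, the set of indices of the $k$ smallest entries of $x$.
\begin{verbatim}
import numpy as np

def separate_from_parity_polytope(x, tol=1e-9):
    # Sketch of a separation routine for PP_n based on the inequality
    # description above (ours; the appendix referenced earlier may use a
    # different method).  Returns a vector y with <y, x> > max_{p in PP_n} <y, p>
    # if x violates one of the listed inequalities, and None otherwise
    # (in which case x lies in PP_n by the description above).
    x = np.asarray(x, dtype=float)
    n = len(x)
    for i in range(n):                      # box constraints x in [-1, 1]^n
        if abs(x[i]) > 1 + tol:
            y = np.zeros(n)
            y[i] = np.sign(x[i])
            return y
    order = np.argsort(x)                   # odd-set constraints
    prefix = np.cumsum(x[order])
    best_k = min(range(1, n + 1, 2), key=lambda k: prefix[k - 1])
    y = np.ones(n)
    y[order[:best_k]] -= 2.0                # y = 1_n - 2 * 1_S for the worst odd S
    if float(y @ x) > n - 2 + tol:
        return y
    return None
\end{verbatim}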
We will also need the following constructive version of the Schur--Horn theorem (whose proof is essentially due to \cite{chan1983diagonal}).
\begin{lemma}
\label{lem:chan_li}
Given $c,\,d\in{\mathbb{R}}^n$ such that $c$ majorizes $d$, it is possible to construct a sequence of matrices
\begin{align*}
Q_1,\dots, Q_{n-1}\in{\mathbb{S}}O(n)
\end{align*}
in time $O(n\log n)$ satisfying
\begin{align*}
\diag\left(\left(\prod_{i=1}^{n-1} Q_i\right)^\intercal {\mathbb{D}}iag(c)\left(\prod_{i=1}^{n-1} Q_i\right)\right) = d.
\end{align*}
Furthermore, each $Q_i$ differs from the identity on only one principal $2 \times 2$ block, where it is a rotation matrix.
\end{lemma}
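We do not reproduce the algorithm of \cite{chan1983diagonal} here. The following toy \texttt{numpy} sketch (ours, not the full procedure) only illustrates the basic $2\times 2$ step on which such constructions are built: if $c_1\geq d\geq c_2$, then a plane rotation with $\cos^2(t) = (d-c_2)/(c_1-c_2)$ places $d$ in the $(1,1)$ position.
\begin{verbatim}
import numpy as np

def two_by_two_step(c1, c2, d):
    # Illustrative 2x2 building block (not the full Chan--Li algorithm):
    # assumes c1 >= d >= c2 and returns Q in SO(2) with
    # (Q^T diag(c1, c2) Q)[0, 0] == d.
    t = np.arccos(np.sqrt((d - c2) / (c1 - c2))) if c1 > c2 else 0.0
    return np.array([[np.cos(t), np.sin(t)],
                     [-np.sin(t), np.cos(t)]])

c1, c2, d = 1.0, -1.0, 0.25
Q = two_by_two_step(c1, c2, d)
print(np.allclose((Q.T @ np.diag([c1, c2]) @ Q)[0, 0], d))  # True
\end{verbatim}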
We are now ready to prove the following theorem.
\thmdiagonal*
\begin{proof}
We focus on the case of even $n$ for simplicity. The odd case follows analogously.
Let $m\coloneqq n/2$ and let $\theta_{1}, \dots, \theta_{m} \in [0,2\pi)$ be angles to be fixed later. Recall the definition of $R(\theta_1 ,\dots, \theta_k)$ from \eqref{eq:torus_even} and \eqref{eq:torus_odd}, and let $c\coloneqq\diag(R(\theta_1,\dots,\theta_m))$. If we can find $\theta_1,\dots,\theta_m$ so that $c$ majorizes $d$, then we can apply \cref{lem:chan_li} to produce the required element of ${\mathbb{S}}O(n)$ with diagonal $d$. Since the diagonal of $R(\theta_1,\dots,\theta_m)$ consists of consecutive equal pairs, we may equivalently pick $c_1=c_2,c_3=c_4,\dots$ arbitrarily in $[-1,1]$ and define $\theta_i = \arccos(c_{2i})$.
We will set $c_i$ as follows: Let $t \coloneqq \frac{1}{4} (n - \langle d, 1_{n} \rangle)$ and let $j-1 = \lfloor t \rfloor$ be the integer part and $\delta := t - \floor{t}$ be the fractional part of $t$. We set
\[ c_{1} = ... = c_{2(j-1)} = -1, \qquad c_{2j+1} = ... = c_{n} = 1 , \]
and the remaining elements we set as $c_{2j-1} = c_{2j} = 1-2\delta$. Note that $1-2\delta\in[-1,1]$ since the fractional part $\delta \in [0,1]$. Then, we have
\begin{align*}
\langle c, 1_{n} \rangle & = - 2 (j-1) + 2 (1-2\delta) + 2 (m-j)
\\ & = \frac{-2}{4} (n - \langle d, 1_{n} \rangle - 4 \delta ) + 2 (1-2\delta) + \frac{2}{4} (n + \langle d, 1_{n} \rangle - 4 (1-\delta))
= \langle d, 1_{n} \rangle ,
\end{align*}
where the second step was by our definition of $c$, and the third was by our choice of $j$.
Now we verify the majorization inequalities:
\begin{align*}
\forall k \leq 2(j-1)&\qquad \sum_{i=1}^{k} c_{i} = -k \leq \sum_{i=1}^{k} d_{i},\qquad\text{and}\\
\forall k \geq 2j + 1 &\qquad \sum_{i=k}^{n} c_{i} = (n-k+1) \geq \sum_{i=k}^{n} d_{i}.
\end{align*}
Here, the last step in both inequalities holds because $d \in [-1,1]^{n}$. Since $\langle c, 1_{n} \rangle = \langle d, 1_{n} \rangle$, the second set of inequalities is equivalent to
\[ \forall k \geq 2j\qquad \sum_{i=1}^{k} c_{i} \leq \sum_{i=1}^{k} d_{i} . \]
We now verify the final inequality for index $k := 2j-1$:
\begin{align*}
\sum_{i=1}^{k} c_{i} & = -2(j-1) + (1-2\delta) = 1 - \frac{1}{2} (n - \langle c, 1_{n} \rangle )
= \frac{1}{2} ( \langle d, 1_{n} \rangle - (n-2) ) \leq \sum_{i=1}^{k} d_{i} ,
\end{align*}
where the first step was by definition of $c$, in the second step we used that $j-1 = t-\delta$, in the third step we used that $\langle c, 1_{n} \rangle = \langle d, 1_{n} \rangle$, and the final step was by the defining inequalities of $\textup{PP}_{n}$.
Setting $\cos(\theta_{i}) = c_{2i}$, we have $R(\theta_{1}, ..., \theta_{m}) \in {\mathbb{S}}O(n)$ with diagonal $c$ majorizing $d$.
Now, apply \cref{lem:chan_li} to get a matrix $U = \prod_{i=1}^{n-1} Q_i \in{\mathbb{S}}O(n)$ such that
\begin{align*}
\diag(U^\intercal {\mathbb{D}}iag(c) U) = d.
\end{align*}
For notational convenience, write $R$ for $R(\theta_1,\dots,\theta_m)$. Then, $U^\intercal R U\in{\mathbb{S}}O(n)$ satisfies
\begin{align*}
\diag(U^\intercal R U) &= \diag\left(U^\intercal {\mathbb{D}}iag(c) U\right) + \diag\left(U^\intercal (R-{\mathbb{D}}iag(c))U\right)\\
&= \diag(U^\intercal {\mathbb{D}}iag(c)U) = d.
\end{align*}
Here, the second line follows as $R - {\mathbb{D}}iag(c)$ is skew symmetric so that $U^\intercal(R - {\mathbb{D}}iag(c))U$ must also be skew symmetric. In particular, $\diag(U^\intercal(R-{\mathbb{D}}iag(c))U) = 0$.
The time complexity follows from the fact that each $Q_i$ differs from the identity only in a principal $2 \times 2$ block, so all $n-1$ conjugations by $Q_1, ..., Q_{n-1}$ can be completed in $O(n^2)$ time. \ije{\qedhere}
\end{proof}
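The construction of $c$ in the proof above is straightforward to implement. The following \texttt{numpy} sketch (ours; it assumes $n$ even and $d\in\textup{PP}_n$, as in the proof) returns $c$ together with the angles $\theta_i = \arccos(c_{2i})$; feeding this $c$ and $R(\theta_1,\dots,\theta_m)$ into \cref{lem:chan_li} then yields the desired element of ${\mathbb{S}}O(n)$.
\begin{verbatim}
import numpy as np

def torus_diagonal_majorizing(d):
    # Sketch of the construction in the proof above (even n assumed):
    # builds c = diag(R(theta_1, ..., theta_m)) with <c, 1> = <d, 1> and
    # c majorizing d, for d in the parity polytope PP_n.
    d = np.asarray(d, dtype=float)
    n = len(d)
    t = (n - d.sum()) / 4.0
    j = int(np.floor(t)) + 1               # j - 1 = floor(t)
    delta = t - np.floor(t)
    c = np.ones(n)
    c[: 2 * (j - 1)] = -1.0
    c[2 * (j - 1): 2 * j] = 1.0 - 2.0 * delta
    thetas = np.arccos(c[1::2])            # theta_i = arccos(c_{2i})
    return c, thetas
\end{verbatim}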
\section{Optimization on ${\mathbb{S}}O(n)$ subject to one constraint}
\label{sec:son_one_constraint}
This section will discuss optimization of a linear function over ${\mathbb{S}}O(n)$ subject to a single (possibly two-sided) linear constraint:
\begin{align}
\label{eq:one_constraint_problem}
&\max_{X\in{\mathbb{S}}O(n)}\set{\ip{A,X}:\, \ip{B,X}\in[a,b]}.
\end{align}
We will provide a proof that for problems of the form \eqref{eq:one_constraint_problem}, the convex relaxation that replaces ${\mathbb{S}}O(n)$ with $\conv({\mathbb{S}}O(n))$ is exact.
Moreover, we will give a practical algorithm for this problem that runs in roughly the same time as the unconstrained optimization problem, i.e., in the time to compute an SVD.
The technical core of these results lies in the following theorem:
\thmoneconnected*
This theorem implies the following fact using the observation that a subset of ${\mathbb{R}}^2$ is convex if and only if its intersection with every one-dimensional affine subspace is connected.
\thmtwoconvex*
As we have seen, this fact implies that any optimization problem of the form \eqref{eq:one_constraint_problem} can be solved as a convex optimization problem on $\conv({\mathbb{S}}O(n))$. Thus, one could theoretically solve this problem with an exponentially sized SDP using \cref{thm:convSOn}.
Alternatively, we give an algorithm which can successfully solve any such optimization problem in $O(n^3\log^2(n))$ time.
In the case of ${\mathbb{S}}O(3)$, \cref{thm:twoconvex} can be viewed as a corollary of Brickman's theorem \cite{brickman1961field}, which states that the image of the unit sphere under a homogeneous quadratic map into ${\mathbb{R}}^2$ is always convex.
This, together with the fact that ${\mathbb{S}}O(3)$ is the image of a sphere under a quadratic map shows the result in that case (see also \cref{sec:quadratic_convexity}).
On the other hand, \cref{thm:twoconvex} does not follow directly from known quadratic convexity theorems for $n\geq 4$.
\subsection{Topological preliminaries for the proof of \cref{thm:oneconnected}}
The proof of \cref{thm:oneconnected} will require some topological techniques, which we review here.
We attempt to be as explicit as possible in our constructions and proofs to make them accessible to readers that are less familiar with such arguments.
As a general reference for (algebraic) topology, we refer to~\cite{MR1867354}.
As motivation, consider the (easy) problem of showing that any one-dimensional image of ${\mathbb{S}}O(n)$ is convex.
For this, note that ${\mathbb{S}}O(n)$ is a connected set.
Using the topological fact that the image of a connected set under any continuous map is connected, we can conclude that any one-dimensional image of ${\mathbb{S}}O(n)$ is connected.
Finally, as any connected set in ${\mathbb{R}}$ is an interval, this image must be convex.
In order to generalize this proof to two dimensions, we will need
a generalization of the topological component of this argument.
For this, it will be useful to know some basic definitions from topology/homotopy theory that we present below.
We encourage the reader to keep the following topological spaces in mind:
\begin{itemize}
\item ${\mathbb{S}}O(n)\subseteq{\mathbb{R}}^{n\times n}$, viewed as a topological subspace of ${\mathbb{R}}^{n\times n}$ with the standard topology.
\item The punctured plane ${\mathbb{R}}^2\setminus (0,0)$, viewed as a topological subspace of ${\mathbb{R}}^2$ with the standard topology.
\end{itemize}
Let $X$ be a topological space with a designated base point $x\in X$.
The fundamental group of $X$, denoted $\pi_1(X)$, is a group whose elements are (equivalence classes of) functions $\gamma : [0,1] \rightarrow X$ so that $\gamma(0) = \gamma(1) = x$.
We will refer to such functions as \emph{loops}.
We will say that two loops $\gamma_1$ and $\gamma_2$ are equivalent if there
exists a continuous function $T : [0,1] \times [0,1] \rightarrow X$ so that $T(0,t) = \gamma_1(t)$ and $T(1,t) = \gamma_2(t)$ for all $t \in [0,1]$.
We refer to such a $T$ as a \emph{homotopy}.
Intuitively, two loops $\gamma_1$ and $\gamma_2$ are equivalent if $\gamma_1$ can be continuously deformed into $\gamma_2$.
This set of (equivalence classes of) loops can be made into a group with the group operation being the concatenation of loops.
The identity element of $\pi_1(X)$ is represented by the constant loop given by $i(t) = x$ for all $t \in [0,1]$.
If $X$ is path connected, then the fundamental group is independent of the choice of basepoint---this will be the case for all topological spaces we consider.
For example, the fundamental group of the punctured plane ${\mathbb{R}}^2\setminus(0,0)$ is ${\mathbb{Z}}$. Explicitly, any loop in ${\mathbb{R}}^2\setminus(0,0)$ can be identified with the number of times it winds anticlockwise around the origin.
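As a purely numerical illustration of this identification (ours, not needed for any proof), the winding number of a discretized loop avoiding the origin can be estimated by accumulating signed angle increments; the example loop below is the reference loop that reappears in the proof of \cref{thm:oneconnected}.
\begin{verbatim}
import numpy as np

def winding_number(points):
    # Estimates the class in pi_1(R^2 \ {0}) ~ Z of a discretized loop
    # (points[0] == points[-1], no point equal to the origin) by summing
    # signed angle increments.
    angles = np.arctan2(points[:, 1], points[:, 0])
    inc = np.diff(angles)
    inc = (inc + np.pi) % (2 * np.pi) - np.pi    # wrap to (-pi, pi]
    return int(round(inc.sum() / (2 * np.pi)))

t = np.linspace(0.0, 1.0, 401)
circle = np.stack([np.sin(2 * np.pi * t), -np.cos(2 * np.pi * t)], axis=1)
print(winding_number(circle))  # 1
\end{verbatim}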
Less intuitively, we will also need the fact that for $n\geq 3$, the fundamental group of ${\mathbb{S}}O(n)$ is ${\mathbb{Z}}/2{\mathbb{Z}}$.
One key aspect of the fundamental group of topological spaces is that continuous maps of topological spaces give rise to group homomorphisms.
That is, if $f : X \rightarrow Y$ is a continuous map of topological spaces, then there is a map $f_* : \pi_1(X) \rightarrow \pi_1(Y)$ which is a group homomorphism.
This map is given by letting $(f_*(\gamma))(t) = f(\gamma(t))$. In other words, $f_*$ is simply composition with $f$.
This will be relevant to us in the following context:
\begin{lemma}
\label{lem:existencepoint}
Let $n \ge 3$.
Suppose $f : {\mathbb{S}}O(n) \rightarrow {\mathbb{R}}^2$ is a continuous map and $\gamma:[0,1]\to {\mathbb{S}}O(n)$ is a loop such that $(0,0)$ is not in the image $(f_*(\gamma))([0,1])$. In this case, we may view $f_*(\gamma)$ as a loop in ${\mathbb{R}}^2\setminus(0,0)$.
If $f_*(\gamma)$ is not equivalent to the identity element in $\pi_1({\mathbb{R}}^2\setminus(0,0))$, then $(0,0)\in f({\mathbb{S}}O(n))$.
\end{lemma}
\begin{proof}
Suppose for the sake of contradiction that $f({\mathbb{S}}O(n))$ does not contain $(0,0)$.
Then, $f$ is a continuous map between topological spaces ${\mathbb{S}}O(n)$ and ${\mathbb{R}}^2\setminus(0,0)$. The associated group homomorphism $f_*:\pi_1({\mathbb{S}}O(n))\to\pi_1({\mathbb{R}}^2\setminus(0,0))$ must be the $0$ map, since that is the only group homomorphism from ${\mathbb{Z}}/2{\mathbb{Z}}$ to ${\mathbb{Z}}$.
From this, it follows that $f_*(\gamma)$ is equivalent to the identity element in $\pi_1({\mathbb{R}}^2\setminus(0,0))$, a contradiction.\ije{\qedhere}
\end{proof}
By adding a translation term to $f$, we can apply \cref{lem:existencepoint} with an arbitrary point $\beta\in{\mathbb{R}}^2$ in place of the origin.
\cref{fig:proof_lem3} shows a cartoon of the proof strategy for \cref{lem:existencepoint}.
\cref{fig:projectiontorus} depicts a two-dimensional linear image of ${\mathbb{S}}O(3)$ together with a loop $\gamma$ in ${\mathbb{S}}O(3)$, and gives some intuition for how we will use \cref{lem:existencepoint} to prove \cref{thm:oneconnected}.
\begin{figure}
\caption{A cartoon of the proof of \cref{lem:existencepoint}.}
\label{fig:proof_lem3}
\end{figure}
\begin{figure}
\caption{The solid red area is the image of ${\mathbb{S}}O(3)$ under a two-dimensional linear map.}
\label{fig:projectiontorus}
\end{figure}
\subsection{Proof of \cref{thm:oneconnected}}
This subsection will contain a proof of \cref{thm:oneconnected}.
Let $H = \set{X\in{\mathbb{R}}^{n\times n}:\, \ip{A,X} = c}$.
We will require the following lemma that is proved at the end of this section.
\begin{lemma}
\label{lem:curve}
Let $U,V\in H$. Then, there exists a continuous function $\gamma : [0,1] \rightarrow {\mathbb{S}}O(n)$ with the following properties:
\begin{itemize}
\item $\gamma(0) = \gamma(1) = U$,
\item $\gamma(\frac{1}{2}) = V$,
\item either $\ip{A,\gamma(t)} = c$ for all $t \in (0,\frac{1}{2})$ or $\ip{A, \gamma(t)} > c$ for all $t\in(0,\frac{1}{2})$, and
\item either $\ip{A,\gamma(t)} = c$ for all $t \in (\frac{1}{2},1)$ or $\ip{A, \gamma(t)} < c$ for all $t\in(\frac{1}{2},1)$.
\end{itemize}
\end{lemma}
We are now ready to prove \cref{thm:oneconnected}.
\begin{proof}[Proof of \cref{thm:oneconnected}]
Suppose for the sake of contradiction that $H \cap {\mathbb{S}}O(n)$ is not connected, which by definition means that there exist nonempty closed sets ${\cal U}, {\cal V} \subseteq H \cap {\mathbb{S}}O(n)$ so that ${\cal U} \cap {\cal V} = \varnothing$, and ${\cal U} \cup {\cal V} = H \cap {\mathbb{S}}O(n)$.
As ${\cal U}$ is closed, the distance function
\begin{align*}
\dist_{\cal U}(X) \coloneqq \min_{U\in{\cal U}}\norm{U - X}_\textup{op}
\end{align*}
is well-defined. Let $\delta\coloneqq \min_{V\in{\cal V}}\dist_{\cal U}(V)$. As ${\cal U}$ and ${\cal V}$ are compact and disjoint, we have that $\delta>0$.
Define $f:{\mathbb{S}}O(n)\to{\mathbb{R}}^2$ given by
\begin{align*}
f(X) = \begin{pmatrix}
\ip{A,X} - c\\
\dist_{\cal U}(X) - \delta/2
\end{pmatrix}
\end{align*}
By assumption, there does not exist an $X\in{\mathbb{S}}O(n)$ such that $\ip{A,X} = c$ and $\dist_{\cal U}(X) = \delta/2$. In other words, $(0,0)\notin f({\mathbb{S}}O(n))$.
Fix $U\in{\cal U}$ and $V\in{\cal V}$ and let $\gamma$ denote the loop constructed by \cref{lem:curve}. Note that, since no connected subset of $H\cap{\mathbb{S}}O(n)$ can meet both ${\cal U}$ and ${\cal V}$, it must hold that $\ip{A,\gamma(t)}>c$ for all $t\in(0,1/2)$ and $\ip{A,\gamma(t)} < c$ for all $t\in (1/2,1)$.
We now verify that $f$ and $\gamma$ satisfy the assumptions of \cref{lem:existencepoint}.
To do so, we will exhibit a homotopy from $f_*(\gamma)$ to the loop $(\sin(2\pi t),-\cos(2\pi t))$.
Let
\begin{align*}
T(s,t) \coloneqq s f_*(\gamma)(t)+(1-s)(\sin(2 \pi t), -\cos(2 \pi t)).
\end{align*}
This is clearly a continuous function from $[0,1]\times[0,1]$ to ${\mathbb{R}}^2$. Thus, to verify that it is a valid homotopy in ${\mathbb{R}}^2\setminus(0,0)$ it remains to check that $T(s,t)\neq(0,0)$ for all $(s,t)\in[0,1]\times[0,1]$.
To see this, note that for $t \in (0,\frac{1}{2})$, we have $\ip{A,\gamma(t)} > c$ so that $f(\gamma(t))_1 > 0$.
Additionally, for all $t \in (0,\frac{1}{2})$, $\sin(2\pi t) > 0$.
Thus, $T(s,t)_1 \neq 0$ for all $t\in(0,\frac{1}{2})$ and $s\in[0,1]$.
Similar arguments show that $T(s,t)_1\neq 0$ for all $t\in(\frac{1}{2},1)$ and $s\in[0,1]$, and that
$T(s,t)_2\neq 0$ for all $t \in\set{0,\frac{1}{2},1}$ and $s\in[0,1]$.
Finally, \cref{lem:existencepoint} implies that $(0,0)\in f({\mathbb{S}}O(n))$, a contradiction. We conclude that $H\cap {\mathbb{S}}O(n)$ is connected.\ije{\qedhere}
\end{proof}
It remains to prove \cref{lem:curve}.
\begin{proof}[Proof of \cref{lem:curve}]
We begin with the case where $U = I$ and $V = R(\theta_1,\dots,\theta_k)$ for some $\theta_i\in[0,2\pi)$.
Let ${\mathbf{S}}^1$ denote the unit circle in ${\mathbb{R}}^2$, thought of as the points $[0,2\pi)$ where $0$ and $2\pi$ are identified.
Let ${\mathbb{T}}$ denote the set of all matrices $R(\phi_1, \dots, \phi_k)$ as $\phi_1,\dots,\phi_k$ range over ${\mathbf{S}}^1$.
Examining the entries of $R(\phi_1, \dots, \phi_k)$, we deduce that the expression
$\ip{A, R(\phi_1,\dots,\phi_k)}$ can be written as
\[
\ip{A, R(\phi_1, \dots, \phi_k)} = \sum_{i=1}^k c_i \ip{\begin{pmatrix}
\cos(\hat\phi_i)\\
\sin(\hat\phi_i)
\end{pmatrix}, \begin{pmatrix}
\cos(\phi_i)\\
\sin(\phi_i)
\end{pmatrix}}
\]
for some $c_i\geq 0$, and $\hat{\phi}_i \in {\mathbf{S}}^1$.
We will define $\phi_i(t)$ to be a continuous function $[0,1]\to{\mathbf{S}}^1$ where
$\phi_i(0) = 0$, $\phi_i(\frac{1}{4}) = \hat{\phi}_i$, $\phi_i(\frac{1}{2}) = \theta_i$, $\phi_i(\frac{3}{4}) = \hat{\phi}_i - \pi$, $\phi_i(1) = 0$.
It is not hard to verify that $\phi_i$ can be extended to a continuous function on $[0,1]$ such that
\begin{align*}
\ip{\begin{pmatrix}
\cos(\hat\phi_i)\\
\sin(\hat\phi_i)
\end{pmatrix}, \begin{pmatrix}
\cos(\phi_i)\\
\sin(\phi_i)
\end{pmatrix}}
\end{align*}
is an affine function of $t$ on each of the intervals $(0,\tfrac{1}{4})$, $(\tfrac{1}{4},\tfrac{1}{2})$, $(\tfrac{1}{2},\tfrac{3}{4})$, and $(\tfrac{3}{4},1)$.
We then define $\gamma(t) \coloneqq R(\phi_1(t), \phi_2(t), \dots, \phi_k(t))$.
We see that this loop satisfies all of the properties desired based on properties of the individual $\phi_i(t)$.
Now, consider the case where $U,\, V \in H$ are general.
By \cref{lem:max_torus} applied to $U^\intercal V$, there exists $W\in{\mathbb{S}}O(n)$ so that $W^\intercal U^\intercal VW = R(\theta_1 ,\dots, \theta_k)$ for some $\theta_i\in[0,2\pi)$.
Define $\tilde A \coloneqq W^\intercal U^\intercal A W$. Then,
\begin{align*}
\ip{\tilde A, I} = \ip{A,U}\qquad\text{and}\qquad
\ip{\tilde A, R(\theta_1,\dots,\theta_k)} = \ip{A,V}.
\end{align*}
Let $\gamma(t)$ denote the loop given by applying the preceding construction to $I,\, R(\theta_1,\dots,\theta_k)$ and $\tilde A$. Note that by definition of $W$ and $\tilde A$, we have
\begin{align*}
\ip{A, UW \gamma(t) W^\intercal} = \ip{\tilde A, \gamma(t)}.
\end{align*}
Thus, the loop $t\mapsto UW\gamma(t)W^\intercal$ satisfies the stated properties.\ije{\qedhere}
\end{proof}
\begin{figure}
\caption{Example of the construction of $\phi_i(t)$.}
\label{fig:proof_one_connected}
\end{figure}
\subsection{Algorithms for 1 Constraint Optimization}
Here, we aim to solve the optimization problem given in \eqref{eq:ndmax}.
\thmtwodalgo*
To state the algorithm we note that for any $A, B \in {\mathbb{R}}^{n\times n}$, \eqref{eq:ndmax} is equivalent to the following optimization problem
\begin{equation*}
\max_{x \in \pi({\mathbb{S}}O(n))} \{x_1 : x_2 \in [a,b]\}.
\end{equation*}
Here, $\pi({\mathbb{S}}O(n))$ is the image of ${\mathbb{S}}O(n)$ in ${\mathbb{R}}^2$ given by $\pi(X) = (\langle A, X\rangle, \langle B, X\rangle)$.
By \cref{thm:twoconvex}, we have that $\pi({\mathbb{S}}O(n))$ is convex, and we can apply standard methods from convex optimization to solve this problem.
For this, we appeal to the ellipsoid algorithm, as described in \cite{grotschel1981ellipsoid}.
If $C \subseteq {\mathbb{R}}^n$ is a compact convex set and $x \not \in C$, then there is a hyperplane that separates $x$ and $C$.
This \emph{separating hyperplane} is given by a nonzero vector $y \in {\mathbb{R}}^n$ so that $\langle y, x \rangle \geq \max \{\langle y, c \rangle : c \in C\}$.
An $\epsilon$-\emph{weak separation oracle} for $C$ is an oracle that, on an input $x \in {\mathbb{R}}^n$, either correctly declares $x \in C + {\mathbb{B}}_{\infty}(0,\epsilon)$, or outputs $y \in {\mathbb{R}}^n$ so that $y$ is a separating hyperplane between $x$ and $C$.
Here, ${\mathbb{B}}_{\infty}(a,r)$ is the ball of radius $r$ in the $L_{\infty}$ norm centered at $a$. The algorithmic equivalence between weak separation oracles and approximate optimization over convex sets is outlined in \cite{grotschel2012geometric}.
In ${\mathbb{R}}^2$, the ellipsoid algorithm provides the following guarantee.
\begin{theorem}
Suppose that $C \subseteq {\mathbb{R}}^2$ is given by an $\epsilon$-weak separation oracle, that we are given $R \in {\mathbb{R}}$ so that $C \subseteq {\mathbb{B}}_2(0,R)$, and that $C$ contains a ball of radius at least $\epsilon$.
There is an algorithm that optimizes a linear function with unit $L_2$ norm over $C$ within an additive error of $\epsilon$ using at most $O(\log(\frac{R}{\epsilon}))$ calls to the weak separation oracle.
\end{theorem}
Hence, to show \cref{thm:2dAlgo}, we only need to provide a weak separation oracle for the set $\pi({\mathbb{S}}O(n))$ that can run in time $O(n^3\log(\frac{\max\{\|A\|_{\tr}, \|B\|_{\tr}\}}{\epsilon}))$, where $n^3$ is the time required for a single SVD computation.
\begin{lemma}
Let $n\geq 3$ and $A,B\in{\mathbb{R}}^{n\times n}$ with $\norm{A}_{\tr} = \norm{B}_{\tr} = 1$.
There is a weak separation oracle for the set $\pi({\mathbb{S}}O(n))$ that runs in time $O(n^3\log(\frac{1}{\epsilon}))$.
\end{lemma}
\begin{proof}
Suppose we are given $A, B \in {\mathbb{R}}^{n \times n}$ and $x \in {\mathbb{R}}^2$.
If $\|x\|_\infty > 1 + \epsilon$, then in fact, $x \not \in \pi({\mathbb{S}}O(n)) + {\mathbb{B}}_{\infty}(0, \epsilon)$ as, by H\"older's inequality,
\[ X \in {\mathbb{S}}O(n) \implies \max\{ |\langle A, X \rangle|, |\langle B, X \rangle| \} \leq \|X\|_{op} \max\{ \norm{A}_{\tr}, \norm{B}_{\tr}\} \leq 1 , \]
where the last step was by our assumption $\norm{A}_{\tr} = \norm{B}_{\tr} = 1$. Therefore, in this case, we may immediately terminate with one of $(\pm1,0)$ or $(0,\pm1)$ as a separating hyperplane.
For the remainder, we assume that $\|x\|_\infty \leq 1 + \epsilon$.
A nonzero vector $y\in{\mathbb{R}}^2$ defines a separating hyperplane between $x$ and $\pi({\mathbb{S}}O(n))$ if and only if
\[
\langle y, x\rangle \geq \max_{X \in {\mathbb{S}}O(n)} \langle y, \pi(X)\rangle.
\]
Recalling the definition of $\str(\cdot)$ from \cref{sec:prelim}, the expression on the right can be written as
\[
\max_{X \in {\mathbb{S}}O(n)} \langle y, \pi(X)\rangle = \max_{X \in {\mathbb{S}}O(n)} \langle \pi^*(y), X\rangle = \str(\pi^*(y)).
\]
Define the function
\[
f(y) \coloneqq \str(\pi^*(y)) - \langle y, x\rangle.
\]
Thus, a nonzero $y \in {\mathbb{R}}^{2}$ defines a separating hyperplane if and only if $f(y) \leq 0$.
As $f$ is 1-homogeneous, such a $y$ exists if and only if one exists with $\|y\|_1 = 1$.
We will use the following lemma to complete the current proof.
\begin{lemma}
\label{lem:subroutine}
We can construct $\widehat{y}$ with $\norm{\widehat y}_1=1$ so that
\begin{align*}
f(\widehat y) - \epsilon \leq \min_y \{f(y) : \|y\|_1 = 1\}
\end{align*}
using at most $O\left(\log\left(\frac{1}{\epsilon}\right)\right)$ evaluations of $f$ and additional computations.
\end{lemma}
Suppose we have constructed such a $\widehat y$.
If $f(\widehat{y}) \leq 0$, then we may output $\widehat y$ as a separating hyperplane.
For the remainder of the proof, suppose $f(\widehat{y}) > 0$. By \cref{lem:subroutine} and $1$-homogeneity, $f(y) > -\epsilon$ for all $y \in {\mathbb{B}}_{1}(0,1)$.
We claim that $x \in \pi({\mathbb{S}}O(n)) + {\mathbb{B}}_{\infty}(0, \epsilon)$.
If, to the contrary, $x \not \in \pi({\mathbb{S}}O(n)) + {\mathbb{B}}_{\infty}(0, \epsilon)$, then by the separating hyperplane theorem, there would be some $y$ so that
\[
\langle y, x \rangle \geq \max \{\langle y, c+\delta\rangle : c \in \pi({\mathbb{S}}O(n)), \delta \in {\mathbb{B}}_{\infty}(0, \epsilon)\} =
\max \{\langle y, c\rangle : c \in \pi({\mathbb{S}}O(n))\} + \epsilon \|y\|_1.
\]
In particular, there would be some $y$ with $\|y\|_1 = 1$ such that
\[
f(y) = \str(\pi^*(y)) - \langle y, x\rangle \leq -\epsilon,
\]
which is a contradiction.\ije{\qedhere}
\end{proof}
\begin{proof}[Proof of \cref{lem:subroutine}]
We note that $\{y : \|y\|_1=1\}$ is a union of 4 line segments, so minimizing $f$ on this set can be done by minimizing the following 4 univariate functions on $[0,1]$:
\[
g_{\sigma_1\sigma_2}(\alpha) = f(\sigma_1 \alpha, \sigma_2 (1-\alpha)) = \str(\sigma_1\alpha A + \sigma_2(1-\alpha)B) - \sigma_1 \alpha x_1 - \sigma_2 (1-\alpha) x_2,
\]
indexed by $\sigma_1,\sigma_2\in\set{\pm1}$.
Each of the four functions $g_{\sigma_1\sigma_2}$ is a one-dimensional convex function with Lipschitz constant bounded by
\begin{align*}
\norm{A}_{\tr} + \norm{B}_{\tr} + \norm{x}_1 \leq 4+2\epsilon.
\end{align*}
For each $\sigma_1,\sigma_2\in\set{\pm 1}$, we may use golden section search \cite{kiefer1953sequential} to find a $\widehat\alpha_{\sigma_1\sigma_2}\in[0,1]$ such that
\begin{align*}
g_{\sigma_1\sigma_2}(\widehat\alpha_{\sigma_1\sigma_2}) \leq \min_{\alpha\in[0,1]}g_{\sigma_1\sigma_2}(\alpha) + \epsilon.
\end{align*}
Each application of golden section search requires
$O\left(\log\left(\frac{1}{\epsilon}\right)\right)$
evaluations of $g_{\sigma_1\sigma_2}$, or equivalently, evaluations of $f$.
\ije{\qedhere}
\end{proof}
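For concreteness, the following \texttt{numpy} sketch (ours) implements the subroutine of \cref{lem:subroutine}; \texttt{str\_fn} is assumed to evaluate $\str(\cdot)$, for instance from an SVD as in \cref{sec:prelim}, and the interval tolerance translates into a function-value error through the Lipschitz bound above.
\begin{verbatim}
import numpy as np

def golden_section_min(g, tol):
    # Golden-section search for a minimizer of a unimodal (here: convex) g on [0, 1].
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = 0.0, 1.0
    c, d = b - phi * (b - a), a + phi * (b - a)
    gc, gd = g(c), g(d)
    while b - a > tol:
        if gc <= gd:
            b, d, gd = d, c, gc
            c = b - phi * (b - a)
            gc = g(c)
        else:
            a, c, gc = c, d, gd
            d = a + phi * (b - a)
            gd = g(d)
    return 0.5 * (a + b)

def approx_min_over_l1_sphere(str_fn, A, B, x, eps):
    # Sketch of the subroutine lemma: approximately minimize
    #   f(y) = str(y1 * A + y2 * B) - <y, x>
    # over ||y||_1 = 1 by golden-section search on the four line segments.
    best_y, best_val = None, np.inf
    for s1 in (+1.0, -1.0):
        for s2 in (+1.0, -1.0):
            g = lambda a: (str_fn(s1 * a * A + s2 * (1 - a) * B)
                           - s1 * a * x[0] - s2 * (1 - a) * x[1])
            a_hat = golden_section_min(g, eps)  # O(log(1/eps)) evaluations of f
            val = g(a_hat)
            if val < best_val:
                best_y, best_val = np.array([s1 * a_hat, s2 * (1 - a_hat)]), val
    return best_y, best_val
\end{verbatim}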
\section{Optimization on ${\mathbb{S}}O(n)$ and ${\mathbb{O}}(n)$ with strict upper triangular constraints}
In this section, we consider optimization problems with strict upper triangular (SUT) constraints over ${\mathbb{S}}O(n)$ and ${\mathbb{O}}(n)$: Let $A\in{\mathbb{R}}^{n\times n}$ and let ${\cal C}$ be a nonempty closed convex subset of $\pi_{\cal T}({\mathbb{B}}_\textup{op}(n))$. We consider the problems
\begin{align}
&\sup_{X\in{\mathbb{S}}O(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X)\in {\cal C}} \tag{\ref{eq:opt_over_so_with_sut}}\\
&\qquad\leq \sup_{X\in{\mathbb{O}}(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X)\in {\cal C}}\tag{\ref{eq:opt_over_o_with_sut}}\\
&\qquad\leq
\max_{X\in{\mathbb{B}}_\textup{op}(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X)\in {\cal C}}. \tag{\ref{eq:opt_over_bop_with_sut}}
\end{align}
where the last inequality is because ${\mathbb{B}}_\textup{op}(n)$ is the convex hull of ${\mathbb{O}}(n)$.
Here, we define the values of \labelcref{eq:opt_over_so_with_sut,eq:opt_over_o_with_sut} to be $-\infty$ whenever they are infeasible. Nonetheless, by compactness, \labelcref{eq:opt_over_so_with_sut,eq:opt_over_o_with_sut} both achieve their maxima as long as they are feasible.
The following theorem is the main result of this section and shows that one or both of these inequalities hold at equality for certain choices of $A$.
\thmoptsosut*
We will prove \cref{thm:opt_so_with_sut} in \cref{subsec:proof_opt_so_with_sut}. As a byproduct of the proof, we will also see a numerical method for constructing optimizers of \eqref{eq:opt_over_so_with_sut} or \eqref{eq:opt_over_o_with_sut} from an optimizer of the convex program \eqref{eq:opt_over_bop_with_sut} by solving an SDP.
In \cref{subsec:rank_one_coordinate}, we verify that our results in this section can be applied to problems with few coordinate constraints or rank-one constraints.
Before moving on, we note two hidden convexity properties implied by \cref{thm:opt_so_with_sut}.
\thmfeasibility*
\begin{proof}
It is clear that $\pi_{\cal T}({\mathbb{S}}O(n))\subseteq\pi_{\cal T}({\mathbb{O}}(n))\subseteq\pi_{\cal T}({\mathbb{B}}_\textup{op}(n))$. Now, let $\sigma\in\pi_{\cal T}({\mathbb{B}}_\textup{op}(n))$ and set $A = I$ and ${\cal C}=\set{\sigma}$. \cref{thm:opt_so_with_sut} implies the feasibility of \eqref{eq:opt_over_so_with_sut}, i.e., $\sigma\in\pi_{\cal T}({\mathbb{S}}O(n))$, whence $\pi_{\cal T}({\mathbb{B}}_\textup{op}(n))\subseteq\pi_{\cal T}({\mathbb{S}}O(n))$.\ije{\qedhere}
\end{proof}
\begin{corollary}
\label{thm:hidden_convexity_so_with_sut}
Let $A\in{\mathbb{R}}^{n\times n}$ be a diagonal matrix. Then
\begin{align*}
\set{\begin{pmatrix}
\ip{A,X} - \gamma\\
\pi_{\cal T}(X)
\end{pmatrix}:\, \begin{array}{l}
X\in{\mathbb{O}}(n)\\
\gamma \geq 0
\end{array}} =
\set{\begin{pmatrix}
\ip{A,X} - \gamma\\
\pi_{\cal T}(X)
\end{pmatrix}:\,
\begin{array}{l}
X\in{\mathbb{B}}_\textup{op}(n)\\
\gamma \geq 0
\end{array}}.
\end{align*}
If additionally $\det(A)\geq 0$, then
\begin{align*}
\set{\begin{pmatrix}
\ip{A,X} - \gamma\\
\pi_{\cal T}(X)
\end{pmatrix}:\, \begin{array}{l}
X\in{\mathbb{S}}O(n)\\
\gamma \geq 0
\end{array}} =
\set{\begin{pmatrix}
\ip{A,X} - \gamma\\
\pi_{\cal T}(X)
\end{pmatrix}:\,
\begin{array}{l}
X\in{\mathbb{B}}_\textup{op}(n)\\
\gamma \geq 0
\end{array}}.
\end{align*}
\end{corollary}
\begin{proof}
We prove only the first claim as the second is proved analogously.
For convenience, let ${\cal L}$ and ${\cal R}$ denote the left- and right-hand side sets in the first claim.
As ${\mathbb{O}}(n)\subseteq{\mathbb{B}}_\textup{op}(n)$, we have that ${\cal L}\subseteq{\cal R}$. Now, suppose $(v,\sigma)\in{\cal R}$. Let
\begin{align*}
v' = \max_{X\in{\mathbb{B}}_\textup{op}(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X) = \sigma}.
\end{align*}
By definition, $v'\geq v$. Next, by \cref{thm:opt_so_with_sut}, there exists $X\in{\mathbb{O}}(n)$ such that $\pi_{\cal T}(X) = \sigma$ and $\ip{A,X} = v'$. Then $(v',\sigma)\in{\cal L}$. As ${\cal L}$ is closed downwards, $(v,\sigma)\in{\cal L}$. As $(v,\sigma)\in{\cal R}$ was arbitrary, we conclude ${\cal R}\subseteq{\cal L}$.\ije{\qedhere}
\end{proof}
\subsection{Proof of \cref{thm:opt_so_with_sut}}
\label{subsec:proof_opt_so_with_sut}
We begin by proving the following special case of \cref{thm:opt_so_with_sut}.
\begin{proposition}
\label{prop:opt_so_with_sut_generic}
Let $A\in{\mathbb{R}}^{n\times n}$ be a diagonal matrix with $\det(A)\neq 0$ and let $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$. Then,
\begin{align}
\label{eq:opt_bop_generic}
\max_{X\in{\mathbb{B}}_\textup{op}(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X)=\sigma}
\end{align}
has a unique optimizer $\hat X$. It holds that $\hat X\in{\mathbb{O}}(n)$. If additionally $\det(A)>0$, then $\hat X\in{\mathbb{S}}O(n)$.
\end{proposition}
\begin{proof}
First, note that ${\mathbb{B}}_\textup{op}(n)$ is full-dimensional in ${\mathbb{R}}^{n \times n}$ so its projection $\pi_{\cal T}({\mathbb{B}}_\textup{op}(n))$ is also full-dimensional. Thus, $\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n))) = \pi_{\cal T}(\inter({\mathbb{B}}_\textup{op}(n)))$ and \eqref{eq:opt_bop_generic} is strictly feasible. We deduce that strong duality and dual attainability hold in the following primal and dual problems
\begin{align*}
&\max_{X\in{\mathbb{B}}_\textup{op}(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X) = \sigma}\\
&\qquad=\min_{Y\in{\mathbb{R}}^{n\times n},\,\lambda\in{\mathbb{R}}^{\binom{n}{2}}}\set{\ip{\sigma,\lambda} + \norm{Y}_{\tr} :\, Y + \pi_{\cal T}^*(\lambda) = A}.
\end{align*}
Now, let $(\hat Y,\hat\lambda)$ optimize the dual problem. Note that $\hat Y = A - \pi^*_{\cal T}(\hat\lambda)$ is upper triangular with $\diag(A)$ on its diagonal, so $\det(\hat{Y}) = \det(A) \neq 0$ by assumption, i.e., $\rank(\hat Y) = n$.
Let $\hat X\in{\mathbb{B}}_\textup{op}(n)$ be an arbitrary maximizer of \eqref{eq:opt_bop_generic}, which exists by compactness of ${\mathbb{B}}_\textup{op}(n)$. By strong duality,
\begin{align*}
\norm{\hat Y}_{\tr} + \ip{\sigma,\hat\lambda} = \ip{A,\hat X} = \ip{\hat Y + \pi_{\cal T}^*(\hat\lambda), \hat X} = \ip{\hat Y,\hat X} +\ip{\sigma,\hat\lambda}.
\end{align*}
Thus, $\norm{\hat Y}_{\tr} = \ip{\hat Y,\hat X}$. Let $U\Sigma V^\intercal = \hat Y$ be an SVD of $\hat Y$. Then, $\tr(\Sigma) = \norm{\hat Y}_{\tr} = \ip{\hat Y, \hat X} = \ip{\Sigma, U^\intercal \hat X V}$. Noting that $U^\intercal \hat X V\in {\mathbb{B}}_\textup{op}(n)$ and that $\Sigma$ has only positive diagonal entries, we deduce that $U^\intercal \hat X V = I$ so that $\hat X = UV^\intercal\in {\mathbb{O}}(n)$. This proves the first claim.
Now, suppose $\det(A) > 0$. Then, $\det(\hat Y) = \det(A - \pi^*_{\cal T}(\hat\lambda)) = \det(A) >0$. Thus, $\det(\hat X) = \det(U)\det(V^\intercal) = \frac{\det(\hat Y)}{\det(\Sigma)}>0$. We conclude that $\hat X\in{\mathbb{S}}O(n)$.\ije{\qedhere}
\end{proof}
We may now prove \cref{thm:opt_so_with_sut} in full generality.
\begin{proof}
[Proof of \cref{thm:opt_so_with_sut}]
Let $\hat X$ be an optimizer of \eqref{eq:opt_over_bop_with_sut} and set $\sigma = \pi_{\cal T}(\hat X)$.
Now, let $\epsilon\in(0,1]$ and define $\sigma_\epsilon \coloneqq (1-\epsilon)\sigma$.
If $\det(A)\neq 0$, then define $A_\epsilon \coloneqq A$. Otherwise, construct $A_\epsilon\in{\mathbb{R}}^{n\times n}$ by setting each zero diagonal entry of $A$ to $\pm\tfrac{\epsilon}{n}$ in such a way that $\det(A_\epsilon) > 0$.
Then, applying \cref{prop:opt_so_with_sut_generic} with $A_\epsilon$ and $\sigma_\epsilon$, there exists $X_\epsilon\in{\mathbb{O}}(n)$ satisfying
\begin{align}
\label{eq:X_epsilon_objective_value}
\ip{A,X_\epsilon} \geq \ip{A_\epsilon,X_\epsilon} - \epsilon \geq \ip{A_\epsilon, \hat X} - \epsilon \geq \ip{A,\hat X} - 2\epsilon
\end{align}
and
\begin{align}
\label{eq:X_epsilon_constraint}
\pi_{\cal T}(X_\epsilon) = (1-\epsilon)\sigma.
\end{align}
Next, consider a sequence $\set{\epsilon_k}\subseteq(0,1]$ converging to zero and the corresponding sequence $\set{X_{\epsilon_k}}\subseteq {\mathbb{O}}(n)$. As ${\mathbb{O}}(n)$ is compact, $\set{X_{\epsilon_k}}$ has a subsequential limit $\tilde X\in{\mathbb{O}}(n)$. By continuity, we have that $\ip{A,\tilde X} \geq \ip{A,\hat X}$ and $\pi_{\cal T}(\tilde X) = \sigma \in {\cal C}$. We deduce that equality holds between \eqref{eq:opt_over_o_with_sut} and \eqref{eq:opt_over_bop_with_sut}.
Finally, suppose $\det(A)\geq 0$ so that $\det(A_\epsilon)>0$. Then, the sequence $\set{X_{\epsilon_k}}$ lies in ${\mathbb{S}}O(n)$ so that the subsequential limit $\tilde X$ may also be taken to live in ${\mathbb{S}}O(n)$.\ije{\qedhere}
\end{proof}
\begin{remark}
\label{rem:recover_so_n_opt}
The proof of \cref{thm:opt_so_with_sut} suggests a numerical method for recovering an optimizer of \eqref{eq:opt_over_o_with_sut} from an optimizer, $\hat X$, of \eqref{eq:opt_over_bop_with_sut}. Let $\epsilon>0$ be some small numerical parameter and let $A_\epsilon,\, \sigma_\epsilon$ be as defined in the proof of \cref{thm:opt_so_with_sut}. Then, the unique maximizer of
\begin{align*}
\max_{X\in{\mathbb{B}}_\textup{op}(n)}\set{\ip{A_\epsilon,X}:\, \pi_{\cal T}(X) = \sigma_\epsilon}
\end{align*}
is guaranteed to lie in ${\mathbb{O}}(n)$ and is both approximately optimal and approximately feasible in the sense of \labelcref{eq:X_epsilon_objective_value,eq:X_epsilon_constraint}.
Alternatively, we may shortcut solving two separate convex optimization problems by preemptively replacing $A$ and ${\cal C}$ with $A_\epsilon$ and ${\cal C}_\epsilon$, in such a way that guarantees $\det(A_\epsilon)\neq 0$ and ${\cal C}_\epsilon\subseteq\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$. With these perturbed sets, \cref{prop:opt_so_with_sut_generic} guarantees that any optimizer of
\begin{align*}
\max_{X\in{\mathbb{B}}_\textup{op}(n)}\set{\ip{A_\epsilon,X}:\, \pi_{\cal T}(X) \in {\cal C}_\epsilon}
\end{align*}
lies in ${\mathbb{O}}(n)$. Analogous statements hold for the ${\mathbb{S}}O(n)$ setting.
\end{remark}
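As an illustration of the first recovery strategy, the following \texttt{cvxpy} sketch (ours) solves the perturbed convex program; here \texttt{sigma\_eps\_mat} is an $n\times n$ array whose strict upper triangle stores $\sigma_\epsilon$, and the returned matrix lies in ${\mathbb{O}}(n)$ by \cref{prop:opt_so_with_sut_generic} provided $\det(A_\epsilon)\neq 0$ and $\sigma_\epsilon$ lies in the required interior.
\begin{verbatim}
import cvxpy as cp

def recover_orthogonal_optimizer(A_eps, sigma_eps_mat):
    # Sketch of the recovery step described in the remark above: solve
    #   max <A_eps, X>  s.t.  ||X||_op <= 1 and the strict upper triangle of X
    # is fixed to sigma_eps.  Under the assumptions discussed above, the unique
    # optimizer lies in O(n).
    n = A_eps.shape[0]
    X = cp.Variable((n, n))
    constraints = [cp.sigma_max(X) <= 1]
    for i in range(n):
        for j in range(i + 1, n):
            constraints.append(X[i, j] == sigma_eps_mat[i, j])
    problem = cp.Problem(cp.Maximize(cp.sum(cp.multiply(A_eps, X))), constraints)
    problem.solve()
    return X.value
\end{verbatim}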
\subsection{Applications to low-rank constraints}
\label{subsec:rank_one_coordinate}
The following proposition shows that optimization and feasibility problems over ${\mathbb{S}}O(n)$ with convex constraints on the values of $\ip{B_i,X}$, where $B_i\in{\mathbb{R}}^{n\times n}$ are low-rank matrices, can be seen as a special case of SUT constraints after a reparameterization of ${\mathbb{S}}O(n)$.
\begin{proposition}
\label{prop:low_rank}
Let $k \le n-1$.
Fix $u_1, \dots, u_k \in {\mathbb{R}}^n$, and also fix $v_1, \dots, v_k \in {\mathbb{R}}^n$.
For $i = 1, \dots, m$, let $B_i = \sum_{j=1}^{k} \beta_{ij} u_{j}v_{j}^\intercal$, for some $\beta_{i,j} \in {\mathbb{R}}$.
Let ${\cal B}(X)\coloneqq \begin{pmatrix}
\ip{B_i,X} \end{pmatrix}_{i\in[m]}$.
Then ${\cal B}({\mathbb{S}}O(n))$ is convex.
\end{proposition}
This tells us that we may decide feasibility of problems of the form ${\cal B}({\mathbb{S}}O(n))\cap {\cal C}$ for compact convex ${\cal C}$ via a simple SDP.
As an example, \cref{prop:low_rank} applies if $m\leq n -1$ and $B_i$ are each rank one as in the situation of \cref{subsec:motivation}.
In order to show this proposition, we will need a lemma.
\begin{lemma}
\label{prop:rank_one}
Let $\set{u_1,\dots, u_{n-1}}\subseteq{\mathbb{R}}^n$ and $\set{v_1,\dots, v_{n-1}}\subseteq{\mathbb{R}}^n$. Then, there exist $U,V\in{\mathbb{S}}O(n)$ such that for all $i\in[n-1]$, $v_i^\intercal (V^\intercal X U) u_i$ depends only on the strict upper triangular entries of $X$.
\end{lemma}
\begin{proof}
It follows from the existence of QR decompositions that we may upper triangularize the $\set{u_i}$ with a special orthogonal matrix, i.e., there exists $U\in{\mathbb{S}}O(n)$ such that $\textrm{supp}(U u_i)\subseteq [1,i]$ for each $i \in [n-1]$.
Similarly, we may lower triangularize the $\set{v_i}$ with a special orthogonal matrix, i.e., there exists $V\in{\mathbb{S}}O(n)$ such that $\textrm{supp}(V v_i)\subseteq[i + 1, n]$.
Then
\begin{align*}
\textrm{supp}(Uu_i v_i^\intercal V^\intercal) \subseteq [1,i]\times [i+1,n].
\end{align*}
Thus $(Vv_i)^\intercal X (Uu_i)$ depends only on the strictly upper triangular entries of $X$.
\end{proof}
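The construction in the proof can be carried out with complete QR factorizations; the following \texttt{numpy} sketch (ours) returns candidate matrices $U$ and $V$ with the stated support properties, fixing the determinants by sign flips of rows that do not affect those supports.
\begin{verbatim}
import numpy as np

def triangularizing_rotations(us, vs):
    # Sketch (ours) of the QR-based construction in the proof above: given lists
    # us = [u_1, ..., u_{n-1}] and vs = [v_1, ..., v_{n-1}] of vectors in R^n,
    # return U, V in SO(n) with supp(U u_i) in {1,...,i} and
    # supp(V v_i) in {i+1,...,n}.
    Qu, _ = np.linalg.qr(np.column_stack(us), mode='complete')
    U = Qu.T.copy()
    if np.linalg.det(U) < 0:
        U[-1, :] *= -1.0   # entry n of U u_i is zero for i <= n-1: supports unchanged
    Qv, _ = np.linalg.qr(np.column_stack(vs[::-1]), mode='complete')
    V = Qv.T[::-1, :].copy()   # reversing rows turns the upper triangular pattern lower
    if np.linalg.det(V) < 0:
        V[0, :] *= -1.0    # entry 1 of V v_i is zero for every i: supports unchanged
    return U, V
\end{verbatim}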
We now show \cref{prop:low_rank}.
\begin{proof}[Proof of \cref{prop:low_rank}]
Note that ${\cal B}({\mathbb{S}}O(n))$ is a
linear image of the set \begin{align}
\label{eq:ro_example_2}
\set{\begin{pmatrix}
v_j^\intercal Xu_j \end{pmatrix}_{j\in[k]}:\, X\in{\mathbb{S}}O(n)}.
\end{align}
Let $U,V\in{\mathbb{S}}O(n)$ denote the matrices guaranteed by \cref{prop:rank_one}, then \eqref{eq:ro_example_2} is equivalent to
\begin{align}
\label{eq:ro_example_3}
\set{\begin{pmatrix}
v_j^\intercal (V^\intercal XU)u_j \end{pmatrix}_{j\in[k]}:\, X\in{\mathbb{S}}O(n)}
\end{align}
by the fact that ${\mathbb{S}}O(n)= V^\intercal {\mathbb{S}}O(n)U$.
Finally, by the assumed properties of $U$ and $V$, we have that \eqref{eq:ro_example_3} is a linear image of $\pi_{\cal T}({\mathbb{S}}O(n))$ so that ${\cal B}({\mathbb{S}}O(n))$ is convex.
\end{proof}
\section{Explicit constructions for elements of ${\mathbb{S}}O(n)$ with fixed strictly upper triangular entries}
\label{sec:utconstructions}
This section gives full characterizations and explicit constructions for $\pi_{\cal T}^{-1}(\sigma)\cap {\mathbb{S}}O(n)$ and $\pi_{\cal T}^{-1}(\sigma)\cap {\mathbb{O}}(n)$, where ${\cal T}$ is the strictly upper triangular coordinates in ${\mathbb{R}}^{n \times n}$, for $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$. This will allow us to extend \cref{thm:opt_so_with_sut} to an approximation result in the remaining setting $\det(A)<0$.
We overload notation below. Given $A\in{\mathbb{S}}^n_+$, let
\begin{align*}
{\mathbb{O}}(A) &\coloneqq \set{X\in{\mathbb{R}}^{n\times n}:\, X^\intercal X = A}\\
{\mathbb{B}}_\textup{op}(A) &\coloneqq \set{X\in{\mathbb{R}}^{n\times n}:\, X^\intercal X \preceq A}.
\end{align*}
Note that ${\mathbb{O}}(I_n) = {\mathbb{O}}(n)$ and ${\mathbb{B}}_\textup{op}(I_n) = {\mathbb{B}}_\textup{op}(n)$. Furthermore, if $A\in{\mathbb{S}}^n_{++}$, then
\begin{align*}
\inter({\mathbb{B}}_\textup{op}(A)) = \set{X\in{\mathbb{R}}^{n\times n}:\, X^\intercal X \prec A}
\end{align*}
is full-dimensional. Thus, $\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(A))) = \pi_{\cal T}(\inter({\mathbb{B}}_\textup{op}(A)))$.
We will require the following technical lemma.
\begin{lemma}
\label{lem:complete_OA_from_submatrix}
Let $A\in{\mathbb{S}}^n_{++}$ and $\tilde U\in{\mathbb{R}}^{n\times (n-1)}$.
Suppose $\tilde U^\intercal \tilde U = A_{2,2}$, the bottom right $(n-1)\times (n-1)$ submatrix of $A$.
Suppose also that the bottom $(n-1)\times(n-1)$ submatrix of $\tilde U$ has full rank.
Then, there exist exactly two choices of $u\in {\mathbb{R}}^n$ such that
\begin{align*}
U = \begin{pmatrix}
u & \tilde U
\end{pmatrix}\in {\mathbb{O}}(A)
\end{align*}
and the two choices of $u$ differ on their first coordinates. Furthermore, given $A_{2,2}^{-1}$, the two choices of $u$ can be computed in $O(n^2)$ time.
\end{lemma}
\begin{proof}
Expanding the definition of $U$, we have that $U\in {\mathbb{O}}(A)$ if and only if
\begin{align*}
\begin{pmatrix}
u^\intercal u & u^\intercal \tilde U\\
\tilde U^\intercal u & \tilde U^\intercal \tilde U
\end{pmatrix} = \begin{pmatrix}
A_{1,1} & A_{1,2}\\
A_{2,1} & A_{2,2}
\end{pmatrix},
\end{align*}
i.e., if and only if $\norm{u}^2 = A_{1,1}$ and $\tilde U ^\intercal u = A_{2,1}$.
We decompose $\tilde U^\intercal$ as $\tilde U^\intercal = \begin{pmatrix}
\hat u &
\hat U^\intercal
\end{pmatrix}
$,
where $\hat u\in{\mathbb{R}}^{n-1}$ and $\hat U\in {\mathbb{R}}^{(n-1)\times (n-1)}$. By assumption, $\hat U$ is invertible.
Thus, $\ker(\tilde U^\intercal)$ is one-dimensional and spanned by the vector
\begin{align*}
z \coloneqq \begin{pmatrix}
1\\
-\hat U^{-\intercal}\hat u
\end{pmatrix}.
\end{align*}
Next, note that $u_0\coloneqq \tilde U A_{2,2}^{-1} A_{2,1}$ satisfies $\tilde U^\intercal u_0 = A_{2,1}$.
Thus, $u_0 + tz$ parameterizes the solutions of $\tilde U^\intercal u = A_{2,1}$.
Note that $u_0$ has squared norm
\begin{align*}
\norm{u_0}^2 = A_{2,1}^\intercal A_{2,2}^{-1}(\tilde U^\intercal\tilde U) A_{2,2}^{-1} A_{2,1}=A_{1,2} A_{2,2}^{-1} A_{2,1} < A_{1,1} ,
\end{align*}
where the last inequality follows by the Schur complement lemma and the assumption that $A\in{\mathbb{S}}^n_{++}$.
We deduce that the quadratic equation $\norm{u_0 + t z}^2 = A_{1,1}$ in $t$ has exactly two solutions.
In other words,
there are exactly two choices of $u\in{\mathbb{R}}^n$ such that $U\in{\mathbb{O}}(A)$.
Then, as $z_1 = 1$, we have that the two possible choices of $u$ differ in their first coordinates.
We now turn to the time complexity. Note that $A_{2,2} = \tilde U^\intercal \tilde U = \hat u \hat u^\intercal + \hat U^\intercal \hat U$. Thus, $(\hat U^\intercal \hat U)^{-1} = (A_{2,2} - \hat u\hat u^\intercal)^{-1}$. This quantity can be computed in $O(n^2)$ time given $A_{2,2}^{-1}$ using the Sherman--Morrison formula. Then, the quantity $-\hat U^{-\intercal}\hat u$ can be written as
\begin{align*}
-\hat U^{-\intercal}\hat u &= - \hat U\left(\hat U^{-1} \hat U^{-\intercal}\right) \hat u\\
&= - \hat U(A_{2,2} - \hat u\hat u^\intercal)^{-1} \hat u.
\end{align*}
We deduce that the quantities $u_0$ and $z$ can both be computed in $O(n^2)$ time. Finally, computing the two choices of $t$ can also be done within this time limit.
\ije{\qedhere}
\end{proof}
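The construction in the proof is also easy to implement directly, without the Sherman--Morrison speedup; the following \texttt{numpy} sketch (ours) returns the two possible completions under the hypotheses of \cref{lem:complete_OA_from_submatrix}.
\begin{verbatim}
import numpy as np

def complete_column(U_tilde, A):
    # Direct sketch (not the O(n^2) variant) of the construction in the proof:
    # returns the two vectors u with [u, U_tilde] in O(A), assuming A positive
    # definite, U_tilde^T U_tilde = A[1:, 1:], and an invertible bottom block.
    A11, A21, A22 = A[0, 0], A[1:, 0], A[1:, 1:]
    u_hat = U_tilde[0, :]        # first column of U_tilde^T
    U_hat = U_tilde[1:, :]       # bottom (n-1) x (n-1) block
    z = np.concatenate(([1.0], -np.linalg.solve(U_hat.T, u_hat)))
    u0 = U_tilde @ np.linalg.solve(A22, A21)
    # Solve ||u0 + t z||^2 = A11 for t (two real roots by the Schur complement bound).
    a, b, c = z @ z, 2 * (u0 @ z), u0 @ u0 - A11
    disc = np.sqrt(b * b - 4 * a * c)
    return [u0 + t * z for t in ((-b + disc) / (2 * a), (-b - disc) / (2 * a))]
\end{verbatim}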
\begin{remark} \label{r:upperTriContinuity}
The output of the construction in \cref{lem:complete_OA_from_submatrix} is continuous in $\tilde U$ and $A$ wherever it is defined. Formally, there are two continuous functions $u_1$ and $u_2$ from
\begin{align*}
\set{(\tilde U, A)\in{\mathbb{R}}^{n\times (n-1)}\times {\mathbb{S}}^n:\, \begin{array}{l}
A\in{\mathbb{S}}^n_{++}\\
\tilde U^\intercal \tilde U = A_{2,2}\\
\hat U\text{ is invertible}
\end{array}}
\end{align*}
to ${\mathbb{R}}^n$ that track the two possible choices of $u$ in \cref{lem:complete_OA_from_submatrix}.
This follows from continuity of $z$, $u_0$, and the coefficients in the quadratic equation $\norm{u_0 + tz}^2 = A_{1,1}$ in the proof of \cref{lem:complete_OA_from_submatrix}.
\end{remark}
The following theorem provides a parameterized construction for the entire set $\pi_{\cal T}^{-1}(\sigma)\cap {\mathbb{O}}(n)$.
\begin{proposition}
\label{thm:triangle_general_A}
Let $A\in{\mathbb{S}}^n_{++}$ and $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(A)))$. Then, $\abs{\pi_{\cal T}^{-1}(\sigma)\cap {\mathbb{O}}(A)}=2^n$.
Furthermore, no two matrices in $\pi_{\cal T}^{-1}(\sigma)\cap {\mathbb{O}}(A)$ agree on all of their diagonal entries.
\end{proposition}
\begin{proof}
We will induct on $n$. The claim is immediate for $n = 1$, thus assume $n\geq 2$.
Let $X\in\inter({\mathbb{B}}_\textup{op}(A))$ satisfy $\sigma=\pi_{\cal T}(X)$.
Partition $X$ and $A$ as
\begin{align*}
X = \begin{pmatrix}
\xi & x^\intercal\\
\bar x & X_{2,2}
\end{pmatrix},\qquad
A = \begin{pmatrix}
\alpha & a^\intercal\\
a & A_{2,2}
\end{pmatrix}.
\end{align*}
As $X\in\inter({\mathbb{B}}_\textup{op}(A))$, we have that $X^\intercal X \prec A$ so that
\begin{align*}
A_{2,2} \succ xx^\intercal + X_{2,2}^\intercal X_{2,2}.
\end{align*}
Thus, $X_{2,2} \in \inter({\mathbb{B}}_\textup{op}(A_{2,2} - xx^\intercal))$ and $A_{2,2} - xx^\intercal \in{\mathbb{S}}^{n-1}_{++}$. By induction, there exist exactly $2^{n-1}$ matrices $U_{2,2}\in {\mathbb{O}}(A_{2,2}- xx^\intercal)$ matching the strictly upper triangular entries of $X_{2,2}$.
For each of these choices, the matrix $U_{2,2}$ has rank $n-1$.
Define $\tilde U \in{\mathbb{R}}^{n\times (n-1)}$ as
\begin{align*}
\tilde U = \begin{pmatrix}
x^\intercal\\
U_{2,2}
\end{pmatrix}.
\end{align*}
Note that $\tilde U^\intercal \tilde U = xx^\intercal + U_{2,2}^\intercal U_{2,2} = A_{2,2}$.
By \cref{lem:complete_OA_from_submatrix}, for each choice of $\tilde U$, there are exactly two ways to append a column to the left of $\tilde U$ to construct a matrix $U\in{\mathbb{O}}(A)$.
Furthermore, the two choices differ in their diagonal entry.
Finally, we note that the strictly upper triangular entries of $U$ match the strictly upper triangular entries of $X$.\ije{\qedhere}
\end{proof}
For each $\rho\in\set{\pm 1}^n$, we may now define the map
$X_\rho:\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))\to{\mathbb{O}}(n)$ to be the output of the above construction applied to $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$, where, when completing the bottom-right $k \times k$ submatrix, we pick the larger (resp.\ smaller) of the two possible values for its top-left diagonal entry if $\rho_{n-k+1}$ is positive (resp.\ negative).
For example, if $\sigma = 0\in{\mathbb{R}}^{\binom{n}{2}}$, then $X_{\rho}(\sigma) = {\mathbb{D}}iag(\rho)\in{\mathbb{O}}(n)$.
Inductively applying \cref{r:upperTriContinuity}, one may verify that $X_\rho(\sigma)$ is continuous as a function of $\sigma \in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$.
\begin{lemma}
Given $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$ and $\rho\in\set{\pm 1}^n$, we can construct
$X_\rho(\sigma)$ in time $O(n^3)$.
\end{lemma}
\begin{proof}
We will apply the construction of \cref{thm:triangle_general_A} using \cref{lem:complete_OA_from_submatrix} while recursively maintaining $A_{2,2}^{-1}$ in time $O(n^2)$.
It is clear that we have access to $A_{2,2}^{-1}$ at the very top of the recursion as $A_{2,2}^{-1} = I_{n-1}^{-1} = I_{n-1}$.
It remains to prove the following fact: Given $A\in{\mathbb{S}}^k$ and $x\in{\mathbb{R}}^k$ such that $A - xx^\intercal \succ 0$, it is possible to compute the inverse of the bottom-right $(k-1)\times(k-1)$ block of $A - xx^\intercal$ from the inverse of $A$ in time $O(n^2)$.
Write
\begin{align*}
A - xx^\intercal = \begin{pmatrix}
\alpha & a^\intercal\\
a & A_{2,2}
\end{pmatrix}.
\end{align*}
Note that $\alpha >0$ by the assumption that $A - xx^\intercal\succ 0$. Then,
\begin{align*}
\begin{pmatrix}
\alpha &\\&A_{2,2}
\end{pmatrix} = A - xx^\intercal - \begin{pmatrix}
0 & a^\intercal\\ a & 0
\end{pmatrix}.
\end{align*}
Thus, we can compute $A_{2,2}^{-1}$ by computing the inverse of the expression on the right hand side and taking its bottom right block. This can be done via the Sherman--Morrison formula in time $O(n^2)$.
Repeating once for each of the $n$ entries results in $O(n^3)$ time in total.\ije{\qedhere}
\end{proof}
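Combining the previous sketches, the following direct \texttt{numpy} implementation (ours) assembles $X_\rho(\sigma)$ by the recursion of \cref{thm:triangle_general_A}, reusing the \texttt{complete\_column} sketch given after \cref{lem:complete_OA_from_submatrix}; it does not maintain $A_{2,2}^{-1}$ incrementally and is therefore simpler but slower than the $O(n^3)$ procedure above.
\begin{verbatim}
import numpy as np
# Uses complete_column from the earlier sketch.

def build_X_rho(sigma_mat, rho):
    # sigma_mat: n x n array whose strict upper triangle stores sigma (other
    # entries ignored); rho: a vector in {+1, -1}^n.  Assumes sigma lies in the
    # interior of pi_T(B_op(n)).
    n = sigma_mat.shape[0]
    # Forward pass: Gram targets of the trailing (n - i) x (n - i) blocks.
    A_list = [np.eye(n)]
    for i in range(n - 1):
        x = sigma_mat[i, i + 1:]
        A_list.append(A_list[-1][1:, 1:] - np.outer(x, x))
    # Backward pass: complete one column at a time, choosing the larger or
    # smaller top-left entry according to rho, as in the definition of X_rho.
    U = np.array([[rho[n - 1] * np.sqrt(A_list[n - 1][0, 0])]])
    for i in range(n - 2, -1, -1):
        x = sigma_mat[i, i + 1:]
        U_tilde = np.vstack([x, U])
        u1, u2 = complete_column(U_tilde, A_list[i])
        u = u1 if (rho[i] > 0) == (u1[0] > u2[0]) else u2
        U = np.column_stack([u, U_tilde])
    return U
\end{verbatim}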
By \cref{thm:triangle_general_A}, $\diag\left(\pi_{\cal T}^{-1}(\sigma)\cap{\mathbb{O}}(n)\right)\subseteq{\mathbb{R}}^n$ is a set of $2^n$ distinct elements.
The following result, due to~\citet{fiedler2009suborthogonality}, allows us to deduce that the $2^n$ elements correspond to the vertices of a (scaled) hypercube.
\begin{lemma}[\citet{fiedler2009suborthogonality}]
\label{thm:fiedler}
Let $U\in{\mathbb{O}}(n)$ and let $R\in{\mathbb{R}}^{a\times b}$ be a submatrix of $U$ where $a + b > n$. Then, $\norm{R}_\textup{op} = 1$.
\end{lemma}
\begin{lemma}
\label{lem:diagonals_of_so}
Let $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$.
There exist scalars $\alpha_i < \beta_i$ for $i\in[n]$ (independent of $\rho$) such that for all $\rho\in\set{\pm 1}^n$,
\begin{align*}
X_{\rho}(\sigma)_{i,i} = \begin{cases}
\alpha_i & \text{if } \rho_i = -1\\
\beta_i & \text{if } \rho_i = 1
\end{cases}.
\end{align*}
\end{lemma}
\begin{proof}
Fix $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$ and let $X\in\inter({\mathbb{B}}_\textup{op}(n))$ such that $\pi_{\cal T}(X) = \sigma$.
For $i\in[n]$, let $\hat R_i$ denote the $i\times (n-i+1)$ dimensional submatrix of $X$ with bottom-left entry at coordinate $(i,i)$.
Let $R_i(s)\in{\mathbb{R}}^{i\times(n-i+1)}$ denote the matrix that replaces the bottom-left entry of $\hat R_i$ with $s\in{\mathbb{R}}$.
Then, $R_i(s)$ parameterizes a line that intersects the interior of the operator norm ball (indeed, $\norm{\hat R_i}_\textup{op}\leq\norm{X}_\textup{op}<1$). As the operator norm ball is a compact convex body, there are exactly two choices of $s$, denoted $\alpha_i<\beta_i$, for which $\norm{R_i(s)}_\textup{op} = 1$.
Then, by \cref{thm:fiedler}, we deduce that $\diag\left(\pi_{\cal T}^{-1}(\sigma) \cap {\mathbb{O}}(n)\right) \subseteq \prod_i \set{\alpha_i,\beta_i}$.
Combining with \cref{thm:triangle_general_A} completes the proof.\ije{\qedhere}
\end{proof}
The following result states that the sign of $\det(X_{\rho}(\sigma))$ depends only on $\rho$.
\begin{lemma}
\label{thm:sign_of_x_rho}
Let $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$ and $\rho\in\set{\pm 1}^n$. Then $\det(X_{\rho}(\sigma)) = \prod_i \rho_i$.
\end{lemma}
\begin{proof}
Fix $\rho\in\set{\pm 1}^n$ and $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$ and consider the continuous function $f(\alpha)\coloneqq \det(X_{\rho}(\alpha\sigma))$ defined on $\alpha\in[0,1]$. As $X_{\rho}(\alpha\sigma)\in{\mathbb{O}}(n)$ for all $\alpha\in[0,1]$, we have that $f$ can only take on the values $\pm 1$. As $f$ is also continuous, it must be constant so that $f(1) = f(0) = \det(X_{\rho}(0)) = \det({\mathbb{D}}iag(\rho)) = \prod_i \rho_i$.\ije{\qedhere}
\end{proof}
\subsection{Refinements of \cref{thm:opt_so_with_sut}}
The following theorem extends \cref{thm:opt_so_with_sut} to an approximation result in the remaining case of maximization over ${\mathbb{S}}O(n)$ with SUT constraints and $\det(A)<0$.
\begin{theorem}
\label{thm:approximate_det_c_negative_generic}
Let $A\in{\mathbb{R}}^{n\times n}$ be a diagonal matrix with $\det(A)<0$ and let ${\cal C}\subseteq \pi_{\cal T}({\mathbb{B}}_\textup{op}(n))$ be a nonempty closed convex set. Then \eqref{eq:opt_over_bop_with_sut} provides a $\left(1 - \frac{1}{n}\right)$-approximation of \eqref{eq:opt_over_so_with_sut} in the following sense:
\begin{align*}
&\max_{X\in{\mathbb{S}}O(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X)\in{\cal C}}\\
&\qquad\geq \left(1 - \frac{1}{n}\right) \max_{X\in{\mathbb{B}}_\textup{op}(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X) \in{\cal C}} + \frac{1}{n}\min_{X\in{\mathbb{B}}_\textup{op}(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X) \in{\cal C}}.
\end{align*}
\end{theorem}
\begin{proof}
Let $\hat X\in{\mathbb{B}}_\textup{op}(n)$ maximize \eqref{eq:opt_over_bop_with_sut} and let $\sigma = \pi_{\cal T}(\hat X)$.
We will only consider the case where $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$. The general case follows by continuity and compactness.
We will fix $\sigma$ in the remainder of the proof and write $X_\rho$ instead of $X_\rho(\sigma)$.
Let $(\alpha_i,\beta_i)$ be the quantities furnished by \cref{lem:diagonals_of_so} applied to $\sigma$.
Let $\hat\rho = \sign(\diag(A))$, which is well defined since $\det(A)\neq 0$. For $i\in[n]$, let $\rho^{(i)}\in\set{\pm 1}^n$ denote the vector obtained from $\hat\rho$ by negating its $i$th entry. Since $\det(A)<0$, \cref{thm:sign_of_x_rho} implies that for all $i\in[n]$ we have
$X_{\rho^{(i)}}\in{\mathbb{S}}O(n)$ and $\pi_{\cal T}(X_{\rho^{(i)}}) = \sigma \in{\cal C}$.
We also note that $\ip{A,\hat X} = \ip{A,X_{\hat\rho}}$: on one hand, $X_{\hat\rho}$ is feasible for \eqref{eq:opt_over_bop_with_sut}, so $\ip{A,X_{\hat\rho}}\leq\ip{A,\hat X}$; on the other hand, as in the proof of \cref{lem:diagonals_of_so}, $\hat X_{i,i}\in[\alpha_i,\beta_i]$ for every $i$, so $\ip{A,\hat X}=\sum_i A_{i,i}\hat X_{i,i}\leq\ip{A,X_{\hat\rho}}$.
Then,
\begin{align*}
\max_{X\in{\mathbb{S}}O(n)}\set{\ip{A,X}:\, \pi_{\cal T}(X) \in{\cal C}}
&\geq \max_{i\in[n]} \ip{A, X_{\rho^{(i)}}}\\
&= \ip{A,\hat X} - \min_{i\in[n]}\abs{A_{i,i}(\beta_i - \alpha_i)}\\
&\geq \ip{A,\hat X} - \frac{1}{n}\left(\sum_{i=1}^n \abs{A_{i,i}(\beta_i - \alpha_i)}\right)\\
&= \ip{A,X_{\hat\rho}} - \frac{1}{n}\left(\ip{A, X_{\hat\rho}} - \ip{A,X_{-\hat\rho}}\right)\\
&= \left(1 - \frac{1}{n}\right)\ip{A,X_{\hat\rho}} + \frac{1}{n}\ip{A, X_{-\hat\rho}}.
\end{align*}
Noting that $\pi_{\cal T}(X_{-\hat\rho})=\sigma\in{\cal C}$ completes the proof in the case where $\sigma\in\inter(\pi_{\cal T}({\mathbb{B}}_\textup{op}(n)))$.\ije{\qedhere}
\end{proof}
\section{Obstructions to further generalization}
\label{sec:obstructions}
This section collects a number of results showing that our hidden convexity results are essentially tight.
\subsection{Maximality of \cref{thm:twoconvex}}
Recall that \cref{thm:twoconvex} shows that any two-dimensional linear image of ${\mathbb{S}}O(n)$ is convex.
The following result shows this
is optimal in a specific sense.
\begin{lemma}
\label{thm:nonconvex}
For any $n\geq 3$, there exists a linear map $\pi : {\mathbb{R}}^{n\times n} \rightarrow {\mathbb{R}}^3$ so that $\pi({\mathbb{S}}O(n))$ is nonconvex.
\end{lemma}
\begin{proof}
We define
\[
\pi(X) = \left(X_{11}, X_{12}, \sum_{i=3}^n X_{ii}\right).
\]
Let $H = \{x \in {\mathbb{R}}^3 : x_{3} = n-2\}$.
To see that $S = \pi({\mathbb{S}}O(n))$ is nonconvex, we show that $S \cap H$ is nonconvex.
Consider a general $X \in {\mathbb{S}}O(n)$. It holds that $\sum_{i=3}^n X_{ii} = n-2$ if and only if $X_{ii} = 1$ for all $i > 2$.
This occurs if and only if $X$ is block diagonal, so that
\[
X = \begin{pmatrix}
\begin{smallmatrix}
\cos(\theta) & \sin(\theta)\\
-\sin(\theta) & \cos(\theta)
\end{smallmatrix} & \\
& I
\end{pmatrix},
\]
for some $\theta \in [0,2\pi]$.
Now, if $X$ has this form, then $\pi(X) = (\cos(\theta), \sin(\theta), n-2)$, i.e.,
\[
S \cap H = \{(\cos(\theta), \sin(\theta), n-2) : \theta \in [0,2\pi]\}.
\]
This is clearly nonconvex, for example, $(0,0,n-2) \in \conv (S \cap H) \setminus (S \cap H)$.\ije{\qedhere}
\end{proof}
In fact, this construction gives us an example of a 2-constraint optimization problem over ${\mathbb{S}}O(n)$ for which the $\conv({\mathbb{S}}O(n))$ relaxation is not tight.
Consider the following optimization problem:
\begin{align}
\label{eq:two_constraint_noncvx}
\max_{X\in {\mathbb{S}}O(n)} \left\{\sum_{i=3}^n X_{ii} : \begin{array}{l} X_{1,1} = 0\\ X_{1,2} = 0\end{array}\right\}.
\end{align}
We have seen that it is not possible for a matrix in ${\mathbb{S}}O(n)$ to attain a value of $n-2$ in this problem, since any matrix in ${\mathbb{S}}O(n)$ where $\sum_{i=3}^n X_{ii} = n-2$ has the property that $X_{11}^2+X_{12}^2 = 1$.
However, $n-2$ is attainable by a convex combination of matrices in ${\mathbb{S}}O(n)$,
\[
\frac{1}{2}\left(
\begin{pmatrix}
\begin{smallmatrix}
1 & 0\\
0 & 1
\end{smallmatrix} & \\
& I
\end{pmatrix}
+
\begin{pmatrix}
\begin{smallmatrix}
-1 & 0\\
0 & -1
\end{smallmatrix} & \\
& I
\end{pmatrix}
\right).
\]
Thus, the convex relaxation of \eqref{eq:two_constraint_noncvx} that replaces ${\mathbb{S}}O(n)$ with $\conv({\mathbb{S}}O(n))$ achieves value $n - 2$.
\subsection{Maximality of \cite[Theorem 8]{horn1954doubly}}
Horn's result \cite[Theorem 8]{horn1954doubly} shows that the diagonal projection of ${\mathbb{S}}O(n)$ is a convex polytope.
A slight modification of the construction from the previous subsection shows this is maximally convex in the following sense:
\begin{lemma}
\label{thm:diagonal_maximal}
Let $n \geq 3$ and let $A \in {\mathbb{R}}^{n\times n}$ be a nondiagonal matrix.
Consider the linear map $\pi_A : {\mathbb{R}}^{n\times n} \rightarrow {\mathbb{R}}^{n+1}$, so that $\pi_A(X)_{i} = X_{ii}$ for $i = 1, \dots, n$, and $\pi_A(X)_{n+1} = \ip{A,X}$.
Then $\pi_A({\mathbb{S}}O(n))$ is not convex.
\end{lemma}
\begin{proof}
We first consider the case when $A$ is not symmetric.
By permuting coordinates, we may assume $A_{1,2} \neq A_{2,1}$.
Define $H = \{x \in {\mathbb{R}}^{n+1} :\, x_{i} = 1 \text{ for } i = 3, \dots, n\}$ and consider $\pi_A({\mathbb{S}}O(n)) \cap H$.
We have seen that if $X \in {\mathbb{S}}O(n)$ and $X_{ii} = 1$ for $i = 3, \dots, n$, then
\[
X = \begin{pmatrix}
\begin{smallmatrix}
\cos(\theta) & \sin(\theta)\\
-\sin(\theta) & \cos(\theta)
\end{smallmatrix} & \\
& I
\end{pmatrix},
\]
and therefore,
\[
\ip{A, X} = (A_{11} + A_{22})\cos(\theta) + (A_{1, 2} - A_{2,1})\sin(\theta) + \sum_{i=3}^n A_{ii}.
\]
We will consider the linear map of $\pi_A({\mathbb{S}}O(n))\cap H$ into ${\mathbb{R}}^2$ that sends $\pi_A(X)$ to
\[
\left(X_{1,1}, \frac{\ip{A,X} - (A_{11} + A_{22})X_{1,1} - \sum_{i=3}^n A_{ii}}{A_{1,2}-A_{2,1}}\right) = (\cos(\theta), \sin(\theta)).
\]
In other words, this linear map sends $\pi_A({\mathbb{S}}O(n)) \cap H$ to a circle. We conclude that $\pi_A({\mathbb{S}}O(n))$ is not convex.
Now, we consider the case when $A$ is symmetric but not diagonal.
We may permute coordinates to assume that $A_{1,2} = A_{2,1}\neq 0$.
Let $D^{(i)}$ be a diagonal matrix where $D^{(i)}_{ii} = -1$ and for all $j \neq i$, $D^{(i)}_{jj} = 1$.
Then, define $A' = D^{(1)} A D^{(2)}$, which is not symmetric. Note that
\[
\pi_{A'}(X) = (X_{11}, X_{22}, X_{33},\dots, X_{nn}, \langle A', X\rangle) = \tau(\pi_A(D^{(1)}XD^{(2)})),
\]
where $\tau(x) = (-x_1, -x_2,x_3 ,\dots, x_n, x_{n+1})$. In particular, $\pi_A({\mathbb{S}}O(n))$ is convex if and only if $\pi_{A'}({\mathbb{S}}O(n))$ is convex, from which the claim follows.\ije{\qedhere}
\end{proof}
\subsection{Maximality of \cref{thm:feasibility}}
\cref{thm:feasibility} gives an example of an $m = \binom{n}{2}$-dimensional linear projection of ${\mathbb{S}}O(n)$ that is convex.
The following lemma shows this is maximal in terms of dimension.
\begin{lemma}
Suppose $n\geq 3$ and $\pi : {\mathbb{R}}^{n\times n} \rightarrow {\mathbb{R}}^m$ satisfies $\rank(\pi) > \binom{n}{2}$.
Then, $\pi({\mathbb{S}}O(n))$ is non-convex.
\end{lemma}
\begin{proof}
It suffices to prove the statement in the case where $\rank(\pi)=m$.
Suppose, for contradiction, that $\pi({\mathbb{S}}O(n))$ is convex, so that $\pi({\mathbb{S}}O(n)) = \conv(\pi({\mathbb{S}}O(n))) = \pi(\conv({\mathbb{S}}O(n)))$.
A convex set either contains an interior point or is contained in a proper affine subspace.
As $\conv({\mathbb{S}}O(n))$ is full-dimensional for all $n\geq 3$ and $\pi$ has full rank, we deduce that $\pi(\conv({\mathbb{S}}O(n)))$ cannot be contained in a proper affine subspace of ${\mathbb{R}}^m$.
Therefore, $\pi({\mathbb{S}}O(n))$ must have an interior point and, in particular, positive measure.
This contradicts Sard's theorem: ${\mathbb{S}}O(n)$ is a $\binom{n}{2}$-dimensional manifold and $m > \binom{n}{2}$, so the image of ${\mathbb{S}}O(n)$ under the smooth map $\pi$ must have measure zero in ${\mathbb{R}}^m$.\ije{\qedhere}
\end{proof}
\subsection{Necessity of $\det(A)\geq 0$ in \cref{thm:opt_so_with_sut}}
We have shown that if $A$ is a diagonal matrix with $\det(A)\geq 0$, then the optimization problem
\begin{equation*}
\max_{X \in {\mathbb{S}}O(n)} \{\ip{A,X} : \pi_{\cal T}(X) = \sigma\}
\end{equation*}
agrees with the convex relaxation replacing ${\mathbb{S}}O(n)$ with ${\mathbb{B}}_\textup{op}(n)$ (see \cref{thm:opt_so_with_sut}).
The following numerical example, computed using the cvxpy convex optimization package \cite{diamond2016cvxpy} and the description of $\conv({\mathbb{S}}O(3))$ given in \cite[Theorem 1.3]{saunderson2015semidefinite}, shows that the assumption $\det(A)\geq 0$ cannot be dropped in \cref{thm:opt_so_with_sut}, even if we strengthen the convex relaxation by replacing ${\mathbb{S}}O(n)$ with $\conv({\mathbb{S}}O(n))$:
\begin{equation*}
\max_{X\in{\mathbb{S}}O(3)} \left\{X_{1,1}+X_{2,2}-X_{3,3} :
\begin{array}{l}
X_{1,2} = 0.5\\
X_{1,3} = 0.3\\
X_{2,3} = 0.2
\end{array}
\right\} = 0.921,
\end{equation*}
whereas
\begin{equation*}
\max_{X\in\conv({\mathbb{S}}O(3))} \left\{X_{1,1}+X_{2,2}-X_{3,3} :
\begin{array}{l}
X_{1,2} = 0.5\\
X_{1,3} = 0.3\\
X_{2,3} = 0.2
\end{array}
\right\} = 1.0.
\end{equation*}
\section{Summary and open questions}
In this paper, we proved new hidden convexity results inspired by solving constrained optimization problems over ${\mathbb{S}}O(n)$ and ${\mathbb{O}}(n)$. These results in turn show that specific structured instances of constrained optimization over ${\mathbb{S}}O(n)$ and ${\mathbb{O}}(n)$ can be efficiently solved via their convex relaxations. We close by posing some natural questions surrounding hidden convexity.
\paragraph{Convex coordinate projections.}
In general, it seems to be difficult to fully characterize the possible sets of coordinates $S \subseteq [n] \times [n]$ so that the projection of ${\mathbb{S}}O(n)$ onto the coordinates in $S$ is convex.
We note some basic invariants of this question: clearly, if $\pi_S({\mathbb{S}}O(n))$ is convex, then $\pi_{T}({\mathbb{S}}O(n))$ is convex for all $T \subseteq S$.
We say that $S$ has a property \emph{up to permutation} if there are permutations $\sigma$ and $\rho$ of $[n]$ so that $\{(\sigma(i), \rho(j)) : (i, j) \in S\}$ has this property.
Similarly we say that $S$ has a property \emph{up to transposition} if either $S$ or $\{(j,i) : (i, j) \in S\}$ has this property.
Clearly, $S$ has the property that $\pi_S({\mathbb{S}}O(n))$ is convex if and only if it has this property up to permutations and transposition.
Note also that, by \cref{thm:fiedler}, $\pi_S({\mathbb{S}}O(n))$ is not convex if $S$ contains a rectangle of size $a\times b$ where $a+b>n$. Here, by a rectangle we mean a subset of $S$ of the form $A\times B\subseteq S$ where $A,\, B\subseteq [n]$, $\abs{A} = a$, and $\abs{B} = b$.
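The rectangle condition is easy to test for small $n$; the following brute-force Python sketch (exponential in $n$, with indices shifted to be $0$-based) illustrates the criterion on the strictly upper-triangular set of $[4]\times[4]$.
\begin{verbatim}
# Brute-force test: does S contain an a x b rectangle A x B with a + b > n?
from itertools import combinations

def contains_large_rectangle(S, n):
    S = set(S)
    for b in range(1, n + 1):
        for B in combinations(range(n), b):
            # largest row set A such that A x B is contained in S
            A = [i for i in range(n) if all((i, j) in S for j in B)]
            if len(A) + b > n:
                return True
    return False

upper = [(i, j) for i in range(4) for j in range(4) if i < j]
print(contains_large_rectangle(upper, 4))              # False: no large rectangle
print(contains_large_rectangle(upper + [(1, 1)], 4))   # True: a 2 x 3 rectangle appears
\end{verbatim}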
Using this idea with additional casework (which we feel is ultimately uninformative)
we can obtain the following characterization of the coordinate subsets of $[4] \times [4]$ so that $\pi_S({\mathbb{S}}O(4))$ is convex:
\begin{lemma}
Let $S \subseteq [4] \times [4]$ be such that $\pi_S({\mathbb{S}}O(4))$ is convex. Then (up to permutations and transpositions), $S$ is a subset of one of the following:
\begin{itemize}
\item $T = \{(i,j) \in [4]\times[4] : i < j\}$
\item $D = \{(i,i) : i \in [4]\}$
\item $F = \{(1,1),(1,2),(2,3),(2,4)\}$.
\end{itemize}
\end{lemma}
The structure of these examples suggests that there may be some rich combinatorial information hidden in the question of whether or not a given coordinate projection of ${\mathbb{S}}O(n)$ is convex.
In particular, we suspect the following: consider the decision problem \texttt{CONVEX} whose input is a set $S \subseteq [n] \times [n]$, and whose output is TRUE if $\pi_S({\mathbb{S}}O(n))$ is convex, and FALSE otherwise.
\begin{conjecture}
The problem \texttt{CONVEX} is NP-hard.
\end{conjecture}
We remark that it is not even clear whether this problem is in NP, as there does not seem to be an efficient witness to the fact that $\pi_S({\mathbb{S}}O(n))$ is convex. We note that determining whether $S \subseteq [n] \times [n]$ is (up to permutations and transpositions) a subset of the upper triangular entries of $[n] \times [n]$ is NP-hard \cite{fertin2015obtaining}.
\paragraph{Small semidefinite representation of two-dimensional images.}
It is known that the smallest equivariant (see \cite{saunderson2015semidefinite} for a definition) semidefinite representation of $\conv({\mathbb{S}}O(n))$ is exponential in size~\cite{saunderson2015semidefinite}. We leave open the question of whether $\pi({\mathbb{S}}O(n))$, where $\pi:{\mathbb{R}}^{n\times n}\to{\mathbb{R}}^2$, may have a small (possibly linear-sized) semidefinite representation.
\paragraph{Hidden convexity of multiple copies of ${\mathbb{S}}O(n)$.}
Finally, we also leave the study of convex images of direct products of ${\mathbb{S}}O(n)$ to future work. Such results may be useful in applications such as cryo-EM~\cite{bandeira2020non}, where the optimization problems contain multiple ${\mathbb{S}}O(n)$ matrices.
\section*{Acknowledgments}
Akshay Ramachandran was supported by the European Research Council (ERC)
under the European Union’s Horizon 2020 research and innovation programme:
AR from grant agreement no.\ 805241-QIP.
Kevin Shu was supported by the ACO-ARC fellowship while conducting this work.
Part of this work was completed while Alex L.\ Wang was supported by the Dutch Scientific Council (NWO) grant OCENW.GROOT.2019.015 (OPTIMAL).
\begin{appendix}
\crefalias{section}{appendix}
\section{Separation and optimization oracles for the parity polytope}
\label{app:separation_pp}
It is possible to implement separation and optimization oracles for $\textup{PP}_n$ that run in $O(n\log n)$ time.
\paragraph{Separation.}
We will use the following description of $\textup{PP}_n$ given in \cite{jeroslow1975defining,Lancia2018}:
\[ \textup{PP}_{n} \coloneqq \set{ x \in [-1,1]^{n} :\, \ip{x, 1_{n} - 2 \cdot 1_{S}} \leq n-2 ,\,\quad \forall \text{ odd } S \subseteq [n]}. \]
Here, we say that $S\subseteq[n]$ is odd if $\abs{S}$ is odd, and even otherwise.
The set of constraints can be rewritten as
\[ \min_{\text{odd }S\subseteq[n]} \ip{x, 1_{S}} \geq \frac{1}{2} \left( \ip{x, 1_{n}} - (n-2) \right) . \]
In $O(n\log n)$ time, we may check that $x\in[-1,1]^n$, sort the entries of $x$ in increasing order, and compute the sums of all odd-length prefixes of the sorted vector.
If every such sum is at least $\frac{1}{2} (\ip{x, 1_{n}} - (n-2) )$, then $x\in\textup{PP}_n$. Otherwise, we have found a separating hyperplane.
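A direct Python transcription of this separation step might look as follows (a sketch only; the tolerance and the format of the returned certificate are our choices).
\begin{verbatim}
# Separation oracle for PP_n: returns None if x is in PP_n, otherwise a
# certificate (a violated box coordinate, or an odd set S with
# <x, 1_n - 2*1_S> > n - 2).
import numpy as np

def separate_parity_polytope(x):
    x = np.asarray(x, dtype=float)
    n = len(x)
    if np.any(np.abs(x) > 1):
        return ('box', int(np.argmax(np.abs(x))))
    order = np.argsort(x)                 # increasing order
    prefix = np.cumsum(x[order])
    rhs = 0.5 * (x.sum() - (n - 2))
    for k in range(1, n + 1, 2):          # odd-length prefixes
        if prefix[k - 1] < rhs - 1e-9:
            return ('odd-set', order[:k].tolist())
    return None                           # x is in PP_n
\end{verbatim}
A returned odd set $S$ corresponds directly to the violated inequality $\ip{x, 1_n - 2\cdot 1_S}\leq n-2$, which can be added as a cutting plane.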
\paragraph{Optimization.}
We will use the vertex description
\[ \textup{PP}_{n} \coloneqq \conv\set{ 1_{n} - 2\cdot 1_{S} :\,
\text{even } S\subseteq[n]}. \]
Then, given $w\in{\mathbb{R}}^n$, we may solve $\max_{x \in \textup{PP}_{n}} \ip{w,x}$ by solving $\min_{\text{even } S\subseteq[n]} \ip{w,1_S}$. We can construct a minimizer of the latter problem in $O(n\log n)$ time by sorting $w$ in increasing order and computing the even-length (including length-zero) prefix sums of the sorted vector.
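The corresponding optimization oracle admits an equally short sketch; the function below returns a maximizing vertex $1_n - 2\cdot 1_S$ together with its objective value.
\begin{verbatim}
# Optimization oracle for PP_n: maximize <w, x> over PP_n by choosing the
# even-cardinality set S that minimizes <w, 1_S> among sorted prefixes.
import numpy as np

def maximize_over_parity_polytope(w):
    w = np.asarray(w, dtype=float)
    n = len(w)
    order = np.argsort(w)                           # increasing order
    prefix = np.concatenate(([0.0], np.cumsum(w[order])))
    k = 2 * int(np.argmin(prefix[0:n + 1:2]))       # best even prefix length 0, 2, 4, ...
    x = np.ones(n)
    x[order[:k]] = -1.0                             # vertex 1_n - 2*1_S
    return x, float(w @ x)
\end{verbatim}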
\section{Connections with quadratic convexity theorems}
\label{sec:quadratic_convexity}
This appendix interprets hidden convexity results on ${\mathbb{S}}O(n)$ as quadratic convexity results on the unit sphere.
A basic result in the Lie group theory of ${\mathbb{S}}O(n)$ is the existence of a quadratic map $Q:{\mathbb{R}}^{2^{n-1}}\to{\mathbb{R}}^{n\times n}$ and a subset $\spin(n)$ of the unit sphere in ${\mathbb{R}}^{2^{n-1}}$ such that $Q(\spin(n)) = {\mathbb{S}}O(n)$.
This map is quadratic in the sense that there exists a collection of $n^2$ symmetric matrices $\set{A_{ij}}\subseteq{\mathbb{S}}^{2^{n-1}}$ indexed by $(i,j)\in[n]^2$ such that
\begin{align*}
(Q(x))_{i,j} = \ip{x, A_{ij} x}.
\end{align*}
This result and its construction are explained in detail in \cite[Appendix A]{saunderson2015semidefinite}.
It is additionally shown in \cite[Theorem 1.1]{saunderson2015semidefinite} that for any $Y\in{\mathbb{R}}^{n\times n}$,
\begin{gather}
\label{eq:saunderson_one_dim}
\max_{X\in{\mathbb{S}}O(n)}\ip{Y, X} =
\max_{x\in\spin(n)}\ip{Y,Q(x)}
=
\max_{x\in{\mathbf{S}}^{2^{n-1}-1}}\ip{Y, Q(x)}.
\end{gather}
Now, let $\pi:{\mathbb{R}}^{n\times n}\to{\mathbb{R}}^m$ be a linear function. Then,
\begin{align*}
\pi({\mathbb{S}}O(n)) &= (\pi\circ Q)(\spin(n))
\subseteq (\pi\circ Q)({\mathbf{S}}^{2^{n-1}-1})
\subseteq \conv(\pi({\mathbb{S}}O(n))).
\end{align*}
Here, the last inclusion follows by \eqref{eq:saunderson_one_dim}.
We deduce that if $\pi({\mathbb{S}}O(n))$ is convex, then equality holds throughout this chain and the image of the unit sphere ${\mathbf{S}}^{2^{n-1}-1}$ under the quadratic map $\pi\circ Q$ is convex.
For example, combined with \cref{thm:feasibility}, we have that
\begin{align*}
\set{(\ip{x, A_{ij}x})_{i<j}:\, x\in{\mathbf{S}}^{2^{n-1}-1}}\subseteq{\mathbb{R}}^{\binom{n}{2}}
\end{align*}
is convex.
As another example, combined with \cite[Theorem 8]{horn1954doubly}, we have that
\begin{align*}
\set{\begin{pmatrix}
\ip{x,A_{11}x}\\
\vdots\\
\ip{x,A_{nn}x}
\end{pmatrix}:\, x\in{\mathbf{S}}^{2^{n-1}-1}}\subseteq{\mathbb{R}}^n
\end{align*}
is convex (and equal to the polytope $\textup{PP}_n$).
\end{appendix}
\end{document} |
\begin{document}
\title{Assessment of the variational quantum eigensolver:\\
application to the Heisenberg model}
\author{Manpreet Singh Jattana}
\affiliation{Institute for Advanced Simulation, J{\"u}lich Supercomputing Centre, Forschungszentrum J{\"u}lich, D-52425 J{\"u}lich, Germany}
\affiliation{RWTH Aachen University, D-52062 Aachen, Germany}
\author{Fengping Jin}
\affiliation{Institute for Advanced Simulation, J{\"u}lich Supercomputing Centre,\\ Forschungszentrum J{\"u}lich, D-52425 J{\"u}lich, Germany}
\author{Hans De Raedt}
\affiliation{Zernike Institute for Advanced Materials,\\
University of Groningen,
NL-9747 AG Groningen, The Netherlands}
\affiliation{Institute for Advanced Simulation, J{\"u}lich Supercomputing Centre,\\ Forschungszentrum J{\"u}lich, D-52425 J{\"u}lich, Germany}
\author{Kristel Michielsen}
\affiliation{Institute for Advanced Simulation, J{\"u}lich Supercomputing Centre,\\ Forschungszentrum J{\"u}lich, D-52425 J{\"u}lich, Germany}
\affiliation{RWTH Aachen University, D-52062 Aachen, Germany}
\date{\today}
\begin{abstract}
We present and analyze large-scale simulation results of a hybrid quantum-classical variational method to calculate the ground state energy of the anti-ferromagnetic Heisenberg model. Using a massively parallel universal quantum computer simulator, we observe that a low-depth-circuit ansatz advantageously exploits the efficiently preparable N\'{e}el initial state, avoids potential barren plateaus, and works for both one- and two-dimensional lattices. The analysis reflects the decisive ingredients required for a simulation by comparing different ans\"{a}tze, initial parameters, and gradient-based versus gradient-free optimizers. Extrapolation to the thermodynamic limit accurately yields the analytical value for the ground state energy, given by the Bethe ansatz. We predict that a fully functional quantum computer with 100 qubits can calculate the ground state energy with a relatively small error.
\end{abstract}
\maketitle
\section{Introduction}\label{INT}
Variational methods, in particular the variational quantum eigensolver (VQE) \cite{Peruzzo2014, McClean2016}, have recently been used successfully to solve proof-of-concept problems on current quantum computing devices \cite{OMalley2016, Kokail2019, Kandala2017}. Despite this initial success, it remains an open question which problems would demonstrate an advantage on future quantum computers. Finding the ground state energy of the Heisenberg model is one of the candidates.
Recent works have focused on the implementation of the VQE on quantum computers, including the invention of efficient methods for current devices \cite{Kandala2017, Huggins2019}, the reduction of the total number of required qubits \cite{Liu2019, Fujii2020}, the testing of optimization algorithms \cite{Koczor2020, Ostaszewski2021}, and a study of the effects of noise \cite{Zeng2020}. Also, attempts to implement the Bethe ansatz \cite{Bethe1931} on a quantum computer \cite{Nepomechie2020, VanDyke2021} have been made. Results for implementations of the VQE on quantum computers with up to 20 qubits \cite{Lyu2020} or less \cite{Seki2020, Slattery2021, Jin2020, Bespalova2020} are available.
Despite the progress in hybrid quantum-classical variational methods, several of their important aspects are still unexplored. Large-scale simulations of VQE for calculating the ground state energy of the Heisenberg model have not yet been performed. It is unclear if a single ansatz can be used for both the one- and two-dimensional lattices. A clear picture of how the minimum energy obtained with a given ansatz scales, both within and beyond what can be emulated on classical hardware, is missing. In this work, we present results for all these aspects.
The rest of the paper is structured as follows. In Sec.~\ref{sec2}, we briefly review the variational principle and the Heisenberg model, and introduce the ansatz. In Sec.~\ref{sec3}, we present the results of our work for one- and two-dimensional lattices. Finally, in Sec. \ref{sec4}, we summarise our findings.
\section{Theory}\label{sec2}
\subsection{Variational principle}
The variational principle states that the energy $E$ obtained by using a certain parameterized wavefunction $\psi(\theta)$ for a problem Hamiltonian $H$ is a strict upper bound to the ground state energy $E_0$ of $H$:
\begin{equation}
E = \braket{\psi(\theta )|H|\psi(\theta ) } \geq E_0.\label{eq4:1}
\end{equation}
The VQE uses the variational principle, Eq.~(\ref{eq4:1}), to compute $E_0$ of $H$ on a quantum computer. The VQE algorithm is a hybrid quantum-classical algorithm that utilizes resources from quantum and classical computers in an iterative process. The diagram depicted in Fig.~\ref{fig11} shows the link between the quantum processing unit (QPU) and the classical processing unit (CPU). The QPU is responsible for carrying out the computation for a certain quantum circuit that generates the state $\psi(\theta)$, depending on a set of parameters, and returns to the CPU the corresponding bitstrings obtained after measurement. The bitstrings are accumulated and processed by the classical unit and fed to an optimizer, which suggests the next set of parameters so as to lower the energy in successive iterations.
\subsection{Heisenberg model}
We analyze the Hamiltonian representing the quantum spin model
\begin{equation}
H = \sum_{i>j}^N \Big({J}_{ij}^{xx} \sigma_i^x \cdot \sigma_j^x+{J}_{ij}^{yy} \sigma_i^y \cdot \sigma_j^y+{J}_{ij}^{zz} \sigma_i^z \cdot \sigma_j^z \Big ),\label{eq4:ham1}
\end{equation}
where $N$ denotes the number of spins, and $\sigma^x,\sigma^y,$ and $\sigma^z $ are the Pauli matrices. Throughout the rest of the paper, we use units such that $\hbar=1$ and the $J$'s are dimensionless. If $J^{\alpha \alpha}_{ij} = 1$ for all coupled pairs $(i,j)$ of the lattice and all $\alpha\in \{x,y,z\}$ (and zero otherwise), we call $H$ the \textit{isotropic anti-ferromagnetic} Heisenberg model Hamiltonian. If the nonzero coefficients ${J}_{ij}^{\alpha \alpha}$ are chosen randomly in the interval $(0,1]$, then we call $H$ the \textit{random} Hamiltonian. We use both open and periodic boundary conditions and map each spin to a qubit. For most of our simulations we consider bipartite spin lattices, with the single exception of a (frustrated) triangular spin lattice.
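For reference, a minimal exact-diagonalization sketch in Python/NumPy, assuming the conventions of Eq.~(\ref{eq4:ham1}) (Pauli matrices, $J=1$ on nearest-neighbour ring bonds), reproduces the ground state energy $-8$ quoted later for the four-qubit example; the helper names are ours.
\begin{verbatim}
# Exact ground state energy of the isotropic Heisenberg ring (Pauli convention).
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def kron_chain(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def heisenberg_ring(N):
    H = np.zeros((2**N, 2**N), dtype=complex)
    for i in range(N):
        j = (i + 1) % N                   # periodic boundary conditions
        for P in (X, Y, Z):
            ops = [I2] * N
            ops[i], ops[j] = P, P
            H += kron_chain(ops)
    return H

print(np.linalg.eigvalsh(heisenberg_ring(4))[0])   # -8.0 (up to numerical precision)
\end{verbatim}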
\begin{figure}
\caption{(Colour online) Schematic of the hybrid quantum-classical mechanism of a variational quantum eigensolver.}
\label{fig11}
\end{figure}
\subsection{Ansatz}
In general, the final state of a system acted upon by a parametrized ansatz can be written in the form
\begin{equation}
\ket{\psi_f(\bm \theta)} = U(\bm \theta)\ket{\psi_0},\label{eqps}
\end{equation}
where $\bm \theta $ are the variational parameters, $U$ is the ansatz, and $\ket{\psi_0}$ denotes the initial state. In this paper we demonstrate that the following ansatz is sufficient to yield an accurate approximation to the ground state energy.
\begin{equation}
U(\bm \theta) = \Big [\prod_{ l =N-1 }^{1} \prod_{k = N}^{ l+1} U_{lk}(\theta_{lk}) \Big ] \Big [\prod_{ l =N-1 }^{1} \prod_{k = N}^{ l+1} U_{kl}(\theta_{kl}) \Big ],\label{eqxyxy1}
\end{equation}
where
\begin{equation}
U_{pq}(\theta_{pq}) =
\begin{cases}
e^{-i\theta_{pq} \sigma_p^y\sigma_q^x } & \text{if $p=N$ or $q=N$,}\\
e^{-i\theta_{pq} \sigma_p^y\sigma_q^x\sigma_N^z } & \text{otherwise.}\label{eqxy1}
\end{cases}
\end{equation}
We elaborate on how to expand Eq.~(\ref{eqxyxy1}). There is an independent index $l$ and a dependent index $k$. The index $l$ is monotonically decreased from $N-1$ to $1$. For each value of $l$, the dependent index $k$ is decreased from $N$ to $l+1$. The index $l$ is not decremented until all the values of $k$ are enumerated. These values of $l$ and $k$ describe the indices of the qubits that the operators in Eq.~(\ref{eqxy1}) act upon. In every operator, no more than three qubits are involved. The corresponding parameters $\theta_{kl}$ are independent for each unique combination of $l$ and $k$. The parameters $\theta_{kl}$ affect the phase of the $N^{\text{th}}$ qubit. The number of unitary operators in Eq.~(\ref{eqxyxy1}) is given by $N(N-1)$. The ordering of the unitary operators is important. In this paper, the specific combination of the operators in Eq.~(\ref{eqxy1}) is termed the XY-ansatz. A different ordering is, in general, prone to produce different results. This is consistent with the findings in the recent literature \cite{Grimsley2020, Tranter2019}.
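The ordering is perhaps easiest to see programmatically; the following small Python sketch lists the $(l,k)$ pairs in the order in which the corresponding unitaries appear for $N=4$ and confirms the parameter count $N(N-1)$.
\begin{verbatim}
# Index ordering of Eq. (4), with 1-based qubit labels.
N = 4
pairs = [(l, k) for l in range(N - 1, 0, -1) for k in range(N, l, -1)]
print(pairs)                          # [(3, 4), (2, 4), (2, 3), (1, 4), (1, 3), (1, 2)]
print(2 * len(pairs), N * (N - 1))    # the two bracketed products together give N(N-1)
\end{verbatim}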
The motivation for keeping each operator a product of at most two or three Pauli operators is inspired by the coupled cluster ansatz, which can be powerful enough to express relevant states even in this restrictive form \cite{McClean2016}. It is then intuitive to try this approach also for the Heisenberg model. Accordingly, such an ansatz is expressed by
\begin{equation}
\ket{\psi_f(\bm \theta)} = e^{-i A(\bm \theta)}\ket{\psi_0},\label{eq4:123}
\end{equation}
where $A$ can contain sums of products of Pauli operators. The implementation of an ansatz, e.g. the one given by Eq.~(\ref{eq4:123}), is not a simple task in general as it requires factorization of the matrix exponential $e^{-iA(\bm \theta)}$ \cite{DeRaedt1987}. Factorization creates a series of products of unitary operations, which results in deeper quantum circuits with a large number of gates. In this work, we do not directly implement the ansatz given by Eq.~(\ref{eq4:123}), but seek other ans\"{a}tze in a factorized form which do not require further factorization. In effect, we create a quantum circuit from a product of operators instead of the exponential of a sum of operators. Such an approach allows us to build low-depth quantum circuits. Furthermore, from an experimental perspective, it is difficult to build a quantum computing device in which all the qubits work equally well. Some qubits may perform certain gates more efficiently than others. In order to exploit such devices efficiently and to accommodate experimental imperfections, we propose an ansatz in which all the parameterized gates are placed on only one qubit. All operations of the parametrised gates can be restricted to this single qubit.
For comparison with the XY-ansatz, we consider two different ans\"{a}tze. The first one is inspired by quantum chemistry. The unitary coupled cluster ansatz restricted to single and double excitations (UCCSD) is shown to produce results with chemical accuracy \cite{Xia2020, Romero2019, Barkoutsos2018, Shen2017}. We consider
\begin{equation}
U(\bm \theta) = \prod_{l=N-1}^{1} \prod_{k=N}^{l+1} U_{kl}(\theta_{kl}),\label{eqall}
\end{equation}
where
\begin{equation}
U_{kl}(\theta_{kl}) =
\begin{cases}
\prod_{\substack{\alpha}} \prod_{\beta} e^{-i \theta_{kl}^{\beta \alpha} \sigma_k^\beta \sigma_l^\alpha } & \text{if $k=N$ or $l=N$,}\\
\prod_{\alpha} \prod_{\beta} e^{-i \theta_{kl}^{\beta \alpha} \sigma_k^\beta \sigma_l^\alpha\sigma_N^z } & \text{otherwise,}
\end{cases}
\end{equation}
where $\alpha,\beta \in\{x,y,z \} $. A combinatorial calculation shows that the number of unitary operators in Eq.~(\ref{eqall}) is given by $3^2\binom{N}{2} = \tfrac{9}{2}N(N-1)$. Although the total number of terms scales polynomially rather than exponentially, further reductions are always welcome since the redundant terms often slow down the optimization process. Clearly, the operators in Eq.~(\ref{eqxyxy1}) are a subset of those in Eq.~(\ref{eqall}). The differences between using these two are highlighted in the results section.
The second ansatz is inspired by the problem Hamiltonian. We consider the ansatz which for the one-dimensional lattice Hamiltonians is given by
\begin{equation}
U(\bm \theta) = U_p(\bm \theta_p) \ldots U_1(\bm \theta_1), \label{1123}
\end{equation}
where $U_p$ is given by
\begin{equation}
U_p (\bm \theta_p)= \prod_{k=1}^{N} U_{kp}(\bm \theta_{kp}),
\end{equation}
and periodic boundary conditions are used, so that the index $k+1$ is taken modulo $N$. Each $ U_{kp}(\bm \theta_{kp})$ is given by
\begin{equation}
U_{kp} (\bm \theta_{kp})=
\begin{cases}
\prod_{\alpha} e^{-i \theta_{k}^{\alpha} \sigma_k^{\alpha}\sigma_{k+1}^{\alpha} } & \text{if $k=N$ or $k+1=N$,}\\
\prod_{\alpha} e^{-i \theta_{k}^{\alpha }\sigma_k^{\alpha}\sigma_{k+1}^{\alpha} \sigma_N^z } & \text{otherwise.}\label{eqh}
\end{cases}
\end{equation}
For a lattice of size $N$ in one dimension, the ansatz in Eq.~(\ref{1123}) has $p\times 3N$ unitary operators. The variational Hamiltonian ansatz \cite{Wecker2015} is itself inspired by adiabatic evolution. The idea is that a combination of ansatz and initial parameters that mimics the adiabatic evolution can provide a lower initial energy from which to start the variational optimization. It has been used for solving the Hubbard-Fermi model \cite{Reiner2019, okokok} and a modified Haldane-Shastry Hamiltonian \cite{Wiersema2020}.
A good choice for the initial state $\ket{\psi_0}$ often yields better variational results. In the case of antiferromagnets, a good $\ket{\psi_0}$ for the bipartite lattices is known to be the N\'{e}el state where one sublattice is initialised with spins anti-parallel to the other sublattice. The N\'{e}el state, for an even number of spins, is in the magnetic sector of zero magnetization, where the ground state for the one-dimensional isotropic anti-ferromagnetic Heisenberg model is located. For the frustrated lattice, half of the lattice spins are initialised anti-parallel to the other half, without regard to their location in the lattice. The qubits representing the spins for one group are initialised as zeros and those in the other group (anti-parallel) as ones.
\subsection{Implementation}
We use the massively parallel simulator J\"{u}lich Universal Quantum Computer Simulator (JUQCS) \cite{DeRaedt2007, DeRaedt2019} to perform operations on the state vector. We also use Qiskit \cite{Qiskit} for small problem sizes. In an actual quantum device, the state vector itself is not accessible. Instead, the quantum device will produce an ensemble of bitstrings consisting of $0$s and $1$s only, from which the expectation values of the observables can be derived. This raises two issues. First, since the number of samples or bitstrings can only be finite, it is not always clear if finite sampling can accurately represent the underlying probability distribution. Second, we need a procedure to measure the individual terms of the Hamiltonian. While the first is an open problem, recent works have developed efficient methods for the second when it becomes a problem \cite{Bonet-Monroig2020, Verteletskyi2020, Hadfield2020, Gokhale2019, Huggins2021}. Fortunately, neither of these problems hinders finding the ground state energy of the Heisenberg model. First, on an actual quantum device, we do not need explicit knowledge of the probability distribution; one can calculate the expectation values directly by sampling. From the samples we can estimate the energy with a statistical error inversely proportional to the square root of the number of samples. Second, unlike in quantum chemistry, the measurement of individual terms for the Heisenberg model is not a problem.
The implementation of the ansatz is best understood through an example. Consider the four-qubit circuit shown in Fig.~\ref{fig1}, implementing the XY-ansatz for the $N=4$ isotropic Heisenberg ring (see Eq.~(\ref{eq4:ham1})). There are $12$ parameters in total, and Fig.~\ref{fig1} shows the implementation of the first three. After the initial state preparation, gates are applied to construct the unitary operators. The circuit in the rectangular solid box corresponds to the implementation of the $\exp(-i\theta_{43}^{yx} \sigma^z \cdot \sigma^z)$ operator having a variational parameter $ \theta_{43}^{yx}$. The terms containing $\sigma^x$ and $ \sigma^y$ are implemented by changing to the appropriate basis. The rectangular dashed box highlights the implementation of the third term in the ansatz, given by $\exp(-i \theta_{32}^{yx} \sigma^y_3 \cdot \sigma^x_2 \cdot \sigma^z_4)$. Although the results for the special case of four qubits can be achieved using an even smaller subset of terms in the XY-ansatz operators containing only five parameters, for consistency and completeness, the twelve parameters are used for all the cases. The simulation for this example using Qiskit and JUQCS gives the energy $-8.0$, which is also the theoretical value obtained by exact diagonalization. The final state also has a $100\%$ overlap with the ground state. The corresponding z- and total-magnetization were both zero, as required.
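As an illustration (a sketch in Qiskit, not the JUQCS implementation used in this work), a single ansatz term $e^{-i\theta\,\sigma^y_p\sigma^x_q\sigma^z_N}$ can be appended to a circuit using the basis changes and CNOT ladder described above; the helper name \texttt{xy\_term} and the chosen angle are ours.
\begin{verbatim}
# One XY-ansatz term exp(-i*theta * Y_p X_q Z_last) as a Qiskit circuit.
from qiskit import QuantumCircuit
import numpy as np

def xy_term(circ, theta, p, q, last):
    circ.rx(np.pi / 2, p)        # rotate Y_p into the Z basis
    circ.ry(-np.pi / 2, q)       # rotate X_q into the Z basis
    circ.cx(p, last)             # accumulate parities onto the last qubit
    circ.cx(q, last)
    circ.rz(2 * theta, last)     # exp(-i*theta*Z...Z), parameterized gate on the last qubit
    circ.cx(q, last)
    circ.cx(p, last)
    circ.ry(np.pi / 2, q)        # undo the basis changes
    circ.rx(-np.pi / 2, p)
    return circ

qc = xy_term(QuantumCircuit(4), 0.1, p=2, q=1, last=3)
print(qc.draw())
\end{verbatim}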
\begin{figure}
\caption{Circuit showing the first three parameters for a four-qubit XY-ansatz. The $R_x^+, R_x^-$, and $R_y^+, R_y^-$ gates are $R_x(\pi/2), R_x(-\pi/2)$, and $R_y(\pi/2), R_y(-\pi/2)$, respectively. The parameterized gate is always placed on the last qubit. Gates in the solid box correspond to the $e^{-i\theta \sigma^z \cdot \sigma^z}$ operator.}
\label{fig1}
\end{figure}
For the classical optimisation part of the VQE, we use the sequential least squares quadratic programming (SLSQP) \cite{Kraft} optimizer in the SciPy package \cite{scipy}. SLSQP is a gradient-based quasi-Newton algorithm. When using an ideal simulator, the gradients are computed numerically using the cost-effective forward differences formula. The calculation of the energy for given values of the parameters involves the quantum subroutine of VQE. Once the energies are obtained, the calculation of the gradient is done on the classical computer. Calculating the gradient and the next iterate is not a hard problem and can be done on a classical computer. It is the calculation of the energy for which quantum resources will be required. In our work, the cost of gradient computation for the optimizer is included in the total energy evaluations. For comparison, we also use a gradient-free optimizer called constrained optimization by linear approximation (COBYLA) \cite{scipy, Powell1994, Powell2007}.
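Schematically, the classical side of the loop amounts to a single call to SciPy's minimizer with a black-box energy function. In the sketch below the energy is a toy quadratic stand-in (not the Heisenberg energy), included only so that the snippet runs, and the option values are illustrative choices.
\begin{verbatim}
# Skeleton of the classical optimization loop with SLSQP and forward differences.
import numpy as np
from scipy.optimize import minimize

n_params = 12                      # e.g. N(N-1) for the N = 4 XY-ansatz
evaluations = 0

def energy(theta):
    # Stand-in for <psi(theta)|H|psi(theta)>, which would be estimated on a
    # quantum device or simulator; here a toy quadratic so the script runs.
    global evaluations
    evaluations += 1
    return float(np.sum((theta - 0.3) ** 2) - 8.0)

theta0 = np.zeros(n_params)        # zeros: start from the Neel state itself
result = minimize(energy, theta0, method='SLSQP',
                  options={'ftol': 1e-9, 'eps': 1e-6, 'maxiter': 200})
print(result.fun, evaluations)     # final energy and total energy evaluations
\end{verbatim}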
\section{Results}\label{sec3}
\subsection{One-dimensional lattices}
The results for the one-dimensional isotropic Hamiltonians with periodic boundary conditions are shown in Fig.~\ref{fig2}(A). The coloured lines show the optimization progress, i.e. the lowest energy achieved by successive energy evaluations for different lattice sizes. Initially, the system is in the N\'{e}el state. To take advantage of the N\'{e}el state, all the optimization parameters are initialised as zeros. As the optimization proceeds, the drop in energy is visible for all lattice sizes shown. The stepped progression is characteristic of the quasi-Newton optimization algorithms, which calculate the gradient before deciding on the step size. Each "step" of the staircase corresponds to a length equal to the number of parameters, since the formula of the forward differences for calculating the gradients requires $N(N-1)+1$ energy evaluations, where $N(N-1)$ is the number of parameters. Below each optimization curve is a horizontal black line corresponding to the energy of the ground state, obtained numerically by the Lanczos algorithm. According to the variational principle, the calculated energies are upper bounds to the energy of the ground states. Starting from a lattice with $14$ spins, the optimizer gave no signal for termination, but due to a time limit of $24$ hours on the supercomputer \cite{JUWELS}, the calculations stopped. Due to the different number of compute nodes, circuit depths, and parameters, the total number of energy evaluations differs for each lattice size. The energy optimization process can be restarted from the last values of the parameters. An example is shown for the lattice with $25$ spins, as can be seen from the longer curve resulting from the extra energy evaluations. In all the cases shown, the XY-ansatz produced a final energy $E_f$, which is close to the ground state energy.
\begin{figure}
\caption{(Colour online) (A): Optimization progress using the XY ansatz for $11$ spins (first line), $12$ spins (second line) and so on, up to $26$ spins (bottom line). The small horizontal black lines represent the ground state energy. (B): Absolute difference in the variational and ground state energy per spin for different spins or lattice size.}
\label{fig2}
\end{figure}
\begin{figure}
\caption{(Colour online) (A): Progress comparison when initializing with zeros (solid lines) versus random values (dashed lines) as parameters for the XY-ansatz. (B): Same as (A), except for using the ansatz given by Eq.~(\ref{eqall}).}
\label{figuccd}
\end{figure}
The absolute difference between $E_f$ and the corresponding ground state energy $E_0$ per spin for each lattice size is shown in Fig.~\ref{fig2}(B). Results up to the lattice size of $6$ spins match exactly with the ground state energy (data not shown in (A)), and the differences show up for $N\geq 7$. The points cluster in two groups, one corresponding to lattices with an even and another one with an odd number of spins. We observe that the final energies for the lattices with an even number of spins are lower than for the ones with an odd number of spins. Additionally, we note that the ground states of the systems with an odd number of spins are degenerate. Through the variational principle, the ground state energies are mapped to the global minima of an energy landscape when an ansatz can express the ground state. There may be multiple global minima for degenerate cases. Due to this reason the energy landscape will be different from the even spin cases. Therefore, we conjecture that finding the ground state of degenerate cases using VQE is more challenging. This is also observed in the simulations we performed.
Figure~\ref{figuccd}(A) shows the energy optimization progress using two different sets of initial parameters. The legend indicates the lattice size. The lines can be separated into two groups, top (dashed) and bottom (solid). The bottom group corresponds to the case where the parameters were initialised as zeros, thus taking advantage of the N\'{e}el state and starting from a lower energy value. The top group of lines corresponds to parameters initialised as randomly distributed values in the interval $[0, 2\pi)$, not taking advantage of the N\'{e}el state and starting from a much higher energy, often close to $E = 0$. For all lattice sizes, initializing the parameters as zeros yields large drops in energy for the first few iterations, and $E_f$ is close to the ground state energy. After the significant drop, the (relative) progress slows down as the energy landscape becomes flatter near a minimum, and a large number of iterations is required to decrease the energy further. On the other hand, random initializations of the parameters do not yield significantly big drops in the energy in any of the cases. A prohibitively large number of energy evaluations seems required to obtain the same accuracy as when parameters are initialised as zeros. From a practical perspective, starting from random parameters does not appear to be very useful for the current problem.
Figure.~\ref{figuccd}(B) shows the energy optimization progress using the UCCSD inspired ansatz given by Eq.~(\ref{eqall}). Similar to plot (A), plot (B) shows the trend corresponding to the two different initializations of the parameters. Initializing all parameters as zeros is observed to have a significant initial drop in the energy, contrary to the random parameters. Interestingly, energy optimization progresses for cases with random initial parameters appear to be slower in proportion to the increasing lattice size, i.e. larger $N$ have a slower drop in energy. This effect appears to be consistent with what is termed in the recent literature as the \textit{barren-plateau} \cite{McClean2018,Campos2021,Holmes2021,Cerezo2020}. The larger lattice sizes, which require a larger number of parameters, lead to vanishingly small gradients. Given that vanishingly small gradients appear when one approaches a minimum, it is difficult to determine whether the random parameters landed in a local minimum of the energy landscape or at a barren-plateau. One way to find this out would be to restart multiple times with new sets of random parameters, but given that the barren-plateau effect is something that one aims to avoid, which is possible by initializing all parameters as zeros in this case, we skip such an approach.
In both plots (A) and (B) in Fig.~\ref{figuccd}, for parameters initialised as zeros, $E_f$ is very close to the ground state energy. This is understandable as the terms in the XY-ansatz are a subset of the terms given in Eq.~(\ref{eqall}). However, the difference between the two lies in the number of energy evaluations required to reach the ground state energy. The energies obtained with the XY-ansatz have the same (or comparable) values as the energies obtained with the UCCSD inspired ansatz, but are reached with far fewer energy evaluations and fewer parameters.
Despite the success of initializing the parameters as zeros, it should be noted that such an approach does not necessarily give the lowest possible energy given the ansatz. We performed random initializations with one hundred sets of random parameters for the smaller lattices. The results are shown in Fig.~\ref{figh}(A). We count and plot the number of cases in which starting the optimizer from random initial parameters yielded a final energy equal to or lower than that obtained when starting from the N\'{e}el state. We observe that the number of such cases drops sharply as the lattice size increases. However, finding even a single energy lower than that found when starting from the N\'{e}el state shows that the minimum energy reached by initializing all parameters as zeros, although being very close to the ground state energy, is still only a local minimum and usually not a global minimum.
\begin{figure}
\caption{(Colour online) (A): The number of cases where the final energy obtained using random initial parameters was either equal to or lower than the energy obtained by setting zeros as initial parameters. (B): Optimization progress for the ansatz given by Eq.~(\ref{1123}) for $p=1$ (group of shorter lines on the left) and $p=5$ (group of longer lines on the right). The legend shows the lattice size. The short horizontal black lines represent the ground state energy.}
\label{figh}
\end{figure}
Figure~\ref{figh}(B) shows the energy optimization progress using the ansatz given by Eq.~(\ref{1123}). The lines can again be divided into two groups. The shorter lines on the left correspond to $p=1$ and the long lines on the right to $p=5$. The lines for $p=5$ are longer simply because in this case there are five times more parameters than for $p=1$, and so five times more energy evaluations are required to compute the gradient per iteration. For this ansatz, the parameters need to be initialised randomly since initializing them with zeros (for the tested cases with $N\leq 10$) results in the optimizer being trapped in a local minimum at the first iteration. It is observed that simply increasing the number of parameters in the circuit is no guarantee for improving the minimum energy. While for the lattice with $19$ spins a lower energy is reached with $p=1$ than with $p=5$, the opposite is the case for the lattice with $20$ spins. While a larger parameter space may facilitate a broader spectrum of states, the energy landscape may be the cause of this observation as local minima may surround the global minimum and impede reaching it. Since a random initialization would not be able to take advantage offered by the N\'{e}el state, and would thus require a much larger number of energy evaluations to converge to $E_f$, we restrict our study to only one such random initialization.
\begin{figure}
\caption{(Colour online) (A): Energy optimization progress using the XY-ansatz for different Hamiltonians with random coupling interactions for the different numbers of spins shown in the legend. (B): Energy optimization progress using gradient-based (solid lines) and gradient-free (dashed lines) optimizers for rings with random couplings between the spins with $12$ spins (first pair of the lines from top), $14$ spins (second pair of lines from the top), and so on up to $20$ spins (last pair of lines at the bottom). The short horizontal black lines represent the ground state energy.}
\label{figco}
\end{figure}
Figure~\ref{figco}(A) shows the energy optimization progress for the random case of the anti-ferromagnetic one-dimensional ring. Using the XY-ansatz, the parameters were initialised as zeros. Except for the ring with $10$ spins, none of the optimization processes signalled convergence, and the optimization could be continued to reduce the energy further if required. A quick drop in the initial energy is also observed for the random case, and all the energies are reasonably close to the ground state energy.
Figure~\ref{figco}(B) shows the energy optimization progress using the gradient-based optimizer SLSQP (solid lines) and the gradient-free optimizer COBYLA (dashed lines). We only show the results for the even lattice sizes. The results for the odd lattice sizes are similar. The gradient-based method gives lower energies at each energy evaluation and finds a much lower minimum faster than the gradient-free method. This result confirms the commonly accepted notion that for noiseless functions, gradient-based methods are superior.
\begin{figure}
\caption{(Colour online) (A): Energy optimization progress for isotropic ladders with open boundary conditions. The top line is for a lattice with $3\times 2$ spins, the second for one with $4\times 2$ spins and so on until the bottom line which is for a lattice with $13\times 2$ spins. (B): Energy optimization progress for two-dimensional square lattices of size $4\times 4$ and $5\times 5$ with open boundary conditions and $6\times 6$ with periodic boundary conditions. The dashed line is for a frustrated triangular lattice of dimensions $5 \times 6$ with open boundary conditions. The short horizontal black lines represent the ground state energy.}
\label{lad}
\end{figure}
\subsection{Two-dimensional lattices}
We apply the XY-ansatz without any changes. Results for an isotropic ladder Hamiltonian with open boundary conditions are shown in Fig.~\ref{lad}(A). The results show the energy optimization progress for ladders of size $3\times 2$ up to size $13\times 2$. For example, the $8\times 2$ ladder, where the optimizer did not converge even after two runs of continued optimization, reported $E_f/N = -2.212$ as compared to the ground state energy per spin $E_0/N=-2.229$.
Figure~\ref{lad}(B) shows results for three cases with square lattices and a $5\times 6$ (frustrated) triangular lattice. The $4\times 4$ and $5\times 5$ lattices had open boundary conditions and the final energies were $E_f/N = -2.218$ and $E_f/N = -2.298$ as compared to $E_0/N=-2.297$ and $E_0/N=-2.351$, respectively. The optimized (unoptimized) circuit depths were $805$ $(1398)$ and $1939$ $(3531)$, respectively. For the frustrated lattice, $E_f/N = -1.817$ as compared to $E_0/N=-1.986$. The results for the frustrated lattice show a slightly larger gap in the energy obtained and the ground state. This could either be explained by assuming that the ansatz is less suitable for the case with the frustrated lattice or that the energy obtained when initializing all parameters as zeros corresponds to a local minimum far away from the global minimum.
The case of the square lattice of size $6\times 6$ with periodic boundary conditions poses a specific difficulty with the parameter optimization. The problem is that the XY-ansatz for $36$ spins has $1260$ parameters, and only one iteration can be performed as the evaluation of the gradient is possible only once within $24$ hours, the time per job on the supercomputer. Since the quasi-Newton methods require multiple iterations without losing the internal variables that systematically improve the convergence per iteration, the optimization is ill-suited for larger problems that cannot undergo multiple iterations in one run. However, this problem can be avoided by saving the internal variables of the optimizer, but this approach is beyond the scope of this work. Despite such a drawback, we proceeded with the first few iterations without saving the internal variables and were able to see a reasonable drop in the initial energy. The circuit depth after optimizing the circuit was reduced from $7458$ to $3985$. The final energy was $E_f/N = -2.631$ as compared to $E_0/N=-2.715$.
\begin{figure}
\caption{(Colour online) (A): Least squares fitting to the variational energies obtained for the one-dimensional isotropic rings of different sizes. The slope of the line, $-1.7783$, gives the energy for the infinite ring case. (B): Absolute differences of the variational and ground state energies for different lattice sizes for the isotropic ring using the ansatz from Eq.~(\ref{1123}) (with $p=1$ and $p=5$), the ansatz from Eq.~(\ref{eqall}), and the XY-ansatz.}
\label{figl}
\end{figure}
\subsection{Extrapolation}
The energies obtained for the one-dimensional lattices can be fitted to a line for extrapolation, as shown in Fig.~\ref{figl}(A). The slope of the line that gives the energy per spin in the \textit{thermodynamic limit}, is $-1.7783$, which can be compared to the exact value $-1.7726$, known from the Bethe ansatz \cite{Parkinson2010}. The reported value only differs from the exact value by $5.7\times 10^{-3}$. Note that while the variational theorem guarantees that the energy obtained is a strict upper bound to the ground state, this is not necessarily the case when estimating the value in the thermodynamic limit by means of a fitted extrapolation.
One measure of an ansatz's ability to scale up beyond what classical computers can simulate is to predict, given the available data, the expected difference between the VQE and exact energies. Such a calculation can be performed by extrapolating the available data. Figure.~\ref{figl}(B) shows data for four different ans\"{a}tze, namely the ansatz from Eq.~(\ref{1123}) with $p=1$ (green triangles), $p=5$ (red inverted triangles), the ansatz from Eq.~(\ref{eqall}) (orange squares), and the XY-ansatz (blue dots). Although some of the orange squares are lower than the corresponding blue dots, indicating that a lower energy was obtained using the ansatz from Eq.~(\ref{eqall}) compared to using the XY-ansatz, the difference is small, and the XY-ansatz is the preferred choice because its number of parameters is significantly smaller. For both the $p$ values tested, the obtained data for the ansatz from Eq.~(\ref{1123}) puts it out of competition with the XY-ansatz. Moreover, there is no clear pattern that may help predict the behaviour for the ansatz from Eq.~(\ref{1123}) beyond the available data. Linear fitting is performed for the XY-ansatz, and the slope is equal to $2.0829 \times 10^{-2}$ with the intercept $-8.6202\times 10^{-2}$. Using the given slope, we predict that for a $100$-spin ring, the expected energy per spin will be higher than the ground state energy per spin by a value of approximately $0.02$.
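Both fits are ordinary linear least squares; a generic Python helper of the following form suffices. No data are hard-coded here: \texttt{sizes} are the lattice sizes, while \texttt{values} are either the total variational energies (whose fitted slope estimates the energy per spin in the thermodynamic limit) or the absolute energy differences (whose fitted slope estimates the asymptotic gap per spin).
\begin{verbatim}
# Linear least-squares fit used for the extrapolations in Fig. 7.
import numpy as np

def linear_fit(sizes, values):
    """Fit values ~ slope*sizes + intercept; returns (slope, intercept)."""
    slope, intercept = np.polyfit(np.asarray(sizes, float),
                                  np.asarray(values, float), 1)
    return slope, intercept
\end{verbatim}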
\section{Conclusions}\label{sec4}
We calculated the minimum energy for various implementations of the Heisenberg model for the one- and two-dimensional, isotropic, frustrated, and randomly-coupled lattices, using gradient-based and gradient-free optimizers, and different ans\"{a}tze. The herein proposed XY-ansatz shows reasonable results if all the simulation variables are optimized, i.e. a suitable initial state combined with a good quality optimizer and a good choice of initial parameters.
Given an ansatz, there is currently no analytical method to ascertain if its global minimum corresponds to the ground state energy. Thus, we rely on optimization algorithms to navigate through the multi-dimensional rugged energy landscapes. In many such landscapes, there appear multiple local minima close to the ground state energy. For the cases where the exact ground state energy was not obtained using the XY-ansatz, it remains an open question if the global minimum of the energy landscape corresponds to the ground state energy. Thus, an improvement of the optimization algorithms appears to be essential for the further success of hybrid variational methods.
For the variational simulations performed in this work, initializing the variables as zeros instead of random numbers produced better results. For the anti-ferromagnetic Heisenberg model, it is known that the N\'{e}el state is an efficient initial state, and starting from zeros takes advantage of this knowledge. Therefore, in general, it is the knowledge or insight about a particular problem Hamiltonian that is relevant for an improved performance of the variational methods.
\end{document} |
\begin{document}
\title{Efficient estimation of nearly sparse many-body quantum Hamiltonians}
\author{A. Shabani}
\affiliation{Department of Chemistry, Princeton University,
Princeton, New Jersey 08544}
\author{M. Mohseni}
\affiliation{Research Laboratory of Electronics, Massachusetts
Institute of Technology, Cambridge, MA 02139}
\author{S. Lloyd}
\affiliation{Research Laboratory of Electronics, Massachusetts
Institute of Technology, Cambridge, MA 02139}
\affiliation{Department of Mechanical Engineering, Massachusetts
Institute of Technology, Cambridge, MA 02139}
\author{R. L. Kosut}
\affiliation{SC Solutions, Sunnyvale, CA 94085}
\author{H. Rabitz}
\affiliation{Department of Chemistry, Princeton University,
Princeton, New Jersey 08544}
\begin{abstract}
We develop an efficient and robust approach to Hamiltonian
identification for multipartite quantum systems based on the method
of compressed sensing. This work demonstrates that with only
$\mathcal{O}(s \log(d))$ experimental configurations, consisting of
random local preparations and measurements, one can estimate the
Hamiltonian of a $d$-dimensional system, provided that the
Hamiltonian is nearly $s$-sparse in a known basis. We numerically
simulate the performance of this algorithm for three- and four-body
interactions in spin-coupled quantum dots and atoms in optical
lattices.
Furthermore, we apply the algorithm to characterize Hamiltonian fine
structure and unknown system-bath interactions.
\end{abstract}
\maketitle
\section{introduction}
The dynamical behavior of multipartite quantum systems is governed by
the interactions amongst the constituent particles. Although
physical or engineering considerations may specify some generic properties of
the quantum dynamics, the specific form
and the strength of multi-particle interactions are typically
unknown. Additionally, quantum systems usually have an unspecified
interaction with their surrounding environment. In principle, one
can characterize quantum dynamical systems via
\textquotedblleft quantum process tomography" (QPT)
\cite{dcqd3,dcqd31,dcqd32,dcqd33,dcqd34,dcqd35,dcqd36,dcqd37}.
However, the relationship between relevant physical properties of a
system and the information gathered via QPT is typically unknown.
Alternatively, knowledge about the
nature of inter- and intra- many-body interactions within the system
and/or its environment can be constructed by identifying a set of (physical
or effective) Hamiltonian parameters generating the dynamics
\cite{JM,Cory,Cory2,Cory3,Cole,Cole2,Soph,Levi,Mohseni09,Franco}. Currently,
a scalable approach for efficient estimation of a full set of Hamiltonian
parameters does not exist.
The dynamics of a quantum system can be estimated by observing the
evolution of some suitable test states. This can be achieved by a
complete set of experimental configurations consisting of
appropriate input states and observables measured at given time
intervals. Knowledge about the dynamics may then be reconstructed
via inversion of the laboratory data by fitting a set of dynamical
variables to the desired accuracy. Estimating Hamiltonian parameters
from such a procedure faces three major problems: (1) The number of
required physical resources
grows exponentially with the degrees of freedom of the system
\cite{dcqd3,dcqd31,dcqd32,dcqd33,dcqd34,dcqd35,dcqd36,dcqd37}. (2) There are
inevitable statistical errors associated with the inversion of
experimental data \cite{dcqd3,dcqd31,dcqd32,dcqd33,dcqd34,dcqd35,dcqd36,dcqd37}.
(3) The inversion generally involves
solving a set of nonlinear and non-convex equations, since the
propagator is a nonlinear function of Hamiltonian parameters
\cite{JM,Cory,Cory2,Cory3,Cole,Cole2,Soph,Levi,Mohseni09,Franco}.
The first two problems are always present with any
form of quantum tomography, but the last problem is specific to the
task of Hamiltonian identification as we wish to reconstruct the
generators of the dynamics. Many quantum systems involve two-body
local interactions, so the goal is often to estimate sparse
Hamiltonians with effectively a polynomial number of unknown
parameters. Unfortunately, quantum state and process tomography
cannot readily exploit this potentially useful feature.
The highly nonlinear feature in the required inversion of laboratory
data was studied in Ref.\cite{JM} in which closed-loop learning
control strategies were used for the Hamiltonian identification. In
that approach one estimates the unknown Hamiltonian parameters by
tailoring shaped laser pulses to enhance the quality of the
inversion. Identification of time-independent (or piece-wise
constant) Hamiltonians has been studied for single-qubit and
two-qubit cases \cite{Cole,Cole2} to verify the performance of quantum
gates. Estimation of these Hamiltonians is typically achieved via
monitoring the expectation values of some observable, e.g.
concurrence, which are time periodic functions. Through Fourier
transform of this signal the identification task is reduced to
finding the relative location of the peaks and heights of the
Fourier spectrum \cite{Cole,Cole2}. Bayesian analysis is another method
proposed for robust estimation of a two-qubit Hamiltonian \cite{Soph}.
The difficulty with these methods is
then scalability with the size of the system. A symmetrization
method for efficient estimation of the magnitude of effective
two-body error generators in a quantum computer was studied in
\cite{Levi} by monitoring quantum gate average fidelity decay.
Recently, it was demonstrated that direct or selective QPT schemes
could be used for efficient identification of short-time behavior of
sparse Hamiltonians \cite{Mohseni09} assuming controllable two-body
quantum correlations with auxiliary systems and the exact knowledge
of the sparsity pattern. Another scheme for the determination of the
coupling parameters in a chain of interacting spins with restricted
controllability was introduced in Ref. \cite{Franco}.
In this work, inspired by recent advances in classical signal
processing known as \emph{compressed sensing} \cite{CS.intro}, we
use random \textit{local} input states and measurement observables
for efficient Hamiltonian identification. We show how the
difficulties with the nonlinearity of the equations can be avoided
by either a short time or a perturbative treatment of the dynamics.
We demonstrate that randomization of the measurement observables
enables compressing the extracted Hamiltonian information into a
exponentially smaller set of outcomes. This is accomplished by a
generalization of compressed sensing to utilize random matrices with
correlated elements. This approach is applicable for Hamiltonians
that are nearly sparse in a known basis with an arbitrary unknown
sparsity pattern of parameters. The laboratory data can then be
inverted by solving a convex optimization problem. This algorithm is
highly tolerant to noise and experimental imperfections. The power
of this procedure is illustrated by simulating three- and four-body
Hamiltonians for neutral atoms in an optical lattice and
spin-coupled quantum dot systems, respectively.
Furthermore, we directly apply the algorithm to estimate Hamiltonian
fine structure and characterize unknown system-bath interactions for
open quantum systems.
\section{Quantum dynamical equations}The time evolution of a quantum
system in a pure state is governed by the Schr\"{o}dinger equation, $
d\left|\psi(t)\right\rangle/dt =-iH\left|\psi(t)\right\rangle $. The
solution of this equation for a time-independent Hamiltonian can be simply
expressed as $\left|\psi(t)\right\rangle
=\exp(-itH)\left|\psi(0)\right\rangle $. In principle, the Hamiltonian of the
system $H$ can be estimated by preparing an appropriate set of test states
$\{\left|\psi_k\right\rangle\}$ and measuring the expectation value of a
set of observables $\{M_j\}$ after the system has evolved for a certain
period of time. The expectation value of these observables can be expressed as
\begin{equation}
p_{jk}=\langle M_j\rangle_{\psi_k}=\left\langle\psi_k\right| e^{itH}M_j
e^{-itH}\left|\psi_k\right\rangle \label{nonlinear}
\end{equation}
Equation (\ref{nonlinear}) implies that the experimental outcomes $
\{p_{jk}\}$ are nonlinear functions of the Hamiltonian parameters. To
avoid the difficulties of solving a set of coupled nonlinear equations we
consider the short time behavior of the system. Monitoring the short time
dynamics of the system is valid when the relevant time scales of the system
evolution satisfy $t \ll K^{-1}$ where, for positive operator-valued measure (POVM) operators $\{M_j\}$,
the constant $K$ equals $2||H||_{spec}$.
The general expression for $K$ is given in appendix B;
see appendix A for the definition of the norms.
Expanding $e^{itH}M_{j}e^{-itH}=M_{j}+it[H,M_{j}]+\mathcal{O}(K^{2}t^{2})$ then yields the linearized form of Eq.~(\ref{nonlinear}):
\begin{eqnarray}
p_{jk}=\left\langle \psi_{k}\right|M_{j}\left|\psi_{k}\right\rangle
+it\left\langle \psi_{k}\right|[H,M_{j}]\left|\psi_{k}\right\rangle+\mathcal{O}(K^2t^2)
\label{linear}
\end{eqnarray}
The linear approximation contains enough information to
fully identify the Hamiltonian and the higher order terms do not
provide additional information. The short-time approximation implies
prior knowledge about the system dynamical time-scale or the order of
magnitude of $||H||_{spec}$. This prior knowledge is typically available
from generic physical and engineering considerations.
For example, in solid-state quantum
devices the time-scale of single-qubit rotations is typically on the
order of 1-10 ns. The switching time for exchange interactions
varies among different solid-state systems from 1 ps to 100 ps (for
more details see appendix B).
We expand the Hamiltonian in an orthonormal basis $
\{\Gamma_{\alpha}\}$, where $\text{Tr}(\Gamma^{\dagger}_{\alpha}\Gamma_{
\beta})=d\delta_{\alpha,\beta}$: $H=\sum_{\alpha} h_{\alpha}\Gamma_{\alpha}$.
Here $d$ is the dimension of the Hilbert space. In this representation the
Hamiltonian parameters are the coefficients $h_{\alpha}$. The expanded form
of the above affine equation (\ref{linear}) is
\begin{eqnarray}
\bar{p}_{jk}=it\sum_{\alpha}\left\langle
\psi_{k}\right|[\Gamma_{\alpha},M_{j}]\left|\psi_{k}\right\rangle h_{\alpha}
\label{expanded}
\end{eqnarray}
Here we introduce the experimental outcomes as $\bar{p}_{jk}=p_{jk}-\left\langle \psi_{k}\right|M_{j}\left|\psi_{k}\right\rangle$,
since $\left\langle \psi_{k}\right|M_{j}\left|\psi_{k}\right\rangle$ is
\textit{a priori} known. The relation (\ref{expanded}) corresponds to a single
experimental configuration ($M_j$,$\left|\psi_{k}\right\rangle$). For a $d$-dimensional system, the total number of
Hamiltonian parameters $h_{\alpha}$ is $d^2$. Thus, one requires the
same number of experimental outcomes $p_{jk}$, leading to $d^2$ linearly
independent equations.
For a system of $n$ qubits, this number
grows exponentially with $n$ as $d^2=2^{2n}$. In order to devise an efficient
measurement strategy we will focus on physically motivated \textit{nearly sparse}
Hamiltonians.
A Hamiltonian $H$ is considered to be $s$-sparse if it only contains
$s$ non-zero parameters $\{h_{\alpha}\}$. More generally, a
Hamiltonian $H$ is termed nearly $s$-sparse, for a threshold
$\eta$, if at most $s$ coefficients $h_{\alpha}$ ($H=\sum
h_{\alpha}\Gamma_{\alpha}$) have magnitude greater than $\eta h_{max}$,
where $h_{max}=\max_{\alpha}|h_{\alpha}|$.
By definition, the sparsity is basis dependent. However, for local interactions, the basis in which the Hamiltonian
is sparse is typically known from physical or engineering considerations.
\section{Compressed Hamiltonian estimation} Our algorithm
is based on general methods of so-called compressed
sensing that have recently been developed in signal processing theory \cite
{CS.intro}. Compressed sensing allows for condensing signals and
images into a significantly smaller amount of data,
and recovery of the signal becomes possible
from far fewer measurements than required by traditional methods.
Compressed sensing has two main steps: encoding and decoding. The
information contained in the signal is mapped into a set of laboratory data
with an exponentially smaller representation. This compression can be
achieved by randomization of data acquisition. The actual signal
can be recovered via an efficient algorithm
based on convex optimization methods.
Compressed sensing has been
applied to certain quantum tomography tasks. Standard compressed
sensing has been directly used for efficient pseudothermal ghost
imaging \cite{GI:09,GI:091}.
Recently, a quadratic reduction in the total number of
measurements for quantum tomography of a low rank density matrix has
been demonstrated using a compressed sensing approach \cite{Gross:09}.
Here, we first describe how the Hamiltonian information is compressed into the
experimental data. The output of a single measurement is related to the
unknown signal (Hamiltonian parameters) through the relation (\ref{expanded}
). Suppose we try $m$ different experimental configurations (i.e., $m$ different
pairs of $(M_j,\left| \psi_k\right\rangle)$). This yields a set of
linear equations
\begin{equation}
\overrightarrow{p^{\prime }}=\Phi\overrightarrow{h} \label{linear-relation}
\end{equation}
where $\Phi$ is an $m\times d^2$ matrix with elements $\Phi_{jk,\alpha}=it/\sqrt{m}
\left\langle
\psi_{k}\right|[\Gamma_{\alpha},M_{j}]\left|\psi_{k}\right\rangle$
(A
factor $1/\sqrt{m}$ is included for simplifying the proofs, appendix C). In
general $m$ has to be greater than or equal to $d^2$ in order to solve Eq. (
\ref{linear-relation}). A Hamiltonian estimation attempt with $m<d^2$
seems impossible as we face an underdetermined system of linear equations
with an infinite number of solutions. However, any two $s$-sparse
Hamiltonians $h_1$ and $h_2$ can still be distinguished via a properly
designed experimental setting, if the measurement matrix $\Phi$ preserves the
distance between $h_1$ and $h_2$ to a good approximation:
\begin{equation}
(1-\delta_{s}) ||h_2-h_1||_{l_2}^{2} \leq ||\Phi (h_2-h_1)||_{l_2}^{2} \leq
(1+\delta_{s})||h_2-h_1||_{l_2}^{2} \label{RIP}
\end{equation}
for a constant $\delta_s\in (0,1)$. A smaller $\delta_s$ ensures
higher distinguishability of $s$-sparse Hamiltonians. The inequality relation (\ref{RIP})
is termed a \textit{restricted isometry property} (RIP) of the matrix $\Phi$ \cite{RIP2}. We now
discuss how to construct a map $\Phi$ satisfying this inequality, and how
small the value of $m$ can be made.
The RIP (\ref{RIP}) for a matrix $\Phi$ can be established by employing the
measure concentration properties of random matrices.
In each experiment the test state
and the measurement observable can be drawn randomly from a set of
configurations $\{M_j,\left| \psi_k\right\rangle\}$ realizable in the laboratory.
The independent selection of $\left| \psi_k\right\rangle$ and $M_j$ leads to a matrix $\Phi$ with
independent rows but correlated elements $\Phi_{jk,\alpha}$ in each row.
Thus the standard results from compressed sensing theory are not applicable here (appendix C).
Instead, in appendix C we derive a concentration inequality for a matrix with
independent rows and correlated columns,
which serves as the backbone for the RIP of our quantum problem.
Using Hoeffding's inequality,
we show that for any Hamiltonian $h$ and
a random matrix $\Phi$ whose correlations occur only among the columns, the random variable $||\Phi h||^2$ is
concentrated around $||h||^2$ with high probability, i.e. $
\forall$ $0<\delta<1$
\begin{equation}
\text{Prob.}\left\{\left| ||\Phi h||_{l_2}^2-||h||_{l_2}^2\right|\geq \delta ||h||_{l_2}^2\right\}\leq
2e^{-mc_0(\delta+c_1)^2} \label{RIP2}
\end{equation}
for some constants $c_0$ and $c_1$.
Using the above inequality, now we can show how an
exponential reduction in the minimum number of the required
configurations can be achieved for Hamiltonian estimation. The inequality (\ref{RIP2}) is
defined for any $h$ while the inequality in the definition of
RIP, Eq.(\ref{RIP}), is for any $s$-sparse $h$. As
shown in Ref. \cite{RIP}, there is an inherent
connection between these two inequalities. It is proved that any
matrix $\Phi$ satisfying (\ref{RIP2}) has RIP with probability
greater than $1-2\exp(-mc_0(\delta_{s/2}+c_1)^2+s[\log(d^4/s)+\log(12e/\delta_{s/2})])$.
In addition, whenever $m\geq c_2s\log(d^4/s)$ for a sufficiently
large constant $c_2$, one can find a constant $c_3\geq 0$ such that
the likelihood of the RIP being satisfied converges exponentially fast to unity as $1-2\exp(-c_3
m)$.
The set of experimental configurations defined by Eq. (\ref{linear-relation}), together with the concentration properties given
by Eqs. (\ref{RIP}) and (\ref{RIP2}), can be understood as encoding the information of
a sparse Hamiltonian into a space of lower dimension.
Next we need to provide an efficient method
for decoding in order to recover the original Hamiltonian. The
decoder is simply the minimizer of the $l_1$ norm of the signal $h$.
Implementing this decoder is a special convex optimization problem, which can be solved
via fast classical algorithms, yet it is not strictly scalable. Furthermore, the encoding/decoding scheme is robust to
noisy data as $||p^{\prime }-\Phi h||_{l_2}\leq \epsilon$, where $\epsilon$
is the noise threshold. Note that $\epsilon$ includes the linearization error (see Eq. (\ref{linear})),
which is $\mathcal{O}(\sqrt{m}Kt^2)$. Denote by $h_0$ the true representation of the
Hamiltonian. For a threshold $\eta$, $h_0(s)$ is an approximation to $h_0$ obtained by selecting
the $s$ elements of $h_0$ as those that are larger than $\eta h_{max}$ and setting the remaining elements to
zero. Now we state our main result:
\section{Algorithm Efficiency}
\textit{If the measurement matrix $\Phi\in\mathbb{C}^{m\times d^4}$ is drawn
randomly from a probability distribution that satisfies the concentration
inequality (\ref{RIP2}), with $\delta_s<\sqrt{2}-1$, then there exist constants $c_2,c_3,d_1,d_2>0$
such that the solution $h^\star$ to the convex optimization problem,
\begin{align}
&\text{minimize $||h||_{l_1}$} \notag \\
&\text{subject to $||p^{\prime }-\Phi h||_{l_2}\leq \epsilon$},
\label{decoder}
\end{align}
satisfies,
\begin{align}
||h^\star-h_0||_{l_2} \leq \frac{d_1}{\sqrt{s}} ||h_0(s)-h_0||_{l_1}+d_2
\epsilon \label{nearlysparse}
\end{align}
with probability $\geq 1-2e^{-mc_3}$ provided that,
\begin{eqnarray}
m \geq c_2s\log(d^4/s),\label{m}
\end{eqnarray}}where the performance of an $l_1$ minimizer, Eq. (\ref{nearlysparse}), and
the necessary bound $\delta_s<\sqrt{2}-1$ are derived by Cand\`es in Ref. \cite{candes-rip}.
As an example, for a system consisting of $n$ interacting qubits,
the exponential number of
parameters describing the dynamics, $2^{2n}$, can be estimated with a
linearly growing number of experiments $m \geq c_2s(8\log(2)n-\log(s))$.
The second term, $d_2\epsilon$, indicates that the algorithmic
performance is bounded by the experimental uncertainties.
Consequently, for fully sparse Hamiltonians and $\epsilon=0$ the
exact identification of an unknown Hamiltonian is achievable.
The properties of the ensemble from which the states and measurement observables
are chosen would determine the parameter $\delta_s$ and consequently the performance
of the algorithm.
The linear independence of the rows of $\Phi$ for a random set of local state preparations
and observables can be guaranteed with a polynomial amount of computational overhead before conducting the experiments.
A certification of the near-sparsity assumption can be obtained from
Eqs. (\ref{nearlysparse}) and (\ref{m}) as follows: Suppose $h^\star_m$ is the algorithm's outcome
for $m$ configurations. The near-sparsity assumption is certified on the fly during the experiment
if the estimation improvement $||h^\star_{m+1}-h^\star_m||$
converges to zero for a polynomially large total number of configurations.
\section{Physically nearly sparse Hamiltonians}
Although physical systems at the fundamental level involve local
two-body interactions, many-body Hamiltonians often describe quantum
dynamics in a particular representation
or in well defined approximate limits.
The strength of the non-local $k$-body terms is typically much smaller
than that of the two-body terms, of strength $J$, and decreases with the number $k$.
For a fixed sparsity threshold $\eta$, $k_{\eta}$ is defined as the largest number $k$
for which the $k$-body terms have strength larger than $\eta J$.
Then the number of elements of an $s$-sparse approximation of an $n$-body Hamiltonian
grows linearly as $\mathcal{O}(n g(k_\eta))$, where $g(k_\eta)$ is determined by the geometry of the system.
A general class of many-body interactions arises when we
change the basis for a bosonic or fermionic system expressed by a
(typically local) second-quantized Hamiltonian to a Pauli basis,
e.g., via a Jordan-Wigner transformation. For fermionic systems the
interactions are imposed physically from Coulomb's force and Pauli
exclusion principle.
The second-quantized Hamiltonian for these systems can be generally
written as:
\begin{equation}
\hat{H}=\sum_{p,q}b_{pq}\hat{a}_{p}^{+}\hat{a}_{q}+\sum_{p,q,r,s}b_{pqrs}
\hat{a}_{p}^{+}\hat{a}_{q}^{+}\hat{a}_{r}\hat{a}_{s},
\label{eq:ham2nd}
\end{equation}
where the annihilation and creation operators ($\hat{a}_{j}$ and $\hat{a}
_{j}^{+}$ respectively) satisfy the fermionic anti-commutation relations: $\{
\hat{a}_{i},\hat{a}_{j}^{+}\}=\delta _{ij}$ and $\{\hat{a}_{i},\hat{a}
_{j}\}=0$ \cite{Mahan}.
For example, in chemical systems the coefficients $b_{pq}$ and
$b_{pqrs}$ can be evaluated using the Hartree-Fock procedure for $N$
single-electron basis functions. The Jordan-Wigner transformation
can then be used to map the fermionic creation and annihilation
operators into a representation in terms of Pauli matrices
$\hat{\sigma}^{x},$~$\hat{\sigma}^{y},\hat{\sigma}^{z} $. This
allows for a convenient implementation on a quantum computer, as was
demonstrated recently for the efficient simulation of chemical
energy of molecular systems \cite{Lanyon}. An important example of a
Coulomb based Hamiltonian is the spin-coupled interactions in
quantum dots which has the following Pauli representation:
\begin{equation}
H=\sum_{i,j,k,\cdots }b_{i,j,k,\cdots }\sigma_A^{i}\otimes
\sigma_B^{j}\otimes \sigma_C^ {k}\cdots ,
\end{equation}
where $A,B,C,\cdots$ indicate the locations of the quantum dots,
the $\sigma^i$ are Pauli operators, and $
b_{i,j,k,\cdots }$ generally represents a many-body spin interaction
term. In practice, these Hamiltonians are highly sparse or almost
sparse due to symmetry considerations associated
with total angular momentum \cite{mizel}.
For example the Hamiltonian for the case of four
quantum dots ($A,B,C,D$) takes the general form \cite{mizel}:
\begin{align}
H_{exchange}& =J\sum_{A\leq i<j\leq D}\sigma_{i}\cdot\sigma_{j}+J^{\prime
}[(\sigma_{A}\cdot\sigma_{B})(\sigma_{C}\cdot\sigma_{D}) \notag \\
&
+(\sigma_{A}\cdot\sigma_{C})(\sigma_{B}\cdot\sigma_{D})+(\sigma_{A}\cdot\sigma_{D})(\sigma_{B}\cdot\sigma_{C})],\label{four-body}
\end{align}
Another class of effective many-body interactions often emerges in a perturbative
and/or short time expansion of dynamics, such as effective
three-body interactions between atoms in optical lattices
\cite{Pachos:04} that we study in this work.
Next, we
simulate the performance of our algorithm for estimation of such
sparse many-body Hamiltonians in optical lattices \cite{Pachos:04}
and quantum dots \cite{mizel}.
\subsection{Three-body interactions in optical lattices}
An optical lattice is a periodic potential formed from interference
of counterpropagating laser beams where neutral atoms are typically
cooled and trapped one per site.
Consider four sites in two adjacent
building blocks of a triangular optical lattice filled by two
species of atoms \cite{Pachos:04}. The interaction between atoms is facilitated by
the tunneling rate $J$ between neighboring sites and collisional
couplings $U$ when two or more atoms occupy the same site. For each site an
effective spin is defined by the presence of one type of atom
as the up-state $\uparrow$ and the presence of the other type as the down-state $
\downarrow$. Three-body interactions between atoms in a triangular optical
lattice can be significant. The effective Hamiltonian for this
system is studied in Ref. \cite{Pachos:04}. The on-site collisional
interaction $U$ and the tunneling rates $J=J^\uparrow=2J^\downarrow$ are
taken to be the same for all sites, with $U=U_{\uparrow\uparrow}=U_{\downarrow
\downarrow}=2.12U_{\uparrow\downarrow}=10\,$kHz. The effective Hamiltonian of
the 4-spin system is
\begin{align}
H_{opt-latt}=\sum_{j,\alpha=x,y,z} b_1^\alpha \sigma^\alpha_j\sigma^\alpha_{j+1}
+ b_2^\alpha\sigma^\alpha_j\sigma^\alpha_{j+1}\sigma^\alpha_{j+2}
\end{align}
where $\{b_1^\alpha,b_2^\alpha\}$ are functions of $\{J,U\}$ and their explicit forms are given in appendix D.
The ratio $\eta=|J/U|$ quantifies the sparsity level. For a fixed value of $U$, a smaller $J$ leads to weaker three-body
interactions and therefore a higher level of sparsity. As expected, this
enhances the algorithm performance.
We assume that the system can be initialized in a random product state $\left|
\psi_{k}\right\rangle=\left| \psi_{k}^1\right\rangle\otimes ...\otimes
\left| \psi_{k}^4\right\rangle$, where the $\left| \psi_{k}^i\right\rangle$ are drawn from
the distribution induced by the Fubini-Study metric. The required observables for the algorithm
are uniformly selected from single qubit Pauli operators $\{\sigma^x_i,\sigma^y_i,\sigma^z_i\}$. This choice
of states and observables allows for $\delta_s\approx 0.37<\sqrt{2}-1$. Let us denote
the extracted Hamiltonian and the true Hamiltonian by $H^*$ and $H_{true}$,
respectively. Here, the performance of the algorithm is defined as one minus the relative error, $
1-||H^*-H_{true}||_{fro}/||H_{true}||_{fro}$. The results for different
number of configurations are depicted in Fig. (\ref
{fig_optlattice}), for various values of $J$.
As evident in Fig. (\ref{fig_optlattice}), a performance accuracy above
$94\%$ can be obtained with only 80 settings, significantly fewer than the approximately $6\times10^4$ configurations required in QPT.
The robustness of this scheme was also investigated for 10\%
random error in the simulated experimental data, leading to about a 5\% reduction in the overall performance.
\begin{figure}
\caption{The Hamiltonian estimation average performance is illustrated for a system
of four adjacent sites in an optical lattice for different tunneling rates $J$ and collisional coupling $U=10\,$kHz.
The error bars demonstrate the standard deviation of the performance
due to the random and independent selection of $m$ configurations (shown only for $J=5\,$kHz).
A performance accuracy above $90\%$ with only 60 settings is achievable for $J=1\,$kHz, which is significantly
fewer than the roughly $6\times10^4$ experimental configurations required in QPT.}
\label{fig_optlattice}
\end{figure}
\subsection{Four-body interactions in quantum dots}
Another important class of effective many-body Hamiltonians
can be obtained for electrons in quantum dots coupled through an isotropic
(Heisenberg) or anisotropic exchange interaction.
For example the Hamiltonian for the case of four
quantum dots ($A,B,C,D$) takes the general form Eq. (\ref{four-body}).
The first term in the
summation is a two-body Heisenberg exchange interaction and the last
three terms are four-body spin interactions.
In certain regimes, the ratio $|J^{\prime }/J|$ can reach up to
$16\%$. The amplitude of $\eta=|J^{\prime }/J|$ determines the sparsity level
of the Hamiltonian.
Here we use an efficient
modification of the signal recovery referred to as ``reweighted $l_1$-minimization'', which is
described in appendix E. The performance of this algorithm is
demonstrated in Fig. (\ref{fig_exchange}) that shows a significant reduction of the
required number of settings in contrast to the standard QPT.
\begin{figure}
\caption{Estimation of the exchange interaction Hamiltonian for four
electrons in quantum dots. The average performance of the procedure is illustrated for
different values of $|J^{\prime }/J|$.}
\label{fig_exchange}
\end{figure}
\section{Characterization of Hamiltonian fine structures and system-bath interactions}
\subsection{Hamiltonian fine estimation} In many systems a primary model of
the interactions is often known
through physical and/or engineering
considerations. Starting with such an initial model we seek to improve
our knowledge about the Hamiltonian by random measurements. Let us assume
that the initial guess $H_0$ for the Hamiltonian is close to the true
form $H_{true}$, that is, $\Delta=H_{true}-H_0$ satisfies $||\Delta||\ll ||H_{true}||$.
Therefore for a perturbative treatment we demand $t||\Delta||\ll 1$,
which is a much weaker requirement than $t||H_{true}||\ll 1$.
We can approximate Eq. (\ref{nonlinear}) to find
\begin{eqnarray}
p_{jk}&\approx&\left\langle \psi_{k}\right|M^0_{j}\left|\psi_{k}\right\rangle
\notag \\
&+&i\left\langle \psi_{k}\right|[\int_0^t e^{isH_0}\Delta e^{-isH_0}
ds,M^0_j]\left|\psi_{k}\right\rangle, \label{fine}
\end{eqnarray}
where $M^0_{j}=e^{itH_0} M_j e^{-itH_0}$ \cite{fineref}. This equation is
linear in $\Delta$, consequently, in a similar fashion as above, the
compressed sensing analysis can be applied for efficient estimation of the
fine structure of Hamiltonians.
\subsection{Characterizing system-bath interactions} The identification
of a decoherence process is a vital task for quantum
engineering. In contrast to the usual approach of describing the dynamics of
an open quantum system by a Kraus map or a reduced master equation,
here we use a microscopic Hamiltonian picture to efficiently
estimate the system-bath coupling terms generating the overall
decoherence process. However, since we consider the full dynamics
of the system and bath, this method can be applied to a finite-size
environment such as a spin bath, or to a surrogate Hamiltonian
modeling of an infinite bath. In the latter case a harmonic bath of
oscillators is approximated by a finite spin bath \cite{surrogate}.
Consider an open
quantum system with a total Hamiltonian:
\begin{equation}
H=H_{S}\otimes I_{B}+I_{S}\otimes H_{B}+H_{SB}
\end{equation}
and
\begin{equation}
H_{SB}=\sum_{p,q}\lambda _{p,q}S_{p}\otimes B_{q}
\end{equation}
where $H_{S}$ ($H_{B}$) denotes the system (bath) free Hamiltonian and $
H_{SB}$ is the system-bath interaction with coupling strengths $\{\lambda
_{p,q}\}$; here $\{S_{p}\}$ and
$\{B_{q}\}$ are complete operator bases of the system and the bath, respectively.
We develop a formalism to estimate the $
\lambda _{p,q}$ parameters in the weak system-bath coupling regime
and with the sparsity assumption that only a small number of the $\lambda _{p,q}$ have
significant values.
The Liouvillian dynamical equation is
\begin{equation}
\frac{d}{dt}\rho_{SB}(t)=(\mathcal{L}_0+\sum_{pq}\lambda_{pq}\mathcal{L}_{pq})[\rho_{SB}(t)]
\label{master}
\end{equation}
where $\mathcal{L}_0[.]=-i[H_{S}\otimes I_{B}+I_{S}\otimes H_{B}, . ]$
and $\mathcal{L}_{pq}[.]=-i[S_{p}\otimes B_{q}, . ]$.
In the regime of weak coupling to a finite bath, $||H_{SB}|| \ll \min\{||H_S||,||H_B||\}$,
the Liouvillian equation (\ref{master}) can be solved perturbatively if time $t$ satisfies $t||H_{SB}|| \ll 1$.
For an initial system density state $\rho_k$, using the matrix identity given in Ref. \cite{fineref} we find the measurement outcomes as
\begin{eqnarray}
p_{jk}&\approx&tr(\rho_k M_{j}) \label{open}\\
&+&\sum_{pq}\lambda_{pq}tr([\int_0^t ds e^{(t-s)\mathcal{L}_0} \mathcal{L}_{pq} e^{s\mathcal{L}_0}
[\rho_k],M_j]) \notag
\end{eqnarray}
where $M_j$ is a system only observable.
This affine relation between the outcomes $p_{jk}$ and the coupling parameters $
\{\lambda_{pq}\}$ is similar to Eq. (\ref{linear}) for Hamiltonian estimation.
Consequently, the compressed sensing algorithm can be employed for computing
$\{\lambda_{pq}\}$s.
\section{Outlook} We have introduced an efficient and
robust experimental procedure for the identification of nearly
sparse Hamiltonians using only separable (local) random state
preparations and measurements. There are a number of future
directions and open problems associated with this work. It is not
known how the performance of the algorithm depends on the
distribution of the ensemble from which the states and measurement
observables are drawn. Also, a general closed-loop learning approach for updating the
knowledge of sparsity basis of an arbitrary Hamiltonian is an interesting open
problem that will be of importance for generic compressed system
identification. The presented method for Hamiltonian estimation is
promising for drastic reduction in the number of experimental
configurations. However, the classical resources for post-processing
are not scalable. A fully scalable Hamiltonian estimation method
might be achievable via a hybrid of compressed sensing and DMRG
(Density-Matrix Renormalization Group) methods \cite{Plenio2010}. A
compressed tomography method can also be developed for nearly sparse
quantum processes \cite{Shabani:09}.
\section{Vector and operator norms}
In this paper we use the following norms:
For a vector $x$,
\begin{equation}
||x||_{l_2}=\sqrt{x^\dagger x}, \qquad ||x||_{l_1}=\sum_i|x_i|.
\end{equation}
For a matrix $A$,
\begin{eqnarray}
||A||_{spec}&=&\sqrt{\lambda_{max}(A^\dagger A)}
\end{eqnarray}
where $\lambda_{max}$ means largest eigenvalue.
\begin{eqnarray}
||A||_{fro}&=&\sqrt{trace(A^{\dagger}A)}
\end{eqnarray}
\section{Analysis of the short time approximation}
Short-time monitoring of the system's dynamics requires
prior knowledge of the dynamical time scales.
In solid-state quantum devices, in particular in the context of
quantum control and quantum information-processing, the time-scale
of single qubit rotations is typically on the order of 1-10 ns. The
switching time for exchange interactions varies among different
solid-state systems. For superconducting phase qubits the duration of
a swap gate is about 10 ns \cite{martinis2010}. For electron-spin
qubits in quantum dots and in donor atoms (Heisenberg models)
\cite{Loss,Loss2,Loss3}, and also for quantum dots in
cavities (anisotropic exchange interactions) \cite{Imamoglu}, the
coupling time is between 10-100 ps, while for exciton-coupled quantum
dots (XY model) and F\"orster energy transfer in multichromophoric
complexes the relevant time scale is on the order of 1 ps. Next we rigorously
derive a bound on the evolution time $t$ that guarantees the
validity of the short time approximation.
For an input state $\left|\psi_k\right\rangle$, the expectation value of an
observable $M_j$ is
\begin{eqnarray}
p_{jk}=\left\langle \psi_{k}(t)\right|M_{j}\left|\psi_{k}(t)\right\rangle
=\left\langle \psi_{k}\right|e^{iHt}M_{j}e^{-iHt}\left|\psi_{k}\right\rangle
\end{eqnarray}
Considering the expansion of the propagator $e^{-iHt}=I-itH-\frac{1}{2}t^2H^2+...$,
we find
\begin{eqnarray}
p_{jk}&=&\left\langle \psi_{k}\right|M_{j}\left|\psi_{k}\right\rangle+it\left\langle \psi_{k}\right|[H,M_{j}]\left|\psi_{k}\right\rangle\notag\\
&-&\frac{t^2}{2}\left\langle \psi_{k}\right|[H,[H,M_j]]\left|\psi_{k}\right\rangle+...
\end{eqnarray}
Therefore, for the linearization assumption, it is sufficient to have for the $l$-th term
\begin{eqnarray}
t^l \underset{j}{\min}\left\langle \psi_{k}\right|\overbrace{[H,[H,[\cdots}^{l \textrm{ times}},M_j]]]\left|\psi_{k}\right\rangle\leq \notag\\
t^l\underset{j}{\min}|| [H,[H,[\cdots,M_j]]]||_{spec} \ll 1, \quad \forall l.
\end{eqnarray}
A tighter bound can be found for operators $\{M_j\}$ from a POVM as
\begin{equation}
||[H,[H,[...,M_j]]]||_{spec}\leq 2^l ||H||^l_{spec}
\end{equation}
To derive this we use
\begin{equation}
||[A,B]||_{spec}\leq ||AB||_{spec}+||BA||_{spec}\leq 2||A||_{spec}||B||_{spec}
\end{equation}
and $||A||_{spec}^2=||AA^\dagger||_{spec}$.
This gives a single bound sufficient for linearization: $t\ll\frac{1}{2}||H||_{spec}^{-1}$.
\section{RIP from a concentration inequality}
\label{app A}
In this work, we generalize the standard compressed sensing algorithm such that
the necessity for independent randomness in all elements of the measurement matrix $\Phi$ can be avoided.
A common approach to establish the RIP \cite{RIP} for a matrix $\Phi$
is by introducing randomness in the elements of this matrix. This approach benefits from
measure concentration properties of random matrices.
In classical signal processing each element $\Phi_{jk,\alpha}$ can be
independently selected from a random distribution such as Gaussian or
Bernoulli. In contrast, in the Hamiltonian estimation formulation,
Eq. (\ref{linear-relation}), there is no freedom for independent selection
of the $\Phi$ matrix elements.
Here we prove the concentration inequality that we
employed for establishing the restricted isometry property.
Though $\Phi$ is a random matrix, because it
is constructed from quantum states and observables of a finite
dimensional system, it is bounded. Thus we are able to apply
\textit{Hoeffding's concentration inequality:}
If $v_{1},...,v_{m}$ are independent bounded random variables such
that $\text{Prob.}\{v_{i} \in [a_{i},b_{i}]\}=1$, then for $S=\sum_{i}v_{i}$,
\begin{eqnarray}
\text{Prob.}\{S-{\bf E}(S)\geq t\}\leq e^{-2t^2/\sum_i (b_i-a_i)^2} \notag\\
\text{Prob.}\{S-{\bf E}(S)\leq -t\}\leq e^{-2t^2/\sum_i (b_i-a_i)^2}\label{app1}
\end{eqnarray}
for any $t>0$. (Here ${\bf E}$ denotes the expectation value.)
Set $v_{i}=|\phi_{i}^\dag x|^{2}$ for a row $\phi_i$. Then with $S=\sum_i v_i=||\Phi x||^2_{l_2}$, we get $\forall x$,
\begin{eqnarray}
v_i=x^\dagger(\phi_i\phi_i^\dagger)x\in(1/m)[w_l,w_u] ||x||^2_{l_2} \notag\\
{\bf E}(S)={\bf E}||\Phi x||^2_{l_2}\in [f,g]||x||^2_{l_2}\label{app2}
\end{eqnarray}
for constants $w_l,w_u,f,g$. Note that $f$ and $g$ are the min
and max singular values of ${\bf E}(\Phi^\dag\Phi)$.
From (\ref{app2}) we find $\forall t_+, t_->0$ and $\forall x$,
\begin{eqnarray*}
\text{Prob.}\{S-g||x||^2_{l_2} \geq t_+\}
&\leq&
\text{Prob.}\{S-{\bf E}(S) \geq t_+\}
\\
\text{Prob.}\{S-f||x||^2_{l_2} \leq -t_-\}
&\leq&
\text{Prob.}\{S-{\bf E}(S) \leq -t_-\}
\end{eqnarray*}
These together with (\ref{app1}) and (\ref{app2}), and the choice of
$t_+=(\delta+1-g)||x||^2_{l_2}$ and
$t_-=(f-1+\delta)||x||^2_{l_2}$ yields
\begin{eqnarray}
\text{Prob.}\left\{\left| ||\Phi x||^2_{l_2}-||x||^2_{l_2}\right|
\geq \delta ||x||^2_{l_2}\right\}
\leq 2e^\frac{-2m(\delta+\epsilon)^2}{(w_u-w_l)^2}
\end{eqnarray}
with $\epsilon=\min\{1-g,f-1\}$. To ensure that $t_+,t_->0$, we
need $1-\delta<f\leq g<1+\delta$. Since the observable $M$ can be
scaled by any real number, a sufficient condition is $g/f <
(1+\delta)/(1-\delta)$. For the simulations in this paper, $(1+\delta)/(1-\delta)\approx 2.176$.
\section{Four-site optical lattice Hamiltonian}
Let us consider four
sites in two adjacent building blocks of a triangular optical
lattice filled by two species of atoms, $\uparrow$ and $\downarrow$.
Atoms interact by tunneling between neighboring sites, $J^\uparrow$
and $J^\downarrow$, and through collisional couplings in the same
site, $U$. The Hamiltonian for such a system can be written as
\cite{Pachos:04}:
\begin{align}
H_{opt-latt}=\sum_{j} (0.03\frac{J^{\uparrow 2}+J^{\downarrow 2}}{U}-0.27
\frac{J^{\uparrow 3}+J^{\downarrow 3}}{U^2})\sigma^z_j\sigma^z_{j+1} \notag \\
-(\frac{2.1(J^\uparrow+J^\downarrow)J^\uparrow J^\downarrow}{U^2}+\frac{
J^\uparrow J^\downarrow}{U})(\sigma^x_j\sigma^x_{j+1}+\sigma^y_j\sigma^y_{j+1}) \notag \\
+ \sum_{j} 0.14\frac{J^{\uparrow 3}-J^{\downarrow 3}}{U^2}\sigma^z_j\sigma^z_{j+1}\sigma^z_{j+2}
\notag \\
-0.6\frac{J^\uparrow J^\downarrow(J^\uparrow-J^\downarrow)}{U^2}
(\sigma^x_j\sigma^z_{j+1}\sigma^x_{j+2}+\sigma^y_j\sigma^z_{j+1}\sigma^y_{j+2}),
\end{align}
where $\sigma^{x,y,z}_j$ are Pauli operators.
\section{Reweighted $l_1$-minimization}
In order to simulate our algorithm's performance for estimating the above Hamiltonian we
use an iterative algorithm that outperforms the
standard $l_1$ norm minimization \cite{weighted}. This procedure entails initializing a weight matrix $
W=I_{d^{2}}$ and a weight factor $\sigma >0$, and repeating the
following steps until convergence is reached:
\begin{align}
&\text{1. Solve for $h$, } \text{minimize $||W h||_{l_1}$} \notag \\
&\text{subject to $||p^{\prime }-\Phi h||_{l_2}\leq \epsilon$}. \notag \\
&\text{2. Update weights} \notag \\
& W=diag(1/(|h_1|+\sigma),...,1/(|h_{d^2}|+\sigma)).
\end{align}
where $h=\textit{vec}(h_i)$ is the Hamiltonian vectorized form.
$\Phi$ is the measurement matrix and $p^\prime$ is the experimental data
with a noise threshold $\epsilon$.
\end{document} |
\begin{document}
\title[Naturally reductive pseudo Riemannian 2-step nilpotent Lie groups]
{Naturally reductive pseudo Riemannian \\ 2-step nilpotent Lie groups}
\author{Gabriela P. Ovando $^1$}
\address{G. P. Ovando: CONICET and ECEN-FCEIA, Universidad Nacional de Rosario
\\Pellegrini 250, 2000 Rosario, Santa Fe, Argentina}
\footnote{ Currently: Abteilung f\"ur Reine Mathematik, Albert-Ludwigs
Universit\"at
Freiburg, Eckerstr.1, 79104 Freiburg, Germany.}
\email{[email protected]}
\begin{abstract} This paper deals with naturally reductive pseudo
Riemannian 2-step nilpotent Lie groups $(N, \la \,,\,\ra_N)$. In the cases under consideration they are related to bi-invariant metrics. On the one hand, whenever $\la \,,\, \ra_N$ restricts to a metric on the center, it is proved that the simply connected Lie group $N$ arises from a Lie algebra $\ggo$ and a representation of it. The Lie algebra
$\ggo$ carries an ad-invariant metric and its corresponding Lie group acts as a group of isometries of $(N, \la \,,\,\ra)$ fixing the identity element. On the other hand, a bi-invariant metric $\la\,,\,\ra$ on $N$ provides another family of examples of naturally reductive spaces, namely those of the form $(N/\Gamma, \la\,,\,\ra)$, where $\Gamma\subset N$ is a lattice; these are also investigated. \end{abstract}
\thanks{{\it (2010) Mathematics Subject Classification}: 53C50 22E25 53B30
53C30. }
\maketitle
\section{Introduction}
The 2-step nilpotent Lie groups are nonabelian but, from the algebraic point of
view, as close as possible to being abelian, and they display a rich geometry when
equipped with a metric tensor. While they have been extensively investigated
in the Riemannian situation, in the case of indefinite metrics there
are significant advances, as shown in \cite{Bo, C-P1, C-P2, Ge,
J-P-L,J-P,J-P-P,Pa}, but
there are still several open problems. A first obstacle appears when
trying to translate the left invariant metric to the Lie algebra level. So far
all attempts in this direction take the Riemannian model as their starting
point. Among these pseudo Riemannian spaces, the {\em naturally reductive} ones are endowed with nice and simple algebraic and geometric
properties. Examples of them are provided by 2-step nilpotent Lie groups carrying a bi-invariant metric.
Important
studies concerning the structure of a naturally reductive Riemannian Lie group $G$ when $G$ is compact and simple or when $G$ is non compact and
semisimple were given by D'Atri-Ziller \cite{DA-Z} and Gordon \cite{Go} respectively. Gordon showed that every naturally reductive Riemannian manifold may be realized as a
homogeneous space $G/H$ with Lie group $G$ of the form $G=G_{nc}G_cN$ where $G_{nc}$ is
a non compact semisimple normal subgroup, $G_c$ is compact semisimple and $N$ is the
nilradical of $G$. Furthermore $N\cap H=\{0\}$ and the induced metrics on each of
$G_{nc}/(G_{nc}\cap H)$, $G_c/(G_c \cap H)$ and $N (=N/(N\cap H))$ are naturally
reductive so that the study of naturally reductive metrics is partially reduced to
the cases in which $G$ is semisimple either of compact or non compact type or $G$ is
nilpotent. In the last case Gordon proved that $G$ must be at most 2-step nilpotent.
Lauret \cite{La} exploited this result to obtain a classification of naturally reductive Riemannian
connected simply connected nilmanifolds. According to Wilson \cite{Wi} such a
manifold can be realized as a 2-step nilpotent Lie group equipped with a left
invariant metric.
Later Tricerri and Vanhecke \cite{T-V2} proved that a Riemannian manifold is
a naturally reductive homogeneous space if and only if there exists a
homogeneous structure $T$ satisfying $T_x x=0$ for all tangent vectors $x$, offering
in this way an infinitesimal description of these reductive manifolds.
The notion of {\em homogeneous structure} was introduced by
Ambrose and Singer \cite{A-S} to characterize connected simply connected and
complete homogeneous Riemannian manifolds. In the Riemannian case every homogeneous
manifold is complete and reductive.
More recently Gadea and Oubi\~na \cite{G-O1} proved that a
connected, simply connected and complete pseudo Riemannian manifold admits a
homogeneous pseudo-Riemannian structure if and only if it is reductive homogeneous. While Tricerri and Vanhecke \cite{T-V1} achieved the
classification of homogeneous Riemannian structures,
in the pseudo Riemannian case a complete classification is still
pending.
However Calvaruso and
Marinosci \cite{Ca,C-M1, C-M2} studied homogeneous structures in dimension three, obtaining as a consequence the naturally reductive Lie groups with a left
invariant Lorent\-zian metric. In particular the Heisenberg Lie group admits two
naturally reductive left invariant Lorent\-zian metrics (and for which the center
is non degenerate).
In this paper we provide constructions for naturally reductive
pseudo Riemannian 2-step nilpotent Lie groups. By following an approach similar to that of Gordon, one gets necessary and sufficient conditions for a pseudo Riemannian 2-step nilpotent Lie group with non degenerate center to be naturally reductive -Theorem (\ref{t1})-. This enables one to attach this kind of Lie group to
Lie algebras endowed with an ad-invariant metric and to a certain kind of
representations of them:
\vskip 2pt
{\bf Theorem \ref{t2}} {\em Let $\ggo$ denote a Lie algebra carrying an
ad-invariant metric $\la\,,\,\ra_{\ggo}$ and let $(\pi, \vv)$ be a real
faithful representation of $\ggo$ without trivial subrepresentations and
such that the metric on $\vv$, $\la\,,\,\ra_{\vv}$ is $\pi(\ggo)$-invariant. Let $\nn$
denote the Lie algebra
$\nn=\ggo \oplus \vv$ whose Lie bracket is given by
$$\begin{array}{rcl}
[\ggo,\ggo]_{\nn}=[\ggo, \vv]_{\nn} =0 & [\vv, \vv] \subseteq \ggo \\ \\
\la [u,v], x\ra_{\ggo} = \la \pi(x) u, v\ra_{\vv}& \mbox{ for all } x\in \ggo,
\, \forall u, v\in
\vv,
\end{array}
$$
equip $\nn$ with the metric
$\la\,,\,\ra$
$$ \la\,,\,\ra_{\ggo\times \ggo}= \la\,,\,\ra_{\ggo}\qquad
\la\,,\,\ra_{\vv\times \vv}= \la\,,\,\ra_{\vv}\qquad \la \ggo, \vv\ra=0$$
then the corresponding simply connected 2-step nilpotent Lie group $(N, \la\,,\,\ra)$, where $\la\,,\,\ra$ denotes
the left invariant metric
induced by the metric above, is a naturally reductive pseudo Riemannian
space.
The converse holds whenever the center of $\nn$ is non degenerate and $j$ (defined
as in (\ref{br})) is faithful. }
\vskip 2pt
The previous result improves the
understanding of some geometrical features such as the isometry group
-Proposition 3.5- and it allows the construction of new examples, in particular by describing some naturally reductive metrics on the Heisenberg Lie group $H_{2n+1}$.
We also bring into consideration 2-step nilpotent Lie groups furnished
with a bi-invariant metric in order to exhibit geometrical and
algebraic structure differences between metrics for which the center is either degenerate or non degenerate.
In fact bi-invariant metrics offer examples of flat pseudo Riemannian metrics for which the
isometry group contains the group of orthogonal automorphisms as a proper
subgroup. Another application of bi-invariant metrics is the construction of
compact naturally reductive pseudo Riemannian spaces.
\section{On 2-step nilpotent Lie groups with a left invariant pseudo Riemannian
metric}
In this section we show suitable decompositions of the Lie algebra corresponding to a 2-step nilpotent
Lie group equipped with a left invariant pseudo Riemannian metric. We are
mainly interested here in those metrics for which the center is non degenerate, a fact that determines unambiguously the decomposition.
A {\em metric} on a real vector space $\vv$ is a non
degenerate symmetric bilinear form $\la\,,\,\ra:\vv \times \vv \to \RR$.
Whenever $\vv$ is the Lie algebra of a given Lie group $G$, by identifying
$\vv$ with the set of left invariant vector fields on $G$, the metric
induces, by means of left translations, a pseudo
Riemannian metric tensor on
the corresponding Lie group. Conversely, any left invariant pseudo
Riemannian metric on $G$ is completely determined by its value on the
tangent space at the identity, $T_eG$.
Let $(N, \la\,,\,\ra)$ denote a 2-step nilpotent Lie group
endowed with a left invariant pseudo Riemannian metric. There exist several
ways to describe the structure of the corresponding
Lie algebra $\nn$. The main difficulty lies in the existence of degenerate
subspaces, as one can see below.
\vskip 3pt
{\bf (a)} If the center is degenerate,
the null subspace is defined uniquely as
$$\uu=\{x\in \zz \, \mbox{ such that }\, \la x, z\ra=0 \quad \forall z\in \zz\}$$
and therefore the center of $\nn$ decomposes as a direct sum of vector subspaces
$$\zz =\uu \oplus \tilde{\zz}$$
where $\tilde{\zz}$ is a complementary subspace of $\uu$ in $\zz$ and it
is easy
to prove that the restriction of the metric to $\tilde{\zz}$ is non
degenerate.
Moreover it is possible to find an isotropic subspace $\vv\subset \nn$ such that $\vv \cap
\zz=\{0\}$ and the metric on $\uu\oplus \vv$ is non degenerate. This subspace
$\vv$ is not invariantly well defined, but once $\vv$ is fixed one can take
$\tilde{\zz}$ as the portion of the center in $(\uu\oplus\vv)^{\perp}$ and
complete the decomposition of $\nn$ as an orthogonal direct sum
\begin{equation}\label{ortsum}
\nn=(\uu \oplus \vv) \oplus (\tilde{\zz}\oplus \tilde{\vv})
\end{equation}
in such a way that $(\uu\oplus \vv)^{\perp}=\tilde{\zz}\oplus \tilde{\vv}$
and (\ref{ortsum}) is a Witt decomposition. Note that $\tilde{\vv}$ is non degenerate.
Moreover it is possible to define a linear map $j:\zz \to \End(\vv \oplus
\tilde{\vv})$ which
plays a similar role to that in the Riemannian case (see \cite{C-P1} for details).
In the last section of the present work, we show similar results for the
case of bi-invariant metrics.
\vskip 3pt
{\bf (b)} Let $e_1, \hdots, e_p$ denote a
basis of $\zz$. For any $u, v \in \nn$, the Lie bracket can be
written
$$[u, v] = \sum_{i=1}^p \la J_i u, v\ra e_i,
$$
where $J_i:\nn \to \nn$ are skew adjoint endomorphisms with respect
to $\la \,,\,\ra$ and $\zz=\cap_{i=1}^p \ker J_i$. In fact,
$[u,v]=\sum \omega_i(u,v) e_i$ where $\omega_i:\nn \times \nn \to \RR$, for
$i=1, \hdots, p$, is a
family of skew symmetric bilinear 2-forms which represent the coordinates
of $[u,v]$ with respect to the fixed basis.
Since the metric on $\nn$ is non degenerate, for every $i$ there exists an endomorphism $J_i:\nn \to
\nn$ such that $\omega_i(u,v)=\la J_iu,v\ra$.
The endomorphisms $J_i$ are thus called the {\em structure endomorphisms}
associated to $e_1 , . . . , e_p$ (see \cite{Bo}).
\vskip 3pt
Examples of pseudo
Riemannian 2-step nilpotent Lie groups $N$ arise by considering the simply
connected Lie groups whose Lie algebra can be constructed
as follows.
Let $(\zz, \la \,,\,\ra_{\zz})$ and $(\vv, \la \,,\,\ra_{\vv})$ denote vector
spaces endowed with (not necessarily definite) metrics. Let $\nn$ denote the
direct sum as vector spaces
\begin{equation}\label{des2}
\nn=\zz \oplus \vv \qquad \quad\mbox{ direct sum }
\end{equation}
and let $\la\,,\,\ra$ denote the metric given by
\begin{equation}\label{met}
\la \,,\,\ra_{|_{\zz \times \zz}}=\la \,,\,\ra_{\zz}\qquad
\la \,,\,\ra_{|_{\vv \times \vv}}=\la \,,\,\ra_{\vv} \qquad
\la \zz, \vv \ra=0.
\end{equation}
Let $j:\zz \to \End(\vv)$ be a linear map such that $j(z)$ is skew adjoint with respect to
$\la \,,\,\ra_{\vv}$ for all $z\in \zz$. Then $\nn$
becomes a 2-step nilpotent Lie algebra if one defines a Lie bracket by
\begin{equation}\label{br}
\begin{array}{rcl}
[x,y] & = & 0 \quad \mbox{ for all }x\in \zz, y\in \nn\\
\la [u,v], x\ra & = & \la j(x) u,v\ra \qquad \mbox{ for } x\in \zz, u,v\in \vv.
\end{array}
\end{equation}
Conversely, let $\nn$ denote a 2-step nilpotent Lie algebra furnished with a metric
for which the center is non degenerate. Then $\nn$ can be decomposed into an
orthogonal direct sum as in (\ref{des2}) with $\vv:=\zz^{\perp}$, and the
Lie bracket on $\nn$ induces skew adjoint linear maps
$j(x)$ for $x\in \zz$ given
by (\ref{br}).
\begin{prop} \label{p1} Let $(N,\la\,,\,\ra)$ denote a simply connected 2-step nilpotent Lie group equipped with a left invariant pseudo Riemannian metric. If the center of $N$ is non
degenerate then its Lie algebra $\nn$ admits an orthogonal decomposition as in
(\ref{des2}) and the corresponding Lie bracket can be obtained by (\ref{br}).
\end{prop}
This includes the Riemannian case, that is, when the metric $\la\,,\,\ra$ is
positive definite.
In this situation, the inner product $\la\,,\,\ra_+$ produces a decomposition of the center of the Lie algebra $\nn$ as an orthogonal direct sum of vector spaces
$$\zz=\ker j \oplus C(\nn)$$
and moreover $j$ is injective if and only if there is no Euclidean factor in the
de Rham decomposition of the simply connected Lie group $(N, \la\,,\,\ra_+)$ (see
\cite{Go}). This does not necessarily hold in the pseudo Riemannian case.
Below we show an example of a Lorentzian metric on a 2-step nilpotent
Lie algebra $\nn$, where the center is non degenerate and such that
$ker(j)=[\nn,\nn]$, so that a splitting as above is not possible.
\begin{exa} \label{exa1} Let $\RR \times \hh_3$ be the 2-step nilpotent Lie algebra spanned by
the vectors $e_1,e_2, e_3, e_4$ with the Lie bracket $[e_1, e_2]=e_3$. Define a
metric where the non trivial relations are
$$\la e_1, e_1\ra=\la e_2, e_2\ra=\la e_3, e_4\ra=1.$$
After (\ref{br}) one can verify that $j(e_3)\equiv 0$, while
$$j(e_4)=\left(\begin{matrix}
0 & -1\\
1 & 0 \end{matrix}\right)
$$
Notice that $e_4\notin C(\RR \times \hh_3)$ and $\ker j=\RR e_3=C(\RR\times
\hh_3)$, that is, $\ker j=C(\nn)$.
\end{exa}
Let $\Or(\vv, \la\,,\,\ra_{\vv})$ denote the group of linear maps on $\vv$ which are isometries for
$\la\,,\,\ra_{\vv}$; its Lie algebra $\sso(\vv,\la\,,\,\ra_{\vv})$ is the
set of linear maps on $\vv$ that are skew adjoint with respect to
$\la \,,\,\ra_{\vv}$. The next goal is to describe the group of isometries,
which plays an important role in the next section. We start with the following
result, proved in \cite{C-P1}.
\begin{prop} \label{icp} Let $N$ denote a 2-step nilpotent Lie group endowed with a left invariant pseudo Riemannian metric, with respect to which the center is non degenerate. Then the group of isometries fixing the identity coincides with the group of orthogonal automorphisms of $N$.
\end{prop}
Denote by $H$ the group of orthogonal automorphisms and by $N$ also the
subgroup of isometries consisting of left translations by elements of $N$.
Consider the isometries of the form $h n$ where $h\in H$ and $n\in N$, and denote this set by $I_a(N)$. Then
$N$ is a normal subgroup of $I_a(N)$, $N\cap H=\{e\}$, and therefore by (\ref{icp}) one has
$$I(N) = I_a(N)= H \ltimes N.$$
Whenever $(N, \la\,,\,\ra)$ is simply connected, we do not distinguish between the group of automorphisms of
$N$ and of $\nn$. Thus one obtains that the group $H$ is given by
\begin{equation}\label{oa}
H=\{(\phi, T)\in \Or(\zz, \la\,,\,\ra_{\zz}) \times \Or(\vv, \la\,,\,\ra_{\vv}):
Tj(x)T^{-1}=j(\phi x), \quad x\in \zz\}
\end{equation}
while its Lie algebra $\hh=\Der(\nn)\cap \sso(\nn,\la\,,\,\ra)$ is
\begin{equation}\label{sd}
\hh=\{(A,B)\in \sso(\zz,\la\,,\,\ra_{\zz}) \times \sso(\vv,\la\,,\,\ra_{\vv}):
[B,j(x)]=j(Ax),\quad x\in \zz\}.
\end{equation}
In fact, let $\psi$ denote an orthogonal automorphism of $(\nn, \la\,,\,\ra)$.
Being an automorphism, $\psi(\zz)\subseteq \zz$, and since the decomposition
$$ \nn = \zz \oplus \vv$$
is orthogonal, also $\psi(\vv)\subseteq \vv$. Set $\phi:=\psi_{|_{\zz}}$ and
$T:=\psi_{|_{\vv}}$; thus
$(\phi, T)\in \Or(\zz, \la\,,\,\ra_{\zz})\times \Or(\vv,\la\,,\,\ra_{\vv})$ satisfies $\phi[u,v]=[Tu,Tv]$, that is,
$$\begin{array}{rcl}
\la \phi[u,v], \phi x\ra &=& \la [Tu,Tv],\phi x \ra \quad \mbox{if and only if}\\
\la j(x) u, v \ra & = & \la j(\phi x)Tu, Tv \ra
\end{array}
$$
for all $x\in\zz$, $u,v\in\vv$, which implies (\ref{oa}). By differentiating (\ref{oa}) one gets (\ref{sd}).
\begin{prop} Let $N$ denote a simply connected 2-step nilpotent Lie group endowed with a left
invariant pseudo Riemannian metric, with respect to which the center is non
degenerate. Then the group of isometries is
$$I(N) = H \ltimes N.$$
where $N$ denotes the set of left translations by elements of $N$ and $H$,
the isotropy subgroup, is given by
(\ref{oa}) with Lie algebra as in (\ref{sd}).
\end{prop}
\begin{exa} \label{change} Let $\nn$ be a 2-step nilpotent Lie algebra equipped with an
inner product and denote it by $\la \,,\,\ra_+$.
Let $J(z)\in \sso(\vv, \la\,,\,\ra_+)$ denote the maps in (\ref{br}) for the inner
product.
We shall consider a non definite metric $\la\,,\, \ra$ on $\nn$ by
changing the sign of the metric on the center ${\zz}$; thus the metric
on $\vv$ remains unchanged and we take
$$\la z_i, z_j\ra=-\la z_i, z_j\ra_+\qquad \mbox{ for } z_i, z_j\in \zz \qquad
\mbox{ and } \qquad \la \zz, \vv\ra=0.$$
By (\ref{br}) the maps $j(z)$ for the metric $\la\,,\,\ra$ on $\nn$ are
$$-\la z, [u,v]\ra_+= -\la J(z) u,v\ra_+ =\la j(z)u, v\ra=\la z,
[u,v]\ra,\quad \mbox{ for } z\in \zz$$
that is $j(z)=-J(z)$ for every $z\in \zz$.
We work out an example on the Heisenberg Lie group $H_3$. This is the
simply connected Lie group whose Lie algebra is $\hh_3$ which
is spanned by the vectors
$e_1, e_2, e_3$, with the non trivial Lie bracket relation $[e_1, e_2]=e_3$.
The canonical left invariant metric $\la\,,\,\ra_+$ is the one obtained by declaring the basis
above to be orthonormal, and the map
$J(e_3)$ for $\la\,,\,\ra_+$ is
$$\left( \begin{matrix} 0 & -1 \\ 1 & 0
\end{matrix} \right)
$$
A Lorentzian metric $\la\,,\,\ra$ is obtained on $H_3$ by changing the sign of the
canonical metric on the center.
Kaplan showed that $(H_3,\la\,,\,\ra_+)$ is naturally
reductive (\cite{Ka}). In the next sections we shall see that
$(H_3,\la\,,\,\ra)$ and generalizations of it, are also naturally
reductive.
By (\ref{oa}) the group of isometries for either of these two metrics is
$(\RR \times O(2))\ltimes H_3$, where the action of the isotropy group is
given by $(\lambda, A)\cdot (z+v)=\lambda z + Av$ for $z\in \zz$ and
$v\in \vv = span\{e_1,e_2\}$, $\lambda \in \RR$ and $A\in O(2)$.
\end{exa}
\begin{defn} A homogeneous manifold $M$
is said to be {\em naturally reductive} if there is a transitive Lie group of isometries
$G$ with Lie algebra $\ggo$ and there exists a subspace
$\mm\subseteq \ggo$ complementary to $\hh$, the Lie algebra of the isotropy group
$H$, in $\ggo$ such that
$$\Ad(H)\mm \subseteq \mm \qquad \mbox{ and }\qquad
\la [x,y]_{\mm}, z\ra + \la y, [x,z]_{\mm} \ra=0 \qquad \mbox{ for all } x, y,
z\in \mm.$$
\end{defn}
Frequently we will say that a metric on a homogeneous space $M$ is naturally reductive without specifying a particular transitive group of isometries with respect to which it is naturally reductive (see Lemma 2.3 in \cite{Go}).
For naturally reductive metrics the geodesics passing through $m\in M$ are of the form $$\gamma(t)=\exp(t x) \cdot m\qquad \quad \mbox{ for some }x\in \mm.$$
A point $p$ of a pseudo Riemannian manifold is called a {\em pole} provided the exponential map $\exp_p$ is a diffeomorphism. Furthermore, if $o$ is a pole of the naturally reductive pseudo Riemannian manifold $G/H$, then the map $(x, h) \to \exp(x) h$ is a diffeomorphism of $\mm \times H\to G$ \cite{ON} Ch. 11.
Indeed pseudo Riemannian
symmetric spaces are naturally reductive. Examples of naturally
reductive spaces arise from Lie groups equipped with a bi-invariant metric,
which in the pseudo Riemannian setting may exist even for nilpotent ones. In the Riemannian case, if
a nilmanifold $N$ admits a naturally reductive metric, then $N$ is at most
2-step nilpotent \cite{Go}.
\section{Naturally reductive metrics with non degenerate center: a
characterization}
In this section we achieve a characterization of naturally reductive
pseudo Riemannian simply connected 2-step nilpotent Lie groups with non degenerate center
by studying the set of maps $j(z)$,
$z\in \zz$, defined in (\ref{br}), showing that they span a subalgebra of the
Lie algebra of the isotropy group $H$.
\begin{lem} \label{l1} Let $(\nn, \la\,,\,\ra)$ denote a 2-step nilpotent Lie
algebra equipped with a metric for which its center $\zz$ is non degenerate
and assume $j$ is injective. Let
$\hh=\sso(\nn,\la\,,\,\ra) \cap \Der(\nn)$ denote the Lie subalgebra of the group of
isometries
fixing the identity element in the corresponding simply connected Lie group
$N$. Then
\vskip 3pt
{\rm i)} $\hh$ leaves each of $\zz$ and $\vv$ invariant,
\vskip 2pt
{\rm ii)} For $\phi\in \hh$,
$$\phi_{|_{\zz}} = j^{-1} \circ \ad_{\sso(\vv)}\phi_{|_{\vv}}\circ j.$$
In particular $\phi \to \phi_{|_{\vv}}$ is an isomorphism of $\hh$ onto a
subalgebra of $\sso(\vv,\la\,,\,\ra_{\vv})$.
\vskip 2pt
{\rm iii)} Let $\phi \in \sso(\vv,\la\,,\,\ra_{\vv})$. Then $\phi$ extends to an
element of $\hh$
if and only if $[\phi, j(\zz)]\subseteq j(\zz)$ and
$j^{-1} \circ \ad_{\sso(\vv)}\phi_{|_{\vv}}\circ j \in \sso(\zz,\la\,,\,\ra_{\zz})$.
\end{lem}
\begin{proof} i) is easy to prove. We shall
show (ii) and (iii). Let $A\in \sso(\zz,\la\,,\,\ra_{\zz})$ and $B\in
\sso(\vv,\la\,,\,\ra_{\vv})$; the linear map
$\phi$ on $\nn=\zz\oplus\vv$ which restricts to $A$ on $\zz$ and to $B$ on $\vv$ lies in $\hh$ if and only
if
$$\la j(Ax)u, v\ra=\la (B j(x)-j(x)B)u, v\ra \qquad \mbox{ for }x\in \zz,
\,u,v\in \vv,$$
which is equivalent to $j(Ax)=[B, j(x)]$, where the bracket is that of
$\sso(\vv,\la\,,\,\ra_{\vv})$; since $j$ was assumed injective one gets
$A=j^{-1}\circ\ad_{\sso(\vv)}(B) \circ j$.
\end{proof}
The proof of the next theorem coincides with that given by C. Gordon in \cite{Go}. For the sake of
completeness we include it here. However the consequences are quite
different from the Riemannian situation.
\begin{thm} \label{t1} Let $(N, \la\,,\,\ra)$ denote a simply connected 2-step nilpotent Lie group
equipped with a left invariant pseudo Riemannian metric such that the center is non
degenerate and assume $j$ is injective.
Then the metric is naturally reductive with respect to $G=H\ltimes N$
where $H$ is the group of orthogonal automorphisms,
if and only if
\vskip 3pt
{\rm (i)} $j(\zz)$ is a Lie subalgebra of $\sso(\vv,\la\,,\,\ra_{\vv})$ and
\vskip 2pt
{\rm (ii)} $[j(x),j(y)] = j(\tau_x y)$ where $\tau_x\in
\sso(\zz,\la\,,\,\ra_{\zz})$ for any
$x\in \zz$.
\end{thm}
\begin{proof} Let $\ggo=\hh \ltimes \nn$ be the Lie algebra of $G=H\ltimes N$
and assume $N$ is naturally reductive with respect to $\ggo=\hh \oplus \mm$.
Set $\pi:\nn \to \hh$ so that
$$\mm=\{ x + \pi(x): x\in \nn\}.$$
The condition for natural reductivity says
$$
\la [x+\pi(x),y+\pi(y)]_{\mm}, z+\pi(z)\ra_{\mm} =- \la y+\pi(y),
[x+\pi(x),z+\pi(z)]_{\mm} \ra_{\mm}$$
where $\la\,,\,\ra$ is the pseudo Riemannian metric on $\mm$, so that the
previous equality can be interpreted on $\nn$ as
\begin{equation}\label{onn}
\la [x,y]+\pi(x)y-\pi(y)x, z\ra=-\la y, [x,z]+\pi(x) z-\pi(z)x\ra.
\end{equation}
where $\pi(x)$ is viewed as a linear operator on $\nn$ and one writes
$\pi(x)y$ for $[\pi(x),y]$ when $y\in \nn$. Since $\pi(x)\in \sso(\nn, \la\,,\,\ra)$ the terms
involving $\pi(x)$ cancel and (\ref{onn}) yields
\begin{equation}\label{e11}
\ad(y)^*z+\ad(z)^*y = \pi(y) z+\pi(z) y\quad \mbox{ for all }y, z\in \nn.
\end{equation}
Since $[\hh, \nn]\subseteq \nn$ and $[\hh,
\mm]\subseteq \mm$, one has
$$[\pi(x), y+\pi(y)]=\pi(x) y +[\pi(x),\pi(y)]\in \mm$$
and therefore
\begin{equation}\label{e12}
\pi(\pi(x)y)=[\pi(x), \pi(y)] \quad \mbox{ for all } x,y \in \nn.
\end{equation}
If $z\in \zz$ and $y\in \vv$, $\ad(z)^*y=0$ and (\ref{e11}) says
\begin{equation}\label{e13}
j(z)y=\pi(y)z+\pi(z)y.
\end{equation}
But $\pi(y)z\in \zz$ and $\pi(z)y\in \vv$, so (\ref{e13}) implies
$$\pi(z)_{|_{\vv}}= j(z)\in \sso(\vv,\la\,,\,\ra_{\vv})\quad \mbox{ for every } z\in \zz.$$
It then follows that
$$[j(x), j(\zz)]\subset j(\zz) \quad \mbox{ and } \quad [j(x),
j(y)]=j(\tau_x y)\quad \mbox{ for } \tau_x\in
\sso(\zz,\la\,,\,\ra_{\zz}),\quad x,y\in \zz.$$
Conversely if (i) and (ii) hold, for $x\in\zz$ extend $j(x)$ to an element $\pi(x)$ of
$\hh$ such that the restriction of $\pi(x)$ to $\zz$ is $\tau_x$ as in (ii). Extend $\pi$ to a linear map on $\nn$ by declaring
$\pi_{|_{\vv}}\equiv 0$. We claim that (\ref{e12}) holds for all $x, y\in \nn$. In fact it
is easy to verify it if at least one of $x,y\in \vv$. Assume $x,y \in \zz$, then
$$\pi(\pi(x)y)_{|_{\vv}}=j(j^ {-1}[j(x),j(y)])=[j(x),j(y)]$$
and therefore (\ref{e12}) is true by Lemma (\ref{l1}) ii). Define
$$\mathfrak l=\pi(\nn), \quad \mm=\{ x + \pi(x) : x \in \nn\}, \quad \mbox{
and } \quad \kk=\mathfrak l \oplus \mm.$$
By (\ref{e12}) $\mathfrak l$ is a Lie subalgebra of $\hh$ and $[\mathfrak l,
\mm]\subseteq \mm$ and since $\kk=\mathfrak l \oplus \nn$, $\kk$ is a Lie subalgebra of $\ggo$.
We assert that (\ref{e11}) is valid. This can be easily checked whenever at least
one of $x, y\in \vv$. If both $x, y\in \zz$ the left-hand side of (\ref{e11}) is
zero. The right-hand side lies in $\zz \cap \ker(\pi)$, but
$\ker(\pi)=\ker(j)$ and since $j$ is injective one has $\zz \cap
\ker(\pi)=\{0\}$, which proves (\ref{e11}). By following the argument preceding
(\ref{e11}) backwards, one can see that $N$ is naturally reductive with respect
to $\kk$.
\end{proof}
If $\hh$ is a Lie subalgebra of $\End(\vv)$ such that $\hh\subseteq \sso(\vv,
\la\,,\,\ra_{\vv})$ then we call $\la\,,\,\ra$ an {\em $\hh$-invariant metric}.
Under the conditions of Theorem (\ref{t1}), it follows that if $(N, \la\,,\,\ra)$ is naturally
reductive then the bilinear map $\tau$ defines a Lie algebra
structure on $\zz$ and the map $j:\zz \to \sso(\vv,\la\,,\,\ra_{\vv})$ becomes a real
representation of the Lie algebra $(\zz, \tau)$. Furthermore the metric
on $\vv$ is $j(\zz)$-invariant and since $\tau_x\in \sso(\zz, \la\,,\,\ra_{\zz})$ the
metric on $\zz$ is ad($\zz$)-invariant, where $\ad$ denotes the adjoint
representation of $(\zz, \tau)$.
Conversely let $\ggo$ be a real Lie algebra endowed with an ad($\ggo$)-invariant
metric $\la \,,\, \ra_{\ggo}$ and let $(\pi, \vv)$ be a faithful representation
of $\ggo$ endowed with a $\pi(\ggo)$-invariant metric $\la\,,\,\ra_{\vv}$ and
without trivial subrepresentations, that is, $\bigcap_{x\in \ggo}\ker \pi(x)=\{0\}$.
Define a 2-step nilpotent Lie algebra structure on the vector space underlying
$\nn=\ggo \oplus \vv$ by the following bracket
\begin{equation}\label{brac}
\begin{array}{ll}
[\ggo,\ggo]_{\nn}=[\ggo, \vv]_{\nn} =0 & [\vv, \vv] \subseteq \ggo \\ \\
\la [u,v], x\ra_{\ggo} = \la \pi(x) u, v\ra_{\vv}& \forall x\in \ggo, u, v\in
\vv.
\end{array}
\end{equation}
and equip $\nn$ with the metric obtained as the product metric
\begin{equation}\label{metric}
\la \,,\,\ra_{|_{\ggo \times \ggo}}=\la \,,\, \ra_{\ggo}\qquad \la
\,,\,\ra_{|_{\vv \times \vv}}=\la \,,\, \ra_{\vv} \qquad \la \ggo, \vv\ra=0.
\end{equation}
Take $N$ the simply connected 2-step nilpotent Lie group with Lie algebra
$\nn$ and endow it with the left invariant metric determined by $\la \,,\,\ra$.
Since $(\pi, \vv)$ has no trivial subrepresentations, the center of $\nn$
coincides with $\ggo$. Moreover $\vv$ is its orthogonal complement and the
transformation $j(x)$ defined as in (\ref{br}) is precisely $\pi(x)$ for all
$x\in \ggo$. Since $(\pi, \vv)$ is faithful, the commutator of $\nn$ is $\ggo$:
$C(\nn)=\ggo$. Since the set $\{\pi(x)\}_{x\in \ggo}$ is a Lie subalgebra of
$\sso(\vv,\la\,,\,\ra_{\vv})$ we conclude that $(N, \la\,,\,\ra)$ is
naturally reductive.
\begin{thm}\label{t2} Let $\ggo$ denote a Lie algebra equipped with an
ad-invariant metric $\la\,,\,\ra_{\ggo}$ and let $(\pi, \vv)$ be a real
faithful representation of $\ggo$ without trivial subrepresentations and
endowed with a $\pi(\ggo)$-invariant metric $\la\,,\,\ra_{\vv}$.
Let $\nn$ be the Lie algebra $\nn=\ggo \oplus \vv$ direct sum of vector spaces,
together with the Lie bracket given by (\ref{br}) and furnished with the metric
$\la\,,\,\ra$ as in (\ref{metric}). Then the corresponding simply connected
2-step nilpotent Lie group
$(N,\la \,,\,\ra)$, where $\la\,,\,\ra$ denotes the induced left invariant metric, is a naturally reductive pseudo Riemannian
space.
The converse holds whenever the center of $N$ is non degenerate and $j$ is
faithful. \end{thm}
\begin{rem} Suppose the representation $(\pi, \vv)$ of $\ggo$ is not
faithful. Then
$$z\in \ker \pi \Longleftrightarrow \la z, [u,v]\ra=0 \quad \forall u,v \in
\vv \quad \Longrightarrow \quad z \in C(\nn)^{\perp}.$$
Since the metric on the center $\ggo$ is not necessarily definite, $\ker \pi \cap C(\nn)$ could be non trivial, so that the sum of vector spaces $\ker \pi+C(\nn)$ is not necessarily direct.
When $\pi$ has some trivial subrepresentation,
$$u\in \bigcap_{x\in \ggo} \ker\pi(x) \Longleftrightarrow \la \pi(x)u, v\ra=0 \quad
\forall x\in\ggo,\, v\in \vv \quad \Longleftrightarrow \quad \la x, [u,v]\ra=0 \quad \forall x\in \ggo,\, v\in\vv,$$
thus $[u,v]=0$ for all $v\in \vv$, which says $u\in
\zz(\nn)$. Hence $\ggo \subsetneq \zz(\nn)$.
\end{rem}
\begin{rem} While in the Riemannian case the positive definiteness of the metric
forces $\ggo$ to be compact, in the pseudo Riemannian
case the statement above only imposes the restriction that $\ggo$ carries an
ad-invariant metric. See the next example.
\end{rem}
\begin{exa} \label{exad} The Killing form on any semisimple Lie algebra is an ad-invariant
metric.
Any Lie algebra $\ggo$ can be embedded into a Lie algebra which admits an
ad-invariant metric. In fact, the cotangent $\ct^*\ggo=\ggo
\ltimes_{coad}\ggo^*$, where $coad$ denotes the coadjoint representation, admits a neutral ad-invariant metric which is given by:
$$\la (x_1, \varphi_1),(x_2,\varphi_2)\ra=\varphi_1(x_2)+\varphi_2(x_1)\qquad
\qquad x_1, x_2\in \ggo,\quad \varphi_1, \varphi_2\in \ggo^*.$$
Notice that both $\ggo$ and $\ggo^*$ are isotropic subspaces.
\end{exa}
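The ad-invariance of this pairing can also be checked by a direct computation. The following small numerical sketch is our own addition, included purely as an illustration; it performs the check for $\ggo=\sso(3)$, realised as $\RR^3$ with the cross product, in which case $coad(x)\varphi$ is identified with the cross product of $x$ and $\varphi$:
\begin{verbatim}
import numpy as np

# g = so(3) realised as R^3 with the cross product; then coad(x)phi is
# identified with cross(x, phi) under g* ~ R^3.  Elements of the cotangent
# t*g = g x g* are written as vectors (x, phi) in R^6.
def br(a, b):
    x1, p1, x2, p2 = a[:3], a[3:], b[:3], b[3:]
    return np.concatenate([np.cross(x1, x2),
                           np.cross(x1, p2) - np.cross(x2, p1)])

def ip(a, b):                       # the neutral pairing of the example
    return a[3:] @ b[:3] + b[3:] @ a[:3]

rng = np.random.default_rng(0)
a, b, c = rng.normal(size=(3, 6))
print(np.isclose(ip(br(a, b), c) + ip(b, br(a, c)), 0.0))   # prints True
\end{verbatim}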
\vskip 3pt
A {\em data set} $(\ggo, \vv, \la\,,\,\ra)$ consists of
(i) a Lie algebra $\ggo$ equipped with an ad-invariant metric
$\la\,,\,\ra_{\ggo}$,
(ii) a real faithful representation $(\pi, \vv)$ of $\ggo$ without trivial
subrepresentations,
(iii) a $\ggo$-invariant metric $\la\,,\,\ra$ on $\nn=\ggo\oplus
\vv$, i.e.\ $\la\,,\,\ra_{|_{\ggo\times \ggo}}=\la\,,\,\ra_{\ggo}$ is
ad($\ggo$)-invariant,
$\la\,,\,\ra_{|_{\vv\times \vv}}$ is $\pi(\ggo)$-invariant and $\la \ggo,\vv\ra=0$.
A data set $(\ggo, \vv, \la\,,\,\ra)$ determines a 2-step nilpotent Lie group
denoted by $N(\ggo, \vv)$ whose Lie algebra is the underlying vector space $\nn=\ggo
\oplus \vv$ with the Lie bracket defined by (\ref{brac}). Extend the
metric on $\nn$ by left translations after identifying
$\nn\simeq T_eN(\ggo, \vv)$, so that $N(\ggo, \vv)$ becomes a naturally reductive
pseudo Riemannian 2-step nilpotent Lie group by Theorem (\ref{t2}).
We study the isometry group in this case. Let $\hh$ denote the Lie algebra of
the isometries fixing the identity element; by (\ref{sd}) an element $D\in \hh$
is a skew-symmetric derivation which can be written as
$D=(A,B)\in \sso(\ggo,\la\,,\,\ra_{\ggo}) \times \sso(\vv,\la\,,\,\ra_{\vv})$ such that
$$ B\pi(x) -\pi(x) B =\pi(Ax),\quad \forall x\in \ggo.$$
Denote by $[\,,\,]_{\nn}$ the Lie bracket on $\nn$ and by $[\,,\,]$ the Lie
brackets on $\ggo$ and $\End(\vv)$. Then
$$\begin{array}{rcl}
\pi(A[x,y]) & = & B\pi([x,y])-\pi([x,y])B = B[\pi(x),\pi(y)]-[\pi(x),\pi(y)]B \\
& = & [B, [\pi(x),\pi(y)]]=[[B,\pi(x)],\pi(y)]+[\pi(x),[B,\pi(y)]]\\
& = & [\pi(Ax),\pi(y)]+[\pi(x),\pi(Ay)]=\pi([Ax,y]+[x,Ay]).
\end{array}
$$
Since $\pi$ is faithful,
$$A[x,y]=[Ax,y]+[x,Ay]\qquad \quad\mbox{ for all } x,y \in \ggo, $$
that is, $A\in \Der(\ggo) \cap \sso(\ggo,\la\,,\,\ra_{\ggo})$.
\begin{prop} \label{pi} The group of isometries fixing the identity on a naturally reductive pseudo Riemannian 2-step nilpotent Lie group $N(\ggo, \vv)$ as in (\ref{t2}) has Lie algebra
$$\hh=\{(A,B)\in (\Der(\ggo)\cap \sso(\ggo, \la\,,\,\ra_{\ggo}))\times \sso(\vv,\la\,,\,\ra_{\vv})\,:\, [\pi(x),B]=\pi(Ax)\quad \forall x\in \ggo\}.$$
\end{prop}
Whenever $\ggo$ is semisimple, the ad-invariant metric on $\ggo$ is the
Killing form; therefore any skew-symmetric derivation of $\ggo$ is of the form $\ad(x)$
for some $x\in \ggo$. In this case one can consider $\ggo
\subset \hh$, where the action is given by
$$ x \cdot (z + v) = \ad(x) z + \pi(x) v\qquad x\in \ggo, \,z+ v\in \nn,$$
where $\ad(x)$ denotes the adjoint map of the semisimple Lie algebra $\ggo$.
Thus an element $D=(A,B)\in \hh$ is of the form
$$(A,B)=(\ad(x), \pi(x))+(0,B')\qquad x\in \ggo$$
with $B'=B-\pi(x)\in \End_{\ggo}(\vv)\cap \sso(\vv,
\la\,,\,\ra_{\vv})=\ee_{\ggo}$, where
$\End_{\ggo}(\vv)$ denotes the set of intertwining operators of the
representation $(\pi,\vv)$ of $\ggo$. Since $\ggo$ and $\ee_{\ggo}$ commute,
$\hh=\ggo\oplus \ee_{\ggo}$ is a direct sum of Lie algebras, where we
identify $\ggo$ with the set $\{(\ad(x), \pi(x)): x\in \ggo\}\subseteq \hh$.
This proves the following result.
\begin{cor} \label{coro} Under the hypotheses of (\ref{pi}), with data set $(\ggo,\vv,
\la\,,\,\ra)$ for $\ggo$ semisimple, the group of
isometries fixing the identity element is
$$H=G \times U\qquad \qquad U=\End_{\ggo}(\vv) \cap \Or(\vv,
\la\,,\,\ra_{\vv}).$$
\end{cor}
\begin{proof} By (\ref{oa}) we have that
$$H=\{(\phi, T)\in \Or(\ggo, \la\,,\,\ra_{\ggo}) \times \Or(\vv, \la\,,\,\ra_{\vv}):
T\pi(x)T^{-1}=\pi(\phi x), \quad x\in \ggo\}.$$
Hence $\phi=\pi^{-1}\circ \Ad(T)\circ \pi\in \Aut(\ggo)$. Since $\ggo$ is
semisimple, any automorphism of $\ggo$ is inner, thus there
exists $g\in G$ such that $\phi=\Ad(g)$. By the paragraph above,
$(\Ad(g),\pi(g))\in H$ and therefore $\pi(g)^{-1} T\in U$. Hence
$$(\phi,T)=(\Ad(g), \pi(g)) \cdot (I, \pi(g)^{-1} T),$$
which says $H=G\times U$.
\end{proof}
\begin{rem} Compare with \cite{La}.
\end{rem}
\section{Geometry and Examples of naturally reductive 2-step nilmanifolds with non degenerate center}
The aim of this section is twofold. In the first part we write explicitly
some geometric features which enter the proof of (\ref{icp}), while in the second part we show examples of
naturally reductive pseudo Riemannian 2-step nilpotent Lie groups with non
degenerate center.
Recall that a 2-step nilpotent Lie algebra $\nn$ is said to be {\em non singular} if
$\ad(x)$ maps $\nn$ onto $\zz$ for every $x\in \nn-\zz$. If $\nn$ is equipped with a metric as
in (\ref{des2}), then $\nn$ is non singular if and only if $j(x)$ is non
singular for every $x\in \zz-\{0\}$. We shall say that a Lie group is non singular if its corresponding Lie algebra is non singular.
Whenever $N$ is simply connected 2-step nilpotent the exponential map
$\exp:\nn \to N$ produces global coordinates. In terms of this map the product on
$N$ can be obtained by
$$\exp(z_1+v_1) \exp(z_2+v_2)= \exp(z_1+z_2+\frac12 [v_1,v_2] + v_1+v_2) \quad\mbox{ for } z_1, z_2\in \zz, \, v_1, v_2\in \vv.$$
We shall study the geometry of 2-step nilpotent Lie groups when they are
endowed with a left invariant (pseudo Riemannian) metric $\la\,,\,\ra$ with respect to which the
center is non degenerate. In the
Riemannian case a deep study of the geometry can be found in the works of P. Eberlein \cite{Eb1, Eb2}.
The covariant derivative $\nabla$ is left invariant, hence it can be regarded as a bilinear map on $\nn$, given by the formula
\begin{equation}\label{nabla}
\nabla_x y= \frac12([x,y]-\ad(x)^*y -\ad(y)^*x) \qquad \quad \mbox{ for }
x,y \in \nn,
\end{equation}
where $\ad(x)^*$ denotes the adjoint of $\ad(x)$. By writing this
explicitly one obtains
\begin{equation}\label{nablaex}
\begin{array}{rcll}
\nabla_x y & = & \frac 12[x,y] & \mbox{ for all } x,y \in \vv\\
\nabla_x y= \nabla _y x & = & -\frac12 j(y) x & \mbox{ for all } x\in \vv, y\in
\zz \\
\nabla_x y & = & 0 & \mbox{ for all } x, y\in \zz
\end{array}
\end{equation}
Since translations on the left are isometries, to describe the geodesics of $(N, \la\,,\,\ra)$ it suffices to describe those
geodesics that begin at the identity $e$ of $N$. Let $\gamma(t)$ be a curve
with $\gamma(0) = e$, and let $\gamma'(0) = z_0+v_0 \in \nn$, where $z_0\in
\zz$ and $v_0\in \vv$. In exponential coordinates we write
$$\gamma(t)=\exp(z(t)+v(t)), \quad \mbox{ where } z(t)\in \zz, \,
v(t)\in \vv\quad \mbox{ for all $t$ and } z'(0)=z_0,\, v'(0)=v_0.$$
The curve $\gamma(t)$ is a geodesic if and only if the following equations
are satisfied:
\begin{eqnarray} \label{egeo}
v''(t) & = & j(z_0) v'(t) \mbox{ for all }t \in \RR \\
z_0 & \equiv & z'(t) + \frac12 [v'(t), v(t)] \mbox{ for all }t \in \RR
\end{eqnarray}
These equations were derived by A. Kaplan in \cite{Ka} to study
2-step nilpotent groups $N$ of Heisenberg type, but the proof is valid in
general for 2-step nilpotent Lie groups equipped with a left invariant
pseudo Riemannian metric where the center is non degenerate as noted in
\cite{Ge} and \cite{Bo}.
Let $\gamma(t)$ be a geodesic of $N$ with $\gamma(0) = e$. Write
$\gamma'(0) = z_0+v_0$, where $z_0\in \zz$ and $v_0\in \vv$ and identify
$\nn=T_eN$. Then
\begin{equation}\label{geo}
\gamma'(t) = dL_{\gamma(t)}(e^{tj(z_0)} v_0 + z_0) \qquad \mbox{ for all }
t \in \RR
\end{equation}
where $e^{tj(z_0)}= \sum_{n=0}^{\infty} \frac{t^n}{n!} j(z_0)^n$. In fact,
write $\gamma(t)=\exp (z(t)+v(t))$, where $z(t)$ and $v(t)$ lie in
$\zz$ and $\vv$ respectively for all $t \in \RR$. By using the previous
equations (\ref{egeo}) one has
$$\begin{array}{rcl}
\gamma'(t) &=& d\exp_{z(t)+v(t)}(z'(t)+v'(t)) \\
& = & dL_{\gamma(t)}(z'(t) + \frac12 [v'(t), v(t)]+ v'(t))\\
& = & dL_{\gamma(t)}(z_0+ v'(t)).
\end{array}
$$
Now by integrating the first equation of (\ref{egeo}) one gets
$v'(t)=e^{tj(z_0)} v_0$ which proves (\ref{geo}).
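As a consistency check of (\ref{egeo}) and (\ref{geo}), the following numerical sketch (our own addition, for illustration only) integrates the first equation of (\ref{egeo}) with $j(z_0)$ the rotation matrix appearing in the Lorentzian Heisenberg example below, and compares the result with $e^{tj(z_0)}v_0$:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

J = np.array([[0.0, 1.0], [-1.0, 0.0]])     # j(z_0) on v (2-dimensional here)
v0 = np.array([1.0, 0.0])                   # initial velocity component in v

def rhs(t, y):                              # y = (v, w) with w = v'
    v, w = y[:2], y[2:]
    return np.concatenate([w, J @ w])       # v' = w,  w' = j(z_0) w

sol = solve_ivp(rhs, (0.0, 2.0), np.concatenate([[0.0, 0.0], v0]),
                rtol=1e-10, atol=1e-12)
w_numeric = sol.y[2:, -1]                   # v'(2) obtained by integration
w_closed = expm(2.0 * J) @ v0               # v'(2) predicted by formula (geo)
print(np.allclose(w_numeric, w_closed))     # prints True
\end{verbatim}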
For $x,y$ elements in $\nn$ the curvature tensor is defined by
$$R(x,y)=[\nabla_x, \nabla_y]-\nabla_{[x,y]}.$$
Using (\ref{nablaex}) one gets
\begin{equation}\label{cu}
R(x,y)z = \left\{
\begin{array}{ll}
\frac12 j([x,y])z -\frac14j([y,z])x+\frac14 j([x,z])y& \mbox{
for } x,y,z \in \vv, \\ \\
-\frac14 [x,j(y)z] & \mbox{ for } x, y \in \vv, \, z\in \zz,\\ \\
-\frac14 [x,j(z) y]+\frac14 [y,j(z) x] & \mbox{ for } x, z \in \vv, \, y\in \zz,\\ \\
-\frac14 j(y)j(z) x & \mbox{ for } x\in \vv, y, z \in \zz,\\ \\
\frac14 [j(x),j(y)]z & \mbox{ for } x,y \in \zz, z\in \vv,\\\\
0 & \mbox{ for } x,y, z\in \zz.
\end{array} \right.
\end{equation}
Let $\Pi\subseteq \nn$ denote a non degenerate plane and let $Q$ be given by
$$Q(x,y)=\la x,x \ra \la y,y\ra-\la x,y \ra^2.$$
The non degeneracy property is equivalent to requiring $Q(v,w)\neq 0$ for one (hence every) basis $\{v,w\}$ of $\Pi$ \cite{ON}.
The sectional curvature of $\Pi$ is the number $K(x,y):=\la R(x,y)y, x\ra
/Q(x,y)$, which is independent of the choice of the basis.
Now take an orthonormal
basis for $\Pi$, that is, a linearly independent set $\{x,y\}$ such that
$\la x,y\ra=0$ and $\la x, x \ra =\pm 1$ and $\la y, y \ra=\pm 1$.
From (\ref{cu}) one obtains
\begin{equation}\label{sect}
K(x,y) = \left\{
\begin{array}{ll}
-\frac{3 \varepsilon_1 \varepsilon_2}4 \la [x,y], [x,y]\ra & \mbox{
for }x,y\in \vv \\ \\
\frac{\varepsilon_1 \varepsilon_2}4 \la j(y)x, j(y)x \ra & \mbox{
for }x\in \vv, y \in \zz,\\ \\
0 & \mbox{ for }x,y \in \zz
\end{array} \right.
\end{equation}
where $\varepsilon_1:=\la x,x \ra$ and $\varepsilon_2:= \la y, y\ra$.
\vskip 3pt
The Ricci tensor is given by
$Ric(x,y)={\rm trace}(z \mapsto R(z,x)y)$ for arbitrary elements
$x,y\in \nn$.
\begin{prop}\label{p2} Let $\{z_i\}$ denote an orthonormal basis of $\zz$ and $\{v_j\}$ an orthonormal basis of $\vv$. It holds
$$Ric(x,y)=\left\{
\begin{array}{ll}
0 & \mbox{ for }x\in \vv, y\in \zz\\
\\
\frac12 \sum_i \varepsilon_i \la j(z_i)^2 x, y\ra & \mbox{ for } x, y\in \vv,\, \varepsilon_i=\la z_i, z_i\ra\\ \\
-\frac14 \sum_j {\varepsilon}_j \la j(x) j(y) v_j, v_j\ra & \mbox{ for } x,y \in \zz,\, {\varepsilon}_j=\la v_j, v_j\ra.
\end{array} \right.
$$
\end{prop}
Due to the symmetries of the curvature tensor, the Ricci tensor
is a symmetric bilinear form on $\nn$ and hence there exists a
symmetric linear transformation $T:\nn \to \nn$ such that
$Ric(x,y)=\la Tx,y\ra$ for all $x,y \in \nn$. $T$ is called the Ricci
transformation. Let $\{e_k\}$ denote an orthonormal basis of $\nn$; it holds
$$Ric(x,y)=\sum_k \varepsilon_k \la R(e_k,x)y, e_k \ra =\la -\sum_k
\varepsilon_k R(e_k,x) e_k, y\ra$$
which implies
\begin{equation} \label{rt}
T(x)= -\sum_k \varepsilon_k R(e_k,x) e_k, \qquad \mbox{ where }
\varepsilon_k=\la e_k, e_k\ra.
\end{equation}
According to the results in (\ref{p2}) we have that $\zz$ and $\vv$ are
$T$-invariant subspaces and
$$
T(x) = \left\{ \begin{array}{ll}
\frac12 j(\sum_i
\varepsilon_i z_i)^2 x & x\in \vv, \quad\varepsilon_i=\la z_i, z_i\ra \\ \\
\frac14 \sum_j \varepsilon_j
[v_j, j(x)v_j] & x\in \zz \qquad \varepsilon_j=\la v_j, v_j \ra.
\end{array} \right.
$$
where $\{z_i\}$ and $\{v_j\}$ are orthonormal bases of $\zz$ and $\vv$,
respectively.
\begin{rem} The formulas above were used in \cite{C-P1} to prove (\ref{icp}).
\end{rem}
\begin{rem} For naturally reductive metrics, in the formulas
above one replaces the map $j$ by the corresponding representation $\pi:\ggo
\to \sso(\vv, \la\,,\,\ra_{\vv})$.
\end{rem}
Below we exhibit examples of naturally reductive metrics on 2-step
nilpotent Lie groups. This is achieved by translating the data at the Lie
algebra level to the corresponding simply connected Lie group by
following the key results provided in
(\ref{t2}). We shall make use of
euclidean and semisimple Lie algebras in order to obtain ad-invariant
metrics. For further details on Lie algebras with ad-invariant metrics see
for instance \cite{F-S,M-R}. Concerning isometries between pseudo Riemannian
2-step nilpotent Lie groups, notice that orthogonal isomorphisms give rise to isometries between the corresponding Lie groups (*).
\vskip 3pt
{\rm (i)} \,{\it Riemannian examples.} Naturally reductive Riemannian nilmanifolds arise by considering a data set with $\ggo$
compact. Recall that if $\ggo$ is compact then $\ggo=\kk \oplus \cc$ where
$\kk=[\ggo, \ggo]$ is a compact semisimple Lie algebra and $\cc$ is the
center (see \cite{Wa}). These examples were extensively studied in \cite{La}.
In the Riemannian case the converse of (*) above holds \cite{Wi}.
\vskip 2pt
{\rm (ii)} \,{\it Modified Riemannian.} Take any of those data sets corresponding to the positive definite case and follow the ideas in (\ref{change}). Clearly
all requirements in (\ref{t2}) apply and so one can produce naturally
reductive pseudo Riemannian metrics of signature $(\dim \ggo, \dim \vv)$.
Let $N(\ggo,\vv)$ denote a Riemannian naturally reductive nilmanifold
obtained from a data set $(\ggo, \vv, \la\,,\,\ra)$. Let
$\tilde{N}(\ggo,\vv)$ denote the pseudo Riemannian 2-step nilpotent Lie
group obtained by changing the sign of the metric on $\ggo$. Therefore by \cite{Wi}
$$N(\ggo,\vv)\simeq N'(\ggo',\vv') \qquad \Longleftrightarrow \qquad \nn(\ggo,\vv)\simeq
\nn(\ggo', \vv')$$
and this occurs if and only if there exist an isometric isomorphism $\phi:(\ggo,
\la\,,\,\ra_+) \to (\ggo', \la\,,\,\ra_+')$ and an isometry $T:(\vv,
\la\,,\,\ra_+) \to (\vv',\la\,,\,\ra_+')$ such that
$$T\pi(x)T^{-1}=\pi'(\phi x) \qquad \mbox{ for all }x\in \ggo.$$
Clearly $\phi:(\ggo, -\la\,,\,\ra_+) \to (\ggo', -\la\,,\,\ra_+')$ is also an isometric isomorphism, so that the corresponding simply connected Lie groups are isometric. Thus one has what follows.
\begin{prop} If $N(\ggo,\vv)\simeq N'(\ggo',\vv')$ then $\tilde{N}(\ggo,\vv)\simeq \tilde{N}'(\ggo',\vv')$.
\end{prop}
In \cite{La} detailed conditions to get the isometries $N(\ggo,\vv)\simeq N'(\ggo',\vv')$ were obtained.
\vskip 2pt
{\rm (iii)} \, {\it Abelian center.} Let $\RR^{2n}$ be equipped with a metric $B$, that is, $B$ is determined by a non singular symmetric linear map $b$ such that
$$B(x,y)=\la b x, y\ra \qquad \la \,,\,\ra \mbox{ the canonical inner product on } \RR^{2n}.$$
Let $t\in \sso(\RR^{2n}, B)$, that is, $t$ satisfies $t^*=-btb^{-1}$, where $t^*$ denotes the adjoint with respect to the canonical inner product on $\RR^{2n}$.
Any non singular $t\in \sso(\RR^{2n}, B)$ gives rise to a faithful
representation of $\RR$ on $(\RR^{2n},B)$ without trivial
subrepresentations. Let $\nn$ be the vector space direct sum
$\RR z \oplus \RR^{2n}$ equipped with a metric $\la\,,\,\ra$ such that
$$\la z, \RR^{2n}\ra=0\qquad \la z, z \ra= \lambda\in \RR-\{0\} \qquad
\la\,,\,\ra_{\RR^{2n}}=B.$$
Define a Lie bracket on $\nn$ by
$$[z, y]=0\qquad \forall y\in \nn\qquad \mbox{ and }\quad \la [u,v], z\ra= B(t u,v) \quad u,v\in \RR^{2n}.$$
This Lie bracket makes $\nn$ a 2-step nilpotent Lie algebra and, according to (\ref{t2}), the given metric is naturally reductive whenever the center is non degenerate. This Lie algebra is isomorphic to the Heisenberg Lie algebra.
Furthermore, the group of isometries fixing the identity element has Lie algebra
\begin{equation} \label{ih} \hh=\mathcal Z_{\sso(\RR^{2n}, B)} (t)
\end{equation}
where $\mathcal Z_{\sso(\RR^{2n}, B)} (t)$ denotes the centralizer of $t$
in $\sso(\RR^{2n},B)$, which can be verified by applying Proposition (\ref{pi}).
In this way one gets naturally reductive metrics on the Heisenberg
Lie group of dimension $2n+1$. The converse also holds.
\begin{prop} Any left invariant pseudo Riemannian metric on the Heisenberg
Lie group $H_{2n+1}$ for which
the center is non degenerate is naturally reductive.
The isotropy group has Lie algebra $\hh$ as in (\ref{ih}).
\end{prop}
\begin{proof}
Let $\hh_{2n+1}$
denote the Lie algebra of $H_{2n+1}$ and decompose it as an orthogonal direct sum
$\hh_{2n+1}=\RR z \oplus \vv$. Then the restriction of the metric to $\vv$ defines a metric $B$ of signature $(k,m)$.
The map $j$ defined in
(\ref{br}) is indeed skew-adjoint with respect to
$B:=\la\,,\,\ra_{|_{\vv\times\vv}}$ and it generates a subalgebra of
$\sso(\vv, B)$. Thus $z \mapsto j(z)$ defines a faithful representation
without trivial subrepresentations: indeed, with $t:=j(z)$ one has
$$tu=0 \Longleftrightarrow B( tu, v)=0 \quad \forall v\in \vv \Longleftrightarrow \la z,[u,v]\ra=0 \quad \forall v\in \vv.$$
But since the center is non degenerate, $[u,v]=0$ for all $v\in \vv$, which implies $u=0$.
Moreover any non degenerate metric on $\RR z$ is ad-invariant.
Hence the hypotheses of (\ref{t2}) are satisfied and the metric on $\hh_{2n+1}$ is naturally reductive.
\end{proof}
\begin{exa} Let $\hh_3$ denote the Heisenberg Lie algebra of dimension three
with basis $e_1, e_2, e_3$ and the only non zero Lie bracket $[e_1, e_2]=e_3$.
Lorentzian metrics on $\hh_3$ with non degenerate center can be defined by
$$\begin{array}{lrclcl}
(1) & -\la e_3, e_3\ra & = & 1 & = & \la e_1, e_1\ra=\la e_2, e_2\ra \\
(2) & \la e_3, e_3\ra & = & 1 & = & -\la e_1, e_1\ra=\la e_2, e_2\ra
\end{array}
$$
Thus in the basis $e_1,
e_2$ the map $j_1(e_3)$ for the metric in (1) is represented by the matrix
$$\left( \begin{matrix} 0 & 1 \\ -1 & 0 \end{matrix} \right)$$
(compare with (\ref{change})), while for the metric in (2) the map $j_2(e_3)$ is represented by
$$\left( \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \right).$$
See \cite{Ge} for more results concerning Lorentzian metrics.
\end{exa}
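The two matrices above can be recovered mechanically from the defining relation $\la j(z)u,v\ra=\la z,[u,v]\ra$. The short symbolic sketch below is our own addition and serves only as a sanity check:
\begin{verbatim}
import sympy as sp

# Heisenberg algebra h_3: [e1, e2] = e3, center z = R e3, v = span{e1, e2}.
# With the diagonal metric <e_i, e_i> = eps_i, the matrix of j(e3) solves
# Gv * j(e3) = C, where C[v, u] = <e3, [e_u, e_v]>.
def j_e3(eps1, eps2, eps3):
    Gv = sp.diag(eps1, eps2)                 # metric restricted to v
    C = sp.Matrix([[0, -eps3], [eps3, 0]])
    return Gv.inv() * C

print(j_e3(1, 1, -1))    # metric (1): Matrix([[0, 1], [-1, 0]])
print(j_e3(-1, 1, 1))    # metric (2): Matrix([[0, 1], [1, 0]])
\end{verbatim}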
The construction on the Heisenberg Lie algebra can be extended in the following
way. Let $B$ be a non degenerate symmetric bilinear form on $\RR^k$ and let $t_1,
\hdots, t_l$ be commuting linear maps in $\sso(\RR^k, B)$ such that
$\bigcap_i \ker(t_i)=\{0\}$.
Set $\nn=\RR^l \oplus \RR^k$, a direct sum of vector spaces, equip $\RR^l$ with
any metric and $\nn$ with the product metric such that $\la \RR^l, \RR^k\ra=0$.
The triple $(\RR^l, \RR^k, \la\,,\,\ra)$ is a data set which induces a naturally
reductive metric on the corresponding simply connected 2-step nilpotent Lie
group with Lie algebra $\nn$.
\vskip 2pt
{\rm (iv)} \, {\it Semisimple center.} Let $\RR^{p,q}$ denote the real vector
space $\RR^{p+q}$ endowed with a metric $\la\,,\ra_{p,q}$ of signature $(p,q)$.
Let $\sso(p,q)$ denote the set of skew-adjoint transformations for
$\la\,,\ra_{p,q}$. This is a semisimple Lie algebra and the Killing form $K$ is a
natural ad-invariant metric on $\sso(p,q)$. Indeed $\sso(p,q)$ acts on
$\RR^{p,q}$ just by evaluation. Take the direct sum of vector spaces
$\nn=\sso(p,q)\oplus \RR^{p,q}$ and equip it with the product metric
$\la\,,\,\ra_{\nn}$ such that $\la\,,\,\ra_{\sso(p,q)\times \sso(p,q)}=K$,
$\la\,,\,\ra_{\RR^{p,q}\times \RR^{p,q}}=\la\,,\,\ra_{p,q}$ and $\la\sso(p,q), \RR^{p,q}\ra=0$.
Thus a Lie bracket can be defined on $\nn$ by
$$K( [u,v], A )=\la A u, v\ra_{p,q} \qquad \quad \mbox{ for all }u,v \in
\RR^{p,q}, A\in \sso(p,q).$$
The corresponding simply connected 2-step nilpotent Lie group $N$, equipped with the left invariant
metric induced by the metric above, is a naturally reductive pseudo
Riemannian space (Theorem (\ref{t2})).
A similar construction can be done by restricting the evaluation action to a non degenerate subalgebra of
$\sso(p,q)$.
\vskip 2pt
{\rm (v)} \, {\it Modified tangent semisimple.} The Killing
form $K$ is an ad-invariant metric on any semisimple Lie algebra $\ggo$.
As usual the tangent Lie algebra $\ct \ggo$ is the semidirect product $\ggo \ltimes
\ggo$ via the adjoint representation.
We shall modify the algebraic structure on $\ct \ggo$ in order to get a
naturally reductive pseudo Riemannian 2-step nilpotent Lie group.
Take the Lie algebra $\ggo$ together with the Killing form and let $\vv$
denote the underlying vector space of $\ggo$, also endowed with the Killing
form metric. To this pair $(\ggo, \vv)$ attach
- the metric given by $\la\,,\,\ra_{\ggo}=\la\,,\,\ra_{\vv}=K$ and
$\la\ggo,\vv\ra=0$;
- the adjoint representation $\ad:\ggo \to \sso(\vv, K)$.
The adjoint representation is faithful and has no
trivial subrepresentations, so that $(\ggo, \vv, K+K)$ constitutes a
data set for a 2-step nilpotent Lie group $N(\ggo, \vv)$, which by (\ref{t2})
is naturally reductive pseudo Riemannian. Clearly the signature of this metric is twice
the signature of the Killing form, and the isometry group can be computed with
(\ref{coro}).
Notice that whenever $\ggo$ is compact the procedure above is a case of
the construction for naturally reductive Riemannian nilmanifolds (see
(i)).
In the next section we shall see that the 2-step nilpotent Lie algebra above,
together with another metric gives rise to a Lie algebra carrying an
ad-invariant metric.
\vskip 2pt
\section{Other examples of naturally reductive metrics}
In this section we study 2-step nilpotent Lie algebras with ad-invariant metrics. The corresponding Lie group carries a bi-invariant metric for which the center is degenerate.
An {\em ad-invariant metric} on a Lie algebra $\ggo$ is a non degenerate symmetric
bilinear map $\la\,,\,\ra:\ggo \times \ggo \to \RR$ such that
\begin{equation}
\la [x,y], z\ra + \la y, [x,z]\ra =0 \qquad \mbox{ for all } x,y, z \in \ggo.
\end{equation}
Recall that on a connected Lie group $G$ furnished with a left invariant pseudo
Riemannian metric $\la\,,\,\ra$, the following statements are equivalent
(see \cite{ON} Ch. 11):
\begin{enumerate}
\item $\la\,,\,\ra$ is right invariant, hence bi-invariant;
\item $\la\,,\,\ra$ is $\Ad(G)$-invariant;
\item the inversion map $g\to g^{-1}$ is an isometry of $G$;
\item $\la [x,y], z\ra + \la y, [x,z]\ra =0$ for all $x,y, z \in \ggo$;
\item $\nabla_xy =\frac12 [x,y]$ for all $x,y\in \ggo$, where $\nabla$ denotes the
Levi Civita connection;
\item the geodesics of $G$ starting at $e$ are the one parameter subgroups of $G$.
\end{enumerate}
Clearly $(G, \la\,,\,\ra)$ is naturally
reductive, which by (3) is a symmetric space. Furthermore one has
\begin{itemize}
\item the Levi-Civita connection is given by
$$\nabla_x y=\frac 12 [x,y] \qquad \mbox{ for
all } x,y \in \ggo,$$
\item the curvature tensor is
$$R(x,y)=\frac14 \ad([x,y])\qquad \mbox{ for }x,y \in \ggo.$$
\end{itemize}
Hence any {\em simply connected 2-step nilpotent Lie group equipped with a
bi-invariant metric is flat}.
The set of nilpotent Lie groups carrying a bi-invariant pseudo Riemannian metric is non empty. An element of this set is for instance the simply connected Lie group
whose Lie algebra is the free 3-step
nilpotent Lie algebra on two generators: in fact,
the Lie algebra $\nn$ spanned as a vector space by $e_1, e_2, e_3, e_4, e_5$ with the non zero Lie brackets
$$[e_1, e_2]=e_3\qquad [e_1, e_3]=e_4\qquad [e_2, e_3]=e_5, $$
carries the ad-invariant metric defined by the non vanishing symmetric relations
$$\la e_3, e_3\ra=1=\la e_1, e_5\ra=-\la e_2, e_4\ra.$$
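The ad-invariance and the non degeneracy of this metric can be confirmed by a computation on basis elements; the following small numerical sketch is our own addition and is included only as a verification aid:
\begin{verbatim}
import numpy as np
from itertools import product

# Free 3-step nilpotent Lie algebra on two generators, basis e1,...,e5
# (indices 0..4): [e1,e2] = e3, [e1,e3] = e4, [e2,e3] = e5.
def bracket(x, y):
    z = np.zeros(5)
    for i, j, k in [(0, 1, 2), (0, 2, 3), (1, 2, 4)]:
        z[k] += x[i] * y[j] - x[j] * y[i]
    return z

E = np.eye(5)
G = np.zeros((5, 5))                 # the metric defined in the text
G[2, 2] = 1
G[0, 4] = G[4, 0] = 1
G[1, 3] = G[3, 1] = -1

adinv = all(abs(bracket(E[i], E[j]) @ G @ E[k]
                + E[j] @ G @ bracket(E[i], E[k])) < 1e-12
            for i, j, k in product(range(5), repeat=3))
print(adinv, np.linalg.det(G) != 0)  # prints True True
\end{verbatim}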
In contrast, in the Riemannian case a naturally reductive nilpotent Lie group is at most 2-step nilpotent \cite{Go}.
\vskip 3pt
Let $\nn$ denote a 2-step nilpotent Lie algebra with an
ad-invariant metric $\la\,,\,\ra$. It is not hard to prove that $\zz^{\perp}=C(\nn)$ and
therefore the commutator $C(\nn)$ is always an isotropic ideal. Moreover $\nn$ decomposes as an
orthogonal product
\begin{equation}\label{desad} \nn=\tilde{\zz} \times \tilde{\nn}
\end{equation}
where $\tilde{\zz}$ is a non degenerate central ideal and $\tilde{\nn}$ is a 2-step
nilpotent ideal of corank zero; here the corank of $\nn$ is the integer $k:=\dim
\zz -\dim C(\nn)$. This follows essentially from the fact that the ad-invariant metric is non
degenerate on any complementary subspace of $C(\nn)$ in $\zz$. Thus, by choosing
such a complement $\tilde{\zz}$, so that $\zz=\tilde{\zz}\oplus C(\nn)$, and taking
its orthogonal complement in $\nn$, $\nn=\tilde{\zz}\oplus \tilde{\zz}^{\perp}$, one gets a
decomposition as above (\ref{desad}) with $\tilde{\nn}:=\tilde{\zz}^{\perp}$.
Assume now that the corank of $(\nn, \la\,,\,\ra)$ vanishes, so that
$\zz^{\perp}=C(\nn)=\zz$. One can produce an isotropic subspace $\vv_1$ such that
the ad-invariant metric on $\zz\oplus \vv_1$ is non degenerate. Hence one
obtains an orthogonal decomposition as vector spaces
$$\nn=(\zz \oplus \vv_1)\oplus \vv_2, \qquad\mbox{ where }\vv_2=(\zz\oplus
\vv_1)^{\perp}.$$
We claim $\vv_2=0$. In fact for all $x\in \vv_2$ one has $\la x, [u,v]\ra=0$ for
all $u,v\in \nn$
implying that $x\in C(\nn)^{\perp}\cap \vv_2=\{0\}$. Hence there is a
splitting of $\nn$ as a direct sum of the isotropic subspaces $\zz$ and $\vv$
so that the metric on $\nn$ is neutral:
$$\nn=\zz \oplus \vv.$$
Among other possible constructions, 2-step nilpotent Lie algebras admitting an
ad-invariant metric can be obtained as follows. Let $(\vv, \la \,,\, \ra_+)$ denote
a real vector space equipped with an inner product and let $\rho:\vv \to \sso(\vv,\la\,,\,\ra_+)$
be an injective linear map satisfying
\begin{equation}\label{jad}
\rho(u)u=0\qquad\mbox{ for all }u\in \vv.
\end{equation}
Consider the
vector space $\nn:=\vv^*\oplus \vv$ furnished with the canonical neutral metric
$\la\,,\,\ra$ and
define a Lie bracket on $\nn$ by
\begin{equation}\label{brad}
\begin{array}{rcl}
[x,y] & = & 0 \quad \mbox{ for }x\in \vv^*, y\in \nn\quad \mbox{ and }\quad
[\nn,\nn]\subseteq \vv^*\\
\la [u,v], w\ra & = & \la \rho(w) u,v\ra_+ \qquad \mbox{ for all } u,v,w\in \vv.
\end{array}
\end{equation}
Then $\nn$ becomes a 2-step nilpotent Lie algebra of
corank zero for which the metric $\la\,,\,\ra$ is ad-invariant. This construction was called the
{\em modified cotangent}, since $\nn$ is linearly isomorphic to the cotangent of $\vv$.
Notice that the commutator coincides with the center and both equal $\vv^*$. This
allows one to construct 2-step nilpotent Lie algebras of corank zero which carry an ad-invariant metric. Furthermore this is basically
the way to obtain such Lie algebras, see \cite{Ov}:
\begin{thm} \label{mod} Let $(\nn, \la\,,\,\ra)$ denote a 2-step nilpotent Lie algebra of corank
$m$ endowed with an ad-invariant metric. Then $(\nn, \la\,,\,\ra)$ is isometrically
isomorphic to an orthogonal direct product of the abelian Lie algebra $\RR^m$ and a
modified cotangent.
\end{thm}
One can get 2-step nilpotent examples by proceeding as follows. Let $(\ggo, B)$ denote a compact
semisimple Lie algebra and $B$ its Killing form. Since $B$ is negative
definite on $\ggo$, $-B$ determines an inner product
on $\ggo$. The adjoint map $\ad:\ggo \to \sso(\ggo, B)$ satisfies
(\ref{jad}); therefore the vector space $\ggo^* \oplus \ggo$, equipped with the
Lie bracket defined in (\ref{brad}), becomes a 2-step
nilpotent Lie algebra which carries an ad-invariant metric, namely the usual
neutral metric on $\ggo^*\oplus \ggo$.
From (\ref{jad}) it is clear that non singular Lie algebras cannot carry an
ad-invariant metric. Note that if a 2-step nilpotent Lie algebra admits an
ad-invariant metric, then $\dim \nn -\dim \zz=\dim C(\nn)$. This condition is
however not sufficient.
A skew-adjoint derivation $\phi$ of such a Lie algebra of corank zero has the form
$$\begin{array}{rcll}
\phi(z)& = &-A^*z\in \vv^*& \mbox{ for } z\in \vv^*\\
\phi(v)& = & B v + A v & \mbox{ where }Bv\in \vv^*, \,
Av\in \vv, \,\mbox{ for } v\in \vv
\end{array}
$$
where $A^*$ denotes the dual map of $A$: $A^*\varphi= \varphi \circ A$ for
$\varphi \in \vv^*$. On the other hand according to the results in \cite{Mu} the isotropy group of isometries fixing the
identity element on the corresponding 2-step nilpotent Lie group consists of the self adjoint transformations with respect to $\la\,,\,\ra$. Thus
$$I_a(N)\subseteq I(N).$$
Examples of 2-step nilpotent Lie algebras with ad-invariant metrics arise
by taking $\ct^*\nn$, the cotangent of any 2-step
nilpotent Lie algebra $\nn$ together with the canonical neutral metric (see
(\ref{exad})). Let $\nn=\zz\oplus \vv$ denote a 2-step nilpotent Lie
algebra, where $\vv$ is any complementary subspace of $\zz$ in $\nn$. Let $z_1, \hdots, z_m$ be a basis of the center $\zz$ and let $v_1,
\hdots, v_n$ be a basis of the vector space $\vv$. Thus
$$[v_i, v_j]=\sum_{s=1}^m c_{ij}^s z_s \qquad \quad i, j=1, \hdots, n.$$
Let $\ct^*\nn=\nn\ltimes \nn^*$ denote the cotangent Lie algebra obtained via
the coadjoint representation. Let
$z^1, \hdots, z^m, v^1,
\hdots, v^n$ be the dual basis of the basis above, adapted to the
decomposition $\nn^*=\zz^*\oplus \vv^*$. The non trivial Lie bracket relations coming from
the coadjoint action read
$$
[v_i, z^j]=\sum_{s=1}^n d_{ij}^s v^s \qquad \quad \mbox{ for } i=1, \hdots n, j=1, \hdots m.
$$
Thus
$[v_i, z^j](v_k)= d_{ij}^k$ and by the definition
$$
[v_i, z^j](v_k) =-z^j\Big(\sum_{s=1}^m c_{ik}^s z_s\Big)= - c_{ik}^j \qquad \quad
i,k=1, \hdots, n,\; j=1, \hdots, m.
$$
Therefore $d_{ij}^k= - c_{ik}^j$ for $i,k=1, \hdots, n$, $j=1, \hdots, m$.
It is clear that if for some basis of $\nn$ the structure constants are rational numbers, then by choosing the union of this basis and its dual on $\ct^*\nn$ one gets
rational structure constants for $\ct^*\nn$. Thus by the Mal'cev criterion $N$
and its cotangent $\ct^*N$, the simply connected Lie group with Lie algebra $\ct^*\nn$, admit
lattices which induce compact quotients (see \cite{O-V,Ra} for instance).
Let $\Gamma\subset \ct^*N$ denote a cocompact lattice of $\ct^*N$. Then
$\ct^*N$ acts on the compact nilmanifold $(\ct^*N)/\Gamma$ by left translation isometries if we
induce on the quotient the bi-invariant metric corresponding to the canonical neutral one on
$\ct^*\nn$. The tangent
space at the representative $e$ can be identified with $\ct^*\nn \simeq T_e((\ct^*N)/\Gamma)$ so that
$\ct^*\nn=\{0\}\oplus \ct^*\nn$ and clearly $\Ad(\Gamma)\ct^*\nn\subseteq
\ct^*\nn$, which says that $(\ct^*N)/\Gamma$ is homogeneous reductive. Moreover
the induced metric on the quotient satisfies
$$\la[x,y],z\ra+ \la [x,z], y\ra=0\qquad \quad \forall x,y, z\in \ct^*\nn.$$
\begin{prop} Let $N$ denote a 2-step nilpotent Lie group. If it admits a
cocompact lattice then the cotangent Lie group $\ct^*N$ admits a cocompact
lattice $\Gamma$ such that $(\ct^*N)/\Gamma$ is pseudo Riemannian naturally reductive.
\end{prop}
\begin{exa} The lowest dimensional 2-step nilpotent Lie group $N$ admitting an
ad-invariant metric occurs in dimension six.
Its Lie algebra can also be described as the cotangent of the Heisenberg Lie algebra, $\ct^*\hh_3$. Explicitly, let $e_1,e_2,e_3, e_4, e_5, e_6$ be a basis of $\nn$; the Lie brackets are
$$[e_4,e_5]=e_1 \qquad [e_4,e_6]=e_2 \qquad [e_5,e_6]=e_3$$
and the ad-invariant metric is defined by the non zero symmetric relations
$$1 = \la e_1, e_6\ra =\la e_2, e_5\ra =\la e_3, e_4\ra.$$
The corresponding simply connected six dimensional Lie group $N$ can be modelled on $\RR^6$ with the group multiplication given by
$$\begin{array}{rcl}
(x_1,x_2,x_3, x_4,x_5,x_6) \cdot (y_1, y_2,y_3,y_4,y_5,y_6) & = & (x_1 + y_1 + \frac12(x_4y_5-x_5y_4), \\
&& x_2+y_2+\frac12(x_4y_6-x_6y_4),\\
& & x_3+y_3+\frac12(x_5y_6-x_6y_5),\\
&& x_4+y_4, x_5+y_5, x_6+y_6).
\end{array}
$$
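Associativity of this multiplication, which reflects the fact that it is the Baker--Campbell--Hausdorff product of the 2-step nilpotent Lie algebra above, can be confirmed numerically; the following sketch is our own addition and serves only as a sanity check:
\begin{verbatim}
import numpy as np

def mul(x, y):                      # the group multiplication written above
    x1, x2, x3, x4, x5, x6 = x
    y1, y2, y3, y4, y5, y6 = y
    return np.array([x1 + y1 + 0.5 * (x4 * y5 - x5 * y4),
                     x2 + y2 + 0.5 * (x4 * y6 - x6 * y4),
                     x3 + y3 + 0.5 * (x5 * y6 - x6 * y5),
                     x4 + y4, x5 + y5, x6 + y6])

rng = np.random.default_rng(1)
a, b, c = rng.normal(size=(3, 6))
print(np.allclose(mul(mul(a, b), c), mul(a, mul(b, c))))   # prints True
\end{verbatim}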
By the Mal'cev criterion $N$ admits a cocompact lattice $\Gamma$.
By inducing the bi-invariant metric of $N$ on $N/\Gamma$ one gets an
invariant metric on $N/\Gamma$, and in this way $N/\Gamma$ becomes
a pseudo Riemannian naturally reductive compact nilmanifold.
For instance the subgroup of $N$ given by
$$\Gamma=\{(k_1,k_2,k_3, 2k_4, k_5, 2k_6)\,:\,k_i\in \ZZ \mbox{ for all } i=1,2,\hdots,6\}$$
is a cocompact lattice of $N$, so that $N/\Gamma$ is a
compact homogeneous manifold.
\end{exa}
\
{\sc Acknowledgements.} The author expresses her deep gratitude to Victor Bangert for his support during the author's stay at the Universit\"at Freiburg, where this work was done.
The author is partially supported by DAAD, CONICET, ANPCyT and Secyt - Universidad Nacional de C\'ordoba.
\end{document}
\begin{document}
\providecommand{\Keywords}[1]
{
\small
\textit{Keywords: } #1
}
\providecommand{\msccode}[1]
{
\small
\textit{2020 MSC: } #1
}
\title{Representation Theorem for modules over a commutative ring}
\author
{Colin Tan\footnote{Division of Mathematical Sciences,
School of Physical and Mathematical Sciences,
College of Science,
Nanyang Technological University. Email: \texttt{[email protected]}}
}
\date{04 May 2023}
\maketitle
\noindent \Keywords{archimedean semiring, module over commutative rings,
order unit, polynomial, polytope, positive definite matrix, Positivstellensatz,
pure state, simplex} \\
\noindent \msccode{Primary 12D99, 26C99; secondary 14P99, 15B48, 52A05}
\begin{abstract}
Positivstellens\"atze for polynomials, such as that of P\'olya and Handelman,
are known to be concrete instances of the abstract Representation Theorem
for (commutative unital) rings.
We generalise the Representation Theorem to
modules over rings.
When this module consists of all symmetric matrices with entries in the polynomial ring,
our generalisation of the Representation Theorem
becomes the corresponding generalisations of P\'olya's and Handelman's Positivstellens\"atze
to symmetric matrices with polynomial entries.
These generalisations were previously obtained by Scherer and Hol, and L\^{e}\ and Du'\ respectively,
using the method of effective estimates from analysis.
\end{abstract}
Since Berr and W\"ormann \cite{BerrWoermann2001},
and even earlier from W\"ormann's thesis \cite{Woermann1998},
it is known that several Positivstellens\"atze
follow from the Representation Theorem of real algebra.
In real algebraic geometry,
a \emph{Positivstellensatz} is the sufficiency of
the positivity of a polynomial on a space, usually compact,
for it to be representable in terms of a certificate.
A \emph{certificate} is an algebraic expression
that immediately witnesses the strict positivity of the polynomial on that space.
The Positivstellens\"atze that follow from the Representation Theorem
are characterised by their certificates forming a so-called \lq\lq\emph{archimedean}\rq\rq\!
subsemiring of the polynomial ring.
Given a ring $A$ (always commutative with multiplicative unit),
a subset $S \subseteq A$ is a \emph{subsemiring} if it contains $0$ and $1$ and is closed under addition and multiplication.
We say that $S \subseteq A$ is \emph{archimedean} if $S + {\mathbb{Z}} = A$,
where ${\mathbb{Z}}$ is the abelian group of integers.
These archimedean Positivstellens\"atze include those of
P\'olya \cite{Polya1928} (reproduced in \cite[pp.\ 57--60]{HLP1952}),
Handelman \cite{Handelman1988},
Schm\"udgen \cite{Schmuedgen1991},
and Putinar and Scheiderer \cite{PutinarScheiderer2010}.
The Representation Theorem
is a criterion for an element of a ring $A$ to lie in a module $M \subseteq A$ over an archimedean subsemiring $S$ of $A$.
Here, by an \emph{$S$-module}, we mean a subset $M \subseteq A$
that contains $0$,
is closed under addition
and satisfies $S M \subseteq M$.
The fundamental Representation Theorem was proven and rediscovered in various versions
by Stone, Krivine, Kadison and Dubois, among others.
Krivine's version is definitive \cite{Krivine64a, Krivine64b}.
Prestel and Delzell gave an account of its history \cite[Section 5.6]{PrestelDelzell2001}.
When $A = {\mathbb{R}}[x_1, \dots, x_n]$ is the real polynomial algebra,
and by choosing $S$ appropriately,
the Representation Theorem specialises to each of the afore-mentioned Positivstellens\"atze.
The abstract criterion in the Representation Theorem for a polynomial $f \in A$
to be representable as a certificate
then reduces to the strict positivity of $f$ on the relevant compact subset of real euclidean space ${\mathbb{R}}^n$.
We refer the reader to a survey of Scheiderer
that details
why these Positivstellens\"atze are instances of the Representation Theorem \cite{Scheiderer2009}
(except the Positivstellensatz of Putinar and Scheiderer, which was discovered after the survey was written).
The purpose of this note, then,
is to generalise the Representation Theorem
to a criterion for an element of a module $G$ over a ring $A$
to lie in a subsemimodule $M \subseteq G$ over an archimedean subsemiring $S \subseteq A$.
Here, we ask the reader to take note,
the term \lq\lq\emph{$A$-module}\rq\rq\ is in the usual sense of an additive group
equipped with an $A$-action $(f, s) \mathfrak{m}apsto f \cdot s: A \mathsf{t}imes G \mathsf{t}o G$
satisfying the usual axioms $f \cdot (s + t) = f \cdot s + f \cdot t$,
$(f + g) \cdot s = f \cdot s + g \cdot s$,
$(f g) \cdot s = f \cdot (g \cdot s)$,
and $1 \cdot s = s$, for all $f, g \in A$, $s, t \in G$.
Then a $M \subseteq G$
is a \emph{$S$-subsemimodule}
if it contains $0$,
is closed under addition
and satisfies $S \cdot M \subseteq M$.
So a $S$-module, in the above sense as used in real algebra,
is a $S$-subsemimodule of $A$ in our terminology,
where $A$ is regarded as a module over itself.
We state our main results.
Let $G$ be a module over a ring $A$, let $M \subseteq G$, and let $u \in M$.
Define $\mathcal{X}_A(G, M, u)$
as the set of all group homomorphisms $\Phi : G \to {\mathbb{R}}$ to the additive group of real numbers
such that $\Phi|_M \ge 0$, $\Phi(u) = 1$ and
\begin{align} \label{eq: multiplicativeLaw}
\Phi(f \cdot s) = \Phi(f \cdot u) \Phi(s) & & (\forall f\in A, s \in G).
\end{align}
Given $s \in G$, we write $s > 0$ (resp.\ $s \ge 0$) on $\mathcal{X}_A(G, M, u)$
if $\Phi(s) > 0$ (resp.\ $\Phi(s) \ge 0$) for all $\Phi \in \mathcal{X}_A(G, M, u)$.
\begin{theorem}
\label{thm: RepresentationTheoremForModulesOverARing}
Let $G$ be a module over a ring $A$,
let $M \subseteq G$ be a subsemimodule over an archimedean subsemiring $S$ of $A$,
and let $u \in M$ be such that $M + {\mathbb{Z}} u = G$.
Then, for each $s \in G$
with $s > 0$ on $\mathcal{X}_A(G, M, u)$,
there is some positive integer $n$
such that $n s \in M$.
\end{theorem}
The property that $n s \in M$ for some positive integer $n$
witnesses that $s \ge 0$ on $\mathcal{X}_A(G, M, u)$
since then $0 \le \Phi(n s) = n \Phi(s)$
would imply that $\Phi(s) \ge 0$,
for all $\Phi \in \mathcal{X}_A(G, M, u)$.
The special case of $(G, u) = (A, 1)$ is the standard Representation Theorem in real algebra.
This theorem was first proven algebraically by Becker and Schwartz \cite{BeckerSchwartz1983}
(refer also to \cite[Theorem 1.5.9]{Scheiderer2009} and \cite[Theorem 6.1]{BSS12}).
However, it is desirable that the conclusion of Theorem \ref{thm: RepresentationTheoremForModulesOverARing}
be strengthened to witness the strict positivity of $s$ on $\mathcal{X}_A(G, M, u)$.
Let ${\mathbb{Q}}$ denote the field of rational numbers,
and let ${\mathbb{Q}}_+$ denote the subsemifield of positive rational numbers.
Given a field $\mathbb{K}$ with ${\mathbb{Q}} \subseteq \mathbb{K} \subseteq {\mathbb{R}}$,
let $\mathbb{K}_+ := \mathbb{K} \cap {\mathbb{Q}}_+$.
Given a $\mathbb{K}$-algebra $A$ (always associative and commutative with multiplicative unit),
a \emph{$\mathbb{K}_+$-subsemialgebra}
is a subsemiring $S$ of the underlying ring of $A$
that contains $\mathbb{K}_+$.
Given an abelian group $G$, written additively,
and a submonoid $M \subseteq G$,
an element $u \in G$ is said to be an \emph{order unit}
of $(G, M)$ if $M + {\mathbb{Z}} u = G$.
Order units
were first named by Goodearl and Handelman
as \lq\lq\emph{strong units}\rq\rq\!
in the case where $M \cap (-M) = 0$ is a partial ordering \cite{GoodearlHandelman1976}.
By 1980, they had settled on the name \lq\lq order unit\rq\rq\ \cite{GoodearlHandelman1980}, \cite{HHL1980}.
For example, a semiring $S$ of a ring $A$ is archimedean
if and only if $1$ is an order unit of $(A, S)$.
An equivalent definition of an order unit
can be given in terms of a preordering $\le_M$ on $G$ associated to $M$,
defined by $s \le_M t$ if and only if $t - s \in M$ (for $s, t \in G$).
Then $u \in M$ is an order unit of $(G ,M)$
if, for each $s \in G$, there is a positive integer $n$ such that $s \le_M nu$.
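For instance, if $G$ is the additive group of symmetric $k \times k$ real matrices and $M \subseteq G$ is the submonoid of positive semidefinite matrices, then the identity matrix $u$ is an order unit of $(G, M)$: every symmetric matrix $t$ satisfies $t \le_M n u$ as soon as the positive integer $n$ exceeds the largest eigenvalue of $t$. (This illustration is ours; it anticipates the matrix examples treated at the end of this note.)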
\begin{theorem}
\label{thm: RepresentationTheoremForModulesOverAnAlgebra}
Let ${\mathbb{Q}} \subseteq \mathbb{K} \subseteq {\mathbb{R}}$ be a field.
Let $G$ be a module over a $\mathbb{K}$-algebra $A$,
let $M \subseteq G$ be a subsemimodule over an archimedean
$\mathbb{K}_+$-subsemialgebra $S$ of $A$,
and let $u \in M$ be an order unit of $(G, M)$.
Then every $s \in G$
with $s > 0$ on $\mathcal{X}_A(G, M, u)$
lies in $M$ and is, furthermore, an order unit of $(G, M)$.
\end{theorem}
As promised, the conclusion that $s \in M$ is an order unit of $(G, M)$
witnesses that $s > 0$ on $\mathcal{X}_A(G, M, u)$.
Indeed, for any order unit $s$ of $(G, M)$,
there is some positive integer $n$ with $u \le_M n s$,
hence $1 = \Phi(u) \le \Phi(n s) = n \Phi(s)$ by the monotonicity of $\Phi$,
therefore $\Phi(s) \ge 1/n > 0$.
The main contribution of this paper is the following observation ---
the argument of Burgdorf, Scheiderer and Schweighofer in \cite{BSS12}
for Theorem \ref{thm: RepresentationTheoremForModulesOverARing}
in the case where $G$ is a $S$-module (in the sense of real algebra)
already suffices to prove Theorem \ref{thm: RepresentationTheoremForModulesOverARing} in its full generality.
We recall their argument.
\begin{enumerate}[Step 1.]
\item \label{item: EHSCriterion}
First, they recall a result of Effros, Handelman and Shen from convex geometry \cite[Theorem 1.4]{EffrosHandelmanShen1980}.
\end{enumerate}
The result requires some terminology to state.
Let $G$ be an abelian group written additively,
$M \subseteq G$ a submonoid, and let $u \in M$ be an order unit of $(G, M)$.
A \emph{state} of $(G, M, u)$,
is a group homomorphism $\Phi : G \to {\mathbb{R}}$ to the additive group of real numbers
such that $\Phi|_M \ge 0$ and $\Phi(u) = 1$.
Regard the set of states, denoted by $\mathcal{S}(G, M, u)$,
as a subset of ${\mathbb{R}}^G := \prod_G {\mathbb{R}}$ via the injection $\Phi \mapsto (\Phi(g))_{g \in G} : \mathcal{S}(G, M, u) \hookrightarrow {\mathbb{R}}^G$.
Then $\mathcal{S}(G, M, u)$ is convex,
so that we may define a \emph{pure} state
as an extremal point $\Phi \in \mathcal{S}(G, M, u)$.
Explicitly, $\Phi \in \mathcal{S}(G, M, u)$ is pure if, whenever $2\Phi = \Phi_1 + \Phi_2$ for any two $\Phi_1, \Phi_2 \in \mathcal{S}(G, M, u)$,
then $\Phi = \Phi_1 = \Phi_2$.
\begin{lemma} \label{lem: EHSCriterionNonStrict}
Let $G$ be an abelian group,
$M \subseteq G$ a submonoid, and let $u \in M$ be an order unit of $(G, M)$.
Then, for every $s \in G$ with $\Phi(s) > 0$ for all pure states $\Phi : G \to {\mathbb{R}}$
of $(G ,M, u)$,
there is some positive integer $n$ such that $n s \in M$.
\end{lemma}
In their original version of Lemma \ref{lem: EHSCriterionNonStrict},
Effros, Handelman and Shen assumed that the preorder $\le_M$ is anti-symmetric,
or equivalently that $M \cap (-M) = 0$.
Burgdorf, Scheiderer and Schweighofer observed that this assumption is not necessary.
They outlined a proof of Lemma \ref{lem: EHSCriterionNonStrict}
using two theorems from convex geometry,
namely the Krein-Milman Theorem
and a hyperplane separation theorem
of Eidelheit \cite{Eidelheit1936} and Kakutani \cite{Kakutani1937}.
(The reader may also refer to Barvinok's textbook \cite{Barvinok2002},
where these two theorems are stated as
III.4.1 and III.1.7 respectively).
\begin{enumerate}[Step 1.]
\addtocounter{enumi}{1}
\item \label{item: formOfPureStatesOfRings}
Burgdorf, Scheiderer and Schweighofer showed that
every pure state of $(A, S, 1)$ is multiplicative, or equivalently,
lies in $\mathcal{X}_A(A, S, 1)$ \cite[Corollary 4.4]{BSS12}.
\item \label{item: substitution}
Therefore, the special case of Theorem \ref{thm: RepresentationTheoremForModulesOverARing} when $(G, u) = (A, 1)$
follows by applying Step \ref{item: formOfPureStatesOfRings} to
Lemma \ref{lem: EHSCriterionNonStrict}.
\end{enumerate}
We observe that their verbatim argument
proves the following generalisation of Step \ref{item: formOfPureStatesOfRings}
to modules over a ring.
\begin{proposition}
\label{prop: multiplicativeAlongFibres}
Let $G$ be a module over a ring $A$,
let $M \subseteq G$ be a subsemimodule over an archimedean subsemiring $S$ of $A$,
and let $u \in M$ be an order unit of $(G, M)$.
Then each pure state $\Phi$ of $(G, M, u)$
satisfies \eqref{eq: multiplicativeLaw}.
\end{proposition}
Burgdorf, Scheiderer and Schweighofer stated
Proposition \ref{prop: multiplicativeAlongFibres}
in the special case
where $G \subseteq A$ is an ideal
\cite[Proposition 4.1]{BSS12}.
We reproduce their proof in the Appendix,
where the reader may check that their argument is valid without the assumption that $G$ is contained in $A$.
Thus we are ready to prove Theorem \ref{thm: RepresentationTheoremForModulesOverARing}.
\begin{proof}[Proof of Theorem \ref{thm: RepresentationTheoremForModulesOverARing}]
Apply Lemma \ref{lem: EHSCriterionNonStrict} to Proposition \ref{prop: multiplicativeAlongFibres}.
\end{proof}
For Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra},
we will use the following version of Lemma \ref{lem: EHSCriterionNonStrict},
which is a special case of \cite[Corollary 2.7]{BSS12}.
\begin{lemma} \label{lem: EHSCriterionStrict}
Let ${\mathbb{Q}} \subseteq \mathbb{K} \subseteq {\mathbb{R}}$ be a field.
Let $G$ be a $\mathbb{K}$-vector space, let $M \subseteq G$ be a $\mathbb{K}_+$-subsemimodule,
and let $u \in M$ be an order unit of $(G, M)$.
Then every $s \in G$ with $\Phi(s) > 0$ for all pure $\Phi \in \mathcal{S}(G, M, u)$
lies in $M$ and is, furthermore, an order unit of $(G, M)$.
\end{lemma}
\begin{proof}[Proof of Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}]
Apply Lemma \ref{lem: EHSCriterionStrict} to Proposition \ref{prop: multiplicativeAlongFibres}.
\end{proof}
We end with some initial consequences of Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}.
Recall from the beginning of this note that
both the Positivstellens\"atze of Handelman \cite{Handelman1988}
and P\'olya \cite{Polya1928} for polynomials
are concrete instances of the regular Representation Theorem.
These two Positivstellens\"atze were generalised by
L\^{e}\ and Du'\ \cite[Theorem 3]{LeDu2018}, and Scherer and Hol \cite[Theorem 3]{SchererHol2006}
respectively
to symmetric matrices whose entries are polynomials.
We show that these matrix Positivstellens\"atze
are instances of Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}.
A \emph{multi-index} is an $m$-tuple
$k = (k_1, \dots, k_m)$ of nonnegative integers.
We use the notation $|k| := k_1 + \cdots + k_m$
and $\ell^k := {\ell_1}^{k_1} \cdots {\ell_m}^{k_m}$,
for any $m$-tuple
of polynomials $(\ell_1, \dots, \ell_m)$
in ${\mathbb{R}}[x] := {\mathbb{R}}[x_1, \dots, x_d]$.
For any $f \in {\mathbb{R}}[x]$ and any matrix $\matrix{P} = (p_{ij})$ with real entries,
the entrywise product $f \cdot \matrix{P} = (f p_{ij})$
is a matrix with entries in ${\mathbb{R}}[x]$.
Given a symmetric real matrix $\matrix{A}$,
let $\matrix{A} \succ 0$ (resp.\ $\matrix{A} \succeq 0$)
denote that $\matrix{A}$ is positive definite (resp.\ positive semidefinite);
whenever this notation is used, $\matrix{A}$ is understood to have real entries.
\begin{example}[L\^{e}\ and Du'] \label{eg: MatrixHandelman}
Let $P \subseteq {\mathbb{R}}^d$ be a polytope
(i.e.\ the convex hull of a finite set of points)
assumed to be full-dimensional
(i.e. the affine span of $P$ is the entire ${\mathbb{R}}^d$).
Fix a presentation of this bounded set $P$ as the intersection of halfspaces,
say
\begin{equation} \label{eq: HPolytopePresentation}
P = \{ x \in {\mathbb{R}}^d :\, \ell_1(x), \dots, \ell_m(x) \ge 0\},
\end{equation}
where $\ell_1, \dots, \ell_m$
are polynomials in ${\mathbb{R}}[x] := {\mathbb{R}}[x_1, \dots, x_d]$ of degree $1$.
Then any symmetric matrix $\matrix{M} = (f_{ij})$
having entries $f_{ij}$ in ${\mathbb{R}}[x]$
with $\matrix{M}(x) = (f_{ij}(x)) \succ 0$
for all $x \in P$ can be written as
\begin{equation} \label{eq: HandelmanMatrixCertificate}
\matrix{M} = \sum_{|k| = \kappa} \ell^k \cdot \matrix{P_k}
\end{equation}
for some integer $\kappa \ge 0$
and some family $\{\matrix{P_k}\}_{|k| = \kappa}$
of positive definite symmetric real matrices.
\end{example}
\begin{proof}
Let $A = {\mathbb{R}}[x]$, let $S = \{\sum_k a_k \ell^k \in {\mathbb{R}}[x] : \, \forall k, \, a_k \in {\mathbb{R}}_+\} \subseteq A$
be the ${\mathbb{R}}_+$-subsemialgebra generated by $\ell_1, \dots, \ell_m$,
let $G = \mathrm{Sym}_\matrixSize(A)$
be the $A$-module
of symmetric $\matrixSize \times \matrixSize$ matrices with entries in $A$,
let $M = \{ \sum_i g_i \cdot \matrix{S_i} : \, \forall i, \, g_i \in S, \matrix{S_i} \succeq 0\}$
be the $S$-subsemimodule
generated by the positive semidefinite $\matrixSize \times \matrixSize$ symmetric real matrices,
and let $u = \matrix{I_\matrixSize}$ be the $\matrixSize \times \matrixSize$ identity matrix.
We shall apply Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}.
To this end, recall that $S \subseteq A$ is archimedean by a classical argument of Minkowski.
Then $u = \matrix{I_\matrixSize}$ is an order unit of $(G, M)$ due to the following identity:
\begin{equation}
mn \matrix{I_\matrixSize} - f \cdot \matrix{A}
= \frac{1}{2}\big(
(m - f) \cdot (n \matrix{I_\matrixSize}+ \matrix{A})
+ (m + f) \cdot (n \matrix{I_\matrixSize} - \matrix{A})
\big),
\end{equation}
which holds for all positive integers $m, n$, all $f \in A$, and all $\matrixSize \times \matrixSize$ symmetric real matrices $\matrix{A}$.
This argument is standard, see \cite[Lemma 1]{BerrWoermann2001}.
Every $\Phi \in \mathcal{X}_A(G, M, u)$
takes the form
\begin{align}
\Phi(\matrix{M}) = \tr (\matrix{M}(x) \matrix{S}) & & (\forall \matrix{M} \in G)
\end{align}
for some $x \in P$ and some $\matrix{S} \succeq 0$ with $\tr(\matrix{S}) = 1$.
Here $\tr$ denotes the trace.
Therefore, if $\matrix{M} \in G $
has $\matrix{M}(x) \succ 0$
for all $x \in P$,
then $\tr(\matrix{M}(x) \matrix{S}) = \tr\big(\matrix{M}(x)^{1/2} \matrix{S} \matrix{M}(x)^{1/2}\big) > 0$ since $\matrix{S} \neq 0$,
thus $\Phi(\matrix{M}) > 0$
for all $\Phi \in \mathcal{X}_A(G, M, u)$.
Therefore $\matrix{M}$ is an order unit of $(G ,M)$ by Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra}.
Hence $\matrix{I_\matrixSize} \le_M n \matrix{M}$
for some positive integer $n$,
so that there are $a_{i, k} \in {\mathbb{R}}_+$, $\matrix{S_i} \succeq 0$
such that
$\matrix{M} = (1/n) (\matrix{I_\matrixSize} + \sum_i \sum_{k} a_{i,k} \ell^k \cdot \matrix{S_i})$.
Therefore $\matrix{M}$ has the desired form in \eqref{eq: HandelmanMatrixCertificate}
by choosing $\kappa$ sufficiently large
and using the identity $\ell_1 + \cdots + \ell_m = 1$.
\end{proof}
The case of $1 \times 1$ matrices in Example \ref{eg: MatrixHandelman}
is the classical Positivstellensatz of Handelman.
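For concreteness, here is a minimal scalar instance of \eqref{eq: HandelmanMatrixCertificate} on the segment $P = [0, 1]$, presented as $\{x \ge 0, \ 1 - x \ge 0\}$. The polynomial and the short computer algebra sketch below are our own choices, included purely as an illustration:
\begin{verbatim}
import sympy as sp

x = sp.symbols('x')
l1, l2 = x, 1 - x                       # P = [0, 1] = {l1 >= 0, l2 >= 0}
f = x**2 - x + 1                        # strictly positive on [0, 1]
certificate = l1**2 + l1*l2 + l2**2     # kappa = 2, all coefficients 1 > 0
print(sp.expand(f - certificate) == 0)  # prints True
\end{verbatim}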
L\^{e}\ and Du'\ also gave an effective upper bound of the least $\kappa$
required in \eqref{eq: HandelmanMatrixCertificate}
using the corresponding effective estimates of Powers and Reznick
for Handelman's Positivstellensatz \cite{PowersReznick2001}.
Choosing $P \subseteq {\mathbb{R}}^d$ as a $d$-dimensional simplex,
and expressing $\mathfrak{m}atrix{M}$ in terms of the barycentric coordinates of $P$
gives Scherer and Hol's generalisation of P\'olya's Positivstellensatz.
In the $1 \times 1$ case, the observation that
P\'olya's Positivstellensatz
is Handelman's Positivstellensatz
for a full-dimensional simplex
in barycentric coordinates
is due to Powers and Reznick \cite{PowersReznick2001}.
Let $\Delta^d \subseteq {\mathbb{R}}^{d + 1}$ denote the \emph{standard $d$-dimensional simplex},
whose vertices are $(1, 0, \dots, 0)$, $(0, 1, 0, \dots, 0)$, \dots, $(0, \dots, 0, 1)$.
A point $X = (X_0, \dots, X_d)\in {\mathbb{R}}^{d + 1}$ lies in $\Delta^d$
if and only if $X_0, \dots, X_d \ge 0$ and $X_0 + \cdots + X_d = 1$.
\begin{example}[Scherer and Hol]
Fix an integer $e \ge 0$,
and let $\mathbf{A}$ be a symmetric matrix
whose entries are forms (i.e. homogeneous polynomials) in
${\mathbb{R}}[X_0, \dots, X_d]$ of degree $e$.
If $\mathbf{A}(X) \succ 0$ for all $X \in \Delta^d$,
then there is some integer $\kappa \ge e$,
such that for every integer $m \ge \kappa$,
\begin{equation} \label{eq: matrixPolyaCertificate}
(X_0 + \cdots + X_d)^{m - e} \cdot \mathbf{A}
= \sum_{|k| = m} {X_0}^{k_0} \cdots {X_d}^{k_d} \cdot \mathbf{P}_k
\end{equation}
where $\mathbf{P}_k \succ 0$
for all $|k| = m$.
Scherer and Hol similarly gave an effective estimate for $\kappa$,
deduced from
that of Powers and Reznick for P\'olya's Positivstellensatz \cite{PowersReznick2001}.
\end{example}
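To make the $1 \times 1$ case concrete, here is a small worked instance (added here for illustration, not taken from the cited sources): for $d = 1$ and $e = 2$, the form $A = X_0^2 - X_0 X_1 + X_1^2$ is strictly positive on $\Delta^1$, and already for $m = 5$ (i.e. $m - e = 3$)
\[
(X_0 + X_1)^{3} \cdot A
= X_0^5 + 2 X_0^4 X_1 + X_0^3 X_1^2 + X_0^2 X_1^3 + 2 X_0 X_1^4 + X_1^5
\]
has all coefficients strictly positive, whereas for $m = 3$ and $m = 4$ some coefficients vanish; so $\kappa = 5$ works for this $A$, and multiplying further by $X_0 + X_1$ preserves the positivity of the coefficients, so every $m \ge 5$ works.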
Other applications of Theorem \ref{thm: RepresentationTheoremForModulesOverAnAlgebra} to Positivstellens\"atze should be possible.
\paragraph{Acknowledgements.} I am grateful for CheeWhye Chin's encouragement to pursue this line of research.
Thank you, Tim Netzer,
for giving me a chance to speak at a conference at Universit\"at Innsbruck
on some ideas in this paper.
I would also like to thank Tobias Fritz \cite{Fritz2023}, Xiangyu Liu, Mihai Putinar,
Claus Scheiderer, Markus Schweighofer, and Wing-Keung To
for discussions and input.
Finally, without help from my better half,
I would not have had the peace of mind to complete this note.
\section*{Appendix: Proof of Proposition \ref{prop: multiplicativeAlongFibres}}
There is nothing essentially original due to the author below:
the following proof of Proposition \ref{prop: multiplicativeAlongFibres}
is that of Burgdorf, Scheiderer and Schweighofer \cite[p.~123]{BSS12},
and is reproduced verbatim below only for the convenience of the reader.
The author's only contribution
is the observation that $G$ being contained in $A$ (and hence an ideal)
is never used in the proof.
As mentioned in \textit{loc.\ cit.}, precedents of Proposition \ref{prop: multiplicativeAlongFibres} can be found in
the work of
Bonsall, Lindenstrauss and Phelps
\cite[Theorem 10]{BLP66},
Krivine \cite[Theorem 15]{Krivine64b} and
Handelman
\cite[Proposition 1.2]{Handelman85}.
Let $A, S, G, M, u$ be as in Proposition \ref{prop: multiplicativeAlongFibres}.
Given a map $\Phi : G \to {\mathbb{R}}$,
we associate to each $f \in A$ satisfying $\Phi(f \cdot u) \neq 0$
a map $\Phi_f : G \to {\mathbb{R}}$ given by
\begin{align}
\Phi_f(s) := \frac{\Phi(f \cdot s)}{\Phi(f \cdot u)} & & (\forall s\in G).
\end{align}
The reader can verify that if $\Phi$ is a state of $(G, M, u)$ and $p \in S$ satisfies $\Phi(p \cdot u) > 0$,
then $\Phi_p$ is also a state of $(G, M, u)$.
Furthermore, if $\Phi$ is a state and $p_1, p_2 \in S$ satisfy $\Phi(p_1 \cdot u), \Phi(p_2 \cdot u) > 0$,
so that $p_1 + p_2 \in S$ and $\Phi((p_1 + p_2) \cdot u )> 0$,
then $\Phi_{p_1 + p_2}$ is a proper convex combination
of the states $\Phi_{p_1}$ and $\Phi_{p_2}$:
\begin{equation} \label{eq: convexCombination}
\Phi(p_1 \cdot u) \Phi_{p_1} + \Phi(p_2 \cdot u) \Phi_{p_2}
= \Phi((p_1 + p_2) \cdot u) \Phi_{p_1 + p_2}.
\end{equation}
\end{equation}
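For the reader's convenience, \eqref{eq: convexCombination} follows at once, assuming (as in the setting of Proposition \ref{prop: multiplicativeAlongFibres}) that $\Phi$ is linear and that the action of $A$ on $G$ is bilinear: for every $s \in G$,
\[
\Phi(p_1 \cdot u)\, \Phi_{p_1}(s) + \Phi(p_2 \cdot u)\, \Phi_{p_2}(s)
= \Phi(p_1 \cdot s) + \Phi(p_2 \cdot s)
= \Phi\bigl((p_1 + p_2) \cdot s\bigr)
= \Phi\bigl((p_1 + p_2) \cdot u\bigr)\, \Phi_{p_1 + p_2}(s).
\]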
\begin{proof}[Proof of Proposition \ref{prop: multiplicativeAlongFibres}]
Since $S \subseteq A$ is archimedean and $u$ is an order unit of $(G, M)$,
it suffices to show that \eqref{eq: multiplicativeLaw}
holds whenever $f \in S$ and $s \in M$.
Let $f \in S$ and $s \in M$ be given.
Then $f \cdot u\in S \cdot M \subseteq M$.
Hence $\Phi(f \cdot u) \ge 0$.
There are two cases: either $\Phi(f \cdot u) = 0$ or $\Phi(f \cdot u) > 0$.
\paragraph{Case 1: $\Phi(f \cdot u) = 0$.}
Then $u$ being an order unit of $(G, M)$
gives a positive integer $n$ such that $0 \le_M s \le_M n u$.
Since $M$ is closed under the $S$-action,
$0 \le_M f \cdot s \le_M n f \cdot u$.
Now $\Phi$ is monotone with respect to $\le_M$, so
\begin{equation}
0 \le \Phi(f \cdot s) \le n \Phi(f \cdot u) = 0,
\end{equation}
forcing $\Phi(f \cdot s) = 0$,
so that both sides of \eqref{eq: multiplicativeLaw} equal zero in this case.
\paragraph{Case 2: $\Phi(f \cdot u) > 0$.}
Since $S \subseteq A$ is archimedean,
there is some positive integer $n$ such that $n - f \in S$.
By increasing $n$ if necessary,
we may suppose that $n > \Phi(f \cdot u)$.
Then
\begin{equation}
\Phi((n - f) \cdot u)
= n\Phi(u) - \Phi(f \cdot u)
= n - \Phi(f \cdot u) > 0.
\end{equation}
Since $f, n - f\in S$ and $\Phi(f \cdot u), \Phi((n - f) \cdot u ) >0$,
we may apply \eqref{eq: convexCombination}
to conclude that $\Phi_n$ is a proper convex combination of $\Phi_f$ and $\Phi_{n - f}$.
But $\Phi_n = \Phi$ (by direct calculation, using the fact that $n$ is a scalar),
so the purity of $\Phi$ implies that $\Phi_f = \Phi$,
which is just \eqref{eq: multiplicativeLaw}.
\end{proof}
\end{document} |
\begin{document}
\maketitle
\begin{abstract}
\tadd{
Autoencoders are a category of neural networks with applications in numerous domains and hence, improvement of their performance is gaining substantial interest from the machine learning community. Ensemble methods, such as boosting, are often adopted to enhance the performance of regular neural networks. In this work, we discuss the challenges associated with boosting autoencoders and propose a framework to overcome them. The proposed method ensures that the advantages of boosting are realized when either output (encoded or reconstructed) is used. The usefulness of the boosted ensemble is demonstrated in two applications that widely employ autoencoders: anomaly detection and clustering. }
\end{abstract}
\begin{keywords}
Autoencoders, Boosting, Ensemble networks, Anomaly detection, Clustering
\end{keywords}
\section{Introduction}
Autoencoders (AEs) are a class of neural networks in which the input data is encoded to a lower dimension and then decoded to reconstruct the original input. Introduced by \cite{rumelhart1986learning} in the context of backpropagation without supervision, AEs are now used in many applications such as dimensionality reduction, classification, generation, anomaly detection, clustering and information retrieval, in varied fields like image and video processing, communication systems and recommendation systems \cite{bank2020autoencoders}. Given their increasing importance, there has been a lot of focus on enhancing the performance of AEs, especially in the context of preventing overfitting. Regularized AEs, where the network is trained with an \(\mathcal{L}_1\)-norm regularized reconstruction error, were employed to induce sparsity that prevents overfitting \cite{alain2014regularized}. \cite{vincent2008extracting} proposed a de-noising AE where random noise is deliberately added to the input, and \cite{hinton2012improving} recommended incorporating dropout during training.
Boosting is a time-tested method to decrease both bias and variance. It has been established that boosting neural networks makes them less prone to overfitting (\cite{DBLP:conf/birthday/Schapire13}). Boosting autoencoders is challenging because either the hidden representation or the reconstructed output may be used, depending on the application. We explore boosting of autoencoders and develop an ensemble architecture that is applicable across these different applications. Boosting of autoencoders has been studied in the specific context of unsupervised anomaly detection by \cite{sarvari2019unsupervised}, \cite{minhas2020semi} and \cite{chen2017outlier}. \tadd{While boosting emphasises learning the data points that are hard to classify, these methods reduce the focus on the hard-to-classify data points. This is done to ensure that the ensemble network is not trained on anomalous points.} Hence, these approaches are tailor-made to detect anomalies and cannot be applied to other applications of autoencoders.
In this work, we propose an architecture to boost autoencoders. \tadd{To the best of our knowledge, boosting autoencoders in a way that is universally applicable across their uses has not been explored before.} We demonstrate that the boosted autoencoder attains a lower reconstruction error on unseen data when compared to a single network. We also illustrate the utility of the proposed ensemble network in two different applications, anomaly detection and clustering, which use the reconstructed data and the encoded representation respectively. We compare their performances with state-of-the-art methods and draw conclusions.
\section{Background}
\subsection{Autoencoders}
AEs are multi-layered neural networks that typically perform hierarchical and nonlinear dimensionality reduction of data to yield a compressed latent representation. An autoencoder consists of two parts: 1) an encoder and 2) a decoder. The encoder maps the input data to a lower-dimensional space and the decoder converts the encoded output back into the input dimension. Both the encoder and the decoder are trained so that the input is reconstructed at the output while obtaining an informative, compressed representation. The schematic of a typical AE is depicted in Fig. \ref{fig:schematic}. $\bm x, \bm y \in \mathbb{R}^d$ are the input and the reconstructed output. The encoded output is denoted by \(\bm h \in \mathbb{R}^m, m<d\).
\tikzset
{
myTrapezium/.pic =
{
\draw [fill=blue!50] (0,0) -- (0,1) -- (2,2) -- (2,-2) -- (0,-1) -- cycle ;
\draw [dashed] (-0.2,2+0.1) -- (-0.2,-2-0.1);
\draw [dashed] (2+0.2,2+0.1) -- (2+0.2,-2-0.1);
\draw [dashed] (2+0.2,2+0.1) -- (-0.2,2+0.1);
\draw [dashed] (2+0.2,-2-0.1) -- (-0.2,-2-0.1);
\coordinate (-center) at (2/2,0);
\coordinate (-out) at (2,0);
},
myArrows/.style=
{
line width=1mm,
black,
-{Triangle[length=1.5mm,width=2mm]},
shorten >=1pt,
shorten <=1pt,
}
}
\begin{figure}[H]
\centering
\begin{tikzpicture}
[
node distance=10mm,
every node/.style={align=center},
]
\node (middleThing)
{\begin{tabular}{r}\end{tabular}};
\pic (right)[right=of middleThing.east] {myTrapezium} ;
\pic (left)[left=of middleThing.west, rotate=180] {myTrapezium} ;
\node at (left-center) {\small{Encoder}} ;
\node at (right-center) {\small{Decoder}} ;
\node {\LARGE{$\textbf{h}$}} ;
\coordinate (u) at (0.9,0);
\draw [myArrows] ($(left-out)+2.3*(u)$) -- ++(u) node [anchor=west]{};
\draw [myArrows] ($(right-out)-3.3*(u)$) -- ++(u) node [anchor=west]{};
\draw [myArrows] (right-out) -- ++(u) node [anchor=west]
{\LARGE{$\textbf{y}$}};
\draw [myArrows] ($(left-out)-(u)$) node [anchor=east] {\LARGE{$\textbf{x}$}} -- ++(u) ;
\end{tikzpicture}
\caption{Schematic of an autoencoder}
\label{fig:schematic}
\end{figure}
Based on the recent survey on AEs by \cite{bank2020autoencoders}, AEs can be classified according to various aspects such as architecture (feed-forward, convolutional AEs), regularization (sparse AEs, contractive AEs) and training methods (variational AEs). In our work, we focus on enhancing the performance of feed-forward and convolutional AEs using boosting.
\subsection{Boosting}
Ensemble learning, which uses multiple decision models instead of a single one, was introduced to reduce the bias and variance of a classifier. Boosting is a method of ensemble learning where the decisions of weak learners are combined to form a strong learner. The most popular boosting algorithm, AdaBoost, was introduced by \cite{freund1997decision} for the binary classification problem. It consists of a sequence of weak learners. The AdaBoost algorithm operates by maintaining a distribution of weights ($w_i$'s) over the training data; the distribution is updated such that the weights of the data points which are difficult to classify are increased. This ensures that the subsequent learners focus on the data points that are prone to error. A weighted vote of the decisions of the individual learners is computed as the output. AdaBoost is listed below as Algorithm \ref{alg:adaboost}.
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\SetKwInput{KwInitialization}{Initialization}
\begin{algorithm}[H]
\KwInput{Number of classifiers $M$, training data $x_1, x_2, \dots, x_n$}
\KwInitialization{Initialize the observation weights $w_{i}=1 / n, i=1,2, \ldots, n .$}
\For{\(m = 1,2,\dots, M\)}
{
Fit a classifier $T^{(m)}(\boldsymbol{x})$ to the training data using weights $w_{i}$\\
Compute
$\operatorname{err}^{(m)}=\sum_{i=1}^{n} w_{i} \mathbb{I}\left(c_{i} \neq T^{(m)}\left(\boldsymbol{x}_{i}\right)\right) / \sum_{i=1}^{n} w_{i}
$\\
Compute
$\alpha^{(m)}=\log \frac{1-\operatorname{err}^{(m)}}{\operatorname{err}^{(m)}}
$\\
Set
$w_{i} \leftarrow w_{i} \cdot \exp \left(\alpha^{(m)} \cdot \mathbb{I}\left(c_{i} \neq T^{(m)}\left(\boldsymbol{x}_{i}\right)\right)\right), i=1, \ldots, n
$\\
Re-normalize $w_{i}$
}
\KwOutput{$
C(\boldsymbol{x})=\arg \max _{k} \sum_{m=1}^{M} \alpha^{(m)} \cdot \mathbb{I}\left(T^{(m)}(\boldsymbol{x})=k\right)
$}
\caption{AdaBoost algorithm}\label{alg:adaboost}
\end{algorithm}
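For readers who wish to experiment with the classical algorithm above, the following is a minimal Python illustration; the synthetic dataset and the use of sklearn's default weak learner (a depth-one decision tree) are our choices and not part of the original algorithm description.
\begin{verbatim}
# Minimal AdaBoost demo with scikit-learn (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, random_state=0)  # toy binary data
clf = AdaBoostClassifier(n_estimators=50, random_state=0)  # 50 weak learners
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
\end{verbatim}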
\section{Boosting Autoencoders}
It has been observed that traditional AEs, such as fully connected AEs, are weak when applied to high-dimensional datasets; on the other hand, deep convolutional AEs tend to overfit towards the identity, even if the model capacity is limited (\cite{steck2020autoencoders}). \taddRev{We propose boosting autoencoders as a solution both to enhancing the performance of AEs and to reducing the tendency to overfit. The architecture should allow the ensemble network to cater to applications which use either the reconstructed or the encoded output, which makes the choice of the ensemble architecture crucial. In this section, we present an architecture for an ensemble of AEs, propose an algorithm for boosting them and provide simulation results.}
\subsection{Architecture of the ensemble}
The novelty of our approach lies in the architecture of the ensemble. The challenge in designing an architecture for boosting AEs is to ensure that the dimension of the compressed representation remains unchanged in spite of using multiple networks. Consider an ensemble of $M$ AEs, boosted using the output at the decoder. In this case, for an unseen data point, the latent representations of all $M$ AEs contain information about the data; hence, the dimension of the latent representation is scaled up by a factor of $M$. To eliminate this, we propose using an ensemble of $M$ encoders and taking their average output as the latent representation. Note that the proposed ensemble consists of a single decoder. The proposed ensemble architecture is illustrated in Fig. \ref{fig:ensemble}. The algorithm for boosting AEs using the proposed architecture is described in the next section.
\begin{figure}[h]
\centering
\tikzset
{
myTrapezium/.pic =
{
\draw [fill=blue!50] (0,0) -- (0,1) -- (2,2) -- (2,-2) -- (0,-1) -- cycle ;
\draw [dashed] (-0.2,2+0.1) -- (-0.2,-2-0.1);
\draw [dashed] (2+0.2,2+0.1) -- (2+0.2,-2-0.1);
\draw [dashed] (2+0.2,2+0.1) -- (-0.2,2+0.1);
\draw [dashed] (2+0.2,-2-0.1) -- (-0.2,-2-0.1);
\coordinate (-center) at (2/2,0);
\coordinate (-out) at (2,0);
},
myensemble/.pic =
{
\draw [fill=blue!50] (0,-1.6) -- (0,-1.6+1/5) -- (2,-1.6+2/5) -- (2,-1.6-2/5) -- (0,-1.6-1/5) -- cycle ;
\node at (0.1, -1.6) {\small{Encoder 1}} ;
\draw [fill=blue!50] (0,-0.6) -- (0,-0.6+1/5) -- (2,-0.6+2/5) -- (2,-0.6-2/5) -- (0,-0.6-1/5) -- cycle ;
\node at (0.1, -0.6) {\small{Encoder 2}} ;
\draw [fill=blue!50] (0,1.6) -- (0,1.6+1/5) -- (2,1.6+2/5) -- (2,1.6-2/5) -- (0,1.6-1/5) -- cycle ;
\node at (0.1, 1.6) {\small{Encoder M}} ;
\draw [dotted] (1.1,1.1) -- (1.1,-0.1);
\draw [dashed] (-0.2,2+0.1) -- (-0.2,-2-0.1);
\draw [dashed] (2+0.2,2+0.1) -- (2+0.2,-2-0.1);
\draw [dashed] (2+0.2,2+0.1) -- (-0.2,2+0.1);
\draw [dashed] (2+0.2,-2-0.1) -- (-0.2,-2-0.1);
\coordinate (-center) at (2/2,0);
\coordinate (-out) at (2,0);
},
myArrows/.style=
{
line width=1mm,
black,
-{Triangle[length=1.5mm,width=2mm]},
shorten >=1pt,
shorten <=1pt,
}
}
\begin{tikzpicture}
[
node distance=10mm,
every node/.style={align=center},
]
\node (middleThing)
{\begin{tabular}{r}\end{tabular}};
\pic (right)[right=of middleThing.east] {myTrapezium} ;
\pic (left)[left=of middleThing.west, rotate=180] {myensemble} ;
\node at (right-center) {\small{Decoder}} ;
\node {\LARGE{$\textbf{h}$}} ;
\coordinate (u) at (0.9,0);
\draw [myArrows] ($(left-out)+2.2*(u)$) -- ++(u) node [anchor=west]{};
\draw [myArrows] ($(right-out)-3.2*(u)$) -- ++(u) node [anchor=west]{};
\draw [myArrows] (right-out) -- ++(u) node [anchor=west]
{\LARGE{$\textbf{y}$}};
\draw [myArrows] ($(left-out)-(u)$) node [anchor=east] {\LARGE{$\textbf{x}$}} -- ++(u) ;
\end{tikzpicture}
\caption{Proposed architecture for boosting autoencoders}
\label{fig:ensemble}
\end{figure}
\subsection{Proposed algorithm}
The proposed algorithm is inspired by AdaBoost and employs an ensemble of $M$ networks (here, encoders). A distribution is maintained over the data points that indicates which data points need greater focus from the next encoder. While AdaBoost uses the classification error to assign weights to data points, the proposed algorithm uses the Mean Square Error (MSE) between the input and the output of the decoder, also termed the reconstruction error; this is done so as to enable the algorithm to cater to a variety of applications including, but not limited to, image classification. The proposed algorithm is listed as Algorithm \ref{alg:ensemble}.
The weights corresponding to the distribution over the data points $\bm x_i$, $i=1,\dots, n$ are denoted by $w_i$. The algorithm is initialized by assigning equal weights to all the data points. For every iteration of the algorithm, data points are sampled using these weights $w_i$. Initially, the input is passed through a single encoder-decoder pair and they are trained. The weights are then re-distributed such that the samples with high reconstruction error are more likely to be sampled at the next iteration. During the next cycle, the data is passed through the first two encoders and their average is decoded. The second encoder is trained using the reconstruction error so obtained. This process is continued until all the $M$ encoders are trained. Note that only encoder $m$ gets trained in a given cycle $m$, even though the encoded output is an average of encoders $1$ to $m$; the decoder, on the other hand, is trained throughout.
\SetKwInput{KwInput}{Input}
\SetKwInput{KwOutput}{Output}
\SetKwInput{KwInitialization}{Initialization}
\begin{algorithm}[h]
\KwInput{Number of encoders $M$, training data $x_1, x_2, \dots, x_n$}
\KwInitialization{\\1) Initialize a set of encoders $E_1, E_2, \dots, E_M$ and a decoder $D$ with weights randomly sampled from $\mathcal{N}(0,1)$ \\
2) Initialize the weight of each input in the training dataset as $w_{i}=1 / n, i=1,2, \ldots, n$.}
\For{\(m = 1,2,\dots, M\)}
{
\For{ \(iter = 1,2,\dots, I\) }
{
Obtain a batch of $Q$ training samples distributed according to the $w_{i}$'s.\\
Pass the chosen samples to the encoders $E_1, E_2, \dots, E_m$.\\
Compute $ Avg_{i} =\frac{\sum_{j=1}^{m}E_{j}\left(\boldsymbol{x}_{i}\right)}{m}$ for all $\boldsymbol{x}_{i}$ and pass it to the decoder $D$.\\
Compute the MSE loss between $x_i$ and the corresponding decoded outputs $D(Avg_{i})$\\
Back-propagate the MSE through the decoder $D$ and the encoder $E_m$ (only the last encoder)
}
Compute $ w_{i}=\left({x}_{i} - D\left(Avg_{i}\right)\right)^2$ for every $x_i$ and re-normalize such that $\sum_i w_i=1$.
}
\KwOutput{The average of all encoders is the encoded output.}
\caption{Proposed boosted autoencoder algorithm}\label{alg:ensemble}
\end{algorithm}
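For concreteness, a minimal PyTorch sketch of Algorithm \ref{alg:ensemble} is given below. The \texttt{Encoder} and \texttt{Decoder} module classes and the tensor \texttt{data} are placeholders to be supplied by the user, and the default hyperparameter values are only indicative; this is a sketch of the procedure, not the exact code used for the experiments.
\begin{verbatim}
import torch

def train_boosted_ae(data, Encoder, Decoder, M=5, I=2000, Q=50, lr=3e-3):
    n = data.shape[0]
    encoders = [Encoder() for _ in range(M)]
    decoder = Decoder()
    w = torch.full((n,), 1.0 / n)        # weights over the training points

    for m in range(M):
        # only the m-th encoder and the decoder are updated in this cycle
        params = list(encoders[m].parameters()) + list(decoder.parameters())
        opt = torch.optim.Adam(params, lr=lr)
        for _ in range(I):
            idx = torch.multinomial(w, Q, replacement=True)
            x = data[idx]
            with torch.no_grad():        # earlier encoders are frozen
                frozen = [enc(x) for enc in encoders[:m]]
            h = (sum(frozen) + encoders[m](x)) / (m + 1)   # averaged code
            loss = torch.nn.functional.mse_loss(decoder(h), x)
            opt.zero_grad()
            loss.backward()
            opt.step()
        # re-weight: points with high reconstruction error get sampled more
        with torch.no_grad():
            h_all = sum(enc(data) for enc in encoders[:m + 1]) / (m + 1)
            err = ((data - decoder(h_all)) ** 2).flatten(1).mean(dim=1)
            w = err / err.sum()
    return encoders, decoder
\end{verbatim}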
In our algorithm, the relationship between the first encoder and the decoder is no different from that of a single AE. However, the subsequent encoders work on tweaking/correcting the output of the first encoder, so that the overall reconstruction is better. In other words, subsequent encoders learn the slack of the previous encoders (sequential learning). At every stage, we take the average of all the encoders as our latent representation. By doing this, we enforce that every encoder is equally represented. Note that although the encoded output is reported as an average, the individual encoders learn vastly distinct representations, because each encoder learns from a different distribution on the training data based on the $w_i$'s. The encoded output needs to be a function of the outputs of all encoders; for our algorithm, we have chosen the average as this function. The algorithm may be modified to use other functions in place of the average. Although the procedure is similar to AdaBoost, we are not trying to boost multiple weak learners in our method; the main idea at play here is sequential learning.
\subsection{Simulation results}
To illustrate the utility of our proposed method, we perform experiments on two well-known image datasets, CIFAR-10 (\cite{krizhevsky2009learning}) and Fashion-MNIST (\cite{xiao2017fashion}). The reconstruction error obtained by using the proposed algorithm is compared with that of a single AE.
\subsubsection{Experiments on the CIFAR-10 Dataset}
The CIFAR-10 dataset contains 60,000 32x32 color images (in 10 different classes) of which 40000 images were used for training, 10000 for validation and 10000 images for testing. Let \textbf{Conv2D(\textit{i,o,k,s,p})} denote a 2D convolution layer where 'i' is the number of input channels, 'o' is number of output channels, 'k' is the size of kernel, 's' is stride and 'p' is padding. The architecture of the encoder used is Conv2D(3,8,4,2,1)-Conv2D(8,16,4,2,1)-Conv2D(16,16,4,2,1). The decoder is constructed symmetrically to that of the encoder.
The architecture also uses a mixture of sigmoid and ReLU activation functions to overcome the dying ReLU problem and the vanishing gradient problem caused by the sigmoid (\cite{chen2017outlier}). The Adam optimizer (\cite{kingma2014adam}) was used in training both the single AE and the ensemble of AEs with a learning rate of 3e-3. A single AE was trained for 50 epochs using a batch size of 50. For the proposed boosted AE, we consider $M=20$; 50 data samples are chosen for each iteration ($Q=50$) and a total of 2000 iterations are performed for each encoder ($I=2000$). A comparison of the reconstruction loss is plotted in Fig. \hyperref[fig:cifar-recon]{3(a)}.
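As an illustration of the Conv2D(\textit{i,o,k,s,p}) notation, the CIFAR-10 encoder above could be written in PyTorch roughly as follows; the exact placement of the sigmoid and ReLU activations is our assumption, since only the mixture of the two is specified in the text.
\begin{verbatim}
import torch.nn as nn

# Conv2D(3,8,4,2,1)-Conv2D(8,16,4,2,1)-Conv2D(16,16,4,2,1)
encoder = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=4, stride=2, padding=1),    # 3x32x32 -> 8x16x16
    nn.ReLU(),
    nn.Conv2d(8, 16, kernel_size=4, stride=2, padding=1),   # 8x16x16 -> 16x8x8
    nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=4, stride=2, padding=1),  # 16x8x8 -> 16x4x4
    nn.Sigmoid(),
)
\end{verbatim}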
In this experiment, images of size 3x32x32 (3072 features) were compressed to a size of 16x4x4 (256 features) thereby achieving a factor of compression of more than 12. We note that the proposed method achieves a lower reconstruction error when compared to a single AE while maintaining the same factor of compression. In addition, we also note that the convergence of loss is much faster as compared to a single AE.
\subsubsection{Experiments on the Fashion MNIST Dataset}
The F-MNIST dataset contains 70,000 28x28 grey-scale images in 10 different classes, of which 50000 images were used for training, 10000 for validation and 10000 for testing. The architecture of the encoder used is Conv2D(1,2,4,2,1)-Conv2D(2,4,4,2,1)-Conv2D(4,8,3,2,1)-Conv2D(8,8,4,2,1). The decoder is constructed symmetrically to that of the encoder. The activation functions are the same as in the CIFAR-10 experiment. The Adam optimizer was used in training both the single AE and the ensemble of AEs with a learning rate of 5e-3. A single AE was trained for 40 epochs using a batch size of 50. For the proposed boosted AE, we consider $M=20$; 50 data samples are chosen for each iteration ($Q=50$) and a total of 2000 iterations are performed for each encoder ($I=2000$). The reconstruction error for a single AE as well as the proposed method is plotted in Fig. \hyperref[fig:fmnist-recon]{3(b)}.
\begin{figure}[t]
\centering
\subfigure[CIFAR-10 dataset]{
\includegraphics[scale=0.5]{1.pdf}
\label{fig:cifar-recon}}
\hspace*{-2.9em}
\subfigure[F-MNIST dataset]{
\includegraphics[scale=0.5]{2.pdf}
\label{fig:fmnist-recon}
\label{fig:recon}}
\caption{Reconstruction error during validation}
\end{figure}
In the Fashion-MNIST experiment, images of size 1x28x28 (784 features) were compressed to a size of 8x2x2 (32 features). The images have been compressed by more than a factor of 24 and, once again, the proposed ensemble method outperforms a single AE in terms of both reconstruction error and convergence.
\section{Applications of Boosted Autoencoders}
AEs are used for varied applications such as anomaly detection, clustering (as a dimension reduction technique), de-noising of images, image classification, generative applications, etc. The applications can be broadly grouped based on whether the reconstructed output or the encoded output is used. In this section, we employ the proposed boosting method to an application from each of these groups to demonstrate its utility.
\subsection{Anomaly detection}
Anomaly detection refers to the task of finding unusual instances that stand out from the normal data. AEs have been widely employed in detecting anomalies in both images and videos (\cite{gong2019memorizing}, \cite{hasan2016learning}). The concept behind using AEs for anomaly detection is as follows: when an AE trained on normal samples encounters an anomaly, it results in a high reconstruction error. For our experiment, a semi-supervised learning setting is considered where all the data points in the training set are normal, whereas the testing data contains both normal and anomalous samples (refer to \cite{villa2021semi} for a thorough survey on semi-supervised learning for anomaly detection).
\subsubsection{Simulation setting}
We conduct our experiments on two well-known datasets: CIFAR-10 and Fashion-MNIST (F-MNIST). Following the setting in \cite{gong2019memorizing}, \cite{zong2018deep} and \cite{zhai2016deep}, 10 anomaly detection (i.e. one-class classification) datasets are constructed by sampling images from each class as normal samples and sampling anomalies from the remaining classes. The training data only consists of normal samples, and 10$\%$ of the training data is used for validation. The test data contains an equal number of samples from all the 10 classes.
Following \cite{ruff2018deep}, we use CNNs similar to LeNet, i.e., each convolutional layer is followed by a 2x2 max pooling layer with stride 2 and a leaky ReLU activation layer. The leakiness of the ReLU activations is set to $\alpha = 0.1$. For CIFAR-10, the encoder has 3 convolutional layers: Conv2D(3,32,3,1,1)-Conv2D(32,64,3,1,1)-Conv2D(64,64,3,1,1), followed by a dense layer of 256 units. For F-MNIST, we used a fully connected encoder with 4 dense layers (512-256-128-50), which shows that the proposed method is not limited to convolutional encoders. In all cases, the decoder was constructed symmetrically to the encoder, replacing max pooling with upsampling.
The Adam optimizer (\cite{kingma2014adam}) was used in training both the single AE and the ensemble of AEs with a learning rate of 3e-3. The single AE was trained for 100 epochs using a batch size of 50. For the proposed boosted AE, we consider an ensemble of 5 networks ($M=5$); 50 data samples are chosen for each iteration ($Q=50$). For CIFAR-10, a total of 1800 iterations are performed for each encoder ($I=1800$), while for F-MNIST, a total of 2000 iterations are performed for each encoder ($I=2000$).
\subsubsection{Baselines}
The proposed method is compared with the following existing methods for anomaly detection:
\begin{itemize}
\item One-Class SVM (OC-SVM/SVDD) (\cite{scholkopf1999support}): This algorithm captures the density of the data and classifies examples on the extremes of the density function as anomalies. \taddRev{The hyperparameter selection is done as in \cite{ruff2018deep}.}
\item Isolation Forest (IF) (\cite{liu2008isolation}): This algorithm `isolates' observations by randomly selecting a feature and then randomly selecting a split value between the maximum and minimum values of the selected feature. This random partitioning produces noticeably shorter paths for anomalies. We set the number of trees to 100 and the sub\_sample size to 256.
\item Kernel Density Estimation (KDE): This algorithm uses multiple Gaussian kernels to estimate the density of the distribution. We chose the bandwidth of the Gaussian kernels from $\{2^{0.5}, 2^{1}, \dots, 2^{5}\}$ using the log-likelihood score and 5-fold cross validation.
\item Bagging Random Miner (BRM) (\cite{camina2019bagging}): This algorithm uses an ensemble of random miners; each random miner randomly chooses data from the training set and then creates a set of most representative objects (MROs). A covariance matrix is then computed over these MROs, which is used to calculate the average object pair distance (AOPD). This AOPD is then used as a threshold to classify the outliers. \cite{villa2021semi} names it the best semi-supervised classifier for anomaly detection.
\item One-Class K-Means with Randomly-projected features Algorithm (OCKRA) (\cite{rodriguez2016ensemble}): This algorithm uses an ensemble of one-class K-means, each trained on a randomly selected subset of features, and is named the second best semi-supervised classifier for anomaly detection by \cite{villa2021semi}. The hyperparameters are tuned according to \cite{rodriguez2016ensemble}.
\end{itemize}
In all the above methods, the data is compressed using PCA while maintaining 95\% of the variance, \taddRev{whereas our method does not employ any pre-processing technique.}
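For reproducibility, a rough Python sketch of two of these baselines together with the PCA pre-processing is shown below. Hyperparameters not stated in the text (for instance the OC-SVM parameter \texttt{nu}) are placeholders of ours rather than the values used in the cited works.
\begin{verbatim}
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM

def baseline_scores(train_normal, test, method="iforest"):
    # PCA keeping 95% of the variance, as described above
    pca = PCA(n_components=0.95).fit(train_normal)
    tr, te = pca.transform(train_normal), pca.transform(test)
    if method == "iforest":
        model = IsolationForest(n_estimators=100, max_samples=256).fit(tr)
    else:
        model = OneClassSVM(kernel="rbf", nu=0.1).fit(tr)   # nu is a placeholder
    # score_samples: higher = more normal, so negate to get an anomaly score
    return -model.score_samples(te)
\end{verbatim}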
\begin{table}[H]
\centering
\begin{tabular}{| l | l | l | l | l | l | l | l |}
\hline Class & OC-SVM & IF & KDE & OCKRA&BRM& DAE & Boosted AE
\\ \hline Airplane & 0.6123 & 0.5186 & 0.6298 &0.5594&0.5965& 0.6268 & \textbf{0.6657}
\\ \hline Automobile & 0.5097& 0.5016 & 0.5033&0.5067 &0.5001& 0.5012&\textbf{0.5144}
\\ \hline Bird & 0.6041 & 0.5213 & 0.6276 &0.5911&0.6212& 0.6249 &\textbf{0.6603}
\\ \hline Cat & 0.5086 & 0.5026 & 0.5125 &0.4985&0.5273& 0.5359 &\textbf{0.5597}
\\ \hline Deer & \textbf{0.6903} & 0.5389 & 0.6769& 0.6542&0.6423& 0.6702&0.6838
\\ \hline Dog & 0.5205 & 0.5035 & 0.5317&0.5083&0.5249& \textbf{0.5386}&0.5334
\\ \hline Frog & 0.6488 & 0.5167 & 0.6788 &0.6543&0.5938& 0.6709&\textbf{0.6982}
\\ \hline Horse & 0.5202 & 0.503 & 0.5307 &0.5232&0.5032& \textbf{0.5355}&0.5280
\\ \hline Ship & 0.6150 & 0.5398& 0.6239 &0.6214&0.6246& 0.6635&\textbf{0.6734}
\\ \hline Truck & \textbf{0.5381} & 0.5554 & 0.4742 &0.5206&0.4986&0.5043 &0.4996
\\ \hline
\end{tabular}
\caption{CIFAR-10 class-wise AUC}
\label{tab:ano_cifar}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{| l | l | l | l | l | l | l | l |}
\hline Class & OC-SVM & IF & KDE & OCKRA & BRM & DAE & Boosted AE
\\ \hline T-Shirt & 0.8303 & 0.6932 & 0.8343 &0.8243& 0.8354& 0.8405 & \textbf{0.8425}
\\ \hline Trouser & 0.8978& 0.8982 & 0.9345 &0.9141&0.9380& 0.9529 & \textbf{0.9546}
\\ \hline Pullover & 0.8150 & 0.6851 & 0.8104&0.8078 &0.8057& 0.8134 &\textbf{0.8253}
\\ \hline Dress & 0.8525 & 0.7653 & 0.8631&0.8468 &0.8528& 0.8609 &\textbf{0.8858}
\\ \hline Coat & 0.8235 & 0.7228 & 0.8303 &0.8142&0.8230& \textbf{0.8581} &0.8344
\\ \hline Sandals & 0.7831 & 0.7711 & \textbf{0.8162} &0.7865&0.8099& 0.7868 &0.7933
\\ \hline Shirt & \textbf{0.7706 }& 0.6791 & 0.7359 &0.7491&0.7550& 0.7395 &0.7488
\\ \hline Sneaker & 0.9253 & 0.8823 & 0.9098 &0.9148&0.9004& 0.9237&\textbf{0.9311}
\\ \hline Bag & 0.7666 & 0.5997 & 0.7716&0.7675&0.7579&0.7700 &\textbf{0.7928}
\\ \hline Ankle Boots & 0.9056 & 0.8263 &0.8787&0.8933&0.8914& 0.8827 & \textbf{0.9064}
\\ \hline
\end{tabular}
\caption{F-MNIST class-wise AUC}
\label{tab:ano_fmnist}
\end{table}
We use the Area Under the Receiver Operating Characteristic curve (AUC-ROC) as the metric to quantify the efficiency of anomaly detection (\cite{ruff2018deep}, \cite{abati2019latent}, \cite{gong2019memorizing}). An ROC curve plots the True Positive Rate (TPR) against the False Positive Rate (FPR) at different classification thresholds. AUC-ROC provides an aggregate measure of performance across all possible classification thresholds. The class-wise AUC-ROC values are tabulated for CIFAR-10 and F-MNIST in Tables \ref{tab:ano_cifar} and \ref{tab:ano_fmnist}, respectively.
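Given per-sample anomaly scores (for instance reconstruction errors) and binary ground-truth labels, this metric can be computed directly; a minimal, self-contained illustration with sklearn on toy arrays (not real experimental data) is the following.
\begin{verbatim}
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])               # 1 = anomaly, 0 = normal
scores = np.array([0.1, 0.4, 0.9, 0.7, 0.2, 0.8])   # e.g. reconstruction errors
print(roc_auc_score(y_true, scores))                # 1.0 for this toy example
\end{verbatim}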
Our proposed ensemble method consisting of 5 boosted encoders (termed `Boosted AE') is compared to the methods listed above, and the best performing method in each class is highlighted in bold. For the CIFAR-10 and F-MNIST datasets, we note that the proposed method outperforms the others in most of the classes and is only slightly below the best method in the remaining classes. This illustrates that the proposed boosting method is efficient when the application uses the reconstructed output.
\subsection{Clustering}
Clustering is the task of dividing unlabelled data points into groups based on their similarity. Traditional clustering algorithms such as K-means \cite{macqueen1967some} and spectral clustering \cite{ng2001spectral} group input data based on the similarity of extracted features. These methods cannot be applied directly to image data due to their high dimensionality (\cite{zhu2020image}). An autoencoder allows the user to represent a high-dimensional image in a lower-dimensional space and has recently become a popular choice as a pre-processing step for clustering \cite{song2013auto}.
We use a single AE and the proposed boosted AE as a pre-processing step for clustering and compare them with a traditional dimensionality reduction technique, Principal Component Analysis (PCA). After dimensionality reduction, we use the K-means algorithm proposed by \cite{lloyd1982least} for clustering. \taddRev{Note that any clustering algorithm may be used; we use K-means as an example to demonstrate the efficiency of the proposed method as a pre-processing technique.} Normalized Mutual Information (NMI) is used as our evaluation metric; NMI is a normalization of the Mutual Information (MI) score \taddRev{between the cluster assignments and the class labels} that scales the results between 0 (no mutual information) and 1 (perfect correlation). It is computed as $$\frac{2\,I(Y;C)}{H(Y)+H(C)},$$
where $I$, $H$, $Y$ and $C$ refer to Mutual Information, entropy, class labels and cluster labels respectively.
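As a sanity check, NMI with this normalization can be computed with sklearn's \texttt{normalized\_mutual\_info\_score} using arithmetic averaging; in the sketch below, the random latent codes and labels are placeholders standing in for the encoded representations and the true classes.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

rng = np.random.default_rng(0)
codes = rng.normal(size=(1000, 10))       # stand-in for encoded representations
labels = rng.integers(0, 10, size=1000)   # stand-in for the true class labels

pred = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(codes)
# arithmetic averaging corresponds to 2*I(Y;C)/(H(Y)+H(C))
print(normalized_mutual_info_score(labels, pred, average_method="arithmetic"))
\end{verbatim}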
The network architecture used is once again similar to LeNet, i.e., each convolutional layer is followed by a 2x2 max pooling layer with stride 2 and a leaky ReLU activation layer. The leakiness of the ReLU activations is set to $\alpha = 0.1$. For both F-MNIST and MNIST, the encoder has 2 convolutional layers: Conv2D(1,8,5,1,0)-Conv2D(8,4,5,1,0), followed by a dense layer of 10 units. The experiments are performed on the MNIST and F-MNIST datasets and the results obtained are tabulated in Table \ref{tab:clustering}.
\begin{table}[H]
\centering
\begin{tabular}{| l | l | l |}
\hline Method & MNIST & F-MNIST
\\ \hline PCA+K-means & 0.4800 & 0.5005
\\ \hline Single AE+K-means & 0.6476 & 0.5331
\\ \hline Ensemble Method+K-means & \textbf{0.6900} & \textbf{0.5687}
\\ \hline
\end{tabular}
\caption{NMI scores for clustering}
\label{tab:clustering}
\end{table}
It is observed from Table \ref{tab:clustering} that using AEs for reducing the dimension of the data is more efficient than PCA. It is also observed that boosting AEs leads to an improvement in the NMI score when compared to the use of a single AE. \taddRev{Our results demonstrate the improved performance of the proposed boosted AE as a pre-processing technique for clustering.} This shows that the proposed method also enhances the performance of an AE when the encoded representation is used.
\section{Conclusion and Future Work}
In this work, we introduced a boosted ensemble of encoders \taddRev{as an effective way of boosting AEs for applications that use either the reconstructed or the encoded outputs.} Through various experiments, we have shown that our method performs significantly better than a single AE. Our method can also be extended to works that propose modifications to enhance the performance of a single AE. For example, \cite{zhu2020image} uses an AE for clustering, but incorporates Predefined Evenly-Distributed Class Centroids and the MMD distance in the loss function. Such a modification of a single AE can be combined with our proposed boosting architecture for a potential improvement in performance. Similarly, \cite{gong2019memorizing} and \cite{ruff2018deep} have used modifications of AEs for anomaly detection; these methods can also be combined with our proposed boosting ensemble. We believe that our method provides a first step towards a universal boosting framework for AEs. It can be further improved by exploring a variety of modifications, such as incorporating a weighted average of the encoded outputs or employing heterogeneous networks in the ensemble (for instance, different depths/widths/activations).
\bibliography{main}
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Monotone stability of quadratic semimartingales with
applications to unbounded general quadratic~BSDEs}
\runtitle{Monotone stability of quadratic semimartingales}
\begin{aug}
\author[A]{\fnms{Pauline} \snm{Barrieu}\corref{}\ead[label=e1]{[email protected]}}
\and
\author[B]{\fnms{Nicole} \snm{El Karoui}\ead[label=e2]{[email protected]}\thanksref{t1}}
\thankstext{t1}{Supported in part by the ``Chaire Financial Risk'' of
the Risk Foundation, Paris.}
\runauthor{P. Barrieu and N. El Karoui}
\affiliation{London School of Economics and Universit\'{e} Pierre et
Marie Curie}
\address[A]{Statistics Department\\
London School of Economics\\
Houghton street\\
London WC2A2AE\\
United Kingdom\\
\printead{e1}}
\address[B]{LPMA\\
Universit\'{e} Pierre et Marie Curie (Paris 6)\\
CNRS: UMR 7599\\
75005 Paris\\
France\\
\printead{e2}}
\end{aug}
\received{\smonth{1} \syear{2011}}
\revised{\smonth{1} \syear{2012}}
\begin{abstract}
In this paper, we study the stability and convergence of some general
quadratic semimartingales. Motivated by financial applications, we
study simultaneously the semimartingale and its opposite. Their
characterization and integrability properties are obtained through some
useful exponential submartingale inequalities. Then, a general
stability result, including the strong convergence of the martingale
parts in various spaces ranging from $\mathbb{H}^1$ to BMO, is derived
under some mild integrability condition on the exponential of the
terminal value of the semimartingale. This can be applied in particular
to BSDE-like semimartingales.
This strong convergence result is then used to prove the existence of
solutions of general quadratic BSDEs under minimal exponential
integrability assumptions, relying on a regularization into
linear-quadratic growth of the quadratic coefficient itself. Contrary
to most of the existing literature, it does not involve the
seminal result of Kobylanski [\textit{Ann. Probab.} \textbf{28} (2000)
558--602] on bounded solutions.
\end{abstract}
\begin{keyword}[class=AMS]
\kwd[Primary ]{60G07}
\kwd{60G44}
\kwd{60H99}
\kwd[; secondary ]{91B16}
\kwd{91B26}.
\end{keyword}
\begin{keyword}
\kwd{Quadratic semimartingale}
\kwd{monotone stability}
\kwd{strong convergence}
\kwd{BSDE-like semimartingale}
\kwd{quadratic BSDE}
\kwd{exponential transformation}
\kwd{entropic inequalities}.
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{sec1}
Backward Stochastic Differential Equations (BSDEs) were first
introduced by Pardoux and Peng~\cite{PardouxPeng90} in 1990 in the
Lipschitz continuous framework, and then extended to the framework of
continuous coefficients with linear growth by Lepeltier and San Martin \cite
{LepeltierSanMartin97} in 1997. They were soon recognized as powerful
tools with many different possible applications. More recently,
there has been an increased interest in quadratic BSDEs, with various
fields of application such as risk sensitive control problems or
dynamic financial risk measures and indifference pricing in
mathematical finance.
In this case, the BSDE is an
equation of the following type:
\begin{equation}
-dY_{t}=g(t,Y_{t},Z_{t})\,dt-Z_{t}\,dW_{t},\qquad Y_{T}=\xi_{T},
\end{equation}
where $W_{\cdot}$ is a standard Brownian motion, and the
coefficient $g$ satisfies the following quadratic \textit{structure
condition} ${\mathcal Q}(l_{\cdot},c_{\cdot},\delta)$:
\begin{equation}
| g(t,y,z)|\leq\kappa(t,y,z) \equiv\frac{1}{\delta
}l_{t}+c_t|y|+\frac
{\delta}{2}
|z|^{2}, \qquad d\mathbb{P}\otimes dt\mathrm{\mbox{-}a.s.},
\end{equation}
where $\delta>0$ is a given constant, and $(l_t), (c_t)$ are
predictable nonnegative processes.
The first result concerning the existence and uniqueness of solutions
to these equations was obtained in the bounded case in a Brownian
filtration setting by Kobylanski~\cite{Kobylanski00} in 2000. The proof
first relies on an exponential transformation as to come back to the better
known framework of BSDEs with a coefficient with linear growth and
then uses a regularization procedure to take the limit. The major
difficulty is then about proving the strong convergence of the
martingale parts without having to impose too strong
assumptions. This seminal paper has been extended in several
directions, to a continuous setting by Morlais
\cite{Morlais1}, to unbounded solutions by Briand
and Hu~\cite{Briand-Hu} or more recently by Mocha and Westray~\cite
{Mocha-Westray}. Some other authors have obtained
further results in some particular situations (see, e.g., Hu
and Schweizer~\cite{Hu-Schweizer},
Hu, Imkeller and Muller~\cite{Hu-Imkeller-Muller}, Mania and Tevzadze
\cite{ManiaTevzadze} or Delbaen, Hu and Richou \cite
{Delbaen-Hu-Richou}). Recently in 2008, Tevzadze~\cite{Tevzadze} has
given a direct proof for the existence and uniqueness of a bounded
solution in the Lipschitz-quadratic case.
We adopt in this paper a
completely different approach and consider a forward point of view to
treat directly the questions of convergence.
To do so, we introduce the notion of general quadratic semimartingales
in Section~\ref{sectionquadraticsemimartingale} and study their
characterization with regards to their
integrability properties under some interesting exponential
transformations in Section~\ref{subsectioncharactexpoinequality}.
Mainly motivated by
financial applications, where a seller price and a buyer price have to
be given simultaneously, we apply systematically the same
assumptions on the semimartingale and on its opposite. Having both
exponential integrability properties proves to be essential in the a
priori estimation of their quadratic variations. In Section \ref
{sectionestimates}, we
obtain a general stability result, including the strong convergence of
the martingale parts as presented in Theorem~\ref{thNicolas}. The
result is very general and simply require the existence of exponential
moment of the absolute value of (or quantities related to) the terminal
value of the semimartingales. Our approach allows us to obtain the
strong convergence of the martingale parts in $\mathbb{H}^1$. Stability
results are also obtained in various spaces, depending on the
assumption made on the terminal values. It is interesting to note that,
contrary to most of the existing literature, the space of BMO
martingales does not play any particular role\vadjust{\goodbreak} as the semimartingales
are no longer bounded. This stability result is completed, in the BSDE
framework, by the convergence in total variation of the finite
variation part. In Section~\ref{sec5}, existence results are obtained as an
application of this stability result.
More precisely, coming back to our initial motivation of
quadratic BSDEs, we first regularize the quadratic coefficient of
the BSDE through inf-convolution as to transform it into a
coefficient with linear-quadratic growth. This regularization as
linear-quadratic, and not simply linear, allows us to consider
situations which are typically not considered in the literature.
Applying the stability result of the previous section, we can pass to
the limit and prove the existence result for general quadratic BSDEs,
under ``minimal'' integrability assumptions. The power of the forward
point of view is striking as existence results are easily obtained in a
more general framework than the classical existing literature. However,
uniqueness results require stronger assumptions on the solutions, as
in Kobylanski~\cite{Kobylanski00} for the bounded case, or for convex
BSDEs, as in Briand and Hu~\cite{Briand-Hu} or more recently in Mocha
and Westray~\cite{Mocha-Westray} with exponential moments of any order,
or in Delbaen, Hu and Richou~\cite{Delbaen-Hu-Richou} under weaker
integrability assumptions.
This approach also has other potential applications that we will not
discuss here for lack of space. We simply mention the numerical
simulation of quadratic BSDEs, their study in terms of risk measures and
dual representations, and the solution of associated HJB-type equations.
\section{Quadratic semimartingales}\label{sectionquadraticsemimartingale}
Quadratic BSDEs have recently received a lot of attention, mainly
due to the wide range of possible applications, involving
optimization problems with an exponential criterion, such as
risk-sensitive control problems introduced by Fleming in the 1980s (see
Fleming and Sheu~\cite{FlemingSheu} for financial applications, or El
Karoui and Hamad\`{e}ne for an application to risk-sensitive zero-sum
stochastic functional games~\cite{ElKaroui-Hamadene}).
Financial applications have generated a renewed interest for this type
of BSDEs, particularly in connection with the theory of dynamic risk
measures as in Barrieu and El Karoui~\cite{Barrieu-ElKaroui6}, or
indifference pricing with exponential utility (see, e.g., Rouge
and El Karoui~\cite{Rouge-ElKaroui}, Mania and Schweizer \cite
{Mania-Schweizer} or the recent book edited by Carmona~\cite{Carmona}
among many other
references). Therefore, it is particularly relevant to understand the
structure of these processes, and to obtain conditions ensuring their
stability.
In the classical martingale theory, Burkholder--Davis--Gundy-type
estimates are crucial to obtain convergence results for martingales in
$\mathbb{H}^p$ from the convergence of their terminal values. The study
of classical BSDEs with linear growth relies also on precise a priori
estimates coming from the martingale theory, arising from a forward
point of view (see, e.g., in a general framework, El Karoui and
Huang~\cite{NEKHuang}).
In this section, after having defined quadratic BSDEs,
we adopt a forward point of view, introducing quadratic
semimartingales, with a similar structure condition, studying their
main properties and deriving some characterization results, which
depend on various integrability assumptions. These results will be
very useful to derive some stability and convergence results in the
next section.
\subsection{Definition of quadratic BSDEs and quadratic semimartingales}\label{paragraphdefinitionofquadraticsemimartingales}
Let us briefly recall the definition of a quadratic BSDE. Let $(\Omega,
\mathcal{F},\mathbb{ P},(\mathcal{F}_{t}))$ be a filtered probability
space, where the filtration $(\mathcal{F}_{t})$
satisfies the usual conditions of completeness and right-continuity.
The $\sigma$-field on $\Omega\times\mathbb R^+$ generated by the
adapted and left continuous processes is called the predictable $\sigma
$-field and denoted by $\mathcal{P}$. In this paper, we only consider
\textit{continuous} filtered probability space, that is, a filtered
probability space such that any locally bounded martingale is a
continuous martingale. A classical example is the probability space
generated by a Brownian motion, and satisfying the usual conditions.
\subsubsection*{Definition of quadratic BSDEs}
A~quadratic BSDE is an equation of the following type:
\begin{equation}
-dY_{t}=g(t,Y_{t},Z_{t})\,dt-Z_{t}\,dW_{t},\qquad Y_{T}=\xi_{T},
\label{eqBSDE}
\end{equation}
where $T>0$ is a given time horizon (possibly $\mathcal
{F}_{t}$-stopping time), $W_{\cdot}$ is a standard $d$-dimensional
$(\mathbb
{P},(\mathcal{F}_{t}))$-Brownian motion, and $Z_{t} \,dW_{t}$
simply denotes the scalar product. The $\mathcal{F}_{T}$-random
variable $\xi_{T}$ is the terminal condition,\setcounter{footnote}{1}\footnote{As pointed out
by one referee, the random variable $\xi_T$ has to be in fact
$\mathcal
{F}_{T^{-}}$-measurable as terminal value of a continuous process.}
and the coefficient $g$ is a $\mathcal{P}\otimes\mathcal{B}(\mathbb
{R}\times\mathbb{R}^d)$ measurable process satisfying the following
quadratic \textit{structure condition} ${\mathcal
Q}(l,c,\delta)$:
\begin{equation}\label{eqquadraticgrowth}
| g(\cdot,t,y,z)|\leq\kappa(t,y,z) \equiv|l_{t}|+c_t|y|+\frac
{\delta}{2}
|z|^{2}, \qquad d\mathbb{P}\otimes dt\mathrm{\mbox{-}a.s.},
\end{equation}
where $\delta>0$ is a given constant, and $(l_{\cdot}), (c_{\cdot})$ are
predictable positive\footnote{In the rest of the paper, we adopt the
following European terminology: a positive random variable $X$ verifies
$\mathbb P(X\geq0)=1$, and a strictly positive random variable
verifies $\mathbb P(X> 0)=1$. In the same way, a c\`{a}dl\`{a}g process
$K_{\cdot}$ is said to be increasing when, for any~$t,s$ such that $t
\geq s$, the random variable $A_t-A_s$ is positive and strictly increasing
when $A_t -A_s$ is strictly positive.} processes.
By solution to the $\operatorname{BSDE}(g,\xi_{T})$ defined in equation
(\ref
{eqBSDE}), we mean a pair of predictable processes taking values in
$\mathbb{R}\times\mathbb{R}^d$, $(Y,Z)=\{(Y_t,Z_t); t\in[0,T]\}$,
such that the paths of $Y$ are continuous,
$\int_0^T |Z_t|^2\,dt<\infty$, $\int_0^T
|g(t,Y_t,Z_t)|\,dt<\infty$, $\mathbb{P}$-a.s., and
\begin{equation}
\label{eqdefbsde}
Y_t=\xi_{T}+\int_t^T
g(s,Y_s,Z_s)\,ds-\int_t^T Z_s \,dW_s, \qquad\mathbb{P}\mathrm{\mbox
{-}a.s.}
\end{equation}
Note that, in the rest of the paper, this type of equality between two
processes has to be understood as holding up to
indistinguishability.\vadjust{\goodbreak}
This minimal definition will be completed later on by some further
integrability assumptions.
\subsubsection*{Definition of quadratic semimartingales}
Adopting a forward point of view, a solution of a quadratic BSDE is a
quadratic It\^o's semimartingale~$Y_{\cdot}$, where the predictable
process with finite variation satisfies the same quadratic structure
condition \eqref{eqquadraticgrowth}. Such a condition needs to be
further specified when considering the more general framework of
quadratic semimartingales defined on a continuous filtered probability
space.
\begin{edefinition}[(Quadratic semimartingale)]\label
{defquadraticsemimartingale}
Let $Y_{\cdot}$ be a continuous semimartingale,
with the decomposition
$Y_{\cdot}=Y_0-V_{\cdot}+M_{\cdot}$, where $V_{\cdot}$ is a
predictable process with finite total
variation~$|V|_{\cdot}$ and $M_{\cdot}$ is a local martingale with
quadratic variation
$\langle M \rangle_{\cdot}$.
$Y_{\cdot}$ is a \textit{quadratic semimartingale} if there exist two adapted
continuous increasing
processes $\Lambda_\cdot$ and $C_\cdot$ and a positive constant
$\delta$, such
that the
structure condition $\mathcal{Q}(\Lambda,C,\delta)$ holds true:
\begin{equation}\label{eqstructurecondition}
d|V|_t \ll\frac{1}{\delta}\,d\Lambda_t+|Y_t| \,dC_t+\frac{\delta
}{2}\,d\langle M \rangle_t,\qquad
d\mathbb{P}\mathrm{\mbox{-}a.s.}
\end{equation}
The symbol $\ll$ stands for the strong order of increasing
processes, stating that the difference is an increasing process.
Sometimes we use the short notation $D^{\Lambda,C}_{\cdot}(Y,\delta
)=\frac
{1}{\delta}\Lambda_{\cdot}+|Y_{\cdot}|\star C_{\cdot}$, and even
simply $D^{\Lambda,C}_{\cdot}$
when there is no ambiguity.
At this stage, no particular integrability assumption is made on the
processes~$\Lambda_{\cdot}$ and $C_{\cdot}$.
\end{edefinition}
\textit{Comments}:
(i) Observe that if $Y_{\cdot}$ is a quadratic semimartingale,
then $-Y_{\cdot}$ is also a quadratic semimartingale.\vspace*{-6pt}
\begin{longlist}[(iii)]
\item[(ii)]More generally, if $Y_{\cdot}$ is a quadratic semimartingale and
$\delta>0$, $Y^\delta_{\cdot} \equiv\delta Y_{\cdot}$ is a semimartingale
associated with $M^\delta_{\cdot} \equiv\delta M_{\cdot}$ with
quadratic variation
$\varq{M^\delta}_{\cdot}=\delta^2 \varq{M}_{\cdot}$ and $V^\delta
_{\cdot} \equiv\delta
V$. Then the structure condition for the process $\delta Y_{\cdot}$ becomes
$d|V^\delta|_t \ll \,d\Lambda_t+|Y^\delta_t| \,dC_t+\frac{1}{2}\,
d\varq
{M^\delta}_t$. This property justifies our choice of restricting our
study to quadratic semimartingales with constant $\delta=1$, without
any loss of generality.
\item[(iii)]The following notation specifies different classes of quadratic
semimartingales, $\mathcal{Q}(\Lambda,C,\delta)$ for the general case,
$\mathcal{Q}(\Lambda,C)$ when $\delta=1$, $\mathcal{Q}$ when
$\Lambda
_{\cdot}\equiv0, C_{\cdot}\equiv0, \delta=1.$
\end{longlist}
\subsection{Exponential transformations and algebraic characterization of quadratic semimartingales}\label{parrecallsemimartingale}\label{parbasicproperties}
\subsubsection*{Some recalls on semimartingales on a continuous
probability space} (i) Let us first recall the conventional notation
for the exponential martingale of a continuous (local) martingale
$M_{\cdot}$ with quadratic variation $\langle M \rangle_{\cdot}$
\begin{equation}\label{eqexpmartingale}
\mathcal E_{\cdot}(M) \equiv\exp\bigl(M_{\cdot}- \tfrac{1}{2}\langle M \rangle
_{\cdot}\bigr).\vadjust{\goodbreak}
\end{equation}
\begin{longlist}[(iii)]
\item[(ii)]
A right continuous left limited submartingale (c\`{a}dl\`{a}g in the
French denomination) $S_{\cdot}$ is a c\`{a}dl\`{a}g optional process
$S_{\cdot}=S_0+N_{\cdot}+K_{\cdot},$ where $N_{\cdot}$ is a local
martingale and $K_{\cdot}$ a
predictable c\`{a}dl\`{a}g increasing process.
The pair $(N_{\cdot},K_{\cdot})$ is called the additive decomposition
of $S$. When
$S_{\cdot}$ is a positive submartingale, $(M_{\cdot},A_{\cdot})$ is
said to
be the multiplicative decomposition of $S_{\cdot}$ if $S_{\cdot}=S_0
\mathcal{E}_{\cdot}(M)\exp(A_{\cdot})$, where~$M_{\cdot}$ is a
local martingale and
$A_{\cdot}$ a predictable c\`{a}dl\`{a}g increasing process.
\item[(iii)] Dellacherie and Meyer~\cite{Dellacherie-Meyer} (in Appendix
1---Probabilit\'{e}s et Potentiel~B) have extended this definition to right
and left limited submartingales (also known as strong submartingales)
when the increasing predictable process $K_{\cdot}$ is only with left and
right limits (l\`{a}dl\`{a}g in the French denomination), with the
following decomposition $K_{\cdot}=K^1_{\cdot}+K^2_{-\cdot}$, where
$K^1_{\cdot}$ is a c\`
{a}dl\`{a}g predictable increasing process and $K^2_{-\cdot}$ is the
process of the left limits of a c\`{a}dl\`{a}g optional increasing
process $K^2_{\cdot}$.
\end{longlist}
\subsubsection*{Characterization of $\mathcal{Q}$-semimartingales when
$\Lambda\equiv C \equiv0$ and $\delta=1$}
The simplest $\mathcal{Q}$-semimartingales are those for which the
structure condition ${\mathcal Q}$ is saturated, that is, $V_{\cdot
}=\frac
{1}{2}\langle M \rangle_{\cdot}$ or $\lu{V}_{\cdot}=-\frac{1}{2}
\langle M \rangle_{\cdot}$. Because of their importance, we refer to them as $q
$ (resp.,
$\lu{q}$) semimartingales, and denote them by
\begin{equation}\label{eqrtM}
\cases{
r_{\cdot}(r_0,M) \equiv r_0+M_{\cdot}-\frac{1}{2}\langle M \rangle_{\cdot}
\equiv
r_0+r_{\cdot}(M),\vspace*{2pt}\cr
\lu{r}_{\cdot}(r_0,M) \equiv\lu{r}_0+ M_{\cdot}+\frac
{1}{2}\langle M \rangle
_{\cdot} \equiv\lu{r}_0-r_{\cdot}(-M).}
\end{equation}
The operator $M \rightarrow r_{\cdot}(M)$ is not an additive operator,
nevertheless $r_{\cdot}(M)+r_{\cdot}(M')=r_{\cdot}(M+M')+\langle
M,M' \rangle_{\cdot}$ and
$r_{\cdot}(M)-r_{\cdot}(M')=r_{\cdot}(M-M')-\langle M-M', M' \rangle
_{\cdot}$.
Taking the exponential of $r_{\cdot}(M)$ immediately leads to the
exponential martingale $\mathcal E_{\cdot}(M)=e^{r_{\cdot}(M)}$
defined in \eqref
{eqexpmartingale}, whilst the exponential of $\lu{r}_{\cdot}(M)$
leads to
$e^{\lu{r}_{\cdot}(M)}=(\mathcal E_{\cdot}(- M))^{-1}$.
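In particular, since $\langle -M \rangle_{\cdot}=\langle M \rangle_{\cdot}$, the last identity is checked directly:
\[
\bigl(\mathcal E_{\cdot}(- M)\bigr)^{-1}
=\exp\bigl(M_{\cdot}+\tfrac{1}{2}\langle M \rangle_{\cdot}\bigr)
=e^{\lu{r}_{\cdot}(M)}.
\]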
It will also be interesting to introduce some asymmetry in the previous
definition of $\mathcal{Q}$-semimartingales, with the notion of
$\mathcal{Q}$-submartingales, especially useful when characterizing
the former.
\begin{edefinition}\label{defquadraticsubmartingale}
A $\mathcal{Q}$-submartingale is a continuous (or l\`{a}dl\`{a}g) semimartingale
$X_{\cdot}=X_0-V_{\cdot}+M_{\cdot}$ such that $A_{\cdot} \equiv-V_{\cdot
}+\frac{1}{2}\langle M \rangle_{\cdot}$ is a
predictable increasing process.
Equivalently, $e^{X_{\cdot}}=e^{X_0+A_{\cdot}}\mathcal E_{\cdot}(M)$
is a continuous (l\`{a}dl\`{a}g) submartingale.
\end{edefinition}
Obviously a $\mathcal{Q}$-semimartingale is a $\mathcal{Q}$-submartingale. Remarkably,
applying this property to both $X$ and $-X$ is sufficient to
characterize $\mathcal{Q}$-semimartingales. From a financial point of view,
this means that the same rules have to be used to characterize both the
buyer's and the seller's price.
\begin{theorem}\label{thcharacterizationcQquasimart}
Let $X_{\cdot}$ be a l\`{a}dl\`{a}g optional process. Then, $X_{\cdot
}$ is a $\mathcal{Q}
$-semimartingale if and only if both processes $X$ and $-X$ are $\mathcal{Q}
$-submartingales, or equivalently if and only if $\exp(X_{\cdot})$
and $\exp(-X_{\cdot})$
are submartingales. In all cases, $X_{\cdot}$ is a continuous process.
\end{theorem}
\begin{pf} We only have to prove the sufficiency.
Assume that $\exp(X_{\cdot})$ and $\exp(-X_{\cdot})$ are two
l\`{a}dl\`{a}g submartingales, with respective multiplicative
decomposition $(\lo{ M}_{\cdot},\lo A_{\cdot})$, and
$(\lu{ M}_{\cdot},\lu A_{\cdot})$. Taking the logarithm leads to two
different
decompositions of $X$,
\[
X_{\cdot}= X_0+\lo{ M}_{\cdot}-\tfrac{1}{2}\varq{\lo
M}_{\cdot}
+ \lo A_{\cdot}\quad \mbox{and} \quad {-}X_{\cdot}= -X_0+
\lu{M}_{\cdot}-\tfrac{1}{2}\varq{\lu{M}}_{\cdot} +\lu{A}_{\cdot}.
\]
Since the martingales and their quadratic variations are continuous,
the jumps of~$X$ coincide with the (nonnegative) jumps
of the increasing process $\lo A_{\cdot}$. The same remark holds true for the
jumps of the process $-X$. Since the jumps of $X$ are thus simultaneously
nonnegative and nonpositive, the process $X_{\cdot}$ is continuous.
Moreover, from the uniqueness of the predictable decomposition of
$X_{\cdot}$ we know that $\lu{M}_{\cdot}=-\lo M_{\cdot}$. Hence,
$\varq{\lu{M}}=\varq{\lo{M}}$
and $\lo{A}_{\cdot}+\lu{A}_{\cdot}=\varq{M}_{\cdot}$.
From Radon--Nikodym's theorem, there exists a predictable process
$\alpha_{\cdot}$, with $0\leq\alpha_{t}\leq2$, such that
$d\lo A_{t}=\frac{1}{2}\alpha_{t}\,d \varq{M}_t$. Substituting $\lo
A_{\cdot
} $
into the decomposition of $X_{\cdot}$, we get $dX_{t}=-\frac
{1}{2}(1-\alpha
_{t})\,d\varq{M}_t+dM_t$
with $|1-\alpha_{t} |\leq1$. Therefore, $X_{\cdot}$ is a ${\mathcal
Q}$-semimartingale.
\end{pf}
\subsubsection*{Characterization of $\mathcal{Q}(\Lambda,C,\delta
)$-semimartingales via exponential transformation}
In the general structure condition
(\ref{eqstructurecondition}), the presence of the term
$|Y_{\cdot}|\star C_{\cdot}$ makes the characterization of quadratic
semimartingales more difficult to obtain.
Nevertheless the transformations
proposed in the following proposition can partially reduce the problem to
$\mathcal{Q}$-submartingales.
\begin{theorem}\label{thcharacterizationgeneralquasimart}
Let us introduce the following transformations of any adapted (l\`{a}dl\`{a}g) process $Y_{\cdot}$:
\begin{eqnarray}
X^{\Lambda,C}_t(Y) &\equiv& Y_t +\Lambda_t+ \int_0^t |Y_s|\,dC_s
\equiv
Y_t +D^{\Lambda,C}_t(Y), \label{eqXLambda,C}\\
U^{\Lambda,C}_t(e^Y)& \equiv& e^{Y_t}+\int_0^te^{Y_s}\,d\Lambda
_s+\int
_0^te^{Y_s}|Y_s|\,dC_s. \label{eqULambda,C}
\end{eqnarray}
Then, $Y_{\cdot}$ is a $\mathcal{Q}(\Lambda,C,\delta
)$-semimartingale if and
only if $X^{\Lambda,C}_{\cdot}(\delta Y)$ and\break $X^{\Lambda,C}_{\cdot
}(-\delta Y)$
are $\mathcal{Q}$-submartingales, or equivalently if and only if both processes
$U^{\Lambda,C}_{\cdot}(e^{\delta Y})$ and $U^{\Lambda,C}_{\cdot
}(e^{-\delta Y})$
are submartingales.
\end{theorem}
The link between the two transformations $X^{\Lambda,C}$ and\vspace*{1pt}
$U^{\Lambda
,C}$ is clear when $Y$ is a continuous semimartingale, since
$dU^{\Lambda,C}_t(e^{Y})=e^{-D^{\Lambda,C}_t}\,de^{X^{\Lambda,C}_t(Y)}$
(see proof below). The motivation behind the transformation $U^{\Lambda
,C}_t(e^Y)$, first introduced by Briand and Hu~\cite{Briand-Hu}, will be
presented later in Section~\ref{subsectioncharactexpoinequality}.\vadjust{\goodbreak}
\begin{pf*}{Proof of Theorem~\ref{thcharacterizationgeneralquasimart}} We can assume $\delta=1$ without any loss of generality
[refer in particular to Comment (ii) at the end of Section \ref
{paragraphdefinitionofquadraticsemimartingales}].
\begin{longlist}[(ii.b)]
\item[(i.a)]Necessary condition:
Let $\alpha^V_{\cdot}\in[-1,1]$ be a predictable process such that
$V_{\cdot}=\alpha_{\cdot}^V\star(\Lambda_{\cdot}+ |Y|\star
C_{\cdot}+\frac{1}{2}\langle M \rangle_{\cdot})$.
The semimartingale $X^{\Lambda,C}_{\cdot}(Y)=Y_{\cdot} +\Lambda
_{\cdot}+ |Y|\star
C_{\cdot}=Y_{\cdot}+D^{\Lambda,C}_{\cdot}(Y)$ is associated with
the martingale $M_{\cdot}$ and
the finite variation process $-V_{\cdot}^X$ where $V_{\cdot
}^X=V_{\cdot}-D^{\Lambda
,C}_{\cdot}(Y)=(\alpha_{\cdot}^V-1)\star D^{\Lambda,C}_{\cdot
}(Y)+\frac{1}{2}\alpha_{\cdot}^V\star
\langle M \rangle_{\cdot}$.
Since the process $-V_{\cdot}^X+\frac{1}{2}\langle M \rangle_{\cdot}=(1-\alpha
_{\cdot}^V)\star
(D^{\Lambda,C}_{\cdot}(Y)+\frac{1}{2}\langle M \rangle_{\cdot})$ is an increasing
process, the
semimartingale $X^{\Lambda,C}_{\cdot}(Y)$ is a $\mathcal{Q}$-submartingale.
\item[(i.b)]Assume now that both processes $e^{{\lo X}_{\cdot}}$ and
$e^{{\lu
X}_{\cdot}}$ are submartingales, where $ {\lo X}_{\cdot} \equiv
X^{\Lambda,C}_{\cdot}(Y)$
and
${\lu X}_{\cdot} \equiv X^{\Lambda,C}_{\cdot}(-Y)$. The processes
${\lo X}_{\cdot}$ and
${\lu
X}_{\cdot}$ satisfy the following relations, where $D^{\Lambda
,C}_{\cdot} \equiv
D^{\Lambda,C}_{\cdot}(Y)$:
\[
\tfrac{1}{2}({\lo X}_{\cdot}-{\lu X}_{\cdot})=Y_{\cdot}
\quad\mbox{and}\quad \tfrac{1}{2}
({\lo X}_{\cdot}+{\lu X}_{\cdot}) = D^{\Lambda,C}_{\cdot}=\Lambda
_{\cdot}+\tfrac{1}{2}|{\lo X}_{\cdot} -
{\lu X}_{\cdot}|\star C_{\cdot}.
\]
Using the same notation and arguments as above, the processes
${\lo X}_{\cdot}$ and ${\lu X}_{\cdot}$, whose exponentials are submartingales,
can only have nonnegative jumps. Since their sum $2D^{\Lambda,C}_{\cdot}$
is a continuous increasing process, these jumps must vanish. Hence, both processes are continuous.
For the same reasons, the sum ${\lu M}_{\cdot}+{\lo M}_{\cdot}$ is identically
equal to $0$, and the sum of increasing processes
$\frac{1}{2}({\lu A}_{\cdot}+{\lo A}_{\cdot} )=D^{\Lambda,C}_{\cdot
}+\frac{1}{2}\varq{\lo M}_{\cdot}
\equiv\frac{1}{2}G^{\Lambda,C}_{\cdot}$.
There exists a predictable process $\alpha_{\cdot}$, with $\alpha
_{\cdot}\in
[0,2]$, such that ${\lo A}_{\cdot}=\frac{1}{2}\alpha_{\cdot}\star
G^{\Lambda,C}_{\cdot}$.
Substituting ${\lo A}_{\cdot}$ in the decomposition of $Y_{\cdot
}=\frac{1}{2}({\lo
X}_{\cdot}-{\lu X}_{\cdot})$, we get $dY_{t}=- \frac{1}{2}(1-\alpha
_{t})\,dG^{\Lambda
,C}_t+d{\lo M}_t $.
Therefore, $Y_{\cdot}$ is a $\mathcal{Q}(\Lambda,C)$-semimartingale.
\item[(ii.a)] Let $Y_{\cdot}$ be a $\mathcal{Q}(\Lambda, C)$-semimartingale. Since
$X^{\Lambda,C}_{\cdot}(Y)=Y_{\cdot}+D^{\Lambda,C}_{\cdot}$, we have
$e^{Y_{\cdot}}=e^{-D^{\Lambda,C}_{\cdot}}e^{X^{\Lambda,C}_{\cdot
}(Y)}$. From the
classical It\^o's formula,
\[
de^{Y_t}=e^{-D^{\Lambda,C}_t}\,de^{X^{\Lambda
,C}_t(Y)}-e^{Y_t}\,dD^{\Lambda,C}_t\quad
\mbox{and}\quad
dU^{\Lambda,C}_t(e^{Y})=e^{-D^{\Lambda,C}_t}\,de^{X^{\Lambda,C}_t(Y)}.
\]
Then when $Y_{\cdot}$ is a continuous process, $\exp(X^{\Lambda,
C}_{\cdot}(Y))$ is a
submartingale iff $U^{\Lambda,C}_{\cdot}(e^Y)$ is a submartingale.
\item[(ii.b)] Assume now that both processes $U_{\cdot}(e^Y)$ and $U_{\cdot
}(e^{-Y})$
are l\`adl\`ag submartingales. Let $U_{\cdot}(e^Y)=U_0+\lo{N}_{\cdot
}+\lo{K}_{\cdot}$
and $U_{\cdot}(e^{-Y})=U_0+\lu{N}_{\cdot}+\lu{K}_{\cdot}$ be their
respective additive
decompositions. As before, we can show that the process $Y_{\cdot}$ is
continuous. The previous equivalence then yields the result.\quad\qed
\end{longlist}
\noqed\end{pf*}
\section{Exponential uniform integrability and entropic inequalities}\label{subsectioncharactexpoinequality}
In the previous section, we have obtained a simple characterization of
${\mathcal
Q}(\Lambda,C)$-semimartingales using an exponential transformation,
leading naturally to positive submartingales defined by their multiplicative
or additive decomposition. Whenever submartingales have good
integrability properties, the existence of an additive decomposition\vadjust{\goodbreak}
is equivalent to the submartingale inequalities. It is the famous
Doob--Meyer decomposition. The main objective of this section is
to make precise such integrability properties and the subsequent inequalities.
\subsection{Uniform integrability, class ($\mathcal{D}$) and their exponential equivalents}
\label{uniformintegrability} \label{parclassUexp}\label{parclassDexp}
\subsubsection*{The class ${\mathcal U}_{\exp}$}
In the classical martingale theory, uniformly integrable (u.i.)
martingales (in particular the conditional expectation of some positive
integrable random variable) play a key role as martingale equalities
are then valid between two stopping times.
The class of such martingales is denoted by $\mathcal{U}$.
In the exponential framework, any exponential martingale
$\mathcal{E}(M)_{\cdot}$ of a continuous martingale $M_{\cdot}$ is a positive
local martingale, with expectation $\leq1$, hence a
supermartingale. The process $\mathcal{E}_{\cdot}(M)$ is a u.i. martingale
on $[0,T]$ if and only if $\mathcal{E}_t(M) =
\mathbb{E}[\mathcal{E}_T(M)|\mathcal F_t]$ $\mathbb{P} \mbox{-a.s.}$
It is therefore natural to introduce the class ${\mathcal U}_{\exp}$
of continuous martingales $M$
such that $\mathcal{E}_{\cdot}(M)$ is a uniformly integrable martingale.
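Let us also recall Novikov's classical sufficient criterion for membership in ${\mathcal U}_{\exp}$ (stated here only for reference, and not used explicitly below):
\[
\mathbb{E}\bigl[\exp\bigl(\tfrac{1}{2}\langle M \rangle_T\bigr)\bigr]<\infty
\quad\Longrightarrow\quad
\mathcal E_{\cdot}(M)\mbox{ is a u.i. martingale on }[0,T],\mbox{ that is, } M_{\cdot}\in{\mathcal U}_{\exp}.
\]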
\subsubsection*{The classes ${\mathbb L}^1_{\exp}$ and $({\mathcal
D}_{\exp})$}
A $\mathcal{F}_T$-measurable random variable $X_T$
belongs to ${\mathbb L}^1$ provided that ${\mathbb E}(|X_T|)<\infty$
and by definition belongs to ${\mathbb L}^1_{\exp}$ if $\exp(X_T)\in
{\mathbb L}^1.$
The optional processes $X$ for which the absolute value is
dominated by a uniformly integrable martingale are said to be in the
class\footnote{P. A. Meyer used the term ``class $(\mathcal{D})$,'' in the
honor of J. L. Doob.} $(\mathcal{D})$. They are also characterized by the fact
that the family of random variables $\{X_\sigma;
\sigma\leq T, \sigma\mbox{ stopping times}\}$ is uniformly
integrable.
When adopting the exponential point of view, we can extend this notion
into:
\[
X_{\cdot}\mbox{ is said to be in the class }(\mathcal
{D}_{\mathrm{exp}})\mbox{ if }e^{X_{\cdot}}\mbox{ belongs to the class }(\mathcal{D}).
\]
Observe that $|X_{\cdot}|$ belongs to the class $(\mathcal{D}_{\mathrm{exp}})$
if and only if $X_{\cdot}$ and $-X_{\cdot}$ belong to the class
$(\mathcal{D}_{\mathrm{exp}})$. The sufficiency relies on the
intermediate observation that $\cosh(X_{\cdot})=\cosh(|X_{\cdot}|)$
is in the class $(\mathcal{D})$.
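Indeed, for any $t\leq T$,
\[
e^{X_t}\leq e^{|X_t|},\qquad e^{-X_t}\leq e^{|X_t|}\qquad\mbox{and}\qquad e^{|X_t|}\leq e^{X_t}+e^{-X_t}=2\cosh(X_t),
\]
so that $e^{|X_{\cdot}|}$ is dominated by twice a class $(\mathcal{D})$ process as soon as $e^{X_{\cdot}}$ and $e^{-X_{\cdot}}$ are in the class $(\mathcal{D})$, and conversely.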
\subsubsection*{$(\mathcal{D})$-submartingales and conditional
inequalities}
A submartingale $S_{\cdot}$ (as defined in its general form in
Section \ref
{parrecallsemimartingale}), which is in the
class $(\mathcal{D})$, satisfies the following conditional
``submartingale inequality'' \label{submartingaleinequalities}
\[
\mbox{for any stopping times }\sigma\leq\tau\leq T\qquad
S_\sigma\leq\mathbb{E}[S_\tau|\mathcal{F}_\sigma],\qquad \mbox{a.s.}
\]
Conversely, it is well known that any c\`{a}dl\`{a}g process in the
class $(\mathcal{D})$ satisfying these inequalities admits a Doob--Meyer
decomposition into a martingale and a predictable c\`{a}dl\`{a}g
increasing process (see Protter~\cite{Protter}, Chapter~3), that is, is
a submartingale in the previous sense. The less standard l\`{a}dl\`{a}g
case, motivated by optimal stopping problems, has been established by
Dellacherie and Meyer~\cite{Dellacherie-Meyer}.\vadjust{\goodbreak}
\subsection{Entropic inequalities and quadratic semimartingales}\label{subsecentropicinequalities}\label{parentropicq-semimartingale}\label{parmaximalinequalities}\label{changeprobaentropy}
\subsubsection*{Entropic submartingales}
When considering a positive $(\mathcal{D})$-submartin\-gale~$S_{\cdot }$,
the logarithm $X_{\cdot}=\ln{S_{\cdot}} $ is a $\mathcal{Q}$-submartingale in
the class $(\mathcal{D}_{ \exp})$ and satisfies the so-called
\textit{entropic inequality}:
\begin{eqnarray}\label{eqentropicinequalities}
\forall\sigma\leq\tau\leq T\qquad X_\sigma\leq\rho_\sigma(X_\tau
)
\nonumber
\\[-8pt]
\\[-8pt]
\eqntext{\mbox{a.s. where }
\rho_{\sigma}(X_\tau)= \ln\mathbb{E}[\exp(X_\tau)|\mathcal{F}_\sigma].}
\end{eqnarray}
The operator $\rho_{\cdot}$ is known as the \textit{entropic
process} and
has been intensively studied in the framework of risk measures (see,
e.g., Barrieu and El Karoui~\cite{Barrieu-ElKaroui5} or
\cite{Barrieu-ElKaroui6}).
Conversely, any $\mathcal{Q}$-submartingale in the class $(\mathcal{D}_{\exp})$ satisfies the entropic inequalities; we therefore refer to such a process as an \textit{entropic submartingale.}
An example of entropic submartingale is the simple process $r_{\cdot}(M)$
defined in equation
(\ref{eqrtM}) with $M_{\cdot}\in{\mathcal U}_{\exp}.$
In this case,
$\exp r_{\cdot}(M)=\mathcal{E}_{\cdot}(M)$ is a positive u.i.
martingale, equal to
the conditional expectation of its terminal value $\exp(r_T(M))$. Since
$\xi_T \equiv r_T(M) \in{\mathbb L}^1_{\exp}$, we can recover
$r_t(M)$ from its terminal value via the following
identity\footnote{Note that the identity
$\rho_{t}(\xi_T)=r_t(\rho_{0}(\xi_T),M)$
has suggested the notation $r_t(M)$ for the logarithm of some
exponential martingale.} based on the entropic process $\rho_{\cdot
}(\xi_T)$:
\begin{equation}\label{eqentropicprocess}
r_t(M)= \ln\mathbb{E}[\exp(\xi_T)|\mathcal{F}_t]=\rho_t(\xi_T),\qquad
\xi_T \equiv r_T(M).
\end{equation}
The conditional properties of the u.i. martingale $\exp
(r_t(M))=\break\mathbb
{E}[\exp(\xi_T)|\mathcal{F}_t] =\mathbb{E}[\exp(\xi_T)]\mathcal{E}_t(M)$ are
translated into the time consistency property of the entropic process
over any pair of stopping times $(\sigma,\tau)$ such that
$\sigma\leq\tau$, $\rho_{\sigma}(\xi_T)=\rho_{\sigma}(\rho
_{\tau
}(\xi_T))$.
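This time consistency is nothing but the tower property of conditional expectations written in exponential form: for $\sigma\leq\tau$,
\[
\rho_{\sigma}(\rho_{\tau}(\xi_T))
=\ln\mathbb{E}\bigl[\exp\bigl(\ln\mathbb{E}[\exp(\xi_T)|\mathcal{F}_\tau]\bigr)\big|\mathcal{F}_\sigma\bigr]
=\ln\mathbb{E}\bigl[\mathbb{E}[\exp(\xi_T)|\mathcal{F}_\tau]\big|\mathcal{F}_\sigma\bigr]
=\rho_{\sigma}(\xi_T).
\]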
Finally, let us observe that $\rho_{\cdot}(\xi_T)$ is the smallest
$q$-semimartingale $X_{\cdot}=X_0+r_{\cdot}(N)$ with the terminal
value $ X_T=\xi_T$.
This is a simple consequence of the fact that $\exp(X_{\cdot})$ is a positive
local martingale and hence a supermartingale.
\subsubsection*{Entropic inequalities and $\mathcal{Q}$-semimartingales} We are now able to give another
formulation for the characterization of $\mathcal{Q}$-semimartingales in the
class $(\mathcal{D}_{\exp})$ in terms of inequalities involving the
entropic process.
This formulation will prove to be better suited than that of Theorem
\ref{thcharacterizationgeneralquasimart} when taking limits as we will
see in a later section.
\begin{theorem}
\label{thcharacterizationqmartDexp} Let $X_{\cdot}$ be
a l\`{a}dl\`{a}g optional process such that $|X_T|\in\mathbb
L^1_{\exp}$. Then $X_{\cdot}$ is a $\mathcal{Q}$-semimartingale such that
$|X_{\cdot}|\in(\mathcal{D}_{\exp})$ if and only if $X_{\cdot}$
and $-X_{\cdot}$ are
entropic submartingales, or equivalently
if for any pair of stopping
times $0
\leq\sigma\leq\tau\leq T$,
\begin{equation}\label{eqentropicinequalities}
-\rho_{\sigma}(-X_{\tau}) \equiv{\lu{\rho}}_{\sigma}(X_{\tau
}) \leq X_{\sigma} \leq\rho_{\sigma}(X_{\tau}),\qquad
\mathbb{P}\mathrm{\mbox{-}a.s.}
\end{equation}
\end{theorem}
\begin{pf}
Thanks to Section~\ref{uniformintegrability}, when $|X_T|\in\mathbb L^1_{\exp}$, the following equivalences hold true: $X_{\cdot}$ is a $\mathcal{Q}$-semimartingale such that $X_{\cdot}$ and $-X_{\cdot}$ are in the class $(\mathcal{D}_{\exp})$ if and only if $e^{X_{\cdot}}$ and $e^{-X_{\cdot}}$ are $(\mathcal{D})$-submartingales, which in turn holds if and only if $e^{X_{\cdot}}$ and $e^{-X_{\cdot}}$ satisfy the submartingale inequalities.
\end{pf}
\subsubsection*{Entropic inequalities and $\mathcal{Q}(\Lambda
,C)$-semimartingales}
The same type of characterization applied to the processes $X^{\Lambda
,C}_T(Y)$ or $U^{\Lambda,C}_T(e^{Y})$ involves inequalities depending
on the process $Y_{\cdot}$ itself and therefore is often difficult to
use. A
possible (but not equivalent) way is to work with the process $\bar
{X}^{\Lambda,C}_{\cdot}(|Y|)$ defined as $\bar{X}^{\Lambda
,C}_t(|Y|) \equiv
e^{C_t}|Y_t|+\int_0^t
e^{C_s}\,d\Lambda_s$ as a generalization of $|Y_{\cdot}|$ by assuming
that the
process $\exp(\bar{X}^{\Lambda,C}_{\cdot}(|Y|))$ is in the class
($\mathcal{D}$).
\begin{eproposition}
\label{propcharacterizationbarXDexp}
Let $\bar{X}^{\Lambda,C}_t(|Y|) \equiv e^{C_t}|Y_t|+\int_0^t
e^{C_s}\,d\Lambda_s$.
\begin{longlist}[(ii)]
\item[(i)]Let $Y$ be a $\mathcal{Q}(\Lambda,C)$ semimartingale. Then the process
$\bar{X}^{\Lambda,C}_{\cdot}(|Y|)$ is a $\mathcal{Q}$-submartingale.
\item[(ii)]Let $Y$ be an optional l\`{a}dl\`{a}g process with $\bar
{X}^{\Lambda,C}_T(|Y|)\in{\mathbb L}^1_{\exp}$.
\end{longlist}
Then the process $\bar{X}^{\Lambda,C}_{\cdot}(|Y|)$
is an entropic submartingale
if and only if the following inequalities hold true for any pair of stopping
times $0\leq\sigma\leq\tau\leq T$, where for $t\leq u$, $C_{t,u}
\equiv C_u - C_t$,
\begin{equation}\label{hypDirectinequality}
|Y_\sigma| \leq\rho_\sigma\biggl(e^{C_{\sigma,\tau}}|Y_\tau|+\int
_\sigma
^\tau e^{C_{\sigma,t}}\,d\Lambda_t\biggr), \qquad\mathbb{P}\mathrm{\mbox{-}a.s.}
\end{equation}
\end{eproposition}
\begin{pf} For the sake of simplicity, we omit $Y$ in $\bar X^{\Lambda
,C}_t(|Y|)$ and $D^{\Lambda,C}_{\cdot}(|Y|)$.
\begin{longlist}[(ii.a)]
\item[(i)]By the It\^o--Tanaka formula involving the sign function [$\operatorname{sign}(x)=x/|x|$, with $\operatorname{sign}(0)=0$] and the local time $L_{\cdot}(Y)$
of $Y_{\cdot}$ at $0$, $|Y_{\cdot}|=|Y_0|+\operatorname{sign}(Y)\star
Y_{\cdot}+L_{\cdot}(Y)=|Y_0|+M^s_{\cdot}-V^s_{\cdot}+L_{\cdot}(Y)$, where $dM^s_t=\operatorname{sign}(Y)_t\,dM_t$ and $dV^s_t=\operatorname{sign}(Y)_t\, dV_t$.
This decomposition leads to the following representation of the
differential of $\bar X^{\Lambda,C}_{\cdot}= e^{C_{\cdot}} |Y_{\cdot
}|+e^{C_{\cdot}}\star
\Lambda_{\cdot}$:
\begin{eqnarray*}
d\bar X^{\Lambda,C}_t&=&e^{C_t}[|Y_t|\,dC_t+d\Lambda_t+dM^s_t-
dV^s_t+dL_t(Y)]\\
&=&e^{C_t}\bigl[dD^{\Lambda,C}_t+\tfrac{1}{2}\,d\langle M \rangle_t-dV^s_t+dL_t(Y)
\bigr]+e^{C_t}\bigl(dM^s_t-\tfrac{1}{2}\,d\langle M \rangle_t\bigr).
\end{eqnarray*}
Observe that $\bar A^s_{\cdot}=D^{\Lambda,C}_{\cdot}+\frac
{1}{2}\langle M \rangle
_{\cdot}-V^s_{\cdot}+L_{\cdot}(Y)$
is an increasing process. The martingale part of $\bar X^{\Lambda
,C}_{\cdot}(|Y|)$ is $\bar M^{C}_{\cdot}=e^{C_{\cdot}}\star
M^s_{\cdot}$ with quadratic
variation $\varq{\bar M^{C}}_{\cdot}=e^{2C_{\cdot}}\star\langle M \rangle
_{\cdot}$. So, the
following decomposition shows that $\bar X^{\Lambda,C}_{\cdot}$ is a
$\mathcal{Q}
$-submartingale since $e^{C_{\cdot}}-1\geq0 $,
\[
d\bar X^{\Lambda,C}_{\cdot}=e^{C_{\cdot}}\bigl[d\bar
A^s_{\cdot}+\tfrac{1}{2}
(e^{C_{\cdot}}-1)\,d\langle M \rangle_{\cdot}\bigr] + d r_{\cdot}(e^{C_{\cdot
}}\star
M^s_{\cdot}).
\]
\item[(ii.a)] The assumption that $\exp(\bar{X}^{\Lambda,C}_{\cdot})$ is
a ($\mathcal{D}
$)-submartingale
implies in particular that
$\bar{X}^{\Lambda,C}_T\in{\mathbb L}^1_{\exp}$, and that
$\bar{X}^{\Lambda,C}_0=|Y_0|\leq\rho_0(\bar{X}^{\Lambda,C}_T)$. The
same inequality
holds true if\vadjust{\goodbreak} we start at time $\sigma$ with horizon $\tau$ by
considering the $\sigma$-conditional
expectation of $\bar{X}^{\Lambda,C}_{\sigma,\tau}=e^{C_{\sigma
,\tau
}}|Y_\tau|+\int_\sigma^\tau e^{C_{\sigma,t}}\,d\Lambda_t$, so that
$|Y_\sigma| \leq\rho_\sigma(\bar{X}^{\Lambda,C}_{\sigma,\tau}).$
\item[(ii.b)] Conversely, assume inequality (\ref{hypDirectinequality}),
$|Y_\sigma
| \leq\rho_\sigma(\bar{X}^{\Lambda,C}_{\sigma,\tau}).$
Observe that the entropic process
$\rho_{\delta,t}(\xi_T)=\frac{1}{\delta}\rho_{t}(\delta\xi_T)$ is
increasing with respect to the parameter $\delta$ (from the H\"{o}lder
inequality for the exponential). Then, since $e^{C_\sigma}\geq1$, we have:
$\rho_\sigma(e^{C_{\sigma,\tau}}|Y_\tau|+\int_\sigma^\tau
e^{C_{\sigma
,t}}\,d\Lambda_t)\leq e^{-C_\sigma}
\rho_\sigma(e^{C_{0,\tau}}|Y_\tau|+\int_\sigma^\tau
e^{C_{0,t}}\,d\Lambda_t)
$. So $\bar{X}^{\Lambda,C}_{\cdot}=e^{C_{\cdot}} |Y_{\cdot}|+
e^{C}\star\Lambda_{\cdot}$
satisfies the entropic inequalities $\bar{X}^{\Lambda,C}_\sigma
\leq\rho_\sigma(e^{C_{0,\tau}}|Y_\tau|+\int_\sigma^\tau
e^{C_{0,t}}\,d\Lambda_t+\int_0^\sigma e^{C_{0,t}}\,d\Lambda_t)= \rho_\sigma(\bar{X}^{\Lambda,C}_\tau)$ [the $\mathcal{F}_\sigma$-measurable quantity $\int_0^\sigma e^{C_{0,t}}\,d\Lambda_t$ being carried inside $\rho_\sigma$]. Taking $\tau=T$, it follows that
$\bar{X}^{\Lambda,C}_{\cdot}$ is dominated by the $(\mathcal{D}_{\exp
})$-process $\rho
_{\cdot}(\bar{X}^{\Lambda,C}_T)$ and so is an entropic submartingale.
Hence, the result.\quad\qed
\end{longlist}
\noqed\end{pf}
The properties of the dominating process $\rho_{\cdot
}(e^{C_{\cdot,T}}|Y_T|+\int
_{\cdot}^Te^{C_{\cdot,s}}\,d\Lambda_s)$ are therefore essential to
obtain results for the process $Y_{\cdot}$. The nonadapted process
$\phi_{\cdot,T}(|Y_T|)=e^{C_{\cdot,T}}|Y_T|+\int_{\cdot
}^Te^{C_{\cdot,s}}\,d\Lambda_s$ with
initial condition $\phi_{0,T}(|Y_T|)= \bar X^{\Lambda,C}_{T}(|Y|)$,
first introduced in Briand and Hu~\cite{Briand-Hu}, Lemma 1, is the
positive decreasing solution
of the ordinary differential equation, with terminal condition
$|Y_T|$,
\begin{equation}\label{eqphi}
d\phi_t=
-(d\Lambda_t+|\phi_t|\,dC_t),\qquad \phi_T=|Y_T|.
\end{equation}
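A direct computation confirms this: assuming, as in the proofs above, that $\Lambda_{\cdot}$ and $C_{\cdot}$ are continuous, and writing $\phi_{t,T}(|Y_T|)=e^{-C_t}(e^{C_T}|Y_T|+\int_t^Te^{C_s}\,d\Lambda_s)$, we get
\[
d\phi_{t,T}=-e^{-C_t}\biggl(e^{C_T}|Y_T|+\int_t^Te^{C_s}\,d\Lambda_s\biggr)\,dC_t-e^{-C_t}e^{C_t}\,d\Lambda_t
=-\bigl(d\Lambda_t+\phi_{t,T}\,dC_t\bigr),
\]
with $\phi_{T,T}=|Y_T|$; since $\phi_{\cdot,T}\geq0$, this is exactly \eqref{eqphi}.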
In other words, the nonadapted process $U^{\Lambda,C}_{\cdot
}(e^{\phi_{\cdot,T}
})=e^{\phi_{\cdot,T} }+\int_0^{\cdot}e^{\phi_{s,T}}\,d\Lambda_s+\int_0^{\cdot}
e^{\phi
_{s,T}}|\phi_{s,T}|\,dC_s$
is constant and equal to $e^{\phi_{0,T} }$. This property is the main
motivation for introducing the $U^{\Lambda,C}$ transformation.
The decreasing property of $\exp(\phi_{\cdot,T})$ explains the
supermartingale property of the process $\Phi_{\cdot}(|Y_T|)$ defined
as the
optional projection of $\exp(\phi_{\cdot,T})$:
\begin{eqnarray}\label{eqPhi}
\Phi_\sigma(|Y_T|)&\equiv&\mathbb{E}[\exp(\phi_{\sigma,T} (|Y_T|))|\mathcal{F}
_\sigma]
\nonumber
\\[-8pt]
\\[-8pt]
\nonumber
&=&
\exp\biggl(\rho_\sigma\biggl(e^{C_{\sigma,T}}|Y_T|+\int_\sigma^Te^{C_{\sigma
,t}}\,d\Lambda_t\biggr)\biggr).
\end{eqnarray}
Note that, for the sake of clarity, we often omit the reference to
$Y_T$ in
$\phi_{\cdot,T} (|Y_T|)$, $\Phi_{\cdot}(|Y_T|)$ or $\bar X^{\Lambda
,C}_{T}(|Y_T|)$.
\begin{theorem}\label{propUtransform}
Assume $\mathbb{E}[\exp(\bar X^{\Lambda,C}_{T}(|Y_T|))]=\mathbb{E}[\exp(\phi
_{0,T})]<\infty$.
\begin{longlist}[(iii)]
\item[(i)]The process $\Phi_{\cdot}$ is a $(\mathcal{D})$-supermartingale
dominated by the
martingale $ \mathbb{E}[e^{\phi_{0,T}}|\mathcal{F}_t]=N^0_t$, with the additive
decomposition $\Phi_{\cdot}=\Phi_0+N^\Phi_{\cdot} -A^\Phi_{\cdot
}$. The predictable
increasing process is
$A^\Phi_{\cdot} =\int_0^{\cdot} \Phi_s \,d\Lambda_s+\int_0^{\cdot} \mathbb{E}[e^{\phi
_{s,T}}|\phi
_{s,T}||\mathcal{F}_s]\,dC_s$, while the process $N^\Phi_{\cdot}$ is a uniformly integrable martingale.
\item[(ii)]The process $U^{\Lambda,C}_{\cdot}(\Phi)=\Phi_{\cdot}+\int
_0^{\cdot}\Phi_s
\,d\Lambda_s+ \int_0^{\cdot}\Phi_s\ln(\Phi_s)\,dC_s$ is a positive
$(\mathcal{D})$-supermartingale, associated with the same u.i. martingale
$N^\Phi_{\cdot}$, and the increasing process $A^U_{\cdot}=\int_0^{\cdot}
(\mathbb{E}[e^{\phi_{s,T}}|\phi_{s,T}||\mathcal{F}_s]-\Phi_s\ln(\Phi_s))\,dC_s$.
\item[(iii)]Assume inequality (\ref{hypDirectinequality}) for the process
$|Y_{\cdot}|$. The
processes $U^{\Lambda,C}_{\cdot}(e^Y)$ and $U^{\Lambda,C}_{\cdot
}(e^{-Y})$ are two
$(\mathcal{D})$-submar\-tingales dominated by the $(\mathcal
{D})$-supermar\-tingale
$U^{\Lambda,C}_{\cdot}(\Phi)$.\vspace*{-1pt}
\end{longlist}
\end{theorem}
\begin{remark}\label{remUtransform}
The positive quantity $H^{\mathrm{ent}}_s(e^{\phi_{s,T}}) \equiv
\mathbb{E}[e^{\phi_{s,T}}\phi_{s,T}|\mathcal{F}_s]-\break\Phi_s\ln(\Phi_s)$ appearing in
$A^U_{\cdot}$ is well known in statistics as the conditional Shannon entropy
of the random variable $e^{\phi_{s,T}}$. Its properties will be studied
in the next subsection when considering integrability properties of the
supremum.\vspace*{-1pt}
\end{remark}
\begin{pf*}{Proof of Theorem~\ref{propUtransform}}
As observed by Briand and Hu~\cite{Briand-Hu}, Lemma~1, since $\phi_{t,T}$
is a positive solution of the differential equation $d\phi_t=
-(d\Lambda_t+\phi_t \,dC_t)$, the nonadapted process $U^{\Lambda
,C}_t(e^{\phi_{\cdot,T} })$ is constant,
$U_t^{\Lambda, C}(e^{\phi_{\cdot,T}})=e^{\phi_{t,T} }+\int_0^te^{\phi
_{s,T}}\,d\Lambda_s+\int_0^t e^{\phi_{s,T}}|\phi_{s,T}|\,
dC_s=e^{\phi_{0,T}},$
with $\phi_{0,T} =\bar{X}^{\Lambda,C}_T\in\mathbb L^1_{\exp}$.
The dynamics of the supermartingale
$\Phi_t=\mathbb{E}[e^{\phi_{t,T} }|\mathcal{F}_t]$ is obtained by taking conditional
expectation in this relation.\vspace*{-1pt}
\begin{longlist}[(iii)]
\item[(i)]First, observe that the assumption $e^{\phi_{0,T}}\in{\mathbb
L}^1$ implies that $e^{\phi_{T,T}}\in{\mathbb L}^1$ and
that the nonadapted increasing process $B^\phi_t=\int_0^te^{\phi
_{s,T}}\,d\Lambda_s+\break\int_0^t e^{\phi_{s,T}}|\phi_{s,T}|\,dC_s$ is
integrable.
Since $\Phi_{\cdot}$ is the optional projection of $e^{\phi_{\cdot,T}}$, and
since both increasing processes $\Lambda_{\cdot}$ and $C_{\cdot}$
are adapted, the
dual predictable projection of $B^\phi_t$ is the continuous process
$A^\Phi_t=\int_0^t \Phi_s \,d\Lambda_s+\break\int_0^t\mathbb{E}[e^{\phi
_{s,T}}|\phi
_{s,T}||\mathcal{F}_s]\,dC_s$, generating the same conditional variation, $\mathbb{E}
[B^\phi_{t,T}-A^\Phi_{t,T}|\mathcal{F}_t]=0 $.
So the
process $N^1_t=\mathbb{E}[B^\phi_T-A^\Phi_T|\mathcal{F}_t]=\mathbb{E}[B^\phi_t-A^\Phi
_t|\mathcal{F}_t]$ is
a uniformly integrable martingale. Then, taking the conditional
expectation of the constant process $U_{\cdot}^{C,\Lambda}(e^{\phi_{\cdot,T}})$
implies that
$\Phi_t+A^\Phi_t+N^1_t=N^0_t$, and
$N^\Phi_t=N^0_t-N^1_t$.
\item[(ii)]To show that $U^{\Lambda,C}_{\cdot}(\Phi)$ is also a
supermartingale, we use that the Shannon entropy (see Remark~\ref{remUtransform}) $H^{\mathrm{ent}}_s(e^{\phi_{s,T}})=\mathbb{E}[e^{\phi
_{s,T}}|\phi
_{s,T}||\mathcal{F}_s]-\Phi_s\ln(\Phi_s)$ is positive, and the process
$A^U_{\cdot}=\int_0^{\cdot}H^{\mathrm{ent}}_s(e^{\phi_{s,T}})\,dC_s$ is
increasing. Then,
some simple
calculation shows that $U^{\Lambda,C}_{\cdot}(\Phi)+A^U_{\cdot
}=\Phi_{\cdot}+A^\Phi
_{\cdot}=N^\Phi_{\cdot}
$ is a positive u.i. martingale, that provides the Doob--Meyer
decomposition of the
supermartingale $U^{\Lambda,C}_{\cdot}(\Phi)$.
\item[(iii)]This last statement is a straightforward consequence
of the inequality $e^{|Y_{\cdot}|}\leq\Phi_{\cdot}$.\quad\qed\vspace*{-1pt}
\end{longlist}
\noqed\end{pf*}
\begin{remark}\label{remetaT}
The key condition to obtain these properties is that the process
$U^{\Lambda,C}_{\cdot}(\Phi(|Y_T|))$ is a ($\mathcal{D}$)-supermartingale.
Note that this is also true if we replace $|Y_T|$ by
any $\mathcal{F}_T$-random variable $|\eta_T|\geq|Y_T|$, such that
$e^{C_{T}}|\eta_T|+\int_0^Te^{C_{s}}\,d\Lambda_s \in{\mathbb
L}^1_{\exp}$.\vspace*{-1pt}
\end{remark}
\begin{remark} As observed by Briand and Hu~\cite{Briand-Hu}, extending
the results of Lepeltier and San Martin in~\cite{LepeltierSanMartin98},
the linear growth condition in\vadjust{\goodbreak} $Y_{\cdot}$, $|Y_{\cdot}|\star
C_{\cdot}$,
may be replaced by a superlinear growth $h(|Y_{\cdot}|)\star C_{\cdot}$,
where $h$ is
an increasing convex $C^1$ function, with $h(0)>0$,
satisfying the integrability condition $\int_0^{+\infty}\frac{|u|}{h(u)}\,du=+\infty$.
The function $\phi(t)$ is then replaced by the solution of the ODE
$\phi'(t)=- h(\phi_t)$ with a terminal condition $\phi(T)=z \geq0$.
\end{remark}
\subsubsection*{Maximal exponential integrability and $L\log
L$-condition} When\footnote{This
paragraph can be omitted for a first reading.} looking for entropic
inequalities, assuming that the exponential of the processes is in the
class~($\mathcal{D}$) is a minimal assumption. However, it is sometimes
interesting to obtain estimates on the exponential of the maximum of
these processes. Entropic inequalities reduce the problem to the
estimation of the running supremum of some entropic processes, or
equivalently to the running supremum of some positive martingales, for
which we can apply standard Burkholder--Davis--Gundy (BDG) martingale
inequalities. An excellent presentation of the different martingale
inequalities may be found in Lenglart, Lepingle and Pratelli
\cite{Lenglart-Lepingle-Pratelli}.
From now on, we adopt the following nonstandard notation for the
running supremum of some measurable process $X$: $\max|X_{t}|=\max
_{0\leq u\leq t}|X_u|$ and $\max|X_{s,t}|=\max_{s\leq u\leq
t}|X_u-X_s|$. The space of semimartingales $X_{\cdot}$ such that $\max
|X_{T}|\in\mathbb L^p$ $(p\geq1)$ is denoted by $\mathcal S^p$. For
continuous local martingales, the relevant quantity is the quadratic
variation, and we denote by $\mathbb{H}^p$ the space of martingales $M_{\cdot}$
such that $\langle M \rangle_T^{1/2}\in\mathbb{L}^p$. Moreover, for any continuous
local martingale $M_{\cdot}$, such that $M_0=0$, the BDG inequality gives
some estimates of its maximum in terms of its quadratic variation:
for any $0<p<\infty$, there exist two positive constants $c_p$ and
$C_p$ such that:
\[
\mbox{for any }0<p<\infty\qquad c_p \mathbb E[\langle M \rangle_T^{p/2}]\leq
\mathbb E[\max|M|_{T}^{p}]\leq C_p \mathbb E[\langle M \rangle_T^{p/2}].
\]
The following Doob inequalities, based on the terminal condition and
only true for $p>1$, are more classical:
\[
\mbox{for any }p>1\qquad k_p \mathbb E[\langle M \rangle_T^{p/2}]\leq
\mathbb
E[ |M|_{T}^{p}]\leq K_p \mathbb E[\langle M \rangle_T^{p/2}].
\]
So, for $p>1$, $\mathbb E[\max|M|_{T}^{p}]<\infty$ if and only
$\mathbb E[\langle M \rangle_T^{p/2}]<\infty$. In other words, the spaces
$\mathcal S^p$ and
$\mathbb H^p$ coincide.
In terms of exponential martingale $L_{\cdot}=\mathcal E_{\cdot}(M)$, these
results become
\begin{eqnarray*}
\forall p>1 \qquad L_{\cdot} \in\mathcal S^p
\quad&\Longleftrightarrow&\quad L_T=\exp(r_T(M))\in\mathbb L^p\\
\quad&\Longleftrightarrow&\quad
\biggl(\int_0^TL^2_s\,d\langle M \rangle_s\biggr)^{1/2}\in\mathbb L^p.
\end{eqnarray*}
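The last equivalence simply identifies the quadratic variation of $L_{\cdot}$: since $dL_t=L_t\,dM_t$,
\[
\langle L \rangle_T=\int_0^TL^2_s\,d\langle M \rangle_s,
\]
so that the condition $(\int_0^TL^2_s\,d\langle M \rangle_s)^{1/2}\in\mathbb L^p$ says precisely that $L_{\cdot}\in\mathbb H^p$, and the preceding Doob and BDG inequalities then give $L_{\cdot}\in\mathcal S^p$.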
When $p<1$, a similar maximal inequality holds true for exponential
martingales or more generally for positive supermartingales \cite
{Lenglart-Lepingle-Pratelli},
\[
\forall p<1\qquad \mathbb E[\max L_{T}^{p}]\leq\frac
{\mathbb E((L_0)^p)}{1-p}.\vadjust{\goodbreak}
\]
When $p=1$ and the local martingale is positive, we have to use the
following $L\log L$-condition.
\begin{eproposition} \label{propLlogLinequality} Let $L_{\cdot}=\exp
(M_{\cdot}-\frac{1}{2}\langle M \rangle_{\cdot})$ be a positive continuous local
martingale and
$\max L_t$ its running supremum.
\begin{longlist}[(iii)]
\item[(i)](Doob) Assume that $L_{\cdot}$ is a u.i. martingale.
Then
\[
\mathbb E(\max L_T)-1=\mathbb E(L_T\ln(\max
L_T))\geq\mathbb E(L_T \ln(L_T))
\]
and
\[\mathbb E(L_T \ln(L_T))=\mathbb E\bigl(L_T
\tfrac{1}{2}\langle M \rangle_T\bigr).
\]
\item[(ii)](Harremo\"es) The following inequality is sharp:
\begin{equation}\label{eqLlogLinequality}
\mathbb E(\max L_T)-1-\ln(\mathbb E(\max L_T))\leq\mathbb
E(L_T \ln(L_T))=H^{\mathrm{ent}}(L_T).
\end{equation}
The martingale $L_{\cdot}$ belongs to $\mathcal S^1$ if and only if
$\mathbb
E(L_T \ln(L_T))< \infty$.
\item[(iii)]Let $U_{\cdot}$ be a positive $(\mathcal{D})$-submartingale with deterministic
initial condition $U_0$ and $m= \mathbb E(U_T)\geq U_0$. The previous
Harremo\"es inequality becomes, when $u_m(x)=x-m-m \ln(x),$
\[
u_m({\mathbb E}(\max U_T))-u_m(U_0)\leq{\mathbb
E}(U_T \ln(U_T))- {\mathbb E}(U_T)\ln(\mathbb E(U_T)
)=H^{\mathrm{ent}}(U_T).
\]
In particular, ${\mathbb E}(\max U_T)$ is dominated by an increasing
function of\break $H^{\mathrm{ent}}(U_T)+u_m(U_0)$.
\end{longlist}
\end{eproposition}
\begin{pf}
The proof is based on Dellacherie~\cite{Dellacherie} and Harremo\"es
\cite{Harremoes}.
\begin{longlist}[(ii.b)]
\item[(i)]Since $L_{\cdot}$ is a continuous process, $\max L_{\cdot}$ only
increases
on the set $\{ L_{\cdot}=\max L_{\cdot}\}$ and $\max L_t=1+\int_0^t
d\max
L_s=1+\int_0^t \frac{L_s}{\max L_s}\,d\max L_s$. Taking the
expectation (after stopping at some stopping time bounding $\max
L_{\cdot}$
on $[0,T]$ if necessary) and using the fact that $L_{\cdot}$ is the
conditional expectation of its terminal value leads to $ \mathbb E(
\max L_T)-1=\mathbb E(L_T\ln(\max L_T))$.
\item[(i.a)] Since $\ln(\max L_T)\!\geq\!\ln( L_T)^+$, and $L_t \ln
(L_t)^-\!\leq\!
1/e$, then $|L_T\ln(L_T)|\!\in\break{\mathbb L}^1$
when $\max L_T\in{\mathbb L}^1$. This establishes the necessary
condition.
\item[(ii)]To prove that finite entropy implies integrability of the $\max$,
we show inequality \eqref{eqLlogLinequality}. We start by studying
$\mathbb E(L_T\ln(\max L_T))-\mathbb
E(L_T\ln( L_T))$ from the concavity of the function $\ln$.
Given that $x^* =\mathbb E(\max L_T)=\break\mathbb
E_{\mathbb Q}(\max L_T/L_T)$ if $\mathbb
Q=L_T.\mathbb P$, $\mathbb E_{\mathbb Q}( \ln(\max L_T/L_T))\leq
\ln(\mathbb E_{\mathbb Q}(\max L_T/L_T))=\ln x^* $.
Inequality \eqref{eqLlogLinequality} is then easily obtained.
An example of c\`{a}dl\`{a}g martingale satisfying the equality may be
found in Harremo\"es~\cite{Harremoes}.
\item[(iii)]The extension to $U_{\cdot}$ being a positive submartingale does
not present any specific difficulties other than purely
computational, since $ \mathbb E( \max U_T)-U_0\leq\mathbb
E(U_T\ln(\max U_T/U_0))$. Taking now $\mathbb
Q=(U_T/m).\mathbb P$, $x^* /m=\break\mathbb E(\max U_T)/m=\mathbb
E_{\mathbb Q}(\max U_T/U_T)$, the concavity inequality becomes:\break
$\mathbb E_{\mathbb Q}(\ln(\max U_T/U_T))\leq\ln(\mathbb E_{\mathbb
Q}(\max U_T/U_T))=\ln(x^* /m)$.\vadjust{\goodbreak} Some elementary algebra gives
the final result.
Observe that $u_m$ is convex and minimal at $z=m$. Since $U_0\leq m$,
$u_m(U_0)\geq u_m(m)$. Then since the entropy is positive, $u_m(U_0)+
H^{\mathrm{ent}}(U_T)$ belongs to the range of $\{u_m(z); z\geq m\}$ and $
\mathbb E( \max U_T)\leq u_m^{-1}(u_m(U_0)+ H^{\mathrm{ent}}(U_T))$.
\item[(i.b)] We now show the link between entropy and quadratic variation.
Assume that $L_T\ln(L_T)\in{\mathbb L}^1$. Let $T_K$ be an increasing
sequence of stop\-ping~times, such that $\ln(L_t)=M_t-\frac{1}{2}\langle M \rangle
_t$ is
bounded by $K$. The sequence $T_K$ is increasing and goes to infinity
with $K$. Thanks to the Girsanov theorem, $N^{\mathbb Q}_{\cdot
}=M_{\cdot}-\langle M \rangle
_{\cdot}$ is a local martingale with respect to the probability measure
$\mathbb Q=L_T.\mathbb P$, and
$\mathbb E(L_T\frac{1}{2}\langle M \rangle_T)=\lim_K \mathbb E(L_{T}\frac{1}{2}
\langle M \rangle_{T\wedge T_K})=\break\lim_K\mathbb E(L_{T\wedge T_K}\frac{1}{2}
\langle M \rangle_{T\wedge T_K}).$ Using \mbox{$\mathbb E(L_{T\wedge
T_K}N^{\mathbb Q}_{T\wedge T_K})=0$,}
\begin{eqnarray*}
\mathbb E\bigl(L_{T\wedge T_K}\tfrac{1}{2}\langle M \rangle_{T\wedge T_K}\bigr)&=&\mathbb
E\bigl(L_{T\wedge T_K}\bigl(M_{T\wedge T_K}-\langle M \rangle_{T\wedge T_K}+\tfrac
{1}{2}\langle M \rangle
_{T\wedge T_K}\bigr)\bigr)\\
&=&\mathbb E(L_{T\wedge T_K}\ln(L_{T\wedge T_K}))\leq\mathbb
E( \max L_{T\wedge T_K})-1\\
&\leq&
\mathbb E( \max L_T)-1.
\end{eqnarray*}
Then $N^{\mathbb Q}_{\cdot}$ is a square integrable $\mathbb Q$-martingale
and ${\mathbb E}_{\mathbb Q}(\ln(L_{T}))={\mathbb E}_{\mathbb Q}(\frac{1}{2}
\langle M \rangle_T)$, which is the desired equality.\quad\qed
\end{longlist}
\noqed\end{pf}
Let us now come back to the question of maximal inequalities for
$\mathcal{Q}(\Lambda,C)$-semimartingales.
The various results are based on the behaviour of the entropic process
$\rho_{\cdot}(\bar{X}^{\Lambda,C}_T(|Y_T|))$ also denoted $\rho
_{\cdot}(\bar
{X}^{\Lambda,C}_T).$
To give a concise form to the various but similar estimates, we
introduce the following family of positive increasing functions $\psi
_p$ defined on $\mathbb R^+$ by $\psi_p(z)=z^p$ if $p\neq1$ and $\psi
_1(z)=z \ln z-z+1$. Note that, as in the previous subsections, we
consider separately the case of entropic submartingales.
\begin{eproposition} \textup{(i)} Assume $X_{\cdot}=X_0-V_{\cdot}+M_{\cdot}$
to be an entropic
submartingale ($|X_T|\in\mathbb L^1_{\exp}$), such that $\psi_p(
\exp
X_T)\in\mathbb L^1$ provided that $p\geq1$.
Then, both processes $\exp(X_{\cdot})$ and ${\mathcal E}(M)_{\cdot}$
belong to
$\mathcal S^p$, and their $\mathcal S^p$ norm are dominated by some
increasing function of $\mathbb E(\psi_p( \exp X_T))$ for $p\geq1$,
and of $\psi_p( \mathbb E(\exp X_T))$ for $p<1$.\vspace*{-6pt}
\begin{longlist}
\item[(ii)]Let $Y_{\cdot}$ be a $\mathcal{Q}(\Lambda,C)$-semimartingale
such that
$\psi_p( \exp\bar{X}^{\Lambda,C}_T)\in\mathbb L^1$ when $p\geq1$.
The processes $\exp(\rho_{\cdot}(\bar{X}^{\Lambda,C}_T))$, $\Phi
_{\cdot}(|Y_T|)$,
$\exp(e^{C_{\cdot}}|Y_{\cdot}|+\int_0^{\cdot} e^{C_s}\,d\Lambda_s)$ and
${\mathcal
E}(e^C\ast M)_{\cdot}$ belong to $\mathcal S^p$ and their $\mathcal S^p$
norms are dominated by some increasing function of $\mathbb E(\psi_p(
\exp\bar{X}^{\Lambda,C}_T))$ for $p \geq1$ or $\psi_p( \mathbb
E(\exp
\bar{X}^{\Lambda,C}_T))$ for $p<1$.
\end{longlist}
\end{eproposition}
\begin{pf} (i) The proof relies on the multiplicative decomposition of
the submartingale $\exp(X_{\cdot})=\exp(X_0+A_{\cdot}){\mathcal
E}(M)_{\cdot}$. Then $\exp
(X_{\cdot})$ and ${\mathcal E}(M)_{\cdot}$ have the same maximal properties.
The proof is a simple consequence of the entropic inequalities of
Proposition~\ref{propcharacterizationbarXDexp}, the BDG inequalities and the maximal
estimates given in Proposition~\ref{propLlogLinequality};\vadjust{\goodbreak}
\begin{longlist}[(ii)]
\item[(ii)]The maximal estimates of $\exp(\rho_{\cdot}(\bar{X}^{\Lambda,C}_T))$
are a simple consequence of (i), and yield the other estimates
since the different processes are dominated by $\exp(\rho_{\cdot}(\bar
{X}^{\Lambda,C}_T))$. For the process ${\mathcal E}(e^C\star M)_{\cdot}$, we
have to use the decomposition of the entropic submartingale
$e^{C_{\cdot}}Y_{\cdot}+\int_0^{\cdot}e^{C_s}\,d\Lambda_s$.\quad\qed
\end{longlist}
\noqed\end{pf}
\subsubsection*{Change of probability measures and entropy} Let $L$ be a positive local martingale with
$L_0=1$. The condition $\mathbb E(L_T \ln(L_T))=H^{\mathrm{ent}}(L)<
\infty$
naturally appears when considering the martingale $L$ as
the likelihood of a probability measure $\mathbb{Q}$ equivalent to
$\mathbb{P}$,
as it measures the positive
Shannon entropy $H^{\mathrm{ent}}(d\mathbb{Q}/d\mathbb{P})=\mathbb E
(d\mathbb{Q}/d\mathbb{P} \ln(d\mathbb{Q}/d\mathbb{P}))$ of
$\mathbb
{Q}$ with respect to $\mathbb{P}$. The previous result states that
$H^{\mathrm{ent}}(d\mathbb{Q}/d\mathbb{P})$ is finite if and only if the
martingale density $L_{\cdot}$ is in $\mathcal S^1$.
This interpretation is particularly interesting when using the
variational formulation of the entropic risk measure $\rho_0(\xi
_T)$ (see, e.g., Frittelli~\cite{Frittelli}, F\"ollmer and
Schied~\cite{Foellmer-Schied})
\begin{equation}\label{eqdualentropy}
\rho_0(\xi_T)=\sup_{\mathbb{Q}}\bigl\{\mathbb E_{\mathbb{Q}}(\xi_T)-H^{\mathrm{ent}}({\mathbb{Q}}/{\mathbb{P}}) \mid H^{\mathrm{ent}}({\mathbb{Q}}/{\mathbb{P}})<+\infty\bigr\}.
\end{equation}
In other words, when $\xi_T \in\mathbb L^1_{\exp}$, for any martingale
density $L^Q \in\mathcal S^1$ whose $\mathcal S^1$-norm is bounded
by $K$, we have a uniform estimate of $\mathbb E_Q(\xi_T)$ given by
$\mathbb E_Q(\xi_T)\leq\rho_0(\xi_T)+\mathbb E(L_T\ln(L_T))\leq\rho_0(\xi_T)+K$.
Moreover, when the random variable $\xi_T$ itself is associated with
a finite relative entropy probability measure $\mathbb{Q}^{\xi_T}$
defined by its density $L_T^{\xi_T}=e^{(\xi_T-\rho_0(\xi_T))}$, we
can prove by a simple verification that the supremum is attained for
$\mathbb{Q}^{\xi_T}$.
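Indeed, $\mathbb E[L_T^{\xi_T}]=\mathbb E[e^{\xi_T}]e^{-\rho_0(\xi_T)}=1$ and
\[
\mathbb E_{\mathbb{Q}^{\xi_T}}(\xi_T)-H^{\mathrm{ent}}({\mathbb{Q}^{\xi_T}}/{\mathbb{P}})
=\mathbb E\bigl[L_T^{\xi_T}\xi_T\bigr]-\mathbb E\bigl[L_T^{\xi_T}\bigl(\xi_T-\rho_0(\xi_T)\bigr)\bigr]
=\rho_0(\xi_T),
\]
so the supremum in \eqref{eqdualentropy} is indeed attained at $\mathbb{Q}^{\xi_T}$.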
Very recently, Choulli and Schweizer~\cite{ChoulliSchweizer} have
developed applications to mathematical finance of the $L\log L$ condition.
\section{Quadratic variation estimates and stability results}\label
{sectionestimates}
We are now able to establish the main contribution of this paper,
that is, some stability results, which require some uniform estimation of key
quantities, including quadratic variation and running supremum. In
order to use the previous inequalities, we need the family of
$\mathcal{Q}(\Lambda,C)$-semimartingales we consider to be uniformly
dominated. Following Remark~\ref{remetaT}, we can
replace $Y_T$ by a generic random variable $\eta_T$ such that $|\eta
_T| \geq|Y_T|$ and $\bar X^{C,\Lambda}_T(|\eta_T|)$ satisfies an
appropriate integrability condition. Therefore, it seems natural to
introduce the following
class ${\mathcal S_Q}(|\eta_T|,\Lambda, C)$, and to work within this
class of quadratic semimartingales:
\begin{edefinition}\label{defSQ}
Let $|\eta_T|$ be a $\mathcal{F}_T$-random variable, such that\break
$\bar X^{C,\Lambda}_T(|\eta_T|)=e^{C_{T}}|\eta_T|+\int
_0^Te^{C_{s}}\,d\Lambda_s$ belongs to ${\mathbb
L}^1_{\exp}$. The class ${\mathcal S_Q}(|\eta_T|,\Lambda, C)$ is the
set of $\mathcal{Q}(\Lambda,C)$-semimartingales $Y_{\cdot}$ defined on
$[0,T]$, such that
$|Y_{\cdot}|\leq
\rho_{\cdot}(e^{C_{\cdot,T}}|\eta_T|+\int_{\cdot}^Te^{C_{\cdot,s}}\,
d\Lambda
_s)$ a.s.
\end{edefinition}
\subsection{Quadratic variation estimates}\label{subvarqestimates}
We now study the quadratic variation of a
$\mathcal{Q}(\Lambda,C)$-semimartingale $Y_{\cdot}$ when $Y_{\cdot}$ belongs to
$\mathcal{S_Q}(|\eta_T|,\Lambda, C)$. Following Kobylanski
\cite{Kobylanski00}, the best way to do so
is to use the function $v(x)=e^{x}-1-x$ instead of the simple
exponential function. This function is indeed positive, convex, and increasing
for $x\geq0$, and satisfies $v''(x)-v'(x)=1$.
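For later use, note that
\[
v'(x)=e^{x}-1\geq0,\qquad v''(x)=e^{x},\qquad v'(0)=0,
\]
the last equality being the reason why the local time of $Y_{\cdot}$ at $0$ will disappear from the computations below.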
In the following, we use the short notation $\bar X_T^{C,\Lambda
}(|\eta
_T|)=\bar X^{C,\Lambda}_T$.
\begin{theorem}[(Quadratic variation estimates)]\label{thevarqestimates}
Let $Y_{\cdot} \in{\mathcal S_Q}(|\eta_T|,\Lambda, C)$.
\begin{longlist}[(iii)]
\item[(i)]Then, the quadratic variation $\langle M \rangle_{\cdot}$ of the $\mathcal{Q}
(\Lambda,
C)$-semimartingale $Y_{\cdot}=Y_0+M_{\cdot}-V_{\cdot}$ satisfies for
any stopping times
$\sigma\leq T$,
\begin{equation}\label{eqquadraticvariationestimate}
\quad\tfrac{1}{2}\mathbb{E}[\langle M \rangle_{\sigma,T}|\mathcal{F}_\sigma]\leq\Phi_\sigma
(|Y_T|){\mathbf
1}_{\{\sigma<T\}}\leq\mathbb{E}\bigl[\exp(\bar X^{C,\Lambda}_T(|\eta_T|))
{\mathbf1}_{\{\sigma<T\}}|\mathcal{F}_\sigma\bigr].
\end{equation}
In particular, the martingale $M_{\cdot}$ is in ${\mathbb H}^{2}$,
\textit{with
the uniform estimate}
\begin{equation}\label{eqquadraticvariationH2estimate}
\mathbb{E}\bigl[\tfrac{1}{2}\langle M \rangle_T\bigr]\leq\mathbb{E}[\exp(\bar X^{C,\Lambda}_T(|\eta_T|))].
\end{equation}
\item[(ii)]Let $p^\eta=\sup\{p; \mathbb{E}[\exp(p \bar
X^{C,\Lambda}_T(|\eta_T|))]<+\infty\}$. Then $p^\eta\geq1$ and
$\forall p\in[1,
p^\eta[$, the martingale $M$ belongs to ${\mathbb H}^{2p}$, and
\begin{equation}\label{eqquadraticvariationHpestimate}
\mathbb{E}[\langle M \rangle_T^p]\leq(2p)^p \mathbb{E}[\exp(p \bar X^{C,\Lambda}_T(|\eta
_T|))].
\end{equation}
\item[(iii)]If $\Phi_t(|\eta_T|)=\mathbb{E}[\exp
(e^{C_{t,T}}|\eta_T|+\int_t^Te^{C_{t,u}}\,d\Lambda_u)|\mathcal{F}_t]$ is
uniformly bounded in $t \leq T$, then the conditional quadratic
variation $\frac{1}{2}\mathbb{E}[\langle M \rangle_{\sigma,T}|\mathcal{F}_\sigma]$ is uniformly
bounded. Hence $M_{\cdot}$ is a BMO-martingale.
\end{longlist}
\end{theorem}
\begin{pf}
By analogy with the previous notation, when using the
function $v(x)=e^{x}-1-x$, we set $V^{\Lambda,C}_t(e^{|Y|})=
v(|Y_t|)+\int_0^t v'(|Y_s|)(d\Lambda_s+|Y_s|\,dC_s)= v(|Y_t|)+\int_0^t
v'(|Y_s|)\,dD^{\Lambda,C}_s$. So,\vspace*{1pt} $U^{\Lambda
,C}_t(e^{|Y|})-V^{\Lambda
,C}_t(e^{|Y|})=1+|Y_t|+D^{\Lambda,C}_t(|Y|)$, and both processes
$U^{\Lambda,C}_{\cdot}$ and $V^{\Lambda,C}_{\cdot}$ are in the
class $(\mathcal{D})$ since
$Y_{\cdot} \!\in\!{\mathcal S_Q}(|\eta_T|,\Lambda, C)$.
\begin{longlist}[(iii)]
\item[(i.1)] As seen in the proof of Proposition \ref{propcharacterizationbarXDexp},
the semimartingale $|Y_{\cdot}|$ is associated
with the martingale $M^s_{\cdot}=\operatorname{sign}(Y_{\cdot})\star M_{\cdot}$, the finite variation
process $V^s_{\cdot}=\operatorname{sign}(Y_{\cdot})\star V_{\cdot}$ and the
local time at $\{0\}$,
which disappears in It\^o's formula since $v'(0)=0$. Using calculations
similar to those of the previous section, and the identity
$v''(x)-1=v'(x)$, we obtain that the process
$V^{\Lambda,C}_t(e^{|Y|})-\frac{1}{2}\langle M \rangle_t =v(|Y_0|)+\int_0^t
v'(|Y_s|)\,dM^s_s+\int_0^t v'(|Y_s|)(dD^{\Lambda,C}_s-dV^s_s+\frac{1}{2}
\,d\langle M \rangle_s)$
is a submartingale, and since $V^{\Lambda,C}_{\cdot}$ is in the class
$(\mathcal{D})$,
for any $\sigma\leq T$,
$ \mathbb{E}[\frac{1}{2}\langle M \rangle_{\sigma,T}|\mathcal{F}_\sigma]\leq\mathbb{E}
[v(|Y_T|)-v(|Y_\sigma
|)+\int_\sigma^Tv'(|Y_s|)\,dD^{\Lambda,C}_s|\mathcal{F}_\sigma]$.
\item[(i.2)] Since, by definition, $\forall x\geq0, 0\leq v(x) \leq e^x$ and $
v'(x) \leq e^x$,
\[
\int_\sigma^Tv'(|Y_s|)\,dD^{\Lambda,C}_s \leq\int
_\sigma^T
\Phi_s(d\Lambda_s+\ln|\Phi_s|\,dC_s) \qquad\mbox{for any }\sigma\leq T.
\]
Thanks to the supermartingale property of $U^{\Lambda,C}_{\cdot}(\Phi)$
[Theorem~\ref{propUtransform}(ii)] and the inequality $\Phi_{\cdot}\geq
\exp(|Y_{\cdot}|)$ [implying in particular $\Phi_T\geq v(|Y_T|)$], we have
$ \mathbb{E}[\int_\sigma^T \Phi_s(d\Lambda_s+\ln|\Phi_s|\,dC_s)|\mathcal{F}
_\sigma]\leq\mathbb{E}
[\Phi_\sigma-\Phi_T|\mathcal{F}_\sigma]$ and
\begin{eqnarray*}
\mathbb{E}\bigl[\tfrac{1}{2}\langle M \rangle_{\sigma,T}|\mathcal{F}_\sigma\bigr]&\leq&\mathbb{E}
[v(|Y_T|)-v(|Y_\sigma|)-(\Phi_T-\Phi_\sigma)|\mathcal{F}_\sigma]\\
&=&\mathbb{E}\bigl[\bigl(-\bigl(\Phi_T-v(|Y_T|)+v(|Y_\sigma|)\bigr)+\Phi_\sigma\bigr)\mathbf
1_{\{\sigma<T\}}\big|\mathcal{F}_\sigma\bigr]\\
&\leq&\Phi_\sigma\mathbf1_{\{\sigma<T\}}\leq\mathbb{E}\bigl[\exp\bar
X^{\Lambda,C}_{T} {\mathbf1} _{\{\sigma<T\}}\big|\mathcal{F}_\sigma\bigr].
\end{eqnarray*}
\item[(ii)]As observed in Lenglart, L\'epingle and Pratelli \cite
{Lenglart-Lepingle-Pratelli}, the final result is a simple consequence
of the
so-called Garsia--Neveu lemma (Lemma~\ref{lemNeveuGarsia}) (see, e.g., Neveu~\cite{Neveu}) recalled below.
\item[(iii)]This is a straightforward consequence of the inequality
$\mathbb{E}[\frac{1}{2}\langle M \rangle_{\sigma,T}|\break\mathcal{F}_\sigma] \leq\Phi_\sigma
(|\eta_T|)$.\quad\qed
\end{longlist}
\noqed\end{pf}
\begin{elemma}[(Garsia--Neveu lemma)]
\label{lemNeveuGarsia} Let $A_{\cdot}$ be a predictable c\`adl\`ag
increasing process and $U$ a random variable, positive and integrable.
If for any stopping times $\sigma\leq T$, $\mathbb{E}[A_T-A_\sigma|\mathcal{F}
_{\sigma}]\leq\mathbb{E}[U{\mathbf1} _{\{\sigma<T\}}|\mathcal{F}_{\sigma}],$
\[
\forall p\geq1\qquad \mathbb{E}[A_T^p]\leq p^p\mathbb{E}[U^p].
\]
More generally, $\mathbb{E}[F(A_T)]\leq\mathbb{E}[F(pU)]$ for any convex function $F$
such that $p=\sup_{x>0}(x (\ln F)'(x))<+\infty$.
\end{elemma}
Here we apply this lemma to the random variable $U=\exp( \bar
X^{\Lambda,C}_{T}(|\eta_T|))$
for any $p\geq1$ such that $U\in{\mathbb L}^p.$
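Concretely, with $F(x)=x^p$, for which $x(\ln F)'(x)=p$, and with $A_{\cdot}=\frac{1}{2}\langle M \rangle_{\cdot}$, estimate \eqref{eqquadraticvariationestimate} provides the required domination and the lemma gives
\[
\mathbb{E}\bigl[\bigl(\tfrac{1}{2}\langle M \rangle_T\bigr)^p\bigr]\leq p^p\,\mathbb{E}[U^p],
\quad\mbox{that is,}\quad
\mathbb{E}[\langle M \rangle_T^p]\leq(2p)^p\,\mathbb{E}\bigl[\exp\bigl(p \bar X^{\Lambda,C}_{T}(|\eta_T|)\bigr)\bigr],
\]
which is the estimate of Theorem~\ref{thevarqestimates}\textup{(ii)}.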
As a corollary of this result, uniform estimates may be obtained for
the total variation of the process~$V_{\cdot}$.
\begin{ecorollary}\label{corVestimates}
Let $Y_{\cdot}\in{\mathcal S_Q}(|\eta_T|,\Lambda, C)$.
The total variation of the process~$V_{\cdot}$ such that $Y_{\cdot
}=Y_0+M_{\cdot}-V_{\cdot}$
satisfies for $1\leq p<p^\eta$
\begin{equation}\label{eqtotalvariationestimate}
\mathbb{E}[|V |^p_T]\leq(2p)
^p \mathbb{E}[\exp(p \bar X^{C,\Lambda}_T)].
\end{equation}
When $\Phi_{\cdot}(|\eta_T|)=\mathbb{E}[\exp
(e^{C_{\cdot,T}}|\eta_T|+\int_{\cdot}^Te^{C_{\cdot,u}}\,d\Lambda_u)|\mathcal{F}_{\cdot}]$ is
bounded by $K_C$, then $\mathbb{E}[|V|_{\sigma,T}|\mathcal{F}_\sigma]\leq2K_C.$
\end{ecorollary}
\begin{pf} Since $V_{\cdot}$ satisfies the structure condition $\mathcal{Q}
(\Lambda,
C)$, $\mathbb{E}[ |V|_{\sigma,T}|\mathcal{F}_\sigma]\leq
\mathbb{E}[\Lambda_{\sigma,T}+\int_\sigma^T|Y_s|\,dC_s+\frac{1}{2}\langle M \rangle_{\sigma,T}|\mathcal{F}_\sigma]\leq
2\mathbb{E}[\exp(\bar X^{\Lambda,C}_{T}){\mathbf1} _{\{\sigma<T\}}|\mathcal{F}_\sigma]$.
Indeed,\break $\mathbb{E}[\Lambda_{\sigma,T}+\int_\sigma^T|Y_s|\,dC_s|\mathcal{F}
_\sigma
]\leq\mathbb{E}
[\int_\sigma^T
e^{|Y_s|}(d\Lambda_s+|Y_s|\,dC_s)|\mathcal{F}_\sigma] \leq
\mathbb{E}[(\Phi_\sigma-\Phi_T)| \mathcal{F}_\sigma] \leq\mathbb{E}[\exp( \bar
X^{\Lambda,C}_{T}){\mathbf1} _{\{\sigma<T\}}|\mathcal{F}_\sigma]$. We
conclude with Lemma~\ref{lemNeveuGarsia}.
\end{pf}
\subsection{\texorpdfstring{Stability results for $\mathcal{Q}(\Lambda,C)$-semimartingales}
{Stability results for Q(Lambda,C)-semimartingales}}\label{substabilityresults}
We can start by noticing that the class ${\mathcal S_Q}(|\eta_T|,\Lambda, C)$ is stable under a.s. convergence,
since the submartingale property of both processes $U_{\cdot}(e^Y)$ and
$U_{\cdot}(e^{-Y})$, dominated by the $(\mathcal{D})$-supermartingale
$U_{\cdot}(\Phi)$, is stable under a.s. convergence.
Moreover, Theorem~\ref{thcharacterizationgeneralquasimart} implies that\vadjust{\goodbreak}
the limit process is continuous and is also in ${\mathcal S_Q}(|\eta
_T|,\Lambda, C)$.
However, the previous estimates of both the
quadratic variation and the
finite variation processes suggest that a better stability result
may hold true, in particular regarding the strong convergence of the
martingale parts. The space of martingales where this convergence
takes place depends essentially on the exponential integrability
properties of the random variable $\bar X^{\Lambda,C}_T(|\eta_T|)$. The
method is very similar to that of Lepeltier and San Martin~\cite
{LepeltierSanMartin97}. When the
$\mathcal{Q}(\Lambda,C)$-semimartingales are bounded, this type of result
has already been obtained for the $\mathbb H^2$-convergence by
Kobylanski~\cite{Kobylanski00} and Morlais~\cite{Morlais1}.
Our stability result is novel and direct, and provides convergence
results in the more general ${\mathbb H}^1$ setting.
This result, which appears here for the first time in a BSDE framework,
is based on an old result of Barlow and Protter~\cite{Barlow-Protter} on
the convergence of semimartingales.
\begin{theorem}\label{thNicolas}
Assume the sequence $(Y^n_{\cdot})$ of ${\mathcal S_Q}(|\eta
_T|,\Lambda, C)$
semimartingales is a Cauchy sequence for the
a.s. uniform convergence, that is,\break $\sup_{t\leq T}|Y^n_t-Y^{n+p}_t|$
tends to 0 almost surely when $n \rightarrow\infty$.
Then the limit process $Y_{\cdot}$ is a ${\mathcal S_Q}(|\eta
_T|,\Lambda,
C)$-semimartingale
$Y_{\cdot}=Y_0+M_{\cdot}-V_{\cdot}$.
Different types of convergence hold true for the processes $(M^n_{\cdot},
V^n_{\cdot})$ of the decomposition $Y^n_{\cdot}=Y^n_0+M^n_{\cdot
}-V^n_{\cdot}$:
\begin{enumerate}[(ii)]
\item[(i)]Martingale convergence of $(M^n_{\cdot})$ to $M_{\cdot}$.
\begin{enumerate}[(a)]
\item[(a)] The sequence $(M^n_{\cdot})$ converges to~$M_{\cdot}$ in
${\mathbb H}^1$.
\item[(b)]If, for some $p>1$, $\bar X^{\Lambda,C}_{T}(|\eta_T|)\in{\mathbb
L}^p_{\exp}$, the sequence $(M^n_{\cdot})$ converges to~$ M_{\cdot}$ in $\mathbb H^{2p}$, and in the BMO-space if $\Phi_{\cdot}(|\eta_T|)$ is bounded.
\end{enumerate}
\item[(ii)]The sequence of finite variation processes $(V^n_{\cdot})$
converges at
least in $\mathcal S^1$ to the process $V_{\cdot}$ satisfying the structure
condition $\mathcal{Q}(\Lambda,C)$.
\end{enumerate}
\end{theorem}
\begin{pf} We proceed\footnote{An earlier proof of this result in the
BMO case is due to Nicolas Cazanave, a former Ph.D. student at Ecole
Polytechnique.} in several steps to prove this convergence result.
We first introduce some notation and make some elementary
calculations. For $s\leq t$, let $Y^{i,j}_t=Y_t^i-Y^j_t,
M^{i,j}_t=M_t^i-M^j_t $ and
$Y^{i,j}_{s,t}=(Y_t^i-Y^i_s)-(Y_t^j-Y^j_s) $, and the short notation
first introduced in Section~\ref{parmaximalinequalities}, $\sup
_{s\leq u\leq t}
| Y_{u}^{i,j}-Y_{s}^{i,j}|\equiv\max| Y^{i,j}_{s,t}|$.
Then for any stopping times $\sigma\leq\tau\leq T$,
\begin{eqnarray*}
\varq{M^{i,j}}_{\sigma,\tau}&=&|Y^{i,j}_{\sigma,\tau}|^2-2\int
_\sigma
^\tau Y^{i,j}_{\sigma,s} \,dY^{i,j}_s \\
&\leq&|Y^{i,j}_{\sigma,\tau}|^2-2\int_\sigma^\tau Y^{i,j}_{\sigma
,s}
\,dM^{i,j}_s
+ 2\int_\sigma^\tau|Y^{i,j}_{\sigma,s}| \, d(|V^j|_s+|V^i|_s).
\end{eqnarray*}
Using either the fact that $Y^{i,j}$ is bounded, or a uniform
localization procedure, the stochastic integral
$\int_\sigma^{\tau_n} Y^{i,j}_{\sigma,s} \,dM^{i,j}_s$ has null
conditional expectation for\vadjust{\goodbreak} a well-chosen stopping time $\tau_n$. Then,
thanks to the monotonicity of $\varq{M}$ and Corollary~\ref
{corVestimates}, with $B^{i,j}=2(|V^i|+|V^j|)$,
\begin{eqnarray*}
\mathbb{E}[\varq{M^{i,j}}_{\sigma,T} | \mathcal{F}_\sigma]
&\leq&\mathbb{E}\biggl[\max|Y^{i,j}_{\sigma,T}|^2 {\mathbf1} _{\{\sigma
<T\}}+
\int_\sigma^T\max|Y^{i,j}_{\sigma,s}| \,dB^{i,j}_s \Big| \mathcal{F}_{\sigma}
\biggr]\\
&\leq&\mathbb{E}\bigl[(\max|Y^{i,j}_{0,T}|^2+\max
|Y^{i,j}_{0,T}|B^{i,j}_T) {\mathbf1} _{\{\sigma<T\}}\big| \mathcal{F}_{\sigma
}\bigr].
\end{eqnarray*}
We now start with the proof corresponding to the assumption $\bar
X^{\Lambda,C}_{T}(|\eta_T|)\in{\mathbb L}^p_{\exp}$, since it is very
similar to the proof in the linear growth case (see Lepeltier and San
Martin~\cite{LepeltierSanMartin97}).
(i.b) Thanks to the Garsia--Neveu lemma (Lemma \ref
{lemNeveuGarsia}), for $r\geq1$,
\begin{eqnarray*}
\mathbb{E}[\varq{M^{i,j}}_T^r ] &\leq& r^r \mathbb{E}[(\max
|Y^{i,j}_{0,T}|^2+\max|Y^{i,j}_{0,T}| B^{i,j}_T)^r]\\
& \leq&\tfrac{1}{2}(2r)^r\{\mathbb{E}[(\max|Y^{i,j}_{0,T}|)^{2r}
]+\mathbb{E}[(\max|Y^{i,j}_{0,T}| B^{i,j}_T)^r]\}.
\end{eqnarray*}
Then, since $B^{i,j}_T$ belongs to $\mathbb L^p$, by the H\"older
inequality with conjugate exponents $p/r$ and $q/r$ (i.e., with $q$ such that $\frac{r}{p}+\frac{r}{q}=1$), for $1\leq r<p$ and $K_r=\frac{1}{2}(2r)^r$,
\begin{eqnarray*}
\mathbb{E}[(\max|Y^{i,j}_{0,T}| B^{i,j}_T)^r]&\leq&(\mathbb
{E}[(\max|Y^{i,j}_{0,T}|)^q])^{{r}/{q}}
(\mathbb{E}[(B^{i,j}_T)^p])^{{r}/{p}},\\
\mathbb{E}[\varq{M^{i,j}}_T^r ]&\leq&
K_r\{\mathbb{E}[\max|Y^{i,j}_{0,T}|^{2r}
]\\
&&\hspace*{15pt}{}+(\mathbb{E}[(\max|Y^{i,j}_{0,T}|)^q])^{{r}/{q}}
(\mathbb{E}[(B^{i,j}_T)^p])^{{r}/{p}}\}.
\end{eqnarray*}
From the monotonicity of both sides of this inequality with respect to
$r$, we can take $r=p$.
We have used that
$\max|Y^{i,j}_{0,T}|$ has finite moments of all orders since, as shown
in Section~\ref{parmaximalinequalities}, $\max|Y^i_{0,T}|$ and $\max
|Y^j_{0,T}|$ are in ${\mathbb L}^p_{\exp}$.
Hence, we have the desired convergence.
(i.c) In the bounded case, thanks to Corollary~\ref{corVestimates},
the conditional total variations $\mathbb{E}[|V^n|_{\sigma,T}|\mathcal{F}_\sigma]$ are
uniformly bounded by $C_V$. To obtain the BMO convergence, we have to modify
the previous proof, by using an integration by parts formula
involving the conditional variation of $B^{i,j}$,
\begin{eqnarray*}
\mathbb{E}\biggl[\int_\sigma^T\max| Y_{\sigma,s}^{i,j}| \,dB^{i,j}_s |
\mathcal{F}_\sigma\biggr]
&=& \mathbb{E}\biggl[\int_\sigma^T d_u\max| Y_{\sigma,u}^{i,j}|\biggl(
\mathbb{E}\biggl[\int_u^T \,dB^{i,j}_s \Big| \mathcal{F}_u\biggr]\biggr) \Big| \mathcal
{F}_\sigma\biggr]\\
&\leq&
2 C_V
\mathbb{E}[\max| Y_{\sigma,T}^{i,j}| | \mathcal{F}_\sigma],
\end{eqnarray*}
and so
\[
\mathbb{E}[\varq{M^{i,j}}_{\sigma,T} |
\mathcal{F}_\sigma]\leq2 C_V
\mathbb{E}[\max| Y_{\sigma,T}^{i,j}| | \mathcal{F}_\sigma
]+\mathbb
{E}[|Y^{i,j}_{\sigma,T}|^2 | \mathcal{F}_\sigma].
\]
Then, the BMO-convergence holds true.
(i.a) The proof of the general case requires a different argument,
based on a result of Barlow and Protter~\cite{Barlow-Protter}
on the convergence of semimartingales. In the framework of quadratic
semimartingales, the key points are the uniform estimates of both
the quadratic variation and the total variation\vadjust{\goodbreak} given in Theorem \ref
{thevarqestimates}, equation \eqref{eqquadraticvariationH2estimate}
and Corollary~\ref{corVestimates}. The proof given in \cite
{Barlow-Protter} of the $\mathbb H^1$-convergence of the martingales is
based on the
square root of the inequality given at the beginning of the proof,
\[
\varq{M^{i,j}}_{t}\leq|Y^{i,j}_{0,t}|^2-2\int_0^t Y^{i,j}_{0,s} \,dM^{i,j}_s+
2\int_0^t |Y^{i,j}_{0,s}| \,dB^{i,j}_s.
\]
The first step is to estimate the square root of $\max
|Y^{i,j}_{0,\cdot}\star M^{i,j}_{0,T}|$
using the Burkholder--Davis--Gundy inequalities
for continuous martingales for $p=\frac{1}{2}$, that have been
recalled in
Section~\ref{subsecentropicinequalities}:
$\mathbb{E}[\max|Y^{i,j}_{0,\cdot}\star M^{i,j}_{T}|^{{1}/{2}}]\leq
\bar{C}
\mathbb{E}[\varq{Y^{i,j}_{0,\cdot}\star M^{i,j}}_{T}^{1/4}]$ where $\bar
{C}$ is a
universal constant. Then, since $\mathbb{E}[\varq{Y^{i,j}_{0,\cdot}\star
M^{i,j}}_{T}^{1/4}]\leq\mathbb{E}[(\max|Y^{i,j}_{0,T}|)^{{1}/{2}}\varq
{M^{i,j}}_{T}^{1/4}]$,
\begin{eqnarray*}
\mathbb{E}\bigl[\sqrt{\varq{M^{i,j}}_T}\bigr]& \leq&\mathbb{E}[\max|Y^{i,j}_{0,T}|]+\sqrt
{2}\bar{C} \mathbb{E}[\max|Y^{i,j}_{0,T}|]^{ {1}/{2}}\mathbb{E}\bigl[\sqrt{\varq
{M^{i,j}}_T}\bigr]^{{1}/{2}}\\
&&{}+
\sqrt{2}\mathbb{E}[\max|Y^{i,j}_{0,T}|]^{{1}/{2}}\mathbb{E}[B^{i,j}_T]^{{1}/{2}}.
\end{eqnarray*}
Since $\mathbb{E}[\sqrt{\varq{M^{i,j}}_T}] $ and $\mathbb{E}[B^{i,j}_T]$ are uniformly
bounded, and $\mathbb{E}[\max|Y^{i,j}_{0,T}|]$ goes to~$0$, then $\mathbb{E}[\sqrt
{\varq{M^{i,j}}_T}] $ also goes to $0$. The $\mathbb
H^1$-convergence of the martingale part is established.
(ii) The next point is to study the convergence of the sequence
$(V^n_{\cdot})$ to a process $V_{\cdot}$ satisfying the same
structure condition
$\mathcal{Q}(\Lambda,C)$. Since the sequence $(Y^n_{\cdot},
M^n_{\cdot},
\varq{M^n}_{\cdot}^{{1}/{2}})$ converges in $\mathcal S^1$ to
$(Y_{\cdot}, M_{\cdot},\langle M \rangle^{{1}/{2}}_{\cdot})$, the sequence
$(V^n_{\cdot})$ also converges in
$\mathcal S^1$. Therefore, we can extract a
subsequence, still denoted $(Y^n_{\cdot}, M^n_{\cdot},V^n_{\cdot},$
$\varq{M^n}_{\cdot}^{{1}/{2}})$, such that the
sequence converges uniformly in time almost surely.
(iii) This point is obvious since, as observed at the beginning of
this section, the class ${\mathcal S}(|\eta_T|)$ is stable by a.s. convergence.
\end{pf}
\subsubsection*{Stability results for BSDE-like quadratic
semimartingales}
To obtain the convergence of the finite variation processes in total
variation, we need to make an additional assumption on the processes
$V^n$, as in the BSDE framework. We adopt the general setting where the
reference to the Brownian framework is relaxed as in El Karoui and Huang
\cite{NEKHuang}.
\begin{edefinition}[(BSDE-like quadratic semimartingale)]\label
{defgeneralquadraticBSDE}
Let us consider a continuous predictable increasing process $K_{\cdot}$,
a $d$-dimensional continuous orthogonal
martingale $N_{\cdot}=(N^i_{\cdot})_{i=1}^d$, with quadratic
variation $\varq
{N^i}_{\cdot}$ strongly dominated by $K_{\cdot}$ such that $d\varq
{N^i}_t=\gamma
^{i}_t \,dK_t$,
two increasing processes $\Lambda_{\cdot}$ and $C_{\cdot}$ also
dominated by $K_{\cdot}$
with $d\Lambda_t=l_t\,dK_t$ and $dC_t=c_t \,dK_t$, such that all the processes
$\gamma^{i}_{\cdot}, l_{\cdot},c_{\cdot}$ are bounded by $k$ (e.g., $K_{\cdot}=\sum
_{i=1}^d\varq{N^i}_{\cdot}+\Lambda_{\cdot}+C_{\cdot}$ and $k=1$).
The coefficient
$g(\cdot,y,z)$ is a $\mathcal{P}\otimes\mathcal{B}(\mathbb{R}\times
\mathbb
{R}^d)$ measurable process, often assumed to be continuous with respect
to $(y,z)$.\vadjust{\goodbreak}
A semimartingale $Y_{\cdot}$ with the decomposition $Y_{\cdot
}=Y_0-V_{\cdot}+M_{\cdot}$ is said
to have a quadratic coefficient $g$ if $dY_t=-dV_t+dM_t,$ with
\begin{equation}\label{eqgeneralquadraticBSDE}
\cases{
dV_t=g(t,Y_t, Z_t) \,dK_t,\vspace*{2pt}\cr
dM_t=Z_t\,dN_t+dM_t^\bot \qquad\forall i\,
d\varq{N^i,M^\bot}_t=0, \vspace*{2pt}\cr
\displaystyle|g(t,Y_t, Z_t)| \leq\frac{1}{\delta}l_t +|Y_t| c_t +\frac{\delta
}{2} \bigl|\sqrt\gamma_t Z_t\bigr|^2,\vspace*{2pt}\cr
\displaystyle\bigl|\sqrt\gamma_t Z_t\bigr|^2=\sum
_{i=1}^d\gamma^i_t |Z^i_t|^2.
}
\end{equation}
The local martingale $Z \star N$ is the orthogonal projection of the local
martingale~$M_{\cdot}$ onto the space of stochastic integrals
generated by
the local martingale $N_{\cdot}$, and $d\varq{Z \star N}_t\ll \,d\langle M \rangle
_t$, so
that $d|V|_t\ll\frac{1}{\delta}\,d\Lambda_t+|Y_t| \,dC_t +\delta
\,d\langle M \rangle
_t$ and $Y_{\cdot}$
is a quadratic semimartingale.
\end{edefinition}
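For orientation, in the purely Brownian framework (a special case used only as an illustration here), taking $N_{\cdot}=W_{\cdot}$ a $d$-dimensional Brownian motion, $K_t=t$, $\gamma^{i}\equiv1$ and $M^{\bot}\equiv0$, the system \eqref{eqgeneralquadraticBSDE} reduces to the familiar quadratic BSDE
\[
dY_t=-g(t,Y_t,Z_t)\,dt+Z_t\,dW_t,\qquad
|g(t,Y_t,Z_t)|\leq\frac{1}{\delta}l_t+c_t|Y_t|+\frac{\delta}{2}|Z_t|^{2},
\]
that is, the quadratic growth setting of the Brownian literature discussed in Section~\ref{sec5}.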
When considering sequences of BSDE-like quadratic semimartingales
under mild assumptions on the sequence of coefficients, the sequence
of finite
variation processes is converging in total variation in the
appropriate space, and the limit is still a BSDE-like quadratic
semimartingale.
The uniform convergence of the quadratic semimartingales
needed for these convergence results may seem very strong. We know
however from Theorem~\ref{thcharacterizationgeneralquasimart}
that all the processes obtained by a.s. convergence are continuous.
Thanks to Dini's theorem, the monotone convergence implies uniform
convergence for continuous functions on compact spaces.
Therefore, by a localization procedure, we can prove
the following very strong result.
\begin{theorem}\label{thmonotonecv}
Assume that $(Y^n_{\cdot})$ is a monotone sequence of
${\mathcal S_Q}(|\eta_T|,\Lambda, C)$-semimartingales
converging almost surely to a process $Y_{\cdot}$.
\begin{enumerate}[(ii)]
\item[(i)]Then, the limit process $Y_{\cdot}$ is a continuous ${\mathcal
S_Q}(|\eta
_T|,\Lambda, C)$-
semimartingale,
the convergence is locally uniform and all properties given in Theorem
\ref{thNicolas} hold (locally) true. In particular, there exists a
subsequence of martingales $M^n_{\cdot}=Z^n\star N_{\cdot}+M^{n,\bot
}_{\cdot}$ converging
in $\mathbb{H}^1$ and almost surely to $M_{\cdot}=Z\star N_{\cdot
}+M^{\bot}_{\cdot}$.
\item[(ii)]Suppose in addition that the processes $(Y^n_{\cdot})$ are BSDE-like
quadratic semimartingales, associated with a sequence of monotone
coefficients $g_n$ converging almost surely to $g$, having the
following properties:
\begin{enumerate}[(a)]
\item[(a)]The monotone sequence $g_n$ has uniform quadratic growth:
\[
|g_n(t,Y^n_t,Z^n_t)| \leq\frac{1}{\delta}l_t +|Y^n_t|
c_t +\frac{\delta}{2} \bigl|\sqrt\gamma_t Z^n_t\bigr|^2,\qquad d\mathbb P\times
dK_{\cdot} \mbox{ a.s.}
\]
\item[(b)]The sequence $g_n(\cdot,Y^n_{\cdot},Z^n_{\cdot})$ converges to
$ g(\cdot,Y_{\cdot},Z_{\cdot}), d\mathbb P\times dK_{\cdot} \mbox{ a.s}$.
\end{enumerate}
Then, the limit process $Y_{\cdot}$ is a BSDE-like ${\mathcal
S}_Q(|\eta
_T|,\Lambda, C)$-semimartingale with coefficient $g(t,y,z)=\lim g_n(t,y,z)$.
\end{enumerate}
\end{theorem}
\begin{pf}
Note that the characterization of ${\mathcal S}_Q(|\eta_T|,\Lambda,
C)$-semimartingales given in Theorem \ref
{thcharacterizationgeneralquasimart} passes to the limit, since all
processes $U^{\Lambda, C}_{\cdot}(e^{|Y^n|})$ are dominated by the
$(\mathcal{D})$-process $U^{\Lambda, C}_{\cdot}(\Phi(|\eta_T|))$. The
limit process
$Y$ is a continuous ${\mathcal S}_Q(|\eta_T|,\Lambda, C)$-semimartingale,
with decomposition $Y_{\cdot}=Y_0+M_{\cdot}-V_{\cdot}.$
\begin{longlist}[(ii)]
\item[(i)] The localization procedure is based on the family $(T_K)$ of
stopping times chosen so as to bound the u.i. martingale $N^0_t=\mathbb
E[\exp(\phi_0(|\eta_T|))|\mathcal{F}_t]$ by $K$. By the characterization of
u.i. continuous martingales (see, e.g., Az\'ema, Gundy and Yor
\cite{Azema-Gundy-Yor}), the sequence $T_K$ goes to $\infty$ and for
$K\geq K_\varepsilon$ large enough, $\mathbb P(T_{K}<T)\leq
\frac{\varepsilon}{K}$. Therefore, the sequence $(Y^n_{\cdot\wedge T_K})$
lives on a compact set where the monotone convergence to a
\textit{continuous process} is uniform.
The sequence of martingales $(M^n_{\cdot\wedge T_K})_n$ strongly
converges in the appropriate space to the martingale $M_{\cdot\wedge
T_K}$. The same property holds true for the sequence $V^n_{\cdot\wedge
T_K}$.
Thanks to the previous estimates, for all these processes $Y^n_{\cdot
},M^n_{\cdot},
V^n_{\cdot}$ the convergence is uniform on $[0,T\wedge T_K] $ in
probability.
\item[(ii)] Let $Z^{n,K}_t \equiv Z^n_t \mathbf{1}_{\{t\leq T_K\}}$ in such a way
that $(Z^{n}\star N)_{\cdot\wedge T_K}=Z^{n,K}\star N_{\cdot}$. Since the
sequence $(M^n_{\cdot\wedge T_K})_n$ strongly converges, the
sequences of orthogonal martingales $(M^{n,\bot}_{\cdot\wedge T_K})_n$
and $(Z^{n,K}\star N_{\cdot})_n$ also strongly converge in the
appropriate space, and at least in $\mathbb H^1$.
Therefore, we can extract a subsequence still denoted $Z_{\cdot}^{n,K}$
converging a.s. By assumption, for $t\leq T_K$ the sequence
$g^n(t,Y^n_t, Z^{n,K}_t)$ goes to $g(t,Y_t, Z_t)$ $dK_{\cdot}\otimes
d\mathbb
P$ a.s.
It now remains to show that the convergence is also true in expectation.
Observe that $\mathbb
E[\int_0^{T_K}|g_n(s,Y^{n}_s,Z^{n}_s)-g(s,Y_s,Z_s)|\mathbf{1}_{\{|Z^{n}_s|
\leq C\}}\,dK_s]$ goes to $0$, by
dominated convergence, since $\Phi_{\cdot}$ and $Y_{\cdot}^n$ are
bounded on $[0,T_K]$.
Moreover, since the sequence in $n$ of
the quadratic variations at time $T_K$, $\varq{Z^{n,K}\star N}_{T_K}$
is bounded in~$\mathbb
L^1$, for $s\leq T_K$, $|g_n(s,Y^{n}_s, Z^{n}_s)-g(s,Y_s,Z_s)| \leq
\Psi
_s+\frac{1}{2}
|Z^n_s|^2$,
with $\Psi_t \mathbf{1}_{\{t\leq T_K\}} \in\mathbb L^1(d\mathbb P\otimes
dK_s)$ and
$\mathbb P(|Z^{n}_s| \geq C)\leq\frac{1}{C^2}\mathbb E
(|Z^n_s|^2)$. Hence, $\mathbb
E[\int_0^{T_K}|g_n(s,Y^{n}_s, Z^{n}_s)-g(s,Y_s, Z_s)|\mathbf{1}_{\{
|Z^{n}_s| >
C\}}\,dK_s]$ goes to $0$ when $C$ goes to $\infty$, uniformly in $n$.
As a consequence, the process $V_{\cdot}$ in the decomposition of the quadratic
semimartingale~$Y_{\cdot}$ is given by $dV_t=g(t,Y_t, Z_t) \,dK_t$ on $[0,T_K]$
for any $K$.\quad\qed
\end{longlist}
\noqed\end{pf}
\begin{remark} Delbaen, Hu and Bao show in~\cite{Delbaen-Hu-Bao} that
allowing the coefficient to have superquadratic growth
leads to ill-posed problems. In particular, monotone stability does
not hold any more. For classical BSDEs, when the coefficient simply
depends on $z$, superquadratic growth means that $\limsup
g(z)/|z|^2=\infty$.
\end{remark}
\section{Existence result for quadratic BSDEs}\label{sec5}
The question of existence of bounded solutions for the classical
quadratic BSDEs in Brownian framework has been solved by Kobylanski
\cite{Kobylanski00}, using an exponential transformation so as to
come back to the standard framework of a coefficient with linear
growth. A detailed review of the literature including the\vadjust{\goodbreak}
comparison theorem and different applications may be found in El
Karoui, Hamad\`ene and Matoussi~\cite{Elkaroui-Hamadene-Matoussi}.
Most of the recent papers focusing on financial applications of
quadratic BSDEs consider the situation where the martingale $M_{\cdot
}$ is
BMO
(see, e.g., the recent papers by Hu, Imkeller and Muller \cite
{Hu-Imkeller-Muller}, Ankirchner, Imkeller and Reis
\cite{Ankirchner-Imkeller-Reis},
\cite{Morlais2}, or the Ph.D. thesis of dos Reis~\cite{DosReis}). From
Theorem~\ref{thevarqestimates}, such a framework is equivalent to
considering bounded solutions.
Briand and Hu~\cite{Briand-Hu} were the first to extend
the previous results to unbounded solutions.
In all these papers, as in Kobylanski~\cite{Kobylanski00}, the main
difficulty is however to prove the strong convergence of the martingale
part.
The stability result we have
obtained in the previous section opens a new possible direction to
tackle this question. The idea is to approximate monotonically the
coefficient itself by coefficients with a linear and quadratic
growth, for which there are some results on the existence of
solution but also for which it is possible to take the limit thanks
to the stability Theorem~\ref{thmonotonecv}.
In our approach, we do not need this BMO framework
and have a stability result prevailing in a wider context,
moving away from the bounded case to the case where the terminal
condition has an exponential moment. Indeed, having bounded
solutions is naturally replaced by belonging to the class ${\mathcal
S_Q}(|\eta_T|,\Lambda, C)$ as in the previous section, which reduces
to an exponential moment condition for $|\eta_T|$, when
$\Lambda\mbox{ and } C\equiv0.$ Recall that this last condition is
equivalent to having the absolute value of the solution in
the class ($\mathcal D_{\exp}$) when the coefficient does not depend on
$y$ [and $g(t,0,0) \equiv0$].
We start this section by looking more closely at the interrelationship between
quadratic BSDEs and quadratic semimartingales,
when the quadratic structure condition is saturated.
\subsection{\texorpdfstring{A canonical example: $q_\delta$-BSDE and entropic process}
{A canonical example: q delta-BSDE and entropic process}}\label{parq-BSDEs}
We are focusing on simplest quadratic BSDEs when the structure
condition is saturated and the coefficient is simply denoted by
$q_{\delta}$. This framework has a particular importance in finance as
it corresponds to that of indifference pricing in incomplete markets
when using an exponential utility criterion (in general, in the bounded
case) as in Rouge and El Karoui~\cite{Rouge-ElKaroui}, and many other
papers (see, e.g., Mania and Schweizer~\cite{Mania-Schweizer})
or the recent book on indifference pricing edited by Carmona \cite
{Carmona}.
In this simple framework, it is interesting to consider the various
possible points of view. In particular, note that the two following
problems coincide in a Brownian framework:
\begin{longlist}[(ii)]
\item[(i)] First, finding a quadratic $q_\delta$-semimartingale
$Y_t=Y_0+M_t-\frac{\delta}{2}\langle M \rangle_t$ with terminal condition
$Y_T=\xi_T$.
We refer to the solution as a GBSDE($q_\delta, \xi_T)$-solution, where
$G$ stands for ``generalized.'' The process $-Y$ is a GBSDE-solution
associated with ($\lu q_\delta, -\xi_T)$.
\item[(ii)] In the second case, corresponding to the BSDE general framework
(Definition~\ref{defgeneralquadraticBSDE}), the problem is to find\vadjust{\goodbreak}
$(Y_{\cdot}, M_{\cdot} \equiv Z\ast N_{\cdot}+M^\bot_{\cdot})$,
such that $dY_t = -\frac
{\delta}{2}|\sqrt{\gamma_t}Z_t|^2 \,dK_t -Z_t \,dN_t-dM^\bot_t$ with
terminal condition $Y_T=\xi_T$. The similar equation with the opposite
process will be also considered. In the following, we refer to this
situation as $q$-BSDE.
\end{longlist}
Based on the previous results, we will consider these two questions in
parallel in the paragraphs below.
\subsubsection*{Summary of previous results on GBSDEs}
The entropic process $\rho_{t}(\xi_T)$ defined earlier in
equation (\ref{eqentropicprocess}) as
$\ln\mathbb{E}[\exp(\xi_T)|\mathcal{F}_t] \equiv\rho_{t}(\xi_T)$
appears naturally when studying such $(q, \mbox{ or } \lu q)$-GBSDEs.
Indeed, as presented in
the following proposition, if the terminal condition $\xi_T\in
\mathbb{L}_{\exp}^1$, then $\rho_{\cdot}(\xi_T)$ is a
$(\mathcal{D}_{\exp})$-solution of $q$-GBSDE. The stronger assumption on the
terminal condition $|\xi_T|\in\mathbb{L}_{\exp}^1$ is used for the
estimates of the quadratic variation or for some stability result.
\begin{eproposition}\label{propminimalentropic}
\begin{enumerate}[(iii)]
\item[(i)] Assume that $\xi_T \in\mathbb{L}_{\exp}^1$. Then
the entropic process $\rho_{\cdot}(\xi_T)$ is the unique $(\mathcal{D}
_{\exp
})$-solution of the quadratic $\operatorname{GBSDE}(q,\xi_T)$, that is, there
exists a martingale $M^\rho_{\cdot}\in{\mathcal U}_{\exp}$ such
that
\[
d\rho_t(\xi_T)=-\tfrac{1}{2}\,d\varq{M^\rho}_t +dM^\rho_t,\qquad \rho
_T(\xi
_T)=\xi_T.
\]
Moreover, $\rho_{\cdot}(\xi_T)$ is minimal in the class of solutions
$Y_{\cdot}$:
$\rho_{\cdot}(\xi_T)\leq Y_{\cdot}$.
\item[(ii)]Assume that $-\xi_T \in\mathbb{L}_{\exp}^1$. The negative
entropic process $\underline{\rho}_{\cdot}(\xi_T)$ is
{a} solution of the
$\operatorname{GBSDE}(\underline{q},\xi_T)$, i.e., there exists a martingale
$\underline{M}^\rho$ such that
\[
d\underline{\rho}_t(\xi_T)=\tfrac{1}{2}\,d\varq{\underline{M}^{\rho}}_t
+d\underline
{M}^{\rho}_t,\qquad \underline{\rho}_T(\xi_T)=\xi_T,
\]
but in general $\underline{\rho}_{\cdot}(\xi_T)$ is not a $(\mathcal{D}_{\exp})$-solution.
\item[(iii)]When $|\xi_T| \in\mathbb{L}_{\exp}^1$, then:
\begin{enumerate}[(b)]
\item[(a)] $\underline{\rho}_t(\xi_T)$ is the maximal solution of the $\operatorname
{GBSDE}(\underline{q}
,\xi_T)$.
\item[(b)] The martingales $M^\rho$ and $\underline{M}^{\rho}$ are in
$\mathbb
{H}^2$ and if $\xi_T$ is
bounded, they are BMO-martingales.
\item[(c)] If in addition $|\xi_T|+ \ln(|\xi_T|)\in\mathbb{L}_{\exp
}^1$, the
r.v. $\max|\rho_{0,T}(\xi_T)|$ and $\max|\underline{\rho}_{0,T}(\xi_T)|$
belong to
$\mathbb{L}_{\exp}^1$. {Moreover, the following variational
representation holds true:}
\begin{equation}\label{eqdualdynamicentropy}
\rho_t(\xi_T)=\operatorname{ess}\sup_{M^{\mathbb Q}}\bigl\{\mathbb E_{\mathbb{Q}}
\bigl(\xi_T-\tfrac{1}{2}\varq{M^{\mathbb Q}}_{t,T}\bigr)|\mathcal{F}_{t} | \mathbb
E_{\mathbb{Q}}(\langle M \rangle^{\mathbb Q}_{t,T})<+\infty\bigr\}.
\end{equation}
\end{enumerate}
\end{enumerate}
\end{eproposition}
\begin{pf}
\textup{(i)} From Section~\ref{subsectioncharactexpoinequality} and as
$\rho_{\cdot}(\xi_T)=\rho_0(\xi_T)+ r_{\cdot}(M)$, $\rho_{\cdot}(\xi
_T)$ is the unique
$(\mathcal{D}_{\exp})$-solution for the GBSDE$(q,\xi_T)$, and the smallest in
the class of the $q$-semimartingales with the same terminal value.\vspace*{-6pt}
\begin{longlist}[(iii)]
\item[(ii)] Since $-\xi_T \in\mathbb{L}_{\exp}^1$, the process
$\rho_{\cdot}(-\xi_T)$ is well defined\vspace*{1pt} in $(\mathcal{D}_{\exp})$ and $-\rho
_{\cdot}(-\xi_T)$
is solution of the $\lu q$-GBSDE, but not in general in the
class $(\mathcal{D}_{\exp})$.\vadjust{\goodbreak}
\item[(iii)] Assume that both variables $\xi_T$ and $-\xi_T$ are in
$\mathbb{L}^1_{\exp}$. Using the convexity of $\rho$, it follows
that $0=\rho_{\cdot}(0)\leq\frac{1}{2}(\rho_{\cdot}(\xi_T)-\underline{\rho}
_{\cdot}(\xi_T))$. Then,
$\rho_{\cdot}(\xi_T)\in(\mathcal{D}_{\exp})$ implies $\underline{\rho}_{\cdot}(\xi
_T)\in(\mathcal{D}_{\exp})$.
The comparison with the other solutions is a simple consequence of
the fact that $-Y$ is a solution of GBSDE$(q,-\xi_T)$, and therefore
bigger than $\rho_{\cdot}(-\xi_T)=-\underline{\rho}_{\cdot}(\xi_T)$.
The rest of (iii) is a straightforward consequence of Theorem
\ref{thevarqestimates}.\quad\qed
\end{longlist}
\noqed\end{pf}
For lack of space, we will not further develop the variational point of
view, but this approach can be extended to $q$-BSDEs, using in
particular approximations based on the solutions of convex BSDEs with
linear growth (see, e.g., El Karoui, Hamad\`ene and Matoussi
\cite{Elkaroui-Hamadene-Matoussi}).
\subsubsection*{\texorpdfstring{$(q \mbox{ or } \underline{q})$-BSDEs}{$(q, or \underline{q})$-BSDEs}} The
question of the existence of solutions of the $(q \mbox{ or }
\underline{q})$-BSDEs is more delicate to tackle, and the solutions do not admit an explicit
representation. These difficulties also appear in the Brownian
framework when the vector martingale~$N$ is defined from a limited
number of components of the generating Brownian motion. Different
methods can be used, the first one is based on linear growth
approximating solutions, whilst the second one uses the convexity of
the coefficient and represents solutions as value function of some
optimization problems. We now develop the first point of view.
In this case, the approximation is based on the coefficients $q_n(z)
\equiv\frac{1}{2}\bigl(|z|^{2}-((|z|-n)^{+})^{2}\bigr)=\frac{1}{2}|z|^{2}\mathbf{1}_{\{
|z|\leq n\}}+(n|z|-\frac{1}{2}n^2)\mathbf{1}_{\{|z|> n\}}$ with linear and
quadratic growth, increasing to $q(z)$ when $n$ goes to infinity. For
$\xi_T\in{\mathbb L}^2$, by the classical theory, the $\operatorname{BSDE}(q_n,
\xi_T)$ has a unique solution in ${\mathcal S}^2$, bounded if $\xi_T$
is bounded.
\begin{eproposition}\label{propdualentropy}
Let $|\eta_T| \in\mathbb L^1_{\exp}$, and let $(\xi^n_T)$ be an increasing sequence of
random variables, bounded by $|\eta_T|$ and converging a.s. to $\xi_T$.
\begin{longlist}[(iii)]
\item[(i)]Denote by $(Y^n_{\cdot}, Z^n_{\cdot}, M_{\cdot}^{n,\bot}) \in
\mathbb H^2(\mathbb
R^+)\otimes\mathbb H^2(\mathbb R^n)$ the unique solution of the $\operatorname{BSDE}(q_n, \xi_T)$. The process $Y^n_{\cdot}$ is a $\mathcal Q$-semimartingale
satisfying the entropic inequality $|Y^n_{\cdot}|\leq\rho_{\cdot
}(|\eta_T|)$.
\item[(ii)]The sequence $(q^n, Y^n_{\cdot}, Z^n_{\cdot}, M_{\cdot
}^{n,\bot}) $ satisfies
the hypothesis of Theorem~\ref{thmonotonecv} and strongly converges to
$(Y_{\cdot}, Z_{\cdot}, M_{\cdot}^{\bot}) $, minimal solution of\break
$\operatorname{BSDE}(q,\xi_T)$ such
that $|Y_{\cdot}|\leq\rho_{\cdot}(|\eta_T|)$, with the variational
representation
\begin{equation}\label{eqdualdynamicq-entropy}
Y_t(\xi_T)=\operatorname{ess}\sup_{\nu}\biggl\{\mathbb E_{\mathbb{Q^\nu}}\biggl(\xi
_T-\frac{1}{2}\int_t^T\bigl|\sqrt{\gamma_s }\nu_s\bigr|^2\,dK_s\big|\mathcal{F}_{t} \biggr)\biggr\},
\end{equation}
where $\mathbb{Q^\nu}$ is the probability with density $\mathcal
E_{\cdot}(\nu
\star N)$ with finite entropy\break
$\mathbb E_{\mathbb{Q^\nu}}( \int_0^T|\sqrt{\gamma_t }\nu
_t|^2\,dK_t)<+\infty$.
\item[(iii)] Uniqueness holds in the class of solutions $Y$ such that
$|Y_{\cdot}|\leq\rho_{\cdot}(|\xi_T|)$, when $|\xi_T|$ is bounded
or, more generally, when $\rho
_\delta(|\xi_T|)$ is well defined for any $\delta>0$.
\end{longlist}
\end{eproposition}
\begin{pf} (i) It is clear that $Y^n_{\cdot}$ is a $\mathcal
Q$-semimartingale, bounded if $\xi_T$ is bounded. Then $|Y^n_{\cdot}| $
belongs to the class $({\mathcal D}_{\exp})$ and satisfies the entropic
inequality $|Y^n_{\cdot}|\leq\rho_{\cdot}(|\xi_T|)$. Since both processes
$Y^n_{\cdot}(\xi_T)$ and $\rho_{\cdot}(\xi_T)$ are monotone with
respect to their
terminal condition (by approximating~$\xi_T$ by bounded random
variables), the entropic inequality holds at the limit under the
assumption $|\xi_T|\in\mathbb L^1_{\exp}$.\vspace*{-6pt}
\begin{longlist}[(iii)]
\item[(ii)] The first result is a direct consequence of the stability result
given in Theorem~\ref{thmonotonecv}.
The variational representation for $Y^n$ is as in \eqref
{eqdualdynamicq-entropy} with the restriction that $\nu$ is bounded by
$n$. That is a standard result on convex BSDEs with uniformly linear
growth (see, e.g., El Karoui, Peng and Quenez~\cite
{ElKaroui-Peng-Quenez} or Theorem 8.7
in El Karoui, Hamad\`{e}ne and Matoussi \cite
{Elkaroui-Hamadene-Matoussi}). Thanks to entropy result in Section
\ref{changeprobaentropy}, the representation \eqref
{eqdualdynamicq-entropy} pass to the limit, since $\xi_T$ is $\mathbb
Q^\nu$-integrable.
\item[(iii)] Let $Y$ be a solution satisfying $|Y_{\cdot}|\leq\rho_{\cdot
}(|\xi_T|)$. We
first assume that $|\xi_T|$ is bounded, so that all solutions are
bounded and the associated martingales $M_{\cdot}$, $M^{\bot}_{\cdot
}$ and
$M_{\cdot}-M^{\bot}_{\cdot}$ are BMO-martingales.
Denote by $Y^i_{\cdot}$ and $Y^j_{\cdot}$ two solutions satisfying
the entropic
inequalities with two bounded terminal conditions. Using the same
notation as in the proof of Theorem~\ref{thNicolas}, we observe that
the difference $Y^{i,j}_{\cdot} \equiv Y^i_{\cdot}-Y^j_{\cdot}$
verifies a linear BSDE,
with linear growth condition with respect to another probability measure,
\begin{eqnarray*}
dY^{i,j}_t&=&-\tfrac{1}{2}\bigl(\bigl|\sqrt{\gamma_t }Z^i_t\bigr|^2-\bigl|\sqrt{\gamma_t
}Z^j_t\bigr|^2\bigr)\,dK_t+dM^{i,j}_t\\
&=&-\tfrac{1}{2}\bigl(\sqrt{\gamma_t }Z^{i,j}_t.\sqrt{\gamma_t
}(Z^i_t+Z^j_t)\bigr)\,dK_t+Z^{i,j}_t.\,dN_t+dM^{i,j,\bot}_t\\
&=&Z^{i,j}_t \bigl(dN_t-\tfrac{1}{2}\sqrt{\gamma_t
}(Z^i_t+Z^j_t)\,dK_t\bigr)+dM^{i,j,\bot}_t.
\end{eqnarray*}
Since $Y^i_{\cdot}$ and $Y^j_{\cdot}$ are bounded solutions, by
Theorem \ref
{thevarqestimates}, the martingales $M^i_{\cdot}$, and $M^j_{\cdot}$ are
BMO-martingales, implying that the quadratic variation of $\frac{1}{2}
(Z^i_{\cdot}+Z^j_{\cdot})\star N_{\cdot}$ is also conditionally
bounded, and then $\frac{1}{2}
(Z^i_{\cdot}+Z^j_{\cdot})\star N_{\cdot}$ is a BMO-martingale. By the
Girsanov theorem,
$\mathcal E(\frac{1}{2}(Z^i_{\cdot}+Z^j_{\cdot})\star N)_{\cdot}$
is a
u.i. exponential
martingale defining a new probability measure $\mathbb Q^{(i+j)}$ such
that $dN^{(i+j)}_t\equiv dN_t-\frac{1}{2}\sqrt{\gamma_t}\,(Z^i_t+Z^j_t)\,dK_t$
is a
$\mathbb Q^{(i+j)}$-local martingale with the same quadratic variation
as $N_{\cdot}$. Moreover, $M^{i,j,\bot}_{\cdot}$ is still a $\mathbb
Q^{(i+j)}$-local martingale, orthogonal to $N^{(i+j)}_{\cdot}.$
Then, $Y^{i,j}_{\cdot} $ is a bounded $\mathbb Q^{(i+j)}$ local martingale,
and so a true martingale and $Y^{i,j}_{\cdot} =\mathbb E_{\mathbb
{Q}^{(i+j)}}(Y^{i,j}_T|\mathcal{F}_{\cdot})$. The uniqueness and comparison
theorems are
easily deduced from this property.
In the general case, the difficulty is to show directly that $\mathcal
E(\frac{1}{2}(Z^i_{\cdot}+\break Z^j_{\cdot})\star N)_{\cdot}$ is a u.i.
martingale, given that
$\mathcal E(M^i)_{\cdot}$ and $\mathcal E(M^j)_{\cdot}$ are uniformly
integrable.
Under the assumptions of exponential moments of any order, uniqueness
has been proved first by Briand and Hu~\cite{Briand-Huconvex} and Mocha
and Westray~\cite{Mocha-Westray}.\quad\qed
\end{longlist}
\noqed\end{pf}
\subsection{\texorpdfstring{Existence result for BSDEs in the class ${\mathcal S_Q}(|\eta_T|,\Lambda, C)$}
{Existence result for BSDEs in the class S Q(|eta T|, Lambda, C)}}
We are now interested in quadratic BSDEs satisfying the general
structure condition $| g(\cdot,t,y,z)|\leq\kappa(t,y,z) \equiv
|l_{t}|+c_t|y|+\frac{1}{2}|z|^{2}, d\mathbb{P}\otimes dK_{\cdot
}\ \mathrm{a.s.}$, and are looking for solution in the class ${\mathcal
S_Q}(|\eta
_T|,\Lambda, C)$ only. As before, the method
relies on a regularization of the
quadratic coefficient itself through inf-convolution, so as to
transform it into a coefficient with \textit{both} linear and quadratic
growth. This double structure of the transformed
coefficient leads to results both in terms of existence and
estimation.
The previous stability Theorem~\ref{thNicolas} can then
be applied to obtain the existence of a solution, after having
proved that
the approximate solutions are also ${\mathcal S_Q}(|\eta_T|,\Lambda,
C)$-semimartingales.
\subsubsection*{Regularization of the coefficient through inf-convolution}
The proof of this fundamental result is based on the following lemma
involving classical regularization by inf-convolution techniques
introduced by Lepeltier and San Martin~\cite{LepeltierSanMartin97}
in a BSDEs framework. Let us first observe that the appropriate
regularization when dealing with $\underline{q}(z)=-\frac{1}{2}|z|^2$ is a
sup-convolution since $\underline{q}(z)$ is concave. To overcome this difficulty,
we proceed in two steps, by first assuming that $g$ is bounded from
below by some basic function with both a linear and quadratic growth
$\lu{\kappa}_p$, where $-\lu{\kappa}_p(t,y,z) =\kappa
_p(t,y,z)\equiv
l_t+c_t |y|+ q_p(z)$ with $q_p(z)=\frac{1}{2}|z|^{2}\mathbf{1}_{\{|z|\leq
p\}
}+(p|z|-\frac{1}{2}p^2)\mathbf{1}_{\{|z|> p\}}$. When $p=1$, $\kappa_1(t,y,z)
\equiv\kappa(t,y,z)=l_t+c_t |y|+ q(z)$ with $q(z)=\frac{1}{2}|z|^2$.
\begin{elemma}\label{leminfconvol}
Let $g:{\mathbb R}\times{\mathbb R}^n\rightarrow{\mathbb
R}$ be a continuous function with linear growth in y, and quadratic
growth in $z$, bounded from below by some function $\lu{\kappa
}_p(t,y,z)=-(l_t+c_t |y|+ q_p(z))$ and from above by $\kappa(t,y,z)$:
\begin{equation}
\label{eqgconstraint}
\lu{\kappa}_p(t,y,z) \leq g(t,y,z) \leq\kappa(t,y,z),
\end{equation}
where the processes $c_{\cdot}$ and $l_{\cdot}$ are bounded by some universal
constant $\bar{C}$.
The regularizing functions are the convex functions with
linear growth $b_n(u,w)\equiv n |u|+n |w|$. The sequences $\lu{\kappa
}_{n,p}$, $\kappa_n$ and $g_n$ are defined, respectively, as the
inf-convolution
of the functions $\lu{\kappa}_p$, $\kappa$ and $g$ with the function~$b_n$,
\begin{eqnarray*}
\lu{\kappa}_{n,p}(t,y,z)&=&\lu{\kappa}_p \,\square\, b_n(t,y,z),\qquad
\kappa_n(t,y,z)=\kappa\,\square\, b_n(t,y,z),\\
g_n(t,y,z)&=&g \,\square\, b_n(t,y,z)=\inf_{u,w }
\bigl(g(t,u,w)+n|y-u|+n|z-w|\bigr)
\end{eqnarray*}
have the following properties, for $n\geq\sup(\bar{C},p)$:
\begin{longlist}[(iii)]
\item[(i)] $ \kappa_n(t,y,z)\,{=}\,l_t+c_t|y|+q_n(z)\leq l_t+c_t|y|+ \frac{1}{2}
|z|^2, \lu{\kappa}_{n,p}(t,y,z)\,{=}\, \lu{\kappa}_{p}(t,y,z)$;
\item[(ii)] $|g_n(t,y,z)|\leq l_t+c_t|y|+\sup(q_p(z), q_n(z))=\kappa
_n(t,y,z)\leq l_t+c_t |y|+ \frac{1}{2}|z|^{2} $;
\item[(iii)]the sequences $g_n$ and $\kappa_n$ are increasing;
\item[(iv)] the Lipschitz constant of functions $g_n$ is $n$;
\item[(v)] if $(y_n,z_n)\rightarrow(y,z)$, then $g_n(t,y_n, z_n)\rightarrow
g(t,y,z)$.\vadjust{\goodbreak}
\end{longlist}
\end{elemma}
In this lemma, the various functions are regularized through the
Lipschitzian regularization, whilst the function $\kappa_n$ is the
Moreau--Yosida regularization of $b_n$ (see Hiriart-Urruty and Lemar\'
{e}chal~\cite{Hiriart-Urruty}, Chapter E, for more details).
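For instance, for the $z$-part of the coefficients the inf-convolution can be computed in closed form (an elementary computation recorded here for convenience): with $q(z)=\frac{1}{2}|z|^{2}$,
\[
\bigl(q\,\square\, n|\cdot|\bigr)(z)=\inf_{w}\Bigl\{\tfrac{1}{2}|w|^{2}+n|z-w|\Bigr\}
=\tfrac{1}{2}|z|^{2}\mathbf{1}_{\{|z|\leq n\}}+\bigl(n|z|-\tfrac{1}{2}n^{2}\bigr)\mathbf{1}_{\{|z|>n\}}=q_n(z),
\]
the infimum being attained at $w=z$ if $|z|\leq n$ and at $w=nz/|z|$ otherwise; by the commutativity of the inf-convolution, $q_n$ is at the same time the Moreau--Yosida envelope (with parameter one) of $n|\cdot|$, in accordance with property (i) of Lemma~\ref{leminfconvol} and the remark above.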
\subsubsection*{Existence result}
The important point now is to prove that the solutions to the
$\operatorname{BSDEs}(g_n, \xi_T)$ which are Lipschitz with linear growth, are
in the class ${\mathcal S_Q}(|\eta_T|,\Lambda, C)$ when $\mathbb{E}[\exp
(\bar
X^{\Lambda,C}_T(|\eta_T|))]<+\infty$.
\begin{elemma}\label{lemlinearquadraticgrowth}
Let $g$ and $g_n$, $\kappa$ and $\kappa_n$ be as in Lemma \ref
{leminfconvol}. The coefficients $g_n$ and $\kappa_n$ are standard
uniformly Lipschitz coefficients.
For any $|\xi_T|\leq|\eta_T|$, let $(Y^n_{\cdot},Z^n_{\cdot
},M^{n,\bot}_{\cdot})$ and
$(U^n_{\cdot},V^n_{\cdot},W^{n,\bot}_{\cdot})$ be the unique
solution of the $\operatorname{BSDE}(g_n,\xi
_T)$ and $\operatorname{BSDE}(\kappa_n,|\eta_T|)$ in the appropriate space.
\begin{longlist}[(ii)]
\item[(i)] The sequences $(Y^n_{\cdot})$ and $(U^n_{\cdot})$ are
increasing, and satisfy
the entropic inequality, $|Y^n_{\cdot}|\leq U^n_{\cdot} \leq\rho
_{\cdot}(e^{C_{\cdot,T} }
|\eta_T|+\int_{\cdot}^T
e^{C_{\cdot,t}}\,d\Lambda_t ), a.s.$
Both sequences $(Y^n_{\cdot})$ and $(U^n_{\cdot})$
are ${\mathcal S_Q}(|\eta_T|,\Lambda, C)$-quadratic semimartingales.
\item[(ii)] The sequence $(Y^n_{\cdot},Z^n_{\cdot},M^{n,\bot}_{\cdot})$
converges uniformly in
probability to a minimal solution $(Y_{\cdot},Z_{\cdot}, M^{\bot
}_{\cdot})$ of the
$\operatorname{BSDE}(g, \xi_T)$.
\end{longlist}
\end{elemma}
\begin{pf} The proof relies on classical properties of BSDE solutions
associated with standard coefficients (with linear growth), in an
$\mathbb H^2$-space. In particular, existence, uniqueness and
comparison hold true in this case, which implies (i).
\begin{longlist}[(ii)]
\item[(i)] First, assume that ${\bar X}^{\Lambda,C}_T$ is bounded. The
solutions $U^n$ are bounded and the entropic inequality is valid.
Since these inequalities are stable when taking increasing limits with
respect to $\Lambda,C, \eta$, the same inequalities still hold true
under the assumption ${\bar X}^{\Lambda,C}_T(\eta_T)\in\mathbb
L^1_{\exp}$.
Then, by construction, $(Y^n_{\cdot})$ and $(U^n_{\cdot})$ are
${\mathcal S_Q}(|\eta_T|,\Lambda, C)$-quadratic semimartingales.
\item[(ii)] Finally, using Theorem~\ref{thNicolas}, we
obtain the convergence of this sequence to a solution of the
$\operatorname{BSDE}(g, \xi_T)$ in the space ${\mathcal
S_Q}(|\eta_T|,\Lambda,C)$.\quad\qed
\end{longlist}
\noqed\end{pf}
It remains to remove the assumption that the coefficient admits a
lower bound with linear--quadratic growth.
Consider a coefficient $g$ with decomposition $g=g^+-g^-$, where both
nonnegative functions $g^+$ and $g^-$ have the same quadratic structure,
and let $g_p \equiv g^+-g^-\,\square\, b_p$. Then $ g_p$ satisfies Condition
(\ref{eqgconstraint}), and the $\operatorname{BSDE}(g_p, \xi_T)$ admits a minimal
solution; the sequence of solutions $Y^p$ is decreasing, and belongs to
the space ${\mathcal S_Q}(|\eta_T|,\Lambda,C)$. Once again, we use the
stability theorem to conclude that the sequence $Y^p$ converges to a
solution of the $\operatorname{BSDE}(g,\xi_T)$. We summarize the general form of our
results in the following theorem.
\begin{theorem}\label{lemgeneralexistenceresult} Let us consider a
general $\operatorname{BSDE}(g,\xi_T)$,
where $\xi_T$ is an $\mathcal{F}_T$-measurable random variable such that
$\mathbb{E}[\exp(e^{C_{T}} | \delta\xi_T|+ \int_0^Te^{C_s}\,d\Lambda
_s)]<+\infty$.
The coefficient $g(t,y,z)$ satisfies
the quadratic structure condition (\ref{eqquadraticgrowth}), $|
g(\cdot,t,y,z)|\leq\frac{1}{\delta} |l_{t}|+c_t|y|+\frac{\delta
}{2}|z|^{2}$.
Then, there exists at least one solution $(Y,Z,M^{\bot})$ in ${\mathcal
S_Q}(|\xi_T|,\Lambda, C,\delta)$ of the $\operatorname{BSDE}(g,\xi_T)$.
\end{theorem}
\begin{remark} When both $\Lambda, C \equiv0$, as in the
framework of cash additive risk measures, the theorem simply states: if
$| g(\cdot,t,y,z)|\leq\frac{\delta}{2}|z|^{2}$, and $\mathbb E[\exp
(\delta|\xi_T|)]<+\infty$, then there exists at least one solution in the
class $({\mathcal D}_{\exp})$.
\end{remark}
\subsubsection*{Comment on the uniqueness of the solution}
The question of the uniqueness of the solution to a general
quadratic BSDE is not trivial. In the standard framework where
the terminal condition is bounded, Kobylanski~\cite{Kobylanski00}
obtains the uniqueness of the solution under some Lipschitz style
assumptions. Recently, Tevzadze~\cite{Tevzadze} gives a direct proof
of uniqueness still in the bounded case. In the case of an unbounded
terminal condition, Briand and Hu~\cite{Briand-Huconvex} work under
the additional assumption that the coefficient $g$ is convex with
respect to the variable $z$. This allows them to derive the
comparison theorem, which is needed to obtain the uniqueness. Their
methodology can be adapted and generalized to our framework without
any particular difficulty. In a very recent paper
\cite{Mocha-Westray}, Mocha and Westray have considered general
quadratic BSDEs under some stronger assumptions of exponential
moment of order $p>1$ and boundedness of the increasing processes.
They obtain some interesting results for the uniqueness of the
solution. The convex case has been also studied in Delbaen, Hu and Richou
\cite{Delbaen-Hu-Richou}
under weaker assumptions.
In this paper, we study the stability and convergence of some general
quadratic semimartingales. The general stability result (Theorem \ref
{thNicolas}), including the strong convergence of the martingale parts
in various spaces ranging from $\mathbb{H}^1$ to BMO, is derived under
some mild integrability condition on the exponential of the terminal
value of the semimartingale. This strong convergence result is then
used to prove the existence of solutions of general quadratic BSDEs
under minimal exponential integrability assumptions, relying on a
regularization, with both linear and quadratic growth, of the quadratic
coefficient itself. Contrary to most of the existing literature,
it does not involve the seminal result of Kobylanski \cite
{Kobylanski00} on bounded solutions. As previously mentioned, this
approach also has other potential applications, such as the numerical
simulation of quadratic BSDEs, the study of risk measures and their
dual representations, and the solution of associated HJB-type equations. The
various results obtained in the paper can be extended to jump processes.
\section*{Acknowledgments}
Both authors would like to thank Mingyu Xu and Anis Matoussi for their
helpful comments and discussions at various stages of the paper. The
paper has also greatly benefited from a careful reading and helpful
suggestions from two anonymous referees. Finally, the authors would
like to give some special thanks to Arthur who waited patiently before arriving.
\printaddresses
\end{document} |
\begin{document}
\title[Automorphic Green functions on Hilbert modular surfaces]{Automorphic Green functions\\on Hilbert modular surfaces}
\author{Johannes J. Buck}
\address{Fachbereich Mathematik, Technische Universit\"at Darmstadt, Schlossgartenstrasse 7,
D--64289 Darmstadt, Germany}
\email{[email protected]}
\thanks{The author was supported by the DFG Collaborative Research Centre TRR 326 Geometry and Arithmetic of Uniformized Structures, project number 444845124.}
\date{\today}
\begin{abstract}
In this paper, we generalize results of Bruinier on automorphic Green functions on Hilbert modular surfaces to arbitrary ideals. For instance, we compute the Fourier expansion of the unregularized Green functions, use it to regularize them, obtain the Fourier expansion of the regularized Green functions and evaluate integrals of unregularized and regularized Green functions. Furthermore, we investigate their growth behavior at the cusps in the Hirzebruch compactification by computing the precise vanishing orders along the exceptional divisors. This makes the arithmetic Hirzebruch--Zagier theorem from Bruinier, Burgos Gil and Kühn more explicit. To this end, we generalize the theory of local Borcherds products. Lastly, we investigate a new decomposition of the Green functions into smooth functions and compute and estimate the Fourier coefficients of those smooth functions.
Finally, this is employed to prove the well-definedness and almost everywhere convergence of the generating series of the Green functions and the modularity of its integral.
\end{abstract}
\maketitle
\setcounter{tocdepth}{1}
\tableofcontents
\setcounter{tocdepth}{2}
\section{Introduction}
In 1976, Hirzebruch and Zagier showed that the intersection numbers of Hirzebruch--Zagier divisors on Hilbert modular surfaces can be interpreted as the Fourier coefficients of holomorphic elliptic modular forms of weight~$2$ (cf.~\cite{hirzebruch1976intersection}).
This result can essentially be reformulated by stating that the generating series
\[
A(\tau) = c_1(\mathcal M_{-1/2}(\mathbb{C})) + \sum_{m=1}^\infty Z(m) q^m \in \mathbb{Q}[[q]] \otimes_\mathbb{Q} \operatorname{CH}^1(\overline{X})_\mathbb{Q}
\]
is a holomorphic modular form of weight~$2$, level $D$ and nebentypus $\chi_D$ with values in $\operatorname{CH}^1(\overline{X})_\mathbb{Q}$.
Here, by $D$ we denote the discriminant of the underlying real quadratic number field $K$, by $c_1(\mathcal M_{k}(\mathbb{C}))$ the first Chern class of the line bundle of modular forms of weight~$k$, by $\overline{X}$ the Hirzebruch compactification of the Hilbert modular surface $X$ associated to $K$ and by $Z(m)$ certain extensions of the Hirzebruch--Zagier divisors $T(m)$ of discriminant $m$ on $X$ to the Hirzebruch compactification $\overline{X}$.
Kudla and Millson aimed at a generalization of this result and studied special cycles for the orthogonal group $\OGroup(p,q)$ and the unitary group $\UGroup(p,q)$ in great generality by means of the Weil representation (cf.~\cite{kudlamillson}).
In the Kudla program one is interested in having arithmetic analogues to the Hirzebruch--Zagier theorem (cf.~\cite{kudla2002derivatives} and \cite{kudla2004special}).
More precisely, instead of proving the modularity of generating series like $A(\tau)$ with coefficients in classical Chow groups, one is interested in proving the modularity of generating series with coefficients in arithmetic Chow groups. The elements of arithmetic Chow groups are arithmetic divisors (or more general arithmetic cycles) up to rational equivalence. An arithmetic divisor in turn is a pair $(Z,g)$, where $Z$ is a classical divisor (on an integral model of $\overline X$) and $g$ is a Green current corresponding to $Z$.
Particular cases to study the Kudla program are smooth compactifications of Hilbert modular surfaces. On them there are two natural choices to complete the Hirzebruch--Zagier divisors with Green currents to arithmetic divisors. The first such choice is given by the automorphic Green functions introduced by Bruinier in \cite{bruinier1999borcherds} and the second by Kudla Green functions (cf.~\cite{kudla1997central} and \cite{kudla2004special}).
The author dealt in his dissertation \cite{buckdiss} with both types. In this paper we confine ourselves to automorphic Green functions and present many results of his thesis with slight extensions. In an upcoming paper we will deal with the Kudla Green functions.
Automorphic Green functions on Hilbert modular surfaces were investigated earlier in works of Bruinier and Bruinier, Burgos Gil and Kühn under certain assumptions on the level and the discriminant (cf.~\cite{bruinier1999borcherds} and \cite{bruinier2007borcherds}).
In the present paper, we provide extensions of their results and add new results.
The first generalization is that we associate to each fractional ideal $\mathfrak{a} \in \IK$ its Hilbert modular group $\Gamma_\mathfrak{a}$ and corresponding Hilbert modular surfaces $X(\mathfrak{a})$, $\XB{\mathfrak{a}}$ and $\XH{\mathfrak{a}}$ with automorphic Green functions $\Phi(\mathfrak{a},m,s,z)$ and their regularizations $\Phi(\mathfrak{a},m,z)$. Classically, mainly the case $\mathfrak{a} = \mathcal{O}_K$ was considered. However, this generalization is necessary to investigate the classical Green function $\Phi(\mathcal{O}_K,m,z)$ near a cusp $\kappa \in \mathbb{P}^1(K)$, since this corresponds to the investigation of the Green function $\Phi(\mathfrak{a}^2,m,z)$ near the cusp~$\infty$ for $\mathfrak{a} \in \IK$ chosen appropriately.
After discussing some essentials of Hilbert modular groups, associated lattices, Hilbert modular surfaces with their Hirzebruch--Zagier divisors and pre-log-log Green functions in the sense of \cite{BKK1} in Section~\ref{pre-section}, we start Section~\ref{auto-section} with the computation of the Fourier expansion of the unregularized Green function $\Phi(\mathfrak{a},m,s,z)$.
This allows us to identify a Dirichlet series of representation numbers associated to the ideal $\mathfrak{a}$ in the Fourier expansion which is responsible for the divergent behavior of $\Phi(\mathfrak{a},m,s,z)$ at the harmonic point $s=1$. We then describe a regularization process and obtain the regularized Green function $\Phi(\mathfrak{a},m,z)$ with its Fourier expansion, following the basic argument of \cite{bruinier1999borcherds} and \cite{zagier1975modular}. Here, a closer investigation of the part of the Fourier expansion which generates the logarithmic singularities along the Hirzebruch--Zagier divisors near the cusp~$\infty$ takes place.
After developing a more general version of the theory of local Borcherds products of \cite{bruinier2001local} and \cite[p. 150--153]{123mf} in Subsection~\ref{local-bp-section}, we are able to identify local Borcherds products in the Fourier expansion of $\Phi(\mathfrak{a},m,z)$ and to describe the vanishing order of those products along the exceptional divisors over the cusps. By our definition of the Hirzebruch--Zagier divisors $Z(\mathfrak{a},m)$ on the Hirzebruch compactification $\XH{\mathfrak{a}}$ the vanishing orders coincide with the respective multiplicities.
This together with some estimates of the remaining terms in the Fourier expansion proves our first main result and closes Section~\ref{auto-section}.
\begin{theorem}[cf.~Theorem \ref{Phi-is-green}]
The function $\Phi(\mathfrak{a},m,z)$ is a pre-log-log Green function on $\XH{\mathfrak{a}}$ with respect to the divisor $Z(\mathfrak{a},m)$.
\end{theorem}
\sloppy In Section~\ref{decomp-section} we present a new decomposition of the unregularized Green function $\Phi(\mathfrak{a},m,s,z)$ into smooth $\Gamma_\mathfrak{a}$ invariant functions $\Phi_n(\mathfrak{a},m,s,z)$ for $n \in \mathbb{N}_0$, which induces a respective decomposition of the regularized Green function $\Phi(\mathfrak{a},m,z)$ into smooth $\Gamma_\mathfrak{a}$ invariant functions as well. Because of the smoothness of the functions $\Phi_n(\mathfrak{a},m,s,z)$, they possess an everywhere converging Fourier expansion.
We partially compute the Fourier coefficients of this expansion and estimate the remaining Fourier coefficients, which allows us to show the integrability of the regularized Green function $\Phi(\mathfrak{a},m,z)$ and to obtain a polynomial bound in $m$ on the integral of the absolute value $\abs{\Phi(\mathfrak{a},m,z)}$ of the Green function in Section~\ref{integral-section}. This last section deals with integrability and the actual integrals of the unregularized and regularized Green functions and of the components of their smooth decomposition. All the integrals can be made explicit in terms of volumes of Hirzebruch--Zagier divisors; for instance, we show
\[
\int_{X(\mathfrak{a})} \Phi(\mathfrak{a},m,s,z) \omega^2 = \frac{2 \vol(T(\mathfrak{a},m))}{s(s-1)}
\und
\int_{X(\mathfrak{a})} \Phi(\mathfrak{a},m,z) \omega^2 = -2 \vol(T(\mathfrak{a},m)).
\]
in Theorem~\ref{Phi-s-integral} and Theorem~\ref{Phi-integral} for $m\in \mathbb{N}$ and $\Re(s)>1$.
We then employ the polynomial growth in $m$ of the integrals of $\abs{\Phi(\mathfrak{a},m,z)}$ to derive the following striking consequence.
\begin{theorem}[cf.~Corollary~\ref{almost-everywhere-coro}]
The generating series
\begin{align} \label{Phi-gen-series}
\sum_{m=1}^\infty \Phi(\mathfrak{a},m,z) q^m
\end{align}
with $q \in \mathbb{C}$, $|q|<1$ converges absolutely for almost all $z \in \mathbb{H}^2$ and is integrable over $X(\mathfrak{a})$.
\end{theorem}
Note that in the Kudla program one deals with arithmetic generating series where the coefficients are arithmetic divisors interpreted as elements of the first arithmetic Chow group. Here, however, we make sense of the generating series over the actual Green functions $\Phi(\mathfrak{a},m,z)$. The pointwise limit, which exists for almost all $z \in X(\mathfrak{a})$, gives rise to a current because of its integrability. However, as a function on $X(\mathfrak{a})$ the limit is almost nowhere continuous (cf.~Remark~\ref{almost-nowhere-continuous}).
Finally, we compute the integral of the generating series~\eqref{Phi-gen-series} over $X(\mathfrak{a})$ and prove its modularity in Theorem~\ref{modular-integral}.
\subsection{The underlying real quadratic number field}
Throughout the paper $K$ is a real quadratic field of discriminant $D$. With $x \mapsto x'$ we denote the conjugation in $K$, with $N(x):=xx'$ and $\tr(x):=x+x'$ the norm and the trace. The trace is a $\mathbb{Q}$ linear map and the norm is a non-degenerate quadratic form turning $K$ into a rational quadratic space of signature $(1,1)$. Another non-degenerate quadratic form is induced by the trace $(x,y)\mapsto \tr(xy)$. The latter is positive definite, i.e., of signature $(2,0)$.
The ring of integers of $K$ is given by
\[
\mathcal{O}_K = \mathbb{Z} + \tfrac{D+\sqrt{D}}{2} \mathbb{Z}.
\]
By Dirichlet's unit theorem, there exists a unique $\varepsilon_0>1$ (we understand $K$ as subfield of $\mathbb{R}$ with $\sqrt{D}>0$) such that
\[\mathcal{O}_K^\times = \set{\pm \varepsilon_0^k :\; k \in \mathbb{Z}}. \]
Analogously, there exists a unique $\varepsilon_1>1$ such that
\[ \mathcal{O}_K^+ := \mathcal{O}_K^\times \cap K^+ = \set{ \varepsilon_1^k :\; k \in \mathbb{Z}}
\with
K^+:=\set{x \in K :\; x \gg 0}
. \]
Here $x \gg 0$ being totally positive means $x>0$ and $x'>0$. If $N(\varepsilon_0)=1$, we have $\varepsilon_1= \varepsilon_0$ and otherwise $\varepsilon_1 = \varepsilon_0^2$.
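For example (these particular fields play no further role in the paper),
\[
K=\mathbb{Q}(\sqrt{5}):\ D=5,\ \varepsilon_0=\tfrac{1+\sqrt{5}}{2},\ N(\varepsilon_0)=-1,\ \varepsilon_1=\varepsilon_0^2=\tfrac{3+\sqrt{5}}{2};
\qquad
K=\mathbb{Q}(\sqrt{3}):\ D=12,\ \varepsilon_1=\varepsilon_0=2+\sqrt{3}.
\]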
By $\IK$ we denote the ideal group of $K$.
Recall that two ideals $\mathfrak{a}, \mathfrak{b} \in \IK$ belong to the same genus if and only if there exists a $\lambda \in K$ with $N(\lambda) N(\mathfrak{a})=N(\mathfrak{b})$.
The dual $\mathfrak{a}^\vee$ of an ideal $\mathfrak{a} \in \IK$ with respect to the norm form is given by ${(\mathfrak{a}\mathfrak{d})'}^{-1}$ and with respect to the trace form by $(\mathfrak{a}\mathfrak{d})^{-1}$.
Here, $\mathfrak{d}$ denotes the \emph{different} $\mathfrak{d}=(\sqrt{D}) = \sqrt{D}\mathcal{O}_K$.
The volume of $\mathfrak{a}$ is given by $\vol(\mathfrak{a})=N(\mathfrak{a})\sqrt{D}$ with respect to both forms.
\subsection{Hilbert modular groups}
In this paper we consider the Hilbert modular groups
\[
\Gamma_\mathfrak{a} := \SLp{\mathfrak{a}} := \matrix{\mathcal{O}_K}{\mathfrak{a}^{-1}}{\mathfrak{a}}{\mathcal{O}_K} \cap \SL_2(K)
\]
associated to $\mathfrak{a} \in \IK$.
Recall that they act by
\[
\sMatrix{a}{b}{c}{d} (\alpha : \beta) := (a\alpha +b \beta : c\alpha + d \beta)
\]
on $\mathbb{P}^1(K)$. The quotient of this operation defines the cusps of $\Gamma_\mathfrak{a}$ of which there are $h_K$ many with $h_K$ being the class number of $K$.
For relations between different Hilbert modular groups and lattices associated to them it is useful to introduce the sets
\begin{align} \label{SLM-def}
\SLM{\mathfrak{a}}{\mathfrak{b}} := \matrix{\mathfrak{a}}{(\mathfrak{a}\mathfrak{b})^{-1}}{\mathfrak{a}\mathfrak{b}}{\mathfrak{a}^{-1}} \cap \SL_2(K)
\end{align}
for $\mathfrak{a},\mathfrak{b} \in \IK$. They satisfy the equations
\begin{align}
\SLM{\mathfrak{a}}{\mathfrak{b}}^{-1} = \SLM{\mathfrak{a}^{-1}}{\mathfrak{a}^2\mathfrak{b}}
\und
\SLM{\mathfrak{a}_1}{\mathfrak{b}} \SLM{\mathfrak{a}_2}{\mathfrak{a}_1^2 \mathfrak{b}} = \SLM{\mathfrak{a}_1 \mathfrak{a}_2}{\mathfrak{b}}.
\end{align}
For example they imply
\[
M^{-1} \SLp{\mathfrak{b}} M = \SLp{\mathfrak{a}^2\mathfrak{b}}.
\]
for all $M \in \SLM{\mathfrak{a}}{\mathfrak{b}}$. Hence, the Hilbert modular groups $\Gamma_\mathfrak{a}$ and $\Gamma_\mathfrak{b}$ are conjugate if $\mathfrak{a}\mathfrak{b}$ is a square in the group $\IK$.
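For instance, the displayed conjugation identity can be checked by keeping track of the ideal entries (we only sketch this bookkeeping): by the first identity above, $M^{-1}\in\SLM{\mathfrak{a}^{-1}}{\mathfrak{a}^{2}\mathfrak{b}}$, and multiplying out the entry conditions of the three factors yields
\[
M^{-1}\,\matrix{\mathcal{O}_K}{\mathfrak{b}^{-1}}{\mathfrak{b}}{\mathcal{O}_K}\,M \subseteq \matrix{\mathcal{O}_K}{(\mathfrak{a}^{2}\mathfrak{b})^{-1}}{\mathfrak{a}^{2}\mathfrak{b}}{\mathcal{O}_K},
\]
hence $M^{-1}\SLp{\mathfrak{b}}M\subseteq\SLp{\mathfrak{a}^{2}\mathfrak{b}}$; applying the same argument to $M^{-1}$ gives the reverse inclusion.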
\subsection{Lattices associated to ideals}
Throughout the paper $V$ denotes the $\mathbb{Q}$ vector space
\begin{align} \label{def-V}
V := \set{ \matrix{a}{\lambda'}{\lambda}{b} \in K^{2\times 2} :\; a,b \in \mathbb{Q},\lambda \in K } = \set{A \in K^{2\times 2} :\; A^\top = A'}.
\end{align}
equipped with the determinant as quadratic form of signature $(2,2)$.
Using the map
\[
\SL_2(K) \to \OGroup(V),
\quad
M \mapsto (A \mapsto M.A := M A (M')^\top)
\]
whose kernel is given by $\pm 1$, we can view $\PSL_2(K) := \SL_2(K) / \set{\pm 1}$ as a subgroup of $\OGroup(V)$.
We associate to each $\mathfrak{a} \in \IK$ the lattice
\[
L(\mathfrak{a}) := \set{ \matrix{a}{\lambda'}{\lambda}{b} \in V :\; a \in \mathbb{Z}, b \in N(\mathfrak{a})\mathbb{Z},\lambda \in \mathfrak{a} }.
\]
Its dual is given by
\begin{align} \label{La-dual}
L(\mathfrak{a})^\vee = \set{ \frac{1}{N(\mathfrak{a})}\matrix{a}{\lambda'}{\lambda}{b} \in V :\; a \in\mathbb{Z}, b \in N(\mathfrak{a})\mathbb{Z},\lambda \in \mathfrak{a}\mathfrak{d}^{-1} }
\end{align}
and we have
\begin{align} \label{lattice-translate}
M.L(\mathfrak{a}^2\mathfrak{b}) = N(\mathfrak{a}) L(\mathfrak{b})
\und
M.L(\mathfrak{a}^2\mathfrak{b})^\vee = \frac{L(\mathfrak{b})^\vee}{N(\mathfrak{a})}
\end{align}
for all $\mathfrak{a}, \mathfrak{b} \in \IK$ and $M \in \SLM{\mathfrak{a}}{\mathfrak{b}}$.
In particular, the lattices $L(\mathfrak{a})$ and $L(\mathfrak{a})^\vee$ are invariant under $\Gamma_\mathfrak{a}$.
With $\mathbb{H} := \set{z \in \mathbb{C} :\; \Im(z)>0}$ being the complex upper half plane, every point $z \in \mathbb{H}^2$ gives rise to an orthogonal decomposition $W_z \oplus \tilde W_z$ of $V_\mathbb{R} := V \otimes_\mathbb{Z} \mathbb{R}$ such that the quadratic form (the determinant) restricted to $W_z$ is negative definite and the determinant restricted to $\tilde W_z$ is positive definite. Namely, the vectors
\[
X_z := \matrix{x_1x_2-y_1y_2}{x_1}{x_2}{1}
,\quad
Y_z := \matrix{x_1y_2+x_2y_1}{y_1}{y_2}{0}
\]
and
\[
\tilde X_z := \matrix{x_1x_2+y_1y_2}{x_1}{x_2}{1}
,\quad
\tilde Y_z := \matrix{x_1y_2-x_2y_1}{-y_1}{y_2}{0}
\]
with $z = (z_1,z_2)=(x_1+iy_1,x_2+iy_2)$
form an orthogonal basis of $V_\mathbb{R}$. The first two span $W_z$ and the last two $\tilde W_z$.
We obtain a decomposition $\det=q_{W_z}+q_{\tilde W_z}$ with $q_{W_z}$ being the projection onto $W_z$ composed with the determinant. For later use we define $h(A,z) := -q_{W_z}(A)$ and obtain the majorant
\[
q_z(A) := h(A,z) + q_{\tilde W_z}(A) = \det(A)+2h(A,z),
\]
a positive definite quadratic form on $V_\mathbb{R}$.
For elements $A=\sMatrix{a}{\lambda'}{\lambda}{b}\in V_\mathbb{R}$ we obtain
\begin{align} \label{hA-frac}
h(A,z) = \frac{|b z_1 z_2 - \lambda z_1 - \lambda' z_2 +a |^2}{4 y_1y_2}
\und
q_{\tilde W_z}(A) = \frac{|b \overline{z}_1 z_2 - \lambda \overline{z}_1 - \lambda' z_2 +a |^2}{4 y_1y_2}.
\end{align}
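As a consistency check (with auxiliary abbreviations $w_1,w_2$ used only here), the difference of the two expressions in \eqref{hA-frac} recovers the decomposition $\det=q_{W_z}+q_{\tilde W_z}$: writing $w_1=bz_2-\lambda$ and $w_2=a-\lambda' z_2$, the two numerators are $|z_1w_1+w_2|^{2}$ and $|\overline{z}_1w_1+w_2|^{2}$, and since $a,b,\lambda,\lambda'$ are real,
\[
q_{\tilde W_z}(A)-h(A,z)=\frac{|\overline{z}_1w_1+w_2|^{2}-|z_1w_1+w_2|^{2}}{4y_1y_2}
=\frac{4y_1\Im(w_1\overline{w}_2)}{4y_1y_2}=ab-\lambda\lambda'=\det(A),
\]
where we used $\Im(w_1\overline{w}_2)=y_2(ab-\lambda\lambda')$.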
For anisotropic $A=\sMatrix{a}{\lambda'}{\lambda}{b}\in V_\mathbb{R}$ the normalized function
\[
g(A,z) := \frac{h(A,z)}{\det(A)} = \frac{\abs{bz_1z_2 - \lambda z_1 - \lambda' z_2 + a}^2}{4 y_1y_2 \det(A)}
\]
comes in handy from time to time. We have
\begin{align} \label{hg-transform}
h(A,z) = h(M.A,Mz)
\und
g(A,z) = g(M.A,Mz)
\end{align}
for all $M \in \SL_2(K)$.
\begin{remark} \label{gA-with-d-remark}
Using the $\GL_2(\R)$ invariant hyperbolic distance $d(z_1,z_2) := |z_1-z_2|^2/y_1y_2$ we can express $g(A,z)$ with $S := \sMatrix{0}{-1}{1}{0}$ by
\[
g(A,z) = \frac{d(z_1,ASz_2)}{4} = \frac{|z_1-ASz_2|^2}{4\Im(z_1)\Im(ASz_2)}.
\]
\end{remark}
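The last expression can be verified directly, at least when $\det(A)>0$ so that $ASz_2\in\mathbb{H}$: since $AS=\sMatrix{\lambda'}{-a}{b}{-\lambda}$ has determinant $\det(A)$, we have $\Im(ASz_2)=\det(A)\Im(z_2)/|bz_2-\lambda|^{2}$ and $z_1-ASz_2=\frac{bz_1z_2-\lambda z_1-\lambda'z_2+a}{bz_2-\lambda}$, so that
\[
\frac{|z_1-ASz_2|^{2}}{4\,\Im(z_1)\,\Im(ASz_2)}
=\frac{|bz_1z_2-\lambda z_1-\lambda' z_2+a|^{2}}{4y_1y_2\det(A)}=g(A,z).
\]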
\subsection{Hilbert modular surfaces and Hirzebruch--Zagier divisors}
We associate to each $\mathfrak{a} \in \IK$ its Hilbert modular surface
$X(\mathfrak{a}) := \Gamma_\mathfrak{a} \backslash \mathbb{H}^2$. By $\XB{\mathfrak{a}} := X(\mathfrak{a}) \cup \Gamma_\mathfrak{a} \backslash \mathbb{P}^1(K)$ we denote its Baily--Borel compactification (cf.~\cite[Chapter II, Section 1.2]{123mf} or \cite[Chapter I, Section 2]{freitag1990hilbert} for an introduction), a normal complex space.
The cusps in $\XB{\mathfrak{a}}$ are highly singular but can be desingularized; one obtains the Hirzebruch compactification $\XH{\mathfrak{a}}$ (cf.~\cite[Chapter II]{geer1988hms}) which is smooth at the boundary (the only leftover singular points are the elliptic fixed points, which are finite quotient singularities). In the Hirzebruch compactification every cusp $\kappa$ is replaced by an exceptional divisor $E^\kappa(\mathfrak{a})$ which consists of finitely many glued $S_k \cong \mathbb{P}^1(\mathbb{C})$ ($k \in \mathbb{Z}/r_\kappa\mathbb{Z}$ for $r_\kappa \in \mathbb{N}$ chosen appropriately for each cusp $\kappa$).
We call the sum of all exceptional divisors $E(\mathfrak{a}) := \sum_{\kappa \in \Gamma_\mathfrak{a} \backslash \mathbb{P}^1(K)} E^\kappa(\mathfrak{a})$.
Let $\mathfrak{b} \in \IK$ and $\kappa = (\alpha : \beta) \in \mathbb{P}^1(K)$. Then there exists a matrix $M \in \SLM{\mathfrak{a}}{\mathfrak{b}}$ with $\mathfrak{a} := \alpha \mathcal{O}_K + \beta \mathfrak{b}^{-1}$ and $M \infty = \kappa$. Now the map
\begin{align} \label{M-iso}
(\mathbb{H}^2)^* \to (\mathbb{H}^2)^*,\quad z \mapsto M^{-1}z
\end{align}
induces an isomorphism $\XB{\mathfrak{b}} \isoArrow \XB{\mathfrak{a}^2\mathfrak{b}}$ mapping the cusp $\kappa$ of $\XB{\mathfrak{b}}$ to the cusp~$\infty$ of $\XB{\mathfrak{a}^2\mathfrak{b}}$. That is why it is enough to study the cusp~$\infty$ for all $X(\mathfrak{a})$ instead of all cusps of $X(\mathcal{O}_K)$. To study the desingularized cusp~$\infty$ we have to express it in local coordinates. We call them $(u,v) \in \mathbb{C}^2$ and they satisfy
\begin{align} \label{local-to-z}
\matrixCol{2\pi i z_1}{2\pi i z_2} = \matrix{\alpha}{\beta}{\alpha'}{\beta'}\matrixCol{\log u}{\log v}
\end{align}
with respect to a totally positive basis $(\alpha,\beta)$ of $\mathfrak{a}^{-1}$. The $S_k$ correspond then (up to one point) to $u=0$ ($v=0$, respectively) and we have the following lemma.
\begin{lemma} \label{exponentials-in-local}
Let $\nu \in \mathfrak{a}\mathfrak{d}^{-1}$. Then the following functions are $\mathfrak{a}^{-1}$ invariant and can be expressed in local coordinates $(u,v)$ with respect to $(\alpha,\beta)$:
\begin{align*}
e(\tr(\nu z)) = e(\nu z_1)e(\nu' z_2) &= u^{\tr(\alpha \nu)} v^{\tr(\beta \nu)}, \\
e(\tr(\nu \overline{z})) = e(\nu \overline{z_1})e(\nu' \overline{z_2}) &= \overline{u}^{-\tr(\alpha \nu)} \overline{v}^{-\tr(\beta \nu)}, \\
e(\nu z_1)e(\nu' \overline{z_2})
&= u^{\alpha \nu} \overline{u}^{-\alpha'\nu'} v^{\beta \nu} \overline{v}^{-\beta'\nu'},\\
e(\nu \overline{z_1})e(\nu' z_2)
& = u^{\alpha' \nu'} \overline{u}^{-\alpha\nu} v^{\beta' \nu'} \overline{v}^{-\beta\nu}.
\end{align*}
The evaluation of the third and fourth line is independent of the chosen branch of the logarithm $\log(u)$ as long as the branch of $\log(\overline{u})$ is chosen accordingly, i.e., $\log(\overline{u}):=\overline{\log(u)}$. The same holds for $\log(v)$ and $\log(\overline{v})$, respectively.
\end{lemma}
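The first identity, for instance, follows directly from \eqref{local-to-z}: by linearity,
\[
2\pi i(\nu z_1+\nu'z_2)=\nu(\alpha\log u+\beta\log v)+\nu'(\alpha'\log u+\beta'\log v)
=\tr(\alpha\nu)\log u+\tr(\beta\nu)\log v,
\]
and the exponents are integers since $\alpha,\beta\in\mathfrak{a}^{-1}$ and $\nu\in\mathfrak{a}\mathfrak{d}^{-1}$ imply $\tr(\alpha\nu),\tr(\beta\nu)\in\tr(\mathfrak{d}^{-1})\subseteq\mathbb{Z}$. The remaining identities are obtained in the same way, using $\log\overline{u}=\overline{\log u}$ and $\log\overline{v}=\overline{\log v}$.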
As a Kähler manifold, $X(\mathfrak{a})$ possesses a Kähler form
\begin{align} \label{omega-def}
\omega := \eta_1+\eta_2
\with
\eta_j := \frac{1}{4 \mathfrak{p}i} \frac{dx_j dy_j}{y_j^2}.
\end{align}
It induces the volume form $\omega^2$ which allows us to integrate over the Hilbert modular surface. For instance, its volume is given by $\vol(X(\mathfrak{a})) = \zeta_K(-1) = L(-1,\chi_D) \zeta(-1)$.
For non-zero $A=\sMatrix{a}{\lambda'}{\lambda}{b}\in V$ we define
\[
T_A := \subsett{ z \in \mathbb{H}^2 :\; h(A,z)=0 } = \subsett{ z \in \mathbb{H}^2 :\; bz_1z_2 - \lambda z_1 - \lambda' z_2 +a=0 }
\]
leading us for $m \in \mathbb{N}$ to the definition of the Hirzebruch--Zagier divisors
\mathfrak{b}egin{align} \label{Tam-def}
T(\mathfrak{a},m) := \sum_{ \substack{A \in L(\mathfrak{a})^\vee / \subsett{\mathfrak{p}m 1}\\ \mathfrak{d}et(A)=m/(N(\mathfrak{a})D)} } T_A.
\end{align}
Because of the transformation law~\eqref{hg-transform} $T(\mathfrak{a},m)$ is invariant under $\Gamma_\mathfrak{a}$ and therefore it is well-defined on $X(\mathfrak{a})$.
By an argument similar to \cite[Section 3.2]{bruinier2007borcherds} it can be shown that
\mathfrak{b}egin{align} \label{Tm-volume-sum}
\vol(T(\mathfrak{a},m))
= \sum_{ \substack{A \in \Gamma_\mathfrak{a} \mathfrak{b}ackslash L(\mathfrak{a})^\vee / \subsett{\mathfrak{p}m 1}\\ \mathfrak{d}et(A)=m/(N(\mathfrak{a})D)} } \vol(T_A)
= \sum_{ \substack{A \in \Gamma_\mathfrak{a} \mathfrak{b}ackslash L(\mathfrak{a})^\vee / \subsett{\mathfrak{p}m 1}\\ \mathfrak{d}et(A)=m/(N(\mathfrak{a})D)} } \vol( \Gas{\mathfrak{p}m A}' \mathfrak{b}ackslash \mathbb{H} )
\end{align}
with respect to the pullback of the Kähler form $\omega$.
Here, the stabilizer $\Gas{\mathfrak{p}m A}'$ is given by
\[
\Gas{\mathfrak{p}m A}' := \subsett{ M' :\; M \in \Gamma_\mathfrak{a} \und M.A \in \subsett{\mathfrak{p}m A} }.
\]
A component $T_A \subset \mathbb{H}^2$ with $A = \sMatrix{a}{\lambda'}{\lambda}{b}$ runs into the cusp~$\infty$ if and only if $b=0$.
Therefore, we define
\mathfrak{b}egin{align*}
\Lambda(\mathfrak{a},m) := &\subsett{\lambda \in \mathfrak{a}\mathfrak{d}^{-1} :\; N(\lambda) = - \frac{mN(\mathfrak{a})}{D} },\\
\Lambda^\mathfrak{p}m(\mathfrak{a},m) := &\subsett{\lambda \in \Lambda(\mathfrak{a},m) :\; \sgn(\lambda)= \mathfrak{p}m 1}
\end{align*}
and
\mathfrak{b}egin{align} \label{Tinf-def}
T^\infty(\mathfrak{a},m) := \sum_{ \substack{A=\sMatrix{a}{\lambda'}{\lambda}{0} \in L(\mathfrak{a})^\vee / \subsett{\mathfrak{p}m 1}\\ \mathfrak{d}et(A)=m/(N(\mathfrak{a})D)} } T_A
=\sum_{\lambda \in \Lambda^+(\mathfrak{a},m)} \sum_{a\in \mathbb{Z}} \subsett{ z \in \mathbb{H}^2 :\; \tr(\lambda z)=a}.
\end{align}
The divisor $T^\infty(\mathfrak{a},m)$ is invariant under
\[
\Gas{\infty} = \subsett{ \matrix{\varepsilon}{\mu}{0}{\varepsilon^{-1}} :\; \varepsilon \in \mathcal{O}_K^\times, \mu \in \mathfrak{a}^{-1} }.
\]
Now, consider the real codimension one submanifold
\[
S(\mathfrak{a},m) := \mathfrak{b}igcup_{\lambda \in \Lambda^+(\mathfrak{a},m)} S_\lambda
\with
S_\lambda := \subsett{z \in \mathbb{H}^2 :\; \tr(\lambda y)=0}
\]
which contains the divisor $T^\infty(\mathfrak{a},m)$ as subset.
The connected components of its complement $\mathbb{H}^2 \subsettminus S(\mathfrak{a},m)$ are the so-called \emph{Weyl chambers of index $m$}.
The cyclic group $(\mathcal{O}_K^\times)^2$ acts on $\Lambda^+(\mathfrak{a},m)$ by multiplication. The quotient is finite, and with respect to a fixed $w \in (\mathbb{R}^+)^2$ it is possible to specify a unique representative of each orbit. Namely, we call $\lambda \in \Lambda^+(\mathfrak{a},m)$ \emph{reduced} with respect to $w$ if $\lambda$ is the minimal element of its $(\mathcal{O}_K^\times)^2$ orbit with $\tr(\lambda w) \ge 0$. The set of all $\lambda \in \Lambda^+(\mathfrak{a},m)$ which are reduced with respect to $w$ is denoted by $R(\mathfrak{a},m,w)$.
For all $w=(y_1,y_2)$, where $z=x+iy$ lies in a fixed Weyl chamber $W$, the set $R(\mathfrak{a},m,w)$ is the same. Therefore, we also allow the notation $R(\mathfrak{a},m,W)$ with $W$ being a Weyl chamber.
We define
\[
\rho(\mathfrak{a},m,w) := \sum_{ \lambda \in R(\mathfrak{a},m,w)} \frac{\lambda}{\varepsilon_0^{2}-1}
\]
and call it the \emph{Weyl vector} with respect to $w$ ($\rho(\mathfrak{a},m,W)$, respectively).
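For illustration, the following Python sketch (illustrative only, not part of the argument) works out the reduction and the Weyl vector in the hypothetical special case $D=5$, $\mathfrak{a}=\mathcal{O}_K$, $m=1$, where $\Lambda^+(\mathcal{O}_K,1)=\{\varepsilon_0^{2k}/\sqrt{5} : k \in \mathbb{Z}\}$ consists of a single $(\mathcal{O}_K^\times)^2$ orbit.
\begin{verbatim}
# Illustrative sketch: reduction of Lambda^+(a, m) modulo (O_K^x)^2 and the Weyl
# vector rho(a, m, w), for the hypothetical data D = 5, a = O_K, m = 1, where
# Lambda^+(O_K, 1) = { eps0^(2k)/sqrt(5) : k in Z } is a single orbit and the
# reduced representative with respect to w = (2, 1) is 1/sqrt(5).
import math

sqD = math.sqrt(5)
eps2, eps2c = ((1 + sqD)/2)**2, ((1 - sqD)/2)**2     # eps0^2 and its conjugate eps0^{-2}

def reduce_rep(lam, lamc, w):
    """Minimal element of the orbit {eps0^(2k) * lambda} with tr(lambda * w) >= 0."""
    tr = lambda a, b: a*w[0] + b*w[1]
    while tr(lam, lamc) < 0:                         # trace too small: go up the orbit
        lam, lamc = lam*eps2, lamc*eps2c
    while tr(lam*eps2c, lamc*eps2) >= 0:             # predecessor still admissible: go down
        lam, lamc = lam*eps2c, lamc*eps2
    return lam, lamc

w = (2.0, 1.0)
for k in (-3, 0, 2, 5):                              # start anywhere in the orbit ...
    lam, lamc = eps2**k/sqD, -eps2c**k/sqD           # (lambda, lambda') = eps0^(2k)/sqrt(5)
    print(k, reduce_rep(lam, lamc, w))               # ... always reduces to (0.447.., -0.447..)

lam0, lam0c = reduce_rep(1/sqD, -1/sqD, w)
rho, rhoc = lam0/(eps2 - 1), lam0c/(eps2c - 1)       # Weyl vector rho(O_K, 1, w) and conjugate
print(rho, rhoc)
\end{verbatim}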
Those Weyl vectors allow us now to define the completed Hirzebruch--Zagier divisors $Z(\mathfrak{a},m)$ for the Hirzebruch compactification $X(\mathfrak{a})H$.
Namely, the component of $Z(\mathfrak{a},m)$ living on $E^\infty(\mathfrak{a})$ is given by
\mathfrak{b}egin{align} \label{Zinf-def}
Z^\infty(\mathfrak{a},m) := \sum_{k=1}^{r_\infty} \tr(\rho(\mathfrak{a},m,A_k)A_k) S_k.
\end{align}
Here, $A_k$ is the element of $\mathfrak{a}^{-1}$ corresponding to $S_k$ due to \cite{geer1988hms}.
For other cusps than $\infty$, the divisor $Z^\kappa(\mathfrak{a},m)$ is given by the image of $Z^\infty(\mathfrak{a}\mathfrak{b}^2,m)$ under the isomorphism $\overline{X(\mathfrak{a}\mathfrak{b}^2)} \isoArrow X(\mathfrak{a})H$ (here $\mathfrak{b} \in \mathcal{I}K$ has to be chosen appropriately, cf.~\eqref{M-iso}).
We then define
\[
Z(\mathfrak{a},m) := T(\mathfrak{a},m) + \sum_{\kappa \in \Gamma_\mathfrak{a} \mathfrak{b}ackslash \mathbb{P}^1(K)} Z^\kappa(\mathfrak{a},m)
\]
where $\kappa$ runs through all cusps of $\Gamma_\mathfrak{a}$.
\subsection{Logarithmic singularities and pre-log-log Green functions}
We expect the reader to be familiar with the concept of logarithmic singularities and pre-log-log Green functions (cf.~\cite[Section 1.2]{bruinier2007borcherds} for details).
We use the following scaling of logarithmic singularities in this paper: for a holomorphic function $h$, the function $\log(|h|^2)$ has logarithmic singularities along the divisor $\mathfrak{d}iv(h)$. This scaling ensures that the Green equation
\[
dd^c [g] + \delta_{Z} = [dd^c g]
\]
holds for a function $g$ with logarithmic singularities along the divisor $-Z$. Note that a Green function for a divisor $Z$ therefore has logarithmic singularities along $-Z$ (and not $+Z$).
In our situation, the logarithmic singularities of the Green functions we investigate are along the Hirzebruch--Zagier divisors $-Z(\mathfrak{a},m)$. Additionally, we allow the pre-log-log growth along the exceptional divisor $E(\mathfrak{a})$.
\mathfrak{b}egin{remark} \label{pre-log-log-condition}
Recall that a function $f$ is a pre-log-log growth form if and only if
\[
f,\quad
w_1 \log(|w_1|) \frac{\partial f}{\partial w_1},\quad
w_1 w_2 \log(|w_1|)\log(|w_2|) \frac{\partial^2 f}{\partial w_1\partial w_2}
\]
have log-log growth for $w_1,w_2 \in \subsett{z_1,\mathfrak{d}ots,z_k,\overline{z}_1,\mathfrak{d}ots,\overline{z}_k}$ and $w_1 \ne w_2$.
In most cases where we have to prove a function $f$ to be a pre-log-log growth form we will see that the given terms even go to $0$ for small $z_i$ ($1 \le i \le k$).
\end{remark}
\section{Investigation of automorphic Green functions} \label{auto-section}
\subsection{Unregularized Fourier expansion} \label{fourier-exp-and-reg}
In this subsection we generalize the definition and many results of the automorphic Green function living on $X(\mathcal{O}_K)$ from \cite{bruinier1999borcherds} to automorphic Green functions living on $X(\mathfrak{a})$ for arbitrary ideals $\mathfrak{a} \in \mathcal{I}K$. We follow \cite[Section 3]{bruinier1999borcherds} and skip most of the proofs since they are similar to those in the source.
\mathfrak{b}egin{definition} \label{Phi-amsz-def}
For $\mathfrak{a} \in \mathcal{I}K$, $m \in \mathbb{N}$, $s \in \mathbb{C}$ with $\mathbb{R}e(s)>1$ and $z \in \mathbb{H}^2 \subsettminus T(\mathfrak{a},m)$ we define
\[
\mathbb{P}hi(\mathfrak{a},m,s,z) :=
\sum_{ \substack{ A \in L(\mathfrak{a})^\vee \\ \mathfrak{d}et(A) = m/(N(\mathfrak{a})D) } } Q_{s-1} \mathfrak{b}r{ 1+ 2 g(A,z) }.
\]
\end{definition}
Analogously to \cite{bruinier1999borcherds}, we are mainly interested in $\mathbb{P}hi(\mathfrak{a},m,s,z)$ at the harmonic point $s=1$, but unfortunately the series diverges there. Therefore, we regularize $\mathbb{P}hi(\mathfrak{a},m,s,z)$ at $s=1$. For this purpose we have to compute the Fourier expansion of $\mathbb{P}hi(\mathfrak{a},m,s,z)$ and extend the Fourier coefficients meromorphically in $s$.
\mathfrak{b}egin{proposition} \label{first-Phi-prop}
The series defining $\mathbb{P}hi(\mathfrak{a},m,s,z)$ converges normally for $\mathbb{R}e(s)>1$ and $z \in \mathbb{H}^2 \subsettminus T(\mathfrak{a},m)$ to a smooth function in $z$ which is $\Gamma_\mathfrak{a}$ invariant and holomorphic in $s$. It is an eigenfunction of the Laplacian $\Delta_j$ for $j \in \subsett{1,2}$ with eigenvalue $s(s-1)$.
Further, we have for $\mathfrak{a},\mathfrak{b} \in \mathcal{I}K$ and $M \in \mathcal{S}LM{\mathfrak{a}}{\mathfrak{b}}$
\[ \mathbb{P}hi(\mathfrak{b},m,s,Mz) = \mathbb{P}hi(\mathfrak{a}^2\mathfrak{b},m,s,z). \]
\end{proposition}
\mathfrak{b}egin{proof}
The transformation law involving the ideals $\mathfrak{a},\mathfrak{b} \in \mathcal{I}K$ implies the $\Gamma_\mathfrak{a}$ invariance of $\mathbb{P}hi(\mathfrak{a},m,s,z)$ (choose $\mathfrak{a}= \mathcal{O}_K$, then $\mathcal{S}LM{\mathfrak{a}}{\mathfrak{b}} = \Gamma_\mathfrak{b}$). The transformation law is a consequence of \eqref{lattice-translate} and \eqref{hg-transform}. For the rest follow \cite{bruinier1999borcherds}.
\end{proof}
Using the decomposition
\[
\mathbb{P}hi(\mathfrak{a},m,s,z)
= \sum_{b \in \mathbb{Z}} \mathbb{P}hi^b(\mathfrak{a},m,s,z)
= \mathbb{P}hi^0(\mathfrak{a},m,s,z) + 2\sum_{b=1}^\infty \mathbb{P}hi^b(\mathfrak{a},m,s,z)
\]
with
\[
\mathbb{P}hi^b(\mathfrak{a},m,s,z) :=\sum_{ \substack{ A =\sMatrix{a}{\lambda'}{\lambda}{b} \in L(\mathfrak{a})^\vee \\ \mathfrak{d}et(A) = m/(N(\mathfrak{a})D) } } Q_{s-1} \mathfrak{b}r{ 1+ 2 g(A,z) }
\]
we can compute the Fourier expansion of $\mathbb{P}hi(\mathfrak{a},m,s,z)$. Note that each individual $\mathbb{P}hi^b(\mathfrak{a},m,s,z)$ is still invariant under $\Gas{\infty}$, in particular under translation by $\mathfrak{a}^{-1}$, and therefore possesses a Fourier expansion.
Furthermore, for $b \in \mathbb{N}$ the function $\mathbb{P}hi^b(\mathfrak{a},m,s,z)$ has no singularity for arguments $z \in \mathbb{H}^2$ with $y_1y_2 > B := m/(N(\mathfrak{a})Db^2)$.
For $b \in \mathbb{N}$ let $R^b(\mathfrak{a},m)$ be a set of representatives of
\[
\subsett{\lambda \in \mathfrak{a}\mathfrak{d}^{-1} / b\mathfrak{a} :\; \frac{N(\sqrt{D}\lambda)}{N(\mathfrak{a})} \equiv m \mathfrak{p}mod{bD}}.
\]
We compute for $b \in \mathbb{N}$
\mathfrak{b}egin{align*}
\mathbb{P}hi^b(\mathfrak{a},m,s,z)
&= \sum_{ \substack{a \in \mathbb{Z}/N(\mathfrak{a}),\:\lambda \in \mathfrak{a}\mathfrak{d}^{-1}/N(\mathfrak{a}) \\ ab-N(\lambda) = m/(N(\mathfrak{a})D) } } Q_{s-1} \mathfrak{b}r{1+ \frac{|bz_1z_2-\lambda z_1-\lambda'z_2+a|^2}{2y_1y_2 m/(N(\mathfrak{a})D)} }\\
&= \sum_{ \substack{a \in \mathbb{Z}/N(\mathfrak{a}),\:\lambda \in \mathfrak{a}\mathfrak{d}^{-1}/N(\mathfrak{a}) \\ ab-N(\lambda) = m/(N(\mathfrak{a})D) } } Q_{s-1}
\mathfrak{b}r{1+ \frac{| (z_1-\lambda'/b)(z_2-\lambda/b) +B |^2}{2y_1y_2 B} }\\
&= \sum_{\lambda \in R^b(\mathfrak{a},m)} \sum_{\mu \in \mathfrak{a}}
Q_{s-1} \mathfrak{b}r{1+ \frac{ \mathfrak{a}bs{ \mathfrak{b}r {z_1- \frac{\lambda'+b\mu'}{N(\mathfrak{a})b}}\mathfrak{b}r{z_2 - \frac{\lambda+b\mu}{N(\mathfrak{a})b}} + B }^2}{2y_1y_2 B} }\\
&= \sum_{\lambda \in R^b(\mathfrak{a},m)} \sum_{\mu \in \mathfrak{a}^{-1}}
Q_{s-1} \mathfrak{b}r{1+ \frac{ \mathfrak{a}bs{ \mathfrak{b}r {z_1 + \mu + \frac{\lambda'}{N(\mathfrak{a})b}} \mathfrak{b}r {z_2 + \mu' + \frac{\lambda}{N(\mathfrak{a})b}} + B }^2}{2y_1y_2 B} }.
\end{align*}
Hence, the problem is reduced to computing the Fourier expansion of the $\mathfrak{a}^{-1}$ periodic function $H_s^B (\mathfrak{a}^{-1},z)$ with
\mathfrak{b}egin{align} \label{HB-def}
H_s^B (\mathfrak{b},z) := \sum_{\mu \in \mathfrak{b}} Q_{s-1}\mathfrak{b}r{ 1 + \frac{|(z_1+\mu)(z_2+\mu')+B|^2}{2y_1y_2B} }.
\end{align}
Namely, let
\[
H_s^B (\mathfrak{b},z) = \sum_{\nu \in (\mathfrak{b}\mathfrak{d})^{-1} } b_s^B(\mathfrak{b},\nu,y) e(\tr(\nu x))
\]
be the Fourier expansion of $H_s^B (\mathfrak{b},z)$. Then we have
\mathfrak{b}egin{align*}
\mathbb{P}hi^b(\mathfrak{a},m,s,z)
&= \sum_{\lambda \in R^b(\mathfrak{a},m)} \sum_{\nu \in \mathfrak{a}\mathfrak{d}^{-1}} b_s^B(\mathfrak{a}^{-1},\nu,y) e \mathfrak{b}r{\tr \mathfrak{b}r{\nu \mathfrak{b}r{ x + \tfrac{\lambda'}{N(\mathfrak{a})b} }} }\\
&= \sum_{\nu \in \mathfrak{a}\mathfrak{d}^{-1}} \mathfrak{b}r{ \sum_{\lambda \in R^b(\mathfrak{a},m)} e \mathfrak{b}r{\tr \mathfrak{b}r{\tfrac{\nu\lambda'}{N(\mathfrak{a})b} } } } b_s^B(\mathfrak{a}^{-1},\nu,y) e(\tr(\nu x))\\
&= \sum_{\nu \in \mathfrak{a}\mathfrak{d}^{-1}} G^b(\mathfrak{a},m,\nu) b_s^B(\mathfrak{a}^{-1},\nu,y) e(\tr(\nu x))
\end{align*}
with the finite exponential sum
\mathfrak{b}egin{align} \label{gaus-sum}
G^b(\mathfrak{a},m,\nu) := \sum_{ \substack{ \lambda \in \mathfrak{a}\mathfrak{d}^{-1} / b\mathfrak{a} \\ \frac{N(\lambda)}{N(\mathfrak{a})} \equiv -\frac{m}{D}\: (b\mathbb{Z}) } } e \mathfrak{b}r{\tr \mathfrak{b}r{\tfrac{\nu\lambda'}{N(\mathfrak{a})b} } } .
\end{align}
\mathfrak{b}egin{definition} \label{our-bessel}
For shorter notation we define for $\nu \in K^\times$
\[
\mathcal{I}^\nu_\kappa (z) :=
\mathfrak{b}egin{cases}
I_\kappa(z),&\quad N(\nu)>0,\\
J_\kappa(z),&\quad N(\nu)<0.
\end{cases}
\]
Here, $I_\kappa(z)$ and $J_\kappa(z)$ denote the respective Bessel functions, i.e.,
$I_\kappa(z)$ is the modified Bessel function of the first kind (cf.~\cite[10.25.2]{handbook})
and $J_\kappa(z)$ is the Bessel function of the first kind (cf.~\cite[10.2.2]{handbook}).
By $K_\kappa(z)$ we denote the modified Bessel function of the second kind (cf.~\cite[10.25.3]{handbook}).
\end{definition}
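For numerical experiments the case distinction of Definition~\ref{our-bessel} can be implemented directly; the following small Python sketch (illustrative only) uses the Bessel functions from \texttt{scipy.special}.
\begin{verbatim}
# Minimal sketch of I^nu_kappa from the definition above: the modified Bessel
# function I_kappa if N(nu) > 0 and the Bessel function J_kappa if N(nu) < 0.
from scipy.special import iv, jv, kv   # I_kappa, J_kappa, K_kappa

def I_nu_kappa(norm_nu, kappa, x):
    """Evaluate the Bessel function attached to nu via the sign of its norm N(nu)."""
    if norm_nu > 0:
        return iv(kappa, x)
    if norm_nu < 0:
        return jv(kappa, x)
    raise ValueError("N(nu) = 0 is not covered by the definition")

print(I_nu_kappa(+1.0, 1, 2.5), I_nu_kappa(-1.0, 1, 2.5), kv(0.5, 2.5))
\end{verbatim}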
\mathfrak{b}egin{lemma} \label{Phi-b-fc}
Let $\mathfrak{b} \in \mathcal{I}K$ and $B>0$.
The function $H_s^B (\mathfrak{b},z)$ defined by equation~\eqref{HB-def} converges normally for $\mathbb{R}e(s)>1/2$ and for those $z \in \mathbb{H}^2$ at which no term in the series has a singularity, i.e., the arguments of all $Q_{s-1}$ are greater than $1$. For $y_1y_2>B$ this is the case and the series has the Fourier expansion
\[
H_s^B (\mathfrak{b},z) = \sum_{\nu \in (\mathfrak{b}\mathfrak{d})^{-1} } b_s^B(\mathfrak{b},\nu,y) e(\tr(\nu x))
\]
with
\mathfrak{b}egin{align*}
b_s^B(\mathfrak{b},0,y) = &\frac{\mathfrak{p}i \Gamma_\mathfrak{a}mma(s-1/2)^2}{2 \sqrt{D} N(\mathfrak{b}) \Gamma_\mathfrak{a}mma(2s)} (4B)^s (y_1y_2)^{1-s},\\
b_s^B(\mathfrak{b},\nu,y) = &\frac{4 \mathfrak{p}i}{N(\mathfrak{b})} \sqrt{ \frac{B y_1y_2}{D} } \mathcal{I}^\nu_{2s-1} \mathfrak{b}r{4 \mathfrak{p}i \sqrt{B|N(\nu)|}} K_{s-1/2}(2\mathfrak{p}i |\nu| y_1)\\ &\times K_{s-1/2}(2\mathfrak{p}i |\nu'| y_2), \quad \text{if } \nu \ne 0.\\
\end{align*}
\end{lemma}
We are left with the analysis of $\mathbb{P}hi^0(\mathfrak{a},m,s,z)$. We have
\mathfrak{b}egin{align*}
\mathbb{P}hi^0(\mathfrak{a},m,s,z)
&= \sum_{ \substack{ A =\sMatrix{a}{\lambda'}{\lambda}{0} \in L(\mathfrak{a})^\vee \\ \mathfrak{d}et(A) = m/(N(\mathfrak{a})D) } } Q_{s-1} \mathfrak{b}r{ 1+ 2 g(A,z) }\\
&= \sum_{ \substack{a \in \mathbb{Z}/N(\mathfrak{a}),\:\lambda \in \mathfrak{a}\mathfrak{d}^{-1}/N(\mathfrak{a}) \\ -N(\lambda) = m/(N(\mathfrak{a})D) } } Q_{s-1} \mathfrak{b}r{1+ \frac{|-\lambda z_1-\lambda'z_2+a|^2}{2y_1y_2 m/(N(\mathfrak{a})D)} }\\
&= 2 \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) }
\sum_{a \in \mathbb{Z}} Q_{s-1} \mathfrak{b}r{1+ \frac{|\lambda z_1+\lambda'z_2+a|^2}{2y_1y_2 mN(\mathfrak{a})/D} }.
\end{align*}
Let us define for $r_1,r_2 \in \mathbb{R}$
\mathfrak{b}egin{align} \label{alpha-beta-def}
\mathfrak{a}lpha(r_1,r_2) := \max(|r_1|,|r_2|)
\und
\mathfrak{b}eta(r_1,r_2) := \min(|r_1|,|r_2|).
\end{align}
\mathfrak{b}egin{lemma} \label{Phi-0-fc}
The series
\mathfrak{b}egin{align*}
\mathbb{P}hi^0(\mathfrak{a},m,s,z)
= 2 \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) } \sum_{a \in \mathbb{Z}} Q_{s-1} \mathfrak{b}r{1+ \frac{|\lambda z_1+\lambda'z_2+a|^2}{2y_1y_2 mN(\mathfrak{a})/D} }
\end{align*}
converges normally for $z \in \mathbb{H}^2 \subsettminus T^\infty(\mathfrak{a},m)$ and $\mathbb{R}e(s)>1/2$. Moreover, on $\mathbb{H}^2 \subsettminus S(\mathfrak{a},m)$ one has the Fourier expansion
\mathfrak{b}egin{align*}
\mathbb{P}hi^0(\mathfrak{a},m,s,z) = \frac{4\mathfrak{p}i}{2s-1} \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) } \mathfrak{a}lpha(\lambda y_1,\lambda' y_2)^{1-s}\mathfrak{b}eta(\lambda y_1,\lambda' y_2)^s\\
+\: 4 \mathfrak{p}i \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) } \sum_{n=1}^\infty \sqrt{ |\lambda \lambda' y_1y_2| } I_{s-1/2} (2 \mathfrak{p}i n \mathfrak{b}eta(\lambda y_1,\lambda'y_2))\\
\times \ K_{s-1/2}(2\mathfrak{p}i n \mathfrak{a}lpha(\lambda y_1,\lambda' y_2)) \mathfrak{b}r{e(n \tr(\lambda x))+e(-n \tr(\lambda x))}.
\end{align*}
\end{lemma}
Hence, analogously to \cite{bruinier1999borcherds}, we obtain that the individual $\mathbb{P}hi^b(\mathfrak{a},m,s,z)$ are well-defined for $\mathbb{R}e(s)>1/2$.
\subsection{Regularization with Fourier expansion}
Following \cite{buckdiss} one now investigates the ingredients of the Fourier expansion of $\mathbb{P}hi(\mathfrak{a},m,s,z)$ and finds that the only part which does not converge at $s=1$ is the component
\mathfrak{b}egin{align} \label{diverging-part}
\frac{\mathfrak{p}i \Gamma_\mathfrak{a}mma(s-1/2)^2}{\sqrt{D} \Gamma_\mathfrak{a}mma(2s)} \mathfrak{b}r{4m/D}^s (N(\mathfrak{a})y_1y_2)^{1-s} \sum_{b=1}^\infty G^b(\mathfrak{a},m,0) b^{-2s}
\end{align}
of the constant Fourier coefficient. This is due to the diverging series $\sum_{b=1}^\infty G^b(\mathfrak{a},m,0) b^{-2s}$ of representation numbers. In \cite{bruinier2007borcherds} this series is investigated in more detail in the special case of $\mathfrak{a}= \mathcal{O}_K$, $D$ prime and $m \in \mathbb{N}$.
Analogously (with a lot of tedious computations which can be found in \cite{buckRepNumbers}) one shows Proposition~\ref{Gseries-with-div-sum} for which we need a generalized definition of the divisor sum:
\mathfrak{b}egin{definition} \label{div-sum-def}
For odd discriminant $D$, $m \in \mathbb{Z} \subsettm{0}$ and $\mathfrak{a} \subset \mathcal{O}_K$ such that $N(\mathfrak{a})$ is coprime to $D$ we define
\[
\sigma(\mathfrak{a},m,s) = |m|^{(1-s)/2} \sum_{d \mid m} d^s \mathfrak{p}rod_{p \mid D} (\chi_{D(p)}(d) + \chi_{D(p)}(N(\mathfrak{a})m/d)).
\]
The product ranges over all prime divisors $p$ of $D$, and $D(p) \in \subsett{\mathfrak{p}m p}$ is chosen such that $D(p)$ is a discriminant, i.e., $D(p) \equiv 1 \mathfrak{p}mod 4$.
Now for an arbitrary $\mathfrak{b} \in \mathcal{I}K$ we define
$\sigma(\mathfrak{b},m,s) := \sigma(\mathfrak{a},m,s)$
with $\mathfrak{a} \subset \mathcal{O}_K$ an ideal coprime to $D$ in the genus of $\mathfrak{b}$.
\end{definition}
The divisor sum satisfies the functional equation $\sigma(\mathfrak{a},m,s)=\sigma(\mathfrak{a},m,-s)$ (cf.~\cite[Lemma~7.3]{buckRepNumbers}).
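The functional equation is easy to test numerically. The following Python sketch (illustrative only) assumes the special case $\mathfrak{a}=\mathcal{O}_K$ and a prime discriminant $D \equiv 1 \bmod 4$, so that the product has a single factor and $\chi_D$ is given by the Legendre symbol modulo $D$; it evaluates $\sigma(\mathcal{O}_K,m,s)$ for positive $m$ and checks $\sigma(\mathcal{O}_K,m,s)=\sigma(\mathcal{O}_K,m,-s)$.
\begin{verbatim}
# Sketch of the divisor sum sigma(O_K, m, s) for a prime discriminant D = 1 mod 4
# (chi_D = Legendre symbol mod D), with a numerical test of the functional
# equation sigma(O_K, m, s) = sigma(O_K, m, -s).
def chi(D, n):
    n %= D
    if n == 0:
        return 0
    return 1 if pow(n, (D - 1)//2, D) == 1 else -1   # Euler's criterion

def sigma(D, m, s):
    total = 0.0
    for d in range(1, m + 1):
        if m % d == 0:
            total += d**s * (chi(D, d) + chi(D, m//d))
    return m**((1 - s)/2) * total

D = 13
for m in (1, 4, 9, 12):
    for s in (0.5, 1.0, 2.3):
        assert abs(sigma(D, m, s) - sigma(D, m, -s)) < 1e-9
        print(D, m, s, sigma(D, m, s))
\end{verbatim}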
\mathfrak{b}egin{proposition}[{{\cite[Theorem~8.1]{buckRepNumbers}}}] \label{Gseries-with-div-sum}
For odd discriminant $D$ and $m \in \mathbb{Z} \subsettm{0}$ the series $\sum_{b=1}^\infty G^b(\mathfrak{a},m,0) b^{-s}$
has a meromorphic continuation to the complex plane with a simple pole at $s=2$. It satisfies
\[
\sum_{b=1}^\infty G^b(\mathfrak{a},m,0)b^{-s} = |m|^{-s/2} \frac{\zeta(s-1)}{L(s,\chi_D)} \sigma(\mathfrak{a},m,1-s).
\]
\end{proposition}
Analogously to \cite[eq. (2.39)]{bruinier2007borcherds} one defines\footnote{Note that the definition of $\mathfrak{p}h(\mathfrak{a},m,s)$ and $q(\mathfrak{a},m)$ depends only on the genus of the ideal $\mathfrak{a}$ whereas $L(\mathfrak{a},m)$ depends on the ideal $\mathfrak{a}$ itself.}
\[
\mathfrak{p}h(\mathfrak{a},m,s) := -\frac{\Gamma_\mathfrak{a}mma(s-1/2)}{\Gamma_\mathfrak{a}mma(3/2-s)} \frac{\sigma(\mathfrak{a},m,1-2s)}{L(1-2s,\chi_D)}
\]
and proves using Proposition~\ref{Gseries-with-div-sum} that the constant term in the Laurent expansion at $s=1$ of \eqref{diverging-part} equals
\[
L(\mathfrak{a},m) - q(\mathfrak{a},m)\log(16 \mathfrak{p}i^2 y_1y_2)
\]
with
\mathfrak{b}egin{align} \label{q-sigma}
q(\mathfrak{a},m) = \mathfrak{p}h(\mathfrak{a},m,1) = -\frac{\sigma(\mathfrak{a},m,-1)}{L(-1,\chi_D)}
\end{align}
being the residue at $s=1$ of \eqref{diverging-part} and
\mathfrak{b}egin{align} \label{L-constant}
L(\mathfrak{a},m) = \mathfrak{p}h(\mathfrak{a},m,1) \mathfrak{b}r{ 2 \frac{L'(-1,\chi_D)}{L(-1,\chi_D)} - 2 \frac{\sigma'(\mathfrak{a},m,-1)}{\sigma(\mathfrak{a},m,-1)} + \log\mathfrak{b}r{ \frac{D}{N(\mathfrak{a})} }}.
\end{align}
As a direct consequence we obtain the following growth estimates.
\mathfrak{b}egin{corollary} \label{qL-growth}
For large $m$ we have
\[
q(\mathfrak{a},m) = O(m^2)
\und
L(\mathfrak{a},m) = O(m^2 \log(m)).
\]
\end{corollary}
The understanding of the series~\eqref{diverging-part} then leads to the next theorem.
\mathfrak{b}egin{theorem} \label{Phi-cont-theorem}
The function $\mathbb{P}hi(\mathfrak{a},m,s,z)$ has a meromorphic continuation in $s$ to\\$\subsett{s \in \mathbb{C} :\; \mathbb{R}e(s)>3/4}$ for all $z \in \mathbb{H}^2 \subsettminus T(\mathfrak{a},m)$. Up to a simple pole at $s=1$ of residue $q(\mathfrak{a},m)$ it is holomorphic in this domain.
\end{theorem}
Theorem~\ref{Phi-cont-theorem} allows us to define the regularized automorphic Green function $\mathbb{P}hi(\mathfrak{a},m,z)$.
\mathfrak{b}egin{definition} \label{regularized-Green-def}
We define
\[
\mathbb{P}hi(\mathfrak{a},m,z):=\mathcal{C}_{s=1} \sq{\mathbb{P}hi(\mathfrak{a},m,s,z)}
\]
to be the constant term in the Laurent expansion of $\mathbb{P}hi(\mathfrak{a},m,s,z)$ at $s=1$.
\end{definition}
By construction $\mathbb{P}hi(\mathfrak{a},m,z)$ is $\Gamma_\mathfrak{a}$ invariant and has logarithmic singularities along $-T(\mathfrak{a},m)$ because of
\mathfrak{b}egin{align*}
Q_{0} \mathfrak{b}r{ 1+ 2 g(A,z) }
&= \frac{1}{2}\log\mathfrak{b}r{ \frac{g(A,z)+1}{g(A,z)} }
= \frac{1}{2}\log\mathfrak{b}r{ \frac{\mathfrak{d}et(A)+h(A,z)}{h(A,z)} }\\
&= \frac{1}{2}\log\mathfrak{b}r{ \frac{q_{\tilde W_z}(A)}{h(A,z)} }
= \log \mathfrak{a}bs{ \frac{bz_1\overline{z_2}-\lambda z_1 -\lambda' \overline{z_2} + a}{bz_1z_2 - \lambda z_1 - \lambda' z_2 + a} }
\end{align*}
for $A =\sMatrix{a}{\lambda'}{\lambda}{b}$ by \eqref{hA-frac}. The numerator is smooth and zero free on $\mathbb{H}^2$ while the denominator is holomorphic with the appropriate zero set. Recall that $- \log |bz_1z_2 - \lambda z_1 - \lambda' z_2 + a|$ occurs twice because $A$ comes together with $-A$.
\mathfrak{b}egin{theorem} \label{first-Phi-fc}
The Fourier expansion of $\mathbb{P}hi(\mathfrak{a},m,z)$ is given for $z \in \mathbb{H}^2 \subsettminus S(\mathfrak{a},m)$ with $\mathcal{I}m(z)>m/(DN(\mathfrak{a}))$ by
\mathfrak{b}egin{align*}
&\mathbb{P}hi(\mathfrak{a},m,z)
= L(\mathfrak{a},m) - q(\mathfrak{a},m)\log(16 \mathfrak{p}i^2 y_1y_2)\\
+\ &4 \mathfrak{p}i \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) }\mathfrak{b}eta(\lambda y_1,\lambda' y_2)\\
+\ & \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) } \sum_{n=1}^\infty \frac{e^{-2 \mathfrak{p}i n |\tr(\lambda y)|}-e^{-2 \mathfrak{p}i n ( \lambda y_1 - \lambda'y_2)}}{ n} \mathfrak{b}r{e(n \tr(\lambda x))+e(-n \tr(\lambda x))}\\
+\ & \sum_{\substack{\nu \in \mathfrak{a}\mathfrak{d}^{-1} \\ \nu \ne 0 }} \frac{2 \mathfrak{p}i}{D} \sqrt{ \frac{mN(\mathfrak{a})}{|N(\nu)|} } \exp(-2\mathfrak{p}i \tr(|\nu|y))
\sum_{b=1}^\infty \frac{G^b(\mathfrak{a},m,\nu)}{b} \mathcal{I}^\nu_{1} \mathfrak{b}r{ \frac{4 \mathfrak{p}i}{b} \sqrt{ \frac{m |N(\nu)|}{N(\mathfrak{a})D}}} e(\tr(\nu x)).
\end{align*}
\end{theorem}
\mathfrak{b}egin{proof}
The theorem is obtained from the preceding treatment of \eqref{diverging-part} together with an evaluation of the other terms of the Fourier expansion of $\mathbb{P}hi(\mathfrak{a},m,s,z)$ at $s=1$.
\end{proof}
\mathfrak{b}egin{lemma} \label{series-into-log}
We have for $z \in \mathbb{H}^2 \subsettminus S(\mathfrak{a},m)$
\mathfrak{b}egin{align*}
\sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) } \sum_{n=1}^\infty \frac{e^{-2 \mathfrak{p}i n |\tr(\lambda y)|}-e^{-2 \mathfrak{p}i n ( \lambda y_1 - \lambda'y_2)}}{ n} \mathfrak{b}r{e(n \tr(\lambda x))+e(-n \tr(\lambda x))}\\
= -4 \mathfrak{p}i \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m)} \mathfrak{b}eta(\lambda y_1,\lambda' y_2)
+ 2\log \mathfrak{p}rod_{\lambda \in \Lambda^+(\mathfrak{a},m)} \mathfrak{a}bs{ \frac{1- e(|\lambda| z_1) \overline{ e(|\lambda'| z_2)}}{e(|\lambda| z_1) -e(|\lambda'| z_2)} }.
\end{align*}
\end{lemma}
\mathfrak{b}egin{proof}
This identity is proved by making use of the power series of the logarithm $\log(x)$ at $x=1$ and by careful case distinctions based on the sign of $\tr(\lambda y)$.
\end{proof}
Lemma~\ref{series-into-log} gives rise to the following simplification of Theorem~\ref{first-Phi-fc}.
\mathfrak{b}egin{theorem} \label{nice-Phi-rep}
The Green function $\mathbb{P}hi(\mathfrak{a},m,z)$ is given for ${z \in \mathbb{H}^2 \subsettminus S(\mathfrak{a},m)}$ with $\mathcal{I}m(z)>m/(DN(\mathfrak{a}))$ by
\mathfrak{b}egin{align*}
\mathbb{P}hi(\mathfrak{a},m,z)
&= L(\mathfrak{a},m) - q(\mathfrak{a},m)\log(16 \mathfrak{p}i^2 y_1y_2)\\
&+\ 2\log \mathfrak{p}rod_{\lambda \in \Lambda^+(\mathfrak{a},m)} \mathfrak{a}bs{ \frac{1- e(|\lambda| z_1) \overline{ e(|\lambda'| z_2)}}{e(|\lambda| z_1) -e(|\lambda'| z_2)} }\\
&+\ \sum_{\substack{\nu \in \mathfrak{a}\mathfrak{d}^{-1} \\ \nu \gg 0 }} \frac{2 \mathfrak{p}i}{D} \sqrt{ \frac{mN(\mathfrak{a})}{|N(\nu)|} } \sum_{b=1}^\infty \frac{G^b(\mathfrak{a},m,\nu)}{b} I_{1} \mathfrak{b}r{ \frac{4 \mathfrak{p}i}{b} \sqrt{ \frac{m |N(\nu)|}{N(\mathfrak{a})D}}}\\
&\times \ \mathfrak{b}r{e(\tr(\nu z)) + \overline{e(\tr(\nu z))}}\\
&+\ \sum_{\substack{\nu \in \mathfrak{a}\mathfrak{d}^{-1} \\ \nu >0, \, \nu'<0 }} \frac{2 \mathfrak{p}i}{D} \sqrt{ \frac{mN(\mathfrak{a})}{|N(\nu)|} } \sum_{b=1}^\infty \frac{G^b(\mathfrak{a},m,\nu)}{b} J_{1} \mathfrak{b}r{ \frac{4 \mathfrak{p}i}{b} \sqrt{ \frac{m |N(\nu)|}{N(\mathfrak{a})D}}}\\
&\times \ \mathfrak{b}r{ e(\nu z_1)\overline{ e(-\nu'z_2) } + \overline{e(\nu z_1)} e(-\nu'z_2) }.
\end{align*}
\end{theorem}
\mathfrak{b}egin{proof}
Starting from Theorem~\ref{first-Phi-fc}, the main work was done in Lemma~\ref{series-into-log}. To justify the different form of the exponentials in the last lines, verify
for $\nu \in K^\times$ that
\mathfrak{b}egin{align*}
e (\tr(\nu x)) e(i \tr(|\nu|y)) =
\mathfrak{b}egin{cases}
e(\tr(\nu z)),&\quad \nu \gg 0,\\
\overline{e(-\tr(\nu z))},&\quad \nu \ll 0,\\
e(\nu z_1)\overline{ e(-\nu'z_2) },&\quad \nu>0,\ \nu'<0,\\
\overline{e(-\nu z_1)}e(\nu'z_2),&\quad \nu<0,\ \nu'>0.
\end{cases}
\end{align*}
Finally, by definition~\eqref{gaus-sum} of the exponential sum $G^b(\mathfrak{a},m,\nu)$ we have
\[
G^b(\mathfrak{a},m,\nu)=G^b(\mathfrak{a},m,-\nu)
\]
since the index set of the sum is invariant under multiplication with $-1$.
\end{proof}
\mathfrak{b}egin{proposition} \label{Phi-real-analytic}
The regularized Green function $\mathbb{P}hi(\mathfrak{a},m,z)$ is real analytic and satisfies for $j \in \subsett{1,2}$
\[
\Delta_j \mathbb{P}hi(\mathfrak{a},m,z) = q(\mathfrak{a},m).
\]
\end{proposition}
\mathfrak{b}egin{proof}
In Theorem~\ref{nice-Phi-rep} we see that all terms except for $- q(\mathfrak{a},m)\log(16 \mathfrak{p}i^2 y_1y_2)$ are the real part of a holomorphic function (in $z_1$ or $z_2$, respectively). This proves that $\mathbb{P}hi(\mathfrak{a},m,z)$ is real analytic, and the Laplace equation follows from
\[
\Delta_j \log(y_1y_2) = \Delta_j \log(y_j) = -1.
\]
\end{proof}
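The Laplace identity used in the proof can also be checked symbolically. The following sketch (illustrative only) assumes the convention $\Delta_j = y_j^2\,(\partial_{x_j}^2+\partial_{y_j}^2)$, which is consistent with the eigenvalue $s(s-1)$ stated in Proposition~\ref{first-Phi-prop}.
\begin{verbatim}
# Symbolic check of Delta log(y) = -1 and of the eigenvalue equation for y^s,
# assuming Delta = y^2 (d^2/dx^2 + d^2/dy^2).
import sympy as sp

x, y, s = sp.symbols('x y s', positive=True)

def laplacian(f):
    return y**2 * (sp.diff(f, x, 2) + sp.diff(f, y, 2))

print(sp.simplify(laplacian(sp.log(y))))                # -> -1
print(sp.simplify(laplacian(y**s) - s*(s - 1)*y**s))    # -> 0
\end{verbatim}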
\subsection{Local Borcherds product} \label{local-bp-section}
In this subsection, we define for each ideal $\mathfrak{a} \in \mathcal{I}K$ the local Borcherds product $\mathbb{P}si(\mathfrak{a},m,z)$ at infinity in $X(\mathfrak{a})H$, obtain interesting representations and express it in local coordinates to determine its vanishing orders along the components of the exceptional divisor $E^\infty(\mathfrak{a})$.
The motivation is that the logarithmic singularities of $\mathbb{P}hi(\mathfrak{a},m,z)$ at and near infinity match (up to a factor) the logarithm of $|\mathbb{P}si(\mathfrak{a},m,z)|$.
The latter is analyzed in Corollary~\ref{Bp-log-sing}.
\mathfrak{b}egin{definition} \label{Psi-sigma}
Let
\[
\sigma : \Lambda^+(\mathfrak{a},m) \to \subsett{\mathfrak{p}m 1}
\]
be a sign function with
\[
\lim_{\lambda \to 0} \sigma(\lambda) = +1
\und
\lim_{\lambda \to \infty} \sigma(\lambda) = -1.
\]
We define for $z \in \mathbb{H}^2$
\[
\mathbb{P}si_\sigma(\mathfrak{a},m,z)
:= \mathfrak{p}rod_{\lambda \in \Lambda^+(\mathfrak{a},m)} \sigma(\lambda) \mathfrak{p}si_\lambda(z)
\with
\mathfrak{p}si_\lambda(z) := e(|\lambda| z_1) -e(|\lambda'| z_2).
\]
\end{definition}
\mathfrak{b}egin{remark}
The function $\sigma$ in the definition of $\mathbb{P}si_\sigma(\mathfrak{a},m,z)$ is there only for technical reasons, namely to make the product convergent: for fixed $z \in \mathbb{H}^2$ we have
\[
\lim_{\lambda \to 0}\mathfrak{p}si_\lambda(z) = +1
\und
\lim_{\lambda \to \infty }\mathfrak{p}si_\lambda(z) = -1.
\]
By the equivalence relation
\[
\sigma_1 \sim \sigma_2
\quad:\Leftrightarrow\quad
\mathfrak{p}rod_{\lambda \in \Lambda^+(\mathfrak{a},m)} \sigma_1(\lambda)\sigma_2(\lambda) = 1
\]
we partition the set of all admissible sign functions $\sigma$ into two classes. Note that the product defining the equivalence relation is well-defined since almost all factors are equal to $1$.
We have
\[
\mathbb{P}si_{\sigma_1}(\mathfrak{a},m,z) = \mathbb{P}si_{\sigma_2}(\mathfrak{a},m,z) \quad\Leftrightarrow\quad \sigma_1 \sim \sigma_2
\]
and
\[
\mathbb{P}si_{\sigma_1}(\mathfrak{a},m,z) = -\mathbb{P}si_{\sigma_2}(\mathfrak{a},m,z) \quad\Leftrightarrow\quad \sigma_1 \not\sim \sigma_2.
\]
There is no canonical choice for the sign function $\sigma$, which is why we have to include it in the definition of $\mathbb{P}si_\sigma(\mathfrak{a},m,z)$. Later we are mostly interested in $|\mathbb{P}si(\mathfrak{a},m,z)|$, where the original sign of the product does not matter anymore. Whenever the sign is unimportant we simply write $\mathbb{P}si(\mathfrak{a},m,z)$.
\end{remark}
\mathfrak{b}egin{proposition} \label{power-of-BP-is-invariant}
The product $\mathbb{P}si_\sigma(\mathfrak{a},m,z)$ is a holomorphic function on $\mathbb{H}^2$ with simple roots at $T^\infty(\mathfrak{a},m)$. Let $n \in 2\mathbb{N}$ with
\[
\frac{n}{1-\varepsilon_0^2} \in \mathcal{O}_K.
\]
Then $\mathbb{P}si(\mathfrak{a},m,z)^n$ is invariant under $\Gas{\infty}$.
\end{proposition}
\mathfrak{b}egin{proof}
Clearly, each $\mathfrak{p}si_\lambda(z)$ for $\lambda \in \Lambda^+(\mathfrak{a},m)$ is holomorphic. Consider
\mathfrak{b}egin{align*}
\mathfrak{p}si_\lambda(z) = 0
&\quad\Leftrightarrow\quad e(\lambda z_1) = e(-\lambda'z_2)\\
&\quad\Leftrightarrow\quad e(\tr(\lambda z)) = 1\\
&\quad\Leftrightarrow\quad \tr(\lambda z) \in \mathbb{Z}
\end{align*}
to see that $\mathfrak{p}si_\lambda(z)$ vanishes if and only if $z$ lies in the components of $T^\infty(\mathfrak{a},m)$ belonging to $\lambda$ (cf.~representation~\eqref{Tinf-def} of $T^\infty(\mathfrak{a},m)$). Further, from $e(z)$ having a non-vanishing derivative it follows that all zeros of $\mathfrak{p}si_\lambda(z)$ are simple.
Hence, the normal convergence of the product proves that $\mathbb{P}si(\mathfrak{a},m,z)$ is a holomorphic function on $\mathbb{H}^2$ with simple roots at $T^\infty(\mathfrak{a},m)$.
To prove the $\Gas{\infty}$ invariance we make use of the decomposition
$\Gas{\infty}b \cong \mathfrak{a}^{-1} \rtimes (\mathcal{O}_K^\times)^2$
and show the invariance for both factors individually.
For $\varepsilon^2 \in (\mathcal{O}_K^\times)^2$ it is immediate by the definition of $\mathfrak{p}si_\lambda(z)$ that we have
\[
\mathfrak{p}si_\lambda(\varepsilon^2 z) = \mathfrak{p}si_{\varepsilon^2 \lambda}(z).
\]
Because $n$ is even, we do not have to worry about the sign. Hence, the factors are only permuted by the action of $(\mathcal{O}_K^\times)^2$.
However, for $\mu \in \mathfrak{a}^{-1}$ we have
\mathfrak{b}egin{align*}
\mathfrak{p}si_\lambda(z+\mu)
&= e(\lambda (z_1+\mu)) - e(-\lambda' (z_2+\mu'))\\
&= e(\lambda z_1)e(\lambda\mu) - e(-\lambda' z_2)e(-\lambda' \mu')\\
&= e(\lambda\mu) \mathfrak{b}r{e(\lambda z_1) - e(-\lambda' z_2)e(-\lambda\mu)e(-\lambda' \mu')}\\
&= e(\lambda\mu) \mathfrak{p}si_{\lambda}(z).
\end{align*}
Here we used $\tr(\lambda\mu) \in \mathbb{Z}$ which is true because $\mathfrak{a}\mathfrak{d}^{-1}$ is the trace dual of $\mathfrak{a}^{-1}$.
Analogously, we can factor $e(-\lambda'\mu')$ out to obtain
\[
\mathfrak{p}si_\lambda(z+\mu) = e(-\lambda'\mu') \mathfrak{p}si_{\lambda}(z).
\]
In particular, we have $e(\lambda\mu)= e(-\lambda'\mu')$ which can also be seen directly using $\tr(\lambda\mu) \in \mathbb{Z}$.
The set $\Lambda^+(\mathfrak{a},m)$ decomposes into finitely many $(\mathcal{O}_K^\times)^2$ orbits. For each orbit we have
\[
\mathfrak{p}rod_{k \in \mathbb{Z}} \mathfrak{p}si_{\varepsilon_0^{2k} \lambda}(z+\mu)^n
= \mathfrak{p}rod_{k \in \mathbb{Z}} \mathfrak{p}si_{\varepsilon_0^{2k}\lambda}(z)^n \cdot \mathfrak{p}rod_{k \in \mathbb{Z}} e(\lambda \varepsilon_0^{2k} \mu)^n.
\]
To compute the latter product we use
\[
\mathfrak{p}rod_{k \in \mathbb{Z}} e(\lambda \varepsilon_0^{2k} \mu)
= \mathfrak{p}rod_{k=1}^\infty e(\lambda \varepsilon_0^{-2k} \mu) \cdot \mathfrak{p}rod_{k=0}^\infty e(-\lambda' \varepsilon_0^{-2k} \mu').
\]
Using the functional equation $e(x+y)=e(x)e(y)$, this boils down to computing the sum
\mathfrak{b}egin{align*}
\sum_{k=1}^\infty \lambda \varepsilon_0^{-2k} \mu - \sum_{k=0}^\infty \lambda' \varepsilon_0^{-2k} \mu'
&= \lambda \mu \frac{\varepsilon_0^{-2}}{1-\varepsilon_0^{-2}} - \lambda' \mu' \frac{1}{1-\varepsilon_0^{-2}}\\
&= \lambda \mu \frac{1}{\varepsilon_0^{2}-1} - \lambda' \mu' \mathfrak{b}r{ \frac{1}{1-\varepsilon_0^{2}} }'
= \tr\mathfrak{b}r{ \frac{\lambda \mu }{\varepsilon_0^{2}-1} }.
\end{align*}
Hence, we have proven
\[
\mathfrak{p}rod_{k \in \mathbb{Z}} e(\lambda \varepsilon_0^{2k} \mu)^n = e\mathfrak{b}r{ \tr\mathfrak{b}r{ \frac{n \lambda \mu }{\varepsilon_0^{2}-1} }}.
\]
By the choice of $n$ we have
\[
\frac{n\lambda}{\varepsilon_0^{2}-1} \in \mathfrak{a}\mathfrak{d}^{-1}
\]
which proves
\[
\tr\mathfrak{b}r{ \frac{n\lambda \mu }{\varepsilon_0^{2}-1} } \in \mathbb{Z}.
\]
Hence, the infinite product
\[
\mathfrak{p}rod_{k \in \mathbb{Z}} \mathfrak{p}si_{\varepsilon_0^{2k}\lambda}(z)^n
\]
is invariant under translation by $\mathfrak{a}^{-1}$ and therefore invariant under $\Gas{\infty}$. The same holds for $\mathbb{P}si(\mathfrak{a},m,z)^n$ which is a finite product of such factors.
\end{proof}
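The geometric series evaluation in the preceding proof can be double-checked numerically; the following sketch (illustrative only) uses the hypothetical conjugate pair $\lambda\mu = 3+\sqrt{5}$, $(\lambda\mu)' = 3-\sqrt{5}$ and $D=5$.
\begin{verbatim}
# Numerical check of
#   sum_{k>=1} (lambda*mu) eps0^{-2k} - sum_{k>=0} (lambda'*mu') eps0^{-2k}
#     = tr( lambda*mu / (eps0^2 - 1) )
# for the hypothetical conjugate pair lambda*mu = 3 + sqrt(5), D = 5.
import math

sqD  = math.sqrt(5)
eps2 = ((1 + sqD)/2)**2                  # eps0^2; its conjugate is eps0^{-2}
a, ac = 3 + sqD, 3 - sqD                 # lambda*mu and its Galois conjugate

lhs = sum(a*eps2**(-k) for k in range(1, 200)) - sum(ac*eps2**(-k) for k in range(0, 200))
rhs = a/(eps2 - 1) + ac/(eps2**(-1) - 1)     # = tr( lambda*mu / (eps0^2 - 1) )
print(lhs, rhs, abs(lhs - rhs))              # agreement up to truncation/rounding error
\end{verbatim}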
An easy way to come up with an admissible sign function $\sigma$ is to partition the set $\Lambda^+(\mathfrak{a},m)$ into a lower and an upper part with respect to a fixed $w \in (\mathbb{R}^+)^2$ using the trace by
\[
\sigma_w : \Lambda^+(\mathfrak{a},m) \to \subsett{\mathfrak{p}m 1},
\quad \sigma_w (\lambda) := \mathfrak{b}egin{cases}
+1,&\quad \tr(\lambda w) < 0,\\
-1,&\quad \tr(\lambda w) \ge 0.
\end{cases}
\]
The next proposition states a useful representation of $\mathbb{P}si_{\sigma_w}$.
\mathfrak{b}egin{proposition} \label{borcherds-product-using-weyl}
Let $w \in (\mathbb{R}^+)^2$ and let
\[
\Lambda_w :=
\subsett{ \lambda \in \Lambda^+(\mathfrak{a},m) :\; \tr(\lambda w) \ge 0}
\cup
\subsett{ \lambda \in \Lambda^-(\mathfrak{a},m) :\; \tr(\lambda w) > 0}.
\]
Then we have
\[
\mathbb{P}si_{\sigma_w}(\mathfrak{a},m,z) = e\mathfrak{b}r{ \tr\mathfrak{b}r{ \rho(\mathfrak{a},m,w) z } } \mathfrak{p}rod_{\lambda \in \Lambda_w} \mathfrak{b}r{1- e(\tr(\lambda z))}.
\]
\end{proposition}
\mathfrak{b}egin{proof}
This proposition is proven by exploiting the functional equation of the exponential function and using $R(\mathfrak{a},m,w)$, the set of reduced $\lambda \in \Lambda^+(\mathfrak{a},m)$ with respect to $w$, to express the elements of $\Lambda_w$.
\end{proof}
The classical approach to introducing the local Borcherds product makes use of Weyl chambers (cf.~\cite[p.~153, eq. (3.13)]{123mf}). The next corollary shows that the resulting product is the same.
\mathfrak{b}egin{corollary}
Let $W \in W(\mathfrak{a},m)$ be a Weyl chamber of index $m$. Let us fix one $z_0 = x_0 + iy_0 \in W$ and define $\sigma(\lambda) :=-\sgn(\tr(\lambda y_0))$. Then we have
\[
\mathbb{P}si_\sigma(\mathfrak{a},m,z) = e\mathfrak{b}r{ \tr\mathfrak{b}r{ \rho(\mathfrak{a},m,W) z } } \mathfrak{p}rod_{ \substack{\lambda \in \Lambda(\mathfrak{a},m) \\ (\lambda,W)>0} } \mathfrak{b}r{1- e(\tr(\lambda z))}.
\]
\end{corollary}
\mathfrak{b}egin{proof}
Using $w:=y_0$, we have $\sigma=\sigma_{w}$, $\rho(\mathfrak{a},m,W) = \rho(\mathfrak{a},m,w)$ and
\[
\subsett{\lambda \in \Lambda(\mathfrak{a},m) :\; (\lambda,W)>0} = \Lambda_w
\]
with $\Lambda_w$ defined as in Proposition~\ref{borcherds-product-using-weyl}. Hence, the result is nothing but a direct application of Proposition~\ref{borcherds-product-using-weyl}.
\end{proof}
\mathfrak{b}egin{proposition} \label{BP-vanish-order}
Let $(\mathfrak{a}lpha,\mathfrak{b}eta)$ be a totally positive basis of $\mathfrak{a}^{-1}$ and $n \in \mathbb{N}$ with
\[
\frac{n}{1-\varepsilon_0^2} \in \mathcal{O}_K.
\]
Then $\mathbb{P}si(\mathfrak{a},m,z)^n$ is invariant under $\mathfrak{a}^{-1}$ and possesses a holomorphic extension to $u=0$ and $v=0$ in local coordinates $(u,v)$ with respect to $(\mathfrak{a}lpha,\mathfrak{b}eta)$. At $u=0$ ($v=0$, respectively) the product vanishes. Its order of vanishing along $u$ ($v$, respectively) is given by $n\tr(\rho(\mathfrak{a},m,\mathfrak{a}lpha)\mathfrak{a}lpha)$ ($n\tr(\rho(\mathfrak{a},m,\mathfrak{b}eta)\mathfrak{b}eta)$, respectively).
\end{proposition}
\mathfrak{b}egin{proof}
Since $\mathfrak{a}lpha$ and $\mathfrak{b}eta$ (and hence $u$ and $v$) are interchangeable, we prove the result for $v$ only. By Proposition~\ref{borcherds-product-using-weyl} the Borcherds product is expressible as
\[
e\mathfrak{b}r{ \tr\mathfrak{b}r{ \rho(\mathfrak{a},m,\mathfrak{b}eta) z } } \mathfrak{p}rod_{\lambda \in \Lambda_\mathfrak{b}eta} \mathfrak{b}r{1- e(\tr(\lambda z))}.
\]
By Lemma~\ref{exponentials-in-local} each factor of the product is $\mathfrak{a}^{-1}$ invariant and we have
\[
\mathfrak{p}rod_{\lambda \in \Lambda_\mathfrak{b}eta} \mathfrak{b}r{1- e(\tr(\lambda z))}=
\mathfrak{p}rod_{\lambda \in \Lambda_\mathfrak{b}eta} (1-u^{\tr(\lambda \mathfrak{a}lpha)}v^{\tr(\lambda \mathfrak{b}eta)})
\]
in local coordinates.
We list some facts we know about the exponents of $u$ and $v$:
\mathfrak{b}egin{enumerate}[(i)]
\item We have $\tr(\lambda \mathfrak{a}lpha) \in \mathbb{Z}$ and $\tr(\lambda \mathfrak{b}eta) \in \mathbb{N}_0$ for all $\lambda \in \Lambda_\mathfrak{b}eta$.
\item For each $c \in \mathbb{Z}$ there are at most two $\lambda \in \Lambda_\mathfrak{b}eta$ with $\tr(\lambda \mathfrak{a}lpha) = c$ ($\tr(\lambda \mathfrak{b}eta) = c$, respectively).
\item There are only finitely many $\lambda \in \Lambda_\mathfrak{b}eta$ with $\tr(\lambda \mathfrak{a}lpha)<0$.
\end{enumerate}
Those facts imply that the product converges normally to a holomorphic function in $u$ and $v$ in the domain
\[
\subsett{ (u,v) \in \mathbb{C}^2 :\; 0<|u|<1 , |v|<1 }
\]
and that it does not vanish at $v=0$.
Hence, we are left with inspecting the factor $e(\tr(\rho z))$ in front of the product (for simplicity we abbreviate $\rho:=\rho(\mathfrak{a},m,\mathfrak{b}eta)$ for the rest of the proof). This factor might not be $\mathfrak{a}^{-1}$ invariant, but its $n$-th power is, because $e(\tr(\rho z))^n=e(\tr(n\rho z))$.
Now by assumption on $n$ and the definition of the Weyl vector $\rho$ we have $n\rho \in \mathfrak{a}\mathfrak{d}^{-1}$. Hence, Lemma~\ref{exponentials-in-local} again implies the $\mathfrak{a}^{-1}$ invariance of $e(\tr(n\rho z))$ and
\[
e(\tr(n\rho z)) = u^{\tr(n\rho \mathfrak{a}lpha)}v^{\tr(n\rho \mathfrak{b}eta)}.
\]
It is easy to see that the Weyl vector $\rho$ is totally positive. That makes $\tr(\rho \mathfrak{b}eta)$ positive which finishes the proof.
\end{proof}
\mathfrak{b}egin{corollary} \label{Bp-log-sing}
The function
\[
\log \mathfrak{a}bs{\mathbb{P}si(\mathfrak{a},m,z)}^2
\]
is well-defined in a neighborhood of the exceptional divisor $E^\infty(\mathfrak{a}) \subset X(\mathfrak{a})H$ and has logarithmic singularities along the divisor $T^\infty(\mathfrak{a},m)+Z^\infty(\mathfrak{a},m)$.
\end{corollary}
\mathfrak{b}egin{proof}
Let $n \in \mathbb{N}$ be as in Proposition~\ref{power-of-BP-is-invariant}. Then $\mathbb{P}si(\mathfrak{a},m,z)^n$ is invariant under $\Gas{\infty}$. This shows that $\mathbb{P}si(\mathfrak{a},m,z)^n$ is well-defined on a punctured neighborhood of $\infty$ in $X(\mathfrak{a})B$ and holomorphic there. With Proposition~\ref{BP-vanish-order} we obtain that $\mathbb{P}si(\mathfrak{a},m,z)^n$ is well-defined on $E^\infty(\mathfrak{a})$ as well, hence on a neighborhood of $E^\infty(\mathfrak{a})$ in $X(\mathfrak{a})H$, and that this extension is holomorphic.
With $\mathbb{P}si(\mathfrak{a},m,z)^n$ being well-defined, of course also
\[
\log \mathfrak{a}bs{\mathbb{P}si(\mathfrak{a},m,z)}^2 = \frac{1}{n} \log \mathfrak{a}bs{\mathbb{P}si(\mathfrak{a},m,z)^n}^2
\]
is well-defined.
We now prove the stated logarithmic singularities. For this we have to show that the divisor of the holomorphic function $\mathbb{P}si(\mathfrak{a},m,z)^n$ agrees with $n (T^\infty(\mathfrak{a},m)+Z^\infty(\mathfrak{a},m))$.
By Proposition~\ref{power-of-BP-is-invariant} the function $\mathbb{P}si(\mathfrak{a},m,z)^n$ vanishes of order $n$ at $T^\infty(\mathfrak{a},m)$ in $X(\mathfrak{a})$. By Proposition~\ref{BP-vanish-order} the divisor $n Z^\infty(\mathfrak{a},m)$ provides the correct multiplicities for the vanishing of $\mathbb{P}si(\mathfrak{a},m,z)^n$ along $E^\infty(\mathfrak{a})$. To see this, recall definition~\eqref{Zinf-def} of $Z^\infty(\mathfrak{a},m)$ with $(\mathfrak{a}lpha,\mathfrak{b}eta):=(A_{k-1},A_k)$ and note that the multiplicities of the components $S_k$ of $nZ^\infty(\mathfrak{a},m)$ are precisely defined to match the multiplicities of the zeros of $\mathbb{P}si(\mathfrak{a},m,z)^n$ along $S_k$.
\end{proof}
\subsection{Growth analysis}
In this subsection we prove that the regularized automorphic Green functions $\mathbb{P}hi(\mathfrak{a},m,z)$ are actual Green functions, i.e., $\mathbb{P}hi(\mathfrak{a},m,z)$ is a pre-log-log Green function on $X(\mathfrak{a})H$ for the divisor $Z(\mathfrak{a},m)$. On $X(\mathfrak{a})$ this is already clear because $\mathbb{P}hi(\mathfrak{a},m,z)$ has logarithmic singularities along $-T(\mathfrak{a},m)$ and is elsewhere smooth, even real analytic. At the cusps, however, $\mathbb{P}hi(\mathfrak{a},m,z)$ is not smooth anymore, even after subtracting the logarithmic singularities and the log-log growth of $- q(\mathfrak{a},m)\log(16 \mathfrak{p}i^2 y_1y_2)$.
We start with three lemmata which are straightforward to prove using Remark~\ref{pre-log-log-condition}.
\mathfrak{b}egin{lemma} \label{ddc-for-absolute-powers}
Let $a_1,a_2, b_1,b_2 \in \mathbb{Z}$ and $\mathfrak{a}lpha,\mathfrak{b}eta \in \mathbb{R}$ with
\[
\mathfrak{a}lpha + a_1 + a_2 > 0
\und
\mathfrak{b}eta + b_1 + b_2 > 0.
\]
Then the function
\[
f: (\mathbb{C}^\times)^2 \to \mathbb{C},\quad
f(u,v) = u^{a_1}\overline{u}^{a_2} |u|^\mathfrak{a}lpha \cdot v^{b_1}\overline{v}^{b_2} |v|^\mathfrak{b}eta
\]
is a pre-log-log growth form along $uv=0$.
\end{lemma}
\mathfrak{b}egin{lemma} \label{ddc-for-log-one-minusabsolute-powers}
Let $a_1,a_2, b_1,b_2 \in \mathbb{Z}$ and $\mathfrak{a}lpha,\mathfrak{b}eta \in \mathbb{R}$ with
\[
\mathfrak{a}lpha + a_1 + a_2 > 0
\und
\mathfrak{b}eta + b_1 + b_2 > 0.
\]
Then the function
\[
f: (\mathbb{C}^\times)^2 \to \mathbb{C},\quad
f(u,v) = \log \mathfrak{a}bs{1- u^{a_1}\overline{u}^{a_2} |u|^\mathfrak{a}lpha \cdot v^{b_1}\overline{v}^{b_2} |v|^\mathfrak{b}eta}^2
\]
is a pre-log-log growth form along $uv=0$.
\end{lemma}
\mathfrak{b}egin{lemma} \label{green-equation-for-log-t}
Let $(\mathfrak{a}lpha,\mathfrak{b}eta)$ be a totally positive basis of $\mathfrak{a}^{-1}$. The $\mathfrak{a}^{-1}$ invariant function
\[
f:\mathbb{H}^2 \to \mathbb{C},\quad
f(z):=\log(y_1y_2)\]
expressed in local coordinates $(u,v)$ with respect to $(\mathfrak{a}lpha,\mathfrak{b}eta)$ is a pre-log-log growth form along $uv=0$.
\end{lemma}
\mathfrak{b}egin{theorem} \label{Phi-is-green}
The function $\mathbb{P}hi(\mathfrak{a},m,z)$ is a pre-log-log Green function on $X(\mathfrak{a})H$ with respect to the divisor $Z(\mathfrak{a},m)$.
\end{theorem}
\mathfrak{b}egin{proof}
As already mentioned at the beginning of this subsection, we do not have to deal with the interior $X(\mathfrak{a})$ anymore.
Therefore, the focus of this proof lies on the cusps. Because of the isomorphism
$\overline{X(\mathfrak{a}\mathfrak{b}^2)} \isoArrow X(\mathfrak{a})H$
together with
\[ \mathbb{P}hi(\mathfrak{b},m,s,Mz) = \mathbb{P}hi(\mathfrak{a}^2\mathfrak{b},m,s,z) \]
for $M \in \mathcal{S}LM{\mathfrak{a}}{\mathfrak{b}}$ by Proposition~\ref{first-Phi-prop} it is enough to consider the cusp~$\infty$.
We write
\[
\mathbb{P}hi(\mathfrak{a},m,z) = f_1(z) + f_2(z) + f_3(z) + f_4(z) + f_5(z) + f_6(z)
\]
near the cusp~$\infty$ as sum of six parts according to Theorem~\ref{nice-Phi-rep}.
We show that the functions $f_j$ with $j \in \subsett{1,2,3,4,5}$ are pre-log-log growth forms along $E^\infty(\mathfrak{a})$ and that $f_6$ has logarithmic singularities along the divisor $-(T^\infty(\mathfrak{a},m)+Z^\infty(\mathfrak{a},m))$. Note that the divisor $T^\infty(\mathfrak{a},m)+Z^\infty(\mathfrak{a},m)$ is the part of $Z(\mathfrak{a},m)$ in small neighborhoods of $E^\infty(\mathfrak{a})$.
For proving the pre-log-log growth we express $f_j$ in local coordinates $(u,v)$ with respect to a totally positive basis $(\mathfrak{a}lpha,\mathfrak{b}eta)$ of $\mathfrak{a}^{-1}$.
Let us make our decomposition of $\mathbb{P}hi(\mathfrak{a},m,z)$ precise:
\mathfrak{b}egin{align*}
f_1(z) &:= L(\mathfrak{a},m),\\
f_2(z) &:= - q(\mathfrak{a},m)\log(16 \mathfrak{p}i^2 y_1y_2),\\
f_3(z) &:= \sum_{\substack{\nu \in \mathfrak{a}\mathfrak{d}^{-1} \\ \nu \gg 0 }} \frac{2 \mathfrak{p}i}{D} \sqrt{ \frac{mN(\mathfrak{a})}{|N(\nu)|} } \sum_{b=1}^\infty \frac{G^b(\mathfrak{a},m,\nu)}{b} I_{1} \mathfrak{b}r{ \frac{4 \mathfrak{p}i}{b} \sqrt{ \frac{m |N(\nu)|}{N(\mathfrak{a})D}}}\\
&\times \ \mathfrak{b}r{e(\tr(\nu z)) + \overline{e(\tr(\nu z))}},\\
f_4(z) &:= \sum_{\substack{\nu \in \mathfrak{a}\mathfrak{d}^{-1} \\ \nu >0, \, \nu'<0 }} \frac{2 \mathfrak{p}i}{D} \sqrt{ \frac{mN(\mathfrak{a})}{|N(\nu)|} } \sum_{b=1}^\infty \frac{G^b(\mathfrak{a},m,\nu)}{b} J_{1} \mathfrak{b}r{ \frac{4 \mathfrak{p}i}{b} \sqrt{ \frac{m |N(\nu)|}{N(\mathfrak{a})D}}}\\
&\times \ \mathfrak{b}r{ e(\nu z_1)\overline{ e(-\nu'z_2) } + \overline{e(\nu z_1)} e(-\nu'z_2) },\\
f_5(z) &:= \log \mathfrak{p}rod_{\lambda \in \Lambda^+(\mathfrak{a},m)} \mathfrak{a}bs{ 1- e(|\lambda| z_1) \overline{ e(|\lambda'| z_2)}}^2
= \sum_{\lambda \in \Lambda^+(\mathfrak{a},m)} \log \mathfrak{a}bs{ 1- e(\lambda z_1) \overline{ e(-\lambda' z_2)}}^2,\\
f_6(z) &:= -\log \mathfrak{p}rod_{\lambda \in \Lambda^+(\mathfrak{a},m)} \mathfrak{a}bs{ e(|\lambda| z_1) -e(|\lambda'| z_2) }^2 = -\log \mathfrak{a}bs{\mathbb{P}si(\mathfrak{a},m,z)}^2.
\end{align*}
The function $f_1$ is constant, hence it is a pre-log-log growth form.
The function $f_2$ was considered (up to constants) in Lemma~\ref{green-equation-for-log-t}.
The function $f_3$ is real analytic even at $uv=0$ because of
\mathfrak{b}egin{align*}
e(\tr(\nu z)) = u^{\tr(\mathfrak{a}lpha \nu)} v^{\tr(\mathfrak{b}eta \nu)}
\und
\overline{e(\tr(\nu z))} = \overline{u}^{\tr(\mathfrak{a}lpha \nu)} \overline{v}^{\tr(\mathfrak{b}eta \nu)}
\end{align*}
by Lemma~\ref{exponentials-in-local}. Note that $\tr(\mathfrak{a}lpha \nu),\tr(\mathfrak{b}eta\nu) \in \mathbb{N}$. Hence, it is a pre-log-log growth form.
Unfortunately, the function $f_4$ is not even differentiable at $uv=0$, but it is at least continuous. By Lemma~\ref{exponentials-in-local} we have
\mathfrak{b}egin{align*}
e(\nu z_1)\overline{ e(-\nu'z_2) }
&= u^{\mathfrak{a}lpha \nu} \overline{u}^{-\mathfrak{a}lpha'\nu'} v^{\mathfrak{b}eta \nu} \overline{v}^{-\mathfrak{b}eta'\nu'}\\
&= u^{\tr(\mathfrak{a}lpha \nu)} |u|^{-2 \mathfrak{a}lpha'\nu'} v^{\tr(\mathfrak{b}eta \nu)} |v|^{-2 \mathfrak{b}eta'\nu'}
\end{align*}
and
\[
\overline{e(\nu z_1)} e(-\nu'z_2)
=\overline{u}^{\tr(\mathfrak{a}lpha \nu)} |u|^{-2 \mathfrak{a}lpha'\nu'} \overline{v}^{\tr(\mathfrak{b}eta \nu)} |v|^{-2 \mathfrak{b}eta'\nu'}.
\]
The advantage of having integer powers of $u$, $\overline{u}$, $v$ and $\overline{v}$ is that these expressions are well-defined without specifying a branch of the logarithm.
Since $\nu>0$ and $\nu'<0$, we have
\[
\tr(\mathfrak{a}lpha \nu)-2 \mathfrak{a}lpha'\nu' = \mathfrak{a}lpha \nu - \mathfrak{a}lpha'\nu' > 0
\und
\tr(\mathfrak{b}eta \nu)-2 \mathfrak{b}eta'\nu' = \mathfrak{b}eta \nu - \mathfrak{b}eta'\nu'> 0.
\]
Hence, the claim for $f_4$ follows by Lemma~\ref{ddc-for-absolute-powers}.
Considering $f_5$, we see that we can write each summand in local coordinates using the same identity and get
\[
\log \mathfrak{a}bs{ 1- e(\lambda z_1) \overline{ e(-\lambda' z_2)}}^2 =
\log \mathfrak{a}bs{ 1- u^{\tr(\mathfrak{a}lpha \lambda)} |u|^{-2 \mathfrak{a}lpha'\lambda'} v^{\tr(\mathfrak{b}eta \lambda)} |v|^{-2 \mathfrak{b}eta'\lambda'} }^2.
\]
Because of $\lambda>0$ and $\lambda'<0$ we can apply Lemma~\ref{ddc-for-log-one-minusabsolute-powers} to achieve the claim for $f_5$.
Now, we are left with
\[
f_6(z) = -\log \mathfrak{a}bs{\mathbb{P}si(\mathfrak{a},m,z)}^2
\]
for which we have proven the claim already in Corollary~\ref{Bp-log-sing}.
\end{proof}
\section{Smooth decomposition of automorphic Green functions} \label{decomp-section}
\subsection{A valuable representation using the hypergeometric function}
In this subsection we follow the idea (for example present in \cite{bruinier2021cm}) to express $\mathbb{P}hi(\mathfrak{a},m,s,z)$ using the Gaussian hypergeometric function ${}_2F_1(a,b;c;z)$.
This yields a valuable decomposition
\[
\mathbb{P}hi(\mathfrak{a},m,s,z) = \sum_{n=0}^\infty \mathbb{P}hi_n(\mathfrak{a},m,s,z)
\]
into smooth, $\Gamma_\mathfrak{a}$ invariant functions $\mathbb{P}hi_n(\mathfrak{a},m,s,z)$. Using this decomposition, many already known results about $\mathbb{P}hi(\mathfrak{a},m,s,z)$ and $\mathbb{P}hi(\mathfrak{a},m,z)$ can be reproved.
Some of those proofs reveal new perspectives on the old results.
For example, computing the Fourier expansions of the functions $\mathbb{P}hi_n(\mathfrak{a},m,s,z)$ yields new formulae for the Fourier coefficients of $\mathbb{P}hi(\mathfrak{a},m,s,z)$.
However, the motivation for the author to look at this decomposition was to prove the integrability of $\mathbb{P}hi(\mathfrak{a},m,z)$ and understand the growth behavior of
\[
\int_{X(\mathfrak{a})} |\mathbb{P}hi(\mathfrak{a},m,z)| \omega^2
\]
for large $m$ which is essential for the main result of \cite{buckdiss}. Those two results can be found in Corollary~\ref{Phi-integrable} and Theorem~\ref{Phi-integrable-estimate}. The main work towards these theorems is done in the current section.
The main ingredient in Definition~\ref{Phi-amsz-def} of $\mathbb{P}hi(\mathfrak{a},m,s,z)$ is $Q_{s-1}(x)$, the Legendre function of the second kind. This however has the nice representation
\mathfrak{b}egin{align} \label{Qs-as-hyper}
Q_{s-1}(x) = \frac{\Gamma_\mathfrak{a}mma(s)^2}{2\Gamma_\mathfrak{a}mma(2s)} \mathfrak{b}r{ \frac{2}{1+x} }^s {}_2F_1\mathfrak{b}r{s,s;2s;\frac{2}{1+x}}
\end{align}
using the hypergeometric function ${}_2F_1(a,b;c;z)$
which follows from \cite[14.3.7 and 15.8.13]{handbook} together with the Legendre duplication formula. The hypergeometric function, in turn, is defined by its power series
\mathfrak{b}egin{align} \label{hyper-power}
\mathfrak{b}egin{split}
{}_2F_1(a,b;c;z)
:= \sum_{n=0}^\infty \frac{(a)_n (b)_n}{(c)_n} \frac{z^n}{n!}
= \frac{\Gamma_\mathfrak{a}mma(c)}{\Gamma_\mathfrak{a}mma(a) \,\Gamma_\mathfrak{a}mma(b)} \sum_{n=0}^\infty \frac{ \Gamma_\mathfrak{a}mma(a+n) \Gamma_\mathfrak{a}mma(b+n) }{ \Gamma_\mathfrak{a}mma(c+n)} \frac{z^n}{n!}
\end{split}
\end{align}
which implies
\[
Q_{s-1}(x) = \frac{1}{2} \sum_{n=0}^\infty \frac{\Gamma_\mathfrak{a}mma(s+n)^2}{\Gamma_\mathfrak{a}mma(2s+n)} \frac{1}{n!} \mathfrak{b}r{\frac{2}{1+x}}^{n+s}.
\]
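As a quick plausibility check of \eqref{Qs-as-hyper} and of the series above, one can compare the right hand side at $s=1$ with the elementary formula $Q_0(x)=\tfrac{1}{2}\log\frac{x+1}{x-1}$; the following Python sketch (illustrative only) does this with \texttt{mpmath}.
\begin{verbatim}
# Numerical check of the hypergeometric representation of Q_{s-1}(x) at s = 1,
# where Q_0(x) = (1/2)*log((x+1)/(x-1)) is elementary.
from mpmath import mp, mpf, gamma, hyp2f1, log

mp.dps = 30
s = 1
for x in map(mpf, (1.5, 3, 10, 100)):
    w = 2/(1 + x)
    rhs = gamma(s)**2/(2*gamma(2*s)) * w**s * hyp2f1(s, s, 2*s, w)
    q0 = log((x + 1)/(x - 1))/2
    print(x, rhs, q0, abs(rhs - q0))   # differences of size ~1e-30
\end{verbatim}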
Plugging the series representation of $Q_{s-1}$ into Definition~\ref{Phi-amsz-def}, we get
\[
\mathbb{P}hi(\mathfrak{a},m,s,z) = \frac{1}{2} \sum_{n=0}^\infty \frac{\Gamma_\mathfrak{a}mma(s+n)^2}{\Gamma_\mathfrak{a}mma(2s+n)} \frac{1}{n!} \sum_{ \substack{ A \in L(\mathfrak{a})^\vee \\ \mathfrak{d}et(A) = m/(N(\mathfrak{a})D) } } \mathfrak{b}r{1+g(A,z)}^{-(n+s)}.
\]
Defining
\mathfrak{b}egin{align} \label{Psi-def}
\mathbb{P}si(\mathfrak{a},m,s,z) := \sum_{ \substack{ A \in L(\mathfrak{a})^\vee \\ \mathfrak{d}et(A) = m/(N(\mathfrak{a})D) } } \mathfrak{b}r{1+g(A,z)}^{-s},
\end{align}
we obtain
\[
\mathbb{P}hi(\mathfrak{a},m,s,z) = \sum_{n=0}^\infty \mathbb{P}hi_n(\mathfrak{a},m,s,z)
\quad\text{with}\quad
\mathbb{P}hi_n(\mathfrak{a},m,s,z) := \frac{\Gamma_\mathfrak{a}mma(s+n)^2}{\Gamma_\mathfrak{a}mma(2s+n)} \frac{\mathbb{P}si(\mathfrak{a},m,s+n,z)}{2n!}.
\]
The convergence of $\mathbb{P}si(\mathfrak{a},m,s,z)$ for $\mathbb{R}e(s)>1$ follows directly from the convergence of $\mathbb{P}hi(\mathfrak{a},m,s,z)$. Here, $\mathbb{P}si(\mathfrak{a},m,s,z)$ is even well-defined for $z \in T(\mathfrak{a},m)$ and smooth in $z$ since $(1+x)^{-s}$ has no singularity at $x=0$. Furthermore, $\mathbb{P}si(\mathfrak{a},m,s,z)$ is holomorphic in $s$.
It follows that the functions $\mathbb{P}hi_n(\mathfrak{a},m,s,z)$ are holomorphic in $s$, $\Gamma_\mathfrak{a}$ invariant and smooth in $z$ on $\mathbb{H}^2$ for $\mathbb{R}e(s)>1-n$. Inductively, one can show that for all $N \in \mathbb{N}_0$
\[
\sum_{n=N}^\infty \mathbb{P}hi_n(\mathfrak{a},m,s,z)
\]
converges for $\mathbb{R}e(s)>1-N$ to a $\Gamma_\mathfrak{a}$ invariant and smooth function on $\mathbb{H}^2 \subsettminus T(\mathfrak{a},m)$ which is holomorphic in $s$ (in particular $N=1$ implies convergence for $\mathbb{R}e(s)>0$). By Theorem~\ref{Phi-cont-theorem} we know that $\mathbb{P}hi(\mathfrak{a},m,s,z)$ has a meromorphic extension to $\mathbb{R}e(s)>3/4$ for $z \in \mathbb{H}^2 \subsettminus T(\mathfrak{a},m)$ with simple pole at $s=1$ of residue $q(\mathfrak{a},m)$. It follows that $\mathbb{P}hi_0(\mathfrak{a},m,s,z)$ has a meromorphic extension to $\mathbb{R}e(s)>3/4$ with simple pole at $s=1$ of residue $q(\mathfrak{a},m)$.
We define
\[
\mathbb{P}hi_0(\mathfrak{a},m,z) := \mathcal{C}_{s=1} \sq{\mathbb{P}hi_0(\mathfrak{a},m,s,z)}
\]
and get
\[
\mathbb{P}hi(\mathfrak{a},m,z) = \mathbb{P}hi_0(\mathfrak{a},m,z) + \sum_{n=1}^\infty \mathbb{P}hi_n(\mathfrak{a},m,1,z).
\]
\subsection{Fourier expansion of the decomposition} \label{fc-decomp}
We proceed analogously to Subsection~\ref{fourier-exp-and-reg} and write
\[
\mathbb{P}si(\mathfrak{a},m,s,z)
= \mathbb{P}si^0(\mathfrak{a},m,s,z) + 2\sum_{b=1}^\infty \mathbb{P}si^b(\mathfrak{a},m,s,z)
\]
with
\[
\mathbb{P}si^b(\mathfrak{a},m,s,z) :=\sum_{ \substack{ A =\sMatrix{a}{\lambda'}{\lambda}{b} \in L(\mathfrak{a})^\vee \\ \mathfrak{d}et(A) = m/(N(\mathfrak{a})D) } } \mathfrak{b}r{1+g(A,z)}^{-s}.
\]
The functions $\mathbb{P}si^b(\mathfrak{a},m,s,z)$ are invariant under $\Gas{\infty}$, as are the functions $\mathbb{P}hi^b(\mathfrak{a},m,s,z)$ in Subsection~\ref{fourier-exp-and-reg}. Hence, they are $\mathfrak{a}^{-1}$ periodic and possess a Fourier expansion. Again, we treat the cases $b=0$ and $b \in \mathbb{N}$ separately and start with $b \in \mathbb{N}$.
We have with $B :=m/(N(\mathfrak{a})Db^2)$ and $R^b(\mathfrak{a},m)$ defined as in Subsection~\ref{fourier-exp-and-reg}
\mathfrak{b}egin{align*}
\mathbb{P}si^b(\mathfrak{a},m,s,z)
&= \sum_{ \substack{a \in \mathbb{Z}/N(\mathfrak{a}),\:\lambda \in \mathfrak{a}\mathfrak{d}^{-1}/N(\mathfrak{a}) \\ ab-N(\lambda) = m/(N(\mathfrak{a})D) } } \mathfrak{b}r{1+ \frac{|bz_1z_2-\lambda z_1-\lambda'z_2+a|^2}{4y_1y_2 m/(N(\mathfrak{a})D)} }^{-s}\\
&= \sum_{ \substack{a \in \mathbb{Z}/N(\mathfrak{a}),\:\lambda \in \mathfrak{a}\mathfrak{d}^{-1}/N(\mathfrak{a}) \\ ab-N(\lambda) = m/(N(\mathfrak{a})D) } } \mathfrak{b}r{1+ \frac{| (z_1-\lambda'/b)(z_2-\lambda/b) +B |^2}{4y_1y_2 B} }^{-s}\\
=\sum_{\lambda \in R^b(\mathfrak{a},m)} &\sum_{\mu \in \mathfrak{a}^{-1}}
\mathfrak{b}r{1+ \frac{ \mathfrak{a}bs{ \mathfrak{b}r {z_1 + \mu + \frac{\lambda'}{N(\mathfrak{a})b}} \mathfrak{b}r {z_2 + \mu' + \frac{\lambda}{N(\mathfrak{a})b}} + B }^2}{4y_1y_2 B} }^{-s}.
\end{align*}
Hence, the problem is reduced to computing the Fourier expansion of the $\mathfrak{a}^{-1}$ periodic function $\tilde H_s^B (\mathfrak{a}^{-1},z)$ with
\mathfrak{b}egin{align*}
\tilde H_s^B (\mathfrak{b},z) := \sum_{\mu \in \mathfrak{b}} \mathfrak{b}r{ 1 + \frac{|(z_1+\mu)(z_2+\mu')+B|^2}{4y_1y_2B} }^{-s}.
\end{align*}
Namely, let
\[
\tilde H_s^B (\mathfrak{b},z) = \sum_{\nu \in (\mathfrak{b}\mathfrak{d})^{-1} } \tilde b_s^B(\mathfrak{b},\nu,y) e(\tr(\nu x))
\]
be the Fourier expansion of $\tilde H_s^B (\mathfrak{b},z)$. Then we have
\begin{align*}
\Psi^b(\mathfrak{a},m,s,z)
&= \sum_{\nu \in \mathfrak{a}\mathfrak{d}^{-1}} G^b(\mathfrak{a},m,\nu) \tilde b_s^B(\mathfrak{a}^{-1},\nu,y) e(\tr(\nu x))
\end{align*}
with $G^b(\mathfrak{a},m,\nu)$ defined as in equation~\eqref{gaus-sum}.
By Poisson summation the Fourier coefficients are then given by
\begin{align} \label{tilde-b-formula}
\tilde b_s^B(\mathfrak{b},\nu,y) = \frac{1}{\vol(\mathfrak{b})} \int_{\mathbb{R}^2} \left( 1 + \frac{|z_1z_2+B|^2}{4y_1y_2B} \right)^{-s} e(-\tr(\nu x))\, dx_1\, dx_2.
\end{align}
For $\nu \ne 0$ the double integral is too complicated to be evaluated explicitly: one of the two iterated integrals can be computed in closed form, but the author did not find an explicit expression for the second one.
However, for our purpose it is enough to estimate $|\tilde b_1^B(\mathfrak{b},\nu,y)|$.
For $\nu = 0$, on the other hand, an estimate of $\tilde b_1^B(\mathfrak{b},0,y)$ is not enough because the series
\begin{align} \label{meromorphic-tilde-b-series}
\sum_{b=1}^\infty G^b(\mathfrak{a},m,0) \tilde b_s^{m/(N(\mathfrak{a})Db^2)}(\mathfrak{a}^{-1},0,y)
\end{align}
diverges at $s=1$. Rather, we have to determine $\tilde b_s^B(\mathfrak{b},0,y)$ explicitly to compute the meromorphic continuation at $s=1$ of \eqref{meromorphic-tilde-b-series} and extract (or estimate) the constant term.
\begin{lemma} \label{fc-b-tilde-upper-bound}
Let $B>0$, $\mathfrak{b} \in \mathcal{I}_K$ and $\nu \in (\mathfrak{b}\mathfrak{d})^{-1}$. Then we have (cf.~equation~\eqref{alpha-beta-def} for the definition of $\alpha(\cdot,\cdot)$)
\[
\left| \tilde b_1^B(\mathfrak{b},\nu,y) \right| \le \frac{4 B \pi^2}{\vol(\mathfrak{b})} \exp(- 2\pi \alpha(\nu y_1,\nu'y_2)).
\]
\end{lemma}
\begin{proof}
We have to estimate the integral given by equation~\eqref{tilde-b-formula} at $s=1$:
\begin{align*}
\tilde b_1^B(\mathfrak{b},\nu,y) =
& \frac{1}{\vol(\mathfrak{b})} \int_{\mathbb{R}^2} \left( 1 + \frac{|z_1z_2+B|^2}{4y_1y_2B} \right)^{-1} e(-\tr(\nu x))\, dx_1\, dx_2\\
= & \frac{4y_1y_2B}{\vol(\mathfrak{b})} \int_{\mathbb{R}^2} \left( 4y_1y_2B +|z_1z_2+B|^2 \right)^{-1} e(-\tr(\nu x))\, dx_1\, dx_2.
\end{align*}
Now, using the identity
\[
4y_1y_2 B +|z_1z_2+B|^2
= |z_2|^2 \left( \left( x_1 + \frac{Bx_2}{|z_2|^2} \right)^2 + \left(y_1 + \frac{By_2}{|z_2|^2}\right)^2 \right),
\]
the double integral is given by
\begin{align*}
&\int_{\mathbb{R}} |z_2|^{-2} \int_{\mathbb{R}} \left( \left( x_1 + \frac{Bx_2}{|z_2|^2} \right)^2 + \left(y_1 + \frac{By_2}{|z_2|^2}\right)^2 \right)^{-1} e(-\tr(\nu x))\, dx_1\, dx_2\\
=\ &\int_{\mathbb{R}} |z_2|^{-2} \left( \int_{\mathbb{R}} \left( x_1^2 + a(y_1,z_2)^2 \right)^{-1} e(-\nu x_1)\, dx_1 \right) e\!\left(\frac{\nu Bx_2}{|z_2|^2} -\nu' x_2 \right) dx_2
\end{align*}
with $a(y_1,z_2) := y_1 + \frac{By_2}{|z_2|^2}$. Using \cite[p.~8, eq. (11)]{1954tables} (which holds for $\nu=0$ as well, even though this case is omitted in the reference), we get for the inner integral
\begin{align*}
\int_{\mathbb{R}} \left( x_1^2 + a(y_1,z_2)^2 \right)^{-1} e(-\nu x_1)\, dx_1
=\ &2\int_{0}^\infty \left( x_1^2 + a(y_1,z_2)^2 \right)^{-1} \cos(2 \pi |\nu| x_1)\, dx_1\\
=\ &\pi \frac{\exp(- 2 \pi |\nu| a(y_1,z_2) )}{a(y_1,z_2)}.
\end{align*}
Coming back to our double integral, we estimate
\begin{align*}
&\left| \int_{\mathbb{R}} |z_2|^{-2} \left( \pi \frac{\exp(- 2 \pi |\nu| a(y_1,z_2) )}{a(y_1,z_2)} \right) e\!\left(\frac{\nu Bx_2}{|z_2|^2} -\nu' x_2 \right) dx_2 \right|\\
\le\ &\pi \int_{\mathbb{R}} \frac{\exp(- 2 \pi |\nu| a(y_1,z_2) )}{a(y_1,z_2)|z_2|^2}\, dx_2\\
\le\ &\frac{\pi \exp(-2 \pi |\nu| y_1)}{y_1} \int_{\mathbb{R}} \frac{1}{x_2^2+y_2^2}\, dx_2
= \frac{\pi^2 \exp(-2 \pi |\nu| y_1)}{y_1y_2} .
\end{align*}
Hence, in total we have shown
\[
\left| \tilde b_1^B(\mathfrak{b},\nu,y) \right| \le \frac{4 B \pi^2}{\vol(\mathfrak{b})} \exp(-2 \pi |\nu| y_1).
\]
For symmetry reasons we have
\[
\left| \tilde b_1^B(\mathfrak{b},\nu,y) \right| \le \frac{4 B \pi^2}{\vol(\mathfrak{b})} \exp(-2 \pi |\nu'| y_2)
\]
as well, which proves the claim.
\end{proof}
In order to compute $\tilde b_s^B(\mathfrak{b},0,y)$ explicitly, we use the following two lemmata.
\begin{lemma} \label{square-sum-to-s-integral}
Let $a>0$ and $s \in \mathbb{C}$ with $\operatorname{Re}(s)>1/2$. Then we have
\[
\int_{\mathbb{R}} (x^2+a^2)^{-s}\, dx = a^{1-2s} \Beta(\tfrac{1}{2},s-\tfrac{1}{2}).
\]
Here, by $\Beta(x,y)$ we denote the beta function $\Beta(x,y) := \Gamma(x)\,\Gamma(y)/\Gamma(x+y)$.
\end{lemma}
\begin{proof}
The identity follows from the substitution $x = a\sqrt{t}$ together with the integral representation (cf.~\cite[5.12.3]{handbook})
\begin{align} \label{beta-integral2}
\Beta(x,y) = \int_0^\infty \frac{t^{x-1}}{(1+t)^{x+y}}\, dt,
\end{align}
which holds for $\operatorname{Re}(x),\operatorname{Re}(y)>0$.
\end{proof}
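As a purely illustrative numerical sanity check (not part of the argument), one may compare both sides of the lemma with \texttt{mpmath}; the parameter values below are arbitrary test values satisfying the hypotheses.
\begin{verbatim}
# Numerical sanity check of the lemma (illustrative only; arbitrary test values).
from mpmath import mp, quad, beta, inf

mp.dps = 25
a, s = mp.mpf('0.7'), mp.mpf('1.3')   # a > 0, Re(s) > 1/2

lhs = quad(lambda x: (x**2 + a**2)**(-s), [-inf, inf])
rhs = a**(1 - 2*s) * beta(mp.mpf(1)/2, s - mp.mpf(1)/2)
print(lhs, rhs)   # the two values agree to high precision
\end{verbatim}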
\begin{lemma} \label{crazy-mathematica-integral}
Let $\operatorname{Re}(s)>1/2$ and $b>0$. Then we have
\begin{align*}
\int_{\mathbb{R}} \frac{(x^2+b^2)^{1-2s}}{(x^2+1)^{1-s}}\, dx
= \Beta \left(\tfrac{1}{2},s-\tfrac{1}{2}\right) {}_2F_1 \left(2s-1,s-\tfrac{1}{2};s;1-b^2\right).
\end{align*}
\end{lemma}
\begin{proof}
The identity can be checked using a computer algebra system.
\end{proof}
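In the absence of a symbolic derivation, the identity is also easy to test numerically; the following \texttt{mpmath} sketch (with arbitrarily chosen test values of $s$ and $b$) is only meant as such a spot check.
\begin{verbatim}
# Numerical spot check of the lemma (illustrative only; arbitrary test values).
from mpmath import mp, quad, beta, hyp2f1, inf

mp.dps = 20
s, b = mp.mpf('1.1'), mp.mpf('0.6')   # Re(s) > 1/2, b > 0

lhs = quad(lambda x: (x**2 + b**2)**(1 - 2*s) / (x**2 + 1)**(1 - s), [-inf, inf])
rhs = beta(mp.mpf(1)/2, s - mp.mpf(1)/2) * hyp2f1(2*s - 1, s - mp.mpf(1)/2, s, 1 - b**2)
print(lhs, rhs)   # both values agree numerically
\end{verbatim}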
\begin{lemma} \label{fc-b-tilde-0}
For $B>0$, $\mathfrak{b} \in \mathcal{I}_K$ and $\operatorname{Re}(s)>1/2$ the constant Fourier coefficient of $\tilde H_s^B (\mathfrak{b},z)$ is given by
\[
\tilde b_s^B(\mathfrak{b},0,y) = \frac{(4B)^s (y_1y_2)^{1-s} \Beta(\tfrac{1}{2},s-\tfrac{1}{2})^2}{\vol(\mathfrak{b})} {}_2F_1 (2s-1,s-\tfrac{1}{2};s;-B/(y_1y_2)).
\]
\end{lemma}
\begin{proof}
We follow the proof of Lemma~\ref{fc-b-tilde-upper-bound} up to the substitution in the inner integral of the double integral. This yields
\[
\tilde b_s^B(\mathfrak{b},0,y) = \frac{(4y_1y_2B)^s}{\vol(\mathfrak{b})} \int_{\mathbb{R}} |z_2|^{-2s} \int_\mathbb{R} \left( x_1^2 + a(y_1,z_2)^2 \right)^{-s} dx_1\, dx_2.
\]
Now using Lemma~\ref{square-sum-to-s-integral}, the inner integral computes to
\begin{align*}
a(y_1,z_2)^{1-2s} \Beta(\tfrac{1}{2},s-\tfrac{1}{2}).
\end{align*}
The integrand of the outer integral is then, up to the beta function factor, given by
\begin{align*}
|z_2|^{-2s} a(y_1,z_2)^{1-2s}
=y_1^{1-2s} \frac{ ( x_2^2+y_2^2 + By_2/y_1 )^{1-2s} }{ (x_2^2+y_2^2)^{1-s} }.
\end{align*}
It follows that
\begin{align*}
\int_\mathbb{R} &|z_2|^{-2s} a(y_1,z_2)^{1-2s}\, dx_2
= (y_1y_2)^{1-2s} \int_\mathbb{R} \frac{ ( x^2 + b(y)^2 )^{1-2s} }{ (x^2+1)^{1-s} }\, dx
\end{align*}
with $b(y)^2=1+B/(y_1y_2)$. By Lemma~\ref{crazy-mathematica-integral} the last integral equals
\[
\Beta(\tfrac{1}{2},s-\tfrac{1}{2})\, {}_2F_1 (2s-1,s-\tfrac{1}{2};s;-B/(y_1y_2)).
\]
Collecting the omitted prefactors, we get the stated result.
\end{proof}
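The closed formula can also be compared directly against the defining double integral \eqref{tilde-b-formula} at $\nu=0$. The sketch below is purely illustrative: $\vol(\mathfrak{b})$ is set to $1$, the remaining parameters are arbitrary test values, and the nested quadrature may take a moment.
\begin{verbatim}
# Numerical consistency check of the constant coefficient (illustrative; vol(b)=1).
from mpmath import mp, quad, beta, hyp2f1, inf

mp.dps = 15
s, B, y1, y2 = mp.mpf('1.25'), mp.mpf('0.7'), mp.mpf('1.1'), mp.mpf('0.9')

def integrand(x1, x2):
    z1, z2 = mp.mpc(x1, y1), mp.mpc(x2, y2)
    return (1 + abs(z1*z2 + B)**2 / (4*y1*y2*B))**(-s)

lhs = quad(lambda x2: quad(lambda x1: integrand(x1, x2), [-inf, inf]), [-inf, inf])
rhs = (4*B)**s * (y1*y2)**(1 - s) * beta(mp.mpf(1)/2, s - mp.mpf(1)/2)**2 \
      * hyp2f1(2*s - 1, s - mp.mpf(1)/2, s, -B/(y1*y2))
print(lhs, rhs)   # agreement up to the error of the numerical quadrature
\end{verbatim}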
Now we come to the case $b=0$, i.e., we determine the Fourier expansion of $\Psi^0(\mathfrak{a},m,s,z)$.
We have
\begin{align*}
\Psi^0(\mathfrak{a},m,s,z)
&= \sum_{ \substack{a \in \mathbb{Z}/N(\mathfrak{a}),\:\lambda \in \mathfrak{a}\mathfrak{d}^{-1}/N(\mathfrak{a}) \\ -N(\lambda) = m/(N(\mathfrak{a})D) } } \left(1+ \frac{|-\lambda z_1-\lambda'z_2+a|^2}{4y_1y_2 m/(N(\mathfrak{a})D)} \right)^{-s}\\
&= 2 \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) } \sum_{a \in \mathbb{Z}} \left(1+ \frac{|\lambda z_1+\lambda'z_2+a|^2}{4y_1y_2 mN(\mathfrak{a})/D} \right)^{-s}.
\end{align*}
\begin{lemma} \label{fc-b0-tilde}
The series
\begin{align*}
\Psi^0(\mathfrak{a},m,s,z) =
2 \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) } \sum_{a \in \mathbb{Z}} \left(1+ \frac{|\lambda z_1+\lambda'z_2+a|^2}{4y_1y_2 mN(\mathfrak{a})/D} \right)^{-s}
\end{align*}
converges normally for $z \in \mathbb{H}^2$ and $\operatorname{Re}(s)>1/2$ and has the Fourier expansion
\begin{align*}
\Psi^0(\mathfrak{a},m,s,z) = 2 \left(\frac{4y_1y_2 mN(\mathfrak{a})}{D}\right)^s \Beta(\tfrac{1}{2},s-\tfrac{1}{2}) \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) } (\lambda y_1-\lambda' y_2)^{1-2s}\\
+ \frac{4\pi^s}{\Gamma(s)} \left(\frac{4y_1y_2 mN(\mathfrak{a})}{D}\right)^s \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) } \sum_{n=1}^\infty \left(\frac{n}{\lambda y_1-\lambda' y_2}\right)^{s-1/2}\\
\times K_{s-1/2}(2 \pi n (\lambda y_1-\lambda' y_2)) \left(e(n\tr(\lambda x))+e(-n\tr(\lambda x))\right).
\end{align*}
\end{lemma}
\begin{proof}
As in Lemma~\ref{Phi-0-fc}, we investigate the series over $a$ for each ${\lambda \in \Lambda^+(\mathfrak{a},m)}$ individually:
\begin{align*}
&\sum_{a \in \mathbb{Z}} \left(1+ \frac{|\lambda z_1+\lambda'z_2+a|^2}{4y_1y_2 mN(\mathfrak{a})/D} \right)^{-s}\\
= &\sum_{a \in \mathbb{Z}} \left(1+ \frac{ (\tr(\lambda x)+a)^2 + \tr(\lambda y)^2}{-4y_1y_2 \lambda \lambda'} \right)^{-s}\\
= &\left(\frac{4y_1y_2 mN(\mathfrak{a})}{D}\right)^s \sum_{a \in \mathbb{Z}} \left((\tr(\lambda x)+a)^2 + (\lambda y_1-\lambda'y_2)^2 \right)^{-s}.
\end{align*}
Hence, we are interested in the Fourier expansion of the $\mathbb{Z}$-periodic function
\[
h_\gamma(s,x) := \sum_{a \in \mathbb{Z}} \left((x+a)^2+\gamma^2\right)^{-s}
\]
with $\gamma>0$ (note that indeed $\gamma:=|\lambda y_1- \lambda'y_2|>0$, since $\lambda y_1 \ne \lambda'y_2$ due to $N(\lambda)<0$).
It is straightforward to see that $h_\gamma(s,x)$ converges if and only if $\operatorname{Re}(s)>1/2$.
We have
\[
h_\gamma(s,x) = \sum_{n \in \mathbb{Z}} a_\gamma(s,n) e(nx)
\]
with
\[
a_\gamma(s,n) = \int_\mathbb{R} (x^2+\gamma^2)^{-s} e(-nx)\, dx.
\]
Lemma~\ref{square-sum-to-s-integral} yields
\[
a_\gamma(s,0) = \Beta(\tfrac{1}{2},s-\tfrac{1}{2}) \gamma^{1-2s} .
\]
For $n \ne 0$ we use \cite[p.~11, eq. (7)]{1954tables} (valid for $\operatorname{Re}(s)>0$)
\begin{align*}
a_\gamma(s,n)
&= 2 \int_0^\infty (x^2+\gamma^2)^{-s} \cos(2 \pi |n| x)\, dx\\
&= 2 \left(\frac{\pi|n|}{\gamma}\right)^{s-1/2} \sqrt{\pi}\, \Gamma(s)^{-1} K_{s-1/2}(2 \pi \gamma |n|)\\
&= \frac{2 \pi^s}{\Gamma(s)} \left(\frac{|n|}{\gamma}\right)^{s-1/2} K_{s-1/2}(2 \pi \gamma |n|).
\end{align*}
We obtain
\begin{align*}
&\sum_{a \in \mathbb{Z}} \left(1+ \frac{|\lambda z_1+\lambda'z_2+a|^2}{4y_1y_2 mN(\mathfrak{a})/D} \right)^{-s}
= \left(\frac{4y_1y_2 mN(\mathfrak{a})}{D}\right)^s h_{|\lambda y_1-\lambda' y_2|}(s,\tr(\lambda x))\\
= &\left(\frac{4y_1y_2 mN(\mathfrak{a})}{D}\right)^s \Beta(\tfrac{1}{2},s-\tfrac{1}{2}) |\lambda y_1-\lambda' y_2|^{1-2s}\\
+ &\frac{2\pi^s}{\Gamma(s)} \left(\frac{4y_1y_2 mN(\mathfrak{a})}{D}\right)^s \ssum_{n \in \mathbb{Z}} \left|\frac{n}{\lambda y_1-\lambda' y_2}\right|^{s-1/2} K_{s-1/2}(2 \pi |n| |\lambda y_1-\lambda' y_2|) e(n\tr(\lambda x)),
\end{align*}
where the tick at the sum indicates that the term $n=0$ is omitted. This proves the lemma.
\end{proof}
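The Bessel-type formula for $a_\gamma(s,n)$ used above can be spot-checked numerically as well; the following sketch (arbitrary test values, oscillatory quadrature via \texttt{mpmath.quadosc}) is meant only as an illustration.
\begin{verbatim}
# Numerical spot check of a_gamma(s,n) for n != 0 (illustrative only).
from mpmath import mp, quadosc, besselk, gamma, cos, pi, inf

mp.dps = 20
s, g, n = mp.mpf('1.4'), mp.mpf('0.8'), 2   # Re(s) > 1/2, gamma > 0, n != 0

f = lambda x: (x**2 + g**2)**(-s) * cos(2*pi*n*x)
lhs = 2 * quadosc(f, [0, inf], period=mp.mpf(1)/n)
rhs = 2*pi**s / gamma(s) * (mp.mpf(n)/g)**(s - mp.mpf(1)/2) * besselk(s - mp.mpf(1)/2, 2*pi*g*n)
print(lhs, rhs)   # both values agree numerically
\end{verbatim}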
\begin{corollary} \label{Psi-0-fc}
The function $\Psi^0(\mathfrak{a},m,1,z)$ has the Fourier expansion
\begin{align*}
\frac{8 \pi y_1y_2 mN(\mathfrak{a})}{D} \sum_{ \lambda \in \Lambda^+(\mathfrak{a},m) } \sum_{n \in \mathbb{Z}} \frac{\exp(-2 \pi |n| (\lambda y_1-\lambda'y_2))}{\lambda y_1-\lambda'y_2} e(n\tr(\lambda x)).
\end{align*}
\end{corollary}
\begin{proof}
Plugging $s=1$ into Lemma~\ref{fc-b0-tilde} and using the identity $K_{1/2}(x)=\sqrt{\frac{\pi}{2x}} \exp(-x)$
yields the Fourier expansion.
\end{proof}
\section{Integrability and integrals} \label{integral-section}
In this section we compute the integral of $\Phi(\mathfrak{a},m,z)$. In order to do so, we compute the integrals of $\Psi(\mathfrak{a},m,s,z)$ and $\Phi(\mathfrak{a},m,s,z)$ as well. Our method of computing the integral of $\Phi(\mathfrak{a},m,z)$ first requires its integrability, which is much more demanding to establish than the actual computation of the integral afterwards. In the process we prove that the growth of
\[
\int_{X(\mathfrak{a})} |\Phi(\mathfrak{a},m,z)| \,\omega^2
\]
is polynomial in $m$, which is an important ingredient for the main theorem of \cite{buckdiss}.
\begin{lemma} \label{Psi-core-integral}
Let $\operatorname{Re}(s)>1$. Then we have
\[
\int_\mathbb{H} \left(1+ \frac{|z-i|^2}{4y}\right)^{-s} \frac{dx\,dy}{y^2} = \frac{4 \pi }{s-1}.
\]
\end{lemma}
\begin{proof}
We have by Lemma~\ref{square-sum-to-s-integral}
\begin{align*}
\int_\mathbb{H} \left(1+ \frac{|z-i|^2}{4y}\right)^{-s} \frac{dx\,dy}{y^2}
&= 4^s \int_\mathbb{H} \left(4y+ (x^2+(y-1)^2)\right)^{-s} \frac{dx\,dy}{y^{2-s}}\\
&= 4^s \int_{0}^\infty \int_{-\infty}^\infty \left(x^2+(y+1)^2\right)^{-s} dx \ y^{s-2}\, dy\\
&= 4^s \int_{0}^\infty \Beta(\tfrac{1}{2},s-\tfrac{1}{2}) (y+1)^{1-2s} y^{s-2}\, dy\\
&= 4^s \Beta(\tfrac{1}{2},s-\tfrac{1}{2}) \int_{0}^\infty \frac{ y^{s-2}}{(y+1)^{2s-1}}\, dy\\
&= 4^s \Beta(\tfrac{1}{2},s-\tfrac{1}{2}) \Beta(s-1,s).
\end{align*}
The last identity is due to equation~\eqref{beta-integral2}, where we need $\operatorname{Re}(s)>1$.
By making use of the Legendre duplication formula and the functional equation of the gamma function one shows
\begin{align*}
4^s \Beta(\tfrac{1}{2},s-\tfrac{1}{2}) \Beta(s-1,s)
&= \frac{8 \pi}{2s-2} = \frac{4 \pi}{s-1}.
\end{align*}
\end{proof}
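Both the closing beta-function identity and the lemma itself can be verified numerically. The following sketch (with an arbitrary test value of $s$; the nested quadrature is slow but only illustrative) compares the target value, the beta-function product and a direct evaluation of the double integral.
\begin{verbatim}
# Numerical check of the lemma and of the final identity (illustrative only).
from mpmath import mp, quad, beta, pi, inf

mp.dps = 15
s = mp.mpf('1.7')   # arbitrary test value with Re(s) > 1

target = 4*pi/(s - 1)
closed = 4**s * beta(mp.mpf(1)/2, s - mp.mpf(1)/2) * beta(s - 1, s)
integral = quad(lambda y: quad(
    lambda x: (1 + (x**2 + (y - 1)**2)/(4*y))**(-s), [-inf, inf]) / y**2,
    [0, inf])

print(target, closed, integral)   # all three values agree numerically
\end{verbatim}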
In the next theorem we compute the integral of $\Psi(\mathfrak{a},m,s,z)$ over $X(\mathfrak{a})$ by unfolding the integral. This technique allows us to compute the integral without determining a fundamental domain for $X(\mathfrak{a})$.
\begin{theorem} \label{Psi-integral}
For $\operatorname{Re}(s)>1$ we have
\[
\int_{X(\mathfrak{a})} \Psi(\mathfrak{a},m,s,z) \,\omega^2 = \frac{4}{s-1} \vol(T(\mathfrak{a},m)).
\]
\end{theorem}
\begin{proof}
In this proof we will freely interchange integration and summation.
Looking at the definition
\[
\Psi(\mathfrak{a},m,s,z) := \sum_{ \substack{ A \in L(\mathfrak{a})^\vee \\ \det(A) = m/(N(\mathfrak{a})D) } } \left(1+g(A,z)\right)^{-s},
\]
we see that for $s \in \mathbb{R}$ this is justified by Tonelli's theorem because all summands are positive. For $s \in \mathbb{C}$ the triangle inequality gives
\[\left|\Psi(\mathfrak{a},m,s,z)\right| \le \Psi(\mathfrak{a},m,\operatorname{Re}(s),z),\]
so the interchange is justified by Fubini's theorem together with dominated convergence.
We start by rewriting $\Psi(\mathfrak{a},m,s,z)$; every step is explained below.
\begin{align*}
\Psi(\mathfrak{a},m,s,z)
&\overset{(i)}{=} 2\sum_{ \substack{A \in L(\mathfrak{a})^\vee / \{\pm 1\} \\ \det(A) = m/(N(\mathfrak{a})D) } } \left(1+g(A,z)\right)^{-s}\\
&\overset{(ii)}{=} 2\sum_{ \substack{A \in \Gamma_\mathfrak{a} \backslash L(\mathfrak{a})^\vee / \{\pm 1\} \\ \det(A) = m/(N(\mathfrak{a})D) } } \sum_{M \in \Gamma_\mathfrak{a} / \Gamma_{\mathfrak{a},\pm A} } \left(1+g(M.A,z)\right)^{-s}\\
&\overset{(iii)}{=} 2\sum_{ \substack{A \in \Gamma_\mathfrak{a} \backslash L(\mathfrak{a})^\vee / \{\pm 1\} \\ \det(A) = m/(N(\mathfrak{a})D) } } \sum_{M \in \Gamma_\mathfrak{a} / \Gamma_{\mathfrak{a},\pm A} } \left(1+g(A,M^{-1} z)\right)^{-s}\\
&\overset{(iv)}{=} 2\sum_{ \substack{A \in \Gamma_\mathfrak{a} \backslash L(\mathfrak{a})^\vee / \{\pm 1\} \\ \det(A) = m/(N(\mathfrak{a})D) } } \sum_{M \in \Gamma_{\mathfrak{a},\pm A} \backslash \Gamma_\mathfrak{a} } \left(1+g(A,M z)\right)^{-s}.
\end{align*}
In step~(i) we use the sign invariance of $g(A,z)$.
In step~(ii) we group the summands by factoring out the action of $\Gamma_\mathfrak{a}$ on $L(\mathfrak{a})^\vee / \{\pm 1\}$. The resulting quotient is finite and each element corresponds to one component of $T(\mathfrak{a},m)$ viewed as a divisor of $X(\mathfrak{a})$. Now for each element in the quotient we have to sum over the whole $\Gamma_\mathfrak{a}$ orbit to obtain all original summands back, which is what the inner sum does. We have to factor out the stabilizer
$\Gamma_{\mathfrak{a},\pm A} := \{ M \in \Gamma_\mathfrak{a} \;:\; M.A \in \{\pm A\} \}$
in order to obtain every element of the orbit exactly once.
In step~(iii) we use the invariance of $g(A,z)$ (cf.~equation \eqref{hg-transform}).
In step~(iv) we invert the elements of $\Gamma_\mathfrak{a} / \Gamma_{\mathfrak{a},\pm A}$, which turns the left cosets into right cosets; this is compensated by replacing $M^{-1}$ with $M$.
Because the inner sum is invariant under $\Gamma_\mathfrak{a}$ for each fixed $A \in L(\mathfrak{a})^\vee$ with $\det(A) = m/(N(\mathfrak{a})D)$, we can compute the integral of that inner sum over $X(\mathfrak{a}) = \Gamma_\mathfrak{a} \backslash \mathbb{H}^2$ first on its own. Again, we explain the computation step by step afterwards.
\begin{align*}
&\int_{\Gamma_\mathfrak{a} \backslash \mathbb{H}^2} \sum_{M \in \Gamma_{\mathfrak{a},\pm A} \backslash \Gamma_\mathfrak{a} } \left(1+g(A,M z)\right)^{-s} \omega^2\\
\overset{(v)}{=}& \int_{\Gamma_{\mathfrak{a},\pm A} \backslash \mathbb{H}^2} \left(1+g(A,z)\right)^{-s} \omega^2\\
\overset{(vi)}{=}&\ 2\int_{z_2 \in \Gamma_{\mathfrak{a},\pm A}' \backslash \mathbb{H}}\int_{z_1 \in \mathbb{H}} \left(1+g(A,z)\right)^{-s} \eta_1\eta_2\\
\overset{(vii)}{=}&\ 2\int_{z_2 \in \Gamma_{\mathfrak{a},\pm A}' \backslash \mathbb{H}}\int_{z_1 \in \mathbb{H}} \left(1+\frac{d(z_1,ASz_2)}{4} \right)^{-s} \eta_1\eta_2\\
\overset{(viii)}{=}&\ 2\int_{z_2 \in \Gamma_{\mathfrak{a},\pm A}' \backslash \mathbb{H}}\int_{z_1 \in \mathbb{H}} \left(1+\frac{d(z_1,i)}{4} \right)^{-s} \eta_1\eta_2\\
\overset{(ix)}{=}&\ \frac{2}{s-1}\int_{z_2 \in \Gamma_{\mathfrak{a},\pm A}' \backslash \mathbb{H}}\eta_2
\overset{(x)}{=} \frac{2}{s-1} \vol(T_A).
\end{align*}
In step~(v) the actual unfolding takes place. Instead of integrating a sum of $\Gamma_{\mathfrak{a},\pm A} \backslash \Gamma_\mathfrak{a}$ shifted functions over $\Gamma_\mathfrak{a} \backslash \mathbb{H}^2$, it is possible to integrate over $\Gamma_{\mathfrak{a},\pm A} \backslash \mathbb{H}^2$ in the first place and skip the sum and the shifting.
In step~(vi) we use the fact that, up to a set of measure zero, a fundamental domain of $\Gamma_{\mathfrak{a},\pm A} \backslash \mathbb{H}^2$ is given by $\mathbb{H} \times \Gamma_{\mathfrak{a},\pm A}' \backslash \mathbb{H}$. Further, we use that $\omega^2 = 2 \eta_1\eta_2$ (cf.~equation~\eqref{omega-def}).
Step~(vii) is an application of Remark~\ref{gA-with-d-remark}.
Step~(viii) is a consequence of the $\operatorname{GL}_2(\mathbb{R})^+$ invariance of $\eta_1$ and of the hyperbolic distance. Since we integrate over all of $\mathbb{H}$ in the first argument, the reference point in the second argument is arbitrary.
Step~(ix) is an application of Lemma~\ref{Psi-core-integral} (note the scaling of $\eta$ with $(4\pi)^{-1}$ in equation~\eqref{omega-def}).
For step~(x) see equation~\eqref{Tm-volume-sum}.
In total we have
\begin{align*}
\int_{X(\mathfrak{a})} \Psi(\mathfrak{a},m,s,z) \,\omega^2
= 2\sum_{ \substack{A \in \Gamma_\mathfrak{a} \backslash L(\mathfrak{a})^\vee / \{\pm 1\} \\ \det(A) = m/(N(\mathfrak{a})D) } } \frac{2}{s-1} \vol(T_A)
=\frac{4}{s-1} \vol(T(\mathfrak{a},m)).
\end{align*}
\end{proof}
This allows us to compute the integral of $\Phi(\mathfrak{a},m,s,z)$.
\begin{theorem} \label{Phi-s-integral}
For $\operatorname{Re}(s)>1$ we have
\[
\int_{X(\mathfrak{a})} \Phi(\mathfrak{a},m,s,z) \,\omega^2 = \frac{2 \vol(T(\mathfrak{a},m))}{s(s-1)}.
\]
\end{theorem}
\begin{proof}
To compute the integral we use the decomposition
\[
\Phi(\mathfrak{a},m,s,z) = \sum_{n=0}^\infty \frac{\Gamma(s+n)^2}{\Gamma(2s+n)} \frac{\Psi(\mathfrak{a},m,s+n,z)}{2\, n!}
\]
and Theorem~\ref{Psi-integral}.
We get
\begin{align*}
\int_{X(\mathfrak{a})} \Phi(\mathfrak{a},m,s,z) \,\omega^2
&= \sum_{n=0}^\infty \frac{\Gamma(s+n)^2}{\Gamma(2s+n)} \frac{\frac{4}{s+n-1} \vol(T(\mathfrak{a},m))}{2\, n!}\\
&= 2 \vol(T(\mathfrak{a},m)) \sum_{n=0}^\infty \frac{\Gamma(s+n)^2}{\Gamma(2s+n)} \frac{1}{n!} \frac{1}{s+n-1}.
\end{align*}
Using the functional equation of the gamma function, the power series expansion of the hypergeometric function~\eqref{hyper-power} and \cite[15.4.2]{handbook} one shows the identity
\[
\sum_{n=0}^\infty \frac{\Gamma(s+n)^2}{\Gamma(2s+n)} \frac{1}{n!} \frac{1}{s+n-1} = \frac{1}{s(s-1)},
\]
which finishes the proof.
\end{proof}
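The series identity at the end of the proof is also easy to test numerically; the following sketch (with an arbitrary test value of $s$) is purely illustrative.
\begin{verbatim}
# Numerical check of the series identity used above (illustrative only).
from mpmath import mp, nsum, gamma, factorial, inf

mp.dps = 25
s = mp.mpf('1.6')   # arbitrary test value with Re(s) > 1

lhs = nsum(lambda n: gamma(s + n)**2 / (gamma(2*s + n) * factorial(n) * (s + n - 1)),
           [0, inf])
rhs = 1 / (s*(s - 1))
print(lhs, rhs)   # both values agree numerically
\end{verbatim}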
\begin{proposition} \label{integral-Phi-0}
The function $\Phi_0(\mathfrak{a},m,z)$ is integrable and we have
\[
\int_{X(\mathfrak{a})} \left|\Phi_0(\mathfrak{a},m,z)\right| \omega^2 = O(m^2 \log(m))
\]
for large $m$.
\end{proposition}
\begin{proof}
The proof of this proposition relies on the results of Subsection~\ref{fc-decomp}. Because of its length and the many tedious estimates involved, it is omitted here; all details can be found in \cite{buckdiss}. The rough idea is to replace each term of the Fourier series by its absolute value and to estimate the integral of the resulting series. However, the constant Fourier coefficient, where the regularization takes place, has to be treated more carefully.
\end{proof}
\begin{corollary} \label{Phi-integrable}
The function $\Phi(\mathfrak{a},m,z)$ is integrable.
\end{corollary}
\begin{proof}
We make use of the decomposition
\begin{align} \label{Phi-decomp}
\Phi(\mathfrak{a},m,z) = \Phi_0(\mathfrak{a},m,z) + \sum_{n=1}^\infty \Phi_n(\mathfrak{a},m,1,z).
\end{align}
The integrability of $\Phi_0(\mathfrak{a},m,z)$ is the statement of Proposition~\ref{integral-Phi-0}.
For the remaining series one argues similarly as in the proof of Theorem~\ref{Phi-s-integral} and obtains integrability as well as holomorphicity of the integral for $\operatorname{Re}(s)>0$.
\end{proof}
\begin{theorem} \label{Phi-integral}
We have
\[
\int_{X(\mathfrak{a})} \mathbb{P}hi(\mathfrak{a},m,z) \omega^2 = -2 \vol(T(\mathfrak{a},m)) = -q(\mathfrak{a},m)\zeta_K(-1),
\text{ thus }
q(\mathfrak{a},m) = \frac{2\vol(T(\mathfrak{a},m))}{\zeta_K(-1)}.
\]
In particular, for odd $D$ we obtain
\[
\vol(T(\mathfrak{a},m)) = \frac{\sigma(\mathfrak{a},m,-1)}{24}.
\]
\end{theorem}
\begin{proof}
By Corollary~\ref{Phi-integrable} we know that $\Phi(\mathfrak{a},m,z)$ is integrable. This allows us to apply Lebesgue's dominated convergence theorem:
\begin{align*}
\int_{X(\mathfrak{a})} \Phi(\mathfrak{a},m,z) \,\omega^2
&= \lim_{s \to 1} \left( \int_{X(\mathfrak{a})} \Phi(\mathfrak{a},m,s,z) \,\omega^2 - \int_{X(\mathfrak{a})} \frac{q(\mathfrak{a},m)}{s-1} \,\omega^2 \right)\\
&= \lim_{s \to 1} \left( \frac{2 \vol(T(\mathfrak{a},m))}{s(s-1)} - \frac{q(\mathfrak{a},m)}{s-1}\zeta_K(-1) \right).
\end{align*}
Since the integral is finite by Corollary~\ref{Phi-integrable}, the only possibility is
\[
2 \vol(T(\mathfrak{a},m)) = q(\mathfrak{a},m)\zeta_K(-1),
\]
which proves the stated identity for $q(\mathfrak{a},m)$.
The integral identity then follows with L'Hôpital's rule.
For odd $D$ we may use equation~\eqref{q-sigma} and $\zeta(-1)=-1/12$ to obtain
\[
\vol(T(\mathfrak{a},m))
= q(\mathfrak{a},m) \frac{\zeta_K(-1)}{2}
= -\frac{\sigma(\mathfrak{a},m,-1)}{L(-1,\chi_D)} \frac{\zeta(-1) L(-1,\chi_D)}{2}
= \frac{\sigma(\mathfrak{a},m,-1)}{24}.
\]
\end{proof}
\begin{corollary} \label{Phi-0-integral}
We have
\[
\int_{X(\mathfrak{a})} \mathbb{P}hi_0(\mathfrak{a},m,z) \omega^2 = -4 \vol(T(\mathfrak{a},m))
\]
and
\[
\int_{X(\mathfrak{a})} \sum_{n=1}^\infty \mathbb{P}hi_n(\mathfrak{a},m,1,z) \omega^2 = 2 \vol(T(\mathfrak{a},m)).
\]
\end{corollary}
\begin{theorem} \label{Phi-integrable-estimate}
We have
\[
\int_{X(\mathfrak{a})} \left|\Phi(\mathfrak{a},m,z)\right| \omega^2 = O(m^2 \log(m))
\]
for large $m$.
\end{theorem}
\begin{proof}
Using decomposition~\eqref{Phi-decomp} and Proposition~\ref{integral-Phi-0} we are left with proving
\[
\int_{X(\mathfrak{a})} \left| \sum_{n=1}^\infty \Phi_n(\mathfrak{a},m,1,z) \right| \omega^2 = 2 \vol(T(\mathfrak{a},m)) = O(m^2 \log(m)).
\]
The first equality follows from $\Phi_n(\mathfrak{a},m,1,z) \ge 0$ for $n \in \mathbb{N}$ and Corollary~\ref{Phi-0-integral}, the second one from $\vol(T(\mathfrak{a},m)) = q(\mathfrak{a},m) \zeta_K(-1)/2$ and Corollary~\ref{qL-growth}.
\end{proof}
\begin{corollary} \label{almost-everywhere-coro}
The generating series
\[
\sum_{m=1}^\infty \Phi(\mathfrak{a},m,z) q^m
\quad\text{and}\quad \sum_{m=1}^\infty \left|\Phi(\mathfrak{a},m,z) q^m\right|
\]
with $q \in \mathbb{C}$, $|q|<1$ converge absolutely for almost all $z \in \mathbb{H}^2$ and are integrable over $X(\mathfrak{a})$.
\end{corollary}
\begin{proof}
By Tonelli's theorem and Theorem~\ref{Phi-integrable-estimate} we have, for appropriate constants $C_1,C_2>0$,
\begin{align*}
\int_{X(\mathfrak{a})} \sum_{m=1}^\infty \left| \Phi(\mathfrak{a},m,z) q^m\right| \omega^2
&= \sum_{m=1}^\infty \left(\int_{X(\mathfrak{a})} \left|\Phi(\mathfrak{a},m,z) \right| \omega^2\right) |q|^m\\
&\le C_1 + \sum_{m \gg 1}^\infty C_2 m^2 \log(m) |q|^m < \infty.
\end{align*}
This implies all stated assertions.
\end{proof}
\begin{remark} \label{almost-nowhere-continuous}
By assigning the value $\infty$ at the points where
\[
\sum_{m=1}^\infty \Phi(\mathfrak{a},m,z) q^m
\]
diverges, we can interpret the series as a well-defined function $X(\mathfrak{a}) \to \mathbb{P}^1(\mathbb{C})$. However, this function is discontinuous at every $z \in X(\mathfrak{a})$ where the series converges. This is because the set of singularities coming from the logarithmic singularities of the individual $\Phi(\mathfrak{a},m,z)$ along the Hirzebruch--Zagier divisors lies dense in $X(\mathfrak{a})$.
\end{remark}
\begin{theorem} \label{modular-integral}
Assume that $D$ is odd. The integral of
\[
\sum_{m=1}^\infty \Phi(\mathfrak{a},m,z) e(\tau m)
\]
over $X(\mathfrak{a})$ is, up to a constant, a holomorphic modular form of weight $2$ in $\tau \in \mathbb{H}$.
\end{theorem}
\begin{proof}
By Corollary~\ref{almost-everywhere-coro} the integral of the series is well defined and by Theorem~\ref{Phi-integral} it equals
\[
- \frac{1}{12} \sum_{m=1}^\infty \sigma(\mathfrak{a},m,-1) e(\tau m).
\]
In \cite[Corollary~4.1]{buckEisen} the Fourier expansion of a holomorphic Eisenstein series for $\Gamma_0(D)$ of nebentypus $\chi_D$ and weight $2$ is given as
\[
1 + \frac{2}{ L(-1,\chi_D)} \sum_{m=1}^\infty \sigma(\mathfrak{a},m,-1) e(m\tau).
\]
Comparing the two expansions, the integral equals $-\tfrac{L(-1,\chi_D)}{24}$ times this Eisenstein series up to an additive constant, which proves the claim.
\end{proof}
\begin{thebibliography}{BvdGHZ08}
\bibitem[BBGK07]{bruinier2007borcherds}
Jan~Hendrik Bruinier, José~Ignacio Burgos~Gil, and Ulf K{\"u}hn.
\newblock Borcherds products and arithmetic intersection theory on {H}ilbert
modular surfaces.
\newblock {\em Duke Mathematical Journal}, 139(1):1--88, 2007.
\bibitem[BEY21]{bruinier2021cm}
Jan~Hendrik Bruinier, Stephan Ehlen, and Tonghai Yang.
\newblock {CM} values of higher automorphic {G}reen functions for orthogonal
groups.
\newblock {\em Inventiones mathematicae}, 225(3):693--785, 2021.
\bibitem[BF01]{bruinier2001local}
Jan~Hendrik Bruinier and Eberhard Freitag.
\newblock Local {B}orcherds products.
\newblock {\em Universit\'{e} de Grenoble. Annales de l'Institut Fourier},
51(1):1--26, 2001.
\bibitem[BGKK07]{BKK1}
José~Ignacio Burgos~Gil, J{\"u}rg Kramer, and Ulf K{\"u}hn.
\newblock {C}ohomological arithmetic {C}how rings.
\newblock {\em Journal of the Institute of Mathematics of Jussieu},
6(1):1--172, 2007.
\bibitem[Bru99]{bruinier1999borcherds}
Jan~Hendrik Bruinier.
\newblock {B}orcherds products and {C}hern classes of {H}irzebruch-{Z}agier
divisors.
\newblock {\em Inventiones mathematicae}, 138(1):51--83, 1999.
\bibitem[Buc22]{buckdiss}
Johannes~J. Buck.
\newblock {\em {G}reen functions and arithmetic generating series on {H}ilbert
modular surfaces}.
\newblock PhD thesis, Technische Universit{\"a}t Darmstadt, Darmstadt, 2022.
\bibitem[Buc23a]{buckRepNumbers}
Johannes~J. Buck.
\newblock {D}irichlet series associated to representation numbers of ideals in
real quadratic number fields.
\newblock {\em arXiv preprint arXiv:2302.02844v2}, 2023.
\bibitem[Buc23b]{buckEisen}
Johannes~J. Buck.
\newblock {E}lliptic {E}isenstein series associated to ideals in real quadratic
number fields.
\newblock {\em arXiv preprint arXiv:2303.17821}, 2023.
\bibitem[BvdGHZ08]{123mf}
Jan~Hendrik Bruinier, Gerard van~der Geer, G{\"u}nter Harder, and Don Zagier.
\newblock {\em {T}he 1-2-3 of {M}odular {F}orms}.
\newblock Universitext. Springer-Verlag, Berlin, 2008.
\newblock Notes of the lectures at the summer school on “Modular Forms and
their Applications” at the Sophus Lie Conference Center, Nordfjordeid,
Norway, June 2004, edited by Kristian Ranestad.
\bibitem[EMOT54]{1954tables}
A.~Erd\'{e}lyi, W.~Magnus, F.~Oberhettinger, and F.~G. Tricomi.
\newblock {\em Tables of integral transforms. {V}ol. {I}}.
\newblock McGraw-Hill Book Co., Inc., New York-Toronto-London, 1954.
\newblock Based, in part, on notes left by Harry Bateman.
\bibitem[Fre90]{freitag1990hilbert}
Eberhard Freitag.
\newblock {\em Hilbert modular forms}.
\newblock Springer-Verlag, Berlin, 1990.
\bibitem[HZ76]{hirzebruch1976intersection}
Friedrich Hirzebruch and Don Zagier.
\newblock {I}ntersection {N}umbers of {C}urves on {H}ilbert {M}odular
{S}urfaces and {M}odular {F}orms of {N}ebentypus.
\newblock {\em Inventiones mathematicae}, 36(1):57--113, 1976.
\bibitem[KM90]{kudlamillson}
Stephen~S. Kudla and John~J. Millson.
\newblock Intersection numbers of cycles on locally symmetric spaces and
{F}ourier coefficients of holomorphic modular forms in several complex
variables.
\newblock {\em Publications Math{\'e}matiques de l'IH{\'E}S}, 71:121--172,
1990.
\bibitem[Kud97]{kudla1997central}
Stephen~S. Kudla.
\newblock Central derivatives of {E}isenstein series and height pairings.
\newblock {\em Annals of Mathematics}, 146(3):545--646, 1997.
\bibitem[Kud02]{kudla2002derivatives}
Stephen~S. Kudla.
\newblock Derivatives of {E}isenstein series and generating functions for
arithmetic cycles.
\newblock {\em Astérisque}, 276:341--368, 2002.
\newblock Séminaire Bourbaki, 52 année, 1999--2000, no. 876.
\bibitem[Kud04]{kudla2004special}
Stephen~S. Kudla.
\newblock Special cycles and derivatives of {E}isenstein series.
\newblock In {\em Heegner points and {R}ankin {$L$}-series}, volume~49 of {\em
Mathematical Sciences Research Institute Publications}, pages 243--270.
Cambridge University Press, 2004.
\bibitem[OLBC10]{handbook}
Frank W.~J. Olver, Daniel~W. Lozier, Ronald~F. Boisvert, and Charles~W. Clark.
\newblock {\em {NIST} {H}andbook of {M}athematical {F}unctions}.
\newblock Cambridge University Press, 2010.
\bibitem[vdG88]{geer1988hms}
Gerard van~der Geer.
\newblock {\em Hilbert modular surfaces}, volume~16 of {\em Ergebnisse der
Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related
Areas (3)]}.
\newblock Springer-Verlag, Berlin, 1988.
\bibitem[Zag75]{zagier1975modular}
Don Zagier.
\newblock Modular {F}orms {A}ssociated to {R}eal {Q}uadratic {F}ields.
\newblock {\em Inventiones mathematicae}, 30(1):1--46, 1975.
\end{thebibliography}
\end{document}
\begin{document}
\title{Pure non-Markovian evolutions}
\author{Dario De Santis$^{1,2}$}
\affiliation{$^1$ ICFO-Institut de Ciencies Fotoniques, The Barcelona Institute of Science and Technology, 08860 Castelldefels (Barcelona), Spain}
\affiliation{$^2$ Scuola Normale Superiore, I-56126 Pisa, Italy}
\date{\today}
\begin{abstract}
Non-Markovian dynamics are characterized by information backflows, where the evolving open quantum system retrieves part of the information previously lost in the environment. Hence, the very definition of non-Markovianity implies an initial time interval when the evolution is noisy, otherwise no backflow could take place.
We identify two types of initial noise: the first merely degrades the information content of the system, while the second is essential for non-Markovian phenomena.
Hence, all non-Markovian evolutions can be divided into two classes: noisy non-Markovian (NNM), showing both types of noise, and pure non-Markovian (PNM), implementing solely essential noise.
We make this distinction through a timing analysis of fundamental non-Markovian phenomena.
First, we prove that all NNM dynamics can be simulated through a Markovian pre-processing of a PNM core. We quantify the gains in terms of information backflows and non-Markovianity measures provided by PNM evolutions.
Similarly, we study how the entanglement breaking property behaves in this framework and we discuss a technique to activate correlation backflows. Finally, we show the applicability of our results through the study of several well-known dynamical models.
\end{abstract}
\maketitle
Open quantum system dynamics describe the evolution of quantum systems interacting with an external system, typically represented by the surrounding environment. The unavoidable nature of this interaction made this topic of central interest in the field of quantum information \cite{book_B&P,book_R&H}.
This reciprocal action may lead to two different regimes for the information initially stored in the system.
An evolution is called Markovian whenever there are no memory revivals and the system therefore shows a monotonic degradation of information.
On the contrary, non-Markovian evolutions are those showing information backflows, where part of the information stored in the system is first lost to the environment and then retrieved at later times (for reviews on this topic see \cite{rev_RHP,rev_BLP,revmod2,rev4}).
Hence, the very definition of these evolutions implies the existence of an initial time interval when the dynamics is noisy, otherwise no backflow from the environment could be possible.
In this work, we address the question of whether all the initial noise applied by an evolution is necessary for the subsequent non-Markovian phenomena.
We identify two noise types: the first, which we call \textit{useless}, is not necessary for information backflows, while only the information lost through \textit{essential} noise takes part in the characteristic non-Markovian phenomena.
Starting from this observation, we classify non-Markovian evolutions as \textit{noisy} or \textit{pure}, where the first show both types of noise, while the second implement essential noise only. Hence, the information initially lost with pure non-Markovian (PNM) evolutions always takes part in a later backflow, which can occur even in time intervals starting immediately after the beginning of the interaction with the environment. Instead, the useless noise of noisy non-Markovian (NNM) evolutions has the sole effect of damping the information content of the open system, and therefore the backflows.
This classification is in close analogy with the structure of quantum states, where mixed states can be obtained through noisy operations on pure states.
Similarly, NNM evolutions can be obtained via a Markovian pre-processing, corresponding to the useless noise, of PNM evolutions, which we call PNM \textit{cores}.
Moreover, just as pure states allow the best performance in several scenarios and protocols, PNM evolutions are characterized by the largest information revivals and non-Markovianity measures.
The interest in considering PNM cores of known NNM evolutions resides in the possibility of isolating a dynamics with the same qualitative non-Markovian features and, at the same time, with the largest possible non-Markovian phenomena. For instance, in an experimental setup where the non-Markovian phenomena generated by a target evolution are not visible because of various additional noise sources in the laboratory (preparation, measurements, thermal noise, etc.), the possibility of isolating and implementing the corresponding PNM core may make it possible to appreciate the same non-Markovian phenomena that we failed to detect with the noisy version.
The first main goal of this work is to identify the initial useless noise of generic non-Markovian evolutions. While doing so, we propose a structure for the timing of the fundamental non-Markovian phenomena happening in finite and infinitesimal time intervals.
This framework provides a straightforward and natural approach to discriminate Markovian, NNM and PNM evolutions.
We follow by showing how to isolate the PNM core of a generic NNM evolution and, conversely, how any PNM evolution can generate a whole class of NNM evolutions.
Later, we explain how and to what extent PNM evolutions are characterized by larger information backflows and non-Markovianity measures. In particular, we focus on backflows of state distinguishability. Finally, we show how the entanglement breaking property behaves in this scenario and we discuss a technique to activate correlation backflows that cannot be observed in the presence of useless noise. We then apply our results to several models, such as depolarizing and dephasing evolutions.
\section{Quantum evolutions}\label{evolutions}
We define $S(\mathcal{H})$ to be the set of density matrices of a generic $d$-dimensional Hilbert space $\mathcal{H}$. The time evolution of any open quantum system can be represented by a one-parameter family ${\mathbf{\Lambda}}=\{\Lambda_t\}_{t\geq 0}$ of quantum maps, namely completely positive and trace preserving (CPTP) superoperators.
We define ${\mathbf{\Lambda}}$ to be the \textit{evolution} of the system, while $\Lambda_t$ is the corresponding \textit{dynamical map} at time $t$.
Hence, the transformation of an initial state $\rho(0)\in S(\mathcal{H})$ into the corresponding evolved state at time $t$ is $ \rho(t)=\Lambda_t(\rho(0))\in S(\mathcal{H})$.
We consider ${\mathbf{\Lambda}}$ to be a collection of dynamical maps \textit{continuous} in time. This is because any open quantum system evolution obtained through a physical interaction with an environment, even in the case of non-continuous Hamiltonians, is continuous \cite{continuity}. Secondly, we assume \textit{divisibility}, namely the existence of an \textit{intermediate map} for any time interval. More precisely, for all $0\leq s \leq t$ we assume the existence of a linear map $V_{t,s}$ such that $\Lambda_t=V_{t,s}\circ \Lambda_s$. Invertible evolutions are an instance of divisible evolutions. We call an evolution invertible if, for all $t\geq 0$, there exists an operator $\Lambda_t^{-1}$ such that $\Lambda_t^{-1}\circ\Lambda_t=I$, where $I$ is the identity map on $S(\mathcal{H})$. Indeed, in these cases $V_{t,s}=\Lambda_t\circ \Lambda_s^{-1}$. While divisibility makes all the steps of the following sections easier, in Section \ref{nondivisible} we show how to generalize our results to non-divisible evolutions.
We say that an evolution is \textit{CP-divisible} if and only if between any two times it is represented by a quantum channel.
Hence, this property corresponds to requiring that the intermediate maps $V_{t,s}$ are CPTP for all $0\leq s\leq t$. Recall that any implementable quantum operation is represented by a CPTP operator, which is the reason why the dynamical maps $\Lambda_t$ are required to be CPTP at all times. In the case of non-CP-divisible evolutions, there exist $s\leq t$ such that $V_{t,s}$ is not CPTP, while at the same time $\Lambda_t=V_{t,s}\circ \Lambda_s$ must be CPTP. In this case the transformation acting in the time interval $[s,t]$ cannot be applied independently of the transformation applied in $[0,s]$.
We define Markovian evolutions as those being CP-divisible.
Thanks to the Stinespring-Kraus representation theorem \cite{Stine,Kraus}, such a definition reflects the impossibility for the system to recover any information that was previously lost. Indeed, as we explain in more detail later, CPTP operators degrade the information content of the system.
Therefore, an evolution is non-Markovian if and only if there exists at least one time interval $[s,t]$ when the evolution is not described by a CPTP intermediate map. Indeed, whenever this is the case, it is possible to obtain an information backflow during the same time interval \cite{bogna,DDSshort}, even for non-invertible evolutions \cite{BD,DDSMJ}.
Once we have an explicit formulation of ${\mathbf{\Lambda}}$, the corresponding dynamical maps $\Lambda_t$ and intermediate maps $V_{t,s}$ for all $0\leq s\leq t$, we can define $\mathcal{P}^\Lambda $: the collection of those time pairs such that $V_{t,s}$ is CPTP. This set can be obtained by considering the smallest eigenvalue $\lambda_{t,s}$ of the operator obtained by applying the Choi-Jamiołkowski isomorphism \cite{Choi,Jamilokowski} to $V_{t,s}$. Indeed, $V_{t,s}$ is CPTP if and only if $\lambda_{t,s}\geq 0$.
\begin{eqnarray}\label{CPTPsub}
\hspace{-0.9cm}\mathcal{P}^\Lambda = \{0\leq s\leq t \,|\, V_{t,s} \mbox{ CPTP} \}= \{0\leq s\leq t \,|\, \lambda_{t,s}\geq 0 \}.
\end{eqnarray}
Similarly, we define $\mathcal{N}^\Lambda$ as the pairs of times for which $V_{t,s}$ is non-CPTP ($\lambda_{t,s}<0$) and therefore it is complementary to $\mathcal{P}^\Lambda$.
\begin{eqnarray}
\label{nonCPTPsub}
\hspace{-0.4cm} \mathcal{N}^\Lambda = \{0\leq s\leq t \,|\, V_{t,s} \mbox{ not CPTP} \} = \{0\leq s\leq t \,|\, \lambda_{t,s}< 0 \} .
\end{eqnarray}
Notice that ${\mathbf{\Lambda}}$ is Markovian if and only if $\mathcal{P}^\Lambda=\{s,t\}_{0\leq s\leq t} $.
We prove that $\mathcal{P}^\Lambda$ is closed and $\mathcal{N}^\Lambda$ is open in Appendix \ref{proofclosed}. Moreover, the boundary of $\{s,t\}_{0\leq s\leq t} $ always belongs to $\mathcal{P}^\Lambda$: the vertical line $\{0,t\}_{t\geq 0}$ corresponds to the (CPTP) dynamical maps $\Lambda_t$ and the diagonal line $\{t,t\}_{t\geq 0}$ corresponds to the trivial intermediate (identity) maps $V_{t,t}=I$, for which $\lambda_{t,t}=0$. Notice that pairs $\{s,t\}$ infinitesimally close to $\{t,t\}_{t\geq 0}$ correspond to the infinitesimal intermediate maps $V_{t+\epsilon,t}$, whose CPTP/non-CPTP nature can be studied through the rates of the corresponding master equation \cite{RHP,darekk}. We show this feature in Section \ref{eternality}.
Instead, any point in the interior of $\{s,t\}_{0\leq s\leq t}$ can either belong to $\mathcal P^\Lambda$ or $\mathcal N^\Lambda$. At the same time, not every open set is allowed for $\mathcal N^\Lambda$: these sets have to satisfy some reciprocal constraints deriving from fundamental map composition rules \footnote{Consider $V_{t_3,t_1}=V_{t_3,t_2}\circ V_{t_2,t_1}$ for $t_1< t_2< t_3$: (i)
if $V_{t_2,t_1}$ and $ V_{t_3,t_2}$ are CPTP, then $V_{t_3,t_1}$ is CPTP; (ii)
if $V_{t_3,t_1}$ is non-CPTP and $V_{t_2,t_1}$ is CPTP, then $V_{t_3,t_2}$ is non-CPTP; (iii) if $V_{t_3,t_1}$ is non-CPTP and $V_{t_3,t_2}$ is CPTP, then $V_{t_2,t_1}$ is non-CPTP; (iv) if $V_{t_2,t_1}$ is non CPTP, there exists $t\in (t_1,t_2)$ such that $V_{t+\epsilon,t}$ is non-CPTP for infinitesimal $\epsilon>0$.}. Below, we show several representations of $\mathcal P^\Lambda$ and $\mathcal N^\Lambda$.
\section{Timing of information backflows}
In this section we show how the timing of the main non-Markovian phenomena of an evolution is always ruled by three times: ${T^\Lambda}$, $\tau^\Lambda$ and $t^\Lambda$. Before giving the corresponding mathematical definitions, we anticipate their intuitive meaning. It is possible to observe an information backflow during a time interval if and only if it starts later than ${T^\Lambda}$. Hence, there exist intervals $[{T^\Lambda}+\epsilon, t]$ whose corresponding intermediate maps are not CPTP for infinitesimal $\epsilon>0$ \footnote{When we say that a feature is satisfied for an ``infinitesimal epsilon'', we mean that for a given $\epsilon^*>0$ this feature holds true for all $\epsilon \in (0,\epsilon^*)$.}. Among these time intervals, $[{T^\Lambda}+\epsilon, t^\Lambda]$ is the shortest, namely $t=t^\Lambda$ is the earliest final time such that $V_{t^\Lambda, {T^\Lambda}+\epsilon}$ is not CPTP for infinitesimal $\epsilon>0$. Instead, $\tau^\Lambda$ is the earliest time when an instantaneous backflow can be observed, namely the earliest $t$ such that $V_{t+\epsilon,t}$ is not CPTP for infinitesimal $\epsilon>0$. Hence, $[{T^\Lambda}+\epsilon, t^\Lambda]$ ($[\tau^\Lambda,\tau^\Lambda+\epsilon]$) is the shortest time interval with the earliest \textit{initial} (\textit{final}) time for which the corresponding intermediate map is not CPTP.
For these reasons, we call the noise applied by the evolution in $[0,{T^\Lambda}]$ {useless} for non-Markovianity, while the noise applied later than ${T^\Lambda}$ is {essential} for non-Markovian phenomena.
We represent the typical role of these three times in Fig. \ref{disegnetto}.
Given a generic evolution ${\mathbf{\Lambda}}$, we define:
\begin{eqnarray}\label{TLambdanuovo}
\!\!\! T^{\Lambda}&=& \max \left\{\, T \, \left| \,
\begin{array}{cccc}
\!\mbox{(A)} & V_{t,s}& \mbox{CPTP for all} & s\leq t \leq T\\
\!\mbox{(B)} & V_{t,T}& \mbox{CPTP for all} & T\leq t \\
\!\mbox{(C)} & \Lambda_T& \mbox{not unitary} & { T>0}
\end{array}
\right. \right\} .
\end{eqnarray}
We briefly discuss conditions (A), (B) and (C). Condition (A) requires the evolution to be CP-divisible before ${T^\Lambda}$: \textit{no non-Markovian effects can take place in $[0,{T^\Lambda}]$}. Condition (B) requires the evolution following $T^\Lambda$ to be physical \textit{on its own}, namely such that the composition with the initial noise $\Lambda_{T^\Lambda}$ is not needed for the intermediate maps $\{V_{t,T^\Lambda}\}_{t\geq T^\Lambda}$ to be CPTP~\footnote{Conditions (A) and (B) are equivalent to (A+B): $V_{t,s}$ CPTP for all $ s\leq t \mbox{ and }s\in[0,T]$.}. Finally, condition (C) is imposed because a unitary transformation is not detrimental to the information content of our system and cannot be considered useless noise: it is ``useless'' but not noisy. We recall that evolutions whose dynamical maps are unitary at all times can be simulated with closed quantum systems, and therefore we do not focus on cases where condition (C) is necessary. In Appendix \ref{proofclosed} we show that Eq. (\ref{TLambdanuovo}) is indeed a maximum and not a supremum.
An evolution ${\mathbf{\Lambda}}$ is Markovian if and only if $T^\Lambda=+\infty$. Indeed, in this case none of the noise applied by the evolution is necessary for the later evolution to be physical. Markovian evolutions can be interpreted as sequences of noisy independent operations: between any two times the evolution is represented by a (noisy) CPTP map. We consider as trivial those evolutions that are unitary at all times, namely those that can be simulated with a closed quantum system.
Below we show that a finite value of ${T^\Lambda}\geq 0$ implies that the evolution has non-CPTP intermediate maps and is therefore non-Markovian. From now on, by $T^\Lambda\geq 0$ we mean $T^\Lambda\in [0,\infty)$. Hence, the time ${T^\Lambda}$ can be used to classify quantum evolutions:
$T^\Lambda=+\infty$: Markovian (M);
$T^\Lambda \in [0,+\infty)$: non-Markovian (NM);
$T^\Lambda \in (0,+\infty)$: noisy non-Markovian (NNM) and
$T^\Lambda=0$: pure non-Markovian (PNM). The last class is the main topic studied in this work. Finally, the following results clarify the role of
${T^\Lambda}$ (proofs in Appendix \ref{lemma123}):
\begin{lemma}\label{ABviolation}
Conditions (A), (B) and (C) are simultaneously satisfied at time $T$ if and only if $T\in [0,{T^\Lambda}]$. If (A) is violated at time $T$, (B) is violated at a strictly earlier time.
\end{lemma}
\begin{lemma}\label{lemma4}
Any non-Markovian evolution ${\mathbf{\Lambda}}$ has non-CPTP intermediate maps $V_{T,{T^\Lambda}+\epsilon}$ for one or more final times $T>{T^\Lambda}$ and infinitesimal values of $\epsilon >0$. If $V_{t,s}$ is a non-CPTP intermediate map of ${\mathbf{\Lambda}}$, then ${T^\Lambda}<s$.
\end{lemma}
\begin{figure}
\caption{Typical information content of an open quantum system evolving under a NNM evolution. An increase, or backflow, of information is a typical sign of non-Markovianity, namely of non-CPTP intermediate maps. Blue/red regions represent times when the infinitesimal intermediate map $V_{t+\epsilon,t}$ is CPTP (blue) or non-CPTP (red).}
\label{disegnetto}
\end{figure}
We want $\tau^\Lambda$ to represent the first time when the instantaneous information flow inverts its direction back to the system. Hence, we focus on the earliest time such that the intermediate map $V_{T+\epsilon,T}$ is non-CPTP for infinitesimal $\epsilon>0$:
\begin{equation}\label{taunm}
\tau^{\Lambda}=\lim_{\epsilon\rightarrow 0^+} {\inf} \left\{ T \,| \, V_{T+\epsilon,T} \mbox{ not CPTP } \right\} .
\end{equation}
We can say that $\tau^\Lambda$ defines when instantaneous non-Markovian phenomena begin, as it is the earliest time when an infinitesimal intermediate map is non-CPTP and information is instantaneously retrieved from the environment.
The time intervals with the earliest initial time such that the corresponding intermediate maps are non-CPTP are of the form $[{T^\Lambda}+\epsilon,t]$ (see Lemma \ref{lemma4}). We define $t^{\Lambda}$ to be the earliest final time $t$ such that this intermediate map is non-CPTP:
\begin{equation}\label{tnm}
t^{\Lambda}=\lim_{\delta\rightarrow 0^+} \inf \left\{ T \,| \, V_{T,{T^\Lambda}+\delta} \mbox{ not CPTP } \right\}.
\end{equation}
The timing of the earliest information backflows is therefore dictated by the values of ${T^\Lambda}$, $\tau^\Lambda$ and $t^\Lambda$, which have a definite value for all non-Markovian evolutions.
These three characteristic times satisfy the following reciprocal relation (proof in Appendix \ref{tlambdavari}):
\begin{equation}\label{order}
0\leq {T^\Lambda} \leq \tau^\Lambda \leq t^\Lambda \leq \infty .
\end{equation}
We briefly discuss the possible equalities that can hold in the above equation and later propose exemplary models. PNM evolutions (${T^\Lambda}=0$) are largely analysed throughout this work. A divergent $t^\Lambda=\infty$ is allowed only if $ {T^\Lambda} < \tau^\Lambda < t^\Lambda$. For what concerns the possible equalities between ${T^\Lambda}$, $\tau^\Lambda$ and $ t^\Lambda$, we have that ${T^\Lambda}= \tau^\Lambda$ implies ${T^\Lambda}= \tau^\Lambda = t^\Lambda$ (proof in Appendix \ref{tlambdavari}), namely
${T^\Lambda} = \tau^\Lambda < t^\Lambda$ is forbidden.
On the contrary, ${T^\Lambda}< \tau^\Lambda = t^\Lambda$ is allowed.
We mention some examples of the above-mentioned patterns of ${T^\Lambda}$, $\tau^\Lambda$ and $t^\Lambda$. Concerning the difference between ${T^\Lambda}=0$ and ${T^\Lambda}>0$, we show how to obtain a PNM evolution (${T^\Lambda}=0$) from any NNM evolution (${T^\Lambda}>0$) and vice-versa in Section \ref{secPNM}.
We propose an in-depth study of the case ${T^\Lambda} < \tau^\Lambda < t^\Lambda < \infty$ for depolarizing evolutions in Section \ref{secdepo}.
A well-known instance of $0={T^\Lambda}= \tau^\Lambda = t^\Lambda $ is given by the eternal non-Markovian model~\cite{eternal0,eternal,eternal1}, while ${T^\Lambda}< \tau^\Lambda < t^\Lambda=\infty$ and ${T^\Lambda}< \tau^\Lambda = t^\Lambda<\infty$ can be obtained with quasi-eternal non-Markovian evolutions~\cite{DDSlong}. These models are studied in Section \ref{eternality}.
The following generalizes Lemma \ref{lemma4}:
\begin{proposition}\label{propDgen}
Let ${\mathbf{\Lambda}}$ be a generic non-Markovian evolution. If ${T^\Lambda}<t^\Lambda$, then $ V_{t^\Lambda,s}$ is non-CPTP for all $s\in({T^\Lambda},t^\Lambda)$. If ${T^\Lambda}=t^\Lambda$, for all $T>{T^\Lambda}$ the infinitesimal intermediate map $V_{t+\epsilon,t}$ is non-CPTP for an infinite number of times $t\in({T^\Lambda},T)$.
\end{proposition}
Hence, not only must each non-Markovian evolution have a non-CPTP intermediate map for time intervals starting immediately after ${T^\Lambda}$ (Lemma \ref{lemma4}), but whenever ${T^\Lambda}<t^\Lambda$ there is a whole continuum of non-CPTP intermediate maps $V_{t^\Lambda,s}$ for $s\in({T^\Lambda},t^\Lambda)$. Additionally, if $V_{t,{T^\Lambda}+\epsilon}$ is not CPTP for $t>t^\Lambda$, all the intermediate maps $V_{t,s}$ with $s\in({T^\Lambda},t^\Lambda)$ are non-CPTP.
In the case ${T^\Lambda}=t^\Lambda$, the infinitesimal intermediate maps $V_{t+\epsilon,t}$ are non-CPTP either for all the times inside a time interval of the type $({T^\Lambda}, T)$ for some $T>{T^\Lambda}$, or for infinitely many times that do not constitute an interval for any $T>{T^\Lambda}$. We propose an example of the latter pathological case in Appendix~\ref{lemma123}. A special case is given by the eternal NM model, which has non-CPTP intermediate maps $V_{t,s}$ \textit{for all} $0<s<t$ (see Section \ref{eternality}): ${T^\Lambda}=t^\Lambda$ and it enjoys both properties described in Proposition \ref{propDgen}.
\begin{widetext}
\begin{figure}
\caption{Open quantum systems are physically represented by a system $S$ interacting with a surrounding environment $E$. This interaction leads to information losses (blue arrows) and, in case of non-Markovian evolutions, backflows (red arrows).
NNM evolutions ${\mathbf{\Lambda}
\label{resume}
\end{figure}
\end{widetext}
\section{Pure non-Markovian evolutions}\label{secPNM}
We show that the initial noise that any non-Markovian evolution $\mathbf{\Lambda}$ induces in the time interval $[0,T^\Lambda]$ is useless for the subsequent non-Markovian effects.
By doing so, we better explain the role of PNM evolutions, namely those one-parameter continuous families of CPTP maps $\mathbf{\Lambda}=\{\Lambda_t\}_{t\geq 0}$ such that ${T^\Lambda}=0$. We prove that any NNM evolution can be simulated by a Markovian pre-processing of the input states followed by a PNM evolution. Finally, we prove that, if an evolution perfectly retrieves the initial information of the system, then it must be PNM.
\subsection{Simulation of NNM evolutions with PNM evolutions}\label{simulation}
We start by simulating NNM evolutions $\mathbf{\Lambda}$ through the successive interaction of the system with two different environments. We consider the Stinespring-Kraus representation theorem \cite{Stine,Kraus}, which allows us to describe continuous families of CPTP maps through the interaction of the system with an initially uncorrelated environment. Hence, the system is in contact with a first environment $E_1$ in the time interval $[0,{T^\Lambda}]$, while at later times $t>{T^\Lambda}$ it is in contact with $E_2$. Consider the following two-step scenario:
\begin{itemize}
\item $t\in[0,T^\Lambda]$ ({\bf Markovian pre-processing}): the evolution is simulated by the interaction with $E_1$. A unitary transformation $U'_t$ evolves the system-environment state, which at time $0$ is in a product state (no initial system-environment correlations):
\begin{equation}\label{E1}
\rho_{S}(0)\rightarrow \rho_S(t)=\mbox{Tr}_{E_1}[U'_{t} (\rho_S(0)\otimes \omega_{E_1})U'^{\dagger}_{t} ].
\end{equation}
This simulation is possible because $\Lambda_t$ is CPTP for all $t\in [0,{T^\Lambda}]$. The phenomenology during this time interval is Markovian as Eq. (\ref{TLambdanuovo}) requires ${\mathbf{\Lambda}}$ to be CP-divisible in $[0,{T^\Lambda}]$. This stage represents the useless noise of ${\mathbf{\Lambda}}$.
\item $t=T^\Lambda$: we discard $E_1$ and let the system interact with a second environment $E_2$. The system-environment state is given by $\rho_{SE_2}({T^\Lambda})=\rho_S({T^\Lambda})\otimes \omega_{E_2}$ (no initial system-environment correlations);
\item $t\geq T^\Lambda$ ({\bf PNM core}): the evolution is simulated by the interaction with the environment $E_2$. A unitary transformation $U''_\tau$ evolves the system-environment state:
\begin{equation}\label{E2}
\rho_{S}(T^\Lambda)\rightarrow \rho_S(t)=\mbox{Tr}_{E_2}[U''_{\tau} (\rho_S(T^\Lambda)\otimes \omega_{E_2})U''^{\dagger}_{\tau} ] \, ,
\end{equation}
where $\tau=t-T^\Lambda\geq 0$. This simulation is possible because $V_{t,{T^\Lambda}}$ is CPTP for all $t\geq {T^\Lambda}$.
The phenomenology during this time interval is non-Markovian.
\end{itemize}
As we already noticed, no information backflow can be observed in $[0,T^\Lambda]$: the phenomenology \textit{in this time interval} is Markovian.
Now, thanks to this two-stage simulation, we can state that the information involved in the backflows was originally lost later than $T^\Lambda$. Indeed, the non-Markovian effects of $\mathbf{\Lambda}$ do not depend on the behaviour of the dynamics in the time interval $[0,T^\Lambda]$, when ${\mathbf{\Lambda}}$ is CP-divisible.
This is the reason why we call the first stage a \textit{Markovian pre-processing} and we say that $\Lambda_t$, for $t\in [0,{T^\Lambda}]$, generates the \textit{useless noise} of the evolution.
Conversely, the (CPTP) intermediate maps $V_{t,s}$ for ${T^\Lambda}\leq s\leq t$ generate the \textit{essential noise} needed for non-Markovian phenomena.
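For concreteness, the dilation steps of Eqs. (\ref{E1}) and (\ref{E2}) can be sketched numerically. The following minimal Python example is only an illustration and not the dilation of any specific model in this paper: it assumes a single-qubit system, single-qubit environments prepared in $|0\rangle\langle 0|$, and partial-swap system-environment unitaries chosen arbitrarily.
\begin{verbatim}
import numpy as np

# Minimal sketch of the two-step simulation of Eqs. (E1)-(E2).
# Assumptions (purely illustrative): qubit system, qubit environments
# prepared in |0><0|, partial-swap system-environment unitaries.
d = 2
ket0 = np.array([1.0, 0.0])
omega = np.outer(ket0, ket0)                 # environment state |0><0|
SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)
I4 = np.eye(4, dtype=complex)

def partial_swap(theta):
    # exp(-i theta SWAP) = cos(theta) I - i sin(theta) SWAP  (SWAP^2 = I)
    return np.cos(theta) * I4 - 1j * np.sin(theta) * SWAP

def trace_env(rho_SE):
    # partial trace over the second (environment) factor
    return np.einsum('ikjk->ij', rho_SE.reshape(d, d, d, d))

def dilation_step(rho_S, U):
    # rho_S -> Tr_E[ U (rho_S x omega_E) U^dagger ]
    rho_SE = np.kron(rho_S, omega)
    return trace_env(U @ rho_SE @ U.conj().T)

rho0 = np.array([[0.7, 0.3], [0.3, 0.3]], dtype=complex)  # initial state

# Markovian pre-processing: interaction with E_1 up to time T^Lambda
rho_T = dilation_step(rho0, partial_swap(0.4))
# PNM core: discard E_1, couple the system to a fresh environment E_2
rho_t = dilation_step(rho_T, partial_swap(1.1))

print(np.trace(rho_t).real, np.linalg.eigvalsh(rho_t))  # trace 1, eigenvalues >= 0
\end{verbatim}
Replacing the illustrative unitaries with Stinespring dilations of the actual maps $\Lambda_t$ (for $t\leq{T^\Lambda}$) and $\overline{\Lambda}_\tau$ would reproduce the two-step simulation described above.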
We define $\overline{\mathbf{\Lambda}}=\{\overline{{\Lambda}}_\tau\}_{\tau\geq 0}$ to be the evolution that represents the interaction with the second environment, where
\begin{equation}\label{E22}
\overline{{\Lambda}}_\tau (\cdot) = \mbox{Tr}_{E_2}[U''_{\tau} (\,\cdot \,\otimes \omega_{E_2} )U''^{\dagger}_{\tau} ] \, .
\end{equation}
From the above definitions it is easy to see that the dynamical and intermediate maps $\overline V_{t,s}$ of the evolution $\overline{\mathbf{\Lambda}}$ are connected with the intermediate maps $V_{t,s}$ of $\mathbf{\Lambda}$ as follows:
\begin{equation}\label{PNM}
\begin{array}{ccc}
& & \overline{{\Lambda}}_{t} = V_{t+{T^\Lambda},{T^\Lambda}}\hspace{0.4cm} \\
& & \overline{V}_{t,s} = V_{t+{T^\Lambda},s+{T^\Lambda}}
\end{array} \,\,\,\, \mbox{ for } \,\,\,\, 0 \leq s \leq t \, .
\end{equation}
The map $\overline{{\Lambda}}_{t} $ is CPTP for all $t \geq 0$ and therefore $\overline{\mathbf{\Lambda}}=\{\overline{{\Lambda}}_t\}_{t\geq 0}$ is a valid evolution by itself (see Eq. (\ref{TLambdanuovo})). It is immediate to check that $T^{\overline{\Lambda}}=0$: the evolution $\overline{\mathbf{\Lambda}}$ is PNM and we call it the \textit{PNM core of} $\mathbf{\Lambda}$.
Moreover, the relation between the characteristic times of ${\mathbf{\Lambda}}$ and $\overline{\mathbf{\Lambda}}$ is (see Eqs. (\ref{taunm}), (\ref{tnm}) and (\ref{PNM})):
\begin{equation}\label{tLoL}
T^{\overline{\Lambda}}=0 \,,\,\,\,\, \tau^{\overline{\Lambda}}=\tau^\Lambda-{T^\Lambda} \, , \,\,\,\, t^{\overline{\Lambda}}=t^\Lambda-{T^\Lambda} \, .
\end{equation}
We conclude that NNM evolutions can be simulated via a {Markovian pre-processing} (physically represented by Eq. (\ref{E1})) followed by the action of the corresponding PNM core (physically represented by Eq. (\ref{E22})):
\begin{equation}\label{NNMevo2}
\mathbf{\Lambda}= \left\{
\begin{array}{ccc}
\Lambda_t & t< T^{\Lambda} & \mbox{ (Markovian pre-processing)} \\
\overline{\Lambda}_{t-T^\Lambda} \circ \Lambda_{T^\Lambda} & t\geq T^{\Lambda} & \mbox{ (PNM core)}
\end{array} \right. \, .
\end{equation}
This decomposition is summarized in Fig. \ref{resume}.
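When the dynamical maps are available as superoperator matrices on a time grid and $\Lambda_{T^\Lambda}$ is invertible, the intermediate maps and the PNM core of Eq. (\ref{PNM}) can be built and tested numerically through the positivity of their Choi matrices. The following Python sketch is one possible implementation (with an arbitrary toy dephasing family as input, purely for illustration); it is not the construction used in our proofs.
\begin{verbatim}
import numpy as np

# Sketch: dynamical maps given as column-stacking superoperators L[k] on a
# time grid; intermediate maps V_{t,s} = L[t] L[s]^{-1} and the PNM core of
# Eq. (PNM) are built and tested via their Choi matrices.
d = 2   # qubit, for illustration

def choi(S):
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1.0
            out = (S @ Eij.reshape(-1, order='F')).reshape(d, d, order='F')
            C += np.kron(Eij, out)
    return C / d

def is_cptp(S, tol=1e-9):
    C = choi(S)
    tp = np.allclose(d * np.einsum('ikjk->ij', C.reshape(d, d, d, d)),
                     np.eye(d), atol=1e-7)
    return tp and np.min(np.linalg.eigvalsh(C)) > -tol

def intermediate(L_t, L_s):
    return L_t @ np.linalg.inv(L_s)

def pnm_core(L, iT):
    # L: superoperators on a uniform grid, iT: grid index of T^Lambda
    return [intermediate(L[iT + k], L[iT]) for k in range(len(L) - iT)]

# toy non-monotone dephasing family (purely illustrative)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
def dephasing(p):
    return p * np.eye(4) + (1 - p) * np.kron(sz.T, sz)

L = [dephasing(p) for p in (1.0, 0.9, 0.75, 0.825, 0.875)]
print([is_cptp(intermediate(L[k + 1], L[k])) for k in range(4)])
# -> [True, True, False, False]: the revival starts after the third step
core = pnm_core(L, iT=1)   # index 1 plays the role of T^Lambda here
print(all(is_cptp(V) for V in core))   # True: core dynamical maps are CPTP
\end{verbatim}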
Naturally, a different Markovian pre-processing with the same PNM core $\overline{\mathbf{\Lambda}}$ provides a different NNM evolution. For instance, if $\{\Gamma_t\}_{t\in[0,T^\Gamma]}$ is a CP-divisible continuous family of maps, we obtain the NNM evolution $\mathbf{\Gamma}$ defined by
\begin{equation}\label{NNMevo3}
\mathbf{\Gamma}= \left\{
\begin{array}{ccc}
\Gamma_t & t< T^{\Gamma} & \mbox{(Markovian pre-processing)} \\
\overline{\Lambda}_{t-T^\Gamma} \circ \Gamma_{T^\Gamma} & t\geq T^{\Gamma} & \mbox{(PNM core)}
\end{array} \right. \, ,
\end{equation}
where in general $T^\Gamma\neq T^\Lambda$. We can summarize the proven relations between NNM and PNM evolutions as follows:
\begin{proposition}\label{PNMNNM}
Any NNM evolution can be written as a Markovian pre-processing of a PNM evolution.
Any PNM evolution defines a class of NNM evolutions given by all its possible Markovian pre-processings.
\end{proposition}
We conclude this section by discussing the non-Markovian effects of $\mathbf\Lambda$ and $\overline{\mathbf{\Lambda}}$, where $\overline{\mathbf{\Lambda}}$ is the PNM core of ${\mathbf{\Lambda}}$. Differently from before, consider both evolutions acting from the initial time.
Any intermediate map of ${\mathbf{\Lambda}}$ taking place later than ${T^\Lambda}$ is also present during the dynamics of $\overline{\mathbf{\Lambda}}$: the intermediate map of ${\mathbf{\Lambda}}$ during $[s,t]$ is the same as the intermediate map of $\overline{\mathbf{\Lambda}}$ during $[s-T^\Lambda,t-T^\Lambda]$, namely $V_{t,s}=\overline V_{t-T^\Lambda,s-T^\Lambda}$. More importantly, the two evolutions have the same non-CPTP intermediate maps. It follows that the non-Markovian effects observable with ${\mathbf{\Lambda}}$
can also be observed with $\overline{\mathbf{\Lambda}}$, where they are amplified. Indeed, the states evolved by $\mathbf\Lambda$ receive a damped version of the non-Markovian effects obtainable with $\overline{\mathbf{\Lambda}}$, where the total damping is represented by $\Lambda_{T^\Lambda}$, namely the Markovian pre-processing. Later, we show more precisely how this damping acts on information backflows (Section \ref{tracedistance}) and non-Markovianity measures (Section \ref{measures}).
\subsection{Features of PNM evolutions}\label{secfeatures}
In this section we study the differences between PNM and NNM evolutions.
First, we recall that an information backflow can be obtained in a time interval if and only if the corresponding intermediate map is not CPTP~\cite{BD,DDSMJ}. Hence,
Lemma~\ref{lemma4} and the two-step simulation of NNM evolutions imply that: \textit{``The information that a quantum system loses with a NNM evolution ${\mathbf{\Lambda}}$ during the initial time interval $[0,T^\Lambda]$ is never retrieved''}.
Instead, if we restrict Proposition \ref{propDgen} to PNM evolutions, it reads:
\begin{corollary}\label{corollaryPNM}
An evolution $\mathbf{\Lambda}$ is PNM if and only if for small enough initial times $\epsilon >0$ there always exists a final time $t>0$ such that $V_{t,\epsilon}$ is non-CPTP. Moreover, $0<t^\Lambda$ implies that $V_{t^\Lambda,s}$ is non-CPTP for all initial times $s\in (0,t^\Lambda)$. Instead, if $0=t^\Lambda$, for all $T>0$ the infinitesimal intermediate map $V_{t+\epsilon,t}$ is non-CPTP for an infinite number of times $t\in(0,T)$.
\end{corollary}
We can say that \textit{``The information that is initially lost with a PNM evolution always takes part in later backflows''}. Indeed, as soon as a PNM evolution starts, even by considering infinitesimal initial times $\epsilon>0$, we have at least one later time $T$ such that there is an information backflow in $[\epsilon,T]$.
We move our attention to an interesting class of evolutions, namely those allowing a \textit{complete information retrieval}. More precisely, they first cause a partial or total degradation of the information encoded in the system and later, thanks to one or more backflows, they provide a perfect restoration of the initial information content of the system.
These instances are represented by those $\mathbf{\Lambda}$ having a non-unitary dynamical map $\Lambda_s$ at time $s$ and a unitary dynamical map $\Lambda_t=U$ at a later time $t$.
Since unitary transformations do not degrade the information content of quantum systems, all these evolutions completely retrieve in the time interval $[s,t]$ any type of information lost in the time interval $[0,s]$. These evolutions are always PNM, where $V_{t,s}$ is not even positive trace-preserving (PTP) (proof in Appendix \ref{proofpropC}):
\begin{proposition}\label{propC}
An evolution characterized by $s<t$ such that $\Lambda_{s}$ is not unitary and $\Lambda_{t}$ is unitary is PNM with the intermediate map $V_{t,s}$ not even PTP.
\end{proposition}
We end this section by clarifying the nature of PNM evolutions via some examples.
First, one may think that evolutions being first contractive and then unitary are exotic. Interestingly, many of the well-known NM models used in the literature have PNM cores with this property, e.g. dephasing, depolarizing and amplitude damping channels.
Secondly, having a non-unitary map at time $s$ and a unitary map at time $t>s$ is sufficient but not necessary for an evolution to be PNM. Hence, not all PNM evolutions present complete information retrieval. Indeed, there exist P-divisible PNM evolutions (see Section \ref{eternality}).
One might be tempted to conclude that, conversely, all non-P-divisible PNM evolutions satisfy the hypothesis of Proposition \ref{propC}. The following counterexample proves that this is false. Consider the qubit dephasing map $\Lambda_{i}(\rho)=p_i \rho+(1-p_i ) \sigma_i \rho \sigma_i$, where $\sigma_{i=x,y,z}$ are the Pauli operators and $p_i \in[0,1]$. Take an evolution such that, for $0<t_1<t_2<t_3$: (i) $\Lambda_{t_1}=\Lambda_{x}$ with $p_x<1$, (ii) $V_{t_2,t_1}=\Lambda_{z}$ with $p_z <1$ and (iii) $V_{t_3,t_2}=\Lambda_{x}^{-1}$, which is not even PTP. Moreover, we require $\Lambda_{t}(\rho)=p_x(t) \rho+(1-p_x(t)) \sigma_x \rho \sigma_x$, where $p_x(t)$ is a continuous function, decreasing in $t\in(0,t_1)$, with $p_x(t)<1$. It is easy to see that this evolution is PNM, e.g. by evolving $\rho(0)=\ketbra{0}{0}$. Indeed, $\Lambda_{t_3}(\ketbra{0}{0})=\ketbra{0}{0}$, but $V_{t_3,\epsilon}(\ketbra{0}{0})$ is outside the state space, which proves that $V_{t_3,\epsilon}$ is non-CPTP for infinitesimal $\epsilon>0$. Nonetheless, even if at time $t_3$ the evolved state goes back to its initialization ($\rho(0)=\rho(t_3)$), we have that $\Lambda_{t_3}=\Lambda_{x}^{-1} \circ\Lambda_{z}\circ \Lambda_{x}=\Lambda_{z}$ is not unitary. In this case, the PNM evolution ${\mathbf{\Lambda}}$ completely retrieves the information content of the system only if properly initialized.
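The counterexample above can also be checked numerically. In the following Python sketch the dephasing maps are composed explicitly as superoperators; the parameter values $p_x(\epsilon)=0.9$, $p_x(t_1)=0.8$ and $p_z=0.7$ are arbitrary choices satisfying the requirements of the counterexample, and a small but finite initial time stands in for the infinitesimal $\epsilon$.
\begin{verbatim}
import numpy as np

# Numerical check of the dephasing counterexample (illustrative parameters).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def dephasing(p, sigma):
    # superoperator of rho -> p rho + (1-p) sigma rho sigma (column stacking)
    return p * np.eye(4) + (1 - p) * np.kron(sigma.T, sigma)

p_eps, p_t1, p_z = 0.9, 0.8, 0.7           # assumed: p_x(eps) > p_x(t1), all < 1
L_eps = dephasing(p_eps, sx)                # Lambda_eps
L_t1  = dephasing(p_t1, sx)                 # Lambda_{t_1} = Lambda_x
V21   = dephasing(p_z, sz)                  # V_{t_2,t_1} = Lambda_z
V32   = np.linalg.inv(dephasing(p_t1, sx))  # V_{t_3,t_2} = Lambda_x^{-1}

L_t3  = V32 @ V21 @ L_t1                    # Lambda_{t_3}
V3eps = L_t3 @ np.linalg.inv(L_eps)         # V_{t_3, eps}

rho0 = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0|
out_full = (L_t3 @ rho0.reshape(-1, order='F')).reshape(2, 2, order='F')
out_int  = (V3eps @ rho0.reshape(-1, order='F')).reshape(2, 2, order='F')

print(np.allclose(L_t3, dephasing(p_z, sz)))   # True: Lambda_{t_3} = Lambda_z
print(np.linalg.eigvalsh(out_full).round(6))   # valid state: |0><0| is recovered
print(np.linalg.eigvalsh(out_int).round(6))    # one negative eigenvalue:
                                               # V_{t_3,eps} is not positive
\end{verbatim}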
Notably, the initial Markovian pre-processing (useless noise) of a NNM evolution may leave untouched some types of information contained in specific initializations, for which the information can instead be completely restored.
Now, we describe a NNM evolution ${\mathbf{\Lambda}}$ that completely recovers the information to distinguish two states.
Consider the elements of the previous example, where now $\Lambda_{t_1}=\Lambda_{z}$, $V_{t_2,t_1}=\Lambda_{x}$ and $V_{t_3,t_2}=\Lambda_x^{-1}$. The initial states $\ketbra{0}{0}$ and $\ketbra{1}{1}$ get closer during the time interval $[t_1,t_2]$ and later they recover their (maximal) initial distance during the time interval $[t_2,t_3]$.
Nonetheless, this evolution is NNM because the initial noise $\Lambda_{t}$ for $t\in[0,t_1]$ is useless ($T^{\Lambda}=t_1$).
Indeed, although the initial Markovian pre-processing $\Lambda_{{T^\Lambda}}=\Lambda_{t_1}=\Lambda_z$ reduces the distance of several pairs of states, e.g. $\ketbra{+}{+}$ and $\ketbra{-}{-}$, it leaves $\ketbra{0}{0}$ and $\ketbra{1}{1}$ untouched.
Hence, given a NNM evolution ${\mathbf{\Lambda}}$, the corresponding Markovian pre-processing may not affect the information content of some initializations.
Nonetheless, the corresponding PNM core typically provides larger backflows for generic initializations. For instance, the PNM core $\overline{\mathbf{\Lambda}}$ of the previous example satisfies Proposition \ref{propC}: it corresponds to the identity map: $\overline{\Lambda}_{t}=I_S$ at $t=t_3-t_1$.
\subsection{Non-divisible evolutions}\label{nondivisible}
We briefly approach the case of non-divisible evolutions, namely those for which the intermediate map $V_{t,s}$ cannot be defined for all $0< s< t$. A great part of the results presented in this work is connected with ${T^\Lambda}$, which in turn is strictly connected with the properties of $V_{t,s}$. Hence, studying the PNM core of a non-divisible NNM evolution may seem problematic.
We start by recalling that we are considering continuous evolutions (see Section \ref{evolutions}). Moreover, continuous non-divisible evolutions must have an initial time interval $[0,T^{NB})$ on which the inverse $\Lambda_t^{-1}$ exists \cite{continuity}. Hence, we can consider intermediate maps of the form $V_{t,s}=\Lambda_t \circ \Lambda_s^{-1}$ for all $s< T^{NB}$. Notice that this is possible for all final times $t$. Moreover, we recall that invertibility implies divisibility, but the converse is not true. Therefore, any non-divisible evolution is characterized by a time $T^{ND}$, in general larger than $T^{NB}$, such that $V_{t,s}$ can be defined for all $s < T^{ND}$. As a result, even for non-invertible evolutions there is always a finite time interval inside which we can look for ${T^\Lambda}$. We can replace
Eq. (\ref{TLambdanuovo}) with
\begin{equation}\label{Tnondivisible}
{T^\Lambda}\!=\! \max \left\{\, T<T^{ND} \left|
\begin{array}{cccc}
\mbox{(A)} & V_{t,s}& \mbox{CPTP for all} & s\leq t \leq T\\
\mbox{(B)} & V_{t,T}& \mbox{CPTP for all} & T\leq t \\
\mbox{(C)} & \Lambda_T& \mbox{not unitary} & { T>0}
\end{array}
\right. \right\} ,
\end{equation}
where we set $T^{ND}=\infty$ for divisible evolutions.
We study an example where we obtain the PNM core $\overline{\mathbf{\Lambda}}$ of a non-invertible NNM depolarizing evolution ${\mathbf{\Lambda}}$ in Appendix~\ref{noninvdepo}. In this example we show that, although technically ${T^\Lambda}$ should be defined by Eq.~(\ref{Tnondivisible}), in practice its difference from Eq.~(\ref{TLambdanuovo}) does not play a fundamental role in many cases. In other words, the additional condition $T<T^{ND}$ is not stringent for many relevant cases.
\section{Distinguishability backflows}\label{tracedistance}
In this section we study the relation between NNM and PNM evolutions under the point of view of information backflows, as measured by the distinguishability between pairs of evolving states.
Hence, we analyse the potential of non-Markovian evolutions to make two states more distinguishable in a time interval when the corresponding intermediate map is not CPTP.
Consider the scenario where we are given one state chosen randomly between $\rho_1$ and $\rho_2$, and, through measurements, we have to guess which state we received. The maximum probability to correctly distinguish the two states is called the \textit{guessing probability} and it corresponds to
$
P_g(\rho_1,\rho_2)=(2+||\rho_1-\rho_2||_1)/4
$,
where $||\,\cdot\,||_1$ is the trace norm. The maximum value 1 is obtained for perfectly distinguishable, i.e. orthogonal, states. Instead, the minimum value $1/2$ is obtained if and only if $\rho_1$ and $\rho_2$ are identical. For the sake of simplicity, in the following we study $||\rho_1-\rho_2||_1$, which we call the \textit{distinguishability} of $\rho_1$ and $\rho_2$.
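The following short Python snippet evaluates the distinguishability and the guessing probability for two arbitrary qubit states; it is only an illustration of the formulas above.
\begin{verbatim}
import numpy as np

def trace_norm(A):
    # valid for Hermitian matrices, which is the case for rho1 - rho2
    return np.sum(np.abs(np.linalg.eigvalsh(A)))

rho1 = np.array([[1, 0], [0, 0]], dtype=complex)          # |0><0|
rho2 = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|

D  = trace_norm(rho1 - rho2)    # distinguishability ||rho1 - rho2||_1 = sqrt(2)
Pg = (2 + D) / 4                # guessing probability, ~0.854 here
print(D, Pg)
\end{verbatim}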
Consider a composite system $SA$ with Hilbert space $\mathcal{H}_S\otimes\mathcal{H}_A$, where $S$ is evolved by $\mathbf{\Lambda}$ and $A$ is an ancillary system. Hence, a generic initialization $\rho_{SA}(0)\in S (\mathcal{H}_S\otimes\mathcal{H}_A)$ evolves as $\rho_{SA}(t)=\Lambda_t\otimes I_A(\rho_{SA}(0))$. Take two states $\rho_{SA,1}(t)$ and $\rho_{SA,2}(t)$ evolving under the same evolution.
Any increase of $||\rho_{SA,1}(t) - \rho_{SA,2}(t)||_1$ represents a recovery of the missing information needed to distinguish the two states and is a signature of non-Markovianity \cite{BLP,bogna}. Indeed, this quantity is contractive under quantum channels: CPTP intermediate maps $V_{t,s}$ imply a distinguishability degradation $||\rho_{SA,1}(s) - \rho_{SA,2}(s)||_1 \geq ||\rho_{SA,1}(t) - \rho_{SA,2}(t)||_1$, while a \textit{distinguishability backflow} $||\rho_{SA,1}(s) - \rho_{SA,2}(s)||_1< ||\rho_{SA,1}(t) - \rho_{SA,2}(t)||_1$ implies that $V_{t,s}$ is not CPTP. For this reason, Markovian evolutions are characterized by monotonically decreasing distinguishabilities, while non-Markovian evolutions can provide distinguishability backflows.
We recall that for all bijective (or almost-always bijective) evolutions there exists a constructive method to find an initial pair $\rho_{SA,1}(0)$, $\rho_{SA,2}(0)$ that provides a distinguishability backflow in $[s,t]$ if and only if the corresponding intermediate map is not CPTP \cite{bogna}.
We proceed by studying in which cases and to what extent NNM evolutions damp distinguishability backflows if compared with their corresponding PNM cores. We saw that the initial noise $\Lambda_{{T^\Lambda}}$ of NNM evolutions is useless for non-Markovian phenomena. Now, we quantify how much $\Lambda_{{T^\Lambda}}$ suppresses backflows for each specific initialization.
\begin{proposition}\label{propbackflows}
Consider a NNM evolution $\mathbf{\Lambda}$ providing a distinguishability backflow in $[s,t]$ by evolving $\rho_{SA,1}(0)$ and $\rho_{SA,2}(0)$, where $V_{t,s}$ is not CPTP. If $\rho_{SA,1}(T^\Lambda)$ and $\rho_{SA,2}(T^\Lambda)$ are not orthogonal, the corresponding PNM core $\overline{\mathbf{\Lambda}}$ provides a larger backflow in $[s-T^\Lambda, t-T^\Lambda]$ by evolving a pair of orthogonal states, where $V_{t,s}=\overline V_{t-T^\Lambda,s-T^\Lambda}$. The proportionality factor between the backflows is $2/||\rho_{SA,1}(T^\Lambda)-\rho_{SA,2}(T^\Lambda)||_1>1$.
\end{proposition}
\begin{proof} We define $\Delta(T^\Lambda)=\rho_{SA,1}(T^\Lambda)- \rho_{SA,2}(T^\Lambda)$, which is hermitian and traceless at all times. This operator corresponds to a segment in the state space with direction and length (as measured by $||\cdot ||_1$) defined by $\rho_{SA,1}(T^\Lambda)$ and $ \rho_{SA,2}(T^\Lambda)$.
Since $\rho_{SA,1}({T^\Lambda})$ and $ \rho_{SA,2}({T^\Lambda})$ are not orthogonal, $||\Delta({T^\Lambda})||_1<2$. We define $\overline{\Delta}(0)=2\Delta({T^\Lambda})/||\Delta({T^\Lambda})||_1$, namely a segment with the same direction of $\Delta({T^\Lambda})$ but with $||\overline{\Delta}(0)||_1=2$. Since $\overline{\Delta}(0)$ is hermitian and traceless, it can be diagonalized with a unitary $ U$, namely $\overline{\Delta}(0)= U \overline{\Delta}_D(0) U^\dagger$, where $\overline{\Delta}_D(0)=\mbox{diag}(\delta_1,\delta_2,\cdots,\delta_d)$, $\delta_i \in [-1,1]$ and $d=d_Sd_A$ is the dimension of $\mathcal{H}_S\otimes \mathcal{H}_A$. Moreover, we have that $||\overline{\Delta}(0)||_1=||\overline{\Delta}_D(0)||_1=\sum_i |\delta_i|=2$ and $\trr{\overline{\Delta}(0)}=\trr{\overline{\Delta}_D(0)}= \sum_i \delta_i=0$. Therefore, we can write $\overline{\Delta}_D(0)=\sigma^+-\sigma^-$, where $\sigma^+$ ($\sigma^-$) is a diagonal density matrix obtained by replacing the negative diagonal elements of $\overline{\Delta}_D(0)$ ($-\overline{\Delta}_D(0)$) with a zero. We define $\overline\rho_{SA,1}(0)=U\sigma^+U^\dagger$ and $\overline\rho_{SA,2}(0)=U\sigma^-U^\dagger$. Notice that $\overline\rho_{SA,1}(0)-\overline\rho_{SA,2}(0)=\overline\Delta(0)$: the two states $\overline\rho_{SA,1}(0)$ and $\overline\rho_{SA,2}(0)$ are orthogonal and their difference $\overline\Delta(0)$ is proportional to $\Delta({T^\Lambda})=\rho_{SA,1}({T^\Lambda})- \rho_{SA,2}({T^\Lambda})$. Hence, if $\mathbf{\Lambda}$ provides a distinguishability backflow in $[s,t]$ by evolving $\rho_{SA,1}(0)$ and $ \rho_{SA,2}(0) $, $\overline{\mathbf{\Lambda}}$ provides a (larger) backflow in $[s-T^\Lambda,t -T^\Lambda]$ by evolving $\overline\rho_{SA,1}(0)$ and $ \overline\rho_{SA,2}(0) $, where the latter backflow is larger than the former by the factor $2/||\Delta({T^\Lambda})||_1>1$.
\end{proof}
Notice that in this proof we provide a constructive method to build the pairs of states $\overline{\rho}_{SA,1}(0)$, $\overline \rho_{SA,2}(0)$ which, if evolved with the corresponding PNM core, provide larger backflows.
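A minimal Python sketch of this construction is the following; the two input states are arbitrary placeholders standing for $\rho_{SA,1}(T^\Lambda)$ and $\rho_{SA,2}(T^\Lambda)$.
\begin{verbatim}
import numpy as np

# Constructive step of Proposition (propbackflows): from a non-orthogonal
# pair at time T^Lambda build the orthogonal pair to feed to the PNM core.
rho1_T = np.array([[0.8, 0.1], [0.1, 0.2]], dtype=complex)   # placeholder
rho2_T = np.array([[0.4, 0.0], [0.0, 0.6]], dtype=complex)   # placeholder

Delta = rho1_T - rho2_T                            # Hermitian, traceless
tnorm = np.sum(np.abs(np.linalg.eigvalsh(Delta)))
Delta_bar = 2 * Delta / tnorm                      # same direction, norm 2

evals, U = np.linalg.eigh(Delta_bar)               # Delta_bar = U D U^dagger
sigma_plus  = U @ np.diag(np.clip(evals, 0, None))  @ U.conj().T
sigma_minus = U @ np.diag(np.clip(-evals, 0, None)) @ U.conj().T

# sigma_plus and sigma_minus are orthogonal density matrices whose
# difference is Delta_bar, i.e. proportional to Delta.
print(np.trace(sigma_plus).real, np.trace(sigma_minus).real)   # both 1
print(np.allclose(sigma_plus - sigma_minus, Delta_bar))        # True
print(np.trace(sigma_plus @ sigma_minus).real)                 # 0 (orthogonal)
\end{verbatim}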
The meaning of this proposition matches the information-theoretic interpretation of PNM evolutions given above. Indeed, consider a NNM evolution ${\mathbf{\Lambda}}$ and those pairs of states that provide the largest distinguishability backflows in a time interval $[s,t]$. It can be proven that the corresponding initial states $\rho_{SA,1}(0)$, $\rho_{SA,2}(0)$ are orthogonal. If at time ${T^\Lambda}$ the two states $\rho_{SA,1}({T^\Lambda})$ and $\rho_{SA,2}({T^\Lambda})$ are still orthogonal when evolved by ${\mathbf{\Lambda}}$, it means that the noise $\Lambda_{T^\Lambda}$ did not dissipate the information needed to distinguish this particular pair of states, as discussed in Section \ref{secfeatures}. On the contrary, if $\rho_{SA,1}({T^\Lambda})$ and $\rho_{SA,2}({T^\Lambda})$ are no longer orthogonal, $\Lambda_{T^\Lambda}$ unnecessarily dissipated part of the information useful to distinguish the two states: this information does not take part in later backflows. Hence, in these cases the corresponding PNM cores provide larger backflows.
The results of Proposition \ref{propbackflows} are independent of $s$, $t$ and the magnitude of the corresponding distinguishability backflow. Indeed, they solely depend on the information lost after the Markovian pre-processing, namely the distinguishability at time ${T^\Lambda}$. Hence, the results of Proposition \ref{propbackflows} can be directly extended to all the backflows that the same pair shows, even without knowing their magnitude and when they take place.
\begin{corollary}\label{corpropbackflows}
Consider a NNM evolution $\mathbf{\Lambda}$ and its corresponding PNM core $\overline{\mathbf{\Lambda}}$. For any pair of states $\rho_{SA,1}(0)$, $\rho_{SA,2}(0)$ that are not orthogonal at time ${T^\Lambda}$ and provide one or more distinguishability backflows when evolved by $\mathbf{\Lambda}$, there exists a corresponding pair of orthogonal states such that, if evolved by $\overline{\mathbf{\Lambda}}$, each backflow is larger by a factor $2/||\rho_{SA,1}(T^\Lambda)-\rho_{SA,2}(T^\Lambda)||_1>1$. The intermediate maps generating the backflows of the two evolutions are the same and the corresponding time intervals differ by a shift of ${T^\Lambda}$.
\end{corollary}
\section{Non-Markovianity measures}\label{measures}
A non-Markovianity measure $M(\mathbf{\Lambda})$ quantifies the non-Markovian content of evolutions, where Markovianity implies $M({\mathbf{\Lambda}})=0$, while $M({\mathbf{\Lambda}})>0$ implies that ${\mathbf{\Lambda}}$ is non-Markovian. As we see below, those measures that are connected with the actual time evolution of one or more states are influenced by the initial noisy action that precedes non-Markovian phenomena. Hence, in these cases PNM cores provide higher non-Markovianity measures, $M(\mathbf{\Lambda})\leq M(\overline{\mathbf{\Lambda}})$: the largest values that any non-Markovianity measure of this type can assume are obtained with PNM evolutions, and any value assumed by a NNM evolution can be matched or outperformed by a PNM evolution.
The main representatives of this class of measures are defined through the collection of the information backflows obtainable with ${\mathbf{\Lambda}}$, where this quantity is maximized with respect to all the possible system initializations~\cite{BLP,Luo,DDSMJ,WWTAWWTANM}. In the following, we refer to these cases as flux measures. Note that there are non-Markovianity measures satisfying $M(\mathbf{\Lambda})\leq M(\overline{\mathbf{\Lambda}})$ while not being flux measures, e.g. \cite{DDSV}.
On the contrary, those measures $M$ that solely depend on the features of intermediate maps, without considering the action of the preceding dynamics, imply $M(\mathbf{\Lambda})= M(\overline{\mathbf{\Lambda}})$. Indeed, a NNM evolution $\mathbf{\Lambda}$ and its corresponding PNM evolution $\overline{\mathbf{\Lambda}}$ have the same non-CPTP intermediate maps. The main representatives of this second class are the Rivas-Huelga-Plenio \cite{RHP} measure and the $k$-divisibility hierarchy \cite{SAB}. We underline that, while flux measures represent the amplitude of phenomena that can be observed, this is not true for this class.
\subsection{Flux measures}
Flux measures quantify the non-Markovian content of evolutions as follows. Pick an information quantifier and maximize the sum of all the corresponding backflows that ${\mathbf{\Lambda}}$ shows with respect to all the possible initializations. More precisely, consider a functional $W(\rho_{SA}(t))=W(\Lambda_t\otimes I_A(\rho_{SA}))\geq 0$ which represents the amount of information, as measured by $W$, contained in the evolving state. We can also consider quantifiers taking multiple states as input, e.g. state distinguishability. In order to consider $W$ an information quantifier for $S$, we require it to be contractive under quantum channels on $S$, namely $W(\rho_{SA})\geq W(\Lambda\otimes I_A (\rho_{SA}))$ for all $\rho_{SA}$ and CPTP $\Lambda$.
We define the \textit{information flux} as
\begin{equation}\label{flux}
\sigma(\Lambda_t\otimes I_A(\rho_{SA}))=\frac{d}{dt} W (\Lambda_t\otimes I_A(\rho_{SA})) \, .
\end{equation}
Since Markovianity corresponds to CP-divisibility, Markovian evolutions imply non-positive fluxes. Instead, if $\sigma(\Lambda_t\otimes I_A(\rho_{SA}))>0$, we say that the evolution of $\rho_{SA}$ \textit{witnesses} the non-Markovian nature of $\mathbf{\Lambda}$ through a backflow of $W$.
Flux measures consist of the greatest amount of $W$ that an evolution can retrieve during the evolution with respect to any initialization, namely:
\begin{equation}\label{NW}
M^W(\mathbf{\Lambda})=\max_{\rho_{SA}} \int_{t\geq 0,\, \sigma>0} \sigma(\Lambda_t\otimes I_A(\rho_{SA})) \, dt \, ,
\end{equation}
where the maximization is performed over the whole system-ancilla state space \footnote{The results that we present are not influenced by a possible maximization over the ancillas $A$. In this case the maximum is replaced by a supremum.}.
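Numerically, once the quantifier $W$ has been sampled along a discretized evolution for a set of candidate initializations, Eq. (\ref{NW}) reduces to summing the positive increments of $W$ and maximizing over the sampled initializations. The following Python sketch illustrates this with toy data replacing actual evolved states; the maximization over the full state space is of course only sampled here.
\begin{verbatim}
import numpy as np

def flux_measure(W_samples):
    # W_samples: array of shape (n_states, n_times) with the quantifier W
    # sampled along a discretized evolution, one row per initialization.
    # The time integral of the positive flux equals the sum of the
    # positive increments of W, so Eq. (NW) is estimated as:
    increments = np.diff(W_samples, axis=1)
    return np.max(np.sum(np.clip(increments, 0, None), axis=1))

# toy data standing in for evolved states: W decays, then partially revives
t = np.linspace(0, 5, 501)
W = np.vstack([np.exp(-t) + 0.3 * np.exp(-(t - 3) ** 2),   # revival near t=3
               np.exp(-2 * t)])                            # monotone decay
print(flux_measure(W))   # only the first trajectory contributes
\end{verbatim}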
With $\overline{\mathbf{\Lambda}}$ the PNM core of $\mathbf{\Lambda}$ and $\mbox{Im} (\Lambda_t\otimes I_A)$ the image of the evolution at time $t$, namely the set of system-ancilla states that can be obtained as outputs of the map $\Lambda_t\otimes I_A$, we obtain:
\begin{eqnarray}\nonumber
M^W(\mathbf{\Lambda})&=&\max_{\rho_{SA}} \int_{t\geq T^\Lambda,\, \sigma> 0} \sigma (\Lambda_t\otimes I_A (\rho_{SA})) \, dt \\ \nonumber
&=& \max_{\rho_{SA}\in \mbox{Im}(\Lambda_{T^\Lambda} \otimes I_A)} \int_{t\geq T^{\Lambda},\, \sigma>0} \sigma (V_{t,T^{\Lambda}}\otimes I_A (\rho_{SA})) \, dt
\\ \nonumber
&=& \max_{ \rho_{SA}\in \mbox{Im}(\Lambda_{T^\Lambda} \otimes I_A)} \int_{t\geq 0, \, \sigma>0} \sigma(\overline{\Lambda}_t \otimes I_A (\rho_{SA})) \, dt \\ \label{diffN}
&\leq& \max_{ \rho_{SA}} \int_{t\geq 0, \, \sigma>0} \sigma(\overline{\Lambda}_t \otimes I_A (\rho_{SA})) \, dt = M^W(\overline{\mathbf{\Lambda}}) \, ,
\end{eqnarray}
where the first equality is justified by the fact that backflows can only happen for $t\geq T^\Lambda$ (any NNM evolution is CP-divisible in $[0,T^\Lambda]$), the second equality is a simple consequence of $\Lambda_t=V_{t,T^\Lambda}\circ \Lambda_{T^\Lambda}$, the third equality follows from Eq. (\ref{PNM}) and the inequality follows from the enlargement of the maximization space.
It is interesting to understand when $M^W(\mathbf{\Lambda})< M^W(\overline{\mathbf{\Lambda}})$.
Consider the information quantifier $D(\rho_{SA,1}(t),\rho_{SA,2}(t))=||\rho_{SA,1}(t)-\rho_{SA,2}(t)||_1$ for a fixed ancilla $A$, where in this case we consider the evolution ${\mathbf{\Lambda}}$. We call $\{\rho^{i}_{SA,1},\rho^{i}_{SA,2}\}$ the pairs that achieve the maximum of Eq. (\ref{NW}). Notice that these pairs of states are always initially orthogonal: $D(\rho^{i}_{SA,1}(0),\rho^{i}_{SA,2}(0))=2$ for all $i$. Hence, if $\sigma^D(\Lambda_t\otimes I_A (\rho_{SA,1}),\Lambda_t\otimes I_A (\rho_{SA,2}))$ is the flux associated to $D(\rho_{SA,1}(t),\rho_{SA,2}(t))$ as in Eq. (\ref{flux}), we have
\begin{eqnarray}\nonumber
&&M^D({\mathbf{\Lambda}})=\!\!\max_{\rho_{SA,1},\,\rho_{SA,2}} \!\int_{t\geq 0,\, \sigma^D>0}\!\!\!\! \sigma^D(\Lambda_t\otimes I_A (\rho_{SA,1}),\Lambda_t\otimes I_A (\rho_{SA,2})) \,dt
\\ \label{measureNMdist}
&&= \int_{t\geq 0,\, \sigma^D>0} \!\!\!\!\!\!\!\!\!\sigma^D(\Lambda_t\otimes I_A (\rho^{i}_{SA,1}),\Lambda_t\otimes I_A (\rho^{i}_{SA,2})) \,dt \,\,\, \,\,\, \mbox{for all } i.
\end{eqnarray}
Thanks to Corollary \ref{corpropbackflows}, we can prove that:
\begin{equation}
M^D({\mathbf{\Lambda}})\leq M^D(\overline{\mathbf{\Lambda}})=\max_i \frac{2 M^D({\mathbf{\Lambda}})}{||\rho^{i}_{SA,1}(T^\Lambda)-\rho^{i}_{SA,2}(T^\Lambda)||_1} \, ,
\end{equation}
where $\rho^{i}_{SA,1}({T^\Lambda})=\Lambda_{T^\Lambda}\otimes I_A (\rho^{i}_{SA,1}(0))$ and $\rho^{i}_{SA,2}({T^\Lambda})=\Lambda_{T^\Lambda}\otimes I_A( \rho^{i}_{SA,2}(0))$.
Hence, if the information content of the pairs $\{\rho^{i}_{SA,1},\rho^{i}_{SA,2}\}$ at time ${T^\Lambda}$ is lower than at the initial time when evolved by ${\mathbf{\Lambda}}$, then $M^D({\mathbf{\Lambda}})<M^D(\overline{\mathbf{\Lambda}})$. Moreover, the proportionality factor between the two measures is given by the states $\{\rho^{j}_{SA,1},\rho^{j}_{SA,2}\}$ that get the closest at time ${T^\Lambda}$. In Section~\ref{secdepo} we explicitly evaluate $M^D({\mathbf{\Lambda}})$ and $M^D(\overline{\mathbf{\Lambda}})$ in case of depolarizing evolutions and we show that, even without ancillary systems, $M^D({\mathbf{\Lambda}})<M^D(\overline{\mathbf{\Lambda}})$ is always verified.
A second measure of non-Markovianity similar to $M^W$ is given by \cite{vault}
\begin{equation}\label{Nmax}
M^{W,max}(\mathbf{\Lambda})=\max_{s< t,\, \rho_{SA}} \{0,W(\rho_{SA}(t))-W(\rho_{SA}(s))\} ,
\end{equation}
which corresponds to the largest backflow of $W$ that the dynamics is able to show in a single time interval. Finally, a third measure is \cite{vault}
\begin{equation}\label{Nav}
M^{W,av}(\mathbf{\Lambda})=\max_{t>0, \rho_{SA}} \{0, W(\rho_{SA}(t))-\langle W(\rho_{SA}(t))\rangle \} ,
\end{equation}
where $\langle W(\rho_{SA}(t)) \rangle = t^{-1} \int_0^t W(\rho_{SA}(s)) ds $. This measure
corresponds to the largest difference, with respect to $t$, between the information $W$ at time $t$ and its average in the time interval $[0,t]$. Moreover, $M^{W,av}$ has a precise operational meaning connected with the probability to store and faithfully retrieve information, as measured by $W$, by state preparation and measurement, where an attack performed by an eavesdropper may occur. It can be proven \cite{vault} that, for any evolution and information quantifier, we have $M^{W,av} (\mathbf{\Lambda}) \leq M^{W,max} (\mathbf{\Lambda}) \leq M^{W} (\mathbf{\Lambda})$.
Similarly to Eq. (\ref{diffN}), it is possible to demonstrate that
\begin{eqnarray}\label{diffN2}
\!\!\!\!\!\!\!\!\! M^{W,max}(\mathbf{\Lambda}) \leq M^{W,max}(\overline{\mathbf{\Lambda}}) \,\, \, \mbox{ and } \,\, \,
M^{W,av}(\mathbf{\Lambda}) \leq M^{W,av }(\overline{\mathbf{\Lambda}}) \, .
\end{eqnarray}
\subsection{Incoherent mixing measure}
A second type of non-Markovianity measure corresponds to the minimal incoherent Markovian noise needed to make a non-Markovian evolution ${\mathbf{\Lambda}}$ Markovian \cite{DDSV}. In order to describe this measure, we first consider an evolution obtained as convex combination of ${\mathbf{\Lambda}}$ and a generic Markovian evolution ${\mathbf{\Lambda}}^M$.
We consider the mixed evolution
$
\mathbf{\Lambda}^{mix}_p=(1-p)\mathbf{\Lambda}+p\mathbf{\Lambda}^M \,
$
and define a non-Markovianity measure by looking for the minimal value of $p$, hence the minimal amount of Markovian noise, such that ${\mathbf{\Lambda}}^{mix}_p$ is Markovian, namely:
\begin{equation}\label{Nmix}
M^{mix}(\mathbf{\Lambda})=\min\{\,p\,|\, \exists \mathbf{\Lambda}^M \mbox{ s.t. } \mathbf{\Lambda}^{mix}_p \mbox{ is Markovian} \} \, .
\end{equation}
In Appendix \ref{appNmix} we prove that:
\begin{equation}
M^{mix}(\mathbf{\Lambda})\leq M^{mix}(\overline{\mathbf{\Lambda}}) \, .
\end{equation}
In Section \ref{secdepo} we show that $M^{mix}(\mathbf{\Lambda})< M^{mix}(\overline{\mathbf{\Lambda}})$ for all NNM depolarizing evolutions ${\mathbf{\Lambda}}$.
\subsection{RHP measure and $k$-divisibility}
Consider a generic PNM evolution $\overline{\mathbf{\Lambda}}$ and all the corresponding NNM evolutions $\mathbf{\Lambda}$ that can be obtained from $\overline{\mathbf{\Lambda}}$ with a Markovian pre-processing. As we saw, $\overline{\mathbf{\Lambda}}$ and all its corresponding ${\mathbf{\Lambda}}$ have the same non-CPTP intermediate maps.
Therefore, the non-Markovianity measures that solely depend on the properties of non-CPTP intermediate maps, since they are not influenced by the particular (useless) noise that precedes their action, assume the same value for $\overline{\mathbf{\Lambda}}$ and all its corresponding ${\mathbf{\Lambda}}$. This is the case of the RHP measure $\mathcal{I}({\mathbf{\Lambda}})$ (see Eq.~(4) from Ref.~\cite{RHP}), and the $k$-divisibility non-Markovian degree NMD$[{\mathbf{\Lambda}}]$ (see Ref.~\cite{SAB}):
\begin{equation}
\mathcal{I}(\overline{\mathbf{\Lambda}}) = \mathcal{I}({\mathbf{\Lambda}}) \,\, \, \mbox{ and } \,\,\, \mbox{NMD}(\overline{\mathbf{\Lambda}})= \mbox{NMD}({\mathbf{\Lambda}}) \, .
\end{equation}
\section{Entanglement breaking property}\label{EB}
We call $C(\rho_{AB})$ a correlation measure for the bipartite system $AB$ if: (i) $C(\rho_{AB})\geq 0$ for all $\rho_{AB}$, (ii) $C(\rho_{AB})= 0$ for all product states $\rho_{A}\otimes \rho_B$, (iii) $C(\Lambda_A\otimes I_B(\rho_{AB}))\leq C(\rho_{AB})$ and $C(I_A\otimes \Lambda_B(\rho_{AB}))\leq C(\rho_{AB})$ for all $\rho_{AB}$ and CPTP maps $\Lambda_A$ and $\Lambda_B$.
Entanglement measures, denoted here by $E$, capture only non-classical correlations. Indeed, they satisfy the stronger requirement of being non-increasing under local operations assisted by classical communication (LOCC), which implies (iii). As a consequence, $E(\rho_{AB})=0$ for all separable states, namely those that can be written as statistical mixtures of product states: $\rho_{AB}=\sum_i p_i \rho_A^i\otimes \rho_B^i$, where $\{p_i\}_i$ is a probability distribution.
We discuss how the link between NNM and PNM evolutions behaves with respect to the entanglement breaking (EB) property. A quantum channel $\Lambda_S$ is EB if it destroys the entanglement of any input state, namely if $\Lambda_S {\,\otimes\,}imes I_A (\rho_{SA})$ is separable for all $\rho_{SA}$. Consider a generic ${\mathbf{\Lambda}}$. We say that it is EB if there exists a time $t^{EB,\Lambda}>0$ such that $\Lambda_t$ is EB for all $t\geq t^{EB,\Lambda}$.
Take a PNM evolution $\overline{\mathbf{\Lambda}}$ and one of the possible NNM evolutions ${\mathbf{\Lambda}}$ that can be obtained from it with a Markovian pre-processing. This Markovian pre-processing cannot increase the amount of entanglement of any state. Hence, if $\overline{\mathbf{\Lambda}}$ is EB, then ${\mathbf{\Lambda}}$ is EB. Nonetheless, when both $\overline{\mathbf{\Lambda}}$ and ${\mathbf{\Lambda}}$ are EB, there is no general order for the corresponding EB times: $t^{EB,\Lambda}>t^{EB,\overline\Lambda}$ and $t^{EB,\Lambda}<t^{EB,\overline\Lambda}$ are both possible.
We must keep in mind that, if a generic NNM evolution ${\mathbf{\Lambda}}$ is EB, we cannot immediately say anything about the EB nature of $\overline{\mathbf{\Lambda}}$, and we must study the particular dynamics in more detail.
Indeed, it is easy to find NNM evolutions ${\mathbf{\Lambda}}$ with EB useless noises $\Lambda_{{T^\Lambda}}$, where the corresponding PNM core $\overline{\mathbf{\Lambda}}$ is not EB.
Also, there exist cases where the Markovian pre-processing $\Lambda_{{T^\Lambda}}$ is not EB, the PNM core $\overline{\mathbf{\Lambda}}$ is not EB, but the corresponding NNM evolution ${\mathbf{\Lambda}}$ is EB.
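For qubit channels these statements can be tested numerically: a channel is EB if and only if its Choi state is separable, and for two qubits separability is equivalent to the positivity of the partial transpose. The following Python sketch implements this test, using the qubit depolarizing channel analysed in Section \ref{secdepo} as an example.
\begin{verbatim}
import numpy as np

d = 2

def choi_state(S):
    # Choi state of a qubit channel given as a column-stacking superoperator
    C = np.zeros((d * d, d * d), dtype=complex)
    for i in range(d):
        for j in range(d):
            Eij = np.zeros((d, d), dtype=complex)
            Eij[i, j] = 1.0
            out = (S @ Eij.reshape(-1, order='F')).reshape(d, d, order='F')
            C += np.kron(Eij, out)
    return C / d

def partial_transpose(C):
    return C.reshape(d, d, d, d).transpose(2, 1, 0, 3).reshape(d * d, d * d)

def is_eb_qubit(S, tol=1e-10):
    # A channel is EB iff its Choi state is separable; for two qubits,
    # separable iff PPT (positive partial transpose).
    return np.min(np.linalg.eigvalsh(partial_transpose(choi_state(S)))) >= -tol

def depolarizing_superop(f):
    # rho -> f rho + (1 - f) Tr[rho] 1/d, as a superoperator
    vec_id = np.eye(d).reshape(-1, order='F').astype(complex)
    return f * np.eye(d * d) + (1 - f) / d * np.outer(vec_id, vec_id)

print(is_eb_qubit(depolarizing_superop(0.25)))  # True:  PPT Choi, hence EB
print(is_eb_qubit(depolarizing_superop(0.50)))  # False: NPT Choi, not EB
\end{verbatim}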
\subsection{Activation of correlation backflows}
We now discuss a technique focused on entanglement revivals which can be easily generalized to other correlation measures.
The isolation of the PNM core of a NNM evolution may lead to the \textit{activation} of entanglement backflows.
Take a bipartite system $SA$, where $S$ is evolved by ${\mathbf{\Lambda}}$ and $A$ is an ancilla.
Whenever we have a backflow of $E$, the same backflow can also be observed with $\overline{\mathbf{\Lambda}}$, namely the corresponding PNM evolution. Moreover, as we saw for the corresponding flux non-Markovianity measure, $M^{E}({\mathbf{\Lambda}})\leq M^{E}(\overline{\mathbf{\Lambda}})$. What is interesting is the possibility to {activate} backflows of entanglement through the isolation of the PNM core, namely when $M^{E}({\mathbf{\Lambda}})=0$ and $M^{E}(\overline{\mathbf{\Lambda}})>0$. This scenario is possible when ${\mathbf{\Lambda}}$ is EB, the corresponding non-CPTP intermediate maps $V_{t,s}$ take place only for $t^{EB,\Lambda}\leq s<t$ and the corresponding PNM core $\overline{\mathbf{\Lambda}}$ is not EB. In this case, when an entangled state is evolved by ${\mathbf{\Lambda}}$ and a non-CPTP intermediate map takes place, all the entanglement has already been destroyed and no backflows are possible. Instead, for a system evolving under $\overline{\mathbf{\Lambda}}$, when the (same) non-CPTP intermediate map takes place the entanglement can be non-zero and backflows are allowed.
Whenever a non-Markovian evolution does not provide correlation backflows, additional ancillary degrees of freedom can activate the possibility to observe backflows. This phenomenon has already been studied for entanglement~\cite{Janek,DDSDONATO} and Gaussian steering~\cite{DDSDONATO}. For instance, instead of evaluating entanglement between $S$ and $A$, we would need to evaluate it between $SA'$ and $A$, where $A'$ is an additional ancilla. Hence, our construction allows a different strategy to obtain correlation backflows in those situations where an $SA$ setup does not show any: instead of adding an additional ancillary system, which in some cases could be experimentally more demanding to handle, we could simply consider the PNM core of the studied evolution.
\section{Depolarizing model}\label{secdepo}
We apply our results to the simple depolarizing model. Starting from a generic NNM depolarizing evolution ${\mathbf{\Lambda}}$, we show how to find ${T^\Lambda}$, $\tau^\Lambda$ and $t^\Lambda$ and the corresponding PNM evolution $\overline{\mathbf{\Lambda}}$, and we calculate the gains in terms of information backflows and non-Markovianity measures that $\overline{\mathbf{\Lambda}}$ provides with respect to ${\mathbf{\Lambda}}$.
We conclude by applying our technique to an explicit toy model. Moreover, we show how our approach can be directly applied to non-bijective depolarizing evolutions in Appendix \ref{noninvdepo}.
We define a generic depolarizing evolution ${\mathbf{\Lambda}}$ for a $d$-dimensional system $S$ through the corresponding dynamical map, namely
\begin{equation}\label{depo}
\Lambda_t (\,\cdot\,) = f(t) I_S ( \,\cdot\,) + (1-f(t)) \tr{ \,\cdot\, }\frac{\mathbbm{1}_S}{d} \, ,
\end{equation}
where $I_S$ is the identity map and ${\mathbbm{1}_S}/{d}$ is the maximally mixed state \cite{DDSV}. The behaviour of the evolution is determined by the \textit{characteristic function} $f(t)$. The dynamical maps $\Lambda_t$ are CPTP, continuous in time and such that $\Lambda_0=I_S$ if and only if $f(t)\in [-1/(d^2-1), 1]$ is a continuous function such that $f(0)=1$.
For the sake of simplicity, from now on we restrict our attention to depolarizing evolutions with $f(t)\in[0,1]$ for all $t\geq 0$. Those cases of $f(t)$ assuming negative values necessitate a simple generalization of the techniques used here. An in-depth analysis of depolarizing evolutions with $f(t)\in [-1/(d^2-1), 1]$ can be found in Ref. \cite{DDSV}.
The evolution ${\mathbf{\Lambda}}$ is invertible if and only if $f(t)>0$ at all times. Indeed, $f(t^{NB})=0$ implies that every initial state is mapped into the same (maximally mixed) state: $\Lambda_{t^{NB}}(\rho_S(0))= \mathbbm{1}_S/d$. In this case $\Lambda_{t^{NB}}$ is non-invertible and we cannot define the intermediate maps $V_{t,t^{NB}}$ with $t>t^{NB}$.
The interpretation of depolarizing evolutions is straightforward: at time $t$ each state is mixed with the maximally mixed state $\mathbbm{1}_S/d$ with a ratio given by $f(t)$. The larger $f(t)$ is, the closer $\rho_S(t)$ is to its initial state $\rho_S(0)$. Moreover, this contraction towards the maximally mixed state is symmetric in the state space. Indeed, for any two initial states $ \rho_{S,1}(0)$ and $\rho_{S,2}(0)$ evolving under ${\mathbf{\Lambda}}$ we have:
\begin{equation}\label{tracedepo}
{||\rho_{S,1}(t)-\rho_{S,2}(t)||_1}= f(t) {||\rho_{S,1}(0)-\rho_{S,2}(0)||_1} \, .
\end{equation}
The intermediate map corresponding to the depolarizing evolution during a generic time interval $[s,t]$ assumes the same form as a depolarizing dynamical map:
\begin{equation}\label{depoint}
V_{t,s} (\,\cdot\,) = \frac{f(t)}{f(s)} \, I_S (\,\cdot\,) + \left(1-\frac{f(t)}{f(s)} \right) \tr{ \,\cdot\, }\frac{\mathbbm{1}_S}{d} \, .
\end{equation}
Hence, the CPTP condition \footnote{In case of a generic characteristic function $f(t)\in[-1/(d^2-1),1]$, $V_{t,s}$ is CPTP if and only if $f(t)/f(s) \in [-1/(d^2-1),1]$. For the sake of simplicity, we do not treat this case.} for $V_{t,s}$ coincides with $f(t)/f(s) \in [0, 1]$, that is $f(s) \geq f(t)$ for $s\leq t$. Similarly, the intermediate map $V_{t+\epsilon,t}$ is CPTP for infinitesimal $\epsilon >0$ if and only if $f'(t)\leq 0$. Indeed, Markovian depolarizing evolutions are characterized by $f'(t)\leq 0$ at all times.
The Choi state of $V_{t,s}$ is $V_{t,s}\otimes I_S (\phi^+_{SA})= f(t)/f(s) \phi^+_{SA} + (1-f(t)/f(s)) \mathbbm{1}_{SA}/d^2$ and the corresponding smallest eigenvalue is $\lambda_{t,s}=(1-f(t)/f(s))/d^2$. Since $V_{t,s}$ is CPTP if and only if $\lambda_{t,s}\geq 0$, thanks to the evaluation of $\lambda_{t,s}$ we are able to obtain $\mathcal P^\Lambda$ and $\mathcal N^\Lambda$, the collection of time pairs $\{s,t\}$ such that $V_{t,s}$ is respectively CPTP and non-CPTP (see Eqs. (\ref{CPTPsub}) and (\ref{nonCPTPsub})).
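For instance, the CPTP condition for the depolarizing intermediate maps can be checked with a few lines of Python; the sketch below borrows, as a test function, the toy characteristic function analysed in Section \ref{newsecexample}.
\begin{verbatim}
import numpy as np

# CPTP check for depolarizing intermediate maps, Eq. (depoint): the smallest
# Choi eigenvalue is lambda_{t,s} = (1 - f(t)/f(s)) / d^2, so V_{t,s} is
# CPTP iff f(s) >= f(t) (for f(t) in [0,1]).
def f(t):
    return (1 - 3*t + 2*t**2 + 2*t**3) / (1 + t**2 + t**3 + 3*t**5)

def smallest_choi_eigenvalue(s, t, d=2):
    return (1.0 - f(t) / f(s)) / d ** 2

def is_cptp_intermediate(s, t, d=2, tol=1e-12):
    return smallest_choi_eigenvalue(s, t, d) >= -tol

print(is_cptp_intermediate(0.1, 0.3))  # True:  f still decreases in [0.1,0.3]
print(is_cptp_intermediate(0.6, 1.0))  # False: f increases in [0.6,1.0]
\end{verbatim}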
Non-Markovian depolarizing evolutions have non-monotonic characteristic functions.
An increase of $f(t)$ in a given time interval corresponds to a non-CPTP intermediate map. Moreover, in the same time interval the trace distance between any two states increases, namely we have a distinguishability backflow.
The largest distinguishability backflows are provided by initially orthogonal states, for which the trace distance is equal to $2f(t)$ (see Eq. (\ref{tracedepo})). We consider the flux non-Markovianity measure $M^D$ in case of no ancillary systems (see Eq. (\ref{measureNMdist})):
\begin{eqnarray}\nonumber
M^D({\mathbf{\Lambda}})&=& \int_{ \sigma^D>0} \sigma^D(\Lambda_t (\rho_{S,1}),\Lambda_t (\rho_{S,2})) dt= 2 \int_{f'>0} f'(t) dt \\ &=& 2 \sum_i [f(t_{fin,i})-f(t_{in,i})] = 2 \Delta \, , \label{measureNMdistdepo}
\end{eqnarray}
where $\rho_{S,1}$, $\rho_{S,2}$ are any two orthogonal states, $(t_{in,i},t_{fin,i})$ is the $i$-th time interval when $f'(t)>0$ and $\Delta>0$ is the sum of all the revivals of $f(t)$. Finally, the non-Markovianity measure given in Eq. (\ref{Nmix}) is equal to $M^{mix}({\mathbf{\Lambda}})=\Delta/(1+\Delta)$ \cite{DDSV}.
\subsection{Backflows timing and PNM core}\label{backdepo}
We are ready to evaluate ${T^\Lambda}$, $\tau^\Lambda$ and $t^\Lambda$. We can rewrite Eqs. (\ref{TLambdanuovo}), (\ref{taunm}) and (\ref{tnm}) in terms of $f(t)$ and $f'(t)$ as follows:
\begin{eqnarray}\label{TLambdadepo}
&T^{\Lambda}& = \max \left\{\, T \, \left| \,
\begin{array}{cccc}
\mbox{(A)} & f'(t)\leq 0 & \mbox{ for all} & t \leq T,\\
\mbox{(B)} & f(T)\geq f(t)& \mbox{ for all} & T\leq t, \\
\mbox{(C)} & f(T)\neq 1 & \mbox{ } & { T>0.}
\end{array}
\right. \right\} ,
\\ \label{taudepo}
&\tau^{\Lambda}& = {\inf} \left\{ T \, \left| \, f'(T)>0 \right. \right\},
\\ \label{tdepo}
&t^{\Lambda}&= \min \left\{\,T \, \left| \, f(T)=f({T^\Lambda}) \mbox{ for } T>{T^\Lambda} \right. \right\} ,
\end{eqnarray}
where the last equality holds because Eq. (\ref{TLambdadepo}) implies that $f({T^\Lambda})>f({T^\Lambda}+\epsilon)$ for infinitesimal $\epsilon>0$. As expected, condition (A) of Eq. (\ref{TLambdadepo}) implies that ${\mathbf{\Lambda}}$ behaves as a Markovian depolarizing in the time interval $[0,{T^\Lambda}]$. Secondly, by considering (A) and (B) together, we can state that $f({T^\Lambda})\in (0,1)$. As discussed in Section \ref{nondivisible}, in case ${\mathbf{\Lambda}}$ is non-invertible and $t^{NB}$ is the earliest time when $f(t^{NB})=0$, we should add to Eq.~(\ref{TLambdadepo}) the constraint ${T^\Lambda}<t^{NB}$. Anyway, as we show in Appendix \ref{noninvdepo}, even without imposing such a constraint, ${T^\Lambda}<t^{NB}$.
We proved that for a generic evolution $0\leq {T^\Lambda}\leq \tau^\Lambda\leq t^\Lambda$ (see Eq. (\ref{order})). Nonetheless, depolarizing evolutions are always characterized by $0\leq {T^\Lambda}< \tau^\Lambda< t^\Lambda \leq \infty$. The first equality is obtained for PNM depolarizing evolutions, while the last in case $f({T^\Lambda})>f(t)$ for all $t>{T^\Lambda}$ and $\lim_{t\rightarrow \infty} f(t)=f({T^\Lambda})$.
We obtain the PNM core of a NNM depolarizing evolution by exploiting the method presented in Section \ref{simulation}. Hence, if we apply Eq. (\ref{PNM}) to the intermediate maps of a NNM depolarizing evolution ${\mathbf{\Lambda}}$ characterized by $f(t)$, we obtain the PNM depolarizing evolution $\overline{\mathbf{\Lambda}}$ characterized by $\overline{f}(t)=f(t+{T^\Lambda})/f({T^\Lambda})$.
We can easily verify that it is a valid characteristic function ($\overline{f}(t)\in [0,1]$ and $\overline{f}(0)=1$) and $T^{\overline\Lambda}=0$. Therefore, the corresponding dynamical maps are:
\begin{eqnarray}\label{PNMdepo}
\!\!\!\!\! \overline{\Lambda}_t (\,\cdot\,) &=& \overline{f}(t) I_S ( \,\cdot\,) + \left(1-\overline{f}(t)\right) \tr{ \,\cdot\, }\frac{\mathbbm{1}_S}{d}
\\ \nonumber &=&
\frac{f(t+{T^\Lambda})}{f({T^\Lambda})} I_S ( \,\cdot\,) + \left(1-\frac{f(t+{T^\Lambda})}{f({T^\Lambda})} \right) \tr{ \,\cdot\, }\frac{\mathbbm{1}_S}{d} \, .
\end{eqnarray}
The NNM evolution ${\mathbf{\Lambda}}$ can be expressed as a first time interval of Markovian pre-processing, expressed by $\Lambda_t$ for $t\in [0,{T^\Lambda}]$, followed by the action of the PNM evolution $\overline{\mathbf{\Lambda}}$ (see Eq. (\ref{NNMevo2})).
As we explained, $\overline{\mathbf{\Lambda}}$ is nothing but ${\mathbf{\Lambda}}$ stripped of the effect of its Markovian pre-processing $\Lambda_{T^\Lambda}$, which is not only useless for the appearance of non-Markovian phenomena, but also damps information backflows. Indeed, we can apply Corollary \ref{corpropbackflows} and conclude that whenever we can obtain a distinguishability backflow with ${\mathbf{\Lambda}}$ in a time interval $[s,t]$, we can observe a backflow with $\overline{\mathbf{\Lambda}}$ in the time interval $[s-{T^\Lambda},t-{T^\Lambda}]$, where the proportionality factor between the two revivals is $1/f({T^\Lambda})>1$.
As expected, $\overline{\mathbf{\Lambda}}$ is characterized by larger non-Markovianity measures than ${\mathbf{\Lambda}}$:
\begin{eqnarray}\label{ND1}
&M^D(\overline{\mathbf{\Lambda}})&=
\frac{2\Delta}{f({T^\Lambda})}> M^D({\mathbf{\Lambda}}) = 2\Delta \, , \\ \label{Nmix1}
&M^{mix}(\overline{\mathbf{\Lambda}})&\,
=\frac{\Delta}{f({T^\Lambda})+\Delta} > M^{mix}({\mathbf{\Lambda}}) = \frac{\Delta}{1+\Delta} \, .
\end{eqnarray}
It can be proven that similar results hold true for the measures $M^{W,max}$ and
$M^{W,av}$ (see Eqs. (\ref{Nmax}) and (\ref{Nav})).
We conclude by noticing that all PNM depolarizing evolutions $\overline{\mathbf{\Lambda}}$ completely retrieve the initial information of the system at time $t^{\overline \Lambda}$. In particular, all PNM depolarizing evolutions satisfy the conditions of
Proposition \ref{propC}, where $\overline{\Lambda}_{t^{\overline \Lambda}}= I_S$. This result follows from the observation that $f(t^\Lambda)= f({T^\Lambda})$, and therefore all PNM depolarizing evolutions are such that $\overline f(t^{\overline\Lambda})=\overline f(0)=1$. Notice that $t^{\overline\Lambda}$ may be divergent.
\subsection{Example}\label{newsecexample}
We show how to apply our results to a simple characteristic function $f(t)$ representing a NNM depolarizing evolution ${\mathbf{\Lambda}}$.
The toy model considered here is given by $f(t)=(1 - 3 t + 2 t^2 + 2 t^3) / (1 + t^2 + t^3 + 3 t^5) $ (see Figure \ref{newfigdepo}): a continuous function with a single time interval of increase and a vanishing asymptotic behaviour. We start by calculating the times ${T^\Lambda}$, $\tau^\Lambda$ and $t^\Lambda$. Hence, we consider $\mathcal P^\Lambda$ and $\mathcal N^\Lambda$, the sets containing the pairs of times $\{s,t\}$ such that $V_{t,s}$ is respectively CPTP and non-CPTP (see Eqs. (\ref{CPTPsub}) and (\ref{nonCPTPsub})). We can obtain these sets by noticing that the smallest eigenvalue $\lambda_{t,s}=(1-f(t)/f(s))/d^2$ of the Choi state of $V_{t,s}$ is non-negative if and only if $V_{t,s}$ is CPTP. The same analysis is performed for the corresponding PNM core $\overline{\mathbf{\Lambda}}$.
We start with a technical analysis of $f(t)$.
Standard numerical methods lead to ${T^\Lambda}\simeq 0.275$, $\tau^\Lambda\simeq 0.495$ and $t^\Lambda\simeq1.040$.
It is possible to have increases of $f(t)$ only in time intervals $[s,t]$ starting later than ${T^\Lambda}$. Moreover, as explained by Proposition~\ref{propDgen}, these increases take place for a continuum of initial times: $f(t^\Lambda)-f(s)>0$ for all $s\in({T^\Lambda}, t^\Lambda)$. Instead, if
we consider an initial time $s$ earlier than ${T^\Lambda}$, the characteristic function cannot increase: $f(t)-f(s)<0$ for $s< {T^\Lambda}$ and $s< t$.
The time $\tau^\Lambda$ is the first time after which $f'(t)>0$. Moreover, $f'(t)>0$ only for $t\in(t_{in},t_{fin})=(\tau^\Lambda,t^\Lambda)$, where the total revival is $\Delta=f(t^\Lambda)-f(\tau^\Lambda)\simeq 0.164$.
We now analyse $f(t)$ from the point of view of information backflows. The characteristic function $f(t)$ is directly connected with the time-dependent distinguishability $D(\rho_{S,1}(t),\rho_{S,2}(t))$ of two states evolving under ${\mathbf{\Lambda}}$ (see Eq. (\ref{tracedepo})).
In the first time interval $[0,{T^\Lambda}]$ information is lost and never recovered. Indeed, we called this noise \textit{useless} for non-Markovian phenomena and the resultant noise $\Lambda_{{T^\Lambda}}$ represents a Markovian pre-processing. As discussed above, the damping of the initial Markovian pre-processing is quantified by $f({T^\Lambda})\simeq 0.334 $.
In the time interval $[{T^\Lambda},\tau^\Lambda]$ the system keeps losing information.
Differently from the noise in $[0,{T^\Lambda}]$, this noise is \textit{essential} for the following non-Markovian phenomena. Indeed, we have increases $f(t^\Lambda)-f(s)>0$ for all the intervals $[s,t^\Lambda]$ with $s\in ({T^\Lambda},\tau^\Lambda)$.
The maximum information backflow is obtained in $[\tau^\Lambda,t^\Lambda]$, when the system recovers information from the environment at all times ($f'(t)>0$). Moreover, at time $t^\Lambda$, the system goes back to the state assumed at time ${T^\Lambda}$ ($f(t^\Lambda)=f({T^\Lambda})$), namely when useless noise ended and the essential noise started.
The characteristic function of the corresponding PNM core $\overline{\mathbf{\Lambda}}$ is $\overline{f}(t)=f(t+{T^\Lambda})/f({T^\Lambda})$ (see Figure \ref{newfigdepo}). We use Eq. (\ref{tLoL}) and get the characteristic times $\tau^{\overline \Lambda}\simeq 0.220$ and $t^{\overline \Lambda}\simeq 0.765$ ($T^{\overline\Lambda}=0$ because $\overline{\mathbf{\Lambda}}$ is PNM).
The total increase of $\overline f (t)$ is $\overline \Delta= \Delta/f({T^\Lambda})\simeq 0.491$. If we compare the non-Markovian effects of ${\mathbf{\Lambda}}$ and $\overline{\mathbf{\Lambda}}$, any distinguishability backflow is amplified by a factor $1/f({T^\Lambda})\simeq 2.990$ (see Corollary \ref{corpropbackflows}) and through Eqs. (\ref{ND1}) and (\ref{Nmix1}) we can evaluate the values of the corresponding non-Markovianity measures: $M^D(\overline{\mathbf{\Lambda}})\simeq 0.983>M^D({\mathbf{\Lambda}})\simeq 0.328$ and $M^{mix}(\overline{\mathbf{\Lambda}})\simeq 0.329>M^{mix}({\mathbf{\Lambda}})\simeq 0.141$.
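All the values quoted in this subsection can be reproduced with a few lines of Python directly from the definitions; the following sketch is only a grid-based numerical illustration, not the method used to derive them.
\begin{verbatim}
import numpy as np

# Grid-based sketch reproducing the quoted characteristic times and measures
# of the toy model (all values are grid approximations; condition (C) of
# Eq. (TLambdadepo) is automatically satisfied here).
def f(t):
    return (1 - 3*t + 2*t**2 + 2*t**3) / (1 + t**2 + t**3 + 3*t**5)

t = np.linspace(0.0, 20.0, 200001)       # f(t) -> 0 for large t
ft = f(t)
df = np.gradient(ft, t)

# tau^Lambda: first time with f'(t) > 0, Eq. (taudepo)
tau_idx = np.argmax(df > 0)
tau = t[tau_idx]

# T^Lambda, Eq. (TLambdadepo): f non-increasing on [0,T] (A) and
# f(T) >= f(t) for all t >= T (B)
cond_A = np.logical_and.accumulate(df <= 1e-12)
cond_B = ft >= np.maximum.accumulate(ft[::-1])[::-1] - 1e-12
T_idx = np.max(np.where(cond_A & cond_B)[0])
T = t[T_idx]

# t^Lambda, Eq. (tdepo): here f has a single revival whose maximum equals
# f(T^Lambda), so we locate the maximum of f after tau^Lambda
t_star = t[tau_idx + np.argmax(ft[tau_idx:])]

# total revival Delta and the measures of Eqs. (measureNMdistdepo),
# (Nmix), (ND1) and (Nmix1)
Delta = np.sum(np.clip(np.diff(ft), 0, None))
print(T, tau, t_star)                             # ~0.275, ~0.495, ~1.040
print(ft[T_idx], Delta, 1 / ft[T_idx])            # ~0.334, ~0.164, ~2.990
print(2*Delta, 2*Delta / ft[T_idx])               # M^D(Lambda), M^D(core)
print(Delta/(1+Delta), Delta/(ft[T_idx]+Delta))   # M^mix(Lambda), M^mix(core)
\end{verbatim}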
The main qualitative difference between ${\mathbf{\Lambda}}$ and the corresponding PNM core $\overline{\mathbf{\Lambda}}$ is the presence of a time when all the initial information is recovered. If the system is evolved by $\overline{\mathbf{\Lambda}}$, any possible type of information is \textit{completely recovered} to its original value at time $t^{\overline\Lambda}$. Indeed, $\overline{f}(t^{\overline\Lambda})=1$ and the dynamical map at this time is equal to the identity, namely
$\overline\Lambda_{t^{\overline\Lambda}}=I_S$.
For instance, any pair of initially orthogonal states $\{\overline\Lambda_t (\rho_{S,1}), \overline\Lambda_t (\rho_{S,2})\}$ goes from being perfectly distinguishable, to non-perfectly distinguishable for any $t\in(0,t^{\overline \Lambda})$ and then back to perfectly distinguishable at time $t^{\overline\Lambda}$. As noticed above, all PNM depolarizing evolutions completely restore the initial information content of the system at time $t^{\overline\Lambda}$, namely $\overline f(t^{\overline\Lambda})=1$ for all PNM depolarizing $\overline{\mathbf{\Lambda}}$.
Finally, we can see how the initial noise of the PNM core $\overline{\mathbf{\Lambda}}$ is essential for the subsequent non-Markovian phenomena. Indeed, as soon as we take a non-zero time $s\in(0,t^{\overline\Lambda})$, we have a distinguishability backflow in the time interval $[s,t^{\overline\Lambda}]$.
\section{Quasi-eternal non-Markovian model}\label{eternality}
We briefly introduce a qubit model to show the existence of evolutions with ${T^\Lambda} <\tau^\Lambda<t^\Lambda=\infty $ and ${T^\Lambda} =\tau^\Lambda=t^\Lambda$. The dynamics in question are called quasi-eternal non-Markovian \cite{DDSlong}, which generalize the well-known qubit eternal non-Markovian model \cite{eternal0,eternal,eternal1}.
First, we define Pauli evolutions as those having dynamical maps with the following form
\begin{eqnarray}\label{Pauli}
\Lambda_t (\,\cdot\,) = \sum_{i=0,x,y,z} p_i(t) \sigma_i \, (\,\cdot\,) \, \sigma_i \, ,
\end{eqnarray}
where $\sigma_{x,y,z}$ are the Pauli operators, $\sigma_0=\mathbbm{1}$, and $p_0(t)=1-p_x(t)-p_y(t)-p_z(t)$. The Pauli map is CPTP if and only if $p_{0,x,y,z}(t)\geq 0$.
The easiest way to appreciate the non-Markovian features of Pauli evolutions is given by studying the corresponding master equation, namely the first-order differential equation defining the evolution of the corresponding system density matrix:
\begin{eqnarray}\label{master}
\frac{d}{dt} \rho_S(t)= \sum_{i=x,y,z} \gamma_i(t) (\sigma_i \rho_S(t) \sigma_i -\rho_S(t)) \, ,
\end{eqnarray}
where $\gamma_i(t)$ are time-dependent real functions.
It can be proven that $\gamma_i(t)\geq 0$ for all $i=x,y,z$ and $t\geq 0$ if and only if the corresponding evolution ${\mathbf{\Lambda}}$ is Markovian \cite{darekk}.
Moreover, if $\gamma_i(t)+\gamma_j(t)\geq 0$ for all $i\neq j$ and $t\geq 0$, the evolution is P-divisible, namely $V_{t,s}$ is at least positive trace-preserving (but not necessarily CPTP) for all $s\leq t$.
The probabilities and the rates that define the quasi-eternal model are:
\begin{eqnarray}\nonumber
&&p_{x}(t)=p_y(t)= \frac{1-\expp{-2 \alpha t}}{4} \, , \\
\label{probs}
&& p_z(t)=\frac{1}{4} \left( 1 + \expp{-2 \alpha t} - \frac{2 \expp{-\alpha t} \, \mbox{cosh}^{\alpha} (t-t_0)}{\mbox{cosh}^{\alpha} (t_0)}\right) \, ,
\\ \label{rates}
&& \left\{\gamma_x(t),\gamma_y(t),\gamma_z(t) \right\}= \frac{\alpha}{2} \{1,1,-\mbox{tanh}(t-t_0)\} \, ,
\end{eqnarray}
where these time-dependent parameters generate maps $\Lambda_t$ that are CPTP at all times if and only if $\alpha>0$ and
$$t_{0}\geq t_{0,\alpha}=\max\{0,(\log(2^{1/\alpha} -1))/2\}, $$
where $t_{0,\alpha}>0$ for $\alpha\in(0,1)$ and $t_{0,\alpha}=0$ for $\alpha\geq 1$ \cite{DDSlong}.
We
call quasi-eternal non-Markovian the Pauli evolution defined by the probabilities (\ref{probs}), or equivalently the solution of the
master equation (\ref{master}) with rates (\ref{rates}), where $t_0\geq t_{0,\alpha}$. These evolutions are P-divisible and, since $\gamma_z(t)<0$ for $t>t_{0}$, the infinitesimal intermediate maps are non-CPTP for all $t>t_0$.
\begin{widetext}
\begin{figure}
\caption{
{\bf Top left:} the characteristic function $f(t)$ of the toy NNM depolarizing evolution ${\mathbf{\Lambda}}$ analysed in Section \ref{newsecexample}; the remaining panels show the corresponding analysis for its PNM core $\overline{\mathbf{\Lambda}}$.}
\label{newfigdepo}
\end{figure}
\end{widetext}
The intermediate map of a Pauli evolution assumes the Pauli form, namely $V_{t,s}(\,\cdot\,) = \sum_{i=0,x,y,z} p_i(s,t) \sigma_i (\,\cdot\,) \sigma_i $, where:
\begin{eqnarray}\nonumber
&&p_{x}(s,t)=p_y(s,t)= \frac{1-\expp{-2 \alpha (t-s)}}{4} \, ,
\\
\label{probsint}&& p_z(s,t)=\frac{1}{4}\left( 1 + \expp{-2 \alpha (t-s)} - \frac{2 \expp{-\alpha (t-s)} \, \mbox{cosh}^{\alpha} (t-t_0)}{\mbox{cosh}^{\alpha} (s-t_0)} \right) \, ,
\end{eqnarray}
Notice that, as for any Pauli channel, the intermediate map $V_{t,s}$ is CPTP if and only if $p_{0,x,y,z}(s,t)\geq 0$.
The lowest eigenvalue of the Choi state of $V_{t,s}$ is $\lambda_{t,s}=p_z(s,t)$. In Figure \ref{quasieternal} we represent $\mathcal P^{\Lambda}$ and $\mathcal N^\Lambda$ for three PNM evolutions from this family, namely the collection of time-pairs $\{s,t\}$ such that $V_{t,s}$ is respectively CPTP and non-CPTP. We see that for $\alpha\in(0,1)$ we have ${T^\Lambda}<\tau^\Lambda=t^\Lambda$, while for $\alpha\geq 1$ we have ${T^\Lambda}=\tau^\Lambda=t^\Lambda$.
We prove that ${T^\Lambda}=t_0-t_{0,\alpha}$ and $\tau^\Lambda = t_0$ (see Appendix~\ref{TLalpha}).
The latter result is a direct consequence of the form of the master equation, which has negative rates if and only if $t>t_0$. Indeed, $V_{t+\epsilon,t}$ is CPTP for infinitesimal $\epsilon$ if and only if $\gamma_{x,y,z}(t)\geq 0$.
Interestingly, we can appreciate a peculiar scenario for $\alpha>1$, where we obtain a CPTP map through the composition of non-CPTP maps. Without loss of generality, we fix $t_0=t_{0,\alpha}=0$. There exist initial times $s>0$ and corresponding times $t'>s$ such that $V_{t,s}$ is non-CPTP for all $t\in(s,t')$, while $V_{t,s}$ is CPTP for all $t\geq t'$. Notice that, since $\gamma_z(t)< 0$ for all $t>0$, $V_{t+\epsilon,t}$ is non-CPTP for infinitesimal $\epsilon>0$ and all $t> 0$. Therefore, if we consider $t_1<t'<t_2$, we have that $V_{t_1,s}$ is non-CPTP and $V_{t_2,s}$ is CPTP. The latter map can be obtained via the composition of $V_{t_1,s}$ with infinitesimal intermediate maps as follows: $V_{t_2,s}=V_{t_2 , t_2 - \epsilon} \circ \dots \circ V_{t_1+\epsilon,t_1} \circ V_{t_1,s}$, namely the CPTP map $V_{t_2,s}$
\begin{widetext}
\begin{figure}
\caption{$\mathcal P^\Lambda$ and $\mathcal N^\Lambda$ for quasi-eternal PNM evolutions obtained with different $\alpha>0$ and $t_{0}$.\label{quasieternal}}
\end{figure}
\end{widetext}
is obtained by composing infinitesimal non-CPTP maps $V_{t+\epsilon,t}$
with the non-CPTP intermediate map $V_{t_1,s}$.
The composition of infinitesimal intermediate maps that we wrote corresponds to $V_{t_2,t_1}=V_{t_2 , t_2 - \epsilon} \circ \dots \circ V_{t_1+\epsilon,t_1}$, which, depending on $t_1$ and $t_2$, can be either CPTP or not.
Finally, a simple variation of this model leads to a trivial example of ${T^\Lambda}< \tau^\Lambda = t^\Lambda$, where we exploit condition (C) of Eq. (\ref{TLambdanuovo}). Consider an evolution that is unitary in an initial time interval, namely $\Lambda_t=U_t$ is unitary for $t\in [0,t_U]$, and later it behaves as an eternal PNM evolution with $\alpha>1$ and $t_0=0$.
Such an evolution would, for instance, be obtained by integrating the master equation (\ref{master}) with rates $\{\gamma_x(t),\gamma_y(t),\gamma_z(t) \} = \theta(t-t_U) \{1,1,-\mbox{tanh}(t-t_U)\} $, where $\theta(x)=1$ for $x\geq 0$ and $\theta(x)=0$ otherwise. Indeed, in $[0,t_U]$ the evolution corresponds to the identity and $0={T^\Lambda}<\tau^\Lambda=t^\Lambda=t_U$.
\section{Discussion}
We studied the difference between two types of initial noise in non-Markovian evolutions: essential noise makes the system lose the same information that is later involved in backflows, while the information lost through useless noise is never recovered. Indeed, this last type of noise can be compared to a Markovian pre-processing of the system.
We identified as PNM those evolutions showing only essential noise, while NNM evolutions display both types of noise. We proved that any NNM evolution can be simulated by a Markovian pre-processing, which generates the useless noise, followed by a PNM evolution, which represents the (pure) non-Markovian core of the evolution.
In order to distinguish between PNM and NNM evolutions, we introduced a temporal framework that aims to describe the timing of fundamental non-Markovian phenomena. We identified the most distinguishable classes arising from this framework, into which PNM and NNM evolutions fit naturally. We then singled out several mathematical features connected with this classification, generalized the approach to non-divisible evolutions and finally focused on the phenomenological side of this topic. Indeed, we addressed the problem of finding which backflows and non-Markovianity measures are amplified when PNM evolutions are compared with their corresponding noisy versions, proposing constructive and measurable results within the context of state distinguishability.
We studied how the entanglement breaking property is lost/preserved when we compare PNM cores and their corresponding NNM evolutions.
Moreover, we discussed the possibility to activate correlation backflows when we extract the PNM core out of NNM evolutions.
Finally, we studied several examples in order to show how to extract PNM cores, clarify the possible scenarios concerning the timings of non-Markovian phenomena and explain why useless noise has the only role of suppressing the backflows generated by the PNM core.
We claimed that some classes of evolutions, such as dephasing and amplitude damping, satisfy the conditions of Proposition~\ref{propC}, namely the corresponding dynamical map goes from being non-unitary to unitary. Nonetheless, as we showed with an explicit example in Section~\ref{secfeatures}, not all PNM evolutions satisfy this property. It would be interesting to understand what the minimal conditions are under which a given class of NM evolutions has its PNM representative satisfying Proposition \ref{propC}. A reasonable class could be given by the one-parameter evolutions, as described in Ref. \cite{DDSlong}, namely those with a single rate in the corresponding Lindblad master equation. More generally, concerning the possibility to lose and completely recover some type of information during the evolution, it would be interesting to understand whether PNM evolutions always enjoy this property for at least one quantifier, namely: \textit{``If an evolution is PNM, then there exist an information quantifier and an initialization that, during the dynamics, lose and then recover the initial information''}.
We analysed when and to what extent distinguishability backflows are amplified by PNM cores (see Section \ref{tracedistance}). Moreover, we gave a constructive method to build the states that provide the largest backflows. It would be interesting to generalize this approach to other information quantifiers, such as the distinguishability of state ensembles \cite{BD}, the Fisher information \cite{Fisher0,Fisher} or correlations \cite{DDSlong,Janek,DDSDONATO}.
In Section \ref{measures} we saw that PNM evolutions have non-Markovianity measures that cannot be smaller than those of the associated NNM evolutions.
Moreover, we gave conditions under which PNM evolutions have strictly larger non-Markovianity measures connected with the flux of state distinguishability. It would be interesting to understand in which other cases, and to what extent, this strict inequality can be obtained with other information quantifiers and other non-Markovianity measures.
Finally, another interesting topic would be to understand whether the extraction of the PNM core can lead to the activation of some non-Markovian phenomena, as we discussed in the context of correlation backflows.
\appendix
\widetext
\section{Proof that ${T^\Lambda}$ is given by a maximum and not a supremum}\label{proofclosed}
Given the intermediate map $V_{t,s}$ of an evolution $\mathbf{\Lambda}$, we call $\lambda_{t,s}$ the lowest eigenvalue of the corresponding operator given by the Choi-Jamiołkowski isomorphism. Hence, we have that $V_{t,s}$ is CPTP if and only if $\lambda_{t,s}\geq 0$. We call $\Omega_T \subseteq \mathbb{R}^2$ the closed set of pairs of times $\{s,t\}$ such that $s\leq t$ and $s\in [0,T]$. We can rewrite Eq. (\ref{TLambdanuovo}) as
\begin{equation}\label{TLN}
T^\Lambda=\sup\{\,T \,|\, \mbox{(A+B) } \,\, \lambda_{t,s}\geq 0 \,\,\,\mbox{for all } \{s,t\}\in \Omega_T\},
\end{equation}
where for now we replaced the maximum with the supremum and removed condition (C). Now, we prove that this supremum is indeed a maximum. We call $\mathcal P^\Lambda=\lambda^{-1}_{t,s}([0,\infty))\subseteq \mathbb{R}^2_\leq$ the set of pairs of times such that $\lambda_{t,s}\geq 0$, where $\mathbb{R}^2_\leq$ denotes the set of ordered pairs $\{s,t\}$ with $0\leq s\leq t$. Since $V_{t,s}$ is continuous in $\{s,t\}$, so is $\lambda_{t,s}$. Therefore, $\mathcal P^\Lambda=\lambda^{-1}_{t,s}([0,\infty))$, being the preimage of the closed set $[0,\infty)$ under a continuous function, is a closed subset of $\mathbb{R}^2_\leq$, and $\mathcal N^\Lambda=\lambda^{-1}_{t,s}((-\infty,0))$ is an open subset of $\mathbb{R}^2_\leq$.
Now we prove that Eq. (\ref{TLN}) is a maximum, namely that $\lambda_{t,s}\geq 0$ for all $\{s,t\}\in \Omega_{T^\Lambda}$. Suppose, by contradiction, that this is not the case. Then there must exist a pair of times $\{ \tilde s,\tilde t \}\in \Omega_{T^\Lambda}$ such that $\lambda_{\tilde t,\tilde s}<0$. Since $\lambda_{t,s}$ is continuous and $\lambda^{-1}_{t,s}((-\infty,0))$ is open, there exists a small enough $\epsilon>0$ such that $\lambda_{\tilde t,\tilde s-\epsilon}<0$. It follows that the condition $\lambda_{t,s}\geq 0$ for all $\{s,t\}\in \Omega_T$ already fails for $T=T^\Lambda-\epsilon$, so that the supremum in Eq. (\ref{TLN}) is at most $T^\Lambda-\epsilon$, in contradiction with its definition. Hence, $\lambda_{t,s}\geq 0$ for all $\{t,s\}\in\Omega_{T^{\Lambda}}$ and Eq. (\ref{TLN}) is a maximum. Finally, the addition of condition (C) of Eq. (\ref{TLambdanuovo}) to Eq. (\ref{TLN}) does not change this result.
\section{Proof of Lemma \ref{ABviolation}, \ref{lemma4} and Proposition \ref{propDgen}}\label{lemma123}
\begin{proof}[Proof of Lemma \ref{ABviolation}]
First we prove the first sentence. Since $[0,T]$ is included in $[0,{T^\Lambda}]$, if (A) is satisfied in $[0,{T^\Lambda}]$, then it is also satisfied in $[0,T]$ for $T\leq {T^\Lambda}$. Concerning condition (B) for $T\leq {T^\Lambda}$, we have to distinguish two situations. If $t\in(T,{T^\Lambda}]$, then $V_{t,T}$ is CPTP because the evolution is CP-divisible in $[0,{T^\Lambda}]$. If $t>{T^\Lambda}$, we can write $V_{t,T}=V_{t,{T^\Lambda}}\circ V_{{T^\Lambda},T}$, which is CPTP as it is the composition of two CPTP maps: $V_{{T^\Lambda},T}$ is CPTP because the evolution is CP-divisible in $[0,{T^\Lambda}]$ and $V_{t,{T^\Lambda}}$ is CPTP because condition (B) is satisfied for $T={T^\Lambda}$.
Finally, we obtain the proof of the second sentence by considering that a violation of condition (A) implies that $V_{t,s}$ is not CPTP for some $s<t\leq T$. Hence, (B) is violated at time $s<T$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma4}]
This result is a direct consequence of Lemma \ref{ABviolation}. Indeed, for infinitesimal values of $\epsilon>0$, condition (B) is violated at time ${T^\Lambda}+\epsilon$. This is the earliest time at which this happens because, given the result of Lemma \ref{ABviolation}, if $V_{t,s}$ were not CPTP for some $s\leq {T^\Lambda}$, then Eq. (\ref{TLambdanuovo}) would not correspond to the maximum time at which (A) and (B) are satisfied.
Now we prove the last sentence. ${T^\Lambda}\leq t$ otherwise (A) would not be satisfied. Now we show that ${T^\Lambda}\in [s,t)$ leads to a contradiction. We can express the non-CPTP intermediate map through the following composition $V_{t,s}=V_{t,{T^\Lambda}}\circ V_{{T^\Lambda},s}$. The r.h.s. of this equation is CPTP as it is the composition of two CPTP maps: $V_{{T^\Lambda},s}$ is CPTP for condition (A) and $V_{t,{T^\Lambda}}$ is CPTP for condition (B). Since the l.h.s. is not CPTP, we have a contradiction: either $V_{{T^\Lambda},s}$, $V_{t,{T^\Lambda}}$ or both are not CPTP and therefore either (A), (B) or both cannot be satisfied for $T \in [s,t)$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{propDgen}]
We start with the case ${T^\Lambda}<t^\Lambda$.
We write $V_{t^\Lambda,{T^\Lambda}+\epsilon}=V_{t^\Lambda,s}\circ V_{s,{T^\Lambda}+\epsilon}$, where $V_{t^\Lambda,{T^\Lambda}+\epsilon}$ is not CPTP and $V_{s,{T^\Lambda}+\epsilon}$ is CPTP because $t^\Lambda$ is the earliest time for which the intermediate map starting from ${T^\Lambda}+\epsilon$ is not CPTP (see Lemma \ref{lemma4} and Eq. (\ref{tnm})). Since the composition of two CPTP maps is CPTP, $V_{t^\Lambda,s}$ cannot be CPTP, otherwise $V_{t^\Lambda,{T^\Lambda}+\epsilon}$ would be CPTP.
Consider now ${T^\Lambda}=t^\Lambda$. We want to prove that, for all $T>{T^\Lambda}$, the infinitesimal intermediate map $V_{t+\epsilon,t}$ is non-CPTP, for infinitesimal $\epsilon>0$, at infinitely many times $t$ inside $({T^\Lambda},T)$. Take $T>{T^\Lambda}$ small enough such that either (i) there are infinitely many times $t\in({T^\Lambda},T)$ at which $V_{t+\epsilon,t}$ is non-CPTP, or (ii) there are none. The case of finitely many such times can be avoided by simply considering $T$ (greater but) close enough to ${T^\Lambda}$. We now analyse case (ii). Since ${T^\Lambda}=t^\Lambda$, the point $\{{T^\Lambda},{T^\Lambda}\}$ must belong to the border of $\mathcal N^\Lambda$, and therefore there exists a continuum of pairs $\{s,t\}$ infinitesimally close to $\{{T^\Lambda},{T^\Lambda}\}$ which belong to $\mathcal N^\Lambda$. We remind the reader that $\mathcal N^\Lambda$, the collection of pairs $\{s,t\}$ such that $V_{t,s}$ is non-CPTP, is an open set. Consider a non-CPTP $V_{t_2,t_1}$ such that ${T^\Lambda}<t_1<t_2<T$ and write it as the composition of a large number of infinitesimal intermediate maps. We obtain $V_{t_2,t_1}=V_{t_2,t_2-\epsilon}\circ V_{t_2-\epsilon,t_2-2\epsilon}\circ \dots \circ V_{t_1+\epsilon,t_1}$, where the l.h.s. is non-CPTP while, for small enough $\epsilon>0$, the components on the r.h.s. are all CPTP, as a consequence of the definition of case (ii). Since the composition of CPTP maps is CPTP, we have a contradiction and only scenario (i) can take place.
A pathological case with $0={T^\Lambda}=t^\Lambda$, in which the set of times $t$ at which $V_{t+\epsilon,t}$ is non-CPTP does not contain any whole interval $({T^\Lambda},T)$, is given by the master equation (\ref{master}) with the rates $\{\gamma_x(t),\gamma_y(t),\gamma_z(t) \} = \{1,1,-\sin(1/t) \mbox{tanh}(t)\}$.
This is the case because $V_{t+\epsilon,t}$ is non-CPTP at time $t$ if and only if $\gamma_z(t)<0$, and $-\sin(1/t) \mbox{tanh}(t)$ does not have a definite sign in any time interval $(0,T)$.
\end{proof}
\section{Proof that ${T^\Lambda}\leq \tau^\Lambda\leq t^\Lambda$ and more}\label{tlambdavari}
The time ${T^\Lambda}$ cannot be larger than $\tau^\Lambda$ or we would have a violation of condition (A) from Eq. (\ref{TLambdanuovo}). Since $\tau^{\Lambda}$ is defined through the infimum, ${T^\Lambda}$ and $\tau^\Lambda$ may coincide, but in this case $V_{\tau^{\Lambda}+\epsilon,\tau^{\Lambda}}=V_{{T^\Lambda}+\epsilon,{T^\Lambda}}$ has to be CPTP for all $\epsilon > 0$, otherwise the condition (B) for ${T^\Lambda}$ would be violated. Hence, when ${T^\Lambda}=\tau^\Lambda$, Eq. (\ref{tnm}) is not a minimum.
Now we prove $\tau^\Lambda\leq t^\Lambda$ by showing that a violation of this inequality leads to a contradiction. If $t^\Lambda< \tau^\Lambda$, we would have that $V_{t^\Lambda,{T^\Lambda}+\epsilon}$ is not CPTP while at the same time $V_{\tau^\Lambda+\epsilon,\tau^\Lambda}$ should be the earliest non-CPTP map over an infinitesimal time interval. These two statements are in contradiction because, if $V_{t^\Lambda,{T^\Lambda}+\epsilon}$ is not CPTP, there must be an infinitesimal time interval $[t_1,t_1+\epsilon]$ contained in $[{T^\Lambda}+\epsilon,t^\Lambda]$ such that $V_{t_1+\epsilon,t_1}$ is not CPTP (see below). Hence in this case we would have $T^\Lambda+\epsilon\leq t_1\leq t^\Lambda<\tau^\Lambda$, in contradiction with Eq. (\ref{taunm}), which defines $\tau^\Lambda$ as the earliest time $t$ such that $V_{t+\epsilon,t}$ is not CPTP.
Hence, we only need to prove that if $[s,t]$ is a time interval where $V_{t,s}$ is not CPTP, then there exists an infinitesimal time interval $[t_1,t_1+\epsilon]$ such that $V_{t_1+\epsilon,t_1}$ is not CPTP. For any $\epsilon>0$, we can split $[s,t]$ in subintervals of width $\epsilon$ and consider the composition $V_{t,s}=V_{t,t-\epsilon}\circ V_{t-\epsilon,t-2\epsilon}\circ \dots \circ V_{s+\epsilon,s}$. Since the composition of CPTP maps is CPTP, if $V_{t,s}$ is not CPTP there must be at least one infinitesimal subinterval $[t_1,t_1+\epsilon]$ where the corresponding intermediate map is not CPTP.
Now, we prove that $T^\Lambda=\tau^\Lambda$ implies ${T^\Lambda}=\tau^\Lambda=t^\Lambda$.
From the definition of $\tau^\Lambda$ given in Eq. (\ref{taunm}), it follows that there exists $\overline \delta>0$ such that for all $\delta \in (0,\overline \delta)$, the intermediate map $V_{T^\Lambda+\delta+\epsilon,T^\Lambda+\delta}$ is not CPTP for all $\epsilon \in (0,\epsilon^{(\delta)})$, where $\epsilon^{(\delta)}>0$ depends on $\delta$. On the other hand, $t^{\Lambda,\delta}=\inf\{T\ | V_{T,T^\Lambda+\delta} \mbox{ is not CPTP}\}$. Hence, this infimum is given by $t^{\Lambda,\delta}=T^\Lambda+\delta$ for every such $\delta$, and therefore, taking $\delta\rightarrow 0$, we obtain $t^\Lambda={T^\Lambda}$.
\section{Proof of Proposition \ref{propC}}\label{proofpropC}
We start by considering the case where $\mathbf{\Lambda}$ does not have an initial time interval $[0,\delta)$ on which the corresponding dynamical maps are all unitary.
In this case, $\Lambda_s$ is not unitary for $s\in (0,\delta)$ and $\Lambda_{t}=U$ is unitary for some ${t}\geq \delta$. We can write the intermediate map starting from an infinitesimal time to $t$ as $V_{{t},\epsilon}=U \circ \Lambda_\epsilon^{-1}$, which is not CPTP for all $\epsilon\in (0,\delta)$. Indeed, since $U$ is unitary, $V_{{t},\epsilon}$ and $\Lambda_\epsilon^{-1}$ have the same eigenvalues and $\Lambda_\epsilon^{-1}$ is non-CPTP because it is the inverse of a non-unitary CPTP map.
It follows that $T^\Lambda=0$.
Suppose now that $\mathbf{\Lambda}$ is unitary in $[0,\delta)$. Since we assumed that $\Lambda_{s}$ is not unitary for some $s<t$, there exists at least one finite time interval $(\delta, \delta')$ such that $\Lambda_{s}$ is not unitary for $s\in(\delta, \delta')$, where $\delta'\leq t$. Given the conditions (B) and (C) of Eq. (\ref{TLambdanuovo}), we have to check whether $V_{t,s}$ is CPTP for $s=\delta+\epsilon$ with infinitesimal $\epsilon$. Writing $V_{t,\delta+\epsilon}=\Lambda_{t}\circ \Lambda^{-1}_{\delta+\epsilon} = U\circ \Lambda^{-1}_{\delta+\epsilon}$, we see that this map cannot be CPTP: the inverse of a CPTP non-unitary map, namely $\Lambda^{-1}_{\delta+\epsilon}$, is not CPTP, and its composition with the unitary transformation $\Lambda_{t}=U$ has the same eigenvalues as $\Lambda^{-1}_{\delta+\epsilon}$. Hence, $V_{t,\delta+\epsilon}$ is not CPTP and $T^\Lambda=0$.
Finally, we have to prove that there exists at least one intermediate map which is not even positive (P). We call vol$(\Lambda_{t})$ the volume of the image of the evolution at time $t$. Since $\Lambda_{s}$ is CPTP and not unitary, vol$(\Lambda_{s})<$vol$(\Lambda_{0})$ \cite{volume}.
It follows that vol$(\Lambda_{s})<$vol$(\Lambda_{t})=$vol$(\Lambda_{0})$, since $\Lambda_t=U$ is unitary; as the intermediate map $V_{t,s}$ would have to increase the volume of the image, it cannot be positive \cite{dividingqm}.
\section{Proof that $M^{mix}({\mathbf{\Lambda}})\leq M^{mix}(\overline{\mathbf{\Lambda}})$}\label{appNmix}
Let $M^{mix}(\overline{\mathbf{\Lambda}}) = \overline p$, where $\overline{\mathbf{\Lambda}}$ is the PNM core of ${\mathbf{\Lambda}}$, and suppose that
$
\overline{\mathbf \Lambda}^{mix} = (1-\overline p)\overline{\mathbf{\Lambda}}+\overline p \overline{\mathbf{\Gamma}}^M
$
is Markovian, namely $\overline{\mathbf{\Gamma}}^M$ is optimal to make $\overline{\mathbf{\Lambda}}$ Markovian. The intermediate maps of this evolution are CPTP and read
\begin{eqnarray}\label{incohV}
\overline{V}^{mix}_{t,s }= \left( (1-\overline p)\overline{{\Lambda}}_t+\overline p \overline{{\Gamma}}^M_t \right) \circ \left( (1-\overline p)\overline{{\Lambda}}_s+\overline p \overline{{\Gamma}}^M_s \right)^{-1} \, .
\end{eqnarray}
Now we take the Markovian evolution $\mathbf{\Gamma}^M$ and consider $\mathbf{\Lambda}^{mix} = (1-\overline p){\mathbf{\Lambda}}+\overline p {\mathbf{\Gamma}}^M$, where:
\begin{eqnarray}
{\mathbf{\Gamma}}^M=\left\{
\begin{array}{cc}
\Lambda_t & t< T^{\Lambda} \\
\overline{{\Gamma}}_{t-T^\Lambda} \circ {{\Lambda}}_{T^\Lambda} & t\geq T^{\Lambda}
\end{array} \right. \hspace{0.5cm} \mbox{ and } \hspace{0.5cm}
\mathbf{\Lambda}^{mix} = (1-\overline p){\mathbf{\Lambda}}+\overline p {\mathbf{\Gamma}}^M =
\left\{
\begin{array}{cc}
\Lambda_t & t< T^{\Lambda} \\
\left( (1-\overline p)\overline{{\Lambda}}_{t-T^\Lambda} + \overline p \overline{{\Gamma}}_{t-T^\Lambda} \right) \circ \Lambda_{T^\Lambda} & t\geq T^{\Lambda}
\end{array} \right. .
\end{eqnarray}
Hence, $\mathbf{\Lambda}^{mix}$ is CP-divisible for $t\in[0,T^\Lambda]$ because there it coincides with $\Lambda_t$, which is CP-divisible in $[0,T^\Lambda]$. Consider the intermediate map $V_{t_2,t_1}^{mix}$ of $\mathbf{\Lambda}^{mix}$, where $T^\Lambda\leq t_1\leq t_2$. We can write it as
\begin{eqnarray}
V^{mix}_{t_2,t_1}&=&{\Lambda_{t_2}}^{mix} \circ ({\Lambda_{t_1}}^{mix})^{-1} = \left( \left( (1-\overline p)\overline{{\Lambda}}_{t_2-T^\Lambda} + \overline p \overline{{\Gamma}}_{t_2-T^\Lambda} \right) \circ \Lambda_{T^\Lambda} \right) \circ \left( \left( (1-\overline p)\overline{{\Lambda}}_{t_1-T^\Lambda} + \overline p \overline{{\Gamma}}_{t_1-T^\Lambda} \right) \circ \Lambda_{T^\Lambda} \right)^{-1} = \nonumber \\
&=& \left( (1-\overline p)\overline{{\Lambda}}_{t_2-T^\Lambda} + \overline p \overline{{\Gamma}}_{t_2-T^\Lambda} \right) \circ \Lambda_{T^\Lambda} \circ \Lambda_{T^\Lambda}^{-1} \circ \left( (1-\overline p)\overline{{\Lambda}}_{t_1-T^\Lambda} + \overline p \overline{{\Gamma}}_{t_1-T^\Lambda} \right)^{-1} = \overline{V}^{mix}_{t_2-T^\Lambda,t_1- T^\Lambda} \, , \end{eqnarray}
which is CPTP because, for $t\geq T^\Lambda$, it is the intermediate map of $\overline{\mathbf{\Lambda}}^{mix}$ (see Eq. (\ref{incohV})). In summary, the infinitesimal intermediate maps of ${\mathbf{\Lambda}}^{mix}$ are
\begin{eqnarray}
V^{mix}_{t+\epsilon, t}=\left\{
\begin{array}{cc}
V_{t+\epsilon,t} & t< T^{\Lambda} \\
\overline{V}^{mix}_{t-T^\Lambda+\epsilon,t-T^\Lambda} & t\geq T^{\Lambda}
\end{array} \right. \, ,
\end{eqnarray}
where $V_{t+\epsilon,t}$ are the infinitesimal intermediate maps of $\mathbf{\Lambda}$ which are CPTP for $t\in[0,T^\Lambda]$ and $\overline{V}^{mix}_{t-T^\Lambda+\epsilon,t-T^\Lambda}$, for $t\geq T^\Lambda$, are the intermediate maps of the Markovian evolution $\overline{\mathbf{\Lambda}}^{mix}$. Hence, $\mathbf{\Lambda}^{mix} = (1-\overline p){\mathbf{\Lambda}}+\overline p {\mathbf{\Gamma}}^M $ is CP-divisible (Markovian) and
\begin{eqnarray}\label{incoh2}
M^{mix}(\mathbf{\Lambda})=\min \{ \,p\,| \, \exists \mathbf{\Lambda}^M \mbox{ s.t. } (1- p){\mathbf{\Lambda}}+ p {\mathbf{\Lambda}}^M \mbox{ is Markovian} \} \leq \overline p = M^{mix}(\overline{\mathbf{\Lambda}})\, .
\end{eqnarray}
\section{Non-invertible depolarizing example}\label{noninvdepo}
\begin{figure}
\caption{{\bf Top left:} the characteristic function $f(t)$ of the non-invertible depolarizing example; the regions $\mathcal P^\Lambda$ and $\mathcal N^\Lambda$ are shown through the regularization $l_{t,s}$ defined in the text.\label{figdepo}}
\end{figure}
We proceed by showing how our framework behaves with a non-bijective depolarizing evolution.
Consider the characteristic function given by $f(t)=(2t-1)^2/(2t^3-t+1)$ (see Figure \ref{figdepo}).
This depolarizing evolution is not divisible. As we can see, the evolution is not bijective between $t^{NB}=1/2$ and any later time ($f(1/2)=0$). We cannot define intermediate maps $V_{t,t^{NB}}$ with initial time $t^{NB}$. Nonetheless, Eq. (\ref{TLambdadepo}) provides a ${T^\Lambda}$ smaller than $t^{NB}$, even without imposing the extra condition ${T^\Lambda}<t^{NB}$. Indeed, ${T^\Lambda}$ is the earliest time such that there exists a time interval $[{T^\Lambda}+\epsilon, t^\Lambda]$ when $V_{t^\Lambda,{T^\Lambda}+\epsilon}$ is not CPTP, while $V_{t^\Lambda,{T^\Lambda}}$ is CPTP (see Proposition \ref{propDgen}). Hence, $V_{t^\Lambda,{T^\Lambda}+\epsilon}$ not being CPTP implies
$f(t^\Lambda)-f({T^\Lambda}+\epsilon)>0$, from which we can state that $f(t^\Lambda)>0$. Similarly, from
$f(t^\Lambda)-f({T^\Lambda})\leq 0$ we have $f({T^\Lambda})\geq f(t^\Lambda)>0$. Since evolutions are continuous, so are characteristic functions. If $f(0)=1$ and $f(1/2)=0$, there must be an intermediate time ${T^\Lambda}<1/2$ at which the characteristic function assumes a value $f({T^\Lambda})>0$ while $f'({T^\Lambda})<0$. This property holds for any characteristic function that is zero-valued at one or more times, and therefore finding ${T^\Lambda}$ does not require any additional technique with respect to the divisible case.
Straightforward calculations lead to ${T^\Lambda}=1/8$, $\tau^\Lambda=1/2$ and $t^\Lambda=3/2$.
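These values can be verified by hand, using the criterion (employed below for the plots) that the depolarizing intermediate map $V_{t,s}$ is CPTP if and only if $f(t)\leq f(s)$: the function $f$ is strictly decreasing on $[0,1/2]$, vanishes at $t^{NB}=1/2$, increases up to a local maximum at $t=3/2$ and then decreases to zero, with
\begin{equation}
f\Big(\frac{1}{8}\Big)=\frac{(1/4-1)^2}{1/256-1/8+1}=\frac{9/16}{225/256}=\frac{16}{25}=\frac{(3-1)^2}{27/4-3/2+1}=f\Big(\frac{3}{2}\Big)\,.
\end{equation}
Hence conditions (A) and (B) of Eq. (\ref{TLambdanuovo}) hold exactly for $T\leq 1/8$, i.e. ${T^\Lambda}=1/8$; moreover $\tau^\Lambda=1/2$ is the first time after which $f$ increases, and $t^\Lambda=3/2$ is (in the limit of infinitesimal $\epsilon$) the first time at which $f$ climbs back to the value $f(1/8)=16/25$.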
Most of the analysis made in Section \ref{newsecexample} for the corresponding example carries over to the present case. Nonetheless, we underline some differences coming from non-bijectivity.
First, both the evolutions ${\mathbf{\Lambda}}$ and $\overline{\mathbf{\Lambda}}$ have a time, $\tau^\Lambda=t^{NB}$ and $\tau^{\overline \Lambda}=\overline t^{NB}$ respectively, when \textit{all} the states are mapped into the maximally mixed state, namely when the corresponding characteristic function is null.
Nonetheless, differently from ${\mathbf{\Lambda}}$, the PNM core $\overline{\mathbf{\Lambda}}$ \textit{completely retrieves} any possible type of information at the later time $t^{\overline\Lambda}$, when $\overline{f}(t^{\overline\Lambda})=1$. Indeed, the dynamical map at this time is equal to the identity:
$\overline\Lambda_{t^{\overline\Lambda}}=I_S$. For instance, any pair of initially orthogonal states $\rho_{S,1}(t)$, $\rho_{S,2}(t)$ goes from being perfectly distinguishable ($t=0$), to being absolutely indistinguishable ($t=\tau^{\overline\Lambda}$), and then back to being perfectly distinguishable ($t=t^{\overline\Lambda}$).
Since we cannot write $V_{t,t^{NB}}$, the lowest eigenvalue of the corresponding Choi operator, $\lambda_{t,t^{NB}}=(1-f(t)/f(t^{NB}))/4$, cannot be evaluated: $f(t^{NB})=0$ and this quantity would diverge. Hence, in Fig. \ref{figdepo} we plot $\mathcal P^\Lambda$ and $\mathcal N^\Lambda$ through the regularization $l_{t,s}=f(s) \lambda_{t,s}=(f(s)-f(t))/4$, which is non-negative if and only if there exists a CPTP $V_{t,s}$ and does not diverge for $s=t^{NB}$. We must underline an important subtlety. Even if $V_{t,t^{NB}}$ does not exist, the evolution behaves as non-Markovian in the time intervals $[t^{NB},t]$. Indeed, since Markovianity is defined through CP-divisibility, any other case is rightfully labelled as non-Markovian. This is the case here: we do not have a specific non-CPTP intermediate map, but since the evolution in $[t^{NB},t]$ is not represented by a CPTP operator, the evolution must show some non-Markovian features. As a matter of fact, the largest non-Markovian features appear during these time intervals, where states go from being \textit{identical} to partially, or perfectly (in the case of the PNM core and initially orthogonal states), distinguishable. Hence, these effects are not only quantitatively the largest, but also qualitatively different.
\section{Proof that $T^\Lambda=t_0-t_{0,\alpha}$}\label{TLalpha}
We call $\Lambda^{(\alpha,t_0)}_t$ the dynamical map at time $t$ of the quasi-eternal NM evolution defined by the parameters $\alpha$ and $t_0$. Similarly, we define $V_{t,s}^{(\alpha,t_0)}$.
We need to find the maximum $T$ such that conditions (A) and (B) from Eq. (\ref{TLambdanuovo}) are satisfied. We start with condition (B). Since $\Lambda_t^{(\alpha,t_0)}$ is CPTP if and only if $t_0\geq t_{0,\alpha}$, and $V_{t,T}^{(\alpha,t_0)}=\Lambda_{t-T}^{(\alpha,t_0-T)}$, the map $V_{t,T}^{(\alpha,t_0)}$ is CPTP for all $t\geq T$ if and only if $T\leq t_0-t_{0,\alpha}$. Concerning condition (A), an evolution generated by Eq. (\ref{master}) is CP-divisible in $[0,T]$ if and only if $\gamma_{x,y,z}(t)\geq 0$ for all $t\in [0,T]$; therefore, condition (A) is satisfied for all $T\leq t_0$. Hence, the maximum $T$ for which conditions (A) and (B) are simultaneously satisfied is $t_0-t_{0,\alpha}$: PNM quasi-eternal non-Markovian evolutions are those having $t_0=t_{0,\alpha}$.
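The key identity $V_{t,T}^{(\alpha,t_0)}=\Lambda_{t-T}^{(\alpha,t_0-T)}$ used above can be checked directly on the Bloch-sphere eigenvalues (\ref{quasieig}) (a short verification added for completeness): since all the maps of a Pauli evolution are diagonal in the same basis, the eigenvalues of $V_{t,T}=\Lambda_t\circ\Lambda_T^{-1}$ are the ratios
\begin{equation}
\frac{\lambda_z(t)}{\lambda_z(T)}=\expp{-2\alpha(t-T)}\,,\qquad
\frac{\lambda_x(t)}{\lambda_x(T)}=\frac{\expp{-\alpha(t-T)}\,\mbox{cosh}^{\alpha}(t-t_0)}{\mbox{cosh}^{\alpha}(T-t_0)}\,,
\end{equation}
which are precisely the eigenvalues of the quasi-eternal dynamical map evaluated at time $t-T$ with the shifted parameter $t_0\rightarrow t_0-T$ (compare also with the intermediate probabilities (\ref{probsint})).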
\end{document} |
\begin{document}
\title{Well-posedness for the generalized Benjamin-Ono equations with arbitrarily large initial data in the critical space}
\noindent {\bf Abstract.}\, We prove that the generalized Benjamin-Ono equations $\partial_tu+\mathcal{H}\partial_x^2u\pm u^k\partial_xu=0$, $k\geq 4$
are locally well-posed in the scaling invariant spaces $\dot{H}^{s_k}(\mathbb{R})$ where $s_k=1/2-1/k$. Our results also hold in the non-homogeneous spaces
$H^{s_k}(\mathbb{R})$. In the case $k=3$, local well-posedness is obtained in $H^{s}(\mathbb{R})$, $s>1/3$.
\\
\noindent
{\bf Keywords:} NLS-like equations, Cauchy problem\\
{\bf AMS Classification:} 35Q55, 35B30, 76B03, 76B55
\section{Introduction}
In this paper we pursue our study of the Cauchy problem for the generalized Benjamin-Ono equations
\begin{equation}\label{gBO}\tag{gBO}\left\{\begin{array}{ll}\partial_tu+\mathcal{H}\partial_x^2u\pm u^k\partial_xu=0,
\quad x,t\in\mathbb{R},\\u(x,t=0)=u_0(x),\quad
x\in\mathbb{R},\end{array}\right.\end{equation}
with $k$ an integer $\geq 3$ and with $\mathcal{H}$ the Hilbert transform defined \textit{via} the Fourier transform by
\begin{equation}\label{hilbert}\mathcal{H} f=\mathcal{F}^{-1}(-i\mathop{\rm sgn}\nolimits(\xi)\hat{f}(\xi)),\quad f\in\mathcal{S}'(\mathbb{R}).\end{equation}
The Hilbert transform is a real operator, and consequently we look for real-valued solutions. In view of (\ref{hilbert}), we see that $\mathcal{H}$ is nothing but $-i$ on positive frequencies
and $+i$ on negative ones. A very close equation to (\ref{gBO}) is then the derivative nonlinear Schr\"{o}dinger equation
\begin{equation}\label{nls}\partial_tu-i\partial_x^2u\pm u^k\partial_xu=0,\end{equation}
for which all our results remain true. Furthermore, (\ref{gBO}) and (\ref{nls}) enjoy the same linear estimates, see Section \ref{sec-lin}.
\vskip 0.5cm
A remarkable feature of (\ref{gBO}) is the following scaling invariance: if $u(t,x)$ is a solution of the equation on $[-T,+T]$, then for any $\lambda>0$,
$u_\lambda(t,x)=\lambda^{1/k}u(\lambda^2t,\lambda x)$
also solves (\ref{gBO}) on $[-\lambda^{-2}T,+\lambda^{-2}T]$ with initial data
$u_\lambda(0,x)$ and moreover
\[\|u_\lambda(\cdot,0)\|_{\dot{H}^s}=\lambda^{s+\frac{1}{k}-\frac{1}{2}}\|u(\cdot,0)\|_{\dot{H}^s}.\]
Hence the $\dot{H}^s(\mathbb{R})$ norm is invariant if and only if
$s=s_k=1/2-1/k$ and we may expect well-posedness in $\dot{H}^{s_k}(\mathbb{R})$.
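For the reader's convenience, we spell out this elementary computation (which only fixes our conventions): since $u_\lambda(\cdot,0)=\lambda^{1/k}u_0(\lambda\,\cdot)$ has Fourier transform $\lambda^{1/k-1}\hat{u}_0(\cdot/\lambda)$, the change of variables $\xi=\lambda\eta$ gives
\[
\|u_\lambda(\cdot,0)\|_{\dot{H}^s}^2=\int_{\mathbb{R}}|\xi|^{2s}\lambda^{2/k-2}|\hat{u}_0(\xi/\lambda)|^2\,d\xi
=\lambda^{2(s+\frac{1}{k}-\frac{1}{2})}\int_{\mathbb{R}}|\eta|^{2s}|\hat{u}_0(\eta)|^2\,d\eta,
\]
which is invariant exactly when $s=s_k$. Note also that each term of (\ref{gBO}) is multiplied by $\lambda^{2+1/k}$ under the rescaling $u\mapsto u_\lambda$, which is why $u_\lambda$ is again a solution.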
\vskip 0.5cm
When $k=1$, (\ref{gBO}) is the ordinary Benjamin-Ono equation
derived by Benjamin \cite{1967JFM....29..559B} and later by Ono
\cite{MR0398275} as a model for one-dimensional waves in deep
water. The Cauchy problem for the Benjamin-Ono equation has been
extensively studied these last years, see
\cite{MR533234,MR1097916,MR847994}. In \cite{MR2052470}, Tao introduced a gauge transformation
(a kind of Cole-Hopf transformation)
which ameliorate the derivative nonlinearity, and
get the well-posedness of this equation in $H^s(\mathbb{R})$,
$s\geq 1$. Recently, combining a
gauge transformation together with a Bourgain's method, Ionescu
and Kenig \cite{MR2291918} shown that one could go down to
$L^2(\mathbb{R})$, which seems to be the critical space for the Benjamin-Ono equation. Note also that Burq and Planchon \cite{MR2357995}
have obtained well-posedness in $H^s(\mathbb{R})$, $s>1/4$ by similar methods. It is
worth noticing that all these results have been obtained by
compactness methods. On the other hand, Molinet, Saut and
Tzvetkov \cite{MR1885293} proved that, for all $s\in\mathbb{R}$, the
flow map $u_0\mapsto u$ is not of class $\mathcal{C}^2$ from
$H^s(\mathbb{R})$ to $H^s(\mathbb{R})$. Furthermore, building suitable families of
approximate solutions, Koch and Tzvetkov proved in
\cite{MR2172940} that the flow map is actually not even uniformly
continuous on bounded sets of $H^s(\mathbb{R})$, $s>0$. This explains why a Picard iteration scheme
fails to solve the Benjamin-Ono equation in Sobolev spaces.
\vskip 0.5cm
In the case of the modified Benjamin-Ono equation ($k=2$),
Kenig and Takaoka \cite{MR2219229} have recently obtained the
global well-posedness in the energy space $H^{1/2}(\mathbb{R})$. This have
been proved thanks to a localized gauge transformation combined
with a space-time $L^2$ estimate of the solution. It is important to note that this result is far from that given by the scaling index $s_2=0$.
However, it is known
to be sharp since the solution map $u_0\mapsto u$ is not
$\mathcal{C}^3$ in $H^s(\mathbb{R})$ as soon as $s<1/2$ (see \cite{MR2038121}).
\vskip 0.5cm
In the case $k=3$, the local
well-posedness is known in $H^s(\mathbb{R})$, $s>1/3$ for small initial
data \cite{MR2038121} but only in $H^s(\mathbb{R})$, $s>3/4$ for
large initial data. In \cite{vento-2007}, we showed that (\ref{gBO}) is $\mathcal{C}^4$-ill-posed in $H^s(\mathbb{R})$, $s<1/3$,
in the sense that the flow-map $u_0\mapsto u$
fails to be $\mathcal{C}^4$. We prove here that well-posedness occurs in $H^s(\mathbb{R})$, $s>1/3$, and without smallness
assumption on the initial data.
\vskip 0.5cm
Concerning the case $k\geq 4$, global well-posedness in $H^s(\mathbb{R})$, $s>s_k$ was derived for small initial data
by Molinet and Ribaud in \cite{MR2038121}. Later, by means of a gauge transformation, the same authors \cite{MR2101982} removed the size restriction on the data and showed well-posedness
in $H^{1/2}(\mathbb{R})$, whatever the value of $k$. By a refinement of their method, we reached in \cite{vento-2007} the well-posedness in $H^s(\mathbb{R})$, $s>s_k$,
but for high nonlinearities only ($k\geq 12$ in fact). On the other hand, in the particular case $k=4$, Burq and Planchon \cite{MR2227135} proved the local well-posedness
in the critical space $\dot{H}^{1/4}(\mathbb{R})$. Inspired by their works, we extend in this paper the well-posedness to $\dot{H}^{s_k}(\mathbb{R})$ for any $k\geq 4$, and
our method is flexible enough to get the result in the non-homogeneous space
$H^{s_k}(\mathbb{R})$. A standard fixed point argument allows us to construct a unique solution in a subspace of $\dot{H}^{s_k}(\mathbb{R})$ with a continuous flow-map $u_0\mapsto u$.
Recall that Biagioni and Linares \cite{MR1837253} proved using solitary waves, that this map cannot be uniformly continuous in $\dot{H}^{s_k}(\mathbb{R})$.
In the surcritical case $s<s_k$, we also know that the solution-map (if it exists) fails to be $\mathcal{C}^{k+1}$ in $H^s(\mathbb{R})$, see \cite{MR2038121}.
\section{Notations and main results}
\subsection{Notations}
For $A$ and $B$ two positive numbers, we write $A\lesssim B$ if there exists $c>0$ such that $A\leq cB$. We define similarly $A\gtrsim B$ and $A\sim B$ by $A\geq cB$ and $A\lesssim B\lesssim A$, respectively. We write $A\ll B$ when $CA\leq B$ for a sufficiently large constant $C$. For any $f\in\mathcal{S}'(\mathbb{R})$, we use
$\mathcal{F} f$ or $\hat{f}$ to denote its Fourier transform. For $1\leq p\leq\infty$, $L^p$ is the standard Lebesgue space and its space-time versions
$L^p_xL^q_T$ and $L^q_TL^p_x$ ($T>0$) are endowed with the norms
$$\|f\|_{L^p_xL^q_T}=\big\|\|f\|_{L^q_t([-T;T])}\big\|_{L^p_x(\mathbb{R})}\ \textrm{ and
}\
\|f\|_{L^q_TL^p_x}=\big\|\|f\|_{L^p_x(\mathbb{R})}\big\|_{L^q_t([-T;T])}.$$
The
pseudo-differential operator $D^\alpha_x$ is defined by its
Fourier symbol $|\xi|^\alpha$. We will denote by $P_+$ and $P_-$ the
projection on respectively the positive and the negative spatial Fourier modes. Thus one has
$$i\mathcal{H}=P_+-P_-.$$
Let $\eta\in\mathcal{C}_0^\infty(\mathbb{R})$, $\eta\geq 0$, $\mathop{\rm supp}\nolimits\
\eta\subset\{1/2\leq |\xi|\leq 2\}$ with
$\sum_{j=-\infty}^\infty\eta(2^{-j}\xi)=1$ for $\xi\neq 0$. We set
$p(\xi)=\sum_{j\leq -3}\eta(2^{-j}\xi)$ and consider, for all
$j\in\mathbb{Z}$, the operator $Q_j$ defined by
\[Q_j(f)=\mathcal{F}^{-1}(\eta(2^{-j}\xi)\hat{f}(\xi)).\]
We adopt the following summation convention. Any summation of the form $r\lesssim j$, $r\gg j$,...
is a sum over the $r\in\mathbb{Z}$ such that $2^r\lesssim 2^j$..., thus for instance $\sum_{r\lesssim j}=\sum_{r : 2^r\lesssim 2^j}$.
We define then the operators $Q_{\lesssim j}=\sum_{r\lesssim j}Q_r$, $Q_{\ll
j}=\sum_{r\ll j}Q_r$, etc. For $1\leq p,q, r\leq \infty$ and $s\in\mathbb{R}$, let $\dot{\mathcal{B}}^{s,r}_p(L^q_T)$ be the homogeneous Besov space equipped with the norm
$$\|f\|_{\dot{\mathcal{B}}^{s,r}_p(L^q_T)} = \Big(\sum_{j\in\mathbb{Z}}[2^{js}\|Q_jf\|_{L^p_xL^q_T}]^r\Big)^{1/r}.$$
Finally for $s\in\mathbb{R}$ and $\theta\in[0,1]$, we define the solution space $\dot{\mathcal{S}}^{s,\theta}$ (where lives our solution $u$) and the nonlinear space
$\dot{\mathcal{N}}^{s,\theta}$ (where lives the nonlinear term $u^k\partial_xu$) by
$$\dot{\mathcal{S}}^{s,\theta}=\dot{\mathcal{B}}^{s+\frac{3\theta-1}{4},2}_{\frac{4}{1-\theta}}(L^{\frac{2}{\theta}}_T),\quad
\dot{\mathcal{N}}^{s,\theta}=\dot{\mathcal{B}}^{s+\frac{1-3\theta}{4},2}_{\frac{4}{3+\theta}}(L^{\frac{2}{2-\theta}}_T).$$
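For later reference (and with the obvious interpretation of the exponents at the endpoints), the values $\theta=1$ and $\theta=0$ give
$$\dot{\mathcal{S}}^{s,1}=\dot{\mathcal{B}}^{s+1/2,2}_{\infty}(L^{2}_T),\qquad
\dot{\mathcal{S}}^{s,0}=\dot{\mathcal{B}}^{s-1/4,2}_{4}(L^{\infty}_T),\qquad
\dot{\mathcal{N}}^{s,1}=\dot{\mathcal{B}}^{s-1/2,2}_{1}(L^{2}_T),$$
that is, Besov refinements of the smoothing, maximal function and dual smoothing norms; these are exactly the norms that will appear in the linear estimates of Section \ref{sec-lin}.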
\subsection{Main results}
We first state our well-posedness results in the case $k\geq 4$.
\begin{theorem}\label{th-hom} Let $k\geq 4$ and $u_0\in \dot{H}^{s_k}(\mathbb{R})$. There
exists $T=T(u_0)>0$ and a unique solution $u$ of (\ref{gBO}) such
that $u\in \dot{Z}_T$ with
$$\dot{Z}_T=\mathcal{C}([-T,+T],\dot{H}^{s_k}(\mathbb{R}))\cap \dot{X}^{s_k}\cap L^k_xL^\infty_T.$$
Moreover, the flow map $u_0\mapsto u$ is locally Lipschitz from $\dot{H}^{s_k}(\mathbb{R})$ to $\dot{Z}_T$.
\end{theorem}
\vskip 0.3cm
In the non-homogeneous case, one has the following result.
\begin{theorem}\label{th-nohom} Let $k\geq 4$ and $u_0\in {H}^{s}(\mathbb{R})$, $s\geq s_k$. There
exists $T=T(u_0)>0$ and a unique solution $u$ of (\ref{gBO}) such
that $u\in Z_T$ with
$$Z_T=\mathcal{C}([-T,+T],{H}^{s}(\mathbb{R}))\cap {X^s}\cap L^k_xL^\infty_T.$$
Moreover, the flow map $u_0\mapsto u$ is locally Lipschitz from $H^s(\mathbb{R})$ to $Z_T$.
\end{theorem}
\begin{remark} We only obtain the Lipschitz continuity of
the map $u_0\mapsto u$ in Theorems \ref{th-hom} and \ref{th-nohom} in $\dot{H}^{s_k}(\mathbb{R})$ (resp. $H^s(\mathbb{R})$).
As noticed in the introduction, the solution map given by Theorem \ref{th-hom} is not uniformly continuous from $\dot{H}^{s_k}(\mathbb{R})$ to
$\mathcal{C}([-T,T],\dot{H}^{s_k}(\mathbb{R}))$. Moreover, when $s<s_k$, the flow map in Theorem \ref{th-nohom} is no longer of class $\mathcal{C}^{k+1}$ in $H^s(\mathbb{R})$.
It is not clear whether the map given by Theorems \ref{th-hom} and \ref{th-nohom} is $\mathcal{C}^{k+1}$ or not.
\end{remark}
\begin{remark} The spaces $\dot{X}^{s_k}$ and $X^s$ will be defined in Section \ref{sec-lin} and are directly related with the linear estimates
for the linear Benjamin-Ono equation.
\end{remark}
\vskip 0.3cm
The main tools to prove Theorems \ref{th-hom} and \ref{th-nohom} are the sharp Kato smoothing effect and the maximal in time
inequality for the free solution $V(t)u_0$ where $V(t)=e^{it\mathcal{H}\partial_x^2}$. Recall that for regular solutions, (\ref{gBO}) is equivalent to its integral
formulation
\begin{equation}\label{eq-int}u(t)=V(t)u_0\mp\int_0^tV(t-t')(u^k(t')\partial_xu(t'))dt'.\end{equation}
It is worth noticing that (\ref{gBO}) provides a perfect balance between the derivative nonlinear term on one hand, and the available linear estimates on the other hand.
Heuristically, one may use (\ref{eq-int}) to write
\begin{align*}
\|D_x^{s_k+1/2}u\|_{L^\infty_xL^2_T}+\|u\|_{L^k_xL^\infty_T} &\lesssim \|u_0\|_{\dot{H}^{s_k}}+\|D_x^{s_k-1/2}\partial_x(u^{k+1})\|_{L^1_xL^2_T}\\
&\lesssim \|u_0\|_{\dot{H}^{s_k}}+\|D_x^{s_k+1/2}u\|_{L^\infty_xL^2_T}\|u\|_{L^k_xL^\infty_T}^k
\end{align*}
and perform a fixed point procedure. Unfortunately, such an argument fails for several reasons:
\begin{itemize}
\item First, it is not clear whether the second inequality holds true or not. Indeed, it would require the fractional Leibniz rule (see the Appendix in \cite{MR1211741}, \cite{MR2101982}) at the endpoints $L^p$, $p=1,\infty$. However, this inequality becomes true if
one works in the associated Besov spaces $\dot{\mathcal{B}}^{s_k+1/2,2}_\infty(L^2_T)\cap \dot{\mathcal{B}}^{0,2}_k(L^\infty_T)$ and provides
sharp well-posedness for small initial data, see \cite{MR2038121}.
\item The term $\|V(t)u_0\|_{L^k_xL^\infty_T}$ will be small only if $\|u_0\|_{\dot{H}^{s_k}}$ is small as well, even for small $T$.
Nevertheless, as noticed in \cite{MR2227135}, if we consider instead the difference $V(t)u_0-u_0$, then its $L^k_xL^\infty_T$-norm is
small provided we restrict ourselves to a small interval $[-T,T]$ (see Lemma \ref{lemsmall}).
\item We also need to get a better share of the derivative in the nonlinear term. By a standard paraproduct decomposition, we see that
the worst contribution in $\partial_xu^{k+1}$ is given by $\pi(u,u)$ where
$$\pi(f,g)=\sum_j\partial_xQ_j((Q_{\ll j}f)^kQ_{\sim j}g).$$
The main idea is then to inject this term (or more precisely $\pi(V(t)u_0,u)$)
in the linear part of the equation to get the variable-coefficient Schr\"{o}dinger equation
\begin{equation}\label{gBO2}\partial_tu+\mathcal{H}\partial_x^2u+\pi(V(t)u_0,u)=f\end{equation}
where $f$ will be a well-behaved term. Linear estimates for equation (\ref{gBO2}) are obtained by the localized gauge transform
$$w_j=e^{\frac i2\int_{-\infty}^x(Q_{\ll j}u_0)^k}P_+Q_ju,\quad j\in\mathbb{Z}.$$
\end{itemize}
\vskip 0.5cm
Now we turn to the case $k=3$. By similar considerations, we obtain the following result.
\begin{theorem}\label{th-keq3} Let $k= 3$ and $u_0\in {H}^{s}(\mathbb{R})$, $s>1/3$. There
exists $T=T(u_0)>0$ and a unique solution $u$ of (\ref{gBO}) such
that $u\in Z_T$ with
$$Z_T=\mathcal{C}([-T,+T],{H}^{s}(\mathbb{R}))\cap {X^s}\cap L^3_xL^\infty_T.$$
Moreover, the flow map $u_0\mapsto u$ is locally Lipschitz from $H^s(\mathbb{R})$ to $Z_T$.
\end{theorem}
This paper is organized as follows. In Section \ref{sec-lin}, we recall some sharp estimates related with the linear operator $V(t)$, and we derive linear estimates for equation
(\ref{gBO2}). Section \ref{sec-kgeq4} is devoted to the case $k\geq 4$. Finally, we prove Theorem \ref{th-keq3} in Section \ref{sec-keq3}.
\section{Linear estimates}\label{sec-lin}
\subsection{Estimates for the linear BO equation}
This section deals with the well-known linear estimates for the Benjamin-Ono equation. Note that all results
stated here hold as well for the Schr\"{o}dinger operator $S(t)=e^{it\partial_x^2}$.
The following lemma summarizes the main estimates related to the group $V(t)$. See for instance \cite{MR1101221,MR1086966} for the proof.
\begin{lemma}\label{lem-estlin} Let $\varphi\in\mathcal{S}(\mathbb{R})$, then
\begin{eqnarray}\label{est0}\|V(t)\varphi\|_{L^\infty_TL^2_x} &\lesssim
&\|\varphi\|_{L^2},\\
\label{est1}\|D^{1/2}_xV(t)\varphi\|_{L^\infty_x L^2_T} &\lesssim
&
\|\varphi\|_{L^2},\\
\label{est2}\|D^{-1/4}_xV(t)\varphi\|_{L^4_x L^\infty_T} &\lesssim
& \|\varphi\|_{L^2}.\end{eqnarray} Moreover, if $T\leq 1$ and $j\geq 0$,
\begin{align}
\|Q_{\leq 0}V(t)\varphi\|_{L^2_xL^\infty_T} &\lesssim \|Q_{\leq 0}\varphi\|_{L^2}\\
2^{-j/2}\|Q_jV(t)\varphi\|_{L^2_xL^\infty_T} &\lesssim \|Q_j\varphi\|_{L^2}
\end{align}
\end{lemma}
\begin{definition} A triplet $(\alpha,p,q)\in\mathbb{R}\times[2,\infty]^2$
is said to be
1-admissible if
$(\alpha,p,q)=(1/2,\infty,2)$ or \begin{equation}\label{1ad} 4\leq
p<\infty,\quad 2< q\leq\infty,\quad
\frac{2}{p}+\frac{1}{q}\leq\frac{1}{2},\quad
\alpha=\frac{1}{p}+\frac{2}{q}-\frac{1}{2}.
\end{equation}
\end{definition}
By Sobolev embedding and interpolation between estimates (\ref{est1}) and (\ref{est2}) we obtain the following result.
\begin{proposition}[\cite{MR2101982}]\label{admis} If $(\alpha,p,q)$ is 1-admissible, then for all $\varphi$ in
$\mathcal{S}(\mathbb{R})$,
\begin{equation}\label{in1ad}\|D^\alpha_xV(t)\varphi\|_{L^p_xL^q_T}\lesssim
\|\varphi\|_{L^2}.\end{equation}
\end{proposition}
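For later use, we record the admissibility check for the two non-trivial triplets that appear in the sequel (an elementary verification of (\ref{1ad})): for $(\alpha,p,q)=(-s_k,k,\infty)$ with $k\geq 4$,
$$\frac{2}{p}+\frac{1}{q}=\frac{2}{k}\leq\frac{1}{2},\qquad \frac{1}{p}+\frac{2}{q}-\frac{1}{2}=\frac{1}{k}-\frac{1}{2}=-s_k,$$
while for $(\alpha,p,q)=\big(\frac{3\varepsilon}{4}-s_k,(\frac{1}{k}-\frac{\varepsilon}{4})^{-1},\frac{2}{\varepsilon}\big)$ with $0<\varepsilon\ll 1$,
$$\frac{2}{p}+\frac{1}{q}=2\Big(\frac{1}{k}-\frac{\varepsilon}{4}\Big)+\frac{\varepsilon}{2}=\frac{2}{k}\leq\frac{1}{2},\qquad
\frac{1}{p}+\frac{2}{q}-\frac{1}{2}=\frac{1}{k}-\frac{\varepsilon}{4}+\varepsilon-\frac{1}{2}=\frac{3\varepsilon}{4}-s_k,$$
and the constraints $4\leq p<\infty$, $2<q\leq\infty$ hold in both cases. These triplets are used respectively in the proof of Lemma \ref{lem-nohombes} and at the end of the proof of Proposition \ref{prop-estlin}.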
Now we define our resolution spaces.
\begin{definition}
Let $k\geq 4$ and $s\in\mathbb{R}$ be fixed. For $0<\varepsilon\ll 1$, we define the spaces ${\dot{X}^{s}}=\dot{\mathcal{S}}^{s,\varepsilon}\cap\dot{\mathcal{S}}^{s,1}$
endowed with the norm
$$\|u\|_{\dot{X}^{s}}=\|u\|_{\dot{\mathcal{S}}^{s,\varepsilon}}+\|u\|_{\dot{\mathcal{S}}^{s,1}}.$$
\end{definition}
At this stage it is important to remark that ${\dot{X}^{s}}$ does not contain any $L^\infty_T$ component. As a consequence, for each
$u\in{\dot{X}^{s}}$ and $\eta>0$ fixed, we can choose $T=T(u)$ such that $\|u\|_{\dot{X}^{s}}<\eta$.
\vskip 0.3cm
In the case $k=3$, we shall require the following result which is not covered by Proposition \ref{admis}.
\begin{lemma}[\cite{MR2101982}]\label{lem-linl3}
Let $0<T\leq 1$ and $s>1/3$. Then it holds that
\begin{equation}\label{est3}\|V(t)\varphi\|_{L^3_xL^\infty_T}\lesssim
\|\varphi\|_{H^s},\quad \forall \varphi\in\mathcal{S}(\mathbb{R}).\end{equation}
\end{lemma}
We next state the $L^p_xL^q_T$ and $L^q_TL^p_x$ estimates for the linear operator $f\mapsto\int_0^tV(t-t')f(t')dt'$.
\begin{lemma}[\cite{MR2101982}]\label{lem-estnohom} Let $\alpha\in\mathbb{R}$,
and $2< p,q\leq \infty$
such that for all $\varphi\in\mathcal{S}(\mathbb{R})$,
\[\|D^{\alpha}_xV(t)\varphi\|_{L^{p}_xL^{q}_T}\lesssim
\|\varphi\|_{L^2}.\]
Then for all
$f\in\mathcal{S}(\mathbb{R}^2)$,
\begin{equation}\label{estnonhom}\Big\|D^{1/2}_x\int_0^tV(t-t')f(t')dt'\Big\|_{L^\infty_TL^2_x}\lesssim
\|f\|_{L^1_xL^{2}_T},\end{equation}
\begin{equation}\label{estnonhom2}\Big\|D^{\alpha+1/2}_x\int_0^tV(t-t')f(t')dt'\Big\|_{L^{p}_xL^{q}_T}
\lesssim
\|f\|_{L^{1}_xL^{2}_T}.\end{equation}
Similarly, if $$\|D_x^\alpha V(t)\varphi\|_{L^p_xL^q_T}\lesssim \|\varphi\|_{H^s}$$ for any $\varphi\in \mathcal{S}(\mathbb{R})$, then
\begin{equation}\label{estnonhom3}\Big\|D^{\alpha+1/2}_x\int_0^tV(t-t')f(t')dt'\Big\|_{L^{p}_xL^{q}_T}
\lesssim
\|\langle D_x\rangle^sf\|_{L^{1}_xL^{2}_T}.\end{equation}
\end{lemma}
We shall need the following Besov version of Lemma \ref{lem-estnohom}.
\begin{lemma}\label{lem-nohombes} Let $k\geq 4$.
For all $f\in\mathcal{S}(\mathbb{R}^2)$,
$$\Big\|\int_0^tV(t-t')f(t')dt'\Big\|_{L^k_xL^\infty_T}\lesssim \|f\|_{\dot{\mathcal{N}}^{s_k,1}}.$$
\end{lemma}
\begin{proof} Note that the triplets $(1/2,\infty,2)$ and $(-s_k,k,\infty)$ are both 1-admissible. In particular we deduce
$$\Big\|\int_{-T}^TD_x^{1/2}V(-t')h(t')dt'\Big\|_{L^2}\lesssim \|h\|_{L^1_xL^2_T},\quad \forall h\in \mathcal{S}(\mathbb{R}^2),$$
which is the dual estimate of (\ref{in1ad}) for $(\alpha,p,q)=(1/2,\infty,2)$. Since $L^2=\dot{\mathcal{B}}^{0,2}_2$, we infer
$$\Big\|\int_{-T}^TD_x^{1/2}V(-t')h(t')dt'\Big\|_{L^2}\lesssim \|h\|_{\dot{\mathcal{B}}^{0,2}_1(L^2_T)},\quad \forall h\in \mathcal{S}(\mathbb{R}^2).$$
The usual $TT^\ast$ argument provides
$$\Big\|\int_{-T}^TV(t-t')f(t')dt'\Big\|_{L^k_xL^\infty_T}\lesssim \|f\|_{\dot{\mathcal{B}}^{-1/k,2}_1(L^2_T)}.$$
We can conclude with the Christ-Kiselev lemma for reversed norms (Theorem B in \cite{MR2227135}).
\end{proof}
\subsection{Linear estimates for equation (\ref{gBO2})}
Here and hereafter we take $k\geq 4$; the special case $k=3$ will be discussed in Section \ref{sec-keq3}.
The next lemma will be crucial in the proof of our main results.
\begin{lemma}\label{lemsmall}
Let $k\geq 4$ and $u_0\in \dot{H}^{s_k}$. For any $\eta>0$, there exists $T=T(u_0)$ such that
$$\|V(t)u_0-u_0\|_{L^k_xL^\infty_T}<\eta.$$
\end{lemma}
\begin{proof} Let $N>0$ be a parameter to be chosen later. One has
\begin{align*}\|V(t)u_0-u_0\|_{L^k_xL^\infty_T} &\lesssim
\sum_{|j|<N}\|Q_j(V(t)u_0-u_0)\|_{L^k_xL^\infty_T}+\left(\sum_{|j|>N}\|Q_ju_0\|_{\dot{H}^{s_k}}^2\right)^{1/2}.
\end{align*}
Note
that $v=V(t)u_0-u_0$ solves the equation
$$\partial_tv+\mathcal{H}\partial_x^2v=-\mathcal{H}\partial^2_xu_0$$ with zero
initial data. Thus
$V(t)u_0-u_0=\int_0^tV(t-t')\mathcal{H}\partial_x^2u_0dt'$ and
\begin{align*}\sum_{|j|<N}\|Q_j(V(t)u_0-u_0)\|_{L^k_xL^\infty_T} &\lesssim
\sum_{|j|<N}2^{2j}\Big\|\int_0^tV(t')Q_j
u_0dt'\Big\|_{L^k_xL^\infty_T}\\
&\lesssim T\sum_{|j|<N}2^{2j}\|V(t)Q_ju_0\|_{L^k_xL^\infty_T}\\
&\lesssim T2^{2N}\|u_0\|_{\dot{H}^{s_k}}.
\end{align*}
It suffices now to
choose sufficiently large $N$ and then $T$ small enough.
\end{proof}
Let us turn back to the nonlinear equation (\ref{gBO}). The sign of the nonlinearity is irrelevant in the study of the local problem, and
we choose for convenience the plus sign.
Using standard paraproduct rearrangements, we can rewrite the nonlinear term in (\ref{gBO}) as follows:
\begin{align*}
\partial_xQ_j(u^{k+1}) &= \partial_xQ_j(\lim_{r\rightarrow\infty}(Q_{<r}u)^{k+1})\\
&= \partial_xQ_j\Big(\sum_{r=-\infty}^\infty (Q_{<r+1}u)^{k+1}-(Q_{<r}u)^{k+1}\Big)\\
&= \partial_xQ_j\Big(\sum_{r=-\infty}^\infty Q_ru (Q_{\lesssim r}u)^k\Big)\\
&= \partial_xQ_j\Big(\sum_{r\sim j}Q_ru(Q_{\ll r}u)^k\Big)+\partial_xQ_j\Big(\sum_{r\gtrsim j}(Q_{\sim r}u)^2(Q_{\lesssim r}u)^{k-1}\Big)\\
&= \partial_xQ_j((Q_{\ll j}u)^kQ_{\sim j}u)-g_j.
\end{align*}
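The third equality above is the telescoping identity (recorded here for completeness)
$$(Q_{<r+1}u)^{k+1}-(Q_{<r}u)^{k+1}=Q_ru\sum_{m=0}^{k}(Q_{<r+1}u)^{m}(Q_{<r}u)^{k-m},$$
where each factor $(Q_{<r+1}u)^{m}(Q_{<r}u)^{k-m}$ is, schematically, of the form $(Q_{\lesssim r}u)^{k}$.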
We set $$\pi(f,g)=\sum_j\partial_xQ_j((Q_{\ll j}f)^kQ_{\sim j}g)$$
so that (\ref{gBO}) reads
$$\partial_tu+\mathcal{H}\partial_x^2u+\pi(u,u)=g(t,x)$$
with $$g=\sum_jg_j.$$ Setting
$$f=\pi(u_L,u)-\pi(u,u)+g$$ where $u_L=V(t)u_0$ is
the solution of the free BO equation, we see that (\ref{gBO}) is equivalent to
\begin{equation}\label{gbo2}\partial_tu+\mathcal{H}\partial_x^2u+\pi(u_L,u)=f(t,x).\end{equation}
We intend to solve (\ref{gBO}) by a fixed point procedure on the Duhamel formulation of (\ref{gbo2}):
$$u(t)=U(t)u_0-\int_0^tU(t-t')f(t')dt',$$
where $U(t)\varphi$ is solution to
$$\partial_tu+\mathcal{H}\partial_x^2u+\pi(V(t)u_0,u)=0,\quad
u(0)=\varphi.$$ It is worth noticing that $U(t)$ depends on the data $u_0$.
\vskip 0.3cm
Setting $u_j=Q_j u$ and $f_j=Q_jf$, we get from
(\ref{gbo2}) that
$$\partial_t u_j+\mathcal{H}\partial_x^2u_j+\partial_x((u_{0,\ll j})^k
u_j)=\partial_x[((u_{0,\ll j})^k-(u_{L,\ll j})^k)
u_j]-\partial_x[Q_j,(u_{L,\ll j})^k]u_{\sim j}+f_j$$ and we will
denote by $R_j$ the right-hand side. Now take the positive
frequencies and set $v_j=P_+u_j$:
$$i\partial_t v_j+\partial_x^2 v_j+i\partial_x((u_{0,\ll j})^k
v_j)=iP_+R_j.$$ With $b_{\ll j}=\frac 12(u_{0,\ll j})^k$, we obtain
\begin{equation}\label{eq-vj}i\partial_t v_j+(\partial_x+ib_{\ll j})^2v_j= g_j\end{equation}
with \begin{equation}\label{gj}g_j=-i(\partial_xb_{\ll j})\,v_j-b_{\ll j}^2v_j+iP_+R_j.\end{equation}
\begin{lemma}\label{lem-vj} Let $v_j$ be a solution to (\ref{eq-vj}) with initial data $v_{0,j}\in\dot{H}^{s_k}\cap\dot{H}^s$. Then there exists
$C=C(u_0)$ such that
$$\|v_j\|_{\dot{X}^{s}}\leq C\|v_{0,j}\|_{\dot{H}^s}+C\|g_j\|_{\dot{\mathcal{N}}^{s,1}}.$$
\end{lemma}
\begin{proof} We define $w_j$ by $$w_j=e^{i\int^xb_{\ll j}}v_j.$$ Then we easily check that $w_j$ solves
$$i\partial_tw_j+\partial_x^2w_j=e^{i\int^xb_{\ll j}}g_j.$$
From the well-known linear estimates on the Schr\"{o}dinger equation (Lemmas \ref{lem-estlin}-\ref{lem-estnohom}) we infer
$$\|\partial_xw_j\|_{L^\infty_xL^2_T}\lesssim \|e^{i\int^xb_{\ll j}}v_{0,j}\|_{\dot{H}^{1/2}}+\|g_j\|_{L^1_xL^2_T}.$$
Since $\partial_xw_j=e^{i\int^xb_{\ll j}}(\partial_xv_j+b_{\ll j}v_j)$, we have
\begin{align*}\|\partial_xv_j\|_{L^\infty_xL^2_T} &\lesssim \|\partial_xw_j\|_{L^\infty_xL^2_T}+\|b_{\ll j}v_j\|_{L^\infty_xL^2_T}\\
&\lesssim \|\partial_xw_j\|_{L^\infty_xL^2_T}+2^{-j}\|b_{\ll j}\|_{L^\infty}\|\partial_xv_j\|_{L^\infty_xL^2_T}.
\end{align*}
On the other hand, we can make $2^{-j}\|(u_{0,\ll
j})^k\|_{L^\infty}$ as small as desired by choosing the implicit
constant $J=J(u_0)$ in $u_{0,\ll j}$ large enough:
$$2^{-j}\|(u_{0,<j-J})^k\|_{L^\infty}\lesssim
2^{-j}2^{j-J}\|u_0\|_{L^k}^k\lesssim c(u_0)2^{-J}\ll 1.$$
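For completeness, the first inequality above follows from Bernstein's inequality applied to $u_{0,<j-J}$, whose Fourier transform is supported in $\{|\xi|\lesssim 2^{j-J}\}$, combined with the embedding $\dot{H}^{s_k}(\mathbb{R})\hookrightarrow L^k(\mathbb{R})$, which ensures $\|u_0\|_{L^k}\lesssim\|u_0\|_{\dot{H}^{s_k}}<\infty$:
$$\|(u_{0,<j-J})^k\|_{L^\infty}=\|u_{0,<j-J}\|_{L^\infty}^k\lesssim\big(2^{(j-J)/k}\|u_{0,<j-J}\|_{L^k}\big)^k\lesssim 2^{j-J}\|u_0\|_{L^k}^k.$$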
It follows that $$\|\partial_xv_j\|_{L^\infty_xL^2_T}\lesssim \|e^{i\int^xb_{\ll j}}v_{0,j}\|_{\dot{H}^{1/2}}+\|g_j\|_{L^1_xL^2_T}.$$
We now use the fractional Leibniz rule (Theorem A.12 in \cite{MR1211741}) and Bernstein inequality to estimate the first term in the right-hand side,
\begin{align*}\|e^{i\int^xb_{\ll j}}v_{0,j}\|_{\dot{H}^{1/2}}& \lesssim \|e^{i\int^xb_{\ll j}}\|_{L^\infty}\|v_{0,j}\|_{\dot{H}^{1/2}}+
\|D_x^{1/2}e^{i\int^xb_{\ll j}}\|_{L^\infty}\|v_{0,j}\|_{L^2}\\
&\lesssim \|v_{0,j}\|_{\dot{H}^{1/2}}+\|(u_{0,\ll j})^k\|_{L^2}\|v_{0,j}\|_{L^2}\\
&\lesssim (1+\|u_0\|_{L^k}^k)\|v_{0,j}\|_{\dot{H}^{1/2}}
\end{align*}
Since $v_j$, $g_j$ as well as $v_{0,j}$ are frequency localized, we conclude
\begin{equation}\label{est-vj1}\|v_j\|_{\dot{\mathcal{B}}^{s+1/2,2}_\infty(L^2_T)} \lesssim \|v_{0,j}\|_{\dot{H}^s}+\|g_j\|_{\dot{\mathcal{B}}^{s-1/2,2}_1(L^2_T)}.\end{equation}
We also need $L^4_xL^\infty_T$-norm estimates. Our equation can be rewritten as
$$i\partial_t v_j+\partial_x^2v_j=g_j+h_j$$ with
$$h_j=b_{\ll j}^2v_j-i\partial_x(b_{\ll j}v_j)-ib_{\ll
j}\partial_xv_j.$$
Thus we get from Lemmas \ref{lem-estlin}-\ref{lem-estnohom} that
$$\|v_j\|_{\dot{\mathcal{B}}^{s-1/4,2}_4(L^\infty_T)}\lesssim
\|v_{0,j}\|_{\dot{H}^s}+\|g_j\|_{\dot{\mathcal{B}}^{s-1/2,2}_1(L^2_T)}+\|h_j\|_{\dot{\mathcal{B}}^{s-1/2,2}_1(L^2_T)}.$$
We bound the $h_j$ contribution with (\ref{est-vj1}): \begin{align*}\|b_{\ll
j}^2v_j\|_{\dot{\mathcal{B}}^{s-1/2,2}_1(L^2_T)} &\lesssim
2^{j(s-1/2)}\|b_{\ll j}^2\|_{L^1}\|v_j\|_{L^\infty_xL^2_T}\\
&\lesssim (2^{-j/2}\|b_{\ll
j}\|_{L^2})^2\|v_j\|_{\dot{\mathcal{B}}^{s+1/2,2}_\infty(L^2_T)}\\
&\lesssim \|b\|_{L^1}^2(\|v_{0,j}\|_{\dot{H}^s}+\|g_j\|_{\dot{\mathcal{B}}^{s-1/2,2}_1(L^2_T)}),
\end{align*}
and
\begin{align*}\|\partial_x(b_{\ll j}v_j)+b_{\ll
j}\partial_xv_j\|_{\dot{\mathcal{B}}^{s-1/2,2}_1(L^2_T)} &\lesssim 2^{j(s+1/2)}\|b_{\ll j}\|_{L^1}\|v_j\|_{L^\infty_xL^2_T}\\
& \lesssim \|b\|_{L^1}(\|v_{0,j}\|_{\dot{H}^s}+\|g_j\|_{\dot{\mathcal{B}}^{s-1/2,2}_1(L^2_T)}).
\end{align*}
Therefore,
\begin{equation}\label{est-vj0}\|v_j\|_{\dot{\mathcal{B}}^{s-1/4,2}_4(L^\infty_T)}\lesssim
\|v_{0,j}\|_{\dot{H}^s}+\|g_j\|_{\dot{\mathcal{B}}^{s-1/2,2}_1(L^2_T)}
\end{equation}
and the claim follows by interpolation between (\ref{est-vj0}) and (\ref{est-vj1}).
\end{proof}
We are now ready to prove the main linear estimate on equation (\ref{gbo2}).
\begin{proposition}\label{prop-estlin} Let $u$ be a solution of (\ref{gbo2}) with
initial data $u_0\in \dot{H}^s\cap \dot{H}^{s_k}$, $s\in\mathbb{R}$. Then
there exists $T=T(u_0)>0$ and $C=C(u_0)$ such that on $[-T,+T]$,
$$\|u\|_{\dot{X}^{s}}\leq C\|u_0\|_{\dot{H}^s}+C\|f\|_{\dot{\mathcal{N}}^{s,1}}.$$
\end{proposition}
\begin{proof}
Using that
$|P_+u_j|=|P_-u_j|$ (since $u$ is real) and Lemma \ref{lem-vj}, we infer
\begin{align*} \|u_j\|_{\dot{X}^{s}}\lesssim \|v_j\|_{\dot{X}^{s}} &\lesssim
\|Q_ju_0\|_{\dot{X}^{s}}+\|f_j\|_{\dot{\mathcal{N}}^{s,1}}+\|\partial_x (u_{0,\ll j})^k v_j\|_{\dot{\mathcal{N}}^{s,1}}+\|(u_{0,\ll j})^{2k}v_j\|_{\dot{\mathcal{N}}^{s,1}}\\ &\quad
+\left\|\partial_x[((u_{0,\ll j})^k-(u_{L,\ll j})^k)u_j]\right\|_{\dot{\mathcal{N}}^{s,1}}+
\left\|\partial_x[Q_j,(u_{L,\ll
j})^k]u_{\sim j}\right\|_{\dot{\mathcal{N}}^{s,1}}\\ &= \|Q_ju_0\|_{\dot{X}^{s}}+\|f_j\|_{\dot{\mathcal{N}}^{s,1}}+A+B+C+D.
\end{align*}
We bound $A$ by \begin{align*}A &\lesssim
2^{j(s-1/2)}\|\partial_x(u_{0,\ll j})^kv_j\|_{L^1_xL^2_T}\lesssim
2^{-j}\|\partial_x(u_{0,\ll
j})^k\|_{L^1}2^{j(s+1/2)}\|v_j\|_{L^\infty_xL^2_T}\\ &\lesssim
2^{-j}\|\partial_x(u_{0,\ll j})^k\|_{L^1}\|v_j\|_{\dot{X}^{s}}.
\end{align*}
As previously, $2^{-j}\|\partial_x(u_{0,\ll
j})^k\|_{L^1}$ can be made as small as needed by choosing the implicit
constant $J=J(u_0)$ in $u_{0,\ll j}$ large enough:
$$2^{-j}\|\partial_x(u_{0,<j-J})^k\|_{L^1}\lesssim
2^{-j}2^{j-J}\|u_{0}\|_{L^k}^k\lesssim c(u_0)2^{-J}\ll 1.$$ One proceeds similarly for $B$:
\begin{align*}B &\lesssim 2^{j(s-1/2)}\|(u_{0,\ll j})^{2k}v_j\|_{L^1_xL^2_T}\\ &\lesssim
2^{-j}\|(u_{0,\ll j})^k\|_{L^\infty}\|u_{0}\|_{L^{k}}^{k} 2^{j(s+1/2)}\|v_j\|_{L^\infty_xL^2_T}
\\ & \ll \|v_j\|_{\dot{X}^{s}}.
\end{align*}
Now we
estimate $C$:
\begin{align*}
C &\lesssim 2^{j(s-1/2)}\|\partial_x[((u_{0,\ll j})^k-(u_{L,\ll j})^k) u_j]\|_{L^1_xL^2_T}\\ &\lesssim
2^{j(s+1/2)}\|(u_{0,\ll j})^k-(u_{L,\ll j})^k\|_{L^1_xL^\infty_T}\|u_j\|_{L^\infty_xL^2_T}\\ &\lesssim
\|u_0-u_L\|_{L^k_xL^\infty_T}(\|u_0\|_{L^k}^{k-1}+\|u_L\|_{L^k_xL^\infty_T}^{k-1})\|u_j\|_{\dot{X}^{s}}\\ &\ll\|u_j\|_{\dot{X}^{s}}
\end{align*}
by Lemma \ref{lemsmall}. Finally we deal with term $D$. By the commutator lemma
(Lemma 2.4 in \cite{MR2227135}), we get
\begin{align*}
D &\lesssim 2^{j(s-1/2)}\|\partial_x[Q_j, (u_{L,\ll j})^k] u_j\|_{L^1_xL^2_T}\\
&\lesssim 2^{j(s-1/2)}2^{-j}\|\partial_x(u_{L,\ll j})^k\|_{L^{\frac{4}{4-\varepsilon}}_xL^{\frac 2\varepsilon}_T}\|\partial_xu_j\|_{L^{\frac 4\varepsilon}_xL^{\frac 2{1-\varepsilon}}_T}\\
&\lesssim 2^{-j}2^{3j\varepsilon/4}\|\partial_x(u_{L,\ll j})^k\|_{L^{\frac{4}{4-\varepsilon}}_xL^{\frac 2\varepsilon}_T} \|u_j\|_{\dot{\mathcal{S}}^{s,1-\varepsilon}}\\
&\lesssim 2^{-j}2^{3j\varepsilon/4} \|\partial_x u_{L,\ll j}\|_{L^{(\frac 1k-\frac \varepsilon 4)^{-1}}_xL^{\frac 2\varepsilon}_T} \|u_{L,\ll j}\|_{L^k_xL^\infty_T}^{k-1}\|u_j\|_{\dot{X}^{s}}\\
&\lesssim \|D_x^{3\varepsilon/4}u_{L,\ll j}\|_{L^{(\frac 1k-\frac \varepsilon 4)^{-1}}_xL^{\frac 2\varepsilon}_T}\|u_j\|_{\dot{X}^{s}}.
\end{align*}
Since the triplet $(\frac{3\varepsilon}{4}-s_k,(\frac 1k-\frac \varepsilon 4)^{-1},\frac 2\varepsilon)$ is
1-admissible, for any $\eta>0$, we can choose $T>0$ small enough such that
$$\|D_x^{3\varepsilon/4}u_{L}\|_{L^{(\frac 1k-\frac \varepsilon
4)^{-1}}_xL^{\frac 2\varepsilon}_T}<\eta.$$ Gathering all these estimates
we infer $$\|u_j\|_{\dot{X}^{s}}\lesssim
\|Q_ju_0\|_{\dot{X}^{s}}+\|f_j\|_{\dot{\mathcal{N}}^{s,1}}.$$
Summing this inequality over $j$ finishes the proof of Proposition \ref{prop-estlin}.
\end{proof}
We also need $L^k_xL^\infty_T$-norm estimates.
\begin{proposition}\label{prop-lkli} Let $u$ be a solution of (\ref{gbo2}) with
initial data $u_0\in\dot{H}^{s_k}$. Then there exists $T>0$ and
$C=C(u_0)$ such that
$$\|u\|_{L^k_xL^\infty_T}\leq
C\|u_0\|_{\dot{H}^{s_k}}+C\|f\|_{\dot{\mathcal{N}}^{s_k,1}}.$$ Moreover, if $u_0\in \dot{H}^s\cap\dot{H}^{s_k}$, $s\in \mathbb{R}$, then
\begin{equation}\label{est-lihs}\|u\|_{L^\infty_T\dot{H}^{s}_x}\leq
C\|u_0\|_{\dot{H}^{s}}+C\|f\|_{\dot{\mathcal{N}}^{s,1}}.\end{equation}
\end{proposition}
\begin{proof} We can rewrite our equation as
$$u=u_L-\int_0^tV(t-t')(f-\pi(u_L,u))dt'.$$
By virtue of Lemma \ref{lem-nohombes} and Lemma \ref{lem-estnohom},
we deduce
$$\|u\|_{L^k_xL^\infty_T}\lesssim
\|u_0\|_{\dot{H}^{s_k}}+\|f\|_{\dot{\mathcal{N}}^{s_k,1}}+\|\pi(u_L,u)\|_{\dot{\mathcal{N}}^{s_k,1}},$$
$$\|u\|_{L^\infty_T\dot{H}^{s}_x} \lesssim
\|u_0\|_{\dot{H}^{s}}+\|f\|_{\dot{\mathcal{N}}^{s,1}}+\|\pi(u_L,u)\|_{\dot{\mathcal{N}}^{s,1}}.$$
Then we get
\begin{align*}
\|\pi(u_L,u)\|_{\dot{\mathcal{N}}^{s,1}} &\lesssim \Big(\sum_j\big[2^{j(s+1/2)}\|(u_{L,\ll j})^ku_{\sim j}\|_{L^1_xL^2_T}\big]^2\Big)^{1/2}\\
&\lesssim \|u_L\|_{L^k_xL^\infty_T}^k\Big(\sum_j\big[2^{j(s+1/2)}\|u_{\sim j}\|_{L^\infty_xL^2_T}\big]^2\Big)^{1/2}\\
&\lesssim \|u_0\|_{\dot{H}^{s_k}}^k\|u\|_{\dot{X}^{s}}\\
&\lesssim C(u_0)(\|u_0\|_{\dot{H}^{s}}+\|f\|_{\dot{\mathcal{N}}^{s,1}})
\end{align*}
by Proposition \ref{prop-estlin}.
\end{proof}
\section{Well-posedness for $k\geq 4$}\label{sec-kgeq4}
\subsection{Nonlinear estimates}
Now we estimate the right-hand side of (\ref{gbo2}) in
$\dot{\mathcal{N}}^{s,1}$-norm.
\begin{proposition}\label{prop-estnl}
For any $u\in{\dot{X}^{s}}\cap L^k_xL^\infty_T$, we have
$$\|\pi(u_L,u)-\pi(u,u)\|_{\dot{\mathcal{N}}^{s,1}}\lesssim
\|u_L-u\|_{L^k_xL^\infty_T}\big(\|u_L\|_{L^k_xL^\infty_T}^{k-1}+\|u\|_{L^k_xL^\infty_T}^{k-1}\big)\|u\|_{\dot{X}^{s}}$$
and
$$\|g\|_{\dot{\mathcal{N}}^{s,1}}\lesssim \|u\|_{L^k_xL^\infty_T}^{k-1}\|u\|_{\dot{X}^{s}}^2.$$
\end{proposition}
\begin{proof} Set $u_j=Q_ju$, $u_{\ll j}=Q_{\ll j}u$, etc. Then:
\begin{align*}
\|\pi(u_L,u)-\pi(u,u)\|_{\dot{\mathcal{N}}^{s,1}} &\lesssim \Big(\sum_j\big[2^{j(s-1/2)}\|\partial_x[((u_{L,\ll j})^k-(u_{\ll j})^k)u_{\sim j}]\|_{L^1_xL^2_T}\big]^2\Big)^{1/2}\\
&\lesssim \Big(\sum_j\big[2^{j(s+1/2)}\|(u_{L,\ll j})^k-(u_{\ll j})^k\|_{L^1_xL^\infty_T}\|u_{\sim j}\|_{L^\infty_xL^2_T}\big]^2\Big)^{1/2}\\
& \lesssim
\|u_L-u\|_{L^k_xL^\infty_T}\big(\|u_L\|_{L^k_xL^\infty_T}^{k-1}+\|u\|_{L^k_xL^\infty_T}^{k-1}\big)\|u\|_{\dot{X}^{s}}.
\end{align*}
We bound the second term by
\begin{align*}
\|g\|_{\dot{\mathcal{N}}^{s,1}} &\lesssim \Big(\sum_j\big[2^{j(s+1/2)}\sum_{r\gtrsim j}\|(u_{\sim r})^2(u_{\lesssim r})^{k-1}\|_{L^1_xL^2_T}\big]^2\Big)^{1/2}\\
&\lesssim \Big(\sum_j\big[\sum_{r\gtrsim j}2^{j(s+1/2)}\|u_{\sim r}\|_{L^{\frac 4\varepsilon}_xL^{\frac{2}{1-\varepsilon}}_T}
\|u_{\sim r}\|_{L^{(\frac 1k-\frac \varepsilon{4})^{-1}}_xL^{\frac 2\varepsilon}_T} \|u_{\lesssim r}\|_{L^k_xL^\infty_T}^{k-1}\big]^2\Big)^{1/2}\\
&\lesssim \|u\|_{L^k_xL^\infty_T}^{k-1}\sup_r2^{3\varepsilon r/4}\|u_{\sim r}\|_{L^{(\frac 1k-\frac \varepsilon{4})^{-1}}_xL^{\frac 2\varepsilon}_T}\\
&\quad \times
\Big(\sum_j\big[\sum_{r\gtrsim j}(2^{(j-r)(s+1/2)})(2^{r(s+1/2-3\varepsilon/4)}\|u_{\sim r}\|_{L^{\frac 4\varepsilon}_xL^{\frac 2{1-\varepsilon}}_T})\big]^2\Big)^{1/2}\\
&\lesssim \|u\|_{L^k_xL^\infty_T}^{k-1}\|u\|_{\dot{S}^{s,\varepsilon}}\Big(\sum_{j\leq 0}2^{j(s+1/2)}\Big)\Big(\sum_j\big[2^{j(s+1/2-3\varepsilon/4)}\|u_{\sim j}\|_
{L^{\frac 4\varepsilon}_xL^{\frac 2{1-\varepsilon}}_T}\big]^2\Big)^{1/2}\\
&\lesssim \|u\|_{L^k_xL^\infty_T}^{k-1}\|u\|_{\dot{X}^{s}}^2
\end{align*}
where we used the discrete Young inequality.
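To spell out that last step: up to an irrelevant finite shift in the summation range, the inner sum is the discrete convolution $(c*a)_j$ with $c_m=2^{m(s+1/2)}\mathbf{1}_{\{m\le 0\}}$ and $a_r=2^{r(s+1/2-3\varepsilon/4)}\|u_{\sim r}\|_{L^{\frac 4\varepsilon}_xL^{\frac 2{1-\varepsilon}}_T}$, so Young's inequality on $\mathbb{Z}$ gives
$$\Big(\sum_j\big[(c*a)_j\big]^2\Big)^{1/2}\le \Big(\sum_{m\le 0}2^{m(s+1/2)}\Big)\Big(\sum_r a_r^2\Big)^{1/2},$$
the first factor being finite since $s+\frac12>0$, an assumption implicit in the factor $\sum_{j\le 0}2^{j(s+1/2)}$ above.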
\end{proof}
\subsection{Existence in $\dot{H}^{s_k}(\mathbb{R})$}
Consider the map $F$ defined as
$$F(u)=U(t)u_0-\int_0^tU(t-t')f(t')dt'.$$
We shall show that $F$ is a contraction on the
intersection of two balls:
$$B_M(u_0,T)=\{u\in \dot{X}^{s_k}\cap L^k_xL^\infty_T :
\|u-u_0\|_{L^k_xL^\infty_T}\leq\delta\}$$ and
$$B_S(u_0,T)=\{u\in \dot{X}^{s_k}\cap L^k_xL^\infty_T :
\|u\|_{\dot{X}^{s_k}}\leq\delta\}$$ endowed with the norm
$$\|u\|_{\dot{Y}_T}=\|u\|_{\dot{X}^{s_k}}+\|u\|_{L^k_xL^\infty_T}.$$ Gathering
Propositions \ref{prop-estlin}, \ref{prop-lkli} and
\ref{prop-estnl} (with $s=s_k$) we find that there
exists $C=C(u_0)>1$ such that
\begin{multline*}\|F(u)\|_{\dot{X}^{s_k}}\leq
C\|U(t)u_0\|_{\dot{X}^{s_k}}
+C(1+\|u-u_0\|_{L^k_xL^\infty_T}^{k-1})\|u\|_{\dot{X}^{s_k}}^2\\
+C(\|u_L-u_0\|_{L^k_xL^\infty_T}+\|u-u_0\|_{L^k_xL^\infty_T})(1+\|u-u_0\|_{L^k_xL^\infty_T}^{k-1})\|u\|_{\dot{X}^{s_k}}
\end{multline*}
and
\begin{multline*}\|F(u)-u_0\|_{L^k_xL^\infty_T}\leq
\|U(t)u_0-u_0\|_{L^k_xL^\infty_T}
+C(1+\|u-u_0\|_{L^k_xL^\infty_T}^{k-1})\|u\|_{\dot{X}^{s_k}}^2\\
+C(\|u_L-u_0\|_{L^k_xL^\infty_T}+\|u-u_0\|_{L^k_xL^\infty_T})(1+\|u-u_0\|_{L^k_xL^\infty_T}^{k-1})\|u\|_{\dot{X}^{s_k}}.
\end{multline*}
We can choose $T=T(u_0)$ small enough so that the quantities
$\|U(t)u_0\|_{\dot{X}^{s_k}}$, $\|u_L-u_0\|_{L^k_xL^\infty_T}$ and
$\|U(t)u_0-u_0\|_{L^k_xL^\infty_T}$ are smaller than $\varepsilon=\frac
1{128C^2}$. Thus if $u\in B_M\cap B_S$, then
$$\|F(u)\|_{\dot{X}^{s_k}}\leq 4C\varepsilon+4C\delta^2$$
and
$$\|F(u)-u_0\|_{L^k_xL^\infty_T}\leq 4C\varepsilon+4C\delta^2.$$
Now we take $\delta=\frac 1{8C}$ so that $F(u)$ belongs to $B_M\cap B_S$.
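Indeed, with these choices of $\varepsilon$ and $\delta$ (and $C>1$), a direct computation gives
$$4C\varepsilon+4C\delta^2=\frac{4C}{128C^2}+\frac{4C}{64C^2}=\frac{1}{32C}+\frac{1}{16C}=\frac{3}{32C}\le\frac{1}{8C}=\delta,$$
so that $\|F(u)\|_{\dot{X}^{s_k}}\leq\delta$ and $\|F(u)-u_0\|_{L^k_xL^\infty_T}\leq\delta$.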
In the same way, for any $u_1$ and $u_2$ in $B_M\cap B_S$, one has
\begin{align}
\notag \|F(u_1)-F(u_2)\|_{\dot{Y}_T} &\lesssim \|f(u_1)-f(u_2)\|_{\dot{\mathcal{N}}^{s_k,1}}\\
\notag &\lesssim \|u_L-u_1\|_{L^k_xL^\infty_T}(1+\|u_1\|_{L^k_xL^\infty_T}^{k-1})\|u_1-u_2\|_{\dot{X}^{s_k}}\\ \notag &\quad
+\|u_2\|_{\dot{X}^{s_k}}(\|u_1\|_{L^k_xL^\infty_T}^{k-1}+\|u_2\|_{L^k_xL^\infty_T}^{k-1})\|u_1-u_2\|_{L^k_xL^\infty_T}\\ \notag
&\quad +\|u_1\|_{\dot{X}^{s_k}}^2(\|u_1\|_{L^k_xL^\infty_T}^{k-2}+\|u_2\|_{L^k_xL^\infty_T}^{k-2})\|u_1-u_2\|_{L^k_xL^\infty_T}\\ \label{est-uni} &\quad
+\|u_2\|_{L^k_xL^\infty_T}^{k-1}(\|u_1\|_{\dot{X}^{s_k}}+\|u_2\|_{\dot{X}^{s_k}})\|u_1-u_2\|_{\dot{X}^{s_k}}.
\end{align}
Therefore, $$\|F(u_1)-F(u_2)\|_{\dot{Y}_T} \lesssim
(\varepsilon+\delta)\|u_1-u_2\|_{\dot{Y}_T}$$ and for $\varepsilon,\delta$ small
enough, $F : B_M\cap B_S\rightarrow B_M\cap B_S$ is a contraction.
By the Banach fixed point theorem, there exists a solution $u$ in $B_M\cap B_S$.
The next step is to show that $u\in \mathcal{C}([-T,+T],\dot{H}^{s_k}(\mathbb{R}))$. Using
(\ref{est-lihs}) and Proposition \ref{prop-estnl}, we obtain that
$u\in L^\infty_T\dot{H}^{s_k}_x$. For any $t_1,t_2\in [0,T]$ with
$t_1<t_2$, writing $u(t)$ as
$$u(t)=V(t-t_1)u(t_1)-\int_{t_1}^tV(t-t')\partial_xu^{k+1}(t')dt',$$
we get
\begin{align*}\|u(t_1)-u(t_2)\|_{\dot{H}^{s_k}} &\lesssim
\sup_{t\in[t_1,t_2]}\|u(t)-u(t_1)\|_{\dot{H}^{s_k}}\\
&\lesssim \sup_{t\in[t_1,t_2]}\|u(t_1)-V(t-t_1)u(t_1)\|_{\dot{H}^{s_k}}\\ &\quad+\Big\|\int_{t_1}^tV(t-t')\partial_x u^{k+1}(t')dt'\Big\|_{L^\infty(t_1,t_2;\dot{H}^{s_k})}\\
&\rightarrow 0
\end{align*}
as $t_1\rightarrow t_2$.
Now consider two initial data $u_{0,1},u_{0,2}\in \dot{H}^{s_k}$, and
$u_1,u_2\in \dot{Z}_T$ satisfying
$$u_1(t)=U_1(t)u_{0,1}-\int_0^tU_1(t-t')f_1(u_1)(t')dt',$$
$$u_2(t)=U_2(t)u_{0,2}-\int_0^tU_2(t-t')f_2(u_2)(t')dt',$$
where $U_j(t)\varphi$ is the solution to
$$\partial_tu+\mathcal{H}\partial_x^2u+\pi(V(t)u_{0,j},u)=0,\quad
u(0)=\varphi$$ and $f_j$ is defined by
$$f_j(u)=\pi(V(t)u_{0,j},u)-\pi(u,u)+g(u).$$
We intend to show that there exists a nondecreasing polynomial function $P\geq 1$ such that
\begin{multline}\label{est-diff}\|u_1-u_2\|_{\dot{Z}_T}\lesssim P(\|u_1\|_{\dot{Z}_T}+\|u_2\|_{\dot{Z}_T})\big[\|u_{0,1}-u_{0,2}\|_{\dot{H}^{s_k}}\\
+(\|u_1\|_{\dot{X}^{s_k}}+\|u_2\|_{\dot{X}^{s_k}})\|u_1-u_2\|_{\dot{Z}_T}\big]
\end{multline}
where the implicit constant in the inequality may depend on $u_{0,1}$ and $u_{0,2}$. Clearly, the uniqueness of the solution to (\ref{gBO}) and the fact that
the flow map is locally Lipschitz from $\dot{H}^{s_k}(\mathbb{R})$ to $\dot{Z}_T$ follow directly from (\ref{est-diff}).
One has
$$\|U_1(t)u_{0,1}-U_2(t)u_{0,2}\|_{\dot{Z}_T}\lesssim
\|U_1(t)(u_{0,1}-u_{0,2})\|_{\dot{Z}_T}+\|(U_1(t)-U_2(t))u_{0,2}\|_{\dot{Z}_T}.$$
The first term in the right-hand side is bounded by
$\|u_{0,1}-u_{0,2}\|_{\dot{H}^{s_k}}$. To treat the second one, we
note that $(U_1(t)-U_2(t))u_{0,2}$ is the solution to
$$\partial_tu+\mathcal{H}\partial_x^2u+\pi(V(t)u_{0,1},u)=\pi(V(t)u_{0,1},U_2(t)u_{0,2})-\pi(V(t)u_{0,2},U_2(t)u_{0,2})$$
with zero initial data. Hence by Propositions \ref{prop-estlin} and \ref{prop-lkli}, \begin{align*}\|(U_1(t)-U_2(t))u_{0,2}\|_{\dot{Z}_T}
&\lesssim
\|\pi(V(t)u_{0,1},U_2(t)u_{0,2})-\pi(V(t)u_{0,2},U_2(t)u_{0,2})\|_{\dot{\mathcal{N}}^{s_k,1}}\\
&\lesssim \|u_{0,1}-u_{0,2}\|_{\dot{H}^{s_k}}.
\end{align*}
We also need to bound
\begin{align}\notag &\Big\|\int_0^t(U_1(t-t')f_1(u_1)-U_2(t-t')f_2(u_2))dt'\Big\|_{\dot{Z}_T}\\
\label{est-lip1} & \lesssim
\Big\|\int_0^tU_1(t-t')(f_1(u_1)-f_1(u_2))dt'\Big\|_{\dot{Z}_T}\\
\label{est-lip2}
&\quad+\Big\|\int_0^tU_1(t-t')(f_1(u_2)-f_2(u_2))dt'\Big\|_{\dot{Z}_T}\\
\label{est-lip3}
&\quad+ \Big\|\int_0^t(U_1(t-t')-U_2(t-t'))f_2(u_2)dt'\Big\|_{\dot{Z}_T}.
\end{align}
Term (\ref{est-lip1}) is bounded by
$$(\ref{est-lip1}) \lesssim \|f_1(u_1)-f_1(u_2)\|_{\dot{\mathcal{N}}^{s_k,1}}$$
and we can use (\ref{est-uni}) to get the desired estimate.
Term (\ref{est-lip2}) is bounded by \begin{align*}(\ref{est-lip2})
&\lesssim
\|\pi(V(t)u_{0,1},u_2)-\pi(V(t)u_{0,2},u_2)\|_{\dot{\mathcal{N}}^{s_k,1}}\\
&\lesssim
\|u_2\|_{\dot{X}^{s_k}}\|u_{0,1}-u_{0,2}\|_{\dot{H}^{s_k}}.
\end{align*}
Finally, note that $\int_0^t(U_1(t-t')-U_2(t-t'))f_2(u_2)dt'$ is the solution to
$$\partial_tu+\mathcal{H}\partial_x^2u+\pi(V(t)u_{0,1},u)=\pi(V(t)u_{0,2},\psi)-\pi(V(t)u_{0,1},\psi),$$
with zero initial data, and where $\psi=\int_0^tU_2(t-t')f_2(u_2)dt'$. It follows that
\begin{align*}(\ref{est-lip3}) &\lesssim
\|\pi(V(t)u_{0,2},\psi)-\pi(V(t)u_{0,1},\psi)\|_{\dot{\mathcal{N}}^{s_k,1}}\\
&\lesssim \|\psi\|_{\dot{X}^{s_k}} \|u_{0,1}-u_{0,2}\|_{\dot{H}^{s_k}}\\
&\lesssim (\|u_2\|_{\dot{X}^{s_k}}+\|u_2\|_{\dot{X}^{s_k}}^{k+1})\|u_{0,1}-u_{0,2}\|_{\dot{H}^{s_k}}.
\end{align*}
Gathering all these estimates we obtain (\ref{est-diff}).
\subsection{Existence in ${H}^{s}(\mathbb{R})$, $s\geq s_k$}
Define the spaces ${X^s}=\dot{X}^{0}\cap{\dot{X}^{s}}$ and
$\mathcal{N}^{s,\theta}=\dot{\mathcal{N}}^{0,\theta}\cap\dot{\mathcal{N}}^{s,\theta}$.
We closely follow the proof of Theorem \ref{th-hom}. We show that $F$ is a contraction in the intersection of
$$B_M(u_0,T)=\{u\in {X}^{s}\cap L^k_xL^\infty_T :
\|u-u_0\|_{L^k_xL^\infty_T}\leq\delta\}$$ and
$$B_S(u_0,T)=\{u\in {X}^{s}\cap L^k_xL^\infty_T :
\|u\|_{{X}^{s}}\leq\delta\}$$ endowed with the norm
$$\|u\|_{Y_T}=\|u\|_{{X}^{s}}+\|u\|_{L^k_xL^\infty_T}.$$
Using Propositions \ref{prop-estlin}, \ref{prop-lkli} and
\ref{prop-estnl} (applied with $s\geq s_k$ and $s=0$) and the embedding $\mathcal{N}^{s,1}\hookrightarrow \dot{\mathcal{N}}^{s_k,1}$ for $s\geq s_k$ we find
\begin{multline*}\|F(u)\|_{{X}^{s}}\leq
C\|U(t)u_0\|_{{X}^{s}}
+C(1+\|u-u_0\|_{L^k_xL^\infty_T}^{k-1})\|u\|_{{X}^{s}}^2\\
+C(\|u_L-u_0\|_{L^k_xL^\infty_T}+\|u-u_0\|_{L^k_xL^\infty_T})(1+\|u-u_0\|_{L^k_xL^\infty_T}^{k-1})\|u\|_{{X}^{s}}
\end{multline*}
and
\begin{multline*}\|F(u)-u_0\|_{L^k_xL^\infty_T}\leq
\|U(t)u_0-u_0\|_{L^k_xL^\infty_T}
+C(1+\|u-u_0\|_{L^k_xL^\infty_T}^{k-1})\|u\|_{{X}^{s}}^2\\
+C(\|u_L-u_0\|_{L^k_xL^\infty_T}+\|u-u_0\|_{L^k_xL^\infty_T})(1+\|u-u_0\|_{L^k_xL^\infty_T}^{k-1})\|u\|_{{X}^{s}}.
\end{multline*}
In the same way, one may show that
\begin{align*}
\|F(u_1)-F(u_2)\|_{Y_T} &\lesssim \|f(u_1)-f(u_2)\|_{{\mathcal{N}}^{s,1}}\\
&\lesssim \|u_L-u_1\|_{L^k_xL^\infty_T}(1+\|u_1\|_{L^k_xL^\infty_T}^{k-1})\|u_1-u_2\|_{{X}^{s}}\\ &\quad
+\|u_2\|_{{X}^{s}}(\|u_1\|_{L^k_xL^\infty_T}^{k-1}+\|u_2\|_{L^k_xL^\infty_T}^{k-1})\|u_1-u_2\|_{L^k_xL^\infty_T}\\
&\quad +\|u_1\|_{{X}^{s}}^2(\|u_1\|_{L^k_xL^\infty_T}^{k-2}+\|u_2\|_{L^k_xL^\infty_T}^{k-2})\|u_1-u_2\|_{L^k_xL^\infty_T}\\ &\quad
+\|u_2\|_{L^k_xL^\infty_T}^{k-1}(\|u_1\|_{{X}^{s}}+\|u_2\|_{{X}^{s}})\|u_1-u_2\|_{{X}^{s}}.
\end{align*}
This proves the existence in $H^s(\mathbb{R})$. The end of the proof is identical to that of Theorem \ref{th-hom}.
\section{Well-posedness for $k=3$}\label{sec-keq3}
Let $k=3$ and $s>1/3$ be fixed.
The scheme of the proof is the same as for the case $k\geq 4$ with minor modifications. First, in view of Lemma \ref{lem-linl3},
it is clear that Lemma \ref{lemsmall} holds for $k=3$ with $u_0\in \dot{H}^{s_k}$ replaced by $u_0\in H^s$. Next we see that the
$\dot{\mathcal{B}}^{\frac{3\varepsilon}{4},2}_{(\frac{1}{k}-\frac{\varepsilon}{4})^{-1}}(L^{\frac 2\varepsilon}_T)$-norm which appears in Proposition \ref{prop-estnl} when
estimating the nonlinear term $g$ is not bounded by the $\dot{S}^{\varepsilon,1}$-norm for $k=3$. So we slightly modify the space $X^s$ by setting
$$X^s=\dot{X}^0\cap \dot{X}^s\cap \dot{\mathcal{B}}^{\varepsilon,2}_3(L^{\frac{2}{\varepsilon}}_T).$$
On one hand, it is clear from Sobolev inequalities that
$$\|u\|_{\dot{\mathcal{B}}^{\frac{3\varepsilon}{4},2}_{(\frac{1}{3}-\frac{\varepsilon}{4})^{-1}}(L^{\frac 2\varepsilon}_T)} \lesssim \|u\|_{\dot{\mathcal{B}}^{\varepsilon,2}_3(L^{\frac{2}{\varepsilon}}_T)}
\lesssim \|u\|_{X^s}.$$
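Let us sketch the first inequality, assuming the usual Bernstein inequality for the dyadic blocks $Q_j$: with $\frac1q=\frac13-\frac\varepsilon4$, applying Bernstein in the $x$ variable (uniformly in $t$) gives
$$2^{\frac{3\varepsilon}{4}j}\|Q_ju\|_{L^{q}_xL^{\frac2\varepsilon}_T}\lesssim 2^{\frac{3\varepsilon}{4}j}\,2^{j(\frac13-\frac1q)}\|Q_ju\|_{L^{3}_xL^{\frac2\varepsilon}_T}=2^{j\varepsilon}\|Q_ju\|_{L^{3}_xL^{\frac2\varepsilon}_T},$$
and taking $\ell^2$ norms in $j$ yields the claim; the second inequality holds by the definition of $X^s$.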
On the other hand, the $\dot{\mathcal{B}}^{\varepsilon,2}_3(L^{\frac{2}{\varepsilon}}_T)$-norm is acceptable since by (\ref{est3}),
\begin{multline*}\|V(t)\varphi\|_{\dot{\mathcal{B}}^{\varepsilon,2}_3(L^{\frac{2}{\varepsilon}}_T)}\lesssim \Big(\sum_j4^{j\varepsilon}\|Q_jV(t)\varphi\|_{L^3_xL^\infty_T}^2\Big)^{1/2}\\
\lesssim \Big(\sum_j\|Q_j\varphi\|_{H^{1/3+2\varepsilon}}^2\Big)^{1/2}\lesssim \|\varphi\|_{H^s}\end{multline*}
for $\varepsilon\ll 1$. From this, it is straightforward to check that the subcritical non-homogeneous versions of Propositions \ref{prop-estlin}, \ref{prop-lkli} and
\ref{prop-estnl} are valid whenever $k=3$. This essentially proves Theorem \ref{th-keq3}.
\section*{Acknowledgments}
The author wants to thank Fabrice Planchon for his enthusiastic help and his availability.
\end{document} |
\begin{document}
\date{}
\title{On the $\frac 1H$-variation of the divergence integral with respect to fractional Brownian motion with Hurst parameter $H<\frac12$ }
\author{El Hassan Essaky \\ Universit\'{e} Cadi Ayyad\\
Facult\'{e} Poly-disciplinaire\\
Laboratoire de Mod\'{e}lisation et Combinatoire\\
D\'{e}partement de Math\'{e}matiques et d'Informatique\\ BP 4162, Safi, Maroc \\ Email: [email protected]\vspace*{0.2in}\\
David Nualart \thanks{D. Nualart is supported by the NSF grant DMS 1208625} \\
The University of Kansas\\ Department of Mathematics\\
Lawrence, Kansas 66045, USA\\ Email: [email protected]}
\maketitle
\begin{abstract}
In this paper, we study the $\frac 1H$-variation of stochastic divergence integrals $X_t=\int_{0}^{t} u_{s} \delta B_{s}$ with respect to a fractional Brownian motion $B$ with Hurst parameter $H< \frac12$. Under suitable assumptions on the process $u$, we prove that the $\frac 1H$-variation of $X$ exists in $L^1(\Omega)$ and is equal to $e_H \int_0^T |u_s|^{\frac{1}{H}} ds$, where $e_H = \E\left[|B_1|^{\frac{1}{H}}\right]$. In the second part of the paper, we establish an integral representation for the fractional Bessel process $\|B_t\|$, where $B_t$ is a $d$-dimensional fractional Brownian motion with Hurst parameter $H<\frac 12$. Using a multidimensional version of the result on the $\frac 1H$-variation of divergence integrals, we prove that if $2dH^2>1$, then the divergence integral in the integral representation of the fractional Bessel process has a $\frac 1H$-variation equal to a constant multiple of the Lebesgue measure.
\end{abstract}
\vskip0.2cm
\noindent {\small {\bf{Key words:}}
Fractional Brownian motion, Malliavin calculus, Skorohod integral, Fractional Bessel processes.}
\vskip0.2cm
\noindent {\small {\bf{Mathematics Subject Classification:}} 60H05, 60H07, 60G18.}
\section{Introduction}
The fractional Brownian motion (fBm for short) $B=\{B_{t} , t\in [0,T]\}$ with Hurst parameter $H\in (0,1)$ is a Gaussian self-similar process with stationary increments.
This process was introduced by Kolmogorov \cite{kol} and studied by Mandelbrot and Van Ness in \cite{MN}, where a stochastic integral representation in terms of a standard
Brownian motion was established. The parameter
$H$ is called the Hurst index, after the
statistical analysis developed by the climatologist Hurst \cite{hurst}. The self-similarity and stationary increments properties make the fBm an appropriate model for many applications in diverse fields, from biology to finance. From the properties of the fBm, it follows that for every $\alpha >0$
$$
\E\left(|B_t-B_s|^{\alpha}\right) = \E\left(|B_1|^{\alpha}\right)|t-s|^{\alpha H}.
$$
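Indeed, by the stationarity of the increments and the $H$-self-similarity of $B$,
$$B_t-B_s\overset{d}{=}B_{t-s}\overset{d}{=}(t-s)^H B_1,$$
and taking absolute moments of order $\alpha$ gives the identity above.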
As a consequence of the Kolmogorov continuity theorem, we deduce that there exists a version of the fBm $B$ which is a continuous process and whose paths are $\gamma$-H\"{o}lder continuous for every $\gamma <H$.
On the other hand, the fBm with Hurst parameter $H\neq \frac12$ is not a semimartingale, so the It\^{o} approach to the construction of stochastic integrals with respect to fBm is not valid. Two main approaches have been used in the literature to define stochastic integrals with respect to fBm with Hurst parameter $H$. Pathwise Riemann-Stieltjes stochastic integrals can be defined using Young's integral \cite{young} in the case $H>\frac 12$. When $H\in (\frac14, \frac12)$, the rough path analysis introduced by Lyons \cite{lyons} is a suitable method to construct pathwise stochastic integrals.
A second approach to develop a stochastic calculus with respect to the fBm is based on the techniques of Malliavin calculus. The divergence operator, which is the adjoint of the derivative operator, can be regarded as a stochastic integral, which coincides with the limit of Riemann sums constructed using the Wick product.
This idea has been developed by
Decreusefond and \"{U}st\"{u}nel \cite{DU}, Carmona, Coutin and Montseny \cite{CC}, Al\`os, Mazet and Nualart \cite{AMN1, AMN2}, Al\`os and Nualart \cite{AN} and Hu \cite{hu}, among others. The integral constructed by this method has zero mean.
Different versions of the It\^o formula have been proved for the divergence integral in these papers.
In particular, if $H\in (\frac 14, 1)$ and $f\in C^2(\R)$ is a real-valued function satisfying some suitable growth condition, then the stochastic process $\{f'(B_t)\bf{1}_{[0,t]}, 0\le t \le T\}$ belongs to the domain of the divergence operator and
\begin{equation}\label{ito}
f(B_t) = f(0) + \displaystyle\int_0^t f'(B_s)\delta B_s
+ H\displaystyle\int_0^tf''(B_s) s^{2H-1} ds.
\end{equation}
For $H\in (0, \frac 14]$, this formula still holds, if the stochastic integral is interpreted as an extended divergence operator (see \cite{CN,LN}). A multidimensional version of the change of variable formula for the divergence integral has been recently proved by Hu, Jolis and Tindel in \cite{HMS}.
Using the self-similarity of fBm and the Ergodic Theorem, one can prove that the fBm has a finite $\frac 1H$-variation on any interval $[0,t]$, equal to $e_H t$, where $e_H = \E\left[|B_1|^{\frac{1}{H}}\right]$ (see, for instance, Rogers \cite{rogers}). More precisely, we have, as $n$ tends to infinity,
\begin{equation}\label{res1}
\sum_{i=0}^{n-1}|B_{t(i+1)/n}-B_{it/n}| ^{\frac{1}{H}} \overset{L^{1}(\Omega)} {\longrightarrow}
t\, e_H.
\end{equation}
This result has been generalized by Guerra and Nualart \cite{GN} to the case of divergence integrals with respect to the fBm with Hurst parameter $H\in (\frac12, 1)$.
The purpose of this paper is to study the $\frac 1H$-variation of divergence processes
$X=\{ X_t, t\in [0,T]\}$, where $X_t=\int_{0}^{t} u_{s} \delta B_{s}$, with respect to the fBm with Hurst parameter $H< \frac12$. Our main result, Theorem \ref{the2}, states that the $\frac 1H$-variation of $X$ exists in $L^1(\Omega)$ and is equal to $e_H \int_0^T |u_s|^{\frac{1}{H}} ds$, under suitable assumptions on the integrand $u$. This is done by proving an estimate of the $L^{p}$-norm of the Skorohod integral $\int_{a}^{b} u_{s} \delta B_{s}$, where $0\leq a\leq b\leq T$. Unlike the case $H>\frac 12$, here we need to impose H\"older continuity conditions on the process $u$ and its Malliavin derivative.
We also derive an extension of this result to divergence integrals with respect to a $d$-dimensional fBm, where $d\ge 1$.
In the last part of the paper, we study the fractional Bessel process $R= \{R_t, t\in [0,T]\}$, defined by $R_t:= \|B_t\|$, where $B$ is a $d$-dimensional fractional Brownian motion with Hurst parameter $H<\frac 12$.
The following integral representation of this process
\begin{equation} \label{rep1}
R_t = \displaystyle\sum_{i=1}^{d}\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)} + H(d-1)\int_ 0^t \dfrac{s^{2H-1}}{R_s}ds,
\end{equation}
has been derived in \cite{GN} when $H>\frac 12$. Completing the analysis initiated in \cite{HN}, we
establish the representation (\ref{rep1}) in the case $H<\frac 12$, using a suitable notion of the extended domain of the divergence operator. Applying the results obtained in the first part of the paper and assuming $2dH^2>1$, we prove that the $\frac 1H$-variation of the divergence integral of the process
$$\Theta_t:= \displaystyle\sum_{i=1}^{d}\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)},$$
exists in $L^1(\Omega)$ and is equal to $\displaystyle\int_{\R^d}\left[\displaystyle\int_ 0^T \left | \left\langle\dfrac{B_s}{R_s}, \xi\right \rangle \right|^{\frac{1}{H}}ds\right]\nu(d\xi)$, where $\nu$ is the normal distribution $N(0, I)$ on $\R^d$. We also discuss some other properties of the process $\{\Theta_t,t\in [0,T]\}$.
The paper is organized as follows. Section 2 contains some preliminaries on Malliavin calculus. In Section 3, we prove an $L^p$-estimate for the divergence integral with respect to fBm. Section 4 is devoted to the study of the $\frac 1H$-variation of the divergence integral with respect to fBm, for $H<\frac12$. Section 5 deals with the $\frac 1H$-variation of the divergence integral with respect to a $d$-dimensional fBm. An application to the fractional Bessel process is given in Section 6.
\section{Preliminaries on Malliavin calculus}
Here we describe the elements from stochastic analysis that we will need in the paper. Let $B=\{B_{t} , t\in [0,T]\}$ be a fractional Brownian motion with Hurst parameter $H\in (0,1)$ defined in
a complete probability space $(\Omega, \mathcal{F},P)$,
where $\mathcal{F}$ is generated by $B$. That is, $B$ is a
centred Gaussian process with covariance function
\begin{equation*}
R_H(t,s):=\E(B_{t}B_{s}) = \dfrac12 (t^{2H}+s^{2H}-|t-s|^{2H}),
\end{equation*}
for $s,t \in [0,T]$.
We denote by $\EuFrak H$ the Hilbert space associated to $B$, defined as the closure of the linear space generated by
the indicator functions $\{ \mathbf{1}_{[0,t]}, t\in [0,T]\} $, with respect to the inner product
\begin{equation*}
\langle \mathbf{1}_{[0,t]} , \mathbf{1}_{[0,s] } \rangle _{\EuFrak H}
=R_H(t,s), \hskip0.5cm s,t\in [0,T].
\end{equation*}
The mapping $\mathbf{1}_{[0,t]} \to B_{t}$ can be extended to a linear isometry between $\EuFrak H$ and the Gaussian space generated by $B$. We
denote by $B(\varphi)= \int_0^T \varphi_t dB_t $ the image of an element $\varphi \in \EuFrak H$
by this isometry.
We will first introduce some elements of the Malliavin calculus associated
with $B$. We refer to \cite{nualart} for a detailed account of these notions.
For a smooth and cylindrical random variable $F=f\left( B(\varphi _{1}), \ldots ,
B(\varphi_{n})\right) $, with $\varphi_{i} \in \EuFrak H$ and $f\in
C_{b}^{\infty}(\R^{n})$ ($f$ and all its partial derivatives are bounded), the
derivative of $F$ is the $\EuFrak H$-valued random variable defined by
\begin{equation*}
D F =\sum_{j=1}^{n}\frac{\partial f}{\partial x_{j}}(B(\varphi_{1}),\dots,B(
\varphi_{n}))\varphi_{j}.
\end{equation*}
For any integer $k\ge 1$ and any real number $p\ge 1$ we denote by $\mathbb{D
}^{k,p}$ the Sobolev space defined as the closure of the space of smooth and cylindrical
random variables with respect to the norm
\begin{equation*}
\Vert F\Vert_{k,p}^{p}=\E(|F|^{p})+\sum_{j=1}^{k} \E (\Vert
D^{j}F\Vert_{ \EuFrak H ^{\otimes j} }^{p}).
\end{equation*}
Similarly, for a given Hilbert space $V$ we can define Sobolev spaces of $V$
-valued random variables $\mathbb{D}^{k,p}(V)$.
The divergence operator $\delta$ is introduced as the adjoint of the derivative operator. More precisely, an element $u\in L^{2}(\Omega;\EuFrak H)$ belongs to the domain of $\delta$, denoted by ${\rm Dom}\, \delta$, if there exists
a constant $c_u$ depending on $u$ such that
\begin{equation*}
|\E(\langle D F,u\rangle_{\EuFrak H})|\leq c_u\Vert F\Vert_{2},
\end{equation*}
for any smooth and cylindrical random variable $F$. For any $u\in {\rm Dom}\, \delta$, $\delta(u)$ is the
element of $L^{2}(\Omega)$ given by the duality relationship
\begin{equation*}
\E(\delta (u)F)=\E(\langle D F,u\rangle_{\EuFrak H}),
\end{equation*}
for any $F\in \mathbb{D}^{1,2}$. We will make use of the notation $\delta
(u)=\int_{0}^{T}u_{s}\delta B_{s}$, and we call $\delta(u)$ the divergence integral of $u$ with respect to the fBm $B$.
Note that $\E(\delta (
u ) )=0$. On the other hand, the space $\mathbb{D}^{1,2}(\EuFrak H)$ is included in the domain of $\delta $, and for $u\in \mathbb{D}^{1,2}(\EuFrak H)$, the variance of $\delta(u)$ is given by
\begin{equation*}
\E(\delta (u)^{2})=\E(\Vert u\Vert_{\EuFrak H}^{2})+\E(\langle D u,(D
u)^{\ast}\rangle_{\EuFrak H\otimes\EuFrak H} ),
\end{equation*}
where $(D u)^{\ast}$ is the
adjoint of $D u$ in the Hilbert space $\EuFrak H\otimes\EuFrak H$.
By Meyer's inequalities (see Nualart \cite{nualart}), for all $p>1$, the divergence operator
is continuous from $ \mathbb{D}^{1,p}(\EuFrak H)$ into $ L^p(\Omega)$, that is,
\begin{equation}\label{meyer}
\E(|\delta (u)|^{p})\leq C_{p}\left( \E(\Vert u\Vert_{\EuFrak H
}^{p})+\E(\Vert D u\Vert_{\EuFrak H\otimes\EuFrak H}^{p})\right).
\end{equation}
We will make use of the property
\begin{equation}\label{p1}
\delta (Fu)= F\delta (u)+\langle D F,u\rangle_{\EuFrak H},
\end{equation}
which holds if $F\in \mathbb{D}^{1,2}$, $u\in {\rm Dom} \, \delta$ and the right-hand side is square integrable. We also have the following commutation relationship between
$D$ and $\delta$:
\begin{equation*}
D \delta (u)= u + \int_{0}^{T} D u_{s}\delta B_{s},
\end{equation*}
which holds if $u\in \mathbb{D}^{1,2}(\EuFrak H)$ and the $\EuFrak H$-valued process $\{D u_s, s\in
[0,T]\}$ belongs to the domain of $\delta $.
The covariance of the fractional Brownian motion can be written as
$$
R_H(t,s) = \int_0^{t\wedge s} K_H(t,u)K_H(s,u)du,
$$
where $K_H(t,s)$ is a square integrable kernel, defined for $0<s<t<T$. In what follows, we assume that $0<H <\frac12$. In this case, this kernel has the following expression
$$
K_H(t,s)= c_H\left[ \left(\frac{t}{s}\right)^{H-\frac12}(t-s)^{H-\frac12} -(H-\frac12)s^{H-\frac12}\int_s^t u^{H-\frac32}(u-s)^{H-\frac12}du\right],
$$
with $c_H = \left(\frac{2H}{(1-2H)\beta(1-2H, H+\frac12)}\right)^{\frac12}$ and $\beta(x,y):= \displaystyle\int_ 0^1 t^{x-1}(1-t)^{y-1}dt$ for $x, y>0$. Notice also that
$$
\frac{\partial K_H}{\partial t}(t,s) = c_H (H-\frac12)\left(\frac{t}{s}\right)^{H-\frac12}(t-s)^{H-\frac32}.
$$
From these expressions it follows that the kernel $K_H$ satisfies the following two estimates
\begin{equation}\label{est1A}
\left|\frac{\partial K_H}{\partial t}(t,s)\right| \leq c_H (t-s)^{H-\frac32},
\end{equation}
and
\begin{equation}\label{est2}
|K_H(t,s)|\leq d_H \left((t-s)^{H-\frac12} + s^{H-\frac 12} \right),
\end{equation}
for some constant $d_H$.
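For instance, (\ref{est1A}) follows immediately from the formula for $\frac{\partial K_H}{\partial t}$: since $0<s<t$ and $H<\frac12$, we have $\left(\frac{t}{s}\right)^{H-\frac12}\leq 1$, hence
$$
\left|\frac{\partial K_H}{\partial t}(t,s)\right| = c_H \left(\frac12-H\right)\left(\frac{t}{s}\right)^{H-\frac12}(t-s)^{H-\frac32}\leq c_H (t-s)^{H-\frac32}.
$$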
Let $\mathcal{E}$ be the linear span of the indicator functions on $[0,T]$.
Consider the linear operator $K_H^*$ from ${\mathcal{E}}$ to $L^2([0, T])$ defined by
\begin{equation}\label{est0}
K_H^*(\varphi)(s) = K_H(T,s)\varphi(s)+ \int_s^T (\varphi(t)-\varphi(s))\dfrac{\partial K_H}{\partial t}(t,s)dt.
\end{equation}
Notice that
$$
K_H^*(\bf{1}_{[0, t]})(s) = K_H(t,s)\bf{1}_{[0, t]}(s).
$$
The operator $K_H^*$ can be expressed in terms of fractional derivatives as follows
$$
(K_H^*\varphi)(s)= c_H \Gamma(H+\frac12)s^{\frac12 -H}(D_{T-}^{\frac12 -H} u^{H -\frac12}\varphi(u))(s).
$$
In this expression, $D_{t-}^{\frac 12 -H}$ denotes the right-sided fractional derivative operator, given by
$$
D_{t-}^{\frac 12 -H }f(s):= \frac{1}{\Gamma(\frac 12+H )}\left(\dfrac{f(s)}{(t-s)^{\frac 12-H}}+\left(\frac 12 -H\right)\displaystyle\int_s^t\dfrac{f(s)-f(y)}{(y-s)^{\frac 32-H}}dy\right),
$$
for almost all $s\in (0,t)$ and for a function $f$ in
the image of $L^p([0,t])$, $p\ge 1$, under the right-sided fractional integral operator $I^{\frac 12-H}_{t-}$ (see \cite{SK} for more details).
As a consequence, $C^{\gamma}([0,T])\subset \EuFrak H\subset L^2([0,T])$ for every $\gamma>\frac12-H$. It should be noted that the operator $K_H^*$ is an isometry between the Hilbert space $\EuFrak H$ and $L^2([0,T])$. That is, for every $\varphi, \psi\in\EuFrak H$,
\begin{equation} \label{equ1}
\langle \varphi, \psi \rangle_\EuFrak H= \langle K_H^*\varphi, K_H^* \psi \rangle_{L^2([0,T])}.
\end{equation}
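As a quick consistency check, on indicator functions the isometry reduces to the kernel representation of the covariance:
$$
\langle K_H^*\mathbf{1}_{[0,t]},K_H^*\mathbf{1}_{[0,s]}\rangle_{L^2([0,T])}=\int_0^{t\wedge s}K_H(t,r)K_H(s,r)\,dr=R_H(t,s)=\langle \mathbf{1}_{[0,t]},\mathbf{1}_{[0,s]}\rangle_{\EuFrak H}.
$$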
Consider the following seminorm on the space ${\mathcal{E}}$
\begin{equation}\label{iso}
\begin{array}{ll}
\| \varphi\|_ K^2 = \displaystyle\int_ 0^T &\varphi^2(s)[(T-s)^{2H-1}+ s^{2H-1}]ds \\ & + \displaystyle\int_0^T\left(\displaystyle\int_s^T |\varphi(t)-\varphi(s)|(t-s)^{H-\frac32}dt\right)^2 ds.
\end{array}
\end{equation}
We denote by $\EuFrak H_K$ the completion of ${\mathcal{E}}$ with respect to this seminorm. From the estimates (\ref{est1A}) and (\ref{est2}), there exists a constant $k_H$ such that for any $\varphi \in \EuFrak H_K$,
\begin{equation}\label{est01}
\| \varphi\|^2_{\EuFrak H} =\|K^*_{H}(\varphi)\|^2_{L^2([0,T])}
\leq k_H\| \varphi\|^2_{ K} .
\end{equation}
As a consequence, the space $\EuFrak H_K$ is continuously embedded in $\EuFrak H$. This implies also that
$\mathbb{D}^{1, 2}(\EuFrak H_K) \subset \mathbb{D}^{1,2}(\EuFrak H) \subset {\rm Dom}\, \delta$.
One can also show that $\EuFrak H = I_{T-}^{\frac12 -H}(L^2([0,T]))$ (see \cite{DU}). As a consequence, the space $\EuFrak H$ is too small for some purposes. For instance, it has been proved in \cite{CN} that the trajectories of the fBm $B$
belong to $\EuFrak H$ if and only if $H>\frac14$. This creates difficulties when defining the divergence $\delta(u)$ of a stochastic process whose trajectories do not belong to $\EuFrak H$, for example, if $u_t=f(B_t)$ and $H<\frac 14$, because the domain of $\delta$ is included in
$L^{2}(\Omega; \EuFrak H)$. To overcome this difficulty, an extended domain of the divergence operator has been introduced in \cite{CN}. The main ingredient in the definition of this extended domain is the extension of the inner product $\langle \varphi, \psi \rangle_\EuFrak H$ to the case where $\psi \in \mathcal{E}$ and $\varphi \in L^\beta([0,T])$ for some $\beta >\frac 1{2H}$ (see \cite{LN}).
More precisely, for $\varphi \in L^\beta([0,T])$ and $\psi = \sum_{j=1}^{m}b_j\bf{1}_{[0,t_j]} \in \mathcal{E}$ we set
\begin{equation} \label{ext}
\langle\varphi, \psi \rangle_\EuFrak H = \displaystyle\sum_{j=1}^{m}b_j\displaystyle\int_0^T\varphi_s \dfrac{\partial R}{\partial s}(s, t_j)ds.
\end{equation}
This expression coincides with the inner product in $\EuFrak H$ if $\varphi \in \EuFrak H$, and it is well defined because
\[
|\langle\varphi, \bf{1}_{[0,t]} \rangle_\EuFrak H|
= \left|\int_0^T\varphi_s \dfrac{\partial R}{\partial s}(s, t)ds \right|
\leq \|\varphi\|_{L^\beta([0,T])} \sup_{0\leq t\leq T} \left(\int_0^T\left|\dfrac{\partial R}{\partial s}(s, t)\right|^{\alpha}ds\right)^{\frac{1}{\alpha}}<\infty.
\]
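Here $\alpha$ is naturally taken to be the conjugate exponent of $\beta$ (so that $\frac1\alpha+\frac1\beta=1$). The supremum is indeed finite: since $\beta>\frac1{2H}$ we have $\alpha(1-2H)<1$, while
$$
\left|\frac{\partial R}{\partial s}(s,t)\right| = H\left|s^{2H-1}+\mathrm{sgn}(t-s)\,|t-s|^{2H-1}\right|\leq H\big(s^{2H-1}+|t-s|^{2H-1}\big),
$$
so that $\int_0^T\left|\frac{\partial R}{\partial s}(s,t)\right|^{\alpha}ds$ is bounded uniformly in $t\in[0,T]$.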
We will make use of the following notation: for each $(a, b)\in\R^2$, $a\wedge b = \min(a, b)$ and
$a\vee b = \max(a, b)$.
\section{$L^p$-estimate of divergence integrals with respect to fBm}
Let $V$ be a given Hilbert space. We introduce the following hypothesis for a $V$-valued stochastic process $u=\{ u_t, t\in [0,T]\}$, for some $p\ge 2$.
\noindent
\textbf{Hypothesis} $\mathbf{(A.1)}_p$ \textit{ Let $p\ge 2$. The process satisfies $\displaystyle\sup_{0\leq s\leq T}\Vert u_s\Vert_{L^{p}(\Omega; V)} <\infty $ and there exist constants $L>0$, $0<\alpha <\frac12$ and $\gamma >\frac12 -H$ such that
\begin{equation*} \label{A1}
\Vert u_t -u_s\Vert_{L^{p}(\Omega; V)}\leq L s^{-\alpha }|t-s|^{\gamma},
\end{equation*}
for all $0<s\leq t \leq T$. }
For any $ 0\le a< b \le T$, we will make use of the notation
\[
\|u\|_{p,a,b} = \sup_{a\le s\le b} \|u_s\| _{L^{p}(\Omega; V)}.
\]
The following lemma is a crucial ingredient to establish the
$L^p$-estimates for the divergence integral with respect to fBm.
\begin{lemma}\label{lem1}
Let $u=\{u_t, 0\leq t\leq T\}$ be a process with values in a Hilbert space $V$, satisfying assumption $\mathbf{(A.1)}_p$ for some $p\geq 2$. Then, there exists a positive constant $C$ depending on $H$, $\gamma$ and $p$ such that for every $0<a\leq b \le T$
\begin{equation} \label{est1}
\E \left( \| u {\mathbf 1}_{[a,b]} \| ^p_{\EuFrak H \otimes V} \right) \leq C\left(\| u\|_{p,a,b}^p(b-a)^{pH}+ L^pa^{-p\alpha }(b-a)^{p\gamma +pH}\right).
\end{equation}
Moreover if $a=0$, then
\begin{equation} \label{est1a}
\E \left( \|u {\mathbf 1}_{[0,b]} \|^p_{\EuFrak H \otimes V} \right) \leq C\left(\| u\|_{p,0,b}^p b^{pH}+L^pb^{-p\alpha +p\gamma+pH}\right).
\end{equation}
\end{lemma}
\bop. Suppose first that $a> 0$. By equalities (\ref{equ1}) and (\ref{est0}) we obtain
\begin{eqnarray*}
&& \E \left( \| u {\mathbf 1}_{[a,b]} \| ^p_{\EuFrak H \otimes V} \right)
= \E \left( \| K_H^*(u \bf{1}_{[a,b]} ) \| ^p_{L^2([0,T];V)} \right) \\
& &= \E \left( \left\| K_H(T,s)u _s{\mathbf 1}_{[a,b]}(s) +\displaystyle\int_{s}^T\Big(u_t{\mathbf 1}_{[a,b]}(t)-u_{s} {\mathbf 1}_{[a,b]}(s)\Big)\dfrac{\partial K_H}{\partial t}(t,s)dt \right \|^p_{L^2([0,T];V)} \right).
\end{eqnarray*}
Consider the decomposition
\begin{eqnarray*}
&&\displaystyle\int_s^T\Big(u_t{\mathbf 1}_{[a,b]}(t)-u_s{\mathbf 1}_{[a,b]}(s)\Big)\dfrac{\partial K_H}{\partial t}(t,s)dt
= \left[\displaystyle\int_s^b(u_t -u_s)\dfrac{\partial K_H}{\partial t}(t,s)dt\right]{\mathbf 1}_{[a,b]}(s) \\
&&\qquad +\left[-\displaystyle\int_b^T u_s\dfrac{\partial K_H}{\partial t}(t,s)dt\right]{\mathbf 1}_{[a,b]}(s)
+\left[ \displaystyle\int_a^b u_t\dfrac{\partial K_H}{\partial t}(t,s)dt\right]{\mathbf 1}_{[0,a]}(s) \\
&& \qquad := I_1 +I_2+I_3.
\end{eqnarray*}
Therefore
\[
\E \left( \| u {\mathbf 1}_{[a,b]} \| ^p_{\EuFrak H \otimes V} \right) \le C\sum_{i=0}^3 A_i,
\]
where $A_0= \E\left[\| K_H(T,\cdot)u \bf{1}_{[a,b]} \|^p_{L^2([0,T]; V)} \right]$ and for $i=1,2,3$, $A_i= \E \left[\| I_i\|^p_{L^2([0,T];V)} \right]$.
Let us now estimate the four terms $A_i$, $i=0,1,2,3$, in the previous inequality. By estimate (\ref{est2}), Minkowski inequality and Hypothesis $\mathbf{(A.1)}_p$ we obtain
\begin{eqnarray}\notag
A_0
& \leq & C \E\left(\displaystyle\int_a^b [(T-s)^{2H-1}+ s^{2H-1}]\Vert u_s\Vert^2_Vds\right)^{\frac{p}{2}} \\ \notag
&\leq & C\left(\displaystyle\int_a^b [(T-s)^{2H-1}+s^{2H-1}]\| u_s\|^2_{{L^{p}(\Omega; V)}}ds\right)^{\frac{p}{2}} \\
& \leq & C \| u \|_{p,a,b}^p (b-a)^{pH},\label{eqA0}
\end{eqnarray}
where we have used that $(T-a)^{2H}\leq (T-b)^{2H} +(b-a)^{2H}$ and $b^{2H} -a^{2H} \le (b-a)^{2H}$.
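Both inequalities are instances of the subadditivity of $x\mapsto x^{2H}$ on $[0,\infty)$, valid because $2H\leq 1$:
$$(x+y)^{2H}\leq x^{2H}+y^{2H},\qquad x,y\geq 0,$$
applied with $(x,y)=(T-b,\,b-a)$ and $(x,y)=(a,\,b-a)$, respectively.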
Using Minkowski inequality, Hypothesis $\mathbf{(A.1)}_p$ and estimate (\ref{est1A}), it follows that
\begin{eqnarray}
A_1 \notag
& \leq & \left(\displaystyle\int_a^b \left\Vert \displaystyle\int_s^b(u_t -u_s)\dfrac{\partial K_H}{\partial t}(t,s)dt\right\Vert^2_{{L^{p}(\Omega; V)}}ds\right)^{\frac{p}{2}} \\ \notag
& \leq & \left(\displaystyle\int_a^b \left( \displaystyle\int_s^b\Vert u_t -u_s\Vert_{{L^{p}(\Omega; V)}}\left|\dfrac{\partial K_H}{\partial t}(t,s)\right|dt\right)^2ds
\right)^{\frac{p}{2}} \\
&\leq & C L^p\left(\displaystyle\int_a^b \left( \displaystyle\int_s^b s^{-\alpha }(t-s)^{\gamma+H-\frac32}dt\right)^2ds\right)^{\frac{p}{2}}.
\label{eqA00}
\end{eqnarray}
We have
\begin{eqnarray*}\notag
\displaystyle\int_a^b \left( \displaystyle\int_s^b s^{-\alpha}(t-s)^{\gamma+H-\frac32}dt\right)^2ds \notag
&= &\dfrac{1}{(\gamma +H-\frac12)^2}\displaystyle\int_a^bs^{-2\alpha }(b-s)^{2\gamma +2H-1}ds \\ \notag
&\le &
\dfrac{1}{(\gamma +H-\frac12)^2 (2\gamma +2H)} a^{-2\alpha } (b-a)^{2\gamma +2H}.
\end{eqnarray*}
Substituting this expression into inequality (\ref{eqA00}) yields
\begin{equation}\label{eqA1}
A_1
\leq
C L^p a^{-p\alpha } (b-a)^{p\gamma +pH}.
\end{equation}
By the same arguments as above, it follows from Minkowski inequality and Hypothesis $\mathbf{(A.1)}_p$ that
\begin{eqnarray}
A_2
& =& \notag
\left(\displaystyle\int_a^b \left\Vert\displaystyle\int_b^T u_s\dfrac{\partial K_H}{\partial t}(t,s)dt\right\Vert^2_{{L^{p}(\Omega; V)}}ds\right)^{\frac{p}{2}} \\ \notag
& \leq & C \left(\displaystyle\int_a^b\left(\displaystyle\int_b^T \Vert u_s\Vert_{{L^{p}(\Omega; V)}}(t-s)^{H-\frac32}dt\right)^2ds\right)^{\frac{p}{2}} \\ \notag
&\leq & C\|u \|_{p,a,b}^p \left(\displaystyle\int_a^b\left((T-s)^{H-\frac12}-(b-s)^{H-\frac12}\right)^2ds\right)^{\frac{p}{2}} \\ \notag
&\leq &C\|u \|_{p,a,b}^p\Big((T-a)^{2H}-(T-b)^{2H}+ (b-a)^{2H}\Big)^{\frac{p}{2}} \\ \label{eqA2}
&\leq & C\| u \|_{p,a,b}^p(b-a)^{pH},
\end{eqnarray}
where we have used that $(T-a)^{2H}-(T-b)^{2H} \leq (b-a)^{2H}$.\\
Finally, for the term $A_3$, we obtain in the same way
\begin{eqnarray}\notag
A_3 \notag
&\leq & \left(\displaystyle\int_0^a \left(\displaystyle\int_a^b \Vert u_t\Vert_{{L^{p}(\Omega; V)}}|\dfrac{\partial K_H}{\partial t}(t,s)|dt\right)^2ds\right)^{\frac{p}{2}} \\
& \leq & C\| u \|_{p,a,b}^p\left(\displaystyle\int_0^a \left(\displaystyle\int_a^b (t-s)^{H-\frac{3}{2}}dt\right)^2ds\right)^{\frac{p}{2}} \notag \\
& \le & C\| u \|_{p,a,b}^p\left(\displaystyle\int_0^a \left((a-s)^{H-\frac12} -(b-s)^{H-\frac12}\right)^2ds\right)^{\frac{p}{2}} \notag \\
& \le & C \| u \|_{p,a,b}^p (b-a) ^{pH}. \label{eqA3}
\end{eqnarray}
For the last inequality we have used the following computations
\begin{eqnarray*}
& & \displaystyle\int_0^a \left((a-s)^{H-\frac12} -(b-s)^{H-\frac12}\right)^2ds \\
&& \quad = \frac 1{2H} \left( b^{2H} + a^{2H} -(b-a) ^{2H} \right)
-2\displaystyle\int_0^a (a-s)^{H-\frac12}(b-s)^{H-\frac12}ds \\
& & \quad \leq \frac 1{2H} \left( b^{2H} + a^{2H} -(b-a) ^{2H} \right) -2\displaystyle\int_0^a (b-s)^{2H-1}ds \\
&& \quad \leq \frac 1{2H} \left( (b-a) ^{2H} - (b^{2H} -a^{2H}) \right) \le \frac 1{2H} (b-a)^{2H}.
\end{eqnarray*}
The inequality (\ref{est1}) follows from the estimates (\ref{eqA0}), (\ref{eqA1}), (\ref{eqA2}) and (\ref{eqA3}). The case $a=0$ can be proved using similar arguments. The proof of Lemma \ref{lem1} is then completed.
\eop
We are now in a position to prove the following theorem, which gives an estimate of the $L^{p}$-norm of the Skorohod integral of a process $u$ with respect to a fBm with Hurst parameter $H\in (0,\frac 12)$. We first need the following assumption on the process $u$.
\noindent
\textbf{Hypothesis} $\mathbf{(A.2)}_p$ \textit{ Let $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ be a real-valued stochastic process, which satisfies Hypothesis $\mathbf{(A.1)}_p$ with constants $L_u$, $ \alpha_1$ and $\gamma$ for a fixed $p\geq 2$. We also assume that the $\EuFrak H$-valued process $\{Du_s, s\in [0,T]\}$ satisfies Hypothesis $\mathbf{(A.1)}_p$ with constants $L_{Du}$, $ \alpha_2$ and $\gamma$ for the same value of $p$. }
Hypothesis $\mathbf{(A.2)}_p$ means that $u_s$ and $Du_s$ have bounded $L^p$ norms in $[0,T]$ and satisfy
\begin{eqnarray}
\Vert u_t -u_s\Vert_{L^{p}(\Omega)}&\leq& L_us^{-\alpha _1}|t-s|^{\gamma} \label{assump1}
\\
\Vert Du_t -Du_s\Vert_{L^{p}(\Omega; \EuFrak H)}&\leq & L_{Du}s^{-\alpha _2}|t-s|^{\gamma}, \label{assump2}
\end{eqnarray}
for all $0<s\leq t\leq T$.
\begin{theorem}\label{the1}
Suppose that $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ is a stochastic process satisfying Hypothesis $\mathbf{(A.2)}_p$ for some $p\geq 2$. Let $0< a\leq b\leq T$. Then, there exists a positive constant $C$ depending on $H$, $\gamma$ and $p$ such that
\begin{eqnarray} \notag
& & \E\left( \left |\displaystyle\int_a^b u_s \delta B_s\right|^p \right) \\
& \leq &
C\left((\| u \|_{p,a,b}^p+\| Du \|_{p,a,b}^p)(b-a)^{pH}+ (L_u ^pa^{-p\alpha_1}+L_{Du}^pa^{-p\alpha_2})(b-a)^{p\gamma +pH}\right).\qquad
\label{ineq1}
\end{eqnarray}
If $a=0$, then
\begin{eqnarray}
\E \left( \left|\displaystyle\int_0^b u_s \delta B_s\right|^p \right) \leq C\left((\|u \|_{p,a,b}^p+\|Du \|_{p,a,b}^p) b^{pH}+(L_u^pb^{-p\alpha_1}+L_{Du}^pb^{-p\alpha_2})b^{p\gamma+pH}\right).
\label{ineq2}
\end{eqnarray}
\end{theorem}
\bop.
By inequality (\ref{meyer}), we have
$$
\E \left( \left|\displaystyle\int_a^b u_s \delta B_s\right|^p \right) \leq C_p\left ( \E (\| u {\mathbf 1}_{[a,b]} \| ^p_{ \EuFrak H } )+\E ( \| D_s(u_t\bf{1}_{[a,b]}(t)) \| ^p_{ \EuFrak H \otimes \EuFrak H})\right).
$$
The first and the second terms of the above inequality can be estimated applying Lemma \ref{lem1} to the processes $u$ and $Du$, with $V= \mathbb{R}$ and $V=\EuFrak H$, respectively. Theorem \ref{the1} is then proved.\eop
\begin{remark} If we suppose that $\alpha_1= \alpha_2=0$ in Hypothesis $\mathbf{(A.2)}_p$, that is, $u$ and $Du$ are H\"older continuous in $L^p$ on $[0,T]$, then estimate (\ref{ineq1}) in Theorem \ref{the1} can be written as
$$
\E\left( \left| \displaystyle\int_a^b u_s \delta B_s\right|^p \right) \leq C\Vert u\Vert_{1,p,\gamma}^p(b-a)^{pH},
$$
where
$$
\Vert u\Vert_{1,p,\gamma}=\displaystyle\sup_{0\leq s<t\leq T}\dfrac{\Vert u_t -u_s\Vert_{1,p}}{|t-s|^{\gamma}}+\displaystyle\sup_{0\leq s\leq T} \Vert u_s\Vert_{1,p}.
$$
\end{remark}
\section{The $\frac{1}{H}$-variation of divergence integral with respect to fBm}
Fix $q\geq 1$ and $T>0$ and set $t_i^n:= \frac{iT}{n}$, where $n$ is a positive integer and $i=0,1,2,\dots,n$. We need the following definition.
\begin{definition}
Let $X$ be a given stochastic process defined in the complete probability space $(\Omega, {\cal F}, P)$. Let $V_n^q(X)$ be the random variable defined by
$$
V_n^q(X):= \sum_{i=0}^{n-1}|\Delta_i^n X|^q,
$$
where $\Delta_i^n X := X_{t^n_{i+1}}-X_{t^n_{i}}$. We define the $q$-variation of $X$ as the limit in $L^1(\Omega)$, as $n$ goes to infinity, of $V_n^q(X)$ if this limit exists.
\end{definition}
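For example, taking $u\equiv 1$, so that $X_t=\delta(\mathbf{1}_{[0,t]})=B_t$, this notion is consistent with (\ref{res1}): the fBm itself has a $\frac1H$-variation on $[0,T]$ equal to $e_H T$, that is, $V_n^{\frac{1}{H}}(B)\to e_H T$ in $L^1(\Omega)$ as $n\to\infty$.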
As in the last section we assume that $H\in (0, \frac12)$. In this section, we need the following assumption on the process $u$.
\noindent
\textbf{Hypothesis} $\mathbf{(A.3)}$ \textit{
Let $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ be a real-valued stochastic process which is bounded in $L^q(\Omega)$ for some $q>\frac{1}{H}$ and satisfies the H\"older continuity property (\ref{assump1}) with $p=\frac{1}{H}$, that is
\begin{eqnarray}
\Vert u_t -u_s\Vert_{L^{\frac{1}{H}}(\Omega)}&\leq& L_us^{-\alpha _1}|t-s|^{\gamma}. \label{assump11}
\end{eqnarray}
Suppose also that the $\EuFrak H$-valued process $\{Du_s, s\in [0,T]\}$ is bounded in $L^{\frac{1}{H}}(\Omega; \EuFrak H)$ and satisfies the H\"older continuity property (\ref{assump2}) with $p=\frac{1}{H}$, that is
\begin{eqnarray}
\Vert Du_t -Du_s\Vert_{L^{\frac{1}{H}}(\Omega; \EuFrak H)}&\leq& L_{Du}s^{-\alpha _2}|t-s|^{\gamma}. \label{assump21}
\end{eqnarray}
Moreover, we assume that the derivative $\{D_tu_s, s,t\in [0,T]\}$ satisfies
\begin{equation}\label{assump3}
\displaystyle\sup_{0\leq s\leq T}\Vert D_su_t\Vert_{L^{\frac{1}{H}}(\Omega)} \leq K t^{-\alpha_3},
\end{equation}
for every $t\in(0, T]$ and for some constants $0<\alpha_3<2H$ and $K>0$.}
Consider the indefinite divergence integral of $u$ with respect to the fBm $B$, given by
\begin{equation} \label{equ2}
X_t = \int_0^t u_s \delta B_s := \delta(u\bf{1}_{[0,t]}).
\end{equation}
The main result of this section is the following theorem.
\begin{theorem}\label{the2}
Suppose that $u\in \mathbb{D}^{1, 2}(\EuFrak H)$ is a stochastic process satisfying Hypothesis $\bf{(A.3)}$, and consider the divergence integral process $X$ given by (\ref{equ2}). Then, we have
\[
V_n^{\frac{1}{H}}(X) \overset{L^{1}(\Omega)} {\longrightarrow} e_H \displaystyle\int_ 0^T|u_s|^{\frac{1}{H}}ds,
\]
as $n$ tends to infinity,
where $e_H = \E \left[|B_1|^{\frac{1}{H}}\right]$.
\end{theorem}
\bop.
We need to show that the expression
\[
F_n:= \E\left(\left|\sum_{i=0}^{n-1}\left|\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s \delta B_s\right|^{\frac{1}{H}}-e_H \displaystyle\int_0^T |u_s|^{\frac{1}{H}}ds\right|\right),
\]
converges to zero as $n$ tends to infinity.
Using (\ref{p1}), we can write
\begin{equation}\label{decom}
\begin{array}{ll}
\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s \delta B_s
&=\displaystyle\int_{t_i^n}^{t_{i+1}^n} (u_s-u_{t_i^n}) \delta B_s + \displaystyle\int_{t_i^n}^{t_{i+1}^n} u_{t_{i}^n}\delta B_s
\\ & =\displaystyle\int_{t_i^n}^{t_{i+1}^n} (u_s-u_{t_i^n}) \delta B_s-\langle Du_{t_i^n}, \bf{1}_{[t_{i}^n, t_{i+1}^n]}\rangle_{{\EuFrak H}} + u_{t_{i}^n}(B_{t_{i+1}^n}-B_{t_{i}^n}).
\\ & := A_{i}^{1,n} -A_{i}^{2,n} +A_{i}^{3,n}.
\end{array}
\end{equation}
By the triangle inequality, we obtain
\begin{equation}
F_n \le \E\left(\sum_{i=0}^{n-1}\left| |A_{i}^{1,n} -A_{i}^{2,n} +A_{i}^{3,n} |^{\frac{1}{H}} - |A_{i}^{3,n} |^{\frac{1}{H}}\right|\right) +D_n, \label{eq45}
\end{equation}
where
\[
D_n=\E\left(\left|
\sum_{i=0}^{n-1} |A_{i}^{3,n} |^{\frac{1}{H}}-e_H \displaystyle\int_0^T |u_s|^{\frac{1}{H}}ds\right|\right).
\]
Using the mean value theorem and H\"older inequality, we can write
\begin{eqnarray} \notag
& &\E\left( \sum_{i=0}^{n-1}\left| |A_{i}^{1,n} -A_{i}^{2,n} +A_{i}^{3,n} |^{\frac{1}{H}} - |A_{i}^{3,n} |^{\frac{1}{H}}\right|\right) \\ \notag
&& \leq \frac{1}{H}\E\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n} |\left[ |A_{i}^{1,n} -A_{i}^{2,n}+A_{i}^{3,n}|^{\frac{1}{H}-1} + |A_{i}^{3,n}|^{\frac{1}{H}-1}\right]\right) \\ \notag
& & \leq
C\left[ \E\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n} |^{\frac{1}{H}}\right)\right]^H \\
&& \qquad \qquad \times \left[\E\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n}+A_{i}^{3,n} |^{\frac{1}{H}}\right)
+\E\left(\sum_{i=0}^{n-1} |A_{i}^{3,n} |^{\frac{1}{H}}\right)\right]^{1-H}. \label{eq451}
\end{eqnarray}
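In the first inequality we used the elementary bound, valid for $p=\frac1H\geq 2$ and all real $x,y$ (a consequence of the mean value theorem applied to $x\mapsto|x|^{p}$),
$$\big||x|^{p}-|y|^{p}\big|\leq p\,|x-y|\,\big(|x|^{p-1}+|y|^{p-1}\big),$$
with $x=A_{i}^{1,n} -A_{i}^{2,n} +A_{i}^{3,n}$ and $y=A_{i}^{3,n}$; H\"older's inequality was then applied with the exponents $\frac1H$ and $\frac1{1-H}$.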
Substituting (\ref{eq451}) into (\ref{eq45}) yields
\[
F_n \le CA_{n}^H(B_n + C_n)^{1-H} + D_n,
\]
where
\begin{eqnarray*}
A_n&=&\E\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n} |^{\frac{1}{H}}\right), \\
B_n &=& \E\left(\sum_{i=0}^{n-1} |A_{i}^{1,n} -A_{i}^{2,n}+A_{i}^{3,n} |^{\frac{1}{H}}\right),\\
C_n &=&\E\left(\sum_{i=0}^{n-1} |A_{i}^{3,n} |^{\frac{1}{H}}\right).
\end{eqnarray*}
The proof will be divided into several steps. Throughout the proof, $C$ will denote a generic constant, which may vary from line to line and may depend on the processes $u$ and $Du$ and the different parameters appearing in the computations, but it is independent of $n$.
\\
\textit{Step 1.} We first prove that $B_n$ and $C_n$ are bounded. Remark that
\begin{eqnarray*}
B_n &= &\E\left( \left| \int_{0}^{\frac{T}{n}} u_s \delta B_s\right|^{\frac{1}{H}} \right)+ \E \left( \sum_{i=1}^{n-1}\left|\int_{t_i^n}^{t_{i+1}^n} u_s\delta B_s\right|^{\frac{1}{H}} \right) \\
& := &K_1^n+K_2^n.
\end{eqnarray*}
Using estimate (\ref{ineq2}) with $p=\frac{1}{H}$, it follows that
\begin{eqnarray*}
K_1^n
&\leq & C\left(\| u\| ^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}+\| Du\|^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}\right)n^{-1}+ \left(L_{u}^{\frac{1}{H}} n^{\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}n^{\frac{\alpha_2}{H}}\right)n^{-\frac{\gamma}{H}-1}
\\ & \leq & C \left(n^{-1} +n^{\frac{\alpha_1}{H}-\frac{\gamma}{H}-1}+n^{\frac{\alpha_2}{H}-\frac{\gamma}{H}-1}\right).
\end{eqnarray*}
Therefore, $K_1^n$ is bounded since $\alpha_1 <\gamma +H$ and $\alpha_2 <\gamma +H$. In a similar way, estimate (\ref{ineq1}) leads to
\begin{eqnarray*}
K_2^n &\leq & C\sum_{i=1}^{n-1}\bigg\{\left(\| u\|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}+\| Du \|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}\right)(t_{i+1}^n-t_{i}^n) \\
&& \qquad\qquad\quad+ \left(L_{u}^{\frac{1}{H}}(t_i^n)^{-\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}(t_{i}^n)^{-\frac{\alpha_2}{H}}\right)(t_{i+1}^n-t_{i}^n)^{\frac{\gamma}{H} +1}\bigg\} \\
& \leq &C \left(1+ n^{\frac{\alpha_1}{H}-\frac{\gamma}{H}-1}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_1}{H}}}+ n^{\frac{\alpha_2}{H}-\frac{\gamma}{H}-1}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_2}{H}}}\right).
\end{eqnarray*}
This proves that $K_2^n$ is bounded and so is $B_n$.
Using H\"{o}lder inequality and the fact that $u$ is bounded in $L^q(\Omega)$ for $q>\frac{1}{H}$, we obtain
\begin{eqnarray*}
C_n
&=&\sum_{i=0}^{n-1}\E\left( |u_{t_{i}^n} |^{\frac{1}{H}} |B_{t_{i+1}^n}-B_{t_{i}^n}|^{\frac{1}{H}}\right)
\\ & \leq & \sum_{i=0}^{n-1}\left[ \E\left(|u_{t_{i}^n}|^{q}\right)\right] ^{\frac{1}{qH}}\left[ \E\left(|B_{t_{i+1}^n}-B_{t_{i}^n}|^{\frac{q}{qH -1}}\right)\right]^{1-\frac{1}{qH}}
\\ &
\leq & C \sum_{i=0}^{n-1}(t_{i+1}^n-t_{i}^n) =CT,
\end{eqnarray*}
and this proves the boundedness of $C_n$.\\
\textit{Step 2.} We prove that $A_n$ converges to zero. Consider the decomposition
\[
\sum_{i=0}^{n-1}|A_{i}^{1,n}| ^{\frac{1}{H}}
=\left|\int_{0}^{\frac{T}{n}} (u_s-u_{0}) \delta B_s\right|^{\frac{1}{H}} + \sum_{i=1}^{n-1}|A_{i}^{1,n} |^{\frac{1}{H}}.
\]
Using estimate (\ref{ineq2}) with $p =\frac{1}{H}$, it follows that
\begin{eqnarray*}
&& \E\left( \left|\int_{0}^{\frac{T}{n}} (u_s-u_{0}) \delta B_s \right|^{\frac{1}{H}} \right) \\
&& \leq C\left[ \|u-u_{0} \|^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}+ \| Du-Du_{0} \|^\frac{1}{H}_{\frac{1}{H},0,\frac{T}{n}}\right]n^{-1}+ \left[L_{u}^{\frac{1}{H}}{n}^{\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}{n}^{\frac{\alpha_2}{H}}\right]{n}^{-\frac{\gamma}{H}-1} \\
&& \leq C n^{-1}\left(1 +{n^{\frac{\alpha_1}{H}-\frac{\gamma}{H}}}+{n^{\frac{\alpha_2}{H}-\frac{\gamma}{H}}}\right).
\end{eqnarray*}
Therefore $ \E\left( \left|\int_{0}^{\frac{T}{n}} (u_s-u_{0}) \delta B_s \right|^{\frac{1}{H}} \right) $ converges to zero as $n$ tends to infinity, since $\alpha_1 <\gamma +H$ and $\alpha_2 <\gamma +H$. We can also prove that $\E \left(\sum_{i=1}^{n-1} |A_{i}^{1,n}|^{\frac{1}{H}} \right)$ converges to zero. In fact, using estimate (\ref{ineq1}) with $p =\frac{1}{H}$, we obtain
\begin{eqnarray*}
\E\left( \sum_{i=1}^{n-1} |A_{i}^{1,n}|^{\frac{1}{H}} \right)
& \leq &
C \sum_{i=1}^{n-1}\Bigg[\left(\| u-u_{t_i^n} \|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}+\| Du-Du_{t_i^n} \|^{\frac{1}{H}}_{\frac{1}{H},t_i^n,t_{i+1}^n}\right)(t_{i+1}^n-t_{i}^n)\\
&& \qquad\qquad\quad+ \left(L_{u}^{\frac{1}{H}}(t_i^n)^{-\frac{\alpha_1}{H}}+L_{Du}^{\frac{1}{H}}(t_{i}^n)^{-\frac{\alpha_2}{H}}\right)(t_{i+1}^n-t_{i}^n)^{\frac{\gamma}{H} +1}\Bigg] \\
& & \leq C n^{-\frac{\gamma}{H}-1}\left({n^{\frac{\alpha_1}{H}}}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_1}{H}}}+ {n^{\frac{\alpha_2}{H}}}\displaystyle\sum_{i=1}^{n-1}{i^{-\frac{\alpha_2}{H}}}\right),
\end{eqnarray*}
where we have used the fact that
\begin{equation*}
\| u-u_{t_i^n} \|_{\frac{1}{H},t_i^n,t_{i+1}^n} \leq L_{u}T^{\gamma-\alpha_1} i^{-\alpha_1} n^{\alpha_1-\gamma},
\end{equation*}
and
\begin{equation*}
\| Du-Du_{t_i^n} \|_{\frac{1}{H},t_i^n,t_{i+1}^n}
\leq L_{Du}T^{\gamma-\alpha_2} i^{-\alpha_2} n^{\alpha_2-\gamma}.
\end{equation*}
From the above computations, it follows that $\E\left( \sum_{i=1}^{n-1}|A_{i}^{1,n}|^{\frac{1}{H}} \right)$ converges to zero as $n$ goes to infinity. Together with the convergence of the $i=0$ term established above, we conclude that
\begin{equation}\label{term1}
\lim _{n \rightarrow \infty} \E\left( \sum_{i=0}^{n-1} |A_{i}^{1,n}|^{\frac{1}{H}} \right) =0.
\end{equation}
Second, let us prove that $\E \left(\sum_{i=1}^{n-1}|A_{i}^{2,n}|^{\frac{1}{H}} \right)$ converges to zero as $n$ tends to infinity. It follows from (\ref{ext}) that each term $A_i^{2,n}$ can be expressed as
\begin{equation*}
A_{i}^{2,n}
=\int_0^T D_su_{t_i^n}\dfrac{\partial}{\partial s}\bigg(R(s,t_{i+1}^n)-R(s,t_{i}^n)\bigg)ds.
\end{equation*}
Therefore we have the following decomposition
\begin{equation*}
A_{i}^{2,n}:= J_1^{i,n}+J_2^{i,n}+J_3^{i,n},
\end{equation*}
where
\begin{eqnarray*}
J_1^{i,n} &=& \frac 12
\int_0^{t_{i}^n} D_su_{t_i^n}\dfrac{\partial}{\partial s}\left((t_{i}^n-s)^{2H}-(t_{i+1}^n-s)^{2H}\right)ds, \\
J_2^{i,n} &=&\frac 12 \int^{t_{i+1}^n}_{t_{i}^n} D_su_{t_i^n}\dfrac{\partial}{\partial s}\left((s-t_{i}^n)^{2H}-(t_{i+1}^n-s)^{2H}\right)ds, \\
J_3^{i,n}&=& \frac 12 \int_{t_{i+1}^n}^{T} D_su_{t_i^n}\frac{\partial}{\partial s}\left((s-t_{i}^n)^{2H}-(s-t_{i+1}^n)^{2H}\right)ds.
\end{eqnarray*}
Using Minkowski inequality and assumption (\ref{assump3}), we obtain
\begin{eqnarray*}
\E \left( \sum_{i=0}^{n-1}|J_1^{i,n}|^{\frac{1}{H}} \right)
& \leq & H \sum_{i=0}^{n-1}\left[ \int_0^{t_{i}^n} \Vert D_su_{t_i^n}\Vert_{L^{\frac{1}{H}}(\Omega)} \left|(t_{i+1}^n-s)^{2H-1}-(t_{i}^n-s)^{2H-1}\right|ds\right]^{\frac{1}{H}} \\
& \leq & C \sum_{i=1}^{n-1}(t_{i}^n)^{-\frac{\alpha_3}{H}}\left[ \int_0^{t_{i}^n} \left[(t_{i}^n-s)^{2H-1}-(t_{i+1}^n-s)^{2H-1}\right]ds\right]^{\frac{1}{H}} \\
& =& C \sum_{i=1}^{n-1}(t_{i}^n)^{-\frac{\alpha_3}{H}}\left[ (t_{i+1}^n-t_{i}^n)^{2H}-\left[(t_{i+1}^n)^{2H}-(t_{i}^n)^{2H}\right]\right]^{\frac{1}{H}} \\
& \leq & C \sum_{i=1}^{n-1}(t_{i}^n)^{-\frac{\alpha_3}{H}}(t_{i+1}^n-t_{i}^n)^{2} \\
& \leq & C {n^{\frac{\alpha_3}{H} -2}}\sum_{i=1}^{n-1} {i^{-\frac{\alpha_3}{H}}}.
\end{eqnarray*}
Taking into account that $\alpha_3 <2H$, we obtain that $\E \left( \sum_{i=0}^{n-1}|J_1^{i,n}|^{\frac{1}{H}} \right) $ converges to zero as $n$ tends to infinity. By means of similar arguments, we can show that $\E \left( \sum_{i=0}^{n-1}|J_2^{i,n}|^{\frac{1}{H}} \right) $ and $\E \left( \sum_{i=0}^{n-1}|J_3^{i,n}|^{\frac{1}{H}} \right) $ converge to zero as $n$ tends to infinity. Therefore,
\begin{equation}\label{term2}
\lim _{n \rightarrow \infty} \E\left( \sum_{i=1}^{n-1}|A_{i}^{2,n}|^{\frac{1}{H}} \right) =0.
\end{equation}
Consequently, from (\ref{term1}) and (\ref{term2}) we deduce that $A_n$ converges to zero as $n$ goes to infinity.\\
\noindent
\textit{Step 3.} In order to show that the term $D_n$ converges to zero as $n$ tends to infinity, we replace $n$ by the product $nm$ and first let $m$ tend to infinity. That is, we consider the partition of the interval $[0,T]$ given by $0=t_0^{nm}<\cdots <t_{nm}^{nm} = T$ and we define
\begin{eqnarray}
Z^{n,m} &:=& \notag
\left| \sum_{i=0}^{nm-1} |u_{t_{i}^{nm}}|^{\frac{1}{H}}|\Delta_{i}^{nm}B|^{\frac{1}{H}}
-e_H \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\right| \\ \notag
& = & \bigg| \sum_{j=0}^{n-1} \bigg[\sum_{i=jm}^{(j+1)m -1} \left(|u_{t_{i}^{nm}}|^{\frac{1}{H}}-|u_{t_{j}^{n}}|^{\frac{1}{H}}\right)|\Delta_{i}^{nm}B|^{\frac{1}{H}} \\ \notag
& & \qquad +|u_{t_{j}^{n}}|^{\frac{1}{H}}\left(\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right)\bigg]\bigg|. \\ \notag
& \leq & \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} \left||u_{t_{i}^{nm}}|^{\frac{1}{H}}-|u_{t_{j}^{n}}|^{\frac{1}{H}}\right||\Delta_{i}^{nm}B|^{\frac{1}{H}} \\ \notag
& &\qquad +\sum_{j=0}^{n-1}|u_{t_{j}^{n}}|^{\frac{1}{H}}\left|\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right| \\ \notag \label{eq3}
& := & Z_1^{n,m} + Z_2^{n,m}.
\end{eqnarray}
By the mean value theorem, we can write
\[
Z^{n,m}_1 \le
\frac 1H \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} \left|u_{t_{i}^{nm}}-u_{t_{j}^{n}}\right|\left(|u_{t_{i}^{nm}}|^{\frac{1}{H}-1}+|u_{t_{j}^{n}}|^{\frac{1}{H}-1}\right)|\Delta_{i}^{nm}B|^{\frac{1}{H}}.
\]
Using H\"{o}lder inequality, assumption (\ref{assump11}) as well as the boundedness of $u$ in $L^q(\Omega)$ for some $q>\frac{1}{H}$, we obtain
\[
\E (Z^{n,m}_1)
\leq Cn^{-1}m^{-1} \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} (t_j^n)^{-{\alpha_1}}(t_i^{nm}-t_j^{n})^{{\gamma}}
\leq {C}{n^{{\alpha_1}-{\gamma} -1}} \sum_{j=0}^{n-1} {j^{{-\alpha_ 1}}},
\]
which implies
\begin{equation}
\lim_{n\rightarrow \infty}\sup_{m\ge 1}\E (Z^{n,m}_1 )= 0. \label{equ6}
\end{equation}
On the other hand, using the H\"{o}lder inequality and the fact that $u$ is bounded in $L^q(\Omega)$ for some $q>\frac{1}{H}$, we have
\begin{eqnarray*}
\E(Z^{n,m}_2)
&\leq &
\sum_{j=0}^{n-1}\left[ \E(|u_{t_{j}^{n}}|^q)\right]^{\frac{1}{qH}}\left[ \E \left(\left|\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right|^{\frac{qH}{qH-1}} \right)\right]^{1-\frac{1}{qH}} \\
& \leq & C \sum_{j=0}^{n-1}\left[ \E \left( \left|\sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})\right|^{\frac{qH}{qH-1}} \right)\right]^{1-\frac{1}{qH}}.
\end{eqnarray*}
For any fixed $n\ge 1$, by the Ergodic Theorem the sequence $ \sum_{i=jm}^{(j+1)m -1}|\Delta_{i}^{nm}B|^{\frac{1}{H}}-e_H (t_{j+1}^{n} -t_{j}^{n})$ converges to $0$ in $L^s$ as $m$ tends to infinity, for every $s>1$. This implies that, for any $n\ge 1$,
\begin{equation}
\lim_{m\rightarrow \infty} \E (Z^{n,m}_2 )= 0. \label{equ7}
\end{equation}
Therefore, it follows from (\ref{equ6}) and (\ref{equ7}) that
\begin{equation}
\lim_{n\rightarrow \infty} \lim_{m\rightarrow \infty} \E (Z^{n,m})= 0. \label{equ8}
\end{equation}
By the mean value theorem, we can write
\begin{eqnarray*}
&& \Big| \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n}) -\displaystyle\int_0^T|u_s|^{\frac{1}{H}} ds\Big|
\leq \sum_{j=0}^{n-1}\int_{t^n_j}^{t^n_{j+1}}\left||u_{t^n_j}|^{\frac{1}{H}}-|u_s|^{\frac{1}{H}}\right| ds \\
& & \qquad \qquad \leq \frac 1H \sum_{j=0}^{n-1}\int_{t^n_j}^{t^n_{j+1}}|u_{t^n_j}-u_s| \left(|u_{t^n_j}|^{\frac{1}{H} -1}+|u_s|^{\frac{1}{H} -1}\right) ds.
\end{eqnarray*}
Then, applying the H\"{o}lder inequality and assumption (\ref{assump11}) yields
\begin{eqnarray*}
\E \left( \left| \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n}) -\displaystyle\int_0^T|u_s|^{\frac{1}{H}} ds\right| \right)
&\leq& C \sum_{j=1}^{n-1}\displaystyle\int_{t^n_j}^{t^n_{j+1}} (t^n_j)^{-\alpha_1} ({t^n_{j+1}} -{t^n_{j}})^{\gamma} ds + Cn^{-1} \\
&\leq &C n^{\alpha_1-{\gamma}-1} \sum_{i=1}^{n-1}{i^{-{\alpha_1}}}+Cn^{-1}.
\end{eqnarray*}
This proves that $ \sum_{j=0}^{n-1} |u_{t_{j}^{n}}|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})$
converges in $L^1$ to $\displaystyle\int_0^T|u_s|^{\frac{1}{H}} ds$ as $n$ tends to infinity. This convergence, together with (\ref{equ8}), implies that $D_n$ converges to zero as $n$ goes to infinity, which concludes the proof of the theorem.
\eop
\section{Divergence integral with respect to a $d$-dimensional fBm}
The purpose of this section is to generalize Theorem \ref{the2} to multidimensional processes. In order to proceed with this generalization, we first introduce the following notation.
Consider a $d$-dimensional fractional Brownian motion ($d\ge 2$)
$$
B=\{B_t, t\in [0,T]\} = \{ (B_t^{(1)}, B_t^{(2)},\dots, B_t^{(d)}),\,\, {t\in [0,T]}\}
$$
with Hurst parameter $H\in (0,1)$ defined in
a complete probability space $(\Omega, \mathcal{F},P)$, where $\mathcal{F}$ is generated by $B$. That is, the components $B^{(i)}$, $i=1,\dots,d$, are independent fractional Brownian motions with Hurst parameter $H$. We can define the derivative and divergence operators, $D^{(i)}$ and $\delta^{(i)}$, with respect to each component $B^{(i)}$, as in Section 2. Denote by $\mathbb{D}_i^{1,p}(\EuFrak H)$ the associated Sobolev spaces. We assume that these spaces include functionals depending on all the components of $B$, and not only on the $i$th component.
The Hilbert space $\EuFrak H_d$ associated with $B$ is the completion of the space $\mathcal{E}_d$ of step functions
$\varphi =(\varphi^{(1)},\dots,\varphi^{(d)}) : [0,T]\rightarrow \R^d$ with respect to the inner product
\[
\langle \varphi, \phi \rangle_{\EuFrak H_d} =\sum_{k=1}^d \langle \varphi^{(k)}, \phi^{(k)} \rangle_\EuFrak H.
\]
We can develop a Malliavin calculus for the process $B$, based on the Hilbert space $\EuFrak H_d$.
We denote by $\mathcal{S}_{d}$ the space of smooth and cylindrical random variables of the
form
$$
F=f\left( B(\varphi _{1}), \ldots ,
B(\varphi_{n})\right),
$$
where $f\in C_{b}^{\infty}(\R^{n})$,
$\varphi_{j} =(\varphi_{j}^{(1)},\dots,\varphi_{j}^{(d)}) \in \mathcal{E}_d$, and $B(\varphi_{j}) =\displaystyle\sum_{k=1}^{d} B ^{(k)} (\varphi_{j}^{(k)})$.
Denote by $\langle \cdot , \cdot \rangle$ the usual inner product on $\R^d$. The following result has been proved in \cite{GN} using the Ergodic Theorem.
\begin{lemma} \label{lem3}
Let $F$ be a bounded random variable with values in $\R^d$. Then, we have
$$
V_n^{\frac{1}{H}}(\langle F, B \rangle) \overset{L^{1}(\Omega)}{\longrightarrow} \displaystyle\int_{\R^d}\left[\displaystyle\int_ 0^T|\langle F, \xi\rangle|^{\frac{1}{H}}ds\right]\nu(d\xi),
$$
as $n$ tends to infinity, where $\nu$ is the normal distribution $N(0, I)$ on $\R^d$.
\end{lemma}
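Lemma \ref{lem3} can be illustrated numerically. The sketch below (Python with NumPy, our choice of tooling) simulates a one-dimensional fBm path by a Cholesky factorization of its covariance and compares the $\frac 1H$-variation of the path on $[0,T]$ with the limit predicted by Lemma \ref{lem3} for a constant unit vector $F$; in that case $\langle F, B\rangle$ is again a one-dimensional fBm and the limit reduces to $e_H T$. Here we read $V_n^{\frac 1H}$ as the sum of the $\frac 1H$-powers of the increments over the partition and $e_H=\E(|N(0,1)|^{\frac 1H})$, which is consistent with the expression $F_n$ in the proof of Theorem \ref{the5}; this normalization is our reading of the notation introduced earlier in the paper, and the sketch is an illustration only.

\begin{verbatim}
import numpy as np
from math import gamma, sqrt, pi

# e_H = E|N(0,1)|^(1/H); assumed normalization of the variation constant.
def e_H(H):
    p = 1.0 / H
    return 2**(p/2) * gamma((p + 1)/2) / sqrt(pi)

# One fBm path on a uniform grid, via a Cholesky factorization of the
# covariance R(s,t) = (s^{2H} + t^{2H} - |t-s|^{2H}) / 2.
def fbm_path(n, T, H, rng):
    t = np.linspace(T/n, T, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))
    return np.concatenate([[0.0], L @ rng.standard_normal(n)])

rng = np.random.default_rng(1)
H, T, n = 0.3, 1.0, 2000
B = fbm_path(n, T, H, rng)
# 1/H-variation over the partition versus the predicted limit e_H * T
print(np.sum(np.abs(np.diff(B))**(1.0/H)), e_H(H) * T)
\end{verbatim}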
The following theorem is the multidimensional version of Theorem \ref{the2}.
\begin{theorem}\label{the5}
Suppose that for each $i=1,\dots, d$, $u^{(i)}\in \mathbb{D}^{1, 2}(\EuFrak H)$ is a stochastic process satisfying Hypothesis $\bf{(A.3)}$. Set $u_t=(u_t^{(1)},\dots,u_t^{(d)})$ and consider the divergence integral process $X=\{X_t, t \in [0,T]\}$ defined by $X_t :=\sum_{i=1}^d \int_0^t u_s^{(i)}\delta B_s^{(i)}$. Then, we have
$$
V_n^{\frac{1}{H}}(X) \overset{L^{1}(\Omega)}{\longrightarrow} \displaystyle\int_{\R^d}\left[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\right]\nu(d\xi),
$$
as $n$ tends to infinity, where $\nu$ is the normal distribution $N(0, I)$ on $\R^d$.
\end{theorem}
\bop. This theorem can be proved by the same arguments as in the proof of Theorem \ref{the2}. We need to show that the expression
\[
F_n:= \E\left(\left|\sum_{i=0}^{n-1}\left|\sum_{k=1}^d\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s^{(k)} \delta B_s^{(k)}\right|^{\frac{1}{H}}-\displaystyle\int_{\R^d}\bigg[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\bigg]\nu(d\xi)\right|\right),
\]
converges to zero as $n$ tends to infinity.
Using the decomposition (\ref{decom}) for $\displaystyle\int_{t_i^n}^{t_{i+1}^n} u_s^{(k)} \delta B_s^{(k)}$, and applying the same techniques as in the proof of Theorem \ref{the2}, it is not difficult to see that
\begin{equation*}\label{Fn}
F_n \le CA_{n}^H(B_n + C_n)^{1-H} + D_n,
\end{equation*}
where $B_n,$ $C_n$ are bounded, $A_n$ converges to zero as $n$ tends to infinity, and $D_n$ is given by
\begin{eqnarray*}
D_n &:=& \E\left(\left|\sum_{i=0}^{n-1}| \langle u_{t_{i}^{n}}, \Delta_{i}^{n}B\rangle |^{\frac{1}{H}}-\displaystyle\int_{\R^d}\bigg[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\bigg]\nu(d\xi)\right|\right).
\end{eqnarray*}
It only remains to show that $D_n$ converges to zero as $n$ tends to infinity. To do this, as in the proof of Theorem \ref{the2}, we introduce the partition of interval $[0,T]$ given by $0=t_0^{nm}<\cdots <t_{nm}^{nm} = T$, and we write
\begin{eqnarray}
V^{n,m} &:=& \notag
\left| \sum_{i=0}^{nm-1} |\langle u_{t_{i}^{nm}}, \Delta_{i}^{nm}B\rangle|^{\frac{1}{H}}-\sum_{j=0}^{n-1}\displaystyle\int_{\R^d}
|\langle u_{t_{j}^{n}}, \xi\rangle|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\nu(d\xi)\right| \\ \notag
& \leq & \sum_{j=0}^{n-1} \sum_{i=jm}^{(j+1)m -1} |\langle u_{t_{i}^{nm}}-u_{t_{j}^{n}}, \Delta_{i}^{nm}B\rangle|^{\frac{1}{H}}\\ \notag
& & \qquad +\sum_{j=0}^{n-1}\bigg|\sum_{i=jm}^{(j+1)m -1}|\langle u_{t_{j}^{n}}, \Delta_{i}^{nm}B\rangle|^{\frac{1}{H}}-\displaystyle\int_{\R^d}|\langle u_{t_{j}^{n}}, \xi\rangle|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\nu(d\xi)\bigg| \\ \notag
& := & V_1^{n,m} + V_2^{n,m}.
\end{eqnarray}
Then, using the same arguments as in Theorem \ref{the2}, we have
\begin{equation}\label{equ6m}
\lim_{n\rightarrow \infty}\sup_{m\ge 1}\E (V^{n,m}_1 )= 0.
\end{equation}
On the other hand, Lemma \ref{lem3} implies that for all $n\geq 1$
\begin{equation}\label{equ7m}
\lim_{m\rightarrow \infty} \E (V^{n,m}_2 )= 0.
\end{equation}
Moreover, it is not difficult to show that
$$
\displaystyle\lim_{n\rightarrow \infty}\E\left|\sum_{j=0}^{n-1}\displaystyle\int_{\R^d}
|\langle u_{t_{j}^{n}}, \xi\rangle|^{\frac{1}{H}}(t_{j+1}^{n} -t_{j}^{n})\nu(d\xi)-\displaystyle\int_{\R^d}\bigg[\displaystyle\int_ 0^T|\langle u_s, \xi\rangle|^{\frac{1}{H}}ds\bigg]\nu(d\xi)\right| =0.
$$
Finally, this convergence, together with (\ref{equ6m}) and (\ref{equ7m}), implies that $D_n$ converges to zero as $n$ tends to infinity. This completes the proof of
Theorem \ref{the5}.\eop
\section{Fractional Bessel process}
In this section, we are going to apply the results of the previous section to the fractional Bessel process.
Let $B$ be a $d$-dimensional fractional Brownian motion ($d\ge 2$).
The process $R= \{R_t, t\in [0,T]\}$, defined by $R_t= \|B_t\|$,
is called the fractional Bessel process of dimension $d$ and Hurst parameter $H$.
It has been proved in \cite{CN} that, for $H> \frac12$, the fractional Bessel process $R$ has the following representation
\begin{equation}\label{rep}
R_t = \displaystyle\sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)} + H(d-1)\displaystyle\int_ 0^t \dfrac{s^{2H-1}}{R_s}ds.
\end{equation}
This representation (\ref{rep}) is similar to the one obtained for Bessel processes with respect to the standard Brownian motion (see, for instance, Karatzas and Shreve \cite{KS}). Indeed, if $W$ is a $d$-dimensional Brownian motion and $R_t =\|W_t\|$, then
$$
R_t = \displaystyle\sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{W_s^{(i)}}{R_s}dW_s^{(i)} + \frac{d-1}2\displaystyle\int_ 0^t \dfrac{ds}{R_s}.
$$
The goal of this section is to extend the integral representation (\ref{rep}) to the case $H<\frac 12$. We cannot apply It\^o's formula directly, because the function $\|x\|$ is not smooth at the origin. We need the following extension of the domain of the divergence operator to processes with trajectories in $L^{\beta}([0,T], \R^d)$, where $\beta >\frac 1{2H}$.
\begin{definition}\label{def3}
Fix $\beta >\frac 1{2H}$.
We say that a $d$-dimensional stochastic process $u=(u^{(1)},\dots , u^{(d)})\in L^1(\Omega; L^{\beta}([0,T], \R^d))$ belongs to the extended domain of the divergence ${\rm Dom}^*\delta$, if there exists $q>1$ such that
\begin{equation} \label{78}
|\E\langle u, DF\rangle_{\EuFrak H_d}|= \left |\sum_{i=1}^{d}\E(\langle u^{(i)}, D^{(i)} F\rangle_{\EuFrak H})\right | \leq c_u \| F\|_{L^q(\Omega)},
\end{equation}
for every smooth and cylindrical random variable $F \in \mathcal{S}_{d}$, where $c_u$ is some constant depending on $u$. In this case, $\delta(u)$ is the element of $L^{p}(\Omega)$, where $p$ is the conjugate exponent of $q$, defined by the duality relationship
$$
\E(\langle u, DF\rangle_{\EuFrak H_d} )=\E(\delta(u) F),
$$
for every smooth and cylindrical random variable $F \in \mathcal{S}_{d}$.
\end{definition}
Notice that the inner product in (\ref{78}) is well defined by formula (\ref{ext}). If $u\mathbf{1}_{[0,t]}$ belongs to the extended domain of the divergence, we will make use of the notation
\[
\delta(u\mathbf{1}_{[0,t]}) =\sum_{i=1}^d \int_0^t u^{(i)}_s \delta B_s^{(i)}.
\]
\begin{remark} Notice that, since $\beta >\frac 1{2H}$, we have $\EuFrak H_{d}\subset L^{\beta}([0,T], \R^d)$ and hence ${\rm Dom}\, \delta \subset {\rm Dom}^*\delta$.
\end{remark}
\begin{remark} \label{rem5.1}
It should be noted that the process $R$ satisfies the following
\begin{equation}\label{eq6}
\E(R_t^{-q}) =C t^{-Hq} \displaystyle\int_0^{\infty} y^{d-1-q} e^{-\frac{y^2}{2}} dy=: K_q t^{-Hq},
\end{equation}
for every $q<d$, where $K_q$ is a positive constant. This property will be used later.
\end{remark}
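As a quick numerical sanity check of \eqref{eq6} (not needed for the proofs), note that for fixed $t$ the vector $B_t$ has law $N(0, t^{2H} I_d)$, so that $R_t = t^{H}\Vert Z\Vert$ with $Z$ standard normal in $\R^d$, and the identity reduces to a negative moment of the chi distribution, which is finite exactly when $q<d$. The sketch below (Python with NumPy and SciPy, our choice of tooling) compares a Monte Carlo estimate with the equivalent closed form $K_q = 2^{-q/2}\Gamma((d-q)/2)/\Gamma(d/2)$, one way of evaluating the constant in \eqref{eq6}.

\begin{verbatim}
import numpy as np
from scipy.special import gamma

# At a fixed time t, R_t = t^H * ||Z|| with Z ~ N(0, I_d), so
# E(R_t^{-q}) = t^{-Hq} * E(||Z||^{-q}), which is finite exactly when q < d.
rng = np.random.default_rng(0)
d, H, q, t = 3, 0.3, 1, 0.5           # illustrative parameters with q < d
Z = rng.standard_normal((10**6, d))
R_t = t**H * np.linalg.norm(Z, axis=1)
mc = np.mean(R_t**(-q))

# chi-distribution moment: E(||Z||^{-q}) = 2^{-q/2} Gamma((d-q)/2) / Gamma(d/2)
exact = t**(-H*q) * 2**(-q/2) * gamma((d - q)/2) / gamma(d/2)
print(mc, exact)                      # the two values should agree closely
\end{verbatim}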
We recall the following multidimensional It\^{o} formula for the fBm (see \cite{HMS}).
This formula requires a notion of extended domain of the divergence operator, ${\rm Dom}^{E} \delta$, introduced in \cite[Definition 3.9]{HMS}, which is slightly different from Definition
\ref{def3}, because we require $u\in L^1(\Omega; L^{\beta}([0,T], \R^d))$ (instead of $u\in L^2(\Omega \times [0,T]; \mathbb{R}^d)$) and the extended divergence belongs to $L^p(\Omega)$ (instead of $L^2(\Omega)$). Our notion of extended domain will be useful to handle the case of the fractional Bessel process. Moreover, the class of test functionals is not the same, although this is not relevant because both classes are dense in $L^p(\Omega)$.
\begin{theorem}
Let $B$ be a $d$-dimensional fractional Brownian motion with Hurst parameter $H<\frac 12$. Suppose that $F\in C^2(\R^d)$ satisfies the growth condition
\begin{equation}\label{growth}
\max\left\{|F(x)|, \left |\frac{\partial F}{\partial x_i}(x)\right|, \left| \frac{\partial^2 F}{\partial x^2_i}(x) \right| , i=1,\dots,d\right\}\leq ce^{\lambda \|x\|^2}, \qquad x \in\R^d,
\end{equation}
where $c$ and $\lambda$ are positive constants such that $\lambda <\dfrac{T^{-2H}}{4d }$. Then, for each $i=1,...,d$ and $t\in [0,T]$, the process $\bf{1}_{[0,t]}\dfrac{\partial F}{\partial x_i}(B_t) \in {\rm Dom}^{E} \delta$, and the following formula holds
\begin{equation} \label{ito}
F(B_t) = F(0)+ \sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{\partial F}{\partial x_i}(B_s)\delta B_s^{(i)}+
H \sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{\partial^2 F}{\partial x^2_i}(B_s)s^{2H-1}ds,
\end{equation}
where ${\rm Dom}^{E} \delta$ is the extended domain of the divergence operator in the sense of Definition 3.9 in \cite{HMS}.
\end{theorem}
The next result is a change of variable formula for the fractional Bessel process in the case $H<\frac{1}{2}$.
\begin{theorem}\label{pro1}
Let $H<\frac{1}{2}$, and let $R=\{R_t, t\in [0,T]\}$ be the fractional Bessel process. Set $u_t^{(i)} =\frac {B^{(i)}_t}{R_t}$ and $u_t=(u_t^{(1)},\dots,u_t^{(d)})$, for
$t\in [0,T]$. Then, we have the following results:
\begin{enumerate}
\item[(i)] For any $t\in (0,T]$, the process $\{u_s \mathbf{1}_{[0,t]}(s), s\in [0,T]\}$ belongs to the extended domain ${\rm Dom}^*\delta$ and the representation (\ref{rep}) holds true.
\item[(ii)] If $H>\frac{1}{4}$, for any $t\in [0,T]$, the process $u\mathbf{1}_{[0,t]} $ belongs to $L^2(\Omega;\EuFrak H_d)$ and to
the domain of $\delta$ in $L^p(\Omega)$ for any $p<d$.
\end{enumerate}
\end{theorem}
\bop. Let us first prove part (i). Since the function $\|x\|$ is not differentiable at the origin, the It\^{o} formula (\ref{ito}) cannot be applied and we need to make a suitable approximation. For $\varepsilon >0$, consider the function $F_{\varepsilon}(x) = (\| x\|^2 +\varepsilon^2)^{\frac12}$, which is smooth and satisfies condition (\ref{growth}). Applying It\^{o}'s formula (\ref{ito}) we have
\begin{equation}\label{eq7}
F_{\varepsilon}(B_t) = \varepsilon+ \sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{B_s^{(i)}}{(R_s^2 +\varepsilon^2)^{\frac12}}\delta B_s^{(i)}+
Hd \int_ 0^t \dfrac{s^{2H-1}}{(R_s^2 +\varepsilon^2)^{\frac12}}ds-H \displaystyle\int_ 0^t \frac{s^{2H-1} R_s^2}{(R_s^2 +\varepsilon^2)^{\frac32}}ds.
\end{equation}
Clearly, $F_{\varepsilon}(B_t)$ converges to $R_t$ in $L^p$ for any $p\ge 1$. Let $1\leq p <d$. Using Minkowski's inequality, and taking into account Remark \ref{rem5.1}, we have
\begin{eqnarray*}
\E\left( \left| \int_ 0^t s^{2H-1}R_s^{-1}ds\right|^p \right)
&\leq &\left(\int_ 0^t s^{2H-1}\left(\E(R_s^{-p})\right)^{\frac{1}{p}}ds\right)^p \\
&\leq & K_p \left(\int_ 0^t s^{-H} s^{2H-1}ds\right)^p \le K_p H^{-p} t^{pH}.
\end{eqnarray*}
Since for every $\varepsilon >0,$ $ \frac{s^{2H-1}}{(R_s^2 +\varepsilon^2)^{\frac12}}\leq s^{2H-1}R_s^{-1}$, the dominated convergence theorem leads to the fact that $\int_ 0^t \frac{s^{2H-1}}{(R_s^2 +\varepsilon^2)^{\frac12}}ds$ converges to $\int_ 0^t \frac{s^{2H-1}}{R_s}ds$ in $L^{p}$ for any $1\leq p<d$, as $\varepsilon$ converges to zero.
In the same way, we prove that $\int_ 0^t \frac{s^{2H-1}R_s^2}{(R_s^2 +\varepsilon^2)^{\frac32}}ds$ converges to $\int_ 0^t \frac{s^{2H-1}}{R_s}ds$ in $L^{p}$ for any $1\leq p<d$, as $\varepsilon$ converges to zero.
Coming back to (\ref{eq7}), we deduce that $ \sum_{i=1}^{d}\int_ 0^t\frac{B_s^{(i)}}{(R_s^2 +\varepsilon^2)^{\frac12}}\delta B_s^{(i)}$ converges in $L^{p}$, for any $1\leq p<d$, to some limit $G_t$ as $\varepsilon$ tends to zero.
We are going to show that the process $u\mathbf{1}_{[0,t]}$ belongs to the extended domain of the divergence and $\delta(u\mathbf{1}_{[0,t]})=G_t$. Let $F$ be a smooth and cylindrical random variable in $\mathcal{S}_{d}$.
For $i=1,\dots,d$, let $u_s^{\varepsilon, (i)} =\frac{B_s^{(i)}}{(R_s^2 +\varepsilon^2)^{\frac12}}$, and $u_s^\varepsilon= (u_s^{\varepsilon, (1)}, \dots, u_s^{\varepsilon, (d)})$. By the duality relationship we obtain
\begin{equation*}\label{duality}
\E (\langle u^{\varepsilon}\bf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d} ) = \E(\delta(u^{\varepsilon}\bf{1}_{[0,t]}) F).
\end{equation*}
Taking into account that $\delta(u^{\varepsilon}\bf{1}_{[0,t]})$ converges to $G_t$ in $L^p$, and that
\[
\lim_{\varepsilon \rightarrow 0} \E(\langle u^{\varepsilon}\bf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d}) =\E(\langle u \bf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d}),
\]
since the components of $u$ are bounded by one, we deduce that
\[
\E(\langle u\mathbf{1}_{[0,t]}, DF\rangle_{\EuFrak H_d})=\E(G_t F).
\]
This implies that $u\mathbf{1}_{[0,t]}$ belongs to the extended domain of the divergence and $\delta(u\mathbf{1}_{[0,t]})=G_t$.
To show part (ii), let us assume that $H>\frac14$. We first show that for any $i=1,\dots,d$, $u^{(i)} \in L^2( \Omega; \EuFrak H)$. We can write
\begin{eqnarray*}
|u_t^{(i)}-u_s^{(i)}|
& \leq &{|B_t^{(i)} - B_s^{(i)}|}{R_t^{-1}} + {|R_s - R_t||B_s^{(i)}|}{R_t^{-1}R_s^{-1}} \\
& \leq & {\| B_t - B_s\| }{R_t^{-1}} + {|R_s - R_t|\| B_s\|}{R_t^{-1}R_s^{-1}} \\
& \leq & 2{\| B_t - B_s\| }{R_t^{-1}},
\end{eqnarray*}
where we have used the fact that
$$
|R_s - R_t|=\left| \| B_t\| - \| B_s\|\right| \leq \| B_t - B_s\|.
$$
Since $|u_t^{(i)}-u_s^{(i)}|\leq 2$, we obtain
\[
|u_t^{(i)}-u_s^{(i)}| \leq 2\left({\| B_t - B_s\| }{R_t^{-1}}\wedge 1\right),
\]
which implies
\begin{equation}\label{eqqq1}
|u_t^{(i)}-u_s^{(i)}| \leq 2\| B_t - B_s\|^{\alpha} {R_t^{-\alpha}},
\end{equation}
for every $\alpha\in[0,1]$.
We can write, using (\ref{est01}),
\begin{eqnarray*}
\E(\| u^{(i)}\|_{\EuFrak H}^2) & \leq & k_H \E \left( \int_ 0^T (u_s^{(i)})^2[(T-s)^{2H-1}+ s^{2H-1}]ds \right) \\
&& + k_H \E\left(\displaystyle\int_0^T\left(\displaystyle\int_s^T |u_t^{(i)}-u_s^{(i)}|(t-s)^{H-\frac32}dt\right)^2 ds\right) \\
& :=& k_H[N_1 + N_2].
\end{eqnarray*}
Since $|u_t^i|\leq 1$, it is clear that $N_1$ is bounded. To estimate $N_2$, choose $\alpha$, $q$ and $p$ such that $\frac{1}{2H} -1<\alpha \leq 1 $, $1<q<\frac{d}{2\alpha}$, and $\frac{1}{p}+\frac{1}{q} =1$. Using inequality (\ref{eqqq1}) and Minkowski and H\"{o}lder inequalities, we get
\begin{eqnarray*}
N_2& \leq & 2
\int_0^T\E\left(\displaystyle\int_s^T \| B_t - B_s\|^{\alpha} {R_t^{-\alpha}}(t-s)^{H-\frac32}dt\right)^2 ds \\
& \leq & 2
\int_0^T\left(\int_s^T\left[ \E( \| B_t -B_s\|^{2\alpha p})\right]^{\frac{1}{2p}} \left[ \E (R_t^{-2\alpha q}) \right]^{\frac{1}{2q}}(t-s)^{H-\frac32}dt\right)^2 ds \\
& \leq &
C\int_0^T\left(\int_s^T(t-s)^{\alpha H} t^{-\alpha H}(t-s)^{ H -\frac32}dt\right)^2 ds \\
& \leq &
C\displaystyle\int_0^T s^{-2\alpha H}(T-s)^{2(\alpha +1)H -1} ds \\
& =& C T^{2H}\beta(-2\alpha H+1, 2(\alpha +1)H).
\end{eqnarray*}
Hence, for $i=1,\dots,d$, $\E (\| u^{(i)}\|_{\EuFrak H}^2) <\infty$ and, therefore, $u\in L^2(\Omega; \EuFrak H_d)$. Moreover, by the first assertion, it follows that for every $F\in \mathcal{S}_{d}$ and for $p<d$,
\begin{equation*}
\E(\langle D F,u \mathbf{1}_{[0,t]}\rangle_{\EuFrak H_d})= \E(G_t F) \leq \|G_t\|_{p} \|F\|_{q}.
\end{equation*}
Therefore, $u\mathbf{1}_{[0,t]}$ belongs to the domain of $\delta$ in $L^p(\Omega)$.
\eop
Notice that, if $d>2$, then we can take $p=2$ in part (ii), and $u\mathbf{1}_{[0,t]}$ belongs to ${\rm Dom}\, \delta$.
Also, we remark that although $u\mathbf{1}_{[0,t]}$ belongs to the (extended) domain of the divergence, this does not imply that each component $u^{(i)}\mathbf{1}_{[0,t]}$ belongs to the domain of $\delta^{(i)}$. In the next theorem, we show that under the stronger condition $2dH^2 >1$, each process $u^{(i)}$ belongs to $\mathbb{D}^{1,2}_i (\EuFrak H)$ and satisfies Hypothesis $\bf{(A.3)}$ of Section 4.
\begin{theorem}\label{pro2}
Suppose that $2dH^2>1$. Let $R=\{R_t, t\in [0,T]\}$ be the fractional Bessel process. Then, for $i=1,2,\dots,d$, the process $u_t^{(i)}=\dfrac{B_t^{(i)}}{R_t}$ satisfies Hypothesis $\bf{(A.3)}$.
\end{theorem}
\bop.
Fix $i=1,\dots, d$. The random variable $u_t^{(i)}$ is bounded and so it is bounded in $L^{q}(\Omega)$ for all $q>\frac{1}{H}$. The Malliavin derivative $D^{(i)} u^{(i)}$ is given by
$$
D^{(i)}_su_t^{(i)} = \left(-R_t^{-3} (B_t^{(i)})^2+R_t^{-1}\right) \bf{1}_{[0,t]}(s)=: \phi_t \bf{1}_{[0,t]}(s).
$$
Notice that
\begin{equation*} \label{89}
\| D^{(i)}u^{(i)}_t \|_{\EuFrak H} \le 2R_t^{-1} t^{H}.
\end{equation*}
This implies $D^{(i)}u^{(i)}_t$ is bounded in $L^{\frac{1}{H}}(\Omega; \EuFrak H)$ because $dH>1$. Indeed, we have
\begin{equation*}
\| D^{(i)}u^{(i)}_t\|_{L^{\frac{1}{H}}(\Omega; \EuFrak H)} \le 2 \left(\E[R_t^{-\frac{1}{H}}]\right)^{H} t^{H} \le C.
\end{equation*}
Let us now prove that $u^{(i)}$ satisfies the inequalities (\ref{assump11}) and (\ref{assump21}), with $p=\frac{1}{H}$.
Let $0<s\le t \le T$.
Using estimate (\ref{eqqq1}) and choosing $\frac{1}{2H} -1<\alpha < Hd\wedge 1$, it follows that for $1< q< \frac{Hd}{\alpha}$ and $p_1>1$ such that $\frac{1}{p_1}+\frac{1}{q} =1$,
\begin{equation*}
\| u_t^{(i)}-u_s^{(i)}\|_{L^{\frac{1}{H}}(\Omega)} \leq 2\left[ \E \left( \| B_t - B_s\|^{\frac{\alpha p_1}{H}} \right) \right]^{\frac{H}{p_1}} \left(\E (R_t^{-\frac{\alpha q}{H}}) \right)^{\frac{H}{q}}
\leq C (t-s)^{\alpha H} s^{-\alpha H}.
\end{equation*}
Hence inequality (\ref{assump11}) is satisfied with $\alpha_1 =\alpha H<\frac12$ and $\gamma =\alpha H>\frac12 -H$.
In order to show inequality (\ref{assump21}) with $p=\frac{1}{H}$, we first write for $0<r \le t \le T$,
\begin{eqnarray} \notag
\| \phi_t\bf{1}_{[0,t]}-\phi_r\bf{1}_{[0,r]} \|_{\EuFrak H}
&\leq & \| \phi_t(\bf{1}_{[0,t]}-\bf{1}_{[0,r]}) \|_{\EuFrak H}+\| (\phi_t-\phi_r)\bf{1}_{[0,r]} \|_{\EuFrak H} \\ \notag
& =& |\phi_t| \| \bf{1}_{[0,t]}-\bf{1}_{[0,r]} \|_{\EuFrak H}+|\phi_t-\phi_r|\| \bf{1}_{[0,r]} \|_{\EuFrak H} \\
& \leq & C\left( R_t^{-1}(t-r)^{H}+|\phi_t-\phi_r|r^{H}\right). \label{phi}
\end{eqnarray}
We have
\begin{eqnarray*}
|\phi_t-\phi_r|& \le & \left| R_t^{-3} (B^{(i)}_t)^2 -R_r^{-3} (B^{(i)}_r)^2 \right| + | R_t^{-1} - R_r^{-1} | \\
&\le& R_t^{-3} R_r^{-3} \left( |R_t^3-R_r^3| (B^{(i)}_r)^2 + R_t^3 |(B^{(i)}_t)^2-(B^{(i)}_r)^2 | \right)+ R_t^{-1} R_r^{-1} |R_t-R_r| \\
&\le & \| B_t- B_r\| \left( 2R_t^{-1} R_r^{-1}+ 2R_t^{-3} R_r + R_t^{-2} + R_r^{-2} \right),
\end{eqnarray*}
and
$$
|\phi_t-\phi_r|\leq |\phi_t|+ |\phi_r| \leq 2(R^{-1}_t +R^{-1}_r).
$$
Put $R_{tr} := R_t^{-1} R_r^{-1}+ R_t^{-3} R_r + R_t^{-2} + R_r^{-2}$. Then, the above inequalities imply
\begin{equation*}
|\phi_t-\phi_r|\leq 4\left[ \left(\| B_t -B_r\|R_{tr}\right)\wedge \left(R_t^{-1}\vee R_r^{-1}\right)\right].
\end{equation*}
By using the same argument as above one can find also that
\begin{equation*}
|\phi_t-\phi_r|\leq 4\left[ \left(\| B_t -B_r\|R_{rt}\right)\wedge \left(R_t^{-1}\vee R_r^{-1}\right)\right].
\end{equation*}
Therefore, for every $\alpha \in [0,1]$, we can write
\begin{eqnarray} \notag
|\phi_t-\phi_r| & \leq& 4\left[ \left(\| B_t -B_r\| (R_{tr}\wedge R_{rt})\right)\wedge\left(R_t^{-1}\vee R_r^{-1}\right) \right]\\ \notag
& \leq &4 \| B_t -B_r\| ^\alpha (R_{tr}^\alpha\wedge R_{rt}^\alpha)\left(R_t^{\alpha-1}\vee R_r^{\alpha-1}\right) \\ \label{79}
&\le & C\| B_t -B_r\|^{\alpha}\left(R_t^{-\alpha-1}\vee R_r^{-\alpha-1}\right).
\end{eqnarray}
Then, substituting (\ref{79}) into (\ref{phi}) yields
\begin{equation*}
\| \phi_t\bf{1}_{[0,t]}-\phi_r\bf{1}_{[0,r]} \|_{\EuFrak H}
\leq C\left( R_t^{-1}(t-r)^{H}+\| B_t -B_r\|^{\alpha}\left(R_t^{-\alpha-1}\vee R_r^{-\alpha-1}\right)r^{H}\right).
\end{equation*}
Choose $\alpha$, $p_1$ and $q$ such that $\frac{1}{2H}-1<\alpha < (Hd-1)\wedge 1$, $1<p_1<\frac{dH}{\alpha +1}$ and $\frac{1}{p_1}+\frac{1}{q}= 1$. Then, we can write
\begin{eqnarray*}
&& \E \left(\| \phi_t\bf{1}_{[0,t]}-\phi_r\bf{1}_{[0,r]} \|_{\EuFrak H}^{\frac{1}{H}} \right) \\
&&\le C \E\left[ R_t^{-\frac{1}{H}}(t-r)+ r\| B_t -B_r\|^{ \frac{\alpha}{H}}\left(R_t^{-\frac{\alpha+1}{H}}\vee R_r^{-\frac{\alpha+1}{H}}\right)\right] \\
& & \leq C \left[C t^{-1}(t-r)+ r\left[ \E \left( \|B_t -B_r\|^{\frac{\alpha q}{H}} \right) \right] ^{\frac{1}{q}}\left[ \E \left( \left(R_t^{-\frac{\alpha+1}{H}}\vee R_r^{-\frac{\alpha+1}{H}}\right)^{p_1} \right)\right]^{\frac{1}{p_1}}\right] \\
& & \leq C\bigg(r^{-1}(t-r)+ r^{-\alpha }(t-r)^{\alpha }\bigg) \\
& & \leq 2C \max(r^{-1}(t-r),r^{-\alpha }(t-r)^{\alpha }),
\end{eqnarray*}
and inequality (\ref{assump21}) is satisfied with $\alpha_2 =H$ and $\gamma =\alpha H$.
Finally, for every $s \le t$, we have
\begin{equation*}
\|D^{(i)}_su^{(i)}_t\|_{L^{\frac{1}{H}}(\Omega)}
\leq \left(\E (R_t^{-\frac{1}{H}} ) \right)^{H}
= C t^{-H},
\end{equation*}
and then assumption (\ref{assump3}) is satisfied with $\alpha_3 = H$.
This ends the proof of Theorem \ref{pro2}.
\eop
\begin{remark}
If $Hd>1$, we can show, using the same arguments as in the proof of Theorem \ref{pro2},
that $u^{(i)} \in \mathbb{D}^{1,2}_i (\EuFrak H)$, for $i=1,2,\dots,d$.
\end{remark}
We now discuss the properties of the process $\Theta= \{\Theta_t, t\in [0,T]\}$ defined by
$$
\Theta_t:= \displaystyle\sum_{i=1}^{d}\displaystyle\int_ 0^t\dfrac{B_s^{(i)}}{R_s}\delta B_s^{(i)}.
$$
By Theorem \ref{pro2}, we have that for every $i=1,\dots,d$, $u_t^{(i)}=\dfrac{B_t^{(i)}}{R_t}$ satisfies Hypothesis $\bf{(A.3)}$ if $2dH^2 >1$. Therefore, applying Theorem \ref{the5}, we have the following corollary.
\begin{corollary}\label{cor}
Suppose that $2dH^2 >1$. Then we have the following
\begin{equation*}
\begin{array}{ll}
V_n^{\frac{1}{H}}(\Theta) \overset{L^{1}(\Omega)}{{\longrightarrow}} \displaystyle\int_{\R^d}\left[\displaystyle\int_ 0^T \left| \left\langle\dfrac{B_s}{R_s}, \xi \right \rangle \right |^{\frac{1}{H}}ds\right]\nu(d\xi),
\end{array}
\end{equation*}
as $n$ tends to infinity, where $\nu$ is the normal distribution $N(0, I)$ on $\R^d$.
\end{corollary}
\begin{proposition} \label{prop7}
The process $\Theta$ is $H$-self-similar.
\end{proposition}
\bop. Let $a>0$. By the representation (\ref{rep}) and the self-similarity of fBm, we have
\begin{eqnarray}\notag
\Theta_{at} &= & R_{at} -H(d-1)\displaystyle\int_0^{at} \dfrac{s^{2H-1}}{R_s}ds
\\ & \overset{d}{=}& a^HR_t -H(d-1)a^H\displaystyle\int_0^t \dfrac{u^{2H-1}}{R_u}du = a^H \Theta_t,\notag
\end{eqnarray}
where the symbol $\overset{d}{=}$ means that the distributions of both processes are the same. This proves that $\Theta$ is $H$-self-similar.
\eop
\begin{remark}
\begin{enumerate}
\item
Corollary \ref{cor} and Proposition \ref{prop7} imply that, if $2dH^2>1$, the process $\Theta$ and the fBm have the same $\frac 1H$-variation, and that they are both $H$-self-similar. These results extend to the case $H<\frac 12$ those proved by Guerra and Nualart in \cite{GN}.
\item Let us note that although $\Theta$ and the one-dimensional fBm are both $H$-self-similar and have the same $\frac 1H$-variation, as shown in \cite{HN}, $\Theta$ is not a fractional Brownian motion with Hurst parameter $H$.
The proof of this fact is based on the Wiener chaos expansion. In contrast, in the classical Brownian motion case it is well known, by L\'{e}vy's characterization theorem, that the corresponding process $\Theta$ is a Brownian motion.
\end{enumerate}
\end{remark}
\bf{Acknowledgements.} This work was carried out during a stay of El Hassan Essaky at Kansas University (Lawrence, KS), as a part of the Fulbright program. He would like to thank KU, and especially Professor David Nualart, for the warm welcome and kind hospitality.
\addcontentsline{toc}{chapter}{Bibliography}
\end{document} |
\begin{document}
\maketitle
\begin{abstract}
The Ramanujan sequence
$\{\theta_{n}\}_{n \geq 0}$, defined as
\begin{gather*}\theta_{0}= \frac{1}{2} \ , \ \ \
\theta_{n} = \Big(\ \ \frac{e^{n}}{2} - \sum_{k=0}^{n-1} \frac{n^{k}}{k !} \ \
\Big) \cdot \frac{n !}{n^{n}} \ , \ \ n \geq 1 \ ,
\end{gather*} has been studied on many occasions and in many different
contexts. J. Adell and P. Jodra \cite{Ad}(2008) and S. Koumandos \cite{K}(2013) showed, respectively, that the
sequences $\{\theta_{n}\}_{n \geq 0}$ and $\{4/135 - n \cdot (\theta_{n}- 1/3 )\}_{n \geq 0}$ are completely
monotone. In the present paper we establish that the sequence $\{(n+1) \cdot
(\theta_{n}- 1/3 )\}_{n \geq 0}$ is also completely monotone. Furthermore, we prove that the analytic
function
$(\theta_{1}- 1/3 )^{-1}\sum_{n=1}^{\infty} (\theta_{n}- 1/3 ) \cdot z^{n} / n^{\alpha}$
is universally starlike for every $\alpha \geq 1$ in the slit domain
$\Bb{C}\setminus[1,\infty)$. This seems to be the first result putting the
Ramanujan sequence into the context of analytic univalent
functions and is a step towards an earlier and stronger
conjecture, proposed by S. Ruscheweyh, L. Salinas and T. Sugawa in
\cite{RSS}(2009), namely that
the function $(\theta_{1}- 1/3 )^{-1}\sum_{n=1}^{\infty} (\theta_{n}- 1/3 ) \cdot z^{n} $ is universally convex.
\end{abstract}
\section{Introduction}
A famous problem raised by Ramanujan in \cite{R2}(1911) states that
the so-called Ramanujan numbers
$\theta_{n}$, $n \geq 0$, defined as
\begin{gather}\label{1}\theta_{0}= \frac12 \ , \ \ \
\theta_{n} = \Big(\ \ \frac{e^{n}}{2} - \sum_{k=0}^{n-1} \frac{n^{k}}{k !} \ \
\Big) \cdot \frac{n !}{n^{n}} \ , \ \ n \geq 1 \ ,
\end{gather}
satisfy
\begin{gather}\label{2}
\theta_{n} \in \left[\ \frac{1}{3} \ , \ \frac{1}{2} \ \right] \ .
\end{gather}
\noindent In his first letter to Hardy \cite{R2}(1913)
Ramanujan refined his conjecture \eqref{2} as follows
\begin{gather}\label{3}
\theta_{n} = \frac{1}{3} + \frac{4}{135} \cdot \frac{1}{n + k_n} , \ \
k_n \in \left[\ \frac{2}{21} \ , \ \frac{8}{45} \ \right] \ , \ \ n \geq 0 \ .
\end{gather}
\noindent
The first proofs of \eqref{2} were published by G.~Szeg\H{o} \cite{Sz} (1928) and
G.~N.~Watson \cite{W} (1929). A proof of \eqref{3} was obtained in 1995
by Flajolet et al. \cite{Fl}.
In 2003 S.~E.~Alm \cite{Alm} showed that the sequence $\{k_{n}\}_{n \geq 0}$ appearing in \eqref{3} is decreasing.
In 2008 J. Adell and P. Jodra \cite{Ad} proved that there is a probability
distribution function $F$ on $[0,1]$ such that
\begin{gather}\label{4}
\theta_n - 1/3 = \frac{1}{6} \int_{0}^{1} x^{n} d \, F (x) \ , \
n \geq 0 \ ,
\end{gather}
which implies that the sequence $\{\theta_{n}\}_{n \geq 0}$ is completely monotone, i.e.
\begin{gather}\label{5}
\sum_{m=0}^{n}\binom{n}{m} (-1)^{m} \theta_{k+m} \geq 0 \ , \ \ k \geq 0 \ , \ n \geq 0 \ .
\end{gather}
In 2013 S. Koumandos \cite{K}
proved the existence of a strictly positive function $k$ on $[0, +\infty)$ such that
\begin{gather}\label{5a}
\frac{4}{135} - n \cdot \left( \theta_{n} - \frac{1}{3}\right) =
\frac{1}{2} \int_{0}^{\infty} e^{- n x} k (x) d x \ , \ n \geq 0 \ ,
\end{gather}
\noindent
and noted that the complete monotonicity of the sequence
$\{4/135 - n \cdot (\theta_{n}- 1/3 )\}_{n \geq 0}$ follows from \eqref{5a}.
We refer the reader to Alzer \cite{Al}(2004), J. Adell and P. Jodra \cite{Ad}(2008) and S. Koumandos \cite{K}(2013)
for their surveys of other previous results on the Ramanujan sequence.
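Before turning to our results, we record a small numerical sanity check (not used in any proof). The sketch below, written in Python with the arbitrary-precision package mpmath (our choice of tooling), evaluates $\theta_n$ directly from \eqref{1} and the corresponding $k_n$ from \eqref{3} for a few values of $n$, illustrating the bounds \eqref{2} and \eqref{3}.

\begin{verbatim}
from mpmath import mp, exp, factorial, mpf

mp.dps = 60   # extra digits, since e^n/2 and the partial sum in (1) are large

def theta(n):
    if n == 0:
        return mpf('0.5')
    partial = sum(mpf(n)**k / factorial(k) for k in range(n))
    return (exp(n)/2 - partial) * factorial(n) / mpf(n)**n

for n in [1, 2, 5, 10, 100, 1000]:
    th = theta(n)
    k_n = mpf(4)/135 / (th - mpf(1)/3) - n   # solving (3) for k_n
    print(n, th, k_n)   # theta_n in [1/3, 1/2], k_n in [2/21, 8/45]
\end{verbatim}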
\section{The results}
\subsection{More monotonicity properties}
In this paper we refine the property \eqref{4} as follows.
\begin{thm}\label{th1}
There is a probability distribution function $G$ on $[0,1]$ such that
\begin{gather}\label{6}
\left(n + 1 \right) \left(\theta_n - 1/3\right) = \frac{4}{135} +
\frac{37}{270} \int_{0}^{1} x^{n} d \, G (x) \ , \
n \geq 0 \ .
\end{gather}
As a consequence, the sequence
\begin{gather*}
\Big\{ \ \left(n + 1 \right) \left(\theta_n - 1/3\right) \ \Big\}_{n \geq 0}
\end{gather*}
is completely monotone.
\end{thm}
We also show in Section~\ref{profth1} that Theorem \ref{thm4} below easily yields \eqref{4} and the validity of \eqref{5a}, written in the following equivalent form
\begin{gather}\label{7}\frac{4}{135} - n \cdot \left( \theta_{n} - \frac{1}{3}\right)
= \frac{4}{135} \int_{0}^{1} t^{ n }d D (t) \ , \ n = 0, 1, 2, ... \ ,
\end{gather}
\noindent
where $D$ is a continuous probability distribution function on $[0,1]$.
\subsection{The Ramanujan sequence and univalent functions}
Let $\Lambda$ denote the slit domain $\Bb{C}\setminus[1,\infty)$ and
$\hol(\Lambda)$ the set of analytic functions in $\Lambda$. We write
$f\in\hol_1(\Lambda)$ if $f\in\hol(\Lambda)$ satisfies $f(0)=f'(0)-1=0$.
The following definition has been introduced in \cite[Def.1.3, 1.4,
pp. 290--291]{RSS}.
\begin{definition}
A function $f\in\hol_{1}(\Lambda)$ is called
universally starlike if it maps every circular domain
$\Omega\subset\Lambda$ with $0\in\Omega$ conformally onto a domain
starlike with respect to the origin. It is called universally convex
if it maps every circular domain $\Omega\subset\Lambda$ conformally
onto a convex domain.
\end{definition}
Note that in this definition circular domains are meant to be open
disks or open half-planes in $\Bb{C}$. It is an immediate consequence of
the definition that
\begin{equation}
\label{eq:x1}
f \mbox{ universally convex} \ \Rightarrow \ f \mbox{ universally starlike} \ .
\end{equation}
In \cite[p.294]{RSS} the following conjecture has been proposed.
\begin{conx}[(S. Ruscheweyh, L. Salinas, and T. Sugawa, 2009 {\cite{RSS}})]
The function
\begin{gather}\label{10}
\sigma (z) := \frac{1}{\theta_{1}- 1/3} \cdot \sum_{n=1}^{\infty} (\theta_{n}- 1/3 ) \cdot z^{n}
\end{gather}
is universally convex.
\end{conx}
In a recent paper \cite{BRS} the present authors established the
following general result.
\begin{thmx}\label{poly}
For $f(z)=\sum_{n=1}^{\infty}a_n z^n\in\hol(\Lambda)$
let $f_{\alpha}(z):=\sum_{n=1}^{\infty}n^{-\alpha} a_n z^n$.
Then we have:
\noindent
1. If
$f$ is universally convex then the functions $f_{\alpha},\ \alpha\geq1$, are also
universally convex.
\noindent
2. If
$f$ is universally starlike then the functions $f_{\alpha},\ \alpha\geq0$, are also
universally starlike.
\end{thmx}
We shall prove
\begin{thm}\label{th3}
The functions
\begin{gather*}
\sigma_\alpha(z):= \frac{1}{\theta_{1}- 1/3} \sum_{n=1}^{\infty}
(\theta_n - 1/3) \frac{z^{n}}{n^{\alpha}} \ ,
\end{gather*}
are universally starlike for every $\alpha \geq 1$.
\end{thm}
In view of Theorem \ref{poly} and \eqref{eq:x1}, it is clear that Theorem \ref{th3} establishes a necessary
condition for Conjecture A to be valid, and is therefore a first step
towards settling Conjecture A, which remains open.
\section{Watson's approach}\label{s2}
In this section we follow Watson's reasoning from \cite{W}.
On the positive half-line there exist two functions $u$ and $U $ satisfying the following relations
\begin{gather}\label{2.1}
u (x) \cdot e^{{1 - u (x) }} = e^{-x} \ , \ \ \
U (x) \cdot e^{{1 - U (x) }} = e^{-x} \ , \ \ 0 \leq u (x) \leq 1\leq U (x) \ , \ \
x \geq 0 \ .
\end{gather}
The function $ U $ is strictly increasing on $[0, +\infty)$ with $ U (0)=1$ and $U(x)\to +\infty$ as $x \to +\infty$, whereas $ u $ is strictly decreasing on $[0, +\infty)$ with $ u(0)=1$ and $\lim_{x \to +\infty} u(x)=0$. Furthermore, $ u $ and $ U $ satisfy the differential equations (see \cite[p.295]{W})
\begin{gather}\label{2.2}
U^{\, \prime } (x) = \frac{U (x)}{U (x) - 1} \ , \ \
u^{\, \prime } (x) = - \frac{u (x)}{1 - u (x) } \ , \ x > 0 \ ,
\end{gather}
which imply that
\begin{gather}\label{2.3}
\begin{array}{ll} U^{\, \prime \prime } (x) = - \dfrac{U(x)}{\left(U(x) - 1\right)^{3}}\ , &
u^{\, \prime \prime } (x) = \dfrac{u(x)}{\left(1 - u(x)\right)^{3}} \ ,\\[0.5cm]
U^{\, \prime \prime \prime } (x) = \dfrac{U (x) (2 U(x) + 1)}{\left(U(x) - 1\right)^{5}}\ , &
u^{\, \prime \prime \prime } (x) = - \dfrac{u (x) (2 u(x) + 1)}{\left(1 - u (x)\right)^{5}} \ .
\end{array}
\end{gather}
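For numerical experiments it is convenient to observe that \eqref{2.1} identifies $u$ and $U$ with the two real branches of the Lambert $W$ function: rewriting \eqref{2.1} as $-u(x)e^{-u(x)}=-e^{-1-x}$ gives $u(x)=-W_{0}(-e^{-1-x})$ and, similarly, $U(x)=-W_{-1}(-e^{-1-x})$. This reformulation is ours and is not used in the proofs; the short sketch below (Python with NumPy and SciPy, our choice of tooling) evaluates both functions and checks \eqref{2.1} and \eqref{2.2} at a sample point.

\begin{verbatim}
import numpy as np
from scipy.special import lambertw

# u(x) in [0,1] via the principal branch W_0; U(x) >= 1 via the branch W_{-1}.
def u(x): return float(-lambertw(-np.exp(-1.0 - x), 0).real)
def U(x): return float(-lambertw(-np.exp(-1.0 - x), -1).real)

x, h = 0.7, 1e-6
print(u(x) * np.exp(1 - u(x)), np.exp(-x))                 # both sides of (2.1)
print(U(x) * np.exp(1 - U(x)), np.exp(-x))
print((U(x + h) - U(x - h)) / (2*h), U(x) / (U(x) - 1))    # (2.2) for U
print((u(x + h) - u(x - h)) / (2*h), -u(x) / (1 - u(x)))   # (2.2) for u
\end{verbatim}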
\noindent
If we put
\begin{gather}\label{2.4}
w (x) := u (\log x) \ , \ \ W (x) := U (\log x) \ , \ x \geq 1 \ ,
\end{gather}
\noindent
then
\begin{gather}\label{2.5} \frac{e^{{ w (x) }}}{ e \cdot w (x)}
= x \ , \ \ \frac{e^{{ W (x) }}}{ e \cdot W (x)}
= x \ ,
\ \ 0 \leq w (x) \leq 1\leq W (x) \ , \ \
x \geq 1 \ ,
\end{gather}
and therefore
\begin{gather}\label{2.6}
w \left( \frac{e^{x}}{e \cdot x}\right) = x \ , \ \ x \in [0,1] \ ,
\ \
W \left( \frac{e^{x}}{e \cdot x}\right) = x \ , \ \ x \geq 1 \ .
\end{gather}
Since, on the positive semiaxis, the function $x / (e^{x} -1)$ decreases from $1$ to zero
while $x / (1 - e^{- x})$ increases from $1$ to $+\infty$, the
equations \eqref{2.6} can be written as
\begin{gather}\label{2.7}
w \left( \dfrac{ \exp\left(\ \dfrac{x}{e^{x} -1 }\right)}{e \cdot \dfrac{x}{e^{x} -1 }}\right) = \frac{x}{e^{x} -1 } \ , \ \
W \left( \frac{\exp\left( \ \dfrac{x}{1 - e^{- x} }\right)}{e \cdot \dfrac{x}{1 - e^{- x} }}\right) = \frac{x}{1 - e^{- x} }\ , \ \ x \geq 0 \ .
\end{gather}
Elementary calculations show that the even function
\begin{equation}\label{2.8}
\rho (x) := \frac{ \exp\left(\ \dfrac{x}{e^{x} -1 }\right) }{e \cdot \dfrac{x}{e^{x} -1 }} =
\dfrac{ \exp\left( \ \dfrac{x}{1 - e^{- x} }\right) }{e \cdot \dfrac{x}{1 - e^{- x} }} \ , \ \ x \in \Bb{R} \ ,
\end{equation}
\noindent
satisfies $\rho(0)=1$ and
\begin{gather}
\label{2.9}
\frac{\rho^{\, \prime } (x)}{\rho (x)} =
\dfrac{e^{x} (e^{x}\!-\! 1\! -\! x) (e^{-x}\!-\! 1\! +\! x) }{x
(e^{x}\! -\! 1)^{2} }\ > 0,\quad x>0.
\end{gather}
\noindent Therefore \eqref{2.8} and \eqref{2.4} imply
\begin{equation}\label{2.11}
U \left( \log \rho (x)\right) = \frac{x}{1 - e^{- x} }=:H(x)
\ , \ \ u \left( \log \rho (x)\right) = \frac{x}{e^{x} -1 }=:h(x)
\ , \ \ x \in \Bb{R} \ ,
\end{equation}
and by virtue of \eqref{2.3} we obtain for arbitrary $x > 0$ the representations
\begin{align}
\label{2.12}
U^{\, \prime } \left( {{\log \rho (x)}}\right)
+ u^{\, \prime } \left( {{\log \rho (x)}}\right) & =
\frac{H(x)}{H(x)-1} +\frac{h(x)}{h(x)-1} \ , \\[0.3cm]
\label{2.13}
\ U^{\,\prime \prime } \left( {{\log \rho (x)}}\right)
+ u^{\,\prime \prime } \left( {{\log \rho (x)}}\right)
& =\frac{H(x)}{(1-H(x))^3}+\frac{h(x)}{(1-h(x))^3}\\[0.3cm]
\label{2.14}
U^{\,\prime \prime \prime } \left( {{\log \rho (x)}}\right)
+ u^{\,\prime \prime \prime } \left( {{\log \rho (x)}}\right)
& = H(x)\frac{1+2H(x)}{(H(x)-1)^5} + h(x)\frac{1+2h(x)}{(h(x)-1)^5}
\ \ .
\end{align}
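The identities \eqref{2.11}, and hence the right-hand sides of \eqref{2.12}--\eqref{2.14}, can be checked numerically with the Lambert-$W$ evaluation of $u$ and $U$ from the sketch after \eqref{2.3}; the lines below (again Python, our choice of tooling) confirm $U(\log\rho(x))=H(x)$ and $u(\log\rho(x))=h(x)$ at a few sample points.

\begin{verbatim}
import numpy as np
from scipy.special import lambertw

u   = lambda x: -lambertw(-np.exp(-1.0 - x), 0).real
U   = lambda x: -lambertw(-np.exp(-1.0 - x), -1).real
h   = lambda x: x / np.expm1(x)                 # h(x) = x / (e^x - 1)
H   = lambda x: x / (1.0 - np.exp(-x))          # H(x) = x / (1 - e^{-x})
rho = lambda x: np.exp(h(x)) / (np.e * h(x))    # rho(x) as in (2.8)

for x in [0.3, 1.0, 3.0]:
    print(U(np.log(rho(x))) - H(x), u(np.log(rho(x))) - h(x))   # both ~ 0
\end{verbatim}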
The following key result concerning these quantities will be established in Section~\ref{sect5}.
\begin{thm}
\label{thm4}
For $x>0$ we have
\begin{equation}
\label{eq:x2}
U^{\, \prime }(x)+u^{\, \prime }(x)>-(U^{\, \prime \prime }(x)+u^{\, \prime \prime }(x))>
U^{\, \prime \prime \prime }(x)+u^{\, \prime \prime \prime }(x)>0.
\end{equation}
\end{thm}
The last inequality in \eqref{eq:x2} is known (see Koumandos \cite[Lemma 2, p.452]{K}). However, in order
to make the present paper more self-contained, a new proof, based on the new algorithm described in
Subsection~\ref{sect51}, is given in Subsection~\ref{thep}.
\section{\texorpdfstring{Proof of Theorem \ref{th1} }{Proof of Theorem 2.1}}\label{profth1}
G.~N.~Watson \cite[p.297]{W} obtained
\begin{equation}\label{2.19}
\theta_{n} - \frac{1}{3} = \frac{1}{2} \int_{0}^{\infty} e^{- n x}
\left(- U^{\,\prime \prime } (x) - u^{\,\prime \prime } (x)\right) d x \ ,
\ n \geq 0 \ ,
\end{equation}
\noindent
and \cite[p.300]{W}
\begin{equation}\label{2.20}
U^{\,\prime \prime } (0) + u^{\,\prime \prime } (0) = - \frac{8}{135} \ .
\end{equation}
Integration by parts applied to \eqref{2.19} using \eqref{2.20} gives
the basic relations used for this proof:
\begin{align}\label{2.21}
n \left( \theta_{n} - \frac{1}{3}\right) & = \frac{4}{135} -
\frac{1}{2} \int_{0}^{\infty} e^{- n x}
\left(U^{\,\prime \prime \prime } (x) + u^{\,\prime \prime \prime } (x)\right) d x \ ,
\ n \geq 0 \ , \\[0.3cm] \label{2.22}
\frac{\theta_{n} - \frac{1}{3}}{n} & =
\frac{1}{2} \int_{0}^{\infty} e^{- n x}
\left( \frac{4}{3} - U^{\, \prime }\left(x\right) - u^{\, \prime} \left( x\right) \right) d x \ ,
\ n \geq 1 \ ,\\[3mm]
\label{2.24}
\left(n + 1\right) \left(\theta_{n} - \frac{1}{3} \right) -
\frac{4}{135} &=
\frac{1}{2} \int\limits_{0}^{\infty} e^{- n x} \Delta (x) d x \ , \
n \geq 0 \ ,
\end{align}
where
\begin{equation}
\label{eq:x3}
\Delta(x) := -U^{\,\prime \prime } (x) - u^{\,\prime \prime } (x) - U^{\,\prime \prime \prime } (x) - u^{\,\prime \prime \prime } (x) \ , \ x \geq 0 \ .
\end{equation}
We mention in passing that \eqref{2.22} leads immediately to the
representation \eqref{4} for the Ramanujan sequence given by J. Adell and P. Jodra \cite[(5), p.3]{Ad}.
It follows from \eqref{2.8}, \eqref{2.9} and Theorem \ref{thm4} that
\begin{gather}\label{2.17}
\lim\limits_{x\to +\infty}\left(u^{\, \prime }(x) + U^{\, \prime }(x)\right) = 1 \ , \
u^{\, \prime } \left( 0\right) + U^{\, \prime } \left(0\right) = {4}/{3} \ , \ \ \ \
u^{\, \prime } \left( x\right) + U^{\, \prime } \left(x\right) > 0 \ , \ x > 0 \ .
\end{gather}
Theorem \ref{thm4} (see also Watson \cite[p.298]{W} and Alzer \cite[p.641]{Al}) implies $U^{\, \prime \prime }(x)+u^{\, \prime \prime }(x)<0,\ x>0,$ so that the function
\begin{gather}\label{2.23}
G_{0} (x) := 3 \left[\frac{4}{3} - U^{\, \prime }\left(x\right) - u^{\, \prime } \left( x\right)\right] \ , \ \ x \geq 0 \ ,
\end{gather}
increases from $0$ to $1$ on the positive half-line and in view of \eqref{2.19},
\begin{gather*}
\theta_{n} - \frac{1}{3} = \frac{1}{6} \int_{0}^{\infty} e^{- n x}
d G_{0} (x) = \frac{1}{6}\int_{0}^{1} t^{n} d\left[1- G_{0} \left(\log \frac{1}{t}\right)\right] \ ,
\ n \geq 0 \ .
\end{gather*}
Since
$ \theta_{n} > 0$, $n \geq 0$, and
\begin{gather*}
\sum_{m=0}^{n} \binom{n}{m} (-1)^{m} \theta_{k+m} = \sum_{m=0}^{n} \binom{n}{m} (-1)^{m}\left( \theta_{k+m} - \frac{1}{3} \right)=
\frac{1}{6}\int_{0}^{\infty} e^{- k x }\left(1 - e^{ - x}\right)^{n} d G_{0} (x) > 0 \ ,
\end{gather*}
for all $k \geq 0$ and $ n \geq 1$, we obtain the complete monotonicity of $\{\theta_{n}\}_{n \geq 0}$ and the validity
of (\ref{4}) for $F (x):= 1- G_{0} \left(\log {1}/{x}\right)$, $0 < x \leq 1$, $F (0):=0$.
Furthermore, by writing \eqref{2.21} in the form
\begin{align*}
1-\frac{135\,n}{4}\left(\theta_n-\frac{1}{3}\right)& =\frac{135}{8} \int_{0}^{\infty} e^{- n x}
\left(U^{\,\prime \prime \prime } (x) + u^{\,\prime \prime \prime } (x)\right) d x = \int_{0}^{\infty} e^{- n x} d G_{1}(x) \ ,
\ n \geq 0 \ ,
\end{align*}
where
\begin{align*}
G_{1}(x) & = 1 + \frac{135}{8} \left( U^{\,\prime \prime } (x) + u^{\,\prime \prime } (x) \right)\ , \ x \geq 0 \ ,
\end{align*}
\noindent
we obtain the validity of \eqref{7} for $D (x):= 1- G_{1} (\log (1/x))$
because $U^{\prime \prime \prime }+u^{\prime \prime \prime }$ is non-negative for all $x \geq 0$
by Theorem \ref{thm4} (see also Koumandos \cite[p.452]{K}).
Using \eqref{2.24} and again Theorem
\ref{thm4} to see that $\Delta(x)$ is non-negative we conclude that
\begin{gather*}
\left(n + 1\right) \left(\theta_{n} - \frac{1}{3} \right) - \frac{4}{135} =
\frac{37}{270}\int\nolimits_{[0,1]} x^n
d G (x) \ , \
n \geq 0 \ ,
\end{gather*}
where
\begin{gather*}
\frac{37}{45} G (e^{-x}) := 3 \left[ U^{\, \prime }\left(x\right) + u^{\, \prime } \left( x\right) +
U^{\,\prime \prime } (x) + u^{\,\prime \prime } (x)-1\right] \ , \ 0 \leq x < +\infty\ ,
\end{gather*}
\noindent
$G (0) := G (0+0) = 0$, $G (1-0) = G (1) = 1$ and $({37}/{45})e^{-x} G^{\, \prime } (e^{-x}) = 3 \Delta(x) > 0$ for all $x > 0$. Therefore $G$ is a probability distribution function on $[0,1]$ and the sequence in question turns out to be completely monotone.
The proof of Theorem \ref{th1} is now complete.
\section{\texorpdfstring{Proof of Theorem \ref{th3}}{Proof of Theorem 2.3}}
Note that \eqref{2.19} and \eqref{2.22} imply the following
integral representations for the functions dealt with in \eqref{10}
and Theorem \ref{th3} (for $\alpha=1$),
\begin{align}\label{2.26}
\left(\theta_{1}- 1/3\right) \sigma (z) & =
\frac{1}{2} \int_{1}^{\infty}\frac{z}{t - z }
\frac{- U^{\,\prime \prime } \left( \log t\right) - u^{\,\prime \prime } \left( \log t \right)}{t} d t \ , \ \ z \in \Lambda \ \ ,
\\[0.4cm]
\label{2.27}\left(\theta_{1}- 1/3\right) \sigma_{1} (z) & =
\frac{1}{2} \int_{1}^{\infty}\frac{z}{t - z }
\frac{ \left(4/3\right) - U^{\, \prime }\left( \log t \right) - u^{\, \prime } \left( \log t \right)}{t} d t \ , \ \ z \in \Lambda \ \ ,
\end{align}
and recall, using Theorem \ref{poly}, that we need to prove Theorem
\ref{th3} only for the case $\alpha=1$. The necessary and sufficient
condition for $\sigma_1$ to have that property is given in the
following result.
\begin{thmx}[(Corollary 1.1 {\cite{RSS}})]\label{xthbr}
Let $f \in \hol_{1} (\Lambda)$. Then $f$ is universally starlike
if and only if there exists a probability measure
$\mu$ on $[0, 1]$ such that
\begin{gather}\label{8}
\dfrac{f(z)}{z} =
\exp{\ {
{{\int\nolimits_{[ 0 , 1\, ]}}} \, \log \frac{1}{1 - t z} \ d \mu (t)
}} \ , \ \ z \in \Lambda \ .
\end{gather}
\end{thmx}
\subsection{Auxiliary results}\label{s3}
The following lemma is from \cite[Theorem 1.10, p.294]{RSS}.
\begin{lemx}
Let $\varphi, \psi : (0, 1) \to [0, +\infty )$ be two integrable
functions on $(0,1)$ satisfying
\begin{gather}\label{lem01} \int_{0}^{1} \varphi (t) \ d t = \int_{0}^{1} \psi (t) \ d t > 0 \ , \ \
\begin{vmatrix}
\varphi (x_{2}) & \psi (x_{2}) \\
\varphi (x_{1}) & \psi (x_{1}) \\
\end{vmatrix} \geq 0 \ , \ 0 < x_{1} \leq x_{2} < 1 \ .
\end{gather}
Then there exists a probability measure $\mu$ on $[0,1]$ such that
\begin{gather}\label{lem02}
\int\nolimits_{[ 0 , 1\, ]} \frac{ d \mu (t)}{1 - t z} \ = \int\nolimits_{0}^{1} \dfrac{\varphi (t) }{1-t z} d t \Big/ \int\nolimits_{0}^{1} \dfrac{\psi (t)}{1-t z} \ d t \ , \ \ z \in \Lambda \ .
\end{gather}
\end{lemx}
Lemma A allows us to prove the next statement.
\begin{lemma}\label{lemma1}
Let $g: (0, +\infty ) \to (0, +\infty )$ be twice continuously
differentiable on $(0,\infty)$ and assume
\begin{gather}\label{3.1}
\begin{array}{ll}
(a) \ \lim_{x \downarrow 0 } g (x) = 0 \ , &
(b) \ \int_{0}^{\infty} e^{-x} g (x) d x = 1 \ ,
\\[0.2cm]
(c)\ g^{\, \prime } ( x ) \geq 0\ , \ x > 0 \ , & (d) \ g^{\, \prime } ( x )^{2} - g^{\, \prime \prime} ( x ) g ( x ) \geq 0 \ , \ x > 0 \ .
\end{array}
\end{gather}
Then there exists a probability measure $\mu$ on $[0,1]$ such that
\begin{gather}\label{3.2}
\int_{1}^{\infty} \dfrac{ g ( \log t)/t }{t-z} \ d t = \exp{\ {
{{\int\nolimits_{[ 0 , 1\, ]}}} \, \log \frac{1}{1 - t z} \ d \mu (t)
}} \ , \ \ z \in \Lambda \ .
\end{gather}
\end{lemma}
\begin{proof}
Denote $v (x) := g ( \log x)/x$, $x > 1$, and for $z \in \Lambda$ let
\begin{gather*}
f (z) := z \int_{1}^{\infty} \dfrac{ g ( \log t)/t }{t-z} \ d t = z \int_{1}^{\infty} \frac{v (t) \ d t }{t-z} = \int_{1}^{\infty} \left[
-1 + \frac{t}{t-z}\right] v (t) \ d t \ .
\end{gather*}
The properties \eqref{3.1}(a),(b) mean that $v \in L_1 ([1, +\infty))$
and $\lim_{x\to 1+0}v (x) = 0$, which implies that
for arbitrary $z \in \Lambda$ we have
\begin{gather*}
f^{\, \prime } (z) = \int_{1}^{\infty} \frac{t v (t) }{(t-z)^{2}} d t = - \int_{1}^{\infty}t v (t) d \frac{1}{t-z} =
\frac{v (1)}{1-z} + \int_{1}^{\infty} \frac{\left(t v (t)\right)^{\, \prime } }{t-z} d t = \int_{1}^{\infty} \frac{\left(t v (t)\right)^{\, \prime } }{t-z} d t \ ,
\end{gather*}
and
\begin{gather*}
\frac{z f^{\, \prime } (z)}{ f (z)} = \frac{\int\limits_{1}^{\infty} \dfrac{\left(t v (t)\right)^{\, \prime } }{t-z} d t}{\int\limits_{1}^{\infty} \dfrac{v (t)}{t-z} \ d t} = \frac{\int\limits_{0}^{1} \dfrac{\left[(1/t) v^{\, \prime }(1/t) + v (1/t)\right]/t }{1-t z} d t}{\int\limits_{0}^{1}
\dfrac{v (1/t)/t}{1-t z} \ d t} =
\frac{\int\limits_{0}^{1} \dfrac{\varphi (t) }{1-t z} d t}{\int\limits_{0}^{1} \dfrac{\psi (t)}{1-t z} \ d t} \ \ ,
\end{gather*}
where
\begin{gather*}
\varphi (x) := \left((1/x) v^{\, \prime }(1/x) + v (1/x)\right)/x \ , \ \
\psi (x) := v (1/x)/x \ , \ \ 0 < x < 1 \ ,
\end{gather*}
and
\begin{gather*}
\int_{0}^{1} \varphi (t) \ d t = \int_{0}^{1} \psi (t) \ d t = \int_{1}^{\infty} \frac{v (x)}{x} d x =
\int_{1}^{\infty} \frac{g (\log x)}{x^{2}} d x = \int_{0}^{\infty} \frac{g ( x)}{e^{x}} d x = 1 \ .
\end{gather*}
Moreover, it follows from \eqref{3.1}(d) that the function $g^{\, \prime }(\log x) / g (\log x)$ is non-increasing on $(1, +\infty)$ and since
\begin{gather*}
1 + \frac{x v^{\, \prime }(x)}{ v (x)} = \frac{\dfrac{d}{d x} \left[x v(x)\right] }{ v (x)} =
\frac{ \dfrac{d}{d x } \left[x \cdot \dfrac{g ( \log x )}{x}\right]}{ \dfrac{g ( \log x )}{x} } \ \ = \frac{\dfrac{g^{\, \prime }( \log x )}{x} }{ \dfrac{g ( \log x )}{x}} = \frac{g^{\, \prime }( \log x )}{g ( \log x )}
\end{gather*}
the function $x v^{\, \prime }(x) / v (x)$ also does not increase on $(1, +\infty)$. This means that
for arbitrary $ 0 < x_{1} \leq x_{2} < 1 $ we have
\begin{gather*}
0 \leq \frac{1}{x_1 x_2} \begin{vmatrix}
v (1/x_1) & (1/x_1) v^{\, \prime }(1/x_1) \\
v (1/x_2) & (1/x_2) v^{\, \prime }(1/x_2) \\
\end{vmatrix} = \begin{vmatrix}
v (1/x_1)/x_1 & (1/x_1^{2}) v^{\, \prime }(1/x_1) \\
v (1/x_2)/x_2 & (1/x_2^{2}) v^{\, \prime }(1/x_2) \\
\end{vmatrix} = \begin{vmatrix}
\psi (x_1 ) &\varphi (x_1)\\
\psi (x_2 ) & \varphi(x_2) \\
\end{vmatrix} \ .
\end{gather*}
Lemma A guarantees the existence of a probability measure $\mu$ on $[0,1]$ such that
\begin{gather*}
\frac{z f^{\, \prime } (z)}{ f (z)} = \int\nolimits_{[ 0 , 1\, ]} \frac{ d \mu (t)}{1 - t z}\ , \ \ z \in \Lambda \ .
\end{gather*}
Since
\begin{gather*}
\frac{\dfrac{d}{d z} \dfrac{f (z)}{z}}{ \dfrac{f (z)}{z}} = \frac{ \dfrac{zf^{\, \prime } (z) - f (z)}{z^{2}}}{\dfrac{f (z)}{z}} = \frac{ f^{\, \prime } (z)}{ f (z)} - \frac{1}{z} = \int\nolimits_{[ 0 , 1\, ]} \frac{d}{d z} \log \frac{1}{1- t z} \ d \mu (t)
\end{gather*}
we can integrate this equality from $0$ to $z \in \Lambda$ and obtain
\begin{gather*}
\log \frac{f (z)}{z} - \log f^{\, \prime } (0) = \int\nolimits_{[ 0 , 1\, ]} \log \frac{1}{1 - t z} \ d \mu (t) \ ,
\end{gather*}
where $f^{\, \prime } (0) = 1$ by virtue of \eqref{3.1}(b). Lemma~\ref{lemma1} is proved.
\end{proof}
\subsection{The proof}\label{s4}
By Theorem \ref{xthbr} the statement of Theorem~\ref{th3} for $\alpha =1$ means that for $\sigma_1$
there exists a probability measure $\mu$ on $[0, 1]$ such that
\begin{gather}\label{fth31}
\dfrac{\sigma_{1}(z)}{z} =
\exp{\ {
{{\int\nolimits_{[ 0 , 1\, ]}}} \, \log \frac{1}{1 - t z} \ d \mu (t)
}} \ , \ \ z \in \Lambda \ ,
\end{gather}
where in accordance with \eqref{2.27},
\begin{gather*}
2 \left(\theta_{1}- 1/3\right) \frac{\sigma_{1} (z)}{z} =
\int_{1}^{\infty}\frac{1}{t - z }
\frac{ g \left( \log t \right)}{t} d t \ , \ \ z \in \Lambda \ \ , \\
g (x) := \frac{4}{3}- U^{\, \prime } \left( x\right) - u^{\, \prime } \left( x\right) \ , \ \ x > 0 \ ,
\end{gather*}
and in view of \eqref{2.22}
\begin{gather*}
\int_{0}^{\infty} e^{-x} g (x) d x =
\int_{0}^{\infty} e^{-x} \left[4/3 -U^{\, \prime } \left( x\right) - u^{\, \prime } \left( x\right)\right] \ d x =2 \left(\theta_{1}- 1/3\right) \ .
\end{gather*}
Moreover, Theorem~\ref{thm4} and \eqref{2.17} imply that for arbitrary $x > 0$
\begin{align*}
& g (x) = \frac{4}{3} - U^{\, \prime } \left( x\right) - u^{\, \prime } \left( x\right) \in \left[ 0 \ , \ \frac{1}{3} \right] \ , \ \
& g^{\, \prime} (x) = - U^{\, \prime \prime } \left( x\right) - u^{\, \prime \prime } \left( x\right) > 0 \ , \\ & g^{\, \prime \prime} (x) = - U^{\, \prime \prime \prime } \left( x\right) - u^{\, \prime \prime \prime } \left( x\right) < 0 \ , \ \ & g (0) = \frac{4}{3} - U^{\, \prime } \left(0\right) - u^{\, \prime } \left(0\right) = 0 \ .
\end{align*}
Thus,
\begin{gather*}
g^{\, \prime } ( x )^{2} - g^{\, \prime \prime} ( x ) g ( x ) > 0 \ , \ x > 0 \ ,
\end{gather*}
and Lemma~\ref{lemma1} yields the validity of \eqref{fth31}.
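The properties of $g$ used above can also be observed numerically. The sketch below (Python, our choice of tooling) evaluates $g$, $g'$ and $g''$ through \eqref{2.2} and \eqref{2.3}, with $u$ and $U$ computed via the Lambert-$W$ formulas from the sketch in Section~\ref{s2}, and confirms the expected sign pattern at a few points; this is an illustration only, the actual proof being Theorem~\ref{thm4}.

\begin{verbatim}
import numpy as np
from scipy.special import lambertw

u = lambda x: -lambertw(-np.exp(-1.0 - x), 0).real
U = lambda x: -lambertw(-np.exp(-1.0 - x), -1).real

def g(x):    # g(x) = 4/3 - U'(x) - u'(x), using (2.2)
    return 4.0/3 - U(x)/(U(x) - 1) + u(x)/(1 - u(x))

def gp(x):   # g'(x) = -U''(x) - u''(x), using (2.3)
    return U(x)/(U(x) - 1)**3 - u(x)/(1 - u(x))**3

def gpp(x):  # g''(x) = -U'''(x) - u'''(x), using (2.3)
    return -U(x)*(2*U(x) + 1)/(U(x) - 1)**5 + u(x)*(2*u(x) + 1)/(1 - u(x))**5

for x in [0.25, 1.0, 4.0]:
    print(0 <= g(x) <= 1/3, gp(x) > 0, gpp(x) < 0)   # expect True True True
\end{verbatim}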
\section{\texorpdfstring{Proof of Theorem \ref{thm4}}{Proof of Theorem 3.1}}\label{sect5}
\subsection{An algorithm}\label{sect51}
In this section we present a general algorithm which gives sufficient
conditions for inequalities of the type described in Theorem
\ref{thm4}. It deals with exponential polynomials on $\Bb{R}^+$.
\begin{definition}\label{def2}
Let
$$ f(x):=\sum_{k=0}^{m} P_k(x) e^{k x},$$
where the $P_k$ are real polynomials of exact degree $n_k$. Then we call $f$ an
{\em exponential polynomial } of order $m$ and (multi-)degree
$\{n_0,\dots,n_m\}$.
\end{definition}
\begin{remark}
A polynomial $P(x)=\sum_{j=0}^{n}a_j x^j$ is said to be of exact
degree $n$ if $a_n\neq0$. If $P\equiv0$ then we say it is of
(exact) degree $-1$.
\end{remark}
\begin{thm}
\label{thm5}
Let
$$ f(x):=f_0(x)=\sum_{k=0}^{m} P_k(x) e^{k x}=\sum_{k=0}^{\infty}a_k x^k
$$
be an exponential polynomial of order $m$ and degree
$\{n_0,\dots,n_m\}$. Let
\begin{equation}
\label{eq:6}
f_{k+1}(x):=f_k^{(n_k+1)}(x)e^{-x},\quad k=0,\dots,m-1,
\end{equation}
and assume that
\begin{equation}
\label{eq:7}
f_k^{(s)}(0)\geq0,\quad s=0,\dots,n_k,\ k=0,\dots,m.
\end{equation}
Then all Taylor coefficients $a_k,\ k\geq 0,$ of $f(x)$ are non-negative. In
particular,
$f(x)\geq0,\ x\geq 0$.
\end{thm}
\begin{remark}
Note that there are only {\em finitely} many, namely
$$\mu(f):=\sum_{k=0}^{m}(n_k+1),$$ conditions
to be tested (which involve only the first $\mu(f)$
coefficients $a_k$ of $f$) to draw the conclusion for {\em all }
coefficients of $f$.
\end{remark}
\begin{proof}
The proof runs by mathematical induction. First note that for any
exponential polynomial
$$
h(x)=\sum_{k=0}^{m}P_k(x)e^{kx}, \quad \mbox{degree}(h)=
\{n_0,\dots,n_m\},
$$
we have
$$
h'(x)= P_0'(x)+\sum_{k=1}^{m}(P_k'(x)+k P_k(x))e^{kx},
$$
which is an exponential polynomial of order $m$ and degree
$\{n_0-1,n_1,\dots,n_m\}$. Further, if $n_0=0$,
the function $h'(x) e^{-x}$ is an exponential polynomial of order
$m-1$ and degree $\{n_1,\dots,n_m\}$.
We begin with the case $m=0$. Then we have $f=P_0$ with degree
$\{n_0\}$. In this case the conditions \eqref{eq:7} just say
$$P_0^{(s)}(0)\geq0,\quad s=0,\dots,n_0,
$$
which means that all coefficients of $P_0$ (and therefore of $f$) are
non-negative. This settles the case $m=0$.
Now assume that the theorem is valid for some $m-1\geq0$, and let $f=f_0$
be as in the statement of the theorem. From the way the function $f_1$ is
defined it is clear that it is an exponential polynomial of order $m-1$ and
degree $\{n_1,\dots,n_m\}$, and the conditions \eqref{eq:7}, applied to
$f_1$ instead of $f_0$, show, by our assumption that the theorem is
correct for exponential polynomials of order $m-1$, that $f_1$ has all of its
Taylor coefficients non-negative, which implies that the Taylor
coefficients of
\begin{equation}
\label{eq:8}
f_0^{(n_0+1)}(x)=e^x f_1(x)
\end{equation}
are also all non-negative. The conditions \eqref{eq:7} concerning $f_0$
now say that the remaining first coefficients of $f$, namely
$a_s =f_0^{(s)}(0)/s!$, \ $s=0,\dots,n_0$, are non-negative as well. This
completes the proof.\end{proof}
When it comes to the application of this theorem we have to keep the
following facts in mind:
\begin{enumerate}
\item This algorithm is particularly suited to cases in which the
  polynomials $P_k$ have rational coefficients only, since then all the
  numbers $f^{(s)}_k(0)$ are rational; their calculation
  via a computer algebra program is therefore exact, and no numerical problem, for instance with
  cancellation, occurs.
\item Given an exponential polynomial, it is not really necessary to
  know its order or multi-degree to begin with: the algorithm, when properly implemented, can
  decide by itself what to do next
  (differentiate once more or pass to the next $f_k$). In particular, it
  can stop as soon as one of the numbers $f^{(s)}_k(0)$ turns out to
  be negative, which can save machine time.
\end{enumerate}
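To make the procedure concrete, here is a compact sketch of the algorithm in Python with SymPy (our choice of tooling; the computations reported in the next subsection were carried out in Mathematica). An exponential polynomial is represented by the list of its polynomials $[P_0,\dots,P_m]$, and the function returns the vector $\lambda(f)$ of the numbers $f_k^{(s)}(0)$ in the natural order in which they are computed; the function name \verb|lambda_vector| and this list representation are ours.

\begin{verbatim}
import sympy as sp

x = sp.symbols('x')

def lambda_vector(polys):
    """polys = [P_0, ..., P_m] encodes f(x) = sum_k P_k(x) * exp(k*x).
    Returns the numbers f_k^{(s)}(0), s = 0, ..., n_k, k = 0, ..., m."""
    polys = [sp.expand(p) for p in polys]
    values = []
    while polys:
        n0 = int(sp.degree(polys[0], x)) if polys[0] != 0 else -1
        g = list(polys)
        for s in range(n0 + 1):
            values.append(sum(p.subs(x, 0) for p in g))   # f_k^{(s)}(0)
            # derivative rule: (P_j e^{jx})' = (P_j' + j P_j) e^{jx}
            g = [sp.expand(sp.diff(p, x) + j*p) for j, p in enumerate(g)]
        # after n0+1 differentiations the constant-exponential part vanishes,
        # so multiplying by e^{-x} amounts to dropping it and shifting indices
        polys = g[1:]
    return values

# Case 1 below: the polynomials of S_1, in increasing order of exp(k*x)
S1 = [-2 - x,
      8 - 3*x - 3*x**2,
      -14 + 9*x - 6*x**2 - 5*x**3,
      16 + 18*x**2 - 2*x**4,
      -14 - 9*x - 6*x**2 + 5*x**3,
      8 + 3*x - 3*x**2,
      -2 + x]
print(lambda_vector(S1))   # expected to reproduce lambda(S_1) of Case 1
\end{verbatim}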
\subsection{The proof}\label{thep}
The proof of Theorem \ref{thm4} will be completely computer based,
using the algorithm just described. All coefficients in these cases are
rational, actually integers, so there is no numerical problem. The algorithm has been
programmed using Mathematica version 9.0 and run on a laptop
computer. Computation time was a few seconds for each of the three
cases to be verified for Theorem \ref{thm4}.
The resulting numbers $f^{(s)}_k(0)$ are collected in a single vector
$\lambda(f)$ with
$\mu(f)$ entries, listed in the natural order in which they are
calculated by the algorithm. If $\lambda(f)$ turns out to be
non-negative, then the case under
consideration is settled.
{\bf 6.2.1. \ Case 1: ${\mathbf{ (U'+u')+(U''+u'')>0}}$.}\
Using \eqref{2.12}, \eqref{2.13} we find for
\begin{align*}
R_1(x)&:=\frac{H(x)}{H(x)-1}+\frac{h(x)}{h(x)-1}-\frac{H(x)}{(H(x)-1)^3}-
\frac{h(x)}{(h(x)-1)^3}\\
&=\frac{x^2}{(e^x-1-x)^3 (1-e^x(1-x))^3}S_1(x),
\end{align*}
where
\begin{align*}
S_1(x)= & \ (-2-x)
+(8-3x-3x^2)e^x+(-14+9x-6x^2-5x^3)e^{2x}\\
&+(16+18x^2-2x^4)e^{3x}
+(-14-9x-6x^2+5x^3)e^{4x}\\
&+(8+3x-3x^2)e^{5x}
+(-2+x)e^{6x}.
\end{align*}
\noindent
So $S_1$ is an exponential polynomial of order 6 and degree
$\{1,2,3,4,3,2,1\}$.
Application of the algorithm produces the vector
\begin{eqnarray*}
\lambda(S_1)&=&
(0,0,0,0,0,0,0,0,0,0,72240,1155840,9557760,56267040,
\\ & &271084224,880843680, 2475629568,6343909632,
\\ & & 1533939393792,20392197120,25057382400,29561241600,\\ & & 4478976000) \ ,
\end{eqnarray*}
which proves that the desired inequality is valid.
{\bf 6.2.2. \ Case 2: ${\mathbf{ -(U''+u'')-(U'''+u''')>0}}$.} \
Here we have to show that
\begin{align*}
R_2(x)&:=\frac{H(x)}{(H(x)-1)^3}+\frac{h(x)}{(h(x)-1)^3}- H(x)\frac{1+2H(x)}{(H(x)-1)^5}-h(x)\frac{1+2h(x)}{(h(x)-1)^5}\\
&=\frac{x^2(e^x-1)^2}{(e^x-1-x)^5 (1-e^x(1-x))^5}S_2(x),
\end{align*}
where
\begin{align*}
S_2(x)= & \ (-4-x)
+(24-15x-5x^2)e^x
+(-64+70x-60x^2-50x^3-20x^4-4x^5)e^{2x}\\
&+(104-91x+285x^2+100x^3+20x^4-x^5-x^6)e^{3x}
+(-120-440x^2)e^{4x}\\
&+(104+91x+285x^2-100x^3+20x^4+x^5-x^6)e^{5x}
\\&+(-64-70x-60x^2+50x^3-20x^4+4x^5)e^{6x}
+(24+15x-5x^2)e^{7x}
+(-4+x)e^{8x}.
\end{align*}
So $S_2$ is an exponential polynomial of order 8 and degree
$\{1,2,5,6,2,6,5,2,1\}$.
Application of the algorithm produces the vector
\begin{eqnarray*}
\lambda(S_2)&=&
(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1095494400,38342304000,\\ & & 718413696000, 8922167654400,
85789518796800,686634000998400,\\ & & 4108040955648000, 21277519458048000,98491821821245440,
\\ & & 417993857883463680,1659729058910208000, 6264125727645450240,\\ & & 22744955668622376960,
57435249160046592000, 138673044884876820480,
\\ & & 324272107555238707200, 741041088684097536000, 1665009811944898560000,
\\ & & 3693054970331136000000,4415481367363584000000, 5133351192625152000000,
\\ & & 5850215720681472000000, 716770887598080000000)
\end{eqnarray*}
which proves the claim as all entries are non-negative.
{\bf 6.2.3. \ Case 3: ${\mathbf{(U'''+u''')>0}}$.}\
Here we have to show that
\begin{align*}
R_3(x):=
H(x)\frac{1+2H(x)}{(H(x)-1)^5}+h(x)\frac{1+2h(x)}{(h(x)-1)^5}
=\frac{x(e^x-1)^3}{(e^x-1-x)^5 (1-e^x(1-x))^5}S_3(x)>0,
\end{align*}
where
\begin{align*}
S_3(x):=& \ 1-2x+(-5+20x+10x^3+5x^4+x^5)e^x
\\&+
(9-72x-70x^3-30x^4-11x^5-2x^6)e^{2x}+(-5+130x+160x^3+25x^4-10x^5)e^{3x}
\\ &+(-5-130x-160x^3+25x^4-10x^5)e^{4x}
+(9+72x+70x^3-30x^4+11x^5-2x^6)e^{5x}\\&+
(-5-20x-10x^3+5x^4-x^5)e^{6x}
+(1+2x)e^{7x}.
\end{align*}
So $S_3$ is an exponential polynomial of order 7
and degree
$\{1,5,6,5,5,6,5,1\}$.
Application of the algorithm produces the vector
\begin{eqnarray*}
\lambda(S_3) &= &
(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,115315200,3863059200,70457587200,\\
& & 927826099200, 9830767564800,8631514316800,615374090956800,\\
& & 83729093713049600,20168695176376320,99183876729477120,450524284521338880,\\
& & 1915432618475059200,5792081300977213440,16127157987099279360, \\
& & 41953781738132766720, 103330763975294484480, 243753521061983846400, \\
& &556095351762151833600,1236678576792676761600, 1461058224846520320000,\\
& & 1642128742165708800000,1795208480980992000000, 1936007205617664000000,\\
& & 2073220384555008000000, 2209799770472448000000, 136527788113920000000)
\end{eqnarray*}
which is also non-negative. The proof is complete.
\affiliationone{Andrew Bakan\\
Institute of Mathematics\\
National Academy of
Sciences of Ukraine\\
01601 Kyiv \\
Ukraine
\email{[email protected]} }
\affiliationtwo{Stephan Ruscheweyh\\
Institut f\"ur Mathematik \\
Universit\"at W\"urzburg \\
97074 W\"urzburg, Germany
\email{[email protected]}}
\affiliationthree{Luis Salinas \\
Departamento de Inform\'atica, UTFSM \\
Valpara\'\i{}so, Chile
\email{[email protected]}}
\end{document} |
\begin{document}
\title{Formalizing line editors in Coq}
\begin{abstract}
Text editors are one of the fundamental tools used by writers of all kinds: software developers, book authors, mathematicians. A text editor must work as intended, in the sense that it should allow its users to do their job. We start by introducing a small subset of a text editor, the line editor. Next, we give a concrete definition (specification) of what a complete text editor means. Afterward, we provide an implementation of a line editor in Coq, and then we prove that it is a complete text editor.
\end{abstract}
\keywords{Text editors \and Formal verification \and Coq}
\section{Introduction}
A line editor is a text editor that works in REPL mode. It accepts several commands, and each of the commands operates on one or multiple lines of text. The most popular line editor is Unix \texttt{ed} \cite{b1}, and we will show a short demo interacting with it.
\begin{lstlisting}[language=sh]
$ ed example.txt
> i
> Hello World!
> Line two
> .
> n
2 Line two
> 1
> n
1 Hello World!
> d
> n
1 Line two
\end{lstlisting}
We start by editing the file \texttt{example.txt}. We will explain the commands that we used:
\begin{itemize}
\item The command \texttt{i} starts the insertion mode and in the next lines it will accept content that should be added.
\item The command \texttt{.} exits the insertion mode.
\item The command \texttt{n} shows the current line pointer along with the contents.
\item Inputting a number as a command will set the line pointer to that number.
\item The command \texttt{d} deletes the current line.
\end{itemize}
A more general editor is a character editor; however, line editors are much more convenient, especially in REPL mode. For example, it may be tricky for the user to keep track of the position of every character to read, insert, or delete.
Coq \cite{b2} is a programming language and proof assistant designed for establishing software correctness, and we will use it to implement a line editor and to prove properties of that implementation.
\section{Specification}
Before we start formalizing editors, we will provide some definitions.
\theoremstyle{definition}
\begin{definition}
A text editor is complete if it has the functionality to read, insert, and delete text at any position.
\end{definition}
Here is another definition that we will rely on. This functionality is already provided by Coq's standard library.
\theoremstyle{definition}
\begin{definition}
Strings (list of characters) can be inserted (created), read, and changed.
\end{definition}
In Coq we do not perform any ``changes''; rather, we simply return new (updated) strings.
\theoremstyle{definition}
\begin{definition}
A line editor contains a buffer - list of strings.
\end{definition}
Given these definitions, we can proceed with implementing them in Coq. The implementation in this paper works with whole lines; however, a single character can still be changed within a line by deleting the line and inserting a new line with that character changed. Thus, the editor that we implement will be complete according to the specification.
\subsection{Coq definitions}
The editor has to be able to read a line (i.e. get \texttt{n}-th element of a list):
\begin{lstlisting}
Definition readLine {X : Type} (b : list X) (pos : nat) (d : X) : X :=
nth pos b d.
\end{lstlisting}
Further, the editor has to be able to insert a line (i.e. put an element in a list at a specific position):
\begin{lstlisting}
Definition insertLine {X : Type} (b : list X) (pos : nat) (s : X) : (list X) :=
firstn pos b ++ s :: nil ++ skipn pos b.
\end{lstlisting}
Finally, the editor needs to be able to delete a line (i.e. keep the first \texttt{n} elements of the list and append the elements from position \texttt{n+1} onward):
\begin{lstlisting}
Definition deleteLine {X : Type} (b : list X) (pos : nat) : (list X) :=
firstn pos b ++ skipn (pos + 1) b.
\end{lstlisting}
All of the definitions will be wrapped in a single \texttt{EditorEval} to make it more convenient, and at this point, we have implemented the DSL for our editor.
We can use Coq's extraction facilities to export these definitions to Haskell, for example. If we add IO functionality on top of these definitions, we will have implemented an editor similar to \texttt{ed}.
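For intuition only, the following Python sketch mirrors the three definitions on ordinary Python lists; it is an informal, hand-written analogue (the helper names are ours), not the code obtained by extraction. \texttt{firstn} and \texttt{skipn} correspond to list slicing.
\begin{lstlisting}[language=Python]
def read_line(b, pos, d):           # nth pos b d
    return b[pos] if pos < len(b) else d

def insert_line(b, pos, s):         # firstn pos b ++ s :: skipn pos b
    return b[:pos] + [s] + b[pos:]

def delete_line(b, pos):            # firstn pos b ++ skipn (pos + 1) b
    return b[:pos] + b[pos + 1:]

buf = insert_line(insert_line([], 0, "Hello World!"), 1, "Line two")
assert read_line(buf, 1, "") == "Line two"
assert delete_line(buf, 0) == ["Line two"]
\end{lstlisting}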
\section{Formal proofs}
\subsection{Lemmas}
In this subsection, we will provide the lemmas that will be used by our proofs.
The following lemma states that the length of the first \(n\) elements of a list that contains at least \(n\) elements is \(n\).
\begin{lstlisting}
Lemma lemma_1 : forall {X:Type} (l : list X) (n : nat),
n <= length l -> length (firstn n l) = n.
\end{lstlisting}
The next lemma states that whenever \(n = m\), we can deduce \(n \geq m\).
\begin{lstlisting}
Lemma lemma_2 : forall n m, n = m -> n >= m.
\end{lstlisting}
Finally, \texttt{lemma\_3} states that when a list of length \(n\) is concatenated with another list with an element \(s\) in between, the \(n\)-th element of the concatenated list will be \(s\) (zero indexed). It relies on \texttt{lemma\_2} for the proof.
\begin{lstlisting}
Lemma lemma_3 : forall {X:Type} n l1 l2 (s:X) d, length l1 = n -> s = nth n (l1 ++ s :: l2) d.
\end{lstlisting}
The theorem \texttt{thm\_1} is a combination of \texttt{lemma\_3} and \texttt{lemma\_1}.
\begin{lstlisting}
Theorem thm_1 : forall {X:Type} (n : nat) (l1 l2 : list X) (s : X) (d : X), n <= length l1 -> s = nth n (firstn n l1 ++ s :: l2) d.
\end{lstlisting}
\subsection{Proofs}
The line editor can insert any text, that is, for all strings \(s\) and positions \(n\), there exists a buffer \(b\) such that the string is in \(insertLine( n, b, s )\).
\begin{center}
\(\forall s \forall n \exists b (s \in insertLine( n, b, s ))\)
\end{center}
We will only show the proof for this theorem, while the remaining proofs can be found in the associated paper's files.
\begin{lstlisting}
Theorem can_insert_text : forall (s : string) (n : nat), exists (b : list string), fst (EditorEval (InsertLine n s) b) = s :: nil.
Proof.
intros s n. simpl. unfold insertLine. exists nil. simpl.
case n.
- simpl. reflexivity.
- intros. simpl. reflexivity.
Qed.
\end{lstlisting}
Next, we will prove that the line editor can read any text, that is, for all strings \(s\), positions \(n\) and buffers \(b\), if the buffer is at least as long as the requested position, then reading at position \(n\) after inserting \(s\) at position \(n\) returns \(s\). The actual proof of the theorem relies on \texttt{thm\_1}.
\begin{center}
\(\forall s\, \forall n\, \forall b\; (n \leq |b| \to readLine(insertLine( n, b, s ), n) = s)\)
\end{center}
\begin{lstlisting}
Theorem can_read_text : forall (s : string) (n : nat) (b : list string), n <= List.length b -> snd (EditorEval (ReadLine n "") (fst (EditorEval (InsertLine n s) b))) = s.
\end{lstlisting}
Finally, we prove that the line editor can change any text. That is, there exists a function \(f\) that ``changes'' the line read at position \(n\) from \(s_1\) to \(s_2\). The proof of this theorem relies on \texttt{lemma\_1} and \texttt{thm\_1}.
\begin{center}
\(\exists f\, \forall s_1\, \forall s_2\, \forall n\, \forall b\; \big( (n \leq |b| \wedge readLine(b, n) = s_1) \to readLine(f(b, n, s_2), n) = s_2 \big)\)
\end{center}
In the code, \(f\) is defined as a combination of deletion and insertion.
\begin{lstlisting}
Theorem can_change_text : forall (s1 s2 : string) (n : nat) (b : list string), n <= List.length b -> s1 = snd (EditorEval (ReadLine n "") b) -> s2 = (snd (EditorEval (ReadLine n "") (fst (EditorEval (InsertLine n s2) (fst (EditorEval (DeleteLine n "") b)))))).
\end{lstlisting}
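In the informal Python analogue given earlier, the witness \(f\) would simply compose deletion and insertion at the same position, for example:
\begin{lstlisting}[language=Python]
def change_line(b, pos, s2):
    # delete the old line, then insert the new one at the same position
    return insert_line(delete_line(b, pos), pos, s2)

buf = ["Hello World!", "Line two"]
assert read_line(change_line(buf, 0, "Goodbye"), 0, "") == "Goodbye"
\end{lstlisting}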
\section{Conclusion}
We showed how to formally prove the functionality of a simple subset of text editors. We used line editors, but the same idea can be applied generally to text editors. We defined what a complete text editor means, and mapped those functionalities to Coq definitions. Most (if not all) text editors will use the same specifications. Having a unified standard for text editors will be useful for the users, as they can apply the same knowledge to a variety of editors. Further work can be focused on formalizing a larger DSL of text editors.
\end{document} |
\begin{document}
\markboth{ G. Fu, W. Qiu, W. Zhang}{HDG methods for convection--dominated diffusion problems}
\begin{abstract}
We present the first a priori error analysis of the $h$--version of the hybridizable discontinuous Galerkin (HDG) methods applied to convection--dominated diffusion problems.
We show that, when using polynomials of degree no greater than $k$, the $L^2$--error of the scalar variable
converges with order $k + 1/2$ on general conforming quasi--uniform simplicial meshes, just as for conventional DG methods.
We also show that the method achieves the optimal $L^2$--convergence order of $k+1$ on special meshes.
Moreover, we discuss a new way of implementing the HDG methods for which
the spectral condition number of the global matrix is independent of the diffusion
coefficient. Numerical experiments are presented which verify our theoretical results.
\end{abstract}
\title{An analysis of HDG methods for convection--dominated diffusion problems}
\section{Introduction}
In this paper, we present the first a priori error analysis of the $h$--version
of the HDG methods for the following convection--dominated diffusion model problem:
\begin{subequations}
\label{cd_eqs}
\begin{align}
\label{cd_1}
-\epsilon\Delta u + \boldsymbol{\beta}\cdot \nabla u = &\; f \quad \text{ in $\Omega$, }
\\
u = &\; g \quad \text{ on $\partial \Omega$, }
\end{align}
\end{subequations}
where $\Omega \subset \mathbb{R}^d$ ($d = 2,3$) is a polyhedral domain, $\epsilon \ll |\bld \beta|_{L^\infty(\Omega)}$, $f\in L^2 (\Omega)$ and $g\in H^{1/2}(\partial \Omega)$.
As in \cite{AyusoMarini:cdf}, we assume that the velocity field $\boldsymbol{\beta}\in W^{1,\infty}(\Omega)$ has neither closed curves nor
stationary points, i.e.,
\begin{align}
\label{assump_beta0}
\bld\beta \in W^{1,\infty}(\Omega) \text{ has no closed curves},\quad \bld \beta(\bld x)\not=\bld 0\quad\forall\bld x\in\Omega.
\end{align}
This implies that there exists
a smooth function $\psi$
so that
\begin{align}
\label{beta_assumps}
\boldsymbol{\beta}\cdot \nabla\psi(x) & \geq b_{0}\qquad \forall x\in \Omega,
\end{align}
for some constant $b_0 > 0$, see \cite{DevinatzEllisFriedman} or \cite[Appendix A]{AyusoMarini:cdf} for a proof.
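For instance, if $\bld \beta$ is a nonzero constant vector, one may take $\psi(\bld x) = \bld\beta\cdot\bld x/|\bld\beta|^{2}$, so that $\bld\beta\cdot\nabla\psi \equiv 1$ and \eqref{beta_assumps} holds with $b_{0} = 1$.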
We also assume that
\begin{align}
\label{assump_beta1}
- \mathop{\nabla\cdot} \bld\beta(\bld x) \geq 0\quad \forall \bld x\in\Omega,
\end{align}
which means that the ``effective'' reaction is non--negative
since
\[
\bld \beta\cdot \nabla u=\; \mathop{\nabla\cdot} \left( \bld \beta u\right) -(\mathop{\nabla\cdot} \bld \beta)u.
\]
Let us remark that assumption \eqref{assump_beta0} ensures the well--posedness
of the continuous problem in the pure hyperbolic limit ($\epsilon = 0$), see \cite[Chapter 3]{Goering83}
for details.
It is also well--known \cite{Eckhaus72,Goering83} that solutions to the problem \eqref{cd_eqs} may develop layers, whose approximation
is the major difficulty of designing high--order, robust numerical schemes.
We refer to \cite{Roos08,Roos12} for comprehensive information on different numerical techniques for \eqref{cd_eqs}.
In the last decade, the discontinuous Galerkin methods \cite{Cockburn99,CockburnShu01}
have been extensively considered for convection--diffusion equations.
For example, see the local discontinuous Galerkin (LDG) methods \cite{CockburnShu98,CockburnDawson00,
HoustonSchwabSuli2002,CastilloCockburnSchotzauSchwab02,CockburnDong07}, the method of Baumann and Oden \cite{BaumannOden99},
the interior--penalty discontinuous Galerkin (IP--DG) methods
\cite{ZarinRoos2005,AyusoMarini:cdf}, the multiscale discontinuous Galerkin method \cite{HughesScovazziBochevBuffa2006,
BuffaHughesSangalli2006}, the mixed--hybrid--discontinuous Galerkin (MH--DG) method \cite{Egger2010}, and the HDG methods
\cite{CockburnDongGuzmanMarcoRiccardo09,NguyenPeraireCockburnHDGLCD09,NguyenPeraireCockburnHDGNCD09}.
On the other hand, for steady--state problems, the main disadvantage of conventional DG methods, compared to other methods, is that they require a higher number
of globally--coupled degrees of freedom for the same mesh.
In order to address this issue, the HDG methods
were introduced in \cite{CockburnGopalakrishnanLazarov09} in the framework of second--order uniformly elliptic problems.
The methods are such that the globally--coupled degrees of freedom are only those of the numerical traces on the mesh skeleton. A similar idea was used
in \cite{Egger2010} to obtain the MH--DG method. Hence, the use of the hybridization technique eliminates the main disadvantage
of DG methods to a significant extent.
In \cite{CockburnDongGuzmanSFH08,CockburnGopalakrishnanSayas09},
it was shown that, for the purely diffusive model problem, the numerical approximation of HDG methods achieves the same order of convergence
as that of mixed methods.
More precisely, when using polynomials of degree no greater than $k$, the $L^2$--error for both the scalar and
flux approximation converges optimally with order
$k+1$, and a postprocessed scalar approximation converges with order $k+2$ for $k\ge 1$.
Recently in \cite{ChenCockburnHDGI,ChenCockburnHDGII},
similar results have been proven for the convection--diffusion equation when
the diffusion coefficient is comparable to the convection coefficient, with
variable--degree approximations and nonconforming meshes.
In this work, we focus on the analysis of the convection--dominated case, that is, when
$\epsilon\ll |\bld \beta|_{L^\infty(\Omega)}$.
We show that for the HDG methods using polynomial degree $k\ge 1$ with a suitably chosen stabilization function,
we have, for general meshes, that
\begin{equation}
\label{our_result1}
\Vert u_h-u\Vert_{L^{2}(\Omega)} \le C\, h^{k}(\epsilon h^{-1/2}+\epsilon^{1/2} +h^{1/2})\vert u\vert_{H^{k+1}(\Omega)},
\end{equation}
and, for meshes (almost) aligned with the direction of $\bld \beta$, that
\begin{equation}
\label{our_result2}
\Vert u_{h}-u\Vert_{L^{2}(\Omega)} \le C\, h^{k}(\epsilon h^{-1/2}+\epsilon^{1/2} +h)\vert u\vert_{H^{k+1}(\Omega)},
\end{equation}
where $C$ is a constant independent of $\epsilon$ and $h$.
Note that if $\epsilon \leq \mathcal{O}(h^2)$, we obtain optimal convergence for $\|u_h-u\|_{L^2(\Omega)}$
in \eqref{our_result2}, which can be considered as an extension of a similar result for the pure hyperbolic case
\cite{CockburnDongGuzman2008,CockburnDongGuzmanQian2010}.
We also show that, with a suitably chosen stabilization function, the condition number of the global matrix for the scaled
numerical traces is $\mathcal{O}(h^{-2})$, independent of $\epsilon$.
To prove these estimates, we cannot use the approach used in \cite{ChenCockburnHDGI,ChenCockburnHDGII}
because the constants in the error estimates in \cite{ChenCockburnHDGI,ChenCockburnHDGII}
may blow up as $\epsilon$ approaches $0$. For example, the constant $\Upsilon_K^{\max}$ in \cite[Theorem 2.1]{ChenCockburnHDGI}
is of order
$\mathcal{O}(\epsilon^{-1})$.
In order to obtain an estimate that is robust with respect to $\epsilon$, we need to modify the
energy argument used in \cite{ChenCockburnHDGI,ChenCockburnHDGII} by using test functions
similar to that used in \cite{Johnson86,AyusoMarini:cdf}. In \cite{Johnson86}, a weighted
test function was used to obtain the $L^2$--stability of the original DG method \cite{ReedHill73} for the pure hyperbolic equation.
In \cite{AyusoMarini:cdf}, the idea was extended to convection--diffusion--reaction equations using the IP--DG method.
We also need to use a new projection to obtain error estimates with less restrictive regularity assumptions.
Next, we would like to compare our results with those obtained for the IP--DG method in \cite{AyusoMarini:cdf}.
Our convergence result for $\|u_h-u\|_{L^2(\Omega)}$ on general meshes is the same as that in \cite{AyusoMarini:cdf}, while
the optimal convergence on special meshes is new. Also, our method has
less globally--coupled degrees of freedom, and our choice of the stabilization function is
determined clearly in the numerical formulation; there is no need to choose it empirically as in the IP--DG method.
Now, let us compare our results with those for the MH--DG method \cite{Egger2010}.
The MH--DG method uses a combination of upwind techniques used in DG methods for hyperbolic problems with conservative discretizations of mixed methods for elliptic problems.
To the best of our knowledge, \cite{Egger2010}
is the first paper which utilizes hybrid formulations for the mixed and DG methods to make them compatible.
We show that our method is quite similar to the MH--DG method. Actually, in Appendix~\ref{appendix-1}, we show that, using the
same approximation spaces as those of the MH--DG method, the HDG method becomes
exactly the same as the MH--DG method by suitably choosing the stabilization function. The new features of our analysis with respect to that of
\cite{Egger2010} are that we can deal with variable velocity
field $\bld \beta$ and that we
have an estimate of $\|u_h-u\|_{L^2(\Omega)}$, which is not obtained in \cite{Egger2010}.
Moreover, we prove that the condition number for the global linear system
can be rendered to be
independent of $\epsilon$ and of order $\mathcal{O}(h^{-2})$.
A well--known stabilization technique for convection--dominated diffusion problems in the finite element method literature
is residual--based stabilization, see the
SUPG \cite{BrooksHughes} and residual--free bubbles \cite{BrezziHughesMariniRussoSuli1999,BrezziMariniSuli2000} methods.
The main disadvantages of
residual--based stabilization are that they are not locally conservative and that the performance of the methods relies heavily
on a proper choice of the stabilization parameter, which might be hard to determine or expensive to compute.
We refer readers to \cite{HoustonSchwabSuli2000} for a detailed comparison of the hp--version of the DG methods with
the SUPG methods in the pure hyperbolic case, and to \cite{Egger2010} for a detailed comparison of the MH--DG method with the SUPG methods for
the convection--diffusion case.
The rest of the paper is organized as follows. In Section $2$, we introduce the HDG method and state and discuss the main theoretical results.
In Section $3$, we give a characterization of the HDG method,
and show that, after scaling, the condition number of the global matrix is
independent of $\epsilon$.
In Section $4$, we present the convergence analysis of
the HDG method. Finally, in Section $5$, we display numerical experiments which verify our theoretical results.
\section{The HDG method and main results}
In this section, we present the HDG method and state and discuss
our main theoretical results.
\subsection{The mesh}
Let ${\mathcal{T}_h}$ be a conforming, quasi--uniform simplicial
triangulation of $\Omega$.
Given an element (triangle/tetrahedron) $K \in {\mathcal{T}_h}$, which we assume to be an
open set,
${\partial K}$ denotes the set of its edges in the two
dimensional case and of its faces in the three dimensional case.
Elements of ${\partial K}$ will be generally referred to as faces,
{regardless of the spatial dimension}, and denoted by $F$. The set of all
{(interior)}
faces of the triangulation will be denoted ${\mathcal{E}_h}$ {($\mathcal{E}_h^i$)}.
We distinguish functions defined on the faces of the triangulation (the
skeleton) by saying that they are defined on ${\mathcal{E}_h}$ from functions
defined on the boundaries of the elements (and therefore having the
ability to display two different values on interior faces) by saying
that they are defined on $\partial \mathcal{T}_h$. Hence the spaces $L^2({\mathcal{E}_h})$ and
$L^2(\partial \mathcal{T}_h)$ have different meanings.
For each element $K\in\mathcal{T}_h$, we set $h_K := |K|^{\frac{1}{d}}$, and for each
face $F$, $h_F := |F|^{\frac{1}{d-1}}$, where $|\cdot|$ denotes the Lebesgue measure in $d$ or $d -1$ dimensions.
{We define $h = \max_{K\in\mathcal{T}_{h}}h_{K}$.}
Moreover, we also consider special meshes that satisfy the following assumption: there exists a constant $C$ so that
\begin{align}
\label{mesh_assumps}
\max(\sup_{x\in F}\boldsymbol{\beta}(\bld x)\cdot\boldsymbol{n},0)\leq Ch_{K}, \quad \forall F \in \partial K \setminus F_{K}^{+}, \forall K\in{\mathcal{T}_h},
\end{align}
where $F_K^+$ is the face of $K$ such that
$
\sup_{x\in F^+_K}\boldsymbol{\beta}(\bld x)\cdot\boldsymbol{n} = \max_{F\in {\partial K}} \sup_{x\in F}\boldsymbol{\beta}(\bld x)\cdot\boldsymbol{n}.
$
These meshes have been introduced in \cite{CockburnDongGuzman2008} (see also \cite{CockburnDongGuzmanQian2010})
for the analysis of the original DG method. In appendix~\ref{section_appb}, we sketch how to generate
a triangulation satisfying assumption (\ref{mesh_assumps}).
\subsection{The HDG method}
In order to define the HDG method,
we first rewrite our model problem
(\ref{cd_eqs}) as the following first-order system by introducing $\bld q = -\epsilon \nabla u$ as a
new unknown:
\begin{subequations}
\label{cd_first_order}
\begin{align}
\label{cd_first_order1}
\epsilon^{-1}\boldsymbol{q} + \nabla u & = 0\text{ in }\Omega,\\
\label{cd_first_order2}
\nabla\cdot \boldsymbol{q} + \boldsymbol{\beta}\cdot \nabla u & = f \text{ in }\Omega,\\
\label{cd_first_order3}
u & = g \text{ on }\partial\Omega.
\end{align}
\end{subequations}
Let us also define the following finite element spaces:
\begin{subequations}
\label{fem_spaces}
\begin{align}
\label{fem_space1}
\boldsymbol{V}_{h}&=\{\boldsymbol{r}\in L^{2}(\Omega;\mathbb{R}^{d}):\boldsymbol{r}|_{K}\in P_{k}(K;\mathbb{R}^{d})
\quad \forall K\in \mathcal{T}_{h}\},\\
\label{fem_space2}
W_{h}&=\{w\in L^{2}(\Omega):w|_{K}\in P_{k}(K)
\quad \forall K\in \mathcal{T}_{h}\},\\
\label{fem_space3}
M_{h}&=\{\mu\in L^{2}(\mathcal{E}_{h}):\mu |_{F} \in P_{k}(F)
\quad \forall F\in \mathcal{E}_{h}\},\\
\label{fem_space4}
M_{h}(g)&=\{\mu\in M_{h}: \langle \mu, \xi \rangle_{\partial\Omega} = \langle g , \xi \rangle_{\partial\Omega} \quad \forall \xi\in M_{h}\},
\end{align}
\end{subequations}
where $P_{k}(D)$ is the space of polynomials of total degree not larger than $k\ge 0$ defined on $D$, and
\[
\langle \xi, \eta \rangle_{\partial\Omega} = \sum_{F\in {\partial \Omega}}\int_{F}\xi\,\eta\,\mathrm{ds}.
\]
The HDG method seeks an approximation $(\boldsymbol{q}_{h},u_{h},\widehat{u}_{h})\in
\boldsymbol{V}_{h}\times W_{h}\times M_{h}$ by requiring that
\begin{subequations}
\label{cd_hdg_eqs}
\begin{align}
\label{cd_hdg_eq1}
(\epsilon^{-1}\boldsymbol{q}_{h},\boldsymbol{r})_{\mathcal{T}_{h}} -(u_{h},\nabla\cdot \boldsymbol{r})_{\mathcal{T}_{h}}
+\langle \widehat{u}_{h},\boldsymbol{r}\cdot \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}}&=0,\\
\label{cd_hdg_eq2}
-(\boldsymbol{q}_{h}+\boldsymbol{\beta}u_{h},\nabla w)_{\mathcal{T}_{h}} - (\nabla\cdot\boldsymbol{\beta}u_{h},w)_{\mathcal{T}_{h}}
+\langle (\widehat{\vf q}_{h}+\widehat{\boldsymbol{\beta}u_{h}})\cdot \boldsymbol{n}, w\rangle_{\partial\mathcal{T}_{h}}&=(f,w)_{\mathcal{T}_{h}},\\
\label{cd_hdg_eq3}
\langle \widehat{u}_{h} , \mu\rangle_{\partial\Omega}&=\langle g , \mu \rangle_{\partial\Omega},\\
\label{cd_hdg_eq4}
\langle (\widehat{\vf q}_{h}+\widehat{\boldsymbol{\beta}u_{h}})\cdot \boldsymbol{n} , \mu \rangle_{\partial\mathcal{T}_{h}\backslash\partial\Omega}&=0,
\end{align}
\text{for all $(\boldsymbol{r},w,\mu)\in \boldsymbol{V}_{h}\times W_{h}\times M_{h}$, where
the numerical trace
$(\widehat{\vf{q}}_{h}+\widehat{\boldsymbol{\beta}u_{h}})\cdot\bld n$ is given by}
\begin{equation}
\label{cd_hdg_eq5}
(\widehat{\vf{q}}_{h}+\widehat{\boldsymbol{\beta}u_{h}})\cdot\bld n=\boldsymbol{q}_{h}\cdot\bld n+\boldsymbol{\beta}\cdot\bld n\,
\widehat{u}_{h}+\tau (u_{h}-\widehat{u}_{h})
\text{ on }\partial\mathcal{T}_{h},
\end{equation}
\end{subequations}
and {\em the stabilization function $\tau$ is a nonnegative, piecewise constant function defined on $\partial \mathcal{T}_h$}.
Here we write $\left(\eta,\zeta\right)_{\mathcal{T}_{h}} := \sum_{K \in \mathcal{T}_{h}} \int_K \eta\,\zeta\,\mathrm{dx},$
and
$\langle \eta, \zeta \rangle_{\partial\mathcal{T}_{h}} := \sum_{K \in \mathcal{T}_{h}}
\int_{\partial K} \eta\,\zeta\,\mathrm{ds}$.
In Section~\ref{sec:hybrid}, we show that the linear system \eqref{cd_hdg_eqs} can be efficiently implemented
so that the only global unknowns are related to
the numerical trace $\widehat{u}_{h}$.
The HDG method \eqref{cd_hdg_eqs} has a unique solution provided that the stabilization function $\tau$
in (\ref{cd_hdg_eq5}) satisfies the following assumption:
\begin{align}
\label{assump_tau_00}
\inf_{\bld x\in F}\left(\tau - \frac{1}{2} \boldsymbol{\beta}(\bld x)\cdot \boldsymbol{n}\right)\ge 0, \quad
\forall F\in{\partial K},\forall K\in\mathcal{T}_{h},
\end{align}
where in each element, the strict inequality holds at least on one face; see \cite[Theorem 3.1]{NguyenPeraireCockburnHDGLCD09} for a proof.
\subsection{Assumptions on the stabilization function}
Next, we present our assumptions on the stabilization function $\tau$
satisfying the inequality \eqref{assump_tau_00}. We then construct two examples satisfying them.
To do that, we need to introduce some notation.
Let $F_K^\star$ be the face of $K$ on which $\tau$ attains its maximum, and $F_K^s$ be the face of $K$ on which $\inf_{\bld x\in F}\left( \tau - \frac{1}{2}\bld \beta(\bld x)\cdot\bld n\right)$
attains its maximum, that is,
\begin{subequations}
\label{notation_F}
\begin{align}
\tau(F_K^\star) &: =\; \max_{F\in{\partial K}} \tau(F)& \forall K\in {\mathcal{T}_h},\\
\inf_{\bld x\in F_K^s}\left( \tau - \frac{1}{2}\bld \beta(\bld x)\cdot\bld n\right) &: =\;
\max_{F\in {\partial K}}\inf_{\bld x\in F}\left( \tau - \frac{1}{2}\bld \beta(\bld x)\cdot\bld n\right)& \forall K\in {\mathcal{T}_h},
\end{align}
\end{subequations}
and set
\begin{subequations}
\begin{align*}
\tau_K^w : = &\;\max_{F\in{\partial K}\backslash F_K^\star} \tau(F), & \tau^w : = \max_{K\in {\mathcal{T}_h}} \tau_K^w,\\
\tau_K^{\bld v} : = &\;\inf_{\bld x\in F_K^s}\left( \tau - \frac{1}{2}\bld \beta(\bld x)\cdot\bld n\right), & \tau^{\bld v} : = \min_{K\in {\mathcal{T}_h}} \tau_K^{\bld v}.
\end{align*}
\end{subequations}
We assume that there exist universal positive constants $C_0, C_1,C_2$ so that
\begin{subequations}
\label{assumption_tau_general}
\begin{align}
\label{assumption_tau_0}
\tau_K^w \le &\;C_0&\forall K\in {\mathcal{T}_h}, \\
\label{assumption_tau_3}
\tau_K^{\bld v} \ge &\;C_1\min (\frac{\epsilon}{h_{K}}, 1) & \forall K\in {\mathcal{T}_h}, \\
\label{assumption_tau_1}
\inf_{\bld x\in F}\left( \tau - \frac{1}{2}\bld \beta(\bld x)\cdot\bld n\right) \ge &\;C_2 \max_{x\in F}\left |\bld \beta(\bld x)\cdot\bld n\right |& \forall
F\in {\partial K}, \forall K\in {\mathcal{T}_h}.
\end{align}
\end{subequations}
In order to get an improved estimate,
we need to replace \eqref{assumption_tau_0} by
the following, more restrictive assumption
on $\tau_K^w$: assume there exists a positive constant $C$ so that
\begin{align}
\label{assump_tau_s}
\tau_K^w \le &\;C h_K \quad\quad \forall K\in{\mathcal{T}_h}.
\end{align}
We remark that this assumption might not be compatible with \eqref{assumption_tau_1} on general meshes, but
it can hold for the meshes that satisfy assumption \eqref{mesh_assumps}.
Now, let us show that it is quite easy to construct $\tau$ satisfying assumptions \eqref{assumption_tau_general}
by displaying two of them.
The first example of the stabilization function is
\begin{align}
\label{tau1}
\tau_1(F) = \max(\sup_{\bld x \in F} \,\bld \beta(\bld x)\cdot \bld n,0),\quad \forall F\in {\partial K}, \forall K\in {\mathcal{T}_h}.
\end{align}
Assumptions \eqref{assumption_tau_0} and \eqref{assumption_tau_1} are always satisfied, and assumption \eqref{assumption_tau_3} holds
provided
\begin{align*}
\max_{F\in {\partial K}}\inf_{\bld x\in F}\left(- \bld \beta(\bld x)\cdot\bld n \right)\ge &\;C & \forall K\in {\mathcal{T}_h},
\end{align*}
for some positive constant $C$; this is true, for example, for piecewise-constant $\bld \beta$.
Moreover, assumption
\eqref{assump_tau_s} is also satisfied if the mesh satisfies \eqref{mesh_assumps}.
The second example is
\begin{align}
\label{tau2}
\tau_2(F) = \max(\sup_{\bld x \in F} \,\bld \beta(\bld x)\cdot \bld n,0) + \min (\rho_0\frac{\epsilon}{h_K}, 1 ),\quad \forall F\in {\partial K}, \forall K\in {\mathcal{T}_h},
\end{align}
where $\rho_0 >0$ is a constant typically chosen to be less than or equal to $1$.
Assumptions \eqref{assumption_tau_general} are always satisfied in this case.
Moreover, assumption \eqref{assump_tau_s} is satisfied
provided the mesh satisfies \eqref{mesh_assumps} and we take $\epsilon \leq \mathcal{O}(h^2)$.
Let us conclude the discussion on $\tau$ by remarking that if we replace $\max(\sup_{\bld x \in F} \,\bld \beta(\bld x)\cdot \bld n,0)$ with
$\sup_{\bld x \in F} |\,\bld \beta(\bld x)\cdot \bld n|$ in the definition of $\tau$ in \eqref{tau1} and \eqref{tau2}, assumptions \eqref{assumption_tau_general}
will be satisfied, while \eqref{assump_tau_s} is no longer true for the special meshes.
\subsection{The main theoretical results}
From now on, we use $C$ to denote a generic constant, which may be dependent on the polynomial degree $k$,
and/or the velocity field $\boldsymbol\beta$. The value $C$ at different occurrences may differ.
We proceed to state our main theoretical results.
We will show convergence estimates in the following norm
\begin{align*}
\vertiii{ (\boldsymbol{r}, w, \mu)}_{e} := \left(\|\epsilon^{-1/2}\boldsymbol{r}\|^2_{\mathcal{T}_h} + \|w\|^2_{\mathcal{T}_h} +
\left\| |\tau-\frac{1}{2} \boldsymbol{\beta}\cdot \boldsymbol{n}|^{1/2}(w-\mu)\right\|_{\partial\mathcal{T}_h}^2 \right)^{1/2},
\end{align*}
where $\|\cdot\|_{D}$ is the standard $L^2$--norm in the domain $D$.
\begin{theorem}
\label{MainTh1}
Let $(\bld q, u)$ be the solution to the boundary--value problem \eqref{cd_first_order},
and let $(\bld q_h, u_h,\widehat u_h)$ be the solution to the HDG method
\eqref{cd_hdg_eqs} where the stabilization function $\tau$ satisfies assumptions \eqref{assumption_tau_general}.
Then, there exists $h_{0}$, independent of $\epsilon$,
such that when $h<h_{0}$, we have
\begin{align*}
& \vertiii{(\bld q-\bld q_h, u-u_h, u-\widehat{u}_h)}_e \\
\nonumber
\le & \; C \epsilon^{1/2}h^{s_{\bld v}+1/2}(\epsilon^{1/2}+h^{1/2})|u |_{H^{s_{\bld v}+2}({\mathcal{T}_h};\mathbb{R}^d)}
+ C h^{s_{w}+1/2}|u |_{H^{s_{w}+1}({\mathcal{T}_h})},
\end{align*}
for all $s_{\bld v}\in [0,k]$ and $s_w \in [0,k]$.
\end{theorem}
\begin{theorem}
\label{MainTh2}
Let $(\bld q, u)$ be the solution to the boundary--value problem \eqref{cd_first_order}, and let
$(\bld q_h, u_h,\widehat u_h)$ be the solution to the HDG method
\eqref{cd_hdg_eqs} where the stabilization function $\tau$ satisfies assumptions \eqref{assumption_tau_general} and \eqref{assump_tau_s}.
Then, there exists $h_{0}$, independent of $\epsilon$,
such that when $h<h_{0}$, we have
\begin{align*}
\|{u-u_h}\|_{{\mathcal{T}_h}} &\le \; C \epsilon^{1/2}h^{s_{\bld v}+1/2}(\epsilon^{1/2}+h^{1/2})|u |_{H^{s_{\bld v}+2}({\mathcal{T}_h};\mathbb{R}^d)} +
C h^{s_{w}+1}|u |_{H^{s_{w}+1}({\mathcal{T}_h})},
\end{align*}
for all $s_{\bld v}\in [0,k]$ and $s_w \in [0,k]$.
\end{theorem}
\begin{remark}
If $\tau$ satisfies assumptions \eqref{assumption_tau_general}, $k\ge 1$, $u\in H^{k+1}(\Omega)$ and $\epsilon \leq \mathcal{O}(h)$, we get
\[
\|{u-u_h}\|_{{\mathcal{T}_h}} \le \;
C h^{k+1/2}|u |_{H^{k+1}({\mathcal{T}_h})}
\]
by choosing $s_{\bld v} = k-1, s_w = k$ in Theorem~\ref{MainTh1}.
If $\epsilon = 0$, our method reduces to the original DG method \cite{ReedHill73}. Since the best $L^2$--error
of the DG method for pure convection problems on general meshes is $\|u-u_h\|_{\mathcal{T}_h}\le C h^{k+1/2}|u|_{H^{k+1}}$
(see \cite{Peterson91}), it is reasonable to expect $\|u-u_h\|_{\mathcal{T}_h}$ to be of
order $h^{k+1/2}$ when $\epsilon \ll 1$.
\end{remark}
\begin{remark} If $\tau$ satisfies assumptions \eqref{assumption_tau_general} and \eqref{assump_tau_s},
$k\ge 1$, $u\in H^{k+1}(\Omega)$ and $\epsilon \leq \mathcal{O}(h^2)$, we have
\[
\|{u-u_h}\|_{{\mathcal{T}_h}} \le \;
C h^{k+1}|u |_{H^{k+1}({\mathcal{T}_h})},
\]
by choosing $s_{\bld v} = k-1, s_w = k$ in Theorem~\ref{MainTh2}. Note that we can construct $\tau$
satisfying \eqref{assumption_tau_general} and
\eqref{assump_tau_s} provided that the mesh satisfies \eqref{mesh_assumps}. Hence, our result can be considered as a generalization of the results in
\cite{CockburnDongGuzman2008,CockburnDongGuzmanQian2010} in which the authors obtained optimal $L^2$--convergence
of the original DG method for convection--reaction equations on special meshes.
\end{remark}
\begin{remark}
It is shown in Appendix~\ref{appendix-1} that we can recover the MH--DG method \cite{Egger2010} from our formulation by
suitably choosing the stabilization
function $\tau$ and the approximation spaces $\bld V_h, W_h, M_h$. Hence, our results can be directly applied to the MH--DG method.
In particular, we gain
$L^2$--control of $u_h$ and obtain the optimal order of convergence for $\|u-u_h\|_{{\mathcal{T}_h}}$ on special meshes.
\end{remark}
\section{A characterization of the HDG method}
\label{sec:hybrid}
Here, we show how to eliminate, in an elementwise manner,
the unknowns $\bld{q}_h$ and $u_h$ from the equations \eqref{cd_hdg_eqs} and rewrite
the original system solely in terms of the unknown $\widehat{u}_h$, see also \cite{CockburnDongGuzmanMarcoRiccardo09,NguyenPeraireCockburnHDGLCD09}. In this way,
we do not have to deal with the large linear system generated by \eqref{cd_hdg_eqs},
but with the inversion of a sparser matrix of
remarkably smaller size.
\subsection{The local problems}
We begin by showing how to express the unknowns
$\bld{q}_h$ and $u_h$ in terms of the unknown $\widehat{u}_h$.
Given ${\lambda}\in L^2({\mathcal{E}_h})$ and ${f}\in L^2({\mathcal{T}_h})$,
consider the solution to the set of local problems in
each $K \in {\mathcal{T}_h}$: find
\[
(\bld q_h,{u}_h)\in \bld V(K)\times W(K),
\]
where $\bld V(K) : = P_k(K;\mathbb{R}^d)$ and $W(K) := P_k(K),$
such that
\begin{subequations}
\begin{align}
\label{local1}
(\epsilon^{-1}\boldsymbol{q}_{h},\boldsymbol{r})_{K} -(u_{h},\nabla\cdot \boldsymbol{r})_{K}
&=
-\langle \lambda,\boldsymbol{r}\cdot \boldsymbol{n}\rangle_{{\partial K}},\\
\label{local2}
(\nabla\cdot \boldsymbol{q}_{h}, w)_{K} - (u_{h},\nabla\cdot(\boldsymbol{\beta}w))_{K}
+\langle \tau u_h, w\rangle_{{\partial K}}&= \langle (\tau-\bld{\beta}\cdot \bld n) \lambda, w\rangle_{{\partial K}}
+ (f,w)_{K},
\end{align}
\text{for all $(\boldsymbol{r},w)\in \boldsymbol{V}(K)\times W(K)$.}
\end{subequations}
We denote by $(\bld{q}_h^{f},u_h^{f})$
the solution of the above local problem when we take $\lambda=0$. Similarly,
we denote by $(\bld{q}_h^{\lambda},u_h^{\lambda})$ the solution
when $f=0$. We can thus write that
\[
(\bld q_h,u_h)=(\bld{q}_h^{\lambda},
u_h^{\lambda})+ (\bld{q}_h^{f},u_h^{f}).
\]
\subsection{The global problem}
If we now take $\lambda:=\widehat{u}_h$, we see that $(\bld q_h,u_h)$
is expressed in terms of $\widehat{u}_h$ (and $f$). We can thus eliminate those
two unknowns from the equations and solve for $\widehat{u}_h$ only. The global problem that determines $\widehat{u}_h$ is not difficult to find.
We have that $\widehat{u}_h\in M_h(g)$ must satisfy
\[
a_h(\widehat{u}_h,\mu)=b_h(\mu)\qquad\forall\mu\in M_h(0),
\]
where
\begin{alignat*}{1}
a_h(\lambda,\mu)&:=
-\bint{\bld{q}^{\lambda}_h\cdot\bld{n}}{\mu}{\partial\mathcal{T}_h} - \bint{\tau(u_h^{\lambda} - \lambda)}{\mu}{\partial\mathcal{T}_h},
\\
b_h(\mu)&:=
\bint{\bld{q}^f_h\cdot\bld{n}}{\mu}{\partial\mathcal{T}_h} + \bint{\tau u^f_h}{\mu}{\partial\mathcal{T}_h}.
\end{alignat*}
Indeed, note that the definition of $M_h(g)$
incorporates the boundary condition \eqref{cd_hdg_eq3}, and that the last equation
is nothing but a rewriting of the transmission condition \eqref{cd_hdg_eq4} by observing that
\[\bint{\bld \beta \cdot \bld n\,\lambda}{\mu}{\partial\mathcal{T}_h} = 0,\quad \forall \lambda\in M_h(g), \forall \mu\in M_h(0).\]
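To make the elimination pattern concrete, the following sketch (written in Python with NumPy; the element matrices below are random placeholders, not the actual HDG matrices) illustrates generic static condensation on a single element: the local unknowns are eliminated through a Schur complement, only the trace unknowns enter the solve, and the local solution is recovered afterwards.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_loc, n_tr = 6, 3   # local dofs (q_h, u_h) and trace dofs on one element

# Placeholder element system  [[A, B], [C, D]] [x_loc, x_tr] = [f_loc, f_tr]
A = rng.standard_normal((n_loc, n_loc)) + 10.0 * np.eye(n_loc)
B = rng.standard_normal((n_loc, n_tr))
C = rng.standard_normal((n_tr, n_loc))
D = rng.standard_normal((n_tr, n_tr)) + 10.0 * np.eye(n_tr)
f_loc = rng.standard_normal(n_loc)
f_tr = rng.standard_normal(n_tr)

# Local elimination: x_loc = A^{-1} (f_loc - B x_tr), element by element
Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B          # Schur complement acting on the traces only
g = f_tr - C @ Ainv @ f_loc

x_tr = np.linalg.solve(S, g)            # "global" step: traces only
x_loc = Ainv @ (f_loc - B @ x_tr)       # local recovery of (q_h, u_h)

# Same answer as solving the full coupled system directly
full = np.block([[A, B], [C, D]])
ref = np.linalg.solve(full, np.concatenate([f_loc, f_tr]))
assert np.allclose(np.concatenate([x_loc, x_tr]), ref)
\end{verbatim}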
\subsection{A characterization of the approximate solution}
The above results suggest the following characterization of the
approximate solution of the HDG method. We leave the proof to the interested reader as an exercise, see also
\cite{CockburnDongGuzmanMarcoRiccardo09,NguyenPeraireCockburnHDGLCD09}.
\begin{theorem}\label{character}
The approximate solution of the HDG method
satisfies
\[
(\bld q_h,u_h)=(\bld{q}_h^{\widehat{u}_h},
u_h^{\widehat{u}_h})+ (\bld{q}_h^{f},u_h^{f}).
\]
Moreover, $\widehat{u}_h\in M_h(g)$ is the solution of
\begin{alignat}{2}
\label{hybrid-ee}
a_h (\widehat{u}_h,\mu) =&b_h(\mu) \qquad \forall \mu\in M_h(0).
\end{alignat}
Also, we have that
\begin{align*}
a_h(\lambda,\mu) &=\; (\epsilon^{-1}\,\bld{q}^\lambda_h,{\bld q}^\mu_h)_{\mathcal{T}_h}
-(u_h^\lambda, \nabla\cdot (\bld \beta u^\mu_h))_{\mathcal{T}_h} +\bint{\bld \beta \cdot \bld n\,\lambda}{ u^\mu_h}{\partial\mathcal{T}_h}
+\bint{\tau(u^\lambda_h- \lambda)}{ u^\mu_h- \mu}{\partial\mathcal{T}_h},\\
b_h(\mu) & = \; (f,u_h^\mu)_{\mathcal{T}_h} + \bint{\bld \beta \cdot \bld n\, u_h^f}{ \mu}{\partial\mathcal{T}_h} + (u_h^f, \bld\beta \cdot \nabla u_h^\mu)_{\mathcal{T}_h}
- (\nabla u_h^f, \bld\beta u_h^\mu)_{\mathcal{T}_h}.
\end{align*}
\end{theorem}
\subsection{The conditioning of the HDG method}
We note that both examples of the stabilization function $\tau$ in \eqref{tau1} and \eqref{tau2} can be very small on a face $F$ if
$\boldsymbol{\beta}\cdot\boldsymbol{n}|_{F}$
and $\epsilon$ are very small.
In this case, the condition number of the global matrix generated by $a_{h}$ in
(\ref{hybrid-ee}) might blow up as $\epsilon$ goes to zero.
In order to make the condition number independent of $\epsilon$, we need a new assumption on $\tau$, namely,
\begin{align}
\label{assump_cond}
\inf_{\bld x\in F}\left( \tau - \frac{1}{2}\bld \beta(\bld x)\cdot\bld n\right) \ge &\;C_2
\min (\frac{\epsilon}{h_{F}},1)& \forall
F\in {\partial K}, \forall K\in {\mathcal{T}_h}.
\end{align}
If we introduce
\begin{align}
\label{change_variables}
\tilde{\lambda} = & \Lambda_{\epsilon}\lambda,\quad \tilde{\mu} = \Lambda_{\epsilon}\mu,\qquad \forall
\lambda \in M_{h}(g),\mu\in M_{h}(0),\\
\nonumber
&\text{where }\Lambda_{\epsilon}|_{F} = \left(\sup_{x\in F}\vert \boldsymbol{\beta}\cdot\boldsymbol{n}(x)\vert
+\min(\frac{\epsilon}{h_{F}},1) \right)^{1/2},\quad \forall F\in\mathcal{E}_{h},
\end{align}
the preferred form for implementation for the HDG method
is to find $\tilde{\lambda}\in M_{h}({\Lambda_{\epsilon}g})$ satisfying
\begin{align*}
& \tilde{a}_{h}(\tilde{\lambda}, \tilde{\mu}) = b_{h}(\Lambda_{\epsilon}^{-1}\tilde{\mu})
\end{align*}
for all $\tilde{\mu}\in M_{h}(0)$. Here,
\begin{align}
\label{reduced_matrix2}
\tilde{a}_{h}(\tilde{\lambda},\tilde{\mu}) = a_{h}(\Lambda_{\epsilon}^{-1}\tilde{\lambda}, \Lambda_{\epsilon}^{-1}\tilde{\mu}).
\end{align}
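In matrix terms, since $\Lambda_{\epsilon}$ acts diagonally on the trace unknowns, the change of variables \eqref{change_variables} amounts to the symmetric scaling $\tilde{A} = \Lambda^{-1} A \Lambda^{-1}$ of the global matrix $A$ generated by $a_h$. The toy sketch below (Python with NumPy; the matrices are random placeholders, not HDG matrices) illustrates why such a scaling can remove the dependence of the condition number on $\epsilon$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, eps = 40, 1e-8

# Placeholder: a well-conditioned core A0, polluted by a diagonal scaling whose
# entries degenerate like sqrt(eps) (mimicking a face where |beta.n| is tiny).
A0 = rng.standard_normal((n, n)) + 10.0 * np.eye(n)
d = rng.uniform(0.5, 1.0, n)
d[0] = 0.0                                  # one "bad" face
Lam = np.diag(np.sqrt(eps + d))
A = Lam @ A0 @ Lam                          # unscaled trace matrix

A_scaled = np.linalg.inv(Lam) @ A @ np.linalg.inv(Lam)   # Lambda^{-1} A Lambda^{-1}

print(np.linalg.cond(A))         # grows without bound as eps -> 0
print(np.linalg.cond(A_scaled))  # equals cond(A0) up to round-off
\end{verbatim}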
We have the following theorem concerning the condition number of the scaled global matrix in \eqref{reduced_matrix2}.
\begin{theorem}
\label{Thm_conditioning}
Let the stabilization function $\tau$
satisfy assumptions \eqref{assumption_tau_general} and \eqref{assump_cond}, and let $\epsilon \leq \mathcal{O}(h)$.
Let $\kappa$ be the spectral condition number of the global matrix generated by $\tilde{a}_h$ in \eqref{reduced_matrix2}.
Then there is $h_{0}>0$,
which is independent of $\epsilon$ and $h$, such that when $h< h_{0}$,
\begin{align*}
&\kappa \leq Ch^{-2}.
\end{align*}
\end{theorem}
We present a detailed proof of Theorem~\ref{Thm_conditioning} in Appendix~\ref{appendix}.
\begin{remark}
Obviously, assumption (\ref{assump_cond}) is satisfied by the second
stabilization function (\ref{tau2}) but not by the first one (\ref{tau1}) on meshes that are aligned with the direction of $\bld \beta$.
Theorem~\ref{Thm_conditioning} shows that the condition number of the global matrix of the HDG method
for convection--dominated diffusion problems
is the same as that of HDG methods for elliptic problems in \cite{CockburnDuboisGopalakrishnanTan2013}.
\end{remark}
\section{Convergence analysis}
\label{analysis}
In this section, we prove Theorem~\ref{MainTh1} and Theorem~\ref{MainTh2}.
We begin by introducing the following bilinear form:
\begin{align}
\label{bilinear_form}
B((\boldsymbol{q},u,\lambda),(\boldsymbol{r},w,\mu))
= & \;(\epsilon^{-1}\boldsymbol{q},\boldsymbol{r})_{\mathcal{T}_{h}}-(u,\nabla\cdot\boldsymbol{r})_{\mathcal{T}_{h}}
+\langle \lambda, \boldsymbol{r}\cdot \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}}\\
\nonumber
& -(\boldsymbol{q}+\boldsymbol{\beta}u,\nabla w)_{\mathcal{T}_{h}}
+\langle (\boldsymbol{q}+\boldsymbol{\beta}\lambda)\cdot\boldsymbol{n}
+\tau (u-\lambda),w\rangle_{\partial\mathcal{T}_{h}}\\
\nonumber
& -((\nabla\cdot\boldsymbol{\beta})u,w)_{\mathcal{T}_{h}}
-\langle(\boldsymbol{q}+\boldsymbol{\beta}\lambda)\cdot\boldsymbol{n}
+\tau (u-\lambda), \mu
\rangle_{\partial\mathcal{T}_{h}},
\end{align}
for all
$(\boldsymbol{q},u,\lambda) \text{ and } (\boldsymbol{r},w,\mu)\in H^{1}(\mathcal{T}_{h};\mathbb{R}^{d})\times H^{1}(\mathcal{T}_{h})\times L^{2}(\mathcal{E}_{h})$.
It is easy to see that the HDG method \eqref{cd_hdg_eqs} can be recast in the following compact form:
Find $(\boldsymbol{q}_h, u_h, \widehat{u}_h)\in \boldsymbol{V}_{h}\times W_{h}\times M_{h}(g)$ so that
\begin{align}
\label{compact}
B((\boldsymbol{q}_h,u_h,\widehat{u}_h),(\boldsymbol{r},w,\mu)) & = (f, w)_{\mathcal{T}_{h}},
\end{align}
for all $(\boldsymbol{r},w,\mu)\in \boldsymbol{V}_{h}\times W_{h}\times M_{h}(0)$.
\subsection{Stability property for the HDG method}
It is well known that we have the following result regarding the stability of the convection--dominated diffusion problem,\begin{align}\label{stablity}
\epsilon \| \nabla u \|^2_{L^{2}(\Omega)} + \| u \|^2_{L^{2}(\Omega)} \leq C \| f \|^2_{L^{2}(\Omega)},
\end{align}
provided that $\boldsymbol{\beta}$ satisfies assumption \eqref{beta_assumps} and $g = 0$ on $\partial \Omega$, see \cite{AyusoMarini:cdf}.
On the other hand, by taking $(\boldsymbol{r},w,\mu) = (\boldsymbol{q}_h,u_h,\widehat{u}_h)$ in \eqref{compact}, the standard energy argument
only gives the following estimate:
\begin{align*}
(\epsilon^{-1}\boldsymbol{q}_h,\boldsymbol{q}_h)_{\mathcal{T}_{h}}
+\langle (\tau-\frac{1}{2} \boldsymbol{\beta}\cdot \boldsymbol{n})(u_h-\widehat{u}_h),u_h-\widehat{u}_h\rangle_{\partial\mathcal{T}_{h}} -\frac{1}{2}((\nabla\cdot\boldsymbol{\beta})u_h,u_h)_{\mathcal{T}_{h}}
= (f, u_h)_{\mathcal{T}_h}.
\end{align*}
Hence, we do not have control of the $L^2$--norm of $u_h$ by the standard energy argument when the velocity field $\boldsymbol\beta$ is divergence-free.
The main idea of our stability analysis is to achieve the control of the $L^2$--norm of $u_h$
by mimicking the proof of the stability property \eqref{stablity} at the discrete level. We shall proceed in the following three steps.
{\bf Step One.}
In view of assumption \eqref{assump_beta0}, we define a function
\begin{equation}
\label{weight_function}
\varphi := e^{-\psi}+\chi,
\end{equation}
where $\chi$ is a positive constant to be determined later.
Mimicking the proof of stability results carried out for the continuous problem, we obtain the following lemma.
\begin{lemma}
\label{lemma_ideal_infsup}
Let $\varphi$ be given in \eqref{weight_function} where $\chi\geq 1+2b_{0}^{-1}\Vert e^{-\psi}\Vert_{L^{\infty}(\Omega)}
\cdot \Vert \nabla\psi\Vert_{L^{\infty}(\Omega)}^{2}$. Also, let $\tau$ satisfy assumption \eqref{assump_tau_00}.
Then for all
$(\boldsymbol{q}_{h},u_{h},{\lambda}_{h})\in \boldsymbol{V}_{h}\times W_{h}\times M_{h}(0)$, the following inequality holds
\begin{align*}
B((\boldsymbol{q}_{h},u_{h},{\lambda}_{h}),(\boldsymbol{q}_{\varphi},u_\varphi,\lambda_\varphi)) \geq & \; C \vertiii{ (\boldsymbol{q}_h, u_h, \lambda_h)}_{e}^2,
\end{align*}
where
$\boldsymbol{q}_\varphi=\varphi \boldsymbol{q}_{h}$, $u_\varphi = \varphi u_{h}$ and
$\lambda_\varphi = \varphi {\lambda}_{h} $.
\end{lemma}
\begin{proof}
With $(\boldsymbol{q}_\varphi,u_\varphi,\lambda_\varphi)$ given above, we have that
\begin{align*}
B((\boldsymbol{q}_{h},u_{h},{\lambda}_{h}),(\boldsymbol{q}_\varphi,u_\varphi,\lambda_\varphi))
= (\epsilon^{-1}\boldsymbol{q}_{h},\varphi\boldsymbol{q}_{h})_{\mathcal{T}_{h}} + T_1 + T_2 + T_3 ,
\end{align*}
where
\begin{align*}
T_1 = &\; -(u_{h},\nabla\cdot (\varphi\boldsymbol{q}_{h}))_{\mathcal{T}_{h}}
+\langle {\lambda}_{h}, \varphi\boldsymbol{q}_{h}\cdot \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}}
-(\boldsymbol{q}_{h},\nabla (\varphi u_{h}))_{\mathcal{T}_{h}} + \langle \boldsymbol{q}_{h}\cdot\boldsymbol{n},\varphi( u_{h} - {\lambda}_{h}) \rangle_{\partial\mathcal{T}_{h}} \\
T_2 = &\; -(\boldsymbol{\beta}u_{h},\nabla(\varphi u_{h}))_{\mathcal{T}_{h}}
-
((\nabla\cdot\boldsymbol{\beta})u_{h},\varphi u_{h})_{\mathcal{T}_{h}}
+\langle \boldsymbol{\beta}\cdot\boldsymbol{n} {\lambda}_{h},\varphi u_{h} \rangle_{\partial\mathcal{T}_{h}}
\\
T_3 = &\; \langle
\tau (u_{h}-{\lambda}_{h}),\varphi( u_{h} - {\lambda}_{h}) \rangle_{\partial\mathcal{T}_{h}} .
\end{align*}
By integration by parts, we obtain
\begin{align*}
T_1 = &\; -(u_{h},\nabla\cdot (\varphi\boldsymbol{q}_{h}))_{\mathcal{T}_{h}}
+\langle {\lambda}_{h}, \varphi\boldsymbol{q}_{h}\cdot \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}}
-(\boldsymbol{q}_{h},\nabla (\varphi u_{h}))_{\mathcal{T}_{h}} + \langle \boldsymbol{q}_{h}\cdot\boldsymbol{n},\varphi( u_{h} - {\lambda}_{h}) \rangle_{\partial\mathcal{T}_{h}} \\
= & \; -(u_{h},\nabla\varphi \cdot\boldsymbol{q}_{h})_{\mathcal{T}_{h}} - (\varphi u_{h},\nabla\cdot\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
-(\boldsymbol{q}_{h},\nabla (\varphi u_{h}))_{\mathcal{T}_{h}} + \langle \boldsymbol{q}_{h}\cdot\boldsymbol{n},\varphi u_{h} \rangle_{\partial\mathcal{T}_{h}} \\
= & \; -(u_{h},\nabla\varphi \cdot\boldsymbol{q}_{h})_{\mathcal{T}_{h}} \\
=&\; (u_{h},e^{-\psi}\nabla\psi \cdot\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
\\
T_2 = &\; -(\boldsymbol{\beta}u_{h},\nabla(\varphi u_{h}))_{\mathcal{T}_{h}}
-
((\nabla\cdot\boldsymbol{\beta})u_{h},\varphi u_{h})_{\mathcal{T}_{h}}
+\langle \boldsymbol{\beta}\cdot\boldsymbol{n} {\lambda}_{h},\varphi u_{h} \rangle_{\partial\mathcal{T}_{h}}\\
=&\; -(\boldsymbol{\beta}\cdot\nabla\varphi, u_h^2 )_{\mathcal{T}_{h}} -(\boldsymbol{\beta} \varphi, \nabla\frac{u_h^2}{2} )_{\mathcal{T}_{h}}
-
((\nabla\cdot\boldsymbol{\beta})\varphi ,u_{h}^2)_{\mathcal{T}_{h}} + \langle \boldsymbol{\beta}\cdot\boldsymbol{n} {\lambda}_{h},\varphi u_{h} \rangle_{\partial\mathcal{T}_{h}}\\
=&\; -\frac{1}{2}(\boldsymbol{\beta}\cdot\nabla\varphi, u_h^2 )_{\mathcal{T}_{h}} - \frac{1}{2}\langle \boldsymbol{\beta}\cdot\boldsymbol{n} {u}_{h},\varphi u_{h} \rangle_{\partial\mathcal{T}_{h}}-
\frac{1}{2}((\nabla\cdot\boldsymbol{\beta})\varphi ,u_{h}^2)_{\mathcal{T}_{h}} + \langle \boldsymbol{\beta}\cdot\boldsymbol{n} {\lambda}_{h},\varphi u_{h} \rangle_{\partial\mathcal{T}_{h}}\\
=&\; \frac{1}{2}(\boldsymbol{\beta}\cdot\nabla\psi, e^{-\psi}u_h^2 )_{\mathcal{T}_{h}} -\frac{1}{2}((\nabla\cdot\boldsymbol{\beta})\varphi ,u_{h}^2)_{\mathcal{T}_{h}}
- \frac{1}{2}\langle \boldsymbol{\beta}\cdot\boldsymbol{n} (u_h-{\lambda}_{h}),\varphi (u_{h}-\lambda_h) \rangle_{\partial\mathcal{T}_{h}},
\end{align*}
where in the last step, we used $\langle \boldsymbol{\beta}\cdot\boldsymbol{n} {\lambda}_{h},\varphi \lambda_{h} \rangle_{\partial\mathcal{T}_{h}} = 0 $ due to the
fact that $\lambda_{h}$ is single valued on the interior faces and $\lambda_{h}=0$ on $\partial\Omega$.
Combining $T_1$, $T_2$ and $T_3$, we have that
\begin{align*}
B((\boldsymbol{q}_{h},u_{h},{\lambda}_{h}),(\boldsymbol{q}_\varphi,u_\varphi,\lambda_\varphi))
= & \;(\epsilon^{-1}\boldsymbol{q}_{h},\varphi\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
+(u_{h},e^{-\psi}\nabla\psi\cdot\boldsymbol{q}_{h})_{\mathcal{T}_{h}}\\
&+\frac{1}{2}([\boldsymbol{\beta}\cdot \nabla\psi]u_{h},e^{-\psi}u_{h})_{\mathcal{T}_{h}}
-\frac{1}{2}((\nabla\cdot\boldsymbol{\beta})u_{h},\varphi u_{h})_{\mathcal{T}_{h}}\\
&+\langle(\tau-\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n})\varphi(u_{h}-\lambda_{h}),
u_{h}-\lambda_{h}\rangle_{\partial\mathcal{T}_{h}}
\end{align*}
Invoking assumptions \eqref{beta_assumps} and \eqref{assump_beta1}, and $\varphi\geq \chi$, we obtain
\begin{align*}
B((\boldsymbol{q}_{h},u_{h},\lambda_{h}),(\boldsymbol{q}_\varphi,u_\varphi,\lambda_\varphi))
\geq & (\epsilon^{-1}\boldsymbol{q}_{h},\varphi\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
+(u_{h},e^{-\psi}\nabla\psi\cdot\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
+\frac{1}{2}b_{0}(u_{h},e^{-\psi}u_h)_{\mathcal{T}_{h}}\\
&+\langle (\tau-\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n})\varphi(u_{h}-\lambda_{h}),
u_{h}-\lambda_{h}\rangle_{\partial\mathcal{T}_{h}}\\
\geq & \chi(\epsilon^{-1}\boldsymbol{q}_{h},\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
+(u_{h},e^{-\psi}\nabla\psi\cdot\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
+\frac{1}{2}b_{0}(u_{h},e^{-\psi}u_{h})_{\mathcal{T}_{h}}\\
&+\chi \langle (\tau-\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n})(u_{h}-\lambda_{h}),
u_{h}-\lambda_{h}\rangle_{\partial\mathcal{T}_{h}}.
\end{align*}
Using the Cauchy--Schwarz inequality, we have
\begin{equation*}
(u_{h},e^{-\psi}\nabla\psi\cdot\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
\leq \frac{1}{2}\left[
\delta^{-1}\Vert \nabla\psi\Vert_{L^{\infty}(\Omega)}^{2}(e^{-\psi}\boldsymbol{q}_{h},
\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
+\delta(e^{-\psi}u_{h},u_{h})_{\mathcal{T}_{h}}
\right]
\end{equation*}
for any $\delta>0$.
Taking $\chi \geq 1+2b_{0}^{-1}\Vert e^{-\psi}\Vert_{L^{\infty}(\Omega)}\cdot \Vert \nabla\psi\Vert_{L^{\infty}(\Omega)}^{2}$
and $\delta = b_{0}/2$,
we get
\begin{align*}
B((\boldsymbol{q}_{h},u_{h},\lambda_{h}),(\boldsymbol{q}_\varphi,u_\varphi,\lambda_\varphi))
\geq & \epsilon^{-1}\frac{\chi}{2}(\boldsymbol{q}_{h},\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
+\frac{b_{0}}{4}(e^{-\psi}u_{h},u_{h})_{\mathcal{T}_{h}}\\
& +\chi\Vert \vert \tau-\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2} (u_{h}-\lambda_{h})
\Vert_{\partial\mathcal{T}_{h}}^{2}.
\end{align*}
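For the reader's convenience, the absorption of the cross term in the last step can be spelled out; the following is only a sketch of the bookkeeping and additionally assumes $\epsilon\le 1$, the convection-dominated regime of interest here. With $\delta=b_{0}/2$ and the stated choice of $\chi$,
\begin{align*}
\vert(u_{h},e^{-\psi}\nabla\psi\cdot\boldsymbol{q}_{h})_{\mathcal{T}_{h}}\vert
\leq &\; b_{0}^{-1}\Vert \nabla\psi\Vert_{L^{\infty}(\Omega)}^{2}\Vert e^{-\psi}\Vert_{L^{\infty}(\Omega)}
(\boldsymbol{q}_{h},\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
+\frac{b_{0}}{4}(e^{-\psi}u_{h},u_{h})_{\mathcal{T}_{h}}\\
\leq &\; \frac{\chi-1}{2}\,\epsilon^{-1}(\boldsymbol{q}_{h},\boldsymbol{q}_{h})_{\mathcal{T}_{h}}
+\frac{b_{0}}{4}(e^{-\psi}u_{h},u_{h})_{\mathcal{T}_{h}},
\end{align*}
so that $\chi\epsilon^{-1}-\frac{\chi-1}{2}\epsilon^{-1}\geq\frac{\chi}{2}\epsilon^{-1}$ and $\frac{b_{0}}{2}-\frac{b_{0}}{4}=\frac{b_{0}}{4}$ yield the two coefficients displayed above.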
To complete the proof, we simply absorb $\chi$, $e^{-\psi}$ and $b_0$ into the generic constant $C$.
\end{proof}
{\bf Step Two.}
We note that the test function $(\boldsymbol{q}_\varphi,u_\varphi,\lambda_\varphi)=(\varphi \boldsymbol{q}_{h}, \varphi u_{h}, \varphi {\lambda}_{h}) $
in Lemma \ref{lemma_ideal_infsup} is not in the discrete space
$
\boldsymbol{V}_h \times W_h \times M_h(0).
$
To establish a discrete stability property, we shall consider taking the discrete
test functions as a projection of $(\boldsymbol{q}_\varphi,u_\varphi,\lambda_\varphi)$
onto the spaces $ \boldsymbol{V}_h \times W_h \times M_h$, denoted by $\bld{\varPi}_h \bld q_\varphi, \varPi_h u_\varphi,
P_M\lambda_\varphi$.
Here $P_M$ is the $L^2$--projection onto $M_h$, and $\bld{\varPi}_h$ and $\varPi_h$ are the projections from $H^1({\mathcal{T}_h};\mathbb{R}^d)$ and
$H^1({\mathcal{T}_h})$ onto
$\bld V_h$ and $W_h$ respectively satisfying
\begin{subequations}
\label{eq:projI}
\begin{align}
\label{eq:projI1}
(\bld{\varPi}_h \bld{q}, \bld{v} )_K
&= ( \bld{q}, \bld{v} )_K
&& \forall\; \bld{v} \in \bpol{k-1}{K},
\\
\label{eq:projI2}
\bint{\bld{\varPi}_h \bld q\cdot \bld n}{\mu}{F} &= \bint{\bld q\cdot \bld n}{\mu}{F}
&& \forall \; \mu\in \pol{k}{F},\;\;\forall \; F\in{\partial K}\backslash F_K^s,\\
\label{eq:projII1}
(\varPi_h u, w )_K
&= ( u, w )_K
&& \forall\; w \in \pol{k-1}{K},
\\
\label{eq:projII2}
\bint{\varPi_h u}{\mu}{F_K^\star} &= \bint{u}{\mu}{F_K^\star},&& \forall \; \mu\in \pol{k}{F_K^\star},
\end{align}
\end{subequations}
where $F_K^s$ and $F_K^\star$ are defined in \eqref{notation_F}. We have the following optimal approximation property for $\bld{\varPi}_h$ and $\varPi_h$, whose proof
can be found in \cite[Proposition 2.1]{CockburnDongGuzman2008}.
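For intuition, here is a one-dimensional analogue of the projection defined in \eqref{eq:projI}, written as a short sketch; the interval $K=[0,1]$, the monomial basis and the quadrature rule are illustrative choices and not part of the method. The projection of $u$ onto $P_k(K)$ is fixed by the volume moments against $P_{k-1}(K)$ together with one endpoint condition, mimicking the single face condition.
\begin{verbatim}
import numpy as np
from numpy.polynomial.legendre import leggauss

# 1D analogue on K = [0, 1]: find p in P_k with
#   int_K p x^j dx = int_K u x^j dx  for j = 0, ..., k-1   (volume moments)
#   p(1) = u(1)                                            ("face" condition)
def project_1d(u, k, nquad=20):
    x, w = leggauss(nquad)               # Gauss nodes/weights on [-1, 1]
    x, w = 0.5 * (x + 1.0), 0.5 * w      # map to [0, 1]
    A = np.zeros((k + 1, k + 1))
    b = np.zeros(k + 1)
    for j in range(k):                   # moment equations
        for m in range(k + 1):
            A[j, m] = np.sum(w * x**(j + m))
        b[j] = np.sum(w * x**j * u(x))
    A[k, :] = 1.0                        # p(1) = sum of monomial coefficients
    b[k] = u(1.0)
    return np.linalg.solve(A, b)         # coefficients of p in the monomial basis

coeffs = project_1d(np.sin, k=3)
\end{verbatim}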
\begin{lemma}
\label{approx-q}
Assume that $\bld q \in H^{s+1}(K; \mathbb{R}^d)$ for $s\in [0,k]$ on an element $K\in {\mathcal{T}_h}$. Then
\[
\|\bld{\varPi}_h \bld q-\bld q\|_{K} \le C\,h^{s+1}|\bld q|_{H^{s+1}(K;\mathbb{R}^d)}.
\]
Assume that $u \in H^{s+1}(K)$ for $s\in [0,k]$ on an element $K\in {\mathcal{T}_h}$. Then
\[
\|\varPi_h u-u\|_{K} \le C\,h^{s+1}|u|_{H^{s+1}(K)}.
\]
\end{lemma}
We also need to estimate the difference between $\boldsymbol{q}$, $u$ and $\lambda$ and their corresponding projections.
Such an estimate is established in the following lemma. We refer the reader to Lemma~4.2 in \cite{AyusoMarini:cdf} for a detailed proof.
\begin{lemma}
\label{lemma_non_constant_ineqs}
Let $K\in\mathcal{T}_{h}$ and $\eta\in C^{1}(\bar{K})\cap W^{k+1,\infty}(K)$. Then, for any $(\bld v, v)\in P_k(K;\mathbb{R}^d)
\times P_k(K) $ and $\chi\in\mathbb{R}$,
\begin{align*}
& \Vert \bld{\varPi}_h ((\eta+\chi) \bld v)-(\eta+\chi) \bld v\Vert_{K}\leq C h_{K} \Vert \eta\Vert_{W^{k+1,\infty}(K)}\Vert\bld v\Vert_{K},\\
& \Vert \bld{\varPi}_h ((\eta+\chi) \bld v)-(\eta+\chi) \bld v\Vert_{F}\leq C h_{K}^{1/2} \Vert \eta\Vert_{W^{k+1,\infty}(K)}\Vert\bld v\Vert_{K},
\qquad \forall F\in {\partial K},\\
& \Vert \varPi_h ((\eta+\chi) v)-(\eta+\chi) v\Vert_{K}\leq C h_{K} \Vert \eta\Vert_{W^{k+1,\infty}(K)}\Vert v\Vert_{K},\\
& \Vert \varPi_h ((\eta+\chi) v)-(\eta+\chi) v\Vert_{F}\leq C h_{K}^{1/2} \Vert \eta\Vert_{W^{k+1,\infty}(K)}\Vert v\Vert_{K},
\qquad \forall F\in {\partial K}.
\end{align*}
\end{lemma}
Now, we go back to the stability estimate in Lemma \ref{lemma_ideal_infsup}, and divide the left hand side of the inequality into two terms, namely,
\begin{align*}
B((\boldsymbol{q}_{h},u_{h},{\lambda}_{h}),(\boldsymbol{q}_\varphi,u_\varphi,\lambda_\varphi))
&\; =
B((\boldsymbol{q}_{h},u_{h},{\lambda}_{h}),(\bld{\varPi}_h \boldsymbol{q}_\varphi, \varPi_h u_\varphi, P_M \lambda_\varphi))
\\
&\; +
B((\boldsymbol{q}_{h},u_{h},{\lambda}_{h}),((\mathsf{Id} - \bld{\varPi}_h) \boldsymbol{q}_\varphi,
(\mathsf{Id} - \varPi_h) u_\varphi, (\mathsf{Id} - P_M)\lambda_\varphi)).
\end{align*}
{\bf Step Three. }
We define the union of faces to simplify the presentation:
\begin{align*}
\partial \mathcal{T}_h^\star &: =\; \cup_{K\in {\mathcal{T}_h}}\cup_{F\in {\partial K}\backslash F_K^\star} F,\\
\partial \mathcal{T}_h^s &: =\; \cup_{K\in {\mathcal{T}_h}}F_K^s,
\end{align*}
where $F_K^\star$ and $F_K^s$ are defined in \eqref{notation_F}.
Now, we are ready to derive the discrete stability result for the HDG method.
\begin{lemma}
\label{lemma_practical_infsup}
Let $\tau$ satisfy assumption \eqref{assumption_tau_general}.
Then there exists $h_0$, independent of $\epsilon$, such that
for any $h<h_0$, we have the following stability estimate:
for all
$(\boldsymbol{q}_{h},u_{h},{\lambda}_{h})\in \boldsymbol{V}_{h}\times W_{h}\times M_{h}(0)$,
\begin{align*}
\sup_{0\not =(\boldsymbol{r}_h,w_h,\mu_h)\in \bld V_h\times W_h\times M_h(0)}
\frac{B((\boldsymbol{q}_{h},u_{h},{\lambda}_{h}),(\boldsymbol{r}_h,w_h,\mu_h))}{\vertiii{(\boldsymbol{r}_h, w_h, \mu_h)}_{e}}
\geq \; C \vertiii{(\boldsymbol{q}_h, u_h, \lambda_h)}_{e}.
\end{align*}
\end{lemma}
\begin{proof}
For any $(\boldsymbol{r},w,\mu)\in H^1({\mathcal{T}_h};\mathbb{R}^d)\times H^1({\mathcal{T}_h})
\times L^2({\mathcal{E}_h})$ with $\mu=0 $ on ${\partial \Omega}$, define
\[
\bld {\delta r} := \bld {r}-\bld{\varPi}_h \bld {r}, \quad \delta w := w - \varPi_h w, \quad \delta\mu := \mu-P_M\mu.
\]
Using integration by parts and the definition of
the projections, we get
\begin{align*}
& B((\boldsymbol{q}_h,u_h,\lambda_h),(\boldsymbol{\delta r},\delta w,\delta\mu))\\
= & (\epsilon^{-1}\boldsymbol{q}_h,\boldsymbol{\delta r})_{\mathcal{T}_{h}}-(u_h,\nabla\cdot\boldsymbol{\delta r})_{\mathcal{T}_{h}}
+\langle \lambda_h, \boldsymbol{\delta r}\cdot \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}} \nonumber\\
& -(\boldsymbol{q}_h+\boldsymbol{\beta}u_h,\nabla \delta w)_{\mathcal{T}_{h}}
+\langle (\boldsymbol{q}_h+\boldsymbol{\beta}\lambda_h)\cdot\boldsymbol{n}
+\tau (u_h-\lambda_h),\delta w\rangle_{\partial\mathcal{T}_{h}}\nonumber\\
& -((\nabla\cdot\boldsymbol{\beta})u_h,\delta w)_{\mathcal{T}_{h}}
-\langle(\boldsymbol{q}_h+\boldsymbol{\beta}\lambda_h)\cdot\boldsymbol{n}
+\tau (u_h-\lambda_h), \delta\mu
\rangle_{\partial\mathcal{T}_{h}}\nonumber\\
= & (\epsilon^{-1}\boldsymbol{q}_h,\boldsymbol{\delta r})_{\mathcal{T}_{h}}+(\nabla u_h,\boldsymbol{\delta r})_{\mathcal{T}_{h}}
+\langle \lambda_h - u_h, \boldsymbol{\delta r}\cdot \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}}\nonumber\\
& +(\nabla\cdot \boldsymbol{q}_h,\delta w)_{\mathcal{T}_{h}} +(\bld \beta\cdot\nabla u_h,\delta w)_{\mathcal{T}_{h}}
+\langle (\tau- \bld \beta\cdot\bld n) (u_h-\lambda_h),\delta w\rangle_{\partial\mathcal{T}_{h}}\nonumber\\
&
-\langle(\boldsymbol{q}_h+\boldsymbol{\beta}\lambda_h)\cdot\boldsymbol{n}
+\tau (u_h-\lambda_h), \delta\mu
\rangle_{\partial\mathcal{T}_{h}}\nonumber\\
= & (\epsilon^{-1}\boldsymbol{q}_h,\boldsymbol{\delta r})_{\mathcal{T}_{h}}
+\langle \lambda_h - u_h, \boldsymbol{\delta r}\cdot \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}^s}
+((\bld \beta -\bld P_{0,h}\bld \beta)\cdot\nabla u_h,\delta w)_{\mathcal{T}_{h}}\nonumber\\
& +\langle (\tau- \bld \beta\cdot \bld n)(u_h-\lambda_h),\delta w\rangle_{\partial\mathcal{T}_{h}^\star}
-\langle \bld \beta\cdot\bld n (u_h-\lambda_h),\delta w\rangle_{\partial\mathcal{T}_{h}\backslash\partial\mathcal{T}_{h}^\star},\nonumber
\end{align*}
where $\bld P_{0,h}$ is the vectorial piecewise-constant projection.
Now, we take $(\bld r, w,\mu) = (\boldsymbol{q}_\varphi,u_\varphi,\lambda_\varphi)$ as in Lemma \ref{lemma_ideal_infsup}. By the Cauchy--Schwarz inequality and the approximation
results in Lemma \ref{lemma_non_constant_ineqs},
we have
\begin{align*}
(\epsilon^{-1}\boldsymbol{q}_h,\boldsymbol{\delta q}_\varphi)_{\mathcal{T}_{h}}\le &\;
\|\epsilon^{-1/2}\boldsymbol{q}_h\|_{{\mathcal{T}_h}}\|\epsilon^{-1/2}\boldsymbol{\delta q}_\varphi\|_{\mathcal{T}_{h}} \\
\le &\; C h \|\epsilon^{-1/2}\boldsymbol{q}_h\|_{{\mathcal{T}_h}}^2,
\end{align*}
\begin{align*}
\langle \lambda_h - u_h, \boldsymbol{\delta q}_\varphi\cdot \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}^s}
\le &\; \left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{1/2} (\lambda_h - u_h)\right\|_{\partial\mathcal{T}_{h}^s}
\left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{-1/2} \bld{\delta q}_\varphi\right\|_{\partial\mathcal{T}_{h}^s}\\
\le &\; C \left(\frac{\epsilon}{\tau^{\bld v}}\right)^{1/2}\left\||\tau - \frac{1}{2}\bld \beta\cdot\bld n|^{1/2}
(\lambda_h - u_h)\right\|_{\partial\mathcal{T}_{h}^s} \|\epsilon^{-1/2}\bld{\delta q}_\varphi\|_{
\partial\mathcal{T}_h^s} \\
\le &\; C \left(\frac{\epsilon h}{\tau^{\bld v}}\right)^{1/2}\left\||\tau - \frac{1}{2}\bld \beta\cdot\bld n|^{1/2}
(\lambda_h - u_h)\right\|_{\partial\mathcal{T}_{h}^s}
\|\epsilon^{-1/2}\bld{q}_h\|_{{\mathcal{T}_h}}\\
\le &\; C (h^2+\epsilon h)^{1/2}\left\||\tau - \frac{1}{2}\bld \beta\cdot\bld n|^{1/2} (\lambda_h - u_h)\right\|_{\partial\mathcal{T}_{h}^s}
\|\epsilon^{-1/2}\bld{q}_h\|_{{\mathcal{T}_h}}\quad \text{ by \eqref{assumption_tau_3}},
\end{align*}
\begin{align*}
((\bld \beta -\bld P_{0,h}\bld \beta)\cdot\nabla u_h,\delta u_\varphi)_{\mathcal{T}_{h}}
\le &\; C h \|\nabla u_h\|_{{\mathcal{T}_h}}\|\delta u_\varphi\|_{{\mathcal{T}_h}}\\
\le &\; Ch \|u_h\|_{{\mathcal{T}_h}}^2\\
\langle (\tau-\bld \beta\cdot\bld n) (u_h-\lambda_h),\delta u_\varphi\rangle_{\partial\mathcal{T}_{h}^\star}
\le &\; \left\||\tau-\bld \beta\cdot\bld n|^{1/2} (\lambda_h - u_h)\right\|_{\partial\mathcal{T}_{h}^\star}
\left\||\tau-\bld \beta\cdot\bld n|^{1/2} \delta u_\varphi\right\|_{\partial\mathcal{T}_{h}^\star}\\
\le &\; C \left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{1/2} (\lambda_h - u_h)\right\|_{\partial\mathcal{T}_{h}^\star}
\left\| \vert \tau -\bld\beta\cdot \bld n \vert^{1/2} \delta u_\varphi\right\|_{\partial\mathcal{T}_{h}^\star} \quad
\text{ by \eqref{assumption_tau_1}}\\
\le &\; C(h(\tau^w+1))^{1/2} \left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{1/2} (\lambda_h - u_h)
\right\|_{\partial\mathcal{T}_{h}^\star} \|u_h\|_{{\mathcal{T}_h}}\\
\le &\; C h^{1/2} \left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{1/2} (\lambda_h - u_h)\right\|_{\partial\mathcal{T}_{h}} \|u_h\|_{{\mathcal{T}_h}} \quad
\text{ by \eqref{assumption_tau_0}}\\
\langle \bld \beta\cdot\bld n (u_h-\lambda_h),\delta u_\varphi
\rangle_{\partial\mathcal{T}_{h}\backslash\partial\mathcal{T}_{h}^\star}
\le &\; \left\||\bld \beta\cdot\bld n|^{1/2}(\lambda_h - u_h)\right\|_{\partial\mathcal{T}_{h}\backslash\partial\mathcal{T}_{h}^\star}
\left\|\delta u_\varphi\right\|_{\partial\mathcal{T}_{h}\backslash\partial\mathcal{T}_{h}^\star}\\
\le &\; C \left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{1/2}(\lambda_h - u_h)\right\|_{\partial\mathcal{T}_{h}\backslash\partial\mathcal{T}_{h}^\star}
\left\|\delta u_\varphi\right\|_{\partial\mathcal{T}_{h}\backslash\partial\mathcal{T}_{h}^\star}\quad
\text{ by \eqref{assumption_tau_1}}\\
\le &\; C h^{1/2}\left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{1/2}(\lambda_h - u_h)\right\|_{\partial\mathcal{T}_{h}}
\left\|u_h\right\|_{{\mathcal{T}_h}}.
\end{align*}
Summing up all the above inequalities, we get
\begin{align*}
B((\boldsymbol{q}_h,u_h,\lambda_h),(\boldsymbol{\delta q}_\varphi,\delta u_\varphi,\delta\lambda_\varphi))
\le & \;C h^{1/2} \vertiii{(\bld q_h, u_h,\lambda_h)}_e^2.
\end{align*}
Hence, choosing $h$ sufficiently small, we can ensure that
\begin{align*}
B((\boldsymbol{q}_h,u_h,\lambda_h),
(\boldsymbol{\delta q}_\varphi,\delta u_\varphi,\delta\lambda_\varphi)) \le
\frac{1}{2}B((\boldsymbol{q}_h,u_h,\lambda_h),(\boldsymbol{q}_\varphi,u_\varphi,\lambda_\varphi)).
\end{align*}
Consequently, we obtain
\begin{align*}
B((\boldsymbol{q}_h,u_h,\lambda_h),(\bld{\varPi}_h\boldsymbol{ q}_\varphi,\varPi_h u_\varphi,P_M \lambda_\varphi)) \ge C \vertiii{(\bld q_h, u_h,\lambda_h)}_e^2.
\end{align*}
On the other hand, it is easy to obtain the following estimate:
\begin{align*}
\vertiii{ (\bld{\varPi}_h\boldsymbol{ q}_\varphi,\varPi_h u_\varphi,P_M \lambda_\varphi)}_e \le &\;C \vertiii{ (\boldsymbol{ q}_{h},u_{h},\lambda_{h})}_e.
\end{align*}
We conclude the proof by combining these two estimates.
\end{proof}
\subsection{The error equation}
Here, we obtain the equation satisfied by the errors.
Note that by Galerkin--orthogonality, we have
\begin{align}
\label{galerkin-o}
B((\boldsymbol{q} - \boldsymbol{q}_h ,u - u_h,u - \widehat{u}_h), (\boldsymbol{r},w,\mu)) = 0 \quad \forall (\bld r, w, \mu)\in \bld V_h\times W_h\times M_h(0),
\end{align}
where $(\bld q, u)$ is the exact solution of equations \eqref{cd_first_order}.
We define the following quantities that will be used in the analysis:
\begin{align*}
&\bld{\varepsilon}^{{\ensuremath{\scriptscriptstyle{\bld{q}}}}}_h:= \boldsymbol{q}_{h}- \bld{\varPi}_h \boldsymbol{q}, \quad \bld{\delta q} := \boldsymbol{q}- \bld{\varPi}_h \boldsymbol{q}, \\
&\varepsilon^{{\ensuremath{\scriptscriptstyle{u}}}}_h := u_{h} - \varPi_h u,\quad \delta u := u - \varPi_h u, \\
&\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h :=\widehat{u}_{h}- P_{M}u, \quad \widehat{\delta u} := u -P_{M}u .
\end{align*}
Recall that $\bld{\varPi}_h$ and $\varPi_h$ are the projections defined in \eqref{eq:projI}, and $P_{M}$ is the $L^{2}$--projection
from $L^{2}(\mathcal{E}_{h})$ onto $M_{h}$.
Now, we are ready to present our error equation.
\begin{lemma}
\label{lemma-error-eq1}
The error equation takes the following form.
\begin{align}
\label{error-eq1}
B((\bld{\varepsilon}^{{\ensuremath{\scriptscriptstyle{\bld{q}}}}}_h ,\varepsilon^{{\ensuremath{\scriptscriptstyle{u}}}}_h ,\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h), (\boldsymbol{r},w,\mu))
= &\;(\epsilon^{-1}\bld{\delta q},\boldsymbol{r})_{\mathcal{T}_{h}}
+\langle \boldsymbol{\delta q}\cdot\boldsymbol{n}
,w - \mu \rangle_{\partial\mathcal{T}_{h}^s} -(\boldsymbol{\beta}\,\delta u,\nabla w)_{\mathcal{T}_{h}} \\
& -((\nabla\cdot\boldsymbol{\beta})\delta u,w)_{\mathcal{T}_{h}}
+\langle\boldsymbol{\beta}\cdot\boldsymbol{n}\widehat{\delta u}
, w
\rangle_{\partial\mathcal{T}_{h}}+\langle\tau \delta u
, w - \mu
\rangle_{\partial\mathcal{T}_{h}^\star}\nonumber,
\end{align}
for all $(\bld r, w, \mu)\in \bld V_h\times W_h\times M_h(0)$.
\end{lemma}
\begin{proof}
We use the Galerkin--orthogonality \eqref{galerkin-o} and the definition of the projections to prove the result.
For all $(\bld r, w, \mu)\in \bld V_h\times W_h\times M_h(0)$, we have
\begin{align*}
B((\bld{\varepsilon}^{{\ensuremath{\scriptscriptstyle{\bld{q}}}}}_h,\varepsilon^{{\ensuremath{\scriptscriptstyle{u}}}}_h,\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h),(\boldsymbol{r}, w,\mu)) =&\;B((\bld{\delta q},\delta u,\widehat{\delta u}),(\boldsymbol{r},w,\mu)) \\
= & \;(\epsilon^{-1}\bld{\delta q},\boldsymbol{r})_{\mathcal{T}_{h}}-(\delta u,\nabla\cdot\boldsymbol{r})_{\mathcal{T}_{h}}
+\langle \widehat{\delta u}, \boldsymbol{r}\cdot \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}} \nonumber\\
& -(\bld{\delta q}+\boldsymbol{\beta}\delta u,\nabla w)_{\mathcal{T}_{h}}
+\langle (\bld{\delta q}+\boldsymbol{\beta}\widehat{\delta u})\cdot\boldsymbol{n}
+\tau (\delta u-\widehat{\delta u}),w-\mu\rangle_{\partial\mathcal{T}_{h}}\nonumber\\
& -((\nabla\cdot\boldsymbol{\beta})\delta u,w)_{\mathcal{T}_{h}}\nonumber \\
= &\; (\epsilon^{-1}\bld{\delta q},\boldsymbol{r})_{\mathcal{T}_{h}}
+\langle \boldsymbol{\delta q}\cdot\boldsymbol{n}
,w - \mu \rangle_{\partial\mathcal{T}_{h}^s} -(\boldsymbol{\beta}\,\delta u,\nabla w)_{\mathcal{T}_{h}} \nonumber\\
& -((\nabla\cdot\boldsymbol{\beta})\delta u,w)_{\mathcal{T}_{h}}
+\langle\boldsymbol{\beta}\cdot\boldsymbol{n}\widehat{\delta u}
, w - \mu
\rangle_{\partial\mathcal{T}_{h}}+\langle\tau \delta u
, w - \mu
\rangle_{\partial\mathcal{T}_{h}^\star} \nonumber \\
= &\; (\epsilon^{-1}\bld{\delta q},\boldsymbol{r})_{\mathcal{T}_{h}}
+\langle \boldsymbol{\delta q}\cdot\boldsymbol{n}
,w - \mu \rangle_{\partial\mathcal{T}_{h}^s} -(\boldsymbol{\beta}\,\delta u,\nabla w)_{\mathcal{T}_{h}} \nonumber\\
& -((\nabla\cdot\boldsymbol{\beta})\delta u,w)_{\mathcal{T}_{h}}
+\langle\boldsymbol{\beta}\cdot\boldsymbol{n}\widehat{\delta u}
, w
\rangle_{\partial\mathcal{T}_{h}}+\langle\tau \delta u
, w - \mu
\rangle_{\partial\mathcal{T}_{h}^\star}, \nonumber
where in the last step we used the fact that $\langle\boldsymbol{\beta}\cdot\boldsymbol{n}\widehat{\delta u}
, \mu
\rangle_{\partial\mathcal{T}_{h}} = 0$ for all $\mu \in M_h(0)$.
\end{proof}
\subsection{The error analysis}
Now, we are ready to prove our main results, Theorem~\ref{MainTh1} and Theorem~\ref{MainTh2}.
To prove them, we only need to bound the right-hand side of the error equation \eqref{error-eq1} to get the error estimates.
For all $(\bld r, w, \mu)\in \bld V_h\times W_h\times M_h(0)$, we have
\begin{align*}
(\epsilon^{-1}\bld{\delta q},\boldsymbol{r})_{\mathcal{T}_{h}} \le &\;
\|\epsilon^{-1/2}\bld{\delta q} \|_{{\mathcal{T}_h}}\|\epsilon^{-1/2}\boldsymbol{r}\|_{\mathcal{T}_{h}},\\
\langle \boldsymbol{\delta q}\cdot\boldsymbol{n}
,w - \mu \rangle_{\partial\mathcal{T}_{h}^s}
\le &\;\left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{-1/2} \bld{\delta q}\right\|_{\partial\mathcal{T}_{h}^s}
\left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{1/2} (w - \mu)\right\|_{\partial\mathcal{T}_{h}^s}
\nonumber\\
\le &\; C \left(\frac{\epsilon}{\tau^{\bld v}}\right)^{1/2}\|\epsilon^{-1/2}\bld{\delta q}\|_{\partial\mathcal{T}_{h}^{s}} \left\||
\tau - \frac{1}{2}\bld \beta\cdot\bld n|^{1/2} (w - \mu)\right\|_{\partial\mathcal{T}_{h}^s},\\
(\boldsymbol{\beta}\,\delta u,\nabla w)_{\mathcal{T}_{h}}
= &\;\left((\boldsymbol{\beta}-\bld P_{0,h}\bld \beta)\,\delta u,\nabla w\right)_{\mathcal{T}_{h}} \nonumber\\
\le &\; C h \|\delta u\|_{{\mathcal{T}_h}}\|\nabla w\|_{{\mathcal{T}_h}}\nonumber\\
\le &\; C \|\delta u\|_{{\mathcal{T}_h}}\|w\|_{{\mathcal{T}_h}},
\end{align*}
\begin{align*}
((\nabla\cdot\boldsymbol{\beta})\delta u,w)_{\mathcal{T}_{h}} \le &\; C \|\delta u\|_{{\mathcal{T}_h}}\|w\|_{{\mathcal{T}_h}},\\
\langle\boldsymbol{\beta}\cdot\boldsymbol{n}\widehat{\delta u}
, w
\rangle_{\partial\mathcal{T}_{h}}= &\;
\langle(\boldsymbol{\beta}-\bld P_{0,h}\bld \beta)\cdot\boldsymbol{n}\widehat{\delta u}
, w
\rangle_{\partial\mathcal{T}_{h}}\nonumber\\
\le & C h \|\widehat{\delta u}\|_{\partial\mathcal{T}_{h}}\|w\|_{\partial\mathcal{T}_{h}}\nonumber\\
\le & C h^{1/2} \|\widehat{\delta u}\|_{\partial\mathcal{T}_{h}}\|w\|_{{\mathcal{T}_h}}\\
\langle\tau \delta u
, w - \mu
\rangle_{\partial\mathcal{T}_{h}^\star}
\le & \|\tau^{1/2} \delta u\|_{\partial\mathcal{T}_{h}^\star}\|\tau^{1/2}(w-\mu)\|_{\partial\mathcal{T}_{h}^\star}\nonumber\\
\le & C \|\tau^{1/2}\delta u\|_{\partial\mathcal{T}_{h}^\star}\left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{1/2} (w-\mu)\right\|_{\partial\mathcal{T}_{h}^\star}
\quad\qquad\text{ {by \eqref{assumption_tau_1}}}\nonumber\\
\le & C (\tau^w)^{1/2} \|\delta u\|_{\partial\mathcal{T}_{h}^\star}
\left\||\tau-\frac{1}{2}\bld \beta\cdot\bld n|^{1/2} (w-\mu)\right\|_{\partial\mathcal{T}_{h}^\star}.
\end{align*}
Adding up these estimates, we obtain
\begin{align*}
& B((\bld{\varepsilon}^{{\ensuremath{\scriptscriptstyle{\bld{q}}}}}_h,\varepsilon^{{\ensuremath{\scriptscriptstyle{u}}}}_h,\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h),(\boldsymbol{r}, w,\mu)) \\
\le &\;C (\|\epsilon^{-1/2}\bld{\delta q} \|_{{\mathcal{T}_h}}+
\left(\frac{\epsilon}{\tau^{\bld v}}\right)^{1/2}\|\epsilon^{-1/2}\bld{\delta q}\|_{\partial\mathcal{T}_{h}^{s}}+ \|\delta u\|_{{\mathcal{T}_h}} \nonumber\\
&\quad + h^{1/2} \|\widehat{\delta u}\|_{\partial\mathcal{T}_{h}}+ (\tau^w)^{1/2}\|\delta u\|_{\partial\mathcal{T}_{h}^{\star}}
) \vertiii{(\bld r, w,\mu)}_e.\nonumber
\end{align*}
Note that $\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h\in M_{h}(0)$ because $\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h|_{\partial \Omega}=0$.
Using Lemma \ref{lemma_practical_infsup}, we immediately get
\begin{align*}
\vertiii{(\bld{\varepsilon}^{{\ensuremath{\scriptscriptstyle{\bld{q}}}}}_h, \varepsilon^{{\ensuremath{\scriptscriptstyle{u}}}}_h,\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h)}_e \le & \;C (\|\epsilon^{-1/2}\bld{\delta q} \|_{{\mathcal{T}_h}}+
\left(\frac{\epsilon}{\tau^{\bld v}}\right)^{1/2}\|\epsilon^{-1/2}\bld{\delta q}\|_{\partial\mathcal{T}_{h}^{s}}+ \|\delta u\|_{{\mathcal{T}_h}} \nonumber\\
&\quad + h^{1/2} \|\widehat{\delta u}\|_{\partial\mathcal{T}_{h}}+ (\tau^w)^{1/2}\|\delta u\|_{\partial\mathcal{T}_{h}^{\star}}
).
\end{align*}
The approximation properties of the projections give the following estimates:
\begin{align*}
\|\epsilon^{-1/2}\bld{\delta q} \|_{{\mathcal{T}_h}} \le &\;C \epsilon^{-1/2} h^{s+1} |\bld q |_{H^{s+1}({\mathcal{T}_h};\mathbb{R}^d)}\\
\left (\frac{\epsilon}{\tau^{\bld v}}\right)^{1/2}\|\epsilon^{-1/2}\bld{\delta q}\|_{\partial\mathcal{T}_{h}^{s}} \le &\;
C \left (\frac{h}{\tau^{\bld v}}\right)^{1/2}h^{s} |\bld q |_{H^{s+1}({\mathcal{T}_h};\mathbb{R}^d)} \\
\|\delta u\|_{{\mathcal{T}_h}} \le &\; C h^{s+1} |u |_{H^{s+1}({\mathcal{T}_h})} \\
h^{1/2} \|\widehat{\delta u}\|_{\partial\mathcal{T}_{h}}\le &\; C h^{s+1} |u |_{H^{s+1}({\mathcal{T}_h})}\\
(\tau^w)^{1/2}\|\delta u\|_{\partial\mathcal{T}_{h}^{\star}} \le &\; C(\tau^w)^{1/2} h^{s+1/2}|u|_{H^{s+1}({\mathcal{T}_h})},
\end{align*}
for all $s\in [0,k]$.
Now, using assumption \eqref{assumption_tau_general} on $\tau$, we obtain the following estimate:
\begin{align*}
\vertiii{(\bld{\varepsilon}^{{\ensuremath{\scriptscriptstyle{\bld{q}}}}}_h, \varepsilon^{{\ensuremath{\scriptscriptstyle{u}}}}_h,\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h)}_e\le &\; C (\epsilon^{-1/2}h^{s_{\bld v}+1} + h^{s_{\bld v}+1/2}) |\bld q |_{H^{s_{\bld v}+1}({\mathcal{T}_h};\mathbb{R}^d)} +
C h^{s_{w}+1/2}|u |_{H^{s_{w}+1}({\mathcal{T}_h})},
\end{align*}
for all $s_{\bld v}\in [0,k]$ and $s_w \in [0,k]$.
Using the
fact that $\bld q = -\epsilon\nabla u$, and hence $|\bld q |_{H^{s_{\bld v}+1}({\mathcal{T}_h};\mathbb{R}^d)}\le \epsilon |u|_{H^{s_{\bld v}+2}({\mathcal{T}_h})}$, we get
\begin{align}
\label{project-ee}
\vertiii{(\bld{\varepsilon}^{{\ensuremath{\scriptscriptstyle{\bld{q}}}}}_h, \varepsilon^{{\ensuremath{\scriptscriptstyle{u}}}}_h,\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h)}_e\le &\; C (\epsilon^{1/2}h^{s_{\bld v}+1} + \epsilon h^{s_{\bld v}+1/2}) |u |_{H^{s_{\bld v}+2}({\mathcal{T}_h})} +
C h^{s_{w}+1/2}|u |_{H^{s_{w}+1}({\mathcal{T}_h})},
\end{align}
for all $s_{\bld v}\in [0,k]$ and $s_w \in [0,k]$.
Moreover, by the approximation properties of the projections, we easily get
\begin{align}
\label{approx-ee}
\vertiii{(\bld{\delta q}, \delta u,\widehat{\delta u})}_e\le &\; C \epsilon^{1/2}h^{s_{\bld v}+1}|u |_{H^{s_{\bld v}+2}({\mathcal{T}_h})} +
C h^{s_{w}+1/2}|u |_{H^{s_{w}+1}({\mathcal{T}_h})},
\end{align}
for all $s_{\bld v}\in [0,k]$ and $s_w \in [0,k]$.
Combining \eqref{project-ee}, \eqref{approx-ee} and using the triangle inequality, we obtain
\begin{align*}
& \vertiii{(\bld{q}-\bld{q}_h, u-u_h,u-\widehat{u}_h)}_e \\
\le &\; C (\epsilon^{1/2}h^{s_{\bld v}+1} + \epsilon h^{s_{\bld v}+1/2})
|u |_{H^{s_{\bld v}+2}({\mathcal{T}_h})} +
C h^{s_{w}+1/2}|u |_{H^{s_{w}+1}({\mathcal{T}_h})},
\end{align*}
and
\begin{align*}
\|u-{u}_h\|_{{\mathcal{T}_h}}\le &\; \vertiii{(\bld{\varepsilon}^{{\ensuremath{\scriptscriptstyle{\bld{q}}}}}_h, \varepsilon^{{\ensuremath{\scriptscriptstyle{u}}}}_h,\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h)}_e+\|\delta u\|_{{\mathcal{T}_h}}\\
\le &\; C(\epsilon^{1/2}h^{s_{\bld v}+1} + \epsilon h^{s_{\bld v}+1/2})|u |_{H^{s_{\bld v}+2}({\mathcal{T}_h})} +
C h^{s_{w}+1/2}|u |_{H^{s_{w}+1}({\mathcal{T}_h})}.\nonumber
\end{align*}
This completes the proof of Theorem~\ref{MainTh1}.
For the proof of Theorem~\ref{MainTh2}, everything is exactly the same, except that
\begin{align*}
(\tau^w)^{1/2}\|\delta u\|_{\partial\mathcal{T}_{h}^{\star}} \le &\; C h^{s+1}|u|_{H^{s+1}({\mathcal{T}_h})},
\end{align*}
because assumption \eqref{assump_tau_s} ensures $\tau^w \leq \mathcal{O}(h)$.
Hence we get
\begin{align*}
\vertiii{(\bld{\varepsilon}^{{\ensuremath{\scriptscriptstyle{\bld{q}}}}}_h, \varepsilon^{{\ensuremath{\scriptscriptstyle{u}}}}_h,\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h)}_e\le &\; C (\epsilon^{1/2}h^{s_{\bld v}+1} + \epsilon h^{s_{\bld v}+1/2})|u |_{H^{s_{\bld v}+2}({\mathcal{T}_h})} +
C h^{s_{w}+1}|u |_{H^{s_{w}+1}({\mathcal{T}_h})},\\
\|u-{u}_h\|_{{\mathcal{T}_h}}\le &\; \vertiii{(\bld{\varepsilon}^{{\ensuremath{\scriptscriptstyle{\bld{q}}}}}_h, \varepsilon^{{\ensuremath{\scriptscriptstyle{u}}}}_h,\widehat{\varepsilon}^{{\ensuremath{\scriptscriptstyle{u}}}}_h)}_e+\|\delta u\|_{{\mathcal{T}_h}}\\
\le &\; C (\epsilon^{1/2}h^{s_{\bld v}+1} + \epsilon h^{s_{\bld v}+1/2})|u |_{H^{s_{\bld v}+2}({\mathcal{T}_h})} +
C h^{s_{w}+1}|u |_{H^{s_{w}+1}({\mathcal{T}_h})}.
\end{align*}
This completes the proof of Theorem~\ref{MainTh2}.
\section{Numerical results}\label{sec:num}
In this section, we present numerical studies using simple model problems in
2D to verify our theoretical results and to display the performance of the HDG methods when the exact solution exhibits layers.
Our test problems are similar to those studied in \cite{AyusoMarini:cdf}.
We fix the domain to be the unit square in all the experiments, and run simulations of the HDG methods \eqref{cd_hdg_eqs} with the following three choices of approximation spaces and
stabilization functions:
\begin{itemize}
\item [1.] The approximation spaces are
\begin{align*}
\boldsymbol{V}_{h}&=\{\boldsymbol{r}\in L^{2}(\Omega;\mathbb{R}^{d}):\boldsymbol{r}|_{K}\in P_{k}(K;\mathbb{R}^{d})
\quad \forall K\in \mathcal{T}_{h}\},\\
W_{h}&=\{w\in L^{2}(\Omega):w|_{K}\in P_{k}(K)
\quad \forall K\in \mathcal{T}_{h}\},\\
M_{h}&=\{\mu\in L^{2}(\mathcal{E}_{h}):\mu |_{F} \in P_{k}(F)
\quad \forall F\in \mathcal{E}_{h}\},
\end{align*}
while the stabilization function is given by
\begin{align*}
\tau(F) = \max(\sup_{\bld x \in F} \,\bld \beta(\bld x)\cdot \bld n,0),\quad \forall F\in {\partial K}, \forall K\in {\mathcal{T}_h}.
\end{align*}
We denote this choice as $P_k$-$HDG1$.
\item [2.] The approximation spaces are the same as in the previous case, while the stabilization function is given by
\begin{align*}
\tau(F) = \max(\sup_{\bld x \in F} \,\bld \beta(\bld x)\cdot \bld n,0) + \min (0.1 \frac{\epsilon}{h_F}, 1 ),\quad \forall F\in {\partial K}, \forall K\in {\mathcal{T}_h}.
\end{align*}
We denote this choice as $P_k$-$HDG2$. A short code sketch of both stabilization functions is given after this list.
\item [3.] The approximation spaces are given as follows:
\begin{align*}
\boldsymbol{V}_{h}&=\{\boldsymbol{r}\in L^{2}(\Omega;\mathbb{R}^{d}):\boldsymbol{r}|_{K}\in P_{k}(K;\mathbb{R}^{d}) +\bld x P_k(K)
\quad \forall K\in \mathcal{T}_{h}\},\\
W_{h}&=\{w\in L^{2}(\Omega):w|_{K}\in P_{k}(K)
\quad \forall K\in \mathcal{T}_{h}\},\\
M_{h}&=\{\mu\in L^{2}(\mathcal{E}_{h}):\mu |_{F} \in P_{k}(F)
\quad \forall F\in \mathcal{E}_{h}\},
\end{align*}
while the stabilization function is the same as for $P_k$-$HDG1$.
We denote this method as $P_k$-$HDG3$.
\end{itemize}
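For concreteness, the two stabilization functions above can be evaluated on a face as in the following sketch; the arguments (a callable velocity field, quadrature points on the face, the outward normal, the face diameter $h_F$ and $\epsilon$) are illustrative assumptions, and the supremum over $F$ is approximated by a maximum over the quadrature points.
\begin{verbatim}
import numpy as np

def tau_hdg1(beta, xq, n):
    # tau = max( sup_{x in F} beta(x).n , 0 ), sup approximated over quadrature points xq
    return max(max(np.dot(beta(x), n) for x in xq), 0.0)

def tau_hdg2(beta, xq, n, eps, hF):
    # HDG2 adds the diffusive contribution min(0.1*eps/hF, 1)
    return tau_hdg1(beta, xq, n) + min(0.1 * eps / hF, 1.0)

# Example: beta = [1, 2] on a vertical face with outward normal [1, 0]
beta = lambda x: np.array([1.0, 2.0])
xq = [np.array([0.5, t]) for t in np.linspace(0.0, 0.2, 5)]
print(tau_hdg1(beta, xq, np.array([1.0, 0.0])))              # 1.0
print(tau_hdg2(beta, xq, np.array([1.0, 0.0]), 1e-3, 0.2))   # 1.0005
\end{verbatim}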
We remark that, when $\bld \beta$ is piecewise constant, the method $P_k$-$HDG3$ coincides with the MH--DG method considered in \cite{Egger2010};
this is proven in Appendix~\ref{appendix-1}.
\subsection{A smooth solution test}
\label{smooth}
We take the velocity field $\boldsymbol{\beta} = [1, 2]^T$,
and the diffusion coefficient $\epsilon$ as $1, 10^{-3}, 10^{-9}$. The source term $f$ is chosen so that
the exact solution is $ u(x,y) = \sin(2\pi\, x)\sin(2\pi\,y)$.
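For reproducibility, the source term can be generated symbolically, as in the sketch below; it assumes the model problem takes the standard convection--diffusion form $-\epsilon\Delta u+\boldsymbol{\beta}\cdot\nabla u=f$, which for the constant field $\boldsymbol{\beta}=[1,2]^T$ coincides with the divergence form.
\begin{verbatim}
import sympy as sp

# Symbolic source term for the smooth test, assuming -eps*Laplace(u) + beta.grad(u) = f
x, y, eps = sp.symbols('x y eps')
u = sp.sin(2*sp.pi*x) * sp.sin(2*sp.pi*y)     # prescribed exact solution
beta = (1, 2)
f = -eps*(sp.diff(u, x, 2) + sp.diff(u, y, 2)) \
    + beta[0]*sp.diff(u, x) + beta[1]*sp.diff(u, y)
print(sp.simplify(f))
\end{verbatim}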
We obtain the computational meshes by uniform refinement of a structured triangular mesh with
$5\times 5\times 2$ elements, where the slanted edges point in
the northeast direction.
Let us remark that when $\epsilon = 1$ and $k\ge 1$,
we can follow \cite{ChenCockburnHDGI} to use superconvergence results to locally postprocess the solution
to get a new approximation of the scalar variable $u_h^\star$,
which converges faster than $u_h$.
Here the definition of $u_h^\star\in P_{k+1}(K)$ for each element $K\in {\mathcal{T}_h}$ is as follows:
\begin{align*}
(\nabla u_h^\star,\nabla w)_K =&\; - (\epsilon^{-1}\,\bld q_h,\nabla w)_K&&\quad \text{ for all }
w\in P_{k+1}(K),\\
(u_h^\star,1)_K = &\; (u_h,1)_K.
\end{align*}
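A minimal sketch of how this local problem can be solved on each element is given below; the local stiffness matrix, the moment vector and the right-hand side are assumed to be supplied by the finite element code (they are not computed here), and the mean-value constraint is enforced with a Lagrange multiplier, which vanishes since the right-hand side is compatible with constants.
\begin{verbatim}
import numpy as np

# Assumed inputs on one element K, for a P_{k+1} basis {phi_i}:
#   S    : stiffness matrix  (grad phi_i, grad phi_j)_K
#   m    : moments           (phi_i, 1)_K
#   b    : right-hand side   -(eps^{-1} q_h, grad phi_i)_K
#   ubar : cell average      (u_h, 1)_K
def local_postprocess(S, m, b, ubar):
    n = S.shape[0]
    A = np.block([[S, m[:, None]],
                  [m[None, :], np.zeros((1, 1))]])   # saddle-point system
    rhs = np.concatenate([b, [ubar]])
    return np.linalg.solve(A, rhs)[:n]               # coefficients of u_h^* in the basis
\end{verbatim}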
Table~\ref{smooth_test1} and Table~\ref{smooth_test_post} show the $L^2$--convergence
results for $u_h$ and $u_h^\star$ for the three HDG methods when $\epsilon = 1$.
For all the methods, we observe convergence order of $k+1$ for $u_h$ and convergence order of $k+2$ for
$u_h^\star$ (when $k\ge 1$). Also note that
the errors for the postprocessing $u_h^\star$ for the three methods are very close to each other.
When $\epsilon \ll 1 $, there is no superconvergence result for the HDG methods. Hence we only show
$\|u-u_h\|_{\mathcal{T}_h}$ in Table~\ref{smooth_test2}
when $\epsilon = 10^{-3}$ and $\epsilon = 10^{-9}$. Again, optimal convergence rates are observed, better than those predicted by the theoretical
result in Theorem~\ref{MainTh1}, which predicts
a loss of half an order of accuracy. Moreover, our numerical results in Table \ref{smooth_test2} show
that the performance of these three HDG methods is
equally good.
Hence, we prefer to use $P_k$-$HDG1$ and $P_k$-$HDG2$ rather than $P_k$-$HDG3$ because $P_k$-$HDG3$ has more degrees of freedom for the local problems.
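The ``order'' columns in the tables below are the observed convergence orders on successively halved meshes; they are consistent with the usual formula $\log_2(e_{h}/e_{h/2})$, as in the following snippet, whose sample values are taken from Table~\ref{smooth_test1}.
\begin{verbatim}
import math

def observed_order(e_coarse, e_fine):
    # errors on two consecutive meshes of size h and h/2
    return math.log2(e_coarse / e_fine)

# P1-HDG1 errors on h^{-1} = 5 and 10 (first table of this subsection):
print(round(observed_order(3.75e-1, 1.01e-1), 2))   # 1.89, matching the table
\end{verbatim}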
\begin{table}[htbp]
\footnotesize
\begin{center}
\scalebox{0.9}{
$\begin{array}{|c|c||c c|c c|c c|}
\hline
\phantom{\big|} \mbox{degree} & \mbox{mesh} & \multicolumn{6}{|c|}{\epsilon=1\phantom{\big|}}
\\
\cline{3-8}
\phantom{\big|} k& h^{-1} & \mbox{error} & \mbox{order} & \mbox{error} & \mbox{order} & \mbox{error} & \mbox{order} \\
\hline
& & \multicolumn{2}{|c}{HDG1} & \multicolumn{2}{|c}{HDG2}&
\multicolumn{2}{|c|}{HDG3}
\\
\hline
& 5 & 1.74\mbox{e-}0 & -- & 7.60\mbox{e-}1 & -- & 2.06\mbox{e-}1 & -- \\
& 10 & 9.41\mbox{e-}1 & 0.88 & 3.33\mbox{e-}1 & 1.20 & 1.06\mbox{e-}1 & 0.97 \\
0& 20 & 4.83\mbox{e-}1 & 0.96 & 1.72\mbox{e-}1 & 0.95 & 5.29\mbox{e-}2 & 1.00 \\
& 40 & 2.44\mbox{e-}1 & 0.99 & 8.71\mbox{e-}2 & 0.98 & 2.64\mbox{e-}2 & 1.00 \\
\hline
& 5 & 3.75\mbox{e-}1 & -- & 1.72\mbox{e-}1 & -- & 4.88\mbox{e-}2 & -- \\
& 10 & 1.01\mbox{e-}1 & 1.89 & 3.88\mbox{e-}2 & 2.15 & 1.26\mbox{e-}2 & 1.95 \\
1& 20 & 2.59\mbox{e-}2 & 1.97 & 9.96\mbox{e-}3 & 1.96 & 3.18\mbox{e-}3 & 1.99 \\
& 40 & 6.52\mbox{e-}3 & 1.99 & 2.51\mbox{e-}3 & 1.99 & 7.96\mbox{e-}4 & 2.00 \\
\hline
& 5 & 6.19\mbox{e-}2 & -- & 2.88\mbox{e-}2 & -- & 8.60\mbox{e-}3 & -- \\
& 10 & 8.26\mbox{e-}3 & 2.90 & 3.20\mbox{e-}3 & 3.16 & 1.12\mbox{e-}3 & 2.95 \\
2& 20 & 1.05\mbox{e-}3 & 2.97 & 4.09\mbox{e-}4 & 2.97 & 1.41\mbox{e-}4 & 2.99 \\
& 40 & 1.33\mbox{e-}4 & 2.99 & 5.16\mbox{e-}5 & 2.99 & 1.77\mbox{e-}5 & 3.00 \\
\hline
& 5 & 8.35\mbox{e-}3 & -- & 3.90\mbox{e-}3 & -- & 1.21\mbox{e-}3 & -- \\
& 10 & 5.53\mbox{e-}4 & 3.92 & 2.16\mbox{e-}4 & 4.18 & 7.81\mbox{e-}5 & 3.95 \\
3& 20 & 3.52\mbox{e-}5 & 3.98 & 1.37\mbox{e-}5 & 3.97 & 4.92\mbox{e-}6 & 3.99 \\
& 40 & 2.21\mbox{e-}6 & 3.99 & 8.64\mbox{e-}7 & 3.99 & 3.08\mbox{e-}7 & 4.00 \\
\hline
\end{array} $
}
\end{center}{$\phantom{|}$}
\caption{History of convergence for $\|u - u_h\|_{L^2(\mathcal{T}_h)}$ when $\epsilon = 1$.}
\label{smooth_test1}
\end{table}
\begin{table}[htbp]
\footnotesize
\begin{center}
\scalebox{0.9}{
$\begin{array}{|c|c||c c|c c|c c|}
\hline
\phantom{\big|} \mbox{degree} & \mbox{mesh} & \multicolumn{6}{|c|}{\epsilon=1\phantom{\big|}}
\\
\cline{3-8}
\phantom{\big|} k& h^{-1} & \mbox{error} & \mbox{order} & \mbox{error} & \mbox{order} & \mbox{error} & \mbox{order} \\
\hline
& & \multicolumn{2}{|c}{HDG1} & \multicolumn{2}{|c}{HDG2}&
\multicolumn{2}{|c|}{HDG3}
\\
\hline
& 5 & 2.25\mbox{e-}2 & -- & 1.70\mbox{e-}2 & -- & 1.39\mbox{e-}2 & -- \\
& 10 & 3.08\mbox{e-}3 & 2.87 & 2.14\mbox{e-}3 & 2.99 & 1.70\mbox{e-}3 & 3.04 \\
1& 20 & 3.94\mbox{e-}4 & 2.96 & 2.65\mbox{e-}4 & 3.02 & 2.08\mbox{e-}4 & 3.03 \\
& 40 & 4.96\mbox{e-}5 & 2.99 & 3.28\mbox{e-}5 & 3.01 & 2.56\mbox{e-}5 & 3.02 \\
\hline
& 5 & 2.49\mbox{e-}3 & -- & 2.13\mbox{e-}3 & -- & 1.92\mbox{e-}3 & -- \\
& 10 & 1.59\mbox{e-}4 & 3.97 & 1.35\mbox{e-}4 & 3.98 & 1.23\mbox{e-}4 & 3.97 \\
2& 20 & 9.95\mbox{e-}6 & 4.00 & 8.45\mbox{e-}6 & 4.00 & 7.71\mbox{e-}6 & 3.99 \\
& 40 & 6.22\mbox{e-}7 & 4.00 & 5.28\mbox{e-}7 & 4.00 & 4.82\mbox{e-}7 & 4.00 \\
\hline
& 5 & 2.78\mbox{e-}4 & -- & 2.43\mbox{e-}4 & -- & 2.20\mbox{e-}4 & -- \\
& 10 & 8.87\mbox{e-}6 & 4.97 & 7.68\mbox{e-}6 & 4.98 & 6.94\mbox{e-}6 & 4.99 \\
3& 20 & 2.78\mbox{e-}7 & 4.99 & 2.40\mbox{e-}7 & 5.00 & 2.17\mbox{e-}7 & 5.00 \\
& 40 & 8.70\mbox{e-}9 & 5.00 & 7.50\mbox{e-}9 & 5.00 & 6.77\mbox{e-}9 & 5.00 \\
\hline
\end{array} $
}
\end{center}{$\phantom{|}$}
\caption{History of convergence for $\|u - u_h^\star\|_{L^2(\mathcal{T}_h)}$ when $\epsilon = 1$.}
\label{smooth_test_post}
\end{table}
\begin{table}[htbp]
\footnotesize
\begin{center}
\scalebox{0.9}{
$\begin{array}{|c|c||c c|c c|c c||c c|c c|c c|}
\hline
\phantom{\big|} \mbox{degree} & \mbox{mesh} & \multicolumn{6}{|c||}{\epsilon=10^{-3}\phantom{\big|}} &\multicolumn{6}{|c|}{
\epsilon=10^{-9}\phantom{\big|}}
\\
\cline{3-14}
\phantom{\big|} k& h^{-1} & \mbox{error} & \mbox{order} & \mbox{error} & \mbox{order} & \mbox{error} & \mbox{order} & \mbox{error} & \mbox{order}
& \mbox{error} & \mbox{order} & \mbox{error} & \mbox{order} \\
\hline
& & \multicolumn{2}{|c|}{HDG1} & \multicolumn{2}{|c}{HDG2}&
\multicolumn{2}{|c||}{HDG3} & \multicolumn{2}{|c}{HDG1}& \multicolumn{2}{|c}{HDG2} & \multicolumn{2}{|c|}{HDG3}
\\
\hline
& 5 & 3.16\mbox{e-}1 & -- & 3.16\mbox{e-}1 & -- & 3.14\mbox{e-}1 & -- & 3.18\mbox{e-}1 & -- & 3.18\mbox{e-}1 & -- & 3.18\mbox{e-}1 & -- \\
& 10 & 1.71\mbox{e-}1 & 0.88 & 1.71\mbox{e-}1 & 0.88 & 1.69\mbox{e-}1 & 0.89 & 1.74\mbox{e-}1 & 0.87 & 1.74\mbox{e-}1 & 0.87 & 1.74\mbox{e-}1 & 0.87 \\
0& 20 & 8.78\mbox{e-}2 & 0.96 & 8.78\mbox{e-}2 & 0.96 & 8.60\mbox{e-}2 & 0.98 & 9.06\mbox{e-}2 & 0.94 & 9.06\mbox{e-}2 & 0.94 & 9.06\mbox{e-}2 & 0.94 \\
& 40 & 4.37\mbox{e-}2 & 1.00 & 4.38\mbox{e-}2 & 1.00 & 4.22\mbox{e-}2 & 1.03 & 4.63\mbox{e-}2 & 0.97 & 4.63\mbox{e-}2 & 0.97 & 4.63\mbox{e-}2 & 0.97 \\
\hline
& 5 & 7.84\mbox{e-}2 & -- & 7.84\mbox{e-}2 & -- & 7.75\mbox{e-}2 & -- & 7.96\mbox{e-}2 & -- & 7.96\mbox{e-}2 & -- & 7.96\mbox{e-}2 & -- \\
& 10 & 2.00\mbox{e-}2 & 1.97 & 2.00\mbox{e-}2 & 1.97 & 1.95\mbox{e-}2 & 1.99 & 2.04\mbox{e-}2 & 1.85 & 2.04\mbox{e-}2 & 1.85 & 2.04\mbox{e-}2 & 1.85 \\
1& 20 & 4.95\mbox{e-}3 & 2.01 & 4.95\mbox{e-}3 & 2.01 & 4.73\mbox{e-}3 & 2.05 & 5.13\mbox{e-}3 & 1.96 & 5.13\mbox{e-}3 & 1.96 & 5.13\mbox{e-}3 & 1.96 \\
& 40 & 1.21\mbox{e-}3 & 2.03 & 1.21\mbox{e-}3 & 2.03 & 1.11\mbox{e-}3 & 2.09 & 1.28\mbox{e-}3 & 1.99 & 1.28\mbox{e-}3 & 1.99 & 1.28\mbox{e-}3 & 1.99 \\
\hline
& 5 & 1.32\mbox{e-}2 & -- & 1.32\mbox{e-}2 & -- & 1.31\mbox{e-}2 & -- & 1.35\mbox{e-}2 & -- & 1.35\mbox{e-}2 & -- & 1.35\mbox{e-}2 & -- \\
& 10 & 1.72\mbox{e-}3 & 2.95 & 1.72\mbox{e-}3 & 2.95 & 1.68\mbox{e-}3 & 2.96 & 1.77\mbox{e-}3 & 2.93 & 1.77\mbox{e-}3 & 2.93 & 1.77\mbox{e-}3 & 2.93 \\
2& 20 & 2.14\mbox{e-}4 & 3.00 & 2.14\mbox{e-}4 & 3.00 & 2.05\mbox{e-}4 & 3.03 & 2.24\mbox{e-}4 & 2.98 & 2.24\mbox{e-}4 & 2.98 & 2.24\mbox{e-}4 & 2.98 \\
& 40 & 2.63\mbox{e-}5 & 3.02 & 2.63\mbox{e-}5 & 3.02 & 2.45\mbox{e-}5 & 3.07 & 2.80\mbox{e-}5 & 3.00 & 2.80\mbox{e-}5 & 3.00& 2.80\mbox{e-}5 & 3.00\\
\hline
& 5 & 1.83\mbox{e-}3 & -- & 1.83\mbox{e-}3 & -- & 1.80\mbox{e-}3 & -- & 1.87\mbox{e-}3 & -- & 1.87\mbox{e-}3 & -- & 1.87\mbox{e-}3 & -- \\
& 10 & 1.17\mbox{e-}4 & 3.97 & 1.17\mbox{e-}4 & 3.97 & 1.13\mbox{e-}4 & 3.99 & 1.20\mbox{e-}4 & 3.95 & 1.20\mbox{e-}4 & 3.95 & 1.20\mbox{e-}4 & 3.95 \\
3& 20 & 7.23\mbox{e-}6 & 4.01 & 7.23\mbox{e-}6 & 4.01 & 6.82\mbox{e-}6 & 4.05 & 7.56\mbox{e-}6 & 3.99 & 7.56\mbox{e-}6 & 3.99& 7.56\mbox{e-}6 & 3.99\\
& 40 & 4.43\mbox{e-}7 & 4.03 & 4.43\mbox{e-}7 & 4.03 & 4.01\mbox{e-}7 & 4.09 & 4.73\mbox{e-}7 & 4.00 & 4.73\mbox{e-}7 & 4.00 & 4.73\mbox{e-}7 & 4.00\\
\hline
\end{array} $
}
\end{center}{$\phantom{|}$}
\caption{History of convergence for $\|u - u_h\|_{L^2(\mathcal{T}_h)}$ when $\epsilon = 10^{-3}$ and $\epsilon = 10^{-9}$.}
\label{smooth_test2}
\end{table}
\subsection{A rotating flow test}
We take $\epsilon = 10^{-6}$, $\boldsymbol{\beta} = [y - 1/2, 1/2 - x]^T$, and $f = 0$. The solution $u$ is prescribed along the slit $\{1/2\}\times [0, 1/2]$ as follows:
\[
u(1/2,y) = \sin^2(2\pi\,y)\qquad y \in [0, 1/2].
\]
See \cite{HughesScovazziBochevBuffa2006} for a detailed description of this test.
In Fig.~\ref{rotating1}, we plot $u_h$ obtained from the three HDG methods for various polynomial degrees in a structured triangular grid of 128 elements.
To better compare the results, we plot in Fig.~\ref{rotating3} extracted data of $u_h$
along the horizontal center line $y = 1/2$. We also plot in Fig.~\ref{rotating_high} a comparison of $P_0$-$HDG1$ on 8192 elements and $P_3$-$HDG1$ on 128 elements.
From Fig.~\ref{rotating1}, we find that all the HDG methods produce similar results. Moreover, it is clear that higher order methods
lead to better approximations and
are computationally cheaper than lower order methods for qualitatively similar numerical results.
\begin{figure}
\caption{3D plot of $u_h$ for the rotating flow test with $\epsilon = 10^{-6}$.}
\label{rotating1}
\end{figure}
\begin{figure}
\caption{$u_h$ along $y = 1/2$ for $P_k$-$HDG3$ with 128 elements.}
\label{rotating3}
\end{figure}
\begin{figure}
\caption{A comparison of $P_0$-$HDG$ and $P_3$-$HDG$. Left: $P_0$-$HDG1$ with $8192$ elements; Right: $P_3$-$HDG1$
with $128$ elements. Top: 3D plot; Bottom: 2D contour.}
\label{rotating_high}
\end{figure}
\subsection{An interior layer test}
We take $\boldsymbol{\beta} = [1/2, \sqrt{3}/2]^T$, $f = 0$, and the Dirichlet boundary condition as follows:
\begin{align*}
u = \left\{\begin{tabular}{l l}
$1$& { on }$\{y = 0, 0\le x\le 1\}$,\\
$1$& { on }$\{x = 0, 0\le y\le 1/5\}$,\\
$0$& { elsewhere. }
\end{tabular}
\right.
\end{align*}
It is clear that, for small $\epsilon$, the exact solution develops an interior layer along the $\bld \beta$ direction starting from $(0,1/5)$, and boundary
layers on the right and top boundaries.
In Fig.~\ref{interior31} and Fig.~\ref{interior91},
we plot the computational results in a structured triangular grid of 128 elements for $\epsilon = 10^{-3}$ and $\epsilon = 10^{-9}$
respectively.
In order to better see the performance of the HDG method in capturing interior layers,
in Fig.~\ref{interior_contour},
we plot the contour of $u_h$ using $P_k$-$HDG1$ with $ 0\le k \le 3$ for $\epsilon = 10^{-3}$ on three consecutive meshes, with the coarsest one consisting of 200 elements.
Again, all the HDG methods produce quite similar results.
Note that, as expected, the piecewise-constant approximations are free of oscillations but extensively smear out the interior layer, while, on the
other hand, higher order approximations capture the interior layer within a few elements but produce oscillations within the layer.
\begin{figure}
\caption{3D plot of $u_h$ for the interior layer test with $\epsilon = 10^{-3}$.}
\label{interior31}
\end{figure}
\begin{figure}
\caption{3D plot of $u_h$ for the interior layer test with $\epsilon = 10^{-9}$.}
\label{interior91}
\end{figure}
\begin{figure}
\caption{Contour plot of $u_h$ using $HDG1$ for the interior layer test with $\epsilon = 10^{-3}$.}
\label{interior_contour}
\end{figure}
\subsection{A boundary layer test}
Finally, we take $\boldsymbol{\beta} = [1, 1]^T$, and choose the source term $f$ so that the exact solution is
\[
u(x,y) = \sin\frac{\pi\,x}{2} + \sin\frac{\pi\,y}{2}\left(1-\sin\frac{\pi\,x}{2}\right) + \frac{e^{-1/\epsilon} - e^{-(1-x)(1-y)/\epsilon}}{1 - e^{-1/\epsilon}}.
\]
The solution develops boundary layers along the top and right boundaries for small $\epsilon$ (see Fig.~\ref{bdry21} for $\epsilon = 10^{-2}$ and Fig.~\ref{bdry61}
for $\epsilon = 10^{-6}$).
The exact solution is a slight modification of that considered in \cite{AyusoMarini:cdf}, chosen so that,
away from the boundary layers, it does not behave
like a quadratic polynomial as in \cite{AyusoMarini:cdf}.
This modification is useful for us to clearly see the orders of convergence for $k=2,3$.
In Fig.~\ref{bdry21} and Fig.~\ref{bdry61}, we plot the exact solution and computational
results for $\epsilon = 10^{-2}$ and $\epsilon = 10^{-6}$ on a structured mesh of 200 elements.
We find that all the HDG methods produce similar results. The boundary layers are not resolved since
the mesh is too coarse.
In Table~\ref{bdry_order}, we show the convergence of $u_h$ in $L^2$--norm
for $\epsilon = 10^{-2},10^{-6}$ in the reduced domain $\widetilde{\Omega} = [0, 0.9]\times[0,0.9]\subset \Omega$ to exclude the unresolved boundary
layers. Just as in the smooth case, the three HDG methods produce very similar convergence results. Hence we only show the computed results for $P_k$-$HDG1$ in Table~\ref{bdry_order}.
We observe optimal $L^2$--convergence rates for $u_h$.
\begin{figure}
\caption{3D plot of the exact solution and $u_h$ for the boundary layer test with $\epsilon = 10^{-2}$.}
\label{bdry21}
\end{figure}
\begin{figure}
\caption{3D plot of the exact solution and $u_h$ for the boundary layer test with $\epsilon = 10^{-6}$.}
\label{bdry61}
\end{figure}
\begin{table}[htbp]
\footnotesize
\begin{center}
\scalebox{0.9}{
$\begin{array}{|c|c||c c||c c|}
\hline
\phantom{\big|} \mbox{degree} & \mbox{mesh} & \multicolumn{2}{|c||}{\epsilon=10^{-2}\phantom{\big|}} &\multicolumn{2}{|c|}{
\epsilon=10^{-6}\phantom{\big|}}
\\
\cline{3-6}
\phantom{\big|} k& h^{-1} & \mbox{error} & \mbox{order} & \mbox{error} & \mbox{order}\\
\hline
& 10 & 3.61\mbox{e-}2 & -- & 3.32\mbox{e-}2 & -- \\
& 20 & 1.81\mbox{e-}2 & 0.99 & 1.67\mbox{e-}2 & 1.00 \\
0& 40 & 9.06\mbox{e-}3 & 1.00 & 8.34\mbox{e-}3 & 1.00 \\
& 80 & 4.52\mbox{e-}3 & 1.00 & 4.17\mbox{e-}3 & 1.00 \\
\hline
& 10 & 4.22\mbox{e-}3 & -- & 1.20\mbox{e-}3 & -- \\
& 20 & 8.54\mbox{e-}4 & 2.30 & 3.00\mbox{e-}4 & 2.00 \\
1& 40 & 2.13\mbox{e-}4 & 2.00 & 7.51\mbox{e-}5 & 2.00 \\
& 80 & 5.30\mbox{e-}5 & 2.01 & 1.88\mbox{e-}5 & 2.00 \\
\hline
& 10 & 1.48\mbox{e-}3 & -- & 1.90\mbox{e-}5 & -- \\
& 20 & 6.66\mbox{e-}5 & 4.47 & 2.37\mbox{e-}6 & 3.00 \\
2& 40 & 8.19\mbox{e-}6 & 3.02 & 2.96\mbox{e-}7 & 3.00 \\
& 80 & 1.03\mbox{e-}6 & 3.00 & 3.70\mbox{e-}8 & 3.00 \\
\hline
& 10 & 4.10\mbox{e-}4 & -- & 3.17\mbox{e-}7 & -- \\
& 20 & 5.35\mbox{e-}6 & 6.26 & 1.99\mbox{e-}8 & 3.99 \\
3& 40 & 3.56\mbox{e-}7 & 3.91 & 1.25\mbox{e-}9 & 4.00 \\
& 80 & 2.27\mbox{e-}8 & 3.97 & 7.79\mbox{e-}11 & 4.00 \\
\hline
\end{array} $
}
\end{center}{$\phantom{|}$}
\caption{History of convergence of $HDG1$ for $\|u - u_h\|_{L^2(\widetilde{\Omega})}$ when $\epsilon = 10^{-2}$ and $\epsilon = 10^{-6}$.}
\label{bdry_order}
\end{table}
\subsection{The condition number}
Now, we present the condition number of the matrix generated by the original bilinear form $a_h$ in \eqref{hybrid-ee} and
the scaled bilinear form $\widetilde{a}_h$ in \eqref{reduced_matrix2}. We use the same setup as that of the smooth test with two choices of
$\bld \beta$. The first choice of $\bld \beta$ is $\bld\beta = [1,2]^T$. For this choice, assumption
\eqref{assump_cond} is satisfied by both examples of $\tau$ in \eqref{tau1} and \eqref{tau2}.
Since the condition numbers of all three HDG methods are very similar in our tests, we only present
those for $P_k$-$HDG1$ in Table~\ref{cond_1}. Notice that $\mathcal{O}(h^{-2})$ growth is observed for $\epsilon = 1,10^{-3},10^{-9}$ and all polynomial degrees, and that the condition number of
the scaled system is similar to that of the unscaled system.
The second choice of $\bld \beta$ is $\bld\beta = [1,1]^T$. For this choice, assumption
\eqref{assump_cond} is satisfied by the second choice of $\tau$ in \eqref{tau2}, but not by the choice \eqref{tau1}, since the mesh is aligned with $\bld\beta$. However, one can easily modify
$\tau$ in $P_k$-$HDG1$ and $P_k$-$HDG3$ on the aligned faces so that \eqref{assump_cond} holds. We only present the condition numbers for $P_k$-$HDG2$ in Table~\ref{cond_2}.
The dependence of the condition number on $\epsilon$ seems to be of order $\mathcal{O}({\epsilon^{-1}})$ for the original system,
and we observe a huge improvement
of the condition number after scaling for $\epsilon = 10^{-9}$. Also, we find that the condition number for the scaled system is of order $\mathcal{O}(h^{-2})$ for $\epsilon = 1$, and
of order $\mathcal{O}(h^{-1})$ for $\epsilon = 10^{-3},10^{-9}$, which is not predicted by our theory.
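One way to quantify the growth rates discussed above is a least-squares fit of the exponent $p$ in $\kappa\approx C h^{-p}$ from the tabulated condition numbers, as in the sketch below; the sample column corresponds to the scaled system with $\epsilon=10^{-9}$ and $k=1$ in Table~\ref{cond_2}.
\begin{verbatim}
import numpy as np

def scaling_exponent(inv_h, cond):
    # slope of log(cond) versus log(1/h), i.e. the exponent p in cond ~ C * h^{-p}
    slope, _ = np.polyfit(np.log(np.asarray(inv_h, float)),
                          np.log(np.asarray(cond, float)), 1)
    return slope

print(round(scaling_exponent([5, 10, 20, 40],
                             [2.07e3, 4.66e3, 9.84e3, 2.02e4]), 2))   # about 1.1
\end{verbatim}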
\begin{table}[htbp]
\footnotesize
\begin{center}
\scalebox{0.9}{
$\begin{array}{|c|c||c|c|c|c|| c|c|c|c|}
\hline
\phantom{\big|} \mbox{$\epsilon$} & \mbox{mesh} & \multicolumn{4}{|c||}{\text{Condition numbers
for $a_h(\cdot,\cdot)$} \phantom{\big|}} & \multicolumn{4}{|c|}{\text{Condition numbers
for $\widetilde{a}_h(\cdot,\cdot)$} \phantom{\big|}}
\\
\cline{3-10}
\phantom{\big|} & h^{-1} & \mbox{k=}0 & \mbox{k=}1 & \mbox{k=}2 & \mbox{k=}3& \mbox{k=}0 & \mbox{k=}1 & \mbox{k=}2 & \mbox{k=}3\\
\hline
& 5 & 1.11\mbox{e+}2 & 1.42\mbox{e+}2 & 2.82\mbox{e+}2 & 3.31\mbox{e+}2 & 2.49\mbox{e+}2 & 2.65\mbox{e+}2 & 5.30\mbox{e+}2 & 6.29\mbox{e+}2\\
& 10 & 3.37\mbox{e+}2 & 5.28\mbox{e+}2 & 1.06\mbox{e+}3 & 1.24\mbox{e+}2 & 6.29\mbox{e+}2 & 9.83\mbox{e+}2 & 1.98\mbox{e+}3 & 2.34\mbox{e+}2 \\
1\mbox{e-}0 & 20 & 1.37\mbox{e+}3 & 2.09\mbox{e+}3 & 4.17\mbox{e+}3 & 4.89\mbox{e+}2 & 2.55\mbox{e+}3 & 3.89\mbox{e+}3 & 7.78\mbox{e+}3 & 9.21\mbox{e+}2 \\
& 40 & 5.50\mbox{e+}3 & 8.32\mbox{e+}3 & 1.66\mbox{e+}4 & 1.95\mbox{e+}2 & 1.03\mbox{e+}4 & 1.55\mbox{e+}4 & 3.10\mbox{e+}4 & 3.67\mbox{e+}2 \\
\hline
& 5 & 4.86\mbox{e+}1 & 1.60\mbox{e+}2 & 3.64\mbox{e+}2 & 5.00\mbox{e+}2 & 3.71\mbox{e+}1 & 1.12\mbox{e+}2 & 2.44\mbox{e+}2 & 3.40\mbox{e+}2 \\
& 10 & 1.74\mbox{e+}2 & 5.23\mbox{e+}2 & 8.79\mbox{e+}2 & 1.27\mbox{e+}3 & 1.36\mbox{e+}2 & 3.75\mbox{e+}2 & 6.51\mbox{e+}2 & 8.48\mbox{e+}2 \\
1\mbox{e-}3 & 20 & 6.48\mbox{e+}2 & 1.82\mbox{e+}3 & 2.35\mbox{e+}3 & 4.06\mbox{e+}3 & 5.06\mbox{e+}2 & 1.25\mbox{e+}3 & 1.17\mbox{e+}3 & 2.50\mbox{e+}3 \\
& 40 & 2.48\mbox{e+}3 & 6.94\mbox{e+}3 & 8.76\mbox{e+}3 & 1.49\mbox{e+}4 & 1.94\mbox{e+}3 & 4.44\mbox{e+}3 & 5.98\mbox{e+}3 & 8.96\mbox{e+}3 \\
\hline
& 5 & 4.90\mbox{e+}1 & 1.69\mbox{e+}2 & 4.21\mbox{e+}2 & 5.76\mbox{e+}2 & 3.73\mbox{e+}1 & 9.57\mbox{e+}1 & 2.97\mbox{e+}2 & 3.80\mbox{e+}2\\
& 10 & 1.74\mbox{e+}2 & 5.38\mbox{e+}2 & 1.17\mbox{e+}3 & 1.52\mbox{e+}3 & 1.36\mbox{e+}2 & 4.04\mbox{e+}2 & 9.11\mbox{e+}2 & 1.05\mbox{e+}3 \\
1\mbox{e-}9 & 20 & 6.49\mbox{e+}2 & 1.94\mbox{e+}3 & 3.76\mbox{e+}3 & 4.73\mbox{e+}3 & 5.05\mbox{e+}2 & 1.46\mbox{e+}3 & 2.94\mbox{e+}3 & 3.40\mbox{e+}3 \\
& 40 & 2.50\mbox{e+}3 & 7.06\mbox{e+}3 & 1.34\mbox{e+}4 & 1.65\mbox{e+}4 & 1.93\mbox{e+}3 & 5.37\mbox{e+}3 & 9.45\mbox{e+}3 & 1.25\mbox{e+}4\\
\hline
\end{array} $
}
\end{center}{$\phantom{|}$}
\caption{Condition numbers for $HDG1$ when $\bld\beta =[1,2]^T$.}
\label{cond_1}
\end{table}
\begin{table}[htbp]
\footnotesize
\begin{center}
\scalebox{0.9}{
$\begin{array}{|c|c||c|c|c|c|| c|c|c|c|}
\hline
\phantom{\big|} \mbox{$\epsilon$} & \mbox{mesh} & \multicolumn{4}{|c||}{\text{Condition numbers
for $a_h(\cdot,\cdot)$} \phantom{\big|}} & \multicolumn{4}{|c|}{\text{Condition numbers
for $\widetilde{a}_h(\cdot,\cdot)$} \phantom{\big|}}
\\
\cline{3-10}
\phantom{\big|} & h^{-1} & \mbox{k=}0 & \mbox{k=}1 & \mbox{k=}2 & \mbox{k=}3& \mbox{k=}0 & \mbox{k=}1 & \mbox{k=}2 & \mbox{k=}3\\
\hline
& 5 & 9.99\mbox{e+}1 & 1.41\mbox{e+}2 & 2.82\mbox{e+}2 & 3.31\mbox{e+}2 & 1.16\mbox{e+}2 & 1.96\mbox{e+}2 & 3.92\mbox{e+}2 & 4.64\mbox{e+}2\\
& 10 & 3.40\mbox{e+}2 & 5.34\mbox{e+}2 & 1.07\mbox{e+}3 & 1.25\mbox{e+}2 & 4.35\mbox{e+}2 & 6.82\mbox{e+}2 & 1.37\mbox{e+}3 & 1.61\mbox{e+}2 \\
1\mbox{e-}0 & 20 & 1.38\mbox{e+}3 & 2.12\mbox{e+}3 & 4.23\mbox{e+}3 & 4.96\mbox{e+}2 & 1.77\mbox{e+}3 & 2.71\mbox{e+}3 & 5.41\mbox{e+}3 & 6.38\mbox{e+}2 \\
& 40 & 5.57\mbox{e+}3 & 8.44\mbox{e+}3 & 1.68\mbox{e+}4 & 1.98\mbox{e+}2 & 7.12\mbox{e+}3 & 1.08\mbox{e+}4 & 2.15\mbox{e+}4 & 2.54\mbox{e+}2 \\
\hline
& 5 & 2.01\mbox{e+}2 & 6.63\mbox{e+}2 & 9.06\mbox{e+}2 & 1.55\mbox{e+}3 & 2.48\mbox{e+}2 & 1.81\mbox{e+}3 & 7.02\mbox{e+}3 & 9.83\mbox{e+}3 \\
& 10 & 4.00\mbox{e+}2 & 1.41\mbox{e+}3 & 1.62\mbox{e+}3 & 3.27\mbox{e+}3 & 5.63\mbox{e+}2 & 2.34\mbox{e+}3 & 7.45\mbox{e+}3 & 1.16\mbox{e+}4 \\
1\mbox{e-}3 & 20 & 1.25\mbox{e+}3 & 4.42\mbox{e+}3 & 4.98\mbox{e+}3 & 9.57\mbox{e+}3 & 1.20\mbox{e+}3 & 3.99\mbox{e+}3 & 9.01\mbox{e+}3 & 1.66\mbox{e+}4 \\
& 40 & 4.69\mbox{e+}3 & 1.63\mbox{e+}4 & 1.86\mbox{e+}4 & 3.12\mbox{e+}4 & 2.60\mbox{e+}3 & 7.29\mbox{e+}3 & 1.32\mbox{e+}4 & 2.49\mbox{e+}4 \\
\hline
& 5 & 1.43\mbox{e+}8 & 4.02\mbox{e+}8 & 6.04\mbox{e+}8 & 7.34\mbox{e+}8 & 2.38\mbox{e+}2 & 2.07\mbox{e+}3 & 1.23\mbox{e+}4 & 2.49\mbox{e+}4\\
& 10 & 1.31\mbox{e+}8 & 3.84\mbox{e+}8 & 5.68\mbox{e+}8 & 7.03\mbox{e+}8 & 5.35\mbox{e+}2 & 4.66\mbox{e+}3 & 2.77\mbox{e+}4 & 5.60\mbox{e+}4 \\
1\mbox{e-}9 & 20 & 1.25\mbox{e+}8 & 3.75\mbox{e+}8 & 5.51\mbox{e+}8 & 6.87\mbox{e+}8 & 1.13\mbox{e+}3 & 9.84\mbox{e+}3 & 5.85\mbox{e+}4 & 1.12\mbox{e+}5 \\
& 40 & 1.22\mbox{e+}8 & 3.70\mbox{e+}8 & 5.42\mbox{e+}8 & 6.79\mbox{e+}8 & 2.32\mbox{e+}3 & 2.02\mbox{e+}4 & 1.20\mbox{e+}5 & 2.30\mbox{e+}5\\
\hline
\end{array} $
}
\end{center}{$\phantom{|}$}
\caption{Condition numbers for $HDG2$ when $\bld\beta =[1,1]^T$.}
\label{cond_2}
\end{table}
{\bf Acknowledgements}. The authors would like to thank Professor Bernardo Cockburn
for constructive criticism leading to a better presentation of the material in this paper. The authors would also like to
thank one of the referees for pointing out the paper by Egger and Sch\"oberl \cite{Egger2010} and
for suggesting to comment on the relation of the HDG method to the MH--DG method in \cite{Egger2010}, as was done in Appendix~\ref{appendix-1}.
The work of Weifeng Qiu was supported by City University of Hong Kong under start-up grant (No. 7200324).
\appendix
\section{The relation between the HDG method
and the MH--DG method in \cite{Egger2010}}
\label{appendix-1}
In this section, we establish the relation between the HDG method \eqref{cd_hdg_eqs} and the MH--DG method considered in \cite{Egger2010}. We first present the MH--DG method for equations \eqref{cd_eqs} under the condition that $g= 0$ and $\bld \beta \in H(\mathrm{div};\Omega)$ is
constant in each element. Then, we show that this method coincides with the HDG method \eqref{cd_hdg_eqs} when using the same approximation spaces and
choosing the stabilization function $\tau$ as in \eqref{tau1}.
The MH--DG method seeks an approximation $(\bld q_h,u_h,\lambda_h)\in \widetilde{\bld V}_h\times
W_h\times M_h(0)$ so that
\begin{align}
\label{mhdg}
B_h((\bld q_h,u_h,\lambda_h), (\bld r, w,\mu)) = (f,w)_{{\mathcal{T}_h}},
\end{align}
for all $(\bld r, w,\mu)\in \widetilde{\bld V}_h\times
W_h\times M_h(0)$,
where $W_h$ and $M_h(0)$ are defined in \eqref{fem_spaces} and $\widetilde{\bld V}_h$ is the so-called Raviart--Thomas space, slightly larger than $\bld V_h$,
defined as follows:
\begin{align*}
\widetilde{\bld V}_{h}&=\{\boldsymbol{r}\in L^{2}(\Omega;\mathbb{R}^{d}):\boldsymbol{r}|_{K}\in P_{k}(K;\mathbb{R}^{d}) + \bld x P_k(K)
\quad \forall K\in \mathcal{T}_{h}\},
\end{align*}
and
\begin{align*}
B_h((\boldsymbol{q},u,\lambda),(\boldsymbol{r},w,\mu))
= & \;(\epsilon^{-1}\boldsymbol{q},\boldsymbol{r})_{\mathcal{T}_{h}}-(u,\nabla\cdot\boldsymbol{r})_{\mathcal{T}_{h}}
+\langle \lambda, \boldsymbol{r}\cdot \boldsymbol{n}\rangle_{\partial\mathcal{T}_{h}}\\
& -(\boldsymbol{q}+\boldsymbol{\beta}u,\nabla w)_{\mathcal{T}_{h}}
+\langle \bld q\cdot \boldsymbol{n}
+ \bld\beta\cdot\bld n \{\lambda/u\},w - \mu\rangle_{\partial\mathcal{T}_{h}},
\end{align*}
where
\[
\{\lambda / u\} : = \left\{\begin{tabular}{c c}
$\lambda$, &\quad\text{ if } $\bld \beta\cdot\bld n <0$,\\
$u$, &\quad\text{ if } $\bld \beta\cdot\bld n \ge 0$.\\
\end{tabular}
\right.
\]
Comparing the bilinear form for the HDG method \eqref{bilinear_form} with approximation spaces
$ \widetilde{\bld V}_h\times
W_h\times M_h(0)$ and the stabilization function $\tau$ in \eqref{tau1} with that for the MH--DG method
in \eqref{mhdg}, we notice that the only difference lies in the definition of the numerical fluxes
$(\widehat{\bld q}_h +\widehat{\bld\beta u_h})\cdot \bld n$ and $\bld q_h \cdot \boldsymbol{n}
+ \bld\beta\cdot\bld n \{\lambda/u\}$.
However, by the following simple calculation, we observe that the two numerical fluxes are actually the same:
\begin{align*}
(\widehat{\bld q}_h +\widehat{\bld\beta u_h} )\cdot \bld n& = \;{\bld q}_h\cdot \bld n +{\bld\beta\cdot \bld n \lambda_h} + \tau (u_h-\lambda_h)\\
& = \;{\bld q}_h\cdot \bld n +{\bld\beta\cdot \bld n \lambda_h} + \max(\bld \beta\cdot \bld n,0) ( u_h-\lambda_h) \\
& = \; \left\{\begin{tabular}{c c}
$\bld q_h\cdot \bld n+ \bld\beta\cdot\bld n \lambda_h$, &\quad\text{ if } $\bld \beta\cdot\bld n <0$\\
$\bld q_h\cdot \bld n+ \bld\beta\cdot\bld n u_h$, &\quad\text{ if } $\bld \beta\cdot\bld n \ge 0$
\end{tabular}\right.\\
&=\; \bld q_h\cdot\bld n+\bld\beta\cdot\bld n \{\lambda_h/u_h\}.
\end{align*}
Hence, these two methods coincide.
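As a quick numerical sanity check of this identity, the following sketch compares the two fluxes for random scalar data on a face; the values are hypothetical and only serve to verify the algebra above.
\begin{verbatim}
import random

def flux_hdg(qn, u, lam, bn):      # qn = q_h.n, bn = beta.n, tau = max(bn, 0)
    return qn + bn * lam + max(bn, 0.0) * (u - lam)

def flux_mhdg(qn, u, lam, bn):     # upwind value {lambda/u}
    return qn + bn * (lam if bn < 0.0 else u)

for _ in range(1000):
    qn, u, lam, bn = (random.uniform(-2.0, 2.0) for _ in range(4))
    assert abs(flux_hdg(qn, u, lam, bn) - flux_mhdg(qn, u, lam, bn)) < 1e-12
\end{verbatim}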
\section{Conditioning of the HDG methods}
\label{appendix}
In this section, we give a proof of Theorem~\ref{Thm_conditioning}. Again, the key idea is to
recover an estimate of the $L^2$--norm of $u_h$; see the estimate \eqref{low_bound1} below.
By an argument similar to the one in the proof of Lemma~\ref{lemma_practical_infsup},
we have the following local energy estimate:
\begin{lemma}
\label{lemma_local_energy_estimates}
If $\epsilon \leq \mathcal{O}(h)$, then there is $h_{0}>0$, which is independent of $\epsilon$ and $h$,
such that for any $\lambda\in M_{h}(0)$ and
$K\in\mathcal{T}_{h}$,
\begin{align}
\label{local_energy_estimates}
\epsilon^{-1/2}\Vert \boldsymbol{q}_{h}^{\lambda}\Vert_{K}+ \Vert u_{h}^{\lambda}\Vert_{K}
+\Vert \vert\tau-\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}u_{h}^{\lambda} \Vert_{\partial K}
\leq C \Vert \vert\tau-\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}\lambda\Vert_{\partial K},
\end{align}
if $h< h_{0}$.
\end{lemma}
For all $(\boldsymbol{\sigma},v,\lambda), (\boldsymbol{r},w,\mu)\in H^{1}(\mathcal{T}_{h};\mathbb{R}^{d})\times H^{1}(\mathcal{T}_{h})\times L^{2}(\mathcal{E}_{h})$, we define
\begin{align*}
b_{h}((\boldsymbol{\sigma},v,\lambda), (\boldsymbol{r},w,\mu))
= &\; (\epsilon^{-1}\boldsymbol{\sigma},\boldsymbol{r})_{\mathcal{T}_{h}}
+\langle \tau (v-\lambda),w - \mu \rangle_{\partial\mathcal{T}_{h}}\\
&-(\boldsymbol{\beta}v,\nabla w)_{\mathcal{T}_{h}}
+ \langle (\boldsymbol{\beta}\cdot\boldsymbol{n})\lambda, w - \mu \rangle_{\partial\mathcal{T}_{h}}
-((\nabla\cdot \boldsymbol{\beta})v, w)_{\mathcal{T}_{h}}.
\end{align*}
The next result is similar to Lemma~\ref{lemma_practical_infsup}.
\begin{lemma}
\label{low_bound_reduced_system}
If $\epsilon \leq \mathcal{O}(h)$, then there is $h_{0}>0$, which is independent of $\epsilon$ and $h$,
such that for any $\lambda\in M_{h}(0)$,
\begin{align*}
&\epsilon^{-1}\Vert \boldsymbol{q}_{h}^\lambda\Vert_{\mathcal{T}_{h}}^{2}+ \Vert u_{h}^\lambda\Vert_{\mathcal{T}_{h}}^{2}
+\Vert \vert \tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}
(u_{h}^\lambda -\lambda)\Vert_{\partial\mathcal{T}_{h}}^{2} \\
\leq & C b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda), ( \boldsymbol{q}_{h}^{(P_{0,M}\varphi)\lambda},
u_{h}^{(P_{0,M}\varphi)\lambda}, (P_{0,M}\varphi)\lambda)),
\end{align*}
if $h< h_{0}$. Here, the weight function $\varphi$ is the one introduced in (\ref{weight_function}).
$P_{0,M}$ is the $L^{2}$--orthogonal projection onto $P_{0}(\mathcal{E}_{h})$.
\end{lemma}
\begin{remark}
Notice that in general, the space $\{(\boldsymbol{q}_{h}^{m}, u_{h}^{m},m):m\in M_{h}(0)\}$ is a non-trivial subspace of
$\boldsymbol{V}_{h}\times W_{h}\times M_{h}(0)$.
Moreover, given $m\in M_{h}$, the triple $(\boldsymbol{\Pi}_{h}(\varphi \boldsymbol{q}_{h}^{m}),P_{h}(\varphi u_{h}^{m}), P_{M}(\varphi m))$ is {\em not} necessarily contained in
$\{(\boldsymbol{q}_{h}^{m}, u_{h}^{m},m):m\in M_{h}\}$. Consequently, the proof of Lemma~\ref{low_bound_reduced_system} cannot be derived from the stability
of the HDG methods in Lemma~\ref{lemma_practical_infsup}.
\end{remark}
\begin{proof}
We accomplish the proof in the following steps.
(\textbf{I}) By the same argument as in the proof of Lemma~\ref{lemma_ideal_infsup}, if $h$ is small enough (independent of $\epsilon$), then
\begin{align*}
&\epsilon^{-1}\chi \Vert \boldsymbol{q}_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}^{2}+ \Vert u_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}^{2}
+\chi \Vert \vert \tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}
(u_{h}^{\lambda} -\lambda)\Vert_{\partial\mathcal{T}_{h}}^{2} \\
\leq & C b_{h}((\boldsymbol{q}_{h}^{\lambda}, u_{h}^{\lambda},\lambda), (\varphi\boldsymbol{q}_{h}^{\lambda},
\varphi u_{h}^{\lambda}, \varphi\lambda)).
\end{align*}
(\textbf{II}) By an argument similar to that in the proof of Lemma~\ref{lemma_practical_infsup}, if we choose $\chi$ large enough and $h$ small enough (both independent of $\epsilon$), then
\begin{align*}
&\epsilon^{-1}\chi\Vert \boldsymbol{q}_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}^{2}+ \Vert u_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}^{2}
+\chi\Vert \vert \tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}
(u_{h}^{\lambda} -\lambda)\Vert_{\partial\mathcal{T}_{h}}^{2} \\
\leq & C b_{h}((\boldsymbol{q}_{h}^{\lambda}, u_{h}^{\lambda},\lambda), ( (P_{0,h} \varphi) \boldsymbol{q}_{h}^{\lambda},
P_{h}(\varphi u_{h}^{\lambda}), (P_{0,M}\varphi)\lambda)).
\end{align*}
Here, we define $P_{0,h}\varphi$ to be the average of $\varphi$ on every element $K\in\mathcal{T}_{h}$.
(\textbf{III}) Now, we want to bound $b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda),
( (P_{0,h} \varphi)\boldsymbol{q}_{h}^{\lambda},
(P_{0,h} \varphi)u_{h}^{\lambda}, (P_{0,M}\varphi)\lambda))$ from below. Notice that
\begin{align*}
& b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda), ( (P_{0,h} \varphi)\boldsymbol{q}_{h}^{\lambda},
(P_{0,h} \varphi) u_{h}^{\lambda}, (P_{0,M}\varphi)\lambda))\\
= & b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda), ( (P_{0,h} \varphi)\boldsymbol{q}_{h}^{\lambda},
P_{h}(\varphi u_{h}^{\lambda}), (P_{0,M}\varphi)\lambda)) \\
& + (\boldsymbol{\beta}\cdot\nabla u_{h}^{\lambda},P_{h}(\varphi u_{h}^{\lambda})-(P_{0,h} \varphi)u_{h}^{\lambda})_{\mathcal{T}_{h}}\\
& +\langle (\tau-\boldsymbol{\beta}\cdot\boldsymbol{n})(u_{h}^{\lambda}-\lambda),
P_{h}(\varphi u_{h}^{\lambda})-(P_{0,h} \varphi)u_{h}^{\lambda}\rangle_{\partial\mathcal{T}_{h}}.
\end{align*}
By (\ref{local2}), we have
\begin{align}
\label{reduce_system_key1}
& b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda), ( (P_{0,h} \varphi)\boldsymbol{q}_{h}^{\lambda},
(P_{0,h} \varphi) u_{h}^{\lambda}, (P_{0,M}\varphi)\lambda))\\
\nonumber
= & b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda), ( (P_{0,h} \varphi)\boldsymbol{q}_{h}^{\lambda},
P_{h}(\varphi u_{h}^{\lambda}), (P_{0,M}\varphi)\lambda)) \\
\nonumber
& - (\nabla\cdot\boldsymbol{q}_{h}^{\lambda}, P_{h}(\varphi u_{h}^{\lambda})-(P_{0,h} \varphi)u_{h}^{\lambda})_{\mathcal{T}_{h}}.
\end{align}
Applying an inverse inequality to $\nabla\cdot\boldsymbol{q}_{h}^{\lambda}$ and using the assumption $\epsilon \leq \mathcal{O}(h)$, we have
\begin{equation*}
\Vert \nabla\cdot\boldsymbol{q}_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}\leq C \epsilon^{-1/2}h^{-1/2}
\Vert \boldsymbol{q}_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}.
\end{equation*}
In addition, we have
\begin{equation*}
\Vert P_{h}(\varphi u_{h}^{\lambda})-(P_{0,h} \varphi)u_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}\leq C h \Vert u_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}.
\end{equation*}
So, if $h$ is small enough (independent of $\epsilon,\chi$), we have
\begin{align}
\label{reduced_low_bound_ineq3}
&\chi\epsilon^{-1}\Vert \boldsymbol{q}_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}^{2}+ \Vert u_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}^{2}
+\chi\Vert \vert \tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}
(u_{h}^{\lambda} -\lambda)\Vert_{\partial\mathcal{T}_{h}}^{2} \\
\nonumber
\leq & C b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda), ( (P_{0,h} \varphi)\boldsymbol{q}_{h}^{\lambda},
(P_{0,h} \varphi)u_{h}^{\lambda}, (P_{0,M}\varphi)\lambda)).
\end{align}
(\textbf{IV}) Now, we want to bound $b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda),
( (P_{0,h} \varphi)\boldsymbol{q}_{h}^{\lambda},
u_{h}^{(P_{0,M}\varphi)\lambda}, (P_{0,M}\varphi)\lambda))$ from below. Similar to (\ref{reduce_system_key1}), we have
\begin{align}
\label{reduce_system_key2}
& b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda), ( (P_{0,h} \varphi)\boldsymbol{q}_{h}^{\lambda},
u_{h}^{(P_{0,M}\varphi)\lambda}, (P_{0,M}\varphi)\lambda))\\
\nonumber
= & b_{h}((\boldsymbol{q}_{h}^{\lambda}, u_{h}^{\lambda},\lambda), ( (P_{0,h} \varphi)\boldsymbol{q}_{h}^{\lambda},
(P_{0,h} \varphi)u_{h}^{\lambda}, (P_{0,M}\varphi)\lambda)) \\
\nonumber
& - (\nabla\cdot\boldsymbol{q}_{h}^{\lambda}, (P_{0,h} \varphi)u_{h}^{\lambda}-u_{h}^{(P_{0,M}\varphi)\lambda})_{\mathcal{T}_{h}}.
\end{align}
Since the map $\lambda\mapsto u_{h}^{\lambda}$ is linear, by (\ref{local_energy_estimates}) we have, for any $K\in\mathcal{T}_{h}$,
\begin{align*}
& \Vert (P_{0,h} \varphi)u_{h}^{\lambda}-u_{h}^{(P_{0,M}\varphi)\lambda}\Vert_{K} \\
= &\Vert u_{h}^{(P_{0,h} \varphi)\lambda}-u_{h}^{(P_{0,M}\varphi)\lambda}\Vert_{K}\\
\leq & C \Vert \vert\tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}
((P_{0,h} \varphi)\lambda-(P_{0,M}\varphi)\lambda)\Vert_{\partial K}\\
\leq & C h\Vert \vert\tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}\lambda \Vert_{\partial K}\\
\leq & C h^{1/2} \left( \Vert u_{h}^{\lambda}\Vert_{K}^{2}
+\Sigma_{F\in\mathcal{E}(K)}\Vert \vert \tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}
(u_{h}^{\lambda} -\lambda)\Vert_{F}^{2} \right)^{1/2}.
\end{align*}
Recall that by an inverse inequality applied to $\nabla\cdot\boldsymbol{q}_{h}^{\lambda}$ and the assumption $\epsilon \leq \mathcal{O}(h)$,
\begin{align*}
\Vert \nabla\cdot\boldsymbol{q}_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}\leq C \epsilon^{-1/2}h^{-1/2}
\Vert \boldsymbol{q}_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}.
\end{align*}
So, if $\chi$ is large enough (independent of $\epsilon,h$), we have
\begin{align*}
&\epsilon^{-1}\Vert \boldsymbol{q}_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}^{2}+ \Vert u_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}^{2}
+\Vert \vert \tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}
(u_{h}^{\lambda} -\lambda)\Vert_{\partial\mathcal{T}_{h}}^{2} \\
\leq & C b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda), ( (P_{0,h} \varphi)\boldsymbol{q}_{h}^{\lambda},
u_{h}^{(P_{0,M}\varphi)\lambda}, (P_{0,M}\varphi)\lambda)).
\end{align*}
(\textbf{V}) By (\ref{local_energy_estimates}), (\ref{reduced_low_bound_ineq3}), (\ref{assump_cond}) and the fact that
the map $\lambda\mapsto\boldsymbol{q}_{h}^{\lambda}$ is linear, we have
\begin{align*}
&\epsilon^{-1}\Vert \boldsymbol{q}_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}^{2}+ \Vert u_{h}^{\lambda}\Vert_{\mathcal{T}_{h}}^{2}
+\Vert \vert \tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}
(u_{h}^{\lambda} -\lambda)\Vert_{\partial\mathcal{T}_{h}}^{2} \\
\leq & C b_{h}((\boldsymbol{q}_{h}^{\lambda},u_{h}^{\lambda},\lambda), ( \boldsymbol{q}_{h}^{(P_{0,M}\varphi)\lambda},
u_{h}^{(P_{0,M}\varphi)\lambda}, (P_{0,M}\varphi)\lambda))
\end{align*}
if $h$ is small enough (independent of $\epsilon$).
This completes the proof.
\end{proof}
Now, we are ready to prove Theorem~\ref{Thm_conditioning}.
By the definition of $\tilde{a}_{h}$ in (\ref{reduced_matrix2}),
\begin{align*}
& \tilde{a}_{h}(\tilde{\lambda},\tilde{\mu}) = a_{h}(\Lambda_{\epsilon}^{-1}\tilde{\lambda},\Lambda_{\epsilon}^{-1}\tilde{\mu}) \\
= & b_{h}((\boldsymbol{q}_{h}^{\Lambda_{\epsilon}^{-1}\tilde{\lambda}}, u_{h}^{\Lambda_{\epsilon}^{-1}\tilde{\lambda}},
\Lambda_{\epsilon}^{-1}\tilde{\lambda}),
(\boldsymbol{q}_{h}^{\Lambda_{\epsilon}^{-1}\tilde{\mu}}, u_{h}^{\Lambda_{\epsilon}^{-1}\tilde{\mu}},\Lambda_{\epsilon}^{-1}\tilde{\mu})),
\end{align*}
for all $\tilde{\lambda},\tilde{\mu}\in M_{h}(0)$.
We recall from (\ref{change_variables}) that $\Lambda_{\epsilon}|_{F} = \left(\sup_{x\in F}\vert \boldsymbol{\beta}\cdot\boldsymbol{n}(x)\vert
+\min\bigl(\tfrac{\epsilon}{h_{F}},1\bigr) \right)^{1/2}$ for all $F\in\mathcal{E}_{h}$.
By assumption (\ref{assump_cond})
and Lemma~\ref{low_bound_reduced_system}, we have that for any $\tilde{\lambda}\in M_{h}(0)$,
\begin{align}
\label{low_bound1}
& \tilde{a}_{h}(\tilde{\lambda},(P_{0,M}\varphi)\tilde{\lambda})\\
\nonumber
\geq & C\left(\Vert u_{h}^{\Lambda_{\epsilon}^{-1}\tilde{\lambda}}\Vert_{\mathcal{T}_{h}}^{2}+
\Vert \vert \tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}
(u_{h}^{\Lambda_{\epsilon}^{-1}\tilde{\lambda}} -\Lambda_{\epsilon}^{-1}\tilde{\lambda})\Vert_{\partial\mathcal{T}_{h}}^{2}\right)
\\
\nonumber
\geq & C h \Vert \vert \tau -\frac{1}{2}\boldsymbol{\beta}\cdot\boldsymbol{n}\vert^{1/2}
\Lambda_{\epsilon}^{-1}\tilde{\lambda}\Vert_{\partial\mathcal{T}_{h}}^{2}\quad\quad\text{ (by trace inequality and triangle inequality)}\\
\nonumber
\geq & C \Vert \tilde{\lambda}\Vert_{h}^{2}\geq C \Vert \tilde{\lambda}\Vert_{h}\cdot \Vert (P_{0,M}\varphi)\tilde{\lambda}\Vert_{h},
\end{align}
where
\begin{equation*}
\Vert \tilde{\lambda}\Vert_{h} = h^{1/2}\Vert \tilde{\lambda}\Vert_{\mathcal{E}_{h}},\quad\forall \tilde{\lambda}\in L^{2}(\mathcal{E}_{h}).
\end{equation*}
According to (\ref{local_energy_estimates}) and the definition of $\Lambda_{\epsilon}$ in (\ref{change_variables}), we have
\begin{equation}
\label{upp_bound1}
\tilde{a}_{h}(\tilde{\lambda},\tilde{\mu})\leq Ch^{-2}\Vert \tilde{\lambda}\Vert_{h}\cdot \Vert \tilde{\mu}\Vert_{h},
\quad \forall \tilde{\lambda}, \tilde{\mu}\in M_{h}(0).
\end{equation}
Using (\ref{low_bound1}) and (\ref{upp_bound1}), we can conclude the proof of Theorem~\ref{Thm_conditioning}.
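To indicate how these two estimates combine, let $\widetilde{A}_{h}$ denote the matrix representing $\tilde{a}_{h}$ in a basis of $M_{h}(0)$ that is orthonormal with respect to $\Vert\cdot\Vert_{h}$ (this notation and reformulation are ours, and the following is only a sketch, under the assumption that Theorem~\ref{Thm_conditioning} bounds the spectral condition number of such a matrix). Then
\begin{align*}
\sigma_{\min}(\widetilde{A}_{h})
= \inf_{\tilde{\lambda}\in M_{h}(0)}\,\sup_{\tilde{\mu}\in M_{h}(0)}
\frac{\tilde{a}_{h}(\tilde{\lambda},\tilde{\mu})}{\Vert\tilde{\lambda}\Vert_{h}\,\Vert\tilde{\mu}\Vert_{h}}
\geq C
\quad\text{by (\ref{low_bound1})},
\qquad
\sigma_{\max}(\widetilde{A}_{h})\leq C h^{-2}
\quad\text{by (\ref{upp_bound1})},
\end{align*}
so that the spectral condition number satisfies
$\kappa_{2}(\widetilde{A}_{h}) = \sigma_{\max}(\widetilde{A}_{h})/\sigma_{\min}(\widetilde{A}_{h}) \leq C h^{-2}$.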
\section{Generating special meshes}
\label{section_appb}
As in \cite{CockburnDongGuzmanQian2010}, we do not intend to provide a detailed description of how to generate
meshes satisfying assumption (\ref{mesh_assumps}). We only sketch the main idea for generating the triangulation,
which is similar to that in \cite{CockburnDongGuzmanQian2010}.
\begin{itemize}
\item[(i)] Given a positive value $h$, we partition the outflow boundary $\Gamma^{+}=
\{x\in\partial\Omega:\boldsymbol{\beta}\cdot\boldsymbol{n}(x)>0\}$ into segments of size no larger than $h$.
\item[(ii)] For each node $x_{0}$ on $\Gamma^{+}$, we apply the forward Euler time-marching method to the problem
\begin{equation*}
\frac{d}{dt}x(t) = -\boldsymbol{\beta}(x(t))\quad t>0, x(0)=x_{0},
\end{equation*}
to obtain a set of nodes $\{x_i\}^{N(x_{0})}_{i=1}$ such that the distance between $x_i$ and
$x_{i-1}$ is of order $h$ and $x_{N(x_0)}$ lies on $\partial\Omega\setminus \Gamma^{+}$ (a minimal code sketch of this step is given after the list).
\item[(iii)] We add the vertices of $\partial\Omega\setminus \Gamma^{+}$ to the set of nodes. Then we
generate a triangulation.
\item[(iv)] We numerically check assumption (\ref{mesh_assumps}) and modify the simplexes which
violate the assumption by using an algorithm similar to that in \cite{Iliescu99}.
\end{itemize}
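The following minimal sketch (our own illustration, not the implementation used for the numerical experiments) shows one way to realise the node generation in step (ii); the velocity field \texttt{beta}, the indicator \texttt{inside\_domain} and the time step \texttt{dt} are hypothetical user-supplied quantities.
\begin{verbatim}
# A minimal sketch of step (ii): starting from a node x0 on the outflow
# boundary, march backwards along the velocity field with forward Euler
# and record nodes roughly h apart.  beta(x), inside_domain(x) and dt are
# assumed to be supplied by the user.
import numpy as np

def trace_nodes(x0, beta, inside_domain, h, dt):
    nodes = []
    x = np.asarray(x0, dtype=float)
    last = x.copy()
    while inside_domain(x):
        x = x - dt * np.asarray(beta(x), dtype=float)  # Euler step for dx/dt = -beta(x)
        if np.linalg.norm(x - last) >= h:              # keep nodes spaced ~h apart
            nodes.append(x.copy())
            last = x.copy()
    nodes.append(x.copy())  # last computed point, approximately on the boundary
    return nodes
\end{verbatim}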
\end{document}
\begin{document}
\title{On a Problem of Hajdu and Tengely}
\author{Samir Siksek}
\address{Institute of Mathematics,
University of Warwick,
Coventry CV4 7AL, United Kingdom}
\email{[email protected]}
\author{Michael Stoll}
\address{Mathematisches Institut,
Universit\"at Bayreuth,
95440 Bayreuth, Germany.}
\email{[email protected]}
\keywords{}
\subjclass[2000]{Primary 11D41, Secondary 11G30, 14G05, 14G25}
\date{29 May, 2010}
\begin{abstract}
We prove a result that finishes the study of primitive arithmetic
progressions consisting of squares and fifth powers that was
carried out by Hajdu and Tengely in a recent paper:
The only arithmetic progression in coprime integers of the form
$(a^2, b^2, c^2, d^5)$ is $(1, 1, 1, 1)$.
For the proof, we first reduce the problem to that of determining
the sets of rational points on three specific hyperelliptic curves
of genus~4. A 2-cover descent computation shows that there are no
rational points on two of these curves. We find generators for a
subgroup of finite index of the Mordell-Weil group of the last curve.
Applying Chabauty's method, we prove that
the only rational points on this curve are the obvious ones.
\end{abstract}
\maketitle
\section{Introduction}
Euler (\cite[pages 440 and 635]{Dickson})
proved Fermat's claim that four distinct squares cannot
form an arithmetic progression. Powers in arithmetic progressions are
still a subject of current interest.
For example, Darmon and Merel \cite{DM} proved that the only solutions
in coprime integers to the Diophantine equation $x^n+y^n=2z^n$
with $n \geq 3$ satisfy $xyz=0$ or $\pm 1$. This shows that
there are no non-trivial three term arithmetic progressions
consisting of $n$-th powers with $n \geq 3$.
The result of Darmon and Merel is far from elementary; it needs
all the tools used in Wiles' proof of Fermat's Last Theorem
and more.
An arithmetic
progression $(x_1,x_2,\ldots,x_k)$ of integers is said to be {\em primitive}
if the terms are coprime, i.e., if $\gcd(x_1,x_2)=1$. Let $S$ be a finite subset of integers
$\geq 2$. Hajdu \cite{Hajdu} showed that if
\begin{equation}\label{eqn:ap}
(a_1^{\ell_1},\ldots, a_k^{\ell_k})
\end{equation}
is a non-constant primitive arithmetic progression with $\ell_i \in S$,
then $k$ is bounded by some (inexplicit) constant $C(S)$.
Bruin, Gy\H{o}ry, Hajdu and Tengely \cite{BGHT} showed
that for any $k \geq 4$ and any $S$, there are only finitely
many primitive arithmetic progressions of the form \eqref{eqn:ap},
with $\ell_i \in S$. Moreover, for $S=\{2,3\}$ and $k \geq 4$,
they showed that $a_i = \pm 1$ for $i=1,\ldots,k$.
A recent paper of Hajdu and Tengely \cite{HT} studies primitive
arithmetic progressions \eqref{eqn:ap} with exponents
belonging to $S=\{2,n\}$
and $\{3,n\}$. In particular, they show that any primitive non-constant
arithmetic progression \eqref{eqn:ap} with exponents $\ell_i \in \{2,5\}$
has $k \leq 4$. Moreover, for $k=4$ they show that
\begin{equation}\label{eqn:l}
(\ell_1,\ell_2,\ell_3,\ell_4) =(2,2,2,5) \quad \text{or}
\quad (5,2,2,2).
\end{equation}
Note that if $(a_i^{\ell_i} : i=1,\ldots,k)$ is
an arithmetic progression, then so is the reverse
progression $(a_i^{\ell_i}: i=k,k-1,\ldots,1)$.
Thus there is really only one case left open by Hajdu and Tengely,
with exponents $(\ell_1,\ell_2,\ell_3,\ell_4) =(2,2,2,5)$.
This is also mentioned as Problem~11
in a list of 22 open problems recently compiled by
Evertse and Tijdeman~\cite{LeidenProblem}.
In this paper we deal with this case.
\begin{Theorem} \label{Thm}
The only arithmetic progression in coprime integers of the form
\[ (a^2, b^2, c^2, d^5) \]
is $(1, 1, 1, 1)$.
\end{Theorem}
This together with the above-mentioned results of Hajdu
and Tengely completes
the proof of the following theorem.
\begin{Theorem}
There are no non-constant primitive arithmetic progressions
of the form \eqref{eqn:ap}
with $\ell_i \in \{2,5\}$ and $k \geq 4$.
\end{Theorem}
The primitivity condition is crucial, since otherwise solutions
abound. Let for example $(a^2, b^2, c^2, d)$ be any arithmetic
progression whose first three terms are squares --- there are infinitely
many of these; one can take
$a = r^2 - 2rs - s^2$, $b = r^2 + s^2$, $c = r^2 + 2rs - s^2 $ ---
then $\bigl((ad^2)^2, (bd^2)^2, (cd^2)^2, d^5\bigr)$ is an arithmetic
progression
whose first three terms are squares and whose last term is a fifth power.
For the proof of Thm.~\ref{Thm},
we first reduce the problem to that of determining
the sets of rational points on three specific hyperelliptic curves
of genus~4. A $2$-cover descent computation
(following Bruin and Stoll \cite{BSTwoCoverDesc})
shows that there are no
rational points on two of these curves. We find generators for a
subgroup of finite index of the Mordell-Weil group of the last curve.
Applying Chabauty's method, we prove that
the only rational points on this curve are the obvious ones.
All our computations are performed using the computer package
{\sf MAGMA}~\cite{MAGMA}.
The result we prove here may perhaps not be of compelling interest
in itself. Rather, the purpose of this paper is to demonstrate
how we can solve problems of this kind with the available machinery.
We review the relevant part of this machinery in Sect.~\ref{S:Back},
after we have constructed the curves pertaining to our problem in Sect.~\ref{Curves}.
Then, in Sect.~\ref{S:Points}, we apply the machinery to these curves.
The proofs are mostly computational. We have tried to make it clear
what steps need to be done, and to give enough information to make it
possible to reproduce the computations (which have been performed
independently by both authors as a consistency check).
\section{Construction of the Curves} \label{Curves}
Let $(a^2, b^2, c^2, d^5)$ be an arithmetic progression in coprime integers.
Since a square is $\equiv 0$ or $1 \bmod 4$, it follows that all terms are
$\equiv 1 \bmod 4$, in particular, $a$, $b$, $c$ and $d$ are all odd.
Considering the last three terms, we have the relation
\[ (-d)^5 = b^2 - 2 c^2 = (b + c \sqrt{2}) (b - c \sqrt{2}) \,. \]
Since $b$ and~$c$ are odd and coprime,
the two factors on the right are coprime in
$R = {\mathbb Z}[\sqrt{2}]$. Since $R^\times/(R^\times)^5$ is generated
by $1 + \sqrt{2}$, it follows that
\begin{equation} \label{rel1}
b + c \sqrt{2} = (1 + \sqrt{2})^j (u + v \sqrt{2})^5
= g_j(u,v) + h_j(u,v) \sqrt{2}
\end{equation}
with $-2 \le j \le 2$ and $u, v \in {\mathbb Z}$ coprime (with $u$ odd and
$v \equiv j+1 \bmod 2$).
The polynomials $g_j$ and $h_j$ are homogeneous of degree~5 and have
coefficients in~${\mathbb Z}$.
Now the first three terms of the progression give the relation
\[ a^2 = 2 b^2 - c^2 = 2 g_j(u,v)^2 - h_j(u,v)^2 \,. \]
Writing $y = a/v^5$ and $x = u/v$, this gives the equation of a hyperelliptic
curve of genus~4,
\[ C_j : y^2 = f_j(x) \]
where $f_j(x) = 2 g_j(x,1)^2 - h_j(x,1)^2$. Every arithmetic progression of
the required form therefore induces a rational point on one of the curves~$C_j$.
We observe that taking conjugates in~\eqref{rel1} leads to
\[ (-1)^j b + (-1)^{j+1} c\sqrt{2}
= (1 + \sqrt{2})^{-j} (u + (-v) \sqrt{2})^5 \,,
\]
which implies that $f_{-j}(x) = f_j(-x)$ and therefore that $C_{-j}$ and~$C_j$
are isomorphic and their rational points correspond to the same arithmetic
progressions. We can therefore restrict attention to $C_0$, $C_1$ and $C_2$.
Their equations are as follows.
\begin{align*}
C_0 : y^2 &= f_0(x) = 2 x^{10} + 55 x^8 + 680 x^6 + 1160 x^4 + 640 x^2 - 16 \\
C_1 : y^2 &= f_1(x) = x^{10} + 30 x^9 + 215 x^8 + 720 x^7 + 1840 x^6 + 3024 x^5 \\
& \qquad\qquad\qquad + 3880 x^4 + 2880 x^3 + 1520 x^2 + 480 x + 112 \\
C_2 : y^2 &= f_2(x)
= 14 x^{10} + 180 x^9 + 1135 x^8 + 4320 x^7 + 10760 x^6 + 18144 x^5 \\
& \qquad\qquad\qquad + 21320 x^4 + 17280 x^3 + 9280 x^2 + 2880 x + 368
\end{align*}
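As a quick sanity check, these polynomials can be reproduced with a few lines of computer algebra. The following sketch (our own illustration in SymPy; the computations in this paper are done with {\sf MAGMA}) expands $(1+\sqrt{2})^j(x+\sqrt{2})^5 = g_j(x,1) + h_j(x,1)\sqrt{2}$ and prints $f_j(x) = 2g_j(x,1)^2 - h_j(x,1)^2$.
\begin{verbatim}
# Illustration only (SymPy): reconstruct f_j(x) = 2 g_j(x,1)^2 - h_j(x,1)^2.
from sympy import sqrt, symbols, expand

x = symbols('x')

def f(j):
    w = expand((1 + sqrt(2))**j * (x + sqrt(2))**5)
    g = w.subs(sqrt(2), 0)          # part of w free of sqrt(2), i.e. g_j(x,1)
    h = expand((w - g) / sqrt(2))   # coefficient of sqrt(2), i.e. h_j(x,1)
    return expand(2*g**2 - h**2)

for j in (0, 1, 2):
    print(j, f(j))   # reproduces the equations of C_0, C_1 and C_2 above
\end{verbatim}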
The trivial solution $a = b = c = d = 1$ corresponds to $j = 1$, $(u,v) = (1,0)$
in the above and therefore gives rise to the point $\infty_+$ on~$C_1$
(this is the point at infinity where $y/x^5$ takes the value~$+1$). Changing
the signs of $a$, $b$ or $c$ leads to $\infty_- \in C_1({\mathbb Q})$ (the point where
$y/x^5 = -1$) or to the two points at infinity on the isomorphic curve~$C_{-1}$.
\section{Background on Rational Points on Hyperelliptic Curves}
\label{S:Back}
Our task will be to determine the set of rational points on each of the
curves $C_0$, $C_1$ and~$C_2$ constructed in the previous section. In this
section, we will give an overview of the methods we will use, and in the
next section, we will apply these methods to the given curves.
We will restrict attention to {\em hyperelliptic} curves, i.e., curves
given by an affine equation of the form
\[ C : y^2 = f(x) \]
where $f$ is a squarefree polynomial with integral coefficients. The smooth
projective curve birational to this affine curve has either one or two
additional points `at infinity'. If the degree of~$f$ is odd, there is one point
at infinity, which is always a rational point. Otherwise there are
two points at infinity corresponding to the two square roots of the leading
coefficient of~$f$. In particular, these two points are rational if and only
if the leading coefficient is a square. For example, $C_1$ above has two
rational points at infinity, whereas the points at infinity on $C_0$ and~$C_2$
are not rational. We will use $C$ in the following to denote the smooth
projective model; $C({\mathbb Q})$ denotes as usual the set of rational points
including those at infinity.
\subsection{Two-Cover Descent}
\label{SS:Twocov}
It will turn out that $C_0$ and~$C_2$ do not have rational points. One way
of showing that $C({\mathbb Q})$ is empty is to verify that $C({\mathbb R})$ is empty or that
$C({\mathbb Q}_p)$ is empty for some prime~$p$. This does not work for $C_0$ or~$C_2$;
both curves have real points and $p$-adic points for all~$p$. (This can be
checked by a finite computation.) So we need a more sophisticated way of showing
that there are no rational points. One such method is known as {\em 2-cover descent}.
We sketch the method here; for a detailed description, see~\cite{BSTwoCoverDesc}.
An important ingredient of this and other methods is the algebra
\[ L := {\mathbb Q}[T] = \frac{{\mathbb Q}[x]}{{\mathbb Q}[x] \cdot f(x)} \,, \]
where $T$ denotes the image of~$x$. If $f$ is irreducible (as in our examples),
then $L$ is the number field generated by a root of~$f$. In general, $L$ will
be a product of number fields corresponding to the irreducible factors of~$f$.
We now assume that $f$ has even degree $2g+2$, where $g$ is the genus of the
curve. This is the generic case; the odd degree case is somewhat simpler.
We can then set up a map, called the {\em descent map} or {\em $x-T$ map}:
\[ x-T : C({\mathbb Q}) \longrightarrow H := \frac{L^\times}{{\mathbb Q}^\times (L^\times)^2} \,. \]
Here $L^\times$ denotes the multiplicative group of~$L$, and $(L^\times)^2$
denotes the subgroup of squares. On points $P \in C({\mathbb Q})$ that are neither
at infinity nor Weierstrass points (i.e., points with vanishing $y$ coordinate),
the map is defined as
\[ (x-T)(P) = x(P) - T \bmod {\mathbb Q}^\times (L^\times)^2 \,. \]
Rational points at infinity map to the trivial element, and if there are
rational Weierstrass points, their images can be determined using the fact
that the norm of $x(P) - T$ is $y(P)^2$ divided by the leading coefficient
of~$f$. If we can show that $x-T$ has empty image on~$C({\mathbb Q})$, then it
follows that $C({\mathbb Q})$ is empty.
We obtain information about the image by again considering $C({\mathbb R})$ and~$C({\mathbb Q}_p)$.
We can carry out the same construction over ${\mathbb R}$ and over~${\mathbb Q}_p$, leading
to an algebra $L_v$ ($v = p$, or $v = \infty$ when working over~${\mathbb R}$),
a group~$H_v$ and a map
\[ (x-T)_v : C({\mathbb Q}_v) \longrightarrow H_v \qquad \text{(where ${\mathbb Q}_\infty = {\mathbb R}$).} \]
We have inclusions $C({\mathbb Q}) \hookrightarrow C({\mathbb Q}_v)$ and canonical homomorphisms
$H \to H_v$. Everything fits together in a commutative diagram
\[ \xymatrix{ C({\mathbb Q}) \ar[rr]^{x-T} \ar[d] & & H \ar[d] \\
\prod_v C({\mathbb Q}_v) \ar[rr]^{\prod_v (x-T)_v} & & \prod_v H_v
}
\]
where $v$ runs through the primes and~$\infty$. If we can show that the
images of the lower horizontal map and of the right vertical map do not
meet, then the image of $x-T$ and therefore also $C({\mathbb Q})$ must be empty.
We can verify this by considering a finite subset of `places'~$v$.
In general, we obtain a finite subset of~$H$ that contains the image
of~$x-T$; this finite subset is known as the {\em fake 2-Selmer set}
of~$C/{\mathbb Q}$. It classifies either pairs of (isomorphism classes of)
2-covering curves of~$C$ that have points {\em everywhere locally},
i.e., over~${\mathbb R}$ and over all~${\mathbb Q}_p$,
or else it classifies such 2-covering curves, in which case it is
the (true) 2-Selmer set. Whether it classifies
pairs or individual 2-coverings depends on a certain
condition on the polynomial~$f$. This condition is satisfied if either
$f$ has an irreducible factor of odd degree, or if $\deg f \equiv 2 \bmod 4$
and $f$ factors over a quadratic extension ${\mathbb Q}(\sqrt{d})$ as a constant
times the product of two conjugate polynomials. A {\em 2-covering} of~$C$ is
a morphism $\pi : D \to C$ that is unramified and becomes Galois
over a suitable field extension of finite degree,
with Galois group $({\mathbb Z}/2{\mathbb Z})^{2g}$.
It is known that every rational point on~$C$ lifts to a rational point
on some 2-covering of~$C$.
The actual computation splits into a global and a local part. The global
computation uses the ideal class group and the unit group of~$L$ (or the
constituent number fields of~$L$) to construct a finite subgroup of~$H$
containing the image of~$x-T$. The local computation determines the
image of $(x-T)_v$ for finitely many places~$v$.
\subsection{The Jacobian}
\label{SS:Jac}
Most other methods make use of another object associated to the curve~$C$:
its {\em Jacobian variety} (or just {\em Jacobian}). This is an abelian variety~$J$
(a higher-dimensional analogue of an elliptic curve) of dimension~$g$, the
genus of~$C$. It reflects a large part of the geometry and arithmetic of~$C$;
its main advantage is that its points form an abelian group, whereas the
set of points on~$C$ does not carry a natural algebraic structure.
For our purposes, we can more or less forget the structure of~$J$ as a
projective variety. Instead we use the description of the points on~$J$
as the elements of the degree zero part of the {\em Picard group} of~$C$.
The Picard group is constructed as a quotient of the group of divisors on~$C$.
A {\em divisor} on~$C$ is an element of the free abelian group $\operatorname{Div}_C$ on the
set~$C(\bar{\mathbb Q})$ of all algebraic points on~$C$. The absolute Galois group
of~${\mathbb Q}$ acts on~$\operatorname{Div}_C$; a divisor that is fixed by this action is {\em rational}.
This does not mean that the points occurring in the divisor must be rational;
points with the same multiplicity can be permuted. A nonzero rational function~$h$
on~$C$ with coefficients in~$\bar{\mathbb Q}$ has an associated divisor $\operatorname{div}(h)$ that records
its zeros and poles (with multiplicities). If $h$ has coefficients in~${\mathbb Q}$,
then $\operatorname{div}(h)$ is rational. The homomorphism $\deg : \operatorname{Div}_C \to {\mathbb Z}$
induced by sending each point in~$C(\bar{\mathbb Q})$ to~$1$ gives the {\em degree}
of a divisor. Divisors of functions have degree zero.
Two divisors $D, D' \in \operatorname{Div}_C$ are {\em linearly equivalent} if their
difference is the divisor of a function. The equivalence classes are the
elements of the {\em Picard group} $\operatorname{Pic}_C$ defined by the following exact
sequence.
\[ 0 \longrightarrow \bar{\mathbb Q}^\times \longrightarrow \bar{\mathbb Q}(C)^\times \stackrel{\operatorname{div}}{\longrightarrow} \operatorname{Div}_C
\longrightarrow \operatorname{Pic}_C \longrightarrow 0
\]
Since divisors of functions have degree zero, the degree homomorphism
descends to~$\operatorname{Pic}_C$. We denote its kernel by $\operatorname{Pic}^0_C$. It is a fact
that $J(\bar{\mathbb Q})$ is isomorphic as a group to~$\operatorname{Pic}^0_C$. The rational
points $J({\mathbb Q})$ correspond to the elements of~$\operatorname{Pic}^0_C$ left invariant
by the Galois group. In general it is not true that a point in~$J({\mathbb Q})$
can be represented by a rational divisor, but this is the case when
$C$ has a rational point, or at least points everywhere locally.
The most important fact about the group $J({\mathbb Q})$ is the statement of
the {\em Mordell-Weil Theorem:} $J({\mathbb Q})$ is a {\em finitely generated}
abelian group. For this reason, $J({\mathbb Q})$ is often called the
{\em Mordell-Weil group} of $J$ or of~$C$.
If $P_0 \in C({\mathbb Q})$, then the map $C \ni P \mapsto [P - P_0] \in J$ is
a ${\mathbb Q}$-defined embedding of $C$ into~$J$. We use $[D]$ to denote the
linear equivalence class of the divisor~$D$. The basic idea of the
methods described below is to try to recognise the points of~$C$ embedded
in this way among the rational points on~$J$.
We need a way of representing elements of~$J({\mathbb Q})$. Let $P \mapsto P^-$
denote the {\em hyperelliptic involution} on~$C$; this is the morphism
$C \to C$ that changes the sign of the $y$~coordinate. Then it is easy
to see that the divisors $P + P^-$ all belong to the same class
$W \in \operatorname{Pic}_C$. An effective divisor~$D$ (a divisor such that no point occurs
with negative multiplicity) is {\em in general position} if there is
no point~$P$ such that $D - P - P^-$ is still effective. Divisors in
general position not containing points at infinity can be represented
in a convenient way by pairs of polynomials $(a(x), b(x))$. This pair
represents the divisor~$D$ such that its image on the projective line
(under the $x$-coordinate map) is given by the roots of~$a$; the corresponding
points on~$C$ are determined by the relation $y = b(x)$. The polynomials
have to satisfy the relation $f(x) \equiv b(x)^2 \bmod a(x)$. This
is the {\em Mumford representation} of~$D$. The polynomials $a$ and~$b$
can be chosen to have rational coefficients if and only if $D$ is rational.
(The representation can be adapted to allow for points at infinity occurring
in the divisor.)
If the genus $g$ is even, then it is a fact that every point in~$J({\mathbb Q})$
has a unique representation of the form $[D] - nW$ where $D$ is a rational
divisor in general position of degree~$2n$ and $n \ge 0$ is minimal.
The Mumford representation of~$D$ is then also called the Mumford representation
of the corresponding point on~$J$. It is fairly easy to add points
on~$J$ using the Mumford representation, see~\cite{Cantor}. This
addition procedure is implemented in~{\sf MAGMA}, for example.
There is a relation between 2-coverings of~$C$ and the Jacobian~$J$.
Assume $C$ is embedded in~$J$ as above. Then if $D$ is any 2-covering
of~$C$ that has a rational point~$P$, $D$ can be realised as the preimage
of~$C$ under a map of the form $Q \mapsto 2Q + Q_0$ on~$J$, where
$Q_0$ is the image of~$P$ on~$C \subset J$. A consequence of this is
that two rational points $P_1, P_2 \in C({\mathbb Q})$ lift
to the same 2-covering if and only if $[P_1 - P_2] \in 2 J({\mathbb Q})$.
\subsection{The Mordell-Weil Group}
\label{SS:MW}
We will need to know generators of a finite-index subgroup of the
Mordell-Weil group~$J({\mathbb Q})$. Since $J({\mathbb Q})$ is a finitely generated abelian
group, it will be a direct sum of a finite torsion part and a free abelian
group of rank~$r$; $r$ is called the {\em rank} of~$J({\mathbb Q})$. So what we need
is a set of $r$ independent points in~$J({\mathbb Q})$.
The torsion subgroup of~$J({\mathbb Q})$ is usually easy to determine. The main
tool used here is the fact that the torsion subgroup injects into~$J({\mathbb F}_p)$
when $p$ is an odd prime not dividing the discriminant of~$f$. If the orders
of the finite groups~$J({\mathbb F}_p)$ are coprime for suitable primes~$p$, then
this shows that $J({\mathbb Q})$ is torsion-free.
We can find points in~$J({\mathbb Q})$ by search. This can be done by searching
for rational points on the variety parameterising Mumford representations
of divisors of degree 2, 4, \dots. We can then check if the points found
are independent by again mapping into~$J({\mathbb F}_p)$ for one or several primes~$p$.
The hard part is to know when we have found enough points. For this we need
an upper bound on the rank~$r$. This can be provided by a {\em 2-descent}
on the Jacobian~$J$. This is described in detail in~\cite{Stoll2Descent}.
The idea is similar to the 2-cover descent on~$C$ described above
in Sect.~\ref{SS:Twocov}. Essentially we extend the $x-T$ map from points
to divisors. It can be shown that the value of $(x-T)(D)$ only depends
on the linear equivalence class of~$D$.
This gives us a homomorphism from~$J({\mathbb Q})$ into~$H$, or more
precisely, into the kernel of the norm map
$N_{L/{\mathbb Q}} : H \to {\mathbb Q}^\times/({\mathbb Q}^\times)^2$. It can be shown that the
kernel of this $x-T$ map on~$J({\mathbb Q})$ is either $2 J({\mathbb Q})$, or it contains
$2J({\mathbb Q})$ as a subgroup of index~2. The former is the case when $f$ satisfies
the same condition as that mentioned in Sect.~\ref{SS:Twocov}.
We can then bound $(x-T)(J({\mathbb Q}))$ in much the same way as we did when
doing a 2-cover descent on~$C$. The global part of the computation is
identical. The local part is helped by the fact that we now have a
group homomorphism (or a homomorphism of ${\mathbb F}_2$-vector spaces), so we
can use linear algebra. We obtain a bound for the order of $J({\mathbb Q})/2J({\mathbb Q})$,
from which we can deduce a bound for the rank~$r$. If we are lucky and
found that same number of independent points in~$J({\mathbb Q})$, then we know
that these points generate a subgroup of finite index.
The group containing $(x-T)(J({\mathbb Q}))$ we compute is known as the
{\em fake 2-Selmer group} of~$J$~\cite{PS}. If the polynomial~$f$ satisfies the
relevant condition, then this fake Selmer group is isomorphic to the
true 2-Selmer group of~$J$ (that classifies 2-coverings of~$J$ that
have points everywhere locally).
\subsection{The Chabauty-Coleman Method}
\label{SS:Chab}
If the rank~$r$ is less than the genus~$g$, there is a method available
that allows us to get tight bounds on the number of rational points on~$C$.
This goes back to Chabauty~\cite{Chabauty}, who used it to prove Mordell's
Conjecture in this case. Coleman~\cite{Coleman} refined the method. We
give a sketch here; more details can be found for example in~\cite{StollChabauty}.
Let $p$ be a prime of good reduction for~$C$ (this is the case when $p$
is odd and does not divide the discriminant of~$f$). We use $\Omega_C^1({\mathbb Q}_p)$
and~$\Omega_J^1({\mathbb Q}_p)$ to denote the spaces of regular 1-forms on~$C$ and~$J$
that are defined over~${\mathbb Q}_p$. If $P_0 \in C({\mathbb Q})$ and
$\iota : C \to J$, $P \mapsto [P-P_0]$ denotes the corresponding embedding
of $C$ into~$J$, then the induced map $\iota^* : \Omega_J^1({\mathbb Q}_p) \to \Omega_C^1({\mathbb Q}_p)$
is an isomorphism that is independent of the choice of basepoint~$P_0$.
Both spaces have dimension~$g$. There is an integration pairing
\[ \Omega_C^1({\mathbb Q}_p) \times J({\mathbb Q}_p) \longrightarrow {\mathbb Q}_p, \quad
(\iota^* \omega, Q) \longmapsto \int_0^Q \omega = \langle \omega, \log Q \rangle \,.
\]
In the last expression, $\log Q$ denotes the $p$-adic logarithm on~$J({\mathbb Q}_p)$
with values in the tangent space of~$J({\mathbb Q}_p)$ at the origin, and $\Omega^1_J({\mathbb Q}_p)$
is identified with the dual of this tangent space. If $r < g$, then there are
(at least) $g-r$ linearly independent differentials $\omega \in \Omega_C^1({\mathbb Q}_p)$
that annihilate the Mordell-Weil group~$J({\mathbb Q})$. Such a differential can
be scaled so that it reduces to a non-zero differential $\bar\omega$ mod~$p$.
Now the important fact is that if $\bar\omega$ does not vanish at a point
$\bar{P} \in C({\mathbb F}_p)$, then there is at most one rational point on~$C({\mathbb Q})$
whose reduction is~$\bar{P}$. (There are more general bounds valid when
$\bar\omega$ does vanish at~$\bar{P}$, but we do not need them here.)
\section{Determining the Rational Points} \label{S:Points}
In this section, we determine the set of rational points on the three curves
$C_0$, $C_1$ and~$C_2$. To do this, we apply the methods described in
Sect.~\ref{S:Back}.
We first consider $C_0$ and~$C_2$. We apply the
2-cover-descent procedure described in Sect.~\ref{SS:Twocov} to the
two curves and find that in each case, there are no 2-coverings that have points
everywhere locally. For $C_0$, only 2-adic information is needed in addition
to the global computation, for $C_2$, we need 2-adic and 7-adic information.
Note that the number fields generated by roots of $f_0$ or~$f_2$ are sufficiently
small in terms of degree and discriminant that the necessary class and unit
group computations can be done unconditionally. This leads to the following.
\begin{Proposition} \label{Prop02}
There are no rational points on the curves $C_0$ and~$C_2$.
\end{Proposition}
\begin{proof}
The 2-cover descent procedure is available in recent releases of {\sf MAGMA}.
The computations leading to the stated result can be performed by issuing
the following {\sf MAGMA} commands.
\begin{verbatim}
> SetVerbose("Selmer",2);
> TwoCoverDescent(HyperellipticCurve(Polynomial(
[-16,0,640,0,1160,0,680,0,55,0,2])));
> TwoCoverDescent(HyperellipticCurve(Polynomial(
[368,2880,9280,17280,21320,18144,10760,4320,1135,180,14])));
\end{verbatim}
We explain how the results can be checked independently. We give details
for~$C_0$ first. The procedure for~$C_2$ is similar, so we only explain
the differences.
The polynomial~$f_0$ is irreducible, and it can be checked that the number
field generated by one of its roots is isomorphic to $L = {\mathbb Q}(\!\sqrt[10]{288})$.
Using {\sf MAGMA} or pari/gp, one checks that this field has trivial
class group. The finite subgroup~$\tilde{H}$ of~$H$ containing the Selmer~set
is then given
as ${\mathcal O}_{L,S}^\times/({\mathbb Z}_{\{2,3,5\}}^\times ({\mathcal O}_{L,S}^\times)^2)$, where
$S$ is the set of primes in~${\mathcal O}_L$ above the `bad primes' 2, 3 and~5.
The set~$S$ contains two primes
above~2, of degrees 1 and~4, respectively, and one prime above 3 and~5
each, of degree~2 in both cases. Since $L$ has two real embeddings and
four pairs of complex embeddings, the unit rank is~5. The rank (or ${\mathbb F}_2$-dimension)
of~$\tilde{H}$ is then $7$. (Note that 2 is a square in~$L$.) The descent map takes
its values in the subset of~$\tilde{H}$ consisting of elements whose norm is
twice a square. This subset is of size~$32$; elements of~${\mathcal O}_L$ representing
it can easily be obtained. Let $\delta$ be such a representative. We let
$T$ be a root of~$f_0$ in~$L$ and check that the system of equations
\[ y^2 = f_0(x), \quad x - T = \delta c z^2 \]
has no solutions with $x,y,c \in {\mathbb Q}_2$, $z \in L \otimes_{{\mathbb Q}} {\mathbb Q}_2$.
The second equation leads, after expanding $\delta z^2$ as a ${\mathbb Q}$-linear
combination of $1, T, T^2, \dots, T^9$, to eight homogeneous quadratic
equations in the ten unknown coefficients of~$z$. Any solution to these
equations gives a unique~$x$, for which~$f_0(x)$ is a square. The latter
follows by taking norms on both sides of $x-T = \delta c z^2$. So we
only have to check the intersection of eight quadrics in~${\mathbb P}^9$ for
existence of ${\mathbb Q}_2$-points. Alternatively, we evaluate the descent map
on~$C_0({\mathbb Q}_2)$, to get its image in~$H_2 = L_2^\times/({\mathbb Q}_2^\times (L_2^\times)^2)$,
where $L_2 = L \otimes_{{\mathbb Q}} {\mathbb Q}_2$. Then we check that none of the
representatives~$\delta$ map into this image.
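To illustrate the first approach, the eight quadrics can be written down mechanically. The following sketch (our own illustration in SymPy; the representative $\delta$ is specified by its coefficients in the basis $1, T, \dots, T^9$ and is left as user-supplied input here) produces them; checking the resulting intersection for ${\mathbb Q}_2$-points is a separate local computation that we do not reproduce.
\begin{verbatim}
# Illustration only (SymPy): the eight quadrics in P^9 attached to a
# representative delta, given by its (user-supplied) coefficients
# delta_coeffs in the basis 1, T, ..., T^9.
from sympy import symbols, expand, rem, Rational

T = symbols('T')
z = symbols('z0:10')   # coordinates of z in the basis 1, T, ..., T^9
# monic rescaling of f_0, so reduction mod f_0(T) is plain polynomial division
f0m = T**10 + Rational(55, 2)*T**8 + 340*T**6 + 580*T**4 + 320*T**2 - 8

def quadrics(delta_coeffs):
    delta = sum(d * T**i for i, d in enumerate(delta_coeffs))
    zz = sum(zi * T**i for i, zi in enumerate(z))
    w = rem(expand(delta * zz**2), f0m, T)   # delta*z^2 in the basis 1, ..., T^9
    # x - T = c*delta*z^2 forces the coefficients of T^2, ..., T^9 to vanish:
    return [expand(w.coeff(T, i)) for i in range(2, 10)]
\end{verbatim}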
When dealing with~$C_2$, the field~$L$ is generated by a root of
$x^{10} - 6 x^5 - 9$. Since the leading coefficient of~$f_2$ is~$14$,
we have to add (the primes above)~7 to the bad primes. As before, the
class group is trivial, and we have the same splitting behaviour of
2, 3 and~5. The prime~7 splits into two primes of degree~1 and two
primes of degree~4. The group of $S$-units of~$L$ modulo squares has
now rank~14, the group $\tilde{H}$ has rank~10, and the subset of~$H$ consisting
of elements whose norm is 14~times a square has 128~elements. These elements
now have to be tested for compatibility with the 2-adic and the 7-adic
information, which can be done using either of the two approaches described
above. The 7-adic check is only necessary for one of the elements; the 127~others
are already ruled out by the 2-adic check.
\end{proof}
We cannot hope to deal with~$C_1$ in the same easy manner, since $C_1$ has
two rational points at infinity coming from the trivial solutions. We can still
perform a 2-cover-descent computation, though, and find that there is only
one 2-covering of~$C_1$ with points everywhere locally, which is the covering that
lifts the points at infinity. Only 2-adic information is necessary to show
that the fake 2-Selmer set has at most one element, so we can get this result
using the following {\sf MAGMA} command.
\begin{verbatim}
> TwoCoverDescent(HyperellipticCurve(Polynomial(
[112,480,1520,2880,3880,3024,1840,720,215,30,1]))
: PrimeCutoff := 2);
\end{verbatim}
(In some versions of {\sf MAGMA} this returns a two-element set. However, as can be checked
by pulling back under the map returned as a second value, these two elements
correspond to the images of $1$ and~$-1$ in $L^\times/(L^\times)^2 {\mathbb Q}^\times$
and therefore both represent the trivial element. The error is caused by
{\sf MAGMA} using $1$ instead of~$-1$ as a `generator' of ${\mathbb Q}^\times/({\mathbb Q}^\times)^2$.
This bug is corrected in recent releases.)
The computation can be performed in the same way as for $C_0$ and~$C_2$.
The relevant field~$L$ is generated by a root of $x^{10} - 18 x^5 + 9$; it
has class number~1, and the primes 2, 3 and~5 split in the same way as before.
The subset~$H'$ (in fact a subgroup)
of~$\tilde{H}$ consisting of elements with square norm has size~32.
Of these, only the element represented by~1 is compatible with the 2-adic
constraints.
We remark that, by construction, the
polynomial~$f_1$ factors over~${\mathbb Q}(\sqrt{2})$ into two conjugate factors of
degree~5; indeed, $f_1 = 2 g_1(x,1)^2 - h_1(x,1)^2
= \bigl(\sqrt{2}\,g_1(x,1) + h_1(x,1)\bigr)\bigl(\sqrt{2}\,g_1(x,1) - h_1(x,1)\bigr)$.
This implies that the `fake 2-Selmer set' computed by the 2-cover
descent is the true 2-Selmer set, so that there is really only one 2-covering
that corresponds to the only element of the set computed by the procedure.
We state the result as a lemma. We fix $P_0 = \infty_- \in C_1$ as our basepoint
and write $J_1$ for the Jacobian variety of~$C_1$. Then, as described
in Sect.~\ref{SS:Jac},
\[ \iota : C_1 \longrightarrow J_1\,, \quad P \longmapsto [P - P_0] \]
is an embedding defined over~${\mathbb Q}$.
\begin{Lemma} \label{Lemma2J}
Let $P \in C_1({\mathbb Q})$. Then the divisor class $[P - P_0]$ is in $2 J_1({\mathbb Q})$.
\end{Lemma}
\begin{proof}
Let $D$ be the unique 2-covering of~$C_1$ (up to isomorphism) that has
points everywhere locally. The fact that $D$ is unique follows from the
computation of the 2-Selmer set.
Any rational point $P \in C_1({\mathbb Q})$ lifts to a rational point on some 2-covering
of~$C_1$. In particular, this 2-covering then has a rational point, so it
also satisfies the weaker condition that it has points everywhere locally.
Since $D$ is the only 2-covering of~$C_1$ satisfying this condition,
$P_0$ and~$P$ must both lift to a rational point on~$D$. This implies
by the remark at the end of Sect.~\ref{SS:Jac} that $[P-P_0] \in 2 J_1({\mathbb Q})$.
\end{proof}
To make use of this information, we need to know $J_1({\mathbb Q})$, or at least a
subgroup of finite index. A computer search reveals two points in~$J_1({\mathbb Q})$,
which are given in Mumford representation (see Sect.~\ref{SS:Jac}) as follows.
\begin{align*}
Q_1 &= \bigl(x^4 + 4 x^2 + \tfrac{4}{5},\quad -16 x^3 - \tfrac{96}{5} x\bigr) \\
Q_2 &= \bigl(x^4 + \tfrac{24}{5} x^3 + \tfrac{36}{5} x^2 + \tfrac{48}{5} x
+ \tfrac{36}{5},\quad
-\tfrac{1712}{75} x^3 - \tfrac{976}{25} x^2 - \tfrac{1728}{25} x
- \tfrac{2336}{25}\bigr)
\end{align*}
We note that $2 Q_1 = [\infty_+ - \infty_-]$; this makes Lemma~\ref{Lemma2J}
explicit for the known two points on~$C_1$.
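As a small consistency check (our own illustration, in SymPy rather than the {\sf MAGMA} used for the actual computations), one can verify the defining congruence $f_1(x) \equiv b(x)^2 \bmod a(x)$ of the Mumford representation for $Q_1$ and~$Q_2$.
\begin{verbatim}
# Illustration only (SymPy): check f_1(x) = b(x)^2 mod a(x) for Q_1 and Q_2.
from sympy import symbols, Rational as R, rem, expand

x = symbols('x')
f1 = (x**10 + 30*x**9 + 215*x**8 + 720*x**7 + 1840*x**6 + 3024*x**5
      + 3880*x**4 + 2880*x**3 + 1520*x**2 + 480*x + 112)

Q1 = (x**4 + 4*x**2 + R(4, 5),
      -16*x**3 - R(96, 5)*x)
Q2 = (x**4 + R(24, 5)*x**3 + R(36, 5)*x**2 + R(48, 5)*x + R(36, 5),
      -R(1712, 75)*x**3 - R(976, 25)*x**2 - R(1728, 25)*x - R(2336, 25))

for a, b in (Q1, Q2):
    print(rem(expand(f1 - b**2), a, x))   # must print 0 for a valid Mumford pair
\end{verbatim}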
\begin{Lemma} \label{LemmaGroup}
The Mordell-Weil group $J_1({\mathbb Q})$ is torsion-free, and $Q_1$, $Q_2$ are
linearly independent. In particular, the rank of $J_1({\mathbb Q})$ is at least~2.
\end{Lemma}
\begin{proof}
The only primes of bad reduction for~$C_1$ are 2, 3 and~5. It is known that
the torsion subgroup of $J_1({\mathbb Q})$ injects into $J_1({\mathbb F}_p)$ when $p$ is an
odd prime of good reduction. Since $\#J_1({\mathbb F}_7) = 2400$ and
$\#J_1({\mathbb F}_{41}) = 2633441$
are coprime, there can be no nontrivial torsion in~$J_1({\mathbb Q})$.
We check that the image of $\langle Q_1, Q_2 \rangle$ in~$J_1({\mathbb F}_7)$ is
not cyclic. This shows that $Q_1$ and~$Q_2$ must be independent.
\end{proof}
The next step is to show that the Mordell-Weil rank is indeed~2. For this,
we compute the 2-Selmer group of~$J_1$ as sketched in Sect.~\ref{SS:MW}
and described in detail in~\cite{Stoll2Descent}.
We give some details of the computation, since it is outside the scope of
the functionality that is currently provided by {\sf MAGMA} (or any other
software package).
We first remind ourselves that $f_1$ factors over~${\mathbb Q}(\sqrt{2})$. This implies
that the kernel of the $x-T$ map on~$J({\mathbb Q})$ is~$2J({\mathbb Q})$. Therefore the
`fake 2-Selmer group' that
we compute is in fact the actual 2-Selmer group of~$J_1$.
Since $J_1({\mathbb Q})$ is torsion-free, the order of the 2-Selmer group is
an upper bound for~$2^r$, where $r$ is the rank of~$J_1({\mathbb Q})$.
The global computation is the same as that we needed to do for the
\hbox{2-cover} descent. In particular, the Selmer group is contained in the
group~$H'$ from above, consisting of the $S$-units of~$L$ with
square norm, modulo squares and modulo $\{2,3,5\}$-units of~${\mathbb Q}$.
For the local part of the computation,
we have to compute the image of $J_1({\mathbb Q}_p)$ under the local $x-T$ map
for the primes~$p$ of bad reduction. We check that there is no 2-torsion
in $J_1({\mathbb Q}_3)$ and~$J_1({\mathbb Q}_5)$ ($f_1$ remains irreducible both over~${\mathbb Q}_3$
and over~${\mathbb Q}_5$). This implies that the targets of
the local maps $(x-T)_3$ and $(x-T)_5$ are trivial, which means that
these two primes need not be considered as bad primes for the descent
computation. The real locus $C_1({\mathbb R})$ is connected, which implies that there
is no information coming from the local image at the infinite place.
(Recall that $C_1$ denotes the smooth projective model of the curve.
The real locus of the affine curve $y^2 = f_1(x)$ has two components,
but they are connected to each other through the points at infinity.)
Therefore, we only need to use 2-adic information in the computation.
We set $L_2 = L \otimes_{{\mathbb Q}} {\mathbb Q}_2$ and compute the natural homomorphism
\[ \mu_2 : H' \longrightarrow H_2 = \frac{L_2^\times}{{\mathbb Q}_2^\times (L_2^\times)^2} \,. \]
Let $I_2$ be the image of~$J_1({\mathbb Q}_2)$ in~$H_2$.
Then the 2-Selmer group is $\mu_2^{-1}(I_2)$.
It remains to compute~$I_2$, which is the hardest part of the computation.
The \hbox{2-torsion} subgroup $J_1({\mathbb Q}_2)[2]$ has order~2 ($f_1$ splits into
factors of degrees 2 and~8 over~${\mathbb Q}_2$); this implies that
$J_1({\mathbb Q}_2)/2 J_1({\mathbb Q}_2)$ has dimension~$g + 1 = 5$ as an \hbox{${\mathbb F}_2$-vector} space.
This quotient is generated by the images of $Q_1$ and~$Q_2$ and of three
further points of the form $[D_i] - \tfrac{\deg D_i}{2} W$, where $D_i$
is the sum of points on~$C_1$ whose $x$-coordinates are the roots of
\begin{align*}
D_1 : & \quad \bigl(x - \tfrac{1}{2}\bigr) \bigl(x - \tfrac{1}{4}\bigr)\,, \\
D_2 : & \quad x^2 - 2 x + 6\,, \\
D_3 : & \quad x^4 + 4 x^3 + 12 x^2 + 36\,, \\
\end{align*}
respectively. These points were found by a systematic search, using the
fact that the local map $(x-T)_2$ is injective in our situation. We can
therefore stop the search procedure as soon as we have found points whose
images generate a five-dimensional ${\mathbb F}_2$-vector space.
We thus find $I_2 \subset H_2$ and then can compute the 2-Selmer group.
In our situation, $\mu_2$ is injective, and the intersection of its image
with~$I_2$ is generated by the images of $Q_1$ and~$Q_2$. Therefore,
the ${\mathbb F}_2$-dimension of the 2-Selmer group is~2.
\begin{Lemma} \label{LemmaRank}
The rank of $J_1({\mathbb Q})$ is~2, and $\langle Q_1, Q_2 \rangle \subset J_1({\mathbb Q})$
is a subgroup of finite odd index.
\end{Lemma}
\begin{proof}
The Selmer group computation shows that the rank is $\le 2$, and
Lemma~\ref{LemmaGroup} shows that the rank is $\ge 2$. Regarding the second
statement, it is now clear that we have a subgroup of finite index.
The observation stated just before the lemma shows that the given subgroup
surjects onto the 2-Selmer group under the $x-T$ map. Since the kernel
of the $x-T$ map is~$2 J_1({\mathbb Q})$, this implies that the index is odd.
\end{proof}
Now we want to use the Chabauty-Coleman method sketched in Sect.~\ref{SS:Chab}
to show that $\infty_+$ and~$\infty_-$ are the only rational points on~$C_1$.
To keep the computations reasonably simple, we want to work at $p = 7$, which is
the smallest prime of good reduction.
For $p$ a prime of good reduction, we write $\rho_p$ for the two `reduction mod~$p$'
maps $J_1({\mathbb Q}) \to J_1({\mathbb F}_p)$ and $C_1({\mathbb Q}) \to C_1({\mathbb F}_p)$.
\begin{Lemma} \label{LemmaRed}
Let $P \in C_1({\mathbb Q})$. Then $\rho_7(P) = \rho_7(\infty_+)$ or
$\rho_7(P) = \rho_7(\infty_-)$.
\end{Lemma}
\begin{proof}
Let $G = \langle Q_1, Q_2 \rangle$ be the subgroup of $J_1({\mathbb Q})$ generated
by the two points $Q_1$ and~$Q_2$. We find that $\rho_7(G)$ has index~2
in~$J_1({\mathbb F}_7) \cong {\mathbb Z}/10{\mathbb Z} \oplus {\mathbb Z}/240{\mathbb Z}$.
By Lemma~\ref{LemmaRank}, we know that $(J_1({\mathbb Q}) : G)$ is
odd, so we can deduce that $\rho_7(G) = \rho_7(J_1({\mathbb Q}))$. The group
$J_1({\mathbb F}_7)$ surjects onto $({\mathbb Z}/5{\mathbb Z})^2$. Since $\rho_7(G)$ has index~2
in~$J_1({\mathbb F}_7)$, $\rho_7(G) = \rho_7(J_1({\mathbb Q}))$ also surjects onto~$({\mathbb Z}/5{\mathbb Z})^2$. This
implies that the index of~$G$ in~$J_1({\mathbb Q})$ is not divisible by~5.
We determine the points $P \in C_1({\mathbb F}_7)$ such that
$\iota(P) \in \rho_7(2 J_1({\mathbb Q})) = 2 \rho_7(G)$. We find the set
\[ X_7 = \{\rho_7(\infty_+), \rho_7(\infty_-), (-2, 2), (-2, -2)\}\,. \]
Note that for any $P \in C_1({\mathbb Q})$, we must have $\rho_7(P) \in X_7$
by Lemma~\ref{Lemma2J}.
Now we look at $p = 13$. The image of~$G$
in~$J_1({\mathbb F}_{13}) \cong {\mathbb Z}/10{\mathbb Z} \oplus {\mathbb Z}/2850{\mathbb Z}$ has index~5.
Since we already know that $(J_1({\mathbb Q}) : G)$ is not a multiple of~5, this
implies that $\rho_{13}(G) = \rho_{13}(J_1({\mathbb Q}))$. As above for $p = 7$,
we compute the set $X_{13} \subset C_1({\mathbb F}_{13})$ of points mapping into
$\rho_{13}(2 J_1({\mathbb Q}))$. We find
\[ X_{13} = \{\rho_{13}(\infty_+), \rho_{13}(\infty_-)\} \,. \]
Now suppose that there is $P \in C_1({\mathbb Q})$ with
$\rho_7(P) \in \{(-2,2), (-2,-2)\}$. Then $\iota(P)$
is in one of two specific cosets in
$J_1({\mathbb Q})/\ker \rho_7 \cong G/\ker \rho_7|_G$. On the other hand,
we have $\rho_{13}(P) = \rho_{13}(\infty_\pm)$, so that $\iota(P)$
is in one of two specific cosets in
$J_1({\mathbb Q})/\ker \rho_{13} \cong G/\ker \rho_{13}|_G$.
If we identify $G = \langle Q_1, Q_2 \rangle$ with ${\mathbb Z}^2$, then we can
find the kernels of $\rho_7$ and of~$\rho_{13}$ on~$G$ explicitly, and
we can also determine the relevant cosets explicitly. It can then be checked
that the union of the first two cosets does not meet the union of the
second two cosets. This implies that such a point $P$ cannot exist.
Therefore, the only remaining possibilities are that
$\rho_7(P) = \rho_7(\infty_\pm)$.
\end{proof}
\begin{Remark}
The use of information at $p = 13$ to rule out residue classes at $p = 7$
in the proof above is a very simple instance of a method known as the
{\em Mordell-Weil sieve}. For a detailed description of this method,
see~\cite{BSMWS}.
\end{Remark}
Now we need to find the space of holomorphic 1-forms on~$C_1$, defined over~${\mathbb Q}_7$,
that annihilate the Mordell-Weil group under the integration pairing,
compare Sect.~\ref{SS:Chab}.
We follow the procedure described in~\cite{StollXdyn06}. We first find
two independent points in the intersection of $J_1({\mathbb Q})$ and the kernel
of reduction mod~7. In our case, we take $R_1 = 20 Q_1$ and
$R_2 = 5 Q_1 + 60 Q_2$. We represent these points in the form
$R_j = [D_j - 4 \infty_-]$ with effective divisors $D_1$, $D_2$ of
degree~4. The coefficients of the primitive polynomial in~${\mathbb Z}[x]$ whose
roots are the $x$-coordinates of the points in the support of~$D_1$
have more than~100 digits and those of the corresponding polynomial
for~$D_2$ fill several pages, so we refrain from printing them here.
(This indicates that it is a good idea to work with a small prime!)
The points in the support of $D_1$ and~$D_2$ all reduce
to~$\infty_-$ modulo the prime above~7 in their fields of definition
(which are degree~4 number fields totally ramified at~7). Expressing
a basis of $\Omega^1_{C_1}({\mathbb Q}_7)$ as power series in the uniformiser
$t = 1/x$ at~$P_0 = \infty_-$ times~$dt$, we compute the integrals numerically.
More precisely, the differentials
\[ \eta_0 = \frac{dx}{2y}, \quad \eta_1 = \frac{x\,dx}{2y}, \quad
\eta_2 = \frac{x^2\,dx}{2y} \quad\text{and}\quad \eta_3 = \frac{x^3\,dx}{2y}
\]
form a basis of~$\Omega_{C_1}^1({\mathbb Q}_7)$. We get
\[ \eta_j = t^{3-j} \Bigl(\frac{1}{2} - \frac{15}{2} t + 115 t^2 - 1980 t^3
+ \frac{145385}{4} t^4 - \frac{2764899}{4} t^5 + \dots\Bigr)
\,dt
\]
as power series in the uniformiser. Using these power series up to a precision
of~$t^{20}$, we compute the following 7-adic approximations to the integrals.
\[ \Bigl(\int_0^{R_j} \eta_i\Bigr)_{0 \le i \le 3, 1 \le j \le 2}
= \begin{pmatrix}
-20 \cdot 7 + O(7^4) & -155 \cdot 7 + O(7^4) \\
-150 \cdot 7 + O(7^4) & -13 \cdot 7 + O(7^4) \\
-130 \cdot 7 + O(7^4) & -83 \cdot 7 + O(7^4) \\
-19 \cdot 7 + O(7^4) & 163 \cdot 7 + O(7^4)
\end{pmatrix}
\]
From this, it follows easily that the reductions mod~7 of the (suitably scaled)
differentials that kill~$J_1({\mathbb Q})$ fill the subspace of $\Omega^1_{C_1}({\mathbb F}_7)$
spanned by
\[ \omega_1 = (1 + 3 x - 2 x^2) \frac{dx}{2 y} \quad\text{and}\quad
\omega_2 = (1 - x^2 + x^3) \frac{dx}{2 y} \,.
\]
Since $\omega_2$ does not vanish at the points $\rho_7(\infty_\pm)$, this
implies that there can be at most one rational point~$P$ on~$C_1$ with
$\rho_7(P) = \rho_7(\infty_+)$ and at most one point~$P$ with
$\rho_7(P) = \rho_7(\infty_-)$ (see for example~\cite[Prop.~6.3]{StollChabauty}).
\begin{Proposition} \label{Prop1}
The only rational points on~$C_1$ are $\infty_+$ and~$\infty_-$.
\end{Proposition}
\begin{proof}
Let $P \in C_1({\mathbb Q})$. By Lemma~\ref{LemmaRed}, $\rho_7(P) = \rho_7(\infty_\pm)$.
By the argument above, for each sign $s \in \{+,-\}$, we have
$\#\{P \in C_1({\mathbb Q}) : \rho_7(P) = \rho_7(\infty_s)\} \le 1$. These two
facts together imply that $\#C_1({\mathbb Q}) \le 2$. Since we know the two rational
points $\infty_+$ and~$\infty_-$ on~$C_1$, there cannot be any further
rational points.
\end{proof}
We can now prove Thm.~\ref{Thm}.
\begin{proof}[of Thm.~\ref{Thm}]
The considerations
in Sect.~\ref{Curves} imply that if $(a^2, b^2, c^2, d^5)$ is an
arithmetic progression in coprime integers, then there are coprime $u$ and~$v$,
related to $a,b,c,d$ by~\eqref{rel1},
such that $(u/v, a/v^5)$ is a rational point on one of the curves~$C_j$
with $-2 \le j \le 2$. By Prop.~\ref{Prop02}, there are no rational
points on $C_0$ and~$C_2$, and hence none on the curve~$C_{-2}$, which
is isomorphic to~$C_2$. By Prop.~\ref{Prop1}, the only rational points
on~$C_1$ (and~$C_{-1}$) are the points at infinity. This translates into
$a = \pm 1$, $u = \pm 1$, $v = 0$, and we have $j = \pm 1$. We deduce
$a^2 = 1$, $b^2 = g_1(\pm 1, 0)^2 = 1$, whence also $c^2 = d^5 = 1$.
\end{proof}
\end{document}
\begin{document}
\title{Generation of pure, ionic entangled states via linear optics}
\author{Ming Yang}
\email{[email protected]}
\author{Zhuo-Liang Cao}
\email{[email protected](Corresponding~Author)} \affiliation{ School
of Physics \& Material Science, Anhui University, Hefei, 230039,
People's Republic of China}
\begin{abstract}
In this paper, we propose a novel scheme to generate two-ion
maximally entangled states from either pure product states or mixed
states using linear optics. The scheme is based mainly on ionic
interference. Because it can generate pure maximally entangled states
from mixed states, we refer to it as a purification-like generation
scheme. Unlike the existing entanglement generation schemes, it does
not require a Bell state analyzer, and it avoids the difficulty of
synchronizing the arrival times of two scattered photons. The proposed
scheme can therefore be implemented more easily in practice.
\end{abstract}
\pacs{03.67.Mn, 03.67.Hk, 03.67.Pp, 42.50.Dv}
\maketitle
\section{INTRODUCTION}
The superposition principle is fundamental to quantum physics. When
applied to composite systems, it gives rise to an entirely
non-classical phenomenon: quantum entanglement~\cite{en}. After the
long debate between Niels Bohr and Albert Einstein on the completeness
of quantum mechanics, it has become generally accepted that
entanglement between two systems exists. An entangled state is a state
that cannot be written as a product of states of the two
subsystems~\cite{en}. In this sense, quantum entanglement is used to
disprove local hidden variable theories~\cite{hidden}. Because of its
non-local character, entanglement has been widely used in quantum
information processing, for example in quantum cryptography~\cite{cryp},
quantum computation~\cite{comput}, and quantum
teleportation~\cite{tele}. All of these applications rely on entangled
states, so the generation of entangled states plays a critical role in
quantum information processing. Many theoretical and experimental
schemes for generating entangled states have been proposed in cavity
QED~\cite{generation}, ion traps~\cite{ion}, and NMR~\cite{nmr}. In the
photonic case, polarization-entangled photons have been generated
experimentally using spontaneous parametric down-conversion~\cite{spdc}.
In the atomic case, several schemes for generating entangled atomic
states have been proposed~\cite{generation}. Shi-Biao Zheng and
Guang-Can Guo presented a realizable scheme for generating entangled
atomic states based mainly on the dispersive interaction between atoms
and cavity modes. Its key advantage is that the cavity is only
virtually excited during the process, so the requirement on the cavity
quality is greatly relaxed, which opens a promising perspective for
quantum information processing~\cite{zheng}. This scheme has been
realized experimentally by the group of S. Haroche~\cite{haroche}.
Alternatively, entangled atomic states can be generated via atomic
interference~\cite{ bose, browne, cabrillo, dlm, dlm1, feng,
lloyd, plenio, protsenko, simon}. Most of these schemes work as
follows: the two scattered (or leaked) photons from two spatially
separated atoms (or cavities) are mixed by a Bell state analyzer (BSA)
or a polarization beam splitter (PBS), and the two photons are then
detected. Because one cannot distinguish from which atom each photon
was scattered, the two atoms are left in an entangled state after the
photon detection. Schemes of this type can entangle spatially separated
atoms, but a serious difficulty remains: they require the two photons
to reach the BSA or PBS simultaneously. Motivated by Xing-Xiang Zhou's
proposal on non-distortion quantum interrogation~\cite{nqi}, we
previously proposed an entanglement purification scheme for arbitrary
unknown ionic states via linear optics~\cite{mepra}. In this paper, we
propose a novel entanglement generation scheme that is free of the
simultaneity problem. In our scheme, the wave function of one incident
circularly polarized photon is split into a transmitted part and a
reflected part by an ordinary nonpolarizing $50$--$50$ beam splitter
(BS). Two multi-level ions are pre-placed on the two possible paths of
the photon. After interacting with the ions, the two parts of the
photon wave function are recombined by a second ordinary nonpolarizing
$50$--$50$ BS. By detecting the photon after the second BS, we can
decide whether an entangled ionic pair has been created. The main part
of the setup can be regarded as a Mach-Zehnder interferometer (MZI).
Since the BSs are ordinary ones, the relative phase problem inherent in
the previous schemes is avoided. The photon detected in this scheme is
the circularly polarized input photon, which is easier to detect than
the scattered photons used in the previous schemes. It is not easy to
make photons from two different ions interfere in the previous schemes;
in our scheme, the photon wave function is split into two parts inside
the MZI, so the coherence condition is satisfied naturally.
If nothing further is done to the entanglement generation setup, the
probability of success is relatively low because of the low photon
scattering rate. After discussing the original setup, we therefore
modify it to enhance the efficiency: the MZI is surrounded by an
optical cavity resonant with the ionic transition. This cavity enhances
the scattering rate and thus the success probability of entanglement
generation. If we lengthen the two arms of the MZI, the setup can
entangle two spatially separated ions, because the polarization of the
photon can be preserved over a long distance in a
polarization-preserving fiber.
In the long-distance case, the two separate ions can be in a product
state or in a mixed state that evolved from a pure entangled state
before distribution. If two ions initially in a mixed state are placed
in the setup, a pure maximally entangled state can be extracted, and
the efficiency can exceed that of the product-state case provided the
mixed state satisfies a certain condition. In this sense, the scheme
for the mixed-state case resembles an entanglement purification
process, but it is not one, because entanglement purification involves
only classical communication and local operations~\cite{purification}.
We therefore refer to it only as a ``purification-like'' generation
scheme.
\section{GENERATION SCHEME FOR THE PRODUCT INITIAL STATES CASE}
Next, we discuss the entanglement generation process in detail. We
consider two identical ions, both of which are multi-level systems.
The level configuration of the ions is depicted in Fig. \ref{level}.
\begin{figure}
\caption{The level configuration of the ions.}
\label{level}
\end{figure}
Here $|m_{+}\rangle$ and $|m_{-}\rangle$ are two degenerate metastable
states used to store quantum information, $|e\rangle$ is an excited
state of the ion, and $|g\rangle$ is the stable ground state. An ion in
state $|m_{+}\rangle$ (or $|m_{-}\rangle$) can be excited to
$|e\rangle$ by absorbing one $\sigma^{+}$ (or $\sigma^{-}$) circularly
polarized photon with unit efficiency; it then decays rapidly to the
ground state $|g\rangle$ and scatters a photon. This process can be
expressed as:
\begin{equation}\label{scatter}
\hat{a}_{\pm}^{+}|0\rangle|m_{\pm}\rangle\longrightarrow|S\rangle|g\rangle,
\end{equation}
where $|S\rangle$ denotes the scattered photon, which we assume will
not be reabsorbed by the ions and can be filtered away from the
detectors. Although this process does not always occur when a photon
impinges on an ion, we first consider the ideal case to demonstrate the
scheme and later discuss how to enhance the scattering rate by adding
an optical cavity. The setup for the generation of maximally entangled
ionic states is depicted in Fig. \ref{setup}.
\begin{figure}
\caption{The setup for the generation of maximally entangled ionic states.}
\label{setup}
\end{figure}
An MZI with two BSs is the main part of the generation setup. A
$\sigma^{+}$ polarized photon enters the MZI through the lower left
port. The MZI is initially adjusted (without ions) such that the upper
detector ($D_{u}$) registers the photon with certainty. When two ions,
each in an arbitrary superposition of $|m_{+}\rangle$ and
$|m_{-}\rangle$, are placed one in each arm of the MZI (the two ions
can be placed on the two arms using trapping
techniques~\cite{trapping}), both the upper and the lower detectors
($D_{u}$ and $D_{l}$) have a nonzero probability of firing. If we
choose the superposition coefficients of the initial states of the two
ions appropriately, we obtain a maximally entangled ionic state
conditioned on a click at $D_{l}$. Suppose that the two ions ($U, L$)
are initially prepared in the following states:
\begin{subequations}
\begin{equation}
|\Psi\rangle_{U}=\alpha|m_{+}\rangle_{U}+\beta|m_{-}\rangle_{U},
\end{equation}
\begin{equation}
|\Psi\rangle_{L}=a|m_{+}\rangle_{L}+b|m_{-}\rangle_{L},
\end{equation}
\end{subequations}
where the coefficients $\alpha, \beta, a, b$ satisfy
$|\alpha|^{2}+|\beta|^{2}=1$ and $|a|^{2}+|b|^{2}=1$. These states
can be prepared by laser pulses focused on the ions. The effect of
a BS on the input photon can be expressed as:
\begin{subequations}
\begin{equation}
\hat{a}_{\rightleftarrows,l}^{+}|0\rangle\stackrel{\textrm{BS}}{\longrightarrow}
\frac{1}{\sqrt{2}}(\hat{a}_{\rightleftarrows,u}^{+}\pm{i}\hat{a}_{\rightleftarrows,l}^{+})|0\rangle,
\end{equation}
\begin{equation}
\hat{a}_{\rightleftarrows,u}^{+}|0\rangle\stackrel{\textrm{BS}}{\longrightarrow}
\frac{1}{\sqrt{2}}(\hat{a}_{\rightleftarrows,l}^{+}\pm{i}\hat{a}_{\rightleftarrows,u}^{+})|0\rangle.
\end{equation}
\end{subequations}
That is, the BS has no effect on the polarization of the input photon
and reflects the wave function with a $\pm\frac{\pi}{2}$ phase shift,
with the sign depending on the propagation direction of the
photon~\cite{nqi}.
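As a quick consistency check of these BS relations (a sketch of ours, not part of
the original setup description), one can represent the action of a single BS on the
single-photon path amplitudes, ordered as (upper, lower), by a unitary matrix and
verify that, without ions in the arms, a photon entering the lower port always
reaches the upper detector, as stated above. We ignore the polarization degree of
freedom and any common phase accumulated on the mirrors, neither of which affects
which detector fires.
\begin{verbatim}
import numpy as np

# BS action inferred from the relations above:
#   a_l -> (a_u + i a_l)/sqrt(2),   a_u -> (a_l + i a_u)/sqrt(2)
BS = np.array([[1j, 1], [1, 1j]]) / np.sqrt(2)

print(np.allclose(BS @ BS.conj().T, np.eye(2)))  # True: the BS is unitary

psi_in = np.array([0, 1])        # photon enters the lower left port
psi_out = BS @ BS @ psi_in       # empty MZI: BS1 followed by BS2
print(np.abs(psi_out) ** 2)      # [1, 0]: D_u fires with certainty
\end{verbatim}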
Next, we trace the input photon and give the evolution of the total
system. After a $\sigma^{+}$ polarized photon enters the lower left
port of the MZI, its wave function is split into two parts (the upper
arm and the lower arm) by $BS_{1}$. Because the two ions are placed on
the two arms, they interact with different parts of the wave function.
The two parts of the wave function are then recombined by $BS_{2}$. The
total evolution of the system can be expressed as follows:
\begin{align}\label{evolution}
&\hat{a}_{\rightarrow,l,+}^{+}|0\rangle(\alpha|m_{+}\rangle_{U}+\beta|m_{-}\rangle_{U})
(a|m_{+}\rangle_{L}+b|m_{-}\rangle_{L})\nonumber\\
&\longrightarrow\frac{1}{\sqrt{2}}\alpha|S\rangle_{U}|g\rangle_{U}(a|m_{+}\rangle_{L}+b|m_{-}\rangle_{L})\nonumber\\
&+\frac{i}{\sqrt{2}}a|S\rangle_{L}|g\rangle_{L}(\alpha|m_{+}\rangle_{U}+\beta|m_{-}\rangle_{U})\nonumber\\
&+\frac{i}{2}\hat{a}_{\rightarrow,u,+}^{+}|0\rangle(\beta{a}|m_{-}\rangle_{U}|m_{+}\rangle_{L}
+\alpha{b}|m_{+}\rangle_{U}|m_{-}\rangle_{L}\nonumber\\
&+2\beta{b}|m_{-}\rangle_{U}|m_{-}\rangle_{L})\nonumber\\
&+\frac{1}{2}\hat{a}_{\rightarrow,l,+}^{+}|0\rangle(\beta{a}|m_{-}\rangle_{U}|m_{+}\rangle_{L}
-\alpha{b}|m_{+}\rangle_{U}|m_{-}\rangle_{L}).
\end{align}
From the above result, we see that the two ions are left in one of
three possible states, corresponding to the three possible measurement
outcomes at the two output ports. If $D_{l}$ fires, we obtain the
two-ion entangled state
$\beta{a}|m_{-}\rangle_{U}|m_{+}\rangle_{L}
-\alpha{b}|m_{+}\rangle_{U}|m_{-}\rangle_{L}$ (up to normalization). If
we choose the coefficients of the initial states such that
$|\alpha|=|a|$ and $|\beta|=|b|$, the two ions are left in the
maximally entangled state
$|\Psi\rangle_{UL}=\frac{1}{\sqrt{2}}(|m_{-}\rangle_{U}|m_{+}\rangle_{L}
-|m_{+}\rangle_{U}|m_{-}\rangle_{L})$ with probability
$P=\frac{1}{2}|a|^{2}(1-|a|^{2})$. From this analysis, we conclude that
to obtain the two-ion maximally entangled state, the two ions must
initially be prepared in superpositions with equal moduli of the
coefficients; the success probability is a function of these moduli.
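To illustrate Eq.~(\ref{evolution}) numerically, the following short sketch (ours,
not part of the scheme; it merely evaluates the squared amplitudes of the last term
of the equation rather than simulating the optics) computes the probability of a
click at $D_{l}$ and the resulting conditional two-ion state for given initial
coefficients.
\begin{verbatim}
import numpy as np

def dl_click(alpha, beta, a, b):
    # Amplitudes of the D_l term in Eq. (evolution):
    #   (1/2) * ( beta*a |m-,m+>  -  alpha*b |m+,m-> )
    amp_mp = 0.5 * beta * a      # coefficient of |m->_U |m+>_L
    amp_pm = -0.5 * alpha * b    # coefficient of |m+>_U |m->_L
    p_dl = abs(amp_mp) ** 2 + abs(amp_pm) ** 2
    state = np.array([amp_mp, amp_pm]) / np.sqrt(p_dl)  # conditional state
    return p_dl, state

# Equal-modulus initial states, |alpha| = |a| and |beta| = |b|:
alpha = a = np.sqrt(0.7)
beta = b = np.sqrt(0.3)
p, state = dl_click(alpha, beta, a, b)
print(p, 0.5 * abs(a)**2 * (1 - abs(a)**2))  # both 0.105
print(np.abs(state) ** 2)                    # [0.5, 0.5]: maximally entangled
\end{verbatim}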
So far we have only discussed the ideal case in which the ion decays
are coherent. In practice, the ion decays are essentially incoherent
and, in free space, very few of the photons are emitted in the correct
direction, so a resonant cavity must be introduced around each ion to
achieve directional emission of the photons~\cite{simon}. To keep the
figures simple, we do not draw these cavities in the generation setups.
In addition, we have so far assumed the ideal case in which a photon
impinging on an ion always leads to the process described by
Eq.~(\ref{scatter}). In most cases, however, the photon will not be
scattered by the ions; with the ions placed inside the MZI, detector
$D_u$ would then most likely fire as before, but without any
entanglement being created between the ions. To enhance the scattering
rate, an optical cavity is added around the MZI. This cavity encloses
the MZI and is different from the resonant cavities surrounding the
ions, so we call it the \emph{enclosure cavity}. The modified setup is
depicted in Fig. \ref{newsetup}.
The modified setup is depicted in Fig. \ref{newsetup}.
\begin{figure}
\caption{The modified setup with the \emph{enclosure cavity}.}
\label{newsetup}
\end{figure}
Each mirror of the \emph{enclosure cavity} has a very high
reflectivity. One mirror ($M_{1}$) is placed at the lower left port of
the MZI, the other ($M_{2}$) at the upper right port. Thus, if the
photon has not been scattered by the ions, it leaves the MZI at the
upper right port and is reflected back into the MZI by $M_{2}$; $M_{1}$
acts analogously at the other port. That is, if the photon is not
scattered by the ions, it bounces back and forth through the MZI inside
the \emph{enclosure cavity}, which naturally enhances the scattering
rate. From Eq.~(\ref{evolution}), even if the photon has been scattered
by one ion, there is still a possibility that a photon exits the MZI at
the upper right port. The same process then repeats until $D_{1}$ or
$D_{2}$ registers a photon, which indicates that the entanglement
generation process has succeeded. If neither $D_{1}$ nor $D_{2}$
registers a photon within an interval of the order of the photon
lifetime, we must re-inject a $\sigma^{+}$ polarized photon into the
MZI through the \emph{enclosure cavity} mirror and repeat the
entanglement generation process until $D_{1}$ or $D_{2}$ registers a
photon. To clarify the evolution, we suppose that a photon exiting the
MZI at the upper right port corresponds to the third term of
Eq.~(\ref{evolution}) (this assumption is optimal because the photon
bouncing inside the \emph{enclosure cavity} enhances the chance that
the scattering process of Eq.~(\ref{scatter}) occurs; once scattering
occurs, the repetition of the third term of Eq.~(\ref{evolution})
follows). After iterating the above process, the probability that the
ions end up in scattered states is
$\frac{1}{2}(|\alpha|^{2}+|a|^{2})+\frac{1}{6}(|\alpha{b}|^{2}+|\beta{a}|^{2})$,
the probability that they end up in the product state
$|m_{-}\rangle_{U}|m_{-}\rangle_{L}$ is $|\beta{b}|^{2}$, and the
probability that they end up in the entangled state
$\beta{a}|m_{-}\rangle_{U}|m_{+}\rangle_{L}
-\alpha{b}|m_{+}\rangle_{U}|m_{-}\rangle_{L}$ is
$\frac{1}{3}(|\alpha{b}|^{2}+|\beta{a}|^{2})$. If the initial states of
the two ions satisfy $|\alpha|=|a|$ and $|\beta|=|b|$, the two ions are
left in the maximally entangled state
$|\Psi\rangle_{UL}=\frac{1}{\sqrt{2}}(|m_{-}\rangle_{U}|m_{+}\rangle_{L}
-|m_{+}\rangle_{U}|m_{-}\rangle_{L})$ with probability
$\frac{2}{3}|a|^{2}(1-|a|^{2})$. From this iteration result, we see
that the added \emph{enclosure cavity} enhances both the scattering
rate and the efficiency of entanglement creation.
\section{GENERATION SCHEME FOR THE MIXED INITIAL STATES CASE}
Most previous preparation schemes prepare the entangled states at one
location, after which the entangled particles are distributed among
different users for quantum communication purposes. During the
transmission, however, the particles unavoidably couple to their
environments, and the entanglement degrades exponentially. The
distributed entangled states are therefore usually mixed and require a
purification process~\cite{mepra, purification, pjw, distillation,
distillation1} before use. We now consider generation starting from two
ions that are initially in a mixed state. For clarity, we first give
the evolution induced by the original setup in Fig. \ref{setup} and
then discuss the process when the modified setup in Fig. \ref{newsetup}
is used. Suppose that the initial mixed state has the following
form~\cite{pjw,pjw1}:
\begin{equation}
\rho_{UL}=F|\Psi^{+}\rangle_{UL}\langle\Psi^{+}|+(1-F)|\Phi^{+}\rangle_{UL}\langle\Phi^{+}|,
\end{equation}
where $|\Psi^{+}\rangle_{UL}=\frac{1}{\sqrt{2}}
(|m_{+}\rangle_{U}|m_{-}\rangle_{L}+|m_{-}\rangle_{U}|m_{+}\rangle_{L})$
and $|\Phi^{+}\rangle_{UL}=\frac{1}{\sqrt{2}}
(|m_{+}\rangle_{U}|m_{+}\rangle_{L}+|m_{-}\rangle_{U}|m_{-}\rangle_{L})$
are two Bell states of the two ions. To express the evolution clearly,
we regard the mixed state as a probabilistic mixture of pure two-ion
entangled states, i.e. the state $|\Psi^{+}\rangle_{UL}$ with
probability $F$ and the state $|\Phi^{+}\rangle_{UL}$ with probability
$1-F$. In the $|\Psi^{+}\rangle_{UL}$ case, if $D_{u}$ fires the two
ions are left in the state
$\frac{1}{\sqrt{2}}(|m_{+}\rangle_{U}|m_{-}\rangle_{L}+|m_{-}\rangle_{U}|m_{+}\rangle_{L})$,
while if $D_{l}$ fires they collapse into the state
$\frac{1}{\sqrt{2}}(|m_{-}\rangle_{U}|m_{+}\rangle_{L}-|m_{+}\rangle_{U}|m_{-}\rangle_{L})$.
In contrast, the $|\Phi^{+}\rangle_{UL}$ case only leads to a click at
$D_{u}$, with the two ions in the state
$|m_{-}\rangle_{U}|m_{-}\rangle_{L}$.
If we detect a photon at $D_{u}$, the two ions are left in a mixed
state whose fidelity with respect to $|\Psi^{+}\rangle_{UL}$ is lower
than the initial one, so we discard this outcome. If $D_{l}$ registers
a photon, the two ions are left in the pure maximally entangled state
$|\Psi\rangle_{UL}=\frac{1}{\sqrt{2}}
(|m_{-}\rangle_{U}|m_{+}\rangle_{L}-|m_{+}\rangle_{U}|m_{-}\rangle_{L})$
with probability $P'=\frac{F}{4}$. As long as the initial fidelity
satisfies $F>\frac{1}{2}$, the success probability in the mixed-state
case is larger than in the pure-product-state case. This is easily
understood: the pure case starts from a product state, whereas the
mixed case starts from a partially entangled state, so the latter
naturally has a larger success probability.
If the \emph{enclosure cavity} used in the pure-product-state case is
added to the MZI in the mixed-state case, the scattering rate and the
generation efficiency can both be enhanced. Our analysis shows that the
success probability of obtaining a pure maximally entangled state after
iterating the process is $\frac{F}{3}$, which is larger than the
single-round value $\frac{F}{4}$, together with an increased scattering
rate.
Compared to the previous generation schemes, our scheme has the
following advantages: (1) The relative phase problem is avoided by
using the MZI; the relative phase in our scheme is adjusted to zero and
does not change during the process, and the common phase of the state
has no effect on the entanglement of the generated states. (2) The
photon we detect is the circularly polarized input photon, which is
easier to register than the scattered ones because it has better
directionality. (3) The previous schemes require a BSA, whereas our
scheme needs only two \emph{ordinary} BSs, so it is simpler. (4) The
simultaneity of the two scattered photons is a main difficulty of the
preceding schemes; in our scheme the simultaneity requirement is
satisfied naturally because of the MZI.
\section{DISCUSSION}
Having discussed the generation scheme itself, we now consider its
feasibility. Singly charged alkaline-earth ions, which have only one
electron outside a closed shell, are commonly used in quantum
information experiments with trapped ions~\cite{ion1,ion2}. Here we
discuss a possible implementation of our generation scheme using
$^{40}$Ca$^{+}$ as an example. The relevant levels of $^{40}$Ca$^{+}$
are depicted in Fig. \ref{newlevel}~\cite{simon}.
\begin{figure}
\caption{Relevant levels of $^{40}$Ca$^{+}$.}
\label{newlevel}
\end{figure}
$D_{5/2}$ and $D_{3/2}$ are two metastable levels of $^{40}$Ca$^{+}$
with lifetimes of the order of $1\,$s. $S_{1}$ and $S_{2}$ are two
sublevels of $D_{5/2}$ with $m=-5/2$ and $m=-1/2$, and these two
sublevels are coupled to $|e\rangle$ by $\sigma_-$ and $\sigma_+$ light
at $854\,$nm. Here $e, S_{1}, S_{2}, S_{1/2}$ correspond to $e, m_{-},
m_{+}, g$ in Fig. \ref{level}, respectively, i.e. we use $S_{1/2}$ as
the stable ground state, $S_{1}$ and $S_{2}$ as the two degenerate
metastable states, and $P_{3/2}$ as the excited state. An arbitrary
superposition of the two degenerate metastable states can be prepared
by applying a laser pulse of appropriate length, a process that can be
realized in a few microseconds~\cite{ion4}. A $^{40}$Ca$^{+}$ ion in
state $S_{1}$ or $S_{2}$ can be excited into the excited state
$P_{3/2}$ by applying $\sigma_-$ or $\sigma_+$ light at $854\,$nm.
Decays from $|e\rangle$ to $S_{1}$, $S_{2}$, to $D_{3/2}$, and to
$S_{1/2}$ are then all possible, but the branching ratio for
$P_{3/2}\rightarrow D_{5/2}$ ($854\,$nm) versus $P_{3/2}\rightarrow
S_{1/2}$ ($393\,$nm) can be estimated as 1:30, giving $0.5 \times
10^7/$s for the transition probability~\cite{ion2, simon}. So in most
cases, a $^{40}$Ca$^{+}$ ion in the excited state will decay into the
stable ground state $S_{1/2}$. The internal states of $^{40}$Ca$^{+}$
can be detected by using a cycling transition between $S_{1/2}$ and
$P_{1/2}$ ($397\,$nm)~\cite{ion1,ion2}.
In Sections II and III we discussed the effect of the cavity enclosing
the MZI (the \emph{enclosure cavity}) on the generation scheme; next we
discuss the effect of the resonant cavities surrounding the ions. To
achieve directional emission of the photons from the ions, we introduce
an optical resonant cavity around each ion, resonant with the $S_{1/2}$
to $P_{3/2}$ transition, to enhance the emission of photons on the
$P_{3/2}$ to $S_{1/2}$ transition. The following three items then
affect the photon emission efficiency of the ions: (1) the coupling
between the cavity mode and the $P_{3/2}\rightarrow S_{1/2}$
($393\,$nm) transition; (2) decay from $P_{3/2}$ to $D_{5/2}$; (3)
cavity decay. The probability $p_{cav}$ for a photon to be emitted into
the cavity mode after excitation to $|e\rangle$ can be expressed as
$p_{cav}=\frac{4 \gamma \Omega^2}{(\gamma+\Gamma)(\gamma \Gamma + 4
\Omega^2)}$, where $\gamma=4\pi c/(F_{cav}L)$ is the decay rate of the
cavity, $F_{cav}$ its finesse, $L$ its length,
$\Omega=\frac{D}{\hbar}\sqrt{\frac{hc}{2 \epsilon_0 \lambda V}}$ is the
coupling constant between the transition and the cavity mode, $D$ the
dipole matrix element, $\lambda$ the wavelength of the transition, $V$
the mode volume (which can be made as small as $L^2\lambda/4$ for a
confocal cavity with waist $\sqrt{L\lambda/\pi}$), and $\Gamma$ the
non-cavity-related loss rate~\cite{simon, decay}. From the discussion
in Ref.~\cite{simon}, the photon wave packet is about $100\,$ns long,
and such a long coherence time makes it easy to achieve good overlap of
the photon wave functions on the beam splitter.
When calculating the total efficiency of the generation scheme, we
must consider the following items:
\begin{itemize}
\item The photon emission efficiency $p_{cav}$, which already includes the
cavity decay. To maximize $p_{cav}$, we choose $F_{cav}=19000$ and
$L=3\,$mm, giving $\gamma=9.9\times10^{6}/$s and
$p_{cav}=0.01$~\cite{simon}.
\item The efficiency of the photon detectors, denoted $\eta$. Here we take
$\eta=0.7$, a level that can be reached with current technology.
\item Coupling the photon out of the cavities introduces another error $\xi$,
which can be made close to unity.
\end{itemize}
In addition, because the two ions are placed symmetrically in the MZI,
differences in the transition times of the ions and the consequent
pulse broadening affect the efficiency of the scheme only slightly. To
complete the generation scheme, we suppose that the state preparer
holds two ionic ensembles. Taking the above factors into account, the
total success probability can be expressed as follows (taking the
modified scheme as an example; a numerical check of the quoted rates is
sketched after the list):
\begin{itemize}
\item
$P={\frac{F}{3}}\times{p_{cav}}\times{\eta}$
for the mixed-state case; that is, if we input photons at a rate of
$5000/$s, we can obtain about eight pairs of pure maximally entangled
$^{40}$Ca$^{+}$ ions per second for $F=0.7$.
\item $P={\frac{2a^{2}(1-a^{2})}{3}}\times{p_{cav}}\times{\eta}$
for product initial states; that is, if we input photons at a rate of
$5000/$s, we can obtain about five pairs of pure maximally entangled
$^{40}$Ca$^{+}$ ions per second for $a^{2}=0.7$.
\end{itemize}
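The rates quoted above follow directly from these expressions; a minimal check
(ours, using only the numbers given in the text) is:
\begin{verbatim}
p_cav, eta, rate = 0.01, 0.7, 5000   # photons injected per second

F = 0.7
P_mixed = (F / 3) * p_cav * eta
print(rate * P_mixed)      # ~8.2 entangled pairs per second

a2 = 0.7
P_product = (2 * a2 * (1 - a2) / 3) * p_cav * eta
print(rate * P_product)    # ~4.9 entangled pairs per second
\end{verbatim}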
From the experimental point of view, the efficiency of the current
scheme would be greatly enhanced if there were enough photons in the
resonant system to induce stimulated emission from the ions; we
therefore input a sufficient number of photons into the MZI
simultaneously, which makes the scheme more practical.
\section{CONCLUSION}
In conclusion, we have proposed an entanglement generation scheme that
can entangle two ions using an MZI plus an optical \emph{enclosure
cavity}. Pure maximally entangled states can be generated from either
product states or mixed states. A single photon detection signals
whether the generation process has succeeded. The added optical
\emph{enclosure cavity} enhances the generation efficiency. Because the
simultaneity problem inherent in the preceding schemes does not appear
in our scheme, ours is easier to realize than the previous ones.
\begin{acknowledgments}
This work is supported by Anhui Provincial Natural Science
Foundation under Grant No: 03042401, the Key Program of the
Education Department of Anhui Province under Grant No:2004kj005zd
and the Talent Foundation of Anhui University.
\end{acknowledgments}
\end{document}
\begin{document}
\twocolumn[
\icmltitle{Improving the Gaussian Process Sparse Spectrum Approximation by Representing Uncertainty in Frequency Inputs}
\icmlauthor{Yarin Gal}{[email protected]}
\icmlauthor{Richard Turner}{[email protected]}
\icmladdress{University of Cambridge}
\vskip 0.in
]
\begin{abstract}
Standard sparse pseudo-input approximations to the Gaussian process (GP) cannot handle complex functions well. Sparse spectrum alternatives attempt to answer this but are known to over-fit. We suggest the use of variational inference for the sparse spectrum approximation to avoid both issues. We model the covariance function with a finite Fourier series approximation and treat it as a random variable. The random covariance function has a posterior, on which a variational distribution is placed. The variational distribution transforms the random covariance function to fit the data. We study the properties of our approximate inference, compare it to alternative ones, and extend it to the distributed and stochastic domains. Our approximation captures complex functions better than standard approaches and avoids over-fitting.
\end{abstract}
\section{Introduction}
The Gaussian process \citep[GP, ][]{Rasmussen2005Gaussian} is a powerful tool for modelling distributions over non-linear functions. It offers robustness to over-fitting, a principled way to tune hyper-parameters, and uncertainty bounds over the outputs.
These properties are critical for tasks including non-linear function regression, reinforcement learning, density estimation, and more \citep{brochu2010tutorial,rasmussen2003gaussian,engel2005reinforcement,titsias2010bayesian}.
But the advantages of the Gaussian process come with a great computational cost. Evaluating the GP posterior involves a large matrix inversion -- for $N$ data points the model requires $\mathcal{O}(N^3)$ time complexity.
Many approximations to the GP have been proposed to reduce the model's time complexity. \citet{Quinonero-candela05unifying} survey approaches relying on \textit{``sparse pseudo-input''} approximations. In these, a small number of points in the input space with corresponding outputs (``inducing inputs and outputs'') are used to define a new Gaussian process. The new GP is desired to be as close as possible to the GP defined on the entire dataset, and the matrix inversion is now done with respect to the inducing points alone.
These approaches are suitable for locally complex functions. The approximate model would place most of the inducing points in regions where the function is complex, and only a small number of points would be placed in regions where the function is not. Highly complex functions cannot be modelled well with this approach.
\citet{lazaro2010sparse} suggested an alternative approximation to the GP model. In their paper they suggest the decomposition of the GP's stationary covariance function into its Fourier series. The infinite series is then approximated with a finite one. They optimise over the frequencies of the series to minimise some divergence from the full Gaussian process. This approach was named a \textit{``sparse spectrum''} approximation.
This approach is closely related to the one suggested by \citet{rahimi2007random} in the randomised methods community (random projections). In \citet{rahimi2007random}'s approach, the frequencies are randomised (sampled from some distribution rather than optimised) and the Fourier coefficients are computed analytically.
Both approaches capture globally complex behaviour, but the direct optimisation of the different quantities often leads to some form of over-fitting (as reported in \citep{wilson2014fast} for the SSGP and shown below for random projections).
Similar over-fitting problems observed with the \textit{sparse pseudo-input} approximation were answered with variational inference \citep{Titsias2009Variational}.
We suggest the use of variational inference for the \textit{sparse spectrum} approximation.
This allows us to avoid over-fitting while efficiently capturing globally complex behaviour.
We replace the stationary covariance function with a finite approximation obtained from Monte Carlo integration.
This finite approximation is a random variable, and conditioned on a dataset this random variable has an intractable posterior.
We approximate this posterior with variational inference, resulting in a non-stationary finite rank covariance function.
The approximating variational distribution transforms the covariance function to fit the data well. The prior from the GP model keeps the approximating distribution from over-fitting to the data.
Like in \citep{lazaro2010sparse}, we can marginalise over the Fourier coefficients. This results in approximate inference with $\mathcal{O}(NK^2 + K^3)$ time complexity with $N$ data points and $K$ inducing frequencies (components in the Fourier expansion).
This is the same as that of the sparse pseudo-input and sparse spectrum approximations.
We can further optimise a variational distribution over the frequencies reducing the time complexity to $\mathcal{O}(NK^2)$. This factorises the lower bound and allows us to perform distributed inference, resulting in $\mathcal{O}(K)$ time complexity given a sufficient number of nodes in a distributed framework.
We can approximate the latter lower bound and use random subsets of the data (mini batches) employing stochastic variational inference \citep{Hoffman2013Stochastic}. This results in $\mathcal{O}(SK^2)$ time complexity with $S \ll N$ the size of the mini-batch\footnote{Python code for all inference algorithms is available at \url{http://github.com/yaringal/VSSGP}}.
In the experiments section we demonstrate the properties of our GP approximation and compare it to alternative approximations.
We describe qualitative properties of the approximation and discuss how the approximation can be used to learn the covariance function by fitting to the data.
We compare the approximation to the full Gaussian process, sparse spectrum GP, sparse pseudo-input GP, and random projections.
We show that alternative approximations either over-fit or under-fit even on simple datasets.
We empirically demonstrate the advantages of the variational inference in avoiding over-fitting by comparing the approximation to the sparse spectrum one on audio data from the TIMIT dataset.
We compare the stochastic optimisation to the non-stochastic one, and compare the performance to the sparse pseudo-input SVI.
Finally, we inspect the model's time accuracy trade-off and show that it avoids over-fitting as the number of parameters increases.
\section{Sparse Spectrum Approximation in Gaussian Process Regression}
We use Bochner’s theorem \citep{bochner1959lectures} to reformulate the covariance function in terms of its frequencies.
Since our covariance function $K(\mathbf{x}, \mathbf{y})$ is stationary, it can be represented as $K(\mathbf{x}-\mathbf{y})$ for all $\mathbf{x},~\mathbf{y} \in \mathbb{R}^{Q}$. Following Bochner’s theorem, $K(\mathbf{x}-\mathbf{y})$ can be represented as the Fourier transform of some finite measure $\sigma^2 p(\mathbf{w})$ with $p(\mathbf{w})$ a probability density,
\fl[
K(\mathbf{x} - \mathbf{y}) &= \int_{\mathbb{R}^Q} \sigma^2 p(\mathbf{w}) e^{- 2 \pi i\mathbf{w}^T(\mathbf{x}-\mathbf{y})} \text{d} \mathbf{w} \notag\\
&=
\int_{\mathbb{R}^Q} \sigma^2 p(\mathbf{w}) \cos(2 \pi \mathbf{w}^T(\mathbf{x}-\mathbf{y})) \text{d} \mathbf{w}
\label{eq:bochner}
\]
since the covariance function is real-valued.
This can be approximated as a finite sum with $K$ terms using Monte Carlo integration,
\[
K(\mathbf{x} - \mathbf{y}) &\approx
\frac{\sigma^2 }{K} \sum_{k=1}^K
\cos \big( 2 \pi \mathbf{w}_k^T \big( (\mathbf{x}-\mathbf{z}_k)-(\mathbf{y}-\mathbf{z}_k) \big) \big)
\]
with $\mathbf{w}_k \sim p(\mathbf{w})$ and $\mathbf{z}_k$ some $Q$ dimensional vectors for $k = 1, ..., K$. The points $\mathbf{z}_k$ act as inducing inputs, and will have corresponding inducing frequencies in our approximation. For the sparse spectrum GP, these points take value 0. These will be explained in detail in a later section.
Using identity \ref{identity:1} proved in the appendix we rewrite the terms above for every $k$ as
\[
&\cos \big( 2 \pi \mathbf{w}_k^T \big( (\mathbf{x}-\mathbf{z}_k)-(\mathbf{y}-\mathbf{z}_k) \big) \big)
\notag\\
&\qquad\qquad =
\int_{0}^{2\pi} \frac{1}{2\pi}
\sqrt{2} \cos \big(2 \pi \mathbf{w}_k^T(\mathbf{x}-\mathbf{z}_k) + b \big) \notag\\
&\qquad\qquad\qquad \cdot
\sqrt{2} \cos \big(2 \pi \mathbf{w}_k^T(\mathbf{y}-\mathbf{z}_k) + b \big) \text{d} b.
\]
This integral can again be approximated as a finite sum using Monte Carlo integration. To keep the notation simple, we approximate the integral with a single sample\footnote{The above transformation and approximate integration are used in the \textit{randomised methods} literature \citep[``Random projections'', ][]{rahimi2007random}. It was shown to give better approximation than Monte Carlo integration of eq.\ \ref{eq:bochner}. Intuitively it is equivalent to a random phase shift for each basis function.} for every $k$,
\[
K(\mathbf{x} - \mathbf{y}) &\approx \frac{\sigma^2 }{K} \sum_{k=1}^K
\sqrt{2} \cos(2 \pi \mathbf{w}_k^T(\mathbf{x}-\mathbf{z}_k) + b_k) \notag\\
&\qquad \qquad \cdot
\sqrt{2} \cos(2 \pi \mathbf{w}_k^T(\mathbf{y}-\mathbf{z}_k) + b_k) \\
&=: \widehat{K}(\mathbf{x} - \mathbf{y})
\]
with $b_k \sim \text{Unif}[0, 2\pi]$, defining the approximate covariance function $\widehat{K}$.
We refer to $(\mathbf{w}_k)_{k=1}^K$ as inducing frequencies and to $(b_k)_{k=1}^K$ as phases, and denote $\text{\boldmath$\omega$} = (\mathbf{w}_k, b_k)_{k=1}^K$.
Note that this integral could be approximated with any arbitrary number of samples instead.
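As an illustration of this finite-sum construction, the following minimal Python
sketch (ours, not taken from the paper; the lengthscale, the number of features, and
all variable names are our own choices) draws frequencies from the spectral density
of a squared exponential covariance function, adds uniform phases, and compares the
resulting estimate of $K(\mathbf{x}-\mathbf{y})$ with the exact kernel value.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Q, K, sigma2, lengthscale = 2, 2000, 1.0, 0.5

# For the SE kernel and features cos(2*pi*w^T x + b), the spectral density
# corresponds to w ~ N(0, (1/(2*pi*lengthscale))^2) per input dimension.
W = rng.normal(0.0, 1.0 / (2 * np.pi * lengthscale), size=(K, Q))
b = rng.uniform(0.0, 2 * np.pi, size=K)

def phi(x):
    # Random Fourier features: sqrt(2*sigma2/K) * cos(2*pi*W x + b)
    return np.sqrt(2 * sigma2 / K) * np.cos(2 * np.pi * W @ x + b)

x, y = rng.normal(size=Q), rng.normal(size=Q)
k_hat = phi(x) @ phi(y)                                   # finite-sum estimate
k_exact = sigma2 * np.exp(-np.sum((x - y) ** 2) / (2 * lengthscale ** 2))
print(k_hat, k_exact)  # close, up to Monte Carlo error of order 1/sqrt(K)
\end{verbatim}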
We denote $\mathbf{X} \in \mathbb{R}^{N \times Q}$ the inputs and $\mathbf{Y} \in \mathbb{R}^{N \times D}$ the outputs of a real-valued dataset with $N$ data points. In Gaussian process regression we find the probability $P(\mathbf{Y} | \mathbf{X})$ with the assumption that the function generating $\mathbf{Y}$ is drawn from a Gaussian process. The full GP model is defined as (assuming stationary covariance function $K(\cdot, \cdot)$):
\[
\mathbf{F} ~|~ \mathbf{X} &\sim \mathcal{N}(\mathbf{0}, K(\mathbf{X}, \mathbf{X})) \\
\mathbf{Y} ~|~ \mathbf{F} &\sim \mathcal{N}(\mathbf{F}, \tau^{-1} \mathbf{I})
\]
with some precision hyper-parameter $\tau$.
Using $\widehat{K}$ instead as the covariance function of the Gaussian process yields the following generative model:
\[
\mathbf{w}_k &\sim p(\mathbf{w}),
~b_k \sim \text{Unif}[0, 2\pi], \notag\\
\text{\boldmath$\omega$} &= (\mathbf{w}_k, b_k)_{k=1}^K \notag\\
\widehat{K}(\mathbf{x}, \mathbf{y}) &= \frac{\sigma^2 }{K} \sum_{k=1}^K \sqrt{2} \cos \big(2 \pi \mathbf{w}_k^T(\mathbf{x}-\mathbf{z}_k) + b_k \big) \notag\\
&\qquad \qquad \cdot
\sqrt{2} \cos \big(2 \pi \mathbf{w}_k^T(\mathbf{y}-\mathbf{z}_k) + b_k \big) \notag\\
\mathbf{F} ~|~ \mathbf{X}, \text{\boldmath$\omega$} &\sim \mathcal{N}(\mathbf{0}, \widehat{K}(\mathbf{X}, \mathbf{X})) \notag\\
\mathbf{Y} ~|~ \mathbf{F} &\sim \mathcal{N}(\mathbf{F}, \tau^{-1} \mathbf{I}).
\]
\section{Random Covariance Functions}
$K$ is a deterministic covariance function of its inputs; $\widehat{K}$ is a random finite rank covariance function. As such, we can find the conditional distribution of the covariance function given a dataset (more precisely, the conditional distribution of $\text{\boldmath$\omega$}$).
This is a powerful view of this approximation -- it allows us to transform the covariance function to fit the data well, while the prior keeps it from over-fitting to the data.
We will use $\widehat{K}$ as our Gaussian process covariance function from now on, replacing $K$.
This results in the following predictive distribution:
\[
p(\mathbf{Y} | \mathbf{X}) &= \int p(\mathbf{Y} | \mathbf{F}) p(\mathbf{F} | \text{\boldmath$\omega$}, \mathbf{X}) p(\text{\boldmath$\omega$}) \text{d} \text{\boldmath$\omega$} \text{d} \mathbf{F}.
\]
We can integrate this analytically for $\mathbf{F}$ and obtain
\[
p(\mathbf{Y} | \mathbf{X})
&= \int \mathcal{N}(\mathbf{Y}; \mathbf{0}, \widehat{K}(\mathbf{X},\mathbf{X}) + \tau^{-1} \mathbf{I}) p(\text{\boldmath$\omega$}) \text{d} \text{\boldmath$\omega$}
\]
but this involves the inversion of $\widehat{K}(\mathbf{X},\mathbf{X}) + \tau^{-1} \mathbf{I}$, which does not allow us to integrate over $\text{\boldmath$\omega$}$ (even variationally!). Instead, we introduce an auxiliary random variable.
Denoting the $1 \times K$ row vector
\[
\phi(\mathbf{x}, \text{\boldmath$\omega$}) = \bigg[ \sqrt{\frac{2\sigma^2}{K}} \cos \big(2 \pi \mathbf{w}_k^T(\mathbf{x} - \mathbf{z}_k) + b_k \big) \bigg]_{k=1}^K
\]
and the $N \times K$ feature matrix $\Phi = [\phi(\mathbf{x}_n, \text{\boldmath$\omega$})]_{n=1}^N$, we have $\widehat{K}(\mathbf{X},\mathbf{X}) = \Phi\Phi^T$.
We rewrite $p(\mathbf{Y} | \mathbf{X})$ as
\[
&p(\mathbf{Y} | \mathbf{X}) = \int \mathcal{N}(\mathbf{Y}; \mathbf{0}, \Phi\Phi^T + \tau^{-1} \mathbf{I}) p(\text{\boldmath$\omega$}) \text{d} \text{\boldmath$\omega$}.
\]
Following identity \citep[page 93, equations 2.113 $-$ 2.115]{Bishop2006Pattern} we introduce a $K \times 1$ auxiliary random variable $\mathbf{a}_d \sim \mathcal{N}(0, \mathbf{I}_K)$ to the distribution inside the integral above,
\[
&\mathcal{N}(\mathbf{y}_d; \mathbf{0}, \Phi\Phi^T + \tau^{-1} \mathbf{I}) \notag\\
&\qquad = \int \mathcal{N}(\mathbf{y}_d; \Phi\mathbf{a}_d, \tau^{-1} \mathbf{I}) \mathcal{N}(\mathbf{a}_d; 0, \mathbf{I}_K) \text{d} \mathbf{a}_d,
\]
where $\mathbf{y}_d$ is the $d$'th column of the $N \times D$ matrix $\mathbf{Y}$.
Writing $\mathbf{A} = [\mathbf{a}_d]_{d=1}^D$, the above is equivalent to\footnote{This is equivalent to the weighted basis function interpretation of the Gaussian process \citep{Rasmussen2005Gaussian}.}
\fl[\label{eq:Y_given_A_X_o}
p(\mathbf{Y} | \mathbf{X})
&= \int p(\mathbf{Y} | \mathbf{A}, \mathbf{X}, \text{\boldmath$\omega$}) p(\mathbf{A}) p(\text{\boldmath$\omega$}) \text{d} \mathbf{A} \text{d} \text{\boldmath$\omega$}.
\]
We refer to $\mathbf{A} \in \mathbb{R}^{K \times D}$ as the Fourier coefficients.
Regarding $\text{\boldmath$\omega$}$ as parameters and optimising these values (integrating over $\mathbf{A}$) results in the \textit{sparse spectrum} approximation \citep{lazaro2010sparse}.
Regarding $\mathbf{A}$ as parameters and optimising these values (leaving $\text{\boldmath$\omega$}$ constant) results in a method known as ``random projections'' \citep{rahimi2007random}.
Related work to random projections variationally integrates over the hyper-parameters while leaving $\text{\boldmath$\omega$}$ constant \citep{Tan2013Variational}.
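To make the weighted-basis-function view concrete, here is a minimal sketch (ours,
not from the paper) of the case where the frequencies and phases are held fixed --
i.e.\ the random projections setting mentioned above, not the variational treatment
developed below. It integrates out the Fourier coefficients $\mathbf{A}$ in closed
form, which is just Bayesian linear regression on the random features; the feature
construction follows the previous sketch, and all names and parameter values are
our own assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, Q, K, sigma2, lengthscale, tau = 50, 1, 100, 1.0, 0.5, 100.0

X = rng.uniform(-3, 3, size=(N, Q))
Y = np.sin(X).sum(axis=1, keepdims=True) + rng.normal(0, 0.1, size=(N, 1))

W = rng.normal(0.0, 1.0 / (2 * np.pi * lengthscale), size=(K, Q))
b = rng.uniform(0.0, 2 * np.pi, size=K)

def features(X):
    # N x K random Fourier feature matrix Phi
    return np.sqrt(2 * sigma2 / K) * np.cos(2 * np.pi * X @ W.T + b)

Phi = features(X)
# Posterior over the coefficients a_d (standard Bayesian linear regression):
#   Sigma = (Phi^T Phi + tau^{-1} I)^{-1},  mean = Sigma Phi^T y_d
Sigma = np.linalg.inv(Phi.T @ Phi + (1.0 / tau) * np.eye(K))
mean_A = Sigma @ Phi.T @ Y

X_star = np.linspace(-3, 3, 5).reshape(-1, 1)
Phi_star = features(X_star)
pred_mean = Phi_star @ mean_A
pred_var = (1.0 / tau) * (1.0 + np.sum(Phi_star @ Sigma * Phi_star, axis=1))
print(pred_mean.ravel(), pred_var)
\end{verbatim}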
We can extend the above to sums of covariance functions as well. Following proposition \ref{prop:2} in the appendix, given a sum of covariance functions with $L$ components (with each corresponding to $\Phi_i$ an $N \times K$ matrix) we have $\Phi = [\Phi_i]_{i=1}^L$ an $N \times LK$ matrix.
As an example covariance function of this form consider the spectral mixture (SM) covariance function \citep{lindgren2012stationary,wilson2013gaussian}.
This covariance function has been used in the audio processing community since the '70s and was recently introduced to the machine learning community. It generalises many known covariance functions, such as the periodic covariance function, the automatic relevance determination (ARD) squared exponential (SE) covariance function, products of these and weighted sums of these products.
We will continue the development of our method using this covariance function. Note however that our method is general and can be extended for other covariance functions as well.
The spectral mixture covariance function with $L$ components is given by
\[
K(\mathbf{x}, \mathbf{y}) = \sum_{i=1}^L \sigma_i^2
&\exp \bigg(
-\frac{1}{2} \sum_{q=1}^Q \frac{(x_q - y_q)^2}{l_{iq}^2}
\bigg) \notag\\
& \cdot \prod_{q=1}^Q
\cos \bigg( \frac{2 \pi (x_q - y_q)}{p_{iq}} \bigg)
\]
with weights $\sigma_i^2$, length-scales $l_{iq}$ and periods $p_{iq}^{-1}$. We write $\overline{\mathbf{p}}_i = [p_{iq}^{-1}]_{q=1}^Q$ and $\mathbf{L}_i = \text{diag}([2 \pi l_{qi} ]_{q=1}^Q)$. This covariance function reduces to a sum of squared exponential (SE) covariance functions for $p_{iq} = \infty$ for all $i$ and $q$.
For $p(\mathbf{w})$ composed of a single SM component, we follow proposition \ref{prop:3} in the appendix and perform a change of variables, resulting in $p(\mathbf{w})$ a standard normal distribution with the parameters of $p(\mathbf{w})$ now expressed in $\Phi$ instead.
For $p(\mathbf{w})$ composed of several components, for each component $i$ we get $\Phi_i$ is an $N \times K$ matrix with elements
\[
\sqrt{\frac{2\sigma_i^2}{K}} \cos \big(2 \pi (\mathbf{L}_i^{-1} \mathbf{w}_k + \overline{\mathbf{p}}_i)^T (\mathbf{x} - \mathbf{z}_k) + b_k \big),
\]
where for simplicity, we index $\mathbf{w}_k$ and $b_k$ with $k=1,...,LK$ as a function of $i$.
\section{Variational Inference}
The predictive distribution for an input point $\mathbf{x}^*$ is given by
\fl[\label{eq:pred_dist}
&p(\mathbf{y}^* | \mathbf{x}^*, \mathbf{X}, \mathbf{Y}) = \int p(\mathbf{y}^* | \mathbf{x}^*, \mathbf{A}, \text{\boldmath$\omega$}) p(\mathbf{A}, \text{\boldmath$\omega$} | \mathbf{X}, \mathbf{Y})\text{d} \mathbf{A} \text{d} \text{\boldmath$\omega$},
\]
with $\mathbf{y}^* \in \mathbb{R}^{1 \times D}$.
The distribution $p(\mathbf{A}, \text{\boldmath$\omega$} | \mathbf{X}, \mathbf{Y})$ cannot be evaluated analytically. Instead we define an approximating \textit{variational} distribution $q(\mathbf{A}, \text{\boldmath$\omega$})$, whose structure is easy to evaluate.
We would like our approximating distribution to be as close as possible to the posterior distribution obtained from the full GP. We thus minimise the Kullback--Leibler divergence
\[
\text{KL}(q(\mathbf{A}, \text{\boldmath$\omega$}) ~|~ p(\mathbf{A}, \text{\boldmath$\omega$} | \mathbf{X}, \mathbf{Y})),
\]
resulting in the approximate predictive distribution
\fl[\label{eq:approx_pred_dist}
&q(\mathbf{y}^* | \mathbf{x}^*) = \int p(\mathbf{y}^* | \mathbf{x}^*, \mathbf{A}, \text{\boldmath$\omega$}) q(\mathbf{A}, \text{\boldmath$\omega$}) \text{d} \mathbf{A} \text{d} \text{\boldmath$\omega$}.
\]
Minimising the Kullback--Leibler divergence is equivalent to maximising the log evidence lower bound
\fl[
&\mathcal{L} := \int q(\mathbf{A}, \text{\boldmath$\omega$}) \log p(\mathbf{Y} | \mathbf{A}, \mathbf{X}, \text{\boldmath$\omega$}) \text{d} \mathbf{A} \text{d} \text{\boldmath$\omega$} \notag\\
&\qquad \qquad \qquad \qquad
- \text{KL}(q(\mathbf{A}, \text{\boldmath$\omega$}) || p(\mathbf{A}) p(\text{\boldmath$\omega$})) \label{eq:lower_bound}
\]
with respect to the variational parameters defining $q(\mathbf{A},\text{\boldmath$\omega$})$.
We define a factorised variational distribution $q(\mathbf{A},\text{\boldmath$\omega$}) = q(\mathbf{A}) q(\text{\boldmath$\omega$})$. We define $q(\text{\boldmath$\omega$})$ with $\text{\boldmath$\omega$} = (\mathbf{w}_k, b_k)_{k=1}^K$ to be a joint Gaussian distribution and a uniform distribution,
\[
\mathbf{w}_k &\sim \mathcal{N}(\mu_k, \Sigma_k), &&k = 1, ..., LK \\
b_k &\sim \text{Unif}(\alpha_k, \beta_k), &&k = 1, ..., LK
\]
with $\Sigma_k$ diagonal, $0 \leq \alpha_k \leq \beta_k \leq 2 \pi$, and define $q(\mathbf{A}) = \prod_{d=1}^D q(\mathbf{a}_d)$ (with $\mathbf{a}_d \in \mathbb{R}^{LK \times 1}$) by
\[
\mathbf{a}_d \sim \mathcal{N}(\mathbf{m}_d, \mathbf{s}_d), &&d = 1, ..., D
\]
with $\mathbf{s}_d$ diagonal. We evaluate the log evidence lower bound and optimise over $\{ \mu_k, \Sigma_k, \alpha_k, \beta_k \}_{k=1}^{LK}$, $\{ \mathbf{m}_d, \mathbf{s}_d \}_{d=1}^D$, and $\{ \sigma_i, \mathbf{L}_i, \overline{\mathbf{p}}_i \}_{i=1}^L$ to maximise Eq.\ \ref{eq:lower_bound}.
\subsection{Evaluating the Log Evidence Lower Bound}
Given $\mathbf{A}$ and $\text{\boldmath$\omega$}$, we evaluate the probability of the $d$'th element, $\mathbf{y}_d$, as
\[
&\log p(\mathbf{y}_d | \mathbf{A}, \mathbf{X}, \text{\boldmath$\omega$}) =
\notag\\
&\qquad
-\frac{N}{2} \log(2 \pi \tau^{-1}) - \frac{\tau}{2} (\mathbf{y}_d - \Phi\mathbf{a}_d)^T(\mathbf{y}_d - \Phi\mathbf{a}_d).
\]
Note that $\mathbf{y}_d$ is an $N \times 1$ vector, $\Phi$ is an $N \times LK$ matrix, and $\mathbf{a}_d$ is an $LK \times 1$ vector.
We need to evaluate the expectations of $\mathbf{y}_d^T\Phi\mathbf{a}_d$ and $\mathbf{a}_d^T\Phi^T\Phi\mathbf{a}_d$ (both scalar values) under $q(\mathbf{A})q(\text{\boldmath$\omega$})$:
\fl[\label{eq:exp_a_Phi}
&E_{q(\mathbf{A})q(\text{\boldmath$\omega$})}\big( \mathbf{y}_d^T\Phi\mathbf{a}_d \big)
=
\mathbf{y}_d^T E_{q(\text{\boldmath$\omega$})}\big( \Phi \big) E_{q(\mathbf{A})}\big( \mathbf{a}_d \big),
\]
and
\fl[\label{eq:exp_a_Phi_Phi_a}
E_{q(\mathbf{A})q(\text{\boldmath$\omega$})}\big(
\mathbf{a}_d^T\Phi^T\Phi\mathbf{a}_d
\big)
&=
\text{tr} \bigg(
E_{q(\text{\boldmath$\omega$})}\big(
\Phi^T\Phi
\big)
E_{q(\mathbf{A})}\big(
\mathbf{a}_d \mathbf{a}_d^T
\big)
\bigg).
\]
The values $E_{q(\mathbf{A})}( \mathbf{a}_d )$ and $E_{q(\mathbf{A})}( \mathbf{a}_d \mathbf{a}_d^T )$ are evaluated as
\[
E_{q(\mathbf{A})}( \mathbf{a}_d ) &= \mathbf{m}_d, \\
E_{q(\mathbf{A})}( \mathbf{a}_d \mathbf{a}_d^T ) &= \mathbf{s}_d + \mathbf{m}_d \mathbf{m}_d^T.
\]
Next we evaluate $E_{q(\text{\boldmath$\omega$})}\big(\Phi\big)$. Remember that $\Phi$ depends on $\text{\boldmath$\omega$}$ and that $q(\text{\boldmath$\omega$}) = q((\mathbf{w}_k, b_k)_{k=1}^{LK})$. We write as shorthand $\overline{\mathbf{x}}_{nk} := 2\pi \mathbf{L}_i^{-1} (\mathbf{x}_n - \mathbf{z}_k)$ and $\overline{b}_{nk} = b_k + 2 \pi \overline{\mathbf{p}}_i^T(\mathbf{x}_n-\mathbf{z}_k)$ with component $i$ appropriate to $k$.
Following identity \ref{identity:2} proved in the appendix, we have that
the expectation of a single element in the vector with respect to $q(\mathbf{w}_k)$ is
\[
&E_{q(\mathbf{w}_k)}\big(\cos \big(\mathbf{w}_k^T \overline{\mathbf{x}}_{nk} + \overline{b}_{nk} \big)\big) \notag\\
& \quad = e^{-\frac{1}{2} \overline{\mathbf{x}}_{nk}^T \Sigma_k \overline{\mathbf{x}}_{nk}} \cos\big(\mu_k^T \overline{\mathbf{x}}_{nk} + \overline{b}_{nk} \big).
\]
where $\mu_k$ is the mean of $q(\mathbf{w}_k)$ and $\Sigma_k$ is its covariance.
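This identity is easy to verify numerically. The following sketch (ours, with
arbitrary made-up values standing in for $\mu_k$, $\Sigma_k$, $\overline{\mathbf{x}}_{nk}$
and $\overline{b}_{nk}$) compares a Monte Carlo estimate of the expectation with the
closed form.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
Q = 3
mu = rng.normal(size=Q)                    # mean of q(w_k)
Sigma = np.diag(rng.uniform(0.1, 1.0, Q))  # diagonal covariance of q(w_k)
x_bar = rng.normal(size=Q)                 # plays the role of x_bar_{nk}
b_bar = 0.7                                # plays the role of b_bar_{nk}

# Monte Carlo estimate of E[cos(w^T x_bar + b_bar)] under w ~ N(mu, Sigma)
w = rng.multivariate_normal(mu, Sigma, size=200000)
mc = np.cos(w @ x_bar + b_bar).mean()

# Closed form: exp(-0.5 x_bar^T Sigma x_bar) * cos(mu^T x_bar + b_bar)
closed = np.exp(-0.5 * x_bar @ Sigma @ x_bar) * np.cos(mu @ x_bar + b_bar)
print(mc, closed)  # should agree to roughly three decimal places
\end{verbatim}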
We get
\fl[\label{eq:exp_Phi}
\bigg( E_{q(\text{\boldmath$\omega$})}\big( \Phi \big) \bigg)_{n,k} &=
\sqrt{\frac{2\sigma_i^2}{K}} e^{-\frac{1}{2} \overline{\mathbf{x}}_{nk}^T \Sigma_k \overline{\mathbf{x}}_{nk}}
\notag \\ &\qquad \cdot
E_{q(b_k)} \big( \cos(\mu_k^T \overline{\mathbf{x}}_{nk} + \overline{b}_{nk}) \big)
\]
with the integration with respect to $q(b_k)$ trivial.
Next we evaluate $E_{q(\text{\boldmath$\omega$})}\big( \Phi^T\Phi \big)$, an $LK \times LK$ matrix:
\fl[\label{eq:exp_PhiT_Phi}
&\bigg( E_{q(\text{\boldmath$\omega$})}\big( \Phi^T\Phi \big) \bigg)_{i, j} \notag \\ &\quad
= \sum_{n=1}^N \frac{2 \sigma_i^2}{LK}
E_{q(\mathbf{w}_i, b_i, \mathbf{w}_j, b_j)}\big( \cos(\mathbf{w}_i^T \overline{\mathbf{x}}_{ni} + \overline{b}_{ni}) \notag\\
&\qquad\qquad\qquad\qquad\qquad\qquad \cdot \cos(\mathbf{w}_j^T \overline{\mathbf{x}}_{nj} + \overline{b}_{nj})\big)
\]
for $i, j \leq LK$.
For $i \neq j$, from independence we can break the expectation of each term into
\[
&E_{q(\mathbf{w}_i, b_i, \mathbf{w}_j, b_j)}\big(\cos(\mathbf{w}_i^T \overline{\mathbf{x}}_{ni} + \overline{b}_{ni}) \cos(\mathbf{w}_j^T \overline{\mathbf{x}}_{nj} + \overline{b}_{nj})\big)
\notag \\ &\quad =
E_{q(\mathbf{w}_i, b_i)}\big(\cos(\mathbf{w}_i^T \overline{\mathbf{x}}_{ni} + \overline{b}_{ni})\big)
\notag\\ &\quad\quad \cdot
E_{q(\mathbf{w}_j, b_j)}\big(\cos(\mathbf{w}_j^T \overline{\mathbf{x}}_{nj} + \overline{b}_{nj})\big),
\]
and for $i = j$,
\[
&E_{q(\mathbf{w}_i, b_i)}\big(\cos(\mathbf{w}_i^T \overline{\mathbf{x}}_{ni} + \overline{b}_{ni})^2\big) = \frac{1}{2} +
\notag \\ &\qquad\qquad
\frac{1}{2} e^{-2 \overline{\mathbf{x}}_{ni}^T \Sigma_i \overline{\mathbf{x}}_{ni}} E_{q(b_i)} \big( \cos(2\mu_i^T \overline{\mathbf{x}}_{ni} + 2\overline{b}_{ni}) \big)
\]
following identity \ref{identity:3}.
In conclusion, we obtained our optimisation objective:
\fl[\label{eq:lower_bound_factorised}
&\mathcal{L} =
\sum_{d=1}^D \bigg( -\frac{N}{2} \log(2 \pi \tau^{-1}) -\frac{\tau}{2} \mathbf{y}_d^T\mathbf{y}_d
\notag \\ &\qquad \qquad \quad
+\tau \mathbf{y}_d^T E_{q(\text{\boldmath$\omega$})}\big( \Phi \big) \mathbf{m}_d \notag\\
&\qquad \qquad \quad -\frac{\tau}{2}
\text{tr} \big(
E_{q(\text{\boldmath$\omega$})}(
\Phi^T\Phi
)
(
\mathbf{s}_d + \mathbf{m}_d \mathbf{m}_d^T
)
\big)
\bigg)
\notag \\ &\qquad
- \text{KL}(q(\mathbf{A}) || p(\mathbf{A})) - \text{KL}(q(\text{\boldmath$\omega$}) || p(\text{\boldmath$\omega$})).
\]
The KL divergence terms can be evaluated analytically for the Gaussian and uniform distributions.
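For reference, a small sketch (ours) of the two closed forms, assuming a diagonal
Gaussian $q(\mathbf{w}_k)$ with standard normal prior and a uniform $q(b_k)$ with
prior $\text{Unif}[0, 2\pi]$:
\begin{verbatim}
import numpy as np

def kl_gauss_std_normal(mu, sigma2_diag):
    # KL( N(mu, diag(sigma2_diag)) || N(0, I) )
    #   = 0.5 * ( sum(sigma2) + mu^T mu - k - sum(log sigma2) )
    k = mu.size
    return 0.5 * (np.sum(sigma2_diag) + mu @ mu - k - np.sum(np.log(sigma2_diag)))

def kl_unif(alpha, beta):
    # KL( Unif(alpha, beta) || Unif(0, 2*pi) ) = log( 2*pi / (beta - alpha) ),
    # assuming 0 <= alpha < beta <= 2*pi
    return np.log(2 * np.pi / (beta - alpha))

mu = np.array([0.3, -1.2]); sigma2 = np.array([0.5, 2.0])
print(kl_gauss_std_normal(mu, sigma2), kl_unif(0.5, 3.0))
\end{verbatim}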
\subsection{Optimal variational distribution over $\mathbf{A}$}
In the above we optimise over the variational parameters for $\mathbf{A}$, namely $\mathbf{m}_d$ and $\mathbf{s}_d$ for $d \leq D$. This allows us to attain a reduction in time complexity compared to previous approaches and use stochastic inference, as will be explained below. This comes with a cost, as the dependence between $\text{\boldmath$\omega$}$ and $\mathbf{A}$ can render the optimisation hard.
We can find the optimal variational distribution $q(\mathbf{A})$ analytically, which allows us to optimise $\text{\boldmath$\omega$}$ and the hyper-parameters alone. In proposition \ref{prop:1} in the appendix we show that the optimal variational distribution is given by
\[
q(\mathbf{a}_d) = \mathcal{N}(\text{\boldmath$\Sigma$} E_{q(\text{\boldmath$\omega$})}(\Phi^T) \mathbf{y}_d, ~\tau^{-1} \text{\boldmath$\Sigma$})
\]
with $\text{\boldmath$\Sigma$} = (E_{q(\text{\boldmath$\omega$})}(\Phi^T \Phi) + \tau^{-1} I)^{-1}$.
The lower bound to optimise then reduces to
\fl[\label{eq:lower_bound_opt_A}
&\mathcal{L} =
\sum_{d=1}^D \bigg( -\frac{N}{2} \log(2 \pi \tau^{-1}) -\frac{\tau}{2} \mathbf{y}_d^T\mathbf{y}_d
+ \frac{1}{2} \log(|\tau^{-1} \text{\boldmath$\Sigma$}|)
\notag\\
&\qquad \qquad \quad
+ \frac{1}{2} \tau \mathbf{y}_d^T E_{q(\text{\boldmath$\omega$})}(\Phi) \text{\boldmath$\Sigma$} E_{q(\text{\boldmath$\omega$})}(\Phi^T) \mathbf{y}_d
\bigg)
\notag \\ &\qquad
- \text{KL}(q(\text{\boldmath$\omega$}) || p(\text{\boldmath$\omega$})).
\]
\section{Distributed Inference and Stochastic Inference}
Evaluating $\mathcal{L}$ in equation \ref{eq:lower_bound_factorised} requires $\mathcal{O}(N K^2)$ time complexity (for fixed $Q,D$, diagonal $\mathbf{s}_d$, and covariance function with one component $L=1$). This stems from the term $E_{q(\text{\boldmath$\omega$})}\big( \Phi^T\Phi \big)$ -- a $K \times K$ matrix, where each element is composed of a sum over $N$.
Following the ideas of \citep{Gal2014DistributedB} we show that the approximation can be implemented in a distributed framework.
Write
\[
\mathcal{L}_{nd} &=
-\frac{1}{2} \log(2 \pi \tau^{-1}) -\frac{\tau}{2} y_{nd} y_{nd}
+\tau y_{nd} E_{q(\text{\boldmath$\omega$})}\big( \phi_n \big) \mathbf{m}_d \notag\\
&\qquad \quad -\frac{\tau}{2}
\text{tr} \big(
E_{q(\text{\boldmath$\omega$})}(
\phi_n^T\phi_n
)
(
\mathbf{s}_d + \mathbf{m}_d \mathbf{m}_d^T
)
\big)
\]
with $\phi_n = \phi(\mathbf{x}_n, \text{\boldmath$\omega$})$.
We can break the optimisation objective in equation \ref{eq:lower_bound_factorised} into a sum over $N$,
\fl[\label{eq:lower_bound_dist}
\mathcal{L} & =
\sum_{d=1}^D \sum_{n=1}^N \mathcal{L}_{nd}
- \text{KL}(q(\mathbf{A}) || p(\mathbf{A})) - \text{KL}(q(\text{\boldmath$\omega$}) || p(\text{\boldmath$\omega$})).
\]
These terms can be computed concurrently on different nodes in a distributed framework, requiring $\mathcal{O}(K^2)$ time complexity in each iteration. We can further break the computation of $\mathcal{L}_{nd}$ into a sum over $K$ as well, thus reducing the time complexity to $\mathcal{O}(K)$ with $K$ inducing points. This is in comparison to distributed inference with sparse pseudo-input GPs, which takes $\mathcal{O}(K^3)$ time complexity with $K$ inducing points, resulting from the covariance matrix inversion. This is a major advantage, as empirical results suggest that in many real-world applications the number of inducing points should scale with the data.
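A minimal sketch of one summand $\mathcal{L}_{nd}$ is given below (the names are ours, and $\mathbf{s}_d$ is assumed diagonal and stored as a vector); the double sum over $n$ and $d$ can then be mapped over worker nodes and the partial sums reduced before subtracting the two KL terms once:
\begin{verbatim}
import numpy as np

def term_nd(y_nd, Ephi_n, EptP_n, m_d, s_d, tau):
    # One summand L_nd: Ephi_n is E[phi_n] (length K),
    # EptP_n is E[phi_n^T phi_n] (K x K), s_d a diagonal stored as a vector.
    out = -0.5 * np.log(2 * np.pi / tau) - 0.5 * tau * y_nd ** 2
    out += tau * y_nd * (Ephi_n @ m_d)
    out -= 0.5 * tau * np.trace(EptP_n @ (np.diag(s_d) + np.outer(m_d, m_d)))
    return out
\end{verbatim}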
We can exploit the above representation and perform stochastic variational inference (SVI) by approximating the objective with a subset of the data, resulting in noisy gradients \citep{Hoffman2013Stochastic}.
Here we use as our objective
\fl[\label{eq:lower_bound_SVI}
\mathcal{L} &\approx
\frac{N}{|S|}
\sum_{d=1}^D \sum_{n \in S} \mathcal{L}_{nd}
\notag \\ &\qquad
- \text{KL}(q(\mathbf{A}) || p(\mathbf{A})) - \text{KL}(q(\text{\boldmath$\omega$}) || p(\text{\boldmath$\omega$})).
\]
with a mini-batch $S$ of randomly selected points. This is an unbiased estimator of the lower bound. The time complexity of each iteration is $\mathcal{O}(|S|K^2)$ with $|S| \ll N$ the size of the random subset, compared to $\mathcal{O}(|S|K^2+K^3)$ for GP SVI using the sparse pseudo-input approximation \citep{hensman2013Gaussian}.
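A corresponding sketch of the mini-batch estimator, reusing the per-point term \texttt{term\_nd} from the sketch above (again with names of our choosing):
\begin{verbatim}
def svi_objective(batch_idx, N, Y, Ephi, EptP, M, S, tau, kl_A, kl_omega):
    # Unbiased mini-batch estimate of the lower bound.
    # Ephi: N x K, EptP: N x K x K, M and S: K x D.
    total = 0.0
    for n in batch_idx:
        for d in range(Y.shape[1]):
            total += term_nd(Y[n, d], Ephi[n], EptP[n], M[:, d], S[:, d], tau)
    return (N / len(batch_idx)) * total - kl_A - kl_omega
\end{verbatim}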
\section{Predictive Distribution}
The approximate predictive distribution for a point $\mathbf{m}athbf{x}^*$ is given by equation \ref{eq:approx_pred_dist}.
Denoting $\mathbf{M} = [\mathbf{m}_d]_{d=1}^D$, we have
\fl[\label{eq:pred_mean}
E_{q(\mathbf{y}^* | \mathbf{x}^*)}(\mathbf{y}^*) = E_{q(\text{\boldmath$\omega$})} \big( \phi_* \big) \mathbf{M}
\]
following proposition \ref{prop:4} in the appendix.
The variance of the predictive distribution is given by
\fl[\label{eq:pred_uncertainty}
&\text{Var}_{q(\mathbf{y}^* | \mathbf{x}^*)}(\mathbf{y}^*)
=
\tau^{-1}\mathbf{I}_D + \Psi \\
&\qquad + \mathbf{M}^T \big( E_{q(\text{\boldmath$\omega$})}\big(\phi_*^T \phi_*\big) - E_{q(\text{\boldmath$\omega$})} \big( \phi_* \big)^T E_{q(\text{\boldmath$\omega$})} \big( \phi_* \big)\big) \mathbf{M}\notag
\]
with $\Psi_{i,j} = \text{tr} \big( E_{q(\text{\boldmath$\omega$})}\big(\phi_*^T \phi_*\big) \cdot \mathbf{s}_i \big) \cdot \mathds{1}[i=j]$, following proposition \ref{prop:5} in the appendix ($\mathds{1}$ is the indicator function).
When the optimal variational distribution over $\mathbf{A}$ is used, we have $\mathbf{M} = \text{\boldmath$\Sigma$} E_{q(\text{\boldmath$\omega$})}(\Phi^T) \mathbf{Y}$ and $\mathbf{s}_i = \tau^{-1} \text{\boldmath$\Sigma$}$ for all $i$.
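The predictive moments can be assembled as in the following sketch (ours; the $\mathbf{s}_d$ are assumed diagonal and stored as the columns of \texttt{S}):
\begin{verbatim}
import numpy as np

def predict(Ephi_star, EptP_star, M, S, tau):
    # Ephi_star: E[phi_*] as a 1 x K matrix; EptP_star: E[phi_*^T phi_*], K x K;
    # M: K x D means of q(a_d); S: K x D diagonal covariances.
    D = M.shape[1]
    mean = Ephi_star @ M                                  # 1 x D
    Psi = np.diag([np.trace(EptP_star @ np.diag(S[:, d])) for d in range(D)])
    gap = EptP_star - Ephi_star.T @ Ephi_star
    var = np.eye(D) / tau + Psi + M.T @ gap @ M
    return mean, var
\end{verbatim}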
\section{Properties of the Approximate Model}
We have presented a variational sparse spectrum approximation to the Gaussian process (VSSGP in short). We presented three approximate models with different lower bounds: an approximate model with an optimal variational distribution over $\mathbf{A}$ (equation \ref{eq:lower_bound_opt_A}, referred to as VSSGP), an approximate model with a \textit{factorised} lower bound over the data points (equations \ref{eq:lower_bound_factorised}, \ref{eq:lower_bound_dist}, referred to as factorised VSSGP -- fVSSGP), and an approximation to the lower bound of the factorised VSSGP for use in stochastic optimisation over subsets of the data (equation \ref{eq:lower_bound_SVI}, referred to as stochastic factorised VSSGP -- sfVSSGP).
The VSSGP model generalises some of the GP approximations presented in the introduction.
Fixing $\Sigma_k$ at zero in our approximate model (as well as $\alpha_k$ and $\beta_k$ at $0$ and $2\pi$) and optimising only $\mu_k$ results in the \textit{sparse spectrum approximation}.
Randomising $\mu_k$, we obtain the \textit{random projections} approximation \citep{rahimi2007random}.
Indeed, for $\Sigma_k = \mathbf{0}$ and fixed phases we have that $E_{q(\text{\boldmath$\omega$})}\big( \Phi^T\Phi \big) = E_{q(\text{\boldmath$\omega$})}\big( \Phi^T \big) E_{q(\text{\boldmath$\omega$})}\big( \Phi \big)$ and $E_{q(\text{\boldmath$\omega$})}\big( \Phi \big) = \Phi$, and equation \ref{eq:lower_bound_opt_A} recovers equation 8 in \cite{lazaro2010sparse}.
The points $\mathbf{z}_k$ act as \textit{inducing inputs} with $\mathbf{w}_k$ and $b_k$ acting as \textit{inducing frequencies and phases} at these inputs. This is similar to the \textit{sparse pseudo-input} approximation, but instead of having inducing values in the output space, we have the inducing values in the frequency domain. These are necessary for the approximation. Without these points (or equivalently, setting them to $\mathbf{0}$), the features would decay quickly for data points far from the origin (the fixed point $\mathbf{0}$).
~\\
The distribution over the frequencies is optimised to fit the data well. The prior is used to regularise the fit and avoid over-fitting to the data. This approximation can be used to learn covariance functions by fitting them to the data. This is similar to the ideas presented in \citep{DuvLloGroetal13}, where the structure of a covariance function is sought by searching over possible compositions of simpler covariance functions. This can give additional insight into the data. In \citep{DuvLloGroetal13} the structure of the covariance composition is used to explain the data; in the approximation presented here, the spectrum of the covariance function can be used to explain the data.
It is interesting to note that although the approximated covariance function $K(\mathbf{x}, \mathbf{y})$ has to be stationary (i.e.\ it is represented as $K(\mathbf{x},\mathbf{y}) = K(\mathbf{x}-\mathbf{y})$), the approximate posterior is not. This is in contrast to the SSGP, which results in a stationary approximation.
Furthermore, unlike the SSGP, our approximation is not periodic. This is one of the theoretical limitations of the sparse spectrum approximation. The limitation arises from the fact that the covariance is represented as a weighted sum of cosines in the SSGP. In our approximation this is avoided by decaying the cosines to zero.
This and other properties of the approximation are discussed further in discussion \ref{dis:1} in the appendix.
\section{Experiments}
We next study the properties of the VSSGP and compare it to alternative approximations, showing its advantages.
We compare the VSSGP to the full Gaussian process (denoted Full GP), the sparse spectrum GP approximation (denoted SSGP), a sparse pseudo-input GP approximation (denoted SPGP), and the random projections approximation (denoted RP). We compare the VSSGP to the fVSSGP and sfVSSGP that offer improved time complexity. We further compare sfVSSGP to the existing sparse pseudo-input GP approach used with SVI \citep[denoted sSPGP, ][]{hensman2013Gaussian}.
We inspect the model's time-accuracy trade-off and show that it avoids over-fitting as the number of parameters increases.
\subsection{VSSGP Properties}
We evaluate the predictive mean and uncertainty of the VSSGP on the atmospheric CO$_2$ concentrations dataset derived from in situ air samples collected at Mauna Loa Observatory, Hawaii \citep{Keeling2004}. We fit the approximate model using a spectral mixture covariance function with two components initialised with periods $[5, \infty]$ and corresponding initial length-scales $[0.1, 1000]$ (resulting in a sum of SE $\times$ periodic and SE covariances). We randomised the phases following the Monte Carlo integration (instead of optimising a variational distribution over these\footnote{This seems to work better in practice.}) and initialised the frequencies at random.
We use 10 inducing inputs for each component ($K=10$), set the observation noise precision to $\tau = 10$, and the covariance noise to $\sigma^2 = 1$. LBFGS \citep{zhu1997algorithm} was used to optimise the objective given in equation \ref{eq:lower_bound_opt_A}, and was run for 500 iterations.
\begin{figure}
\caption{Predictive mean and uncertainty on the Mauna Loa CO$_2$ concentrations dataset.
In red is the observed function; in blue is the predictive mean plus/minus two standard deviations. In this example the approximating distribution is used with a spectral mixture covariance with two components ($L=2$, $K=10$). }
\label{fig:exp1}
\end{figure}
Figure \ref{fig:exp1} shows the predictive mean with the predictive uncertainty increasing far from the data. This is a property shared with the SE GP. The covariance hyper-parameters optimise to periods of $[9.8, \infty]$, length-scales $[0.09, 54]$, and covariance noise $[0.0043, 5.7]$, correspondingly. The frequency with the smallest standard-deviation (highest confidence) for the first component is $1$ (corresponding to a period of $1$ year, capturing the short term behaviour). For the second component these are $0.0053, 0.00065$ (corresponding to periods of $185$ and $1536$ years capturing the long term behaviour).
\subsection{Comparison to Existing GP Approximations}
We compare various GP approximations on the solar irradiance dataset \citep{Lean2004}. We scaled the dataset by dividing by the data standard deviation, and removed 5 segments of length 20.
We followed the experiment set-up of the previous section and used the same initial parameters for all approximate models. Instead of the SM covariance function we used a single SE covariance function with length-scale $l=1$, and used 50 inducing inputs. LBFGS was used for 1000 iterations. The RP model was run twice with two different settings: once following the same set-up as the other models, optimising over the model hyper-parameters (RP$_1$), and once keeping all hyper-parameters fixed and setting the observation noise precision to $\tau = 100$ with $K=500$ inducing inputs\footnote{This follows the usual use of the model in the randomised methods community. We experimented with various values of $\tau$ and decided to use 100.} (RP$_2$).
\begin{figure}
\caption{Predictive mean and uncertainty on the reconstructed solar irradiance dataset with missing segments, for the GP and various GP approximations. In red is the observed function and in green are the missing segments. In blue is the predictive mean plus/minus two standard deviations of the various approximations. All tests were done with the SE covariance function, and all sparse approximations use $K=50$ inducing inputs (apart from RP$_2$ with $K=500$).}
\label{fig:exp2a}
\end{figure}
Figure \ref{fig:exp2a} shows qualitatively the predictive mean and uncertainty of the various approaches.
SSGP and RP seem to over-fit the function using high frequencies with high confidence.
SPGP seems to under-fit the function, but has accurate predictive mean and uncertainty at points where many inducing inputs lie (such as the flat region).
VSSGP's predictive mean resembles that of the full GP, but with increased uncertainty throughout the space. Further, its uncertainty on the missing segments is smaller than that of the full GP (some frequencies have low uncertainty and are thus used near the data).
The full GP learnt length-scale is $4$. VSSGP learnt a length-scale of $3$, and SPGP learnt a length-scale of $5$. SSGP and RP$_1$ learnt length-scales of $0.97,1.66$, i.e.\ the hyper-parameter optimisation found a local minimum.
\begin{table}[t!]
\center
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
Solar & SPGP & SSGP & RP$_1$ & RP$_2$ & GP & \textbf{VSSGP} \\
\hline
Train & 0.23 & 0.15 & 0.32 & 0.04 & 0.08 & \textbf{0.13} \\
\hline
Test & 0.61 & 0.63 & 0.65 & 0.76 & 0.50 & \textbf{0.41} \\
\hline
\end{tabular}
\caption{Imputation RMSE on both train and test sets, for the reconstructed solar irradiance dataset. All tests were done with the SE covariance function, and all sparse approximations use 50 inducing inputs (apart from RP$_2$ that uses $K=500$).}
\label{table:exp2b}
\end{table}
Table \ref{table:exp2b} gives a quantitative comparison of the different approximations for the task of imputation. RMSE (root mean square error) of the approximate predictive mean on the missing segments was computed (test error), as well as the RMSE on the observed function (training error). Note that the full GP seems to get worse results than VSSGP. This might be because of the (slightly) larger learnt length-scale.
\subsection{From SSGP to Variational SSGP}
We use variational inference in the VSSGP to avoid over-fitting to the data, a behaviour that is often observed with the SSGP.
To test this we perform a direct comparison of the proposed approximate model to SSGP on the task of audio signal imputation.
For this experiment we used a short speech signal with 1000 samples taken from the TIMIT dataset \citep{garofolo1993timit}. We removed 5 segments of length 40 from the signal, and evaluated the imputation error (RMSE) of the predictive mean with $K=100$ inducing points. We used the same experiment set-up as before with a sum of 2 SE covariance functions with length-scales $l=[2,10]$ and observation noise precision $\tau = 1000$ matching the signal magnitude. LBFGS was run for 1000 iterations. The experiment was repeated 5 times and the results averaged.
\begin{table*}[t!]
\center
\begin{tabular}{|c|c|c|c|}
\hline
Audio 1K & VSSGP & fVSSGP & sfVSSGP \\
\hline
Train & $\mathbf{0.0062 \pm 0.00048}$ $_{(0.063 \pm 0.0068)}$ & $\mathbf{0.0054 \pm 0.00083}$ $_{(0.055 \pm 0.0088)}$ & $\mathbf{0.005 \pm 0.003}$ $_{(0.052 \pm 0.031)}$ \\
\hline
Test & $\mathbf{0.034 \pm 0.0043}$ $_{(0.17 \pm 0.022)}$ & $\mathbf{0.038 \pm 0.0049}$ $_{(0.22 \pm 0.028)}$ & $\mathbf{0.04 \pm 0.0066}$ $_{(0.24 \pm 0.0089)}$ \\
\hline
\end{tabular}
\caption{Imputation RMSE (and in smaller font STFT RMSE) on train and test sets, for a speech signal segment of length 1K ($K=100$).}
\label{table:exp3}
\end{table*}
\begin{table}[h]
\center
\begin{tabular}{|c|c|c|}
\hline
Audio 1K & SSGP & \textbf{VSSGP} \\
\hline
Train & $0.0091 \pm 0.0042$ & $\mathbf{0.0062 \pm 0.00048}$ \\
\hline
Test & $0.088 \pm 0.033$ & $\mathbf{0.034 \pm 0.0043}$ \\
\hline
\end{tabular}
\caption{Imputation RMSE on both train and test sets, for a speech signal segment of length 1K ($K=100$).}
\label{table:exp2c}
\end{table}
Table \ref{table:exp2c} shows the RMSE of the training set and test set for the audio data. SSGP seems to achieve a small training error but cannot generalise well to unseen audio segments. VSSGP attains a slightly lower training error, and is able to impute unseen audio segments with better accuracy.
It is interesting to note that using the RMSE of the short-time Fourier transform of the original signal and the predicted mean (STFT, the common metric for audio imputation, with 25ms frame size and a hop size of 12ms), the SSGP model attains a training error of $0.094 \pm 0.05$ and a test error of $0.55 \pm 0.41$. The VSSGP attains a training error of $0.067 \pm 0.0067$ with a test error of $0.17 \pm 0.022$. For comparison, baseline performance of predicting 0 attains an error of $0.44$ on the training set and an error of $0.38$ on the test set.
\subsection{VSSGP, factorised VSSGP, and stochastic factorised VSSGP}
VSSGP, fVSSGP, and sfVSSGP all rely on different lower bounds to the same approximate model. Whereas VSSGP solves for the variational distribution over the Fourier coefficients analytically, fVSSGP optimises over these quantities. This reduces the time complexity, but at the price of potentially worsened performance. sfVSSGP further employs an approximation to the lower bound using random subsets of the data -- following the idea that not all data points have to be observed for a good fit to be found. This assumption has the potential to hinder performance even further. We next assess these trade-offs.
We repeated the experimental set-up of the previous section (and use the same RMSE for VSSGP). We optimise both fVSSGP and sfVSSGP for 5000 iterations instead of the 1000 of VSSGP. This is because the improved time complexity allows us to perform more function evaluations within the same time-frame. We optimise the fVSSGP lower bound with LBFGS, and the sfVSSGP lower bound with RMSPROP \citep{Tieleman2012COURSERA}. RMSPROP performs stochastic optimisation with no need for learning-rate tuning -- the learning rate changes adaptively based on the directions of the last two gradients.
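For concreteness, the sketch below shows a generic RMSPROP-style update in its standard form, with a running average of squared gradients (this is an illustration rather than necessarily the exact variant used in our experiments, and the names are ours):
\begin{verbatim}
import numpy as np

def rmsprop_step(theta, grad, cache, lr=1e-3, decay=0.9, eps=1e-8):
    # Running average of squared gradients; ascent step on the lower bound.
    cache = decay * cache + (1.0 - decay) * grad ** 2
    theta = theta + lr * grad / (np.sqrt(cache) + eps)
    return theta, cache
\end{verbatim}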
Table \ref{table:exp3} shows the RMSE for the train and test sets. Both fVSSGP and sfVSSGP effectively achieve the same test set accuracy (taking the standard deviation into account). We also see a slight decrease in train set RMSE.
\subsection{Stochastic Variational Inference}
We compared sfVSSGP to the SPGP approximation with stochastic variational inference \citep[sSPGP, ][]{hensman2013Gaussian}. We used the same audio experiment as above, but with a signal of length 16000. 25 random segments of length 80 were removed from the signal.
sSPGP's time complexity ($\mathcal{O}(S K^2 + K^3)$ with a mini-batch of size $S$ and $K$ inducing points) prohibits it from being used with a large number of inducing points. We therefore used 800 inducing points for sSPGP and 400 inducing inputs for each component in the covariance function of sfVSSGP ($K=400$).
\begin{figure}
\caption{Mean and standard deviation for train error, test error, and running time, all as functions of the number of inducing points ($K$) for a speech signal segment of length 4K.}
\label{fig:exp3}
\end{figure}
The RMSE of sSPGP for the training set is $0.043$ and for the test set is $\textbf{0.034}$ (with a training time of 133 minutes using GPy \citep{gpy2014}). The RMSE of sfVSSGP for the training set is $0.016$ and for the test set is $\textbf{0.034}$ (with a training time of 48 minutes). Using the same audio imputation metric as in the previous section, we get that the STFT RMSE for the sSPGP on the training set is $0.54$ and on the test set is $\textbf{0.43}$. The STFT RMSE for the VSSGP on the training set is $0.18$ with a test error of $\textbf{0.3}$.
For comparison again, baseline performance of predicting 0 attains an error of $0.52$ on the training set and an error of $\textbf{0.62}$ on the test set.
\subsection{Speed-Accuracy Trade-off}
We inspect the speed-accuracy trade-off of the approximation (RMSE as a function of the number of inducing points) for the sfVSSGP approximation.
We repeat the same audio experiment set-up with a speech signal with 4000 samples and evaluate the imputation error (RMSE) of the predictive mean with various numbers of inducing points. RMSPROP was run for 500 iterations. The experiment was repeated 5 times and the results averaged.
Figure \ref{fig:exp3} shows that the approximation offers increased accuracy with an increasing number of inducing points. No further improvement is achieved with more than 400 inducing points. The time scales quadratically with the number of inducing points.
Note that the approximation does not over-fit to the data as the number of parameters increases.
\section{Discussion}
Our approximate inference relates to the Bayesian neural network \citep[\textit{Bayesian NN}, ][]{mackay1992evidence,mackay1992practical}.
In the Bayesian NN a prior distribution is placed over the weights of an NN, and a posterior distribution (over the weights and outputs) is sought.
The model offers a Bayesian interpretation to the classic NN, with the desired property of uncertainty estimates on the outputs.
Inference in Bayesian NNs is generally hard, and approximations to the model are often used \citep[pp 277-290]{Bishop2006Pattern}.
Our GP approximate inference relates Bayesian NNs and GPs, and can be seen as a method for tractable variational inference in Bayesian NNs with a single hidden layer.
Future research includes the extension of our approximation to deep GPs \citep{damianou2012deep}. We also aim to use the approximate model as a method for adding and removing units in an NN in a principled way. Lastly, we aim to replace the cosines in the Fourier expansion with alternative basis functions and study the resulting approximate model.
\appendix
\section{Appendix}
\begin{identity}\label{identity:1}
\[
\cos(x-y)
=
\int_{0}^{2\pi} \frac{1}{2\pi}
\sqrt{2} \cos(x + b) \sqrt{2} \cos(y + b) \text{d} b
\]
\end{identity}
\begin{proof}
We first evaluate the term inside the integral. We have
\[
&\cos(x+b)\cos(y+b) \\
&\quad = (\cos(x)\cos(b) - \sin(x)\sin(b)) \\
&\quad \qquad \cdot(\cos(y)\cos(b) - \sin(y)\sin(b)) \\
&\quad = (\cos(x)\cos(y)) \cos^2(b) + (\sin(x)\sin(y)) \sin^2(b) \\
&\quad \qquad - (\sin(x)\cos(y) + \cos(x)\sin(y)) \sin(b) \cos(b).
\]
Now, since $\int \cos^2(b) \text{d} b = \frac{b}{2} + \frac{1}{4} \sin(2b)$, as well as $\int \sin^2(b) \text{d} b = \frac{b}{2} - \frac{1}{4} \sin(2b)$, and $\int \sin(b) \cos(b) \text{d} b = - \frac{1}{4} \cos(2b)$, we have
\[
&\int_{0}^{2\pi} \frac{1}{2\pi}
\sqrt{2} \cos(x + b) \sqrt{2} \cos(y + b) \text{d} b \\
&\quad = \frac{1}{\pi} \big(\cos(x) \cos(y)(\pi - 0) \\
&\quad\qquad + \sin(x)\sin(y)(\pi - 0) \\
&\quad\qquad - (\sin(x) \cos(y) + \cos(x) \sin(y)) \cdot 0 \big) \\
&\quad = \cos(x-y).
\]
\end{proof}
\begin{identity}\label{identity:2}
\[
E_{\mathcal{N}(\mathbf{w}; \mu, \Sigma)} \big( \cos(\mathbf{w}^T \mathbf{x} + b) \big) =
e^{-\frac{1}{2} \mathbf{x}^T \Sigma \mathbf{x}} \cos(\mu^T \mathbf{x} + b)
\]
\end{identity}
\begin{proof}
We rely on the characteristic function of the Gaussian distribution to prove this identity.
\[
&E_{\mathcal{N}(\mathbf{w}; \mu, \Sigma)} \big( \cos(\mathbf{w}^T \mathbf{x} + b) \big) \notag \\
&\quad=
\mathbb{R}e \bigg( e^{ib} E_{\mathcal{N}(\mathbf{w}; \mu, \Sigma)} \big( e^{i \mathbf{w}^T \mathbf{x}} \big) \bigg) \notag \\
&\quad=
\mathbb{R}e(e^{ib} e^{i \mu^T \mathbf{x} - \frac{1}{2} \mathbf{x}^T \Sigma \mathbf{x}}) \notag \\
&\quad=
e^{-\frac{1}{2} \mathbf{x}^T \Sigma \mathbf{x}} \cos(\mu^T \mathbf{x} + b)
\]
where $\mathbb{R}e(\cdot)$ is the real part function, and the transition from the second line to the third uses the characteristic function of a multivariate Gaussian distribution.
\end{proof}
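The identity can also be checked numerically; the following short Monte Carlo script (ours, with arbitrary test values) compares the empirical average with the closed form:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
Q = 3
mu, x, b = rng.normal(size=Q), rng.normal(size=Q), 0.7
A = rng.normal(size=(Q, Q))
Sigma = A @ A.T                          # a random covariance matrix
W = rng.multivariate_normal(mu, Sigma, size=200000)
mc = np.mean(np.cos(W @ x + b))          # Monte Carlo estimate
closed = np.exp(-0.5 * x @ Sigma @ x) * np.cos(mu @ x + b)
print(mc, closed)                        # the two values should agree closely
\end{verbatim}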
\begin{identity}\label{identity:3}
\[
&E_{\mathcal{N}(\mathbf{w}; \mu, \Sigma)} \big( \cos(\mathbf{w}^T \mathbf{x} + b)^2 \big) \notag\\
&\qquad \qquad \qquad = \frac{1}{2} e^{-2\mathbf{x}^T \Sigma \mathbf{x}} \cos(2\mu^T \mathbf{x} + 2b)
+ \frac{1}{2}
\]
\end{identity}
\begin{proof}
Following the identity $\cos(\theta)^2 = \frac{\cos(2\theta)+1}{2}$,
\[
&E_{\mathcal{N}(\mathbf{w}; \mu, \Sigma)} \big(
\cos(\mathbf{w}^T \mathbf{x} + b)^2
\big)
\notag \\ &=
\frac{1}{2} E_{\mathcal{N}(\mathbf{w}; \mu, \Sigma)} \big(
\cos(2\mathbf{w}^T \mathbf{x} + 2b)
\big)
+ \frac{1}{2}
\notag \\ &=
\frac{1}{2} e^{-2\mathbf{x}^T \Sigma \mathbf{x}} \cos(2\mu^T \mathbf{x} + 2b)
+ \frac{1}{2}
\]
\end{proof}
\begin{proposition}\label{prop:2}
Given a sum of covariance functions with $L$ components (with each corresponding to $\Phi_i$ an $N \times K$ matrix) we have $\Phi = [\Phi_i]_{i=1}^L$ an $N \times LK$ matrix.
\end{proposition}
\begin{proof}
We extend the derivation of equation \ref{eq:Y_given_A_X_o} to sums of covariance functions. Given a sum of covariance functions with $L$ components
\[
K(\mathbf{x}, \mathbf{y}) = \sum_{i=1}^L \sigma_i^2 K_i(\mathbf{x},\mathbf{y}),
\]
following equation \ref{eq:bochner}
we have
\[
K(\mathbf{x}, \mathbf{y}) = \sum_{i=1}^L \int_{\mathbb{R}^Q} \sigma_i^2 p_i(\mathbf{w}) \cos(2 \pi \mathbf{w}^T(\mathbf{x}-\mathbf{y})) \text{d} \mathbf{w},
\]
where we write $\sigma_i^2$ instead of $\sigma \sigma_i^2$ for brevity (with $\sigma_i^2$ not having to sum to one).
Following the derivations of equation \ref{eq:Y_given_A_X_o}, for each component $i$ in the sum we get $\Phi_i$ an $N \times K$ matrix.
Writing $\Phi = [\Phi_i]_{i=1}^L$ an $N \times LK$ matrix, we have that the sum of covariance matrices can be expressed with a single term after marginalizing $\mathbf{F}$ out,
\[
\sum_{i=1}^L \Phi_i \Phi_i^T + \tau^{-1} \mathbf{I} = \Phi \Phi^T + \tau^{-1} \mathbf{I},
\]
thus identity \ref{eq:Y_given_A_X_o} still holds.
\end{proof}
\begin{proposition}\label{prop:3}
Performing a change of variables in the SM covariance function with a single component results in $p(\mathbf{w})$ being a standard normal distribution, with the covariance function hyper-parameters expressed in $\Phi$.
\end{proposition}
\begin{proof}
The SM covariance function's corresponding probability measure $p(\mathbf{w})$ is expressed as a mixture of Gaussians,
\[
p(\mathbf{w}) &= \sum_{i=1}^L \sigma_i^2
\prod_{q=1}^Q \sqrt{2 \pi} l_{iq}
e^{-\frac{(2 \pi l_{iq})^2}{2}
(w_q - \frac{1}{p_{iq}})^2} \notag \\
&=
\sum_{i=1}^L \sigma_i^2
\mathcal{N}( \mathbf{w}; \overline{\p}_i, \mathbf{L}_i^{-2} ),
\]
with $\sigma_i^2$ summing to one.
Following equation \ref{eq:bochner} with the above $p(\mathbf{w})$, we perform a change of variables to get
\[
&K(\mathbf{x},\mathbf{y}) \notag\\
&\quad = \sum_{i=1}^L
\int_{\mathbb{R}^Q} \sigma_i^2 \mathcal{N}( \mathbf{w}'; \overline{\p}_i, \mathbf{L}_i^{-2} ) \cos(2 \pi \mathbf{w}'^T(\mathbf{x}-\mathbf{y})) \text{d} \mathbf{w}' \notag\\
&\quad = \sum_{i=1}^L
\int_{\mathbb{R}^Q} \sigma_i^2 \mathcal{N}( \mathbf{w}; \mathbf{0}, \mathbf{I} ) \cos(2 \pi (\mathbf{L}_i^{-1} \mathbf{w} + \overline{\p}_i)^T(\mathbf{x}-\mathbf{y})) \notag\\
&\qquad\qquad\qquad\quad \cdot \text{d} \mathbf{w}
\]
for $\mathbf{w}' = \mathbf{L}_i^{-1} \mathbf{w} + \overline{\p}_i$.
For each component $i$ we get $\Phi_i$ an $N \times K$ matrix with elements
\[
\sqrt{\frac{2\sigma_i^2}{K}} \cos \big(2 \pi (\mathbf{L}_i^{-1} \mathbf{w}_k + \overline{\p}_i)^T (\mathbf{x} - \mathbf{z}_k) + b_k \big),
\]
where, for simplicity, we index $\mathbf{w}_k$ and $b_k$ with $k=1,...,LK$ as a function of $i$.
\end{proof}
\begin{proposition}\label{prop:1}
Let $p(\mathbf{a}) = \mathcal{N}(\mathbf{0}, \mathbf{I})$. The optimal distribution $q(\mathbf{a})$ maximising
\[
&\int q(\mathbf{a}) \int q(\text{\boldmath$\omega$}) \log p(\mathbf{y} | \mathbf{a}, \mathbf{X}, \text{\boldmath$\omega$}) \text{d} \text{\boldmath$\omega$} \text{d} \mathbf{a} \notag\\
&\qquad \qquad \qquad \qquad
- \text{KL}(q(\mathbf{a}) || p(\mathbf{a})) - \text{KL}(q(\text{\boldmath$\omega$}) || p(\text{\boldmath$\omega$}))
\]
is given by
\[
q(\mathbf{a}_d) = \mathcal{N}(\text{\boldmath$\Sigma$} E_{q(\text{\boldmath$\omega$})}(\Phi^T) \mathbf{y}_d, ~\tau^{-1} \text{\boldmath$\Sigma$})
\]
with $\text{\boldmath$\Sigma$} = (E_{q(\text{\boldmath$\omega$})}(\Phi^T \Phi) + \tau^{-1} I)^{-1}$.
The lower bound to optimise then reduces to
\[
&\mathcal{L} =
\sum_{d=1}^D \bigg( -\frac{N}{2} \log(2 \pi \tau^{-1}) -\frac{\tau}{2} \mathbf{y}_d^T\mathbf{y}_d
\notag \\ &\qquad \qquad \quad
+ \frac{1}{2} \log(|\tau^{-1} \text{\boldmath$\Sigma$}|)
\notag\\
&\qquad \qquad \quad
+ \frac{1}{2} \tau \mathbf{y}_d^T E_{q(\text{\boldmath$\omega$})}(\Phi) \text{\boldmath$\Sigma$} E_{q(\text{\boldmath$\omega$})}(\Phi^T) \mathbf{y}_d
\bigg)
\notag \\ &\qquad
- \text{KL}(q(\text{\boldmath$\omega$}) || p(\text{\boldmath$\omega$})).
\]
\end{proposition}
\begin{proof}
Let
\[
&\mathcal{L} = \int q(\mathbf{a}) \int q(\text{\boldmath$\omega$}) \log p(\mathbf{y} | \mathbf{a}, \mathbf{X}, \text{\boldmath$\omega$}) \text{d} \text{\boldmath$\omega$} \text{d} \mathbf{a} \notag\\
&\qquad \qquad
- \int q(\mathbf{a}) \log \frac{q(\mathbf{a})}{p(\mathbf{a})} \text{d} \mathbf{a} - \int q(\text{\boldmath$\omega$}) \log \frac{q(\text{\boldmath$\omega$})}{p(\text{\boldmath$\omega$})} \text{d} \text{\boldmath$\omega$}.
\]
We want to solve
\[
\frac{\text{d} \big(\mathcal{L} + \lambda (\int q(\mathbf{a}) \text{d} \mathbf{a} - 1)\big) }{\text{d} q(\mathbf{a})} = 0
\]
for some $\lambda$. I.e.\
\[
\int q(\text{\boldmath$\omega$}) \log p(\mathbf{y} | \mathbf{a}, \mathbf{X}, \text{\boldmath$\omega$}) \text{d} \text{\boldmath$\omega$} - \log \frac{q(\mathbf{a})}{p(\mathbf{a})} - 1 + \lambda = 0.
\]
This means that
\[
q(\mathbf{a}) &= e^{\lambda - 1} e^{\int q(\text{\boldmath$\omega$}) \log p(\mathbf{y} | \mathbf{a}, \mathbf{X}, \text{\boldmath$\omega$}) \text{d} \text{\boldmath$\omega$}} p(\mathbf{a}) \\
&=
\exp \bigg(
-\frac{1}{2} \mathbf{a}^T \tau(E(\Phi^T \Phi) + \tau^{-1}I) \mathbf{a} \\
&\qquad\qquad\qquad\qquad\qquad\qquad + \big( \tau \mathbf{y}^T E(\Phi) \big) \mathbf{a} + ...
\bigg)
\]
and since $q(\mathbf{m}athbf{a})$ is Gaussian, it must be equal to
\[
q(\mathbf{a}) = \mathcal{N}(\text{\boldmath$\Sigma$} E_{q(\text{\boldmath$\omega$})}(\Phi^T) \mathbf{y}, ~\tau^{-1} \text{\boldmath$\Sigma$})
\]
with $\text{\boldmath$\Sigma$} = (E_{q(\text{\boldmath$\omega$})}(\Phi^T \Phi) + \tau^{-1} I)^{-1}$.
Writing $p(\mathbf{a})$ and $q(\mathbf{a})$ explicitly and simplifying results in the required lower bound.
\end{proof}
\begin{proposition}\label{prop:4}
Denoting $\mathbf{M} = [\mathbf{m}_d]_{d=1}^D$, we have
\[
E_{q(\mathbf{y}^* | \mathbf{x}^*)}(\mathbf{y}^*) = E_{q(\text{\boldmath$\omega$})} \big( \phi_* \big) \mathbf{M}.
\]
\end{proposition}
\begin{proof}
The $d$th output $y_d^*$ of the mean of the distribution is given by (writing $\phi_* = \phi(\mathbf{x}^*, \text{\boldmath$\omega$})$)
\[
E_{q(y_d^* | \mathbf{x}^*)}(y_d^*)
&= \int y_d^* p(y_d^* | \mathbf{x}^*, \mathbf{A}, \text{\boldmath$\omega$}) q(\mathbf{A}, \text{\boldmath$\omega$}) \text{d} \mathbf{A} \text{d} \text{\boldmath$\omega$} \text{d} y_d^* \notag\\
&\quad = \int \big( \phi_* \mathbf{a}_d \big) q(\mathbf{A}, \text{\boldmath$\omega$}) \text{d} \mathbf{A} \text{d} \text{\boldmath$\omega$} \notag\\
&\quad =
\int \phi_* q(\text{\boldmath$\omega$}) \text{d} \text{\boldmath$\omega$}
\int \mathbf{a}_d q(\mathbf{A}) \text{d} \mathbf{A} \notag\\
&\quad =
E_{q(\text{\boldmath$\omega$})} \big( \phi_* \big) \mathbf{m}_d,
\]
which can be evaluated analytically following equation \ref{eq:exp_Phi}.
\end{proof}
\begin{proposition}\label{prop:5}
The variance of the predictive distribution is given by
\[
&\text{Var}_{q(\mathbf{y}^* | \mathbf{x}^*)}(\mathbf{y}^*)
=
\tau^{-1}\mathbf{I}_D + \Psi \\
&\qquad + \mathbf{M}^T \big( E_{q(\text{\boldmath$\omega$})}\big(\phi_*^T \phi_*\big) - E_{q(\text{\boldmath$\omega$})} \big( \phi_* \big)^T E_{q(\text{\boldmath$\omega$})} \big( \phi_* \big)\big) \mathbf{M}\notag
\]
with $\Psi_{i,j} = \text{tr} \big( E_{q(\text{\boldmath$\omega$})}\big(\phi_*^T \phi_*\big) \cdot \mathbf{s}_i \big) \cdot \mathds{1}[i=j]$.
\end{proposition}
\begin{proof}
The raw second moment of the distribution is given by (recall that $\mathbf{y}^*$ is a $1 \times D$ row vector)
\[
&E_{q(\mathbf{y}^* | \mathbf{x}^*)}((\mathbf{y}^*)^T(\mathbf{y}^*)) \notag\\
&\quad = \int \bigg( (\mathbf{y}^*)^T(\mathbf{y}^*) p(\mathbf{y}^* | \mathbf{x}^*, \mathbf{A}, \text{\boldmath$\omega$}) \text{d} \mathbf{y}^* \bigg) q(\mathbf{A}, \text{\boldmath$\omega$}) \text{d} \mathbf{A} \text{d} \text{\boldmath$\omega$} \notag\\
&\quad = \int \big( \text{Cov}_{p(\mathbf{y}^*|\mathbf{x}^*, \mathbf{A}, \text{\boldmath$\omega$})}(\mathbf{y}^*) \notag \\
&\qquad + E_{p(\mathbf{y}^*|\mathbf{x}^*, \mathbf{A}, \text{\boldmath$\omega$})}(\mathbf{y}^*)^T E_{p(\mathbf{y}^*|\mathbf{x}^*, \mathbf{A}, \text{\boldmath$\omega$})}(\mathbf{y}^*) \big) q(\mathbf{A}, \text{\boldmath$\omega$}) \text{d} \mathbf{A} \text{d} \text{\boldmath$\omega$} \notag \\
&\quad = \tau^{-1} \mathbf{I}_D + E_{q(\mathbf{A})q(\text{\boldmath$\omega$})}\big( \mathbf{A}^T \phi_*^T \phi_* \mathbf{A} \big).
\]
Now, for $i \neq j$ between $1$ and $D$,
\[
\bigg( E_{q(\mathbf{A})q(\text{\boldmath$\omega$})}\big( \mathbf{A}^T \phi_*^T \phi_* \mathbf{A} \big) \bigg)_{i,j} &= E_{q(\mathbf{A})q(\text{\boldmath$\omega$})}\big( \mathbf{a}_i^T \phi_*^T \phi_* \mathbf{a}_j \big) \notag\\
&= \mathbf{m}_i^T E_{q(\text{\boldmath$\omega$})}\big(\phi_*^T \phi_*\big) \mathbf{m}_j,
\]
and for $i = j$ between $1$ and $D$,
\[
\bigg( E_{q(\mathbf{A})q(\text{\boldmath$\omega$})}\big( \mathbf{A}^T \phi_*^T \phi_* \mathbf{A} \big) \bigg)_{i,i} &= E_{q(\mathbf{A})q(\text{\boldmath$\omega$})}\big( \mathbf{a}_i^T \phi_*^T \phi_* \mathbf{a}_i \big) \notag\\
&=
\mathbf{m}_i^T E_{q(\text{\boldmath$\omega$})}\big(\phi_*^T \phi_*\big) \mathbf{m}_i\notag\\
&\qquad +
\text{tr} \bigg( E_{q(\text{\boldmath$\omega$})}\big(\phi_*^T \phi_*\big) \cdot \mathbf{s}_i \bigg)
\]
following equation \ref{eq:exp_a_Phi_Phi_a}.
Taking the difference between the raw second moment and the outer product of the mean we get that the variance of the predictive distribution is given by
\[
&\text{Var}_{q(\mathbf{y}^* | \mathbf{x}^*)}(\mathbf{y}^*)
=
\tau^{-1}\mathbf{I}_D + \Psi \\
&\qquad + \mathbf{M}^T \big( E_{q(\text{\boldmath$\omega$})}\big(\phi_*^T \phi_*\big) - E_{q(\text{\boldmath$\omega$})} \big( \phi_* \big)^T E_{q(\text{\boldmath$\omega$})} \big( \phi_* \big)\big) \mathbf{M}\notag
\]
with $\Psi_{i,j} = \text{tr} \big( E_{q(\text{\boldmath$\omega$})}\big(\phi_*^T \phi_*\big) \cdot \mathbf{s}_i \big) \cdot \mathds{1}[i=j]$.
\end{proof}
\begin{discussion}\label{dis:1}
We discuss some of the key properties of the VSSGP, fVSSGP, and sfVSSGP. Due to space constraints, this discussion was moved to the appendix.
Unlike the sparse pseudo-input approximation, where the variational uncertainty is over the locations of a sparse set of inducing points in the output space, the uncertainty in our approximation is over a sparse set of function frequencies.
As the uncertainty over a frequency ($\Sigma_k$) grows, the exponential decay term in the expectation of $\Phi$ decreases, and the expected magnitude of the feature ($[(E_{q(\text{\boldmath$\omega$})}(\Phi))_{n,k}]_{n=1}^N$) tends to zero for points $\mathbf{x}_n$ far from $\mathbf{z}_k$. Conversely, as the uncertainty over a frequency decreases, the exponential decay term increases towards one, and the expected magnitude of the feature does not diminish for points $\mathbf{x}_n$ far from $\mathbf{z}_k$.
With the predictive uncertainty in equation \ref{eq:pred_uncertainty} we preserve many of the GP characteristics. As an example, consider the SE covariance function\footnote{Given by $\sigma^2 \exp \big( -\frac{1}{2} \sum_{q=1}^Q \frac{(x_q - y_q)^2}{l_{q}^2} \big)$.}. In full GPs the variance increases towards $\sigma^2 + \tau^{-1}$ far away from the data. This property is key to Bayesian optimisation, for example, where this uncertainty is used to decide what action to take given a GP posterior.
With the SE covariance function, our expression for $\phi_*$ contains an exponential decay term $\exp(-\frac{1}{2} (\mathbf{x}_n - \mathbf{z}_k)^T \Sigma_k (\mathbf{x}_n - \mathbf{z}_k))$. This term tends to zero as $\mathbf{x}_n$ diverges from $\mathbf{z}_k$. For $\mathbf{x}_n$ far away from $\mathbf{z}_k$ for all $k$ we get that the entire matrix $\Phi$ tends to zero, and that $E_{q(\text{\boldmath$\omega$})}\big(\phi_*^T \phi_*\big)$ tends to $\frac{\sigma^2}{K} \mathbf{I}_K$.
For fVSSGP, equation \ref{eq:pred_uncertainty} then collapses to
\*[
&\text{Var}_{q(\mathbf{y}^* | \mathbf{x}^*)}(\mathbf{y}^*)
=
\tau^{-1}\mathbf{I}_D + \Psi'
\]
with $\Psi'_{i,j} =
\sigma^2 \frac{1}{K} \sum_{k=1}^K (\mu_{ik}\mu_{jk} + s_{ik}^2\mathds{1}[i=j])$.
This term leads to identical predictive variance to that of the full GP when $\mathbf{A}$ is fixed and follows the prior. It is larger than the predictive variance of a full GP when $s_{di}^2 > 1 - \mu_{di}^2$ on average, and smaller otherwise.
Unlike the SE GP, the predictive mean in the VSSGP with a SE covariance function does not tend to zero quickly far from the data. This is because the model can have high confidence in some frequencies, driving the inducing frequency variances ($\Sigma_k$) to zero. This in turn requires $\mathbf{x}_{n}-\mathbf{z}_k$ to be much larger for the exponential decay term to tend to zero. The frequencies the model is confident about will be used far from the data as well.
Unlike the SSGP, the approximation presented here is not periodic. This is one of the theoretical limitations of the sparse spectrum approximation (although in practice the period was observed to often be larger than the range of the data). The limitation arises from the fact that the covariance is represented as a weighted sum of cosines in SSGP. In the approximation we present here this is avoided by decaying the cosines to zero.
It is interesting to note that although our approximated covariance function $K(\mathbf{x}, \mathbf{y})$ has to be stationary (i.e.\ it can be represented as $K(\mathbf{x},\mathbf{y}) = K(\mathbf{x}-\mathbf{y})$), the approximate posterior is not. This is because stationarity entails that for all $\mathbf{x}$ it must hold that $K(\mathbf{x}, \mathbf{x}) = K(\mathbf{x} - \mathbf{x}) = K(\mathbf{0})$. But for $E_{q(\text{\boldmath$\omega$})}(\widehat{K}(\mathbf{X}, \mathbf{X})) = E_{q(\text{\boldmath$\omega$})}(\Phi \Phi^T)$ we have that the diagonal terms depend on $\mathbf{x}$:
\*[
\big( E_{q(\text{\boldmath$\omega$})}(\Phi \Phi^T) \big)_{n,n} =
\sum_{k=1}^K
&\frac{2\sigma_i^2}{K} e^{-\overline{\x}_{nk}^T \Sigma_k \overline{\x}_{nk}}
\notag \\ & \cdot
E_{q(b_k)} \big( \cos(\mu_k^T \overline{\x}_{nk} + \overline{b}_{nk}) \big)^2.
\]
This is in comparison to the SSGP approximation, where the approximate model is stationary.
It is also interesting to note that the lower bound in equation \ref{eq:lower_bound_factorised} is equivalent to that of equation \ref{eq:lower_bound_opt_A} for $\mathbf{s}_d$ non-diagonal. For $\mathbf{s}_d$ diagonal the lower bound is looser, but offers improved time complexity.
The use of the factorised lower bound allows us to save on the expensive computation of $\mathbf{A}$ for small updates of $\text{\boldmath$\omega$}$. Intuitively, this is because small updates in $\text{\boldmath$\omega$}$ would result in small updates to $\mathbf{A}$. Thus solving for $\mathbf{A}$ analytically at every time point without re-using previous computations is very wasteful. Optimising over $\mathbf{A}$ to solve the linear system of equations (given $\text{\boldmath$\omega$}$) allows us to use optimal $\mathbf{A}$ from previous steps, adapting it accordingly.
Also, even though it is possible to analytically integrate over $\mathbf{A}$, we cannot analytically integrate $\text{\boldmath$\omega$}$. This is because $\text{\boldmath$\omega$}$ appears inside a cosine inside an exponent in equation \ref{eq:Y_given_A_X_o}. We cannot solve for $\text{\boldmath$\omega$}$ analytically either in the equation preceding equation \ref{eq:exp_a_Phi}. This is again because $\text{\boldmath$\omega$}$ appears inside a cosine (unlike $\mathbf{A}$, which appears in a quadratic form in that equation).
Finally, we can approximate our approach to achieve a much more scalable implementation by only using the $K'$ nearest inducing inputs for each data point. This is following the observation that for short length-scales and large $\Sigma$, the features will decay to zero exponentially fast with the distance of the data points from the inducing inputs.
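A sketch of this truncation (with names of our choosing): for each data point we keep only the indices of its $K'$ nearest inducing inputs and evaluate the corresponding features alone.
\begin{verbatim}
import numpy as np

def nearest_inducing(X, Z, K_prime):
    # Indices of the K' nearest inducing inputs z_k for each x_n.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)   # N x K distances
    return np.argpartition(d2, K_prime - 1, axis=1)[:, :K_prime]
\end{verbatim}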
\end{discussion}
\end{document} |
\begin{document}
\title[Radial departures]{Radial departures and plane embeddings\\of arc-like continua}
\author{Andrea Ammerlaan \and Ana Anu\v{s}i\'{c} \and Logan C. Hoehn}
\date{\today}
\address{Nipissing University, Department of Computer Science \& Mathematics, 100 College Drive, Box 5002, North Bay, Ontario, Canada, P1B 8L7}
\email{[email protected]}
\email{[email protected]}
\email{[email protected]}
\thanks{This work was supported by NSERC grant RGPIN-2019-05998}
\subjclass[2020]{Primary 54F15, 54C25; Secondary 54F50}
\keywords{Plane embeddings; accessible point; arc-like continuum}
\begin{abstract}
We study the problem of Nadler and Quinn from 1972, which asks whether, given an arc-like continuum $X$ and a point $x \in X$, there exists an embedding of $X$ in $\mathbb{R}^2$ for which $x$ is an accessible point. We develop the notion of a radial departure of a map $f \colon [-1,1] \to [-1,1]$, and establish a simple criterion in terms of the bonding maps in an inverse system on intervals to show that there is an embedding of the inverse limit for which a given point is accessible. Using this criterion, we give a partial affirmative answer to the problem of Nadler and Quinn, under some technical assumptions on the bonding maps of the inverse system.
\end{abstract}
\maketitle
\section{Introduction}
\label{sec:intro}
In this paper we study plane embeddings of chainable continua and their accessible sets. The question of understanding properties of planar embeddings of chainable continua has generated significant interest over the years, see for example \cite{mazurkiewicz1929, brechner1978, lewis1981, mayer1982, mayer1983, minc-transue1992, debski-tymchatyn1993, minc1997, anusic-bruin-cinc2017, anderson-choquet1959, ozbolt2020}. We are motivated by the question of Nadler and Quinn from 1972 (\cite[p.229]{nadler1972} and \cite{nadler-quinn1972}), which asks whether for every arc-like continuum $X$, and every $x \in X$, there is an embedding $\Omega$ of $X$ in the plane $\mathbb{R}^2$ such that $x$ is an \emph{accessible} point, i.e.\ such that there is an arc $A \subset \mathbb{R}^2$ with $A \cap \Omega(X) = \{\Omega(x)\}$ (for other definitions and notation, see the Preliminaries section below). The Nadler-Quinn problem has appeared in various compilations of continuum theory problems, including as Problem~140 in \cite{lewis1983}, as Question~16 (by J.C.\ Mayer, for indecomposable continua) in \cite{problems2002}, and as Problem~1 (by H.\ Bruin, J.\ \v{C}in\v{c} and the second author), Problem~48 (by W.\ Lewis), and Problem~50 (by P.\ Minc, for indecomposable continua) in \cite{problems2018}, but to this day remains open.
An alternative formulation of the Nadler-Quinn question is as follows. Given an arc-like continuum $X$ and a point $x \in X$, we can construct a (simple-triod-like) continuum $X_x = X \cup A$, where $A$ is an arc such that $A \cap X = \{x\}$. Then the question of whether $X$ can be embedded in the plane so as to make $x$ accessible is equivalent to the question of whether this continuum $X_x$ can be embedded in the plane. In this way, the Nadler-Quinn problem asks whether all spaces from this class of triod-like continua can be embedded in the plane. Answering this would be a step forward towards understanding which tree-like continua can be embedded in the plane, which is a question of central importance in the field.
P.\ Minc identified a particular, simple example of an arc-like continuum $X_M = \varprojlim \left \langle [0,1],f_M \right \rangle$, where $f_M$ is the piecewise-linear map shown in Figure~\ref{fig:minc}, and asked (see \cite[Question~19, p.335]{problems2002} and \cite[p.297]{problems2018}) whether $X_M$ can be embedded in the plane so as to make the point $\langle \frac{1}{2},\frac{1}{2},\ldots \rangle \in X_M$ accessible. This was recently answered affirmatively by the second author in \cite{anusic2021}, who proved more generally that for any simplicial, locally eventually onto map $f \colon [0,1] \to [0,1]$, and any point $x \in X = \varprojlim \left \langle [0,1],f \right \rangle$, there is an embedding of $X$ in the plane for which $x$ is accessible. In this paper we give another partial affirmative answer to the question of Nadler and Quinn, for a different class of inverse systems, where in particular the bonding maps are not assumed to be all the same map.
\begin{figure}
\caption{The map $f_M$ defined by P.\ Minc.}
\label{fig:minc}
\end{figure}
With the use of the Anderson-Choquet Embedding Theorem \cite{anderson-choquet1959}, in \cite{anusic-bruin-cinc2020} Bruin, \v{C}in\v{c}, and the second author prove that given a point $x = \langle x_1,x_2,\ldots \rangle$ in an arc-like continuum $X = \varprojlim \left \langle [0,1],f_n \right \rangle$, there is an embedding of $X$ in the plane for which $x$ is accessible as long as, loosely speaking, $x_n$ is not in a ``zig-zag'' pattern in $f_n$. The approach taken in \cite{anusic2021} is to find an alternative inverse limit representation of the arc-like continuum $X$ for which the coordinates of the point $x$ are not in such ``zig-zags''. In this paper, we continue to explore this approach, and introduce a standard factorization of an interval map, the \emph{radial contour factorization}, as a means to find alternative inverse limit representations of arc-like continua. We break the notion of a ``zig-zag'' into two more fundamental parts, namely \emph{radial departures} (of opposite orientations).
This paper is organized as follows. After some preliminary definitions and notation, in Section~\ref{sec:prelim} we give a simple reduction of the question of Nadler and Quinn to the case of the point $x = \langle 0,0,\ldots \rangle$ in an inverse limit space $X = \varprojlim \left \langle [-1,1],f_n \right \rangle$, where $f_n(0) = 0$ for each $n$. In Section~\ref{sec:contour} we introduce the (``one-sided'') notions of \emph{departures} and the \emph{contour factorization} of a map $f \colon [0,1] \to [-1,1]$. We then introduce the notion of a \emph{radial departure} in Section~\ref{sec:rad deps}, and in Section~\ref{sec:embeddings} we show that there is an embedding of $X = \varprojlim \left \langle [-1,1],f_n \right \rangle$ in the plane for which $\langle 0,0,\ldots \rangle$ is accessible as long as for each $n$, $f_n$ does not contain both positive and negative radial departures. In Section~\ref{sec:rad contour} we introduce the \emph{radial contour factorization}, which we use in Section~\ref{sec:no twins} to give a partial affirmative answer (Theorem~\ref{thm:no twins}) to the Nadler-Quinn problem under certain assumptions on the bonding maps $f_n$.
\section{Preliminaries}
\label{sec:prelim}
Throughout this paper, the word \emph{map} will always mean a continuous function. A \emph{continuum} is a compact, connected metric space.
An \emph{inverse sequence} is a sequence of spaces and maps
\[ \langle X_n,f_n \rangle = \langle X_n,f_n \rangle_n = \langle X_1,f_1,X_2,f_2,X_3,\ldots \rangle \]
where for each $n$, $f_n$ is a map from $X_{n+1}$ to $X_n$. The spaces $X_n$ are called \emph{factor spaces} and the maps $f_n$ are called \emph{bonding maps}. If $S$ is an infinite subset of $\mathbb{N}$, then $\langle X_n,f_n \rangle_{n \in S}$ refers to the inverse system $\langle X_{n_k},f_{n_k} \rangle_k$, where $\{n_k: k \in \mathbb{N}\}$ is an increasing enumeration of $S$. In this situation, it is understood that for each $n \in S$, $f_n$ is a map from $X_{n'}$ to $X_n$, where $n'$ is the smallest element of $S$ which is greater than $n$.
The \emph{inverse limit} of an inverse sequence is the set
\[ \varprojlim \langle X_n,f_n \rangle = \left\{ \langle x_n \rangle_{n=1}^\infty: f_n(x_{n+1}) = x_n \textrm{ for each } n\right\} \]
considered as a subspace of the product $\prod_{n=1}^\infty X_n$. We record two standard properties of inverse limits:
\begin{itemize}
\item (Dropping finitely many coordinates) For any $n_0 \in \mathbb{N}$, $\varprojlim \langle X_n,f_n \rangle \approx \varprojlim \langle X_n,f_n \rangle_{n \geq n_0}$.
\item (Composing bonding maps) Let $S$ be an infinite subset of $\mathbb{N}$. Given $n \in S$, let $n'$ be the smallest element of $S$ which is greater than $n$, and define $f_n^\circ = f_n \circ f_{n+1} \circ \cdots \circ f_{n'-1}$, which is a map from $X_{n'}$ to $X_n$. Then $\varprojlim \langle X_n,f_n \rangle \approx \varprojlim \langle X_n,f_n^\circ \rangle_{n \in S}$.
\end{itemize}
A map $f \colon [-1,1] \to [-1,1]$ is \emph{piecewise-linear} if there is a finite set $S \subset [-1,1]$ with $-1,1 \in S$ such that on the intervals between points of $S$, $f$ is linear. An inverse system $\langle [-1,1],f_n \rangle$ is \emph{simplicial} if there exist finite sets $S_1,S_2,\ldots$ such that for each $n$, $-1,1 \in S_n$, $f_n(S_{n+1}) \subseteq S_n$, and if $I$ is any component of $[-1,1] \smallsetminus S_{n+1}$ then $f_n$ is either constant on $I$, or $f_n$ is linear on $I$ and $f_n(I) \cap S_n = \emptyset$.
A continuum $X$ is \emph{arc-like} (equivalently, \emph{chainable}) if it is homeomorphic to an inverse limit of arcs; that is, if there exist maps $f_n \colon [-1,1] \to [-1,1]$, for $n = 1,2,\ldots$, such that $X \approx \varprojlim \left \langle [-1,1],f_n \right \rangle$. It is well-known that any arc-like continuum is homeomorphic to the inverse limit of an inverse system $\langle [-1,1],f_n \rangle$ where each bonding map $f_n$ is piecewise-linear.
Let $f_n \colon [-1,1] \to [-1,1]$, $n = 1,2,\ldots$, be a sequence of (piecewise-linear) maps and let $x = \langle x_1,x_2,\ldots \rangle \in X = \varprojlim \left \langle [-1,1],f_n \right \rangle$. It is well-known that if $x_n = \pm 1$ for infinitely many $n$, then $x$ is an endpoint of $X$ and one can easily embed $X$ in $\mathbb{R}^2$ so as to make $x$ an accessible point (see e.g.\ \cite[p.295]{problems2018}). Hence we may as well assume that $x_n \neq \pm 1$ for all but finitely many $n$, and in fact by dropping finitely many coordinates we may assume that $x_n \neq \pm 1$ for all $n$. Then for each $n$, let $h_n \colon [-1,1] \to [-1,1]$ be a (piecewise-linear) homeomorphism such that $h_n(x_n) = 0$, and define $f_n' = h_n \circ f_n \circ h_{n+1}^{-1}$. Then $\varprojlim \left \langle [-1,1],f_n' \right \rangle \approx X$, and in fact an explicit homeomorphism $h \colon X \to \varprojlim \left \langle [-1,1],f_n' \right \rangle$ is given by $h \left( \langle y_n \rangle_{n=1}^\infty \right) = \langle h_n(y_n) \rangle_{n=1}^\infty$, and clearly $h(x) = \langle 0,0,\ldots \rangle$. Thus the question of Nadler and Quinn reduces to:
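For instance, one concrete choice of such a homeomorphism (given here only for illustration) is $h_n(t) = \frac{t - x_n}{1 + x_n}$ for $t \in [-1,x_n]$ and $h_n(t) = \frac{t - x_n}{1 - x_n}$ for $t \in [x_n,1]$, which is piecewise-linear, fixes $-1$ and $1$, and sends $x_n$ to $0$.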
\begin{question}
\label{ques:0 accessible}
If $f_n \colon [-1,1] \to [-1,1]$, $n = 1,2,\ldots$, are piecewise-linear maps such that $f_n(0) = 0$ for each $n$, does there exist an embedding of $X = \varprojlim \left \langle [-1,1],f_n \right \rangle$ into $\mathbb{R}^2$ for which the point $\langle 0,0,\ldots \rangle \in X$ is accessible?
\end{question}
It is this form of the question which we attack in this paper. Our main result is Theorem~\ref{thm:no twins}, which provides an affirmative answer to Question~\ref{ques:0 accessible} under some additional technical assumptions on the maps $f_n$.
We remark that if, in the above reduction, $\langle [-1,1],f_n \rangle$ is a simplicial system, then so is the system $\langle [-1,1],f_n' \rangle$ for appropriate choices of the homeomorphisms $h_n$. So an affirmative answer to Question~\ref{ques:0 accessible} for simplicial systems would also imply an affirmative answer to the Nadler-Quinn problem for arc-like continua which are inverse limits of simplicial systems.
\section{Contour factorization}
\label{sec:contour}
In preparation for our study in the following sections of maps $f \colon [-1,1] \to [-1,1]$ with $f(0) = 0$, we begin in this section with the general notions of (one-sided) departures, contour points, and the contour factor of a map $f \colon [0,1] \to [-1,1]$. In later sections, we will apply these notions to both ``halves'', $f {\restriction}_{[0,1]}$ and $f {\restriction}_{[-1,0]}$, of a map $f \colon [-1,1] \to [-1,1]$. For the most part, our attention will be on piecewise linear maps, though we will only include this hypothesis when needed.
\begin{defn}
\label{defn:dep}
Let $f \colon [0,1] \to [-1,1]$ be a map with $f(0) = 0$. A \emph{departure} of $f$ is a number $x > 0$ such that $f(x) \notin f([0,x))$. We say a departure $x$ of $f$ is \emph{positively oriented} (or a \emph{positive departure}) if $f(x) > 0$, and $x$ is \emph{negatively oriented} (or a \emph{negative departure}) if $f(x) < 0$.
A \emph{contour point} of $f$ is a departure $\alpha$ such that for any departure $x$ of $f$ with $x > \alpha$, there exists a departure $y$ of $f$, of orientation opposite to that of $\alpha$, such that $\alpha < y \leq x$.
\end{defn}
A map $f$ may have countably infinitely many contour points, in which case they may be enumerated in decreasing order: $\beta_1 > \beta_2 > \cdots$. In this case, $\lim_{n \to \infty} f(\beta_n) = 0$, and if $x_0 = \lim_{n \to \infty} \beta_n$ then $f$ is constantly equal to $0$ on $[0,x_0]$.
If $f$ is piecewise-linear, then it has only finitely many contour points, which we may enumerate: $0 < \alpha_1 < \alpha_2 < \cdots < \alpha_n \leq 1$. For convenience, we also denote $\alpha_0 = 0$.
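For example, if $f \colon [0,1] \to [-1,1]$ is the piecewise-linear map determined by $f(0) = 0$, $f(\tfrac14) = \tfrac12$, $f(\tfrac34) = -\tfrac12$ and $f(1) = \tfrac14$ (and linear between these points), then one may check that the departures of $f$ are exactly the points of $(0,\tfrac14]$ (positively oriented) together with the points of $(\tfrac12,\tfrac34]$ (negatively oriented), and hence the contour points of $f$ are $\alpha_1 = \tfrac14$ and $\alpha_2 = \tfrac34$.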
\begin{defn}
\label{defn:contour factor}
Let $f \colon [0,1] \to [-1,1]$ be a piecewise-linear map with $f(0) = 0$. The \emph{contour factor} of $f$ is the piecewise-linear map $t_f \colon [0,1] \to [-1,1]$ defined as follows:
Let $0 = \alpha_0 < \alpha_1 < \alpha_2 < \cdots < \alpha_n$ be the contour points of $f$. Then $t_f \left( \frac{i}{n} \right) = f(\alpha_i)$ for each $i = 0,\ldots,n$, and $t_f$ is defined to be linear in between these points.
A \emph{meandering factor} of $f$ is any (piecewise-linear) map $s \colon [0,1] \to [0,1]$ such that $s(0) = 0$ and $f = t_f \circ s$.
\end{defn}
The next Proposition shows that there always exists at least one meandering factor of any given map $f$. Note that $f$ may have more than one meandering factor. Observe that if $t_f$ is the contour factor of $f$ and $s$ is a meandering factor for $f$, then $t_f([0,1]) = f([0,1])$, and $s$ is onto.
\begin{prop}
\label{prop:meandering factor}
Let $f \colon [0,1] \to [-1,1]$ be a piecewise-linear map with $f(0) = 0$, and let $t_f$ be the contour factor of $f$. Then there exists a meandering factor of $f$.
\end{prop}
\begin{proof}
Let $0 = \alpha_0 < \alpha_1 < \alpha_2 < \cdots < \alpha_n$ be the contour points of $f$. Given $x \in [0,1]$, define $s(x)$ as follows.
If $x \in [\alpha_{i-1},\alpha_i)$ for some $i \geq 1$, then $f(x)$ is between $f(\alpha_{i-1})$ (inclusive) and $f(\alpha_i)$ (exclusive). Let $w = w(x) = \frac{f(x) - f(\alpha_{i-1})}{f(\alpha_i) - f(\alpha_{i-1})} \in [0,1)$, so that $f(x) = (1-w) \cdot f(\alpha_{i-1}) + w \cdot f(\alpha_i)$. Define $s(x) = (1-w) \cdot \frac{i-1}{n} + w \cdot \frac{i}{n}$. Since $t_f$ is linear on $\left[ \frac{i-1}{n},\frac{i}{n} \right]$,
\begin{align*}
t_f \circ s(x) &= (1-w) \cdot t_f \left( \tfrac{i-1}{n} \right) + w \cdot t_f \left( \tfrac{i}{n} \right) \\
&= (1-w) \cdot f(\alpha_{i-1}) + w \cdot f(\alpha_i) \\
&= f(x) .
\end{align*}
If $x \geq \alpha_n$, then $f(x)$ is between $f(\alpha_{n-1})$ and $f(\alpha_n)$ (inclusive). Let $w = \frac{f(x) - f(\alpha_{n-1})}{f(\alpha_n) - f(\alpha_{n-1})} \in [0,1]$, so that $f(x) = (1-w) \cdot f(\alpha_{n-1}) + w \cdot f(\alpha_n)$. Define $s(x) = (1-w) \cdot \frac{n-1}{n} + w \cdot 1$. Since $t_f$ is linear on $\left[ \frac{n-1}{n},1 \right]$,
\begin{align*}
t_f \circ s(x) &= (1-w) \cdot t_f \left( \tfrac{n-1}{n} \right) + w \cdot t_f(1) \\
&= (1-w) \cdot f(\alpha_{n-1}) + w \cdot f(\alpha_n) \\
&= f(x) .
\end{align*}
It is easy to see that $s$ is continuous and piecewise-linear (since $f$ is), and $s(0) = 0$.
\end{proof}
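To illustrate the construction in the proof of Proposition~\ref{prop:meandering factor}, consider again the piecewise-linear map $f$ determined by $f(0) = 0$, $f(\tfrac14) = \tfrac12$, $f(\tfrac34) = -\tfrac12$ and $f(1) = \tfrac14$, whose contour points are $\alpha_1 = \tfrac14$ and $\alpha_2 = \tfrac34$. Its contour factor $t_f$ is the piecewise-linear map with $t_f(0) = 0$, $t_f(\tfrac12) = \tfrac12$ and $t_f(1) = -\tfrac12$, and the construction above yields the meandering factor
\[ s(x) = \begin{cases}
2x & \textrm{if } 0 \leq x \leq \tfrac14 \\
x + \tfrac14 & \textrm{if } \tfrac14 \leq x \leq \tfrac34 \\
\tfrac{17}{8} - \tfrac{3x}{2} & \textrm{if } \tfrac34 \leq x \leq 1 ,
\end{cases} \]
for which one may verify directly that $t_f \circ s = f$.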
\begin{lem}
\label{lem:same contour}
Let $f,g \colon [0,1] \to [-1,1]$ be piecewise-linear maps with $f(0) = g(0) = 0$. Then $t_f = t_g$ if and only if:
\begin{enumerate}
\item for each departure $x$ of $f$, there exists a departure $x'$ of $g$ such that $f([0,x)) = g([0,x'))$; and
\item for each departure $x'$ of $g$, there exists a departure $x$ of $f$ such that $f([0,x)) = g([0,x'))$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $0 = \alpha_0 < \alpha_1 < \alpha_2 < \cdots < \alpha_n$ be the contour points of $f$.
Suppose $t_f = t_g$. Then $f$ and $g$ have the same number of contour points, which have the same images. Let $0 = \beta_0 < \beta_1 < \beta_2 < \cdots < \beta_n$ be the contour points of $g$, so that $f(\alpha_i) = g(\beta_i)$ for each $i$. Let $x$ be a departure of $f$. Note that $0 < x \leq \alpha_n$, so there exists $i \geq 1$ such that $x \in (\alpha_{i-1},\alpha_i]$. Note that the departure $x$ of $f$ has the same orientation as $\alpha_i$, and $f([0,x)) = f([\alpha_{i-1},x))$ is the interval between $f(\alpha_{i-1})$ (inclusive) and $f(x)$ (exclusive). Similarly, for any departure $x' \in (\beta_{i-1},\beta_i]$ of $g$, $x'$ has the same orientation as $\beta_i$, and $g([0,x')) = g([\beta_{i-1},x'))$ is the interval between $g(\beta_{i-1}) = f(\alpha_{i-1})$ (inclusive) and $g(x')$ (exclusive). In the same way, considering the departures $\alpha_{i-1}$ of $f$ and $\beta_{i-1}$ of $g$, we deduce that $g([0,\beta_{i-1}]) = f([0,\alpha_{i-1}])$.
Since $f(x)$ is between $f(\alpha_{i-1}) = g(\beta_{i-1})$ (exclusive) and $f(\alpha_i) = g(\beta_i)$ (inclusive), there exists $x' \in (\beta_{i-1},\beta_i]$ such that $g(x') = f(x)$. Choose $x'$ minimal with respect to this property, so that $g(x') \notin g((\beta_{i-1},x'))$. Also,
\[ g(x') = f(x) \notin f([0,x)) \supset f([0,\alpha_{i-1}]) = g([0,\beta_{i-1}]) .\]
Therefore $x'$ is a departure of $g$, and $f([0,x)) = g([0,x'))$ as these are both the interval between $f(\alpha_{i-1}) = g(\beta_{i-1})$ (inclusive) and $f(x) = g(x')$ (exclusive). This proves (1); (2) follows similarly.
Conversely, suppose (1) and (2) hold. Fix any $i \geq 1$. By assumption, there exists a departure $\beta_i$ of $g$ such that $g([0,\beta_i)) = f([0,\alpha_i))$, which in particular implies that $\alpha_i$ has the same orientation for $f$ as $\beta_i$ has for $g$. Let $x' > \beta_i$ be any departure of $g$. By (2), there exists a departure $x$ of $f$ such that $f([0,x)) = g([0,x'))$, and clearly $x > \alpha_i$. Since $\alpha_i$ is a contour point of $f$, there exists a departure $y \in (\alpha_i,x]$ of $f$ of orientation opposite to that of $\alpha_i$. Then by (1), there exists a departure $y'$ of $g$ such that $f([0,y)) = g([0,y'))$, which implies $y' \in (\beta_i,x']$ and $y'$ has opposite orientation to that of $\beta_i$. Therefore $\beta_i$ is a contour point of $g$. Thus for each contour point $\alpha_i$ of $f$ there exists a corresponding contour point $\beta_i$ of $g$ with $f(\alpha_i) = g(\beta_i)$, and clearly $\beta_{i-1} < \beta_i$ for each $i$. Likewise, for each contour point of $g$ there is a corresponding contour point of $f$ with the same image, and these appear in the same order in $f$ as they do in $g$. Thus $t_f = t_g$.
\end{proof}
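For example, the piecewise-linear map $f$ of the example above (determined by $f(0) = 0$, $f(\tfrac14) = \tfrac12$, $f(\tfrac34) = -\tfrac12$ and $f(1) = \tfrac14$) and the piecewise-linear map $g$ determined by $g(0) = 0$, $g(\tfrac14) = \tfrac12$, $g(\tfrac12) = \tfrac14$, $g(\tfrac58) = \tfrac38$ and $g(1) = -\tfrac12$ satisfy $t_f = t_g$: in each case the contour points have values $\tfrac12$ and $-\tfrac12$, and one may check that conditions (1) and (2) of Lemma~\ref{lem:same contour} hold for this pair.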
We give two applications of Lemma~\ref{lem:same contour} below. The first, Proposition~\ref{prop:contour minimal}, shows that the contour factor is, in a sense, the simplest map $\tau \colon [0,1] \to [-1,1]$ for which there exists an onto map $\sigma \colon [0,1] \to [0,1]$ with $\sigma(0) = 0$ and $f = \tau \circ \sigma$. This is a characteristic property of the contour factor; however, we will not use this result in the remainder of this paper. The second, Lemma~\ref{lem:same comp contour}, shows that the contour factor of a composition $g \circ f$ depends only on $g$ and the contour factor of $f$.
\begin{prop}
\label{prop:contour minimal}
Let $f \colon [0,1] \to [-1,1]$ be a piecewise-linear map with $f(0) = 0$. Suppose $\tau \colon [0,1] \to [-1,1]$ and $\sigma \colon [0,1] \to [0,1]$ are piecewise-linear maps such that $\tau(0) = \sigma(0) = 0$, $\sigma$ is onto, and $f = \tau \circ \sigma$. Then $t_\tau = t_f$. Consequently, there exists $s_0 \colon [0,1] \to [0,1]$ such that $f = t_f \circ s_0 \circ \sigma$ (that is, $s_0 \circ \sigma$ is a meandering factor of $f$).
\end{prop}
\begin{proof}
To prove $t_\tau = t_f$, we use Lemma~\ref{lem:same contour}.
Let $x$ be any departure of $f$, and let $x' = \sigma(x)$, so that $\tau(x') = f(x)$. Then $x' \notin \sigma([0,x))$ since otherwise $f(x) = \tau(x')$ would be in $f([0,x)) = \tau(\sigma([0,x)))$, contradicting the assumption that $x$ is a departure of $f$. This means $\sigma([0,x)) = [0,x')$. Now $\tau([0,x')) = \tau(\sigma([0,x))) = f([0,x))$, and $x'$ is a departure of $\tau$ since $\tau(x') = f(x) \notin f([0,x)) = \tau([0,x'))$.
Conversely, let $x'$ be any departure of $\tau$. Since $\sigma$ is onto, there exists $x$ such that $\sigma(x) = x'$. Choose $x$ minimal with respect to this property, so that $\sigma([0,x)) = [0,\sigma(x))$. Then $f([0,x)) = \tau(\sigma([0,x))) = \tau([0,\sigma(x))) = \tau([0,x'))$, and $x$ is a departure of $f$ since $f(x) = \tau(x') \notin \tau([0,x')) = f([0,x))$.
Therefore, by Lemma~\ref{lem:same contour}, $t_\tau = t_f$.
\end{proof}
\begin{lem}
\label{lem:same comp contour}
Let $f_1,f_2 \colon [0,1] \to [-1,1]$ and $g \colon [-1,1] \to [-1,1]$ be piecewise-linear maps with $f_1(0) = f_2(0) = g(0) = 0$. If $t_{f_1} = t_{f_2}$, then $t_{g \circ f_1} = t_{g \circ f_2}$.
\end{lem}
\begin{proof}
Suppose $t_{f_1} = t_{f_2}$. To prove $t_{g \circ f_1} = t_{g \circ f_2}$, we use Lemma~\ref{lem:same contour}.
Let $x$ be a departure of $g \circ f_1$. Then $x$ is also a departure of $f_1$, therefore by Lemma~\ref{lem:same contour}, there exists a departure $x'$ of $f_2$ such that $f_1([0,x)) = f_2([0,x'))$, and so $g \circ f_1([0,x)) = g \circ f_2([0,x'))$. Suppose $y' \in [0,x')$ is such that $g \circ f_2(y') = g \circ f_2(x')$. Then $f_2(y') \in f_2([0,x')) = f_1([0,x))$, so there exists $y \in [0,x)$ such that $f_1(y) = f_2(y')$. But then $g \circ f_1(y) = g \circ f_2(y') = g \circ f_2(x') = g \circ f_1(x)$, contradicting the assumption that $x$ is a departure of $g \circ f_1$. Thus $x'$ is a departure of $g \circ f_2$.
Similarly, for each departure $x'$ of $g \circ f_2$ there exists a departure $x$ of $g \circ f_1$ such that $g \circ f_1([0,x)) = g \circ f_2([0,x'))$. Therefore, by Lemma~\ref{lem:same contour}, $t_{g \circ f_1} = t_{g \circ f_2}$.
\end{proof}
\section{Radial departures}
\label{sec:rad deps}
In the remainder of this paper we consider maps $f \colon [-1,1] \to [-1,1]$ satisfying $f(0) = 0$. We adapt the above concepts of departures and contour points to both the left and right ``halves'' of such a function $f$, as follows.
Define $\mathsf{r} \colon [0,1] \to [-1,0]$ by $\mathsf{r}(x) = -x$.
\begin{defn}
Let $f \colon [-1,1] \to [-1,1]$ be a map with $f(0) = 0$.
\begin{itemize}
\item A \emph{right departure} of $f$ is a number $x > 0$ such that $x$ is a departure (in the sense of Definition~\ref{defn:dep}) of $f {\restriction}_{[0,1]}$.
\item A \emph{left departure} of $f$ is a number $x < 0$ such that $-x$ is a departure (in the sense of Definition~\ref{defn:dep}) of $f {\restriction}_{[-1,0]} \circ \mathsf{r}$; i.e.\ a number $x < 0$ such that $f(x) \notin f((x,0])$.
\end{itemize}
If $x$ is either a right departure or a left departure of $f$, then we say $x$ is \emph{positively oriented} if $f(x) > 0$, and $x$ is \emph{negatively oriented} if $f(x) < 0$.
\begin{itemize}
\item A \emph{right contour point} of $f$ is a number $\alpha > 0$ such that $\alpha$ is a contour point (in the sense of Definition~\ref{defn:dep}) of $f {\restriction}_{[0,1]}$.
\item A \emph{left contour point} of $f$ is a number $\beta < 0$ such that $-\beta$ is a contour point (in the sense of Definition~\ref{defn:dep}) of $f {\restriction}_{[-1,0]} \circ \mathsf{r}$.
\end{itemize}
\end{defn}
\begin{defn}
Let $f \colon [-1,1] \to [-1,1]$ be a map with $f(0) = 0$. A \emph{radial departure} of $f$ is a pair $\langle x_1,x_2 \rangle$ such that $-1 \leq x_1 < 0 < x_2 \leq 1$ and either:
\begin{enumerate}[label=(\arabic{*})]
\item \label{pos dep} $f((x_1,x_2)) = (f(x_1),f(x_2))$; or
\item \label{neg dep} $f((x_1,x_2)) = (f(x_2),f(x_1))$.
\end{enumerate}
We say a radial departure $\langle x_1,x_2 \rangle$ of $f$ is \emph{positively oriented} (or a \emph{positive radial departure}) if \ref{pos dep} holds, and $\langle x_1,x_2 \rangle$ is \emph{negatively oriented} (or a \emph{negative radial departure}) if \ref{neg dep} holds.
\end{defn}
Observe that if $\langle x_1,x_2 \rangle$ is a positive (respectively, negative) radial departure of $f$, then $x_1$ is a negative (respectively, positive) left departure of $f$ and $x_2$ is a positive (respectively, negative) right departure of $f$. However, the converse is not true: if $x_1$ is a left departure of $f$ and $x_2$ is a right departure of $f$, and these two departures have opposite orientations, it is not necessarily the case that $\langle x_1,x_2 \rangle$ is a radial departure of $f$, because it may happen that $f(x_2) \in f((x_1,0])$ or $f(x_1) \in f([0,x_2))$.
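For example, let $f \colon [-1,1] \to [-1,1]$ be the piecewise-linear map determined by $f(-1) = \tfrac34$, $f(-\tfrac12) = -\tfrac14$, $f(0) = 0$, $f(\tfrac12) = \tfrac12$ and $f(1) = -\tfrac14$. One may check that $\langle x_1,x_2 \rangle$ is a radial departure of $f$ (necessarily positive) exactly when $-\tfrac12 \leq x_1 < 0 < x_2 \leq \tfrac12$. On the other hand, $-\tfrac34$ is a positively oriented left departure of $f$ and $1$ is a negatively oriented right departure of $f$, yet $\langle -\tfrac34,1 \rangle$ is not a radial departure of $f$: indeed, $f(1) = -\tfrac14 = f(-\tfrac12) \in f((-\tfrac34,0])$, illustrating the first of the two possibilities just mentioned.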
We develop several basic properties of radial departures in the remainder of this section.
\begin{lem}
\label{lem:dep values unique}
Let $f \colon [-1,1] \to [-1,1]$ be a map with $f(0) = 0$. Suppose $\langle x_1,x_2 \rangle$ and $\langle x_1',x_2' \rangle$ are radial departures of $f$.
\begin{enumerate}
\item If $\{f(x_1),f(x_2)\} = \{f(x_1'),f(x_2')\}$ then $x_1 = x_1'$ and $x_2 = x_2'$.
\item If $\langle x_1,x_2 \rangle$ and $\langle x_1',x_2' \rangle$ have opposite orientations, then $\{f(x_1),f(x_2)\} \cap \{f(x_1'),f(x_2')\} = \emptyset$.
\end{enumerate}
\end{lem}
\begin{proof}
For (1), suppose $\{f(x_1),f(x_2)\} = \{f(x_1'),f(x_2')\}$. Since $\langle x_1,x_2 \rangle$ is a radial departure, it follows that $x_1',x_2' \notin (x_1,x_2)$, so $x_1' \leq x_1$ and $x_2' \geq x_2$. Similarly, $x_1 \leq x_1'$ and $x_2 \geq x_2'$. Hence $x_1 = x_1'$ and $x_2 = x_2'$.
For (2), suppose $\langle x_1,x_2 \rangle$ is a positive radial departure and $\langle x_1',x_2' \rangle$ is a negative radial departure. This means that $f(x_1) < 0 < f(x_1')$ and $f(x_2') < 0 < f(x_2)$. Suppose for a contradiction that $f(x_1') = f(x_2)$. Then $x_1' < x_1$ since $f(x_2) \notin f((x_1,x_2))$ and $x_2' < x_2$ since $f(x_1') \notin f((x_1',x_2'))$. But now $x_1 \in (x_1',x_2')$ and $x_2' \in (x_1,x_2)$, so $f(x_1) \in f((x_1',x_2')) = (f(x_2'),f(x_1'))$ and $f(x_2') \in f((x_1,x_2)) = (f(x_1),f(x_2))$, implying $f(x_1) > f(x_2')$ and $f(x_2') > f(x_1)$, a contradiction. Therefore $f(x_1') \neq f(x_2)$. Similarly, $f(x_2') \neq f(x_1)$.
\end{proof}
\begin{prop}
\label{prop:meet join}
Let $f \colon [-1,1] \to [-1,1]$ be a map with $f(0) = 0$. Suppose $\langle x_1,x_2 \rangle$ and $\langle x_1',x_2' \rangle$ are radial departures of $f$ with the same orientation. Then both
\[ \langle \min\{x_1,x_1'\}, \max\{x_2,x_2'\} \rangle \quad \textrm{and} \quad \langle \max\{x_1,x_1'\}, \min\{x_2,x_2'\} \rangle \]
are radial departures of $f$ of the same orientation as $\langle x_1,x_2 \rangle$ (and $\langle x_1',x_2' \rangle$).
\end{prop}
\begin{proof}
Suppose $\langle x_1,x_2 \rangle$ and $\langle x_1',x_2' \rangle$ are both positive radial departures (the case where they are both negative is similar). Let $y_1 = \min\{x_1,x_1'\}$, $y_2 = \max\{x_2,x_2'\}$, and let $z_1 = \max\{x_1,x_1'\}$, $z_2 = \min\{x_2,x_2'\}$. Observe that $f(y_1) = \min\{f(x_1),f(x_1')\}$, $f(y_2) = \max\{f(x_2),f(x_2')\}$, $f(z_1) = \max\{f(x_1),f(x_1')\}$, and $f(z_2) = \min\{f(x_2),f(x_2')\}$.
Given $x \in (y_1,y_2)$, we have $x \in (x_1,x_2)$ or $x \in (x_1',x_2')$ (since $(y_1,y_2) = (x_1,x_2) \cup (x_1',x_2')$), hence $f(x) > f(x_1)$ or $f(x) > f(x_1')$, so $f(x) > f(y_1)$. Likewise, $f(x) < f(y_2)$. Thus $f((y_1,y_2)) \subseteq (f(y_1),f(y_2))$, and since $f((y_1,y_2)) \supseteq f((x_1,x_2)) \cup f((x_1',x_2')) = (f(y_1),f(y_2))$, in fact $f((y_1,y_2)) = (f(y_1),f(y_2))$.
Similarly, given $x \in (z_1,z_2)$, we have $x \in (x_1,x_2)$ and $x \in (x_1',x_2')$, hence $f(x) > f(x_1)$ and $f(x) > f(x_1')$, so $f(x) > f(z_1)$. Likewise, $f(x) < f(z_2)$. Thus $f((z_1,z_2)) \subseteq (f(z_1),f(z_2))$; the reverse inclusion follows from the intermediate value theorem, since $f(z_1)$ and $f(z_2)$ are the values of $f$ at the endpoints of $[z_1,z_2]$. Thus $f((z_1,z_2)) = (f(z_1),f(z_2))$.
\end{proof}
\begin{prop}
\label{prop:alt dep nested}
Let $f \colon [-1,1] \to [-1,1]$ be a map with $f(0) = 0$. Suppose $\langle x_1,x_2 \rangle$ and $\langle x_1',x_2' \rangle$ are radial departures of $f$ with opposite orientations. Then either
\[ x_1 < x_1' < 0 < x_2' < x_2 \quad \textrm{or} \quad x_1' < x_1 < 0 < x_2 < x_2' .\]
\end{prop}
\begin{proof}
Suppose $\langle x_1,x_2 \rangle$ is a positive radial departure and $\langle x_1',x_2' \rangle$ is a negative radial departure. Note first that $x_1 \neq x_1'$ and $x_2 \neq x_2'$, since $f(x_1) < 0 < f(x_1')$ and $f(x_2') < 0 < f(x_2)$. If $x_1 < x_1'$, then $x_1' \in (x_1,x_2)$, so $f(x_1') \in (f(x_1),f(x_2))$, so that $f(x_1') < f(x_2)$. Now we must have $x_2' < x_2$, since otherwise we would have $x_2 \in (x_1',x_2')$ and then $f(x_2) \in f((x_1',x_2')) = (f(x_2'),f(x_1'))$, meaning $f(x_2) < f(x_1')$, a contradiction. Similarly, if $x_2' < x_2$ then $x_1 < x_1'$. Together with $x_1 \neq x_1'$ and $x_2 \neq x_2'$, this yields the stated dichotomy.
\end{proof}
\begin{prop}
\label{prop:comp dep}
Let $f,g \colon [-1,1] \to [-1,1]$ be maps with $f(0) = g(0) = 0$, and let $x_1,x_2$ be such that $-1 \leq x_1 < 0 < x_2 \leq 1$. Then $\langle x_1,x_2 \rangle$ is a positive (respectively, negative) radial departure of $f \circ g$ if and only if either:
\begin{enumerate}
\item $\langle x_1,x_2 \rangle$ is a positive radial departure of $g$ and $\langle g(x_1),g(x_2) \rangle$ is a positive (respectively, negative) radial departure of $f$; or
\item $\langle x_1,x_2 \rangle$ is a negative radial departure of $g$ and $\langle g(x_2),g(x_1) \rangle$ is a negative (respectively, positive) radial departure of $f$.
\end{enumerate}
\end{prop}
\begin{proof}
If $\langle x_1,x_2 \rangle$ is a positive radial departure of $g$ and $\langle g(x_1),g(x_2) \rangle$ is a positive (respectively, negative) radial departure of $f$, then $f \circ g((x_1,x_2)) = f((g(x_1),g(x_2)))$, which equals $(f \circ g(x_1),f \circ g(x_2))$ (respectively, $(f \circ g(x_2),f \circ g(x_1))$). Thus $\langle x_1,x_2 \rangle$ is a positive (respectively, negative) radial departure of $f \circ g$.
Similarly, if $\langle x_1,x_2 \rangle$ is a negative radial departure of $g$ and $\langle g(x_2),g(x_1) \rangle$ is a negative (respectively, positive) radial departure of $f$, then $f \circ g((x_1,x_2)) = f((g(x_2),g(x_1)))$, which equals $(f \circ g(x_1),f \circ g(x_2))$ (respectively, $(f \circ g(x_2),f \circ g(x_1))$). Thus $\langle x_1,x_2 \rangle$ is a positive (respectively, negative) radial departure of $f \circ g$.
Conversely, suppose $\langle x_1,x_2 \rangle$ is a radial departure of $f \circ g$. Then $g(x_1),g(x_2) \notin g((x_1,x_2))$ since $f \circ g(x_1),f \circ g(x_2) \notin f \circ g((x_1,x_2))$, and $g(x_1) \neq g(x_2)$ since $f \circ g(x_1) \neq f \circ g(x_2)$. It follows that $\langle x_1,x_2 \rangle$ is a radial departure of $g$. Now since $f \circ g(x_1),f \circ g(x_2) \notin f \circ g((x_1,x_2))$ and $f \circ g(x_1) \neq f \circ g(x_2)$, we deduce:
\begin{enumerate}
\item if $\langle x_1,x_2 \rangle$ is a positive radial departure of $g$ then $\langle g(x_1),g(x_2) \rangle$ is a radial departure of $f$, and by the above argument the orientation of the radial departure $\langle g(x_1),g(x_2) \rangle$ of $f$ matches the orientation of the radial departure $\langle x_1,x_2 \rangle$ of $f \circ g$; and
\item if $\langle x_1,x_2 \rangle$ is a negative radial departure of $g$ then $\langle g(x_2),g(x_1) \rangle$ is a radial departure of $f$, and by the above argument the orientation of the radial departure $\langle g(x_2),g(x_1) \rangle$ of $f$ is opposite to the orientation of the radial departure $\langle x_1,x_2 \rangle$ of $f \circ g$.
\end{enumerate}
\end{proof}
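For a simple illustration of the second alternative in Proposition~\ref{prop:comp dep}, let $g(x) = -x$ and let $f$ be the piecewise-linear map determined by $f(-1) = -\tfrac14$, $f(0) = 0$, $f(\tfrac12) = \tfrac12$ and $f(1) = -\tfrac12$. Every pair $\langle x_1,x_2 \rangle$ with $-1 \leq x_1 < 0 < x_2 \leq 1$ is a negative radial departure of $g$, while the radial departures of $f$ are exactly the (positive) pairs $\langle a,b \rangle$ with $-1 \leq a < 0 < b \leq \tfrac12$. Proposition~\ref{prop:comp dep} therefore gives that the radial departures of $f \circ g$ are exactly the pairs $\langle x_1,x_2 \rangle$ with $-\tfrac12 \leq x_1 < 0 < x_2 \leq 1$, all negatively oriented, as can also be checked directly from the formula $f \circ g(x) = f(-x)$.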
\section{Radial departures and embeddings}
\label{sec:embeddings}
The main result of this section, Proposition~\ref{prop:embed 0 accessible}, essentially appears in \cite[Lemma 7.2]{anusic-bruin-cinc2020}, though it is phrased there in different language. We present an alternative argument here, to keep this treatment self-contained.
Let $\mathbb{H} = \{(x,y) \in \mathbb{R}^2: x \geq 0\}$ denote the closed right half plane. We use the usual Euclidean metric on the (half) plane, $d \left( (x_1,y_1), (x_2,y_2) \right) = \|(x_1,y_1) - (x_2,y_2)\|$, where $\|p\|$ denotes the Euclidean norm of the point $p \in \mathbb{R}^2$.
\begin{lem}
\label{lem:tuck}
Suppose $f \colon [-1,1] \to [-1,1]$ is a map with $f(0) = 0$ all of whose radial departures have the same orientation. Then for any $\varepsilon > 0$ there exists an embedding $\Phi$ of $[-1,1]$ into $\mathbb{H}$ such that $\Phi(0) = (0,0)$, $\Phi([-1,1]) \cap \partial \mathbb{H} = \{(0,0)\}$, and $\|\Phi(x) - (0,f(x))\| < \varepsilon$ for all $x \in [-1,1]$.
\end{lem}
\begin{proof}
Suppose $f$ has no negative radial departures (the case where $f$ has no positive radial departures is similar).
Without loss of generality, we may assume that $f$ has only finitely many contour points. Indeed, if $f$ has infinitely many contour points, we may approximate $f$ arbitrarily closely by maps with only finitely many contour points, all of whose radial departures are positive, e.g.\ by making $f$ monotone between some negative left contour point and some positive right contour point.
The proof consists of three steps. In the first step, we show that we can simultaneously traverse the graphs of $f {\restriction}_{[0,1]}$ (from $0$ to $1$) and $f {\restriction}_{[-1,0]}$ (from $0$ to $-1$) in such a way that the point on the right side is always above (or equal to) the point on the left side. In the second step, we perturb $f$ a small amount so that during this traversal the point on the right side is always strictly above the point on the left side. This enables us to embed an arc in $\mathbb{H}$ with both the right and left sides of the graph of this perturbed $f$ laid out to the right starting from $(0,0)$; however, this arc may have some constant (horizontal) sections at some contour points of $f$. In the third step, we contract these horizontal segments to points, to obtain the desired embedding $\Phi$. Figure~\ref{fig:tuck} illustrates the outcomes of these steps for a sample function $f$.
\begin{figure}
\caption{Above: a sample function $f$ with only positive radial departures. Below: the outcomes of the three steps of the proof of Lemma~\ref{lem:tuck}.}
\label{fig:tuck}
\end{figure}
\noindent \textbf{Step 1.} We show that it is possible to traverse the right and left sides of the graph of $f$ simultaneously in such a way that the $y$-value of the point on the right is always greater than or equal to the $y$-value of the point on the left. Precisely, we show below that there exist onto, monotone maps $\psi_+ \colon [0,r_+] \to [0,1]$ and $\psi_- \colon [0,r_-] \to [-1,0]$, for some $r_+,r_- > 0$, such that $\psi_+(0) = \psi_-(0) = 0$, $f(\psi_+(t)) \geq f(\psi_-(t))$ for all $t \in [0, \min\{r_+,r_-\}]$, and these maps are not locally constant except possibly at points $t$ for which the value is $0$ or a contour point of $f$. The bulk of the work involved is to show that it is possible to completely traverse one side of the graph of $f$, while simultaneously traversing a part of the other side in this way, which is the content of the following Claim.
\begin{claim}
\label{claim:psi+ psi-}
There exist maps $\psi_+ \colon [0,r] \to [0,1]$ and $\psi_- \colon [0,r] \to [-1,0]$, for some $r > 0$, such that:
\begin{enumerate}
\item $\psi_+(0) = \psi_-(0) = 0$, $\psi_+$ is non-decreasing, and $\psi_-$ is non-increasing;
\item For each $x \in [0,1]$ (respectively, $x \in [-1,0]$), if $\psi_+^{-1}(x)$ (respectively, $\psi_-^{-1}(x)$) is not a singleton, then either $x = 0$ or $x$ is a positive right (respectively, negative left) contour point of $f$;
\item $f(\psi_+(t)) \geq f(\psi_-(t))$ for all $t$;
\item Either $\psi_+$ or $\psi_-$ is onto.
\end{enumerate}
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim:psi+ psi-}]
\renewcommand{\qedsymbol}{\textsquare (Claim~\ref{claim:psi+ psi-})}
We proceed by induction on the number of right contour points of $f$. To assist with the induction, we add one further property to the above four:
\begin{enumerate}
\setcounter{enumi}{4}
\item If $\psi_-$ is not onto, then $\psi_-(r)$ is either $0$ or a negative left contour point of $f$.
\end{enumerate}
For the base case, suppose $f$ has only one (non-zero) right contour point $\alpha$. If $f(\alpha) > 0$, then it suffices to let $\psi_+$ linearly parameterize $[0,1]$ and let $\psi_- \equiv 0$. Suppose now that $f(\alpha) < 0$. Then the first (non-zero) left contour point $\beta$ of $f$ must be negative, since otherwise $\langle \beta,\alpha \rangle$ would be a negative radial departure of $f$. If $\beta$ is the only left contour point of $f$, then it suffices to let $\psi_+ \equiv 0$ and let $\psi_-$ linearly parameterize $[-1,0]$. Otherwise, $f(\beta) \leq f(\alpha)$, since otherwise $\langle \beta',\alpha \rangle$ would be a negative radial departure of $f$, where $\beta'$ is the next left contour point of $f$ after $\beta$ (going from $0$ towards $-1$). In this case, let $0 < r_1 < r_2$, let $\psi_+{\restriction}_{[0,r_1]} \equiv 0$ and let $\psi_-{\restriction}_{[0,r_1]}$ linearly parameterize $[\beta,0]$, then let $\psi_+{\restriction}_{[r_1,r_2]}$ linearly parameterize $[0,1]$ while $\psi_-{\restriction}_{[r_1,r_2]} \equiv \beta$.
For the inductive step, suppose the Claim is established for maps with $n$ right contour points, and suppose $f$ has $n+1$ right contour points. Let $\alpha_n$,$\alpha_{n+1}$ be the last two right contour points of $f$, and apply induction to $f{\restriction}_{[0,\alpha_n]}$ to get $\psi_+ \colon [0,r] \to [0,\alpha_n]$ and $\psi_- \colon [0,r] \to [-1,0]$ satisfying the properties (1)--(5). If $\psi_-$ is onto, then we are already done. Suppose then that $\psi_-$ is not onto, which means that $\psi_+(r) = \alpha_n$ and $\psi_-(r)$ is a negative left contour point $\beta$ of $f$. Let $r < r_1 < r_2$. If $f(\alpha_{n+1}) \geq f(\beta)$, then it suffices to let $\psi_+ {\restriction}_{[r,r_1]}$ linearly parameterize $[\alpha_n,1]$ and let $\psi_- {\restriction}_{[r,r_1]} \equiv \beta$. Suppose now that $f(\alpha_{n+1}) < f(\beta)$, which means $\alpha_n$ is a positive right contour point. If $f(x) \leq f(\alpha_n)$ for all $x \in [-1,\beta]$, then it suffices to let $\psi_+ {\restriction}_{[r,r_1]} \equiv \alpha_n$ and let $\psi_- {\restriction}_{[r,r_1]}$ linearly parameterize $[-1,\beta]$. On the other hand, if $f(x) > f(\alpha_n)$ for some $x \in [-1,\beta)$, then there must be a left contour point $\beta' \in (x,\beta)$ with $f(\beta') \leq f(\alpha_{n+1})$ and $f(x') \leq f(\alpha_n)$ for all $x' \in [\beta',\beta]$, since otherwise $\langle x,\alpha_{n+1} \rangle$ would be a negative radial departure of $f$. In this case, let $\psi_+{\restriction}_{[r,r_1]} \equiv \alpha_n$ and let $\psi_-{\restriction}_{[r,r_1]}$ linearly parameterize $[\beta',\beta]$, then let $\psi_+{\restriction}_{[r_1,r_2]}$ linearly parameterize $[\alpha_n,1]$ while $\psi_-{\restriction}_{[r_1,r_2]} \equiv \beta'$.
This completes the inductive argument.
\end{proof}
Now we may assume $r < \frac{\varepsilon}{2}$. If $\psi_+$ is not onto, let $r_- = r$, let $r < r_+ < \frac{\varepsilon}{2}$ and let $\psi_+ {\restriction}_{[r,r_+]}$ linearly parameterize $[\psi_+(r),1]$. If $\psi_-$ is not onto, let $r_+ = r$, let $r < r_- < \frac{\varepsilon}{2}$ and let $\psi_- {\restriction}_{[r,r_-]}$ linearly parameterize $[-1,\psi_-(r)]$. If both $\psi_+$ and $\psi_-$ are onto, then let $r_+ = r_- = r$.
\noindent \textbf{Step 2.}
By a small perturbation of $f$, we may construct a map $f' \colon [-1,1] \to [-1,1]$ with $f'(0) = 0$ such that $f'(x) > f(x)$ for all $x \in (0,1]$, $f'(x) < f(x)$ for all $x \in [-1,0)$, and $\|(\frac{\varepsilon}{2},f'(x)) - (0,f(x))\| < \varepsilon$ for all $x \in [-1,1]$. Define $\Psi_+ \colon [0,r_+] \to \mathbb{H}$ and $\Psi_- \colon [0,r_-] \to \mathbb{H}$ by $\Psi_+(t) = (t,f'(\psi_+(t)))$ and $\Psi_-(t) = (t,f'(\psi_-(t)))$. Note that $\Psi_+(t_1) = \Psi_-(t_2)$ only when $t_1 = t_2 = 0$, hence $\Psi_+([0,r_+]) \cup \Psi_-([0,r_-])$ is an arc in $\mathbb{H}$.
\noindent \textbf{Step 3.}
Let $M \colon \mathbb{H} \to \mathbb{H}$ be a monotone map such that $M((0,0)) = (0,0)$, the $y$-coordinate of $M(p)$ equals the $y$-coordinate of $p$ for each $p \in \mathbb{H}$, $M([0,\frac{\varepsilon}{2}] \times \mathbb{R}) \subseteq [0,\frac{\varepsilon}{2}] \times \mathbb{R}$, $M(\Psi_+(\psi_+^{-1}(x)))$ (respectively, $M(\Psi_-(\psi_-^{-1}(x)))$) is a singleton for each $x \in [0,1]$ (respectively, $x \in [-1,0]$), and $M$ is otherwise one-to-one. Finally, define $\Phi \colon [-1,1] \to M(\Psi_+([0,r_+]) \cup \Psi_-([0,r_-]))$ by $\Phi(x) = M(\Psi_+(\psi_+^{-1}(x)))$ if $x \in [0,1]$ and $\Phi(x) = M(\Psi_-(\psi_-^{-1}(x)))$ if $x \in [-1,0]$. This $\Phi$ has the desired properties.
\end{proof}
We remark that the converse of Lemma~\ref{lem:tuck} is also true: if $f$ has both a positive and a negative radial departure, then for some $\varepsilon > 0$ there does not exist a map $\Phi$ as in the Lemma. However, we will not need this result, so we do not include a proof; see also \cite[Lemma 7.6]{anusic-bruin-cinc2020}.
\begin{prop}
\label{prop:embed 0 accessible}
Let $f_n \colon [-1,1] \to [-1,1]$, $n = 1,2,\ldots$, be maps with $f_n(0) = 0$ for each $n$. Suppose that for each $n$, all radial departures of $f_n$ have the same orientation. Then there exists an embedding of $X = \varprojlim \left \langle [-1,1], f_n \right \rangle$ into $\mathbb{R}^2$ for which the point $\langle 0,0,\ldots \rangle \in X$ is accessible.
\end{prop}
\begin{proof}
We apply the Anderson-Choquet Embedding Theorem \cite{anderson-choquet1959}. It suffices to prove that for any embedding $\Omega_n \colon [-1,1] \to \mathbb{H}$ such that $\Omega_n(0) = (0,0)$, and for any $\varepsilon > 0$, there exists an embedding $\Omega_{n+1} \colon [-1,1] \to \mathbb{H}$ such that $\Omega_{n+1}(0) = (0,0)$ and $\|\Omega_{n+1}(x) - \Omega_n(f_n(x))\| < \varepsilon$ for all $x \in [-1,1]$.
Let $A$ denote the straight segment in $\mathbb{R}^2$ from $(0,0)$ to $(-1,0)$, and let $I$ denote the straight segment in $\mathbb{R}^2$ from $(0,-1)$ to $(0,1)$.
Suppose $\Omega_n \colon [-1,1] \to \mathbb{H}$ is an embedding such that $\Omega_n(0) = (0,0)$, and let $\varepsilon > 0$ be arbitrary. Let $H \colon \mathbb{R}^2 \to \mathbb{R}^2$ be a homeomorphism such that $H(\Omega_n(x)) = (0,x)$ for each $x \in [-1,1]$, and $H {\restriction}_A = \mathrm{id}_A$. For some small $\varepsilon' > 0$, apply Lemma~\ref{lem:tuck} to construct an embedding $\Phi \colon [-1,1] \to \mathbb{H}$ such that (1) $\Phi(0) = (0,0)$; (2) $\Phi([-1,1]) \cap \partial \mathbb{H} = \{(0,0)\}$; and (3) $\|\Phi(x) - (0,f_n(x))\| < \varepsilon'$ for all $x \in [-1,1]$. Let $\Omega_{n+1} = H^{-1} \circ \Phi$. Clearly $\Omega_{n+1}(0) = (0,0)$. For sufficiently small $\varepsilon'$, we have (from (2)) that $\Omega_{n+1}$ is an embedding of $[-1,1]$ into $\mathbb{H}$, and (from (3) and uniform continuity of $H$ in a neighborhood of $A \cup I$) $\|\Omega_{n+1}(x) - \Omega_n(f_n(x))\| < \varepsilon$ for all $x \in [-1,1]$, as desired.
\end{proof}
\section{Radial contour factorization}
\label{sec:rad contour}
In this section we apply the contour factorization from Section~\ref{sec:contour} to both the left and right ``halves'' of a map $f \colon [-1,1] \to [-1,1]$. The resulting factorization will be used in Section~\ref{sec:no twins} to produce alternative inverse limit representations of a given arc-like continuum.
Recall that $\mathsf{r} \colon [0,1] \to [-1,0]$ is the function $\mathsf{r}(x) = -x$.
\begin{defn}
\label{defn:rad contour factor}
Let $f \colon [-1,1] \to [-1,1]$ be a piecewise-linear map with $f(0) = 0$. The \emph{radial contour factor} of $f$ is the piecewise-linear map $t_f \colon [-1,1] \to [-1,1]$ such that:
\begin{enumerate}
\item $t_f {\restriction}_{[0,1]}$ is the contour factor of $f {\restriction}_{[0,1]}$ (in the sense of Definition~\ref{defn:contour factor}), and;
\item $t_f {\restriction}_{[-1,0]} \circ \mathsf{r}$ is the contour factor of $f {\restriction}_{[-1,0]} \circ \mathsf{r}$ (in the sense of Definition~\ref{defn:contour factor}).
\end{enumerate}
A \emph{radial meandering factor} of $f$ is a (piecewise-linear) map $s \colon [-1,1] \to [-1,1]$ which is \emph{sign preserving} (i.e.\ $s(x) \geq 0$ for all $x \geq 0$ and $s(x) \leq 0$ for all $x \leq 0$) and such that $f = t_f \circ s$.
\end{defn}
It follows from Proposition~\ref{prop:meandering factor} that there always exists at least one radial meandering factor of any given $f$, but it is not necessarily unique. Observe that if $s$ is any radial meandering factor of $f$, then $s$ has no negative radial departures. In fact, for any $y_1 < 0 < y_2$, there exists a positive radial departure $\langle x_1,x_2 \rangle$ of $s$ such that $s(x_1) = y_1$ and $s(x_2) = y_2$.
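For example, let $f \colon [-1,1] \to [-1,1]$ be the piecewise-linear map determined by $f(-1) = \tfrac34$, $f(-\tfrac12) = -\tfrac14$, $f(0) = 0$, $f(\tfrac14) = \tfrac12$, $f(\tfrac34) = -\tfrac12$ and $f(1) = \tfrac14$. The right contour points of $f$ are $\tfrac14$ and $\tfrac34$, and the left contour points of $f$ are $-\tfrac12$ and $-1$; hence the radial contour factor $t_f$ is the piecewise-linear map determined by $t_f(-1) = \tfrac34$, $t_f(-\tfrac12) = -\tfrac14$, $t_f(0) = 0$, $t_f(\tfrac12) = \tfrac12$ and $t_f(1) = -\tfrac12$. Note that $t_f$ coincides with $f$ on $[-1,0]$ but not on $[0,1]$.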
We develop several basic properties of radial contour factors and radial departures in the remainder of this section.
\begin{lem}
\label{lem:s match}
Let $f \colon [-1,1] \to [-1,1]$ be a piecewise-linear map with $f(0) = 0$, let $t_f$ be the radial contour factor of $f$, and let $\langle x_1,x_2 \rangle$ be a radial departure of $f$. Then there exists a radial departure $\langle y_1,y_2 \rangle$ of $t_f$ such that for any map $s \colon [-1,1] \to [-1,1]$ with $s(0) = 0$ and $f = t_f \circ s$:
\begin{enumerate}
\item $\langle x_1,x_2 \rangle$ is a positive radial departure of $s$; and
\item $s(x_1) = y_1$ and $s(x_2) = y_2$.
\end{enumerate}
\end{lem}
\begin{proof}
First let $s_0$ be a radial meandering factor of $f$. Since $f = t_f \circ s_0$, we have by Proposition~\ref{prop:comp dep} that $\langle x_1,x_2 \rangle$ is a positive radial departure of $s_0$ (positive because $s_0$ is a radial meandering factor), and $\langle s_0(x_1),s_0(x_2) \rangle$ is a radial departure of $t_f$ with the same orientation as $\langle x_1,x_2 \rangle$ has for $f$. Let $y_1 = s_0(x_1)$ and $y_2 = s_0(x_2)$.
Now let $s \colon [-1,1] \to [-1,1]$ be any map with $s(0) = 0$ and $f = t_f \circ s$. According to Proposition~\ref{prop:comp dep}, we have that either:
\begin{enumerate}[label=(\alph{*})]
\item \label{s option a} $\langle x_1,x_2 \rangle$ is a positive radial departure of $s$ and $\langle s(x_1),s(x_2) \rangle$ is a radial departure of $t_f$ with the same orientation as $\langle x_1,x_2 \rangle$ has for $f$; or
\item \label{s option b} $\langle x_1,x_2 \rangle$ is a negative radial departure of $s$ and $\langle s(x_2),s(x_1) \rangle$ is a radial departure of $t_f$ with the opposite orientation to what $\langle x_1,x_2 \rangle$ has for $f$.
\end{enumerate}
We claim that alternative \ref{s option b} is impossible. Indeed, suppose for a contradiction that \ref{s option b} holds. Then since $\{t_f(s_0(x_1)),t_f(s_0(x_2))\} = \{t_f(s(x_2)),t_f(s(x_1))\}$, we have a contradiction with Lemma~\ref{lem:dep values unique}(2), as $\langle s_0(x_1),s_0(x_2) \rangle$ and $\langle s(x_2),s(x_1) \rangle$ are radial departures of $t_f$ with opposite orientations.
Therefore \ref{s option a} holds, so $\langle x_1,x_2 \rangle$ is a positive radial departure of $s$. Moreover, since $\{t_f(s_0(x_1)),t_f(s_0(x_2))\} = \{t_f(s(x_1)),t_f(s(x_2))\}$, we have by Lemma~\ref{lem:dep values unique}(1) that $s(x_1) = s_0(x_1) = y_1$ and $s(x_2) = s_0(x_2) = y_2$.
\end{proof}
\begin{defn}
\label{defn:same deps}
Let $f,g \colon [-1,1] \to [-1,1]$ be maps with $f(0) = g(0) = 0$. We say \emph{$f$ and $g$ have the same radial departures} if for any $y_1,y_2 \in [-1,1]$, there exists a radial departure $\langle x_1,x_2 \rangle$ of $f$ with $f(x_1) = y_1$ and $f(x_2) = y_2$ if and only if there exists a radial departure $\langle x_1',x_2' \rangle$ of $g$ with $g(x_1') = y_1$ and $g(x_2') = y_2$.
\end{defn}
\begin{lem}
\label{lem:f tf same dep}
Let $f \colon [-1,1] \to [-1,1]$ be a piecewise-linear map with $f(0) = 0$, and let $t_f$ be the radial contour factor of $f$. Then $f$ and $t_f$ have the same radial departures.
\end{lem}
\begin{proof}
Let $s$ be a radial meandering factor for $f$, and let $y_1,y_2 \in [-1,1]$.
Suppose there exists a radial departure $\langle x_1,x_2 \rangle$ of $f$ with $f(x_1) = y_1$ and $f(x_2) = y_2$. Let $x_1' = s(x_1)$ and $x_2' = s(x_2)$. Then by Proposition~\ref{prop:comp dep}, $\langle x_1,x_2 \rangle$ is a positive radial departure of $s$ (positive because $s$ is a radial meandering factor), and $\langle x_1',x_2' \rangle$ is a radial departure of $t_f$ with $t_f(x_1') = y_1$ and $t_f(x_2') = y_2$.
Conversely, suppose there exists a radial departure $\langle x_1',x_2' \rangle$ of $t_f$ with $t_f(x_1') = y_1$ and $t_f(x_2') = y_2$. As observed after Definition~\ref{defn:rad contour factor}, there exists a positive radial departure $\langle x_1,x_2 \rangle$ of $s$ such that $s(x_1) = x_1'$ and $s(x_2) = x_2'$. Then by Proposition~\ref{prop:comp dep}, $\langle x_1,x_2 \rangle$ is a radial departure of $f$, with $f(x_1) = y_1$ and $f(x_2) = y_2$.
\end{proof}
\begin{cor}
\label{cor:same contour same dep}
Let $f,g \colon [-1,1] \to [-1,1]$ be piecewise-linear maps with $f(0) = g(0) = 0$. If $t_f = t_g$, then $f$ and $g$ have the same radial departures.
\end{cor}
\begin{proof}
This follows immediately from Lemma~\ref{lem:f tf same dep}.
\end{proof}
The converse of Corollary~\ref{cor:same contour same dep} does not hold in general. See Figure~\ref{fig:same deps} for an example.
\begin{figure}
\caption{An example of two maps which have the same radial departures, but do not have the same radial contour factors.}
\label{fig:same deps}
\end{figure}
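For one concrete instance of this phenomenon, let $f$ and $g$ be the piecewise-linear maps determined by $f(-1) = -\tfrac14$, $f(0) = 0$, $f(\tfrac12) = \tfrac12$, $f(1) = -\tfrac12$, and $g(-1) = -\tfrac14$, $g(0) = 0$, $g(\tfrac13) = \tfrac12$, $g(\tfrac23) = -\tfrac12$, $g(1) = \tfrac34$. One may check that all radial departures of $f$ and of $g$ are positively oriented, and that there is a radial departure $\langle x_1,x_2 \rangle$ of $f$ with $f(x_1) = y_1$ and $f(x_2) = y_2$ (and likewise for $g$) exactly when $-\tfrac14 \leq y_1 < 0 < y_2 \leq \tfrac12$; hence $f$ and $g$ have the same radial departures. On the other hand, $t_f \neq t_g$, since the right contour points of $f$ have values $\tfrac12$ and $-\tfrac12$, while those of $g$ have values $\tfrac12$, $-\tfrac12$ and $\tfrac34$.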
\begin{prop}
\label{prop:comp same dep}
Let $f_1,f_2 \colon [-1,1] \to [-1,1]$ be maps with $f_1(0) = f_2(0) = 0$, and suppose $f_1$ and $f_1 \circ f_2$ have the same radial departures. Then
\begin{enumerate}
\item For any radial departure $\langle y_1,y_2 \rangle$ of $f_1$, there exists a positive radial departure $\langle x_1,x_2 \rangle$ of $f_2$ with $f_2(x_1) = y_1$ and $f_2(x_2) = y_2$; and
\item For any negative radial departure $\langle x_1,x_2 \rangle$ of $f_2$, $\langle f_2(x_2),f_2(x_1) \rangle$ is not a radial departure of $f_1$. In fact, if $\langle y_1,y_2 \rangle$ is any radial departure of $f_1$ then either
\[ f_2(x_2) < y_1 < 0 < y_2 < f_2(x_1) \quad \textrm{or} \quad y_1 < f_2(x_2) < 0 < f_2(x_1) < y_2 .\]
\end{enumerate}
\end{prop}
\begin{proof}
For (1), let $\langle y_1,y_2 \rangle$ be a radial departure of $f_1$. Since $f_1$ and $f_1 \circ f_2$ have the same radial departures, there is a radial departure $\langle x_1,x_2 \rangle$ of $f_1 \circ f_2$ with $f_1 \circ f_2(x_1) = f_1(y_1)$ and $f_1 \circ f_2(x_2) = f_1(y_2)$; note that $\langle x_1,x_2 \rangle$ then has the same orientation for $f_1 \circ f_2$ as $\langle y_1,y_2 \rangle$ has for $f_1$. By Proposition~\ref{prop:comp dep}, we have that $\langle f_2(x_1),f_2(x_2) \rangle$ is a radial departure of $f_1$ (the other alternative, that $\langle f_2(x_2),f_2(x_1) \rangle$ is a radial departure of $f_1$ of the opposite orientation to $\langle y_1,y_2 \rangle$, is ruled out by Lemma~\ref{lem:dep values unique}(2), since its values coincide with those of $\langle y_1,y_2 \rangle$), and, by Lemma~\ref{lem:dep values unique}(1), we deduce that $f_2(x_1) = y_1$ and $f_2(x_2) = y_2$. Then by Proposition~\ref{prop:comp dep} again, we conclude that $\langle x_1,x_2 \rangle$ is a positive radial departure of $f_2$.
For (2), let $\langle x_1,x_2 \rangle$ be a negative radial departure of $f_2$, and let $\langle y_1,y_2 \rangle$ be any radial departure of $f_1$. By part (1), there exists a positive radial departure $\langle x_1',x_2' \rangle$ of $f_2$ such that $f_2(x_1') = y_1$ and $f_2(x_2') = y_2$. By Proposition~\ref{prop:alt dep nested}, we have either
\[ x_1 < x_1' < 0 < x_2' < x_2 \quad \textrm{or} \quad x_1' < x_1 < 0 < x_2 < x_2' \]
and correspondingly either
\[ f_2(x_2) < f_2(x_1') < 0 < f_2(x_2') < f_2(x_1) \quad \textrm{or} \quad f_2(x_1') < f_2(x_2) < 0 < f_2(x_1) < f_2(x_2') \]
which proves (2).
\end{proof}
\section{Contour twins}
\label{sec:no twins}
This section contains the main result of this paper, Theorem~\ref{thm:no twins}, which gives a partial affirmative answer to the question of Nadler and Quinn (more precisely to the form of this question we state as Question~\ref{ques:0 accessible} in Section~\ref{sec:prelim}), for inverse systems $\left \langle [-1,1],f_n \right \rangle$ satisfying two technical conditions. The first condition is that the radial contour factors of the bonding maps are ``stable'', by which we mean that the radial contour factor of $f_n \circ f_{n+1}$ equals the radial contour factor of $f_n$, for each $n$. The second condition states that the left and right ``halves'' of each bonding map $f_n$ must be sufficiently ``misaligned'', in the sense of the following Definition.
\begin{defn}
Let $f \colon [-1,1] \to [-1,1]$ be a map with $f(0) = 0$. We say $f$ has \emph{no contour twins} if:
\begin{enumerate}
\item $0$ is neither a local minimum nor a local maximum of $f$; and
\item If $\alpha$ is a right contour point of $f$ and $\alpha'$ is a left contour point of $f$ with $0 < \alpha < 1$ and $-1 < \alpha' < 0$, then $f(\alpha) \neq f(\alpha')$.
\end{enumerate}
\end{defn}
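For example, the piecewise-linear map $f$ determined by $f(-1) = \tfrac34$, $f(-\tfrac12) = -\tfrac14$, $f(0) = 0$, $f(\tfrac12) = \tfrac12$ and $f(1) = -\tfrac12$ has no contour twins: $0$ is neither a local minimum nor a local maximum of $f$, the only right contour point of $f$ in $(0,1)$ is $\tfrac12$, the only left contour point in $(-1,0)$ is $-\tfrac12$, and $f(\tfrac12) = \tfrac12 \neq -\tfrac14 = f(-\tfrac12)$. On the other hand, the piecewise-linear map $g$ determined by $g(-1) = -\tfrac34$, $g(-\tfrac12) = \tfrac12$, $g(-\tfrac14) = -\tfrac14$, $g(0) = 0$, $g(\tfrac14) = \tfrac12$ and $g(1) = -\tfrac34$ fails condition (2) of the above definition: $\tfrac14$ is a right contour point and $-\tfrac12$ is a left contour point of $g$ with $g(\tfrac14) = g(-\tfrac12) = \tfrac12$.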
\begin{lem}
\label{lem:sell}
Let $f_1,f_2,f_3 \colon [-1,1] \to [-1,1]$ be piecewise-linear maps with $f_1(0) = f_2(0) = f_3(0) = 0$, and such that:
\begin{enumerate}[label=(\roman{*})]
\item $t_{f_1} = t_{f_1 \circ f_2}$;
\item $t_{f_2} = t_{f_2 \circ f_3}$; and
\item each of $f_1$ and $f_2$ has no contour twins.
\end{enumerate}
Then there exists $\tilde{s} \colon [-1,1] \to [-1,1]$ with $\tilde{s}(0) = 0$ such that:
\begin{enumerate}
\item $t_{f_1} \circ \tilde{s} = f_1 \circ f_2$; and
\item $\tilde{s} \circ t_{f_3}$ has no negative radial departures.
\end{enumerate}
\end{lem}
\begin{proof}
For $k = 1,2,3$, let $t_k = t_{f_k}$. Let $s_1$ be a radial meandering factor of $f_1$ (so that $f_1 = t_1 \circ s_1$) and let $s_{12}$ be a radial meandering factor of $f_1 \circ f_2$ (so that $f_1 \circ f_2 = t_1 \circ s_{12}$). Define $\tilde{s} \colon [-1,1] \to [-1,1]$ by:
\[ \tilde{s}(x) = \begin{cases}
s_{12}(x) & \textrm{if } x \leq 0 \\
s_1 \circ f_2(x) & \textrm{if } x > 0 .
\end{cases} \]
See Example~\ref{ex:minc demo} below for a particular example with this map $\tilde{s}$ shown.
Observe first that conclusion (1) holds, since $t_1 \circ \tilde{s}$ agrees with $t_1 \circ s_{12} = f_1 \circ f_2$ on $[-1,0]$ and with $t_1 \circ s_1 \circ f_2 = f_1 \circ f_2$ on $(0,1]$. Note also that $\tilde{s}$ has no negative radial departures (since $\tilde{s}(x) = s_{12}(x) \leq 0$ for all $x \leq 0$), so by Proposition~\ref{prop:comp dep} we need only prove that for any negative radial departure $\langle w_1,w_2 \rangle$ of $t_3$, $\langle t_3(w_2),t_3(w_1) \rangle$ is not a positive radial departure of $\tilde{s}$.
Let $\langle w_1,w_2 \rangle$ be a negative radial departure of $t_3$, and let $x_1 = t_3(w_2)$ and $x_2 = t_3(w_1)$. Suppose for a contradiction that $\langle x_1,x_2 \rangle$ is a positive radial departure of $\tilde{s}$. This means
\[ s_{12}((x_1,0]) = (s_{12}(x_1),0] \quad \textrm{and} \quad s_1 \circ f_2([0,x_2)) \subseteq (s_{12}(x_1),s_1 \circ f_2(x_2)) .\]
The remainder of the proof proceeds in two parts. In the first part, we establish the existence and configuration of a few departures of $s_1 \circ f_2$, using the assumption that $f_2$ has no contour twins, and the observation \ref{properly nested} below. In the second part, we consider the map $t_1$, and the relationship between $s_1 \circ f_2$ and $s_{12}$, and derive a contradiction.
\noindent \textbf{Part 1.}
Recall that, by Proposition~\ref{prop:comp dep}, each radial departure of $s_1 \circ f_2$ is also a radial departure of $f_2$. Further, it follows from Lemma~\ref{lem:same comp contour} that $f_2 \circ f_3$ and $f_2 \circ t_3$ have the same radial contour factor, so $f_2$ and $f_2 \circ t_3$ have the same radial contour factor and hence the same radial departures by Corollary~\ref{cor:same contour same dep}. Proposition~\ref{prop:comp same dep}(2) then implies that
\begin{enumerate}[label=($\ast$), ref=($\ast$)]
\item \label{properly nested} If $\langle z_1,z_2 \rangle$ is a radial departure of $s_1 \circ f_2$, then either $z_1 < x_1 < 0 < x_2 < z_2$ or $x_1 < z_1 < 0 < z_2 < x_2$
\end{enumerate}
since any such $\langle z_1,z_2 \rangle$ is a radial departure of $f_2$ by Proposition~\ref{prop:comp dep}.
Since $0$ is not a local minimum of $f_2$ and $s_1$ is sign-preserving, it follows that $0$ is also not a local minimum of $s_1 \circ f_2$. We claim there exists $x \in (0,x_2)$ such that $s_1 \circ f_2(x) < 0$. Indeed, if not, then since $0$ is not a local minimum of $s_1 \circ f_2$, for some arbitrarily small $\varepsilon > 0$ we would have that $\langle -\varepsilon,x_2 \rangle$ is a (positive) radial departure of $s_1 \circ f_2$, but this would contradict \ref{properly nested}.
Let $y_0 = \min s_1 \circ f_2([0,x_2])$, and let $x_0 \in (0,x_2)$ be the right departure of $s_1 \circ f_2$ such that $s_1 \circ f_2(x_0) = y_0$. Also, let $y_2 = s_1 \circ f_2(x_2)$. Notice that $x_0$ is a contour point of $s_1 \circ f_2$.
Because $t_3([w_1,0]) \supseteq [0,x_2]$, we have that $y_0,y_2 \in s_1 \circ f_2 \circ t_3([w_1,0])$, therefore there exist left departures $\tilde{w}_0,\tilde{w}_2 < 0$ of $s_1 \circ f_2 \circ t_3$ such that $s_1 \circ f_2 \circ t_3(\tilde{w}_0) = y_0$ and $s_1 \circ f_2 \circ t_3(\tilde{w}_2) = y_2$. Since $s_1 \circ f_2 \circ t_3$ and $s_1 \circ f_2$ have the same radial contour factor (by Lemma~\ref{lem:same comp contour}), there exist corresponding left departures $\tilde{x}_0,\tilde{x}_2 < 0$ of $s_1 \circ f_2$ such that $s_1 \circ f_2(\tilde{x}_0) = y_0$ and $s_1 \circ f_2(\tilde{x}_2) = y_2$.
\begin{claim}
\label{claim:dep order}
$\tilde{x}_0 < \tilde{x}_2$ and $x_1 < \tilde{x}_2$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim:dep order}]
\renewcommand{\qedsymbol}{\textsquare (Claim~\ref{claim:dep order})}
Suppose for a contradiction that $\tilde{x}_2 < \tilde{x}_0$. First note that $\tilde{x}_0$ cannot be a contour point of $s_1 \circ f_2$, because if it were then $\tilde{x}_0$ and $x_0$ would be contour twins of $s_1 \circ f_2$, and hence of $f_2$, contradicting the hypotheses of the Lemma. Therefore there must be a left departure $x$ of $s_1 \circ f_2$ with $\tilde{x}_2 < x < \tilde{x}_0$ and $s_1 \circ f_2(x) < s_1 \circ f_2(\tilde{x}_0)$. But then $\langle x,x_2 \rangle$ is a (positive) radial departure of $s_1 \circ f_2$, contradicting \ref{properly nested}. This proves that $\tilde{x}_0 < \tilde{x}_2$.
We now have that $\langle \tilde{x}_2,x_0 \rangle$ is a negative radial departure of $s_1 \circ f_2$. Therefore by \ref{properly nested}, we must have $x_1 < \tilde{x}_2$.
\end{proof}
See Figure~\ref{fig:part1} for an illustration summarizing what has been deduced so far in Part 1.
\begin{figure}
\caption{A sample illustration of part of the map $s_1 \circ f_2$, with the departures $x_0,x_2,\tilde{x}_0,\tilde{x}_2$ indicated.}
\label{fig:part1}
\end{figure}
\begin{claim}
\label{claim:above y0}
For all $x \in [x_1,0]$, $s_1 \circ f_2(x) \geq y_0$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim:above y0}]
\renewcommand{\qedsymbol}{\textsquare (Claim~\ref{claim:above y0})}
Suppose for a contradiction that there exists $p_0 \in [x_1,0]$ such that $s_1 \circ f_2(p_0) < y_0$. This means $x_1 \leq p_0 < \tilde{x}_0$. We may assume that $p_0$ is a left departure of $s_1 \circ f_2$. Let $q_3 = s_1 \circ f_2(p_0)$.
Because $t_3([0,w_2]) \supseteq [x_1,0]$, we have that $q_3 \in s_1 \circ f_2 \circ t_3([0,w_2])$, therefore there exists a right departure $\tilde{w}_3$ of $s_1 \circ f_2 \circ t_3$ such that $s_1 \circ f_2 \circ t_3(\tilde{w}_3) = q_3$. Since $s_1 \circ f_2 \circ t_3$ and $s_1 \circ f_2$ have the same radial contour factor, there exists a corresponding right departure $\tilde{q}_0$ of $s_1 \circ f_2$ such that $s_1 \circ f_2(\tilde{q}_0) = q_3$. Clearly $\tilde{q}_0 > x_2$.
Let $q_1 = \max \{s_1 \circ f_2(x): x \in [p_0,\tilde{x}_2]\}$, let $q_2 = \max \{s_1 \circ f_2(x): x \in [x_2,\tilde{q}_0]\}$, and let $p_1 \in [p_0,\tilde{x}_2]$ be the left departure of $s_1 \circ f_2$ and let $p_2 \in [x_2,\tilde{q}_0]$ be the right departure of $s_1 \circ f_2$ such that $s_1 \circ f_2(p_1) = q_1$ and $s_1 \circ f_2(p_2) = q_2$.
Observe that both $p_1$ and $p_2$ are contour points of $s_1 \circ f_2$, hence $q_1 \neq q_2$ according to the hypotheses of the Lemma. But if $q_1 > q_2$ then $\langle p_1,\tilde{q}_0 \rangle$ is a radial departure of $s_1 \circ f_2$ contradicting \ref{properly nested}, and if $q_2 > q_1$ then $\langle p_0,p_2 \rangle$ is a radial departure of $s_1 \circ f_2$ contradicting \ref{properly nested}.
\end{proof}
\noindent \textbf{Part 2.}
Suppose $f_1$ has $n$ right contour points (not including $0$), which means the non-zero right contour points of $t_1$ are $\frac{1}{n},\frac{2}{n},\ldots,1$.
\begin{claim}
\label{claim:y2 past contour pt}
$y_2 > \frac{1}{n}$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim:y2 past contour pt}]
\renewcommand{\qedsymbol}{\textsquare (Claim~\ref{claim:y2 past contour pt})}
Suppose $f_1$ has $m$ left contour points (not including $0$), which means the non-zero left contour points of $t_1$ are $-1,\ldots,\frac{-2}{m},\frac{-1}{m}$. Since $0$ is not a local minimum or a local maximum for $f_1$, and hence for $t_1$, it follows that $t_1$ is one-to-one on $\left[ \frac{-1}{m},\frac{1}{n} \right]$. Since $s_{12}(0) = s_1 \circ f_2(0) = 0$, and $t_1 \circ s_{12} = t_1 \circ s_1 \circ f_2$, it must be the case that $s_{12}(x) = s_1 \circ f_2(x)$ for all $x > 0$ for which $s_1 \circ f_2([0,x]) \subseteq \left( \frac{-1}{m},\frac{1}{n} \right)$.
Suppose for a contradiction that $y_2 \leq \frac{1}{n}$. Now $s_{12}(x) \geq 0 > \frac{-1}{m}$ for all $x \geq 0$, and $s_1 \circ f_2(x) < y_2 \leq \frac{1}{n}$ for all $x \in [0,x_0] \subset [0,x_2)$. Together, these imply that $s_1 \circ f_2(x) = s_{12}(x)$ for all $x \in [0,x_0]$. But $s_1 \circ f_2(x_0) = y_0 < 0$, while $s_{12}(x_0) \geq 0$, a contradiction.
\end{proof}
Let $i \in \{2,\ldots,n\}$ be such that $\frac{i-1}{n} < y_2 \leq \frac{i}{n}$. For simplicity, we assume that $t_1 \left( \frac{i}{n} \right) < 0$, which means $t_1 \left( \frac{i-1}{n} \right) > 0$. The case where $t_1 \left( \frac{i}{n} \right) > 0$ can be argued similarly.
\begin{claim}
\label{claim:y3}
There exists a left departure $y_3 \in (s_{12}(x_1),0]$ of $t_1$ such that $t_1(y_3) = t_1 \left( \frac{i-1}{n} \right)$.
\end{claim}
See Figure~\ref{fig:y3} for an illustration of the contour points $\frac{i-1}{n}$ and $\frac{i}{n}$ of $t_1$ together with $y_3$.
\begin{figure}
\caption{A sample illustration of part of the map $t_1$, with the contour points $\frac{i-1}{n}$ and $\frac{i}{n}$, and the point $y_3$.}
\label{fig:y3}
\end{figure}
\begin{proof}[Proof of Claim~\ref{claim:y3}]
\renewcommand{\qedsymbol}{\textsquare (Claim~\ref{claim:y3})}
It suffices to prove that $t_1 \left( \tfrac{i-1}{n} \right) \in t_1((s_{12}(x_1),0])$. Observe that $\frac{i-1}{n} \in [0,y_2) \subseteq s_1 \circ f_2((\tilde{x}_2,0])$, and so
\begin{align*}
t_1 \left( \tfrac{i-1}{n} \right) &\in t_1 \circ s_1 \circ f_2((\tilde{x}_2,0]) \\
&= t_1 \circ s_{12}((\tilde{x}_2,0]) \quad \textrm{since $t_1 \circ s_1 \circ f_2 = t_1 \circ s_{12}$} \\
&\subseteq t_1 \circ s_{12}((x_1,0]) \quad \textrm{by Claim~\ref{claim:dep order}} \\
&= t_1((s_{12}(x_1),0]) .
\end{align*}
\end{proof}
Note that
\begin{enumerate}[label=($\dagger$), ref=($\dagger$)]
\item \label{y3 range} $t_1(y) \leq t_1 \left( \frac{i-1}{n} \right)$ for each $y \in \left[ y_3, \frac{i}{n} \right]$.
\end{enumerate}
\begin{claim}
\label{claim:y4}
There exists a left departure $y_4 \in (y_3,0]$ of $t_1$ such that $t_1(y_4) = t_1 \left( \frac{i}{n} \right)$. Further, $i < n$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim:y4}]
\renewcommand{\qedsymbol}{\textsquare (Claim~\ref{claim:y4})}
We need only argue that $t_1 \left( \frac{i}{n} \right) \in t_1([y_3,0])$, and that $i < n$. We consider two cases below, depending on whether $y_3 \leq y_0$ or $y_0 < y_3$. See Figure~\ref{fig:y4} for an illustration of the points constructed in the arguments below.
\begin{figure}
\caption{A sample illustration of parts of the maps $s_{12}$ and $s_1 \circ f_2$, with the points constructed in the proof of Claim~\ref{claim:y4}.}
\label{fig:y4}
\end{figure}
Let $y_3'$ be such that $s_{12}(x_1) < y_3' < y_3$ and $t_1(y) > 0$ for each $y \in [y_3',y_3]$. Since $t_1$ has no contour twins, $y_3$ is not a contour point of $t_1$, so we may assume that $y_3'$ is a left departure of $t_1$, so that $t_1(y_3') > t_1 \left( \frac{i-1}{n} \right)$.
\textbf{Case~\ref{claim:y4}.1:} Suppose $y_3 \leq y_0$.
Let $a'$ be a left departure of $s_{12}$ such that $s_{12}(a') = y_3'$, so that $x_1 < a'$. Since $s_1 \circ f_2(a') \geq y_0 \geq y_3$ (by Claim~\ref{claim:above y0}) and $t_1 \circ s_1 \circ f_2(a') = t_1 \circ s_{12}(a') = t_1(y_3') > t_1 \left( \frac{i-1}{n} \right)$, it follows from \ref{y3 range} that $i < n$ and $s_1 \circ f_2(a') > \frac{i}{n}$. Thus there exists $a \in (a',0]$ such that $s_1 \circ f_2(a) = \frac{i}{n}$. Observe that $s_{12}(a)$ cannot be in $[y_3',y_3]$ because $t_1 \circ s_{12}(a) = t_1 \circ s_1 \circ f_2(a) = t_1 \left( \frac{i}{n} \right) < 0$. Therefore $s_{12}(a) \in (y_3,0]$, hence $t_1 \left( \frac{i}{n} \right) = t_1 \circ s_{12}(a) \in t_1((y_3,0])$.
\textbf{Case~\ref{claim:y4}.2:} Suppose $y_0 < y_3$.
Here we may assume that $y_0 < y_3'$ as well. Let $b'$ be a right departure of $s_1 \circ f_2$ such that $s_1 \circ f_2(b') = y_3'$, so that $b' < x_0$. Then $t_1 \circ s_{12}(b') = t_1 \circ s_1 \circ f_2(b') = t_1(y_3') > t_1 \left( \frac{i-1}{n} \right)$. It follows from \ref{y3 range} that $i < n$ and $s_{12}(b') > \frac{i}{n}$. Thus there exists $b \in [0,b')$ such that $s_{12}(b) = \frac{i}{n}$. Observe that $s_1 \circ f_2(b)$ cannot be in $[y_3',y_3]$ because $t_1 \circ s_1 \circ f_2(b) = t_1 \circ s_{12}(b) = t_1 \left( \frac{i}{n} \right) < 0$. Also, $s_1 \circ f_2(b)$ cannot be greater than $0$, because $s_1 \circ f_2(b) < y_2 \leq \frac{i}{n}$. Therefore $s_1 \circ f_2(b) \in (y_3,0]$, hence $t_1 \left( \frac{i}{n} \right) = t_1 \circ s_1 \circ f_2(b) \in t_1((y_3,0])$.
\end{proof}
Note that
\begin{enumerate}[label=($\dagger\dagger$), ref=($\dagger\dagger$)]
\item \label{y4 range} $t_1(y) \geq t_1 \left( \frac{i}{n} \right)$ for each $y \in \left[ y_4, \frac{i+1}{n} \right]$.
\end{enumerate}
The next Claim immediately leads to a contradiction with \ref{y3 range}, and as such will complete the proof of Lemma~\ref{lem:sell}. The proof is very similar to the proof of Claim~\ref{claim:y4}, but we include it for completeness.
\begin{claim}
\label{claim:y5}
There exists $y_5 \in (y_4,0]$ such that $t_1(y_5) = t_1 \left( \frac{i+1}{n} \right)$.
\end{claim}
\begin{proof}[Proof of Claim~\ref{claim:y5}]
\renewcommand{\qedsymbol}{\textsquare (Claim~\ref{claim:y5})}
To argue that $t_1 \left( \frac{i+1}{n} \right) \in t_1((y_4,0])$, we consider two cases below, depending on whether $y_4 \leq y_0$ or $y_0 < y_4$.
Let $y_4'$ be such that $y_3 < y_4' < y_4$ and $t_1(y) < 0$ for each $y \in [y_4',y_4]$. Since $t_1$ has no contour twins, $y_4$ is not a contour point of $t_1$, so we may assume that $y_4'$ is a left departure of $t_1$, so that $t_1(y_4') < t_1 \left( \frac{i}{n} \right)$.
\textbf{Case~\ref{claim:y5}.1:} Suppose $y_4 \leq y_0$.
Let $a'$ be a left departure of $s_{12}$ such that $s_{12}(a') = y_4'$, so that $x_1 < a'$. Since $s_1 \circ f_2(a') \geq y_0 \geq y_4$ (by Claim~\ref{claim:above y0}) and $t_1 \circ s_1 \circ f_2(a') = t_1 \circ s_{12}(a') = t_1(y_4') < t_1 \left( \frac{i}{n} \right)$, it follows from \ref{y4 range} that $i+1 < n$ and $s_1 \circ f_2(a') > \frac{i+1}{n}$. Thus there exists $a \in (a',0]$ such that $s_1 \circ f_2(a) = \frac{i+1}{n}$. Observe that $s_{12}(a)$ cannot be in $[y_4',y_4]$ because $t_1 \circ s_{12}(a) = t_1 \circ s_1 \circ f_2(a) = t_1 \left( \frac{i+1}{n} \right) > 0$. Therefore $s_{12}(a) \in (y_4,0]$, hence $t_1 \left( \frac{i+1}{n} \right) = t_1 \circ s_{12}(a) \in t_1((y_4,0])$.
\textbf{Case~\ref{claim:y5}.2:} Suppose $y_0 < y_4$.
Here we may assume that $y_0 < y_4'$ as well. Let $b'$ be a right departure of $s_1 \circ f_2$ such that $s_1 \circ f_2(b') = y_4'$, so that $b' < x_0$. Then $t_1 \circ s_{12}(b') = t_1 \circ s_1 \circ f_2(b') = t_1(y_4') < t_1 \left( \frac{i}{n} \right)$. It follows from \ref{y4 range} that $i+1 < n$ and $s_{12}(b') > \frac{i+1}{n}$. Thus there exists $b \in [0,b')$ such that $s_{12}(b) = \frac{i+1}{n}$. Observe that $s_1 \circ f_2(b)$ cannot be in $[y_4',y_4]$ because $t_1 \circ s_1 \circ f_2(b) = t_1 \circ s_{12}(b) = t_1 \left( \frac{i+1}{n} \right) > 0$. Also, $s_1 \circ f_2(b)$ cannot be greater than $0$, because $s_1 \circ f_2(b) < y_2 \leq \frac{i}{n} < \frac{i+1}{n}$. Therefore $s_1 \circ f_2(b) \in (y_4,0]$, hence $t_1 \left( \frac{i+1}{n} \right) = t_1 \circ s_1 \circ f_2(b) \in t_1((y_4,0])$.
\end{proof}
We now have a contradiction with \ref{y3 range}, since $y_5 \in (y_4,0] \subset \left[ y_3,\frac{i}{n} \right]$ and $t_1(y_5) = t_1 \left( \frac{i+1}{n} \right) > t_1 \left( \frac{i-1}{n} \right)$. This completes the proof of Lemma~\ref{lem:sell}.
\end{proof}
\begin{thm}
\label{thm:no twins}
Let $f_n \colon [-1,1] \to [-1,1]$, $n = 1,2,\ldots$, be piecewise-linear maps with $f_n(0) = 0$ for each $n$. Suppose that for each $n$,
\begin{enumerate}
\item $f_n$ and $f_n \circ f_{n+1}$ have the same radial contour factor; and
\item $f_n$ has no contour twins.
\end{enumerate}
Then there exists an embedding of $X = \varprojlim \left \langle [-1,1], f_n \right \rangle$ into $\mathbb{R}^2$ for which the point $\langle 0,0,\ldots \rangle \in X$ is accessible.
\end{thm}
\begin{proof}
For each odd $n \geq 1$, let $t_n$ be the radial contour factor of $f_n$, and apply Lemma~\ref{lem:sell} to obtain a map $\tilde{s}_n \colon [-1,1] \to [-1,1]$ with $\tilde{s}_n(0) = 0$ such that $t_n \circ \tilde{s}_n = f_n \circ f_{n+1}$ and $\tilde{s}_n \circ t_{n+2}$ has no negative radial departures. Then by Proposition~\ref{prop:embed 0 accessible} there exists an embedding of $\displaystyle \varprojlim_{n \textrm{ odd}} \left \langle [-1,1], \tilde{s}_n \circ t_{n+2} \right \rangle$ into $\mathbb{R}^2$ for which the point $\langle 0,0,\ldots \rangle$ is accessible.
Also, by standard properties of inverse limits (see Section~\ref{sec:prelim}),
\begin{align*}
X = \varprojlim \left \langle [-1,1],f_n \right \rangle &\approx \varprojlim \left \langle [-1,1],f_n \circ f_{n+1} \right \rangle_{n \textrm{ odd}} \quad \textrm{by composing bonding maps} \\
&= \varprojlim \left \langle [-1,1],t_n \circ \tilde{s}_n \right \rangle_{n \textrm{ odd}} \\
&\approx \varprojlim \left \langle [-1,1], t_1, [-1,1], \tilde{s}_1, [-1,1], t_3, [-1,1], \tilde{s}_3, \ldots \right \rangle \\ & \hspace{1.5in} \textrm{by (un)composing bonding maps} \\
&\approx \varprojlim \left \langle [-1,1], \tilde{s}_1, [-1,1], t_3, [-1,1], \tilde{s}_3, [-1,1], t_5, \ldots \right \rangle \\ & \hspace{1.5in} \textrm{by dropping first coordinate} \\
&\approx \varprojlim \left \langle [-1,1],\tilde{s}_n \circ t_{n+2} \right \rangle_{n \textrm{ odd}} \quad \textrm{by composing bonding maps.}
\end{align*}
In fact, an explicit homeomorphism between $X$ and $\displaystyle \varprojlim_{n \textrm{ odd}} \left \langle [-1,1],\tilde{s}_n \circ t_{n+2} \right \rangle$ is given by
\[ h \left( \langle x_n \rangle_{n=1}^\infty \right) = \langle \tilde{s}_{2k-1}(x_{2k+1}) \rangle_{k=1}^\infty .\] Clearly $h(\langle 0,0,\ldots \rangle) = \langle 0,0,\ldots \rangle$. Thus there exists an embedding of $X$ into $\mathbb{R}^2$ for which the point $\langle 0,0,\ldots \rangle \in X$ is accessible.
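For completeness, here is one way to check that $h$ indeed takes values in $\displaystyle \varprojlim_{n \textrm{ odd}} \left \langle [-1,1],\tilde{s}_n \circ t_{n+2} \right \rangle$; the verification uses only the relation $t_n \circ \tilde{s}_n = f_n \circ f_{n+1}$ from Lemma~\ref{lem:sell} and the thread condition $x_{2k+1} = f_{2k+1} \circ f_{2k+2}(x_{2k+3})$:
\[ \left( \tilde{s}_{2k-1} \circ t_{2k+1} \right) \left( \tilde{s}_{2k+1}(x_{2k+3}) \right) = \tilde{s}_{2k-1} \left( f_{2k+1} \circ f_{2k+2}(x_{2k+3}) \right) = \tilde{s}_{2k-1}(x_{2k+1}) .\]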
\end{proof}
We present two examples to complement the proof of Lemma~\ref{lem:sell} above. In the first, we demonstrate the function $\tilde{s}$ constructed in the proof of Lemma~\ref{lem:sell} when each of the maps $f_1,f_2,f_3$ is Minc's map $f_M$. In the second, we substantiate the need for the assumption that the maps $f_1,f_2,f_3$ have no contour twins, by showing an instance where $f_2$ has contour twins and the map $\tilde{s}$ as constructed in the proof of Lemma~\ref{lem:sell} does not satisfy the conclusion of the Lemma.
\begin{example}
\label{ex:minc demo}
Let $f_1 = f_2 = f_3 = f_M$, where $f_M$ is the map of Minc described in the Introduction (Figure~\ref{fig:minc}), rescaled so as to be a map on $[-1,1]$ instead of $[0,1]$. It can be seen that $f_M^2$ has the same radial contour factor as $f_M$. In Figure~\ref{fig:minc demo} we show the maps $f_M$, $f_M^2$, $t_{f_M}$, $s_1$ (such that $f_M = t_{f_M} \circ s_1$), $s_{12}$ (such that $f_M^2 = t_{f_M} \circ s_{12}$), $\tilde{s}$ (as defined in the proof of Lemma~\ref{lem:sell}), and finally $\tilde{s} \circ t_{f_M}$, which has no negative radial departures (as implied by the proof of Lemma~\ref{lem:sell}).
\begin{figure}
\caption{The maps $f_M$, $f_M^2$, $t_{f_M}$, $s_1$, $s_{12}$, $\tilde{s}$, and $\tilde{s} \circ t_{f_M}$ for Example~\ref{ex:minc demo}.}
\label{fig:minc demo}
\end{figure}
\end{example}
\begin{example}
\label{ex:twins ex}
In Figure~\ref{fig:twins ex} we present an example of functions $f_1,f_2,f_3$ where $f_2$ has contour twins, and three natural choices of the map $\tilde{s}$ do not satisfy the conclusion of Lemma~\ref{lem:sell}. We briefly list the key properties for this example below:
\begin{itemize}
\item $f_1 = t_1$, $f_2 = t_2$, $f_3 = t_3$. Consequently, $s_1 = \mathrm{id}$.
\item $t_{f_1 \circ f_2} = t_1$, and $t_{f_2 \circ f_3} = t_2$.
\item $f_2$ has contour twins: namely, $f_2(b_3) = f_2(b_4)$ and $f_2(b_1) = f_2(b_5)$.
\item There is a unique radial meandering factor $s_{12} \colon [-1,1] \to [-1,1]$ of $f_1 \circ f_2$, given in red in Figure~\ref{fig:twins ex}.
\item There is a left departure $x_1 \in (b_2,b_3)$ of $s_{12}$ such that $s_{12}(x_1) = -1$. There is a right departure $x_2 \in (b_4,b_5)$ of $s_{12}$ such that $s_{12}(x_2) = 1$.
\item Let $w_1 \in (a_1,a_2)$ be the left departure of $f_3$ such that $f_3(w_1) = x_2$, and let $w_2 \in (a_3,a_4)$ be the right departure of $f_3$ such that $f_3(w_2) = x_1$. Then:
\begin{enumerate}[label=(\roman{*})]
\item $\langle w_1,w_2 \rangle$ is a negative radial departure of $s_{12} \circ t_3$;
\item $\langle a_1,w_2 \rangle$ is a negative radial departure of $\tilde{s} \circ t_3$, where
\[ \tilde{s}(x) = \begin{cases}
s_{12}(x) & \textrm{if } x \leq 0 \\
s_1 \circ f_2(x) & \textrm{if } x > 0
\end{cases} \]
as in the proof of Lemma~\ref{lem:sell}; and
\item $\langle w_1,a_4 \rangle$ is a negative radial departure of $\tilde{s} \circ t_3$, where
\[ \tilde{s}(x) = \begin{cases}
s_1 \circ f_2(x) & \textrm{if } x \leq 0 \\
s_{12}(x) & \textrm{if } x > 0 ,
\end{cases} \]
a variant of the map constructed in the proof of Lemma~\ref{lem:sell}.
\end{enumerate}
\end{itemize}
Nevertheless, it is not difficult to construct a different map $\tilde{s}$ for this example satisfying the conclusion of Lemma~\ref{lem:sell}, for example:
\[ \tilde{s}(x) = \begin{cases}
s_1 \circ f_2(x) & \textrm{if } x \in [b_1,b_6] \\
s_{12}(x) & \textrm{otherwise.}
\end{cases} \]
\begin{figure}
\caption{The maps $f_1$, $f_2$, $f_3$ for Example~\ref{ex:twins ex}.}
\label{fig:twins ex}
\end{figure}
\end{example}
It would be interesting to know whether the assumption that the maps $f_n$ have no contour twins in Lemma~\ref{lem:sell} (and hence in Theorem~\ref{thm:no twins}) can be weakened or removed:
\begin{question}
\label{ques:no no twins}
Let $f_1,f_2,f_3 \colon [-1,1] \to [-1,1]$ be piecewise-linear maps with $f_1(0) = f_2(0) = f_3(0) = 0$, and such that $t_{f_1} = t_{f_1 \circ f_2}$ and $t_{f_2} = t_{f_2 \circ f_3}$. Does there exist a map $\tilde{s} \colon [-1,1] \to [-1,1]$ with $\tilde{s}(0) = 0$ such that $f_1 \circ f_2 = t_{f_1} \circ \tilde{s}$ and $\tilde{s} \circ t_{f_3}$ has no negative radial departures?
\end{question}
We point out that if the answer to Question~\ref{ques:no no twins} is affirmative, it would imply that for any simplicial system $\langle [-1,1],f_n \rangle$, for any point $x \in X = \varprojlim \left \langle [-1,1],f_n \right \rangle$ there would exist an embedding of $X$ into $\mathbb{R}^2$ for which the point $x \in X$ is accessible. This is because for a simplicial system, for any $n$, there are only finitely many possible radial contour factors for the maps $f_n$, $f_n \circ f_{n+1}$, $f_n \circ f_{n+1} \circ f_{n+2}$, $\ldots$, hence by replacing the bonding maps with appropriate compositions we may freely assume without loss of generality that $f_n$ and $f_n \circ f_{n+1}$ have the same radial contour factor for each $n$.
\end{document} |
\begin{document}
\begin{abstract}
We study the limit of the chromatic tower for not necessarily finite spectra, obtaining a generalization of the chromatic convergence theorem of Hopkins and Ravenel. Moreover, we prove that in general this limit does not coincide with harmonic localization, thereby answering a question of Ravenel's.
\end{abstract}
\title{Chromatic completion}
\tableofcontents
\section{Introduction}\label{chromaticcompletion}
\subsection{Background and motivation}
The suspension spectrum functor $\Sigma^{\infty}_+\colon \mathrm{Top} \to \mathrm{Sp}$ exhibits the category $\mathrm{Sp}$ of spectra as the stabilization of the category of topological spaces, with right adjoint given by $\Omega^{\infty}\colon \mathrm{Sp} \to \mathrm{Top}$.
The full subcategory of connective spectra then admits an equivalent description as the category of algebras for the corresponding monad $Q=\Omega^{\infty}\Sigma^{\infty}_+$.
Formally, the coalgebras for the comonad $\Sigma^{\infty}_+\Omega^{\infty}$ are the retracts of suspension spectra. However, these seem to be difficult to describe intrinsically within the category of spectra. In \cite{kuhnsuspension}, Kuhn proves that the Snaith splitting is characteristic for retracts of suspension spectra, a result which subsequently has been refined by Klein \cite{kleinsuspension}, but not much is known beyond that.
\begin{thm}[Kuhn]
A connected spectrum $X$ is a retract of a suspension spectrum if and only if there is an equivalence
\[\Sigma^{\infty}_+\Omega^{\infty}X \simeq \bigvee_{n \ge 0} \mathbb{D}_nX,\]
where $\mathbb{D}_n$ denotes the $n$-th extended power functor on $\mathrm{Sp}$.
\end{thm}
Here we approach this problem from the point of view of chromatic homotopy theory. Informally speaking, suspension spectra often have properties similar to finite spectra. For example, Bousfield \cite{bousfieldkneq} (see also \cite{wilsonkneq}) shows that the notion of type generalizes to suspension spectra up to a discrepancy at height $0$.
As another example, Hopkins and Ravenel~\cite{ssh} prove that suspension spectra are harmonic, i.e., local with respect to the wedge of all Morava $K$-theories, extending the analogous result for finite spectra established earlier by Ravenel \cite{ravconj}. Harmonic localization $L_{\infty}$ in turn is closely related to the functor $\mathbb{C}$ that sends a spectrum $X$ to the limit of its chromatic tower
\[\xymatrix{\cdots \ar[r] & L_nX \ar[r] & L_{n-1}X \ar[r] & \cdots \ar[r] & L_0X,}\]
where $L_n$ denotes Bousfield localization with respect to height $n$ Johnson--Wilson theory $E(n)$. Viewing this construction as a chromatic analogue of $p$-adic completion $M \mapsto \mathrm{lim}_n(M \otimes \mathbb{Z}/p^n)$, Salch coined the term \emph{chromatic completion} for $\mathbb{C}$. The chromatic convergence theorem then says that finite spectra are chromatically complete.
\begin{thm}[Hopkins--Ravenel]\mylabel{chromaticconvergence}
If $X$ is a finite spectrum, then the limit of the chromatic tower $\cdots \to L_nX \to \cdots \to L_0X$ is equivalent to $X$.
\end{thm}
In light of the above, it is natural to ask whether all suspension spectra are chromatically complete; a positive answer would be a consequence of an affirmative answer to the following question raised by Ravenel in~\cite[Section 5]{ravconj}.
\begin{question}\mylabel{ravenelquestion}
Does harmonic localization coincide with chromatic completion?
\end{question}
\subsection{Results and outline}
Our first goal in this paper is to give a general criterion for when a subcategory of $\mathrm{Sp}$ contains all suspension spectra. Using methods developed by Johnson and Wilson, which are reviewed and extended in Section~\ref{jwtheory}, we then prove our generalization of the chromatic convergence theorem, see \myref{finhomdimcc}.
\begin{thm*}
If $X$ is a connective spectrum with finite projective $BP$-dimension, then $X$ is chromatically complete.
\end{thm*}
Furthermore, we study the relation between harmonic localization and chromatic completion in the context of idempotent approximations, showing in \myref{lhidempapprox} that $L_{\infty}$ is the closest idempotent monad equipped with a monad map to $\mathbb{C}$. Finally, we construct an explicit spectrum $W$ such that $L_{\infty}W$ is not chromatically complete, thereby answering Ravenel's \myref{ravenelquestion}, see \myref{counterexample}.
\begin{thm*}
Harmonic localization and chromatic completion do not coincide.
\end{thm*}
The motivating question of whether suspension spectra are chromatically complete remains open.
\begin{rmk}
Work in progress of Salch provides different techniques for checking whether a given spectrum is chromatically complete.
\end{rmk}
\subsection{Notation and terminology}
Fix a prime $p$. Let $L_n$ be Bousfield localization at the Johnson--Wilson theory $E(n)$ with acyclification functor $C_n$, and denote by $L_n^f$ the corresponding finite localization, see for example~\cite{millerfiniteloc}. Harmonic localization $L_{\infty}$ is defined as Bousfield localization at the wedge of all Morava K-theories $K(n)$ on the category of (always) $p$-local spectra $\mathrm{Sp}$, viewed as an idempotent monad in the natural way. A spectrum $X$ is called harmonic if the unit map $X \to L_{\infty} X$ is an equivalence, and dissonant if $L_{\infty} X$ is contractible.
Moreover, let $\mathbb{C}\colon \mathrm{Sp} \to \mathrm{Sp}$ be chromatic completion, i.e., the endofunctor given by $X \mapsto \mathrm{lim}_n L_nX$. The functor $\mathbb{C}$ inherits a monad structure from the tower of monads $\cdots \to L_n \to L_{n-1} \to \cdots \to L_0$ and we say that $X$ is chromatically complete if the natural map $X \to \mathbb{C} X$ is an equivalence.
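We will also freely use the Milnor sequence for such sequential homotopy limits: for any tower of spectra $\cdots \to X_n \to X_{n-1} \to \cdots \to X_0$, there is a natural short exact sequence
\[0 \to {\mathrm{lim}_n^1}\, \pi_{*+1}X_n \to \pi_*\,\mathrm{lim}_n X_n \to \mathrm{lim}_n \pi_*X_n \to 0,\]
applied below in particular to the chromatic tower computing $\mathbb{C} X$.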
\subsection*{Acknowledgements}
This work benefited a lot from a correspondence with Andrew Salch, and I'm grateful to him for sharing his ideas. I would like to thank Omar Antol\'{\i}n-Camarena, Mike Hopkins, Tyler Lawson, Emily Riehl, Nathaniel Stapleton, and Steve Wilson for helpful conversations, and the Max Planck Institute for Mathematics for its hospitality.
\section{Projective dimension and Johnson--Wilson spaces}\label{jwtheory}
We start by reviewing those aspects of Johnson--Wilson theory that will be used later, and extend one of their key results to infinite complexes in \myref{bptorsionfree}.
\subsection{Projective dimension}\label{finpdim}
Recall that the Brown-Peterson spectrum $BP$ is an $\Es{4}$-ring spectrum with coefficients $\pi_*BP = \mathbb{Z}_{(p)}[v_1,v_2,\ldots]$ with $\deg(v_n) =2(p^n-1)$. In this section, we introduce the definition and some basic facts about projective $BP$-dimension.
\begin{defn}
If $M$ is a $BP_*$-module or a $(BP_*,BP_*BP)$-comodule, then the minimal length $m \in \mathbb{N} \cup \{\infty\}$ of a resolution of $M$ by projective $BP_*$-modules is called its projective (or homological) dimension, denoted by $\mathrm{projdim}_{BP_*}(M)=m$. The projective $BP$-dimension of a spectrum $X$ is defined as the projective dimension of $BP_*(X)$.
\end{defn}
\begin{rmk}
Implicitly, we will always work with graded modules, and our notion of projective dimension will always be with respect to $BP_*$. By a result of Conner and Smith~\cite{connersmith1}, graded projective $BP_*$-modules are free.
\end{rmk}
\begin{ex}
Johnson and Wilson~\cite{johnsonwilsonpgroups} prove that the projective dimension of the suspension spectrum of $(B\mathbb{Z}/p)^n$ is precisely $n$.
\end{ex}
If $X$ is a finite spectrum, then $BP_*(X)$ is in fact a coherent $BP_*$-module.
This observation led Landweber to study the abelian category $\mathcal{BP}$ of $BP_*BP$-comodules that are coherent as $BP_*$-modules. In~\cite[Cor. 7]{landwebernewapp}, he showed:
\begin{prop}[Landweber]
For $M$ a finitely generated $BP_*BP$-comodule, the following conditions are equivalent:
\begin{enumerate}
\item $M \in \mathcal{BP}$, i.e., $M$ is coherent
\item $\mathrm{projdim}_{BP_*}(M) < \infty$
\item There exists an $n$ such that $M$ is $v_n$ torsion-free.
\end{enumerate}
\end{prop}
However, we will be mainly interested in infinite spectra, so we need the more general techniques developed by Johnson and Wilson following Conner and Smith. We end this section by mentioning natural examples of spaces with infinite projective dimension. If $p=2$, there is a ($2$-local) fiber sequence
\[K(\mathbb{Z},3) \to BU\langle 6 \rangle \to BSU.\]
Since the second and third term have torsion-free $\mathbb{Z}_{(p)}$-homology, their projective dimension is 0. In contrast, there is the following surprising result of~\cite{johnsonwilsonems}.
\begin{thm} [Johnson--Wilson]
The projective dimension of Eilenberg-Mac Lane spaces is infinite in the following cases:
\begin{enumerate}
\item $\mathrm{projdim}_{BP_*}(BP_*K(\mathbb{Z},m)) = \infty$ if and only if $m \ge 3$.
\item $\mathrm{projdim}_{BP_*}(BP_*K(\mathbb{Z}/p^k,m)) = \infty$ if and only if $m \ge 2$ and $k \ge 1$.
\end{enumerate}
\end{thm}
In particular, $K(\mathbb{Z},3)$ has infinite projective dimension, and the methods of Section \ref{suspensionspectra} do not apply.
\subsection{Truncated Brown--Peterson spectra and torsion}\label{truncatedbp}
Using~\cite{ekmm} or the manifolds with singularities approach of Baas and Sullivan, one constructs the truncated Brown--Peterson spectra $\BP{n}$ as a $BP$-module with coefficient ring $\mathbb{Z}_{(p)}[v_1,\ldots,v_n]$ for every $n \in \mathbb{N} \cup \{\infty\}$. By definition, we set $\BP{-1} = H\mathbb{F}_p$.
\begin{ex}\mylabel{truncatedbpex}
For example, $BP\langle 0 \rangle = H\mathbb{Z}_{(p)}$ and $BP\langle 1 \rangle$ is a summand of connective $K$-theory localized at $p$.
\end{ex}
As shown by Johnson and Wilson~\cite{johnsonwilson}, the theories $\BP{n}$ are convenient to track torsion in the $BP$-homology of spaces. To this end, they observe that for any spectrum $X$, there exists a natural tower of $BP_*$-modules
\[\resizebox{\columnwidth}{!}{\xymatrix{BP_*(X) \ar[r] \ar@/^2pc/[rrr]^{\rho(n,\infty)} & \dots \ar[r] & BP\langle n+1 \rangle_*(X) \ar[r] & BP\langle n \rangle_*(X) \ar[r] \ar@{-->}[ld]^{\Delta_{n+1}} & \cdots \ar[r] & BP\langle -1 \rangle_*(X) \\
& & BP\langle n+1 \rangle_*(X) \ar[u]^{v_{n+1}} }}\]
where the indicated triangles are exact sequences. The main result of~\cite{johnsonwilson} is:
\begin{thm}[Johnson--Wilson]
For $X$ a finite complex, the following are equivalent:
\begin{enumerate}
\item $\mathrm{projdim}_{BP_*} BP_*(X) \le n+1$
\item $\rho(n,\infty)\colon BP_*X \to BP\langle n \rangle_*X$ is surjective
\item $BP_*(X) \otimes_{BP_*}BP\langle n \rangle_* \xrightarrow{\sim} BP\langle n \rangle_*(X)$
\item $\mathrm{Tor}_1^{BP_*}(BP_*(X), BP\langle n \rangle_*)=0$.
\end{enumerate}
If $X$ is a connective CW spectrum with $H_*(X, \mathbb{Z}_{(p)})$ of finite type, (1) above is equivalent to:
\begin{enumerate}
\item[(5)] $BP\langle n+1 \rangle_*(X)$ is $v_{n+1}$ torsion-free.
\end{enumerate}
\end{thm}
Let $X$ be a connective spectrum with $\mathrm{projdim}_{BP_*}(BP_*X) =n$. Under the extra assumption that $H_*(X,\mathbb{Z}_{(p)})$ is of finite type, it follows from the above that $BP\langle m \rangle_*(X)$ is $v_m$ torsion-free for all $m\ge n$. However, the only place in the argument given there that claims to use the finite type hypothesis is~\cite[Prop. 3.10]{johnsonwilson}, stating that the $\mathbb{Z}_{(p)}$-homology of a connective spectrum $X$ is free if and only if $BP_*(X)$ is. Since the Atiyah--Hirzebruch spectral sequence exists and converges for such $X$ (see, for example,~\cite[Cor. 4.2.6]{kochmanbook}), the proof works equally well without the finite type assumption and we conclude:
\begin{cor}\mylabel{bptorsionfree}
If $X$ is a connective spectrum with $\mathrm{projdim}_{BP_*}(BP_*X) =n$, then $BP_*(X)$ is $v_m$ torsion-free for $m > n$.
\end{cor}
\begin{proof}
This follows from the previous discussion and~\cite[Cor. 3.5]{johnsonwilson}.
\end{proof}
\subsection{Johnson--Wilson spaces}\label{jws}
The spaces appearing in the $\Omega$-spectrum of truncated Brown--Peterson spectra will play an important role in our study of suspension spectra. They were investigated thoroughly in Wilson's thesis~\cite{wilsonthesis1,wilsonthesis2}, the main results of which we summarize here as far as they are needed below.
\begin{defn}
The Johnson--Wilson space $\BP{n}_k$ is defined as the $k$-th space in the $\Omega$-spectrum of $\BP{n}$, i.e.,
\[\BP{n}_k = \Omega^{\infty-k}\BP{n}.\]
\end{defn}
\begin{ex}
It follows from \myref{truncatedbpex} that $\BP{0}_k$ is just the Eilenberg--Mac Lane space $K(\mathbb{Z}_{(p)},k)$. If $p=2$, one can show that $\BP{1}_4 = BSU$ and $\BP{1}_6 \simeq BU\langle 6\rangle$, see~\cite{priddycell}.
\end{ex}
In order to state Wilson's splitting theorem~\cite{wilsonthesis2}, we introduce the auxiliary function
\[f(n) = 2\sum_{i=0}^n p^i = 2\, \frac{p^{n+1}-1}{p-1}.\]
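For orientation, this formula gives, for example at the prime $p=2$, the values $f(0) = 2$, $f(1) = 6$, and $f(2) = 14$; in particular, the identifications $\BP{1}_4 = BSU$ and $\BP{1}_6 \simeq BU\langle 6 \rangle$ mentioned above fall within the range $k \le f(1)$ appearing in the results below.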
\begin{thm}[Wilson]\mylabel{wilsonsplitting}
If $k \le f(n)$, $BP_k \simeq BP\langle n \rangle_k \times \prod_{j >n} BP\langle j \rangle_{k +2(p^j-1)}$, and for $k > f(n-1)$, this decomposition is as irreducibles. Furthermore, if $k <f(n)$, this is as $H$-spaces.
\end{thm}
This splitting has a number of interesting consequences. In~\cite{wilsonthesis1}, Wilson computes the $\mathbb{Z}_{(p)}$-(co)homology of $BP_k$ for $k \le f(n)$, which can be used to deduce the (co)homology of the Johnson--Wilson spaces $\BP{n}_k$ in the range $k \le f(n)$.
\begin{thm}[Wilson]\mylabel{homologyjws}
If $k \le f(n)$, then the $\mathbb{Z}_{(p)}$-(co)homology of the connected part of $BP\langle n \rangle_k$ has no torsion and is a polynomial algebra for $k<f(n)$ even and an exterior algebra for $k$ odd.
\end{thm}
\section{Suspension spectra and generalized chromatic convergence}\label{suspensionspectra}
\subsection{A criterion}
The following result is an abstraction of the argument used in \cite{ssh} showing that suspension spectra are harmonic.
\begin{thm}\mylabel{criterion}
Let $\mathcal{C}$ be a thick subcategory of $\mathrm{Sp}$ and let $\mathcal{C}_0$ be a subcategory of $\mathrm{Top}$ such that $\Sigma^{\infty}_+\mathcal{C}_0 \subseteq \mathcal{C}$ and is closed under retracts. Suppose that
\begin{enumerate}
\item $\mathcal{C}_0$ is closed under weak infinite products of spaces
\item $\mathcal{C}_0$ contains the Johnson--Wilson spaces $BP\langle n \rangle_{k}$ for all $n$ and $f(n-1) < k \le f(n)$
\item $\mathcal{C}$ is closed under sequential homotopy limits,
\end{enumerate}
then $\mathcal{C}$ contains all suspension spectra.
\end{thm}
\begin{rmk}
The purpose of the auxiliary category $\mathcal{C}_0$ is that condition (2) is easier to check for a restrictive collection of spaces. For example, in \cite{ssh}, $\mathcal{C}$ is the category of harmonic spectra and $\mathcal{C}_0$ the category of spaces with torsion-free $\mathbb{Z}_{(p)}$-homology.
\end{rmk}
In order to prove this theorem, we need two lemmata that are of independent interest. We inductively define full subcategories $\mathcal{C}_i \subseteq \mathrm{Top}$ for all $i \in \mathbb{N}$ by letting $\mathcal{C}_0$ be a full subcategory of $\mathrm{Top}$ satisfying (1) and (2) of the theorem and declaring $F$ to be in $\mathcal{C}_i$ if there exists a fiber sequence $F \to E \to B$ with $E,B \in \mathcal{C}_{i-1}$ and $B$ simply connected. We also set $\mathcal{C}_{\infty} = \bigcup_i \mathcal{C}_i$; this yields an ascending filtration
\[\mathcal{C}_0 \subseteq \mathcal{C}_1 \subseteq \ldots \subseteq \mathcal{C}_{\infty} \subseteq \mathrm{Top}.\]
\begin{lemma}\mylabel{emsfinitedepth}
For every $m\ge 2$, the Eilenberg--Mac Lane space $K(A,m) \in \mathcal{C}_{\infty}$.
\end{lemma}
\begin{proof}
By assumption, $\mathcal{C}_0$ contains $BP\langle n \rangle_{k}$ for all $n$ and $f(n-1) < k \le f(n)$. First note that the self map $v_n\colon \Sigma^{2(p^n-1)}\BP{n} \to \BP{n}$ induces fiber sequences
\[ BP\langle n-1 \rangle_i \to BP\langle n \rangle_{i+1+2(p^n-1)} \to BP\langle n \rangle_{i+1}\]
for all $i$. It follows easily that, for any $i \ge 0$, $BP\langle n \rangle_{i + f(n)} \in \mathcal{C}_i$. In particular, $K(\mathbb{Z},m) \simeq \BP{0}_m \in \mathcal{C}_{m}$ for all $m \ge 2$. Since $\mathrm{Mod}_{\mathbb{Z}}$ has global dimension 1, this implies $K(A,m) \in \mathcal{C}_{\infty}$ for all abelian groups $A$ and $m \ge 2$.
\end{proof}
\begin{lemma}\mylabel{geomemss}
Let $F \to E \to B$ be a fiber sequence of spaces with $E,B \in \mathcal{C}_{i-1}$ and $B$ simply connected, and suppose that $\Sigma^{\infty}_+\mathcal{C}_{i-1} \subseteq \mathcal{C}$. Then $\Sigma^{\infty}_+F \in \mathcal{C}$.
\end{lemma}
\begin{proof}
This is proven as in~\cite[Lemma 17]{ssh}, using the geometric construction of the Eilenberg--Moore spectral sequence. Indeed, the equivalence $F \simeq \mathrm{Tot}(E \times B^{\times \bullet})$ lifts to a presentation
\[\Sigma^{\infty}_+F \simeq \mathrm{lim}_j F_j,\]
where the spectra $F_j$ fit into cofiber sequences $F_{j+1} \to F_j \to C_j$ with $\Sigma^{j-1} C_j$ a retract of $\Sigma^{\infty}_+(E \times B^{\times j})$ and $F_1 = \Sigma^{\infty}_+E$. Since $\mathcal{C}_{i-1}$ is closed under finite products of spaces by (1), $E \times B^{\times j} \in \mathcal{C}_{i-1}$ and thus $F_j \in \mathcal{C}$ for all $j$ by induction. Hence $\Sigma^{\infty}_+F \in \mathcal{C}$ by (3).
\end{proof}
\begin{proof}[Proof of \myref{criterion}]
Since $\Sigma^{\infty}_+\mathcal{C}_0 \subseteq \mathcal{C}$, \myref{geomemss} implies inductively that $\Sigma^{\infty}_+\mathcal{C}_{\infty} \allowbreak \subseteq \mathcal{C}$. If $X$ is a simply connected space, then $X$ has a convergent Postnikov tower and \myref{emsfinitedepth} together with the closure of $\mathcal{C}$ under sequential limits shows that $\Sigma^{\infty}_+X \in \mathcal{C}$.
\end{proof}
\subsection{Generalized chromatic convergence}
The aim of this section is to prove a generalization of the chromatic convergence theorem of Hopkins and Ravenel. To this end, let $\overline{BP}$ be the fiber of the unit map $S^0 \to BP$ and recall that a map $f\colon X \to Y$ is said to be $n$-phantom if $\mathrm{Hom}(F,f)=0$ for all finite spectra $F$ of dimension less than $n+1$.
\begin{defn}
A spectrum $X$ is called $BP$-convergent if for all $i \ge 0$ there exists some $s(i)$ such that $\overline{BP}^{s(i)} \wedge X \to X$ is $i$-phantom.
\end{defn}
In other words, if $X$ is $BP$-convergent, then $E_{\infty}^{s,s+i}(X) = 0$ for $s \ge s(i)$ in the Adams--Novikov spectral sequence for $X$, i.e., there exists a vanishing line (at $E_{\infty}$) determined by the function $s(i)$. It follows that every non-zero element in $\pi_iX$ has Adams--Novikov filtration less than $s(i)+1$.
As a formal consequence of the proof of the smash product theorem, Ravenel and Hopkins~\cite[8.6]{ravbook2} obtain:
\begin{lemma}\mylabel{bpconvergent}
If $X$ is connective, then $X$, $L_iX$, and thus $C_iX$ are $BP$-convergent for all $i \ge 0$.
\end{lemma}
The next result connects the projective dimension of a spectrum $X$ to the Adams--Novikov filtration of $C_mX$.
\begin{lemma}\mylabel{pdimzeromap}
If $X$ is a connective spectrum with projective dimension at most $n$, then the natural map $C_{m+1}X \to C_{m}X$ is $BP$-acyclic for all $m \ge n$.
\end{lemma}
\begin{proof}
Let $X$ be a connective spectrum with $\mathrm{projdim}_{BP_*}(BP_*X) =n$. By \myref{bptorsionfree}, $BP_*(X)$ is $v_m$ torsion-free for every $m > n$. Fix $m > n$ and consider the commutative diagram
\[\resizebox{\columnwidth}{!}{\xymatrix{N_mBP \wedge X \ar[r]^{f_m} \ar[d]_{\simeq} & M_mBP \wedge X \ar[r] \ar[d]_{\simeq} & N_{m+1} BP \wedge X \ar[r] \ar[d]_{\simeq} & \Sigma N_mBP \wedge X \ar[d]^{\simeq} \\
\Sigma^{m} BP \wedge C_{m-1}X \ar[r] & \Sigma^m BP \wedge M_mX \ar[r] & \Sigma^{m+1}BP \wedge C_mX \ar[r] & \Sigma^{m+1} BP \wedge C_{m-1}X
}}\]
of cofiber sequences, where the upper row realizes the chromatic resolution as in \cite[Ch. 8]{ravbook2}. By construction, $f_m$ is injective in homotopy if and only if $BP_*(X)$ is $v_m$ torsion-free, thus the natural map $C_mX \to C_{m-1}X$ is $BP$-acyclic.
\end{proof}
We are ready to put the pieces together to generalize \myref{chromaticconvergence} to connective spectra of finite projective dimension.
\begin{thm}\mylabel{finhomdimcc}
If $X$ is a connective spectrum with finite projective dimension, then $X$ is chromatically complete.
\end{thm}
\begin{proof}
If $X$ is connective with projective dimension $n \in \mathbb{N}$ and $m \ge n$, then the map $C_{m+k}X \to C_{m}X$ has Adams--Novikov filtration at least $k$ for all $k \ge 0$ by \myref{pdimzeromap}. Therefore, \myref{bpconvergent} implies that $\pi_iC_{m + s(i)}X \to \pi_iC_mX$ is zero for any $i$, so that the tower
\[\ldots \to C_j X \to C_{j-1}X \to \ldots \to C_0X\]
is pro-trivial. This gives the claim.
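Indeed, spelling out this last step: pro-triviality implies that $\mathrm{lim}_j \pi_* C_jX = 0 = {\mathrm{lim}_j^1}\, \pi_* C_jX$, so $\mathrm{lim}_j C_jX \simeq 0$ by the Milnor sequence, and applying $\mathrm{lim}_j$ to the fiber sequences $C_jX \to X \to L_jX$ identifies $X$ with $\mathrm{lim}_j L_jX = \mathbb{C} X$.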
\end{proof}
\begin{cor}\mylabel{jwgood}
All connective spectra with free homology are chromatically complete. In particular, this applies to the Johnson--Wilson spaces $BP\langle n \rangle_{k}$ for any $n$ and $k \le f(n)$.
\end{cor}
\begin{proof}
By the proof of \myref{bptorsionfree}, for a connective spectrum $X$, $H_*(X,\mathbb{Z}_{(p)})$ torsion-free over $\mathbb{Z}_{(p)}$ implies that $BP_*(X)$ is torsion-free over $BP_*$, thus \myref{finhomdimcc} applies. The claim about the Johnson--Wilson spaces $BP\langle n \rangle_{k}$ now follows immediately from \myref{homologyjws}.
\end{proof}
\section{Idempotent approximation}
We briefly recall the basic properties of idempotent monads and their algebras, and introduce the theory of idempotent approximation of Casacuberta and Frei.
\subsection{Idempotent monads and their algebras}
In order to state the main theorem, we need some terminology that will also be used in the next section.
\begin{defn}
A monad $L = (L,\mu, \eta)$ on a category $\mathcal{C}$ is called idempotent if it satisfies any of the following equivalent conditions:
\begin{enumerate}
\item $\mu\colon L^2 \to L$ is a natural equivalence
\item For every $c \in \mathrm{Alg}_L$, the action map $Lc \to c$ is an equivalence
\item The forgetful functor $\mathrm{Alg}_L \to \mathcal{C}$ is fully faithful.
\end{enumerate}
\end{defn}
\begin{lemma}\mylabel{idempotentalg}
Let $L$ be an idempotent monad on a category $\mathcal{C}$. Then for any $c \in \mathcal{C}$ the following conditions are equivalent:
\begin{enumerate}
\item $c$ admits an $L$-algebra structure
\item The unit map $c \to Lc$ is an equivalence.
\end{enumerate}
\end{lemma}
Let $M$ be an arbitrary monad on a locally presentable category $\mathcal{C}$ and denote by $\mathrm{Alg}^{\mathrm{idem}}_M$ the full subcategory of $\mathrm{Alg}_M$ on those algebras $c$ for which the unit map induces an equivalence $c \xrightarrow{\sim} Mc$. Motivated by \myref{criterion}, we are interested in conditions on $M$ such that the category $\mathrm{Alg}^{\mathrm{idem}}_M$ is closed under (sequential) limits.
Note that for some well-known examples the category $\mathrm{Alg}^{\mathrm{idem}}_M$ either coincides with $\mathrm{Alg}_M$ or is trivial, e.g., if $M$ is the free monoid monad on the category of sets. In these cases, $\mathrm{Alg}^{\mathrm{idem}}_M$ is trivially closed under all limits. However, the next example shows that, in general, $\mathrm{Alg}^{\mathrm{idem}}_M$ cannot be expected to be closed under limits, not even sequential limits or infinite products.
\begin{ex}
If we let $M = \beta$ be the ultra-filter (Stone-\v{C}ech) monad on the category of sets, then $\mathrm{Alg}^{\mathrm{idem}}_M$ is precisely the category of finite sets, which is not closed under inverse limits.
\end{ex}
By \myref{idempotentalg}, every idempotent monad $M$ has the property that $\mathrm{Alg}^{\mathrm{idem}}_M = \mathrm{Alg}_M$ and hence is closed under limits. It is therefore natural to ask if chromatic completion is idempotent, given that it is the limit of idempotent monads; an affirmative answer would imply that all suspension spectra are chromatically complete.
Note that the category $\mathrm{Monad}(\mathcal{C})$ of monads on $\mathcal{C}$ can be identified with the category of monoids in the functor category $\mathrm{Fun}(\mathcal{C},\mathcal{C})$ with monoidal structure given by composition of functors, and is thus closed under limits computed in the functor category. However, the subcategory of idempotent monads does not have this property, as the following example demonstrates.
\begin{ex}
Let $R$ be a non-noetherian commutative ring, and $I$ a non-finitely generated ideal in $R$. Clearly, $I$-adic completion on the category of all $R$-modules is a monad, which is constructed as the limit of the idempotent and exact monads $R/I^m \otimes -$. Yekutieli shows in~\cite{noetherian} that in the case of a polynomial ring $R = k[x_1,\ldots]$ in countably infinitely many variables and $I$ the maximal ideal corresponding to $0$, $I$-adic completion is not idempotent.
\end{ex}
This motivates the study of idempotent approximations to monads.
\subsection{Idempotent approximations to monads}
\begin{defn}
An idempotent approximation to a monad $M$ on $\mathcal{C}$ is an idempotent monad $\hat{M}$ on $\mathcal{C}$ together with a map of monads $\hat{M} \to M$ which is terminal among all maps from idempotent monads to $M$.
\end{defn}
Casacuberta and Frei~\cite{casacuberta} give a convenient characterization of idempotent approximations, without any conditions on the underlying category $\mathcal{C}$. Moreover, their perspective allows one to easily deduce some basic properties of idempotent approximations.
\begin{thm}[Casacuberta--Frei]\mylabel{idempapproxtest} Let $(M,\eta, \mu)$ be a monad on a category $\mathcal{C}$. If $(\hat{M},\hat{\eta},\hat{v})$ is an idempotent monad on $\mathcal{C}$ that inverts the same class of morphisms in $\mathcal{C}$ as $M$, then $\hat{M}$ is the idempotent approximation to $M$. In particular, it satisfies the following properties:
\begin{enumerate}
\item There exists a unique morphism of monads $\lambda\colon \hat{M} \to M$, which is terminal among morphisms from idempotent monads to $M$. Furthermore, if $\mathcal{C}$ is complete and well-powered, $\lambda$ is a monomorphism.
\item Both $M\hat{\eta}$ and $\hat{\eta}M$ are isomorphisms.
\item For any $X \in \mathcal{C}$, the following are equivalent
\begin{enumerate}
\item $\eta_X$ is an $\hat{M}$-equivalence
\item $\eta_{MX}$ is an isomorphism
\item $\lambda_X$ is an isomorphism.
\end{enumerate}
\end{enumerate}
\end{thm}
\begin{rmk}
Idempotent approximation was studied previously by Fakir~\cite{fakir}. His main existence theorem says that if $\mathcal{C}$ is complete and well-powered, then the idempotent approximation to any monad on $\mathcal{C}$ exists. This construction provides a right adjoint to the natural inclusion of idempotent monads on $\mathcal{C}$ into $\mathrm{Monad}(\mathcal{C})$.
\end{rmk}
\section{Harmonic localization and chromatic completion}
Using the notion of idempotent approximation, we study the relation between harmonic localization and chromatic completion and deduce a new equivalent formulation of the telescope conjecture. We then construct a harmonic spectrum which is not chromatically complete.
\begin{prop}\mylabel{lhidempapprox}
Harmonic localization is the idempotent approximation to chromatic completion, $L_{\infty} = \hat{\mathbb{C}}$.
\end{prop}
\begin{proof}
Since $L_{\infty}$ is a localization functor and thus idempotent, using \myref{idempapproxtest} it suffices to show that the class $\mathcal{S}(L_{\infty})$ of harmonic equivalences coincides with the class $\mathcal{S}(\mathbb{C})$ of $\mathbb{C}$-equivalences. Because $\ker(L_{\infty}) \subseteq \ker(\mathbb{C})$, we clearly have $\mathcal{S}(L_{\infty}) \subseteq \mathcal{S}(\mathbb{C})$.
Conversely, let $f: X \to Y$ be a $\mathbb{C}$-equivalence. First note that, for any spectrum $Z$ and $n \ge 0$, the natural composite $Z\to \mathbb{C} Z \to L_nZ \to L_{K(n)}Z$ exhibits $K(n)_*(Z)$ as a (natural) retract of $K(n)_*(\mathbb{C} Z)$. Since $K(n)_*(\mathbb{C} f)$ is an isomorphism for any $n \ge 0$ and the retract of an isomorphism is an isomorphism, the claim follows.
\end{proof}
Replacing $L_n$ by $L_n^f$, the analogous statement for finite harmonic localization and finite chromatic completion is proven in exactly the same way, so that we get:
\begin{prop}
Finite harmonic localization $L_{\infty}^f$ is the idempotent approximation to finite chromatic completion, $L_{\infty}^f = \widehat{\mathbb{C}^f}$.
\end{prop}
The abstract properties of idempotent approximations imply a useful criterion for checking when a harmonic spectrum is chromatically complete. Let $\lambda\colon L_{\infty} \to \mathbb{C}$ be the natural (and unique) monad map. The next definition uses part (3) of \myref{idempapproxtest}, and is motivated by the terminology of~\cite[I.5]{holim}.
\begin{defn}
A spectrum $X$ is called $\mathbb{C}$-good if any of the following equivalent conditions hold:
\begin{enumerate}
\item The map $\eta_X\colon X \to \mathbb{C} X$ is a harmonic equivalence
\item The map $\eta_{\mathbb{C} X}\colon \mathbb{C} X \to \mathbb{C}^2 X$ is an equivalence
\item The map $\lambda_X\colon L_{\infty} X \to \mathbb{C} X$ is an equivalence.
\end{enumerate}
Moreover, we denote the cofiber of the natural map $\eta_X$ of (1) by $\mathbb{A} X$.
\end{defn}
\begin{cor}\mylabel{cccrit}
A harmonic spectrum $X$ is $\mathbb{C}$-good if and only if it is chromatically complete. In particular, $L_{\infty} X$ is chromatically complete if and only if $\mathbb{A}X$ is dissonant.
\end{cor}
\begin{proof}
The first part is an immediate consequence of the previous lemma. To see the second claim, note that $L_{\infty}X$ is chromatically complete if and only if $L_{\infty}X \to \mathbb{C} L_{\infty}X \simeq \mathbb{C} X \simeq L_{\infty}\mathbb{C} X$ is an equivalence, which in turn is equivalent to $L_{\infty} \mathbb{A}X = 0$, hence the claim.
\end{proof}
As another consequence of the theorem, we deduce a new equivalent formulation of the telescope conjecture. This uses the well-known orthogonality relations for the telescopes $\mathrm{Tel}(n)$ of finite type $n$ spectra.
\begin{lemma}\mylabel{orthogrelfin}
In the category of spectra, we have the following relations among Bousfield classes, with $\delta_{m,n}$ being the Kronecker delta.
\begin{enumerate}
\item $\langle\mathrm{Tel}(n)\rangle \ge \langle K(n)\rangle$ for all $n \in \mathbb{N}$.
\item $\langle \mathrm{Tel}(m) \rangle \wedge \langle \mathrm{Tel}(n) \rangle = \delta_{m,n}\langle \mathrm{Tel}(m)\rangle$ for all $n,m \in \mathbb{N}$.
\end{enumerate}
\end{lemma}
\begin{cor}
The telescope conjecture holds for all heights $m$ if and only if the natural map $\mathbb{C}^f \to \mathbb{C}$ is an equivalence.
\end{cor}
\begin{proof}
The \emph{only if} direction is obvious. For the converse, assume that $\mathbb{C}^f \xrightarrow{\sim} \mathbb{C}$. By \myref{lhidempapprox}, the natural map $L_{\infty}^f \to L_{\infty}$ is also an equivalence, hence $\left\langle \bigvee_{n \ge 0} \mathrm{Tel}(n) \right\rangle = \left\langle \bigvee_{n \ge 0} K(n)\right\rangle$. Smashing both sides with $\mathrm{Tel}(m)$ and using the orthogonality relations from \myref{orthogrelfin}, we get
\[ \left\langle \mathrm{Tel}(m) \right\rangle = \left\langle \mathrm{Tel}(m) \wedge \bigvee_{n \ge 0} \mathrm{Tel}(n) \right\rangle = \left\langle \mathrm{Tel}(m) \wedge \bigvee_{n \ge 0} K(n) \right\rangle = \left\langle K(m) \right\rangle \]
for all $m$, i.e., the telescope conjecture at height $m$.
\end{proof}
\subsection{A counterexample}
We give an example of a harmonic spectrum that is not chromatically complete, thereby answering Ravenel's \myref{ravenelquestion} in the negative.
\begin{thm}\mylabel{counterexample}
$L_{\infty} \bigvee_{i\ge0} \Sigmagma^{i+1}C_iBP$ is harmonic, but not chromatically complete.
\end{thm}
\begin{proof}
To simplify notation, recall that $N_{i+1}BP = \Sigmagma^{i+1}C_iBP$. We claim that
\begin{equation}\label{claim}
\mathbb{C} \bigvee_i N_iBP \simeq \prod_i N_iBP.
\end{equation}
Since
\[\pi_{*}\bigvee_i N_iBP \cong \bigoplus_i BP_*/(p^{\infty},\ldots,v_{i-1}^{\infty}), \ \pi_{*}\prod_i N_iBP \cong \prod_i BP_*/(p^{\infty},\ldots,v_{i-1}^{\infty}) \]
it then follows that $\pi_*\mathbb{A}X$ contains non-torsion elements and thus is not dissonant. Therefore, $L_{\infty} \bigvee_i N_iBP$ is not chromatically complete by \myref{cccrit}.
In order to prove \eqref{claim}, consider the cofiber sequence
\[\bigvee_i N_iBP \longrightarrow L_{n}\bigvee_i N_iBP \longrightarrow \Sigmagma C_n \bigvee_i N_iBP,\]
which gives rise to an inverse system of short exact sequences
\[\xymatrixcolsep{1pc}\xymatrix{& \vdots \ar[d] & \vdots \ar[d] & \vdots \ar[d] & \\
0 \ar[r] & \bigoplus_{i \le n}\pi_*N_iBP \ar[r] \ar[d] & \bigoplus_{i\le n}\pi_*L_nN_iBP \ar[r] \ar[d] & \bigoplus_{i\le n}\pi_{*-1}N_nBP \ar[r] \ar[d] & 0\\
0 \ar[r] & \bigoplus_{i\le n-1}\pi_*N_iBP \ar[r] \ar[d] & \bigoplus_{i\le n-1}\pi_*L_{n-1}N_iBP \ar[r] \ar[d] & \bigoplus_{i\le n-1}\pi_{*-1}N_{n-1}BP \ar[r] \ar[d] & 0\\
& \vdots & \vdots & \vdots &}\]
using $C_nC_i = C_n$ for $i\le n$. Clearly the left vertical arrows are surjective, so we get $\mathrm{lim}^1_n(\bigoplus_{i \le n}\pi_*N_iBP)=0$. Since the natural map $\pi_*N_nBP \to \pi_*N_{n-1}BP$ is zero, the right vertical maps are zero, hence
\[\mathrm{lim}_n \bigoplus_{i\le n}\pi_{*-1}N_nBP = 0 = {\mathrm{lim}_n^1}\bigoplus_{i\le n}\pi_{*-1}N_nBP.\]
It follows by the long exact sequence for inverse limits that
\[\mathrm{lim}_n \pi_*L_n\bigvee_i N_iBP \cong \mathrm{lim}_n \bigoplus_{i\le n}\pi_*L_nN_iBP \cong \mathrm{lim}_n\bigoplus_{i \le n}\pi_*N_iBP \cong \prod_i \pi_* N_iBP\]
and similarly
\[{\mathrm{lim}_n^1} \pi_*L_n\bigvee_i N_iBP \cong {\mathrm{lim}_n^1}\bigoplus_{i \le n}\pi_*N_iBP = 0.\]
Therefore, the Milnor sequence shows that
\[\pi_*\mathbb{C} \bigvee_i N_iBP \cong \mathrm{lim}_n \pi_*L_n\bigvee_i N_iBP \cong \pi_*\prod_i N_iBP\]
verifying \eqref{claim}.
\end{proof}
\begin{cor}
Chromatic completion is not idempotent.
\end{cor}
\begin{proof}
Immediate from \myref{lhidempapprox} and \myref{counterexample}.
\end{proof}
We still do not know whether all suspension spectra are chromatically complete but, based on this example, suspect that there are counterexamples. Moreover, we believe that the collection of chromatically complete spectra is not closed under infinite products.
\begin{comment}
\begin{rmk}
In fact, $\prod_n L_nBP\left\langleglengle n \right\rangleglengle$ is harmonic, but not chromatically complete. In particular, the class of chromatically complete spectra is not closed under infinite products.
\end{rmk}
\end{comment}
\end{document} |
\begin{document}
\baselineskip=17pt
\title{Strong orbit equivalence in Cantor dynamics and simple locally finite groups}
\author{Simon ROBERT\\
Institut Camille Jordan\\
Université Claude Bernard - Lyon1\\
43 Boulevard du 11 novembre 1928, 69622 Villeurbanne Cedex, France\\
E-mail : [email protected]}
\date{}
\maketitle
\renewcommand{\thefootnote}{}
\footnote{2020 \emph{Mathematics Subject Classification}: Primary 37B02; Secondary 03E15}
\footnote{\emph{Key words and phrases}: Borel-reducibility, Cantor Dynamics, Minimal homeomorphism, Strong Orbit Equivalence, Kakutani-Rokhlin partitions, Topological full group, isomorphism relation on the space of countable, locally finite simple groups.}
\renewcommand{\thefootnote}{\arabic{footnote}}
\setcounter{footnote}{0}
\begin{abstract}
We study certain countable locally finite groups attached to minimal homeomorphisms, and prove that the isomorphism relation on simple, countable, locally finite groups is a universal relation arising from a Borel $S_\infty$-action. This work also provides a dynamical approach to a result of Giordano, Putnam and Skau characterizing strong orbit equivalence.
\end{abstract}
\section{Introduction}
Algebraic tools (notably, dimension groups) have long been known as a useful and effective way to study minimal homeomorphisms of the Cantor space;
sometimes this leads to relatively complex, high-powered proofs of theorems that one would like to understand dynamically, so as
to gain a different perspective or extend them to other contexts. Such is one of the purposes of this article.
Using a different perspective can lead to new results: even though the core of this article is about topological dynamics, our main result is about Borel reducibility theory. We only briefly explain the general context here and refer the interested reader to {\cite{BecKec}}, section 3, {\cite{Gao}}, section 5 and references therein for more advanced results.
\begin{defnonnum}
Let $E,F$ be equivalence relations on standard Borel spaces. We say that \emph{$E$ is Borel reducible to $F$}, and write $E\leq F$, if there exists a Borel map $f$ such that
\[\forall x,x' \ xEx'\Longleftrightarrow f(x)Ff(x').\]
We call such a map $f$ a \emph{Borel reduction from $E$ to $F$}. If $E\leq F$ and $F\leq E$, we say that \emph{$E$ and $F$ are Borel bireducible}.
\end{defnonnum}
The idea behind this definition is that solving the classification problem associated to $E$ (i.e.\ deciding when two elements are in the same $E$-class) is then simpler than solving the one associated to $F$; hence the relation $F$ can be considered ``more complex'' than $E$.
However, without a requirement on $f$, this notion would only detect the number of equivalence classes, and such a map could be very chaotic. This is why one wants to ensure that the correspondence is somehow computable; in this paper, the notion of computability we retain is Borelness.
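As a trivial illustration, if $E$ and $F$ are the equality relations on standard Borel spaces $X$ and $X'$ respectively, then any Borel injection from $X$ to $X'$ is a Borel reduction from $E$ to $F$; the notion only becomes interesting for more complicated relations such as the ones discussed below.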
An important source of equivalence relations is given by actions of Polish groups. The following theorem shows that, given a Polish group, there always exists an equivalence relation arising from it that is as complex as possible:
\begin{thmnonnum}[{\cite{BecKec}, Corollary 3.5.2}]
Let $G$ be a Polish group. There exists an equivalence relation $E_G$ arising from a Borel $G$-action on a standard Borel space such that any other such relation Borel reduces to it.
\end{thmnonnum}
We will call such a relation \emph{$G$-universal}. We will be interested in equivalence relations arising from actions of $S_\infty$, the permutation group of a countably infinite set, which is a well-known Polish group (see {\cite{Gao}, section 2.4}), and we will denote by $E_\infty$ a universal relation (unique up to Borel bireducibility) arising from an action of it. Our main theorem is the following:
\begin{thmnonnum}
The relation of isomorphism of countable, locally finite, simple groups is a universal relation arising from a Borel action of $S_\infty$ (i.e.\ it is Borel bireducible to $E_\infty$).
\end{thmnonnum}
A proof of this result is given in section \ref{section: Borel complexity}. We use a result of {\cite{Mel20}}, namely that strong orbit equivalence (a notion from topological dynamics that we will define straight after this) is Borel bireducible to $E_\infty$, and we find a Borel way to associate to minimal homeomorphisms some locally finite simple groups that are isomorphic exactly when the homeomorphisms are strong orbit equivalent.
In order to describe more precisely the groups at stake, let us move on to some notions of dynamics.
Any action of a group $G$ on a set $X$ induces an equivalence relation, whose equivalence classes are the $G$-orbits.
If we forget the action and focus only on this equivalence relation, two actions can be very different
dynamically speaking, even coming from different groups, yet generate the same equivalence relation up to a bijection.
This leads to the notion of orbit equivalence:
\begin{defnonnum}
Let $G,G'$ be groups and $X,X'$ be sets. Two actions $\alpha\colon G\curvearrowright X$ and $\beta\colon G'\curvearrowright \nolinebreak X'$ are \emph{orbit equivalent} if there exists a bijection \fct{h}{X}{X'} (called \emph{an orbit equivalence between $\alpha$ and $\beta$}) that realizes a bijective correspondence between $\alpha$-orbits and $\beta$-orbits, i.e. $$\forall x\in X \ h(Orb_\alpha(x))=Orb_\beta(h(x))$$
\end{defnonnum}
\noindent If the sets $X$ and $X'$ are endowed with a structure, it is natural to require $h$ to preserve this structure, i.e.\ to be an
isomorphism from $X$ to $X'$, and this will be a part of our conventions below. It is then tempting to try and classify actions up to orbit equivalence. In a measure-theoretic context, this is now a well-understood problem;
a combination of works by Dye ({\cite{Dye}}) and later Ornstein--Weiss ({\cite{OW}}) gives the following famous result: there is only one probability measure preserving, ergodic action of an amenable group on a standard probability space up to orbit equivalence.
However, in a topological context, when $X$ is a Cantor space, the situation is more complicated. In {\cite{GPS1}} and {\cite{GPS2}}, Giordano, Putnam and Skau managed,
using $C^*$-algebra techniques, to obtain
results about $\mathbb{Z}$-actions. Before quoting them, let us recall some basic objects from topological dynamics:
\begin{defnonnum}
A homeomorphism $\varphi$ on the Cantor space $X$ is said to be \textit{minimal} if every $\varphi$-orbit is dense in $X$: $\forall x\in X \ \overline{Orb_\varphi(x)}=X$.
\end{defnonnum}
\begin{defnonnum}
The \textit{full group of $\varphi$}, denoted by $[\varphi]$, consists of all homeomorphisms that preserve $\varphi$-orbits:
\[[\varphi]=\left\{f\in\homeo(X) \colon \forall x\in X \ \exists n_f(x)\in\mathbb{Z} \text{ such that } f(x)=\varphi^{n_f(x)}(x)\right\} \]
For $f\in [\varphi]$, the application $\accfonction{n_f}{X}{\mathbb{Z}}{x}{n_f(x)}$ is called \textit{a cocycle of $f$}.
\end{defnonnum}
\begin{rmknonnum}
Since $\varphi$ is aperiodic (because it is minimal), cocycles are uniquely defined.
\end{rmknonnum}
\begin{defnonnum}
The \textit{topological full group of $\varphi$}, denoted by $\llbracket\varphi\rrbracket$, is the subgroup of elements in $[\varphi]$ which have a continuous cocycle.
\end{defnonnum}
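For instance, every power $\varphi^n$ belongs to $\llbracket\varphi\rrbracket$, its cocycle being the constant function equal to $n$; more generally, one obtains elements of $\llbracket\varphi\rrbracket$ by partitioning $X$ into finitely many closed-open sets and applying a (possibly different) power of $\varphi$ on each piece, provided the resulting map is a bijection.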
Using this vocabulary, Giordano, Putnam and Skau's Theorem about orbit equivalence is the following:
\begin{thmnonnum}[{\cite{GPS2}, Corollary 4.6}]
Let $(\varphi_1,X_1)$ and $(\varphi_2,X_2)$ be minimal Cantor systems. Then the following are equivalent:
\begin{enumerate}[(i)]
\item The two systems $(\varphi_1,X_1)$ and $(\varphi_2,X_2)$ are orbit equivalent
\item The full groups $[\varphi_1]$ and $[\varphi_2]$ are isomorphic as abstract groups
\item \label{GPS2-iii} There exists a homeomorphism $\fct{g}{X_1}{X_2}$ that pushes forward $\varphi_1$-invariant measures onto $\varphi_2$-invariant measures,
i.e.\ $g_*M(\varphi_1)=M(\varphi_2)$, where
$$M(\varphi_i)=\{ \mu \text{ probability measure on } X_i \colon {\varphi_i}_*\mu=\mu \}$$
\end{enumerate}
\end{thmnonnum}
In particular, point (\ref{GPS2-iii}) implies that there exist continuum many pairwise non-orbit-equivalent Cantor minimal systems,
see for example {\cite{Dow}}.
Giordano--Putnam--Skau also considered a notion which differs slightly from orbit equivalence and that we are concerned with in this article, which they call \emph{strong orbit equivalence}.
To explain it, let us introduce some terminology: an orbit equivalence $h$ between two minimal systems $(\varphi,X)$ and $(\psi,X)$ gives rise
to two applications $n$ and $m$ from $X$ to $\mathbb{Z}$ such that
$$\forall x \in X \ h(\varphi(x))=\psi^{n(x)}(h(x)) \text{ and conversely }\inv{h}(\psi(x))=\varphi^{m(x)}(\inv{h}(x)).$$
These two applications are called \emph{cocycles associated to $h$}. The orbit equivalence $h$ is called a \emph{strong orbit equivalence}
if both cocycles $n$ and $m$ have at most one point of discontinuity. This notion is, as the next theorem highlights, closely related to
the group $\Gamma^\varphi_{x}$ (associated to a minimal system $(\varphi,X)$ and a point $x\in X$) consisting of all elements $\gamma$ of
$\llbracket\varphi\rrbracket$ such that $\gamma(Orb^+_\varphi(x))=Orb^+_\varphi(x)$.
\begin{thmnonnum}[{\cite{GPS2}, Corollary 4.11}]
Two Cantor minimal systems $(\varphi,X)$ and $(\psi,X)$ are strong orbit equivalent if and only if $\Gamma^\varphi_{x}$ and $\Gamma^\psi_{y}$ are isomorphic as abstract
groups for all $x,y\in X$.
\end{thmnonnum}
An elementary proof of this result, which justifies our original interest for the groups $\Gamma_x^\varphi$, is given in section \ref{section: GPS proof}. By \emph{elementary},
we mean that the proof is only based on manipulations on closed-open sets.
The objects we mainly use are sequences of Kakutani-Rokhlin partitions, the point of which is to describe in finite time the image of (almost) any closed-open set by a minimal homeomorphism (details in section \ref{subsection: KR partitions}). Our general strategy is based on a theorem of Krieger ({\cite{Kr}, Theorem 3.5}). Recall that
two subgroups $\Gamma$ and $\Lambda$ of $ \homeo(X)$ are \emph{spatially isomorphic} if there exists a homeomorphism $g$ of $X$ such that
$$\Gamma=g\Lambda\inv{g}.$$ Krieger's theorem establishes that two countable, locally finite groups $\Gamma$ and $\Lambda$ satisfying some extra properties, called \emph{ample groups},
are spatially isomorphic if and only if there exists a homeomorphism $g$ such that
$$\accfonction{\overline{g}}{\quot{CO(X)}{\Gamma}}{\quot{CO(X)}{\Lambda}}{Orb_\Gamma(A)}{Orb_\Lambda(g(A))}$$
is a bijection.
To set the stage for the application of Krieger's theorem, we first recall the construction of the groups $\Gamma_x^\varphi$
out of a sequence of Kakutani-Rokhlin partitions, which shows that they are locally
finite (and actually even ample). Afterwards, an important step is to show that the relation induced by the $\Gamma_x^\varphi$ on closed-open sets does not depend
on the point $x$, i.e using Krieger's vocabulary that they have the same \emph{dimension range}.
We also notice that this approach recovers the result that $\overline{\Gamma^\varphi_x}=\overline{\llbracket\varphi\rrbracket}$ (see {\cite{IM}, Theorem 5.6}, or {\cite{GM}}).
Finally, the groups $\Gamma_x^\varphi$ are not necessarily simple, but in section \ref{section: Borel complexity}, adapting arguments from {\cite{BM}} on the one hand and {\cite{Med}} on the other, we show respectively that the groups $D(\Gamma_x^\varphi)$ are always simple, and that any isomorphism between two such groups is spatial; this makes them locally finite, simple groups witnessing strong orbit equivalence of the homeomorphisms they are attached to, as desired.
\begin{comment}
Algebraic tools (notably, dimension groups) have long been known as a useful and effective way to study minimal homeomophisms of the Cantor space;
sometimes this leads to relatively complex, high-powered proofs of theorems that one would like to understand dynamically, so as
to gain a different perspective or extend them to other contexts. Such is one of the purposes of this article.
Let us define the context and basic concepts: any action of a group $G$ on a set $X$ induces an equivalence relation, whose equivalence classes are the $G$-orbits.
Forgetting the action to focus only on this equivalence relation, it turns out that two actions can be very different
dynamically speaking, even coming from different groups, yet generate the same equivalence relation up to a bijection.
This leads to the notion of orbit equivalence:
\begin{defnonnum}
Two actions $\alpha\colon G\curvearrowright X$ and $\beta\colon G'\curvearrowright X'$ are \emph{orbit equivalent} if there exists a bijection
\fct{h}{X}{X'} that realizes a bijective correspondence between $\alpha$-orbits and $\beta$-orbits, i.e $$\forall x\in X \ h(Orb_\alpha(x))=Orb_\beta(h(x))$$
\end{defnonnum}
\inftyoindent If the sets $X$ and $X'$ are endowed with a structure, it is natural to require $h$ to preserve this structure, i.e. to be an
isomorphism from $X$ to $X'$, and this will be a part of our conventions below. It is then tempting to try and classify actions up to
orbit equivalence. In a measure theoretical context, this is now a well-understood problem;
in the context of amenable groups, one has the following results, due to Dye and later Ornstein--Weiss:
\begin{thmnonnum}[{\cite{Dye}}]
Any two probability measure preserving, ergodic actions of $\mathbb{Z}$ are orbit equivalent.
\end{thmnonnum}
\begin{thmnonnum}[{\cite{OW}}]
Any probability measure preserving ergodic action of an amenable group $G$ is orbit equivalent to an action of $\mathbb{Z}$.
\end{thmnonnum}
However, in a topological context, when $X$ is a Cantor space, the situation is more complicated. In {\cite{GPS1}} and {\cite{GPS2}}, Giordano, Putnam and Skau managed,
using $C^*$-algebra techniques, to obtain
results about $\mathbb{Z}$-actions. Before quoting the Theorems that are at stake in this article, let us recall some basic objects from topological dynamics:
\begin{defnonnum}
A homeomorphism $\varphi$ on the Cantor space $X$ is said to be \textit{minimal} if every $\varphi$-orbit is dense in $X$: $\forall x\in X \ \overline{Orb_\varphi(x)}=X$.
\end{defnonnum}
\begin{defnonnum}
The \textit{full group of $\varphi$}, denoted by $[\varphi]$, consists of all homeomorphisms that preserve $\varphi$-orbits:
\[[\varphi]=\left\{f\in\homeo(X) \colon \forall x\in X \ \exists n_f(x)\in\mathbb{Z} \text{ such that } f(x)=\varphi^{n_f(x)}(x)\right\} \]
For $f\in [\varphi]$, the application $\accfonction{n_f}{X}{\mathbb{Z}}{x}{n_f(x)}$ is called \textit{a cocycle of $f$}.
\end{defnonnum}
\begin{rmknonnum}
Since $\varphi$ is aperiodic (because it is minimal), cocycles are uniquely defined.
\end{rmknonnum}
\begin{defnonnum}
The \textit{topological full group of $\varphi$}, denoted by $\llbracket\varphi\rrbracket$, is the subgroup of elements in $[\varphi]$ which have a continuous cocycle.
\end{defnonnum}
Using this vocabulary, Giordano, Putnam and Skau's Theorem about orbit equivalence is the following:
\begin{thmnonnum}[{\cite{GPS2}, Corollary 4.6}]
Let ($\varphi_1,X_1)$ and $(\varphi_2,X_2)$ be minimal Cantor systems. Then the following are equivalent:
\begin{enumerate}[(i)]
\item The two systems ($\varphi_1,X_1)$ and $(\varphi_2,X_2)$ are orbit equivalent
\item The full groups $[\varphi_1]$ and $[\varphi_2]$ are isomorphic as abstract groups
\item \label{GPS2-iii} There exists a homeomorphism $\fct{g}{X_1}{X_2}$ that pushes forward $\varphi_1$-invariant measures onto $\varphi_2$-invariant measures,
i.e $g_*M(\varphi_1)=M(\varphi_2)$, where
$$M(\varphi_i)=\{ \mu \text{ probability measure on } X_i \colon {\varphi_i}_*\mu=\mu \}$$
\end{enumerate}
\end{thmnonnum}
In particular, point (\ref{GPS2-iii}) implies that there exist continuum many pairwise non orbit equivalent Cantor minimal systems, see for example {\cite{Dow}}.
Giordano--Putnam--Skau also considered a notion which differs slightly from orbit equivalence, which they call \emph{strong orbit equivalence}.
To explain it, let us introduce some terminology: an orbit equivalence $h$ between two minimal systems $(\varphi,X)$ and $(\psi,X)$ gives rise
to two applications $n$ and $m$ from $X$ to $\mathbb{Z}$ such that
$$\forall x \in X \ h(\varphi(x))=\psi^{n(x)}(h(x)) \text{ and conversely }\inv{h}(\psi(x))=\varphi^{m(x)}(\inv{h}(x)).$$
These two applications are called \emph{cocycles associated to $h$}. The orbit equivalence $h$ is called a \emph{strong orbit equivalence}
if both cocycles $n$ and $m$ have at most one point of discontinuity. This notion is, as the next theorem highlights, closely related to
the group $\Gamma^\varphi_{x}$ (associated to a minimal system $(\varphi,X)$ and a point $x\in X$) consisting of all elements $\gamma$ of
$\llbracket\varphi\rrbracket$ such that $\gamma(Orb^+_\varphi(x))=Orb^+_\varphi(x)$.
\begin{thmnonnum}[{\cite{GPS2}, Corollary 4.11}]
Two Cantor minimal systems $(\varphi,X)$ and $(\psi,X)$ are strong orbit equivalent if and only if $\Gamma^\varphi_{x}$ and $\Gamma^\psi_{y}$ are isomorphic as abstract
groups for all $x,y\in X$.
\end{thmnonnum}
An elementary proof of this result, which justifies our interest for the groups $\Gamma_x^\varphi$, is given in section \ref{section: GPS proof}. By \emph{elementary},
we mean that the proof is only based on manipulations on closed-open sets.
The objects we mainly use are sequences of Kakutani-Rokhlin partitions, the point of which is to describe in finite time the action of a minimal homeomorphism
on any closed-open set (details in section \ref{subsection: KR partitions}). Our general strategy is based on a theorem of Krieger ({\cite{Kr}, Theorem 3.5}). Recall that
two subgroups $\Gamma$ and $\Lambda$ of $ \homeo(X)$ are \emph{spatially isomorphic} if there exists a homeomorphism $g$ of $X$ such that
$$\Gamma=g\Lambda\inv{g}.$$ Krieger's theorem establishes that two countable, locally finite groups $\Gamma$ and $\Lambda$ satisfying some extra properties, called \emph{ample groups},
are spatially isomorphic if and only if there exists a homeomorphism $g$ such that
$$\accfonction{\overline{g}}{\quot{CO(X)}{\Gamma}}{\quot{CO(X)}{\Lambda}}{Orb_\Gamma(A)}{Orb_\Lambda(g(A))}$$
is a bijection.
To set the stage for the application of Krieger's theorem, we first recall the construction of the groups $\Gamma_x^\varphi$
out of a sequence of Kakutani-Rokhlin partitions, which shows that they are locally
finite (and actually even ample). Afterwards, an important step is to show that the relation induced by the $\Gamma_x^\varphi$ on closed-open sets does not depend
on the point $x$, i.e using Krieger's vocabulary that they have the same \emph{dimension range}.
We also notice that this approach recovers the result that $\overline{\Gamma^\varphi_x}=\overline{\llbracket\varphi\rrbracket}$ (see {\cite{IM}, Theorem 5.6}, or {\cite{GM}})
Finally in section \ref{section: Borel complexity} we go a little bit further, and adapting arguments from {\cite{BM}} on the one hand, and {\cite{Med}}
on the other hand, we prove respectively that
for any minimal homeomorphism $\varphi$, the group $D(\Gamma_\varphi)$ is simple, and that any isomorphism between two such groups is spatial. This enables us to
give an application to Borel reducibility theory; we refer the interested reader to {\cite{BecKec}}, section 3, {\cite{Gao}}, section 5 and references therein for
more advanced results and only briefly explain the general context here.
Given two equivalence relations $E$ and $E'$ on standard Borel spaces $X$ and $X'$ respectively, a Borel map \fct{f}{X}{X'} is called a Borel reduction of
$E$ into $E'$ if it satisfies
$$\forall x,y\in X \quad xEy\iff f(x)E'f(y) \ .$$
Such a reduction induces an injection \fct{\overline{f}}{\quot{X}{E}}{\quot{X'}{E'}} between the spaces of orbits, and in that
case we denote $E\leq E'$ and say that $E$ Borel reduces to $E'$. Using this vocabulary, we prove that the relation of strong orbit equivalence on the space of minimal
homeomorphisms of the Cantor space is Borel reducible to the isomorphism relation on the space of
simple, countable, locally finite groups. It was proved by Melleray in {\cite{Mel20}} that strong orbit equivalence is \emph{a universal relation arising from an
$S_\infty$-action}, meaning that every equivalence relation that arises from a Borel $S_\infty$-action on a standard Borel space Borel reduces to it (here $S_\infty$ is
the permutation group of a countably infinite set; see {\cite{Gao}, section 2.4}). We thus obtain the following theorem.
\begin{thmnonnum}
The relation of isomorphism of countable, locally finite, simple groups is a universal relation arising from a Borel action of $S_\infty$.
\end{thmnonnum}
Informally speaking, this means that classifying locally finite, simple groups is as complicated as it could possibly be,
in particular it is just as complicated as classifying countable groups up to isomorphism.
\end{comment}
\section{Preliminaries}\label{section: Preliminaries}
For the remainder of this article, $X$ denotes the Cantor space; $\varphi$ denotes a minimal homeomorphism of $X$ and $x_0$ a point of $X$; we call a
closed-open set a clopen set, and $CO(X)$ stands for the Boolean algebra of clopen subsets of $X$. Moreover, we use the notation $\llbracket i;j\rrbracket$ to denote the interval of integers $\{i,i+1,\ldots,j\}$, which is by convention empty if $i>j$. Finally we draw the reader's attention to the following fact: for us, $\mathbb{N}$ is the set
of non-negative integers, and $Orb_\varphi^+(x_0)=\left\{\varphi^n(x_0)\right\}_{n\in\mathbb{N}}$.
Most of this section consists of well-known facts and objects, and the reader who knows what a Kakutani-Rokhlin partition is can skim through it until Theorem \ref{Krthm} and the discussion below it.
\medbreak
The following property is very useful to build and understand elements of a topological full group in practice:
\begin{prop}
Let $f$ be a homeomorphism of $X$. Then $f\in\llbracket\varphi\rrbracket$ if and only if there exist $A_1,\ldots ,A_n\in CO(X)$
and $k_1,\ldots,k_n\in\mathbb{Z}$ such that $X=\bigsqcup_{i=1}^n A_i$ and $\restr{f}{A_i}=\restr{\varphi^{k_i}}{A_i}$.
\end{prop}
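As a concrete illustration of this criterion (a standard example which we add here only for the reader's convenience, and which is not specific to any of the references above), let $A$ be a clopen set such that $A\cap\varphi(A)=\emptyset$. Then the map
$$f(x)=\begin{cases}
\varphi(x) &\text{if } x\in A \\
\inv{\varphi}(x) &\text{if } x\in \varphi(A) \\
x &\text{otherwise}
\end{cases}$$
is a homeomorphism of $X$ which is a power of $\varphi$ on each piece of the clopen partition $X=A\sqcup\varphi(A)\sqcup\left(X\setminus(A\cup\varphi(A))\right)$, hence $f\in\llbracket\varphi\rrbracket$; it is an involution exchanging $A$ and $\varphi(A)$.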
The following theorem is well-known (and easy to check in this situation):
\begin{thm}\label{stone}(Stone's representation theorem for Boolean algebras)
For every automorphism $\fct{\alpha}{CO(X)}{CO(X)}$, there exists a (unique) homeomorphism $g$ of $X$ that extends
$\alpha$ (i.e. $g(C)=\alpha(C)$ for every $C\in CO(X)$).
\end{thm}
\begin{rmk}
$\aut(CO(X))$ is a closed subset of $CO(X)^{CO(X)}$ equipped with the product topology (of the discrete topology on $CO(X)$). Thus this theorem implies that $\homeo(X)$ inherits a Polish group structure from that topology; a sub-basis is given by the sets $$[A\rightarrow B]=\{f\in \homeo(X)\colon f(A)=B\}$$ where $A$ and $B$ are clopen sets.
Actually, this is the only Polish topology on $\homeo(X)$ (see {\cite{Ros-Sol}}).
We always consider $\homeo(X)$ as endowed with this topology, and think of it in this way, even though one can use a complete metric on $X$ and think of $\homeo(X)$ as being endowed with the topology of uniform convergence.
\end{rmk}
\subsection{Kakutani-Rokhlin partitions}\label{subsection: KR partitions}
Kakutani-Rokhlin partitions (for which we will use the term ``K-R partitions'' for brevity) are an essential tool for an elementary study of the dynamics of minimal homeomorphisms.
The idea is the following: given a nonempty clopen set $A$, we look at the first return map
$$\accfonction{\varphi_A}{A}{A}{x}{\varphi^{\tau_A(x)}(x)} \ , \text{ where}$$
$$\accfonction{\tau_A}{A}{\mathbb{N}}{x}{\min\{k>0\colon \varphi^k(x)\in A\}}$$
is well-defined because a homeomorphism is minimal if and only if every forward orbit is dense in $X$. We can also consider $\varphi_A$ as a homeomorphism of $X$ by requiring $\restr{{\varphi_A}}{A^c}=\restr{id}{A^c}$. Since $A$
is compact and $\tau_A$ is continuous, $\tau_A(A)$ is finite, and $A$ admits a finite clopen partition into the sets of points that return to $A$ after a given number
of steps.
\begin{figure}
\caption{K-R partition built from $A$}
\label{KR-partition}
\end{figure}
More formally, $A=\bigsqcup_{k\in\mathbb{N}}A_k$, where $A_k := \inv{\tau_A}(\{k\})$ is clopen and all but finitely many of them are empty.
We then obtain a partition of $X$ that can be represented as in figure \ref{KR-partition}.
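To keep a concrete example in mind (we add it here only as an illustration, with notation that reappears in section \ref{section: Borel complexity}): let $\sigma$ be the dyadic odometer on $\{0,1\}^{\mathbb{N}}$, i.e. addition of $1$ on the first coordinate with carry to the right, which is a minimal homeomorphism, and for a finite word $w$ let $N_w$ denote the clopen set of sequences beginning with $w$. Taking $A=N_{00}$, one checks that $\tau_A\equiv 4$, so $A_4=A$ and all other $A_k$ are empty, and the partition obtained this way consists of a single ``tower'' of height $4$ with floors
$$N_{00},\quad \sigma(N_{00})=N_{10},\quad \sigma^{2}(N_{00})=N_{01},\quad \sigma^{3}(N_{00})=N_{11}.$$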
More generally we define K-R partitions as follows:
\begin{deff}
A \textit{K-R partition associated to $\varphi$} is a clopen partition
$$\Xi=(D_{i,j})_{i\in\llbracket1,N\rrbracket,
j\in\llbracket0,H_i-1\rrbracket}$$
of $X$ such that $\varphi(D_{i,j-1})=D_{i,j}$ for all $i\in\llbracket1,N\rrbracket$ and all $j\in\llbracket1,H_i-1\rrbracket$.
We call, for $i\in\llbracket1,N\rrbracket$, \textit{tower $i$ of $\Xi$} the set $T_i=\bigsqcup_{j\in\llbracket0,H_i-1\rrbracket}D_{i,j}$,
and \textit{height} of this tower the number $H_i$. We also talk about the \textit{$k$-th floor} of $\Xi$ to designate
$\bigsqcup_{i\in\llbracket1,N\rrbracket}\varphi^k(D_{i,0})$. The 0-th floor is called \textit{the base} of $\Xi$, and denoted by $B(\Xi)$,
and the $(-1)$-th one, i.e. $\bigsqcup_{i\in\llbracket1,N\rrbracket}D_{i,H_i-1}$, is called \textit{the top} of the partition.
In the case where the partition carries an index ($\Xi_n$ instead of $\Xi$), which will be the case very soon, we talk about the tower $T^n_i$, the atom $D^n_{i,j}$, etc.
\end{deff}
\begin{rmk}
Note that $\bigsqcup_{i\in\llbracket1,N\rrbracket}D_{i,H_i-1}$ is mapped by $\varphi$ onto $B(\Xi)$, and thus is the top of the partition.
However, $D_{i,H_i-1}$ can be mapped anywhere in $B(\Xi)$.
\end{rmk}
\begin{notation}
We denote by $\langle \Xi \rangle$ the Boolean algebra generated by the atoms \newline $(D_{i,j})_{i\in\llbracket1,N\rrbracket, j\in\llbracket0,H_i-1\rrbracket}$ of $\Xi$.
\end{notation}
The idea behind K-R partitions is that they represent how $\varphi$ acts on the atoms of $\Xi$ which are not contained in the top of $\Xi$.
We want to get better and better approximations of $\varphi$ in terms of how it acts on clopen sets. In the same spirit as Theorem \ref{stone}, one can check
that knowing how $\varphi$ acts on the clopen sets that do not contain a
particular point determines $\varphi$. In our case, we want to construct a sequence of K-R partitions in which every clopen set appears,
and such that the intersection of their bases (and hence of their tops) is a single point.
To do that, we have to be able to refine a K-R partition into another one, without losing information we have already obtained.
This is the purpose of the following proposition:
\begin{prop}\label{prop: raffinement d'une KR-part contenant un clopen fixé}
Let $A\in CO(X)$, and $\Xi$ be a K-R partition. Then there exists a K-R partition $\Xi'$ finer than $\Xi$ such that $A\in \langle\Xi' \rangle$.
\end{prop}
\begin{figure}
\caption{Cutting process in tower $i$ of $\Xi$}
\label{raffinementclopen}
\end{figure}
\begin{proof}
For each tower $T_i$ of $\Xi$, we define an equivalence relation $\mathscr{R}_i$ on $D_{i,0}$ by
\[x\mathscr{R}_i y \Leftrightarrow \forall j<H_i \ \left( (\varphi^j(x)\in A \wedge \varphi^j(y)\in A) \vee (\varphi^j(x)\in A^c \wedge\varphi^j(y)\in A^c) \right)\]
Denote by $P^i$ the partition of $D_{i,0}$ associated to $\mathscr{R}_i$ (cf. figure \ref{raffinementclopen}).
Denoting by $(B_l)_{l\in\llbracket0,N_i\rrbracket}$ the atoms of $P^i$, we just ``cut'' the $i$-th tower into $N_i$ towers whose bases are the $B_l$'s.
Following this method for each tower, we obtain a K-R partition $\Xi'$ finer than $\Xi$ such that every $A\cap D_{i,j}$ is in $\langle\Xi' \rangle$,
and so $A$ is also in $\langle\Xi' \rangle$.
\end{proof}
Let $(U_n)_{n\in\mathbb{N}}$ be a clopen basis of the topology. Applying Proposition \ref{prop: raffinement d'une KR-part contenant un clopen fixé}
inductively, we obtain the following:
\begin{cor}\label{KR-properties}
There exists a sequence of K-R partitions $(\Xi_n)_{n\in\mathbb{N}}$ such that the following conditions hold for all $n\in\mathbb{N}$:
\begin{enumerate}[(i)]
\item\label{KR-properties1} $\Xi_{n+1}$ is finer than $\Xi_n$
\item\label{KR-properties2} $B(\Xi_{n+1})\subset B(\Xi_n)$ and $\bigcap_i B(\Xi_i)=\{x_0\}$
\item\label{KR-properties3} $U_n\in \langle\Xi_n \rightarrowngle$
\item\label{KR-properties4} The minimal height of $\Xi_n$ is greater than $n$.
\end{enumerate}
\end{cor}
When we consider a sequence of K-R partitions, we assume from now on that it fulfills properties \eqref{KR-properties1} to \eqref{KR-properties4}.
\begin{rmk}\label{rmk: finite images of a small neighbourhood are disjoint}
Property \eqref{KR-properties4} is obtained thanks to the aperiodicity of $\varphi$: \newline
the points $x_0, \varphi(x_0),\ldots,\varphi^n(x_0)$ are all distinct, thus for a sufficiently small neighbourhood $U$ of $x_0$, we have that
$U,\varphi(U),\ldots,\varphi^n(U)$ are pairwise disjoint.
\end{rmk}
\begin{deff}
We call \textit{base point} of a sequence of K-R partitions $(\Xi_n)$ the point $x_0$ that appears in property \eqref{KR-properties2}, and \textit{top point} of the
sequence the point $\inv\varphi(x_0)$.
\end{deff}
\begin{rmk}
An important thing to understand about this construction is that a tower of $\Xi_{n+1}$ is obtained by cutting (vertically) the towers of $\Xi_n$
and then stacking some of them on top of each other. This is what is called ``cutting and stacking'' (see figure \ref{gamma_2_partitions}; arrows tell us
where a clopen set at the top of the partition is sent by $\varphi$; if there is no arrow, it means that it is sent into the base of $\Xi_{n+1}$).
Indeed, for every $i$, $D_{i,0}^{n+1}\subset D_{i_0,0}^n\subset B(\Xi_n)$ for some $i_0$, so $\bigsqcup_{k=0}^{H_{i_0}^n-1} D^{n+1}_{i,k}$ is exactly obtained by cutting
the $i_0$-th tower of $\Xi_n$. If $D_{i,H_{i_0}^n-1}^{n+1}$ is not on the top of $\Xi_{n+1}$, it suffices to look at the tower of $\Xi_n$ in which it
is mapped by $\varphi$ to know which "tower" to stack over this one: if $\varphi(D^{n+1}_{i,H_{i_0}^n-1})\subset D_{i_1,0}^n$, then
$\bigsqcup_{k=H_{i_0}^n}^{H_{i_0}^n+H_{i_1}^n-1} D^{n+1}_{i,k}$ is obtained by cutting the $i_1$-th tower of $\Xi_n$, and so on.
\end{rmk}
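For the dyadic odometer considered above (again, only as an added illustration), this cutting and stacking is completely explicit: if $\Xi_n$ is the single tower of height $2^n$ over the base $N_{0^n}$, then cutting it into the two columns lying over $N_{0^{n}0}$ and $N_{0^{n}1}$ and stacking the second column on top of the first yields $\Xi_{n+1}$, the single tower of height $2^{n+1}$ over $N_{0^{n+1}}$; indeed $\sigma^{2^n}(N_{0^{n+1}})=N_{0^{n}1}$.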
\subsection{A locally finite group associated to a minimal homeomorphism}
In this section we study the group of homeomorphisms in $\llbracket\varphi\rrbracket$ that preserve the non-negative semi-orbit
of $x_0$. We denote this group
$$\Gamma_{x_0}^\varphi:=\{f\in\llbracket\varphi\rrbracket \colon f(Orb^+(x_0))=Orb^+(x_0)\} \ ,$$
or just $\Gamma_{x_0}$ if the homeomorphism involved is clear from the context.
As recalled in the introduction, these groups are used in {\cite{GPS2}} and are one of the main objects under consideration in our article.
For the moment we focus on explaining how they are related to K-R partitions.
\begin{figure}
\caption{An element $\gamma$ in $\Gamma_n$ and in $\Gamma_{n+1}$}
\label{gamma_2_partitions}
\end{figure}
Let $(\Xi_n)_{n\in\mathbb{N}}$ be a sequence of K-R partitions (satisfying properties \eqref{KR-properties1} to \eqref{KR-properties4}). For each $n\in\mathbb{N}$, we consider the group $\Gamma_n$
of elements of $\llbracket\varphi\rrbracket$ that have a constant cocycle on the atoms of $\Xi_n$ and that ``stay'' in each tower of the partition, meaning
that an atom cannot go through either the top of its tower or its base under the action of a $\gamma$ in $\Gamma_n$.
More formally, if $(D_{i,j})_{i,j}$ are the atoms
of $\Xi_n$ and $H_i$ is the height of the $i$-th tower, then
\[ \Gamma_n=\{f\in\llbracket\varphi\rrbracket \colon \forall i,j \ \restr{f}{D_{i,j}}=\restr{\varphi^k}{D_{i,j}} \text{ for some } k\in\llbracket-j,H_i-j-1\rrbracket\}\]
An element of this group may be thought of as permuting the atoms in each tower, but it is important to remember that it acts on points;
in particular, this point of view helps to see the inclusion $\Gamma_n\subset\Gamma_{n+1}$.
Indeed, as $\Xi_{n+1}$ is constructed from $\Xi_n$ by cutting and stacking, each $\gamma\in\Gamma_n$ also belongs to $\Gamma_{n+1}$
(see figure \ref{gamma_2_partitions}).
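As a concrete example of such an element (included only as an illustration), fix a tower $T_i$ of $\Xi_n$ and two of its floors $D_{i,j}$ and $D_{i,j'}$ with $j<j'$. Then the involution
$$\gamma(x)=\begin{cases}
\varphi^{j'-j}(x) &\text{if } x\in D_{i,j} \\
\varphi^{j-j'}(x) &\text{if } x\in D_{i,j'} \\
x &\text{otherwise}
\end{cases}$$
exchanges the two floors and belongs to $\Gamma_n$: its cocycle is constant on every atom of $\Xi_n$, and the exponents $j'-j$ and $j-j'$ belong to $\llbracket-j,H_i-j-1\rrbracket$ and $\llbracket-j',H_i-j'-1\rrbracket$ respectively, so $\gamma$ does not move any atom through the top or the base of its tower.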
\begin{deff}
We have obtained from a sequence of K-R partitions $(\Xi_n)$ a locally finite group $\Gamma_\Xi=\bigcup_{n\in\mathbb{N}}\Gamma_n$.
If there is a risk of confusion, we write $\Gamma^\varphi_\Xi$ to indicate the homeomorphism involved.
\end{deff}
For the moment, it seems that $\Gamma_\Xi$ depends on the sequence of K-R partitions we have chosen. We
now show that it only depends on the choice of the base point. Recall that $$\Gamma_{x_0}^\varphi=\left\{\gamma\in \llbracket\varphi\rrbracket\ |\ \gamma(Orb_\varphi^+(x_0))=Orb_\varphi^+(x_0)\right\}.$$
\begin{prop}\label{gamma_egal_orbit_pos}
The groups $\Gamma_\Xi$ and $\Gamma_{x_0}$ are equal.
\end{prop}
\begin{proof}
Let $\gamma\in\Gamma_\Xi$, and $y\in Orb^+(x_0)$. Say $y=\varphi^k(x_0)$ for some $k\in\mathbb{N}$. By property \eqref{KR-properties4} of Corollary \ref{KR-properties}, there exists $N\in\mathbb{N}$ big
enough that $\gamma\in\Gamma_N$ and each tower of $\Xi_N$ has height at least $k+1$.
Then $x_0$ and $y$ belong to the same tower of $\Xi_N$, and $x_0\in B(\Xi_N)$. Since $\gamma\in\Gamma_N$, we have $\gamma(y)=\varphi^j(y)$ for some $j\geq -k$,
hence $\gamma(y)\in Orb^+(x_0)$. Since the same property holds for $\inv{\gamma}$, we obtain
$$\gamma(Orb^+(x_0))=Orb^+(x_0).$$
Conversely, let $h\in\Gamma_{x_0}$. Let $X=\bigsqcup_{i=1}^n A_i$ be a clopen partition such that $\restr{h}{{A_i}}=\restr{\varphi^{k_i}}{{A_i}}$.
Define $m_i=\min\{k\geq 0 \colon \varphi^{k}(x_0)\in A_i\}.$ Necessarily, $k_i\geq -m_i$. On the other hand, since $\varphi^k(x_0)\notin A_i$ for all
$k\in\llbracket0,m_i-1\rrbracket$, there exists a clopen set $U_k^i\ni\varphi^k(x_0)$ disjoint from $A_i$. Thus
$\bigcap_i\bigcap_{0\leq k<m_i}\varphi^{-k}(U_k^i)$ is a neighbourhood of $x_0$, hence $B(\Xi_N)$ is contained in it for $N$ big enough.
Moreover, we can take $N$ big enough that every $A_i\in \langle\Xi_N \rangle$, and every tower in $\Xi_N$ has height greater than $\max_i(m_i)$.
Then every atom of $\Xi_N$ is included in one of the $A_i$'s, and an atom contained in $A_i$ cannot appear before the $m_i$-th floor
(because the $k$-th floor is a subset of $U_k^i$ which is disjoint from $A_i$).
Since $h\in\llbracket\varphi\rrbracket$, we also have
$$h(Orb_\varphi^{<0}(x_0))=Orb_\varphi^{<0}(x_0)$$
(where $Orb_{\varphi}^{<0}(x_0)$ stands for $\{\varphi^k(x_0), k<0\}$). Thus,
defining
$$m'_i=\max\{k<0 \colon \varphi^{k}(x_0)\in A_i\}$$
and using the same argument, we get that $k_i<-m'_i$ and that an atom contained in $A_i$ cannot appear higher than the
$m'_i$-th floor (i.e. there are at least $\abs{m'_i}-1$ atoms above it before reaching the top of the tower).
This shows that $h$ cannot send an atom of $\Xi_N$ through the top or the base of the partition, and so $h\in\Gamma_\Xi$.
\end{proof}
In the next section, we use as a key ingredient the following theorem which is a combination of two results: the equivalence between Conditions \ref{thm Krieger, cond 2} and \ref{thm Krieger, cond 3} is due to Krieger ({\cite{Kr}, Theorem 3.5}), and its proof is elementary (actually, it is based on a back-and-forth argument that is easy
to set up directly in the case of the groups $\Gamma_{x}^\varphi$; we chose not to include this proof so as not to overtax the reader's attention).
The equivalence between Conditions \ref{thm Krieger, cond 1} and \ref{thm Krieger, cond 2} has been known since Giordano, Putnam and Skau (see {\cite{GPS2}, Theorem 4.2}) in the case $(\Gamma,\Lambda)=(\Gamma^\varphi_{x_1},\Gamma^\psi_{x_2})$. New proofs based on reconstruction theorems have been developed since, and we give a proof in the case $(\Gamma,\Lambda)=(D(\Gamma^\varphi_{x_1}),D(\Gamma^\psi_{x_2}))$ using this approach in Proposition \ref{prop: every isom is spatial}.
\begin{thm}\label{Krthm}
Let $\varphi$, $\psi$ be minimal homeomorphisms, $x_1,x_2$ be two points in $X$, and $(\Gamma,\Lambda)$ denote either $(\Gamma^\varphi_{x_1},\Gamma^\psi_{x_2})$ or their commutator subgroups $(D(\Gamma^\varphi_{x_1}),D(\Gamma^\psi_{x_2}))$. Then the following are equivalent:
\begin{enumerate}
\item \label{thm Krieger, cond 1} The abstract groups $\Gamma$ and $\Lambda$ are isomorphic
\item \label{thm Krieger, cond 2} There exists a homeomorphism $g$ such that $\Gamma=g\Lambda\inv{g}$
\item \label{thm Krieger, cond 3} There exists a homeomorphism $g$
such that $$\accfonction{\overline{g}}{\bigslant{CO(X)}{\Gamma}}{\bigslant{CO(X)}{\Lambda}}{Orb_{\Gamma}(A)}{Orb_{\Lambda}(gA)} \text{ is a bijection.}$$
\end{enumerate}
\end{thm}
\begin{rmk}\label{rmk : more than original Krieger is true}
Moreover, a bit more is true: if Condition \ref{thm Krieger, cond 3} is satisfied, and $(x,y),(x',y')$ are two pairs of points belonging respectively to different $\Lambda$- and $\Gamma$-orbits, then the homeomorphism $g$ of Condition \ref{thm Krieger, cond 2} can be chosen such that $g(x)=x'$ and $g(y)=y'$. This is not part of the original theorem in {\cite{Kr}} but can easily be seen by following the proof. A proof of a much stronger generalization is given in {\cite{MR}, Theorem 3.11}.
\end{rmk}
\begin{deff}[Krieger's vocabulary]\label{def: dimension range}
The relation induced by a group $\Gamma^\varphi_x$ on $CO(X)$ is called the \emph{dimension range of $\Gamma^\varphi_x$}. A homeomorphism $g$ such that $\overline{g}$
(defined as in Condition \ref{thm Krieger, cond 3}) is a bijection is \emph{an isomorphism between the dimension ranges of $\Gamma^\varphi_x$ and $\Gamma^\psi_y$},
and those two groups are said to have \emph{isomorphic dimension ranges}.
\end{deff}
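To illustrate this notion (on an example we add here and which is not needed in the sequel): for the dyadic odometer $\sigma$ considered in the illustrations above, with the K-R partitions $\Xi_n$ consisting of the single tower of height $2^n$ over $N_{0^n}$ and base point $x_0=000\cdots$, one can check that two clopen sets $A$ and $B$ lie in the same $\Gamma^{\sigma}_{x_0}$-orbit if and only if $\mu(A)=\mu(B)$, where $\mu$ is the unique $\sigma$-invariant probability measure. Indeed, for $n$ large enough $A$ and $B$ belong to $\langle\Xi_n\rangle$, every atom of $\Xi_n$ has measure $2^{-n}$, and $\Gamma_n$ can realize any permutation of the floors of the tower; so $\mu(A)=\mu(B)$ means exactly that $A$ and $B$ contain the same number of atoms of $\Xi_n$, which happens if and only if some element of $\Gamma_n$ maps $A$ onto $B$.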
\section{An elementary proof of Giordano, Putnam and Skau's characterization of strong orbit equivalence}\label{section: GPS proof}
We are now ready to prove Giordano, Putnam and Skau's characterization of strong orbit equivalence. Let us recall the definition of strong orbit equivalence:
\begin{deff}
Two minimal homeomorphisms $\varphi$ and $\psi$ are called \emph{strong orbit equivalent} if there exists a homeomorphism $g$ such that
$$\forall x\in X \ g(Orb_\varphi(x))=Orb_\psi(g(x))$$
and such that the two associated cocycles $n$ and $m$, defined by
$$\forall x\in X \ g(\varphi(x))=\psi^{n(x)}(g(x)) \text{ and } \inv{g}(\psi(x))=\varphi^{m(x)}(\inv{g}(x))$$
have at most one point of discontinuity each.
\end{deff}
First of all we need to gain a better understanding of the relation induced by $\Gamma_{x}$ on clopen sets (the dimension range of $\Gamma_x$,
cf definition \ref{def: dimension range}). We have seen that the group does not depend on the sequence of partitions out of which it is
constructed. Now we go further, showing that the dimension range of $\Gamma_x$ does not depend on the base point $x$ either.
\begin{lema}\label{lemma GPS SOE-changing base point does not change dimension range}
For every $x,x'\in X$, $\Gamma_{x}$ and $\Gamma_{x'}$ have the same dimension range.
\end{lema}
\begin{figure}
\caption{Key figure for understanding Lemma \ref{lemma GPS SOE-changing base point does not change dimension range}}
\label{chgtptbase_fig}
\end{figure}
\begin{proof}
Let $(\Xi_n)$ and $(\Xi_n')$ be two sequences of K-R partitions, with bases $(D^n)$ and $(F^n)$ respectively, such that $\bigcap D^n = \{x\}$ and
$\bigcap F^n = \{x'\}$.
Let also $A$ and $B$ be two clopen sets such that $B\in Orb_{\Gamma_\Xi}(A)$. There exists $n\in\mathbb{N}$ and $\gamma\in\Gamma_{n,\Xi}$ such that
$\gamma(A)=B$, and $A,B$ are in $\langle\Xi_n\rangle$. Then in each tower of $\Xi_n$ there are as many atoms which are contained in $A$
as in $B$. Suppose $x'\in D^n_{i,j}$, and choose $m\in\mathbb{N}$ such that $\Xi_m'$ refines $\Xi_n$, and $F^m\subset D^n_{i,j}$.
Let $T'_k$ be a tower of $\Xi_m'$ with base $F^m_{k,0}$.
Then
$$H'^{m}_k=H^n_i-j+H^n_{i_1}+\ldots+H^n_{i_t}+j \text{ for some indices } i_1,\ldots,i_t$$ and $T'_k$ is composed of a stacking of
$\bigcup_{p\geq j}D^n_{i,p}\cap T'_k$, $T^n_{i_1}\cap T'_k$, $\ldots$, $T^n_{i_t}\cap T'_k$, and finally $\bigcup_{p<j}D^n_{i,p}\cap T'_k$
(see figure \ref{chgtptbase_fig}). So there are as many atoms of $T'_k$ that are included in $A$ as in $B$.
We can then naturally define an involution $g\in\Gamma_{\Xi_m'}$ such that $g(A)=B$ (see figure \ref{chgtptbase_fig}); whence $B\in Orb_{\Gamma_{\Xi'}}(A)$, which concludes the proof.
\end{proof}
\begin{rmk}\label{rmk: lien entre mes lemmes et la densité de gamma dans le groupe plein topo}
This lemma could also be seen as an easy consequence of the fact that $\overline{\Gamma_\varphi}=\overline{\llbracket\varphi\rrbracket}$
(see \cite{GM} or \cite{IM}, Theorem 5.6). Conversely, Lemmas
\ref{lemma GPS SOE-changing base point does not change dimension range} and \ref{lemma GPS SOE-be in Gamma orbit can be seen piecewise} give an elementary proof of that fact, using very similar arguments to those in the proof of Theorem \ref{gps1}.
\end{rmk}
\begin{thm}[{\cite{GPS2}, Corollary 4.11}]\label{gps1}
Let $\varphi$ and $\psi$ be two minimal homeomorphisms. The following are equivalent:
\begin{enumerate}
\item $\varphi$ is strong orbit equivalent to $\psi$
\item There exist $x,y \in X$ such that $\Gamma^\varphi_{x}$ and $\Gamma^\psi_{y}$
are isomorphic as abstract groups \label{condition 2}
\item For all $x,y \in X$, $\Gamma^\varphi_{x}$ and $\Gamma^\psi_{y}$
are isomorphic as abstract groups. \label{condition 3}
\end{enumerate}
\end{thm}
\begin{rmk}\label{rmk: abstract isomorphism is equivalent to isomorphism between dimension ranges}
Thanks to Theorem \ref{Krthm}, for $x,y\in X$ the condition ``$\Gamma^\varphi_{x}$ and $\Gamma^\psi_{y}$
are isomorphic as abstract groups'' is equivalent to ``$\Gamma^\varphi_{x}$ and $\Gamma^\psi_{y}$ have isomorphic dimension ranges''.
\end{rmk}
\begin{notation}
For simplicity, in the following proof we write $\Lambda_x$ instead of $\Gamma^\psi_x$, and $\Gamma_x$ instead of $\Gamma^\varphi_x$ (for $x\in X$).
\end{notation}
\begin{proof}
First of all, note that conditions \ref{condition 2} and \ref{condition 3} are equivalent by Lemma
\ref{lemma GPS SOE-changing base point does not change dimension range} and Remark \ref{rmk: abstract isomorphism is equivalent to isomorphism between dimension ranges}.
\medbreak
Let $x$ be in $X$ and suppose that $\Gamma_x$ and $\Lambda_x$ have isomorphic dimension ranges. By continuity of $\varphi$, the top point of a sequence of partitions $(\Xi_N)_{N\in\mathbb{N}}$
associated to $\Gamma_x$ is $\inv{\varphi}(x)=y_1$. We also denote $y_2=\inv{\psi}(x)$. By Krieger's theorem (Theorem \ref{Krthm}) and Remark \ref{rmk : more than original Krieger is true}, there exists a homeomorphism $g$ such that $\Lambda_x=g\Gamma_x \inv{g}$, $g(x)=x$ and $g(y_1)=y_2$. This control on these two pairs of points ensures that $g$ is indeed an orbit equivalence.
Let $z\neq y_1$. Then for a large enough $N$, $z$ does not belong to the top of $\Xi_N$, hence there exist $h\in \Gamma_x$
and a neighbourhood $W$ of $z$ such that $\restr{h}{W}=\restr{\varphi}{W}$. On the other hand $gh\inv{g}\in\Lambda_x\subset \llbracket\psi\rrbracket$,
so there exist a neighbourhood $V$ of $g(z)$ and an integer $n_0$ such that $\restr{gh\inv{g}}{V}=\restr{\psi^{n_0}}{V}$.
Finally we get:
\[\forall z'\in \inv{g}(V)\cap W \ g(\varphi(z'))=g(h(z'))=gh\inv{g}(g(z'))=\psi^{n_0}(g(z')),\]
and we deduce that the cocycle $n$ is continuous at the point $z$, hence at every point except possibly $y_1$. The situation being symmetric,
we also get that the other cocycle is continuous everywhere, except maybe at one point, and by definition $g$ is then a strong orbit equivalence between $\varphi$ and $\psi$.
\medbreak
Conversely, suppose that $g$ realizes a strong orbit equivalence between $\varphi$ and $\psi$. By replacing $\psi$ by $\inv{g}\psi g$, we can assume that
$g=id$. Then
$$\forall x \in X \ \varphi (x)=\psi^{n(x)}(x) \text{ and } \psi (x)=\varphi^{m(x)}(x)$$ with $n$ and $m$ being continuous except maybe at $y$ and
$y'$ respectively. We set $x=\varphi (y)$ and $x'=\psi (y')$, and construct $\Gamma_{x}$ associated to a sequence of K-R partitions $(\Xi_N)_{N\in\mathbb{N}}$
with bases $(B^N)_{N\in\mathbb{N}}$ and $\Lambda_{x'}$ associated to a sequence of K-R partitions $(\Xi'_N)_{N\in\mathbb{N}}$ with bases $(D^N)_{N\in\mathbb{N}}$. We want to show that
$\accfonction{\overline{id}}{\bigslant{CO(X)}{\Gamma_{x}}}{\bigslant{CO(X)}{\Lambda_{x'}}}{Orb_{\Gamma_{x}}(A)}{Orb_{\Lambda_{x'}}(A)}$
is a bijection.
By symmetry, if we prove that it is well defined, the assertion follows. We use the following lemma, which we will prove right after the end of the current proof:
\begin{lema}\label{lemma GPS SOE-be in Gamma orbit can be seen piecewise}
Let $f$ be a homeomorphism and $A=\bigsqcup_{i=1}^r A_i$ a clopen partition of a clopen set $A$. Assume that for all $i$, $f(A_i)\in Orb_{\Lambda_{x'}}(A_i)$. Then $f(A)\in Orb_{\Lambda_{x'}}(A)$.
\end{lema}
Let $A$ be a clopen set, and $\gamma\in\Gamma_{N,{x}}$, where $N$ is sufficiently large that $A$ is in $\langle\Xi_N\rangle$.
We have to show that $\gamma(A)\in Orb_{\Lambda_{x'}}(A)$.
Thanks to Lemma \ref{lemma GPS SOE-be in Gamma orbit can be seen piecewise}, it is sufficient to show that two atoms of the same tower of $\Xi_N$ are in the same $\Lambda_{x'}$-orbit, or, in other words, that $\varphi(A)\in Orb_{\Lambda_{x'}}(A)$ for every atom $A$ of $\Xi_N$ which is not on the top of the partition.
For such an atom $A$, one has $y\notin A$, and so the cocycle $n$ is continuous on $A$, which is compact, hence $A$ can be partitioned into pieces on which $n$ is constant. Thus, using Lemma \ref{lemma GPS SOE-be in Gamma orbit can be seen piecewise} once more, we can assume that
$n$ is constant on $A$, equal to a certain $n_0$.
We set $M=\abs {n_0}$. Now, for all $x$ in $X$ there exists a clopen neighbourhood $V_x$ of $x$ such that $\psi^{-M}(V_x),\ldots,\psi^{M+1}(V_x)$ are pairwise disjoint (see Remark \ref{rmk: finite images of a small neighbourhood are disjoint}).
Then by compactness of $A$ we get a partition $A=\bigsqcup_{i=1}^{r} A_i$, with each $A_i$ such that $\psi^{-M}(A_i),\ldots,\psi^{M+1}(A_i)$ are pairwise disjoint. Let $i\in\llbracket1,r\rrbracket$ be fixed, and choose $x_i\notin \bigcup_{j=-M}^M\psi^j(A_i)$ (for example $x_i\in \psi^{M+1}(A_i)$), so that none of the $\psi^j(x_i)$ is in $A_i$ for
$j\in\llbracket-M,M\rrbracket$. That means that no atom contained in $A_i$ belongs to $\psi^j(D^i_k)$
(where the $D^i_k$'s are the bases of a sequence of K-R partitions $(\Xi'^i_k)_{k\in\mathbb{N}}$ associated to $\Lambda_{x_i}$) for $k$ large enough and $j\in\llbracket-M,M\rrbracket$,
i.e. every atom contained in $A_i$ is at distance at least $M$ from the top and the bottom of its tower, which means that
$\varphi(A_i)=\psi^{n_0}(A_i)\in Orb_{\Lambda_{x_i}}(A_i)=Orb_{\Lambda_{x'}}(A_i)$ (this last equality being due to Lemma
\ref{lemma GPS SOE-changing base point does not change dimension range}).
Using Lemma \ref{lemma GPS SOE-be in Gamma orbit can be seen piecewise} one last time, we conclude that $\varphi(A)\in Orb_{\Lambda_{x'}}(A)$, which finishes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma GPS SOE-be in Gamma orbit can be seen piecewise}]
Let $N$ be large enough that for all $i$, $A_i$ and $f(A_i)$ are in $\langle\Xi'_N\rangle$ and there exists
$h_i\in \Lambda_{N,{x'}}$ such that $f(A_i)=h_i(A_i)$. For all $k$, and for all $j\in \llbracket0,H'^N_k-1\rrbracket$, we define
$h\in\Lambda_{N,{x'}}$ on an atom $D^N_{k,j}$ of $\Xi'_N$ as follows (see figure \ref{reduc_orbites}):
\begin{itemize}
\item if $D^N_{k,j}\nsubseteq A\cup f(A)$, we set $h=id$ on $D^N_{k,j}$;
\item if $D^N_{k,j}\subset A_i$ for some $i$, we set $h=h_i$ on $D^N_{k,j}$ ;
\item if $D^N_{k,j}\subset f(A)\setminus A$, we find the least positive integer $m$ and an integer $k_m$ such that
$$f^{-m}(D^N_{k,j})\subset A\setminus f(A) \text{ and } \psi^{k_m}(D^N_{k,j})=f^{-m}(D^N_{k,j})$$ and we set
$h=\psi^{k_m}$ on $D^N_{k,j}$.
\end{itemize}
\begin{figure}
\caption{Construction of $h$, example with $m=2$}
\label{reduc_orbites}
\end{figure}
\noindent It is clear that $h\in\Lambda_{N,{x'}}$ and $h(A)=\bigsqcup_{i=1}^r h_i(A_i)=\bigsqcup_{i=1}^r f(A_i)=f(A)$.
\end{proof}
\section{Borel complexity of isomorphism of countable, locally finite simple groups}\label{section: Borel complexity}
In this section we study algebraically in some detail the countable, locally finite groups $\Gamma$ that we have used above. Since for $x,y\in X$, $\Gamma_x$ and $\Gamma_y$
are (spatially) isomorphic by a combination of Lemma \ref{lemma GPS SOE-changing base point does not change dimension range} and Theorem \ref{Krthm},
we denote them by $\Gamma$ and do not specify which semi-orbit they preserve. Observing that those groups are not always simple,
we study their commutator subgroups and show that they are simple; further, we point out that if the commutator subgroups of two of these groups
are isomorphic, then the groups themselves are isomorphic.
This enables us to show that the strong orbit equivalence relation is, in a way we are going to recall precisely (Borel reducibility theory,
cf section \ref{subsection: Borel reducibility}), ``simpler'' than the isomorphism relation between countable, locally finite simple groups.
\medbreak
\subsection{Study of $D(\Gamma)$}\label{subsection: study of D(Gamma)}
We have claimed that $\Gamma^\varphi$ is not always simple, so let us give an explicit example. Let $\sigma_3$ denote the 3-odometer on the
Cantor space $X=\{0,1,2\}^\mathbb{N}$, and $N_\alpha$ be the clopen subset consisting of all sequences starting with the finite sequence $\alpha$. Then we have a sequence
of K-R partitions, the $n$-th of which consists of a single tower with base $N_{0^n}$. The cutting and stacking process
is then very simple: the $(n+1)$-th tower is a stacking of three copies of the $n$-th tower. The element $\gamma$ of $\Gamma^{\sigma_3}$
defined by $$\restr{\gamma}{{N_0}}=\restr{{\sigma_3}}{{N_0}}, \qquad \restr{\gamma}{{N_1}}=\restr{\inv{{\sigma_3}}}{{N_1}} \qquad\text{and}\qquad \restr{\gamma}{{N_2}}=\restr{id}{{N_2}}$$
is clearly not in the commutator subgroup, since on $\Xi_n$ it decomposes as a product of $3^{n-1}$ transpositions, which is an odd permutation and hence belongs to
$\mathfrak{S}_{3^{n}}\setminus\mathcal{A}_{3^{n}}$.
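To make the parity count explicit in a small case (a verification we add for the reader's convenience): on $\Xi_2$, the single tower of height $9$ whose floors are the cylinders of length $2$, the atoms contained in $N_0$ are $N_{00},N_{01},N_{02}$ and those contained in $N_1$ are $N_{10},N_{11},N_{12}$; the element $\gamma$ exchanges $N_{0j}$ and $N_{1j}$ for $j=0,1,2$ and fixes the three atoms contained in $N_2$ (on which it is the identity), so it acts on the $9$ atoms as a product of $3=3^{2-1}$ transpositions, an odd permutation.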
We however have the following:
\begin{prop}
For every minimal homeomorphism $\varphi$, $D(\Gamma^\varphi)$ is simple.
\end{prop}
To see this, it suffices to read carefully section 3 in {\cite{BM}}, where a similar result is established for commutator subgroups of topological full groups,
and to observe that the argument given there works exactly the same way, replacing
$\llbracket\varphi\rrbracket$ by $\Gamma^\varphi$.
\begin{figure}
\caption{Construction of $\gamma'_i$ in Remark \ref{D(gamma) dense}}
\label{gamma'i}
\end{figure}
\begin{rmk}\label{D(gamma) dense}
The only additional remark we need to make, in order to follow the argument of \cite{BM}, is that $D(\Gamma)$ is dense in $\Gamma$. Indeed, let $\gamma\in\Gamma$, and $A_1,\ldots,A_r$ be disjoint clopen sets. There are integers $n$ and $m$ such that $\gamma\in\Gamma_m$, each $A_i$ belongs to $\langle\Xi_n\rangle$, and the towers of $\Xi_m$ are high enough that each of them must contain at least two copies of some tower of $\Xi_n$. Let us assume that the tower $T^m_i$ contains two copies of
$T^n_{j_i}$.
Let $D^m_{i,k}$ and $D^m_{i,k'}$ be distinct atoms of $T^m_i$ such that $D^m_{i,k}\cup D^m_{i,k'}\subset D^n\subset T^n_{j_i}$ (where $D^n$ is an
atom of $T^n_{j_i}$, no matter which one).
Let $N=\min\{l>0\colon \gamma^l(D^n)=D^n\}$.
We define $\gamma'_i\in \Gamma_m$ by $\gamma'_i(D)=
\begin{cases}
D^m_{i,k'} &\text{if } D=\gamma^{N-1}(D^m_{i,k}) \\
D^m_{i,k} &\text{if } D=\gamma^{N-1}(D^m_{i,k'})\\
\gamma(D) &\text{elsewhere}
\end{cases}$.
\noindent Figure \ref{gamma'i} shows how it works with $N=3$.
The element $\gamma'_i$ is constructed in such a way that, viewing $\restr{\gamma}{{T^m_i}}$ and $\restr{{\gamma'_i}}{{T^m_i}}$ as permutations
of atoms of the tower $T^m_i$, exactly one of them belongs to the alternating group. Denote by $\overset{\sim}{\gamma_i}$ the one that does.
Then $\overset{\sim}{\gamma}=\bigsqcup_i \overset{\sim}{\gamma_i}\in D(\Gamma)$ and acts the same way as $\gamma$ on $\langle\Xi_n\rangle$, in particular on every $A_i$.
\end{rmk}
\smallbreak
We now need the following:
\begin{prop}\label{prop: every isom is spatial}
Let $\varphi$ and $\psi$ be two minimal homeomorphisms. Then any group isomorphism $\fct{\alpha}{D(\Gamma^\varphi)}{D(\Gamma^\psi)}$ is spatial.
\end{prop}
The proof is very much inspired by \cite{Med}, though less technical. It uses as its principal ingredient the reconstruction Theorem 384D of
\cite{Fr}.
In order to use it we need the following definitions:
\begin{deff}
\begin{itemize}
\item An open set $A$ is called \textit{regular} if $A=int(\overline{A})$. The set of all regular open sets forms a Boolean algebra we denote
by $RO(X)$. Obviously clopen sets are regular, and so $CO(X)\subset RO(X)$.
\item A group $G\leq\homeo(X)$ is said to \textit{have many involutions} if for any regular open set $A$, there exists an involution
$g\in G$ whose support is included in $A$.
\end{itemize}
\end{deff}
\begin{rmk}\label{manyinvol}
The "support of $g$" is here defined by $int(\{x\in X\colon g(x)\inftyeq x\})$, but since in our case $g$ always belongs to the topological
full group it makes no difference with the usual definition.\inftyewline
Note that for every minimal homeomorphism $\varphi$, $D(\Gamma^\varphi)$ has many involutions. Indeed, if $A$ is a regular open set, there exists a nonempty
clopen set $C\subset A$. Think of $\Gamma^\varphi$ as being constructed from a sequence $\Xi$ of K-R partitions whose base point is $x_0\in X$.
By minimality of $\varphi$, for $n$ large enough there are at least 4 atoms in the tower $T^n_{i_0}$ of $\Xi_n$ containing $x_0$ which are
contained in $C$, say $D^n_{i_0,k_0}, D^n_{i_0,k_1}, D^n_{i_0,k_2}, D^n_{i_0,k_3}$, with $k_j<k_{j+1}$. Then we can easily define an
involution $g\in D(\Gamma_n)$ whose support is contained in $C$, and thus in $A$ (for example the double transposition
$(D^n_{i_0,k_0} \ D^n_{i_0,k_1})(D^n_{i_0,k_2} \ D^n_{i_0,k_3})$).
\end{rmk}
\begin{proof}
Let $\fct{\alpha}{D(\Gamma^\varphi)}{D(\Gamma^\psi)}$ be an isomorphism.
Thanks to Remark \ref{manyinvol}, we now know that $D(\Gamma^\varphi)$ and $D(\Gamma^\psi)$ both have many involutions.
Theorem 384D applies and gives us an automorphism of Boolean algebras
$$\fct{\Lambda}{RO(X)}{RO(X)}$$ such that $\alpha(g)(V)=\Lambda g\inv{\Lambda}(V)$
for all $g\in D(\Gamma^\varphi)$ and $V\in RO(X)$. If we can show that $\Lambda(CO(X))=CO(X)$, then $\Lambda$ is induced by a homeomorphism of $X$
and the proof is over. We proceed to explain why this is true.
We note that $CO(X)$ is generated by the supports of the involutions in $D(\Gamma)$, where $\Gamma$ stands for either
$\Gamma^\varphi$ or $\Gamma^\psi$ (or more generally for any $\Gamma$ associated to a minimal homeomorphism).
Indeed, take a clopen set $C$, and look at $\Gamma$ as constructed out of a sequence of K-R partitions $\Xi$. For $n$ large enough, $C$ belongs to $\langle\Xi_n\rangle$ and every tower of $\Xi_n$ has height greater than 7. Now, given an atom $A$ of $\Xi_n$, it is easy to construct
two permutations $\gamma_1,\gamma_2\in D(\Gamma)$ such that $\supp(\gamma_1)\cap \supp(\gamma_2)=A$: indeed, it suffices to take two double transpositions,
each defined on $A$ and on three other atoms, the two triples of extra atoms being disjoint from each other, which is possible because the height of the tower is greater than 7.
The proof is now over by applying the following lemma, which follows from the proof of Theorem 384D of \cite{Fr}:
\begin{lema}
If $g\in D(\Gamma^\varphi)$ is an involution, then $\supp(\alpha(g))=\Lambda(\supp(g))$.
\end{lema}
Indeed, this shows that $\Lambda(CO(X))\subset CO(X)$, and since the situation is symmetric we obtain as desired the equality $\Lambda(CO(X))=CO(X)$.
\end{proof}
Let us sum up what we know: given two minimal homeomorphisms $\varphi$ and $\psi$, we have
\[
\begin{aligned}
&D(\Gamma^\varphi), \ D(\Gamma^\psi) \text{ are isomorphic as abstract groups }\\
&\iff D(\Gamma^\varphi), \ D(\Gamma^\psi) \text{ are spatially isomorphic }\\
&\iff D(\Gamma^\varphi), \ D(\Gamma^\psi) \text{ have isomorphic dimension ranges (Theorem \ref{Krthm})}\\
&\iff \Gamma^\varphi, \ \Gamma^\psi \text{ have isomorphic dimension ranges (Remark \ref{D(gamma) dense})}\\
&\iff \varphi, \ \psi \text{ are strong orbit equivalent (Theorem \ref{gps1})}
\end{aligned}
\]
\end{aligned}
\]
\subsection{Isomorphism on the space of countable, locally finite simple groups is a universal relation}\label{subsection: Borel reducibility}
Let us start by recalling the definition of Borel reducibility from the introduction:
\begin{deff}
Let $E,F$ be equivalence relations on standard Borel spaces $X,Y$ respectively. $E$ is said to be \emph{Borel reducible to $F$}, and we write
$E\leq F$ if there exists a Borel map $\fct{f}{X}{Y}$ such that $$\forall x,x'\in X \ xEx'\iff f(x)Ff(x').$$ We call such a map $f$ a
\emph{Borel reduction from $E$ to $F$}. If $E\leq F$ and $F\leq E$, we say that \emph{$E$ and $F$ are Borel bireducible}.
\end{deff}
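For example (a trivial illustration added only to fix ideas), the map $x\mapsto(x,0)$ is a Borel reduction of the equality relation on $\mathbb{R}$ to the equality relation on $\mathbb{R}^2$; more generally, any injective Borel map is a Borel reduction of the equality relation on its domain to the equality relation on its codomain.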
We also recall that there exists a universal equivalence relation arising from a Borel action of $S_\infty$ on a standard Borel space, which we denote $E_\infty$. A theorem of Melleray gives us a particular realization of $E_\infty$:
\begin{thm}[{\cite{Mel20}}]
The equivalence relation of strong orbit equivalence of minimal homeomorphisms on the Cantor space is Borel bireducible with $E_\infty$.
\end{thm}
\begin{comment}
The intuition associated to the theory is that, if $E$ Borel reduces to $F$, then $F$ is considered more ``complex'' than $E$ because being able to solve
the classification problem associated to it implies being able to solve the one associated to $E$. The Borel hypothesis guarantees that the
correspondence is somehow computable. The following theorem shows that there always exist
equivalence relations as complex as possible for Borel Polish group actions.
\begin{thm}[{\cite{BecKec}, Corollary 3.5.2}]
Let $G$ be a Polish group. There exists an equivalence relation arising from a Borel $G$-action on a standard Borel space $E_G$ such that
any other such relation Borel reduces to it.
\end{thm}
\end{comment}
We have obtained in the previous subsection (see the series of equivalences at the end of subsection \ref{subsection: study of D(Gamma)}) that strong orbit equivalence between two minimal homeomorphisms $\varphi$ and $\partialsi$ is characterized by the isomorphism of $D(\Gamma^\varphi)$ and $D(\Gamma^\partialsi)$, which are countable, simple and locally finite.
We need to exhibit a Borel way to associate to a minimal homeomorphism $\varphi$ its countable, locally finite simple group $D(\Gamma_{x_0}^\varphi)$.
There are plenty of ways to do it, and the one we have chosen requires a number of steps, but it is quite easy to check that each of those steps is Borel.
Let $\homeomin(X)$ be the Borel subset of $\homeo(X)$ that consists of minimal homeomorphisms of $X$.
First of all we define
$$\fct{EnumTfg}{\homeomin(X)}{\homeo(X)^\omega}$$
such that $EnumTfg(\varphi)$ is an enumeration of $\llbracket\varphi\rrbracket$.
Let $(n^i_1,\ldots,n^i_{k_i},A^i_1,\ldots, A^i_{k_i})_{i\in\mathbb{N}}$ be an enumeration of $\bigsqcup_{k\in\mathbb{N}} \mathbb{Z}^k\times CO(X)^k$ (the exponents must be allowed to range over $\mathbb{Z}$, since cocycles of elements of $\llbracket\varphi\rrbracket$ take values in $\mathbb{Z}$). We define $EnumTfg$ as follows:
for all $i\in\mathbb{N}$, if $\bigsqcup_{j=1}^{k_i} A_j^i=X$ and $\bigsqcup_{j=1}^{k_i} \restr{\varphi^{n^i_j}}{A^i_j}\in\llbracket\varphi\rrbracket$, then
we set $EnumTfg(\varphi)(i)=\bigsqcup_{j=1}^{k_i} \restr{\varphi^{n^i_j}}{A^i_j}$, and otherwise we set $EnumTfg(\varphi)(i)=id$. This defines an enumeration of $\llbracket\varphi\rrbracket$
with repetitions. Note that one can always make a sequence injective in a Borel way by removing duplicates.
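For instance (with the notation just introduced, and only as an added sanity check), if the index $i$ encodes the single pair $(1,X)$, then $EnumTfg(\varphi)(i)=\varphi$; and the involution of $\llbracket\varphi\rrbracket$ exchanging a clopen set $A$ and $\varphi(A)$ described in section \ref{section: Preliminaries} is obtained from any index encoding the exponents $(1,-1,0)$ together with the partition $(A,\varphi(A),X\setminus(A\cup\varphi(A)))$. For an index whose clopen sets do not partition $X$, or whose piecewise map is not in $\llbracket\varphi\rrbracket$, the output is $id$.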
Now that we have encoded $\llbracket\varphi\rrbracket$, we define
$$\fct{Enum\Gamma_{x_0}}{\homeomin(X)}{\homeo(X)^\omega}$$ such that
$Enum\Gamma_{x_0}(\varphi)$ enumerates $\Gamma^\varphi_{x_0}$.
To do this, we introduce the function
$$\fct{IsIn\Gamma}{\homeo(X)\times\homeomin(X)}{\homeo(X)}$$ that maps $(\alpha, \varphi)$ to $\alpha$ if $\alpha$ belongs
to $\Gamma^\varphi_{x_0}$, and to $id$ otherwise. The condition ``$\alpha$ belongs to $\Gamma^\varphi_{x_0}$'' can be written
$$\begin{aligned}
&\alpha\in\llbracket\varphi\rrbracket \\
&\text{ and } \forall n\in\mathbb{N}\ \exists m\in\mathbb{N}\ \ \alpha(\varphi^n(x_0))=\varphi^m(x_0) \\
&\text{ and } \forall n\in\mathbb{N}\ \exists m\in\mathbb{N}\ \ \alpha(\varphi^m(x_0))=\varphi^n(x_0)
\end{aligned}$$
whence $IsIn\Gamma$ is a Borel function.
Then it is clear that $Enum\Gamma_{x_0}$ defined by $$Enum\Gamma_{x_0}(\varphi)_i=IsIn\Gamma((EnumTfg(\varphi)_i,\varphi))$$
is a Borel function.
It remains to encode $D(\Gamma)$. Since one can define Borel functions $Commu$ and $GenBy$ which enumerate respectively all commutators of a given sequence of homeomorphisms
and the group generated by it, we can define
$$EnumD\Gamma=GenBy\circ Commu\circ Enum\Gamma_{x_0}.$$
The map $EnumD\Gamma$ is a Borel reduction of the strong orbit equivalence relation to the isomorphism relation on countable, locally finite simple groups, and we have proved the following:
\begin{thm}
The relation of isomorphism of countable, locally finite, simple groups is a universal relation arising from a Borel action of $S_\infty$.
\end{thm}
So in particular, in the sense of Borel reducibility, it is as complicated to classify countable, simple, locally finite groups up to isomorphism as it is to classify arbitrary countable groups: indeed, the isomorphism relation on countable groups is itself induced by a Borel action of $S_\infty$ (the logic action on the space of group structures on $\mathbb{N}$), hence Borel reduces to $E_\infty$.
\end{document} |
\begin{document}
\title[A homotopy classification of two-component spatial graphs]{A homotopy classification of two-component spatial graphs up to neighborhood equivalence}
\author{Atsuhiko Mizusawa}
\address{Department of Mathematics, Faculty of Fundamental Science and Engineering, Waseda University, 3-4-1 Okubo, Shinjuku-ku, Tokyo 169-8555, Japan}
\email{a\symbol{"5F}[email protected]}
\author{Ryo Nikkuni}
\address{Department of Mathematics, School of Arts and Sciences, Tokyo Woman's Christian University, 2-6-1 Zempukuji, Suginami-ku, Tokyo 167-8585, Japan}
\email{[email protected]}
\thanks{The second author was partially supported by Grant-in-Aid for Scientific Research (C) (No. 24540094), Japan Society for the Promotion of Science.}
\subjclass{Primary 57M15; Secondary 57M25}
\date{}
\dedicatory{This article is dedicated to Professors Taizo Kanenobu, Yasutaka Nakanishi and Makoto Sakuma on their 60th birthdays.}
\keywords{Spatial graph, Linking number, Delta move, Handlebody-link}
\begin{abstract}
A neighborhood homotopy is an equivalence relation on spatial graphs which is generated by crossing changes on the same component and neighborhood equivalence. We give a complete classification of all $2$-component spatial graphs up to neighborhood homotopy by the elementary divisors of a linking matrix with respect to the first homology group of each of the connected components. This also leads to a kind of homotopy classification of $2$-component handlebody-links.
\end{abstract}
\maketitle
\section{Introduction}
Throughout this paper we work in the piecewise linear category. An embedding of a graph into the $3$-sphere ${\mathbb S}^{3}$ is called a {\it spatial embedding} of the graph and the image is called a {\it spatial graph}. We say that two spatial graphs $G$ and $G'$ are {\it ambient isotopic} if there exists an orientation-preserving self-homeomorphism $\Phi$ on ${\mathbb S}^{3}$ such that $\Phi(G)=G'$. A graph is said to be {\it planar} if there exists an embedding of the graph into the $2$-sphere, and a spatial embedding of a planar graph is said to be {\it trivial} if it is ambient isotopic to an embedding of the graph into a $2$-sphere in ${\mathbb S}^{3}$. Such an embedding is unique up to ambient isotopy \cite{M69}. On the other hand, let us denote the regular neighborhood of a spatial graph $G$ in ${\mathbb S}^{3}$ by $N(G)$. Then, two spatial graphs $G$ and $G'$ are said to be {\it neighborhood equivalent} if there exists an orientation-preserving self-homeomorphism $\Phi$ on ${\mathbb S}^{3}$ such that $\Phi(N(G))=N(G')$ \cite{S70}. Note that two ambient isotopic spatial graphs are homeomorphic as abstract graphs, but two neighborhood equivalent spatial graphs are not always homeomorphic. For a spatial graph $G$ and an edge $e$ of $G$ which is not a loop, we call the spatial graph obtained from $G-{\rm int}e$ by identifying the end vertices of $e$ the {\it edge contraction} of $G$ along $e$. A {\it vertex splitting} is the reverse of an edge contraction. Then it is known that two spatial graphs are neighborhood equivalent if they are transformed into each other by edge contractions, vertex splittings and ambient isotopies \cite{I08}.
Two oriented links are said to be {\it link homotopic} if they are transformed into each other by crossing changes on the same component and ambient isotopies. It is well known that two oriented $2$-component links are link homotopic if and only if they have the same linking number \cite{M54}. Our purpose in this article is to generalize this fact to $2$-component spatial graphs from a viewpoint of neighborhood equivalence. We introduce the notion of {\it neighborhood homotopy} on spatial graphs as an equivalence relation which is generated by crossing changes on the same component and neighborhood equivalence; that is, two spatial graphs $G$ and $G'$ are neighborhood homotopic if they are transformed into each other by crossing changes between edges which belong to the same component, edge contractions, vertex splittings and ambient isotopies. Note that in the case of oriented links, neighborhood homotopy coincides with link homotopy. Moreover, we also introduce another equivalence relation on spatial graphs as follows. A {\it Delta move} is a local move on a spatial graph as illustrated in Fig. \ref{deltamove} \cite{Matveev87}, \cite{MN89}. We say that two spatial graphs are {\it Delta neighborhood equivalent} if they are transformed into each other by Delta moves, edge contractions, vertex splittings and ambient isotopies.
\begin{figure}\label{deltamove}
\end{figure}
In \cite{Mi12}, the first author introduced a sequence of nonnegative integers for $2$-component spatial graphs which is invariant under neighborhood equivalence, as follows. Let $G=G_{1}\cup G_{2}$ be a $2$-component spatial graph. Let ${\mathcal Z}=\left\{z_{1},z_{2},\ldots,z_{m}\right\}$ be a basis of $H_{1}(G_{1};{\mathbb Z})$ and ${\mathcal W}=\left\{w_{1},w_{2},\ldots,w_{n}\right\}$ a basis of $H_{1}(G_{2};{\mathbb Z})$. Let $M_{G}\left({\mathcal Z},{\mathcal W}\right)$ be the $(m,n)$-matrix whose $(i,j)$-entry is the {\it linking number} ${\rm lk}(z_{i},w_{j})$ in ${\mathbb S}^{3}$. Then the sequence of elementary divisors $d_{1},d_{2},\ldots ,d_{l}$ $(d_{i}\in {\mathbb Z}_{> 0},\ d_{i}|d_{i+1}\ (i=1,2,\ldots, l-1))$ of $M_{G}\left({\mathcal Z},{\mathcal W}\right)$ is an invariant under neighborhood equivalence. We define ${\rm Lk}(G_{1},G_{2})$ to be the sequence $\left\{d_{1},d_{2},\ldots,d_{l}\right\}$ if $l\ge 1$, and to be $0$ otherwise.
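For example (an illustration we include for the reader's convenience), if $G=G_{1}\cup G_{2}$ is a $2$-component link, so that $H_{1}(G_{i};{\mathbb Z})\cong {\mathbb Z}$ and $m=n=1$, then $M_{G}\left({\mathcal Z},{\mathcal W}\right)$ is the $1\times 1$ matrix $({\rm lk}(z_{1},w_{1}))$, and ${\rm Lk}(G_{1},G_{2})=\left\{|{\rm lk}(z_{1},w_{1})|\right\}$ if this linking number is nonzero, and ${\rm Lk}(G_{1},G_{2})=0$ otherwise. If instead $M_{G}\left({\mathcal Z},{\mathcal W}\right)$ is, say, the diagonal matrix with diagonal entries $2$ and $4$ with respect to some bases, then ${\rm Lk}(G_{1},G_{2})=\left\{2,4\right\}$.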
\begin{Theorem}\label{main}
Let $G=G_{1}\cup G_{2}$ and $G'=G'_{1}\cup G'_{2}$ be two $2$-component spatial graphs satisfying $H_{1}(G_{i};{\mathbb Z})\cong H_{1}(G'_{i};{\mathbb Z})\ (i=1,2)$. Then the following are equivalent.
\begin{enumerate}
\item $G$ and $G'$ are neighborhood homotopic.
\item $G$ and $G'$ are Delta neighborhood equivalent.
\item ${\rm Lk}(G_{1},G_{2})={\rm Lk}(G'_{1},G'_{2})$.
\end{enumerate}
\end{Theorem}
A {\it handlebody-link} in ${\mathbb S}^{3}$ is the image of an embedding of mutually disjoint handlebodies into ${\mathbb S}^{3}$. Two handlebody-links $L$ and $L'$ are said to be {\it equivalent} if there exists an orientation-preserving self-homeomorphism $\Phi$ on ${\mathbb S}^{3}$ such that $\Phi(L)=L'$. Note that two handlebody-links are equivalent if and only if their spines are neighborhood equivalent as spatial graphs. We say that two handlebody-links are {\it homotopic} if their spines are neighborhood homotopic as spatial graphs. For a $2$-component handlebody-link $L=V_{1}\cup V_{2}$, we can define ${\rm Lk}(V_{1},V_{2})$ in the same way as ${\rm Lk}(G_{1},G_{2})$ for a $2$-component spatial graph $G=G_{1}\cup G_{2}$ \cite{Mi12}. Then by Theorem \ref{main}, we immediately have the following.
\begin{Corollary}\label{main_cor}
Let $L=V_{1}\cup V_{2}$ and $L'=V'_{1}\cup V'_{2}$ be two $2$-component handlebody-links satisfying $H_{1}(V_{i};{\mathbb Z})\cong H_{1}(V'_{i};{\mathbb Z})\ (i=1,2)$. Then $L$ and $L'$ are homotopic if and only if ${\rm Lk}(V_{1},V_{2})={\rm Lk}(V'_{1},V'_{2})$.
\end{Corollary}
\begin{Remark}
In \cite{F08}, Fleming introduced ``Milnor invariants'' for spatial graphs which are invariant under crossing changes on the same component and ambient isotopies. Since his invariants are derived from the fundamental group of the spatial graph exterior, they are also invariant under neighborhood homotopy. See also \cite{M54} for Milnor's link homotopy invariants.
\end{Remark}
In the next section, we show some lemmas which are needed later. We prove Theorem \ref{main} in section $3$.
\section{Local moves on spatial graphs}
Let $B^{3}$ be the oriented unit $3$-ball. Let $T$ be a tangle in $B^{3}$ and $A$ a disjoint union of arcs in $\partial B^{3}$ with $\partial T = \partial A$ as illustrated in Fig. \ref{Hopf_chord}. Let $G$ be a spatial graph. Let $\psi_{i}:B^{3}\to {\mathbb S}^{3}$ be an orientation-preserving embedding for $i=1,2,\ldots,k$. Note that each $\psi_{i}(T\cup A)$ is a {\it Hopf link} in ${\mathbb S}^{3}$. Let $b_{i,p}$ be a $2$-disk which is embedded in ${\mathbb S}^{3}$ for $i=1,2,\ldots,k$ and $p=1,2$. Suppose that $\psi_{i}(B^{3})\cap G=\emptyset$ for each $i$, $\psi_{i}(B^{3})\cap \psi_{j}(B^{3})=\emptyset$ for $i\neq j$ and $b_{i,p}\cap b_{j,q}=\emptyset$ for $(i,p)\neq (j,q)$. Suppose that $b_{i,p}\cap G=\partial b_{i,p}\cap G$ is an arc away from the vertices of $G$ for each $i$ and $p$. Suppose that $b_{i,p}\cap \psi_{j}(B^{3})=\emptyset$ for $i\neq j$ and $b_{i,p}\cap \psi_{i}(B^{3})=\partial b_{i,p}\cap \psi_{i}(B^{3})$ is a component of $\psi_{i}(A)$ for each $i$ and $p$. Let $H$ be a spatial graph satisfying the following:
\begin{eqnarray}
&& G\setminus \bigcup_{i,p}b_{i,p} = H\setminus \bigcup_{i,p}b_{i,p}\cup\bigcup_{i}\psi_{i}(T),\label{g1}\\
&& H= G \cup {\bigcup_{i,p}\partial b_{i,p}}\cup {\bigcup_{i}\psi_{i}(T)}\setminus {\bigcup_{i,p}{\rm int}(G\cap b_{i,p})}\cup {\bigcup_{i}\psi_{i}({\rm int}A)}. \label{g2}
\end{eqnarray}
Then $H$ is called a {\it band sum of Hopf links} and $G$. The union $b_{i,1}\cup b_{i,2}\cup \psi_{i}(B^{3})$ is called a {\it Hopf chord}, and $b_{i,1}$ and $b_{i,2}$ are called the {\it associated bands} of the Hopf chord. An edge $e$ of $G$ is called an {\it associated edge} of the Hopf chord if $e$ intersects the associated bands.
\begin{figure}\label{Hopf_chord}
\end{figure}
Note that a crossing change is realized by a band sum of a single Hopf link, see Fig. \ref{Hopf_chord2}. Since any two spatial graphs which are homeomorphic as abstract graphs are transformed into each other by crossing changes and ambient isotopies, we have the following lemma.
\begin{Lemma}\label{forklore} {\rm (\cite{S69}, \cite{Yama90}, \cite{TY02})}
Let $G$ and $H$ be two spatial graphs which are homeomorphic as abstract graphs. Then $H$ is a band sum of Hopf links and $G$.
\end{Lemma}
\begin{figure}\label{Hopf_chord2}
\end{figure}
For Delta moves and band sums of Hopf links, the following is known.
\begin{Lemma}\label{delta_lemma}
Each of the local moves (1), (2) and (3) illustrated in Fig. \ref{Hopf_chord3} is realized by Delta moves and ambient isotopies.
\end{Lemma}
\begin{proof}
See \cite[Lemma 2.2]{TY02}.
\end{proof}
\begin{figure}\label{Hopf_chord3}
\end{figure}
Moreover, we also have the following.
\begin{Lemma}\label{delta_lemma2}
Each of the local moves (1), (2), $\ldots$, (6) illustrated in Fig. \ref{Hopf_cancel} is realized by an ambient isotopy.
\end{Lemma}
\begin{proof}
(1), (2), $\ldots$, (5) are clear. In the case of (6), see Fig. \ref{Hopf_chord4}.
\end{proof}
\begin{figure}\label{Hopf_cancel}
\end{figure}
\begin{figure}\label{Hopf_chord4}
\end{figure}
Now we show that neighborhood homotopy implies Delta neighborhood equivalence. Namely we have the following.
\begin{Lemma}\label{delta_nh1}
If two spatial graphs are neighborhood homotopic then they are Delta neighborhood equivalent.
\end{Lemma}
\begin{proof}
Let $G$ be a spatial graph. We show that a single crossing change on the same component of $G$ is realized by Delta moves, edge contractions, vertex splittings and ambient isotopies. Let $G'$ be a spatial graph which is obtained from $G$ by a single crossing change on the same component $G_{1}$ of $G$. Then by Lemma \ref{forklore}, $G'$ is a band sum of a Hopf link and $G$, where the associated edges of the Hopf chord belong to $G_{1}$. Let $T_{1}$ be a spanning tree of $G_{1}$. Then, by sliding the roots of the associated bands of the Hopf chord along $T_{1}$ (using the move in Fig. \ref{Hopf_cancel} (6) if necessary), we can regard $G'$ as a band sum of Hopf links and $G$ such that the associated edges of each of the Hopf chords do not belong to $T_{1}$. Let $G''$ be the spatial graph which is obtained from $G'$ by contracting all edges of $T_{1}$. Note that $G''$ has a spatial bouquet as a component, and the associated edges of each of the Hopf chords belong to the bouquet. Then by Lemma \ref{delta_lemma}, we deform $G''$ by Delta moves and ambient isotopies so that each of the Hopf chords is contained in a small $3$-ball as illustrated in Fig. \ref{Hopf_cancel0} (1), (2), (3) or (4). Then by the moves illustrated in Fig. \ref{Hopf_cancel} (1), (2), (4) and (5), all of the Hopf chords can be removed. We finally obtain $G$ from $G''$ by restoring $T_{1}$ by suitable vertex splittings.
\end{proof}
\begin{figure}\label{Hopf_cancel0}
\end{figure}
Moreover, the converse of Lemma \ref{delta_nh1} also holds in the case of $2$-component spatial graphs, as follows.
\begin{Lemma}\label{delta_nh2}
Two $2$-component spatial graphs are neighborhood homotopic if and only if they are Delta neighborhood equivalent.
\end{Lemma}
\begin{proof}
By Lemma \ref{delta_nh1}, it is sufficient to show that if two $2$-component spatial graphs are Delta neighborhood equivalent then they are neighborhood homotopic. Let us consider a single Delta move on a $2$-component spatial graph. Then at least two of the three strings involved in the Delta move belong to the same component. Hence the Delta move is realized by two crossing changes on the same component and an ambient isotopy, see Fig. \ref{delta_hm}, where the two strings which belong to the same component are drawn in bold lines.
\end{proof}
\begin{figure}\label{delta_hm}
\end{figure}
\begin{Remark}\label{mu}
For a positive integer $n$ with $n\ge 3$, there exist two $n$-component spatial graphs which are Delta neighborhood equivalent but not neighborhood homotopic. Indeed, the Borromean rings can be undone by a single Delta move \cite{MN89} but are not trivial up to link homotopy \cite{M54}.
\end{Remark}
\section{Proof of Theorem \ref{main}}
In this section we prove Theorem \ref{main}.
\begin{proof}[Proof of Theorem \ref{main}]
By Lemma \ref{delta_nh2}, it follows that (1) and (2) are equivalent. In the following we show that (2) and (3) are equivalent. First we show that (2) implies (3). It is well known that a Delta move on a $2$-component oriented link does not change the linking number \cite{MN89} (the proof is the same as the proof of the fact that a Reidemeister move III does not change the linking number). Let ${\mathcal Z}=\left\{z_{1},z_{2},\ldots,z_{m}\right\}$ be a basis of $H_{1}(G_{1};{\mathbb Z})$ and ${\mathcal W}=\left\{w_{1},w_{2},\ldots,w_{n}\right\}$ a basis of $H_{1}(G_{2};{\mathbb Z})$. Note that neither an edge contraction nor a vertex splitting on $G$ changes ${\rm lk}(z_{i},w_{j})$ through the isomorphism of the first homology. Moreover, since $z_{i}$ (resp. $w_{j}$) is represented by a homological sum of oriented knots in $G_{1}$ (resp. $G_{2}$), we see that ${\rm lk}(z_{i},w_{j})$ is represented by a sum of the linking numbers of some $2$-component constituent links in $G$. This implies that a Delta move on $G$ does not change ${\rm lk}(z_{i},w_{j})$. Thus it follows that there exist a basis ${\mathcal Z}'=\left\{z'_{1},z'_{2},\ldots,z'_{m}\right\}$ of $H_{1}(G'_{1};{\mathbb Z})$ and a basis ${\mathcal W}'=\left\{w'_{1},w'_{2},\ldots,w'_{n}\right\}$ of $H_{1}(G'_{2};{\mathbb Z})$ such that
\begin{eqnarray*}
M_{G}\left({\mathcal Z},{\mathcal W}\right)=\left({\rm lk}(z_{i},w_{j})\right)=\left({\rm lk}(z'_{i},w'_{j})\right)=M_{G'}\left({\mathcal Z}',{\mathcal W}'\right).
\end{eqnarray*}
This implies that ${\rm Lk}(G_{1},G_{2})={\rm Lk}(G'_{1},G'_{2})$.
Next we show that (3) implies (2). Assume that ${\rm Lk}(G_{1},G_{2})={\rm Lk}(G'_{1},G'_{2})$ is the sequence $\left\{d_{1},d_{2},\ldots ,d_{l}\right\}$ $(d_{i}\in {\mathbb Z}_{> 0},\ d_{i}|d_{i+1}\ (i=1,2,\ldots, l-1))$. In the following we deform $G=G_{1}\cup G_{2}$ into a certain canonical form by Delta moves, edge contractions, vertex splittings and ambient isotopies. Let $T_{i}$ be a spanning tree of $G_{i}$ $(i=1,2)$. Let $e_{1},e_{2},\ldots,e_{m}$ be all of the edges of $G_{1}$ which are not contained in $T_{1}$ and $f_{1},f_{2},\ldots,f_{n}$ all of the edges of $G_{2}$ which are not contained in $T_{2}$. Let ${\mathcal Z}=\left\{z_{1},z_{2},\ldots,z_{m}\right\}$ be the basis of $H_{1}(G_{1};{\mathbb Z})$ whose elements are represented by $e_{1},e_{2},\ldots,e_{m}$, and ${\mathcal W}=\left\{w_{1},w_{2},\ldots,w_{n}\right\}$ the basis of $H_{1}(G_{2};{\mathbb Z})$ whose elements are represented by $f_{1},f_{2},\ldots,f_{n}$. Let $B_{i}$ be the spatial bouquet obtained from $G_{i}$ by contracting all edges of $T_{i}$ ($i=1,2$). Note that all loops of $B_{1}$ (resp. $B_{2}$) can be regarded as $z_{1},z_{2},\ldots,z_{m}$ (resp. $w_{1},w_{2},\ldots,w_{n}$) through the isomorphism of the first homology. Let $U_{1}\cup U_{2}$ be the trivial $2$-component spatial graph, where $U_{1}$ is a spatial bouquet with $m$ loops and $U_{2}$ is a spatial bouquet with $n$ loops. Since $B_{i}$ is homeomorphic to $U_{i}$ ($i=1,2$), by Lemma \ref{forklore} it follows that $B_{1}\cup B_{2}$ is a band sum of Hopf links and $U_{1}\cup U_{2}$. Then, in the same way as in the proof of Lemma \ref{delta_nh1}, all of the Hopf chords whose associated edges belong to the same component can be removed by Delta moves and ambient isotopies. Moreover, by Lemma \ref{delta_lemma} and Lemma \ref{delta_lemma2}, we can deform the band sum of Hopf links so that all of the Hopf chords joining the loops $z_{i}$ and $w_{j}$ are parallel for each pair of $i$ and $j$, and either each of them has no twist of the associated bands as illustrated in Fig. \ref{Hopf_chord5} (1) or each of them has just a half twist of the associated bands as illustrated in Fig. \ref{Hopf_chord5} (2), depending on the sign of ${\rm lk}(z_{i},w_{j})$; therefore the number of such Hopf chords equals the absolute value of ${\rm lk}(z_{i},w_{j})$.
\begin{figure}\label{Hopf_chord5}
\end{figure}
Recall that the sequence of elementary divisors of $M_{G}\left({\mathcal Z},{\mathcal W}\right)$ is $d_{1},d_{2},\ldots,d_{l}$. This means that $M_{G}\left({\mathcal Z},{\mathcal W}\right)$ is transformed into the diagonal $(m,n)$-matrix whose $(i,i)$-entry is $d_{i}$ for $i=1,2,\ldots,l$ and whose other entries are $0$ by elementary transformations: (i) exchanging two columns (resp. rows), (ii) multiplying a column (resp. row) by $(-1)$ and (iii) adding a multiple of a column (resp. row) to another column (resp. row). Exchanging two columns (resp. rows) corresponds to exchanging two basis elements of $H_{1}(G_{2};{\mathbb Z})$ (resp. $H_{1}(G_{1};{\mathbb Z})$). The multiplication of the $j$th column by $(-1)$ can be realized by a deformation illustrated in Fig. \ref{Hopf_chord7} up to Delta moves and ambient isotopies, while changing the basis element $w_{j}$ into the new basis element $-w_{j}$; here, for an integer $r$, the symbol shown in the first figure from the left in Fig. \ref{Hopf_chord5} denotes the $r$ parallel Hopf chords joining the loops $z_{i}$ and $w_{j}$ as illustrated in Fig. \ref{Hopf_chord5} (1) or (2), depending on the sign of ${\rm lk}(z_{i},w_{j})$. The multiplication of the $i$th row by $(-1)$ can also be realized by a similar deformation, changing the basis element $z_{i}$ into the new basis element $-z_{i}$. Adding a multiple of the $p$th column to the $q$th column ($p\neq q$) can be realized by a deformation illustrated in Fig. \ref{Hopf_chord6} up to Delta moves and ambient isotopies, changing the basis element $w_{q}$ into the basis element $w_{q}+w_{p}$. Adding a multiple of the $p'$th row to the $q'$th row ($p'\neq q'$) can also be realized by a similar deformation, changing the basis element $z_{q'}$ into the basis element $z_{q'}+z_{p'}$. Therefore, by applying Lemma \ref{delta_lemma} and Lemma \ref{delta_lemma2} if necessary, there exist a basis $\widetilde{\mathcal Z}=\left\{\tilde{z}_{1},\tilde{z}_{2},\ldots,\tilde{z}_{m}\right\}$ of $H_{1}(G_{1};{\mathbb Z})$ and a basis $\widetilde{\mathcal W}=\left\{\tilde{w}_{1},\tilde{w}_{2},\ldots,\tilde{w}_{n}\right\}$ of $H_{1}(G_{2};{\mathbb Z})$ such that all loops of $B_{1}$ are regarded as $\tilde{z}_{1},\tilde{z}_{2},\ldots,\tilde{z}_{m}$, all loops of $B_{2}$ are regarded as $\tilde{w}_{1},\tilde{w}_{2},\ldots,\tilde{w}_{n}$, all Hopf chords joining the loops $\tilde{z}_{i}$ and $\tilde{w}_{j}$ are parallel for each pair of $i$ and $j$, and ${\rm lk}(\tilde{z}_{i},\tilde{w}_{i})=d_{i}\ (i=1,2,\ldots,l)$.
\begin{figure}\label{Hopf_chord7}
\end{figure}
\begin{figure}\label{Hopf_chord6}
\end{figure}
Next we deform $G'$ into a similar canonical form, namely a band sum of Hopf links and the trivial $2$-component spatial graph $U_{1}\cup U_{2}$, by Delta moves, edge contractions, vertex splittings and ambient isotopies, where $U_{1}$ is a spatial bouquet with $m$ loops and $U_{2}$ is a spatial bouquet with $n$ loops. Since ${\rm Lk}(G'_{1},G'_{2})=\left\{d_{1},d_{2},\ldots ,d_{l}\right\}$, by applying Lemma \ref{delta_lemma}, the Hopf chords for $G$ and those for $G'$ are transformed into each other by Delta moves and ambient isotopies. Thus $G$ and $G'$ are transformed into each other by Delta moves, edge contractions, vertex splittings and ambient isotopies. This completes the proof.
\end{proof}
\begin{Remark}
We refer the reader to \cite{MN89} for a complete classification of oriented links and \cite{taniyama95}, \cite{MT97}, \cite{ST03} for a complete classification of spatial embeddings of a graph up to Delta moves and ambient isotopies.
\end{Remark}
\end{document}
\begin{document}
\newcount\hour \newcount\minute
\hour=\time \divide \hour by 60
\minute=\time
\loop \ifnum \minute > 59 \advance \minute by -60 \repeat
\def\nowtwelve{\ifnum \hour<13 \number\hour:
\ifnum \minute<10 0\fi
\number\minute
\ifnum \hour<12 \ A.M.\else \ P.M.\fi
\else \advance \hour by -12 \number\hour:
\ifnum \minute<10 0\fi
\number\minute \ P.M.\fi}
\def\nowtwentyfour{\ifnum \hour<10 0\fi
\number\hour:
\ifnum \minute<10 0\fi
\number\minute}
\def \now {\nowtwelve}
\title{MAX INDEPENDENT SET AND THE QUANTUM ALTERNATING OPERATOR ANSATZ}
\author{Zain H. Saleem}
\email{[email protected]}
\affiliation{Argonne National Laboratory, 9700 S. Cass Ave.,
Lemont, IL 60439, USA}
\begin{abstract}
We study the maximum independent set (MIS) problem of graph theory using the quantum alternating operator ansatz. We perform simulations on the Rigetti Forest simulator for the square ring, $K_{2,3}$, and $K_{3,3}$ graphs and analyze the dependence of the algorithm on the depth of the circuit and on the initial states. The probability distribution over the feasible states representing maximum independent sets is observed to be asymmetric for the MIS problem, unlike the Max-Cut problem, where the probability distribution over feasible states is symmetric. For asymmetric graphs it is shown that the algorithm clearly favors the independent set with the larger number of elements, even at finite circuit depth. We also compare the approximation ratios of the algorithm for different initial states for the square ring graph and show that the performance depends on the choice of the initial state.
\end{abstract}
\preprint{}
\maketitle
\section{Introduction}
The quantum computation community has been expressing growing interest in developing algorithms that can be implemented on near-term quantum machines \cite{preskil}. Several hybrid classical-quantum algorithms \cite{farhi2014,perruzo, moll} have been proposed that can take advantage of the available quantum resources in the presence of noisy gates and small decoherence times. The Quantum Approximate
Optimization Algorithm (QAOA) \cite{farhi2014} and the Variational
Quantum Eigensolver (VQE) \cite{perruzo} are two such classical-quantum algorithms. QAOA has been put forward to tackle combinatorial optimization problems, and the VQE algorithm has applications in quantum chemistry problems where the ground state of a Hamiltonian needs to be determined. The VQE algorithm is used as a subroutine in QAOA.
In most of the hybrid algorithms the quantum part of the algorithm involves preparing a quantum circuit, and the classical part involves optimization. In the Quantum Approximate Optimization Algorithm a quantum state is created by a depth-$p$ circuit specified by $2p$ variational parameters. The algorithm has been shown to be not efficiently simulatable classically even at the lowest depth $p=1$ \cite{farhi2016}. QAOA is thus a good candidate algorithm to study quantum advantage on near-term quantum machines. Although one can theoretically prove the success of QAOA in the $p\to \infty$ limit as it
approximates adiabatic quantum annealing \cite{farhi2014} in that limit, little is known about its performance when $1 < p \ll \infty$.
A significant amount of work on QAOA has been done in the context of the Max-Cut problem, which is an unconstrained optimization problem. However, not much work has been done on constrained combinatorial optimization problems in the quantum algorithms context \cite{shengtao}. The maximum independent set (MIS) problem is a ``constrained optimization'' problem: unlike the Max-Cut problem, in which all the $2^n$ states are feasible, the feasible states in the MIS problem form a subset of the configuration space. For such ``constrained optimization'' problems a quantum alternating operator ansatz \cite{stuart2017,shengtao} has been proposed. In this paper we present a simulation of the quantum alternating operator ansatz on the Rigetti Forest simulator \cite{riggeti}.
\section{Quantum Approximate Optimization Algorithm}
The QAOA algorithm was proposed for unconstrained discrete optimization problems, such as Max-Sat, Max-Cut, and Max-Clique. Formally, consider
\begin{align}
C(\mathbf{x}) = \sum_{i=1}^{n} C_i(\mathbf{x}),
\end{align}
where $\mathbf{x} = [x_1, x_2, \ldots, x_n]$ denotes a binary string and $C_i(\mathbf{x})$ is the $i$th binary clause. The goal in such optimization problems is to find a binary vector $\mathbf{x}^*$ that maximizes the number of satisfied clauses $C_i(\mathbf{x})$.
For unconstrained combinatorial optimization problems the quantum state is typically initialized to the superposition state $|+\rangle^{\otimes n}$. For the cost Hamiltonian $C$, let $U(C, \gamma)$ denote a unitary operator with an angle $0 \leq \gamma \leq 2\pi$, defined by
\begin{align}
U(C, \gamma) = \exp(- i \gamma C) = \prod^{n}_{i=1} \mathrm{e}^{- i \gamma C_i}.
\end{align}
We also define a driver Hamiltonian $B=\displaystyle{\sum^n_{j=1}} X_j$, which flips $n$ qubits independently. The unitary operator for the Hamiltonian with an angle $0 \leq \beta \leq \pi$ is defined as
\begin{align}
U(B, \beta) = \exp(- i \beta B) = \prod^{n}_{j=1} \mathrm{e}^{- i \beta X_j}.
\end{align}
The ground state of the driver Hamiltonian is $|\phi\rangle = |+\rangle^{\otimes n}$. The quantum approximate optimization algorithm uses an alternating quantum circuit of depth $p$ depending on Hamiltonians $B$ and $C$ to maximize the expected cost function, with $2p$ angle parameters $\boldsymbol{\gamma}$ and $\boldsymbol{\beta}$:
\begin{align}
|{\boldsymbol{\gamma}, \boldsymbol{\beta}}\rangle =
U(B, \beta_p)U(C, \gamma_p) \cdots U(B, \beta_1)U(C, \gamma_1) |\phi\rangle.
\end{align}
If we denote the expectation of the cost function $C$ by $F_p$,
\begin{align}
F_p({\boldsymbol{\gamma}, \boldsymbol{\beta}}) =
\langle C \rangle(\boldsymbol{\gamma}, \boldsymbol{\beta})
=\langle {\boldsymbol{\gamma}, \boldsymbol{\beta}} | C | {\boldsymbol{\gamma}, \boldsymbol{\beta}} \rangle,
\end{align}
and let $F^\star_p$ be the maximum of $F_p({\boldsymbol{\gamma}, \boldsymbol{\beta}})$ over the angles,
$F^\star_p = \max_{{\boldsymbol{\gamma}, \boldsymbol{\beta}}} F_p({\boldsymbol{\gamma}, \boldsymbol{\beta}})$, then the objective of the QAOA algorithm is to attain $F^\star_p$ by properly choosing the parameters $\boldsymbol{\gamma}, \boldsymbol{\beta}$.
The approximation improves as we increase $p$, and at infinite depth we have $\lim_{p \rightarrow \infty} F^\star_p = \max_\mathbf{x} C(\mathbf{x})$. The expectation $F_p({\boldsymbol{\gamma}, \boldsymbol{\beta}})$ is calculated by repeated measurements on quantum computers. The variational parameters are optimized on classical computers, for example, by using the Nelder--Mead method as part of the VQE subroutine.
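As an illustration of this hybrid loop, the outer classical optimization can be written in a few lines; the sketch below is added for illustration only, and the function \texttt{estimate\_Fp} is a hypothetical stand-in for preparing $|\boldsymbol{\gamma}, \boldsymbol{\beta}\rangle$ on the quantum device and averaging the measured cost over many shots.
\begin{verbatim}
# Illustrative outer loop: classical Nelder-Mead over the 2p QAOA angles.
import numpy as np
from scipy.optimize import minimize

p = 2

def estimate_Fp(angles):
    gammas, betas = angles[:p], angles[p:]
    # Hypothetical stand-in: run the depth-p circuit with these angles,
    # measure the cost C on every shot, and return the sample mean.
    raise NotImplementedError

result = minimize(lambda a: -estimate_Fp(a),
                  x0=np.random.uniform(0, np.pi, 2 * p),
                  method="Nelder-Mead")
best_angles, F_star = result.x, -result.fun
\end{verbatim}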
\section{Max-Cut}
The Max-Cut combinatorial optimization problem is stated as follows: Given a graph $G=(V,E)$ with nodes $V$ and edges $E$, find a subset $S \subseteq V$ such that the number of edges between $S$ and $V \setminus S$ is maximized. Finding an exact solution for the Max-Cut problem is NP-hard~\cite{karp1972a}, but efficient polynomial-time classical algorithms do exist that find an approximate answer within some fixed factor of the optimum solution.
To apply the QAOA algorithm to the Max-Cut problem, we first encode the graph of the particular problem instance into a cost Hamiltonian whose energy on any bit string equals the number of cut edges. Such a cost Hamiltonian is given by
\begin{equation}
C = \frac{1}{2} \sum_{i,j \in E}w_{ij} ( 1- Z_i Z_j).
\end{equation}
Here $Z_i$ is the Pauli $Z$ matrix applied to qubit $i$, $E$ is the set of edges, and $w$ is the adjacency matrix of the graph, with $w_{ij}=1$ if nodes $i$ and $j$ are connected and zero otherwise. Since this is an unconstrained optimization problem, the initial state is prepared as a uniform superposition of all the bit strings. The mixing Hamiltonian $B$ is just a sum of the Pauli $X_i$ matrices acting on the $i$th qubit:
\begin{equation}
B= \sum_{i \in V}X_i
\end{equation}
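Both operators are small enough to simulate exactly for the graphs considered here. The following self-contained sketch is an illustration added with NumPy/SciPy (it is not the Forest code used for the experiments, and the graph, depth and optimizer settings are arbitrary); it builds the diagonal cost operator for the square ring graph, applies the alternating circuit to $|+\rangle^{\otimes n}$, and optimizes the angles.
\begin{verbatim}
# Exact-statevector Max-Cut QAOA for the square ring graph (illustrative sketch).
import numpy as np
from scipy.optimize import minimize

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # square ring graph
n, p = 4, 2

# Diagonal of C = 1/2 sum_{(i,j) in E} (1 - Z_i Z_j): counts cut edges.
bits = (np.arange(2 ** n)[:, None] >> np.arange(n)) & 1
z = 1 - 2 * bits                            # Z eigenvalues per qubit
cost = 0.5 * sum(1 - z[:, i] * z[:, j] for i, j in edges)

X = np.array([[0.0, 1.0], [1.0, 0.0]])

def apply_mixer(state, beta):
    # exp(-i beta X) applied to every qubit
    u = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
    psi = state.reshape([2] * n)
    for q in range(n):
        psi = np.moveaxis(np.tensordot(u, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def qaoa_state(angles):
    gammas, betas = angles[:p], angles[p:]
    psi = np.full(2 ** n, 2 ** (-n / 2), dtype=complex)   # |+>^n
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * cost) * psi                 # phase separator U(C, gamma)
        psi = apply_mixer(psi, b)                          # mixer U(B, beta)
    return psi

def neg_Fp(angles):
    psi = qaoa_state(angles)
    return -np.real(np.vdot(psi, cost * psi))

res = minimize(neg_Fp, x0=np.random.uniform(0, np.pi, 2 * p), method="Nelder-Mead")
print("F_p* ~", -res.fun)   # approaches max C(x) = 4 as p grows
\end{verbatim}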
\subsection{Simulation of Max-Cut QAOA}
We simulate the QAOA algorithm on the Rigetti Forest simulator \cite{riggeti}. The simulations are performed without including the noisiness of the gates. The variational quantum eigensolver subroutine is used to find the optimized parameters $\beta$ and $\gamma$. Within the VQE we use the classical Nelder--Mead method. The algorithm is run over 50 iterations, and the arithmetic averages of the probabilities of the states over these 50 iterations are calculated.
We choose the square ring, $K_{2,3}$, and $K_{3,3}$ graphs given in Figures \ref{image1}--\ref{image3} for our simulations. For the square ring graph the Max-Cuts are the $(1,3)$ and $(2,4)$ sets, corresponding to the $|0101\rangle$ and $|1010\rangle$ states, respectively. For the $K_{2,3}$ graph, the Max-Cuts are $(1,2)$ and $(3,4,5)$, corresponding to the $|00011\rangle$ and $|11100\rangle$ states, respectively. Similarly, for the $K_{3,3}$ graph the Max-Cuts are the $(1,2,3)$ and $(4,5,6)$ sets, corresponding to the $|000111\rangle$ and $|111000\rangle$ states, respectively. Since this is an unconstrained optimization problem, every set is a cut and represents a feasible solution. However, we expect to see the peaks in the probability distribution at the states representing the Max-Cuts.
\begin{figure}[h]
\minipage{0.18\textwidth}
\includegraphics[width=\linewidth]{Picture1.png}
\caption{Square ring graph}\label{image1}
\endminipage
\minipage{0.30\textwidth}
\includegraphics[width=\linewidth]{k23.png}
\caption{$K_{2,3}$ graph}\label{image2}
\endminipage
\minipage{0.30\textwidth}
\includegraphics[width=\linewidth]{k33.png}
\caption{$K_{3,3}$ graph}\label{image3}
\endminipage
\end{figure}
The results of the simulation for the three graphs (square ring, $K_{2,3}$, and $K_{3,3}$) are provided in Figure \ref{maxcut1}. One can see that the peaks are located at the Max-Cut solutions. For small values of $p$, other solutions also contribute; but as the value of $p$ is increased, the Max-Cut solutions dominate, and all other peaks disappear from the distribution. We also note that the probability distribution is symmetric over the Max-Cut solutions and over the other feasible solutions.
\begin{figure}[h]
\minipage{0.33\textwidth}
\includegraphics[width=\linewidth]{maxcutsquarering.png}
\endminipage
\minipage{0.33\textwidth}
\includegraphics[width=\linewidth]{maxcutK23.png}
\endminipage
\minipage{0.33\textwidth}
\includegraphics[width=\linewidth]{maxcutK33.png}
\endminipage
\caption{Probability distribution of states for Max-Cut QAOA for the square ring, $K_{2,3}$, and $K_{3,3}$ graphs when $p=1,6$ and $15$. }\label{maxcut1}
\end{figure}
\section{Quantum Alternating Operator Ansatz}
A general QAOA circuit is defined by two parameterized families of operators: a family of phase separation operators $U_C(\gamma)$ that depends on the cost function and a family of mixing operators $U_B(\beta)$ that depends on the domain and its structure. In the earlier implementation of unconstrained QAOA the feasible set of states consisted of the entire configuration space, and therefore the mixing operator in the algorithm was $U_B(\beta)= \exp(- i \beta B)$. Constrained optimization problems, however, require optimization over feasible solutions that are typically a subset of the configuration space. The feasible solution set is specified by a set of Boolean functions (hard constraints) that are satisfied by the feasible solutions. If the mixing operators preserve feasibility, then given a feasible initial state, the QAOA algorithm will produce a final state that, when measured, gives a feasible solution. This is achieved by the quantum alternating operator ansatz, which comprises three main components: the initial state, the phase operators, and the mixing operators.
The initial state must be feasible, and it must be trivial to implement, in the sense that it can be created by a constant-depth quantum circuit from the $|0\ldots0 \rangle_n$ state. The family of mixing unitaries $U_B(\beta)$ is required to take feasible states to feasible states for all values of the parameters and must also provide transitions between all feasible solutions. For an objective function $C$ we define
$H_C$ to be the Hamiltonian that acts as $C$ on basis states $H_C |\mathbf{x}\rangle = C(\mathbf{x}) |\mathbf{x}\rangle.$ The phase separation operators~$U_C(\gamma)$ are required to be diagonal in the computational basis, and therefore the phase separation unitary is defined as $U_C(\gamma) = e^{-i \gamma H_C}$ up to trivial global phase terms.
\section{Maximum Independent Set}
Consider a graph $G = (V,E)$, with $V$ the set of nodes of the graph and $E$ the set of edges. Let $\mathcal{N}(i)=\{j \in V: (i,j) \in E\}$ be the neighbors of the $i^{th}$ node in $V$. Positive weights $w_i$ are associated with each node $i$. A subset $V'$ of $V$ is represented by a vector $\textbf{x} = (x_i) \in
\{0,1\}^{|V|}$, where $x_i = 1$ means $i$ is in the subset and $x_i = 0$ means $i$ is not in the subset. A subset $\textbf{x}$ is called an {\it independent set} if no two nodes in the subset are connected by an edge: $(x_i, x_j) \neq (1,1)$ for all $(i,j) \in E$. The maximum independent set is the independent set with the largest number of nodes (in the weighted case, the largest total weight $\sum_{i} w_i x_i$). We are interested in finding a maximum weighted independent set $\textbf{x}^*$.
Unless P=NP, no polynomial-time classical algorithm solves the maximum independent set problem exactly \cite{Tre}. The best algorithms known for general graphs give approximations only within a polynomial factor. MIS can be approximated within a factor of $(D_g + 2)/3$ \cite{a2} on bounded-degree graphs with maximum degree $D_g \geq 3$, but it still remains APX-complete \cite{a3}. The best-known classical algorithm for the weighted maximum independent set is the greedy local search algorithm \cite{chandra}, which also gives a polynomial-factor approximation.
The three QAOA components for this maximum independent set problem are as follows.
\begin{itemize}
\item Initial state: The initial state can be the trivial (empty-set) state or any state representing an independent set.
\item Phase separation Hamiltonian: The objective function $C(\mathbf{x}) = \sum_{j=1}^n x_j $ counts the number of vertices in $V'$, and the Hamiltonian corresponding to this function is
\begin{equation}
H_C = \frac12 \sum_{u\in V} (I-Z_{u}).
\end{equation}
\item Mixing Hamiltonian: When constructing the mixing Hamiltonian, we note two points: (1) given an independent set $V'$, adding a vertex $w \notin V'$ to $V'$ preserves feasibility only if none of the neighbors of $w$ are already in $V'$; and (2) we can always remove any vertex $w \in V'$ without affecting the feasibility of the state. The transformation rule that preserves feasibility is to flip the bit $x_w$ if and only if $\bar{x}_{v_1 } \bar{x}_{v_2 }\dots \bar{x}_{v_\ell }=1$, where $v_1,\dots,v_\ell$ are the vertices adjacent to $w$. Keeping these observations in mind, we can construct the following Hamiltonian: $B=\sum_u B_u$, where
\begin{equation} \label{eqn:driverIndepSet}
B_{u} = \frac{1}{2^{\ell}} X_u \;
\prod_{j=1}^{\ell} (I+Z_{v_j }).
\end{equation}
This is the Hamiltonian-based implementation of the mixing unitaries. A sequential implementation of the mixing unitaries is provided in \cite{stuart2017} and has some advantages, but we will leave that implementation for later work. (A small numerical sketch of a single term $B_{u}$ is given after this list.)
\end{itemize}
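To make the feasibility-preserving structure of the terms in Eq.~(\ref{eqn:driverIndepSet}) concrete, the following small sketch (added for illustration, with $0$-indexed vertices and dense matrices, so it only makes sense for very small graphs) builds a single term $B_u$ explicitly:
\begin{verbatim}
# Build B_u = (1/2^l) X_u prod_{v in N(u)} (I + Z_v) as a dense matrix (sketch).
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def B_u(u, neighbors, n):
    ops = []
    for q in range(n):
        if q == u:
            ops.append(X)
        elif q in neighbors:
            ops.append(0.5 * (I2 + Z))   # projector onto x_v = 0 on each neighbor
        else:
            ops.append(I2)
    return reduce(np.kron, ops)

# Square ring graph: vertex u = 0 has neighbors {1, 3}.
B0 = B_u(0, {1, 3}, n=4)

# B0 flips qubit 0 only when qubits 1 and 3 are both 0, so it maps
# independent-set basis states to independent-set basis states.
e0 = np.zeros(16); e0[0] = 1.0        # |0000>, the empty independent set
print(np.nonzero(B0 @ e0)[0])         # -> [8], i.e. |1000>, the set {0}
\end{verbatim}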
\subsection{Simulation of Maximum Independent Set}
The domain in the MIS problem is the set of $n$-bit strings corresponding to the independent sets in $G$. We again simulate the quantum alternating operator ansatz for the square ring, $K_{2,3}$, and $K_{3,3}$ graphs. In the case of the MIS problem, not all sets are feasible solutions of the problem. For example, in the square ring graph, the independent sets are $(\emptyset), (1), (2), (3),(4), (1,3)$, and $(2,4)$, corresponding to the states $|0000\rangle, |0001\rangle, |0010\rangle, |0100\rangle, |1000\rangle, |0101\rangle$, and $|1010\rangle$, respectively. All other sets are not feasible solutions. The two maximum independent sets in the square ring graph correspond to $(1,3)$ and $(2,4)$. For the $K_{2,3}$ graph, the maximal independent sets are $(1,2)$ and $(3,4,5)$, corresponding to the $|00011\rangle$ and $|11100\rangle$ states, respectively, and $(3,4,5)$ is the maximum independent set. Similarly, for the $K_{3,3}$ graph the maximum independent sets are the $(1,2,3)$ and $(4,5,6)$ sets, corresponding to the $|000111\rangle$ and $|111000\rangle$ states, respectively.
Below we present the results of our simulations. The MIS problem differs from the Max-Cut problem in two crucial ways. (1) The probability distribution of the states is asymmetric. This is due to the asymmetry in the mixing operator: unlike the Max-Cut problem, where the mixing operator acts symmetrically on all qubits, in the MIS problem the mixing operator acts asymmetrically. (2) The initial state in the MIS problem can be any of the independent sets. In the Max-Cut problem the choice of the initial state was obvious, whereas here any independent set can serve as the initial state.
\textbf{Asymmetric probability distribution}: We analyze the probability distribution for the three graphs: square ring, $K_{2,3}$, and $K_{3,3}$. The initial state we use is the same empty-set state $|00\cdots0 \rangle$ for the three graphs. As expected, the probability distributions shown in Figure \ref{MIS} are asymmetric in all three cases. As the value of $p$ is increased, however, the distributions become more symmetric. The reason is that increasing $p$ allows more mixing to take place between the feasible solutions.
\begin{figure}[ht]
\minipage{0.33\textwidth}
\includegraphics[width=\linewidth]{MISsquarering.png}
\endminipage
\minipage{0.33\textwidth}
\includegraphics[width=\linewidth]{MISK23.png}
\endminipage
\minipage{0.33\textwidth}
\includegraphics[width=\linewidth]{MISK33.png}
\endminipage
\caption{Probability distribution of states for Max-Independent set QAOA for the square ring, $K_{2,3}$, and $K_{3,3}$ graphs when $p=1,6$ and $15$.}\label{MIS}
\end{figure}
The square ring and $K_{3,3}$ graphs are symmetric whereas the $K_{2,3}$ graph is an asymmetric graph. We note that for the $K_{2,3}$ graph, even when $p=6$ (finite circuit depth), the contribution from the independent set containing a larger number of elements $(3,4,5)$ is considerably larger than from any other set.
\textbf{Dependence on initial states}:
We also check the dependence of the outcome of our quantum approximate optimization algorithm on the choice of initial states. Here we analyze only the square ring graph. In the experiments that tested the dependence of the algorithm on the circuit depth and the asymmetry of the probability distribution, we used the zero state (empty set) as our initial state. Here we run our simulations with the $| 0101 \rangle$ and $| 1010 \rangle$ initial states.
\begin{figure}[ht]
\centering
\minipage{0.33\textwidth}
\includegraphics[width=\linewidth]{MISsquarering0101.png}
\endminipage
\minipage{0.33\textwidth}
\includegraphics[width=\linewidth]{MISsquarering1010.png}
\endminipage
\caption{Dependence of the algorithm on the initial states.}
\end{figure}
We can see that for lower values of $p$ the initial state dominates the probability distribution. As the value of $p$ is increased, however, the distribution becomes more and more symmetrical.
\section{Initial States and Approximation Ratio }
The analytical calculation of $\langle C \rangle=\langle {\boldsymbol{\gamma}, \boldsymbol{\beta}} | C | {\boldsymbol{\gamma}, \boldsymbol{\beta}} \rangle$ is tricky for the MIS problem even on bounded-degree graphs because the mixing Hamiltonian contains the exponential of noncommuting Pauli matrices. We therefore calculate numerically the expectation of the cost function $\langle C \rangle$ for the square ring graph. Let us define $A= e^{-i \boldsymbol{\beta} H_M}e^{-i \boldsymbol{\gamma} H_C }$, where $H_M$ denotes the mixing Hamiltonian. For $p=1$ we have to calculate $\langle s |A_1^\dagger C A_1| s \rangle$, where $| s \rangle$ is the initial state. We perform the numerical calculation for different choices of initial states. The expectation values for the independent sets (IS's) $|1000\rangle$, $|0100\rangle$, $|0010\rangle$, and $|0001\rangle$ are the same, and the expectation values for the maximal independent sets $|0101\rangle$ and $|1010\rangle$ are the same. For the minimum-depth QAOA circuit and $C_{max}=2$ we plot $\langle C_1 \rangle$ vs.\ $\beta_1$, as $\gamma_1$ cancels out of the expectation value.
\begin{equation}
\langle C_1 \rangle = \frac{\langle s |A_1^\dagger C A_1| s \rangle}{C_{max}}
\end{equation}
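Such curves can be reproduced numerically; the sketch below is illustrative only and assumes that dense matrices \texttt{H\_C} and \texttt{H\_M} for the cost and constrained mixing Hamiltonians of the square ring graph, together with a feasible initial state \texttt{s}, have already been assembled (for instance from the terms constructed in the earlier sketch).
\begin{verbatim}
# Scan <C_1> = <s| A_1^dag C A_1 |s> / C_max over beta_1 (illustrative sketch).
import numpy as np
from scipy.linalg import expm

def C1_curve(H_C, H_M, s, betas, gamma=0.7, C_max=2.0):
    # gamma is arbitrary here; for the initial states considered in the text
    # it cancels out of the expectation value.
    values = []
    for b in betas:
        A1 = expm(-1j * b * H_M) @ expm(-1j * gamma * H_C)
        psi = A1 @ s
        values.append(np.real(np.vdot(psi, H_C @ psi)) / C_max)
    return np.array(values)

# Example usage: betas = np.linspace(0, np.pi, 200); C1_curve(H_C, H_M, s, betas)
\end{verbatim}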
\begin{figure}[ht]
\centering
\includegraphics[width=3in , height =1.5 in]{approx.pdf}
\caption{Normalized expectation value $\langle C_1 \rangle$ as a function of $\beta_1$ for different choices of the initial state.}
\end{figure}
The maximum value of the expectation is $\max_{\gamma_1 , \beta_1} {\langle C_1 \rangle} = 1.0$, $0.89$, and $0.68$ for the MIS's, the empty set, and the IS's, respectively. We note that the approximation ratio is better for the empty set than for the independent set states.
\section{Conclusion}
We have studied the maximum independent set problem using the quantum alternating operator ansatz. We note that the probability distribution of observing the maximum independent set states is asymmetric; in contrast, the probability distribution of the Max-Cut states is symmetric. We also calculated the approximation ratios for our graphs for different initial states. In this paper we considered simple graphs and observed the differences with the unconstrained problem.
Much more research is needed in order to understand our results analytically. We intend to run the experiments on larger graphs with larger circuit depths; as the parameter space increases, it will be useful to understand improvements that can be made in the classical parameter optimization algorithms. We also plan to execute the algorithm on a quantum computer and see how far we can push it on a noisy intermediate-scale quantum (NISQ) device.
Acknowledgments: I thank Stuart Hadfield and Shengtao Wang for valuable discussions. This material is based upon work supported by the U.S. Department of Energy, Office of Science, under contract DE-AC02-06CH11357.
\begin{thebibliography}{10}
\bibitem{preskil} J. Preskill, Quantum 2, 79 (2018)
\bibitem{farhi2014}E. Farhi, J. Goldstone, and S. Gutmann, (2014),
arXiv:1411.4028.
\bibitem{perruzo}A. Peruzzo, J. McClean, P. Shadbolt, M.-H. Yung, X.-Q.
Zhou, P. J. Love, A. Aspuru-Guzik, and J. L. O'Brien,
Nature Communications 5, 4213 (2014).
\bibitem{moll}N. Moll, P. Barkoutsos, L. S. Bishop, J. M. Chow,
A. Cross, D. J. Egger, S. Filipp, A. Fuhrer, J. M.
Gambetta, M. Ganzhorn, A. Kandala, A. Mezzacapo,
P. Muller, W. Riess, G. Salis, J. Smolin, I. Tavernelli,
and K. Temme, Quantum Science and Technology 3,
030503 (2018).
\bibitem{farhi2016}E. Farhi and A. W. Harrow, (2016), arXiv:1602.07674
\bibitem{stuart2017}S. Hadfield, Z. Wang, B. O'Gorman, E. G. Rieffel, D. Venturelli, and R. Biswas, arXiv:1709.03489.
\bibitem{shengtao}H. Pichler, S.-T. Wang, L. Zhou, S. Choi, and M. D. Lukin,
arXiv:1808.10816.
\bibitem{Tre}
Luca Trevisan, Technical Report TR04-065, Electronic Colloquium on Computational
Complexity, 2004.
\bibitem{karp1972a}R. M. Karp, “Reducibility among combinatorial problems,” (Springer US, Boston, MA, 1972) pp. 85–103.
\bibitem{a2} Bazgan, C., Escoffier, B., AND Paschos, V. T. Theoretical
Computer Science 339, 2-3 (2005), 272-292
\bibitem{a3} Papadimitriou, C. H., and Yannakakis, M. Journal of Computer and System Sciences 43 (1991), 425-440.
\bibitem{chandra}B. Chandra, M.M. Halldorsson, in Proc. 10th
Annual SIAM-ACM Symposium on Discrete Algorithms (SODA), Baltimore, MD, 1999, pp. 169-176.
\bibitem{edward2014qaoa} E. Farhi, J. Goldstone, and S. Gutmann, arXiv:1411.4028, 2014.
\bibitem{edward2016quantum} E. Farhi and A. W. Harrow, arXiv:1602.07674, 2016.
\bibitem{pyquilqaoa} https://grove-docs.readthedocs.io/en/latest/qaoa.html
\bibitem{riggeti} http://docs.rigetti.com/en/stable/
\end{thebibliography}
\end{document}
\begin{document}
\title[Duality and spherical adjunction from microlocalization]{\textbf{Duality and Spherical Adjunction from Microlocalization}\\
{\textbf{\footnotesize{-- An approach by contact isotopies --}}}}
\date{}
\author{Christopher Kuo}
\address{Department of Mathematics, University of Southern California}
\email{[email protected]}
\author{Wenyuan Li}
\address{Department of Mathematics, Northwestern University}
\email{[email protected]}
\maketitle
\begin{abstract}
For a subanalytic Legendrian $\Lambda \subseteq S^{*}M$, we prove that when $\Lambda$ is either swappable or a full Legendrian stop, the microlocalization at infinity $m_\Lambda: \Sh_\Lambda(M) \rightarrow \msh_\Lambda(\Lambda)$ is a spherical functor, and the spherical cotwist is the Serre functor on the subcategory $\Sh_\Lambda^b(M)_0$ of compactly supported sheaves with perfect stalks. In this case, when $M$ is compact, the Verdier duality on $\Sh_\Lambda^b(M)$ extends naturally to all compact objects $\Sh_\Lambda^c(M)$. This is a sheaf theory counterpart (with weaker assumptions) of the results on the cap functor and cup functor between Fukaya categories.
When proving the spherical adjunction,
we deduce the Sato-Sabloff fiber sequence and construct the Guillermou doubling functor for any Reeb flow. As a setup for the Verdier duality statement, we study the dualizability of $\Sh_\Lambda(M)$ itself and obtain a classification result of colimit-preserving functors as convolutions of sheaf kernels.
\end{abstract}
\setcounter{tocdepth}{2}
\tableofcontents
\section{Introduction}
\subsection{Context and background}
Our goal in this paper is to investigate the non-commutative geometry of the category of sheaves arising from the symplectic geometry of the Lagrangian skeleton of the Weinstein pair $(T^*M, \Lambda)$, where $\Lambda \subseteq S^*M$ is a subanalytic Legendrian subset of the ideal contact boundary $S^*M$ of the exact symplectic manifold $T^*M$.
Following Kashiwara-Schapira \cite{KS}, given a real analytic manifold $M$, one can define a stable $\infty$-category $\Sh_\Lambda(M)$ of constructible sheaves on $M$ with subanalytic Legendrian singular support $\Lambda \subseteq S^*M$, which is invariant under Hamiltonian isotopies \cite{Guillermou-Kashiwara-Schapira}. On the other hand, one can also define another stable $\infty$-category $\msh_\Lambda(\Lambda)$ of microlocal sheaves on the Legendrian $\Lambda \subseteq S^*M$ \cite{Gui,Nadler-pants}, and there is a microlocalization functor
$$m_\Lambda: \Sh_\Lambda(M) \rightarrow \msh_\Lambda(\Lambda).$$
From the perspective of non-commutative geometry, the categories of (microlocal) sheaves, viewed as sheaves on the Lagrangian skeleton of the Weinstein pair $(T^*M, \Lambda)$, should be understood as a non-commutative manifold with boundary, obeying Poincar\'{e}-Lefschetz duality and a duality fiber sequence in the presence of a relative Calabi-Yau structure \cite{relativeCY}, whose existence is proved by Shende-Takeda using arborealization \cite{ShenTake}.
In this paper, we will study some non-commutative geometry structures which are related to, but different from, the duality fiber sequence in Calabi-Yau structures.
The category of (micro)sheaves is closely related to a number of central topics in symplectic geometry and mathematical physics \cite{NadZas,Nad}.
Recently Ganatra-Pardon-Shende \cite{Ganatra-Pardon-Shende3} showed that the partially wrapped Fukaya category is equivalent to the category of compact objects in the unbounded dg category of sheaves. In particular, for cotangent bundles with Legendrian stops,
$$\mathrm{Perf}\,\mathcal{W}(T^*M, \Lambda)^\text{op} \simeq \Sh^c_\Lambda(M).$$
The homological mirror symmetry conjecture \cite{KonHMS,AurouxAnti} predicts an equivalence between Fukaya categories and categories of coherent sheaves on the mirror complex variety. On the other hand, the Betti geometric Langlands program proposed by Ben-Zvi and Nadler \cite{Ben-Zvi-Nadler-Betti} conjectures an equivalence of constructible sheaves on $\mathrm{Bun}_G(X)$ and quasi-coherent sheaves on $\mathrm{Loc}_{G^\vee}(X)$.
We will now explain the non-commutative geometry structures that will be investigated, namely duality and spherical adjunctions, which arise from various predictions from symplectic geometry, mirror symmetry and mathematical physics.
\subsubsection{Spherical adjunction and fiber sequence}
Spherical adjunctions were introduced by Anno-Logvinenko \cite{Spherical} in the dg setting and then by \cite{SphericalInfty} in the stable $\infty$-categorical setting, as a generalization of the notion of spherical objects \cite{SeidelThom}. Like spherical objects, spherical adjunctions provide interesting fiber sequences and autoequivalences of the categories, called spherical twists and cotwists.
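For orientation, in one standard convention (cf.~\cite{Spherical,SphericalInfty}; the precise definitions used in this paper are recalled in Section \ref{sec:spherical}), given an adjunction $F: \sA \leftrightharpoons \sB : F^r$, where $F^r$ denotes a right adjoint of $F$, the twist $T$ and cotwist $S$ sit in the (co)fiber sequences
$$F F^r \rightarrow \mathrm{id}_{\sB} \rightarrow T, \qquad S \rightarrow \mathrm{id}_{\sA} \rightarrow F^r F,$$
induced by the counit and unit, and the adjunction is spherical exactly when both $T$ and $S$ are equivalences.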
In algebraic geometry, when we have a smooth variety $X$ with a divisor $i: D \hookrightarrow X$, the pushforward and pullback functors
$$i_*: \Coh(D) \leftrightharpoons \Coh(X) : i^*$$
form a spherical adjunction between the dg categories of coherent sheaves, where the spherical twist is $-\otimes \mathcal{O}_X(D)$.
In symplectic geometry, as is suggested by Kontsevich-Katzarkov-Pantev \cite{KonKarPanHodge} and Seidel \cite{SeidelSH=HH}, we have another interesting class of spherical adjunctions inspired by long exact sequences in Floer theory \cite{SeidelLES,SeidelFukI,Sabduality,EESduality}.
For a symplectic Lefschetz fibration $\pi: X \rightarrow \mathbb{C}$ with regular fiber $F = \pi^{-1}(\infty)$, let $\mathcal{FS}(X, \pi)$ be the Fukaya-Seidel category associated to Lagrangian thimbles in $X$ and $\mathcal{F}(F)$ the Fukaya category of closed exact Lagrangians in $F$ \cite{Seidelbook}. The cap functor
$$\cap_F: \mathcal{FS}(X, \pi) \rightarrow \mathcal{F}(F)$$
defined by intersection of the Lagrangians with $F$ admits a left adjoint $\cup$ called the (left) cup functor \cite{AbGan} (see also \cite{AbSmithKhov}*{Appendix A}). In an unpublished work, Abouzaid-Ganatra proved that $\cap$ and $\cup$ form a spherical adjunction for general symplectic Landau-Ginzburg models \cite{AbGan}.
On the other hand, using the formalism of partially wrapped Fukaya categories \cite{Sylvan,GPS1}, Sylvan considered the Orlov cup functor
$$\cup_F: \mathcal{W}(F) \rightarrow \mathcal{W}(X, F)$$
associated to any Weinstein pair $(X, F)$ and showed that $\cup_F$ is a spherical functor\footnote{The data of a spherical functor is equivalent to the data of a spherical adjunction, as will be explained in Section \ref{sec:spherical}. Here we use spherical functors because the adjoint functor is not explicit from Sylvan's work.} as long as the Weinstein stop $F \subset \partial_\infty X$ is a so-called swappable stop \cite{SylvanOrlov}. In this case, the spherical twists/cotwists are the monodromy functors defined by wrapping around the contact boundary.
In microlocal sheaf theory, Nadler has also shown that functors between the pair of microsheaf categories over the symplectic Landau-Ginzburg model $(\mathbb{C}^n, \pi = z_1 \dots z_n)$, after (heuristically speaking) adding additional fiberwise stops, form a spherical adjunction. Then, by removing the fiberwise stops, the spherical adjunction for the original pair is also obtained \cite{Nadspherical}, but it is unclear how general this argument is in sheaf theory.
The structure of spherical adjunctions has appeared in a number of previous works and leads to interesting applications in homological mirror symmetry \cite{AbAurouxHMS,Nadspherical,Gammagespherical,Jeffs}. However, important problems still remain. Namely, the cap functor $\cap$ is only defined in the setting of a symplectic Landau-Ginzburg model instead of a general Weinstein pair, since it is not at all clear whether there are enough Lagrangian submanifolds asymptotic to a general Legendrian stop \cite{Ganatra-Pardon-Shende3}. This means that there is no formulation of results on the cap functor $\cap$ for a general Weinstein pair. Therefore, even though the cup functor $\cup$ is proved to be spherical as long as the stop is swappable \cite{SylvanOrlov}, it is difficult to characterize the adjoint functors geometrically.
\subsubsection{Categorical duality and Serre duality}
Serre duality, in the form of an equivalence $\Coh(X)^{op} = \Coh(X)$, can be reinterpreted as the fact that its Ind-completion $\IndCoh(X)$ is self-dual. This is often done by realizing the diagonal bimodule $\Hom_{\Coh(X)}(-,-)$ as the geometric diagonal constructed by $\Delta: X \hookrightarrow X \times X$ using the six-functor formalism \cite{Ben-Zvi-Francis-Nadler, Gaitsgory1}.
We follow this approach, in the setting of constructible sheaves, and show that the dual of $\Sh_\Lambda(M)$ is given by $\Sh_{-\Lambda}(M)$, where $-\Lambda$ is the image of $\Lambda$ under the antipodal map of $S^* M$. The identification $\Sh_\Lambda(M)^\vee = \Sh_{-\Lambda}(M)$, in turn, induces an equivalence $\VD{\Lambda}: \Sh_\Lambda^c(M)^{op} = \Sh_\Lambda^c(M)$, and a natural question is to understand its relation with the classical Verdier duality $\VD{M}(F) \coloneqq \sHom(F,\omega_M)$, where $\omega_M$ is the dualizing sheaf.
Similar questions were considered previously, for instance, in the setting of the Betti geometric Langlands program, where people have constructed a miraculous categorical duality on $\Sh_{\mathcal{N}ilp}(\mathrm{Bun}_G(X))$ coming from Verdier duality \cite{Arinkin-Gaitsgory-Kazhdan-Raskin-Rozenblyum-Varshavsky} (in that case, as $\mathrm{Bun}_G(X)$ is not quasi-compact, defining Verdier duality requires more work).
On the other hand, from the perspective of Fukaya categories, following a proposal of Kontsevich, Seidel has conjectured \cite{SeidelSH=HH} that for a symplectic Lefschetz fibration, the spherical dual cotwist is the Serre functor
$$\sigma^{-1}: \mathcal{FS}(X, \pi) \rightarrow \mathcal{FS}(X, \pi),$$
and proved partial results \cite{SeidelFukI,SeidelFukII,SeidelFukIV1/2}, while from the perspective of Legendrian contact homology, Ekholm-Etnyre-Sabloff have proved Sabloff duality \cite{Sabduality,EESduality} between linearized homology and cohomology. These results predict a Serre functor, which should be the Poincar\'{e}-Lefschetz duality on the category of constructible sheaves with perfect stalks
$$S_\Lambda^-: \Sh_\Lambda^b(M) \rightarrow \Sh_\Lambda^b(M).$$
The next natural question, therefore, is whether one can recover the Serre duality from the Verdier duality and the standard duality.
\subsection{Main result on sphericality}
We state our main result, which provides a general criterion for the microlocalization functor $m_\Lambda: \Sh_\Lambda(M) \rightarrow \msh_\Lambda(\Lambda)$ to be spherical. Under the equivalence of Ganatra-Pardon-Shende \cite{Ganatra-Pardon-Shende3}, the left adjoint of microlocalization $m_\Lambda^l$ is equivalent to the Orlov cup functor on wrapped Fukaya categories, while we expect that the microlocalization $m_\Lambda$ is the cap functor on Fukaya-Seidel categories (see Remarks \ref{rem:comicro-cup} and \ref{rem:micro-cap}).
Let $M$ be a real analytic manifold. Consider a fixed Reeb flow $T_t: S^{*}M \rightarrow S^{*}M$. Recall that a (time-dependent) contact isotopy $\varphi_t: S^{*}M \times \mathbb{R} \rightarrow S^{*}M$ is called a positive isotopy if $\alpha(\partial_t \varphi_t) \geq 0$.
In the definition, we use the word stop for any compact subanalytic Legendrian (following \cite{Sylvan,GPS1}), meaning that Hamiltonian flows are stopped by the Legendrian.
The geometric notion of a swappable subanalytic Legendrian originates from positive Legendrian loops that avoid the Legendrian at the base point \cite{BS-LegIsotopy}, and was explicitly introduced by Sylvan \cite{SylvanOrlov}. Here our definition is slightly different from that of \cite{SylvanOrlov}.
\begin{definition}
A compact subanalytic Legendrian $\Lambda \subseteq S^{*}M$ is called a swappable stop if there exists a compactly supported positive Hamiltonian on $S^{*}M \backslash \Lambda$ such that the Hamiltonian flow sends $T_\epsilon(\Lambda)$ to an arbitrarily small neighbourhood of $T_{-\epsilon}(\Lambda)$, and the backward flow sends $T_{-\epsilon}(\Lambda)$ to an arbitrarily small neighbourhood of $T_\epsilon(\Lambda)$.
\end{definition}
We also introduce the notions of geometric and algebraic full stops, both called full stops for simplicity. We will see in Proposition \ref{prop:full-geo=>alg} that a geometric full stop is always an algebraic full stop.
\begin{definition}
Let $M$ be compact. A compact subanalytic Legendrian $\Lambda \subseteq S^{*}M$ is called a geometric full stop if for a collection of generalized linking spheres at infinity $\Sigma \subseteq S^{*}M$ of $\Lambda$, there exists a compactly supported positive Hamiltonian on $S^{*}M \backslash \Lambda$ such that the Hamiltonian flow sends $\Sigma$ to an arbitrarily small neighbourhood of $T_{-\epsilon}(\Lambda)$.
More generally, $\Lambda \subset S^{*}M$ is called an algebraic full stop if the category of compact objects $\Sh^c_\Lambda(M)$ is proper.
\end{definition}
\begin{example}
There is a large class of examples of swappable stops and full stops in Section \ref{sec:sphere-crit}. Here are two simple classes of examples. (1)~For a subanalytic triangulation $\mathcal{S} = \{X_\alpha\}_{\alpha \in I}$, the union of unit conormal bundles $\bigcup_{\alpha \in I}N^{*}_\infty X_\alpha$ is an algebraic full stop (we suspect that it is also a geometric full stop and a swappable stop, but we cannot prove that).
(2)~For an exact symplectic Landau-Ginzburg model $\pi: T^*M \rightarrow \mathbb{C}$, the Lagrangian skeleton $\mathfrak{c}_F$ of a regular fiber at infinity $F = \pi^{-1}(\infty)$ is a swappable stop, and when $\pi$ is a Lefschetz fibration it is a geometric full stop.
\end{example}
We are now able to state our main result, which provides a general criterion for the microlocalization functor $m_\Lambda$ to be spherical.
\begin{theorem}[Theorem \ref{thm:main-fun}]\label{thm:main}
Let $\Lambda \subseteq S^{*}M$ be a compact subanalytic Legendrian. Suppose $\Lambda$ is either a full stop or a swappable stop. Then the microlocalization functor along $\Lambda$ and its left adjoint
$$m_\Lambda: \Sh_\Lambda(M) \leftrightharpoons \msh_\Lambda(\Lambda) : m_\Lambda^l$$
form a spherical adjunction.
\end{theorem}
Restricting attention to the pair of sheaf categories of compact objects, and the corresponding pair of sheaf categories of proper objects when the manifold is compact, which are the sheaf theoretic models of suitable versions of Fukaya categories, we can show the following corollary.
\begin{corollary}\label{cor:main}
Let $\Lambda \subseteq S^{*}M$ be a closed subanalytic Legendrian. Suppose $\Lambda$ is either a swappable stop or a geometric full stop. Then the microlocalization functor along $\Lambda$ on the sheaf category of objects with perfect stalks
$$m_\Lambda: \Sh_\Lambda^b(M) \rightarrow \msh^b_\Lambda(\Lambda)$$
is a spherical functor. Respectively, the left adjoint of the microlocalization functor on the sheaf category of compact objects
$$m_\Lambda^l: \msh^c_\Lambda(\Lambda) \rightarrow \Sh^c_\Lambda(M)$$
is also a spherical functor.
\end{corollary}
\begin{remark}\label{rem:comicro-cup}
According to \cite{Ganatra-Pardon-Shende3}*{Proposition 7.24} there is a commutative diagram between microlocal sheaf categories and wrapped Fukaya categories
\[\xymatrix{
\mathcal{W}(F) \ar[r]^{\sim\hspace{10pt}} \ar[d]_{\cup_F} & \msh^c_{\mathfrak{c}_F}(\mathfrak{c}_F) \ar[d]^{m_{\mathfrak{c}_F}^*}\\
\mathcal{W}(T^*M, F) \ar[r]^{\sim} & \Sh^c_{\mathfrak{c}_F}(M).
}\]
Therefore the second part of our theorem recovers the result by Sylvan \cite{SylvanOrlov} that
$$\cup_F: \mathcal{W}(F) \rightarrow \mathcal{W}(X, F)$$
is spherical in the case $X = T^*M$. However, unlike \cite{SylvanOrlov}, we are able to construct the left and right adjoint functors explicitly in the proof.
\end{remark}
\begin{remark}\label{rem:micro-cap}
For a Lefschetz fibration $\pi: T^*M \rightarrow \mathbb{C}$
with regular fiber at infinity $F = \pi^{-1}(\infty)$, $\mathcal{W}(T^*M, F)$ is generated by Lagrangian thimbles \cite{GPS2}*{Corollary 1.14} and is a proper category \cite{Ganatra-Pardon-Shende3}*{Proposition 6.7} (when $M = T^n$ and $F$ is the Weinstein thickening of the FLTZ skeleton \cite{FLTZCCC,RSTZSkel}, this is also proved by \cite{KuwaCCC}), and hence
$$\Sh^b_{\mathfrak{c}_F}(M) \simeq \Sh^c_{\mathfrak{c}_F}(M) \simeq \mathcal{W}(T^*M, F).$$
Since it is expected that $\mathcal{W}(T^*M, F) \simeq \mathcal{FS}(T^*M, \pi)$\footnote{As pointed out in \cite{Ganatra-Pardon-Shende3}*{Footnote~2}, if one takes $\mathcal{W}(T^*M, F)$ as the definition of the Fukaya-Seidel category then this is tautological. However, a comparison result between $\mathcal{W}(T^*M, F)$ and the Fukaya-Seidel category defined in \cite{Seidelbook}*{Part~3} is not yet in the literature.}, there should be an equivalence $\Sh^b_{\mathfrak{c}_F}(M) \simeq \mathcal{FS}(T^*M, \pi)$ (when $M = T^n$, Zhou has sketched a proof in his thesis \cite{ZhouThesis}). Therefore, our theorem should be viewed as a sheaf theory version of the result \cite{AbGan} that
$$\cap_F: \mathcal{FS}(T^*M, \pi) \rightarrow \mathcal{F}(F)$$
is spherical. However, since we do not know that there is a commutative diagram
\[\xymatrix{
\mathcal{FS}(T^*M, \pi) \ar[d]_{\cap_F} \ar[r] & \Sh^b_{\mathfrak{c}_F}(M) \ar[d]^{m_{\mathfrak{c}_F}} \\
\mathcal{F}(F) \ar[r] & \msh^b_{\mathfrak{c}_F}(\mathfrak{c}_F),
}\]
the result of \cite{AbGan} does not directly follow from ours.
\end{remark}
We can write down the spherical twists and cotwists as follows.
Previous work of the first author \cite{Kuo-wrapped-sheaves} has defined the positive wrapping functor $\mathfrak{W}_\Lambda^+$ (resp.~negative wrapping functor $\mathfrak{W}_\Lambda^-$) sending an arbitrary sheaf in $\mathrm{Sh}(M)$ to $\mathrm{Sh}_\Lambda(M)$ by a colimit (resp.~a limit) of positive (resp.~negative) wrappings into $\Lambda$. The spherical cotwist (resp.~the dual cotwist) $\mathrm{Sh}_\Lambda(M) \rightarrow \mathrm{Sh}_\Lambda(M)$ for $m_\Lambda$ is explicitly the functor defined by wrapping negatively (resp.~positively) around $S^{*}M$ once along the Reeb flow.
\begin{proposition}
Let $\Lambda \subseteq S^{*}M$ and let $T_t: S^{*}M \rightarrow S^{*}M$ be a Reeb flow. Then the spherical cotwist and dual cotwist are the negative and positive wrap-once functors (where $\epsilon > 0$ is sufficiently small)
$${S}_\Lambda^- = \mathfrak{W}_\Lambda^- \circ T_{-\epsilon}, \,\,\, {S}_\Lambda^+ = \mathfrak{W}_\Lambda^+ \circ T_{\epsilon}.$$
\end{proposition}
\begin{remark}
By \cite[Proposition 7.24]{Ganatra-Pardon-Shende3}, we know that the cup functor $\cup_F$ on partially wrapped Fukaya categories is isomorphic to the left adjoint of microlocalization $m_\Lambda^l$ on sheaf categories. Therefore, we have a commutative diagram
\[\xymatrix{
\mathcal{W}(T^*M, F) \ar[r]^{\sim} \ar[d]_{\mathcal{S}_F^\pm} & \mathrm{Sh}^c_{\mathfrak{c}_F}(M) \ar[d]^{S_{\mathfrak{c}_F}^\pm}\\
\mathcal{W}(T^*M, F) \ar[r]^{\sim} & \mathrm{Sh}^c_{\mathfrak{c}_F}(M),
}\]
such that our wrap-once functor $S_{\mathfrak{c}_F}^\pm$ is isomorphic to the wrap-once functor of Sylvan \cite{SylvanOrlov}.
\end{remark}
We will also write down a formula for the spherical twists and dual twists in Corollary \ref{cor:twist} of Section \ref{sec:semi-ortho}, which can be interpreted as monodromy functors.
\subsubsection{Spherical pairs and variation of skeleta}
We know that the data of a spherical adjunction
$$F: \sA \leftrightharpoons \sB: F^l$$
is equivalent to a perverse sheaf of categories on a disk with one singularity \cite{KapraSchober}, where the stalk at $0 \in \mathbb{D}^2$ is the nearby category $\sA$ while the stalks at $(0, 1]$ are the vanishing category $\sB$. By considering a double cut of the real interval $[-1, 1] \subset \mathbb{D}^2$, we can also define a spherical pair \cite{KapraSchober}
$$\sB_- \xrightleftharpoons{F_-} \sC \xleftrightharpoons{F_+} \sB_+$$
where the two vanishing categories $\sB_\pm$ are related by a pair of equivalences, and the nearby category $\sC$ admits semi-orthogonal decompositions by $\sA$ and $\sB_\pm$.
Given our formalism of spherical adjunctions, we will prove, as a corollary, the existence of spherical pairs which give rise to non-trivial equivalences of microlocal sheaf categories over different Lagrangian skeleta that do not a priori require non-characteristic deformations (while in the known examples of such equivalences \cite{DonovanKuwa,ZhouVarI} the Lagrangian skeleta are related by non-characteristic deformations).
\begin{definition}
Let $\Lambda_\pm \subseteq S^{*}M$ be two disjoint closed subanalytic Legendrian stops. Suppose there exist both a positive and a negative compactly supported Hamiltonian flow that sends $\Lambda_+$ to an arbitrarily small neighbourhood of $\Lambda_-$, and whose backward flows send $\Lambda_-$ to an arbitrarily small neighbourhood of $\Lambda_+$. Then $(\Lambda_-, \Lambda_+)$ is called a swappable pair.
\end{definition}
\begin{remark}
When $\Lambda_\pm \subseteq S^{*}M$ are Lagrangian skeleta of Weinstein hypersurfaces $F_\pm \subseteq S^{*}M$, we do not know whether the pairs $(X, F_\pm)$ are in fact Weinstein homotopic, though in some examples we will mention we suspect that they are. Moreover, it is in general a hard question when a singular Lagrangian arises as the skeleton of a Weinstein manifold \cite{WeinRevisit}*{Problem 1.1~\&~Remark 1.2}.
\end{remark}
We will show that a swappable pair of Legendrian stops produces a spherical pair.
\begin{theorem}[Theorem \ref{thm:sphere-pair-var}]
Let $\Lambda_\pm \subseteq S^{*}M$ be a swappable pair of closed Legendrian stops. Then $\mathrm{Sh}_{\Lambda_-}(M) \simeq \mathrm{Sh}_{\Lambda_+}(M)$, $\mu sh_{\Lambda_-}(\Lambda_-) \simeq \mu sh_{\Lambda_+}(\Lambda_+)$, and there is a spherical pair
$$\mathrm{Sh}_{\Lambda_-}(M) \rightleftharpoons \mathrm{Sh}_{\Lambda_+ \cup \Lambda_-}(M) \leftrightharpoons \mathrm{Sh}_{\Lambda_+}(M).$$
\end{theorem}
\begin{remark}
As in the previous result, we can show that the spherical pair restricts to the subcategories of compact objects of the sheaf categories, which therefore leads to a result on partially wrapped Fukaya categories.
\end{remark}
\begin{remark}
For symplectic topologists, this result may seem unsurprising since one may suspect that the corresponding Weinstein pairs turn out to be Weinstein homotopic. However, when considering Fukaya-Seidel categories of a Landau-Ginzburg potential, the Weinstein hypersurfaces $F_\pm$ can be fibers of different potential functions. Therefore, studying the behaviour of their Lagrangian skeleta provides a way to compare the categories directly.
\end{remark}
\subsection{Main result on duality}
Another important aspect of noncommutative geometry is the study of bimodules.
For example, let $A$ be a $k$-algebra; then the Hochschild homology $\HH_*(A)$, defined using the standard bar resolution, can be viewed as the self-tensor $A \otimes_{A \otimes A^{op}} A$ of $A$ as a bimodule.
In general, if $\sC_0$ is an (idempotent complete) small stable category, bimodules are defined to be bifunctors valued in the coefficient category, $\mathrm{Fun}^{ex}(\sC_0 \otimes \sC_0^{op}, \cV)$,
and abstract theory implies that they are the same as colimit-preserving endofunctors $\End^L(\sC)$ of the Ind-completion $\sC = \Ind(\sC_0)$.
In the setting of sheaf theory, a natural source of such functors is given geometrically by convolution, i.e., the assignment
\begin{align*}
\mathrm{Sh}(X \times X) &\rightarrow \mathrm{Fun}^L(\mathrm{Sh}(X), \mathrm{Sh}(X)) \\
K &\mapsto (F \mapsto K \circ F \coloneqq {p_2}_! ( K \otimes p_1^* F)).
\end{align*}
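As a quick consistency check (spelled out only for illustration), the kernel given by the constant sheaf on the diagonal $\Delta \subseteq X \times X$ is sent to a functor isomorphic to the identity: writing $\delta: X \rightarrow X \times X$ for the diagonal embedding, so that $1_\Delta \simeq \delta_! 1_X$, the projection formula together with $p_1 \circ \delta = p_2 \circ \delta = \mathrm{id}_X$ gives
$$1_\Delta \circ F = {p_2}_! ( 1_\Delta \otimes p_1^* F) \simeq {p_2}_! \delta_! \delta^* p_1^* F \simeq F.$$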
A natural question to ask is whether all such functors arise in this way.
This was first studied in the algebro-geometric setting by Mukai \cite{Mukai}, which is why such functors are sometimes referred to as Fourier-Mukai transforms,
and later by Orlov \cite{Orlov} and To\"en \cite{Toen-Morita-theory}.
We show that a similar phenomenon holds in the microlocal sheaf setting.
\begin{theorem}\label{fmt}
Let $M$ and $N$ be real analytic manifolds and let $\Lambda \subseteq S^* M$, $\Sigma \subseteq S^* N$
be closed subanalytic singular isotropics.
Then, the identification $\mathrm{Sh}_\Lambda(M)^\vee = \mathrm{Sh}_{-\Lambda}(M)$ induces an equivalence
$$ \mathrm{Sh}_{-\Lambda \times \Sigma}(M \times N) = \mathrm{Fun}^L(\mathrm{Sh}_\Lambda(M),\mathrm{Sh}_\Sigma(N))$$
which is given by $K \mapsto (H \mapsto K \circ H)$ for $H \in \mathrm{Sh}_\Lambda(M)$.
\end{theorem}
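For instance, taking $N$ to be a point, so that $S^*N = \varnothing$ and $\mathrm{Sh}_\Sigma(N) = \cV$, the statement reduces to the identification $\mathrm{Sh}_{-\Lambda}(M) = \mathrm{Fun}^L(\mathrm{Sh}_\Lambda(M), \cV) = \mathrm{Sh}_\Lambda(M)^\vee$ that it starts from; the content of the theorem lies in allowing a general target pair $(N, \Sigma)$.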
Here we follow the strategy of Gaitsgory \cite{Gaitsgory1} and Ben-Zvi, Francis, and Nadler \cite{Ben-Zvi-Francis-Nadler}:
Lurie constructed a symmetric monoidal structure $\otimes$ on $\mathrm{Pr}^L_{\mathrm{st}}$, the (very large) category of presentable categories with morphisms given by left adjoints.
Thus, one can talk about dualizable objects, and a standard result \cite[Proposition 4.10]{Hoyois-Scherotzke-Sibilla} is that
if $\sC$ is compactly generated, then it is dualizable via the triple $(\sC^\vee,\epsilon,\eta)$, where $\sC^\vee \coloneqq \Ind(\sC^{c,op})$
is the Ind-completion of the opposite category of its compact objects and the unit and counit, $\eta$ and $\epsilon$,
are both given by the diagonal bimodule $\id_{\sC^c}$.
Furthermore, there is an identification $\mathrm{Fun}^L(\sC,\sD) = \sC^\vee \otimes \sD$ and, since duals are unique, it is enough to exhibit $\mathrm{Sh}_{-\Lambda}(M)$ as a dual of $\mathrm{Sh}_\Lambda(M)$.
\begin{definition_theorem}
Denote by $\Delta: M \hookrightarrow M \times M$ the diagonal, by $p: M \rightarrow \{*\}$ the projection,
and by $\iota_{-\Lambda \times \Lambda}^*: \mathrm{Sh}(M \times M) \rightarrow \mathrm{Sh}_{-\Lambda \times \Lambda}(M \times M)$ the left adjoint of the inclusion $\mathrm{Sh}_{-\Lambda \times \Lambda}(M \times M) \subset \mathrm{Sh}(M \times M)$.
Then the triple $(\mathrm{Sh}_{-\Lambda}(M),\epsilon,\eta)$, where
\begin{equation}\label{formula: standard_duality_data}
\begin{split}
\epsilon &= p_! \Delta^* : \mathrm{Sh}_{-\Lambda \times \Lambda}(M \times M) \rightarrow \cV \\
\eta &= \iota_{\Lambda \times -\Lambda}^* \Delta_* p^*
: \cV \rightarrow \mathrm{Sh}_{\Lambda \times -\Lambda}(M \times M),
\end{split}
\end{equation}
exhibits $\mathrm{Sh}_{-\Lambda}(M)$ as a dual of $\mathrm{Sh}_\Lambda(M)$.
As a consequence, there is an identification $\mathrm{Sh}_{-\Lambda}(M) = \mathrm{Sh}_\Lambda(M)^\vee$
and we call the induced duality $\mathbb{D}_{\Lambda}: \mathrm{Sh}_{-\Lambda}^c(M)^{op} \xrightarrow{\sim} \mathrm{Sh}_\Lambda^c(M)$
the \textit{standard duality} associated to the pair $(M,\Lambda)$.
\end{definition_theorem}
Denote by $\mathrm{Sh}_\Lambda^b(M)$ the subcategory of $\mathrm{Sh}_\Lambda(M)$ consisting of sheaves with perfect stalks.
Assume $M$ is compact; then $\mathrm{Sh}_\Lambda^b(M) \subseteq \mathrm{Sh}_\Lambda^c(M)$.
Classically, there is a Verdier duality
\begin{align*}
D_{M}: \mathrm{Sh}_{-\Lambda}^b(M)^{op} &\xrightarrow{\sim} \mathrm{Sh}_\Lambda^b(M) \\
F &\mapsto \sHom(F, \omega_M)
\end{align*}
where $\omega_M = p^! 1_\cV$ is the dualizing sheaf of $M$.
One can show that, on $\mathrm{Sh}_{-\Lambda}^b(M)^{op}$, the standard duality $\mathbb{D}_{\Lambda}$
is given by $S_\Lambda^+ D_{M} \otimes \omega_M^{-1}$ (Proposition \ref{identifying_Verdier_with_standard}).
Thus, if $S_\Lambda^+$ is invertible, the Verdier duality $D_{M}$ can be extended to an equivalence $\mathrm{Sh}_{-\Lambda}^c(M)^{op} \xrightarrow{\sim} \mathrm{Sh}_\Lambda^c(M)$,
which, by taking Ind-completions, provides another duality triple $(\mathrm{Sh}_{-\Lambda}(M),\epsilon^V,\eta^V)$.
We show that the converse is also true.
\begin{theorem}[Theorem \ref{converse-statement-Verdier}]
Let $M$ be a compact manifold, $\Lambda \subseteq S^* M$, and denote by $\epsilon^V$ the colimit-preserving functor
$$p_* \Delta^!: \mathrm{Sh}_{-\Lambda \times \Lambda}(M \times M) \rightarrow \cV.$$
There exists an object $\eta^V$, which we identify with a colimit-preserving functor
$$\eta^V: \cV \rightarrow \mathrm{Sh}_{\Lambda \times -\Lambda}(M \times M),$$
such that the pair $(\epsilon^V,\eta^V)$ provides duality data in $\mathrm{Pr}^L_{\mathrm{st}}$ if and only if $S_\Lambda^+$ is invertible.
In this case, $\eta^V$, up to a twist by $\omega_M$, corresponds to the cotwist $S_\Lambda^-$, and
the induced duality on $\mathrm{Sh}_{-\Lambda}^c(M)^{op} = \mathrm{Sh}_\Lambda^c(M)$ restricts to the Verdier duality $D_{M}$ on $\mathrm{Sh}_\Lambda^b(M)$.
\end{theorem}
It is not always true that $S_\Lambda^+$ is invertible. In fact, in Section \ref{sec:example} we provide an explicit example where $S_\Lambda^+$ fails to be an equivalence.
Moreover, we can show that the functor $S_\Lambda^- \otimes \omega_M$, which relates $\mathbb{D}_{\Lambda}$ and $D_{M}$ on the proper subcategory of compactly supported sheaves $\mathrm{Sh}^b_\Lambda(M)_0$, turns out to be the Serre functor on $\mathrm{Sh}^b_\Lambda(M)_0$, which implies a folklore conjecture on Fukaya categories in the case of cotangent bundles.
\begin{proposition}[Proposition \ref{prop:serre}]
Let $\Lambda \subseteq S^{*}M$ be a full or swappable subanalytic compact Legendrian stop. Then $S_\Lambda^- \otimes \omega_M$ is the Serre functor on the category $\mathrm{Sh}^b_\Lambda(M)_0$ of sheaves microsupported on $\Lambda$ with perfect stalks and compact supports. In particular, when $M$ is orientable, $S_\Lambda^-[-n]$ is the Serre functor on $\mathrm{Sh}^b_\Lambda(M)_0$.
\end{proposition}
\begin{remark}
Since our wrap-once functor is isomorphic to the wrap-once functor of Sylvan \cite{SylvanOrlov}, we can prove that the negative wrap-once functor of Sylvan
$$\mathcal{S}_\Lambda^-: \mathrm{Prop}\,\mathcal{W}(T^*M, F) \rightarrow \mathrm{Prop}\,\mathcal{W}(T^*M, F)$$
is the Serre functor when $M$ is orientable. In particular, when $F = \pi^{-1}(\infty)$ is the fiber of a symplectic Lefschetz fibration, $\mathcal{S}_\Lambda^-$ is the Serre functor on $\mathcal{W}(T^*M, F)$.
\end{remark}
\begin{remark}
Spherical adjunctions together with a compatible Serre functor, in the smooth and proper setting, imply the existence of a weak relative right Calabi-Yau structure \cite{KPSsphericalCY}, but we do not expect the relative Calabi-Yau structures for general Weinstein pairs to be proved this way. See the discussion in Remark \ref{rem:relativeCY}.
\end{remark}
\subsection{Sheaf theoretic wrapping, small and large}
We mention one technical point behind the various arguments of this paper.
The Guillermou-Kashiwara-Schapira sheaf quantization \cite{Guillermou-Kashiwara-Schapira} allows us to define the notion of isotopies of sheaves, or simply wrappings, which we will recall in Section \ref{sec:isotopies_of_sheaves}.
For a sheaf $F \in \mathrm{Sh}(M)$ and a positive contact isotopy $T_t$ on $S^* M$, there is a family of sheaves $F_t$ and morphisms $F_s \rightarrow F_t$, $s \leq t$,
such that $F_0 = F$ and $\mathrm{SS}^\infty(F_t) = T_t(\mathrm{SS}^\infty(F))$.
Such families of sheaves allow us to do two things.
First, one can displace microsupport at infinity by small wrappings, which is a sheaf theoretic version of demanding generic intersection of Lagrangians in the Floer setting.
For two sheaves $F, G \in \mathrm{Sh}(M)$, although the object $\Hom(F,G)$ is always defined, it is usually easier to understand when $\mathrm{SS}^\infty(F) \cap \mathrm{SS}^\infty(G) = \varnothing$.
Furthermore, when $\mathrm{SS}^\infty(F)$ and $\mathrm{SS}^\infty(G)$ are both Legendrians, and assuming $T_t (\mathrm{SS}^\infty(G))$ intersects $\mathrm{SS}^\infty(F)$ only at $t = 0$,
the jump of $\Hom(F,G_t)$ from $t < 0$ to $t > 0$ can be measured, tautologically, by $\mu sh_\Lambda(\Lambda)$ where $\Lambda \coloneqq \mathrm{SS}^\infty(F) \cap \mathrm{SS}^\infty(G)$,
through microlocalization.
Secondly, since the isotopy $T_t$ pushes away microsupport, one can in fact cut off the microsupport by considering large wrappings.
That is, the left (resp.~right) adjoint of the inclusion $\mathrm{Sh}_X(M) \subseteq \mathrm{Sh}(M)$, whose existence is guaranteed locally by the microlocal cut-off lemma \cite[Section 5.2]{KS},
can be described through the global geometry by the functor $\wrap_\Lambda^+$ (resp.~$\wrap_\Lambda^-$) defined by taking the colimit (resp.~limit) over larger and larger positive (resp.~negative) wrappings. (See Theorem \ref{w=ad} for details.)
Combining the two facts in Section \ref{sec:doubling}, we give a geometric description of the left (resp.~right) adjoint of the microlocalization functor
$m_\Lambda: \mathrm{Sh}_\Lambda(M) \rightarrow \mu sh_\Lambda(\Lambda)$ by decomposing it into an inclusion, called the doubling functor $w_\Lambda$, followed by a quotient given by the positive (resp.~negative) wrapping functor $\wrap_\Lambda^+$ (resp.~$\wrap_\Lambda^-$):
$$ \mu sh_\Lambda(\Lambda) \hookrightarrow \mathrm{Sh}_{T_\epsilon(\Lambda) \cup T_{-\epsilon}(\Lambda)}(M) \twoheadrightarrow \mathrm{Sh}_\Lambda(M).$$
The doubling functor induces a fiber sequence of functors $T_{-\epsilon} \rightarrow T_\epsilon \rightarrow w_\Lambda \circ m_\Lambda$, from which the spherical dual cotwist given by wrapping around positively once, $S_\Lambda^+$ (resp.~the spherical cotwist given by wrapping around negatively once, $S_\Lambda^-$), naturally arises when we apply the positive (resp.~negative) wrapping functors. This turns out to be the key ingredient in our proof.
\section{Preliminaries}
We give a quick review of the microlocal sheaf theory developed by Kashiwara and Schapira in \cite{KS} within the modern categorical setting, as well as results from more recent work such as \cite{Nadler-pants, Ganatra-Pardon-Shende3}.
The purpose of this section is to fix notation and collect previous results which we will use in the main body of the text.
\subsection{Microlocal sheaf theory}
Throughout this paper, we fix once and for all a rigid stable symmetric monoidal category $\cV_0$ in the sense of Hoyois, Scherotzke, and Sibilla \cite{Hoyois-Scherotzke-Sibilla}, and our sheaves will take coefficients in $\cV \coloneqq \Ind(\cV_0)$, the Ind-completion of $\cV_0$. As discussed in \cite{Hoyois-Scherotzke-Sibilla}, stable categories over $\cV$ enjoy many of the formal properties of the ordinary stable categories studied in \cite{Lurie2}, and one can develop sheaf theory in this setting. Special cases of $\cV$ include dg categories over a field $\Bbbk$, those over the integers $\mathbb{Z}$, and the category of spectra $\mathrm{Sp}$.
Now a sheaf $F$ on a topological space $X$ is a presheaf $F: \Op_X^{op} \rightarrow \cV$ satisfying the standard local-to-global condition:
for an open set $U \subseteq X$ and an open covering $\cU \coloneqq \{U_i\}$ of $U$, the canonical map
$$ F(U) \rightarrow \lim_{U_I \in C_\cU} F(U_I)$$
is an isomorphism, where $C_\cU$ is the \v{C}ech nerve formed by finite intersections of open sets in $\cU$.
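For instance (the simplest case, spelled out only as an illustration), for a cover of $U$ by two open sets $U_1$ and $U_2$, the condition amounts to the statement that the square of restriction maps is a pullback, i.e.
$$F(U) \simeq F(U_1) \times_{F(U_1 \cap U_2)} F(U_2)$$
in $\cV$.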
We use $\mathrm{Sh}(X)$ to denote the category of sheaves on $X$.
When we restrict to locally compact Hausdorff topological spaces, the functor $\mathrm{Sh}$ admits the celebrated Grothendieck six-functor formalism.
That is, there exists a tensor product $\otimes$ on $\mathrm{Sh}(X)$ and the corresponding internal Hom $\sHom$, and
for a continuous map $f: X \rightarrow Y$, there exist two adjunction pairs $(f^*,f_*)$ and $(f_!,f^!)$, and various compatibility relations,
base change in particular, hold.
To a sheaf $F$ on $X$, one can associate a closed set $\supp(F) = \overline{ \{x \mid F_x \neq 0 \} }$,
the support of $F$, indicating the locus where $F$ is non-trivial.
When we further restrict to finite dimensional manifolds, one has access to a more powerful invariant,
the microsupport $\mathrm{SS}(F) \subseteq T^*M$, which is defined in \cite[Proposition 5.1.1]{KS}.
This is a conic closed subset of the cotangent bundle $T^* M$, roughly indicating the codirections in which the sheaf changes,
and a deep theorem \cite[Theorem 6.5.4]{KS} asserts that it is always coisotropic.
Let $0_M$ be the zero section in $T^*M$. We note that since the microsupport is always a conic subset, we can consider the microsupport at infinity $\mathrm{SS}^\infty(F) = (\mathrm{SS}(F) \setminus 0_M) / \mathbb{R}_{>0}$, which is always a coisotropic subset of the contact manifold $S^*M = (T^* M \setminus 0_M)/\mathbb{R}_{>0}$.
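Two standard examples, recalled only to fix intuition: for a nonzero local system $L$ on $M$ one has $\mathrm{SS}(L) = 0_M$, so $\mathrm{SS}^\infty(L) = \varnothing$; and for a closed submanifold $Z \subseteq M$ one has $\mathrm{SS}(1_Z) = T^*_Z M$, the conormal bundle of $Z$, whose image in $S^*M$ gives $\mathrm{SS}^\infty(1_Z)$.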
Let $T_Z^*M$ be the conormal bundle of a closed submanifold $Z \subseteq M$, and let $N^*_{in}(U)$ (resp.~$N^*_{out}(U)$) be the union of the inward (resp.~outward) conormal bundle and the zero section $0_U$ of a locally closed submanifold $U \subseteq M$ with piecewise $C^1$-boundary (the definition can be generalized to arbitrary locally closed submanifolds; see \cite[Section 5.3]{KS}). For $f: M \rightarrow N$, let $df^*$ and $f_\pi$ be the bundle maps
$$T^*M \xleftarrow{df^*} M \times_N T^*N \xrightarrow{f_\pi} T^*N.$$
Finally, for $X \subset T^*M$, let $-X \subseteq T^*M$ be its image under the antipodal map.
We list here a few ways to estimate microsupports of sheaves and some deformation results which we will use in this paper:
\begin{definition}[{\cite[Definition 5.4.12]{KS}}]
Let $f: M \rightarrow N$ and let $A$ be a closed conic subset of $T^* N$.
We say $f$ is \textit{noncharacteristic} for $A$ if
$$ f_\pi^{-1}(A) \cap T_M^* N \subseteq M \times_N 0_N.$$
For a sheaf $F \in \mathrm{Sh}(N)$, we say $f$ is \textit{noncharacteristic} for $F$ if it is noncharacteristic for $\mathrm{SS}(F)$.
\end{definition}
\begin{proposition}\label{prop:mses}
We have the following results:
\begin{enumerate}
\item (\cite[Proposition 5.1.3]{KS}) If $F \rightarrow G \rightarrow H$ is a fiber sequence in $\mathrm{Sh}(M)$, then
$$ \left( \mathrm{SS}(F) \setminus \mathrm{SS}(H) \right) \cup \left( \mathrm{SS}(H) \setminus \mathrm{SS}(F) \right)
\subseteq \mathrm{SS}(G) \subseteq \mathrm{SS}(F) \cup \mathrm{SS}(H).$$
This is usually referred to as the microlocal triangle inequality.
\item (\cite[Proposition 5.4.1]{KS})
For $F \in \mathrm{Sh}(M)$ and $G \in \mathrm{Sh}(N)$, $\mathrm{SS}(F \boxtimes G) \subseteq \mathrm{SS}(F) \times \mathrm{SS}(G)$.
\item (\cite[Proposition 5.4.2]{KS})
For $F \in \mathrm{Sh}(M)$ and $G \in \mathrm{Sh}(N)$, $$\mathrm{SS}\left(\sHom(\pi_1^*F, \pi_2^*G) \right) \subseteq -\mathrm{SS}(F) \times \mathrm{SS}(G).$$
\item (\cite[Proposition 5.4.4]{KS})
For $f: M \rightarrow N$ and $F \in \mathrm{Sh}(M)$, if $f$ is proper on $\supp(F)$, then $$\mathrm{SS}(f_* F)
\subseteq f_\pi \left( (d f^*)^{-1} \mathrm{SS}(F) \right).$$
\item (\cite[Proposition 5.4.5 and Proposition 5.4.13]{KS})
For $f: M \rightarrow N$ and $F \in \mathrm{Sh}(N)$, if $f$ is noncharacteristic for $F$, then
$$\mathrm{SS}(f^* F) \subseteq df^* ( f_\pi^{-1}(\mathrm{SS}(F)) )$$
and the natural map $f^* F \otimes f^! 1_N \rightarrow f^! F$ is an isomorphism.
If $f$ is furthermore a submersion\footnote{Kashiwara-Schapira use the term smooth morphism for a submersion between manifolds, which comes from the corresponding notion in algebraic geometry.}, the estimate is an equality.
\item (\cite[Proposition 5.4.8]{KS})
Let $Z \subseteq M$ be closed.
If $\mathrm{SS}(F) \cap N^*_{out} (Z) \subseteq 0_M$, then
$$\mathrm{SS}(F_Z) \subseteq N^*_{in}(Z) + \mathrm{SS}(F).$$
Similarly, let $U \subseteq M$ be open.
If $\mathrm{SS}(F) \cap N^*_{in} (U) \subseteq 0_M$, then
$$\mathrm{SS}(F_U) \subseteq N^*_{out}(U) + \mathrm{SS}(F).$$
\item (\cite[Proposition 5.4.14]{KS})
For $F, G \in \mathrm{Sh}(M)$, if $\mathrm{SS}(F) \cap -\mathrm{SS}(G) \subseteq 0_M$, then
$$ \mathrm{SS}(F \otimes G) \subseteq \mathrm{SS}(F) + \mathrm{SS}(G).$$
\item (\cite[Proposition 5.4.14 and Exercise V.13]{KS})
For $F, G \in \mathrm{Sh}(M)$, if $\mathrm{SS}(F) \cap \mathrm{SS}(G) \subseteq 0_M$, then
$$ \mathrm{SS}( \sHom(F,G) ) \subseteq \mathrm{SS}(G) - \mathrm{SS}(F).$$
Write $D'_{M}(F) \coloneqq \sHom(F,1_M)$. If moreover $F$ is cohomologically constructible, then
the natural map $$D'_{M}(F) \otimes G \rightarrow \sHom(F,G)$$ is an isomorphism.
If furthermore $F = \omega_M$, i.e., when $\sHom(G,F) \eqqcolon D_M (G)$ is the Verdier dual,
then $\mathrm{SS}(D_{M} (G) ) = - \mathrm{SS}(G)$.
\item (\cite[Exercise V.7]{KS}, \cite[Section 2.7]{JinTreu})
Let $I$ be a (small) set and $\{F_i\}_{i \in I}$ a family of sheaves on $M$ indexed by $I$.
Then there are microsupport estimates
$$\mathrm{SS}\big( \bigoplus_i F_i\big) \subseteq \overline{ \bigcup_i \mathrm{SS}(F_i)}, \
\mathrm{SS}\big(\prod_i F_i\big) \subseteq \overline{ \bigcup_i \mathrm{SS}(F_i) }.$$
\end{enumerate}
\end{proposition}
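As a simple illustration of item (6) (a standard computation, included only as an example), take $M = \mathbb{R}$, $F = 1_{\mathbb{R}}$ and $Z = [0, \infty)$, so that $F_Z = 1_{[0,\infty)}$. Since $\mathrm{SS}(1_{\mathbb{R}}) = 0_{\mathbb{R}}$, the hypothesis is automatic and the estimate reads
$$\mathrm{SS}(1_{[0,\infty)}) \subseteq N^*_{in}([0,\infty)) = 0_{[0,\infty)} \cup \{(0, \xi) \mid \xi \geq 0\},$$
with the convention that the inward conormal at the boundary point $0$ consists of the covectors which are nonnegative on inward-pointing tangent vectors; a direct check from the definition of the microsupport shows that this containment is in fact an equality.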
\begin{remark}\label{rem:DFotimesG}
When $F$ is cohomologically constructible, by Proposition \ref{prop:mses}~(8) we know that $$\Delta^*\sHom(\pi_1^*{F}, \pi_2^*{G}) \simeq \Delta^*(D'_{M \times M}(\pi_1^*{F}) \otimes \pi_2^*{G}) \simeq D'_{M}({F}) \otimes {G},$$
where, as above, $D'_{M}(F) \coloneqq \sHom(F, 1_M)$. This special case will be of use in the following sections.
\end{remark}
Here are some more delicate microsupport estimates. Let $A, B \subset T^*M$. Define a subset $A \,\widehat +\, B \subseteq T^*M$ such that $(x, \xi) \in A \,\widehat +\, B$ if there exist $(a_n, \alpha_n) \in A$ and $(b_n, \beta_n) \in B$ such that
$$a_n, b_n \rightarrow x, \, \alpha_n + \beta_n \rightarrow \xi, \, |a_n - b_n||\alpha_n| \rightarrow 0.$$
Let $i: M \hookrightarrow N$ be a closed embedding. Then for $A \subset T^*N$, define $i^{\#}(A) \subseteq T^*M$ such that $(x, \xi) \in i^{\#}(A)$ if there exist $(y_n, \eta_n, x_n, \xi_n) \in A \times T^*M$ such that
$$x_n, y_n \rightarrow x, \, \eta_n - \xi_n \rightarrow \xi, \, |x_n - y_n||\eta_n| \rightarrow 0.$$
\begin{proposition}\label{prop:noncharacteristic-ms-es}
We have the following results:
\begin{enumerate}
\item (\cite{KS}*{Theorem 6.3.1}) Let $j: U \hookrightarrow M$ be an open embedding and ${F} \in \mathrm{Sh}(U)$. Then
$$\mathrm{SS}(j_*{F}) \subseteq \mathrm{SS}({F}) \,\widehat +\, N^*_{in}(U), \;\; \mathrm{SS}(j_!{F}) \subseteq \mathrm{SS}({F}) \,\widehat +\, N^*_{out}(U).$$
\item (\cite{KS}*{Corollary 6.4.4})
Let $i: N \hookrightarrow M$ be a closed embedding and ${F} \in \mathrm{Sh}(M)$. Then
$$\mathrm{SS}(i^*{F}) \subseteq i^{\#}\mathrm{SS}({F}).$$
\end{enumerate}
\end{proposition}
By combining the six functors, one can produce functors between sheaves on two topological spaces from sheaves on their product. Let $X_i$, $i = 1, 2, 3$, be locally compact Hausdorff topological spaces, and write $X_{ij} = X_i \times X_j$ for $i < j$, $X_{123} = X_1 \times X_2 \times X_3$, and $\pi_{ij}: X_{123} \rightarrow X_{ij}$ for the corresponding projections. For $F \in \mathrm{Sh}(X_{12})$ and $G \in \mathrm{Sh}(X_{23})$, the \textit{convolution} is defined to be
$$G \circ_{X_2} F \coloneqq {\pi_{13}}_! (\pi_{23}^* G \otimes \pi_{12}^* F ) \in \mathrm{Sh}(X_{13}).$$
When there is no confusion about what $X_2$ is, we will usually suppress it from the notation and simply write $G \circ F$. This is usually the case when $X_1 = \{*\}$, $X_2 = X$, and $X_3 = Y$, and we think of $X$ as the source, $Y$ as the target, and $G \in \mathrm{Sh}(X \times Y)$ as a functor sending $F \in \mathrm{Sh}(X)$ to $G \circ F \in \mathrm{Sh}(Y)$. Note that, from its expression, this functor is colimit-preserving.
\begin{lemma}[{\cite[Proposition 3.6.2]{KS}}]\label{convrad}
For a fixed $G \in \mathrm{Sh}(X_{23})$, the functor $G \circ (-) : \mathrm{Sh}(X_{12}) \rightarrow \mathrm{Sh}(X_{13})$ given by convolution with $G$ has a right adjoint, which we denote by $\sHom^\circ(G,-): \mathrm{Sh}(X_{13}) \rightarrow \mathrm{Sh}(X_{12})$, and which is given by
\begin{equation}\label{cvr}
H \mapsto {\pi_{12}}_* \sHom(\pi_{23}^* G ,\pi_{13}^! H).
\end{equation}
\end{lemma}
\begin{example}
We note that convolution recovers $*$-pullback and $!$-pushforward. For example, let $f: X \rightarrow Y$ be a continuous map and denote by $i: \Gamma_f \subseteq X \times Y$ its graph. Take $X_1 = \{*\}$, $X_2 = X$, and $X_3 = Y$; then for $F \in \mathrm{Sh}(X)$,
$$1_{\Gamma_f} \circ F = {\pi_Y}_! ( 1_{\Gamma_f} \otimes \pi_X^* F) = {\pi_Y}_! i_! i^* \pi_X^* F = f_! F.$$
\end{example}
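Similarly (a routine variant of the above, included for completeness), $*$-pullback is recovered by the transposed graph $i^t: \Gamma_f^t = \{(f(x), x)\} \subseteq Y \times X$: taking $X_1 = \{*\}$, $X_2 = Y$ and $X_3 = X$, for $G \in \mathrm{Sh}(Y)$ we get
$$1_{\Gamma_f^t} \circ G = {\pi_X}_! ( 1_{\Gamma_f^t} \otimes \pi_Y^* G) = {\pi_X}_! i^t_! (i^t)^* \pi_Y^* G = f^* G,$$
since, under the identification $\Gamma_f^t \cong X$, the map $\pi_X \circ i^t$ is the identity and $\pi_Y \circ i^t = f$.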
We note that the base change formula implies that convolution is associative.
\begin{proposition}\label{conv:asso}
Let $F_i \in \mathrm{Sh}(X_{i\,i+1})$ for $i = 1, 2, 3$. Then
$$ F_3 \circ_{X_3} (F_2 \circ_{X_2} F_1) = (F_3 \circ_{X_3} F_2) \circ_{X_2} F_1.$$
In particular, if $G_1$, $G_2 \in \mathrm{Sh}(X \times X)$, then there is an identification of functors
$$ G_2 \circ (G_1 \circ (-) ) = (G_2 \circ_X G_1) \circ (-).$$
\end{proposition}
We will use a relative version of convolution. Let $B$ be a locally compact Hausdorff space viewed as a parameter space. Regarding $F \in \mathrm{Sh}(X_{12} \times B)$ and $G \in \mathrm{Sh}(X_{23} \times B)$ as $B$-families of sheaves, one can similarly define the relative convolution $G \circ|_B F \in \mathrm{Sh}(X_{13} \times B)$ by replacing $\pi_{ij}$ with
$$\pi_{{ij},B}: X_{123} \times B \rightarrow X_{ij} \times B.$$
In the case of manifolds, convolution satisfies a certain compatibility with microsupport. For $A \subseteq T^* M_{12}$ and $B \subseteq T^* M_{23}$, we set
$$B \circ A = \{ (x,\xi,z,\zeta) \in T^* M_{13} \mid \exists (y,\eta), \ (x,\xi,y,\eta) \in A, \ (y,-\eta, z, \zeta) \in B \}.$$
Note that if $A$ and $B$ are Lagrangian correspondences satisfying an appropriate transversality condition, the set $B \circ A$ is the composite Lagrangian correspondence twisted by a minus sign on the second component. Write $q_{ij}: T^* M_{123} \rightarrow T^* M_{ij}$ for the projection on the level of cotangent bundles and $q_{2^a3}$ for the composition of $q_{23}$ with the antipodal map on $T^* M_2$. Then $B \circ A = q_{13} (q_{2^a3}^{-1} B \cap q_{12}^{-1} A)$, and (4), (6) and (3) of Proposition \ref{prop:mses} imply the following corollary.
\begin{corollary}[{\cite[(1.12)]{Guillermou-Kashiwara-Schapira}}]\label{msconv}
Assume the following two conditions:
\begin{enumerate}
\item $\pi_{13}$ is proper on $(M_1 \times \supp(G)) \cap (\supp(F) \times M_3)$;
\item $q_{2^a 3}^{-1} \mathrm{SS}(G) \cap q_{12}^{-1} \mathrm{SS}(F) \cap (0_{M_1} \times T^* M_2 \times 0_{M_3})
\subseteq 0_{M_{123}}.$
\end{enumerate}
Then
$$\mathrm{SS}( G \circ F ) \subseteq \mathrm{SS}(G) \circ \mathrm{SS}(F).$$
\end{corollary}
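As a consistency check (not used later), take $M_1 = \{*\}$ and $G = 1_{\Delta_M} \in \mathrm{Sh}(M \times M)$, whose microsupport is the conormal of the diagonal $\mathrm{SS}(1_{\Delta_M}) = T^*_{\Delta_M}(M \times M) = \{(y, \eta, y, -\eta)\}$. For any conic subset $A \subseteq T^*M$ one computes directly from the definition of the composition that
$$T^*_{\Delta_M}(M \times M) \circ A = A,$$
in agreement with the fact that convolution with $1_{\Delta_M}$ is isomorphic to the identity functor.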
A similar microsupport estimate holds in the $B$-family case. One noticeable difference is that instead of $T^* M_{ij}$ and $T^* M_{123}$ one has to consider $T^* M_{ij} \times T^* B$ and $T^* M_{123} \times (T^* B \times_B T^* B)$. Here $\times_B$ is taken over the diagonal $B \hookrightarrow B \times B$. Moreover, the projection $r_{ij}: T^* M_{123} \times (T^* B \times_B T^* B) \rightarrow T^* M_{ij} \times T^* B$ on the $B$-component is now given by the first projection (with a minus sign) for $ij = 12$, by the addition for $ij = 13$, and by the second projection for $ij = 23$. Other than that, the microsupport estimate is similar to the ordinary case.
The following result shows that microsupport estimates detect the propagation of sections in a continuous family of deformations of open subsets.
\begin{lemma}[non-characteristic deformation lemma, {\cite[Proposition 2.7.2]{KS}}]
Let ${F} \in \mathrm{Sh}(M)$, let $\{U_t\}_{t \in \mathbb{R}}$ be a family of open subsets, and set $Z_t = \bigcap_{t > s}\overline{U_t \setminus U_s}$. Suppose that
\begin{enumerate}
\item $U_t = \bigcup_{s < t} U_s$, for $-\infty < t < +\infty$;
\item $\overline{U_t \setminus U_s} \cap \supp({F})$ is compact, for $-\infty < s < t < +\infty$;
\item $\Gamma_{M \setminus U_t}({F})_x = 0$, for $x \in Z_s \setminus U_t, \, -\infty < s \leq t < +\infty$.
\end{enumerate}
Then for any $t \in \mathbb{R}$ we have
$$\Gamma\bigg(\bigcup_{s \in \mathbb{R}} U_s, {F}\bigg) \xrightarrow{\sim} \Gamma(U_t, {F}).$$
\end{lemma}
\begin{lemma}[microlocal Morse lemma, {\cite[Corollary 5.4.19]{KS}}]\label{prop:morselemma}
Let $F \in \mathrm{Sh}(M)$ and let $\phi: M \rightarrow \mathbb{R}$ be a $C^1$-function such that $\phi: \supp(F) \rightarrow \mathbb{R}$ is proper.
Let $a < b \in \mathbb{R}$.
\begin{enumerate}
\item Assume $d \phi(x) \not \in \mathrm{SS}(F)$ for all $x \in M$ such that $a \leq \phi(x) < b$. Then the natural maps
\begin{align*}
\Gamma(\phi^{-1}(-\infty,b); F)
&\rightarrow \Gamma(\phi^{-1}(-\infty,a];F) \\
&\rightarrow \Gamma(\phi^{-1}(-\infty,a);F)
\end{align*}
are isomorphisms.
\item Assume $-d \phi(x) \not \in \mathrm{SS}(F)$ for all $x \in M$ such that $a < \phi(x) \leq b$ (resp.~$a \leq \phi(x) < b$).
Then the natural map
$$ \Gamma_{\phi^{-1}(-\infty,a]}(M; F) \rightarrow \Gamma_{\phi^{-1}(-\infty,b]}(M; F)$$
(resp.~$\Gamma_c(\phi^{-1}(-\infty,a);F) \rightarrow \Gamma_c(\phi^{-1}(-\infty,b);F)$)
is an isomorphism.
\end{enumerate}
\end{lemma}
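For orientation, consider the special case $F = 1_M$ with $M$ compact, so that $\mathrm{SS}(F) = 0_M$: the hypothesis of (1) then only requires that $\phi$ have no critical point in $\phi^{-1}([a,b))$, and the lemma recovers the classical Morse-theoretic statement that
$$\Gamma(\phi^{-1}(-\infty,b); 1_M) \xrightarrow{\sim} \Gamma(\phi^{-1}(-\infty,a); 1_M),$$
i.e.\ the cohomology of sublevel sets does not change as long as no critical value is crossed.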
\subsection{Constructible sheaves}
Under some mild regularity assumptions, having an isotropic microsupport implies that the sheaf is constructible.
Recall that a \textit{stratification} $\cS$ of $X$ is a decomposition of $X$ into a disjoint union of locally closed subsets
$\{ X_s \}_{ s \in \cS}$.
In this paper, we work with stratifications which are locally finite, consist of subanalytic submanifolds, and satisfy the \textit{frontier
condition} that $\overline{X_s} \setminus X_s$ is a disjoint union of strata in $\cS$.
In this case, there is an ordering defined by $s \leq t$ if and only if $X_t \subseteq \overline{X_s}$.
We also use $\str(s)$ to denote $\coprod_{t \leq s} X_t$, which is the smallest open set built out of strata that
contains $X_s$, and we note that $s \leq t$ if and only if $\str(s) \subseteq \str(t)$.
\begin{definition}
For a given stratification $\cS$,
a sheaf $F$ is said to be $\cS$-constructible if $F|_{X_s}$ is a local system for all
$s \in \cS$.
We denote the subcategory of $\mathrm{Sh}(X)$ consisting of such sheaves by $\mathrm{Sh}_{\cS}(X)$.
A sheaf $F$ is said to be constructible if $F$ is $\cS$-constructible for some stratification $\cS$.
\end{definition}
We use $\cS\text{-}\mathrm{Mod}$ to denote $\mathrm{Fun}(\cS^{op},\cV)$
and note that there is a canonical functor
\begin{align*}
\cS\text{-}\mathrm{Mod} &\rightarrow \mathrm{Sh}_{\cS}(X) \\
1_s &\mapsto 1_{X_s}
\end{align*}
where $1_s \in \cS\text{-}\mathrm{Mod}$ is the indicator functor representing $s$.
The following lemma provides a criterion for when this functor is an equivalence:
\begin{lemma}[{\cite[Lemma 4.2]{Ganatra-Pardon-Shende3}}]\label{lem:sheaves_by_representations}
Let $\Pi$ be a poset with a map to $\Op_M$, and let $\cV[\Pi]$ denote its stabilization.
The following are equivalent:
\begin{itemize}
\item $\Gamma(U;1_\cV) = 1_\cV$ for $U \in \Pi$, and $\Gamma(U;1_\cV) \rightarrow \Gamma(U \setminus V;1_\cV)$ is an isomorphism
whenever $U \not \subseteq V$.
\item The composition $\cV[\Pi] \rightarrow \cV[\Op_M] \rightarrow \mathrm{Sh}(M)$ is fully faithful,
where the second map is given by $!$-pushforward.
\end{itemize}
\end{lemma}
Since simplices are contractible, the above lemma implies the following proposition from the same paper.
\begin{proposition}[{\cite[Lemma 4.7]{Ganatra-Pardon-Shende3}}]
Let $\cS$ be a triangulation of $M$. Then $\mathrm{Sh}_\cS(M) = \cS\text{-}\mathrm{Mod}$.
\end{proposition}
Recall that a stratification is called a \textit{triangulation} if $X = |K|$ is a realization of some simplicial complex $K$
and $\cS \coloneqq \{ |\sigma| \mid \sigma \in K \}$ is given by the simplices of $K$.
Since simplices are contractible, the conditions in the above lemma are satisfied by triangulations.
Let $N^*_\infty (X_s)$ be the conormal bundle at infinity of the locally closed submanifold $X_s$.
We use the notation $N^*_\infty \cS \coloneqq \bigcup_{s \in \cS} N^*_\infty (X_s)$ and call it the conormal of the stratification.
In general, $\mathrm{Sh}_{N^*_\infty \cS}(M)$ and $\mathrm{Sh}_\cS(M)$ can be different \cite[Example 2.52]{Kuo-wrapped-sheaves}.
Nevertheless, they coincide when the stratification is Whitney:
\begin{definition}
We say a stratification $\cS = \{X_s\}$ is Whitney if for any $X_s \subseteq \overline{X_t}$ and
any sequences $x_n \in X_t$ and $y_n \in X_s$ both converging to a point $x \in X_s$,
if the sequence of lines $\overleftrightarrow{x_n y_n}$ converges to $l$ and the sequence $T_{x_n} X_t$ converges to $\tau$,
then $\tau \supseteq l$.
\end{definition}
\begin{proposition}[{\cite[Prop. 8.4.1]{KS}, \cite[Proposition 4.8]{Ganatra-Pardon-Shende3}}]
For a Whitney stratification $\cS$ of a $C^1$ manifold $M$, we have $\mathrm{Sh}_{\cS}(M) =
\mathrm{Sh}_{N^*_\infty \cS}(M)$ (i.e., having microsupport contained in $N^*_\infty \cS$ is equivalent to being
$\cS$-constructible).
\end{proposition}
Combining this with the comment on triangulations, we obtain a simple description of sheaves microsupported in $N^*_\infty \cS$
for a $C^1$ Whitney triangulation $\cS$.
\begin{proposition}[{\cite[Proposition 4.19]{Ganatra-Pardon-Shende3}}]\label{mc=cc}
Let $\cS$ be a $C^1$ Whitney triangulation.
Then there is an equivalence
\begin{align*}
\mathrm{Sh}_{N^*_\infty \cS}(M) &= \cS\text{-}\mathrm{Mod} \\
1_{X_s} &\leftrightarrow 1_s
\end{align*}
where $1_s$ is the indicator defined by
$$1_s(t) =
\begin{cases}
1, &t \leq s, \\
0, &\text{otherwise}.
\end{cases}
$$
In particular, the category $\mathrm{Sh}_{N^*_\infty \cS}(M)$ is compactly generated, and its
compact objects $\mathrm{Sh}_{N^*_\infty \cS}^c(M)$ are given by sheaves with compact support and perfect stalks.
\end{proposition}
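To fix ideas, here is a small example (the notation $V_0, V_\pm$ is introduced only for this illustration): for $M = \mathbb{R}$ stratified by $X_- = (-\infty, 0)$, $X_0 = \{0\}$ and $X_+ = (0, \infty)$, an object of $\cS\text{-}\mathrm{Mod} = \mathrm{Fun}(\cS^{op}, \cV)$ amounts to a diagram
$$V_- \longleftarrow V_0 \longrightarrow V_+$$
in $\cV$, since $X_\pm \leq X_0$ in the ordering above. Under the identification with constructible sheaves, $V_0$ plays the role of the stalk at $0$, $V_\pm$ of the generic stalks on the two rays, and the maps are the generization maps.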
\subsection{Isotropic microsupport}\label{ims}
We say a subset $\Lambda \subseteq S^* M$ is isotropic if it can be stratified by isotropic submanifolds.
A standard class of isotropic subsets is given by the conormals $N^*_\infty \cS$ of stratifications $\cS$, which we studied
in the last section.
Assume $M$ is real analytic; we recall that a general isotropic which satisfies a decent regularity condition
is bounded by an isotropic of this form.
\begin{definition}
A subset $Z$ of $M$ is said to be subanalytic at $x$ if there exist an open set $U \ni x$,
compact manifolds $Y_j^i$ $(i = 1, 2, \ 1 \leq j \leq N)$ and analytic morphisms
$f_j^i: Y_j^i \rightarrow M$ such that
$$Z \cap U = U \cap \bigcup_{j=1}^N (f_j^1(Y_j^1) \setminus f_j^2(Y_j^2)).$$
We say $Z$ is subanalytic if $Z$ is subanalytic at $x$ for all $x \in M$.
\end{definition}
\begin{lemma}[{\cite[Corollary 8.3.22]{KS}}]
Let $\Lambda$ be a closed subanalytic isotropic subset of $S^* M$.
Then there exists a $C^\omega$ Whitney stratification $\cS$ such that $\Lambda \subseteq N^*_\infty \cS$.
\end{lemma}
Combining this with the above lemma, we obtain a microlocal criterion for a sheaf $F$ with
subanalytic microsupport to be constructible:
\begin{proposition}[{\cite[Theorem 8.4.2]{KS}}]
Let $F \in \mathrm{Sh}(M)$ and assume $\mathrm{SS}^\infty(F)$ is subanalytic.
Then $F$ is constructible if and only if $\mathrm{SS}^\infty(F)$ is a singular isotropic.
\end{proposition}
Another feature of subanalytic geometry is that relatively compact subanalytic sets form an o-minimal structure.
Thus, one can apply the result of \cite{Czapla} to refine a $C^p$ Whitney stratification to a Whitney triangulation, for $1 \leq p < \infty$.
\begin{lemma}\label{cbs}
Let $\Lambda$ be a subanalytic singular isotropic in $S^* M$.
Then there exists a $C^1$ Whitney triangulation $\cS$ such that $\Lambda \subseteq N^*_\infty \cS$.
\end{lemma}
Combining the above two results, we conclude:
\begin{theorem}\label{extri}
Let $F \in \mathrm{Sh}(M)$ and assume $\mathrm{SS}^\infty(F)$ is a subanalytic singular isotropic.
Then $F$ is $\cS$-constructible for some $C^1$ Whitney triangulation $\cS$.
\end{theorem}
Collectively, sheaves with a fixed subanalytic isotropic microsupport form a category with nice finiteness properties.
Let $\Lambda$ be a subanalytic singular isotropic in $S^* M$,
and pick a Whitney triangulation $\cS$ such that $\Lambda \subseteq N^*_\infty \cS$.
The fact that the inclusion $\mathrm{Sh}_\Lambda(M) \subseteq \mathrm{Sh}_{N^*_\infty \cS}(M) = \cS\text{-}\mathrm{Mod}$
preserves both limits and colimits implies the following finiteness conditions:
\begin{proposition}\label{prop:stopremoval}
Let $\Lambda$ be a subanalytic singular isotropic in $S^* M$.
The category $\mathrm{Sh}_\Lambda(M)$ is compactly generated.
If $\Lambda \subseteq \Lambda^\prime$ is an inclusion of subanalytic singular isotropics,
then the left adjoint of $\mathrm{Sh}_\Lambda(M) \hookrightarrow \mathrm{Sh}_{\Lambda^\prime}(M)$ sends compact objects to compact objects, i.e.,
it restricts to a functor $\mathrm{Sh}_{\Lambda^\prime}^c(M) \twoheadrightarrow \mathrm{Sh}_\Lambda^c(M)$.
\end{proposition}
In fact, there is a concrete description of the fiber of $\mathrm{Sh}_{\Lambda^\prime}^c(M) \twoheadrightarrow \mathrm{Sh}_\Lambda^c(M)$.
Let $\Lambda$ be a singular isotropic and let $(x,\xi) \in \Lambda$ be a smooth point.
Up to a shift, there is a microstalk functor $\mu_{(x,\xi)}:\mathrm{Sh}_\Lambda(M) \rightarrow \cV$ \cite[Proposition 7.5.3]{KS},
which admits a description by sub-level sets of functions whose differentials are transverse to $\Lambda$
\cite[Theorem 4.10]{Ganatra-Pardon-Shende3}.
By applying its left adjoint to the generator $1 \in \cV$, we see that it is
tautologically corepresented by the compact object $\mu_{(x,\xi)}^l (1) \in \mathrm{Sh}_\Lambda^c(M)$.
Furthermore, when there is an inclusion $\Lambda \subseteq \Lambda^\prime$ and $(x,\xi) \in \Lambda$,
the corepresentative $\mu_{(x,\xi)}^l (1) \in \mathrm{Sh}_{\Lambda^\prime}^c(M)$
is sent under $\mathrm{Sh}_{\Lambda^\prime}^c(M) \twoheadrightarrow \mathrm{Sh}_\Lambda^c(M)$ to the corresponding corepresentative
in $\mathrm{Sh}_\Lambda^c(M)$, while
such corepresentatives are tautologically sent to the zero object when $(x,\xi)$ is a smooth point in $ \Lambda^\prime \setminus \Lambda$.
The converse is also true:
\begin{proposition}[Theorem 4.13 of \cite{Ganatra-Pardon-Shende3}]\label{wdstopr}
Let $\Lambda \subseteq \Lambda^\prime$ be subanalytic isotropics and let
$\sD^\mu_{\Lambda^\prime,\Lambda}(T^* M)$ denote the fiber of the canonical functor
$\mathrm{Sh}_{\Lambda^\prime}^c(M) \twoheadrightarrow \mathrm{Sh}_{\Lambda}^c(M)$.
Then $\sD^\mu_{\Lambda^\prime,\Lambda}(T^* M)$
is generated by the corepresentatives of the microstalk functors $\mu_{(x,\xi)}$ at smooth Legendrian points
$(x,\xi) \in \Lambda^\prime \setminus \Lambda$.
\end{proposition}
\subsection{Microsheaves}\label{sec:micro}
Fix a smooth manifold $M$. We recall the construction of microsheaves, a sheaf of categories $\mu sh$ on the cotangent bundle.
The concept of microsheaves (also known as the Kashiwara-Schapira stack/sheaf) goes back to \cite{KS}*{Section 6}, where the $\mu hom$ functor was studied.
More recently, Guillermou introduced the Kashiwara-Schapira stack in the setting of bounded derived categories \cite{Gui}.
Here, working with (compactly generated) stable categories, we follow the definition in \cite{NadShen}.
We first define a presheaf, i.e., a functor
\begin{align*}
\mu sh^{pre}: (\Op_{T^* M}^{\mathbb{R}_{>0}})^{op} &\longrightarrow \st \\
\Omega &\longmapsto \mathrm{Sh}(M)/ \mathrm{Sh}_{\Omega^c}(M)
\end{align*}
where we restrict our attention to conic open sets $\Op_{T^* M}^{\mathbb{R}_{>0}}$,
and the target $\st$ is the (very large) category of stable categories with morphisms being exact functors.
We denote by $\mu sh$ its sheafification and refer to it as the sheaf of microsheaves.
Note that since there is a canonical identification
\begin{align*}
\mu sh^{pre}(T^* U) &\xrightarrow{\sim} \mathrm{Sh}(U) \\
F &\mapsto F |_U
\end{align*}
and sheaves on the base, $U \mapsto \mathrm{Sh}(U)$, form a sheaf in $\st$,
the compatibility of sheafification with pullback implies that $\mu sh |_{0_M} = \mathrm{Sh}$.
The first non-trivial statement is that the Hom's of $\mu sh$ can be computed by $\mu hom$.
\begin{definition}[{\cite[Definition 4.1.1]{KS}}]
For $F$, $G \in \mathrm{Sh}(M)$, we set
$$\mu hom(F,G) \coloneqq \mu_{\Delta_M} \sHom(p_2^* F, p_1^! G)$$
where $\mu_{\Delta_M}$ is the microlocalization along the diagonal \cite[Section 4.3]{KS}.
\end{definition}
\begin{definition_theorem}[{\cite[Theorem 6.1.2]{KS}, \cite[Corollary 5.5]{Gui}}] \label{mu-hom_as_hom}
Let $F, G \in \mathrm{Sh}(M)$ represent objects with the same name in $\mu sh(\Omega)$.
Then there is a canonical isomorphism
$$ \mu hom(F,G) \rightarrow \sHom_{\mu sh(\Omega)}(F,G)$$
between sheaves on $\Omega$. Thus, we abuse the notation and simply use $\mu hom$ to denote the Hom of $\mu sh$ valued in sheaves on conic open sets of $T^* M$.
\end{definition_theorem}
We note also that the microlocal triangle inequality from (1) of Proposition \ref{prop:mses} implies that,
for an object $F \in \mu sh^{pre}(\Omega)$ represented by a sheaf with the same name,
the closed subset of $\Omega$, $\mathrm{SS}_\Omega(F) \coloneqq \mathrm{SS}(F) \cap \Omega$, is independent of the representative in $\mathrm{Sh}(M)$.
This induces a notion of microsupport for objects in $\mu sh(\Omega)$.
That is, for $F \in \mu sh(\Omega)$, we say that a point $(x,\xi) \in T^* M$ is in $\mathrm{SS}(F)$ if $F \neq 0 \in \mu sh_{(x,\xi)}$.
Note that this notion of microsupport is well-defined and coincides with the original one since
\begin{equation} \label{microsheaves_at_stalks}
\mu sh_{(x,\xi)} = \mu sh^{pre}_{(x,\xi)} = \mathop{\mathrm{colim}}_{\Omega \ni (x, \xi) } \mu sh^{pre}(\Omega) = \mathrm{Sh}(M) / \mathrm{Sh}_{T^* M \setminus \mathbb{R}_{>0} (x,\xi)}(M).
\end{equation}
\begin{remark}
We comment on our choice of the target category $\st$ for readers who are familiar with the higher categorical derived setting.
First note that $\mu sh^{pre}$ can also be regarded as a presheaf valued in $\mathrm{Pr}^L_{\mathrm{st}}$ or $\mathrm{Pr}^R_{\mathrm{st}}$,
the (very large) categories of presentable categories with morphisms being left or right adjoints.
These are the more natural settings for the purpose of obtaining adjunctions, as we shall see later when we restrict
to sheaves with a fixed microsupport condition.
However, stalks behave badly in this setting and one can, in fact, compute that the inclusion $\mathrm{Loc} \rightarrow \mathrm{Sh}$
of local systems into sheaves is an isomorphism on stalks.
On the contrary, the virtue of $\st$ is that isomorphisms can be checked on stalks \cite{Rozenblyum-filtered}.
In fact, Definition-Theorem \ref{mu-hom_as_hom} in \cite[Theorem 6.1.2]{KS} is a statement on the stalk.
The form in which it is written here is a simple corollary of the cited theorem and this fact about $\st$.
\end{remark}
We note that since $\mu sh$ is conic, $\mu sh |_{\dot T^* M}$ descends naturally to a sheaf on $S^* M$,
and we abuse the notation, denoting it by $\mu sh$ as well.
Similar statements for $\mu hom$ and $\mathrm{SS}^\infty$ hold in this case.
Now fix a subanalytic isotropic subset $\Lambda \subseteq S^*M$.
\begin{definition}
We use $\mu sh_\Lambda$ to denote the subsheaf of $\mu sh$ consisting of objects microsupported in $\Lambda$ or $\widehat{\Lambda} = \Lambda \times \mathbb{R}_{>0}$.
\end{definition}
We note that, because of Equation (\ref{microsheaves_at_stalks}), this sheaf coincides with the sheafification of the following
subpresheaf $\mu sh^{pre}_\Lambda$ of $\mu sh^{pre}$ (where $\mathrm{SS}_\Omega(F) \coloneqq \mathrm{SS}(F) \cap \Omega$):
\begin{align*}
\mu sh^{pre}_\Lambda: (\Op_{T^* M}^{\mathbb{R}_{>0}})^{op} &\longrightarrow \st \\
\Omega &\longmapsto \{F \in \mu sh^{pre}(\Omega) \mid \mathrm{SS}_\Omega(F) \subseteq \Lambda \}
\end{align*}
We note that $\mu sh^{pre}_\Lambda$ in fact takes values in $\mathrm{Pr}^L_{\mathrm{cs}}$, the category of compactly generated stable categories
whose morphisms are functors which admit both left and right adjoints,
and its sheafifications in $\st$ and in $\mathrm{Pr}^L_{\mathrm{cs}}$ coincide.
In other words, the restriction maps $\mu sh_\Lambda(\Omega) \rightarrow \mu sh_\Lambda(\Omega^\prime)$ admit both left and right adjoints.
In particular, we will refer to the restriction map associated to $\dot T^* M \subseteq T^* M$
as the microlocalization functor along $\Lambda$,
$$m_\Lambda: \mathrm{Sh}_\Lambda(M) \rightarrow \mu sh_\Lambda(\Lambda),$$
and denote its left and right adjoints by $m_\Lambda^l$ and $m_\Lambda^r$.
Note that, by definition, $\mu sh_\Lambda$ is a constructible sheaf supported on $\Lambda$ or $\widehat{\Lambda}$,
and we will use the same notation $\mu sh_\Lambda$ to denote the corresponding sheaf on $\Lambda$ or $\widehat{\Lambda}$.
One can estimate the microsupport of the sheaf $\mu hom({F}, {G})$ in $T^*M$. Recall that for $A, B \subset X$, the normal cone $C(A, B) \subseteq TX$ is defined so that $(x, \xi) \in C(A, B)$ iff there exist $a_n \in A$, $b_n \in B$ and $c_n \in \mathbb{R}$ such that
$$a_n, b_n \rightarrow x, \;\; c_n(a_n - b_n) \rightarrow \xi, \;\; n \rightarrow \infty.$$
\begin{proposition}[\cite{KS}*{Corollary 5.4.10 \& Corollary 6.4.3}]\label{prop:ss-muhom}
Let ${F}, {G} \in \mathrm{Sh}(M)$.
Then
$$\mathrm{SS}(\mu hom({F}, {G})) \subseteq C(\mathrm{SS}({F}), \mathrm{SS}({G})).$$
In particular, $\supp(\mu hom({F}, {G})) \subseteq \mathrm{SS}({F}) \cap \mathrm{SS}({G})$.
\end{proposition}
\begin{remark}\label{rem:ss-muhom}
By Proposition \ref{prop:mses}~(4) (\cite[Proposition 5.4.4]{KS}), we can show, as in \cite[Corollaries 6.4.4 \& 6.4.5]{KS}, that for $\pi: T^*M \rightarrow M$ and $\dot{\pi}: \dot T^*M \rightarrow M$ we have
\begin{align*}
\mathrm{SS}(\pi_* \mu hom({F}, {G})) &\subseteq \pi_\pi(d\pi^*)^{-1}C(\mathrm{SS}(F), \mathrm{SS}(G)) = -\mathrm{SS}(F) \,\widehat+\, \mathrm{SS}(G), \\
\mathrm{SS}(\dot\pi_* \mu hom({F}, {G})) &\subseteq \dot\pi_\pi(d\dot\pi^*)^{-1}C(\mathrm{SS}(F), \mathrm{SS}(G)) = -\mathrm{SS}(F) \,\widehat+_\infty\, \mathrm{SS}(G).
\end{align*}
\end{remark}
In general, sheafifying a presheaf valued in categories is complicated.
However, our assumption on $\Lambda$ implies that $\mu sh_\Lambda^{pre}$ stabilizes after restricting to small open sets.
In fact, $\mu sh_\Lambda$ is constructible, and on small open sets it admits a simple description, as mentioned in \cite[3.4]{Nadler-pants}:
for any $(x,\xi) \in \Lambda$, we may choose a small open ball $\Omega \subseteq S^* M$ containing $(x,\xi)$
such that $\mu sh_\Lambda(\Omega)$ fits in a fiber sequence
$$ K(B,\Omega) \hookrightarrow \mathrm{Sh}_\Lambda(B,\Omega) \rightarrow \mu sh_\Lambda(\Omega),$$
where $B = \pi_\infty (\Omega)$, $\mathrm{Sh}_\Lambda(B,\Omega)$ consists of sheaves $F$ on $B$ such that
$\mathrm{SS}^\infty(F) \cap \Omega \subseteq \Lambda$, and $K(B,\Omega)$ is the subcategory of those $F$ such that
$\mathrm{SS}^\infty(F) \cap \Omega = \varnothing$.
Consequently, one can characterize the stalks of $\mu sh_\Lambda$ as follows. This is a consequence of (quantized) contact transformations \cite[Corollary 7.2.2]{KS}, which assert that, on small neighborhoods in $S^* M$, $\mu sh^{pre}$ looks the same everywhere.
\begin{theorem}[\cite{Gui}*{Proposition 6.6 \& Lemma 6.7}, \cite{JinTreu}*{Section 3.8 \& 3.9}, \cite{NadShen}*{Corollary 5.4}]
Let $p = (x, \xi)$ be a smooth point of the subanalytic isotropic subset $\Lambda \subset S^{*}M$. Then the stalk $\mu sh_{\Lambda,p} \simeq \cV$.
\end{theorem}
We know that microlocalization induces morphisms
$$\mu hom(F, G) \rightarrow \mu hom(F, G)|_{S^{*}M}, \;\; \sHom(F, G) \rightarrow \dot\pi_*(\mu hom(F, G)|_{S^{*}M}).$$
By \cite[Equation (4.3.1)]{KS}, we immediately know that the second morphism fits into the following Sato fiber sequence. This will be an important ingredient for the Sato-Sabloff fiber sequence in Section \ref{sec:sato-sab}.
\begin{proposition}[Sato's fiber sequence, \cite{Gui}*{Equation (2.17)}, \cite{Guisurvey}*{Equation (1.3.5)}]\label{thm:sato}
Let ${F}, {G} \in \mathrm{Sh}(M)$.
Then there is a fiber sequence
$$\Delta^*\sHom(\pi_1^*{F}, \pi_2^*{G}) \rightarrow \sHom({F}, {G}) \rightarrow \dot\pi_*(\mu hom({F}, {G})|_{S^{*}M})$$
where $\Delta: M \hookrightarrow M \times M$ is the diagonal embedding and $\pi_i: M \times M \rightarrow M$ are the projections.
\end{proposition}
\subsection{Various sheaf categories}\lambdabel{variouscat}
We have defined the sheaf of stable categories $\mathbb{S}h_\mathbb{L}ambdambda$ and $\mathfrak{m}sh_\mathbb{L}ambdambda$ consisting of sheaves and respectively microsheaves. However, in general we may want to work with either the subcategories of compact objects or proper objects. We explain how to restrict to these categories. Most of the discussions can be found in \cite{Nadler-pants}*{Section 3.6 \& 3.8} and \cite{Ganatra-Pardon-Shende3}*{Section 4.5}.
Throughout the discussion, we will be considering the microlocal sheaf category $\mathfrak{m}u Sh_\mathbb{L}ambdambda$ on a subanalytic Legendrian (or conical Lagrangian) subset.
\betaegin{definition}
For ${F} \in \mathfrak{m}sh_\mathbb{L}ambdambda(\Omega)$, we call it a compact object if $\Hom_{\mathfrak{m}sh_\mathbb{L}ambdambda(\Omega)}({F}, -)$
commutes with filtered colimits. Let $\mathfrak{m}sh_\mathbb{L}ambdambda^c(\Omega) \subset \mathfrak{m}sh_\mathbb{L}ambdambda(\Omega)$ be the full subcategory of compact objects.
\varepsilonnd{definition}
In particular, when we consider, for a subanalytic Legendrian $\Lambda \subseteq S^{*}M$, the category of compact objects
$$\Sh^c_\Lambda(M) = \msh^c_{M \cup \widehat{\Lambda}}(T^*M),$$
we can prove that, under the compactness assumption on $M$, it is a smooth category in the sense of \cite[Definition 8.1.2]{Kontsevich-Soibelman-Ainfty} (see also \cite[Definition 4.6.4.13]{Lurie1}), namely that (for the small category $\sA$ under consideration) the diagonal bimodule
$$\sA_\Delta(X, Y) = \Hom_{\sA}(X, Y)$$
is a perfect $\sA^{op} \times \sA$ bimodule.
\begin{proposition}[{\cite[Corollary 4.25]{Ganatra-Pardon-Shende3}}]\label{prop:smooth}
Let $M$ be compact and $\Lambda \subseteq S^{*}M$ be a subanalytic isotropic subset. Then $\Sh^c_\Lambda(M)$ is a smooth category.
\end{proposition}
We know that $\msh_\Lambda$ is both a sheaf and a cosheaf of categories, and in addition, for $V \subseteq U$, the restriction functor
$$r_{UV}^*: \, \msh_\Lambda(U) \rightarrow \msh_\Lambda(V)$$
preserves limits and colimits and thus admits left and right adjoints \cite[Lemma 4.12]{Ganatra-Pardon-Shende3}. Since $r_{UV}^*$ preserves colimits, its left adjoint, called the corestriction functor
$$r_{UV,!}: \, \msh_\Lambda(V) \rightarrow \msh_\Lambda(U),$$
preserves compact objects. Hence the corestriction functors restrict to the full subcategories of compact objects
$$r_{UV,!}: \, \msh^c_\Lambda(V) \rightarrow \msh^c_\Lambda(U).$$
Note that $\msh_{\Lambda \cap U}(U) = \msh_\Lambda(U)$, so this is indeed a functor on global sections of categories $\msh^c_{\Lambda \cap V}(V) \rightarrow \msh^c_{\Lambda \cap U}(U).$
\betaegin{remark}\lambdabel{rem:cores-cpt}
For closed subanalytic isotropic subsets $\mathbb{L}ambdambda \subseteq S^{*}M$, the microlocalization and its left adjoint in Section \ref{sec:micro}
$$m_\mathbb{L}ambdambda: \mathbb{S}h_\mathbb{L}ambdambda(M) \rightarrow \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda), \;\; m_\mathbb{L}ambdambda^l: \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \rightarrow \mathbb{S}h_\mathbb{L}ambdambda(M)$$
are special cases of restriction functors and corestriction functors. In particular, the left adjoint of microlocalization $m_\mathbb{L}ambdambda^l$ preserves compact objects
$$m_\mathbb{L}ambdambda^l: \mathfrak{m}sh^c_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \rightarrow \mathbb{S}h_\mathbb{L}ambdambda^c(M).$$
\varepsilonnd{remark}
Given sheaves of categories $\msh_X$ and $\msh_Y$, where $X \subseteq Y$ is a closed subset, there is an inclusion functor between sheaves of categories
$$\iota_{XY *}: \, \msh_X \rightarrow \msh_Y$$
which also preserves limits and colimits. Since it preserves limits and is accessible, it admits a left adjoint called the pullback functor
$$\iota^*_{XY}: \, \msh_Y \rightarrow \msh_X.$$
Since $\iota_{XY *}$ preserves colimits, $\iota^*_{XY}$ preserves compact objects. Hence the pullback functor preserves the sub-cosheaf of categories of compact objects. By considering global sections, we get a pullback functor $\iota^*_{XY}: \, \msh^c_Y(Y) \rightarrow \msh^c_X(X)$.
\betaegin{remark}\lambdabel{rem:stoprem-cpt}
For closed subanalytic isotropic subsets $\mathbb{L}ambdambda \subseteq \mathbb{L}ambdambda' \subseteq S^{*}M$, the inclusion functor and its left adjoint in Section \ref{ims}
$$\iota_{\mathbb{L}ambdambda \mathbb{L}ambdambda' *}: \mathbb{S}h_\mathbb{L}ambdambda(M) \hookrightarrow \mathbb{S}h_{\mathbb{L}ambdambda'}(M), \;\; \iota_{\mathbb{L}ambdambda \mathbb{L}ambdambda'}^*: \mathbb{S}h_{\mathbb{L}ambdambda'}(M) \rightarrow \mathbb{S}h_\mathbb{L}ambdambda(M)$$
are special cases of the inclusion and pullback functors above. In particular, the pullback functor preserves compact objects
$$\iota_{\mathbb{L}ambdambda \mathbb{L}ambdambda'}^*: \mathbb{S}h_{\mathbb{L}ambdambda'}^c(M) \rightarrow \mathbb{S}h_\mathbb{L}ambdambda^c(M).$$
This is also called the stop removal functor \cite{Ganatra-Pardon-Shende3}*{Corollary 4.22} (one can compare it to the stop removal functors in partially wrapped Fukaya categories \cite{GPS2}*{Theorem 1.16}).
\varepsilonnd{remark}
On the other hand, we can consider the subcategory with perfect stalks, which turns out to be the subcategory of proper modules (equivalently, pseudoperfect modules) in the category of (micro)sheaves.
\betaegin{definition}
Let $\mathfrak{m}sh^b_\mathbb{L}ambdambda(U) \subseteq \mathfrak{m}sh_\mathbb{L}ambdambda(U)$ be the full subcategory of objects with perfect stalks, and $\mathfrak{m}sh^{pp}_\mathbb{L}ambdambda(U) = \mathfrak{m}athrm{Fun}^\text{ex}(\mathfrak{m}sh^c_\mathbb{L}ambdambda(U)^{op}, \cV_0)$ be the category of proper modules in $\mathfrak{m}sh^c_\mathbb{L}ambdambda(U)$, where $\mathfrak{m}athrm{Fun}^\text{ex}(-, -)$ is the stable category of exact functors.
\varepsilonnd{definition}
Since the restriction functors of $\msh_\Lambda$ preserve (micro)stalks, the sheaf of categories $\msh_\Lambda$ restricts to a subsheaf of categories $\msh^b_\Lambda$. Meanwhile, since $\msh^c_\Lambda(U)$ forms a cosheaf of categories under corestriction functors, the full subcategories of proper modules $\msh^{pp}_\Lambda$ also form a sheaf of categories under restriction functors.
The following theorem shows that $\msh^b_\Lambda(U)$ is equivalent to the subcategory of proper modules $\msh^{pp}_\Lambda(U)$ over $\msh^c_\Lambda(U)$.
\betaegin{theorem}[Nadler \cite{Nadler-pants}*{Theorem 3.21}, \cite{Ganatra-Pardon-Shende3}*{Corollary 4.23}]\lambdabel{thm:perfcompact}
Let $\mathbb{L}ambdambda \subseteq S^{*}M$ be a subanalytic isotropic subset. Then the natural pairing $\mathfrak{m}hom(-, -)$ defines an equivalence
$$\mathfrak{m}sh^b_\mathbb{L}ambdambda(U) \simeq \mathfrak{m}sh^{pp}_\mathbb{L}ambdambda(U) = \mathfrak{m}athrm{Fun}^\text{ex}(\mathfrak{m}sh^c_\mathbb{L}ambdambda(U)^{op}, \cV_0).$$
In particular, $\mathbb{S}h^b_\mathbb{L}ambdambda(M) \simeq \mathbb{S}h^{pp}_\mathbb{L}ambdambda(M)$.
\varepsilonnd{theorem}
Using the above theorem, for a subanalytic Legendrian $\mathbb{L}ambdambda \subseteq S^{*}M$ the category of proper modules
$$\mathbb{S}h^{pp}_\mathbb{L}ambdambda(M) = \mathfrak{m}sh^{pp}_{M \cup \widehat{\mathbb{L}ambdambda}}(T^*M),$$
is a proper category (see \cite[Definition 8.2.1]{Kontsevich-Soibelman-Ainfty} or \cite[Definition 4.6.4.2]{Lurie1}), namely that (for the small category $\sA$ under consideration) the diagonal bimodule $\sA_\mathbb{D}elta$ is a proper module, i.e.~for any $X, Y \in \sA$,
$$\Hom_{\sA}(X, Y) \in \cV_0.$$
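For instance, if $\cV = \mathrm{Mod}_k$ for a commutative ring $k$, so that $\cV_0 \simeq \mathrm{Perf}_k$ (an assumption made only for this illustration), properness of $\sA$ amounts to all the morphism objects $\Hom_{\sA}(X, Y)$ being perfect complexes of $k$-modules.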
\betaegin{proposition}[{\cite[Corollary 4.25]{Ganatra-Pardon-Shende3}}]\lambdabel{prop:proper}
Let $M$ be compact and $\mathbb{L}ambdambda \subseteq S^{*}M$ be a subanalytic isotropic subset. Then $\mathbb{S}h^{pp}_\mathbb{L}ambdambda(M)$ is a proper category.
\varepsilonnd{proposition}
Since $\mathbb{S}h^c_\mathbb{L}ambdambda(M)$ is a smooth category, we know by \cite[Lemma A.8]{Ganatra-Pardon-Shende3} that $\mathbb{S}h^{pp}_\mathbb{L}ambdambda(M) \subseteq \mathbb{S}h^c_\mathbb{L}ambdambda(M)$. Therefore we have the following corollary.
\betaegin{corollary}\lambdabel{cor:prop-in-perf}
Let $M$ be compact and $\mathbb{L}ambdambda \subseteq S^{*}M$ be a subanalytic isotropic subset. Then $\mathbb{S}h^{b}_\mathbb{L}ambdambda(M) \subseteq \mathbb{S}h^c_\mathbb{L}ambdambda(M)$.
\varepsilonnd{corollary}
\betaegin{remark}\lambdabel{rem:prop}
From the discussion above, we can show that
for closed subanalytic isotropic subsets $\mathbb{L}ambdambda \subset S^{*}M$, the microlocalization in Section \ref{sec:micro} preserves proper objects
$$m_\mathbb{L}ambdambda: \mathbb{S}h_\mathbb{L}ambdambda^b(M) \rightarrow \mathfrak{m}sh_\mathbb{L}ambdambda^b(\mathbb{L}ambdambda),$$
and so does the inclusion functor in Section \ref{ims}
$$\iota_{\mathbb{L}ambdambda\mathbb{L}ambdambda' *}: \mathbb{S}h_\mathbb{L}ambdambda^b(M) \hookrightarrow \mathbb{S}h_{\mathbb{L}ambdambda'}^b(M).$$
\varepsilonnd{remark}
\section{Isotopy of sheaves}\label{sec:isotopies_of_sheaves}
Let $V$ be a contact manifold and $(B,b_0)$ be a pointed finite dimensional manifold.
We say a map $\Phi: V \times B \rightarrow V$ is a $B$-family contact isotopy
if $\varphi_b \coloneqq \Phi(-,b)$ is a contactomorphism for all $b \in B$ and $\varphi_{b_0} = \id_V$.
To simplify the convention, when $B$ is some open interval containing $b_0 = 0$,
we will suppress the family from the notation and simply refer to $\Phi$ as a contact isotopy.
We use $t$ as the parameter in this case.
When $V$ is co-oriented by $\alpha$, i.e., the contact structure is given by $\ker \alpha$,
we say that an isotopy is positive if $\alpha(\partial_t \varphi_t) \geq 0$.
One important feature of such isotopies, in the Weinstein manifold setting,
is that they induce continuation maps on Floer homology and are a key ingredient in defining the wrapped Fukaya categories
\cite[Section 3.3]{GPS1}.
We will be working in the sheaf-theoretic setting, focusing on the case when $V = S^*M$, the cosphere bundle of a manifold.
The foundational construction is performed in \cite{Guillermou-Kashiwara-Schapira}, where they show that an isotopy of $S^* M$
produces a family of endofunctors of $\Sh(M)$.
When the isotopy is positive, this family of endofunctors comes with a family of continuation maps in the form of natural transformations.
We will recall this construction and some relevant results following the setting of \cite[Section 3]{Kuo-wrapped-sheaves}.
\subsection{Continuation maps}\label{continuation maps}
Denote by $(t,\tau)$ the coordinates of $T^* I$ and consider $F \in \Sh(M \times I)$ such that $\ms(F) \subseteq \{ \tau \leq 0\}$.
As mentioned in Section \ref{ims} (and \ref{variouscat}), the inclusion of the subcategory formed by such sheaves admits both a left and a right adjoint.
For this special case, there is an explicit description of the two adjoints by a sheaf kernel:
\begin{proposition}[{\cite[Proposition 4.8]{Guillermou-Kashiwara-Schapira}}, \cite{Kuo-wrapped-sheaves}]
Let $\iota_*: \Sh_{T^* M \times T^*_\leq I}(M \times I) \hookrightarrow \Sh(M \times I)$
denote the tautological inclusion.
Then the left and right adjoints in the adjunctions $\iota^* \dashv \iota_* \dashv \iota^!$ are given by convolution with the kernel $1_{ \{ t^\prime > t \} }[1]$:
$$ \iota^* F = 1_{ \{ t^\prime > t \} }[1] \circ F,\ \iota^! F = \sHom^\circ ( 1_{ \{ t^\prime > t \} }[1], F).$$
Here we denote by $(t,t^\prime)$ the coordinates of $I^2$.
\end{proposition}
A consequence of this expression is that $F = 1_{ \{ t^\prime > t \} }[1] \circ F$ for $F \in \Sh_{T^* M \times T^*_\leq I}(M \times I)$.
The virtue of this explicit description is that the sheaf kernel $1_{ \{ t^\prime > t \} }$ admits maps between its slices,
$$1_{(-\infty,s)} \rightarrow 1_{(-\infty,t)}, \ \text{for} \ s \leq t.$$
Denote by $i_t: M \hookrightarrow M \times I$ the inclusion of the $t$-slice and write $F_t \coloneqq i_t^* F$.
Since convolution is compatible with $*$-pullback, namely $i_t^* ( 1_{ \{ t^\prime > t \} }[1] \circ F) = 1_{(-\infty,t)}[1] \circ F$,
for $F \in \Sh_{T^* M \times T^*_\leq I}(M \times I)$ we call the canonical morphisms
$$c(s,t,F): F_s \rightarrow F_t, \ \text{for} \ s \leq t,$$
induced from $1_{(-\infty,s)} \rightarrow 1_{(-\infty,t)}$, the continuation maps of $F$.
Some properties of the continuation maps are listed as follows:
\begin{proposition}\label{prop:properties_of_continuation_maps}
Let $F \in \Sh_{T^* M \times T^*_\leq I}(M \times I)$. Then
\begin{enumerate}
\item For $r \leq s \leq t$, there is an equality $c(s,t,F) \circ c(r,s,F) = c(r,t,F)$.
\item If $F$ is constant in the $I$-direction on $[s,t]$, then $c(s,t,F) = \id$.
\item Continuation maps respect colimits forward, i.e., the canonical map $\operatorname{colim}_{r < t} F_r \rightarrow F_t$ is an isomorphism.
\item Assume further that $\ms(F)$ is $I$-noncharacteristic. Then
continuation maps respect limits backward, i.e., the canonical map $F_t \rightarrow \lim_{s > t} F_s$ is an isomorphism.
\end{enumerate}
\end{proposition}
\betaegin{remark}
We note that the noncharacteristic condition cannot be dropped.
For example, consider the case $M = \{ * \}$ and take $F = 1_{(-\infty,0]}$.
Then $F_t = 1_\cV$ when $t \leq 0$ and $0$ otherwise.
Thus $F_0 = \operatornameeratorname{colim}_{r < 0} F_r$ but $F_0 \neq \lim_{s > 0} F_s$.
\varepsilonnd{remark}
We also mention some homotopical invariance properties of the continuation maps.
Let $J$ be another open interval and use $(s,\sigma)$ to denote the coordinates of its cotangent bundle.
Let $G \in \mathbb{S}h(M \times I \times J)$ be a sheaf such that $\mathfrak{m}s(G) \subseteq \{ \tau \leq 0 \}$.
For any $x \in I$, we use $G_{t = x} \coloneqq G|_{M \times \{x\} \times J }$ to denote the restriction
and similarly for $G_{s = y}$, $y \in J$.
Note by (2) of Proposition \ref{prop:noncharacteristic-ms-es}, the same condition $\mathfrak{m}s(G_{s = y}) \subseteq \{ \tau \leq 0 \}$ holds.
Assume further that there exists $a \leq b$ in $I$ such that $\mathfrak{m}s(G_{t = a})$, $\mathfrak{m}s(G_{t = b}) \subseteq T^* M \times 0_J$.
By \cite[Lemma 3.22]{Kuo-wrapped-sheaves}, this implies that there exist $F_a$, $F_b \in \Sh(M)$ such that
$G_{t = a} = p_s^* F_a$ and $G_{t = b} = p_s^* F_b$,
where we use $p_s: M \times J \rightarrow M$ to denote the projection.
Note that, for each $y \in J$, the restriction $G_{s = y}$ induces a continuation map
$c(G,y,a,b): F_a \rightarrow F_b$.
$$
\betaegin{tikzpicture}
\deltaraw [thick] (0,0) rectangle (5,3);
\deltaraw [thick] (1,0) -- (1,3);
\deltaraw [thick] (4,0) -- (4,3);
\node at (0.6,2.5) {$F_a$};
\node at (3.6,2.5) {$F_b$};
\deltaraw [->, thick] (1.1,1.5) -- (3.9,1.5) node [midway, above] {$c(G,y,a,b)$};
\deltaraw [->, thick] (1.1,0.5) -- (3.9,0.5) node [midway, above] {$c(G,y^\varphirime,a,b)$};
\node at (1,3.25) {$t = a$};
\node at (4,3.25) {$t = b$};
\node at (-0.7,0.5) {$s = y^\varphirime$};
\node at (-0.7,1.5) {$s = y$};
\varepsilonnd{tikzpicture}
$$
\betaegin{proposition} \lambdabel{homotopy_independence_of_continuation_maps}
The morphism $c(G,y,a,b)$ is independent of $y \in J$.
More generally, similar statements can be made for higher homotopical independence.
\varepsilonnd{proposition}
\subsection{Isotopies of sheaves}\lambdabel{Isotopies of sheaves}
We recall the notion of isotopies of sheaves based on the main theorem of Guillermou, Kashiwara, and Schapira
in \cite{Guillermou-Kashiwara-Schapira} and some applications.
This section is a summary of \cite[section 3.2]{Kuo-wrapped-sheaves} of the first author.
\begin{theorem}[{\cite[Proposition 3.2, Remark 3.9]{Guillermou-Kashiwara-Schapira}}]\label{thm:GKS}
Let $M$ be a manifold and $B$ a contractible finite dimensional manifold.
For a $B$-family of contact isotopies $\Phi: S^* M \times I \times B \rightarrow S^* M$, where $I$ is an open interval containing $0$,
there exists a unique sheaf kernel
$K(\Phi) \in \Sh(M \times M \times I \times B)$ such that
\begin{enumerate}
\item $K(\Phi) |_{b = b_0} = 1_{\Delta_M}$, and
\item $\msif(K(\Phi)) \subseteq \Lambda_\Phi$ where
\begin{equation}\label{contact_movie}
\Lambda_\Phi = \left\{ \left(x, -\xi, \varphi_{t,b}(x,\xi), t, - \alpha(V_{\Phi_b})(\varphi_{t,b}(x,\xi)),
b, - \alpha_{\varphi_{t,b}(x,\xi)} \circ d (\Phi \circ i_{x,\xi,t})_b(\cdot) \right) \right\}
\end{equation}
is the contact movie of $\Phi$.
\end{enumerate}
Moreover, $\msif(K(\Phi)) = \Lambda_\Phi$ and $K(\Phi)$ is simple along $\Lambda_\Phi$,
both projections $\supp(K(\Phi)) \subseteq M \times M \times B \rightarrow M \times B$ are proper,
and composition of isotopies is compatible with convolution of kernels in the sense that
\begin{enumerate}
\item $K(\Psi \circ \Phi) = K(\Psi) \circ|_{B} K(\Phi)$,
\item $K(\Phi^{-1}) \circ|_{B} K(\Phi) = K(\Phi) \circ|_{B} K(\Phi^{-1}) = 1_{\Delta_M \times B}$.
\end{enumerate}
Here $\Phi^{-1}$ is the $B$-family of isotopies given by $\Phi^{-1}(-,t,b) \coloneqq \varphi_{t,b}^{-1} $.
\end{theorem}
This theorem is usually referred to as \textit{Guillermou-Kashiwara-Schapira sheaf quantization},
since it is a categorical analogue of producing an operator from a contact isotopy,
and we will refer to the sheaf kernel $K(\Phi)$ as the GKS sheaf quantization kernel associated to $\Phi$.
A corollary of this construction is that contact isotopies act on sheaves and the action is compatible with the microsupport:
\betaegin{corollary}[{\cite[Equation (4.4)]{Guillermou-Kashiwara-Schapira}}]\lambdabel{cor:GKS}
Let $\mathbb{P}hi:S^* M \times I \rightarrow S^* M$ be a contact isotopy.
Then the convolution
\betaegin{align*}
K(\mathbb{P}hi)|_t \circ (-): \mathbb{S}h(M) &\rightarrow \mathbb{S}h(M) \\
F &\mathfrak{m}apsto K(\mathbb{P}hi)|_t \circ F
\varepsilonnd{align*} is an equivalence whose inverse is given by $K(\mathbb{P}hi^{-1})|_t \circ (-)$.
For a sheaf $F \in \mathbb{S}h(M)$,
there is an equality $\mathfrak{m}snz(K(\mathbb{P}hi) \circ F) = \mathbb{L}ambdambda_\mathbb{P}hi \circ \deltaot{\mathfrak{m}s}(F)$.
In particular, if we set $F_t \coloneqq (K(\mathbb{P}hi) \circ_M F) |_{M \times \{t\}}$, then $\mathfrak{m}sif(F_t) = \varphi_t \mathfrak{m}sif(F)$ for $t \in I$.
Furthermore, if $F$ has compact support, then so does $F_t$ for all $t \in I$.
\varepsilonnd{corollary}
\betaegin{corollary}[{\cite[Theorem 7.2.1]{KS}}, {\cite[Lemma 5.6]{NadShen}}]\lambdabel{cor:cont-trans}
Let $\mathbb{P}hi:S^* M \times I \rightarrow S^* M$ be a contact isotopy.
Then the convolution induces a morphism between sheaf of categories
\betaegin{align*}
K(\mathbb{P}hi)|_t \circ (-): \mathfrak{m}sh_\mathbb{L}ambdambda &\rightarrow \varphi_t^* \mathfrak{m}sh_{\varphi_t(\mathbb{L}ambdambda)} \\
F &\mathfrak{m}apsto K(\mathbb{P}hi)|_t \circ F
\varepsilonnd{align*} and is an equivalence whose inverse is given by $K(\mathbb{P}hi^{-1})|_t \circ (-)$.
\varepsilonnd{corollary}
\begin{example}\label{GKS_standard_Reeb}
Let $(M,g)$ be a Riemannian manifold with nonzero injectivity radius.
The cosphere bundle $S^* M$ can be identified with the unit sphere bundle in $T^* M$ by $g$ as a contact hypersurface.
Take $\Phi$ to be the Reeb flow and $F = 1_{x}$ to be a skyscraper at some point $x \in M$.
Then, for small $t < 0$,
$F_t$ is given by $1_{\overline{B_{\epsilon(t)}(x)}}$, the constant sheaf supported on a small closed ball centered at $x$,
and, for small $t > 0$, $F_t$ is given by $\sHom(1_{\overline{B_{\epsilon(t)}(x)}}, \omega_M)$ where $\omega_M$ is the dualizing sheaf.
When the base manifold $M$ is orientable, the latter is isomorphic to
$1_{B_{\epsilon(t)}(x)}[\dim M]$, the constant sheaf supported on a small
open ball centered at $x$ shifted by the dimension of $M$.
\end{example}
Now, consider the case when $B = I$ is given by an open interval containing $0$ and assume that $\Phi$ is positive, i.e.,
$\alpha(\partial_t \varphi_t) \geq 0$.
In this case, the considerations of the previous Section \ref{continuation maps} imply that there are continuation maps
$$ K(\Phi)_s \rightarrow K(\Phi)_t, \ \text{for} \ s \leq t,$$
which induce continuation maps $F_s \rightarrow F_t$, $s \leq t$, for $F \in \Sh(M)$.
We note that if there is a homotopy between two positive isotopies $\Phi$ and $\Psi$,
then by Proposition \ref{homotopy_independence_of_continuation_maps}, the continuation maps induced
by $K(\Phi)$ and $K(\Psi)$ are identified.
Now given a closed subset $X \subseteq S^* M$, one can consider the totality of positive isotopies,
declare that a morphism between two positive isotopies $\Phi_1 \rightarrow \Phi_2$ is a further isotopy $\Psi$ of the same kind from $\Phi_1$
such that $\Phi_2 = \Psi \# \Phi_1$, and compare different morphisms by homotopies, etc.
Then, for $F \in \Sh(M)$, GKS sheaf quantization produces a diagram whose vertices are given by the time-$1$ sheaves $F^w$,
and whose arrows are given by continuation maps $c(\Psi): F^w \rightarrow F^{w^\prime}$.
The main theorem we recall in this section is that the colimit/limit over increasingly positive/negative isotopies
provides a description of the adjoints of the tautological inclusion $\Sh_X(M) \subseteq \Sh(M)$.
\begin{theorem}[{\cite[Theorem 1.2]{Kuo-wrapped-sheaves}}]\label{w=ad}
Let $\iota_{X *}: \Sh_X(M) \hookrightarrow \Sh(M)$ denote the tautological inclusion.
Then the left and right adjoints are given by the positive/negative colimiting/limiting wrapping
$$ \wrap_X^+(F) \coloneqq \clmi{F \rightarrow F^w} \, F^w, \ \wrap_X^-(F) \coloneqq \lmi{F^{w-} \rightarrow F} \, F^{w-}.$$
\end{theorem}
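As a degenerate sanity check (an observation recorded here only for illustration): if $X = S^{*}M$, then $\Sh_X(M) = \Sh(M)$, and since the wrappings are taken among isotopies compactly supported in $S^{*}M \setminus X = \varnothing$, only the trivial wrapping remains, so the formulas give $\wrap_X^\pm \simeq \id$, as expected for the adjoints of the identity inclusion.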
For $G \in \Sh(M)$, it is in general hard to compute $\wrap_X^+(G)$ (resp.~$\wrap_X^-(G)$) since it is given by a colimit (resp.~a limit) over a rather large index category.
Nevertheless, when $X = \Lambda$ and $\msif(G)$ are both isotropic, the underlying geometry can sometimes provide a cofinal (resp.~coinitial) one-parameter family $G_t$ such that $G_0 = G$ and $\wrap_\Lambda^+ G = \operatorname{colim}_{t \rightarrow +\infty} \, G_t$ (resp.~$\wrap_\Lambda^- G = \lim_{t \rightarrow -\infty} G_t$). In this case, a natural question is when, for a fixed $F \in \Sh(M)$ also with isotropic singular support, the canonical map
$$ \Hom(F,G) \rightarrow \Hom(F, \wrap_\Lambda^+G)$$
is an isomorphism. One such case which we will encounter is the following:
\begin{lemma}\label{lem:nearby_cycle}
Let $\Lambda$ be a fixed compact isotropic subset and let $F, G \in \Sh(M)$ have isotropic singular support. Assume that $\msif(F) \cap \Lambda = \varnothing$, and that there is a positive isotopy $\varphi_t$, $t \in \RR$, on $S^* M$ such that for any open neighborhood $\Omega$ of $\Lambda$ there is $T = T(\Omega)$ with $\varphi_t(\msif(G)) \subseteq \Omega$ for $t \geq T$, and such that $\msif(F) \cap \varphi_t( \msif(G)) = \varnothing$ for all $t \geq 0$. Then the canonical map
$$ \Hom(F,G) \rightarrow \Hom(F, \wrap_\Lambda^+G)$$
is an isomorphism. A similar statement holds for $\Hom(G,F) \rightarrow \Hom(\wrap_\Lambda^-G,F)$ when given a negative isotopy satisfying a similar condition.
\end{lemma}
\begin{proof}
This is essentially \cite[Theorem 5.15]{Kuo-wrapped-sheaves}. The point is that we would like to apply the main theorem about the nearby cycle, \cite[Theorem 4.2]{NadShen}. Although we do not assume compactness of $\supp(F)$ and $\supp(G)$ here, unlike in \cite[Theorem 5.15]{Kuo-wrapped-sheaves}, the compactness assumption on $\Lambda$ is sufficient to imply the gappedness condition for \cite[Theorem 4.2]{NadShen}.
\end{proof}
\begin{remark}
In practice, if we can find an increasing sequence of positive Hamiltonian flows $\varphi^k_t$, $k \in \NN$, such that for any open neighborhood $\Omega$ of $\Lambda$ there is $K \in \NN$ with $\varphi^k_t(\msif(G)) \subseteq \Omega$ for $k \geq K$, then the condition in the lemma holds. Indeed, we can define a time-dependent smooth Hamiltonian flow $\varphi_t$ such that $\varphi_t(\msif(G)) = \varphi^k_{t-k}(\msif(G))$ when $t \in [k + 1 - \epsilon, k+1]$, and this flow satisfies the condition in the lemma.
\end{remark}
\section{Doubling and fiber sequence}\label{sec:doubling}
Our goal in this section is to interpret the left and right adjoint functors of the microlocalization
$$m_\Lambda: \Sh_\Lambda(M) \rightarrow \msh_\Lambda(\Lambda)$$
via the doubling construction in sheaf theory (which is also known as the antimicrolocalization functor \cite{NadShen} or the Guillermou convolution functor \cite{JinTreu}).
First, we will realize the doubling functor with respect to an arbitrary Reeb flow $T_t$, $t \in \RR$, on $S^{*}M$ and show that it defines a fully faithful functor.
\begin{theorem}\label{thm:doubling}
Let $\Lambda \subseteq S^{*}M$ be a closed subanalytic Legendrian and $c(\Lambda)$ be the length of the shortest Reeb chord on $\Lambda$ with respect to the Reeb flow $T_t$. Then for $0 < \epsilon < c(\Lambda)/2$, there is a fully faithful functor
$$w_\Lambda: \msh_\Lambda(\Lambda) \hookrightarrow \Sh_{T_\epsilon(\Lambda) \cup T_{-\epsilon}(\Lambda)}(M).$$
\end{theorem}
Then, we show that by wrapping positively or negatively in $S^{*}M \backslash \Lambda$, we obtain the left and right adjoints from the doubling functor.
\begin{theorem}\label{cor:doubling}
Let $\Lambda \subseteq S^{*}M$ be a closed subanalytic Legendrian. Then there are equivalences
$$m_\Lambda^l = \wrap_\Lambda^+ \circ w_\Lambda[-1], \;\; m_\Lambda^r = \wrap_\Lambda^- \circ w_\Lambda: \msh_\Lambda(\Lambda) \rightarrow \Sh_\Lambda(M)$$
where $\wrap_\Lambda^\pm$ are the functors given in Theorem \ref{w=ad}.
In particular, the left and the right adjoints $m_\Lambda^l$ and $m_\Lambda^r$ can be decomposed into an inclusion followed by a quotient:
$$ \msh_\Lambda(\Lambda) \hookrightarrow \Sh_{T_\epsilon(\Lambda) \cup T_{-\epsilon}(\Lambda)}(M) \twoheadrightarrow \Sh_\Lambda(M).$$
\end{theorem}
The doubling functor in sheaf theory goes back to Guillermou \cite{Gui}*{Section 13-15}, and is also formulated in a different way in Nadler-Shende \cite{NadShen}*{Section 6}. Here we will generalize that functor to arbitrary Reeb flows on $S^{*}M$. In Lagrangian Floer theory, the stop doubling construction has been discussed in the setting of Fukaya-Seidel categories \cite{AbGan} (see also \cite{AbSmithKhov,AbAurouxHMS}) as the cup functor
$$\cup_F: \mathfrak{m}athcal{F}(F) \rightarrow \mathfrak{m}athcal{FS}(X, \varphii),$$
and also in the setting of partially wrapped Fukaya categories as the doubling trick \cite{GPS2}*{Example 8.7}, cup functor or Orlov functor \cite{SylvanOrlov}
$$\cup_F: \mathfrak{m}athcal{W}(F) \rightarrow \mathfrak{m}athcal{W}(X, F).$$
Recently the doubling trick has been used in the theory of (twisted) generating families \cite{TwistGF}*{Theorem~C}.
Our key ingredient to deduce the doubling construction is sheaf theoretic wrappings discussed in Section \ref{Isotopies of sheaves}. The difference between positive and negative wrapping will lead to the Sato-Sabloff fiber sequence, which provides a new interpretation of the Sato fiber sequence (Theorem \ref{thm:sato}) in microlocal theory of sheaves from the perspective of Hamiltonian isotopies of sheaves. This generalizes previous results in \cite{LiEstimate}.
In Section \ref{sec:sato-sab}, we compare this fiber sequence with the Sato fiber sequence, showing that they are isomorphic, and thus conclude the Sato-Sabloff fiber sequence.
When discussing the Sato-Sabloff fiber sequence, we also show in Section \ref{sec:serre} a Sabloff duality using the Verdier duality on sheaves, which has appeared in a number of works in symplectic geometry \cite{Sabduality,EESduality,SeidelFukI}.
In Section \ref{sec:ad-micro}, we study the fiber sequences coming from the adjoints of microlocalization and explain the role of wrappings in the sequence. Then in Section \ref{sec:doubling-local}, using the Sato-Sabloff fiber sequence, we define the doubling construction, which allows us to provide a uniform characterization of the adjoints of microlocalization in Section \ref{sec:double-ad}.
\subsection{Sato-Sabloff fiber sequence}\label{sec:sato-sab}
For compact subanalytic Legendrians $\Lambda_{0}, \Lambda_{1} \subset S^{*}M$, we let $c(\Lambda_0, \Lambda_1)$ be the minimal absolute value of the lengths of Reeb chords between $\Lambda_0$ and $\Lambda_1$ with respect to the Reeb flow $T_t$. Abusing notation, we also use $T_t$ to denote the associated functor of its time-$t$ flow acting on sheaves on $M$.
The key proposition of this section is that the $\Hom$ in $\msh_\Lambda(\Lambda)$ can be computed as a difference between the positive and negative wrappings.
Similar considerations have also appeared in previous works of, for example, Guillermou \cite[Section 11--13]{Gui} and Tamarkin \cite[Equation~(1)]{Tamarkin2}.
\begin{proposition}\label{prop:hom_w_pm}
Let $\Lambda_{0}$, $\Lambda_{1} \subseteq S^{*}M$ be compact subanalytic Legendrians, and let ${F} \in \Sh_{\Lambda_0}(M)$, ${G} \in \Sh_{\Lambda_1}(M)$ be such that $\supp(F) \cup \supp(G)$ is compact. Then there is a commutative diagram
$$
\betaegin{tikzpicture}
\node at (0,1.7) {$\Gamma(M, \,\mathbb{D}elta^*\mathfrak{m}athscr{H}om(\varphii_1^*{F}, \varphii_2^*{G}))$};
\node at (6,1.7) {$\Hom({F, G})$};
\node at (0,0) {$\Hom({F}, T_{-\varepsilonpsilon}({G}))$};
\node at (6,0) {$\Hom({F}, T_{\varepsilonpsilon}({G}))$};
\deltaraw [->, thick] (2.5,1.7) -- (4.9,1.7) node [midway, above] {$ $};
\deltaraw [->, thick] (1.5,0) -- (4.5,0) node [midway, above] {$c$};
\deltaraw [->, thick] (0,1.4) -- (0,0.3) node [midway, left] {\rotatebox{90}{$\sim$}};
\deltaraw [->, thick] (6,1.4) -- (6,0.3) node [midway, right] {\rotatebox{90}{$\sim$}};
\varepsilonnd{tikzpicture}
$$
\noindent where $c$ is the continuation map associated to the Reeb flow and the top arrow is the canonical map in Theorem \ref{thm:sato}.
\varepsilonnd{proposition}
\begin{corollary}[Sato-Sabloff fiber sequence]\label{cor:sato-sab}
For $F, G \in \Sh_\Lambda(M)$, there is a fiber sequence
$$\Hom(F, T_{-\epsilon}(G)) \xrightarrow{c} \Hom(F, T_{\epsilon}(G)) \rightarrow \Gamma(\Lambda, \mhom(F,G) )$$
where $c$ is induced by the continuation map $T_{-\epsilon}(G) \rightarrow T_\epsilon(G)$ and the second map is given by the canonical restriction map $\Gamma(T^* M; \mhom(F,G)) \rightarrow \Gamma(\Lambda, \mhom(F,G) )$.
\end{corollary}
\begin{remark}\label{rem:multi-component-sato}
We remark that the above computation also works when we take the microlocalization along a single connected component $\Lambda_i \subseteq \Lambda \subseteq S^{*}M$. Let $\widetilde{T}_t: S^{*}M \rightarrow S^{*}M$ be a Hamiltonian flow such that $\widetilde{T}_t|_{T_t(\Lambda_i)} = T_t$ agrees with the Reeb flow, while $\widetilde{T}_t|_{\Lambda \backslash \Lambda_i} = \mathrm{id}$. Then there is a fiber sequence
$$\Hom(F, \widetilde{T}_{-\epsilon}(G)) \rightarrow \Hom(F, \widetilde{T}_{\epsilon}(G)) \rightarrow \Gamma(\Lambda_i, \mhom(F,G) ).$$
\end{remark}
The above expression will be useful in Section \ref{sec:doubling-local} when we construct the doubling functor in Theorem \ref{thm:doubling}. Here we note that there is another expression,
$$\Hom(\wrap_\mathbb{L}ambdambda^- T_{-\varepsilonpsilon}(F),G) \rightarrow \Hom(F,G) \rightarrow \Hom(m_\mathbb{L}ambdambda^l m_\mathbb{L}ambdambda (F),G)$$
where we use Theorem \ref{w=ad} to identify $\Hom(\wrap_\mathbb{L}ambdambda^- T_{-\varepsilonpsilon}(F),G) = \Hom(T_{\varepsilonpsilon}(F), G) = \Hom(F,T_{-\varepsilonpsilon}(G))$ and Definition-Theorem \ref{mu-hom_as_hom} to identify $\Gamma(\mathbb{L}ambdambda,\mathfrak{m}hom(F,G)) = \Hom(m_\mathbb{L}ambdambda^l m_\mathbb{L}ambdambda (F), G)$.
Equivalently, we have the fiber sequence
$$m_\mathbb{L}ambdambda^l m_\mathbb{L}ambdambda \rightarrow \id \rightarrow \wrap_\mathbb{L}ambdambda^+ T_{\varepsilonpsilon}$$
between endofunctors on $\Sh_\Lambda(M)$. A similar discussion holds for the right adjoint $m_\Lambda^r$. Later, in Section \ref{sec:ad-micro}, we will explain another perspective on this fiber sequence.
\begin{definition}\label{def:wrap_once_functors}
We define the positive and negative wrap-once functors $S_\Lambda^+$, $S_\Lambda^-: \Sh_\Lambda(M) \rightarrow \Sh_\Lambda(M)$ as the compositions
$$S_\Lambda^+(F) \coloneqq \wrap_\Lambda^+ T_{\epsilon}(F), \; S_\Lambda^-(F) \coloneqq \wrap_\Lambda^- T_{-\epsilon} (F) .$$
\end{definition}
The above discussion shows that $S_\Lambda^-$ and $S_\Lambda^+$ are the cotwist and dual cotwist associated to the adjunctions $m_\Lambda^l \dashv m_\Lambda \dashv m_\Lambda^r$. See Section \ref{sec:spherical} for the terminology.
Before entering the proof of Proposition \ref{prop:hom_w_pm}, we recall that the continuation map $T_{-\epsilon}(F)\rightarrow T_\epsilon(F)$ is constructed from the GKS sheaf kernel associated to the Reeb flow.
Write $q: M \times \mathfrak{m}athbb{R} \rightarrow M$ and $t: M \times \mathfrak{m}athbb{R} \rightarrow \mathfrak{m}athbb{R}$ for the projection maps. For a subanalytic Legendrian $\mathbb{L}ambdambda \subseteq S^{*}M$, consider the Legendrian movie of $\mathbb{L}ambdambda$ under the identity flow
$$\mathbb{L}ambdambda_q = \{(x, \timesi, t, 0) | (x, \timesi) \in \mathbb{L}ambdambda, t \in \mathbb{R}R\} \subset S^{*}(M \times \mathfrak{m}athbb{R}).$$
Let $T_t: S^{*}M \rightarrow S^{*}M$ be any Reeb flow defined by a positive Hamiltonian $H: S^{*}M \rightarrow \mathbb{R}$ and consider the Legendrian movie of $\Lambda$ under the Reeb flow
$$\Lambda_T = \{(x, \xi, t, \tau) \mid (x, \xi) = T_t(x_0, \xi_0), \tau = -H \circ T_t(x_0, \xi_0), (x_0, \xi_0) \in \Lambda\} \subset S^{*}(M \times \mathbb{R}).$$
A standard trick is to consider the total sheaf Hom, $\mathscr{H}om(q^*{F}, K(T) \circ {G})$.
The following singular support estimate is essentially the same as \cite{LiEstimate}*{Lemma 4.1}.
Let $\mathcal{Q}_\pm(\Lambda_0, \Lambda_1)$ be the set of unoriented Reeb chords from $\Lambda_0$ to $\Lambda_1$, namely
$$\mathcal{Q}_\pm(\Lambda_0, \Lambda_1) = \{(x_0, \xi_0, x_1, \xi_1) \in \Lambda_0 \times \Lambda_1 \mid\, \exists\, t \in \mathbb{R}, T_t(x_0, \xi_0) = (x_1, \xi_1)\}.$$
For a Reeb chord such that $T_t(x_0, \xi_0) = (x_1, \xi_1)$, we call $t \in \mathbb{R}$ the length of the Reeb chord.
\begin{lemma}\label{lem:ss-reeb}
Let $\Lambda_{0}, \Lambda_{1} \subset S^{*}M$ be subanalytic Legendrians, $T_t: S^{*}M \rightarrow S^{*}M$ be any Reeb flow and ${F} \in \Sh_{\Lambda_0}(M), {G} \in \Sh_{\Lambda_1}(M)$. Then
\[\begin{split}
\ms^\infty(\mathscr{H}om(q^*{F}, K(T) \circ {G})) \cap \{(x, 0, t, \tau) \in S^{*}(M \times \mathbb{R}) \mid \tau > 0\} &= \varnothing, \\
\ms^\infty(\mathscr{H}om(q^*{F}, K(T) \circ {G})) \cap \{(x, 0, t, \tau) \in S^{*}(M \times \mathbb{R}) \mid \tau < 0\} &\hookrightarrow \mathcal{Q}_\pm(\Lambda_0, \Lambda_1).
\end{split}\]
The $t$ coordinates in the intersection correspond to lengths of Reeb chords in $\mathcal{Q}_\pm(\Lambda_0, \Lambda_1)$.
In particular, $\mathscr{H}om(q^*{F}, K(T) \circ {G})$ is $\RR$-noncharacteristic away from the length spectrum of Reeb chords.
\end{lemma}
\begin{proof}
Since $\ms^\infty(q^*{F}) \cap \ms^\infty(K(T) \circ {G}) = \Lambda_{0,q} \cap \Lambda_{1,T} = \varnothing$, we can apply the singular support estimate (8) of Proposition \ref{prop:mses}:
$$\ms^\infty(\mathscr{H}om(q^*{F}, K(T) \circ {G})) \subset (-\ms^\infty(q^*{F})) + \ms^\infty(K(T) \circ {G}) = (-\Lambda_{0,q}) + \Lambda_{1,T}.$$
Hence $(x, 0, t, \tau) \in (-\Lambda_{0,q}) + \Lambda_{1,T}$ if and only if there exists a pair $(x_0, \xi_0) \in \Lambda_0$, $(x_1, \xi_1) \in \Lambda_1$ such that $(x_1, \xi_1) = T_t(x_0, \xi_0)$, or in other words there is a Reeb chord from $\Lambda_0$ to $\Lambda_1$ of length $t$. In particular, we know that $\tau = -H(x_0, \xi_0) < 0$ is determined by such a pair. Hence when $\tau > 0$, there is no $(x, 0, t, \tau) \in (-\Lambda_{0,q}) + \Lambda_{1,T}$. Therefore
\[\begin{split}
\ms^\infty(\mathscr{H}om(q^*{F}, K(T) \circ {G})) \cap \mathrm{Graph}(dt) &= \varnothing, \\
\ms^\infty(\mathscr{H}om(q^*{F}, K(T) \circ {G})) \cap \mathrm{Graph}(-dt) &\hookrightarrow \mathcal{Q}_\pm(\Lambda_0, \Lambda_1),
\end{split}\]
where the injection maps $(x, 0, t, -\tau)$ to the Reeb chord of length $t$ connecting $(x_0, \xi_0) \in \Lambda_0$ and $(x_1, \xi_1) = T_t(x_0, \xi_0) \in \Lambda_1$.
\end{proof}
\betaegin{proof}[Proof of Proposition \ref{prop:hom_w_pm}]
Denote by $i_t$ the inclusion of the slice of $M \times M$ at $t$.
We first prove the more straightforward statement of $\Hom(F,G) \timesrightarrow{\sim} \Hom(F,T_\varepsilonpsilon(G))$.
The above Lemma \ref{lem:ss-reeb} implies that, by Proposition \ref{prop:mses}~(5), the $\varepsilonpsilon$-slice of the total Hom sheaf $\mathfrak{m}athscr{H}om(q^*{F}, K(T) \circ {G})$ is the same as
$$ i_\varepsilonpsilon^* \mathfrak{m}athscr{H}om(q^*{F}, K(T) \circ {G}) = i_\varepsilonpsilon^! \mathfrak{m}athscr{H}om(q^*{F}, K(T) \circ {G})[-1] = \sHom(F,T_\varepsilonpsilon(G)).$$
Thus, we may apply Proposition \ref{prop:properties_of_continuation_maps}~(4) and get
$$\sHom(F,G) \timesrightarrow{\sim} \lmi{t \rightarrow 0^+} \sHom(F,T_t(G)).$$
Applying $\Gamma(M;-)$, we obtain that $\Hom(F,G) \timesrightarrow{\sim} \lim_{t \rightarrow 0^+} \Hom(F,T_t(G))$.
Denote by $t: M \times \mathbb{R}R \rightarrow \mathbb{R}R$ the projection to the parameter space.
But by the above Lemma \ref{lem:ss-reeb} and Proposition \ref{prop:mses}~(4), the sheaf $t_* \mathscr{H}om(q^*{F}, K(T) \circ {G}) $ is a constant sheaf over $\left(0,c(\Lambda) \right)$.
Note that we use the assumption that $\supp(F)$ and $\supp(G)$ are compact in order to obtain the microsupport estimate for the pushforward.
Thus the latter limit, when restricted to $0 < \epsilon < c(\Lambda)$, is a constant diagram and the projection
$$\lmi{t \rightarrow 0^+} \Hom(F,T_t(G)) \xrightarrow{\sim} \Hom(F,T_{\epsilon}(G))$$
is an isomorphism for $0 < \epsilon < c(\Lambda)$.
To prove the statement for $\Hom(F,T_{-\varepsilonpsilon}(G)) \rightarrow \Hom(F,G)$,
let $\varphii_{i,\mathbb{R}R}: M \times M \times \mathbb{R}R \rightarrow M \times \mathbb{R}R$ denote the $\mathbb{R}R$-parameter version of the projection to the $i$-th component.
Instead of the total Hom sheaf $ \mathfrak{m}athscr{H}om(q^*{F}, K(T) \circ {G})$, we will consider its $*$-variant $(\mathbb{D}elta \times \id_{\mathbb{R}R})^* \sHom( \varphii_{1,\mathbb{R}R}^* q^* F, \varphii_{2,\mathbb{R}R}^* K(T) \circ G)$. Proposition \ref{prop:mses}~(5) implies that the canonical map $f^*H \text{ot}imes f^!1_Y \rightarrow f^!H$ is an isomorphism when $f$ is noncharacteristic to the sheaf $H$.
Thus, there is a canonical map
$$\mathbb{D}elta^* \sHom(\varphii_1^* F, \varphii_2^* G) \rightarrow \mathbb{D}elta^! \sHom(\varphii_1^* F, \varphii_2^! G) = \sHom(F,G).$$
Here, we use the fact that $\mathbb{D}elta^! 1_{M \times M} = \omega_M^{-1}$ is an invertible sheaf so we can multiply the morphism with its inverse $\omega_M$.
Similarly, there is a canonical map $$(\Delta \times \id_{\RR})^* \sHom( \pi_{1,\RR}^* q^* F, \pi_{2,\RR}^* K(T) \circ G) \rightarrow \mathscr{H}om(q^*{F}, K(T) \circ {G})$$
which is an isomorphism over $\left(-c(\Lambda),0 \right)$ by a microsupport estimate similar to the above Lemma \ref{lem:ss-reeb} and Proposition \ref{prop:mses}~(5).
Thus, by considering the $\epsilon$-slice for $-c(\Lambda) < \epsilon < 0$ and the $0$-slice, we obtain the following commutative diagram:
$$
\betaegin{tikzpicture}
\node at (0,1.7) {$\mathbb{D}elta^* \sHom(\varphii_1^* F, \varphii_2^* T_{-\varepsilonpsilon}(G))$};
\node at (5,1.7) {$\mathbb{D}elta^* \sHom (\varphii_1^* F, \varphii_2^* G)$};
\node at (0,0) {$\sHom(F,T_{-\varepsilonpsilon} (G))$};
\node at (5,0) {$\sHom(F,G)$};
\deltaraw [->, thick] (2,1.7) -- (3.2,1.7) node [midway, above] {$ $};
\deltaraw [->, thick] (1.5,0) -- (3.9,0) node [midway, above] {$c$};
\deltaraw [->, thick] (0,1.4) -- (0,0.3) node [midway, left] {\rotatebox{90}{$\sim$}};
\deltaraw [->, thick] (5,1.4) -- (5,0.3) node [midway, right] {$ $};
\varepsilonnd{tikzpicture}
$$
Applying Proposition \ref{prop:properties_of_continuation_maps}~(3) to $(\Delta \times \id_{\RR})^* \sHom( \pi_{1,\RR}^* F, \pi_{2,\RR}^* K(T) \circ G)$, we obtain that
$$ \clmi{-t \rightarrow 0^-} \, \sHom(F,T_{-t}(G) ) \timesleftarrow{\sim} \clmi{-t \rightarrow 0^-} \, \mathbb{D}elta^* \sHom(\varphii_1^* F, \varphii_2^* T_{-t}(G)) \timesrightarrow{\sim} \mathbb{D}elta^* \sHom (\varphii_1^* F, \varphii_2^* G).$$
Since $\supp(F)$ and $\supp(G)$ are compact, $\Gamma(M;-) = \Gamma_c(M;-)$ is colimit preserving, and thus we conclude that $\operatornameeratorname{colim}_{-t \rightarrow 0^-} \Hom(F,T_{-t}(G)) \timesrightarrow{\sim} \Gamma(M;\mathbb{D}elta^* \sHom (\varphii_1^* F, \varphii_2^* G))$ is an isomorphism.
The same argument as in the positive case then implies that the colimit diagram is constant and thus the inclusion
$$\Hom(F,T_{-\varepsilonpsilon}(G)) \rightarrow \clmi{-t \rightarrow 0^-} \Hom(F,T_{-t}(G))$$
is an isomorphism for $-c(\mathbb{L}ambdambda) < -\varepsilonpsilon < 0$.
Finally, we notice that the diagram in the statement commutes because it is the composition of the following two commutative triangles:
\[
\betaegin{tikzpicture}
\node at (0,1.7) {$\Gamma(M, \,\mathbb{D}elta^*\mathfrak{m}athscr{H}om(\varphii_1^*{F}, \varphii_2^*{G}))$};
\node at (6,1.7) {$\Hom({F, G})$};
\node at (0,0) {$\Hom({F}, T_{-\varepsilonpsilon}({G}))$};
\node at (6,0) {$\Hom({F}, T_{\varepsilonpsilon}({G}))$};
\deltaraw [->, thick] (2.5,1.7) -- (4.9,1.7) node [midway, above] {$ $};
\deltaraw [->, thick] (1.5,0) -- (4.5,0) node [midway, above] {$ $};
\deltaraw [->, thick] (0,1.4) -- (0,0.3) node [midway, left] {\rotatebox{90}{$\sim$}};
\deltaraw [->, thick] (6,1.4) -- (6,0.3) node [midway, right] {\rotatebox{90}{$\sim$}};
\deltaraw [->, thick] (1.6,0.3) -- (5,1.3) node [midway, above] {$ $};
\node[scale=1.5] at (1.6,1) {$\circlearrowleft$};
\node[scale=1.5] at (4.6,0.7) {$\circlearrowleft$};
\varepsilonnd{tikzpicture} \qedhere
\]
\varepsilonnd{proof}
\betaegin{remark}
The identity $\Hom({F, G}) \simeq \Hom({F}, T_{\varepsilonpsilon}({G}))$ is often referred to as the perturbation trick, and has in fact appeared in previous works of Guillermou \cite[Corollary 16.6]{Gui} for the special case of vertical translation on $M \times \mathbb{R}R$, and Zhou for arbitrary Reeb flows \cite{Zhou}.
The proof here follows \cite[Proposition 3.18]{Kuo-wrapped-sheaves} of the first author.
\varepsilonnd{remark}
\subsection{Sabloff-Serre duality}\label{sec:serre}
In this section, we illustrate an additional property that arises from the Sato-Sabloff fiber sequence and prove a Sabloff-Serre duality
$$\Hom(F, T_{-\epsilon}(G) \otimes \omega_M) = \Hom(F, T_\epsilon(G))^\vee.$$
Such a duality between a positive Reeb pushoff and a negative Reeb pushoff has been understood in symplectic geometry in a number of works. In Legendrian contact homology, this is known as the Sabloff duality \cite{Sabduality,EESduality}, and in Fukaya-Seidel categories, this is known as the Poincar\'{e}-Lefschetz duality proved by Seidel \cite{SeidelFukI}.
In the previous work of the second author \cite{LiEstimate}, we proved a version of Sabloff duality for $\Lambda \subset J^1(N) \cong S^{*}_{\tau > 0}(N \times \mathbb{R})$ and $F, G \in \Sh^b_\Lambda(N \times \mathbb{R})$ with compact support, when $N$ is orientable. Here we prove a general version of it.
Recall that we have assumed throughout the paper that $\cV$ is a rigid symmetric monoidal category. We will explicitly use the following consequence of the rigidity assumption on $\cV$ in the computation.
\betaegin{lemma}[{\cite[Proposition 4.9]{Hoyois-Scherotzke-Sibilla}}]
Assume $\cV_0$ is a rigid symmetric monoidal category.
Then there is a canonical equivalence of symmetric monoidal $\infty$-categories
$$\cV_0 \rightarrow \cV_0^{op}, \, X \mathfrak{m}apsto X^\vee \coloneqq \Hom(X,1_{\cV}).$$
In particular, $(X^\vee)^\vee = X$.
\varepsilonnd{lemma}
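For instance (purely as an illustration), when $\cV_0 \simeq \mathrm{Perf}_k$ for a field $k$, the equivalence sends a perfect complex $X$ to its linear dual $X^\vee = \Hom(X, k)$, and the identity $(X^\vee)^\vee = X$ is the usual biduality of perfect complexes.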
To study the Serre functor, we need the following technical lemma. Let $\pi_M: M \times N \rightarrow M$ and $\pi_N: M \times N \rightarrow N$ be the projection maps.
\betaegin{lemma}\lambdabel{omega_product}
Let $M$ and $N$ be manifolds. Then
\betaegin{enumerate}
\item $\varphii_N^! 1_N = \varphii_M^* \omega_M$,
\item $\omega_{M \times N} = \omega_M \betaoxtimes \omega_N$.
\varepsilonnd{enumerate}
As a corollary, we see the inverse of $\omega_M$ is isomorphic to $\mathbb{D}elta^! 1_{M \times M}$.
\varepsilonnd{lemma}
\betaegin{proof}
Consider the pullback diagram:
$$
\betaegin{tikzpicture}
\node at (0,1.7) {$M \times N$};
\node at (4,1.7) {$N$};
\node at (0,0) {$M$};
\node at (4,0) {$\{*\}$};
\deltaraw [->, thick] (0.7,1.7) -- (3.7,1.7) node [midway, above] {$\varphii_N$};
\deltaraw [->, thick] (0.3,0) -- (3.6,0) node [midway, above] {$p_M$};
\deltaraw [->, thick] (0,1.4) -- (0,0.3) node [midway, right] {$\varphii_M$};
\deltaraw [->, thick] (4,1.4) -- (4,0.3) node [midway, right] {$p_N$};
\varepsilonnd{tikzpicture}
$$
For (1), the base change $p_M^* {p_N}_! = {\varphii_N}_! \varphii_M^*$ implies that there exists
a canonical map $\varphii_M^* p_N^! \rightarrow \varphii_N^! p_M^*$.
This map is in general not an isomorphism but in our case,
we may assume $M$ and $N$ are Euclidean spaces by checking the map locally.
Then the isomorphism follows from the isomorphism $1_\cV = \Gamma_c(\mathbb{R}R^k;\cV)[k]$
and $\omega_{\mathbb{R}R^k} = 1_{\mathbb{R}R^k}[k]$.
For (2), we can use (1) of this lemma and Proposition \ref{prop:mses}~(5) and compute that
$$\omega_M \betaoxtimes \omega_N = \varphii_M^* \omega_M \text{ot}imes \varphii_N^* \omega_N
= \varphii_M^* \omega_M \text{ot}imes \varphii_M^! 1_M = \varphii_M^! \omega_M = \omega_{M \times N}.$$
To obtain the corollary, we apply Proposition \ref{prop:mses}~(5) again and compute that
\betaegin{equation*}
\mathbb{D}elta^!( 1_{M \times M}) \text{ot}imes \omega_M
= \mathbb{D}elta^!( 1_{M \times M}) \text{ot}imes \mathbb{D}elta^* (\varphii_1^* \omega_M )
= \mathbb{D}elta^! (\varphii_1^* \omega_M) = \mathbb{D}elta^! \varphii_2^! (1_M) = 1_M. \qedhere
\varepsilonnd{equation*}
\varepsilonnd{proof}
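As a quick sanity check in the simplest case $M = \RR$: here $\omega_\RR \simeq 1_\RR[1]$, and since $\Delta: \RR \hookrightarrow \RR^2$ is a closed embedding of codimension one with trivially oriented normal bundle, $\Delta^! 1_{\RR \times \RR} \simeq 1_\RR[-1] \simeq \omega_\RR^{-1}$, consistent with the corollary.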
If $L$ is invertible, then $\sHom(F,G \otimes L) = \sHom(F,G) \otimes L$. Thus, when $M$ is a manifold, the Verdier duality $\VD{M}$ differs from the naive duality $\ND{M}(F) \coloneqq \sHom(F,1_M)$ by tensoring with $\omega_M^{-1}$.
The following proposition is the main result in this section, which generalizes the Sabloff duality in \cite{LiEstimate} to arbitrary manifolds.
\betaegin{proposition}[Sabloff-Serr{e} duality]\lambdabel{prop:sab-serre}
Let $\mathbb{L}ambdambda \subset S^{*}M$ be a compact subanalytic Legendrian, ${F, G} \in \mathbb{S}h^b_\mathbb{L}ambdambda(M)$ such that $\mathfrak{m}athrm{supp}(F) \cup \mathfrak{m}athrm{supp}(G)$ is compact. Then
$$\Hom({F}, T_{-\varepsilonpsilon}({G}) \text{ot}imes \omega_M) \simeq \Hom({G, F})^\vee.$$
In particular, when $M$ is oriented, $\Hom({F}, T_{-\varepsilonpsilon}({G}))[-n] \simeq \Hom({G, F})^\vee.$
\varepsilonnd{proposition}
\betaegin{proof}
By Proposition \ref{prop:hom_w_pm} and Remark \ref{rem:DFotimesG},
\betaegin{align*}
\Hom(F,T_{-\varepsilonpsilon}(G) \text{ot}imes \omega_M) &= p_* \left( \mathbb{D}elta^* \sHom(\varphii_1^* F, \varphii_2^* (G \text{ot}imes \omega_M)) \right) \\
&= p_* ( \mathbb{N}D{M}(F) \text{ot}imes G \text{ot}imes \omega_M) = p_* (\VD{M}(F) \text{ot}imes G).
\varepsilonnd{align*}
The compact support assumption then implies that
\betaegin{align*}
\Hom(F,T_{-\varepsilonpsilon}(G) \text{ot}imes \omega_M)^\vee &= \Hom(p_! ( \VD{M}(F) \text{ot}imes G), 1_\cV) = \Hom( \VD{M}(F) \text{ot}imes G, \omega_M) \\
&= \Hom\left(G, \VD{M} \circ \VD{M}(F) \right) = \Hom(G, F). \qedhere
\varepsilonnd{align*}
\varepsilonnd{proof}
In Section \ref{sec:serre-proper}, we will see that the above proposition plays a key role in the result regarding Serre functors. Actually, one may have noticed that by Theorem \ref{w=ad}, we have shown
$$\Hom(F, S_\mathbb{L}ambdambda^-(G) \text{ot}imes \omega_M) = \Hom(G, F)^\vee.$$
However, we do not know whether $S_\mathbb{L}ambdambda^-$ sends $\mathbb{S}h^b_\mathbb{L}ambdambda(M)$ to $\mathbb{S}h^b_\mathbb{L}ambdambda(M)$ (in fact, in general it does not; see Section \ref{sec:example}). This issue will be addressed in Section \ref{sec:serre-proper}.
\subsection{Adjoints of microlocalization}\lambdabel{sec:ad-micro}
In Section \ref{sec:sato-sab}, we find that the cotwist and dual cotwist of the adjunctions $m_\mathbb{L}ambdambda^l \deltaashv m_\mathbb{L}ambdambda \deltaashv m_\mathbb{L}ambdambda^r$ defined by the fiber sequences
$$ m_\mathbb{L}ambdambda^l m_\mathbb{L}ambdambda \rightarrow \id \rightarrow S_\mathbb{L}ambdambda^+, \, S_\mathbb{L}ambdambda^- \rightarrow \id \rightarrow m_\mathbb{L}ambdambda^r m_\mathbb{L}ambdambda$$
are given by the wrap-once functors $S_\mathbb{L}ambdambda^\varphim$ in Definition \ref{def:wrap_once_functors}.
The goal of this section is to show that the above observation extends to the functors $m_\Lambda^l$ and $m_\Lambda^r$ themselves: they admit descriptions by wrappings as well.
Recall that $\msh_\Lambda$ is a sheaf of categories on $T^* M$ supported on $\Lambda$.
To give a description of an object $F \in \msh_\Lambda(\Lambda)$,
we choose open covers $\{U_\alpha\}_{\alpha \in I}$ of $M$ and $\{\Omega_\alpha\}_{\alpha \in I}$ of $\Lambda \subseteq S^*M$
such that $\Omega_\alpha \subseteq S^{*}U_\alpha$.
Then we have the following commutative diagram, induced by the corresponding inclusions of open sets in $T^* M$:
$$
\betaegin{tikzpicture}
\node at (0,1.7) {$\mathbb{S}h_\mathbb{L}ambdambda(M)$};
\node at (4,1.7) {$\mathbb{S}h_\mathbb{L}ambdambda(U_\alphalpha)$};
\node at (0,0) {$\mathfrak{m}sh_\mathbb{L}ambdambda(\deltaT^* M)$};
\node at (4,0) {$\mathfrak{m}sh_\mathbb{L}ambdambda(\Omega_\alphalpha)$};
\deltaraw [->, thick] (0.8,1.7) -- (3.2,1.7) node [midway, above] {$j_\alphalpha^*$};
\deltaraw [->, thick] (1.1,0) -- (3.1,0) node [midway, above] {$r_\alphalpha^*$};
\deltaraw [->, thick] (0,1.4) -- (0,0.3) node [midway, left] {$m_{\mathbb{L}ambdambda}$};
\deltaraw [->, thick] (4,1.4) -- (4,0.3) node [midway, right] {$m_{\mathbb{L}ambdambda \cap \Omega_\alphalpha}$};
\varepsilonnd{tikzpicture}
$$
Since the restriction maps admit left adjoints, we can pass to left adjoints and obtain the following diagram:
$$
\betaegin{tikzpicture}
\node at (0,1.7) {$\mathbb{S}h_\mathbb{L}ambdambda(M)$};
\node at (4,1.7) {$\mathbb{S}h_\mathbb{L}ambdambda(U_\alphalpha)$};
\node at (0,0) {$\mathfrak{m}sh_\mathbb{L}ambdambda(\deltaT^* M)$};
\node at (4,0) {$\mathfrak{m}sh_\mathbb{L}ambdambda(\Omega_\alphalpha)$};
\deltaraw [->, thick] (3.2,1.7) -- (0.8,1.7) node [midway, above] {$\wrap_\mathbb{L}ambdambda^+ {j_\alphalpha}_!$};
\deltaraw [->, thick] (3.1,0) -- (1.1,0) node [midway, above] {${r_\alphalpha}_!$};
\deltaraw [->, thick] (0,0.3) -- (0,1.4) node [midway, left] {$m_\mathbb{L}ambdambda^l$};
\deltaraw [->, thick] (4,0.3) -- (4,1.4) node [midway, right] {$m_{\mathbb{L}ambdambda \cap \Omega_\alphalpha}^l$};
\varepsilonnd{tikzpicture}
$$
Here we use the left adjoint $\wrap_\mathbb{L}ambdambda^+: \mathbb{S}h(M) \rightarrow \mathbb{S}h_\mathbb{L}ambdambda(M)$ from Theorem \ref{w=ad}.
Now the equivalence $\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \timesrightarrow{\sim} \lim_{\alphalpha \in I} \mathfrak{m}sh_\mathbb{L}ambdambda(\Omega_\alphalpha)$ implies that
$$F = \clmi{\alphalpha \in I} \, {r_\alphalpha}_! r_\alphalpha^* F, \; F \in \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda)$$ where
$r_\alphalpha^*: \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \leftrightharpoons \mathfrak{m}sh_\mathbb{L}ambdambda(\Omega_\alphalpha): {r_\alphalpha}_!$
is the adjunction given by restrictions.
For each $\alphalpha \in I$, consider the fiber sequence (in $\mathbb{P}rLcs$)
$$ K(U_\alphalpha, \Omega_\alphalpha) \hookrightarrow \mathbb{S}h_\mathbb{L}ambdambda(U_\alphalpha, \Omega_\alphalpha) \rightarrow \mathfrak{m}sh_\mathbb{L}ambdambda(\Omega_\alphalpha).$$
By Theorem \ref{w=ad}, the left adjoint of the inclusion
$K(U_\alphalpha,\Omega_\alphalpha) \hookrightarrow \mathbb{S}h_\mathbb{L}ambdambda(U_\alphalpha,\Omega_\alphalpha)$
can be described by
\betaegin{align*}
\mathbb{S}h_\mathbb{L}ambdambda(U_\alphalpha,\Omega_\alphalpha) &\rightarrow K(U_\alphalpha,\Omega_\alphalpha) \\
F &\mathfrak{m}apsto \clmi{w \in W^+(\mathbb{L}ambdambda \cap \Omega_\alphalpha^c)} F^w \varepsilonqqcolon \wrap_\alphalpha^+(F)
\varepsilonnd{align*}
where $W^+(\mathbb{L}ambdambda \cap \Omega_\alphalpha^c)$ means the category of positive wrappings compactly supported in $\Omega_\alphalpha \cup (S^*U_\alphalpha \setminus \mathbb{L}ambdambda)$\footnote{Recall that $\wrap_\mathbb{L}ambdambda^\varphim$ is defined by taking colimits/limits among all positive/negative wrappings in $S^*M \setminus \mathbb{L}ambdambda$. Here $\wrap_{\Omega^c}^\varphim$ is therefore defined by wrappings supported in $\Omega$.}. Therefore, the formal property of the adjunction $m_\mathbb{L}ambdambda^l \deltaashv m_\mathbb{L}ambdambda$ implies the following (local) fiber sequence:
\betaegin{lemma}\lambdabel{lem:loc-ad-of-microlocalize}
Let $\mathbb{L}ambdambda \subseteq S^{*}M$ be a subanalytic Legendrian, $U \subseteq M$ and $\Omega \subseteq S^*U$ be an open neighbourhood of some connected component of $\mathbb{L}ambdambda \cap S^*U$, such that there is a fiber sequence
$$ K(U, \Omega) \hookrightarrow \mathbb{S}h_\mathbb{L}ambdambda(U, \Omega) \rightarrow \mathfrak{m}sh_\mathbb{L}ambdambda(\Omega).$$
Then given $F \in \mathbb{S}h_\mathbb{L}ambdambda(U, \Omega)$, there is a fiber sequence where $\Omega^c = S^*U \betaackslash \Omega$
$$m_{\mathbb{L}ambdambda \cap \Omega}^l m_{\mathbb{L}ambdambda \cap \Omega}(F) \rightarrow F \rightarrow \wrap_{\mathbb{L}ambdambda \cap \Omega^c}^+ F.$$
\varepsilonnd{lemma}
For a microsheaf $F \in \msh_\Lambda(\Lambda)$, the fiber sequence
$$ K(U_\alpha, \Omega_\alpha) \hookrightarrow \Sh_\Lambda(U_\alpha, \Omega_\alpha) \rightarrow \msh_\Lambda(\Omega_\alpha)$$
implies that there exists some $F_\alpha \in \Sh_\Lambda(U_\alpha, \Omega_\alpha)$ such that $r_\alpha^* F \xrightarrow{\sim} m_{\Lambda \cap \Omega_\alpha} (F_\alpha)$.
By adjunction this identification is induced by a morphism $m_{\Lambda \cap \Omega_\alpha}^r r_\alpha^* F \rightarrow F_\alpha$,
and the fiber sequence provides the following commuting diagram whose rows are fiber sequences:
$$
\begin{tikzpicture}
\node at (-4,1.7) {$m_{\Lambda \cap \Omega_\alpha}^l m_{\Lambda \cap \Omega_\alpha} m_{\Lambda \cap \Omega_\alpha}^r r_\alpha^* F$};
\node at (0,1.7) {$m_{\Lambda \cap \Omega_\alpha}^r r_\alpha^* F$};
\node at (4,1.7) {$\wrap_{\alpha}^+ m_{\Lambda \cap \Omega_\alpha}^r r_\alpha^* F$};
\node at (-4,0) {$m_{\Lambda \cap \Omega_\alpha}^l m_{\Lambda \cap \Omega_\alpha} F_\alpha$};
\node at (0,0) {$F_\alpha$};
\node at (4,0) {$\wrap_{\alpha}^+ F_\alpha$};
\draw [->, thick] (-1.7,1.7) -- (-1.1,1.7) node [midway, above] {$ $};
\draw [->, thick] (1.1,1.7) -- (2.5,1.7) node [midway, above] {$ $};
\draw [->, thick] (-2.3,0) -- (-0.4,0) node [midway, above] {$ $};
\draw [->, thick] (0.4,0) -- (3.2,0) node [midway, above] {$ $};
\draw [->, thick] (-4,1.2) -- (-4,0.3) node [midway, right] {$ $};
\draw [->, thick] (0,1.2) -- (0,0.3) node [midway, right] {$ $};
\draw [->, thick] (4,1.2) -- (4,0.3) node [midway, right] {$ $};
\end{tikzpicture}
$$
Here by definition $\wrap_{\alpha}^+ : \Sh_\Lambda(U_\alpha ,\Omega_\alpha) \rightarrow K(U_\alpha,\Omega_\alpha)$
is the left adjoint of the standard inclusion, and its effect is to blow away all microsupport in $\Omega_\alpha$. Hence, by definition, $\wrap_{\alpha}^+ m_{\Lambda \cap \Omega_\alpha}^r r_\alpha^* F = 0$ since $m_{\Lambda \cap \Omega_\alpha}^r r_\alpha^* F$ has microsupport in $\Omega_\alpha$.
Therefore, we conclude that
$$m_{\Lambda \cap \Omega_\alpha}^r r_\alpha^* F \rightarrow F_\alpha \rightarrow \wrap_{\alpha}^+ F_\alpha$$
with the morphisms given above is a fiber sequence.
Thus we can express $m_{\Lambda}^l$ by
\begin{align*}
m_{\Lambda}^l(F) &= m_{\Lambda}^l \clmi{\alpha \in I} \, {r_\alpha}_! r_\alpha^* F
= \clmi{\alpha \in I} \, m_{\Lambda}^l {r_\alpha}_! r_\alpha^* F \\
&= \clmi{\alpha \in I} \, \wrap_\Lambda^+ {j_\alpha}_! m_{\Lambda \cap \Omega_\alpha}^r r_\alpha^* F
= \clmi{\alpha \in I} \, \wrap_\Lambda^+ {j_\alpha}_! \mathrm{Fib} \left( F_\alpha \rightarrow \wrap_{\alpha}^+ F_\alpha \right).
\end{align*}
In summary, we have the following lemma.
\begin{lemma}\label{lem:left_right_adjoints_of_microlocalization}
Fix an open cover $\{U_\alpha\}_{\alpha \in I}$ of $M$ and $\{\Omega_\alpha\}_{\alpha \in I}$ of $\Lambda \subseteq S^*M$
such that $ \Omega_\alpha \subseteq S^{*}U_\alpha$.
Let $F \in \msh_\Lambda(\Lambda)$.
Then the left and right adjoints of $m_\Lambda: \Sh_\Lambda(M) \rightarrow \msh_\Lambda(\Lambda)$
can be described as
$$ m_{\Lambda}^l(F) = \clmi{\alpha \in I} \, \wrap_\Lambda^+ {j_\alpha}_! \mathrm{Fib} \left( F_\alpha \rightarrow \wrap_{\alpha}^+ F_\alpha \right),$$
and
$$ m_{\Lambda}^r(F) = \lmi{\alpha \in I} \, \wrap_\Lambda^- {j_\alpha}_* \mathrm{Cofib} \left( \wrap_{\alpha}^- F_\alpha \rightarrow F_\alpha \right),$$
where the $F_\alpha \in \Sh_\Lambda(U_\alpha, \Omega_\alpha)$ are local representatives of $F$.
\end{lemma}
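The formula for $m_\Lambda^r$ is obtained by the dual chain of identifications. The following is only a sketch: it assumes the limit decomposition $F = \lmi{\alpha \in I} {r_\alpha}_* r_\alpha^* F$ (dual to the colimit decomposition above, with ${r_\alpha}_*$ the right adjoint of restriction), the analogue of the commuting square above with ${j_\alpha}_*$, $\wrap_\Lambda^-$ and the right adjoints, and the dual of the recollement fiber sequence, which identifies $m_{\Lambda \cap \Omega_\alpha}^r r_\alpha^* F$ with $\mathrm{Cofib}(\wrap_\alpha^- F_\alpha \rightarrow F_\alpha)$:
$$ m_{\Lambda}^r(F) = \lmi{\alpha \in I} \, m_{\Lambda}^r {r_\alpha}_* r_\alpha^* F = \lmi{\alpha \in I} \, \wrap_\Lambda^- {j_\alpha}_* \, m_{\Lambda \cap \Omega_\alpha}^r r_\alpha^* F = \lmi{\alpha \in I} \, \wrap_\Lambda^- {j_\alpha}_* \mathrm{Cofib} \left( \wrap_{\alpha}^- F_\alpha \rightarrow F_\alpha \right),$$
using in the first step that $m_\Lambda^r$, being a right adjoint, commutes with limits.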
\begin{remark}
In the following section, we will make a more careful choice of the covers $\{U_\alpha\}_{\alpha \in I}$, $\{\Omega_\alpha\}_{\alpha \in I}$,
and representatives $\{ F_\alpha\}_{\alpha \in I}$. Such a choice will provide us with a variant of the antimicrolocalization result:
a result which embeds microsheaves into a certain category whose objects are represented by sheaves.
\end{remark}
\begin{remark}\label{rem:multi-component-ad}
Following Remark \ref{rem:multi-component-sato}, we remark that the above computation also works in the case when we take the microlocalization along a single connected component $\Lambda_i \subseteq \Lambda \subseteq S^{*}M$. In this case
$$m_\Lambda^l, m_\Lambda^r: \msh_{\Lambda}(\Lambda_i) \rightarrow \Sh_\Lambda(M)$$
can be computed by the above formula as well.
\end{remark}
We remark that the lemma above allows us to further cut off the singular support of the local representative $F_\alpha^0 \in \Sh_{\Lambda \cap \Omega_\alpha}(U_\alpha, \Omega_\alpha)$ and obtain a refined local representative $F_\alpha \in \Sh_{\Lambda \cap \Omega_\alpha}(U_\alpha)$ such that
$$m_{\Lambda \cap \Omega_\alpha}(F_\alpha) = F|_{\Lambda \cap \Omega_\alpha} \in \msh_\Lambda(\Lambda \cap \Omega_\alpha).$$
The following result was previously obtained by Guillermou \cite{Gui} by applying the refined microlocal cut-off lemma \cite{KS}*{Proposition 6.1.4}.
\begin{corollary}[Guillermou \cite{Gui}*{Lemma 6.7} or \cite{Guisurvey}*{Lemma 10.2.5}]\label{lem:refine-cutoff}
Let $\Lambda \subseteq S^{*}M$ be a locally closed subanalytic Legendrian such that $\pi|_\Lambda: \Lambda \rightarrow M$ is finite. Then for $(x, \xi) \in \Lambda$ and ${F} \in \msh_\Lambda(\Lambda)$, there is a neighbourhood $U$ of $x \in M$ and ${F}_U \in \Sh_{\Lambda \cap S^{*}U}(U)$ such that $m_{\Lambda \cap S^{*}U}({F}_U) = {F}|_{S^*U} \in \msh_\Lambda(S^*U)$.
\end{corollary}
\varepsilonnd{corollary}
\betaegin{proof}
Consider an open neighbourhood $\Omega \subset S^{*}U$ of all finite components of $\mathbb{L}ambdambda \cap S^{*}U$ such that $\mathfrak{m}sh_\mathbb{L}ambdambda(\Omega)$ fits in a fiber sequence
$$ K(U, \Omega) \hookrightarrow \mathbb{S}h_\mathbb{L}ambdambda(U, \Omega) \rightarrow \mathfrak{m}sh_\mathbb{L}ambdambda(\Omega).$$
For a representative $F \in \mathbb{S}h_\mathbb{L}ambdambda(U, \Omega)$, by Lemma \ref{lem:loc-ad-of-microlocalize} we know the fiber sequence
$$ m_{\mathbb{L}ambdambda \cap S^{*}U}^l m_{\mathbb{L}ambdambda \cap S^{*}U}( F|_{U} ) \rightarrow F|_{U} \rightarrow \wrap_{\mathbb{L}ambdambda \cap \Omega^c}^+(F|_{U}).$$
Then we claim that $m_{\mathbb{L}ambdambda \cap S^{*}U}^l m_{\mathbb{L}ambdambda \cap S^{*}U} F|_{U} \in \mathbb{S}h_{\mathbb{L}ambdambda \cap S^{*}U}(U)$. Actually, since $W^+(\Omega)$ only consists of positive wrappings supported in $\Omega$, we know that
$$F|_{U} \rightarrow \wrap_{\mathbb{L}ambdambda \cap \Omega^c}^+(F|_{U})$$
is an isomorphism in $S^{*}U \betaackslash \Omega$. Hence $m_{\mathbb{L}ambdambda \cap S^{*}M}^l m_{\mathbb{L}ambdambda \cap S^{*}M}( F|_{U}) \in \mathbb{S}h_\Omega(U)$. Therefore, since $F|_U$ and $\wrap_{\Omega^c}^+(F|_{U}) \in \mathbb{S}h_\mathbb{L}ambdambda(U, \Omega)$, we can conclude that
\betaegin{equation*}
m_{\mathbb{L}ambdambda \cap S^{*}M}^l m_{\mathbb{L}ambdambda \cap S^{*}M}( F|_{U}) \in \mathbb{S}h_{\mathbb{L}ambdambda \cap S^*M}(U). \qedhere
\varepsilonnd{equation*}
\varepsilonnd{proof}
Finally, we illustrate that the characterization of $m_\Lambda^l$ in terms of local wrappings is compatible with the characterization of the cotwist $S_\Lambda^+$ of the adjunction pair $m_\Lambda \vdash m_\Lambda^l$ in terms of global wrappings in Definition \ref{def:wrap_once_functors},
$$S_\Lambda^+(F) = \wrap_\Lambda^+ T_\epsilon(F).$$
Write $S_{\Lambda,\text{alg}}^+ = \mathrm{Cofib}(m_\Lambda^l m_\Lambda \rightarrow \mathrm{id})$.
Consider the expression
$$m_\Lambda^l m_\Lambda (F) = \wrap_\Lambda^+ \clmi{\alpha \in I} \, j_{\alpha !}\mathrm{Fib}(F|_{U_\alpha}\rightarrow \wrap_\alpha^+ F|_{U_\alpha}),$$
which implies the following characterization of $S_{\Lambda,\text{alg}}^+$ in terms of local wrappings:
$$S_{\Lambda,\text{alg}}^+(F) = \wrap_\Lambda^+ \left( \clmi{\alpha \in I} \, j_{\alpha !} (\wrap_\alpha^+F|_{U_\alpha}) \right).$$
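Let us spell out how this characterization is extracted from the previous expression. The following is a sketch: it uses the \v{C}ech decomposition $F \simeq \clmi{\alpha \in I} j_{\alpha !} F|_{U_\alpha}$ in $\Sh(M)$, the fact that $\wrap_\Lambda^+$ is the identity on $\Sh_\Lambda(M)$ and, being a left adjoint, commutes with colimits and cofibers, and the assumption that the counit $m_\Lambda^l m_\Lambda(F) \rightarrow F$ is computed termwise by the evident maps:
$$S_{\Lambda,\text{alg}}^+(F) = \mathrm{Cofib}\big(m_\Lambda^l m_\Lambda(F) \rightarrow F\big) \simeq \wrap_\Lambda^+ \clmi{\alpha \in I} \, j_{\alpha !}\, \mathrm{Cofib}\big(\mathrm{Fib}(F|_{U_\alpha}\rightarrow \wrap_\alpha^+ F|_{U_\alpha}) \rightarrow F|_{U_\alpha}\big) \simeq \wrap_\Lambda^+ \left( \clmi{\alpha \in I} \, j_{\alpha !} (\wrap_\alpha^+F|_{U_\alpha}) \right).$$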
We therefore know, by the Sabloff-Sato fiber sequence Proposition \ref{prop:hom_w_pm}, that the formula for $S_{\Lambda,\text{alg}}^+$ obtained by gluing local wrappings is isomorphic to the characterization of $S_{\Lambda}^+$ by global wrappings.
\begin{proposition}\label{prop:wrap-local-to-global}
For $\Lambda \subseteq S^*M$ a compact subanalytic Legendrian, we have
$$S_\Lambda^+(F) = \wrap_\Lambda^+ T_\epsilon(F) = \wrap_\Lambda^+ \left( \clmi{\alpha \in I} \, j_{\alpha !} (\wrap_\alpha^+F|_{U_\alpha}) \right) = S_{\Lambda,\text{alg}}^+(F).$$
\end{proposition}
The above statement, relating (the colimit of) local wrappings to global wrappings, is nontrivial since it is in general hard to glue positive Hamiltonian flows geometrically.
To illustrate the full strength of the local-to-global argument, we now present a proof of the proposition without appealing to the Sabloff-Sato fiber sequence (though one may realize that the ingredients of the Sato fiber sequence are in some sense hidden in Lemma \ref{lem:loc-ad-of-microlocalize}).
\begin{proof}[Proof of Proposition \ref{prop:wrap-local-to-global}]
We assume that the $\Omega_\alpha \subseteq S^*M$ are sufficiently small open balls. Fix $\alpha \in I$ and consider $j_{\alpha !} (\wrap_\alpha^+F|_{U_\alpha})$.
The functor $\wrap_\alpha^+$ is given by wrappings $\varphi_\alpha$ which are compactly supported in $\Omega_\alpha \cup (S^* U_\alpha \setminus \Lambda)$, i.e.~$\varphi_\alpha \in W^+(\Lambda \cap \Omega_\alpha^c)$.
Thus
$$j_{\alpha !} (\wrap_\alpha^+F|_{U_\alpha}) = j_{\alpha !} \left(\clmi{\varphi_\alpha \in W^+(\Lambda \cap \Omega_\alpha^c)} (F|_{U_\alpha})^{\varphi_\alpha} \right)
= \clmi{\varphi_\alpha \in W^+(\Lambda \cap \Omega_\alpha^c)} (F_{U_\alpha})^{\varphi_\alpha}.$$
Here, we use the fact that $\varphi_\alpha$ is compactly supported in $S^* U_\alpha$.
Now, we notice that the colimit $\operatorname{colim}_{\alpha \in I}$ is taken over a \v{C}ech diagram, so, for example, if $U_{\alpha \alpha_2 \dots \alpha_k} \coloneqq U_\alpha \cap U_{\alpha_2} \cap \dots \cap U_{\alpha_k} \neq \varnothing$, the intersection contributes a family of maps $F_{U_{\alpha \alpha_2 \dots \alpha_k}}^{\varphi_{\alpha \alpha_2 \dots \alpha_k}} \rightarrow F_{U_\alpha}^{\varphi_\alpha}$,
where $\varphi_{\alpha \alpha_2 \dots \alpha_k}$ runs through all wrappings which are compactly supported in $\Omega_{\alpha \alpha_2 \dots \alpha_k}$ and are smaller than $\varphi_\alpha$ as wrappings on $\Omega_\alpha$, together with similar arrows to $\alpha_2, \dots, \alpha_k$.
This discussion implies that
$$S_\Lambda^+(F) = \wrap_\Lambda^+ \clmi{\alpha \in I, \varphi_\alpha \in W^+(\Lambda \cap \Omega_\alpha^c)} \, F_{U_\alpha}^{\varphi_\alpha} = \wrap_\Lambda^+ \clmi{\alpha \in I, \varphi_\alpha \in W^+(\Lambda \cap \Omega_\alpha^c)} \, F^{\varphi_\alpha}. $$
Here we make the (inessential) simplification from $F_{U_\alpha}^{\varphi_\alpha}$ to $F^{\varphi_\alpha}$, since $\varphi_\alpha$ only modifies $F$ (micro)locally on $\Omega_\alpha$, so we might as well glue back the rest. In other words, the effect of the colimit is to take small positive wrappings of $F$ on each $\Omega_\alpha$ and glue them along intersections with small positive wrappings on $\Omega_{\alpha_1 \dots \alpha_k}$. Without loss of generality, assume that the open covers are locally finite. We state the following technical lemma, whose proof will be presented later:
\begin{lemma}\label{lem:continuation}
Let $\Phi$ and $\Psi: S^* M \times I \rightarrow S^* M$ be compactly supported contact isotopies with Hamiltonians $H_\varphi$ and $H_\psi$. Assume the generic situation that, on $\{H_\varphi \neq 0, H_\psi \neq 0\}$, the locus $\{H_\varphi = H_\psi \}$ is a union of closed submanifolds with positive codimension. Let $K_{\min}$, $K_{\max} \in \Sh(M \times M \times I)$ be the sheaf kernels given by
$$\clmi{H \leq \min(H_\varphi,H_\psi)} \, K(H), \ \clmi{H \leq \max(H_\varphi,H_\psi)} K(H) \in \Sh(M \times M \times I),$$ where $K(H)$ is the GKS sheaf quantization of the contact isotopy associated to the smooth function $H$.
Then the diagram
$$
\begin{tikzpicture}
\node at (0,1.7) {$K_{\min}$};
\node at (3.5,1.7) {$K(\Phi)$};
\node at (0,0) {$K(\Psi)$};
\node at (3.5,0) {$K_{\max}$};
\draw [->, thick] (0.5,1.7) -- (3,1.7) node [midway, above] {$ $};
\draw [->, thick] (0.5,0) -- (3,0) node [midway, above] {$ $};
\draw [->, thick] (0,1.4) -- (0,0.3) node [midway, right] {$ $};
\draw [->, thick] (3.5,1.4) -- (3.5,0.3) node [midway, right] {$ $};
\end{tikzpicture}
$$
is a pullback/pushout diagram.
\end{lemma}
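To see the shape in which this lemma enters the argument, note that convolution against a fixed sheaf preserves colimits in the kernel variable, since it is built from a tensor product and a $!$-pushforward. Writing $F \circ K$ for this convolution (a notation used only in this aside), the pullback/pushout square of kernels therefore convolves to a pushout square of sheaves,
$$F \circ K(\Phi) \, \sqcup_{F \circ K_{\min}} \, F \circ K(\Psi) \simeq F \circ K_{\max},$$
which is how two overlapping local wrappings, together with a common smaller wrapping on the overlap, will be merged into a single larger wrapping below.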
Given the above lemma, we can characterize the colimit over a smaller diagram category. First, consider for each $\alpha \in I$ one particular small positive wrapping $\varphi_\alpha^\star \in W^+(\Omega_\alpha^c)$ such that $\Lambda \subseteq \bigcup_{\alpha \in I}\mathrm{supp}(\varphi_\alpha^\star)^\circ$ (where $Z^\circ$ denotes the interior of $Z$).
Then, consider all small positive wrappings $\varphi_{\alpha_1\dots \alpha_k}^\star \in W^+(\Omega_{\alpha_1 \dots \alpha_k}^c)$ such that
$$H_{\varphi_{\alpha_1 \dots \alpha_k}^\star} \leq \min_{1\leq i \leq k}H_{\varphi_{\alpha_i}^\star}.$$
Denote by $W^+(\Omega_\alpha^c)^\star$ the corresponding subcategories of positive wrappings. By the lemma above we know that
$$\clmi{\alpha \in I, \varphi_\alpha^\star \in W^+(\Omega_\alpha^c)^\star} \, F^{\varphi_\alpha^\star} = \clmi{H_\varphi \leq \max_{\alpha \in I}(H_{\varphi_\alpha^\star})}F^{\varphi}.$$
Rescaling the Hamiltonians $\varphi_\alpha^\star \in W^+(\Omega_\alpha^c)^\star$, we may assume that $\max_{\alpha \in I}(H_{\varphi_\alpha^\star}) \leq H_T$, where $H_T$ is the Hamiltonian that defines the Reeb flow $T_t: S^*M \rightarrow S^*M$. By the universal property induced by positive isotopies in $W^+(\Lambda)$ (supported on $S^*M \backslash \Lambda$), there exists a continuation map
$$\clmi{\alpha \in I, \varphi_\alpha^\star \in W^+(\Omega_\alpha^c)^\star}\, F^{\varphi_\alpha^\star} \longrightarrow T_\epsilon(F).$$
Since $\Lambda \subseteq \bigcup_{\alpha \in I} \mathrm{supp}({\varphi_\alpha^\star})^\circ$, by Proposition \ref{prop:mses}~(9), we know that
$$\ms^\infty\Big(\clmi{H_\varphi \leq \max_{\alpha \in I}(H_{\varphi_\alpha^\star})}F^{\varphi}\Big) \cap \Lambda = \varnothing.$$
Clearly, any further wrapping in $W^+(\Lambda \cap \Omega_\alpha^c) \backslash W^+(\Omega_\alpha^c)$ is supported away from $\Lambda$. Since the $\Omega_\alpha \subseteq S^*M$ are chosen as small open balls, any further wrapping in $W^+(\Omega_\alpha^c) \backslash W^+(\Omega_\alpha^c)^\star$ is also supported away from $\Lambda$. Hence, by the definition of $\wrap_\Lambda^+$, we can conclude that
\begin{equation*}
S_\Lambda^+(F) = \wrap_\Lambda^+ \clmi{\alpha \in I, \varphi_\alpha \in W^+(\Lambda \cap \Omega_\alpha^c)} \, F^{\varphi_\alpha} = \wrap_\Lambda^+ \clmi{\alpha \in I, \varphi_\alpha^\star \in W^+(\Omega_\alpha^c)^\star} \, F^{\varphi_\alpha^\star} = \wrap_\Lambda^+ T_\epsilon(F). \qedhere
\end{equation*}
\end{proof}
\begin{proof}[Proof of Lemma \ref{lem:continuation}]
Pick an increasing sequence of non-negative Hamiltonians $H_i \rightarrow \min(H_\varphi,H_\psi)$ in the $C^0$-norm as $i \rightarrow \infty$. One can check that this is a cofinal sequence of Hamiltonians.
Denote by $\Phi_i$ the associated isotopies and by $K(\Phi_i) \in \Sh(M \times M \times I)$ the corresponding GKS sheaf quantizations.
Note that
$(H_i)_{i \in \NN}$ form a Cauchy sequence with respect to the $C^0$-norm. Let
$$K_{\min} = \clmi{i \rightarrow \infty}\, K(\Phi_i).$$
As a corollary of Proposition \ref{prop:mses}~(9), we have the microsupport estimate
$$\ms( K_{\min} ) = \ms\Big( \clmi{i \rightarrow \infty} K(\Phi_i)\Big) \subseteq \bigcap_{N \in \NN} \overline{ \bigcup_{i \geq N} \ms(K(\Phi_i)) } = \bigcap_{N \in \NN} \overline{ \bigcup_{i \geq N} \Lambda_{\Phi_i} }.$$
Since the colimit is also the limit, in the analytic sense,
of the sheaf kernels with respect to the $C^0$-norm (equivalently, the Hofer norm), using the completeness theorem for sheaf kernels of Asano-Ike \cite[Proposition 5.10]{AsanoIke-C0}, we know that $K_{\min}|_{t=0} = 1_\Delta$ and that $K_{\min}$ is invertible as a sheaf kernel.
Similarly, by considering an increasing cofinal sequence $H'_i \rightarrow \max(H_\varphi, H_\psi)$ in the $C^0$-norm as $i \rightarrow \infty$, we can define $K_{\max} \in \Sh(M \times M \times I)$.
Since $K_{\max}|_{t=0} = 1_\Delta$ and $K_{\max}$ is invertible as a sheaf kernel, to check the pullback/pushout property we claim that $K'_{\max} \coloneqq \mathrm{Cofib}(K(\Phi) \leftarrow K_{\min} \rightarrow K(\Psi))$ satisfies the same conditions as $K_{\max}$ and is thus equivalent to $K_{\max}$. The initial condition is easy to check, so it suffices to check the singular support condition, for which we make use of the fact that
$$K'_{\max} = \clmi{i \rightarrow \infty}\,\mathrm{Cofib}(K(\Phi) \leftarrow K(\Phi_i) \rightarrow K(\Psi)).$$
Let $\Omega_{\varphi} = \{(x, \xi) \mid H_\varphi(x, \xi) \geq H_\psi(x, \xi)\}$ and $\Omega_{\psi} = \{(x, \xi) \mid H_\psi(x, \xi) \geq H_\varphi(x, \xi)\}$.
The restriction $\Sh(M) \rightarrow \msh^{\mathrm{pre}}(\Omega_\varphi)$ preserves limits, colimits, and microsupports, so for the purpose of computing
$\ms(K'_{\max}) \cap \Omega_{\varphi} \times T^*M \times T^*I$ it is enough to compute the diagram there.
However, by picking the $H_i$ more carefully, we may assume that $H_i = H_\psi$ on a family of increasing exhausting open subsets of $\Omega_\varphi$ and $H_i = H_\varphi$ on a family of increasing exhausting open subsets of $\Omega_\psi$. Thus, the restriction of $K'_{\max}$ to the interior $\Omega_{\varphi}^\circ \times T^*M \times T^*I$ is computed by
$$\mathrm{Cofib}(K(\Phi) \leftarrow K(\Psi) \xrightarrow{=} K(\Psi)).$$
A similar observation holds on the interior $\Omega_{\psi}^\circ \times T^*M \times T^*I$. Thus
$$\ms(K'_{\max}) \cap \Omega_{\varphi}^\circ \times T^*M \times T^*I \subseteq \Lambda_\Phi, \; \ms(K'_{\max}) \cap \Omega_{\psi}^\circ \times T^*M \times T^*I \subseteq \Lambda_\Psi.$$
Finally, consider the closed subset $Z = \dot{T}^*M \backslash (\Omega_\varphi^\circ \cup \Omega_\psi^\circ)$. We know that $\ms(K'_{\max}) \subseteq \ms(K(\Phi)) \cup \ms(K(\Psi)) \cup \ms(K_{\min})$. Hence
$$\ms( K_{\max}' ) \cap Z \times T^*M \times T^*I \subseteq \bigcap_{N \in \NN} \overline{ \bigcup_{i \geq N}\Lambda_{\Phi_i}} \cap Z \times T^*M \times T^*I .$$
However, since $H_\varphi = H_\psi$ on $Z$, we can choose $H'_i \rightarrow \max(H_\varphi, H_\psi)$ in the $C^0$-norm as $i \rightarrow \infty$ such that moreover $H'_i = H_i$ over smaller and smaller neighborhoods of $Z$. Therefore the above singular support estimate also holds for $K_{\max}$, i.e.
$$\ms( K_{\max} ) \cap Z \times T^*M \times T^*I \subseteq \bigcap_{N \in \NN} \overline{ \bigcup_{i \geq N}\Lambda_{\Phi_i}} \cap Z \times T^*M \times T^*I ,$$
which concludes the proof that the diagram is a pullback/pushout square.
\end{proof}
\begin{remark}
The sequence of non-negative Hamiltonians defines a Cauchy sequence of sheaf kernels in the category of sheaves with respect to the $C^0$-norm or Hofer norm of the Hamiltonians (more precisely, with respect to the interleaving distance on the sheaf category). However, we do not know whether the sequence of Hamiltonian diffeomorphisms converges to a homeomorphism.
\end{remark}
\subsection{Local doubling construction and gluing}\label{sec:doubling-local}
Let us construct the doubling functor in this section. Our strategy is to define the doubling $w_\Lambda$ locally and then glue the local pieces together. Therefore, we first construct the local model of $w_\Lambda$.
Consider ${F} \in \msh_\Lambda(\Lambda)$. Then by Corollary \ref{lem:refine-cutoff} there exist an open covering $\{U_\alpha\}_{\alpha\in I}$ of $M$ and ${F}_{\alpha} \in \Sh_{\Lambda \cap \Omega_\alpha}(U_\alpha)$ such that
$$m_{\Lambda \cap \Omega_\alpha}({F}_{\alpha}) = {F}|_{\Lambda \cap \Omega_\alpha} \in \msh_\Lambda(\Lambda \cap S^{*}U_\alpha).$$
From Section \ref{sec:ad-micro}, we have seen that the adjoints of microlocalization can be characterized by
\begin{align*}
m_\Lambda^l(F) &= \wrap_\Lambda^+ \clmi{\alpha \in I} \, j_{\alpha !}\mathrm{Fib}(F_\alpha \rightarrow \wrap_\alpha^+F_\alpha), \\
m_\Lambda^r(F) &= \wrap_\Lambda^- \lmi{\alpha \in I} \, j_{\alpha *}\mathrm{Cofib}(\wrap_\alpha^- F_\alpha \rightarrow F_\alpha).
\end{align*}
Heuristically, one may interpret $\mathrm{Fib}(F_\alpha \rightarrow \wrap_\alpha^+F_\alpha)$ as a positive pushoff and $\mathrm{Cofib}(\wrap_\alpha^- F_\alpha \rightarrow F_\alpha)$ as a negative pushoff:
\begin{align*}
\mathrm{Fib}(F_\alpha \rightarrow \wrap_\alpha^+F_\alpha) &``="\, \wrap_{\Lambda \cap \Omega_\alpha}^+ \mathrm{Fib}(T_{-\epsilon}(F_\alpha) \rightarrow T_\epsilon(F_\alpha)),\\
\mathrm{Cofib}(\wrap_\alpha^- F_\alpha \rightarrow F_\alpha) &``="\, \wrap_{\Lambda \cap \Omega_\alpha}^- \mathrm{Cofib}(T_{-\epsilon}(F_\alpha) \rightarrow T_\epsilon(F_\alpha)).
\end{align*}
However, the singular support of $\wrap_{\Lambda \cap \Omega_\alpha}^+ F$ (resp.~$\wrap_{\Lambda \cap \Omega_\alpha}^- F$) depends heavily on the open subsets $U_\alpha$ and $\Omega_\alpha$. Therefore, the idea is to push them off further to $\Lambda \cup T_{2\epsilon}(\Lambda)$ (resp.~$\Lambda \cup T_{-2\epsilon}(\Lambda)$) to get a functor independent of the choices of open subsets.
Consider the (subanalytic) Legendrian $\Lambda \cup T_{2\epsilon}(\Lambda) \subseteq S^*M$ and the microlocalization functor along a single connected component
$$m_{\Lambda}: \Sh_{\Lambda \cup T_{2\epsilon}(\Lambda)}(M) \rightarrow \msh_\Lambda(\Lambda).$$
Then, by the previous discussion, we know that the left adjoint can be written as
\begin{align*}
w_{\Lambda}(F) = \wrap_{\Lambda \cup T_{2\epsilon}(\Lambda)}^+ \clmi{\alpha \in I} \, j_{\alpha !}\mathrm{Fib}(F_\alpha \rightarrow \wrap_\alpha^+F_\alpha),
\end{align*}
which, heuristically, should be equivalent to
\begin{align*}
\wrap_{\Lambda \cup T_{2\epsilon}(\Lambda)}^+ \mathrm{Fib}(F_\alpha \rightarrow \wrap_\alpha^+F_\alpha) ``="\, T_\epsilon \mathrm{Fib}(T_{-\epsilon}(F_\alpha) \rightarrow T_\epsilon(F_\alpha)).
\end{align*}
This is the doubling functor that we would like to investigate. However, there seem to be some difficulties in showing full faithfulness of the doubling functor using this local-to-global formula. Therefore, we will use a different approach.
We will realize this heuristic by using one single Reeb flow instead of considering the colimit of all Reeb flows, and hence write down the doubling functor explicitly on each local chart. Using the results in Section \ref{sec:sato-sab}, we will show that it defines a fully faithful functor.
To be more precise, we would like to construct a sheaf $w_\Lambda(F)$ which, locally on an open subset $U_\alpha$, will be of the form $$w_\Lambda({F})_{U_\alpha} = \mathrm{Cofib}(T_{-\epsilon}({F}_\alpha) \rightarrow T_\epsilon({F}_\alpha)).$$
However, there is a technical issue: under the Reeb flow $T_t$ on $S^*M$, it is not even true that $T_{\pm \epsilon}(\Lambda \cap S^*U_\alpha) \subseteq S^*U_\alpha$, and hence the above formula does not seem to be meaningful in the first place.
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{Doubling_cutoff.pdf}
\caption{We consider the open subset $U$ (in black), the Legendrian $\Lambda$ (in blue), and the Reeb flow given by the geodesic flow, so that $T_{\pm \epsilon}(\Lambda \cap S^*U_\alpha) \not\subseteq S^*U_\alpha$. Let $F_U$ be the sheaf as in the 2nd figure. Then $T_{\pm \epsilon}(j_{U *}F_U)$ are illustrated in the 3rd and 4th figures. The supports of the sheaves are contained in $V^{-1}$, while the singular support coming from $T_{\pm \epsilon}(N^*_{in/out}U)$ stays outside $V^1$. Finally, $w_\Lambda(F)_V$ is shown in the 5th figure.}\label{fig:doubling-cutoff}
\end{figure}
Our solution to the above problem is as follows. We will need to push forward the sheaves on $U$ to sheaves on $M$ so as to apply the Reeb flow on the ambient manifold $S^*M$. The singular support of the resulting sheaves consists of both the Reeb pushoff of the Legendrian, $T_{\pm \epsilon}(\Lambda \cap S^*U_\alpha)$, and the Reeb pushoff of the unit conormal bundle of the boundary, $T_{\pm \epsilon}(N^*_{in/out} U_\alpha)$.
To block off the effect coming from the second part (which may enter $S^*U_\alpha$ under the Reeb flow), we will need to restrict the sheaf to a smaller neighbourhood $V_\alpha \subseteq U_\alpha$. This accounts for the following somewhat involved definition.
From now on, when considering an open covering $\mathscr{U} = \{U_\alpha\}_{\alpha \in I}$, we will always write $U_{\alpha_1\cdots\alpha_k} = \bigcap_{i=1}^k U_{\alpha_i}$ for simplicity.
\begin{definition}\label{def:goodcover}
Let $\mathscr{U} = \{U_\alpha\}_{\alpha \in I}$ be an open covering of $M$, $\Lambda \subseteq S^{*}M$ a closed subanalytic Legendrian and $\Lambda'$ a generic Hamiltonian perturbation of $\Lambda$. Then $\mathscr{U}$ is a good covering with respect to $\Lambda \subset S^{*}M$ if
\begin{enumerate}
\item $U_{\alpha_1 \cdots \alpha_k}\,(\alpha_1, \cdots, \alpha_k \in I)$ are contractible;
\item $\partial U_\alpha\,(\alpha \in I)$ are piecewise smooth with transverse intersections;
\item $N^*_{out}{U_{\alpha_1\cdots\alpha_k}} \cap \Lambda' = \varnothing\,(\alpha_1, \cdots, \alpha_k \in I)$.
\end{enumerate}
Given a good covering $\mathscr{U}$ with respect to $\Lambda$, a good family of refinements with respect to $\Lambda$ is a family of open coverings $\mathscr{V}^t = \{V_\alpha^t\}_{\alpha \in I}$, $t \in [-1, 1]$, with $\mathscr{V}^0 = \mathscr{U}$ such that
\begin{enumerate}
\item $V_\alpha^{t'} \subseteq V_\alpha^t\,(\alpha \in I)$ for any $-1 \leq t \leq t' \leq 1$;
\item $V_{\alpha_1 \cdots \alpha_k}^t\,(\alpha_1, \cdots, \alpha_k \in I)$ are contractible for any $-1 \leq t \leq 1$;
\item $\partial V_\alpha^t\,(\alpha \in I)$ are piecewise smooth with transverse intersections for any $-1 \leq t \leq 1$;
\item there exist some Riemannian metric $g$ on $M$ and some $\epsilon > 0$ so that
$$\mathrm{dist}_g(\partial U_\alpha, \partial V_\alpha^{\pm 1}) \geq \epsilon;$$
\item $N^*_{out}{V_{\alpha_1\cdots\alpha_k}^t} \cap \Lambda' = \varnothing\,(\alpha_1, \cdots, \alpha_k \in I)$ for any $-1 \leq t \leq 1$.
\end{enumerate}
For simplicity, we will call $\mathscr{V} = \mathscr{V}^1$ a good refinement of $\mathscr{U}$.
\end{definition}
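To illustrate condition (3) in a toy case (purely for orientation, not used later), take $\Lambda = S^*_{x_0}M$, the cosphere fiber at a point $x_0$, for which the condition can be checked with $\Lambda' = \Lambda$ itself. Since $N^*_{out}{U_{\alpha_1\cdots\alpha_k}}$ lies over $\partial U_{\alpha_1\cdots\alpha_k}$, one has
$$N^*_{out}{U_{\alpha_1\cdots\alpha_k}} \cap S^*_{x_0}M \neq \varnothing \iff x_0 \in \partial U_{\alpha_1\cdots\alpha_k},$$
so condition (3) simply asks that no boundary of a finite intersection passes through $x_0$, which a generic choice of balls arranges.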
\begin{remark}
This definition is in the same spirit as \cite{Guisurvey}*{Definition 11.4.1}. The reason that we also need to choose a family of good refinements instead of only a good covering is that here we need to consider an arbitrary Reeb flow, while Guillermou considered only the vertical translation in $S^*_{\tau>0}(M \times \mathbb{R})$ and chose only open subsets of the form $U_i \times I_i \subset M \times \mathbb{R}$. Here we add the contractibility assumption simply to simplify the discussion when constructing good refinements.
\end{remark}
\begin{lemma}
For any open covering $\mathscr{U}_0$ of $M$ and any closed subanalytic Legendrian $\Lambda \subseteq S^{*}M$, there exists a refinement $\mathscr{U}$ with respect to $\Lambda$ such that $\mathscr{U}$ admits a good family of refinements $\mathscr{V}^t$.
\end{lemma}
\begin{proof}
The existence of a refinement $\mathscr{U}$ of $\mathscr{U}_0$ satisfying (1)~\&~(2) follows from the convex neighbourhood theorem in Riemannian geometry. The reason that
$$N^*_{out}{U_{\alpha_1\cdots\alpha_k}} \cap \Lambda' = \varnothing, \,\,\alpha_1, \cdots, \alpha_k \in I,$$
holds for a generic perturbation $\Lambda'$ of $\Lambda$ is that $\bigcup_{\alpha_1, \cdots, \alpha_k \in I}N^*_{out}{U_{\alpha_1\cdots\alpha_k}}$ is also a subanalytic Legendrian and hence the sum of dimensions is less than $2\dim M - 1$.
The existence of a family of refinements $\mathscr{V}^t$ of $\mathscr{U}$ satisfying (1)--(4) again follows from the convex neighbourhood theorem. The reason that
$$N^*_{out}{V_{\alpha_1\cdots\alpha_k}^t} \cap \Lambda' = \varnothing, \,\,\alpha_1, \cdots, \alpha_k \in I, \, -1 \leq t \leq 1,$$
holds for a generic perturbation $\Lambda'$ is that we can choose $\mathscr{V}^t$ such that $\bigcup_{\alpha_1, \cdots, \alpha_k \in I}N^*_{out}{V_{\alpha_1\cdots\alpha_k}^t}$ are small perturbations of $\bigcup_{\alpha_1, \cdots, \alpha_k \in I}N^*_{out}{U_{\alpha_1\cdots\alpha_k}}$. This completes the proof.
\end{proof}
\begin{remark}
From now on, without loss of generality (by Theorem \ref{cor:GKS} and Theorem \ref{cor:cont-trans}), we will always assume that $\Lambda = \Lambda'$. In other words, we assume that $N^*_{out}{U_{\alpha_1\cdots\alpha_k}} \cap \Lambda = \varnothing$ and $N^*_{out}{V_{\alpha_1\cdots\alpha_k}^t} \cap \Lambda = \varnothing$ for any $-1 \leq t \leq 1$.
\end{remark}
The main microlocal properties of families of good refinements of coverings that we are going to use are given as follows.
\begin{lemma}\label{lem:lower*=lower!}
Let $\mathscr{U}$ be a good covering of $M$ with a good family of refinements $\mathscr{V}^t$ with respect to $\Lambda$. Write $j_\alpha: U_\alpha \hookrightarrow M$. Then given ${F} \in \Sh(U_\alpha)$, for $\epsilon > 0$ sufficiently small, we have
$$T_{\pm \epsilon}(j_{\alpha !}{F})|_{V_\alpha} \xrightarrow{\sim} T_{\pm \epsilon}(j_{\alpha *}{F})|_{V_\alpha}.$$
\end{lemma}
\begin{proof}
Consider the mapping cone
$$\mathrm{Cofib}(T_{\pm \epsilon}(j_{\alpha !}{F}) \rightarrow T_{\pm \epsilon}(j_{\alpha *}{F})) \simeq T_{\pm \epsilon} \mathrm{Cofib}(j_{\alpha !}{F} \rightarrow j_{\alpha *}{F}).$$
Then by Proposition \ref{prop:noncharacteristic-ms-es}, $\ms^\infty(\mathrm{Cofib}(j_{\alpha !}{F} \rightarrow j_{\alpha *}{F})) \subseteq S^*M|_{\partial U_\alpha}$, since $j_{\alpha !}{F} \simeq j_{\alpha *}{F}$ in the interior of $T^*U_\alpha$. By Corollary \ref{cor:GKS} we know that
$$\ms^\infty(T_{\pm \epsilon}\mathrm{Cofib}(j_{\alpha !}{F} \rightarrow j_{\alpha *}{F})) \subseteq T_{\pm \epsilon}(S^*M|_{\partial U_\alpha}).$$
Since $T_{\pm \epsilon}(S^*M|_{\partial U_\alpha}) \cap S^*V_\alpha = \varnothing$ for $\epsilon > 0$ sufficiently small, we know that in particular
$$\ms^\infty(T_{\pm \epsilon}\mathrm{Cofib}(j_{\alpha !}{F} \rightarrow j_{\alpha *}{F})) \cap S^*V_\alpha = \varnothing.$$
Moreover, since $j_{\alpha !}{F} \simeq j_{\alpha *}{F}$ in $T^*V_\alpha$, by the above singular support estimate we can conclude that $T_{\pm \epsilon}(j_{\alpha !}{F}) \simeq T_{\pm \epsilon}(j_{\alpha *}{F})$ in $T^*V_\alpha$, which proves the isomorphism as claimed.
\end{proof}
\begin{lemma}\label{lem:hom-cutoff}
Let $\mathscr{U}$ be a good covering of $M$ with a good family of refinements $\mathscr{V}^t$ with respect to $\Lambda$. Write $j_\alpha: U_\alpha \hookrightarrow M$. Then given ${F}, {G} \in \Sh_{\Lambda \cap S^{*}U_\alpha}(U_\alpha)$, for $\epsilon > 0$ sufficiently small, we have
$$\Hom(T_{\pm \epsilon}(j_{\alpha !}{F}), T_{\pm \epsilon}(j_{\alpha *}{G})) \xrightarrow{\sim} \Hom(T_{\pm \epsilon}(j_{\alpha !}{F})|_{V_\alpha}, T_{\pm \epsilon}(j_{\alpha *}{G})|_{V_\alpha}).$$
\end{lemma}
\begin{proof}
Consider the sheaf $\mathscr{H}om(T_{\pm \epsilon}(j_{\alpha !}{F}), T_{\pm \epsilon}(j_{\alpha *}{G}))$. We know by Proposition \ref{prop:mses}~(8) that
$$\ms(\mathscr{H}om(T_{\pm \epsilon}(j_{\alpha !}{F}), T_{\pm \epsilon}(j_{\alpha *}{G}))) \subseteq T_{\pm \epsilon}(-\ms(j_{\alpha !}{F})) + T_{\pm \epsilon}(\ms(j_{\alpha *}{G})).$$
Consider the family of good refinements $V_\alpha^t$, $t \in [-1, 1]$, where $V_\alpha^0 = U_\alpha$ and $V_\alpha^1 = V_\alpha$. We know by Proposition \ref{prop:noncharacteristic-ms-es} that
$$\ms^\infty(j_{\alpha !}{F}) \subseteq \Lambda \,\widehat +\, N^*_{out}{U_\alpha} , \;\; \ms^\infty(j_{\alpha *}{F}) \subseteq \Lambda \,\widehat +\, N^*_{in}{U_\alpha},$$
and similarly for ${G}$.
Therefore, by the condition on the family of good refinements $V_\alpha^t$,
$$N^{*}_{out}{V_\alpha^t} \cap (-\ms^\infty(j_{\alpha !}{F})) = N^{*}_{out}{V_\alpha^t} \cap \ms^\infty(j_{\alpha *}{F}) = \varnothing.$$
Indeed, for $(x, \xi) \in N^{*}_{out}{V_\alpha^t}$, we have $(x, \xi) \in N^*_{in}{U_\alpha} \,\widehat +\, (\pm \Lambda)$ only if there exist $(x_n, -\xi_n) \in N^*_{in}{U_\alpha}$ and $(y_n, \eta_n) \in \pm \Lambda$ such that
$x_n, y_n \rightarrow x, \, -\xi_n + \eta_n \rightarrow \xi, \, |x_n - y_n||\xi_n| \rightarrow 0.$
However, the fact that $\Lambda \cap N^*_{in/out}V_\alpha^t = \varnothing$ immediately implies that $\eta_n \rightarrow 0$. This forces $-\xi_n \rightarrow \xi$, which implies $\xi = 0$, so in the unit cotangent bundle the intersections are empty.
Hence for $\epsilon > 0$ sufficiently small, we have $\mathrm{supp}(T_{\pm \epsilon}(j_{\alpha !}{F}))$, $\mathrm{supp}(T_{\pm \epsilon}(j_{\alpha *}{G})) \subset V_\alpha^{-1}$, and
$$N^{*}_{out}{V_\alpha^t} \cap T_{\pm \epsilon}(-\ms^\infty(j_{\alpha !}{F})) = N^{*}_{out}{V_\alpha^t} \cap T_{\pm \epsilon}(\ms^\infty(j_{\alpha *}{G})) = \varnothing.$$
Therefore, by the microlocal Morse lemma Proposition \ref{prop:morselemma}, restricting from $V_\alpha^{-1}$ to $V_\alpha^1$ we have
$$\Gamma(M, \mathscr{H}om(T_{\pm \epsilon}(j_{\alpha !}{F}), T_{\pm \epsilon}(j_{\alpha *}{G}))) \simeq \Gamma(V_\alpha, \mathscr{H}om(T_{\pm \epsilon}(j_{\alpha !}{F}), T_{\pm \epsilon}(j_{\alpha *}{G}))),$$
which shows the isomorphism.
\end{proof}
\begin{lemma}\label{lem:muhom-cutoff}
Let $\mathscr{U}$ be a good covering of $M$ with a good family of refinements $\mathscr{V}^t$ with respect to $\Lambda$. Write $j_\alpha: U_\alpha \hookrightarrow M$. Then given ${F}, {G} \in \Sh_{\Lambda \cap S^{*}U_\alpha}(U_\alpha)$, for $\epsilon > 0$ sufficiently small, we have
$$\Gamma(S^{*}V_\alpha^{-1}, \mhom(j_{\alpha !}{F}, j_{\alpha *}{G})) \xrightarrow{\sim} \Gamma(S^{*}V_\alpha^1, \mhom({F}, {G})).$$
\end{lemma}
\begin{proof}
Consider the good family of refinements $V_\alpha^t$ where $V_\alpha^0 = U_\alpha$ and $V_\alpha^1 = V_\alpha$. Note that by Proposition \ref{prop:noncharacteristic-ms-es} we have
$$\ms^\infty(j_{\alpha !}{F}) \subset \Lambda \,\widehat +\, N^*_{out}{U_\alpha}, \;\; \ms^\infty(j_{\alpha *}{G}) \subset \Lambda \,\widehat +\, N^*_{in}{U_\alpha},$$
and then by Proposition \ref{prop:ss-muhom} \cite{KS}*{Corollary 6.4.3} one can deduce that
$$\ms^\infty(\mhom(j_{\alpha !}{F}, j_{\alpha *}{G})) \subset C(\Lambda \,\widehat +\, N^*_{out}{U_\alpha}, \Lambda \,\widehat +\, N^*_{in}{U_\alpha}) \subseteq S^*(S^{*}M).$$
Then by Proposition \ref{prop:mses}~(4) and Remark \ref{rem:ss-muhom} we know that
$$\ms^\infty(\dot{\pi}_*\mhom(j_{\alpha !}{F}, j_{\alpha *}{G})) \subset -(\Lambda \,\widehat +\, N^*_{out}{U_\alpha}) \,\widehat+_\infty\, (\Lambda \,\widehat +\, N^*_{in}{U_\alpha}).$$
We know that $(x, \xi) \in (-\Lambda \,\widehat +\, N^*_{in}{U_\alpha}) \,\widehat+\, (\Lambda \,\widehat +\, N^*_{in}{U_\alpha})$ if and only if there exist $(x_n, -\xi_n) \in -\Lambda \,\widehat +\, N^*_{in}{U_\alpha}$ and $(y_n, \eta_n) \in \Lambda \,\widehat +\, N^*_{in}{U_\alpha}$ such that
$x_n, y_n \rightarrow x, \; -\xi_n + \eta_n \rightarrow \xi, \; |x_n - y_n| |\xi_n| \rightarrow 0$. When we consider $x \in \partial U_\alpha$, we know that $(x, \xi) \in \pm \Lambda \,\widehat+\, N^*_{in}U_\alpha$ and hence $(x, \xi) \notin N^*_{out}V_\alpha^t$. When we consider $x \in U_\alpha$, we know that $(x, \xi) \in \Lambda$ and hence $(x, \xi) \notin N^*_{out}V_\alpha^t$, because $\pm \Lambda \cap N^*_{out}V_\alpha^t = \varnothing$. These two facts imply that in the unit cotangent bundle the following intersection is empty:
$$N^*_{out}V_\alpha^t \cap \ms^\infty(\dot{\pi}_*\mhom(j_{\alpha !}{F}, j_{\alpha *}{G})) = \varnothing.$$
Therefore, by the microlocal Morse lemma Proposition \ref{prop:morselemma}, restricting from $V_\alpha^{-1}$ to $V_\alpha^1$, we can show that the isomorphism holds.
\end{proof}
Given ${F} \in \msh_\Lambda(\Lambda)$, by Corollary \ref{lem:refine-cutoff} there exist an open covering $\mathscr{U} = \{U_\alpha\}_{\alpha \in I}$ and a collection of sheaves $\{{F}_\alpha\}_{\alpha \in I}$, where ${F}_\alpha \in \Sh_\Lambda(U_\alpha)$, such that ${F}|_{\Lambda \cap S^{*}U_\alpha} = m_\Lambda({F}_\alpha)$. Write $j_\alpha: U_\alpha \hookrightarrow M$. Now choose a good family of refinements $\mathscr{V}^t\,(t \in [-1, 1])$ of $\mathscr{U}$, and define (by Lemma \ref{lem:lower*=lower!})
\[\begin{split}
w_\Lambda({F})_{V_\alpha} & = \mathrm{Cofib}(T_{-\epsilon}(j_{\alpha *}{F}_\alpha)|_{V_\alpha} \rightarrow T_\epsilon(j_{\alpha *}{F}_\alpha)|_{V_\alpha}) \\
& = \mathrm{Cofib}(T_{-\epsilon}(j_{\alpha !}{F}_\alpha)|_{V_\alpha} \rightarrow T_\epsilon(j_{\alpha !}{F}_\alpha)|_{V_\alpha}).
\end{split}\]
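For later use in the gluing, we record a routine singular support estimate for this local model; it is a sketch combining Corollary \ref{cor:GKS} with the non-characteristic conditions of the good refinement, as in the proofs above. Since $\ms^\infty(j_{\alpha *}F_\alpha) \subseteq \Lambda \,\widehat +\, N^*_{in}U_\alpha \subseteq \Lambda \cup S^*M|_{\partial U_\alpha}$, we get
$$\ms^\infty\big(w_\Lambda({F})_{V_\alpha}\big) \subseteq \big(T_{-\epsilon}(\Lambda) \cup T_\epsilon(\Lambda) \cup T_{\pm\epsilon}(S^*M|_{\partial U_\alpha})\big) \cap S^*V_\alpha = \big(T_{-\epsilon}(\Lambda) \cup T_\epsilon(\Lambda)\big) \cap S^*V_\alpha$$
for $\epsilon > 0$ sufficiently small, because $T_{\pm\epsilon}(S^*M|_{\partial U_\alpha}) \cap S^*V_\alpha = \varnothing$; this is why the glued object below lands in $\Sh_{T_{-\epsilon}(\Lambda) \cup T_\epsilon(\Lambda)}(M)$.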
\begin{proposition}\label{prop:doubling_ff}
Let $\Lambda \subseteq S^{*}M$ be a compact subanalytic Legendrian, $\mathscr{U}$ a good open covering and $\mathscr{V}^t$ a good family of refinements with respect to $\Lambda$. Then for $\epsilon > 0$ sufficiently small, there is a natural isomorphism
$$\Hom(w_\Lambda({F})_{V_\alpha}, w_\Lambda({G})_{V_\alpha}) \xrightarrow{\sim} \Gamma(S^{*}V_\alpha, \mhom({F}_{V_\alpha}, {G}_{V_\alpha})).$$
\end{proposition}
\begin{proof}
Writing down the definition of $w_\Lambda({F})_{V_\alpha}$ and $w_\Lambda({G})_{V_\alpha}$, we have
\[\begin{split}
\Hom(w_\Lambda({F})&{}_{V_\alpha}, w_\Lambda({G})_{V_\alpha}) \\
\simeq \mathrm{Cofib} \big(&\Hom(\mathrm{Cofib}(T_{-\epsilon}(j_{\alpha !}{F}_\alpha)|_{V_\alpha} \rightarrow T_\epsilon(j_{\alpha !}{F}_\alpha)|_{V_\alpha}), T_{-\epsilon}(j_{\alpha *}{G}_\alpha)|_{V_\alpha}) \\
& \rightarrow \Hom(\mathrm{Cofib}(T_{-\epsilon}(j_{\alpha !}{F}_\alpha)|_{V_\alpha} \rightarrow T_\epsilon(j_{\alpha !}{F}_\alpha)|_{V_\alpha}), T_\epsilon(j_{\alpha *}{G}_\alpha)|_{V_\alpha})\big).
\end{split}\]
For the first term, we claim that
\[\begin{split}
\Hom(&\mathrm{Cofib}(T_{-\epsilon}(j_{\alpha !}{F}_\alpha)|_{V_\alpha} \rightarrow T_\epsilon(j_{\alpha !}{F}_\alpha)|_{V_\alpha}), T_{-\epsilon}(j_{\alpha *}{G}_\alpha)|_{V_\alpha}) \\
&\simeq \Gamma(S^{*}V_\alpha, \mhom({F}_{V_\alpha}, {G}_{V_\alpha})).
\end{split}\]
To prove this, we apply the Sato-Sabloff fiber sequence Corollary \ref{cor:sato-sab} and get
\[\begin{split}
\Hom(T_\epsilon(j_{\alpha !}{F}_\alpha), T_{-\epsilon}(j_{\alpha *}{G}_\alpha)) & \rightarrow \Hom(T_{-\epsilon}(j_{\alpha !}{F}_\alpha), T_{-\epsilon}(j_{\alpha *}{G}_\alpha)) \\
& \rightarrow \Gamma(T^{*,\infty}V_\alpha^{-1}, \mhom(j_{\alpha !}{F}_\alpha, j_{\alpha *}{G}_\alpha)).
\end{split}\]
Given the non-characteristic assumption on $\mathscr{V}^t$ with respect to $\Lambda$, we can apply Lemma \ref{lem:hom-cutoff} to restrict the corresponding $\mathscr{H}om(-, -)$ sheaves from $V_\alpha^{-1}$ to $V_\alpha^1$, and get quasi-isomorphisms
\[\begin{split}
\Hom(T_\epsilon(j_{\alpha !}{F}_\alpha), T_{-\epsilon}(j_{\alpha *}{G}_\alpha)) &\simeq \Hom(T_\epsilon(j_{\alpha !}{F}_\alpha)|_{V_\alpha}, T_{-\epsilon}(j_{\alpha *}{G}_\alpha)|_{V_\alpha}), \\
\Hom(T_\epsilon(j_{\alpha !}{F}_\alpha), T_{\epsilon}(j_{\alpha *}{G}_\alpha)) &\simeq \Hom(T_\epsilon(j_{\alpha !}{F}_\alpha)|_{V_\alpha}, T_{\epsilon}(j_{\alpha *}{G}_\alpha)|_{V_\alpha}).
\end{split}\]
On the other hand, we can also apply Lemma \ref{lem:muhom-cutoff} to restrict the corresponding $\mhom(-, -)$ sheaves from $S^{*}V_\alpha^{-1}$ to $S^{*}V_\alpha^1$, and get
$$\Gamma(S^{*}V_\alpha^{-1}, \mhom(j_{\alpha !}{F}_\alpha, j_{\alpha *}{G}_\alpha)) \simeq \Gamma(S^{*}V_\alpha^1, \mhom({F}_\alpha, {G}_\alpha)).$$
Since the restriction maps commute with all the maps in the Sato-Sabloff fiber sequence, this proves our first claim. For the second term, we claim that there is a quasi-isomorphism
$$\Hom(\mathrm{Cofib}(T_{-\epsilon}(j_{\alpha !}{F}_\alpha)|_{V_\alpha} \rightarrow T_\epsilon(j_{\alpha !}{F}_\alpha)|_{V_\alpha}), T_\epsilon(j_{\alpha *}{G}_\alpha)|_{V_\alpha}) \simeq 0.$$
Indeed, by Proposition \ref{prop:hom_w_pm} we know that
$$\Hom(T_\epsilon(j_{\alpha !}{F}_\alpha), T_\epsilon(j_{\alpha *}{G}_\alpha)) \simeq \Hom(T_{-\epsilon}(j_{\alpha !}{F}_\alpha), T_\epsilon(j_{\alpha *}{G}_\alpha)) \simeq \Hom(j_{\alpha !}{F}_\alpha, j_{\alpha *}{G}_\alpha),$$
and the isomorphism is witnessed by precomposition with the canonical continuation map $T_{-\epsilon}(j_{\alpha !}{F}_\alpha) \rightarrow T_\epsilon(j_{\alpha !}{F}_\alpha)$. Again by the non-characteristic assumption on $\mathscr{U}, \mathscr{V}$ with respect to $\Lambda$, we can apply Lemma \ref{lem:hom-cutoff} to restrict the corresponding $\mathscr{H}om(-, -)$ sheaves from $V_\alpha^{-1}$ to $V_\alpha^1$, and the quasi-isomorphisms still hold. This proves our second claim.
\end{proof}
\begin{corollary}\label{cor:doubling_ff}
Let $\Lambda \subseteq S^{*}M$ be a closed subanalytic Legendrian, $\mathscr{U}$ a good open covering and $\mathscr{V}^t$ a good family of refinements with respect to $\Lambda$. Then for $\epsilon > 0$ sufficiently small,
$$\Hom(w_\Lambda({F})_{V_{\alpha_i}}|_{V_{\alpha_1 \cdots \alpha_k}}, w_\Lambda({G})_{V_{\alpha_j}}|_{V_{\alpha_1 \cdots \alpha_k}}) \simeq \Gamma(S^{*}V_{\alpha_1 \cdots \alpha_k}, \mhom({F}_{\alpha_i}|_{V_{\alpha_1 \cdots \alpha_k}}, {G}_{\alpha_j}|_{V_{\alpha_1 \cdots \alpha_k}})).$$
\end{corollary}
\begin{proof}
Write $j_{\alpha_1 \cdots \alpha_k}: U_{\alpha_1 \cdots \alpha_k} \hookrightarrow M$. Note that
\[\begin{split}
w_\Lambda({F})_{\alpha_i}|_{V_{\alpha_1 \cdots \alpha_k}} & = \mathrm{Cofib}(T_{-\epsilon}(j_{\alpha_i *}{F}_{\alpha_i})|_{V_{\alpha_i}} \rightarrow T_\epsilon(j_{\alpha_i *}{F}_{\alpha_i})|_{V_{\alpha_i}})|_{V_{\alpha_1 \cdots \alpha_k}} \\
& \simeq \mathrm{Cofib}(T_{-\epsilon}(j_{\alpha_1 \cdots \alpha_k *}{F}_{\alpha_i})|_{V_{\alpha_1 \cdots \alpha_k}} \rightarrow T_\epsilon(j_{\alpha_1 \cdots \alpha_k *}{F}_{\alpha_i})|_{V_{\alpha_1 \cdots \alpha_k}}).
\end{split}\]
Then the corollary immediately follows from Proposition \ref{prop:doubling_ff}.
\end{proof}
By Proposition \ref{prop:doubling_ff} and Corollary \ref{cor:doubling_ff}, we can conclude that the family of sheaves $(w_\Lambda({F})_\alpha)_{\alpha \in I}$ on the refined open cover $\mathscr{V}$ can indeed be glued to a global object.
\begin{proof}[Proof of Theorem \ref{thm:doubling}]
Consider the functor $w_\Lambda: \msh_\Lambda(\Lambda) \rightarrow \Sh_{T_{-\epsilon}(\Lambda) \cup T_\epsilon(\Lambda)}(M)$. We apply Proposition \ref{prop:doubling_ff} and Corollary \ref{cor:doubling_ff} to show that $w_\Lambda$ is fully faithful, i.e.
$$\Hom(w_\Lambda({F}), w_\Lambda({G})) \xrightarrow{\sim} \Gamma(S^*M, \mhom({F}, {G})).$$
First of all, since $\Lambda \subset S^{*}M$ is compact, we may assume that there are only finitely many open subsets $U_\alpha \in \mathscr{U}$ such that $\pi(\Lambda) \cap U_\alpha \neq \varnothing$. Hence there exists a uniform $\epsilon > 0$ sufficiently small such that Proposition \ref{prop:doubling_ff} and Corollary \ref{cor:doubling_ff} hold for all $U_\alpha \in \mathscr{U}$. By Proposition \ref{prop:doubling_ff} and Corollary \ref{cor:doubling_ff}, and the fact that these quasi-isomorphisms commute with restriction maps, we have the following diagram
\[\xymatrix{
\bigoplus_{\alpha \in I}\Hom(w_\Lambda({F})_{V_\alpha}, w_\Lambda({G})_{V_\alpha}) \ar[r] \ar[d]^{\rotatebox{90}{$\sim$}} & \bigoplus_{\alpha, \beta \in I}\Hom(w_\Lambda({F})_{V_{\alpha \beta}}, w_\Lambda({G})_{V_{\alpha \beta}}) \ar@<-.5ex>[r] \ar@<.5ex>[r] \ar[d]^{\rotatebox{90}{$\sim$}} & \cdots \\
\bigoplus_{\alpha \in I}\Gamma(S^{*}V_\alpha, \mhom({F}, {G})) \ar[r] & \bigoplus_{\alpha, \beta \in I}\Gamma(S^{*}V_{\alpha \beta}, \mhom({F}, {G})) \ar@<-.5ex>[r] \ar@<.5ex>[r] & \cdots
}\]
Note that $\mathscr{V}$ is a good covering such that any finite intersection is contractible. Therefore, by taking the homotopy limit of the above diagram, we get the quasi-isomorphism of global sections
\[\begin{split}
\Hom(w_\Lambda({F}), w_\Lambda({G})) & \xrightarrow{\sim} \lmi{\alpha \in I}\Hom(w_\Lambda({F})_{V_\alpha}, w_\Lambda({G})_{V_\alpha}) \\
& \xrightarrow{\sim} \lmi{\alpha \in I}\, \Gamma(S^{*}V_\alpha, \mhom({F}, {G})) \xrightarrow{\sim} \Gamma(S^*M, \mhom({F}, {G})).
\end{split}\]
Finally, we need to check that the doubling functor can be defined for any $0 < \epsilon < c(\Lambda)/2$, where $c(\Lambda)$ is the length of the shortest Reeb chord on $\Lambda$. This is because when $0 < \epsilon < c(\Lambda)/2$, the Legendrians $T_{-\epsilon}(\Lambda) \cup T_\epsilon(\Lambda)$ for different choices of $\epsilon$ are related by Hamiltonian isotopies supported away from $\Lambda$. Indeed, for $0 < c < c' < c(\Lambda)/2$ we can choose $H: S^{*}M \rightarrow \mathbb{R}$ with compact support (since $\Lambda$ is compact) such that
$$H|_\Lambda = 0, \;\; H|_{\bigcup_{\epsilon \in [c, c']}T_\epsilon(\Lambda)} = 1, \;\; H|_{\bigcup_{\epsilon \in [c, c']}T_{-\epsilon}(\Lambda)} = -1.$$
Then the contact Hamiltonian flow is the integration of the corresponding compactly supported Hamiltonian vector field.
\end{proof}
\begin{remark}
The condition that $\Lambda$ is compact plays an important role in the proof. First, it ensures that there exists a uniform $\epsilon > 0$ such that the doubling functor is locally defined on all $V_\alpha \in \mathscr{V}$. Secondly, it ensures that there exists a compactly supported Hamiltonian isotopy relating the Legendrians $T_{-\epsilon}(\Lambda) \cup T_\epsilon(\Lambda)$ for different $0 < \epsilon < c(\Lambda)/2$ (otherwise, though we can similarly define the Hamiltonian function, it is unclear whether the Hamiltonian vector field is complete).
\end{remark}
Finally, we remark that the doubling construction immediately implies the following fiber sequence.
\begin{corollary}\label{cor:exact-tri}
Let $\Lambda \subseteq S^{*}M$ be a compact subanalytic Legendrian. Then there is a fiber sequence of functors
$$T_{-\epsilon} \rightarrow T_\epsilon \rightarrow w_\Lambda \circ m_\Lambda.$$
\end{corollary}
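On each chart this is just the defining cofiber sequence of the local model: for a local representative $F_\alpha$ of $m_\Lambda(F)$ as above, the definition reads
$$T_{-\epsilon}(j_{\alpha *}F_\alpha)|_{V_\alpha} \rightarrow T_\epsilon(j_{\alpha *}F_\alpha)|_{V_\alpha} \rightarrow w_\Lambda(m_\Lambda(F))_{V_\alpha},$$
and the corollary is the statement that these local sequences glue to a fiber sequence of functors on $\Sh_\Lambda(M)$ (a reformulation of the construction rather than an additional argument).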
\subsubsection{Remark on the relative doubling construction}\label{sec:doubling-complete}
We expect that there is also a more geometric approach to the construction of a relative doubling functor. Following \cite[Section 6.3]{NadShen}, we consider a locally closed subanalytic Legendrian $\Lambda \subset S^{*}M$ with contact collar $\partial \Lambda \times (0, 1) \hookrightarrow \Lambda$. Then we try to define
$$w_{(\Lambda, \partial \Lambda)}: \msh_\Lambda(\Lambda) \rightarrow \Sh_{(\Lambda, \partial \Lambda)_{\epsilon}^\prec}(M),$$
where $(\Lambda, \partial \Lambda)_{\epsilon}^\prec = \widetilde{T}_{-\epsilon}(\Lambda) \cup \widetilde{T}_\epsilon(\Lambda)$, and $\widetilde{T}_t$ is defined by a non-negative contact Hamiltonian $\widetilde{H}$ such that
$$\widetilde{H}|_{\bigcup_{t\in [-\epsilon, \epsilon]}T_t(\Lambda \backslash \partial \Lambda \times (0, 1))} = 1, \;\; \widetilde{H}|_{\partial \Lambda \times (0, 1/2)} = 0.$$
Suppose $\Lambda = \mathfrak{c}_F$ is the Lagrangian skeleton of a Weinstein hypersurface $F \subset S^{*}M$. For any exact Lagrangian $L \subset F$ with Legendrian boundary $\partial_\infty L$, consider the limit set of $L$ under the Liouville flow $Z_F$ on $F$ as time goes to $-\infty$, which defines a relative Lagrangian skeleton $\mathfrak{c}_F \cup (\partial_\infty L \times \mathbb{R})$. Following Nadler-Shende \cite{NadShen}*{Section 6.3}, we may also expect an explicit construction of the relative doubling functor, satisfying the properties in this section,
$$w_{\mathfrak{c}_F\cup (\partial_\infty L \times \mathbb{R})}: \msh_{\mathfrak{c}_F \cup (\partial_\infty L \times \mathbb{R})}(\mathfrak{c}_F \cup (\partial_\infty L \times \mathbb{R})) \rightarrow \Sh_{(\mathfrak{c}_F \cup (\partial_\infty L \times \mathbb{R}))^\prec_\epsilon}(M).$$
Then one may remove the relative part of the stop and send the right hand side into $\Sh_{\mathfrak{c}_F}(M)$.
In this setup, one can probably explicitly see that a Lagrangian cocore of $\mathfrak{c}_F$ is sent to the corresponding sheaf-theoretic linking disk. Then, following the first author's work \cite{Kuo-wrapped-sheaves}, we are able to define a functor
$$\msh^c_{\mathfrak{c}_F}(\mathfrak{c}_F) \rightarrow \wsh_{\mathfrak{c}_F}(M),$$
since the relative doubling of all the compact objects in $\msh^c_{\mathfrak{c}_F}(\mathfrak{c}_F)$ will in this case be sheaves with perfect stalks on $M$. This should be closer to the Fukaya categorical construction of the cup functor.
\subsection{Doubling as adjoint functors}\label{sec:double-ad}
Following the discussion in Section \ref{sec:ad-micro}, we may expect that, after wrapping positively or negatively, the doubling functor $w_\Lambda$ becomes isomorphic to the left and right adjoints of the microlocalization $m_\Lambda: \Sh_\Lambda(M) \rightarrow \msh_\Lambda(\Lambda)$.
Indeed, heuristically, by wrapping positively via $\wrap_{\Lambda \cup T_{2\varepsilon}(\Lambda)}^+$ (respectively, wrapping negatively via $\wrap_{\Lambda \cup T_{-2\varepsilon}(\Lambda)}^-$) one may expect that
\begin{align*}
\wrap_{\Lambda \cup T_{2\varepsilon}(\Lambda)}^+ \circ w_\Lambda({F})_{U_\alpha}[-1] = \mathrm{Fib}({F}_\alpha \rightarrow T_{2\varepsilon}(F_\alpha)) &= \wrap_{\Lambda \cup T_{2\varepsilon}(\Lambda)}^+ \mathrm{Fib}({F}_\alpha \rightarrow \wrap_{\Lambda \cap \Omega_\alpha}^+ F_\alpha), \\
\wrap_{\Lambda \cup T_{-2\varepsilon}(\Lambda)}^- \circ w_\Lambda(F)_{U_\alpha} = \mathrm{Cofib}(T_{-2\varepsilon}({F}_\alpha) \rightarrow F_\alpha) &= \wrap_{\Lambda \cup T_{-2\varepsilon}(\Lambda)}^- \mathrm{Cofib}(\wrap_{\Lambda \cap \Omega_\alpha}^- F_\alpha \rightarrow F_\alpha).
\end{align*}
It would then follow immediately from the formula in Section \ref{sec:ad-micro} and the beginning of Section \ref{sec:doubling-local} that $m_\Lambda^l = \wrap_\Lambda^+ \circ w_\Lambda[-1]$ while $m_\Lambda^r = \wrap_\Lambda^- \circ w_\Lambda$. However, as we have explained in the previous section, the Reeb flow $T_t$ on $S^*M$ does not define a Reeb flow on $S^*U_\alpha$, and conversely it is also not clear that Reeb flows on $S^*U_\alpha$ can be patched together into a global Reeb flow on $S^*M$.
Nevertheless, we claim that the positively and negatively wrapped doubling functors do correspond to the left and right adjoints of microlocalization, by essentially the same computation as in the proof of full faithfulness in the previous section. We now give the proofs.
\begin{theorem}\label{thm:doubling_rightad}
Let $\Lambda \subseteq S^{*}M$ be a compact subanalytic Legendrian. Then the right adjoint of microlocalization is isomorphic to
$$m_\Lambda^r = \wrap_\Lambda^- \circ w_\Lambda: \msh_\Lambda(\Lambda) \rightarrow \Sh_\Lambda(M).$$
\end{theorem}
\begin{proof}
We know that for ${F} \in \Sh_\Lambda(M)$, by Theorem \ref{w=ad},
$$\Hom(F, \wrap_\Lambda^-(G)) \simeq \Hom(F, G).$$
Thus it suffices to show that for any ${F} \in \Sh_\Lambda(M)$ and ${G} \in \msh_\Lambda(\Lambda)$, there is a canonical quasi-isomorphism
$$\Hom({F}, w_\Lambda({G})) \simeq \Gamma(S^*M, \mhom(m_\Lambda({F}), {G})).$$
Again, following the proof of Theorem \ref{thm:doubling}, we consider a good open covering $\mathscr{U}$ and a good family of refinements $\mathscr{V}^t$, and show that locally
$$\Hom({F}|_{V_\alpha}, w_\Lambda({G})|_{V_\alpha}) \simeq \Gamma(S^{*}V_\alpha, \mhom({F}|_{V_\alpha}, {G}|_{V_\alpha})).$$
Writing down the definition of $w_\Lambda({G})_{V_\alpha}$, we have
\[\begin{split}
\Hom&({F}|_{V_\alpha}, w_\Lambda({G})_{V_\alpha}) \\
& \simeq \Hom\big({F}|_{V_\alpha}, \mathrm{Cofib}(T_{-\varepsilon}(j_{\alpha *}{G}_\alpha)|_{V_\alpha} \rightarrow T_\varepsilon(j_{\alpha *}{G}_\alpha)|_{V_\alpha})\big) \\
& \simeq \mathrm{Cofib}\big(\Hom({F}|_{V_\alpha}, T_{-\varepsilon}(j_{\alpha *}{G}_\alpha)|_{V_\alpha}) \rightarrow \Hom({F}|_{V_\alpha}, T_\varepsilon(j_{\alpha *}{G}_\alpha)|_{V_\alpha})\big).
\end{split}\]
Given the non-characteristic condition for the good family of refinements $\mathscr{V}^t$, by Lemma \ref{lem:hom-cutoff} we know that
\[\begin{split}
\Hom({F}|_{V_\alpha}, T_{-\varepsilon}(j_{\alpha *}{G}_\alpha)|_{V_\alpha}) & \simeq \Hom(j_{\alpha !}{F}_{\alpha}, T_{-\varepsilon}(j_{\alpha *}{G}_\alpha)),\\
\Hom({F}|_{V_\alpha}, T_\varepsilon(j_{\alpha *}{G}_\alpha)|_{V_\alpha}) & \simeq \Hom(j_{\alpha !}{F}_{\alpha}, T_\varepsilon(j_{\alpha *}{G}_\alpha)).
\end{split}\]
In addition, it is easy to show that these identifications commute with the canonical map $T_{-\varepsilon}(j_{\alpha *}{G}_\alpha)|_{V_\alpha} \rightarrow T_\varepsilon(j_{\alpha *}{G}_\alpha)|_{V_\alpha}$. Therefore, by the Sato-Sabloff exact triangle Corollary \ref{cor:sato-sab} we can conclude that
\[\begin{split}
\Hom&({F}|_{V_\alpha}, w_\Lambda({G})_{V_\alpha}) \\
& \simeq \mathrm{Cofib}\big(\Hom({F}|_{V_\alpha}, T_{-\varepsilon}(j_{\alpha *}{G}_\alpha)|_{V_\alpha}) \rightarrow \Hom({F}|_{V_\alpha}, T_\varepsilon(j_{\alpha *}{G}_\alpha)|_{V_\alpha})\big) \\
& \simeq \mathrm{Cofib}\big(\Hom(j_{\alpha !}{F}_{\alpha}, T_{-\varepsilon}(j_{\alpha *}{G}_\alpha)) \rightarrow \Hom(j_{\alpha !}{F}_{\alpha}, T_\varepsilon(j_{\alpha *}{G}_\alpha))\big) \\
& \simeq \Gamma(S^{*}U_\alpha, \mhom(j_{\alpha !}{F}_{\alpha}, j_{\alpha *}{G}_\alpha)) \simeq \Gamma(S^{*}V_\alpha, \mhom({F}_{\alpha}, {G}_\alpha)),
\end{split}\]
where the last equivalence again follows from the non-characteristic deformation in Lemma \ref{lem:muhom-cutoff}. This completes the proof.
\end{proof}
\begin{remark}\label{rem:doubling_rightad}
In fact, in the proof we have shown that for ${F} \in \Sh_\Lambda(M)$ and ${G} \in \msh_\Lambda(\Lambda)$,
$$\Hom({F}, w_\Lambda({G})) \simeq \Hom(T_\varepsilon({F}), w_\Lambda({G})) \simeq \Gamma(S^*M, \mhom(m_\Lambda({F}), {G})).$$
Note that this is also a direct corollary of Proposition \ref{prop:hom_w_pm}.
\end{remark}
\begin{theorem}\label{thm:doubling_leftad}
Let $\Lambda \subseteq S^{*}M$ be a closed subanalytic Legendrian. Then the left adjoint of microlocalization is isomorphic to
$$m_\Lambda^l = \wrap_\Lambda^+ \circ w_\Lambda[-1]: \msh_\Lambda(\Lambda) \rightarrow \Sh_\Lambda(M).$$
\end{theorem}
\begin{proof}
Similar to the proof of Theorem \ref{thm:doubling_rightad}, it suffices to show that for any ${F} \in \msh_\Lambda(\Lambda)$ and ${G} \in \Sh_\Lambda(M)$, there is a canonical quasi-isomorphism
$$\Hom(w_\Lambda({F})[-1], {G}) \simeq \Gamma(S^*M, \mhom({F}, m_\Lambda({G}))).$$
Again, we consider a good open covering $\mathscr{U}$ and a good family of refinements $\mathscr{V}^t$, and show that locally
$$\Hom(w_\Lambda({F})_{V_\alpha}[-1], {G}|_{V_\alpha}) \simeq \Gamma(S^{*}V_\alpha, \mhom({F}|_{V_\alpha}, {G}|_{V_\alpha})).$$
Writing down the definition of $w_\Lambda({F})_{V_\alpha}[-1]$, we have
\[\begin{split}
\Hom&(w_\Lambda({F})[-1]|_{V_\alpha}, {G}|_{V_\alpha}) \\
& \simeq \Hom\big(\mathrm{Cofib}(T_{-\varepsilon}(j_{\alpha !}{F}_\alpha)|_{V_\alpha} \rightarrow T_\varepsilon(j_{\alpha !}{F}_\alpha)|_{V_\alpha})[-1], {G}|_{V_\alpha}\big) \\
& \simeq \mathrm{Cofib}\big(\Hom(T_\varepsilon(j_{\alpha !}{F}_\alpha)|_{V_\alpha}, {G}|_{V_\alpha}) \rightarrow \Hom(T_{-\varepsilon}(j_{\alpha !}{F}_\alpha)|_{V_\alpha}, {G}|_{V_\alpha})\big).
\end{split}\]
Then by the non-characteristic deformation in Lemmas \ref{lem:hom-cutoff} and \ref{lem:muhom-cutoff} and the Sato-Sabloff exact triangle Corollary \ref{cor:sato-sab}, we can conclude that
\[\begin{split}
\Hom&(w_\Lambda({F})[-1]|_{V_\alpha}, {G}|_{V_\alpha}) \\
& \simeq \mathrm{Cofib}\big(\Hom(T_\varepsilon(j_{\alpha !}{F}_\alpha), j_{\alpha *}{G}|_{V_\alpha}) \rightarrow \Hom(T_{-\varepsilon}(j_{\alpha !}{F}_\alpha), j_{\alpha *}{G}|_{V_\alpha})\big) \\
& \simeq \Gamma(S^{*}U_\alpha, \mhom(j_{\alpha !}{F}_\alpha, j_{\alpha *}{G}|_{U_\alpha})) \simeq \Gamma(S^{*}V_\alpha, \mhom({F}_\alpha, {G}_{\alpha})),
\end{split}\]
which completes the proof of the theorem.
\end{proof}
\begin{remark}\label{rem:doubling_leftad}
In fact, in the proof we have shown that for ${F} \in \msh_\Lambda(\Lambda)$ and ${G} \in \Sh_\Lambda(M)$,
$$\Hom(w_\Lambda({F})[-1], {G}) \simeq \Hom(w_\Lambda({F})[-1], T_{-\varepsilon}({G})) \simeq \Gamma(S^*M, \mhom({F}, m_\Lambda({G}))),$$
which is again an application of Proposition \ref{prop:hom_w_pm}.
\end{remark}
\begin{remark}\label{rem:multi-component-double}
Moreover, we remark that when we apply the doubling construction to a single connected component $\Lambda_i \subset \Lambda$, using the same argument, one can still show that
$$m_{\Lambda_i}^l = \mathfrak{W}_\Lambda^+ \circ w_{\Lambda_i}[-1], \;\; m_{\Lambda_i}^r = \mathfrak{W}_\Lambda^- \circ w_{\Lambda_i}.$$
The reader may compare it with the discussion in Remark \ref{rem:multi-component-sato} and \ref{rem:multi-component-ad}.
\end{remark}
\subsubsection{Remark on the sheaf quantization result of Guillermou}
At the end of this section, we remark that our result also provides a functorial interpretation of the sheaf quantization result of Guillermou \cite{Gui}, and hence justifies to some extent why Guillermou's construction is canonical.
First, we recall Guillermou's sheaf quantization theorem, from which he deduces results on nearby Lagrangians (see also Jin \cite{JinJhomo}, who applies the result to sheaves over ring spectra and proves additional properties of nearby Lagrangians).
\begin{theorem}[Guillermou \cite{Gui}]\label{thm:Gui}
Let $L \subseteq T^*M$ be an embedded closed exact Lagrangian submanifold, and $\widetilde{L} \subseteq J^1(M) \cong S^{*,\infty}_{\tau>0}(M \times \mathbb{R})$ be its Legendrian lift. Then for any $\cL \in \msh_{\widetilde{L}}(\widetilde{L})$, there exists a sheaf $F \in \Sh_{\widetilde{L}}(M \times \mathbb{R})$ with zero stalk at $M \times \{-\infty\}$ such that
$$m_{\widetilde{L}}(F) = \cL.$$
\end{theorem}
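In other words, every object of $\msh_{\widetilde{L}}(\widetilde{L})$ is quantized by an honest sheaf on $M \times \mathbb{R}$; that is, the microlocalization functor $m_{\widetilde{L}}: \Sh_{\widetilde{L}}(M \times \mathbb{R}) \rightarrow \msh_{\widetilde{L}}(\widetilde{L})$ hits every object up to isomorphism.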
We can reformulate Guillermou's construction as follows, which is essentially his original proof, but with additional functorial interpretations. Namely, the sheaf quantization functor
$$\msh_{\widetilde{L}}(\widetilde{L}) \rightarrow \Sh_{\widetilde{L}}(M \times \mathbb{R})$$
is simply the left adjoint of the microlocalization functor.
\begin{proof}[Proof of Theorem \ref{thm:Gui}]
For any $\cL \in \msh_{\widetilde{L}}(\widetilde{L})$, choosing an open covering $\{U_\alpha\}_{\alpha \in I}$ of $M$ and a corresponding covering $\{\Omega_\alpha\}_{\alpha \in I}$ of $\widetilde{L} \subset S^{*}(M \times \mathbb{R})$, by Lemma \ref{lem:left_right_adjoints_of_microlocalization} we can consider
$$m^l_{\widetilde{L}}(\cL) = \wrap_{\widetilde{L}}^+ \clmi{\alpha \in I} j_{\alpha,!}\mathrm{Fib}(\cL_\alpha \rightarrow \wrap_\alpha^+\cL_\alpha) \in \Sh_{\widetilde{L}}(M \times \mathbb{R}).$$
We claim that $F = m^l_{\widetilde{L}}(\cL)$ satisfies $m_{\widetilde{L}}(F) = \cL$. This follows immediately from the interpretation in Theorem \ref{thm:doubling_leftad} that
$$m^l_{\widetilde{L}}(\cL) = \wrap_{\widetilde{L}}^+ w_{\widetilde{L}}(\cL)[-1] \in \Sh_{\widetilde{L}}(M \times \mathbb{R}).$$
In fact, by taking a cofinal wrapping which sends $T_{-\varepsilon}(\widetilde{L})$ to $\widetilde{L}$ and $T_\varepsilon(\widetilde{L})$ to infinity by the Reeb flow, we immediately see that $m_{\widetilde{L}} \circ m_{\widetilde{L}}^l(\cL) = \cL$ and the stalk of $m_{\widetilde{L}}^l(\cL)$ at $M \times \{-\infty\}$ is zero.
\end{proof}
\begin{remark}
Heuristically, one may imagine that in $\operatorname{colim}_{\alpha \in I} j_{\alpha,!}\mathrm{Fib}(\cL_\alpha \rightarrow \wrap_\alpha^+\cL_\alpha)$, all the singular support coming from $\wrap_\alpha^+(\cL_\alpha)$ will be sent by some positive Hamiltonian flow to infinity. However, at the moment we do not have a way to characterize this part of the singular support without appealing to the doubling construction.
\end{remark}
\section{Spherical adjunction from microlocalization}
With the preparation in the previous sections, we are able to prove Theorem \ref{thm:main}. Our main result in this section is the following theorem.
\begin{theorem}\label{thm:main-fun}
Let $\Lambda \subset S^{*}M$ be a compact subanalytic Legendrian full stop or swappable stop. Then the microlocalization functor
$$m_\Lambda: \Sh_\Lambda(M) \rightarrow \msh_\Lambda(\Lambda)$$
is a spherical functor.
\end{theorem}
Theorem \ref{thm:main} is going to be a formal consequence of Theorem \ref{thm:main-fun}, as we will discuss in Section \ref{sec:spherical}. In particular, this will imply that the left adjoint of microlocalization is also a spherical functor.
\begin{corollary}\label{cor:main-fun}
Let $\Lambda \subseteq S^{*}M$ be a compact subanalytic Legendrian full stop or swappable stop. Then the left adjoint of the microlocalization functor
$$m_\Lambda^l: \msh_\Lambda(\Lambda) \rightarrow \Sh_\Lambda(M)$$
is a spherical functor.
\end{corollary}
In Section \ref{sec:double-ad}, we have determined the left and right adjoints of the microlocalization in terms of the doubling functor. In Section \ref{sec:natural-trans}, we will construct the natural transformation between the left and right adjoints. In Section \ref{sec:sphere-crit}, we introduce the notion of full stops and swappable stops following \cite{SylvanOrlov} and explain how these conditions lead to Theorem \ref{thm:main-fun}. Then in Section \ref{sec:prop-cpt}, we restrict the corresponding functors to certain subcategories and prove Corollary \ref{cor:main}.
Finally, in Section \ref{sec:serre-proper}, we will combine the result with the Sabloff-Serre duality in Proposition \ref{prop:sab-serre}, and show that when $\Lambda \subseteq S^*M$ is a full stop or swappable stop, the spherical cotwist, after tensoring with the dualizing sheaf, $S_\Lambda^- \otimes \omega_M$, is the Serre functor on the subcategory $\Sh^b_\Lambda(M)_0$ of compactly supported sheaves with perfect stalks. This proves a folklore conjecture on Fukaya-Seidel categories in the case when the ambient manifold is a cotangent bundle.
\subsection{Spherical adjunction and spherical functors}\label{sec:spherical}
First of all, we recall the definition of spherical adjunctions given by Dyckerhoff-Kapranov-Schechtman-Soibelman \cite{SphericalInfty} in the setting of stable $\infty$-categories.
\begin{definition}[\cite{SphericalInfty}*{Definition 1.4.8}]\label{def:sphericalad}
Let ${\sA, \sB}$ be stable ($\infty$-)categories and
$$F: {\sA} \leftrightharpoons {\sB} : F^l$$
be an adjunction of $\infty$-functors. Let $T'$ and $S'$ be the functors that fit into the fiber sequences
$$T' \rightarrow \mathrm{id}_{\sB} \rightarrow F \circ F^l, \,\, F^l \circ F \rightarrow \mathrm{id}_{\sA} \rightarrow S'.$$
Then $F: {\sA} \leftrightharpoons {\sB}: F^l$ is called a spherical adjunction if $T'$ and $S'$ are autoequivalences.
\end{definition}
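Here the second map in each fiber sequence is, respectively, the unit and the counit of the adjunction $F^l \dashv F$; in other words,
$$T' \simeq \mathrm{Fib}\big(\mathrm{id}_{\sB} \rightarrow F \circ F^l\big), \qquad S' \simeq \mathrm{Cofib}\big(F^l \circ F \rightarrow \mathrm{id}_{\sA}\big).$$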
Given a spherical adjunction $F^l \dashv F$, one can in fact show that both $F$ and $F^l$ are spherical functors in the sense of Anno-Logvinenko \cite{Spherical}. We recall the definition of spherical functors in the setting of dg categories \cite{Spherical} and in the general case \cite{SphericalInfty,SphericalChrist}.
\begin{definition}\label{def:spherical}
Let ${\sA, \sB}$ be stable ($\infty$-)categories and $F: {\sA} \rightarrow {\sB}$ an ($\infty$-)functor, with left and right adjoints $F^l$ and $F^r$. Let the spherical twist $T$, dual twist $T'$, cotwist $S$ and dual cotwist $S'$ be the functors that fit into the fiber sequences
\[\begin{split}
F \circ F^r \rightarrow \mathrm{id}_{\sB} \rightarrow T,\, & \, T' \rightarrow \mathrm{id}_{\sB} \rightarrow F \circ F^l, \\
S \rightarrow \mathrm{id}_{\sA} \rightarrow F^r \circ F,\, & \, F^l \circ F \rightarrow \mathrm{id}_{\sA} \rightarrow S'.
\end{split}\]
Then $F$ is a spherical functor if the following conditions hold:
\begin{enumerate}
\item the spherical twist $T$ is an autoequivalence;
\item the spherical cotwist $S$ is an autoequivalence;
\item the composition $F^l \circ T[-1] \rightarrow F^l \circ F \circ F^r \rightarrow F^r$ is an isomorphism;
\item the composition $F^r \rightarrow F^r \circ F \circ F^l \rightarrow S \circ F^l[1]$ is an isomorphism.
\end{enumerate}
\end{definition}
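Here, in Condition~(3), the first map is obtained by applying $F^l$ to the boundary map $T[-1] \rightarrow F \circ F^r$ of the fiber sequence defining $T$, and the second map is induced by the counit $F^l \circ F \rightarrow \mathrm{id}_{\sA}$; similarly, in Condition~(4), the first map is induced by the unit $\mathrm{id}_{\sB} \rightarrow F \circ F^l$, and the second by the boundary map $F^r \circ F \rightarrow S[1]$ of the fiber sequence defining $S$.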
\begin{proposition}[\cite{SphericalInfty}*{Corollary 2.5.13}]
Let $F: {\sA} \leftrightharpoons {\sB} : F^l$ be a spherical adjunction. Then both $F$ and $F^l$ are spherical functors.
\end{proposition}
\begin{remark}\label{rem:sphere-ad}
Given a spherical adjunction $F^l \dashv F$, let $T$ be the inverse of $T'$ and $S$ the inverse of $S'$. One can construct the right adjoint of $F$ by setting $F^r = F^l \circ T[-1]$, and the left adjoint of $F^l$ by setting $F_l = F \circ S[1]$. In fact, any spherical functor has iterated left and right adjoints of any order.
\end{remark}
Therefore, to prove a spherical adjunction $F \vdash F^l$, it suffices to show that either of the functors is a spherical functor as in Definition \ref{def:spherical}. Moreover, the following theorem shows that it suffices to prove any two out of the four conditions.
\begin{theorem}[Anno-Logvinenko \cite{Spherical}, Christ \cite{SphericalChrist}]
Let ${\sA, \sB}$ be stable categories, and $F: {\sA} \rightarrow {\sB}$ a functor satisfying any two of the four conditions in Definition \ref{def:spherical}. Then $F$ is a spherical functor. Moreover, $T, T'$ and $S, S'$ are inverse autoequivalences.
\end{theorem}
From the discussion above, we know that in order to prove Theorem \ref{thm:main}, it suffices to prove Theorem \ref{thm:main-fun} stated at the beginning of the section.
\subsection{Natural transform between adjoints}\label{sec:natural-trans}
Given the adjoint functors and the candidate cotwist in Section \ref{sec:doubling}, we will investigate the relation between the left and right adjoints via the algebraically defined natural transformation involving the cotwist
$$m_\Lambda^r \rightarrow m_\Lambda^r \circ m_\Lambda \circ m_\Lambda^l \rightarrow {S}_\Lambda^- \circ m_\Lambda^l[1].$$
The composition of these natural transformations should induce an equivalence in order for the microlocalization to be a spherical functor, as stated in Condition~(4) of Definition \ref{def:spherical}.
For $F \in \Sh_\Lambda(M)$ and $G \in \msh_\Lambda(\Lambda)$, we thus need to prove that the algebraically defined natural morphism
$$\Hom(F, m_\Lambda^r(G)) \rightarrow \Hom(F, m_\Lambda^r m_\Lambda m_\Lambda^l(G)) \rightarrow \Hom(F, {S}_\Lambda^- m_\Lambda^l(G)[1])$$
is an equivalence. On the other hand, Theorem \ref{w=ad} and Proposition \ref{prop:hom_w_pm} imply that
\begin{align*}
\Hom(F, m_\Lambda^r(G)) = \Hom(T_\varepsilon(F), w_\Lambda(G)), \;\;
\Hom(F, S_\Lambda^- m_\Lambda^l(G)[1]) = \Hom(T_\varepsilon(F), \wrap_\Lambda^+ w_\Lambda(G)).
\end{align*}
Positive isotopies will then induce a geometrically defined natural morphism
$$\Hom(T_\varepsilon(F), w_\Lambda(G)) \rightarrow \Hom(T_\varepsilon(F), \wrap_\Lambda^+ w_\Lambda(G)).$$
Our main result in this section claims that the algebraically defined natural morphism induces an isomorphism if and only if the geometrically defined natural morphism induces an isomorphism.
\begin{proposition}\label{prop:natural-trans}
Let $\Lambda \subseteq S^{*}M$ be a compact subanalytic Legendrian. Then for ${F} \in \Sh_\Lambda(M)$ and ${G} \in \msh_\Lambda(\Lambda)$, the natural morphism induced by adjunctions
$$\Hom(F, m_\Lambda^r(G)) \rightarrow \Hom(F, m_\Lambda^r m_\Lambda m_\Lambda^l(G)) \rightarrow \Hom(F, {S}_\Lambda^- m_\Lambda^l(G)[1])$$
is an isomorphism if and only if the natural morphism induced by positive isotopies
$$\Hom(T_\varepsilon(F), w_\Lambda(G)) \rightarrow \Hom(T_\varepsilon(F), \wrap_\Lambda^+ w_\Lambda(G))$$
is an isomorphism.
\end{proposition}
We need to unpack the algebraic adjunctions between microlocalization and its left and right adjoints using results on the doubling functor.
Firstly, we consider the natural transformation to the cotwist $m_\Lambda^r \circ m_\Lambda \rightarrow S_\Lambda^-[1]$. The following lemma follows directly from Corollary \ref{cor:exact-tri}, which provides the fiber sequence $T_{-\varepsilon} \rightarrow T_\varepsilon \rightarrow w_\Lambda \circ m_\Lambda$.
\begin{lemma}\label{lem:natural-trans1}
Let $\Lambda \subseteq S^{*}M$ be a compact subanalytic Legendrian. Then for ${F} \in \Sh_\Lambda(M)$ and ${G} \in \msh_\Lambda(\Lambda)$, there is a commutative diagram induced by natural transformations
\[\xymatrix@C=5mm{
\Hom(F, m_\Lambda^r m_\Lambda m_\Lambda^l(G)) \ar[r] \ar[d]^{\rotatebox{90}{$\sim$}} & \Hom(F, {S}_\Lambda^- m_\Lambda^l(G)[1]) \ar[d]^{\rotatebox{90}{$\sim$}} \\
\Hom(w_\Lambda m_\Lambda({F}), \mathfrak{W}_\Lambda^+ w_\Lambda({G})) \ar[r] & \Hom(T_{\varepsilon}({F}), \mathfrak{W}_\Lambda^+ w_\Lambda({G})).
}\]
\end{lemma}
\begin{proof}
Consider ${F} \in \Sh_\Lambda(M)$ and ${G} \in \msh_\Lambda(\Lambda)$. Theorem \ref{w=ad} implies that it suffices to show that the following diagram commutes:
\[\xymatrix@C=2.5mm{
\Hom({F}, w_\Lambda m_\Lambda \circ \mathfrak{W}_\Lambda^+ w_\Lambda({G})[-1]) \ar[r] \ar[d]^{\rotatebox{90}{$\sim$}} & \Hom({F}, T_{-\varepsilon}( \mathfrak{W}_\Lambda^+ w_\Lambda({G}))) \ar[d]^{\rotatebox{90}{$\sim$}} \\
\Hom(w_\Lambda m_\Lambda({F}), \mathfrak{W}_\Lambda^+ w_\Lambda({G})) \ar[r] & \Hom(T_\varepsilon({F}), \mathfrak{W}_\Lambda^+ w_\Lambda({G})).
}\]
Since the two horizontal morphisms are induced by the natural transformations $w_\Lambda \circ m_\Lambda[-1] \rightarrow T_{-\varepsilon}$ and $T_\varepsilon \rightarrow w_\Lambda \circ m_\Lambda$ from Corollary \ref{cor:exact-tri}, respectively, we can conclude that the diagram commutes.
\end{proof}
Secondly, we need to consider the unit $\mathrm{id} \rightarrow m_\Lambda \circ m_\Lambda^l$, which is slightly more difficult. The following lemma relies on Corollary \ref{cor:exact-tri} and the fact that the adjunction in Theorem \ref{thm:doubling_leftad} factors through the doubling functor by the computation in Theorem \ref{thm:doubling}:
$$\Gamma(\Lambda, \mhom(m_\Lambda(F), G)) \xrightarrow{\sim} \Hom(w_\Lambda m_\Lambda(F), w_\Lambda(G)) \xrightarrow{\sim} \Hom(T_\varepsilon(F), w_\Lambda(G)).$$
\begin{lemma}\label{lem:natural-trans2}
Let $\Lambda \subseteq S^{*}M$ be a compact subanalytic Legendrian. Then for ${F} \in \Sh_\Lambda(M)$ and ${G} \in \msh_\Lambda(\Lambda)$, there is a commutative diagram induced by natural transformations
\[\xymatrix@C=5mm{
\Hom(F, m_\Lambda^r(G)) \ar[r] \ar[d]^{\rotatebox{90}{$\sim$}} & \Hom(F, m_\Lambda^r m_\Lambda m_\Lambda^l(G)) \ar[d]^{\rotatebox{90}{$\sim$}} \\
\Hom(T_\varepsilon({F}), w_\Lambda({G})) \ar[r] & \Hom(w_\Lambda m_\Lambda({F}), \mathfrak{W}_\Lambda^+ w_\Lambda({G})).
}\]
\end{lemma}
\begin{proof}
Consider ${F} \in \Sh_\Lambda(M)$ and ${G} \in \msh_\Lambda(\Lambda)$. By Theorem \ref{w=ad}, it suffices to show that there is a commutative diagram
\[\xymatrix@C=2mm{
\Hom({F}, w_\Lambda({G})) \ar[r] \ar[d]^{\rotatebox{90}{$\sim$}} & \Hom({F}, w_\Lambda m_\Lambda \circ \mathfrak{W}_\Lambda^+ w_\Lambda({G})[-1]) \ar[d]^{\rotatebox{90}{$\sim$}} \\
\Hom(T_\varepsilon({F}), w_\Lambda({G})) \ar[r] & \Hom(w_\Lambda m_\Lambda({F}), \mathfrak{W}_\Lambda^+ w_\Lambda({G})),
}\]
where the morphism on the top is induced by adjunction, and the morphism on the bottom is the composition
$$\Hom(T_\varepsilon({F}), w_\Lambda({G})) \xrightarrow{\sim} \Hom(w_\Lambda m_\Lambda({F}), w_\Lambda({G})) \rightarrow \Hom(w_\Lambda m_\Lambda({F}), \mathfrak{W}_\Lambda^+ w_\Lambda({G})).$$
Consider the unit of the adjunction between $m_\Lambda$ and $m_\Lambda^l = \wrap_\Lambda^+ \circ w_\Lambda[-1]$ in Theorem \ref{thm:doubling_leftad}. Then we know that the morphism on the top factors as
\[\xymatrix{
\Gamma(\Lambda, \mhom(m_\Lambda(F), G)) \ar[r] \ar[d]^{\rotatebox{90}{$\sim$}} & \Gamma(\Lambda, \mhom(m_\Lambda(F), m_\Lambda \circ \mathfrak{W}_\Lambda^+ w_\Lambda({G})[-1])) \ar[d]^{\rotatebox{90}{$\sim$}} \\
\Hom(F, w_\Lambda(G)) \ar[r] & \Hom(F, w_\Lambda m_\Lambda \circ \mathfrak{W}_\Lambda^+ w_\Lambda({G})[-1]).
}\]
Since the top horizontal morphism factors through $\Hom(w_\Lambda m_\Lambda({F}), \mathfrak{W}_\Lambda^+ w_\Lambda({G}))$, it suffices to show that the following composition factors through $\Gamma(\Lambda, \mhom(m_\Lambda(F), G))$:
$$\Hom(T_\varepsilon({F}), w_\Lambda({G})) \xrightarrow{\sim} \Hom(w_\Lambda m_\Lambda({F}), w_\Lambda({G})) \rightarrow \Hom(w_\Lambda m_\Lambda({F}), \mathfrak{W}_\Lambda^+ w_\Lambda({G})).$$
Since the adjunction in Theorem \ref{thm:doubling_leftad} factors through the isomorphism of the doubling functor in Theorem \ref{thm:doubling}
$$\Hom(T_\varepsilon({F}), w_\Lambda({G})) \xrightarrow{\sim} \Hom(w_\Lambda m_\Lambda({F}), w_\Lambda({G})) \xrightarrow{\sim} \Gamma(\Lambda, \mhom(m_\Lambda(F), G)),$$
we can conclude that the diagram above indeed commutes.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:natural-trans}]
We consider the following commutative diagram, where the horizontal morphisms are induced from the identity $w_\Lambda \circ m_\Lambda = \mathrm{Cofib}(T_{-\varepsilon} \rightarrow T_\varepsilon)$, and the vertical morphisms are induced by positive isotopies:
\[\xymatrix@C=2.5mm{
\Hom(T_\varepsilon({F}), w_\Lambda({G})) \ar[r] \ar@{=}[d]& \Hom(w_\Lambda \circ m_\Lambda({F}), w_\Lambda({G})) \ar[d] \ar[r] & \Hom(T_\varepsilon({F}), w_\Lambda({G})) \ar[d] \\
\Hom(T_\varepsilon({F}), w_\Lambda({G})) \ar[r] & \Hom(w_\Lambda \circ m_\Lambda({F}), \mathfrak{W}_\Lambda^+ \circ w_\Lambda({G})) \ar[r] & \Hom(T_\varepsilon({F}), \mathfrak{W}_\Lambda^+ \circ w_\Lambda({G})).
}\]
Lemmas \ref{lem:natural-trans1} and \ref{lem:natural-trans2} imply that the algebraically defined natural transformation of functors is an isomorphism if and only if the composition of the morphisms in the second row is an isomorphism.
From the computation in Theorem \ref{thm:doubling}, compared with Theorems \ref{thm:doubling_rightad} and \ref{thm:doubling_leftad}, we know that the horizontal natural morphisms in the first row are isomorphisms
$$\Hom(T_\varepsilon({F}), w_\Lambda({G})) \xrightarrow{\sim} \Hom(w_\Lambda \circ m_\Lambda({F}), w_\Lambda({G})) \xrightarrow{\sim} \Hom(T_\varepsilon({F}), w_\Lambda({G})).$$
Therefore, the composition of the horizontal morphisms in the second row is an isomorphism if and only if the geometrically defined vertical morphism in the last column is an isomorphism.
\end{proof}
\subsection{Criterion for spherical adjunction}\label{sec:sphere-crit}
With the adjunction pairs, fiber sequences and natural transformations from the previous sections in place, we will study in this section the spherical cotwist and its dual cotwist, and prove Conditions~(2) and (4) of Definition \ref{def:spherical} under geometric assumptions. As we will see in the proofs, both Conditions~(2) and (4) rely on some full faithfulness of the wrapping functor $\mathfrak{W}_\Lambda^+$.
In this section, we will use the word stop for a closed subanalytic Legendrian in $S^{*}M$ (meaning that the positive wrappings in $S^{*}M$ are stopped by the subanalytic Legendrian), a term which comes from the study of symplectic topology and wrapped Fukaya categories \cite{Sylvan,GPS1}.
\subsubsection{Spherical adjunction from full stops}
We assume that $M$ is compact in this subsection. First, we introduce the notion of an algebraic full stop, which has been frequently used in wrapped Fukaya categories.
\begin{definition}
Let $M$ be compact and $\Lambda \subseteq S^{*}M$ be a compact subanalytic Legendrian. Then $\Lambda$ is called a full stop if $\Sh^c_\Lambda(M)$ is a proper category.
\end{definition}
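Here properness means that $\Hom(F, G) \in \cV_0$ for any $F, G \in \Sh^c_\Lambda(M)$; this is the form in which the condition will be verified in the proof of Proposition \ref{prop:full-geo=>alg} below.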
\begin{remark}
Recall from Theorem \ref{thm:perfcompact} (\cite{Nadler-pants}*{Theorem 3.21} or \cite[Corollary 4.23]{Ganatra-Pardon-Shende3}) that
$\Sh^b_\Lambda(M) = \Sh^{pp}_\Lambda(M).$
From Proposition \ref{prop:smooth}, we know that in our case $\Sh^c_\Lambda(M)$ is smooth, which then implies (Corollary \ref{cor:prop-in-perf}) that
$\Sh^b_\Lambda(M) \subseteq \Sh^c_\Lambda(M).$
On the other hand, when $\Sh^c_\Lambda(M)$ is moreover proper, we know by \cite[Lemma A.8]{Ganatra-Pardon-Shende3} that
$$\Sh^c_\Lambda(M) = \Sh^b_\Lambda(M).$$
Conversely, when $M$ and $\Lambda$ are both compact and $\Sh^c_\Lambda(M) = \Sh^b_\Lambda(M)$, we can also tell that $\Sh^c_\Lambda(M)$ is proper, using for example Proposition \ref{lem:sheaves_by_representations}.
\end{remark}
\begin{example}\label{ex:triangle-full}
Let $\mathcal{S} = \{X_\alpha\}_{\alpha \in I}$ be a subanalytic triangulation on $M$. Then the union of unit conormal bundles over all strata $\Lambda = N^*_\infty\mathcal{S} = \bigcup_{\alpha \in I}N^{*}_\infty{X_\alpha}$ defines a full stop \cite{Ganatra-Pardon-Shende3}*{Proposition 4.24}.
\end{example}
We recall our notion of a geometric full stop from the introduction. For the definition of generalized linking disks, see for example \cite[Section 7.1]{Ganatra-Pardon-Shende3}.
\begin{definition}
A closed subanalytic Legendrian $\Lambda \subseteq S^{*}M$ is called a geometric full stop if for a collection of generalized linking spheres $\Sigma \subseteq S^{*}M$ of $\Lambda$, there exists a compactly supported positive Hamiltonian on $S^{*}M \backslash \Lambda$ such that the Hamiltonian flow sends $\Sigma$ to an arbitrarily small neighbourhood of $T_{-\varepsilon}(\Lambda)$.
\end{definition}
Following Ganatra-Pardon-Shende \cite{Ganatra-Pardon-Shende3}*{Proposition 6.7}, we prove that a geometric full stop is an algebraic full stop.
\begin{proposition}\label{prop:full-geo=>alg}
Let $\Lambda \subseteq S^{*}M$ be a geometric full stop. Then $\Lambda$ is also an algebraic full stop.
\end{proposition}
To prove the proposition, we recall the following definition by the first author in \cite{Kuo-wrapped-sheaves}. Let $M$ be compact, and $\widetilde{\wsh}_\Lambda(M)$ be the category of constructible sheaves with perfect stalks whose singular support is disjoint from $\Lambda$. Let $\mathcal{C}_\Lambda(M)$ be all continuation maps of positive isotopies supported away from $\Lambda$. Then the category of wrapped sheaves is \cite[Definition 4.1]{Kuo-wrapped-sheaves}
$$\wsh_\Lambda(M) \coloneqq \widetilde{\wsh}_\Lambda(M)/\mathcal{C}_\Lambda(M).$$
We have $\Hom_{\wsh_\Lambda(M)}(F, G) = \operatorname{colim}_{\varphi \in W^+(S^*M \backslash \Lambda)}\Hom(F, G^\varphi)$, and \cite[Theorem 1.3]{Kuo-wrapped-sheaves}
$$\wrap_\Lambda^+: \wsh_\Lambda(M) \xrightarrow{\sim} \Sh^c_\Lambda(M).$$
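Since $\wrap_\Lambda^+$ is an equivalence, $\Sh^c_\Lambda(M)$ is a proper category if and only if $\wsh_\Lambda(M)$ is, and it is the latter that we verify in the proof below.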
\begin{proof}[Proof of Proposition \ref{prop:full-geo=>alg}]
We prove that $\wsh_\Lambda(M)$ is a proper category. Namely, for any ${F, G} \in \wsh_\Lambda(M)$,
$$\Hom_{\wsh_\Lambda(M)}({F}, {G}) \in \cV_0.$$
Indeed, note that $\ms^\infty({G}) \subseteq S^{*}M \backslash \Lambda$ is compact. Thus there exists a cofinal sequence of wrappings $\varphi_k \in W^+(S^{*}M \backslash \Lambda)$ such that $\varphi_{k}^1(\ms^\infty({G}))$ is contained in a neighbourhood of $\Lambda$, and is in particular away from $\ms^\infty({F})$. Therefore
\[\begin{split}
\Hom_{\wsh_\Lambda(M)}({F}, {G}) &= \clmi{\varphi \in W^+(S^{*}M \backslash \Lambda)}\Hom({F}, {G}^\varphi) \simeq \clmi{k \rightarrow \infty}\Hom({F}, {G}^{\varphi_k}) \\
&\simeq \clmi{k \rightarrow \infty}\Hom({F}, {G}) = \Hom({F, G}) \in \cV_0,
\end{split}\]
which completes the proof.
\end{proof}
\begin{example}\label{ex:Lef-full}
Let $\pi: T^*M \rightarrow \mathbb{C}$ be an exact symplectic Lefschetz fibration (whose existence is ensured by Giroux-Pardon \cite{GirouxPardon}), and $F = \pi^{-1}(\infty)$ a regular Weinstein fiber at infinity. Then the Lagrangian skeleton $\Lambda = \mathfrak{c}_F \subseteq S^{*}M$ of the Weinstein manifold $F$ defines a full stop \cite{GPS2}*{Corollary 1.14} \& \cite{Ganatra-Pardon-Shende3}*{Proposition 6.7} under the equivalence between wrapped Fukaya categories and microlocal sheaf categories \cite{Ganatra-Pardon-Shende3}.
Let $\Lambda = \Lambda_\Sigma^\infty \subseteq S^{*}T^n$ be the FLTZ skeleton
associated to the toric fan $\Sigma$ \cite{FLTZCCC,FLTZHMS,RSTZSkel}. Gammage-Shende (under an extra assumption) \cite{GammageShende} and Zhou (without extra assumptions) \cite{ZhouSkel} show that it is indeed the Lagrangian skeleton of a regular fiber of a symplectic fibration $\pi: T^*T^n \rightarrow \mathbb{C}$, which is expected to be a Lefschetz fibration when the mirror toric stack $\mathcal{X}_\Sigma$ is smooth. The fact that $\Lambda_\Sigma^\infty$ is a full stop (when $\mathcal{X}_\Sigma$ is smooth) is also independently proved by Kuwagaki using mirror symmetry \cite{KuwaCCC}.
\end{example}
When $\Lambda \subseteq S^{*}M$ is a full Legendrian stop, we know that $\Sh_\Lambda(M) = \mathrm{Ind}(\Sh^b_\Lambda(M))$. Therefore we only focus on results on the small category $\Sh^b_\Lambda(M)$.
To show Condition~(2), namely that $S_\Lambda^+$ and $S_\Lambda^-$ are equivalences, we appeal to Section \ref{sec:serre}, where $S_\Lambda^-$ is shown to satisfy Serre duality (up to a twist) on $\Sh^b_\Lambda(M)$.
\begin{proposition}\label{prop:full-condition2}
Let $\Lambda \subseteq S^{*}M$ be a compact full Legendrian stop where $M$ is a closed manifold. Then there is a pair of inverse autoequivalences
$$S^+_\Lambda: \Sh^b_\Lambda(M) \rightleftharpoons \Sh^b_\Lambda(M): S^-_\Lambda.$$
\end{proposition}
\begin{proof}
For $F, G \in \Sh^b_\Lambda(M)$, we claim that the canonical map
$$\Hom(F, G) \rightarrow \Hom_{\wsh_\Lambda(M)}(T_\varepsilon(F), T_\varepsilon(G))$$
is an isomorphism. Indeed, consider a sequence of descending open neighbourhoods $\{\Omega_k\}_{k \in \mathbb{N}}$ of $\Lambda \subset S^{*}M$ such that $\Omega_{k+1} \subseteq \overline{\Omega_k}$ and
$\bigcap_{k \in \mathbb{N}} \Omega_k = \Lambda.$
Let the cofinal sequence of wrappings be given by the Reeb flow $T_{1/t}$ on $S^*M \backslash \Omega_k$ and the identity on $\Omega_{k+1}$. Then
\begin{align*}
\Hom_{\wsh_\Lambda(M)}(T_\varepsilon(F), T_\varepsilon(G))
&= \clmi{\delta \rightarrow 0^+} \Hom(T_\delta(F), T_\varepsilon(G)) \\
&= \clmi{\delta \rightarrow 0^+} \Hom(F, T_{\varepsilon - \delta}(G)).
\end{align*}
Then the right hand side is constant since $\Hom(F, T_{\varepsilon - \delta}(G)) \xleftarrow{\sim} \Hom(F,G)$ by the perturbation trick in Proposition \ref{prop:hom_w_pm}.
On the other hand, we also know that $\wrap_\Lambda^+: \wsh_\Lambda(M) \rightarrow \Sh_\Lambda^b(M)$ is an equivalence \cite[Theorem 1.3]{Kuo-wrapped-sheaves}. Therefore
$$\Hom_{\wsh_\Lambda(M)}(T_\varepsilon(F), T_\varepsilon(G)) \simeq \Hom(\wrap_\Lambda^+ \circ T_\varepsilon(F), \wrap_\Lambda^+ \circ T_\varepsilon(G)) = \Hom(S_\Lambda^+(F), S_\Lambda^+(G)).$$
Since $S_\Lambda^-$ is the right adjoint of $S_\Lambda^+$, we know that $S^-_\Lambda \circ S^+_\Lambda = \mathrm{id}_{\Sh^b_\Lambda(M)}$.
Now consider $F, G \in \Sh^b_\Lambda(M)$. Sabloff-Serre duality Proposition \ref{prop:sab-serre} implies that
$$\Hom(S^-_\Lambda(F), S_\Lambda^-(G)) = \Hom(G \otimes \omega_M^{-1}, S^-_\Lambda(F))^\vee = \Hom(F, G).$$
Then, since $S_\Lambda^-$ is the right adjoint of $S_\Lambda^+$, we know that $S^+_\Lambda \circ S^-_\Lambda = \mathrm{id}_{\Sh^b_\Lambda(M)}$ as well.
\end{proof}
Next, we show Condition~(4), namely that there is a natural isomorphism of functors $m_\Lambda^r \xrightarrow{\sim} S^-_\Lambda \circ m_\Lambda^l[1]$, which again requires the Serre duality in Section \ref{sec:serre}.
\begin{proposition}\label{prop:full-condition4}
Let $\Lambda \subset S^{*}M$ be a compact subanalytic full Legendrian stop. Then for any $F \in \Sh_\Lambda^b(M)$ and $G \in \msh^c_\Lambda(\Lambda)$, the composition
$$\Hom(F, m_\Lambda^r(G)) \rightarrow \Hom(w_\Lambda \circ m_\Lambda(F), m_\Lambda^l(G)) \rightarrow \Hom(T_\varepsilon(F), m_\Lambda^l(G))$$ is an isomorphism.
\end{proposition}
\begin{proof}
Let $\mu_i \in \msh_\Lambda^c(\Lambda)$ be the corepresentatives of microstalks at points $p_i$ on the smooth strata $\Lambda_i \subset \Lambda$, which (split) generate the category $\msh_\Lambda^c(\Lambda)$. By Proposition \ref{prop:natural-trans}, it suffices to show that for any $F \in \Sh_\Lambda^b(M)$,
$$\Hom(T_\varepsilon(F), w_\Lambda(\mu_i)) \simeq \Hom(T_\varepsilon(F), \wrap_\Lambda^+ \circ w_\Lambda(\mu_i)).$$
Note that $m_\Lambda^l(\mu_i) \in \Sh^b_\Lambda(M)$. By Sabloff-Serre duality Proposition \ref{prop:sab-serre}, we know that the dual of the right hand side is
\begin{align*}
\Hom(T_\varepsilon(F), m_\Lambda^l(\mu_i)[1])^\vee &= \Hom(F, T_{-\varepsilon} \circ m_\Lambda^l(\mu_i)[1])^\vee = \Hom(m_\Lambda^l(\mu_i)[1], F \otimes \omega_M).
\end{align*}
On the other hand, since $F$ is cohomologically constructible, by Theorem \ref{thm:doubling_rightad} and Remark \ref{rem:DFotimesG}, the dual of the left hand side is
\begin{align*}
\Hom(T_\varepsilon(F), w_\Lambda(\mu_i))^\vee &= p_*(\ND{M}F \otimes w_\Lambda(\mu_i))^\vee = \Hom(\ND{M}F \otimes w_\Lambda(\mu_i), p^!1_\cV) \\
&= \Hom( w_\Lambda(\mu_i), \VD{M} \circ \ND{M} F) = \Hom(w_\Lambda(\mu_i), F \otimes \omega_M).
\end{align*}
Moreover, the comparison map is exactly induced by the continuation map $w_\Lambda(\mu_i) \rightarrow \wrap_\Lambda^+ \circ w_\Lambda(\mu_i)$. Therefore, the result immediately follows from Theorem \ref{w=ad}.
\end{proof}
By Propositions \ref{prop:full-condition2} and \ref{prop:full-condition4}, we can immediately finish the proof of Theorem \ref{thm:main-fun}, and hence of the full stop part of Theorem \ref{thm:main}.
\subsubsection{Spherical adjunction from swappable stops}
Next, we define the notion of a swappable stop, which was introduced by Sylvan \cite{SylvanOrlov}; our definition is a priori weaker than his.
\begin{definition}\label{def:swappable}
Let $\Lambda \subset S^{*}M$ be a compact subanalytic Legendrian. Then $\Lambda$ is called a swappable Legendrian stop if there exists a positive wrapping fixing $\Lambda$ that sends $T_\varepsilon(\Lambda)$ to an arbitrarily small neighbourhood of $T_{-\varepsilon}(\Lambda)$.
\end{definition}
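Informally, the positive Reeb pushoff of $\Lambda$ can be wrapped, staying away from $\Lambda$ itself, all the way around to a small neighbourhood of the negative pushoff; as explained below, this yields the cofinal sequences of positive and negative wrappings used in Propositions \ref{prop:swap-condition2} and \ref{prop:swap-condition4}.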
\begin{example}
The Legendrian stops in Example \ref{ex:Lef-full} are swappable, and we conjecture that the Legendrian stops in Example \ref{ex:triangle-full} are swappable as well. More generally, when $F \subset S^{*}M$ is a Weinstein page of a contact open book decomposition of $S^{*}M$ \cite{Giroux,HondaHuang}, then it is a swappable stop.
However, swappable stops are not necessarily full stops. For instance, for Landau-Ginzburg models $\pi: T^*M \rightarrow \mathbb{C}$ that are not Lefschetz fibrations, $F \subset S^{*}M$ will in general not be a full stop (one can consider $\pi: \mathbb{C}^n \rightarrow \mathbb{C}; \;\; \pi = z_1z_2\dots z_n$ \cite{Nadspherical,AbAurouxHMS}). Sylvan \cite{SylvanOrlov}*{Example 1.4} also explained that one can take any monodromy invariant subset of the Lagrangian skeleton of the fiber of a Lefschetz fibration and obtain a swappable stop.
\end{example}
\begin{example}
There is another way to get new swappable stops from old ones\footnote{The authors would like to thank Emmy Murphy, who explained this construction to us.}. Let $\Lambda \subset \partial_\infty X$ be a swappable stop in the contact boundary of a Liouville domain $X$, and let $\partial_\infty X'$ be the contact boundary of another Liouville domain $X'$. When we take the Liouville connected sum of $X$ and $X'$ along some subcritical Weinstein hypersurface $F \hookrightarrow \partial_\infty X$ and $F \hookrightarrow \partial_\infty X'$ \cite{Avdek}, as the skeleton $\mathfrak{c}_F$ is of subcritical dimension, the positive loop of $\Lambda$ will generically avoid $\mathfrak{c}_F$. Therefore, $\Lambda$ is also swappable in $\partial_\infty (X \#_F X')$.
In particular, let $X = \mathbb{C}^n$ and $F = \mathbb{C}^{n-1}$; then $X \#_F X'$ is the 1-handle connected sum of $\mathbb{C}^n$ and $X'$, which is Liouville homotopy equivalent to $X'$. Thus, for any swappable stop $\Lambda \subset D^{2n-1} \subset S^{2n-1}$ (for example, the skeleton of any page of a contact open book decomposition), by putting it in a Darboux ball in $S^{*}M$ we obtain a swappable stop in $S^{*}M$.
\end{example}
When $\Lambda \subset S^{*}M$ is a swappable Legendrian stop, there exists a cofinal positive (resp.~negative) wrapping that sends $T_{\varepsilon}(\Lambda)$ to an arbitrarily small neighbourhood of $T_{-\varepsilon}(\Lambda)$ (resp.~sends $T_{-\varepsilon}(\Lambda)$ to an arbitrarily small neighbourhood of $T_{\varepsilon}(\Lambda)$). In particular, there exists a (cofinal sequence of) positive contact flows $\varphi_k^t$, $k \in \mathbb{N}$, supported away from $\Lambda$ such that $\varphi_k^1(T_\varepsilon(\Lambda))$ is contained in a small neighbourhood of $T_{-\varepsilon}(\Lambda)$ for $k \gg 0$. We will fix this cofinal sequence of positive flows and check Conditions (2) and (4) of Definition \ref{def:spherical} by considering this particular wrapping.
\begin{proposition}\label{prop:swap-condition2}
Assume $\Lambda$ is swappable. Then the functors $S_\Lambda^+$ and $S_\Lambda^-$ are equivalences.
\end{proposition}
\begin{proof}
It suffices to check that they are fully faithful since $S_\Lambda^+ \vdash S_\Lambda^-$ is an adjunction pair. The computation is symmetric, so we check that $\Hom(S_\Lambda^+ F, S_\Lambda^+ G) = \Hom(F,G)$, or equivalently, that the canonical map
$$ \Hom(F,G) = \Hom(T_\varepsilon (F), T_\varepsilon (G)) \rightarrow \Hom(T_\varepsilon (F), \wrap_\Lambda^+ \circ T_\varepsilon(G)) $$
is an isomorphism. First apply Proposition \ref{prop:hom_w_pm}, so that the map factorizes as
$$\Hom(T_\varepsilon (F), T_\varepsilon (G)) \xrightarrow{\sim} \Hom(T_\varepsilon (F), T_{\varepsilon + \delta} (G)) \rightarrow \Hom(T_\varepsilon (F), \wrap_\Lambda^+ \circ T_\varepsilon(G)) $$
for some $0 < \delta \ll \varepsilon$. Then, by the swappable assumption, for a sequence of descending open neighbourhoods $\{\Omega_k\}_{k \in \mathbb{N}}$ of $\Lambda \subset S^{*}M$ such that $\Omega_{k+1} \subseteq \overline{\Omega_k}$ and
$\bigcap_{k \in \mathbb{N}} \Omega_k = \Lambda,$
there exist (an increasing sequence of) positive Hamiltonian flows $\varphi_k^t$, $k \in \mathbb{N}$, supported away from $\Lambda$ such that
$$\varphi_k^1(T_\varepsilon(\Lambda)) \subseteq T_{-1/k}(\Omega_k)$$
for $k \gg 0$. Thus we are in the situation of Lemma \ref{lem:nearby_cycle}.
\end{proof}
\begin{proposition}\label{prop:swap-condition4}
Assume $\Lambda$ is swappable. Then the canonical map $m_\Lambda^r \rightarrow m_\Lambda^r \circ m_\Lambda \circ m_\Lambda^l \rightarrow S_\Lambda^- \circ m_\Lambda^l [1]$ is an isomorphism.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:natural-trans}, it suffices to show that the map
$$\Hom(T_\varepsilon({F}), w_\Lambda({G})) \rightarrow \Hom(T_{\varepsilon}({F}), \wrap_\Lambda^+ \circ w_\Lambda({G}))$$ is an isomorphism.
Since $\msif (w_\Lambda(G)) \subseteq T_{-\varepsilon} (\Lambda) \cup T_\varepsilon (\Lambda)$, we can again flow it forward by the Reeb flow $T_\delta$ for some $0 < \delta \ll \varepsilon$ by Proposition \ref{prop:hom_w_pm}, so that the above map factorizes as
$$\Hom(T_\varepsilon({F}), w_\Lambda({G})) = \Hom(T_\varepsilon({F}), T_\delta \circ w_\Lambda({G})) \rightarrow \Hom(T_{\varepsilon}({F}), \wrap_\Lambda^+ \circ w_\Lambda({G})).$$
But then the same proof as in the last proposition applies. By the swappable assumption, for a sequence of descending open neighbourhoods $\{\Omega_k\}_{k \in \mathbb{N}}$ of $\Lambda \subset S^{*}M$ such that $\Omega_{k+1} \subseteq \overline{\Omega_k}$ and
$\bigcap_{k \in \mathbb{N}} \Omega_k = \Lambda,$
there exist (an increasing sequence of) positive Hamiltonian flows $\varphi_k^t$, $k \in \mathbb{N}$, supported away from $\Lambda$ such that
$$\varphi_k^t(T_\varepsilon(\Lambda) \cup T_{-\varepsilon}(\Lambda)) \subseteq T_{-1/k}(\Omega_k) \cup T_{-1/2k}(\Lambda)$$
for $k \gg 0$. We can again apply Lemma \ref{lem:nearby_cycle} to conclude that the map is an isomorphism.
\end{proof}
By Propositions \ref{prop:swap-condition2} and \ref{prop:swap-condition4}, we can immediately finish the proof of Theorem \ref{thm:main-fun}, and hence of the swappable stop part of Theorem \ref{thm:main}.
\subsection{Spherical adjunction on subcategories}\label{sec:prop-cpt}
In this section, we restrict to the subcategories of proper objects and of compact objects in the sheaf categories. Over the category of proper objects (equivalently, sheaves with perfect stalks), we will show that the microlocalization
$$m_\Lambda: \Sh^b_\Lambda(M) \rightarrow \msh_\Lambda^b(\Lambda)$$
is a spherical functor, and over the category of compact objects, we will show that the left adjoint of the microlocalization
$$m_\Lambda^l: \msh^c_\Lambda(\Lambda) \rightarrow \Sh_\Lambda^c(M)$$
is a spherical functor. We know that the autoequivalences coming from the twists and cotwists immediately restrict to these subcategories. As a result, once we know that the corresponding functors restrict, they will be spherical.
First, we consider the subcategories of compact objects. Consider the spherical adjunction $m_\Lambda^l \dashv m_\Lambda$. We know that the left adjoint $m_\Lambda^l$ preserves compact objects, i.e.~we have
$$m_\Lambda^l: \msh_\Lambda^c(\Lambda) \rightarrow \Sh_\Lambda^c(M).$$
However, it is not clear whether the microlocalization $m_\Lambda$ also preserves these objects.
\begin{lemma}
Let $\Lambda \subset S^{*}M$ be a subanalytic Legendrian stop. When $m_\Lambda^r$ admits a right adjoint, the essential image of the microlocalization functor
$$m_\Lambda:\, \Sh^c_\Lambda(M) \rightarrow \msh_\Lambda(\Lambda)$$
is contained in $\msh^c_\Lambda(\Lambda)$.
\end{lemma}
\begin{proof}
We know that $m_\Lambda^r$ preserves colimits as it admits a right adjoint. Now, since the right adjoint of $m_\Lambda$ preserves colimits, we can conclude that $m_\Lambda$ preserves compact objects.
\end{proof}
Whenever $m_\Lambda \vdash m_\Lambda^l$ is a spherical adjunction, we know by Remark \ref{rem:sphere-ad} that $m_\Lambda^r$ admits a right adjoint. Therefore the spherical adjunction can always be restricted to the subcategories of compact objects, as we have claimed in Corollary \ref{cor:main}.
Next, we consider the subcategories of proper objects. We know that the microlocalization functor preserves proper objects (equivalently, objects with perfect stalks), i.e.~we have
$$m_\Lambda: \Sh_\Lambda^b(M) \rightarrow \msh_\Lambda^b(\Lambda).$$
However, it is not clear whether the left adjoint $m_\Lambda^l$ and the right adjoint $m_\Lambda^r$ preserve these objects.
\betaegin{lemma}\lambdabel{lem:sphere-proper}
Let $\mathbb{L}ambdambda \subset S^{*}M$ be a subanalytic Legendrian stop. When $m_\mathbb{L}ambdambda^r$ admits a right adjoint, the essential image of the left adjoint of microlocalization functor
$$m_\mathbb{L}ambdambda^r: \mathfrak{m}sh^b_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \rightarrow \mathbb{S}h_\mathbb{L}ambdambda(M)$$
is also contained in $\mathbb{S}h^b_\mathbb{L}ambdambda(M)$.
\varepsilonnd{lemma}
\betaegin{proof}
We recall from Theorem \ref{thm:perfcompact} that
$$\mathbb{S}h^b_\mathbb{L}ambdambda(M) = \mathbb{F}un^{ex}(\mathbb{S}h^c_\mathbb{L}ambdambda(M)^{op}, \cV_0), \;\; \mathfrak{m}sh^b_\mathbb{L}ambdambda(\mathbb{L}ambdambda) = \mathbb{F}un^{ex}(\mathfrak{m}sh^c_\mathbb{L}ambdambda(\mathbb{L}ambdambda)^{op}, \cV_0),$$
where the isomorphism is given by the $\Hom(-, -)$ pairing on $\mathbb{S}h^c_\mathbb{L}ambdambda(M)^{op} \times \mathbb{S}h^b_\mathbb{L}ambdambda(M)$ and respectively on $\mathfrak{m}sh^c_\mathbb{L}ambdambda(\mathbb{L}ambdambda)^{op} \times \mathfrak{m}sh^b_\mathbb{L}ambdambda(\mathbb{L}ambdambda)$. Then since we know that microlocalization preserves compact objects
$$m_\mathbb{L}ambdambda: \mathbb{S}h^c_\mathbb{L}ambdambda(M) \rightarrow \mathfrak{m}sh^c_\mathbb{L}ambdambda(\mathbb{L}ambdambda),$$
the right adjoint $m_\mathbb{L}ambdambda^r$ preserves proper objects: indeed, for ${G} \in \mathfrak{m}sh^b_\mathbb{L}ambdambda(\mathbb{L}ambdambda)$ and any compact ${F} \in \mathbb{S}h^c_\mathbb{L}ambdambda(M)$ we have $\Hom({F}, m_\mathbb{L}ambdambda^r {G}) \simeq \Hom(m_\mathbb{L}ambdambda {F}, {G}) \in \cV_0$, since $m_\mathbb{L}ambdambda {F}$ is compact and ${G}$ is proper.
\varepsilonnd{proof}
Whenever $m_\mathbb{L}ambdambda \vdash m_\mathbb{L}ambdambda^l$ is a spherical adjunction, we know by Remark \ref{rem:sphere-ad} that $m_\mathbb{L}ambdambda^r$ admits a right adjoint, so the adjunction $m_\mathbb{L}ambdambda^r \vdash m_\mathbb{L}ambdambda$ can be restricted to the subcategories of proper objects.
Therefore by Remark \ref{rem:sphere-ad} the spherical adjunction $m_\mathbb{L}ambdambda \vdash m_\mathbb{L}ambdambda^l$ can always be restricted to the subcategories of proper objects, as we have claimed in Corollary \ref{cor:main}.
As we have seen, the candidate right adjoint of $m_\mathbb{L}ambdambda^l$ will simply be the microlocalization functor
$$m_\mathbb{L}ambdambda:\, \mathbb{S}h_\mathbb{L}ambdambda(M) \rightarrow \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda).$$
The candidate left adjoint of $m_\mathbb{L}ambdambda^l$, by Remark \ref{rem:sphere-ad}, is the functor
$$m_\mathbb{L}ambdambda \circ {S}_\mathbb{L}ambdambda^-[1]: \, \mathbb{S}h_\mathbb{L}ambdambda(M) \rightarrow \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda).$$
The reader may be puzzled by this asymmetry, since $m_\mathbb{L}ambdambda^l$ is supposed to be the cup functor on wrapped Fukaya categories, where no such asymmetry appears. We can explain this as follows. Consider the category of wrapped sheaves in \cite{Kuo-wrapped-sheaves}; there is a preferred equivalence $\mathfrak{m}athfrak{W}_\mathbb{L}ambdambda^+: \wsh_\mathbb{L}ambdambda(M) \rightarrow \mathbb{S}h^c_\mathbb{L}ambdambda(M)$. If we instead replace the domain $\mathbb{S}h^c_\mathbb{L}ambdambda(M)$ by $\wsh_\mathbb{L}ambdambda(M)$, then $m_\mathbb{L}ambdambda^l$ can be replaced by the doubling functor $w_\mathbb{L}ambdambda[-1]$, and one can easily see that
$$m_\mathbb{L}ambdambda^r = m_\mathbb{L}ambdambda \circ \mathfrak{m}athfrak{W}_\mathbb{L}ambdambda^+, \,\,\, m_\mathbb{L}ambdambda^l = m_\mathbb{L}ambdambda \circ \mathfrak{m}athfrak{W}_\mathbb{L}ambdambda^-[1].$$
Then $m_\mathbb{L}ambdambda^r$ (resp.~$m_\mathbb{L}ambdambda^l$) is indeed the cap functor obtained by wrapping positively (resp.~negatively) into the Legendrian $\mathbb{L}ambdambda$ and then taking microlocalization, i.e.~the sheaf theoretic restriction.
\subsection{Serre functor on proper subcategory}\lambdabel{sec:serre-proper}
In this section, we finally prove that $S^-_\mathbb{L}ambdambda$ is in fact the Serre functor on $\mathbb{S}h^b_\mathbb{L}ambdambda(M)$ when $\mathbb{L}ambdambda$ is a full stop or a swappable stop. Thus, in the case of partially wrapped Fukaya categories on cotangent bundles, we prove the folklore conjecture on partially wrapped Fukaya categories associated to Lefschetz fibrations formulated by Seidel \cite{SeidelSH=HH} (who attributes the conjecture to Kontsevich), for which partial results appear in \cite{SeidelFukI,SeidelFukII,SeidelFukIV1/2}.
Let $\sA$ be a stable category over $\cV$. Recall that $\sA$ is a proper category if for any $X, Y \in \sA$,
$$\Hom_\sA(X, Y) \in \cV_0.$$
When $\sA$ is a proper category, by the above lemma, we can always define the (right) dualizing bimodule $\sA^*$.
\betaegin{definition}
For a proper stable category $\sA$, the (right) dualizing bimodule $\sA^*$ is defined by
$$\sA^*(X, Y) = \Hom_\sA(X, Y)^\vee = \Hom_\cV(\Hom_\sA(X, Y), 1_\cV).$$
\varepsilonnd{definition}
\betaegin{definition}
For a proper stable category $\sA$, a Serre functor $S_\sA$ is the functor that represents the right dualizing bimodule $\sA^*$, i.e.
$$\Hom_\sA(-, -)^\vee \simeq \Hom_\sA(-, S_\sA(-)).$$
\varepsilonnd{definition}
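For orientation, we recall the classical example from algebraic geometry (a standard recollection, included only as a sanity check and not used in the arguments below): for a smooth proper variety $X$ of dimension $n$ over a field, the category $\mathrm{Perf}(X)$ is proper and admits the Serre functor $S(-) = - \otimes \omega_X[n]$, so that Serre duality takes the form
$$\Hom(\mathcal{F}, \mathcal{G})^\vee \simeq \Hom(\mathcal{G}, \mathcal{F} \otimes \omega_X[n]).$$
The twist by $\omega_M$ in Proposition \ref{prop:serre} below plays the role of the twist by the canonical bundle in this classical statement.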
\betaegin{proposition}\lambdabel{prop:serre}
Let $\mathbb{L}ambdambda \subset S^{*}M$ be a full or swappable compact subanalytic Legendrian stop. Then $S_\mathbb{L}ambdambda^- \text{ot}imes \omega_M$ is the Serre functor on the category $\mathbb{S}h^b_\mathbb{L}ambdambda(M)_0$ of sheaves with perfect stalks and compact supports. In particular, when $M$ is orientable, $S_\mathbb{L}ambdambda^-[-n]$ is the Serre functor on $\mathbb{S}h^b_\mathbb{L}ambdambda(M)_0$.
\varepsilonnd{proposition}
\betaegin{proof}
First, by Lemma \ref{lem:sphere-proper}, we know that $S_\mathbb{L}ambdambda^-: \mathbb{S}h^b_\mathbb{L}ambdambda(M) \rightarrow \mathbb{S}h^b_\mathbb{L}ambdambda(M)$ preserves perfect stalks. Moreover, when $M$ is noncompact and $\mathbb{L}ambdambda$ is a swappable stop, we argue that $S_\mathbb{L}ambdambda^-$ also preserves compact supports. By the swappable assumption, for a sequence of descending open neighbourhoods $\{\Omega_k\}_{k \in \mathbb{N}N}$ of $\mathbb{L}ambdambda \subset S^{*}M$ such that $\Omega_{k+1} \subseteq \overline{\Omega_k}$ and
$\betaigcap_{k \in \mathbb{N}N} \Omega_k = \mathbb{L}ambdambda,$
there exist (an increasing sequence of) positive Hamiltonian flows $\varphi_k^t$, $k \in \mathbb{N}N$, supported away from $\mathbb{L}ambdambda$ such that
$$\varphi_k^{-1}(T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)) \subset T_{1/k}(\Omega_k)$$
for $k \mathfrak{m}athfrak{g}g 0$. Without loss of generality, we can even assume that $\varphi_k^t$ are all supported in some common compact subset. Since $M$ is noncompact, consider any unbounded region $U$ such that $\mathbb{L}ambdambda \cap S^*U = \varnothing$. Then there exists an open subset $U' \subset U \subset M$ such that $\varphi_k^t(T_\varepsilonpsilon(\mathbb{L}ambdambda)) \cap S^*U' = \varnothing$ for $k \mathfrak{m}athfrak{g}g 0$. Then
$$\Gamma(U', S_\mathbb{L}ambdambda^-(F)) = \Gamma\Big(U', \lmi{k \rightarrow \infty}\,K(\varphi_k^{-1}) \circ T_{-\varepsilonpsilon}(F)\Big) = 0.$$
Since $\mathfrak{m}s^\infty(S_\mathbb{L}ambdambda^-(F)) \subseteq \mathbb{L}ambdambda$, we get $\Gamma(U, S_\mathbb{L}ambdambda^-(F)) = 0$, which implies that $S^-_\mathbb{L}ambdambda(F)$ has compact support.
Since we have concluded that $S_\mathbb{L}ambdambda^-: \mathbb{S}h^b_\mathbb{L}ambdambda(M)_0 \rightarrow \mathbb{S}h^b_\mathbb{L}ambdambda(M)_0$ preserves perfect stalks and compact supports, the proposition then immediately follows from Theorem \ref{w=ad}, Proposition \ref{prop:sab-serre},
and the fact that $\wrap_\mathbb{L}ambdambda^-(- \text{ot}imes \omega_M) = (\wrap_\mathbb{L}ambdambda^-(-))\text{ot}imes \omega_M$ since $\omega_M$ is a local system.
\varepsilonnd{proof}
We should remark that, even though Proposition \ref{prop:sab-serre} is true in general, the above statement does not hold without the assumption on $\mathbb{L}ambdambda \subseteq S^*M$. For example, in Section \ref{sec:example} we will see an example where $S_\mathbb{L}ambdambda^-$ fails to be an equivalence on $\mathbb{S}h_\mathbb{L}ambdambda^b(M)$.
Finally, we explain the implication of the above result for partially wrapped Fukaya categories. Ganatra-Pardon-Shende \cite[Proposition 7.24]{Ganatra-Pardon-Shende3} have proved that there is a commutative diagram intertwining the cup functor and the left adjoint of the microlocalization functor
\[\timesymatrix{
\mathfrak{m}athcal{W}(F) \alphar[r]^{\sim\hspace{10pt}} \alphar[d]_{\cup_F} & \mathfrak{m}u Sh^c_{\mathfrak{m}athfrak{c}_F}(\mathfrak{m}athfrak{c}_F) \alphar[d]^{m_{\mathfrak{m}athfrak{c}_F}^*}\\
\mathfrak{m}athcal{W}(T^*M, F) \alphar[r]^{\sim} & Sh^c_{\mathfrak{m}athfrak{c}_F}(M).
}\]
Sylvan has shown that the spherical twist associated to the cup functor is the wrap-once functor \cite{SylvanOrlov}, and hence it intertwines with the wrap-once functor in sheaf categories. Consequently, we have proven that the negative wrap-once functor
$$\cS_\mathbb{L}ambdambda^-: \mathfrak{m}athrm{Prop}\,\mathfrak{m}athcal{W}(T^*M, F) \rightarrow \mathfrak{m}athrm{Prop}\,\mathfrak{m}athcal{W}(T^*M, F)$$
is indeed the Serre functor on $\mathfrak{m}athrm{Prop}\,\mathfrak{m}athcal{W}(T^*M, F)$.
In particular, let $\varphii: T^*M \rightarrow \mathfrak{m}athbb{C}$ be a symplectic Lefschetz fibration and $F = \varphii^{-1}(\infty)$ be the Weinstein fiber. Let $\mathfrak{m}athfrak{c}_F$ be the Lagrangian skeleton of $F$. Then by Ganatra-Pardon-Shende \cite{GPS2,Ganatra-Pardon-Shende3} we know that
$$\mathfrak{m}athrm{Perf}\,\mathfrak{m}athcal{W}(T^*M, F) = \mathfrak{m}athrm{Prop}\,\mathfrak{m}athcal{W}(T^*M, F)$$
is a proper subcategory. Therefore, $S_\mathbb{L}ambdambda^-$ is the Serre functor on the partially wrapped Fukaya category associated to a Lefschetz fibration.
\betaegin{remark}\lambdabel{rem:relativeCY}
Finally, we remark that according to the result of Katzarkov-Pandit-Spaide \cite{KPSsphericalCY}, the existence of the spherical adjunction in the next section together with a compatible Serre functor will imply the existence of the weak relative proper Calabi-Yau structure introduced in \cite{relativeCY} for the pair
$$m_\mathbb{L}ambdambda: \mathbb{S}h^b_\mathbb{L}ambdambda(M) \rightarrow \mathfrak{m}sh_\mathbb{L}ambdambda^b(\mathbb{L}ambdambda)$$
(even though we do not explicitly show compatibility of the Serre functor, we believe that it is basically clear from the definition in \cite{KPSsphericalCY}).
However, we will see in the last section that the spherical adjunction does not hold for all of the pairs that are expected to be relative Calabi-Yau. On the contrary, as explained in Sylvan \cite{SylvanOrlov} or Remarks \ref{rem:multi-component-ad} and \ref{rem:multi-component-double}, when we consider microlocalization along a single component of a Legendrian stop with multiple components,
$$m_{\mathbb{L}ambdambda_i}: \mathbb{S}h_\mathbb{L}ambdambda(M) \rightarrow \mathfrak{m}sh_{\mathbb{L}ambdambda}(\mathbb{L}ambdambda) \rightarrow \mathfrak{m}sh_{\mathbb{L}ambdambda_i}(\mathbb{L}ambdambda_i),$$
we still get spherical adjunctions, but the pair is unlikely to be relative Calabi-Yau. We will investigate relative Calabi-Yau structures separately (and hopefully, in full generality) in future works.
\varepsilonnd{remark}
\section{Spherical pairs and perverse sch\"{o}bers}
In this section, we provide immediate corollaries of the main theorem and discuss how they give rise to spherical pairs and semi-orthogonal decompositions, and prove Proposition \ref{prop:sphere-pair-tri} and Theorem \ref{thm:sphere-pair-var}. Using Proposition \ref{prop:sphere-pair-tri}, we will also give an explicit characterization of the spherical twists and dual twists.
The description of spherical pairs comes from the relation between spherical functors and perverse sheaves of categories (called perverse sch\"{o}bers) on a disk with one singularity \cite{KapraSchober}. For a perverse sch\"{o}ber on $\mathfrak{m}athbb{D}^2$ with singularity at $0$ associated to the spherical functor
$$F: {\sA} \rightarrow {\sB}$$
consider a single cut $[0, 1] \subset \mathfrak{m}athbb{D}^2$. Then the nearby category at $0$ is ${\sA}$ while the vanishing category on $(0, 1]$ is ${\sB}$, and the spherical twist is determined by monodromy around $\mathfrak{m}athbb{D}^2 \betaackslash \{0\}$. Kapranov-Schechtman realized a symmetric description of the perverse sch\"{o}ber determined by the diagram
$${\sB}_- \timesleftarrow{F_-} {\sC} \timesrightarrow{F_+} {\sB}_+,$$
by considering a double cut on the disk $[-1, 1] \subset \mathfrak{m}athbb{D}^2$. The nearby category at $0$ is ${\sC}$ while the vanishing category on $[-1, 0)$ (resp.~$(0, 1]$) is ${\sB}_-$ (resp.~${\sB}_+$). The nearby category ${\sC}$ will carry a 4-periodic semi-orthogonal decomposition. Such a viewpoint will provide new information about the microlocal sheaf categories.
We will provide precise definitions of the terminology in this section and then prove the results stated in the introduction. Note that none of the arguments in this section essentially relies on microlocal sheaf theory, and therefore they can all be rewritten using Lagrangian Floer theory.
\subsection{Semi-orthogonal decomposition}\lambdabel{sec:semi-ortho}
Firstly, we explain how spherical adjunctions give rise to 4-periodic semi-orthogonal decompositions and how spherical twists are given by iterated mutations, following Halpern-Leistner--Shipman \cite{SphericalGIT} in the case of dg categories and Dyckerhoff-Kapranov-Schechtman-Soibelman \cite{SphericalInfty} in general (this is how Sylvan proved that the Orlov cup functor is spherical \cite{SylvanOrlov}).
\betaegin{theorem}[Halpern-Leistner--Shipman \cite{SphericalGIT}, \cite{SphericalInfty}]\lambdabel{thm:4periodic}
Let $F: {\sA} \rightarrow {\sB}$ be an $\infty$-functor, and let ${\sC}$ be the semi-orthogonal gluing of ${\sA}$ and ${\sB}$ along the graphical bimodule $\Gamma(F)$. Then $F$ is spherical if and only if ${\sC}$ fits into a 4-periodic semi-orthogonal decomposition such that ${\sA}^{\varphierp\varphierp\varphierp\varphierp} = {\sA}$. The dual twist is the iterated mutation $T_{\sA} = R_{\sA} \circ R_{{\sA}^{\varphierp\varphierp}}$, and the dual cotwist is $S_{\sB} = L_{\sA} \circ L_{{\sA}^{\varphierp\varphierp}}$.
\varepsilonnd{theorem}
\betaegin{remark}
Given a pair of semi-orthogonal decompositions ${\sC} = \left<{\sA}_+, {\sB} \right> \simeq \left<{\sB}, {\sA}_-\right>$, the right mutation functor is the equivalence $R_{{\sA}}: {\sA}_+ \rightarrow {\sA}_-$ defined by the composition of embedding and projection.
\varepsilonnd{remark}
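For the reader's convenience, we also recall one common definition of spherical functors (a standard recollection; sign and shift conventions differ across the literature, and we do not rely on this particular normalization): a functor $F: {\sA} \rightarrow {\sB}$ admitting left and right adjoints is spherical if the twist and cotwist
$$T_{\sB} = \mathrm{cofib}\big(F F^r \rightarrow \mathrm{id}_{\sB}\big), \;\; S_{\sA} = \mathrm{fib}\big(\mathrm{id}_{\sA} \rightarrow F^r F\big),$$
defined by the counit and unit of the adjunction between $F$ and $F^r$, are both equivalences. Theorem \ref{thm:4periodic} repackages these autoequivalences as iterated mutations of the glued category.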
Therefore, restricting to our setting of sheaf categories, our main theorem is equivalent to the following statement.
\betaegin{proposition}\lambdabel{prop:semi-ortho}
Let $\mathbb{L}ambdambda \subseteq S^{*}M$ be a compact subanalytic Legendrian stop. Then under the fully faithful embedding $w_\mathbb{L}ambdambda: \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \rightarrow \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M)$, there is a semi-orthogonal decomposition
$$\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M) \simeq \lambdangle \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda), \mathbb{S}h_{T_{\varepsilonpsilon}(\mathbb{L}ambdambda)}(M) \rangle.$$
\varepsilonnd{proposition}
\betaegin{proof}
We can check that $\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M)$ is the semi-orthogonal gluing (i.e.~Grothendieck construction) of $\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda)$ and $\mathbb{S}h_{\mathbb{L}ambdambda}(M)$ along the graphical bimodule $\Gamma(m_\mathbb{L}ambdambda)$. First, we show that
$$\lambdangle \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda), \mathbb{S}h_{T_{\varepsilonpsilon}(\mathbb{L}ambdambda)}(M) \rangle \hookrightarrow \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M).$$
By Theorem \ref{thm:doubling_leftad}, Remark \ref{rem:doubling_rightad} and \ref{rem:doubling_leftad}, we know that
$$\Hom(w_\mathbb{L}ambdambda({F}), T_{\varepsilonpsilon}({G})) \simeq 0, \;\; \Hom(T_{\varepsilonpsilon}({G}), w_\mathbb{L}ambdambda({F})) \simeq \Hom(m_\mathbb{L}ambdambda(G), {F} ).$$
This proves full faithfulness.
For essential surjectivity, consider ${F} \in \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M)$. Then Theorem \ref{thm:doubling_rightad} and Remark \ref{rem:multi-component-double} imply the following fiber sequence
$$\wrap_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+ w_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)} m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}({F}) \rightarrow {F} \rightarrow \wrap_{T_{\varepsilonpsilon}(\mathbb{L}ambdambda)}^+(F) .$$
Since $\wrap_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+ w_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)} m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}({F}) = w_{\mathbb{L}ambdambda} m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}({F})$, we obtain a fiber sequence
$$w_{\mathbb{L}ambdambda} m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}({F}) \rightarrow {F} \rightarrow \wrap_{T_{\varepsilonpsilon}(\mathbb{L}ambdambda)}^+(F) ,$$
where $\wrap_{T_{\varepsilonpsilon}(\mathbb{L}ambdambda)}^+(F) \in \mathbb{S}h_{T_{\varepsilonpsilon}(\mathbb{L}ambdambda)}(M)$ and $m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}({F}) \in \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda)$. This shows the essential surjectivity and thus the semi-orthogonal decomposition.
\varepsilonnd{proof}
\betaegin{corollary}\lambdabel{cor:semi-ortho}
The semi-orthogonal decomposition can be restricted to compact objects, namely there is a semi-orthogonal decomposition
$$\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^c(M) \simeq \lambdangle \mathbb{S}h_{T_{\varepsilonpsilon}(\mathbb{L}ambdambda)}^c(M), \mathfrak{m}sh_\mathbb{L}ambdambda^c(\mathbb{L}ambdambda) \rangle.$$
\varepsilonnd{corollary}
\betaegin{proof}
Consider the fiber sequence of categories
$$\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \hookrightarrow \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M) \twoheadrightarrow \mathbb{S}h_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M).$$
For $F \in \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^c(M)$, we know that $\wrap_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+(F) \in \mathbb{S}h_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^c(M)$, since $\wrap_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+$ is the stop removal functor as explained in Proposition \ref{prop:stopremoval} and Remark \ref{rem:stoprem-cpt}.
Moreover, by Proposition \ref{prop:stopremoval} we know that the fiber of the stop removal functor is compactly generated by the corepresentatives of microstalks at $T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)$ in the category $\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M)$, which by definition are
$$m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}^l(\mathfrak{m}u_i) = \wrap_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+ w_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(\mathfrak{m}u_i) = w_\mathbb{L}ambdambda(\mathfrak{m}u_i),$$
where $\mathfrak{m}u_i \in \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda)$ are corepresentatives of the microstalks in the category $\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda)$. Then when restricting to compact objects, the fiber is $\mathfrak{m}sh_\mathbb{L}ambdambda^c(\mathbb{L}ambdambda)$ (split) generated by $\mathfrak{m}u_i \in \mathfrak{m}sh_\mathbb{L}ambdambda^c(\mathbb{L}ambdambda)$.
\varepsilonnd{proof}
\betaegin{remark}
We explain how this is related to Sylvan's proof of the spherical adjunction \cite{SylvanOrlov}*{Section 4}. Sylvan considered a sectorial gluing of the original Weinstein sector $(X, F)$ and the $A_2$-sector $F\lambdangle 2\rangle = (\mathfrak{m}athbb{C}, \{e^{2\varphii \sqrt{-1} j/3}\infty\}_{0\leq j\leq 2}) \times F$, and showed semi-orthogonal decompositions for the ambient sector $(X, F) \cup_F F\lambdangle 2 \rangle$. However, the sector $(X, F) \cup_F F\lambdangle 2 \rangle$ is exactly $(X, T_{-\varepsilonpsilon}(F) \cup T_\varepsilonpsilon(F))$.
\varepsilonnd{remark}
\betaegin{example}
Let $\mathbb{L}ambdambda \subseteq J^1(M) \cong S^{*}_{\tau>0}(M \times \mathfrak{m}athbb{R})$ be a smooth Legendrian with no Reeb chords (i.e.~$\mathbb{L}ambdambda$ is the Legendrian lift of an embedded Lagrangian). Then by \cite{Gui}*{Proposition 24.1} we know that the category of compactly supported sheaves $\mathbb{S}h_{\mathbb{L}ambdambda}(M \times \mathfrak{m}athbb{R})_0 \simeq 0$ vanishes, and therefore the category of compactly supported sheaves with singular support on the double copied Legendrian is
$$\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M \times \mathfrak{m}athbb{R})_0 \simeq \mathfrak{m}sh_{\mathbb{L}ambdambda}(\mathbb{L}ambdambda).$$
It is interesting to consider the case when $\mathbb{L}ambdambda$ is a singular Legendrian with no Reeb chords, which should relate to the framework of Nadler-Shende \cite{Shendehprinciple,NadShen} where they embedded the Lagrangian skeleton of a Weinstein sector into unit cotangent bundles via the $h$-principle.
\varepsilonnd{example}
\betaegin{example}
Let $\mathbb{L}ambdambda_\text{loose} \subseteq J^1(M) \cong S^{*}_{\tau>0}(M \times \mathfrak{m}athbb{R})$ be a stabilized or (not necessarily smooth) loose Legendrian \cite{CE,Loose,ArbLoose}*{Chapter 7}. Then by \cite{STZ}*{Proposition 5.8} we know that the category of compactly supported sheaves $\mathbb{S}h_{\mathbb{L}ambdambda_\text{loose}}(M \times \mathfrak{m}athbb{R})_0 \simeq 0$ vanishes, and therefore the category of compactly supported sheaves with singular support on the double copied Legendrian is
$$\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_\text{loose}) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_\text{loose})}(M \times \mathfrak{m}athbb{R})_0 \simeq \mathfrak{m}sh_{\mathbb{L}ambdambda_\text{loose}}(\mathbb{L}ambdambda_\text{loose}).$$
\varepsilonnd{example}
Moreover, from the point of view of $K$-theory, the semi-orthogonal gluing (i.e.~the Grothendieck construction) fits into a (diagram of) simplicial $\infty$-categories given by the relative Waldhausen $S$-construction. Following Dyckerhoff-Kapranov-Schechtman-Soibelman \cite{SphericalInfty}, we state the following conjecture.
\betaegin{conjecture}
The relative Waldhausen $S$-construction of the left adjoint of the microlocalization functor is the simplicial $\infty$-category
$$S_n(m_\mathbb{L}ambdambda^l) = \mathbb{S}h_{\betaigcup_{j=0}^{n}T_{j\varepsilonpsilon}(\mathbb{L}ambdambda)}(M).$$
\varepsilonnd{conjecture}
\betaegin{remark}
Consider a sectorial gluing of the original Weinstein sector $(X, F)$ and the $A_{n+1}$-sector $F\lambdangle n+1 \rangle = (\mathfrak{m}athbb{C}, \{e^{2\varphii \sqrt{-1} j/{(n+2)}}\infty\}_{0\leq j\leq n+1}) \times F$. Then our geometric model for Waldhausen $S$-construction is $(X, F) \cup_F F\lambdangle n+1 \rangle = \betaig( X, \betaigcup_{0\leq j\leq n}T_{j\varepsilonpsilon}(F) \betaig)$.
\varepsilonnd{remark}
Going back to semi-orthogonal decompositions and spherical adjunctions, combining Theorem \ref{thm:4periodic} and Proposition \ref{prop:semi-ortho}, we immediately get the following corollary from the spherical adjunction we have proved.
\betaegin{corollary}\lambdabel{cor:4-periodic-shv}
Let $\mathbb{L}ambdambda \subseteq S^{*}M$ be a swappable Legendrian stop or full Legendrian stop. Then under the fully faithful embedding $w_\mathbb{L}ambdambda: \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \rightarrow \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M)$, there are semi-orthogonal decompositions
$$\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M) \simeq \lambdangle \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda), \mathbb{S}h_{T_{\varepsilonpsilon}(\mathbb{L}ambdambda)}(M)\rangle \simeq \lambdangle \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(M), \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \rangle.$$
\varepsilonnd{corollary}
\betaegin{conjecture}
When $\mathbb{L}ambdambda \subseteq S^{*}M$ is a swappable Legendrian stop or a full Legendrian stop, we expect that the simplicial $\infty$-category
$$S_n(m_\mathbb{L}ambdambda^l) = \mathbb{S}h_{\betaigcup_{j=0}^{n}T_{j\varepsilonpsilon}(\mathbb{L}ambdambda)}(M)$$
can be lifted to a paracyclic $\infty$-category.
\varepsilonnd{conjecture}
In fact, the semi-orthogonal decompositions
$$\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M) = \lambdangle \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda), \mathbb{S}h_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M) \rangle = \lambdangle \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(M), \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \rangle$$ also provide (trivial) examples of spherical pairs, which we now introduce.
For a diagram of $\infty$-functors over stable $\infty$-categories
$${\sB}_- \timesleftarrow{F_-} {\sC} \timesrightarrow{F_+} {\sB}_+$$
where $F_\varphim$ admit fully faithful left and right adjoints $F_\varphim^{l,r}$, we can write
$${\sA}_- = \ker(F_+) = ^\varphierp(F_+^r{\sB}_+) = (F_+^l{\sB}_+)^\varphierp, \;\; {\sA}_+ = \ker(F_-) = ^\varphierp(F_-^r{\sB}_-) = (F_-^l{\sB}_-)^\varphierp,$$
and write $\iota_\varphim: {\sA}_\varphim \rightarrow {\sC}$ (which admit left and right adjoints $\iota_\varphim^*$ and $\iota_\varphim^!$). The following definition is essentially a reinterpretation of the conditions in Theorem \ref{thm:4periodic}.
\betaegin{definition}
A diagram of $\infty$-functors over stable $\infty$-categories
$${\sB}_- \timesleftarrow{F_-} {\sC} \timesrightarrow{F_+} {\sB}_+$$
is called a spherical pair if $F_\varphim$ admit fully faithful left and right adjoints $F_\varphim^{l,r}$ such that
\betaegin{enumerate}
\item the compositions $F^l_+ \circ F_-: {\sB}_+ \rightarrow {\sB}_-, \; F^l_- \circ F_+: {\sB}_- \rightarrow {\sB}_+$ are equivalences;
\item the compositions $\iota^!_+ \circ \iota_-: {\sA}_+ \rightarrow {\sA}_-, \; \iota^!_- \circ \iota_+: {\sA}_- \rightarrow {\sA}_+$ are equivalences.
\varepsilonnd{enumerate}
\varepsilonnd{definition}
Then Corollary \ref{cor:4-periodic-shv} immediately implies the following proposition.
\betaegin{proposition}\lambdabel{prop:sphere-pair-tri}
Let $\mathbb{L}ambdambda \subset S^{*}M$ be a swappable Legendrian stop or full Legendrian stop. Then there exist spherical pairs of the form
$$\mathfrak{m}sh_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)) \leftarrow \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M) \rightarrow \mathfrak{m}sh_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}(T_{\varepsilonpsilon}(\mathbb{L}ambdambda)).$$
Meanwhile, there is also a spherical pair
$$\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(M) \rightarrow \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M) \leftarrow \mathbb{S}h_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M).$$
\varepsilonnd{proposition}
\betaegin{remark}
We can restrict the spherical pairs to the subcategories of compact or proper objects as explained in Section \ref{sec:prop-cpt} and Corollary \ref{cor:semi-ortho}.
\varepsilonnd{remark}
Moreover, from the description in Theorem \ref{thm:4periodic}, we can show that the spherical twists (resp.~dual twists) are simply the positive (resp.~negative) monodromy functor, under the inclusion by the doubling functor.
\betaegin{corollary}\lambdabel{cor:twist}
Let $\mathbb{L}ambdambda \subseteq S^{*}M$ be a swappable Legendrian stop or full Legendrian stop. Under the inclusion by the doubling functor $w_\mathbb{L}ambdambda: \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \hookrightarrow \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M)$, the spherical dual twist is computed by
$$S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^- \circ S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^-|_{w_\mathbb{L}ambdambda(\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda))}[2].$$
Similarly, the spherical dual cotwist is computed by $$S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+ \circ S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+|_{\mathbb{S}h_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M)} = S_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+.$$
\varepsilonnd{corollary}
\betaegin{proof}
By Proposition \ref{prop:sphere-pair-tri} and Theorem \ref{thm:4periodic}, it suffices to show that the right mutation functor
$$R_{\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda)}: w_\mathbb{L}ambdambda(\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda)) \hookrightarrow \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M) \rightarrow w_\mathbb{L}ambdambda(\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda))$$
is the functor $S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^-|_{w_\mathbb{L}ambdambda(\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda))}[1]$. Consider the pair of semi-orthogonal decompositions
$$\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M) = \left< \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda), \mathbb{S}h_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M) \right> = \left< \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(M), \mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda) \right>.$$
The first semi-orthogonal decomposition is realized in Proposition \ref{prop:semi-ortho} by $m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}^l m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(F) \rightarrow F \rightarrow \mathfrak{m}athfrak{W}_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+ F$. Following a similar argument, the second semi-orthogonal decomposition is realized by the fiber sequence
$$\wrap_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}^- F \rightarrow F \rightarrow m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}^r m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(F).$$
Therefore, using Theorem \ref{thm:doubling_rightad} and Remark \ref{rem:multi-component-double}, one can compute the right mutation functor associated to this pair of semi-orthogonal decompositions as
\betaegin{align*}
R_{\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda)}&(w_{\mathbb{L}ambdambda} m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(F)) = m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}^r m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)} \circ m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}^l m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(F)[1] = m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}^r m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(F)[1] \\
&= \wrap_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^- w_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)} m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(F)[1] = S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^- w_{\mathbb{L}ambdambda} m_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(F)[1].
\varepsilonnd{align*}
One can also compute $R_{\mathfrak{m}sh_\mathbb{L}ambdambda(\mathbb{L}ambdambda)^{\varphierp\varphierp}}$ in the same way, which implies the result on spherical twists.
For spherical cotwists, it suffices to show that the left mutation functor is $S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+|_{\mathbb{S}h_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M)} $. Then using the above semi-orthogonal decompositions, we have
\betaegin{align*}
L_{\mathbb{S}h_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}(M)}(\wrap_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^-F) = \wrap_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}^+ \circ \wrap_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^-F = S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+ \circ \wrap_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^-F.
\varepsilonnd{align*}
One can also compute $L_{\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}(M)}$ in the same way. This implies the result on spherical dual cotwists. Finally, we have the computation
$$S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+ \circ S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+(\wrap_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^-F) = \wrap_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)}^+ \circ \wrap_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+(\wrap_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^-F) = S_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+(\wrap_{T_\varepsilonpsilon(\mathbb{L}ambdambda)}^-F),$$
which confirms the last assertion.
\varepsilonnd{proof}
\betaegin{remark}
Note that the functor $S_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda)}^+$ intertwines $T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)$ and $T_\varepsilonpsilon(\mathbb{L}ambdambda)$ by a positive isotopy. Hence applying the functor twice has the effect of the monodromy functor.
\varepsilonnd{remark}
These seem to be trivial examples of spherical pairs. In the following section, we will provide some examples that are less trivial, namely pairs of different subanalytic Legendrians.
\subsection{Spherical pairs from variation of skeleta}
In general, there are pairs of subanalytic Legendrian stops, not necessarily homeomorphic to each other, that give rise to spherical pairs. For example, Donovan-Kuwagaki \cite{DonovanKuwa} have considered two specific examples from homological mirror symmetry of toric stacks. They presented equivalences between sheaf categories
$$\mathbb{S}h^c_{\mathbb{L}ambdambda_+}(T^n) \timesrightarrow{\sim} \mathbb{S}h^c_{\mathbb{L}ambdambda_-}(T^n)$$
that are mirror to certain flop-flop equivalences of the mirror toric stacks defined by GIT quotients \cite{Flopsphere} (more generally, it is discussed in \cite{ZhouVarI} how different GIT quotients are related by semi-orthogonal decompositions).
\betaegin{remark}
Unlike in algebraic geometry, where the equivalence is between derived categories of varieties related by flops that are only birationally equivalent, in symplectic geometry we expect that the Weinstein sectors $(T^*T^n, \mathbb{L}ambdambda_-)$ and $(T^*T^n, \mathbb{L}ambdambda_+)$ are Weinstein homotopic (see \cite{CE,WeinRevisit} for the definition), even though the Lagrangian skeleta and associated Landau-Ginzburg potentials \cite{RSTZSkel,ZhouSkel,GammageShende} are different. This reflects the flexibility on the symplectic side.
\varepsilonnd{remark}
Here we provide a general criterion for this type of equivalence between microlocal sheaf categories. In known examples of such equivalences, the Legendrian stops are required to be related by non-characteristic deformations \cite{DonovanKuwa,ZhouVarI}. However, thanks to the equivalences coming from spherical adjunctions, we provide a criterion that does not a priori require the existence of non-characteristic deformations (though it often turns out that the Legendrian stops are related by such deformations).
Note that any two Legendrian stops are generically disjoint after a small contact perturbation.
\betaegin{definition}
Let $\mathbb{L}ambdambda_\varphim \subseteq S^{*}M$ be two disjoint closed subanalytic Legendrian stops. Suppose there exist both a positive and a negative Hamiltonian flow that send $\mathbb{L}ambdambda_+$ into an arbitrarily small neighbourhood of $\mathbb{L}ambdambda_-$, and both a positive and a negative Hamiltonian flow that send $\mathbb{L}ambdambda_-$ into an arbitrarily small neighbourhood of $\mathbb{L}ambdambda_+$. Then $(\mathbb{L}ambdambda_-, \mathbb{L}ambdambda_+)$ is called a swappable pair.
\varepsilonnd{definition}
Both of the following examples are considered in \cite{DonovanKuwa,ZhouVarI}, though from a different perspective.
\betaegin{example}\lambdabel{ex:pair}
We can consider the mirror to the flops associated to $X_0 = \mathfrak{m}athbb{C}^2/\mathfrak{m}athbb{Z}_2$. On the one hand, consider the Deligne-Mumford quotient stack
$$X_- = [\mathfrak{m}athbb{C}^2/\mathfrak{m}athbb{Z}_2].$$
On the other hand, consider the minimal resolution
$$X_+ = \widetilde{\mathfrak{m}athbb{C}^2 / \mathfrak{m}athbb{Z}_2} = \mathfrak{m}athrm{Tot}(\mathfrak{m}athcal{O}_{\mathfrak{m}athbb{CP}^1}(-2)).$$
Under homological mirror symmetry (or coherent-constructible correspondence) of toric stacks \cite{KuwaCCC}, the mirrors are Weinstein sectors $(T^*T^2, \mathbb{L}ambdambda_\varphim)$ as shown in Figure \ref{fig:swap-pair}. One can show that after a small Reeb perturbation, $(T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_-), T_\varepsilonpsilon(\mathbb{L}ambdambda_+))$ and $(T_\varepsilonpsilon(\mathbb{L}ambdambda_-), T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_+))$ are both swappable pairs of Legendrian stops, as shown in Figure \ref{fig:swap-pair}.
\betaegin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{Swappable-pair.pdf}
\caption{The figure on the left illustrates the swappable pair $\mathbb{L}ambdambda_\varphim \subset S^*T^2$ mirror to the flops associated to $X_0 = \mathfrak{m}athbb{C}^2/\mathfrak{m}athbb{Z}_2$, where all the covectors are pointing downward. The figure on the right illustrates a cofinal wrapping that sends $T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)$ to a neighbourhood of $T_\varepsilonpsilon(\mathbb{L}ambdambda)$ and one that sends $T_\varepsilonpsilon(\mathbb{L}ambdambda_+)$ to a neighbourhood of $T_{-\varepsilonpsilon}(\mathbb{L}ambdambda)$.}\lambdabel{fig:swap-pair}
\varepsilonnd{figure}
We can also consider the mirror to the Atiyah flops associated to $X_0 = \{(x, y, z, w) | xy - zw = 0\} \subset \mathfrak{m}athbb{C}^4$. There are two crepant resolutions along two exceptional rational curves
$$X_\varphim = \mathfrak{m}athrm{Tot}(\mathfrak{m}athcal{O}_{E_\varphim}(-1)^{\operatornamelus 2}).$$
Under homological mirror symmetry (or coherent-constructible correspondence) of toric stacks \cite{KuwaCCC}, the mirrors are Weinstein sectors $(T^*T^3, \mathbb{L}ambdambda_\varphim)$. One can similarly show that $(T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_-), T_\varepsilonpsilon(\mathbb{L}ambdambda_+))$ and $(T_\varepsilonpsilon(\mathbb{L}ambdambda_-), T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_+))$ are both swappable pairs of Legendrian stops.
\varepsilonnd{example}
The main theorem of the section is the following statement, that swappable pairs of Legendrian stops induce spherical pairs of sheaf categories.
\betaegin{theorem}\lambdabel{thm:sphere-pair-var}
Let $\mathbb{L}ambdambda_\varphim \subseteq S^{*}M$ be a swappable pair of closed Legendrian stops. Then $\mathbb{S}h_{\mathbb{L}ambdambda_-}(M) \simeq \mathbb{S}h_{\mathbb{L}ambdambda_+}(M), \mathfrak{m}sh_{\mathbb{L}ambdambda_-}(\mathbb{L}ambdambda_-) \simeq \mathfrak{m}sh_{\mathbb{L}ambdambda_+}(\mathbb{L}ambdambda_+)$, and there is a spherical pair
$$\mathbb{S}h_{\mathbb{L}ambdambda_-}(M) \leftarrow \mathbb{S}h_{\mathbb{L}ambdambda_+ \cup \mathbb{L}ambdambda_-}(M) \rightarrow \mathbb{S}h_{\mathbb{L}ambdambda_+}(M).$$
\varepsilonnd{theorem}
\betaegin{proof}
Since $(\mathbb{L}ambdambda_-, \mathbb{L}ambdambda_+)$ is a swappable pair, each of $\mathbb{L}ambdambda_\varphim$ is itself swappable in $S^{*}M$: indeed, we can wrap $\mathbb{L}ambdambda_-$ into a small neighbourhood of $\mathbb{L}ambdambda_+$, and then follow the wrapping which sends the neighbourhood of $\mathbb{L}ambdambda_+$ back into a small neighbourhood of $\mathbb{L}ambdambda_-$. Therefore, the left adjoints of microlocalization
$$m_{\mathbb{L}ambdambda_\varphim}^l: \mathfrak{m}sh_{\mathbb{L}ambdambda_\varphim}(\mathbb{L}ambdambda_\varphim) \rightarrow \mathbb{S}h_{\mathbb{L}ambdambda_\varphim}(M)$$
are spherical functors whose spherical twists are $S_{\mathbb{L}ambdambda_\varphim}^+$. On the other hand, for any ${F} \in \mathbb{S}h_{\mathbb{L}ambdambda_-}(M)$ and ${G} \in \mathbb{S}h_{\mathbb{L}ambdambda_+}(M)$, we can define the swapping functors
$${R}_{\mathbb{L}ambdambda_-,\mathbb{L}ambdambda_+}^+({F}) = S_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}^+ F = \mathfrak{m}athfrak{W}_{\mathbb{L}ambdambda_+}^+ {F}, \;\; {R}_{\mathbb{L}ambdambda_+,\mathbb{L}ambdambda_-}^+({G}) = S_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}^+ G = \mathfrak{m}athfrak{W}_{\mathbb{L}ambdambda_-}^+ {G}.$$
From Corollary \ref{cor:twist}, the compositions of swapping functor give the spherical twists
$${S}_{\mathbb{L}ambdambda_-}^+ = {R}_{\mathbb{L}ambdambda_+,\mathbb{L}ambdambda_-}^+ \circ {R}_{\mathbb{L}ambdambda_-,\mathbb{L}ambdambda_+}^+, \; {S}_{\mathbb{L}ambdambda_+}^+ = {R}_{\mathbb{L}ambdambda_-,\mathbb{L}ambdambda_+}^+ \circ {R}_{\mathbb{L}ambdambda_+,\mathbb{L}ambdambda_-}^+.$$
As a result, we know that ${R}_{\mathbb{L}ambdambda_-,\mathbb{L}ambdambda_+}^+, {R}_{\mathbb{L}ambdambda_+,\mathbb{L}ambdambda_-}^+$ are equivalences. Similarly we can also consider spherical cotwists and show that the corresponding co-swapping functors are equivalences. This implies that
$$\mathbb{S}h_{\mathbb{L}ambdambda_-}(M) \simeq \mathbb{S}h_{\mathbb{L}ambdambda_+}(M), \;\; \mathfrak{m}sh_{\mathbb{L}ambdambda_-}(\mathbb{L}ambdambda_-) \simeq \mathfrak{m}sh_{\mathbb{L}ambdambda_+}(\mathbb{L}ambdambda_+).$$
Then consider two new pairs of Legendrian stops $(\mathbb{L}ambdambda_-, T_\varepsilonpsilon(\mathbb{L}ambdambda_-))$ and $(T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_+), \mathbb{L}ambdambda_+)$, obtained by a sufficiently small Reeb push-off. We show that there are equivalences
$$\mathbb{S}h_{\mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-)}(M) \simeq \mathbb{S}h_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}(M) \simeq \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_+) \cup \mathbb{L}ambdambda_+}(M),$$
such that the corresponding restrictions are the swapping functors
$${R}_{\mathbb{L}ambdambda_-,\mathbb{L}ambdambda_+}^+: \, \mathbb{S}h_{\mathbb{L}ambdambda_-}(M) \rightarrow \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_+)}(M), \;\; \mathbb{S}h_{T_\varepsilonpsilon(\mathbb{L}ambdambda_-)}(M) \rightarrow \mathbb{S}h_{\mathbb{L}ambdambda_+}(M).$$
Then since $\mathbb{S}h_{\mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-)}(M)$ and $\mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_+) \cup \mathbb{L}ambdambda_+}(M)$ are both endowed with semi-orthogonal decompositions by Proposition \ref{prop:semi-ortho}, this will complete the proof.
Consider the non-negative wrapping that fixes $\mathbb{L}ambdambda_-$ while sending $T_\varepsilonpsilon(\mathbb{L}ambdambda_-)$ into $\mathbb{L}ambdambda_+$, and another non-negative wrapping that fixes $\mathbb{L}ambdambda_-$ while sending $\mathbb{L}ambdambda_+$ into $T_{\varepsilonpsilon}(\mathbb{L}ambdambda_-)$. Viewing each of $\mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-)$ and $\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+$ as a single Legendrian stop, the above observation shows that they form a swappable pair as well. Hence we have
\betaegin{align*}
{R}_{\mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-), \mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}^+ : \mathbb{S}h_{\mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-)}(M) \rightarrow \mathbb{S}h_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}(M),\\
{R}_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+, \mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-)}^- : \mathbb{S}h_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}(M) \rightarrow \mathbb{S}h_{\mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-)}(M),
\varepsilonnd{align*}
where ${R}_{\mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-), \mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}^+$ and ${R}_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+, \mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-)}^-$ are inverse equivalences by the definition above. Hence we have shown the equivalence
$$\mathbb{S}h_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}(M) \simeq \mathbb{S}h_{\mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-)}(M)$$
which endows $\mathbb{S}h_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}(M)$ with the corresponding semi-orthogonal decompositions.
Finally, to identify the projection functors with the stop removal functors, i.e.~positive wrapping functors, we only need to notice that the following diagram commutes
\[\timesymatrix{
\mathbb{S}h_{\mathbb{L}ambdambda_-}(M) \alphar@{=}[d] & \mathbb{S}h_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}(M) \alphar[l]_{\wrap_{\mathbb{L}ambdambda_-}^+\hspace{10pt}} \alphar@{=}[r] & \mathbb{S}h_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}(M) \alphar[r]^{\hspace{10pt}\wrap_{\mathbb{L}ambdambda_+}^+} \alphar[d]_{\rotatebox{90}{$\sim$}}^{\wrap^+_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_+) \cup \mathbb{L}ambdambda_+}} & \mathbb{S}h_{\mathbb{L}ambdambda_+}(M) \alphar@{=}[d] \\
\mathbb{S}h_{\mathbb{L}ambdambda_-}(M) & \mathbb{S}h_{\mathbb{L}ambdambda_- \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-)}(M) \alphar[u]_{\rotatebox{90}{$\sim$}}^{\wrap^+_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}} \alphar[l]^{\wrap_{\mathbb{L}ambdambda_-}^+\hspace{10pt}} \alphar[r]^{\sim} & \mathbb{S}h_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_+) \cup \mathbb{L}ambdambda_+}(M) \alphar[r]_{\hspace{14pt}\wrap_{\mathbb{L}ambdambda_+}^+} & \mathbb{S}h_{\mathbb{L}ambdambda_+}(M).
}\]
This completes the proof of the theorem.
\varepsilonnd{proof}
\subsubsection{Remark on the mirror of flop-flop equivalence}
We now explain how the above result gives rise to more general equivalences and autoequivalences of microlocal sheaf categories other than the spherical twists and cotwists defined by wrapping around. In particular, we will discuss the relation to the work of Donovan-Kuwagaki \cite{Donovan-Kuwagaki}, who provide autoequivalences that are mirror to the flop-flop equivalence in two specific examples.
Throughout this subsection, we will work with the subcategory of compact objects $\mathbb{S}h^c_\mathbb{L}ambdambda(M)$ instead of the large stable category $\mathbb{S}h_\mathbb{L}ambdambda(M)$.
Let $\mathbb{L}ambdambda_\varphim = \mathbb{L}ambdambda_0 \cup H_\varphim \subseteq S^{*}M$ be subanalytic Legendrian stops such that there is a positive Hamiltonian isotopy fixing $\mathbb{L}ambdambda_0$ that sends $H_-$ to an arbitrarily small neighbourhood of $\mathbb{L}ambdambda_+$, and a positive isotopy fixing $\mathbb{L}ambdambda_0$ that sends $H_+$ to an arbitrarily small neighbourhood of $\mathbb{L}ambdambda_-$.
Then, after a small perturbation, both $(T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_-), T_\varepsilonpsilon(\mathbb{L}ambdambda_+))$ and $(T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_+), T_\varepsilonpsilon(\mathbb{L}ambdambda_-))$ are swappable pairs. By the results in the previous section, the following compositions are equivalences
\[\betaegin{split}
\mathbb{S}h^c_{\mathbb{L}ambdambda_-}(M) \rightarrow \mathbb{S}h^c_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_-) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_+)}(M) \rightarrow \mathbb{S}h^c_{\mathbb{L}ambdambda_+}(M), \\
\mathbb{S}h^c_{\mathbb{L}ambdambda_+}(M) \rightarrow \mathbb{S}h^c_{T_{-\varepsilonpsilon}(\mathbb{L}ambdambda_+) \cup T_\varepsilonpsilon(\mathbb{L}ambdambda_-)}(M) \rightarrow \mathbb{S}h^c_{\mathbb{L}ambdambda_-}(M).
\varepsilonnd{split}\]
\betaegin{example}\lambdabel{ex:rel-pair}
The Weinstein pairs in Example \ref{ex:pair} give such examples. Suppose $\mathbb{L}ambdambda_0 = \mathbb{L}ambdambda_- \cap \mathbb{L}ambdambda_+$. We can then write $\mathbb{L}ambdambda_\varphim = \mathbb{L}ambdambda_0 \cup H_\varphim$, where $H_\varphim$ are Legendrian disks with boundary $\varphiartial H_\varphim \subseteq \mathbb{L}ambdambda_0$ as in Figure \ref{fig:rel-swap-pair}. In fact, we expect that the Weinstein thickenings of $\mathbb{L}ambdambda_\varphim$ are Weinstein homotopic, and that the Weinstein thickening of $\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+$ is a Weinstein stacking \cite{LazaMax} of the two Weinstein hypersurfaces.
\varepsilonnd{example}
\betaegin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{Rel-swappable-pair.pdf}
\caption{The figures on the left and in the middle show the Legendrian pairs $\mathbb{L}ambdambda_\varphim \subset S^*T^2$, where all conormal directions are pointing downward. The figure on the right shows $\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+ = \mathbb{L}ambdambda_0 \cup H_- \cup H_+ \subseteq S^*T^2$, where $H_\varphim$ are drawn in the same colors as $\mathbb{L}ambdambda_\varphim$.}\lambdabel{fig:rel-swap-pair}
\varepsilonnd{figure}
In the above equivalences, the autoequivalence obtained by compositions
$$\mathbb{S}h^c_{\mathbb{L}ambdambda_-}(M) \timesrightarrow{\sim} \mathbb{S}h^c_{\mathbb{L}ambdambda_+}(M) \timesrightarrow{\sim} \mathbb{S}h^c_{\mathbb{L}ambdambda_-}(M)$$
is no longer the spherical twist given by wrapping around the contact boundary as in Theorem \ref{thm:sphere-pair-var}. Yet we expect that, under appropriate assumptions, these autoequivalences are spherical twists of a spherical pair
$$\mathbb{S}h^c_{\mathbb{L}ambdambda_-}(M) \leftarrow \mathbb{S}h^c_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}(M) \rightarrow \mathbb{S}h^c_{\mathbb{L}ambdambda_+}(M),$$
and are mirror to the spherical pair associated to the flop-flop equivalence of Bodzenta-Bondal \cite{Flopsphere}. Donovan-Kuwagaki \cite{DonovanKuwa} have proved this for Example \ref{ex:rel-pair}.
However, currently, using microlocal sheaf theoretic methods, almost the only way to get semi-orthogonal decompositions or reversed inclusions
$$\mathbb{S}h^c_{\mathbb{L}ambdambda_-}(M) \hookrightarrow \mathbb{S}h^c_{\mathbb{L}ambdambda_- \cup \mathbb{L}ambdambda_+}(M)$$
is to assume that $\mathbb{L}ambdambda_\varphim$ are full stops, i.e.~$\mathbb{S}h^c_{\mathbb{L}ambdambda_\varphim}(M) = \mathbb{S}h^b_{\mathbb{L}ambdambda_\varphim}(M)$ \cite{CoteKartal}*{Section 6.3}. Using homological mirror symmetry or coherent-constructible correspondence, it is possible to prove semi-orthogonal decompositions for some limited cases of non-full stops like Example \ref{ex:rel-pair} \cite{DonovanKuwa,ZhouSkel}, but to our knowledge, for the moment there is no general statement.
\section{Verdier duality as categorical duality}\lambdabel{sec:VerdSerr}
Let $(M,\mathbb{L}ambdambda)$ be as before. In this section, we study dualizability, bimodules and colimit-preserving functors of sheaf categories with isotropic singular support. We first exhibit an equivalence $\VD{\mathbb{L}ambdambda}: \mathbb{S}h_{-\mathbb{L}ambdambda}(M)^{c,op} \timesrightarrow{\sim} \mathbb{S}h_\mathbb{L}ambdambda(M)^c$ and, as a corollary, we obtain a classification of colimit-preserving functors by sheaf kernels
$$ \mathbb{F}un^L(\mathbb{S}h_\mathbb{S}igma(N),\mathbb{S}h_\mathbb{L}ambdambda(M)) = \mathbb{S}h_{-\mathbb{S}igma \times \mathbb{L}ambdambda}(N \times M)$$
through convolutions for any such pair $(N,\mathbb{S}igma)$.
Assume the manifold is compact, in which case there is an inclusion $\mathbb{S}h_\mathbb{L}ambdambda(M)^b \subseteq \mathbb{S}h_\mathbb{L}ambdambda(M)^c$ of sheaves with perfect stalks into compact objects. We show that the classical Verdier duality $\VD{M}: \mathbb{S}h_{-\mathbb{L}ambdambda}(M)^b \timesrightarrow{\sim} \mathbb{S}h_\mathbb{L}ambdambda(M)^c$ is related to $\VD{\mathbb{L}ambdambda}$ by
$$\VD{M}(F) = S_\mathbb{L}ambdambda^+ \circ \VD{\mathbb{L}ambdambda}(F) \text{ot}imes \omega_M$$
for $F \in \mathbb{S}h_{-\mathbb{L}ambdambda}(M)^b$. We mention that a similar question was previously studied, in the setting of the Betti geometric Langlands program, in \cite{Arinkin-Gaitsgory-Kazhdan-Raskin-Rozenblyum-Varshavsky} (though they consider the non-quasi-compact stack $Bun_G(X)$, where it is hard to even define Verdier duality).
If $S_\mathbb{L}ambdambda^+$ is invertible, then we can extend the Verdier duality to $\mathbb{S}h_{-\mathbb{L}ambdambda}(M)^c$ by the formula on the right hand side. We show that the converse is also true in a suitable sense (see Theorem \ref{converse-statement-Verdier} for a precise statement).
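As motivation, let us recall the classical picture (a standard recollection in our own words; we write $\mathcal{H}om$ for the internal sheaf Hom, and we take this description of the classical duality as an assumption on the notation): Verdier duality on $M$ is given by
$$\VD{M}(F) = \mathcal{H}om(F, \omega_M),$$
and for constructible sheaves with perfect stalks one has $\mathfrak{m}s^\infty(\VD{M}(F)) = -\mathfrak{m}s^\infty(F)$, which is why the antipodal Legendrian $-\mathbb{L}ambdambda$ appears in the source categories above.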
\subsection{Products}
Denote by $\mathbb{P}rLst$ the (very large) category of presentable stable categories whose morphisms are given by colimit-preserving functors.
Compactly generated categories in $\mathbb{P}rLst$ form a subcategory $\mathbb{P}rLcs$, which is equivalent, by taking compact objects, to $\st_\omega$, the category of idempotent complete small stable categories whose morphisms are given by exact functors:
\betaegin{align*}
\mathbb{P}rLcs &\timesrightarrow{\sim} \st_\omega \\
\sC &\mathfrak{m}apsto \sC^c.
\varepsilonnd{align*}
The inverse map of this identification is given by taking Ind-completion $(\sC_0 \mathfrak{m}apsto \Ind \sC_0)$.
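As a familiar example of this identification (a standard recollection, included only for orientation), for a ring $k$ one has $\Ind(\mathrm{Perf}_k) \simeq \mathrm{Mod}_k$ and $(\mathrm{Mod}_k)^c \simeq \mathrm{Perf}_k$, so the small idempotent complete stable category of perfect complexes and the compactly generated category of all $k$-modules determine each other.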
There is a symmetric monoidal structure $\text{ot}imes$ on $\mathbb{P}rLst$ \cite{Lurie2, Hoyois-Scherotzke-Sibilla}, and the following lemma implies that it restricts to a symmetric monoidal structure on $\mathbb{P}rLcs$, which further induces a symmetric monoidal structure $\text{ot}imes$ on $\st$ by sending $(\sC_0,\sD_0)$ to $\left( \Ind(\sC_0) \text{ot}imes \Ind(\sD_0) \right)^c$.
\betaegin{lemma}[{\cite[Proposition 7.4.2]{Gaitsgory-Rozenblyum}}]\lambdabel{pgc,PrLst}
Assume that $\sC$ and $\sD$ are compactly generated stable categories over $\cV$.
\betaegin{enumerate}
\item The tensor product $\sC \text{ot}imes \sD$ is compactly generated by objects of the form
$c_0 \text{ot}imes d_0$ with $c_0 \in \sC^c$ and $d_0 \in \sD^c$.
\item For $c_0$, $d_0$ as above, and $c \in \sC$, $d \in \sD$, we have a canonical isomorphism
$$ \Hom_\sC(c_0,c) \text{ot}imes \Hom_\sD(d_0,d) = \Hom_{\sC \text{ot}imes \sD}(c_0 \text{ot}imes d_0, c \text{ot}imes d).$$
\varepsilonnd{enumerate}
\varepsilonnd{lemma}
We will need the following lemma concerning the full faithfulness of tensor products of functors.
\betaegin{lemma}\lambdabel{ff;pr}
If the functors $F_i: \sC_i \rightarrow \sD_i$ for $i = 1, 2$ in $\mathbb{P}rLst$ are fully faithful,
then their tensor product $F_1 \text{ot}imes F_2: \sC_1 \text{ot}imes \sC_2 \rightarrow \sD_1 \text{ot}imes \sD_2$
is fully faithful if one of the following conditions is satisfied:
\betaegin{enumerate}
\item Each functor $F_i$ admits a left adjoint.
\item The right adjoint of each $F_i$ is colimit-preserving.
\varepsilonnd{enumerate}
\varepsilonnd{lemma}
\betaegin{proof}
We prove (1) and leave (2) to the reader.
We first note that $F_1 \text{ot}imes F_2 = (\id_{\sD_1} \text{ot}imes F_2) \circ (F_1 \text{ot}imes \id_{\sC_2})$,
so it is sufficient to prove the case when $F_2 = \id_{\sC_2}$.
Denote by $F_1^l: \sD_1 \rightarrow \sC_1$ the left adjoint of $F_1$.
Since for any $X, Y \in \sC_1$,
$$ \Hom(F_1^l F_1 X, Y) = \Hom(F_1 X, F_1 Y) = \Hom(X, Y),$$
the counit $F_1^l F_1 \rightarrow \id_{\sC_1}$ is an equivalence.
Applying $(-) \text{ot}imes \id_{\sC_2}$, which preserves adjunctions between colimit-preserving functors as well as equivalences, we see that the counit of the induced adjunction between $F_1^l \text{ot}imes \id_{\sC_2}$ and $F_1 \text{ot}imes \id_{\sC_2}$ is again an equivalence.
Thus $F_1 \text{ot}imes \id_{\sC_2}$ is fully faithful.
\varepsilonnd{proof}
Now we consider pairs of the form $(M,\mathbb{L}ambdambda)$ where $M$ is a real analytic manifold and $\mathbb{L}ambdambda \subseteq S^* M$ is a subanalytic isotropic.
If $(N,\mathbb{S}igma)$ is another such pair, we can form the product pair $(M \times N, \mathbb{L}ambdambda \times \mathbb{S}igma)$, where we abuse notation and write $\mathbb{L}ambdambda \times \mathbb{S}igma$ for the product isotropic given by the projectivization
$$\left( \left( (\mathbb{R}p \mathbb{L}ambdambda \cup 0_M) \times (\mathbb{R}p \mathbb{S}igma \cup 0_N) \right) \setminus 0_{M \times N} \right) / \mathbb{R}p.$$
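As a quick sanity check of this convention (a small worked example, not used later): if $\mathbb{S}igma = \varnothing$, then $\mathbb{R}p \mathbb{S}igma \cup 0_N = 0_N$, and the formula above produces the Legendrian
$$\{\,((x,y),(\xi,0)) \in S^*(M \times N) : (x,\xi) \in \mathbb{L}ambdambda,\ y \in N\,\},$$
i.e.~the covectors over $\mathbb{L}ambdambda$ that vanish in the $N$-directions; correspondingly $\mathbb{S}h_\varnothing(N)$ consists of the locally constant sheaves on $N$.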
The main proposition of this subsection is the following compatibility statement between this geometric product and the product structure recalled above:
\betaegin{proposition}\lambdabel{pd:g}
Let $M$, $N$ be real analytic manifolds and $\mathbb{L}ambdambda \subseteq S^* M$, $\mathbb{S}igma \subseteq S^* N$
be subanalytic singular isotropics. Denote by $\mathbb{L}ambdambda \times \mathbb{S}igma$ the product singular isotropic in $S^* (M \times N)$,
then there is an equivalence
\betaegin{align*}
\mathbb{S}h_\mathbb{L}ambdambda(M) \text{ot}imes \mathbb{S}h_\mathbb{S}igma(N) &= \mathbb{S}h_{\mathbb{L}ambdambda \times \mathbb{S}igma}(M \times N) \\
(F,G) &\mathfrak{m}apsto F \betaoxtimes G.
\varepsilonnd{align*}
\varepsilonnd{proposition}
Let $\cS$ and $\cT$ be triangulations of $M$ and $N$.
We note that, although the product stratification $\cS \times \cT$ is no longer a triangulation, it still satisfies the conditions in Lemma \ref{lem:sheaves_by_representations}.
Thus the following slight generalization of \cite[Lemma 4.7]{Ganatra-Pardon-Shende3} holds:
\betaegin{proposition}\lambdabel{srep}
Let $\cS$ and $\cT$ be triangulations of $M$ and $N$. Then $\mathbb{S}h_{\cS \times \cT}(M \times N) = (\cS \times \cT) \deltaMod$.
Here we denote by $\cS \times \cT$ the product stratification.
\varepsilonnd{proposition}
Thus when $\mathbb{L}ambdambda = N^*_\infty \cS$ and $\mathbb{S}igma = N^*_\infty \cT$ are given by Whitney triangulations, one can check directly that $(\cS \deltaMod) \text{ot}imes (\cT \deltaMod) = (\cS \times \cT) \deltaMod$, and we can conclude the following special case.
\betaegin{proposition}\lambdabel{pd:ws}
Let $\cS$ and $\cT$ be Whitney triangulations of $M$ and $N$. Then there is an equivalence
$$\betaoxtimes: \mathbb{S}h_{N^*_\infty \cS}(M) \text{ot}imes \mathbb{S}h_{N^*_\infty \cT}(N) \timesrightarrow{\sim} \mathbb{S}h_{N^*_\infty (\cS \times \cT)}(M \times N)$$
sending $1_{\str(s)} \text{ot}imes 1_{\str(t)}$ to $1_{\str(s) \times \str(t)}$.
\varepsilonnd{proposition}
\betaegin{proof}[Proof of Proposition \ref{pd:g}]
To deduce the general case from the triangulation case,
pick Whitney triangulations $\cS$ of $M$ and $\cT$ of $N$ such that $\Lambda \subseteq N^* \cS$ and $\Sigma \subseteq N^* \cT$,
and consider the following diagram:
$$
\betaegin{tikzpicture}
\node at (0,2) {$\mathbb{S}h_\mathbb{L}ambdambda(M) \text{ot}imes \mathbb{S}h_\mathbb{S}igma(N)$};
\node at (8,2) {$\mathbb{S}h_{\mathbb{L}ambdambda \times \mathbb{S}igma}(M \times N)$};
\node at (0,0) {$\mathbb{S}h_{N^* \cS}(M) \text{ot}imes \mathbb{S}h_{N^* \cT}(N)$};
\node at (8,0) {$\mathbb{S}h_{N^* \cS \times N^* \cT}(M \times N)$};
\deltaraw [->, thick] (1.8,2) -- (6.4,2) node [midway, above] {$\betaoxtimes$};
\deltaraw [double equal sign distance, thick] (2.2,0) -- (6.1,0) node [midway, below] {$\betaoxtimes$};
\deltaraw [right hook->, thick] (0,1.7) -- (0,0.3) node [midway, left] {$ $};
\deltaraw [right hook->, thick] (8,1.7) -- (8,0.3) node [midway, right] {$ $};
\varepsilonnd{tikzpicture}
$$
The fully-faithfulness of the vertical functor on the left is implied by Lemma \ref{ff;pr}.
Since the diagram commutes, the horizontal map on the upper row is also fully-faithful.
Passing to left adjoints and restricting to compact objects, the equivalence in the general case follows from Proposition \ref{wdstopr}
and the proposition below,
whose counterpart in the Fukaya setting is discussed in a more general situation
in \cite[Section 6]{GPS2}.
\varepsilonnd{proof}
\betaegin{proposition}
Let $(x,\timesi) \in N^* \cS$ and $(y,\varepsilonta) \in N^* \cT$.
We denote by $D_{(x,\timesi)}$ and $D_{(y,\varepsilonta)}$ corepresentatives of the microstalk
functors at $(x,\timesi)$ and $(y,\varepsilonta)$.
Then $D_{(x,\timesi)} \betaoxtimes D_{(y,\varepsilonta)}$ corepresents the microstalk at $(x,y,\timesi,\varepsilonta)$.
\varepsilonnd{proposition}
\betaegin{proof}
By Proposition \ref{wdstopr}, it is sufficient to show that for $F \in \mathbb{S}h_\cS(M)$ and $G \in \mathbb{S}h_\cT(N)$, there is an equivalence
$$ \mu_{(x,\xi)}(F) \boxtimes \mu_{(y,\eta)}(G) = \mu_{(x,y,\xi,\eta)}(F \boxtimes G)$$
since corepresentatives are unique.
This is the Thom-Sebastiani theorem whose proof in the relevant setting can be found in for example
\cite[Sebastiani-Thom Isomorphism]{D.Massey} or \cite[Theorem 1.2.2]{Schurmann}.
\varepsilonnd{proof}
\betaegin{remark}
We remark that the theorem is stated as compatibility between vanishing cycles
with exterior products $\betaoxtimes$ in the setting of complex manifold.
The proof, however, holds in our case since vanishing cycles $\varphihi_f(F)$ are traded with
$\Gamma_{\{ \mathbb{R}e f \mathfrak{m}athfrak{g}eq 0\} }(F) |_{f^{-1}(0)}$ at the beginning of the proof in for example \cite{D.Massey}.
Furthermore the various computations performed there, for example,
$$ f^* (D_Y(F) ) \cong D_X( f^! F),$$
for a real analytic map $f: X \rightarrow Y$, require only $\mathbb{R}R$-constructibility.
\varepsilonnd{remark}
Having studied the product between categories of the form $\mathbb{S}h_\mathbb{L}ambdambda(M)$, we list some results about convolving with sheaves with a prescribed microsupport. Fix a conic closed subset $Y \subseteq T^*N$, and $K \in \mathbb{S}h_{T^* M \times Y}(M \times N)$. For $F \in \mathbb{S}h(M)$, Assumption (i) from the previous Corollary \ref{msconv} is never satisfied for $K \circ F$. Nevertheless, we show that $\mathfrak{m}s(K \circ F) \subseteq Y$.
\betaegin{lemma}\lambdabel{mscomp}
Let $K \in \mathbb{S}h(M \times N)$ and $Y$ be a conic closed subset of $T^* N$.
If the microsupport $\mathfrak{m}s(K)$ is contained in $T^* M \times Y$,
then $\mathfrak{m}s({p_2}_* K)$ and $\mathfrak{m}s({p_2}_! K)$ are both contained in $Y$.
\varepsilonnd{lemma}
\betaegin{proof}
Let $p_1, p_2$ denote the projections from $M \times N$ to $M$ and $N$.
We would like to apply Proposition \ref{prop:mses}~(2), so we need to obtain properness with respect to $p_2$.
Pick an increasing sequence of relatively compact open subsets $\{U_i\}_{i \in \mathbb{N}N}$ of $M$ such that
$M = \bigcup_{ i \in \mathbb{N}N} U_i$ and notice that the canonical map $\operatorname{colim}_{i \in \mathbb{N}N} K_{U_i \times N} \rightarrow K$
is an isomorphism.
Thus by Proposition \ref{prop:mses}~(2), (4) \& (7), we can compute that
\betaegin{align*}
\mathfrak{m}s({p_2}_! K)
&= \mathfrak{m}s(\clmi{i \in \mathbb{N}N} \ {p_2}_! K_{U_i \times N}) \subseteq \overline{ \cup_{i \in \mathbb{N}N} \mathfrak{m}s({p_2}_! K_{U_i \times N}) } \\
&\subseteq \overline{ \cup_{i \in \mathbb{N}N} \ {p_2}_\varphii ( \mathfrak{m}s(K_{U_i \times N}) \cap 0_M \times T^* N) } \\
&\subseteq \overline{ \cup_{i \in \mathbb{N}N} \ {p_2}_\varphii ( T^* M \times Y \cap 0_M \times T^* N) } \\
&\subseteq \overline{ \cup_{i \in \mathbb{N}N} \ {p_2}_\varphii ( 0_M \times Y) } \subseteq \overline{ \cup_{i \in \mathbb{N}N} Y } = Y.
\varepsilonnd{align*}
To prove the case for ${p_2}_*$ we further require that $U_i \subseteq \overline{U_i} \subseteq {U_{i+1}}$
and apply the same computation to the limit $L = \lim_{i \in \mathbb{N}N} \Gamma_{\overline{U_i} \times N}(K)$.
\varepsilonnd{proof}
\betaegin{proposition}\lambdabel{conv-ms}
Let $K \in \mathbb{S}h_{T^* M \times Y}(M \times N)$. Then the assignment $F \mathfrak{m}apsto K \circ F$ defines a functor
$$K \circ (-): \mathbb{S}h(M) \rightarrow \mathbb{S}h_Y(N).$$
\varepsilonnd{proposition}
\betaegin{proof}
We recall that, for $F \in \mathbb{S}h(M)$, $K \circ F \coloneqq {\varphii_N}_! (K \text{ot}imes \varphii_M^* F)$.
By Proposition \ref{prop:noncharacteristic-ms-es}~(iii), there is microsupport estimation
$$\mathfrak{m}s(K \text{ot}imes \varphii_M^* F) \subseteq \mathfrak{m}s(K) \,\widehat{+}\, (\mathfrak{m}s(F) \times 0_M)
\subseteq (T^* M \times Y) \,\widehat{+}\, (T^* M \times 0_N).$$
Now the description of $\widehat{+}$ implies that if $(x,\timesi,y,\varepsilonta)$ is a point on the right hand side, then it comes from a limiting point of a sum from $(x_n,\timesi_n,y_n,\varepsilonta_n) \in T^* M \times Y$ and $(x^\varphirime_n,\timesi^\varphirime_n,y^\varphirime_n,0) \in T^* M \times 0_N$. Thus $(y,\varepsilonta) \in Y$, $\mathfrak{m}s(K \text{ot}imes \varphii_M^* F) \subseteq T^* M \times Y$, and we can apply the last lemma to conclude the proof.
\varepsilonnd{proof}
We will see in the next section that the above integral transform classifies all colimit-preserving
functors between categories of the form $\mathbb{S}h_\Lambda(M)$ where $\Lambda$ is a closed singular isotropic.
Before we leave this section, we notice that for a conic closed subset $X \subseteq T^* M$,
we can take any $K \in \mathbb{S}h(M \times M)$
and obtain a universal integral kernel $\iota_{-X \times X}^* K \in \mathbb{S}h_{-X \times X}(M \times M)$.
By the above proposition, $\iota_{-X \times X}^*(K)$ defines a functor
$\iota_{-X \times X}^*(K) \circ (-) : \mathbb{S}h(M) \rightarrow \mathbb{S}h_X(M).$
On the other hand, we can consider similarly functors $\mathbb{S}h(M) \rightarrow \mathbb{S}h_X(M)$
which are defined by $F \mathfrak{m}apsto \iota_{-X \times X}^*(K) \circ \iota_X^*(F)$
or $F \mathfrak{m}apsto \iota_X^*( K \circ F)$.
The claim is that they are all the same.
\betaegin{lemma}\lambdabel{lad&ker}
The following functors $\mathbb{S}h(M) \rightarrow \mathbb{S}h_X(M)$ are equivalent to each other:
\betaegin{enumerate}
\item $F \mathfrak{m}apsto \iota_X^*( K \circ \iota_X^* (F) )$,
\item $F \mathfrak{m}apsto \iota_{-X \times X}^*(K) \circ F$,
\item $F \mathfrak{m}apsto \iota_{-X \times X}^*(K) \circ \iota_X^*(F)$.
\varepsilonnd{enumerate}
In particular, $\iota_X^* (F) = \iota_{-X \times X}^* (1_{\mathbb{D}elta}) \circ F.$
\varepsilonnd{lemma}
\betaegin{proof}
We note that all three of the expressions on the right hand side are in $\mathbb{S}h_X(M)$ by the previous Proposition \ref{conv-ms},
and we can see directly, by Lemma \ref{convrad} and the right adjoint version of Proposition \ref{conv-ms}, that
\betaegin{align*}
&\Hom(\iota_{-X \times X}^*(K) \circ F,G)
= \Hom \left( F, \sHom^\circ( \iota_{-X \times X}^*(K), G) \right) \\
&= \Hom \left(\iota_X^*(F),\sHom^\circ( \iota_{-X \times X}^*(K), G )\right)
= \Hom(\iota_{-X \times X}^*(K) \circ \iota_X^*(F),G)
\varepsilonnd{align*}
for $F \in \mathbb{S}h_X(M)$, $G \in \mathbb{S}h(M)$.
Thus (2) and (3) are the same.
Now we show that $\Hom(\iota_{-X \times X}^*(K) \circ F,G) = \Hom(\iota_X^*( K \circ \iota_X^* F),G)$
for $F \in \mathbb{S}h_X(M)$, $G \in \mathbb{S}h(M)$.
We've seen that the left hand side is the same as $\Hom(F,\sHom^\circ(\iota_{-X \times X}^* (K),G) )$
and the target is in $\mathbb{S}h_X(M)$.
A similar computation implies that the right hand side is the same as $\Hom(F, \iota_{X}^! \sHom^\circ(K,G) )$
and the target is again in $\mathbb{S}h_X(M)$.
This means that we can evaluate at $F \in \mathbb{S}h_X(M)$ and prove the equality only for this case.
In that case, tautologically $F = \iota_X^* F$, and we compute that
\betaegin{align*}
&\Hom(\iota_{-X \times X}^*(K) \circ F,G)
= \Hom \left(\iota_{-X \times X}^* (K), \sHom(p_1^* F, p_2^! G) \right) \\
&= \Hom \left( K, \sHom(p_1^* F, p_2^! G) \right)
= \Hom(K \circ F,G)
= \Hom(\iota_X^*( K \circ \iota_X^* (F)), G).
\varepsilonnd{align*}
Note that for the third equality,
we use Proposition \ref{prop:mses}~(4) \& (7) to conclude that $\sHom(p_1^* F, p_2^! G) \in \mathbb{S}h_{-X \times X}(M \times M)$.
\varepsilonnd{proof}
\subsection{Dualizability} \lambdabel{dualsec}
Let $(\sC,\text{ot}imes,1_\sC)$ be a symmetric monoidal ($\infty$-)category.
We recall the notion of dualizability which we will use later.
\betaegin{definition}\lambdabel{smdual}
An object $X$ in $\sC$ is dualizable if there exists $Y \in \sC$ and a unit and a counit
$\eta: 1_\sC \rightarrow Y \otimes X$ and $\epsilon: X \otimes Y \rightarrow 1_\sC$
such that the pair $(\eta,\epsilon)$ satisfies the triangle identities, i.e., the following compositions are identities:
$$X \xrightarrow{\id_X \otimes \eta} X \otimes Y \otimes X \xrightarrow{\epsilon \otimes \id_X} X,$$
$$Y \xrightarrow{\eta \otimes \id_Y} Y \otimes X \otimes Y \xrightarrow{\id_Y \otimes \epsilon} Y.$$
\varepsilonnd{definition}
We note that these conditions determine $(Y,\eta,\epsilon)$ uniquely.
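As a sanity check, recall the classical example (independent of the rest of the text): in the symmetric monoidal category of vector spaces over a field $k$, an object $V$ is dualizable exactly when it is finite dimensional, with $Y = V^*$, counit the evaluation map and unit given, for any basis $(e_i)$ with dual basis $(e_i^*)$, by
$$ \eta: k \rightarrow V^* \otimes V, \quad 1 \mapsto \textstyle\sum_i e_i^* \otimes e_i, \qquad \epsilon: V \otimes V^* \rightarrow k, \quad v \otimes \varphi \mapsto \varphi(v). $$
The triangle identities reduce to $\sum_i e_i^*(v)\, e_i = v$ and $\sum_i \varphi(e_i)\, e_i^* = \varphi$.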
The relevant proposition concerning dualizability which we need is the following:
Let $\sA \in \mathbb{P}rLcs$ be compactly generated.
Denote by $\sA_0$ its compact objects and by $\sA^\vee \coloneqq \Ind(\sA_0^{op})$ the Ind-completion of its opposite category.
We first mention that the proof of the Proposition \ref{cg:d} below implies that
$\sA^\vee \text{ot}imes \sA = \mathbb{F}un^{ex}(\sA_0^{op} \text{ot}imes \sA_0,\cV) = \mathbb{F}un^L(\sA^\vee \text{ot}imes \sA,\cV)$.
Here the superscript `ex' means exact functors and the `L' means colimit-preserving functors.
As a result, the Hom-pairing $\Hom_{\sA_0}: \sA_0^{op} \text{ot}imes \sA_0 \rightarrow \cV$ induces a functor
$$\varepsilonpsilon_\sA: \sA^\vee \text{ot}imes \sA \rightarrow \cV$$
by extending $\Hom_{\sA_0}$ to the Ind-completion.
On the other hand, as a functor from $\sA_0^{op} \text{ot}imes \sA_0$ to $\cV$,
it also defines an object in $\sA \text{ot}imes \sA^\vee $ by the above identification, which is equivalent to a functor
$$ \varepsilonta_\sA: \cV \rightarrow \sA \text{ot}imes \sA^\vee.$$
\betaegin{proposition}[{\cite[Proposition 4.10]{Hoyois-Scherotzke-Sibilla}}]\lambdabel{cg:d}
If $\sA \in \mathbb{P}rLst$ is compactly generated, then it is dualizable with respect to the tensor product $\text{ot}imes$ on $\mathbb{P}rLst$,
and the triple $(\sA^\vee,\varepsilonta_\sA,\varepsilonpsilon_\sA)$ exhibits $\sA^\vee \coloneqq \Ind(\sA^{c,op})$ as a dual of it.
\varepsilonnd{proposition}
\betaegin{remark}
We note that when $\sA_0$ contains only one object, $\mathbb{F}un^{ex}(\sA_0^{op} \otimes \sA_0,\cV)$ recovers the classical notion of bimodules.
As a result, the diagonal bimodule $(\sA_0)_\Delta$ is also often referred to as the identity bimodule $\Id_{\sA_0}$.
\varepsilonnd{remark}
Classifying colimit-preserving functors is closely related to the notion of duality in Definition \ref{smdual}.
In the algebro-geometric setting, such a classification is usually referred to as Fourier-Mukai theory \cite{Ben-Zvi-Francis-Nadler}.
One strategy to prove such a theorem, inspired by an earlier result in the derived algebro-geometric setting \cite[Section 9]{Gaitsgory1},
is that the evaluation and coevaluation should be given by some sort of diagonals geometrically.
The equivalence between such geometric diagonals and the categorical diagonals discussed in Proposition \ref{cg:d},
which is implied by the uniqueness of duals, will provide such a classification.
In our case, we denote by $\mathbb{D}elta: M \hookrightarrow M \times M$ the inclusion of the diagonal
and by $p: M \rightarrow \varphit$ the projection to a point.
By Proposition \ref{pd:g}, there is an identification
$\mathbb{S}h_\mathbb{L}ambdambda(M) \text{ot}imes \mathbb{S}h_{-\mathbb{L}ambdambda}(M) = \mathbb{S}h_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}(M \times M)$.
Under this identification, we propose a duality data $(\varepsilonta,\varepsilonpsilon)$
between $\mathbb{S}h_\mathbb{L}ambdambda(M)$ and $\mathbb{S}h_{- \mathbb{L}ambdambda}(M)$ in $\mathbb{P}rLst$ which is given by
\betaegin{equation}\lambdabel{formula: standard_duality_data}
\betaegin{split}
\varepsilonpsilon &= p_! \mathbb{D}elta^* : \mathbb{S}h_{-\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M) \rightarrow \cV \\
\varepsilonta &= \iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \mathbb{D}elta_* p^*
: \cV \rightarrow \mathbb{S}h_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}(M \times M).
\varepsilonnd{split}
\varepsilonnd{equation}
Recall that we use
$\iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^*: \mathbb{S}h(M \times M) \rightarrow \mathbb{S}h_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}(M \times M)$
to denote the left adjoint of the inclusion
$\mathbb{S}h_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}(M \times M) \hookrightarrow \mathbb{S}h(M \times M)$.
Note also that since $\cV$ is compactly generated by $1_\cV$,
the colimit-preserving functor $\varepsilonta$ is determined by its value on $1_\cV$
so we will abuse the notation and identify it with $\varepsilonta$.
In order to check the triangle equalities, we first identify $\id \text{ot}imes \varepsilonpsilon$.
\betaegin{lemma}\lambdabel{ev}
Under the identification $$\mathbb{S}h_\mathbb{L}ambdambda(M) \text{ot}imes \mathbb{S}h_{-\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M)
= \mathbb{S}h_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M \times M),$$ the functor
$$\id \text{ot}imes \varepsilonpsilon: \mathbb{S}h_\mathbb{L}ambdambda(M) \text{ot}imes \mathbb{S}h_{-\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M)
\rightarrow \mathbb{S}h_\mathbb{L}ambdambda(M) \text{ot}imes \cV = \mathbb{S}h_\mathbb{L}ambdambda(M)$$
is identified as the functor
$${p_1}_! (\id \times \mathbb{D}elta)^*: \mathbb{S}h_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M \times M)
\rightarrow \mathbb{S}h_\mathbb{L}ambdambda(M).$$
\varepsilonnd{lemma}
\betaegin{proof}
Since both of the functors are colimit-preserving and the categories are compactly generated,
it is sufficient to check that ${p_1}_! (\id \times \mathbb{D}elta)^* \circ \betaoxtimes = \id \text{ot}imes (p_! \mathbb{D}elta^*)$
on pairs $(F,G)$ for $F \in \mathbb{S}h_\mathbb{L}ambdambda(M)^c$ and $G \in \mathbb{S}h_{-\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M)^c$
by Lemma \ref{pgc,PrLst}.
Note that we do not need the compactness assumption for the following computation.
Let $q_1: M^3 \rightarrow M$ and $q_{23}: M^3 \rightarrow M^2$ denote the projections
$q_1(x,y,z) = x$ and $q_{23}(x,y,z) = (y,z)$.
We note that $q_1 \circ (\id \times \mathbb{D}elta) = p_1$ and $q_{23} \circ (\id \times \mathbb{D}elta) = \mathbb{D}elta \circ p_2$.
Thus,
\betaegin{align*}
{p_1}_! (\id \times \mathbb{D}elta)^* (F \betaoxtimes G)
&= {p_1}_! (\id \times \mathbb{D}elta)^* (q_1^* F \text{ot}imes q_{23}^* G) = {p_1}_! (p_1^* F \text{ot}imes p_2^* \mathbb{D}elta^* G) \\
&= F \text{ot}imes ({p_1}_! p_2^* \mathbb{D}elta^* G) = F \text{ot}imes ( p^* p_!\mathbb{D}elta^* G) = F \text{ot}imes_{\cV} (p_! \mathbb{D}elta^* G).
\varepsilonnd{align*}
Here, we use the fact that $*$-pullback is compatible with $\otimes$ for the second equality, the projection formula for the third, and base change for the fourth. The last equality is by definition the action of the coefficient category $\cV$ on $\mathbb{S}h(M)$.
\varepsilonnd{proof}
\betaegin{remark}
A similar computation implies that $\eta \otimes \id$ can be identified with
$$\iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda \times \mathbb{L}ambdambda}^* (1_\mathbb{D}elta \betaoxtimes \betaigcdot):
\mathbb{S}h_\mathbb{L}ambdambda(M) \rightarrow \mathbb{S}h_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M \times M).$$
\varepsilonnd{remark}
Now we check the triangle equality $(\id_{\mathbb{S}h_\mathbb{L}ambdambda(M)} \text{ot}imes \varepsilonpsilon) \circ (\varepsilonta \text{ot}imes \id_{\mathbb{S}h_\mathbb{L}ambdambda(M)})
= \id_{\mathbb{S}h_\mathbb{L}ambdambda(M)}$.
In other words, we check that the composition of the following functors
$$
\betaegin{tikzpicture}
\node at (0,1.5) {$ \mathbb{S}h_\mathbb{L}ambdambda(M)$};
\node at (7,1.5) {$\mathbb{S}h_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}(M \times M) \text{ot}imes \mathbb{S}h_\mathbb{L}ambdambda(M)$};
\node at (7,0) {$\mathbb{S}h_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M \times M)$};
\node at (13,0) {$\mathbb{S}h_\mathbb{L}ambdambda(M)$};
\deltaraw [->, thick] (0.9,1.5) -- (4.3,1.5) node [midway, above]
{$(\iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \mathbb{D}elta_* p^*)\text{ot}imes \id$};
\deltaraw [->, thick] (9.3,0) -- (12.1,0) node [midway, above] {${p_1}_! (\id \times \mathbb{D}elta)^*$};
\deltaraw [->, thick] (7,1.2) -- (7,0.3) node [midway, right] {$\betaoxtimes$};
\varepsilonnd{tikzpicture}
$$
is the identity. The other triangle equality can be checked symmetrically.
\betaegin{proposition}\lambdabel{teholds}
The above equality holds.
\varepsilonnd{proposition}
\betaegin{proof}
Let $F \in \mathbb{S}h_\mathbb{L}ambdambda(M)$.
The composition of the first two arrows sends $(1_\cV,F)$ to
$$\left( \betaoxtimes \circ (\iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \mathbb{D}elta_* p^*) \text{ot}imes \id \right) (1_\cV,F)
= (\iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \mathbb{D}elta_* 1_M) \betaoxtimes F.$$
Apply ${p_1}_! (\id \times \mathbb{D}elta)^*$ and we obtain
$${p_1}_! (\id \times \mathbb{D}elta)^*
\left( (\iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \mathbb{D}elta_* 1_M) \betaoxtimes F \right)
= {p_1}_! \left( (\iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \mathbb{D}elta_* 1_M) \text{ot}imes p_2^* F \right).$$
To see that ${p_1}_! \left( (\iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \mathbb{D}elta_* 1_M) \text{ot}imes p_2^* F \right) = F$,
we use the Yoneda lemma to evaluate at $\Hom(-,H)$ for $H \in \mathbb{S}h_\mathbb{L}ambdambda(M)$
and compute that
\betaegin{align*}
\Hom({p_1}_! \left( (\iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \mathbb{D}elta_* 1_M) \text{ot}imes p_2^* F \right),H)
&= \Hom\left(\iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \mathbb{D}elta_* 1_M, \sHom( p_2^* F,p_1^! H) \right)\\
&= \Hom\left( \mathbb{D}elta_* 1_M, \sHom( p_2^* F,p_1^! H) \right) \\
&= \Hom\left( 1_M, \mathbb{D}elta^! \sHom( p_2^* F,p_1^! H) \right) \\
&= \Hom\left( 1_M, \sHom( F, H) \right) = \Hom(F,H).
\varepsilonnd{align*}
For the first equality, we use (3) and (6) of Proposition \ref{prop:mses} to obtain $\mathfrak{m}s(p_2^* F) = 0_M \times \mathfrak{m}s(F)$ and
$\mathfrak{m}s(p_1^! H) = \mathfrak{m}s(H) \times 0_M$, and they further imply the microsupport estimation
$$ \mathfrak{m}s( \sHom( p_2^* F,p_1^! H) ) \subseteq (\mathfrak{m}s(H) \times 0_M) + (0_M \times -\mathfrak{m}s(F))$$
since $\left( 0_M \times \mathfrak{m}s(F) \right) \cap \left( \mathfrak{m}s(H) \times 0_M \right) \subseteq 0_{M \times M}$
and $\sHom( p_2^* F,p_1^! H) \in \mathbb{S}h_{-\Lambda \times \Lambda}(M \times M)$.
We use the basic properties of Grothendieck six functors to obtain the second to last equality.
\varepsilonnd{proof}
Because duals are unique, there is an equivalence $\mathbb{S}h_{-\mathbb{L}ambdambda}(M) = \mathbb{S}h_\mathbb{L}ambdambda(M)^\vee$.
By passing to compact objects, we obtain an equivalence on small categories:
\betaegin{definition}\lambdabel{def: standard_daulity}
We denote by $\mathbb{S}D{\mathbb{L}ambdambda}: \mathbb{S}h_{-\mathbb{L}ambdambda}^c(M) \timesrightarrow{\sim} \mathbb{S}h_\mathbb{L}ambdambda^c(M)^{op}$ the equivalence whose Ind-completion induces the equivalence $\mathbb{S}h_{-\mathbb{L}ambdambda}(M) = \mathbb{S}h_\mathbb{L}ambdambda(M)^\vee$
associated to the duality data in Equation~(\ref{formula: standard_duality_data}) and call it the \textit{standard duality} associated to the pair $(M,\mathbb{L}ambdambda)$.
\varepsilonnd{definition}
Thus, there is a commutative diagram given by the counits:
$$
\betaegin{tikzpicture}
\node at (0,2.5) {$\mathbb{S}h_{-\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M)$};
\node at (5,2.5) {$\cV$};
\node at (0,1.5) {$\mathbb{S}h_{-\mathbb{L}ambdambda}(M) \text{ot}imes \mathbb{S}h_\mathbb{L}ambdambda(M)$};
\node at (0,0) {$\mathbb{S}h_\mathbb{L}ambdambda(M)^\vee \text{ot}imes \mathbb{S}h_\mathbb{L}ambdambda(M)$};
\node at (5,0) {$\cV$};
\deltaraw [->, thick] (1.6,2.5) -- (4.6,2.5) node [midway, above] {$p_! \mathbb{D}elta^* $};
\deltaraw [->, thick] (1.9,0) -- (4.6,0) node [midway, below] {$ \Hom(-,-)$};
\deltaraw [double equal sign distance, thick] (0,2.2) -- (0,1.8) node [midway, left] {$ $};
\deltaraw [double equal sign distance, thick] (0,1.2) -- (0,0.3) node [midway, left] {$\Ind(\mathbb{S}D{\mathbb{L}ambdambda}) \text{ot}imes \id$};
\deltaraw [double equal sign distance, thick] (5,2.2) -- (5,0.3) node [midway, right] {$ $};
\varepsilonnd{tikzpicture}
$$
Here we abuse the notation and use $\Hom(-,-)$ to denote the functor induced by its Ind-completion.
In particular, for $G \in \mathbb{S}h_{-\mathbb{L}ambdambda}(M)^c$ and $F \in \mathbb{S}h_\mathbb{L}ambdambda(M)$, there is an identification
$$ \Hom(\mathbb{S}D{\mathbb{L}ambdambda} F, G) = p_! (F \text{ot}imes G).$$
A consequence of this identification is that colimit-preserving functors are given by integral transforms, i.e.,
Theorem \ref{fmt} discussed in the introduction.
We mention the following proof is adapted from \cite{Ben-Zvi-Francis-Nadler} where they study a similar
statement in the setting of algebraic geometry.
\betaegin{proof}[Proof of Theorem \ref{fmt}]
The identification is a composition
$$ \mathbb{S}h_{-\mathbb{L}ambdambda \times \mathbb{S}igma}(M \times N) = \mathbb{S}h_{-\mathbb{L}ambdambda}(M) \text{ot}imes \mathbb{S}h_\mathbb{S}igma(N)
= \mathbb{S}h_\mathbb{L}ambdambda(M)^\vee \text{ot}imes \mathbb{S}h_\mathbb{S}igma(N) = \mathbb{F}un^L(\mathbb{S}h_\mathbb{L}ambdambda(M),\mathbb{S}h_\mathbb{S}igma(N)).$$
The effect of the first equivalence identifies $G \betaoxtimes F$ with $(G,F)$ for $G \in \mathbb{S}h_{-\mathbb{L}ambdambda}^c(M)$, $F \in \mathbb{S}h_\mathbb{S}igma^c(N)$ and these objects generate the corresponding category by Lemma \ref{pgc,PrLst} so it is sufficient to identify them. For $G \in \mathbb{S}h_{-\mathbb{L}ambdambda}^c(M)$ and any $F \in \mathbb{S}h_\mathbb{S}igma^c(N)$, the pair $(G,F)$ is first sent to $(\mathbb{S}D{\mathbb{L}ambdambda} G,F)$ and then the functor $\left( co\mathfrak{m}athscr{Y}(\mathbb{S}D{\mathbb{L}ambdambda} G) \right) \text{ot}imes_\cV F$ where we use $co\mathfrak{m}athscr{Y}$ to denote the co-Yoneda embedding.
For this computation we use $p_M: M \times N \rightarrow M$ to denote the projection to the $M$-component, $a_M: M \rightarrow \{*\}$ the projection to a point, and similarly to $N$. We evaluate the functor at $H$ and note that
$\left( \left( co\mathfrak{m}athscr{Y}(\mathbb{S}D{\mathbb{L}ambdambda} G) \right) \text{ot}imes_\cV F \right) (H) = \left( {a_N}^* \Hom(\mathbb{S}D{\mathbb{L}ambdambda} G,H) \right) \text{ot}imes F = \left( {a_N}^* {a_M}_! (G \text{ot}imes H) \right) \text{ot}imes F$ where we use the fact that $\mathbb{S}D{\mathbb{L}ambdambda} G$ is the object corepresenting $(H \mathfrak{m}apsto {a_M}_! (G \text{ot}imes H) )$. Then we can compute that the object on the right end is equivalent to
\betaegin{align*}
\left( {a_N}^* {a_M}_! (G \text{ot}imes H) \right) \text{ot}imes F
&= \left( {p_N}_! {p_M}^* (G \text{ot}imes H) \right) \text{ot}imes F \\
&= {p_N}_! \left( {p_M}^* (G \text{ot}imes H) \text{ot}imes p_N^* F \right) \\
&= {p_N}_! \left( (G \betaoxtimes F) \text{ot}imes p_M^* H \right) = (G \betaoxtimes F) \circ H
\varepsilonnd{align*}
by base change and the projection formula.
\varepsilonnd{proof}
\subsection{Verdier duality}\lambdabel{sec:verdierdual}
We will compare the duality $\mathbb{S}D{\mathbb{L}ambdambda}: \mathbb{S}h_{-\mathbb{L}ambdambda}^c(M) \timesrightarrow{\sim} \mathbb{S}h_\mathbb{L}ambdambda^c(M)^{op}$ we obtain from the last subsection with the more classical Verdier duality.
Recall that, for a locally compact Hausdorff space $X$, the Verdier duality is a functor
\betaegin{align*}
\VD{M}: \mathbb{S}h(X) &\rightarrow \mathbb{S}h(X)^{op} \\
F &\mathfrak{m}apsto \sHom(F,\omega_X)
\varepsilonnd{align*}
where $\omega_X \coloneqq p^! 1_\cV$ is the dualizing sheaf of $X$.
We note that when $X = M$ is a $C^1$-manifold, $\omega_M$ is an invertible local system locally isomorphic to $1_M[\deltaim M]$.
We also note that $\VD{M}$ is not an equivalence on the (large) category $\mathbb{S}h(M)$ so we have to restrict to smaller categories.
Recall that we use the notation $\mathbb{S}h_\mathbb{L}ambdambda^b(M)$ to denote the full subcategory of $\mathbb{S}h_\mathbb{L}ambdambda(M)$ consisting of sheaves with perfect stalks.
In this case, the Verdier dual
\betaegin{align*}
\VD{M}: \mathbb{S}h_\mathbb{L}ambdambda^b(M)^{op} &\timesrightarrow{\sim} \mathbb{S}h_{-\mathbb{L}ambdambda}^b(M) \\
F &\mathfrak{m}apsto \VD{M}(F) \coloneqq \sHom(F,\omega_M)
\varepsilonnd{align*}
is an equivalence since the double dual $F \rightarrow \VD{M} (\VD{M} (F))$ is an isomorphism by \cite[Proposition 3.4.3]{KS}.
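For a concrete illustration on $M = \mathbb{R}$ (with a choice of orientation, so that $\omega_{\mathbb{R}} \cong 1_{\mathbb{R}}[1]$, and writing $1_{[0,1]}$, $1_{(0,1)}$ for the zero-extensions of the constant sheaves on the closed and open intervals), one computes
$$ \VD{\mathbb{R}}(1_{[0,1]}) = \sHom(1_{[0,1]}, 1_{\mathbb{R}}[1]) \cong 1_{(0,1)}[1], \qquad \VD{\mathbb{R}}(1_{(0,1)}) \cong 1_{[0,1]}[1], $$
so the double dual indeed returns $1_{[0,1]}$, and the microsupport is replaced by its antipode, as expected from $\VD{M}: \mathbb{S}h_\Lambda^b(M)^{op} \xrightarrow{\sim} \mathbb{S}h_{-\Lambda}^b(M)$.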
From now on, we assume $M$ is compact for the rest of the subsection. Then by Corollary \ref{cor:prop-in-perf}, $\mathbb{S}h_\mathbb{L}ambdambda^b(M) \subseteq \mathbb{S}h_\mathbb{L}ambdambda^c(M)$. One can then ask what is the relation between $\mathbb{S}D{\mathbb{L}ambdambda}$ and $\VD{M}$.
We will explicitly use the following consequence of the rigidity assumption on $\cV$ in the computation.
\betaegin{lemma}[{\cite[Proposition 4.9]{Hoyois-Scherotzke-Sibilla}}]
Assume $\cV_0$ is a rigid symmetric monoidal category.
Then there is a canonical equivalence of symmetric monoidal $\infty$-categories
$$\cV_0 \rightarrow \cV_0^{op}, \, X \mathfrak{m}apsto X^\vee \coloneqq \Hom(X,1_{\cV}).$$
In particular, $(X^\vee)^\vee = X$.
\varepsilonnd{lemma}
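For instance (in the standard setup where $\cV_0$ consists of the compact objects of $\cV$): if $\cV = \mathrm{Mod}_k$ for a commutative ring $k$ and $\cV_0 = \mathrm{Perf}_k$, then
$$ X^\vee = \Hom_k(X, k), \qquad (X^\vee)^\vee \simeq X \quad \text{for } X \in \mathrm{Perf}_k, $$
which is the classical statement that the dualizable $k$-modules are exactly the perfect complexes.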
The following proposition shows that $\mathbb{S}D{\Lambda}$ and $\VD{M}$ are related by the (inverse) Serre functor in Proposition \ref{prop:sab-serre}. Recall that $\mathbb{N}D{M}(F) \coloneqq \sHom(F, 1_M)$.
\betaegin{proposition}\lambdabel{identifying_Verdier_with_standard}
For $F \in \mathbb{S}h_{-\mathbb{L}ambdambda}^b(M)$, $\mathbb{S}D{\mathbb{L}ambdambda}(F) = S_\mathbb{L}ambdambda^+ \circ \mathbb{N}D{M}(F) = S_\mathbb{L}ambdambda^+ \circ \VD{M}(F) \text{ot}imes \omega_M^{-1}$.
\varepsilonnd{proposition}
\betaegin{proof}
Let $G \in \mathbb{S}h_\Lambda(M)$. Theorem \ref{w=ad} and Proposition \ref{prop:hom_w_pm} imply that
\betaegin{align*}
\Hom(S_\mathbb{L}ambdambda^+ \mathbb{N}D{M}(F),G) &= \Hom(\mathbb{N}D{M}(F), S_\mathbb{L}ambdambda^-(G)) = \Hom(\mathbb{N}D{M}(F),T_{-\varepsilonpsilon} G) \\
&= p_* \left( \mathbb{D}elta^* \sHom(\varphii_1^* \mathbb{N}D{M}(F), \varphii_2^* G) \right),
\varepsilonnd{align*}
where we use the notation $p: M \rightarrow \{*\}$.
Since $F$ has perfect stalk, applying Remark \ref{rem:DFotimesG} we can trade the sheaf $\mathbb{D}elta^* \sHom(\varphii_1^* \mathbb{N}D{M}(F), \varphii_2^* G)$ with $\mathbb{N}D{M} (\mathbb{N}D{M}F) \text{ot}imes G = F \text{ot}imes G$.
The compact support assumption then implies, by the definition of $\mathbb{S}D{\mathbb{L}ambdambda}$, that
\betaegin{equation*}
\Hom(S_\mathbb{L}ambdambda^+ \circ \mathbb{N}D{M}(F),G) = p_! (F \text{ot}imes G) = \Hom(\mathbb{S}D{\mathbb{L}ambdambda}(F),G). \qedhere
\varepsilonnd{equation*}
\varepsilonnd{proof}
\betaegin{remark}\lambdabel{rem:kernelinverse}
We remark that, for a contact isotopy $\mathbb{P}hi: S^* M \times I \rightarrow S^* M$, the sheaf
$$K(\mathbb{P}hi^{-1}) \circ|_I\, \sHom( K(\mathbb{P}hi)^{-1}, 1_{M \times M})$$
is microsupported in $T^* M \times 0_I$ so we can see that $\mathbb{N}D{M}(F)^\varphi = \mathbb{N}D{M}(F^{\varphi^{-1}})$.
As a consequence, $S^+_\mathbb{L}ambdambda \circ \mathbb{N}D{M}(F) = \mathbb{N}D{M}(S^-_\mathbb{L}ambdambda F)$.
\varepsilonnd{remark}
Note that if we assume $S_\mathbb{L}ambdambda^+$ is invertible, or equivalently $S_\mathbb{L}ambdambda^-$ is its inverse, then Proposition \ref{identifying_Verdier_with_standard} implies that the equivalence $\mathbb{N}D{M}: \mathbb{S}h_{-\mathbb{L}ambdambda}^b(M)^{op} \timesrightarrow{\sim} \mathbb{S}h_\mathbb{L}ambdambda^b(M)$ can be extended to $\mathbb{S}h_\mathbb{L}ambdambda^c(M)^{op} \timesrightarrow{\sim} \mathbb{S}h_\mathbb{L}ambdambda^c(M)$ as
$$\left(F \mathfrak{m}apsto S_\mathbb{L}ambdambda^- ( \mathbb{S}D{\mathbb{L}ambdambda}(F) \text{ot}imes \omega_M ) \right).$$
Taking Ind-completions, we obtain an identification $\mathbb{S}h_{-\Lambda}(M)^\vee \cong \mathbb{S}h_\Lambda(M)$, which further provides duality data $(\epsilon^V,\eta^V)$ as in Formula (\ref{formula: standard_duality_data}).
\betaegin{lemma}
The co-unit $\varepsilonpsilon^V$ is given by
$$ \varepsilonpsilon^V = p_* \mathbb{D}elta^!: \mathbb{S}h_{-\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M) \rightarrow \cV.$$
\varepsilonnd{lemma}
\betaegin{proof}
We claim that, for $F \in \mathbb{S}h_{-\mathbb{L}ambdambda}^c(M)^{op}$ and $G \in \mathbb{S}h_\mathbb{L}ambdambda(M)$, there is an identification
$$\Hom\left(S_\mathbb{L}ambdambda^- \circ \mathbb{S}D{\mathbb{L}ambdambda}(F \text{ot}imes \omega_M^{-1}),G \right) = p_* \mathbb{D}elta^!(F \betaoxtimes G).$$
Indeed, since $S_\mathbb{L}ambdambda^+$ is the inverse of $S_\mathbb{L}ambdambda^-$, the left hand side is given by
\betaegin{align*}
\Hom\left(S_\mathbb{L}ambdambda^- \circ \mathbb{S}D{\mathbb{L}ambdambda}(F \text{ot}imes \omega_M^{-1}),G \right)
&= \Hom \left( \mathbb{S}D{\mathbb{L}ambdambda}(F \text{ot}imes \omega_M^{-1}), S_\mathbb{L}ambdambda^+ (G) \right) \\
&= p_* (F \text{ot}imes S_\mathbb{L}ambdambda^+ (G) \text{ot}imes \omega_M^{-1} ).
\varepsilonnd{align*}
We recall that $S_\Lambda^+ (G) = \operatorname{colim}_{\varphi \in W^+( \Lambda^c)} (G^w)^\varphi$, where we use $(-)^w$ to denote
a small positive wrapping that pushes $\mathfrak{m}sif(G)$ away from $\Lambda$ and $W^+( \Lambda^c)$
to denote the positive isotopies compactly supported away from $\Lambda$.
Since $M$ is compact, we know that $p_*$ preserves colimit and we can further change the expression to
\betaegin{align*}
\Hom\left(S_\mathbb{L}ambdambda^- \circ D_\mathbb{L}ambdambda(F \text{ot}imes \omega_M^{-1}),G \right)
&= \clmi{\varphi \in W^+(\mathbb{L}ambdambda^c)} \, p_* (F \text{ot}imes (G^w)^\varphi \text{ot}imes \omega_M^{-1} ) \\
&= \clmi{\varphi \in W^+(\mathbb{L}ambdambda^c)} \, p_* \left( \mathbb{D}elta^* (F \betaoxtimes (G^w)^\varphi) \text{ot}imes \omega_M^{-1} \right) \\
&= \clmi{\varphi \in W^+(\mathbb{L}ambdambda^c)} \, p_* \mathbb{D}elta^! \left( F \betaoxtimes (G^w)^\varphi \right).
\varepsilonnd{align*}
Here we use Proposition \ref{prop:mses}~(5) and Lemma \ref{omega_product}
for the last equality since $\mathfrak{m}sif((G^w)^\varphi) \cap \mathbb{L}ambdambda = \varnothing$.
But then, by writing $p_* \Delta^! \left( F \boxtimes (G^w)^\varphi \right) = \Hom(1_\Delta, F \boxtimes (G^w)^\varphi)$,
we see that the right hand side is $\Hom(1_\Delta, F \boxtimes G)$ and that the colimit is taken over a constant diagram,
since the same constancy holds as in the perturbation trick of Proposition \ref{prop:hom_w_pm}.
Thus,
\betaegin{equation*}
\Hom\left(S_\mathbb{L}ambdambda^- \circ D_\mathbb{L}ambdambda(F \text{ot}imes \omega_M^{-1}),G \right)
= \Hom(1_\mathbb{D}elta, F \betaoxtimes G) = p_* \mathbb{D}elta^! (F \betaoxtimes G). \qedhere
\varepsilonnd{equation*}
\varepsilonnd{proof}
The main theorem of this section is that the converse is also true, that is,
\betaegin{theorem}\lambdabel{converse-statement-Verdier}
Let $M$ be a compact manifold, $\mathbb{L}ambdambda \subseteq S^* M$ a compact subanalytic Legendrian, and denote by $\varepsilonpsilon^V$ the colimit-preserving functor
$$p_* \mathbb{D}elta^!: \mathbb{S}h_{-\mathbb{L}ambdambda \times \mathbb{L}ambdambda}(M \times M) \rightarrow \cV.$$
There exists an object $\varepsilonta^V$, which we identify as a colimit-preserving functor
$$\varepsilonta^V: \cV \rightarrow \mathbb{S}h_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}(M \times M),$$
such that the pair $(\varepsilonpsilon^V,\varepsilonta^V)$ provides a duality data in the sense of Definition \ref{smdual} in $\mathbb{P}rLst$ if and only if $S_\mathbb{L}ambdambda^+$ is invertible and the induced duality on $\mathbb{S}h_{-\mathbb{L}ambdambda}^c(M)^{op} = \mathbb{S}h_\mathbb{L}ambdambda^c(M)$ restricts to the Verdier duality $\VD{M}$ on $\mathbb{S}h_\mathbb{L}ambdambda(M)^b$.
\varepsilonnd{theorem}
We first remark that the functor $p_* \Delta^!$ is indeed colimit-preserving. For any $H \in \mathbb{S}h_{-\Lambda \times \Lambda}(M \times M)$, we have $p_* \Delta^! H = \Hom(1_\Delta,H)$. Choose a small positive pushoff displacing $\Lambda$ from itself and form the product isotopy $\Phi$, which displaces $N^*_\infty \Delta$ from $-\Lambda \times \Lambda$. By the perturbation trick from Proposition \ref{prop:hom_w_pm}, we obtain that
$$p_* \Delta^! H = \Hom(1_\Delta,H) = \Hom(1_\Delta^{w^{-1}},H) = p_* (\sHom(1_\Delta^{w^{-1}},1_{M \times M}) \otimes H),$$
and the last expression, being a composition of a tensor product and a pushforward from a compact manifold, is colimit-preserving.
\betaegin{proof}[Proof of Theorem \ref{converse-statement-Verdier}]
The pair $(\varepsilonpsilon^V,\varepsilonta^V)$ gives a duality data if $(\varepsilonpsilon \text{ot}imes \id) \circ (\id \text{ot}imes \varepsilonta) = \id$ and $(\id \text{ot}imes \varepsilonpsilon) \circ (\varepsilonta \text{ot}imes \id) = \id$.
We note that, under the identification $\mathbb{S}h_{-\Lambda \times \Lambda}(M \times M) \otimes \mathbb{S}h_{-\Lambda}(M) = \mathbb{S}h_{-\Lambda \times \Lambda \times -\Lambda}(M \times M \times M)$, the functor $(\epsilon \otimes \id): \mathbb{S}h_{-\Lambda \times \Lambda}(M \times M) \otimes \mathbb{S}h_{-\Lambda}(M) \rightarrow \mathbb{S}h_{-\Lambda}(M)$ is given by
\begin{align*}
\mathbb{S}h_{-\Lambda \times \Lambda \times -\Lambda}(M \times M \times M) &\rightarrow \mathbb{S}h_{-\Lambda}(M) \\
A &\mapsto {q_3}_* \left( \sHom\left(q_{12}^* K(\Phi^{-1}), A \right) \right).
\end{align*}
Indeed, it is sufficient to check this for $A$ of the form $H \boxtimes F$ with $H \in \mathbb{S}h_{-\Lambda \times \Lambda}(M \times M)$ and $F \in \mathbb{S}h_{-\Lambda}(M)$ by Proposition \ref{pd:g},
since the expression on the right is colimit-preserving for the same reason that $p_* \Delta^!$ is.
That is, $(\varepsilonpsilon \text{ot}imes \id)$ sends the pair $(H,F)$ to
\betaegin{align*}
\left( p^* \Hom(K(\mathbb{P}hi^-),H)\right) \text{ot}imes F
&= \left( p^* \Gamma(M \times M; \sHom\left(K(\mathbb{P}hi^{-1}),H\right) \text{ot}imes \right) F \\
&= \left( {q_3}_* q_{12}^* \sHom\left(K(\mathbb{P}hi^{-1}),H\right) \right) \text{ot}imes F \\
&= {q_3}_* \left( \sHom\left(q_{12}^* K(\mathbb{P}hi^{-1}),q_{12}^* H\right) \text{ot}imes q_3^* F \right) \\
&= {q_3}_* \left( \sHom\left(q_{12}^* K(\mathbb{P}hi^{-1}), H \betaoxtimes F \right) \right).
\varepsilonnd{align*}
Here, we use base change for the second equality, the projection formula for the second equality, and Proposition \ref{prop:mses}~(8) and Example \ref{GKS_standard_Reeb} for the last equality. Now to compute the endo-functor $(\varepsilonpsilon \text{ot}imes \id) \circ (\id \text{ot}imes \varepsilonta)$ on $\mathbb{S}h_{-\mathbb{L}ambdambda}(M)$, we set $A = F \betaoxtimes \varepsilonta$ for $F \in \mathbb{S}h_{-\mathbb{L}ambdambda}(M)$. Thus,
\betaegin{align*}
(\varepsilonpsilon \text{ot}imes \id) \circ (\id \text{ot}imes \varepsilonta)(F)
&= {q_3}_* \left( \sHom\left(q_{12}^* K(\mathbb{P}hi^{-1}), F \betaoxtimes \varepsilonta \right) \right) \\
&= {q_3}_* \left( \sHom \left(q_{12}^* K(\mathbb{P}hi^{-1}), 1_{M^3} \right) \text{ot}imes (F \betaoxtimes \varepsilonta) ) \right) \\
&= {q_3}_* \left( q_{12}^* \sHom \left( K(\mathbb{P}hi^{-1}), 1_{M \times M} \right) \text{ot}imes q_{23}^* \varepsilonta \text{ot}imes q_1^* F) ) \right)
\varepsilonnd{align*}
Here we use Proposition \ref{prop:mses}~(8) and Example \ref{GKS_standard_Reeb} again to turn $\sHom$ into a $\text{ot}imes$ and then expand $\betaoxtimes$ by the definition. Our goal now is to organize the pull/push functors associated to the projections into the form of convolutions. This process is a special case of the proof for Proposition \ref{conv:asso}.
\betaegin{align*}
&{q_3}_* \left( q_{12}^* \sHom \left( K(\mathbb{P}hi^{-1}), 1_{M \times M} \right) \text{ot}imes q_{23}^* \varepsilonta \text{ot}imes q_1^* F) ) \right) \\
&= {p_2}_* {q_{13}}_* \left( [ q_{12}^* \sHom \left(K(\mathbb{P}hi^{-1}), 1_{M \times M} \right) \text{ot}imes q_{23}^* \varepsilonta ]
\text{ot}imes q_{13}^* p_1^* F \right) \\
&= {p_2}_* \left( {q_{13}}_* [ q_{12}^* \sHom \left( K(\mathbb{P}hi^{-1}), 1_{M \times M} \right) \text{ot}imes q_{23}^* \varepsilonta ]
\text{ot}imes p_1^* F \right) \\
&= {p_2}_* \left( \left( \varepsilonta \circ_{M} \sHom \left( K(\mathbb{P}hi^{-1}), 1_{M \times M} \right) \right)
\text{ot}imes p_1^* F \right) \\
&= \left( \varepsilonta \circ_{M} \sHom ( K(\mathbb{P}hi^{-1}), 1_{M \times M} ) \right) \circ F.
\varepsilonnd{align*}
By Lemma \ref{lad&ker}, the last expression is
$$\left( \varepsilonta \circ_{M} \sHom ( K(\mathbb{P}hi^{-1}), 1_{M \times M} ) \right) \circ F = \left( \varepsilonta \circ_{M} \iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \left( \sHom ( K(\mathbb{P}hi^{-1}), 1_{M \times M} ) \right) \right) \circ F.$$
Thus, the requirement $(\varepsilonpsilon \text{ot}imes \id) \circ (\id \text{ot}imes \varepsilonta) = \id$ implies that
$$\varepsilonta \circ_{M} \iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* \left( \sHom ( K(\mathbb{P}hi^{-1}), 1_{M \times M} ) \right)= \iota_{\mathbb{L}ambdambda \times -\mathbb{L}ambdambda}^* (1_\mathbb{D}elta)$$
since convolution kernels are determined by their effect on $\mathbb{S}h_{-\Lambda}(M)$ by Theorem \ref{fmt}. But, as a functor on $\mathbb{S}h_{-\Lambda}(M)$, by Lemma \ref{lad&ker} the kernel $\iota_{\Lambda \times -\Lambda}^* \left( \sHom ( K(\Phi^{-1}), 1_{M \times M} ) \right)$ is, up to tensoring with $\omega_M$, the functor $S_\Lambda^+$, so we conclude that it has a left inverse.
On the other hand, a similar computation shows that $(\id \otimes \epsilon ) \circ (\eta \otimes \id) = \id$ on $\mathbb{S}h_\Lambda(M)$ implies that
$$G \circ \left( \iota_{\Lambda \times -\Lambda}^* \left( \sHom ( K(\Phi^{-1}), 1_{M \times M} ) \right) \circ_{M} \eta \right) = G$$
for all $G \in \mathbb{S}h_\Lambda(M)$, and thus $\iota_{\Lambda \times -\Lambda}^* \left( \sHom ( K(\Phi^{-1}), 1_{M \times M} ) \right) \circ_{M} \eta = \iota_{\Lambda \times -\Lambda}^* (1_\Delta)$. The trick now is that we can view this equality in $\mathbb{S}h_\Lambda(M)$ by convolving from the left instead. Thus, we conclude that $S_\Lambda^+$ has a right inverse as well. In particular, $S_\Lambda^+$ is invertible with $S_\Lambda^-$ as its inverse.
\varepsilonnd{proof}
\betaegin{corollary}
When $\mathbb{L}ambdambda$ is either swappable or full stop, the Verdier dual $\VD{M}: \mathbb{S}h_\mathbb{L}ambdambda^b(M)^{op} \timesrightarrow{\sim} \mathbb{S}h_\mathbb{L}ambdambda^b(M)$ extends to an equivalence $\mathbb{S}h_\mathbb{L}ambdambda^c(M)^{op} \timesrightarrow{\sim} \mathbb{S}h_\mathbb{L}ambdambda^c(M)$ on all compact objects.
\varepsilonnd{corollary}
We illustrate with an example in which $S_\Lambda^+$ is invertible.
\betaegin{example}
Let $M = \mathbb{R}R^1$ and $\mathbb{L}ambdambda = 0_{\mathbb{R}R^1} \cup (\cup_{n \in \mathbb{Z}Z} T^*_{n,\leq} \mathbb{R}R^1)$. One can see $\mathbb{L}ambdambda$ is swappable in this case.
The constant sheaves $\{ 1_{(n,\infty)} \}_{n \in \mathbb{Z}Z}$ form a set of generators in $\mathbb{S}h_\mathbb{L}ambdambda(\mathbb{R}R^1)$ and $S_\mathbb{L}ambdambda^+$ sends $1_{(n,\infty)}$ to $1_{(n+1,\infty)}$ for $n \in \mathbb{Z}Z$.
By Lemma \ref{lad&ker}, the functor $S_\Lambda^+$ and its inverse are given by sheaf kernels, and these kernels are, up to a shift by $[1]$, the constant sheaves supported in the shaded areas in Figure \ref{fig:wraponce-kernel}. We mention that the case $M = S^1$ and $\Lambda = 0_{S^1} \cup T^*_{0,\leq} S^1$ can be described similarly by lifting to the universal cover.
\betaegin{figure}[h!]
\includegraphics[width=0.9\textwidth]{Duality-wraponce.pdf}
\caption{The sheaf kernel of the positive wrap-once functor $S_\mathbb{L}ambdambda^+$ for $\mathbb{L}ambdambda = 0_{\mathbb{R}R^1} \cup (\cup_{n \in \mathbb{Z}Z} T^*_{n,\leq} \mathbb{R}R^1)$ (left) the kernel of the negative wrap-once functor $S_\mathbb{L}ambdambda^-$ (right). The stops are the dashed lines where conormal directions are pointing down and right.}\lambdabel{fig:wraponce-kernel}
\varepsilonnd{figure}
\varepsilonnd{example}
\section{Example that wrap-once is not equivalence}\lambdabel{sec:example}
When introducing the notion of a swappable stop, Sylvan already noticed the strong constraint that swappability puts on the stop \cite{SylvanOrlov}. We have now proved that full stops also imply sphericality. However, it is not known whether being a full or swappable stop is a necessary condition, or even whether any condition is really needed to obtain a spherical adjunction.
Here we provide an example where $S_\mathbb{L}ambdambda^+: \mathbb{S}h_\mathbb{L}ambdambda^c(M) \rightarrow \mathbb{S}h_\mathbb{L}ambdambda^c(M)$ is not an equivalence. Moreover, our computation implies that in this case, $m_\mathbb{L}ambdambda$ does not preserve compact objects and $m_\mathbb{L}ambdambda^l$ does not preserve proper modules (or equivalently, sheaves with perfect stalks).
In order to compute the wrap-once functor, we need the following geometric criterion for cofinal wrappings.
\betaegin{lemma}[{\cite[Lemma 3.29]{Ganatra-Pardon-Shende1}}]
Let $\mathbb{S}igma \subseteq S^*M \betaackslash \mathbb{L}ambdambda$ be a subanalytic Legendrian, and $\varphi_k$ be an increasing sequence of contact flows on $S^*M \betaackslash \mathbb{L}ambdambda$. Suppose there exists a contact form $\alphalpha$ on $S^*M \betaackslash \mathbb{L}ambdambda$ such that
$$\lim_{k \rightarrow \infty} \int_0^1 \mathfrak{m}in_{\varphi_k^t(\mathbb{S}igma)} \alphalpha \betaig(\varphiartial_t \varphi_k|_{\varphi_k^t(\mathbb{S}igma)}\betaig)\, dt = \infty.$$
Then $\{\varphi_k\}_{k \in \mathbb{N}N}$ is a cofinal sequence of wrappings in the category of positive wrappings of $\mathbb{S}igma$ in $S^*M \betaackslash \mathbb{L}ambdambda$.
\varepsilonnd{lemma}
\betaegin{proposition}\lambdabel{prop:T2example}
Let $M = T^n = \mathbb{R}R^n/\mathbb{Z}Z^n$, $\mathbb{L}ambdambda = S^{*}_0T^n \subseteq S^{*}T^n\,(n \mathfrak{m}athfrak{g}eq 2)$, and $\overline{B_\varepsilonpsilon(0)}$ be a closed ball of radius $\varepsilonpsilon$ around $0$. Then $S^-_\mathbb{L}ambdambda(1_0) = \wrap_\mathbb{L}ambdambda^- \circ 1_{\overline{B_\varepsilonpsilon(0)}} \notin \mathbb{S}h^b_\mathbb{L}ambdambda(T^n)$. In particular, $S_\mathbb{L}ambdambda^-$ does not induce an equivalence on the proper subcategory $\mathbb{S}h^b_\mathbb{L}ambdambda(T^n)$.
\varepsilonnd{proposition}
\betaegin{proof}
Let $\epsilon > 0$ be a small positive number and $\eta_k: T^n \rightarrow [0, 1]$ be a smooth cut-off function such that for small neighbourhoods $\overline{B_{\epsilon/2k^2}(0)} \subset \overline{B_{\epsilon/k^2}(0)}$ around $0$ we have
$$\eta_k|_{\overline{B_{\epsilon/2k^2}(0)}} = 0, \;\; \eta_k|_{T^n \backslash \overline{B_{\epsilon/k^2}(0)}} = 1.$$
Let the decreasing sequence of negative Hamiltonians be $H_k(x, \timesi) = -k\,\varepsilonta_k(x)|\timesi|^2$ and the contact flow be $\varphi_k$. Rescale the contact form on $S^*T^n \betaackslash S^*_0T^n$ by a function $\deltaelta(x) \rightarrow \infty, \, x \rightarrow 0$. One can easily check by the above lemma that $\varphi_k$ is a cofinal sequence of positive wrappings on $S^{*}T^n \betaackslash \mathbb{L}ambdambda$. We will show that $\operatornameeratorname{lim}_{k \rightarrow \infty} K(\varphi_{k}) \circ 1_{\overline{B_\varepsilonpsilon(0)}}$ does not have perfect stalk at $T^n \betaackslash \{0\}$.
For simplicity, we now assume that $n = 2$. Consider the universal cover $\varphii: \mathbb{R}R^n \rightarrow \mathbb{R}R^n /\mathbb{Z}Z^n \cong T^n$. Let the lifting of the Hamiltonian be $\overline{H}_k(x, \timesi) = -k\,\varepsilonta_k(\varphii(x))|\timesi|^2$ and the lifting of the flow be $\overline{\varphi}_k$. Then
$$K({\varphi}_{k}) \circ 1_{\overline{B_\varepsilonpsilon(0)}} = \varphii_*\betaig(K(\overline{\varphi}_{k}) \circ 1_{\overline{B_\varepsilonpsilon(0)}}\betaig).$$
We then show that in each region of the form $\square_m = [m+\varepsilonpsilon, m+1-\varepsilonpsilon] \times [\varepsilonpsilon, 1-\varepsilonpsilon]$ where $m \mathfrak{m}athfrak{g}eq 0$, when $k \mathfrak{m}athfrak{g}eq m+1$ we have
$$1_\cV \hookrightarrow \Gamma\betaig(\square_m, K(\varphi_{k}) \circ 1_{\overline{B_\varepsilonpsilon(0)}}\betaig).$$
In fact, for the outward unit conormal bundle of $\overline{B_\varepsilonpsilon(0)}$, under the Hamiltonian $H_k$, the boundary arc of the sector $S_k$ in between the rays $\theta = \alpharcsin(\varepsilonpsilon/k^2)$ and $\theta = \alpharctan(1/k) - \alpharcsin(\varepsilonpsilon/k^2(1 + k^2)^{1/2})$ will follow the inverse geodesic flow $\overline{H}_k = -k|\timesi|^2$ determined by $\varphiartial_t \varphi_k = k\varphiartial/\varphiartial r$. Therefore there is an injection
$$1_\cV \hookrightarrow \Gamma\betaig(S_k, K(\varphi_{k}) \circ 1_{\overline{B_\varepsilonpsilon(0)}}\betaig) \hookrightarrow \Gamma\betaig(\square_m, K(\overline{\varphi}_{k}) \circ 1_{\overline{B_\varepsilonpsilon(0)}}\betaig).$$
Then under the projection map $\varphii: \mathbb{R}R^n \rightarrow \mathbb{R}R^n /\mathbb{Z}Z^n \cong T^n$, write $\square = [\varepsilonpsilon, 1-\varepsilonpsilon] \times [\varepsilonpsilon, 1-\varepsilonpsilon] \subset T^n$. We know that when $k \mathfrak{m}athfrak{g}eq m+1$,
$$1_\cV^{\operatornamelus m} \hookrightarrow \Gamma\betaig(\square, K(\varphi_{k}) \circ 1_{\overline{B_\varepsilonpsilon(0)}}\betaig).$$
Therefore, $\Gamma\betaig(\square, \lmi{k \rightarrow \infty} K(\varphi_{k}) \circ 1_{\overline{B_\varepsilonpsilon(0)}}\betaig) \notin \cV_0$. Since $\lmi{k \rightarrow \infty} K(\varphi_{k}) \circ 1_{\overline{B_\varepsilonpsilon(0)}} \in \mathbb{S}h_{S_0^*T^n}(T^n)$ is constructible, we can conclude that the stalk at $x' \neq x$ is isomorphic to the sections on $\square$, and hence is not perfect.
\varepsilonnd{proof}
\betaegin{figure}
\centering
\includegraphics[width=1.0\textwidth]{T2-nonswappable.pdf}
\caption{The black circles are the boundaries of the regions where the Hamiltonians $H_k$ are cut off by the functions $\eta_k$. The blue sectors are the sectors which completely follow the inverse geodesic flow, as they do not intersect the black circles. Since the radii of the black circles decrease (and converge to 0), the slope of the lower edge of the sectors that follow the geodesic flow also decreases (and converges to 0). Even though the slopes of the upper edges of the sectors decrease as more and more black circles appear in the top right part of the plane, the sequence of sectors does not shrink to nothing and can go arbitrarily far away.}\label{fig:my_label}
\varepsilonnd{figure}
\betaegin{corollary}
Let $M = T^n$ and $\mathbb{L}ambdambda = S^{*}_0T^n \subseteq S^{*}T^n\,(n \mathfrak{m}athfrak{g}eq 2)$. $S_\mathbb{L}ambdambda^+$ is not an equivalence on $\mathbb{S}h^c_\mathbb{L}ambdambda(T^n)$.
\varepsilonnd{corollary}
\betaegin{proof}
Assume that $S_\mathbb{L}ambdambda^+$ is an equivalence on $\mathbb{S}h^c_\mathbb{L}ambdambda(T^n)$. Consider the equivalence $\mathbb{S}h^b_\mathbb{L}ambdambda(T^n) \simeq \mathfrak{m}athrm{Fun}^{ex}(\mathbb{S}h^c_\mathbb{L}ambdambda(T^n)^{op}, \cV_0)$ given by the homomorphism pairing as stated in Theorem \ref{thm:perfcompact}. For $F \in \mathbb{S}h_\mathbb{L}ambdambda^c(T^n), G \in \mathbb{S}h_\mathbb{L}ambdambda^b(T^n)$, since
$$Hom(S_\mathbb{L}ambdambda^+(F), G) = Hom(F, S_\mathbb{L}ambdambda^-(G)),$$
we know that $S_\mathbb{L}ambdambda^-$ has to be an equivalence on $\mathbb{S}h^b_\mathbb{L}ambdambda(T^n)$. This contradicts the proposition.
\varepsilonnd{proof}
\betaegin{corollary}
Let $M = T^n$ and $\mathbb{L}ambdambda = S^{*}_0T^n \subseteq S^{*}T^n\,(n \mathfrak{m}athfrak{g}eq 2)$. Then
$m_\mathbb{L}ambdambda$ does not preserve compact objects.
\varepsilonnd{corollary}
\betaegin{proof}
This follows immediately from the fiber sequence $m^l_\mathbb{L}ambdambda \circ m_\mathbb{L}ambdambda \rightarrow \mathfrak{m}athrm{id}_{\mathbb{S}h_\mathbb{L}ambdambda(T^n)} \rightarrow S^+_\mathbb{L}ambdambda$.
\varepsilonnd{proof}
\betaegin{remark}
We believe one can also show that $m_\Lambda^l$ does not preserve proper modules (or objects with perfect stalks) using a similar argument.
\varepsilonnd{remark}
We can then deduce the following geometric result which shows that the Weinstein stop is not a swappable stop.
\betaegin{corollary}
Let $M = T^n\,(n \mathfrak{m}athfrak{g}eq 2)$ and $\mathbb{L}ambdambda = S^{*}_0T^n$. Then the Weinstein hypersurface $F_\mathbb{L}ambdambda$ defined as the ribbon of $\mathbb{L}ambdambda \subseteq S^{*}T^n$ is not a swappable hypersurface.
\varepsilonnd{corollary}
We can compare our result with the following result of Dahinden \cite{BS-LegIsotopy}.
\betaegin{theorem}[Dahinden \cite{BS-LegIsotopy}]
Let $M$ be a connected manifold with $\deltaim M \mathfrak{m}athfrak{g}eq 2$. Suppose there exists a positive Legendrian isotopy $\mathbb{L}ambdambda_t \subseteq S^{*}M$ such that $\mathbb{L}ambdambda_0 = \mathbb{L}ambdambda_1 = \mathbb{L}ambdambda = S^{*}_xM$ and $\mathbb{L}ambdambda_t \cap \mathbb{L}ambdambda = \varnothing, \, t \in (0, 1)$. Then $M$ is simply connected or $M = \mathbb{R}\mathbb{P}^n$.
\varepsilonnd{theorem}
Dahinden's theorem does not imply that the Weinstein ribbon $F_\Lambda$ of $\Lambda \subseteq S^{*}M$ is a swappable hypersurface, because it is in general unknown whether the exact symplectomorphism of $F_\Lambda$ defined by the positive loop sends the zero section to itself (note that this may be closely related to the nearby Lagrangian conjecture). Therefore, our corollary is at least a priori stronger than the theorem.
\betaibliographystyle{amsplain}
\betaibliography{ref_KuoLi}
\varepsilonnd{document} |
\begin{document}
\title{\bf\Large A Product Version of the Hilton-Milner Theorem}
\date{}
\author{Peter Frankl$^1$, Jian Wang$^2$\\[10pt]
$^{1}$R\'{e}nyi Institute, Budapest, Hungary\\[6pt]
$^{2}$Department of Mathematics\\
Taiyuan University of Technology\\
Taiyuan 030024, P. R. China\\[6pt]
E-mail: $^[email protected], $^[email protected]
}
\maketitle
\begin{abstract}
Two families $\mathcal{F},\mathcal{G}$ of $k$-subsets of $\{1,2,\ldots,n\}$ are called non-trivial cross-intersecting if $F\cap G\neq \emptyset$ for all $F\in \mathcal{F}, G\in \mathcal{G}$ and $\cap \{F\colon F\in \mathcal{F}\}=\emptyset=\cap \{G\colon G\in\mathcal{G}\}$. In the present paper, we determine the maximum product of the sizes of two non-trivial cross-intersecting families of $k$-subsets of $\{1,2,\ldots,n\}$ for $n\geq 4k$, $k\geq 8$, which is a product version of the classical Hilton-Milner Theorem.
\end{abstract}
\section{Introduction}
For a positive integer $n$ let $[n]$ denote the standard $n$-set $\{1,2,\ldots,n\}$. For $1\leq i\leq j\leq n$ set also $[i,j]=\{i,i+1,\ldots,j\}$. For an integer $k$ let $\binom{[n]}{k}$ denote the collection of all $k$-element subsets of $[n]$. Subsets of $\binom{[n]}{k}$ are called {\it $k$-graphs} or {\it $k$-uniform families}.
A $k$-graph $\mathcal{F}$ is called {\it $t$-intersecting} if $|F\cap F'|\geq t$ for all $F,F'\in \mathcal{F}$. Analogously, two $k$-graphs $\mathcal{F}$ and $\mathcal{G}$ are called {\it cross $t$-intersecting} if $|F\cap G|\geq t$ for all $F\in \mathcal{F}$ and $G\in \mathcal{G}$. In case of $t=1$ we omit the 1 and use the term {\it cross-intersecting}.
One of the central results of extremal set theory is the Erd\H{o}s-Ko-Rado Theorem.
{\noindent\bf Erd\H{o}s-Ko-Rado Theorem (\cite{EKR}).} Let $n>k>t>0$ and suppose that $\mathcal{F}\subset \binom{[n]}{k}$ is $t$-intersecting. Then, for $n\geq n_0(k,t)$,
\begin{align}\label{ineq-ekr}
|\mathcal{F}| \leq \binom{n-t}{k-t}.
\end{align}
We should mention that the exact value of $n_0(k,t)$ is $(k-t+1)(t+1)$. For $t=1$ it was determined already in \cite{EKR}, for $t\geq 15$ it is due to \cite{F78}. Finally Wilson \cite{W3} closed the gap $2\leq t\leq 14$ with a proof valid for all $t$.
Let us note that the {\it full $t$-star}, $\left\{F\in \binom{[n]}{k}\colon [t]\subset F\right\}$ shows that \eqref{ineq-ekr} is best possible. In general, for a set $T\subset[n]$ let $\mathcal{S}_T=\left\{S\in \binom{[n]}{k}\colon T\subset S\right\}$ denote the {\it star of $T$}.
For the case $t=1$ the classical Hilton-Milner Theorem is a strong stability result. Recall that a family $\mathcal{F}$ is called {\it non-trivial} if $\cap \{F\colon F\in \mathcal{F}\}=\emptyset$.
{\noindent\bf Hilton-Milner Theorem (\cite{HM67}).} If $n> 2k$ and $\mathcal{F}\subset \binom{[n]}{k}$ is non-trivial intersecting, then
\begin{align}\label{ineq-nontrival}
|\mathcal{F}| \leq \binom{n-1}{k-1}- \binom{n-k-1}{k-1} +1 =:h(n,k).
\end{align}
By now there are many different proofs known for this important result, cf. \cite{Alon, Borg,FFuredi,FT,F2017,GH,Mors} etc.
Let
\[
\mathcal{H}\mathcal{M}(n,k) =\left\{F\in \binom{[n]}{k}\colon 1\in F,\ F\cap [2,k+1]\neq \emptyset\right\}\cup \{[2,k+1]\}.
\]
Clearly, $\mathcal{H}\mathcal{M}(n,k)$ is a non-trivial intersecting family and it shows that \eqref{ineq-nontrival} is best possible.
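For a small concrete instance of the bound, take $n=9$ and $k=3$: then
\[
h(9,3)=\binom{8}{2}-\binom{5}{2}+1=28-10+1=19,
\]
while the full star $\mathcal{S}_{\{1\}}$ has $\binom{8}{2}=28$ members.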
Let us state our main result.
\begin{thm}\label{main}
Suppose that $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ are non-trivial cross-intersecting families, $n\geq 4k$, $k\geq 8$. Then
\begin{align}\label{ineq-hfhg2}
|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2=\left(\binom{n-1}{k-1}- \binom{n-k-1}{k-1} +1\right)^2.
\end{align}
\end{thm}
The same proof also yields the statement for $n\geq 5k$, $k \geq 7$ and for $n\geq 8k$, $k\geq 6$.
Besides $\mathcal{F}=\mathcal{G}=\mathcal{H}\mathcal{M}(n,k)$, for any $A,B \in {[2,n]\choose k}$ with $A\cap B\neq \emptyset$ the construction
\begin{align*}
\mathcal{F} = \left\{F\in \binom{[n]}{k}\colon 1\in F,\ F\cap A \neq \emptyset\right\}\cup\{B\},\\[5pt] \mathcal{G} = \left\{G\in \binom{[n]}{k}\colon 1\in G,\ G\cap B \neq \emptyset\right\}\cup \{A\}
\end{align*}
also attains equality in \eqref{ineq-hfhg2}.
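Indeed, since $A,B\in \binom{[2,n]}{k}$, each of these two families has exactly
\[
\binom{n-1}{k-1}-\binom{n-k-1}{k-1}+1=h(n,k)
\]
members; they are easily seen to be non-trivial (the member $B$ of $\mathcal{F}$ and the member $A$ of $\mathcal{G}$ avoid the element $1$), and the cross-intersecting property follows from $F\cap A\neq\emptyset$, $G\cap B\neq\emptyset$ and $A\cap B\neq\emptyset$.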
In the next section we will state several theorems involving the product of the sizes of cross-intersecting families. Some are important and powerful. However, to the best of our knowledge \eqref{ineq-hfhg2} is the first such result that implies the Hilton-Milner Theorem (just set $\mathcal{F}=\mathcal{G}$).
In our proofs, we need two simple inequalities involving binomial coefficients.
\begin{prop}
Let $n,k,i$ be positive integers. Then
\begin{align}
&\binom{n-i}{k} \geq \left(\frac{n-k-(i-1)}{n-(i-1)}\right)^i \binom{n}{k}, \label{ineq-key}\\[5pt]
&\binom{n-i}{k-i} \binom{n}{k}\leq \binom{n-i+1}{k-i+1} \binom{n-1}{k-1}, \mbox{\rm \ for } n\geq k\geq i\geq 2.\label{ineq-key2}
\end{align}
\end{prop}
\begin{proof}
Since
\[
\frac{\binom{n-i}{k}}{\binom{n}{k}} = \frac{(n-k)(n-k-1)\ldots(n-k-(i-1))}{n(n-1)\ldots(n-(i-1))}\geq \left(\frac{n-k-(i-1)}{n-(i-1)}\right)^i,
\]
we see that \eqref{ineq-key} holds.
For \eqref{ineq-key2}, simply note that
\[
\frac{\binom{n-i}{k-i} \binom{n}{k}}{\binom{n-i+1}{k-i+1} \binom{n-1}{k-1}} =\frac{n(k-i+1)}{k(n-i+1)}=\frac{kn-(i-1)n}{kn-(i-1)k}\leq 1,
\]
and the inequality follows.
\end{proof}
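For orientation, at $n=10$, $k=3$ and $i=2$ the two inequalities read
\[
\binom{8}{3}=56\geq \left(\frac{6}{9}\right)^2\binom{10}{3}=\frac{160}{3}\approx 53.3
\qquad\mbox{and}\qquad
\binom{8}{1}\binom{10}{3}=960\leq \binom{9}{2}\binom{9}{2}=1296 .
\]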
Let us recall the following common notations:
$$\mathcal{F}(i)=\{F\setminus\{i\}\colon i\in F\in \mathcal{F}\}, \qquad \mathcal{F}(\bar{i})= \{F\in\mathcal{F}: i\notin F\}.$$
Note that $|\mathcal{F}|=|\mathcal{F}(i)|+|\mathcal{F}(\bar{i})|$. For $P\subset Q\subset [n]$, let
\[
\mathcal{F}(P,Q) = \left\{F\setminus Q\colon F\in\mathcal{F},\ F\cap Q=P \right\}.
\]
We also use $\mathcal{F}(\bar{Q})$ to denote $\mathcal{F}(\emptyset, Q)$. For $\mathcal{F}(\{i\},Q)$ we simply write $\mathcal{F}(i,Q)$.
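To illustrate these notations, let $\mathcal{F}=\bigl\{\{1,2,3\},\{2,3,4\},\{1,4,5\}\bigr\}\subset \binom{[5]}{3}$. Then $\mathcal{F}(1)=\bigl\{\{2,3\},\{4,5\}\bigr\}$, $\mathcal{F}(\bar{1})=\bigl\{\{2,3,4\}\bigr\}$, and for $Q=\{1,2\}$ we have $\mathcal{F}(2,Q)=\bigl\{\{3,4\}\bigr\}$ and $\mathcal{F}(\bar{Q})=\emptyset$.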
\section{Earlier product theorems and some tools}
In this section, we review some earlier theorems concerning the product of cross-intersecting families. We also recall Hilton's Lemma and give some corollaries that will be used later.
\begin{thm}[Pyber \cite{Pyber86}]
Suppose that $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ are cross-intersecting, $n\geq 2k$. Then
\begin{align}\label{ineq-pyber}
|\mathcal{F}||\mathcal{G}| \leq \binom{n-1}{k-1}^2.
\end{align}
\end{thm}
\begin{thm}[Matsumoto-Tokushige \cite{MT}]
Let $k,\ell$ be positive integers, $n\geq 2k\geq 2\ell$. Suppose that $\mathcal{F}\subset \binom{[n]}{k}$ and $\mathcal{G}\subset \binom{[n]}{\ell}$ are cross-intersecting. Then
\begin{align}\label{ineq-mt}
|\mathcal{F}||\mathcal{G}| \leq \binom{n-1}{k-1}\binom{n-1}{\ell-1}.
\end{align}
\end{thm}
Note that for the case $n\geq 3k\geq 3\ell$, \eqref{ineq-mt} was already proved by Pyber \cite{Pyber86}. For a short proof of \eqref{ineq-pyber} cf. \cite{FK2017}. In that paper some more precise product results are proven.
\begin{example}
Let $2\leq s\leq k+1$ and define two families
\[
\mathcal{A}_s =\left\{A\in \binom{[n]}{k}\colon 1\in A, A\cap [2,s]\neq \emptyset\right\},\ \mathcal{B}_s=\mathcal{S}_{\{1\}} \cup \left\{B\in \binom{[n]}{k}\colon [2,s]\subset B\right\}.
\]
\end{example}
It is easy to check that $\mathcal{A}_s$ and $\mathcal{B}_s$ are cross-intersecting.
\begin{thm}[\cite{FK2017}]
Let $3\leq s\leq k+1$, $n\geq 2k$. Suppose that $\mathcal{A},\mathcal{B} \subset \binom{[n]}{k}$ are cross-intersecting and \[
|\mathcal{B}|\geq \binom{n-1}{k-1}+\binom{n-s}{k-s+1}.
\]
Then
\begin{align}\label{ineq-FK17}
|\mathcal{A}||\mathcal{B}| \leq \left(\binom{n-1}{k-1}-\binom{n-s}{k-1}\right)\left(\binom{n-1}{k-1}+\binom{n-s}{k-s+1}\right)=|\mathcal{A}_s||\mathcal{B}_s|.
\end{align}
\end{thm}
An important tool for proving the above results is the Kruskal-Katona Theorem (\cite{Kruskal,Katona}, cf. \cite{F84} or \cite{Keevash} for short proofs of it).
Daykin \cite{daykin} was the first to show that the Kruskal-Katona Theorem implies the $t=1$ case of the Erd\H{o}s-Ko-Rado Theorem. Hilton \cite{Hilton} gave a very useful reformulation of the Kruskal-Katona Theorem. To state it let us recall the definition of the lexicographic order on $\binom{[n]}{k}$. For two distinct sets $F,G\in \binom{[n]}{k}$ we say that $F$ {\it precedes} $G$ if
\[
\min\{i\colon i\in F\setminus G\}<\min\{i\colon i\in G\setminus F\}.
\]
E.g., $(1,7)$ precedes $(2,3)$. For a positive integer $b$ let $\mathcal{L}(n,b,m)$ denote the first $m$ members of $\binom{[n]}{b}$.
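For example, the first $\binom{n-1}{b-1}$ members of $\binom{[n]}{b}$ in the lexicographic order are exactly the sets containing $1$, that is, $\mathcal{L}\bigl(n,b,\binom{n-1}{b-1}\bigr)=\mathcal{S}_{\{1\}}$; concretely, $\mathcal{L}(5,2,4)=\{(1,2),(1,3),(1,4),(1,5)\}$.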
{\noindent\bf Hilton's Lemma (\cite{Hilton}).} Let $n,a,b$ be positive integers, $n>a+b$. Suppose that $\mathcal{A}\subset \binom{[n]}{a}$ and $\mathcal{B}\subset \binom{[n]}{b}$ are cross-intersecting. Then $\mathcal{L}(n,a,|\mathcal{A}|)$ and $\mathcal{L}(n,b,|\mathcal{B}|)$ are cross-intersecting as well.
For a family $\mathcal{A} \subset\binom{[n]}{a}$ and an integer $b$ define the family of transversals (of size $b$) $\mathcal{H}t^{(b)}(\mathcal{A})$ by
\[
\mathcal{H}t^{(b)}(\mathcal{A}) =\left\{B\in \binom{[n]}{b}\colon B\cap A\neq \emptyset \mbox{ for all } A\in \mathcal{A}\right\}.
\]
Note that $\mathcal{A}$ and $\mathcal{B}$ are cross-intersecting iff $\mathcal{B} \subset \mathcal{H}t^{(b)}(\mathcal{A})$.
Let us use this notation to prove three corollaries of Hilton's Lemma. In each of them, as in Hilton's Lemma, $\mathcal{A}\subset \binom{[n]}{a}$ and $\mathcal{B}\subset \binom{[n]}{b}$ are cross-intersecting and $n>a+b$.
\begin{cor}
Let $t\geq 1$.
\begin{align}\label{hiltonLem-1}
\mbox{If } |\mathcal{A}| \geq \binom{n-1}{a-1}+\ldots+\binom{n-t}{a-1} \mbox{ then } |\mathcal{B}|\leq \binom{n-t}{b-t}.
\end{align}
\end{cor}
\begin{proof}
Note that
\begin{align*}
\mathcal{L}(n,a,|\mathcal{A}|)\supset \mathcal{L}\left(n,a,\binom{n-1}{a-1}+\ldots+\binom{n-t}{a-1}\right)&=\left\{A\in \binom{[n]}{a}\colon A\cap [t]\neq \emptyset\right\}\\[5pt]
&=:\mathcal{E}(n,a,t).
\end{align*}
Since $\mathcal{H}t^{(b)}(\mathcal{E}(n,a,t))$ is the $t$-star $\{B\in\binom{[n]}{b}\colon [t]\subset B\}$, the statement follows.
\end{proof}
\begin{cor}
\begin{align}\label{hiltonLem-2}
\mbox{If } |\mathcal{A}| > \binom{n-1}{a-1}-\binom{n-b-1}{a-1} \mbox{ then } |\mathcal{B}|\leq \binom{n-1}{b-1}.
\end{align}
\end{cor}
\begin{proof}
Suppose indirectly $|\mathcal{B}|\geq \binom{n-1}{b-1}+1$. Consider
\[
\mathcal{L}\left(n,b,\binom{n-1}{b-1}+1\right)=\left\{B\in \binom{[n]}{b}\colon 1\in B\right\}\cup \{[2,b+1]\}.
\]
Noting that
\[
\mathcal{H}t^{(a)}\left(\mathcal{L}\left(n,b,\binom{n-1}{b-1}+1\right)\right) = \left\{A\in \binom{[n]}{a}\colon 1\in A, A\cap [2,b+1]\neq \emptyset\right\}
\]
has size $\binom{n-1}{a-1}-\binom{n-b-1}{a-1}$ the desired contradiction follows.
\end{proof}
\begin{cor}
\begin{align}\label{hiltonLem-3.1}
\mbox{If } |\mathcal{A}| > \binom{n-1}{a-1}+\binom{n-2}{a-1}+\binom{n-4}{a-2} \mbox{ then } |\mathcal{B}|\leq \binom{n-3}{b-3}+\binom{n-4}{b-3}.
\end{align}
\end{cor}
\begin{proof}
Note that
\begin{align*}
\mathcal{L}(n,a,|\mathcal{A}|)\supset &\mathcal{L}\left(n,a,\binom{n-1}{a-1}+\binom{n-2}{a-1}+\binom{n-4}{a-2}\right)\\[5pt]
=&\left\{A\in \binom{[n]}{a}\colon 1\in A \mbox{ or }2\in A\mbox{ or }\{3,4\}\subset A \right\}=:\mathcal{D}(n,a).
\end{align*}
Then clearly
\[
\mathcal{H}t^{(b)}(\mathcal{D}(n,a)) =\left\{B\in\binom{[n]}{b}\colon \{1,2,3\}\subset B \mbox{ or } \{1,2,4\}\subset B\right\}
\]
and the statement follows.
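Indeed,
\[
\left|\mathcal{H}t^{(b)}(\mathcal{D}(n,a))\right| = \binom{n-3}{b-3}+\binom{n-4}{b-3},
\]
counting first the $b$-sets containing $\{1,2,3\}$ and then those containing $\{1,2,4\}$ but not $3$.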
\end{proof}
For later use let us prove one more consequence of Hilton's Lemma. We state it in the special case that we need in Section 3.
\begin{lem}
Suppose that $\mathcal{F}, \mathcal{G}\subset \binom{[n]}{k}$ are cross-intersecting, $n>2k$. Let
\[
\binom{n-4}{k-4}<|\mathcal{F}|\leq \binom{n-3}{k-3},
\]
and
\[
\binom{n-1}{k-1}+ \binom{n-2}{k-1}+\binom{n-3}{k-1}<|\mathcal{G}|\leq \binom{n-1}{k-1}+ \binom{n-2}{k-1}+\binom{n-3}{k-1}+\binom{n-4}{k-1}.
\]
Define
\[
f=|\mathcal{F}|-\binom{n-4}{k-4}\mbox{ and } g= |\mathcal{G}| -\binom{n-1}{k-1}- \binom{n-2}{k-1}-\binom{n-3}{k-1}.
\]
Then
\begin{align}\label{hiltonLem-3}
\frac{g}{\binom{n-4}{k-1}}+\frac{f}{\binom{n-4}{k-3}}\leq 1.
\end{align}
\end{lem}
\begin{proof}
Note that all
\[
F\in \mathcal{L}(n,k,|\mathcal{F}|)\setminus \mathcal{L}\left(n,k,\binom{n-4}{k-4}\right)
\]
satisfy $F\cap [4]=[3]$. Let $\mathcal{F}_0$ be the family of size $f$ formed by the corresponding sets $F\setminus [4]\in \binom{[5,n]}{k-3}$.
Also, for all
\[
G\in \mathcal{L}(n,k,|\mathcal{G}|)\setminus \mathcal{L}\left(n,k,\binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-3}{k-1}\right),
\]
$G\cap [4]=\{4\}$ follows from $g\leq \binom{n-4}{k-1}$ (a consequence of \eqref{hiltonLem-1}). Let $\mathcal{G}_0\subset \binom{[5,n]}{k-1}$ be formed by the corresponding sets $G\setminus [4]$. Obviously, $\mathcal{F}_0$ and $\mathcal{G}_0$ are cross-intersecting. The inequality \eqref{hiltonLem-3} is essentially due to Sperner \cite{Sperner} but let us repeat the simple argument. Define a bipartite graph with partite sets $\binom{[5,n]}{k-3}$ and $\binom{[5,n]}{k-1}$ by putting an edge between $F\in\binom{[5,n]}{k-3}$ and $G\in \binom{[5,n]}{k-1}$ iff $F\cap G=\emptyset$. This bipartite graph is bi-regular implying that the neighborhood $\mathcal{N}(\mathcal{F}_0)$ satisfies
\[
|\mathcal{N}(\mathcal{F}_0)|/\binom{n-4}{k-1}\geq |\mathcal{F}_0|/\binom{n-4}{k-3}.
\]
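Here every $F\in \binom{[5,n]}{k-3}$ has exactly $\binom{n-k-1}{k-1}$ neighbours and every $G\in \binom{[5,n]}{k-1}$ has exactly $\binom{n-k-3}{k-3}$ neighbours, and double counting the edges leaving $\mathcal{F}_0$ together with the identity
\[
\binom{n-4}{k-3}\binom{n-k-1}{k-1}=\binom{n-4}{k-1}\binom{n-k-3}{k-3}
\]
(both sides count the pairs of disjoint sets of sizes $k-3$ and $k-1$ in $[5,n]$) gives the displayed inequality.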
Since $\mathcal{F}_0\cup \mathcal{G}_0$ is an independent set, $\mathcal{N}(\mathcal{F}_0)\cap \mathcal{G}_0 =\emptyset$ implying
\[
\frac{|\mathcal{G}_0|}{\binom{n-4}{k-1}}+\frac{|\mathcal{F}_0|}{\binom{n-4}{k-3}}\leq 1.
\]
\end{proof}
\section{Some size restrictions for the general case and the proof for shift-resistant families}
Throughout the proof we assume that $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ are non-trivial, cross-intersecting and
\begin{align}\label{indirectAssum}
|\mathcal{F}||\mathcal{G}| \geq h(n,k)^2.
\end{align}
\begin{prop}
For $n\geq 2k-1$, $k\geq 3$
\begin{align}\label{ineq-f1}
\left(\binom{n-3}{k-3}+\binom{n-4}{k-3}\right)&
\left(\binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-3}{k-1}\right)\nonumber\\[5pt]
&\qquad<\left(\binom{n-2}{k-2}+\binom{n-3}{k-2}+\binom{n-4}{k-2}\right)^2.
\end{align}
\end{prop}
\begin{proof}
Note that
\[
\binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-3}{k-1} \leq \frac{n-1}{k-1}\left(\binom{n-2}{k-2}+\binom{n-3}{k-2}+\binom{n-4}{k-2}\right).
\]
Since $n\geq 2k-1$ implies $\frac{n-1}{k-1}\leq \frac{n-3}{k-2}<\frac{n-2}{k-2}$, we infer
\begin{align*}
\frac{n-1}{k-1}\left(\binom{n-3}{k-3}+\binom{n-4}{k-3}\right)&\leq \frac{n-2}{k-2}\binom{n-3}{k-3}+\frac{n-3}{k-2}\binom{n-4}{k-3}\leq \binom{n-2}{k-2}+ \binom{n-3}{k-2}.
\end{align*}
Thus \eqref{ineq-f1} follows.
\end{proof}
\begin{prop}
In proving Theorem \ref{main} we may assume that
\begin{align}
&\min \left\{|\mathcal{F}|,|\mathcal{G}|\right\}> \binom{n-3}{k-3}+\binom{n-4}{k-3},\label{ineq-1}\\[5pt]
&\max \left\{|\mathcal{F}|,|\mathcal{G}|\right\}\leq \binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-4}{k-2}.\label{ineq-2}
\end{align}
\end{prop}
\begin{proof}
By symmetry assume that $|\mathcal{F}|\leq |\mathcal{G}|$. If $|\mathcal{F}|< \binom{n-4}{k-4}$, then applying \eqref{ineq-key2} twice we obtain
\[
|\mathcal{F}||\mathcal{G}|<\binom{n-4}{k-4}\binom{n}{k} <\binom{n-2}{k-2}^2 <h(n,k)^2.
\]
If $|\mathcal{G}|> \binom{n}{k}-\binom{n-4}{k}$
then by \eqref{hiltonLem-2} we have $|\mathcal{F}|< \binom{n-4}{k-4}$. Thus we may assume that $|\mathcal{F}|\geq \binom{n-4}{k-4}$ and $|\mathcal{G}|\leq \binom{n}{k}-\binom{n-4}{k}$.
Let
\[
|\mathcal{F}|=\binom{n-4}{k-4}+\alpha\binom{n-4}{k-3},\ |\mathcal{G}| =\binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-3}{k-1}+\beta\binom{n-4}{k-1}.
\]
Note that \eqref{hiltonLem-3} implies $\alpha+\beta\leq 1$ and thereby
\[
|\mathcal{F}| \leq \binom{n-3}{k-3}-\beta \binom{n-4}{k-3}.
\]
Since $\frac{n-k}{n-3}>\frac{n-k-2}{n-3}$ implies
\[
\frac{\binom{n-4}{k-3}}{\binom{n-3}{k-3}} >\frac{\binom{n-4}{k-1}}{\binom{n-3}{k-1}}>\frac{\binom{n-4}{k-1}}{\binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-3}{k-1}},
\]
it follows that
\begin{align}
|\mathcal{F}||\mathcal{G}| &\leq\left(\binom{n-3}{k-3}-\beta \binom{n-4}{k-3}\right)\left(\binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-3}{k-1}+\beta\binom{n-4}{k-1}\right)\nonumber\\[5pt]
&\leq \binom{n-3}{k-3}\left(\binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-3}{k-1}\right).\label{ineq-hfhgab}
\end{align}
By \eqref{ineq-key2} we know
\[
\binom{n-3}{k-3}\binom{n-1}{k-1}<\binom{n-2}{k-2}^2.
\]
Moreover,
\begin{align*}
\binom{n-3}{k-3}\binom{n-2}{k-1} &\leq \binom{n-2}{k-2}\binom{n-3}{k-2} \frac{k-2}{n-2}\frac{n-2}{k-1} < \binom{n-2}{k-2}\binom{n-3}{k-2},\\[5pt]
\binom{n-3}{k-3}\binom{n-3}{k-1} &\leq \binom{n-2}{k-2}\binom{n-4}{k-2} \frac{k-2}{n-2}\frac{n-3}{k-1} < \binom{n-2}{k-2}\binom{n-4}{k-2}.
\end{align*}
Thus from \eqref{ineq-hfhgab} and $k\geq 3$ we obtain
\begin{align*}
|\mathcal{F}||\mathcal{G}| \leq \binom{n-2}{k-2}\left(\binom{n-2}{k-2}+\binom{n-3}{k-2}+\binom{n-4}{k-2}\right)<h(n,k)^2,
\end{align*}
contradicting \eqref{indirectAssum}. Thus we may assume that
\[
|\mathcal{F}|\geq \binom{n-3}{k-3} \mbox{ and } |\mathcal{G}|\leq \binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-3}{k-1}.
\]
If $|\mathcal{F}|\leq \binom{n-3}{k-3}+\binom{n-4}{k-3}$, then by \eqref{ineq-f1}
\[
|\mathcal{F}||\mathcal{G}|\leq \left(\binom{n-2}{k-2}+\binom{n-3}{k-2}+\binom{n-4}{k-2}\right)^2< h(n,k)^2,
\]
contradicting \eqref{indirectAssum} again. Thus we may further assume $|\mathcal{F}| > \binom{n-3}{k-3}+\binom{n-4}{k-3}$. Then the contrapositive of \eqref{hiltonLem-3.1} implies that
\[
|\mathcal{G}|\leq \binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-4}{k-2}.
\]
\end{proof}
We need the following computational bound to estimate the size of $\mathcal{F}$ and $\mathcal{G}$ below.
\begin{lem}
For $n\geq 4k$ and $k\geq 8$,
\begin{align}
h(n,k)&>\frac{32}{9}\binom{n-2}{k-2}>3\binom{n-2}{k-2},\label{ineq-hmn-2k-22}\\[5pt]
h(n,k)&> \frac{9}{4}\binom{n-2}{k-2}+\frac{9}{4}\binom{n-4}{k-2}.\label{ineq-hmn-2k-2}
\end{align}
\end{lem}
\begin{proof}
Since $n\geq (j-1)k/2$ implies $\frac{n-k-j+3}{n-j+1}\geq \frac{n-k}{n}$, by \eqref{ineq-key} and $n\geq 4k$ we infer
\[
\frac{\binom{n-j}{k-2}}{\binom{n-2}{k-2}}\geq \left(\frac{n-k-j+3}{n-j+1}\right)^{j-2} \geq \left(\frac{n-k}{n}\right)^{j-2}\geq \left(\frac{3}{4}\right)^{j-2},\ 2\leq j\leq 9.
\]
It follows that
\begin{align*}
\sum_{j=2}^9 \binom{n-j}{k-2}&\geq \binom{n-2}{k-2}\sum_{j=2}^9 \left(\frac{3}{4}\right)^{j-2}=\frac{1-\left(\frac{3}{4}\right)^8}{1-\frac{3}{4}}
\binom{n-2}{k-2}> \frac{32}{9}\binom{n-2}{k-2}.
\end{align*}
Thus by $k\geq 8$ we have
\[
h(n,k)>\binom{n-1}{k-1}-\binom{n-k-1}{k-1}\geq \sum_{j=2}^9 \binom{n-j}{k-2}> \frac{32}{9}\binom{n-2}{k-2}.
\]
By \eqref{ineq-key} and $n\geq 4k$ we also have
\begin{align}\label{ineq-3}
\sum_{j=2}^4 \binom{n-j}{k-2}&\geq \binom{n-2}{k-2} \left(1+\frac{3}{4}+\left(\frac{3}{4}\right)^2\right)>\frac{9}{4}\binom{n-2}{k-2}.
\end{align}
Moreover,
\[
\frac{\binom{n-j}{k-2}}{\binom{n-4}{k-2}}\geq \left(\frac{n-k-j+3}{n-j+1}\right)^{j-4} \geq \left(\frac{n-k}{n}\right)^{j-4}\geq \left(\frac{3}{4}\right)^{j-4},\ 5\leq j\leq 9.
\]
We infer
\begin{align}\label{ineq-4}
\sum_{j=5}^9 \binom{n-j}{k-2}&\geq \binom{n-4}{k-2}\left(\frac{3}{4}+\left(\frac{3}{4}\right)^2+\left(\frac{3}{4}\right)^3
+\left(\frac{3}{4}\right)^4+\left(\frac{3}{4}\right)^5\right)> \frac{9}{4}\binom{n-4}{k-2}.
\end{align}
Adding \eqref{ineq-3} and \eqref{ineq-4} we obtain \eqref{ineq-hmn-2k-2}.
\end{proof}
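For instance, at the smallest admissible parameters $n=32$, $k=8$ one has
\[
h(32,8)=\binom{31}{7}-\binom{23}{7}+1=2\,384\,419,\qquad
\tfrac{32}{9}\binom{30}{6}=2\,111\,200,\qquad
\tfrac{9}{4}\Bigl(\binom{30}{6}+\binom{28}{6}\Bigr)=2\,183\,658.75,
\]
so both bounds hold.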
To prove the theorem we apply shifting, a powerful method that can be traced back to \cite{EKR}. For $\mathcal{F}\subset{[n]\choose k}$ and $1\leq i< j\leq n$, define the shift
$$S_{ij}(\mathcal{F})=\left\{S_{ij}(F)\colon F\in\mathcal{F}\right\},$$
where
$$S_{ij}(F)=\left\{
\begin{array}{ll}
(F\setminus\{j\})\cup\{i\}, & j\in F, i\notin F \text{ and } (F\setminus\{j\})\cup\{i\}\notin \mathcal{F}; \\[5pt]
F, & \mbox{otherwise.}
\end{array}
\right.
$$
It is well known (cf. \cite{F87}) that shifting preserves the cross-intersecting property.
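For instance, if $\mathcal{F}=\{(2,3),(1,3),(2,4)\}\subset \binom{[4]}{2}$, then $S_{12}(\mathcal{F})=\{(2,3),(1,3),(1,4)\}$: the set $(2,3)$ is left unchanged because $(1,3)$ already belongs to $\mathcal{F}$, while $(2,4)$ is replaced by $(1,4)$.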
Let us define the {\it shifting partial order} $\prec$. For two $k$-sets $A$ and $B$ where $A=\{a_1,\ldots,a_k\}$, $a_1<\ldots<a_k$ and $B=\{b_1,\ldots,b_k\}$, $b_1<\ldots<b_k$ we say that $A$ precedes $B$ and denote it by $A\prec B$ if $a_i\leq b_i$ for all $1\leq i\leq k$.
A family $\mathcal{F}\subset \binom{[n]}{k}$ is called {\it shifted} (or {\it initial}) if $A\prec B$ and $B\in \mathcal{F}$ always imply $A\in \mathcal{F}$. By repeated shifting one can transform an arbitrary $k$-graph into a shifted $k$-graph with the same number of edges.
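For example, $\{1,3,5\}\prec \{2,3,6\}$ since $1\leq 2$, $3\leq 3$ and $5\leq 6$, while $\{1,4\}$ and $\{2,3\}$ are incomparable in this partial order.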
The only problem with shifting is that it might destroy the non-triviality of $\mathcal{F}$ or $\mathcal{G}$ or both.
\begin{fact}\label{fact-3.1}
If $S_{ij}(\mathcal{F})\subset \mathcal{S}_{\{i\}}$,
then
\begin{itemize}
\item[(i)] $\mathcal{F}(\overline{ij})=\emptyset$ and
\item[(ii)] $\mathcal{F}(i)\cap \mathcal{F}(j)=\emptyset$. (Note that this is equivalent with $\mathcal{F}(\overline{i}j)\cap \mathcal{F}(i\overline{j})=\emptyset$).
\end{itemize}
\end{fact}
With all this preparation we are ready to state and prove the main result of this section.
\begin{prop}\label{lem-2.4}
Let $n\geq 4k$, $k\geq 8$ and $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ be non-trivial cross-intersecting. Suppose that there exist disjoint pairs $(a,b), (c,d)$ such that $S_{ab}(\mathcal{F})\subset \mathcal{S}_{a}$ and $S_{cd}(\mathcal{F})\subset \mathcal{S}_{c}$. Then
\begin{align}\label{ineq-main}
|\mathcal{F}||\mathcal{G}|< h(n,k)^2.
\end{align}
\end{prop}
\begin{proof}
For notational convenience assume that $(a,b)=(1,2)$ and $(c,d)=(3,4)$.
Arguing indirectly we assume \eqref{indirectAssum}. For $i=2,3,4$, define
\[
\mathcal{F}_i = \{F\setminus [4]\colon F\in \mathcal{F},\ |F\cap [4]|=i\}.
\]
Since $S_{ij}(\mathcal{F})\subset \mathcal{S}_{\{i\}}$ for $(i,j)=(1,2)$ and $(3,4)$ implies $(1,2),(3,4)\in \mathcal{H}t^{(2)}(\mathcal{F})$, we have
\begin{align}\label{ineq-fcap5n}
|F\cap [5,n]|\leq k-2 \mbox{ for all }F \in \mathcal{F}.
\end{align}
\begin{claim}\label{claim-1}
For every $E\in \mathcal{F}_2$, there are at most two choices for $S\in \binom{[4]}{2}$ with $E\cup S\in \mathcal{F}$. For every $T\in \mathcal{F}_3$, there are at most two choices for $R\in \binom{[4]}{3}$ with $T\cup R\in \mathcal{F}$.
\end{claim}
\begin{proof}
Suppose that for some $E\in \mathcal{F}_2$ there are three choices for $S\in \binom{[4]}{2}$ with $E\cup S\in \mathcal{F}$. Note that Fact \ref{fact-3.1} (i) implies $S\neq (1,2)$ and $S\neq (3,4)$. Choose $S_1,S_2$ such that $S_1\cap S_2\neq \emptyset$. Without loss of generality, assume that $S_1=(1,3)$ and $S_2=(1,4)$, then $E\cup \{1\} \in \mathcal{F}(3)\cap \mathcal{F}(4)$, contradicting Fact \ref{fact-3.1} (ii).
Similarly, suppose that for some $T\in \mathcal{F}_3$ there are three choices for $R\in \binom{[4]}{3}$ with $T\cup R\in \mathcal{F}$. Then we may choose $R_1,R_2$ such that $R_1\cap R_2= (1,2)$ or $(3,4)$. Without loss of generality, assume that $R_1=(1,2,3)$ and $R_2=(1,2,4)$, then $T\cup \{1,2\} \in \mathcal{F}(3)\cap \mathcal{F}(4)$, contradicting Fact \ref{fact-3.1} (ii).
\end{proof}
By Claim \ref{claim-1} we have
\begin{align}\label{ineq-hfhp}
|\mathcal{F}| \leq 2|\mathcal{F}_2|+2|\mathcal{F}_3|+|\mathcal{F}_4|.
\end{align}
It follows that
\begin{align}\label{ineq-hfupbound}
|\mathcal{F}| &\leq 2\binom{n-4}{k-2}+2\binom{n-4}{k-3}+\binom{n-4}{k-4}=\binom{n-2}{k-2}+\binom{n-4}{k-2}.
\end{align}
By \eqref{ineq-hmn-2k-2}, for $n\geq 4k$, $k\geq 8$ we have $|\mathcal{F}| < \frac{4}{9}h(n,k)$. Then the indirect assumption \eqref{indirectAssum} implies
\begin{align}\label{ineq-2.5hnk}
|\mathcal{G}| >\frac{9}{4}h(n,k).
\end{align}
Let
\[
\mathcal{P}=\left\{P\in \binom{[4]}{2}\colon P\cap (1,2)\neq \emptyset, P\cap (3,4)\neq \emptyset\right\}.
\]
\begin{claim}\label{claim-new3.0}
\begin{align}\label{ineq-new3.0}
\sum_{P\in \mathcal{P}}|\mathcal{F}(P,[4])|> 4\binom{n-5}{k-3}.
\end{align}
\end{claim}
\begin{proof}
Suppose for contradiction that $\sum\limits_{P\in \mathcal{P}}|\mathcal{F}(P,[4])|\leq 4\binom{n-5}{k-3}$. Note that Fact \ref{fact-3.1} (i) implies $|\mathcal{F}(\{1,2\},[4])|=|\mathcal{F}(\{3,4\},[4])|=0$. Then by Claim \ref{claim-1} we have
\begin{align*}
|\mathcal{F}|&\leq \sum_{P\in \mathcal{P}}|\mathcal{F}(P,[4])|+2|\mathcal{F}_3|+|\mathcal{F}_4|\\[5pt]
&\leq 4\binom{n-5}{k-3}+2\binom{n-4}{k-3}+ \binom{n-4}{k-4}\\[5pt]
&=4\binom{n-5}{k-3}+\binom{n-4}{k-3}+\binom{n-3}{k-3}\\[5pt]
&< 3\binom{n-3}{k-3}+3\binom{n-5}{k-3}.
\end{align*}
By \eqref{ineq-hmn-2k-2} it follows that
\[
|\mathcal{F}| \leq 3\left(\frac{k-2}{n-2}\binom{n-2}{k-2}+\frac{k-2}{n-4}\binom{n-4}{k-2}\right)< \frac{4(k-1)}{3(n-1)} h(n,k).
\]
By \eqref{indirectAssum} and \eqref{ineq-hmn-2k-22},
\begin{align}\label{ineq-sumtotal}
|\mathcal{G}| >\frac{3(n-1)}{4(k-1)} h(n,k)>\frac{3(n-1)}{4(k-1)} 3\binom{n-2}{k-2}>2\binom{n-1}{k-1},
\end{align}
contradicting \eqref{ineq-2}.
\end{proof}
\begin{claim}
\begin{align}\label{ineq-hg4barub}
|\mathcal{G}(\overline{[4]})|\leq \binom{n-7}{k-3}+\binom{n-8}{k-3}.
\end{align}
\end{claim}
\begin{proof}
By Claim \ref{claim-1} and \eqref{ineq-new3.0}, we infer
\[
2|\mathcal{F}_2| \geq \sum_{P\in \mathcal{P}}|\mathcal{F}(P,[4])|> 4\binom{n-5}{k-3}.
\]
It follows that $|\mathcal{F}_2|>2\binom{n-5}{k-3}> \binom{n-5}{k-3}+\binom{n-6}{k-3}+\binom{n-8}{k-4}$. Since $\mathcal{F}_2$, $\mathcal{G}(\overline{[4]})$ are cross-intersecting, by \eqref{hiltonLem-3.1} we obtain \eqref{ineq-hg4barub}.
\end{proof}
\begin{claim}
For $i\in [4]$,
\begin{align}\label{ineq-newhgi}
|\mathcal{G}(i,[4])|\leq \binom{n-4}{k-1} - \binom{n-2-k}{k-1}.
\end{align}
\end{claim}
\begin{proof}
Since $\mathcal{F}$ is non-trivial, there exists $F\in \mathcal{F}$ such that $i\notin F$. Let $E=F\cap [5,n]$. By \eqref{ineq-fcap5n}, $|E|\leq k-2$. Now the cross-intersection implies $E\cap E'\neq \emptyset$ for each $E' \in \mathcal{G}(i,[4])$. Thus the claim follows.
\end{proof}
\begin{claim}\label{claim-new3}
At most five of $\mathcal{G}(P,[4])$, $P\in \binom{[4]}{2}$ have size greater than $\binom{n-5}{k-3}-\binom{n-3-k}{k-3}$.
\end{claim}
\begin{proof}
Suppose for contradiction that $|\mathcal{G}(P,[4])|>\binom{n-5}{k-3}-\binom{n-3-k}{k-3}$ for all $P\in \binom{[4]}{2}$. For each $P\in \binom{[4]}{2}$ let $P'=[4]\setminus P$. Applying \eqref{hiltonLem-2} with $a=b=k-2$ to the cross-intersecting families $\mathcal{G}(P,[4])$ and $\mathcal{F}(P',[4])$, we infer $|\mathcal{F}(P',[4])|\leq \binom{n-5}{k-3}$.
Then
\[
\sum_{P\in \mathcal{P}}|\mathcal{F}(P,[4])|\leq 4\binom{n-5}{k-3},
\]
contradicting \eqref{ineq-new3.0}.
\end{proof}
By Claim \ref{claim-new3}, we infer that
\begin{align}\label{ineq-claim2p2}
\sum_{P\subset[4], |P|\geq 2} |\mathcal{G}(P,[4])|&\leq 5\binom{n-4}{k-2}+\binom{n-5}{k-3}-\binom{n-3-k}{k-3}+4\binom{n-4}{k-3}+\binom{n-4}{k-4}\nonumber\\[5pt]
&<2 \binom{n-4}{k-2}+3\left(\binom{n-4}{k-2}+\binom{n-4}{k-3}\right)+\left(\binom{n-4}{k-3} +\binom{n-4}{k-4}\right)\nonumber\\[5pt]
&\qquad+\binom{n-5}{k-3}\nonumber\\[5pt]
&=2 \binom{n-4}{k-2}+3\binom{n-3}{k-2}+\binom{n-3}{k-3}+\binom{n-5}{k-3}\nonumber\\[5pt]
&=2 \binom{n-4}{k-2}+2\binom{n-3}{k-2}+\binom{n-2}{k-2}+\binom{n-5}{k-3}\nonumber\\[5pt]
&<2\binom{n-2}{k-2}+2\binom{n-3}{k-2} +\binom{n-4}{k-2}.
\end{align}
\begin{claim}\label{claim-2}
Exactly two of $\mathcal{G}(1,[4])$, $\mathcal{G}(2,[4])$, $\mathcal{G}(3,[4])$, $\mathcal{G}(4,[4])$ have size greater than $\binom{n-5}{k-2}-\binom{n-3-k}{k-2}$.
\end{claim}
\begin{proof}
Suppose that at least three of $\mathcal{G}(1,[4])$, $\mathcal{G}(2,[4])$, $\mathcal{G}(3,[4])$, $\mathcal{G}(4,[4])$ have size greater than $\binom{n-5}{k-2}-\binom{n-3-k}{k-2}$. Then for each $P\in \mathcal{P}$ there exists $i\in [4]$ such that $i\notin P$ and $|\mathcal{G}(i,[4])|> \binom{n-5}{k-2}-\binom{n-3-k}{k-2}$. Applying \eqref{hiltonLem-2} to $\mathcal{G}(i,[4])$ and $\mathcal{F}(P,[4])$ with $a=k-1$ and $b=k-2$, we infer $|\mathcal{F}(P,[4])|\leq \binom{n-5}{k-3}$ for all $P\in \mathcal{P}$, contradicting \eqref{ineq-new3.0}. Thus, at most two of $\mathcal{G}(1,[4])$, $\mathcal{G}(2,[4])$, $\mathcal{G}(3,[4])$, $\mathcal{G}(4,[4])$ have size greater than $\binom{n-5}{k-2}-\binom{n-3-k}{k-2}$.
Suppose that at most one of them has size greater than $\binom{n-5}{k-2}-\binom{n-3-k}{k-2}$. Then by \eqref{ineq-newhgi} we have
\begin{align*}
\sum_{ 1\leq i\leq 4} |\mathcal{G}(i,[4])|\leq \binom{n-4}{k-1} - \binom{n-2-k}{k-1}+3\left(\binom{n-5}{k-2}-\binom{n-3-k}{k-2}\right).
\end{align*}
Using
\[
\binom{n-5}{k-2}-\binom{n-3-k}{k-2}\leq\binom{n-4}{k-2}-\binom{n-2-k}{k-2},
\]
we get
\begin{align}\label{ineq-claim2p1}
\sum_{ 1\leq i\leq 4} |\mathcal{G}(i,[4])|&\leq \binom{n-4}{k-1} - \binom{n-2-k}{k-1}+\binom{n-4}{k-2} - \binom{n-2-k}{k-2}\nonumber\\[5pt]
&\qquad+2\left(\binom{n-5}{k-2}-\binom{n-3-k}{k-2}\right)\nonumber\\[5pt]
&< \binom{n-3}{k-1} - \binom{n-1-k}{k-1}+2\binom{n-5}{k-2}\nonumber\\[5pt]
&< h(n,k)- \binom{n-2}{k-2}- \binom{n-3}{k-2} +2\binom{n-5}{k-2}.
\end{align}
Adding \eqref{ineq-hg4barub}, \eqref{ineq-claim2p2} and \eqref{ineq-claim2p1},
\begin{align*}
|\mathcal{G}|=&\sum_{\emptyset\neq P\subset[4]} |\mathcal{G}(P,[4])|+|\mathcal{G}(\overline{[4]})|\\[5pt]
<&h(n,k)+\binom{n-2}{k-2}+\binom{n-3}{k-2}+\binom{n-4}{k-2}+2\binom{n-5}{k-2}
+\binom{n-7}{k-3}+\binom{n-8}{k-3}.
\end{align*}
Note that $k\geq 7$ implies $h(n,k)\geq \sum\limits_{2\leq i\leq 8} \binom{n-i}{k-2}$. It follows that
\begin{align*}
|\mathcal{G}|<&2h(n,k)+\binom{n-5}{k-2}-\binom{n-6}{k-2}-\binom{n-7}{k-2}-\binom{n-8}{k-2}
+\binom{n-7}{k-3}+\binom{n-8}{k-3}\\[5pt]
=&2h(n,k)+\binom{n-6}{k-3}+\binom{n-7}{k-3}+\binom{n-8}{k-3}
-\binom{n-7}{k-2}-\binom{n-8}{k-2}\\[5pt]
\leq &2h(n,k)+2\binom{n-6}{k-3}-2\binom{n-8}{k-2}.
\end{align*}
Since $n\geq 4k$ and $k\geq 3$ imply
\begin{align*}
\frac{\binom{n-8}{k-2}}{\binom{n-6}{k-3}} &=\frac{(n-k-3)(n-k-4)(n-k-5)}{(n-6)(n-7)(k-2)}\\[5pt]
&\geq \frac{(3k-3)(3k-4)(3k-5)}{(4k-6)(4k-7)(k-2)}>\left(\frac{3}{4}\right)^2\times3>1,
\end{align*}
we infer $|\mathcal{G}|<2h(n,k)$, contradicting \eqref{ineq-2.5hnk}.
\end{proof}
Now consider a pair $(i,P)$, $P\in \binom{[4]}{2}$, $i\in [4]\setminus P$ and $|\mathcal{G}(i,[4])|> \binom{n-5}{k-2}-\binom{n-3-k}{k-2}$. Applying \eqref{hiltonLem-2} to $\mathcal{G}(i,[4])$ and $\mathcal{F}(P,[4])$ with $a=k-1$, $b=k-2$, we infer $|\mathcal{F}(P,[4])|\leq \binom{n-5}{k-3}$. By Claim \ref{claim-2}, exactly two of $\mathcal{G}(1,[4])$, $\mathcal{G}(2,[4])$, $\mathcal{G}(3,[4])$, $\mathcal{G}(4,[4])$ have size greater than $\binom{n-5}{k-2}-\binom{n-3-k}{k-2}$. Hence, at most one $P\in \{(1,3),(1,4),(2,3),(2,4)\}$ satisfies $|\mathcal{F}(P,[4])|> \binom{n-5}{k-3}$. It follows that
\[
\sum_{P\in \mathcal{P}} |\mathcal{F}(P,[4])| \leq \binom{n-4}{k-2}+3\binom{n-5}{k-3}.
\]
Then by Claim \ref{claim-1} and the identity $\binom{n-2}{k-2}=\binom{n-4}{k-2}+2\binom{n-4}{k-3}+\binom{n-4}{k-4}$,
\begin{align*}
|\mathcal{F}| &\leq \sum_{P\in \mathcal{P}}|\mathcal{F}(P,[4])|+2|\mathcal{F}_3|+|\mathcal{F}_4|\\[5pt]
&\leq 3\binom{n-5}{k-3}+\binom{n-4}{k-2}+2\binom{n-4}{k-3}+ \binom{n-4}{k-4} \\[5pt]
&=\binom{n-2}{k-2}+ 3\binom{n-5}{k-3}.
\end{align*}
Since for $n\geq 4k$,
\begin{align*}
|\mathcal{F}|\leq \binom{n-2}{k-2}+ 3\binom{n-5}{k-3}=\binom{n-2}{k-2}+\frac{3(k-2)}{n-4}\binom{n-4}{k-2}<\binom{n-2}{k-2}+\frac{3}{4}\binom{n-4}{k-2},
\end{align*}
by \eqref{ineq-hmn-2k-22} and \eqref{ineq-hmn-2k-2} we infer
\begin{align}\label{ineq-hf1}
|\mathcal{F}| &< \frac{1}{4}\binom{n-2}{k-2}+ \frac{3}{4} \left(\binom{n-2}{k-2} + \binom{n-4}{k-2}\right)\nonumber\\[5pt]
&<\frac{1}{4}\times \frac{9}{32}h(n,k)+\frac{3}{4}\times \frac{4}{9}h(n,k)\nonumber\\[5pt]
& = \frac{155}{384}h(n,k)<\frac{32}{73}h(n,k).
\end{align}
By Claim \ref{claim-2} and \eqref{ineq-newhgi},
\begin{align}\label{ineq-claim3p1}
\sum_{1\leq i\leq 4}|\mathcal{G}(i,[4])|
&\leq 2\left(\binom{n-4}{k-1} - \binom{n-2-k}{k-1}\right)+2\left(\binom{n-5}{k-2}-\binom{n-3-k}{k-2}\right)\nonumber\\[5pt]
&\leq 2\left(\binom{n-3}{k-1} - \binom{n-1-k}{k-1}\right)+2\nonumber\\[5pt]
&= 2h(n,k)-2\binom{n-2}{k-2}-2\binom{n-3}{k-2}.
\end{align}
Adding \eqref{ineq-hg4barub}, \eqref{ineq-claim2p2} and \eqref{ineq-claim3p1},
\begin{align}\label{ineq-hg1}
|\mathcal{G}|&=|\mathcal{G}(\overline{[4]})|+\sum_{\emptyset\neq P\subset[4]} |\mathcal{G}(P,[4])|\nonumber\\[5pt]
& \leq 2h(n,k)+\binom{n-4}{k-2}+\binom{n-7}{k-3}+\binom{n-8}{k-3}\nonumber\\[5pt]
&\leq 2h(n,k)+\binom{n-2}{k-2}\overset{\eqref{ineq-hmn-2k-22}}< \frac{73}{32}h(n,k).
\end{align}
But now \eqref{ineq-hf1} and \eqref{ineq-hg1} imply $|\mathcal{F}||\mathcal{G}|<h(n,k)^2$, contradicting \eqref{indirectAssum}. This concludes the proof of the proposition.
\end{proof}
\section{The shifted case and the proof of the main theorem}
In this section, we determine the maximum product of sizes of non-trivial shifted cross-intersecting families.
First, we determine the maximum sum of sizes of non-trivial shifted cross-intersecting families $\mathcal{F},\mathcal{G}$ with $(2,5,7,\ldots,2k-1,2k+1)\notin \mathcal{F}\cup \mathcal{G}$ by modifying an injective map introduced in \cite{F2017}.
\begin{prop}
Let $n\geq k>0$ and $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ be cross-intersecting. Suppose that both $\mathcal{F}$ and $\mathcal{G}$ are non-trivial shifted families. Moreover $(2,5,7,\ldots,2k-1,2k+1)\notin \mathcal{F}\cup \mathcal{G}$. Then
\begin{align}\label{ineq-hfhg2hm}
|\mathcal{F}|+|\mathcal{G}|\leq 2h(n,k).
\end{align}
\end{prop}
\begin{proof}
We claim that for every $H\in \mathcal{F}\cup \mathcal{G}$ there exists $\ell$ such that $|H\cap [2\ell]|\geq \ell$. Moreover, if $1\notin H$ then $\ell \geq 2$. Arguing indirectly, assume that no such $\ell$ exists for $H=\{a_1,a_2,\ldots,a_k\}$
where $a_1<a_2<\ldots<a_k$. For $\ell=1$ this implies $a_1\geq 2$. For $\ell\geq 2$ we infer $a_\ell> 2\ell$ for all $\ell=2,\ldots,k$. By shiftedness, we see that
$(2,5,7,\ldots,2k+1)\in \mathcal{F}\cup \mathcal{G}$, a contradiction. Let $\ell(H)$ be the maximal $\ell$ such that $|H\cap [2\ell]|\geq \ell$.
Recall the notation $A\triangle B = (A\setminus B)\cup (B\setminus A)$ for the symmetric difference. Define the map $\phi$ by $\phi(H) = H\triangle [2\ell(H)]$.
\begin{claim}\label{claim-4}
For $G,G'\in \mathcal{G}$, $\phi(G)\neq \phi(G')$.
\end{claim}
\begin{proof}
If $\ell(G)=\ell(G'):=\ell$ then $\phi(G)\triangle [2\ell]=G\neq G'=\phi(G') \triangle [2\ell]$ implies $\phi(G)\neq \phi(G')$. On the other hand if $\ell(G)>\ell(G')$ then
\[
|\phi(G) \cap [2\ell(G)]|=\ell(G) >|\phi(G') \cap [2\ell(G)]|
\]
by the maximality of $\ell(G')$.
\end{proof}
\begin{claim}\label{claim-5}
For $G\in \mathcal{G}(\bar{1})$, $\phi(G)\setminus \{1\}\notin \mathcal{F}(1)$.
\end{claim}
\begin{proof}
Set $\ell=\ell(G)$. Let $G=(x_1,x_2,\ldots,x_k)$ with $x_1<x_2<\ldots<x_k$. The maximal choice of $\ell$ implies
\[
[2\ell]\cap G =\{x_1,\ldots,x_\ell\}, \ x_{\ell+1}>2\ell+2,\ldots,x_k>2k.
\]
By shiftedness $(x_1,\ldots,x_\ell)\cup (2\ell+2,2\ell+4,\ldots,2k)\in \mathcal{G}$. Note that $G\in \mathcal{G}(\bar{1})$ implies $x_1\geq 2$. If $\phi(G)\setminus \{1\}\in \mathcal{F}(1)$, then by the shiftedness of $\mathcal{F}$ also $([2,2\ell]\setminus (x_1,\ldots,x_\ell))\cup (2\ell+2,2\ell+4,\ldots,2k)\in \mathcal{F}(1)$, and shifting once more gives $([2\ell]\triangle (x_1,\ldots,x_\ell))\cup (2\ell+1,2\ell+3,\ldots,2k-1)\in \mathcal{F}$. The latter set is disjoint from $(x_1,\ldots,x_\ell)\cup (2\ell+2,2\ell+4,\ldots,2k)\in \mathcal{G}$, contradicting the cross-intersecting property.
\end{proof}
\begin{claim}\label{claim-6}
If $H\neq [2,k+1]$, $H\in \mathcal{F}\cup \mathcal{G}$, $1\notin H$ then $\phi(H)\cap [2,k+1]\neq \emptyset$.
\end{claim}
\begin{proof}
There are two cases. Let $\ell=\ell(H)$. If $2\ell\geq k+1$ then $[2,k+1]\subset [2\ell]$. Consequently, we can choose $x\in [2,k+1]\setminus H$ since $H\neq [2,k+1]$. Thus $x\in [2\ell]\setminus H\subset H\triangle [2\ell]$, i.e., $x\in [2,k+1]\cap \phi(H)$. As we noted before, $1\notin H$ and $(2,5,7,\ldots,2k-1,2k+1)\notin \mathcal{F}\cup \mathcal{G}$ imply that $\ell\geq 2$. If $k+1>2\ell$ then $\ell\geq 2$ implies the existence of $x\in [2,2\ell]\setminus H$ and hence $x\in H\triangle [2\ell]$. It follows that $x\in [2,k+1]\cap \phi(H)$.
\end{proof}
Note that the non-triviality and the shiftedness imply $[2,k+1]\in \mathcal{F}\cap \mathcal{G}$. Since $[2,k+1]\in \mathcal{G}$, every member of $\mathcal{F}(1)$ meets $[2,k+1]$; moreover, by Claims \ref{claim-4}, \ref{claim-5} and \ref{claim-6}, the sets $\phi(G)\setminus \{1\}$, $G\in \mathcal{G}(\bar{1})\setminus \{[2,k+1]\}$, are pairwise distinct $(k-1)$-subsets of $[2,n]$ meeting $[2,k+1]$ and lying outside $\mathcal{F}(1)$. We infer
\begin{align}
|\mathcal{F}(1)|+|\mathcal{G}(\bar{1})| &=|\mathcal{F}(1)|+|\phi\left(\mathcal{G}(\bar{1})\setminus \{[2,k+1]\}\right)|+1\nonumber\\[5pt]
&\leq \binom{n-1}{k-1}-\binom{n-k-1}{k-1}+1.\label{ineq-hf1hgbar}
\end{align}
Switching the roles of $\mathcal{F}$ and $\mathcal{G}$, we obtain
\begin{align}
&|\mathcal{F}(\bar{1})|+|\mathcal{G}(1)| \leq \binom{n-1}{k-1}-\binom{n-k-1}{k-1}+1.\label{ineq-hfbarhg1}
\end{align}
Adding \eqref{ineq-hf1hgbar} and \eqref{ineq-hfbarhg1}, we get \eqref{ineq-hfhg2hm}.
\end{proof}
Recall the following inequality from \cite{F78}; for a proof using linear algebra cf. \cite{F2022}.
\begin{lem}[\cite{F78}]\label{lem-walk}
Let $\mathcal{F}\subset \binom{[n]}{k}$ be a shifted family with $0\leq t<k$. If $[t]\cup \{t+2,t+4,\ldots,2k-t\}\notin \mathcal{F}$, then
\begin{align}\label{ineq-walk}
|\mathcal{F}| \leq \binom{n}{k-t-1}.
\end{align}
\end{lem}
\begin{prop}\label{lem-2.6}
Let $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ be non-trivial and cross-intersecting, $n\geq 4k$, $k\geq 8$. If both $\mathcal{F}$ and $\mathcal{G}$ are shifted, then
\begin{align}\label{ineq-shiftedhfhg2}
|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2.
\end{align}
\end{prop}
\begin{proof}
We may assume that $(1,3,5,\ldots,2k-1)\notin \mathcal{F}\cap \mathcal{G}$. Indeed, if $(1,3,5,\ldots,2k-1)\in \mathcal{F}\cap \mathcal{G}$, then by cross-intersection we infer $(2,4,\ldots,2k)\notin \mathcal{F}\cup\mathcal{G}$. By shiftedness, it follows that
$(2,5,7,\ldots,2k-1,2k+1)\notin \mathcal{F}\cup \mathcal{G}$. By \eqref{ineq-hfhg2hm},
\[
|\mathcal{F}||\mathcal{G}|\leq \left(\frac{|\mathcal{F}|+|\mathcal{G}|}{2}\right)^2\leq h(n,k)^2
\]
and we are done.
By symmetry assume that $(1,3,5,\ldots,2k-1)\notin \mathcal{F}$. Then by \eqref{ineq-walk} we have
\begin{align}\label{ineq-f781}
|\mathcal{F}| \leq \binom{n}{k-2}.
\end{align}
By \eqref{ineq-1} and $n\geq 4k$, we obtain that
\[
|\mathcal{F}|>\binom{n-3}{k-3}+\binom{n-4}{k-3}=\frac{n-3}{k-3}\binom{n-4}{k-4}+\frac{n-k}{k-3} \binom{n-4}{k-4} >7\binom{n-4}{k-4}.
\]
By \eqref{ineq-key} and $n\geq 4k$,
\[
|\mathcal{F}|> 7\binom{n-4}{k-4}>\left(\frac{4}{3}\right)^4\binom{n-4}{k-4}> \left(\frac{n-3}{n-k+1}\right)^4\binom{n-4}{k-4}> \binom{n}{k-4}.
\]
Then Lemma \ref{lem-walk} implies that $(1,2,3,5,7,\ldots,2k-3)\in \mathcal{F}$.
We distinguish two cases.
{\noindent \bf Case 1.} $(1,2,4,6,\ldots,2k-2)\notin \mathcal{F}$.
In view of \eqref{ineq-walk} the assumption $(1,2,4,6,\ldots,2k-2)\notin \mathcal{F}$ implies for $n\geq 4k$
\begin{align}\label{ineq-f782}
|\mathcal{F}| \leq \binom{n}{k-3}\overset{\eqref{ineq-key}}{<}\left(\frac{n-2}{n-k+1}\right)^3\binom{n-3}{k-3}
<\left(\frac{4}{3}\right)^3\binom{n-3}{k-3}<3\binom{n-3}{k-3}.
\end{align}
By \eqref{ineq-2},
\begin{align}\label{ineq-f783}
|\mathcal{G}| \leq \binom{n-1}{k-1}+\binom{n-2}{k-1}+\binom{n-4}{k-2}< 2\binom{n-1}{k-1}.
\end{align}
From \eqref{ineq-f782} and \eqref{ineq-f783}, we obtain
\begin{align}\label{ineq-hfhg3}
|\mathcal{F}||\mathcal{G}|<6\binom{n-1}{k-1}\binom{n-3}{k-3}\overset{\eqref{ineq-key2}}{\leq} 6\binom{n-2}{k-2}^2\overset{\eqref{ineq-hmn-2k-22}}<h(n,k)^2.
\end{align}
{\noindent \bf Case 2.} $(1,2,4,6,\ldots,2k-2)\in \mathcal{F}$.
By cross-intersection, $(3,5,\ldots,2k+1)\notin \mathcal{G}$. Using that $\mathcal{F}$ is non-trivial, $[2,k+1]\in \mathcal{F}$ follows. By cross-intersection again $G\cap [2,k+1]\neq \emptyset$ for all $G\in \mathcal{G}$. Consequently
\[
|\mathcal{G}(1)|\leq \binom{n-1}{k-1}-\binom{n-k-1}{k-1}<h(n,k).
\]
Consider $\mathcal{G}(\bar{1},2)\subset \binom{[3,n]}{k-1}$. Since $\mathcal{F}(\bar{2})\neq \emptyset$ and $\mathcal{F}$ is shifted, $\mathcal{F}(1,\bar{2})\neq \emptyset$. Thus,
\[
|\mathcal{G}(\bar{1},2)|\leq \binom{n-2}{k-1} - \binom{n-k-1}{k-1}.
\]
For $G\in \mathcal{G}(\bar{1},\bar{2})$ define $\tilde{G}=\{x-2\colon x\in G\}$ and set
\[
\tilde{\mathcal{G}}=\{\tilde{G}\colon G\in \mathcal{G}(\bar{1},\bar{2})\}.
\]
Note that $(3,5,\ldots,2k+1)\notin \mathcal{G}$ implies $(1,3,\ldots,2k-1)\notin \tilde{\mathcal{G}}$. Applying \eqref{ineq-walk} with $t=1$ yields
\begin{align}\label{ineq-g1bar2bar}
|\mathcal{G}(\bar{1},\bar{2})|\leq \binom{n-2}{k-2}.
\end{align}Thus,
\begin{align}\label{ineq-hg}
|\mathcal{G}| \leq |\mathcal{G}(1)|+|\mathcal{G}(\bar{1},2)|+|\mathcal{G}(\bar{1},\bar{2})|<2h(n,k).
\end{align}
By \eqref{ineq-f781} and \eqref{ineq-key}, we obtain for $n\geq 4k$
\begin{align}\label{ineqhfnk-2}
|\mathcal{F}|\leq \binom{n}{k-2}\leq \left(\frac{n-1}{n-k+1}\right)^2\binom{n-2}{k-2}< \left(\frac{4}{3}\right)^2\binom{n-2}{k-2}=\frac{16}{9}\binom{n-2}{k-2}.
\end{align}
Combining \eqref{ineqhfnk-2} and \eqref{ineq-hg}, we obtain
\begin{align*}
|\mathcal{F}||\mathcal{G}|<\frac{32}{9}\binom{n-2}{k-2}h(n,k) \overset{\eqref{ineq-hmn-2k-22}}{<} h(n,k)^2.
\end{align*}
\end{proof}
In order to prove Theorem \ref{main} let us introduce the notion of shift-resistant pair. For a pair of families $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$, define the quantity
\[
w(\mathcal{F},\mathcal{G}) =\sum_{F\in \mathcal{F}}\sum_{i\in F} i + \sum_{G\in \mathcal{G}}\sum_{j\in G} j.
\]
Let us fix $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ so that $\mathcal{F},\mathcal{G}$ are non-trivial cross-intersecting, $|\mathcal{F}||\mathcal{G}|$ is maximal and, moreover, among such pairs $w(\mathcal{F},\mathcal{G})$ is minimal. If in such a pair $\mathcal{F}$ and $\mathcal{G}$ are not both shifted, then we say that $(\mathcal{F},\mathcal{G})$ is shift-resistant.
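Note that shifting can only decrease this weight: since $|S_{ij}(\mathcal{F})|=|\mathcal{F}|$ and every moved set has an element $j$ replaced by $i<j$,
\[
w\bigl(S_{ij}(\mathcal{F}),S_{ij}(\mathcal{G})\bigr)\leq w(\mathcal{F},\mathcal{G}),
\]
with strict inequality as soon as $S_{ij}$ moves at least one member of $\mathcal{F}\cup\mathcal{G}$.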
\begin{prop}\label{prop-4.4}
Suppose that $(\mathcal{F},\mathcal{G})$ is a shift-resistant cross-intersecting pair. Then for $\mathcal{H}=\mathcal{F}$ or $\mathcal{H}=\mathcal{G}$ there exist disjoint pairs $(a,b)$, $(c,d)$ such that $S_{ab}(\mathcal{H})\subset \mathcal{S}_a$ and $S_{cd}(\mathcal{H})\subset \mathcal{S}_c$.
\end{prop}
\begin{proof}
Since $\mathcal{F}$ and $\mathcal{G}$ are not both shifted, there is a pair $(a,b)$ for which $S_{ab}$ moves at least one member of $\mathcal{F}\cup\mathcal{G}$. The pair $\left(S_{ab}(\mathcal{F}),S_{ab}(\mathcal{G})\right)$ is cross-intersecting with the same product of sizes and smaller weight, so by the choice of $(\mathcal{F},\mathcal{G})$ it cannot be non-trivial; as the shift can only create the common element $a$, without loss of generality we may suppose $S_{ab}(\mathcal{F})\subset \mathcal{S}_a$. Note that this implies $F\cap (a,b)\neq \emptyset$ for all $F\in \mathcal{F}$. Hence
\[
\mathcal{S}_{\{a,b\}}=\left\{S\in \binom{[n]}{k}\colon (a,b)\subset S\right\}\subset \mathcal{G}
\]
follows from the maximality of $|\mathcal{F}||\mathcal{G}|$.
Consider an arbitrary pair $(c,d) \subset [n]\setminus (a,b)$. First note that $S_{cd}(\mathcal{G})\subset \mathcal{S}_c$ cannot hold. Indeed, $\mathcal{S}_{\{a,b\}}\subset \mathcal{G}$ implies $\mathcal{G}(\bar{c},\bar{d})\neq \emptyset$ and thereby $S_{cd}(\mathcal{G})\not\subset \mathcal{S}_c$. If $S_{cd}(\mathcal{F})\subset \mathcal{S}_c$ then we are done. Thus we may assume that $S_{cd}(\mathcal{F})\not\subset \mathcal{S}_c$. Now the minimality of $w(\mathcal{F},\mathcal{G})$ implies $S_{cd}(\mathcal{F})=\mathcal{F}$ and $S_{cd}(\mathcal{G})=\mathcal{G}$ for all $(c,d)\subset [n]\setminus (a,b)$.
Let $D=(d_1,d_2,\ldots,d_{k-1})$ be the $k-1$ smallest elements of $[n]\setminus (a,b)$. Then $\mathcal{F}(\bar{a},\bar{b})=\emptyset$ and $\mathcal{F}(\bar{a})\neq \emptyset$ imply $\mathcal{F}(\bar{a},b) \neq \emptyset$, moreover $S_{cd}(\mathcal{F})=\mathcal{F}$ for all $(c,d) \subset [n]\setminus (a,b)$ implies $D\in \mathcal{F}(\bar{a},b)$. Similarly, $D\in \mathcal{F}(a,\bar{b})$. Thus $\mathcal{F}(\bar{a},b)\cap \mathcal{F}(a,\bar{b})\neq \emptyset$, contradicting $S_{ab}(\mathcal{F})\subset \mathcal{S}_a$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{main}]
Let $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ be a pair of non-trivial cross-intersecting families with $|\mathcal{F}||\mathcal{G}|$ maximal. Moreover, we assume that $w(\mathcal{F},\mathcal{G})$ is minimal among all such pairs. By definition, either both of $\mathcal{F}$, $\mathcal{G}$ are shifted or $(\mathcal{F},\mathcal{G})$ forms a shift-resistant pair. If $\mathcal{F}$ and $\mathcal{G}$ are both shifted then by Proposition \ref{lem-2.6} we conclude that $|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2$.
If $(\mathcal{F},\mathcal{G})$ forms a shift-resistant pair, then by Proposition \ref{prop-4.4} there exist disjoint pairs $(a,b)$, $(c,d)$ such that either $\mathcal{F}$ or $\mathcal{G}$ (call it $\mathcal{H}$) satisfies $S_{ab}(\mathcal{H})\subset \mathcal{S}_a$ and $S_{cd}(\mathcal{H})\subset \mathcal{S}_c$. By symmetry assume that $S_{ab}(\mathcal{F})\subset \mathcal{S}_a$ and $S_{cd}(\mathcal{F})\subset \mathcal{S}_c$. Then by Proposition \ref{lem-2.4} we conclude that $|\mathcal{F}||\mathcal{G}|< h(n,k)^2$.
\end{proof}
\section{ Further improvements and concluding remarks}
The most intriguing problem is whether \eqref{ineq-hfhg2} of Theorem \ref{main} holds for the full range, that is, for all $n,k$ satisfying $n\geq 2k\geq 4$. Note that for $n=2k$,
$|\mathcal{F}|+|\mathcal{G}|\leq {2k \choose k}=2h(n,k)$ is obvious. Consequently, $|\mathcal{F}| |\mathcal{G}| \leq h(n,k)^2$.
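Indeed, for $n=2k$ a set and its complement cannot belong to $\mathcal{F}$ and $\mathcal{G}$ respectively, so
\[
|\mathcal{F}|+|\mathcal{G}|=\sum_{A\in \binom{[2k]}{k}}\Bigl(\mathbf{1}_{\mathcal{F}}(A)+\mathbf{1}_{\mathcal{G}}\bigl([2k]\setminus A\bigr)\Bigr)\leq \binom{2k}{k}=2\binom{2k-1}{k-1}=2h(2k,k).
\]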
By a different argument we can prove Theorem \ref{main} for $n\geq 3k$ and $k\geq 14$ as well (Theorem \ref{main3} below). Let us first prove a statement of some independent interest.
\begin{prop}\label{prop-5.1}
Suppose that $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ are cross-intersecting, $n\geq 2k$ and
$\min\{|\mathcal{F}|,|\mathcal{G}|\}\geq \binom{n-3}{k-3}+\binom{n-4}{k-3}$. Then
\begin{align}
|\mathcal{F}|+|\mathcal{G}|\leq 2\binom{n-1}{k-1}.
\end{align}
\end{prop}
\begin{proof}
By Hilton's Lemma, without loss of generality, assume that $\mathcal{F}=\mathcal{L}(n,k,|\mathcal{F}|),\mathcal{G}=\mathcal{L}(n,k,|\mathcal{G}|)$, $|\mathcal{F}|\leq |\mathcal{G}|$ and $|\mathcal{G}|>\binom{n-1}{k-1}$ (otherwise there is nothing to prove). Note that $|\mathcal{F}|\leq \binom{n-1}{k-1}$, since otherwise $[2,k+1]\in \mathcal{F}$ and $\{1\}\cup[k+2,2k]\in \mathcal{G}$ would violate the cross-intersecting property. Thus $|\mathcal{G}(1)|=\binom{n-1}{k-1}$ and $|\mathcal{F}(1)|=|\mathcal{F}|$. If $|\mathcal{G}(\bar{1})|\leq\binom{n-2}{k-1}$ then
\[
|\mathcal{F}|+|\mathcal{G}|= |\mathcal{F}(1,2)|+|\mathcal{F}(1,\bar{2})|+|\mathcal{G}(1)|+|\mathcal{G}(\bar{1},2)|.
\]
Since $\mathcal{F}(1,\bar{2}),\mathcal{G}(\bar{1},2)\subset\binom{[3,n]}{k-1}$ are cross-intersecting, we have
\[
|\mathcal{F}(1,\bar{2})|+|\mathcal{G}(\bar{1},2)| \leq \binom{n-2}{k-1}.
\]
It follows that
\begin{align*}
|\mathcal{F}|+|\mathcal{G}|&= |\mathcal{F}(1,2)|+|\mathcal{G}(1)|+(|\mathcal{F}(1,\bar{2})|+|\mathcal{G}(\bar{1},2)|)\\[5pt]
&\leq \binom{n-2}{k-2}+\binom{n-1}{k-1}+\binom{n-2}{k-1}\\[5pt]
&=2\binom{n-1}{k-1}.
\end{align*}
Thus we may assume that $|\mathcal{G}|>\binom{n-1}{k-1}+\binom{n-2}{k-1}$.
Now
\[
|\mathcal{G}|=\binom{n-1}{k-1}+\binom{n-2}{k-1}+|\mathcal{G}(\bar{1},\bar{2})|\mbox{ and }|\mathcal{F}|=|\mathcal{F}(1,2)|.
\]
By the assumption on $|\mathcal{F}|$, $\mathcal{F}(1,2)\subset\binom{[3,n]}{k-2}$ satisfies $|\mathcal{F}(1,2)|\geq \binom{n-3}{k-3}+\binom{n-4}{k-3}$. Consequently $\{3,4\}\subset G$ for all $G\in \mathcal{G}(\bar{1},\bar{2})$. We infer $|\mathcal{G}(\bar{1},\bar{2},3,4)|=|\mathcal{G}(\bar{1},\bar{2})|$ and
\[
|\mathcal{F}(1,2)|= \binom{n-3}{k-3}+\binom{n-4}{k-3}+|\mathcal{F}(1,2,\bar{3},\bar{4})|.
\]
As $\mathcal{F}(1,2,\bar{3},\bar{4})$ and $\mathcal{G}(\bar{1},\bar{2},3,4)$ are cross-intersecting, we have
\[
|\mathcal{F}(1,2,\bar{3},\bar{4})|+|\mathcal{G}(\bar{1},\bar{2},3,4)|\leq \binom{n-4}{k-2}.
\]
Consequently,
\begin{align*}
|\mathcal{F}|+|\mathcal{G}|&= \binom{n-3}{k-3}+\binom{n-4}{k-3}+|\mathcal{F}(1,2,\bar{3},\bar{4})|+\binom{n-1}{k-1}
+\binom{n-2}{k-1}+|\mathcal{G}(\bar{1},\bar{2},3,4)|\\[5pt]
&\leq \binom{n-1}{k-1}+ \binom{n-2}{k-1}+ \binom{n-3}{k-3}+\binom{n-4}{k-3}+\binom{n-4}{k-2}\\[5pt]
&=2\binom{n-1}{k-1}.
\end{align*}
\end{proof}
\begin{lem}\label{lem-5.2}
Let $n\geq 2k$ and let $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ be non-trivial cross-intersecting. Let $|\mathcal{F}|=\alpha h(n,k)$ and suppose that $\alpha <1$. Set $f(\alpha) =\frac{1+\alpha^2}{(1-\alpha)^2}$. Then $|\mathcal{F}||\mathcal{G}|> h(n,k)^2$ implies
\begin{align}\label{ineq-keyfalpha}
f(\alpha) \geq \prod_{1\leq i\leq k-1} \frac{n-i}{n-k-i}.
\end{align}
\end{lem}
\begin{proof}
By \eqref{ineq-1} and Proposition \ref{prop-5.1}, we infer $|\mathcal{F}|+|\mathcal{G}|\leq 2\binom{n-1}{k-1}$. Note that $|\mathcal{F}||\mathcal{G}|> h(n,k)^2$ implies $|\mathcal{G}|\geq h(n,k)/\alpha$. Therefore,
\begin{align*}
2\binom{n-1}{k-1} \geq |\mathcal{F}|+|\mathcal{G}| &\geq \left(\alpha+\frac{1}{\alpha}\right)h(n,k)\geq \left(\alpha+\frac{1}{\alpha}\right)\left(\binom{n-1}{k-1}-\binom{n-k-1}{k-1}\right).
\end{align*}
By rearranging,
\[
\left(\alpha+\frac{1}{\alpha}-2\right)\binom{n-1}{k-1}\leq \left(\alpha+\frac{1}{\alpha}\right)\binom{n-k-1}{k-1}.
\]
Multiplying both sides by $\alpha$ we get
\[
(1-\alpha)^2\binom{n-1}{k-1}\leq (1+\alpha^2)\binom{n-k-1}{k-1}.
\]
Thus,
\begin{align*}
f(\alpha)=\frac{1+\alpha^2}{(1-\alpha)^2} \geq \frac{\binom{n-1}{k-1}}{\binom{n-k-1}{k-1}}
=\frac{(n-1)(n-2)\ldots(n-k+1)}{(n-k-1)(n-k-2)\ldots(n-2k+1)}.
\end{align*}
\end{proof}
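For later use let us also record that $f$ is increasing on $[0,1)$; indeed,
\[
f'(\alpha)=\frac{2\alpha(1-\alpha)^2+2(1-\alpha)(1+\alpha^2)}{(1-\alpha)^4}=\frac{2(1+\alpha)}{(1-\alpha)^3}>0
\qquad \mbox{for } 0\leq \alpha<1.
\]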
In the case of shift-resistant families and $k\geq 9$ we can extend the proof to the whole range $n\geq 2k+1$.
\begin{prop}\label{prop-5.3.0}
Let $n\geq 2k+1$, $k\geq 6$ and $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ be non-trivial cross-intersecting. Suppose that $(\mathcal{F},\mathcal{G})$ forms a shift-resistant pair. Set $f(\alpha) =\frac{1+\alpha^2}{(1-\alpha)^2}$. If
\[
\prod\limits_{1\leq i\leq k-1} \frac{n-i}{n-k-i} > f(0.64)\approx 10.8765,
\]
then
\begin{align*}
|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2.
\end{align*}
\end{prop}
\begin{proof}
By Proposition \ref{prop-4.4} and \eqref{ineq-hfupbound}, we have
\begin{align}
|\mathcal{F}| \leq \binom{n-2}{k-2}+\binom{n-4}{k-2}.
\end{align}
For $n=2k+1$,
\begin{align*}
\frac{\binom{n-1}{k}+\binom{n-4}{k}}{\binom{n}{k}} &=\frac{k+1}{2k+1}\left(\frac{2k(2k-1)2(k-1)+k(k-1)(k-2)}{2k(2k-1)2(k-1)}\right)\\[5pt]
&=\frac{k+1}{2k+1}\frac{3(3k-2)}{4(2k-1)} =\frac{3}{4} \frac{3k^2+k-2}{4k^2-1} >\frac{9}{16}.
\end{align*}
Since $\binom{x-d}{k}/\binom{x}{k}$ is an increasing function of $x$, for all $n\geq 2k+1$,
\begin{align}\label{ineq-key3}
\binom{n}{k}+\binom{n-1}{k}+\binom{n-4}{k}>\frac{25}{16}\binom{n}{k}.
\end{align}
By \eqref{ineq-key3} we infer for $(n-2)\geq 2(k-2)+1$,
\[
\binom{n-2}{k-2}+\binom{n-3}{k-2}+\binom{n-6}{k-2}>\frac{25}{16}\binom{n-2}{k-2}.
\]
For $(n-4)\geq 2(k-2)+1$,
\[
\binom{n-4}{k-2}+\binom{n-5}{k-2}+\binom{n-7}{k-2}>\frac{25}{16}\binom{n-4}{k-2}.
\]
Therefore,
\[
h(n,k) \geq \sum_{2\leq i\leq 7} \binom{n-i}{k-2}\geq \frac{25}{16}\left(\binom{n-2}{k-2}+\binom{n-4}{k-2}\right)\geq \frac{25}{16} |\mathcal{F}|,
\]
implying that $\alpha =\frac{|\mathcal{F}|}{h(n,k)} \leq \frac{16}{25}=0.64$. Since $f(\alpha)$ is increasing on $[0,1)$, we obtain
\[
f(\alpha)\leq f\left(0.64\right)< \prod\limits_{1\leq i\leq k-1} \frac{n-i}{n-k-i}.
\]
Then Lemma \ref{lem-5.2} implies $|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2$.
\end{proof}
\begin{prop}
For $n\geq 2k$ and $k\geq 3$,
\begin{align}\label{ineq-key4}
\prod\limits_{1\leq i\leq k-1} \frac{n-i}{n-k-i} >\left(\frac{n-\frac{k}{2}}{n-\frac{3k}{2}}\right)^{k-1}.
\end{align}
\end{prop}
\begin{proof}
Note that for $m>d>i>0$,
\begin{align}\label{ineq-key5}
\frac{m-d-i}{m-i}\cdot \frac{m-d+i}{m+i} <\left(\frac{m-d}{m}\right)^2.
\end{align}
Equivalently,
\begin{align*}
&\frac{(m-d)^2-i^2}{(m-d)^2}<\frac{m^2-i^2}{m^2}, \mbox{ that is} \\[5pt]
&\left(\frac{i}{m}\right)^2<\left(\frac{i}{m-d}\right)^2,
\end{align*}
which is true for $m>d>0$.
Applying \eqref{ineq-key5} repeatedly with $m=n-\frac{k}{2}$ and $d=k$, we obtain
\[
\frac{(n-k-1)(n-k-2)\ldots (n-2k+1)}{(n-1)(n-2)\ldots (n-k+1)} <\left(\frac{n-\frac{3k}{2}}{n-\frac{k}{2}}\right)^{k-1}
\]
and \eqref{ineq-key4} follows.
\end{proof}
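As a small illustration, for $n=9$ and $k=3$ inequality \eqref{ineq-key4} reads
\[
\frac{8\cdot 7}{5\cdot 4}=\frac{14}{5}=2.8>\left(\frac{9-\frac{3}{2}}{9-\frac{9}{2}}\right)^{2}=\left(\frac{5}{3}\right)^{2}=\frac{25}{9}\approx 2.78 .
\]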
\begin{cor}\label{prop-5.2}
Let $n\geq 2k+1$, $k\geq 9$ and $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ be non-trivial cross-intersecting. If $(\mathcal{F},\mathcal{G})$ forms a shift-resistant pair, then $|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2$.
\end{cor}
\begin{proof}
By Theorem \ref{main} we may assume that $2k+1\leq n\leq 4k$. By \eqref{ineq-key4} and $k\geq 9$, it follows that
\[
\prod\limits_{1\leq i\leq k-1} \frac{n-i}{n-k-i} >\left(\frac{n-\frac{k}{2}}{n-\frac{3k}{2}}\right)^{k-1}\geq \left(\frac{4k-\frac{k}{2}}{4k-\frac{3k}{2}}\right)^8=\left(\frac{7}{5}\right)^8\approx 14.7579>f(0.64).
\]
By Proposition \ref{prop-5.3.0}, we conclude that $|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2$.
\end{proof}
\begin{cor}\label{prop-5.2.2}
Let $2k+1 \leq n\leq 3.13k$, $k\geq 6$ and $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ be non-trivial cross-intersecting. If $(\mathcal{F}, \mathcal{G})$ forms a shift-resistant pair, then $|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2$.
\end{cor}
\begin{proof}
By \eqref{ineq-key4} and $k\geq 6$, it follows that
\[
\prod\limits_{1\leq i\leq k-1} \frac{n-i}{n-k-i} >\left(\frac{n-\frac{k}{2}}{n-\frac{3k}{2}}\right)^{k-1}\geq \left(\frac{3.13k-\frac{k}{2}}{3.13k-\frac{3k}{2}}\right)^5=\left(\frac{2.63}{1.63}\right)^5\approx 10.9356>f(0.64).
\]
By Proposition \ref{prop-5.3.0}, we conclude that $|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2$.
\end{proof}
Similarly, we can prove the same statement for $2k+1 \leq n\leq 3.54k$, $k=7$ or $2k+1 \leq n\leq 3.96k$, $k=8$.
\begin{prop}\label{prop-5.4}
Let $n\geq 3k$, $k\geq 14$ and $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ be non-trivial cross-intersecting. If both $\mathcal{F}$ and $\mathcal{G}$ are shifted, then
\begin{align*}
|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2.
\end{align*}
\end{prop}
\begin{proof}
By Theorem \ref{main} we may assume that $3k\leq n\leq 4k$. Suppose that $|\mathcal{F}|\leq |\mathcal{G}|$. By \eqref{ineq-f781} we have $|\mathcal{F}|\leq \binom{n}{k-2}$. Set $\alpha=\frac{|\mathcal{F}|}{h(n,k)}$. Then
\[
\frac{1}{\alpha}=\frac{h(n,k)}{|\mathcal{F}|} \geq \frac{\binom{n-1}{k-1}-\binom{n-k-1}{k-1}}{\binom{n}{k-2}}.
\]
Note that $3k\leq n\leq 4k$ and $k\geq 14$ imply
\[
\frac{\binom{n-1}{k-1}}{\binom{n}{k-2}} = \frac{(n-k+2)(n-k+1)}{(k-1)n} \geq \frac{4}{3}
\]
and
\begin{align*}
\frac{\binom{n-k-1}{k-1}}{\binom{n}{k-2}}
&=\frac{(n-k-1)(n-k-2)\ldots(n-2k+1)}{(k-1)n\ldots(n-k+3)}\\[5pt]
&\leq \frac{n-k-1}{k-1} \left(\frac{n-k-2}{n}\right)^{k-2}\\[5pt]
&\leq \left(3+\frac{2}{k-1}\right)\left(\frac{3}{4}\right)^{k-2}\\[5pt]
&< 3.16\times\left(\frac{3}{4}\right)^{12}<\frac{1}{9}.
\end{align*}
It follows that $\frac{1}{\alpha} \geq \frac{4}{3}-\frac{1}{9}=\frac{11}{9}$, implying $\alpha \leq \frac{9}{11}$. Thus,
\[
f(\alpha) \leq f\left(\frac{9}{11}\right)=50.5.
\]
By \eqref{ineq-key4} and $k\geq 14$, it follows that
\[
\prod\limits_{1\leq i\leq k-1} \frac{n-i}{n-k-i} >\left(\frac{n-\frac{k}{2}}{n-\frac{3k}{2}}\right)^{k-1}\geq \left(\frac{4k-\frac{k}{2}}{4k-\frac{3k}{2}}\right)^{13}=\left(\frac{7}{5}\right)^{13}\approx 79.3715>f\left(\frac{9}{11}\right).
\]
By Lemma \ref{lem-5.2} the proposition follows.
\end{proof}
Applying Corollary \ref{prop-5.2}, Proposition \ref{prop-5.4} and repeating the proof of Theorem \ref{main}, we obtain the following result.
\begin{thm}\label{main3}
Suppose that $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ are non-trivial cross-intersecting families, $n\geq 3k$, $k\geq 14$. Then $|\mathcal{F}||\mathcal{G}|\leq h(n,k)^2$.
\end{thm}
The next proposition rules out many potential constructions that might prevent Theorem \ref{main} from holding for the full range.
\begin{prop}\label{prop-5.3}
Let $n\geq 2k\geq 4$ and suppose that $\mathcal{F},\mathcal{G}\subset\binom{[n]}{k}$ are non-trivial and cross-intersecting. If $\mathcal{H}t^{(2)}(\mathcal{F})\cap\mathcal{H}t^{(2)}(\mathcal{G})\neq \emptyset$ then
\begin{align}\label{ineq-prop-0}
|\mathcal{F}|+|\mathcal{G}|\leq 2h(n,k).
\end{align}
\end{prop}
\begin{proof}
Without loss of generality, assume $(1,2)\cap H\neq \emptyset$ for all $H\in \mathcal{F}\cup \mathcal{G}$. Obviously,
\begin{align}\label{ineq-prop-3}
|\mathcal{F}([2])|+|\mathcal{G}([2])|\leq 2\binom{n-2}{k-2}.
\end{align}
Note that $\mathcal{F}(1,\bar{2})$, $\mathcal{G}(1,\bar{2})$, $\mathcal{F}(\bar{1},2)$ and $\mathcal{G}(\bar{1},2)$ are non-empty by non-triviality. $\mathcal{F}(1,\bar{2})$ and $\mathcal{G}(\bar{1},2)$ are cross-intersecting $(k-1)$-graphs on $[3,n]$. Consequently, by a classical result of Hilton and Milner \cite{HM67} concerning non-empty cross-intersecting families we obtain
\begin{align}\label{ineq-prop-1}
|\mathcal{F}(1,\bar{2})|+|\mathcal{G}(\bar{1},2)|\leq\binom{n-2}{k-1}-\binom{n-k-1}{k-1}+1.
\end{align}
The same is true for $\mathcal{F}(\bar{1},2)$ and $\mathcal{G}(1,\bar{2})$:
\begin{align}\label{ineq-prop-2}
|\mathcal{F}(\bar{1},2)|+|\mathcal{G}(1,\bar{2})|\leq \binom{n-2}{k-1}-\binom{n-k-1}{k-1}+1.
\end{align}
Summing up \eqref{ineq-prop-3}, \eqref{ineq-prop-1} and \eqref{ineq-prop-2} yields
\[
|\mathcal{F}|+|\mathcal{G}|\leq 2\left(\binom{n-2}{k-2}+\binom{n-2}{k-1}-\binom{n-k-1}{k-1}+1\right)=2h(n,k).
\]
\end{proof}
Let us mention that the case $k=2$, namely that non-trivial cross-intersecting graphs $\mathcal{F}$ and $\mathcal{G}$ satisfy $|\mathcal{F}||\mathcal{G}|\leq 9$, is easy to prove. If $\mathcal{F}$ and $\mathcal{G}$ share an edge, then this edge is a common $2$-transversal and $|\mathcal{F}|+|\mathcal{G}|\leq 6$ follows from Proposition \ref{prop-5.3}, whence $|\mathcal{F}||\mathcal{G}|\leq 9$. Arguing indirectly assume $|\mathcal{F}|\geq 4$ and let $(x,y)$ be an edge of $\mathcal{G}$. Every edge of $\mathcal{F}$ meets $(x,y)$, and since $\mathcal{G}$ is non-trivial there is an edge of $\mathcal{G}$ avoiding $x$ and one avoiding $y$; hence $x$ and $y$ have degree at most two in $\mathcal{F}$, and $|\mathcal{F}|\geq 4$ forces both degrees to be exactly two and $(x,y)\notin \mathcal{F}$. Let $u$ and $v$ be the neighbors of $x$ in $\mathcal{F}$. Then the only candidate for an edge in $\mathcal{G}$ that does not contain $x$ is $(u,v)$, whence $(u,v)\in\mathcal{G}$. By cross-intersection the two neighbors of $y$ in $\mathcal{F}$ must be $u$ and $v$ as well. Now cross-intersection implies $\mathcal{G}=\{(x,y),(u,v)\}$, whence $|\mathcal{F}||\mathcal{G}|=8<9$.
{\bf\noindent Remark.} There are many cases of equality in \eqref{ineq-prop-0}. Namely, let $A,B\in \binom{[3,n]}{k-1}$. Set
\[
\mathcal{F}(1,\bar{2}) = \{ A\},\ \mathcal{G}(\bar{1},2)=\left\{G\in \binom{[3,n]}{k-1}\colon G\cap A\neq\emptyset\right\}.
\]
There are two possibilities for $B$.
\begin{itemize}
\item[(a)] $\mathcal{F}(\bar{1},2) = \{B\},\ \mathcal{G}(1,\bar{2})=\left\{G\in \binom{[3,n]}{k-1}\colon G\cap B\neq\emptyset\right\}$,
\item[(b)] $\mathcal{F}(\bar{1},2) = \left\{F\in \binom{[3,n]}{k-1}\colon F\cap B\neq\emptyset\right\},\ \mathcal{G}(1,\bar{2})=\{B\}$.
\end{itemize}
Add $\mathcal{F}(1,2)=\mathcal{G}(1,2)=\binom{[3,n]}{k-2}$. Clearly $|\mathcal{F}||\mathcal{G}|=h(n,k)^2$ holds only in case (b).
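Indeed, in case (b) both families have exactly
\[
\binom{n-2}{k-2}+\Bigl(\binom{n-2}{k-1}-\binom{n-k-1}{k-1}\Bigr)+1=h(n,k)
\]
members, so the product equals $h(n,k)^2$ there.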
It is natural to consider the analogous product version of the more general Hilton-Milner-Frankl Theorem. To state this result let us define two types of families
\begin{align*}
&\mathcal{H}(n,k,t) =\left\{H\in \binom{[n]}{k}\colon [t]\subset H, H\cap [t+1,k+1]\neq \emptyset\right\}\cup \left\{[k+1]\setminus \{j\}\colon 1\leq j\leq t\right\},\\[5pt]
&\mathcal{A}(n,k,t) =\left\{A\in \binom{[n]}{k}\colon |A\cap [t+2]|\geq t+1\right\}.
\end{align*}
\begin{thm}
Suppose that $\mathcal{F}\subset \binom{[n]}{k}$ is non-trivial and $t$-intersecting, $n>(k-t+1)(t+1)$. Then
\begin{align}\label{thm-5.1}
|\mathcal{F}| \leq \max\left\{|\mathcal{H}(n,k,t)|,|\mathcal{A}(n,k,t)|\right\}.
\end{align}
\end{thm}
For $t=1$, $|\mathcal{H}(n,k,t)|\geq |\mathcal{A}(n,k,t)|$ and \eqref{thm-5.1} reduces to the Hilton-Milner Theorem. For $t=2$ and $n>n_0(k,t)$ it was proved in \cite{F782}. For $t\geq 20$ and all $n>(k-t+1)(t+1)$ it follows from \cite{FFuredi2}, however it was first proved in full generality by \cite{AK}.
Let us conclude this paper by announcing the corresponding product version.
\begin{thm}[\cite{FW2022-2}]
Suppose that $\mathcal{F},\mathcal{G}\subset \binom{[n]}{k}$ are non-trivial and cross $t$-intersecting, $n\geq4(t+2)^2k^2$, $k\geq 5$. Then
\begin{align}\label{thm-5.2}
|\mathcal{F}||\mathcal{G}| \leq \max\left\{|\mathcal{H}(n,k,t)|^2,|\mathcal{A}(n,k,t)|^2\right\}.
\end{align}
\end{thm}
\end{document} |
\begin{document}
\title{\bf{LOWER ORDER TERMS FOR THE ONE-LEVEL DENSITY OF ELLIPTIC CURVE $L$-FUNCTIONS }}
\vspace {2 in}
\author{D.\ K.\ Huynh, J.\ P.\ Keating and N.\ C.\ Snaith\\
School of Mathematics,\\ University of Bristol,\\
Bristol BS8 1TW, UK}
\date{\today}
\maketitle \thispagestyle{empty}
\begin{abstract}
It is believed that, in the limit as the conductor tends to infinity, correlations between the zeros of elliptic curve $L$-functions averaged within families follow the distribution laws of the eigenvalues of random matrices drawn from
the orthogonal group. For test functions with restricted support, this is known to be true for
the one- and two-level densities of
zeros within the families studied to
date. However, for finite conductor Miller's
experimental data reveal an interesting discrepancy from
these limiting results. Here we use the $L$-functions ratios
conjectures to calculate the 1-level density for the family of
even quadratic twists of an elliptic curve $L$-function for large but finite conductor. This
gives a formula for the leading and lower order terms up to an error term that is conjectured to be significantly smaller. The lower order terms explain many of
the features of the zero statistics for relatively small conductor and
model the very slow convergence to the infinite conductor limit.
However, our main observation is that they do not
capture the behaviour of zeros in the important region
very close to the critical point and so do not explain Miller's discrepancy. This therefore implies
that a more accurate model for statistics near to this point
needs to be developed.
\end{abstract}
\section{ Introduction}
The conjecture that the limiting statistical properties of the zeros of $L$-functions
may be modeled by those of the eigenvalues of random matrices goes back to
Montgomery \cite{kn:mont73}, who introduced it in the context of the Riemann
zeta-function. For the Riemann zeros this conjecture is supported by extensive
numerical \cite{kn:odlyzko97} and theoretical
\cite{kn:mont73,kn:hejhal94,kn:bogkea95,kn:bogkea96,kn:rudsar} calculations.
The generalization to zero statistics within families of $L$-functions
was developed by Katz and Sarnak \cite{kn:katzsarnak99a,kn:katzsarnak99b},
and again there is much evidence supporting it \cite{kn:rub01}.
Random matrix models for the moments of the Riemann zeta-function on its
critical line and for central values of $L$-functions within families were
introduced by Keating and Snaith \cite{kn:keasna00a,kn:keasna00b}, and have
since been developed extensively
\cite{kn:confar00,kn:cfkrs,kn:ghk,kn:buikea07,kn:buikea08,kn:cfkrs2}.
For more background, see \cite{kn:mezzsna}.
The random-matrix moment conjectures extend naturally to ratios of $L$-functions.
The $L$-functions ratios conjectures were stimulated by the work of
Farmer, who, in 1995, made a conjecture for shifted moments of the
Riemann zeta-function \cite{kn:farmer95}. Nonnenmacher and
Zirnbauer \cite{kn:nonzir02} found formulas for the ratios of
characteristic polynomials of random matrices coming from one of
the classical compact groups. This was formalised and written up
by Conrey, Farmer and Zirnbauer \cite{kn:cfz1} and led to the
development of corresponding ratios conjectures for $L$-functions
in number theory \cite{kn:cfz2}.
The Birch/Swinnerton-Dyer conjecture asserts that the rank of an
elliptic curve is equal to the order of vanishing at the central
point of the associated $L$-function. The idea of using random
matrix theory to predict the frequency of non-zero rank in
families of elliptic curves was introduced by Conrey, Keating,
Rubinstein and Snaith \cite{kn:ckrs00,kn:ckrs05}. An interesting
extension of this is to find a random matrix model for elliptic
curve $L$-functions of
a given order of vanishing at the critical point. The first steps in
this direction have been taken by Snaith \cite{kn:snaith05a} and
Miller/Due\~{n}ez \cite{kn:mil05}, but it is clear from Miller's numerical computations
that there is a still simpler problem concerning the zero statistics of families of
rank zero curves that is far from being
understood. This problem is the main motivation for the work we shall report on here.
According to the Katz/Sarnak philosophy \cite{kn:katzsarnak99a,kn:katzsarnak99b},
zeros of families of $L$-functions show the same statistical
behaviour as eigenvalues of random matrices drawn from one of the
classical compact groups. The zeros of a family
of elliptic curve $L$-functions with even (odd) functional
equation should follow the distribution laws of eigenvalues of the even
(odd) orthogonal group. Rigorous calculations \cite{kn:miller02,
kn:mil04,kn:young} show that as the conductor (the parameter that
orders $L$-functions within a family) tends to infinity, the one-
and two-level densities do indeed tend to the expected orthogonal
forms for several different families of elliptic curves.
That is, as the conductor tends to infinity, the zero statistics
approach the scaling limit for large matrix size of the
corresponding statistic for the eigenvalues of matrices from $SO(2N)$
or $SO(2N+1)$. (Similar agreement with random matrix theory is shown for many
other families of $L$-functions, see for example
\cite{kn:duemil06,
kn:fouiwa03,kn:guloglu05,kn:hugrud03a,kn:hugmil07,kn:ILS99,kn:ozlsny99,
kn:ricroy08a,kn:royer01,kn:rub01}.) The test functions involved in these
calculations have a limited range of support, but nonetheless the
evidence is compelling. Thus it was surprising to see in
Miller's numerical results \cite{kn:mil05} a distinct repulsion of the zeros from the
central point for a family of $L$-functions of rank 0
elliptic curves, because no repulsion is seen in the statistics of $SO(2N)$
eigenvalues. Of course, in numerical computations the
conductor is finite, and so it is clear that an explanation is needed for finite conductor statistics and how they approach
the limiting
$SO(2N)$ statistic.
We do have a relatively complete understanding of the way in which the random matrix limit is approached for the zero statistics of the Riemann zeta function at a height $T$ up the critical line as $T\rightarrow\infty$. Berry first wrote down an approximate formula describing the finite-$T$ corrections to the random matrix limiting form for a statistic related to the 2-point correlation function in \cite{kn:berry88} and showed that this described Odlyzko's data remarkably accurately. Later, a formula that is believed to capture all of the essential features was derived by Bogomolny and Keating \cite{kn:bk96}. The terms in the Bogomolny-Keating formula that describe the corrections to the random matrix limit are often referred to as {\it lower order terms}. See \cite{kn:berrykeating99} for an overview and numerical illustrations. More recently, Conrey and Snaith \cite{kn:consna06, kn:consna08} have shown how the Bogomolny-Keating formula and its extension to all $n$-point correlation functions can be recovered from the $L$-functions ratios conjectures \cite{kn:cfz2}. There have also been
investigations of lower order terms in the zero statistics of
various families of $L$-functions
\cite{kn:fouiwa03,kn:mil07,kn:mil08,kn:ricroy08b,kn:young05}. In particular, Conrey and Snaith have shown how such terms can also be recovered from the ratios conjectures \cite{kn:consna06}. It is thus natural in this context to seek the explanation for the surprising discrepancy observed by Miller in these lower order terms.
In this paper we examine lower order terms in the 1-level
density of the zeros of a family of elliptic curve $L$-functions.
Specifically, we investigate even quadratic twists of an
elliptic curve $L$-function, for which we calculate the zeros
numerically with Rubinstein's {\sffamily lcalc}
\cite{kn:rubinstein}. Using the ratios
conjectures we derive a formula for the 1-level density that describes convincingly the
intricate structure of the numerical data away from the central point and so explains the rate of approach to the
random matrix limit in this region. However, most interestingly, our formula fails to describe the
region very close to the central point. To illustrate our main results, we plot in
figure \ref{fig:nrzero} a numerical evaluation of the 1-level density together with our formula. Miller's discrepancy corresponds to the region near to the origin. Our main conclusion here is then that the
explanation for the zero distribution in this region lies beyond the models combining random matrix theory and arithmetical lower order terms considered so far; that is, these formulae are not sufficient to explain the discrepancy. We plan to explore augmented models that build on the present calculation to explain the phenomenon in a future paper with E. Due{\~n}ez
and S. J. Miller.
\begin{figure}
\caption{1-level density of unscaled zeros from 0 up to height 0.6
of even quadratic twists of $L_{E_{11}}$.}
\label{fig:nrzero}
\end{figure}
\section{ The 1-level density formula}
Let the $L$-function $L_E(s)$ associated with an elliptic curve
$E$ be given by the Dirichlet series
\begin{equation}
L_E(s)=\sum_{n=1}^{\infty} \frac{\lambda(n)}{n^s},
\end{equation}
where the coefficients ($\lambda(n)=a(n)/\sqrt{n}$, with $a(p)=p+1-\#E({\mathbb F}_p)$,
$\#E({\mathbb F}_p)$ being the number of points on $E$ counted over
${\mathbb F}_p$) have been normalised
so that the functional equation relates $s$ to
$1-s$:
\begin{equation}
L_E(s)=\omega(E) \left(\frac{2\pi}{\sqrt{M}}\right)^{2s-1}
\frac{\Gamma(3/2-s)}{ \Gamma(s+1/2)}L_E(1-s).
\end{equation}
Here $M$ is the conductor of the elliptic curve $E$; we will
consider only prime $M$. Also, $\omega(E)$ is $+1$ or $-1$
resulting, respectively, in an even or odd functional equation for
$L_E$.
Let $L_E(s, \chi_d)$ denote the $L$-function obtained by twisting $L_E(s)$ quadratically.
Here $d$ is a fundamental discriminant, i.e., $d \in
\mathbb{Z}\setminus\{1\}$ such that $p^2 \nmid d$ for all odd primes $p$ and
$d \equiv 1 \pmod{4}$ or $d \equiv 8, 12 \pmod{16}$, and $\chi_d$ is
the Kronecker symbol. Then the twisted $L$-function (which is
itself the $L$-function associated with another elliptic curve
$E_d$) is given by
\begin{equation} \label{twistedLfunction}
L_E(s, \chi_d) = \sum_{n = 1}^\infty \frac{\lambda(n)
\chi_d(n)}{n^s} = \prod_p \left(1 -
\frac{\lambda(p)\chi_d(p)}{p^s} +
\frac{\psi_{M}(p)\chi_d(p)^2}{p^{2s}} \right)^{-1}
\end{equation}
where $\psi_{M}$ is the principal Dirichlet character of modulus
$M$:
\begin{equation}
\psi_{M}(p) =
\begin{cases}
1 \text{~if~} p \nmid M\\
0 \text{~otherwise}.
\end{cases}
\end{equation}
The functional equation of this $L$-function is
\begin{equation}
L_E(s,\chi_d)=\chi_d(-M)\omega(E)
\left(\frac{2\pi}{\sqrt{M}|d|}\right)^{2s-1} \frac{\Gamma(3/2-s)}{
\Gamma(s+1/2)}L_E(1-s,\chi_d).
\end{equation}
In order to derive the 1-level density of the zeros near the
critical point $s=1/2$ of $L$-functions in this family of
quadratic twists, we consider the average over the family of a
ratio of $L$-functions evaluated at different points:
\begin{equation}
R_E(\alpha, \gamma) := \label{RE} \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \frac{L_E(1/2 + \alpha, \chi_d)}{L_E(1/2 +
\gamma, \chi_d)}.
\end{equation}
This is an average over those twisted $L$-functions that have even
functional equations and $0< d \leq X$. Requiring an even
functional equation imposes a restriction on $d \mod M$. We
follow the recipe of \cite{kn:cfkrs}, \cite{kn:cfz2} and the
calculations in \cite{kn:consna06} to derive a formula for
$R_E(\alpha, \gamma)$ via the ratios conjecture. Note that
arriving at a ratios conjecture entails applying a list of
manipulations, several of which introduce errors large enough to
be significant. The miracle is that these errors appear to cancel
out and the recipe yields formulae that have been checked
numerically and against specific known cases in many different
situations (see \cite{kn:cfz2,kn:consna06}). Recent work of Steven
J. Miller \cite{kn:mil07} has shown that a rigorous calculation of
the 1-level density for the family of real quadratic Dirichlet
$L$-functions matches exactly, for a suitably chosen test
function, the prediction obtained by applying the ratios recipe.
See also \cite{kn:sto08} for further investigations of the ratios
conjecture and the 1-level density of the same family of Dirichlet
$L$-functions and \cite{kn:mil08} for Miller's extension of
\cite{kn:mil07} to families of cuspidal newforms.
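To make the family of twists concrete, the following minimal Python sketch lists the positive fundamental discriminants $d\leq X$ with $\chi_d(-M)\omega_E=+1$, for an odd prime conductor $M$ and a given sign $\omega_E$. It is included purely for illustration and is not the code used to generate the data in this paper (which was obtained with Rubinstein's {\sffamily lcalc}); all function names are illustrative only. For $d>0$ and odd prime $M$ one has $\chi_d(-M)=\chi_d(M)$, the Legendre symbol, which is evaluated below by Euler's criterion.
\begin{verbatim}
def is_squarefree(n):
    # trial division; adequate for moderate n
    p = 2
    while p * p <= n:
        if n % (p * p) == 0:
            return False
        p += 1
    return True

def is_fundamental_discriminant(d):
    # positive fundamental discriminants, as in the definition above
    if d <= 1:
        return False
    if d % 4 == 1:
        return is_squarefree(d)
    if d % 16 in (8, 12):
        return is_squarefree(d // 4)
    return False

def chi_d(d, M):
    # chi_d(M) = Legendre symbol (d|M), M an odd prime, by Euler's criterion
    r = pow(d % M, (M - 1) // 2, M)
    return -1 if r == M - 1 else r

def even_twist_family(X, M, omega_E):
    # positive fundamental discriminants d <= X with chi_d(-M)*omega_E = +1
    return [d for d in range(2, X + 1)
            if is_fundamental_discriminant(d) and chi_d(d, M) * omega_E == 1]
\end{verbatim}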
We use (\ref{twistedLfunction}) to replace $L_E(s, \chi_d)$ in the
denominator of (\ref{RE}) by
\begin{equation}
\frac{1}{L_E(s, \chi_d)} = \sum_{n = 1}^\infty \frac{\mu_E(n)\chi_d(n)}{n^s}
\end{equation}
where $\mu_E(n)$ is a multiplicative function defined as
\begin{equation}
\mu_E(n) =
\begin{cases}
-\lambda(p), \mbox{if~} n = p\\
\psi_{M}(p), \mbox{if~} n = p^2\\
0, \mbox{if~} n = p^k, k > 2.\\
\end{cases}
\end{equation}
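For clarity, the shape of $\mu_E$ can be read off from (\ref{twistedLfunction}): each inverse Euler factor is the Dirichlet polynomial
\[
1-\frac{\lambda(p)\chi_d(p)}{p^{s}}+\frac{\psi_{M}(p)\chi_d(p)^2}{p^{2s}},
\]
and $\chi_d(p)^2=\chi_d(p^2)$, so multiplying over $p$ gives a Dirichlet series supported on cube-free $n$ with coefficients $\mu_E(n)\chi_d(n)$, where $\mu_E(p)=-\lambda(p)$, $\mu_E(p^2)=\psi_M(p)$ and $\mu_E(p^k)=0$ for $k>2$, as stated.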
We use the approximate functional equation for the $L$-function in
the numerator of (\ref{RE}):
\begin{align} \nonumber
L_E(1/2 + \alpha, \chi_d) = & \sum_{m < x} \frac{\chi_d(m)
\lambda(m)}{m^{1/2 + \alpha}} +
\left(\frac{\sqrt{M}|d|}{2\pi}\right)^{-2\alpha} \frac{\Gamma(1-
\alpha)}{\Gamma(1 + \alpha)} \sum_{n < y} \frac{\chi_d(n)
\lambda(n)}{n^{1/2-\alpha}} \\& + \mbox{remainder},
\label{approximatefunctionalequation}
\end{align}
where $M$ is the conductor of the elliptic curve $E$ and $xy = d^2 / (2\pi)$. Therefore using the first sum
of the approximate functional equation (\ref{approximatefunctionalequation}) we get
\begin{equation} \label{RE1}
R_E^1(\alpha, \gamma) := \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \sum_{h, m} \frac{\lambda(m) \mu_E(h)
\chi_d(mh)}{m^{1/2 + \alpha} h^{1/2 + \gamma}}.
\end{equation}
We denote by $R_E^2(\alpha, \gamma)$ the expression that results
from using the second sum in the approximate functional equation
(\ref{approximatefunctionalequation}). Thus
\begin{equation} \label{RE_sum}
R_E(\alpha, \gamma) \approx R_E^1(\alpha,\gamma) + R_E^2(\alpha, \gamma).
\end{equation}
The ratios recipe now calls for a replacement of $\chi_d(mh)$ with
its average over the family (the set of $d$'s being summed over).
We set
\begin{equation}
X^*= \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}}1 {\rm\;\;\;and\;\;\;} X_b^*=
\sum_{\substack{0<d \leq X\\d=b\mod M}}1 \end{equation}
as, respectively, the number of fundamental discriminants below $X$ that
we are summing over and the number of those congruent to $b$ modulo $M$, and
note (see \cite{kn:cfkrs}, Theorem 3.1.1)
\begin{equation}\label{eq:harmonic}
\frac{1}{X^*_b}\sum_{\substack{0<d \leq X \\d=b\mod M}} \chi_d(n)
\approx\left\{
\begin{array}{cl}
\chi_b(g)a(n) & \mbox{if }n=g\square, \mbox{ with }
(\square,M)=1\mbox{ and if all prime}\\& \mbox{factors of $g$ are
prime factors of $M$} \\0& \mbox{otherwise,}
\end{array}\right.
\end{equation}
where
\begin{equation}
a(n) = \prod_{p|\square} \frac{p}{p+1}.
\end{equation}
This is to say that terms not of the form $n=g\square$ can be disregarded (this is the so-called `harmonic
detector' which is mentioned in \cite{kn:cfkrs}). Since we are
considering only curves with prime conductor $M$, $g$ is simply a
power of $M$. Note that in the cases we are interested in
$\chi_b(g)=\omega_E^{\ell}$ for $g=M^{\ell}$ because $d$ has been
chosen such that $\chi_b(M)=\chi_d(M)=\omega_E$ (we have
$\chi_d(M)=\chi_d(-M)$ since we are considering only positive
$d$).
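As a simple check on (\ref{eq:harmonic}), take $n=p^2$ with $p\nmid M$: then $\chi_d(p^2)=1$ unless $p\mid d$, so the left-hand side is the proportion of the fundamental discriminants in the sum that are coprime to $p$, which is heuristically
\[
1-\frac{1}{p+1}=\frac{p}{p+1}=a(p^2),
\]
in agreement with the definition of $a(n)$.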
Concentrating on $R_E^1$, we replace $\chi_{d}(mh)$ with the
average given by (\ref{eq:harmonic}) and so restrict the sum as
follows:
\begin{equation}
R_E^1(\alpha, \gamma) \approx X^* \sum_{hm = \square M^{\ell}}
\frac{\lambda(m) \mu_E(h) a(mh)\omega_E^{\ell}}{m^{1/2 +
\alpha}h^{1/2 + \gamma}},
\end{equation}
with $(\square,M)=1$ and $\ell\geq 0$ (recall that, since $M$ is prime,
$g=M^{\ell}$). We write this sum as an Euler product (for convenience
denoting by $h$ the exponent on primes dividing $h$ in the sum
above and similarly for $m$) and note that if $m+h\geq 1$ then
$a(p^{m+h}) = p/(p+1)$ for primes not dividing the conductor,
whereas $a(p^{m+h})=1$ if the prime does divide the conductor. So
we obtain
\begin{equation}
R_E^1(\alpha, \gamma) \approx X^* V_|(\alpha,
\gamma)V_{\nmid}(\alpha,\gamma)
\end{equation}
where
\begin{eqnarray}\label{summe}
&&V_\nmid(\alpha, \gamma) := \prod_{p\nmid M} \Bigg(1 +
\frac{p}{p+1} \sum_{\substack{m,h\geq 0\\m + h > 0 \\ m + h ~{\rm
even}}} \frac{\lambda(p^m) \mu_E(p^h)}{p^{m(1/2 + \alpha) + h(1/2
+
\gamma)}} \Bigg)\\
&&V_{|}(\alpha,\gamma):= \prod_{p| M}\Bigg(\sum_{h,m\geq 0}
\frac{\lambda(p^m)\mu_E(p^h)\omega_E^{m+h}} {p^{m(1/2 + \alpha) +
h(1/2 + \gamma)}}\Bigg).\label{eq:vdivide}
\end{eqnarray}
Since $\mu_E(p^h) = 0$ for $h>2$, and since $\mu_E(p^2)=\psi_M(p)=0$ for
$p|M$, we only need to consider $h = 0,1,2$ in the sum in (\ref{summe})
and $h=0,1$ in (\ref{eq:vdivide}). Then the Euler products become
\begin{eqnarray} \label{Euler}
V_\nmid(\alpha, \gamma) &= &\prod_{p\nmid M} \left(1 + \frac{p}{p
+ 1}\left(\sum_{m = 1}^\infty \frac{\lambda(p^{2m})}{p^{m(1 +
2\alpha)}} - \frac{\lambda(p)}{p^{1 + \alpha + \gamma}}\sum_{m =
0}^\infty \frac{\lambda(p^{2m+1})}{p^{m(1 +
2\alpha)}}\right.\right.
\nonumber\\
&&\qquad\qquad\qquad\left.\left.+ \frac{1}{p^{1 + 2\gamma}}\sum_{m
= 0}^\infty \frac{\lambda(p^{2m})}{p^{m(1 + 2\alpha)}} \right)
\right)
\end{eqnarray}
and
\begin{eqnarray}\label{eq:Eulerdivide}
V_{|}(\alpha,\gamma)=\prod_{p| M}\Bigg(\sum_{m= 0}^{\infty}\bigg(
\frac{\lambda(p^m)\omega_E^{m}} {p^{m(1/2 + \alpha)
}}-\frac{\lambda(p)\lambda(p^m)\omega_E^{m+1}}{p^{m(1/2+\alpha)+1/2+\gamma}}\bigg)\Bigg).
\end{eqnarray}
We now factor out the divergent part of $R_E^1$ using the Riemann
zeta function and also, for convenience, we will factor out the
symmetric square $L$-function associated with $L_E$. This leaves
us with a convergent Euler product. In the following, for
simplicity, we shall only deal with elliptic curves with prime
conductor, $M$. Recall that the Euler product of a Hasse-Weil
$L$-function $L_E(s)$ coming from the elliptic curve $E$, with
Dirichlet coefficients $\lambda(n)$ normalised so that the
functional equation relates $s$ to $1 - s$, has the form
\begin{equation}
L_E(s) = \prod_{p|M} (1 - \lambda(p) p^{-s})^{-1} \prod_{p\nmid M}
(1 - \lambda(p) p^{-s} + p^{-2s})^{-1}.
\end{equation}
Now we can write this product as
\begin{equation}
L_E(s) = \prod_p (1 - \alpha(p)p^{-s})^{-1} (1 - \beta(p)p^{-s})^{-1}
\end{equation}
where
\begin{equation} \label{alphaplusbeta}
\alpha(p) + \beta(p) = \lambda(p)
\end{equation}
and
\begin{equation}
\alpha(p) \beta(p) =
\begin{cases}
0 \mbox{~for~} p | M\\
1 \mbox{~for~} p \nmid M. \label{Steve}
\end{cases}
\end{equation}
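Equivalently, for $p\nmid M$ the numbers $\alpha(p)$ and $\beta(p)$ are the two roots of $X^2-\lambda(p)X+1=0$, and Hasse's bound $|\lambda(p)|\leq 2$ allows us to write
\[
\alpha(p)=e^{i\theta_p},\qquad \beta(p)=e^{-i\theta_p},\qquad \lambda(p)=2\cos\theta_p,
\]
while for the prime $p=M$ of bad reduction we have simply $\alpha(p)=\lambda(p)$ and $\beta(p)=0$.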
Let $L_E(\mbox{sym}^2, s)$ denote the symmetric square $L$-function.
Then by definition (see \cite{kn:iwaniec}, page 251)
\begin{equation} \label{Iwaniec}
L_E(\mbox{sym}^2, s) = \prod_p (1 - \alpha^2(p)p^{-s})^{-1} (1 - \alpha(p)\beta(p)p^{-s})^{-1}
(1 - \beta^2(p)p^{-s})^{-1}.
\end{equation}
We have (see \cite{kn:conrey04b}, page 236)
\begin{equation}
\lambda(m) \lambda(n) = \sum_{\substack{d|(m, n)\\(d, M) = 1}} \lambda(mn/d^2),
\end{equation}
(where $M$ is the conductor of $E$) and in particular we have for
$p \nmid M$
\begin{eqnarray} \label{quadrat}
\lambda(p)^2 & = & \lambda(p^2) + 1\\ \label{lang}
\lambda(p^{2m +1})\lambda(p) & = & \lambda(p^{2m +2}) + \lambda(p^{2m}).
\end{eqnarray}
We wish to write the Euler product in (\ref{Iwaniec}) in terms of $\lambda(p)$, so we start
by using (\ref{alphaplusbeta}) to obtain
\begin{eqnarray}
L_E(\mbox{sym}^2, s)& =& \prod_p \left(1 - \frac{\lambda(p)^2 -
\alpha(p)\beta(p)}{p^s} + \frac{\alpha(p)\beta(p)(\lambda(p)^2 -
\alpha(p)\beta(p))}{p^{2s}} \right.\nonumber \\
&&\qquad\qquad\qquad\left.- \frac{(\alpha(p)\beta(p))^3}{p^{3s}}
\right)^{-1}.
\end{eqnarray}
We now distinguish between $p|M$ and $p \nmid M$, and so, using
(\ref{quadrat}) and (\ref{Steve}), we have
\begin{equation} \label{symmetric_old}
L_E(\mbox{sym}^2, s) = \prod_{p | M} \left(1 -
\frac{\lambda(p)^2}{p^s}\right)^{-1} \prod_{p \nmid M} \left(1 -
\frac{\lambda(p^2)}{p^s} + \frac{\lambda(p^2)}{p^{2s}} -
\frac{1}{p^{3s}} \right)^{-1}.
\end{equation}
Now we reconsider the Euler products in (\ref{Euler}) and
(\ref{eq:Eulerdivide}). In constructing ratios conjectures we
usually allow $-\tfrac{1}{4}<{\rm Re} \alpha<\tfrac{1}{4}$ and
$\frac{1}{\log X}\ll {\rm Re}\,\gamma<\tfrac{1}{4}$, where the bounds at
$\pm\tfrac{1}{4}$ allow us to control the convergence of Euler
products of the type (\ref{Euler}). In fact, in this application
the real parts of $\alpha$ and $\gamma$ can be considered as very
small. Thus we can write
\begin{eqnarray}
V_{\nmid}(\alpha, \gamma) &=& \prod_{p\nmid M} \left(1 +
\frac{p}{p + 1}\left(\sum_{m = 1}^\infty
\frac{\lambda(p^{2m})}{p^{m(1 + 2\alpha)}} -
\frac{\lambda(p)}{p^{1 + \alpha + \gamma}}\sum_{m = 0}^\infty
\frac{\lambda(p^{2m+1})}{p^{m(1 + 2\alpha)}}
\right.\right.\nonumber\\
&&\qquad\qquad\qquad\qquad \left.\left.+ \frac{1}{p^{1 +
2\gamma}}\sum_{m = 0}^\infty \frac{\lambda(p^{2m})}{p^{m(1 +
2\alpha)}} \right)
\right)\nonumber \\
&=& \label{orderforeulerproduct} \prod_{p\nmid M} \left(1 +
\frac{\lambda(p^2)}{p^{1 + 2\alpha}} - \frac{\lambda(p^2) + 1
}{p^{1 + \alpha + \gamma}} + \frac{1}{p^{1 + 2 \gamma}} +
\cdots\right),
\end{eqnarray}
where the $\cdots$ indicate terms that converge like $1/p^2$ when
$\alpha$ and $\gamma$ are small. We now use the following
approximations to factor out the divergent or slowly converging
terms. By (\ref{symmetric_old}) we have
\begin{equation}
L_E(\mbox{sym}^2, 1 + 2\alpha)=\prod_p\left( 1+\frac{\lambda(p^2)}{p^{1
+ 2\alpha}} +\cdots \right)
\end{equation}
and
\begin{equation}
\frac{1}{L_E(\mbox{sym}^2, 1+\alpha + \gamma)} \frac{1}{\zeta(1 + \alpha
+ \gamma)}=\prod_p\left(1- \frac{\lambda(p^2) + 1 }{p^{1 + \alpha
+ \gamma}}+\cdots\right).
\end{equation}
Also, since there is only one prime that divides the conductor
$M$, a factor of $\zeta(1+2\gamma)$ will account for the
divergence of the term $\frac{1}{p^{1 + 2 \gamma}}$ in
(\ref{orderforeulerproduct}).
Hence we can write
\begin{equation}
V_\nmid(\alpha, \gamma)V_|(\alpha, \gamma)=Y_E(\alpha,
\gamma)A_E(\alpha, \gamma),
\end{equation}
where \begin{equation}\label{eq:YE}
Y_E(\alpha, \gamma) = \frac{\zeta(1 +
2\gamma) L_E(\mbox{sym}^2, 1 + 2\alpha)}{\zeta(1 + \alpha + \gamma)
L_E(\mbox{sym}^2, 1+ \alpha+\gamma)}.
\end{equation}
$A_E(\alpha, \gamma)$ is given by
\begin{align} \nonumber
&A_E(\alpha, \gamma) = ~ Y_E^{-1}(\alpha, \gamma)\times
\prod_{p\nmid M} \left(1 + \frac{p}{p + 1}\left(\sum_{m =
1}^\infty \frac{\lambda(p^{2m})}{p^{m(1 + 2\alpha)}}\right.\right.
\\ \label{AE} &\left.\left. \qquad\qquad- \frac{\lambda(p)}{p^{1 + \alpha + \gamma}}\sum_{m =
0}^\infty \frac{\lambda(p^{2m+1})}{p^{m(1 + 2\alpha)}} +
\frac{1}{p^{1 + 2\gamma}}\sum_{m = 0}^\infty
\frac{\lambda(p^{2m})}{p^{m(1 + 2\alpha)}} \right) \right)\\
\nonumber &\qquad\qquad\qquad\qquad\times\prod_{p|
M}\Bigg(\sum_{m= 0}^{\infty}\bigg( \frac{\lambda(p^m)\omega_E^{m}}
{p^{m(1/2 + \alpha)
}}-\frac{\lambda(p)}{p^{1/2+\gamma}}\frac{\lambda(p^m)\omega_E^{m+1}}{p^{m(1/2+\alpha)}}\bigg)\Bigg)
\end{align}
and is analytic as $\alpha, \gamma \rightarrow 0$. Hence, by
recalling (\ref{RE1}), we find
\begin{equation}
R_E^1(\alpha,
\gamma) \approx \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} Y_E(\alpha, \gamma) A_E(\alpha,
\gamma).
\end{equation}
We obtain the other sum $R_E^2(\alpha, \gamma)$ in (\ref{RE_sum})
by using the second term in the approximate functional equation
(\ref{approximatefunctionalequation}) and carrying out exactly the
same steps as above:
\begin{equation}
R_E^2(\alpha, \gamma)\approx \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \left(\frac{\sqrt{M}|d|}{2\pi}
\right)^{-2\alpha} \frac{\Gamma(1 - \alpha)}{\Gamma(1 + \alpha)}
Y_E(-\alpha, \gamma) A_E(-\alpha, \gamma).
\end{equation}
By applying the ratios conjecture recipe, we therefore have the
result:
\begin{conjecture}[Ratios Conjecture]\label{conj:ratios} Under reasonable conditions, such as $-\frac{1}{4}<{\rm Re}\,
\alpha<\frac{1}{4}$, $\frac{1}{\log X} \ll {\rm Re}\,
\gamma<\frac{1}{4}$ and ${\rm Im}\,\alpha,{\rm Im}\,\gamma\ll
X^{1-\varepsilon}$, we have
\begin{align}
R_E(\alpha, \gamma) = &\sum_{\substack{0<d \leq X\nonumber\\
\chi_d(-M)\omega_E=+1}} \frac{L_E(1/2 + \alpha, \chi_d)}{L_E(1/2 +
\gamma, \chi_d)}
\\
= & \sum_{\substack{0<d \leq X\nonumber\\
\chi_d(-M)\omega_E=+1}} \left( Y_E(\alpha,\gamma) A_E(\alpha, \gamma) + \left(\frac{\sqrt{M}|d|}{2\pi}\right)^{-2\alpha} \frac{\Gamma(1 - \alpha)}{\Gamma(1 + \alpha)}Y_E(-\alpha,\gamma) A_E(-\alpha, \gamma)\right) \\
& \qquad\qquad\qquad+ O(X^{1/2 + \varepsilon}),\nonumber
\end{align}
where $Y_E$ and $A_E$ are defined at (\ref{eq:YE}) and (\ref{AE}),
respectively, $M$ is the (prime) conductor of the $L$-function
$L_E(s)$ and $\omega_E$ is the sign from its functional equation.
\end{conjecture}
We note that the error term $O(X^{1/2+\varepsilon})$ is part of
the statement of the ratios conjecture; the power on $X$ is not
suggested by any of the steps used in arriving at the main
expression in Conjecture \ref{conj:ratios}. At the end of Section
\ref{sect:data} we propose that the limited data we have available
supports a power saving on the error term, but not necessarily a
power of 1/2.
To calculate the 1-level density we actually need the average of
the logarithmic derivative of $L$-functions in this family, so we
note that
\begin{eqnarray}
\sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \frac{L_E'(1/2 + r, \chi_d)}{L_E(1/2 + r,
\chi_d)} = \frac{d}{d \alpha}R_E(\alpha, \gamma)\Big|_{\alpha =
\gamma = r}.
\end{eqnarray}
Using (\ref{lang}) for primes not dividing $M$ and the
multiplicativity of $\lambda(p)$ for $p|M$, we get $A_E(r,r) = 1$
and we have, with
\begin{equation}
\label{eq:AE1}
A_E^1(r,r)=\frac{d}{d\alpha}A_E(\alpha,\gamma)\big|_{\alpha=\gamma=r},
\end{equation}
\begin{align}\nonumber
\frac{d}{d \alpha}\Big\{ Y_E(\alpha,\gamma) A_E(\alpha, \gamma)\Big\}\Big|_{\alpha = \gamma = r}
= & -\frac{\zeta'(1 + 2r)}{\zeta(1 + 2r)} A_E(r,r) + \frac{L_E'(\mbox{sym}^2, 1 + 2r)}{L_E(\mbox{sym}^2, 1+2r)}A_E(r,r) + A_E^1(r,r) \\
= & -\frac{\zeta'(1 + 2r)}{\zeta(1 + 2r)} + \frac{L_E'(\mbox{sym}^2, 1 +
2r)}{L_E(\mbox{sym}^2, 1+2r)} + A_E^1(r,r)
\end{align}
and
\begin{align}\nonumber
\frac{d}{d \alpha}
\left(\frac{\sqrt{M}|d|}{2\pi}\right)^{-2\alpha} \frac{\Gamma(1 -
\alpha)}{\Gamma(1 + \alpha)}\{Y_E(-\alpha, \gamma) A_E(-\alpha,
\gamma)\}\Big|_{\alpha = \gamma = r}
\\= - \left(\frac{\sqrt{M}|d|}{2\pi}\right)^{-2r} \frac{\Gamma(1 - r)}{\Gamma(1 + r)}
\frac{\zeta(1 + 2r)L_E(\mbox{sym}^2, 1 - 2r)}{L_E(\mbox{sym}^2,1)}A_E(-r, r).
\end{align}
Therefore we have for the logarithmic derivative the following:
\begin{theorem}\label{theo:logderiv} Assuming the Ratios
Conjecture \ref{conj:ratios} and $\frac{1}{\log X} \ll {\rm Re}(r)<\frac{1}{4}$
and ${\rm Im}(r)\ll X^{1-\varepsilon}$, the
average of the logarithmic derivative over a family of quadratic
twists (with even functional equation) of the $L$-function of an
elliptic curve with prime conductor $M$ is
\begin{align} \nonumber
& \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \frac{L_E'(1/2 + r, \chi_d)}{L_E(1/2 + r,
\chi_d)}
\\ & = \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \Bigg( -\frac{\zeta'(1 + 2r)}{\zeta(1 +
2r)} + \frac{L_E'({\rm sym}^2, 1 + 2r)}{L_E({\rm sym}^2, 1+2r)} +
A_E^1(r,r)
\\ \label{eins} & ~~~~-
\left(\frac{\sqrt{M}|d|}{2\pi}\right)^{-2r} \frac{\Gamma(1 -
r)}{\Gamma(1 + r)} \frac{\zeta(1 + 2r)L_E({\rm sym}^2, 1 -
2r)}{L_E({\rm sym}^2,1)}A_E(-r, r) \Bigg) +
O(X^{1/2+\varepsilon}).\nonumber
\end{align}
Here $\omega_E$ is the sign from the functional equation of $L_E$,
$L_E({\rm sym}^2,s)$ is the associated symmetric square
$L$-function (defined at (\ref{Iwaniec})), and $A_E$ and $A_E^1$
are arithmetic factors defined at (\ref{AE}) and (\ref{eq:AE1}),
respectively.
\end{theorem}
Let $\gamma_d$ denote the ordinate of a generic zero of $L_E(s, \chi_d)$ on the half line.
We consider the 1-level density
\begin{equation}
S_1(f) := \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \sum_{\gamma_d} f(\gamma_d)
\end{equation}
where $f$ is some nice test function, say an even Schwartz function.
By the argument principle we have
\begin{equation}
S_1(f) = \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \frac{1}{2\pi i} \left( \int_{(c)} -
\int_{(1-c)}\right) \frac{L_E'(s, \chi_d)}{L_E(s, \chi_d)} f (-i(s -
1/2))ds
\end{equation}
where $(c)$ denotes a vertical line from $c -i\infty$ to $c + i\infty$ and $3/4 > c > 1/2 + 1/\log X$.
The integral on the $c$-line is
\begin{equation}
\frac{1}{2\pi} \int_{-\infty}^\infty f(t - i(c-1/2)) \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \frac{L_E'(1/2 + (c -1/2
+it),\chi_d)}{L_E(1/2 + (c-1/2 +it), \chi_d)}dt.
\end{equation}
The sum over $d$ can be replaced by Theorem \ref{theo:logderiv}.
The bounds on the size of $t$ coming from the ratios conjecture
should not limit us here. It is not entirely known in what range
of the parameters the ratios conjecture holds, but the test
function $f$ can be chosen to decay sufficiently fast that the
tails of the integrand, where the ratios conjecture might fail,
will not contribute significantly. (See the 1-level density
section of \cite{kn:consna06} for more detailed analysis.) Next we
move the path of integration to $c = 1/2$ as the integrand is
regular at $t = 0$ and get
\begin{align}
& \frac{1}{2\pi} \int_{-\infty}^\infty f(t) \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \Bigg( -\frac{\zeta'(1 + 2it)}{\zeta(1 +
2it)} + \frac{L_E'(\mbox{sym}^2, 1 + 2it)}{L_E(\mbox{sym}^2, 1+2it)} +
A_E^1(it,it)
\\ & \qquad - \left(\frac{\sqrt{M}|d|}{2\pi}\right)^{-2it} \frac{\Gamma(1 - it)}{\Gamma(1 + it)}
\frac{\zeta(1 + 2it)L_E(\mbox{sym}^2, 1 - 2it)}{L_E(\mbox{sym}^2,1)}A_E(-it,
it) \Bigg)dt \nonumber \\
&\qquad\qquad\qquad\qquad\qquad
\qquad\qquad\qquad\qquad\qquad\nonumber+ O(X^{1/2+\varepsilon}).
\end{align}
For the integral on the line with real part $1-c$, we use the
functional equation
\begin{equation}
L_E(s, \chi_d) = \chi_d(-M)\omega_E X(s, \chi_d) L_E(1-s, \chi_d)
\end{equation}
with
\begin{equation} \label{xi_equation}
X(s, \chi_d) =
\left(\frac{\sqrt{M}|d|}{2\pi}\right)^{1-2s} \frac{\Gamma(3/2-s)}{
\Gamma(s+1/2)}
\end{equation}
to obtain
\begin{equation}
\frac{L_E'(1-s, \chi_d)}{L_E(1-s, \chi_d)} = \frac{X'(s, \chi_d)}{X(s, \chi_d)} - \frac{L_E'(s, \chi_d)}{L_E(s, \chi_d)}. \label{zwei}
\end{equation}
The logarithmic derivative of (\ref{xi_equation}) evaluated
at $s = 1/2 + \alpha$ is
\begin{equation}
\frac{X'(1/2 + \alpha, \chi_d)}{X(1/2 + \alpha, \chi_d)} = -2\log
\left(\frac{\sqrt{M}|d|}{2\pi} \right) - \frac{\Gamma'}{\Gamma}(1
+ \alpha) - \frac{\Gamma'}{\Gamma}(1 - \alpha).
\end{equation}
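This is immediate from (\ref{xi_equation}): taking logarithms,
\[
\log X(s,\chi_d)=(1-2s)\log\left(\frac{\sqrt{M}|d|}{2\pi}\right)+\log\Gamma(3/2-s)-\log\Gamma(s+1/2),
\]
and differentiating in $s$ and then setting $s=1/2+\alpha$ (so that $3/2-s=1-\alpha$ and $s+1/2=1+\alpha$) gives the expression above.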
For the integral on the $(1-c)$ line we change variables $s
\rightarrow 1-s$ and use (\ref{zwei}). We thus obtain finally the
following:
\begin{theorem} Assuming the Ratios Conjecture \ref{conj:ratios},
the 1-level density for the zeros of the family of even
quadratic twists of an elliptic curve $L$-function $L_E(s)$ with
prime conductor $M$ is given by
\begin{align} \label{oneleveldensity}\nonumber
S_1(f) = &~\sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \sum_{\gamma_d} f(\gamma_d)\\=&~
\frac{1}{2\pi} \int_{-\infty}^\infty f(t) \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \Bigg( 2\log
\left(\frac{\sqrt{M}|d|}{2\pi} \right) + \frac{\Gamma'}{\Gamma}(1
+ it) + \frac{\Gamma'}{\Gamma}(1 - it)\nonumber
\\ & + 2\Big[-\frac{\zeta'(1 + 2it)}{\zeta(1 + 2it)} +
\frac{L_E'({\rm sym}^2, 1 + 2it)}{L_E({\rm sym}^2, 1+2it)} +
A_E^1(it,it)
\\ \nonumber & - \left(\frac{\sqrt{M}|d|}{2\pi}\right)^{-2it} \frac{\Gamma(1 - it)}{\Gamma(1 + it)}
\frac{\zeta(1 + 2it)L_E({\rm sym}^2, 1 -
2it)}{L_E({\rm sym}^2,1)}A_E(-it, it)\Big] \Bigg)dt \\
& + O(X^{1/2+\varepsilon}),\nonumber
\end{align}
where $\gamma_d$ is a generic zero of $L_E(s,\chi_d)$, $f$ is an
even test function as described above, $\omega_E$ is the sign from
the functional equation of $L_E$, $L_E({\rm sym}^2,s)$ is the
associated symmetric square $L$-function (defined at
(\ref{Iwaniec})), and $A_E$ and $A_E^1$ are arithmetic factors
defined at (\ref{AE}) and (\ref{eq:AE1}), respectively.
\end{theorem}
\section{Numerical test} \label{sect:data}
We test our prediction -- namely formula (\ref{oneleveldensity})
-- for the 1-level density with a concrete example (see figure
\ref{fig:datavstheory}). We pick the elliptic curve $E_{11}$ with
$(a_1,a_2,a_3,a_4,a_6) = (0,-1,1,0,0)$ in the Weierstraß form
\begin{equation}
y^2 + a_1 xy + a_3 y = x^3 + a_2 x^2 + a_4 x + a_6
\end{equation}
giving
\begin{equation}
E_{11}: y^2 + y = x^3 - x^2
\end{equation}
and consider the even quadratic twists of its associated
$L$-function with fundamental discriminants between 0 and 40,000.
We are interested in the 1-level density of unscaled zeros from 0
up to height 30. The numerical data is obtained from Rubinstein's
{\sffamily lcalc} \cite{kn:rubinstein}. In the range considered we
find 11,135 quadratic twists, of which 5,562 are even ones with a
total of about 590,170 zeros. In figure \ref{fig:datavstheory} we
obtain the solid curve from the histogram of this zero data by
choosing a binsize of 0.1 and dividing by both the number of
quadratic twists with even functional equation, and the mean
density of zeros $\log(\sqrt{11}X/(2\pi))$. 593 of the
$L$-functions with even functional equation have (at least) a
double zero at the central point; these zeros at the central point
are not plotted in figure \ref{fig:datavstheory}. The dashed
curve is obtained from the formula (\ref{oneleveldensity}) with
$X=40,000$ and $f(t)=\delta(t-x)+\delta(t+x)$ for $x$ between 0
and 30. This curve is scaled like the data curve by dividing
through by the number of quadratic twists with even functional
equation and the mean density of zeros. It was computed using a
combination of Mathematica and C++. The coefficients $\lambda(p)$
appearing in the arithmetic factor $A_E(\alpha, \gamma)$ were
computed using PARI. To compute coefficients of prime powers
$\lambda(p^m)$ for $p \nmid M$ the following recursion formulas
(see \cite{kn:hugmil07}) were used
\begin{eqnarray}
\lambda(p^{2m}) & = & \lambda(p)^{2m} - \sum_{r = 0}^{m - 1} \left({2m \choose m - r} - {2m \choose m - r - 1} \right)\lambda(p^{2r})\\
\lambda(p^{2m + 1}) & = & \lambda(p)^{2m + 1} - \sum_{r = 0}^{m - 1} \left({2m + 1 \choose m - r}
- {2m + 1 \choose m - r - 1} \right)\lambda(p^{2r + 1}).
\end{eqnarray}
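For the reader's convenience, the following short Python sketch illustrates these steps: it evaluates $\lambda(p)$ for $E_{11}$ by naive point counting over $\mathbb{F}_p$ and then applies the recursions displayed above. It is included purely for illustration and is not the code used for the computations reported here (which used PARI, Mathematica and C++); the function names are illustrative only.
\begin{verbatim}
from math import comb, sqrt

def lambda_p(p, a1=0, a2=-1, a3=1, a4=0, a6=0):
    # lambda(p) = a(p)/sqrt(p) for y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6
    # (defaults: E_11), by naive point counting over F_p, p a prime of good reduction.
    count = 1  # the point at infinity
    for x in range(p):
        rhs = (x**3 + a2 * x * x + a4 * x + a6) % p
        for y in range(p):
            if (y * y + a1 * x * y + a3 * y - rhs) % p == 0:
                count += 1
    return (p + 1 - count) / sqrt(p)

def lambda_pk(lam, k):
    # lambda(p^k), p not dividing M, via the recursions quoted above.
    if k == 0:
        return 1.0
    if k == 1:
        return lam
    if k % 2 == 0:
        m = k // 2
        return lam**(2*m) - sum((comb(2*m, m-r) - comb(2*m, m-r-1))
                                * lambda_pk(lam, 2*r) for r in range(m))
    m = (k - 1) // 2
    return lam**(2*m+1) - sum((comb(2*m+1, m-r) - comb(2*m+1, m-r-1))
                              * lambda_pk(lam, 2*r+1) for r in range(m))

lam7 = lambda_p(7)
assert abs(lam7) <= 2                                    # Hasse bound
assert abs(lambda_pk(lam7, 2) - (lam7**2 - 1)) < 1e-12   # consistency with (quadrat)
\end{verbatim}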
\begin{figure}
\caption{1-level density of unscaled zeros from 0 up to height 30
of even quadratic twists of $L_{E_{11}}$.}
\label{fig:datavstheory}
\end{figure}
In general there is good agreement between the data and the
theoretical curve, which captures the main features of the data.
We would expect better agreement with a larger set of data, since
the data seems not yet to have resolved all the peaks further out
along the axis.
A closer look reveals that the 1-level density is strongly
governed by the non-trivial zeros of $\zeta(s)$ and $L_E(\mbox{sym}^2,
s)$: we observe that some dips of the data curve are located at
$\gamma / 2$ where $\gamma$ is the ordinate of a non-trivial zero
of the Riemann zeta function. This is captured in the term
\begin{equation}
-\frac{\zeta'(1+2it)}{\zeta(1+2it)}
\end{equation}
of our conjecture for $S_1(f)$. In figure \ref{fig:zetaLzeros} we
mark the position of a non-trivial zero of the Riemann zeta
function on our conjectural answer by a $\ast$. These $\ast$ all
lie at or near a dip. This
phenomenon has been encountered before, in the study of lower order terms
of the number variance \cite{kn:berry88} and the correlation
functions
\cite{kn:berrykeating99,kn:bk96,kn:consna06,kn:consna07,kn:consna08}
of the Riemann zeros, and in the one-level density of other
families of $L$-functions \cite{kn:consna06}.
\begin{figure}
\caption{Effects of non-trivial zeros of the Riemann zeta function
(indicated by~$\ast$) and of the non-trivial zeros of $L_E(\mbox{sym}^2,s)$
(indicated by $\diamond$).}
\label{fig:zetaLzeros}
\end{figure}
On the other hand we observe that some peaks are located at
$\tilde{\gamma} / 2$ where $\tilde{\gamma}$ is the ordinate of a
non-trivial zero of $L_E(\mbox{sym}^2, s)$. This is captured in the term
\begin{equation}
\frac{L_E'(\mbox{sym}^2, 1 + 2it)}{L_E(\mbox{sym}^2, 1 + 2it)}
\end{equation}
of our conjecture for $S_1(f)$. In figure \ref{fig:zetaLzeros} we
mark the position of a non-trivial zero of $L_E(\mbox{sym}^2, s)$ by a
$\diamond$. The majority of these $\diamond$s lie at or near a
peak. In particular, we observe that
if a zero of the Riemann zeta function is close to a zero of
$L_E(\mbox{sym}^2, s)$ then these zeros lie at or near a dip.
Hence, zeros of the Riemann zeta function seem to dominate the
behaviour of the 1-level density more than the zeros of $L_E(\mbox{sym}^2,
s)$. This may be explained by the fact that the density of the Riemann
zeros in this range is smaller than that of the zeros of
$L_E(\mbox{sym}^2, s)$, and so, in terms of the mean zero density, the
one-line is closer to the half-line in the case of the Riemann
zeta function. Therefore one would expect the Riemann zeros to
have a larger effect.
The term
\begin{equation}
- \left(\frac{\sqrt{M}|d|}{2\pi}\right)^{-2it} \frac{\Gamma(1 -
it)}{\Gamma(1 + it)} \frac{\zeta(1 + 2it)L_E(\mbox{sym}^2, 1 -
2it)}{L_E(\mbox{sym}^2,1)}A_E(-it, it),
\end{equation}
from (\ref{oneleveldensity}), makes its most obvious contribution
by causing the oscillation near the origin of the plot of our
conjectural answer for the 1-level density. The factor
$\left(\frac{\sqrt{M}|d|}{2\pi}\right)^{-2it}$ results in
oscillations on the scale of the mean density of the zeros of the
original $L$-function, $L_E$.
In summary, we notice that the lower order terms dominate the
behaviour of the zeros when we are far from the limit of infinite
conductor (in the family of quadratic twists, $E_d$, the conductor
increases with $d$). This becomes more obvious when we compare our
conjectural answer for finite conductors with the limiting
theoretical result: in figure \ref{fig:scale1} we consider the
scaled 1-level density of $SO(2N)$ in the limit $N\rightarrow
\infty$ against our conjectural answer (also scaled) for finite
conductor. We observe convergence to the limiting theoretical
result as we increase $X$, the cut-off point for $d$. The observed
effects of the arithmetical terms for small and finite conductors
are washed out and shifted away from the origin in the large
conductor limit.
\begin{figure}
\caption{Scaled limiting 1-level density of $SO(2N)$ (solid)
versus the scaled formula (\ref{interim}).}
\label{fig:scale1}
\end{figure}
To further understand the approach to the limiting distribution,
we calculate the 1-level density for scaled zeros and recover
the limit and the next to leading order term from
(\ref{oneleveldensity}). As a first step we rescale the variable
$t$ in (\ref{oneleveldensity}) as
\begin{equation}
\tau = t (L/\pi)
\end{equation}
and define
\begin{equation}\label{eq:fg}
f(t) = g(t (L/\pi)),
\end{equation}
where
\begin{equation}
L := \log\bigg(\frac{\sqrt{M}X}{2\pi}\bigg),
\end{equation}
and get, after a change of variables,
\begin{eqnarray}
&& \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \sum_{\gamma_d}g\Big(\frac{\gamma_d
L}{\pi}\Big)\nonumber
\\ \nonumber && =~ \frac{1}{2L} \int_{-\infty}^\infty g(\tau) \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \Bigg( 2\log
\left(\frac{\sqrt{M}|d|}{2\pi} \right) +
\frac{\Gamma'}{\Gamma}\Big(1 + \frac{i\pi \tau}{L}\Big)
\\ \nonumber &&~~~ + \frac{\Gamma'}{\Gamma}\Big(1 - \frac{i \pi \tau}{L}\Big) +
2\Big[-\frac{\zeta'(1 + \frac{2 i \pi \tau}{L})}{\zeta(1 + \frac{2
i \pi \tau}{L})} + \frac{L_E'(\mbox{sym}^2, 1 + \frac{2 i \pi
\tau}{L})}{L_E(\mbox{sym}^2, 1+\frac{2 i \pi \tau}{L})} +
A_E^1\Big(\frac{i \pi \tau}{L}, \frac{i \pi \tau}{L}\Big)
\\ \nonumber &&~~~ -\bigg(\frac{\sqrt{M}|d|}{2\pi}\bigg)^{-2i \pi \tau / L}
\frac{\Gamma(1 - \frac{i \pi \tau}{L})}{\Gamma(1 + \frac{i \pi
\tau}{L})} \frac{\zeta(1 + \frac{2 i \pi \tau}{L})L_E(\mbox{sym}^2, 1 -
\frac{2 i \pi \tau}{L})}{L_E(\mbox{sym}^2,1)}\\
\nonumber&&~~~\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad
\times A_E\Big(-\frac{i \pi
\tau}{L}, \frac{i \pi \tau}{L}\Big)\Big] \Bigg)d\tau \\
\label{interim} &&~~~ + O(X^{1/2+\varepsilon}).
\end{eqnarray}
As before, we write the number of fundamental discriminants in our
family less than or equal to $X$ as
\begin{equation}
X^* := \sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} 1.
\end{equation}
Using the Euler-Maclaurin formula we make the approximation
\begin{equation}
\sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \log \left(\frac{\sqrt{M}|d|}{2\pi}
\right) = X^*\left[ \log \left(\frac{\sqrt{M}X}{2\pi} \right) - 1
\right] + O\left(X^{1/2 + \varepsilon}\right).
\end{equation}
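Heuristically, the qualifying $d$ are equidistributed in $(0,X]$, and
\[
\frac{1}{X}\int_0^X\log\left(\frac{\sqrt{M}u}{2\pi}\right)du=\log\left(\frac{\sqrt{M}X}{2\pi}\right)-1,
\]
which is the bracketed main term; the error made in this approximation is absorbed into the $O(X^{1/2+\varepsilon})$ term.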
In the same manner we have
\begin{align}
\sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \left(\frac{\sqrt{M}|d|}{2
\pi}\right)^{-2i \pi \tau / L} = & X^*\big(1+ \frac{2i \pi
\tau}{L} +O(L^{-2})\big)e^{-2i\pi\tau} + O (X^{1/2}).
\end{align}
Writing
\begin{equation}
\zeta(s+1) = \frac{1}{s} + \sum_{n=0}^\infty
\frac{(-1)^n}{n!} \gamma_n s^n,
\end{equation}
we have
\begin{align}
\frac{\zeta'(1+s)}{\zeta(1+s)} = & -s^{-1} + \gamma + (-\gamma^2 -
2\gamma_1)s + O(s^2),
\end{align}
where $\gamma = \gamma_0$ is Euler's constant, and so
\begin{equation}
\zeta(1 + \frac{2 i \pi \tau}{L}) = \frac{L}{2 i \pi \tau} + \gamma + O(L^{-1})
\end{equation}
and
\begin{equation}
\frac{\zeta'(1 + \frac{2 i \pi \tau}{L})}{\zeta(1 + \frac{2 i \pi
\tau}{L})}=-\frac{L}{2i \pi \tau} + \gamma +O(L^{-1}).
\end{equation}
Simple Taylor expansions of the other factors in (\ref{interim})
lead us to, with the relation between $f$ and $g$ given in
(\ref{eq:fg}),
\begin{eqnarray} \label{S1f}&& \frac{1}{X^*}S_1(f)=\frac{1}{X^*}\sum_{\substack{0<d \leq X\\
\chi_d(-M)\omega_E=+1}} \sum_{\gamma_d}g\Big(\frac{\gamma_d L}{\pi}\Big)\\
&&\quad=\int_{-\infty}^{\infty}g(\tau)\Bigg( 1 + \frac{\sin(2\pi
\tau)}{2 \pi \tau} - a_1\frac{1 +\cos(2\pi \tau)}{L} -
a_2\frac{\pi \tau \sin(2\pi \tau)}{L^2} +
O\left(\frac{1}{L^3}\right)\Bigg)d\tau\nonumber
\end{eqnarray}
where
\begin{equation} \label{aone}
a_1 = 1 + 2\gamma - A^1_E(0,0) - \frac{L'_E(\mbox{sym}^2,
1)}{L_E(\mbox{sym}^2, 1)}
\end{equation}
and
\begin{align} \nonumber
a_2 & = 2 + 4\gamma + 3\gamma^2 - 2\gamma_1 + B'(0) + 2\gamma B'(0) - 2\frac{L'_E(\mbox{sym}^2, 1)}{L_E(\mbox{sym}^2, 1)}\\
& -\frac{4\gamma L'(1)}{L(1)} - \frac{B'(0)L'_E(\mbox{sym}^2,
1)}{L_E(\mbox{sym}^2, 1)} + \frac{B''(0)}{4} + \frac{L''_E(\mbox{sym}^2,
1)}{L_E(\mbox{sym}^2, 1)},
\end{align}
with
\begin{equation}
B'(0) = \frac{d}{dr}A_E(-r,r)\Big|_{r =0} \mbox{~and~} B''(0) =
\frac{d^2}{dr^2}A_E(-r,r)\Big|_{r =0}.
\end{equation}
In order to obtain (\ref{aone}) we use the following identity
\begin{equation} \label{relation}
-\frac{1}{2} B'(0) = A^1_E(0,0).
\end{equation}
We establish identity (\ref{relation}) by simple algebra and using
(\ref{lang}) for primes not dividing $M$, the multiplicativity of
$\lambda(p)$ for $p|M$ and $A_E(r,r) =1$.
This work was initially conceived to investigate the unexpected
numerical results found by Steven J. Miller \cite{kn:mil05} near
the origin of the histogram of the distribution of the first zero
above the central point of a family of rank zero $L$-functions. He
observed very few examples of zeros lying close to the central
point. That is, he observed the phenomenon of repulsion of zeros
from the central point which we know, from rigorous work on the
1-level and 2-level densities \cite{kn:young,kn:miller02,kn:mil04}
does not persist in the large conductor limit. Since the 1-level
density (a histogram of all zeros) and the distribution of the
lowest zero (a histogram of the lowest zero of each $L$-function)
are the same for very small distances from the central point, it
is natural to enquire whether the ratios conjecture yields a
formula for the 1-level density which would display and explain
Miller's observed repulsion at finite conductor. Although it can
be seen from figure \ref{fig:scale1} that the formula
(\ref{interim}) is significantly smaller near the origin than the
limiting curve, and approaches it from below as the conductor
increases, there is no evidence of repulsion. This is a major
discrepancy from the data, as seen in figure~\ref{fig:nrzero}:
away from the critical point we have a nice match between the
prediction and the data while near the critical point we find
fewer zeros in the data than predicted by our formula. It is most
interesting that the main terms of the ratios conjecture do not
capture this important feature. Of course, the natural question
is whether this contradicts the ratios conjecture, or whether the
discrepancy can be accounted for by the error term. As expected
due to the limited data available, the test described below is
inconclusive, but shows signs that the error term in the
ratios conjecture (and hence on the one level density in
(\ref{oneleveldensity})) is of the form $X^{b+\varepsilon}$, for
$b<1$. The ratios conjecture is usually stated with $b=1/2$.
We fix several sample points at various distances away from the
critical point and measure the difference between the main terms
of our prediction (that is, the sum over $d$ inside the integral
in (\ref{oneleveldensity})) and the data. In fact, we compare the
normalised versions of our prediction and data by dividing through
by the number of fundamental discriminants $X^*$ less than $X$ and
the mean density of zeros. So let us denote this difference
between the main terms of the normalised theory and the data at a
fixed height $t$ and fixed $X$ by $\Delta(t, X)$. Since we have
divided by $X^*$, which is proportional to $X$, this difference is
expected to be of size
\begin{equation}
|\Delta(t, X)| = O(X^{b-1 + \varepsilon})
\end{equation}
The quantity we will plot is
\begin{equation} \label{Q_difference}
Q_{\Delta}(t,X) := \frac{\log(|\Delta( t, X)|)}{\log X}
\end{equation}
and if the ratios conjecture with error term $X^{b+\varepsilon}$
is correct then we would expect
\begin{equation} \label{log_difference}
Q_{\Delta}(t,X) = b-1 + O\Big(\frac{\log\log X}{\log X}\Big)
\end{equation}
as $X\rightarrow \infty$.
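Concretely, the quantity plotted below can be computed from the normalised prediction and data as in the following minimal Python sketch; it is included only to make the procedure explicit, and the variable names are illustrative rather than taken from any existing package.
\begin{verbatim}
import numpy as np

def q_delta(prediction, data, X):
    # Q_Delta(t, X) of (Q_difference): 'prediction' is the normalised
    # main-term value at height t, 'data' the normalised zero count there.
    delta = prediction - data
    return np.log(np.abs(delta)) / np.log(X)

# If the error term in the ratios conjecture is O(X^{b+eps}), then for
# increasing cut-offs X the values q_delta(...) should settle near b - 1.
\end{verbatim}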
In figure \ref{fig:discrepancy} we plot the quantity
$Q_{\Delta}(t,X)$ for $0 < X < 400,000$ and for various fixed
sample points $ t_1~=~0.01, t_2~=~0.02, t_3~=~0.03, t_4~=~0.04,
t_5~=~0.05, t_6~=~0.4$ and $t_7~=~0.6$. We notice that the
curves are much smoother for sample points near the critical
point, $t=0$, e.g.\ $t_1,t_2,t_3$. In the range $0 < X < 400,000$
these points are well inside the region where the zero data shows
repulsion at the critical point; see figure \ref{fig:nrzero}. Thus
the difference between the theory (smooth curve in figure
\ref{fig:nrzero}) and data (histogram) does not change sign as $X$
increases. Presumably it is the amplification of such sign
changes by the logarithm in (\ref{Q_difference}) that is
responsible for the jagged curves in figure \ref{fig:discrepancy}
for sample points $t_4, t_5$ and $t_6$.
We see also that the curves at sample points close to the critical
point appear at first sight to indicate a larger error term; in
fact, over this range of $X$ the $t_1$ curve implies $b-1>0$! If
a limit such as (\ref{log_difference}) exists, it does not seem to
behave uniformly in $t$. However, the $t_1, t_2$ and $t_3$ curves
are decaying as $X$ increases and we do not have enough data to
see what their final behaviour will be. We remember that the
convergence is like $\log \log X/ \log X$, so we would need much
more data to be able to make a sensible conclusion about the size
of the error term.
Also, it is interesting to note that at the right hand side of
figure \ref{fig:discrepancy} the $t_3=0.03$ curve has decayed to a
level comparable to the curves of the sample points that are more
distant from $t=0$. Examining figure \ref{fig:nrzero}, it appears
that the area of major discrepancy between the ratios conjecture
prediction and the data (that is, where the data shows repulsion
from the critical point at $t=0$) lies between t=0 and about
$t=0.03$. We expect that this region will narrow as the range of
discriminants, $d$, increases, and this is borne out by comparing
the two pictures in figure \ref{fig:nrzero}; the data grows more
quickly to the height of the solid curve in the right hand picture
where $0<d<400,000$, than in the left hand picture where
$0<d<100,000$. Thus at the right hand edge of figure
\ref{fig:discrepancy}, the point $t_3=0.03$ is about to move into
the region where there is good agreement between the ratios
conjecture prediction and the data. Making a speculative
conclusion from the limited data available, this suggests that the
curves for $t_1$ and $t_2$, or any other fixed $t$, would also
decay to this level if we could gather enough data to shrink the
area of discrepancy at the origin of figure~\ref{fig:nrzero} to a
narrow enough band.
\begin{figure}
\caption{The discrepancy $Q_{\Delta}(t,X)$ defined in (\ref{Q_difference}).}
\label{fig:discrepancy}
\end{figure}
It is impossible to say from the available data what the exponent
$b$ in the error term of the ratios conjecture is. There is
certainly no evidence to suggest $b=0.5$, but the possibility
that the curves in figure \ref{fig:discrepancy} would decay to
$-0.5$ if we could vastly extend the range of the plot is not ruled
out. However, figure \ref{fig:discrepancy} certainly appears to
suggest that $b<0$ and so the error term is a power of $X$ smaller
than the main term.
\section{ Summary}
We find that the ratios conjecture provides a formula for the one
level density of zeros of a family of quadratic twists of an
elliptic curve $L$-function that agrees with data for finite
conductor, except in the vicinity of the critical point, $t=0$,
and explains the arithmetic nature of the lower order terms which
entirely dominate the behaviour of the statistic away from $t=0$.
The ratios conjecture prediction, when properly scaled, approaches
the limiting $SO(2N)$ random matrix result as the family of
elliptic curves includes those with larger and larger conductor.
This supports the existing evidence that $SO(2N)$ is the
correct limit for zero statistics in this family. It is very
interesting that the ratios conjecture prediction does not capture
the phenomenon of zero repulsion from the critical point, $t=0$,
but the data we have available certainly allows for the ratios
conjecture to be correct with some power $b<1$ of $X$ in the error
term; the discrepancy between the ratios conjecture prediction and
the data (at the origin of figure \ref{fig:nrzero}) can quite
possibly be contained in the error term.
In ongoing work of the authors in collaboration with E. Due{\~n}ez
and S. J. Miller we propose an explanation for the observed
repulsion of zeros near the central point for finite conductor and
a random matrix model that captures the phenomenon.
\pagebreak
\newcommand{\etalchar}[1]{$^{#1}$}
\end{document} |
\begin{document}
\begin{titlepage}
\begin{center}
\bfseries
OPTIMAL MEASUREMENTS OF SPIN DIRECTION
\end{center}
\begin{center}
D M APPLEBY
\end{center}
\begin{center}
Department of Physics, Queen Mary and
Westfield College, Mile End Rd, London E1 4NS, UK
\end{center}
\begin{center}
(E-mail: [email protected])
\end{center}
\begin{center}
\textbf{Abstract}\\
\parbox{10.5 cm }{ The accuracy
of a measurement of the spin direction of a spin-$s$ particle
is characterised, for arbitrary half-integral
$s$. The disturbance caused by the
measurement is also characterised.
The approach is based on that taken in several
previous papers concerning joint measurements of position
and momentum. As in those papers, a distinction is made between
the errors of retrodiction and prediction. Retrodictive and
predictive error relationships are derived. The
POVM describing the outcome of a maximally accurate measurement
process is investigated. It is shown that, if the
measurement is retrodictively optimal, then the distribution of
measured values is given by the initial state
$\mathrm{SU}(2)$ $Q$-function. If
the measurement is predictively optimal, then the distribution
of measured values is related to the final state
$\mathrm{SU}(2)$ $P$-function. The general form of the unitary
evolution operator producing an optimal measurement is
characterised.
}
\end{center}
\begin{center}
Report no. QMW-PH-99-18
\end{center}
\end{titlepage}
\section{Introduction}
\label{sec: intro}
In a recent series of
papers~\cite{self1,self2a,self2b,self3,self2c} we analysed the
concept of experimental accuracy, as it applies to simultaneous
measurements of position and
momentum~\cite{Arthurs,Peres1,Busch,Schroeck,Leonhardt}. The
purpose of this paper is to give a similar analysis for
measurements of spin direction.
There have been a number of previous discussions of joint,
imperfectly accurate measurements of two
(non-commuting) components of spin~\cite{TwoComp}.
Measurements of spin
direction---the kind of measurement considered in this
paper---have been discussed by Busch and
Schroeck~\cite{BuschSpin}, Grabowski~\cite{Grabowski},
Peres~\cite{Peres1}, and Busch
\emph{et al}~\cite{Busch}. In the
following we extend the work of these authors by giving an
analysis of the measurement errors, and of the conditions for a
measurement process to be optimal. In particular, we will show
that a measurement is retrodictively optimal if and only if
the distribution of measured values is given by the generalized
$Q$-function which is defined in terms of $\mathrm{SU}(2)$
coherent states~\cite{SpinCoh,Lieb,Perel,SpinCohB}
(corresponding to an analogous property of joint measurements
of position and momentum derived by Ali and
Prugove\v{c}ki~\cite{Ali}, and proved under less
restrictive conditions in Appleby~\cite{self3}).
This result provides us with some further insight into the
physical significance of the $\mathrm{SU}(2)$ $Q$-function.
It also has a bearing on the problem of state reconstruction.
Amiet and Weigert~\cite{Amiet1,Amiet2} have recently shown how,
by making measurements of a single spin component for
sufficiently many differently oriented Stern-Gerlach
apparatuses, one can calculate the corresponding values of the
$\mathrm{SU}(2)$
$Q$-function, and thereby
reconstruct the density matrix. The fact that a retrodictively
optimal measurement of spin direction has the $Q$-function as
its distribution of measured values suggests an alternative
approach to the problem of state reconstruction: for it means
that one can reconstruct the density matrix from the
statistics of a single run of measurements,
performed on a single apparatus. The fact that
measurements whose outcome is described by the
$Q$-function have this property of informational completeness
has been stressed by Busch and Schroeck~\cite{BuschSpin} (also
see Busch~\cite{Complete}, Busch
\emph{et al}~\cite{Busch} and Schroeck~\cite{Schroeck}).
Retrodictively optimal joint measurements of
position and momentum~\cite{self3,Ali} give rise
to the ordinary Husimi or
$Q$-function~\cite{Husimi,Hillery,Lee}, and so they also have
the property of informational
completeness~\cite{Busch,Schroeck,Complete,Naka}, at least in
principle. However, the practical usefulness of this fact is
somewhat restricted, due to the amplification of statistical
errors which occurs when one attempts to perform the
reconstruction starting from real experimental
data~\cite{Leonhardt,StatAmp}. No such difficulty arises in
the case of measurements of spin direction, due to the fact
that the state space is finite dimensional.
We now outline the approach taken in the remainder of this
paper. We consider a system consisting of a single spin, with
angular momentum operator $\hat{\mathbf{S}}$ satisfying the usual commutation
relations
$\bigl[\hat{S}_{a},\hat{S}_{b}\bigr]=i
\sum_{c=1}^{3}\epsilon_{abc}\hat{S}_{c}$ (with units chosen such
that
$\hbar=1$). We take it that
$\hat{\mathbf{S}}^2=s(s+1)$ for some arbitrary, but fixed half-integer
$s$.
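(For example, in the simplest case $s=\tfrac{1}{2}$ one may take $\hat{\mathbf{S}}=\tfrac{1}{2}\boldsymbol{\sigma}$, where $\boldsymbol{\sigma}$ is the vector of Pauli matrices; the commutation relations then follow from those of the Pauli matrices, and $\hat{\mathbf{S}}^2=\tfrac{3}{4}=s(s+1)$, as required.)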
The components of
$\hat{\mathbf{S}}$ are non-commuting, so they cannot all be simultaneously
measured with perfect precision. However, they can all be
measured with a less than perfect degree of accuracy. In order
to do so one can use the same kind of procedure which is
employed in the Arthurs-Kelly
process~\cite{self2a,Arthurs,Peres1,Busch,Schroeck,Leonhardt}:
that is, one can couple the non-commuting observables of
interest---the components of
$\hat{\mathbf{S}}$---to another set of ``pointer'' or ``meter'' observables
which do commute, and whose values may therefore be
simultaneously determined with arbitrary precision.
The question we then have to decide is how to choose the
pointer observables. The observables to be
measured satisfy the constraint $\hat{\mathbf{S}}^2=s(s+1)$, where $s$ is
fixed. Consequently, one might take the view that the
magnitude of the spin vector is already known, and that all
that needs to be measured is its direction.
This suggests that the pointer observables should be taken to
be the (commuting) components of a unit vector $\hat{\mathbf{n}}$, satisfying
the constraint $\hat{\mathbf{n}}^2=1$. The direction of $\hat{\mathbf{n}}$
measures the direction of $\hat{\mathbf{S}}$. We will refer to this as a
type 1 measurement. Such measurements are discussed in
Sections~\ref{sec: POVM}--\ref{sec: CompOpt}.
There is another possibility: for one could take the pointer
observables to be the three \emph{independent}, commuting
components of a vector
$\hat{\boldsymbol{\mu}}$, no constraint being placed on the squared modulus
$\hat{\boldsymbol{\mu}}^2$. The value of $\hat{S}_1$ (respectively $\hat{S}_2$,
$\hat{S}_3$) is measured by $\hat{\mu}_1$ (respectively $\hat{\mu}_2$,
$\hat{\mu}_3$). We will refer to this as a type 2 measurement.
Such measurements are discussed in Section~\ref{sec: type2}.
We begin our analysis in Section~\ref{sec: POVM}, by
characterising the POVM (positive operator valued measure)
describing the outcome of an arbitrary
type 1 measurement process.
In
Section~\ref{sec: AccDis} we characterise the
accuracy of and disturbance caused by a type 1 measurement
process. Our definitions are based on those given in
Appleby~\cite{self1,self2b}, for simultaneous measurements of
position and momentum. In particular, we are led to make a
distinction between two different kinds of accuracy, which we
refer to as retrodictive and predictive.
After giving, in Section~\ref{sec: CohSte}, a brief summary of
the relevant features of the theory of $\mathrm{SU}(2)$ coherent
states we go on, in Section~\ref{sec: RetOpt}, to describe
retrodictively optimal type 1 measurements. We establish a
bound on the retrodictive accuracy. We define a
retrodictively optimal measurement to be a measurement which
(1) achieves the maximum possible degree of retrodictive
accuracy, and which (2) is isotropic (in a sense to be
explained). We then show that the necessary and sufficient
condition for the measurement to be retrodictively optimal is
that the distribution of measured values be given by the
initial state $\mathrm{SU}(2)$ $Q$-function.
In Section~\ref{sec: PreOpt} we establish a bound on the
predictive accuracy of a type 1 measurement. We derive a
necessary and sufficient condition for this bound to be
achieved, in which case we say that the measurement is
predictively optimal. We show that the distribution of
measured values is then related to the final state
$\mathrm{SU}(2)$ $P$-function.
In Section~\ref{sec: CompOpt} we consider completely
optimal type 1 measurement processes---\emph{i.e.} processes
that are both retrodictively and predictively optimal. We
give the general form of the unitary evolution operator
describing such a process.
Finally, in Section~\ref{sec: type2}, we consider type 2
measurements. We define the retrodictive and predictive
errors of such measurements, and establish bounds which
the errors must satisfy. We then show that, in the limit
as a type 2 measurement tends to optimality (retrodictive or
predictive), it more and more nearly approaches an optimal type
1 measurement (with the replacement
$s^{-1}\hat{\boldsymbol{\mu}} \rightarrow
\hat{\mathbf{n}}$). It follows that, in so far as the aim is to maximise
the measurement accuracy, type 2 measurements have no
advantages.
\section{Type 1 Measurements: POVM}
\label{sec: POVM}
The purpose of this section is to characterise the POVM
(positive operator valued
measure)~\cite{Peres1,Busch,Schroeck,Kraus,Peres2}
describing the outcome of an arbitrary type 1 measurement.
We take a type 1 measurement to consist of a process in which
the system, with $2s+1$ dimensional state space
$\mathscr{H}_{\rm sy}$, is coupled to a measuring apparatus,
with state space $\mathscr{H}_{\rm ap}$. The interaction
commences at a time $t=t_{\rm i}$ when system$+$apparatus are
in the product state $\ket{\psi\otimes\chi_{\rm ap}}$, where
$\ket{\psi}\in
\mathscr{H}_{\rm sy}$ is the initial state of
the system and $\ket{\chi_{\rm ap}}\in
\mathscr{H}_{\rm ap}$ is the initial state of the apparatus. It
ends after a finite time interval at
$t=t_{\rm f}$ when system$+$apparatus are in the state
$\hat{U}\ket{\psi\otimes\chi_{\rm ap}}$, where $\hat{U}$ is the unitary
evolution operator describing the measurement interaction.
It should be stressed that this description is quite general.
In particular, we are not making an impulsive approximation.
Nor are we assuming that the interaction Hamiltonian is large in
comparison with the Hamiltonians describing the system and
apparatus separately. The only
substantive assumption is the statement that
system$+$apparatus are initially in a product state (so that
they are initially uncorrelated).
It should be noted that $\ket{\psi}$ is arbitrary, since the
system might initially be in any state $\in \mathscr{H}_{\rm
sy}$.
On the other hand $\ket{\chi_{\rm ap}}$ is fixed, since
we assume that initially the apparatus is always in the same
``zeroed'' or ``ready'' state.
As explained in Section~\ref{sec: intro}, we
take it that the result of the measurement is specified by the
recorded values of three commuting pointer observables
$\hat{\mathbf{n}}=(\hat{n}_1,\hat{n}_2,\hat{n}_3)$, satisfying the constraint
$\sum_{r=1}^{3}\hat{n}_r^2=1$ (so that there are only two pointer
degrees of freedom). However, a measuring instrument does not
usually consist of some pointers, and nothing else. We
therefore allow for the existence of $N$ additional apparatus
observables $\hat{\xi}=(\hat{\xi}_1,\dots,\hat{\xi}_N)$ which, together with the
components of $\hat{\mathbf{n}}$, constitute a complete commuting set. The
eigenkets $\ket{\mathbf{n},\xi}$ thus provide an orthonormal basis
for
$\mathscr{H}_{\rm ap}$.
The operator $\hat{U}$ specifies the final state of
system$+$apparatus given \emph{any} initial state
$\in \mathscr{H}_{\rm sy}\otimes\mathscr{H}_{\rm ap}$.
However, we are only interested in initial states of the very
special form
$\ket{\psi\otimes \chi_{\rm ap}}$, where $\ket{\chi_{\rm ap}}$ is fixed. In other
words, the operator $\hat{U}$ provides us with much more
information than we actually need. It turns out that all the
quantities which are relevant to the argument of this paper can
be expressed in terms of the
operator $\hat{T}(\mathbf{n},\xi)$, defined
by~\cite{Busch,Schroeck,Kraus,Peres2}
\begin{equation}
\hat{T}(\mathbf{n},\xi) = \sum_{m_1,m_2=-s}^{s}
\bigl(\bra{m_1}\otimes\bra{\mathbf{n},\xi}\bigr)
\hat{U}
\bigl(\ket{m_2}\otimes\ket{\chi_{\rm ap}}\bigr)
\ket{m_1}\bra{m_2}
\label{eq: TDef}
\end{equation}
where $\ket{m}$ denotes the eigenket of $\hat{S}_3$ with
eigenvalue $m$ (in units such that $\hbar=1$). The operator
$\hat{T}(\mathbf{n},\xi)$ is more convenient to work with because, unlike
$\hat{U}$, it only acts on the system state space
$\mathscr{H}_{\rm sy}$.
The significance of the operator $\hat{T}(\mathbf{n},\xi)$ is that it describes
the change in the state of the system which is caused by the
measurement process~\cite{Busch,Schroeck,Kraus,Peres2}
(\emph{i.e.}\ it describes the
operation~\cite{Busch,Schroeck,Kraus} induced by the
measurement). In fact, suppose that the measurement is
non-selective (meaning that the final value of
$\mathbf{n}$ is not recorded, so that there is no ``collapse''), and
let
$\hat{\rho}_{\rm f}$ be the reduced density matrix describing
the final state of the system. It is then readily verified that
\begin{equation}
\hat{\rho}_{\rm f}
= \int d\mathbf{n} \, d \xi \;
\hat{T}(\mathbf{n},\xi)\,
\ket{\psi}\bra{\psi} \,
\hat{T}^{\dagger}(\mathbf{n},\xi)
\label{eq: rhof}
\end{equation}
where $d \mathbf{n}$ denotes the usual measure on the
unit $2$-sphere: in terms of spherical polars
$d \mathbf{n} = \sin \theta d\theta d \phi$.
Let $\rho_{\rm val}(\mathbf{n})$ be the probability density function
describing the distribution of measured values:
\begin{equation}
\rho_{\rm val}(\mathbf{n})
= \sum_{m =-s}^{s} \int d\xi \,
\Bigl| \bigl(\bra{m}\otimes \bra{\mathbf{n},\xi}\bigr)
\hat{U}
\bigl(\ket{\psi}\otimes \ket{\chi_{\rm ap}}\bigr)
\Bigr|^2
\label{eq: pdfDef}
\end{equation}
$\rho_{\rm val}(\mathbf{n})$ can also be expressed in terms of the operators
$\hat{T}(\mathbf{n},\xi)$. In fact,
define~\cite{Busch,Schroeck,Kraus,Peres2}
\begin{equation}
\hat{E}(\mathbf{n})
= \int d \xi \, \hat{T}^{\dagger}(\mathbf{n},\xi) \hat{T}(\mathbf{n},\xi)
\label{eq: SDef}
\end{equation}
Then
\begin{equation}
\rho_{\rm val}(\mathbf{n}) = \bmat{\psi}{\hat{E}(\mathbf{n})}{\psi}
\label{eq: val}
\end{equation}
We see from this that $\hat{E}(\mathbf{n}) d\mathbf{n}$ is the POVM describing the
measurement outcome. In particular, positivity is immediate from
Eq.~(\ref{eq: SDef}),
\begin{equation*}
\hat{E}(\mathbf{n}) \ge 0
\end{equation*}
for all $\mathbf{n}$, while the unitarity of $\hat{U}$, together with the
completeness of the kets $\ket{m}\otimes\ket{\mathbf{n},\xi}$, gives the
normalisation
\begin{equation}
\int d \mathbf{n} \, \hat{E}(\mathbf{n}) = 1
\label{eq: SNorm}
\end{equation}
Until now we have been assuming that the system is initially
in a pure state. If the system is initially in the mixed state
with density matrix $\hat{\rho}_{\rm i}$ we have, in place of
Eqs.~(\ref{eq: rhof}) and~(\ref{eq: val}),
\begin{equation}
\hat{\rho}_{\rm f}
= \int d \mathbf{n} \, d\xi \,
\hat{T}(\mathbf{n},\xi) \, \hat{\rho}_{\rm i} \, \hat{T}^{\dagger}(\mathbf{n},\xi)
\label{eq: rhofMix}
\end{equation}
and
\begin{equation}
\rho_{\rm val}(\mathbf{n})
= \Tr \left(\hat{E}(\mathbf{n}) \, \hat{\rho}_{\rm i}\right)
\label{eq: rhoValTermsRhoi}
\end{equation}
Eq.~(\ref{eq: rhofMix}) gives the final state reduced
density for the system in the case when the measurement is
non-selective, so that the pointer position is not recorded.
Suppose, on the other hand, that the final pointer position is
recorded to be in the subset $\mathscr{R}$ of the unit
$2$-sphere. Then $\hat{\rho}_{\rm f}$ is given by
\begin{equation}
\hat{\rho}_{\rm f}
= \frac{1}{p_{\mathscr{R}}}
\int_{\mathscr{R}} d\mathbf{n} \int d\xi \,
\hat{T}(\mathbf{n},\xi) \, \hat{\rho}_{\rm i} \, \hat{T}^{\dagger}(\mathbf{n},\xi)
\label{eq: rhofTermsT}
\end{equation}
where $p_{\mathscr{R}}$ is the probability of finding
$\mathbf{n} \in \mathscr{R}$:
\begin{equation*}
p_{\mathscr{R}} = \int_{\mathscr{R}} d\mathbf{n} \, \rho_{\rm val}(\mathbf{n})
\end{equation*}
\section{Type 1 Measurements: Accuracy and Disturbance}
\label{sec: AccDis}
The purpose of this paper is to establish the form of the
operators $\hat{T}(\mathbf{n},\xi)$ and $\hat{E}(\mathbf{n})$ when the measurement is optimal.
In order to give a precise definition of what ``optimal'' means
in this context, we first need to define a concept
of measurement accuracy, which is the problem addressed in
this section. We also discuss how to quantify the degree to
which the system is disturbed by the measurement process.
The approach we take is based on the approach taken in
Appleby~\cite{self1,self2b}, to the problem of defining the
accuracy of and disturbance caused by a simultaneous
measurement of position and momentum.
We thus work in terms of the Heisenberg picture.
Let
$\hat{\mathbf{S}}_{\rm i} = \hat{\mathbf{S}}$ and $\hat{\mathbf{n}}_{\rm i} = \hat{\mathbf{n}}$ be the initial
values of the Heisenberg spin and pointer observables at the
time $t_{\rm i}$, when the measurement interaction begins; and
let $\hat{\mathbf{S}}_{\rm f} = \hat{U}^{\dagger}\hat{\mathbf{S}}\hat{U}$ and $\hat{\mathbf{n}}_{\rm
f} = \hat{U}^{\dagger}\hat{\mathbf{n}}\hat{U}$ be the final values of these
observables at the time $t_{\rm f}$, when the measurement
interaction ends. Let
$\mathscr{S}_{\rm sy}\subset\mathscr{H}_{\rm sy}$ be the unit
sphere in the system state space. We then define the
retrodictive fidelity $\eta_{\rm i}$ by
\begin{equation}
\eta_{\rm i} = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\Bigl(\bmat{\psi\otimes\chi_{\rm ap}}{\tfrac{1}{2}\bigl(\hat{\mathbf{n}}_{\rm f}
\cdot \hat{\mathbf{S}}_{\rm i}+
\hat{\mathbf{S}}_{\rm i}
\cdot \hat{\mathbf{n}}_{\rm f}\bigr)}{\psi\otimes \chi_{\rm ap}}\Bigr)
\label{eq: rfDef}
\end{equation}
and the
predictive fidelity $\eta_{\rm f}$ by
\begin{align}
\eta_{\rm f}
& = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\Bigl(\bmat{\psi\otimes\chi_{\rm ap}}{\tfrac{1}{2}\bigl(\hat{\mathbf{n}}_{\rm f} \cdot
\hat{\mathbf{S}}_{\rm f}+
\hat{\mathbf{S}}_{\rm f}
\cdot \hat{\mathbf{n}}_{\rm f}\bigr)}{\psi\otimes\chi_{\rm ap}}\Bigr)
\notag
\\ & =
\inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\Bigl(\bmat{\psi\otimes\chi_{\rm ap}}{\hat{\mathbf{n}}_{\rm f} \cdot \hat{\mathbf{S}}_{\rm
f}}{\psi\otimes\chi_{\rm ap}}\Bigr)
\label{eq: pfDef}
\end{align}
(where we have used the fact that the components of
$\hat{\mathbf{n}}_{\rm f}$ and $\hat{\mathbf{S}}_{\rm f}$ commute).
It should be noted that the concept of fidelity employed here
is somewhat different from the concept of fidelity which is
employed in discussions of cloning and state estimation ($\eta_{\rm i}$
and
$\eta_{\rm f}$ are defined in terms of scalar products of
observables, rather than scalar products of states).
We also define the quantity $\eta_{\rm d}$ by
\begin{equation}
\eta_{\rm d} = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\Bigl(\bmat{\psi\otimes\chi_{\rm ap}}{\tfrac{1}{2}\bigl(\hat{\mathbf{S}}_{\rm f} \cdot
\hat{\mathbf{S}}_{\rm i}+
\hat{\mathbf{S}}_{\rm i}
\cdot \hat{\mathbf{S}}_{\rm f}\bigr)}{\psi\otimes\chi_{\rm ap}}\Bigr)
\label{eq: syfDef}
\end{equation}
The intuitive basis for these definitions is most easily
appreciated if one thinks, temporarily, in classical terms. If
interpreted classically
$\eta_{\rm i}$ would represent the minimum expected degree of alignment
between the final pointer direction and the initial direction
of the spin vector. In other words, it would quantify the
retrodictive accuracy of the measurement. On the other hand,
$\eta_{\rm f}$ would represent the minimum expected degree of alignment
between the final pointer direction and the final direction of
the spin vector: it would therefore provide a quantitative
indication of the predictive accuracy. Lastly, $\eta_{\rm d}$
would quantify the extent to which the measurement disturbs the
system, by changing the direction of the spin vector.
Of course, $\hat{\mathbf{n}}_{\rm f}$, $\hat{\mathbf{S}}_{\rm i}$, $\hat{\mathbf{S}}_{\rm f}$ are in
fact quantum mechanical observables, and so the physical
interpretation of $\eta_{\rm i}$, $\eta_{\rm f}$ and $\eta_{\rm d}$ needs to be justified
much more carefully. Rather than proceeding directly, it will
be convenient first to relate these quantities to an
alternative characterisation of the measurement accuracy and
disturbance. This will allow us to appeal to the arguments
given in Appleby~\cite{self1,self2b}, to justify our earlier
characterisation of the accuracy of and disturbance caused by a
simultaneous measurement of position and momentum. It will
also be helpful in Section~\ref{sec: type2},
when we compare type 1 and type 2 measurements.
In a type 1 measurement, the result of the measurement is a
direction, represented by the unit vector $\mathbf{n}$. However, one
could extract from this information estimates of the
initial and final values of the spin vector itself by
multiplying
$\mathbf{n}$ by suitable constants: say
$\zeta_{\rm i} \mathbf{n}$ as an estimate for
$\hat{\mathbf{S}}_{\rm i}$, and
$\zeta_{\rm f} \mathbf{n}$ as an estimate for $\hat{\mathbf{S}}_{\rm f}$. The
question then arises: what are the best choices for these
constants?
To answer this question, consider the quantities
\begin{align*}
\sup_{\ket{\psi }\in \mathscr{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl|\zeta_{\rm i} \hat{\mathbf{n}}_{\rm f}-\hat{\mathbf{S}}_{\rm i}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
& = \zeta_{\rm i}^{2} - 2 \zeta_{\rm i} \eta_{\rm i} + s(s+1)
\\
\intertext{and}
\sup_{\ket{\psi }\in \mathscr{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl|\zeta_{\rm f} \hat{\mathbf{n}}_{\rm f}-\hat{\mathbf{S}}_{\rm f}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
& = \zeta_{\rm f}^{2} - 2 \zeta_{\rm f} \eta_{\rm f} + s(s+1)
\end{align*}
These expressions are minimised if we choose
$\zeta_{\rm i} = \eta_{\rm i}$,
$\zeta_{\rm f} = \eta_{\rm f}$. We accordingly define the maximal
rms error of retrodiction
\begin{equation}
\Delta_{\rm ei} S
= \left(\sup_{\ket{\psi }\in \mathscr{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl|\eta_{\rm i} \hat{\mathbf{n}}_{\rm f}-\hat{\mathbf{S}}_{\rm i}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
\right)^{\frac{1}{2}}
= \left( s + s^2-\eta_{\rm i}^2
\right)^{\frac{1}{2}}
\label{eq: RetErrA}
\end{equation}
and the maximal rms error of prediction
\begin{equation}
\Delta_{\rm ef} S
= \left(\sup_{\ket{\psi }\in \mathscr{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl|\eta_{\rm f} \hat{\mathbf{n}}_{\rm f}-\hat{\mathbf{S}}_{\rm f}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
\right)^{\frac{1}{2}}
= \left( s + s^2-\eta_{\rm f}^2
\right)^{\frac{1}{2}}
\label{eq: PreErrA}
\end{equation}
We also define the maximal rms disturbance by
\begin{equation}
\Delta_{\rm d} S
= \left(\sup_{\ket{\psi }\in \mathscr{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl| \hat{\mathbf{S}}_{\rm f}-\hat{\mathbf{S}}_{\rm i}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
\right)^{\frac{1}{2}}
= \sqrt{2}\left( s + s^2-\eta_{\rm d}
\right)^{\frac{1}{2}}
\label{eq: DistA}
\end{equation}
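For example, if the measurement interaction leaves the spin unchanged,
so that $\hat{\mathbf{S}}_{\rm f}=\hat{\mathbf{S}}_{\rm i}$, then
Eq.~(\ref{eq: syfDef}) gives
$\eta_{\rm d}=\inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\bmat{\psi\otimes\chi_{\rm ap}}{\hat{\mathbf{S}}_{\rm i}^2}{\psi\otimes\chi_{\rm ap}}
=s(s+1)$, and Eq.~(\ref{eq: DistA}) correspondingly gives
$\Delta_{\rm d} S=0$, as one would expect.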
Comparing these expressions with those given in
refs.~\cite{self1,self2b} it can be seen that
$\Delta_{\rm ei} S$ plays the same role in relation to the kind of
measurement here considered as do the quantities
$ \Delta_{\mathrm{ei}} x$, $ \Delta_{\mathrm{ei}} p$ in relation to joint
measurements of position and momentum; that
$\Delta_{\rm ef} S$ is the analogue of $ \Delta_{\mathrm{ef}} x$, $ \Delta_{\mathrm{ef}} p$; and that
$\Delta_{\rm d} S$ is the analogue of $ \Delta_{\mathrm{d}} x$, $ \Delta_{\mathrm{d}} p$.
A suitably modified version of
the argument given in Section 5 of ref.~\cite{self1} may then be
used to show that $\Delta_{\rm ei} S$ (and therefore $\eta_{\rm i}$) describes
the retrodictive accuracy of the measurement; that $\Delta_{\rm ef} S$
(and therefore
$\eta_{\rm f}$) describes the predictive accuracy; and that $\Delta_{\rm d} S$
(and therefore $\eta_{\rm d}$) describes the degree of disturbance
caused by the measurement.
Finally, we note that the quantities $\eta_{\rm i}$, $\eta_{\rm f}$ and $\eta_{\rm d}$ can
be expressed in terms of the operators $\hat{T}(\mathbf{n},\xi)$ and
$\hat{E}(\mathbf{n})$ defined earlier. In fact,
comparing Eqs.~(\ref{eq: TDef})
and~(\ref{eq: SDef}) with
Eqs.~(\ref{eq: rfDef})--(\ref{eq: syfDef}) one finds
{\allowdisplaybreaks
\begin{align}
\eta_{\rm i}
& = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\left( \int d \mathbf{n} \,
\bmat{\psi}{\tfrac{1}{2}
\bigl( \hat{E}(\mathbf{n}) \, \mathbf{n} \cdot \hat{\mathbf{S}} + \mathbf{n} \cdot \hat{\mathbf{S}} \, \hat{E}(\mathbf{n})
\bigr)}{\psi}
\right)
\label{eq: etaiTermsS}
\\
\eta_{\rm f}
& = \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\left( \int d \mathbf{n} \, d\xi \,
\bmat{\psi}{ \hat{T}^{\dagger}(\mathbf{n},\xi)\, \mathbf{n} \cdot \hat{\mathbf{S}} \, \hat{T}(\mathbf{n},\xi)}{\psi}
\right)
\label{eq: etafTermsT}
\end{align}
and
\begin{multline}
\eta_{\rm d}
=\inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\biggl( \int d \mathbf{n} \, d \xi \,
\sum_{a=1}^{3}
\bbra{\psi} \tfrac{1}{2}
\bigl( \hat{T}^{\dagger}(\mathbf{n},\xi) \, \hat{S}_{a}\, \hat{T}(\mathbf{n},\xi) \, \hat{S}_{a}
\bigr.
\biggr.
\\
\biggl.
\bigl.
+
\hat{S}_{a}\, \hat{T}^{\dagger}(\mathbf{n},\xi) \, \hat{S}_{a} \, \hat{T}(\mathbf{n},\xi)
\bigr)\bket{\psi}
\biggr)
\label{eq: etadTermsT}
\end{multline}
}
\section{$\mathrm{SU}(2)$ Coherent States}
\label{sec: CohSte}
The task we now face is to establish upper bounds on the
fidelities $\eta_{\rm i}$,
$\eta_{\rm f}$ (or, equivalently, lower bounds on the errors
$\Delta_{\rm ei} S$, $\Delta_{\rm ef} S$), and then to
establish the form of the operators
$\hat{T}(\mathbf{n},\xi)$,
$\hat{E}(\mathbf{n})$ for which these bounds are achieved. The theory of
$\mathrm{SU}(2)$ coherent states will play an important role
in the argument. In order to fix notation we begin by
summarising the relevant parts of this theory. For proofs of
the statements made in this section see
refs.~\cite{SpinCoh,Lieb,Perel,SpinCohB}.
For each unit vector $\mathbf{n} \in \mathbb{R}^3$ choose a vector
$\boldsymbol{\theta}_{\mathbf{n}}\in \mathbb{R}^3$ with the property
\begin{equation*}
\exp\bigl[-i \boldsymbol{\theta}_{\mathbf{n}} \cdot \hat{\mathbf{S}} \bigr] \,
\hat{S}_{3} \,
\exp\bigl[i \boldsymbol{\theta}_{\mathbf{n}} \cdot \hat{\mathbf{S}} \bigr]
= \mathbf{n} \cdot \hat{\mathbf{S}}
\end{equation*}
Define
\begin{equation}
\ket{\mathbf{n},m}
= \exp\bigl[ - i \boldsymbol{\theta}_{\mathbf{n}} \cdot
\hat{\mathbf{S}} \bigr]\ket{m}
\label{eq: mnKetDef}
\end{equation}
where $\ket{m}$ is the normalized eigenvector of
$\hat{S}_{3}$ with eigenvalue $m$.
We then have
\begin{equation}
\mathbf{n} \cdot \hat{\mathbf{S}} \ket{\mathbf{n},m} = m \ket{\mathbf{n},m}
\label{eq: CohSteEval}
\end{equation}
and
\begin{equation*}
\frac{2s+1}{4 \pi}
\int d\mathbf{n} \,
\ket{\mathbf{n},m}\bra{\mathbf{n},m}
= 1
\end{equation*}
for all $m$.
We are especially interested in the states $\ket{\mathbf{n},s}$.
These are the minimum uncertainty states, for which
$\sum_{a=1}^{3} \bigl(\Delta \hat{S}_{a}\bigr)^2=s$. To denote
them we employ the abbreviated notation
\begin{equation}
\ket{\mathbf{n}}=\ket{\mathbf{n},s}
\label{eq: nKetDef}
\end{equation}
The states $\ket{\mathbf{n}}$ so defined belong to
$\mathscr{H}_{\rm sy}$ and are eigenvectors of $\mathbf{n} \cdot
\hat{\mathbf{S}}$. They need to be carefully distinguished from the states
$\ket{\mathbf{n}, \xi}$, which belong to $\mathscr{H}_{\rm ap}$ and
are eigenvectors of $\hat{\mathbf{n}}$.
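For example, in the simplest case $s=\tfrac{1}{2}$ one has
$\hat{\mathbf{S}}=\tfrac{1}{2}\boldsymbol{\sigma}$, with $\boldsymbol{\sigma}$
the vector of Pauli matrices, and the projectors onto the states
$\ket{\mathbf{n}}$ take the familiar form
\begin{equation*}
\ket{\mathbf{n}}\bra{\mathbf{n}} = \tfrac{1}{2}\bigl(1+\mathbf{n}\cdot\boldsymbol{\sigma}\bigr)
\end{equation*}
so that Eq.~(\ref{eq: CohSteEval}) reduces to
$\mathbf{n}\cdot\hat{\mathbf{S}}\,\ket{\mathbf{n}}=\tfrac{1}{2}\ket{\mathbf{n}}$.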
Let $\hat{A}$ be any operator acting on $\mathscr{H}_{\rm sy}$.
The covariant symbol corresponding to $\hat{A}$ is defined by
\begin{equation*}
A_{\mathrm{cv}}(\mathbf{n}) = \bmat{\mathbf{n}}{\hat{A}}{\mathbf{n}}
\end{equation*}
The contravariant symbol corresponding to $\hat{A}$ is
defined to be the unique function $A_{\mathrm{cn}}$ for which
\begin{equation*}
\hat{A} = \frac{2s+1}{4 \pi}
\int d \mathbf{n} \,
A_{\mathrm{cn}}(\mathbf{n}) \ket{\mathbf{n}}\bra{\mathbf{n}}
\end{equation*}
and which satisfies
\begin{equation*}
\int d\mathbf{n}' \, \Pi_{2 s}(\mathbf{n},\mathbf{n}') A_{\mathrm{cn}}(\mathbf{n}') = A_{\mathrm{cn}}(\mathbf{n})
\end{equation*}
where $\Pi_{2 s}(\mathbf{n},\mathbf{n}')$ is the projection kernel
\begin{equation}
\Pi_{2 s}(\mathbf{n},\mathbf{n}')
= \sum_{j=0}^{2 s}\sum_{m=-j}^{j}
Y^{\vphantom{*}}_{jm}(\mathbf{n}) Y^{*}_{jm}(\mathbf{n}')
= \sum_{j=0}^{2 s} \frac{2 j+1}{4 \pi} P_{j}(\mathbf{n} \cdot \mathbf{n}')
\label{eq: Pi0Proj}
\end{equation}
In these expressions the $Y_{jm}$ are spherical harmonics and
the
$P_{j}$ are Legendre polynomials.
It can be shown that, given any square integrable function $f$,
\begin{equation}
\hat{A}
= \frac{2s+1}{4 \pi}
\int d\mathbf{n} \,f(\mathbf{n}) \ket{\mathbf{n}}\bra{\mathbf{n}}
\label{eq: fCondA}
\end{equation}
if and only if
\begin{equation}
\int d\mathbf{n}' \, \Pi_{2 s}(\mathbf{n},\mathbf{n}') f(\mathbf{n}') = A_{\mathrm{cn}} (\mathbf{n})
\label{eq: fCondB}
\end{equation}
for almost all $\mathbf{n}$.
The covariant (respectively contravariant) symbol of an
operator is often referred to as the $Q$ (respectively $P$)
symbol of that operator. However, we will find it more
convenient to reserve this notation for the symbols
corresponding specifically to the density matrix, scaled by a
factor
$(2s+1)/(4 \pi)$:
\begin{align}
Q(\mathbf{n}) & = \frac{2s+1}{4 \pi}\rho_{\mathrm{cv}} (\mathbf{n}) \\
P(\mathbf{n}) & = \frac{2s+1}{4 \pi}\rho_{\mathrm{cn}} (\mathbf{n})
\label{eq: PfncDef}
\end{align}
With this rescaling the $Q$ and $P$-functions satisfy the
normalisation condition
\begin{equation*}
\int d\mathbf{n} \, Q(\mathbf{n}) = \int d\mathbf{n} \, P(\mathbf{n}) =1
\end{equation*}
In particular, $Q(\mathbf{n})$ is a probability density function. As
we will see, it is in fact the probability density function
describing the outcome of a retrodictively optimal type 1
measurement.
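To illustrate these definitions, consider the simplest case
$s=\tfrac{1}{2}$, and write the density matrix as
$\hat{\rho}=\tfrac{1}{2}\bigl(1+\mathbf{a}\cdot\boldsymbol{\sigma}\bigr)$
with $\mathbf{a}=\langle\boldsymbol{\sigma}\rangle$. Using
$\ket{\mathbf{n}}\bra{\mathbf{n}}=\tfrac{1}{2}(1+\mathbf{n}\cdot\boldsymbol{\sigma})$
and $\int d\mathbf{n}\, n_i n_j=\tfrac{4\pi}{3}\delta_{ij}$ one finds
\begin{equation*}
Q(\mathbf{n}) = \frac{1}{4\pi}\bigl(1+\mathbf{a}\cdot\mathbf{n}\bigr)
\qquad\text{and}\qquad
P(\mathbf{n}) = \frac{1}{4\pi}\bigl(1+3\,\mathbf{a}\cdot\mathbf{n}\bigr)
\end{equation*}
both of which integrate to $1$, in accordance with the normalisation
condition above.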
\section{Retrodictively Optimal Type 1 Measurements}
\label{sec: RetOpt}
The purpose of this section is to investigate those processes
which maximise the retrodictive fidelity. We begin by
establishing the following bound on $\eta_{\rm i}$:
\begin{equation}
\eta_{\rm i} \le s
\label{eq: etaiCondA}
\end{equation}
which, in view of Eq.~(\ref{eq: RetErrA}), implies
\begin{equation}
\Delta_{\rm ei} S \ge \sqrt{s}
\label{eq: RetErrRelA}
\end{equation}
We will refer to Inequality~(\ref{eq: RetErrRelA}) as the
retrodictive error relation. It can be seen that it has the
same form as the ordinary uncertainty relation,
$\Delta S \ge \sqrt{s}$. It is the analogue, for the kind
of measurement here considered, of the inequality
$ \Delta_{\mathrm{ei}} x \, \Delta_{\mathrm{ei}} p \ge 1/2$ proved in ref.~\cite{self2b} for
joint measurements of position and momentum (in units such
that $\hbar=1$).
In order to prove this result we note that it follows from
Eqs.~(\ref{eq: SDef}) and~(\ref{eq: etaiTermsS}) that
\begin{equation*}
(2s+1) \eta_{\rm i}
\le \int d \mathbf{n} \, d \xi \,
\Tr \bigl( \mathbf{n} \cdot \hat{\mathbf{S}} \, \hat{T}^{\dagger}(\mathbf{n},\xi) \hat{T}(\mathbf{n},\xi) \bigr)
\end{equation*}
In view of Eqs.~(\ref{eq: SDef})
and~(\ref{eq: SNorm}) we also have
\begin{equation*}
\int d\mathbf{n} \, d\xi \,
\Tr \bigl( \hat{T}^{\dagger}(\mathbf{n},\xi) \hat{T}(\mathbf{n},\xi) \bigr)
= (2s +1)
\end{equation*}
Consequently
\begin{equation}
\int d\mathbf{n} \, d\xi \,
\Tr\bigl( (\eta_{\rm i}-\mathbf{n} \cdot \hat{\mathbf{S}})
\hat{T}^{\dagger}(\mathbf{n},\xi) \hat{T}(\mathbf{n},\xi) \bigr)
\le 0
\label{eq: RetFidCondC}
\end{equation}
For each fixed $\mathbf{n}$ the kets $\ket{\mathbf{n},m}$ defined by
Eq.~(\ref{eq: mnKetDef}) constitute an orthonormal basis.
We may therefore write
\begin{equation}
\hat{T}(\mathbf{n},\xi)
= \sum_{m,m'=-s}^{s}
T_{m m'}(\mathbf{n},\xi) \ket{\mathbf{n},m}\bra{\mathbf{n},m'}
\label{eq: TExpand}
\end{equation}
for suitable coefficients $T_{m m'}$. Substituting this
expression in Inequality~(\ref{eq: RetFidCondC}) gives
\begin{equation}
\sum_{m, m'=-s}^{s}
\left(
(\eta_{\rm i}-m')
\int d\mathbf{n} \, d\xi\,
|T_{m m'}(\mathbf{n},\xi)|^2
\right)
\le 0
\label{eq: RetFidCondD}
\end{equation}
Inequality~(\ref{eq: etaiCondA}) is now immediate.
We next show that the retrodictive fidelity achieves its
maximum value $\eta_{\rm i} = s$ if and only if $\hat{E}(\mathbf{n})$ is of the form
\begin{equation}
\hat{E}(\mathbf{n})
= \frac{2s+1}{4 \pi}g(\mathbf{n}) \ket{\mathbf{n}}\bra{\mathbf{n}}
\label{eq: SforRfMax}
\end{equation}
for almost all $\mathbf{n}$,
where $\ket{\mathbf{n}}$ is the state defined by Eq.~(\ref{eq:
nKetDef}), and where $g$ is any function satisfying
\begin{equation}
\int d\mathbf{n}' \, \Pi_{2 s}(\mathbf{n},\mathbf{n}') g(\mathbf{n}') =1
\label{eq: gCond}
\end{equation}
[$\Pi_{2 s}(\mathbf{n},\mathbf{n}')$ being the
projection kernel defined by Eq.~(\ref{eq: Pi0Proj})].
In fact, setting $\eta_{\rm i}=s$ in
Inequality~(\ref{eq: RetFidCondD}) gives
\begin{equation}
\sum_{m, m'=-s}^{s}
\left(
(s-m')
\int d\mathbf{n} \, d\xi\,
|T_{m m'}(\mathbf{n},\xi)|^2
\right)
\le 0
\end{equation}
from which it follows that the coefficients $T_{m m'}$ must be
of the form
\begin{equation*}
T_{m m'}(\mathbf{n},\xi)
= \left( \frac{2s+1}{4 \pi}\right)^{\frac{1}{2}}
\delta_{m' s} \, g_{m}(\mathbf{n}, \xi)
\end{equation*}
for almost all $\mathbf{n}$, $\xi$. Substituting this expression
into Eq.~(\ref{eq: TExpand}) gives
\begin{equation}
\hat{T}(\mathbf{n},\xi)
= \left( \frac{2s+1}{4 \pi}\right)^{\frac{1}{2}}
\ket{g(\mathbf{n},\xi)}\bra{\mathbf{n}}
\label{eq: RetOptTCondC}
\end{equation}
for almost all $\mathbf{n}$, $\xi$, where
\begin{equation*}
\ket{g(\mathbf{n},\xi)} = \sum_{m=-s}^{s} g_{m}(\mathbf{n},\xi)
\ket{\mathbf{n},m}
\end{equation*}
Setting
\begin{equation*}
g(\mathbf{n}) = \int d\xi \, \bigl\|\ket{g(\mathbf{n},\xi)} \bigr\|^2
\end{equation*}
and using Eq.~(\ref{eq: SDef}), we deduce that $\hat{E}(\mathbf{n})$ is of the form
specified by Eq.~(\ref{eq: SforRfMax}).
It follows from Eqs.~(\ref{eq: SNorm}),
(\ref{eq: fCondA})
and~(\ref{eq: fCondB}), together with the fact that the
contravariant symbol of the identity operator is the constant
function $1$, that the function $g$ must satisfy
Eq.~(\ref{eq: gCond}). This proves
that the condition represented by Eqs.~(\ref{eq: SforRfMax})
and~(\ref{eq: gCond}) is necessary.
Suppose, on the other hand, that $\hat{E}(\mathbf{n})$ is given by
Eq.~(\ref{eq: SforRfMax}), with $g$ satisfying Eq.~(\ref{eq:
gCond}). Using Eqs.~(\ref{eq: etaiTermsS}),
(\ref{eq: fCondA})
and~(\ref{eq: fCondB}) we deduce
\begin{equation*}
\eta_{\rm i} =
\inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\left(\frac{2s+1}{4 \pi}\int d\mathbf{n} \,
s g(\mathbf{n}) \bigl|\overlap{\mathbf{n}}{\psi}
\bigr|^2
\right)
= s
\end{equation*}
which shows that the condition is also sufficient.
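The simplest example of a function satisfying Eq.~(\ref{eq: gCond})
is the constant function $g\equiv 1$: since
$\int d\mathbf{n}'\, P_{j}(\mathbf{n}\cdot\mathbf{n}')=4\pi\delta_{j0}$,
Eq.~(\ref{eq: Pi0Proj}) gives
$\int d\mathbf{n}'\,\Pi_{2 s}(\mathbf{n},\mathbf{n}')=1$, and
Eq.~(\ref{eq: SforRfMax}) then reduces to
$\hat{E}(\mathbf{n})=\frac{2s+1}{4\pi}\ket{\mathbf{n}}\bra{\mathbf{n}}$.
As will be seen below, this is precisely the isotropic case.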
The condition $\eta_{\rm i} =s$ is not, by itself, enough to determine
the distribution of measured values. However, the requirement
that the retrodictive fidelity be maximised is not the only
property which it is natural to require of a measurement that
is to count as optimal. It is also natural to require that the
measurement does not pick out any distinguished spatial
directions. We accordingly define an isotropic measurement to
be one which has the property that, if the initial system state
density matrix takes the rotationally invariant form
\begin{equation*}
\hat{\rho}_{\rm i} = \frac{1}{2 s+1}
\end{equation*}
then the distribution of measured values is also rotationally
invariant:
\begin{equation*}
\rho_{\rm val}(\mathbf{n}) = \frac{1}{4 \pi}
\end{equation*}
for all $\mathbf{n}$.
We define a retrodictively optimal type 1 measurement process
to be an isotropic process for which the retrodictive fidelity
is maximal, $\eta_{\rm i}=s$. It is then straightforward to verify that
a type 1 measurement process is retrodictively optimal if and
only if $\hat{E}(\mathbf{n}) = (2s+1)/(4 \pi)\, \ket{\mathbf{n}}\bra{\mathbf{n}}$. This is
the POVM which has previously been discussed by Busch and
Schroeck~\cite{BuschSpin}, and
others~\cite{Peres1,Busch,Schroeck,Grabowski}.
We see from Eq.~(\ref{eq: rhoValTermsRhoi}) that the
measurement is retrodictively optimal if and only if the
distribution of measured values is given by
\begin{equation*}
\rho_{\rm val}(\mathbf{n}) = Q_{\rm i} (\mathbf{n})
\end{equation*}
for all $\mathbf{n}$, where $Q_{\rm i}$ is the $Q$-function
corresponding to the initial system state density matrix:
\begin{equation*}
Q_{\rm i} (\mathbf{n})
= \frac{2s+1}{4 \pi}\mat{\mathbf{n}}{\hat{\rho}_{\rm i}}{\mathbf{n}}
\end{equation*}
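For $s=\tfrac{1}{2}$, for example, the retrodictively optimal POVM
density is simply
$\hat{E}(\mathbf{n})=\frac{1}{4\pi}\bigl(1+\mathbf{n}\cdot\boldsymbol{\sigma}\bigr)$,
and the corresponding distribution of measured values is
$Q_{\rm i}(\mathbf{n})
=\frac{1}{4\pi}\bigl(1+\mathbf{n}\cdot\langle\boldsymbol{\sigma}\rangle_{\rm i}\bigr)$,
where
$\langle\boldsymbol{\sigma}\rangle_{\rm i}=\Tr\bigl(\boldsymbol{\sigma}\hat{\rho}_{\rm i}\bigr)$.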
In terms of the operator $\hat{T}(\mathbf{n},\xi)$, the necessary and
sufficient condition for a type 1 measurement to be
retrodictively optimal is that
[see Eq.~(\ref{eq: RetOptTCondC})]
\begin{equation}
\hat{T}(\mathbf{n},\xi)
= \left( \frac{2s+1}{4 \pi}\right)^{\frac{1}{2}}
\ket{g(\mathbf{n},\xi)}\bra{\mathbf{n}}
\label{eq: RetOptTCondA}
\end{equation}
where $\ket{g(\mathbf{n},\xi)}$ is any
family of kets with the property
\begin{equation}
\int d\xi \, \bigl\|\ket{g(\mathbf{n},\xi)} \bigr\|^2
= 1
\label{eq: RetOptTCondB}
\end{equation}
for all $\mathbf{n}$.
We conclude this section by showing that for retrodictively
optimal type 1 measurements
$\langle \hat{\mathbf{S}}_{\rm i}\rangle= (s+1)\langle \hat{\mathbf{n}}_{\rm f}\rangle$.
In fact
\begin{align*}
\bmat{\psi \otimes \chi_{\rm ap}}{\hat{\mathbf{n}}_{\rm f}}{\psi\otimes \chi_{\rm ap}}
& = \int d\mathbf{n} \,\mathbf{n}\, \bmat{\psi}{\hat{E}(\mathbf{n})}{\psi}
\\
& = \frac{2s+1}{4 \pi}
\int d\mathbf{n} \, \mathbf{n} \, \bigl| \overlap{\psi}{\mathbf{n}}\bigr|^2
\\
& = \frac{1}{s+1}
\bmat{\psi\otimes \chi_{\rm ap}}{\hat{\mathbf{S}}_{\rm i}}{\psi\otimes \chi_{\rm ap}}
\end{align*}
where we have used
the fact~\cite{Lieb} that $(s+1)\mathbf{n}$ is the contravariant
symbol corresponding to $\hat{\mathbf{S}}$.
\section{Predictively Optimal Type 1 Measurements}
\label{sec: PreOpt}
The purpose of this section is to characterise the form of
the operator $\hat{T}(\mathbf{n},\xi)$ and function $\rho_{\rm val}(\mathbf{n})$ for
processes which maximise the predictive fidelity,
$\eta_{\rm f}$. In the last section we showed that, for retrodictively
optimal type 1 measurements, $\rho_{\rm val}$
coincides with the initial system state $Q$-function. In
this section we will show that if the measurement is
predictively optimal, then $\rho_{\rm val}$ is
related to the final system state $P$-function.
We begin by
establishing an upper bound on $\eta_{\rm f}$.
By a similar argument to the one leading
to Inequality~(\ref{eq: RetFidCondC}) we find
\begin{equation*}
\int d\mathbf{n} \, d\xi \,
\Tr\bigl( (\eta_{\rm f}-\mathbf{n} \cdot \hat{\mathbf{S}}) \,\hat{T}(\mathbf{n},\xi)\, \hat{T}^{\dagger}(\mathbf{n},\xi) \bigr)
\le 0
\end{equation*}
which only differs from Inequality~(\ref{eq: RetFidCondC})
in the replacement of $\eta_{\rm i}$ by $\eta_{\rm f}$, and in the fact that the
order of
$\hat{T}(\mathbf{n},\xi)$ and $\hat{T}^{\dagger}(\mathbf{n},\xi)$ is reversed.
The analysis therefore proceeds in nearly the same way.
Corresponding to Inequality~(\ref{eq: etaiCondA}) we deduce
\begin{equation}
\eta_{\rm f} \le s
\label{eq: etafCondA}
\end{equation}
which, in view of Eq.~(\ref{eq: PreErrA}), implies
\begin{equation}
\Delta_{\rm ef} S \ge \sqrt{s}
\label{eq: PreErrRelA}
\end{equation}
We will refer to Inequality~(\ref{eq: PreErrRelA}) as the
predictive error relation. It is the analogue, for
measurements of spin direction, of the inequality
$ \Delta_{\mathrm{ef}} x \, \Delta_{\mathrm{ef}} p \ge 1/2$ proved in ref.~\cite{self2b} for
joint measurements of position and momentum (units chosen
such that $\hbar=1$).
We define a predictively optimal type 1 measurement to be one
for which the predictive fidelity is maximal, $\eta_{\rm f} =s$ (unlike
the case of retrodictive optimality, we do not impose the
requirement that the measurement also be isotropic). By a
similar argument to the one given in the last section we
find, corresponding to Eqs.~(\ref{eq: RetOptTCondA})
and~(\ref{eq: RetOptTCondB}), that the necessary and
sufficient condition for a type 1 measurement to be
predictively optimal is that $\hat{T}(\mathbf{n},\xi)$ be of the form
\begin{equation}
\hat{T}(\mathbf{n},\xi)
= \left( \frac{2 s+1}{4 \pi}\right)^{\frac{1}{2}}
\ket{\mathbf{n}} \bra{h(\mathbf{n},\xi)}
\label{eq: PreOptTCondA}
\end{equation}
for almost all $\mathbf{n}$, $\xi$, where $\ket{h(\mathbf{n},\xi)}$ is any
family of kets satisfying the completeness relation
\begin{equation}
\frac{2 s+1}{4 \pi}
\int d\mathbf{n} \, d\xi \, \ket{h(\mathbf{n},\xi)} \bra{h(\mathbf{n},\xi)}
= 1
\label{eq: PreOptTCondB}
\end{equation}
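A simple family of kets satisfying Eq.~(\ref{eq: PreOptTCondB}) is
$\ket{h(\mathbf{n},\xi)}=c(\xi)\ket{\mathbf{n}}$, with $c$ any function
such that $\int d\xi\,|c(\xi)|^2=1$; the completeness relation then
follows from the resolution of the identity for the states
$\ket{\mathbf{n}}$. With this choice Eq.~(\ref{eq: PreOptTCondA}) is
also of the retrodictively optimal form~(\ref{eq: RetOptTCondA}), a
point we return to in Section~\ref{sec: CompOpt}.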
If $\hat{T}(\mathbf{n},\xi)$ is of this form it follows from
Eqs.~(\ref{eq: SDef})
and~(\ref{eq: rhoValTermsRhoi}) that
\begin{equation}
\rho_{\rm val}(\mathbf{n})
= \frac{2 s+1}{4 \pi}
\int d\xi \,
\bmat{h(\mathbf{n},\xi)}{\hat{\rho}_{\rm i}}{h(\mathbf{n},\xi)}
\label{eq: RetroOptPDF}
\end{equation}
where $\hat{\rho}_{\rm i}$ is the initial system density
matrix.
Now suppose that the measured value of $\mathbf{n}$ has been recorded
to lie in the region $\mathscr{R}$ of the unit 2-sphere. Then,
using Eqs.~(\ref{eq: rhofTermsT}), (\ref{eq: PreOptTCondA})
and~(\ref{eq: RetroOptPDF}), we find
\begin{equation*}
\hat{\rho}_{\rm f}
= \frac{1}{p_{\mathscr{R}}}
\int_{\mathscr{R}} d\mathbf{n} \,
\rho_{\rm val} (\mathbf{n}) \ket{\mathbf{n}}\bra{\mathbf{n}}
\end{equation*}
where $p_{\mathscr{R}}$ is the probability of recording the
result
$\mathbf{n} \in\mathscr{R}$, and where
$\hat{\rho}_{\rm f}$ is the final system reduced density
matrix. In view of Eqs.~(\ref{eq: fCondA}),
(\ref{eq: fCondB})
and~(\ref{eq: PfncDef}) this means that the final system state
$P$-function $P_{\rm f}$ is given by
\begin{equation*}
P_{\rm f} (\mathbf{n})
= \frac{1}{p_{\mathscr{R}}}
\int_{\mathscr{R}} d\mathbf{n}' \,
\Pi_{2s}(\mathbf{n},\mathbf{n}') \rho_{\rm val}(\mathbf{n}')
\end{equation*}
for almost all $\mathbf{n}$, where $\Pi_{2s}$ is the projection kernel
defined by Eq.~(\ref{eq: Pi0Proj}).
If $\mathscr{R}$ is a sufficiently small region surrounding the
point
$\mathbf{n}_{0}$ then
\begin{equation*}
\hat{\rho}_{\rm f} \approx \ket{\mathbf{n}_{0}}\bra{\mathbf{n}_{0}}
\end{equation*}
Finally, we note that for a predictively optimal type 1
measurement
$\langle \hat{\mathbf{S}}_{\rm f} \rangle= s\langle \hat{\mathbf{n}}_{\rm f}\rangle$. In
fact
\begin{align*}
\bmat{\psi \otimes \chi_{\rm ap}}{\hat{\mathbf{S}}_{\rm f}}{\psi \otimes \chi_{\rm ap}}
& = \int d\mathbf{n} \, d\xi \,
\bmat{\psi}{\hat{T}^{\dagger}(\mathbf{n},\xi) \, \hat{\mathbf{S}} \, \hat{T}(\mathbf{n},\xi)}{\psi}
\\
& = \frac{2 s+1}{4 \pi}\int d\mathbf{n} \, d\xi \, \bmat{\mathbf{n}}{\hat{\mathbf{S}}}{\mathbf{n}}
\boverlap{\psi}{h(\mathbf{n},\xi)}
\boverlap{h(\mathbf{n},\xi)}{\psi}
\\
& = s \int d\mathbf{n} \, \mathbf{n}
\bmat{\psi}{\hat{E}(\mathbf{n})}{\psi}
\\
& = s \bmat{\psi \otimes \chi_{\rm ap}}{\hat{\mathbf{n}}_{\rm f}}{\psi \otimes \chi_{\rm ap}}
\end{align*}
where we have used the fact~\cite{Lieb} that $s \mathbf{n}$ is the
covariant symbol corresponding to $\hat{\mathbf{S}}$.
\section{Completely Optimal Type 1 Measurements}
\label{sec: CompOpt}
We define a completely optimal type 1 measurement to be one
which is both retrodictively and predictively optimal.
Referring to Eqs.~(\ref{eq: RetOptTCondA}),
(\ref{eq: RetOptTCondB}),
(\ref{eq: PreOptTCondA}),
and~(\ref{eq: PreOptTCondB}) we see that the necessary and
sufficient condition for this to be true is that
$\hat{T}(\mathbf{n},\xi)$ be of the form
\begin{equation*}
\hat{T}(\mathbf{n},\xi)
= \left(\frac{2 s+1}{4\pi}\right)^{\frac{1}{2}} f(\mathbf{n}, \xi)
\ket{\mathbf{n}}\bra{\mathbf{n}}
\end{equation*}
where $f$ is any function with the property
\begin{equation*}
\int d \xi \, \left| f(\mathbf{n}, \xi)\right|^2 =1
\end{equation*}
for all $\mathbf{n}$.
Expressed in terms of the operator $\hat{U}$ the condition
reads [see Eq.~(\ref{eq: TDef})]
\begin{equation*}
\bigl( \bra{m_1}\otimes\bra{\mathbf{n},\xi} \bigr)
\hat{U}
\bigl( \ket{m_2} \otimes \ket{\chi_{\rm ap}}\bigr)
= \left(\frac{2 s+1}{4\pi}\right)^{\frac{1}{2}} f(\mathbf{n}, \xi)
\overlap{m_1}{\mathbf{n}} \overlap{\mathbf{n}}{m_2}
\end{equation*}
It is straightforward to verify that there do exist
unitary operators
$\hat{U}$ with this property. It follows that completely
optimal measurements are at least mathematically well defined. The question
as to whether they can be realised physically is, of course, rather
less straightforward.
Referring to Eq.~(\ref{eq: etadTermsT}) we see that, for a
completely optimal measurement, the quantity $\eta_{\rm d}$,
characterising the extent to which the system is disturbed by
the measurement process, is given by
\begin{equation}
\eta_{\rm d}
= \inf_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\biggl( \frac{2 s+1}{4 \pi}
\int d \mathbf{n} \, \frac{s}{2} \Bigl(
\boverlap{\psi}{\mathbf{n}} \bmat{\mathbf{n}}{\mathbf{n} \cdot \hat{\mathbf{S}}}{\psi}
+ \bmat{\psi}{\mathbf{n} \cdot \hat{\mathbf{S}}}{\mathbf{n}}\boverlap{\mathbf{n}}{\psi}
\Bigr)
\biggr)
= s^2
\label{eq: ComOptetad}
\end{equation}
where we have used the fact~\cite{Lieb} that $s\mathbf{n}$ is the
covariant symbol corresponding to
$\hat{\mathbf{S}} $. In view of Eq.~(\ref{eq: DistA}) it follows that
\begin{equation*}
\Delta_{\rm d} S = \sqrt{2 s}
\end{equation*}
\section{Type 2 Measurements}
\label{sec: type2}
In the preceding sections we have been concerned with type 1
measurements, for which the pointer position is constrained to
lie on the unit $2$-sphere. We now turn our attention to type
2 measurements. As explained in the Introduction, these are
measurements for which the outcome is represented by the three
\emph{independent} commuting components of a vector $\hat{\boldsymbol{\mu}}$, no
constraint being placed on the squared modulus
$\hat{\boldsymbol{\mu}}^2=\sum_{a=1}^{3} \hat{\mu}^{2}_{a}$. We will show that the
more nearly a type 2 measurement approaches optimality, the
more nearly it approximates an (optimal) type 1
measurement.
We first need to characterise the accuracy of a type 2
measurement. A similar analysis to that given in
Section~\ref{sec: POVM} can be carried through for type 2
measurements, with the replacement
$\mathbf{n}\rightarrow\boldsymbol{\mu}$.
As before, we denote the additional apparatus
degrees of freedom $\hat{\xi}=(\hat{\xi}_{1},\dots,\hat{\xi}_{N})$, so that the
eigenkets $\ket{\boldsymbol{\mu},\xi}$ comprise an orthonormal basis for
the apparatus state space, $\mathscr{H}_{\rm ap}$. Let $\ket{\chi_{\rm ap}}$
be the initial apparatus state, and let $\hat{U}$ be the unitary
operator describing the evolution brought about by the
measurement interaction. Then, if the initial system
state is
$\ket{\psi}$, the final state of system$+$apparatus,
immediately after the measurement interaction has ended, will
be given by
$\hat{U}\ket{\psi \otimes \chi_{\rm ap}}$. Corresponding
to Eqs.~(\ref{eq: TDef}) and~(\ref{eq: SDef}) we define
\begin{equation*}
\hat{T}(\boldsymbol{\mu},\xi)
=
\sum_{m,m'=-s}^{s}
\bigl(\bra{m}\otimes
\bra{\boldsymbol{\mu},\xi}\bigr)
\hat{U}
\bigl(\ket{m'}\otimes \ket{\chi_{\rm ap}} \bigr)
\ket{m}\bra{m'}
\end{equation*}
and
\begin{equation}
\hat{E}(\boldsymbol{\mu})
= \int d \xi \, \hat{T}^{\dagger}(\boldsymbol{\mu},\xi) \, \hat{T}(\boldsymbol{\mu},\xi)
\label{eq: EType2Def}
\end{equation}
Corresponding to Eqs.~(\ref{eq: RetErrA})
and~(\ref{eq: PreErrA}) we define the
maximal rms errors of retrodiction and prediction by
\begin{align}
\Delta_{\rm ei} S
& = \left(\sup_{\ket{\psi }\in \mathscr{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl| \hat{\boldsymbol{\mu}}_{\rm f}-\hat{\mathbf{S}}_{\rm i}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
\right)^{\frac{1}{2}}
\label{eq: RetErrB}
\\
\intertext{and}
\Delta_{\rm ef} S
& = \left(\sup_{\ket{\psi }\in \mathscr{S}_{\rm sy}}
\left( \bmat{\psi\otimes\chi_{\rm ap}}{
\bigl|\hat{\boldsymbol{\mu}}_{\rm f}-\hat{\mathbf{S}}_{\rm f}
\bigr|^2}{\psi\otimes\chi_{\rm ap}}
\right)
\right)^{\frac{1}{2}}
\label{eq: PreErrB}
\end{align}
where $\hat{\mathbf{S}}_{\rm i}=\hat{\mathbf{S}}$,
$\hat{\mathbf{S}}_{\rm f}=\hat{U}^{\dagger} \hat{\mathbf{S}} \hat{U}$
and $\hat{\boldsymbol{\mu}}_{\rm f} = \hat{U}^{\dagger} \hat{\boldsymbol{\mu}} \hat{U}$.
It can be seen that Eq.~(\ref{eq: RetErrB})
agrees with Eq.~(\ref{eq: RetErrA}) if one replaces
$\hat{\boldsymbol{\mu}}_{\rm f} \rightarrow \eta_{\rm i} \hat{\mathbf{n}}_{\rm f}$, and that
Eq.~(\ref{eq: PreErrB}) agrees with
Eq.~(\ref{eq: PreErrA}) if one replaces
$ \hat{\boldsymbol{\mu}}_{\rm f} \rightarrow \eta_{\rm f} \hat{\mathbf{n}}_{\rm f}$.
In terms of the operators $\hat{E}(\boldsymbol{\mu})$ and $\hat{T}(\boldsymbol{\mu},\xi)$ we have
\begin{equation}
\Delta_{\rm ei} S
=
\Biggl(\sup_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\biggl( \int d \boldsymbol{\mu} \,
\sum_{a=1}^{3}
\bmat{\psi}{
(\mu_{a}-\hat{S}_{a}) \, \hat{E}(\boldsymbol{\mu}) \, (\mu_{a}-\hat{S}_{a})}{
\psi} \biggr) \Biggr)^{\frac{1}{2}}
\label{eq: RetErrType2Def}
\end{equation}
and
\begin{equation}
\Delta_{\rm ef} S
=
\Biggl(
\sup_{\ket{\psi}\in\mathscr{S}_{\rm sy}}
\biggl( \int d \boldsymbol{\mu} \, d \xi \,
\bmat{\psi}{\hat{T}^{\dagger}(\boldsymbol{\mu},\xi) \,
\bigl|\boldsymbol{\mu} - \hat{\mathbf{S}}\bigr|^2 \,
\hat{T}(\boldsymbol{\mu},\xi)}{\psi}
\biggr)\Biggr)^{\frac{1}{2}}
\label{eq: PreErrType2Def}
\end{equation}
We next show that, corresponding to
Inequality~(\ref{eq: RetErrRelA}), one has the retrodictive
error relationship for type
2 measurements
\begin{equation}
\Delta_{\rm ei} S \ge \sqrt{s}
\label{eq: RetErrRelB}
\end{equation}
and that, corresponding to
Inequality~(\ref{eq: PreErrRelA}), one has the
predictive error relationship for type
2 measurements
\begin{equation}
\Delta_{\rm ef} S \ge \sqrt{s}
\label{eq: PreErrRelB}
\end{equation}
In fact, it follows from Eqs.~(\ref{eq: EType2Def}),
(\ref{eq: RetErrType2Def})
and~(\ref{eq: PreErrType2Def}) that
{\allowdisplaybreaks
\begin{align*}
(2 s+1) \left(\Delta_{\rm ei} S\right)^2
& \ge
\int d\boldsymbol{\mu} \, d\xi \,
\Tr \left(\bigl|\boldsymbol{\mu}-\hat{\mathbf{S}}\bigr|^2 \, \hat{T}^{\dagger}(\boldsymbol{\mu},\xi) \, \hat{T}(\boldsymbol{\mu},\xi) \right)
\\
\intertext{and}
(2 s+1) \left(\Delta_{\rm ef} S \right)^2
& \ge
\int d\boldsymbol{\mu} \, d\xi \,
\Tr \left(\bigl|\boldsymbol{\mu}-\hat{\mathbf{S}}\bigr|^2 \, \hat{T}(\boldsymbol{\mu},\xi)\, \hat{T}^{\dagger}(\boldsymbol{\mu},\xi)\right)
\end{align*}
Using the fact
\begin{equation*}
(2 s+1) = \int d\boldsymbol{\mu} \, d\xi \,
\Tr\left(\hat{T}^{\dagger}(\boldsymbol{\mu},\xi) \, \hat{T}(\boldsymbol{\mu},\xi)\right)
\end{equation*}
we deduce
\begin{align}
\int d\boldsymbol{\mu} \, d\xi \,
\Tr \biggl(\left(\bigl|\boldsymbol{\mu}-\hat{\mathbf{S}}\bigr|^2 -
\left(\Delta_{\rm ei} S\right)^2
\right)
\hat{T}^{\dagger}(\boldsymbol{\mu},\xi)\, \hat{T}(\boldsymbol{\mu},\xi) \biggr)
& \le 0
\label{eq: RetErrBCondA}
\\
\intertext{and}
\int d\boldsymbol{\mu} \, d\xi \,
\Tr \biggl(\left(\bigl|\boldsymbol{\mu}-\hat{\mathbf{S}} \bigr|^2 -
\left(\Delta_{\rm ef} S\right)^2
\right)
\hat{T}(\boldsymbol{\mu},\xi) \, \hat{T}^{\dagger}(\boldsymbol{\mu},\xi)\biggr)
& \le 0
\label{eq: PreErrBCondA}
\end{align}
Now make the expansion}
\begin{equation*}
\hat{T}(\boldsymbol{\mu},\xi)
= \sum_{m,m'=-s}^{s}
T_{m m'} (\boldsymbol{\mu},\xi) \ket{\mathbf{n},m}\bra{\mathbf{n},m'}
\end{equation*}
where $\mathbf{n} = \boldsymbol{\mu} / \mu$ (with $\mu = |\boldsymbol{\mu}|$) and
$\ket{\mathbf{n},m}$ is the state defined by
Eq.~(\ref{eq: mnKetDef}).
Using this expansion Inequalities~(\ref{eq: RetErrBCondA})
and~(\ref{eq: PreErrBCondA}) become
\begin{align}
\sum_{m,m'=-s}^{s}
\int d\boldsymbol{\mu} \, d\xi \,
\Bigl(\bigl(\mu-m'\bigr)^2+\bigl(s^2
- m'\vphantom{s}^2\bigr)
+\bigl(s-(\Delta_{\rm ei} S)^2 \bigr)
\Bigr)
\bigl| T_{m m'}(\boldsymbol{\mu},\xi) \bigr|^2
& \le 0
\label{eq: RetErrBCondB}
\\
\intertext{and}
\sum_{m,m'=-s}^{s}
\int d\boldsymbol{\mu} \, d\xi \,
\Bigl(\bigl(\mu-m\bigr)^2+\bigl(s^2
- m^2\bigr)
+\bigl(s-(\Delta_{\rm ef} S)^2 \bigr)
\Bigr)
\bigl| T_{m m'}(\boldsymbol{\mu},\xi) \bigr|^2
& \le 0
\label{eq: PreErrBCondB}
\end{align}
Inequalities~(\ref{eq: RetErrRelB})
and~(\ref{eq: PreErrRelB}) are now immediate.
Setting $\Delta_{\rm ei} S = \sqrt{s}$ in
Inequality~(\ref{eq: RetErrBCondB}) gives
\begin{equation*}
\sum_{m,m'=-s}^{s}
\int d\boldsymbol{\mu} \, d\xi \,
\Bigl(\bigl(\mu-m'\bigr)^2+\bigl(s^2
- m'\vphantom{s}^2\bigr)
\Bigr)
\bigl| T_{m m'}(\boldsymbol{\mu},\xi) \bigr|^2
\le 0
\end{equation*}
which implies
\begin{equation*}
\left| T_{m m'}(\boldsymbol{\mu},\xi)\right|^2
= g_{m}(\mathbf{n},\xi) \, \delta_{m' s} \, \delta(\mu -s)
\end{equation*}
for suitable functions $g_{m}$. However, this is not possible,
since the square root of the $\delta$-function is not defined.
It follows that the lower bound set by
Inequality~(\ref{eq: RetErrRelB}) is not precisely
achievable. Nor is the lower bound set by
Inequality~(\ref{eq: PreErrRelB}).
It is, however, possible to approach the lower bounds set by
Inequalities~(\ref{eq: RetErrRelB})
and~(\ref{eq: PreErrRelB}) arbitrarily closely. It can be
seen that as $\Delta_{\rm ei} S \rightarrow \sqrt{s}$ (respectively,
$\Delta_{\rm ef} S
\rightarrow \sqrt{s}$), then
$\hat{T}(\boldsymbol{\mu},\xi)$ and $\hat{E}(\boldsymbol{\mu})$ become more and more
strongly concentrated on the surface $\mu =s$. In other
words, the measurement more and more nearly approaches
a type 1 measurement of maximal retrodictive (respectively,
predictive) accuracy, with pointer observable
$\hat{\mathbf{n}}=\hat{\boldsymbol{\mu}}/s$.
\section{Conclusion}
\label{sec: conclusion}
There are a number of ways in which one might seek to develop
the results reported in this paper.
In the first place, although we showed that
$\Delta_{\rm d} S = \sqrt{2 s}$ for a completely optimal type 1
measurement, we did not derive error-disturbance relationships,
analogous to the inequalities $\Delta_{\rm ei} x \,\Delta_{\rm d} p$, $\Delta_{\rm ei} p\, \Delta_{\rm d} x$,
$\Delta_{\rm ef} x \, \Delta_{\rm d} p$, $\Delta_{\rm ef} p \, \Delta_{\rm d} x \ge 1/2$ (in units such
that $\hbar=1$) proved in ref.~\cite{self2b} for the case of a
simultaneous measurement of position and momentum. The general
principles of quantum mechanics~\cite{Heisenberg,Braginsky}
indicate that relationships of this kind must also hold for
measurements of spin direction, at least on a qualitative
level. However, it appears that the problem of giving the
relationships precise, numerical expression is not entirely
straightforward. The question requires further investigation.
In this paper we have considered measurements of spin
direction. However, the problem of simultaneously measuring
just two components of spin is also
important~\cite{TwoComp,BuschSpin}. It would be interesting to
investigate the accuracy of measurements such as this, and to
try to characterise the POVM (or POVMs)
describing the outcome when the measurement is optimal.
We have seen that $\mathrm{SU}(2)$ coherent states play an
important role in the description of optimal measurements of
spin direction. In refs.~\cite{self3,Ali} it was
shown that ordinary, Heisenberg-Weyl coherent states play an
analogous role in the description of optimal joint measurements
of position and momentum. It would be interesting to see if it
is generally true that every system of generalized coherent
states is related in this way to joint measurements of
the generators of the corresponding Lie group.
There are some important questions of principle regarding
measurements of a \emph{single} spin
component~\cite{Busch,BuschSpin,Wigner,Wheeler,Garra}. It would
be interesting to see if the approach to the problem of defining
the measurement accuracy which was described in this paper can
be used to gain some additional insight into these questions.
Finally, it is obviously important to investigate
whether optimal, or near optimal determinations of spin
direction can be realised experimentally.
\end{document} |
\begin{document}
\title{The Douglas--Rachford algorithm\\
for a hyperplane and a doubleton}
\author{
Heinz H.\ Bauschke\thanks{
Mathematics, University of British Columbia, Kelowna, B.C.\ V1V~1V7, Canada.
E-mail: \texttt{[email protected]}.},~
Minh N.\ Dao\thanks{
CARMA, University of Newcastle, Callaghan, NSW 2308, Australia.
E-mail: \texttt{[email protected]}.}~~~and~
Scott B.\ Lindstrom\thanks{
CARMA, University of Newcastle, Callaghan, NSW 2308, Australia.
E-mail: \texttt{[email protected]}.}
}
\date{April 24, 2018}
\maketitle
\begin{abstract}
The Douglas--Rachford algorithm is a popular algorithm for
solving both convex and nonconvex feasibility problems.
While its behaviour is settled in the convex inconsistent case,
the general nonconvex inconsistent case is far from being fully
understood. In this paper, we focus on the most simple nonconvex
inconsistent case: when one set is a hyperplane and the other a
doubleton (i.e., a two-point set). We present a characterization of cycling in this
case which --- somewhat surprisingly --- depends on whether
the ratio of the distances of the two points to the hyperplane is
rational or not. Furthermore, we provide closed-form expressions
as well as several concrete examples which illustrate the
dynamical richness of this algorithm.
\end{abstract}
{\small
\noindent
{\bfseries 2010 Mathematics Subject Classification:}
{Primary:
47H10,
49M27;
Secondary:
65K05,
65K10,
90C26.
}
\noindent {\bfseries Keywords:}
closed-form expressions,
cycling,
Douglas--Rachford algorithm,
feasibility problem,
finite set,
hyperplane,
method of alternating projections,
projector,
reflector
}
\section{Introduction}
\label{s:intro}
The Douglas--Rachford (DR) algorithm \cite{DR56} is a popular algorithm for
finding minimizers of the sum of two functions, defined on a
real Hilbert space and possibly nonsmooth.
Its convergence properties are fairly well understood in the
case when the functions are convex; see
\cite{LM79},
\cite{EB92},
\cite{Com04},
\cite{BCL04},
\cite{BDM16},
and \cite{BM17}.
When specialized to
indicator functions,
the DR algorithm aims to solve a feasibility
problem.
The \emph{goal} of this paper is to analyze an instructive --- and
perhaps the most simple ---
nonconvex setting: when one set is a hyperplane and the other
is a doubleton (i.e., it consists of
just two distinct points).
Our analysis reveals interesting dynamic behaviour whose
\emph{periodicity} depends on whether or not a certain ratio of distances
is rational (Theorem~\ref{t:2points}).
We also provide \emph{explicit closed-form expressions} for the
iterates in various circumstances (Theorem~\ref{t:closedform}).
Our work can be regarded as complementary to
the recently rapidly growing body of works on the DR
algorithm in
nonconvex settings including
\cite{ERT07},
\cite{BS11},
\cite{HL13},
\cite{BN14},
\cite{ABT16},
\cite{Pha16},
and \cite{DT17}.
The remainder of the paper is organized as follows.
In Section~\ref{s:setup},
we recall the necessary background material to start our
analysis.
The case when one set contains not just $2$ but finitely many
points is considered in Section~\ref{s:finite}.
Section~\ref{s:cycling} provides a characterization of when
cycling occurs, while
Section~\ref{s:closed-form} presents closed-form expressions and
various examples.
We conclude the paper with Section~\ref{s:conclusion}.
\section{The set up}
\label{s:setup}
Throughout we assume that
\begin{empheq}[box=\mybluebox]{equation}
\text{$X$ is a finite-dimensional real Hilbert space}
\end{empheq}
with inner product $\scal{\cdot}{\cdot}$ and induced norm $\|\cdot\|$, and
\begin{empheq}[box=\mybluebox]{equation}
\text{$A$ and $B$ are nonempty closed subsets of $X$}.
\end{empheq}
To solve the feasibility problem
\begin{empheq}[box=\mybluebox]{equation}
\label{e:prob}
\text{find a point in $A\cap B$},
\end{empheq}
we employ the \emph{Douglas--Rachford algorithm} (also called \emph{averaged alternating reflections})
that uses the \emph{DR operator}, associated with the ordered pair $(A, B)$,
\begin{empheq}[box=\mybluebox]{equation}
T :=\tfrac{1}{2}(\ensuremath{\operatorname{Id}}+R_BR_A)
\end{empheq}
to generate a \emph{DR sequence} $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ with starting point $x_0 \in X$ by
\begin{empheq}[box=\mybluebox]{equation}
\label{e:DRAseq}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad x_{n+1} \in Tx_n,
\end{empheq}
where $\ensuremath{\operatorname{Id}}$ is the identity operator, $P_A$ and $P_B$ are the projectors,
and $R_A :=2P_A -\ensuremath{\operatorname{Id}}$ and $R_B :=2P_B -\ensuremath{\operatorname{Id}}$ are the reflectors with respect to $A$ and $B$, respectively.
Here the projection $P_Ax$ of a point $x \in X$ is the nearest point of $x$ in the set $A$, i.e.,
\begin{equation}
P_Ax :=\ensuremath{\operatorname*{argmin}}_{a \in A} \|x -a\| =\menge{a \in A}{\|x -a\| =d_A(x)},
\end{equation}
where $d_A(x) := \min_{a \in A} \|x -a\|$ is the distance from $x$ to the set $A$.
Note from \cite[Corollary~3.15]{BC17} that
closedness of the set $A$ is necessary and sufficient for $A$ to be proximinal,
i.e., $(\forall x\in X)$ $P_Ax\neq \varnothing$.
According to \cite[Theorem~3.16]{BC17},
if $A$ and $B$ are convex, then $P_A$, $P_B$ and hence $T$ are single-valued.
We also note that
\begin{equation}
(\forall x \in X)\quad Tx =\tfrac{1}{2}(\ensuremath{\operatorname{Id}}+R_BR_A)x =\menge{x -a +P_B(2a -x)}{a \in P_Ax},
\end{equation}
and if $P_A$ is single-valued then
\begin{equation}
\label{e:PAsingle}
T =\tfrac{1}{2}(\ensuremath{\operatorname{Id}}+R_BR_A) =\ensuremath{\operatorname{Id}} -P_A +P_BR_A.
\end{equation}
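For readers who wish to experiment numerically, the following is a minimal sketch (in Python, not part of the analysis below) of one DR step and of the resulting sequence, assuming that $X=\ensuremath{\mathbb R}^n$ and that single-valued selections of the projectors onto $A$ and $B$ are supplied by the user; the helper names \texttt{dr\_step} and \texttt{dr\_sequence} are ours.
\begin{verbatim}
import numpy as np

def dr_step(x, proj_A, proj_B):
    """One Douglas-Rachford step T x = x - P_A x + P_B(2 P_A x - x),
    assuming proj_A and proj_B each return a single nearest point."""
    a = proj_A(x)                          # P_A x
    return x - a + proj_B(2.0 * a - x)     # = (1/2)(Id + R_B R_A) x

def dr_sequence(x0, proj_A, proj_B, n_steps):
    """Generate the DR sequence x_0, x_1, ..., x_{n_steps}."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        xs.append(dr_step(xs[-1], proj_A, proj_B))
    return xs
\end{verbatim}
When $P_A$ or $P_B$ is set-valued, the selection made by the supplied projectors determines which DR sequence in the sense of \eqref{e:DRAseq} is generated.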
For further information on the DR algorithm
in the classical case
(when $A$ and $B$ are both convex),
see
\cite{LM79},
\cite{Com04},
\cite{BCL04},
\cite{BM17},
and \cite{BDNP16b}.
Results complementary to the rapidly increasing body of work on the
DR algorithm in nonconvex settings can be found in
\cite{BN14},
\cite{Pha16},
\cite{DP16},
\cite{BLSSS17},
\cite{LLS17},
\cite{LSS17},
\cite{DP17}, and the references therein.
The notation and terminology used is standard and follows, e.g., \cite{BC17}.
The nonnegative integers are $\ensuremath{\mathbb N}$, the positive integers are $\ensuremath{\mathbb N}^*$, and the real numbers are $\ensuremath{\mathbb R}$,
while $\ensuremath{\mathbb{R}_+} := \menge{x \in \ensuremath{\mathbb R}}{x \geq 0}$ and $\ensuremath{\mathbb{R}_{++}} := \menge{x \in \ensuremath{\mathbb R}}{x >0}$.
We are now ready to start deriving the results we announced in
Section~\ref{s:intro}.
\section{Hyperplane and finitely many points}
\label{s:finite}
We focus on the case when $B$ is a finite set, and we start with the following observation.
\begin{lemma}
\label{l:notcvg}
Suppose that $A$ is convex, that $B$ is finite, and that $A\cap B =\varnothing$.
Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$.
Then $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is not convergent.
\end{lemma}
\begin{proof}
Since $A$ is convex, $P_A$ is single-valued and continuous on $X$.
By \eqref{e:PAsingle}, $T =\frac{1}{2}(\ensuremath{\operatorname{Id}} +R_BR_A) =\ensuremath{\operatorname{Id}} -P_A +P_BR_A$, and hence
\begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad b_n :=x_{n+1} -x_n +P_Ax_n \in P_BR_Ax_n \subseteq B.
\end{equation}
Suppose that $x_n \to x \in X$. Then $b_n \to P_Ax$.
But $(b_n)_\ensuremath{{n\in{\mathbb N}}}$ lies in the finite set $B$, so there exist $n_0 \in \ensuremath{\mathbb N}$ and $b \in B$ such that
$(\forall n\geq n_0)$ $b_n =b$.
We obtain $P_Ax =b \in A\cap B$, which contradicts the assumption that $A\cap B =\varnothing$.
\end{proof}
From here onwards, we assume that $A$ is a hyperplane and $B$ is
a finite subset of $X$ containing $m$ pairwise distinct vectors;
more specifically,
\begin{subequations}
\begin{empheq}[box=\mybluebox]{equation}
A= \{u\}^\perp \quad\text{with}\quad u\in X, \|u\|= 1
\end{empheq}
and
\begin{empheq}[box=\mybluebox]{equation}
\label{e:B}
B= \{b_1, \dots, b_m\}\subseteq X \quad\text{with}\quad \scal{b_1}{u}\leq \cdots \leq \scal{b_m}{u}.
\end{empheq}
\end{subequations}
\begin{fact}
\label{f:A}
Let $x\in X$. Then the following hold:
\begin{enumerate}
\item\label{f:A_P}
$P_Ax= x- \scal{x}{u}u$.
\item\label{f:A_R}
$R_Ax= x- 2\scal{x}{u}u$.
\item\label{f:A_d}
$d_A(x)= |\scal{x}{u}|$.
\end{enumerate}
\end{fact}
\begin{proof}
This follows from \cite[Example~2.4(i)]{BD17} with noting that
$R_Ax= 2P_Ax- x$ and that $d_A(x)= \|x- P_Ax\|$.
\end{proof}
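In concrete computations, Fact~\ref{f:A} translates into a few lines of code. The following Python sketch (ours, using NumPy and assuming $\|u\|=1$) implements $P_A$, $R_A$ and $d_A$ for $A=\{u\}^\perp$, together with one admissible selection of the nearest-point map onto a finite set $B$.
\begin{verbatim}
import numpy as np

def make_hyperplane_ops(u):
    """Operators for A = {u}^perp with ||u|| = 1, following Fact f:A."""
    u = np.asarray(u, dtype=float)
    proj_A = lambda x: x - np.dot(x, u) * u        # P_A x = x - <x,u> u
    refl_A = lambda x: x - 2.0 * np.dot(x, u) * u  # R_A x = x - 2 <x,u> u
    dist_A = lambda x: abs(np.dot(x, u))           # d_A(x) = |<x,u>|
    return proj_A, refl_A, dist_A

def make_finite_projector(points):
    """Nearest-point map onto a finite set B; ties are broken by taking
    the first minimizer, i.e., one admissible selection of P_B."""
    B = [np.asarray(b, dtype=float) for b in points]
    return lambda x: min(B, key=lambda b: np.linalg.norm(x - b))
\end{verbatim}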
Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$ with starting point $x_0\in X$.
Since $P_A$ is single-valued, we derive from \eqref{e:PAsingle} that
\begin{equation}
(\forall n\in \ensuremath{\mathbb N}^*)\quad
x_n- x_{n-1}+ P_Ax_{n-1}\in Tx_{n-1}- x_{n-1}+ P_Ax_{n-1}= P_BR_Ax_{n-1}\subseteq B.
\end{equation}
Let us set
\begin{empheq}[box=\mybluebox]{equation}
\label{e:b_kn}
(\forall n\in \ensuremath{\mathbb N}^*)\quad b_{k(n)}:= x_n- x_{n-1}+ P_Ax_{n-1}\in P_BR_Ax_{n-1}\subseteq B
\text{~with~} k(n) \in \{1, \dots, m\}.
\end{empheq}
The following lemma shows that the subsequence $(x_n)_{n\in
\ensuremath{\mathbb N}^*}$ lies in the union of the lines through the points in $B$
with a common direction vector $u$.
\begin{lemma}
\label{l:lines}
For every $n\in \ensuremath{\mathbb N}^*$,
\begin{equation}
x_n= \scal{x_{n-1}}{u}u+ b_{k(n)}
\quad\text{and}\quad \scal{x_n}{u}= \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u},
\end{equation}
where $k(n)\in \{1, \dots, m\}$. Consequently, the subsequence $(x_n)_{n\in \ensuremath{\mathbb N}^*}$ lies in the union of finitely many (affine) lines:
\begin{equation}
B+ \ensuremath{\mathbb R} u=\bigcup_{b\in B} (b+ \ensuremath{\mathbb R} u)= \menge{b+ \lambda u}{b\in B, \lambda\in \ensuremath{\mathbb R}}.
\end{equation}
\end{lemma}
\begin{proof}
By combining \eqref{e:b_kn} with Fact~\ref{f:A}\ref{f:A_P},
\begin{equation}
\label{e:x+}
(\forall n\in \ensuremath{\mathbb N}^*)\quad x_n= x_{n-1}- P_Ax_{n-1}+ b_{k(n)}= \scal{x_{n-1}}{u}u+ b_{k(n)}.
\end{equation}
Taking the inner product with $u$ yields
\begin{equation}
\label{e:x+,u}
(\forall n\in \ensuremath{\mathbb N}^*)\quad \scal{x_n}{u}= \scal{\scal{x_{n-1}}{u}u+ b_{k(n)}}{u}= \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u},
\end{equation}
which completes the proof.
\end{proof}
\begin{proposition}
\label{p:mpoints}
Exactly one of the following holds.
\begin{enumerate}
\item
\label{p:mpoints_finite}
$B$ is contained in one of
the two closed halfspaces induced by $A$.
Then either {\rm (a)} the sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ converges finitely to
a point $x \in \ensuremath{\operatorname{Fix}} T$ and $P_Ax \in A\cap B$,
or {\rm (b)} $A\cap B= \varnothing$ and $\|x_n\| \to +\infty$ in which
case $(P_Ax_n)_\ensuremath{{n\in{\mathbb N}}}$ converges finitely to a best approximation
solution $a \in A$ relative to $A$ and $B$ in the sense that
$d_B(a)= \min d_B(A)$.
\item
\label{p:mpoints_bounded}
$B$ is not contained in one of the two closed halfspaces induced by $A$.
Then the sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is bounded. If
additionally $A\cap B= \varnothing$, then $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is not
convergent and
\begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \|x_n- x_{n+1}\| \geq \min d_A(B)> 0.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{p:mpoints_finite}: This follows from \cite[Theorem~7.5]{BD17}.
\ref{p:mpoints_bounded}: Since $B$ is not contained in one of the two closed halfspaces induced by $A$, it follows from \eqref{e:B} that
\begin{equation}
\label{e:b1bm}
\scal{b_1}{u}< 0< \scal{b_m}{u}.
\end{equation}
Combining Fact~\ref{f:A}\ref{f:A_R} with Lemma~\ref{l:lines} yields
\begin{subequations}
\begin{align}
\label{e:RAx+}
(\forall n\in \ensuremath{\mathbb N}^*)\quad R_Ax_n&= x_n- 2\scal{x_n}{u}u \\
&= \big( \scal{x_{n-1}}{u}u+ b_{k(n)} \big)- \big( \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u} \big)u- \scal{x_n}{u}u \\
&= -(\scal{x_n}{u}+\scal{b_{k(n)}}{u})u+ b_{k(n)}.
\end{align}
\end{subequations}
For any $n\in \ensuremath{\mathbb N}^*$ and any distinct indices $i, j\in \{1, \dots, m\}$, we have the following equivalences:
\begin{subequations}
\label{e:compare}
\begin{align}
&\|b_i- R_Ax_n\|\leq \|b_j- R_Ax_n\| \\
\Leftrightarrow{} &\|(\scal{x_n}{u}+\scal{b_{k(n)}}{u})u+ (b_i-b_{k(n)})\|^2
\leq \|(\scal{x_n}{u}+\scal{b_{k(n)}}{u})u+ (b_j-b_{k(n)})\|^2 \\
\Leftrightarrow{} &\|b_i-b_{k(n)}\|^2- \|b_j-b_{k(n)}\|^2
\leq 2(\scal{x_n}{u}+\scal{b_{k(n)}}{u})\scal{b_j-b_i}{u} \\
\Leftrightarrow{} &\begin{cases}
\scal{x_n}{u} \geq \beta_{i,j,n} &\text{if~} \scal{b_i}{u}< \scal{b_j}{u}, \\
\|b_i-b_{k(n)}\|\leq \|b_j-b_{k(n)}\| &\text{if~} \scal{b_i}{u}= \scal{b_j}{u}, \\
\scal{x_n}{u} \leq \beta_{i,j,n} &\text{if~} \scal{b_i}{u}> \scal{b_j}{u},
\end{cases}
\end{align}
\end{subequations}
where
\begin{equation}
\beta_{i,j,n}:= \frac{\|b_i-b_{k(n)}\|^2- \|b_j-b_{k(n)}\|^2}{2\scal{b_j-b_i}{u}} -\scal{b_{k(n)}}{u}.
\end{equation}
We shall now show that $(\scal{x_n}{u})_\ensuremath{{n\in{\mathbb N}}}$ is bounded above. Setting
\begin{equation}
r:= \max\menge{k\in \{1, \dots, m\}}{\scal{b_k}{u}= \scal{b_1}{u}},
\end{equation}
we see that $r< m$ due to \eqref{e:b1bm} and that, by \eqref{e:B},
\begin{equation}
\label{e:b1br}
\scal{b_1}{u}= \cdots= \scal{b_r}{u}< \scal{b_{r+1}}{u}\leq \cdots\leq \scal{b_m}{u}.
\end{equation}
Now let $n\in \ensuremath{\mathbb N}^*$ and set
\begin{equation}
I(n):= \menge{i\in \{1, \dots, r\}}{(\forall j\in \{1, \dots, r\})\quad \|b_i-b_{k(n)}\|\leq \|b_j-b_{k(n)}\|}.
\end{equation}
Then $I(n)= \{k(n)\}$ whenever $k(n)\in \{1, \dots, r\}$ and, by \eqref{e:compare},
\begin{equation}
\label{e:pre-compare}
(\forall i\in I(n))(\forall j\in \{1, \dots, r\})\quad \|b_i- R_Ax_n\|\leq \|b_j- R_Ax_n\|.
\end{equation}
Define
\begin{equation}
\beta_n:= \max\menge{\beta_{i,j,n}}{i\in I(n), j\in \{r+1, \dots, m\}}.
\end{equation}
If $\scal{x_n}{u}> \beta_n$, then \eqref{e:B} and \eqref{e:compare} yield
\begin{equation}
(\forall i\in I(n))(\forall k\in \{r+1, \dots, m\})\quad \|b_i- R_Ax_n\|< \|b_k- R_Ax_n\|,
\end{equation}
which together with \eqref{e:pre-compare} implies that $k(n+1)\in I(n)\subseteq \{1, \dots, r\}$ and,
by \eqref{e:x+,u}, \eqref{e:b1bm} and \eqref{e:b1br},
\begin{equation}
\label{e:decrease}
\scal{x_{n+1}}{u}= \scal{x_n}{u}+ \delta \quad\text{with}\quad \delta:= \scal{b_{k(n+1)}}{u}= \scal{b_1}{u}< 0.
\end{equation}
Noting that \eqref{e:decrease} holds whenever $\scal{x_n}{u}> \beta_n$ and that the sequence $(\beta_n)_\ensuremath{{n\in{\mathbb N}}}$ is bounded since the set $\menge{\beta_{i,j,n}}{i\in I(n), j\in \{r+1, \dots, m\}, n\in \ensuremath{\mathbb N}^*}$ is finite, we deduce that $(\scal{x_n}{u})_\ensuremath{{n\in{\mathbb N}}}$ is bounded above. By a similar argument, $(\scal{x_n}{u})_\ensuremath{{n\in{\mathbb N}}}$ is also bounded below. Combining with \eqref{e:x+}, we get boundedness of $(x_n)_\ensuremath{{n\in{\mathbb N}}}$.
Finally, if $A\cap B= \varnothing$, then, by
Lemma~\ref{l:notcvg}, $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is not convergent and, by the
Cauchy--Schwarz inequality, Lemma~\ref{l:lines}, and Fact~\ref{f:A}\ref{f:A_d},
\begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \|x_{n+1}- x_n\|\geq |\scal{x_{n+1}-x_n}{u}|=
|\scal{b_{k(n+1)}}{u}|= d_A(b_{k(n+1)})\geq \min d_A(B)> 0.
\end{equation}
The proof is complete.
\end{proof}
\section{Hyperplane and doubleton: characterization of cycling}
\label{s:cycling}
From now on,
we assume that $B$ is a doubleton where
the two points do not belong to the same closed halfspace induced by $A$; more precisely,
\begin{empheq}[box=\mybluebox]{equation}
B= \{b_1, b_2\}\subseteq X \quad\text{with}\quad \scal{b_1}{u}< 0< \scal{b_2}{u}.
\end{empheq}
Set
\begin{empheq}[box=\mybluebox]{equation}
\label{e:32}
\beta_1:= \scal{b_1}{u}< 0,\quad \beta_2:= \scal{b_2}{u}>0, \quad\text{and}\quad \beta:= \frac{\|b_1-b_2\|^2}{2(\beta_1-\beta_2)}= -\frac{\|b_1-b_2\|^2}{2\scal{b_2-b_1}{u}}< 0.
\end{empheq}
\begin{proposition}
\label{p:2points}
The following holds for the DR sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$.
\begin{enumerate}
\item
\label{p:2points_bounded}
$(x_n)_\ensuremath{{n\in{\mathbb N}}}$ is bounded but not convergent with
\begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \|x_n- x_{n+1}\| \geq \min\{d_A(b_1), d_A(b_2)\}> 0.
\end{equation}
\item
\label{p:2points_x+}
For every $n\in \ensuremath{\mathbb N}^*$,
\begin{equation}
\label{e:x,u}
x_n= \scal{x_{n-1}}{u}u+ b_{k(n)}
\quad\text{and}\quad \scal{x_n}{u}= \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u},
\end{equation}
where $k(n)\in \{1, 2\}$ and where
\begin{subequations}
\label{e:kn+}
\begin{align}
k(n)= 1\ \&\ \scal{x_n}{u}> \beta- \scal{b_1}{u} &\implies k(n+1)= 1, \\
k(n)= 1\ \&\ \scal{x_n}{u}< \beta- \scal{b_1}{u} &\implies k(n+1)= 2, \\
k(n)= 2\ \&\ \scal{x_n}{u}> -\beta- \scal{b_2}{u} &\implies k(n+1)= 1, \\
k(n)= 2\ \&\ \scal{x_n}{u}< -\beta- \scal{b_2}{u} &\implies k(n+1)= 2.
\end{align}
\end{subequations}
\item
\label{p:2points_coeffs}
There exist increasing (a.k.a.\ ``nondecreasing'')
sequences $(l_{1,n})_\ensuremath{{n\in{\mathbb N}}}$ and $(l_{2,n})_\ensuremath{{n\in{\mathbb N}}}$ in $\ensuremath{\mathbb N}$ such that
\begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{x_n}{u} =\scal{x_0}{u} +l_{1,n}\scal{b_1}{u} +l_{2,n}\scal{b_2}{u}
\quad\text{and}\quad l_{1,n} +l_{2,n} =n.
\end{equation}
Moreover,
\begin{equation}
\frac{l_{1,n}}{n}\to \frac{\scal{b_2}{u}}{\scal{b_2-b_1}{u}}\in \left]0, 1\right[
\quad\text{and}\quad
\frac{l_{2,n}}{n}\to \frac{\scal{b_1}{u}}{\scal{b_1-b_2}{u}}\in \left]0, 1\right[
\quad\text{as~} n\to +\infty.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{p:2points_bounded}: By assumption, $b_1, b_2 \notin A$, and hence $A\cap B =\varnothing$. The conclusion follows from Proposition~\ref{p:mpoints}\ref{p:mpoints_bounded}.
\ref{p:2points_x+}: We get \eqref{e:x,u} from Lemma~\ref{l:lines}.
The equivalences \eqref{e:compare} in the
proof of Proposition~\ref{p:mpoints}\ref{p:mpoints_bounded}
state
\begin{equation}
\|b_1- R_Ax_n\|\leq \|b_2- R_Ax_n\| \Leftrightarrow{}
\scal{x_n}{u}\geq \frac{\|b_1-b_{k(n)}\|^2- \|b_2-b_{k(n)}\|^2}{2\scal{b_2-b_1}{u}} -\scal{b_{k(n)}}{u},
\end{equation}
which implies \eqref{e:kn+}.
\ref{p:2points_coeffs}: Using \eqref{e:x,u}, we find increasing
sequences $(l_{1,n})_\ensuremath{{n\in{\mathbb N}}}$ and $(l_{2,n})_\ensuremath{{n\in{\mathbb N}}}$ in $\ensuremath{\mathbb N}$ such that
\begin{equation}
\label{e:xnx0}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{x_n}{u} =\scal{x_0}{u} +l_{1,n}\scal{b_1}{u} +l_{2,n}\scal{b_2}{u}
\end{equation}
and that
\begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad l_{1,n} +l_{2,n} =n.
\end{equation}
Combining with \ref{p:2points_bounded}, we obtain that
\begin{equation}
l_{1,n}\scal{b_1}{u} +(n -l_{1,n})\scal{b_2}{u} =l_{1,n}\scal{b_1}{u} +l_{2,n}\scal{b_2}{u}
=\scal{x_n}{u} -\scal{x_0}{u}
\end{equation}
is bounded. It follows that
\begin{equation}
\frac{l_{1,n}}{n}\scal{b_1-b_2}{u}+ \scal{b_2}{u}\to 0 \quad\text{as~} n\to +\infty,
\end{equation}
which yields
\begin{equation}
\frac{l_{1,n}}{n}\to \frac{\scal{b_2}{u}}{\scal{b_2-b_1}{u}}\in \left]0, 1\right[ \quad\text{and}\quad
\frac{l_{2,n}}{n}= 1- \frac{l_{1,n}}{n}\to \frac{-\scal{b_1}{u}}{\scal{b_2 -b_1}{u}}\in \left]0, 1\right[
\end{equation}
as $n\to +\infty$.
\end{proof}
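Proposition~\ref{p:2points}\ref{p:2points_x+} reduces the dynamics to a scalar recursion for $\scal{x_n}{u}$ driven by the selection rule \eqref{e:kn+}. The following Python sketch (ours) tracks this recursion directly; at ties it selects the index $2$, which is one admissible choice of the projector onto $B$.
\begin{verbatim}
import numpy as np

def inner_product_orbit(x0, b1, b2, u, n_steps):
    """Track sigma_n = <x_n, u> and k(n) via (e:x,u) and (e:kn+),
    without forming the iterates x_n themselves; ties select k = 2."""
    x0, b1, b2, u = (np.asarray(v, dtype=float) for v in (x0, b1, b2, u))
    beta1, beta2 = np.dot(b1, u), np.dot(b2, u)      # beta1 < 0 < beta2
    beta = -np.dot(b1 - b2, b1 - b2) / (2.0 * np.dot(b2 - b1, u))
    sigma = np.dot(x0, u)                            # sigma_0 = <x_0, u>
    RAx0 = x0 - 2.0 * sigma * u                      # R_A x_0
    k = 1 if np.linalg.norm(b1 - RAx0) < np.linalg.norm(b2 - RAx0) else 2
    sigma += beta1 if k == 1 else beta2              # sigma_1
    sigmas, ks = [sigma], [k]
    for _ in range(n_steps - 1):
        threshold = beta - beta1 if k == 1 else -beta - beta2
        k = 1 if sigma > threshold else 2            # rule (e:kn+)
        sigma += beta1 if k == 1 else beta2
        sigmas.append(sigma); ks.append(k)
    return sigmas, ks
\end{verbatim}
In agreement with Proposition~\ref{p:2points}\ref{p:2points_coeffs}, the running frequency \texttt{ks.count(1) / len(ks)} of the index $1$ approaches $\scal{b_2}{u}/\scal{b_2-b_1}{u}$.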
\begin{theorem}[cycling and rationality]
\label{t:2points}
The DR sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ cycles after a certain number of
steps regardless of the starting point
if and only if $d_A(b_1)/d_A(b_2)\in \mathbb{Q}$.
\end{theorem}
\begin{proof}
First, by Fact~\ref{f:A}\ref{f:A_d}, $d_A= |\scal{\cdot}{u}|$, which yields
\begin{equation}
\label{e:dAb1b2}
d_A(b_1)= -\scal{b_1}{u} \quad\text{and}\quad d_A(b_2)= \scal{b_2}{u}.
\end{equation}
We also note from Proposition~\ref{p:2points}\ref{p:2points_bounded}--\ref{p:2points_x+} that
\begin{equation}
\label{e:bounded}
\text{$(|\scal{x_n}{u}|)_\ensuremath{{n\in{\mathbb N}}}$ is bounded},
\end{equation}
that
\begin{equation}
\label{e:xn}
(\forall n\in \ensuremath{\mathbb N}^*)\quad x_n= \scal{x_{n-1}}{u}u+ b_{k(n)},
\end{equation}
and that
\begin{equation}
\label{e:xnu}
(\forall n\in \ensuremath{\mathbb N}^*)\quad \scal{x_n}{u}= \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u},
\end{equation}
where $k(n)\in \{1, 2\}$.
``$\Leftarrow$'': Assume that $d_A(b_1)/d_A(b_2)\in \mathbb{Q}$. Then there exist $q_1, q_2\in \ensuremath{\mathbb N}^*$ such that $q_1d_A(b_1) =q_2d_A(b_2)$, or equivalently (using \eqref{e:dAb1b2}),
\begin{equation}
\label{e:suff}
q_1\scal{b_1}{u} +q_2\scal{b_2}{u} =0.
\end{equation}
It follows from Proposition~\ref{p:2points}\ref{p:2points_coeffs} that
\begin{equation}
\label{e:xnx0'}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{x_n}{u} =\scal{x_0}{u} +l_{1,n}\scal{b_1}{u} +l_{2,n}\scal{b_2}{u}
\end{equation}
with $(l_{1,n}, l_{2,n})\in \ensuremath{\mathbb N}^2$.
By \eqref{e:suff}, whenever $l_{1,n} \geq q_1$ and $l_{2,n} \geq q_2$, we have
\begin{equation}
\scal{x_n}{u} = \scal{x_0}{u} +(l_{1,n} -q_1)\scal{b_1}{u} +(l_{2,n} -q_2)\scal{b_2}{u}.
\end{equation}
By repeatedly subtracting $(q_1, q_2)$ from $(l_{1,n}, l_{2,n})$ if necessary, we may thus replace
$(l_{1,n}, l_{2,n})$ by pairs $(l_{1,n}', l_{2,n}')\in \ensuremath{\mathbb N}^2$ which still satisfy \eqref{e:xnx0'}
and, in addition, obey $l_{1,n}' <q_1$ or $l_{2,n}' <q_2$ for every $n$.
For each $n$, at least one of the terms $l_{1,n}'\scal{b_1}{u}$ and $l_{2,n}'\scal{b_2}{u}$ then lies in a fixed bounded set.
This together with \eqref{e:bounded} and \eqref{e:xnx0'} implies that
both $(l_{1,n}'\scal{b_1}{u})_\ensuremath{{n\in{\mathbb N}}}$ and $(l_{2,n}'\scal{b_2}{u})_\ensuremath{{n\in{\mathbb N}}}$ are bounded, and so are $(l_{1,n}')_\ensuremath{{n\in{\mathbb N}}}$ and $(l_{2,n}')_\ensuremath{{n\in{\mathbb N}}}$.
Hence, there exist $L_1, L_2 \in \ensuremath{\mathbb N}$ such that
\begin{equation}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad 0 \leq l_{1,n}' \leq L_1 \quad\text{and}\quad 0 \leq l_{2,n}' \leq L_2.
\end{equation}
By combining with \eqref{e:xn} and \eqref{e:xnx0'}, $(\forall n\in \ensuremath{\mathbb N}^*)$ $x_n\in S$, where
\begin{equation}
S :=\menge{\scal{x_0}{u}u +l_1'\scal{b_1}{u}u +l_2'\scal{b_2}{u}u +b_k}
{l_1' =0, \dots, L_1,\ l_2' =0, \dots, L_2,\ k =1, 2}.
\end{equation}
Since $S$ is a finite set, there exist $n_0\in \ensuremath{\mathbb N}$ and $m\in \ensuremath{\mathbb N}^*$ such that $x_{n_0} =x_{n_0+m}$.
It follows that the sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ cycles between $m$ points $x_{n_0}, \dots, x_{n_0+m-1}$ from $n_0$ onwards.
``$\Rightarrow$'': Assume that $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ cycles between $m$ points from $n_0 \in \ensuremath{\mathbb N}$ onwards, i.e., $(\forall n \geq n_0)$ $x_{n+m} =x_n$.
By \eqref{e:xnu},
\begin{equation}
\scal{x_{n_0}}{u} +\sum_{n=n_0}^{n_0+m-1} \scal{b_{k(n)}}{u} =\scal{x_{n_0}}{u}.
\end{equation}
There thus exist $q_1, q_2 \in \ensuremath{\mathbb N}$ such that $q_1 +q_2 =m >0$ and $q_1\scal{b_1}{u} +q_2\scal{b_2}{u} =0$.
Combining with \eqref{e:dAb1b2} implies that $q_1, q_2\neq 0$ and that $d_A(b_1)/d_A(b_2)= q_2/q_1\in \mathbb{Q}$.
\end{proof}
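Theorem~\ref{t:2points} can also be probed experimentally. The following Python sketch (ours) is restricted to the one-dimensional setting $X=\ensuremath{\mathbb R}$, $A=\{0\}$, $u=1$ with rational data, so that exact arithmetic via \texttt{fractions.Fraction} is available; it reports whether the state $(\scal{x_n}{u}, k(n))$ eventually repeats, and with which period.
\begin{verbatim}
from fractions import Fraction

def cycles_in_R(b1, b2, x0, max_steps=10000):
    """Exact check on X = R with A = {0}, u = 1 and rational b1 < 0 < b2:
    report whether the state (sigma_n, k(n)) repeats, and the period.
    Ties are resolved as indicated in the inline comments
    (one admissible selection of P_B)."""
    b1, b2, x0 = Fraction(b1), Fraction(b2), Fraction(x0)
    beta = -(b2 - b1) / 2                    # beta from (e:32) with u = 1
    # k(1) selects the point of B nearest to R_A x0 = -x0 (tie -> b1)
    k = 1 if abs(b1 + x0) <= abs(b2 + x0) else 2
    sigma = x0 + (b1 if k == 1 else b2)      # sigma_1 = <x_1, u>
    seen = {}
    for n in range(1, max_steps + 1):
        if (sigma, k) in seen:
            return True, n - seen[(sigma, k)]            # period
        seen[(sigma, k)] = n
        threshold = beta - b1 if k == 1 else -beta - b2
        k = 1 if sigma > threshold else 2                # tie -> index 2
        sigma = sigma + (b1 if k == 1 else b2)
    return False, None

print(cycles_in_R(-1, 2, 0))   # (True, 3): B = {-1, 2} gives a 3-cycle
\end{verbatim}
For irrational data such as $b_2=\sqrt{2}$, exact rational arithmetic is not available, and Theorem~\ref{t:2points} guarantees that no cycling occurs.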
\section{Hyperplane and doubleton: closed-form expressions}
\label{s:closed-form}
In this final section, we refine the previously considered case
with the aim of obtaining \emph{closed-form} expressions for the
terms of the DR sequence $(x_n)_\ensuremath{{n\in{\mathbb N}}}$.
Recall from Proposition~\ref{p:2points}\ref{p:2points_x+} that
\begin{equation}
\label{e:x,u'}
(\forall n\in \ensuremath{\mathbb N}^*)\quad x_n= \scal{x_{n-1}}{u}u+ b_{k(n)}
\quad\text{and}\quad \scal{x_n}{u}= \scal{x_{n-1}}{u}+ \scal{b_{k(n)}}{u},
\end{equation}
where $k(n)\in \{1, 2\}$ and where
\begin{subequations}
\begin{align}
k(n)= 1\ \&\ \scal{x_n}{u}> \beta-\beta_1 &\implies k(n+1)= 1, \label{e:kn11} \\
k(n)= 1\ \&\ \scal{x_n}{u}< \beta-\beta_1 &\implies k(n+1)= 2, \label{e:kn12} \\
k(n)= 2\ \&\ \scal{x_n}{u}> -\beta-\beta_2 &\implies k(n+1)= 1. \label{e:kn21}
\end{align}
\end{subequations}
We note here that if $k(n)= 1$ and $\scal{x_n}{u}=
\beta-\beta_1$, then both $1$ and $2$ are acceptable values for
$k(n+1)$;
for the sake of simplicity, we choose $k(n+1)= 2$ in this case.
Define
\begin{subequations}
\begin{empheq}[box=\mybluebox]{align}
S_1&:= \Menge{x_n}{n\in \ensuremath{\mathbb N}^*,\ k(n)= 1,\ \scal{x_n}{u}\in \left]\beta, \beta+\beta_2\right]},\\
S_2&:= \Menge{x_n}{n\in \ensuremath{\mathbb N}^*,\ k(n)= 2,\ \scal{x_n}{u}\in \left]\beta+\beta_2, \beta-\beta_1+\beta_2\right]}.
\end{empheq}
\end{subequations}
\begin{proposition}
\label{p:segments}
Let $n\in \ensuremath{\mathbb N}^*$. Then the following hold:
\begin{enumerate}
\item
\label{p:segments_11}
If $k(n)= 1$ and $\scal{x_n}{u}\in \left]\beta-\beta_1, \beta+\beta_2\right]$, then
\begin{equation}
k(n+1)= 1 \quad\text{and}\quad \scal{x_{n+1}}{u}= \scal{x_n}{u}+ \beta_1\in \left]\beta, \beta+\beta_2\right].
\end{equation}
\item
\label{p:segments_12}
If $k(n)= 1$ and $\scal{x_n}{u}\in \left]\beta, \beta-\beta_1\right]$, then
\begin{equation}
k(n+1)= 2 \quad\text{and}\quad \scal{x_{n+1}}{u}= \scal{x_n}{u}+ \beta_2\in \left]\beta+\beta_2, \beta-\beta_1+\beta_2\right].
\end{equation}
\item
\label{p:segments_21}
If $k(n)= 2$, $\scal{x_n}{u}\in \left]\beta+\beta_2,
\beta-\beta_1+\beta_2\right]$ and
$\beta+\beta_2\geq 0$, then
\begin{equation}
k(n+1)= 1 \quad\text{and}\quad \scal{x_{n+1}}{u}= \scal{x_n}{u}+ \beta_1\in \left]\beta+\beta_1+\beta_2, \beta+\beta_2\right]\subseteq \left]\beta, \beta+\beta_2\right].
\end{equation}
\end{enumerate}
Consequently,
\begin{equation}
\big(\beta+\beta_2\geq 0
\;\text{and}\;
x_n\in S_1\cup S_2\big) \implies x_{n+1}\in S_1\cup S_2.
\end{equation}
\end{proposition}
\begin{proof}
Notice from \eqref{e:x,u'} that
\begin{equation}
\label{e:xu+}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{x_{n+1}}{u}= \scal{x_n}{u}+ \scal{b_{k(n+1)}}{u}.
\end{equation}
\ref{p:segments_11}: Combine \eqref{e:kn11} and \eqref{e:xu+}
while noting that $\beta+\beta_1+\beta_2< \beta+\beta_2$ by
\eqref{e:32}.
\ref{p:segments_12}: Combine \eqref{e:kn12} and \eqref{e:xu+}.
\ref{p:segments_21}:
By \eqref{e:32} and the Cauchy--Schwarz inequality,
we obtain
\begin{equation}
0< \beta_2- \beta_1= \scal{b_2-b_1}{u}\leq \|b_2-b_1\|\|u\|= \|b_2-b_1\|
\end{equation}
and
\begin{equation}
\label{e:inequality}
\beta_2- \beta_1\leq \frac{\|b_2-b_1\|^2}{\beta_2-\beta_1}= -2\beta.
\end{equation}
Now assume that $\beta+ \beta_2\geq 0$. Then $\beta_1+ \beta_2\geq (2\beta+ \beta_2)+ \beta_2= 2(\beta+ \beta_2)\geq 0$, and hence
$\left]\beta+\beta_1+\beta_2, \beta+\beta_2\right]\subseteq \left]\beta, \beta+\beta_2\right]$.
It follows from $\scal{x_n}{u}> \beta+\beta_2\geq 0$ that $\scal{x_n}{u}> -\beta-\beta_2$. Now use \eqref{e:kn21} and \eqref{e:xu+}.
Finally, assume that $x_n\in S_1\cup S_2$. If $x_n\in S_2$, then we have from \ref{p:segments_21} that $x_{n+1}\in S_1$. If $x_n\in S_1$ and $\scal{x_n}{u}\in \left]\beta, \beta-\beta_1\right]$, then, by \ref{p:segments_12}, $x_{n+1}\in S_2$. If $x_n\in S_1$ and $\scal{x_n}{u}\in \left]\beta-\beta_1, \beta+\beta_2\right]$, then $x_{n+1}\in S_1$ due to \ref{p:segments_11}. Altogether, $x_{n+1}\in S_1\cup S_2$.
\end{proof}
\begin{theorem}[closed-form expressions]
\label{t:closedform}
Suppose that $\beta+\beta_2\geq 0$ and that $x_1\in S_1\cup S_2$.
Then
\begin{subequations}
\label{e:xu}
\begin{align}
(\forall n\in \ensuremath{\mathbb N}^*)\quad \scal{x_n}{u}
&= \scal{x_0}{u}+ n\beta_1+ \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor (\beta_2- \beta_1) \\
&= \scal{x_0}{u}- \left\lfloor \frac{-\scal{x_0}{u}+\beta-\beta_1-(n-1)\beta_2}{\beta_2-\beta_1} \right\rfloor \beta_1 \notag \\
&\hspace{3.5cm}+ \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor \beta_2
\end{align}
\end{subequations}
and
\begin{equation}
\label{e:x}
(\forall n\in \ensuremath{\mathbb N}^*)\quad x_n= \scal{x_{n-1}}{u}u+ b_{k(n)},
\end{equation}
where
\begin{subequations}
\label{e:kn}
\begin{align}
(\forall n\in \ensuremath{\mathbb N}^*)\quad
k(n)&= \begin{cases}
1 &\text{if~} \scal{x_n}{u}\leq \beta+\beta_2, \\
2 &\text{if~} \scal{x_n}{u}> \beta+\beta_2
\end{cases} \\
&= \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor- \left\lfloor \frac{-\scal{x_0}{u}+\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor+ 1.
\end{align}
\end{subequations}
\end{theorem}
\begin{proof}
Note that \eqref{e:x} follows from \eqref{e:x,u}.
According to Proposition~\ref{p:2points}\ref{p:2points_coeffs},
\begin{equation}
\label{e:xnx0''}
(\forall\ensuremath{{n\in{\mathbb N}}})\quad \scal{x_n}{u}= \scal{x_0}{u}+ (n-l_n)\beta_1+ l_n\beta_2 \quad\text{with}\quad l_n\in \ensuremath{\mathbb N}.
\end{equation}
Since $x_1\in S_1\cup S_2$, Proposition~\ref{p:segments} yields
\begin{equation}
\label{e:xnS1S2}
(\forall n\in \ensuremath{\mathbb N}^*)\quad x_n\in S_1\cup S_2.
\end{equation}
Let $n\in \ensuremath{\mathbb N}^*$.
It follows from \eqref{e:32} and
\eqref{e:xnS1S2} that $\scal{x_n}{u}\in \left]\beta, \beta-\beta_1+\beta_2\right]$, which, combined with \eqref{e:xnx0''}, gives
\begin{equation}
\frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1}- 1<l_n\leq \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1}.
\end{equation}
Therefore,
\begin{equation}
l_n= \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor
\end{equation}
and
\begin{equation}
n- l_n= -\left\lfloor \frac{-\scal{x_0}{u}+\beta-\beta_1-(n-1)\beta_2}{\beta_2-\beta_1} \right\rfloor,
\end{equation}
which imply \eqref{e:xu}.
To get \eqref{e:x} and \eqref{e:kn}, we distinguish two cases.
\emph{Case 1}: $\scal{x_n}{u}\leq \beta+ \beta_2$.
On the one hand, by \eqref{e:xnS1S2}
we must have $x_n\in S_1$ and $k(n)= 1$.
On the other hand, from $\scal{x_n}{u}\leq \beta+ \beta_2$ and \eqref{e:xu}, noting that $\beta_1< 0$, we obtain that
\begin{subequations}
\begin{align}
\left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor
&\leq \frac{-\scal{x_0}{u}+\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \\
&< \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1}
\end{align}
\end{subequations}
which yields
\begin{equation}
\left\lfloor \frac{-\scal{x_0}{u}+\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor
= \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor,
\end{equation}
hence \eqref{e:x} and \eqref{e:kn} hold.
\emph{Case 2}: $\scal{x_n}{u}> \beta+ \beta_2$.
By \eqref{e:xnS1S2}, $x_n\in S_2$ and $k(n)= 2$.
Again using \eqref{e:xu} and noting that $\beta_1< 0< \beta_2$, we derive that
\begin{subequations}
\begin{align}
\left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor
&> \frac{-\scal{x_0}{u}+\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \\
&= \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1}+ \frac{\beta_1}{\beta_2-\beta_1} \\
&> \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor- 1.
\end{align}
\end{subequations}
It follows that
\begin{equation}
\left\lfloor \frac{-\scal{x_0}{u}+\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor
= \left\lfloor \frac{-\scal{x_0}{u}+\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor- 1,
\end{equation}
and we have \eqref{e:x} and \eqref{e:kn}. The proof is complete.
\end{proof}
\begin{corollary}
\label{c:closedform}
Suppose that $\beta_1> \beta\geq -\beta_2$, that $x_0\in A$, and that $2\scal{x_0}{b_1-b_2}> \|b_1\|^2- \|b_2\|^2$. Then
\begin{equation}
\label{e:simplify}
(\forall n\in \ensuremath{\mathbb N})\quad \scal{x_n}{u}= n\beta_1+ \left\lfloor \frac{\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor (\beta_2- \beta_1)
\end{equation}
and
\begin{equation}
\label{e:x_simplify}
(\forall n\in \ensuremath{\mathbb N}^*)\quad x_n= \left( (n-1)\beta_1+ \left\lfloor \frac{\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor (\beta_2- \beta_1) \right)u+ b_{k(n)},
\end{equation}
where
\begin{equation}
\label{e:kn_simplify}
(\forall n\in \ensuremath{\mathbb N}^*)\quad k(n)= \left\lfloor \frac{\beta-(n+1)\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor- \left\lfloor \frac{\beta-n\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor+ 1.
\end{equation}
\end{corollary}
\begin{proof}
From $x_0\in A$, we have that $\scal{x_0}{u}= 0$ and also $R_Ax_0= P_Ax_0= x_0$.
Since $2\scal{x_0}{b_1-b_2}> \|b_1\|^2- \|b_2\|^2$,
it holds that $\|b_1-x_0\|^2< \|b_2-x_0\|^2$, which yields $P_BR_Ax_0= P_Bx_0= b_1$.
Therefore, $k(1)= 1$, $x_1= x_0- P_Ax_0 +P_BR_Ax_0= b_1$, and $\scal{x_1}{u}= \scal{b_1}{u}= \beta_1$.
On the other hand, it follows from $\beta_1> \beta\geq -\beta_2$
and $\beta_1< 0$ that $\beta+ \beta_2\geq 0$ and that $\beta<
\beta_1<0 \leq \beta+ \beta_2$. We deduce that $\scal{x_1}{u}=
\beta_1\in \left]\beta, \beta+ \beta_2 \right[$, which implies
that $x_1\in S_1$. Using Theorem~\ref{t:closedform}, we get
\eqref{e:simplify} for all $n\in \ensuremath{\mathbb N}^*$. When $n= 0$,
the right-hand side of \eqref{e:simplify} becomes
\begin{equation}
\left\lfloor \frac{\beta-\beta_1+\beta_2}{\beta_2-\beta_1} \right\rfloor (\beta_2- \beta_1) =0= \scal{x_0}{u}
\end{equation}
since $0< \beta-\beta_1+\beta_2< \beta_2-\beta_1$. Hence, \eqref{e:simplify} holds for all $n\in \ensuremath{\mathbb N}$, which together with the second part of Theorem~\ref{t:closedform} completes the proof.
\end{proof}
\begin{example}
\label{ex:R}
Suppose that $X= \ensuremath{\mathbb R}$, that $A= \{0\}$, and that $B= \{b_1, b_2\}$ with $b_1= -1$ and $b_2= r$, where $r\in \ensuremath{\mathbb R}$, $r> 1$. Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$ with starting point $x_0= 0$. Then
\begin{equation}
(\forall n\in \ensuremath{\mathbb N})\quad x_n=
-n+ \left\lfloor \frac{n}{r+1} + \frac{1}{2} \right\rfloor (r+1).
\end{equation}
\end{example}
\begin{proof}
Let $u= 1$. Then $A= \{u\}^\perp$ and $(\forall x\in \ensuremath{\mathbb R})$ $\scal{x}{u}= x$.
We have that $\beta_1= \scal{b_1}{u}= -1< 0$, $\beta_2= \scal{b_2}{u}= r >0$, and, since $r> 1$,
\begin{equation}
-1=\beta_1> \beta= \frac{|b_1-b_2|^2}{2(\beta_1-\beta_2)}=
-\frac{(r+1)^2}{2(r+1)}= -\frac{r+1}{2}> -\beta_2=-r.
\end{equation}
It is clear that $x_0= 0\in A$ and that
$2\scal{x_0}{b_1-b_2}= 0> 1- r^2= |b_1|^2- |b_2|^2$.
Now applying Corollary~\ref{c:closedform} yields
\begin{equation}
(\forall n\in \ensuremath{\mathbb N})\quad x_n= \scal{x_n}{u}= -n+ \left\lfloor \frac{-\frac{r+1}{2}+(n+1)+r}{r+1} \right\rfloor (r+1),
\end{equation}
and the conclusion follows.
\end{proof}
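As a quick numerical sanity check of Example~\ref{ex:R} (ours, not needed for the proof), one may compare the closed form with the directly computed DR iterates, for instance with $r=2.5$:
\begin{verbatim}
import math

def dr_orbit_exR(r, n_steps):
    """Direct DR iteration for Example ex:R (X = R, A = {0}, B = {-1, r}),
    starting at x0 = 0; nearest-point ties are broken towards -1."""
    xs = [0.0]
    for _ in range(n_steps):
        x = xs[-1]
        y = -x                                   # R_A x (since P_A x = 0)
        b = -1.0 if abs(-1.0 - y) <= abs(r - y) else r
        xs.append(x + b)                         # T x = x - P_A x + b
    return xs

def closed_form_exR(r, n):
    """Closed form from Example ex:R."""
    return -n + math.floor(n / (r + 1.0) + 0.5) * (r + 1.0)

r = 2.5
orbit = dr_orbit_exR(r, 20)
assert all(abs(orbit[n] - closed_form_exR(r, n)) < 1e-9 for n in range(21))
\end{verbatim}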
\begin{example}
\label{ex:R2}
Suppose that $X= \ensuremath{\mathbb R}^2$, that $A= \ensuremath{\mathbb R}\times \{0\}$, and that $B= \{b_1, b_2\}$ with $b_1= (0, -1)$ and $b_2= (1, r)$, where $r\in \ensuremath{\mathbb R}$, $r\geq \sqrt{2}$. Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$ with starting point $x_0= (\alpha, 0)$, where $\alpha\in \ensuremath{\mathbb R}$, $\alpha< r^2/2$.
Then $(\forall n\in \ensuremath{\mathbb N}^*)$:
\begin{multline}
x_n= \left(\left\lfloor \frac{n}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor- \left\lfloor \frac{n-1}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor, -n+ \left\lfloor \frac{n}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor (r+1) \right).
\end{multline}
\end{example}
\begin{proof}
In this case, $A= \{u\}^\perp$ with $u= (0, 1)$, $\beta_1= \scal{b_1}{u}= -1< 0$, $\beta_2= \scal{b_2}{u}= r >0$, and
\begin{equation}
\beta_1= -1> \beta= \frac{\|b_1-b_2\|^2}{2(\beta_1-\beta_2)}= -\frac{1+ (r+1)^2}{2(r+1)}= -1- \frac{r^2}{2(r+1)}.
\end{equation}
On the one hand, $\beta+ \beta_2= \frac{r^2-2}{2(r+1)}\geq 0$.
On the other hand, it is straightforward to see that $x_0\in A$
and that $2\scal{x_0}{b_1-b_2} = -2\alpha>-r^2= \|b_1\|^2 -\|b_2\|^2$.
Applying Corollary~\ref{c:closedform}, we obtain that
\begin{subequations}
\begin{align}
(\forall n\in \ensuremath{\mathbb N}^*)\quad \scal{x_n}{u}&= -n+ \left\lfloor \frac{-1-\frac{r^2}{2(r+1)}+(n+1)+r}{r+1} \right\rfloor (r+1) \\
&= -n+ \left\lfloor \frac{n}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor (r+1).
\end{align}
\end{subequations}
Now for each $n\in \ensuremath{\mathbb N}^*$, writing $x_n= (\alpha_n, \beta_n)\in \ensuremath{\mathbb R}^2$, we observe that $\beta_n= \scal{x_n}{u}$ and, by \eqref{e:x_simplify}, $\alpha_n$ is actually the first coordinate of $b_{k(n)}$, that is,
\begin{equation}
\alpha_n= \begin{cases}
0 &\text{if~} k(n)= 1, \\
1 &\text{if~} k(n)= 2,
\end{cases}
\end{equation}
which combined with \eqref{e:kn_simplify} implies that
\begin{equation}
\alpha_n= k(n)- 1= \left\lfloor \frac{n}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor- \left\lfloor \frac{n-1}{r+1}+ \frac{r^2+2r}{2(r+1)^2} \right\rfloor.
\end{equation}
The conclusion follows.
\end{proof}
Let us specialize Example~\ref{ex:R} further and
also illustrate Theorem~\ref{t:2points}.
\begin{example}[rational case]
Suppose that $X= \ensuremath{\mathbb R}$, that $A= \{0\}$, and that $B= \{-1,
{2}\}$. Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$ with starting point $x_0= 0$. Then
\begin{equation}
(\forall n\in \ensuremath{\mathbb N})\quad x_n= -n+ 3 \left\lfloor
\frac{n}{3}+\frac{1}{2} \right\rfloor
\end{equation}
and $(x_n)_\ensuremath{{n\in{\mathbb N}}} = \big(
0,
-1,
1,
0,
-1,
1,
0,
-1,
1,
\ldots
\big)$ is periodic. (See also \cite[Remark~6]{BN14} for another
cyclic example.)
\end{example}
\begin{proof}
Apply Example~\ref{ex:R} with $b_1=-1$ and $b_2=2$.
\end{proof}
\begin{example}[irrational case]
Suppose that $X= \ensuremath{\mathbb R}$, that $A= \{0\}$, and that $B= \{-1,
\sqrt{2}\}$. Let $(x_n)_\ensuremath{{n\in{\mathbb N}}}$ be a DR sequence with respect to $(A, B)$ with starting point $x_0= 0$. Then
\begin{equation}
(\forall n\in \ensuremath{\mathbb N})\quad x_n= -n+ \left\lfloor
\frac{n}{\sqrt{2}+1}+\frac{1}{2} \right\rfloor (\sqrt{2}+1)
\end{equation}
and $(x_n)_\ensuremath{{n\in{\mathbb N}}} = \big(
0,
-1,
-1+\sqrt{2},
-2+\sqrt{2},
-2+2\sqrt{2},
-3+2\sqrt{2},
-4+2\sqrt{2},
-4+3\sqrt{2},
\ldots
\big)$ which is not periodic.
\end{example}
\begin{proof}
Apply Example~\ref{ex:R} with $b_1=-1$ and $b_2=\sqrt{2}$.
\end{proof}
\begin{remark}
Some comments on the last examples are in order.
\begin{enumerate}
\item
We note that the last examples feature terms
resembling (inhomogeneous) Beatty sequences; see \cite{Hav12}.
In fact, let us disclose that we started this journey
by experimentally investigating Example~\ref{ex:R2}
which eventually led to the more general analysis in this paper.
Specifically, in Example~\ref{ex:R2}, if $r= \sqrt{2}$, then $x_n= (u_n, -v_n+w_n\sqrt{2})$,
where the integer sequences
\begin{subequations}
\begin{align}
u_n&:= \lfloor (n+1)(\sqrt{2}-1) \rfloor- \lfloor n(\sqrt{2}-1) \rfloor= \lfloor (n+1)\sqrt{2} \rfloor- \lfloor n\sqrt{2} \rfloor- 1,\\
v_n&:= n-\lfloor (n+1)(\sqrt{2}-1) \rfloor= \lfloor (n+1)(2-\sqrt{2}) \rfloor,\\
w_n&:= \lfloor (n+1)(\sqrt{2}-1) \rfloor= \lfloor (n+1)\sqrt{2} \rfloor- n- 1
\end{align}
\end{subequations}
are respectively listed as \cite{OEIS_A188037}, \cite{OEIS_A074840}, and \cite{OEIS_A097508}
(shifted by one) in the \emph{On-Line Encyclopedia of Integer Sequences}.
\item
Finally, let us contrast the DR algorithm to the method of
alternating projections (see, e.g., \cite{BB96} and \cite{BC17})
in the setting of Example~\ref{ex:R}: indeed, the
sequence $(x_0,P_Ax_0,P_BP_Ax_0,\ldots)$ is simply
$(0,0,-1,0,-1,0,\ldots)$ regardless of whether or not $r>1$ is irrational.
It was also suggested in \cite{BDNP16a} that,
for the convex feasibility problem, the DR algorithm
outperforms the method of alternating projections
in the absence of constraint qualifications.
\end{enumerate}
\end{remark}
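To make the comparison in the preceding remark concrete, here is a corresponding sketch (ours) of the method of alternating projections in the setting of Example~\ref{ex:R}; it reproduces the sequence $(0,0,-1,0,-1,0,\ldots)$ for any $r>1$.
\begin{verbatim}
def map_orbit_exR(r, n_steps):
    """Alternating projections for A = {0}, B = {-1, r} with r > 1,
    starting at x0 = 0: the sequence (x0, P_A x0, P_B P_A x0, ...)."""
    xs = [0.0]
    for i in range(n_steps):
        x = xs[-1]
        if i % 2 == 0:
            xs.append(0.0)                                         # P_A x
        else:
            xs.append(-1.0 if abs(-1.0 - x) <= abs(r - x) else r)  # P_B x
    return xs
\end{verbatim}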
\section{Conclusion}
\label{s:conclusion}
In this paper, we provided a detailed analysis of the
Douglas--Rachford algorithm for the case when one set is a
hyperplane and the other a doubleton. We
characterized cycling of this method in terms of the ratio of
the distances of the points to the hyperplane. Moreover, we
presented closed-form expressions of the actual iterates. The
results obtained show the surprising complexity of this
algorithm
when compared to, e.g., the method of alternating projections.
\subsection*{Acknowledgments}
HHB was partially supported by the Natural Sciences and
Engineering Research Council of Canada.
MND was partially supported by the Australian Research Council.
\end{document} |
\begin{document}
\title[Massively parallel computations in algebraic geometry]{
Towards Massively Parallel Computations in Algebraic Geometry
}
\author{Janko~B\"ohm}
\address{Janko~B\"ohm, Department of Mathematics, University of Kaiserslautern, Erwin-Schr\"odinger-Str., 67663 Kaiserslautern, Germany}
\email{[email protected]}
\author{Wolfram~Decker}
\address{Wolfram~Decker, Department of Mathematics, University of Kaiserslautern, Erwin-Schr\"odinger-Str., 67663 Kaiserslautern, Germany}
\email{[email protected]}
\author{Anne~Fr\"uhbis-Kr\"uger}
\address{Anne~Fr\"uhbis-Kr\"uger, Institut f\"ur algebraische Geometrie, Leibniz Universit\"at Hannover, Welfengarten 1, 30167 Hannover, Germany}
\email{[email protected]}
\author{Franz-Josef Pfreundt}
\address{Franz-Josef Pfreundt, Competence Center High Performance Computing, Fraunhofer ITWM, Fraunhofer-Platz 1, 67663 Kaiserslautern, Germany}
\email{[email protected]}
\author{Mirko Rahn}
\address{Mirko Rahn, Competence Center High Performance Computing, Fraunhofer ITWM, Fraunhofer-Platz 1, 67663 Kaiserslautern, Germany}
\email{[email protected]}
\author{Lukas~Ristau}
\address{Lukas~Ristau, Department of Mathematics, University of Kaiserslautern, Erwin-Schr\"odinger-Str., and Competence Center High Performance Computing, Fraunhofer ITWM, Fraunhofer-Platz 1, 67663 Kaiserslautern, Germany, }
\email{[email protected]}
\renewcommand\shortauthors{B\"ohm, J. et al}
\begin{abstract}
Introducing parallelism and exploring its use is still a fundamental challenge for the computer algebra community.
In high performance numerical simulation, on the other hand, transparent environments for distributed
computing which follow the principle of separating coordination and computation have been a success story
for many years. In this paper, we explore the potential of using this principle in the context of computer
algebra. More precisely, we combine two well-established systems: The mathematics we are interested
in is implemented in the computer algebra system {\textsc{Singular}}, whose focus is on polynomial
computations, while the coordination is left to the workflow management system GPI-Space, which relies
on Petri nets as its mathematical modeling language, and has been successfully used for coordinating
the parallel execution (autoparallelization) of academic codes as well as for commercial software in
application areas such as seismic data processing. The result of our efforts is a major step towards a
framework for massively parallel computations in the application areas of {\textsc{Singular}},
specifically in commutative algebra and algebraic geometry. As a first test case for this framework, we have
modeled and implemented a hybrid smoothness test for algebraic varieties which combines ideas from
Hironaka's celebrated desingularization proof with the classical Jacobian criterion. Applying our implementation
to two examples originating from current research in algebraic geometry, one of which cannot be handled
by other means, we illustrate the behavior of the smoothness test within our framework,
and investigate how the computations scale up to 256 cores.
\end{abstract}
\keywords{Computer algebra, Singular, distributed computing, GPI-Space, Petri nets,
computational algebraic geometry, Hironaka desingularization, smoothness test,
surfaces of general type}
\thanks{This work has been supported by the German Research Foundation (DFG) through SPP 1489 and TRR 195, Project II.5.}
\maketitle
\section{Introduction}
Experiments based on calculating examples have always played a key role in mathematical research.
Advanced hardware structures paired with sophisticated mathematical software tools allow for far reaching experiments
which were previously unimaginable. In the realm of algebra and its applications, where exact calculations are
inevitable, the desired software tools are provided by computer algebra systems. In order to take full advantage of
modern multicore computers and high-performance clusters, the
computer algebra community must provide parallelism in its systems.
This will boost the performance of the systems to a new level, thus extending the scope of applications significantly.
However, while there has been a lot of progress in this direction in numerical computing, achieving parallelization
in symbolic computing is still a tremendous challenge both from a mathematical and technical point of view.
On the mathematical side, there are some algorithms whose basic strategy is inherently parallel, whereas many
others are sequential in nature. The systematic design and implementation of parallel algorithms is a major
task for the years to come. On the technical side, models for parallel computing have long been studied in
computer science. These differ in several fundamental aspects. Roughly, two basic paradigms can be distinguished
according to assumptions on the underlying hardware. The shared memory based models allow several different
computational processes (called threads) to access the same data in memory, while the distributed models run
many independent processes which need to communicate their progress to one or several of the other processes.
Creating the prerequisites for writing parallel code in a computer algebra system originally designed for sequential
processes requires considerable efforts which affect all levels of the system.
In this paper, we explore an alternative way of introducing parallelism into computer algebra
computations. This approach is non-intrusive and allows for distributed computing. It is based
on the principle of separating coordination and computation, a principle which has already been pursued with great success
in high performance numerical simulation. Specifically, we rely on the workflow management system GPI-Space
\cite{GPI} for coordination, while the mathematics we are interested in is implemented in the computer algebra
system {\sc Singular} \cite{Singular}.
{\sc Singular} is under development at TU Kaiserslautern, focuses on polynomial computations, and has been
successfully used in application areas such as algebraic geometry and singularity theory. GPI-Space, on the
other hand, is under development at Fraunhofer ITWM Kaiserslautern, and has been successfully used for
coordinating the parallel execution (autoparallelization) of academic codes as well as for commercial
software in application areas such as seismic data processing.
As its mathematical modeling language, GPI-Space relies on Petri nets, which are specifically designed
to model concurrent systems, and yield both data parallelism and task parallelism. In fact, GPI-Space
is not only able to automatically balance the workload, to scale up to huge machines, and to tolerate machine
failures, but can also integrate existing legacy applications without requiring any changes to them.
In our case, {\sc Singular} calls GPI-Space, which, in turn, manages several (many) instances of
{\sc Singular} in its existing binary form (without any need for changes). The experiments carried
through so far are promising and indicate that we are on our way towards a convenient framework
for massively parallel computations in {\sc Singular}.
One of the central tasks of computational algebraic geometry is the explicit construction of objects
with prescribed properties, for instance to find counterexamples to conjectures or to construct general
members of moduli spaces. Arguably, the most important property to be checked here is smoothness.
Classically, this means to apply the Jacobian criterion: If $X\subset \mathbb A^n_{\mathbb K}$
(respectively $X\subset \mathbb P^n_{\mathbb K}$) is an equidimensional affine (respectively projective)
algebraic variety of dimension $d$ with defining equations $f_1=\dots =f_s=0$, compute a
Gr\"obner basis of the ideal generated by the $f_i$ together with the $(n-d)\times (n-d)$ minors of the
Jacobian matrix of the $f_i$ in order to check whether this ideal defines the empty set.
The resulting process is predominantly sequential. It is typically expensive (if not unfeasible),
especially in cases where the codimension $n-d$ is large.
In \cite{smoothtst}, an alternative smoothness test has been suggested by the first and third author
(see \cite{smoothtstlib} for the implementation in \textsc{Singular}). This test builds on ideas from
Hironaka's celebrated desingularization proof \cite{Hir} and is
intrinsically parallel. To explore the potential of our framework, we have modeled and implemented an enhanced
version of the test which is interesting in its own right. Following \cite{smoothtst}, we take our cue from the fact
that each smooth variety is locally a complete intersection. Roughly, the idea is then to apply Hironaka's method
of descending induction by hypersurfaces of maximal contact (in its constructive version by Bravo, Encinas, and
Villamayor \cite{BEV}). This allows us either to detect non-smoothness during the process, or to finally realize
a finite covering of $X$ by affine charts such that in each chart, $X$ is given as a smooth complete intersection.
More precisely, at each iteration step, our algorithm starts from finitely many affine charts $U_i$ which cover $X$,
together with varieties $W_i$ and embeddings $X\cap U_i\subset W_i\cap U_i$ such that each $W_i\cap U_i$ is a
smooth complete intersection in $U_i$. Providing a constructive version of Hironaka's termination criterion,
the algorithm then either detects that $X$ is singular in one $U_i$, and terminates, or constructs for each
$i$ finitely many affine charts $U'_{ij}$ which cover $X\cap U_i$, together with varieties $W'_{ij}\subset W_i$ and embeddings
$X\cap U'_{ij}\subset W'_{ij}\cap U'_{ij}$ such that each $W'_{ij}\cap U'_{ij}$ is a smooth complete intersection in $U'_{ij}$
whose codimension is one less than that of $X\cap U_i$ in $W_i\cap U_i$. Since at each step, the computations in one
chart do not depend on results from the other charts, the algorithm is indeed parallel in nature. Moreover, since our
implementation branches into all available choices of charts in a massively parallel way, and terminates once $X$ is
completely covered by charts, it will automatically determine a choice of charts which leads to the smoothness
certificate in the fastest possible way.
In fact, there is one more twist: As experiments show, see \cite{smoothtst}, the smoothness test is most effective
in a hybrid version which makes use of the above ideas to reduce the general problem to checking smoothness
in finitely many embedded situations $X\cap U\subset W\cap U$ of low codimension, and applies (a relative
version of) the Jacobian criterion there.
Our paper is organized as follows. In Section \ref{sec:smoothness}, we briefly review smoothness
and recall the Jacobian criterion. In Section \ref{sec hybrid smoothness test}, we summarize what we need
from Hironaka-style desingularization and develop our smoothness test. Section \ref{sec GPI} contains a discussion
of GPI-Space and Petri nets which prepares for Section \ref{sec Petri smooth}, where we show how to model
our test in terms of Petri nets. This forms the basis for the implementation of the test using
{\sc{Singular}} within GPI-space. Finally, in Section \ref{sec timings}, we illustrate the behavior
of the smoothness test and its implementation by checking two examples from current research
in algebraic geometry. These examples are surfaces of general type, one of which cannot be handled by other means.
\section{Smoothness and the Jacobian Criterion}\label{sec:smoothness}
We describe the geometry behind our algorithm in the classical language of algebraic varieties
over an algebraically closed field. Because smoothness is a local property, and each
quasiprojective (algebraic) variety admits an open affine covering, we restrict
our attention to affine (algebraic) varieties.
Let ${\mathbb K}$ be an algebraically closed field. Write $\mathbb A^n_{{\mathbb K}}$
for the affine $n$-space over ${\mathbb K}$. An affine variety (over ${\mathbb K}$) is the common vanishing locus
$V(f_1, \dots, f_r)\subset \mathbb A^n_{{\mathbb K}}$ of finitely many polynomials $f_i\in{\mathbb K}[x_1,\dots, x_n]$.
If $Z$ is such a variety, let
$$
I_Z=\{f\in {\mathbb K}[x_1,\ldots,x_n] \mid f(p)=0 \text{ for all } p\in Z\}\subset {\mathbb K}[x_1,\ldots,x_n]
$$
be its vanishing ideal, let ${\mathbb K}[Z]={\mathbb K}[x_1,\ldots,x_n]/I_Z$ be its ring of polynomial functions,
and let $\dim Z = \dim {\mathbb K}[Z]$ be its dimension.
Given a polynomial $h\in{\mathbb K}[x_1,\ldots,x_n]$, we write $$D(h) = \mathbb
A^n_{{\mathbb K}}\setminus V(h)=\{p\in \mathbb A^n_{{\mathbb K}} \mid h(p) \neq 0\}$$
for the principal open set defined by $h$, and $\mathcal O_{Z}(Z\cap D(h))$
for the ring of regular functions on $Z\cap D(h)$. If $p\in Z$ is a point,
we write $\mathcal{O}_{Z,p}$ for the local ring
of $Z$ at $p$, and $\mathfrak{m}_{Z,p}$ for the maximal ideal of $\mathcal{O}_{Z,p}$.
Recall that both rings $\mathcal O_{Z}(Z\cap D(h))$ and $\mathcal{O}_{Z,p}$ are localizations of ${\mathbb K}[Z]$:
Allow powers of the polynomial function defined by $h$ on $Z$ and polynomial functions
on $Z$ not vanishing at $p$ as denominators, respectively.
Relying on the trick of Rabinowitsch, we regard $Z\cap D(h)$ as an affine
variety: If $I_Z=\langle f_1,\dots, f_s\rangle$, identify $Z\cap D(h)$ with the vanishing
locus $$V(f_1,\dots, f_s, ht-1)\subset \mathbb A^{n+1}_{{\mathbb K}},$$ where $t$ is an extra
variable.
The tangent space at a point $p=(a_1,\dots, a_n)\in Z$ is the linear variety
$$
T_p Z = V(d_p(f) \mid f\in I_Z)\subset \mathbb A^n_{{\mathbb K}},
$$
where $d_pf$ is the differential of $f$ at $p$:
$$
d_p f=\sum_{i=1}^n \frac{\partial f}{\partial x_i}(p)(x_i-a_i)
\in {\mathbb K}[x_1,\dots, x_n].
$$
We have
$$
\dim T_p Z \geq \max \{\dim V \mid V \text{ is an irreducible component of } Z \text{ through } p\},
$$
and say that $Z$ is \emph{smooth at $p$} if these numbers are equal. Equivalently, $\mathcal{O}_{Z,p}$ is a regular local ring.
Otherwise, $Z$ is \emph{singular at $p$}.
The variety $Z$ is \emph{smooth} if it is smooth at each of its points.
Recall that a variety $Z$ is equidimensional if all its irreducible components
have the same dimension. Algebraically this means that the ideal $I_Z$ is equidimensional,
that is, all associated primes of $I_Z$ have the same dimension.
\begin{theorem}[Jacobian Criterion]
Let ${\mathbb K}$ be an algebraically closed field, and let $Z = V(f_1, \dots, f_s)\subset \mathbb A^n_{{\mathbb K}}$
be an affine variety which is equidimensional of dimension $d$.
Write $I_{n-d}\left({\mathcal J}\right)$ for the ideal generated by the
$(n-d) \times (n-d)$ minors of the Jacobian matrix
${\mathcal J} = \left({\partial f_i \over \partial x_j}\right)$.
If $I_{n-d}\left({\mathcal J}\right)+I_Z = \langle 1\rangle$, then $Z$ is smooth,
and the ideal $\langle f_1, \dots, f_s\rangle \subset {\mathbb K}[x_1,\ldots,x_n]$
is equal to the vanishing ideal $I_Z$ of $Z$. In particular, $\langle f_1, \dots, f_s\rangle$
is a radical ideal.
\end{theorem}
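Although the computations in this paper are carried out in {\sc Singular}, the criterion itself is easily illustrated by a small script. The following Python sketch (ours, based on the SymPy library and using exact arithmetic over the rational numbers) forms the ideal of the theorem and tests whether it is the unit ideal; note that, as stated, the criterion only certifies smoothness when the test succeeds.
\begin{verbatim}
from itertools import combinations
from sympy import symbols, groebner, Matrix

def jacobian_criterion(polys, variables, d):
    """Sufficient smoothness test for Z = V(polys) in affine n-space,
    assumed equidimensional of dimension d: return True if polys together
    with the (n-d) x (n-d) minors of the Jacobian generate the unit ideal."""
    n, c = len(variables), len(variables) - d
    J = Matrix(polys).jacobian(variables)            # s x n Jacobian matrix
    minors = [J.extract(list(rows), list(cols)).det()
              for rows in combinations(range(J.rows), c)
              for cols in combinations(range(n), c)]
    gb = groebner(list(polys) + minors, *variables, order='grevlex')
    return list(gb.exprs) == [1]        # the reduced basis of <1> is [1]

x, y = symbols('x y')
print(jacobian_criterion([x**2 + y**2 - 1], [x, y], 1))  # True  (smooth conic)
print(jacobian_criterion([y**2 - x**3], [x, y], 1))      # False (cuspidal cubic)
\end{verbatim}
In {\sc Singular} itself, the analogous computation can be set up with the built-in commands \texttt{jacob}, \texttt{minor} and \texttt{std}.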
If\ $Y\subset Z\subset \mathbb A^n_{{\mathbb K}}$ are two affine varieties, the vanishing ideal
$I_{Y,Z}$ of $Y$ in $Z$ is the ideal generated by $I_Y$ in ${\mathbb K}[Z]$. If $Z$ is equidimensional,
we write $\codim_Z Y=\dim Z - \dim Y$ for the codimension of $Y$ in $Z$,
and say that $Y$ is a \emph{complete intersection} in $Z$ if
$I_{Y,Z}$ can be generated by $\codim_Z Y=\codim I_{Y,Z}$ elements (then $Y$
and $I_{Y,Z}$ are equidimensional as well).
\section{A Hybrid smoothness test}\label{sec hybrid smoothness test}
In this section, we present the details of our hybrid smoothness test which, as already
outlined in the introduction, combines the Jacobian criterion with ideas from Hironaka's
landmark paper on the resolution of singularities \cite{Hir} in which Hironaka proved
that such resolutions exist, provided we work in characteristic zero.
For detecting non-smoothness and controlling the resolution process, Hironaka developed a theory of standard
bases for local rings and their completions (see \cite[Chapter 1]{GP} for the algorithmic aspects of
standard bases). Based on this, he defined several invariants controlling the desingularization process.
The so-called $\nu^{*}$-invariant generalizes the order of a power series. As some sort
of motivation, we recall its definition in the analytic setting: Let $(X,0) \subset({\mathbb{A}}_{{\mathbb K}}^{n},0)$
be an analytic space germ over an algebraically closed field ${\mathbb K}$ of characteristic zero, let
${\mathbb K}\{x_{1},\ldots,x_{n}\}$ be the ring of convergent power series with coefficients in ${\mathbb K}$, and
let $I_{X,0} \subset {\mathbb K}\{x_{1},\ldots,x_{n}\}$ be the defining ideal of $(X,0)$. If
$f_{1},\dots,f_{s}$ form a minimal standard basis of $I_{X,0}$, and the $f_i$ are
sorted by increasing order $\ord(f_i)$, then set
$$
\nu^{*}(X,0) = (\ord(f_1), \dots, \ord(f_s)).
$$
This invariant is the key to Hironaka's termination criterion:
The germ $(X,0)$ is singular iff at least one of the entries of $\nu^{*}(X,0)$ is $>1$.
In the algebraic setting of this paper, let $X\subset{\mathbb{A}}_{{\mathbb K}}^{n}$ be an
equidimensional affine variety, with vanishing ideal $I_X\subset {\mathbb K}[x_1,\ldots,x_n]$,
where ${\mathbb K}$ is an algebraically closed field of arbitrary characteristic. Working in
arbitrary characteristic allows for a broader range of potential applications, and is not a problem
since we will only rely on results from Hironaka's papers which also hold in positive characteristic.
To formulate Hironaka's criterion in the algebraic setting, we first recall how to extend the notion of order:
\begin{df}
If $(R, \mathfrak{m})$ is any local Noetherian ring, and $0\neq f\in R$ is
any element, then the \emph{order} of $f$ is defined by setting
$$
\ord(f)=\max\{k\in \mathbb{N}\mid f\in \mathfrak{m}^k\}.
$$
\end{df}
\begin{df}[\cite{Hir, Hir1967}]
\label{def:nu-star}
With notation as above, let $p\in X$. If $f_{1},\dots,f_{s}$ form a minimal standard basis of the extended ideal
$I_X {\mathcal O}_{{\mathbb A}^n_{{\mathbb K}},p}$ with respect to a local degree ordering, and the $f_i$ are sorted
by increasing order, set
$$
\nu^{*}(X,p) = (\ord(f_1), \dots, \ord(f_s)).
$$
\end{df}
\begin{lem}[\cite{Hir, Hir1967}]
The sequence $\nu^{*}(X,p)$ depends only on $X$ and $p$.
\end{lem}
\begin{rem}
\label{rem:comp-nu-star}
Note that $\nu^{*}(X,p)$ can be determined algorithmically: A minimal standard basis as required is obtained
by translating $p$ to the origin and applying Mora's tangent cone algorithm (see \cite{TangentCone}, \cite{GP}).
\end{rem}
Hironaka's criterion can now be stated as follows:
\begin{lem}[\cite{Hir}, Chapter III]
\label{crit-Hir}
The variety $X$ is singular at $p\in X$ iff
\begin{equation}
\nu^{*}(X,p) >_{\lex} (1,\dots,1)\in\mathbb N^{\operatorname{codim} X},
\end{equation}
where $>_{\lex}$ denotes the lexicographical ordering.
\end{lem}
Note that if $X$ is singular at $p$, then the length of $\nu^{*}(X,p)$ may be larger than $\operatorname{codim}(X)$,
but at least one of the first ${\operatorname{codim}(X)}$ entries will be $>1$.
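To illustrate Definition~\ref{def:nu-star} and Lemma~\ref{crit-Hir} in the simplest possible situation, consider the standard toy example (not taken from the sources cited above) of the cuspidal cubic $X=V(y^{2}-x^{3})\subset{\mathbb{A}}_{{\mathbb K}}^{2}$. At the origin $p=0$, the single generator $y^{2}-x^{3}$ already forms a minimal standard basis of $I_X {\mathcal O}_{{\mathbb A}^2_{{\mathbb K}},0}$ with respect to a local degree ordering, and its order is $2$, so that $\nu^{*}(X,0)=(2)>_{\lex}(1)\in\mathbb N^{\operatorname{codim} X}$; hence $X$ is singular at the origin. At any other point $p\in X$, translating $p$ to the origin yields a generator with nonzero linear part, so $\nu^{*}(X,p)=(1)$ and the criterion detects no singularity there.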
Hironaka's criterion is not of immediate practical use for us: We cannot examine each single point $p\in X$.
Fortunately, solutions to this problem have been suggested by various authors while establishing
constructive versions of Hironaka's resolution process (see, for example, \cite{BM}, \cite{BEV}, \cite{V1}). Here,
we follow the approach of Bravo, Encinas, and Villamayor \cite{BEV} which is best-suited
for our purposes. Their simplified proof of desingularization replaces
local standard bases at individual points by the use of loci of maximal order. These loci are obtained by polynomial computations
in finitely many charts (see \cite[Section 4.2]{FK1}). Loci of maximal order can be used to find so-called hypersurfaces of
maximal contact, which again only exist locally in charts. In a Hironaka style resolution process, hypersurfaces
of maximal contact allow for a descending induction on the dimension of the respective ambient space.
That such hypersurfaces generally do not exist in positive characteristic is
a key obstacle for extending Hironaka's ideas to positive characteristic.
In our context, we encounter a particularly simple special case of all this.
We suppose that we are given an embedding $X\subset W$, where $W$ is
a smooth complete intersection in ${\mathbb{A}}_{{\mathbb K}}^{n}$, say
of codimension $r$. In particular, $W$ is equidimensional of dimension
$$d=n-r.$$
The idea is then to first check whether the locus of order at least two is
non-empty. In this case, $X$ is singular. Otherwise, we can find a finite
covering of $X$ by affine charts and in each chart a hypersurface of maximal
contact whose construction relies only on the suitable choice of one of the generators
of $I_X$ together with one first order partial derivative of this generator\footnote{As a result, the difficulties of resolution of singularities in positive characteristic do
not occur in our setting, see Lemma \ref{lem max contact} below.}. In each chart,
we then consider the hypersurface of maximal contact as the new ambient space
of $X$, and proceed by iteration.
The resulting process allows us to decide at each step of the iteration whether
there is a point $p\in X$ such that the next entry of $\nu^{*}(X,p)$ is $\geq 2$.
To give a more precise statement, we suppose that $X$ has positive codimension in $W$
(otherwise, $X$ is necessarily smooth).
Crucial for obtaining information on an individual entry of $\nu^{\ast}$ is the order of ideals:
\begin{df}
If $(R, \mathfrak{m})$ is any local Noetherian ring, and $\langle 0 \rangle\neq J=\langle
h_1,\dots,h_t\rangle\subset R$ is any ideal, then the \emph{order} of $J$ is defined by setting
$$
\ord(J)=\max\{k\in \mathbb{N}\mid J\subset \mathfrak{m}^k\}
=\operatorname{min}\left \{\operatorname{ord}(h_i) \mid i=1,\dots, t\right\}.
$$
\end{df}
\noindent
In our geometric setup, we apply this as follows:
Given an ideal $\langle 0 \rangle \neq I\subset {\mathbb K}[W]$ and a point $p\in W$,
the \emph{order $\ord_p(I)$ of $I$ \emph{at} $p$} is defined to be the order of the extended ideal
$I \mathcal O_{W,p}$. For $0\neq f\in {\mathbb K}[W]$ we similarly define $\ord_p(f)$ as the order of
the image of $f$ in $\mathcal O_{W,p}$.
\begin{df}
\label{def:order-gen}
With notation as above,
for any integer $b \in{\mathbb{N}}$, the \define{locus of order at least $b$} of the vanishing ideal $I_{X,W}$ is
$$
\Sing(I_{X,W},b)=\left\{ p\in X\mid\operatorname{ord}_{p}(I_{X,W})\geq b\right\}.
$$
\end{df}
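For the cuspidal curve $X=V(y^2-x^3)$ considered above (taking $W={\mathbb A}^2_{\mathbb K}$, that is, $r=0$), we have, for example, $\Sing(I_{X,W},1)=X$, while $\Sing(I_{X,W},2)=\{0\}$ consists of the origin only.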
\begin{rem}[\cite{Hir}, Chapter III] Note that the loci $\Sing(I_{X,W},b)$ are Zariski closed
since the function
$$X\rightarrow \mathbb N, \ p \mapsto \operatorname{ord}_{p}(I_{X,W}),$$
is Zariski upper semi-continuous.
\end{rem}
\begin{rem}
\label{rem:criterion-singular-I}
With notation as above, let a point $p\in X$ be given. Then the first $r$ elements
of a minimal standard basis of $I_X {\mathcal O}_{{\mathbb A}^n_{\mathbb K},p}$ as in Definition
\ref{def:nu-star} must have order 1 by our assumptions on $W$, that is, the first $r$ entries of $\nu^*(X,p)$ are equal to~$1$.
On the other hand, if $\operatorname{ord}_{p}(I_{X,W})\geq 2$, then the $(r+1)$-st entry of
$\nu^*(X,p)$ is $\geq 2$. Hence, in this case, $X$ is singular at $p$
since the codimension of $X$ in $\mathbb{A}_{{\mathbb K}}^{n}$ is at least $r+1$ by our assumptions.
\end{rem}
In terms of loci of order at least two this amounts to:
\begin{lem}
\label{lem: criterion-singular-II}
With notation as above, $X$ is singular if
$$\Sing(I_{X,W},2)\not=\emptyset.$$
\end{lem}
\begin{proof}
Clear from Remark \ref{rem:criterion-singular-I}.
\end{proof}
To determine the loci $\Sing(I_{X,W},b)$ in a Zariski neighbourhood
of a point~$p\in X$ explicitly, derivatives with respect
to a regular system of parameters of $W$ at $p$ are the method of choice:
See \cite[p. 404]{BEV} for characteristic zero, and \cite[Sections 2.5 and 2.6]{GiraudPos}
for positive characteristic using Hasse derivatives.
For a more detailed description, fix a point $p\in W$. According to our assumptions, the local
ring $\mathcal{O}_{W,p}$ is regular of dimension $d$. So we can find
a regular system of parameters $X_{p,1},\dots, X_{p,d}$ for $\mathcal{O}_{W,p}$. That is,
$X_{p,1},\dots, X_{p,d}$ form a minimal set of generators for $\mathfrak{m}_{W,p}$.
By the Cohen structure theorem, we may, thus, think of the completion
$\widehat{\mathcal O_{W,p}}$ as a formal power series ring in $d$ variables
(see \cite[Proposition 10.16]{Eis}): The map
$$
\Phi:{\mathbb K}[[y_1, \dots, y_d]] \rightarrow \widehat{\mathcal O_{W,p}}, \ y_i\mapsto X_{p,i},
$$
is an isomorphism of local rings. In particular,
the order of an element $f\in {\mathbb K}[W]$ at $p$ coincides with the order of the
formal power series $\Phi^{-1}(f)\in {\mathbb K}[[y_1, \dots, y_d]]$. The latter, in turn,
can be computed as follows:
\begin{lem}[\cite{BEV}, \cite{GiraudPos}]
Let $R={\mathbb K}[[y_1,\dots,y_d]]$, let ${\mathfrak m}=\langle y_1,\dots,y_d \rangle$
be the maximal ideal of $R$, and let $F\in R \setminus \{0\}$. Then
$$\operatorname{ord}(F)= \operatorname{min}\left \{m \in {\mathbb N} \bigm|
\frac{\partial^{a} F}{\partial y^{a}}\not\in {\mathfrak m} \textrm{ for some } a \in {\mathbb N}^d
\textrm{ with } |a|=m \right \},
$$
where the derivatives denote the usual formal derivatives in characteristic zero, and
Hasse derivatives in positive characteristic.
\end{lem}
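For instance, in characteristic zero, with $d=2$ and variables $x,y$, the series $F=y^2-x^3$ satisfies $\partial F/\partial x=-3x^2\in{\mathfrak m}$ and $\partial F/\partial y=2y\in{\mathfrak m}$, whereas $\partial^2F/\partial y^2=2\notin{\mathfrak m}$. Hence $\operatorname{ord}(F)=2$, in accordance with the fact that the lowest degree term of $F$ is $y^2$.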
As we focus on the locus $\Sing(I_{X,W},b)$ with $b=2$, only first order formal derivatives play
a role for us. Since these derivatives coincide with the first order Hasse derivatives, we do not need to discuss
Hasse derivatives here.
\begin{df}\label{def deriv}
In the situation above, we use the isomorphism $\Phi$ of the Cohen structure theorem to define
\emph{first order derivatives} of elements $f\in\widehat{\mathcal O_{W,p}}$
\emph{with respect to the regular system of parameters} $X_{p,1},\dots, X_{p,d}$:
Set
$$
\frac{\partial f}{\partial X_{p,j}}=\Phi\hspace{-1mm}\left(\frac{\partial \Phi^{-1}(f)}{\partial y_{j}}\right)\in\widehat{\mathcal O_{W,p}}, \text{ for } j=1,\dots, d.
$$
\end{df}
We summarize our discussion so far. If $I_{X,W}$ is given by a set of generators
$f_{r+1}, \dots, f_s\in{\mathbb K}[W]\setminus \{0\}$, and if $p\in X$, then $p\in\Sing(I_{X,W},2)$ iff $\operatorname{ord}_p(f_j)>1$
for all $j$. In this case, $X$ is singular at $p$. Furthermore, if $0\neq f\in {\mathbb K}[W]$ is any element, $p\in W$ is any point,
and $X_{p,1},\dots, X_{p,d}$ is a regular system of parameters for $\mathcal{O}_{W,p}$, then $\operatorname{ord}_p(f)>1$ iff
\begin{equation}
\label{not:Delta-local}
1 \not\in \Delta_p(f) :=\left\langle f, \frac{\partial f}{\partial X_{p,1}},\dots,
\frac{\partial f}{\partial X_{p,d}} \right\rangle_{\widehat{\mathcal{O}_{W,p}}}\subset \widehat{\mathcal{O}_{W,p}}.
\end{equation}
Now, as before, we cannot examine each point individually. The following arguments will
allow us to remedy this situation in Lemma \ref{locus order 2} below. We begin by showing
that there is a locally consistent way of choosing regular systems of parameters:
\begin{lem}\label{lem covering by minors}
Let $I_W=\langle f_1,\dots,f_r \rangle \subset {\mathbb K}[x_1,\dots,x_n]$,
and let ${\mathcal J} = \left({\partial f_i \over \partial x_j}\right)$ be the Jacobian matrix of
$f_1,\dots,f_r$. Then there is a finite covering of $W$ by principal
open subsets $D(h)\subset \mathbb{A}_{\mathbb K}^n$ such that:
\begin{enumerate}
\item Each polynomial $h$ is a maximal minor of ${\mathcal J}$.
\item For each $h$, the variables $x_j$ not used for differentiation in forming
the minor $h$ induce by translation a regular system of parameters for every local ring
${\mathcal O_{W,p}}$, $p\in W\cap D(h)$.
\end{enumerate}
For each $h$, we refer to such a choice of a local system of parameters at all points of $W\cap D(h)$ as a \emph{consistent choice}.
\end{lem}
\begin{proof}
Consider a point $p_0\in W$. Then, by the Jacobian criterion,
there is at least one minor $h=\det(M)$ of ${\mathcal J}$ of size $r$ such that
$h(p_0) \neq 0$ (recall that we assume that $W$ is smooth). Suppose for simplicity that $h$ involves the last $r$ columns of
${\mathcal J}$, and let $p=(a_1,\dots,a_n)$ be any point of $W\cap D(h)$. Then the images of
$x_{1}-a_{1}, \ldots,x_d-a_d, f_1,\ldots,f_r$ in ${\mathcal O _{{\mathbb A}^n_{\mathbb K},p}}$ are actually contained in
${\mathfrak m}_{{\mathbb A}^n_{\mathbb K},p}$ and represent a ${\mathbb K}$-basis of the Zariski cotangent space
${\mathfrak m}_{{\mathbb A}^n_{\mathbb K},p}/{\mathfrak m}_{{\mathbb A}^n_{\mathbb K},p}^2$.
Hence, by Nakayama's lemma, they form a minimal set of generators for ${\mathfrak m}_{{\mathbb A}^n_{\mathbb K},p}$.
Since $f_1,\dots,f_r$ are mapped to zero when we pass to ${\mathcal O}_{W,p}$, the images of
$x_{1}-a_{1}, \ldots,x_d-a_d$ in ${\mathcal O}_{W,p}$ form a regular system of parameters for
${\mathcal O}_{W,p}$. The result follows because $W$ is quasi-compact in the Zariski topology.
\end{proof}
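In practice, such a covering is found by computing the maximal minors of the Jacobian matrix and verifying, via a Gr\"obner basis computation, that the chosen minors have no common zero on $W$. The following {\sc Singular} lines are a minimal sketch of this; the example data and the chosen names are ours and purely illustrative:
\begin{verbatim}
ring R = 0,(x,y,z),dp;
ideal IW = x^2+y^2+z^2-1;    // a smooth hypersurface W, so r = 1, d = 2
matrix J = jacob(IW);        // the 1 x 3 Jacobian matrix (2x 2y 2z)
ideal mins = minor(J,1);     // its maximal (here 1 x 1) minors
// the sets D(2x), D(2y), D(2z) cover W, since 1 lies in IW + mins;
// accordingly, the normal form of 1 is 0:
reduce(1,std(IW+mins));
\end{verbatim}
For the minor $h=2z$, say, the variables $x,y$ not used for differentiation in forming $h$ then provide the consistent choice of regular systems of parameters on $W\cap D(h)$ described in the lemma.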
\begin{notation}
\label{notation:pos}
For further considerations, we retain the notation of the lemma and its proof. Fix one principal open subset
$D(h) \subset {\mathbb A}^n_{\mathbb K}$ as in the lemma.
Suppose that $h=\det(M)$ involves the last $r$ columns of the Jacobian matrix ${\mathcal J}$.
Furthermore, fix one element $0\neq f\in{\mathbb K}[W]$.
\end{notation}
\noindent
We now show how to find an ideal $\Delta(f) \subset {\mathcal O}_{W}(W\cap D(h))$ such that
$$\Delta(f)\;\! \widehat{\mathcal{O}_{W,p}}=\Delta_p(f) \;\text{ for each point }\; p\in W\cap D(h),$$
where $\Delta_p(f)$ is defined as in \eqref{not:Delta-local}. Technically, we manipulate polynomials,
starting from a polynomial in ${\mathbb K}[x_1,\dots,x_n]$ representing $f$. By abuse of notation, we denote
this polynomial again by $f$.
\begin{construction}\label{con deriv}
We construct a polynomial $\tilde{f}\in {\mathbb K}[x_1,\dots, x_n]$ whose image
in ${\mathcal O}_{W}(W\cap D(h))$ coincides with that of $f$ up to the unit $h$, and whose partial derivatives
$\frac{\partial \tilde{f}}{\partial x_i}$, $i=d+1,\dots, n$, are mapped to zero in
${\mathcal O}_{W}(W\cap D(h))/\langle f \rangle$.
For this, let $A$ be the adjugate matrix of $M$ (the transpose of the matrix of cofactors). Then
$$A \cdot M = h \cdot E_{r},$$
where $E_r$ is the $r\times r$ identity matrix. Moreover, if $I\subset{\mathbb K}[x_1,\dots,x_n]$ is the ideal
generated by the entries of the vector $(\tilde{f_1},\dots,\tilde{f}_{r})^T =A \cdot (f_1,\dots,f_{r})^T$, then the extended ideals
$I\mathcal{O}_{{\mathbb A}^n_{\mathbb K}}(D(h))$ and $I_W\mathcal{O}_{{\mathbb A}^n_{\mathbb K}}(D(h))$ coincide
since $h$ is a unit in $\mathcal{O}_{{\mathbb A}^n_{\mathbb K}}(D(h))$.
Let $\widetilde{\mathcal J}=\left({\partial {\tilde{f_i}} \over \partial {x_j}}\right)$ be the Jacobian matrix of
$\tilde{f_1},\dots,\tilde{f}_{r}$. Then the matrix $\widetilde{{\mathcal J}}\vert_{W\cap D(h)}$ obtained
by mapping the entries of $\widetilde{{\mathcal J}}$ to $\mathcal{O}_W (W\cap D(h))$ is of type
$$\widetilde{{\mathcal J}}\vert_{W\cap D(h)} = \begin{pmatrix} \hspace{2mm}* & \mid & h \cdot E_{r}\end{pmatrix}$$
(apply the product rule).
In $\mathcal{O}_{{\mathbb A}^n_{\mathbb K}}(D(h))$, the polynomial $\hat{f}=h\cdot f$ differs from
$f$ only by the unit $h$. Moreover, modulo $f$, each partial
derivative of $\hat{f}$ is divisible by $h$. Hence, after suitable row operations, the partial derivatives in the right hand lower block
of the Jacobian matrix of $\tilde{f}_1,\dots, \tilde{f}_{r}, \hat{f}$ are mapped to zero in ${\mathcal O}_W (W\cap D(h))/\langle f\rangle$:
\[\scalebox{0.8}{
$\left(
\begin{tabular}
[c]{ccc|ccc}
&&& $h$ & &$0$ \\
& $\ast$ && & $\ddots$ \\
& &&$0$& & $h$ \\\hline \rule{0pt}{1.2\normalbaselineskip}
$\frac{\partial \hat{f}}{\partial x_1}$& $\dots$ &
$ \frac{\partial \hat{f}}{\partial x_{d}}$ & $ \frac{\partial \hat{f}}{\partial x_{d+1}}$& $\dots$ &
$ \frac{\partial \hat{f}}{\partial x_{n}}$
\end{tabular}
\right) \longmapsto\left(
\begin{tabular}
[c]{ccc|ccc}
& &&$h$ & & $0$ \\
&$\ast$& && $\ddots$ & \\
& &&$0$& & $h$ \\\hline \rule{0pt}{1.1\normalbaselineskip}
$H_1$& $\dots$ &
$H_{d}$ &$0$& $\dots$ &
$0$
\end{tabular}\right)$ }
\]
\vskip0.2cm
\noindent
The row operations correspond to subtracting ${\mathbb K}[x_1,\dots,x_n]$-linear
combinations of $\tilde{f_1},\dots,\tilde{f}_{r}$ from $\hat{f}$. In this way, we get a polynomial
$\tilde{f}$ as desired: The images of $\tilde{f}$ and $f$ in ${\mathcal O}_{W}(W\cap D(h))$ coincide up to the unit $h$,
and for $i=d+1,\dots, n$, the $\frac{\partial \tilde{f}}{\partial x_i}$ are
mapped to zero in ${\mathcal O}_{W}(W\cap D(h))/\langle f \rangle$.
In fact, we have
\begin{equation}
\label{ref:def-hs}
(\frac{\partial \tilde{f}}{\partial x_1},\ldots,\frac{\partial \tilde{f}}{\partial x_n})
=(H_1,\ldots,H_d,0,\ldots,0)
\end{equation}
as an equality over ${\mathcal O}_W (W\cap D(h))/\langle f\rangle$.
\end{construction}
\begin{lem}
\label{chain rule}
With notation as above, consider the extended ideal
$$
\Delta(f) =\langle f, H_1,\dots, H_d \rangle {\mathcal O}_{W}(W\cap D(h)).
$$
Then
$$\Delta(f)\;\! \widehat{\mathcal{O}_{W,p}}=\Delta_p(f) \;\text{ for each point }\; p\in W\cap D(h).$$
\end{lem}
\begin{proof}
Let a point $p=(a_1,\dots, a_n)\in W \cap D(h)$ be given. Write
$\boldsymbol{x-a}=\{x_1-a_1, \dots, x_n-a_n\}$ and $\boldsymbol x=\{x_1, \dots, x_n\}$. Then
$$\widehat{{\mathcal O}_{W,p}} \cong
{\mathbb K}[[\boldsymbol{x-a}]]/I_{W}{\mathbb K}[[\boldsymbol{x-a}]],$$
and the natural map
$$\Psi: {\mathbb K}[\boldsymbol x]\longrightarrow {\mathbb K}[[\boldsymbol{x-a}]]\longrightarrow
{\mathbb K}[[\boldsymbol{x-a}]]/I_{W} {\mathbb K}[[\boldsymbol{x-a}]]$$
factors through the inclusion
${\mathbb K}[W] \rightarrow \widehat{{\mathcal O}_{W,p}}$. Moreover, by our assumptions in
Notation \ref{notation:pos}, the isomorphism of the Cohen structure theorem reads
\begin{eqnarray*}
{\mathbb K}[[y_1,\dots,y_d]] & \overset{\Phi}{\longrightarrow} & {\mathbb K}[[\boldsymbol{x-a}]]/I_{W}{\mathbb K}[[\boldsymbol{x-a}]],\\
y_i & \longmapsto & x_i-a_i. \;\;
\end{eqnarray*}
The inverse isomorphism $\Phi^{-1}$ is of type
\begin{eqnarray*}
y_i & \longleftarrow\!\shortmid & x_i-a_i\;\; \text{ if } 1 \leq i \leq d,\\
m_i(y_{1},\dots,y_{d}) & \longleftarrow\!\shortmid & x_i-a_i\;\;
\text{ if } d+1 \leq i \leq n.
\end{eqnarray*}
Then $\Phi^{-1}\circ \Psi$ is the map
$$g\mapsto g(\boldsymbol {y+a}', m(\boldsymbol{y})+\boldsymbol{a}''),$$
where $\boldsymbol{a}'=\{a_1,\dots, a_d\}$, $\boldsymbol{a}''
=\{a_{d+1},\dots, a_n\}$, and $\boldsymbol y=\{y_1, \dots, y_d\}$.
Hence, for each $g\in {\mathbb K}[\boldsymbol x]$, the vector of partial derivatives
$$
\begin{pmatrix} \frac{\partial \Phi^{-1}(\Psi(g))}{\partial y_1}, & \dots, &
\frac{\partial \Phi^{-1}(\Psi(g))}{\partial y_d}
\end{pmatrix}
$$
is obtained as the product
$$
\scalebox{0.9}{ $\begin{pmatrix} \frac{\partial g}{\partial x_1}
(\boldsymbol {y+a}', m(\boldsymbol{y})+\boldsymbol{a}''), & \dots, &
\frac{\partial g}{\partial x_n}
(\boldsymbol {y+a}', m(\boldsymbol{y})+\boldsymbol{a}'')
\end{pmatrix} \cdot
\begin{pmatrix} & E_d &
\\ \hline \rule{0pt}{1.1\normalbaselineskip}
\frac{\partial m_{d+1}}{\partial y_1} & \dots &
\frac{\partial m_{d+1}}{\partial y_d} \\ \vdots && \vdots \\
\frac{\partial m_n}{\partial y_1} & \dots &
\frac{\partial m_n}{\partial y_d}
\end{pmatrix}$}
$$
(apply the chain rule).
Taking $g=\tilde{f}$ with $\tilde{f}$ as in Construction \ref{con deriv},
we deduce from Equation \eqref{ref:def-hs} that
$$
\frac{\partial \Phi^{-1}(\Psi(\tilde{f}))}{\partial y_j} = \Phi^{-1}(\Psi(H_j))
$$
as an equality over ${\mathbb K}[[y_1,\ldots,y_d]]/\langle \Phi^{-1}(\Psi(f))\rangle$, for $j=1,\dots, d$. The
result follows by applying $\Phi$ since, by the very construction of $\tilde{f}$, the images of $\tilde{f}$ and $f$ in $\widehat{{\mathcal O}_{W,p}}$ agree up to the unit $h$.
\end{proof}
\begin{notation}
In the situation of Lemma \ref{chain rule}, motivated by the lemma and its proof, we write
$$\frac{\partial f}{\partial X_{j}}:=H_j\in {\mathbb K}[x_1,\dots,x_n], \;\text{ for }\; j=1,\dots, d.$$
\end{notation}
Summing up, we get:
\begin{lem}
\label{locus order 2}
Let $f_{r+1},\dots,f_s\in {\mathbb K}[x_1,\dots, x_n]$ represent a set of generators for the vanishing ideal
$I_{X,W}$. Then $\Sing(I_{X,W},2)\cap D(h) $ is the locus
\[
V \left(I_X+\left\langle \frac{\partial f_{i}}{\partial X_{j}}\;\! \middle| \;\! r+1 \leq i \leq s,
1 \leq j \leq d \right\rangle \right) \cap D(h)
\]
which is computable by the recipe given in Construction \ref{con deriv}.
\end{lem}
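As a minimal illustration of the lemma, consider the hypothetical data $W=V(f_1)\subset{\mathbb A}^3_{\mathbb K}$ with $f_1=z-xy$ (so $r=1$, $d=2$) and $I_X=\langle f_1,f_2\rangle$ with $f_2=y^2-x^3$. Since $h=\partial f_1/\partial z=1$, a single chart suffices, with $x,y$ inducing the regular systems of parameters. The following {\sc Singular} lines sketch the computation; the derivatives relative to the parameters are obtained from the cofactor expression, which for $h=1$ simply returns the partials of $f_2$ with respect to $x$ and $y$:
\begin{verbatim}
ring R = 0,(x,y,z),dp;
poly f1 = z-x*y;             // W = V(f1), a smooth hypersurface, r = 1, d = 2
poly f2 = y^2-x^3;           // I_X = <f1,f2>
ideal IX = f1,f2;
poly h = diff(f1,z);         // h = 1, so D(h) covers all of W
// derivatives of f2 relative to the parameters x,y (cofactor expression, A = (1)):
poly Hx = h*diff(f2,x) - diff(f1,x)*diff(f2,z);   // = -3x^2
poly Hy = h*diff(f2,y) - diff(f1,y)*diff(f2,z);   // =  2y
// Sing(I_{X,W},2) = V(IX + <Hx,Hy>) on D(h):
ideal Sing2 = IX,Hx,Hy;
std(Sing2);                  // its vanishing locus is the origin
\end{verbatim}
Here $\Sing(I_{X,W},2)=\{0\}$ is non-empty, so $X$ is singular by Lemma \ref{lem: criterion-singular-II}, and the smoothness test terminates at this point.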
If the intersection of $\Sing(I_{X,W},2)$ with one principal open set from a covering as in Lemma
\ref{lem covering by minors} is non-empty, then $X$ is singular by Lemma \ref{lem: criterion-singular-II}, and
our smoothness test terminates. If all these intersections are empty we
iterate our process:
\begin{lem}
[\cite{BEV}]\label{lem max contact}
Let $f_{r+1},\dots,f_{s}\in{\mathbb{K}}[x_{1},\dots,x_{n}]$ represent a set of generators
for the vanishing ideal $I_{X,W}$. Retaining Notation \ref{notation:pos},
suppose that $\Sing(I_{X,W},2)\cap D(h)=\emptyset.$
Then there is a finite covering of $X\cap D(h)$ by principal open subsets
of type $D(h\cdot g)\subset\mathbb{A}_{{\mathbb K}}^{n}$ such that:
\begin{enumerate}
\item Each polynomial $g$ is a derivative $\frac{\partial f_{i}}{\partial X_{j}}$ of some $f_{i}$,
$r+1\leq i \leq s$.
\item If we set $W^{\prime}=V(f_{1},\dots,f_{r},f_{i})\subset \mathbb A^n_{{\mathbb K}}$, then
$W'\cap D(h\cdot g)$ is a smooth complete intersection of codimension $r+1$ in $D(h\cdot g)$.
\item We have $X\cap D(h\cdot g)\subset W^{\prime}\cap D(h\cdot g)$.
\end{enumerate}
\end{lem}
\begin{proof}
Let $p_0\in X \cap D(h)$. Then, since $\Sing(I_{X,W},2)\cap D(h)=\emptyset$
by assumption, the order of $f_{i}$ is $\ord_{p_0}(f_{i})=1$ for at least one $i\in\{r+1,\dots,s\}$.
Equivalently, one of the partial derivatives of $f_i$, say $\frac{\partial f_{i}}{\partial X_{j}}$,
does not vanish at $p_0$. Then, if we set $g=\frac{\partial f_{i}}{\partial X_{j}}$ and
$W^{\prime}=V(f_{1},\dots,f_{r},f_{i})$, properties (1) and (3) of the lemma are clear by construction.
With regard to (2), again by construction, we have $\ord_{p}(f_{i})=1$ for each $p\in W'\cap D(h\cdot g)$.
This implies for each such $p$:
\begin{itemize}
\item[(a)] We have $\nu^*(W',p) = (1,\dots,1)\in {\mathbb N}^{r+1}$.
\item[(b)] The image of $f_i$ in $\mathcal O_{W,p}$ is a non-zero non-unit.
\end{itemize}
Then, $W'\cap D(h\cdot g)$ is smooth by (a) and Hironaka's Criterion
\ref{crit-Hir}. Furthermore, each local ring $\mathcal O_{W,p}$ is regular and, thus,
an integral domain. Hence, by (b) and Krull's principal ideal theorem, the ideal
generated by the image of $f_i$ in $\mathcal O_{W,p}$ has codimension 1.
We conclude that $W'\cap D(h\cdot g)$ is a complete intersection of codimension
$r+1$ in $D(h\cdot g)$.
The result follows since the Zariski topology is quasi-compact.
\end{proof}
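To continue the hypothetical example from the sketch following Lemma \ref{locus order 2}: if we replace $f_2=y^2-x^3$ there by $f_2=y-x^2$, the derivatives relative to the parameters $x,y$ become $-2x$ and $1$, so that $\Sing(I_{X,W},2)=\emptyset$, and we may take $g=\partial f_2/\partial X_2=1$. The lemma then yields the new ambient space $W'=V(f_1,f_2)$ on the chart $D(h\cdot g)={\mathbb A}^3_{\mathbb K}$, a smooth complete intersection of codimension $2$. Since $X=W'$ in this example, the iteration stops here and $X$ is smooth.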
\begin{rem}
\label{rem-get-rad-ideal}
In the situation of the proof above, Hironaka's criterion actually
allows us to conclude that the affine scheme $$\Spec \left(\mathcal O_{\mathbb A^n_{{\mathbb K}}}(D(h\cdot g))/
\langle f_1,\dots, f_{r},f_{i}\rangle \mathcal O_{\mathbb A^n_{{\mathbb K}}}(D(h\cdot g))\right)$$
is smooth. In particular, $\langle f_1,\dots, f_{r},f_{i}\rangle \mathcal O_{\mathbb A^n_{{\mathbb K}}}(D(h\cdot g))$
is a radical ideal.
\end{rem}
\begin{rem}
At each iteration step of our process, we start from embeddings of type $X\cap D(q)\subset
W\cap D(q)\subset \mathbb A^n_{{\mathbb K}}$ rather than from an embedding $X\subset
W\subset \mathbb A^n_{{\mathbb K}}$. This is not a problem: When we use
the trick of Rabinowitsch to regard $X\cap D(q)\subset W\cap D(q)$ as affine varieties
in $\mathbb A^{n+1}_{{\mathbb K}}$, and apply Lemma~\ref{lem covering by minors} in
$\mathbb A^{n+1}_{{\mathbb K}}$, the extra variable in play will not
appear in the local systems of parameters.
\end{rem}
\begin{rem}[The Role of the Ground Field]
\label{rem:k-versus-K}
Our algorithms essentially rely on Gr\"obner basis techniques (and not, for example, on polynomial factorization).
While the geometric interpretation of what we do is concerned with an algebraically closed
field ${\mathbb K}$, the algorithms will be applied to ideals which are defined over a
subfield ${\Bbbk}\subset {\mathbb K}$ whose arithmetic can be handled by a computer. This makes
sense since any Gr\"obner basis of an ideal $J\subset {\Bbbk}[x_1,\dots, x_n]$ is also a Gr\"obner
basis of the extended ideal $J^e=J{\mathbb K}[x_1,\dots, x_n]$. Indeed, if $J$ is given by generators with coefficients
in ${\Bbbk}$, all computations in Buchberger's Gr\"obner basis algorithm are carried through over ${\Bbbk}$.
In particular, if a property of ideals can be checked using Gr\"obner bases, then $J$ has this
property iff $J^e$ has this property. For example, if $J$ is equidimensional, then so is $J^e$;
similarly, if the condition of the Jacobian criterion is fulfilled for $J$,
then it is also fulfilled for $J^e$.
The standard reference for theoretical results on extending the ground field is \cite[VII, \S 11]{ZSCA}.
To give another example, if ${\Bbbk}$ is perfect, and $J$ is a radical ideal, then $J^e$ is a radical ideal, too.
\end{rem}
\begin{notation}
In what follows, ${\Bbbk}\subset {\mathbb K}$ always denotes a field extension with ${\Bbbk}$ perfect
and ${\mathbb K}$ algebraically closed. If $I\subset {\Bbbk}[\boldsymbol x] ={\Bbbk}[x_1,\dots, x_n]$ is an ideal, then $V(I)$
stands for the vanishing locus of $I$ in $\mathbb A^n({\mathbb K})$. Similarly, if
$q\in {\Bbbk}[\boldsymbol x]$, then $D(q)$ stands for the principal open set defined by $q$
in $\mathbb A^n({\mathbb K})$.
\end{notation}
We are now ready to specify the smoothness test. We start from ideals
$$I_W =\langle f_1,\dots, f_r\rangle\subset I_X =\langle f_1,\dots, f_s\rangle\subset {\Bbbk}[\boldsymbol x]$$
and a polynomial $q \in {\Bbbk}[\boldsymbol x]$ such that
\begin{itemize}
\item[\phantom{$(\diamondsuit)$}]\quad $I_X$ is equidimensional and radical,
\item[$(\diamondsuit)$]\quad $I_W\;\!\mathcal O_{\mathbb A^n_{{\mathbb K}}}(D(q))$ is a radical ideal of codimension $r$,
\item[\phantom{$(\diamondsuit)$}]\quad $V(I_W)\cap D(q)$ is smooth.
\end{itemize}
Note that under these conditions, $V(I_W)\cap D(q)\subset D(q)$ is a complete intersection.
The overall structure of our algorithm then consists of the following main components:
\begin{enumerate}[leftmargin=8mm]
\item \emph{Covering by principal open sets}. Find a set $L$ of $r\times r$ submatrices $M$ of the Jacobian matrix
of $f_1,\dots,f_r$ such that all minors $\det(M)$ are non-zero, and such that
\[
q \in \sqrt{ \left\langle f_{1},\ldots,f_{r}\right\rangle+ \left\langle\det(M)\mid M\in L\right\rangle}.
\]
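(Both this condition and the analogous condition in step (3) below are radical membership tests; a sketch of how such tests are carried out in practice follows this list.)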
\item \emph{Local system of parameters}. By Lemma \ref{lem covering by minors}, for each $M\in L$, there
is a consistent choice of a regular system of parameters associated to $M$. Without loss of
generality, we can assume that $M$ involves the variables $x_{d+1},\ldots,x_n$.
Then we may choose the regular system of parameters to be induced by $x_{1},\ldots,x_d$.
\item \emph{Derivatives relative to the local system of parameters.} Find the matrix of cofactors $A$ of $M$ with $A\cdot M=\det(M)\cdot E_{r}$ and let
\[
\widehat{F}:=\left(
\begin{array}
[c]{c}
\tilde{f}_{1}\\
\vdots\\
\tilde{f}_{r}\\
\hat{f}_{r+1}\\
\vdots\\
\hat{f}_{s}
\end{array}
\right) =
\begin{pmatrix}
A & 0\\
0 & \det(M)\cdot E_{s-r}
\end{pmatrix}
\cdot\left(
\begin{array}
[c]{c}
f_{1}\\
\vdots\\
f_{s}
\end{array}
\right) \ .
\]
By Lemma \ref{locus order 2}, the locus of order $\geq2$ is empty if and only~if
\[
q\in\sqrt{\left\langle f_{1},\ldots,f_{s},\partial f_{i}/\partial X_{j}\mid
r+1\leq i\leq s,\ 1\leq j\leq d\right\rangle },
\]
where, by Construction \ref{con deriv}, the ideal of the derivatives $\partial f_{i}/\partial X_{j}$ is generated by the entries of the left
lower block of the Jacobian matrix of $\widehat{F}$ after the row reduction\\%
\[\scalebox{0.85}{
$\mathcal{J}(\widehat{F})=\left(
\begin{tabular}
[c]{c|ccc}
& $\det(M)$ & & $0$\\
$\ast$ & & $\ddots$ & \\
& $0$ & & $\det(M)$\\\hline
$\ast$ & & $\ast$ &
\end{tabular}
\right) \mapsto\left(
\begin{tabular}
[c]{c|ccc}
& $\det(M)$ & & $0$\\
$\ast$ & & $\ddots$ & \\
& $0$ & & $\det(M)$\\\hline
$\ast$ & $0$ & $\cdots$ & $0$
\end{tabular}
\right)$
}\]
\\
\item \emph{Descent in codimension of $X$.} Consider a representation
\[
q^{m}=\sum\alpha_{i,j}\cdot\partial f_{i}/\partial X_{j}\operatorname{mod}
\left\langle f_{1},\ldots,f_{s}\right\rangle.
\]
Suppose $\alpha_{i,j}\neq0$ and $\partial f_{i}/\partial X_{j}\neq 0$. Then by Lemma \ref{lem max contact}, we can pass to a new variety $W' = W\cap V(f_i)\supset X$
and a new principal open set $D(q')$ with $$q' = q\cdot h\cdot\partial f_{i}/\partial X_{j},$$ such that $W'\cap D(q')\subset D(q')$ is a smooth complete intersection of codimension $r+1$.
In this way, we obtain a covering of $X\cap D(q)$ by principal open sets $D(q')$ with $X\cap D(q')\subset W'\cap D(q')$ and iterate.
\end{enumerate}
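The radical membership conditions in steps (1) and (3) are checked by a single Gr\"obner basis computation via the trick of Rabinowitsch: for an ideal $J\subset{\Bbbk}[\boldsymbol x]$ and a polynomial $q\in{\Bbbk}[\boldsymbol x]$ we have $q\in\sqrt{J}$ if and only if $1\in J{\Bbbk}[\boldsymbol x,t]+\langle 1-tq\rangle$. The following {\sc Singular} lines (with hypothetical example data) are a minimal sketch of such a test:
\begin{verbatim}
ring R = 0,(x,y,t),dp;
ideal J = x^2*y;             // example ideal; its radical is <x*y>
poly q = x*y;                // q lies in rad(J), but not in J itself
// Rabinowitsch: q in rad(J)  <=>  1 in J + <1-t*q>
int inRad = (reduce(1,std(J+ideal(1-t*q))) == 0);
inRad;                       // prints 1
\end{verbatim}
The membership tests in Algorithms \ref{alg deltacheck} and \ref{alg embJac} below are of exactly this form.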
Algorithm \ref{alg smoothnesstest} \texttt{HybridSmoothnessTest} collects
the main steps of the smoothness test.
It calls Algorithm \ref{alg deltacheck} \texttt{DeltaCheck}
to check whether $\Sing(I_{X,W},2) \cap D(q) \not= \emptyset$. In this case, it
returns \texttt{false} and terminates. Otherwise, it calls
Algorithm \ref{alg descendembsmooth} \texttt{DescentEmbeddingSmooth} which implements
Lemma \ref{lem max contact}. The next step is to recursively apply the \texttt{HybridSmoothnessTest}
in the resulting embedded situations.
If the codimension reaches a specified value, the algorithm invokes a relative version of the Jacobian
criterion by calling Algorithm \ref{alg embJac} \texttt{EmbeddedJacobian}.
\begin{algorithm}[h]
\caption{\texttt{HybridSmoothnessTest}}
\label{alg smoothnesstest}
\begin{algorithmic}[1]
\REQUIRE Ideals $I_W =\langle f_1,\dots, f_r\rangle\subset I_X =\langle f_1,\dots, f_s\rangle\subset {\Bbbk}[\boldsymbol x]$
and a polynomial $q \in {\Bbbk}[\boldsymbol x]$ such that $(\diamondsuit)$ holds; a non-negative integer $c$.
\ENSURE {\texttt true} if $V(I_X) \cap D(q)$ is smooth, {\texttt false} otherwise.\\
\IF {$\operatorname{dim}(I_W)-\operatorname{dim}(I_X) = 0$}
\RETURN {\texttt true}
\ENDIF
\IF {$\operatorname{dim}(I_W)-\operatorname{dim}(I_X) \leq c$}
\RETURN {\texttt{EmbeddedJacobian}($I_W$,$I_X$)}
\ENDIF
\IF {{\bf not} $\operatorname{\texttt{DeltaCheck}}(I_W,I_X,q)$}
\RETURN {\texttt false}
\ENDIF\label{Jac1}
\STATE $L=\operatorname{\texttt{DescentEmbeddingSmooth}}(I_W,I_X,q)$\label{Jac2}
\FORALL {($I_{W'},I_{X},q')\in L$}
\IF {{\bf not} $\texttt{HybridSmoothnessTest}(I_{W'},I_{X},q',c)$ }
\RETURN {\texttt false}
\ENDIF
\ENDFOR
\RETURN {\texttt true}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h]
\caption{\texttt{DeltaCheck} for an affine chart}
\label{alg deltacheck}
\begin{algorithmic}[1]
\REQUIRE Ideals $I_W =\langle f_1,\dots, f_r\rangle\subset I_X =\langle f_1,\dots, f_s\rangle\subset {\Bbbk}[\boldsymbol x]$
and a polynomial $q \in {\Bbbk}[\boldsymbol x]$ such that $(\diamondsuit)$ holds.
\ENSURE {\texttt true} if $\Sing(I_{X,W},2) \cap D(q) = \emptyset$, {\texttt false} otherwise.
\\
\hspace{-0.5cm}\hbox{\{ \emph{First handle the case $I_W=\langle 0\rangle$, $q=1$; then
$x_1,\dots,x_n$ induce \}}} \\ \hspace{-0.5cm}\hbox{\{ \emph{a local system of parameters at every point of $W$ \}}}
\IF {$I_W=\langle 0\rangle$ {\bf and }$q=1$}
\IF {$1 \in \langle f_1,\dots,f_s, \frac{\partial f_1}{\partial x_1},\dots, \frac{\partial f_s}{\partial x_n}\rangle$}
\RETURN {\texttt true}
\ELSE
\RETURN {\texttt false}
\ENDIF
\ENDIF
\hspace{-0.5cm}\hbox{\{\emph{ Initialization \}}}
\STATE $Q=\langle 0\rangle$
\STATE $L_1 = \{ r\times r-{\rm submatrices\; } M \;{\rm of }\;\operatorname{Jac}(I_W)\; \mid \operatorname{det}(M)\neq0 \operatorname{mod} I_X\}$
\hspace{-0.5cm}\hbox{\{\emph{ Main Loop: Cover by complements of minors \}}}
\WHILE {$L_1\neq \emptyset$ {\bf and} $q \not\in Q$}\label{line while delta}
\STATE choose $M \in L_1$
\STATE $L_1= L_1 \setminus \{M\}$
\STATE $q_{new}=\operatorname{det}(M)$
\STATE $Q=Q+\langle q_{new}\rangle$
\STATE compute the $r\times r$ cofactor matrix $A$ with
$$A \cdot M = q_{new} \cdot \operatorname{Id}_{r}$$
\hspace{-1.cm}\hbox{\{\emph{ Test $\Sing(I_{X,W},2) \subset V(q_{new}) \cup V(q)$ \}}}
\STATE $C_M= I_X + J_X$, where\\
$J_X = \left\langle q_{new} \cdot \frac{\partial f_i}{\partial x_j}
-\sum\limits_{k \text{ column of }M \atop
l\text{ row of } M}
\frac{\partial f_l}{\partial x_j} A_{lk}
\frac{\partial f_i}{\partial x_k} \hspace{2mm}\middle|\hspace{2mm}
\substack{ r+1 \leq i \leq s
\\j \text{ not a column of } M} \right\rangle$
\label{line derivatives}
\IF {$q_{new}\cdot q \not\in \sqrt{C_M}$}
\RETURN \texttt false
\ENDIF
\ENDWHILE
\RETURN {\texttt true}
\end{algorithmic}
\end{algorithm}
\begin{rem}
Modifying the approach discussed in the components (3) and (4) above, Algorithm \ref{alg descendembsmooth} computes
the products $h \cdot \frac{\partial f_i}{\partial X_j}$
directly as appropriate $(r+1) \times (r+1)$ minors of the Jacobian matrix in step 5 of Algorithm \ref{alg descendembsmooth}.
This exploits the well-known fact that
subtracting multiples of one row from another one as in Gaussian elimination does not change the determinant of a square matrix
-- or in our case the maximal minors
of the $(r+1) \times n$ matrix.
In practical applications, it can be useful to first check whether there is an $ r \times r$ minor, say $N$, of the Jacobian matrix
of $I_W$ which divides $q$. If this is the case, we can restrict in step 6 of Algorithm \ref{alg descendembsmooth} to those minors in $I_{min,i}$ which involve $N$, since these minors already form a generating system of~$I_{min,i}$.
Combining these two
modifications considerably enhances the efficiency of the hybrid smoothness test as presented in \cite{smoothtst}.
\end{rem}
\begin{algorithm}[h]
\caption{\texttt{DescentEmbeddingSmooth}}
\label{alg descendembsmooth}
\begin{algorithmic}[1]
\REQUIRE Ideals $I_W =\langle f_1,\dots, f_r\rangle\subset I_X =\langle f_1,\dots, f_s\rangle\subset {\Bbbk}[\boldsymbol x]$
and a polynomial $q \in {\Bbbk}[\boldsymbol x]$ such that $(\diamondsuit)$ and $D(q)\cap \Sing(I_{X,W},2)=\emptyset$ hold.
\ENSURE Triples $(I_{W_i},I_{X}, q_i)$
such that $I_{W_i}\subset I_{X}$ together with $q_i$ satisfy $(\diamondsuit)$, and such that
$V(I_X) \cap D(q) \subset \bigcup_i (V(I_{W_i}) \cap D(q_i))$.
\hspace{-0.5cm}\hbox{\{ \emph{Direct descent: no need to find an open covering of $V(I_X)\cap D(q)$} \}}
\IF { $\Sing(I_{V(f_i),W},2) \cap D(q) = \emptyset$ and $q \notin \sqrt{\langle f_1,\ldots,f_r,f_i \rangle} $ for some $i \in \{r+1,\ldots,s\}$}
\STATE $I_{W_1}=\langle f_1,\dots,f_r,f_i\rangle$
\RETURN \{($I_{W_1}$,$I_{X}$,$q$)\}
\ENDIF
\hspace{-0.5cm}\hbox{\{ \emph{Descent by constructing an open covering of $V(I_X)\cap D(q)$} \}}
\FOR {$i \in \{r+1,\ldots,s\}$}
\STATE $I_{min,i}=$ $\langle (r+1)\times (r+1)$ minors of the Jacobian matrix of
$f_1,\ldots,f_r,f_i\rangle$\\
\ENDFOR
\STATE \label{line cover dec} from among the generators of the ideals
$I_{min,1},\ldots,I_{min,s}$, find minors $h_1,\ldots,h_t \neq 0 \operatorname{mod} I_X$ such that
$q \in \sqrt{I_X+\langle h_1,\ldots,h_t \rangle}$ \\
\STATE fix $i_1,\ldots,i_t \in \{r+1,\ldots, s\}$
with $h_j \in I_{min,i_j}$\\
\FOR {$j=1,\ldots,t$}
\STATE $I_{W_j}=\langle f_1,\ldots,f_r,f_{i_j} \rangle$
\ENDFOR
\RETURN $\{(I_{W_1},I_{X},q\cdot h_1),\ldots,(I_{W_t},I_{X},q\cdot h_t)\}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[h]
\caption{\texttt{EmbeddedJacobian}}
\label{alg embJac}
\begin{algorithmic}[1]
\REQUIRE Ideals $I_W =\langle f_1,\dots, f_r\rangle\subset I_X =\langle f_1,\dots, f_s\rangle\subset {\Bbbk}[\boldsymbol x]$
and a polynomial $q \in {\Bbbk}[\boldsymbol x]$ such that $(\diamondsuit)$ holds.
\ENSURE {\texttt true} if $V(I_X)\cap D(q)$ is smooth, {\texttt false} otherwise.\\
\STATE $Q=\langle 0\rangle$
\STATE \label{line Jac M} $L = \{ r\times r-{\rm submatrices\; } M \;{\rm of }\;\operatorname{Jac}(I_W)\; \mid \operatorname{det}(M)\neq0\operatorname{mod} I_X\}$
\hspace{-0.5cm}\hbox{\{\emph{ Read off regular system of parameters for non-trivial $h$ \}}}
\IF {$\operatorname{det}(M)\! \mid \! q$ for some $M \in L $}
\STATE delete all other elements from $L$
\ENDIF
\hspace{-0.5cm}\hbox{\{\emph{ Covering by complements of the minors \}}}
\WHILE {$L\neq \emptyset$ {\bf and} $q \not\in Q$}\label{line while jac}
\STATE choose $M \in L$
\STATE $L= L \setminus \{M\}$
\STATE \label{line det jac}$q_{new}=\operatorname{det}(M)$
\STATE $Q=Q+\langle q_{new}\rangle$
\STATE \label{line adj jac}compute the $r\times r$ cofactor matrix $A$ with
$$A \cdot M = q_{new} \cdot E_{r}$$
\hspace{-1.0cm}\hbox{\{\emph{ Jacobian matrix of $I_X$ w.r.t. local system of parameters for $I_W$ \}}}
\STATE $Jac =
\left(q_{new} \cdot \frac{\partial f_i}{\partial x_j}
-\sum\limits_{k \text{ a column of }M \atop
l\text{ a row of } M}
\frac{\partial f_l}{\partial x_j} A_{lk}
\frac{\partial f_i}{\partial x_k} \right)\in {\Bbbk}[\boldsymbol x]^{(s-r)\times (n-r)}$\\
where $r+1\leq i\leq s$ and $j$ is not a column of $M$
\STATE $c=\codim\left(V(I_X)\cap D(q)\subset V(I_W)\cap D(q)\right)$
\STATE $J = I_X + I_{m}$, where
$I_{m}=\langle
c\times c-$minors of $Jac$ $\rangle$
\IF {$q_{new} \cdot q \not\in \sqrt{J}$}\label{line false jac}
\RETURN \texttt false
\ENDIF
\ENDWHILE
\RETURN {\texttt true}
\end{algorithmic}
\end{algorithm}
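In the absolute case $I_W=\langle 0 \rangle$ and $q=1$, the check performed by Algorithm \ref{alg embJac} specializes to the usual Jacobian criterion. The following {\sc Singular} lines are a minimal sketch of this special case, reusing the hypothetical curve from the sketch following Lemma \ref{locus order 2}:
\begin{verbatim}
ring R = 0,(x,y,z),dp;
ideal IX = z-x*y, y^2-x^3;        // the singular space curve from before
int c = nvars(R)-dim(std(IX));    // codimension of X in A^3, here c = 2
ideal Jmin = minor(jacob(IX),c);  // c x c minors of the Jacobian matrix
// X is smooth iff the minors have no common zero on X;
// here the normal form of 1 is nonzero, so X is singular:
reduce(1,std(IX+Jmin));
\end{verbatim}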
\begin{rem}
In explicit experiments, we typically arrive at an ideal $I_X$ by using a specialized construction method
which is based on geometric considerations. From these considerations, some properties of $I_X$
might be already known. For example, it might be clear that $V(I_X)$ is irreducible. Then there
is no need to check the equidimensionality of $I_X$.
If we apply the algorithm without testing
whether $I_X$ is radical, and the algorithm returns {\tt true}, then $I_X$ must be radical.
This is clear from the Jacobian criterion and the fact that Hironaka's criterion checks smoothness
in the scheme theoretical sense (see Remark \ref{rem-get-rad-ideal}).
\end{rem}
\begin{rem}
If we do not have some specific pair $(I_W,q)$ in mind, we can always start Algorithm \ref{alg smoothnesstest} with $(I_W,q)=(\langle 0 \rangle, 1)$. In this case, the algorithm determines smoothness of the whole affine variety $V(I_X)\subset \mathbb A^n({\mathbb K})$.
\end{rem}
\section{Petri Nets and the GPI-Space environment}\label{sec GPI}
\subsection{GPI-Space and Task-Based Parallelization}
GPI-Space \cite{GPI} is a task-based workflow management system for high performance environments. It is based on David Gelernter's
approach of separating coordination and computation \cite{gelernter}, which makes dependencies explicitly visible
and is beneficial in many respects. We illustrate this by discussing some of the concepts realized in GPI-Space:
\vskip0.1cm
\begin{compactitem}[leftmargin=4mm]
\item The coordination layer of GPI-Space uses a separate, specialized language, namely Petri nets \cite{petri},
which leaves the optimization and rewriting of coordination activities to experts for data
management rather than burdening experts for computations in a particular domain of
application (such as algebraic geometry) with these tasks.
\item Complex environments remain hidden from the domain experts and are managed automatically. This includes automatic parallelization,
automatic cost optimized data transfers and latency hiding, automatic adaptation to dynamic changes in the environment, and resilience.
\item Domain experts can use and mix arbitrary implementations of their algorithmic solutions. For the experiments done for this paper,
GPI-Space manages several (many) instances of {\sc Singular} and, at the same time, other code written in \verb!C++!.
\item The use of virtual memory allows one not only to scale applications beyond the limitations imposed by a single machine, but also to
couple legacy applications that normally can only work together by writing and reading files. Also, the switch between low latency, low capacity memory (like DRAM) and high latency, high capacity memory (like a parallel file system) can be done without changing the application.
\item Optimization goals like ``minimal time to solution'', ``maximum throughput'', or ``minimal energy consumption'' are achieved independently
from the domain experts' implementation of their core algorithmic solutions.
\item Patterns that occur in the management of several applications are explicitly available and can be reused. Vice versa,
computational core routines can be reused in different management schemes. Optimization on either side is beneficial
for all applications that use the respective building blocks.
\end{compactitem}
\vskip0.1cm
\noindent
GPI-Space consists of three main components:
\begin{compactitem}[leftmargin=4mm]
\item A distributed, resilient, and scalable runtime system for huge dynamic environments that is responsible for managing the available resources,
specifically the memory resources and the computational resources.
The scheduler of the runtime system assigns activities to resources with respect to both the needs
of the current computations and the overall optimization goals.
\item A Petri net based workflow engine that manages the full application state and is responsible for automatic parallelization and dependency tracking.
\item A virtual memory manager that allows different activities and/or external programs to communicate and share partial results.
The asynchronous data transfers are managed by the runtime system rather than the application itself, and synchronization is done in a
way that aims at hiding latency.
\end{compactitem}
\vskip0.1cm
Of course, the above ideas are not exclusive to GPI-Space -- many other systems exist that follow similar strategies. In the last few years,
task-based programming models have been receiving much attention in the field of high performance computing. They are realized in systems such as
OmpSs \cite{ompss}, StarPU \cite{starpu}, and PaRSEC \cite{parsec}. All these systems have in common that they do explicit data
management and optimization in favor of their client applications. The differences are in their choice of the coordination language, in
their choice of the user interface, and in their choice of how general or how specific they are.
It is widely believed that task-based systems are a promising approach to programming the current and upcoming very large and very complex
supercomputers in order to enable domain experts to get a significant
fraction of the theoretical peak performance \cite{dongarra, sppexa}.
As far as we know, there have not yet been any attempts to use systems originating from high performance numerical simulation
in the context of computational algebraic geometry, where the main workhorse is Buchberger's algorithm for computing Gr\"obner bases.
Although this algorithm performs well in many practical examples of interest, its worst case complexity is doubly exponential
in the number of variables \cite{MM}. This seems to suggest that algorithms in computational algebraic
geometry are too unpredictable in their time and memory consumption for the successful integration
into task-based systems. However, numerical simulation also encounters problems of unpredictability, and there is
already plenty of knowledge on how to manage imbalances imposed by machine jitter or different
sizes of work packages. For example, state-of-the-art numerical code for flow computations makes use of mesh adaptation.
This creates large and unpredictable imbalances in computational effort which are addressed on the fly
by the respective simulation framework.
The high performance computing community aims for energy efficient computing, if only because the machines in use are so
big that it would be too expensive not to make full use of the acquired resources. One key factor in achieving good efficiency is perfect load
balancing. Another important topic in high performance computing is the non-intrusive usage of legacy code. GPI-Space is not only
able to automatically balance load, to automatically scale up to huge machines, and to tolerate machine failures; it can also use existing
legacy applications and integrate them without requiring any change to them. This turned out to be the great door opener for
integrating {\sc Singular} into GPI-Space. In fact, in our applications, GPI-Space manages several (many) instances of {\sc Singular} in its existing
binary form (without any need for changes).
Our first experience indicates that the
tools used in high performance computing are mature, both in terms of operations and in
terms of their capability to manage complex applications from symbolic computation.
We therefore believe that it is the right time to apply these tools to domains such as computational algebraic geometry.
In GPI-Space, the coordination language is based on Petri nets, which are known to be a good choice because
of their graphical nature, their locality (no global state), their concurrency (no events, just dependencies), and
their reversibility (recomputation in case of failure is possible) \cite{aalst}. Incidentally, these are all properties
that Petri intentionally borrowed from physics for use in computer science \cite{brauer}. Moreover, Petri nets
share many properties with functional languages, especially their well-known advantages of modularity and
direct correspondence to algebraic structures, which qualify them as both powerful and user friendly
\cite{backus, hughes}.
The following section describes in more detail what Petri nets are and why they are a good choice to describe dependencies.
\subsection{Petri Nets}\label{sec petrinet}
In 1962, Carl Adam Petri proposed a formalism to describe concurrent asynchronous systems \cite{petri}.
His goal was to describe systems that allow for adding resources to running computations
without requiring a global synchronization, and he discovered an elegant solution that connects resources with other resources
only locally. Petri nets are particularly interesting since they have the following properties:
\begin{compactitem}
\item They are graphical (hence intuitive) and hierarchical (so that applications can be decomposed into building blocks that are Petri nets themselves).
\item They are well-suited for concurrent environments since there are no events that require a (total) ordering. Instead,
Petri nets are state-based and describe at any point in time the complete state of the application. That locality (of dependencies)
also allows one to apply techniques from term-rewriting to improve (parts of) Petri nets in their non-functional properties,
for example to add parallelism
or checkpointing.
\item They are reversible and enable backward computation: If a failure causes the loss of a partial
result, it is possible to determine a minimal set of computations whose repetition will recover the
lost partial result.
\end{compactitem}
\vskip0.1cm
The advantages of Petri nets as a mathematical modeling language have been summarized very nicely by van der Aalst \cite{aalst}:
They have precise execution semantics that assign specific meanings to the net, serve as the basis for
an agreement, are independent of the tools used, and enable process analysis and solutions.
Furthermore, because Petri nets are not based on events but rather on state transitions, it is possible to
differentiate between activation and execution of an elementary functional unit. In particular, interruption
and restart of the applications are easy. This is a fundamental prerequisite for fault tolerance with respect to hardware
failures. Lastly, van der Aalst notes the availability of mature analysis techniques that, besides proving
correctness, also allow for performance predictions.
\subsubsection{Formal Definitions and Graphical Representation}
\label{subsubsect:def-PN}
Petri nets generalize finite automata by complementing them with
distributed states and explicit synchronization.
\begin{df}
A \define{Petri net} is a triple $\K{P, T, F}$, where $P$ and $T$ are disjoint finite sets,
the sets of \define{places} and \define{transitions}, respectively, and where $F$ is a subset
$F\subset\K{P\times T}\cup\K{T\times P}$, the \define{flow relation} of the net.
\end{df}
This definition addresses the static parts of a Petri net. In addition, there are dynamic aspects
which describe the execution of the net.
\begin{df} A \define{marking} of a Petri net $\K{P, T, F}$ is a function $M:P\to \mathbb{N}$.
If $M(p)=k$, we say that \define{$p$ holds $k$ tokens under $M$}.
\end{df}
To describe a marking, we also write $M = \{(p,M(p)) \mid p\in P, M(p) \neq 0\}$.
\begin{rem}
For our purposes here, given a Petri net $\K{P, T, F}$ together with a marking $M$, we think of the transitions
as algorithms, while the tokens held by the places represent the data (see Section \ref{subsubsect:ext} below for more on this).
Accordingly, given a place $p$ and a transition $t$, we say that $p$ is an \define{input} (respectively
\define{output}) place of $t$ if $(p,t)\in F$ (respectively $(t,p)\in F$).
\end{rem}
A marking $M$ defines the \define{state} of a Petri net. We say that $M$ \define{enables} a transition $t$
and write
$M\overset{t}\longrightarrow$,
if all input places of $t$ hold tokens, that is, $\K{p,t}\in F$ implies $M(p)> 0$.
A Petri net equipped with a marking $M$ is executed by \define{firing} a single transition
$t$ enabled by $M$. This means to consume a token from each input place of $t$, and to add
a token to each output place of $t$. In other words, the firing of $t$ leads to a new marking $M'$, with
$M'(p)=M(p)-\card{\set{\K{p,t}}\cap F}+\card{\set{\K{t,p}}\cap F}$ for all $p\in P$.
Accordingly, we write
$M\overset{t}\longrightarrow M'$, and say that $M'$ is \define{directly reachable} from $M$ (by firing $t$).
Direct reachability defines the (weighted) \define{firing relation
$R\subseteq \mathcal{M}\times T\times \mathcal{M}$}
over all markings $\mathcal{M}$ by $\K{M,t,M'}\in R\iff M\overset{t}\longrightarrow
M'$.
More generally, we say that a marking $M'$ is \define{reachable by $\hat{t}$} from a marking $M$ if there is a
\define{firing sequence $\hat{t}=t_0\cdots{}t_{n-1}$} such that $M=M_0\overset{t_0}\longrightarrow M_1\overset{t_1}\longrightarrow\dots\overset{t_{n-1}}\longrightarrow M_{n}=M'$.
The corresponding graph is called the \define{state graph}.
Fundamental problems concerning state graphs
such as \define{reachability} or \define{coverability} have been subject to many studies, and effective methods have been
developed to deal with these problems \cite{brauer,mayr_reach,lambert_reach,priese_wimmel,karp_miller,kosaraju}.
The static parts of a Petri net are graphically represented by a bipartite directed graph as indicated in the two examples
below.
In such a graph, a marking is visualized
by showing its tokens as dots in the circles representing the places. See Section \ref{subsubsect:ext}
for examples.
\begin{Example}[Data Parallelism in a Petri Net]
\label{ex:1}
The Petri net $\Phi=\K{P,T,F}$ with $P=\set{i,o}$, $T=\set{t}$ and $F=\set{\K{i,t},\K{t,o}}$ is depicted by the graph
\begin{petrinet}
\node[place] (0) {$i$};
\node[transition] (1) [right of=0] {$t$};
\node[place] (2) [right of=1] {$o$};
\path[->]
(0) edge (1)
(1) edge (2)
;
\end{petrinet}
Suppose we are given the marking $M=M_0=\set{\K{i,n}}$ for some $n>0$. Then $t$ is enabled by $M_0$, and firing $t$
means to move one token from $i$ to $o$. This leads to the new marking $M_1=\set{\K{i,n-1},\K{o,1}}$.
Now, if $n>1$, the marking $M_1$ enables $t$ again, and $\Phi$ can fire until the marking
$M'=M_n=\set{\K{o,n}}$ is reached. We refer to this by writing
$M\overset{t^n}\longrightarrow M'$.
Note that with this generalized firing relation, the $n$ incarnations of $t$ have no relation to each other -- conceptually, they fire
all at the same time, that is, in parallel. This is exploited in GPI-Space, and makes much sense if we take into consideration that in
the real world, the transition $t$ would need some time to finish, rather than fire immediately (see Section \ref{subsubsect:ext}
below for how to model time in Petri nets). Data parallelism is nothing else than splitting data into parts and applying the same given function
to each part. This is exactly what happens here: Just imagine that each token in place $i$ represents some part of the data.
\end{Example}
\begin{Example}[Task Parallelism in a Petri Net]
\label{ex:2}
Let $\Psi$ be the Petri net depicted by the graph
\begin{petrinet}
\node[place] (0) {$i$};
\node[transition] (1) [right of=0] {$s$};
\node[invisible] (i) [right of=1] {};
\node[place] (2) [above of=i,node distance=0.5\nd] {};
\node[place] (3) [below of=i,node distance=0.5\nd] {};
\node[transition] (f) [right of=2] {$f$};
\node[transition] (g) [right of=3] {$g$};
\node[place] (4) [right of=f] {$l$};
\node[place] (5) [right of=g] {$r$};
\node[transition] (j) [right of=i,node distance=3\nd] {$j$};
\node[place] (6) [right of=j] {};
\path[->]
(0) edge (1)
(1) edge (2)
(1) edge (3)
(2) edge (f)
(f) edge (4)
(3) edge (g)
(g) edge (5)
(4) edge (j)
(5) edge (j)
(j) edge (6)
;
\end{petrinet}
and consider the marking $M=\set{\K{i,1}}$. Then $\Psi$ can fire $s$ and thereby enable
$f$ \emph{and} $g$. So this corresponds to the situation where different independent algorithms ($f$ and $g$) are
applied to parts (or incarnations) of data. Note that $f$ and $g$ can run in parallel. Just like for the net $\Phi$
from Example \ref{ex:1}, multiple tokens in place $i$ allow for parallelism of $s$ and thereby of $f$, $g$, and $j$
as well. With enough such tokens, we can easily find ourselves in a situation where $s$, $f$, $g$, and $j$
are all enabled at the same time (see again Section \ref{subsubsect:ext}
below for the concept of time in Petri nets).
\end{Example}
To sum this up: Petri nets have the great feature of automatically knowing about \emph{all} activities that can be executed at any given time.
Hence \emph{all} available parallelism can be exploited.
\subsubsection{Extensions of Petri Nets in GPI-Space}
\label{subsubsect:ext}
To model real world applications, the classical Petri net described above needs to be enhanced,
for example to allow for the modeling of time and data. This leads to extensions such as \define{timed}
and \define{coloured Petri nets}. Describing these and their properties in detail goes beyond the scope of this article.
We briefly indicate, however, what is realized in GPI-Space.
{\bf Time.} In the real world, transitions need time to fire (there is no concept of time in the classical Petri net).
In systems modeling, \define{timed Petri nets} are used to predict best or worst case running times. In \cite{heiner_popova},
for example, the basic idea behind including time is to split the firing process into 3 phases:
\begin{enumerate}
\item The tokens are removed from the input places when a transition fires,
\item the transition holds the tokens while working, and
\item the tokens are put into the output places when a transition finishes working.
\end{enumerate}
This implies that a marking as above alone is not enough to describe
the full state of a timed Petri net. In addition, assuming that phases 1 and 3
do not need any time, the description of such a state includes the knowledge of all active transitions
in phase 2 and all tokens still in use. Passing from a standard to a timed Petri net, the behavior of the net is unaffected in the
sense that any state reached by the timed net is also reachable with
the standard net (see again \cite{heiner_popova}).
{\bf Types and Type Safety.}
As already pointed out, in practical applications, tokens are used to represent data. In the classical Petri net, however, tokens
carry no information, except that they are present or absent. It is therefore necessary to extend the classical concept
by allowing tokens with attached data values, called the \define{token colours} (see \cite{CPN}). Formally, in addition
to the static parts of the classical Petri net, a \define{coloured Petri net} comes equipped
with a finite set $\Sigma$ of \define{colour sets}, also called \define{types},
together with a \define{colour function} $C: P\rightarrow \Sigma$ (``all tokens in a given place $p\in P$ represent
data of the same type'').
Now, a \define{marking} is not just a mapping $P \rightarrow \mathbb{N}$ (``the count of the tokens''),
but a mapping $\Delta \rightarrow\mathbb{N}$, where $\Delta =\{(p,c) \mid p\in P, c\in C(p)\}$.
Imagine, for example, that the type $C(p)$ represents certain blocks of data.
Then, in order to properly process the data stored in $c\in C(p)$, we typically need to know \emph{which block}
out of \emph{how many blocks} $c$ is. That is, implementing the respective type means to equip each block
with two integer numbers. We will see below how to realize this in GPI-Space.
Type safety is enforced in GPI-Space by rejecting Petri nets whose flow relation does not respect the imposed types.
More precisely, transitions are enriched by the concept of a \define{port}, which is a typed place holder for
incoming or outgoing connections.
Type safety is in general checked statically; for transitions relying on legacy code,
it is also checked dynamically during execution (``GPI-Space does not trust legacy code'').
{\bf Expression Language.}
GPI-Space includes an embedded programming language which serves a twofold purpose. On the one hand, it
allows for the introduction and handling of user-defined types. The type for blocks of data as discussed above, for example, may be described
by the snippet
\begin{verbatim}
<struct name="block">
<field name="num" type="uint"/>
<field name="max" type="uint"/>
</struct>
\end{verbatim}
Types can be defined recursively. Moreover, GPI-Space offers a special kind
of transition which makes it possible to manipulate the colour of a token. In the above situation, for
instance, the ``next block'' is specified by entering
\begin{verbatim}
${block.num} := ${block.num} + 1
\end{verbatim}
Again, all such expressions are type checked.
The second use made of the embedded language is the convenient handling of ``tiny
computations''. Such computations can be executed directly within the workflow engine rather than
handing them over to the runtime system for scheduling and execution, and
returning the results to the workflow engine.
{\bf Conditions.}
In GPI-Space, the firing condition of a transition can be subject to a logical expression
depending on properties of the input tokens of the transition. To illustrate this, consider again the
net $\Psi$ from Example~\ref{ex:2}, and suppose that the input
place $i$ contains tokens representing blocks of data as above. Moreover suppose
that the transition $s$ is just duplicating the blocks in order to apply $f$ and $g$ to each
block. Now, the transition $j$ typically relies on joining the blocks of output data in
$l$ and $r$ with the same number. This is implemented by adding the condition
\begin{verbatim}
${l.num} :eq: ${r.num}
\end{verbatim}
to $j$. This modifies the behavior of the Petri net in a substantial way: The transition $j$ might stay
disabled, even though there are enough tokens available on all input places. This change in behavior
has quite some effects on the analysis of the net: For example, conflicts\footnote{A
\define{conflict} arises from a place $p$ holding at least one token if $p$ is an input place to more
than one transition, but does not hold enough tokens to fire all these transitions.} might disappear,
while loop detection becomes harder. GPI-Space comes
with some analysis tools that take conditions into account. It is beyond the scope of this paper
to go into detail on how to ensure correctness in the presence of conditions. Note, however, that
the analysis is still possible in practically relevant situations, that is, in situations where a number
of transitions formulate a complete and non-overlapping set of conditions (hence, there are no
conflicts or deadlocks).
\subsubsection{Example: Reduction and Parallel Reduction}\label{ex parallel red}
Parallelism can often be increased by splitting problems into smaller independent problems. This
requires that we combine (computer scientists say: reduce) the respective partial results into
the final result. Suppose, for example, that the partial results are obtained by executing a
Petri net, say, $\Pi$, and that these results are attached as colours to tokens which are all added to
the same place $p$ of $\Pi$. Further suppose that reducing the partial results means to apply an
addition operator $+$. Then the reduction problem can be modeled by the Petri net
\vskip0.2cm
\noindent
\begin{petrinet}
\node[place] (p) {$p$};
\node[transition] (f) [right of=p] {$+$};
\node[place] (s) [right of=f] {$s$};
\path[->]
(p) edge (f)
(f) edge [bend left] (s)
(s) edge [bend left] (f)
;
\end{petrinet}
\vskip0.1cm
\noindent
which fits into $\Pi$ locally as a subnet. The place $s$ holds the sum which is updated as long as
partial results are computed and assigned to the place $p$. The update operation executes $s_{i+1}=s_i+p_i$, where $s_i$
is the current value of the sum on $s$ and $p_i$ is one partial result on $p$. Note that this only makes
sense if $+$ is commutative and associative since the Petri net does not guarantee any order of execution. Then in the
end, the value of the sum on $s$ is, say, $s_0+p_0+\ldots+p_{n-1}$, where $s_0$ is the initial value of the state.
Note that $s_0$ needs to be set up by some mechanism not shown here.
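For instance, if $p$ initially holds three tokens carrying the values $p_0,p_1,p_2$ and $s$ holds $s_0$, then one possible firing sequence updates $s$ to $s_0+p_1$, then to $(s_0+p_1)+p_0$, and finally to $((s_0+p_1)+p_0)+p_2$; another sequence consumes the tokens in a different order, and the final value is independent of this choice precisely because $+$ is assumed to be commutative and associative.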
Often this is not what is wanted, for example because it may be hard to set up an initial state. The modified subnet
\vskip0.2cm
\noindent
\begin{petrinet}
\node[place] (p) {$p$};
\node[transition] (f) [right of=p] {$+$};
\node[place] (s) [right of=f] {$s$};
\node[transition] (z) [above of=f] {$\downarrow$};
\node[place] (a) [right of=z] {$\bullet$};
\path[->]
(p) edge (f)
(a) edge (z)
(p) edge [bend left] (z)
(z) edge [bend left] (s)
(f) edge [bend left] (s)
(s) edge [bend left] (f)
;
\end{petrinet}
\vskip0.1cm
\noindent
computes $p_0+\ldots+p_{n-1}$ on $s$, and does not require any initial state. The first execution
of this net fires the transition $\downarrow$ which just moves the single available token from $p$ to $s$,
disabling itself. The transition $+$ is not enabled as long as $\downarrow$ has not yet fired, so there is no
conflict between $\downarrow$ and $+$.
It is nice to see that Petri nets allow for local rewrites, local in the sense that no knowledge about the surrounding
net is required in order to prove the correctness of the rewrite operation. Note, however, that both Petri nets above expose no
parallelism: Whenever $+$ fires, the sum on $s$ is used, and no two incarnations of $+$ can run at the same time.
The modified subnet
\vskip0.2cm
\noindent
\begin{petrinet}
\node[invisible] (0) {};
\node[transition] (l) [above of=0] {$\to$};
\node[transition] (r) [below of=0] {$\to$};
\node[place] (d) [left of=0,node distance=0.33\nd] {};
\node[place] (u) [right of=0,node distance=0.33\nd] {$\bullet$};
\node[place] (p) [left of=d] {$p$};
\node[place] (x) [right of=l] {$s$};
\node[place] (y) [right of=r] {$r$};
\node[transition] (f) [right of=u] {$+$};
\path[->]
(d) edge [bend right] (r)
(r) edge [bend right] (u)
(u) edge [bend right] (l)
(l) edge [bend right] (d)
(p) edge [bend left] (l)
(p) edge [bend right] (r)
(l) edge (x)
(r) edge (y)
(x) edge [bend left] (f)
(y) edge [bend right] (f)
;
\draw[->]
(f) to [out=-45,in=10] ($(r)+(0.66\nd,-0.66\nd)$) to [out=190,in=-90] (p)
;
\end{petrinet}
\vskip0.1cm
\noindent
shows a different behavior. Now, the tokens from $p$ are distributed on the two places
$s$ and $r$. As soon as both $s$ and $r$ hold a token, one incarnation
of $+$ can fire. At the same time, the transitions $\to$ can continue
to move tokens to $s$ and $r$, enabling $+$, and finally leading to
multiple incarnations of $+$ running at the same time. Note that the
output of $+$ is fed back to $p$, which is natural since it is just another partial result.
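Semantically, this subnet performs a pairwise reduction in which several incarnations of $+$ may run concurrently. The following Python sketch (again only an illustration, processing the tokens in rounds with a thread pool instead of fully asynchronously as GPI-Space would) mimics this behavior:
\begin{verbatim}
from concurrent.futures import ThreadPoolExecutor

def parallel_reduce(partial_results, op=lambda a, b: a + b, workers=4):
    """Pairwise reduction: two partial results are combined by one
    incarnation of +, and the result is fed back to p as a new partial
    result, until a single value remains."""
    p = list(partial_results)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(p) > 1:
            pairs = list(zip(p[0::2], p[1::2]))       # pair tokens from s and r
            leftover = [p[-1]] if len(p) % 2 else []  # unpaired token stays on p
            p = list(pool.map(lambda ab: op(*ab), pairs)) + leftover
    return p[0]

print(parallel_reduce(range(8)))   # 28
\end{verbatim}
In particular, any two of the additions within one round may execute at the same time, which is exactly the parallelism the subnet exposes.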
Altogether, this example shows how Petri nets can be used as a compact and executable specification
of the expected behavior, which can then be changed gradually to obtain different non-functional properties.
\section{Modelling the Smoothness Test as a Petri Net}\label{sec Petri smooth}
Exploiting the inherently parallel structure of the hybrid smoothness test within GPI-Space requires
a reformulation of our algorithms in the language of Petri nets. This reformulation
also makes the possible concurrency explicit, which is then automatically exploited by GPI-Space.
The Petri net $\Gamma$ below
\vskip0.2cm
\noindent
\begin{petrinet}
\node[place] (i) {$i$};
\node[transition] (t) [right of=i] {$t$};
\node[place] (1) [right of=t] {};
\node[transition] (r) [below of=1] {$r_t$};
\node[invisible] (0) [right of=1] {};
\node[transition] (d) [above of=0] {$d$};
\node[transition] (j) [below of=0] {$j$};
\node[place] (2) [right of=d] {};
\node[place] (3) [right of=j] {};
\node[invisible] (x) [right of=2] {};
\node[transition] (s) [above of=x,node distance=0.5\nd] {$s$};
\node[transition] (h) [below of=s] {$h_d$};
\node[transition] (g) [below of=h] {$h_j$};
\node[transition] (q) [below of=g] {$r_j$};
\node[invisible] (y) [below of=h,node distance=0.5\nd] {};
\node[place] (o) [right of=y] {$o$};
\path[->]
(i) edge (t)
(t) edge (1)
(1) edge (r)
(1) edge (d)
(1) edge (j)
(d) edge (2)
(j) edge (3)
(2) edge (s)
(2) edge (h)
(3) edge (g)
(3) edge (q)
(h) edge (o)
(g) edge (o)
(s) edge [bend right=25] node [above] {} (i)
;
\end{petrinet}
\vskip0.1cm
\noindent
is a representation of the hybrid smoothness test as summarized in Algorithm~\ref{alg smoothnesstest}.
A computation starts with one token on the input place $i$, representing a triple $(I_W, I_X, q)$. At the top level,
we will typically start with $I_W = \langle 0 \rangle$ and $q = 1$, that is, with $W=\mathbb{A}^n_{{\mathbb K}}$.
Transition $t$ performs the check for (local) equality as in step 1 of Algorithm \ref{alg smoothnesstest}. Its output token represents, in addition to a copy of the input triple, a flag indicating the result of the check. By the use of conditions, it is ensured that the token will enable exactly one of the subsequent transitions. If the result of the check is \texttt{true}, which can only happen for tokens produced at the deepest level of recursion,
then the variety is smooth in the current chart, and no further computation is required in this chart. In this case, the token will be removed by transition $r_t$. If the result is \texttt{false}, then the next action depends on whether the prescribed codimension limit $c$ in step 3 of Algorithm \ref{alg smoothnesstest} has been reached.
If the codimension of $X$ in $W$ is $\leq c$, then transition $j$ will fire, which corresponds to executing Algorithm \ref{alg embJac} \texttt{EmbeddedJacobian}.
If the Jacobian check gives \texttt{true}, the variety is smooth in the current chart, and the token will be removed by transition $r_j$.
If the Jacobian check gives \texttt{false}, then the variety is not smooth. The transition $h_j$ will then add a token with the flag \texttt{false}
to the output place $o$. Here, the letter $h$ stands for 'Heureka', the Greek term for 'I have found'. If a Heureka occurs, all
remaining tokens except that one on $o$ are removed by clean-up transitions not shown in $\Gamma$, no new tasks are started, and
all running work processes are terminated.\footnote{At the current stage, the concept of a Heureka is not yet fully supported by both
GPI-Space and {\sc{Singular}}, and is replaced in our implementation by a work-around.}
If the codimension of $X$ in $W$ is larger than $c$, then transition $d$ will fire, which corresponds to executing Algorithm \ref{alg deltacheck}
\texttt{DeltaCheck} (considered as a black box at this point). The ensuing output token represents, in addition to a copy of $(I_W, I_X, q)$, a flag
indicating the result of \texttt{DeltaCheck}. If this result is \texttt{true}, then a descent to an ambient space of dimension one less is necessary.
In this case, transition~$s$ fires, performing Algorithm \ref{alg descendembsmooth} \texttt{DescentEmbeddingSmooth}. This algorithm outputs
a list of triples $(I_{W'},I_{X},q')$, each of which needs to be fed back to place $i$ for further processing. Note, however, that in the formal
description of Petri nets in Section \ref{sec petrinet}, we do not allow a single firing of a transition to add more than one token to a
single place. To model the situation described above in terms of a Petri net, we therefore introduce the subnet
\vskip0.2cm
\noindent
\begin{petrinet}
\node[transition] (s) {$s$};
\node[place] (1) [right of=s] {};
\node[transition] (e) [right of=1] {$e$};
\node[place] (i) [right of=e] {$i$};
\node[transition] (x) [below of=1] {$x$};
\path[->]
(s) edge (1)
(1) edge [bend right] (e)
(e) edge [bend right] (1)
(e) edge (i)
(1) edge (x)
;
\end{petrinet}
\vskip0.1cm
\noindent
between $s$ and $i$. Now, when firing, the transition $s$ produces a single output token, which represents a list $L$ of triples as above.
As long as $L$ is non-empty, transition $e$ iteratively removes a single element from $L$ and assigns it to a token which is added to place
$i$. Finally, transition $x$ deletes the empty list.
These operations are formulated with expressions and conditions (see Subsection \ref{subsubsect:ext}), and can be parallelized as in Example \ref{ex parallel red}.
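A minimal Python sketch of this logic (our own pseudocode-style rendering; the actual implementation uses GPI-Space expressions and conditions) makes explicit that the guards of $e$ and $x$ are complete and non-overlapping:
\begin{verbatim}
def expand_list(L, place_i):
    """Simulate the subnet between s and i: transition e is guarded by
    'L is non-empty', transition x by 'L is empty'."""
    while True:
        if L:                        # condition of transition e
            triple = L.pop(0)        # remove one element from the list token
            place_i.append(triple)   # add a token carrying it to place i
        else:                        # condition of transition x
            break                    # delete the empty list token

i_tokens = []
expand_list([("IW1", "IX", "q1"), ("IW2", "IX", "q2")], i_tokens)
print(i_tokens)   # two tokens on place i, one per triple
\end{verbatim}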
If, on the other hand, \texttt{DeltaCheck} returns \texttt{false}, then the variety is not smooth. Correspondingly, the transition $h_d$ fires, adding a token with
the flag \texttt{false} to the output place $o$ and triggering a Heureka.
If all tokens within $\Gamma$ have been removed, all charts have been processed without detecting a singularity, and $X$ is smooth. In this case, a token
with flag \texttt{true} will be added\footnote{This is done using some additional places and transitions not shown in $\Gamma$.} to the output place $o$.
Together with the fact that the recursion depth of Algorithm \texttt{DescentEmbeddingSmooth} is limited by the codimension of $X$ in $W$ and that each
instance of it only produces finitely many new tokens, it is ensured that the execution of the Petri net terminates after a finite number of firings with exactly
one token at $o$. GPI-Space automatically terminates if there are no more enabled transitions.
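To make the use of conditions in $\Gamma$ concrete, the branching after transition $t$ amounts to the following complete and mutually exclusive case distinction (a Python sketch with hypothetical field names \texttt{flag} and \texttt{codim}; the actual conditions are written as GPI-Space expressions):
\begin{verbatim}
def route_after_t(token, c):
    """Decide which transition consumes the token produced by t.
    The three conditions are complete and mutually exclusive."""
    if token["flag"]:                  # equality check succeeded
        return "r_t"                   # chart is smooth, discard token
    if token["codim"] <= c:            # codimension limit reached
        return "j"                     # run EmbeddedJacobian
    return "d"                         # otherwise run DeltaCheck

print(route_after_t({"flag": False, "codim": 7}, c=3))  # 'd'
\end{verbatim}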
Note that, in addition to the task parallelism visible in $\Gamma$ (see also Example~\ref{ex:2}), all transitions in $\Gamma$ allow for multiple parallel
instances, realizing data parallelism in the sense of Example~\ref{ex:1}.
So far, we have not yet explained how to model Algorithms \ref{alg descendembsmooth} to \ref{alg embJac},
on which Algorithm~\ref{alg smoothnesstest} is based. Algorithm \ref{alg embJac}\ \texttt{EmbeddedJacobian}, for instance,
has been considered as a black box represented by transition $j$.
Note, however, that this algorithm exhibits a parallel structure of its own:
Apart from updating the ideal $Q$ in step 9 in order to use the condition $q \not\in Q$ as a termination criterion
for the \textbf{while} loop in step 5, the computations within the loop are independent of each other.
Hence, waiving step 9 and the check $q \not\in Q$ is a trivial way of
introducing data parallelism: Replace the subnet
\vskip0.2cm
\noindent
\begin{petrinet}
\node[place] (i) {};
\node[transition] (j) [right of=i] {$j$};
\node[place] (1) [right of=j] {};
\path[->]
(i) edge (j)
(j) edge node {} (1)
;
\end{petrinet}
\vskip0.1cm
\noindent
of $\Gamma$ by the Petri net
\vskip0.2cm
\noindent
\begin{petrinet}
\node[place] (i) {};
\node[transition] (p) [right of=i] {$p$};
\node[place] (1) [right of=p] {};
\node[transition] (j) [right of=1] {$j'$};
\node[place] (2) [right of=j] {};
\path[->]
(i) edge (p)
(p) edge node {} (1)
(1) edge (j)
(j) edge (2)
;
\end{petrinet}
\vskip0.1cm
\noindent
Here, the transition $p$ generates tokens corresponding to the submatrices $M$ of $\operatorname{Jac}(I_W)$ as described in step
\ref{line Jac M} of the algorithm.
Transition $j'$ performs the embedded Jacobian criterion computations in steps \ref{line det jac}
and \ref{line adj jac} to~\ref{line false jac}.
Of course, in this version, the algorithm may waste valuable resources:
There is a potentially large number of tokens generated by $p$ which lead to superfluous calculations further on.
This suggests exploiting the condition $q \not\in Q$ also in the parallel approach. That is,
transition $j'$ should fire only until a covering of $X\cap D(q)$ has been obtained, and then trigger a Heureka
for the \texttt{EmbeddedJacobian} subnet. However, at the time of writing, creating the infrastructure for a local Heureka
is still subject to ongoing development.
To remedy this situation at least partially, our current approach is to first compute all minors and collect them in $Q$,
and then to use a heuristic way\footnote{Radical membership seems to offer a more conceptual way:
With notation as in Algorithm~\ref{alg embJac},
we have $g^m\in Q + I_W$ for some $m$.
Given a representation
$g^m = \sum_i a_i q_i + \sum_j b_j g_j$ with minors $q_i = \operatorname{det}(M_i)\in Q$, the $D(q_i)$ with $a_i\neq 0$
cover $X\cap D(g)$. However, finding such representations relies on
Gr\"obner bases computations and is, hence, not effective.} of iteratively dropping minors as long as $q \in Q$.
\begin{rem}
Both Algorithm \ref{alg deltacheck} \texttt{DeltaCheck} (see step \ref{line while delta}) and Algorithm~\ref{alg descendembsmooth} \texttt{DescentEmbeddingSmooth} (see step \ref{line cover dec}) can be parallelized in a similar fashion.
\end{rem}
The logic in all transitions is implemented in C++, using {\tt libSingular}, the C++-library version of {\sc Singular}, as the computational back-end. Some parts are
written in the {\sc Singular} programming language, in particular those relying on functionality implemented in the {\sc Singular} libraries. In order to transfer the
mathematical data from one work process to another one (possibly running on a different machine), the complex internal data structures need to be serialized. For
this purpose, we use existing functionality of {\sc Singular}, which relies on the so-called ssi-format. This serialization format has been created to efficiently
represent {\sc Singular} data structures, in particular trees of pointers. The mathematical data objects communicated within the Petri net are stored in files located on
the parallel file system BeeGFS\footnote{See \url{https://de.wikipedia.org/wiki/BeeGFS}}, which is accessible from all nodes of the cluster. Alternatively, we could also use the virtual memory layer provided by
GPI-Space. However, on the cluster used for our timings, the speed of the (de)serialization is limited by the CPU and not the underlying storage medium.
The implementation of the hybrid smoothness test can be used through a startup binary, which is suitable for queuing systems commonly used in computer clusters.
Moreover, there is also an implementation of a dynamic module for {\sc Singular}, which allows the user to directly run the implementation from within the {\sc Singular}
user interface. It should be noted that neither the instance of {\sc Singular} providing the user interface nor {\tt libSingular} had to be modified in order to cooperate with GPI-Space.
\section{Applications in Algebraic Geometry and Behaviour of the Smoothness Test}\label{sec timings}
To demonstrate the potential of the hybrid smoothness test and its implementation as described
in Section \ref{sec Petri smooth}, we apply it to problems originating from current research in
algebraic geometry. We focus on two classes of surfaces of general type, which provide good test examples
since their defining ideals are quite typical for those arising in advanced constructions in algebraic geometry:
They have large codimension, and their rings of polynomial functions are Cohen-Macaulay and even Gorenstein.
Due to their structural properties, rings of these types are of fundamental importance in algebraic geometry.
We begin by giving some background on our test examples, and then provide timings and investigate how the
implementation scales with the number of cores.
\subsection{Applications in Algebraic Geometry}\label{sec applications}
The concept of moduli spaces provides geometric solutions to classification problems
and is ubiquitous in algebraic geometry where we wish to classify algebraic varieties
with prescribed invariants. There is a multitude of abstract techniques for
the qualitative and quantitative study of these spaces, without, in the first instance,
taking explicit equations of the varieties under consideration into account. Relying on
equations and their syzygies (the relations between the equations), on the other hand, we may manipulate geometric objects
using a computer. In particular, if an explicit way of constructing a general element of a
moduli space $M$ is known to us, we may detect geometric properties of $M$ by studying
such an element computationally. Deriving a construction is the innovative and often
theoretically involved part of this approach, while the technically difficult part, the
verification of the properties of the constructed objects, is left to the machine.
Arguably, the most important property to be tested here is smoothness. To provide a basic example
of how smoothness affects the properties of the constructed variety, note that a smooth
plane cubic is an elliptic curve (that is, it has geometric genus one), whereas
a singular plane cubic is a rational curve (which has geometric genus zero).
The study of (irreducible smooth projective complex)
surfaces with geome\-tric genus and irregularity $p_g=q=0$ has a rich history,
and is of importance for several reasons, with surfaces of general type providing
particular challenges (see \cite {BHPV}, \cite{zbMATH05974000}). The self-intersection of a
canonical divisor $K$ on a minimal surface of general type with
$p_g=q=0$ satisfies $1\leq K^2\leq 9$, where the upper bound is given by the
Bogomolov-Miyaoka-Yau inequality (see \cite[VII,
4]{BHPV}). Hence, these surfaces belong to only finitely many components
of the Gieseker moduli space for surfaces of general type \cite{MR0498596}.
Interestingly enough, Mumford asked whether their classification can be done by a computer.
Of particular interest among these surfaces are the \emph{numerical Godeaux} and \emph{numerical Campedelli
surfaces}, which satisfy $K^2=1$ and $K^2=2$, respectively. As Miles Reid puts it \cite{ReidOV}, these ``are in some sense the first
cases of the geography of surfaces of general type, and it is somewhat embarrassing that
we are still quite far from having a complete treatment of them''. Their study is
``a test case for the study of all surfaces of general type''.
For our timings, we focus on two specific examples, each defined over a finite prime field ${\Bbbk}$.
Though, mathematically, we are interested in the geometry of the surfaces in characteristic zero,
computations in characteristic $p$ (which are less expensive) are enough to demonstrate
the behavior of the smoothness test.
The first example is a numerical Campedelli surface $X$ with torsion group $\mathbb{Z}/6 \mathbb{Z}$, which has been constructed
in \cite{NP1} (we work over the finite field ${\Bbbk}=\mathbb{Z}/103\mathbb{Z}$ which contains, as required by the construction, a primitive 3rd root of
unity). The construction yields $X$ as a $\mathbb{Z}/6 \mathbb{Z}$-quotient of a covering surface $\widetilde{X}$ which, in turn, is realized as a
subvariety of the weighted projective space $\mathbb{P}_{\mathbb K}(1,1,1,1,1,2,2,2)$, where ${\mathbb K}=\overline{{\Bbbk}}$. That is, the homogeneous coordinate ring of the ambient space
is a polynomial ring with $5$ variables of degree 1 and $3$ variables of degree 2, and the codimension of $X$ in that space is 5.
In fact, $\widetilde{X}$ is constructed from a hypersurface in projective 3-space $\mathbb{P}_{\mathbb K}^{3}$ using Kustin-Miller unprojection \cite{KM}.
This iterative process increases in every iteration step the codimension of a given Gorenstein ring
by one, while retaining the Gorenstein property. See \cite{BP3,BPlib} for an outline and implementation.
We use the hybrid smoothness test to verify the quasi-smoothness of $\widetilde{X}$, that is, the smoothness of
the affine cone over $\widetilde{X}$ outside the origin. This amounts to applying the test
in each of the 8 (affine) coordinate charts of $\mathbb A^8_{\mathbb K}\setminus \{0\}$. Note that in general,
quasi-smoothness does not automatically guarantee smoothness due to the singularities of
the weighted projective space. In our case, however, the smoothness of both surfaces
$\widetilde{X}$ and $X$ follows from the quasi-smoothness of $\widetilde{X}$ by a straightforward
theoretical argument.
The second example is a numerical Godeaux surface with trivial torsion group. It is taken from
ongoing research work by Isabel Stenger, who uses a construction method suggested by
Frank-Olaf Schreyer in \cite{FOS-OW}. The resulting surface is a subvariety of $\mathbb{P}_{\mathbb K}^{13}$ (of
codimension $11$) which is cut out by $38$ quadrics (and is again realized over the finite field $\mathbb{Z}/103 \mathbb{Z}$).
Using our implementation, we verify the smoothness of the surface by verifying smoothness in each of the 14
coordinate charts of $\mathbb{P}_{\mathbb K}^{13}$. Note that to the best of our knowledge,
this cannot be done by other means.
\subsection{Behavior of the Smoothness Test}
The timings in this subsection are taken on a cluster provided by Fraunhofer ITWM Kaiserslautern. This cluster consists of $192$ nodes, each of which
has $16$ Intel Xeon E5-2670 cores running at $2.6$ GHz with $64$ GB of RAM (so the cluster has a total of $3072$ cores and $12$
TB of RAM). The nodes are connected via FDR Infiniband. Note that the cores are utilized by GPI-Space in a non-hyperthreading way, that is, with a maximum of $16$ jobs per node.
In the case of the numerical Campe\-del\-li surface, we apply the hybrid smoothness test
with a descent in codimension to minors of size $2\times 2$. Timings are given in Table \ref{tab timings}
\begin{table}[h]
\caption{Run-times of the hybrid smoothness test when applied to the numerical Campedelli surface
}
\label{tab timings}
\begin{tabular}
[c]{c|ccc|c}
number of cores & time / sec & & number of cores & time / sec\\\cline{1-2}
\cline{4-5}
\textbf{1} & \textbf{2\,686.98} & & 48 & 68.64\\
\textbf{2} & \textbf{1350.67} & & \textbf{64} & \textbf{51.98}\\
\textbf{4} & \textbf{684.77} & & 80 & 39.64\\
6 & 466.96 & & 96 & 32.30\\
\textbf{8} & \textbf{356.18} & & 112 & 27.56\\
10 & 290.75 & & \textbf{128} & \textbf{26.15}\\
12 & 245.19 & & 160 & 21.36\\
14 & 215.46 & & 192 & 19.10\\
\textbf{16} & \textbf{191.65} & & 224 & 18.52\\
\textbf{32} & \textbf{99.06} & & \textbf{256} & \textbf{18.41}
\end{tabular}
\end{table}
for $1$ up to $256$ cores (the powers of two are shown in bold), where we always take the average over $100$ runs. See also Figure \ref{fig S5 timings} for
a visualization, where the data points correspond to the entries of Table \ref{tab timings}, and the plotted curve is a
least-square fit of the run-times
using a hyperbola.
\begin{figure}
\caption{Display of the run-times from Table \ref{tab timings}}
\label{fig S5 timings}
\end{figure}
In Figure \ref{fig scaling},
\begin{figure}
\caption{Scaling with the number of cores of the run-times from Table \ref{tab timings}}
\label{fig scaling}
\end{figure}
we show how the implementation scales with the number of cores by plotting the speedup factor (relative to the single-core run-time) versus the number of cores. We
observe a linear speedup up to $160$ cores. Figure \ref{fig efficiency} visualizes the parallel efficiency (speedup divided by number of cores) of the computation.
\begin{figure}
\caption{Parallel efficiency determined from run-times in Table \ref{tab timings}}
\label{fig efficiency}
\end{figure}
To give some explanation for this observation, we note that starting from the 8 affine coordinate
charts of $\mathbb A^8_{\mathbb K}\setminus \{0\}$, the hybrid smoothness test in its current implementation
may branch into up to 323 charts at the leaves of the resulting tree of charts.
As it turns out, however, already a proper subset of the coordinate charts is enough to
cover the affine cone over $\widetilde{X}$ outside the origin, and the algorithm will terminate
once this situation has been achieved. Typically, the algorithm finishes with a total of about $240$ charts.
Hence, we cannot expect any scaling beyond this number of cores. Note that the descent in codimension involves a smaller number of charts, which also limits the scaling. Applying the projective Jacobian criterion, that is,
computing the ideal $J$ generated by the codimension-sized minors of the Jacobian matrix and saturating the ideal $I_X+J$
with respect to the irrelevant maximal ideal (which is generated by all variables), takes about $580$
seconds on one core and uses about $15$ GB of memory. We observe that, while single runs of the
massively parallel implementation take more than these $580$ seconds, by passing
to a larger number of cores, we can achieve a speedup by a factor of at least $30$ compared to the projective
Jacobian criterion. We also remark that while the computation of the minors in the Jacobian criterion
can be done in parallel, the subsequent saturation (which takes most of the total computation time) is
an inherently sequential process. With regard to memory usage, each of the individual Jacobian criterion
computations in Algorithm \ref{alg embJac} \texttt{EmbeddedJacobian} does not exceed $450$ MB of RAM
(due to the small size of minors after the descent).
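In the notation above, writing $\mathfrak{m}$ for the irrelevant maximal ideal and $J$ for the ideal generated by the codimension-sized minors of the Jacobian matrix of $I_X$, the check performed by the projective Jacobian criterion can be summarized (under its usual assumptions, in particular equidimensionality) as
\[
X \textnormal{ is smooth} \quad\Longleftrightarrow\quad \big((I_X + J) : \mathfrak{m}^{\infty}\big) = \langle 1 \rangle,
\]
since saturating with respect to $\mathfrak{m}$ discards the component supported at the vertex of the affine cone.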
In the case of the numerical Godeaux surface, we apply the hybrid smoothness test with a descent in codimension down
to minors of size $3\times 3$. So far, smoothness of this surface could not be verified by the projective Jacobian criterion,
which runs out of memory, exceeding the available $384$ GB of RAM on the machine we used. The hybrid
smoothness test easily handles this example, using a maximum of $3.1$ GB of RAM for one of the individual Jacobian
criterion computations after the descent. Timings are given in Table \ref{tab timings2}
\begin{table}[h]
\caption{Run-times of the hybrid smoothness test when applied to the numerical Godeaux surface
}
\label{tab timings2}
\begin{tabular}
[c]{c|c|c}
number of nodes & number of cores & time / sec\\\hline\cline{1-2}
\textbf{1} &\textbf{16} & \textbf{53\thinspace000}\\
\textbf{2} &\textbf{32} & \textbf{33\thinspace000}\\
\textbf{4} &\textbf{64} & \textbf{12\thinspace200}\\
\textbf{8} &\textbf{128} & \textbf{3\thinspace100}\\
\textbf{16} &\textbf{256} & \textbf{2\thinspace460}
\end{tabular}
\end{table}
for $16$ up to $256$ cores, where we always take the average over $10$ runs.
See also Figure \ref{fig godeaux time} for a visualization, where the data points correspond to the entries of Table \ref{tab timings2}
and the plotted curve is again a least-square fit of the run-times using a hyperbola.
\begin{figure}
\caption{Display of the run-times from Table \ref{tab timings2}}
\label{fig godeaux time}
\end{figure}
We observe that in this example, we actually get a super-linear speedup, that is, when doubling the number of cores used by the algorithm,
the computation time drops by more than a factor of two. We have identified two reasons for this effect.
One reason is purely technical: If more cores than tasks are available to the algorithm, that is, the load factor is smaller than one,
then each individual computation can use a larger memory bandwidth, which speeds up the computation. To indicate the
impact of the workload on the performance, Figure~\ref{fig speicher}
\begin{figure}
\caption{Run-times for parallel individual Jacobian criterion computations on one node (in seconds)
}
\label{fig speicher}
\end{figure}
shows the time used for parallel runs of a given number of copies of a single Jacobian criterion computation
on a single node. While the load factor of the smoothness test is close to $1$ when executed on fewer
than $64$ cores, it drops to about $0.7$ on $256$ cores, which amounts to a speedup of about $30\%$.
More important
is the second reason,
which stems from the structure of the algorithm and the mathematics behind the surface under consideration:
The smoothness of this surface is determined by considering (on the first level of the algorithm) all $14$ affine
charts of the ambient projective space $\mathbb P_{\mathbb K}^{13}$. The algorithmic subtrees of $4$ of these charts do not terminate during the descent in
codimension within $50\,000$ seconds, while the final covering obtained by the algorithm will always consist of
the same $4$ of the remaining $10$ charts: Since the implementation branches into all available
choices in a massively parallel way and terminates once the surface is completely covered by charts, it will
automatically determine that choice of charts which leads to the smoothness certificate in the fastest possible
way. Note that the $10$ remaining charts above involve a total of
$115$ sub-charts, so we cannot expect much scaling beyond this number of cores.
We have simulated this behavior of the Petri net, using the actual computation times of the individual sub-steps of the algorithm for all available choices (sampling all timings for the sub-steps in the same environment). The simulated scaling with the number of cores matches the actual behavior of the implementation on the cluster very well; see Figure \ref{fig simulation}
\begin{figure}
\caption{Simulated timings for determining smoothness of the numerical Godeaux surface via the hybrid smoothness test
}
\label{fig simulation}
\end{figure} for the synthetic timings (normalized to the value one at $8$ cores): With up to $4$ available cores, all cores will run into an unfavorable chart with probability almost $1$, while for $8$ to $128$ cores, we observe a super-linear speedup. As expected from the geometric structure of the specific problem, the simulation does not show a significant further speedup beyond $128$ cores.
To summarize, when working with charts, we have the flexibility of choosing a covering which leads to fast individual
computations that are well-balanced with regard to their run-time, resulting in a good performance of the overall parallel algorithm.
Due to the unpredictability of the individual computations, this choice cannot be made a priori in a heuristic way. However, with
a massively parallel approach, the best possible choice is found automatically by the algorithm. The chart-based nature
of the smoothness test reflects a fundamental paradigm of algebraic geometry, the description of schemes and sheaves
in terms of charts. One can, hence, expect that a similar approach will also be useful for further applications in algebraic
geometry, for example, in the closely related problem of resolution of singularities.
\noindent\emph{Acknowledgements}. We would like to thank Bernd L\"orwald,
Stavros Papadakis, Gerhard Pfister, Christian Reinbold, Bernd Schober, Hans Sch\"onemann, and Isabel Stenger for helpful discussions.
\end{document} |
\begin{document}
\begin{abstract}
We give a new approach to the failure of the Canonical Base Property
(CBP) in the so far only known counterexample, produced by Hrushovski,
Palacin and Pillay. For this purpose, we will give an alternative
presentation of the counterexample as an additive cover of an
algebraically closed field. We isolate two fundamental weakenings of
the CBP, which already appeared in work of Chatzidakis, and show that
they do not hold in the counterexample. In order to do so, a study of
imaginaries in additive covers is developed, for elimination of finite
imaginaries yields a connection to the CBP. As a by-product of the
presentation, we notice that no pure Galois-theoretic account of the
CBP can be provided.
\end{abstract}
\title{Additive covers and the Canonical Base Property}
\section{Introduction}
Internality is a fundamental notion in
geometric model theory, used to
understand a complete stable theory of finite Lascar rank in terms of its
building blocks, its minimal types of rank one.
A type $p$ is internal, resp. almost internal to
the family $\mathbb{P}$ of all non-locally modular minimal types, if there exists
a set of parameters $C$ such that every realization $a$ of $p$ is
definable, resp. algebraic, over $C,e$,
where $e$ is a tuple of realizations of types (each one based over $C$) in $\mathbb{P}$.
Motivated by results of Campana \cite{fC80} on algebraic coreductions,
Pillay and
Ziegler \cite{PZ03} showed that in the finite rank part of the theory of
differentially
closed fields in characteristic zero, the type of the canonical base of a
stationary type over a
realization is almost
internal to the constants. With this result, Pillay and Ziegler reproved
the function field case of the Mordell-Lang conjecture in characteristic
zero following Hrushovski's original proof but with considerable
simplifications.
The above phenomenon is captured in the notion of the Canonical Base
Property
(CBP), which was introduced and studied by Moosa and Pillay \cite{MP08}:
Over a realization of a
stationary type, its canonical base is almost $\mathbb{P}$-internal.
Chatzidakis \cite{zC12} showed that the CBP already implies a seemingly
stronger
statement, the so-called uniform canonical base property (UCBP):
Whenever the type of a realization of the stationary type $p$ over
some set $C$ of parameters is almost $\mathbb{P}$-internal, then so is
$\textnormal{stp}(\textnormal{Cb}(p)/C)$.
For the proof, she isolated two remarkable properties which hold in
every theory of finite rank with the CBP: Almost internality to $\mathbb{P}$ is preserved
on intersections and more generally on quotients.
Motivated by her work, we introduce the following two notions. A
stationary type is good, resp. special, if the condition for the CBP,
resp. UCBP, holds
for this type. (See Definitions \ref{D:good} and \ref{D:AB}
for a precise formulation.)
The following result relates these two notions to the aforementioned
properties.
\begin{theoremA}\textup{(Propositions \ref{P:propA}
and \ref{P:propB})}
The theory $T$ preserves internality on intersections, resp. on
quotients, if and only if
every stationary almost $\mathbb{P}$-internal type in $T^\textnormal{eq}$ is
good, resp. special.
\end{theoremA}
Though most relevant examples of theories satisfy the CBP, Hrushovski,
Palacín and Pillay \cite{HPP13} produced the so far only known example of
an uncountably
categorical
theory without the CBP. We will give an alternative
description of their counterexample in terms of additive covers of an
algebraically closed field of characteristic zero. Covers are already
present in early work of
Hrushovski \cite{eH91}, Ahlbrandt and Ziegler \cite{AZ91} as well as of
Hodges and Pillay \cite{HP94}.
For an additive cover $\mathcal{M}$ of an algebraically closed field, the sort $S$ is the home-sort and
$P$ is the field-sort. The automorphism group $\textnormal{Aut}(\mathcal{M}/P)$ embeds
canonically in
the group of all additive maps on $P$. If the sort $S$ is almost $P$-internal, the CBP trivially holds.
The counterexample to the CBP has a ring structure on the sort $S$ and the ring multiplication
$\otimes$ is a lifting of the field multiplication. The automorphism group over $P$ corresponds to the group of
derivations, which ensures that the sort $S$ is not almost $P$-internal.
We prove the following
result.
\begin{theoremB}\textup{(Propositions \ref{P:M1AutVersion}
and \ref{P:CBPpureCover})}
The CBP holds whenever every
additive map on $P$ induces an automorphism in
$\textnormal{Aut}(\mathcal{M}/P)$.
If $\textnormal{Aut}(\mathcal{M}/P)$ corresponds to the group of
derivations, then the product $\otimes$
is definable in $\mathcal{M}$.
\end{theoremB}
We focus on additive covers in which the sort $S$ is not almost
$P$-internal, since otherwise the CBP trivially holds, and show that no
such additive cover can eliminate imaginaries. On the other hand,
the counterexample to the CBP does eliminate finite imaginaries, which
fits into the following situation:
\begin{theoremC}\textup{(Theorem \ref{T:finImagCBP})}
If $\mathcal{M}$ eliminates finite imaginaries,
then it cannot preserve internality on quotients, so in particular
the CBP does not hold.
\end{theoremC}
A standard argument shows that the CBP holds whenever it holds for all
real stationary types. We will note that in the counterexample to the CBP
the corresponding real versions of goodness and specialness hold, namely,
every real stationary almost $P$-internal type is special. However, the
version for real types does not imply the full condition, and this gives a new
proof of the failure of the CBP.
\begin{theoremD}\textup{(Propositions \ref{P: M1PropB}
and \ref{P: M1PropA})}
The counterexample to the CBP does not preserve internality on intersections.
\end{theoremD}
Palac\'in and Pillay~\cite{PP17} considered a strengthening of the CBP,
called the strong canonical base property, which we show cannot
hold in any additive cover in which $S$ is not almost $P$-internal.
Regarding a question which arose in \cite{PP17},
we prove that no \textit{pure} Galois-theoretic account of the CBP can be
provided.
In a forthcoming work, we use the approach with additive covers in order
to produce new counterexamples to the CBP.
\begin{akn}
The author would like to thank his supervisor Amador Mart\'in Pizarro for
numerous helpful discussions, his support, generosity and guidance.
He would also like to thank Daniel Palac\'in for multiple interesting
discussions. Part of this research was carried out at the University of
Notre Dame (Indiana, USA) with financial support from the
DAAD, which the author gratefully acknowledges. The author would like to
thank the University of Notre Dame for the
hospitality and Anand Pillay for many helpful discussions and for
suggesting the study of imaginaries in the counterexample to the CBP.
\end{akn}
\section{The Canonical Base Property and Related Properties}\label{S:CBPAB}
In this section we introduce two properties related to the canonical base
property. We assume throughout this article a
solid knowledge of geometric stability theory \cite{aP96,TZ12}.
Most of the results in this section can be found in \cite{zC12}.
Let us fix a complete stable theory of finite Lascar rank. As usual, we
work inside a sufficiently saturated ambient model. We denote by $\mathbb{P}$ the
$\emptyset$-invariant family of all non-locally modular minimal types.
The following notions provide an equivalent formulation of the CBP and the
UCBP. They will play a crucial role in our attempt to weaken the CBP to
other contexts.
\begin{definition}\label{D:good}
A stationary type $p$ is:
\begin{itemize}
\item \emph{good} if $\textnormal{stp}(\textnormal{Cb}(p)/a)$ is almost
$\mathbb{P}$-internal for some (any) realization $a$ of $p$,
\item \emph{special} if, for every parameter set $C$ and every
realization $a$ of $p$, whenever $\textnormal{stp}(a/C)$ is almost $\mathbb{P}$-internal, so
is $\textnormal{stp}(\textnormal{Cb}(p)/C)$.
\end{itemize}
\end{definition}
\begin{remark}\label{R:CBP_AB}~
\begin{enumerate}[(a)]
\item Note that every special type is good, by setting $C=\{a\}$.
\item It is
immediate from the definitions that the theory $T$ has
the CBP, resp.\ the UCBP, if and only if every
stationary type in $T^\textnormal{eq}$ is good, resp.\ special.
\item Analogously to \cite[Remark 2.6]{aP95}, it can be easily shown
that
whether or not every stationary type is good, resp. special, is
preserved
under naming parameters.
\end{enumerate}
\end{remark}
Chatzidakis showed in \cite[Theorem 2.5]{zC12} that the CBP already
implies the UCBP for (simple) theories of finite rank. In order to do
so, she first shows in \cite[Proposition 2.1]{zC12} that, under the CBP,
the
type
$\textnormal{tp}(b/\textnormal{acl}eq(a)\cap\textnormal{acl}eq(b))$ is almost $\mathbb{P}$-internal, whenever
$\textnormal{stp}(b/a)$ is almost $\mathbb{P}$-internal, and secondly in \cite[Lemma
2.3]{zC12},
that
$\textnormal{tp}(b/\textnormal{acl}eq(a_1)\cap \textnormal{acl}eq(a_2))$ is almost $\mathbb{P}$-internal, if both
$\textnormal{stp}(b/a_1)$ and $\textnormal{tp}(b/a_2)$ are. Motivated by her work, we now
introduce two notions capturing these intermediate steps and
study their relation to the CBP.
\begin{definition}\label{D:AB}
The theory $T$ \emph{preserves internality on intersections} if
the type
\[\textnormal{tp}(b/\textnormal{acl}eq(a)\cap\textnormal{acl}eq(b))\]
is almost $\mathbb{P}$-internal,
whenever $\textnormal{stp}(b/a)$ is almost $\mathbb{P}$-internal.
Similarly, the theory
\emph{preserves internality on quotients} if the type
\[\textnormal{tp}(b/\textnormal{acl}eq(a_1)\cap \textnormal{acl}eq(a_2))\]
is almost $\mathbb{P}$-internal,
whenever both $\textnormal{stp}(b/a_1)$ and $\textnormal{tp}(b/a_2)$ are.
\end{definition}
In order to relate the above properties to consequences of the CBP, we
will need the following observation.
\begin{fact}\label{F:level}\textup{(}\cite[Proposition 1.18]{zC12}
\textnormal{ \& } \cite[Theorem 3.6]{PW13}\textup{)}
Let $\textnormal{stp}(b/A)$ and $\textnormal{stp}(b/C)$ be two $\mathbb{P}$-analysable types.
\begin{enumerate}[(a)]
\item The type $\textnormal{stp}(b/\textnormal{acl}eq(A)\cap\textnormal{acl}eq(C))$ is again
$\mathbb{P}$-analysable. In particular,
$\textnormal{stp}(b/\textnormal{acl}eq(A)\cap\textnormal{acl}eq(b))$ is also $\mathbb{P}$-analysable.
\item Let $b_A$ be the maximal
subset of $\textnormal{acl}eq(A,b)$ such that $\textnormal{stp}(b_A /A)$
is almost $\mathbb{P}$-internal. The tuple $b_A$ (in some fixed enumeration)
dominates $b$ over
$A$, that is, for every set of parameters $D\supset A$,
\[ b \mathop{\mathpalette\Ind{}}_A D \ \ \text{ whenever } \ \ b_A \mathop{\mathpalette\Ind{}}_A D.\]
Furthermore, whenever $\textnormal{acl}eq(D)\cap\textnormal{acl}eq(A,b_A)=\textnormal{acl}eq(A)$, so is
\[ \textnormal{acl}eq(D)\cap\textnormal{acl}eq(A,b)=\textnormal{acl}eq(A).\]
\end{enumerate}
\end{fact}
\begin{proposition}\label{P:propA}
The theory $T$ preserves internality on intersections if and only if
every stationary almost $\mathbb{P}$-internal type in $T^\textnormal{eq}$ is good.
\end{proposition}
\begin{proof}
We assume first that every stationary almost $\mathbb{P}$-internal type is
good, but
the conclusion fails,
witnessed by
two tuples $a$ and $b$. By Remark \ref{R:CBP_AB}, we may assume
\[ \textnormal{acl}eq(a)\cap\textnormal{acl}eq(b)=\textnormal{acl}eq(\emptyset).\]
Thus, the type $\textnormal{stp}(b/a)$ is
almost $\mathbb{P}$-internal, but the type $\textnormal{stp}(b)$ is not. Note that
$\textnormal{stp}(b)$ is $\mathbb{P}$-analysable, by Fact \ref{F:level}.
Among all possible (imaginary) tuples in the ambient model take now
$a'$ such that
$\textnormal{stp}(b/a')$ is almost $\mathbb{P}$-internal and
\[\textnormal{acl}eq(a')\cap\textnormal{acl}eq(b)=\textnormal{acl}eq(\emptyset)\]
with $\textnormal{U}(b_\emptyset/a')$ maximal. Since $\textnormal{stp}(b/a')$ is almost
$\mathbb{P}$-internal, there is a set of parameters $A$ containing $a'$ with
$A \mathop{\mathpalette\Ind{}}_{a'} b$ such that $b$ is algebraic over $Ae$, where $e$ is a
tuple of
realizations of types (each one based over $A$) in $\mathbb{P}$. Since each
type in the family $\mathbb{P}$ is minimal, we may assume, after possibly
enlarging $A$, that $e$ and $b$ are interalgebraic over $A$.
Let now $e'$ be a maximal
subtuple of $e$ independent from $b_\emptyset$ over $A$, so \[ e'
\mathop{\mathpalette\Ind{}}_{A}
b_\emptyset \ \ \text{ and } \ \ e \in \textnormal{acl}eq(A, e', b_\emptyset).\]
Hence,
the tuple $b$
is algebraic over
$Ae' b_\emptyset $ and
\[\textnormal{acl}eq(A,e')\cap\textnormal{acl}eq(b_\emptyset)\subset
\textnormal{acl}eq(a')\cap\textnormal{acl}eq(b)=\textnormal{acl}eq(\emptyset).\]
Therefore
$\textnormal{acl}eq(A,e')\cap\textnormal{acl}eq(b) =\textnormal{acl}eq(\emptyset)$, by Fact \ref{F:level}.
Notice that $\textnormal{stp}(b/A, e')$ is almost $\mathbb{P}$-internal, yet this does not
yield any contradiction since
$\textnormal{U}(b_\emptyset/A,e')=\textnormal{U}(b_\emptyset/a')$. Choose now $b'$ realizing
$\textnormal{stp}(b/A,e')$ independent from $b$ over $A, e'$. An easy forking
computation yields
\[ \textnormal{acl}eq(b')\cap\textnormal{acl}eq(b)=\textnormal{acl}eq(\emptyset).\] By the hypothesis we
have that the almost $\mathbb{P}$-internal type
\[\textnormal{stp}(b'/\textnormal{acl}eq(A,e'))=\textnormal{stp}(b/\textnormal{acl}eq(A,e')) \] is good,
so we deduce that $\textnormal{stp}(\textnormal{Cb}(b/A,e')/b')$ is almost $\mathbb{P}$-internal.
Remark that $b$ is algebraic over $\textnormal{Cb}(b/A,e', b_\emptyset)$ and thus
also algebraic over $b_\emptyset \textnormal{Cb}(b/A,e')$.
Putting all of the above together, we conclude that the type
$\textnormal{stp}(b/b')$ is almost $\mathbb{P}$-internal. Since
\[\textnormal{U}(b_\emptyset/b')\geq \textnormal{U}(b_\emptyset/A, e',b') =
\textnormal{U}(b_\emptyset/A,e')=\textnormal{U}(b_\emptyset/a'),\]
we deduce by the maximality of $\textnormal{U}(b_\emptyset/a')$ that
$\textnormal{U}(b_\emptyset/b')= \textnormal{U}(b_\emptyset/A, e',b')$, that is, \[
b_\emptyset \mathop{\mathpalette\Ind{}}_{\textnormal{acl}eq(A,e')\cap \ \textnormal{acl}eq(b')} A, e', b'.\]
Hence $b_\emptyset \mathop{\mathpalette\Ind{}} b'$, so
$b \mathop{\mathpalette\Ind{}} b'$, by Fact \ref{F:level}, contradicting that $\textnormal{stp}(b)$ is
not almost $\mathbb{P}$-internal.
For the other direction, we need to show that the almost
$\mathbb{P}$-internal
type $\textnormal{stp}(a/b)$ is good, that is, that $\textnormal{stp}(\textnormal{Cb}(a/b)/a)$ is almost
$\mathbb{P}$-internal. We may assume that $b$ equals the canonical base
$\textnormal{Cb}(a/b)$. Superstability yields that $b$ is contained in the
algebraic closure of
finitely many $b$-conjugates of $a$. By preservation of internality on
intersections, the type
$\textnormal{tp}(a/\textnormal{acl}eq(a)\cap\textnormal{acl}eq(b))$ is almost $\mathbb{P}$-internal, so it follows
that \[\textnormal{tp}(b/\textnormal{acl}eq(a)\cap\textnormal{acl}eq(b))\] is almost $\mathbb{P}$-internal. Hence,
the type $\textnormal{stp}(b/a)$ is almost $\mathbb{P}$-internal, as desired.
~\end{proof}
It follows now from Remark \ref{R:CBP_AB} that preservation of internality
on intersections does not depend on constants being named.
\begin{corollary}\label{C:NamingParametersIntersections}
Preservation of internality on intersections is invariant under naming and
forgetting parameters.
\end{corollary}
\begin{remark}\label{R:CCBP}
It follows from Remark \ref{R:CBP_AB} and Proposition \ref{P:propA} that
the CBP is equivalent to the property that
whenever $b=\textnormal{Cb}(a/b)$, then $\textnormal{tp}(b/\textnormal{acl}eq(a)\cap\textnormal{acl}eq(b))$ is almost
$\mathbb{P}$-internal, which was already shown in \cite[Theorem 2.1]{zC12}.
\end{remark}
\begin{proposition}\label{P:propB}
The theory $T$ preserves internality on quotients if and only if
every stationary almost $\mathbb{P}$-internal type in $T^\textnormal{eq}$ is
special.
\end{proposition}
\begin{proof}
Assume that every stationary almost $\mathbb{P}$-internal type is
special. We want
to show that
\[\textnormal{tp}(b/\textnormal{acl}eq(a_1)\cap\textnormal{acl}eq(a_2))\]
is almost $\mathbb{P}$-internal,
whenever both $\textnormal{stp}(b/a_1)$ and $\textnormal{stp}(b/a_2)$ are.
By Remark \ref{R:CBP_AB}, we may assume that
\[ \textnormal{acl}eq(a_1)\cap\textnormal{acl}eq(a_2)=\textnormal{acl}eq(\emptyset).\]
Note that the type $\textnormal{stp}(b)$ is $\mathbb{P}$-analysable, by Fact
\ref{F:level}, so
recall that $b_\emptyset$ is the maximal almost $\mathbb{P}$-internal
subset of
$\textnormal{acl}eq(b)$.
As in the proof of Proposition \ref{P:propA} there is a set of
parameters $A_1$ containing
$a_1$ such that $A_1 \mathop{\mathpalette\Ind{}}_{a_1} b$ and $b$ is
interalgebraic over $A_1$
with some tuple $e$ of realizations of types (each one based over
$A_1$) in $\mathbb{P}$. Choosing a maximal subtuple $e'$ of $e$ with
$e' \mathop{\mathpalette\Ind{}}_{A_1} b_\emptyset$, it follows that $b$ is algebraic over
$b_\emptyset A_1 e'$ and that
\[
\textnormal{acl}eq(b_\emptyset)\cap\textnormal{acl}eq(A_1,e')\subset \textnormal{acl}eq(a_1).
\]
Hence
\begin{equation*}
\tag{$\star$}
\textnormal{acl}eq(b)\cap\textnormal{acl}eq(A_1,e')\cap\textnormal{acl}eq(a_2)=\textnormal{acl}eq(\emptyset),
\end{equation*}
by Fact \ref{F:level}. Since the almost
$\mathbb{P}$-internal type
$\textnormal{stp}(b / A_1 , e')$ is special, we have that
\[\textnormal{stp}(\nicefrac{\textnormal{Cb}(b / A_1 , e')}{a_2})\]
is almost $\mathbb{P}$-internal. Therefore
\[\textnormal{stp}(\nicefrac{\textnormal{Cb}(b / A_1 , e')}{\textnormal{acl}eq(A_1,
e')\cap\textnormal{acl}eq(a_2)})\]
is almost $\mathbb{P}$-internal by Remark \ref{R:CBP_AB}.
Since
\[ b \mathop{\mathpalette\Ind{}}_{\textnormal{Cb}(b / A_1 , e'),b_\emptyset} A_1, e' \]
and $b$ is algebraic over $b_\emptyset A_1 e'$, the tuple $b$ is
algebraic over
$\textnormal{Cb}(b / A_1 , e') b_\emptyset$. In particular, the type
\[ \textnormal{stp}(b/\textnormal{acl}eq(A_1, e')\cap\textnormal{acl}eq(a_2)) \]
is almost $\mathbb{P}$-internal and hence so is $\textnormal{stp}(b)$ because of
($\star$).
In order to prove the other direction, we want to show that the almost
$\mathbb{P}$-internal type $\textnormal{stp}(a/b)$ is special. Fix a set $C$ of
parameters such that $\textnormal{stp}(a/C)$ is almost $\mathbb{P}$-internal. By
preservation of internality on quotients, the type
\[\textnormal{stp}(a/\textnormal{acl}eq(b)\cap\textnormal{acl}eq(C)) \]
is almost $\mathbb{P}$-internal and so is
\[\textnormal{stp}(\nicefrac{\textnormal{Cb}(a/b)}{\textnormal{acl}eq(b)\cap\textnormal{acl}eq(C)}),\]
since the canonical base $\textnormal{Cb}(a/b)$ is algebraic over finitely many
$b$-conjugates of $a$.
~\end{proof}
We deduce now the analog of Corollary \ref{C:NamingParametersIntersections}
for preservation of internality on quotients.
\begin{corollary}\label{C:NamingParametersQuotients}
Preservation of internality on quotients is invariant under naming and
forgetting parameters.
\end{corollary}
Thanks to the previous notions, we will provide for the sake of
completeness a
compact proof in Corollary \ref{C:Zoe} that the CBP already implies the
UCBP, which essentially
follows the lines of Chatzidakis's proof \cite[Theorem 2.5]{zC12}: Under
the assumption of the CBP, the UCBP is equivalent to preservation of
internality of quotients. Hence, we need only show in Proposition
\ref{P:CBPQuotients} that the CBP implies the latter (cf. \cite[Lemma
2.3]{zC12}). For this, we need some auxiliary results.
Let $\Sigma$ denote the family of all minimal types, that is, of Lascar
rank one. For a set $A$ of parameters, denote by $A_{\emptyset}^{\Sigma}$
the maximal almost $\Sigma$-internal subset (in some fixed
enumeration) of $\textnormal{acl}eq(A)$.
\begin{fact}\label{F:Comp}\textup{(}\cite[Lemma 1.10]{zC12}
\textnormal{ \& } \cite[Observation 1.2]{zC12}\textup{)}
Assume that the types $\textnormal{stp}(e)$ and $\textnormal{stp}(c)$ are almost
$\Sigma$-internal.
\begin{enumerate}[(a)]
\item If the tuple $e$ is algebraic over $Ac$ for some parameter
set $A$,
then $e$ is algebraic over
$A_{\emptyset}^{\Sigma} c$.
\item If the type $\textnormal{stp}(c)$ is $\mathbb{P}$-analysable, then it is almost
$\mathbb{P}$-internal.
\end{enumerate}
\end{fact}
\begin{lemma}\label{L:CompCBP}
Assume that the theory $T$ has the CBP and let $e$ be a tuple which is
algebraic over $AB$ with
$\textnormal{acl}eq(A)\cap\textnormal{acl}eq(B)=\textnormal{acl}eq(\emptyset)$. If the type $\textnormal{stp}(e)$ is
almost $\Sigma$-internal, then $e$ is algebraic over
$A_{\emptyset}^{\Sigma} B$.
\end{lemma}
\begin{proof}
Choose a set of parameters $D$ with $D \mathop{\mathpalette\Ind{}} e,A,B$ such that $e$ is
interalgebraic over $D$ with a tuple of realizations of types (each
one based over $D$) in $\Sigma$.
Since
\[ e \mathop{\mathpalette\Ind{}}_{A_{\emptyset}^{\Sigma} B} D \ \ \textnormal{ and } \ \
\textnormal{acl}eq(A,D)\cap\textnormal{acl}eq(B,D)=\textnormal{acl}eq(D),\]
we may assume, after naming $D$, that $e$ is a single
element of Lascar rank one. If
\[ e \mathop{\mathpalette\notind{}}_{B} A_{\emptyset}^{\Sigma}, \]
we are done. Otherwise
\[ e \mathop{\mathpalette\Ind{}}_{B} A_{\emptyset}^{\Sigma}, \]
so \[
\textnormal{acl}eq(A_{\emptyset}^{\Sigma})\cap\textnormal{acl}eq(B,e)=\textnormal{acl}eq(A_{\emptyset}^{\Sigma})\cap\textnormal{acl}eq(B)=
\textnormal{acl}eq(\emptyset).\]
The variant of Fact \ref{F:level} (b) with respect to $\Sigma$ yields
\[\textnormal{acl}eq(A)\cap\textnormal{acl}eq(B,e)=\textnormal{acl}eq(\emptyset).\]
Now the CBP and Remark \ref{R:CCBP} imply that the type
$\textnormal{stp}(\textnormal{Cb}(B,e/A))$
is almost
$\mathbb{P}$-internal, hence almost $\Sigma$-internal. Therefore, the
canonical base $\textnormal{Cb}(B,e/A)$ is contained
in $A_{\emptyset}^{\Sigma}$. Since $e$
is algebraic over $\textnormal{Cb}(B,e/A) B$, we conclude that $e$ is
algebraic
over
$A_{\emptyset}^{\Sigma} B$, as desired.
~\end{proof}
We have now the necessary ingredients to show that every complete stable
theory of
finite rank with the CBP preserves internality on quotients.
\begin{proposition}\label{P:CBPQuotients}
If the theory $T$ has the CBP, then it preserves internality on
quotients.
\end{proposition}
\begin{proof}
We want to show that
\[\textnormal{tp}(b/\textnormal{acl}eq(a_1)\cap\textnormal{acl}eq(a_2))\]
is almost $\mathbb{P}$-internal, whenever both $\textnormal{stp}(b/a_1)$ and
$\textnormal{stp}(b/a_2)$ are. Since the CBP is preserved under naming parameters,
we may assume that
\[\textnormal{acl}eq(a_1)\cap\textnormal{acl}eq(a_2)=\textnormal{acl}eq(\emptyset).\]
Choose sets of parameters $A_1$ containing $a_1$ and $A_2$ containing
$a_2$
with
\[ A_1 \mathop{\mathpalette\Ind{}}_{a_1} b, a_2 \ \ \textnormal{ and } \ \ A_2 \mathop{\mathpalette\Ind{}}_{a_2}
b, A_1 \]
such that $b$ is algebraic over both $A_1 e_1$ and $A_2 e_2$, where
$e_1$ and $e_2$ are tuples of realizations of types (each one based
over $A_1$, resp. $A_2$) in $\mathbb{P}$. Since
\[\textnormal{acl}eq(A_1)\cap\textnormal{acl}eq(A_2)=\textnormal{acl}eq(a_1)\cap\textnormal{acl}eq(a_2)=\textnormal{acl}eq(\emptyset),\]
the CBP and Remark \ref{R:CCBP} imply that $\textnormal{stp}(\textnormal{Cb}(A_1/A_2))$ is
almost $\mathbb{P}$-internal, so
\[A_1 \mathop{\mathpalette\Ind{}}_{(A_2)_{\emptyset}^{\Sigma}} A_2.\]
Choose now a maximal subtuple
$e_{1}'$ of $e_1$ which is independent from $A_2$ over $A_1$, so $e_1$
is algebraic over $A_1 e_{1}' A_2$ and
\[\textnormal{acl}eq(A_1 ,
e_{1}')\cap\textnormal{acl}eq(A_2)=\textnormal{acl}eq(A_1)\cap\textnormal{acl}eq(A_2)=\textnormal{acl}eq(\emptyset).\]
Now, let
$e_{2}'$ be a maximal subtuple of $e_2$ with
\[ e_{2}' \mathop{\mathpalette\Ind{}}_{A_2} A_1 , e_{1}'. \]
We deduce that
\[ A_1 , e_{1}' \mathop{\mathpalette\Ind{}}_{(A_2)_{\emptyset}^{\Sigma}} A_2 , e_{2}' \]
and $e_2$ is algebraic over $A_1 e_{1}' e_{2}' A_2$.
Moreover
\[\textnormal{acl}eq(A_1 , e_{1}')\cap\textnormal{acl}eq(A_2, e_{2}')\subset\textnormal{acl}eq(A_1,
e_{1}')\cap\textnormal{acl}eq(A_2)=\textnormal{acl}eq(\emptyset).\]
By Lemma \ref{L:CompCBP},
we get that $e_{1}$ is algebraic over
$(A_1,e_{1}')_{\emptyset}^{\Sigma} A_2$ and that
$e_{2}$ is algebraic over $A_1 e_{1}' (e_{2}'
A_2)_{\emptyset}^{\Sigma}$.
It follows from Fact \ref{F:Comp} (a) that
\[ (A_1,e_{1}')_{\emptyset}^{\Sigma} = (A_1)_{\emptyset}^{\Sigma}
e_{1}'\ \ \text{ and } \ \ (e_{2}',A_2)_{\emptyset}^{\Sigma} =
e_{2}'
(A_2)_{\emptyset}^{\Sigma} .\]
We deduce that $e_{1}$ is algebraic over $(A_1)_{\emptyset}^{\Sigma}
e_{1}'
A_2$ and $e_{2}$ is algebraic over $A_1 e_{1}'
e_{2}'(A_2)_{\emptyset}^{\Sigma}$. Therefore
\[ A_1 , e_{1} \mathop{\mathpalette\Ind{}}_{(A_1)_{\emptyset}^{\Sigma},
(A_2)_{\emptyset}^{\Sigma}, e_{1}, e_{2}}
A_2 , e_{2}. \]
Hence $b$ is algebraic over $(A_1)_{\emptyset}^{\Sigma},
(A_2)_{\emptyset}^{\Sigma}, e_{1}, e_{2}$, so the type $\textnormal{stp}(b)$ is
almost
$\Sigma$-internal. Since, by Fact \ref{F:level}, the type $\textnormal{stp}(b)$ is
$\mathbb{P}$-analysable, we conclude by Fact \ref{F:Comp} (b) that $\textnormal{stp}(b)$
is almost $\mathbb{P}$-internal,
as desired.
~\end{proof}
\begin{remark}\label{R:IndepQuotients}
It is easy to see that a weakening of preservation of internality
on quotients holds in every complete stable theory of finite rank, when
the quotients are independent: If the types $\textnormal{stp}(b/a_1)$ and
$\textnormal{stp}(b/a_2)$ are almost $\mathbb{P}$-internal
and $a_1 \mathop{\mathpalette\Ind{}} a_2$, then the type $\textnormal{stp}(b)$ is almost $\mathbb{P}$-internal.
\end{remark}
For completeness, we now restate Chatzidakis's proof
\cite[Theorem 2.5]{zC12}
that the CBP implies the UCBP using the aforementioned terminology.
\begin{corollary}\label{C:Zoe}
The CBP and UCBP are equivalent properties for theories of finite rank.
\end{corollary}
\begin{proof}
The UCBP clearly implies the CBP, in analogy to the remark that every
special type is good.
We assume now that the theory has the CBP. We need to show that every
type
$\textnormal{stp}(a/b)$ is special. Since
\[ \textnormal{Cb}(a/b)=\textnormal{Cb}(\nicefrac{\textnormal{Cb}(b/a)}{b}),\]
we may assume that $a$ is the
canonical base $\textnormal{Cb}(b/a)$. In particular, the type $\textnormal{stp}(a/b)$ is almost
$\mathbb{P}$-internal, by the CBP. Now, Propositions \ref{P:CBPQuotients} and
\ref{P:propB} yield that the type $\textnormal{stp}(a/b)$ is special, as desired.
~\end{proof}
The equivalence of the previous corollary motivates the following
question, after localizing to almost $\mathbb{P}$-internal types.
\begin{question}\label{Q:1}
Are preservation of internality on intersections and on quotients
equivalent
properties for theories of finite rank?
\end{question}
At the time of writing, we do not know whether the previous question has
a positive answer. Note that providing a structure which answers
the above question negatively would in particular yield a new theory of finite
rank without the CBP, since we will see in Section \ref{S:Imag} that the
so far only known counterexample to the CBP given in \cite{HPP13} does not
preserve
internality on intersections.
It was remarked in \cite[Lemma 2.11]{BMPW12} that the CBP holds whenever
it holds for stationary real types, or equivalently, for real types over
models. A natural question is whether the same holds for the above
properties of preservation of internality.
\begin{question}\label{Q:2}
Does a theory of finite rank preserve internality on intersections,
resp. on quotients, if every stationary real almost $\mathbb{P}$-internal type
is good, resp. special?
\end{question}
Additive covers of the algebraically closed field $\mathbb{C}$, which will
be introduced in the following section, will provide a negative answer
(see Corollary \ref{C:AB_imag}) to Question \ref{Q:2}.
\section{Additive Covers}\label{S:AddCovers}
The only known example so far of a stable theory of finite rank without
the CBP appeared in \cite{HPP13}. We will consider this example from the
perspective of additive covers of the algebraically closed field
$\mathbb{C}$. We start this section with a couple of definitions.
Following the terminology of Hrushovski ~\cite{eH91}, Ahlbrandt and
Ziegler ~\cite{AZ91}, and
Hodges and Pillay ~\cite{HP94}, we say that $M$ is a \textit{cover} of $N$
if the following three conditions hold:
\begin{itemize}
\item The set $N$ is a stably embedded $\emptyset$-definable subset
of $M$.
\item There is a surjective $\emptyset$-definable map $\pi:M\backslash
N\rightarrow N$.
\item There is a family of groups $(G_a)_{a\in N}$ definable in
$N^{\text{eq}}$ without parameters such that $G_a$ acts definably and
regularly on the fiber $\pi^{-1}(a)$.
\end{itemize}
For the purpose of this article, we will concentrate on particular covers
of the algebraically closed
field $\mathbb{C}$, and hence provide a definition adapted to this
context. From now on, given the canonical projection of the sort
$S=\mathbb{C}\times\mathbb{C}$ onto the first coordinate $P=\mathbb{C}$,
we will denote the elements of $P$ with the Greek letters
$\alpha$, $\beta$, etc., while the elements of $S$ will be seen
accordingly as
pairs $(\alpha,a')$ and so on.
\begin{definition}\label{D:AdditiveCover}
An \emph{additive cover} of the algebraically closed
field $\mathbb{C}$ is a structure
$\mathcal{M}=(P,S,\pi,\star,\ldots)$ with the distinguished
sorts $P=\mathbb{C}$ and $S=\mathbb{C}\times\mathbb{C}$ such that the
following conditions hold:
\begin{itemize}
\item The structure $\mathcal{M}$ is a reduct of
$(\mathbb{C},\mathbb{C}\times\mathbb{C})$ with
the full field structure on the sort $P$.
\item The projection $\pi$ maps $S$ onto $P$.
\item There is an action $\star$ of $P$ on $S$ given
by $\alpha\star(\beta,b')=(\beta,b'+\alpha)$.
\end{itemize}
Moreover, the map
\[ \begin{array}{rccc}
\oplus: & S\times S & \rightarrow & S \\[2mm]
& \big((\alpha,a'),(\beta,b')\big)&\mapsto& (\alpha+\beta,a'+b')
\end{array}\]
is definable in $\mathcal{M}$ without parameters.
\end{definition}
\begin{example}~\label{E:covers}
\begin{itemize}
\item The additive cover
$\mathcal{M}_{0}=(P,S,\pi,\star,\oplus)$ with no additional
structure.
\item The additive cover
$\mathcal{M}_{1}=(P,S,\pi,\star,\oplus,\otimes)$
with the product
\[ \begin{array}{rccc}
\otimes: & S\times S & \rightarrow & S \\[2mm]
& \big((\alpha,a'),(\beta,b')\big)&\mapsto& (\alpha\beta,\alpha
b'+\beta
a').
\end{array}\]
Note that $\mathcal{M}_{1}$
is a commutative ring with multiplicative neutral element $(1,0)$.
The zero-divisors are exactly the elements $a$ in $S$ with
$\pi(a)=0$, that is, the pairs $a=(0, a')$ (see the computation below).
\end{itemize}
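To spell out the assertion about zero-divisors, note that every element of the fiber above $0$ is a zero-divisor, while every element outside this fiber is invertible:
\[ (0,a')\otimes(0,1)=(0,0) \qquad \textnormal{ and } \qquad (\alpha,a')\otimes\Big(\frac{1}{\alpha},-\frac{a'}{\alpha^{2}}\Big)=(1,0) \ \textnormal{ for } \alpha\neq 0. \]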
Given an additive cover $\mathcal{M}$, there is a
canonical
embedding
\[\begin{array}{rll}
\textnormal{Aut}(\mathcal{M}/P)& \hookrightarrow & \{F:\mathbb{C}\rightarrow\mathbb{C}
\text{
additive}
\}\\[1mm]
\sigma &\mapsto & F_\sigma
\end{array}\]
uniquely determined by the identity
$\sigma(x)=F_\sigma(\pi(x))\star x$.
For the additive cover $\mathcal M_0$
of Example \ref{E:covers}, the above embedding defines a bijection
\[\textnormal{Aut}(\mathcal{M}_{0}/P)\leftrightarrow\{F:\mathbb{C}\rightarrow\mathbb{C}
\text{ additive}\}\]
and a straightforward calculation yields that
\[\textnormal{Aut}(\mathcal{M}_{1}/P)\leftrightarrow\{F:\mathbb{C}\rightarrow\mathbb{C}
\text{
derivation}\}.\] Indeed, for elements $a=(\alpha,a')$ and
$b=(\beta,b')$ in $S$, we have
\begin{align*}
\sigma(a\otimes b)&=F_{\sigma}(\alpha\beta)\star (a\otimes b)
\textnormal{ and } \\
\sigma(a)\otimes\sigma(b)&=\big(F_{\sigma}(\alpha)\star a\big)\otimes
\big(F_{\sigma}(\beta)\star b\big)=(\alpha\beta,\alpha
(b'+F_{\sigma}(\beta))+\beta (a'+F_{\sigma}(\alpha)))\\&=\big(\alpha
F_{\sigma}(\beta)+\beta F_{\sigma}(\alpha)\big)\star (a\otimes b).
\end{align*}
Comparing the two expressions, $\sigma$ respects $\otimes$ if and only if $F_{\sigma}(\alpha\beta)=\alpha F_{\sigma}(\beta)+\beta F_{\sigma}(\alpha)$ for all $\alpha,\beta$, that is, if and only if $F_{\sigma}$ is a derivation.
\end{example}
\begin{remark}\label{R:genCov}
Every additive cover $\mathcal{M}$ is a saturated
uncountably categorical structure, where $P$ is the unique strongly
minimal set up to non-orthogonality. The sort $S$ has Morley
rank two and degree one, and is $P$-analysable in two steps. Moreover,
each fiber
$\pi^{-1}(\alpha)$ is strongly minimal.
Therefore, for additive covers, almost $\mathbb{P}$-internality in
the CBP is
equivalent to almost internality to $P$. If $S$ is almost $P$-internal,
then
the CBP trivially holds.
\end{remark}
\begin{remark}\label{R: Cex}
The counterexample to the CBP given in \cite{HPP13} is an
additive cover which includes, for every irreducible variety $V$ defined over
$\mathbb{Q}^{\textnormal{alg}}$, a predicate in the sort $S$ for the
tangent bundle of $V$. It is easy to see that this structure has the
same definable sets as the additive cover $\mathcal{M}_1$ given in Example
\ref{E:covers}, since every polynomial expression over
$\mathbb{Q}^{\textnormal{alg}}$ in $P$ lifts to a polynomial equation in
$S$, using the ring operations $\oplus$ and $\otimes$.
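To make the lifting explicit, write $f^{\otimes}$ for the term obtained from a polynomial $f\in\mathbb{Q}^{\textnormal{alg}}[X_1,\ldots,X_n]$ by replacing $+$ and $\cdot$ by $\oplus$ and $\otimes$, and every coefficient $c$ by the element $(c,0)$ of $S$. A straightforward induction on terms gives the familiar dual-numbers identity
\[ f^{\otimes}\big((\alpha_1,a_1'),\ldots,(\alpha_n,a_n')\big)=\Big(f(\alpha_1,\ldots,\alpha_n),\ \sum_{i=1}^{n}\frac{\partial f}{\partial X_i}(\alpha_1,\ldots,\alpha_n)\,a_i'\Big), \]
so, identifying $S^n$ with $\mathbb{C}^n\times\mathbb{C}^n$, the predicate for the tangent bundle of $V$ is defined by the equations $f^{\otimes}(\bar{x})=(0,0)$, with $f$ ranging over a set of generators of the ideal of $V$.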
A key ingredient in the proof that the sort $S$ in the above counterexample
is not almost
$P$-internal \cite[Corollary 3.3]{HPP13} is that every derivation on
the algebraically closed field $\mathbb{C}$ induces an automorphism
in $\textnormal{Aut}(\mathcal{M}_1 / P)$.
\end{remark}
For the following sections, we will need some auxiliary lemmas on the
structure of additive covers, and particularly those where
the sort $S$ is not almost $P$-internal. For the sake of completeness,
note that there are additive covers, besides the full structure, where the
sort $S$ is $P$-internal: Consider the additive cover $\mathcal{M}$ with
the following binary relation $R$ on $S\times S$
\[ R((\alpha,a'),(\beta,b')) \iff \big(\alpha\notin\mathbb{Q}\ \ \& \ \
\beta\notin\mathbb{Q} \ \ \& \ \ a'=b' \big).
\]
It is easy to verify that $\textnormal{Aut}(\mathcal{M}/P)=(\mathbb{C},+)$ and the
sort $S$ is $P$-algebraic (actually $P$-definable), after
naming any element in the fiber $\pi^{-1}(1)$.
The following notion will be helpful in the next section.
\begin{definition}\label{D:mean}
Given elements $a_1=(\alpha, a_1'),\ldots, a_n=(\alpha, a'_n)$ of $S$ all
in the same fiber $\pi^{-1}(\alpha)$, their \emph{average} is the element
\[\Big(\alpha, \frac{a'_1+\ldots+a'_n}{n}\Big).\]
\end{definition}
\begin{lemma}\label{L:mean}
Given a non-empty finite set $A$ of elements of $S$, all lying in the
same fiber, every
automorphism $\sigma$ of the additive cover maps the average of $A$ to
the average of $\sigma[A]$. In particular, the average of $A$ is
definable
over $A$.
\end{lemma}
\begin{proof}
We proceed by induction on the size $n$ of the non-empty set $A$. For
$n=1$, there is nothing to prove. Assume $A$ contains at least two
elements, and choose
$a$ some element of $A$. Set $b=\sigma(a)$.
Inductively, we have that $\sigma$ maps the average $d_1$ of
$A\backslash\{a\}$
to the average $d_2$ of $\sigma[A]\backslash\{b\}$. Let
$\varepsilon_1$ and $\varepsilon_2$ be the unique elements in $P$ such
that $\varepsilon_1\star
d_1=a$ and $\varepsilon_2\star d_2=b$. A straightforward computation
yields that
$\frac{\varepsilon_1}{n}\star d_1$, resp.
$\frac{\varepsilon_2}{n}\star d_2$, is the average of $A$, resp. of
$\sigma[A]$. Now the
claim follows since $\sigma$ maps $\frac{\varepsilon_1}{n}$ to
$\frac{\varepsilon_2}{n}$.
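Explicitly, if $d_1=(\alpha,d_1')$, then $(n-1)\,d_1'$ is the sum of the second coordinates of the elements of $A\backslash\{a\}$ and $a=(\alpha,d_1'+\varepsilon_1)$, so
\[ \frac{\varepsilon_1}{n}\star d_1=\Big(\alpha,\ d_1'+\frac{\varepsilon_1}{n}\Big)=\Big(\alpha,\ \frac{(n-1)\,d_1'+(d_1'+\varepsilon_1)}{n}\Big) \]
is the average of $A$, and similarly for $\sigma[A]$.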
~\end{proof}
\begin{lemma}\label{L:stat}
Let $a_1=(\alpha_1,0),\ldots,a_n=(\alpha_n,0)$ be elements in $S$. The
type $\textnormal{tp}(a_1,\ldots,a_n / \alpha_1,\ldots,\alpha_n)$ is
stationary.
\end{lemma}
\begin{proof}
Choose a maximal subtuple $\hat{a}$ of $(a_1,\ldots,a_n)$ (algebraic)
independent
over the tuple $\bar{\alpha}=(\alpha_1,\ldots,\alpha_n)$. Note that
each
$a_i$ is algebraic over $\bar{\alpha},\hat{a}$. Let
$b_i=(\alpha_i,b_{i}')$ be the average of the finite set of
$\{\bar{\alpha},\hat{a}\}$-conjugates of $a_i$. The element $b_i$ is
definable over $\bar{\alpha},\hat{a}$, by Lemma \ref{L:mean}.
\begin{claim*}
The second coordinate $b_{i}'$ of the average $b_i$ is definable (as
an element of $P$) over $\bar{\alpha}$.
\end{claim*}
\begin{claimproof*}
We need only show that $b_{i}'$ is fixed by every automorphism $\tau$
of the sort $P$ fixing $\bar{\alpha}$. The map
$\sigma=(\tau,\tau\times \tau)$ is an automorphism of the full
structure $(\mathbb{C},\mathbb{C}\times\mathbb{C})$, and hence of the
reduct $\mathcal M$. Since $\tau(0)=0$, the automorphism $\sigma$
fixes $\bar{\alpha},a_1,\ldots, a_n$. Hence $\sigma(b_i)=b_i$, so in
particular $\tau(b_{i}')=b_{i}'$.\end{claimproof*}
Therefore $a_i=(-b_{i}')\star b_i$ is definable
over $\bar{\alpha},\hat{a}$. Since the fibers of the projection $\pi$
are
strongly minimal (see Remark \ref{R:genCov}), the type
$\textnormal{tp}(\hat{a}/\bar{\alpha})$ is stationary, so we obtain the desired
conclusion.
~\end{proof}
The above proof yields in particular the following:
\begin{remark}\label{R:algFib}
Every automorphism $\tau$ of $P$ fixing a subset $A$ induces an
automorphism $\sigma$ of the additive cover which fixes all the
elements of $S$ of
the form $(\alpha,0)$, with $\alpha$ in $A$.
The definable and algebraic closure of $P$ in the sort $S$
coincide: \[ S \cap \textnormal{acl}(P) = S \cap
\textnormal{dcl}(P). \]
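Indeed, if an element $c$ of $S$ is algebraic over $P$, then its finitely many conjugates under $\textnormal{Aut}(\mathcal{M}/P)$ all lie in the fiber $\pi^{-1}(\pi(c))$, so by Lemma \ref{L:mean} their average $d$ is fixed by every such automorphism and hence lies in $\textnormal{dcl}(P)$; therefore so does $c=\varepsilon\star d$, where $\varepsilon$ is the unique element of $P$ with $\varepsilon\star d=c$.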
Given a set of parameters $B$ in $S$ and an element $\beta$ in the
sort $P$, all elements of the strongly minimal fiber $\pi^{-1}(\beta)$
have
the same type over $B, P$ whenever the element $b=(\beta,0)$ of
$S$ is not algebraic over $B, P$.
\end{remark}
\begin{lemma}\label{L:indep}
Let $a_1=(\alpha_1,0),\ldots,a_n=(\alpha_n,0)$ be elements in the sort
$S$ with generic independent elements $\alpha_i$ in $P$.
If the sort $S$ is not almost $P$-internal, then the $a_i$'s are
generic independent.
\end{lemma}
\begin{proof}
Choose some $\beta$ generic in $P$ independent from
$\alpha_1$ and set $a=(\alpha_1,\beta)$ in $S$. Note that the Morley
rank of $a$ is two. If $a_1$ were not generic, then
$a_1$
must be algebraic over the generic element $\alpha_1$ of $P$. Since
$a=\beta\star a_1$, it would
follow that the generic element
$a$ of $S$ is algebraic over $P$, which contradicts our assumption that
the sort $S$ is not almost $P$-internal. Hence $a_1$ is generic in
$S$.
Now, we inductively assume that the tuple $(a_1,\ldots,a_{n-1})$
consists of generic independent elements and want to show that $a_n
\mathop{\mathpalette\Ind{}} \bar{a}_{<n}$. Assume for a contradiction that $a_n \mathop{\mathpalette\notind{}}
\bar{a}_{<n}$.
Note that $\alpha_n$ is not algebraic over $\bar{a}_{<n}$, by
Remark \ref{R:algFib}, since $\alpha_n$ is not algebraic over
$\bar{\alpha}_{<n}$. Thus $a_n
\mathop{\mathpalette\notind{}}_{\alpha_n} \bar{a}_{<n}$, so $a_n$ is algebraic over
$\alpha_n \bar{a}_{<n}$.
Choose now some element $\gamma$ in $P$ generic over
$(\alpha_1,\ldots, \alpha_n)$
and set $c=(\alpha_n,\gamma)=\gamma\star a_n$ in $S$. Note that $c$ is
algebraic over
$\bar{a}_{<n},P$. Observe that $\textnormal{RM}(c/\bar{a}_{<n})=2$, by the choice
of $\gamma$, so the generic element $c$ of $S$ is independent from $\bar{a}_{<n}$
and algebraic over $\bar{a}_{<n},P$. We conclude that $S$ is almost $P$-internal,
which gives the desired contradiction.
~\end{proof}
We conclude this section with a full description of the Galois groups of
stationary
$P$-internal types
in additive covers, whenever the sort $S$ is not almost $P$-internal.
\begin{remark}\label{R:Gal}
If $S$ is not almost $P$-internal, then every definable subgroup of
$(\mathbb{C}^n,+)$ appears as a Galois group and conversely every
Galois group is (definably isomorphic to) such a subgroup.
\end{remark}
\begin{proof}
We show first that every definable subgroup $G$ of $(\mathbb{C}^n,+)$
appears as a Galois group. Let
$a_1=(\alpha_1,0),\ldots,a_n=(\alpha_n,0)$ be elements in the sort
$S$ with generic independent elements $\alpha_i$ in $P$ and
set
\[ E = \{ \bar{x}\in S^n \ |\ \exists \bar{g}\in G \bigwedge_{i=1}^{n}
g_i \star a_i = x_i \}. \]
The type $\textnormal{stp}(a_1,\ldots,a_n/\ulcorner E\urcorner)$ is $P$-internal
because $\alpha_1,\ldots,\alpha_n$ are definable over
$\ulcorner
E\urcorner$. We show that $G$ is the Galois group of this type.
If
\[ b_1,\ldots,b_n \equiv_{\ulcorner E\urcorner, P} a_1,\ldots,a_n, \]
then $\bar{b}$ is in $E$ and there is a unique $\bar{g}$ in
$G$ with $\bar{g}\star \bar{a}=\bar{b}$.
Now assume that conversely $\bar{g}\star \bar{a}=\bar{b}$ for some
$\bar{g}$ in $G$. Note that for $1\leq k\leq n$ the element $a_k$ is
not algebraic over $\bar{a}_{<k},P$ by Lemma \ref{L:indep}, since $S$
is not almost $P$-internal. Hence, Remark \ref{R:algFib} yields
that all elements in the fiber $\pi^{-1}(\alpha_k)$ have the same type
over $\bar{a}_{<k},P$. This shows that we can inductively construct an
automorphism $\sigma$ in $\textnormal{Aut}(\mathcal{M}/P)$ with $\sigma(a_k)=g_k
\star a_k$ for $1\leq k\leq n$. The
automorphism $\sigma$ determines an element of the Galois group of the
fundamental type $\textnormal{stp}(a_1,\ldots,a_n/\ulcorner E\urcorner)$.
Now we show that every Galois group is of the claimed form.
The Galois group $G$ of a
real stationary fundamental
$P$-internal type
$\textnormal{tp}(a_1,\ldots,a_n/B)$
equals
\[ G = \{ (g_1,\ldots,g_n)\in P^n \ \vert \
g_1 \star a_1 , \ldots , g_n \star a_n \equiv_{B,P} a_1 ,
\ldots, a_n \}. \]
More generally, given an imaginary element $e=f(a)$, where $a$ is a
real tuple and $f$ is an $\emptyset$-interpretable function, the
Galois group of the stationary fundamental
$P$-internal type $\textnormal{tp}(e/B)$ equals
\[ \{ g\in P^n \ \vert \
f(g\star a) \equiv_{B,P} e \}. \]
Hence, the statement follows, since every Galois group is the Galois
group of a stationary fundamental (possibly imaginary) type.
Note that we did not use here that $S$ is not almost $P$-internal.
~\end{proof}
\section{Imaginaries in additive covers}\label{S:Imag}
In order to answer Question \ref{Q:2}, we are led to study imaginaries in
additive covers, with a particular focus on the additive covers
of Example \ref{E:covers}. We will
first show that neither the counterexample $\mathcal{M}_1$ to the
CBP of \cite{HPP13} nor the additive cover $\mathcal{M}_0$ eliminates
imaginaries.
\begin{lemma}\label{L:imag}
The additive cover $\mathcal{M}$ does not eliminate imaginaries if
every derivation on $\mathbb{C}$ induces an automorphism
in $\textnormal{Aut}(\mathcal{M}/P)$.
\end{lemma}
\begin{proof}
Choose two generic independent elements $\alpha$ and $\beta$ in
the sort $P$, and pick elements $a$ and $b$ in the fiber of $\alpha$
and $\beta$, respectively. Fix a derivation $D$ with kernel
$\mathbb{Q}^{\text{alg}}$. Let us assume for a contradiction that the
definable set
\[E=\{(x,y)\in S^2 \ |\ \exists (\lambda,\mu)\in P^2 (\lambda\star a=x
\ \& \
\mu\star b=y \ \& \ D(\beta)\lambda-D(\alpha)\mu=0)\}\] has a real
canonical parameter $e$. By hypothesis, the derivation $D$ induces an
automorphism $\sigma_{D}$ in $\textnormal{Aut}(\mathcal{M}/P)$. Note that
$\sigma_{D}$ must fix $E$
setwise, because
$D(\beta)D(\alpha)-D(\alpha)D(\beta)=0$. Therefore (every element
of) the tuple $e$ lies in $P\cup\pi^{-1}(\mathbb{Q}^{\text{alg}})$.
In particular, the definable set $E$ is permuted by every automorphism
induced by a derivation. Now
let $D_1$ be a derivation with $D_1(\alpha)=1$ and $D_1(\beta)=0$, and
note that
$\sigma_{D_1}$ does not permute $E$, since
$D(\beta)\cdot 1-D(\alpha)\cdot 0=D(\beta)\neq 0$, which gives the
desired contradiction.
~\end{proof}
The proof of \cite{HPP13} shows that the sort $S$ in an additive cover
$\mathcal M$ is not almost $P$-internal, whenever every derivation on
$\mathbb{C}$ induces an automorphism in $\textnormal{Aut}(\mathcal{M}/P)$. We will now
give a strengthening of Lemma \ref{L:imag}.
\begin{proposition}\label{P:ImagInt}
If the additive cover $\mathcal{M}$ eliminates imaginaries, then the
sort $S$ is $P$-internal.
\end{proposition}
\begin{proof}
We will mimic the proof of Lemma \ref{L:imag}. Assume for a
contradiction that the sort $S$ is not $P$-internal and choose two
generic independent elements $a$ and $b$ in $S$, with projections
$\alpha=\pi(a)$ and $\beta=\pi(b)$. Since $S$ is not
$P$-internal, there is
an automorphism $\tau$ in $\textnormal{Aut}(\mathcal{M}/P)$ which fixes $b$ and
moves
$a$. If we can construct an automorphism $\sigma$ (playing the role of
$\sigma_D$ in the proof of Lemma \ref{L:imag}) whose only fixed points in
$S$ lie in the
definable closure of $P$, then we conclude as before that the real
canonical parameter $e$ of the definable set \[E=\{(x,y)\in S^2 \
|\
\exists (\lambda,\mu)\in P^2 (\lambda\star
a=x
\ \& \
\mu\star b=y \ \& \ F_\sigma(\beta)\lambda-F_\sigma(\alpha)\mu=0)\}\]
is definable over $P$. The automorphism $\tau$ fixes $e$, yet it maps the
pair
$(a,b)$ in $E$ outside of $E$.
Hence, we need only show in the rest of the proof that there exists such
an automorphism $\sigma$.
Choose an enumeration of elements
$a_i=(\alpha_i,0)$ and $b_i=(\beta_i,0)$ in $S$, with $i<2^{\aleph_0}$,
such that:
\begin{itemize}
\item The tuple
$\bar{\alpha}=(\alpha_i)_{i<2^{\aleph_0}}$
is a transcendence basis of the algebraically closed field
$\mathbb{C}$.
\item For each $i<2^{\aleph_0}$, the element $b_i$ is not algebraic
over
$\bar{a},\bar{b}_{<i}$, where
$\bar{a}=(a_i)_{i<2^{\aleph_0}}$ and
$\bar{b}=(b_i)_{i<2^{\aleph_0}}$. Hence
$\textnormal{RM}(b_i/\bar{a},\bar{b}_{<i})=1$ since $\beta_i$ is in
$\textnormal{acl}(\bar{a})$.
\item Each element in $S$ is algebraic over $\bar{a},\bar{b}$.
\end{itemize}
We denote by ${\langle \alpha \rangle}_i$ the unique subtuple of
$\bar{\alpha}$ of smallest length such that $\beta_i$ is algebraic over
${\langle \alpha \rangle}_i$. Write $\mathcal X$ for the set of all
finite subtuples of $\bar{\alpha}$ and consider the map
\[ \begin{array}{rccc}
\Phi: & \mathcal X & \rightarrow &
2^{\aleph_0} \\[2mm]
& (\alpha_{i_1},\ldots,\alpha_{i_n})&\mapsto& \max(i_1,\ldots,i_n).
\end{array}\]
The partial function $F$ defined by
\[
F(\alpha_i)=\alpha_{\omega^{i+1}}
\ \ \textnormal{ and } \ \
F(\beta_i)=\alpha_{\omega^{\max\left(i,\Phi({\langle \alpha
\rangle}_i)\right)+1}+\omega^{i}}
\]
is clearly injective. It follows inductively by Remark \ref{R:algFib}
that
\[\bar{a},\bar{b} \equiv_{P}
F(\bar{a})\star\bar{a},F(\bar{b})\star\bar{b},\]
so $F$ induces a partial automorphism fixing $P$ pointwise with domain
the set
\[\pi^{-1}(\bar{\alpha})\cup\pi^{-1}(\bar{\beta})\cup P.\]
Given an element $c$ in $S$, it is by construction algebraic over $\bar a,
\bar b$, so the average of its conjugates is definable over $\bar a, \bar
b$,
by Lemma \ref{L:mean}. Thus, every element of $S$ is definable over
$\bar{a}, \bar{b}, P$. Therefore the above partial automorphism extends
uniquely to an automorphism $\sigma$ in $\textnormal{Aut}(\mathcal{M}/P)$.
\begin{claim*}
The automorphism $\sigma$ only fixes the definable
closure of $P$ in $S$.
\end{claim*}
\begin{claimproof*}
Since $\sigma$ fixes the sort $P$, it suffices to show that all
elements $c$ fixed by $\sigma$ of the form $c=(\gamma,0)$ are definable
over $P$.
Otherwise, choose subtuples of least possible length
\[ \hat{a}=(a_{i_1},\ldots,a_{i_m}) \ \ \textnormal{ and } \ \
\hat{b}=(b_{j_1},\ldots,b_{j_n})
\]
of $\bar{a}$ and $\bar{b}$ such that $c$ is
definable over $\hat{a},\hat{b},P$. Note that $\max(n,m)>0$ and that
every element in the fiber
of
$\gamma$ is definable over $\hat{a},\hat{b},P$.
The type
\[
p=\textnormal{tp}(\hat{a},\hat{b},c/\hat{\alpha},\hat{\beta},\gamma)
\]
is fundamental and stationary by Lemma \ref{L:stat}. Its Galois
group $G$ is a definable additive subgroup of
$\mathbb{C}^{m+n+1}$, by Remark \ref{R:Gal}.
If $\gamma$ is not algebraic over
$\hat{\alpha},\hat{\beta}$, Lemma
\ref{L:indep} yields that
$c \mathop{\mathpalette\Ind{}} \hat{a},\hat{b}$, so the type $\textnormal{stp}(c)$ is $P$-internal and
hence so is (the generic element in the fiber $\pi^{-1}(\gamma)$ of) $S$,
contradicting our assumption.
Since the Galois group $G$ of
$p$ is definable over $\{
\hat{\alpha},\hat{\beta},\gamma \}$, we deduce that it is definable over
\[A=\textnormal{acl}(\hat{\alpha},
{\langle \alpha \rangle}_{j_1},\ldots,{\langle \alpha
\rangle}_{j_n})\supset \{
\hat{\alpha},\hat{\beta}\}.\] The group $G$ is given by a system
$\mathcal{G}$ of
linear equations of the form
\begin{equation*}
\lambda_{1} \cdot x_{1}+\dots+\lambda_{m+n+1} \cdot
x_{m+n+1}=0,
\end{equation*}
with coefficients $\lambda_{i}$ in $A$. Since the automorphism $\sigma$ fixes
$c$ and induces an element of the Galois group of $p$, the tuple
\[ ( F(\alpha_{i_1}),\ldots, F(\alpha_{i_m}),
F(\beta_{j_1}),\ldots, F(\beta_{j_n}),
0) \]
is a solution of this system.
Set now $\gamma= \Phi\big( (\hat{\alpha},
{\langle \alpha \rangle}_{j_1},\ldots,{\langle \alpha
\rangle}_{j_n})\big) <2^{\aleph_0}$. If $\alpha_\gamma=\alpha_{i_k}$
for some $1\le k\le m$, denote $i(\gamma)=i_k=\gamma$. Otherwise set
$i(\gamma)=j_\ell$ if $1\le \ell\le n$ is the least index such that
$\alpha_\gamma$ is an element in the tuple
${\langle \alpha \rangle}_{j_\ell}$.
Observe that there is a linear equation in the
system $\mathcal{G}$ such that the coefficient
$\lambda_{i(\gamma)}$ is non-trivial, for otherwise every automorphism
in $\textnormal{Aut}(\mathcal{M}/P)$ fixing all coordinates except (possibly)
the element $d_{i(\gamma)}$, which is the $i(\gamma)^\text{th}$-coordinate
of the tuple $(\hat{a},\hat{b})$,
must also fix $c$, contradicting the minimality of $m$ and $n$. The set
\[ B=\{ F(\alpha_{i_1}),\ldots, F(\alpha_{i_m}),
F(\beta_{j_1}),\ldots, F(\beta_{j_n}) \} \]
consists of distinct elements, by the injectivity of $F$. Therefore,
it suffices to show that the element $F(d_{i(\gamma)})$ is not algebraic
over
\[ A\cup \big(B\backslash \{ F(d_{i(\gamma)}) \}\big) \]
to reach the desired contradiction. For this we need only show that the
element $F(d_{i(\gamma)})$ is not contained in the set
\[
\{ \hat{\alpha},
{\langle \alpha \rangle}_{j_1},\ldots,{\langle \alpha
\rangle}_{j_n} \}.
\]
If $d_{i(\gamma)}=\alpha_{i(\gamma)}$, we obtain the result
since
\begin{align*}
\Phi(F(d_{i(\gamma)}))&=\Phi(F(\alpha_\gamma)) \\
&=\Phi(\alpha_{\omega^{\gamma+1}}) \\
&=\omega^{\gamma+1}\geq\gamma+1 \\
&>\gamma= \Phi\big(
(\hat{\alpha},
{\langle \alpha \rangle}_{j_1},\ldots,{\langle \alpha
\rangle}_{j_n})\big).
\end{align*}
Otherwise $d_{i(\gamma)}=\beta_{i(\gamma)}$, so
\begin{align*}
\Phi(F(d_{i(\gamma)}))&=\Phi(F(\beta_{i(\gamma)}))\\
&=
\omega^{\max\left({i(\gamma)},\Phi({\langle \alpha
\rangle}_{i(\gamma)})\right)+1}+\omega^{{i(\gamma)}} \\
&>
\omega^{ \Phi({\langle \alpha
\rangle}_{i(\gamma)}) +1 } =
\omega^{\gamma+1},
\end{align*}
and we conclude as in the first case.
\end{claimproof*}
~\end{proof}
Whenever the sort $S$ is not $P$-internal, the additive cover does
not eliminate imaginaries. The situation is different for
finite imaginaries: We will see below that the additive cover
$\mathcal{M}_0$ does not eliminate finite imaginaries, whereas the
additive cover $\mathcal{M}_{1}$ does.
\begin{remark}\label{R:FiniteImagM0}
Let $\alpha$ and $\beta$ be two generic independent elements in
the sort $P$. The finite subset
$\{(\alpha,0),(\beta,0)\}$ of $S$ has no real canonical parameter in
$\mathcal{M}_{0}$.
\end{remark}
\begin{proof}
Assume that the tuple $e$ is a real canonical parameter of the set
$\{(\alpha,
0),(\beta, 0)\}$. Since the tuple $e$ is clearly definable over
$(\alpha,0),
(\beta, 0), P$, the projection $\pi(c)$ of every element $c$ in $S$
appearing in $e$
(if any) must be contained in the $\mathbb{Q}$-vector space generated
by $\alpha$ and $\beta$.
There is an automorphism $\tau$ of $P$ extending the
non-trivial permutation of the set $\{\alpha, \beta\}$, so it is easy
to show that there is a rational number $q$ such that
$\pi(c)=q\cdot(\alpha+\beta)$. Hence, the tuple $e$ is definable over
$(\alpha+\beta,0),P$.
Therefore, any additive map $F$ with $F(\alpha)=1$ and
$F(\beta)=-1$ induces an automorphism $\sigma_F$ fixing $e$, yet it
does not permute $\{(\alpha,0),(\beta,0)\}$.
~\end{proof}
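For the sake of completeness, the rational number $q$ in the previous proof can be found as follows: writing $\pi(c)=q_1\alpha+q_2\beta$ with $q_1,q_2$ in $\mathbb{Q}$, the automorphism $(\tau,\tau\times\tau)$ permutes $\{(\alpha,0),(\beta,0)\}$ and therefore fixes $e$ and $\pi(c)$, so
\[ q_1\alpha+q_2\beta=\tau\big(q_1\alpha+q_2\beta\big)=q_1\beta+q_2\alpha, \]
and hence $q_1=q_2$, since $\alpha\neq\beta$.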
In order to show that the additive cover
$\mathcal{M}_1$ eliminates finite imaginaries,
we first provide a sufficient condition.
\begin{proposition}\label{P:generalimag}
An additive cover $\mathcal{M}$ eliminates
finite imaginaries, whenever every finite subset of $S$ on which $\pi$ is
injective has a real
canonical parameter.
\end{proposition}
\begin{proof}
Let $A$ be the finite set $\{\bar{a}_1,\ldots,\bar{a}_n\}$ of
real $m$-tuples.
Every function
$\Phi:\{1,\ldots, m\}\longrightarrow\{P,S\}$ determines a
subset $A_{\Phi}$ of $A$, consisting of those tuples whose
$j^\text{th}$-coordinate lies in $\Phi(j)$ for every $j$. Every automorphism
permuting $A$ permutes each $A_{\Phi}$, so we may assume that for
every tuple in $A$, the coordinates have the same configuration (given
by the function $\Phi$).
Since the canonical parameter is only determined up to
interdefinability, we may further assume (after possibly permuting the
coordinates) that there is a natural number $0\leq k\leq m$ such that
for each tuple $\bar {a}_i$ in $A$:
\begin{itemize}
\item The $j^\text{th}$-coordinate $a_{i}^{j}$ lies in $S$ for
$1\leq j\leq k$.
\item The $\ell^\text{th}$-coordinate $a_{i}^{\ell}$ lies in $P$
for $k< \ell\leq m$.
\end{itemize}
For every coordinate $1\leq j\leq k$ set $A^j=\{a_{i}^{j} \ | \ 1\leq
i\leq n \}$ and $d_{i}^{j}$
the average of the subset $A^j \cap \pi^{-1}(\pi(a_{i}^{j}))$. For
$1\leq i\leq
n$ let
now $\varepsilon_{i}^{j}$ be the unique element in $P$ with
$a_{i}^{j}=\varepsilon_{i}^{j}
\star d_{i}^{j}$. Consider the tuples
$\varepsilon_{i}=(\varepsilon_{i}^{1},\ldots,\varepsilon_{i}^{k})$
and
\[
\alpha_i=(\pi(a_{i}^{1}),\ldots,\pi(a_{i}^{k}),a_{i}^{k+1},\ldots,a_{i}^{m})\]
in $P$. We need only show that the tuple
\[ e=\big( \ulcorner \{(\varepsilon_{1},\alpha_{1}),\ldots,
(\varepsilon_{n},\alpha_{n})
\}\urcorner,\ulcorner
\{d_{1}^{1},\ldots,d_{n}^{1}\}\urcorner,\ldots,\ulcorner
\{d_{1}^{k},\ldots,d_{n}^{k}\}\urcorner \big) \]
is a canonical parameter of $A$. Note that $e$ is a real tuple since
the sets
$\{d_{1}^{j},\ldots,d_{n}^{j}\}$ have real canonical parameters, by
our assumption.
Let $\sigma$ be an automorphism. If $\sigma$ permutes the set $A$,
Lemma \ref{L:mean} yields
that $\sigma$ permutes each set
$\{d_{1}^{j},\ldots,d_{n}^{j}\}$, since the image of $A^j \cap
\pi^{-1}(\pi(a_{i}^{j}))$ under $\sigma$ is $A^j \cap
\pi^{-1}(\pi(a_{i(\sigma)}^{j}))$, where $i(\sigma)$ is an index with
$\sigma(\bar{a}_{i})=\bar{a}_{i(\sigma)}$, and hence
$\sigma(a_{i}^{j})=a_{i(\sigma)}^{j}$ and
$\sigma(\alpha_i)=\alpha_{i(\sigma)}$. Therefore
$\sigma(\varepsilon_{i})=\varepsilon_{i(\sigma)}$, since
$\sigma(d_{i}^{j})=d_{i(\sigma)}^{j}$. Hence $\sigma$ fixes $e$.
Assume now that $\sigma$ fixes the tuple $e$. Then $\sigma$ permutes the finite
set $\{(\varepsilon_{1},\alpha_{1}),\ldots,(\varepsilon_{n},\alpha_{n})\}$, say
$(\varepsilon_{i},\alpha_{i})$ is mapped to $(\varepsilon_{i(\sigma)},\alpha_{i(\sigma)})$, and \[
\sigma(a_{i}^{j})=\sigma(\varepsilon_{i}^{j}) \star
\sigma(d_{i}^{j})=\varepsilon_{i(\sigma)}^{j}\star \sigma(d_{i}^{j}). \]
It suffices to show that $\sigma(d_{i}^{j})=d_{i(\sigma)}^{j}$ to
conclude that $\sigma$ permutes $A$. This follows immediately from
\[\pi(\sigma(d_{i}^{j}))=\sigma(\pi(d_{i}^{j}))=
\sigma(\alpha_{i}^{j})=\alpha_{i(\sigma)}^{j},\] since $\sigma$
permutes the set $\{d_{1}^{j},\ldots,d_{n}^{j}\}$.
~\end{proof}
Thus, we will deduce that the additive cover $\mathcal{M}_1$ eliminates
finite imaginaries, by applying Proposition \ref{P:generalimag}, lifting
the corresponding canonical parameters of finite subsets of $P$ to $S$
using the ring operations.
\begin{corollary}\label{C:finiteImag}
The additive cover $\mathcal{M}_{1}$ eliminates finite imaginaries.
\end{corollary}
\begin{proof}
By Proposition \ref{P:generalimag}, we need only show that every
finite
subset
$\{a_1,\ldots,a_n\}$ of $S$, with pairwise distinct projections
$\pi(a_i)=\alpha_i$, has a real canonical parameter.
For $1\leq i\leq n$, lift the $i^\text{\,th}$ elementary symmetric function to
$S$:
\begin{equation*}
\tag{$\spadesuit$}
b_i=\sum_{1\leq j_1<\dots<j_i\leq n}a_{j_1}\otimes\cdots\otimes
a_{j_i}.
\end{equation*}
We
claim that the tuple $b=(b_1,\ldots,b_n)$ is a canonical parameter of
the set
$A=\{a_1,\ldots,a_n\}$. If the automorphism
$\sigma$ permutes $A$, then it clearly fixes $b$.
Assume now that $\sigma$ fixes the tuple $b$. Write each element $a_i$
of $A$ as $a_i=(\alpha_i,a_{i}')$, and similarly
$b_i=(\beta_i,b_{i}')$.
In the full structure
$(\mathbb{C},\mathbb{C}\times\mathbb{C})$ the definable condition
($\spadesuit$) uniquely translates into
\[ \beta_i=\sum_{1\leq j_1<\dots<j_i\leq
n}\alpha_{j_1}\cdots\alpha_{j_i}\]
and the system of linear equations:
\begin{equation*}
\begin{pmatrix}
1 & 1 & \cdots & 1 \\
\sum\limits_{j\neq 1}\alpha_j & \sum\limits_{j\neq 2}\alpha_j
& \cdots & \sum\limits_{j\neq n}\alpha_j \\
\sum\limits_{\substack{j_1<j_2 \\ j_1,j_2\neq 1}}\alpha_{j_1}
\alpha_{j_2}
& \sum\limits_{\substack{j_1<j_2 \\ j_1,j_2\neq
2}}\alpha_{j_1} \alpha_{j_2}
& \cdots
& \sum\limits_{\substack{j_1<j_2 \\ j_1,j_2\neq
n}}\alpha_{j_1} \alpha_{j_2} \\
\vdots & \vdots & \ddots & \vdots \\
\prod\limits_{j\neq 1}\alpha_j & \prod\limits_{j\neq
2}\alpha_j & \cdots & \prod\limits_{j\neq n}\alpha_j
\end{pmatrix}
\begin{pmatrix}
a_1' \\ a_2' \\ \vdots \\ \vdots \\ \vdots \\ a_n'
\end{pmatrix}
=
\begin{pmatrix}
b_1' \\ b_2' \\ \vdots \\ \vdots \\ \vdots \\ b_n'
\end{pmatrix}
\end{equation*}
Since the tuple $(\beta_1,\ldots,\beta_n)$ encodes the finite set
$\{\alpha_1,\ldots,\alpha_n\}$ and the above
matrix has determinant $\prod_{i<j}(\alpha_i-\alpha_j)\neq 0$,
we
conclude that the automorphism $\sigma$ permutes the set $A$.
~\end{proof}
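For instance, for $n=2$ the tuple $b$ consists of $b_1=a_1\oplus a_2$ and $b_2=a_1\otimes a_2$, and the above system reduces to
\[ \begin{pmatrix} 1 & 1 \\ \alpha_2 & \alpha_1 \end{pmatrix}\begin{pmatrix} a_1' \\ a_2' \end{pmatrix}=\begin{pmatrix} b_1' \\ b_2' \end{pmatrix}, \]
with determinant $\alpha_1-\alpha_2\neq 0$.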
\section{The CBP in additive covers}\label{S:CBP_addcovers}
As already stated in Remark \ref{R: Cex}, the CBP does not hold in the
additive cover $\mathcal{M}_1$ (see Example
\ref{E:covers}). For the sake of completeness, we will
now sketch a proof, using the
terminology introduced so far. For generic independent
elements $a$, $b$ and $c$ in $S$, set $d=(a \otimes c) \oplus b$.
Assuming the CBP, the type
$\textnormal{stp}(a,b/c,d)$, and hence also $\textnormal{stp}(a/c,d)$, is almost $P$-internal, since $\textnormal{Cb}(c,d/a,b)=(a,b)$. As
the elements $a$, $c$ and $d$ are again (generic)
independent, we conclude that the type $\textnormal{stp}(a)$ is almost
$P$-internal, contradicting the fact that $S$ is not
almost $P$-internal.
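We sketch why $\textnormal{Cb}(c,d/a,b)$ is interdefinable with $(a,b)$: the type $\textnormal{tp}(c,d/a,b)$ is the generic type of the $(a,b)$-definable set
\[ E_{a,b}=\{\,(x,(a\otimes x)\oplus b)\ \mid\ x\in S\,\}, \]
of Morley rank two, and distinct pairs $(a,b)$ yield distinct generic types, since two such sets intersect in a set of Morley rank at most one unless the pairs coincide (compare first the projections and then the second coordinates). Hence an automorphism fixes $\textnormal{Cb}(c,d/a,b)$ if and only if it fixes the pair $(a,b)$.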
The above is a lifting to the sort $S$ of a configuration
witnessing that the field $P$ is not one-based. We will now present
another proof that the additive cover $\mathcal{M}_1$ does not have
the CBP, using the so-called group
version of the CBP, which already appeared in \cite[Theorem 4.1]{KP06}.
\begin{fact}\label{F:groupCBP}\textup{(}\cite[Fact 1.3]{HPP13}\textup{)}
Let $G$ be a definable group in a theory with the CBP. Whenever $a$ is
in $G$ and the type $p=\textnormal{tp}(a/A)$ has finite stabilizer, then $p$ is
almost internal to the family of all non-locally modular minimal types.
\end{fact}
The failure of the group version of the CBP is another example of such a
lifting approach: Consider two generic independent elements $a$ and $b$ of
$S$, and set $c=a\otimes b$. It is easy to see that $\textnormal{stp}(a,b,c)$
has trivial stabilizer, so the above Fact \ref{F:groupCBP} yields,
assuming the CBP, that $S$ is almost $P$-internal, which is a
contradiction.
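For the sake of completeness, we check that the stabilizer, computed inside the definable group $(S,\oplus)^3$, is indeed trivial: if $(g_1,g_2,g_3)$ stabilizes $\textnormal{stp}(a,b,c)$, then for a realization $(a,b,c)$ of the type independent from $\bar{g}$ we get $(a\oplus g_1)\otimes(b\oplus g_2)=c\oplus g_3$, that is,
\[ g_3=(a\otimes g_2)\oplus(g_1\otimes b)\oplus(g_1\otimes g_2). \]
Taking projections and using that $\pi(a),\pi(b)$ are generic independent over $\bar{g}$ forces $\pi(g_1)=\pi(g_2)=\pi(g_3)=0$; comparing second coordinates then yields $g_3'=\pi(a)g_2'+\pi(b)g_1'$, which for the same reason forces $g_1'=g_2'=g_3'=0$.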
Now we will see that the additive cover $\mathcal{M}_1$ is already
determined by its automorphism group over $P$.
\begin{proposition}\label{P:M1AutVersion}
If $\mathcal{M}$ is an additive cover such that
$\textnormal{Aut}(\mathcal{M}/P)$ corresponds to the group of derivations on
$\mathbb{C}$, then the product
\[(\alpha,a')\otimes (\beta,b')=(\alpha\beta,\alpha b' + \beta a')\]
is definable in $\mathcal{M}$.
\end{proposition}
\begin{proof}
Choose two generic independent elements
$\alpha$ and $\beta$ in $P$ and consider the elements
$a=(\alpha,0),b=(\beta,0)$ and $c=(\alpha\beta,0)$
in $S$. The type $\textnormal{tp}(a,b,c/\alpha,\beta,\alpha\beta)$ is
$P$-internal and stationary, by Lemma \ref{L:stat}.
Since every element in its Galois group corresponds to a derivation,
we deduce
that for all elements
\[
\tilde a=(\alpha,a'), \tilde b=(\beta,b') \ \ \textnormal{ and } \ \
\tilde c=(\alpha\beta,c')
\]
in $S$, we have that $a,b,c \equiv_{P} \tilde a,\tilde b,\tilde c$ if
and only
if $c'=\alpha
b'+\beta a'$.
Therefore $c$ is definable over $a,b,P$.
In fact, we obtain that $c$ is definable over $a, b$:
Let $\bar{\gamma}$ be a tuple of elements in $P$ such that $c$ is
definable over $a,b,\bar{\gamma}$.
Now let $\bar{\varepsilon}$ be a maximal subtuple of $\bar{\gamma}$
such that
\[\bar{\varepsilon} \mathop{\mathpalette\Ind{}}_{\alpha,\beta}a,b,c.\]
Note that $\bar{\gamma}\backslash\bar{\varepsilon}$ is algebraic over
$\bar{\varepsilon},a,b,c$. Therefore Remark $\ref{R:algFib}$
implies that
$\bar{\gamma}\backslash\bar{\varepsilon}$ is algebraic over
$\bar{\varepsilon},\alpha,\beta$.
Hence $c$ is definable over $a,b,\bar{\varepsilon}$ and so,
by independence, we deduce that $c$ is algebraic over $a,b$.
The average $(\alpha\beta,e')$ of the finite set consisting of the
$\{a,b\}$-conjugates of $c$ is
definable over $a,b$. Similarly as in the proof of Lemma \ref{L:stat},
we deduce that
$e'$ is definable over $\alpha,\beta$. Hence,
\[ c=(-e')\star (\alpha\beta,e') \]
is definable over $a,b$.
Let $\varphi(x,y,z)$ be
a formula such that $c$ is the unique realization of $\varphi(a,b,z)$.
For two generic independent elements
$a_1=(\alpha_1,a_1')$ and $b_1=(\beta_1,b_1')$ in $S$, choose a
derivation $D$ with $D(\alpha_1)=-a_1'$
and $D(\beta_1)=-b_1'$ and let $\sigma_D$ be the induced automorphism
in $\textnormal{Aut}(\mathcal{M}/P)$. Furthermore, take a
field automorphism $\tau$ of $P$ with $\tau(\alpha_1)=\alpha$
and $\tau(\beta_1)=\beta$ and let $\sigma_{\tau}$ be the induced
automorphism of the additive cover as in Remark \ref{R:algFib}.
Since $\sigma_{\tau}(\sigma_{D}(a_1,b_1))=(a,b)$, we deduce that
$\mathcal{M}\models\varphi(a_1,b_1,c_1)$ if and only if
$c_1=a_1\otimes b_1$.
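Indeed, $\sigma_D(a_1)=D(\alpha_1)\star a_1=(\alpha_1,0)$ and likewise $\sigma_D(b_1)=(\beta_1,0)$, and the unique realization of $\varphi(a_1,b_1,z)$ is
\[ (\sigma_{\tau}\circ\sigma_{D})^{-1}(c)=\big(\alpha_1\beta_1,\,-D(\alpha_1\beta_1)\big)=\big(\alpha_1\beta_1,\ \alpha_1 b_1'+\beta_1 a_1'\big)=a_1\otimes b_1. \]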
Now we show that the multiplication $\otimes$ is globally definable,
following
the field version in Marker and Pillay's work \cite[Fact 1.5]{MP90}.
Set
\[X=\{ a \ | \ \varphi(\varepsilon\star a,b,(\varepsilon \star
a)\otimes b) \text{ for generic } b \text{ independent from } a \text{
and every } \varepsilon \text{ in }P \}.\]
Note that $\pi(X)$ is cofinite and $\pi^{-1}[\pi(a)]$ is contained in
$X$ for every $a$ in $X$.
Note that $a=b$ if and only if they define the same germ, that is
$a\otimes c=b\otimes c$ for generic $c$ independent from $a$ and $b$,
since generic elements have an inverse.
Write $P\backslash \pi(X)=\{\gamma_1,\ldots,\gamma_k\}$ for the finite complement.
For $1\leq i\leq k$ choose $\alpha_i$ and $\beta_i$ in $\pi(X)$ such
that $\gamma_i=\alpha_i\beta_i$.
Using the elements $(\gamma_i,0),(\alpha_i,0),(\beta_i,0)$ as
parameters, we can uniformly identify every element in the fiber of
$\gamma_i$
with the product of two elements in $X$, namely
$(\gamma_i,c')=(\alpha_i,0)\otimes \big(\varepsilon\star
(\beta_i,0)\big)$, where
$\varepsilon$ is the unique element in $P$ such that
$(\varepsilon\alpha_i)\star(\gamma_i,0)=(\gamma_i,c')$. Now we can
define the multiplication $\otimes$ globally as the composition of
germs of
elements in $X$.
~\end{proof}
We will now show that the CBP holds in the additive cover $\mathcal{M}_0$
and more generally whenever there is
essentially no additional structure on the sort $S$.
\begin{proposition}\label{P:CBPpureCover}
The CBP holds in an additive cover $\mathcal{M}$, whenever every
additive map on $\mathbb{C}$ induces an automorphism in
$\textnormal{Aut}(\mathcal{M}/P)$.
\end{proposition}
In particular, the additive cover $\mathcal{M}_0$ has the CBP.
\begin{proof}
Recall that we need only consider real types over models in order to
deduce that the CBP holds.
Let $p(x)$ be the type of some finite real tuple $\bar{a}$ of length
$k$
over an elementary substructure $N$. In order to show that
the type
$\textnormal{stp}(\textnormal{Cb}(p)/\bar{a})$ is almost
$P$-internal, choose a formula $\varphi(x;\bar{b},\bar{\gamma})$ in
$p$ of least Morley rank and Morley degree one, where $\bar{b}$ is a
tuple of elements in $S\cap N$ and $\bar{\gamma}$ is a tuple of
elements
in
$P \cap N$.
We claim that every automorphism in
$\textnormal{Aut}(\mathcal{M}/P,\bar{a})$ fixes the canonical base $\textnormal{Cb}(p)$, which
is (interdefinable with) the canonical parameter $\ulcorner
\textnormal{d}_{p} x
\varphi(x;y)\urcorner$. For this, it suffices to show that every
such automorphism sends the tuple
$\bar{b}$ to another realization of the formula
$\textnormal{d}_{p} x \varphi(x;\bar{y},\bar{\gamma})$.
Write $\bar{a}=(a_1,\ldots,a_k)$ and \[ \alpha_i=\begin{cases}
\pi(a_i), \text{ if $a_i$ is in $S$}\\
a_i \text{ otherwise.}
\end{cases} \] For
$\bar{b}=(b_1,\ldots,b_n)$, set
$\beta_i=\pi(b_i)$.
We may assume (after possibly reordering) that
$(\beta_1,\ldots,\beta_m)$ is a maximal subtuple of $\bar{\beta}$
which
is $\mathbb{Q}$-linearly independent over $\bar{\alpha}$. So,
\[ \beta_j = \sum\limits_{i=1}^{m}q_i\cdot\beta_i +
\sum\limits_{i=1}^{k}r_i\cdot\alpha_i \]
for $m+1\leq j\leq n$ and rational numbers $q_i$ and $r_i$. In order
to show that $\bar{b}$ is mapped by the automorphism
$\sigma$ of $\textnormal{Aut}(\mathcal{M}/P,\bar{a})$ to another realization of
the formula
$\textnormal{d}_{p} x \varphi(x;\bar{y},\bar{\gamma})$, it suffices to show
that
\[ N\models \forall \varepsilon_1,\ldots,\varepsilon_m \in P\ \
\textnormal{d}_{p} x \varphi(x;\bar{\varepsilon}\star\bar{b},\bar{\gamma})
\]
where $\bar{\varepsilon}=(\varepsilon_1,\ldots,\varepsilon_n)$ with
\[ \varepsilon_j = \sum\limits_{i=1}^{m}q_i\cdot\varepsilon_i \]
for $m+1\leq j\leq n$. Indeed: since $N$ is an elementary
substructure of $\mathcal{M}$, the above implies that \[
\mathcal{M}\models \forall \varepsilon_1,\ldots,\varepsilon_m \in
P\ \ \textnormal{d}_{p} x
\varphi(x;\bar{\varepsilon}\star\bar{b},\bar{\gamma}), \] so
$\sigma(\bar b)= F_\sigma(\bar{\beta})\star \bar b$ realizes
$\textnormal{d}_{p} x \varphi(x;\bar{y},\bar{\gamma})$, as desired.
So, let $\varepsilon_1,\ldots,\varepsilon_m$ be in $P\cap N$ and set
$\varepsilon_j = \sum_{i=1}^{m}q_i\cdot\varepsilon_i
$ for $m+1\leq j\leq n$. Choose an additive
map $G$ vanishing on $\alpha_i$ for $1\leq i\leq k$ and with
$G(\beta_i)=\varepsilon_i$ for $1\leq i \leq m$. Hence \[G(\beta_j)=
\sum\limits_{i=1}^{m}q_i\cdot\varepsilon_i , \]
so the image of $\bar b$ under the automorphism $\sigma_G$ induced
by $G$ lies in $N$.
Hence $\sigma_{G}(\bar{b})=\bar \varepsilon \star \bar b$ realizes
$\textnormal{d}_{p} x \varphi(x;\bar{y},\bar{\gamma})$ since
$\sigma_{G}(\bar{a})=\bar{a}$, as desired.
~\end{proof}
\begin{remark}\label{R:rsCBP}
The above proof shows that the canonical base of a real stationary
type $\textnormal{stp}(a/B)$ is definable over $a,P$ which is stronger than
$P$-internality.
As we will see below this does not hold for all
imaginary types.
\end{remark}
Palac\'in and Pillay ~\cite{PP17} considered a strengthening of the CBP,
called the \textit{strong canonical base property}, which we reformulate
in the setting of additive covers: Given a (possibly imaginary) type
$p=\textnormal{stp}(a/B)$,
its canonical base $\textnormal{Cb}(p)$ is algebraic over $a, \bar d$, where
$\textnormal{stp}(\bar d)$ is $P$-internal. If we denote by $\mathcal{Q}$ the
family of
types over $\textnormal{acl}^{\textnormal{eq}}(\emptyset)$ which are $P$-internal, then the
strong CBP holds if and only if every Galois group $G$ relative to
$\mathcal{Q}$ is \emph{rigid} \cite[Theorem 3.4]{PP17}, that is, the
connected component of every
definable subgroup of $G$ is definable over $\textnormal{acl}(\ulcorner G\urcorner)$.
Notice that no additive cover where the sort $S$ is not almost
$P$-internal can have the strong CBP:
For the two generic independent elements $a=(\alpha,0)$ and $b=(\beta,0)$
in $S$, the stationary
$P$-internal type $\textnormal{tp}(a,b/\alpha,\beta)$ is fundamental and has Galois
group
$(\mathbb{C}^2,+)$. This is clearly a $\mathcal{Q}$-internal type whose
Galois group $G$ (relative to $\mathcal{Q}$) is a definable subgroup of
$(\mathbb{C}^2,+)$. Since vector groups are never rigid, it suffices to
show that $G=\mathbb{C}^2$ (compare to \cite[Proposition 4.9]{JJP20}).
Otherwise,
the
element
$b$ is algebraic over $a,\bar{d}$, where $\textnormal{stp}(\bar d)$ belongs to
$\mathcal Q$ (up to permutation of $a$ and $b$). Hence, the type
$\textnormal{stp}(b/a)$, and thus $S$, is almost $P$-internal.
The question whether a Galois-theoretic interpretation
of the CBP exists arose in \cite{PP17}. We conclude this section by
showing that
no \textit{pure} Galois-theoretic account of the CBP can be provided.
We already noticed in Remark \ref{R:Gal} that, whenever the sort $S$ in an
additive cover is not almost $P$-internal, then the Galois groups relative
to $P$ are precisely all definable subgroups of $(\mathbb{C}^n,+)$, as $n$
varies. In particular, all such additive covers share the same Galois
groups
(relative to $P$). We will now see that the same holds for the Galois
groups relative to $\mathcal{Q}$.
\begin{lemma}\label{L:GalAccount}
All additive covers where the sort $S$ is
not almost $P$-internal share
the same Galois groups relative to $\mathcal{Q}$.
\end{lemma}
\begin{proof}
Note that $\mathcal{Q}$-internality coincides with $P$-internality.
Moreover,
the Galois group relative to $\mathcal{Q}$ is a subgroup of the Galois
group relative to $P$, which by Remark \ref{R:Gal} is a definable
subgroup of some $(\mathbb{C}^n,+)$. So it suffices to show that
every definable subgroup $G$ of $(\mathbb{C}^n,+)$
appears as a Galois group relative to $\mathcal{Q}$.
Choose a tuple $\bar{a}$ of elements
$a_1=(\alpha_1,0),\ldots,a_n=(\alpha_n,0)$ in the sort
$S$ with generic independent elements $\alpha_i$ in $P$ and
set
\[ E = \{ \bar{x}\in S^n \ |\ \exists \bar{g}\in G \bigwedge_{i=1}^{n}
g_i \star a_i = x_i \}. \]
The proof of Remark \ref{R:Gal} shows that the stationary type
$\textnormal{stp}(\bar a/\ulcorner E\urcorner)$
is $P$-internal and fundamental with Galois group $G$. Moreover, for
every set $B$ of parameters we have that \[\textnormal{stp}(\bar a/\ulcorner
E\urcorner, B) \vdash
\textnormal{tp}(\bar a/\ulcorner E\urcorner,B,P).\]
We now show that the Galois group $H$ relative to $\mathcal{Q}$ equals
$G$. Assume for a contradiction that $H$ is a
proper subgroup of $G$. The group $G$ (and $H$ relative to $G$) is
given by a system of linear equations in echelon form, so we find an
index $1\le k\le n$ and a tuple $\bar d$ with $\textnormal{stp}(\bar d)$
$P$-internal such that the element $a_k$ is not algebraic over
$\bar{a}_{>k},\ulcorner E\urcorner$, yet it is algebraic over
$\bar{a}_{>k},\ulcorner
E\urcorner,\bar d$.
By $P$-internality of $\textnormal{stp}(\bar d)$, there is a set of
parameters $C$ with $C \mathop{\mathpalette\Ind{}} \bar{d},\bar{a},\ulcorner E\urcorner$ such
that $\bar d$ is
definable over $C,P$.
The above yields that $a_k$ is algebraic over $\bar{a}_{>k},\ulcorner
E\urcorner,C,P$ and therefore over $\bar{a}_{>k},\ulcorner
E\urcorner$, which yields the desired contradiction.
\end{proof}
\section{Preservation of internality in additive
covers}\label{S:PI_addcovers}
In this section we will show that the additive cover $\mathcal{M}_1$ does
not preserve
internality on intersections nor internality on quotients. We will start
with the latter, whose proof is considerably simpler.
\begin{proposition}\label{P: M1PropB}
The additive cover $\mathcal{M}_1$ does not preserve internality on
quotients.
\end{proposition}
\begin{proof}
Choose generic independent elements $a,b$ and $c$ in $S$ and set
$d=(a\otimes c)\oplus b$.
Consider now the following definable set:
\begin{align*}
E &= \{ (x,y)\in S^2 \ |\ \pi(x)=\pi(a) \ \& \ \pi(y)=\pi(b) \ \& \
d=(x\otimes c)\oplus y \}
\end{align*}
Since the canonical parameter
$\ulcorner E\urcorner$ is clearly definable over $c,d,\pi(a),\pi(b)$
and the type $\textnormal{stp}(c,d,\pi(a),\pi(b)/\pi(c),\pi(d))$
is $P$-internal,
we deduce that the type \[ \textnormal{stp}(\ulcorner E\urcorner/\pi(c),\pi(d))\]
is $P$-internal.
\begin{claim*}
The type $\textnormal{stp}(\ulcorner E\urcorner / \pi(a),\pi(b))$ is
$P$-internal.
\end{claim*}
\begin{claimproof*}
Choose elements $a_1$ and $b_1$ in the fiber of $\pi(a)$, resp.
$\pi(b)$, such that
\[ a_1,b_1 \mathop{\mathpalette\Ind{}}_{\pi(a),\pi(b)} \ulcorner E\urcorner. \]
Note that every automorphism $\sigma$ in $\textnormal{Aut}(\mathcal{M}_1/P)$
fixing the elements $a_1$ and $b_1$ must fix
$\pi^{-1}(\pi(a))\times\pi^{-1}(\pi(b))$, so $\sigma$
permutes $E$. In particular, the canonical parameter $ \ulcorner
E\urcorner$ is definable over $a_1,b_1,P$, as desired.
\end{claimproof*}
We assume now that $\mathcal M_1$ preserves internality on
quotients in order to
reach a
contradiction. Since
\[\textnormal{acl}^{\textnormal{eq}}\big(\pi(a),\pi(b)\big)\cap\textnormal{acl}^{\textnormal{eq}}\big(\pi(c),\pi(d)\big)=\textnormal{acl}^{\textnormal{eq}}(\emptyset),\]
we deduce
that the type $\textnormal{stp}(\ulcorner E\urcorner)$
is almost $P$-internal.
Therefore there is a real subset $C$ of $S$ with
$C \mathop{\mathpalette\Ind{}} \ulcorner E\urcorner$ such that the canonical parameter
$\ulcorner E\urcorner$ is algebraic over $C,P$. Note that in
particular
\[\pi(C),\pi(a) \mathop{\mathpalette\Ind{}} \pi(b).\]
Choose now a derivation $D$ vanishing both on $\pi(C)$ and on
$\pi(a)$ with
$D(\pi(b))=1$. The induced automorphism $\sigma_D$ fixes $C$ and $P$
pointwise but $\ulcorner E\urcorner$ has an infinite orbit, yielding
the desired contradiction.
~\end{proof}
\begin{remark}
The previous set is definable in every additive cover, since $E$ equals
\[
\{ (x,y)\in S^2 \ |\ \exists(\lambda,\mu)\in P^2 \big( \lambda\star
a = x \ \& \ \mu\star b= y \ \& \ \lambda \cdot \pi(c)+\mu=0
\big) \}.
\]
The main cause for the failure of preservation of internality on
quotients is that $E$ is definable over $c,d,P$ in $\mathcal{M}_1$.
\end{remark}
\begin{proposition}\label{P: M1PropA}
The additive cover $\mathcal{M}_1$ does not preserve internality on
intersections.
\end{proposition}
\begin{proof}
Choose generic independent elements $a_1$ and $a_2$ in $S$ and
$\varepsilon$ in $P$ generic over $a_1, a_2$. Set
$\bar\alpha=(\alpha_1,\alpha_2)=(\pi(a_1),\pi(a_2))$.
Consider now the definable set
\[ E= \{ (x,y)\in S^2 \ |\ \exists(\lambda,\mu)\in P^2 ( \lambda\star
a_1=x \ \& \
\mu\star a_2=y \ \& \ \varepsilon\cdot\lambda+\mu=0 ) \}. \]
Choose $\beta_1$ in $P$ generic over $
\ulcorner
E\urcorner, \bar\alpha,\varepsilon$ as well as elements
$\beta_2$ and $\beta_3$ in $P$ with
\begin{align}
0&=\beta_1 \alpha_1 + \frac{1}{2}\beta_2 \alpha_{1}^2 +
\frac{1}{3}\beta_3 \alpha_{1}^3+\alpha_2 \\
0&=\beta_1 +\beta_2 \alpha_1 + \beta_3 \alpha_{1}^2 - \varepsilon
\end{align}
This is possible because the matrix
\[ \begin{pmatrix}
\frac{\alpha_1^2}{2} & \frac{\alpha_{1}^3}{3} \\
\alpha_1 & \alpha_{1}^2
\end{pmatrix}\]
has determinant $\frac{\alpha_{1}^4}{2}-\frac{\alpha_{1}^4}{3}\neq 0$.
Since $\beta_2$ and $\beta_3$ are definable over
$\beta_1,\bar\alpha,\varepsilon$, we get the independence
\begin{equation*}
\tag{$\blacklozenge$}
\bar\beta \mathop{\mathpalette\Ind{}}_{\bar\alpha,\varepsilon}
\ulcorner E\urcorner,
\end{equation*}
where $\bar\beta=(\beta_1,\beta_2,\beta_3)$.
\begin{claim}
The type $\textnormal{stp}(\ulcorner E\urcorner/\bar\beta)$
is $P$-internal.
\end{claim}
\begin{claimproof}
Let $b_1,b_2$ and $b_3$ be elements in $S$ such that $b_i$ is in
the
fiber of $\beta_i$ with
\[ b_1,b_2,b_3 \mathop{\mathpalette\Ind{}}_{\bar\beta} \ulcorner
E\urcorner, \bar\alpha,\varepsilon \]
We show that every automorphism $\sigma$ in
$\textnormal{Aut}(\mathcal{M}_{1}/P)$ fixing $b_1,b_2$ and $b_3$ must permute
$E$.
Recall that
$F_\sigma$ is the derivation on $\mathbb{C}$ induced by the
automorphism $\sigma$.
Since $F_{\sigma}(\beta_i)=0$, we deduce from equations (1) and
(2)
that $\varepsilon\cdot
F_{\sigma}(\alpha_1)+F_{\sigma}(\alpha_2)=0$. Hence, the
automorphism $\sigma$ permutes the set $E$.
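Explicitly, applying the derivation $F_{\sigma}$ to equation (1) and using $F_{\sigma}(\beta_i)=0$ yields
\[ 0=\big(\beta_1+\beta_2\alpha_1+\beta_3\alpha_{1}^{2}\big)F_{\sigma}(\alpha_1)+F_{\sigma}(\alpha_2)=\varepsilon\cdot F_{\sigma}(\alpha_1)+F_{\sigma}(\alpha_2), \]
where the last equality is equation (2).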
\end{claimproof}
\begin{claim} The intersection
$\textnormal{acl}^{\textnormal{eq}}(\ulcorner
E\urcorner)\cap\textnormal{acl}^{\textnormal{eq}}(\bar\beta)=\textnormal{acl}^{\textnormal{eq}}(\emptyset)$.
\end{claim}
\begin{claimproof}
Because of the independence $(\blacklozenge)$, we need only show
that
\[\textnormal{acl}^{\textnormal{eq}}(\bar\beta)\cap\textnormal{acl}^{\textnormal{eq}}
(\bar\alpha,\varepsilon)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
Choose tuples $\bar{\beta}',\bar{\alpha}',\varepsilon',
\bar{\beta}'',\bar{\alpha}'',\varepsilon'',
\bar{\beta}'''$ such that
\[
\bar{\beta},\bar{\alpha},\varepsilon \equiv
\bar{\beta}',\bar{\alpha},\varepsilon \equiv
\bar{\beta}',\bar{\alpha}',\varepsilon' \equiv
\bar{\beta}'',\bar{\alpha}',\varepsilon' \equiv
\bar{\beta}'',\bar{\alpha}'',\varepsilon'' \equiv
\bar{\beta}''',\bar{\alpha}'',\varepsilon''
\]
with
\begin{align*}
\bar{\beta}' \mathop{\mathpalette\Ind{}}_{\bar{\alpha},\varepsilon} \bar{\beta} \qquad
\bar{\alpha}',\varepsilon' \mathop{\mathpalette\Ind{}}_{\bar{\beta}'}
\bar{\beta},\bar{\alpha},\varepsilon \qquad
\bar{\beta}'' \mathop{\mathpalette\Ind{}}_{\bar{\alpha}',\varepsilon'}
\bar{\beta},\bar{\alpha},\varepsilon,\bar{\beta}' \qquad
\bar{\alpha}'',\varepsilon'' \mathop{\mathpalette\Ind{}}_{\bar{\beta}''}
\bar{\beta},\bar{\alpha},\varepsilon,\bar{\beta}',\bar{\alpha}',\varepsilon'
\end{align*} and
\begin{align*}
\bar{\beta}''' \mathop{\mathpalette\Ind{}}_{\bar{\alpha}'',\varepsilon''}
\bar{\beta},\bar{\alpha},\varepsilon,\bar{\beta}',\bar{\alpha}',\varepsilon',
\bar{\beta}''.
\end{align*}
Since
\[ \textnormal{acl}^{\textnormal{eq}}(\bar{\beta})\cap\textnormal{acl}^{\textnormal{eq}}(\bar{\alpha},\varepsilon)\subset
\textnormal{acl}^{\textnormal{eq}}(\bar{\beta})\cap\textnormal{acl}^{\textnormal{eq}}(\bar{\beta}'''), \]
we need only show the independence $\bar{\beta} \mathop{\mathpalette\Ind{}} \bar{
\beta''' }
$. Note first that the whole configuration has Morley rank 9:
\[
\textnormal{RM}(\bar{\beta},\bar{\alpha},\varepsilon,
\bar{\beta}',\bar{\alpha}',\varepsilon',
\bar{\beta}'',\bar{\alpha}'',\varepsilon'',
\bar{\beta}''') =
\textnormal{RM}(\beta_1,\alpha_1,\alpha_2,\varepsilon,
\beta_1',\alpha_1',\beta_1'',\alpha_1'',\beta_1''') =9.
\]
Since \begin{multline*}
\textnormal{RM}(\bar{\beta}''',\bar{\beta},\alpha_1,\alpha_{1}',\alpha_{1}'')=
\\ \textnormal{RM}(
\bar{\beta}'''/\bar{\beta},\alpha_1,\alpha_{1}',\alpha_{1}'') +
\textnormal{RM}(\alpha_{1}'' / \bar{\beta},\alpha_1,\alpha_{1}' ) +
\textnormal{RM}( \alpha_{1}' / \bar{\beta},\alpha_1 ) + \\ +
\textnormal{RM}(\alpha_1 /\bar{\beta}) + \textnormal{RM}(\bar{\beta}) =
\textnormal{RM}(
\bar{\beta}'''/\bar{\beta},\alpha_1,\alpha_{1}',\alpha_{1}'')
+ 6,
\end{multline*}
it suffices to show that
$\alpha_{2},\varepsilon,\bar{\beta}',\alpha_{2}',\varepsilon',
\bar{\beta}'',\alpha_{2}''$ and $\varepsilon''$ are all algebraic
over the tuple
$(\bar{\beta}''',\bar{\beta},\alpha_1,\alpha_{1}',\alpha_{1}'')$.
Clearly
$\alpha_2,\varepsilon,\alpha_{2}''$ and $\varepsilon''$ are
algebraic over
$\bar{\beta}''',\bar{\beta},\alpha_1,\alpha_{1}''$. Furthermore
we
have the following system of linear equations:
\begin{align*}
\begin{pmatrix}
6\alpha_1 & 3\alpha_{1}^2 & 2\alpha_{1}^3 & 0 & 0 & 0 & 0 & 0 \\
1 & \alpha_1 & \alpha_{1}^2 & 0 & 0 & 0 & 0 & 0 \\
6\alpha_{1}' & 3\alpha_{1}'^2 & 2\alpha_{1}'^3 & 6 & 0 & 0 & 0 &
0 \\
1 & \alpha_{1}' & \alpha_{1}'^2 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 6 & 0 & 6\alpha_{1}' & 3\alpha_{1}'^2 &
2\alpha_{1}'^3 \\
0 & 0 & 0 & 0 & 1 & 1 & \alpha_{1}' & \alpha_{1}'^2 \\
0 & 0 & 0 & 0 & 0 & 6\alpha_{1}'' & 3\alpha_{1}''^2 &
2\alpha_{1}''^3 \\
0 & 0 & 0 & 0 & 0 & 1 & \alpha_{1}'' & \alpha_{1}''^2 \\
\end{pmatrix}
\begin{pmatrix}
\beta_{1}' \\ \beta_{2}' \\ \beta_{3}' \\ \alpha_{2}' \\
\varepsilon' \\
\beta_{1}'' \\ \beta_{2}'' \\ \beta_{3}''
\end{pmatrix}
=
\begin{pmatrix}
-6\alpha_2 \\
\varepsilon \\
0 \\
0 \\
0 \\
0 \\
-6\alpha_{2}'' \\
\varepsilon'' \\
\end{pmatrix}
\end{align*}
Thus, we need only show that the above matrix has non-zero
determinant
\begin{align*}
&6
\begin{vmatrix}
6\alpha_1 & 3\alpha_{1}^2 & 2\alpha_{1}^3 \\
1 & \alpha_1 & \alpha_{1}^2 \\
1 & \alpha_{1}' & \alpha_{1}'^2
\end{vmatrix}
\begin{vmatrix}
6\alpha_{1}' & 3\alpha_{1}'^2 & 2\alpha_{1}'^3 \\
6\alpha_{1}'' & 3\alpha_{1}''^2 & 2\alpha_{1}''^3 \\
1 & \alpha_{1}'' & \alpha_{1}''^2
\end{vmatrix}
-6
\begin{vmatrix}
6\alpha_1 & 3\alpha_{1}^2 & 2\alpha_{1}^3 \\
1 & \alpha_1 & \alpha_{1}^2 \\
6\alpha_{1}' & 3\alpha_{1}'^2 & 2\alpha_{1}'^3
\end{vmatrix}
\begin{vmatrix}
1 & \alpha_{1}' & \alpha_{1}'^2 \\
6\alpha_{1}'' & 3\alpha_{1}''^2 & 2\alpha_{1}''^3 \\
1 & \alpha_{1}'' & \alpha_{1}''^2
\end{vmatrix} \\
&=72 \alpha_{1}^2 \alpha_{1}'^2 \alpha_{1}''^2
(\alpha_{1}-\alpha_{1}')(\alpha_{1}-\alpha_{1}'')(\alpha_{1}''-\alpha_{1}')\neq
0.
\end{align*}
~\end{claimproof}
If $\mathcal M_1$ had preservation of internality on
intersections, then
the type
\[\textnormal{stp}(\ulcorner E\urcorner /
\textnormal{acl}^{\textnormal{eq}}(\ulcorner E\urcorner)\cap\textnormal{acl}^{\textnormal{eq}}(\beta_1,\beta_2,\beta_3))\]
would be almost $P$-internal, by Claim 1, and so would be
$\textnormal{stp}(\ulcorner E\urcorner)$, by the previous claim, which yields a
contradiction, exactly as in the proof of Proposition \ref{P: M1PropB}.
~\end{proof}
Recall that an additive cover preserves internality on
intersections, resp. on quotients, if and only if every almost
$P$-internal
type is good, resp. special, by Propositions \ref{P:propA} and
\ref{P:propB}. For real types, the property of being special follows
directly from almost internality.
\begin{remark}\label{R:realpart}
Almost $P$-internal
real types are special in every additive cover.
\end{remark}
\begin{proof}
We may assume that the sort $S$ is not almost $P$-internal. By a
straightforward forking calculation (cf. \cite[Theorem 2.5]{zC12} or
Proposition \ref{P:propB}), it suffices to show that, whenever the
real type $\textnormal{stp}(a/B)$ is almost $P$-internal, with $a$ a single
element in $S$, then $\alpha=\pi(a)$ is algebraic over $B$.
Choose a set of parameters $B_1$
with $B_1 \mathop{\mathpalette\Ind{}}_{B} a$ and $a$ algebraic over $B_1,P$. We need only
show that $\alpha$ is algebraic over $B_1$. Otherwise, choose an
element $a_1$ of $S$ in the fiber of $\alpha$ generic over $B_1$. The
elements $a$ and $a_1$ are interdefinable over $P$, so $a_1$ is
algebraic over $B_1, P$, contradicting that $S$ is not almost
$P$-internal.
\end{proof}
Propositions
\ref{P: M1PropA} and \ref{P: M1PropB} and the above remark give a
negative answer to Question~\ref{Q:2}.
\begin{corollary}\label{C:AB_imag}
There is a stable theory of finite Morley rank, where every stationary
real almost
$\mathbb{P}$-internal type is special, yet internality on intersections is not
preserved.
\end{corollary}
We conclude this work by relating the failure of the CBP to the
elimination of finite imaginaries, always in the context of additive
covers. For this, we need the following easy remark, which follows
immediately from
\cite[Remark 1.1 (2)]{zC12}.
\begin{remark}\label{R:intersec} Given tuples $a$ and $b$ in an ambient
model of an $\omega$-stable theory
such that $\textnormal{RM}(a)-\textnormal{RM}(a/b)=1$ and $b=\textnormal{Cb}(a/b)$, the intersection
\[\textnormal{acl}^{\textnormal{eq}}(a)\cap
\textnormal{acl}^{\textnormal{eq}}(b)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
\end{remark}
\begin{theorem}\label{T:finImagCBP}
Suppose that the sort $S$ in the additive cover $\mathcal{M}$ is not
almost $P$-internal. If $\mathcal{M}$ eliminates finite imaginaries,
then it cannot preserve internality on quotients, so in particular
the CBP does not hold.
\end{theorem}
In a forthcoming work, we will explore whether the converse holds. We
believe that similar techniques show that an additive cover as above
cannot even preserve internality on intersections, but we have not yet
pursued this problem thoroughly.
\begin{proof}
We assume that the additive cover $\mathcal{M}$ eliminates finite
imaginaries and that the sort $S$ is not almost $P$-internal. In order
to show the failure of preserving internality on quotients, we will
find a configuration similar
to $(a\otimes c)\oplus b=d$, in the spirit of Martin's work \cite{gM88}
on
recovering
multiplication.
Choose two generic independent elements
$a_0=(\alpha_0,0)$ and
$a_1=(\alpha_1,0)$ in $S$. The
real canonical parameter of the finite set
$\{ a_0,a_1 \}$ is not definable over $a_0 \oplus a_1, P$: Indeed,
since $S$
is not almost $P$-internal, there is an automorphism $\sigma$ in
$\textnormal{Aut}(\mathcal{M}/P)$ with $\sigma(a_0)=1\star a_0$ and
$\sigma(a_1)=(-1)\star
a_1$, so $\sigma(a_0 \oplus a_1)=a_0\oplus a_1$, but $\sigma$ does not
permute
$\{ a_0,a_1 \}$. Choose now some coordinate $e$ of the real canonical
parameter which
is not definable over $a_0\oplus a_1,P$.
Note that $\varepsilon=\pi(e)$ is definable over $\alpha_0,\alpha_1$,
by
Remark \ref{R:algFib}. Therefore $\varepsilon=r(\alpha_0,\alpha_1)$
for
some symmetric
rational function $r(X,Y)$ over $\mathbb{Q}$. Let $\rho(x,y,z)$ be a
formula such that $e$ is the unique element realizing
$\rho(a_0,a_1,z)$.
We now proceed according to whether $r(\alpha_0,Y)$ is a
polynomial
map. Assume first that the map $r_{\alpha_0}(Y)=r(\alpha_0,Y)$ is not
polynomial.
As in the proof of \cite[Lemma 3.2]{gM88}, there are natural numbers
$n_1,\ldots,n_k$
such that the
degree of the numerator $P_{\alpha_0}(Y)$ of the rational function
\[
\sum_{j=0}^{k}(-1)^{k-j}
\sum_{1\leq i_1<\dots<i_j\leq k}
r_{\alpha_0}\big(Y+n_{i_1}+\cdots+n_{i_j}\big)
\]
is strictly smaller than the degree of its denominator
$Q_{\alpha_0}(Y)$.
For $1\leq i_1<\dots<i_j\leq k$, the formula $\rho(a_0,a_1\oplus
(n_{i_1}+\cdots+n_{i_j},0),z)$
has a unique realization $e_{i_1,\dots,i_j}$, since
\[\alpha_1 \equiv_{\alpha_0} \alpha_1+n_{i_1}+\cdots+n_{i_j},\]
so by Remark \ref{R:algFib}
\[a_1 \equiv_{a_0} a_1\oplus(n_{i_1}+\cdots+n_{i_j},0). \]
Set now
\[e_j=\sum_{1\leq i_1<\dots<i_j\leq k} e_{i_1,\dots,i_j}\]
and \begin{multline*}
\psi(x, y,z)=
\exists \bar{z} \Big(
\bigwedge_{j=0}^{k} \ \
\bigwedge_{1\leq i_1<\dots<i_j\leq k}
\rho(x,y\oplus(n_{i_1}+\cdots+n_{i_j},0),z_{i_1,\dots,i_j}) \ \
\land \\
z=
\sum_{j=0}^{k}(-1)^{k-j}
\sum_{1\leq i_1<\dots<i_j\leq k}
z_{i_1,\dots,i_j}
\Big).
\end{multline*}
Note that the element
\[ \sum_{j=0}^{k}(-1)^{k-j} e_j\]
is the unique realization of $\psi(a_0,a_1,z)$ and its projection to
$P$ is
\[ \frac{P_{\alpha_0}(\alpha_1)}{Q_{\alpha_0}(\alpha_1)}.\]
By Remark \ref{R:algFib}, every element in the fiber of $\alpha_0$ has
the same type as $a_0$ over $a_1,P$, so the formula
\[ \forall u \Big( \pi(u)=x \rightarrow \Big(
\exists ! z \, \psi(u, a_1,z) \land \forall w \Big( \psi(u, a_1,w)
\rightarrow \pi(w)= \frac{P_{x}(\alpha_1)}{Q_{x}(\alpha_1)}
\Big)\Big)\Big)\]
\noindent belongs to the generic type $\textnormal{tp}(\alpha_0/a_1)$ in $P$.
Therefore, there exists an algebraic number $\xi$ realizing it such that
$\deg(Q_\xi(Y))>\deg(P_\xi(Y))$. Write now $\varphi(y,z)=\psi( (\xi, 0),
y,z)$ and choose generic independent elements $a,b$ and $c$ in $S$ with
projections
\[\pi(a)=\alpha,\pi(b)=\beta \ \ \textnormal{ and } \ \ \pi(c)=\gamma.
\]
The formula $\varphi$ will play the role of the multiplication
$\otimes$, so let
$d=(\delta,d')$ be the unique element such that
\[ \mathcal{M}\models \exists z \big(\varphi(a\oplus c,z)\land z\oplus
b=d\big). \]
\begin{claim}
The intersection
$\textnormal{acl}^{\textnormal{eq}}(\alpha,\beta)\cap\textnormal{acl}^{\textnormal{eq}}(\gamma,\delta)=\textnormal{acl}^{\textnormal{eq}}(\emptyset)$.
\end{claim}
\begin{claimproof}
Since
$\textnormal{RM}(\alpha,\beta)-\textnormal{RM}(\alpha,\beta/\gamma,\delta)=2-1=1$, it
suffices to show by Remark \ref{R:intersec} that
$\textnormal{Cb}(\alpha,\beta/\gamma,\delta)$ is interdefinable with
$(\gamma,\delta)$.
Choose elements $\alpha'$ and $\beta'$ such that
\[\alpha',\beta' \equiv_{\gamma,\delta} \alpha,\beta \ \
\textnormal{
and } \ \ \alpha',\beta' \mathop{\mathpalette\Ind{}}_{\gamma,\delta} \alpha,\beta \
,\]
so
\[
\frac{P_\xi(\alpha+\gamma)}{Q_\xi(\alpha+\gamma)}+\beta=\delta=
\frac{P_\xi(\alpha'+\gamma)}{Q_\xi(\alpha'+\gamma)}+\beta'.
\]
Therefore
\[
P_\xi(\alpha+\gamma)Q_\xi(\alpha'+\gamma)-P_\xi(\alpha'+\gamma)Q_\xi(\alpha+\gamma)+
(\beta-\beta')Q_\xi(\alpha+\gamma)Q_\xi(\alpha'+\gamma)=0.
\]
Since
\[\deg(Q_\xi(Y))>\deg(P_\xi(Y)),\]
we need only show $\beta\neq\beta'$, for then
$\gamma$ is algebraic over $\alpha,\beta,\alpha',\beta'$ and
hence so is $\delta$, as desired.
We assume for a contradiction that
$\beta=\beta'$. Hence $\beta$ is algebraic over $\gamma,\delta$,
so the equation
\[ P_\xi(\alpha+\gamma)=(\delta-\beta)Q_\xi(\alpha+\gamma)\]
yields that $\alpha$ is also algebraic over $\gamma,\delta$, which
is a blatant
contradiction.
\end{claimproof}
As in Proposition \ref{P: M1PropB}, with the definable set
\begin{align*}
E &= \big\{ (x,y)\in S^2 \ |\ \pi(x)=\alpha \ \& \ \pi(y)=\beta
\ \& \
\exists z \big(\varphi(x\oplus c,z)\land z\oplus y=d\big) \big\}
\end{align*}
we can easily prove that
the types
\[
\textnormal{stp}(\ulcorner E\urcorner/\gamma,\delta) \ \ \textnormal{ and } \ \
\textnormal{stp}(\ulcorner E\urcorner/\alpha,\beta)
\]
are $P$-internal, since $(\xi,0)$ is internal over $\textnormal{acl}(\emptyset)$.
We assume now that $\mathcal M$ preserves internality on
quotients in order to reach a contradiction. By
the above claim,
the type $\textnormal{stp}(\ulcorner E\urcorner)$
is almost $P$-internal.
Therefore, there is a set $C$ of parameters with
$C \mathop{\mathpalette\Ind{}} \ulcorner E\urcorner,a,b$ such that the canonical parameter
$\ulcorner E\urcorner$ is algebraic over $C,P$.
Note that in
particular
\[C,a \mathop{\mathpalette\Ind{}} b.\]
Since the sort $S$ is not almost $P$-internal, there is an
automorphism
$\sigma$ in $\textnormal{Aut}(\mathcal{M}/P)$ fixing $C$ and $a$, yet
$\sigma(b)\neq b$.
The orbit of $\ulcorner E\urcorner$ under $\sigma$ is hence infinite,
which gives the desired contradiction.
The remaining case is that the rational function
$r(\alpha_0,Y)$ is
polynomial. For a natural number $m$, write $r(X,mX+Y)$ as
\[
r(X,mX+Y)=\sum_{i=0}^{n} \frac{P_{m,i}(X)}{Q_{m,i}(X)}Y^i,
\]
with coprime polynomials $P_{m,i}(X)$ and $Q_{m,i}(X)$ over
$\mathbb{Q}$ with
$P_{m,n}\neq
0$ (for $r$ is not the zero map).
\begin{claim}
There exists a natural number $m$ such that
$\deg(P_{m,i})\neq \deg(Q_{m,i})$
for some $i>0$.
\end{claim}
\begin{claimproof}
Note that $n>0$ because $r(X,Y)$ is symmetric and non-constant.
We may assume that $\deg(P_{0,i})= \deg(Q_{0,i})$ for all
$i>0$, since otherwise we are done.
If $n>1$, then
\begin{align*}
\frac{P_{1,n-1}(X)}{Q_{1,n-1}(X)}&=
\frac{P_{0,n}(X)}{Q_{0,n}(X)} X +
\frac{P_{0,n-1}(X)}{Q_{0,n-1}(X)}\\
&=\frac{P_{0,n}(X)Q_{0,n-1}(X)X + P_{0,n-1}(X)Q_{0,n}(X)
}{Q_{0,n}(X) Q_{0,n-1}(X)}
\end{align*}
implies
\[
\deg(P_{1,n-1})= \deg(Q_{1,n-1}) + 1,
\]
so the claim follows. Thus, we are left with the case
$n=1$, where
\begin{align*}
r(X,Y)=\frac{P_{0,1}(X)}{Q_{0,1}(X)} Y +
\frac{P_{0,0}(X)}{Q_{0,0}(X)}
= \frac{P_{0,1}(X)Q_{0,0}(X)Y + P_{0,0}(X)Q_{0,1}(X)
}{Q_{0,1}(X) Q_{0,0}(X)}.
\end{align*}
The map
\begin{align*}
r(\alpha_0,Y)=r(Y,\alpha_0)=\frac{P_{0,1}(Y)Q_{0,0}(Y)\alpha_0 +
P_{0,0}(Y)Q_{0,1}(Y)
}{Q_{0,1}(Y) Q_{0,0}(Y)}
\end{align*}
is polynomial and since $\alpha_0 \equiv \alpha_0 + 1$, so is
the map
\begin{align*}
\frac{P_{0,1}(Y)Q_{0,0}(Y)(\alpha_0+1) +
P_{0,0}(Y)Q_{0,1}(Y)
}{Q_{0,1}(Y) Q_{0,0}(Y)}.
\end{align*}
Since $P_{0,1}$ and $Q_{0,1}$ as well as $P_{0,0}$ and $Q_{0,0}$
are coprime,
it follows that $Q_{0,0}=\lambda Q_{0,1}$ for some
rational number $\lambda\neq 0$. We deduce that both
\[
\frac{\lambda \alpha_0 P_{0,1}(X) + P_{0,0}(X)}{\lambda
Q_{0,1}(X)}
\]
and
\[
\frac{\lambda (\alpha_0 +1) P_{0,1}(X) + P_{0,0}(X)}{\lambda
Q_{0,1}(X)}
\]
are polynomials. Hence, every root $\zeta$ of $Q_{0,1}$
is a root of
\[
\lambda \alpha_0 P_{0,1} + P_{0,0}
\ \ \textnormal { and of } \ \
\lambda (\alpha_0 +1) P_{0,1} + P_{0,0}
\]
and therefore $P_{0,1}(\zeta)=0$.
This implies that $Q_{0,1}$ is constant, since
$P_{0,1}$ and $Q_{0,1}$
are coprime.
It follows that $P_{0,1}$ cannot be constant, since otherwise
the symmetric function $r(X,Y)$ would equal to
$q_1\cdot(X+Y)+q_0$ for
some rational numbers $q_1$ and $q_0$, which yields that
the element $e$ would be definable over $a_0\oplus a_1, P$, a
contradiction.
\end{claimproof}
Fix now a natural number $m$ as in the previous claim and choose as
before generic independent elements $a$, $b$ and $c$ in $S$ with
projections
\[\pi(a)=\alpha,\pi(b)=\beta \ \ \textnormal{ and } \ \ \pi(c)=\gamma.
\]
Let
$d=(\delta,d')$ be the unique element such that
\[ \mathcal{M}\models \exists z \big(\rho(a,(m\cdot a) \oplus
c,z)\land z\oplus
b=d\big). \]
Considering the set
\begin{align*}
\big\{ (x,y)\in S^2 \ |\ \pi(x)=\alpha \ \& \ \pi(y)=\beta \ \&
\
\exists z \big(\rho(x,(m\cdot x) \oplus c,z)\land z\oplus y=d\big)
\big\},
\end{align*} we need only show as before that
\[\textnormal{acl}^{\textnormal{eq}}(\alpha,\beta)\cap\textnormal{acl}^{\textnormal{eq}}(\gamma,\delta)=\textnormal{acl}^{\textnormal{eq}}(\emptyset).\]
The strategy is the same as in the proof of Claim 1. Choose
elements $\alpha'$ and $\beta'$ such that
\[\alpha',\beta' \equiv_{\gamma,\delta} \alpha,\beta \ \
\textnormal{
and } \ \ \alpha',\beta' \mathop{\mathpalette\Ind{}}_{\gamma,\delta} \alpha,\beta \
.\]
Note that
\[
r(\alpha,m\cdot\alpha+\gamma)+\beta=\delta=r(\alpha',m\cdot\alpha'+\gamma)+\beta'
,
\]
so
\[
r(\alpha,m\cdot\alpha+\gamma)-r(\alpha',m\cdot\alpha'+\gamma)+\beta-\beta'=0.
\]
Now Claim 2 implies that $\gamma$ is algebraic over
$\alpha,\beta,\alpha',\beta'$, since $\alpha \mathop{\mathpalette\Ind{}} \alpha'$ (for
otherwise both $\alpha$ and $\beta$ are algebraic over
$\gamma,\delta$). It
follows that
$\delta$ is also algebraic over
$\alpha,\beta,\alpha',\beta'$, as desired.
~\end{proof}
\end{document}
\begin{document}
\vspace*{0.2in}
\begin{flushleft}
{\Large
\textbf\newline{Hybrid Modeling and Prediction of Dynamical Systems}
}
\newline
\\
Franz Hamilton\textsuperscript{1,2*},
Alun Lloyd \textsuperscript{1,2,3},
Kevin Flores \textsuperscript{1,2,4}
\\
\textbf{1} Department of Mathematics, North Carolina State University, Raleigh, NC, USA
\\
\textbf{2} Center for Quantitative Sciences in Biomedicine, North Carolina State University, Raleigh, NC, USA
\\
\textbf{3} Biomathematics Graduate Program, North Carolina State University, Raleigh, NC, USA
\\
\textbf{4} Center for Research in Scientific Computation, North Carolina State University, Raleigh, NC, USA
* [email protected]
\end{flushleft}
\section*{Abstract}
Scientific analysis often relies on the ability to make accurate predictions of a system's dynamics. Mechanistic models, parameterized by a number of unknown parameters, are often used for this purpose. Accurate estimation of the model state and parameters prior to prediction is necessary, but may be complicated by issues such as noisy data and uncertainty in parameters and initial conditions. At the other end of the spectrum exist nonparametric methods, which rely solely on data to build their predictions. While these nonparametric methods do not require a model of the system, their performance is strongly influenced by the amount and noisiness of the data. In this article, we consider a hybrid approach to modeling and prediction which merges recent advancements in nonparametric analysis with standard parametric methods. The general idea is to replace a subset of a mechanistic model's equations with their corresponding nonparametric representations, resulting in a hybrid modeling and prediction scheme. Overall, we find that this hybrid approach allows for more robust parameter estimation and improved short-term prediction in situations where there is a large uncertainty in model parameters. We demonstrate these advantages in the classical Lorenz-63 chaotic system and in networks of Hindmarsh-Rose neurons before application to experimentally collected structured population data.
\section*{Author Summary}
The question of how best to predict the evolution of a dynamical system has received substantial interest in the scientific community. While traditional mechanistic modeling approaches have dominated, data-driven approaches which rely on data to build predictive models have gained increasing popularity. The reality is, both approaches have their drawbacks and limitations. In this article we ask the question of whether or not a hybrid approach to prediction, which combines characteristics of both mechanistic modeling and data-driven modeling, can offer improvements over the standalone methodologies. We analyze the performance of these methods in two model systems and then evaluate them on experimentally collected population data.
\section*{Introduction}
Parametric modeling involves defining an underlying set of mechanistic equations which describe a system's dynamics. These mechanistic models often contain a number of unknown parameters as well as an uncertain state, both of which need to be quantified prior to use of the model for prediction. The success of parametric prediction is tied closely to the ability to construct accurate estimates of the model parameters and state. This can be particularly challenging in high dimensional estimation problems as well as in chaotic systems \cite{voss,baake}. Additionally, there is often a degree of model error, or a discrepancy between the structure of the model and that of the system, further complicating the estimation process and hindering prediction accuracy.
Despite these potential issues, mechanistic models are frequently utilized in data analysis. The question we aim to address is when is it advantageous to use them? Under suitable conditions where model error is relatively small and parameters can be reliably estimated, parametric predictions can provide a great deal of accuracy. However, as we will see in the subsequent examples, a large uncertainty in the initial parameter values often leads to inaccurate estimates resulting in poor model-based predictions.
An alternative approach to modeling and prediction abandons the use of any mechanistic equations, instead relying on predictive models built from data. These nonparametric methods have received considerable attention, in particular those methods based on Takens' delay-coordinate method for attractor reconstruction \cite{farmer,casdagli1989nonlinear,Sugihara:1990aa,smith1992identification,jimenez1992forecasting,sauer94,sugihara1994nonlinear,schroer1998predicting,kugiumtzis1998regularized,yuan,hsieh2005distinguishing,strelioff2006medium,regonda,schelter2006handbook,hamilton2016}. The success of nonparametric methods is strongly influenced by the amount of data available as well as the dimension of the dynamical system. If only a sparse amount of training data is available, the result is often inaccurate predictions due to the lack of suitable nearby neighbors in delay-coordinate space. Furthermore, as the dimension and complexity of the dynamical system increases, nonparametric prediction becomes significantly more difficult due to the necessary data requirements \cite{hamilton2016}.
Several recent works have investigated the situation where only a portion of a mechanistic model is known \cite{hamilton2,berry2016}. Our motivation here though is to explore how best to use a full mechanistic model when it is available. We consider a hybrid methodology to modeling and prediction that combines the complementary features of both parametric and nonparametric methods. In our proposed hybrid method, a subset of a mechanistic model's equations are replaced by nonparametric evolution. These nonparametrically advanced variables are then incorporated into the remaining mechanistic equations during the data fitting and prediction process. The result of this approach is a more robust estimation of model parameters as well as an improvement in short-term prediction accuracy when initial parameter uncertainty is large.
The utility of this method is demonstrated in several example systems. The assumption throughout is that noisy training data from a system are available as well as a mechanistic model that describes the underlying dynamics. However, several of the model parameters are unknown and the model state is uncertain due to the noisy measurements. The goal is to make accurate predictions of the system state up to some forecast horizon beyond the end of the training data. We compare the prediction accuracy of the standard parametric and nonparametric methodologies with the novel hybrid method presented here.
We begin our analysis by examining prediction in the classical Lorenz-63 system \cite{lorenz63}, which exhibits chaotic dynamics. Motivated by the success of the hybrid method in the Lorenz-63 system, we consider a more sophisticated example of predicting the spiking dynamics of a neuron in a network of Hindmarsh-Rose \cite{hindmarsh} cells. Finally, we examine the prediction problem in a well-known experimental dataset from beetle population dynamics \cite{constantino}.
\section*{Materials and Methods}
The assumption throughout is that a set of noisy data is available over the time interval $\left[t(0),t(T)\right]$. This is referred to as the {\it training data} of the system. Using these training data, the question is how best to predict the system dynamics over the interval $\left[t(T+1),t(T+T_F)\right]$, known as the {\it prediction interval}. Standard parametric and nonparametric methods are presented before our discussion of the novel hybrid method which blends the two approaches.
\subsection*{Parametric Modeling and Prediction}
When a full set of mechanistic equations is used for modeling and prediction, we refer to this as the parametric approach. Assume a general nonlinear system of the form
\begin{eqnarray} \label{e1}
\mathbf{x}(k+1) &=& \mathbf{f}\left(t(k),\mathbf{x}(k),\mathbf{p}\right)+\mathbf{w}(k)\\
\mathbf{y}(k) &=& \mathbf{h}\left(t(k),\mathbf{x}(k),\mathbf{p}\right)+\mathbf{v}(k)\nonumber
\end{eqnarray}
where $\textbf{x}= \left[x_1,x_2,\hdots,x_n\right]^{\mathsmaller T}$ is an $n$-dimensional vector of model state variables and $\textbf{p} = \left[p_1,p_2,\hdots,p_l\right]^{\mathsmaller T}$ is an $l$-dimensional vector of model parameters which may be known from first principles, partially known or completely unknown. $\textbf{f}$ represents our system dynamics which describe the evolution of the state $\mathbf{x}$ over time and \textbf{h} is an observation function which maps \textbf{x} to an $m$-dimensional vector of model observations, $\textbf{y} = \left[y_1,y_2,\hdots,y_m\right]^{\mathsmaller T}$. To simplify the description of our analysis, we assume that the training data maps directly to some subset of $\mathbf{x}$. $\textbf{w}(k)$ and $\textbf{v}(k)$ are assumed to be mean $\mathbf{0}$ Gaussian noise terms with covariances $\mathbf{Q}$ and $\mathbf{R}$ respectively. While discrete notation is used in Eq. \ref{e1} for notational convenience, the evolution of \textbf{x} is often described by continuous-time systems. In this situation numerical solvers, such as Runge-Kutta or Adams-Moulton methods, are used to obtain solutions to the continuous-time system at discrete time points.
When the state of a system is uncertain due to noisy or incomplete observations, nonlinear Kalman filtering can be used for state estimation \cite{voss,enkf7,evensen,rabier,cummings,yoshida,stuart,schiffbook,berry2,hamiltonEPL,hamiltonPRE,ghanim,ghanim2,sitz2002}. Here we choose the unscented Kalman filter (UKF), which approximates the propagation of the mean and covariance of a random variable through a nonlinear function using a deterministic ensemble selected through the unscented transformation \cite{simon,julier1,julier2}. We initialize the filter with state vector $\mathbf{x^{+}}(0)$ and covariance matrix $\mathbf{P^{+}}(0)$. At the $k$th step of the filter there is an estimate of the state $\mathbf{x^{+}}(k-1)$ and the covariance matrix $\mathbf{P^{+}}(k-1)$. In the UKF, the singular value decomposition is used to find the square root of the matrix $\mathbf{P^{+}}(k-1)$, which is used to form an ensemble of $2n+1$ state vectors.
The model $\mathbf{f}$ is applied to the ensemble, advancing it forward one time step, and then observed with $\mathbf{h}$. The weighted average of the resulting state ensemble gives the prior state estimate $\mathbf{x^{-}}(k)$ and the weighted average of the observed ensemble is the model-predicted observation $\mathbf{y}^{-}(k)$. Covariance matrices $\mathbf{P^{-}}(k)$ and $\mathbf{P^y}(k)$ of the resulting state and observed ensemble, and the cross-covariance matrix $\mathbf{P^{xy}}(k)$ between the state and observed ensembles, are formed and the equations
\begin{eqnarray} \label{e3}
\mathbf{K}(k) &=& \mathbf{P^{xy}}(k)\left(\mathbf{P^{y}}(k)\right)^{-1}\nonumber\\
\mathbf{P^{+}}(k) &=& \mathbf{P^{-}}(k)-\mathbf{P}^{xy}(k)\left(\mathbf{P}^{y}(k)\right)^{-1}\mathbf{P}^{yx}(k)\nonumber\\
\mathbf{x}^{+}(k) &=& \mathbf{x}^{-}(k)+\mathbf{K}(k)\left(\mathbf{y}(k)-\mathbf{y}^{-}(k) \right).
\end{eqnarray}
are used to update the state and covariance estimates with the observation $\mathbf{y}(k)$.
The UKF algorithm described above can be extended to include the \emph{joint estimation} problem allowing for parameter estimation. In this framework, the parameters $\mathbf{p}$ are considered as auxiliary state variables with trivial dynamics, namely $\mathbf{p}_{k+1} = \mathbf{p}_k$. An augmented $n+l$ dimensional state vector can then be formed consisting of the original $n$ state variables and $l$ model parameters allowing for simultaneous state and parameter estimation \cite{voss,sitz2002}.
To implement parametric prediction, the UKF is used to process the training data and obtain an estimate of $\mathbf{p}$, as well as the state at the end of the training set, $\mathbf{x}(T)$. The parameter values are fixed and Eq. \ref{e1} is forward solved from $t(T)$ to generate predictions of the system dynamics over the prediction interval $\left[t(T+1),t(T+T_F)\right]$. Namely, predictions $\textbf{x}(T+1),\textbf{x}(T+2),\hdots,\textbf{x}(T+T_F)$ are calculated.
\subsection*{Takens' Method for Nonparametric Prediction}
Instead of using the mechanistic model described by Eq. \ref{e1}, the system can be represented nonparametrically. Without loss of generality consider the observed variable $x_{j}$. Using Takens' theorem \cite{takens,SYC}, the $d+1$ dimensional delay coordinate vector $x_j^d(T) = \left[x_j(T),x_j(T-\tau),x_j(T-2\tau),\hdots x_j(T-d\tau)\right]$ is formed which represents the state of the system at time $t(T)$. Here $d$ is the number of delays and $\tau$ is the time-delay.
The goal of nonparametric prediction is to utilize the training data in the interval $\left[t(0),t(T) \right]$ to build local models for predicting the dynamics over the interval $\left[t\left(T+1\right),t\left(T+T_F\right)\right]$. Here, the method of {\it direct prediction} is chosen. Prior to implementation of the direct prediction, a library of delay vectors is formed from the training data of $x_j$.
Direct prediction begins by finding the $\kappa$ nearest neighbors, as a function of Euclidean distance, to the current delay-coordinate vector $x_j^d(T)$ within the library of delay vectors. Neighboring delay vectors
\begin{eqnarray*}
x_j^d(T') &=& \left[x_j(T'),x_j(T'-\tau),x_j(T'-2\tau),\hdots x_j(T'-d\tau)\right]\\
x_j^d(T'') &=& \left[x_j(T''),x_j(T''-\tau),x_j(T''-2\tau),\hdots x_j(T''-d\tau)\right]\\
&\vdots&\\
x_j^d(T^\kappa) &=& \left[x_j(T^\kappa),x_j(T^\kappa-\tau),x_j(T^\kappa-2\tau),\hdots x_j(T^\kappa-d\tau)\right]
\end{eqnarray*}
are found within the training data and the known $x_j(T'+i), x_j(T''+i), \ldots, x_j(T^\kappa+i)$ points are used in a local model to predict the unknown value $x_j(T+i)$ where $i = 1,2,\hdots,T_F$. In this article, a locally constant model is chosen
\begin{eqnarray}
\label{localconstant}
x_j(T+i) \approx w_j'x_j(T'+i) + w_j''x_j(T''+i) + \hdots + w_j^{\kappa}x_j(T^\kappa+i)
\end{eqnarray}
where $w_j',w_j'',\hdots,w_j^\kappa$ are the weights for the $j^{th}$ state that determine the contribution of each neighbor in building the prediction. In its simplest form, Eq. \ref{localconstant} is an average of the nearest neighbors where $w_j' = w_j'' = \hdots = w_j^\kappa = \frac{1}{\kappa}$. More sophisticated weighting schemes can be chosen, for example assigning the weights based on the Euclidean distance from each neighbor to the current delay vector \cite{perretti,perretti2,hamilton2}. Selection of values for $d$, $\tau$ and $\kappa$ is necessary for implementation of the direct prediction algorithm. These values were optimized, within each example, to give the lowest prediction error (results not shown).
The accuracy of the predicted $x_j(T+i)$ is subject to several factors. The presence of noise in the training data plays a substantial role in decreasing prediction accuracy. However, recent advancements in nonparametric analysis have addressed the problem of filtering time series without use of a mechanistic model. In \cite{hamilton2016}, a nonparametric filter was developed which merged Kalman filtering theory and Takens' method. The resulting Kalman-Takens filter was demonstrated to be able to reduce significant amounts of noise in data. Application of the method was extended in \cite{hamiltonEPJ} to the case of filtering stochastic variables without a model. In the results presented below, the training data used for nonparametric prediction are filtered first using the method of \cite{hamilton2016,hamiltonEPJ}.
\subsection*{Hybrid Modeling and Prediction: Merging Parametric and Nonparametric Methods}
As an alternative to the parametric and nonparametric methods described above, we propose a hybrid approach which blends the two methods together. In this framework, we assume that a full mechanistic model as described by Eq. \ref{e1} is available. However, rather than using the full model, a subset of the mechanistic equations are used and the remainder of the variables are represented nonparametrically using delay-coordinates.
In formulating this method it is convenient to first think of Eq. \ref{e1} without vector notation
\begin{eqnarray}
\label{e2}
x_1(k+1) &=& f_1\left(t(k),x_1(k),x_2(k),\hdots,x_n(k),p_1,p_2,\hdots,p_l\right)\nonumber\\
x_2(k+1) &=& f_2\left(t(k),x_1(k),x_2(k),\hdots,x_n(k),p_1,p_2,\hdots,p_l\right)\nonumber\\
&\vdots&\\
x_n(k+1) &=& f_n\left(t(k),x_1(k),x_2(k),\hdots,x_n(k),p_1,p_2,\hdots,p_l\right)\nonumber
\end{eqnarray}
Now assume only the first $n-1$ equations of Eq. \ref{e2} are used to model state variables $x_1,x_2,\ldots,x_{n-1}$, while $x_{n}$ is described nonparametrically
\begin{eqnarray}
\label{hybrid}
x_1(k+1) &=& f_1\left(t(k),x_1(k),x_2(k),\hdots,x_{n-1}(k),x_n(k),p_1,p_2,\hdots,p_l\right)\nonumber\\
x_2(k+1) &=& f_2\left(t(k),x_1(k),x_2(k),\hdots,x_{n-1}(k),x_n(k),p_1,p_2,\hdots,p_l\right)\nonumber\\
&\vdots&\\
x_{n-1}(k+1) &=& f_{n-1}\left(t(k),x_1(k),x_2(k),\hdots,x_{n-1}(k),x_n(k),p_1,p_2,\hdots,p_l\right) \nonumber \\
x_{n}(k+1) &\approx& w_{n}'\tilde{x}_n(T'+k+1) + w_{n}''\tilde{x}_n(T''+k+1) + \hdots + w_{n}^{\kappa}\tilde{x}_n(T^\kappa+k+1) \nonumber
\end{eqnarray}
We refer to Eq. \ref{hybrid} as the {\it hybrid model}. Note that in Eq. \ref{hybrid} only $x_n$ is advanced nonparametrically. This is done purely for ease of presentation; the hybrid model can instead contain several variables whose equations are replaced by nonparametric advancement.
The hybrid model has several distinguishing features. Notice that in this framework nonparametrically advanced dynamics are incorporated into mechanistic equations, essentially merging the two lines of mathematical thought. Furthermore, equations for state variables within Eq. \ref{e2} can be replaced only if there are observations which map directly to them; otherwise their dynamics cannot be nonparametrically advanced. Finally, the process of replacing equations in the hybrid method will generally result in a reduction in the number of unknown model parameters to be estimated.
In this hybrid scheme, obtaining an estimate of the unknown parameters in the $n-1$ mechanistic equations and an estimate of $\textbf{x}(T)$ requires a combination of the nonparametric analysis developed in \cite{hamilton2016} and traditional parametric methodology. The state variable $x_{n}$, which is not defined by a mechanistic equation in Eq. \ref{e2}, is represented by delay coordinates within the UKF. Therefore at step $k$ we have the hybrid state
\begin{eqnarray*}
\mathbf{x}^{\mathsmaller H}(k) = \left[x_1(k),x_2(k) ,\ldots, x_{n-1}(k),x_n(k), x_n(k-\tau) , x_n(k-2\tau),\ldots , x_n(k-d\tau)\right]^{\mathsmaller T}
\end{eqnarray*}
The UKF as described above is implemented with this hybrid state $\mathbf{x}^{\mathsmaller H}(k)$ and the model described by Eq. \ref{hybrid}. Notice that in the case of the hybrid model when we have to advance the state dynamics and form the prior estimate in the UKF, the advancement is done parametrically for the first $n-1$ states and nonparametrically for the $n^{th}$ state. Similarly to before, we can augment $\mathbf{x}^{\mathsmaller H}$ with the unknown parameters in the $n-1$ mechanistic equations allowing for simultaneous parameter estimation.
Once the training data are processed and an estimate of $\mathbf{x}^{\mathsmaller H}(T)$ and the parameters are obtained, the hybrid model in Eq. \ref{hybrid} is implemented to generate predictions $\mathbf{x}^{\mathsmaller H}(T+1), \mathbf{x}^{\mathsmaller H}(T+2),\ldots, \mathbf{x}^{\mathsmaller H}(T+T_F)$.
\begin{figure}
\caption{{\bf Example of Lorenz-63 realization.}}
\label{figure1}
\end{figure}
\section*{Results}
We demonstrate the utility of the hybrid methodology, with comparison to standard parametric and nonparametric modeling and prediction, in the following example systems. When conducting this analysis, two types of error are considered. The first, error in the observations, manifests itself as noise in the training data which all three methods will have to confront. The second type, error in the parameters, takes the form of an uncertainty in the initial parameter values used by the UKF for parameter estimation. Only the parametric and hybrid methods will have to deal with this parameter error. Throughout, we will refer to a percentage uncertainty which corresponds to the standard deviation of the distribution from which the initial parameter value is drawn relative to the mean. For example, if the true value for a parameter $p_1$ is 12 and we have 50\% uncertainty in this value, then the initial parameter value used for estimating $p_1$ will be drawn from the distribution $N(12,(0.5*12)^2)$.
To quantify prediction accuracy, the normalized root-mean-square-error, or SRMSE, is calculated for each prediction method as a function of forecast horizon. Normalization is done with respect to the standard deviation of the variable as calculated from the training data. In using the SRMSE metric, the goal is to be more accurate than if the prediction was simply the mean of the training data (corresponding to SRMSE = 1). Thus a prediction is better than a naive prediction when SRMSE $<$ 1, though for chaotic systems prediction accuracy will eventually converge to this error level since only short-term prediction is possible.
\subsection*{Prediction in the Lorenz-63 System}
As a demonstrative example, consider the Lorenz-63 system \cite{lorenz63}
\begin{eqnarray} \label{lorenz}
\dot{x} &=& \sigma(y-x)\nonumber\\
\dot{y} &=& x(\rho-z)-y\\
\dot{z} &=& xy-\beta z \nonumber
\end{eqnarray}
where $\sigma = 10$, $\rho = 28$, $\beta = 8/3$. Data are generated from this system using a fourth-order Adams-Moulton method with sample rate $h = 0.05$. We assume that 500 training data points of the $x$, $y$ and $z$ variables are available, or 25 units of time. The Lorenz-63 system oscillates approximately once every unit of time, meaning the training set consists of about 25 oscillations. The goal is to accurately predict the dynamics of $x$, $y$ and $z$ one time unit after the end of the training set. However, the observations of each variable are corrupted by Gaussian observational noise with mean zero and variance equal to 4. Additionally the true value of parameters $\sigma$, $\rho$ and $\beta$ are unknown. Fig. \ref{figure1} shows an example simulation of this system.
The parametric method utilizes Eq. \ref{lorenz} to estimate the model state and parameters, and to predict the $x$, $y$ and $z$ dynamics. For the nonparametric method, delay coordinates of the variables are formed with $d = 9$ and $\tau = 1$. The local constant model for prediction is built using $\kappa = 20$ nearest neighbors. For the hybrid method, the mechanistic equation governing the dynamics of $y$ is replaced nonparametrically, resulting in the reduced Lorenz-63 model
\begin{eqnarray*}\label{lorenzhybrid}
\dot{x} &=& \sigma(y-x)\\
\dot{z} &=& xy-\beta z
\end{eqnarray*}
Note that the hybrid model does not require estimation of the $\rho$ parameter, since the mechanistic equation for $y$ is removed.
\begin{figure}
\caption{{\bf Comparison of the prediction methods in the Lorenz-63 system.}}
\label{figure2}
\end{figure}
Fig. \ref{figure2} shows a comparison of parametric (black), nonparametric (blue) and hybrid (red) prediction error as a function of forecast horizon. SRMSE results averaged over 500 system realizations. Various parameter uncertainty levels are shown: 80\% uncertainty (solid lines), 50\% uncertainty (dashed-dotted lines) and 20\% uncertainty (dashed line). The hybrid method with 80\% uncertainty offers improved short-term prediction of the Lorenz-63 $x$ (Fig. \ref{figure2}a) and $z$ (Fig. \ref{figure2}c) variables over standalone nonparametric prediction as well as parametric prediction with 80\% uncertainty. Hybrid and nonparametric prediction of $y$ (Fig. \ref{figure2}b) are comparable, which is to be expected since the hybrid approach is using nonparametric advancement of $y$ in its formulation. Note that parametric prediction at this uncertainty level does very poorly and in the cases of $y$ and $z$ its result is not shown due to the scale of the error. As the uncertainty decreases for parametric prediction, its performance improves. However, hybrid prediction with 80\% uncertainty still outperforms parametric prediction with 50\% uncertainty in the short-term. At a small uncertainty level, parametric prediction outperforms both hybrid and nonparametric methods which is to be expected since it has access to the true model equations and starts out with close to optimal parameter values.
The success of the hybrid method at higher uncertainty levels can be traced to more accurate estimates of the model parameters in the mechanistic equations that it uses. Table \ref{table_L63} shows the resulting hybrid and parametric estimation of the Lorenz-63 parameters. The hybrid method with 80\% uncertainty is able to construct accurate estimates of both $\sigma$ and $\beta$, with a mean close to the true value and a small standard deviation of the estimates. The parametric method with 80\% and 50\% uncertainty is unable to obtain reliable estimates, exemplified by the large standard deviation of the estimates. Only when the parametric method has a relatively small uncertainty of 20\% is it able to accurately estimate the system parameters.
\begin{table}[ht]
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
\multicolumn{4}{|c|}{Lorenz-63 Parameter Estimation Results} \\ \hline
True Parameter & Method& Mean&Standard Deviation \\ \hline
\multirow{4}{*}{$\sigma = 10$} & Hybrid (80\% Uncertainty) & 9.77 & 0.75 \\
& Parametric (80\% Uncertainty) & 8.03 & 4.81 \\
& Parametric (50\% Uncertainty) & 9.84 & 3.06 \\
& Parametric (20\% Uncertainty) & 10.06 & 0.95 \\
\cline{1-4}
\multirow{4}{*}{$\rho = 28$} & Hybrid (80\% Uncertainty) & NA & NA\\
& Parametric (80\% Uncertainty) & 24.55 & 14.07 \\
& Parametric (50\% Uncertainty) & 25.63 & 6.37 \\
& Parametric (20\% Uncertainty) & 27.89 & 0.83 \\
\cline{1-4}
\multirow{4}{*}{$\beta = 2.67$} & Hybrid (80\% Uncertainty) & 2.58 & 0.11\\
& Parametric (80\% Uncertainty) & 1.61 & 1.34 \\
& Parametric (50\% Uncertainty) & 2.20 & 0.98 \\
& Parametric (20\% Uncertainty) & 2.63 & 0.19 \\
\cline{1-4}
\end{tabular}
\caption{\textbf{Summary of Lorenz-63 parameter estimation results}. Mean and standard deviation calculated over 500 realizations. The hybrid method, which only needs to estimate $\sigma$ and $\beta$, is robust to a large initial parameter uncertainty. The parametric method on the other hand is unable to obtain reliable estimates of the Lorenz-63 parameters unless the uncertainty is small enough.}
\label{table_L63}
\end{center}
\end{table}
\subsection*{Predicting Neuronal Network Dynamics}
We now consider the difficult high dimensional estimation and prediction problem posed by neuronal network studies. If we are only interested in predicting a portion of the network, then we can use the proposed hybrid method to refine our estimation and prediction while simultaneously reducing estimation complexity. As an example
of this potential network application we consider the prediction of spiking dynamics in a network of $M$ Hindmarsh-Rose neurons \cite{hindmarsh}
\begin{eqnarray}\label{hindmarsh}
\dot{x}_i &=& y_i-a_ix_i^3+b_ix_i^2-z_i+1.2+\sum_{i\neq m}^M \frac{\beta_{im}}{1+9e^{-10x_m}}x_m \nonumber \\
\dot{y}_i &=& 1-c_ix_i^2 \\
\dot{z}_i &=& 5\times 10^{-5}\left[4\left(x_i-\left(-\frac{8}{5}\right)\right)-z_i \right] \nonumber
\end{eqnarray}
where $i = 1,2,\hdots,M$. $x_{i}$ corresponds to the spiking potential while $y_i$ and $z_i$ describe the fast and slow-scale dynamics, respectively, of neuron $i$. Each individual neuron in the network has parameters $a_i =1, b_i = 3$ and $c_i = 5$ which are assumed to be unknown. $\beta_{im}$ represents the connectivity coefficient from neuron $i$ to neuron $m$. For a network of size $M$, we have $M^2-M$ possible connection parameters since neuron self connections are not allowed (i.e. $\beta_{ii} = 0$). These connection parameters are also assumed to be unknown.
\begin{figure}
\caption{{\bf Predicting neuron potential $x_3$ in random 3-neuron Hindmarsh-Rose networks.}}
\label{figure3}
\end{figure}
For this example we examine networks of size $M = 3$ with 5 random connections. Data from these networks are generated using a fourth-order Adams-Moulton method with sample rate $h = 0.08$ ms. We assume that the training data consists of 3000 observations, or 240 ms, of the $x_1, x_2, x_3$ variables, each of which is corrupted by Gaussian noise with mean 0 and variance of 0.2. Under the stated parameter regime, the neurons in the network spike approximately every 6 ms, meaning our training set has on average around 40 spikes per neuron. In this example, we restrict our focus to predicting 8 ms of the $x_3$ variable (though a similar analysis follows for the prediction of $x_1$ and $x_2$). Fig. \ref{figure3}a shows a representative realization of this problem. Given our interest in $x_3$, the hybrid method only assumes a mechanistic equation for neuron 3
\begin{eqnarray*}
\dot{x}_3 &=& y_3-a_3x_3^3+b_3x_3^2-z_3+1.2+\sum_{3 \neq m}^M \frac{\beta_{3m}}{1+9e^{-10x_m}}x_m \\
\dot{y}_3 &=& 1-c_3x_3^2\\
\dot{z}_3 &=& 5\times 10^{-5}\left[4\left(x_3-\left(-\frac{8}{5}\right)\right)-z_3 \right]
\end{eqnarray*}
and nonparametrically represents neuron 1 and neuron 2.
Fig. \ref{figure3}b shows the resulting accuracy in predicting $x_3$ when using parametric (black), nonparametric (blue) and hybrid (red) methods with 80\% (solid line) and 50\% (dashed-dotted line) uncertainty in parameter values. The parametric approach uses the full mechanistic model described by Eq. \ref{hindmarsh} for modeling and prediction, requiring estimation of the $x,y$ and $z$ state variables and parameters $a,b$ and $c$ for each neuron, as well as the full connectivity matrix. Notice that once again with 80\% uncertainty, the scale of error for the parametric method is much larger compared to the other methods. Only with 50\% uncertainty is the parametric method able to provide reliable predictions of $x_3$. Note that unlike in the Lorenz-63 example, we do not consider the parametric method with 20\% uncertainty since reasonable parameter estimates and predictions are obtained with 50\% uncertainty. The nonparametric method ($\tau = 1$, $d = 9$) uses $\kappa = 10$ neighbors for building the local model for prediction. Again we observe that the hybrid method, even with a large parameter uncertainty of 80\%, provides accurate predictions of $x_3$ compared to the other methods. Table \ref{table_HR} shows the robustness of the hybrid method in estimating the individual parameters for neuron 3.
\begin{table}[ht]
\begin{center}
\begin{tabular}{| c | c | c | c |}
\hline
\multicolumn{4}{|c|}{Neuron 3 Parameter Estimation Results} \\ \hline
True Parameter & Method& Mean&Standard Deviation \\ \hline
\multirow{3}{*}{$a_3 = 1$} & Hybrid (80\% Uncertainty) & 0.98 & 0.04 \\
& Parametric (80\% Uncertainty) & 1.07 & 0.51 \\
& Parametric (50\% Uncertainty) & 0.98 & 0.15 \\
\cline{1-4}
\multirow{3}{*}{$b_3 = 3$} & Hybrid (80\% Uncertainty) & 2.96 & 0.10\\
& Parametric (80\% Uncertainty) & 2.92 & 0.88 \\
& Parametric (50\% Uncertainty) & 2.92 & 0.26 \\
\cline{1-4}
\multirow{3}{*}{$c_3 =5$} & Hybrid (80\% Uncertainty) & 4.93 & 0.16\\
& Parametric (80\% Uncertainty) & 4.66 & 1.04 \\
& Parametric (50\% Uncertainty) & 4.83 & 0.43 \\
\cline{1-4}
\end{tabular}
\caption{\textbf{Summary of neuron 3 parameter estimation results}. Mean and standard deviation calculated over 200 realizations. The hybrid method once again is robust to a large initial parameter uncertainty. The parametric method on the other hand is unable to obtain reliable estimates of the neuron parameters with large uncertainty.}
\label{table_HR}
\end{center}
\end{table}
\subsection*{Predicting Flour Beetle Population Dynamics}
We now investigate the prediction problem in a well-known data set from an ecological study involving the cannibalistic red flour beetle \emph{Tribolium castaneum}. In \cite{constantino}, the authors present experimentally collected data and a mechanistic model describing the life cycle dynamics of \emph{T. castaneum}. Their discrete time model describing the progression of the beetle through the larvae, pupae, and adult stages is given by
\begin{eqnarray} \label{beetle}
L(t+1) &=& bA(t) e^{-c_{el}L(t) - c_{ea}A(t)} \nonumber\\
P(t+1) &=& L(t)(1-\mu_l)\\
A(t+1) &=& P(t) e^{-c_{pa}A(t)}+A(t)(1-\mu_a)\nonumber
\end{eqnarray}
where $L$, $P$ and $A$ correspond to larvae, pupae and adult populations, respectively. The essential interactions described by this model are (i) flour beetles become reproductive only in the adult stage, (ii) adults produce new larvae, (iii) adults and larvae can both cannibalize larvae, and (iv) adults cannibalize pupae. We note that since Eq. \ref{beetle} only approximates the life cycle dynamics of the beetle, there is a degree of model error in the proposed system, unlike the previous examples.
\begin{figure}
\caption{{\bf Example data set from \emph{T. castaneum}.}}
\label{figure4}
\end{figure}
The authors of \cite{constantino} experimentally set the adult mortality rate ($\mu_a$) to $0.96$ and the recruitment rate ($c_{pa}$) from pupae to adult to seven different values ($0$, $0.05$, $0.10$, $0.25$, $0.35$, $0.50$, $1.0$). Experiments at each recruitment rate value were replicated three times resulting in 21 different datasets. Each dataset consists of total numbers of larvae, pupae, and adults measured bi-weekly over 82 weeks resulting in 41 measurements for each life stage. These data were fit to Eq. \ref{beetle} in \cite{constantino} and parameter estimates $b = 6.598$, $c_{el} = 1.209 \times 10^{-2}$, $c_{ea} = 1.155 \times 10^{-2}$ and $\mu_l = 0.2055$ were obtained. We treat these parameter values as ground truth when considering the different parameter uncertainty levels for fitting the data to the model.
In our analysis of this system, we treat the first 37 measurements (or 74 weeks) within an experiment as training data and use the remaining 4 time points (or 8 weeks) for forecast evaluation. Fig. \ref{figure4} shows an example of this setup for a representative dataset. Fig. \ref{figure5} shows the results of predicting the larvae (Fig. \ref{figure5}a), pupae (Fig. \ref{figure5}b) and adult (Fig. \ref{figure5}c) populations using parametric (black), nonparametric (blue) and hybrid prediction methods with 80\% (solid line) and 50\% (dashed-dotted line) parameter uncertainty levels. Error bars correspond to the standard error over the 21 datasets. The parametric method uses the full mechanistic model described in Eq. \ref{beetle} to estimate the population state and parameters $b, c_{el}, c_{ea}$ and $\mu_l$ before prediction. We note in Fig. \ref{figure5} that the parametric method with 80\% uncertainty is not shown due to the scale of the error, and is significantly outperformed by the nonparametric prediction ($\tau = 1, d = 2, \kappa = 5$). For the hybrid method, we only consider the mechanistic equations for pupae and adult population dynamics
\begin{eqnarray*}
P(t+1) &=& L(t)(1-\mu_l)\\
A(t+1) &=& P(t) e^{-c_{pa}A(t)}+A(t)(1-\mu_a)
\end{eqnarray*}
and nonparametrically represent larvae. Hybrid prediction with 80\% uncertainty outperforms both nonparametric and parametric with 80\% uncertainty for pupae and adult population levels, and is comparable to parametric with 50\% uncertainty.
\begin{figure}
\caption{{\bf Results for predicting population levels of \emph{T. castaneum}.}}
\label{figure5}
\end{figure}
\section*{Conclusion}
By blending characteristics of parametric and nonparametric methodologies, the proposed hybrid method for modeling and prediction offers several advantages over standalone methods. From the perspective of model fitting and the required parameter estimation that arises in this process, we have shown that the hybrid approach allows for a more robust estimation of model parameters. Particularly for situations where there is a large uncertainty in the true parameter values, the hybrid method is able to construct accurate estimates of model parameters when the standard parametric model fitting fails to do so. At first this may seem counter-intuitive, but in fact it is not that surprising. The replacement of mechanistic equations with their nonparametric representations in effect reduces the dimension of the parameter space that we have to optimize in, resulting in better parameter estimates. As we have demonstrated in the above examples, this refinement in the parameter estimates leads to an improvement in short-term prediction accuracy.
The limitations of the hybrid method are similar to those of parametric and nonparametric methods in that, if not enough training data are available, accurate estimation and prediction become difficult. However, the demonstrated robustness of the hybrid method to large parameter uncertainty is encouraging, particularly when considering experimental situations where we may not have a good prior estimate of the model parameters. One could consider implementing the hybrid method in an iterative fashion, estimating the parameters of each equation separately, then piecing the model back together for prediction. We can think of this as an {\it iterative hybrid method}; it is the subject of future work.
We view this work as complementary to recent publications on forecasting \cite{perretti,perretti2,hartig}. The authors of \cite{perretti,perretti2} advocate nonparametric methods over parametric methods in general, while a letter \cite{hartig} addressing the work of \cite{perretti} showed that a more sophisticated method for model fitting results in better parameter estimates and therefore model-based predictions which outperform model-free methods. Our results support the view that no one method is uniformly better than the other. As we showed in the above examples, in situations where the model error and uncertainty in initial parameters are relatively small, the parametric approach outperforms other prediction methods. Often in experimental studies though, we are not operating in this ideal situation and instead are working with a model that has substantial error with a large uncertainty in parameters which can lead to inaccurate system inference. In situations such as these, nonparametric methods are particularly useful.
The main appeal of the hybrid method is that we can confront these situations without having to completely abandon the use of the mechanistic equations. This is important since mechanistic models often provide valuable information about the underlying processes governing the system dynamics. While we explored in detail the robustness of the hybrid method to large levels of parameter uncertainty, its usefulness stretches well beyond that. In some instances, we may only have a model for some of the states or portions of the model may have higher error than others. By supplementing these parts with their nonparametric representation, the hybrid method would allow us to only use the parts of the model we are confident in and thus improve our analysis.
\nolinenumbers
\providecommand{\noopsort}[1]{}\providecommand{\singleletter}[1]{#1}
\end{document}
\begin{document}
\title{Rank-one isometries of CAT(0) cube complexes and their centralisers}
\begin{abstract}
If $G$ is a group acting geometrically on a CAT(0) cube complex $X$ and if $g \in G$ is an infinite-order element, we show that exactly one of the following situations occurs: (i) $g$ defines a rank-one isometry of $X$; (ii) the stable centraliser $SC_G(g)= \{ h \in G \mid \exists n \geq 1, [h,g^n]=1 \}$ of $g$ is not virtually cyclic; (iii) $\mathrm{Fix}_Y(g^n)$ is finite for every $n \geq 1$ and the sequence $(\mathrm{Fix}_Y(g^n))$ takes infinitely many values, where $Y$ is a cubical component of the Roller boundary of $X$ which contains an endpoint of an axis of $g$. We also show that (iii) cannot occur in several cases, providing a purely algebraic characterisation of rank-one isometries.
\end{abstract}
\tableofcontents
\section{Introduction}
\noindent
A major advance in the study of the geometry of CAT(0) cube complexes in recent years was the proof of the rank rigidity conjecture \cite{CapraceSageev}. Namely, if a group $G$ acts essentially and without fixed point at infinity on a CAT(0) cube complex $X$, then either $G$ contains a rank-one isometry or $X$ decomposes as a product. Among the applications, let us mention: constructions of free subgroups and variations \cite{CapraceSageev, PingPong, CubeUniformGrowth}, acylindrical hyperbolicity (see \cite[Section 6.2]{MoiHypCube} and references therein), random walks \cite{CCCrandom, random} and obstructions to acting on CAT(0) cube complexes \cite{CIF, ThompsonFW, CubeKahler, TorsionCube}. All these applications highlight the fundamental role played by rank-one isometries in the geometry of CAT(0) cube complexes.
\noindent
In this article, we are interested in the following question:
\begin{question}
Let $G$ be a group acting geometrically on a CAT(0) cube complex $X$. Does there exist a purely algebraic characterisation of the elements of $G$ which induce rank-one isometries of $X$?
\end{question}
\noindent
A natural attempt is to require the \emph{stable centraliser}
$$SC_G(g)= \{ h \in G \mid \exists n \geq 1, [h,g^n]=1 \}$$
of an element $g$ of our group $G$ to be virtually cyclic. (Notice that $SC_G(g)$ contains the centraliser of $g$ and has finite index in the commensurator of $\langle g \rangle$.) Unfortunately, it turns out that the stable centraliser may be virtually cyclic while the isometry is not rank-one. More precisely, \cite[Corollary 21]{Rattaggi} provides the example of a commutative-transitive group $G$ acting geometrically on a product of two trees $T_1 \times T_2$ and containing an infinite-order element $g$ whose (stable) centraliser is infinite cyclic.
\noindent
The main goal of the article is to understand how the equivalence between being a rank-one isometry and having a virtually cyclic stable centraliser may fail. In this context, we prove the following statement:
\begin{thm}\label{intro:Main}
Let $G$ be a group acting geometrically on a CAT(0) cube complex $X$, and $g \in G$ an infinite-order element. Fix a cubical component $Y \subset \mathfrak{R}X$ which contains an endpoint of an axis of $g$. Then exactly one of the following situations occurs:
\begin{itemize}
\item $g$ defines a rank-one isometry of $X$;
\item the stable centraliser $SC_G(g)$ of $g$ is not virtually cyclic;
\item $\mathrm{Fix}_Y(g^n)$ is finite for every $n \geq 1$ and the sequence $(\mathrm{Fix}_Y(g^n))$ takes infinitely many values.
\end{itemize}
\end{thm}
\noindent
The third case of our trichotomy is precisely what happens in Rattaggi's example. More precisely, our isometry $g \in \mathrm{Isom}(T_1 \times T_2)$ stabilises $T_1$, so that a cubical component which contains an endpoint of an axis of $g$ must be an unbounded and locally finite tree $T$ (actually, $T$ must be isometric to $T_2$). Then $g$ induces an isometry $h$ of $T$ such that the fixed-set of $h^n$ increases as $n \to + \infty$ but always remains bounded.
\noindent
However, such a behavior seems to be exotic and typically does not happen. In this context, we deduce from Theorem \ref{intro:Main} the following statement:
\begin{thm}\label{intro:Special}
Let $G$ be a group acting geometrically on a CAT(0) cube complex $X$. Assume that, for every hyperplane $J$ of $X$ and every element $g \in G$, the two hyperplanes $J$ and $gJ$ cannot be transverse nor tangent. Then an infinite-order element of $G$ defines a rank-one isometry of $X$ if and only if its stable centraliser in $G$ is virtually cyclic.
\end{thm}
\noindent
For instance, our theorem applies to Haglund and Wise's (virtually) cocompact special groups \cite{Special}, which include many groups of interest such as right-angled Artin groups, graph products of finite groups (e.g., right-angled Coxeter groups) or graph braid groups.
\noindent
In our general study of stable centralisers of isometries, we also prove the following statement, which we think to be of independent interest:
\begin{thm}\label{intro:Regular}
Let $G$ be a group acting geometrically on a CAT(0) cube complex $X$. Assume that $X$ decomposes as a product of $n \geq 1$ unbounded irreducible CAT(0) cube complexes $X_1 \times \cdots \times X_n$. If $g \in G$ is a regular element, then $SC_G(g)$ is virtually $\mathbb{Z}^n$.
\end{thm}
\noindent
Recall from \cite{CapraceSageev} that $g$ is \emph{regular} if it induces a rank-one isometry on each factor $X_i$. The existence of regular elements has been proved in \cite{CCCrandom} using probabilistic methods (see also \cite[Theorem 6.67]{MoiHypCube} for an alternative proof based on cubical and hyperbolic geometries).
\noindent
The article is organised as follows. In Section \ref{section:Pre}, we recall general definitions and record preliminary statements for future use. In Section \ref{section:SMin}, we introduce \emph{stable minimising sets} and following \cite{FTT} we prove a decomposition theorem. The connection between the stable minimising set and the property of being a rank-one isometry is explained in Section \ref{section:Contracting}, and Theorem \ref{intro:Main} is proved in Section \ref{section:Main}. Finally, a few applications are proved in Section \ref{section:Applications}, including Theorems \ref{intro:Special} and \ref{intro:Regular} above, and we conclude the article with open questions in Section \ref{section:Questions}.
\paragraph{Acknowledgments.} This work was supported by a public grant as part of the Fondation Math\'ematique Jacques Hadamard.
\section{Preliminaries}\label{section:Pre}
\subsection{Cube complexes, hyperplanes, projections}
\noindent
A \textit{cube complex} is a CW complex constructed by gluing together cubes of arbitrary (finite) dimension by isometries along their faces. It is \textit{nonpositively curved} if the link of any of its vertices is a simplicial \textit{flag} complex (i.e., $n+1$ vertices span an $n$-simplex if and only if they are pairwise adjacent), and \textit{CAT(0)} if it is nonpositively curved and simply-connected. See \cite[page 111]{MR1744486} for more information.
\noindent
Fundamental tools when studying CAT(0) cube complexes are \emph{hyperplanes}. Formally, a \textit{hyperplane} $J$ is an equivalence class of edges with respect to the transitive closure of the relation identifying two parallel edges of a square. Notice that a hyperplane is uniquely determined by one of its edges, so if $e \in J$ we say that $J$ is the \textit{hyperplane dual to $e$}. Geometrically, a hyperplane $J$ is rather thought of as the union of the \textit{midcubes} transverse to the edges belonging to $J$ (sometimes referred to as its \emph{geometric realisation}). See Figure \ref{figure27}. The \textit{carrier} $N(J)$ of a hyperplane $J$ is the union of the cubes intersecting (the geometric realisation of) $J$.
\noindent
There exist several metrics naturally defined on a CAT(0) cube complex. In this article, we are only interested in the graph metric defined on its one-skeleton, referred to as its \emph{combinatorial metric}. In fact, from now on, we will identify a CAT(0) cube complex with its one-skeleton, thought of as a collection of vertices endowed with a relation of adjacency. In particular, when writing $x \in X$, we always mean that $x$ is a vertex of $X$.
\noindent
The following theorem will be often used along the article without mentioning it.
\begin{thm}\emph{\cite{MR1347406}}
Let $X$ be a CAT(0) cube complex.
\begin{itemize}
\item If $J$ is a hyperplane of $X$, the graph $X \backslash \backslash J$ obtained from $X$ by removing the (interiors of the) edges of $J$ contains two connected components. They are convex subgraphs of $X$, referred to as the \emph{halfspaces} delimited by $J$.
\item A path in $X$ is a geodesic if and only if it crosses each hyperplane at most once.
\item For every $x,y \in X$, the distance between $x$ and $y$ coincides with the number of hyperplanes separating them.
\end{itemize}
\end{thm}
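\noindent
For instance, when $X$ is a simplicial tree (a one-dimensional CAT(0) cube complex), each hyperplane is reduced to a single edge, its geometric realisation is the midpoint of that edge, the two halfspaces are the two components obtained by removing the open edge, and the last point of the theorem recovers the usual edge-path metric. In the square grid $\mathbb{Z}^2$, the hyperplanes are the vertical lines $\{x=k+\frac{1}{2}\}$ and the horizontal lines $\{y=k+\frac{1}{2}\}$, $k \in \mathbb{Z}$, and the distance between two vertices is their $\ell^1$-distance, i.e., the number of such lines separating them.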
\begin{figure}
\caption{A hyperplane (in red) and the associated union of midcubes (in green).}
\label{figure27}
\end{figure}
\noindent
Another useful tool when studying CAT(0) cube complexes is the notion of \emph{projection} onto a convex subcomplex, which is defined by the following proposition (see \cite[Lemma 13.8]{Special}):
\begin{prop}\label{projection}
Let $X$ be a CAT(0) cube complex, $C \subset X$ a convex subcomplex and $x \in X \backslash C$ a vertex. There exists a unique vertex $y \in C$ minimizing the distance to $x$. Moreover, for any vertex of $C$, there exists a geodesic from it to $x$ passing through $y$.
\end{prop}
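\noindent
For instance, in the square grid $\mathbb{Z}^2$ the quadrant $C= \{ (a,b) \mid a,b \geq 0\}$ is a convex subcomplex, and the projection of a vertex $(a,b)$ onto $C$ is $\left( \max(a,0), \max(b,0) \right)$; notice that the hyperplanes separating $(a,b)$ from its projection are exactly the vertical and horizontal hyperplanes separating $(a,b)$ from the whole quadrant.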
\noindent
Below, we record a couple of statements related to projections for future use. Proofs can be found in \cite[Lemma 13.8]{Special} and \cite[Proposition 2.7]{article3} respectively.
\begin{lemma}\label{lem:HypProj}
Let $X$ be a CAT(0) cube complex, $Y \subset X$ a convex subcomplex and $x \in X$ a vertex. Any hyperplane separating $x$ from its projection onto $Y$ separates $x$ from $Y$.
\end{lemma}
\begin{lemma}\label{lem:HypProjSeparate}
Let $X$ be a CAT(0) cube complex and $Y \subset X$ a convex subcomplex. For all vertices $x,y \in X$, the hyperplanes separating the projections of $x$ and $y$ onto $Y$ are precisely the hyperplanes separating $x$ and $y$ which cross $Y$. As a consequence, the projection onto $Y$ is $1$-Lipschitz.
\end{lemma}
\noindent
Notice that the next statement is a direct consequence of Lemma \ref{lem:HypProj}:
\begin{cor}\label{cor:HypSepTwoConvex}
Let $X$ be a CAT(0) cube complex and $Y_1,Y_2 \subset X$ two disjoint and non-empty convex subcomplexes. If $y_1 \in Y_1$ and $y_2 \in Y_2$ are two vertices minimising the distance between $Y_1$ and $Y_2$, then the hyperplanes separating $y_1$ and $y_2$ are exactly the hyperplanes separating $Y_1$ and $Y_2$.
\end{cor}
\subsection{Isometries of CAT(0) cube complexes}\label{section:Isom}
\noindent
According to \cite{HaglundAxis}, an isometry $g \in \mathrm{Isom}(X)$ of a CAT(0) cube complex is
\begin{itemize}
\item a \emph{loxodromic isometry} if there exists a bi-infinite geodesic on which $g$ acts by translations;
\item an \emph{elliptic isometry} if $g$ has bounded orbits;
\item an \emph{inversion} if a power of $g$ stabilises a hyperplane and inverts its halfspaces.
\end{itemize}
It is worth noticing that, up to subdividing the cube complex, we may suppose that inversions do not exist.
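\noindent
For instance, the isometry $n \mapsto 1-n$ of the line with vertex-set $\mathbb{Z}$ stabilises the hyperplane dual to the edge between $0$ and $1$ and exchanges the two halfspaces it delimits, so it is an inversion; after subdividing each edge once, it fixes the new midpoint vertex and becomes an elliptic isometry.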
\noindent
\textbf{Convention:} In this article, we always suppose that a CAT(0) cube complex does not admit inversions.
\noindent
When studying centralisers, natural subsets to consider are:
\begin{definition}
Let $X$ be a CAT(0) cube complex and $g \in \mathrm{Isom}(X)$ a loxodromic isometry. The \emph{minimising set} of $g$ is
$$\mathrm{Min}(g)= \left\{ x \in X \mid d(x,gx)= \inf\limits_{y \in X} d(y,gy) \right\}.$$
Equivalently, $\mathrm{Min}(g)$ is the union of all the axes of $g$.
\end{definition}
\noindent
(For a proof of the equivalence, we refer to \cite[Corollary 6.2]{HaglundAxis}.)
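\noindent
For instance, if $X$ is a simplicial tree and $g$ a loxodromic isometry, then $g$ admits a unique axis and $\mathrm{Min}(g)$ coincides with this axis, since a vertex at distance $k$ from the axis is moved a distance equal to the translation length of $g$ plus $2k$. By contrast, for the diagonal translation $g : (a,b) \mapsto (a+1,b+1)$ of the square grid $\mathbb{Z}^2$, every vertex is moved a distance exactly $2$, so that $\mathrm{Min}(g)= \mathbb{Z}^2$.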
\noindent
The interest of minimising sets is justified in particular by the following statement, proved in \cite[Lemma 6.3]{FTT}:
\begin{lemma}\label{lem:centraliserCC}
Let $G$ be a group acting geometrically on a CAT(0) cube complex $X$ and $g \in G$ a loxodromic isometry. The centraliser $C_G(g)$ acts geometrically on $\mathrm{Min}(g)$.
\end{lemma}
\noindent
Rank-one isometries may be defined in several equivalent ways; let us mention some of them.
\begin{prop}\label{prop:contracting}
Let $X$ be a uniformly locally finite CAT(0) cube complex and $g \in \mathrm{Isom}(X)$ an isometry. The following conditions are equivalent:
\begin{itemize}
\item[(i)] $g$ is a \emph{rank-one isometry}, i.e., no CAT(0)-axis of $g$ bounds a CAT(0)-halfplane;
\item[(ii)] $g$ is contracting with respect to the CAT(0) metric;
\item[(iii)] $g$ is a Morse isometry with respect to the CAT(0) metric;
\item[(iv)] $g$ is contracting with respect to the combinatorial metric;
\item[(v)] $g$ is a Morse isometry with respect to the combinatorial metric;
\item[(vi)] $g$ skewers a pair of $L$-separated hyperplanes for some $L \geq 0$, i.e., there exist two halfspaces $A \subset B$ such that $g \cdot B \subsetneq A$ and such that there exist at most $L$ hyperplanes transverse simultaneously to the two hyperplanes bounding $A$ and $B$.
\end{itemize}
\end{prop}
\noindent
Recall that, given a metric space $M$ and an isometry $g \in \mathrm{Isom}(M)$, one says that
\begin{itemize}
\item $g$ is a \emph{Morse isometry} if there exists some $x \in M$ such that $n \mapsto g^n \cdot x$ is a quasi-isometric embedding and if, for every $A,B >0$, there exists some $K \geq 0$ such that any $(A,B)$-quasigeodesic between two points of $\langle g \rangle \cdot x$ stays in the $K$-neighborhood of $\langle g \rangle \cdot x$.
\item $g$ is a \emph{contracting isometry} if there exists some $x \in M$ such that $n \mapsto g^n \cdot x$ is a quasi-isometric embedding and if there exists some $D \geq 0$ such that the nearest-point projection of any ball disjoint from $\langle g \rangle \cdot x$ onto $\langle g \rangle \cdot x$ has diameter at most~$D$.
\end{itemize}
\begin{proof}[Proof of Proposition \ref{prop:contracting}.]
The equivalences $(i) \Leftrightarrow (ii) \Leftrightarrow (iii)$ are respectively proved by \cite[Theorem 5.4]{RankContracting} and \cite[Theorem 2.14]{MR3339446}. The equivalence $(iii) \Leftrightarrow (v)$ is clear because the two metrics are quasi-isometric. Finally, the equivalences $(iv) \Leftrightarrow (v) \Leftrightarrow (vi)$ follow respectively from \cite[Lemma 4.6]{MoiHypCube} and \cite[Theorem 4.2]{MR3339446}.
\end{proof}
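\noindent
To illustrate these characterisations, notice first that every loxodromic isometry of a uniformly locally finite simplicial tree is rank-one, and hence contracting: a tree contains no isometrically embedded flat half-plane, so no axis can bound one. By contrast, the diagonal translation $(a,b) \mapsto (a+1,b+1)$ of $\mathbb{Z}^2$ is not rank-one: if $A \subset B$ are two halfspaces with $g \cdot B \subsetneq A$, they are delimited by two parallel (say vertical) hyperplanes, and every horizontal hyperplane is transverse to both of them, so that condition $(vi)$ fails for every $L \geq 0$.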
\subsection{Wallspaces and orientations}
\noindent
Given a set $X$, a \emph{wall} $\{A,B\}$ is a partition of $X$ into two non-empty subsets $A,B$, referred to as \emph{halfspaces}. Two points of $X$ are \emph{separated} by a wall if they belong to two distinct subsets of the partition.
\noindent
A \emph{wallspace} $(X, \mathcal{W})$ is the data of a set $X$ and a collection of walls $\mathcal{W}$ such that any two points are separated by only finitely many walls. Such a space is naturally endowed with the pseudo-metric
$$d : (x,y) \mapsto \text{number of walls separating $x$ and $y$}.$$
As shown in \cite{ChatterjiNiblo, NicaCubulation}, there is a natural CAT(0) cube complex associated to any wallspace. More precisely, given a wallspace $(X, \mathcal{W})$, define an \emph{orientation} $\sigma$ as a collection of halfspaces such that:
\begin{itemize}
\item for every $\{A,B\} \in \mathcal{W}$, $\sigma$ contains exactly one subset among $\{A,B\}$;
\item if $A$ and $B$ are two halfspaces satisfying $A \subset B$, then $A \in \sigma$ implies $B \in \sigma$.
\end{itemize}
Roughly speaking, an orientation is a coherent choice of a halfspace in each wall. As an example, if $x \in X$, then the set of halfspaces containing $x$ defines an orientation. Such an orientation is referred to as a \emph{principal orientation}. Notice that, because any two points of $X$ are separated by only finitely many walls, two principal orientations are always \emph{commensurable}, i.e., their symmetric difference is finite.
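\noindent
As a simple example, endow $X= \mathbb{Z}$ with the walls $\left\{ (-\infty,k], [k+1,+\infty) \right\}$, $k \in \mathbb{Z}$. The principal orientation associated to $m \in \mathbb{Z}$ consists of the rays $(-\infty,k]$ with $k \geq m$ and the rays $[k+1,+\infty)$ with $k+1 \leq m$. There are exactly two further orientations, namely the one containing all the rays $[k+1,+\infty)$ and the one containing all the rays $(-\infty,k]$; each of them differs from every principal orientation on infinitely many walls, so they do not appear as vertices of the cubulation, and the cubulation of this wallspace is the line with vertex-set $\mathbb{Z}$.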
\begin{figure}
\caption{A wallspace and its cubulation.}
\label{quotient}
\end{figure}
\noindent
The \emph{cubulation} of $(X, \mathcal{W})$ is the cube complex
\begin{itemize}
\item whose vertices are the orientations within the commensurability class of principal orientations;
\item whose edges link two orientations if their symmetric difference has cardinality two;
\item whose $n$-cubes fill in all the subgraphs isomorphic to one-skeleta of $n$-cubes.
\end{itemize}
See Figure \ref{quotient} for an example.
\subsection{Roller boundary}
\noindent
Let $X$ be a CAT(0) cube complex. An \emph{orientation} of $X$ is an orientation of the wallspace $(X, \mathcal{W}(\mathcal{J}))$, as defined in the previous section, where $\mathcal{J}$ is the set of all the hyperplanes of $X$. The \emph{Roller compactification} $\overline{X}$ of $X$ is the set of the orientations of $X$. Usually, we identify $X$ with the image of the embedding
$$\left\{ \begin{array}{ccc} X & \to & \overline{X} \\ x & \mapsto & \text{principal orientation defined by $x$} \end{array} \right.$$
and we define the \emph{Roller boundary} of $X$ by $\mathfrak{R}X:= \overline{X} \backslash X$.
\noindent
The Roller compactification is naturally a cube complex. Indeed, if we declare that two orientations are linked by an edge if their symmetric difference has cardinality two and if we declare that any subgraph isomorphic to the one-skeleton of an $n$-cube is filled in by an $n$-cube for every $n \geq 2$, then $\overline{X}$ is a disjoint union of CAT(0) cube complexes. Each such component is referred to as a \emph{cubical component} of $\overline{X}$. See Figure \ref{Roller} for an example. Notice that the distance (possibly infinite) between two vertices of $\overline{X}$ coincides with the number of hyperplanes which separate them, if we say that a hyperplane $J$ separates two orientations when they contain different halfspaces delimited by $J$.
\begin{figure}
\caption{Roller compactification of $\mathbb{R}$.}
\label{Roller}
\end{figure}
\noindent
Two orientations belong to a common cubical component if and only if they are separated by only finitely many hyperplanes.
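\noindent
For instance, if $X$ is the line with vertex-set $\mathbb{Z}$, the Roller boundary consists of the two orientations containing all the left rays, respectively all the right rays, denoted by $-\infty$ and $+\infty$, and each of them is a cubical component on its own. If $X= \mathbb{Z}^2$, then $\overline{X}$ can be identified with $\overline{\mathbb{Z}} \times \overline{\mathbb{Z}}$, where $\overline{\mathbb{Z}}= \mathbb{Z} \cup \{ \pm \infty\}$; for example, the cubical component of $(+\infty,0)$ is $\{+\infty\} \times \mathbb{Z}$, a combinatorial line, since two such orientations differ only on finitely many horizontal hyperplanes.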
\noindent
Interestingly, the projection of a vertex in a finite-dimensional CAT(0) cube complex onto a cubical component of its Roller boundary can be defined:
\begin{prop}\label{prop:ProjInf}
Let $X$ be a finite-dimensional CAT(0) cube complex, $x \in X$ a vertex and $Y \subset \mathfrak{R}X$ a cubical component. There exists a unique point $\xi \in Y$ such that the hyperplanes separating $x$ from $\xi$ are precisely the hyperplanes separating $x$ from $Y$.
\end{prop}
\noindent
In the sequel, the point $\xi$ will be referred to as the \emph{projection} of $x$ onto $Y$. Before turning to the proof of Proposition \ref{prop:ProjInf}, we begin by proving a preliminary lemma.
\begin{lemma}\label{lem:DecreasingInf}
Let $X$ be a CAT(0) cube complex, $\xi \in \mathfrak{R}X$ a point at infinity and $(D_i)$ a decreasing sequence of halfspaces containing $\xi$. For every $i \geq 0$, $D_i$ contains the cubical component of $\xi$.
\end{lemma}
\begin{proof}
Let $Y \subset \mathfrak{R}X$ denote the cubical component containing $\xi$ and, for every $i \geq 0$, let $J_i$ denote the hyperplane delimiting $D_i$. Assume for contradiction that there exists some $k \geq 0$ such that $J_k$ separates at least two points of $Y$. Fix a point $\zeta \in Y$ such that $J_k$ separates $\zeta$ from $\xi$. Notice that $\zeta \in D_k^c \subset D_i^c$ and $\xi \in D_i$ for every $i \geq k$. Consequently, the hyperplanes $J_k,J_{k+1}, \ldots$ all separate $\zeta$ and $\xi$. But this is impossible because the fact that $\zeta$ and $\xi$ both belong to the cubical component $Y$ implies that they are separated by only finitely many hyperplanes of $X$.
\end{proof}
\begin{proof}[Proof of Proposition \ref{prop:ProjInf}.]
Let $\sigma$ be the collection of halfspaces defined as follows. If $J$ is a hyperplane which separates two points of $Y$, then $\sigma$ contains the halfspace delimited by $J$ containing $x$ and not its complement. Otherwise, if $J$ does not separate two points of $Y$, then $\sigma$ contains the halfspace delimited by $J$ containing $Y$ and not its complement. Clearly, $\sigma$ is the only possible candidate for our point of $Y$. Now, we need to verify that $\sigma$ is an orientation, and next, that as a point of $\mathfrak{R}X$ it belongs to $Y$.
\noindent
Let $A$ and $B$ be two halfspaces satisfying $A \subset B$ and $A \in \sigma$. We claim that $B$ belongs to $\sigma$. We distinguish three cases.
\begin{itemize}
\item If $Y$ is included into $A$, then $Y$ must be contained into $B$ as well, hence $B \in \sigma$.
\item If the hyperplanes delimiting $A$ and $B$ both separate at least two points of $Y$, then $x$ must belong to $A$, and so to $B$, which implies that $B \in \sigma$.
\item If the hyperplane delimiting $A$ (resp. $B$) separates (resp. does not separate) two points of $Y$, then $Y$ must be contained into $B$, hence $B \in \sigma$.
\end{itemize}
Thus, we have proved that $\sigma$ is an orientation. Now, assume for contradiction that $\sigma$ does not belong to $Y$. As a consequence, if we fix a point $\xi \in Y$, there exist infinitely many hyperplanes $J_1, J_2, \ldots$ separating $\sigma$ from $\xi$. Because $X$ is finite-dimensional, up to extracting a subsequence, we suppose that the $J_i$'s are pairwise disjoint. Moreover, up to re-indexing our sequence, we suppose that $J_i$ separates $J_{i-1}$ and $J_{i+1}$ for every $i \geq 2$. Consequently, if $J_i^+$ denotes the halfspace delimited by $J_i$ which contains $\xi$ for every $i \geq 1$, then $(J_i^+)$ is a decreasing sequence of halfspaces which all contain $\xi$. It follows from Lemma \ref{lem:DecreasingInf} that the $J_i$'s do not cross $Y$, hence $Y \subset J_i^+$ for every $i \geq 1$. Consequently, we must have $J_i^+ \in \sigma$ for every $i \geq 1$ by construction of $\sigma$, a contradiction.
\end{proof}
\noindent
We conclude this subsection by proving a last preliminary lemma. It shows that the cubical components of a uniformly locally finite CAT(0) cube complex are themselves uniformly locally finite.
\begin{lemma}\label{lem:LocallyFiniteInf}
Let $X$ be a CAT(0) cube complex. Assume that there exists some $N \geq 1$ such that any vertex of $X$ admits at most $N$ neighbors. Then any cubical component of $\overline{X}$ satisfies the same property.
\end{lemma}
\begin{proof}
Fix a vertex $x \in \overline{X}$ and let $y_1, \ldots, y_k$ denote its neighbors. For every $1 \leq i \leq k$, let $J_i$ denote the unique hyperplane separating $x$ from $y_i$.
\noindent
Fix two distinct indices $1 \leq i , j \leq k$. If the carriers $N(J_i)$ and $N(J_j)$ are disjoint, then it follows from Corollary \ref{cor:HypSepTwoConvex} that there exists a hyperplane $J$ separating $N(J_i)$ and $N(J_j)$. Because $y_i$ belongs to the halfspace delimited by $J_i$ which does not contain $J_j$, and $y_j$ similarly belongs to the halfspace delimited by $J_j$ which does not contain $J_i$, necessarily $J$ separates $y_i$ and $y_j$. But $y_i$ and $y_j$ are within distance two in $\overline{X}$, so that $J_i,J_j,J$ cannot define three distinct hyperplanes separating $y_i$ and $y_j$. Therefore, the carriers $N(J_i)$ and $N(J_j)$ have to intersect.
\noindent
Because the carriers $N(J_1), \ldots, N(J_k)$ pairwise intersect, according to Helly's property, there exists a vertex $z \in X$ which belongs to the total intersection $\bigcap\limits_{i=1}^k N(J_i)$. By noticing that each $J_i$ defines a distinct edge having $z$ as an endpoint, we conclude that $k \leq N$, as desired.
\end{proof}
\subsection{Median algebras}
\noindent
A \emph{median algebra} $(X,\mu)$ is the data of a set $X$ and a map $\mu : X \times X \times X \to X$ satisfying the following conditions:
\begin{itemize}
\item $\mu(x,y,y)=y$ for every $x,y \in X$;
\item $\mu(x,y,z)= \mu(z,x,y)= \mu (x,z,y)$ for every $x,y,z \in X$;
\item $\mu\left( \mu(x,w,y), w,z \right) = \mu \left( x,w, \mu(y,w,z) \right)$ for every $x,y,z,w \in X$.
\end{itemize}
The \emph{interval} between two points $x,y \in X$ is
$$I(x,y)= \left\{ z \in X \mid \mu(x,y,z)=z \right\};$$
and a subset $Y \subset X$ is \emph{convex} if $I(x,y) \subset Y$ for every $x,y \in Y$. In this article, we are only interested in median algebras whose intervals are finite; they are referred to as \emph{discrete} median algebras.
\noindent
As proved in \cite{NicaCubulation}, a discrete median algebra is naturally a wallspace. Indeed, let us say that $Y \subset X$ is a \emph{halfspace} if $Y$ and $Y^c$ are both convex. Then a \emph{wall} of $X$ is the data of a halfspace and its complement, and it turns out that only finitely many walls separate two given points of $X$. The \emph{cubulation} of a discrete median algebra refers to the cubulation of this wallspace. In this specific case, it turns out that any orientation commensurable to a principal orientation must be a principal orientation itself. Consequently, the cubulation of a discrete median algebra $X$ coincides with the cube complex
\begin{itemize}
\item whose vertex-set is $X$;
\item whose edges link two points of $X$ if they are separated by a single wall;
\item whose $n$-cubes fill in every subgraph of the one-skeleton isomorphic to the one-skeleton of an $n$-cube, for every $n \geq 2$.
\end{itemize}
Therefore, a discrete median algebra may be naturally identified with its cubulation, and so may be thought of as a CAT(0) cube complex. The \emph{dimension} and the \emph{Roller compactification} of a discrete median algebra coincide with the dimension and the Roller compactification of its cubulation.
\noindent
Conversely, a CAT(0) cube complex $X$ naturally defines a discrete median algebra (see \cite{mediangraphs} and \cite[Proposition 2.21]{MR2413337}). Indeed, for every triple of vertices $x,y,z \in X$, there exists a unique vertex $\mu(x,y,z) \in X$ satisfying
$$\left\{ \begin{array}{l} d(x,y)= d( x , \mu(x,y,z))+ d(\mu(x,y,z), y) \\ d(x,z)= d(x,\mu(x,y,z))+d(\mu(x,y,z),z) \\ d(y,z)= d(y,\mu(x,y,z))+d(\mu(x,y,z),z) \end{array} \right..$$
In other words, $I(x,y) \cap I(y,z) \cap I(x,z) = \{ \mu(x,y,z)\}$. The vertex $\mu(x,y,z)$ is referred to as the \emph{median point} of $x,y,z$. Then $(X,\mu)$ is a discrete median algebra, motivating the following terminology:
\begin{definition}
Let $X$ be a CAT(0) cube complex. A \emph{median subalgebra} $Y \subset X$ is a set of vertices stable under the median operation.
\end{definition}
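\noindent
For instance, in the square grid $\mathbb{Z}^2$ the median point is computed coordinatewise: $\mu(x,y,z)= \left( \mathrm{med}(x_1,y_1,z_1), \mathrm{med}(x_2,y_2,z_2) \right)$, where $\mathrm{med}$ denotes the middle value of three integers. In particular, the diagonal $\Delta= \{ (k,k) \mid k \in \mathbb{Z}\}$ is a median subalgebra of $\mathbb{Z}^2$; notice that it is not convex, since the interval between $(0,0)$ and $(1,1)$ is the whole square $\{0,1\}^2$.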
\noindent
We conclude this section by recording a couple of preliminary lemmas. The following statement is precisely what is shown during the proof of \cite[Lemma 2.10]{FTT}.
\begin{lemma}\label{lem:WallHyp}
Let $X$ be a CAT(0) cube complex and $Y \subset X$ a median subalgebra. The walls of $Y$, thought of as a median algebra, coincide with the traces on $Y$ of the hyperplanes of~$X$.
\end{lemma}
\noindent
Our second statement is:
\begin{lemma}\label{lem:ConvexHullInter}
Let $X$ be a CAT(0) cube complex and $\gamma$ a bi-infinite geodesic. Let $\zeta,\xi \in \mathfrak{R}X$ denote the endpoints at infinity of $\gamma$. The union of all the geodesics having their endpoints on $\gamma$ coincides with $I(\zeta,\xi)$. As a consequence, $I(\zeta,\xi)$ is the convex hull of $\gamma$.
\end{lemma}
\begin{proof}
Let $\sigma$ be a geodesic between two vertices $a^-$ and $a^+$ of $\gamma$. Of course, if $\gamma^\pm$ denotes the subray of $\gamma$ between $a^\pm$ and $\gamma(\pm \infty)$, then the concatenation $\ell = \gamma^- \cup \sigma \cup \gamma^+$ is also a geodesic. As a consequence, a hyperplane of $X$ cannot separate a vertex $x$ of $\sigma$ from $\{\zeta,\xi\}$, because otherwise it would cross $\ell$ twice. Therefore, no hyperplane separates $x$ from the median point $\mu(x,\zeta,\xi)$, which precisely means that $x= \mu(x,\zeta,\xi)$ or equivalently $x \in I(\zeta,\xi)$.
\noindent
Conversely, let $x \in I(\zeta,\xi)$ be a vertex. Fix a vertex $y \in \gamma$. Because there exist only finitely many hyperplanes separating $x$ and $y$, there is an $n \geq 1$ such that all the hyperplanes which separate $x$ and $y$ and which cross $\gamma$ have to cross $\gamma$ between $\gamma(-n)$ and $\gamma(n)$. We also take $n$ sufficiently large so that $y$ lies between $\gamma(-n)$ and $\gamma(n)$ along $\gamma$. Now, we want to prove that $x$ belongs to a geodesic between $\gamma(-n)$ and $\gamma(n)$, or equivalently that $\mu(x,\gamma(-n),\gamma(n))=x$. If this equality does not hold, then there must exist a hyperplane $J$ separating $x$ from $\{\gamma(-n),\gamma(n)\}$. Necessarily, $J$ separates $x$ from $y$, so that it follows from our choice of $n$ that $J$ does not cross $\gamma$, i.e., $J$ separates $x$ from $\{\zeta,\xi\}$. But this contradicts the equality $\mu(x,\zeta,\xi)= x$.
\noindent
Thus, we have proved that $I(\zeta,\xi)$ coincides with the union of all the geodesics having their endpoints on $\gamma$. In order to show the second assertion of our lemma, it remains to show that the interval $I(\zeta,\xi)$ is convex. So let $x,y \in I(\zeta,\xi)$ be two vertices and let $z$ be a vertex of a geodesic $[x,y]$ between $x$ and $y$. As a consequence of what we have just proved, $x$ and $y$ belong to geodesics with endpoints on $\gamma$. Therefore, if $J$ is a hyperplane which does not cross $\gamma$, then $x$ and $y$ have to belong to the same halfspace delimited by $J$ as $\gamma$, and for the same reason the vertex $z$ must belong to this halfspace as well. It follows that no hyperplane separates $z$ from $\gamma$, or alternatively from $\{\zeta,\xi\}$. The conclusion is that $z$ belongs to $I(\zeta,\xi)$.
\end{proof}
\section{Stable minimising sets of loxodromic isometries}\label{section:SMin}
\noindent
In this section, our goal is to prove the following decomposition theorem about the \emph{stable minimising set} $\mathrm{SMin}(g)= \bigcup\limits_{n \geq 1} \mathrm{Min}(g^n)$ of a loxodromic isometry of a CAT(0) cube complex:
\begin{thm}\label{thm:Iso}
Let $X$ be a uniformly locally finite CAT(0) cube complex and $g \in \mathrm{Isom}(X)$ a loxodromic isometry. Fix an axis $\gamma$ of $g$, let $\zeta,\xi \in \mathfrak{R}X$ denote its points at infinity, and let $Y \subset \mathfrak{R}X$ be the cubical component containing $\xi$. Then $\mathrm{SMin}(g)$ is a median subalgebra of $X$ and
$$\left\{ \begin{array}{ccc} \mathrm{SMin}(g) & \to & Y \times I(\zeta,\xi) \\ x & \mapsto & \left(\pi_Y( x), \mu(x,\zeta,\xi) \right) \end{array} \right.$$
is an isomorphism of median algebras, where $\pi_Y : X \to Y$ is the projection onto $Y$.
\end{thm}
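\noindent
As an elementary illustration, let $g$ be the horizontal translation $(a,b) \mapsto (a+1,b)$ of $X= \mathbb{Z}^2$ and let $\gamma$ be the axis $\mathbb{Z} \times \{0\}$, with endpoints $\zeta=(-\infty,0)$ and $\xi=(+\infty,0)$ in $\mathfrak{R}X$. Then $\mathrm{SMin}(g)= \mathbb{Z}^2$, the cubical component $Y$ of $\xi$ is the line $\{+\infty\} \times \mathbb{Z}$, and $I(\zeta,\xi)= \mathbb{Z} \times \{0\}$; the map from Theorem \ref{thm:Iso} sends $(a,b)$ to $\left( (+\infty,b), (a,0) \right)$, an isomorphism $\mathbb{Z}^2 \to Y \times I(\zeta,\xi)$.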
\begin{figure}
\caption{The isomorphism $\mathrm{SMin}(g) \to Y \times I(\zeta,\xi)$ provided by Theorem \ref{thm:Iso}.}
\label{Iso}
\end{figure}
\noindent
The rest of the section is dedicated to the proof of this statement. So let $X$ be a finite-dimensional CAT(0) cube complex and $g \in \mathrm{Isom}(X)$ a loxodromic isometry. Fix an axis $\gamma$ of $g$ and let $\zeta,\xi \in \mathfrak{R}X$ denote its endpoints at infinity. Also, let $Y \subset \mathfrak{R}X$ denote the cubical component which contains $\xi$. Now define the map:
$$\varphi : \left\{ \begin{array}{ccc} X & \to & Y \times I(\zeta,\xi) \\ x & \mapsto & \left( \pi_Y(x), \mu(x,\zeta,\xi) \right) \end{array} \right.,$$
where $\pi_Y : X \to Y$ denotes the projection onto $Y$ as defined by Proposition \ref{prop:ProjInf}.
\begin{lemma}\label{lem:Iso}
Fix some $n \geq 1$, let $Q_n$ denote the union of all the axes of $g^n$ having $\zeta,\xi$ as points at infinity, and let $\mathrm{Fix}_Y(g^n)$ denote the set of points of $Y$ fixed by $g^n$. Then $\mathrm{Min}(g^n)$, $\mathrm{Fix}_Y(g^n)$ and $Q_n$ are three median subalgebras of $\overline{X}$ and $\varphi$ induces an isomorphism of median algebras $\mathrm{Min}(g^n) \to \mathrm{Fix}_Y(g^n) \times Q_n$.
\end{lemma}
\begin{proof}
Set
$$T_n = \left\{ g^{n\infty} \cdot x := \lim\limits_{k \to + \infty} g^{nk} \cdot x \mid x \in \mathrm{Min}(g^n) \right\}.$$
According to \cite[Lemmas 4.10 and 4.13]{FTT}, $\mathrm{Min}(g^n)$, $T_n$ and $Q_n$ are median subalgebras of $\overline{X}$ and
$$\left\{ \begin{array}{ccc} \mathrm{Min}(g^n) & \to & T_n \times Q_n \\ x & \mapsto & \left(g^{n\infty} x, \mu(x,\zeta,\xi) \right) \end{array} \right.$$
is an isomorphism of median algebras. First, we notice that this map is induced by $\varphi$.
\begin{claim}\label{claim:gInfty}
For every $x \in \mathrm{Min}(g^n)$, we have $g^{n \infty} \cdot x = \pi_Y(x)$.
\end{claim}
\noindent
If there exists some $x \in \mathrm{Min}(g^n)$ such that $g^{n \infty} \cdot x \neq \pi_Y(x)$, then there exists some hyperplane $J$ separating $x$ from $g^{n \infty} \cdot x$ which crosses $Y$. Let $\alpha \in Y$ be a point such that $J$ separates $\alpha$ and $g^{n \infty}x$. Because $X$ is finite-dimensional and because $J$ crosses an axis of $g^n$, there must exist some $k \geq 1$ such that $g^{kn}J^+ \subsetneq J^+$, where $J^+$ denotes the halfspace delimited by $J$ which contains $g^{n \infty}x$. But then $\{ g^{nkr}J \mid r \geq 1\}$ defines an infinite family of hyperplanes separating $\alpha$ and $g^{n \infty} x$, which is impossible since $g^{n\infty}x$ and $\alpha$ are two points of the same cubical component $Y$. This concludes the proof of our claim.
\noindent
The next observation required to conclude the proof of our lemma is:
\begin{claim}
We have $T_n=\mathrm{Fix}_Y(g^n)$.
\end{claim}
\noindent
It is clear that $T_n \subset \mathrm{Fix}_Y(g^n)$. Conversely, fix a point $\alpha \in \mathrm{Fix}_Y(g^n)$. If $\alpha = \xi$, there is nothing to prove, so we suppose that $\alpha \neq \xi$. Let $\mathcal{J}$ denote the (non-empty and finite) collection of the hyperplanes separating $\alpha$ and $\xi$. For every $J \in \mathcal{J}$, let $J^+$ denote the halfspace delimited by $J$ which contains $\alpha$. Notice that, because $\alpha$ and $\xi$ are fixed by $g^n$, the intersection $D := \bigcap\limits_{J \in \mathcal{J}} J^+$ is $\langle g^n \rangle$-invariant. Therefore, if we fix a vertex $x \in \gamma$ and if we set $y:= \mathrm{proj}_D(x)$, then, because the projection onto $D$ is 1-Lipschitz according to Lemma \ref{lem:HypProjSeparate}, we know that
$$d(x,g^nx) \leq d(y,g^ny) = d( \mathrm{proj}_D(x), \mathrm{proj}_D(g^nx)) \leq d(x,g^nx),$$
hence $y \in \mathrm{Min}(g^n)$. Moreover, as any hyperplane separating $y$ and $g^ny$ separates $x$ and $g^nx$ according to Lemma \ref{lem:HypProjSeparate}, it follows that the hyperplanes separating $x$ and $g^nx$ are exactly the hyperplanes separating $y$ and $g^ny$. As a consequence, a hyperplane separating $x$ and $y$ has to separate $g^nx$ and $g^ny$. We can iterate the argument and show that a hyperplane separating $g^nx$ and $g^ny$ has to separate $g^{2n}x$ and $g^{2n}y$. And so on. The conclusion is that a hyperplane $J$ separating $x$ and $y$ has to separate $g^{n\infty}x$ and $g^{n \infty} y$. Because such a hyperplane necessarily crosses $Y$, it follows from Claim \ref{claim:gInfty} that it cannot separate $x$ and $\xi$ nor $y$ and $g^{n \infty} y$. On the other hand, we know from Lemma \ref{lem:HypProj} that $J$ separates $x$ from $D$, so that $J$ cannot separate $\alpha$ and $g^{n \infty}y$. We conclude that $J$ separates $\{y,\alpha,g^{n \infty}y\}$ and $\{x,\xi\}$. Thus, we have proved that a hyperplane separating $x$ and $y$ does not separate $\alpha$ and $y$. As a consequence, a hyperplane separating $y$ from $\alpha$ has to separate $x$ from $\xi= \pi_Y(x)$, which implies that it separates $y$ from $Y$. So $\alpha= \pi_Y(y)=g^{n \infty} \cdot y \in T_n$ according to Claim \ref{claim:gInfty}.
\end{proof}
\noindent
As the inclusions $\mathrm{Min}(g^n) \subset \mathrm{Min}(g^m)$, $\mathrm{Fix}_Y(g^n) \subset \mathrm{Fix}_Y(g^m)$ and $Q_n \subset Q_m$ hold for all integers $n,m \geq 1$ such that $n$ divides $m$, we deduce that $\mathrm{SMin}(g)$, $\bigcup\limits_{n \geq 1} \mathrm{Fix}_Y(g^n)$ and $\bigcup\limits_{n \geq 1} Q_n$ are three median subalgebras of $\overline{X}$ and that $\varphi$ induces an isomorphism of median algebras
$$\mathrm{SMin}(g) \to \left( \bigcup\limits_{n \geq 1} \mathrm{Fix}_Y(g^n) \right) \times \left( \bigcup\limits_{n \geq 1} Q_n \right).$$
Theorem \ref{thm:Iso} now follows from the following two equalities.
\begin{claim}\label{claim:StabFix}
If $X$ is uniformly locally finite, then we have $\bigcup\limits_{n \geq 1} \mathrm{Fix}_Y(g^n) = Y$.
\end{claim}
\begin{proof}
According to Lemma \ref{lem:LocallyFiniteInf}, $Y$ is locally finite, so the ball of $Y$ of radius $d_Y(\xi,\alpha)$ centred at $\xi$ is finite for every $\alpha \in Y$. Since $g$ fixes $\xi \in Y$ and acts on $Y$ by isometries, the $\langle g \rangle$-orbit of $\alpha$ is contained in this ball, hence is finite, so that $g^N$ fixes $\alpha$ for some sufficiently large $N \geq 1$. Therefore, any point of $Y$ is fixed by a non-trivial power of $g$.
\end{proof}
\begin{claim}
We have $\bigcup\limits_{n \geq 1} Q_n = I(\zeta,\xi)$.
\end{claim}
\begin{proof}
The inclusion $\bigcup\limits_{n \geq 1} Q_n \subset I(\zeta,\xi)$ is clear. Conversely, let $z \in I(\zeta,\xi)$ be a vertex. According to Lemma \ref{lem:ConvexHullInter}, there exist two vertices $x,y \in \gamma$ such that $z$ belongs to a geodesic $[x,y]$ between $x$ and $y$. Up to exchanging $x$ and $y$, fix a sufficiently large integer $N \geq 1$ so that $y$ lies between $x$ and $g^Nx$ along $\gamma$. Then, for any choice of a geodesic $[y,g^Nx]$ between $y$ and $g^Nx$, the concatenation
$$\bigcup\limits_{k \in \mathbb{Z}} g^{kN} \cdot \left( [x,y] \cup [y,g^Nx] \right)$$
defines an axis of $g^N$ passing through $z$, so that $z \in Q_N$. This proves the reverse inclusion.
\end{proof}
\begin{remark}
It can be shown that our stable minimising set $\mathrm{SMin}(g)$ is not only median but also convex, and that it coincides with the parallel set $Y_g$ introduced in \cite[Section 3]{CubeUniformGrowth}. We do not include a proof of this observation as it will not be used in the sequel. As was pointed out to us by Elia Fioravanti, \cite{EffRAAG} also contains relevant information about minimising sets. In particular, alternative proofs of some of our results can be derived from \cite{EffRAAG}.
\end{remark}
\section{Geometric characterisation of contracting isometries}\label{section:Contracting}
\noindent
In this section, we make explicit the connection between stable minimising sets and the property of being contracting. More precisely, we want to prove:
\begin{thm}\label{thm:SMingQL}
Let $X$ be a uniformly locally finite CAT(0) cube complex and $g \in \mathrm{Isom}(X)$ a loxodromic isometry. Then $g$ is a contracting isometry if and only if $\mathrm{SMin}(g)$ is quasi-isometric to a line.
\end{thm}
\noindent
We already mentioned geometric characterisations of contracting isometries of CAT(0) cube complexes in Section \ref{section:Isom}. The proof of our theorem is based on the following additional characterisation:
\begin{prop}\label{prop:ContractingFlat}
Let $X$ be a locally finite CAT(0) cube complex and $g \in \mathrm{Isom}(X)$ a loxodromic isometry. Fix an axis $\gamma$ of $g$. Then $g$ is a contracting isometry if and only if $\gamma$ is quasiconvex and if there does not exist an isometric embedding $\mathbb{R} \times [0,+ \infty) \hookrightarrow X$ such that $\mathbb{R} \times \{0\}$ is sent into the convex hull of $\gamma$.
\end{prop}
\noindent
We emphasize that, in this statement, $\mathbb{R} \times [0,+ \infty)$ and $X$ are thought of as CAT(0) cube complexes endowed with their graph metrics. Also, recall that a subspace $Y$ of a CAT(0) cube complex is \emph{quasiconvex} if there exists a constant $R \geq 0$ such that any geodesic between two points of $Y$ stays in the $R$-neighborhood of $Y$.
\noindent
Our proposition is essentially contained in \cite{article3}. We include a sketch of proof for the reader's convenience.
\begin{proof}[Sketch of proof of Proposition \ref{prop:ContractingFlat}.]
If $g$ is a contracting isometry, then we know from Proposition \ref{prop:contracting} that its axis $\gamma$ has to be quasiconvex. Moreover, if there exists an isometric embedding $\mathbb{R} \times [0,+ \infty) \hookrightarrow X$ such that $\mathbb{R} \times \{0\}$ is sent into the convex hull of $\gamma$, then any two hyperplanes intersecting $\gamma$ are simultaneously transverse to infinitely many hyperplanes. Consequently, $g$ does not skewer a pair of $L$-separated hyperplanes for any $L \geq 0$, contradicting Proposition \ref{prop:contracting}. Conversely, assume that $g$ is not contracting and that $\gamma$ is quasiconvex. As a consequence of Proposition \ref{prop:contracting}, for every $n \geq 0$ the hyperplanes $A_n$ and $B_n$ dual to the edges $[\gamma(-n-1),\gamma(-n)]$ and $[\gamma(n),\gamma(n+1)]$ of $\gamma$ are simultaneously transverse to infinitely many hyperplanes; fix a hyperplane $C_n$ transverse to both $A_n$ and $B_n$ which satisfies $d(N(C_n),N(\gamma)) \geq n$, where $N(\gamma)$ denotes the convex hull of $\gamma$. As a consequence of \cite[Corollary 2.17]{coningoff}, there exists an isometric embedding $R_n : [-a_n,b_n] \times [0,c_n] \hookrightarrow X$ such that $R_n(0,0)$ stays in a fixed neighborhood of $\gamma(0)$ when $n$ varies, and such that $[-a_n,b_n] \times \{0\}$ is sent into $N(\gamma)$, $\{b_n\} \times [0,c_n]$ into $N(B_n)$, $[-a_n,b_n] \times \{c_n\}$ into $N(C_n)$, and $\{-a_n\} \times [0,c_n]$ into $N(A_n)$. Notice that $a_n,b_n,c_n \to + \infty$. Because $X$ is locally finite, we can extract from $(R_n)$ a subsequence converging to an isometric embedding $\mathbb{R} \times [0,+ \infty) \hookrightarrow X$ such that $\mathbb{R} \times \{0\}$ is sent into the convex hull of $\gamma$.
\end{proof}
\noindent
Now we are ready to prove our theorem.
\begin{proof}[Proof of Theorem \ref{thm:SMingQL}.]
Fix an axis $\gamma$ of $g$, let $\zeta,\xi$ denote the endpoints at infinity of $\gamma$, and let $Y \subset \mathfrak{R}X$ be the cubical component containing $\xi$. As a consequence of Theorem~\ref{thm:Iso}, $\mathrm{SMin}(g)$ is a quasi-line if and only if $I(\zeta,\xi)$ is a quasi-line and if $Y$ is bounded. Also, as a consequence of Lemma \ref{lem:ConvexHullInter}, $\gamma$ is quasiconvex if and only if $I(\zeta,\xi)$ is a quasi-line.
\begin{claim}
Assume that $I(\zeta,\xi)$ is a quasi-line. If $Y$ is unbounded, then there exists an isometric embedding $\mathbb{R} \times [0,+ \infty) \hookrightarrow X$ such that $\mathbb{R} \times \{0\}$ is sent into the convex hull of the axis $\gamma$.
\end{claim}
\noindent
Let $(\xi_n)$ be a sequence of points of $Y$ such that $d_Y(\xi, \xi_n) \to + \infty$. Also, fix a vertex $z \in \gamma$, and set $x_n=g^nz$ and $y_n=g^{-n}z$ for every $n \geq 1$. For convenience, we identify the points $(\xi_n,x_n)$, $(\xi_n,y_n)$, $(\xi,x_n)$ and $(\xi,y_n)$ of $Y \times I(\zeta,\xi)$ with the vertices of $\mathrm{SMin}(g)$ given by the isomorphism of Theorem \ref{thm:Iso}. Notice that it follows from Lemma \ref{lem:WallHyp} that the distance between $(\xi_n,x_n)$ and $(\xi,x_n)$ tends to infinity as $n \to + \infty$, because the number of walls in $Y \times I(\zeta,\xi)$ separating these two points tends to infinity as well. Notice also that $(\xi_n,x_n)$ and $(\xi,y_n)$ belong to the interval between $(\xi,x_n)$ and $(\xi_n,y_n)$, and that $(\xi,x_n)$ and $(\xi_n,y_n)$ belong to the interval between $(\xi_n,x_n)$ and $(\xi,y_n)$. As a consequence of \cite[Lemma 2.110]{Qm}, there exists an isometric embedding $R_n : [0,a_n] \times [0,b_n]\hookrightarrow X$ such that $(0,0)$, $(a_n,0)$, $(0,b_n)$ and $(a_n,b_n)$ are sent respectively to $(\xi,x_n)$, $(\xi,y_n)$, $(\xi_n,x_n)$ and $(\xi_n,y_n)$. Notice that, since $\gamma$ is quasiconvex, there exists some constant $R \geq 0$ (which does not depend on $n$) such that (the image of) $[0,a_n] \times \{0\}$ intersects the ball $B(z,R)$. Because $X$ is locally finite, up to extracting a subsequence, we may suppose without loss of generality that $B(z,r) \cap \mathrm{Im}(R_n)$ is eventually constant for every $r \geq 1$. Therefore, $(R_n)$ converges to an isometric embedding $R : \mathbb{R} \times [0,+ \infty) \hookrightarrow X$ such that $\mathbb{R} \times \{0\}$ is sent into the convex hull of $\gamma$. This concludes the proof of our claim.
\noindent
We are now ready to conclude. First, assume that $g$ is a contracting isometry. We know from \cite[Lemma 3.3]{MR3175245} that $\gamma$ has to be quasiconvex, so $I(\zeta,\xi)$ must be a quasi-line. It then follows from the previous claim and from Proposition \ref{prop:ContractingFlat} that $Y$ must be bounded.
\noindent
Conversely, assume that $g$ is not contracting. If $I(\zeta,\xi)$ is not a quasi-line, there is nothing to prove, so assume also that $I(\zeta,\xi)$ is a quasi-line. As a consequence of Proposition~\ref{prop:ContractingFlat}, there exists an isometric embedding $\mathbb{R} \times [0,+ \infty) \hookrightarrow X$ such that $\mathbb{R} \times \{0\}$ is sent into the convex hull of $\gamma$. For every $n \geq 0$, let $\rho_n$ denote the geodesic ray of $X$ corresponding to the image of $\left( \{0\} \times [0,n] \right) \cup \left( [0,+ \infty) \times \{n\} \right)$. For every $n \geq 0$, let $\alpha_n$ denote the orientation of $X$ containing all the halfspaces of $X$ in which $\rho_n$ eventually lies. Notice that any two rays among the $\rho_n$'s and any subray of $\gamma$ pointing to $\xi$ cross the same hyperplanes up to finitely many exceptions. As a consequence, the orientations $\alpha_n$ and the point $\xi$ all belong to the same cubical component, namely $Y$; since the $\alpha_n$'s are pairwise distinct, $Y$ must be infinite. As $Y$ is locally finite according to Lemma \ref{lem:LocallyFiniteInf}, we conclude that it must be unbounded.
\end{proof}
\section{Centralisers of rank-one isometries}\label{section:Main}
\noindent
This section is dedicated to the main result of the article, which makes explicit, for an element $g$ of a group $G$ acting geometrically on a CAT(0) cube complex, the connection between the \emph{stable centraliser} $SC_G(g)= \{ h \in G \mid \exists n \geq 1, [h,g^n]=1\}$ and the property of being contracting. More precisely:
\begin{thm}\label{thm:StableCentraliser}
Let $G$ be a group acting geometrically on a CAT(0) cube complex $X$, and $g \in G$ an infinite-order element. Fix a cubical component $Y \subset \mathfrak{R}X$ which contains an endpoint of an axis of $g$. Then exactly one of the following situations occurs:
\begin{itemize}
\item $g$ defines a rank-one isometry of $X$;
\item the stable centraliser $SC_G(g)$ of $g$ is not virtually cyclic;
\item $\mathrm{Fix}_Y(g^n)$ is finite for every $n \geq 1$ and the sequence $(\mathrm{Fix}_Y(g^n))$ takes infinitely many values.
\end{itemize}
\end{thm}
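\noindent
For instance, let $G= F \times \mathbb{Z}$, where $F$ is the free group of rank two, act on the product $X= T \times \mathbb{R}$ of the Cayley tree of $F$ with the real line (cubulated by unit intervals). For $g=(a,0)$ with $a \neq 1$, every axis of $g$ is of the form $\ell \times \{t\}$, where $\ell$ is the axis of $a$ in $T$, and bounds the flat half-plane $\ell \times [t,+\infty)$, so $g$ is not rank-one; accordingly, $SC_G(g)$ contains $\langle a \rangle \times \mathbb{Z} \simeq \mathbb{Z}^2$ and the second case of the theorem occurs.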
\noindent
Before turning to the proof of our theorem, we begin by proving the following lemma:
\begin{lemma}\label{lem:Qstable}
Let $X$ be a CAT(0) cube complex and $g \in \mathrm{Isom}(X)$ an isometry. Assume that $g$ is loxodromic, fix one of its axes $\gamma$ and let $\zeta,\xi$ denote its points at infinity. Also, assume that $J$ and $gJ$ cannot be transverse for any hyperplane $J$. Then the union of all the axes of $g$ having $\zeta,\xi$ as endpoints at infinity coincides with $I(\zeta,\xi)$.
\end{lemma}
\begin{proof}
Fix a vertex $y \in I(\zeta,\xi)$. We want to prove that there exists an axis of $g$ passing through $y$ and having $\zeta,\xi$ as endpoints. We assume that $d(y,\gamma)=1$, the general case following by induction. So let $x \in \gamma$ be a vertex adjacent to $y$. Because $y$ belongs to $I(\zeta,\xi)$, the hyperplane $J$ separating $x$ and $y$ has to separate $\zeta$ and $\xi$, so that $J$ meets $\gamma$ along an edge $[a,b]$. Up to replacing $g$ with $g^{-1}$, we may suppose without loss of generality that $[a,b]$ is on the left of $x$ (if we endow $\gamma$ with a left-right orientation so that $g$ translates the points of $\gamma$ to the right). Of course, the hyperplane $gJ$ has to intersect the axis $\gamma$ along the edge $g [a,b]$. We distinguish two cases.
\noindent
First, assume that $g [a,b]$ is included into the subsegment $[x,gx] \subset \gamma$. Notice that $gJ$ does not separate $y$ and $gy$. Indeed, let $D$ denote the halfspace delimited by $gJ$ which contains $gx$. Because $gJ$ separates $gx$ and $gy$, we have $gx \in D$ and $gy \in D^c$. We also know that $gJ$ separates $x$ and $gx$, so that $x \in D^c$. Next, $J$ is the unique hyperplane separating $x$ and $y$, so we deduce from $J \neq gJ$ that $gJ$ does not separate $x$ and $y$. Consequently, $y$ has to belong to $D^c$ since $x \in D^c$. Thus, we have proved that $y$ and $gy$ both belong to $D^c$, as desired. Notice also that $J$ separates $y$ and $gy$ since it cannot separate $x$ and $gx$ (otherwise it would cross $\gamma$ twice) and it cannot separate $gx$ and $gy$ because $gJ$ is the unique hyperplane separating $gx$ and $gy$. The conclusion is that the hyperplanes separating $y$ and $gy$ are exactly $J$ and the hyperplanes separating $x$ and $gx$ which are different from $gJ$. As a consequence, the number of hyperplanes separating $x$ and $gx$ equals the number of hyperplanes separating $y$ and $gy$, hence $d(x,gx)=d(y,gy)$. We conclude that $y$ belongs to $\mathrm{Min}(g)$.
\noindent
Second, assume that $g [a,b]$ is included into the subsegment $[b,x] \subset \gamma$. Because $J$ intersects $\gamma$ just once, it has to separate $a$ from $\{b,x,gx\}$. We know that $J$ separates $x$ and $y$, and we know that it cannot separate $gx$ and $gy$ since $gJ$ is the unique hyperplane separating $gx$ and $gy$. Consequently, $J$ separates $\{a,y\}$ and $\{x,gx,gy,b\}$. Next, because $gJ$ intersects $\gamma$ just once, it has to separate $\{a,b\}$ and $\{x,gx \}$. We know that $gJ$ separates $gx$ and $gy$, and we know that it cannot separate $x$ and $y$ since $J$ is the unique hyperplane separating $x$ and $y$. Consequently, $gJ$ separates $\{a,b, gy\}$ and $\{x,gx\}$. It follows that $J$ and $gJ$ are transverse, which is impossible.
\noindent
So far, we have proved that $y$ belongs to $\mathrm{Min}(g)$. It remains to show that $g^\infty y =\xi$ and $g^{-\infty}y=\zeta$. If there exists a hyperplane $J$ separating $g^\infty y$ and $g^\infty x$, then such a hyperplane has to separate $g^n y$ and $g^nx$ for some sufficiently large $n \geq 1$. Up to translating $J$ by $g^{-n}$, we may suppose without loss of generality that $J$ separates $x$ and $y$. On the other hand, according to \cite[Lemma 4.11]{FTT}, the fact that $J$ separates $g^\infty x$ and $g^\infty y$ implies that $J$ does not cross the axis $\gamma$. Therefore, $J$ has to separate $y$ from $\{\zeta,\xi\}$, contradicting the fact that $y$ belongs to $I(\zeta,\xi)$. We conclude that $g^\infty y = g^\infty x = \xi$. One shows similarly that $g^{- \infty}y= \zeta$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:StableCentraliser}.]
Up to subdividing $X$, we may suppose without loss of generality that $g$ is loxodromic. If $g$ is a contracting isometry, then it follows for instance from the combination of \cite[Theorems 1.3 and 1.5]{SistoContracting}, \cite[Theorem 1.4]{OsinAcyl} and \cite[Corollary 6.6]{DGO} that its stable centraliser is virtually cyclic (although a direct proof is possible). Conversely, assume that $g$ is not contracting. According to Theorem \ref{thm:SMingQL}, $\mathrm{SMin}(g)$ is not a quasi-line. Let $Y \times I(\zeta,\xi)$ be the decomposition of $\mathrm{SMin}(g)$ given by Theorem \ref{thm:Iso}. We distinguish two cases.
\noindent
Case 1: $Y$ is unbounded. If $\mathrm{Fix}_Y(g^n)$ is bounded for every $n \geq 1$, then the sequence $(\mathrm{Fix}_Y(g^n))$ has to take infinitely many values as $\bigcup\limits_{n \geq 1} \mathrm{Fix}_Y(g^n)=Y$ according to Claim \ref{claim:StabFix}. So assume that $\mathrm{Fix}_Y(g^n)$ is unbounded for some $n \geq 1$. Because we know from Lemma \ref{lem:centraliserCC} that $C_G(g^n)$ acts geometrically on $\mathrm{Min}(g^n)$, which is isomorphic to $\mathrm{Fix}_Y(g^n) \times Q_n$ according to Lemma \ref{lem:Iso}, it follows that $C_G(g^n)$ cannot be virtually cyclic. Consequently, $SC_G(g)$ cannot be virtually cyclic either.
\noindent
Case 2: $Y$ is bounded. Fix an integer $n \geq 1$ such that $J$ and $g^nJ$ cannot be transverse for any hyperplane $J$ of $X$. (Such an integer exists for instance according to \cite[Lemma 2.2]{HaettelArtin}.) Let $\mathrm{Fix}_Y(g^n) \times Q_n$ be the decomposition of $\mathrm{Min}(g^n)$ given by Lemma~\ref{lem:Iso}. We know from Lemma \ref{lem:centraliserCC} that $C_G(g^n)$ acts geometrically on this median subalgebra, and it follows from Lemma \ref{lem:Qstable} that $Q_n= I(\zeta,\xi)$. Because Lemma \ref{lem:LocallyFiniteInf} implies that $Y$ must be finite, it follows that $C_G(g^n)$ contains a finite-index subgroup $C$ which acts geometrically on $I(\zeta,\xi)$. But, because $Y \times I(\zeta,\xi)$ is not a quasi-line and $Y$ is bounded, necessarily $I(\zeta,\xi)$ cannot be a quasi-line, so that $C$, and a fortiori $C_G(g^n)$ and $SC_G(g)$, cannot be virtually cyclic.
\end{proof}
\begin{remark}
Interestingly, the arguments above show that, if an isometry $g$ admits an axis $\gamma$ which is not quasiconvex, then its stable centraliser is not virtually cyclic. Indeed, it follows from Lemma \ref{lem:Qstable} that there exists some $n \geq 1$ such that $\mathrm{Min}(g^n)$ contains $I(\zeta,\xi)$. But the centraliser of $g^n$ acts geometrically on $\mathrm{Min}(g^n)$ according to Lemma \ref{lem:centraliserCC} and we know from Lemma \ref{lem:ConvexHullInter} that the interval $I(\zeta,\xi)$ is not a quasi-line if $\gamma$ is not quasiconvex. A consequence of this observation is that, if our cube complex decomposes as the Cartesian product of unbounded complexes and if the stable centraliser of our isometry $g$ is virtually cyclic, then $g$ has to preserve one of the factors (up to finite Hausdorff distance). This explains why, in Rattaggi's example (described in the introduction), the isometry of the product of trees stabilises a factor.
\end{remark}
\section{Applications}\label{section:Applications}
\subsection{Special cube complexes}\label{section:Special}
\noindent
As a first application of Theorem \ref{thm:StableCentraliser}, we prove that:
\begin{thm}\label{thm:Special}
Let $G$ be a group acting geometrically on a CAT(0) cube complex $X$. Assume that, for every hyperplane $J$ and every element $g \in G$, the hyperplanes $J$ and $gJ$ are neither transverse nor tangent. Then an infinite-order element $g \in G$ defines a rank-one isometry of $X$ if and only if its stable centraliser $SC_G(g)$ is virtually cyclic.
\end{thm}
\begin{proof}
Fix an axis $\gamma$ of $g$ and let $Y \subset \mathfrak{R}X$ be a cubical component containing an endpoint of $\gamma$. We claim that $\mathrm{Fix}_Y(g^n)=Y$ for every $n \geq 1$. So let $n \geq 1$ be an integer and let $\zeta \in \mathrm{Fix}_Y(g^n)$ be a point.
\noindent
If $\xi \in Y$ is a point adjacent to $\zeta$, then there exists a unique hyperplane $J$ which separates them. Of course, $g^n \xi$ must be adjacent to $\zeta$ as well since $g^n$ fixes $\zeta$, so that $g^nJ$ is the unique hyperplane separating $\zeta$ and $g^n\xi$. If $g^nJ \neq J$ then $J$ and $g^nJ$ are the unique hyperplanes separating $\xi$ and $g^n \xi$. Notice that, if $N(J)$ and $N(g^nJ)$ are disjoint, then it follows from Corollary \ref{cor:HypSepTwoConvex} that there exists a hyperplane $H$ separating them. But such a hyperplane would separate $\xi$ and $g^n \xi$, which is impossible. Therefore, $J$ and $g^nJ$ must be either transverse or tangent. Because such a configuration is forbidden by assumption, we deduce that $g^nJ = J$. But then $\xi$ and $g^n\xi$ both lie in the halfspace delimited by $J$ which does not contain $\zeta$, and any hyperplane separating $\xi$ from $g^n\xi$ would separate $\zeta$ from one of them, hence would coincide with $J$; it follows that no hyperplane separates $\xi$ and $g^n\xi$, i.e., $\xi= g^n\xi$.
\noindent
Thus, we have proved that $g^n$ fixes all the neighbors of $\zeta$. By arguing by induction over the distance to $\zeta$, we deduce that $g^n$ fixes $Y$ entirely. Now, the desired conclusion follows from Theorem \ref{thm:StableCentraliser}.
\end{proof}
\noindent
As a particular case of Theorem \ref{thm:Special}, one gets:
\begin{cor}
Let $X$ be a compact special cube complex. A non-trivial element $g \in \pi_1(X)$ defines a rank-one isometry of $\widetilde{X}$ if and only if its centraliser in $\pi_1(X)$ is cyclic.
\end{cor}
\begin{proof}
Because $X$ does not contain self-intersecting or self-osculating hyperplanes, it follows that the action of $\pi_1(X)$ on the universal cover $\widetilde{X}$ satisfies the assumption of Theorem \ref{thm:Special}. Therefore, $g$ (which has infinite order since $\pi_1(X)$ is torsion-free) defines a rank-one isometry of $\widetilde{X}$ if and only if its stable centraliser is virtually cyclic, or equivalently cyclic since $\pi_1(X)$ is torsion-free. Notice that an element of $\pi_1(X)$ commutes with a power of $g$ if and only if it commutes with $g$ itself. Indeed, such a property holds for right-angled Artin groups (as an immediate consequence of \cite[Theorem 1.2]{MR634562}) and according to \cite[Theorem 4.2, Lemma 4.3]{Special} the fundamental group of a special cube complex always embeds into a right-angled Artin group. Therefore, the stable centraliser of $g$ turns out to coincide with its centraliser.
\end{proof}
\subsection{Some two-dimensional cube complexes}\label{section:DimTwo}
\noindent
Our second application of Theorem \ref{thm:StableCentraliser} is:
\begin{thm}\label{thm:TwoDim}
Let $G$ be a group acting geometrically on a two-dimensional CAT(0) cube complex $X$. Assume that the link of a vertex of $X$ cannot contain an induced copy of $K_{2,3}$ in its one-skeleton. Then an infinite-order element $g \in G$ defines a rank-one isometry of $X$ if and only if its stable centraliser is virtually cyclic.
\end{thm}
\noindent
The theorem will be essentially an immediate consequence of Theorem \ref{thm:StableCentraliser} combined with the following lemma:
\begin{lemma}\label{lem:BoundaryTree}
Let $X$ be a two-dimensional CAT(0) cube complex. Assume that the link of a vertex of $X$ cannot contain an induced copy of $K_{2,3}$ in its one-skeleton. A cubical component of $\mathfrak{R}X$ must be a linear tree.
\end{lemma}
\begin{proof}
Let $Y$ be a cubical component of $\mathfrak{R}X$. As the dimension of $Y$ must be smaller than the dimension of $X$, we know that $Y$ is a tree. It remains to show that a vertex of $Y$ has at most two neighbors. Assume for contradiction that $Y$ contains a vertex $\xi$ with at least three neighbors $\alpha,\beta,\gamma \in Y$. Let $A,B,C$ denote the three hyperplanes separating $\xi$ from $\alpha,\beta,\gamma$ respectively. Notice that any two hyperplanes among $A,B,C$ are disjoint in $X$ as they are not transverse in $Y$.
\noindent
We claim that the carriers $N(A)$, $N(B)$ and $N(C)$ pairwise intersect.
\noindent
Indeed, if $N(A)$ and $N(B)$ are disjoint, it follows from Corollary \ref{cor:HypSepTwoConvex} that there exists a hyperplane $J$ separating $A$ and $B$. Because $\alpha$ and $\beta$ differ on $A$ and $B$, necessarily they differ on $J$ as well. But $\alpha$ and $\beta$ are two vertices of $Y$ at distance two apart, so they only differ on two hyperplanes. Consequently, $N(A)$ and $N(B)$ have to intersect. One shows similarly that $N(A)$ and $N(C)$, and $N(B)$ and $N(C)$, also intersect, concluding the proof of our claim.
\noindent
It follows from Helly's property that the intersection $N(A) \cap N(B) \cap N(C)$ is non-empty. Fix one of its vertices $x$. Notice that, if we fix a hyperplane $J$ separating $x$ from $Y$ (which exists as a consequence of Lemma \ref{lem:DecreasingInf}), then $J$ has to be transverse to $A$, $B$ and $C$. Once again as a consequence of Helly's property, there exists a vertex $z$ which belongs to the intersection $N(J) \cap N(A) \cap N(B) \cap N(C)$. By construction, the subgraph in the link of $z$ generated by the vertices which correspond to the edges adjacent to $z$ and dual to $A,B,C,J$ must be isomorphic to $K_{2,3}$, a contradiction.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:TwoDim}.]
Because an elliptic isometry of a linear tree always has order at most two, it follows from Lemma \ref{lem:BoundaryTree} that the third case of the trichotomy provided by Theorem \ref{thm:StableCentraliser} cannot happen. The desired conclusion follows.
\end{proof}
\subsection{Centralisers of regular elements}
\noindent
Our last application is the following statement:
\begin{thm}\label{thm:Regular}
Let $G$ be a group acting geometrically on a CAT(0) cube complex $X$. Assume that $X$ decomposes as a product $X_1 \times \cdots \times X_n$ of $n \geq 1$ unbounded irreducible CAT(0) cube complexes. If $g \in G$ is a regular element, then $SC_G(g)$ is virtually $\mathbb{Z}^n$.
\end{thm}
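\noindent
For instance, let $G= F \times F'$ be a product of two free groups of rank two acting on the product $X=T \times T'$ of their Cayley trees. An element $g=(a,b)$ with $a,b \neq 1$ acts as a loxodromic, hence rank-one, isometry on each factor, so it is regular, and $SC_G(g)= C \times C'$, where $C$ (resp.\ $C'$) denotes the maximal cyclic subgroup of $F$ (resp.\ $F'$) containing $a$ (resp.\ $b$); thus $SC_G(g) \simeq \mathbb{Z}^2$, as predicted by the theorem.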
\noindent
We begin by proving a preliminary lemma:
\begin{lemma}\label{lem:BoundedInf}
Let $X$ be a CAT(0) cube complex, $g \in \mathrm{Isom}(X)$ a loxodromic isometry and $\gamma$ an axis of $g$. If $g$ is a contracting isometry, then the cubical component of $\mathfrak{R}X$ containing $\gamma(+ \infty)$ is bounded.
\end{lemma}
\begin{proof}
Let $Y$ denote the cubical component of $\mathfrak{R}X$ which contains $\gamma(+ \infty)$. As a consequence of Proposition \ref{prop:contracting}, there exist an integer $L \geq 1$ and a sequence of pairwise $L$-separated hyperplanes $J_1,J_2, \ldots$ such that $J_i$ separates $J_{i-1}$ and $J_{i+1}$ for every $i \geq 2$. For every $i \geq 1$, let $J_i^+$ denote the halfspace delimited by $J_i$ which contains $J_{i+1}$; notice that $Y \subset J_i^+$ as a consequence of Lemma \ref{lem:DecreasingInf}. Now fix two vertices $\alpha, \beta \in Y$ and let $\mathcal{J}$ denote the collection of the hyperplanes separating them.
\noindent
Let $J \in \mathcal{J}$. Fix two vertices $x,y \in X$ separated by $J$; say that $x$ and $\alpha$ belong to the same halfspace delimited by $J$. Because there exist only finitely many hyperplanes separating a given vertex $z \notin J_1^+$ from $x$ or $y$, it follows that there exists some $i_0 \geq 1$ such that $x,y \notin J_i^+$ for every $i \geq i_0$. As $J$ separates $\{\alpha,x\}$ and $\{\beta,y\}$, and, for every $i \geq i_0$, $J_i$ separates $\{x,y\}$ and $\{\alpha,\beta\}$, we deduce that $J$ and $J_i$ are transverse.
\noindent
Thus, we have proved that any hyperplane of $\mathcal{J}$ is transverse to all but finitely many $J_1,J_2, \ldots$, which implies that $\mathcal{J}$ has cardinality at most $L$. Therefore, $d(\alpha,\beta) = \# \mathcal{J} \leq L$. The conclusion is that $Y$ has diameter at most $L$.
\end{proof}
\noindent
We are now ready to prove our theorem.
\begin{proof}[Proof of Theorem \ref{thm:Regular}.]
Let $\gamma$ be an axis of $g$. For every $1 \leq i \leq n$, let $\gamma_i$ denote the projection of $\gamma$ onto $X_i$. Then $\gamma_i$ is a bi-infinite geodesic of $X_i$ on which $g$ acts by translations, so it is an axis of $g$ with respect to the induced action $\langle g \rangle \curvearrowright X_i$. Let $\zeta_i, \xi_i \in \mathfrak{R}X_i$ denote the endpoints at infinity of $\gamma_i$, and $\zeta, \xi \in \mathfrak{R}X$ the endpoints at infinity of $\gamma$. Also, let $Y_i$ denote the cubical component of $\mathfrak{R}X_i$ which contains $\xi_i$, and $Y$ the cubical component of $\mathfrak{R}X$ which contains $\xi$. Notice that $Y= Y_1 \times \cdots \times Y_n$, so that it follows from Lemma \ref{lem:BoundedInf} that $Y$ is bounded, and in fact finite as a consequence of Lemma \ref{lem:LocallyFiniteInf}.
\noindent
We claim that, if $m \geq 1$ is an integer such that $J$ and $g^mJ$ cannot be transverse for any hyperplane $J$ of $X$, then $C_G(g^m)$ is virtually $\mathbb{Z}^n$. (Such an integer exists for instance according to \cite[Lemma 2.2]{HaettelArtin}.)
\noindent
Let $\mathrm{Fix}_Y(g^m) \times Q_m$ be the decomposition of $\mathrm{Min}(g^m)$ given by Lemma \ref{lem:Iso}. We know from Lemma \ref{lem:centraliserCC} that $C_G(g^m)$ acts geometrically on this median subalgebra, and it follows from Lemma \ref{lem:Qstable} that $Q_m= I(\zeta,\xi)$. Because $Y$ is finite, it follows that $C_G(g^m)$ contains a finite-index subgroup $C$ which acts geometrically on $I(\zeta,\xi)$. On the other hand, we have
$$I(\zeta,\xi) = \prod\limits_{i=1}^n I(\zeta_i,\xi_i) = \prod\limits_{i=1}^n \mathrm{ConvexHull}(\gamma_i),$$
where the last equality is justified by Lemma \ref{lem:ConvexHullInter}. Moreover, we know from Proposition~\ref{prop:contracting} that $\gamma_i$ is a Morse geodesic, so that we deduce from Lemma \ref{lem:ConvexHullInter} that the convex hull of $\gamma_i$ has to stay in a neighborhood of $\gamma_i$. In other words, the interval $I(\zeta,\xi)$ is a product of $n$ quasi-lines. Therefore, $C_G(g^m)$ has to be virtually $\mathbb{Z}^n$, concluding the proof of our claim.
\noindent
Because $C_G(g^p)$ is contained in $C_G(g^q)$ for all integers $p,q \geq 1$ such that $p$ divides $q$, it follows that $SC_G(g)$ is a union of subgroups which are all virtually $\mathbb{Z}^n$. But, according to \cite[Theorem II.7.5]{MR1744486}, a non-decreasing union of virtually abelian subgroups in a CAT(0) group must be eventually constant, so we conclude that the stable centraliser $SC_G(g)$ has to be virtually $\mathbb{Z}^n$.
\end{proof}
\section{Open questions}\label{section:Questions}
\noindent
In Sections \ref{section:Special} and \ref{section:DimTwo}, we have shown that the third case of the trichotomy provided by Theorem \ref{thm:StableCentraliser} does not happen in some cases. It would be interesting to find a similar phenomenon for other non-exotic CAT(0) cube complexes.
\begin{question}\label{q:CubedManifold}
Let $M$ be a compact cubed manifold. Is it true that a non-trivial element $g \in \pi_1(M)$ defines a rank-one isometry of the universal cover $\widetilde{M}$ if and only if its stable centraliser is infinite cyclic?
\end{question}
\noindent
Recall that a \emph{cubed manifold} is a manifold which admits a tessellation as a nonpositively curved cube complex.
\noindent
Another interesting direction would be to extend (a variation of) Theorem \ref{thm:StableCentraliser} to CAT(0) groups. As a particular case of interest:
\begin{question}
Let $M$ be a compact Riemannian manifold of nonpositive curvature. Is it true that a non-trivial element $g \in \pi_1(M)$ defines a rank-one isometry of the universal cover $\widetilde{M}$ if and only if its stable centraliser is infinite cyclic?
\end{question}
\noindent
Let us conclude this article with a discussion about the following famous open question:
\begin{question}\label{question:Famous}
Let $G$ be a group acting geometrically on a CAT(0) cube complex (or more generally, a CAT(0) space). If $G$ does not contain $\mathbb{Z}^2$, is $G$ necessarily hyperbolic?
\end{question}
\noindent
It is worth noticing that, if the action of $G$ on its CAT(0) cube complex $X$ is such that the third point of Theorem \ref{thm:StableCentraliser} cannot happen, then the fact that $G$ does not contain $\mathbb{Z}^2$ implies that all the infinite-order elements of $G$ are rank-one isometries of $X$. This observation leads to the following natural question:
\begin{question}\label{question:AllContracting}
Let $G$ be a group acting geometrically on a CAT(0) cube complex $X$ (or more generally, a CAT(0) space). Assume that any infinite-order element of $G$ defines a rank-one isometry of $X$. Is $G$ hyperbolic?
\end{question}
\noindent
(It is not difficult to show that, given a group $G$ acting geometrically on a CAT(0) space $X$, the group $G$ is hyperbolic if and only if there exists some $D \geq 0$ such that all the infinite-order elements of $G$ are $D$-contracting isometries of $X$. This makes Question \ref{question:AllContracting} even more natural.)
\noindent
For instance, it may be expected that combining positive answers to Questions~\ref{q:CubedManifold} and \ref{question:AllContracting} would lead to a positive answer to Question \ref{question:Famous} for fundamental groups of compact cubed manifolds, generalising \cite{WeakHyp}.
\end{document} |
\begin{document}
\title{A goodness of fit test for two component two parameter Weibull mixtures}
\maketitle
\begin{center}
{\large Richard A. Lockhart}
{\small{\it{Department of Statistics and Actuarial Science, Simon Fraser
University, Burnaby, B.C. V5A 1S6, Canada}}}
{\large Chandanie W. Navaratna}
{\small{\it{Department of Mathematics,
The Open University of Sri Lanka,
Nawala, Nugegoda,
Sri Lanka}}}
\end{center}
\begin{abstract}
Fitting mixture distributions is needed in applications where the data come from an inhomogeneous population made up of homogeneous subpopulations. The mixing proportions of the subpopulations are in general unknown and need to be estimated as well. A goodness of fit test based on the empirical distribution function is proposed for assessing the fit of models comprising two components, each distributed as a two parameter Weibull. The applicability of the proposed test procedure is established empirically through a Monte Carlo simulation study. The procedure can easily be adapted to handle two component mixtures with different component distributions.
\end{abstract}
\section{Introduction}
Fitting mixture distributions is needed in applications where the data come from an inhomogeneous population made up of homogeneous subpopulations, whose mixing proportions are in general unknown and need to be estimated as well. In this article we propose a goodness of fit test, based on the empirical distribution function, for assessing the fit of mixtures of two components, each distributed as a two parameter Weibull.
The rest of the article is organized as follows. Section 2 describes the mathematical formulation of the problem. Section 3 illustrates the computation of the test statistic. Section 4 gives the asymptotic distribution of the proposed test statistic. Section 5 outlines a procedure for computing p-values based on the proposed test. Section 6 presents the results of a Monte Carlo simulation study that provides empirical evidence for the applicability of the proposed test procedure. Section 7 offers concluding remarks along with a discussion.
\section{Two parameter Weibull mixture model and testing goodness of fit}
A random variable or vector $X$ is said to follow a finite mixture distribution if its probability density function (or probability mass function in the case of discrete $X$), $f(x)$, can be represented by a function of the form $f(x)=p_{1}f_{1}(x,\vec{\theta}_{1})+p_{2}f_{2}(x,\vec{\theta}_{2})+\cdots +p_{k}f_{k}(x,\vec{\theta}_{k}),$ where $p_{i}\ge 0,$ for $i=1,2,\cdots,k,$ are the mixing proportions with $\sum_{i=1}^{k} p_{i}=1$ and $f_{i}(\cdot) \ge 0$ are the density (or mass) functions of the components in the mixture, so that $\int_{\Omega} f_{i}(x)\, dx =1$ (or, in the discrete case, $\sum_{x \in \Omega} f_{i}(x)=1$); here $\vec{\theta}_{i}$ denotes the vector of parameters of the $i$th component density.
We assume that the mixture is identifiable, so that two representations $ \sum_{i} p_{i} f_{i}(x,\vec{\theta}_{i})$ and $\sum_{j} p'_{j}f_{j}(x,\vec{\theta}'_{j})$ are equal for all $x$ if and only if they have the same number of components and, after a suitable permutation of the component labels, $p_{i}=p'_{i}$ and $f_{i}(x,\vec{\theta}_{i})=f_{i}(x,\vec{\theta}'_{i})$ for all $i$.
In this work, we confine ourselves to identifiable mixtures with two components so that $k=2$ and each component density is a two-parameter Weibull density given by
\[ f_{i}(x,\alpha_{i},\beta_{i}) =\frac{\alpha_{i}}{\beta_{i}} \left( \frac{x}{\beta_{i}}\right )^{\alpha_{i}-1} \exp \left(-\left( \frac{x}{\beta_{i}}\right)^{\alpha_{i}} \right), \qquad x>0. \]
The parameters $\alpha_{1}, \alpha_{2}$ are the shape parameters, $\beta_{1}, \beta_{2}$ are the scale parameters, and $\vec{\theta}_{i}=(\alpha_{i},\beta_{i})^{T}$ for $i=1,2.$ This model takes the location parameters of the two component densities to be the same (both equal to zero).
In this two component model, let $p_{1}=p$ so that $p_{2}=1-p.$ Let $F(x,\vec{\theta})$ denote the mixture distribution function, where $\vec{\theta}=(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},p)^{T}$.
Given a random sample of $n$ observations from the distribution $F(x,\vec{\theta}),$ the goodness of fit problem can be stated as a test of the null hypothesis that the distribution of the data is a two parameter Weibull mixture with parameter vector $\vec{\theta}$, which in general needs to be estimated.
In the recent past, Weibull mixture models have been extensively used in modeling wind data (\cite{akdag}, \cite{kollu}, \cite{sult}). In many of these studies, the goodness of the fitted models is examined using the Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), chi-squared tests, the Root Mean Squared Error (RMSE) and the Kolmogorov-Smirnov (K-S) test. Sultan {\em et al.} \cite{sult} report what they refer to as a correlation goodness of fit test for mixtures of two Weibull distributions. In this work, we suggest a procedure for computing approximate p-values for testing goodness of fit of two component two parameter Weibull mixtures based on the Cram\'er-von Mises statistic.
\section{Computation of the test statistic}
Let $F_{n}(x)$ denote the empirical distribution function of the data, defined by $F_{n}(x)=\frac{1}{n} \sum_{i=1}^{n} I(x_{i} \le x), -\infty <x< \infty, $ where the indicator $I(A)$ equals 1 if the event $A$ holds and 0 otherwise. Since $F_{n}(x)$ is the proportion of observations less than or equal to $x$, if $F(x)$ is the true distribution of $X$ we expect $F_{n}(x)$ to be close to $F(x).$
The closeness of $F_{n}(x)$ to $F(x)$ is assessed by the Cram\'er-von Mises statistic defined by
\[ W_{n}^{2}=n \int_{-\infty}^{\infty} (F_{n}(x)-F(x))^{2}dF(x). \]
A computationally more convenient formula can be obtained by considering the probability integral transformation $z=F(x,\vec{\theta}).$ Let $x_{1},x_{2},\ldots,x_{n}$ be the order statistics of the original sample; then the probability integral transforms $z_{i}=F(x_{i},\vec{\theta})$ are the order statistics of a sample of independent uniform$[0,1]$ variables. If $\vec{\theta}$ is known, the test statistic can therefore be computed as (see Stephens, Anderson[ ])
\[ W_{n}^{2}=\sum_{i=1}^{n} \left (z_{i}-\frac{2i-1}{2n}\right )^{2}+\frac{1}{12n}. \]
If $\vec{\theta}$ is not completely specified, and the null hypothesis is that the distribution is a member of the two parameter Weibull mixture family $F(x,\vec{\theta}),$ the same formula can be used to compute $W_{n}^{2}$ with $z_{i}=F(x_{i},\hat{\vec{\theta}}),$ where $\hat{\vec{\theta}}$ is an asymptotically efficient estimate of $\vec{\theta}.$ In this work, we estimated $\vec{\theta}$ by the method of maximum likelihood.
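The computation of this section is straightforward to program. The short Python sketch below (our own illustration; the function names are not from any established package) evaluates the mixture distribution function and the statistic $W_{n}^{2}$ for a sample, assuming that an estimate $\hat{\vec{\theta}}=(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},p)$ has already been obtained.
\begin{verbatim}
import numpy as np

def mixture_cdf(x, a1, a2, b1, b2, p):
    """Distribution function of the two-component Weibull mixture."""
    F1 = 1.0 - np.exp(-(x / b1) ** a1)
    F2 = 1.0 - np.exp(-(x / b2) ** a2)
    return p * F1 + (1.0 - p) * F2

def cramer_von_mises(sample, a1, a2, b1, b2, p):
    """Cramer-von Mises statistic W_n^2 for a sample and fitted parameters."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = x.size
    z = mixture_cdf(x, a1, a2, b1, b2, p)   # probability integral transforms
    i = np.arange(1, n + 1)
    return np.sum((z - (2.0 * i - 1.0) / (2.0 * n)) ** 2) + 1.0 / (12.0 * n)
\end{verbatim}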
\section{ Limiting Distribution of the proposed statistic}
The literature (see Cramer[ ], Durbin[ ]) shows that, under suitable regularity conditions, the limiting distribution of $W_{n}^{2}$ for testing the null hypothesis that $X$ is distributed as $F(x)$ is that of $W^{2} = \sum_{j=1}^{\infty} \lambda_{j} z_{j}^{2},$ where the $z_{j}$ are independent $N(0,1)$ variables and the $\lambda_{j}$ are the eigenvalues of the covariance kernel $\rho$, namely the solutions of the eigenvalue equation $\int_{0}^{1} \rho(s,t)f(t)\,dt = \lambda f(s)$. It remains to discuss the computation of the eigenvalues of the covariance kernel. We present this separately for the two cases of simple and composite hypotheses.
\subsection{ Simple Hypotheses}
Durbin and Knott [ ] proved that for a simple null hypothesis, $\rho(s,t)$ is given by $\rho(s,t) = \min(s,t) - st.$ The eigenvalues are then available in closed form as $\lambda_{j}=\frac{1}{\pi^{2}j^{2}},$ for $j=1,2,\cdots,$ with corresponding eigenfunctions $ \sqrt{2}\sin(\pi j s).$
\subsection { Composite Hypothesis}
\label{eigQ}
In the case of a composite hypothesis, $\rho(s,t)$ can be estimated by
$\hat{\rho}(s,t) = \min(s,t) -st - \Psi(s)^{T} I^{-1} \Psi(t),$ where $\Psi(s) = \frac{\partial F}{\partial \vec{\theta}} \left( F^{-1}(s,\vec{\theta}),\vec{\theta}\right)$ and $I$ is the information matrix of a single observation, both evaluated at an asymptotically efficient estimate $\hat{\vec{\theta}}$ of $\vec{\theta}.$
Computation of the information matrix and inversion of the mixture distribution function are tedious for the Weibull mixture model at hand. We propose estimating the information matrix $I$ of a single observation by $-H/n,$ where $H$ is the Hessian matrix of the log-likelihood (the matrix of its second derivatives) evaluated at the maximum likelihood estimate $\hat{\vec{\theta}},$ so that $I^{-1}$ is estimated by $(-H/n)^{-1}.$ In passing we note that for the normal and exponential distributions this estimate gives exactly the correct form for the covariance kernel $\rho.$
\subsubsection*{Inverse of the mixture distribution function}
We propose computing the inverse of the mixture distribution function pointwise numerically. The procedure we used is described next.
Given $t,$ we need to find $x$ such that $F(x,\vec{\theta})=t.$ This is equivalent to finding zeros of $g(x) = F(x, \vec{\theta})-t.$
We used the secant method, which gives the iteration scheme \[ x_{k+1} = \frac{x_{k}\,g(x_{k-1}) - x_{k-1}\, g(x_{k})}{g(x_{k})-g(x_{k-1})}; \]
here $g(x) = p \left( 1 - \exp \left( - \left(\frac{x}{\beta_{1}}\right)^{\alpha_{1}} \right) \right) + (1-p ) \left( 1 - \exp \left( - \left(\frac{x}{\beta_{2}}\right)^{\alpha_{2}} \right) \right)-t.$
The initial values needed to use this iterative scheme can be found by considering the boundary conditions for $p=0$ and $p=1.$
When $p=0,$ the condition $g(x)=0$ gives $ 1 - \exp \left( - \left(\frac{x}{\beta_{2}}\right)^{\alpha_{2}} \right) =t.$
Similarly, when $p=1,$ the condition $g(x)=0$ gives $ 1 - \exp \left( - \left(\frac{x}{\beta_{1}}\right)^{\alpha_{1}} \right) =t.$
Thus, $x_{1}=\beta_{1} \left|\log (1-t)\right|^{1/\alpha_{1}}$ and $x_{2}=\beta_{2} \left|\log (1-t)\right|^{1/\alpha_{2}}$ can be used as initial values. We note that since $0<t<1,$ we have $\log(1-t)<0,$ and hence it is essential to take the absolute value.
The iteration scheme is carried out until the desired convergence is reached. We iterated until the difference between two consecutive points was less than a small number $\epsilon>0,$ which we chose to be $5 \times 10^{-6}.$
In all the examples we tried, the initial values $x_{1}$ and $x_{2}$ were on opposite sides of the root and the iterative scheme worked satisfactorily.
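A minimal sketch of this inversion step, in the same notation and reusing \texttt{mixture\_cdf} from the previous sketch, could look as follows (again illustrative only; the stopping rule and iteration cap are our own choices):
\begin{verbatim}
import numpy as np

def mixture_cdf_inverse(t, a1, a2, b1, b2, p, eps=5e-6, max_iter=200):
    """Invert the mixture distribution function at t (0 < t < 1) using the
    secant method with the initial values described above."""
    g = lambda x: mixture_cdf(x, a1, a2, b1, b2, p) - t
    x_prev = b2 * abs(np.log(1.0 - t)) ** (1.0 / a2)   # boundary case p = 0
    x_curr = b1 * abs(np.log(1.0 - t)) ** (1.0 / a1)   # boundary case p = 1
    for _ in range(max_iter):
        denom = g(x_curr) - g(x_prev)
        if denom == 0.0:
            break
        x_next = (x_curr * g(x_prev) - x_prev * g(x_curr)) / denom
        x_prev, x_curr = x_curr, x_next
        if abs(x_curr - x_prev) < eps:
            break
    return x_curr
\end{verbatim}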
To evaluate $\Psi(s),$ we need the derivatives $\frac{\partial F}{\partial \vec{\theta}}$ given by
\begin{eqnarray*}
\frac{\partial F}{\partial \alpha_{i}} &= & p_{i} \left(\frac{x}{\beta_{i}}\right)^{\alpha_{i}} \log \left(\frac{x}{\beta_{i}}\right) \exp \left(-\left(\frac{x}{\beta_{i}}\right)^{\alpha_{i}} \right) \\
\frac{\partial F}{\partial \beta_{i}} &= &- p_{i} \,\frac{\alpha_{i}}{\beta_{i}} \left(\frac{x}{\beta_{i}}\right)^{\alpha_{i}} \exp \left(-\left(\frac{x}{\beta_{i}}\right)^{\alpha_{i}} \right) \quad \textrm{for} \quad i=1,2, \quad \textrm{and} \\
\frac{\partial F}{\partial p} &= & \exp \left(-\left(\frac{x}{\beta_{2}}\right)^{\alpha_{2}} \right) - \exp \left(-\left(\frac{x}{\beta_{1}}\right)^{\alpha_{1}} \right). \\
\end{eqnarray*}
These derivatives are evaluated at $x=F^{-1}(s,\hat{\vec{\theta}})$ and $\vec{\theta}=\hat{\vec{\theta}},$ where $\hat{\vec{\theta}}$ is the maximum likelihood estimate.
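The following sketch collects these derivatives into an estimate of $\Psi(s)$ for the parameter ordering $(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},p)$; it relies on the numerical inverse \texttt{mixture\_cdf\_inverse} from the previous sketch and is, again, only an illustration:
\begin{verbatim}
import numpy as np

def psi_hat(s, a1, a2, b1, b2, p):
    """Estimate of Psi(s): the gradient of F with respect to
    (alpha1, alpha2, beta1, beta2, p) at x = F^{-1}(s, theta_hat)."""
    x = mixture_cdf_inverse(s, a1, a2, b1, b2, p)
    u1, u2 = (x / b1) ** a1, (x / b2) ** a2
    e1, e2 = np.exp(-u1), np.exp(-u2)
    dF_da1 = p * u1 * np.log(x / b1) * e1
    dF_da2 = (1.0 - p) * u2 * np.log(x / b2) * e2
    dF_db1 = -p * (a1 / b1) * u1 * e1
    dF_db2 = -(1.0 - p) * (a2 / b2) * u2 * e2
    dF_dp = e2 - e1
    return np.array([dF_da1, dF_da2, dF_db1, dF_db2, dF_dp])
\end{verbatim}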
Thus, at any point $(s,t)$ we can evaluate $\hat{\rho}(s,t).$ It remains to show how to estimate its eigenvalues, which cannot be found in closed form and have to be computed numerically.
\subsubsection*{Computation of estimates for the eigenvalues of the covariance kernel}
The difficulty of finding a closed form for the information matrix and of inverting the mixture distribution function limits the application of the methods proposed in the literature (see Stephens [5] and Stephens [6]), which hinge on the expansion of $\Psi(s)^{T}I^{-1}\Psi(t)$ in a Fourier series in the eigenfunctions of $\rho(s,t)$. We propose a brute force approach for computing the eigenvalues, which proceeds as follows.
If $\lambda$ is an eigenvalue of $\rho(s,t)$ and $f(s)$ is an eigenfunction corresponding to $\lambda$, then $\lambda f(s) = \int_{0}^{1} \rho(s,t) f(t) dt. $
Divide the interval [0,1] into $(m+1)$ sub-intervals, each of which is of length $1/(m+1)$. Then,
\begin{eqnarray*}
\lambda f(i/(m+1)) & = & \int_{0}^{1} \rho (i/(m+1),t) f(t) dt \\
& \approx & \frac{1}{m} \sum_{j=1}^{m} \rho(\frac{i}{m+1},\frac{j}{m+1})f(\frac{j}{m+1}), \quad \textrm{for sufficiently large $m$} \\
\end{eqnarray*}
Let $V$ be the column vector with $i$th element equal to $f(i/(m+1))$ and $Q$ be the $m \times m$ matrix whose $(i,j)$th element is $Q_{ij} = \frac{1}{m}\rho \left (\frac{i}{(m+1)},\frac{j}{(m+1)} \right).$
The above equation can then be written approximately as $ \lambda V = Q V.$
Hence, finding the eigenvalues of $\rho$ reduces to the discretised problem of finding the eigenvalues of the matrix $Q.$
We developed software to create the matrix $Q$ using the estimate of $\rho(s,t)$ proposed in Section \ref{eigQ}. The eigenvalues of $Q$ were then used as estimates for the $\lambda_{j}.$
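A possible implementation of this discretisation (illustrative only; \texttt{rho\_hat} is assumed to be any routine returning the estimated kernel $\hat{\rho}(s,t)$, for instance built from $\hat{\Psi}$ and the Hessian as described above) is:
\begin{verbatim}
import numpy as np

def kernel_eigenvalues(rho_hat, m=100):
    """Estimate the eigenvalues of a covariance kernel on [0,1]x[0,1] by the
    eigenvalues of the m x m matrix Q described above; rho_hat(s, t) is a
    callable returning the estimated kernel."""
    s = np.arange(1, m + 1) / (m + 1.0)          # grid points i/(m+1)
    Q = np.array([[rho_hat(si, sj) / m for sj in s] for si in s])
    Q = 0.5 * (Q + Q.T)                          # symmetrise against round-off
    lam = np.linalg.eigvalsh(Q)
    return np.sort(lam)[::-1]                    # largest eigenvalues first
\end{verbatim}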
\section{Computation of p-values}
\label{pval}
Having noted that the asymptotic distribution of $W_{n}^{2}$ is a weighted sum of chi-squared variables with the eigenvalues of the covariance kernel as weights, approximate p-values can be computed as the probability $P(\sum_{i=1}^{\infty} \hat{\lambda}_{i} \chi_{1,i}^{2} \ge t), $ where $t$ is the value of the test statistic, the $\chi_{1,i}^{2}$ are independent chi-squared variables with one degree of freedom, and the $\hat{\lambda}_{i}$ are the estimated eigenvalues of $\hat{\rho}(s,t)$. We used Imhof's method \cite{imhof} to compute the approximate p-values.
Below we summarise the procedure for computing the suggested approximate p-values.
\begin{enumerate}
\item Find an asymptotically efficient estimate $ \hat{\vec{\theta}}$ of $\vec{\theta}=(\alpha_{1},\alpha_{2},\beta_{1},\beta_{2},p)^{T}.$
\item Compute the probability integral transforms $z_{i}=F(x_{i},\hat{\vec{\theta}}). $
\item Compute the value of the test statistic, $W_{n}^{2} = \sum_{i=1}^{n} \left( z_{i}-\frac{(2i-1)}{2n}\right)^{2}+\frac{1}{12n}.$ \label{step3}
\item Compute $\hat{\Psi}(s)=\frac{\partial F}{\partial \vec{\theta}}\left(F^{-1}(s,\hat{\vec{\theta}}),\vec{\theta}\right)\big|_{\vec{\theta}=\hat{\vec{\theta}}}$ at a desired grid of points $s$ in $[0,1]$.
\item Estimate $I$ by $- H/n,$ where $H$ is the Hessian of the log-likelihood evaluated at $\hat{\vec{\theta}},$ so that $I^{-1}$ is estimated by $(-H/n)^{-1}.$
\item Create the $m \times m$ matrix $Q$ with $(i,j)$th element $Q_{ij}=\frac{1}{m}\hat{\rho}(s_{i},s_{j}),$ where $\hat{\rho}(s,t)= \min(s,t)-st- \hat{\Psi}^{T}(s)(-H/n)^{-1}\hat{\Psi}(t)$ and $s_{i}=i/(m+1),$ $i=1,\ldots,m,$ are the grid points of the interval $[0,1]$.
\item Find the eigenvalues $\hat{\lambda}$ of $Q.$
\item Compute the approximate p-value as the probability that the linear combination $\sum_{i}\hat{\lambda}_{i} \chi_{1,i}^{2} $ exceeds the test statistic value $W_{n}^{2}$ computed in Step \ref{step3}.
\end{enumerate}
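For Step 8, Imhof's formula for a weighted sum of independent $\chi^{2}_{1}$ variables can be evaluated by numerical integration; a sketch (using \texttt{scipy}, with our own truncation of negligible weights) is given below. One would call, for example, \texttt{imhof\_pvalue(W2, lam\_hat)} with the statistic from Step 3 and the estimated eigenvalues from Step 7.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def imhof_pvalue(w, lam):
    """Approximate P(sum_j lam_j * chi2_1 >= w) via Imhof's (1961) formula
    for a weighted sum of independent chi-squared(1) variables."""
    lam = np.asarray([l for l in lam if l > 1e-12])   # keep positive weights

    def integrand(u):
        if u == 0.0:
            return 0.5 * (np.sum(lam) - w)            # limit as u -> 0
        theta = 0.5 * np.sum(np.arctan(lam * u)) - 0.5 * w * u
        rho = np.prod((1.0 + (lam * u) ** 2) ** 0.25)
        return np.sin(theta) / (u * rho)

    value, _ = quad(integrand, 0.0, np.inf, limit=200)
    return float(min(max(0.5 + value / np.pi, 0.0), 1.0))
\end{verbatim}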
\section{Empirical Justification for the proposed approximate p-value}
\label{simulation}
The Weibull mixture populations for the simulation study were chosen to cover a range from poorly separated to well separated mixture components. Table \ref{results} presents the parameters of the chosen mixture components. The results presented in Table \ref{results} are based on 10000 simulated samples from each population. Approximate p-values for testing the composite hypothesis that the distribution is a member of the two component two parameter Weibull family were computed using the procedure described in Section \ref{pval}. If the procedure for computing approximate p-values is valid, the computed p-values should be uniformly distributed. The Anderson-Darling test was used to examine the uniformity of the p-values. The last two columns of Table \ref{results} give the value of the Anderson-Darling statistic and the corresponding p-value for testing uniformity.
\begin{center}
\begin{table}[ht]
\begin{tabular}{cccccccc}
population & $\alpha_{1}$ & $\alpha_{2}$ & $\beta_{1}$ & $\beta_{2}$ & $ p$ & A-D statistic & p-value \\ \hline
1 & 2 & 3 & 3 & 0.9 & 0.5 & 0.78 & 0.49 \\
2 & 1.5 & 3 & 2 & 4 & 0.5 & 1.28 & 0.24 \\
3 & 1 & 3 & 2 & 4 & 0.5 & 0.95 & 0.38 \\
4 & 2 & 4 & 0.5 & 3 & 0.5 & 1.47 & 0.18 \\
5 & 2 & 8 & 1 & 4 & 0.5 & 2.43& 0.05 \\ \hline
\end{tabular}
\caption{Results of the simulation study}
\label{results}
\end{table}
\end{center}
None of the results presented in Table \ref{results} provides evidence against the assumption that the resulting p-values are uniformly distributed. This can be taken as empirical support for the validity of the proposed test procedure for testing goodness of fit in two component two parameter Weibull mixtures.
\section{Concluding Remarks and Discussion}
In this paper, we presented a goodness of fit test for two component two parameter Weibull mixture models. Results of a Monte Carlo simulation study provided empirical evidence for the applicability of the suggested goodness of fit test. More simulation results are presented in Perera \cite{perera}. The literature reveals applications of tests based on the Akaike Information Criterion and the Bayesian Information Criterion (\cite{song}), as well as the Root Mean Squared Error (RMSE), chi-squared tests and the Kolmogorov-Smirnov test (\cite{kollu}), for assessing the goodness of fit of such mixture models. We expect the test proposed in this paper to be superior in terms of power; however, this needs to be established through power studies against suitable alternative distributions. This is left as further work.
We also note that likelihood surfaces of Weibull mixture distributions appear to be flat over a wide range of the parameter space. This gives rise to difficulties in calculating maximum likelihood estimates using simple procedures such as the Newton-Raphson method. Also, likelihood functions for samples from Weibull mixture densities whose components are not well separated sometimes have more than one maximum, and it is hard to find the global maximum with certainty. In such cases, we found that several very different roots can give equally good fits with similar likelihood values.
\end{document} |
\begin{document}
\author{Zejun Hu
\thanks {Supported by grants of NSFC-10671181 and Chinese-German
cooperation projects DFG PI 158/4-5.}
\and Haizhong Li\thanks{Supported by grants of NSFC-10531090 and
Chinese-German cooperation projects DFG PI 158/4-5.}
\and Luc Vrancken}
\date{}
\title{A characterisation of the Calabi product of hyperbolic affine spheres}
\sloppy
\begin{abstract}\noindent There exists a well known construction which allows one
to associate with two hyperbolic affine spheres $f_i: M_i^{n_i} \rightarrow
\mathbb R^{n_i+1}$ a new hyperbolic affine sphere immersion
of $I
\times M_1 \times M_2$ into $\mathbb R^{n_1+n_2+3}$. In
this paper we deal with the inverse problem: how to
determine from properties of the difference tensor whether a
given hyperbolic affine sphere immersion of a manifold $M^n \rightarrow \mathbb R^{n+1}$ can be
decomposed in such a way.
\end{abstract}
{{\bfseries Key words}: {\em affine hypersphere, Calabi product, affine hypersurface}.
{\bfseries Subject class: } 53A15.}
\section{Introduction}
In this paper we study nondegenerate affine hypersurfaces $M^n$ into
$\mathbb R^{n+1}$, equipped with its standard affine connection $D$.
It is well known that on such a hypersurface there exists a
canonical transversal vector field $\xi$, which is called the affine
normal. With respect to this transversal vector field one can
decompose
\begin{equation}
D_X Y = \nabla_X Y +h(X,Y) \xi,
\end{equation}
thus introducing the affine metric $h$ and the induced affine
connection $\nabla$. The Pick-Berwald theorem states that $\nabla$
coincides with the Levi Civita connection $\widehat{\nabla}$ of the affine
metric $h$ if and only if $M$ is immersed as a nondegenerate
quadric. The difference tensor $K$ is introduced by
\begin{equation}
K_X Y = \nabla_X Y -\widehat{\nabla}_X Y.
\end{equation}
It follows easily that $h(K(X,Y),Z)$ is symmetric in $X$, $Y$ and
$Z$. The apolarity condition states that $\operatorname{trace} K_X =0$ for every
vector field $X$. The fundamental theorem of affine differential
geometry, Dillen, see Ref.~\cite{dinovr91} implies that an affine
hypersurface is completely determined by the metric and the
difference tensor $K$.
Deriving the affine normal, we introduce the affine shape operator $S$ by
\begin{equation}
D_X \xi =-SX.
\end{equation}
Here, we will restrict ourselves to the case that the affine shape
operator $S$ is a multiple of the identity, i.e. $S=H I$. This means
that all affine normals are parallel or pass through a fixed point.
We will also assume that the metric is positive definite in which
case one distinguishes the following classes of affine hyperspheres:
\begin{romanlist}
\item elliptic affine hyperspheres, i.e. all affine normals pass through a fixed point and $H >0$,
\item hyperbolic affine hyperspheres, i.e. all affine normals pass through a fixed point and $H<0$,
\item parabolic affine hyperspheres, i.e. all the affine normals are parallel ($H=0$).
\end{romanlist}
Due to the work of amongst others Calabi \cite{ca72}, Pogorelov
\cite{po72}, Cheng and Yau \cite{chya86}, Sasaki \cite{sa80} and Li
\cite{li92}, positive definite affine hyperspheres which are
complete with respect to the affine metric $h$ are now well
understood. In particular, the only complete elliptic or parabolic
positive definite affine hyperspheres are respectively the ellipsoid
and the paraboloid. However, there exist many hyperbolic affine
hyperspheres.
In the local case, one is far from obtaining a classification. The reason for this is that affine hyperspheres
reduce to the study of the Monge-Amp{\`e}re equations.
Calabi introduced a construction, called the Calabi product, which shows how to associate with one (or two)
hyperbolic affine hyperspheres a new hyperbolic affine hypersphere. This construction,
as well as the corresponding properties for the difference tensor are recalled in the next section.
In this paper we are interested in the reverse construction, i.e. how to determine using properties of the difference
tensor whether or not a given hyperbolic affine hypersphere (with mean curvature $-1$) can be decomposed
as a Calabi product of a hyperbolic affine hypersphere and a point or as a Calabi product of two hyperbolic
affine hyperspheres.
In particular we show the following two theorems:
\begin{theorem} Let $\phi: M^n \rightarrow \mathbb R^{n+1}$ be a (positive definite) hyperbolic affine hypersphere with mean
curvature $\lambda$, $\lambda<0$. Assume that there exist two
distributions $\mathcal D_1$ and $\mathcal D_2$ such that
\begin{romanlist}
\item $T_pM = \mathcal D_1 \oplus \mathcal D_2$,
\item $\mathcal D_1$ and $\mathcal D_2$ are orthogonal with respect to the affine metric $h$,
\item $\mathcal D_1$ is a one dimensional distribution spanned by a unit length vector field $T$,
\item there exist numbers $\lambda_1$ and $\lambda_2$ satisfying $-\lambda+\lambda_1 \lambda_2 -\lambda_2^2= 0$ such that
\begin{align*}
& K(T,T)=\lambda_1 T,\\
&K(T,U) = \lambda_2 U,
\end{align*}
where $U \in \mathcal D_2$.
\end{romanlist}
Then $\phi:M^n \rightarrow \mathbb R^{n+1}$ can be decomposed as the Calabi product of a hyperbolic
affine sphere $\psi:M_1^{n-1} \rightarrow \mathbb R^{n}$ and a point.
\end{theorem}
and
\begin{theorem}\label{theoremprod2} Let $\phi: M^n \rightarrow \mathbb R^{n+1}$ be a (positive definite) hyperbolic affine hypersphere with mean
curvature $\lambda$, $\lambda<0$. Assume that there exist
distributions $\mathcal D_1$ (of dimension 1, spanned by a unit
length vector field $T$), $\mathcal D_2$
(of dimension $n_2$) and $\mathcal D_3$ (of dimension $n_3$) such that
\begin{romanlist}
\item $1+n_2+n_3 = n$,
\item $\mathcal D_1$, $\mathcal D_2$ and $\mathcal D_3$ are mutually orthogonal
with respect to the affine metric $h$,
\item there exist numbers $\lambda_1$, $\lambda_2$ and $\lambda_3$ such that
\begin{align*}
& K(T,T)=\lambda_1 T,\\
&K(T,V) = \lambda_2 V,\\
&K(T,W)= \lambda_3 W,\\
&K(V,W)=0,
\end{align*}
where $V \in \mathcal D_2$, $W \in \mathcal D_3$, $\lambda_1 =
\lambda_2 +\lambda_3$ and $\lambda_2 \lambda_3 = \lambda$.
\end{romanlist}
Then $\phi:M^n \rightarrow \mathbb R^{n+1}$ can be decomposed as the Calabi product of two hyperbolic
affine sphere immersions $\psi_1:M_1^{n_2} \rightarrow \mathbb R^{n_2+1}$ and
$\psi_2:M_2^{n_3} \rightarrow \mathbb R^{n_3+1}$.
\end{theorem}
Note that, as explained in the next section, the converse of the above two theorems is also true.
To conclude this introduction, we remark that the basic integrability conditions for a hyperbolic
affine hypersphere with mean curvature $-1$ state that:
\begin{align}
&\hat R(X,Y)Z = -(h(Y,Z) X-h(X,Z)Y)-[K_X,K_Y]Z,\label{gauss}\\
&(\hat\nabla K)(X, Y,Z) =(\hat \nabla K)(Y,X,Z).\label{codazzi}
\end{align}
\section{The Calabi product}
Let $\partialsi_1: M_1^{n_2} \rightarrow R^{n_2+1}$ and $\partialsi_2: M_2^{n_3} \rightarrow R^{n_3+1}$
be hyperbolic affine hyperspheres with mean curvature $-1$. Then we define
the Calabi product of $M_1$ with a point by
\begin{equation*}
\tilde \partialsi(t,p)= (c_1 e^{\tfrac{t}{\sqrt{n}}} \partialsi_1(p), c_2
e^{-{\sqrt{n}}t}),
{\bf e}_nd{equation*}
where $p \in M_1$ and $t \in \mathbb R$
and the Calabi product of $M_1$ with $M_2$
by
\begin{equation*}
\partialsi(t,p,q))= (c_1 e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}\partialsi_1(p),
c_2 e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} \partialsi_2(q)),
{\bf e}_nd{equation*}
where $p \in M_1$, $q \in M_2$ and $t \in \mathbb R$.
We now investigate the conditions on the constants $c_1$ and $c_2$ in order that the Calabi product
has constant mean curvature $-1$. We first do so for the Calabi product of two affine spheres.
We denote by $v_1,\mathrm{d}ots,v_{n_2}$ local coordinates for $M_1$ and by $w_1,\mathrm{d}ots,w_{n_3}$ local coordinates
for $M_2$. Then, it follows that
\begin{align*}
&\partialsi_t= (c_1 \tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}}e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}\partialsi_1(p),
-c_2 \tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}}e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} \partialsi_2(q)),\\
&\partialsi_{tt}=(c_1 \tfrac{n_3+1}{n_2+1}e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}\partialsi_1(p),
-c_2 \tfrac{n_2+1}{n_3+1}e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} \partialsi_2(q)),\\
&\partialsi_{tv_i}=\tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}}(c_1 e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}(\partialsi_1)_{v_i},0),\\
&\partialsi_{tw_j}=-\tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}}(0,c_2 e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} (\partialsi_2)_{w_j}),\\
&\partialsi_{v_iv_j}=(c_1 e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}(\partialsi_1)_{v_iv_j},0),\\
&\partialsi_{w_iw_j}=(0,c_2 e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} (\partialsi_2)_{w_iw_j}).
{\bf e}_nd{align*}
If we denote by $h_2$ the affine metric on $M_2$ and by $h_3$ the
centroaffine metric introduced on $M_3$ it follows from the above
formulas that
\begin{align*}
&\partialsi_{tt} =\tfrac{n_3-n_2}{\sqrt{(n_2+1)(n_3+1)}} \partialsi_t + \partialsi\\
&\partialsi_{v_iv_j} = \tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2} h_2(\partialartial v_i,\partialartial v_j) \partialsi+...\\
&\partialsi_{w_iw_j} = \tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2}
h_3(\partialartial w_i,\partialartial w_j) \partialsi+...
{\bf e}_nd{align*}
From \cite{nosa94} we see that $M$ is an affine hypersphere with
mean curvature $-1$ if and only if
\begin{equation*}
det[\partialsi,\partialsi_t,\partialsi_{v_1},\mathrm{d}ots,\partialsi_{v_{n_2}},\partialsi_{w_1},\mathrm{d}ots,\partialsi_{w_{n_3}}]^2
=h(\partialartial_t,\partialartial_t) det[h(\partialartial v_i,\partialartial v_j)]
det[h(\partialartial w_i,\partialartial w_j)].
{\bf e}_nd{equation*}
Taking into account that $\partialsi_1$ and $\partialsi_2$ are already affine
spheres with mean curvature $-1$ we must have that
\begin{equation}
(c_1)^{n_2+1} (c_2)^{n_3+1} = \left
(\tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2}\right )^{n_2+n_3+2}.
{\bf e}_nd{equation}
Hence we can take
\begin{align*}
&c_1=\tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2} d_1\\
&c_2=\tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2} d_2,
{\bf e}_nd{align*}
where
\begin{equation}
(d_1)^{n_2+1} (d_2)^{n_3+1} = 1.
{\bf e}_nd{equation}
Hence by applying an equiaffine transformation we may assume that
$d_1=d_2=1$ and therefore that the Calabi product of two hyperbolic
affine spheres with mean curvature $-1$ is an hyperbolic affine
sphere with mean curvature $-1$ if and only if
\begin{equation*}
\partialsi(t,p,q))= \tfrac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2}(
e^{\tfrac{\sqrt{n_3+1} t}{\sqrt{n_2+1}}}\partialsi_1(p),
e^{-\tfrac{\sqrt{n_2+1} t}{\sqrt{n_3+1}}} \partialsi_2(q)),
{\bf e}_nd{equation*}
up to an equiaffine transformation.
For the Calabi product of a hyperbolic affine sphere and a point, we
proceed in the same way to deduce the following. The Calabi product
of a hyperbolic affine spheres with mean curvature $-1$ and a point
is an hyperbolic affine sphere with mean curvature $-1$ if and only
if
\begin{equation*}
\tilde \partialsi(t,p)= \tfrac{\sqrt{n}}{n+1}( e^{\tfrac{t}{\sqrt{n}}}
\partialsi_1(p), e^{-\sqrt{n} t}),
{\bf e}_nd{equation*}
up to an equiaffine transformation.
\begin{remark} A straightforward calculation shows that the Calabi
product of two hyperbolic affine spheres has parallel cubic form
(with respect to the Levi Civita connection) if and only if both
original hyperbolic affine spheres have parallel cubic forms.
Similarly one has that the Calabi product of a hyperbolic affine
sphere and a point has parallel cubic form if and only if the
original affine sphere has parallel cubic form.
{\bf e}_nd{remark}
\section{Characterisation of the Calabi product of two hyperbolic affine spheres and the proof of Theorem 2}
Throughout this section we will assume that $\partialhi:
M^n\longrightarrow\mathbb R^{n+1} $ is a hyperbolic affine
hypersphere. Without loss of generality we may assume that
$\left\langlembda=-1$ by applying a homothety. We will now prove Theorem
\ref{theoremprod2}. Therefore, we shall also assume that $M$ admits
three mutually orthogonal differential distributions $\mathcal D_1$,
$\mathcal D_2$ and $\mathcal D_3$ of dimension $1$, $n_2> 0$ and
$n_3> 0$ respectively with $1+n_2+n_3=n$, and, for all vectors $V\in
\mathcal D_2$, $W\in \mathcal D_3$,
\begin{gather*}
K(T,T) = \left\langlembda_1 T,\;\;\;\;\;\;
K(T,V) = \left\langlembda_2 V,\\
K(T,W) = \left\langlembda_3 W,\;\;\;\;\;\; K(V,W) = 0.
{\bf e}_nd{gather*}
By the apolarity condition we must have that
\begin{equation}
\lambda_1 +n_2 \lambda_2 +n_3 \lambda_3 = 0.
\end{equation}
Moreover, we will assume that
\begin{align}
&\lambda_1 = \lambda_2+\lambda_3,\\
&\lambda_2 \lambda_3 = -1.
\end{align}
The above conditions imply that $\lambda_1$, $\lambda_2$ and $\lambda_3$ are constants and
can be determined explicitly in terms of the dimensions $n_2$ and $n_3$.
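For completeness we record the short computation behind this claim (the values obtained here agree with those used later in this section). Combining the apolarity relation with $\lambda_1=\lambda_2+\lambda_3$ gives
\begin{align*}
&(n_2+1)\lambda_2 + (n_3+1)\lambda_3 = 0, \qquad \text{hence} \qquad \lambda_3 = -\tfrac{n_2+1}{n_3+1}\,\lambda_2,\\
&\lambda_2 \lambda_3 = -1 \quad\Longrightarrow\quad \lambda_2^2 = \tfrac{n_3+1}{n_2+1}, \qquad
\lambda_2 = \tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}}, \qquad \lambda_3 = -\tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}},
\end{align*}
where, if necessary, the sign of $\lambda_2$ is fixed by replacing $T$ with $-T$; consequently $\lambda_1 = \lambda_2+\lambda_3 = \tfrac{n_3-n_2}{\sqrt{(n_2+1)(n_3+1)}}$.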
As $M$ is a hyperbolic affine sphere we have that the difference
tensor is a symmetric tensor with respect to the Levi Civita
connection $\hat \nabla$ of the affine metric. In that case, as also
$h(K(X,Y),Z)$ is totally symmetric, the information of Lemma 1 and
Lemma 2 of \cite{BRV} remains valid and can be summarized in the
following lemma:
\begin{lemma} \left\langlebel{lemme1}
We have
\begin{enumerate}
\item $\hat \nabla_{\mathcal D_1} \mathcal D_1 \subset \mathcal D_1$
\item $\hat \nabla_{\mathcal D_2} \mathcal D_2 \subset \mathcal D_2
\oplus \mathcal D_3$
\item $\hat \nabla_{\mathcal D_3} \mathcal D_3
\subset \mathcal D_2 \oplus \mathcal D_3$
\item $h(\hat \nabla_T
W,V) = h(\hat \nabla_W T,V)=-h(\hat \nabla_V T,W)$, for any $V
\in\mathcal D_2, W\in\mathcal D_3$
{\bf e}_nd{enumerate}
{\bf e}_nd{lemma}
Similarly using the
information of the previous lemma, Lemma 3 of \cite{BRV} reduces to
\begin{lemma} \left\langlebel{lemme2} We have
\begin{enumerate}
\item $(\left\langlembda_3-\left\langlembda_2) h(\hat \nabla_{V} \tilde V, W)
= h( K(V,\tilde V), \hat\nabla_T W)
= h(K(V,\tilde V), \hat \nabla_W T)$
\item $(\left\langlembda_2-\left\langlembda_3) h(\hat\nabla_{W} \tilde W, V)
= h(K(W,\tilde W), \hat\nabla_T V)
= h(K(W,\tilde W), \hat \nabla_V T)$
{\bf e}_nd{enumerate}
{\bf e}_nd{lemma}
We denote now by $\{V_1,\mathrm{d}ots,V_{n_2}\}$, respectively $\{W_1,\mathrm{d}ots,W_{n_3}\}$ an
orthonormal basis of $\mathcal D_2$ (resp. $\mathcal D_3$) with respect to the affine
metric $h$. Then, we have
\begin{lemma} \left\langlebel{lemme3} Let $V, \tilde V \in \mathcal D_2$. Then
$$h(\hat \nabla_V T, \hat \nabla_{\tilde V} T) =0.$$
{\bf e}_nd{lemma}
\begin{proof} Using the Gauss equation, we have that
\begin{align*}
h(\hat R(V,T)T,\tilde V)&= -h(V,\tilde V) - h(K(T,T),K(V,\tilde V))
+ h(K(T,V),K(T,\tilde V))\\
&= (-1- \left\langlembda_1 \left\langlembda_2 +\left\langlembda_2^2) h( V,\tilde V)\\
&=(-1- \left\langlembda_3 \left\langlembda_2) h(V,\tilde V)=0.
{\bf e}_nd{align*}
On the other hand, by a direct computation using the previous lemmas, we have
\begin{align*}
h(\hat R(V,T)T,\tilde V)&=h(\hat \nabla_{V} \hat \nabla_{T}T -\hat
\nabla_{T}\hat \nabla_{V} T
-\hat \nabla_{\hat \nabla_{V}T -\hat \nabla_{T}V} T,\tilde V)\\
&=h(-\hat\nabla_{T} \hat\nabla_V T,\tilde V ) -\sum_{k=1}^{n_3} h(\hat \nabla_{V}T - \hat \nabla_T V,W_k) h(\hat
\nabla_{W_k} T, \tilde V)\\
&=h(-\hat\nabla_{T} \hat \nabla_V T,\tilde V) \qquad \text{by Lemma \ref{lemme1} (iv)}\\
&=-\sum_{k=1}^{n_3} h(\hat\nabla_V T,W_k) h(\hat \nabla_{T} W_k,\tilde V)\\
&=\sum_{k=1}^{n_3} h(\hat \nabla_V T,W_k) h(\hat \nabla_{\tilde V} T,W_k)\\
&=h(\hat \nabla_V T, \hat \nabla_{\tilde V} T).
{\bf e}_nd{align*}
{\bf e}_nd{proof}
Similarly, we have
\begin{lemma} \left\langlebel{lemme4} Let $W, \tilde W \in \mathcal D_3$. Then
$$h(\hat\nabla_W T, \nabla_{\tilde W} T) = 0.$$
{\bf e}_nd{lemma}
Combining the two previous lemmmas with Lemma \ref{lemme2} and
Lemma \ref{lemme1} we see that the distributions determined by
$\mathcal D_2$ and $\mathcal D_3$ are totally geodesic. It also
implies that $h(\hat \nabla_V T,W) = h(\hat \nabla_W T, V) = 0$.
This is sufficient to conclude that locally $(M,h)$ is isometric with
$I \times M_1 \times M_2$ where $T$ is tangent to $I$, $\mathcal D_2$ is tangent
to $M_1$ and $\mathcal D_3$ is tangent to $M_2$.
The product structure of $M$ implies the existence of local
coordinates $(t,p,q)$ for $M$ based on an open subset containing
the origin of $\mathbb R \times \mathbb R^{n_2} \times \mathbb R^{n_3}$,
such that $\mathcal D_1$ is given by $dp=dq=0$,
$\mathcal D_2$ is given by $dt=dq=0$, and $\mathcal D_3$ is given by
$dt=dp=0$. We may also assume that $T = \tfrac{\partialartial}{\partialartial
t}$. We now put
\begin{equation}\left\langlebel{eqphi2phi3}
\quad \partialhi_2= -f \left\langlembda_3 \partialhi + f T, \quad \partialhi_3 =
g \left\langlembda_2 \partialhi - g T,
{\bf e}_nd{equation}
where the functions $f$ and $g$, which depend only on the variable $t$, are determined by
\begin{align*}
f'=f (\left\langlembda_3 - \left\langlembda_1) ,\\
g'=g(\left\langlembda_2 - \left\langlembda_1).
{\bf e}_nd{align*}
It is clear that solutions are given by
\begin{equation*}
f(t) = d_1 e^{(\left\langlembda_3-\left\langlembda_1)t} \qquad \text{and} \qquad g(t)
= d_2 e^{(\left\langlembda_2-\left\langlembda_1)t},
{\bf e}_nd{equation*}
where $d_1$ and $d_2$ are constants. Of course, as $\left\langlembda_1 =
\left\langlembda_2+\left\langlembda_3$ we can rewrite the above equation as
\begin{equation*}
f(t) = d_1 e^{-\left\langlembda_2 t} \qquad \text{and} \qquad g(t) = d_2
e^{-\left\langlembda_3t}.
{\bf e}_nd{equation*}
Computing $\left\langlembda_1$, $\left\langlembda_2$ and $\left\langlembda_3$ explicitly, where if necessary by changing the sign of $E_1$ we
may assume that $\left\langlembda_2 \ge 0$ we find that
\begin{align*}
&\left\langlembda_2 = \tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}},\\
&\left\langlembda_3 = -\tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}}.
{\bf e}_nd{align*}
Solving now the above equations for the immersion $\partialhi$ we find that
\begin{align*}
\partialhi &=\tfrac{1}{f (\left\langlembda_2-\left\langlembda_3)}\partialhi_2 - \tfrac{1}{g (\left\langlembda_2-\left\langlembda_3)} \partialhi_3\\
&=(\tfrac{1}{d_1} e^{\tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}} t} \partialhi_2
+ \tfrac{1}{d_2} e^{-\tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}} t} \partialhi_3)(\frac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2}).
{\bf e}_nd{align*}
A straightforward computation, using {\bf e}_qref{eqphi2phi3}, now shows that
\begin{align*}
D_T (\partialhi_2)&=D_T(-f \left\langlembda_3 \partialhi + f T)\\
&=f(\left\langlembda_3-\left\langlembda_1) (-\left\langlembda_3\partialhi+T)+ f (-\left\langlembda_3 T +(K(T,T)+\partialhi))\\
&=f(\left\langlembda_3-\left\langlembda_1)(-\left\langlembda_3\partialhi+T)+ f ((\left\langlembda_1-\left\langlembda_3) T +\partialhi))\\
&=f (\left\langlembda_2 \left\langlembda_3 +1)\partialhi =0.
{\bf e}_nd{align*}
Similarly
\begin{align*}
&D_W (\partialhi_2)=f(- \left\langlembda_3 W +K(W,T))=0,\\
&D_T (\partialhi_3)=0,\\
&D_V (\partialhi_3)=0.
{\bf e}_nd{align*}
The above implies that $\partialhi_2$ reduces to a map of $M_1$ in $\mathbb R^n$ whereas
$\partialhi_3$ reduces to a map of $M_2$ in $\mathbb R^n$. As we have
that
\begin{align*}
&d\partialhi_2(V)=D_V (\partialhi_2)=f(- \left\langlembda_3 V +K(V,T))=f(-\left\langlembda_3+\left\langlembda_2)V,\\
&d\partialhi_3(W)=D_W(\partialhi_3)=g(\left\langlembda_2 W -K(W,T))=g
(\left\langlembda_2-\left\langlembda_3)W,
{\bf e}_nd{align*}
these maps are actually immersions. Moreover, denoting by $\nabla^1$ the
$\mathcal D_2$ component of $\nabla$, we find that
\begin{align*}
D_{V}d\partialhi_2(\tilde V)&=f(-\left\langlembda_3+\left\langlembda_2)D_{V}\tilde V\\
&= f(-\left\langlembda_3+\left\langlembda_2)\nabla_{V}\tilde V+ f(-\left\langlembda_3+\left\langlembda_2)h(V,\tilde V)\partialhi \\
&= f(-\left\langlembda_3+\left\langlembda_2)\nabla^1_{V}\tilde V+f(-\left\langlembda_3+\left\langlembda_2)(h(K(V,\tilde V),T)T+h(V,\tilde V)\partialhi)\\
&=d\partialhi_2(\nabla^1_{V} \tilde V)+ f(-\left\langlembda_3+\left\langlembda_2)h(V,\tilde V)(\left\langlembda_2 T +\partialhi)\\
&=d\partialhi_2(\nabla^1_{V} \tilde V)+f (-\left\langlembda_3+\left\langlembda_2)\left\langlembda_2 h(V,\tilde V)(T-\left\langlembda_3 \partialhi)\\
&=d\partialhi_2(\nabla^1_{V} \tilde V)+ (-\left\langlembda_3+\left\langlembda_2)\left\langlembda_2
h(V,\tilde V)\partialhi_2.
{\bf e}_nd{align*}
The above formulas imply that $\partialhi_2$ can be interpreted as a
centroaffine immersion contained in an $n_2+1$-dimensional vector
subspace of $\mathbb R^{n+1}$ with induced connection $\nabla^1$ and
affine metric $h_1 = (-\left\langlembda_3+\left\langlembda_2)\left\langlembda_2 h$. Similarly,
we get that $\partialhi_3$ can be interpreted as a centroaffine immersion
contained in an $n_3+1$-dimensional vector subspace of $\mathbb
R^{n+1}$ with induced connection $\nabla^2$ (the restriction of
$\nabla$ to $\mathcal D_3$) and affine metric $h_2 =g
(\left\langlembda_3-\left\langlembda_2)\left\langlembda_3 h$. Of course as both spaces are
complementary, we may assume by a linear transformation that the
$n_2+1$ dimensional space is spanned by the first $n_2+1$
coordinates of $\mathbb R^{n+1}$ whereas the $n_3+1$ dimensional
space is spanned by the last $n_3+1$ coordinates of $\mathbb
R^{n+1}$.
Moreover, taking $V_1,\mathrm{d}ots, V_{n_2}$ as before, we find that
\begin{align*}
\sum_{i=1}^{n_2} (\nabla^1 h_1) (V,V_i,V_i)&=
\left\langlembda_2(\left\langlembda_2-\left\langlembda_3)\sum_{i=1}^{n_2} (\nabla^1 h)
(V,V_i,V_i)\\
&=-2\left\langlembda_2(\left\langlembda_2-\left\langlembda_3)\sum_{i=1}^{n_2} h(\nabla^1_V V_i,V_i)\\
&=-2\left\langlembda_2(\left\langlembda_2-\left\langlembda_3)\sum_{i=1}^{n_2} h(\nabla_V V_i,V_i)\\
&=\left\langlembda_2(\left\langlembda_2-\left\langlembda_3)\sum_{i=1}^{n_2} (\nabla h) (V,V_i,V_i)=0,
{\bf e}_nd{align*}
as by assumption $h(K(V,W),W))= h(K(V,T),T)$. So $M_1$ is an hyperbolic
affine hypersphere. Choosing now the constant $d_1$ appropriately we may
assume that $M_1$ has mean curvature $-1$. A similar argument also holds for $M_2$.
As
\begin{align*}
\partialhi &=\tfrac{1}{f (\left\langlembda_2-\left\langlembda_3)}\partialhi_2 - \tfrac{1}{g (\left\langlembda_2-\left\langlembda_3)} \partialhi_3\\
&=(\tfrac{1}{d_1} e^{\tfrac{\sqrt{n_3+1}}{\sqrt{n_2+1}} t} \partialhi_2
+ \tfrac{1}{d_2} e^{-\tfrac{\sqrt{n_2+1}}{\sqrt{n_3+1}} t} \partialhi_3)(\frac{\sqrt{(n_2+1)(n_3+1)}}{n_2+n_3+2}).
{\bf e}_nd{align*}
We note from Section 2 that we must have that $d_1^{n_2+1} d_2^{n_2+1} = 1$ and that therefore $\partialhi$ is given
as the Calabi product of the immersions $\partialhi_1$ and $\partialhi_2$.
\begin{remark} In case that $M$ has parallel difference tensor, i.e. if $\hat \nabla K=0$, the conditions
of Theorem 2 can be weakened. Indeed we can prove:
\begin{theorem} Let $M$ be a hyperbolic affine sphere with mean curvature $\left\langlembda$,
where $\left\langlembda<0$. Suppose that $\hat \nabla K= 0$
and there exists $h$-orthonormal distributions $\mathcal D_1$ (of dimension $1$),
$\mathcal D_2$ (of dimension $n_2$)
and such that $\mathcal D_3$ (of dimension $n_3$) such that
\begin{align*}
&K(T,T)=\left\langlembda_1 T,\\
&K(T,V)=\left\langlembda_2 V,\\
&K(T,W)=\left\langlembda_3 W,
{\bf e}_nd{align*}
where $T$ is a unit vector spanning $\mathcal D_1$ and $V \in
\mathcal D_2$, $W \in \mathcal D_3$. Moreover we suppose that
$\left\langlembda_2 \ne \left\langlembda_3$ and $2\left\langlembda_2 \ne \left\langlembda_1 \ne 2
\left\langlembda_3$. Then $\partialhi:M^n \rightarrow \mathbb R^{n+1}$ can be
decomposed as the Calabi product of two hyperbolic affine sphere
immersions $\partialsi_1:M_1^{n_2} \rightarrow \mathbb R^{n_2+1}$ and
$\partialsi_2:M_2^{n_3} \rightarrow \mathbb R^{n_3+1}$ with parallel cubic form.
{\bf e}_nd{theorem}
\begin{proof} By applying an homothety we may choose $\left\langlembda=-1$.
As $\hat \nabla K=0$, we also have that $\hat R. K = 0$. This means that
\begin{equation*}
\hat R(X,Y)K(Z,U)= K(\hat R(X,Y)Z,U)+K(Z, \hat R(X,Y)U).
{\bf e}_nd{equation*}
So, taking $X=Z=U=T$ and $Y=V$, we find that
$$
\hat R(T,V)T=V -K_T K_V T +K_V K_TT=(1-\left\langlembda_2^2+\left\langlembda_1 \left\langlembda_2) V.
$$
Hence we deduce that
\begin{equation*}
(\left\langlembda_1-2\left\langlembda_2)(-1-\left\langlembda_1 \left\langlembda_2 +\left\langlembda_2^2)=0.
{\bf e}_nd{equation*}
Similarly we have
\begin{equation*}
(\left\langlembda_1-2\left\langlembda_3)(-1-\left\langlembda_1 \left\langlembda_3 +\left\langlembda_3^2)=0.
{\bf e}_nd{equation*}
In view of the conditions, we must have that $\left\langlembda_2$ and $\left\langlembda_3$ are the two different
roots of the equation
$$-1 - \left\langlembda_1 x +x^2=0.$$
Consequently $\left\langlembda_2+\left\langlembda_3 = \left\langlembda_1$ and $\left\langlembda_2 \left\langlembda_3=-1$.
Finally we take $Z=U=T$, $X=V$ and $Y=W$. Then we find that
\begin{align*}
\left\langlembda_1 \hat R(V,W)T&= 2 K(\hat R(V,W)T,T)=-2 K(K_V K_W T,T)+2
K(K_W K_V T,T)\\
&=-2(\left\langlembda_3-\left\langlembda_2)K_T K_VW.
{\bf e}_nd{align*}
Hence
\begin{align*}
\left\langlembda_1 (\left\langlembda_2-\left\langlembda_3) K_V W&= 2 K(\hat R(V,W)T,T)=-2 K(K_V
K_W T,T)+2 K(K_W K_V T,T)\\
&=-2(\left\langlembda_3-\left\langlembda_2)K_T K_VW.
{\bf e}_nd{align*}
This implies that $K_V W$ is an eigenvector of $K_T$ with eigenvalue $\tfrac{1}{2} \left\langlembda_1$. Given the form
of $K_T$ we deduce that $K(V,W) = 0$. We are now in a position to apply Theorem 2 and deduce that
$M$ can be obtained as the Calabi product of the hyperbolic affine spheres.
{\bf e}_nd{proof}
{\bf e}_nd{remark}
\section{Characterisation of the Calabi product of a hyperbolic affine sphere and a point and the proof of Theorem 1}
Throughout this section we will assume that $\partialhi:
M^n\longrightarrow\mathbb R^{n+1} $ is a hyperbolic affine
hypersphere with mean curvature $-1$ and we will prove Theorem 1.
Therefore, we shall also assume that $M$ admits two mutually
orthogonal differential distributions $\mathcal D_1$ and $\mathcal
D_2$ of dimension $1$ and $n_2> 0$, respectively, with $1+n_2=n$,
and, for unit vector $T\in \mathcal D_1$ and all vectors $V\in
\mathcal D_2$,
\begin{gather*}
K(T,T) = \left\langlembda_1 T,\;\;\;\;\;\;
K(T,V) = \left\langlembda_2 V.\\
{\bf e}_nd{gather*}
By the apolarity condition we must have that
\begin{equation}
\left\langlembda_1 +n_2 \left\langlembda_2= 0,
{\bf e}_nd{equation}
Moreover, we will assume that
\begin{equation}
1+\left\langlembda_1 \left\langlembda_2-\left\langlembda_2^2=0.
{\bf e}_nd{equation}
The above conditions imply that $\left\langlembda_1$ and $\left\langlembda_2$ are constant and
can be determined explicitly in terms of the dimension $n$.
Indeed, if necessary by replacing $T$ with $-T$, we have that
\begin{align*}
&\left\langlembda_2= \tfrac{1}{\sqrt{n}},\\
&\left\langlembda_1=-\tfrac{n-1}{\sqrt{n}}.
{\bf e}_nd{align*}
We now proceed as in the previous case. Using the fact that
$\hat \nabla K$ is totally symmetric it follows that
\begin{lemma} \left\langlebel{lemme3} We have
\begin{enumerate}
\item $\hat \nabla_{T} T=0$,
\item $\hat \nabla_{V} T=0$,
\item $h(\hat \nabla_V \tilde V,T)=0$.
{\bf e}_nd{enumerate}
{\bf e}_nd{lemma}
The previous lemmma tells us that the distributions determined by
$\mathcal D_1$ and $\mathcal D_2$ are totally geodesic.
This is sufficient to conclude that locally $(M,h)$ is isometric with
$I \times M_1$ where $T$ is tangent to $I$ and $\mathcal D_2$ is tangent
to $M_1$.
The product structure of $M$ implies the existence of local
coordinates $(t,p$ for $M$ based on an open subset containing
the origin of $\mathbb R \times \mathbb R^{n_2}$,
such that $\mathcal D_1$ is given by $dp=0$ and
$\mathcal D_2$ is given by $dt=$. We may also assume that $T = \tfrac{\partialartial}{\partialartial
t}$. We now put
\begin{equation}\left\langlebel{eqphi2}
\quad \partialhi_2= f \tfrac{1}{\left\langlembda_2} \partialhi + f T, \quad \partialhi_3 =
g \left\langlembda_2 \partialhi - g T,
{\bf e}_nd{equation}
where the functions $f$ and $g$, which depend only on the variable $t$, are determined by
\begin{align*}
f'=-f \left\langlembda_2= -\tfrac{1}{\sqrt{n}} ,\\
g'=g(\left\langlembda_2 - \left\langlembda_1)=\sqrt{n}.
{\bf e}_nd{align*}
It is clear that solutions are given by
\begin{equation*}
f(t) = d_1 e^{-\tfrac{1}{\sqrt{n}}t} \qquad \text{and} \qquad g(t)
= d_2 e^{\sqrt{n} t}.
{\bf e}_nd{equation*}
A straightforward computation, now shows that
\begin{align*}
D_T (\partialhi_2)&=D_T(f \sqrt{n} \partialhi + f T)\\
&=-f(\partialhi+ \tfrac{1}{\sqrt{n}}T)+ f ( \sqrt{n} T +(K(T,T)+\partialhi))\\
&=f T (- \tfrac{1}{\sqrt{n}}+ \sqrt{n}-\tfrac{n-1}{\sqrt{n}})\\
&=0.
{\bf e}_nd{align*}
Similarly
\begin{align*}
&D_T (\partialhi_3)=0,\\
&D_V (\partialhi_3)=0.
{\bf e}_nd{align*}
The above implies that $\partialhi_2$ reduces to a map of $M_1$ in $\mathbb R^n$ whereas
$\partialhi_3$ is a constant vector in $\mathbb R^n$. As we have
that
\begin{equation*}
d\partialhi_2(V)=D_V (\partialhi_2)=f(\sqrt{n} V
+K(V,T))=f(\sqrt{n}+\tfrac{1}{\sqrt{n}})V,
{\bf e}_nd{equation*}
the map $\partialhi_2$ is actually immersions. Moreover, denoting by $\nabla^1$ the
$\mathcal D_2$ component of $\nabla$, we find that
\begin{align*}
D_{V}d\partialhi_2(\tilde V)&=f(\sqrt{n}+\tfrac{1}{\sqrt{n}})D_{V}\tilde V\\
&= f(\sqrt{n}+\tfrac{1}{\sqrt{n}})\nabla_{V}\tilde V+ f(\sqrt{n}+\tfrac{1}{\sqrt{n}})h(V,\tilde V)\partialhi \\
&= f(\sqrt{n}+\tfrac{1}{\sqrt{n}})\nabla^1_{V}\tilde V+f(\sqrt{n}+\tfrac{1}{\sqrt{n}})(h(K(V,\tilde V),T)T+h(V,\tilde V)\partialhi)\\
&=d\partialhi_2(\nabla^1_{V} \tilde V)+ f(\sqrt{n}+\tfrac{1}{\sqrt{n}})h(V,\tilde V)(\tfrac{1}{\sqrt{n}} T +\partialhi)\\
&=d\partialhi_2(\nabla^1_{V} \tilde V)+\tfrac{n+1}{n}
h(V,\tilde V)\partialhi_2.
{\bf e}_nd{align*}
The above formulas imply that $\partialhi_2$ can be interpreted as a
centroaffine immersion contained in an $n_2+1$-dimensional vector
subspace of $\mathbb R^{n+1}$ with induced connection $\nabla^1$ and
affine metric $h_1 = \tfrac{n+1}{n} h$. Of course as the vector $\partialhi_3$ is transversal
to the immersion $\partialhi_2$, we may assume by a linear transformation that the
$\partialhi_2$ lies in the space spanned by the first $n$
coordinates of $\mathbb R^{n+1}$ whereas the constant vector lies in the direction of the last coordinate,
and by choosing $d_2$ appropriately we may assume that $\partialhi_2= (0,\mathrm{d}ots,0,1)$.
As before we get that $M_1$ satisfies the apolarity condition and hence is a hyperbolic
affine hypersphere. Choosing now the constant $d_1$ appropriately we may
assume that $M_1$ has mean curvature $-1$.
As
\begin{equation*}
\partialhi
=(\tfrac{1}{d_1} e^{\tfrac{1}{\sqrt{n}} t} \partialhi_2
+ \tfrac{1}{d_2} e^{-\sqrt{n} t} \partialhi_3)(\frac{\sqrt{(n)(n_3+1)}}{n+1}).
{\bf e}_nd{equation*}
We note from Section 2 that we must have that $d_1^{n_2+1} d_2 = 1$ and that therefore $\partialhi$ is given
as the Calabi product of the immersions $\partialhi_1$ and a point.
\begin{thebibliography}{1}
\bibitem{BRV}
J.~Bolton, C.~Rodriguez Montealegre and L.~Vrancken.
\newblock Warped product minimal Lagrangian immersions in complex
projective space,
\newblock preprint
\bibitem{ca72}
E.~Calabi.
\newblock {Complete affine hyperspheres, I,}
\newblock {{\bf e}_m Sympos. Math.}, 10(1972): 19--38.
\bibitem{chen}
B.-Y. Chen.
\newblock Some pinching and classification theorems for minimal
submanifolds,
\newblock {{\bf e}_m Arch. Math.}, 60(1993): 568--578.
\bibitem{chya86}
S.~Y.~Cheng and S.~T.~Yau.
\newblock {Complete affine hypersurfaces. I: The completeness of affine
metrics,}
\newblock {{\bf e}_m Commun. Pure Appl. Math.}, 39(1986): 839--866.
\bibitem{dinovr91}
F.~Dillen, K.~Nomizu and L.~Vrancken.
\newblock Conjugate connections and radon's theorem in affine differential
geometry,
\newblock {{\bf e}_m Monatsh. Math.}, 109(1990): 221--235.
\bibitem{divrcalabi}
F.~Dillen and L.~Vrancken.
\newblock Calabi type composition of affine spheres,
\newblock {{\bf e}_m Diff. Geom. Appl.}, 4(1994): 303--328.
\bibitem{krscvr}
M.~Kriele, C.~Scharlach and L.~Vrancken.
\newblock An extremal class of 3-dimensional elliptic affine
spheres,
\newblock {{\bf e}_m Hokkaido Math. J.}, 30(2001): 1--23.
\bibitem{krvr99}
M.~Kriele and L.~Vrancken.
\newblock Lorentzian affine hyperspheres with constant sectional
curvature,
\newblock {{\bf e}_m Trans. Amer. Math. Soc.}, 352(2000): 1581-1599.
\bibitem{krvr}
M.~Kriele and L.~Vrancken
\newblock An extremal class of 3-dimensional hyperbolic affine
spheres,
\newblock {{\bf e}_m Geom. Dedicata}, 77(1999): 239--252.
\bibitem{H}
S.~Hiepko,
\newblock Eine innere Kennzeichung der verzerrten Produkte,
\newblock {{\bf e}_m Math. Ann.}, 241(1979): 209--215.
\bibitem{li92}
A.~M.~Li.
\newblock {Calabi conjecture on hyperbolic affine hyperspheres, II,}
\newblock {{\bf e}_m Math. Ann.}, 293(1992): 485--493.
\bibitem{vrlisi89}
A.~M.~Li, U.~Simon and L.~Vrancken
\newblock Affine spheres with constant affine sectional curvature,
\newblock {{\bf e}_m Math. Z.}, 206(1991): 651--658.
\bibitem{calabi}
A.~M.~Li, U. Simon and G. Zhao.
\newblock {{\bf e}_m Global Affine Differential Geometry of Hypersurfaces},
\newblock W. De Gruyter, Berlin-New York, 1993.
\bibitem{nosa94}
K.~Nomizu and T.~Sasaki.
\newblock {{\bf e}_m Affine Differential Geometry},
\newblock Cambridge University Press, Cambridge, 1994.
\bibitem{po72}
A.V. Pogorelov.
\newblock {On the improper convex affine hyperspheres,}
\newblock {{\bf e}_m Geom. Dedicata}, 1(1972): 33--46.
\bibitem{RV}
C.~Rodriguez Montealegre and L.~Vrancken.
\newblock {Lagrangian submanifolds of the three dimensional complex projective space,}
\newblock {{\bf e}_m J. Math. Soc. Japan}, 53(2001): 603--631.
\bibitem{sa80}
T.~Sasaki.
\newblock {Hyperbolic affine hyperspheres,}
\newblock {{\bf e}_m Nagoya Math. J.}, 77(1980): 107--123.
\bibitem{scsivevr}
C.~Scharlach, U.~Simon, L.~Verstraelen, and L.~Vrancken.
\newblock A new intrinsic curvature invariant for centroaffine
hypersurfaces,
\newblock {{\bf e}_m Beitr{\"a}ge Alg. Geom.},
38(1997): 437--458.
\bibitem{vr00}
L.~Vrancken.
\newblock
The Magid-Ryan conjecture for equiaffine hyperspheres with constant
sectional curvature,
\newblock {{\bf e}_m
J. Diff. Geom.}, 54(2000): 99--138.
{\bf e}_nd{thebibliography}
\vskip 1cm
\begin{flushleft}
\noindent
Zejun Hu: {\sc Department of Mathematics, Zhengzhou University,
Zhengzhou 450052, People's Republic of China.}\ \ E-mail:
[email protected]
Haizhong Li: {\sc Department of Mathematical Sciences, Tsinghua
University, Beijing 100084, People's Republic of China} \ \ E-mail:
[email protected]
Luc Vrancken: {\sc LAMATH, ISTV2, Campus du mont houy, Universite de
Valenciennes, France.} \ \ E-mail: [email protected]
\end{flushleft}
\end{document} |
\begin{document}
\raggedbottom
\topmargin-49pt
\textheight620pt
\title{Topology measurement within the histories approach}
\addtocounter{footnote}{+1}
\footnotetext{The Centre of Mathematics, Nevsky pr., 39,
191011, St-Petersburg, Russia}
\footnotetext{Department of Mathematics, SPb UEF, Griboyedova 30/32,
191023, St-Petersburg, Russia (address for correspondence)}
\addtocounter{footnote}{+1}
\footnotetext{Division of Mathematics,
Istituto per la Ricerca di Base,
I-86075, Monteroduni (IS), Molise, Italy}
\begin{abstract}
An idealised experiment estimating the spacetime topology is
considered in both the classical and the quantum framework. The latter is
described in terms of the histories approach to quantum theory. A
procedure creating combinatorial models of topology is suggested.
The correspondence between these models and discretised spacetime
models is established.
\end{abstract}
\section{Introduction}
Within the conventional account of relativity theory the
structure of spacetime as a differentiable manifold is supposed to be
given, and it is the metric structure that is subject to measurement
and changes. So, the topology of spacetime is not {\em an
observable.}
Nowadays there is no fully fledged theory in which the spacetime
topology would be a variable, nor even, in a sense, a perceivable
entity. However, even if such a theory does not exist, we may try to
consider idealized experiments which would let us learn the
spacetime topology. That means we should assume the spacetime
to be a manifold, and we only wish to {\em determine} its
topological structure. Accordingly, any observer should
believe that the topology of the area of his observation (that is,
an appropriate coordinate neighborhood) is that of a ball. So, in
order to recover the entire spacetime topology we have to find out
how the balls overlap. However, any realistic experiment (having
at most a finite number of outcomes) cannot give us this information
completely. We are only able to know whether the regions have common
points (section \ref{semp}).
Such an experimental scheme inevitably requires {\em several} observers,
but then the problem of event identification arises: two observers
registering an event must be sure that they really see the same one.
We emphasize that this is a matter of {\em convention:\/} the two
observers must have a way to identify remote events. This leads
to the concept of organized observation (section \ref{sentang}).
The obtained results of observations then have to be interpreted
in some way. We may do so in either a classical or a quantum way. In the
classical approach this leads to the Sorkin discretization scheme
(section \ref{sfinsub}).
The attempt to put a scheme of topology estimation into the
framework of quantum mechanics requires the cooperative nature of
the observations to be explicitly captured in the theory. It is the
notion of {\em homogeneous history} in the histories approach to
quantum theory \cite{ha} that can be used for this purpose. Within
the histories approach we introduce the notion of a
'team' (an organized set of observers). To carry out the mathematical
description of the team we have to impose an additional mathematical
structure. It turns out that this structure can be represented by
that of an associative algebra (section \ref{sentang}). It is worth
mentioning that such structures can be introduced in different ways,
reflecting different ways of organizing the team of observers.
\[
\left( \begin{array}{c}
\hbox{topology} \cr
\hbox{measurement}
\end{array} \right)
\,=\,
\left( \begin{array}{c}
\hbox{homogeneous} \cr
\hbox{history}
\end{array} \right)
\,+\,
\left( \begin{array}{c}
\hbox{organization} \cr
\hbox{of observers}
\end{array} \right)
\]
There are no spacetime points at all within the histories approach,
and the goal of the introduced additional structure is to
'manufacture' them. We suggest an algebraic machinery for building
topological spaces (namely, the Rota topologies on the primitive
spectra of appropriate algebras) and call it the {\em spatialization
procedure} (section \ref{sspat}).
In order to make sure of the viability of our quantum construction
we should take care of the {\em correspondence principle}: we should be
able to carry out quasiclassical measurements. That means possessing
an organization scheme for the team such that the result of
the spatialization procedure is the same as in the classical
approach. This reduces to the purely mathematical problem of the existence
of an appropriate algebraic structure. We suggest a constructive
solution of this problem using so-called incidence algebras
(section \ref{sialg}).
\section{Empirical topology}\label{semp}
Let us consider, following \cite{g2}, an idealized experimental
scheme for determining the topological structure of spacetime.
Consider a team $\Lambda$ of observers. Each of them assumes
himself to be in the center of an area ${{\cal O}_{\lambda}}$ ($\lambda \in
\Lambda$) homeomorphic to an open ball. This requirement reflects
the correspondence principle: indeed, looking around we do not
see holes or borders in the sky. These areas $\{{{\cal O}_{\lambda}}\}$ form
an atlas for the spacetime manifold in which they lie. Then the
problem of learning the structure of the entire manifold arises. It
was solved by Alexandrov \cite{alexandrov} by introducing the
notion of the {\em nerve} of the covering: the result is encoded
in the structure of mutual intersections of the elements of the
covering.
Within the proposed scheme, the problem is to {\em experimentally}
verify which areas ${{\cal O}_{\lambda}}$ do overlap. This is done by exchanging
information between observers about the events they observe. The
results of the observations could be put into the following table
(Tab. \ref{tab1}) whose rows correspond to events and columns
correspond to the observers.
\begin{table}[h!t]
\begin{center}
\begin{tabular}{||c||c|c|@{$\:\ldots\:$}|c|@{$\:\ldots\:$}||}
\hline
Event label & ${\cal O}_1$ & ${\cal O}_2$ & ${\cal O}_\lambda$ \cr
\hline
1 & + & -- & + \cr
2 & + & + & + \cr
\ldots & \ldots & \ldots & \ldots \cr
$n$ & -- & + & + \cr
\ldots & \ldots & \ldots & \ldots \cr
\hline
\end{tabular}
\end{center}
\caption{The results of observations: if an observer $\lambda$
registers the event $i$ we put ``$+$'' in the appropriate
cell of the table, otherwise we put ``$-$''.}
\label{tab1}
\end{table}
The conclusions we draw from the experiments are necessarily
statistical in nature. In particular, the statement ``the
areas of two observers overlap'' is merely a statistical
hypothesis. To verify it the following criterion is suggested:
\begin{equation}\label{epropos}
\mbox{\parbox[c]{100mm}{\em
If it occurs that the observers ${\cal O}_1$ and ${\cal O}_2$ have
registered the same event, then the areas of their observations do
overlap.
} }
\end{equation}
Note that this criterion is {\em statistical} rather than {\em
logical}. We emphasize that after the observations have been carried out
we can only accept or reject the appropriate hypothesis.
When such a hypothesis is accepted, it gives us complete
information about the {\em nerve} of the covering. One might think
that we are now able to recover the global topology by gluing the
balls together. But this is an illusion: the obstacle is that we
have nothing to glue! Moreover, the geometric realization of the
nerve may be a source of artifacts: for instance, we can cover an
interval $(0,1)$ (having dimension 1) in such a way
\[
\begin{array}{rcl}
{\cal O}_1 &=& (0,0.6) \\
{\cal O}_2 &=& (0.4,1) \\
{\cal O}_3 &=& (0.2,0.8)
\end{array}
\]
\noindent that the appropriate nerve is realized by a triangle
(having dimension 2). If we could exhaust {\em all} the
points of spacetime, the ``real ultimate'' structure of the spacetime
manifold would be recovered. However, what we can really carry out
is to realize a ``homogeneous history'' whose outcome is recorded in
a table like Tab. \ref{tab1}.
\section{Entanglement in histories approach}\label{sentang}
In this section we introduce topology measurements into the
histories approach to quantum theory. It will be based on the
algebraization scheme of the histories approach suggested by
C.~Isham \cite{qlha}. The key ideas of this scheme are
\begin{itemize}
\item to consider {\em propositions} about histories rather than the histories themselves
\item to span a linear space by the elementary propositions about histories
\item to endow the propositions themselves with the additional structure of an orthoalgebra
\end{itemize}
\noindent thus organizing them in a way similar to conventional
quantum mechanics.
In this paper a similar idea is realized. We consider
\begin{itemize}
\item[*] propositions about topologies rather than the topologies
themselves
\item[*] a linear space spanned by the elementary propositions about topologies
\item[*] the structure of an associative algebra on this linear space
\end{itemize}
Let us specify what we mean by propositions about topologies.
There are at least three ways to introduce a topology on a set $M$
\cite{ishamtop}. The first two of them are in a sense exhaustive: either
to define (to list) all open sets, or to define the closure operation
on all subsets of $M$. The third way is more 'economical':
to declare which sequences converge. It will be suitable for us
to replace topology by convergences, for both technical and
operationalistic reasons (section \ref{sspat}). In fact, any
realistic experiment can yield at most a finite sequence of
results. The associative algebras related to propositions about
topology will be built in section \ref{sialg}.
Let us figure out how the notion of an organized team of observers can
be incorporated into the histories approach. Let
\begin{equation}\label{eh1}
A^1_{t_1}U_{t_1t_2} A^2_{t_2}\ldots
U_{t_{n-1}t_n}A^n_{t_n}\psi_0
\end{equation}
\noindent be a homogeneous history. The operators $A^i$ are assumed
to act in a Hilbert space ${\cal H}$. It was suggested by Isham
\cite{qlha} to describe the history (\ref{eh1}) by an element of
the tensor product $\otimes_{i=1,n}{\cal H}$. Then we assume that there
is an 'organizer' of the history whose status is {\em a priori} the
same as that of every member of the team. That means that he has
the same state space ${\cal H}$. Thus each history, that is, a vector
from $\otimes_{i=1,n}{\cal H}$, should be associated with a vector in
the state space ${\cal H}$ of the organizer. The suggested
correspondence should meet the following requirements:
\begin{itemize}
\item[(i)] Neither the number of observers nor their particular
choice of what to measure should influence the form of this
organization
\item[(ii)] If we have an experiment which is a refinement of
different coarser experiments, their results should not contradict
one another
\item[(iii)] This correspondence should be linear in order to
support the superposition principle
\end{itemize}
Mathematically this correspondence is introduced by defining a
family of linear mappings ${\bf O}_n$ ($n=1,2,\ldots$):
\begin{equation} \label{eooo}
{\bf O}_n :{\cal H} \otimes {\cal H} \otimes \ldots \otimes {\cal H}\rightarrow {\cal H}
\end{equation}
\noindent whose form is specified by a particular organization of
the topology measurement. The requirement (iii) is expressed by
the linearity of ${\bf O}_n$. To meet requirement (i) we deal
with the family $\{{\bf O}_n\}$ rather than with a single
mapping.
Now the requirement (ii) can be formulated as a relation between
the mappings ${\bf O}_n$. First,
\[ {\bf O}_1(x) = x \]
\noindent and
\[
{\bf O}_{p+q}(x_1\otimes \ldots \otimes x_{p+q}) =
{\bf O}_2\bigl({\bf O}_p(x_1\otimes \ldots \otimes x_p) \otimes
{\bf O}_q(x_{p+1}\otimes \ldots \otimes x_{p+q}) \bigr)
\]
\noindent In particular
\begin{equation}\label{eass}
{\bf O}_2({\bf O}_2(x\otimes y)\otimes z) =
{\bf O}_2(x\otimes {\bf O}_2(y\otimes z))
\end{equation}
\noindent therefore all the mappings ${\bf O}_n$ can be inductively expressed
through ${\bf O}_2$.
\[
{\bf O}_n(x_1\otimes \ldots \otimes x_{n-1} \otimes x_n) =
{\bf O}_2({\bf O}_{n-1}(x_1\otimes \ldots \otimes x_{n-1}) \otimes x_n)
\]
Being a linear mapping, ${\bf O}_2:{\cal H} \otimes
{\cal H} \to {\cal H}$ generates a bilinear mapping ${\cal H} \times {\cal H} \to
{\cal H}$ whose action on the pair $(x,y)$ we denote simply by
$x\cdot y$. Then the relation (\ref{eass}) reads:
\[
(x\cdot y)\cdot z =
x\cdot (y\cdot z)
\]
So, the organization of a topology measurement is mathematically
expressed by defining an {\em associative} product in ${\cal H}$. Then
the 'organizing operator' (\ref{eooo}) takes the form:
\[
{\bf O}_n(x\otimes y\otimes \ldots \otimes z) = x\cdot y\cdot
\ldots \cdot z
\]
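To make the inductive reconstruction of the family $\{{\bf O}_n\}$ from the single binary map ${\bf O}_2$ concrete, here is a minimal computational sketch (the encoding is illustrative and not part of the formalism): a hand-written $2\times 2$ matrix product stands in for the associative product on ${\cal H}$, and associativity guarantees that the iterated composition does not depend on the bracketing.
\begin{verbatim}
from functools import reduce

def organize(factors, product):
    # O_n(x_1 (x) ... (x) x_n) = x_1 . x_2 . ... . x_n,
    # obtained by iterating the binary map O_2 (here `product').
    return reduce(product, factors)

def matmul2(a, b):
    # A toy associative product: 2x2 matrix multiplication.
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

x = [[1, 2], [0, 1]]
y = [[0, 1], [1, 0]]
z = [[2, 0], [0, 3]]
# Associativity of the product makes O_3 well defined:
assert organize([x, y, z], matmul2) == matmul2(matmul2(x, y), z)
assert organize([x, y, z], matmul2) == matmul2(x, matmul2(y, z))
\end{verbatim}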
Suppose a history realizing a topology measurement 'took place',
that is, the team of observers has carried out a number of yes-no
experiments (Table \ref{tab1}); in other words, the operators
$A^i_{t_i}$ are projectors in ${\cal H}$:
\begin{equation}\label{eh10}
P^1_{t_1}U_{t_1t_2} P^2_{t_2}\ldots
U_{t_{n-1}t_n}P^n_{t_n}\psi_0
\end{equation}
The outcome of each of these yes-no measurements is the selection
of a subspace of ${\cal H}$ (the one associated with the appropriate
projector). The organizer thus has a collection of subspaces of ${\cal H}$
at his disposal. Now let us return to requirement (i): what invariant
object may he construct out of them, having only these subspaces and the
product in ${\cal H}$? This is the algebra ${\cal A}$ spanned by these
subspaces. So, all the available information about the spacetime topology
is encoded in this subalgebra of ${\cal H}$. A way to extract it is to apply
the spatialization procedure described in the next section.
\section{Spatialization procedure and Rota topology}\label{sspat}
Let us consider what sort of spaces can be extracted from algebras.
We begin by discussing what the points ought to be. Suppose for a
moment that the obtained algebra ${\cal A}$ is commutative. In this
case it can be canonically represented by a functional algebra on
an appropriate topological space, obtained via the Gel'fand
representation. The points of this space can then
be thought of as characters. The characters are, in turn,
one-dimensional irreducible representations whose kernels are
maximal ideals. There are several ways to impose a topology on the
set of points \cite{bourbaki}.
In general, when the algebra ${\cal A}$ may be non-commutative, the
scheme of geometrization remains in principle unchanged: we only
pass from characters to classes of irreducible representations,
and, respectively, from maximal ideals to primitive ones. For a
more detailed analysis of the relevance of primitive ideals the
reader is referred to \cite{g1}. So
\[
X = {\rm Prim}\,{\cal A}
\]
\noindent that is, the points are the elements of the primitive
spectrum of ${\cal A}$ (equivalence classes of irreducible
representations). Note that at this point $X$ is a {\em set} not yet
endowed with any structure. The straightforward way to 'topologize' $X$
would be to use the Jacobson topology. Unfortunately, in the finitary
context we are in (section \ref{semp}) this topology (as well as the
other standard ones) reduces to the trivial discrete one.
So, let us seek a weaker structure which could yield
a reasonable topology on $X$.
It is the notion of a convergence space \cite{ishamtop} which is
closest to a topological structure. It is formed by declaring a
relation $(x_n)\rightharpoonup y$ of convergence between sequences and
points:
\[
x_1,x_2,\ldots, x_n, \ldots \rightharpoonup y
\]
\noindent which always gives rise to the following relation on the
points of $X$:
\begin{equation}\label{econv}
x\rightharpoonup y \:\hbox{if and only if}\:
x,x, \ldots,x, \ldots \rightharpoonup y
\end{equation}
Having any relation $\rightharpoonup$ on $X$, we are always in a position to
define a topology on $X$ as the strongest topology in which
(\ref{econv}) holds. So, we shall introduce a topology on $X = {\rm
Prim}\,{\cal A}$ according to the following scheme:
\[
(\hbox{relation on }X) \longrightarrow (\hbox{topology on }X)
\]
Recall that the elements of $X$ are the primitive ideals of
${\cal A}$, which are, in turn, subsets of ${\cal A}$. Having two
such ideals $X,Y$ we can form both their intersection $X\cap Y$ and
their product $X\cdot Y$, the ideal spanned by all products
$x\cdot y$ with $x\in X$, $y\in Y$. Note that in general $X\cdot
Y\neq Y\cdot X$. However, both $X\cdot Y$ and $Y\cdot X$ always lie
in (but may not coincide with) $X\cap Y$. Relations between
primitive ideals have been studied before; G.-C. Rota \cite{rota}
introduced the following one in the context of enumerative
combinatorics:
\begin{equation}\label{etends}
X \rho Y \:\hbox{if and only if}\: X\cdot Y
\stackrel{\neq}{\subset} X\cap Y
\end{equation}
We shall call the topology generated by this relation "$\rho$"
the {\sc Rota topology} on the set of primitive ideals.
We see that in order to judge the measured topology it suffices
to build an 'organizing' algebra. A particular form of this
algebra should be produced from the table of observations (like
Tab. \ref{tab1}). There is no {\em a priori} preferred way to build
such an algebra: different models of 'data processing' may give
different spatializations. However, there exists a 'classical
spatialization' using no quantum models --- the Sorkin
discretization scheme (section \ref{sfinsub}). The problem of
correspondence then arises: is it possible to suggest an
organizing algebra, based on the table of results, such that the
appropriate topological spaces (the Rota and Sorkin topologies)
coincide? This problem will be solved in section \ref{sialg}.
\section{Finitary substitutes}\label{sfinsub}
The Sorkin spatialization procedure imposes on the set $N$ of events
the topology whose prebase is formed by the subsets of events
observed by each observer. We consider this construction in more
detail, following the account suggested in \cite{g2,g1}.
Associate with any event $i$ the set $\Lambda_i \subseteq
\Lambda$ of observers which registered it:
\begin{equation}\label{e11}
\Lambda_i = \{\lambda \in
\Lambda \mid \; \hbox{the event}\; i \; \hbox{was
registered by }\, \lambda\}
\end{equation}
\noindent and, denoting by $N_\lambda \subseteq N$ the set of events
registered by the observer $\lambda$ (so that $i\in N_\lambda$ if and only
if $\lambda \in \Lambda_i$), consider the relation $\rightharpoonup$ on the
set of events:
\begin{equation}\label{e12}
i\rightharpoonup j \:\:\, \hbox{if and only if} \:\:\,
\forall \lambda \: j\in N_\lambda \, \Rightarrow \,
i\in N_\lambda
\end{equation}
Note that the relation $\rightharpoonup$ is evidently reflexive ($i\rightharpoonup i$)
and transitive ($i\rightharpoonup j,j\rightharpoonup k\,\Rightarrow \, i\rightharpoonup k$). Such
relations are called {\sc quasiorders}. Consider the equivalence
relation $\leftrightarrow$ on the set of events $N$:
\begin{equation}\label{e12a}
i\leftrightarrow j \:\:\, \hbox{if and only if} \:\:\,
i\rightharpoonup j \: \hbox{ and } \: j\rightharpoonup i
\end{equation}
\noindent and consider the quotient set
\begin{equation}\label{e13}
X \: = \: N/\leftrightarrow
\end{equation}
\noindent called {\sc finitary spacetime substitute} \cite{sorkin}
or pattern space \cite{prg}. For $x,y\in X$ introduce the relation
$x\to y$:
\begin{equation}\label{e13a}
x\to y \:\:\, \hbox{if and only if} \:\:\,
\forall i\in x, \: \forall j\in y \: \, \,i\rightharpoonup j
\end{equation}
\noindent (note that the expressions like $i\in x$ make sense since
the elements of $X$ are subsets of $N$). The relation $\to$
(\ref{e13a}) on $X$ is:
\begin{itemize}
\item[(i)] reflexive: $x\to x$
\item[(ii)] transitive: $x\to y,y\to z\,\Rightarrow \, x\to z$
\item[(iii)] antisymmetric: $x\to y,y\to x\,\Rightarrow \, x=y$
\end{itemize}
The relations having these three properties are called {\sc
partial orders}. It is known (see, e.g., \cite{sorkin}) that there
is a 1--1 correspondence between partial orders and topologies on
finite sets, and that the topology of the manifold can be recovered
when the number of events and observers grows to infinity
\cite{sorkin}.
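For a small table of observations the construction of the finitary substitute is easy to mechanize. The following sketch (the encoding and function names are illustrative) takes the table as a Python dictionary mapping each event to the set of observers that registered it, and returns the quotient classes of (\ref{e13}) together with the induced partial order (\ref{e13a}).
\begin{verbatim}
def quasiorder(table):
    # i -> j  iff every observer registering j also registers i,
    # i.e. Lambda_j is contained in Lambda_i.
    return {(i, j) for i in table for j in table if table[j] <= table[i]}

def finitary_substitute(table):
    # Quotient the events by the equivalence i <-> j and return the
    # induced partial order on the classes (the Sorkin poset).
    rel = quasiorder(table)
    classes = []
    for i in table:
        for c in classes:
            j = next(iter(c))          # one representative suffices
            if (i, j) in rel and (j, i) in rel:
                c.add(i)
                break
        else:
            classes.append({i})
    classes = [frozenset(c) for c in classes]
    order = {(x, y) for x in classes for y in classes
             if all((i, j) in rel for i in x for j in y)}
    return classes, order
\end{verbatim}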
To conclude this section, consider an example of a finitary
substitute from \cite{g2}. Suppose there are four observers
${\cal O}_1,\ldots,{\cal O}_4$ living on the circle $e^{\imath\varphi}$,
whose areas of observation are:
\[
\begin{array}{ccl}
{\cal O}_1 & \mapsto & \{-2\pi/3 < \varphi < 2\pi/3\} \cr
{\cal O}_2 & \mapsto & \{\pi/3 < \varphi < 5\pi/3\} \cr
{\cal O}_3 & \mapsto & \{-3\pi/4 < \varphi < 2/3\pi\} \cr
{\cal O}_4 & \mapsto & \{\pi/4 < \varphi < 3\pi/4\}
\end{array}
\]
Then the table of outcomes takes the form:
\begin{center}
\begin{tabular}{||c||c|c|c|c||}
\hline
Event label & ${\cal O}_1$ & ${\cal O}_2$ & ${\cal O}_3$ & ${\cal O}_4$ \cr
\hline
0 & + & -- & -- & -- \cr
$\pi/2$ & + & + & -- & + \cr
$\pi$ & -- & + & -- & -- \cr
$3\pi/2$ & + & + & + & -- \cr
\hline
\end{tabular}
\end{center}
Then the relation "$\rightharpoonup$" (\ref{e12}) is already partial order:
\begin{equation}\label{e17}
\begin{array}{ccccccc}
\pi/2 & \rightharpoonup & 0 &;\:& \pi/2 & \rightharpoonup & \pi \cr
3\pi/2 & \rightharpoonup & 0 &;\: & 3\pi/2 & \rightharpoonup & \pi
\end{array}
\end{equation}
\noindent and the equivalence relation (\ref{e12a}) turns out to be
the equality. Hence the finitary substitute $X$ is the whole set of
events:
\[
x = \{\pi/2\} \; , \;
y = \{3\pi/2\} \; , \;
z = \{0\} \; , \;
w = \{\pi\}
\]
\noindent and the partial order on $X$ is depicted in Fig.~\ref{f17}.
\begin{figure}
\caption{The finitary substitute of the circle.}
\label{f17}
\end{figure}
\noindent {\bf Remark.} As we have already mentioned, there is an
equivalent way to define a topology in terms of converging sequences.
It is worth mentioning that we use the symbol $\to$ for the
partial order (\ref{e13a}) due to the following fact:
\[
x \to y \qquad \hbox{if and only if} \qquad
\lim\{x,x,\ldots,x,\ldots\} = y
\]
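Fed into the hypothetical \verb|quasiorder| function sketched above, the table of outcomes of this example reproduces exactly the four nontrivial relations of (\ref{e17}):
\begin{verbatim}
circle = {          # event -> observers registering it
    '0':     {1},
    'pi/2':  {1, 2, 4},
    'pi':    {2},
    '3pi/2': {1, 2, 3},
}
rel = quasiorder(circle)
print(sorted((i, j) for (i, j) in rel if i != j))
# [('3pi/2', '0'), ('3pi/2', 'pi'), ('pi/2', '0'), ('pi/2', 'pi')]
\end{verbatim}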
\section{Incidence algebras}\label{sialg}
There is no direct evidence of the compatibility of the Sorkin and
histories approaches to empirical spacetime topology. In this
section we solve this problem: we explicitly suggest a
construction which, starting from the table of observations, produces
an algebra whose space of primitive ideals, endowed with the Rota
topology, is homeomorphic to the Sorkin finitary substitute obtained
from the same table.
As discussed in section \ref{sentang}, in order to build a
model of organized spacetime observations we need to introduce an
algebra, that is, the following two objects:
\begin{itemize}
\item A linear space $H$
\item A product operation on the space $H$
\end{itemize}
\noindent somehow generated by the table of observations (Tab.
\ref{tab1}), where $H$ will stand for a model of ${\cal H}$ and the
product will capture the organization.
As promised, we shall deal with a linear space $H$ spanned by
the elementary propositions about topology. What could be the form
of such propositions? Each of them should involve at least two
points, since the subject of topology is precisely the mutual
positions of events. We shall choose the simplest model, namely,
that of two-point statements (a higher-order situation was
considered in \cite{jmp96}). Such elementary statements were
already formulated in (\ref{epropos}).
The form of the algebra we suggest will be similar to that
introduced in \cite{dmhv}. Let $p,q$ be two events, and denote by the
symbol (sic!) ${|p\!><\!q|}$ the proposition associated with this pair.
Form the linear span of all such symbols: ${\rm span}\{ {|p\!><\!q|} \}$ and
define the product on it:
\begin{equation}\label{epq}
{|p\!><\!q|} \cdot |r\!><\!s| \,=\, <\!q|r\!> \cdot |p\!><\!s|
\end{equation}
\noindent where \( <\!q|r\!> = \delta_{qr} \). Note that the obtained
product is associative but, in general, not commutative.
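A small computational sketch may make the rule (\ref{epq}) explicit (the encoding is illustrative: an element of the span is a Python dictionary mapping a pair of events $(p,q)$ to its coefficient):
\begin{verbatim}
from collections import defaultdict

def multiply(u, v):
    # |p><q| . |r><s| = delta_{qr} |p><s|, extended bilinearly.
    w = defaultdict(float)
    for (p, q), a in u.items():
        for (r, s), b in v.items():
            if q == r:
                w[(p, s)] += a * b
    return dict(w)

print(multiply({('a', 'b'): 1.0}, {('b', 'c'): 1.0}))  # {('a', 'c'): 1.0}
print(multiply({('b', 'c'): 1.0}, {('a', 'b'): 1.0}))  # {}  (zero)
\end{verbatim}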
In order to take into account the results of the measurement (Table
\ref{tab1}) we form the linear space
\[
H \,=\, {\rm span} \{ {|p\!><\!q|} \hbox{ such that } p\rightharpoonup q \}
\]
\noindent where $p\rightharpoonup q$ (\ref{e12}) means that $p$ was
registered by every observer who registered $q$. To ensure that
the obtained space can really describe an organization (in the
sense of section \ref{sentang}), we have to check that $H$ is closed
under the product, that is, that $H$ is an algebra.
\noindent {\bf Proposition 1.} The linear space $H$ with the product
(\ref{epq}) is an associative algebra.

\noindent {\em Proof.\/} Let ${|p\!><\!q|}$ and $|r\!><\!s|$ be in $H$,
that is, $p\rightharpoonup q$ and $r\rightharpoonup s$. If $q\neq r$ then their
product is zero. If $q=r$ then, according to (\ref{epq}), their
product is ${|p\!><\!q|} \cdot |q\!><\!s| = |p\!><\!s|$, which is an element of $H$
since the relation $\rightharpoonup$ is transitive: $p\rightharpoonup q, q\rightharpoonup s$
implies $p\rightharpoonup s$.
\noindent {\bf Remark.} Algebras of this sort (called incidence
algebras) were introduced by Rota \cite{rota} in a slightly
different way.
Now let us apply the spatialization procedure described in section
\ref{sspat}. The primitive spectrum of the algebra $H$ was
calculated in \cite{drs}; it consists of all the ideals of the
form:
\begin{equation}\label{eprim}
X_s = {\rm span} \{ {|p\!><\!q|} :\; {|p\!><\!q|} \neq |s\!><\!s| \}
\end{equation}
\noindent where $s$ ranges over all equivalence classes with
respect to the relation ``$\leftrightarrow$'' (\ref{e12a}) on $N$,
that is, over events. So, at the first stage of the spatialization
procedure we already have a canonical bijection between the
elements of the primitive spectrum of the algebra $H$ and the
events in Sorkin's discretization scheme (section \ref{sfinsub}).
In order to show the compatibility of the two schemes we have to
show that the Rota topology on the set of ideals $X_s$ is the same as
the topology of Sorkin.
Let us figure out the form of the relation $\rho$ (\ref{etends})
for the suggested algebra. Along the way we shall see that the
relation $\rho$ can be thought of as a sort of 'proximity'
between events. So, let $r,s$ be two events.
\noindent {\bf Proposition 2.} Let $X_r, X_s$ be two primitive ideals.
Then $X_r\rho X_s$ if and only if $r\rightharpoonup s$ and there is no
$t\neq r,s$ such that $r\rightharpoonup t \rightharpoonup s$.
\noindent {\em Proof\/} will be carried out exhaustively: we shall
consider all possible cases.
\begin{itemize}
\item {\bf Case 1.} $r=s$. Consider
$X_r\cdot X_r$. To prove that this product coincides with $X_r$
recall its definition (\ref{eprim}). Let $|a\!><\!b| \in X_r$
(that is $a\neq r$ or $b\neq r$) and prove that it can be
represented as the product of two elements from $X_r$. If $a\neq r$
then $|a\!><\!b| = |a\!><\!a|\cdot |a\!><\!b|$. If $b\neq r$ then
$|a\!><\!b| = |a\!><\!b|\cdot |b\!><\!b|$. Therefore $X_r\cdot
X_r=X_r$, and $X_r\overline{\rho} X_r$.
\item {\bf Case 2.} $r\not\rightharpoonup s$. Consider an element ${|p\!><\!q|}$ from the
intersection of the ideals:
\[
X_r\cap X_s \,=\,{\rm span} \{ {|p\!><\!q|} :\;{|p\!><\!q|} \neq |r\!><\!r| \hbox{ and }
{|p\!><\!q|} \neq |s\!><\!s|\}
\]
\noindent and show that it belongs to the product
\[
X_r\cdot X_s \,=\, {\rm span} \{ {|p\!><\!q|} |a\!><\!b|:\; {|p\!><\!q|} \neq |r\!><\!r|
\hbox{ and } |a\!><\!b|\neq |s\!><\!s| \}
\]
\noindent If $p=q\neq r,s$ then $|p\!><\!p| = |p\!><\!p|\cdot|p\!><\!p|$. If
$p\neq q$ then $p\neq r$ or $q\neq s$ (since $r\not\rightharpoonup s$). Then
${|p\!><\!q|} = |p\!><\!p|\cdot {|p\!><\!q|}$ or ${|p\!><\!q|} = {|p\!><\!q|}\cdot |q\!><\!q|$, respectively.
Therefore $X_r\overline{\rho} X_s$.
\item {\bf Case 3.} $r\rightharpoonup s$ and there is $t\neq r,s$ such that
$r\rightharpoonup t \rightharpoonup s$. Consider an element ${|p\!><\!q|}$ from the
intersection of the ideals:
\[
X_r\cap X_s \,=\,{\rm span} \{ {|p\!><\!q|} :\;{|p\!><\!q|} \neq |r\!><\!r| \hbox{ and }
{|p\!><\!q|} \neq |s\!><\!s|\}
\]
\noindent and show that it belongs to the product
\[
X_r\cdot X_s \,=\, {\rm span} \{ {|p\!><\!q|} |a\!><\!b|:\; {|p\!><\!q|} \neq |r\!><\!r|
\hbox{ and } |a\!><\!b|\neq |s\!><\!s| \}
\]
\noindent If $p=q\neq r,s$ then $|p\!><\!p| =
|p\!><\!p|\cdot|p\!><\!p|$. Let $p\neq q$ and ($p\neq r$ or $q\neq
s$), then ${|p\!><\!q|} = |p\!><\!p|\cdot {|p\!><\!q|}$ (if $p\neq r$) or ${|p\!><\!q|} = {|p\!><\!q|}
\cdot |q\!><\!q|$ (if $q\neq s$). Finally let ${|p\!><\!q|} = |r\!><\!s|$,
then $|r\!><\!s| = |r\!><\!t|\cdot |t\!><\!s|$, and we again have
$X_r\overline{\rho} X_s$.
\item {\bf Case 4.} $r\rightharpoonup s$ and there is no $t\neq r,s$ such
that $r\rightharpoonup t \rightharpoonup s$. Let us show that the element
$|r\!><\!s|$ from the intersection of the ideals:
\[
X_r\cap X_s \,=\,{\rm span} \{ {|p\!><\!q|} :\;{|p\!><\!q|} \neq |r\!><\!r| \hbox{ and }
{|p\!><\!q|} \neq |s\!><\!s|\}
\]
\noindent is not an element of the product
\[
X_r\cdot X_s \,=\, {\rm span} \{ {|p\!><\!q|} |a\!><\!b|:\; {|p\!><\!q|} \neq |r\!><\!r|
\hbox{ and } |a\!><\!b|\neq |s\!><\!s| \}
\]
\noindent Suppose this is not the case, then $|r\!><\!s| = \sum
C_{atc}|a\!><\!t|\cdot|t\!><\!c|$. Multiply this equality (in $H$) by
$|r\!><\!r|$ from the left and by $|s\!><\!s|$ from the right. Then $|r\!><\!s| =
\sum C_{rts}|r\!><\!t|\cdot|t\!><\!s|$. However there is no $t$ such that
$r\rightharpoonup t \rightharpoonup s$, therefore this sum is zero, while $|r\!><\!s|\neq
0$.
\end{itemize}
\noindent
So we see that two primitive ideals are related by $\rho$
if and only if the corresponding events are as close as possible in
the Hasse diagram of the partial order associated with the Sorkin
topology (Case 4).
The results of this section can be summarized in the following
theorem.
\noindent {\bf Theorem.} {\em The Sorkin topology of a finitary
substitute coincides with the Rota topology of its incidence
algebra. }
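In computational terms, Proposition 2 says that the Rota relation between the primitive ideals is just the covering relation of the Sorkin partial order, so it can be read off from the output of the finitary-substitute sketch of section \ref{sfinsub} (the function names are again illustrative):
\begin{verbatim}
def rota_relation(classes, order):
    # X_r rho X_s  iff  r -> s, r != s, and no third class t
    # satisfies r -> t -> s  (Proposition 2).
    return {(r, s) for (r, s) in order if r != s
            and not any((r, t) in order and (t, s) in order
                        for t in classes if t != r and t != s)}
\end{verbatim}
For the circle example of section \ref{sfinsub} this yields precisely the four covering pairs corresponding to (\ref{e17}).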
\section*{Concluding remarks}
Initially, in the histories approach to quantum mechanics the
existence of spacetime as a fixed manifold was presupposed
\cite{ha}. The algebraic version \cite{qlha} of this approach did
not give up this presupposition, but it rendered it rudimentary.
In this paper we take the next step, and spacetime becomes an
observable, up to its combinatorial approximation.
The core of the suggested quantum scheme is the spatialization
procedure. We have kept it as close as possible to the
standard spatialization due to Gel'fand. The peculiarity
of our approach is that we impose a new topology, namely that of
Rota (section \ref{sspat}). The reason for doing so is that
in finite-dimensional situations (which we consider to be the realistic
ones) the Gel'fand topology reduces to the trivial discrete one.
The suggested machinery is a bridge between the algebraic
version of the histories approach \cite{qlha} and combinatorial
models such as lattice-like discretization schemes \cite{sorkin}.
On the other hand, outside the histories approach an algebraic
construction merging finitary substitutes into a quantum-like
environment was already carried out by the 'poseteers group' (a
term introduced by F.~Lizzi) in \cite{g2,g1}. A comparison of the
two constructions is given in Table \ref{tabc}.
\begin{table}[h!t]
\begin{tabular}{p{0.35\textwidth}
@{\hspace{0.1\textwidth}}p{0.35\textwidth}}
\hline \\
{\sc Poseteers' approach} & {\sc Incidence algebras}\\
&\\
\multicolumn{2}{c}{\bf The algebras}\\
&\\
$C^*$-algebras of infinite dimensions &
finite dimensional algebras with no involution \\
&\\
\multicolumn{2}{c}{\bf The points}\\
&\\
kernels of irreducible \mbox{$*$-re}presentations &
kernels of irreducible representations \\
&\\
\multicolumn{2}{c}{\bf The topology}\\
&\\
Jacobson topology & Rota topology \\
&\\
\hline \\
\end{tabular}
\caption{The comparison of the two algebraic schemes.}
\label{tabc}
\end{table}
Note that the incidence algebras are not semisimple. At first sight
this seems to be an essential drawback of the theory, taking it
far from the existing quantum constructions. However, in the
finite-dimensional case the Jacobson topology will always be
discrete. On the other hand, the lack of semisimplicity makes it
possible to develop differential calculi on finitary models, which
may be considered as a link between the poseteers' approach and the
formalism of discrete differential manifolds \cite{dmhv}.
In \cite{dmhv} finite-dimensional semisimple commutative algebras
are studied and a differential structure is built in terms of
modules of differential forms (the dual of the module of
derivations in the classical case), while there are no nonzero
derivations in the algebras themselves. By contrast, the incidence
algebras possess derivations, which makes it possible to introduce
tensor calculi based on the notion of duality \cite{dv}. Moreover, since all their derivations are inner, they already contain vectors (for details the reader is referred to \cite{ricomm}).
Finally, we dwell on the algebra of symbols ${|p\!><\!q|}$ introduced in
section \ref{sialg}. Irrespective of the particular form of the
organizing algebra (such as the incidence algebra in our case) we can
always write down expressions like
\[ \large
\sum_{q_1,\ldots,q_n} |p\!><\!q_1|\circ |q_1\!><\!q_2|\circ
\ldots \circ |q_n\!><\!r|
\]
\noindent where the operation "$\circ$" is a multiplication
generating a particular organization of the team (section
\ref{sentang}), and $q_i,q_{i+1}$ are neighbor (in the sense of
Rota topology) events. So, this expression can be thought of as the
sum over trajectories.
One of the authors (RRZ) expresses his gratitude to prof. G.Landi
who organized his visit to the University of Trieste (supported by
the Italian National Research Council) and conveyed many important
ideas. Stimulating discussions with P.M.Ha\-jac,
S.V.Kra\-s\-ni\-kov, and Te\-t\-sue Ma\-s\-u\-da are appreciated.
The work was supported by the RFFI research grant (96--02--19528).
One of us (R.R.Z.) acknowledges the financial support from the
Soros foundation (grant A98-42) and the research grant 97--14.3--62
"Universities of Russia".
\end{document} |
\begin{document}
\title{Identification in the Random Utility Model}
\thanks{I thank the editor, Faruk Gul, and two anonymous referees for their comments that have helped improve this paper. I also thank Luca Anderlini, Axel Anderson, Roger Lagunoff, Jay Lu, Marco Mariotti, Alexandre Poirier, Collin B. Raymond, John Rehbeck, Tomasz Strzalecki, and seminar participants at D-TEA 2021 for helpful comments and discussions. I am especially grateful to Peter Caradonna, Christopher Chambers, and Yusufcan Masatlioglu for their continued support and insightful conversations throughout the course of this project.\\
Turansick: Department of Economics, Georgetown University, ICC 580 37th and O Streets NW, Washington DC 20057. E-mail: \texttt{[email protected]} }
\author{Christopher Turansick}
\date{\today}
\maketitle
\begin{abstract}
The random utility model is known to be unidentified, but there are times when the model admits a unique representation. We offer two characterizations for the existence of a unique random utility representation. Our first characterization puts conditions on a graphical representation of the data set. Non-uniqueness arises when multiple inflows can be assigned to multiple outflows on this graph. Our second characterization provides a direct test for uniqueness given a random utility representation. We also show that the support of a random utility representation is identified if and only if the representation itself is identified.
\end{abstract}
\textit{JEL Classification:} D01
\textit{Keywords:} Random Utility; Stochastic Choice; Identification
\pagebreak
\section{Introduction}
The fundamental goal of revealed preference theory is recovering an agent's preference from choice data. When faced with rational agents, revealed preference theorists have shown that recovery can be achieved. However, agents are not rational as choices are stochastic. The natural relaxation of the standard rationality assumption is to stochastic rationality; agents are rational conditional on a varying unobserved state. This type of rationality is modeled by the random utility model of \citet{block1959random}. Instead of a single preference, agents possess a distribution over preferences. The goal now is to recover a distribution over preferences instead of just a single preference. Unfortunately, there has been less success in recovering a distribution of preferences.
While the random utility model is identified when there are three or fewer alternatives \citep{block1959random}, for larger environments, the random utility model is in general not identified \citep{barbera1986falmagne,fishburn1998stochastic}.\footnote{\citet{falmagne1978representation}, \citet{barbera1986falmagne}, and \citet{gibbard2021disentangling} tell us that the random utility model is identified up to the probability weights on contour sets.} The heart of the identification problem is as follows. We can recover the probability that $w$ is preferred to $x$ and the probability that $y$ is preferred to $z$, but we cannot necessarily recover the probability that both $w$ is preferred to $x$ and $y$ is preferred to $z$ \citep{strzalecki2017stochastic}. Notably, not even the support of the rationalizing distribution is guaranteed to be identified. This means that analysts are unable to even recover the types of preferences in a population. There are distributions with disjoint supports that induce the same set of choice probabilities \citep{fishburn1998stochastic}.
This uniqueness problem gives rise to both empirical and theoretical concerns. From a theoretical perspective, identification of a model allows theorists to map the parameters of their models to behavioral outcomes. One of the main goals of choice theory is to provide simplified approximations of reality in an attempt to explain observed choice behavior. Identification of a model allows us to do exactly this. From an empirical perspective, identification of a choice model allows social planners and mechanism designers to perform proper counterfactual analysis. Identification guarantees that counterfactual analysis will be accurate up to the choice of model. When choice behavior has multiple representations, counterfactuals may take on different values for each one of these representations. This is especially important when we are considering counterfactuals and policy questions that rely on more than just choice frequencies.
To accommodate the random utility model's lack of identification, we ask a new type of question. We ask which rationalizable data sets can be uniquely represented by the random utility model. This differs from the standard approach in the literature which puts further assumptions on the random utility model to ensure that every rationalizable data set is uniquely represented. We take our approach primarily for two reasons. Our first reason is to avoid the restrictive behavioral implications of identifying assumptions. As an example, consider the Luce model and its variants. In practice, these are the most commonly used random utility models. Identification of these models comes at the cost of behavioral implications that are not observed in practice and ex ante unreasonable counterfactuals.\footnote{Behavioral and counterfactual shortcomings of the Luce class of models are well documented. To name a few, we have the red bus/blue bus problem \citep{debreu1960review}, overestimation of demand for goods with high prices \citep{bajari2001discrete}, misleading cross elasticities \citep{ackerberg2005unobserved}, and demand being discontinuous in characteristics \citep{lu2021mixed}.} Our second reason for this approach is to give insight in to how non-uniqueness arises. As part of our analysis, we pin down the exact graphical structure of non-uniqueness in the random utility model. We believe this will aid researchers in developing new identified random utility models.\footnote{Notably, \citet{fishburn1998stochastic} develops an example that shows one way that non-uniqueness arises. We show that, in essence, this is the only way that non-uniqueness arises.}
In order to pin down the graphical structure of non-uniqueness, we use the graphical construction of \citet{fiorini2004short} to take a look at the counterexample of \citet{fishburn1998stochastic}. Our first result generalizes the structure of this example and characterizes which data sets have a unique random utility representation. Non-uniqueness arises when multiple inflows can be assigned to multiple outflows. It is the fact that the random utility model does not pin down the assignment of inflows to outflows that causes non-uniqueness. Any identified random utility model will either preclude the event where this assignment problem arises or pin down the assignment of inflows to outflows.
Our second result characterizes which distributions over preferences are observationally unique. To do this, we translate the conditions used in our first result into conditions about the contour sets of preferences in the support of the distribution. A distribution will be observationally equivalent to some other distribution if there are two preferences in the support that satisfy the following. The two preferences must share some common upper and associated lower contour set and the ordering within these two contour sets differs between the preferences. For larger choice environments, this result means that a unique representation and a full support representation are mutually exclusive.\footnote{This observation was first noted in \citet{mcclellon2015unique}. The result of \citet{mcclellon2015unique} speaks to data sets that are generated by full support distributions. Our result does not require knowledge of the generating process. We only need knowledge of observables.}
The last of our three results characterizes when the support of a random utility representation is uniquely identified. With complete data, every random utility representation has the same support if and only if the random utility representation is unique. In other words, when we have complete data, pinning down the support of a representation is just as hard as pinning down the representation itself.
The rest of this paper is organized as follows. We close this section with a review of related literature. Section 2 reviews the random utility model and the counterexample to uniqueness of \citet{fishburn1998stochastic}. In Section 3, we introduce and discuss our main results. We conclude in Section 4.
\subsection{Related Literature}
Our paper builds on the literature that studies the empirical content of the random utility model of \citet{block1959random}. \citet{falmagne1978representation}, \citet{barbera1986falmagne}, \citet{mcfadden1990stochastic}, and \citet{fiorini2004short} offer characterizations and discussions of the random utility model. More closely related to our work, there is a strand of literature that studies the uniqueness properties of the random utility model. \citet{fishburn1998stochastic} offers an example which shows that the random utility model is not identified. \citet{mcclellon2015unique} uses and extends this example to show that the problem of non-uniqueness is widespread for larger choice environments. \citet{dardanoni2020mixture} and \citet{gibbard2021disentangling} study random utility when agents have limited cognitive ability. \citet{dardanoni2020mixture} study identification of both the underlying preferences as well as the cognitive parameters of the decision makers using a stronger type of data, mixture choice data. \citet{gibbard2021disentangling} studies which uniqueness properties of the random utility model remain when agents have limited attention. Our paper lies in the intersection of these two strands of literature. We offer a characterization of when the random utility model admits a unique representation.
Our paper is also related to the literature which extends the random utility model in order to recover uniqueness. \citet{gul2006random} extend the random utility model to choice over lotteries. They show that restricting the set of preferences to expected utility preferences recovers uniqueness. \citet{lin2020random} studies this uniqueness result by considering to what extent the axioms of expected utility can be relaxed while maintaining uniqueness. Lin shows that the relaxation of the independence axiom to the betweenness axiom \citep{dekel1986axiomatic} causes uniqueness to be lost. \citet{yang2021random} considers randomization over quasi-linear preferences and choice over price-indexed bundles. The restriction to quasi-linear preferences leads to identification of the model. These papers recover uniqueness by restricting the set of preferences allowed by the model.
More recently, there is a strand of literature which recovers uniqueness by putting assumptions on the support of a random utility representation. \citet{apesteguia2017single} are the first to do this. They extend the random utility model by asking that the support of the representation satisfy the single-crossing property with respect to an exogenous order. \citet{filiz2020progressive} extend this result by considering randomization over choice functions while maintaining the single-crossing assumption. \citet{honda2021random} takes a different approach. Instead of assuming single-crossing, Honda assumes a random cravings condition. The random cravings condition supposes that there is some underlying true preference and that every preference in the support of a representation only differs from this true preference by the ranking of a single alternative. Unlike the prior collection of papers, the models of these papers allow any preference to be in the support of some representation. These papers simply restrict which preferences can concurrently be in the support of a single representation.
\section{The Random Utility Model and Fishburn}
To begin, we review the random utility model (RUM). Let $X$ be a finite set of alternatives. Let $\Pi$ be the set of linear orders over $X$.\footnote{A linear order is an antisymmetric, transitive, and complete binary relation.} We use $\pi$ to denote an element of $\Pi$. We use the notation $\pi(A)>\pi(B)$ to denote that every element of $A$ is ranked higher than every element of $B$ according to $\pi$. This implies no further restrictions on how $\pi$ ranks elements of $A$ against other elements of $A$. The same is true for elements of $B$. When $A = \{x\}$, we use the notation $\pi(x)$. Further, we call $\Delta (\Pi)$ the set of probability distributions over $\Pi$. We say that an agent makes decisions according to RUM if they are endowed with a $\nu \in \Delta (\Pi)$ and, whenever they make a decision, they draw a linear order according to this $\nu$ and then choose the maximal element according to the drawn linear order.
We consider stochastic choice data for each non-empty subset of $X$. To formalize this, the data we consider is called a system of choice probabilities. A pair $(X,P)$ is a system of choice probabilities if for all non-empty subsets $A$ of $X$, $P_A(\cdot)$ defines a probability distribution over the elements of $A$. A system of choice probabilities captures the choice probability of each element $x$ of each non-empty subset $A$ of $X$. We now define what it means for data to be rationalizable by RUM.
\begin{definition}
We say that a system of choice probabilities is rationalizable if there exists some $\nu \in \Delta (\Pi)$ such that for all non-empty $A \subseteq X$ and all $x \in A$ we have
$$P_A(x) = \sum_{\pi \in \Pi} \nu( \pi) \mathbf{1}\{ \pi(x) > \pi(A \setminus \{x\}) \}.$$
\end{definition}
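For small examples this definition can be checked mechanically. In the following illustrative sketch (the encoding is not part of the model) a linear order is a Python tuple listing the alternatives from best to worst, a distribution $\nu$ is a dictionary from such tuples to weights, and the function returns the induced choice probabilities $P_A(x)$ for every non-empty menu $A$:
\begin{verbatim}
from itertools import chain, combinations

def choice_probabilities(X, nu):
    # P_A(x) = sum of nu(pi) over the orders pi whose pi-best
    # element of A is x, for every nonempty menu A and every x in A.
    menus = chain.from_iterable(combinations(X, k)
                                for k in range(1, len(X) + 1))
    P = {}
    for menu in menus:
        A = frozenset(menu)
        for x in A:
            P[(x, A)] = sum(w for pi, w in nu.items()
                            if min(A, key=pi.index) == x)
    return P
\end{verbatim}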
\citet{falmagne1978representation} was the first to characterize rationalizability for RUM. The characterization relies on the Block-Marschak polynomials, henceforth BM-polynomials, which were first introduced by \citet{block1959random}. We state the definition of the BM-polynomials here.
\begin{definition}
For a non-empty set $A \subseteq X$ and an element $x \in A$, the BM-polynomial for $x$ in $A$ is given by
\begin{equation*}
\begin{split}
q(x,A) & = P_A(x) - \sum_{A \subsetneq A'} q(x,A') \\
& = \sum_{A \subseteq A'} (-1)^{|A' \setminus A|} P_{A'}(x).
\end{split}
\end{equation*}
\end{definition}
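The second line of the definition gives a direct inclusion-exclusion recipe for computing the BM-polynomials from data. A minimal sketch, reusing the encoding of the previous snippet (with $P$ the dictionary it returns):
\begin{verbatim}
from itertools import combinations

def bm_polynomial(x, A, P, X):
    # q(x, A) = sum over supersets A' of A of (-1)^{|A' \ A|} P_{A'}(x).
    A = frozenset(A)
    rest = [z for z in X if z not in A]
    q = 0.0
    for k in range(len(rest) + 1):
        for extra in combinations(rest, k):
            q += (-1) ** k * P[(x, A | frozenset(extra))]
    return q
\end{verbatim}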
To interpret the BM-polynomials, we turn to a result of \citet{falmagne1978representation}. Let $M_{x,A}$ be the set of linear orders on $X$ that rank $x$ exactly at the top of $A$.
$$M_{x,A}=\{\pi|\pi(X \setminus A) > \pi(x) > \pi(A \setminus \{x\}) \}$$
\citet{falmagne1978representation} shows that a distribution $\nu$ rationalizes a system of choice probabilities if and only if $q(x,A) = \nu(M_{x,A})$ for all such $x \in A \subseteq X$. This is also the classic uniqueness result for RUM. Any two representations of a system of choice probabilities must put the same probability weight on each contour set. The characterization of RUM by \citet{falmagne1978representation} states that all BM-polynomials must be non-negative. We are interested in characterizing when the rationalizing probability distribution is unique.
Our characterization combines the graphical representation of RUM presented in \citet{fiorini2004short} with the intuition of the counterexample to uniqueness presented in \citet{fishburn1998stochastic}. We begin with the graphical construction due to \citet{fiorini2004short}. Consider a graph with nodes indexed by the elements of $2^X$, the power set of $X$. We will use the set indexing a node to refer to that node. There exists an edge between two nodes $A$ and $B$ if one of the following is true.
\begin{enumerate}
\item $A \subseteq B$ and $|B \setminus A| = 1$
\item $B \subseteq A$ and $|A \setminus B| = 1$
\end{enumerate}
In other words, the edge set of this graph is formed by applying the covering relation of $\subseteq$ to $X$. Now we assign weights to these edges. Assign $q(x,A)$ to the edge connecting $A$ and $A\setminus \{x\}$. \citet{fiorini2004short} does not give a name to this graph, but we will refer to it as the \textbf{probability flow diagram}. Figure 1 gives an example of the probability flow diagram for the set $X = \{a,b,c\}$.
\begin{figure}
\caption{The probability flow diagram for the set $X = \{a,b,c\}$.}
\label{fig:probflow}
\end{figure}
We now revisit the counterexample of \citet{fishburn1998stochastic} and explore the probability flow diagram of the counterexample.
\begin{example}[Fishburn's Counterexample]
Let $X = \{a,b,c,d\}$. Consider the following probability distributions over linear orders on $X$.
\begin{equation*}
\nu_1(\pi) = \begin{cases}
\frac{1}{2} & \text{if }\pi \in \{ a \succ b \succ c \succ d, b \succ a \succ d \succ c\} \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
\begin{equation*}
\nu_2(\pi) = \begin{cases}
\frac{1}{2} & \text{if }\pi \in \{ a \succ b \succ d \succ c, b \succ a \succ c \succ d\} \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
These two probability distributions induce the same system of choice probabilities.
\end{example}
\begin{figure}
\caption{The reduced probability flow diagram for the Fishburn counterexample.}
\label{fig:FishEx}
\end{figure}
This is the counterexample which \citet{fishburn1998stochastic} uses to show that RUM is not identified. Figure 2 shows the reduced probability flow diagram of this example.\footnote{By reduced probability flow diagram we mean that we take the probability flow diagram and remove each edge with zero weight and each node whose connected edges all have zero weight.} In the example above, as both probability distributions induce the same system of choice probabilities, they have the same probability flow diagram. The key feature of this example is found at the node $\{c,d\}$. Note that there are two edges with strictly positive weight that go into $\{c,d\}$ and two edges with strictly positive weight that leave $\{c,d\}$. It turns out that this two-in and two-out structure exactly characterizes non-uniqueness in RUM. We generalize this two-in and two-out structure in the next section.
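The example is easy to verify numerically with the hypothetical \verb|choice_probabilities| sketch above: the two distributions induce identical choice probabilities on every menu.
\begin{verbatim}
X = ['a', 'b', 'c', 'd']
nu1 = {('a', 'b', 'c', 'd'): 0.5, ('b', 'a', 'd', 'c'): 0.5}
nu2 = {('a', 'b', 'd', 'c'): 0.5, ('b', 'a', 'c', 'd'): 0.5}
assert choice_probabilities(X, nu1) == choice_probabilities(X, nu2)
\end{verbatim}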
\section{Characterizing Uniqueness}
In this section, we present two characterizations which tell us when a RUM representation is unique. The first characterization tests choice data while the second tests the representation itself. Using these results, we then characterize when the support of a rationalizing distribution is unique. We now introduce the terminology needed to state our characterizations.
\begin{definition}
We call a path $\rho$ a finite sequence of sets $\{A_i\}_{i=0}^{|X|}$ such that $A_{i+1}\subsetneq A_i$ for all $i$, $A_0=X$, and $A_{|X|}=\varnothing$.
\end{definition}
\citet{fiorini2004short} notes that there is a bijection between paths on the probability flow diagram and the set of linear orders of $X$. The bijection pairs the path $\{X,X\setminus \{x_1\},X \setminus \{x_1,x_2\}, \dots ,\varnothing\}$ with the order that ranks $x_1\succ x_2 \succ \dots$. When we construct a representation, the probability weight associated with order $\pi$ is derived as follows. We decompose the probability flow diagram into path flows. We then assign the path flow of the path corresponding to $\pi$ as the probability weight put on $\pi$. We will be using the prior bijection and the associated edge weights to study which orders can receive a strictly positive weight in a representation. This idea is captured graphically by the following definition.
\begin{definition}
For a system of choice probabilities $(X,P)$ and its corresponding probability flow diagram, we call a path supported if for all $i \in \{0, \dots, |X|-1\}$, $q(A_i \setminus A_{i+1},A_i)>0$.
\end{definition}
There exists a representation which puts strictly positive weight on a linear order $\pi$ if and only if the path associated with $\pi$ is supported.\footnote{To see this, note that if the algorithm used in the proof of Theorem 1 begins by subtracting out the considered supported path, then that linear order $\pi$ is in the support of the representation. Further, any order which has a path which is not supported must necessarily receive zero probability weight in a representation.} Due to this, if a system of choice probabilities has multiple representations, it must be that the differing probability weights are restricted to orders which have supported paths. As we mentioned prior, the characterization for uniqueness relies on the idea of two-in and two-out. The definition of branching formalizes this idea.
\begin{definition}
We call two paths $\rho$ and $\rho'$ branching if there exists some $i\leq j$ with $i,j \in \{1,\dots,|X|-1\}$ such that $A_{i-1}^{\rho} \neq A_{i-1}^{\rho'}$, $A_{j+1}^{\rho} \neq A_{j+1}^{\rho'}$, and for all $m \in \{i, \dots, j\}$, $A_m^{\rho} = A_m^{\rho'}$.
\end{definition}
Unlike in the counterexample of \citet{fishburn1998stochastic}, the definition of branching does not require the two-in and two-out to happen at the same node. The definition of branching allows for two paths to go into the same node, share a few common edges, and then split. We now have all the terminology we need to state our first theorem.
\begin{theorem}
Suppose that a system of choice probabilities $(X,P)$ is rationalizable. The rationalizing $\nu$ is unique if and only if the probability flow diagram has no pairs of supported branching paths.
\end{theorem}
We leave all proofs to the appendix. However, we discuss the intuition of the proof here. To see the logic for necessity, first consider a node that satisfies two-in and two-out. Call the two-in edges $a$ and $b$ respectively. Call the two-out edges $c$ and $d$ respectively. We can construct two disjoint sets of paths that induce this two-in and two-out property. Consider the pair of paths $\{(a,c),(b,d)\}$. These two paths satisfy two-in and two-out at the considered node. Similarly, the pair of paths $\{(a,d),(b,c)\}$ satisfy two-in and two-out along the same edges as the first pair of paths. This shows that two supported branching paths imply non-uniqueness.
To see the logic for sufficiency, we first note that if no pair of supported paths satisfy two-in and two-out, then every supported path that satisfies two-out with some other supported path must do so above any node at which they satisfy two-in. Similarly, any supported path that satisfies two-in with some other supported path must do so below any node at which they satisfy two-out. These two facts mean that for every supported path there exists some edge such that any two-in happens below that edge and every two-out happens above that edge. The weight along this edge uniquely identifies the probability weight put on the order associated with this path.
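For small $X$ the condition in Theorem 1 can be checked by brute force. The sketch below is illustrative rather than part of the formal argument: it assumes a dictionary \verb|q| mapping each pair $(x,A)$, with $A$ a frozenset containing $x$, to the BM-polynomial $q(x,A)$ (for example assembled with the \verb|bm_polynomial| snippet above; exact rational arithmetic or a small tolerance is advisable if the polynomials are computed in floating point), enumerates the supported paths, and searches for a branching pair.
\begin{verbatim}
def supported_paths(X, q):
    # All paths X = A_0 ) A_1 ) ... ) empty set whose every edge
    # carries a strictly positive BM-polynomial.
    def extend(A):
        if not A:
            return [[frozenset()]]
        return [[A] + tail
                for x in A if q[(x, A)] > 0
                for tail in extend(A - {x})]
    return extend(frozenset(X))

def branching(p1, p2):
    # The paths branch iff the positions where they differ do not
    # form one contiguous block (they split, rejoin, and split again).
    diff = [m for m in range(len(p1)) if p1[m] != p2[m]]
    return bool(diff) and diff[-1] - diff[0] + 1 > len(diff)

def unique_representation(X, q):
    # Theorem 1: uniqueness iff no pair of supported paths branches.
    paths = supported_paths(X, q)
    return not any(branching(p, r)
                   for i, p in enumerate(paths) for r in paths[i + 1:])
\end{verbatim}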
Note that Theorem 1 subsumes some known results. \citet{block1959random} tells us that, when $|X| \leq 3$, any representation is unique. We note that branching paths are not found unless $|X| \geq 4$.\footnote{To see this, observe the following. A pair of branching paths share the node $X$, have differing nodes somewhere below $X$, have a common node below their differing nodes, have another differing node below their common node, and then share the node $\varnothing$. This requires having five nodes which can only happen when $|X| \geq 4$.} As an immediate corollary of Theorem 1, we are able to show that, when $|X| \leq 3$, any representation of a system of choice probabilities is unique.
\citet{mcclellon2015unique} shows that when $|X| \geq 4$, the issue of non-uniqueness is widespread. The result notes that if a system of choice probabilities is induced by a full support distribution, then there is a different distribution which induces the same system of choice probabilities. If a system of choice probabilities is induced by a full support distribution, then every path is supported. Since there are branching paths when $|X| \geq 4$, it follows immediately that any representation is not unique. This result can be extended to say that if every BM-polynomial of a system of choice probabilities is strictly positive, then the representation is not unique.
This result has an interesting implication for the Luce and logit class of models. Since Luce choice probabilities can be represented by the logit model and the logit model induces a full support over preferences, for $|X|\geq 4$, any random utility representation of any system of choice probabilities consistent with Luce is not unique. This observation extends to other statistical discrete choice models with full support including the generalized extreme values model (see \citet{McFadden1977ModellingTC} and \citet{dagsvik1995large}).
We now move onto our second characterization. Intuitively, this characterization takes the structure of branching paths and restates that structure in terms of properties of the contour sets of the associated orders. Checking that the support of a representation satisfies these properties amounts to a finite test. Before moving forward, we state the definition of upper contour set.
\begin{definition}
The weak upper contour set of some $x \in X$ according to $\pi \in \Pi$ is the set of all elements $y \in X$ such that $\pi(y) \geq \pi(x)$. We write
$$U_{\pi}(x) = \{y | \pi(y) \geq \pi(x)\}$$
to denote the weak upper contour set of $x$ according to $\pi$.
\end{definition}
With this definition, we are now able to state our second characterization.
\begin{theorem}
Suppose that a system of choice probabilities $(X,P)$ is rationalizable. The rationalizing $\nu$ is unique if and only if there are no pairs of orders $\pi$ and $\pi'$ satisfying the following.
\begin{enumerate}
\item $\nu(\pi)>0$ and $\nu(\pi')>0$
\item There exists $x,y,z \in X$ such that
\begin{enumerate}
\item $\pi(\{x,y\}) > \pi(z)$ and $\pi'(\{x,y\}) > \pi'(z)$
\item $x \neq y$
\item $U_{\pi}(z) \neq U_{\pi'}(z)$
\item $U_{\pi}(x) = U_{\pi'}(y)$
\end{enumerate}
\end{enumerate}
\end{theorem}
Intuitively, the first condition of Theorem 2 captures the definition of a supported path and the second condition captures the definition of a pair of branching paths. Our proof consists of showing that the existence of a pair of supported branching paths is equivalent to the two conditions of Theorem 2. Necessity follows primarily from the definitions. The logic for sufficiency is as follows. We first suppose that the representation is not unique. Then, by Theorem 1, there must be a pair of supported branching paths. We show that no matter how one allocates the weight from these supported branching paths, there will always be two orders in the support satisfying the conditions of Theorem 2. Now, consider the following definition.
\begin{definition}
Let $S_{\nu}= \{\pi|\nu(\pi) >0 \}$ be the set of linear orders with strictly positive weight under representation $\nu$. Then we call $S_\nu$ the support of $\nu$.
\end{definition}
Suppose we are in the case where there is non-uniqueness. Theorem 2 tells us that there are two linear orders in the support of our representation that rank $x$ and $y$ differently, rank both above $z$, and differ in their ranking of $z$. This means that there is some fourth element $w$ such that $w$ is ranked below $x$ and $y$ in both orders and whose ranking relative to $z$ differs across the two orders. If we restrict attention to the choice problem on $\{w,x,y,z\}$, this is exactly the example of \citet{fishburn1998stochastic}, with potentially differing probability weights. A consequence of this observation is that if the system of choice probabilities $(X,P)$ is not uniquely represented, then there exists some subset of $X$ of size four, call it $Y \subseteq X$, such that $(Y,P)$ is not uniquely represented. In other words, if an analyst wants to check for uniqueness, it is sufficient to check for uniqueness of each system of choice probabilities induced by sets of size four. This further means that, subject to finding a representation, if we observe choices only from sets of size four and smaller and we have a unique representation, we can recover the choice probabilities of sets larger than size four. This is not true in the general case, where observing choices from larger choice sets further restricts the set of potential representations.
Another interesting consequence of Theorem 2 is that uniqueness is a property of the support of a rationalizing distribution. This means that if a distribution over preferences is the unique representation for some system of choice probabilities, then any distribution with the same support is also the unique representation for its system of choice probabilities. The converse is also true: if a distribution over preferences is observationally equivalent to some other distribution, then so is any distribution with the same support. It turns out that uniqueness of a representation is equivalent to the support of a representation being identified.
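Because uniqueness is a property of the support alone, the conditions of Theorem 2 can be checked mechanically on any candidate support. The following sketch (the helper names are ours and purely illustrative) encodes each order as a tuple listing elements from best to worst and searches for a pair of orders satisfying the conditions of Theorem 2.
\begin{verbatim}
from itertools import combinations, permutations

def upper_contour(order, a):
    # weak upper contour set of a in an order listed best-to-worst
    return set(order[: order.index(a) + 1])

def violates_uniqueness(support):
    # True if some pair of orders in the support satisfies the
    # conditions of Theorem 2, i.e. the representation is not unique
    X = set(support[0])
    for pi1, pi2 in combinations(support, 2):
        for x, y in permutations(X, 2):      # x != y by construction
            for z in X - {x, y}:
                both_above = all(o.index(x) < o.index(z) and
                                 o.index(y) < o.index(z)
                                 for o in (pi1, pi2))
                if (both_above
                        and upper_contour(pi1, z) != upper_contour(pi2, z)
                        and upper_contour(pi1, x) == upper_contour(pi2, y)):
                    return True
    return False

print(violates_uniqueness([("a","b","c","d"), ("b","a","d","c")]))  # True
\end{verbatim}
The test reports non-uniqueness for the two-order support used in the example of \citet{fishburn1998stochastic} revisited below.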
\begin{theorem}
Suppose that a system of choice probabilities $(X,P)$ is rationalizable. All rationalizing distributions have the same support if and only if $(X,P)$ is uniquely rationalizable.
\end{theorem}
Obviously, if the support is not identified then the rationalizing distribution will not be identified. The intuition for the other direction of the proof is much the same as the intuition for Theorem 1. If the rationalizing distribution is not identified, then the probability flow diagram will have the two-in and two-out structure. Just as before, this means we can decompose the two-in and two-out structure into two disjoint sets of paths. These two disjoint sets then represent two disjoint sets of preferences which induce the two-in and two-out structure. This shows that non-uniqueness of the distribution of preferences implies non-uniqueness of the support of preferences.
\section{Discussion}
In this paper, we provide two characterizations for when a random utility representation is unique. Theorem 1 provides conditions on the BM-polynomials which characterize the graphical structure of non-uniqueness. As BM-polynomials are rarely used in empirical settings, we think of Theorem 1 as a theoretical tool for developing other identified random utility models. Theorem 2 gives conditions which tell us when a representation is observationally equivalent to some other representation. We view Theorem 2 as being a potential empirical tool. Once a representation is found using standard methods (see \citet{kitamura2018nonparametric} and \citet{smeulders2021nonparametric}), Theorem 2 can be applied to check for uniqueness.
As an application of our results, we now consider the single-crossing random utility model (SCRUM) of \citet{apesteguia2017single}. SCRUM puts an additional restriction on the underlying structure of $X$ in that $X$ is endowed with some exogenous linear order $\rhd$. We say a system of choice probabilities is rationalizable by SCRUM if there exists some RUM representation of the system, $\nu$, such that the support of $\nu$ can be ordered so that it satisfies the single-crossing property with respect to $\rhd$. Recall that the single-crossing property is as follows.
\begin{definition}
We say that a representation $\nu$ satisfies the single-crossing property if the support of $\nu$ can be ordered in such a way that for all $x \rhd y$, $\pi_i(x) > \pi_i(y)$ implies $\pi_j(x) > \pi_j(y)$ for all $j \geq i$.
\end{definition}
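Since the single-crossing property is a condition on finitely many pairs, it can be verified directly. The sketch below (again with illustrative helper names of our own) checks a proposed ordering of the support against an exogenous order $\rhd$, and brute-forces over orderings for small supports.
\begin{verbatim}
from itertools import combinations, permutations

def prefers(order, x, y):
    # True if x is ranked strictly above y (orders listed best-to-worst)
    return order.index(x) < order.index(y)

def is_single_crossing(seq, exo):
    # seq: ordered list of orders; exo: exogenous order, best-to-worst
    for x, y in combinations(exo, 2):        # every pair with x "exo-above" y
        crossed = False
        for order in seq:
            if prefers(order, x, y):
                crossed = True
            elif crossed:                    # switched back after crossing
                return False
    return True

def scrum_orderable(support, exo):
    # brute-force search for a single-crossing ordering of the support
    return any(is_single_crossing(seq, exo) for seq in permutations(support))

# e.g. scrum_orderable([("a","b","c","d"), ("b","a","d","c")],
#                      ("a","b","c","d"))   # True
\end{verbatim}
On the example of \citet{fishburn1998stochastic} revisited below, this check confirms that the support of $\nu_1$ admits a single-crossing ordering with respect to $a \rhd b \rhd c \rhd d$ while the support of $\nu_2$ does not, and vice versa for $a \rhd b \rhd d \rhd c$.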
\citet{apesteguia2017single} show that a SCRUM representation is unique. We now return to the counterexample of \citet{fishburn1998stochastic}.
\begin{example}[Fishburn and SCRUM]
Let $X = \{a,b,c,d\}$. Consider the system of choice probabilities induced by the following two distributions over linear orders.
\begin{equation*}
\nu_1(\pi) = \begin{cases}
\frac{1}{2} & \text{if }\pi \in \{ a \succ b \succ c \succ d, b \succ a \succ d \succ c\} \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
\begin{equation*}
\nu_2(\pi) = \begin{cases}
\frac{1}{2} & \text{if }\pi \in \{ a \succ b \succ d \succ c, b \succ a \succ c \succ d\} \\
0 & \text{otherwise}
\end{cases}
\end{equation*}
Suppose the exogenous order on $X$ is $a \rhd b \rhd c \rhd d$. Then the system of choice probabilities is SCRUM rationalized by $\nu_1$. Now suppose that the exogenous order on $X$ is $a \rhd b \rhd d \rhd c$. Then the system of choice probabilities is SCRUM rationalized by $\nu_2$.
\end{example}
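The observational equivalence of $\nu_1$ and $\nu_2$ can be confirmed by enumerating every menu; the following sketch (ours, for illustration only) computes the induced choice probabilities directly from the definition of random utility.
\begin{verbatim}
from itertools import combinations

def choice_prob(dist, x, menu):
    # probability that x is chosen from menu: total weight of orders
    # whose best element of the menu is x (orders listed best-to-worst)
    return sum(w for order, w in dist.items()
               if min(menu, key=order.index) == x)

nu1 = {("a","b","c","d"): 0.5, ("b","a","d","c"): 0.5}
nu2 = {("a","b","d","c"): 0.5, ("b","a","c","d"): 0.5}

X = ("a", "b", "c", "d")
menus = [set(c) for r in range(1, 5) for c in combinations(X, r)]
print(all(choice_prob(nu1, x, m) == choice_prob(nu2, x, m)
          for m in menus for x in m))   # True
\end{verbatim}
Both distributions therefore induce the same system of choice probabilities, while only one of them is single-crossing for each choice of $\rhd$.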
Recall that when a system of choice probabilities has multiple representations, it essentially embeds the example of \citet{fishburn1998stochastic} in some subset of size four. As we see in the above example, the SCRUM representation of the Fishburn example is pinned down by the exogenous order $\rhd$. This follows from the fact that if $\nu_1$ satisfies the single-crossing property with respect to $\rhd$ it must be the case that $\nu_2$ does not. Uniqueness in SCRUM now follows from extensions of this logic.
We see three potential extensions of our work. In this paper, we have maintained the assumption that we observe choice on every non-empty subset of $X$. In empirical settings, this is often unreasonable. One natural extension of our work is to consider the same question but with choice on a limited domain. A potential second extension is to generalize our results to infinite choice domains. This extension would provide insight for model builders who consider choices over lotteries \citep{gul2006random}, dynamic choice \citep{frick2019dynamic}, or choice over price-indexed bundles \citep{yang2021random}.
The third potential extension we consider utilizes the algorithmic nature of our approach. Instead of asking when we have a unique representation, one may be interested in recovering the set of representations for a given system of choice probabilities. As the set of representations for a given system of choice probabilities is convex, this amounts to finding the extreme points of the set of representations. In our construction of a representation, we consider a specific order in which we decompose the probability flow diagram into path flows (and thus assign probability weights to the corresponding order). However, one could exogenously vary the order in which path flows are subtracted from the probability flow diagram. Consider the following description of an algorithm.
\begin{enumerate}
\item Choose some yet unchosen order over paths.
\item Decompose the probability flow diagram into path flows according to the chosen order. Assign the smallest remaining edge capacity of the considered path as the path flow of that path. (See the proof of Theorem 1.)
\item Repeat the prior steps until every order over paths has been exhausted.
\end{enumerate}
This algorithm will return a collection of representations, each of which is an extreme point of the set of representations.\footnote{To see this, note that every distribution found by this algorithm can be matched with an order/ranking over linear orders. The representation associated with a specific ranking over linear orders places the most probability weight possible on a given linear order conditional on higher ranked linear orders getting the most probability weight possible with further iterative conditioning.} It is an open question whether this algorithm returns every extreme point of the set of representations.
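To make the enumeration concrete, the following sketch (function names ours; illustrative only) implements the greedy decomposition for a fixed ranking of paths and applies it to the Fishburn example above. For brevity it ranks only the four supported paths: any unsupported path contains an edge with zero flow and would receive zero weight in every ranking.
\begin{verbatim}
from itertools import permutations

def edges_of(order):
    # the path of an order visits the menus X, X\{best}, ...; each step
    # uses the edge q(x, A), where x is removed from the current menu A
    menu, out = frozenset(order), []
    for x in order:
        out.append((x, menu))
        menu = menu - {x}
    return out

def flow_from(dist):
    # edge flows q(x, A) induced by a distribution over linear orders
    q = {}
    for order, w in dist.items():
        for e in edges_of(order):
            q[e] = q.get(e, 0.0) + w
    return q

def decompose(q, ranked_paths):
    # greedy path-flow decomposition following a fixed ranking of paths
    q, rep = dict(q), {}
    for order in ranked_paths:
        cap = min(q.get(e, 0.0) for e in edges_of(order))
        if cap > 0:
            rep[order] = cap
            for e in edges_of(order):
                q[e] -= cap
    return rep

nu1 = {("a","b","c","d"): 0.5, ("b","a","d","c"): 0.5}
nu2 = {("a","b","d","c"): 0.5, ("b","a","c","d"): 0.5}
q = flow_from(nu1)
paths = list(nu1) + list(nu2)          # the four supported paths
reps = {frozenset(decompose(q, p).items()) for p in permutations(paths)}
print(len(reps))                       # 2: exactly nu1 and nu2
\end{verbatim}
Consistent with the footnote above, the only decompositions that arise in this example are the two extreme points $\nu_1$ and $\nu_2$.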
Ultimately, identified models are appealing because they allow for proper counterfactual analysis and a clean interpretation of parameters. Our main insight tells us when we can treat the random utility model as if it were identified, and it offers aid in the future construction of identified random utility models.
\singlespacing
\appendix
\section{Omitted Proofs}
\begin{definition}
We call two paths $\rho$ and $\rho'$ in-branching if there exists some $i \in \{1,\dots, |X|-1\}$ such that $A_i^{\rho} = A_i^{\rho'}$ and $A_{i-1}^{\rho} \neq A_{i-1}^{\rho'}$.
\end{definition}
\begin{definition}
We call two paths $\rho$ and $\rho'$ out-branching if there exists some $i \in \{1,\dots, |X|-1\}$ such that $A_i^{\rho} = A_i^{\rho'}$ and $A_{i+1}^{\rho} \neq A_{i+1}^{\rho'}$.
\end{definition}
\begin{definition}
We call a collection of sets, $\{A_i, \dots, A_j\}$, a branching section of paths $\rho$ and $\rho'$ if $A_{i-1}^{\rho} \neq A_{i-1}^{\rho'}$, $A_{j+1}^{\rho} \neq A_{j+1}^{\rho'}$, and for all $m \in \{i, \dots, j\}$, $A_m^{\rho} = A_m^{\rho'}$.
\end{definition}
\subsection{Proof of Theorem 1}
We begin by showing necessity. We proceed by contraposition. Suppose there are two supported paths, $\rho$ and $\rho'$, that are branching. This means that these two paths share some set of common nodes $\{A_n,\dots,A_m\}$ such that $A_{n-1}^{\rho} \neq A_{n-1}^{\rho'}$ and $A_{m+1}^{\rho} \neq A_{m+1}^{\rho'}$. Consider the following two paths, respectively $\rho''$ and $\rho'''$, $(A_0^{\rho}, \dots, A_m^{\rho},A_{m+1}^{\rho'},\dots,A_{|X|}^{\rho'})$ and $(A_0^{\rho'}, \dots, A_m^{\rho'},A_{m+1}^{\rho},\dots,A_{|X|}^{\rho})$. Note that the node set and the edge set of $\rho \cup \rho'$ are the same as the node set and edge set of $\rho'' \cup \rho'''$. Let $r$ be the minimum flow along the edge set of $\rho \cup \rho'$. Without loss, let $r$ be the flow of an edge that belongs to the edge set of $\rho$ and $\rho''$. We will now construct two different representations. We construct $\nu_1$ as follows.
\begin{enumerate}
\item Let $\nu_1(\pi_{\rho})=r$.
\item For all $q(\cdot,\cdot)$ on the edge set of $\rho$, let $q_0(\cdot,\cdot)=q(\cdot,\cdot)-r$. For all $q(\cdot,\cdot)$ not on the edge set of $\rho$, let $q_0(\cdot,\cdot)=q(\cdot,\cdot)$.
\item Initialize at $i=0$.
\item Let $s$ be the smallest strictly positive $q_i(\cdot,\cdot)$. Choose some edge which has flow equal to $s$. Since inflow equals outflow (see explanation below the algorithms), this edge is a part of some path from $X$ to $\varnothing$ with all edges along the path having strictly positive flow. Fix this path and call it $\gamma$.
\item Let $\pi_i$ denote the linear order that is bijectively associated with $\gamma$. Set $\nu_1(\pi_i)=s$.
\item For all edges along path $\gamma$, let $q_{i+1}(\cdot,\cdot) = q_i(\cdot,\cdot) -s$. For all edges not along path $\gamma$, let $q_{i+1}(\cdot,\cdot) = q_i(\cdot,\cdot)$.
\item If there is strictly positive flow anywhere along the graph, return to step 4. If not, terminate the algorithm.
\end{enumerate}
We construct $\nu_2$ as follows.
\begin{enumerate}
\item Let $\nu_2(\pi_{\rho''})=r$.
\item For all $q(\cdot,\cdot)$ on the edge set of $\rho''$, let $q_0(\cdot,\cdot)=q(\cdot,\cdot)-r$. For all $q(\cdot,\cdot)$ not on the edge set of $\rho''$, let $q_0(\cdot,\cdot)=q(\cdot,\cdot)$.
\item Initialize at $i=0$.
\item Let $s$ be the smallest strictly positive $q_i(\cdot,\cdot)$. Choose some edge which has flow equal to $s$. Since inflow equals outflow, this edge is a part of some path from $X$ to $\varnothing$ with all edges along the path having strictly positive flow. Fix this path and call it $\gamma$.
\item Let $\pi_i$ denote the linear order that is bijectively associated with $\gamma$. Set $\nu_2(\pi_i)=s$.
\item For all edges along path $\gamma$, let $q_{i+1}(\cdot,\cdot) = q_i(\cdot,\cdot) -s$. For all edges not along path $\gamma$, let $q_{i+1}(\cdot,\cdot) = q_i(\cdot,\cdot)$.
\item If there is strictly positive flow anywhere along the graph, return to step 4. If not, terminate the algorithm.
\end{enumerate}
Note that we know from \citet{fiorini2004short} and \citet{falmagne1978representation} that we have inflow equals outflow on this graph at the start of each of these algorithms. Since each iteration of the algorithm subtracts out a fixed amount from each edge of a given path, we have inflow equals outflow at each stage of this algorithm. This means that this algorithm terminates with zero flow along the graph. To see this, suppose not. Then there is positive flow somewhere along the graph at termination. Since we have inflow equals outflow, we can follow this positive flow all the way to the nodes $X$ and $\varnothing$. This then shows that there is some path with strictly positive flow, thus contradicting termination of our algorithm. Further, this algorithm assigns $q(x,A)$ to orders that rank $x$ exactly at the top of $A$. Thus, we know from \citet{falmagne1978representation} that $\nu_1$ and $\nu_2$ rationalize the system of choice probabilities. Now note that since there is an edge that is shared between $\rho$ and $\rho''$ which has flow equal to $r$, $\nu_1$ puts zero weight on $\pi_{\rho''}$ while $\nu_2$ puts weight equal to $r$ on $\pi_{\rho''}$. Thus these two representations are different, meaning that there is not a unique representation. By contraposition, we have proven necessity.
Now we prove sufficiency. Suppose no two supported paths are branching.
\begin{claim}
Let $\rho$ and $\rho'$ be supported out-branching paths. Let $i \in \{1,\dots, |X|-1\}$ be such that $A_i^{\rho} = A_i^{\rho'}$ and $A_{i+1}^{\rho} \neq A_{i+1}^{\rho'}$. Then for all $j \leq i$, no supported path may be in-branching at $A_j$ for either $\rho$ or $\rho'$.
\end{claim}
\begin{proof}
Suppose not. Then, without loss of generality, there exists some supported path $\rho''$ such that $\rho''$ and $\rho$ are in-branching and there exists $j \leq i$ such that $A_j^{\rho} = A_j^{\rho''}$ and $A_{j-1}^{\rho} \neq A_{j-1}^{\rho''}$. Construct supported path $\rho'''$ as follows.
$$\rho'''=(A_0^{\rho''},\dots,A_j^{\rho''},A_{j+1}^{\rho},\dots,A_i^{\rho},A_{i+1}^{\rho'},\dots,A_{|X|}^{\rho'})$$
By construction, $\rho$ and $\rho'''$ are supported paths which are branching. This is a contradiction. Thus our claim is proven.
\end{proof}
\begin{claim}
Let $\rho$ and $\rho'$ be supported in-branching paths. Let $i \in \{1,\dots, |X|-1\}$ be such that $A_i^{\rho} = A_i^{\rho'}$ and $A_{i-1}^{\rho} \neq A_{i-1}^{\rho'}$. Then for all $j \geq i$, no supported path may be out-branching at $A_j$ for either $\rho$ or $\rho'$.
\end{claim}
\begin{proof}
The logic is identical to the proof of Claim 1.
\end{proof}
Together, Claim 1 and Claim 2 state that for every supported path $\rho$ there exists some $i \in \{1,\dots, |X|-1\}$, such that $A_i$ is in $\rho$, with all supported out-branching paths doing so at or above $A_i$ and with all supported in-branching paths doing so strictly below $A_i$. This means that the edge associated with $q(A_i \setminus A_{i+1},A_i)$ belongs to no supported path other than $\rho$. We know from \citet{falmagne1978representation} that any rationalizing $\nu$ must put probability weight on the set of orders ranking $A_i \setminus A_{i+1}$ exactly at the top of $A_i$ equal to $q(A_i \setminus A_{i+1},A_i)$. Since $\rho$ is the unique supported path that contains $q(A_i \setminus A_{i+1},A_i)$, it must be the case that the order $\pi$ associated with $\rho$ must have $\nu(\pi)= q(A_i \setminus A_{i+1},A_i) $. This can be said about all such orders. Thus no pair of supported paths being branching implies that the rationalizing representation must be unique. Thus we have proven our theorem.
\subsection{Proof of Theorem 2}
We begin by proving necessity of the conditions on $\nu$. We proceed by contraposition. Let $\nu$ put strictly positive probability weight on two orders $\pi$ and $\pi'$ satisfying condition 2 of the theorem. This means that there exist $x,y\in X$ such that $x \neq y$ and $U_{\pi}(x)= U_{\pi'}(y)$. Let $\rho$ and $\rho'$ denote the paths corresponding to $\pi$ and $\pi'$, and let $A^1_k$ and $A^2_k$ denote their respective nodes. Since $x \neq y$ and $U_{\pi}(x)= U_{\pi'}(y)$, there exists some $k < l$ such that $A^1_k \neq A^2_k$ (since $y \not \in U_{\pi'}(x)$ and $x \not \in U_{\pi}(y)$) and $A^1_l = A^2_l$. This means we can find $k< i \leq l$ such that $A^1_i=A^2_i$ and $A^1_{i-1} \neq A^2_{i-1}$. Moreover, since $U_{\pi}(z) \neq U_{\pi'}(z)$, the node from which $z$ is removed differs across the two paths. This implies there is some node after $A_i$ at which $\rho$ and $\rho'$ out-branch. Call the first node that does this $A_j$. Thus $A_{i-1}^{1} \neq A_{i-1}^{2}$, $A_{j+1}^{1} \neq A_{j+1}^{2}$, and for all $m \in \{i, \dots, j\}$, $A_m^{1} = A_m^{2}$. This means that $\rho$ and $\rho'$ are a pair of supported branching paths. Thus $\nu$ is not unique by Theorem 1, and by contraposition the conditions on $\nu$ are necessary.
We now show the sufficiency of the conditions on $\nu$. We proceed by contraposition. Suppose that $\nu$ is not unique. Then, by Theorem 1, there is a pair of supported branching paths on the probability flow diagram of $(X,P)$. Recall the definition of a branching section. With this definition in mind, we call the length of a branching section $j -i$. Let $l$ be the minimum length of all branching sections of all pairs of supported branching paths. Note that $l$ is well defined because $X$ is finite. Choose a pair of supported paths $\rho$ and $\rho'$ such that $\rho$ and $\rho'$ have a branching section of length $l$. Let $\{A_i, \dots, A_j\}$ be that branching section. Because $l$ is the minimal length of supported branching sections, there is no $k \in \{i+1,\dots,j-1\}$ such that $\{A_i, \dots, A_k\}$ or $\{A_k, \dots, A_j\}$ are supported branching sections. We know from \citet{fiorini2004short} and \citet{falmagne1978representation} that the probability flow diagram satisfies inflow equals outflow. Since there is no supported out-branching path in $\{A_i, \dots, A_{j-1}\}$, it must be the case that inflow into $A_i$ equals outflow from $A_j$.
Let $M_{x,A}$ be the set of linear orders on $X$ that rank $x$ exactly at the top of $A$.
$$M_{x,A}=\{\pi|\pi(X \setminus A) > \pi(x) > \pi(A \setminus x) \}$$
We know from \citet{falmagne1978representation} that $q(x,A) = \nu(M_{x,A})$ for all rationalizing $\nu$. Since $\rho$ and $\rho'$ are supported, the total outflow from $A_j$ is strictly greater than the inflow into $A_i$ from the edge belonging to $\rho$. This means that $\nu$ cannot assign weight to orders in $M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} } $ equal to the total outflow from $A_j$.
\begin{claim}
There are two orders, $\pi$ and $\pi'$, satisfying the following.
\begin{enumerate}
\item $\pi \not \in M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} }$
\item $\pi' \in M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} }$
\item $\max(\pi,A_j) \neq \max(\pi',A_j)$
\item $\nu(\pi)>0$ and $\nu(\pi')>0$
\end{enumerate}
\end{claim}
\begin{proof}
Since $\{A_i,\dots,A_j\}$ is a branching section of two supported paths, there are at least two orders whose paths pass through $\{A_i,\dots,A_j\}$ which have positive weight under $\nu$. Further, there must be at least two orders whose paths in-branch at $A_i$ and have positive weight under $\nu$. Similarly, there must be at least two orders whose paths out-branch at $A_j$ and have positive weight under $\nu$. There are two cases.
\begin{enumerate}
\item There is some supported edge leaving $A_j$ such that no order in $M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} }$ with positive weight under $\nu$ has a path containing that edge. \\
Since inflow at $A_i$ equals outflow from $A_j$, it must be the case that some order not in $M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} }$ has positive weight and has a path containing the aforementioned edge. Call this order $\pi$. Then $\pi$ and any $\pi' \in M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} }$ with $\nu(\pi')>0$ satisfy the above conditions.
\item For every supported edge leaving $A_j$, there is some order in $M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} }$ with positive weight under $\nu$ whose path contains that edge. \\
In this case, choose some order $\pi \not \in M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} }$ such that $\nu(\pi)>0$ and the path corresponding to $\pi$ passes through $\{A_i,\dots,A_j\}$. The existence of such an order is guaranteed by inflow equals outflow. By inflow at $A_i$ equals outflow at $A_j$, the path corresponding to $\pi$ passes through $A_j$. Choose some $\pi' \in M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} }$ such that $\nu(\pi')>0$ and the path corresponding to $\pi'$ does not have the same edge leaving $A_j$ as the path corresponding to $\pi$. The existence of such a $\pi'$ is guaranteed by the supposition. Then $\pi$ and $\pi'$ satisfy the conditions of the claim.
\end{enumerate}
\end{proof}
Let $\rho_{\pi}$ and $\rho_{\pi'}$ denote the paths corresponding to $\pi$ and $\pi'$; by construction, both of these paths are supported and branching. Now let $x=\min(\pi,X \setminus A_i)$ and $y = \min(\pi', X \setminus A_i)$. By $\pi \not \in M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} }$ and $\pi' \in M_{A^{\rho}_{i-1} \setminus A_i, A^{\rho}_{i-1} }$, $x \neq y$. By $A_i \in \rho_{\pi}$ and $A_i \in \rho_{\pi'}$, $U_{\pi}(x) = U_{\pi'}(y)$. Let $z = \max(\pi,A_j)$. By $\max(\pi,A_j) \neq \max(\pi',A_j)$, $U_{\pi}(z) \neq U_{\pi'}(z)$. By $j \geq i$, $\pi(\{x,y\}) > \pi(z)$ and $\pi'(\{x,y\}) > \pi'(z)$. Thus the conditions on $\nu$ hold, and, by contraposition, the sufficiency of the conditions is shown. Thus the theorem is proven.
\subsection{Proof of Theorem 3}
If there are two rationalizing distributions with different supports, then $(X,P)$ is obviously not uniquely rationalizable. All that is left is to show the other direction. Suppose that $(X,P)$ is not uniquely rationalizable. Then $|X| \geq 4$ and the probability flow diagram of $(X,P)$ has supported branching paths. The two algorithms used in the proof of Theorem 1 find two rationalizing distributions with different supports and can be used here to do so. Thus, by contraposition, each rationalizing distribution having the same support implies there is a unique rationalizing distribution.
\end{document} |
\begin{document}
\begin{abstract}
We provide new elementary proofs of the following two results:
every complex variety is locally the graph of a $\textup{Dir}$-minimizing function,
first proved by Almgren \cite{Alm};
the gradients of $\textup{Dir}$-minimizing functions,
a priori only square-summable, are $p$-integrable for some $p>2$,
proved by De Lellis and the author \cite{DLSp2}.
In the planar case, we prove that our integrability exponents are optimal.
\end{abstract}
\maketitle
\section{Introduction}
Almgren developed the theory of $\textup{Dir}$-minimizing multi-valued
functions in his big regularity paper \cite{Alm} as a first
step toward the regularity of area-minimizing currents in
codimension bigger than $1$.
Following the pioneering ideas of De Giorgi, the starting point was
the approximation of minimal currents via harmonic functions,
which are the minimizers of the first non-constant term in the expansion
of the area functional: the Dirichlet energy.
However, due to the unavoidable phenomenon of branching points
as, for example, in the area-minimizing currents induced by complex varieties,
he needed to develop the theory of $\textup{Dir}$-minimizing $Q$-valued functions,
that are multi-valued functions minimizing a suitable Dirichlet energy.
In this paper, following the work in \cite{DLSp},
we address two questions on Almgren's $Q$-valued functions:
we show that complex varieties are locally graphs of $\textup{Dir}$-minimizing functions and
prove the higher integrability of the gradient of a $\textup{Dir}$-minimizing $Q$-function.
\begin{theorem}\label{t:complex}
Let $\mathscr{V}\subseteq \C^\mu\times\C^\nu\simeq\R^{2\mu}\times\R^{2\nu}$
be an irreducible holomorphic variety
which is a $Q:1$-cover of the ball $B_2\subseteq\C^\mu$
under the orthogonal projection.
Then, there exists a $\textup{Dir}$-minimizing $Q$-valued function
$f\in W^{1,2}(B_1,{\mathcal{A}}_Q(\R^{2\nu}))$ such that
$\mathop{graph}(f)=\mathscr{V}\cap (B_1\times\C^\nu)$.
\end{theorem}
\begin{theorem}\label{t:higher}
There exists $p=p(n,m,Q)>2$ such that, for every $\Omega\subseteq\R^m$ open and
$u\in W^{1,2}(\Omega,{\mathcal{A}}_Q(\R^{n}))$ $\textup{Dir}$-minimizing, $|Du|\in L_{loc}^p(\Omega)$.
\end{theorem}
Theorem \ref{t:complex} provides many examples of $\textup{Dir}$-minimizing functions and,
in particular, shows that the H\"older continuity and the estimate of the singular set
of a $\textup{Dir}$-minimizer proved in \cite{Alm} and \cite{DLSp} are optimal results.
Theorem \ref{t:complex} has been proved by Almgren in his big regularity paper
\cite[Theorem 2.20]{Alm} using a deep and complicated approximation theorem
of minimal currents via graphs of Lipschitz $Q$-functions
(see also \cite{DLSp2}).
Here we give a more elementary proof avoiding the approximation result by Almgren.
Moreover, for the planar case we also provide an alternative argument which exploits
the equality between the area and the energy of conformal maps.
We hope that this approach can be extended to the study of
regularity issues for more complicated calibrated geometries.
Theorem \ref{t:higher} has been first proved by
De Lellis and the author in \cite{DLSp2} in connection with a new
higher integrability estimate for minimal currents and it plays a crucial role in
the proof of Almgren's approximation theorem given there.
Here, we propose a different ``intrinsic'' proof, where ``intrinsic'' means
based only on the metric theory of $Q$-valued functions as developed in \cite{DLSp}.
In case $m=2$, we can exploit the fact that $\textup{Dir}$-minimizing functions have isolated singularities
(proven in \cite{DLSp}) to find the optimal integrability. The optimality is indeed
shown by the examples provided by complex varieties in the first part of the paper.
The paper is organized as follows. In Section \ref{s:Qfunc} we collect some basic results
and definitions on $Q$-valued functions and the rectifiable currents supported by their graphs.
In Section \ref{s:complex} we identify complex varieties as graphs of
Sobolev $Q$-valued functions and prove Theorem \ref{t:complex}.
Finally, Section \ref{s:higher} contains the proof of Theorem \ref{t:higher} which passes through
a Caccioppoli and a reverse H\"older inequality for $\textup{Dir}$-minimizing functions.
\section{$Q$-valued functions}\label{s:Qfunc}
In what follows, we adopt the notation and the approach introduced in \cite{DLSp},
which differs from Almgren's original one.
For the definitions of the metric space of $Q$-points $({\mathcal{A}}_Q,{\mathcal{G}})$,
Sobolev $Q$-valued function and Dirichlet energy, we refer to \cite{DLSp}.
We say that a function $f:\Omega\subset\R^{m}\to{\mathcal{A}}_Q(\R^{n})$ has a smooth \textit{local selection}
in $\Omega'\subseteq\Omega$ if, for every $x\in\Omega'$,
there exist $r>0$ and $f_i:B_r(x)\to\R^{n}$ smooth functions
such that $f\vert_{B_r(x)}=\sum_{i=1}^Q\a{f_i}$.
Note that, in this case, $|Df|^2=\sum_i|Df_i|^2$ is well defined on the whole $\Omega'$.
We observe the following simple consequence of the definition, which for the reader's
convenience we state as a lemma.
\begin{lemma}\label{l:Sob}
Let $f:\Omega\subset\R^{m}\to{\mathcal{A}}_Q$ have a smooth local selection in
$\Omega'\subseteq\Omega$.
If $\dim_{{\mathcal{H}}}(\Omega\setminus \Omega')\leq m-2$ and
$\int_{\Omega'}|Df|^2<+\infty$,
then $f$ belongs to $W^{1,2}(\Omega,{\mathcal{A}}_Q)$.
\end{lemma}
\begin{proof}
The proof follows from the characterization
of classical Sobolev functions via the slice property.
Indeed, for every $T\in {\mathcal{A}}_Q$, the function $x\mapsto {\mathcal{G}}(f(x),T)$ is
smooth and satisfies
$|D({\mathcal{G}}(f(\cdot),T))|\leq |Df|$ in $\Omega'$ (cp.~to \cite[Proposition 2.17]{DLSp}).
Therefore, since the projection of $\Omega\setminus\Omega'$ on each coordinate
hyperplane is a set of ${\mathcal{H}}^{m-1}$ measure zero, for
${\mathcal{H}}^{m-1}$-a.e.~line $l$ parallel to the axes,
the restriction of ${\mathcal{G}}(f(\cdot),T)$ to $l$ belongs to $W^{1,2}$.
Recalling \cite[Section 4.9.2]{EG},
it follows that ${\mathcal{G}}(f(\cdot),T)\in W^{1,2}(\Omega)$ with
$|D({\mathcal{G}}(f(\cdot),T))|\leq |Df|$ a.e.~in $\Omega$.
Hence, by the definition of Sobolev $Q$-functions \cite[Definition 0.5]{DLSp}, we conclude.
\end{proof}
We will need also a technical result about the lower semicontinuity of the $L^p$ norm of
the gradient under weak convergence.
Although this is a special case of the result in \cite{DLFS}, we include here
an elementary proof for the sake of completeness.
\begin{lemma}[Semicontinuity]\label{l:sc}
Let $f_k$, $f\in W^{1,p}(\Omega,{\mathcal{A}}_Q)$, $p<\infty$, be such that
$\lim_{k}\int_\Omega{\mathcal{G}}(f_k,f)^p=0$ and
$\sup_k\int_\Omega |Df_k|^p<\infty$. Then,
\begin{equation}\label{e:sc}
\int_\Omega|Df|^p\leq\liminf_{k\to+\infty}\int_\Omega|Df_k|^p.
\end{equation}
\end{lemma}
\begin{proof}
The proof of this result is very similar to the proof of the semicontinuity for the Dirichlet
energy given in \cite[Section 2.3.2]{DLSp}.
Let $\{T_l\}_{l\in\N}$ be any dense subset of ${\mathcal{A}}_Q$ and recall that
by \cite[Proposition 4.2]{DLSp}
$|Df|$ is the monotone limit of $h_N$ with
$$
h_N^2=\max_{l_j\leq N}\sum_j\bigl(\partial_j {\mathcal{G}} (f, T_{l_j})\bigr)^2.
$$
By the Monotone Convergence Theorem, $\int |Df|^p = \sup_N \int h_{N}^p$.
Therefore, denoting by $\mathcal{P}_{N^m}$
the collections $P=\{E_{\bar l}\}_{\bar l=\{l_1,\ldots,l_m\}\in N^m}$
of $N^m$ disjoint open subsets of $\Omega$, as in \cite{DLSp} we conclude that
\begin{equation}\label{e:rappresenta}
\int_\Omega|Df|^p = \sup_N \int_\Omega h_{N}^p
=\sup_N \sup_{P\in \mathcal{P}_{N^m}}
\sum_{E_{\bar l}\in P} \int_{E_{\bar l}}
\left(\sum_j\bigl(\partial_j {\mathcal{G}} (f, T_{l_j})\bigr)^2\right)^{\frac{p}{2}}.
\end{equation}
It follows easily from the hypotheses that,
for every $\bar l=\{l_1,\ldots,l_m\}$ and every open set $E_{\bar l}$,
the vector-valued maps $(\partial_1{\mathcal{G}}(f_k,T_{l_1}),\ldots,\partial_m{\mathcal{G}}(f_k,T_{l_m}))$ converge weakly in $L^p(E_{\bar l})$ to
$(\partial_1{\mathcal{G}}(f,T_{l_1}),\ldots,\partial_m{\mathcal{G}}(f,T_{l_m}))$.
Hence, by the semicontinuity of the norm,
\begin{equation*}\label{e:semi2}
\int_{E_{\bar l}}
\left(\sum_j\bigl(\partial_j {\mathcal{G}} (f, T_{l_j})\bigr)^2\right)^{\frac{p}{2}}
\leq \liminf_{k\to+\infty}
\int_{E_{\bar l}}
\left(\sum_j\bigl(\partial_j {\mathcal{G}} (f_k, T_{l_j})\bigr)^2\right)^{\frac{p}{2}}.
\end{equation*}
Summing over
$E_{\bar l}\in P$,
in view of \eqref{e:rappresenta}, we achieve \eqref{e:sc}.
\end{proof}
The main regularity results for $\textup{Dir}$-minimizing $Q$-valued functions
are collected in the following theorem
(see \cite[Theorems 0.9 and 0.11]{DLSp}).
In order to state them, we recall the definition of regular and singular points.
\begin{definition}\label{d:regular}
A $Q$-valued function $f$ is regular at a point $x\in \Omega$ if there exist
a neighborhood $U$ of $x$ and $Q$ analytic functions $f_i:U\to \R^{n}$ such that
$f \vert_U= \sum_i \a{f_i}$ and either $f_i (y)\neq f_j (y)$ for every $y\in U$ or
$f_i\equiv f_j$.
The singular set $\Sigma_f$ of $f$ is the complement of the set of regular points.
\end{definition}
\begin{theorem}\label{t:regularity}
For every $\textup{Dir}$-minimizing $f\in W^{1,2} (\Omega,{\mathcal{A}}_Q)$ the following holds:
\begin{itemize}
\item[$(i)$] there exists $\alpha=\alpha (m,Q)>0$ ($\alpha (2, Q)=1/Q$) such that
$f\in C^{0,\alpha} (\Omega')$ for every $\Omega'\subset\subset\Omega$ and
\begin{equation}\label{e:dec}
\hspace{1.5cm}\textup{Dir}(f,B_r(x))\leq\left(\frac{r}{\rho}\right)^{2\alpha}\textup{Dir}(f,B_\rho(x)),
\quad\forall\;r\leq \rho\;\text{with}\; B_\rho(x)\subseteq\Omega;
\end{equation}
\item[$(ii)$] the Hausdorff dimension of $\Sigma_f$ is at most $m-2$ and,
if $m=2$, $\Sigma_f$ consists of isolated points.
\end{itemize}
\end{theorem}
\subsection{Push-forward of currents under $Q$-functions}\label{s:current}
We define now the integer rectifiable current associated to the graph of a $Q$-valued function.
Given a $Q$-valued function $f:\R^{m}\to{\mathcal{A}}_Q(\R^{n})$, we set $\bar f=\sum_i\a{(x,f_i(x))}$,
$\bar f:\R^{m}\to{\mathcal{A}}_Q(\R^{m+n})$.
If $R\in \sD_k(\R^m)$ is a rectifiable current associated to a $k$-rectifiable set $M$
with multiplicity $\theta$,
$R=\tau(M,\theta,\xi)$, where $\xi$ is a Borel simple $k$-vector field orienting $M$
(we use the notation in \cite{Sim}),
and if $f$ is a proper Lipschitz $Q$-valued function,
we can define the push-forward of $R$ under $f$ as follows.
\begin{definition}\label{d:TR}
Given $R=\tau(M,\theta,\xi)\in \sD_k(\R^m)$ and $f\in{\rm {Lip}}(\R^{m},{\mathcal{A}}_Q(\R^{n}))$ as above,
we denote by $T_{f,R}$ the current in $\R^{m+n}$ defined by
\begin{equation}\label{e:TR}
\left\langle T_{f, R}, \omega\right\rangle = \int_{M} \theta \,\sum_i\left\langle
\omega\circ\bar f_i, D^M\bar f_{i\#}\xi\right\rangle d\,{\mathcal{H}}^k
\quad\forall\;\omega\in\sD^k(\R^{m+n}),
\end{equation}
where $\sum_i \a{D^M\bar f_i(x)}$ is the differential of $\bar f$ restricted to $M$.
\end{definition}
\begin{remark}
Note that, by Rademacher's theorem \cite[Theorem 1.13]{DLSp}
the derivative of a Lipschitz $Q$-function is defined a.e.~on smooth manifolds and, hence,
also on rectifiable sets.
\end{remark}
As a simple consequence of the Lipschitz decomposition
in \cite[Proposition 1.6]{DLSp},
there exist $\{E_j\}_{j\in\N}$ closed subsets of $\Omega$,
positive integers $k_{j,l},\,L_j\in\N$ and Lipschitz functions $f_{j,l}:E_j\to\R^{n}$, for $l=1,\ldots,L_j$, such that
\begin{equation}\label{e:Lip decomp}
{\mathcal{H}}^{k}(M\setminus \cup_{j} E_j)=0\quad\text{and}\quad
f\vert_{E_j}=\sum_{l=1}^{L_j}k_{j,l}\,\a{f_{j,l}}.
\end{equation}
From the definition, $T_{f,R}=\sum_{j,l}k_{j,l}\bar f_{j,l\#}(R\res E_{j})$
is a sum of rectifiable currents defined by the push-forward under single-valued Lipschitz functions.
Therefore, it follows that $T_{f,R}$ is rectifiable and coincides with
$\tau\big(\bar f(M),\theta_{f}, \vec{T}_{f}\big)$,
where
\[
\theta_{f}(x,f_{j,l}(x))=k_{j,l}\theta(x)\quad \text{and}\quad
\vec{T}_{f}(x,f_{j,l}(x))=\frac{D^M\bar f_{j,l\#}\xi(x)}{|D^M\bar f_{j,l\#}\xi(x)|}
\quad\forall\;x\in E_j.
\]
By the standard area formula, using the above decomposition of $T_{f,R}$,
we get an explicit expression for the mass of $T_{f,R}$:
\begin{equation}\label{e:mass TR}
{\mathbf{M}}\left(T_{f, R}\right)=\int_{M}|\theta|\,
\sum_i\sqrt{\det\left(D^M\bar f_i\cdot (D^M\bar f_i)^T\right)}\,d\,{\mathcal{H}}^k.
\end{equation}
With a slight abuse of notation, when $R=\a{\Omega}\in \sD_m(\R^m)$
is given by the integration over a Lipschitz domain $\Omega\subset\R^m$ of the standard
$m$-vector $\vec e=e_1\wedge\cdots\wedge e_m$, we write simply $T_{f,\Omega}$ for $T_{f,R}$.
We do the same for $T_{f,\partial \Omega}$, understanding that $\partial\Omega$ is oriented as the
boundary of $\a{\Omega}$.
The main result concerning the push-forward under $Q$-valued functions
is the following theorem, proven in \cite[Theorem C.3]{DLSp2}.
\begin{theorem}\label{t:de Tf}
For every $\Omega$ Lipschitz domain and $f\in {\rm {Lip}}(\Omega,{\mathcal{A}}_Q)$,
$\partial \,T_{f,\Omega}=T_{f,\partial \Omega}$.
\end{theorem}
Up to now we have defined the push-forward under Lipschitz maps.
Nevertheless, thanks to the approximate differentiability property of Sobolev
$Q$-functions (see \cite[Corollary 2.7]{DLSp}), for the full-dimensional current $R=\a{\Omega}$,
the definition of $T_{f,\Omega}$ in \eqref{e:TR} makes sense for Sobolev functions
as soon as the action is finite for every differential form $\omega\in\sD^m(\R^{m+n})$.
It is easy to verify that this condition is satisfied if
\begin{equation*}\label{e:finite mass}
{\mathbf{M}}(T_{f,\Omega})=\int_{\Omega}\sum_i\sqrt{\det\left(D\bar f_i\cdot (D\bar f_i)^T\right)}<+\infty.
\end{equation*}
For such functions, we have the following Taylor expansion of the mass of $T_{f,\Omega}$.
\begin{lemma}\label{l:mass vis dir}
Let $f\in W^{1,2}(\Omega,{\mathcal{A}}_Q)$ such that ${\mathbf{M}}\left(T_{f,\Omega}\right)<+\infty$.
Then,
\begin{equation}\label{e:mass vis dir1}
{\mathbf{M}}\left(T_{\lambda f,\Omega}\right)=Q\,|\Omega|+\frac{\lambda^2}{2}\,\textup{Dir}(f,\Omega)+o\left(\lambda^2\right)
\quad\text{as}\quad\lambda\to0.
\end{equation}
\end{lemma}
\begin{proof}
For every $\lambda>0$, set
$A_\lambda=\big\{|Df|\leq\lambda^{-\frac{1}{2}}\big\}$
and $B_\lambda=\big\{|Df|>\lambda^{-\frac{1}{2}}\big\}$.
Since $f\in W^{1,2}(\Omega,{\mathcal{A}}_Q)$, for $\lambda\to0$, we have that
\begin{equation}\label{e:su A}
\textup{Dir}(\lambda\,f,\Omega)=
\textup{Dir}(\lambda\,f,A_\lambda)+\lambda^2\int_{B_\lambda}|Df|^2
=\textup{Dir}(\lambda\,f,A_\lambda)+o\left(\lambda^2\right).
\end{equation}
Using the inequality $\sqrt{1+x^2}\geq 1+\frac{x^2}{2}-\frac{x^4}{4}$ for $|x|\leq 2$,
since $\lambda\,|Df|\leq\sqrt{\lambda}$ in $A_\lambda$, for $\lambda\leq 4$ we infer that
\begin{align}\label{e:est mass1}
{\mathbf{M}}\left(T_{\lambda f,\Omega}\right)
&\geq
\sum_i\int_{\Omega}\sqrt{1+\lambda^2\,|Df_i|^2}
\geq Q\,|B_\lambda|+\int_{A_\lambda}
\left(Q+\frac{\lambda^2\,|Df|^2}{2}-C\,\lambda^4\,|Df|^4
\right)\notag\\
&\geq Q\,|\Omega|+\frac{\lambda^2}{2}\,\textup{Dir}(f,A_\lambda)-
\int_{A_\lambda}C\,\lambda^3\,|Df|^2\notag\\
&
\hspace{-0.15cm}\stackrel{\eqref{e:su A}}{=}
Q\,|\Omega|+\frac{\lambda^2}{2}\,\textup{Dir}(f,\Omega)+o\left(\lambda^2\right).
\end{align}
As for the reverse inequality, we argue as follows.
In $A_\lambda$, since for every multi-index $\alpha$ with $|\alpha|\geq2$ we have
\[
\lambda^{2|\alpha|}|M_{f_i}^\alpha|^2\leq C\,\lambda^{2|\alpha|}|Df_i|^{2|\alpha|}\leq
C\, \lambda^3|Df_i|^2,
\]
we use the inequality $\sqrt{1+x^2}\leq1+\frac{x^2}{2}$ and get
\begin{align}\label{e:est massA}
{\mathbf{M}}\big(T_{\lambda f,A_\lambda}\big)
&\leq
\sum_i\int_{A_\lambda}\sqrt{1+\lambda^2\,|Df_i|^2+C\,\lambda^3\,|Df_i|^2}\notag\\
&\leq
Q\,|A_\lambda|+\frac{\lambda^2}{2}\,\textup{Dir}(f,A_\lambda)+o\left(\lambda^2\right).
\end{align}
In $B_\lambda$, instead, we use the same inequality and
the condition ${\mathbf{M}}(T_{f,\Omega})<+\infty$ to infer
\begin{align}\label{e:est massB}
{\mathbf{M}}\big(T_{\lambda f,B_\lambda}\big)
&\leq
\sum_i\int_{B_\lambda}\sqrt{1+\lambda^2\,|Df_i|^2}+\sqrt{\sum_{|\alpha|\geq 2}\lambda^{2|\alpha|}{M_{f_i}^\alpha}^2}\notag\\
&\leq
Q\,|B_\lambda|+\frac{\lambda^2}{2}\,\textup{Dir}(f,B_\lambda)+
\sum_i\int_{B_\lambda}\lambda^2\sqrt{\sum_{|\alpha|\geq 2}{M_{f_i}^\alpha}^2}\notag\\
&\hspace{-0.15cm}\stackrel{\eqref{e:su A}}{\leq} Q\,|B_\lambda|+o(\lambda^2)
+\lambda^2\,{\mathbf{M}}(T_{f,B_\lambda})=Q\,|B_\lambda|+o\left(\lambda^2\right).
\end{align}
From \eqref{e:est mass1}, \eqref{e:est massA} and \eqref{e:est massB}, the proof follows.
\end{proof}
\section{Complex varieties and $\textup{Dir}$-minimizing functions}\label{s:complex}
\subsection{Complex varieties as minimal currents}
In the following we consider irreducible holomorphic varieties $\sV\subseteq\C^{\mu+\nu}$
of dimension $\mu$.
Following Federer \cite{Fed2}, we associate to $\sV$ the
integer rectifiable current of real dimension $2\mu$ denoted by $\a{\mathscr{V}}$
given by the integration over the manifold part of $\sV$, $\sV_{\rm{reg}}$.
Recall that the singular part $\sV_{\rm{sing}}=\sV\setminus \sV_{\rm{reg}}$ is a
complex variety of dimension at most $(\mu-1)$.
A well-known result by Federer asserts that $\a{\sV}$ is a mass-minimizing cycle.
\begin{theorem}\label{t:fed}
Let $\mathscr{V}$ be an irreducible holomorphic variety.
Then, the integer rectifiable current $\a{\mathscr{V}}$ has locally finite mass and is
a locally mass-minimizing cycle,
that is, $\partial \a{\mathscr{V}}=0$
and ${\mathbf{M}}(\a{\mathscr{V}})\leq{\mathbf{M}}(S)$ for every integer
current $S$ with $\partial S=0$ and ${\rm supp}\,(S-\a{\mathscr{V}})$ compact.
\end{theorem}
We consider domains
$\Omega\subseteq\R^{2\mu}\simeq\C^\mu$ with the usual identification
$(x_l,y_l)\simeq z_l=(x_l+iy_l)$ for $l=1,\ldots,\mu$.
Moreover, $\mathscr{V}\subseteq\Omega\times\R^{2\nu}\subseteq\R^{2\mu+2\nu}\simeq\C^{\mu+\nu}$
is always supposed to be a $Q:1$-cover of $\Omega$ under the orthogonal projection $\pi$
onto $\Omega$, that is $\pi_{\#}\a{\sV}=Q\a{\Omega}$.
Clearly, under this hypothesis, there exists a $Q$-valued function
$f:\Omega\to{\mathcal{A}}_Q(\R^{2\nu})$ such that $\mathscr{V}=\mathop{graph}(f)$.
From Definition \ref{d:regular}, we readily deduce
$\Sigma_f\subseteq\pi(\sV_{\rm{sing}})$,
which in particular implies $\dim_{{\mathcal{H}}}(\Sigma_f)\leq 2\mu-2$.
Therefore, locally in $(\Omega\setminus \Sigma_f)\times\R^{2\nu}$,
$\sV$ is the superposition of graphs of holomorphic functions, that is,
for every $w\in\Omega\setminus \Sigma_f$, there exist a radius $r$ and $Q$ holomorphic
functions $f_i:B_r(w)\to\C^\nu$ such that $f\vert_{B_r(w)}=\sum_i\a{f_i}$.
The following are the main properties of $f$.
\begin{proposition}\label{p:fSob}
Let $\sV\subseteq \Omega\times\R^{2\nu}$ be a holomorphic variety as above and $f$
the associated $Q$-valued function.
Then, the following holds:
\begin{itemize}
\item[$(i)$] $f\in W^{1,2}(\Omega,{\mathcal{A}}_Q)$ and, for $\mu=1$,
${\mathbf{M}}(\a{\mathscr{V}}\res \Omega)=Q\,|\Omega|+\frac{\textup{Dir}(f,\Omega)}{2}$;
\item[$(ii)$] $\a{\sV}\res\Omega=T_{f,\Omega}$ and
$\partial (\a{\sV}\res B_r(x))=T_{f,\partial B_r(x)}$
for every $x$ and a.e.~$r>0$ with $B_r(x)\subseteq\Omega$.
\end{itemize}
\end{proposition}
\begin{proof}
Note that, for every smooth $h:\R^2\to\R^{2\nu}$
and, as usual, $\bar h(w)=(w,h(w))$,
\begin{equation}\label{e:area<energia}
\sqrt{\det\left(D\bar h\cdot D\bar h^T\right)}\leq 1+\frac{|Dh|^2}{2},
\end{equation}
with equality if and only if $h$ is conformal,
i.e.~$|\partial_{x} h|=|\partial_{y} h|$ and $\partial_{x} h\cdot \partial_{y} h=0$.
Indeed, after squaring both sides, \eqref{e:area<energia} reads as
\begin{align*}\label{e:Jh}
\det\left(D\bar h\cdot D\bar h^T\right)
&=
\det\left(\begin{array}{cc}
1+|\partial_x h|^2 & \partial_x h\cdot \partial_y h \\
\partial_x h\cdot \partial_y h & 1+|\partial_y h|^2
\end{array} \right)
\leq \left(1+\frac{|\partial_xh|^2+|\partial_y h|^2}{2}\right)^2,
\end{align*}
which in turn is equivalent to
$0\leq \left(|\partial_x h|^2-|\partial_y h|^2\right)^2+
4(\partial_x h\cdot \partial_y h)^2$.
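For the reader's convenience, we spell out the elementary expansion behind this equivalence: setting $a=|\partial_x h|^2$, $b=|\partial_y h|^2$ and $c=\partial_x h\cdot\partial_y h$,
\begin{align*}
\left(1+\frac{a+b}{2}\right)^2-\det\left(D\bar h\cdot D\bar h^T\right)
&=1+(a+b)+\frac{(a+b)^2}{4}-\left(1+a+b+ab-c^2\right)\\
&=\frac{(a-b)^2}{4}+c^2,
\end{align*}
which vanishes exactly when $a=b$ and $c=0$, that is, when $h$ is conformal.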
In the case $\mu=1$,
applying \eqref{e:area<energia} to the local holomorphic, hence conformal, selection of $f$,
from \eqref{e:mass TR} we get
\begin{equation}\label{e:mass=dir}
{\mathbf{M}}(\a{\mathscr{V}}\res(\Omega\setminus\Sigma_f))=
Q\,|\Omega|+\frac{\textup{Dir}(f,\Omega\setminus\Sigma_f)}{2}.
\end{equation}
In the case $\mu>1$, for $g:\R^{2\mu}\to \R^{2\nu}$ smooth,
combining \eqref{e:area<energia} with
the Binet--Cauchy formula (see \cite[Section 3.2 Theorem 4]{EG}),
for every $l=1,\ldots,\mu$ we infer
\begin{align}\label{e:mu>1}
\det\left(D\bar g\cdot D\bar g^T\right)&=
1+|Dg|^2+\sum_{|\alpha|=|\beta|\geq2}M_{\alpha\beta}(Dg)^2\notag\\
&\geq
1+|\partial_{x_l}g|^2+|\partial_{y_l}g|^2+
\sum_{i,j=1}^{2\nu}(\partial_{x_l}g^i\partial_{y_l}g^j-\partial_{x_l}g^j\partial_{y_l}g^i)^2\notag\\
&=\det\left(\nabla_l\bar g\cdot \nabla_l\bar g^T\right),
\end{align}
where $M_{\alpha\beta}$ stands for the $\alpha,\beta$ minors of a matrix
and $\nabla_l$ denotes the derivative with respect to $x_l$ and $y_l$.
Hence, if $f_i$ is a local holomorphic, consequently conformal,
selection for $f:\Omega\subset \R^{2\mu}\to{\mathcal{A}}_Q$, we infer that
\begin{align*}
\mu\,Q+\frac{|Df|^2}{2}&=\sum_{i=1}^Q\sum_{l=1}^\mu
\left(1+\frac{|\nabla_l f_i|^2}{2}\right)
\stackrel{\eqref{e:area<energia}}{=}
\sum_{i=1}^Q\sum_{l=1}^\mu\sqrt{\det\left(\nabla_l\bar f_i\cdot \nabla_l\bar f_i^T\right)}\notag\\
&\stackrel{\eqref{e:mu>1}}{\leq}
\mu \sum_{i=1}^Q\sqrt{\det\left(D\bar f_i\cdot D\bar f_i^T\right)}.
\end{align*}
Integrating, we conclude, for $\mu>1$,
\begin{equation}\label{e:mass=dir2}
{\mathbf{M}}(\a{\mathscr{V}}\res(\Omega\setminus\Sigma_f))\geq
Q\,|\Omega|+\frac{\textup{Dir}(f,\Omega\setminus\Sigma_f)}{2\,\mu}.
\end{equation}
Now since the mass of $\a{\sV}$ is finite,
by \eqref{e:mass=dir} and \eqref{e:mass=dir2} the energy of $f$ is finite in
$\Omega\setminus\Sigma_f$.
Since $\dim_{{\mathcal{H}}}(\Sigma_f)\leq m-2$, Lemma \ref{l:Sob} gives $(i)$.
Since $\a{\sV}$ is defined by integration over $\sV_{\rm{reg}}$
and ${\mathcal{H}}^m(\pi(\sV_{\rm{sing}}))=0$,
it follows straightforwardly that $T_{f,\Omega}$ is well-defined by \eqref{e:TR}
and coincides with $\a{\sV}$.
For the same reason, since also ${\mathcal{H}}^{m-1}(\pi(\sV_{\rm{sing}}))=0$,
$\partial(\a{\sV}\res B_r(x))=T_{f,\partial B_r(x)}$
for every $B_r(x)\subseteq \Omega$ such that
$f\vert_{\partial B_r(x)}\in W^{1,2}$ and ${\mathbf{M}}(\partial(\a{\sV}\res B_r(x)))$ is finite,
that is for every $x$ and a.e.~$r>0$, thus concluding the proof of $(ii)$.
\end{proof}
\subsection{Proof of Theorem \ref{t:complex}}
Now we are ready to prove the first main result of the paper.
We divide the proof into two parts: in the first we give an argument for the planar case
which is particularly simple and exploits the equality between the area and the energy functionals;
in the second part we give a proof valid in every dimension.
\subsubsection{Planar case $\mu=1$}
In view of Proposition \ref{p:fSob}, we need only to show that
$f$ is $\textup{Dir}$-minimizing in $B_1$.
Choose a radius $r\in[1,2]$ such that $\partial B_r\cap\Sigma_f =\emptyset$
and set $g=f\vert_{\partial B_r}$. Note that $g$ is Lipschitz continuous.
For every $h\in {\rm {Lip}}(B_r,{\mathcal{A}}_Q)$ with $h\vert_{\partial B_r}=g$,
from the Taylor expansion of the mass and from \eqref{e:area<energia}, we infer that
\begin{equation}\label{e:massineq}
{\mathbf{M}} (T_{h,B_r}) - Q\,|B_r|\leq \frac{\textup{Dir}(h,B_r)}{2}.
\end{equation}
By Theorem \ref{t:de Tf}, $\partial T_{h,B_r}=T_{f,\partial B_r}=\partial(\a{\sV}\res B_r)$.
So, using Theorem \ref{t:fed} we infer
\begin{equation*}
\textup{Dir}(f,B_r)\stackrel{\eqref{e:mass=dir}}{=}
2 \left({\mathbf{M}}(T_{f,B_r})-Q\,|B_r|\right)\leq
2 \left({\mathbf{M}}(T_{h,B_r})-Q\,|B_r|\right)\stackrel{\eqref{e:massineq}}{\leq}
\textup{Dir}(h,B_r).
\end{equation*}
Since the set of Lipschitz functions with trace $g$ is dense in $W_g^{1,2}(B_r,{\mathcal{A}}_Q)$
(see \cite[Section 14]{DLSp}), this implies that $f$ is $\textup{Dir}$-minimizing in $B_r$ and, a fortiori,
in $B_1$.\qed
\begin{remark}
The planar result provides examples of $\textup{Dir}$-minimizing functions
with singular set of dimension $m-2$ for every $m$, thus proving the optimality of the
regularity Theorem \ref{t:regularity}.
Indeed, if $g:B_1\subseteq\R^2\to {\mathcal{A}}_Q$ is $\textup{Dir}$-minimizing and $\Sigma_g\neq \emptyset$,
then $f:B_1\times \R^{m-2}\to{\mathcal{A}}_Q$ with $f(x_1, x_2, \ldots, x_m) = g (x_1, x_2)$
is also $\textup{Dir}$-minimizing (see \cite[Lemma 3.24]{DLSp}) and
$\dim_{{\mathcal{H}}}(\Sigma_f)=m-2$.
\end{remark}
\subsubsection{General case $\mu\geq1$}
Here we exploit the expansion of the mass given in Lemma \ref{l:mass vis dir}.
The reason why this can be done without the strong approximation
theory developed by Almgren in \cite{Alm} and reproved with different
methods in \cite{DLSp2} is that, given as above a complex variety which is the
graph of a multi-valued function, the rescaled current
$L_{\left\langlembda\#}\a{\sV}=T_{\left\langlembda f}$, where
$L_{\left\langlembda}:\C^{\mu+\nu}\to\C^{\mu+\nu}$ is given by
$L_\left\langlembda(x,y)=(x,\left\langlembda y)$,
is also a complex variety (being the $L_\left\langlembda$'s linear complex maps),
and, hence, it is also area-minimizing.
The proof is by contradiction. Assume $f$ is not $\textup{Dir}$-minimizing in $B_1$.
Then, there exists $u\in W^{1,2}(B_1,{\mathcal{A}}_Q)$ and $\eta>0$ such that
$\textup{Dir}(u,B_1)\leq \textup{Dir}(f,B_1)-\eta$ and $u\vert_{\partial B_1}=f\vert_{\partial B_1}$.
Set
\[
w=
\begin{cases}
u & \text{in}\; B_1,\\
f & \text{in}\; B_2\setminus B_1.
\end{cases}
\]
We want to use $w$ in order to construct competitor currents for $L_{\lambda\#}\a{\sV}$.
To this aim, consider for every ${\varepsilon}>0$ the Lipschitz approximation $w_{\varepsilon}$ of $w$
given by \cite[Proposition 4.4]{DLSp}. It enjoys the following properties:
\begin{itemize}
\item[(a)] $|E_{\varepsilon}|=o\left({\varepsilon}^2\right)$ as ${\varepsilon}\to 0$,
where $E_{\varepsilon}=\big\{w_{\varepsilon}\neq w\big\}$;
\item[(b)] ${\rm {Lip}}(w_{\varepsilon})\leq {\varepsilon}^{-1}$;
\item[(c)] $\norm{|Dw_{\varepsilon}|-|Dw|}{L^2}=o(1)$ as ${\varepsilon}\to 0$.
\end{itemize}
By Proposition \ref{p:fSob} and Lemma \ref{l:mass vis dir},
for every open $A$ such that $E_{\varepsilon}\subseteq A$ and $|A|\leq 2|E_{\varepsilon}|$,
\begin{align*}
{\mathbf{M}}\Big(L_{\lambda\,\#}\big(\a{\sV}\res(E_{\varepsilon}\times\R^{2\nu})
\big)\Big)
&={\mathbf{M}}\left(T_{\lambda f,E_{\varepsilon}}\right)
\leq
{\mathbf{M}}\left(T_{\lambda f,A}\right)\notag\\
&\hspace{-0.15cm}\stackrel{\eqref{e:mass vis dir1}}{=}
Q\,|A|+\frac{\lambda^2}{2}\int_A|Df|^2+o\left(\lambda^2\right)
=
o\left({\varepsilon}^2\right)+O\left(\lambda^2\right).
\end{align*}
Using Fubini's theorem and again Proposition \ref{p:fSob}, we can find radii $r_{\lambda,{\varepsilon}}$ such that
\begin{equation}\label{e:slice}
\abs{E_{\varepsilon}\cap \partial B_{r_{\lambda,{\varepsilon}}}}=o\left({\varepsilon}^2\right),
\end{equation}
\begin{equation}\label{e:slice2}
\partial \big( L_{\lambda\,\#}\a{\sV}\res B_r\big)=T_{\lambda f,\partial B_r}
\quad\text{and}\quad
{\mathbf{M}}\left(T_{\lambda f, E_{\varepsilon}\cap \partial B_r}\right)=
o\left({\varepsilon}^2\right)+O\left(\lambda^2\right).
\end{equation}
Set $S_{\lambda\,{\varepsilon}}=T_{\lambda f,\partial B_{r_{\lambda,{\varepsilon}}}}-T_{\lambda w_{\varepsilon},\partial B_{r_{\lambda,{\varepsilon}}}}$.
Note that, by Theorem \ref{t:de Tf}, since $w_{\varepsilon}$ is Lipschitz,
\[
\partial S_{\lambda\,{\varepsilon}}=\partial T_{\lambda f,\partial B_{r_{\lambda,{\varepsilon}}}}-\partial T_{\lambda w_{\varepsilon},\partial B_{r_{\lambda,{\varepsilon}}}}
\stackrel{\eqref{e:slice2}}{=}
\partial\partial \big( L_{\lambda\#}\a{\sV}\res B_r\big)=0.
\]
Moreover, since ${\rm {Lip}}(\lambda \,w_{\varepsilon})\leq\lambda\,{\varepsilon}^{-1}$ and
$T_{\lambda f,\partial B_{r_{\lambda,{\varepsilon}}}\setminus E_{\varepsilon}}=
T_{\lambda w_{\varepsilon},\partial B_{r_{\lambda,{\varepsilon}}}\setminus E_{\varepsilon}}$,
the mass of $S_{\lambda\,{\varepsilon}}$ can be estimated in the following way:
\begin{align}\label{e:mass bound rac}
{\mathbf{M}}\left(S_{\lambda\,{\varepsilon}}\right)&\leq
{\mathbf{M}}\big(T_{\lambda f,E_{\varepsilon}\cap\partial B_{r_{\lambda,{\varepsilon}}}}\big)
+{\mathbf{M}}\big(T_{\lambda w_{\varepsilon},E_{\varepsilon}\cap\partial B_{r_{\lambda,{\varepsilon}}}}\big)\notag\\
&\stackrel{\eqref{e:slice2}}{\leq}
o\left({\varepsilon}^2\right)+O\left(\lambda^2\right)+C\,\frac{\lambda\,|E_{\varepsilon}|}{{\varepsilon}}
\stackrel{\eqref{e:slice}}{\leq}
o\left({\varepsilon}^2\right)+O\left(\lambda^2\right)+o\left(\lambda\,{\varepsilon}\right).
\end{align}
For ${\varepsilon}=\lambda$, ${\mathbf{M}}\left(S_{\lambda\,\lambda}\right)=O\left(\lambda^2\right)$
and, by the isoperimetric inequality \cite[Theorem 30.1]{Sim},
there exists an integer rectifiable current $R_{\lambda}$ such that
\begin{equation}\label{e:mass racc}
\partial R_\lambda=S_{\lambda\,\lambda}
\quad\text{and}\quad
{\mathbf{M}}\left(R_\lambda\right)\leq C\,{\mathbf{M}}\left(S_{\lambda\,\lambda}\right)^{\frac{m}{m-1}}
=o\left(\lambda^2\right).
\end{equation}
The current $T_\lambda=T_{\lambda\,w_\lambda,B_{r_{\lambda}}}+R_\lambda$
now contradicts the minimality of the complex current
$L_{\lambda\,\#}(\a{\sV}\res B_{r_\lambda})$.
Indeed, it is easy to verify that $\partial T_\lambda=\partial(L_{\lambda\,\#}\a{\sV}\res B_{r_\lambda})$ and, for small $\lambda$,
\begin{align*}
{\mathbf{M}}\left(T_\lambda\right)-
{\mathbf{M}}\left(L_{\lambda\,\#}\a{\sV}\res\left(B_{r_\lambda}\times\R^{2\nu}\right)\right)=&{}
Q\,|B_{r_\lambda}|+\frac{\lambda^2}{2}\,\textup{Dir}(w_\lambda,B_{r_\lambda})+\notag\\
&-Q\,|B_{r_\lambda}|-\frac{\lambda^2}{2}\,\textup{Dir}(f,B_{r_\lambda})+o\left(\lambda^2\right)
\\
\leq &{}
-\frac{\lambda^2\,\eta}{4}+o\left(\lambda^2\right)< 0.
\end{align*}
\qed
\section{Higher integrability of the gradients of $\textup{Dir}$-minimizing functions} \label{s:higher}
In this section we prove Theorem \ref{t:higher}. As above, for the planar case we give a
simple proof which in addition provides the optimal integrability exponent.
This proof relies on the following proposition, because
by Theorem \ref{t:regularity} the singular points are isolated in dimension two.
\begin{proposition}
Let $u\in W^{1,2}(B_2,{\mathcal{A}}_Q)$ be $\textup{Dir}$-minimizing
and assume that $\Sigma_u=\{0\}$.
Then, $|Du|\in L^p(B_1)$ for every $p< \frac{2Q}{Q-1}$.
\end{proposition}
\begin{proof}
Let $x\in B_1\setminus\{0\}$ and set $r=|x|$.
Then, since $\Sigma_u=\{0\}$, there exists in $B_r(x)$ an analytic selection
of $u$, $u\vert_{B_r(x)}=\sum_i\a{u_i}$, where $u_i:B_r(x)\to\R^{n}$
are harmonic functions.
Using the mean value inequality for $Du_i$,
one infers that
\begin{equation*}
|Du_i(x)|\leq \fint_{B_r(x)}|Du_i|\leq
\frac{1}{\sqrt{\pi}\,r}\left(\int_{B_r(x)}|Du_i|^2\right)^{\frac{1}{2}},
\end{equation*}
from which
\begin{equation}\label{e:stima grad}
|Du|(x)^2=\sum_i|Du_i(x)|^2
\leq\frac{1}{\pi\,r^2}\sum_i\int_{B_r(x)}|Du_i|^2
=\frac{\textup{Dir}(u,B_r(x))}{\pi\,r^2}.
\end{equation}
Using the decay estimate \eqref{e:dec} with $\rho=1$ together with \eqref{e:stima grad},
we deduce that
\begin{equation*}\label{e:est2d}
|Du|(x)\leq \frac{\textup{Dir}(u,B_2)}{\sqrt{\pi}\,r^{1-\frac{1}{Q}}},
\end{equation*}
which in turn implies the conclusion,
\[
\int_{B_1}|Du|^p\leq C \int_{B_1}\frac{1}{|x|^{p-\frac{p}{Q}}}
<+\infty,\quad\forall\;p<\frac{2Q}{Q-1}.
\]
\end{proof}
\begin{remark}
The range $\big[2,\frac{2Q}{Q-1}\big)$ for the integrability
exponent is optimal. Consider, indeed, the complex variety
$\sV_Q=\{(z,w):w^Q=z\}\subseteq \C^2$.
By Theorem \ref{t:complex},
the $Q$-valued function $u(z)=\sum_{w^Q=z}\a{w}$
is $\textup{Dir}$-minimizing in $B_2$.
Moreover, $|Du|(z)=Q\,|z|^{\frac{1}{Q}-1}$. Hence,
$|Du|\in L^p$ for every $p<\frac{2Q}{Q-1}$ and
$|Du|\notin L^{\frac{2Q}{Q-1}}$.
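Indeed, on $B_1$ and in polar coordinates,
\[
\int_{B_1}|Du|^p = Q^p\int_0^{2\pi}\!\!\int_0^1 \rho^{\,p\left(\frac{1}{Q}-1\right)+1}\,d\rho\,d\vartheta<+\infty
\quad\Longleftrightarrow\quad p\left(\frac{1}{Q}-1\right)+1>-1
\quad\Longleftrightarrow\quad p<\frac{2Q}{Q-1}.
\]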
\end{remark}
Now we pass to the proof of Theorem \ref{t:higher} for $m\geq3$.
The first step is a Caccioppoli's inequality for $\textup{Dir}$-minimizing functions.
For $P\in\R^n$, we denote by $\tau_P$ the following map:
$\tau_P:{\mathcal{A}}_Q(\R^{n})\to{\mathcal{A}}_Q(\R^{n})$,
\begin{equation*}\label{e:translation}
\tau_P (T) := \sum_i \a{T_i-P},\quad\text{for every}\quad
T=\sum_i \a{T_i}.
\end{equation*}
\begin{lemma}[Caccioppoli's inequality]\label{l:caccioppoli}
Let $u\in W^{1,2}(\Omega,{\mathcal{A}}_Q)$ be $\textup{Dir}$-minimizing.
Then, for every $P\in\R^{n}$ and every $\eta\in C_c^\infty(\Omega)$,
\begin{equation}\label{e:caccioppoli}
\int_\Omega |Du|^2\,\eta^2\leq\int_{\Omega}\abs{\tau_Pu}^2\,|D\eta|^2.
\end{equation}
In particular, in the case $\Omega=B_{2r}$,
\begin{equation}\label{e:caccioppoli2}
\int_{B_{\frac{3r}{2}}} |Du|^2\leq\frac{4}{r^2}\int_{B_{2r}}\abs{\tau_Pu}^2.
\end{equation}
\end{lemma}
\begin{proof}
Recall the outer variation \cite[Proposition 3.1]{DLSp} for $\textup{Dir}$-minimizing functions,
\begin{equation*}
0=\int \sum_i \big\langle D f_i (x):
D_x \psi (x, f_i (x))\big\rangle\, dx
+ \int \sum_i \big\langle Df_i (x) :
D_y \psi (x, f_i (x))\cdot
Df_i (x)\big\rangle\, d x,
\end{equation*}
and apply it to $\psi(x,y)=\eta(x)^2\,(y-P)$, where $P$ and $\eta$ are as in the statement.
Since $ D_x\psi(x,y)=2\,\eta(x)\,D\eta(x)\otimes(y-P)$ and
$D_y\psi(x,y)=\eta(x)^2\,{\rm Id}\,_n$,
this leads to
\begin{equation}\label{e:OV}
0=\int_\Omega \sum_i \big\langle D u_i (x) :
2\,\eta\,D\eta \otimes (u_i-P)\big\rangle
+ \int_\Omega \sum_i \big\langle Du_i (x) :
\eta^2\, Du_i (x)\big\rangle.
\end{equation}
Applying H\"older's inequality in \eqref{e:OV}, we conclude \eqref{e:caccioppoli}:
\begin{align*}
\int_\Omega\eta^2\,|Du|^2&=
-\sum_i\int_\Omega\big\langle Du_i\cdot (u_i-P),\eta\,D\eta\big\rangle
\leq\int_\Omega \sum_i |Du_i|\,|u_i-P|\,|\eta|\,|D\eta|\notag\\
&\leq\int_\Omega\left(\sum_i|Du_i|^2\,|\eta|^2\right)^{\frac{1}{2}}
\left(\sum_i|u_i-P|^2\,|D\eta|^2\right)^{\frac{1}{2}}\notag\\
&\leq \left(\int_\Omega\eta^2\,|Du|^2\right)^{\frac{1}{2}}
\left(\int_\Omega|\tau_P(u)|^2\,|D\eta|^2\right)^{\frac{1}{2}}.
\end{align*}
The last conclusion of the lemma follows from \eqref{e:caccioppoli} by
choosing a cut-off function $\eta\in C^\infty_c(B_{2r})$ with $\eta\equiv 1$ in $B_{3r/2}$ and $|D\eta|\leq \frac{2}{r}$.
\end{proof}
The following reverse H\"older inequality is the basic estimate for the higher integrability.
\begin{proposition}\label{p:est2p}
Let $\frac{2\,m}{m+2}<s<2$.
Then, there exists $C>0$ such that, for every
$u:\Omega\to{\mathcal{A}}_Q$ $\textup{Dir}$-minimizing, $x\in\Omega$ and
$r<\min\big\{1,{\rm {dist}}(x,\partial\Omega)/2\big\}$,
\begin{equation}\label{e:est2p}
\left(\fint_{B_r(x)}|Du|^2\right)^{\frac{1}{2}}\leq
C\left(\fint_{B_{2r}(x)}|Du|^s\right)^{\frac{1}{s}}.
\end{equation}
\end{proposition}
\begin{proof}
The proof is divided into two steps.
\textit{Step $1$: we assume that $u$ has average $0$, ${\bm{\eta}}\circ u=\frac{\sum_i u_i}{Q}=0$.}
The proof is by induction on the number of values $Q$.
The basic step $Q=1$ is clear: indeed, in this case ${\bm{\eta}}\circ u=u=0$.
Now, we assume that \eqref{e:est2p} holds for every $Q'<Q$ and, by
contradiction, it does not hold for $Q$.
Then, up to translations and dilations of the domain, there exists a sequence
$(u_l)_l\subset W^{1,2}(B_{4},{\mathcal{A}}_Q)$ of $\textup{Dir}$-minimizing functions such that ${\bm{\eta}}\circ u_l=0$
and
\begin{equation}\label{e:contra}
\left(\fint_{B_{4}}|Du_l|^s\right)^{\frac{1}{s}}<
\frac{\left(\fint_{B_2}|Du_l|^2\right)^{\frac{1}{2}}}{l}.
\end{equation}
Moreover, without loss of generality, we may also assume that $\int_{B_4}|u_l|^2=1$.
Using Caccioppoli's inequality \eqref{e:caccioppoli2}, we have
that $\textup{Dir}(u_l,B_{3})\leq 4$,
which in turn, by \eqref{e:contra}, implies
$$
\norm{{\mathcal{G}}(u_l,Q\a{0})}{W^{1,s}(B_4)}\leq C<+\infty.
$$
Since $s^*>2$, we can apply the compact Sobolev
embedding (see \cite[Proposition 2.11]{DLSp}) to deduce
that there exists a subsequence (not relabeled)
$u_l$ converging to some $u$ in $L^2(B_4)$.
From \eqref{e:contra} and Lemma \ref{l:sc}, we deduce that
\begin{equation}\label{e:abs1}
\int_{B_4}|u|^2=1
\qquad \text{and}\qquad \int_{B_{4}}|Du|^s=0,
\end{equation}
which implies that $u$ is constant, $u\equiv T\in {\mathcal{A}}_Q$.
Since by Theorem \ref{t:regularity} the $u_l$'s are equi-bounded
and equi-H\"older in $B_{2}$,
always up to a subsequence (again not relabeled),
the $u_l$'s converge uniformly to $T$ in $B_2$.
This implies, in particular, that
\begin{equation}\label{e:abs2}
{\bm{\eta}}\circ T=\lim_{l\to+\infty}{\bm{\eta}}\circ u_l=0.
\end{equation}
From \eqref{e:abs1} and \eqref{e:abs2}, one infers that $T$ is not a point of
multiplicity $Q$.
Therefore, since $u_l\to T$ uniformly in $B_2$, for $l$ large enough
the $u_l$'s must split into the sum of two $\textup{Dir}$-minimizing
functions $u_l=\a{v_l}+\a{w_l}$,
where the $v_l$'s are $Q_1$-valued functions and the $w_l$'s are
$Q_2$-valued, with $Q_1$, $Q_2$ positive and $Q_1+Q_2=Q$.
Applying now the inductive hypothesis to $v_l$ and $w_l$ we contradict
\eqref{e:contra} for $l$ large enough,
\begin{align*}
\left(\fint_{B_1(x)}|Du_l|^2\right)^{\frac{1}{2}}
&\leq\left(\fint_{B_1(x)}|Dv_l|^2\right)^{\frac{1}{2}}
+\left(\fint_{B_1(x)}|Dw_l|^2\right)^{\frac{1}{2}}\notag\\
&\leq
C\left(\fint_{B_{2}(x)}|Dv_l|^s\right)^{\frac{1}{s}}+
C\left(\fint_{B_{2}(x)}|Dw_l|^s\right)^{\frac{1}{s}}
\notag\\
&\leq
2\,C\left(\fint_{B_{2}(x)}|Du_l|^s\right)^{\frac{1}{s}}.
\end{align*}
\textit{Step $2$: generic $\textup{Dir}$-minimizing function $u$.}
Let $u$ be $\textup{Dir}$-minimizing and $\varphi={\bm{\eta}}\circ u$: then,
by \cite[Lemma 3.23]{DLSp}, $\varphi:\Omega\to\R^{n}$ is harmonic
and $D\varphi=\sum_i Du_i$, from which
\begin{equation}\label{e:|Dph|}
|D\varphi|^2\leq Q\sum_i|Du_i|^2=Q\,|Du|^2.
\end{equation}
Moreover, again by \cite[Lemma 3.23]{DLSp}, the $Q$-valued
function $v=\sum_i\a{u_i-\varphi}$ is $\textup{Dir}$-minimizing as well.
Note that
\begin{equation}\label{e:|Dv|}
|Du|^2\leq2\,|Dv|^2+2\,Q\,|D\varphi|^2
\quad\text{and}\quad
|Dv|^2\leq 2\,|Du|^2+2\,Q\,|D\varphi|^2.
\end{equation}
Using the inequality $\sqrt{\sum_j a_j}\leq \sum_j\sqrt{a_j}$ for positive $a_j$,
we deduce
\begin{align}\label{e:est2p inizio}
\left(\fint_{B_r(x)}|Du|^2\right)^{\frac{1}{2}}&\leq
\left(\fint_{B_r(x)}2\,|Dv|^2+2\,Q\,|D\varphi|^2\right)^{\frac{1}{2}}\notag\\
&\leq 2\left(\fint_{B_r(x)}|Dv|^2\right)^{\frac{1}{2}}+
2\,Q\left(\fint_{B_r(x)}|D\varphi|^2\right)^{\frac{1}{2}}.
\end{align}
For the first term in the right hand side of \eqref{e:est2p inizio}, we use Step $1$,
since ${\bm{\eta}}\circ v=0$, to get
\begin{align}\label{e:term1}
\left(\fint_{B_r(x)}|Dv|^2\right)^{\frac{1}{2}} &\leq
C\left(\fint_{B_{2r}(x)}|Dv|^s\right)^{\frac{1}{s}}
\;\stackrel{\mathclap{\eqref{e:|Dv|}}}{\leq}\; C\left(\fint_{B_{2r}(x)}\left(2\,|Du|^2+2\,Q\,|D\varphi|^2\right)^{\frac{s}{2}}\right)^{\frac{1}{s}}\notag\\
&\leq
C\left(\fint_{B_{2r}(x)}2\,|Du|^s+2\,Q\,|D\varphi|^s\right)^{\frac{1}{s}}
\stackrel{\eqref{e:|Dph|}}{\leq}
C\left(\fint_{B_{2r}(x)}|Du|^s\right)^{\frac{1}{s}}.
\end{align}
For the remaining term in \eqref{e:est2p inizio}, we use the standard
estimate for harmonic functions,
\begin{equation}\label{e:harmonic}
|D\varphi(x)|\leq\frac{C}{r^m}\,\norm{D\varphi}{L^1(B_{2r})}
\qquad\forall\;x\in B_r,
\end{equation}
and infer
\begin{align}\label{e:est2p fine}
\left(\fint_{B_r(x)}|D\varphi|^2\right)^{\frac{1}{2}}
&\;\;\stackrel{\mathclap{\eqref{e:harmonic}}}{\leq}\;\;
\frac{C}{r^m}\,\norm{D\varphi}{L^1(B_{2r})}
\leq \frac{C}{r^m}
\left(\int_{B_{2r}(x)}|D\varphi|^s\right)^{\frac{1}{s}}
\,r^{m\left(1-\frac{1}{s}\right)}\notag\\
&\;\;\leq\; C\left(\fint_{B_{2r}(x)}|D\varphi|^s\right)^{\frac{1}{s}}
\;\stackrel{\mathclap{\eqref{e:|Dph|}}}{\leq}\; C\left(\fint_{B_{2r}(x)}|Du|^s\right)^{\frac{1}{s}}.
\end{align}
Clearly, \eqref{e:est2p inizio}, \eqref{e:term1} and
\eqref{e:est2p fine} finish the proof.
\end{proof}
The proof of Theorem \ref{t:higher}
is now an easy consequence of the following reverse H\"older
inequality with increasing supports proved by Giaquinta and Modica in
\cite[Proposition 5.1]{GiMo}.
\begin{theorem}[Reverse H\"older inequality]\label{t:GiMo}
Let $\Omega\subseteq\R^{m}$ be open and $g\in L_{loc}^q(\Omega)$,
with $q>1$ and $g\geq0$.
Assume that there exist positive constants $b$ and $R$ such that
\begin{equation}\label{e:hyp rh}
\left(\fint_{B_r(x)}g^q\right)^{\frac{1}{q}}\leq
b\fint_{B_{2r}(x)}g,
\quad \forall\;
x\in\Omega,\;\forall\;
r<\min\big\{R,{\rm {dist}}(x,\partial\Omega)/2\big\}.
\end{equation}
Then, there exist $p=p(q,b)>q$ and $c=c(m,q,b)$ such that
$g\in L^p_{loc}(\Omega)$ and
\begin{equation*}\label{e:th rh}
\left(\fint_{B_r(x)}g^p\right)^{\frac{1}{p}}\leq
c\left(\fint_{B_{2r}(x)}g^q\right)^{\frac{1}{q}},
\quad\forall\;
x\in\Omega,\;\forall\;
r<\min\big\{R,{\rm {dist}}(x,\partial\Omega)/2\big\}.
\end{equation*}
\end{theorem}
\begin{proof}[Proof of Theorem \ref{t:higher}]
Consider the function $g=|Du|^{s}$, where $s<2$ is the exponent in Proposition \ref{p:est2p}.
Estimate \eqref{e:est2p} implies that
hypothesis \eqref{e:hyp rh} of Theorem \ref{t:GiMo} is satisfied with $q=\frac{2}{s}>1$.
Hence, there exists an exponent $p'>q$
such that $g$ belongs to $L^{p'}_{loc}(\Omega)$, i.e.~$|Du|\in L^p_{loc}(\Omega)$
for $p=p'\cdot s>2$.
\end{proof}
\end{document} |
\begin{document}
\title{Escaping from Zero Gradient: Revisiting Action-Constrained Reinforcement Learning via Frank-Wolfe Policy Optimization}
\begin{abstract}
Action-constrained reinforcement learning (RL) is a widely-used approach in various real-world applications, such as scheduling in networked systems with resource constraints and control of a robot with kinematic constraints.
While the existing projection-based approaches ensure zero constraint violation, they could suffer from the zero-gradient problem due to the tight coupling of the policy gradient and the projection, which results in sample-inefficient training and slow convergence.
To tackle this issue, we propose a learning algorithm that decouples the action constraints from the policy parameter update by leveraging state-wise Frank-Wolfe and a regression-based policy update scheme.
Moreover, we show that the proposed algorithm enjoys convergence and policy improvement properties in the tabular case as well as generalizes the popular DDPG algorithm for action-constrained RL in the general case.
Through experiments, we demonstrate that the proposed algorithm significantly outperforms the benchmark methods on a variety of control tasks.
\end{abstract}
\section{Introduction}
\label{section:intro}
Action-constrained reinforcement learning (RL) is a popular approach for sequential decision making in real-world systems.
One classic example is maximizing the network-wide utility by optimally allocating the network resource under capacity constraints \citep{xu2018experience,gu2019intelligent,zhang2020cfr}.
Another example is robot control under kinematic constraints \citep{pham2018optlayer,gu2017deep,jaillet2012path,tsounis2020deepgait}, which capture the limitations of the physical components of a robot (e.g., in terms of velocity, torque, or output power).
In these examples, the constraints essentially characterize the set of feasible actions at each state.
To ensure the safe and normal operation of these real-world systems, it is required that these action constraints are satisfied throughout the evaluation as well as the training processes \citep{chow2018lyapunov,liu2020robust,gu2017deep}.
Therefore, in action-constrained RL, an effective training algorithm is required to achieve the following two tasks simultaneously: (i) iteratively improving the policy and (ii) ensuring zero constraint violation at each training step.
To enable RL with action constraints, one popular generic approach is to include an additional differentiable projection layer at the output of the policy network and follow the standard end-to-end policy gradient approach \citep{pham2018optlayer,dalal2018safe,bhatia2019resource}.
While being a general-purpose solution, this projection layer could result in the \textit{zero-gradient issue} during training due to the tight coupling of the policy gradient update and the projection layer.
Specifically, zero gradient occurs when the original output of the policy network falls outside of the feasible action set and any small perturbation of the policy parameters does not lead to any change in the final output action due to the projection mechanism.
To better understand the zero-gradient issue, let us consider a toy example of a policy network with one hidden layer and a linear output layer used to produce a deterministic scalar action. Suppose the actions are required to be non-negative.
To satisfy the non-negativity action constraint, an additional $L_2$-projection layer, which is equivalent to a Rectified Linear Unit (ReLU), is added to the output of the policy.
It can be seen that the policy network can easily suffer from zero gradient due to the clipping effect of ReLU \citep{maas2013rectifier}.
If the zero-gradient issue occurs in a large portion of the state space, the training process could be sample-inefficient as most of the samples are wasted, and therefore the convergence speed could be slow.
Notably, the zero-gradient issue can be particularly severe in the early training phase since the pre-projection actions produced by the policy network are likely to be far away from the feasible sets.
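To make the toy example concrete, the following minimal sketch (ours, for illustration only) shows in PyTorch how the ReLU projection blocks the gradient whenever the pre-projection action is infeasible:
\begin{verbatim}
# Minimal illustration of the zero-gradient issue: a linear "policy" followed
# by the L2-projection onto the non-negative actions, which is exactly a ReLU.
import torch

theta = torch.tensor([-2.0], requires_grad=True)   # policy parameter
state = torch.tensor([1.0])

pre_projection = theta * state             # raw policy output: -2.0 (infeasible)
action = torch.relu(pre_projection)        # projection onto [0, +infinity)

surrogate = -action                        # pretend dQ/da = 1 (larger action is better)
surrogate.backward()
print(theta.grad)                          # zero: the clipped ReLU blocks the gradient
\end{verbatim}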
The fundamental cause of the zero-gradient issue is the tight coupling of the policy parameter update and the projection layer under the standard policy gradient framework.
Specifically, in the end-to-end policy-gradient-based training process, the update of the policy parameters relies on the gradient of the actual policy output with respect to the policy parameters and thereby involves the gradient of this additional projection layer.
To escape from the zero-gradient issue, we take a different approach and propose a learning algorithm that \textit{decouples} the parameter update for policy improvement from constraint satisfaction, without using the policy gradient theorem.
The proposed algorithm can be highlighted as follows:
\begin{itemize}[leftmargin=*]
\item To accommodate the action constraints, we leverage the Frank-Wolfe method \citep{frank1956algorithm} to search for feasible action update directions directly \textit{within} the feasible action sets in a \textit{state-wise} manner.
Through this procedure, for a collection of states, we obtain the reference actions that are used to guide the update of the policy parameters for improving the current policy.
\item To update the parameters of the policy network, we propose to construct a loss function (e.g., mean squared error) that enables the policy network to adjust its outputs toward the reference actions.
This update scheme can be viewed as solving a regression problem based on the reference actions by taking one-step gradient descent.
In this way, the parameter update is completely decoupled from the action constraints.
\end{itemize}
Since the proposed framework obviates the need for the gradient of a projection layer, it avoids the zero-gradient issue by nature.
\textbf{Our Contributions.}
In this paper, we revisit the action-constrained RL problem and propose a novel learning framework that avoids the zero-gradient issue and achieves zero constraint violation simultaneously:
\begin{itemize}[leftmargin=*]
\item We formally identify the important zero gradient issue in the existing projection-based approaches for action-constrained RL.
We also pinpoint that the fundamental cause of the zero-gradient issue is the tight coupling of the policy parameter update and the projection layer under the standard policy gradient framework.
To the best of our knowledge, this is the first time that the zero-gradient issue is discussed in the context of action-constrained RL.
\item To better describe the proposed learning framework, we start from the case of finite state spaces and introduce Frank-Wolfe policy optimization (FWPO) with tabular policy parameterization, which can be viewed as an instance of the generalized policy iteration.
By directly searching for update directions within the feasible sets via state-wise Frank-Wolfe, FWPO automatically achieves zero constraint violation and does not require any additional projection.
Moreover, we establish the convergence of FWPO as well as its policy improvement property.
\item Built on FWPO, we propose Neural FWPO (NFWPO) by extending the idea of FWPO to the general neural policies via a regression argument.
By constructing a loss function and leveraging state-wise Frank-Wolfe, we decouple the policy parameter update from the action constraints.
This design automatically prevents the zero-gradient issue.
Moreover, we show that the vanilla DDPG is a special case of NFWPO if there are no action constraints.
\item Through experiments on various real applications, we empirically show the zero-gradient problem and demonstrate that the proposed algorithms significantly outperform the popular benchmark methods for action-constrained RL.
\end{itemize}
\section{Preliminaries}
\label{section:model}
We consider an infinite-horizon discounted Markov decision process (MDP) defined by a tuple $(\mathcal{S},\mathcal{A},p,r,\gamma)$, where $\mathcal{S}$ is the state space, $\mathcal{A}$ denotes the action space, $p$ is the state transition probability, $r$ is the reward function, and $\gamma\in (0,1)$ denotes the discount factor.
We assume that the action space $\mathcal{A}\subseteq \bR^{N}$ is continuous and the reward function takes value in $[0,1]$ for all state-action pairs.
At each time step $t=0,1,\cdots$, the learner observes state $s_t$, takes an action $a_t$, and receives an immediate reward $r_t$.
In this paper, we consider the \textit{action-constrained} MDPs where for each state $s\in \cS$ there is a feasible action set $\cC(s) \subseteq \cA$ determined by the underlying collection of constraints.
We assume that $\cC(s)$ is compact and convex.
In this paper, we focus on deterministic policies and use $\pi(\cdot;\theta):\cS\rightarrow \cA$ to denote a deterministic parametric policy with parameter vector $\theta\in\bR^n$.
Under a policy $\pi$, the value functions are defined as the expected long-term rewards
\begin{align}
V(s;\pi)&=\E\Big[\sum_{t=0}^{\infty} \gamma^t r(s_t,a_t)\rvert s_0=s,\pi\Big],\\
Q(s,a;\pi)&=\E\Big[\sum_{t=0}^{\infty} \gamma^t r(s_t,a_t)\rvert s_0=s,a_0=a,\pi\Big].
\end{align}
To make a comparison between policies, for any two policies $\pi$ and $\pi'$, we say that $\pi\geq \pi'$ if $V(s;\pi)\geq V(s;\pi')$, for all $s\in\cS$.
This essentially constructs a partial ordering among policies.
To construct a total ordering of all the policies, consider the performance objective defined as a weighted average of the value function
\begin{equation}
J_{\mu}(\pi):=\E_{s\sim \mu}[V(s;\pi)],
\end{equation}
where $\mu$ is called the restarting state distribution \citep{kakade2002approximately}.
Note that one common choice of $\mu$ is the initial state distribution.
It is also convenient to define the discounted state visitation distribution $d_\mu^{\pi}$ as $d_\mu^{\pi}(s):=(1-\gamma)\E_{s_0\sim \mu}[\sum_{t=0}^{\infty}\gamma^t P(s_t=s \rvert s_0,\pi)]$, for each $s\in \cS$.
\textbf{Notations}. We use the standard notations $\norm{\cdot}_p$ and $\norm{\cdot}_{F}$ to denote the $L_p$-norm of a vector and the Frobenius norm of a matrix, respectively. We use $\inp{\cdot}{\cdot}$ to denote the inner product of two real vectors. For a set $\cD$, we define the diameter of $\cD$ as $\text{diam}_{\norm{\cdot}_2}(\cD):=\sup_{x_1,x_2\in \cD}\norm{x_1-x_2}_2$.
We use $\text{dom}f$ to denote the domain of a function $f$.
\subsection{Policy Gradient}
\label{section:model:DPG}
To optimize the objective $J_{\mu}(\pi)$, the typical approach is to apply gradient ascent based on the policy gradient.
Under the standard regularity conditions, the deterministic policy gradient \citep{silver2014deterministic} can be written as
\begin{align}
&\nabla_{\theta} J_{\mu}(\pi(\cdot;\theta))\nonumber\\
&= \E_{s\sim d_{\mu}^{\pi}}\Big[\nabla_{\theta}\pi(s;\theta) \nabla_{a}Q(s,a;\pi(\cdot;\theta))\rvert_{a=\pi(s;\theta)}\Big].
\label{eq:DPG}
\end{align}
As a practical implementation of the deterministic policy gradient approach, DDPG \citep{lillicrap2016continuous} extends deep $Q$-learning \citep{mnih2015human} to continuous action space in an actor-critic manner.
Specifically, DDPG updates the policy parameter $\theta$ by applying stochastic gradient ascent according to (\ref{eq:DPG}) and obtains an approximated $Q$-function $Q(s,a; \phi)$ parameterized by $\phi$ by using a $Q$-learning-like critic, which updates $\phi$ by minimizing the loss $\E_{(s,a,s',r)\sim \rho}[(r+\gamma Q(s',\pi(s';\theta^-);\phi^-)-Q(s,a;\phi))^2]$, where $\rho$ denotes the sampling distribution of the replay buffer, and $\theta^-$ and $\phi^-$ are the parameters of the actor and critic target networks, respectively.
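For concreteness, the critic update just described can be sketched as follows (a schematic PyTorch snippet with hypothetical \texttt{critic}, \texttt{critic\_target}, and \texttt{actor\_target} networks, not the exact implementation used later in our experiments):
\begin{verbatim}
# Schematic DDPG critic update on a replay-buffer mini-batch (s, a, r, s').
import torch
import torch.nn.functional as F

def ddpg_critic_loss(s, a, r, s_next, critic, critic_target, actor_target,
                     gamma=0.99):
    with torch.no_grad():
        a_next = actor_target(s_next)                  # pi(s'; theta^-)
        y = r + gamma * critic_target(s_next, a_next)  # bootstrapped target
    return F.mse_loss(critic(s, a), y)                 # squared Bellman error
\end{verbatim}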
\subsection{Frank-Wolfe Methods}
In this section, we provide an overview of the Frank-Wolfe algorithms. Consider an optimization problem in the form
\begin{equation}
\max_{{x}\in\mathcal{X}} F({x}),\label{eq:constrained opt}
\end{equation}
where $F(\cdot):\mathbb{R}^{d}\rightarrow \mathbb{R}$ is a differentiable function with a Lipschitz continuous gradient, and $\mathcal{X}\subseteq \bR^d$ is the feasible set characterized by the underlying constraints on ${x}$.
One popular approach is to apply the projected gradient ascent method \citep{bubeck2015convex}, which combines the standard gradient ascent with a projection step.
By contrast, as a projection-free method, the classic Frank-Wolfe algorithm \citep{frank1956algorithm} and its variants solve constrained optimization problems of the form (\ref{eq:constrained opt}) by leveraging a first-order subproblem.
We briefly summarize the Frank-Wolfe algorithm for non-convex objective functions in the batch settings as follows \citep{lacoste2016convergence,reddi2016stochastic}:
\begin{itemize}[leftmargin=*]
\item \textbf{Initialization.} Let $x_k$ denote the input at the $k$-th iteration and choose an arbitrary $x_0\in\cX$ to be the initial point.
\item \textbf{Search for an update direction within the feasible set.}
In the $k$-th iteration, compute $v_k=\argmax_{v\in\cX}\inp{v}{\nabla_{x}F(x)\rvert_{x=x_k}}$ and update the iterate as $x_{k+1}=x_k+\beta_k(v_k-x_k)$, where $v_k-x_k$ is the update direction and $\beta_k$ denotes the learning rate.
\end{itemize}
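The iteration just described can be sketched as follows, assuming a polytope feasible set $\cX=\{x: A_{ub}\,x\leq b_{ub}\}$ and a gradient oracle \texttt{grad\_F}; the linear subproblem is solved here with an off-the-shelf LP solver (a hedged illustration, not a reference implementation):
\begin{verbatim}
# Frank-Wolfe for max_{x in X} F(x) over a polytope X = {x : A_ub x <= b_ub}.
import numpy as np
from scipy.optimize import linprog

def frank_wolfe_max(grad_F, x0, A_ub, b_ub, bounds, num_iters=100, beta=0.1):
    x = np.array(x0, dtype=float)
    for _ in range(num_iters):
        g = grad_F(x)
        # Linear subproblem: v = argmax_{v in X} <v, grad F(x)>.
        # linprog minimizes, so we pass the negated gradient.
        v = linprog(-g, A_ub=A_ub, b_ub=b_ub, bounds=bounds).x
        gap = g @ (v - x)          # Frank-Wolfe gap; zero at a stationary point
        if gap <= 1e-8:
            break
        x = x + beta * (v - x)     # convex combination stays feasible
    return x
\end{verbatim}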
For unconstrained optimization problems, the convergence properties are typically analyzed in terms of the gradient norm $\norm{\nabla_x F(x)}_2$.
By contrast, for constrained maximization problems, one widely-used metric of convergence in the Frank-Wolfe literature is the \textit{Frank-Wolfe gap} defined as $\cG(x):=\max_{z\in \cX}\inp{z-x}{\nabla_{x}F(x)}$\footnote{In the literature, the Frank-Wolfe gap is typically defined as $\max_{z\in \cX}\inp{z-x}{-\nabla_{x}F(x)}$ since the goal is to minimize an objective function. By contrast, as the goal of RL is to optimize the policy in terms of rewards, we consider the maximization problem in the form of (\ref{eq:constrained opt}) and make the required changes accordingly.}.
It is easy to verify that $\cG(x)=0$ is a necessary and sufficient condition for $x$ to be a stationary point.
\begin{comment}
\begin{remark}
\normalfont Frank-Wolfe optimization has been studied in various settings, such as convex objectives in the batch settings \citep{jaggi2013revisiting,jaggi2015global,liu2020newton}, convex objectives in the stochastic settings \citep{hazan2016variance}, and non-convex objectives \citep{reddi2016stochastic,lacoste2016convergence}.
To leverage Frank-Wolfe in policy optimization, we focus on the non-convex setting as the performance objective is generally non-convex.
\end{remark}
\end{comment}
\section{Frank-Wolfe Policy Optimization}
\label{section:alg}
In this section, we formally present the proposed learning algorithms for action constrained RL.
To better describe the proposed learning framework, we start from a stylized setting with tabular policy parameterization for finite state spaces and extend the idea to develop a more practical algorithm for the general parametric policies.
\subsection{Frank-Wolfe Policy Optimization With Direct Policy Parameterization (FWPO)}
For ease of exposition, we first illustrate the proposed algorithm for the case of finite state spaces and tabular policies with direct parameterization, i.e., $\pi(s;{\theta})\equiv\theta(s)$, for all $s\in\mathcal{S}$.
We consider the performance objective $J_{\mu}(\pi)$ for some restarting state distribution $\mu$ satisfying $\mu(s)>0$ for all $s\in\cS$, and define $\mu_{\min}:=\min_{s\in\cS}\mu(s)$.
For ease of notation, we also define $D_s:=\text{diam}_{\norm{\cdot}_2}(\cC(s))$ for each $s$ and $D_{\max}:=\max_{s\in\cS}D_s$.
Now we present the proposed FWPO algorithm. We use $\theta_k$ to denote the policy parameters in the $k$-th iteration and choose feasible initial policy parameters $\theta_0$ which satisfy $\theta_0(s)\in \cC(s)$, for all $s\in\cS$. FWPO adopts the generalized policy iteration framework \citep{sutton2018reinforcement} by alternating between two subroutines in each iteration:
\begin{itemize}[leftmargin=*]
\item \textbf{Policy update via state-wise Frank-Wolfe.}
FWPO updates the policy by finding a feasible update direction of each state $s\in\cS$ via Frank-Wolfe as
\begin{align}
\hspace{-6pt}c_{k}(s)&=\argmax_{c \in \cC(s)}~\inp{c}{\nabla_{a} Q(s,a;\pi(\cdot;\theta_k))\rvert_{a=\theta_k(s)}},\label{eq:FW direction in tabular case}\\
\hspace{-6pt}\theta_{k+1}(s)&=\theta_k(s)+\alpha_k(s) (c_k(s)-\theta_k(s)),\label{eq:FW update in tabular case}
\end{align}
where $c_k(s)-\theta_k(s)$ is the update direction and $\alpha_k(s)$ denotes the (state-dependent) learning rate.
Moreover, it is natural to define the \textit{state-wise Frank-Wolfe gap} of the $Q$-function at $\theta_k$ as
\begin{equation}
g_{k}(s):=\inp{c_k(s)-\theta_k(s)}{\nabla_{a} Q(s,a;\pi(\cdot;\theta_k))\rvert_{a=\theta_k(s)}}.\label{eq:state-wise FW gap}
\end{equation}
It is easy to verify that $g_{k}(s)\geq 0$, for all $k\in\mathbb{N}$ and for all $s\in\cS$.
As will be shown momentarily, to ensure convergence, the learning rate is configured to be $\alpha_k(s)=\frac{(1-\gamma)\mu_{\min}}{L D_s^2}g_k(s)$.
\item \textbf{Evaluation of the current policy.} FWPO then evaluates the updated policy and obtains the $Q$-function (or an approximation of it) for the next iteration. This can be done by a standard policy evaluation approach.
\end{itemize}
The above scheme of FWPO is detailed in Algorithm \ref{alg:FWPO}.
As suggested by Algorithm \ref{alg:FWPO}, FWPO always searches for an update direction within the feasible action sets. Therefore, FWPO achieves zero constraint violation by construction and does not require any additional projection.
\begin{algorithm}[!htbp]
\caption{Frank-Wolfe Policy Optimization (FWPO)}
\label{alg:FWPO}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Initialize the policy parameters as $\theta_0$ that satisfies $\theta_0(s)\in \cC(s)$ for all $s\in\cS$
\FOR{each iteration $k=0,1,\cdots$}
\STATE Evaluate $\pi(\cdot;\theta_k)$ and obtain $Q(s,a;\pi(\cdot;\theta_k))$
\FOR{each state $s\in\cS$}
\STATE Compute the Frank-Wolfe update direction by $c_{k}(s)=\argmax_{c \in \cC(s)}\inp{c}{\nabla_{a} Q(s,a;\pi(\cdot;\theta_k))}$
\STATE $g_{k}(s)=\inp{c_k(s)-\theta_k(s)}{\nabla_{a} Q(s,a;\pi(\cdot;\theta_k))}$
\STATE $\alpha_k(s)=\frac{(1-\gamma)\mu_{\min}}{L D_s^2}g_k(s)$
\STATE $\theta_{k+1}(s)=\theta_k(s)+\alpha_k(s) (c_k(s)-\theta_k(s))$
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
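For illustration, one FWPO iteration can be sketched as below, assuming oracles \texttt{grad\_Q(s, a)} returning $\nabla_a Q(s,a;\pi(\cdot;\theta_k))$ and \texttt{fw\_vertex(s, g)} solving $\argmax_{c\in\cC(s)}\inp{c}{g}$ (e.g., by an LP solver when $\cC(s)$ is a polytope); this is a hedged sketch rather than our actual implementation:
\begin{verbatim}
# One FWPO iteration over a finite state space; theta maps each state to a
# numpy action vector, D[s] is diam(C(s)), L is the smoothness constant.
def fwpo_iteration(theta, states, grad_Q, fw_vertex, D, L, gamma, mu_min):
    new_theta = dict(theta)
    for s in states:
        g = grad_Q(s, theta[s])            # grad_a Q(s, a) at a = theta_k(s)
        c = fw_vertex(s, g)                # argmax_{c in C(s)} <c, g>
        gap = g @ (c - theta[s])           # state-wise Frank-Wolfe gap g_k(s)
        alpha = (1 - gamma) * mu_min * gap / (L * D[s] ** 2)
        new_theta[s] = theta[s] + alpha * (c - theta[s])
    return new_theta
\end{verbatim}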
\begin{remark}
\label{remark:FWPO}
\normalfont One salient feature of FWPO is that the policy update in (\ref{eq:FW direction in tabular case})-(\ref{eq:FW update in tabular case}) is done by searching for feasible update directions based on $\nabla_a Q(s,a;\pi)$ on a \textit{per-state} basis with state-dependent learning rates, instead of using the standard policy gradient of the performance objective $J_{\mu}(\pi)$.
As will be seen in Section \ref{section:alg:NFWPO}, this design plays a critical role in decoupling the policy parameter update from constraint satisfaction.
Another advantage of FWPO is that it is agnostic to the discounted state visitation distribution $d_{\mu}^{\pi}$ (\textit{cf.} the deterministic policy gradient in (\ref{eq:DPG})) due to the state-wise nature.
This feature allows FWPO to be directly applicable in the off-policy settings in its original form\footnote{In the off-policy settings, the deep policy gradient approaches typically require dropping a term in the policy gradient expression to accommodate the behavior policy \citep{silver2014deterministic}.}.
\end{remark}
\begin{remark}
\label{remark:FWPO convergence}
\normalfont As the policy update under FWPO is done on a state-by-state basis instead of directly on $J_{\mu}(\pi)$, the convergence guarantees of the standard Frank-Wolfe methods do not directly apply to the objective $J_{\mu}(\pi)$ under FWPO.
From this perspective, FWPO is \textit{not} a trivial combination of the Frank-Wolfe methods and policy iteration.
\end{remark}
As suggested by Remark \ref{remark:FWPO convergence}, we proceed to establish the convergence result of FWPO.
For the convergence analysis, based on the state-wise Frank-Wolfe gaps defined in (\ref{eq:state-wise FW gap}), we define the \textit{effective Frank-Wolfe gap} of $J_{\mu}(\pi(\cdot;\theta))$ at $\theta_k$ as
\begin{equation}
\cG_k:=\Big(\sum_{s\in\cS} g_k(s)^2\Big)^{1/2}.\label{eq:effective FW gap}
\end{equation}
Note that $\cG_k=0$ if and only if $g_k(s)=0$ for all states $s$, in which case the update in (\ref{eq:FW update in tabular case}) leaves the policy unchanged since $\alpha_k(s)\propto g_k(s)$.
Hence, $\cG_k$ indicates whether $J_{\mu}(\pi(\cdot;\theta))$ has reached a stationary point.
We also define $\bar{\cG}_T:=\min_{0\leq k\leq T}\cG_k$.
To establish the convergence results, we also assume mild regularity conditions on $r$ and $p$ as follows.
\begin{definition}
\label{def:L smoothness}
\normalfont A differentiable function $f:\text{dom}f \rightarrow \bR$ is said to be \textit{$L_0$-smooth} if there exists $L_0\geq 0$ such that for any $x,y\in \text{dom}f$, $\norm{\nabla f(x) - \nabla f(y)}_2\leq L_0\norm{x-y}_2$.
\end{definition}
\textbf{Regularity Assumptions:}
\noindent \textbf{(A1)} The reward function $r(s,a)$ is differentiable and is $L_r$-smooth in $a$, for all $s,a$.
\noindent \textbf{(A2)} The transition probability $p(s'\rvert s,a)$ is twice differentiable and $L_p$-smooth in $a$, for all $s,s',a$. Moreover, $p(s'\rvert s,a)$ satisfies $\sup_{s,a,s'}\norm{\nabla_{a}p(s'\rvert s,a)}_{2}< C_p$.
As the first step, we introduce the following proposition on the smoothness of the performance objective $J_\mu(\pi(\cdot; \theta))$.
Notably, given the regularity assumptions of $r$ and $p$ in action, it remains non-trivial to establish the smoothness of $J_{\mu}(\pi(\cdot; \theta))$ in $\theta$ due to the multi-step compound effect of the changes in policy parameters on the value functions.
\begin{prop}
\label{PROP: L-SMOOTH OF J}
Under the regularity assumptions (A1)-(A2), there exists some constant $L>0$ such that for any restarting state distribution $\mu$, $J_\mu(\pi(\cdot;\theta))$ is $L$-smooth in $\theta$.
\end{prop}
The proof of Proposition \ref{PROP: L-SMOOTH OF J} is provided in Appendix A.1.
Now we are ready to present the convergence result.
\begin{prop}
\label{PROP:CONVERGENCE}
Under the FWPO algorithm with $\alpha_k(s)=\frac{(1-\gamma)\mu_{\min}}{L D_s^2}g_k(s)$, $\{\pi(\cdot;\theta_k)\}$ form a non-decreasing sequence of policies in the sense that $\pi(\cdot;\theta_{k+1})\geq \pi(\cdot;\theta_k)$, for all $k$.
Moreover, the effective Frank-Wolfe gap of FWPO converges to zero as $k\rightarrow \infty$, and the convergence rate can be quantified as
\begin{equation}
\sum_{k=0}^{\infty} \cG_k^2\leq \frac{2LD_{\max}^2}{(1-\gamma)^3 \mu_{\min}^2},
\end{equation}
which implies that $\bar{\cG}_T=O(T^{-1/2})$.
\end{prop}
\begin{myproof}
\normalfont Due to space limitation, we provide a sketch of proof:
(i) To show the non-decreasing property, we leverage the policy difference lemma \citep{kakade2002approximately} and verify a sufficient condition of strict policy improvement; (ii) To show the convergence result, we leverage the smoothness of the value functions as well as the objective and use the technique for convergence of non-convex optimization similar to that in \citep{reddi2016stochastic,lacoste2016convergence}; (iii) A proper learning rate can be selected by taking the smoothness conditions as well as the restarting state distribution into account.
For completeness, the detailed proof is provided in Appendix A.2.
\end{myproof}
\begin{remark}
\normalfont The style of the convergence guarantee in Proposition \ref{PROP:CONVERGENCE} is common in the analysis of gradient descent methods for non-convex smooth functions \citep{bottou2018optimization}.
Moreover, the result (i.e., convergence to a stationary point) in Proposition \ref{PROP:CONVERGENCE} resembles those of the policy gradient algorithms \citep{sutton2000policy,silver2014deterministic}, but for the action-constrained RL settings.
On the other hand, in (\ref{eq:FW direction in tabular case}), the search for the update direction requires the gradient of the $Q$-function.
In practice, it may not be feasible to obtain the exact $Q$-function, and a value function approximator can be used instead; a sufficiently accurate critic can then be expected to provide a sufficiently good update direction.
\end{remark}
\begin{comment}
\subsection{Reinterpreting DDPG via Regression}
\label{section:alg:reinterpret DDPG}
To extend the algorithm from the tabular case to the general case, we first reinterpret the DDPG from a state-wise perspective.
Let $\bar{\theta}$ and $\bar{\phi}$ be the current parameters of the actor and the critic, respectively.
As proposed in \citep{lillicrap2016continuous}, the policy update under DDPG is done by approximating the true gradient $\nabla_{\theta}J_\mu(\pi(\cdot;{\theta}))$ by the sample-based gradient $\nabla_{\theta}\hat{J}_\mu(\pi(\cdot;{\theta}))\approx \nabla_{\theta}J_\mu(\pi(\cdot;{\theta}))$ as
\begin{align}
&\nabla_{\theta} \hat{J}_\mu(\pi(\cdot;{\theta}))=\frac{1}{\lvert \cB\rvert}\sum_{s\in\cB}\nabla_{a}Q(s,a; \phi)\rvert_{a=\pi(s;\bar{\theta})}\nabla_{\theta}\pi(s;\theta),\label{eq:sample-based DPG}
\end{align}
where $\cB$ denotes a mini-batch of states drawn from the replay buffer. Note that this update rule can be reinterpreted from the perspective of regression by the following steps:
\begin{itemize}[leftmargin=*]
\item \textbf{Target actions.} For each $s$ in the mini-batch $\cB$, compute the target action at state $s$ under the guidance of the critic
\begin{equation}
a^{*}_s=\pi(s;\bar{\theta})+\eta_{1} \nabla_a Q(s,a;\bar{\phi})\rvert_{a=\pi(s;\bar{\theta})},
\end{equation}
where $\eta_1>0$ is the step size.
\item \textbf{Loss function.} Construct a loss function $L(\theta;\bar{\theta})$ as the mean-squared error (MSE) between the actions of the current policy and the target actions, i.e.,
\begin{align}
&\cL(\theta;\bar{\theta})=\frac{1}{2\lvert \cB\rvert}\sum_{s\in\cB}\big(\pi(s;\theta)- a^{*}_s\big)^2.\label{eq:DDPG MSE loss}
\end{align}
\item \textbf{Gradient update.} Accordingly, update the policy parameter by minimizing the MSE loss by applying gradient descent for one step, i.e.,
\begin{equation}
\theta\leftarrow\theta - \eta_{2} \nabla_\theta \cL(\theta;\bar{\theta}).\label{eq:DDPG MSE update}
\end{equation}
\end{itemize}
We can observe that the update scheme characterized by (\ref{eq:DDPG MSE loss})-(\ref{eq:DDPG MSE update}) is equivalent to the DDPG update with a learning rate of $\eta_1\eta_2$.
Therefore, DDPG can be viewed as an actor-critic algorithm where both the actor and critic are trained by using regression as subroutines.
Inspired by this interpretation, we present how to extend FWPO to the more general policy parameterization in Section \ref{section:alg:NFWPO}.
\end{comment}
\subsection{Neural Frank-Wolfe Policy Optimization (NFWPO)}
\label{section:alg:NFWPO}
In this section, we formally present the
proposed NFWPO algorithm for general parametric policies for action-constrained RL.
As highlighted in Section \ref{section:intro}, we propose to decouple constraint satisfaction from the policy parameter update.
Specifically, to accommodate the action constraints, we extend the state-wise Frank-Wolfe subroutine to the general parametric policies.
One inherent challenge of such extension is that the Frank-Wolfe method searches for an update direction within the feasible set by nature.
However, under neural parameterization, an action produced by the neural network is not guaranteed to stay in the feasible action set.
To address this, we propose to incorporate a projection step into the state-wise Frank-Wolfe subroutine.
Define a projection operator as
\begin{equation}
\Pi_{\cC(s)}(z)=\argmin_{y\in \cC(s)}\norm{y-z}_2.
\end{equation}
For ease of exposition, in the sequel we call the input $z$ a \textit{pre-projection action} and $\Pi_{\cC(s)}(z)$ a \textit{post-projection action}.
NFWPO adopts the actor-critic architecture.
Let $\bar{\theta}$ and $\bar{\phi}$ be the current parameters of the actor and the critic, respectively.
The main features of NFWPO are captured by the actor part as below.
\begin{itemize}[leftmargin=*]
\item \textbf{Derive reference actions via state-wise Frank-Wolfe.} For each $s$ in the mini-batch $\cB$, NFWPO uses Frank-Wolfe to compute the reference action at each state $s$ as
\begin{equation}
\tilde{a}_s=\Pi_{\cC(s)}(\pi(s;\bar{\theta}))+{\alpha} \big(\bar{c}(s)-\Pi_{\cC(s)}(\pi(s;\bar{\theta}))\big),\label{eq:FW target action}
\end{equation}
where $\alpha$ is the learning rate of Frank-Wolfe and
\begin{align}
\bar{c}(s)&=\argmax_{c \in \cC(s)}\inp{c}{\nabla_{a} Q(s,a;\bar{\phi})\rvert_{a=\Pi_{\cC(s)}(\pi(s;\bar{\theta}))}}.\label{eq:FW direction}
\end{align}
(Note that the projection $\Pi_{\cC(s)}(\cdot)$ is only for generating feasible actions and does not require backpropagation.)
\item \textbf{Construct an MSE loss function.} NFWPO constructs a loss function $\cL_{\text{NFWPO}}(\theta;\bar{\theta})$ as the MSE between the actions of the current policy and the reference actions, i.e.,
\begin{align}
\cL_{\text{NFWPO}}(\theta;\bar{\theta})=\sum_{s\in\cB}\big(\pi(s;\theta)- \tilde{a}_s\big)^2.\label{eq:NFWPO MSE loss}
\end{align}
\item \textbf{Update policy by gradient descent.} NFWPO updates the policy parameter by minimizing the MSE loss in (\ref{eq:NFWPO MSE loss}) by using gradient descent for one step, i.e.,
\begin{equation}
\theta\leftarrow\theta - {\beta}\nabla_\theta \cL_{\text{NFWPO}}(\theta;\bar{\theta}).\label{eq:NFWPO MSE update}
\end{equation}
\end{itemize}
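A condensed sketch of this actor update is given below (our illustration, assuming a linear feasible set $\cC(s)=\{a: Aa\leq b\}$, a PyTorch actor \texttt{pi} and critic \texttt{Q}, and cvxpy for the projection and the linear subproblem; the experiments in Section \ref{section:exp} use the solver setup described there):
\begin{verbatim}
# Sketch of the NFWPO reference action for one state s with C(s) = {a : A a <= b}.
import cvxpy as cp
import numpy as np
import torch

def l2_project(z, A, b):
    y = cp.Variable(len(z))
    cp.Problem(cp.Minimize(cp.sum_squares(y - z)), [A @ y <= b]).solve()
    return y.value                                   # Pi_{C(s)}(z)

def reference_action(s, pi, Q, A, b, alpha=0.05):
    with torch.no_grad():
        raw = pi(s).numpy()                          # pre-projection action pi(s; theta)
    a0 = l2_project(raw, A, b)
    a = torch.tensor(a0, dtype=torch.float32, requires_grad=True)
    grad_a = torch.autograd.grad(Q(s, a), a)[0].numpy()
    c = cp.Variable(len(a0))
    cp.Problem(cp.Maximize(grad_a @ c), [A @ c <= b]).solve()
    return a0 + alpha * (c.value - a0)               # feasible reference action

# The actor is then updated by one gradient step on the MSE between pi(s; theta)
# and these (fixed) reference actions, exactly as in a regression problem.
\end{verbatim}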
On the other hand, the critic of NFWPO can be based on any standard policy evaluation technique. For ease of exposition, for NFWPO, we use the same critic as the vanilla DDPG (as described in Section \ref{section:model:DPG}). The detailed pseudo code of NFWPO is provided in the supplementary material.
Notably, similar to (\ref{eq:FW direction in tabular case}), NFWPO only uses $\nabla_a Q(s,a;\bar{\phi})$ for deriving reference actions, \textit{without} using the deterministic policy gradient in (\ref{eq:DPG}).
This design allows NFWPO to decouple constraint satisfaction in (\ref{eq:FW target action})-(\ref{eq:FW direction}) from the parameter update in (\ref{eq:NFWPO MSE loss})-(\ref{eq:NFWPO MSE update}).
As highlighted in Section \ref{section:intro}, this decoupling obviates the need for the gradient of a projection layer and hence automatically avoids the zero-gradient issue.
Moreover, below we show that DDPG is actually a special case of NFWPO when there are no action constraints. The proof is provided in Appendix B.
\begin{prop}
\label{PROP:NFWPO DDPG}
If there are no action constraints, then the policy update scheme of NFWPO in (\ref{eq:FW target action})-(\ref{eq:NFWPO MSE update}) is equivalent to the vanilla DDPG by \citep{lillicrap2016continuous}.
\end{prop}
\begin{remark}
\normalfont While NFWPO leverages a projection step in (\ref{eq:FW target action}), this projection step is only for deriving reference actions and does not take part in the policy parameter update.
As a result, NFWPO does not require backpropagation of the projection step (as shown in (\ref{eq:FW target action})-(\ref{eq:NFWPO MSE update})) and therefore automatically avoids the zero-gradient issue.
Hence, NFWPO is essentially different from the existing solutions that combine DDPG with a projection layer for end-to-end training \citep{pham2018optlayer,dalal2018safe}.
\end{remark}
\label{section:NUM}
\section{Experimental Results}
\label{section:exp}
In this section, we empirically evaluate FWPO and NFWPO in various real-world applications, including bike sharing systems, communication networks, and continuous control in MuJoCo.
We compare the proposed algorithms against the following popular benchmark methods:
\begin{itemize}[leftmargin=*]
\item \textbf{DDPG+Projection}: The training procedure is identical to the vanilla DDPG \citep{lillicrap2016continuous} except that the action is post-processed by the $L_2$-projection operator $\Pi_{\cC(s)}(\cdot)$ before being applied to the environment.
\item \textbf{DDPG+RewardShaping}: Built on DDPG+Projection, this algorithm adds the $L_2$-norm between the pre-projection and post-projection actions as a penalty to the intrinsic reward.
\item \textbf{DDPG+OptLayer}: This design uses a differentiable projection layer, namely the OptLayer, that supports end-to-end training via gradient descent \citep{pham2018optlayer}.
\end{itemize}
Moreover, for the projection step (without the need of backpropagation) required by DDPG+Projection, DDPG+RewardShaping, and NFWPO, we implement this functionality on the Gurobi optimization solver \citep{gurobi}.
Therefore, the post-projection actions are guaranteed to satisfy the action constraints for all the algorithms.
For each task, each algorithm is trained under the common set of 5 random seeds. Each evaluation consists of 10 episodes, and we report the average performance along with the standard deviation in Figures \ref{fig:BSS-3}-\ref{fig:Halfcheetah all}.
We also summarize the average return over the final 10 evaluations in Table \ref{tab:expResult} and Table \ref{tab:mujocoResult}.
The detailed training setup can be found in Appendix D. The code of our experiments is available \footnote{https://github.com/upupsheep/NFWPO\_Final\_Code}.
\subsection{\textbf{Bike Sharing Systems}}
\label{section:exp:BSS}
We use the open-source BSS simulator\footnote{BSS: https://github.com/bhatiaabhinav/gym-BSS}, which was originally proposed by \citep{ghosh2017incentivizing} and later used for evaluating action-constrained RL by \citep{bhatia2019resource}.
In a bike-sharing problem, there are $m$ bikes and $n$ stations, each of which has a pre-determined bike storage capacity $C$.
An action is to allocate $m$ bikes to $n$ stations under random demands.
The reward signal consists of three parts: (i) Moving cost: the cost of moving one or multiple bikes from one station to another; (ii) Lost-demand cost: the cost of unserved demand due to bike outage. (iii) Overflow cost: the cost incurred when the number of bikes in one station exceeds its capacity.
\textbf{Evaluating FWPO.}
Since the bike sharing environment has a finite state space, we first use it to evaluate FWPO against the baseline methods, all with tabular policy parameterization.
For the action value function, we use the same $Q$-learning-like critic as the vanilla DDPG for all the algorithms.
A medium-sized system with $n=3$, $m=90$, and $C=35$ is chosen, which allows us to find the optimal policy analytically.
There are two types of constraints: (i) Global constraint: all the action entries shall sum to $90$; (ii) Local constraints: each entry of the action shall be between $0$ and $35$.
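As an illustration (using cvxpy here rather than the Gurobi setup mentioned earlier), the $L_2$-projection onto this feasible set can be computed as follows:
\begin{verbatim}
# L2-projection onto the BSS-3 feasible set: entries sum to 90 and lie in [0, 35].
import cvxpy as cp
import numpy as np

def project_bss(z, total=90, cap=35):
    a = cp.Variable(len(z))
    constraints = [cp.sum(a) == total, a >= 0, a <= cap]
    cp.Problem(cp.Minimize(cp.sum_squares(a - z)), constraints).solve()
    return a.value

print(project_bss(np.array([100.0, 10.0, -5.0])))   # a feasible allocation close to z
\end{verbatim}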
Figure \ref{fig:BSS-3}(a) shows the average return of the three algorithms.
We observe that FWPO performs the best, while DDPG+Projection and DDPG+RewardShaping both suffer from slow learning.
This is also reflected by Figure \ref{fig:BSS-3}(b), which shows that FWPO converges to a near-optimal policy much faster than the baselines.
The above phenomenon is mainly due to the inaccurate policy gradient of DDPG under action constraints.
Specifically, the critics of the two baselines are trained with samples with feasible actions while the gradients $\nabla_a Q(s,a;\phi)$ are mostly evaluated at those actions outside the feasible sets.
By contrast, FWPO always stays in the feasible action sets and hence naturally avoids the issue of inaccurate gradients.
\begin{figure}
\caption{Bike sharing problem with $n=3$ (BSS-3) under tabular policies: (a) Average return over 5 random seeds; (b) $L_2$-norm between the learned policies and the optimal policy at each training step.}
\label{fig:Bike_sharing_reward}
\label{fig:Bike_sharing_norm}
\label{fig:BSS-3}
\end{figure}
\textbf{Evaluating NFWPO.}
We proceed to compare NFWPO with the other three baselines in solving a larger-scale bike-sharing problem with $m=150$, $n=5$, and $C=35$.
As shown by Figure \ref{fig:BSS-5}(a), NFWPO converges faster and achieves a larger return than the other baselines.
To better understand its behavior, Figure \ref{fig:BSS-5}(b) shows the cumulative constraint violations of the \textit{pre-projection} actions.
Interestingly, the pre-projection actions of NFWPO can largely avoid constraint violation, and thus require less help from the projection during training.
By contrast, all the baselines rely heavily on the projection step to stay feasible, because most of their pre-projection actions fail to satisfy the constraints.
We also observe that DDPG+Projection and DDPG+RewardShaping attain similar average return and frequency of violation. This is because they both produce pre-projection actions far from the feasible sets and thereby obtain similar post-projection actions.
Meanwhile, DDPG+OptLayer suffers from nearly zero learning progress due to the zero-gradient issue.
Figure \ref{fig:BSS-5}(c) compares the sample-based gradient with respect to the pre and post-OptLayer actions.
Since the gradients of the pre-OptLayer actions (green line in Figure \ref{fig:BSS-5}(c)) are mostly close to zero, the sample-based policy gradients $\nabla_\theta \hat{J}_{\mu}(\pi(\cdot;\theta))$ are therefore close to zero for most of the training steps.
As the gradients with respect to the post-OptLayer actions are always \textit{non-zero} (blue dotted line in Figure \ref{fig:BSS-5}(c)), we know that the zero-gradient issue of $\nabla_\theta \hat{J}_{\mu}(\pi(\cdot;\theta))$ indeed results from the projection layer.
This confirms that the additional OptLayer could easily lead to the zero-gradient issue and sample-inefficient training.
\begin{figure*}
\caption{Bike sharing problem with $n=5$ (BSS-5) under neural policies: (a) Average return over 5 random seeds; (b) Cumulative number of constraint violations of the pre-projection actions during training; (c) $L_2$-norm of the sample-based gradients with respect to the pre- and post-OptLayer actions of DDPG+OptLayer.}
\label{fig:Bike net reward}
\label{fig:Bike net cons}
\label{fig:Bike Grad}
\label{fig:BSS-5}
\end{figure*}
\subsection{Utility Maximization of Communication Networks}
\label{section:exp:network}
In this section, we evaluate the proposed methods over the task of utility maximization in communication networks.
We simulate the network with the open-source network simulator from PCC-RL\footnote{PCC-RL: https://github.com/PCCproject/PCC-RL} \citep{jay2019internet}.
For the network topology, we consider the classic T3 NSFNET Backbone and set the bandwidth of each link to be 50 packets per second throughout the experiments.
We generate three network flows, each of which has three candidate paths from its source to the destination.
The action is to determine the rate allocation of each flow along each candidate path.
The reward consists of three parts: (i) Throughput: the number of received packets per second; (ii) Drop rate: the number of dropped packets per second; (iii) Latency: the average latency of the packets in the last second. For each flow $i$, its immediate reward is $\log({\frac{\text{throughput}_{\bf i} }{\text{drop rate}_{\bf i}^{0.5} \times \text{latency}_{\bf i}^{1.5}} })$, which corresponds to the widely-used proportional fairness criteria \citep{kelly1997charging}.
One salient feature of a communication network is that when the total packet arrival rate of a link approaches its bandwidth, the latency will grow rapidly, and accordingly most of the packets would be dropped.
Therefore, in this environment, the action constraints correspond to the link bandwidth constraints, i.e., the total assigned packet arrival rate of each link should be bounded by 50.
Figure \ref{fig:Network all}(a) shows the training curves and indicates that NFWPO still converges fast (in about $10^5$ steps) and achieves much larger return than the baselines.
Moreover, similar to the bike-sharing problems, we see from Figure \ref{fig:Network all}(b) that most of the pre-projection actions of NFWPO already satisfy the constraints.
In this task, we find that reward shaping does help in guiding the pre-projection actions towards the feasible action sets, but only under some random seeds (hence the large variance in Figures \ref{fig:Network all}(a)-(b)).
Regarding DDPG+OptLayer, in the initial training phase, we observe that it mostly produces pre-OptLayer actions with small flow rates, which lead to a smaller number of constraint violations and moderate returns.
To achieve a higher return, DDPG+OptLayer then gradually increases the flow rates but accidentally causes more constraint violations of pre-OptLayer actions and suffers from the inaccurate gradient issue described in Section \ref{section:exp:BSS}.
Ultimately, DDPG+OptLayer can only achieve a fairly low return.
\begin{figure}
\caption{Utility maximization in NSFNET: (a) Average return over 5 random seeds; (b) Cumulative constraint violations of the pre-projection actions during training.}
\label{fig:Network_reward}
\label{fig:Network_constraint_violate}
\label{fig:Network all}
\end{figure}
\begin{table*}[!htb]
\centering
\caption{Average return over the final 10 evaluations.}
\label{tab:expResult}
\begin{tabular}{l c c c}
\hline Methods & BSS-3 & BSS-5 & NSFNET \\
\hline
NFWPO & \textbf{-1673.04} & \textbf{-15132.21} & \textbf{13770.67} \\%& \textbf{-7.15} & \textbf{6513.26} \\
DDPG+Projection & -2254.52 & -16123.48 & 1514.44\\% & -11.95 & 2746.72 \\
DDPG+Reward Shaping & -2308.00 & -16123.48 & 9010.46 \\%& -10.10 & 3065.37 \\
DDPG+OptLayer & - & -16686.04 & 1667.59 \\%& -8.33 & 1399.37 \\
\hline
\end{tabular}
\end{table*}
\begin{table*}[!htb]
\centering
\caption{Average return over the final 10 evaluations in the MuJoCo environments.}
\label{tab:mujocoResult}
\begin{tabular}{l c c}
\hline Methods & Reacher & Halfcheetah \\
\hline
NFWPO & \textbf{-4.76} & \textbf{6513.26} \\
DDPG+Projection & -11.15 & 2746.72 \\
DDPG+Reward Shaping & -8.66 & 3065.37 \\
DDPG+OptLayer & -7.25 & 1399.37 \\
SAC+Projection & -10.50 & 4874.45 \\
TRPO+Projection & -11.04 & 2247.82 \\ PPO+Projection & -10.68 & 1459.04 \\
FOCOPS+Projection & -12.23 & 1916.46 \\
\hline
\end{tabular}
\end{table*}
\subsection{MuJoCo Continuous Control Tasks}
\label{section:exp:Mujoco}
To further validate NFWPO, we consider popular continuous control tasks in MuJoCo \citep{6386109} with non-linear and state-dependent action constraints.
We further compare the proposed algorithm with important benchmark RL algorithms for MuJoCo control tasks, including PPO \citep{schulman2017proximal}, TRPO \citep{schulman2015trust}, and SAC \citep{haarnoja2018soft}.
To make the comparison even more comprehensive, we also evaluate FOCOPS \citep{zhang2020first}, which is a recent approach designed to address long-term total discounted cost constraints.
As FOCOPS is not designed for handling state-wise action constraints, we relax the action constraints into the long-term total discounted constraints required by FOCOPS.
To ensure constraint satisfaction, we use the same technique as DDPG+Projection, i.e., the actions of PPO, TRPO, SAC, and FOCOPS are post-processed by $L_2$-projection before being applied to the environment.
\noindent \textbf{Reacher with non-linear constraints.} In this task, the action space is $2$-dimensional (denoted by $u_{1},u_{2}$), and each action entry corresponds to the torque of a joint of a 2-DoF robot.
To validate the applicability of NFWPO, we impose nonlinear action constraints as: $ {u_{1}}^2+{u_{2}}^2\leq 0.05$.
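For this disc constraint the $L_2$-projection has a simple closed form, namely radial rescaling, which we sketch here for clarity:
\begin{verbatim}
# L2-projection onto the disc u_1^2 + u_2^2 <= 0.05 (radial rescaling).
import numpy as np

def project_disc(u, radius_sq=0.05):
    r = np.sqrt(radius_sq)
    norm = np.linalg.norm(u)
    return u if norm <= r else u * (r / norm)
\end{verbatim}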
From Figure \ref{fig:Reacher all}(a), we observe a similar trend that NFWPO still converges faster and achieves a larger return than the other baseline methods.
In this task, DDPG+OptLayer and DDPG+RewardShaping can achieve a return closer to NFWPO as they have fewer pre-OptLayer (pre-projection) violations, as shown in Figure \ref{fig:Reacher all}(b).
The other algorithms still perform poorly as they always produce actions far from the feasible sets and rely heavily on the projection step.
This is reflected by the fact that SAC, TRPO, and PPO violate the constraint at almost every training step, as shown in Figure \ref{fig:Reacher all}(b). Regarding FOCOPS, the constraint violations in the second half of training are fewer than in the first half,
which indicates that FOCOPS needs many more training steps to find a policy with fewer violations.
Despite this, the return of FOCOPS remains as low as that of the other benchmarks.
\begin{figure}
\caption{Reacher-v2 with a non-linear action constraint: (a) Average return over 5 random seeds; (b) Cumulative constraint violations of the pre-projection actions during training (``+P'' stands for ``+Projection'', ``+RS'' stands for ``+RewardShaping'', and ``+Opt'' stands for ``+OptLayer'').}
\label{fig:Reacher_reward}
\label{fig:Reacher_constraint_violate}
\label{fig:Reacher all}
\end{figure}
\noindent \textbf{Halfcheetah with state-dependent constraints.} In this task, an action is $6$-dimensional and is denoted by $(v_{1},\cdots,v_{6})$.
We consider a challenging scenario where the constraint is state-dependent. Specifically, the imposed constraint is $\sum_{i=1}^{6} |v_i w_i|\leq 20$, where $w_i$ denotes the angular velocity of the $i$-th joint and is part of the state. This constraint is meant to capture the limitation of total output power.
Similar to the other environments, from Figure \ref{fig:Halfcheetah all}(a)-(b), we still observe that NFWPO achieves better sample efficiency than the other baselines. Moreover, NFWPO violates the constraint for only 3\% of the steps while getting the highest return.
On the other hand, PPO, TRPO, and SAC all violate the constraint for more than 75\% of the steps. FOCOPS violates the constraint for about 15\% of the time but only achieves a fairly low return.
Again, we see that NFWPO outperforms all the baseline methods with much less constraint violation.
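For this state-dependent constraint, the $L_2$-projection used to post-process the baselines' actions can be posed as a small convex program. A minimal sketch with cvxpy is given below for illustration; the helper name and the use of the default solver are assumptions rather than the exact implementation used in the experiments:
\begin{verbatim}
import cvxpy as cp

def project_action(raw_action, angular_velocity, budget=20.0):
    """L2-projection of a 6-dim action onto {v : sum_i |v_i * w_i| <= budget},
    where w holds the joint angular velocities taken from the current state."""
    v = cp.Variable(6)
    constraints = [cp.norm1(cp.multiply(angular_velocity, v)) <= budget]
    cp.Problem(cp.Minimize(cp.sum_squares(v - raw_action)), constraints).solve()
    return v.value
\end{verbatim}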
\begin{figure}
\caption{Halfcheetah-v2 with a state-dependent action constraint: (a) Average return over 5 random seeds; (b) {Cumulative constraint violations of the pre-projection actions during training (``+P'' stands for ``+Projection'', ``+RS'' stands for ``+Reward Shaping'', and ``+Opt'' stands for ``+OptLayer'').}}
\label{fig:Halfcheetah_reward}
\label{fig:Halfcheetah_constraint_violate}
\label{fig:Halfcheetah all}
\end{figure}
\begin{comment}
\begin{figure*}
\caption{Halfcheetah-v2: (a)(b)(c) Average Return of each constraint(Halfcheetah-Cons-2.25,Halfcheetah-Cons-1,Halfcheetah-Power) (d)(e)(f)Cumulative constraint violation during training before Gaussian noise of each constraint(Halfcheetah-Cons-2.25,Halfcheetah-Cons-1,Halfcheetah-Power) }
\label{fig:Halfcheetah all}
\end{figure*}
\end{comment}
\section{Related Work}
\label{section:related}
Constrained RL problems have been extensively studied from two main perspectives.
The first category encodes the constraints via cost signals, which are incurred at each step along with the reward signals, and accordingly focuses on the average-cost constraints.
This line of research borrows a variety of ideas from the optimization literature.
For example, \citep{chow2015risk} addressed chance constraints by using a primal-dual approach to achieve a trade-off between return and risk.
Similarly, \citep{tessler2018reward} proposed Reward Constrained Policy Optimization which applied Lagrangian relaxation and converted the constraints into penalty terms in the objective.
\citep{achiam2017constrained} proposed Constrained Policy Optimization to achieve strict policy improvement under the average cost constraints by using the trust-region approach.
In \citep{chow2018lyapunov}, Lyapunov-based safe reinforcement learning was proposed to address the constraints by solving a linear program.
\citep{yang2019projection} proposed Projection-Based Constrained Policy Optimization to achieve no constraint violation by taking a projection step after the local reward improvement update.
In \citep{liu2020ipo}, Interior-point Policy Optimization was proposed to handle the average cost constraints by augmenting the objective with logarithmic barrier functions.
\citep{satija2020constrained} took a different approach by converting the trajectory-level constraints into per-step state-wise constraints and accordingly defining a safe policy improvement step.
\citep{zhang2020first} proposed an approach to solving the constrained policy optimization problem by first finding a target policy directly in the policy space and thereafter converting the target policy to a parameterized one through a projection step onto the parameter space.
Different from all the above prior works, this paper considers the RL settings with state-wise action constraints.
The second category is on the state-wise constraints that need to be satisfied on a step-by-step basis.
\citep{pham2018optlayer} studied the state-wise action constraints of robotic systems and proposed a projection-based OptLayer to enforce the constraints.
\citep{dalal2018safe} also considered state-wise safety constraints under linearization and proposed a projection-based safety layer to handle the constraints.
Similarly, \citep{bhatia2019resource} considered resource constraints and proposed variants of OptLayer to improve the computational efficiency.
\citep{shah2020solving} proposed a more efficient projection scheme for enforcing linear action constraints.
Despite the similarity in the problem setting, we take a different approach and propose a decoupling framework by leveraging Frank-Wolfe to address the action constraints and completely avoid the zero-gradient issue.
\section{Conclusion}
\label{section:conclusion}
This paper revisits action-constrained RL to tackle the zero-gradient issue and ensure zero constraint violation simultaneously.
We achieve this goal by developing a learning framework that decouples the policy parameter update from constraint satisfaction by leveraging state-wise Frank-Wolfe and a regression argument.
Our theoretical and experimental results demonstrate that the proposed learning algorithm is indeed a promising approach for action-constrained RL.
\appendix
\onecolumn
\setcounter{section}{0}
\section{Proofs of the Theoretical Results}
\label{section:appendix:proof}
In this section, we provide the proof of the convergence result of the tabular FWPO in Proposition \ref{PROP:CONVERGENCE} by leveraging the proof technique of smooth optimization.
We start by establishing the key smoothness result of Proposition \ref{PROP: L-SMOOTH OF J} and then proceed to the convergence analysis.
\subsection{Smoothness Results and Proof of Proposition \ref{PROP: L-SMOOTH OF J}}
\label{section:appendix:smoothness}
In this section, we show the smoothness results of the value functions as well as the performance objective under the regularity assumptions (A1)-(A2).
Notably, given the regularity conditions of $r$ and $p$ in action, it remains non-trivial to establish the smoothness of $V(\cdot;\theta)$ and $J_{\mu}(\pi(\cdot;\theta))$ in $\theta$ due to the multi-step compound effect of the changes in policy parameters on the value functions.
Despite this challenge, we are still able to characterize the required smoothness conditions.
Recall from Definition \ref{def:L smoothness} that a differentiable function $f:\text{dom}f \rightarrow \bR$ is $L_0$-smooth if there exists $L_0\geq 0$ such that for any $x,y\in \text{dom}f$, we have $\norm{\nabla f(x) - \nabla f(y)}_2\leq L_0\norm{x-y}_2$.
One useful property is that if $f$ is $L_0$-smooth, then $f$ satisfies
\begin{equation}
\lvert f(y)-f(x)- \inp{\nabla f(x)}{y-x}\rvert \leq \frac{L_0}{2}\norm{y-x}_2^2, \hspace{6pt} \forall x,y\in \text{dom}f.\label{eq:L-smooth bound}
\end{equation}
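For completeness, this follows from the fundamental theorem of calculus applied to $t\mapsto f(x+t(y-x))$ together with the Cauchy-Schwarz inequality (assuming, as is standard, that $\text{dom}f$ is convex):
\begin{equation*}
\lvert f(y)-f(x)- \inp{\nabla f(x)}{y-x}\rvert = \Big\lvert \int_{0}^{1}\inp{\nabla f(x+t(y-x))-\nabla f(x)}{y-x}\,dt\Big\rvert \leq \int_{0}^{1} L_0 t\norm{y-x}_2^2\,dt=\frac{L_0}{2}\norm{y-x}_2^2.
\end{equation*}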
For notational convenience, we explicitly number the state space as $\cS=\{1,\cdots,M\}$.
By slightly abusing the notation, we use $\pi$ to denote the $M\times N$ matrix whose $s$-th row is the action selected by a deterministic policy $\pi$ at state $s\in\{1,\cdots,M\}$.
Let $\bP^{\pi}$ denote an $M \times M$ matrix where the $(i,j)$-th entry is $p(j\rvert i,\pi(i))$.
Given a deterministic policy $\pi$, consider a ``perturbed'' version of the policy defined by $\pi_{\delta}=\pi+\delta \bW$, where $\delta\in\bR$ and $\bW\in\bR^{M\times N}$ is some fixed matrix with $\norm{\bW}_F=1$.
Moreover, we simplify the notations by letting $\bP(\delta)\equiv\bP^{\pi_\delta}$ and $V_{s}(\delta)\equiv V(s;\pi_{\delta})$ and then define $\bG(\delta):=(\bI_{M}-\gamma \bP(\delta))^{-1}$, where $\bI_{M}$ denotes the identity matrix of size $M\times M$.
To show the smoothness result of $V(s;\theta)$, it is sufficient to show that $\lvert\frac{d^2 V_s(\delta)}{d\delta^2}\rvert$ is bounded for any $\bW\in\bR^{M\times N}$ with $\norm{\bW}_F=1$, for any state $s$.
We first introduce several useful lemmas.
\begin{lemma}[Matrix-by-scalar derivatives]
\label{lemma:matrix derivative}
\begin{align}
\frac{d \bG(\delta)}{d\delta}&=\gamma\bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta),\label{eq:matrix derivative 1}\\
\frac{d^2 \bG(\delta)}{d\delta^2}&=2\gamma^2\bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta)+ \gamma\bG(\delta)\frac{d^2 \bP(\delta)}{d \delta^2}\bG(\delta).\label{eq:matrix derivative 2}
\end{align}
\end{lemma}
\begin{myproof}[Lemma \ref{lemma:matrix derivative}]
\normalfont By the definition of $\bG(\delta)$, we know
\begin{equation}
\bI_M=\bG(\delta)(\bI_M-\gamma \bP(\delta))=\bG(\delta)-\gamma \bG(\delta) \bP(\delta).\label{eq:I=G times Ginv}
\end{equation}
Note that the product rule of matrix-by-scalar derivative suggests that for two matrices $\bU_1(\delta),\bU_2(\delta)$, we have $\frac{d}{d\delta}(\bU_1\bU_2)=\bU_1\frac{d \bU_2}{d\delta}+\frac{d \bU_1}{d\delta}\bU_2$. By taking the matrix-by-scalar derivative with respect to $\delta$ on both sides of (\ref{eq:I=G times Ginv}) and using the product rule, we have
\begin{equation}
\frac{d \bG(\delta)}{d \delta}-\Big(\gamma \bG(\delta)\frac{d \bP(\delta)}{d \delta}+\gamma\frac{d \bG(\delta)}{d \delta}\bP(\delta)\Big)=\b0_M,\label{eq:matrix derivative 3}
\end{equation}
where $\b0_M$ denotes the $M\times M$ zero matrix.
By reorganizing the terms in (\ref{eq:matrix derivative 3}), it is straightforward to verify that (\ref{eq:matrix derivative 1}) holds.
Based on (\ref{eq:matrix derivative 1}), we can obtain the second-order derivative of $\bG(\delta)$ as
\begin{align}
\frac{d^2 \bG(\delta)}{d\delta^2}&=\gamma\frac{d}{d\delta}\Big(\bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta)\Big)\\
&=\gamma\bG(\delta)\frac{d}{d\delta}\Big(\frac{d \bP(\delta)}{d \delta}\bG(\delta)\Big)+ \gamma\frac{d \bG(\delta)}{d\delta}\Big(\frac{d \bP(\delta)}{d \delta}\bG(\delta)\Big)\\
&=2\gamma^2\bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta)+ \gamma\bG(\delta)\frac{d^2 \bP(\delta)}{d \delta^2}\bG(\delta).
\end{align}
Hence, we conclude that (\ref{eq:matrix derivative 1}) and (\ref{eq:matrix derivative 2}) indeed hold. \MYQED
\end{myproof}
\begin{lemma}
\label{lemma:infinite norm of G(z)x}
For any $x\in\bR^{M}$, $\bG(\delta)$ satisfies that
\begin{equation}
\norm{\bG(\delta)x}_{\infty}\leq \frac{1}{1-\gamma}\norm{x}_{\infty}.
\end{equation}
\end{lemma}
\begin{myproof}[Lemma \ref{lemma:infinite norm of G(z)x}]
\normalfont Note that $\bG(\delta)=(\bI_{M}-\gamma \bP(\delta))^{-1}=\sum_{m=0}^{\infty}\gamma^m \bP(\delta)^{m}$.
Given that $\bP(\delta)$ is a stochastic matrix where all the elements are non-negative and each row sums to 1, we know $\bP(\delta)^m$ is also a stochastic matrix, for any $m\in \mathbb{N}$.
For each $i$, let $e_i$ denote an $M$-dimensional one-hot vector with the $i$-th entry equal to $1$.
Then, it is easy to verify that $\norm{\bG(\delta)e_i}_{\infty}\leq\frac{1}{1-\gamma}$, for any $i\in \{1,\cdots,M\}$.
This thereby implies that $\norm{\bG(\delta)x}_{\infty}\leq \frac{1}{1-\gamma}\norm{x}_{\infty}$, for any $x\in\bR^M$.\MYQED
\end{myproof}
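As a quick numerical sanity check (illustrative only, not part of the proof), the bound in Lemma \ref{lemma:infinite norm of G(z)x} can be verified for a randomly generated stochastic matrix:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
M, gamma = 8, 0.9
P = rng.random((M, M))
P /= P.sum(axis=1, keepdims=True)          # random row-stochastic matrix
G = np.linalg.inv(np.eye(M) - gamma * P)   # G(delta) for a fixed delta

x = rng.standard_normal(M)
assert np.linalg.norm(G @ x, np.inf) <= np.linalg.norm(x, np.inf) / (1 - gamma) + 1e-9
\end{verbatim}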
\begin{lemma}
\label{lemma:norm bound of derivative of P}
Under the regularity assumptions (A1)-(A2), there exist constants $C_1,C_2>0$ such that for any $\bW\in\bR^{M\times N}$ with $\norm{\bW}_F=1$ and for any $x\in \bR^{M}$, we have
\begin{align}
\norm{\frac{d \bP(\delta)}{d\delta}x}_{\infty}&< C_1 \norm{x}_{\infty},\label{eq:norm bound of derivative of P}\\
\norm{\frac{d^2 \bP(\delta)}{d\delta^2}x}_{\infty}&< C_2 \norm{x}_{\infty}.\label{eq:norm bound of 2nd derivative of P}
\end{align}
\end{lemma}
\begin{myproof}[Lemma \ref{lemma:norm bound of derivative of P}]
\normalfont For convenience, we use $\bW_i$ to denote the $i$-th row of $\bW$. For any pair of $i,j\in \{1,\cdots,M\}$, the $(i,j)$-th element of $\frac{d \bP(\delta)}{d \delta}$ evaluated at $\delta=0$ satisfies
\begin{align}
\bigg\lvert\Big[\frac{d \bP(\delta)}{d \delta}\Big\rvert_{\delta=0}\Big]_{(i,j)}\bigg\rvert&=\bigg\lvert\lim_{h\rightarrow 0}\frac{p(j\rvert i, \pi_h(i))-p(j\rvert i,\pi(i))}{h}\bigg\rvert\label{eq:norm bound of derivative of P 1}\\
&=\bigg\lvert\inp{\nabla_{a}p(j\rvert i,a)\big\rvert_{a=\pi(i)}}{\bW_i^{\top}}\bigg\rvert\label{eq:norm bound of derivative of P 2}\\
&\leq \norm{\nabla_{a}p(j\rvert i,a)\big\rvert_{a=\pi(i)}}_2 \cdot\norm{\bW_i}_2\label{eq:norm bound of derivative of P 3}\\
&< C_p,\label{eq:norm bound of derivative of P 4}
\end{align}
where (\ref{eq:norm bound of derivative of P 2}) follows from the differentiability of $p$ and the property of directional derivatives, (\ref{eq:norm bound of derivative of P 3}) holds by the Cauchy-Schwarz inequality, and (\ref{eq:norm bound of derivative of P 4}) follows from the regularity assumptions and that $\norm{\bW}_F=1$.
Then, by (\ref{eq:norm bound of derivative of P 4}), it is straightforward to verify that (\ref{eq:norm bound of derivative of P}) indeed holds.
Regarding (\ref{eq:norm bound of 2nd derivative of P}), we first let $\bH_{a}(i,j)$ denote the Hessian of $p(j\rvert i,a)$ with respect to $a$.
The second directional derivative $\frac{d^2 \bP(\delta)}{d \delta^2}$ satisfies that
\begin{align}
\bigg\lvert\Big[\frac{d^2 \bP(\delta)}{d \delta^2}\Big\rvert_{\delta=0}\Big]_{(i,j)}\bigg\rvert&=\Big\lvert \bW_i \bH_{a}(i,j)\big\rvert_{a=\pi(i)} \bW_i^{\top}\Big\rvert\label{eq:norm bound of 2nd derivative of P 1}\\
&\leq \norm{\bW_i}_2\cdot \norm{\bH_{a}(i,j)\big\rvert_{a=\pi(i)} \bW_i^{\top}}_2\label{eq:norm bound of 2nd derivative of P 2}\\
&\leq \norm{\bH_{a}(i,j)\big\rvert_{a=\pi(i)}}_{F} \label{eq:norm bound of 2nd derivative of P 3}\\
&< L_p M^2,\label{eq:norm bound of 2nd derivative of P 4}
\end{align}
where (\ref{eq:norm bound of 2nd derivative of P 1}) holds by the basic property of second directional derivatives, (\ref{eq:norm bound of 2nd derivative of P 2}) is due to the Cauchy-Schwarz inequality, (\ref{eq:norm bound of 2nd derivative of P 3}) follows from $\norm{\bW_i}_2\leq \norm{\bW}_F=1$ and $\norm{\bH_{a}(i,j)\rvert_{a=\pi(i)}}_2\leq \norm{\bH_{a}(i,j)\rvert_{a=\pi(i)}}_F$, and (\ref{eq:norm bound of 2nd derivative of P 4}) holds by the $L_p$-smoothness of $p$.
Therefore, by (\ref{eq:norm bound of 2nd derivative of P 4}) we conclude that (\ref{eq:norm bound of 2nd derivative of P}) holds.
\MYQED
\end{myproof}
Now we are ready to establish the smoothness conditions of the value functions.
Define $L_{Q}=L_r+M L_p\frac{\gamma}{1-\gamma}$.
\begin{lemma}
\label{lemma:Q and V L-smooth}
Under the regularity assumptions (A1)-(A2) and tabular direct policy parameterization, we have the following smoothness properties of $Q(s,a;\pi)$ and $V(s;\pi)$:
\noindent (i) Under any fixed policy $\pi$, $Q(s,a;\pi)$ is $L_{Q}$-smooth in action $a$, for any state $s\in \cS$.
\noindent (ii) There exists some constant $L>0$ such that $V(s;\pi_\theta)$ is $L$-smooth in the policy parameters $\theta$, for any state $s\in\cS$.
\end{lemma}
\begin{myproof}[Lemma \ref{lemma:Q and V L-smooth}]
\normalfont
\underline{\textbf{For (i):}} Recall that the action-value function can be expressed as
\begin{equation}
Q(s,a;\pi)=r(s,a)+\gamma \sum_{s'\in\cS}p(s'\rvert s,a)V(s';\pi).\label{eq:Q as r+V}
\end{equation}
By taking the derivative of (\ref{eq:Q as r+V}) with respect to $a$, we have
\begin{equation}
\nabla_{a} Q(s,a;\pi)=\nabla_a r(s,a)+\gamma \sum_{s'\in\cS}\nabla_{a}p(s'\rvert s,a)V(s';\pi).\label{eq:grad of Q as grad of r + grad of V}
\end{equation}
By the regularity assumptions of $r$ and $p$, we know that for any state $s$ and any two actions $a'$ and $a''$,
\begin{align}
&\norm{\nabla_{a}Q(s,a;\pi)\rvert_{a=a'} - \nabla_{a}Q(s,a;\pi)\rvert_{a=a''}}_2\label{eq:grad of Q 1}\\
&\leq \norm{\nabla_{a} r(s,a)\rvert_{a=a'}-\nabla_{a} r(s,a)\rvert_{a=a''}}_2+\gamma \sum_{s'\in\cS}\Big(\norm{\nabla_{a}p(s'\rvert s,a)\rvert_{a=a'}-\nabla_{a}p(s'\rvert s,a)\rvert_{a=a''}}_2\cdot\lvert V(s';\pi)\rvert\Big)\label{eq:grad of Q 2}\\
&\leq L_r \norm{a'-a''}_2+M L_p \frac{\gamma}{1-\gamma}\norm{a'-a''}_2,\label{eq:grad of Q 3}
\end{align}
where (\ref{eq:grad of Q 2}) follows from (\ref{eq:grad of Q as grad of r + grad of V}) and the triangle inequality, and (\ref{eq:grad of Q 3}) holds by the regularity assumptions of $r$ and $p$ as well as the fact that $\lvert V(s';\pi)\rvert\leq \frac{1}{1-\gamma}$.
This implies that $Q(s,a;\pi)$ is $L_Q$-smooth in $a$.
\underline{\textbf{For (ii):}} We adapt the proof technique in \citep{agarwal2019theory,agarwal2020optimality} and show the smoothness result of $V(s;\pi_{\theta})$ in $\theta$.
Recall that by the Bellman equation, under a deterministic policy, we have
\begin{equation}
V(s;\pi)=r(s,\pi(s))+\gamma \sum_{s'\in\cS}p(s'\rvert s,\pi(s))V(s';\pi), \hspace{6pt}\forall s\in\cS.\label{eq:V as r+V}
\end{equation}
Let $r^{\pi}$ be an $M$-dimensional column vector where the $i$-th entry is $r(i,\pi(i))$, and $V^{\pi}$ denote an $M$-dimensional column vector where the $i$-th entry is $V(i;\pi)$.
Then, we can rewrite (\ref{eq:V as r+V}) in matrix form as
\begin{equation}
V^{\pi}=r^{\pi}+\gamma \bP^{\pi}V^{\pi}.\label{eq:V as r+V in matrix form}
\end{equation}
By (\ref{eq:V as r+V in matrix form}), we immediately know that
\begin{align}
V^{\pi}=(\bI_M-\gamma \bP^{\pi})^{-1} r^{\pi}.
\end{align}
For each $s\in\{1,\cdots,M\}$, let $e_s$ denote an $M$-dimensional one-hot vector with the $s$-th entry equal to $1$. Hence, for each $s\in\cS$, we know $V(s;\pi)=e_s^{\top}(\bI_M-\gamma \bP^{\pi})^{-1} r^{\pi}$.
By Lemma \ref{lemma:matrix derivative}, we have
\begin{align}
\frac{d V_s(\delta)}{d\delta}&=\gamma e_s^{\top} \bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta)r^{\pi},\label{eq:1st-order derivative of V}\\
\frac{d^2 V_s(\delta)}{d\delta^2}&=2 \gamma^2 e_s^{\top} \bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta) r^{\pi} +\gamma e_s^{\top} \bG(\delta)\frac{d^2 \bP(\delta)}{d \delta^2}\bG(\delta)r^{\pi}. \label{eq:2nd-order derivative of V}
\end{align}
Then, for any $\bW \in \bR^{M\times N}$ with $\norm{\bW}_F=1$, we have
\begin{align}
\Big\lvert \frac{d V_s(\delta)}{d\delta}\Big\rvert &\leq \gamma \norm{\bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta)r^{\pi}}_{\infty}\label{eq:norm bound of derivaitve of V 1}\\
&\leq \frac{\gamma C_1}{(1-\gamma)^2},\label{eq:norm bound of derivaitve of V 2}
\end{align}
where (\ref{eq:norm bound of derivaitve of V 1}) is a direct result of (\ref{eq:1st-order derivative of V}), and (\ref{eq:norm bound of derivaitve of V 2}) follows from Lemmas \ref{lemma:infinite norm of G(z)x}-\ref{lemma:norm bound of derivative of P} and $\norm{r^{\pi}}_{\infty}\leq 1$.
Similarly, we know
\begin{align}
\Big\lvert \frac{d^2 V_s(\delta)}{d\delta^2}\Big\rvert &\leq 2\gamma^2 \norm{\bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta)\frac{d \bP(\delta)}{d \delta}\bG(\delta) r^{\pi}}_{\infty}+ \gamma\norm{\bG(\delta)\frac{d^2 \bP(\delta)}{d \delta^2}\bG(\delta)r^{\pi}}_{\infty}\label{eq:norm bound of 2nd derivaitve of V 1}\\
&\leq \frac{2\gamma^2 C_1^2}{(1-\gamma)^3}+\frac{\gamma C_2}{(1-\gamma)^2},\label{eq:norm bound of 2nd derivaitve of V 2}
\end{align}
where (\ref{eq:norm bound of 2nd derivaitve of V 1}) is a direct result of (\ref{eq:2nd-order derivative of V}), and (\ref{eq:norm bound of 2nd derivaitve of V 2}) holds by Lemmas \ref{lemma:infinite norm of G(z)x}-\ref{lemma:norm bound of derivative of P} and $\norm{r^{\pi}}_{\infty}\leq 1$.
Since (\ref{eq:norm bound of 2nd derivaitve of V 1})-(\ref{eq:norm bound of 2nd derivaitve of V 2}) hold for any $\bW \in \bR^{M\times N}$ with $\norm{\bW}_F=1$, we know that for every state $s$, $V(s;\pi_{\theta})$ is $L$-smooth in $\theta$, where $L=\frac{2\gamma^2 C_1^2}{(1-\gamma)^3}+\frac{\gamma C_2}{(1-\gamma)^2}$.
\MYQED
\end{myproof}
Now, we are ready to prove Proposition \ref{PROP: L-SMOOTH OF J}. For convenience, we restate Proposition \ref{PROP: L-SMOOTH OF J} below.
\begin{propstar}
Under the regularity assumptions (A1)-(A2), there exists some constant $L>0$ such that for any restarting state distribution $\mu$, $J_\mu(\pi(\cdot;\theta))$ is $L$-smooth in $\theta$.
\end{propstar}
\begin{myproof}[Proposition \ref{PROP: L-SMOOTH OF J}]
\normalfont By (ii) in Lemma \ref{lemma:Q and V L-smooth} and the definition that $J_{\mu}(\pi_\theta)=\E_{s\sim \mu}[V(s;\pi_\theta)]$, we know $J_{\mu}(\pi_\theta)$ is $L$-smooth, for any restarting state distribution $\mu$. \MYQED
\end{myproof}
\subsection{Proof of Proposition \ref{PROP:CONVERGENCE}}
\label{section:appendix:convergence proof}
For convenience, we restate Proposition \ref{PROP:CONVERGENCE} as follows.
\begin{propstar}
Under the FWPO algorithm with $\alpha_k(s)=\frac{(1-\gamma)\mu_{\min}}{L D_s^2}g_k(s)$, $\{\pi(\cdot;\theta_k)\}$ form a non-decreasing sequence of policies in the sense that $\pi(\cdot;\theta_{k+1})\geq \pi(\cdot;\theta_k)$, for all $k$.
Moreover, the effective Frank-Wolfe gap of FWPO converges to zero as $k\rightarrow \infty$, and the convergence rate can be quantified as
\begin{equation}
\sum_{k=0}^{\infty} \cG_k^2\leq \frac{2LD_{\max}^2}{(1-\gamma)^3 \mu_{\min}^2},
\end{equation}
which implies that $\bar{\cG}_T=O(T^{-1/2})$.
\end{propstar}
To show the non-decreasing property in Proposition \ref{PROP:CONVERGENCE}, we summarize useful properties on policy improvement as follows.
We use $A(\cdot,\cdot;\pi)$ to denote the advantage function of a policy $\pi$.
\begin{lemma}[Performance difference lemma, \citep{kakade2002approximately}]
\label{lemma:performance difference}
For any two policies $\pi$ and $\pi'$ and any starting state $s_0\in\cS$, the performance difference between the two policies satisfies
\begin{equation}
V(s_0;\pi')-V(s_0;\pi)=\frac{1}{1-\gamma}\E_{s\sim d_{s_0}^{\pi'},a\sim\pi'(\cdot|s)}\big[A(s,a;\pi)\big],
\end{equation}
where $d_{s_0}^{\pi'}$ denotes the discounted state visitation distribution induced by $\pi'$ when starting from $s_0$.
\end{lemma}
By Lemma \ref{lemma:performance difference}, we can directly obtain a sufficient condition of state-wise policy improvement.
\begin{corollary}
\label{corollary:state-wise policy improvement}
For any two policies $\pi$ and $\pi'$, we have $\pi'\geq \pi$ if the following condition holds for every state $s\in \cS$:
\begin{equation}
\E_{a\sim \pi'}[A(s,a;\pi)]\geq \E_{a\sim \pi}[A(s,a;\pi)]=0.
\end{equation}
\end{corollary}
Now we are ready to prove Proposition \ref{PROP:CONVERGENCE}.
\begin{myproof}[Proposition \ref{PROP:CONVERGENCE}]
\normalfont For ease of notation, in this proof we use $\pi_k\equiv \pi(\cdot;\theta_k)$ and $\pi_k({s})\equiv \pi({s};\theta_k)$.
Recall from (\ref{eq:state-wise FW gap}) that under the policy $\pi_k$, the state-wise Frank-Wolfe gap is defined as $g_{k}(s):=\inp{c_k(s)-\theta_{k}(s)}{\nabla_{{a}} Q({s},{a};\pi_k)\rvert_{{a}=\pi_{k}({s})}}$.
By Lemma \ref{lemma:Q and V L-smooth}, $Q(s,a;\pi)$ is $L_{Q}$-smooth in $a$, for any state $s$ and any policy $\pi$; without loss of generality, we take the constant $L$ used below to satisfy $L\geq L_{Q}$, so that $Q(s,\cdot\,;\pi)$ is also $L$-smooth in $a$. Then, under FWPO, we have
\begin{align}
Q({s},\pi_{k+1}({s});\pi_k)&\geq Q({s},\pi_{k}({s});\pi_k)+\alpha_{k}(s)\inp{{c}_{k}(s)-\theta_{k}(s)}{\nabla_{{a}} Q({s},{a};\pi_k)\rvert_{{a}=\pi_{k}({s})}}-\frac{L}{2}\alpha_{k}(s)^2\norm{{c}_{k}(s)-\theta_{k}(s)}_2^2,\label{eq:Q improvement 1}\\
&\geq Q({s},\pi_{k}({s});\pi_k)+{\alpha_{k}(s)g_{k}(s)-\frac{L}{2}\alpha_{k}(s)^2 D_{s}^2},\label{eq:Q improvement 2}
\end{align}
where (\ref{eq:Q improvement 2}) follows from the definitions of the state-wise Frank-Wolfe gap and the diameter $D_s$.
It is easy to verify that ${\alpha_{k}(s)g_{k}(s)-\frac{L}{2}\alpha_{k}(s)^2 D_{s}^2}$ is positive for all $\alpha_{k}(s)\in (0,\frac{2g_{k}(s)}{LD_s^2})$ and attains a maximum of $\frac{g_{k}(s)^2}{2L D_s^2}$ at $\alpha_{k}(s)=\frac{g_{k}(s)}{LD_s^2}$.
Therefore, if the learning rate $\alpha_{k}(s)\in(0,\frac{2g_{k}(s)}{LD_s^2})$, then $Q({s},\pi_{k+1}({s});\pi_k)>Q({s},\pi_{k}({s});\pi_k)$ and hence
$A({s},\pi_{k+1}({s});\pi_k)>0$.
By Corollary \ref{corollary:state-wise policy improvement}, we know $\pi_{k+1}\geq \pi_{k}$, for all $k$.
Hence, $\{\pi(\cdot;\theta_k)\}$ indeed form a non-decreasing sequence of policies.
Next, we characterize the rate of convergence of the objective $J_{\mu}(\pi_{k})$ of the proposed FWPO algorithm.
By Proposition \ref{PROP: L-SMOOTH OF J}, we know the objective $J_{\mu}(\pi_{k})$ is $L$-smooth.
Therefore, we have
\begin{align}
J_{\mu}(\pi_{k+1})&\geq J_{\mu}(\pi_{k})+\inp{\nabla_{\theta} J_{\mu}(\pi_k)}{\theta_{k+1}-\theta_{k}}-\frac{L}{2}\norm{\theta_{k+1}-\theta_{k}}_2^2 \label{eq:Jmu inequality 1}\\
&=J_{\mu}(\pi_{k})+\sum_{s\in\cS}d^{\pi_k}_{\mu}(s)\alpha_{k}(s)\cdot\inp{{c}_{k}(s)-\theta_{k}(s)}{\nabla_{{a}} Q({s},{a};\pi_k)\rvert_{{a}=\pi_{k}({s})}}-\frac{L}{2}\sum_{s\in\cS}\alpha_{k}(s)^2\norm{{c}_{k}(s)-\theta_{k}(s)}_2^2 \label{eq:Jmu inequality 2}\\
&\geq J_{\mu}(\pi_{k})+(1-\gamma)\mu_{\min}\sum_{s\in\cS}\alpha_{k}(s)g_{k}(s)-\frac{L}{2}\sum_{s\in\cS}\alpha_{k}(s)^2 D_s^2, \label{eq:Jmu inequality 3}
\end{align}
where (\ref{eq:Jmu inequality 1}) follows from the $L$-smoothness of $J_{\mu}(\pi_{k})$, (\ref{eq:Jmu inequality 2}) holds by the update scheme of FWPO, and (\ref{eq:Jmu inequality 3}) follows from $d^{\pi_k}_{\mu}(s)\geq (1-\gamma)\mu_{\min}$ and the definition of the diameters.
By using (\ref{eq:Jmu inequality 3}) and letting $\alpha_k(s)=\frac{g_k(s)}{L D_s^2}(1-\gamma)\mu_{\min}$,
\begin{align}
J_{\mu}(\pi_{k+1})&\geq J_{\mu}(\pi_{k})+\sum_{s\in\cS}\frac{g_k(s)^2}{2L D_s^2}(1-\gamma)^2 \mu_{\min}^2 \label{eq:Jmu inequality 4}\\
&\geq J_{\mu}(\pi_{k})+\frac{(1-\gamma)^2 \mu_{\min}^2}{2LD_{\max}^2}\sum_{s\in\cS}g_k(s)^2.\label{eq:Jmu inequality 5}
\end{align}
Recall that $\cG_k^2:=\sum_{s\in\cS}g_k(s)^2$ denotes the effective Frank-Wolfe gap of the $k$-th iteration.
By taking the telescoping sum of (\ref{eq:Jmu inequality 5}), we know
\begin{equation}
J_{\mu}(\pi_{k+1})\geq J_{\mu}(\pi_{0})+ \frac{(1-\gamma)^2 \mu_{\min}^2}{2LD_{\max}^2} \sum_{t=0}^{k} \cG_t^2.
\end{equation}
Let $\pi^*$ denote an optimal policy, i.e., $\pi^*\geq \pi$, for any policy $\pi$.
Hence, we have
\begin{equation}
\sum_{t=0}^{k} \cG_t^2 \leq \frac{2LD_{\max}^2}{(1-\gamma)^2 \mu_{\min}^2}(J_{\mu}(\pi_{k+1})-J_{\mu}(\pi_{0}))\leq \frac{2LD_{\max}^2}{(1-\gamma)^2 \mu_{\min}^2}(J_{\mu}(\pi^*)-J_{\mu}(\pi_{0}))\leq \frac{2LD_{\max}^2}{(1-\gamma)^3 \mu_{\min}^2},\label{eq:bound on sum of effective FW gap}
\end{equation}
where the last inequality follows from the fact that the value functions are upper bounded by $(1-\gamma)^{-1}$.
Recall that $\bar{\cG}_T:=\min_{0\leq k\leq T}\cG_k$.
Therefore, (\ref{eq:bound on sum of effective FW gap}) implies that
\begin{equation}
\bar{\cG}_T\leq \sqrt{\frac{1}{T+1}\sum_{t=0}^{T} \cG_t^2}\leq \sqrt{\frac{1}{T+1}\cdot\frac{2LD_{\max}^2}{(1-\gamma)^3 \mu_{\min}^2}}=O(T^{-1/2}).
\end{equation}
This completes the proof. \MYQED
\end{myproof}
\section{Proof of Proposition \ref{PROP:NFWPO DDPG}}
\label{section:appendix:equivalence}
For convenience, we restate Proposition \ref{PROP:NFWPO DDPG} as follows.
\begin{propstar}
\label{PROP:NFWPO and DDPG}
If there are no action constraints, then the policy update scheme of NFWPO is equivalent to that of vanilla DDPG \citep{lillicrap2016continuous}.
\end{propstar}
\begin{myproof}
\normalfont We prove this result by reinterpreting the DDPG from a state-wise perspective.
Let $\bar{\theta}$ and $\bar{\phi}$ be the current parameters of the actor and the critic, respectively.
As proposed in \citep{lillicrap2016continuous}, the policy update under DDPG is done by approximating the true gradient $\nabla_{\theta}J_\mu(\pi(\cdot;{\theta}))$ by the sample-based gradient $\nabla_{\theta}\hat{J}_\mu(\pi(\cdot;{\theta}))\approx \nabla_{\theta}J_\mu(\pi(\cdot;{\theta}))$ as
\begin{align}
&\nabla_{\theta} \hat{J}_\mu(\pi(\cdot;{\theta}))=\frac{1}{\lvert \cB\rvert}\sum_{s\in\cB}\nabla_{a}Q(s,a; \bar{\phi})\rvert_{a=\pi(s;\bar{\theta})}\nabla_{\theta}\pi(s;\theta),\label{eq:sample-based DPG}
\end{align}
where $\cB$ denotes a mini-batch of states drawn from the replay buffer.
Note that this update rule can be reinterpreted from the perspective of regression by the following steps:
\begin{itemize}[leftmargin=*]
\item \textbf{Reference actions.} For each $s$ in the mini-batch $\cB$, compute the reference action at state $s$ under the guidance of the critic
\begin{equation}
a^{*}_s=\pi(s;\bar{\theta})+\eta_{1} \nabla_a Q(s,a;\bar{\phi})\rvert_{a=\pi(s;\bar{\theta})},\label{eq:DDPG reference action}
\end{equation}
where $\eta_1>0$ is the step size.
\item \textbf{Loss function.} Construct a loss function $\cL(\theta;\bar{\theta})$ as the mean-squared error (MSE) between the actions of the current policy and the reference actions, i.e.,
\begin{align}
&\cL(\theta;\bar{\theta})=\frac{1}{2\lvert \cB\rvert}\sum_{s\in\cB}\big(\pi(s;\theta)- a^{*}_s\big)^2.\label{eq:DDPG MSE loss}
\end{align}
\item \textbf{Gradient update.} Accordingly, update the policy parameter by minimizing the MSE loss by applying gradient descent for one step, i.e.,
\begin{equation}
\theta\leftarrow\theta - \eta_{2} \nabla_\theta \cL(\theta;\bar{\theta}).\label{eq:DDPG MSE update}
\end{equation}
\end{itemize}
We can observe that the update scheme characterized by (\ref{eq:DDPG MSE loss})-(\ref{eq:DDPG MSE update}) is equivalent to the DDPG update with a learning rate of $\eta_1\eta_2$.
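To see this explicitly, evaluating the gradient of (\ref{eq:DDPG MSE loss}) at $\theta=\bar{\theta}$ and substituting (\ref{eq:DDPG reference action}) yields
\begin{align*}
\nabla_\theta \cL(\theta;\bar{\theta})\big\rvert_{\theta=\bar{\theta}}&=\frac{1}{\lvert \cB\rvert}\sum_{s\in\cB}\big(\pi(s;\bar{\theta})- a^{*}_s\big)\nabla_{\theta}\pi(s;\theta)\big\rvert_{\theta=\bar{\theta}}\\
&=-\frac{\eta_{1}}{\lvert \cB\rvert}\sum_{s\in\cB}\nabla_{a}Q(s,a; \bar{\phi})\rvert_{a=\pi(s;\bar{\theta})}\nabla_{\theta}\pi(s;\theta)\big\rvert_{\theta=\bar{\theta}},
\end{align*}
which equals $-\eta_{1}\nabla_{\theta} \hat{J}_\mu(\pi(\cdot;{\theta}))\rvert_{\theta=\bar{\theta}}$ by (\ref{eq:sample-based DPG}), so one step of (\ref{eq:DDPG MSE update}) starting from $\bar{\theta}$ moves the parameters by $\eta_{1}\eta_{2}\nabla_{\theta} \hat{J}_\mu$.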
Therefore, DDPG can be viewed as an actor-critic algorithm where both the actor and critic are trained by using regression as subroutines.
Note that (\ref{eq:DDPG reference action}) is equivalent to (\ref{eq:FW target action}) if there are no action constraints.
Hence, we conclude that NFWPO is equivalent to DDPG if there are no action constraints. \MYQED
\end{myproof}
\section{Pseudo Code of NFWPO}
\label{section:appendix:pseudo}
For completeness, we provide the pseudo code of NFWPO in Algorithm \ref{alg:NFWPO}.
\begin{algorithm}[!htbp]
\caption{Frank-Wolfe Policy Optimization With Neural Policy Parameterization (NFWPO)}
\label{alg:NFWPO}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Frank-Wolfe learning rate $\alpha$, actor learning rate $\beta$, critic learning rate $\beta_c$, target update ratio $\tau$, and variance of exploration noise $\sigma^2$
\STATE Randomly initialize the actor $\pi(\cdot;\theta)$ and the critic $Q(\cdot,\cdot;\phi)$ with parameters $\theta,\phi$
\STATE Initialize the target networks with parameters $\theta^{\dagger},\phi^{\dagger}$
\FOR{each episode $i=0,1,\cdots$}
\STATE Let $\bar{\theta}$ and $\bar{\phi}$ be the snapshots of the current actor and critic parameters
\STATE Receive initial state $s_0$ of the current episode
\FOR{$t=0,1,\cdots$}
\STATE Select action $a_t=\pi(s_t;\bar{\theta})+\mathcal{N}(0,\sigma^2)$
\STATE Apply action $a_t$ and observe reward $r_t$ as well as the next state $s_{t+1}$
\STATE Store transition $(s_t,a_t,r_t,s_{t+1})$ in the replay buffer
\STATE Randomly sample a mini-batch $\cB$ of transitions $(s,a,r,s')$ from the replay buffer
\STATE // update the actor by state-wise Frank-Wolfe
\FOR{each state $s\in\cB$}
\STATE $\bar{c}(s)=\argmax_{c \in \cC(s)}\inp{c}{\nabla_{a} Q(s,a;\bar{\phi})\rvert_{a=\Pi_{\cC(s)}(\pi(s;\bar{\theta}))}}$
\STATE $\tilde{a}_s=\Pi_{\cC(s)}(\pi(s;\bar{\theta}))+{\alpha} \big(\bar{c}(s)-\Pi_{\cC(s)}(\pi(s;\bar{\theta}))\big)$
\ENDFOR
\STATE $\cL_{\text{NFWPO}}(\theta;\bar{\theta})=\sum_{s\in\cB}\big(\pi(s;\theta)- \tilde{a}_s\big)^2$
\STATE $\theta\leftarrow\theta - {\beta}\nabla_\theta \cL_{\text{NFWPO}}(\theta;\bar{\theta})\rvert_{\theta=\bar{\theta}}$
\STATE // update the Q-learning-like critic as vanilla DDPG
\STATE $\cL_{\text{critic}}(\phi)=\frac{1}{\lvert \cB\rvert}\sum_{(s,a,r,s')\in\cB}\big(r+\gamma Q(s',\pi(s';\theta^\dagger);\phi^\dagger)-Q(s,a;\phi)\big)^2$
\STATE $\phi\leftarrow \phi - \beta_c \nabla_{\phi}\cL_{\text{critic}}(\phi)\rvert_{\phi=\bar{\phi}}$
\STATE Update the target networks: $\theta^\dagger\leftarrow \tau \theta+(1-\tau)\theta^\dagger$, $\phi^\dagger\leftarrow \tau \phi+(1-\tau)\phi^\dagger$
\ENDFOR
\ENDFOR
\end{algorithmic}
\end{algorithm}
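To make the actor update concrete, the following is a minimal Python sketch of the state-wise Frank-Wolfe target computation in Algorithm \ref{alg:NFWPO}, using cvxpy as a generic solver. The polyhedral description $\cC(s)=\{a: Aa\leq b\}$ and the helper names are illustrative assumptions and do not correspond to the implementation used in the experiments:
\begin{verbatim}
import cvxpy as cp

def fw_target_action(a_hat, critic_grad, A, b, alpha):
    """Compute the regression target for one state, assuming C(s) = {a : A a <= b}
    is a bounded polyhedron.  a_hat is the raw actor output pi(s; theta_bar), and
    critic_grad(a) returns the critic's gradient w.r.t. the action evaluated at a."""
    n = a_hat.shape[0]
    # Projection of the raw actor output onto the feasible set C(s).
    x = cp.Variable(n)
    cp.Problem(cp.Minimize(cp.sum_squares(x - a_hat)), [A @ x <= b]).solve()
    a_proj = x.value
    # Frank-Wolfe linear maximization oracle over C(s).
    g = critic_grad(a_proj)
    c = cp.Variable(n)
    cp.Problem(cp.Maximize(g @ c), [A @ c <= b]).solve()
    c_bar = c.value
    # Move a fraction alpha from the projected action towards the FW vertex;
    # the result is the target used in the actor's mean-squared-error loss.
    return a_proj + alpha * (c_bar - a_proj)
\end{verbatim}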
\section{Detailed Experimental Setup and Additional Experimental Results}
\label{section:appendix:exp}
\subsection{Training Configurations}
\textbf{Random seeds}. For each task, each algorithm is trained under the common set of random seeds of $\{0,1,2,3,4\}$.
\textbf{Exploration.} The training process starts after some number of time steps (1000 steps for Reacher-v2 and 10000 steps for the other environments), and we use a purely exploratory policy in this initial phase to collect samples for the replay buffer for all the algorithms. During training, Gaussian noise is added to each action for exploration for the neural cases. In the tabular case, we use the $\epsilon$-greedy policy as the behavior policy instead.
\textbf{Update frequency}. For NSFNET, due to the longer computation time required by the network simulator, we speed up the simulations by updating the actor every 50 steps for all the algorithms.
For the other environments, as DDPG+OptLayer is particularly time-consuming, we also update its actor every 50 steps to speed up the training process.
\textbf{Implementation of OptLayer.} As there is no off-the-shelf implementation of DDPG+OptLayer available, we leverage the open-source packages cvxpylayer \citep{agrawal2019differentiable} as well as the OptNet proposed by \citep{amos2017optnet} to implement the differentiable projection-based OptLayer for end-to-end training.
Note that in DDPG+OptLayer there are two scenarios where projection is needed for the actor output: (i) producing actions for interacting with the environment (under a behavior policy) and (ii) evaluating the actions produced by the current policy for calculating the deterministic policy gradient as in (\ref{eq:sample-based DPG}).
Since OptLayer is computationally inefficient, we use it only for the scenario (ii), where back-propagation is required for gradient descent.
For scenario (i), we use the Gurobi optimization solver instead to speed up the training process.
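As an illustration of how the differentiable projection used in scenario (ii) can be constructed (a sketch only; the linear constraint form and helper names are assumptions and this is not the exact code used here), a differentiable $L_2$-projection onto a polyhedral action set can be written with cvxpylayers roughly as follows:
\begin{verbatim}
import cvxpy as cp
from cvxpylayers.torch import CvxpyLayer

def make_projection_layer(A, b):
    """Differentiable L2-projection onto {a : A a <= b}; gradients of the
    projected action w.r.t. the raw action flow through the layer."""
    n = A.shape[1]
    a_raw = cp.Parameter(n)
    a_proj = cp.Variable(n)
    # The problem must follow cvxpy's DPP rules for use inside CvxpyLayer.
    problem = cp.Problem(cp.Minimize(cp.sum_squares(a_proj - a_raw)),
                         [A @ a_proj <= b])
    return CvxpyLayer(problem, parameters=[a_raw], variables=[a_proj])

# layer = make_projection_layer(A, b)
# projected, = layer(actor_output)   # actor_output: torch tensor of shape (n,)
\end{verbatim}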
\subsection{A Summary of the Hyperparameters}
In this section, we provide a summary of the key hyperparameters in Tables \ref{tab:hyperParam neural}-\ref{tab:hyperParam tabular}.
We highlight some key design choices:
\begin{itemize}[leftmargin=*]
\item For both the actor and critic networks, we use two hidden layers with 400 and 300 hidden units with ReLU as the activation, as suggested by \cite{fujimoto2018addressing}. For the activation function of the actor output, in order to better accommodate the various action ranges of different environments, we choose tanh for the MuJoCo control tasks and ReLU for the other tasks, respectively.
\item We choose a smaller batch size for DDPG+OptLayer since its computation time is much larger than the other approaches. As DDPG+OptLayer is a strong baseline in some of the environments, we also set the batch size of NFWPO to be 16 for a fair comparison.
\item For the SAC algorithm, we use the open-source Spinning Up implementation \citep{SpinningUp2018} with its default settings. For PPO and TRPO, we use the open-source implementation provided by \citep{fujita2018clipped}, which is based on ChainerRL \citep{JMLR:v22:20-376}, with its default settings.
\end{itemize}
\subsection{Additional Experimental Results}
In this section, we further compare NFWPO with the baseline methods designed to address special types of action constraints in the HalfCheetah-v2 environment.
\textbf{Comparison with Clipped Action Policy Gradient \citep{fujita2018clipped}.}
We further compare NFWPO with the Clipped Action Policy Gradient (CAPG) algorithms.
As CAPG can only handle bound constraints, we evaluate them with the intrinsic bound constraints in HalfCheetah (i.e., each action entry must be in $[-1,+1]$).
The results in Figure \ref{fig:Halfcheetah_bound_all} show that NFWPO outperforms the two versions of CAPG, namely TRPO-CAPG and PPO-CAPG, under the bound constraints.
\begin{comment}
\textbf{HalfCheetah with non-linear constraints.}
For the experiments with Halfcheetah-v2, we also evaluate NFWPO and the baseline methods with nonlinear action constraints.
Recall that in this task, an action is $6$-dimensional and is denoted by $v_{1},\cdots,v_{6}$.
We consider the quadratic action constraints as: $v_1^2+v_2^2+v_3^2\leq 1$ and $v_4^2+v_5^2+v_6^2\leq 1$.
These constraints capture the limitation on the total torque applied to the joints of the robot.
Similar to the results in other environments, from Figure \ref{fig:Halfcheetah quadratic all}(a)-(b), we observe that NFWPO achieves much faster convergence than the baselines and has much fewer constraint violations in the pre-projection actions.
\begin{figure*}
\caption{Halfcheetah-v2 with quadratic constraints: (a) Average return at each evaluation over 5 random seeds; (b) {Cumulative number of constraint violations of the primitive actions during training.}
\label{fig:Halfcheetah quadratic all}
\end{figure*}
\begin{figure*}
\caption{Reacher-v2 with looser quadratic constraints: (a) Average return at each evaluation over 5 random seeds; (b) {Cumulative number of constraint violations of the primitive actions during training.}
\label{fig:Reacher_looser_quadratic_reward}
\label{fig:Reaher_looser_quadratic_constraint_violate}
\label{fig:Reahcer_looser_quadratic all}
\end{figure*}
\textbf{Humanoid with non-linear constraints.} We evaluate NFWPO and other baseline in Humanoid, a challenging environment with a 17-dim. action space and 376-dim. state space. With non-linear constraints "the square of action’s L2-norm <= 0.5". The result from Figure \ref{fig:Humanoid quadratic all} shows that in the harder task, NFWPO still perform well and matches the other strong baselines.
\begin{figure*}
\caption{Humanoid-v2 with quadratic constraints: (a) Average return at each evaluation over 5 random seeds; (b) {Cumulative number of constraint violations of the primitive actions during training.}
\label{fig:Humanoid quadratic all}
\end{figure*}
\end{comment}
\begin{figure*}
\caption{Halfcheetah-v2 with bound constraints: Average return at each evaluation over 5 random seeds}
\label{fig:Halfcheetah_bound}
\label{fig:Halfcheetah_bound_all}
\end{figure*}
\begin{comment}
\begin{table}[H]
\centering
\caption{Average return over the final 10 evaluations for Halfcheetah-v2 with quadratic constraints.}
\label{tab:cheetahWithStateResult}
\begin{tabular}{l c}
\hline Methods & Halfcheetah (With Quadratic Constraints) \\
\hline
NFWPO & \textbf{4625.86} \\
DDPG+Projection & 1277.16\\
DDPG+Reward Shaping & 2040.08\\
DDPG+OptLayer & 3598.95\\
\hline
\end{tabular}
\end{table}
\end{comment}
\begin{table}[H]
\centering
\caption{A summary of the hyper-parameters of NFWPO and the baseline methods.}
\label{tab:hyperParam neural}
\begin{tabular}{ l c c c c}
\hline Hyper-parameters & Reacher-v2 & HalfCheetah-v2 & NSFNET & BSS-5\\
\hline
Learning Rate (For Frank-Wolfe) & 0.05 & 0.01 & 0.05 & 0.05 \\
Actor Learning Rate (For others) & 0.0001 & 0.0001 & 0.0001 & 0.0001 \\
Critic Learning Rate & 0.001 & 0.001 & 0.001 & 0.001\\
Discount Factor & 0.99 & 0.99 & 0.99 & 0.99\\
Target Update Ratio ($\tau$) & 0.001 & 0.001 & 0.001 & 0.001\\
Replay Buffer Size & $10^4$ & $10^6$ & $5\times10^4$ & $10^6$\\
Evaluation Frequency & 5000 & 5000 & 10000 & 5000\\
Total Training Steps & $3\times10^5$ & $7\times10^5$ & $5\times10^5$ & 100000 \\
Starting Time of Training & 1000 & 10000 & 10000 & 10000\\
Additive Action Noise for Exploration & $\mathcal{N}(0, 0.1) $ & $\mathcal{N}(0, 0.1)$ & $\mathcal{N}(0, 3)$ & $\mathcal{N}(0, 5)$\\
Weight of Reward Shaping & $\frac{1}{7}$ & 3 (state-dependent), 2 (quadratic)& $\frac{1}{4}$ & 4\\
Batch Size (For DDPG+RewardShaping) & 64 & 64 & 64 & 64\\
Batch Size (For DDPG+Projection)& 64 & 64 & 64 & 64\\
Batch Size (For DDPG+OptLayer) & 16 & 16 & 16 & 16\\
Batch Size (For FWPO/NFWPO) & 16 & 16 & 16 & 16 \\
\hline
\end{tabular}
\end{table}
\begin{table}[H]
\centering
\caption{The configurations of the hyper-parameters for tabular policies in the bike-sharing environment.}
\label{tab:hyperParam tabular}
\begin{tabular}{ l c}
\hline Hyper-parameters & BSS-3\\
\hline
Critic Network & (30, out) \\
Learning Rate (For Frank-Wolfe) & 0.05 \\
Actor Learning Rate (For Others) & 0.001 \\
Critic Learning Rate & 0.002 \\
Discount Factor & 0.9 \\
Target Update Ratio for Critic ($\tau$) & 0.01 \\
Target Update Frequency for Actor & 100 \\
Replay Buffer Size & $10^4$ \\
Evaluation Frequency & 5000 \\
Total Training Steps & $10^6$ \\
Starting Time of Training & 10000 \\
Exploratory Behavior Policy & $\epsilon$-greedy, $\epsilon = 0.1$ \\
Weight of Reward Shaping & 1\\
Batch Size (For All Methods) & 64 \\
\hline
\end{tabular}
\end{table}
\end{document} |
\begin{document}
\title[Two New Convex Dominated Functions]{Hermite-Hadamard-Type Inequalities
for New Different Kinds of Convex Dominated Functions}
\author{M.Emin \"{O}zdemir$^{\blacktriangle }$}
\address{$^{\blacktriangle }$Atat\"{u}rk University, K.K. Education Faculty,
Department of Mathematics, 25240, Campus, Erzurum, Turkey}
\email{[email protected]}
\author{Havva Kavurmac\i $^{\blacktriangle ,\blacksquare }$}
\email{[email protected]}
\thanks{$^{\blacksquare }$Corresponding Author}
\author{Mevl\"{u}t Tun\c{c}}
\address{Department of Mathematics, Faculty of Art and Sciences, Kilis 7
Aralik University, Kilis, 79000, Turkey}
\email{[email protected]}
\date{February 2, 2012}
\subjclass[2000]{ Primary 26D15, Secondary 26D10, 05C38}
\keywords{$m-$convex dominated function, Hermite-Hadamard's inequality, $
\left( \alpha ,m\right) -$convex function, $r-$convex function.}
\begin{abstract}
In this paper, we establish several new convex dominated functions and then
we obtain new Hadamard type inequalities.
\end{abstract}
\maketitle
\section{Introduction}
The inequality
\begin{equation}
f\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\int_{a}^{b}f\left( x\right)
dx\leq \frac{f\left( a\right) +f\left( b\right) }{2} \label{h}
\end{equation}
which holds for all convex functions $f:[a,b]\rightarrow
\mathbb{R}
$, is known in the literature as Hermite-Hadamard's inequality.
In \cite{G}, Toader defined $m-$convexity as the following:
\begin{definition}
The function $f:[0,b]\rightarrow
\mathbb{R}
,$ $b>0$, is said to be $m-$convex where $m\in \lbrack 0,1],$ if we have
\begin{equation*}
f(tx+m(1-t)y)\leq tf(x)+m(1-t)f(y)
\end{equation*}
for all $x,y\in \lbrack 0,b]$ and $t\in \lbrack 0,1].$ We say that $f$ is $
m- $concave if $\left( -f\right) $ is $m-$convex.
\end{definition}
In \cite{D}, Dragomir proved the following theorem.
Let $f:\left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ be an $m-$convex function with $m\in \left( 0,1\right] $ and $0\leq a<b.$
If $f\in L_{1}\left[ a,b\right] ,$ then the following inequalities hold:
\begin{eqnarray}
f\left( \frac{a+b}{2}\right) &\leq &\frac{1}{b-a}\int_{a}^{b}\frac{f\left(
x\right) +mf\left( \frac{x}{m}\right) }{2}dx \label{h1} \\
&\leq &\frac{1}{2}\left[ \frac{f\left( a\right) +mf\left( \dfrac{a}{m}
\right) }{2}+m\frac{f\left( \frac{b}{m}\right) +mf\left( \dfrac{b}{m^{2}}
\right) }{2}\right] . \notag
\end{eqnarray}
In \cite{DI} and \cite{DPP}, the authors connect together some disparate
threads through a Hermite-Hadamard motif. The first of these threads is the
unifying concept of a $g-$convex dominated function. Similarly, in \cite{KOS}
, Kavurmac\i\ et al. introduced the following class of functions and then
proved a theorem for this class of functions related to (\ref{h1}).
\begin{definition}
Let $g:\left[ 0,b\right] \rightarrow
\mathbb{R}
$ be a given $m-$convex function on the interval $\left[ 0,b\right] $. The
real function $f:\left[ 0,b\right] \rightarrow
\mathbb{R}
$ is called $\left( g,m\right) -$convex dominated on $\left[ 0,b\right] $ if
the following condition is satisfied
\begin{eqnarray*}
&&\left\vert \lambda f(x)+m(1-\lambda )f(y)-f\left( \lambda x+m\left(
1-\lambda \right) y\right) \right\vert \\
&\leq &\lambda g(x)+m(1-\lambda )g(y)-g\left( \lambda x+m\left( 1-\lambda
\right) y\right)
\end{eqnarray*}
for all $x,y\in \left[ 0,b\right] $, $\lambda \in \left[ 0,1\right] $ and $
m\in \left[ 0,1\right] .$
\end{definition}
\begin{theorem}
\label{a} Let $g:\left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ be an $m-$convex function with $m\in \left( 0,1\right] $. $f:\left[
0,\infty \right) \rightarrow
\mathbb{R}
$ is a $\left( g,m\right) -$convex dominated mapping and $0\leq a<b.$ If $
f\in L_{1}\left[ a,b\right] ,$ then one has the inequalities:
\begin{eqnarray*}
&&\left\vert \frac{1}{b-a}\int_{a}^{b}\frac{f\left( x\right) +mf\left( \frac{
x}{m}\right) }{2}dx-f\left( \frac{a+b}{2}\right) \right\vert \\
&& \\
&\leq &\frac{1}{b-a}\int_{a}^{b}\frac{g\left( x\right) +mg\left( \frac{x}{m}
\right) }{2}dx-g\left( \frac{a+b}{2}\right)
\end{eqnarray*}
and
\begin{eqnarray*}
&&\left\vert \frac{1}{2}\left[ \frac{f\left( a\right) +mf\left( \dfrac{a}{m}
\right) }{2}+m\frac{f\left( \frac{b}{m}\right) +mf\left( \dfrac{b}{m^{2}}
\right) }{2}\right] -\frac{1}{b-a}\int_{a}^{b}\frac{f\left( x\right)
+mf\left( \frac{x}{m}\right) }{2}dx\right\vert \\
&& \\
&\leq &\frac{1}{2}\left[ \frac{g\left( a\right) +mg\left( \dfrac{a}{m}
\right) }{2}+m\frac{g\left( \frac{b}{m}\right) +mg\left( \dfrac{b}{m^{2}}
\right) }{2}\right] -\frac{1}{b-a}\int_{a}^{b}\frac{g\left( x\right)
+mg\left( \frac{x}{m}\right) }{2}dx.
\end{eqnarray*}
\end{theorem}
In \cite{MIH}, definition of $\left( \alpha ,m\right) -$convexity was
introduced by Mihe\c{s}an as the following.
\begin{definition}
\label{d1} The function $f:[0,b]\rightarrow
\mathbb{R}
,$ $b>0$, is said to be $\left( \alpha ,m\right) -$convex, where $\left(
\alpha ,m\right) \in \lbrack 0,1]^{2},$ if we have
\begin{equation*}
f(tx+m(1-t)y)\leq t^{\alpha }f(x)+m(1-t^{\alpha })f(y)
\end{equation*}
for all $x,y\in \lbrack 0,b]$ and $t\in \lbrack 0,1].$
\end{definition}
Denote by $K_{m}^{\alpha }\left( b\right) $ the class of all $\left( \alpha
,m\right) -$convex functions on $\left[ 0,b\right] $ for which $f\left(
0\right) \leq 0.$ If we take $\left( \alpha ,m\right) \in \left\{ \left(
0,0\right) ,\left( \alpha ,0\right) ,\left( 1,0\right) ,\left( 1,m\right)
,\left( 1,1\right) ,\left( \alpha ,1\right) \right\} ,$ it can be easily
seen that $\left( \alpha ,m\right) -$convexity reduces to the notions of
increasing, $\alpha -$starshaped, starshaped, $m-$convex, convex and $\alpha
-$convex functions, respectively.
In \cite{SSOR}, Set et al. proved the following Hadamard type inequalities
for $\left( \alpha ,m\right) -$convex functions.
\begin{theorem}
Let $f:\left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ be an $\left( \alpha ,m\right) -$convex function with $\left( \alpha
,m\right) \in \left( 0,1\right] ^{2}.$ If \ $0\leq a<b<\infty $ and $f\in $ $
L_{1}\left[ a,b\right] \cap L_{1}\left[ \frac{a}{m},\frac{b}{m}\right] ,$
then the following inequality holds:
\begin{equation}
f\left( \frac{a+b}{2}\right) \leq \frac{1}{b-a}\int_{a}^{b}\frac{f\left(
x\right) +m\left( 2^{\alpha }-1\right) f\left( \frac{x}{m}\right) }{
2^{\alpha }}dx. \label{h2}
\end{equation}
\end{theorem}
\begin{theorem}
Let $f:\left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ be an $\left( \alpha ,m\right) -$convex function with $\left( \alpha
,m\right) \in \left( 0,1\right] ^{2}.$ If \ $0\leq a<b<\infty $ and $f\in $ $
L_{1}\left[ a,b\right] ,$ then the following inequality holds:
\begin{equation}
\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\leq \frac{1}{2}\left[ \frac{
f\left( a\right) +f\left( b\right) +m\alpha f\left( \frac{a}{m}\right)
+m\alpha f\left( \frac{b}{m}\right) }{\alpha +1}\right] . \label{h3}
\end{equation}
\end{theorem}
For the recent results based on the above definition see the papers \cite
{BOP}, \cite{BPR}, \cite{OKS}, \cite{OAK} and \cite{SSO}.
In \cite{OSA}, the power mean $M_{r}(x,y;\lambda )$ of order $r$ of positive
numbers $x,y$ is defined by
\begin{equation*}
M_{r}(x,y;\lambda )=\left\{
\begin{array}{cc}
\left( \lambda x^{r}+\left( 1-\lambda \right) y^{r}\right) ^{\frac{1}{r}}, &
r\neq 0 \\
x^{\lambda }y^{1-\lambda }, & r=0.
\end{array}
\right.
\end{equation*}
A positive function $f$ is $r-$convex on $[a,b]$ if for all $x,y\in \lbrack
a,b]$ and $\lambda \in \lbrack 0,1]$
\begin{equation}
f(\lambda x+(1-\lambda )y)\leq M_{r}(f\left( x\right) ,f\left( y\right)
;\lambda ). \label{h4}
\end{equation}
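For orientation, the special cases $r=1$ and $r=0$ of (\ref{h4}) recover ordinary
convexity and log-convexity of positive functions, respectively, since
\begin{equation*}
M_{1}\left( f\left( x\right) ,f\left( y\right) ;\lambda \right) =\lambda
f\left( x\right) +\left( 1-\lambda \right) f\left( y\right) \text{ \ \ and \
\ }M_{0}\left( f\left( x\right) ,f\left( y\right) ;\lambda \right) =f\left(
x\right) ^{\lambda }f\left( y\right) ^{1-\lambda }.
\end{equation*}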
The generalized logarithmic mean of order $r$ of positive numbers $x,y$ is
defined by
\begin{equation}
L_{r}\left( x,y\right) =\left\{
\begin{array}{cc}
\frac{r}{r+1}\frac{x^{r+1}-y^{r+1}}{x^{r}-y^{r}}, & r\neq 0,-1,x\neq y \\
& \\
\frac{x-y}{\ln x-\ln y}, & r=0,x\neq y \\
& \\
xy\frac{\ln x-\ln y}{x-y}, & r=-1,x\neq y \\
& \\
x, & x=y
\end{array}
\right. \label{L}
\end{equation}
In \cite{GPP}, the following theorem was proved by Gill et al. for $r-$
convex functions.
\begin{theorem}
Suppose $f$ is a positive $r-$convex function on $\left[ a,b\right] .$ Then
\begin{equation}
\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\leq L_{r}\left( f\left(
a\right) ,f\left( b\right) \right) . \label{h5}
\end{equation}
If $f$ is a positive $r-$concave function, then the inequality is reversed.
\end{theorem}
In the following sections our main results are given: We establish several
new convex dominated functions and then we obtain new Hadamard type
inequalities.
\section{$\left( g-\left( \protect\alpha ,m\right) \right) $-convex
dominated functions}
\begin{definition}
\label{d2} Let $g:\left[ 0,b\right] \rightarrow
\mathbb{R}
,$ $b>0$ be a given $\left( \alpha ,m\right) $-convex function on the
interval $\left[ 0,b\right] $. The real function $f:\left[ 0,b\right]
\rightarrow
\mathbb{R}
$ is called $\left( g-\left( \alpha ,m\right) \right) $-convex dominated on $
\left[ 0,b\right] $ if the following condition is satisfied
\begin{eqnarray}
&&\left\vert \lambda ^{\alpha }f(x)+m(1-\lambda ^{\alpha })f(y)-f\left(
\lambda x+m\left( 1-\lambda \right) y\right) \right\vert \label{h6} \\
&\leq &\lambda ^{\alpha }g(x)+m(1-\lambda ^{\alpha })g(y)-g\left( \lambda
x+m\left( 1-\lambda \right) y\right) \notag
\end{eqnarray}
for all $x,y\in \left[ 0,b\right] $, $\lambda \in \left[ 0,1\right] $ and $
\left( \alpha ,m\right) \in \left[ 0,1\right] ^{2}.$
\end{definition}
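As an immediate example, every $\left( \alpha ,m\right) $-convex function $f$
on $\left[ 0,b\right] $ is $\left( f-\left( \alpha ,m\right) \right) $-convex
dominated: taking $g=f$ in (\ref{h6}), the expression inside the modulus on
the left-hand side is nonnegative by the $\left( \alpha ,m\right) $-convexity
of $f$, so that
\begin{equation*}
\left\vert \lambda ^{\alpha }f(x)+m(1-\lambda ^{\alpha })f(y)-f\left(
\lambda x+m\left( 1-\lambda \right) y\right) \right\vert =\lambda ^{\alpha
}f(x)+m(1-\lambda ^{\alpha })f(y)-f\left( \lambda x+m\left( 1-\lambda
\right) y\right)
\end{equation*}
and the two sides of (\ref{h6}) coincide.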
The next simple characterisation of $\left( g-\left( \alpha ,m\right) \right) $-convex
dominated functions holds.
\begin{lemma}
\label{l1} Let $g:\left[ 0,b\right] \rightarrow
\mathbb{R}
$ be an $\left( \alpha ,m\right) $-convex function on the interval $\left[
0,b\right] $ and the function $f:\left[ 0,b\right] \rightarrow
\mathbb{R}
.$ The following statements are equivalent:
\begin{enumerate}
\item $f$ is $\left( g-\left( \alpha ,m\right) \right) $-convex dominated on
$\left[ 0,b\right] .$
\item The mappings $g-f$ and $g+f$ are $\left( \alpha ,m\right) $-convex
functions on $\left[ 0,b\right] .$
\item There exist two $\left( \alpha ,m\right) $-convex mappings $h,k$
defined on $\left[ 0,b\right] $ such that
\begin{equation*}
\begin{array}{ccc}
f=\frac{1}{2}\left( h-k\right) & \text{and} & g=\frac{1}{2}\left( h+k\right)
\end{array}
.
\end{equation*}
\end{enumerate}
\end{lemma}
\begin{proof}
1$\Longleftrightarrow $2 The condition (\ref{h6}) is equivalent to
\begin{eqnarray*}
&&g\left( \lambda x+m\left( 1-\lambda \right) y\right) -\lambda ^{\alpha
}g(x)-m(1-\lambda ^{\alpha })g(y) \\
&\leq &\lambda ^{\alpha }f(x)+m(1-\lambda ^{\alpha })f(y)-f\left( \lambda
x+m\left( 1-\lambda \right) y\right) \\
&\leq &\lambda ^{\alpha }g(x)+m(1-\lambda ^{\alpha })g(y)-g\left( \lambda
x+m\left( 1-\lambda \right) y\right)
\end{eqnarray*}
for all $x,y\in \left[ 0,b\right] $, $\lambda \in \left[ 0,1\right] $ and $\left( \alpha
,m\right) \in \left[ 0,1\right] ^{2}.$ The two inequalities may be
rearranged as
\begin{equation*}
\left( g+f\right) \left( \lambda x+m\left( 1-\lambda \right) y\right) \leq
\lambda ^{\alpha }\left( g+f\right) (x)+m(1-\lambda ^{\alpha })\left(
g+f\right) (y)
\end{equation*}
and
\begin{equation*}
\left( g-f\right) \left( \lambda x+m\left( 1-\lambda \right) y\right) \leq
\lambda ^{\alpha }\left( g-f\right) (x)+m(1-\lambda ^{\alpha })\left(
g-f\right) (y)
\end{equation*}
which are equivalent to the $\left( \alpha ,m\right) $-convexity of $g+f$
and $g-f,$ respectively.
2$\Longleftrightarrow $3 Assume first that condition 2 holds and define $h=g+f$
and $k=g-f$; then $h,k$ are $\left( \alpha ,m\right) $-convex on $\left[ 0,b
\right] $, and clearly $f=\frac{1}{2}\left( h-k\right) $ and $g=\frac{1}{2}
\left( h+k\right) $. Conversely, if $f=\frac{1}{2}\left( h-k\right) $ and $g=
\frac{1}{2}\left( h+k\right) $ for some $\left( \alpha ,m\right) $-convex
mappings $h,k$ on $\left[ 0,b\right] $, then $g+f=h$ and $g-f=k$ are $\left(
\alpha ,m\right) $-convex, which is condition 2.
\end{proof}
\begin{theorem}
\label{t1} Let $g:\left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ be an $\left( \alpha ,m\right) -$convex function with $\left( \alpha
,m\right) \in \left( 0,1\right] ^{2}$. $f:\left[ 0,\infty \right)
\rightarrow
\mathbb{R}
$ is a $\left( g-\left( \alpha ,m\right) \right) -$convex dominated mapping
and $0\leq a<b.$ If $f\in L_{1}\left[ a,b\right] \cap L_{1}\left[ \frac{a}{m}
,\frac{b}{m}\right] ,$ then the first inequality holds:
\begin{eqnarray*}
&&\left\vert \frac{1}{b-a}\int_{a}^{b}\frac{f\left( x\right) +m\left(
2^{\alpha }-1\right) f\left( \frac{x}{m}\right) }{2^{\alpha }}dx-f\left(
\frac{a+b}{2}\right) \right\vert \\
&& \\
&\leq &\frac{1}{b-a}\int_{a}^{b}\frac{g\left( x\right) +m\left( 2^{\alpha
}-1\right) g\left( \frac{x}{m}\right) }{2^{\alpha }}dx-g\left( \frac{a+b}{2}
\right)
\end{eqnarray*}
and if $f\in L_{1}\left[ a,b\right] $ then the second inequality holds:
\begin{eqnarray*}
&&\left\vert \frac{1}{2}\left[ \frac{f\left( a\right) +mf\left( \dfrac{a}{m}
\right) }{\alpha +1}+m\alpha \frac{f\left( \frac{b}{m}\right) +mf\left(
\dfrac{b}{m^{2}}\right) }{\alpha +1}\right] -\frac{1}{b-a}\int_{a}^{b}\frac{
f\left( x\right) +mf\left( \frac{x}{m}\right) }{2}dx\right\vert \\
&& \\
&\leq &\frac{1}{2}\left[ \frac{g\left( a\right) +mg\left( \dfrac{a}{m}
\right) }{\alpha +1}+m\alpha \frac{g\left( \frac{b}{m}\right) +mg\left(
\dfrac{b}{m^{2}}\right) }{\alpha +1}\right] -\frac{1}{b-a}\int_{a}^{b}\frac{
g\left( x\right) +mg\left( \frac{x}{m}\right) }{2}dx.
\end{eqnarray*}
\end{theorem}
\begin{proof}
By Definition \ref{d2} with $\lambda =\frac{1}{2}$, since the mapping $f$ is $
\left( g-\left( \alpha ,m\right) \right) -$convex dominated, we
have that
\begin{equation*}
\left\vert \frac{f\left( x\right) +m\left( 2^{\alpha }-1\right) f\left(
y\right) }{2^{\alpha }}-f\left( \frac{x+my}{2}\right) \right\vert \leq \frac{
g\left( x\right) +m\left( 2^{\alpha }-1\right) g\left( y\right) }{2^{\alpha }
}-g\left( \frac{x+my}{2}\right)
\end{equation*}
for all $x,y\in \left[ 0,\infty \right) $ and $\left( \alpha ,m\right) \in
\left( 0,1\right] ^{2}.$ If we choose $x=ta+(1-t)b,$ $y=\left( 1-t\right)
\frac{a}{m}+t\frac{b}{m}$ and $t\in \left[ 0,1\right] ,$ then we get
\begin{eqnarray*}
&&\left\vert \frac{f\left( ta+(1-t)b\right) +m\left( 2^{\alpha }-1\right)
f\left( \frac{\left( 1-t\right) a+tb}{m}\right) }{2^{\alpha }}-f\left( \frac{
a+b}{2}\right) \right\vert \\
&& \\
&\leq &\frac{g\left( ta+(1-t)b\right) +m\left( 2^{\alpha }-1\right) g\left(
\frac{\left( 1-t\right) a+tb}{m}\right) }{2^{\alpha }}-g\left( \frac{a+b}{2}
\right) .
\end{eqnarray*}
Integrating over $t$ on $\left[ 0,1\right] $ we deduce that
\begin{eqnarray*}
&&\left\vert \frac{\int_{0}^{1}f\left( ta+(1-t)b\right) dt+m\left( 2^{\alpha
}-1\right) \int_{0}^{1}f\left( \frac{\left( 1-t\right) a+tb}{m}\right) dt}{
2^{\alpha }}-f\left( \frac{a+b}{2}\right) \right\vert \\
&& \\
&\leq &\frac{\int_{0}^{1}g\left( ta+(1-t)b\right) dt+m\left( 2^{\alpha
}-1\right) \int_{0}^{1}g\left( \frac{\left( 1-t\right) a+tb}{m}\right) dt}{
2^{\alpha }}-g\left( \frac{a+b}{2}\right)
\end{eqnarray*}
and so the first inequality is proved.
Since $f$ is a $\left( g-\left( \alpha ,m\right) \right) -$convex dominated
function, we have
\begin{eqnarray*}
&&\left\vert t^{\alpha }f\left( x\right) +m(1-t^{\alpha })f\left( y\right)
-f\left( tx+m(1-t)y\right) \right\vert \\
&& \\
&\leq &t^{\alpha }g\left( x\right) +m(1-t^{\alpha })g\left( y\right)
-g\left( tx+m(1-t)y\right) ,\text{ for all }x,y>0
\end{eqnarray*}
which gives for $x=a$ and $y=\frac{b}{m}$
\begin{eqnarray}
&&\left\vert t^{\alpha }f\left( a\right) +m(1-t^{\alpha })f\left( \frac{b}{m}
\right) -f\left( ta+m(1-t)\frac{b}{m}\right) \right\vert \label{h7} \\
&& \notag \\
&\leq &t^{\alpha }g\left( a\right) +m(1-t^{\alpha })g\left( \frac{b}{m}
\right) -g\left( ta+m(1-t)\frac{b}{m}\right) \notag
\end{eqnarray}
and for $x=\frac{a}{m}$, $y=\frac{b}{m^{2}}$ and then multiply with $m$
\begin{eqnarray}
&&\left\vert mt^{\alpha }f\left( \frac{a}{m}\right) +m^{2}(1-t^{\alpha })f\left( \frac{b}{m^{2}}
\right) -mf\left( t\frac{a}{m}+(1-t)\frac{b}{m}\right) \right\vert
\label{h8} \\
&& \notag \\
&\leq &mt^{\alpha }g\left( \frac{a}{m}\right) +m^{2}(1-t^{\alpha })g\left( \frac{b}{m^{2}}
\right) -mg\left( t\frac{a}{m}+(1-t)\frac{b}{m}\right) \notag
\end{eqnarray}
for all $t\in \left[ 0,1\right] .$ By properties of modulus, if we add the
inequalities in $\left( \text{\ref{h7}}\right) $ and $\left( \text{\ref{h8}}
\right) $, we get
\begin{eqnarray*}
&&\left\vert t^{\alpha }\left[ f\left( a\right) +mf\left( \frac{a}{m}\right)
\right] +m(1-t^{\alpha })\left[ f\left( \frac{b}{m}\right) +mf\left( \frac{b
}{m^{2}}\right) \right] \right. \\
&& \\
&&-\left. \left[ f\left( ta+m(1-t)\frac{b}{m}\right) +mf\left( t\frac{a}{m}
+(1-t)\frac{b}{m}\right) \right] \right\vert \\
&& \\
&\leq &t^{\alpha }\left[ g\left( a\right) +mg\left( \frac{a}{m}\right)
\right] +m(1-t^{\alpha })\left[ g\left( \frac{b}{m}\right) +mg\left( \frac{b
}{m^{2}}\right) \right] \\
&& \\
&&-\left[ g\left( ta+m(1-t)\frac{b}{m}\right) +mg\left( t\frac{a}{m}+(1-t)
\frac{b}{m}\right) \right] .
\end{eqnarray*}
Thus, integrating over $t$ on $\left[ 0,1\right] $ we obtain the second
inequality. The proof is completed.
\end{proof}
\begin{remark}
If we choose $\alpha =1$ in Theorem \ref{t1}, we recover the two
Hermite-Hadamard-type inequalities for $\left( g,m\right) -$convex
dominated functions given in Theorem \ref{a}.
\end{remark}
\begin{theorem}
\label{t2} Let $g:\left[ 0,\infty \right) \rightarrow
\mathbb{R}
$ be an $\left( \alpha ,m\right) -$convex function with $\left( \alpha
,m\right) \in \left( 0,1\right] ^{2}$, let $f:\left[ 0,\infty \right)
\rightarrow
\mathbb{R}
$ be a $\left( g-\left( \alpha ,m\right) \right) -$convex dominated mapping,
and let $0\leq a<b.$ If $f\in L_{1}\left[ a,b\right] ,$ then the following
inequality holds:
\begin{eqnarray}
&&\left\vert \frac{1}{2}\left[ \frac{f\left( a\right) +f\left( b\right)
+m\alpha f\left( \frac{a}{m}\right) +m\alpha f\left( \frac{b}{m}\right) }{
\alpha +1}\right] -\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert
\label{h9} \\
&& \notag \\
&\leq &\frac{1}{2}\left[ \frac{g\left( a\right) +g\left( b\right) +m\alpha
g\left( \frac{a}{m}\right) +m\alpha g\left( \frac{b}{m}\right) }{\alpha +1}
\right] -\frac{1}{b-a}\int_{a}^{b}g\left( x\right) dx \notag
\end{eqnarray}
\end{theorem}
\begin{proof}
Since $f$ is a $\left( g-\left( \alpha ,m\right) \right) -$convex dominated
function, we have
\begin{eqnarray*}
&&\left\vert t^{\alpha }f\left( a\right) +m(1-t^{\alpha })f\left( \frac{b}{m}
\right) -f\left( ta+m(1-t)\frac{b}{m}\right) \right\vert \\
&& \\
&\leq &t^{\alpha }g\left( a\right) +m(1-t^{\alpha })g\left( \frac{b}{m}
\right) -g\left( ta+m(1-t)\frac{b}{m}\right)
\end{eqnarray*}
and
\begin{eqnarray*}
&&\left\vert t^{\alpha }f\left( b\right) +m(1-t^{\alpha })f\left( \frac{a}{m}
\right) -f\left( tb+m(1-t)\frac{a}{m}\right) \right\vert \\
&& \\
&\leq &t^{\alpha }g\left( b\right) +m(1-t^{\alpha })g\left( \frac{a}{m}
\right) -g\left( tb+m(1-t)\frac{a}{m}\right)
\end{eqnarray*}
for all $t\in \left[ 0,1\right] .$ Adding the above inequalities, we get
\begin{eqnarray*}
&&\left\vert t^{\alpha }\left[ f\left( a\right) +f\left( b\right) \right]
+m(1-t^{\alpha })\left[ f\left( \frac{a}{m}\right) +f\left( \frac{b}{m}
\right) \right] -f\left( ta+m(1-t)\frac{b}{m}\right) -f\left( tb+m(1-t)\frac{
a}{m}\right) \right\vert \\
&& \\
&\leq &t^{\alpha }\left[ g\left( a\right) +g\left( b\right) \right]
+m(1-t^{\alpha })\left[ g\left( \frac{a}{m}\right) +g\left( \frac{b}{m}
\right) \right] -g\left( ta+m(1-t)\frac{b}{m}\right) -g\left( tb+m(1-t)\frac{
a}{m}\right) .
\end{eqnarray*}
Integrating over $t\in \left[ 0,1\right] $ and then dividing the resulting
inequality by $2,$ we get the desired result.
An alternative proof runs as follows.
Since $f$ is $\left( g-\left( \alpha ,m\right) \right) -$convex dominated,
we have by Lemma \ref{l1} that $g+f$ and $g-f$ are $\left( \alpha ,m\right)
-$convex on $\left[ 0,b\right] ,$ and so, by the Hadamard-type inequality
for $\left( \alpha ,m\right) -$convex functions in $\left( \text{\ref{h3}}
\right) ,$ we have
\begin{eqnarray}
&&\frac{1}{b-a}\int_{a}^{b}\left( g+f\right) \left( x\right) dx \label{h10}
\\
&\leq &\frac{1}{2}\left[ \frac{\left( g+f\right) \left( a\right) +\left(
g+f\right) \left( b\right) +m\alpha \left( g+f\right) \left( \frac{a}{m}
\right) +m\alpha \left( g+f\right) \left( \frac{b}{m}\right) }{\alpha +1}
\right] \notag
\end{eqnarray}
and
\begin{eqnarray}
&&\frac{1}{b-a}\int_{a}^{b}\left( g-f\right) \left( x\right) dx \label{h11}
\\
&\leq &\frac{1}{2}\left[ \frac{\left( g-f\right) \left( a\right) +\left(
g-f\right) \left( b\right) +m\alpha \left( g-f\right) \left( \frac{a}{m}
\right) +m\alpha \left( g-f\right) \left( \frac{b}{m}\right) }{\alpha +1}
\right] \notag
\end{eqnarray}
By using the inequalities in $\left( \text{\ref{h10}}\right) $ and $\left(
\text{\ref{h11}}\right) $, we get the inequality in $\left( \text{\ref{h9}}
\right) $.
\end{proof}
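\begin{remark}
In particular, choosing $\alpha =m=1$ in Theorem \ref{t2}, the inequality $
\left( \text{\ref{h9}}\right) $ reduces to
\begin{equation*}
\left\vert \frac{f\left( a\right) +f\left( b\right) }{2}-\frac{1}{b-a}
\int_{a}^{b}f\left( x\right) dx\right\vert \leq \frac{g\left( a\right)
+g\left( b\right) }{2}-\frac{1}{b-a}\int_{a}^{b}g\left( x\right) dx,
\end{equation*}
the corresponding trapezoid-type estimate for convex dominated functions.
\end{remark}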
\section{$\left( g,r\right) -$convex dominated functions}
\begin{definition}
\label{d3} Let the positive function $g:\left[ a,b\right] \rightarrow
\mathbb{R}
$ be a given $r-$convex function on $\left[ a,b\right] $. The real function $
f:\left[ a,b\right] \rightarrow
\mathbb{R}
$ is called $\left( g,r\right) -$convex dominated on $\left[ a,b\right] $ if
the following condition is satisfied:
\begin{eqnarray*}
&&\left\vert M_{r}(f\left( x\right) ,f\left( y\right) ;\lambda )-f\left(
\lambda x+\left( 1-\lambda \right) y\right) \right\vert \\
&\leq &M_{r}(g\left( x\right) ,g\left( y\right) ;\lambda )-g\left( \lambda
x+\left( 1-\lambda \right) y\right)
\end{eqnarray*}
for all $x,y\in \left[ a,b\right] $ and $\lambda \in \left[ 0,1\right] .$
\end{definition}
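For convenience we recall that, consistently with the computations in the
proof below, $M_{r}(u,v;\lambda )$ denotes the weighted power mean of order
$r$ of $u,v>0$:
\begin{equation*}
M_{r}(u,v;\lambda )=\left\{
\begin{array}{ll}
\left( \lambda u^{r}+\left( 1-\lambda \right) v^{r}\right) ^{\frac{1}{r}}, &
r\neq 0, \\
u^{\lambda }v^{1-\lambda }, & r=0.
\end{array}
\right.
\end{equation*}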
\begin{theorem}
Let the positive function $g:\left[ a,b\right] \rightarrow
\mathbb{R}
$ be an $r-$convex function on $\left[ a,b\right] $, let $f:\left[ a,b\right]
\rightarrow
\mathbb{R}
$ be a $\left( g,r\right) -$convex dominated mapping, and let $0\leq a<b.$ If $
f\in L_{1}\left[ a,b\right] ,$ then the following inequality holds:
\begin{equation*}
\left\vert L_{r}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1}{
b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq L_{r}\left( g\left(
a\right) ,g\left( b\right) \right) -\frac{1}{b-a}\int_{a}^{b}g\left(
x\right) dx
\end{equation*}
where $L_{r}\left( f\left( a\right) ,f\left( b\right) \right) $ is as in
(\ref{L}).
\end{theorem}
\begin{proof}
By Definition \ref{d3} with $r=0$, assuming first that $f\left( a\right)
\neq f\left( b\right) ,$ we have
\begin{eqnarray*}
&&\left\vert f^{\lambda }\left( a\right) f^{1-\lambda }\left( b\right)
-f\left( \lambda a+\left( 1-\lambda \right) b\right) \right\vert \\
&\leq &g^{\lambda }\left( a\right) g^{1-\lambda }\left( b\right) -g\left(
\lambda a+\left( 1-\lambda \right) b\right) .
\end{eqnarray*}
Integrating the above inequality over $\lambda $ on $\left[ 0,1\right] ,$ we
have
\begin{eqnarray*}
&&\left\vert f\left( b\right) \int_{0}^{1}\left[ \frac{f\left( a\right) }{
f\left( b\right) }\right] ^{\lambda }d\lambda -\int_{0}^{1}f\left( \lambda
a+\left( 1-\lambda \right) b\right) d\lambda \right\vert \\
&& \\
&\leq &g\left( b\right) \int_{0}^{1}\left[ \frac{g\left( a\right) }{g\left(
b\right) }\right] ^{\lambda }d\lambda -\int_{0}^{1}g\left( \lambda a+\left(
1-\lambda \right) b\right) d\lambda .
\end{eqnarray*}
Since $\int_{0}^{1}c^{\lambda }d\lambda =\left( c-1\right) /\ln c$ for $c>0,$
$c\neq 1,$ and $\int_{0}^{1}f\left( \lambda a+\left( 1-\lambda \right)
b\right) d\lambda =\frac{1}{b-a}\int_{a}^{b}f\left( x\right) dx,$ a simple
calculation gives
\begin{eqnarray*}
&&\left\vert \frac{f\left( b\right) -f\left( a\right) }{\ln f\left( b\right)
-\ln f\left( a\right) }-\frac{1}{b-a}\int_{a}^{b}f\left( x\right)
dx\right\vert \\
&& \\
&\leq &\frac{g\left( b\right) -g\left( a\right) }{\ln g\left( b\right) -\ln
g\left( a\right) }-\frac{1}{b-a}\int_{a}^{b}g\left( x\right) dx.
\end{eqnarray*}
The above inequality can be written as
\begin{equation*}
\left\vert L_{r}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1}{
b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq L_{r}\left( g\left(
a\right) ,g\left( b\right) \right) -\frac{1}{b-a}\int_{a}^{b}g\left(
x\right) dx.
\end{equation*}
For $r=0$ and $f\left( a\right) =f\left( b\right) ,$ the same development
gives
\begin{eqnarray*}
&&\left\vert f\left( a\right) -f\left( \lambda a+\left( 1-\lambda \right)
b\right) \right\vert \\
&\leq &g\left( a\right) -g\left( \lambda a+\left( 1-\lambda \right) b\right)
\end{eqnarray*}
and this inequality can be written as
\begin{equation*}
\left\vert L_{r}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1}{
b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \leq L_{r}\left( g\left(
a\right) ,g\left( b\right) \right) -\frac{1}{b-a}\int_{a}^{b}g\left(
x\right) dx.
\end{equation*}
By Definition \ref{d3} with $r\neq 0,-1$ and $f\left( a\right) \neq f\left(
b\right) ,$ we have
\begin{eqnarray*}
&&\left\vert \left( \lambda f^{r}\left( a\right) +\left( 1-\lambda \right)
f^{r}\left( b\right) \right) ^{\frac{1}{r}}-f\left( \lambda a+\left(
1-\lambda \right) b\right) \right\vert \\
&& \\
&\leq &\left( \lambda g^{r}\left( a\right) +\left( 1-\lambda \right)
g^{r}\left( b\right) \right) ^{\frac{1}{r}}-g\left( \lambda a+\left(
1-\lambda \right) b\right) .
\end{eqnarray*}
Integrating the above inequality over $\lambda $ on $\left[ 0,1\right] ,$ we
have
\begin{eqnarray*}
&&\left\vert \frac{r}{r+1}\frac{f^{r+1}\left( a\right) -f^{r+1}\left(
b\right) }{f^{r}\left( a\right) -f^{r}\left( b\right) }-\frac{1}{b-a}
\int_{a}^{b}f\left( x\right) dx\right\vert \\
&& \\
&\leq &\frac{r}{r+1}\frac{g^{r+1}\left( a\right) -g^{r+1}\left( b\right) }{
g^{r}\left( a\right) -g^{r}\left( b\right) }-\frac{1}{b-a}
\int_{a}^{b}g\left( x\right) dx.
\end{eqnarray*}
The above inequality can be written as
\begin{eqnarray*}
&&\left\vert L_{r}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1
}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \\
&\leq &L_{r}\left( g\left( a\right) ,g\left( b\right) \right) -\frac{1}{b-a}
\int_{a}^{b}g\left( x\right) dx.
\end{eqnarray*}
For $r\neq 0$ and $f\left( a\right) =f\left( b\right) ,$ we similarly have
\begin{eqnarray*}
&&\left\vert \left( f^{r}\left( a\right) \right) ^{\frac{1}{r}}-f\left(
\lambda a+\left( 1-\lambda \right) b\right) \right\vert \\
&\leq &\left( g^{r}\left( a\right) \right) ^{\frac{1}{r}}-g\left( \lambda
a+\left( 1-\lambda \right) b\right) .
\end{eqnarray*}
Then integrating the above inequality over $\lambda $ on $\left[ 0,1\right]
, $ we have
\begin{eqnarray*}
&&\left\vert L_{r}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1
}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \\
&\leq &L_{r}\left( g\left( a\right) ,g\left( b\right) \right) -\frac{1}{b-a}
\int_{a}^{b}g\left( x\right) dx.
\end{eqnarray*}
Finally, let $r=-1.$ For $f\left( a\right) \neq f\left( b\right) $ we again
have
\begin{eqnarray*}
&&\left\vert \left( \lambda f^{-1}\left( a\right) +\left( 1-\lambda \right)
f^{-1}\left( b\right) \right) ^{-1}-f\left( \lambda a+\left( 1-\lambda
\right) b\right) \right\vert \\
&& \\
&\leq &\left( \lambda g^{-1}\left( a\right) +\left( 1-\lambda \right)
g^{-1}\left( b\right) \right) ^{-1}-g\left( \lambda a+\left( 1-\lambda
\right) b\right) .
\end{eqnarray*}
Integrating the above inequality over $\lambda $ on $\left[ 0,1\right] ,$ we
have
\begin{eqnarray*}
&&\left\vert \frac{f\left( a\right) f\left( b\right) }{f\left( b\right)
-f\left( a\right) }\int_{\frac{1}{f\left( a\right) }}^{\frac{1}{f\left(
b\right) }}\lambda ^{-1}d\lambda -\frac{1}{b-a}\int_{a}^{b}f\left( x\right)
dx\right\vert \\
&& \\
&\leq &\frac{g\left( a\right) g\left( b\right) }{g\left( b\right) -g\left(
a\right) }\int_{\frac{1}{g\left( a\right) }}^{\frac{1}{g\left( b\right) }
}\lambda ^{-1}d\lambda -\frac{1}{b-a}\int_{a}^{b}g\left( x\right) dx.
\end{eqnarray*}
The above inequality can be written as
\begin{eqnarray*}
&&\left\vert L_{-1}\left( f\left( a\right) ,f\left( b\right) \right) -\frac{1
}{b-a}\int_{a}^{b}f\left( x\right) dx\right\vert \\
&\leq &L_{-1}\left( g\left( a\right) ,g\left( b\right) \right) -\frac{1}{b-a}
\int_{a}^{b}g\left( x\right) dx.
\end{eqnarray*}
The proof is completed.
\end{proof}
\end{document}
\begin{document}
\vspace*{15mm}
\noindent{\bf QUANTUM NONLOCALITY AND INSEPARABILITY} \\[12mm]
\hspace*{15mm} Asher Peres \\[5mm]
\hspace*{15mm} {\it Department of Physics\\
\hspace*{15mm} Technion---Israel Institute of Technology\\
\hspace*{15mm} 32\,000 Haifa, Israel}\\[8mm]
\noindent
A quantum system consisting of two subsystems is {\it separable\/} if
its density matrix can be written as $\rho=\sum w_K\,\rho_K'\otimes
\rho_K''$, where $\rho_K'$ and $\rho_K''$ are density matrices for the
two subsystems, and the positive weights $w_K$ satisfy $\sum w_K=1$. A
necessary condition for separability is derived and is shown to be
more sensitive than Bell's inequality for detecting quantum
inseparability. Moreover, {\it collective\/} tests of Bell's inequality
(namely, tests that involve several composite systems simultaneously) may
sometimes lead to a violation of Bell's inequality, even if the latter
is satisfied when each composite system is tested separately.\\[7mm]
\noindent{\bf 1. INTRODUCTION}
From the early days of quantum mechanics, the question has often been
raised whether an underlying ``subquantum'' theory, that would be
deterministic or even stochastic, was viable. Such a theory would
presumably involve additional ``hidden'' variables, and the statistical
predictions of quantum theory would be reproduced by performing suitable
averages over these hidden variables.
A fundamental theorem was proved by Bell~\cite{Bell}, who showed that if
the constraint of {\it locality\/} was imposed on the hidden variables
(namely, if the hidden variables of two distant quantum systems were
themselves separable into two distinct subsets), then there was an
upper bound to the correlations of results of measurements that could be
performed on the two distant systems. That upper bound, mathematically
expressed by Bell's inequality~\cite{Bell}, is violated by some states
in quantum mechanics, for example the singlet state of two
\mbox{spin-$1\over2$} particles.
A variant of Bell's inequality, more general and more useful for
experimental tests, was later derived by Clauser, Horne, Shimony, and
Holt (CHSH)~\cite{chsh}. It can be written
\begin{equation} |\langle{AB}\rangle+\langle{AB'}\rangle+\langle{A'B}\rangle
-\langle{A'B'}\rangle|\leq 2. \label{CHSH}\end{equation}
On the left hand side, $A$ and $A'$ are two operators that can be
measured by an observer, conventionally called Alice. These
operators do not commute (so that Alice has to choose whether to
measure $A$ or $A'$) and each one is normalized to unit norm (the norm
of an operator is defined as the largest absolute value of any of its
eigenvalues). Likewise, $B$ and $B'$ are two normalized noncommuting
operators, any one of which can be measured by another, distant
observer (Bob). Note that each one of the {\it expectation\/} values in
Eq.~(\ref{CHSH}) can be calculated by means of quantum theory, if the
quantum state is known, and is also experimentally observable, by
repeating the measurements sufficiently many times, starting each time
with identically prepared pairs of quantum systems. The validity of
the CHSH inequality, for {\it all\/} combinations of
measurements independently performed on both systems, is a necessary
condition for the possible existence of a local hidden variable (LHV)
model for the results of these measurements. It is not in general a
sufficient condition, as will be shown below.
Note that, in order to test Bell's inequality, the two distant observers
independently {\it measure\/} subsystems of a composite quantum system,
and then {\it report\/} their results to a common site where that
information is analyzed~\cite{qt}. A related, but essentially
different, issue is whether a composite quantum system can be {\it
prepared\/} in a prescribed state by two distant observers who receive
{\it instructions\/} from a common source. For this to be possible, the
density matrix $\rho$ has to be separable into a sum of direct
products,
\begin{equation} \rho=\sum_K w_K\,\rho_K'\otimes\rho_K'', \label{sep}\end{equation}
where the positive weights $w_K$ satisfy $\sum w_K=1$, and where
$\rho_K'$ and $\rho_K''$ are density matrices for the two subsystems. A
separable system always satisfies Bell's inequality, but the converse
is not necessarily true~[4--7]. I shall derive below a simple algebraic
test, which is a necessary condition for the existence of the
decomposition (\ref{sep}). I shall then give some examples showing that
this criterion is more restrictive than Bell's inequality, or than the
$\alpha$-entropy inequality~\cite{H3a}.\\[7mm]
\noindent{\bf 2. SEPARABILITY OF DENSITY MATRICES}
The derivation of the separability condition is easiest when
the density matrix elements are written explicitly, with all their
indices~\cite{qt}. For example, Eq.~(\ref{sep}) becomes
\begin{equation} \rho_{m\mu,n\nu}=
\sum_K w_K\,(\rho'_K)_{mn}\,(\rho''_K)_{\mu\nu}. \end{equation}
Latin indices refer to the first subsystem, Greek indices to the second
one (the sub\-systems may have different dimensions). Note that this
equation can always be satisfied if we replace the quantum density
matrices by classical Liouville functions (and the discrete indices are
replaced by canonical variables, {\bf p} and {\bf q}). The reason is
that the only constraint that a Liouville function has to satisfy is
being non-negative. On the other hand, we want quantum density matrices
to have non-negative {\it eigenvalues\/}, rather than non-negative
elements, and the latter condition is more difficult to satisfy.
Let us now define a new matrix,
\begin{equation} \sigma_{m\mu,n\nu}\equiv\rho_{n\mu,m\nu}. \end{equation}
The Latin indices of $\rho$ have been transposed, but not the Greek
ones. This is not a unitary transformation but, nevertheless, the
$\sigma$ matrix is Hermitian. When Eq.~(\ref{sep}) is valid, we have
\begin{equation} \sigma=\sum_K w_K\,(\rho_K')^T\otimes\rho_K''. \label{sig}\end{equation}
Since the transposed matrices $(\rho'_K)^T\equiv(\rho'_K)^*$ are
non-negative matrices with unit trace, they are also legitimate
density matrices. It follows that {\it none of the eigenvalues of
$\sigma$ is negative\/}. This is a necessary condition for
Eq.~(\ref{sep}) to hold~\cite{PRL}.
Note that the eigenvalues of $\sigma$ are invariant under separate
unitary transformations, $U'$ and $U''$, of the bases used by the two
observers. In such a case, $\rho$ transforms as
\begin{equation} \rho\to (U'\otimes U'')\,\rho\,(U'\otimes U'')^\dagger, \end{equation}
and we then have
\begin{equation} \sigma\to
(U'^T\otimes U'')\,\sigma\,(U'^T\otimes U'')^\dagger, \end{equation}
which also is a unitary transformation, leaving the eigenvalues of
$\sigma$ invariant.
As an example, consider a pair of spin-$1\over2$ particles in
an impure singlet state, consisting of a singlet fraction $x$ and
a random fraction $(1-x)$~\cite{Exner}. Note that the ``random
fraction'' $(1-x)$ also includes singlets, mixed in equal proportions
with the three triplet components. We have
\begin{equation} \rho_{m\mu,n\nu}=x\,S_{m\mu,n\nu}+
(1-x)\,\delta_{mn}\,\delta_{\mu\nu}\,/4, \label{x}\end{equation}
where the density matrix for a pure singlet is given by
\begin{equation} S_{01,01}=S_{10,10}=-S_{01,10}=-S_{10,01}=\mbox{$1\over2$},
\label{singlet} \end{equation}
and all the other components of $S$ vanish. (The indices 0 and 1 refer
to any two ortho\-gonal states, such as ``up'' and ``down.'') A
straightforward calculation shows that $\sigma$ has three eigenvalues
equal to $(1+x)/4$, and the fourth eigenvalue is $(1-3x)/4$. This lowest
eigenvalue is positive if $x<{1\over3}$, and the separability criterion
is then fulfilled. This result may be compared with other criteria:
Bell's inequality holds for $x<1/\sqrt{2}$, and the $\alpha$-entropic
inequality~\cite{H3a} for $x<1/\sqrt{3}$. These are therefore much
weaker tests for detecting inseparability than the condition that was
derived here.
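These eigenvalues are easily confirmed numerically. The following is a
minimal sketch (assuming Python with NumPy; the helper name
\verb|partial_transpose| is ours): it builds the density matrix (\ref{x}),
transposes the Latin indices only, and prints the spectrum of $\sigma$.
\begin{verbatim}
import numpy as np

def partial_transpose(rho):
    # sigma_{m mu, n nu} = rho_{n mu, m nu}: transpose Latin indices only
    r = rho.reshape(2, 2, 2, 2)            # indices (m, mu, n, nu)
    return r.transpose(2, 1, 0, 3).reshape(4, 4)

S = np.zeros((4, 4))                       # singlet, basis |00>,|01>,|10>,|11>
S[1, 1] = S[2, 2] = 0.5
S[1, 2] = S[2, 1] = -0.5

for x in (0.2, 1/3, 0.4):
    rho = x * S + (1 - x) * np.eye(4) / 4  # impure singlet of Eq. above
    print(x, np.round(np.linalg.eigvalsh(partial_transpose(rho)), 4))
    # three eigenvalues (1+x)/4 and one eigenvalue (1-3x)/4
\end{verbatim}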
In this particular case, it happens that this necessary condition is
also a sufficient one. It is indeed known that if $x<{1\over3}$ it is
possible to write $\rho$ as a mixture of unentangled product
states~\cite{BBPSSW}. This suggests that the necessary
condition derived above ($\sigma$ has no negative eigenvalue) might
also be sufficient for any $\rho$. A proof of this conjecture was
indeed recently obtained~\cite{H3b} for composite systems having
dimensions $2\times2$ and $2\times3$. However, for higher dimensions,
the present necessary condition was shown {\it not\/} to be a
sufficient one.
As a second example, consider a mixed state consisting of a
fraction $x$ of the pure state $a|01\rangle+b|10\rangle$ (with
$|a|^2+|b|^2=1$), and fractions $(1-x)/2$ of the pure states
$|00\rangle$ and $|11\rangle$. We have
\begin{equation} \rho_{00,00}=\rho_{11,11}=(1-x)/2,\end{equation}
\begin{equation} \rho_{01,01}=x|a|^2, \end{equation}
\begin{equation} \rho_{10,10}=x|b|^2, \end{equation}
\begin{equation} \rho_{01,10}=\rho_{10,01}^*=xab^*, \end{equation}
and the other elements of $\rho$ vanish. It is easily seen that
the $\sigma$ matrix
has a negative determinant, and thus a negative eigenvalue, when
\begin{equation} x>(1+2|ab|)^{-1}. \end{equation}
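To see this explicitly, note that the partial transposition leaves the
diagonal elements of $\rho$ unchanged and moves $\rho_{01,10}$ and
$\rho_{10,01}$ to the positions $\sigma_{11,00}$ and $\sigma_{00,11}$.
Hence $\sigma$ contains, on the subspace spanned by $|00\rangle$ and
$|11\rangle$, the block
\begin{equation} \left(\begin{array}{cc} (1-x)/2 & xa^*b\\
 xab^* & (1-x)/2 \end{array}\right), \end{equation}
whose determinant $(1-x)^2/4-x^2|ab|^2$ is negative precisely when
$x>(1+2|ab|)^{-1}$.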
This threshold is lower than the one for a violation of Bell's inequality,
which requires~\cite{Gisin}
\begin{equation} x>[1+2|ab|(\sqrt{2}-1)]^{-1}. \end{equation}
An even more striking example is the mixture of a singlet and a
maximally polarized pair:
\begin{equation} \rho_{m\mu,n\nu}=x\,S_{m\mu,n\nu}+
(1-x)\,\delta_{m0}\,\delta_{n0}\,\delta_{\mu0}\,\delta_{\nu0}.\end{equation}
For any positive $x$, however small, this state is inseparable, because
$\sigma$ has a negative eigenvalue (its restriction to the subspace spanned
by $|00\rangle$ and $|11\rangle$ has determinant $-x^2/4$). On the other hand, the
Horodecki criterion~\cite{H3c} gives a very generous domain to the
validity of Bell's inequality: $x\leq 0.8$.\\[7mm]
\noindent{\bf 3. COLLECTIVE TESTS FOR NONLOCALITY}
The weakness of Bell's inequality as a test for inseparability is
due to the fact that the only use made of the density matrix
$\rho$ is for computing the probabilities of the various outcomes of
tests that may be performed on the subsystems of a {\it single\/}
composite system. On the other hand, an experimental verification of
that inequality necessitates the use of {\it many\/} composite systems,
all prepared in the same way. However, if many such systems are
actually available, we may also test them collectively, for example two
by two, or three by three, etc., rather than one by one. If we do that,
we must use, instead of $\rho$ (the density matrix of a single system),
a {\it new\/} density matrix, which is $\rho\otimes\rho$, or
$\rho\otimes\rho\otimes\rho$, in a higher dimensional space. It will now
be shown that there are some density matrices $\rho$ that satisfy
Bell's inequality, but for which $\rho\otimes\rho$, or
$\rho\otimes\rho\otimes\rho$, etc., violate that inequality~\cite{PRA}.
The example that will be discussed is that of the Werner
states~\cite{Werner} defined by Eq.~(\ref{x}). Let us consider
$n$ Werner pairs. Each one of the two observers has $n$
particles (one from each pair). They proceed as follows. First, they
subject their $n$-particle systems to suitably chosen local unitary
transformations, $U$, for Alice, and $V$, for Bob. Then, they test
whether each one of the particles labelled 2, 3, \ldots, $n$, has spin
up (for simplicity, it is assumed that all the particles are
distinguishable, and can be labelled unambiguously). Note that any
other test that they can perform is unitarily equivalent to the one for
spins up, as this involves only a redefinition of the matrices $U$ and
$V$. If any one of the $2(n-1)$ particles tested by Alice and Bob shows
spin down, the experiment is considered to have failed, and the two
observers must start again with $n$ new Werner pairs.
A similar elimination of ``bad'' samples is also inherent to any
experimental procedure where a failure of one of the detectors to fire
is handled by discarding the results registered by all the other
detectors: only when {\it all\/} the detectors fire are their results
included in the statistics. This obviously requires an exchange of {\it
classical\/} information between the observers. (There is a controversy
on whether a violation of Bell's inequality with post\-selected
data~\cite{postselect} is a valid test for
non\-locality~\cite{Santos}. I shall not discuss this issue here; I
only examine whether or not Bell's inequality is violated by the
post\-selected data.)
The calculations shown below will refer to the case $n=3$, for
definiteness. The generalization to any other value of $n$ is
straightforward. Spinor indices, for a single \mbox{spin-$1\over2$}
particle, will take the values 0 (for the ``up'' component of spin) and
1 (for the ``down'' component). The 16 components of the density matrix
of a Werner pair, consisting of a singlet fraction $x$ and a random
fraction $(1-x)$, are, in the standard direct product basis:
\begin{equation} \rho_{mn,st}=x\,S_{mn,st}+(1-x)\,\delta_{ms}\,\delta_{nt}\,/4,\end{equation}
where I am now using only Latin indices, contrary to what I did in
Eq.(\ref{x}); this is because Greek indices will be needed for another
purpose, as will be seen soon. Thus, now, the indices $m$ and $s$ refer
to Alice's particle, and $n$ and $t$ to Bob's particle.
When there are three Werner pairs, their combined density matrix is a
direct product $\rho\otimes\rho'\otimes\rho''$, or explicitly,
$\rho_{mn,st}\,\rho_{m'n',s't'}\,\rho_{m''n'',s''t''}$. The result of
the unitary transformations $U$ and $V$ is
\begin{equation} \rho\otimes\rho'\otimes\rho''\to
(U\otimes V)\,(\rho\otimes\rho'\otimes\rho'')\,
(U^\dagger\otimes V^\dagger). \label{newrho} \end{equation}
Explicitly, with all its indices, the $U$ matrix satisfies the unitarity
relation
\begin{equation} \sum_{mm'm''}U_{\mu\mu'\mu'',mm'm''}\;
U^*_{\lambda\lambda'\lambda'',mm'm''}=
\delta_{\mu\lambda}\;\delta_{\mu'\lambda'}\;\delta_{\mu''\lambda''}.
\label{unitary}\end{equation}
In order to avoid any possible ambiguity, Greek indices (whose values
are also 0 and 1) are now used to label spinor components {\it after\/}
the unitary transformations. Note that the indices without primes refer
to the two particles of the first Werner pair (the only ones that are
not tested for spin up) and the primed indices refer to all the other
particles (that are tested for spin up). The $V_{\nu\nu'\nu'',nn'n''}$
matrix elements of Bob's unitary transformation satisfy a relationship
similar to Eq.~(\ref{unitary}). The generalization to a larger number of
Werner pairs is obvious.
After the execution of the unitary transformation (\ref{newrho}), Alice
and Bob have to test that all the particles, except those labelled by
the first (unprimed) indices, have their spin up. They discard any set
of $n$ Werner pairs where that test fails, even once. The density matrix
for the remaining ``successful'' cases is thus obtained by retaining, on
the right hand side of Eq.~(\ref{newrho}), only the terms whose primed
components are zeros, and then renormalizing the resulting matrix to
unit trace. This means that only two of the $2^n$ rows of the $U$
matrix, namely those with indices 000\ldots\ and 100\ldots, are relevant
(and likewise for the $V$ matrix). The elimination of all the other
rows greatly simplifies the problem of optimizing these matrices. We
shall thus write, for brevity,
\begin{equation} U_{\mu 00,mm'm''}\to U_{\mu,mm'm''}, \end{equation}
where $\mu=0,1$. Then, on the left hand side of Eq.~(\ref{unitary}), we
effectively have two unknown row vectors, $U_0$ and $U_1$, each one with
$2^n$ components (labelled by Latin indices $mm'm''$). These vectors
have unit norm and are mutually orthogonal. Likewise, Bob has two
vectors, $V_0$ and $V_1$. The problem is to optimize these four vectors
so as to make the expectation value of the Bell operator~\cite{BMR},
\begin{equation} C:=AB+AB'+A'B-A'B', \end{equation}
as large as possible.
The optimization proceeds as follows. The new density matrix, for the
pairs of spin-\mbox{$1\over2$} particles that were {\it not\/} tested
by Alice and Bob for spin up
(that is, for the first pair in each set of $n$ pairs), is\\[4mm]
$ (\rho_{\rm new})_{\mu\nu,\sigma\tau}=$
\nopagebreak
\begin{equation} N\, U_{\mu,mm'm''}\,V_{\nu,nn'n''}\;\rho_{mn,st}\;\rho_{m'n',s't'}\;
\rho_{m''n'',s''t''}\,U^*_{\sigma,ss's''}\,V^*_{\tau,tt't''},\end{equation}
where $N$ is a normalization constant, needed to obtain unit trace
($N^{-1}$ is the probability that all the ``spin up'' tests were
successful). We then have~\cite{H3c}, for fixed $\rho_{\rm new}$ and
all possible choices of $C$,
\begin{equation} \max\,[{\rm Tr}\,(C\rho_{\rm new})]=2\sqrt{M}, \label{M}\end{equation}
where $M$ is the sum of the two largest eigenvalues of the real
symmetric matrix $T^\dagger T$, defined by
\begin{equation} T_{pq}:={\rm Tr}\,[(\sigma_p\otimes\sigma_q)\,\rho_{\rm new}].
\label{T} \end{equation}
(In the last equation, $\sigma_p$ and $\sigma_q$ are the Pauli spin
matrices.) Our problem is to find the vectors $U_\mu$ and $V_\nu$ that
maximize $M$.
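As an aside, Eqs.~(\ref{M}) and~(\ref{T}) are easy to evaluate numerically.
The following minimal sketch (assuming Python with NumPy; the function name
\verb|chsh_max| is ours) applies them to a {\it single\/} Werner pair,
i.e., with $\rho_{\rm new}=\rho$, and recovers the threshold $x=1/\sqrt2$
quoted in Sec.~2.
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = (sx, sy, sz)

S = np.zeros((4, 4), dtype=complex)        # singlet projector
S[1, 1] = S[2, 2] = 0.5
S[1, 2] = S[2, 1] = -0.5

def chsh_max(rho):
    # max Tr(C rho) = 2 sqrt(M), M = sum of the two largest eigenvalues
    # of T^t T, where T_pq = Tr[(sigma_p x sigma_q) rho]
    T = np.array([[np.trace(np.kron(p, q) @ rho).real for q in paulis]
                  for p in paulis])
    w = np.sort(np.linalg.eigvalsh(T.T @ T))
    return 2 * np.sqrt(w[-1] + w[-2])

for x in (0.5, 1/np.sqrt(2), 0.8):
    rho = x * S + (1 - x) * np.eye(4) / 4
    print(x, chsh_max(rho))                # exceeds 2 only for x > 1/sqrt(2)
\end{verbatim}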
At this point, some simplifying assumptions are helpful.
Since all matrix elements $\rho_{mn,st}$ are real, we can restrict the
search to vectors $U_\mu$ and $V_\nu$ that have only real components.
Furthermore, the situations seen by Alice and Bob are completely
symmetric, except for the opposite signs in the standard
expression for the singlet state:
\begin{equation} \textstyle{\psi=
\left[{1\choose0}{0\choose1}-{0\choose1}{1\choose0}\right]
\;/\sqrt{2}.} \end{equation}
These signs can be made to become the same by redefining the
basis, for example by representing the ``down'' state of Bob's particle
by the symbol ${0\choose-1}$, {\it without\/} changing the basis used
for Alice's particle. This unilateral change of basis is equivalent a
substitution
\begin{equation} V_{\nu,nn'n''}\to(-1)^{\nu+n+n'+n''}\,V_{\nu,nn'n''},\end{equation}
on Bob's side. The minus signs in Eq.~(\ref{singlet}) also disappear,
and there is complete symmetry for the two observers. It is then
plausible that, with the new basis, the optimal $U_\nu$ and $V_\nu$
are the same. Therefore, when we return to the original basis and
notations, they satisfy
\begin{equation} V_{\nu,nn'n''}=(-1)^{\nu+n+n'+n''}\,U_{\nu,nn'n''}.\end{equation}
We shall henceforth restrict our search to pairs of vectors that satisfy
this relation.
After all the above simplifications, the problem that has to be solved
is the following: find two mutually orthogonal unit vectors, $U_0$ and
$U_1$, each one with $2^n$ real components, that maximize the value of
$M(U)$ defined by Eqs.~(\ref{M}) and~(\ref{T}). This is a standard
optimization problem which can be solved numerically. Since the
function $M(U)$ is bounded, it has at least one maximum. It may,
however, have more than one: there may be several distinct local
maxima with different values. A numerical search leads to one
of these maxima, but not necessarily to the largest one. The outcome
may depend on the initial point of the search. It is therefore
imperative to start from numerous randomly chosen points in order to
ascertain, with reasonable confidence, that the largest maximum has
indeed been found.\\[7mm]
\noindent{\bf 4. NUMERICAL RESULTS}
In all the cases that were examined, $M(U)$ turned out to have a local
maximum for the following simple choice:
\begin{equation} U_{0,00\ldots}=U_{1,11\ldots}=1, \label{xor}\end{equation}
and all the other components of $U_0$ and $U_1$ vanish. Recall that the
``vectors'' $U_0$ and $U_1$ actually are two rows, $U_{000\ldots}$
and $U_{100\ldots}$, of a unitary matrix of order $2^n$ (the
other rows are irrelevant because of the elimination of all the
experiments in which a particle failed the spin-up test). In the case
$n=2$, one of the unitary matrices having the property (\ref{xor}) is a
simple permutation matrix that can be implemented by a ``controlled-{\sc
not}'' quantum gate~\cite{cnot}. The corresponding Boolean operation is
known as {\sc xor} (exclusive {\sc or}). For larger values of $n$,
matrices that satisfy Eq.~(\ref{xor}) will also be called {\sc
xor}-transformations.
It was found, by numerical calculations, that {\sc xor}-transformations
always are the optimal ones for $n=2$. They are also optimal for $n=3$
when the singlet fraction $x$ is less than 0.57, and for $n=4$ when
$x<0.52$. For larger values of $x$, more complicated forms of $U_0$ and
$U_1$ give better results. The existence of two different sets of maxima
may be seen in Fig.~1: there are discontinuities in the slopes of the
graphs for $n=3$ and~4, that occur at the values of $x$ where the
largest value of $\langle{C}\rangle$ jumps from one local maximum
to another one.
For $n=5$, a complete determination of $U_0$ and $U_1$ requires the
optimization of 64 parameters subject to 3 constraints, more than my
workstation could handle. I therefore considered only {\sc
xor}-transformations, which are likely to be optimal for $x\,
\mbox{\raisebox{-2pt}{{\scriptsize $\stackrel{\textstyle <}{\sim}$}}}\,
0.5$. In particular, for $x=0.5$ (the value that was used in Werner's
original work \cite{Werner}), the result is $\langle C\rangle=2.0087$,
and the CHSH inequality is violated. This violation occurs in spite of
the existence of an explicit LHV model that gives correct results if
the Werner pairs are tested one by one.
These results prompt a new question: can we get stronger {\it
inseparability\/} criteria by considering $\rho\otimes\rho$, or higher
tensor products? It is easily seen that no further progress can be
achieved in this way. If $\rho$ is separable as in Eq.~(\ref{sep}), so
is $\rho\otimes\rho$. Moreover, the partly transposed matrix
corresponding to $\rho\otimes\rho$ simply is $\sigma\otimes\sigma$, so
that if no eigenvalue of $\sigma$ is negative, then
$\sigma\otimes\sigma$ too has no negative eigenvalue.\\[7mm]
\noindent{\bf ACKNOWLEDGMENT}
This work was supported by the Gerard Swope Fund, and the Fund for
Encouragement of Research.
\parindent 0mm
{\bf Caption of figure}
FIG. 1. \ Maximal expectation value of the Bell operator, versus the
singlet fraction in the Werner state, for collective tests performed on
several Werner pairs (from bottom to top of the figure, 1, 2, 3, and 4
pairs, respectively). The CHSH inequality is violated when
$\langle{C}\rangle>2$.
\end{document}
\begin{document}
\title{${}$ \vskip -1.2cm Half-arc-transitive graphs of arbitrary even valency greater than 2}
\author{
Marston D.E. Conder
\\[+1pt]
{\normalsize Department of Mathematics, University of Auckland,}\\[-4pt]
{\normalsize Private Bag 92019, Auckland 1142, New Zealand} \\[-3pt]
{\normalsize [email protected]}\\[+4pt]
\and
Arjana \v{Z}itnik
\\[+1pt]
{\normalsize Faculty of Mathematics and Physics, University of Ljubljana,} \\[-4pt]
{\normalsize Jadranska 19, 1000 Ljubljana, Slovenia} \\[-3pt]
{\normalsize [email protected]}
}
\date{}
\maketitle
\begin{abstract}
A {\em half-arc-transitive\/} graph is a regular graph that is both vertex- and
edge-transitive, but is not arc-transitive.
If such a graph has finite valency, then its valency is even, and greater than $2$.
In 1970, Bouwer proved that there exists a half-arc-transitive graph of every
even valency greater than 2, by giving a construction for a family of graphs now
known as $B(k,m,n)$, defined for every triple $(k,m,n)$ of integers greater than $1$
with $2^m \equiv 1 \mod n$.
In each case, $B(k,m,n)$ is a $2k$-valent vertex- and edge-transitive graph
of order $mn^{k-1}$, and Bouwer showed that $B(k,6,9)$ is half-arc-transitive for all $k > 1$.
For almost 45 years the question of exactly which of Bouwer's graphs are
half-arc-transitive and which are arc-transitive has remained open, despite many
attempts to answer it.
In this paper, we use a cycle-counting argument to prove that almost all of the graphs
constructed by Bouwer are half-arc-transitive. In fact, we prove that $B(k,m,n)$
is arc-transitive only when $n = 3$, or $(k,n) = (2,5)$,
or $(k,m,n) = (2,3,7)$ or $(2,6,7)$ or $(2,6,21)$.
In particular, $B(k,m,n)$ is half-arc-transitive whenever $m > 6$ and $n > 5$.
This gives an easy way to prove that there are infinitely many half-arc-transitive
graphs of each even valency $2k > 2$. \\
\noindent
Keywords: graph, half-arc transitive, edge-transitive, vertex-transitive, arc-transitive,
automorphisms, cycles
\noindent
Mathematics Subject Classification (2010):
05E18,
20B25.
\end{abstract}
\section{Introduction}
\label{intro}
In the 1960s, W.T.~Tutte \cite{Tutte} proved that if a connected regular graph of odd valency is both
vertex-transitive and edge-transitive, then it is also arc-transitive.
At the same time, Tutte observed that it was not known whether the same was true for even valency.
Shortly afterwards, I.Z.~Bouwer \cite{Bouwer} constructed a family of vertex- and edge-transitive
graphs of any given even valency $2k > 2$, that are not arc-transitive.
Any graph that is vertex- and edge-transitive but not arc-transitive is now known
as a {\em half-arc-transitive\/} graph. Every such graph has even valency, and
since connected graphs of valency $2$ are cycles, which are arc-transitive,
the valency must be at least $4$.
Quite a lot is now known about half-arc-transitive graphs, especially in the
$4$-valent case --- see \cite{ConderMarusic,ConderPotocnikSparl,Marusic1998b,DM05}
for example.
Also a lot of attention has been paid recently to half-arc-transitive group actions
on edge-transitive graphs --- see \cite {HujdurovicKutnarMarusic} for example.
In contrast, however, relatively little is known about half-arc-transitive graphs of higher valency.
Bouwer's construction produced a vertex- and edge-transitive graph $B(k,m,n)$
of order $mn^{k-1}$ and valency $2k$ for every triple $(k,m,n)$ of integers
greater than $1$ such that $2^{m} \equiv 1$ mod $n$, and Bouwer proved in \cite{Bouwer}
that $B(k,6,9)$ is half-arc-transitive for every $k > 1$. Bouwer also showed that the
latter is not true for every triple $(k,m,n)$; for example, $B(2,3,7)$, $B(2,6,7)$ and $B(2,4,5)$
are arc-transitive.
For the last $45$ years, the question of exactly which of Bouwer's graphs are
half-arc-transitive and which are arc-transitive has remained open, despite a
number of attempts to answer it.
Three decades after Bouwer's paper, C.H.~Li and H.-S.~Sim \cite{LiSim}
developed a quite different construction for a family of half-arc-transitive graphs,
using Cayley graphs for metacyclic $p$-groups,
and in doing this, they proved the existence of infinitely many half-arc-transitive graphs
of each even valency $2k > 2$.
Their approach, however, required a considerable amount of group-theoretic analysis.
In this paper, we use a cycle-counting argument to prove that almost all of the graphs
constructed by Bouwer in \cite{Bouwer} are half-arc-transitive, and thereby give an
easier proof of the fact that there exist infinitely many half-arc-transitive graphs
of each even valency $2k > 2$.
Specifically, we prove the following:
\begin{theorem}
\label{thm:main}
The
graph $B(k,m,n)$ is arc-transitive if and only if $n = 3$, or $(k,n) = (2,5)$,
or $(k,m,n) = (2,3,7)$ or $(2,6,7)$ or $(2,6,21)$.
In particular, $B(k,m,n)$ is half-arc-transitive whenever $m > 6$ and $n > 5$.
\end{theorem}
By considering the $6$-cycles containing a given $2$-arc, we prove in Section~\ref{sec:main}
that $B(k,m,n)$ is half-arc-transitive whenever $m > 6$ and $n > 7$,
and then we adapt this for the other half-arc-transitive cases in Section~\ref{sec:other}.
In between, we prove arc-transitivity in the cases given in the above theorem
in Section~\ref{sec:AT}.
But first we give some additional background about the Bouwer graphs in
the following section.
\section{Further background}
\label{further}
First we give the definition of the Bouwer graph $B(k,m,n)$, for every triple
$(k,m,n)$ of integers greater than $1$ such that $2^{m} \equiv 1$ mod $n$.
The vertices of $B(k,m,n)$ are the $k$-tuples $(a,b_2,b_3,\dots,b_k)$
with $a \in \mathbb Z_m$ and $b_j \in \mathbb Z_n$ for $2 \le j \le k$.
We will sometimes write a given vertex as $(a,{\bf b})$, where ${\bf b} = (b_2,b_3,\dots,b_k)$.
Any two such vertices are adjacent if they can be written as
$(a,{\bf b})$ and $(a+1,{\bf c})$
where either ${\bf c} = {\bf b}$,
or ${\bf c} = (c_2,c_3,\dots,c_k)$ differs from ${\bf b} = (b_2,b_3,\dots,b_k)$ in exactly
one position, say the $(j\!-\!1)$st position, where $c_j = b_j +2^a$.
Note that the condition $2^{m} \equiv 1$ mod $n$ ensures that $2$ is a unit mod $n$,
and hence that $n$ is odd. Also note that the graph is simple.
In what follows, we let ${\bf e}_j$ be the element of $(\mathbb Z_n)^{k-1}$ with $j\hskip 1pt$th term
equal to $1$ and all other terms equal to $0$, for $1 \le j < k$.
With this notation, we see that the neighbours of a vertex $(a,{\bf b})$
are precisely the vertices $(a+1,{\bf b})$, $(a-1,{\bf b})$, $(a+1,{\bf b}+2^{a}{\bf e}_j)$
and $(a-1,{\bf b}-2^{a-1}{\bf e}_j)$ for $1 \le j < k$, and in particular, this shows that
the graph $B(k,m,n)$ is regular of valency $2k$.
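For readers who wish to experiment, the adjacency structure of $B(k,m,n)$
can be generated directly from this definition. The following is a minimal
sketch in Python (the function name \verb|bouwer_graph| and the test case
$B(2,6,9)$ are our choices).
\begin{verbatim}
from itertools import product

def bouwer_graph(k, m, n):
    # Vertices are tuples (a, b_2, ..., b_k) with a in Z_m and b_j in Z_n.
    # (a, b) ~ (a+1, c) when c = b or c differs from b in one coordinate
    # by 2^a (mod n).  Requires 2^m = 1 (mod n).
    assert pow(2, m, n) == 1
    vertices = [(a,) + b for a in range(m)
                for b in product(range(n), repeat=k - 1)]
    adj = {v: set() for v in vertices}
    for v in vertices:
        a, b = v[0], list(v[1:])
        targets = [tuple(b)]
        for j in range(k - 1):
            c = list(b)
            c[j] = (c[j] + pow(2, a, n)) % n
            targets.append(tuple(c))
        for c in targets:
            w = ((a + 1) % m,) + c
            adj[v].add(w)
            adj[w].add(v)
    return adj

G = bouwer_graph(2, 6, 9)                   # Bouwer's B(2,6,9), 54 vertices
assert all(len(nbrs) == 4 for nbrs in G.values())   # valency 2k = 4
\end{verbatim}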
Next, we recall that for all $(k,m,n)$, the graph $B(k,m,n)$ is
both vertex- and edge-transitive; see \cite[Proposition~1]{Bouwer}.
Also $B(k,m,n)$ is bipartite if and only if $m$ is even.
Moreover, it is easy to see that $B(k,m,n)$ has the following three automorphisms:
\noindent (i) $\theta$, of order $k-1$, taking
each vertex $(a,{\bf b}) = (a,b_2,b_3,\dots,b_{k-1},b_k)$
to the vertex $(a,{\bf b}') = (a,b_3,b_4,\dots,b_k,b_2)$, obtained by
cyclically shifting its last $k-1$ entries,
\noindent (ii) $\tau$, of order $m$, taking
each vertex $(a,{\bf b}) = (a,b_2,b_3,\dots,b_{k-1},b_k)$
to the vertex $(a,{\bf b}'') = (a+1,2b_2,2b_3,\dots,2b_{k-1},2b_k)$,
obtained by increasing its first entry $a$ by $1$ and multiplying the others by $2$,
and
\noindent (iii) $\psi$, of order $2$, taking
each vertex $(a,{\bf b}) = (a,b_2,b_3,\dots,b_{k-1},b_k)$
to the vertex $(a,{\bf b}'') = (a,2^{a}-1-(b_2+b_3+\dots+b_k),b_3,\dots,b_{k-1},b_k)$,
obtained by replacing its second entry $b_2$ by $2^{a}-1-(b_2+b_3+\dots+b_k)$.
\noindent
(Note: in the notation of \cite{Bouwer}, the automorphism $\psi$ is $T_2 \circ S_2$.)
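These maps can also be checked mechanically. For instance, the following
lines (continuing the Python sketch above, for $k=2$ and $(m,n)=(6,9)$,
where $\theta$ is trivial) verify that $\tau$ and $\psi$ preserve adjacency;
since both are clearly bijections on the vertex set, they are automorphisms.
\begin{verbatim}
m, n = 6, 9
G = bouwer_graph(2, m, n)

def tau(v):        # (a, b) -> (a+1, 2b)
    return ((v[0] + 1) % m, (2 * v[1]) % n)

def psi(v):        # for k = 2: (a, b) -> (a, 2^a - 1 - b)
    return (v[0], (pow(2, v[0], n) - 1 - v[1]) % n)

for phi in (tau, psi):
    assert all(phi(w) in G[phi(v)] for v in G for w in G[v])
\end{verbatim}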
The automorphisms $\theta$ and $\psi$ both fix the `zero' vertex $(0,{\bf 0}) = (0,0,\dots,0)$,
and $\theta$ induces a permutation of its $2k$ neighbours that fixes each of the two
vertices $(1,{\bf 0}) = (1,0,0,\dots,0)$ and $(-1,{\bf 0}) = (-1,0,0,\dots,0)$
and induces two $(k\!-\!1)$-cycles on the others, while $\psi$ swaps
$(1,{\bf 0})$ with $(1,{\bf e}_1) = (1,1,0,\dots,0)$,
and swaps $(-1,{\bf 0})$
with $(-1,-2^{-1}{\bf e}_1) = (-1,-2^{-1},0,\dots,0)$, and fixes all the others.
It follows that the subgroup generated by $\theta$ and $\psi$ fixes the
vertex $(0,{\bf 0})$ and has two orbits of length $k$ on its neighbours,
with one orbit consisting of the vertices of the form $(1,{\bf b})$
where ${\bf b} = {\bf 0}$ or ${\bf e}_j$ for some $j$,
and the other consisting of those of the form $(-1,{\bf c})$
where ${\bf c} = {\bf 0}$ or $-2^{-1}{\bf e}_j$ for some $j$.
By edge-transitivity, the graph $B(k,m,n)$ is arc-transitive if and only if it admits
an automorphism that interchanges the `zero' vertex $(0,{\bf 0})$ with one of its
neighbours, in which case the above two orbits of $\langle \theta, \psi \rangle$
on the neighbours of $(0,{\bf 0})$ are merged into one under the full automorphism
group.
We will use the automorphism $\tau$ in the next section.
We will also use the following, which is valid in all cases, not just those with
$m > 6$ and $n > 7$ considered in the next section.
\begin{lemma}
\label{threearcs}
Every $3$-arc $\,v_0 \!\sim\! v_1 \!\sim\! v_2 \!\sim\! v_3$ in $X$ with first vertex $v_0 = (0,{\bf 0})$
is of one of the following forms, with $r,s,t \in \{1,\dots,k\!-\!1\}$ in each case$\,:$
\\[+10pt]
\begin{tabular}{rl}
\ {\rm (1)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,{\bf 0}) \!\sim\! (3,{\bf d})$,
where $\,{\bf d} = {\bf 0}$ or $\,4{\bf e}_r$, \\[+4 pt]
\ {\rm (2)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,{\bf 0}) \!\sim\! (1,-2{\bf e}_r)$, \\[+4 pt]
\ {\rm (3)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (3,2{\bf e}_r\!+\!{\bf d})$,
where $\,{\bf d} = {\bf 0}$ or $\,4{\bf e}_s$, \\[+4 pt]
\ {\rm (4)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (1,2{\bf e}_r\!-\!{\bf d})$,
where $\,{\bf d} = {\bf 0}$ or $\,2{\bf e}_s$ with $s \ne r$, \\[+4 pt]
\ {\rm (5)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (1,-{\bf e}_r\!+\!{\bf d})$,
where $\,{\bf d} = {\bf 0}$ or $\,{\bf e}_s$ with $s \ne r$, \\[+4 pt]
\ {\rm (6)} & $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r\!-\!{\bf d})$,
where $\,{\bf d} = {\bf 0}$ or $\,2^{-1}{\bf e}_s$, \\[+4 pt]
\ {\rm (7)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r) \!\sim\! (3,{\bf e}_r\!+\!{\bf d})$,
where $\,{\bf d} = {\bf 0}$ or $\,4{\bf e}_s$, \\[+4 pt]
\ {\rm (8)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r) \!\sim\! (1,{\bf e}_r\!-\!2{\bf e}_s)$, \\[+4 pt]
\end{tabular}
\begin{tabular}{rl}
\ {\rm (9)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r\!+\!2{\bf e}_s) \!\sim\! (3,{\bf e}_r\!+\!2{\bf e}_s\!+\!{\bf d})$,
where $\,{\bf d} = {\bf 0}$ or $\,4{\bf e}_t$, \\[+4 pt]
{\rm (10)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r\!+\!2{\bf e}_s) \!\sim\! (1,{\bf e}_r\!+\!2{\bf e}_s\!-\!{\bf d})$,
\\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2{\bf e}_t$ with $t \ne s$, \\[+4 pt]
{\rm (11)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (1,{\bf e}_r+{\bf e}_s)$, \\[+4 pt]
{\rm (12)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (-1,{\bf e}_r\!-\!{\bf d})$,
where $\,{\bf d} = {\bf 0}$ or $\,2^{-1}{\bf e}_s$, \\[+4 pt]
{\rm (13)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r-{\bf e}_s) \!\sim\! (1,{\bf e}_r-{\bf e}_s\!+\!{\bf d})$,
where $s \ne r$, \\ & \quad and $\,{\bf d} = {\bf 0}$ or $\,{\bf e}_t$ with $t \ne s$, \\[+4 pt]
{\rm (14)} & $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r-{\bf e}_s) \!\sim\! (-1,{\bf e}_r-{\bf e}_s\!-\!{\bf d})$,
where $s \ne r$, \\ & \quad and $\,{\bf d} = {\bf 0}$ or $\,2^{-1}{\bf e}_t$, \\[+4 pt]
{\rm (15)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,2^{-1}{\bf e}_r) \!\sim\! (1,2^{-1}{\bf e}_r\!+\!{\bf d})$,
where $\,{\bf d} = {\bf 0}$ or $\,{\bf e}_s$, \\[+4 pt]
{\rm (16)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,2^{-1}{\bf e}_r) \!\sim\! (-1,2^{-1}{\bf e}_r\!-\!{\bf d})$,
\\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-1}{\bf e}_s$ with $s \ne r$, \\[+4 pt]
{\rm (17)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,{\bf 0}) \!\sim\! (-1,2^{-2}{\bf e}_r)$, \\[+4 pt]
{\rm (18)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,{\bf 0}) \!\sim\! (-3,-{\bf d})$,
where $\,{\bf d} = {\bf 0}$ or $\,2^{-3}{\bf e}_r$, \\[+4 pt]
{\rm (19)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,-2^{-2}{\bf e}_r) \!\sim\! (-1,-2^{-2}{\bf e}_r\!+\!{\bf d})$,
\\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-2}{\bf e}_s$ with $s \ne r$, \\[+4 pt]
{\rm (20)} & $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,-2^{-2}{\bf e}_r) \!\sim\! (-3,-2^{-2}{\bf e}_r\!-\!{\bf d})$,
\\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-3}{\bf e}_s$, \\[+4 pt]
{\rm (21)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r) \!\sim\! (1,-2^{-1}{\bf e}_r\!+\!{\bf d})$,
\\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,{\bf e}_s$, \\[+4 pt]
{\rm (22)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s)$, \\[+4 pt]
{\rm (23)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s) \!\sim\! (1,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s\!+\!{\bf d})$,
\\ & \quad where $s \ne r$, and $\,{\bf d} = {\bf 0}$ or $\,{\bf e}_t$, \\[+4 pt]
{\rm (24)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s) \!\sim\! (-1,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s\!-\!{\bf d})$,
\\ & \quad where $s \ne r$, and $\,{\bf d} = {\bf 0}$ or $\,2^{-1}{\bf e}_t$ with $t \ne s$, \\[+4 pt]
{\rm (25)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r\!+\!2^{-2}{\bf e}_s)$, \\[+4 pt]
{\rm (26)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r) \!\sim\! (-3,-2^{-1}{\bf e}_r\!-\!{\bf d})$,
\\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-3}{\bf e}_s$, \\[+4 pt]
{\rm (27)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s) \!\sim\! (-1,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s\!+\!{\bf d})$,
\\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-2}{\bf e}_t$ with $t \ne s$, \\[+4 pt]
{\rm (28)} & $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s) \!\sim\! (-3,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s\!-\!{\bf d})$,
\\ & \quad where $\,{\bf d} = {\bf 0}$ or $\,2^{-3}{\bf e}_t$. \\
\end{tabular}
\end{lemma}
\begin{proof}
This follows directly from the definition of $X = B(k,m,n)$.
\end{proof}
Note that the 28 cases given in Lemma~\ref{threearcs} fall naturally into $14$ pairs,
with each pair determined by the form of the initial $2$-arc $\,v_0 \!\sim\! v_1 \!\sim\! v_2$.
Also it is easy to see that
the number of $3$-arcs in each case is \\[-4pt]
$$\begin{cases}
\ k & \hbox{in cases 1 and 18,} \\[-1 pt]
\ k\!-\!1 & \hbox{in cases 2 and 17,} \\[-1 pt]
\ k(k\!-\!1) & \hbox{in cases 3, 6, 7, 12, 15, 20, 21 and 26,} \\[-1 pt]
\ (k\!-\!1)^2 & \hbox{in cases 4, 5, 8, 11, 16, 19, 22 and 25,} \\[-1 pt]
\ k(k\!-\!1)^2 & \hbox{in cases 9 and 28,} \\[-1 pt]
\ (k\!-\!1)^3 & \hbox{in cases 10 and 27,} \\[-1 pt]
\ (k\!-\!1)^{2}(k\!-\!2) & \hbox{in cases 13 and 24,} \\[-1 pt]
\ k(k\!-\!1)(k\!-\!2) & \hbox{in cases 14 and 23,}
\end{cases}
$$
and the total of all these numbers is $2k(2k\!-\!1)^2$, as expected.
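As a quick check, this total can be confirmed computationally; for instance,
continuing the Python sketch above with $k=2$ and $(m,n)=(6,9)$:
\begin{verbatim}
def arcs(adj, v0, length):
    # s-arcs from v0: walks in which any three consecutive vertices differ
    walks = [(v0,)]
    for _ in range(length):
        walks = [w + (u,) for w in walks for u in adj[w[-1]]
                 if len(w) < 2 or u != w[-2]]
    return walks

G = bouwer_graph(2, 6, 9)
print(len(arcs(G, (0, 0), 3)))     # 2k(2k-1)^2 = 36 for k = 2
\end{verbatim}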
\section{The main approach}
\label{sec:main}
Let $k$ be any integer greater than $1$, and suppose that $m > 6$ and $n > 7$.
We will prove that in every such case, the graph $X = B(k,m,n)$ is not arc-transitive,
and is therefore half-arc-transitive.
We do this simply by considering the ways in which a given $2$-arc or $3$-arc
lies in a cycle of length $6$ in $X$.
(For any positive integer $s$, an $s$-arc in a simple graph is a walk
$\,v_0 \!\sim\! v_1 \!\sim\! v_2 \!\sim\! \dots \!\sim\! v_s$ of length $s$ in which any three consecutive vertices are distinct.)
By vertex-transitivity, we can consider what happens locally around the vertex $(0,{\bf 0})$.
\begin{lemma}
\label{girth}
The girth of $X$ is $6$.
\end{lemma}
\begin{proof}
First, $X = B(k,m,n)$ is simple, by definition.
Also there are no cycles of length $3$, $4$ or $5$ in $X$, since in the list of cases
for a $3$-arc $\,v_0 \!\sim\! v_1 \!\sim\! v_2 \!\sim\! v_3$ in $X$ with first vertex $v_0 = (0,{\bf 0})$
given by Lemma~\ref{threearcs}, the vertex $v_3$ is never equal to $v_0$,
the vertex $v_1$ is uniquely determined by $v_2$,
and every possibility for $v_2$ is different from every possibility for $v_3$.
On the other hand, there are certainly some cycles of length $6$ in $X$, such as
$(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_1) \!\sim\! (1,
2{\bf e}_1) \!\sim\! (0,{\bf e}_1) \!\sim\! (1,{\bf e}_1) \!\sim\! (0,{\bf 0})$.
\end{proof}
Next, we can
find all $6$-cycles
based at the vertex $v_0 = (0,{\bf 0})$ in $X$.
\begin{lemma}
\label{sixcycles}
Up to reversal, every $6$-cycle based at the vertex $v_0 = (0,{\bf 0})$
has exactly one of the forms below, with $r$, $s,$ $t$ all different
when they appear$\,:$
\\[+5pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (1,2{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (1,-{\bf e}_r) \!\sim\! (2,{\bf e}_r) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (1,{\bf e}_s\!-\!{\bf e}_r) \!\sim\! (0,{\bf e}_s\!-\!{\bf e}_r) \!\sim\! (1,{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,2{\bf e}_s\!+\!{\bf e}_r) \!\sim\! (1,2{\bf e}_s\!-\!{\bf e}_r) \!\sim\! (0,{\bf e}_s\!-\!{\bf e}_r) \!\sim\! (1,{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (1,{\bf e}_s\!+\!{\bf e}_r) \!\sim\! (0,{\bf e}_s) \!\sim\! (1,{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (-1,2^{-1}{\bf e}_r) \!\sim\! (0,2^{-1}{\bf e}_r) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r\!-\!{\bf e}_s) \!\sim\! (1,{\bf e}_r\!-\!{\bf e}_s\!+\!{\bf e}_t) \!\sim\! (0,{\bf e}_t\!-\!{\bf e}_s) \!\sim\! (1,{\bf e}_t) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r\!-\!{\bf e}_s) \!\sim\! (-1,2^{-1}{\bf e}_r\!-\!{\bf e}_s) \!\sim\! (0,2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s)$ \\ ${}$ \qquad $\!\sim\! (-1,-\!2^{-1}{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,2^{-1}{\bf e}_r) \!\sim\! (1,2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r) $ \\ ${}$ \qquad $\!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,2^{-1}{\bf e}_r) \!\sim\! (-1,2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s) \!\sim\! (0,2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s)$ \\ ${}$ \qquad $\!\sim\! (-1,-\!2^{-1}{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,-2^{-2}{\bf e}_r) \!\sim\! (-1,-2^{-2}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r) $ \\ ${}$ \qquad $\!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r) \!\sim\! (-1,-2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s) \!\sim\! (0,-2^{-1}{\bf e}_s)$ \\ ${}$ \qquad $\!\sim\! (-1,-2^{-1}{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s) \!\sim\! (1,2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s)$ \\ ${}$ \qquad $\!\sim\! (0,2^{-1}{\bf e}_r\!-\!2^{-1}{\bf e}_s) \!\sim\! (-1,-2^{-1}{\bf e}_s) \!\sim\! (0,{\bf 0})$, \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s) \!\sim\! (-1,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s\!-\!2^{-1}{\bf e}_t)$ \\ ${}$ \qquad $\!\sim\! (0,2^{-1}{\bf e}_s\!-\!2^{-1}{\bf e}_t) \!\sim\! (-1,-2^{-1}{\bf e}_t) \!\sim\! (0,{\bf 0})$, or \\[+3 pt]
$\bullet$ \ $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s) \!\sim\! (-1,-2^{-2}{\bf e}_r\!-\!2^{-2}{\bf e}_s)$ \\ ${}$ \qquad $\!\sim\! (-2,-2^{-2}{\bf e}_r\!-\!2^{-1}{\bf e}_s) \!\sim\! (-1,-2^{-1}{\bf e}_s) \!\sim\! (0,{\bf 0})$.
\end{lemma}
\begin{proof}
This is left as an exercise for the reader.
It may be helpful to note that a $6$-cycle of the first form is obtainable as a $3$-arc of type 4
with final vertex $2{\bf e}_r$ followed by the reverse of a $3$-arc of type 11
with the same final vertex.
The $6$-cycles of the other $15$ forms are similarly obtainable as the concatenation
of a $3$-arc of type $i$ with the reverse of a $3$-arc of type $j$,
for $(i,j) = (5,8)$, $(5,13)$, $(6,22)$, $(10,13)$, $(11,11)$, $(12,16)$, $(13,13)$, $(14,24)$,
$(15,21)$, $(16,24)$, $(19,25)$, $(22,22)$, $(23,23)$, $(24,24)$ and $(27,27)$, respectively.
Uniqueness follows from the assumptions about $m$ and $n$.
\end{proof}
\begin{corollary}
\label{count6cycles}
The number of different $6$-cycles in $X$ that contain a given $2$-arc
$\,v_0 \!\sim\! v_1 \!\sim\! v_2$ with first vertex $v_0 = (0,{\bf 0})$ is always
$0$, $1$ or $k$.
More precisely, this number is
\\[-12pt]
\begin{center}
\begin{tabular}{cl}
${}$\hskip -18pt $0$
& \hskip -8pt
for the $2$-arcs $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,{\bf 0})$
and $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,{\bf 0})$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,3{\bf e}_r)$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-(2^{-1}\!+\!2^{-2}){\bf e}_r)$, \\[+6pt]
${}$\hskip -18pt $1$
& \hskip -8pt
for the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r)$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r)$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (2,{\bf e}_r\!+\!2{\bf e}_s)$, when $k > 2$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (-2,-2^{-2}{\bf e}_r)$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r)$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r\!-\!2^{-2}{\bf e}_s)$,
\\ & \quad when $k > 2$, \ and
\end{tabular}
\begin{tabular}{cl}
${}$\hskip -18pt $k$
& \hskip -8pt
for the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r)$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r)$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r\!-\!{\bf e}_s)$, when $k > 2$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (-1,{\bf 0}) \!\sim\! (0,2^{-1}{\bf e}_r)$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r)$, \\
& and those of the form $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (0,-2^{-1}{\bf e}_r\!+\!2^{-1}{\bf e}_s)$,
\\ & \quad when $k > 2$. \\[-3pt]
\end{tabular}
\end{center}
In particular, every $2$-arc of the form $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r)$
lies in just one $6$-cycle,
namely $(0,{\bf 0}) \!\sim\! (1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (1,2{\bf e}_r) \!\sim\! (0,{\bf e}_r) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf 0})$,
while every $2$-arc of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r)$ lies in $k$ $6$-cycles.
\end{corollary}
\begin{proof}
This follows easily from inspection of the list of $6$-cycles given in Lemma~\ref{sixcycles},
and their reverses.
\end{proof}
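Although it is not needed for the argument, the conclusion of Corollary~\ref{count6cycles} is easy to confirm by machine for small parameters. The Python sketch below is purely illustrative: it assumes the neighbour rule consistent with the arcs and cycles listed above, namely that $(a,{\bf b})$ is joined to $(a\!+\!1,{\bf b})$ and to $(a\!+\!1,{\bf b}\!+\!2^{a}{\bf e}_r)$ for $1 \le r < k$ (first coordinate taken mod $m$, the others mod $n$), and it uses the arbitrary `generic' parameters $(k,m,n)=(3,10,11)$.
\begin{verbatim}
from itertools import product

def bouwer(k, m, n):
    # adjacency sets of B(k,m,n) under the assumed neighbour rule
    verts = [(a, b) for a in range(m) for b in product(range(n), repeat=k - 1)]
    adj = {v: set() for v in verts}
    for (a, b) in verts:
        nbrs = [((a + 1) % m, b)]
        for r in range(k - 1):
            c = list(b)
            c[r] = (c[r] + pow(2, a, n)) % n
            nbrs.append(((a + 1) % m, tuple(c)))
        for u in nbrs:
            adj[(a, b)].add(u)
            adj[u].add((a, b))
    return adj

def six_cycles_through(adj, v0, v1, v2):
    # number of 6-cycles containing the 2-arc v0 ~ v1 ~ v2
    count = 0
    for x in adj[v2] - {v0, v1}:
        for y in adj[x] - {v0, v1, v2}:
            for z in adj[y] - {v0, v1, v2, x}:
                if v0 in adj[z]:
                    count += 1
    return count

k, m, n = 3, 10, 11            # 2 has order 10 mod 11, so m = 10 is admissible
adj = bouwer(k, m, n)
zero, e1 = (0,) * (k - 1), (1,) + (0,) * (k - 2)
v, w = (0, zero), (1, zero)
print(six_cycles_through(adj, v, w, (2, (2,) + (0,) * (k - 2))))   # expect 1
print(six_cycles_through(adj, v, (1, e1), (0, e1)))                # expect k = 3
print(six_cycles_through(adj, v, w, (2, zero)))                    # expect 0
\end{verbatim}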
At this stage, we could repeat the above calculations for $2$-arcs and $3$-arcs with first
vertex $(1,{\bf 0}) = (1,0,0,\dots,0)$, but it is much easier to simply apply the
automorphism $\tau$ defined in Section~\ref{further}.
Hence in particular, the $2$-arcs $\,v_0 \!\sim\! v_1 \!\sim\! v_2$ with first
vertex $v_0 = (1,{\bf 0})$
that lie
in a unique $6$-cycle are those of the form $(1,{\bf 0}) \!\sim\! (2,{\bf 0}) \!\sim\! (3,4{\bf e}_r)$,
or $(1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (3,2{\bf e}_r)$,
or $(1,{\bf 0}) \!\sim\! (2,2{\bf e}_r) \!\sim\! (3,2{\bf e}_r\!+\!4{\bf e}_s)$ when $k > 2$,
or $(1,{\bf 0}) \!\sim\! (0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r)$,
or $(1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r)$,
or $(1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r\!-\!2^{-1}{\bf e}_s)$ when $k > 2$.
Let $v$ and $w$ be the neighbouring vertices $(0,{\bf 0})$ and $(1,{\bf 0})$.
We will show that there is no automorphism of $X$ that reverses the arc $(v,w)$.
Let $A = \{(2,2{\bf e}_r) : 1 \le r < k\}$, which is the set of all
vertices $x$ in $X$ that extend the arc $(v,w)$ to a $2$-arc $(v,w,x)$ which lies in a unique $6$-cycle,
and similarly, let $B = \{(0,-{\bf e}_r) : 1 \le r < k\}$, the set of all
vertices that extend $(v,w)$ to a $2$-arc which lies in $k$ different $6$-cycles.
Also let $C = \{(-1,-2^{-1}{\bf e}_r) : 1 \le r < k\}$ and $D = \{(1,{\bf e}_r) : 1 \le r < k\}$
be the analogous sets of vertices extending
the arc $(w,v)$, as illustrated in Figure~\ref{ABCD}.
\begin{figure}
\caption{The sets $A$, $B$, $C$ and $D$ of vertices extending the arcs $(v,w)$ and $(w,v)$.}
\label{ABCD}
\end{figure}
Now suppose there exists an automorphism $\xi$ of $X$ that interchanges the
two vertices of the edge $\{v,w\}$.
Then by considering the numbers of $6$-cycles that contain a given $2$-arc,
we see that $\xi$ must interchange the sets $A$ and $C$, and interchange the
sets $B$ and $D$.
Next, observe that if $x = (2,2{\bf e}_r) \in A$, then the unique $6$-cycle containing
the $2$-arc $v \!\sim\! w \!\sim\! x$ is $v \!\sim\! w \!\sim\! x \!\sim\! y \!\sim\! z \!\sim\! u \!\sim\! v$,
where $(y,z,u) = ((1,2{\bf e}_r), (0,{\bf e}_r),(1,{\bf e}_r))$;
in particular, the $6$th vertex $u = (1,{\bf e}_r)$ lies in $D$.
Similarly, if $x' = (-1,-2^{-1}{\bf e}_r) \in C$, then the unique $6$-cycle containing
the $2$-arc $w \!\sim\! v \!\sim\! x'$ is $w \!\sim\! v \!\sim\! x' \!\sim\! y' \!\sim\! z' \!\sim\! u' \!\sim\! w$,
where $(y',z',u') = ((0,-2^{-1}{\bf e}_r), (-1,-{\bf e}_r), (0,-{\bf e}_r))$,
and the $6$th vertex $u' = (0,-{\bf e}_r)$ lies in $B$.
The arc-reversing automorphism $\xi$ must take every $6$-cycle
of the first kind to a $6$-cycle
of the second kind, and hence
must take each $2$-arc of the form $v \!\sim\! u \!\sim\! z$ ($= (0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r)$)
to a $2$-arc of the form $w \!\sim\! u' \!\sim\! z'$ ($= (1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r)$).
By Corollary~\ref{count6cycles}, however, each $2$-arc of the
form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_r) \!\sim\! (0,{\bf e}_r)$ lies in $k$ different $6$-cycles,
while each $2$-arc of the form $(1,{\bf 0}) \!\sim\! (0,-{\bf e}_r) \!\sim\! (-1,-{\bf e}_r)$
is the $\tau$-image of the $2$-arc $(0,{\bf 0}) \!\sim\! (-1,-2^{-1}{\bf e}_r) \!\sim\! (-2,-2^{-1}{\bf e}_r)$
and hence lies in only one $6$-cycle.
This is a contradiction, and shows that no such arc-reversing automorphism exists.
Hence the Bouwer graph $B(k,m,n)$ is half-arc-transitive whenever $m > 6$ and $n > 7$.
If $m \le 6$, then the order $m_2$ of $2$ as a unit mod $n$ is at most $6$,
and so $n$ divides $2^{m_2} - 1 = 3, 7, 15, 31$ or $63$.
In particular, if $m_2 = 2$, $3$, $4$ or $5$ then $n \in \{3\}$, $\{7\}$, $\{5,15\}$ or $\{31\}$
respectively, while if $m_2 = 6$, then $n$ divides $63$ but not $3$ or $7$,
so $n \in \{9, 21, 63\}$.
Conversely, if $n = 3, 5, 7, 9,$ $ 15, 21, 31$ or $63$,
then $m_2 = 2$, $4$, $3$, $6$, $4$, $6$, $5$ or $6$,
respectively, and of course in each case, $m$ is a multiple of $m_2$.
We deal with these exceptional cases in the next two sections.
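As a quick sanity check on this list of exceptional moduli, the following short Python fragment (purely illustrative) lists the odd $n < 200$ for which the multiplicative order of $2$ mod $n$ is at most $6$.
\begin{verbatim}
def order_of_two(n):
    # multiplicative order of 2 modulo an odd integer n > 1
    a, d = 2 % n, 1
    while a != 1:
        a, d = (2 * a) % n, d + 1
    return d

print([n for n in range(3, 200, 2) if order_of_two(n) <= 6])
# output: [3, 5, 7, 9, 15, 21, 31, 63]
\end{verbatim}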
\section{Arc-transitive cases}
\label{sec:AT}
The following observations are very easy to verify.
When $n = 3$ (and $m$ is even), the Bouwer graph $B(k,m,n)$ is always arc-transitive,
for in that case there is an automorphism that takes each
vertex $(a,{\bf b})$ to $(1\!-\!a,-{\bf b})$, and this
reverses the arc from $(0,{\bf 0})$ to $(1,{\bf 0})$.
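Indeed, writing the neighbours of $(a,{\bf b})$ as $(a\!+\!1,{\bf b})$ and $(a\!+\!1,{\bf b}\!+\!2^a{\bf e}_r)$ for $1 \le r < k$ (the convention consistent with the arcs listed in Section~\ref{sec:main}), this map takes the edge joining $(a,{\bf b})$ to $(a\!+\!1,{\bf b}\!+\!2^a{\bf e}_r)$ to the pair $(1\!-\!a,-{\bf b})$, $(-a,-{\bf b}\!-\!2^a{\bf e}_r)$, and since $2^{-a} \equiv 2^{a}$ mod $3$, we have $-{\bf b} = (-{\bf b}\!-\!2^a{\bf e}_r)+2^{-a}{\bf e}_r$, so this pair is again an edge of the same kind; edges joining $(a,{\bf b})$ to $(a\!+\!1,{\bf b})$ are obviously preserved.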
Similarly, when $k = 2$ and $n = 5$ (and $m$ is divisible by $4$),
the graph $B(k,m,n)$ is always arc-transitive,
since it has an automorphism that takes $(a,b_2)$
to $(1\!-\!a,-b_2)$ for all $a \equiv 0$ or $1$ mod $4$,
and to $(1\!-\!a,2-b_2)$ for all $a \equiv 2$ or $3$ mod $4$,
and again this interchanges $(0,{\bf 0})$ with $(1,{\bf 0})$.
Next, $B(2,m,7)$ is arc-transitive when $m = 3$ or $6$,
because in each of those two cases there is an automorphism that takes
\\[-16pt]
\begin{center}
\begin{tabular}{lll}
$\quad(a,0)$ to $(1\!-\!a,0)$, \quad \ &
$(a,1)$ to $(a\!+\!1,2)$, \quad \ &
$(a,2)$ to $(a\!-\!1,1)$, \\[+1 pt]
$\quad (a,3)$ to $(\!-\!a,6)$, \quad \ &
$(a,4)$ to $(a\!+\!3,4)$, \quad \ &
$(a,5)$ to $(1\!-\!a,5)$, \\[+1 pt]
$\quad (a,6)$ to $(1\!-\!a,3)$, & & \\[-6pt]
\end{tabular}
\end{center}
for every $a \in \mathbb Z_m$.
Similarly, $B(2,6,21)$ is arc-transitive since it has an automorphism taking
\\[-16pt]
\begin{center}
\begin{tabular}{lll}
$\quad (a,0)$ to $(1\!-\!a,0)$, \quad \ &
$(a,1)$ to $(a\!+\!1,2)$, \quad \ &
$(a,2)$ to $(a\!-\!1,1)$, \\[+1 pt]
$\quad (a,3)$ to $(5\!-\!a,6)$, \quad \ &
$(a,4)$ to $(a\!+\!3,11)$, \quad \ &
$(a,5)$ to $(5\!-\!a,19)$, \\[+1 pt]
$\quad (a,6)$ to $(5\!-\!a,3)$, \quad &
$(a,7)$ to $(1\!-\!a,14)$, \quad \ &
$(a,8)$ to $(a\!+\!1,16)$, \\[+1 pt]
$\quad (a,9)$ to $(a\!-\!1,15)$, \quad &
$(a,10)$ to $(5\!-\!a,20)$, \quad \ &
$(a,11)$ to $(a\!+\!3,4)$, \\[+1 pt]
$\quad (a,12)$ to $(5\!-\!a,12)$, \quad &
$(a,13)$ to $(5\!-\!a,17)$, \quad \ &
$(a,14)$ to $(1\!-\!a,7)$, \\[+1 pt]
$\quad (a,15)$ to $(a\!+\!1,9)$, \quad &
$(a,16)$ to $(a\!-\!1,8)$, \quad \ &
$(a,17)$ to $(5\!-\!a,13)$, \\[+1 pt]
$\quad (a,18)$ to $(a\!+\!3,18)$, \quad &
$(a,19)$ to $(5\!-\!a,5)$, \quad \ &
$(a,20)$ to $(5\!-\!a,10)$, \\[-3pt]
\end{tabular}
\end{center}
for every $a \in \mathbb Z_m$.
\section{Other half-arc-transitive cases}
\label{sec:other}
In this final section, we give sketch proofs of half-arc-transitivity in all the remaining cases.
We first consider the three cases where $m$ and the multiplicative order of $2$ mod $n$
are both equal to $6$ (and $n = 9$, $21$ or $63$),
and then deal with the cases where the multiplicative order of $2$ mod $n$ is less than $6$
(and $n = 5$, $7$, $15$ or $31$). In the cases $n = 15$ and $n = 31$,
we can assume that $m < 6$ as well.
Note that one of these remaining cases is the one considered by Bouwer,
namely where $(m,n) = (6,9)$. As this is one of the exceptional cases, it is not representative
of the generic case --- which may help explain why previous attempts to generalise
Bouwer's approach did not get very far.
To reduce unnecessary repetition, we will introduce some further notation that will be used in most of these cases.
As in Section~\ref{sec:main}, we let $v$ and $w$ be the vertex $(0,{\bf 0})$ and its
neighbour $(1,{\bf 0})$.
Next, for all $i \ge 0$, we define $V^{(i)}$ as the set of all
vertices $x$ in $X$ for which $v \!\sim\! w \!\sim\! x $ is a $2$-arc that lies in exactly $i$ different $6$-cycles,
and $W^{(i)}$ as the analogous set of vertices $x'$ in $X$ for which $w \!\sim\! v \!\sim\! x'$
is a $2$-arc that lies in exactly $i$ different $6$-cycles.
(Hence, for example, the sets $A$, $B$, $C$ and $D$ used in Section~\ref{sec:main}
are $V^{(1)}$, $V^{(k)}$, $W^{(1)}$ and $W^{(k)}$, respectively.)
Also for $i \ge 0$ and $j \ge 0$, we define $T_{v}^{\,(i,j)}$ as
the set of all $2$-arcs $v \!\sim\! u \!\sim\! z$ that come from a $6$-cycle of the
form $v \!\sim\! w \!\sim\! x \!\sim\! y \!\sim\! z \!\sim\! u \!\sim\! v$ with $x \in V^{(i)}$ and $u \in W^{(i)}$,
and lie in exactly $j$ different $6$-cycles altogether,
and define $T_{w}^{\,(i,j)}$ as the analogous set of all $2$-arcs $w \!\sim\! u' \!\sim\! z'$
that come from a $6$-cycle of the form $w \!\sim\! v \!\sim\! x' \!\sim\! y' \!\sim\! z' \!\sim\! u' \!\sim\! w$
with $x' \in W^{(i)}$ and $u' \in V^{(i)}$, and lie in exactly $j$ different $6$-cycles altogether.
Note that if the graph under consideration is arc-transitive, then it has an
automorphism $\xi$ that reverses the arc $(v,w)$, and then clearly $\xi$ must
interchange the two sets $V^{(i)}$ and $W^{(i)}$, for each $i$.
Hence also $\xi$ interchanges the two sets
$T_{v}^{\,(i,j)}$ and $T_{w}^{\,(i,j)}$ for all $i$ and $j$,
and therefore $|T_{v}^{\,(i,j)}| = |T_{w}^{\,(i,j)}|$ for all $i$ and $j$.
Equivalently, if $|T_{v}^{\,(i,j)}| \ne |T_{w}^{\,(i,j)}|$ for some pair $(i,j)$,
then the graph cannot be arc-transitive, and therefore must be half-arc-transitive.
The approach taken in Section~\ref{sec:main} was similar, but compared the
$2$-arcs $v \!\sim\! u \!\sim\! z$ that come from a $6$-cycle of the
form $v \!\sim\! w \!\sim\! x \!\sim\! y \!\sim\! z \!\sim\! u \!\sim\! v$ where $x \in V^{(1)}$ and $u \in W^{(k)}$,
with the $2$-arcs $w \!\sim\! u' \!\sim\! z'$
that come from a $6$-cycle of the form $w \!\sim\! v \!\sim\! x' \!\sim\! y' \!\sim\! z' \!\sim\! u' \!\sim\! w$
where $x' \in W^{(1)}$ and $u' \in V^{(k)}$.
We proceed by considering the first three cases below, in which the girth of the Bouwer
graph $B(k,m,n)$ is $6$, but the numbers of $6$-cycles containing a given arc or $2$-arc are
different from those found in Section~\ref{sec:main}.
\subsection{The graphs $B(k,6,9)$}
Suppose $m = 6$ and $n = 9$.
This case was considered by Bouwer in \cite{Bouwer}, but it can also
be dealt with in a similar way to the generic case in Section~\ref{sec:main}.
Every $2$-arc lies in either $k$ or $k+1$ cycles of length $6$,
and each arc lies in exactly $k$ distinct $2$-arcs of the first kind,
and $k-1$ of the second kind.
Next, the set $V^{(k)}$ consists of $(2,{\bf 0})$ and $(0,-{\bf e}_r)$ for $1 \le r < k$,
while $W^{(k)}$ consists of $(-1,{\bf 0})$ and $(1,{\bf e}_s)$ for $1 \le s < k$.
Now consider the $2$-arcs $v \!\sim\! u \!\sim\! z$ that come from a $6$-cycle of the
form $v \!\sim\! w \!\sim\! x \!\sim\! y \!\sim\! z \!\sim\! u \!\sim\! v$ with $x \in V^{(k)}$ and $u \in W^{(k)}$.
There are $k^{2}-2k+2$ such $2$-arcs, and of these, $k\!-\!1$ lie in $k\!+\!1$
different $6$-cycles altogether (namely the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$),
while the other $k^{2}-3k+3$ lie in only $k$ different $6$-cycles.
In particular, $|T_{v}^{\,(k,k+1)}| = k\!-\!1$.
On the other hand, among the $2$-arcs $w \!\sim\! u' \!\sim\! z'$
coming from a $6$-cycle of the form $w \!\sim\! v \!\sim\! x' \!\sim\! y' \!\sim\! z' \!\sim\! u' \!\sim\! w$
with $x' \in W^{(k)}$ and $u' \in V^{(k)}$,
none lies in $k+1$ different 6-cycles.
Hence $|T_{w}^{\,(k,k+1)}| = 0$.
Thus $|T_{v}^{\,(k,k+1)}| \ne |T_{w}^{\,(k,k+1)}|$, and so the Bouwer graph $B(k,6,9)$
cannot be arc-transitive, and is therefore half-arc-transitive.
\subsection{The graphs $B(k,6,21)$ for $k > 2$}
Next, suppose $m = 6$ and $n = 21$, and $k > 2$.
(The case $k = 2$ was dealt with in Section~\ref{sec:AT}.)
Here every $2$-arc lies in one, two or $k$ cycles of length $6$,
and each arc lies in exactly one $2$-arc of the first kind,
and $k-1$ distinct $2$-arcs of each of the second and third kinds.
The set $V^{(k)}$ consists of the $k\!-\!1$ vertices of the form $(0,-{\bf e}_r)$,
while $W^{(k)}$ consists of the $k\!-\!1$ vertices of the form $(1,{\bf e}_s)$.
(Note that this does not hold when $k = 2$.)
Also $T_{v}^{\,(k,2)}$ consists of the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$,
so $|T_{v}^{\,(k,2)}| = k\!-\!1$,
but on the other hand, $T_{w}^{\,(k,2)}$ is empty.
Hence there can be no automorphism that reverses the arc $(v,w)$,
and so the graph is half-arc-transitive.
\subsection{The graphs $B(k,6,63)$}
Suppose $m = 6$ and $n = 63$. Then
every $2$-arc lies in either one or $k$ different cycles of length $6$,
and each arc lies in exactly $k$ $2$-arcs of the first kind,
and $k\!-\!1$ of the second kind.
The sets $V^{(k)}$ and $W^{(k)}$ are precisely as in the previous case,
but for all $k \ge 2$, and in this case $T_{v}^{\,(k,1)}$ consists of the $2$-arcs of
the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$,
so $|T_{v}^{\,(k,1)}| = k\!-\!1$, but on the other hand, $T_{w}^{\,(k,1)}$ is empty.
Hence there can be no automorphism that reverses the arc $(v,w)$,
and so the graph is half-arc-transitive.
Now we turn to the cases where $n = 5$, $7$, $15$ or $31$.
In these cases, the order of $2$ as a unit mod $n$ is $4$, $3$, $4$ or $5$,
respectively, and indeed when $n = 15$ or $31$ we may suppose
that $m = 4$ or $m = 5$, while the cases $n = 5$ and $n = 7$ are much more tricky.
\subsection{The graphs $B(k,4,15)$}
Suppose $n = 15$ and $m = 4$.
Then the girth is $4$, with $2k$ different $4$-cycles passing through the vertex $(0,{\bf 0})$.
Apart from this difference, the approach taken in Section~\ref{sec:main} for counting $6$-cycles
still works in the same way as before.
Hence the graph $B(k,4,15)$ is half-arc-transitive
for all $k \ge 2$.
\subsection{The graphs $B(k,5,31)$}
Similarly, when $n = 31$ and $m = 5$, the girth
is $5$, and the same approach as taken in Section~\ref{sec:main} using $6$-cycles
works, to show that the graph $B(k,5,31)$ is half-arc-transitive
for all $k \ge 2$.
\subsection{The graphs $B(k,m,5)$ for $k > 2$}
Suppose $n = 5$ and $k > 2$.
(The case $k = 2$ was dealt with in Section~\ref{sec:AT}.)
Here we have $m \equiv 0$ mod $4$, and the number of $6$-cycles is
much larger than in the generic case considered in Section~\ref{sec:main}
and the cases with $m = 6$ above, but a similar argument works.
When $m > 4$, the girth of $B(k,m,n)$ is $6$, and every $2$-arc lies in
either $2k$, $2k+3$ or $4k-4$ cycles of length 6.
Also $V^{(2k+3)}$ consists of the $k\!-\!1$ vertices of the form $(0,-{\bf e}_r)$,
while $W^{(2k+3)}$ consists of the $k\!-\!1$ vertices of the form $(1,{\bf e}_s)$,
and then $T_{v}^{\,(2k+3,2k)}$ consists of the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$,
so $|T_{v}^{\,(2k+3,2k)}| = k\!-\!1$,
but on the other hand, $T_{w}^{\,(2k+3,2k)}$ is empty.
When $m = 4$, the girth of $B(k,m,n)$ is $4$, and the situation is similar, but slightly different.
In this case, every $2$-arc lies in either $2k+2$, $2k+5$ or $6k-6$ cycles of length 6,
and $|T_{v}^{\,(2k+5,2k+2)}| = k\!-\!1$ while $|T_{w}^{\,(2k+5,2k+2)}| = 0$.
Once again, it follows that no automorphism can reverse the arc $(v,w)$,
and so the graph is half-arc-transitive, for all $k > 2$ and all $m \equiv 0$ mod $4$.
\subsection{The graphs $B(k,m,7)$ for $(k,m) \ne (2,3)$ or $(2,6)$}
Suppose finally that $n = 7$, with $m \equiv 0$ mod $3$, but $(k,m) \ne (2,3)$ or $(2,6)$.
We treat four sub-cases separately:
(a) $k = 2$ and $m > 6$; (b) $k > 2$ and $m = 3$; (c) $k > 2$ and $m = 6$;
and (d) $k > 2$ and $m > 6$.
In case (a), where $k = 2$ and $m > 6$, every $2$-arc lies in $1$, $2$
or $3$ cycles of length $6$.
Also the set $V^{(3)}$ consists of the single vertex $(0,-1)$,
while $W^{(3)}$ consists of the single vertex $(1,1)$,
and then $T_{v}^{\,(3,1)}$ consists of the single $2$-arc $(0,0) \!\sim\! (1,1) \!\sim\! (2,1)$,
so $|T_{v}^{\,(3,1)}| = 1$,
but on the other hand, $T_{w}^{\,(3,1)}$ is empty.
In case (b), where $k > 2$ and $m = 3$, every $2$-arc lies in $0$, $2$
or $k$ cycles of length $6$.
In this case the set $V^{(k)}$ consists of the $k\!-\!1$ vertices of the form $(0,-{\bf e}_r)$,
while $W^{(k)}$ consists of the $k\!-\!1$ vertices of the form $(1,{\bf e}_s)$,
and then $T_{v}^{\,(k,2)}$ consists of the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$,
so $|T_{v}^{\,(k,2)}| = k-1$,
but on the other hand, $T_{w}^{\,(k,2)}$ is empty.
In case (c), where $k > 2$ and $m = 6$, every $2$-arc lies in $3$, $k\!+\!1$
or $4k\!-\!3$ cycles of length $6$.
Here $V^{(k+1)}$ consists of the $k\!-\!1$ vertices of the form $(0,-{\bf e}_r)$,
while $W^{(k+1)}$ consists of the $k\!-\!1$ vertices of the form $(1,{\bf e}_s)$,
and then $T_{v}^{\,(k+1,3)}$ consists of the $2$-arcs of the form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$,
and so $|T_{v}^{\,(k+1,3)}| = k-1$,
but on the other hand, $T_{w}^{\,(k+1,3)}$ is empty.
In case (d), where $k > 2$ and $m > 6$, every 2-arc lies in $1$, $k\!+\!1$
or $2k\!-\!2$ cycles of length $6$.
Next, if $k > 3$ then $V^{(k+1)}$ consists of the $k\!-\!1$ vertices
of the form $(0,-{\bf e}_r)$,
while $W^{(k+1)}$ consists of the $k\!-\!1$ vertices of the form $(1,{\bf e}_s)$,
but if $k = 3$ then $k+1 = 2k-2$, and $V^{(k+1)}$ contains also $(2,{\bf 0})$
while $W^{(k+1)}$ contains also $(-1,{\bf 0})$.
Whether $k = 3$ or $k > 3$, the set $T_{v}^{\,(k+1,1)}$ consists of the $2$-arcs of the
form $(0,{\bf 0}) \!\sim\! (1,{\bf e}_s) \!\sim\! (2,{\bf e}_s)$, and so $|T_{v}^{\,(k+1,1)}| = k-1$,
but on the other hand, $T_{w}^{\,(k+1,1)}$ is empty.
Hence in all four of cases (a) to (d), no automorphism can reverse the arc $(v,w)$,
and therefore the graph is half-arc-transitive.
This completes the proof of our Theorem.
\noindent
{\Large\bf Acknowledgements}
This work was undertaken while the second author was visiting the University
of Auckland in 2014, and was partially supported by the ARRS (via grant P1-0294),
the European Science Foundation (via EuroGIGA/GReGAS grant N1-0011),
and the N.Z. Marsden Fund (via grant UOA1323).
The authors would also like to thank Toma{\v z} Pisanski and Primo{\v z} Poto\v{c}nik
for discussions which encouraged them to work on this topic,
and acknowledge considerable use of the {\sc Magma} system~\cite{Magma} for
computational experiments and verification of some of the contents of the paper.
\end{document} |
\begin{document}
\title{Na\"ive Markowitz Policies\thanks{The first version of the paper was completed in 2017, and part of the results was included in the first author's PhD thesis defended in 2020. The paper was finalized when the second author was on vacation in Las Vegas, a place arguably ideal for observing the ``na\"ive" behaviors studied in the paper.} }
\author{Lin Chen\thanks{Department of Industrial
Engineering and Operations Research, Columbia University, New York, New York 10027, USA, \texttt{[email protected]}}\ \ \ Xun Yu Zhou\thanks{Department of Industrial Engineering and Operations Research, Columbia University, New York, New York 10027, USA, \texttt{[email protected]}}}
\maketitle
\begin{abstract}
We study a continuous-time Markowitz mean--variance portfolio selection model in which a na\"ive agent, unaware of the underlying time-inconsistency, continuously reoptimizes over time.
We define the resulting na\"ive policies through the limit of discretely na\"ive policies that are committed only in very small time intervals, and derive them analytically and explicitly. We compare na\"ive policies with pre-committed optimal policies and with consistent planners' equilibrium policies in a Black--Scholes market, and find that the former are mean--variance inefficient starting from any given time and wealth, and always take riskier exposure than equilibrium policies.
\noindent {\bf Key Words.} Continuous time, mean--variance model, time inconsistency, na\"ive agent,
pre-committed agent, consistent planner, equilibrium policies.
\end{abstract}
\section{Introduction}
The Markowitz mean--variance (MV) portfolio selection model (\citealp{MH52} and \citealp{MH59}) is a monumental work in quantitative finance. The model formulates the investment problem as striving to achieve the best balance between return and risk, represented respectively by the mean and variance of the final portfolio worth. Its variants, extensions and implications have been passionately studied in theory and applied in practice to this day.
The original MV model is formulated for a static single period and solved by quadratic programming. It is natural and necessary to extend it to the dynamic setting, both in discrete time and continuous time. However, a dynamic MV model is inherently {\it time inconsistent}; namely, any ``optimal" policy for the present moment will generally not be optimal for the next moment.\footnote{Here a ``policy" is a {\it plan} that maps any given time and state to an action (a portfolio in the MV model).
The original MV model is formulated for a static single period and solved by quadratic program. It is natural and necessary to extend it to the dynamic setting, both in discrete time and continuous time. However, a dynamic MV model is inherently {\it time inconsistent}; namely, any ``optimal" policy for the present moment will generally not be optimal for the next moment.\footnote{Here a ``policy" is a {\it plan} that maps any given time and state to an action (a portfolio in the MV model).
It is also called a {\it feedback control law} in control theory.} This inconsistency comes from the variance term that does not satisfy the tower rule: unlike the mean, there is no consistency over time in evaluating the same variance of the final wealth.
As a result, in sharp contrast to the classical time-consistent models, there is no such notion as a {\it dynamically optimal policy} for a time-inconsistent model because any such policy, once planned for this moment, may need to be given up quickly (and {\it instantly} in a continuous-time setting) in favor of a different plan at the next moment. Technically, time-inconsistency poses fundamental challenges in ``solving" -- whatever ``solving" means -- the problem because the Bellman optimality principle, which is the very foundation of the classical dynamic programming for studying dynamic optimization problems, is no longer valid.
Economists have recognized and studied time-inconsistency since as early as the 1950s. The foundational paper \cite{SR56} {\it describes} three types of agents when facing time inconsistency. Type 1, a ``na\"ivet\'e" (or na\"if), is unaware of the time inconsistency and at any given time and state of affairs seeks an ``optimal" policy for that moment only, without knowing that he will not uphold that policy for long. As a result, his policies change all the time, and the eventual policy that is actually carried out can be vastly and characteristically different from any of the short-lived ``optimal" policies he originally planned to execute.\footnote{For instance, \cite{casino} shows, in a casino gambling model (which is time-inconsistent in discrete time due to probability weighting), that although a na\"ive gambler's initial plan is to gamble as long as possible when winning and to stop once he starts accumulating losses, he actually ends up doing the {\it opposite}: he gambles as long as possible when losing and stops once he accumulates some gains. Similar behaviors are also observed, and indeed prevalent, in stock investment, especially among retail investors.} The next two types realize the issue of time inconsistency
but act differently. Type 2 is a ``precommitter" who solves the optimization problem only at time 0 and sticks to the resulting policy throughout (via some ``commitment device" if necessary and available), recognizing that the original policy may no longer be optimal at later times. Type 3 is a ``consistent planner" who is unable to precommit and realizes that her future selves may abandon whatever plans she makes now. Her resolution is to optimize taking the future deviations from the current plan as {\it constraints}, effectively leading to a game among selves at different times.
The resulting policies are called equilibrium ones.
It is important to note that it is not meaningful to determine
which type is superior to the others, simply because there are no uniform criteria
to compare them. In this sense, the Strotzian approach to time-inconsistency is both {\it normative} (i.e. to advise people about the best course of actions, especially in Types 2 and 3) and {\it descriptive} (i.e. to describe what people are actually doing, as more with Type 1).
Mathematically, model formulations and solutions for deriving the
three types of agent policies call for different treatments as they are very different from each other. The problems are also challenging due to the invalidity of the dynamic programming approach. In the last decade, there have been significant developments in studying time-inconsistent models analytically, mainly in three different settings:
MV portfolio selection, and optimization problems involving non-exponential
discounting or probability weighting; see \cite{survey} for a recent survey on the related works. For the MV models, earlier works focused on Type 2, pre-committed agents;
see, e.g., \cite{R89,H71,LN00,ZL00,LIMZ02,BJPZ05,X05,LZ06}, although most of these works did not spell out that their solutions were pre-committed ones. Later research gradually shifted to Type 3, consistent
planners; see, e.g. \cite{BC05, HJZ12, BA14, BMZ14,HJ17}.
In contrast to the rich literature on pre-committed agents and consistent planners, there are far fewer works on the general behaviors of na\"ive agents, and almost none in continuous time (not necessarily limited to MV models). \cite{casino, HOZ22} study na\"ive strategies in casino gambling models which are inherently discrete time. As shown in these papers, finding na\"ive policies in discrete time is rather straightforward technically if the pre-committed policies are already available: at each discrete time point one solves and obtains the corresponding pre-committed policy, holds it until the next time point when one re-solves the pre-committed problem, and repeats these steps until the terminal time. The {\it eventual} na\"ive policy is then just to ``paste" these piecewise pre-committed policies together. This pasting approach, however, does not work for the continuous-time setting. Indeed, at each given time and state, say $(s,y)$, a pre-committed policy is executed and {\it instantaneously} discarded, while a policy applied for just one single time--state initial point $(s,y)$ has no impact on the dynamics in continuous time. As a result, it is unclear how to paste these continuously changing policies and, even if one found a way to do it, how to interpret the resulting policy.
We address this issue specific to continuous time and make two main contributions in this paper. First, to the best of our knowledge we are the first to define precisely the na\"ive policies in the original spirit of \cite{SR56} but adapted to the continuous-time setting, premised upon the notion that any continuous-time behavior is the limit of discrete-time behaviors when the time-step approaches zero.\footnote{An analogy here is that the Brownian motion is just the limit of a simple random walk when the step size diminishes to zero.} We fix a set of discrete time points and consider a fictitious agent who only optimizes at each of these points and holds the resulting pre-committed policy until the next point. It is then natural to use the ``limit" -- in a certain sense -- of these discretely na\"ive agents when the step size becomes asymptotically small to describe the na\"ive behavior in the original continuous-time model. One technical subtlety here is that policies are generally only measurable functions whose limit is difficult to analyze.
We consider instead the limiting process of the wealth processes -- which are analytically better behaved -- of those discrete agents, and find the policy that generates this limiting process as the wealth process.
A main advantage of our approach is that it is both general and constructive. It is general because the definition of a na\"ive policy applies readily to any time-inconsistent problems beyond MV (see \citealp{CZ20b} for an extension to the general stochastic linear--quadratic control problem), and it is constructive because the definition itself points to the direction of {\it deriving} a na\"ive policy.
The second contribution is to compare the na\"ive policies with the other Strotzian types of policies. Be mindful that it does not make much sense to use either mean or variance of the terminal wealth alone for comparison, as the essence of the MV model is to achieve a best trade-off between the two criteria. Instead, MV efficiency ought to be the primary criterion.
Between a na\"ivet\'e and a pre-committer, starting from any given point of time and state, the latter is MV efficient by definition while we show that the former is not (although he elevates the expected terminal wealth than he originally planned).
To compare
na\"ive and equilibrium policies which are both MV inefficient, we use an objective metric which is the risky weight defined as the fraction of dollar amount invested in stocks.
We show that a na\"ivet\'e {\it always} allocate strictly higher risky weight than the two types of consistent planners considered by \cite{BMZ14} and \cite{HJ17} respectively. This in turn suggests that the na\"ive policies tend to be more risk-taking than their consistent planning counterparts.\footnote{An analogous result is proved in \cite{HOZ22} for a casino gambling model: a na\"ive gambler stops gambling no earlier than a gambler doing consistent planning.}
\cite{PP17} introduce the notion of ``dynamic optimality" in a continuous-time MV model, which seems to bear some relevance to
na\"ive policies (although the paper stops short of commenting on it). Definition 2 therein defines a dynamically optimal policy as there being no other policy applied at present
time could produce a more favourable value at the terminal time. However, as discussed earlier, in a time-inconsistent problem there is no such thing as ``dynamic optimality": as much as a na\"ivet\'e attempts to reoptimize continuously over time, the resulting actual policy at any given time may significantly deviate from the pre-committed optimal one (and therefore is MV inefficient, and indeed {\it not} optimal in any sense). On the other hand, \cite{PP17} conjecture the analytical formula of such a ``dynamically optimal" policy for a single stock Black--Scholes market without explaining where it comes from. Hence the solution method is {\it ad hoc} and it is unclear whether the existence of such a policy is prevalent and, if yes, how to extend the conjecture to a more general MV setting (e.g., one with more than one risky asset) or to other time-inconsistent problems (e.g. with non-exponential discounting or probability weighting). By contrast, our definition of na\"ive policies is general and our derivation of these policies is constructive.
The rest of the paper is organized as follows. In Section \ref{Naive_problem_formulation} we formulate the continuous-time MV portfolio selection model. In Section \ref{Naive_analysis} we introduce the so-called $2^{-n}$-committed policies, which are committed only during a small interval of length $2^{-n}$ before reoptimization. We consider the limit of the wealth processes under these policies as $n\to\infty$, and define the policy that generates this limiting wealth process as a na\"ive policy. We then state the main result that expresses na\"ive policies analytically. In Section \ref{Naive_comparisons} we compare na\"ive policies with other types of policies in a Black--Scholes market. Section \ref{Naive_conclusions} concludes the paper. Proofs related to the main result are placed in the appendices.
\section{A Continuous-Time Markowitz Model}\label{Naive_problem_formulation}
In this section we review the continuous-time Markowitz MV model. We first introduce some notation.
Throughout this paper, $M^{\top}$ denotes the transpose of any vector or matrix $M$,
while all vectors are {\it column} vectors unless otherwise specified.
A fixed filtered complete probability space
$(\Omega,\mathcal{F},\mathbb{P},\{\mathcal{F}_t\}_{t\geq 0})$ is given along with a standard $\{\mathcal{F}_t\}_{t\geq 0}$-adapted, $m$-dimensional Brownian motion $W(t)\equiv (W^1(t),...,W^m(t))^{\top}$. We use $f$ or $f(\cdot)$ to denote the {\it function} $f$, and $f(x)$ to denote
the {\it function value} of $f$ at $x$. Likewise, we use $X$ or $X(\cdot)$ to denote a stochastic process $X=\{X(s),\ s\geq 0\}$. Given a Hilbert space $H$ and $b>a\geq0$,
we denote by
$L^2([a,b];H)$ the Hilbert space of $H$-valued, square-integrable functions $f$ on $[a,b]$ endowed with the norm $(\int_a^b ||f(t)||^2_H dt)^{1/2}$. Moreover, we denote by $L^2_\mathcal{F}([a,b];\mathbb{R}^m)$ the Hilbert space of $\mathbb{R}^m$-valued, square-integrable and $\{\mathcal{F}_t\}_{t\geq 0}$-adapted stochastic processes $g$ endowed with the norm $\left[\mathbb{E}\int_a^b ||g(t)||^2 dt\right]^{1/2}$, where $||\cdot||$ is the $L^2$ norm in a Euclidean space.
A financial market has $m+1$ assets being traded continuously. One of the assets is a bank account whose price process $S_0$ is subject to the following equation:
\begin{equation}
dS_0(t)=r(t)S_0(t)dt,\ t\geq0;\ S_0(0)=s_0>0,
\end{equation}
where the interest rate function $r(\cdot)$ is deterministic. The other $m$ assets are stocks whose price processes $S_i,i=1,...,m$, satisfy the following stochastic differential equations (SDEs):
\begin{equation}
dS_i(t)=S_i(t)\left[b_i(t)dt+\sum\limits_{j=1}^m \sigma_{ij}(t)dW^j(t)\right],\ t\geq0;\;\; S_i(0)=s_i>0,
\end{equation}
where $b_i(\cdot)$ and $\sigma_{ij}(\cdot)$, the appreciation and volatility rate functions respectively, are scalar-valued and deterministic.
Set the excess rate of return (row) vector function and the volatility matrix function respectively as
$$B(t):=(b_1(t)-r(t),...,b_m(t)-r(t)),\;\;\sigma(t):=(\sigma_{ij}(t))_{m\times m}.$$
An agent
has total {\it wealth} $X(t)$ at time $t\in[0,T]$, where $T$ is a given terminal time of the investment horizon. Assuming that the trading of shares takes place in a self-financing fashion and that there are no transaction costs, the process $X$ satisfies the {\it wealth equation}
\begin{equation}\label{wealtheq}
dX(t)=\left[r(t)X(t)+B(t)\pi(t)\right]dt+\pi(t)^\top\sigma(t)dW(t),\ t\in[0,T],
\end{equation}
where each $\pi_i(t),i=1,2,...,m$, denotes the total market value of the agent's wealth in the $i$-th asset, resulting in a {\it portfolio} $(\pi_1(t),...,\pi_m(t))^\top$, at time $t$. The agent considers portfolio choice at time $s$ when her wealth is $y$, where $(s,y)\in [0,T)\times\mathbb{R}$ is given.
The process $\pi\equiv(\pi_1,...,\pi_m)^\top=\{\pi(t): s\leq t\leq T\}$ is called an {\it admissible portfolio} (process) for $(s,y)$ if $\pi\in L^2_\mathcal{F}([s,T];\mathbb{R}^m)$ and the wealth equation (\ref{wealtheq}) with initial condition $X(s)=y$ admits a unique strong solution. Denote by ${\cal U}_{s,y}$ the set of admissible portfolio processes for $(s,y)$.
We focus on a portfolio {\it policy} $\bm{\pi}=\bm{\pi}(\cdot,\cdot)$ which is a {\it deterministic} map from $[0,T]\times\mathbb{R}$ to $\mathbb{R}^m$. Such a policy specifies a portfolio $\bm{\pi}(t,x)$ when time is $t$ and wealth is $x$.\footnote{In control theory, the policy here is also called the {\it feedback} control law, whereas the portfolio process corresponds to the {\it open-loop} control.}
In the classical, time-consistent setting, a policy $\bm{\pi}(\cdot,\cdot)$ is independent of the initial time--state pair $(s,y)$, meaning that it is implemented no matter when and where one starts. Such policies are called {\it time-consistent} ones. A time-consistent policy $\bm{\pi}=\bm{\pi}(\cdot,\cdot)$ is called admissible if for {\it any} $(s,y)\in [0,T)\times\mathbb{R}$, the following SDE obtained by substituting $\bm{\pi}$ into
the wealth equation (\ref{wealtheq})
\begin{equation}\label{wealtheqfb}
dX(t)=\left[r(t)X(t)+B(t)\bm{\pi}(t,X(t))\right]dt+\bm{\pi}(t,X(t))^\top\sigma(t)dW(t),\ t\in[0,T];\;\; X(s)=y,
\end{equation}
admits a unique strong solution $X$ and, moreover, the resulting portfolio process $\pi\in {\cal U}_{s,y}$ where $\pi(t):=\bm{\pi}(t,X(t))$, $t\in[s,T]$. Note that the wealth--portfolio {\it process} pair $(X,\pi)$ {\it depends on} the initial $(s,y)$, and we say $(X,\pi)$ is {\it generated} from the policy $\bm{\pi}$ with respect to $(s,y)$.
The classical verification theorem for time-consistent problems (e.g. \citealp{YZ99}) dictates that, under standard assumptions, there exists a time-consistent policy that generates optimal wealth--portfolio process pair $(X,\pi)$ for any given initial $(s,y)$.
The following assumptions are in force throughout this paper.
\textbf{(A1)} $r(t), B(t)$ and $\sigma(t)$ are uniformly bounded on $ [0,T]$.
\textbf{(A2)} $B(t)\neq 0$ a.e.$t\in[0,T]$ and $\sigma(t)\sigma(t)^\top\geq \delta I, \forall t\in [0,T]$
for some $\delta>0$.
Given $(s,y)\in [0,T)\times\mathbb{R}$, the Markowitz mean--variance portfolio selection problem over $[s,T]$ is
\begin{equation}\label{prob1}
\min\limits_{\pi(\cdot)\in {\cal U}_{s,y}}\ \text{\rm Var}_{s,y}(X(T))
\end{equation}
\begin{equation}\label{ft0}
\text{subject to}\
\begin{cases}
\mathbb{E}_{s,y}[X(T)]=yf(s,T),\\
(X(\cdot),\pi(\cdot))\ \text{satisfy}\ (\ref{wealtheq}) \mbox{ with } X(s)=y
\end{cases}
\end{equation}
where $\text{\rm Var}_{s,y}$ and $\mathbb{E}_{s,y}$ denote respectively the variance and expectation conditional on $\mathcal{F}_s$ and $X(s)=y$, and
$f(u,v), 0\leq u\leq v\leq T$, is a given deterministic real-valued function satisfying $f(u,u) = 1, \forall u\in[0,T]$. The number $f(u,v)$ represents the desired growth factor over the time horizon $[u,v]$. It is economically sensible to let the expected terminal wealth target depend on the initial $(s,y)$, which is equivalent to the state-dependent risk aversion considered in \cite{BMZ14}. \cite{HJ17} consider a more general target $L(s,y)$ instead of $yf(s,T)$; see also Section 4 of this paper.
We add an assumption on $f$ throughout this paper:
\textbf{(A3)} $f\in C^1([0,T]\times[0,T])$, $f(u,v) \geq e^{\int_u^v r(t) dt}$, $\forall\ 0\leq u \leq v \leq T$, and $-\infty<\frac{\partial f}{\partial t}(t,T)|_{t=T}<\infty$.
\noindent The second part of this assumption is natural, demanding the target return to be at least as great as the risk-free return.
Given $(s,y)$, the relation between $\text{\rm Var}_{s,y}(X_*(T))$ and $\mathbb{E}_{s,y}[X_*(T)]$, where $X_*(T)$ is the optimal terminal wealth of the
problem (\ref{prob1}) -- (\ref{ft0}), is called an {\it efficient frontier} with respect to $(s,y)$, which gives the best risk--return tradeoff for future investment when standing at $(s,y)$.
The
problem (\ref{prob1}) -- (\ref{ft0}) has been solved explicitly in literature; see e.g. \cite[Theorem 2.1]{LZ06},\footnote{The previous results such as \cite[Theorem 2.1]{LZ06} are for the case when $(s,y)=(0,x_0)$, but they extend readily to arbitrary initial
$(s,y)$ because the underlying mathematical problem of the latter is the same.}
with the following {\it unique} optimal policy (conditional on $\mathcal{F}_s$ and $X(s)=y$)
\begin{equation}\label{us}
\begin{aligned}
\bm{\pi}_*(t,x;s,y)=-[\sigma(t)\sigma(t)^\top]^{-1}B(t)^\top \left[x-{\gamma}(s,T) e^{-\int_t^T r(v) dv}y\right],\;\;(t,x)\in [s,T)\times \mathbb{R},
\end{aligned}
\end{equation}
where
\begin{equation}\label{gamma}
{\gamma}(s,T):=
\frac{f(s,T)-e^{\int_s^T [r(v)-\rho(v)] dv}}{1-e^{-\int_s^T \rho(v) dv}},\;\;s\in[0,T),
\end{equation}
with
$$\rho(t):=B(t)[\sigma(t)\sigma(t)^\top]^{-1}B(t)^\top>0.$$
Note that l'H\^opital's rule along with Assumptions {\bf (A2)-(A3)} yields
that $\gamma(\cdot,T)$ is continuous at $T$, and hence uniformly bounded on $[0,T]$.
Substituting the policy (\ref{us}) into the wealth equation (\ref{wealtheq}) we obtain that the corresponding optimal wealth process is determined by the following SDE:
\begin{equation}\label{Ws}
\begin{cases}
dX_*(t)=\left[(r(t)-\rho(t))X_*(t)+{\gamma}(s,T)\rho(t) e^{-\int_t^T r(v) dv}y\right]dt \\
\ \ \ -B(t)(\sigma(t)\sigma(t)^\top)^{-1}\sigma(t)\left[X_*(t)-{\gamma}(s,T)e^{-\int_t^T r(v) dv}y\right]dW(t),\;\;t\in[s,T], \\
X(s)=y.
\end{cases}
\end{equation}
Finally, the efficient frontier at $(s,y)$ is
\begin{equation}\label{ef}
\text{\rm Var}_{s,y}(X_*(T))=\frac{1}{e^{\int_s^T \rho(v) dv}-1}\left(\mathbb{E}_{s,y}[X_*(T)]
-ye^{\int_s^T r(v) dv}\right)^2.
\end{equation}
In sharp contrast to the time-consistent setting,
the policy $\bm{\pi}_*(\cdot,\cdot;s,y)$ given by (\ref{us}) now depends on the initial pair $(s,y)$
{\it explicitly}.
If the agent sticks to this policy during the entire future time period $[s,T]$ without subsequently altering it, then it is the so-called optimal {\it pre-committed} policy. If the agent is na\"ive {\it \`a la} Strotz who reoptimizes at every subsequent time moment, then the policy (\ref{us}) will be abandoned immediately (indeed instantaneously) at any $\tilde s>s$. More precisely, suppose the agent carries out (\ref{us}) for a (little) while and reaches the state $X(\tilde s)$ at time $\tilde s>s$. Now the {\it current} initial time becomes $\tilde s$ and the {\it current} initial state is $X(\tilde s)$. If the agent reoptimizes the problem for the remaining duration $[\tilde s,T]$, then the corresponding policy at $(\tilde s,X(\tilde s))$ is (conditional on $\mathcal{F}_{\tilde s}$)
\begin{equation}\label{ut}
\begin{aligned}
\bm{\pi}_*(t,x;\tilde s,X(\tilde s))=-[\sigma(t)\sigma(t)^\top]^{-1}B(t)^\top \left[x-{\gamma}(\tilde s,T) e^{-\int_t^T r(v) dv} X(\tilde s)\right],\;\;(t,x)\in [\tilde s,T)\times \mathbb{R}.
\end{aligned}
\end{equation}
Clearly, the two policies (\ref{us}) and (\ref{ut}) are generally different
as two {\it functions} on $[\tilde s,T]\times \mathbb{R}$.
So, problem (\ref{prob1}) -- (\ref{ft0}) admits a policy (\ref{us}) that is optimal for the current $(s,y)$ {\it only}.
In other words, the pre-committed optimal policy depends inherently on $(s,y)$, which in turn causes the time-inconsistency of the policy and hence that of the problem, as discussed above. A time-inconsistent policy of the type (\ref{us}) is defined only for the given $(s,y)$.
\section{Na\"ive Policies}\label{Naive_analysis}
A na\"ivet\`e (``he") always ``reoptimizes" under current information; as a result he devises policies and then instantly abandons them in the continuous-time setting. Although at each given time he tries to follow the pre-committed optimal policy (\ref{us}) but his {\it eventual} policy due to the constant changes could be completely different from (\ref{us}). In this section, we define na\"ive policies rigorously, and then derive them in analytical form for the MV problem (\ref{prob1})--(\ref{ft0}).
\subsection{A $2^{-n}$-committed agent}
As discussed earlier, the difficulty of defining and analyzing na\"ive policies lies in the continuous-time setting of the problem. We overcome this difficulty by introducing an auxiliary agent, named the {\it $2^{-n}$-committed agent}, to approximate the behavior of the na\"ivet\'e.
A $2^{-n}$-committed agent (``she") is one who behaves ``in between" a pre-committer and a na\"ivet\'e. Specifically, she partitions the time horizon $[0,T]$ into $2^n$ equal-length intervals, with the partitioning points being $\{t_k\}_{k = 0}^{2^n}$ where $t_k = \frac{kT}{2^n}$. She first solves problem (\ref{prob1})-(\ref{ft0}) with $(s,y)=(0,x_0)$ to obtain the pre-committed optimal policy
$\bm{\pi}_*(\cdot,\cdot;0,x_0)$ defined by (\ref{us}). She implements and commits to this policy until time $t_1$ when her wealth becomes $X(t_1)$, at which point she re-solves problem (\ref{prob1})-(\ref{ft0}) with $(s,y)=(t_1,X(t_1))$ and switches to the policy $\bm{\pi}_*(\cdot,\cdot;t_1,X(t_1))$. She commits to this new policy until $t_2$ before changing it to $\bm{\pi}_*(\cdot,\cdot;t_2,X(t_2))$. She then repeats these steps until time $T$. Figure \ref{multimePlot} illustrates the resulting wealth process under this construction.
\begin{figure}
\caption{A sample path of the wealth process $X_n(\cdot)$ of the $2^{-n}$-committed agent.}
\label{multimePlot}
\end{figure}
Denote by $\{X_*(t;t_k): t\in[t_k,t_{k+1}]\}$ the above wealth process in the time interval $[t_k,t_{k+1}],\;k=0,1,\cdots,2^{n}-1$, with $X_*(0;0)=x_0$. By (\ref{Ws}), these processes $X_*(t;t_k),t\in[t_{k},t_{k+1}]$, $k=0,1,\cdots,2^{n}-1$, can be determined by the following SDEs recursively:
\begin{equation}\label{rec}
\begin{cases}
dX_*(t;t_k)=\left[(r(t)-\rho(t))X_*(t;t_k)+\gamma(t_k,T)\rho(t) e^{-\int_t^T r(v) dv}X_*(t_k;t_{k - 1})\right]dt \\
\ \ \ -B(t)(\sigma(t)\sigma(t)^\top)^{-1}\sigma(t)\left[X_*(t;t_k)-\gamma(t_k,T) e^{-\int_t^T r(v) dv}X_*(t_k;t_{k - 1})\right]dW(t),\;\; t\in[t_k,t_{k+1}], \\
X_*(t_k;t_k)=X_*(t_k;t_{k-1}),
\end{cases}
\end{equation}
where $X_*(t_0;t_{-1})$ is defined as $x_0$.
Now, by ``pasting" $X_*(\cdot;t_k)$, $k = 0,1,...,2^n-1$, we obtain the following process:
\begin{equation}\label{pasting}
X_n(s):=
\begin{cases}
X_*(s;0),& \mbox{$0\leq s< t_1$}, \\
X_*(s;t_1),& \mbox{$t_1\leq s< t_2$},\\
...\\
X_*(s;t_{2^n-1}),& \mbox{$t_{2^n-1}\leq s\leq T$},\\
\end{cases}
\end{equation}
which is the wealth process of the $2^{-n}$-committed agent, visualized by Figure \ref{multimePlot}. Obviously, this process is adapted and continuous on $[0,T]$.
\subsection{Na\"ive policies}
While this $2^{-n}$-committed agent behaves somewhere between a pre-committed agent and a na\"ive one, she is closer to the latter when $n$ becomes larger. Therefore, we define a na\"ive policy through the limit (in a certain sense) of the $2^{-n}$-committed wealth processes as $n\rightarrow\infty$.
\begin{definition}
If the $2^{-n}$-committed wealth processes $X_n$ converge to an adapted process $X$ in some sense, and the limiting process $X$ can be generated by a {\it time-consistent} admissible policy $\bm{\pi}^*=\bm{\pi}^*(\cdot,\cdot)$, then $\bm{\pi}^*$ is called a {\it na\"ive policy} of the problem (\ref{prob1})-(\ref{ft0}).
\end{definition}
Some remarks on this definition are in order. First, this definition applies to more general time-inconsistent problems instead of just the current Markowitz problem.
As such, we intentionally leave vague
the precise sense in which $X_n$ converge to $X$ in order to make the definition general and applicable to other problems. For the present problem, we will see momentarily that the convergence is in the weak-$L^2$ sense. Second, a na\"ive policy in itself must be time-consistent, meaning that it can no longer depend on any initial $(s,y)$ and, in particular, on $(0,x_0)$, even though each $X_n$ is indeed constructed starting from a specific pair $(0,x_0)$. Finally, we do not define a na\"ive policy as simply the limit of $2^{-n}$-committed policies, because policies are in general only measurable and they may not converge and are hard to analyze. Instead, we consider the limit of wealth processes that are much better behaved, and then use the limiting wealth equation to recover the corresponding na\"ive policy.
The following proposition, whose proof is deferred to Appendix \ref{proof_normfinite}, indicates that the $2^{-n}$-committed wealth processes $X_n$, $n=1,2,\cdots$, are uniformly bounded in $L^2_\mathcal{F}([0,T];\mathbb{R})$.
\begin{proposition}\label{normfinite}
It holds that $$||X_n||^2 := \mathbb{E}\int_0^T |X_n(s)|^2 ds < \infty,\ \forall n.$$
Moreover, $||X_n||^2$ is uniformly bounded in $n$.
\end{proposition}
Due to Proposition \ref{normfinite}, the sequence $\{X_n\}_{n=1}^\infty$ is uniformly bounded in the Hilbert space $L^2_\mathcal{F}([0,T];\mathbb{R})$, and hence is weakly compact. So there exists a weakly convergent subsequence (still denoted as $\{X_n\}_{n=1}^\infty$ without loss of generality) and a process $X\in L^2_\mathcal{F}([0,T];\mathbb{R})$ such that
$$X_n\rightarrow X\ \text{weakly in $L^2_\mathcal{F}([0,T];\mathbb{R})$}.$$
The following theorem is the main result of the paper, which characterizes this limiting process and, consequently, the na\"ive policy.
\begin{theorem}\label{theorem1}
The weak limit process $X$ satisfies the following SDE:
\begin{equation}\label{XSDE}
\begin{cases}
dX(t)=\left[(r(t)-\rho(t))+{\gamma}(t,T)\rho(t) e^{-\int_t^T r(s) ds}\right]X(t)dt \\
\ \ \ -B(t)(\sigma(t)\sigma(t)^\top)^{-1}\sigma(t)\left[1-{\gamma}(t,T)e^{-\int_t^T r(s) ds}\right]X(t)dW(t),\;\;t\in[0,T], \\
X(0)=x_0.
\end{cases}
\end{equation}
Moreover, the following is the na\"ive policy:
\begin{equation}\label{PI}\bm{\pi}^*(t,x)=-[\sigma(t)\sigma(t)^\top]^{-1}B(t)^\top [1-\gamma(t,T)e^{-\int_t^T r(s) ds}]x,\;\;(t,x)\in [0,T]\times \mathbb{R}.
\end{equation}
\end{theorem}
A proof of Theorem \ref{theorem1} is deferred to Appendix \ref{proof_theorem1}.
Note that the explicit policy (\ref{PI}) indeed does not depend on any initial pair $(s,y)$ and, in particular, on $(0,x_0)$. This means that even if the wealth process of the $2^{-n}$-committed agent is constructed from an arbitrarily different initial pair $(s,y)$, it will lead to the same na\"ive policy (\ref{PI}). On the other hand, this policy generates $X$ as its wealth process for the given initial $(0,x_0)$.
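As a purely numerical illustration of Theorem \ref{theorem1} (and not part of the formal development), the following Python sketch simulates, in a single-stock Black--Scholes market with assumed parameters and the assumed target $f(s,T)=e^{0.10(T-s)}$ (consistent with Assumption {\bf (A3)} for $r=0.03$), both the $2^{-n}$-committed wealth process of (\ref{pasting}) and the solution of the SDE (\ref{XSDE}), driven by the same Brownian path on the same Euler grid. As $n$ grows the two terminal wealths should become close; with one re-optimization per grid point the two schemes coincide.
\begin{verbatim}
import numpy as np

# assumed (illustrative) market parameters
r, b, sigma, T, x0 = 0.03, 0.08, 0.20, 1.0, 1.0
rho = ((b - r) / sigma) ** 2

def f(s):                       # assumed target growth factor f(s,T)
    return np.exp(0.10 * (T - s))

def gamma(s):                   # gamma(s,T) from the pre-committed solution
    return (f(s) - np.exp((r - rho) * (T - s))) / (1.0 - np.exp(-rho * (T - s)))

rng = np.random.default_rng(0)
steps = 2 ** 14                 # fine Euler grid on [0,T]
dt = T / steps
dW = rng.normal(0.0, np.sqrt(dt), steps)

def committed_terminal_wealth(n):
    # 2^{-n}-committed agent: re-optimize at t_k = k T / 2^n, commit in between
    X, Y, g = x0, x0, gamma(0.0)
    refit = steps // (2 ** n)   # grid steps between re-optimizations
    for i in range(steps):
        t = i * dt
        if i % refit == 0:
            Y, g = X, gamma(t)
        tgt = g * np.exp(-r * (T - t)) * Y
        X += ((r - rho) * X + rho * tgt) * dt - (b - r) / sigma * (X - tgt) * dW[i]
    return X

def naive_terminal_wealth():
    # Euler scheme for the naive wealth SDE
    X = x0
    for i in range(steps):
        t = i * dt
        h = gamma(t) * np.exp(-r * (T - t))
        X += ((r - rho) + h * rho) * X * dt - (b - r) / sigma * (1.0 - h) * X * dW[i]
    return X

print(naive_terminal_wealth(), [committed_terminal_wealth(n) for n in (2, 4, 6, 8)])
\end{verbatim}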
\section{Comparison between Na\"ive and Other Types of Policies}\label{Naive_comparisons}
In the continuous-time MV literature, two types of equilibrium policies by consistent planners have been introduced and studied: the {\it weak} equilibrium policies by \cite{BMZ14} and the {\it regular} equilibrium policies by \cite{HJ17}. In this section, we compare the na\"ive policies with these two types of equilibrium policies as well as the pre-committed ones, in a Black--Scholes market.
\subsection{Weak and regular equilibrium policies}
We first review the two types of equilibrium strategies, whose definitions can be found, in slight variants of the MV formulation, in \cite{BMZ14} and \cite{HJ17} respectively.
Given $(s,y)\in [0,T]\times\mathbb{R}$, \cite{BMZ14} consider the following problem:
\begin{equation}\label{BjorkP}
\max\limits_{\pi(\cdot)\in {\cal U}_{s,y}}\ J(s,y;\pi(\cdot)):=\mathbb{E}_{s,y}[X(T)]-\frac{\alpha(s,y)}{2}\text{\rm Var}_{s,y}(X(T))
\end{equation}
\begin{equation}\label{ft0BjorkP}
\text{subject to}\
(X(\cdot),\pi(\cdot))\ \text{satisfy}\ (\ref{wealtheq}) \mbox{ with } X(s)=y.
\end{equation}
In the objective function of this problem, there is a risk-aversion term $\alpha(s,y)>0$ that depends on the initial time $s$ and initial state $y$; see \cite{BMZ14} for the many discussions on the motivation of such a varying risk-aversion term.\footnote{\cite{BMZ14} consider only the state-dependent risk aversion
$\alpha(s,y)=\alpha(y)$, but the method and results therein readily extend to the time--state dependent case presented here.} The problem is again time-inconsistent.
\cite{BMZ14} study the behavior of a consistent planner by considering the equilibrium policies defined as follows.
Given an admissible (time-consistent) policy $\hat{\bm{\pi}}(\cdot,\cdot)$, construct a new policy $\bm{\pi}_h$ by
\begin{equation}\label{pihcons}
\bm{\pi}_h(t,x): =
\begin{cases}
\pi, &t\in[s,s + h),\;x\in\mathbb{R},\\
\hat{\bm{\pi}}(t,x), &t\in[0,T]\setminus [s,s + h),\;x\in\mathbb{R},
\end{cases}
\end{equation}
where $\pi\in\mathbb{R}^m$, $h > 0$ and $s\in[0,T)$ are arbitrarily given.
Let $\hat{\pi}(\cdot)$ and $\pi_h(\cdot)$ be respectively the portfolio processes generated by $\hat{\bm{\pi}}$ and $\bm{\pi}_h$ starting from $(s,y)$.
We say that $\hat{\bm{\pi}}$ is a {\it weak equilibrium policy} if the perturbed policy $\bm{\pi}_h$ is admissible and
\begin{equation}
\liminf\limits_{h\to 0} \frac{J(s,y;\hat{\pi}) - J(s,y;\pi_h)}{h} \geq 0,
\end{equation}
for all $\pi\in\mathbb{R}^m$ and $(s,y)\in [0,T)\times\mathbb{R}$.
On the other hand, \cite{HJ17} formulate the following problem:
\begin{equation}\label{XueP}
\min\limits_{\pi(\cdot)\in {\cal U}_{s,y}}\ \text{\rm Var}_{s,y}(X(T))
\end{equation}
\begin{equation}\label{ft}
\text{subject to}\
\begin{cases}
\mathbb{E}_{s,y}[X(T)]= L(s,y),\\
(X(\cdot),\pi(\cdot))\ \text{satisfy}\ (\ref{wealtheq}) \mbox{ with } X(s)=y,
\end{cases}
\end{equation}
where $L(s,y)$ indicates the expected terminal wealth target when the initial pair is $(s,y)$.\footnote{In the original formulation of \cite{HJ17}, the expected terminal wealth constraint is $\mathbb{E}_{s,y}[X(T)]\geq L(s,y)$,
which is equivalent to the equality constraint formulated here.} When $L(s,y) = yf(s,T)$, the problem (\ref{XueP})--(\ref{ft}) reduces to the problem (\ref{prob1})--(\ref{ft0}). \cite{HJ17} also study a consistent planner, except that they use the notion of {\it regular} equilibrium policies, which is very different from that of the weak equilibrium policies. Specifically, an admissible, time-consistent policy $\hat{\bm{\pi}}$ is called a {\it regular equilibrium policy} if for any $(s,y)\in [0,T)\times\mathbb{R}$ and any $\pi\in\mathbb{R}^m$ such that $\bm{\pi}_h$ constructed by (\ref{pihcons}) is admissible for sufficiently small $h>0$, we have\footnote{Here, the term ``admissible" requires the corresponding portfolio processes generated by the relevant policies for $(s,y)$ to also satisfy the expectation constraint in (\ref{ft}).}
\begin{equation}
\text{\rm Var}_{s,y}(X^{\pi_h}(T))-\text{\rm Var}_{s,y}(X^{\hat{\pi}}(T))\geq0
\end{equation}
for sufficiently small $h>0$, where $X^{\hat{\pi}}(T)$ and $X^{\pi_h}(T)$ are the terminal wealth values, both starting from $(s,y)$ and under $\hat{\bm{\pi}}$ and $\bm{\pi}_h$ respectively.
The difference between the problems (\ref{BjorkP})--(\ref{ft0BjorkP}) and (\ref{XueP})--(\ref{ft}) is that the former uses a weighting coefficient $\alpha(s,y)/2$ in its objective function while the latter takes $L(s,y)$ in its constraint. The two problems are related via the Lagrange multiplier method. As a result, if we choose $\alpha(s,y)$ and $L(s,y)$ in a certain way, then the respective pre-committed optimal policies for the two problems coincide, as stipulated in the following proposition.
\begin{proposition}\label{BjorkXueEq}
If
\begin{equation}\label{GL}
\frac{1}{\alpha(s,y)}e^{\int_s^T\rho(t)dt}+ye^{\int_s^Tr(t)dt}
=\frac{L(s,y)-e^{\int_s^T[r(t)-\rho(t)]dt}y}{1-e^{-\int_s^T\rho(t)dt}},\;\;\forall (s,y)\in [0,T]\times\mathbb{R}
\end{equation}
holds, then the pre-committed optimal policies for (\ref{BjorkP})--(\ref{ft0BjorkP}) and (\ref{XueP})--(\ref{ft}) are the same for any $(s,y)\in [0,T]\times\mathbb{R}$.
\end{proposition}
\begin{proof}
It follows from the equations (5.12), (5.1) and (6.7) in \cite{ZL00}
that the pre-committed optimal policy of (\ref{BjorkP})--(\ref{ft0BjorkP}) is
\begin{equation}
\begin{aligned}
\bar{\bm{\pi}}_*(t,x;s,y)=-[\sigma(t)\sigma(t)^\top]^{-1}B(t)^\top \left[x-{\bar\gamma}(s,T,y) e^{-\int_t^T r(v) dv}\right],\;\;(t,x)\in [s,T]\times \mathbb{R},
\end{aligned}
\end{equation}
where
$$ \bar{\gamma}(s,T,y):= \frac{1}{\alpha(s,y)}e^{\int_s^T \rho(v)dv }+e^{\int_s^T r(v) dv}y.$$
On the other hand, it follows from \cite[Theorem 2.1]{LZ06} that the pre-committed optimal policy of (\ref{XueP})--(\ref{ft}) is
\begin{equation}
\begin{aligned}
\tilde{\bm{\pi}}_*(t,x;s,y)=-[\sigma(t)\sigma(t)^\top]^{-1}B(t)^\top \left[x-{\tilde\gamma}(s,T,y) e^{-\int_t^T r(v) dv}\right],\;\;(t,x)\in [s,T]\times \mathbb{R},
\end{aligned}
\end{equation}
where
$$\tilde{\gamma}(s,T,y):=\frac{L(s,y) - e^{\int_s^T [r(v) - \rho(v)] dv}y}{1 - e^{-\int_s^T\rho(v) dv}}.$$
It is now evident that if (\ref{GL}) is satisfied, then $\bar{\gamma}(s,T,y) \equiv \tilde{\gamma}(s,T,y)$ leading to $\bar{\bm{\pi}}_*(t,x;s,y) \equiv \tilde{\bm{\pi}}_*(t,x;s,y)$.
\end{proof}
The condition (\ref{GL}) ensures that the pre-committed solutions of the two problems
coincide. As a result, the na\"ive policies of the two problems are also identical because
they are obtained via the limit of pre-committed policies. However, (\ref{GL}) does not necessarily lead to the same weak/regular equilibrium policies of the two problems, because equilibrium policies are not based on pre-committed ones.
\subsection{Comparisons}
We now compare the na\"ive policies with the weak/regular equilibrium policies and the pre-committed polices, in a Black--Scholes market for simplicity. Specifically, there is a risk-free asset and only one risky asset (i.e. $m=1$) with $r(t)\equiv r>0$,
$B(t)\equiv b-r>0$, $\sigma(t)\equiv \sigma>0$. As a result, $\rho(t)\equiv \rho=(\frac{b-r}{\sigma})^2>0$.
We carry out the comparison for two cases. In Subsection \ref{gammax}, we choose $\alpha(s,y)=\frac{\alpha}{y}$ for some constant $\alpha>0$ in the problem (\ref{BjorkP})--(\ref{ft0BjorkP}), which is also a case examined closely in \cite{BMZ14}. Subsection \ref{Ltx} studies the case when $L(s,y)=ye^{k(T-s)}$ for some constant $k>r$ in the problem (\ref{XueP})--(\ref{ft}). In each case, we choose $f(\cdot,\cdot)$, $L(\cdot,\cdot)$ and $\alpha(\cdot,\cdot)$ in such a way (e.g. to satisfy (\ref{GL})) that the different formulations of the MV problem are consistent in their respective pre-committed optimal policies.
\subsubsection{The case $\alpha(s,y)=\frac{\alpha}{y}$}\label{gammax}
When $\alpha(s,y)=\frac{\alpha}{y}$, the corresponding $L$ according to (\ref{GL}) is
\begin{equation}\label{Le1}
L(s,y)=y\left[\frac{1}{\alpha}\left(e^{(T-s)\rho}-1+\alpha e^{(T-s)r}\right)\right],
\end{equation}
whereas the corresponding $f$ is
\begin{equation}\label{f1}
f(s,T)=\frac{1}{\alpha}\left[e^{(T-s)\rho}-1+\alpha e^{(T-s)r}\right].
\end{equation}
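For the reader's convenience, here is the short computation behind (\ref{Le1}): writing $\tau:=T-s$ and substituting $\alpha(s,y)=\frac{\alpha}{y}$ into (\ref{GL}), we get
\[
L(s,y)=e^{(r-\rho)\tau}y+\left(1-e^{-\rho\tau}\right)\left[\frac{y}{\alpha}e^{\rho\tau}+ye^{r\tau}\right]
=y\left[\frac{1}{\alpha}\left(e^{\rho\tau}-1\right)+e^{r\tau}\right],
\]
which is exactly (\ref{Le1}).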
It is easy to check that this $f$ satisfies Assumption \textbf{(A3)}. By Theorem \ref{theorem1},
the na\"ive policy is
\begin{equation}\label{PII}
\bm{\pi}^*(t,x)=-\frac{b-r}{\sigma^2}\left[1-\frac{f(t,T)
-e^{(r-\rho)(T-t)}}{1-e^{-\rho(T-t)}}
e^{-r(T-t)}\right]x,\;\;(t,x)\in [0,T]\times \mathbb{R}.
\end{equation}
Substituting the expression of $f$ in (\ref{f1}) into the above and going through some simple computation, we finally get
\begin{equation}\label{PIII}
\bm{\pi}^*(t,x)=\frac{b-r}{\alpha\sigma^2}e^{(\rho-r)(T-t)}x,\;\;(t,x)\in [0,T]\times \mathbb{R}.
\end{equation}
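For the reader's convenience, the computation leading to (\ref{PIII}) goes as follows: with $\tau:=T-t$ and $f$ given by (\ref{f1}),
\[
\frac{f(t,T)-e^{(r-\rho)\tau}}{1-e^{-\rho\tau}}e^{-r\tau}
=\frac{\frac{1}{\alpha}\left(e^{\rho\tau}-1\right)+e^{r\tau}\left(1-e^{-\rho\tau}\right)}{1-e^{-\rho\tau}}e^{-r\tau}
=\frac{1}{\alpha}e^{(\rho-r)\tau}+1,
\]
where we used $\frac{e^{\rho\tau}-1}{1-e^{-\rho\tau}}=e^{\rho\tau}$. Hence the bracket in (\ref{PII}) equals $-\frac{1}{\alpha}e^{(\rho-r)\tau}$, which yields (\ref{PIII}).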
The {\it risky weight} function of this policy, defined as the ratio between the dollar amount invested in the stock and the total wealth, and denoted by $c_{na}$, is then
\begin{equation}\label{cnacase1}
c_{na}(t):=\frac{\bm{\pi}^*(t,x)}{x}=\frac{b-r}{\alpha\sigma^2}e^{(\rho-r)(T-t)},
\;\;t\in [0,T],
\end{equation}
which turns out to be a function of $t$ only.
On the other hand, when $\alpha(s,y)=\frac{\alpha}{y}$, Theorem 4.6 in \cite{BMZ14}
gives
the weak equilibrium policy of the problem (\ref{BjorkP})--(\ref{ft0BjorkP}) as
\begin{equation}
\bm{\pi}_{we}(t,x)=c_{we}(t)x,
\end{equation}
where $c(t)\equiv c_{we}(t)$ is the unique solution to the following integral equation
\begin{equation}\label{cwe}
c(t)=\frac{b-r}{\alpha\sigma^2}\left[e^{-\int_t^T[r+(b-r)c(s)+\sigma^2c(s)^2]ds}+\alpha e^{-\int_t^T \sigma^2 c(s)^2 ds}-\alpha\right].
\end{equation}
Similarly, $c_{we}$ is the risky weight function of the weak equilibrium policy.
Finally, we can rewrite (\ref{Le1}) as
\begin{equation}\label{Le11}
L(s,y)=ye^{\int_s^T[r+\psi(t)]dt}
\end{equation}
where
\begin{equation}\label{psi}
\psi(t):=\frac{r+(\rho-r)e^{\rho(T-t)}}{\alpha e^{(T-t)r}+e^{\rho(T-t)}-1}.
\end{equation}
Applying Theorem 1-i in \cite{HJ17} and noting that the solution to the problem (2.10) therein is $v^*(t)=\frac{\psi(t)}{b-r}$, we obtain the regular equilibrium policy for (\ref{XueP})--(\ref{ft}) to be
\begin{equation}
\bm{\pi}_{re}(t,x)=c_{re}(t)x,
\end{equation}
where
\begin{equation}
c_{re}(t):=\frac{\psi(t)}{b-r}=\frac{1}{b-r}\frac{r+(\rho-r)e^{\rho(T-t)}}{\alpha e^{(T-t)r}+e^{\rho(T-t)}-1},\;\;t\in [0,T]
\end{equation}
is the risky weight of this equilibrium policy at $t\in[0,T]$.
The following proposition shows that the na\"ive policy allocates {\it strictly} more weight to the risky asset than the two equilibrium policies at {\it any} time before $T$.
\begin{proposition}\label{compare_naive_1}
In the Black--Scholes market, if $\alpha(s,y)=\frac{\alpha}{y}$, then we have
$$c_{we}(t)<c_{na}(t),\;\;c_{re}(t)<c_{na}(t),\;\;\forall t\in[0,T),$$
for any $\alpha>0$.
\end{proposition}
\begin{proof}
Let us first prove $c(t)\equiv c_{we}(t)<c_{na}(t)\;\;\forall t\in[0,T)$.
We have the obvious inequality
\begin{equation}\label{oi}
\rho+(b-r)c(s)+\sigma^2c(s)^2>0,\;\;\forall s\in[0,T)
\end{equation}
because $\Delta:=(b-r)^2-4\rho\sigma^2=-3(b-r)^2<0$. Recalling that $c(\cdot)$ satisfies (\ref{cwe}), we deduce
\begin{equation}
\begin{aligned}
c_{we}(t)&=\frac{b-r}{\alpha\sigma^2}\left[e^{-\int_t^T[r+(b-r)c(s)+\sigma^2c(s)^2]ds}+\alpha e^{-\int_t^T \sigma^2 c(s)^2 ds}-\alpha\right]\\
&\leq \frac{b-r}{\alpha\sigma^2}e^{-\int_t^T[r+(b-r)c(s)+\sigma^2c(s)^2]ds}\\
&< \frac{b-r}{\alpha\sigma^2}e^{-\int_t^T(r-\rho)ds}\\
&=\frac{b-r}{\alpha\sigma^2}e^{(\rho-r)(T-t)}=c_{na}(t),\ \forall t\in[0,T).
\end{aligned}
\end{equation}
Next, we prove $c_{re}(t)<c_{na}(t)\;\;\forall t\in[0,T)$. Indeed
\[ \begin{aligned}
c_{re}(t)&=\frac{1}{b-r}\frac{r+(\rho-r)e^{\rho(T-t)}}{\alpha e^{(T-t)r}+e^{\rho(T-t)}-1}\\
&<\frac{1}{b-r}\frac{\rho e^{\rho(T-t)}}{\alpha e^{(T-t)r}}\\
&=\frac{b-r}{\alpha\sigma^2}e^{(\rho-r)(T-t)}=c_{na}(t),\ \forall t\in[0,T).
\end{aligned}
\]
The proof is complete.
\end{proof}
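As a quick numerical illustration of Proposition \ref{compare_naive_1} (it plays no role in the proofs), the following minimal Python sketch evaluates the three risky weight functions on a time grid; the market parameters below are arbitrary illustrative values, and the weak equilibrium weight is computed by a heuristic fixed-point iteration of (\ref{cwe}), whose convergence we do not claim in general.
\begin{verbatim}
import numpy as np

# Illustrative market parameters (assumed for this sketch, not from the paper)
b, r, sigma, alpha, T = 0.08, 0.03, 0.2, 1.0, 1.0
rho = ((b - r) / sigma) ** 2

ts = np.linspace(0.0, T, 501)
dt = ts[1] - ts[0]

c_na = (b - r) / (alpha * sigma**2) * np.exp((rho - r) * (T - ts))
c_re = (r + (rho - r) * np.exp(rho * (T - ts))) / \
       ((b - r) * (alpha * np.exp(r * (T - ts)) + np.exp(rho * (T - ts)) - 1.0))

# weak equilibrium weight: heuristic fixed-point iteration of the integral equation
c = np.zeros_like(ts)
for _ in range(500):
    g1 = r + (b - r) * c + sigma**2 * c**2        # integrand of the first exponent
    g2 = sigma**2 * c**2                          # integrand of the second exponent
    I1 = np.flip(np.cumsum(np.flip(g1))) * dt     # crude approximation of int_t^T g1 ds
    I2 = np.flip(np.cumsum(np.flip(g2))) * dt
    c_new = (b - r) / (alpha * sigma**2) * (np.exp(-I1) + alpha * np.exp(-I2) - alpha)
    done = np.max(np.abs(c_new - c)) < 1e-12
    c = c_new
    if done:
        break

print("c_we(0) =", c[0], " c_re(0) =", c_re[0], " c_na(0) =", c_na[0])
\end{verbatim}
Proposition \ref{compare_naive_1} guarantees that, up to discretization error, the printed values satisfy $c_{we}(0)<c_{na}(0)$ and $c_{re}(0)<c_{na}(0)$.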
So na\"ive policies take more risk exposure than the two types of equilibrium policies. It is interesting to compare the na\"ivet\`e also with a pre-committer, realizing that the former strives to follow the latter at {\it every} initial pair $(s,y)$. Take $(s,y)=(0,x_0)$ for example. The pre-committer's expected terminal wealth is
\begin{equation}\label{f0T} \mathbb{E}_{0,x_0}[X_*(T)]=x_0f(0,T)={x_0}e^{rT}\frac{1}{\alpha}\left[e^{(\rho -r) T}-e^{-rT}+\alpha \right],
\end{equation}
noting (\ref{f1}). Although the na\"ivet\`e's original expected target return at $(0,x_0)$ was also $x_0f(0,T)$, he subsequently changes his mind all the time, so his {\it actual} target return at $(0,x_0)$ can deviate significantly from the original one. To see this, we plug the na\"ive policy (\ref{PIII}) into the wealth equation (\ref{wealtheq}) to obtain
{\small
\begin{equation}\label{wealtheqex}
dX^*(t)=\left[rX^*(t)+\frac{1}{\alpha}\rho e^{(\rho-r)(T-t)}X^*(t)\right]dt+\frac{b-r}{\alpha\sigma}e^{(\rho-r)(T-t)}X^*(t)dW(t),\ t\in[0,T];\;\; X^*(0)=x_0.
\end{equation}}
Taking the integral form of this SDE and applying expectation on both sides, we get an ODE in terms of $\mathbb{E}[X^*(\cdot)]\equiv \mathbb{E}_{0,x_0}[X^*(\cdot)]$. Solving this ODE we arrive at
\begin{equation}\label{f0T0} \mathbb{E}_{0,x_0}[X^*(T)]={x_0}e^{rT}e^{\frac{1}{\alpha}\frac{\rho}{\rho-r}[e^{(\rho -r)T}-1]}.
\end{equation}
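For completeness, the ODE referred to above reads (the stochastic integral in (\ref{wealtheqex}) has zero expectation)
\[
\frac{d}{dt}\mathbb{E}_{0,x_0}[X^*(t)]=\Big[r+\frac{\rho}{\alpha}e^{(\rho-r)(T-t)}\Big]\mathbb{E}_{0,x_0}[X^*(t)],\qquad \mathbb{E}_{0,x_0}[X^*(0)]=x_0,
\]
and its solution at $t=T$ is
\[
\mathbb{E}_{0,x_0}[X^*(T)]=x_0\exp\Big(rT+\frac{1}{\alpha}\int_0^T\rho\, e^{(\rho-r)(T-t)}dt\Big),
\]
which, for $\rho\neq r$, is precisely (\ref{f0T0}).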
Recall that $\alpha>0$ is the risk aversion coefficient, and the smaller $\alpha$ the less risk averse the agent is. Comparing (\ref{f0T0}) with (\ref{f0T}) and noting that $\frac{\rho}{\rho-r}[e^{(\rho -r)T}-1]>0$ always holds, the na\"ivet\`e's expected terminal wealth is larger than the pre-committer's when $\alpha$ is small, and the former grows exponentially fast while the latter does only linearly in $\alpha^{-1}$ as $\alpha\to 0$. So a na\"ive policy ends up achieving a much higher expected terminal wealth than a pre-committed one which is also his {\it originally} planned target.\footnote{This also reconciles with the previously proved fact that na\"ive policies are more exposed to the stock than equilibrium ones.} However, this by no means implies that the former is superior to the latter because in an MV model there are two criteria and the variance is as important as the return. Indeed, it is straightforward to check that the na\"ive policy (\ref{PII}) is {\it different} from the {\it unique} pre-committed optimal policy (\ref{us}) under the {\it new} expected terminal wealth (\ref{f0T0}), hence must be MV {\it inefficient}.\footnote{Alternatively, one can calculate $\text{\rm Var}_{0,x_0}(X^*(T))$ and show that it is strictly larger than the right hand side of (\ref{ef}) with the expected terminal wealth given by (\ref{f0T0}) and $(s,y)=(0,x_0)$. In other words,
$(\mathbb{E}_{0,x_0}[X^*(T)],\text{\rm Var}_{0,x_0}(X^*(T)))$ lies {\it off} the efficient frontier (\ref{ef}).
Details are left to interested readers.} In other words, the na\"ive policy (\ref{PII}) takes more risk than it needs, as dictated by the efficient frontier, in order to achieve a higher expected terminal wealth (\ref{f0T0}).
To summarize, in the current MV setting, a na\"ive policy is more risk-loving than the other types of policies while expecting a higher terminal wealth. Although at every $(s,y)$ it tries to follow the pre-committed optimal policy, the policy actually implemented turns out to be very different. It is MV inefficient and certainly not ``dynamically optimal'' in any sense at any given $(s,y)$.
\subsubsection{The case $L(t,x)=xe^{k(T-t)}$}\label{Ltx}
We now consider the case when $L(t,x)=xe^{k(T-t)}$, where $k>r$ (otherwise the problem (\ref{XueP})--(\ref{ft}) is trivial). The corresponding $f$ is
$f(t,T)=e^{k(T-t)},$
which satisfies Assumption \textbf{(A3)}.
Substituting this into (\ref{PII}) we obtain the na\"ive policy
\[
\bm{\pi}^*(t,x)=c_{na}(t)x,\;\;(t,x)\in [0,T]\times \mathbb{R}
\]
where the risky weight is
\begin{equation}\label{cnacase2}
c_{na}(t)=\frac{\bm{\pi}^*(t,x)}{x}
=\frac{b-r}{\sigma^2}\frac{e^{(k-r)(T-t)}-1}{1-e^{-\rho(T-t)}},
\;\;t\in [0,T].
\end{equation}
Next, it follows from (\ref{GL}) that the corresponding
\begin{equation}\label{ge1}
\alpha(s,y)=\frac{\phi(s)}{y}
\end{equation}
where $\phi(s):=\frac{e^{\rho(T-s)}-1}{e^{k(T-s)}-e^{r(T-s)}}>0$.
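For the reader's convenience, the computation is as follows: with $\tau:=T-s$ and $L(s,y)=ye^{k\tau}$, (\ref{GL}) gives
\[
\frac{1}{\alpha(s,y)}=e^{-\rho\tau}\left[\frac{y\left(e^{k\tau}-e^{(r-\rho)\tau}\right)}{1-e^{-\rho\tau}}-ye^{r\tau}\right]
=y\,\frac{e^{k\tau}-e^{r\tau}}{e^{\rho\tau}-1},
\]
that is, $\alpha(s,y)=\frac{\phi(s)}{y}$ with $\phi$ as defined above.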
Again, by Theorem 4.6 in \cite{BMZ14} we get the weak equilibrium policy of the problem (\ref{BjorkP})--(\ref{ft0BjorkP}) to be
\[
\bm{\pi}_{we}(t,x)=c_{we}(t)x,
\]
where $c(t)\equiv c_{we}(t)$ uniquely solves
\begin{equation}\label{cwe2}
c(t)=\frac{b-r}{\phi(t)\sigma^2}\left[e^{-\int_t^T[r+(b-r)c(s)+\sigma^2c(s)^2]ds}+\phi(t) e^{-\int_t^T \sigma^2 c(s)^2 ds}-\phi(t)\right].
\end{equation}
Finally, by Theorem 1-i in \cite{HJ17}, the regular equilibrium policy for (\ref{XueP})--(\ref{ft}) is
\[
\bm{\pi}_{re}(t,x)=c_{re}(t)x,
\]
where
\begin{equation}
c_{re}(t):=\frac{k-r}{b-r},\;\;t\in [0,T].
\end{equation}
\begin{proposition}\label{compare_naive_2}
In the Black--Scholes market, if $L(t,x)=xe^{k(T-t)}$, then we have
$$c_{we}(t)<c_{na}(t),\;\;c_{re}(t)<c_{na}(t),\;\;\forall t\in[0,T),$$
for any $k>r$.
\end{proposition}
\begin{proof}
It follows from (\ref{cwe2}) that
\begin{equation}
\begin{aligned}
c_{we}(t)\equiv c(t)&=\frac{b-r}{\phi(t)\sigma^2}\left[e^{-\int_t^T[r+(b-r)c(s)+\sigma^2c(s)^2]ds}+\phi(t) e^{-\int_t^T \sigma^2 c(s)^2 ds}-\phi(t)\right]\\
&\leq \frac{b-r}{\phi(t)\sigma^2}e^{-\int_t^T[r+(b-r)c(s)+\sigma^2c(s)^2]ds}\\
&< \frac{b-r}{\phi(t)\sigma^2}e^{-\int_t^T(r-\rho)ds}\\
&=\frac{b-r}{\phi(t)\sigma^2}e^{(\rho-r)(T-t)}\\
&=\frac{b-r}{\sigma^2}\frac{e^{(k-r)(T-t)}-1}{1-e^{-\rho(T-t)}}=c_{na}(t),\ \forall t\in[0,T),
\end{aligned}
\end{equation}
where we have used (\ref{oi}) to get the second inequality and the definition of $\phi(\cdot)$ to obtain the second-to-last equality.
Next, applying the general inequality
\[ \frac{e^x-1}{1-e^{-y}}>\frac{x}{y},\;\;\forall x>0,\;y>0,
\]
(which holds because $\frac{e^x-1}{x}>1>\frac{1-e^{-y}}{y}$ for all $x,y>0$), we deduce
\[c_{na}(t)
=\frac{b-r}{\sigma^2}\frac{e^{(k-r)(T-t)}-1}{1-e^{-\rho(T-t)}}
>\frac{b-r}{\sigma^2}\frac{k-r}{\rho}=\frac{k-r}{b-r}=c_{re}(t).
\]
The proof is complete.
\end{proof}
We can also show that the na\"ive policy is not MV efficient with respect to any initial $(s,y)$ in the current case. Because the analysis is similar to that in the previous subsection, we omit the details here.
\section{Conclusions}\label{Naive_conclusions}
In this paper we define precisely and derive rigorously the policies implemented by a na\"ive agent, a notion originally put forth by \cite{SR56}, for a continuous-time Markowitz model that is intrinsically time inconsistent. Such an agent attempts to optimize at any given time but, since optimal policies depend on when and where one makes them in a time-inconsistent problem, in effect constantly changes his policies. Ironically, the policy a na\"ivet\'e actually executes may be anything but what he originally desired. At any given time and state he sets an expected investment target and wants to achieve mean--variance efficiency, but we show that his final policy ends up with a (much) higher target return and an even higher variance, which overall renders it mean--variance {\it in}efficient. Moreover, na\"ive policies are universally riskier than their consistent-planning counterparts.
Studying na\"ive behaviors in continuous-time problems is a nearly uncharted research area. From a behavioral economics perspective, it is fascinating to inquire and understand how an originally well-intended policy may go wrong or even go opposite when one insists on optimizing {\it all the time}. The definition of na\"ive policies and the approach to derive them in this paper is generalizable to other types of problems such as those with non-exponential discounting and probability weighting.
As such, we hope the paper has also set a stage for further study of these problems.
\noindent\textbf{{\Large Appendices}}
\appendix
\section{Proof of Proposition \ref{normfinite}}\label{proof_normfinite}
The main idea of the proof is to find a \emph{deterministic} function $Y$ to bound $X_n^2$, which is stated in the following lemma.
\begin{lemma}\label{boundode}
Let $Y$ be the solution of the following ODE
\begin{equation}
dY(s)=\left[R^*+(\gamma^{*})^2e^{-2\int_s^T r(v) dv }\rho(s)\right]Y(s)ds,\;s\in[0,T];\;\;
Y(0)=x_0^2,
\end{equation}
where
$$R^*:=\max\limits_{0\leq s\leq T}|2r(s)-\rho(s)|,\ \gamma^{*}:=\max\limits_{0\leq s\leq T}\gamma(s,T).$$
Then, we have, for every $k=0,1,...,2^n-1$,
$$\mathbb{E}[X_*(s;t_k)^2]\leq Y(s),\ \;s\in[t_k,t_{k+1}].$$
\end{lemma}
\begin{proof}
By Assumptions {\bf (A1)--(A3)}, it is clear that $R^*<\infty$ and $\gamma^*<\infty$.
Recall $X_*(\cdot;t_k)$ satisfies the SDE (\ref{rec}) on $[t_k,t_{k+1}]$ for $k=0,1,...,2^n-1$. Applying It\^o's formula to $X_*(t;t_k)^2$ and then taking
conditional expectation on $\mathcal{F}_{t_k}$ we obtain
the ($\omega$-wise) ODE
{\small
\begin{equation}\label{ode0}
\begin{cases}
d\mathbb{E}[X_*(t;t_k)^2|\mathcal{F}_{t_k}]=\left\{(2r(t)-\rho(t))\mathbb{E}[X_*(t;t_k)^2|\mathcal{F}_{t_k}]
+\gamma(t_k,T)^2\rho(t)e^{-2\int_t^T r(v) dv }X_*(t_k;t_k)^2\right\}dt,\;\; t\in[t_k,t_{k+1}],\\
\mathbb{E}[X_*(t_k;t_k)^2|\mathcal{F}_{t_k}]=X_*(t_k;t_k)^2.
\end{cases}
\end{equation}}
Consider a new stochastic process $Z(\cdot;t_k)$ which satisfies the ODE on $[t_k,t_{k+1}]$ for $k=0,1,...,2^n-1$:
\begin{equation}\label{ode1}
\begin{cases}
dZ(t;t_k)=\left[R^*Z(t;t_k)+\gamma(t_k,T)^2\rho(t)e^{-2\int_t^T r(v) dv }X_*(t_k;t_k)^2\right]dt,\;\; t\in[t_k,t_{k+1}],\\
Z(t_k;t_k)=X_*(t_k;t_k)^2.
\end{cases}
\end{equation}
Because $|2r(t)-\rho(t)|\leq R^*$, $t\in[0,T]$, a comparison theorem of ODEs yields
\begin{equation}\label{comp0}
\mathbb{E}[X_*(t;t_k)^2|\mathcal{F}_{t_k}]\leq Z(t;t_k),\;\mbox{a.s.}, \;\;k=0,1,...,2^n-1.
\end{equation}
Now, we construct another stochastic process $\bar Z(\cdot;t_k)$ on $[t_k,t_{k+1}]$ for $k=0,1,...,2^n-1$:
\begin{equation}\label{eq1}
\begin{cases}
d\bar Z(t;t_k)=\left[R^*+(\gamma^*)^2\rho(t)e^{-2\int_t^T r(v) dv }\right]\bar Z(t;t_k)dt,\;\; t\in[t_k,t_{k+1}],\\
\bar Z(t_k;t_k)=X_*(t_k;t_k)^2.
\end{cases}
\end{equation}
It follows from (\ref{ode1}) that $Z(t;t_k)$ increases in $t\in[t_k,t_{k+1}]$; hence $Z(t;t_k)\geq X_*(t_k;t_k)^2$ for $t\in[t_k,t_{k+1}]$. Then, we get
\begin{equation}\label{eq2}
\begin{aligned}
\frac{dZ(t;t_k)}{dt}&=R^*Z(t;t_k)+\gamma(t_k,T)^2\rho(t)e^{-2\int_t^T r(v) dv }X_*(t_k;t_k)^2\\
&\leq \left[R^*+\gamma(t_k,T)^2\rho(t)e^{-2\int_t^T r(v) dv }\right]Z(t;t_k)\\
&\leq \left[R^*+(\gamma^*)^2\rho(t)e^{-2\int_t^T r(v) dv }\right]Z(t;t_k).
\end{aligned}
\end{equation}
Comparing (\ref{eq1}) and (\ref{eq2}), we conclude from Gronwall's inequality that
\begin{equation}\label{comp1}
Z(t;t_k)\leq \bar Z(t;t_k), \;\mbox{a.s.},\; t\in[t_k,t_{k+1}],\;k=0,1,...,2^n-1.
\end{equation}
To finish the proof we use mathematical induction on $k$. When $k=0$, $t\in [0,t_1]$, it follows from (\ref{comp0}) and (\ref{comp1}) that
\begin{equation}
\mathbb{E}[X_*(t;0)^2]=\mathbb{E}[\mathbb{E}[X_*(t;0)^2|\mathcal{F}_0]]\leq \mathbb{E}[Z(t;0)]\leq \mathbb{E}[\bar Z(t;0)]=Y(t).
\end{equation}
Now, assume that when $k=m-1$, the following holds:
\begin{equation}\label{comp3}
\mathbb{E}[X_*(t;t_{m-1})^2]\leq Y(t),\;\; t\in [t_{m-1},t_{m}].
\end{equation}
By (\ref{comp0}) and (\ref{comp1}) we obtain
\begin{equation}\label{comp5}
\begin{aligned}
\mathbb{E}[X_*(t;t_m)^2]&=\mathbb{E}[\mathbb{E}[X_*(t;t_m)^2|\mathcal{F}_{t_m}]]\\
&\leq \mathbb{E}[Z(t;t_m)]\\
&\leq \mathbb{E}[\bar Z(t;t_m)],\;\;t\in [t_{m},t_{m+1}]
\end{aligned}
\end{equation}
where the initial value of $\mathbb{E}[\bar Z(\cdot;t_m)]$ on $[t_{m},t_{m+1}]$ is $\mathbb{E}[X_*(t_m;t_{m})^2]\equiv \mathbb{E}[X_*(t_m;t_{m-1})^2]$. However, (\ref{comp3}) gives $\mathbb{E}[X_*(t_m;t_{m-1})^2]\leq Y(t_m)$, whereas $\mathbb{E}[\bar Z(\cdot;t_m)]$ and $Y(\cdot)$ satisfy the same ODE on $[t_m,t_{m+1}]$. Thus $\mathbb{E}[\bar Z(t;t_m)]\leq Y(t)$ on $[t_m,t_{m+1}]$. Combining with (\ref{comp5}), we get the desired result.
\end{proof}
We are now ready to prove Proposition \ref{normfinite}.
By Lemma \ref{boundode}, we have
\begin{equation}\label{bound1}
\begin{aligned}
||X_n||^2&=\mathbb{E}\int_0^T X_n(s)^2 ds
=\sum\limits_{k=1}^{2^n}\int_{t_{k-1}}^{t_k}\mathbb{E}[X_*(s;t_{k-1})^2 ]ds\\
&\leq \sum\limits_{k=1}^{2^n}\int_{t_{k-1}}^{t_k}Y(s)ds=\int_0^T Y(s)ds<\infty.
\end{aligned}
\end{equation}
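We remark that, since the ODE defining $Y$ is linear, $Y$ is given explicitly by
\[
Y(s)=Y(0)\exp\left(\int_0^s\Big[R^*+(\gamma^{*})^2e^{-2\int_u^T r(v) dv}\rho(u)\Big]du\right),\quad s\in[0,T].
\]
In particular, $Y$ is non-decreasing on $[0,T]$ (a fact used in Appendix \ref{proof_theorem1}) and $\int_0^T Y(s)ds<\infty$.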
\section{Proof of Theorem \ref{theorem1}}\label{proof_theorem1}
To ease notation we use the following
\begin{equation}\label{nota1}
\begin{cases}
\gamma(t):=\gamma(t,T),\;A(t):=r(t)-\rho(t), \;
C(t):=e^{-\int_t^T r(v) dv}\rho(t), \\
D(t):=B(t)(\sigma(t)\sigma(t) ^\top )^{-1}\sigma(t)e^{-\int_t^T r(v) dv}, \;
F(t):=B(t)(\sigma(t)\sigma(t)^\top )^{-1}\sigma(t),
\end{cases}
\end{equation}
with which we rewrite the SDE (\ref{rec}) as
\begin{equation}\label{nota2}
\begin{cases}
dX_*(t;t_k)=\left[A(t)X_*(t;t_k)+\gamma(t_k)C(t)X_*(t_k;t_{k-1})\right]dt\\
\ \ \ +\left[-F(t)X_*(t;t_k)+\gamma(t_k)D(t)X_*(t_k;t_{k-1})\right]dW(t),\;\; t\in[t_k,t_{k+1}], \\
X_*(t_k;t_k)=X_*(t_k;t_{k-1}).
\end{cases}
\end{equation}
Denote
\[ A^*:=\max\limits_{t\in [0,T]}|A(t)|^2,\;C^*:=\max\limits_{t\in [0,T]}|C(t)|^2,\;
D^*:= \max\limits_{t\in [0,T]}||D(t)||^2,\;F^*:= \max\limits_{t\in [0,T]}||F(t)||^2,\]
which are all finite due to the boundedness assumptions in \textbf{(A1)} and \textbf{(A2)}.
In order to prove Theorem \ref{theorem1}, we need the following lemma.
\begin{lemma}\label{lemmaclose}
The process $X_n$ defined by (\ref{pasting}) satisfies
$$\lim\limits_{n\to\infty}\max\limits_{k\in\{0,...,2^n-1\},s\in[t_k,t_{k+1}]}
\mathbb{E}|X_n(s)-X_n(t_k)|^2= 0.$$
\end{lemma}
\begin{proof}
For $s\in[t_k,t_{k+1}]$, we bound the term $\mathbb{E}|X_n(s)-X_n(t_k)|^2$ as follows:
\begin{equation}
\begin{aligned}
&\mathbb{E}|X_n(s)-X_n(t_k)|^2=\mathbb{E}|X_*(s;t_k)-X_*(t_k;t_{k-1})|^2\\
&\leq 2\mathbb{E}\left[\int_{t_k}^s \left(A(t)X_*(t;t_k)+\gamma(t_k)C(t)X_*(t_k;t_{k-1})\right)dt\right]^2 \\
&+2\mathbb{E}\left[\int_{t_k}^s \left(-F(t)X_*(t;t_k)+\gamma(t_k)D(t)X_*(t_k;t_{k-1})\right)dW(t)\right]^2.
\end{aligned}
\end{equation}
To bound the first term on the right-hand side of the above, we have by the Cauchy--Schwarz inequality
\begin{equation}\label{diffsquare}
\begin{aligned}
&\mathbb{E}\left[\int_{t_k}^s \left(A(t)X_*(t;t_k)+\gamma(t_k)C(t)X_*(t_k;t_{k-1})\right)dt\right]^2 \\
&\leq (s-t_k)\int_{t_k}^s \mathbb{E}\left|A(t)X_*(t;t_k) + \gamma(t_k)C(t)X_*(t_k;t_{k-1})\right|^2dt\\
&\leq (s-t_k)\int_{t_k}^s 2\mathbb{E}|A(t)X_*(t;t_k)|^2 + 2\mathbb{E}|\gamma(t_k)C(t)X_*(t_k;t_{k-1})|^2dt\\
&\leq (s-t_k)\int_{t_k}^s \left(2A^*\mathbb{E}|X_*(t;t_k)|^2 + 2\gamma^*C^*\mathbb{E}|X_*(t_k;t_{k-1})|^2\right)dt\\
&\leq (s-t_k)\int_{t_k}^s (2A^*+ 2\gamma^*C^*)Y(T)dt = (2A^*+ 2\gamma^*C^*)(s-t_k)^2Y(T),
\end{aligned}
\end{equation}
where the last inequality follows from Lemma \ref{boundode} and the fact that $Y(s)$ is increasing in $s\in[0,T]$.
For the second term, by virtue of It\^o's isometry, we similarly have
\begin{equation}
\mathbb{E}\left[\int_{t_k}^s \left(-F(t)X_*(t;t_k)+\gamma(t_k)D(t)X_*(t_k;t_{k-1})\right)dW(t)\right]^2\leq (2\gamma^*D^*+ 2F^*)(s-t_k)Y(T).
\end{equation}
Combining the above, we obtain
\begin{equation}
\begin{aligned}
\mathbb{E}|X_n(s)-X_n(t_k)|^2&\leq 4(s-t_k)(A^*+\gamma^*C^* + \gamma^*D^* + F^*)Y(T), \;s\in[t_k,t_{k+1}].
\end{aligned}
\end{equation}
Thus,
$$\max\limits_{k\in\{0,...,2^n-1\},s\in[t_k,t_{k+1}]}\mathbb{E}[X_n(s)-X_n(t_k)]^2\leq \frac{4T}{2^n}(A^*+\gamma^*C^* + \gamma^*D^* + F^*)Y(T)\to 0$$
as $n\to\infty$.
\end{proof}
Because $X_n \to X\ \text{weakly in $L^2_{\mathcal{F}}([0,T];\mathbb{R})$}$, it follows from
Mazur's lemma that for each integer $n\geq 1$, there exists a positive integer $N(n)$ and a convex combination $V_n:=\sum_{k=n}^{N(n)} a_{k,n} X_k$, where $a_{k,n}\geq0$ and $\sum_{k=n}^{N(n)} a_{k,n}=1$, such that
\begin{equation}
V_n \to X\ \text{strongly in $L^2_{\mathcal{F}}([0,T];\mathbb{R})$}.
\end{equation}
By the definition of $V_n$, it satisfies the SDE
\begin{equation}
\begin{cases}
dV_n(t)=[A(t)V_n(t)+C(t)U_n(t)]dt+[-F(t)V_n(t)+D(t)U_n(t)]dW(t),\\
V_n(0)=x_0,
\end{cases}
\end{equation}
where
$$U_n(t):=\sum_{k=n}^{N(n)} a_{k,n}[\gamma(m_{t,k})X_k(m_{t,k})],\ m_{t,k}:=\frac{j}{2^k}T \mbox{ when } \frac{j}{2^k}T\leq t< \frac{j+1}{2^k}T \mbox{ for some }j\in \{0,1,\ldots,2^k-1\}.$$
Consider the linear SDE
\begin{equation}
\begin{cases}
dZ(t)=[A(t)X(t)+C(t)\gamma(t)X(t)]dt+[-F(t)X(t)+D(t)\gamma(t)X(t)]dW(t), \\
Z(0)=x_0.
\end{cases}
\end{equation}
We now prove that
\begin{equation}\label{zstrong}
\begin{aligned}
\lim\limits_{n\to\infty}\int_0^T \mathbb{E}|V_n(t)-Z(t)|^2 dt = 0.
\end{aligned}
\end{equation}
To this end, we first analyze $V_n(t)-Z(t)$. We have
\begin{equation}
\begin{aligned}
&V_n(t)-Z(t)=\int_0^t \left[ A(u)(V_n(u)-X(u))+C(u)(U_n(u)-\gamma(u)X(u))\right]du\\
&+\int_0^t \left[ -F(u)(V_n(u)-X(u))+D(u)(U_n(u)-\gamma(u)X(u))\right] dW(u)\\
&=:Q_{1,n}(t)+Q_{2,n}(t).
\end{aligned}
\end{equation}
As a result,
\begin{equation}\label{op1}
\int_0^T \mathbb{E}|V_n(t)-Z(t)|^2 dt
\leq 2\int_0^T \mathbb{E}|Q_{1,n}(t)|^2 dt+2\int_0^T \mathbb{E}|Q_{2,n}(t)|^2 dt.
\end{equation}
We proceed to analyze $\mathbb{E}[Q_{1,n}^2(t)] $ and $\mathbb{E}[Q_{2,n}^2(t)] $, respectively. First
\begin{equation}\label{Q1P}
\begin{aligned}
\mathbb{E}|Q_{1,n}(t)|^2&\leq T\mathbb{E}\int_0^t \left|A(u)(V_n(u)-X(u))+C(u)(U_n(u)-\gamma(u)X(u))\right|^2du\\
&\leq 2TA^* \int_0^t \mathbb{E}|V_n(u)-X(u)|^2du+2TC^*\int_0^t \mathbb{E}|U_n(u)-\gamma(u)X(u)|^2du.
\end{aligned}
\end{equation}
By the strong convergence of $V_n$ to $X$, the first term above converges to $0$ as $n\to\infty$. For the second term,
\begin{equation}\label{Q121}
\begin{aligned}
&\int_0^t \mathbb{E}|U_n(u)-\gamma(u)X(u)|^2du\\
&=\int_0^t \mathbb{E}|U_n(u)-\gamma(u)V_n(u)+\gamma(u)V_n(u)-\gamma(u)X(u)|^2du\\
&\leq 2(\gamma^{*})^2\int_0^t \mathbb{E}|V_n(u)-X(u)|^2du+2\int_0^t \mathbb{E}\left|\sum\limits_{k=n}^{N(n)} a_{k,n}\left[\gamma(u)X_k(u)-\gamma(m_{u,k})X_k(m_{u,k})\right]\right|^2 du.
\end{aligned}
\end{equation}
Now,
\begin{equation}\label{Q1n}
\begin{aligned}
&\int_0^t \mathbb{E}\left|\sum\limits_{k=n}^{N(n)} a_{k,n}\left[\gamma(u)X_k(u)-\gamma(m_{u,k})X_k(m_{u,k})\right]\right|^2 du \\
&=\int_0^t \mathbb{E}|\sum\limits_{k=n}^{N(n)} a_{k,n}(\gamma(u)-\gamma(m_{u,k}))X_k(u)+a_{k,n}\gamma(m_{u,k})(X_k(u)-X_k(m_{u,k}))|^2 du\\
&\leq 2\int_0^t \left[\mathbb{E}|\sum\limits_{k=n}^{N(n)} a_{k,n}(\gamma(u)-\gamma(m_{u,k}))X_k(u)|^2+\mathbb{E}|\sum\limits_{k=n}^{N(n)}a_{k,n}\gamma(m_{u,k})(X_k(u)-X_k(m_{u,k}))|^2 \right]du\\
&\leq 2\int_0^t \left[\sum\limits_{k=n}^{N(n)} a_{k,n}\mathbb{E}|(\gamma(u)-\gamma(m_{u,k}))X_k(u)|^2+\sum\limits_{k=n}^{N(n)}a_{k,n}
\mathbb{E}|\gamma(m_{u,k})(X_k(u)-X_k(m_{u,k}))|^2\right] du,
\end{aligned}
\end{equation}
where the last inequality follows from the convexity of the function $x\mapsto x^2$. Because $\gamma(\cdot)$ is continuous on $[0,T]$, it is uniformly continuous. Hence, for any $\varepsilon>0$ there is $n_0\in \mathbb{N}$ such that $|\gamma(t)-\gamma(s)|\leq \varepsilon$ whenever $t,s\in [0,T]$ with $|t-s|\leq \frac{1}{2^{n_0}}T$. For $n\geq n_0$, we then have
\begin{equation}\label{Q1P2}
\begin{aligned}
&2\int_0^t \left[\sum\limits_{k=n}^{N(n)} a_{k,n}\mathbb{E}|(\gamma(u)-\gamma(m_{u,k}))X_k(u)|^2+\sum\limits_{k=n}^{N(n)}a_{k,n}
\mathbb{E}|\gamma(m_{u,k})(X_k(u)-X_k(m_{u,k}))|^2\right] du\\
&\leq 2\int_0^t\left[ \varepsilon^2 \max\limits_{n\leq k\leq N(n)} \mathbb{E}|X_k(u)|^2+(\gamma^{*})^2\max\limits_{n\leq k\leq N(n)}\mathbb{E}|X_k(u)-X_k(m_{u,k})|^2 \right]du\\
&\leq 2\int_0^t \left[ \varepsilon^2 Y(u)+(\gamma^{*})^2\frac{4T}{2^n}(A^*+\gamma^*C^* + \gamma^*D^* + F^*)Y(T)\right]du\\
&\leq 2\left[\varepsilon^2 +(\gamma^{*})^2\frac{4T}{2^n}(A^*+\gamma^*C^* + \gamma^*D^* + F^*)\right]TY(T),
\end{aligned}
\end{equation}
where the second inequality is by Lemma \ref{boundode} and the proof of Lemma \ref{lemmaclose}. Taking $n\to \infty$ and then $\varepsilon\to 0$, we obtain
\begin{equation}\label{Q122}
\lim\limits_{n\to \infty} \int_0^t \mathbb{E}|\sum\limits_{k=n}^{N(n)} a_{k,n}\left(\gamma(u)X_k(u)-\gamma(m_{u,k})X_k(m_{u,k})\right)|^2 du = 0.
\end{equation}
Combining (\ref{Q1P}), (\ref{Q121}) and (\ref{Q122}) yields
\begin{equation}
\begin{aligned}
\lim\limits_{n\to\infty}\mathbb{E}|Q_{1,n}(t)|^2 = 0.
\end{aligned}
\end{equation}
Moreover, according to the above analysis, the bound on $\mathbb{E}|Q_{1,n}(t)|^2$ can be taken uniform in $t\in[0,T]$; thus the dominated convergence theorem gives
\begin{equation}\label{Q1F}
\lim\limits_{n\to \infty}\int_0^T \mathbb{E}|Q_{1,n}(t)|^2 dt=\int_0^T \lim\limits_{n\to \infty}\mathbb{E}|Q_{1,n}(t)|^2 dt=0.
\end{equation}
Employing It\^o's isometry we can derive similarly
\begin{equation}\label{Q2F}
\lim\limits_{n\to \infty}\int_0^T \mathbb{E}|Q_{2,n}(t)|^2 dt=\int_0^T \lim\limits_{n\to \infty}\mathbb{E}|Q_{2,n}(t)|^2 dt=0.
\end{equation}
By plugging (\ref{Q1F}) and (\ref{Q2F}) into (\ref{op1}) we establish (\ref{zstrong}), namely, $V_n \to Z\ \text{strongly in $L^2_{\mathcal{F}}([0,T];\mathbb{R})$}$.
Thus, $Z(t,\omega)=X(t,\omega)$ except on a set of measure zero in $[0,T]\times \Omega$. It follows that $X$ satisfies the same SDE as $Z$ or, equivalently, $X$ satisfies (\ref{XSDE}).
Moreover, it is immediate that this wealth equation is generated by the feedback policy
(\ref{PI}).
The proof is complete.
\end{document}
\begin{document}
\maketitle
\begin{abstract}
Consider a two-type Moran population of size $N$ with selection and mutation, where the selective advantage of the fit individuals is amplified at extreme environmental conditions. Assume selection and mutation are weak with respect to $N$, and extreme environmental conditions rarely occur. We show that, as $N\to\infty$, the type frequency process with time speed up by $N$ converges to the solution of a Wright-Fisher-type SDE with a jump term modeling the effect of the environment. We use an extension of the \emph{ancestral selection graph} (ASG) to describe the model's genealogical picture. Next, we show that the type frequency process and the line-counting process of a pruned version of the ASG satisfy a moment duality. This relation yields a characterization of the asymptotic type distribution. We characterize the ancestral type distribution using an alternative pruning of the ASG. Most of our results are stated in annealed and quenched form.
\end{abstract}
{ \footnotesize
\noindent{\slshape\bfseries MSC 2010.} Primary:\, 82C22, 92D15 \ Secondary:\, 60J25, 60J27
\noindent{\slshape\bfseries Keywords.} Wright--Fisher diffusion, Moran model, random environment, ancestral selection graph, duality}
\setcounter{tocdepth}{1}
\tableofcontents
\section{Introduction}\label{S1}
The \emph{Wright--Fisher diffusion with mutation and selection} describes the evolution of the type composition of an infinite two-type haploid population, which is subject to mutation and selection. Fit individuals reproduce at rate $1+\sigma$, $\sigma\geq 0$, whereas unfit ones reproduce at rate $1$. In addition, individuals mutate at rate $\theta$ to the fit type with probability $\nu_0\in[0,1]$, and to the unfit type with probability {$\nu_1:=1-\nu_0$}. The proportion of fit individuals evolves forward in time according to the SDE
\begin{equation}\label{WFD}
\dd X(t)=\left[\theta\nu_0(1-X(t))-\theta\nu_1 X(t) + \sigma X(t)(1-X(t))\right]\,\dd t+\sqrt{2 X(t)(1-X(t))}\,\dd B(t),\quad t\geq 0,
\end{equation}
where $(B(t))_{t\geq 0}$ is a standard Brownian motion. The solution of \eqref{WFD} arises as the \emph{diffusion approximation} of (properly normalized) continuous-time Moran models and discrete-time Wright--Fisher models. In the neutral case, it also appears as the limit of a large class of Cannings models (see \cite{M01}). The genealogical counterpart to $X$ is the \emph{ancestral selection graph} (ASG), which is a branching-coalescing process coding the potential ancestors of an untyped sample of the population at present. It was introduced by Krone and Neuhauser in \citep{KroNe97,NeKro97} and later extended to models evolving under general neutral reproduction mechanisms (\citep{EGT10,BLW16}), and {to general} forms of frequency dependent selection (see \citep{Ne99,BCH18,GS18,CHS19}).
For $\theta=0$, the process $(R(t))_{t\geq 0}$ that counts the lines in the ASG is moment dual with $1-X$, i.e.
\begin{equation}\label{mdintro}
\mathbb{E}\left[(1-X(t))^n\mid X(0)=x\right]=\mathbb{E}\left[(1-x)^{R(t)}\mid R(0)=n\right],\qquad n\in\mathbb{N}, \, x\in[0,1],\, t\geq 0.
\end{equation}
This relation yields an expression for the absorption probability of $X$ at $0$ in terms of the stationary distribution of $R$. {For $\theta>0$}, two variants of the ASG dynamically resolve mutation events and encode relevant information of the model: \textit{the killed ASG} and \textit{the pruned lookdown ASG}. The killed ASG was introduced in \citep{BW18} to determine if a sample consists only of unfit individuals. Its line-counting process extends Eq. \eqref{mdintro} to the case $\theta>0$ \cite[Prop. 1]{BW18} (see \cite[Prop. 2.2]{CM19} for a generalization). This allows one to characterize the stationary distribution of $X$. The pruned lookdown ASG in turn was introduced in \citep{LKBW15} (see \citep{BLW16, FC17} for extensions) as a tool to determine the \emph{ancestral type distribution}, i.e. the type distribution of the individuals that have been successful in the long run.
In many biological situations the strength of selection fluctuates in time. The influence of random fluctuations in selection intensities on the growth of populations has been the object of extensive research in the past (see e.g.\cite{G72,KL74,KL74b,KL75,Bu87, BG02, SJV10}), and it is currently experiencing renewed interest (see e.g. \cite{BCM19,CSW19, BEK19, ChK19, GJP18, GPS19}). A concrete example is given in \cite{PM14}, where different antibiotic treatment strategies against a bacterial population are compared. There it is proved that a constant administration of antibiotics is not optimal, and that the best treatment strategies depend on the length of treatment. In this paper, we consider the scenario where the selective advantage of fit individuals is accentuated by exceptional environmental conditions (e.g. extreme temperatures, precipitation, humidity variation, abundance of resources, etc.). As an example, consider a population consisting of fit and unfit individuals which is subject to catastrophes. Assume that only fit individuals are resistant to the catastrophes. Hence, shortly after a catastrophe the population may drop below its carrying capacity
and subsequently grow quickly. Since fit individuals have a reproductive advantage, it is likely that their relative frequency will grow fast after a catastrophe. One may also think of a population consisting of individuals that are specialized to high temperatures, as well as wild-type individuals accustomed, but not specialized to them. The environment is characterized by moderately high temperatures, present most of the time, and short periods of extreme heat. It is then likely that specialized individuals have a (slight) reproductive advantage under moderate temperatures, and a more prominent advantage at extreme temperatures.
To model the previously described scenario we use a two-type Moran population with mutation and selection, immersed in a varying environment. The environment is modeled via a countable collection of points $(t_i,p_i)_{i\in I}$ {in $(-\infty,\infty)\times(0,1)$, satisfying that $\sum_{t_i\in[s,t]} p_i<\infty$ for all $s<t$.} Each $t_i$ represents a time of an instantaneous environmental change; the \emph{peak} $p_i$ models the strength of this event: at time $t_i$ each fit individual independently reproduces with probability $p_i$. Each offspring replaces a different individual in the population, so that the population size remains constant. The summability of the peaks assures that the number of reproductions in any compact time interval is almost surely finite.
In this context, we show that the type-frequency process is continuous with respect to the environment. The proof uses coupling techniques that uncover the effect of small environmental changes.
Next, we consider a random environment given by a Poisson point process on {$(-\infty,\infty) \times (0,1)$} with intensity measure $\dd t \times \mu$, where $\dd t$ stands for the Lebesgue measure and $\mu$ is a measure on $(0,1)$ satisfying $\int x \mu(\dd x) < \infty$. {Then, we let population size grow to infinity, and} we show that, in an appropriate parameter regime, the fit-type frequency process converges to the solution of the SDE
\begin{equation}\label{WFSDE}
\dd X(t)=\theta\left(\nu_0(1-X(t))-\nu_1 X(t)\right)\dd t +X(t-)(1-X(t-))\dd S(t) +\sqrt{2 X(t)(1-X(t))}\dd B(t),\quad t\geq 0,
\end{equation}
where $S(t)\coloneqq \sigma t+J(t)$, and $J$ is a pure-jump subordinator with L\'evy measure $\mu$, independent of $B$, which represents the cumulative effect of the environment. We refer to $X$ as the \emph{Wright--Fisher diffusion in random environment}. We prove the convergence in an annealed setting, i.e. when the environment is random. For environments given by compound Poisson processes, we show that the convergence also holds in a quenched sense, i.e. when a realization of the environment is fixed. For $\theta=0$, Eq. \eqref{WFSDE} is a particular case of \cite[Eq. 3.3]{BCM19}, which arises as the {large population limit of} a family of discrete-time Wright--Fisher models \cite[Thm. 3.2]{BCM19}.
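As a purely illustrative aside, the following minimal Python sketch simulates \eqref{WFSDE} with an Euler--Maruyama step for the continuous part and a compound Poisson environment with Beta-distributed peaks; the parameter values, the choice of $\mu$ and the truncation of the frequency to $[0,1]$ are ad hoc assumptions made only for this sketch.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Ad hoc parameters for this illustration only
theta, nu0, sigma, lam = 1.0, 0.5, 0.5, 2.0   # mutation rate, fit-mutation prob., sigma, jump rate
T, n_steps, x = 10.0, 10000, 0.2
dt = T / n_steps
nu1 = 1.0 - nu0

for _ in range(n_steps):
    # drift (mutation terms and the sigma*t part of S) plus the diffusion increment
    drift = theta * (nu0 * (1.0 - x) - nu1 * x) + sigma * x * (1.0 - x)
    x += drift * dt + np.sqrt(max(2.0 * x * (1.0 - x), 0.0) * dt) * rng.standard_normal()
    # jumps of J on this time step: each peak p moves x to x + x(1-x)p
    for p in rng.beta(1.0, 5.0, size=rng.poisson(lam * dt)):
        x += x * (1.0 - x) * p
    x = min(max(x, 0.0), 1.0)   # numerical truncation to the unit interval

print("simulated fit-type frequency at time T:", x)
\end{verbatim}
Since each jump adds $X(t-)(1-X(t-))\Delta J(t)\geq 0$, more frequent or larger peaks push the frequency of the fit type upwards, in accordance with the interpretation of the environment given above.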
Next, we generalize the construction of the ASG, the killed ASG and the {pruned lookdown} ASG to incorporate the effect of the environment. In the annealed case, we establish a relation between $X$, the line-counting process of the killed {ASG, and} the total increment of the environment; we refer to this relation as a \emph{reinforced moment duality}. The latter is a central tool to characterize the asymptotic type frequencies. We also express the ancestral type distribution in terms of the line-counting process of the pruned lookdown ASG. Analogous results are obtained in the more involved quenched setting.
As an application of our results, we compare the long-term behavior of two Wright--Fisher diffusions without mutations; the first one having parameter $\sigma=0$ and an environment with L\'evy measure $\mu$; the second one having parameter $\sigma=\int_{(0,1)} y\mu(\dd y)$ and no environment. We prove that the probability of fixation of the fit type is smaller under the first model than under the second one, provided that the initial frequency of fit individuals is sufficiently large.
The analysis of a more realistic scenario where environmental changes are not always favorable to the same type can not be done via the methods presented in this paper. The main reason is that in such a setting the frequency process does not admit a moment dual. To circumvent this problem one has to take into account the whole combinatorics of the ASG, which is a cumbersome object. This is the object of a forthcoming study.
We would also like to mention a parallel development by \citet{CSW19}. They study the accessibility of the boundaries and the fixation probabilities of a generalization of the SDE \eqref{WFSDE} with $\theta=0$. \cite{CSW19} makes only use of the ASG and does not cover the case $\theta>0$, where the killed and the pruned lookdown ASG play a pivotal role. Moreover, the reinforced moment duality, and all the results obtained in the quenched setting, are to the best of our knowledge new.
The article is organized as follows. An outline of the paper containing our main results is given in Section~\ref{S2}. In Section~\ref{S3} we prove the continuity of the type frequency process in the Moran model with respect to the environment, that \eqref{WFSDE} is well-posed, and that it arises in the large population {limit of} the type frequency process of a sequence of Moran models. In Section \ref{S4} we give more detailed definitions of the ASG, the killed ASG and the pruned lookdown ASG. Section~\ref{S5} is devoted to the proofs of: (i) the annealed moment duality between the process $X$ and the line-counting process of the killed ASG, (ii) the long-term behavior of the annealed type frequency process, and (iii) the annealed ancestral type distribution. The quenched versions of these results are proved in Section \ref{S6}. {Section \ref{S7} provides additional (quenched) results for environments having finitely many jumps in any compact time interval.}
\section{Description of the model and main results}\label{S2}
{\bf Notation.} The positive integers are denoted by ${\mathbb N}$, and we set ${\mathbb N}_0\coloneqq {\mathbb N}\cup\{0\}$. For $m\in {\mathbb N}$, \[[m]\coloneqq \{1,\ldots,m\},\quad [m]_0\coloneqq [m]\cup\{0\},\quad \text{and} \quad \noo{m}\coloneqq [m]\setminus\{1\}.\]
For $s<t$, we denote by $\Db_{s,t}$ (resp. $\Db$) the space of c\`{a}dl\`{a}g functions from $[s,t]$ (resp. $\mathbb{R}$) to $\mathbb{R}$, endowed with the Billingsley metric, which induces the $J_1$-Skorokhod topology and makes the space complete (see Appendix \ref{A1}). For any Borel set $S\subset\mathbb{R}$, denote by $\Ms_f(S)$ (resp. $\Ms_1(S)$) the set of finite (resp. probability) measures on $S$. We use $\xrightarrow[]{(d)}$ to denote convergence in distribution of random variables and $\xRightarrow[]{(d)}$ for convergence in distribution of c\`{a}dl\`{a}g processes.
For $n\in {\mathbb N}_0$ and $k,m\in[n]_0$, we write $K\sim \hypdist{n}{m}{k}$ if $K$ is a hypergeometric random variable with parameters $n,m$, and $k$, i.e. \[{\mathbb P}(K=i)= {\binom{n-m}{k-i} \binom{m}{i}}/{\binom{n}{k}},\quad i\in[k\wedge m]_0.\]
For~$x\in[0,1]$ and~$n\in {\mathbb N}$, we write $B\sim \bindist{n}{x}$ if $B$ is a binomial random variable with parameters $n$ and $x$, i.e. \[{\mathbb P}(B=i)= \binom{n}{i} x^i (1-x)^{n-i},\quad i\in[n]_0.\]
Relevant notations introduced in the forthcoming sections are collected in Section \ref{Nota}.
\subsection{Moran models in deterministic pure-jump environments}\label{s21}
Consider a population of size $N$ with two types, type $0$ and $1$, subject to mutation {and selection influenced by} a deterministic environment. {The latter} is modeled by an at most countable collection $\zeta\coloneqq (t_k, p_k)_{k \in I}$ of points in $(-\infty,\infty) \times (0,1)$ satisfying for any $s,t\in\mathbb{R}$ with $s\leq t$ that
\begin{equation} \label{summableassumption}
\sum_{t_k \in[s, t]} p_k < \infty.
\end{equation}
We refer to $p_k$ as the \emph{peak of the environment} at time $t_k$. The individuals in the population undergo the following dynamic. Each individual independently mutates at rate $\theta_N\geq 0$ with probability $\nu_{0}\in[0,1]$ (resp. $\nu_1\coloneqq 1-\nu_0$) to type $0$ (resp. $1$). Reproduction occurs independently from mutation. Individuals of type $1$ reproduce at rate $1$, whereas individuals of type $0$ reproduce at rate $1+\sigma_N$, $\sigma_N\geq0$\footnote{The subscript $N$ in the parameters $\sigma_N$ and $\theta_N$ emphasizes their dependence on $N$. In Theorem \ref{thm2.2} we will require that they are asymptotically proportional to $1/N$.}. Thus, we refer to type $0$ (resp. type $1$) as the \emph{fit} (resp. \emph{unfit}) type. In addition, at time $t_k$ each type $0$ individual independently reproduces with probability $p_k$. At reproduction times: (a) each individual produces at most one offspring, which inherits the parent's type, and (b) if $n$ individuals are born, $n$ individuals are randomly sampled without replacement from {the population present before the reproduction event (including the parents) } to die, hence keeping the size of the population constant.
\subsection*{Graphical representation}
In the absence of environmental factors (i.e. $\zeta=\emptyset$), it is classical to describe the evolution of the population by means of the graphical representation as an {interacting particle system (IPS).} This decouples the randomness of the model coming from the initial type configuration from the one coming from mutations and reproductions. {We now extend the graphical representation to incorporate the effect of the environment (see Section \ref{s31} for a more detailed description).
In the graphical representation individuals are represented by horizontal lines at levels $i\in [N]$} {(see Fig. \ref{particlepicture})}. {Time runs forward from left to right. Potential reproduction events are depicted by arrows, with the (potential) parent at the tail and the offspring at the tip. We distinguish between neutral and selective arrows. \emph{Neutral arrows} have a filled arrowhead; they occur at rate $1/N$ per pair of lines. \emph{Selective arrows} have an open arrowhead; they occur in two independent ways: first, at rate $\sigma_N/N$ per pair of lines, and second, at any time $t_k$, $k\in I$, a random number $n_k\sim\bindist{N}{p_k}$ of lines shoot selective arrows to $n_k$ individuals in the population. Furthermore, beneficial (deleterious) mutations, depicted as circles (crosses), occur at rate $\theta_N \nu_0$ (at rate $\theta_N \nu_1$) per line.
Note that for any $s<t$, the number of non-environmental graphical elements present in $[s,t]$ is almost surely finite. Moreover, thanks to Assumption \eqref{summableassumption}, we have
\begin{align}
\mathbb{E}\left[\sum_{t_k\in[s,t]}n_k\right]=N\sum_{t_k\in[s,t]}p_k<\infty.\label{finitenbofevents}
\end{align}
Hence, $\sum_{t_k\in[s,t]}n_k<\infty$ almost surely, i.e. the number of arrows in $[s,t]$ due to peaks of the environment is almost surely finite.
Once the graphical elements in $[s,t]\times [N]$ are drawn, we specify the initial conditions by assigning types to the $N$ lines at time $s$ and propagate them forward in time according to the following rules: the type of a line right after a circle (resp. cross) is $0$ (resp. $1$); type $0$ propagates through neutral arrows \emph{and} selective arrows; type $1$ propagates only through neutral arrows. }
\begin{figure}[t!]
\begin{minipage}{.4\textwidth}
\centering
\scalebox{0.65}{\begin{tikzpicture}
\draw[dashed, opacity=0.4] (1,-0.2) --(1,1) (1,2)--(1,3);
\draw[dashed, opacity=0.4] (6.2,-0.2) --(6.2,0) (6.2,1) --(6.2,2);
\node [right] at (-0.2,-0.5) {$s$};
\node [right] at (0.8,-0.5) {$t_0$};
\node [right] at (6,-0.5) {$t_1$};
\node [right] at (8.3,-0.5) {$s+T$};
\node [right] at (3.5,-0.8) {$t$};
\draw[-{triangle 45[scale=5]},thick] (4.5,2) -- (4.5,1) node[text=black, pos=.6, xshift=7pt]{};
\draw[thick] (0,1) -- (8.5,1);
\draw[thick] (8.5,2) -- (0,2);
\draw[thick] (8.5,3) -- (0,3);
\draw[thick] (0,4) -- (8.5,4);
\draw[thick] (0,0) -- (8.5,0);
\draw[-{triangle 45[scale=5]},thick] (.4,1) -- (.4,0);
\draw[-{open triangle 45[scale=5]},thick] (7.6,1) -- (7.6,4);
\draw[-{open triangle 45[scale=5]},thick] (1.7,2) -- (1.7,4);
\draw[-{open triangle 45[scale=5]},thick] (1,3) -- (1,4);
\draw[-{open triangle 45[scale=5]},thick] (1,1) -- (1,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,4) -- (6.2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,0) -- (6.2,1);
\draw[-{open triangle 45[scale=5]},thick] (2.5,0) -- (2.5,1);
\draw[-{open triangle 45[scale=5]},thick] (7,2) -- (7,0);
\draw[-{triangle 45[scale=5]},thick] (3,3) -- (3,1);
\draw[-{angle 45[scale=5]}] (2.5,-0.5) -- (4.5,-0.5) node[text=black, pos=.6, xshift=7pt]{};
\node[ultra thick] at (3.5,4) {$\bigtimes$} ;
\node[ultra thick] at (6.6,4) {$\bigtimes$} ;
\draw (2.1,2) circle (1.5mm) [fill=white!100];
\draw (4.1,0) circle (1.5mm) [fill=white!100];
\node [right] at (-0.5,0) {$1$};
\node [right] at (-0.5,1) {$1$};
\node [right] at (-0.5,2) {$1$};
\node [right] at (-0.5,3) {$0$};
\node [right] at (-0.5,4) {$1$};
\node [right] at (8.5,0) {$0$};
\node [right] at (8.5,1) {$0$};
\node [right] at (8.5,2) {$0$};
\node [right] at (8.5,3) {$0$};
\node [right] at (8.5,4) {$0$};
\end{tikzpicture} }\end{minipage}\begin{minipage}{.4\textwidth}
\centering
\scalebox{0.65}{\begin{tikzpicture}
\draw[dashed,thick,opacity=0.3] (1,-0.2) --(1,1) (1,2)--(1,3) (1,4)--(1,4);
\draw[dashed,thick,opacity=0.3] (6.2,-0.2) --(6.2,0) (6.2,1) --(6.2,2) (6.2,4)--(6.2,4);
\node [right] at (-0.2,-0.6) {$T$};
\node [right] at (0.3,-0.6) {$s+T-t_0$};
\node [right] at (5.5,-0.6) {$s+T-t_1$};
\node [right] at (8.3,-0.6) {$0$};
\node [right] at (3.5,-0.8) {$\beta$};
\draw[-{triangle 45[scale=5]}] (4.5,-0.5) -- (2.5,-0.5) node[text=black, pos=.6, xshift=7pt]{};
\draw[thick,opacity=0.3] (0,1) -- (8.5,1);
\draw[thick,opacity=0.3] (8.5,2) -- (0,2);
\draw[thick,opacity=0.3] (8.5,3) -- (0,3);
\draw[thick,opacity=0.3] (0,0) -- (8.5,0);
\draw[thick,opacity=0.3] (0,4) -- (8.5,4);
\draw[thick] (0,4) -- (6.2,4);
\draw[thick] (0,2) -- (8.5,2);
\draw[thick] (4.5,1) -- (8.5,1);
\draw[thick] (0.4,0) -- (6.2,0);
\draw[thick] (1,1) -- (0.0,1);
\draw[thick] (1,3) -- (0.0,3);
\draw[-{triangle 45[scale=5]},thick] (4.5,2) -- (4.5,1) node[text=black, pos=.6, xshift=7pt]{};
\draw[-{triangle 45[scale=5]},thick] (0.4,1) -- (0.4,0);
\draw[-{open triangle 45[scale=5]},thick] (1.7,2) -- (1.7,4);
\draw[-{open triangle 45[scale=5]},thick] (1,3) -- (1,4);
\draw[-{open triangle 45[scale=5]},thick] (1,1) -- (1,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,4) -- (6.2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,0) -- (6.2,1);
\draw[-{triangle 45[scale=5]},thick, opacity=0.3] (3,3) -- (3,1);
\draw[-{open triangle 45[scale=5]},thick, opacity=0.3] (7.6,1) -- (7.6,4);
\draw[-{open triangle 45[scale=5]},thick, opacity=0.3] (2.5,0) -- (2.5,1);
\draw[-{open triangle 45[scale=5]},thick, opacity=0.3] (7,2) -- (7,0);
\node[ultra thick] at (3.5,4) {$\bigtimes$} ;
\draw[thick] (2.1,2) circle (1.5mm) [fill=white!100];
\draw[thick] (4.1,0) circle (1.5mm) [fill=white!100];
\node[ultra thick,opacity=0.3] at (6.6,4) {$\bigtimes$} ;
\node [right,white] at (-0.5,0) {$1$};
\node [right,white] at (-0.5,1) {$1$};
\node [right,white] at (-0.5,2) {$1$};
\node [right,white] at (-0.5,3) {$0$};
\node [right,white] at (-0.5,4) {$1$};
\node [right,white] at (8.5,0) {$0$};
\node [right,white] at (8.5,1) {$0$};
\node [right,white] at (8.5,2) {$0$};
\node [right,white] at (8.5,3) {$0$};
\node [right,white] at (8.5,4) {$0$};
\end{tikzpicture} }
\end{minipage}
\caption{Left: a realization of the Moran IPS; time runs forward from left to right; the environment has peaks at times $t_0$ and $t_1$. Right: the ASG that arises from the second and third lines (from bottom to top) in the left picture, with the potential ancestors drawn in black; time runs backward from right to left; backward time $\beta\in[0,T]$ corresponds to forward time $t=s+T-\beta$.}
\label{particlepicture}
\end{figure}
\subsection*{Reading off ancestries in the Moran model}
{The \emph{ancestral selection graph} (ASG) was introduced by Krone and Neuhauser in \cite{KroNe97} (see also \cite{NeKro97}) to study the genealogical relations in the diffusion limit of the Moran model with mutation and selection. In what follows, we briefly explain how to adapt this construction to the Moran model in deterministic environment.
Consider a realization of the {IPS} associated to the Moran model in the environment $\zeta\coloneqq (t_k, p_k)_{k \in I}$ in the time interval $[s,s+T]$. Fix a sample of $n$ individuals at time $s+T$ and trace backward in time (from right to left in Fig. \ref{particlepicture}) the lines of their potential ancestors (i.e. the lines that are ancestral to the sample for some type-configuration at time $s$), ignoring the effect of mutations; backward time $\beta\in[0,T]$ corresponds to the forward time $t=s+T-\beta$. We do this as follows. When a neutral arrow joins two individuals in the current set of potential ancestors, the two lines coalesce into a single one at the tail of the arrow. When a neutral arrow hits a potential ancestor from outside the current set of potential ancestors, the hit line is replaced by the line at the tail of the arrow. When a selective arrow hits the current set of potential ancestors, the individual that is hit has two possible parents, the \textit{incoming branch} at the tail and the \textit{continuing branch} at the tip. The true parent depends on the type of the incoming branch, but for the moment we work without types. These unresolved reproduction events can be of two types: a \emph{branching} event if the selective arrow emanates from an individual outside the current set of potential ancestors, and a \emph{collision} event if the selective arrow links two current potential ancestors. Note that at the peak times, multiple lines in the ASG can be hit by selective arrows, and therefore, multiple branching and collision events can occur simultaneously. Mutations are superposed on the lines of the ASG.
The object arising under this procedure up to time $\beta=T$ is called the \emph{Moran-ASG in $[s,s+T]$ under the environment $\zeta$}. It contains all the lines that are potentially ancestral (ignoring mutation events) to the lines sampled at time $t=s+T$, see Fig. \ref{particlepicture}. Note that, since the number of events occurring in $[s,s+T]$ is almost surely finite ({see \eqref{finitenbofevents}}), the Moran-ASG in $[s,s+T]$ is well-defined.
Given an assignment of types to the lines present in the ASG at time $t=s$, we can extract the true genealogy and determine the types of the sampled individuals at time $t=s+T$. {To this end}, we propagate types forward in time along the lines of the ASG taking into account mutations and reproductions, with the rule that if a line is hit by a selective arrow, the incoming line is the ancestor if and only if it is of type~$0$, see Figure~\ref{fig:peckingorder}. This rule is called the \emph{pecking order}. Proceeding in this way, the types in~$[s,s+T]$ are determined along with the true genealogy.}
\begin{figure}[h!]
\begin{minipage}{0.23 \textwidth}
\centering
\scalebox{.8}{
\begin{tikzpicture}
\draw[line width=0.5mm] (0,1) -- (2,1);
\draw[color=black] (0,0) -- (1,0);
\draw[-{open triangle 45[scale=2.5]},color=black] (1,0) -- (1,1) node[text=black, pos=.6, xshift=7pt]{};
\node[above] at (1.8,1) {\tiny D};
\node[above] at (0.3,1) {\tiny C};
\node[above] at (0.3,0) {\tiny I};
\node[left] at (0,1) {\tiny $1$};
\node[left] at (0,0) {\tiny $1$};
\node[right] at (2,1) {\tiny $1$};
\end{tikzpicture}}
\end{minipage}
\begin{minipage}{0.23 \textwidth}
\centering
\scalebox{.8}{
\begin{tikzpicture}
\draw[line width=0.5mm] (0,1) -- (2,1);
\draw[color=black] (0,0) -- (1,0);
\draw[-{open triangle 45[scale=2.5]},color=black] (1,0) -- (1,1) node[text=black, pos=.6, xshift=7pt]{};
\node[above] at (1.8,1) {\tiny D};
\node[above] at (0.3,1) {\tiny C};
\node[above] at (0.3,0) {\tiny I};
\node[left] at (0,1) {\tiny $0$};
\node[left] at (0,0) {\tiny $1$};
\node[right] at (2,1) {\tiny $0$};
\end{tikzpicture}}
\end{minipage}
\begin{minipage}{0.23 \textwidth}
\centering
\scalebox{.8}{
\begin{tikzpicture}
\draw[] (0,1) -- (2,1);
\draw[line width=0.5mm] (0,0) -- (1,0);
\draw[line width=0.5mm] (1,1) -- (2,1);
\draw[-{open triangle 45[scale=2.5]},color=black,line width=0.5mm] (1,-0.025) -- (1,1) node[text=black, pos=.6, xshift=7pt]{};
\node[above] at (1.8,1) {\tiny D};
\node[above] at (0.3,1) {\tiny C};
\node[above] at (0.3,0) {\tiny I};
\node[left] at (0,1) {\tiny $1$};
\node[left] at (0,0) {\tiny $0$};
\node[right] at (2,1) {\tiny $0$};
\end{tikzpicture}}
\end{minipage}
\begin{minipage}{0.23 \textwidth}
\centering
\scalebox{.8}{
\begin{tikzpicture}
\draw[] (0,1) -- (2,1);
\draw[line width=0.5mm] (0,0) -- (1,0);
\draw[line width=0.5mm] (1,1) -- (2,1);
\draw[-{open triangle 45[scale=2.5]},color=black,line width=0.5mm] (1,-0.025) -- (1,1) node[text=black, pos=.6, xshift=7pt]{};
\node[above] at (1.8,1) {\tiny D};
\node[above] at (0.3,1) {\tiny C};
\node[above] at (0.3,0) {\tiny I};
\node[left] at (0,1) {\tiny $0$};
\node[left] at (0,0) {\tiny $0$};
\node[right] at (2,1) {\tiny $0$};
\end{tikzpicture}}
\end{minipage}
\caption{The descendant line (D) splits into the continuing line (C) and the incoming line (I). The incoming line is ancestral if and only if it is of type~$0$. The true ancestral line is drawn in bold.}
\label{fig:peckingorder}
\end{figure}
\subsection*{Evolution of the type composition}
{Consider the set $\Db^\star$ of non-decreasing functions $\omega\in\Db$ satisfying
\begin{itemize}
\item[(i)] for all $s<t\in\mathbb{R}$, $\Delta \omega(t)\coloneqq \omega(t)-\omega(t-)\in[0,1)$ and $\sum_{u\in[s,t]}\Delta \omega(u)<\infty$,
\item[(ii)] $\omega$ is pure-jump, i.e. for all $s<t$, $\omega(t)=\omega(s)+\sum_{u\in(s,t]}\Delta \omega(u)$.
\end{itemize}
Note that the set of environments $\zeta\coloneqq (t_k, p_k)_{k \in I}$ satisfying \eqref{summableassumption} can be identified with the set of functions $\omega\in \Db^\star$ with $\omega(0)=0$. Indeed, for any $\omega\in\Db^\star$, the collection of points $\{(t,\Delta \omega(t)):\, \Delta\omega(t)>0\}$ is countable and satisfies \eqref{summableassumption}.\footnote{The same collection is obtained if we add a constant to $\omega$; this is why we need to fix the value of $\omega(0)$.} Conversely, for any $\zeta\coloneqq (t_k, p_k)_{k \in I}$, the function $\omega:\mathbb{R}\to\mathbb{R}$ defined via
\[\omega(t)\coloneqq\sum_{t_k\in(0,t]}p_k\quad\textrm{for $t\geq 0$, and}\quad {\omega(t)\coloneqq-\sum_{t_k\in(t,0]}p_k}\quad\textrm{for $t<0$},\]
belongs to $\Db^\star$.} For this reason, {we often abuse notation} and refer to the elements of $\Db^\star$ as environments. In addition, an environment $\omega\in\Db^\star$ is said to be \textit{simple} if $\omega$ has only a finite number of jumps in any compact time interval. We denote by $\textbf{0}$ the environment corresponding to $\zeta=\emptyset$ and refer to it as the \emph{null environment}.
We denote by $Z_N^\omega(t)$ the number of fit individuals at time $t$ {in a Moran} population of size $N$ subject to environment $\omega\in\Db^\star$. We refer to $Z_N^\omega\coloneqq(Z_N^\omega(t))_{t\geq 0}$ as \emph{the quenched fit-counting process}.
In particular, $Z_N^\textbf{0}$ is just the continuous-time Markov chain on $[N]_0$ with infinitesimal generator
\[\As_N^0 f(n)\coloneqq \left[(1+\sigma_N)\frac{n(N-n)}{N}+\theta_N\nu_0(N-n)\right](f(n+1)-f(n))
+ \left[\frac{n(N-n)}{N}+\theta_N\nu_1 n\right](f(n-1)-f(n)).\]
Note that if $\omega$ has a jump at time $t$, the number $n(t)$ of individuals placing an offspring is a binomial random variable with parameters $Z_N^\omega(t-)$ and $\Delta\omega(t)$. Since the $n(t)$ individuals that will be replaced are chosen uniformly at random, the additional number of fit individuals after the reproduction event is a hypergeometric random variable with parameters $N$, $N-Z_N^\omega(t-)$ and $n(t)$. Therefore, the dynamic of $Z_N^\omega$ is as follows. Recall that in any finite time interval, the number of environmental reproductions is almost surely finite. Thus, we can define $(S_i)_{i\in\mathbb{N}}$ as the increasing sequence of times at which environmental reproductions take place. We set $S_0\coloneqq 0$. By construction, $(S_i)_{i\in\mathbb{N}}$ is Markovian and its transition probabilities are given by
\[\mathbb{P}(S_{i+1} > t \mid S_i = s ) = \prod_{u \in (s, t]} (1- \Delta \omega(u))^N,\quad i\in\mathbb{N}_0, 0\leq s\leq t.\] If $Z_N^\omega(0)=n\in [N]_0$, then $Z_N^\omega$ evolves in $[0,S_1)$ as $Z_N^\textbf{0}$ started at $n$. For $i\in\mathbb{N}$, if $Z_N^\omega(S_{i}-)=k$, then $Z_N^\omega(S_{i})=k+H(N,N-k,\tilde B_i(k))$, where the random variables $H(N,N-k,b)\sim \textrm{Hyp}(N,N-k,b)$, $b\in[k]_0$, and $\tilde B_i(k)$ are independent, and $\tilde B_i(k)$ is a binomial random variable with parameters $k$ and $\Delta \omega(S_i)$ conditioned to be positive. Then, $Z_N^\omega$ evolves in $[S_i,S_{i+1})$ as $Z_N^\textbf{0}$ {started at $Z_N^\omega(S_{i})$.}
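For concreteness, the update of $Z_N^\omega$ at a peak can be sketched in a few lines of Python (using NumPy; the numbers in the example are arbitrary, and here the binomial number of offspring is not conditioned to be positive, i.e. we directly simulate the effect of a peak of size $p$, which may well produce no offspring):
\begin{verbatim}
import numpy as np

def environmental_jump(k, N, p, rng):
    """Number of fit individuals after a peak of size p, starting from k fit
    individuals in a population of size N: each fit individual reproduces
    with probability p, and the offspring replace uniformly chosen individuals."""
    b = rng.binomial(k, p)          # fit offspring produced at the peak
    if b == 0:
        return k                    # no offspring, the composition is unchanged
    # among the b individuals chosen to die, the unfit ones are the net gain
    return k + rng.hypergeometric(N - k, k, b)

rng = np.random.default_rng(0)
print(environmental_jump(k=30, N=100, p=0.2, rng=rng))
\end{verbatim}
Iterating the null-environment dynamics between the peak times and applying this update at each peak reproduces the description of $Z_N^\omega$ given above.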
Let us fix $T>0$. We end this section with our first main result, which provides the continuity in $[0,T]$ of the \emph{fit-counting} process with respect to the environment. Note that the restriction of the environment to $[0,T]$ can be identified with an element of
\begin{equation}\label{dbs}
\Db_T^\star\coloneqq \{\omega\in\Db_{0,T}: \omega(0)=0,\, \Delta \omega(t)\in[0,1)\textrm{ for all $t\in[0,T]$, $\omega$ is non-decreasing and pure-jump}\}.
\end{equation}
Moreover, we equip $\Db_T^\star$ with the metric $d_T^\star$ defined in Appendix \ref{A1} {(see \eqref{defdtstar})}.
\begin{theorem}[Continuity]\label{thm2.1}
Let $\omega\in\Db_T^\star$ and let $\{\omega_k\}_{k\in\mathbb{N}}\subset\Db_T^\star$ be such that $d_T^\star(\omega_k,\omega)\to 0$ as $k\to\infty$. If $Z_N^{\omega_k}(0)=Z_N^\omega(0)$ for all $k\in\mathbb{N}$, then
\[(Z_N^{\omega_k}(t))_{t\in[0,T]}\xRightarrow[k\to \infty]{(d)}(Z_N^\omega(t))_{t\in[0,T]}.\]
\end{theorem}
Theorem \ref{thm2.1} is proved in Section \ref{s31}.
\subsection{Moran models in an environment driven by a subordinator} \lambdabel{s22}
In contrast to Section \ref{s21}, we consider here a random environment given by a Poisson point process $(t_i, p_i)_{i \in I}$ on $(-\infty,\infty) \times (0,1)$ with intensity measure $\dd t \times \mu$, where $\dd t$ stands for the Lebesgue measure and $\mu$ is a measure on $(0,1)$ satisfying
\betagin{equation}\lambdabel{intmu}
\int_{(0,1)} x \mu(\dd x) < \infty.
\end{equation}
{The latter} implies that {$(t_i, p_i)_{i \in I}$ almost surely satisfies Assumption \eqref{summableassumption}.} In particular, setting $J(t) \coloneqq \sum_{t_i\in(0,t]} p_i$, for $t\geq 0$, and {$J(t)\deltafeq -\sum_{t_i\in(t,0]}p_i$}, for $t<0$, we have $J\in\Db^\star$ almost surely. Moreover, by the L\'evy--It\^o decomposition, $(J(t))_{t\in{\mathbb R}b}$ is a pure-jump subordinator with L\'evy measure $\mu$. If the measure $\mu$ is finite, $J$ is a compound Poisson process, and thus the environment $J$ is almost surely simple.
We will see in Section \ref{s31} that, using the graphical representation, one can simultaneously construct Moran models for any $\omega\in\Db^\star$. Now, consider an independent pure-jump subordinator $(J(t))_{t\geq 0}$, with L\'evy measure $\mu$ on $(0,1)$ satisfying \eqref{intmu}. Thanks to Theorem \ref{thm2.1} the process $Z_N^J\deltafeq (Z_N^J(t))_{t\geq 0}$ is well defined. We refer to $Z_N^J$ as the \emph{annealed fit-counting process}. By definition, we have \[{\mathbb P}b(Z_N^J \in \cdot)=\int {\mathbb P}b(Z_N^\omega \in \cdot) {\mathbb P}b(J \in \dd \omega).\] In other words, ${\mathbb P}b(Z_N^\omega \in \cdot)$ is the law of $Z_N^J$ conditionally on a realization $\omega$ of the environment (i.e. ${\mathbb P}b(Z_N^\omega \in \cdot) = {\mathbb P}b(Z_N^J \in \cdot \mid J=\omega)$) and is classically referred to as the \textit{quenched measure} while ${\mathbb P}b(Z_N^J \in \cdot)$ integrates the effect of the random environment and is classically referred to as the \textit{annealed measure}.
The process $Z_N^J$ is a continuous-time Markov chain on $[N]_0$ with infinitesimal generator
\[\As_N f(n)\coloneqq \As_N^0 f(n) + \int_{(0,1)}\left({\mathbb E}b\left[f\left(n+\Hs(N,N-n,B_n(u))\right)\right]-f(n)\right){\mu(\dd u)},\quad n\in [N]_0, \]
where $B_n(u)\sigmam \textrm{Bin}(n,u)$, and for any $i\in[n]_0$, $\Hs(N,N-n,i)\sigmam\textrm{Hyp}(N,N-n,i)$ are independent.
The dynamic of the graphical representation is as follows: For each $i,j\in [N]$ with $i\neq j$, selective (resp. neutral) arrows from level $i$ to level $j$ appear at rate $\sigmagma_N/N$ (resp. $1/N$). For each $i\in [N]$, open circles (resp. crosses) appear at level $i$ at rate $\theta_N\nu_0$ (resp. $\theta_N\nu_1$). For each $k \in [N]$, every group of $k$ lines is subject to simultaneous potential reproductions at rate
\[\sigmagma_{N,k}\coloneqq \int_{(0,1)}y^k (1-y)^{N-k}\mu(\dd y),\] resulting in the appearance of $k$ selective arrows from the lines of this group (of potential parents) to the lines of a group of size $k$ (the potential descendants) that is chosen uniformly at random among subsets of size $k$ of the $N$ individuals. The $k$ selective arrows are drawn uniformly at random from the $k$ potential parents to the $k$ potential descendants. Recall that only type $0$ propagates through selective arrows, while both types propagate through neutral arrows. The appearance of a selective arrow is therefore silent when the potential parent at its tail is of type $1$.
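For instance, if $\mu=c\,\deltalta_p$ for some $c>0$ and $p\in(0,1)$, then $J$ is a compound Poisson process with rate $c$ and jumps of size $p$, and
\[\sigmagma_{N,k}=c\,p^k(1-p)^{N-k},\qquad\textrm{so that}\qquad \sum_{k=1}^N\binom{N}{k}\sigmagma_{N,k}=c\left(1-(1-p)^N\right).\]
In words, environmental reproduction events involving at least one individual occur at total rate $c(1-(1-p)^N)$, in agreement with the fact that a jump of $J$ involves no individual at all with probability $(1-p)^N$.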
\subsection{The Wright--Fisher diffusion in random environment}\lambdabel{s23}
In this section we are interested in the Wright--Fisher diffusion in random environment described in the introduction{ as the solution of the SDE \eqref{WFSDE}. In {Section \ref{S3}} we will see that, indeed, for any $x_0\in[0,1]$, this SDE has a pathwise unique strong solution starting at $x_0$ (see Proposition \ref{eandu}).}
Consider a pure-jump subordinator $J=(J(s))_{s\geq 0}$ with L\'evy measure satisfying \eqref{intmu} and an independent standard Brownian motion $B=(B(s))_{s\geq 0}$. For any $T>0$, the solution of \eqref{WFSDE} in $[0,T]$ is a measurable function of $(B(s),J(s))_{s\in[0,T]}$, which we denote by $F(B,J)$.
A regular version of the conditional law ${\mathbb P}b(F(B,J) \in \cdot \mid J=\omega)$ of $F(B,J)$ given $J$ is classically referred to as the \textit{quenched probability measure}. It is defined for a.e. realization $\omega$ of $J$. ${\mathbb P}b(F(B,J) \in \cdot)$ integrates the effect of the random environment and is classically referred to as the \textit{annealed measure}. As before, the quenched and annealed measures are related via \[{\mathbb P}b(F(B,J) \in \cdot)=\int {\mathbb P}b(F(B,J) \in \cdot \mid J=\omega)\, {\mathbb P}b(J \in \dd \omega).\]
We write $X$ and $X^\omega$ for the solution of \eqref{WFSDE} under the annealed and quenched measures, respectively.
For $\omegaega$ simple, the process $X^\omega$ starting at $x_0$ can be alternatively defined as follows. Denote by $t_1<\cdots<t_k$ {the consecutive jump times} of $\omega$ in $[0,T]$ and set $t_0\coloneqq 0$ and $X^\omega(0)\coloneqq x_0$. In the intervals $[t_i,t_{i+1})$, $X^\omega$ evolves as the solution of \eqref{WFD} starting at $X^\omega(t_i)$. Moreover, if $X^\omega(t_i-)=x$, then $X^\omega(t_i)\coloneqq x+x(1-x)\Deltalta\omega(t_i)$; see Fig. \ref{fig:wfpath} for an illustration.
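For instance, a jump of size $\Deltalta\omega(t_i)=1/2$ occurring when $X^\omega(t_i-)=1/2$ yields $X^\omega(t_i)=\tfrac{1}{2}+\tfrac{1}{2}\cdot\tfrac{1}{2}\cdot\tfrac{1}{2}=\tfrac{5}{8}$. Note also that the jump map $x\mapsto x+x(1-x)\Deltalta\omega(t_i)$ leaves $[0,1]$ invariant, since $0\leq x+x(1-x)p\leq x+(1-x)=1$ for all $x\in[0,1]$ and $p\in[0,1)$.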
\betagin{figure}[h!]
\scalebox{0.6}{
\betagin{tikzpicture}
\pgfmathsetseed{1337}
\draw[dashed, opacity=0.4] (0,-1)--(0,4);
\draw[dashed, opacity=0.4] (2,-1)--(2,4);
\draw[dashed, opacity=0.4] (8,-1)--(8,4);
\draw[dashed, opacity=0.4] (12,-1)--(12,4);
\draw[dashed, opacity=0.4] (14.5,-1)--(14.5,4);
\draw[dashed, opacity=0.4] (15,-1)--(15,4);
\draw[very thick] (0,4)--(15,4);
\draw[ultra thick,opacity=1,red] (0,-.94)--(1.97,-.94) (2.03,-.94)--(7.97,-0.94) (8.03,-.94)--(11.97,-0.94) (12.03,-.94)--(14.47,-0.94) (14.53,-0.94)--(15,-0.94);
\draw[ultra thick,opacity=1,red] (2,-1)--(2,3.38);
\draw[ultra thick,opacity=1,red] (8,-1)--(8,2.24);
\draw[ultra thick,opacity=1,red] (12,-1)--(12,3.13);
\draw[ultra thick,opacity=1,red] (14.5,-1)--(14.5,3.17);
\draw[very thick] (0,-1)--(15,-1);
\node [left] at (-0.2,4) {$1$};
\node [left] at (-0.2,-1) {$0$};
\node [left] at (-0.2,2) {$x_0$};
\node [right] at (7.2,-2) {$t$};
\node [right] at (-0.2,-1.5) {$0$};
\node [right] at (1.8,-1.5) {$t_1$};
\node [right] at (7.8,-1.5) {$t_2$};
\node [right] at (11.8,-1.5) {$t_3$};
\node [right] at (14.3,-1.5) {$t_4$};
\node [right] at (14.8,-1.5) {$T$};
\draw[-{angle 45[scale=5]}] (6.5,-1.8) -- (8.5,-1.8) node[text=black, pos=.6, xshift=7pt]{};
\Emmett{100}{0.02}{0.2}{black}{black}{black}
\end{tikzpicture}}
\caption{An illustration of a path of the process $X^\omega$ in the interval $[0,T]$. The red lines represent the jump sizes $\Deltalta \omega$ of the environment $\omega$; $t_1$, $t_2$, $t_3$ and $t_4$ are the jump times of $\omega$.}
\lambdabel{fig:wfpath}
\end{figure}
The next result establishes the convergence of the type-frequency process in the Moran model to the Wright--Fisher diffusion in random environment, as the population size grows to $\infty$ and time is suitably accelerated.
{\betagin{theorem}[Convergence]\lambdabel{thm2.2}
Assume that $N \sigmagma_N\rightarrow \sigmagma$ and $N\theta_N\rightarrow \theta$ as $N\to\infty$, for some $\sigmagma, \theta\geq 0$ (weak selection - weak mutation).
\betagin{enumerate}
\item Let $J$ be a pure-jump subordinator with L\'{e}vy measure $\mu$ in $(0,1)$, and set
$J_N(t)\coloneqq J(t/N)$, $t\geq 0$. Define the process $(X_N(t))_{t\geq 0}$ via $X_N(t)\deltafeq Z_N^{J_N}(Nt)/N,$ $t\geq 0$. If $X_N(0)\xrightarrow[N\to\infty]{(d)} x_0$, then
\[ (X_N(t))_{t\geq 0}\xRightarrow[N\to\infty]{(d)} (X(t))_{t\geq 0},\]
where $X$ is the unique pathwise solution of \eqref{WFSDE} with $X(0)=x_0$.
\item Let $\omega\in\Db^\star$ be a simple environment and set $\omega_N(t)\coloneqq\omega(t/N)$, $t\geq 0$. Define the process $(X_N^\omega(t))_{t\geq 0}$ via $X_N^\omega(t)\deltafeq Z_N^{\omega_N}(Nt)/N,$ $t\geq 0$. If $X_N^\omega(0)\xrightarrow[N\to\infty]{(d)} x_0$, then
\[(X_N^\omega(t))_{t\geq 0}\xRightarrow[N\to\infty]{(d)} (X^\omega(t))_{t\geq 0},\]
with $X^\omega$ starting at $x_0$.
\end{enumerate}
\end{theorem}}
The proof of Theorem \ref{thm2.2} is given in Section \ref{s32}. The reason for using the environment $J_N$ or $\omega_N$ is to compensate for the fact that time is sped up by a factor of $N$. In this way, $X_N$ and $X$ share the same environment.
\betagin{remark}
The analogue of Theorem \ref{thm2.2}-(1) in the context of discrete-time Wright--Fisher models without mutations is covered by the fairly general result \cite[Thm. 3.2]{BCM19} (see also \cite[Thm. 2.12]{CSW19}).
\end{remark}
\betagin{remark}
If $J$ is a compound Poisson process, then almost every environment is simple. In this case, according to Theorem \ref{thm2.2}-(2) the quenched convergence holds for almost every environment (with respect to the law of $J$). We conjecture that this is true for general $J$. {In Proposition \ref{q-tight} we show that the sequence $(X_N^\omega)_{N\geq 1}$ is tight} for any environment $\omega$. Hence, it would suffice to prove the continuity of $\omega\mapsto X^\omega$ to obtain the desired convergence. Unfortunately, since the diffusion term in \eqref{WFSDE} is not Lipschitz, the standard techniques used to prove this type of result fail. Developing new techniques to cover non-Lipschitz diffusion coefficients is beyond the scope of this paper.
\end{remark}
\subsection{The ancestral selection graph in random/deterministic environment}\lambdabel{s24}
The aim of this section is to associate an ASG to the Wright--Fisher diffusion in random/deterministic environment. In contrast to the Moran model setting described in Section \ref{s21}, setting up a graphical representation for the forward process is not straightforward. To circumvent this problem, we proceed as follows. We first consider the graphical representation of a Moran model with parameters $\sigmagma/N$, $\theta/N$, $\nu_0$, $\nu_1$, and environment $\omega_N(\cdot)=\omega(\cdot/N)$, and we speed up time by $N$. Next, we sample $n$ individuals at time $T$ and we construct the ASG as in Section \ref{s21}.
Now, replace $\omega$ by a pure-jump subordinator $J$ with L\'evy measure $\mu$ supported in $(0,1)$. Note that the Moran-ASG in $[0,T]$ evolves according to the time reversal of $J$. The latter is the subordinator $\bar{J}^T\coloneqq(\bar{J}^T(\betata))_{\betata\in[0,T]}$ with $\bar{J}^T(\betata)\coloneqq J(T)-J((T-\betata)-)$, which has the same law as $J$ (its law does not depend on $T$). A simple asymptotic analysis of the rates and probabilities for the possible events leads to the following definition.
\betagin{definition}[The annealed{/quenched} ASG] \lambdabel{defannealdasg}
The \emph{annealed ancestral selection graph} $\Gs\deltafeq(\Gs(\betata))_{\betata\geq 0}$ with parameters $\sigmagma,\theta,\nu_0,\nu_1$, and environment driven by a pure-jump subordinator with L\'evy measure $\mu$, associated to a sample of size $n$ of the population at time $T$ is the branching-coalescing particle system starting with $n$ lines and with the following dynamic.
\betagin{itemize}
\item[(i)] {Each} line independently splits at rate $\sigmagma$ into two, an incoming line and a continuing line.
\item[(ii)] {Every} given pair of lines independently {coalesces} into a single one at rate $2$.
\item[(iii)] If $m$ is the current number of lines in the ASG, every group of $k$ lines independently experiences a \emph{simultaneous branching} at rate
{\betagin{equation}\lambdabel{smk}
\sigmagma_{m,k}\coloneqq \int_{(0,1)}y^k (1-y)^{m-k}\mu(\dd y),
\end{equation}}
i.e. each line in the group splits into two lines, an incoming line and a continuing line.
\item[(iv)] {Each} line is independently decorated by a beneficial mutation at rate $\theta \nu_0$.
\item[(v)] {Each} line is independently decorated by a deleterious mutation at rate $\theta \nu_1$.
\end{itemize}
{Let $\omega:{\mathbb R}b\to{\mathbb R}b$ be a fixed environment. The \emph{quenched ancestral selection graph} with parameters $\sigmagma,\theta,\nu_0,\nu_1$, and environment $\omega$ of a sample of size $n$ at time $T$ is a branching-coalescing particle system $\Gs_T^\omega\deltafeq(\Gs_T^\omega(\betata))_{\betata\geq 0}$ starting at $\betata=0-$ with $n$ lines and evolving as the annealed ASG but with (iii) replaced by
\betagin{itemize}
\item[(iii')] If at time $\betata$ we have $\Deltalta \omega(T-\betata)>0$, then each line splits into two lines, an incoming line and a continuing line, with probability $\Deltalta \omega(T-\betata)$, independently of the other lines.
\end{itemize}}
See Fig. \ref{fig:backfor} for an illustration of the type-frequency process $X^\omega$ and the killed ASG $\Gs_T^\omega$.
\end{definition}
The branching-coalescing system $\Gs_T^\omega$ is clearly well-defined for $\omegaega$ simple. The justification of the previous definition for general environments is more involved and will be given in Section \ref{s41}.
\betagin{figure}[t!]
\scalebox{0.7}{
\betagin{tikzpicture}
\pgfmathsetseed{1337}
\draw[dashed, opacity=0.5] (0,-1)--(0,4);
\draw[dashed, opacity=0.5] (2,-1)--(2,4);
\draw[dashed, opacity=0.5] (8,-1)--(8,4);
\draw[dashed, opacity=0.5] (12,-1)--(12,4);
\draw[dashed, opacity=0.5] (15,-1)--(15,4);
\draw[opacity=0.5] (0,4)--(15,4);
\draw[opacity=0.5] (0,-1)--(15,-1);
\node [left,opacity=0.5] at (-0.2,4) {$1$};
\node [left,opacity=0.5] at (-0.2,-1) {$0$};
\node [left, opacity=0.5] at (-0.2,2) {$x$};
\node [right, opacity=0.5] at (7.2,-2) {$t$};
\node [right, opacity=0.5] at (1.8,-1.5) {$t_0$};
\node [right, opacity=0.5] at (7.8,-1.5) {$t_1$};
\node [right, opacity=0.5] at (11.8,-1.5) {$t_2$};
\node [right, opacity=0.5] at (14.8,-1.5) {$T$};
\node [right, opacity=0.5] at (-0.2,-1.5) {$0$};
\draw[-{angle 45[scale=5]}, opacity=0.5] (6.5,-1.8) -- (8.5,-1.8) node[text=black, pos=.6, xshift=7pt]{};
\bafo{100}{0.02}{0.2}{black}{black}{black}
\node [right, opacity=1] at (1.8,4.5) {$T-t_0$};
\node [right, opacity=1] at (7.8,4.5) {$T-t_1$};
\node [right, opacity=1] at (11.8,4.5) {$T-t_2$};
\node [right, opacity=1] at (14.8,4.5) {$0$};
\node [right, opacity=1] at (-0.2,4.5) {$T$};
\node [right, opacity=1] at (7.2,5) {$\betata$};
\draw[-{angle 45[scale=5]}, opacity=1] (8.5,4.8) -- (6.5,4.8) node[text=black, pos=.6, xshift=7pt]{};
\draw[thick] (1,-0.5)--(15,-0.5);
\draw[thick] (3,0.5)--(14.5,0.5);
\draw[thick] (10,1.5)--(13,1.5);
\draw[very thick] (0,2.5)--(12,2.5);
\draw[very thick] (6,0)--(12,0);
\draw[very thick] (0,1.5)--(9,1.5);
\draw[very thick] (5,2)--(8,2);
\draw[very thick] (4,3)--(8,3);
\draw[very thick] (7,1)--(8,1);
\draw[very thick] (0,0)--(2,0);
\draw[very thick] (0,3)--(2,3);
\draw[very thick] (14.5,-0.5)--(14.5,0.5);
\draw[very thick] (13,1.5)--(13,0.5);
\draw[very thick] (12,-0.5)--(12,0);
\draw[very thick] (12,1.5)--(12,2.5);
\draw[very thick] (9,0.5)--(9,1.5);
\draw[very thick] (8,0.5)--(8,1);
\draw[very thick] (8,1.5)--(8,2);
\draw[very thick] (8,2.5)--(8,3);
\draw[very thick] (7,0.5)--(7,1);
\draw[very thick] (5,1.5)--(5,2);
\draw[very thick] (4,0.5)--(4,3);
\draw[very thick] (2,-0.5)--(2,0);
\draw[very thick] (2,2.5)--(2,3);
\node[ultra thick] at (10,1.5) {$\bigtimes$} ;
\node[ultra thick] at (6,0) {$\bigtimes$} ;
\node[ultra thick] at (3,0.5) {$\bigtimes$} ;
\node[ultra thick] at (1,-0.5) {$\bigtimes$} ;
\end{tikzpicture}}
\caption{Illustration of a realization of the type-frequency process $X^\omega$ (grey path) and the killed ASG $\Gs_T^\omega$ (black lines) embedded in the same picture. Forward time $t$ runs from left to right; backward time $\betata\coloneqq T-t$ runs from right to left. The environment $\omega$ jumps at forward times $t_0$, $t_1$ and $t_2$.}
\lambdabel{fig:backfor}
\end{figure}
\betagin{remark}
{Note that in the Moran model, a neutral arrow appears from line $A$ to line $B$ at rate $1/N$ and from line $B$ to line $A$ at the same rate. Two lines are thus connected by a neutral arrow at rate $2/N$, i.e. at rate $2$ once time is sped up by $N$, which explains the rate of coalescence events in (ii).}
\end{remark}
\subsection{Type frequency via the killed ASG}\lambdabel{s25}
The aim of this section is to relate the type-$0$ frequency process $X$ to the ASG. To this end, {assume} that the proportion of fit individuals at time $0$ is equal to $x\in[0,1]$. Conditionally on $X(T)$, the probability that $n$ individuals sampled independently at time $T$ are all unfit equals $(1-X(T))^n$. Now, consider the annealed ASG associated to the $n$ sampled individuals in $[0,T]$ and assign a type to each line in the ASG at time $\betata=T$ independently at random according to the initial distribution $(x,1-x)$. In the absence of mutations, the $n$ sampled individuals are unfit if and only if all the lines in the ASG at time $\betata=T$ are assigned the unfit type (because at any selective event a fit individual can only be replaced by another fit individual). Therefore, if $R(T)$ denotes the number of lines present in the ASG at time $\betata=T$, then conditionally on $R(T)$, the probability that the $n$ sampled individuals are unfit is $(1-x)^{R(T)}$. We would then expect to have \[{\mathbb E}b[(1-X(T))^n\mid X(0)=x]={\mathbb E}b[(1-x)^{R(T)}\mid R(0)=n].\]
Mutations determine the types of some of the lines in the ASG even before we assign types to the lines at time $\betata=T$. Hence, we can prune away from the ASG all the sub-ASGs arising from a mutation event. If in the pruned ASG there is a line ending in a beneficial mutation, we can infer that at least one of the sampled individuals is fit. If all the lines end up in a deleterious mutation, we can infer directly that all the sampled individuals are unfit. In the remaining case, the sampled individuals are all unfit if and only if all the lines present at time $\betata=T$ in the pruned ASG are assigned the unfit type. We use this idea in Section \ref{s42} to construct, for a given sample of the population at time $t=T$, branching-coalescing systems $\bar{\Gs}\deltafeq (\bar{\Gs}(\betata))_{\betata\geq 0}$ and $\bar{\Gs}_T^\omega\deltafeq (\bar{\Gs}_T^\omega(\betata))_{\betata\geq 0}$ in the annealed and quenched setting, respectively. Both processes have a cemetery state $\dagger$. The main feature of $\bar{\Gs}$ (resp. $\bar{\Gs}^\omega_T$) is that for any $\betata\geq0$, the individuals in the sample are all unfit if and only if $\bar{\Gs}(\betata)\neq \dagger$ (resp. $\bar{\Gs}^\omega_T(\betata)\neq \dagger$) and all the lines present at time $\betata$ in $\bar{\Gs}$ (resp. $\bar{\Gs}^\omega_T$) are unfit.
\subsection*{Moment dualities}
In this section we establish a duality relation between the process $X$ and the line-counting process of the k-ASG.
For each $\betata\geq 0$, we denote by $R(\betata)$ the number of lines present in the {annealed} k-ASG at time $\betata$, with the convention that $R(\betata)=\dagger$ if $\bar{\Gs}(\betata)=\dagger$. The process $R\coloneqq (R(\betata))_{\betata\geq 0}$, called the {\textit{annealed}} line-counting process of the k-ASG, is a continuous-time Markov chain with values in ${\mathbb N}b_0^\dagger\coloneqq \mathbb{N}_0\cup\{\dagger\}$ and infinitesimal generator matrix $Q^\mu_\dagger\coloneqq (q^\mu_\dagger(i,j))_{i,j\in{\mathbb N}b_0^\dagger}$ defined via
\betagin{equation}\lambdabel{krates}
q^\mu_\dagger(i,j)\coloneqq \left\{\betagin{array}{ll}
i(i-1)+i \theta\nu_1 &\text{if $j=i-1$},\\
\binom{i}{k}(\sigmagma_{i,k}+\sigmagma 1_{\{k=1\}}) &\text{if $j=i+k,\, i\geq k\geq 1$},\\
i\theta\nu_0&\textrm{if $j=\dagger$},\\
-i(i-1+\theta+\sigmagma)-\int_{(0,1)}(1-(1-y)^i)\mu(\dd y)&\textrm{if $j=i\in{\mathbb N}b_0$},
\end{array}\right.
\end{equation}
{where the coefficients $\sigmagma_{m,k}$ are defined} {in Eq. \eqref{smk}. All other entries are zero}.
{Similarly, for $T \in \mathbb{R}$ and {a fixed} environment $\omegaega\in\Db^\star$, we denote by {$R_T^\omega \coloneqq (R_T^\omega(\betata))_{ \betata \geq 0}$} the line-counting process associated to the quenched k-ASG $\bar{\Gs}_T^\omega$. The process $R_T^\omega$, {called the \textit{quenched} line-counting process of the k-ASG,} is a continuous-time (inhomogeneous) Markov process with values in ${\mathbb N}b_0^\dagger$. It jumps from $i\in{\mathbb N}b$ to $j\in{\mathbb N}b_0^\dagger \setminus \{i\}$ at rate $q^0_\dagger(i,j)$, where $q^0_\dagger$ is the matrix defined in \eqref{krates} with $\mu=0$. In addition, at each time $\betata \geq 0$ with $\Deltalta\omegaega(T-\betata)>0$, conditionally on $\{R_T^\omega(\betata-)=i\}$, $i\in{\mathbb N}b$, we have $R_T^\omega(\betata) \sigmam i + \bindist{i}{\Deltalta \omegaega(T-\betata)}$.
If $\theta >0$ and $\nu_0 \in (0,1)$, the states $0$ and $\dagger$ are absorbing for $R$ and $R_T^\omega$.}
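As an illustration of the rates in \eqref{krates}, from state $i=1$ the process $R$ jumps to $0$ at rate $\theta\nu_1$ (deleterious mutation), to $\dagger$ at rate $\theta\nu_0$ (beneficial mutation), and to $2$ at rate $\sigmagma+\sigmagma_{1,1}$ with $\sigmagma_{1,1}=\int_{(0,1)}y\,\mu(\dd y)$ (branching); coalescence requires at least two lines, and the diagonal entry $q^\mu_\dagger(1,1)=-(\theta+\sigmagma)-\int_{(0,1)}y\,\mu(\dd y)$ is minus the total jump rate (recall that $\nu_0+\nu_1=1$), as it must be.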
Let $J$ be a pure-jump subordinator with L\'evy measure $\mu$ supported in $(0,1)$. We write here $X^J$ {instead} of $X$ to stress the dependency of the (strong) solution of \eqref{WFSDE} on the environment $J$. Similarly, we write
$X^\omegaega$ for its quenched version (as introduced in Section \ref{s23}). Since, in the annealed case, backward and forward environments have the same law, we can construct the line-counting process of the k-ASG as the strong solution of an SDE involving $J$ {and} four other independent Poisson processes encoding the non-environmental events. We denote it by $(R^J(\betata))_{\betata\geq 0}$. The next result establishes a formal relation between $X^J$ and $R^J$: a \emph{reinforced moment duality}, which allows us to derive moment dualities in the annealed and quenched setting (see Fig. \ref{fig:backfor} to visualize forward and backward processes in the same picture).
\betagin{theorem}[{Reinforced, annealed and quenched moment dualities}] \lambdabel{thm2.3}
For all $x\in[0,1]$, $n\in\mathbb{N}$ and $T\geq 0$, and any function $f\in\Cs^2([0,\infty))$ with compact support,
\betagin{equation}\lambdabel{rmd}
\mathbb{E}[(1-X^J(T))^n f(J(T))\mid X^J(0)=x]=\mathbb{E}[(1-x)^{R^J(T)} f(J(T))\mid R^J(0)=n],
\end{equation}
with the convention $(1-x)^\dagger=0$ for all $x\in[0,1]$. In particular, if $f=1$ we recover the moment duality\footnote{We will often drop the superscript $J$ when using this relation, unless we want to emphasize the dependency on $J$.}
\betagin{equation}\lambdabel{md}
\mathbb{E}[(1-X^J(T))^n \mid X^J(0)=x]=\mathbb{E}[(1-x)^{R^J(T)} \mid R^J(0)=n].
\end{equation}
For almost every (with respect to the law of $J$) environment $\omega\in\Db^\star$,
\betagin{equation}
\mathbb{E} \left [ (1-X^\omegaega(T))^n\mid X^\omegaega(0)=x \right ]= \mathbb{E} \left [ (1-x)^{R_T^\omegaega(T-)}\mid R_T^\omegaega(0-)=n \right ]. \lambdabel{quenchedual}
\end{equation}
\end{theorem}
We prove \eqref{rmd} and \eqref{md} in Section \ref{s51}. The proof of the quenched duality \eqref{quenchedual} is given in Section \ref{s61}. Moreover, Theorem \ref{thmf1} extends \eqref{quenchedual} to any simple environment.
\betagin{remark}
For $\theta=0$, \eqref{md} is a particular case of \cite[Lemma 2.14]{CSW19}.
\end{remark}
\subsection*{Asymptotic type composition}
\betagin{figure}[t!]
\scalebox{0.65}{
\betagin{tikzpicture}
\pgfmathsetseed{1337}
\draw[dashed, opacity=0.4] (5,-1)--(5,4);
\draw[dashed, opacity=0.4] (15,-1)--(15,4);
\draw[dashed, opacity=0.4] (0,-1)--(0,4);
\draw[very thick] (0,4)--(15,4);
\draw[opacity=0.4] (0,2)--(15,2);
\draw[ultra thick,opacity=1,red] (0,-.94)--(1.97,-.94) (2.03,-.94)--(7.97,-0.94) (8.03,-.94)--(11.97,-0.94) (12.03,-.94)--(15,-0.94) ;
\draw[ultra thick,opacity=1,red] (2,-1)--(2,3.38);
\draw[ultra thick,opacity=1,red] (8,-1)--(8,2.24);
\draw[ultra thick,opacity=1,red] (12,-1)--(12,3.13);
\draw[very thick] (0,-1)--(15,-1);
\node [left] at (-0.2,4) {$1$};
\node [left] at (-0.2,-1) {$0$};
\node [right] at (-0.9,-1.5) {$-(\tau+h)$};
\node [left] at (-0.2,2) {$x$};
\node [right] at (7.2,-2) {$t$};
\node [right] at (4.5,-1.5) {$-\tau$};
\node [right] at (14.8,-1.5) {$0$};
\draw[-{angle 45[scale=5]}] (6.5,-1.8) -- (8.5,-1.8) node[text=black, pos=.6, xshift=7pt]{};
\frompast{100}{0.02}{0.2}{black}{black}{black}
\end{tikzpicture}}
\caption{The black (resp. grey) path represents a realization of $X^\omega$ in the interval $[-(\tau+h),0]$ (resp. $[-\tau,0]$ ) starting at $X^\omega(-(\tau+h))=x$ (resp. $X^\omega(-\tau)=x$). The jump sizes of the environment $\omega$ are depicted in red.}
\lambdabel{fig:wfpast}
\end{figure}
Assume now that $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$. In particular, the processes $X$ and $X^\omegaega$ are not absorbed in $\{0,1\}$. We will describe the asymptotic behavior of these processes using Theorem \ref{thm2.3}. The quenched case is particularly delicate, because for a given environment $\omega$, $X^\omegaega(t)$ strongly depends on the environment in the recent past, and only weakly on the environment in the distant past (see Fig. \ref{fig:wfpath}). Hence, unless $\omega$ is constant after some fixed time $t_0$ (i.e. $\omega$ has no jumps after $t_0$), $X^\omegaega(t)$ will not converge as $t\to\infty$ (see Remark \ref{periodic} for the case of periodic environments). In contrast, for a given environment $\omega$ in $(-\infty,0]$, we will see that $X^\omega(0)$, conditionally on $X^\omega(-\tau)=x$, converges in distribution as $\tau\to\infty$, and we will characterize its law; the setting is illustrated in Fig. \ref{fig:wfpast} (compare with Fig. \ref{fig:wfpath}). To this end, define for $n\in{\mathbb N}b_0$,
\betagin{align}
\pi_n\coloneqq \mathbb{P}(\exists \betata\geq 0: R(\betata)=0 \mid R(0)=n),\quad
\Pi_n(\omegaega)\coloneqq \mathbb{P}(\exists \betata\geq 0 : R_0^\omegaega(\betata)=0 \mid R_{0}^\omegaega(0-)=n), \lambdabel{defpin}
\end{align}
and set $\pi_\dagger\coloneqq 0$ and $\Pi_\dagger(\omega)\coloneqq 0$. Clearly, $\pi_0=1$ and $\Pi_0(\omegaega)=1$.
\betagin{theorem}[Asymptotic type frequency] \lambdabel{thm2.4}
Assume that $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$.
\betagin{enumerate}
\item The diffusion $X$ has a unique stationary distribution $\eta_X\in\Ms_1([0,1])$ and {$X(t)\xrightarrow[]{(d)}X(\infty)$} as $t\to\infty$, where $X(\infty)$ denotes a random variable distributed according to $\eta_X$. Moreover, for all $n\in\mathbb{N}$,
\betagin{equation}
\mathbb{E}\left[ (1-X(\infty))^n \right]=\pi_n, \lambdabel{cvmomentsannealed}
\end{equation}
and the absorption probabilities $(\pi_n)_{n\geq 0}$ satisfy
\betagin{equation}
(\sigmagma+\theta+n-1)\pi_n=\sigmagma \pi_{n+1}+(\theta\nu_1+ n-1)\pi_{n-1}+ \frac{1}{n}\sum\limits_{k=1}^n\binom{n}{k} \sigmagma_{n,k}(\pi_{n+k}-\pi_n),\quad n\in{\mathbb N}b, \lambdabel{recwn}
\end{equation}
where the coefficients $\sigmagma_{n,k}$, $k\in[n]$, $n\in{\mathbb N}b$, are defined in Eq. \eqref{smk}.
\item For almost every (with respect to the law of $J$) environment $\omega$ and for any $x \in (0,1)$, the distribution of $X^\omegaega(0)$ conditionally on $\{ X^\omegaega(-\tau) = x \}$ has a limit distribution $\Ls^\omegaega$ as $\tau\to\infty$, which does not depend on $x$. Moreover,
\betagin{eqnarray}
\ \int_0^1 (1-y)^n \mathcal{L}^{\omegaega}(\dd y) = \Pi_n(\omegaega),\quad n\in{\mathbb N}b, \lambdabel{dualimite}
\end{eqnarray}
and the convergence of moments is exponential, i.e.
\betagin{eqnarray}
\ \left | \mathbb{E}\left [ (1-X^\omegaega(0))^n \mid X^\omegaega(-\tau)=x \right ] - \Pi_n(\omegaega) \right | \leq e^{-\theta\nu_0 \tau},\quad n\in{\mathbb N}b. \lambdabel{approxwn}
\end{eqnarray}
\end{enumerate}
\end{theorem}
The setting of Theorem \ref{thm2.4}-(1) is illustrated in Fig. \ref{fig:wfpath} and its proof is given in Section \ref{s51}; the setting of part (2) is illustrated in Fig. \ref{fig:wfpast} and its proof is provided in Section \ref{s61}. Moreover, {Theorem \ref{thmf2}} extends {Theorem} \ref{thm2.4}-(2) to any simple environment. {A refinement of Theorem \ref{thm2.4}-(2) is given in Theorem \ref{thmf3} for simple environments under additional conditions.}
\betagin{remark}\lambdabel{simpsonindexannealed}
\emph{Simpson's index} is a popular tool for describing population diversity. It represents the probability that two individuals chosen uniformly at random from the population are of the same type. In our case it is given by ${\rm{Sim}}(t) \coloneqq X(t)^2 + (1-X(t))^2$. If the types represent different species, it gives a measure of biodiversity. If the types represent two alleles of a gene for a given species, it measures \emph{homozygosity}. As a consequence of Theorem \ref{thm2.4}, one can express the moments of ${\rm{Sim}}(\infty)$ {in terms} of the coefficients $(\pi_n)_{n\geq 0}$. In particular, we have
\[ \mathbb{E}[{\rm{Sim}}(\infty)]=\mathbb{E}[X(\infty)^2 + (1-X(\infty))^2] = 1 - 2 \pi_1 + 2\pi_2. \]
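Indeed, ${\rm{Sim}}(\infty)=1-2X(\infty)(1-X(\infty))$ and, by \eqref{cvmomentsannealed}, $\mathbb{E}[X(\infty)(1-X(\infty))]=\mathbb{E}[(1-X(\infty))]-\mathbb{E}[(1-X(\infty))^2]=\pi_1-\pi_2$, which yields the identity above.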
\end{remark}
\betagin{remark}\lambdabel{periodic}
If $\omegaega$ is a periodic environment on $[0,\infty)$ with period $T_p > 0$, the proof of Theorem \ref{thm2.4}-(2) yields that, for any $x \in (0,1)$ and $r \in [0,T_p)$, the distribution of $X^\omegaega(nT_p + r)$, conditionally on $\{ X^\omegaega(0) = x \}$, has a limit distribution $\mathcal{L}_r^{\omegaega}$, when $n$ goes to infinity, which is a function of $\omega$ and $r$, and does not depend on $x$. Furthermore, $\mathcal{L}_r^{\omegaega}$ satisfies $\int_0^1 (1-y)^n \mathcal{L}_r^{\omegaega}(\dd y) = \Pi_n(\omegaega_r)$, where $\omegaega_r$ is the periodic environment in $(-\infty,0]$ defined by $\omegaega_r(t) \coloneqq \omegaega(r+t+(\lfloor -t/T_p \rfloor + 1) T_p)$ for any $t \in (-\infty,0]$. The convergence of moments is exponential as in \eqref{approxwn}.
\end{remark}
\subsection*{Mixed environments}
\betagin{figure}[t!]
\scalebox{0.7}{
\betagin{tikzpicture}
\pgfmathsetseed{1337}
\draw[dashed, opacity=0.4] (0,0)--(0,6);
\draw[thick, opacity=1] (9,0)--(9,6);
\draw[dashed, opacity=0.4] (15,0)--(15,6);
\draw[ultra thick, opacity=1,red] (12,0)--(12,4.8);
\draw[ultra thick, opacity=1,red] (10,0)--(10,3);
\draw[ultra thick, opacity=0.4,red] (2,0)--(2,4.5);
\draw[ultra thick, dashed, opacity=0.4,red] (1,0)--(1,4.8);
\draw[ultra thick, dashed, opacity=0.4,red] (3,0)--(3,3.6);
\draw[ultra thick, dashed, opacity=0.4,red] (6,0)--(6,5.4);
\draw[ultra thick, opacity=0.4,red] (0,0.02)--(0.97,0.02) (1.03,0.02)--(2.97,0.02) (2.03,0.02)--(5.97,0.02) (6.03,0.02)--(8.99,0.02);
\draw[ultra thick, opacity=1,red] (9.01,0.02)--(9.97,0.02) (10.03,0.02)--(11.97,0.02) (12.03,0.02)--(15,0.02);
\draw[] (0,6)--(15,6);
\draw[] (0,0)--(15,0);
\node [left] at (-0.2,6) {$1$};
\node [left] at (-0.2,0) {$0$};
\node [left] at (-0.2,3) {$x$};
\node [right] at (8.5,-0.5) {$-{\tau_\star^{}}$};
\node [right] at (14.8,-0.5) {$0$};
\node [right] at (-.4,-0.5) {$-\tau$};
\various{100}{0.02}{0.2}{0.4}{1}{black}
\pgfmathsetseed{1336}
\variousb{50}{0.02}{0.2}{0.4}{1}{blue}
\end{tikzpicture}}
\caption{Two realizations (blue and black) of the type-frequency process $X^{J\otimes_{\tau_\star^{}}^{}\omega}$ in $[-\tau,0]$ starting at $x$. Both realizations share the same deterministic environment $\omega$ in $[-{\tau_\star^{}},0]$; the solid red lines in $(-{\tau_\star^{}},0]$ represent the jump sizes of $\omega$. The corresponding environments in $[-\tau,-\tau_\star^{}]$ are random; the jump sizes $\Deltalta J$ associated to the black (resp. blue) realization in $[-\tau,-{\tau_\star^{}})$ are depicted by solid (resp. dashed) red lines.}
\lambdabel{fig:mixed}
\end{figure}
We now present an application illustrating the advantage of studying both quenched and annealed {settings}. We consider a population evolving from the distant past in a (stationary) random environment {and analyze} the effect of a recent perturbation of the environment on the type composition at present. To this end, we assume that we only know the distribution of the environment before the perturbation.
In the absence of perturbations, the environment is given by a pure jump subordinator $J$ in $(-\infty,0]$ with L\'evy measure $\mu$ satisfying \eqref{intmu}. The perturbation occurs in $(-\tau_\star^{},0]$ (for some $\tau_\star^{} > 0$) and is given by a deterministic environment $\omega$. Let $X^{J\otimes_{\tau_\star^{}}^{} \omega}$ be the solution of \eqref{WFSDE} under the environment $J\otimes_{\tau_\star^{}}^{} \omega$, which coincides with $J$ and $\omega$ in $(-\infty,-{\tau_\star^{}}]$ and $(-{\tau_\star^{}},0]$, respectively; see Fig~\ref{fig:mixed} for an illustration. Recall that $ (R_0^\omega(\betata))_{\betata \in [0,{\tau_\star^{}})}$ is the line-counting process associated to the quenched k-ASG (see Section \ref{s25}). We are interested in the distribution of $X^{J\otimes_{\tau_\star^{}}^{} \omega}(0)$. The next result provides the moments of this random variable.
\betagin{proposition}\lambdabel{prop2.5}
Assume that $\theta>0$ and $\nu_0,\nu_1\in(0,1)$. For any $\tau_\star^{}>0$, $n\in{\mathbb N}b$, $x\in[0,1]$, and almost every (with respect to the law of $J$) $\omega\in\Db^\star$, we have
\betagin{equation}\lambdabel{mix1}
\lim_{\tau\to\infty}{\mathbb E}b\left[(1-X^{J\otimes_{\tau_\star^{}}^{} \omega}(0))^n\mid X^{J\otimes_{\tau_\star^{}}^{} \omega}(-\tau)=x\right]={\mathbb E}\left[\pi_{R_0^{\omega}({\tau_\star^{}}-)}\mid R_0^\omega(0-)=n\right].
\end{equation}
\end{proposition}
Proposition \ref{prop2.5} is proved in Section \ref{s61}; a refinement of this result is given for simple environments under the additional condition $\sigmagma=0$ in Proposition \ref{mixf4}.
\subsection{Ancestral type via the pruned lookdown ASG}\lambdabel{s26}
In this section we are interested in the type distribution at present of the individuals that will be successful in the long run. This distribution may substantially differ from the type composition at present and show a bias towards the fit type.
Consider a sample of $n$ individuals at some time $T$ in the future and trace their ancestral lines using the ASG. We will see in Section \ref{s52} that the line-counting process of the ASG is positive recurrent (see Lemma \ref{pr}). Hence, the ASG has bottlenecks and, if $T$ is sufficiently large, the $n$ individuals share a common ancestor at time $0$. Assigning types to the lines in the ASG at time $0$ and propagating them forward along the graph according to the pecking order, we determine the types in the sample as well as the true genealogy. In particular, we obtain the type of the common ancestor of the sample. What it means for $T$ to be sufficiently large depends on $n$ and on the realization of the ASG, but this dependency vanishes as $T\to \infty$. Because we are interested in the type of the individual that is successful in the long run, we can work directly in this limiting regime. In what follows we formalize this idea.
Consider a realization $\Gs_{[0,T]}^{}\coloneqq (\Gs(\betata))_{\betata\in[0,T]}$ of the annealed ASG in $[0,T]$ started with one line, representing an individual sampled at forward time $T$. If $t$ denotes forward time, we {set} $\betata=T-t$ to denote the backward time (see Fig. \ref{fig:backfor}). For $\betata\in[0,T]$, let $V_\betata$ be the set of lines present at time $\betata$ in $\Gs_{[0,T]}^{}$. Consider a function $c:V_T\to\{0,1\}$ representing an assignment of types to the lines in $V_T$. Given $\Gs_{[0,T]}^{}$ and $c$, we propagate types (forward in time) along the lines of $\Gs_{[0,T]}$ keeping track, at any time $\betata\in[0,T]$, of the true ancestor in $V_T$ of each line in $V_\betata$. We denote by $a_c(\Gs_{[0,T]}^{})$ the type of the ancestor in $V_T$ of the single line in $V_0$. Assume that, under ${\mathbb P}b_x$, $c$ assigns independently to each line type $0$ with probability $x$ and type $1$ with probability $1-x$. {The \emph{annealed ancestral type distribution at time $T$} is
\[h_T(x)\coloneqq {\mathbb P}b_x(a_c(\Gs_{[0,T]})=0),\quad x\in[0,1].\]}
{In the quenched setting, we proceed in the same way, but using $\Gs_{[0,T]}^\omega\coloneqq (\Gs_T^\omega(\betata))_{\betata\in[0,T]}$, the quenched ASG in $[0,T]$ in the environment $\omega$ of an individual sampled at time $T$, instead of $\Gs_{[0,T]}^{}$. The \emph{quenched ancestral type distribution at time $T$} is
\[h_T^\omega(x)\coloneqq {\mathbb P}b_x(a_c(\Gs_{[0,T]}^\omega)=0),\quad x\in[0,1],\]
where under ${\mathbb P}b_x$, $c$ assigns independently to each line present in $\Gs_{[0,T]}^\omega$ at time $\betata=T$ type $0$ with probability $x$ and type $1$ with probability $1-x$.}
{In the absence of mutations, the ancestor of the sampled individual is fit if and only if there is at least one fit line in the ASG having type $0$ at time $\betata=T$. In the presence of mutations, determining the type of the ancestor is more involved. In \cite{LKBW15} the ancestral type distribution was obtained for the null environment using the line-counting process of a pruned version of the ASG, called \emph{the pruned lookdown ASG} (pLD-ASG). In Section \ref{s43} we
generalize this construction to incorporate the effect of the environment. The main feature of the pLD-ASG is that the type of the ancestor at time $t=0$ of the sampled individual at time $t=T$ is $0$ if and only if there is at least one line in the pLD-ASG at time $\betata=T$ that has type $0$ (see Lemma \ref{pruningdonnehtx}). Hence, $h_T(x)$ and $h_T^\omega(x)$ can be represented via the corresponding line-counting processes (which can be easily inferred from the description of the pLD-ASG given in Section \ref{s52}).}
The line-counting process of the annealed pLD-ASG, denoted by $L\coloneqq (L(\betata))_{\betata \geq 0}$, is a continuous-time Markov chain with values in $\mathbb{N}$ and generator matrix $Q^\mu\coloneqq (q^\mu(i,j))_{i,j\in{\mathbb N}b}$ given by
\betagin{equation}\lambdabel{kratespldasg}
q^\mu(i,j)\coloneqq \left\{\betagin{array}{ll}
i(i-1)+(i-1) \theta\nu_1 + \theta\nu_0 &\text{if $j=i-1$},\\
i(\sigmagma + \sigmagma_{i,1}) &\text{if $j=i+1$},\\
\binom{i}{k}\sigmagma_{i,k} &\text{if $j=i+k,\, i\geq k\geq 2$},\\
\theta\nu_0&\textrm{if $1 \leq j < i-1$},\\
-(i-1)(i+\theta)-i\sigmagma-\int_{(0,1)}(1-(1-y)^i)\mu(\dd y)& \textrm{if $j=i$}.\\
\end{array}\right.
\end{equation}
where $\sigmagma_{m,k}$ is defined in \eqref{smk}; all other entries are $0$.
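As a consistency check of \eqref{kratespldasg}, take $i=2$: the off-diagonal rates are $2+\theta\nu_1+\theta\nu_0$ (to $j=1$), $2(\sigmagma+\sigmagma_{2,1})$ (to $j=3$) and $\sigmagma_{2,2}$ (to $j=4$); since $2\sigmagma_{2,1}+\sigmagma_{2,2}=\int_{(0,1)}\left(2y(1-y)+y^2\right)\mu(\dd y)=\int_{(0,1)}\left(1-(1-y)^2\right)\mu(\dd y)$ and $\nu_0+\nu_1=1$, their sum equals $(2+\theta)+2\sigmagma+\int_{(0,1)}(1-(1-y)^2)\mu(\dd y)=-q^\mu(2,2)$, as it must.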
The pLD-ASG associated to $\omega\in\Db^\star$ is well-defined and almost surely contains finitely many lines at any time; we show this in Section \ref{s52}. The corresponding line-counting process $(L_T^\omega(\betata))_{\betata \geq 0}$, started at time $T$, is a continuous-time (inhomogeneous) Markov process with values in $\mathbb{N}$. It jumps from $i\in{\mathbb N}b$ to $j\in{\mathbb N}b \setminus \{i\}$ at rate $q^0(i,j)$, where $q^0$ is the matrix defined in \eqref{kratespldasg} with $\mu=0$, and in addition, at each time $\betata \geq 0$ such that $\Deltalta\omegaega(T-\betata)>0$, conditionally on $\{L_T^\omegaega(\betata-)=i\}$, $i\in{\mathbb N}b$, we have $L_T^\omegaega(\betata) \sigmam i+\bindist{i}{\Deltalta \omegaega(T-\betata)}$.
Now, we state the main result of this section describing the asymptotic behavior of $h_T(x)$ and $h_T^\omega(x)$.
\betagin{theorem}[Ancestral type distribution]\lambdabel{thm2.6} The following assertions hold:
\betagin{enumerate}
\item The process $L$ admits a unique stationary distribution $\eta_L$. Moreover, if $L(\infty)$ is a random variable distributed {according to} $\eta_L$, then $L(T)\xrightarrow[]{(d)}L(\infty)$ as $T\to\infty$. In particular, $h(x)\coloneqq \lim_{T\to\infty} h_T(x)$ is well-defined, and
\betagin{eqnarray}
h(x) = \sum_{n \geq 0} x(1-x)^{n} a_n, \lambdabel{represh(x)tailpldasg}
\end{eqnarray}
where the coefficients $a_n\deltafeq \mathbb{P}(L(\infty) > n)$, $n\in{\mathbb N}b_0$, satisfy the following recursion\footnote{In the case $\mu=0$, the recursion is known as Fearnhead's recursion.}
\betagin{equation}\lambdabel{fr}
(\sigmagma +\theta +n+1 )\, a_n= \sigmagma\, a_{n-1}+(\theta\nu_1+n+1)\,a_{n+1} + \frac{1}{n}\sum\limits_{j=1}^{n} \gamma_{n+1,j}\, (a_{j-1}-a_{j}),\quad n\in{\mathbb N}b,
\end{equation}
where $\gamma_{i,j}\coloneqq \sum_{k=i-j}^{j}\binom{j}{k}\sigmagma_{j,k}$ {if $1 \leq j<i\leq 2j$ and $\gamma_{i,j}\coloneqq 0$ otherwise.}
\item Assume that $\theta\nu_0 > 0$. For any $n \in {\mathbb N}b$, the distribution of $L_T^\omega(T-)$ conditionally on $\{ L_T^\omegaega(0-) = n \}$ has a limit distribution $\mu^\omega\in\Ms_1({\mathbb N}b)$ as $T\to\infty$, which does not depend on $n$. In particular, $h^{\omegaega}(x)\coloneqq \lim_{T\to\infty}h_T^\omega(x)$ is well-defined and
\betagin{equation}\lambdabel{pldasgatdtpsinftyq}
h^{\omegaega}(x)= 1- \sum_{n=1}^\infty \mu^\omega(\{n\})(1-x)^{n}.
\end{equation}
Moreover, for any $x \in [0,1]$ and $T > 0$,
\betagin{eqnarray}
\left | h^{\omegaega}(x) - h^{\omegaega}_T(x) \right | \leq 2e^{-\theta\nu_0 T}. \lambdabel{approxh(x)4}
\end{eqnarray}
\end{enumerate}
\end{theorem}
The proof of Theorem \ref{thm2.6}-(1) is given in Section \ref{s52}; part (2) is proved in Section \ref{s62}.
Theorem \ref{thmfa} extends part (2) to the case $\theta\nu_0=0$ for simple environments under additional conditions. A refinement of Theorem \ref{thm2.6}-(2) is given in Theorem \ref{thmf4} for simple environments under additional conditions.
In the case $\theta=0$, Theorem \ref{thm2.6} yields the following result about the boundary behavior of $X$.
\betagin{corollary}[Accessibility of the boundaries]\lambdabel{cor2.7}
If $\theta=0$, then for any $T>0$ and $x\in[0,1]$,
\[h_T(x)=\mathbb{E} [X(T) \mid X(0) = x].\]
Moreover, {conditionally} on $\{X(0)=x\}$, $X(T)$ converges almost surely as $T\to\infty$ to a Bernoulli random variable with parameter $h(x)$. In particular, the absorbing states $0$ and $1$ are both accessible from any $x\in(0,1)$.
\end{corollary}
\betagin{remark}
Corollary \ref{cor2.7} is not a direct consequence of \cite[Thm. 3.2]{CSW19}, whose statement does not cover SDEs with a diffusion term {(the term $\sqrt{2X(t)(1-X(t))}\dd B(t)$)}.
\end{remark}
We close this section with an application of our results to the comparison of the (isolated) effects of the environment and of (genic) selection. To this end, we fix a non-zero measure $\mu$ on $(0,1)$ satisfying \eqref{intmu} and we consider two models, both without mutations. The first model has selection parameter
\betagin{equation}\lambdabel{shape}
\sigmagma=\sigmagma_\mu\coloneqq \int_{(0,1)}y\mu(\dd y),
\end{equation}
and no environment (i.e. in \eqref{WFSDE} we take $S(t)\coloneqq \sigmagma_\mu t$). The second one has selection parameter $\sigmagma=0$ and an environment given by a subordinator with L\'evy measure $\mu$ (i.e. in \eqref{WFSDE} we take $S(t)\coloneqq J(t)$). We will use the superscript ``sel'' (resp. ``env'') to refer to the first (resp. second) model.
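For instance, if $\mu=c\,\deltalta_p$ with $c>0$ and $p\in(0,1)$, then $\sigmagma_\mu=cp$: the first model has constant genic selection of strength $cp$, whereas the second model has no genic selection but experiences environmental jumps of size $p$ at rate $c$, each of which moves the type-$0$ frequency from $x$ to $x+x(1-x)p$.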
For $n\in{\mathbb N}b$, set $\rho_n^{\rm{env}}\coloneqq {\mathbb P}b(L^{\rm{env}}(\infty)=n)$ and $\rho_n^{\rm{sel}}\coloneqq {\mathbb P}b(L^{\rm{sel}}(\infty)=n)$. Consider the probability generating functions
\[p^{\rm{env}}(z)\coloneqq \sum_{n=1}^\infty \rho_n^{\rm{env}}z^{n}\quad\textrm{and}\quad p^{\rm{sel}}(z)\coloneqq \sum_{n=1}^\infty \rho_n^{\rm{sel}}z^{n},\quad z\in[0,1].\]
Note that $p^{\rm{env}}(z)=1-h^{\rm{env}}(1-z)$ and $p^{\rm{sel}}(z)=1-h^{\rm{sel}}(1-z)$.
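Indeed, by \eqref{represh(x)tailpldasg} and the elementary identity $x\sum_{n\geq 0}(1-x)^{n}\mathbb{P}(L(\infty)>n)=\mathbb{E}\left[1-(1-x)^{L(\infty)}\right]$, we have $h(x)=1-\mathbb{E}\left[(1-x)^{L(\infty)}\right]=1-p(1-x)$, with $p$ the probability generating function of $L(\infty)$; the same computation applies to both models.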
\betagin{proposition}\lambdabel{comparison}
For any non-zero measure $\mu$ on $(0,1)$ satisfying \eqref{intmu} we have
\[\rho_1^{\rm{env}}>\rho_1^{\rm{sel}}=\frac{\sigmagma_\mu}{e^{\sigmagma_\mu}-1}\quad\textrm{and}\quad p^{\rm{env}}(z)\leq \frac{\rho_1^{\rm{env}}}{\rho_1^{\rm{sel}}}\, p^{\rm{sel}}(z)=\rho_1^{\rm{env}}\,\left(\frac{e^{\sigmagma_\mu z}-1}{\sigmagma_\mu}\right),\quad z\in[0,1].\]
In particular, there is $x_c\in(0,1)$ such that, for $x\in[x_c,1)$,
\[h^{\rm{env}}(x)={\mathbb P}b\left(\lim_{t\to\infty}X^{\rm{env}}(t)=1 \mid X^{\rm{env}}(0)=x\right)<{\mathbb P}b\left(\lim_{t\to\infty}X^{\rm{sel}}(t)=1 \mid X^{\rm{sel}}(0)=x\right)=h^{\rm{sel}}(x).\]
\end{proposition}
\betagin{remark}
As a consequence of Proposition~\ref{comparison} one recovers the classical result of Kimura \cite{Ki62}
\[h^{\rm{sel}}(x)=\frac{1-e^{-\sigmagma_\mu x}}{1-e^{-\sigmagma_\mu}},\quad x\in[0,1].\]
\end{remark}
\betagin{remark}
Consider a Wright--Fisher diffusion with no mutations and selection parameter $\sigmagma$, evolving in an environment with L\'evy measure $\mu$. The quantity $\sigmagma_\mu$ in \eqref{shape} corresponds to the quantity $\alphapha_{\mathfrak{s}}$ in \cite{CSW19}. As shown there, $\alphapha_{\mathfrak{s}}$ is not sufficient to fully describe the strength of the environment; one also needs to know the \emph{shape of rare selection}, which is defined as $\alphapha^*\coloneqq\int_{(0,1)}\log(1+y)\mu(\dd y)/\alphapha_{\mathfrak{s}}$. The joint action of weak selection and the environment is then described by the quantity $\alphapha_{\rm{eff}}\coloneqq \sigmagma+\alphapha_{\mathfrak{s}}\alphapha^*$, which is called the \emph{effective strength of selection}. The main result in \cite{CSW19} establishes that both boundaries are accessible if and only if $\alphapha_{\rm{eff}}$ is smaller than a quantity $\betata^*$ coding for neutral reproductions ($\betata^*=\infty$ in our case).
\end{remark}
The proofs of Corollary \ref{cor2.7} and Proposition~\ref{comparison} are given in Section \ref{s52}.
\section{Moran models and Wright--Fisher processes}\lambdabel{S3}
This section is devoted to the proofs of Theorem \ref{thm2.1} and Theorem \ref{thm2.2} and other related results.
\subsection{Results related to Section \ref{s21}: continuity with respect to the environment}\lambdabel{s31}
\subsection*{Graphical representation}
We start by making more precise the description of the graphical representation of the Moran model as an IPS. This will allow us to decouple the randomness of the model coming from the initial type configuration, the one coming from mutations and reproductions, and the one coming from the environment. Non-environmental events are as usual encoded via a family of independent Poisson processes
\[\Lambda\coloneqq \{\lambdambda_{i}^{0},\lambdambda_{i}^{1},\{\lambdambda_{i,j}^{\vartriangle},\lambdambda_{i,j}^{\blacktriangle}\}_{j\in{[N] \setminus \{i\}}} \}_{i\in[N]},\]
where: (a) for each $i,j\in [N]$ with $i\neq j$, $(\lambdambda_{i,j}^{\vartriangle}(t))_{t\in{\mathbb R}b}$ and $(\lambdambda_{i,j}^{\blacktriangle}(t))_{t\in{\mathbb R}b}$ are Poisson processes with rates $\sigmagma_N/N$ and $1/N$, respectively, and (b) for each $i\in [N]$, $(\lambdambda_{i}^{0}(t))_{t\in{\mathbb R}b}$ and $(\lambdambda_{i}^{1}(t))_{t\in{\mathbb R}b}$ are Poisson processes with rates $\theta_N\nu_0$ and $\theta_N\nu_1$, respectively. We call $\Lambda$ the \textit{basic background}. The environment introduces a new independent source of randomness into the model, that we describe via the collection
\[\Sigma\coloneqq \{(U_i(t))_{i\in[N], t\in{\mathbb R}b}, (\tau_A^{}(t,\cdot))_{A\subset[N], t\in{\mathbb R}b}\},\]
where: (c) $(U_i(t))_{i\in[N], t\in{\mathbb R}b}$ is an $[N] \times {\mathbb R}b$-indexed family of i.i.d. random variables with $U_i(t)$ uniformly distributed on $[0,1]$, and (d) $(\tau_A(t,\cdot))_{A\subset[N], t\in{\mathbb R}b}$ is a family of independent random variables with $\tau_A(t,\cdot)$ uniformly distributed on the set of injections from $A$ to $[N]$. We call $\Sigma$ the \textit{environmental background}. We assume that the basic and environmental backgrounds are independent and we call $(\Lambda,\Sigma)$ the \textit{background}.
Recall that in the graphical representation individuals are represented by horizontal lines at levels $i\in [N]$ {(see Fig. \ref{particlepicture})}. The random appearance of selective and neutral arrows, circles and crosses is prescribed by the background as follows. At the arrival times of $\lambdambda_{i,j}^{\vartriangle}$ (resp. $\lambdambda_{i,j}^{\blacktriangle}$), we draw open (resp. filled) head arrows from level $i$ to level $j$. At the arrival times of $\lambdambda_{i}^{0}$ (resp. $\lambdambda_{i}^{1}$), we draw an open circle (resp. a cross) at level $i$. Now, given an environment $\zeta\coloneqq (t_k, p_k)_{k \in I}$ satisfying \eqref{summableassumption}, we define, for each $k\in I$,
\[I_{\zeta}(k)\coloneqq \{i\in[N]:U_i(t_k)\leq p_k\}\quad\textrm{and}\quad n_{\zeta}(k)\coloneqq |I_{\zeta}(k)|,\]
and we draw, at time $t_k$, for each $i\in I_{\zeta}(k)$ an open head arrow from level $i$ to level $\tau_{I_{\zeta}(k)}^{}(t_k,i)$.
\subsection*{Continuity with respect to the environment}
Now, we embark on the proof of Theorem \ref{thm2.1}, which states the continuity of the type composition in a Moran model with respect to the environment. The paths of the fit-counting process are considered as elements of $\Db_{0,T}$, which is endowed with the $J_1$-Skorokhod topology, i.e. the topology induced by the Billingsley {metric} $d_T^0$ defined in {\eqref{defdt0}}. Recall also that the restriction of an environment to $[0,T]$ is described by means of a function in $\Db_T^\star$ (see \eqref{dbs}), which is endowed with the topology induced by the metric $d_T^\star$ defined in {\eqref{defdtstar}}.
Let us denote by $\mu_N(\omega)$ the law of $(Z_N^\omega(t))_{t\in[0,T]}$ (recall that $Z_N^\omega(t)$ is the number of fit individuals at time $t$ in a Moran population of size $N$ subject to environment $\omega$). Theorem \ref{thm2.1} states the continuity of the mapping $\omega\mapsto \mu_N(\omega)$, where the set of probability measures on $\Db_{0,T}$ is equipped with the topology of weak convergence of measures. We will use that the topology of weak convergence of probability measures on a complete and separable metric space $(E,d)$ is induced by the metric $\varrho_E$ {defined in \eqref{defblm}.}
First, we get rid of the small jumps of the environment. To this end, we introduce the following notation. For $\deltalta>0$ and $\omega\in\Db_T^\star$, we define $\omega^\deltalta, \omega_\deltalta\in \Db_T^\star$ via
\betagin{align}
\omega^\deltalta(t)\coloneqq \sum_{u\in[0,t]:\Deltalta \omega(u)\geq \deltalta} \Deltalta \omega(u)\quad\textrm{and}\quad \omega_\deltalta(t)\coloneqq \sum_{u\in[0,t]:\Deltalta \omega(u)< \deltalta} \Deltalta \omega(u).\lambdabel{defomdelta}
\end{align}
Clearly, $\omega^\deltalta$ is simple and $\omega=\omega^\deltalta+\omega_\deltalta$. Moreover, $\omega_\deltalta \to 0$ pointwise as $\deltalta\to 0$, and hence for any $t\in[0,T]$,
\[d_t^\star(\omega,\omega^\deltalta)\leq \sum_{u\in[0,T]}\lvert \Deltalta \omega(u)-\Deltalta \omega^\deltalta(u)\rvert= \omega_\deltalta(T) \xrightarrow[\deltalta\to 0]{}0.\]
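For example, if $\omega$ has jumps of sizes $2^{-k}$, $k\in{\mathbb N}b$, at distinct times in $(0,T]$, then for $\deltalta=2^{-m}$ the environment $\omega^\deltalta$ retains only the $m$ largest jumps and $\omega_\deltalta(T)=\sum_{k>m}2^{-k}=2^{-m}$, which indeed vanishes as $\deltalta\to 0$.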
In addition, for $\omega\in\Db_T^\star$, $n\in{\mathbb N}b$ and $\vec{r}\coloneqq (r_i)_{i\in[n]}\in[0,T]^n$, we denote by $\mu_N^{\vec{r}}(\omega)$ the law of $(Z_N^{\omega}(r_i))_{i\in[n]}$, where $[0,N]^n$ is equipped with the distance $d_1$ defined via \betagin{equation}\lambdabel{defd1}
d_1( (x_i)_{i\in[n]}, (y_i)_{i\in[n]}) \coloneqq \sum_{i\in[n]}|x_i - y_i|.
\end{equation}
\betagin{proposition}\lambdabel{sjumps}
Let $\omega\in\Db_T^\star$ and assume that, for any $\deltalta>0$, we have $Z_N^{\omega^\deltalta}(0)=Z_N^{\omega}(0)$. Then
\betagin{equation}\lambdabel{onedimbound}
\varrho_{[0,N]^n}^{}(\mu_N^{\vec{r}}(\omega^\deltalta),\mu_N^{\vec{r}}(\omega))\leq nN\,\omega_\deltalta(r_*)e^{\sigmagma_N r_*+ \omega(r_*)}, \ \forall \vec{r}\in[0,T]^n, n\in{\mathbb N}b,
\end{equation}
where $r_*\coloneqq \max_{i\in[n]}r_i$. Moreover,
\betagin{equation}\lambdabel{unibound}
\varrho_{\Db_{0,T}}^{}(\mu_N(\omega^\deltalta),\mu_N(\omega))\leq N\omega_\deltalta(T)e^{(1+\sigmagma_N) T+\omega(T)}.
\end{equation}
In particular,
\[(Z_N^{\omega^\deltalta}(t))_{t\in[0,T]}\xrightarrow[\deltalta\to 0]{(d)}(Z_N^{\omega}( t))_{t\in[0,T]}.\]
\end{proposition}
\betagin{proof}
For $\deltalta>0$, we couple in $[0,T]$ a Moran model with parameters $(\sigmagma_N,\theta_N,\nu_0,\nu_1)$ and environment $\omega$ to a Moran model with parameters $(\sigmagma_N,\theta_N,\nu_0,\nu_1)$ and environment $\omega^\deltalta$ (both of size $N$) by using: (1) the same initial type configuration, (2) the same basic background, and (3) the same environmental background.
{For any $t\in[0,T]$ and $a,b\in\{0,1\}$, we denote by $Y_N^{a,b}(t)$ the number of individuals that at time $t$ meet the following two requirements: 1) having type $a$ under the environment $\omega$, 2) having type $b$ under the environment $\omega^\deltalta$.} Clearly, we have
\[\lvert Z_N^{\omega^{\deltalta}}(t)-Z_N^{\omega}(t)\rvert=\lvert Y_N^{1,0}(t) - Y_N^{0,1}(t) \rvert\leq Y_N^{1,0}(t) + Y_N^{0,1}(t)\coloneqq Y_N^{\neq}(t).\]
Note that $Y_N^{\neq}(t)$ is the number of individuals that have a different type at time $t$ under $\omega$ and $\omega^\deltalta$. {In particular, we have $Y_N^{\neq}(t) \leq N$ a.s.} Let us assume that at time $t$ a graphical element arises in the basic background, {i.e. $t$ is an arrival time of one of the Poisson processes in the family $\Lambda$.} If the graphical element corresponds to a mutation event, then $Y_N^{\neq}(t)\leq Y_N^{\neq}(t-)$. If the graphical element is a neutral reproduction, we have
\[{\mathbb E}b\left[Y_N^{\neq}(t)\mid Y_N^{\neq}(t-)\right]=Y_N^{\neq}(t-)+\frac{1}{N}Y_N^{\neq}(t-)(N-Y_N^{\neq}(t-))-\frac{1}{N}(N-Y_N^{\neq}(t-))Y_N^{\neq}(t-)=Y_N^{\neq}(t-).\]
{If the graphical element corresponds to a selective event, then $Y_N^{\neq}(t)$ can increase by $1$ only if the individual at the tail of the arrow has a different type at time $t$ under $\omega$ and $\omega^\deltalta$.} We thus have \[{\mathbb E}b\left[Y_N^{\neq}(t)\mid Y_N^{\neq}(t-)\right]\leq\left(1+\frac1N\right)Y_N^{\neq}(t-).\]
Now, let $0\leq s<t\leq T$ and assume that there are {neither jumps of $\omega^\deltalta$ nor} selective events in $(s,t)$. In particular, in $(s,t)$ only the population driven by $\omega$ is affected by the environment. Moreover, since {neutral reproduction and mutation} events do not increase the expected value of $Y_N^{\neq}$, we obtain \[{\mathbb E}b\left[Y_N^{\neq}(t-)\mid Y_N^{\neq}(s)\right]\leq Y_N^{\neq}(s) +N\sum_{u\in(s,t)}\Deltalta \omega(u).\]
In addition, if at time $t$ the environment $\omega^\deltalta$ jumps (there are only finitely many of these jumps), then \[{\mathbb E}b\left[Y_N^{\neq}(t)\mid Y_N^{\neq}(t-)\right]\leq Y_N^{\neq}(t-)(1+\Deltalta \omega (t)).\]
Let {$0\leq t_1<\cdots<t_m\leq T$} be the jump times of $\omega^\deltalta$. From the previous discussion, we obtain
\betagin{equation}\lambdabel{auxbound}
{\mathbb E}b\left[Y_N^{\neq}(t_{i+1})\mid Y_N^{\neq}(t_i)\right]\leq {\mathbb E}b\left[\left(1+\frac1N\right)^{K_i} \right]\left(Y_N^{\neq}(t_i)+N\varepsilonsilon_i(\deltalta)\right)(1+\Deltalta \omega(t_{i+1})),
\end{equation}
where $\varepsilonsilon_i(\deltalta)\coloneqq \sum_{u\in(t_i,t_{i+1})}\Deltalta \omega(u)$ and $K_i$ is the number of selective events in $(t_i,t_{i+1})$. Note that $K_i$ has a Poisson distribution with parameter $N\sigmagma_N(t_{i+1}-t_i)$. Hence,
\[{\mathbb E}b\left[Y_N^{\neq}({t_{i+1}})\mid Y_N^{\neq}(t_i)\right]\leq e^{\sigmagma_N(t_{i+1}-t_i)}\left(Y_N^{\neq}(t_i)+N\varepsilonsilon_i(\deltalta)\right)(1+\Deltalta \omega(t_{i+1})).\]
Iterating this formula and using that $Y_N^{\neq}(0)=0$ yields
\betagin{equation}\lambdabel{boundMt}
{\mathbb E}b\left[Y_N^{\neq}(t)\right]\leq e^{\sigmagma_N t} N\omega_\deltalta(t)\prod_{t_i\leq t}(1+\Deltalta \omega(t_i))\leq N\,\omega_\deltalta(t)\,e^{\sigmagma_N t+\sum_{u\in[0,t]}\Deltalta \omega(u)}.
\end{equation}
Recall the definition of the space $\textrm{BL}(E)$ from Appendix \ref{A2} {and that $[0,N]^n$ is equipped with the distance $d_1$ defined in \eqref{defd1}.} For any $n \geq 1$ and $F\in \textrm{BL}([0,N]^n)$, we have
\[ \left\lvert \int F \dd\mu_N^{\vec{r}}(\omega^\deltalta)- \int F \dd\mu_N^{\vec{r}}(\omega)\right\rvert = \left\lvert \mathbb{E} \left [ F((Z_N^{\omega^\deltalta}(r_j))_{j\in[n]}) \right ] - \mathbb{E} \left [ F((Z_N^\omega(r_j))_{j\in[n]}) \right ] \right\rvert. \]
Hence, if $\lVert F\rVert_{\textrm{BL}}\leq 1$ {(see \eqref{defnormbl} for the definition of $\lVert \cdot\rVert_{\textrm{BL}}$)} and we couple {$Z_N^{\omega^{\deltalta}}(t)$} and $Z_N^\omega(t)$ as before, we get that
\betagin{align*}
\left\lvert \mathbb{E} \left [ F((Z_N^{\omega^\deltalta}(r_j))_{j\in[n]}) \right ] - \mathbb{E} \left [ F((Z_N^\omega(r_j))_{j\in[n]}) \right ] \right\rvert & \leq \left\lvert \mathbb{E} \left [ d_1((Z_N^{\omega^\deltalta}(r_j))_{j\in[n]}, (Z_N^\omega(r_j))_{j\in[n]}) \right ] \right\rvert \\
& = \mathbb{E} \left [ \sum_{j\in[n]}|Y_N^{\neq}(r_j)| \right ] \leq \sum_{j\in[n]} N\omega_\deltalta(r_j)e^{\sigmagma_N r_j+\sum_{u\in[0,r_j]}\Deltalta \omega(u)},
\end{align*}
where the last bound comes from \eqref{boundMt} applied at each $r_j$. Taking the supremum over all $F\in \textrm{BL}([0,N]^n)$ with $\lVert F\rVert_{\textrm{BL}}\leq 1$ and using the definition of the distance $\varrho_{[0,N]^n}$ in \eqref{defblm} we get \eqref{onedimbound}.
Now, define $Y_N^*(t)\coloneqq \sup_{u\in[0,t]}Y_N^{\neq}(u)$. If at time $t$ a neutral event occurs, then
\[{\mathbb E}b[Y_N^*(t)\mid Y_N^*(t-)]\leq\left(1+\frac1N\right)Y_N^*(t-).\]
Other events can be treated as before, leading to \eqref{auxbound} with $K_i$ this time being the number of selective and neutral events in $(t_i,t_{i+1})$. Hence, Eq. \eqref{unibound} follows in the same way as \eqref{onedimbound}. The convergence of $Z_N^{\omega^\deltalta}$ towards $Z_N^\omega$ is a direct consequence of \eqref{unibound}.
\end{proof}
\betagin{proposition}\lambdabel{fjumps}
Let $\omega\in\Db_T^\star$ and $\{\omega_k\}_{k\in{\mathbb N}b}\subset \Db_T^\star$ be such that $d_T^\star(\omega_k,\omega)\to 0$ as $k\to\infty$. If $\omega$ is simple and, for any $k\in{\mathbb N}b$, $Z_N^\omega(0)=Z_N^{\omega_k}(0)$, then
\betagin{equation}\lambdabel{convfinitejumps}
(Z_N^{\omega_k}(t))_{t\in[0,T]}\xrightarrow[k\to \infty]{(d)}(Z_N^\omega(t))_{t\in[0,T]}.
\end{equation}
\end{proposition}
\betagin{proof}
The proof consists of two parts. In the first part, we construct a time deformation $\lambdambda_k\in\Cs_T^\uparrow$ with suitable properties. In the second part, we compare $Z_N^{\omega_k}\circ\lambdambda_k$ and $Z_N^\omega$ under an appropriate coupling of the underlying Moran models.
\textbf{Part 1:}
We assume, without loss of generality, that $d_T^\star(\omega_k,\omega)>0$, for all $k\in{\mathbb N}b$. Set $\varepsilonsilon_k\coloneqq 2d_T^\star(\omega_k,\omega)$, so that $d_T^\star(\omega_k,\omega)< \varepsilonsilon_k$. By definition of the metric $d_T^\star$ in \eqref{defdtstar}, there is $\varphi_k\in \Cs_T^\uparrow$ such that
\[\lVert \varphi_{k}\rVert_T^0\leq \varepsilonsilon_k\quad\textrm{and}\quad\sum_{u\in[0,T]}|\Deltalta \omega(u)-\Deltalta (\omega_k\circ\varphi_{k})(u)|\leq \varepsilonsilon_k,\]
where $\lVert \cdot \rVert_T^0$ is defined in \eqref{normbij}. Denote by $r_1<\cdots<r_n$ {the consecutive jump times} of $\omega$ in $[0,T]$. We assume without loss of generality that $0<r_1\leq r_n<T$. The case where $\omega$ jumps at $T$ can be reduced to the previous case, by extending $\omegaega_k$, $k\in{\mathbb N}b$, and $\omegaega$ to $[0,T+\varepsilon]$ as constants in $[T,T+\varepsilon]$. Set $\gamma_k\coloneqq T\,\sqrt{e^{\varepsilonsilon_k}-1}$. In the remainder of the proof we assume that $k$ is sufficiently large, so that $\gamma_k\leq \min_{i\in[n]_0}(r_{i+1}-r_{i})/3$, where $r_0\coloneqq 0$ and $r_{n+1}\coloneqq T$. This condition ensures that the intervals $I_i^k\coloneqq [r_i-\gamma_k,r_i+\gamma_k]$, $i\in[n]$, are disjoint and contained in $[0,T]$. Now, define $\lambdambda_k:[0,T]\to[0,T]$ via
\betagin{itemize}
\item[(i)] For $u\notin \cup_{i=1}^nI_i^k$: $\lambdambda_k(u)\coloneqq u$.
\item[(ii)] For $u\in [r_i-\gamma_k,r_i]:$ $\lambdambda_k(u)\coloneqq \varphi_k(r_i)+m_i(u-r_i)$, where $m_i\coloneqq (\varphi_k(r_i)-r_i+\gamma_k)/\gamma_k.$
\item[(iii)] For $u\in (r_i,r_i+\gamma_k]:$ $\lambdambda_k(u)\coloneqq \varphi_k(r_i)+\bar{m}_i(u-r_i)$, where $\bar{m}_i\coloneqq (r_i+\gamma_k-\varphi_k(r_i))/\gamma_k.$
\end{itemize}
For $k$ sufficiently large, so that $\varepsilonsilon_k < \log 2$, we can infer from $\lVert \varphi_{k}\rVert_T^0\leq \varepsilonsilon_k$ and from $\gamma_k=T\,\sqrt{e^{\varepsilonsilon_k}-1}$ that $m_i$ and $\bar{m}_i$ are positive. It is then straightforward to check that $\lambdambda_k\in\Cs_T^\uparrow$,
$\lambdambda_k(I_i^k)=I_i^k$, $i\in[n]$, and that
\[\sum_{u\in[0,T]}|\Deltalta \omega(u)-\Deltalta \bar{\omegaega}_k(u)|\leq \varepsilonsilon_k,\]
where {$\bar{\omegaega}_k\coloneqq \omega_k\circ\lambdambda_k$.} Moreover, since $\lVert \varphi_{k}\rVert_T^0\leq \varepsilonsilon_k$, we infer that $\varphi_k(r_i)\in[e^{-\varepsilonsilon_k}r_i,e^{\varepsilonsilon_k}r_i]$. It follows that, for $k$ sufficiently large,
\[1-2\sqrt{e^{\varepsilonsilon_k}-1} \leq m_i \leq 1+2\sqrt{e^{\varepsilonsilon_k}-1},\quad i\in[n],\]
and the same holds for $\bar{m}_i$. Note that we can write $\lambdambda_k(t) = \int_0^t p_k(u) \dd u$, with $p_k:[0,T] \to \mathbb{R}$ taking only the values $(m_i)_{i\in[n]}$, $(\bar{m}_i)_{i\in[n]}$, and $1$. In particular, we have $|p_k(u)-1|\leq 2\sqrt{e^{\varepsilonsilon_k}-1}$ for all $u\in[0,T]$. Thus, for any $s,t\in[0,T]$ with $s\neq t$, the slope $(\lambdambda_k(s)-\lambdambda_k(t))/(s-t)$ belongs to $[1-2\sqrt{e^{\varepsilonsilon_k}-1}, 1+2\sqrt{e^{\varepsilonsilon_k}-1}]$. Therefore, for $k$ sufficiently large, we have
\[\frac{\lambdambda_k(s)-\lambdambda_k(t)}{s-t}, \,\frac{s-t}{\lambdambda_k(s)-\lambdambda_k(t)}\leq 1+3\sqrt{e^{\varepsilonsilon_k}-1}.\]
Hence, using that $\log(1+x)\leq x$ for $x>-1$, {we obtain for $k$ sufficiently large
\betagin{equation}\lambdabel{lki}
\lVert \lambdambda_k\rVert_T^0\leq 3\sqrt{e^{\varepsilonsilon_k}-1}.
\end{equation}}
\textbf{Part 2:}
We couple in $[0,T]$ a Moran model with parameters $(\sigmagma_N,\theta_N,\nu_0,\nu_1)$ and environment $\omega$ to a Moran model with parameters $(\sigmagma_N,\theta_N,\nu_0,\nu_1)$ and environment $\omega_k$ (both of size $N$) by using: (1) the same initial type configuration, (2) the same basic background, and (3) in the second population, the environmental background of the first one time-changed by $\lambdambda_k^{-1}$. {Under this coupling and by construction of the function $\lambdambda_k$, the Moran model associated to $\omega$ and the Moran model associated to $\omega_k$ (time-changed by $\lambdambda_k$) experience the same basic events outside the time intervals $I_i^k$. Moreover, at the times $r_i$,} the success of simultaneous environmental reproductions is decided according to the same uniform random variables.
For $t\in[0,T]$, let $Y_N^{\neq}(t)$ be the number of individuals that have a different type at time $t$ for $\omega$ and at time $\lambdambda_k(t)$ for $\omega_k$, and set $Y_N^*(t)\coloneqq \sup_{u\in[0,t]}Y_N^{\neq}(u)$.
Consider the event $E_{k}\coloneqq \{\textrm{there are no basic events in $\cup_{i\in[n]}I_i^k$}\}$, and note that
\betagin{equation}\lambdabel{ekcc}
{\mathbb P}b(E_{k}^c)\leq n\left(1-e^{-2N(1+\sigmagma_N+\theta_N)\gamma_k}\right).
\end{equation}
Moreover, on the event $E_{k}$, only the population driven by $\omega_k$ can change in $(r_i,r_i+\gamma_k]$, and this can only be due to environmental events. Hence,
\betagin{equation}\lambdabel{eqc1}
{\mathbb E}b[Y_N^*(r_i+\gamma_k)1_{E_k}]\leq {\mathbb E}b[Y_N^*(r_i)1_{E_{k}}] +N\sum_{u\in(r_i,r_i+\gamma_k]}\Deltalta\bar{\omegaega}_k(u).
\end{equation}
A similar argument yields
\betagin{equation}\lambdabel{eqc2}
{\mathbb E}b[Y_N^*(r_{i+1}-)\,1_{E_{k}}]\leq {\mathbb E}b[Y_N^*(r_{i+1}-\gamma_k)1_{E_{k}}]+ N\sum_{u\in(r_{i+1}-\gamma_k,r_{i+1})}\Deltalta\bar{\omegaega}_k(u).
\end{equation}
Since in the interval $(r_i+\gamma_k,r_{i+1}-\gamma_k]$ there are no simultaneous jumps of the two environments, we can proceed as in the proof of Proposition \ref{sjumps} to obtain
\betagin{equation}\lambdabel{eqc3}
{\mathbb E}b\left[Y_N^*(r_{i+1}-\gamma_k)1_{E_{k}}\right]\leq e^{(1+\sigmagma_N)(r_{i+1}-r_{i})}\left[{\mathbb E}b[Y_N^*(r_i+\gamma_k)1_{E_{k}}]+N\sum_{u\in (r_i+\gamma_k,r_{i+1}-\gamma_k]}\Deltalta\bar{\omegaega}_k(u)\right].
\end{equation}
Moreover, at time $r_{i+1}$, there are two possible contributions to take into account: (i) the contribution of selective arrows arising simultaneously in both environments, and (ii) the contribution of selective arrows arising only in the environment with the larger jump. This leads to
\betagin{equation}\lambdabel{eqc4}
{\mathbb E}b\left[Y_N^*(r_{i+1})\,1_{E_{k}}\right]\leq {\mathbb E}b[Y_N^*(r_{i+1}-)\,1_{E_{k}}](1+\Deltalta \omega(r_{i+1})\wedge \Deltalta \bar{\omegaega}_k(r_{i+1}))+ N|\Deltalta \omega(r_{i+1})-\Deltalta \bar{\omegaega}_k(r_{i+1})|.
\end{equation}
Using \eqref{eqc1}, \eqref{eqc2}, \eqref{eqc3} and \eqref{eqc4}, we obtain
\[{\mathbb E}b[Y_N^*(r_{i+1})\,1_{E_k}]\leq e^{(1+\sigmagma_N)(r_{i+1}-r_{i})}\left[{\mathbb E}b[Y_N^*(r_{i})\,1_{E_k}]+N\sum_{u\in
(r_i,r_{i+1}]}|\Deltalta \omega(u)-\Deltalta \bar{\omegaega}_k(u)|\right](1+\Deltalta \omega(r_{i+1})).\]
Iterating this inequality, using that $Y_N^*(0)=0$, and adding the contribution of the interval $(r_n+\gamma_k,T]$, we obtain
\betagin{equation}\lambdabel{ekc}
{\mathbb E}b\left[Y_N^*(T)\,1_{E_{k}}\right]\leq N\varepsilonsilon_k e^{(1+\sigmagma_N)T+\sum_{u\in(0,T]}\Deltalta \omega(u)}.
\end{equation}
Using \eqref{ekcc}, \eqref{ekc}, \eqref{lki}, and the definition of $d_T^0$ in \eqref{defdt0} we obtain for $k$ sufficiently large
\[{\mathbb E}b\left[d_T^0(Z_N^\omega,Z_N^{\omega_k})\right] \leq 2nN\left(1-e^{-2N(1+\sigmagma_N+\theta_N)\gamma_k}\right)+ {3} \sqrt{e^{\varepsilonsilon_k}-1}\vee\left(N\varepsilonsilon_k\, e^{(1+\sigmagma_N)T+\sum_{u\in(0,T]}\Deltalta \omega(u)} \right).\]
{Since $\gamma_k\to 0$ and $\varepsilonsilon_k\to 0$ as $k\to\infty$, the result follows.}
\end{proof}
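For readers who wish to experiment with the construction in Part 1 of the preceding proof, the following Python sketch (an illustration only, not part of the argument) builds the piecewise-linear deformation $\lambdambda_k$ from a list of jump times, toy values standing in for $\varphi_k(r_i)$, and a half-width $\gamma_k$, and checks numerically that it fixes the endpoints, is increasing, maps each $r_i$ to $\varphi_k(r_i)$, and has slopes close to $1$. All numerical inputs are assumptions chosen for illustration.
\begin{verbatim}
import numpy as np

def make_lambda(r, phi_r, gamma):
    """Piecewise-linear time deformation lambda_k of Part 1 of the proof.

    r     : increasing jump times of omega in (0, T)
    phi_r : values standing in for phi_k(r_i); must satisfy
            |phi_r[i] - r[i]| < gamma so that the slopes are positive
    gamma : half-width gamma_k of the intervals I_i^k
    """
    r = np.asarray(r, dtype=float)
    phi_r = np.asarray(phi_r, dtype=float)

    def lam(u):
        u = np.atleast_1d(np.asarray(u, dtype=float))
        out = u.copy()                              # identity outside the I_i^k
        for ri, pi in zip(r, phi_r):
            m = (pi - ri + gamma) / gamma           # slope on [r_i-gamma, r_i]
            mb = (ri + gamma - pi) / gamma          # slope on (r_i, r_i+gamma]
            left = (u >= ri - gamma) & (u <= ri)
            right = (u > ri) & (u <= ri + gamma)
            out[left] = pi + m * (u[left] - ri)
            out[right] = pi + mb * (u[right] - ri)
        return out

    return lam

# toy data (assumptions for illustration only)
T, gamma = 1.0, 0.05
r = np.array([0.25, 0.55, 0.80])          # jump times of omega
phi_r = r * 1.02                           # stands in for phi_k(r_i)
lam = make_lambda(r, phi_r, gamma)

u = np.linspace(0.0, T, 2001)
v = lam(u)
slopes = np.diff(v) / np.diff(u)
print("endpoints fixed:", v[0] == 0.0 and v[-1] == T)
print("strictly increasing:", bool(np.all(slopes > 0)))
print("lambda_k(r_i) = phi_k(r_i):", bool(np.allclose(lam(r), phi_r)))
print("slope range:", slopes.min(), slopes.max())   # stays close to 1
\end{verbatim}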
\betagin{proof}[Proof of Theorem \ref{thm2.1} (Continuity)]
If $\omega$ has a finite number of jumps, the result follows directly from Proposition \ref{fjumps}. In the general case, note that for any $\deltalta>0$, we have
\betagin{equation}\lambdabel{trii}
\Bl{\mu_N(\omega_k)}{\mu_N(\omega)}\leq \Bl{\mu_N(\omega_k)}{\mu_N(\omega_{k}^{\deltalta})}+ \Bl{\mu_N(\omega_{k}^{\deltalta})}{\mu_N(\omega^\deltalta)}+\Bl{\mu_N(\omega^{\deltalta})}{\mu_N(\omega)},
\end{equation}
where $\omega^{\deltalta}$ is as in \eqref{defomdelta} and, similarly, $\omega_{k}^{\deltalta}(t)\coloneqq \sum_{u\in[0,t]: \Deltalta\omega_k(u)\geq \deltalta}\Deltalta \omega_k(u)$. Recall the definition of $d_T^\star$ in \eqref{defdtstar}. We claim that, for any $\deltalta\in A_\omega\coloneqq \{d>0:\Deltalta \omega(u)\neq d \textrm{ for any $u\in[0,T]$}\}$, we have
{\betagin{equation}\tag{Claim 1}\lambdabel{claim-cont}
d_T^\star(\omega_{k}^{\deltalta},\omega^\deltalta)\xrightarrow[k\to\infty]{}0.
\end{equation}}
Assume {that \eqref{claim-cont}} is true and let $\deltalta\in A_\omega$. Note that for any $\lambdambda\in\Cs^\uparrow_T$, we have
\betagin{align*}
\omega_{k,\deltalta}(T)&\coloneqq \sum_{u\in[0,T]: \Deltalta\omega_k(u)< \deltalta}\Deltalta \omega_k(u)=\omega_k(T)-\omega_{k}^{\deltalta}(T)=\omega_k(\lambdambda(T))-\omega_{k}^{\deltalta}(\lambdambda(T))\\
&\leq |\omega_k(\lambdambda(T))-\omega(T)|+|\omega(T)-\omega^\deltalta(T)|+|\omega^\deltalta(T)-\omega_{k}^{\deltalta}(\lambdambda(T))|\\
&\leq d_T^0(\omega,\omega_k)+ \omega_\deltalta(T)+d_T^0(\omega_{k}^{\deltalta},\omega^\deltalta)\leq d_T^\star(\omega,\omega_k)+ \omega_\deltalta(T)+d_T^\star(\omega_{k}^{\deltalta},\omega^\deltalta),
\end{align*}
where we used the definition of $d_T^0$ in \eqref{defdt0} and then Lemma \ref{zero-star}. Combining this {with \eqref{claim-cont}} and Proposition \ref{sjumps}, we obtain
\betagin{equation*}
\limsup_{k\to\infty}\Bl{\mu_N(\omega_k)}{\mu_N(\omega_{k}^{\deltalta})}\leq N\omega_\deltalta(T) e^{(1+\sigmagma_N)T+\omega(T)}.
\end{equation*}
In addition, Proposition \ref{fjumps} together with {\eqref{claim-cont}} implies that
$\limsup_{k\to\infty}\Bl{\mu_N(\omega_{k}^{\deltalta})}{\mu_N(\omega^{\deltalta})}=0$.
Hence, letting $k\to\infty$ in \eqref{trii} and using Proposition \ref{sjumps}, we obtain
\[\limsup_{k\to\infty}\Bl{\mu_N(\omega_k)}{\mu_N(\omega)}\leq 2N\omega_\deltalta(T) e^{(1+\sigmagma_N)T+\omega(T)}.\]
The previous inequality holds for any $\deltalta\in A_\omega$. It is plain to see that $\inf A_\omega=0$. Hence, letting $\deltalta\to 0$ with $\deltalta\in A_\omega$ in the previous inequality yields the result.
It remains to prove {\eqref{claim-cont}}. Let $\deltalta\in A_\omega$. Since $d_T^\star(\omega_k,\omega)$ converges to $0$ as $k\to\infty$, we see from the definition of $d_T^\star$ in \eqref{defdtstar} that there is a sequence $(\lambdambda_k)_{k\in{\mathbb N}b}$ with $\lambdambda_k\in\Cs_T^\uparrow$ such that
\[\lVert \lambdambda_k\rVert_T^0\xrightarrow[k\to\infty]{}0\quad\textrm{ and }\quad \varepsilonsilon_k\coloneqq \sum_{u\in[0,T]}|\Deltalta(\omega_k\circ\lambdambda_k)(u)-\Deltalta \omega(u)| \xrightarrow[k\to\infty]{}0.\]
Set {$\bar{\omegaega}_k\coloneqq \omega_k\circ\lambdambda_k$.} Clearly,
$\Deltalta\bar{\omegaega}_k(u)\leq \varepsilonsilon_k+\Deltalta \omega(u)$ and $\Deltalta\omega (u)\leq \varepsilonsilon_k+\Deltalta \bar{\omegaega}_k(u)$, $u\in[0,T].$
Therefore,
\betagin{align*}
\omega_{k}^{\deltalta}(\lambdambda_k(t))-\omega^\deltalta(t)&=\sum_{u\in[0,t]: \Deltalta\bar{\omegaega}_k(u)\geq \deltalta}\Deltalta \bar{\omegaega}_k(u)-\sum_{u\in[0,t]: \Deltalta\omega(u)\geq \deltalta}\Deltalta \omega(u)\\
&\leq \sum_{u\in[0,t]: \Deltalta\omega(u)\geq \deltalta-\varepsilonsilon_k}\Deltalta \bar{\omegaega}_k(u)-\sum_{u\in[0,t]: \Deltalta\omega(u)\geq \deltalta}\Deltalta \omega(u)\\
&=\sum_{u\in[0,t]: \Deltalta\omega(u)\geq \deltalta-\varepsilonsilon_k}(\Deltalta \bar{\omegaega}_k(u)-\Deltalta \omega(u))+\sum_{u\in[0,t]: \Deltalta\omega(u)\in[\deltalta-\varepsilonsilon_k,\deltalta)}\Deltalta \omega(u)\\
&\leq d_T^\star(\omega_k,\omega)+\sum_{u\in[0,T]: \Deltalta\omega(u)\in[\deltalta-\varepsilonsilon_k,\deltalta)}\Deltalta \omega(u).
\end{align*}
Similarly, we obtain
\betagin{align*}
\omega^\deltalta(t)-\omega_{k}^{\deltalta}(\lambdambda_k(t))&=\sum_{u\in[0,t]: \Deltalta\omega(u)\geq \deltalta}\Deltalta \omega(u)-\sum_{u\in[0,t]: \Deltalta\bar{\omegaega}_k(u)\geq \deltalta}\Deltalta \bar{\omegaega}_k(u)\\
&\leq \sum_{u\in[0,t]: \Deltalta\omega(u)\in[\deltalta,\deltalta+\varepsilonsilon_k)}\Deltalta \omega(u)+\sum_{u\in[0,t]: \Deltalta\bar{\omegaega}_k(u)\geq \deltalta}(\Deltalta \omega(u)-\Deltalta \bar{\omegaega}_k(u))\\
&\leq \sum_{u\in[0,T]: \Deltalta\omega(u)\in[\deltalta,\deltalta+\varepsilonsilon_k)}\Deltalta \omega(u)+d_T^\star(\omega_k,\omega).
\end{align*}
Thus, using the definition of $d_T^0$ in \eqref{defdt0}, we deduce that
\[d_T^0(\omega_k^\deltalta,\omega^\deltalta)\leq d_T^\star(\omega_k,\omega)+\sum_{u\in[0,T]: \Deltalta\omega(u)\in(\deltalta-\varepsilonsilon_k,\deltalta+\varepsilonsilon_k)}\Deltalta \omega(u).\]
Since $\deltalta\in A_\omega$, letting $k\to\infty$ in the previous inequality yields $\lim_{k\to\infty }d_T^0(\omega_k^\deltalta,\omega^\deltalta)=0$. Recall that $\omega^\deltalta$ has a finite number of jumps, and hence, {\eqref{claim-cont}} follows using Lemma \ref{zero-star}.
\end{proof}
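The $\deltalta$-truncation $\omega^\deltalta$ and the residual mass $\omega_\deltalta(T)$ used repeatedly above are elementary to compute when the environment is stored as a finite list of jumps; the following small Python helper (an illustration only, with made-up jump data) does exactly this.
\begin{verbatim}
def truncate_env(jumps, delta):
    """delta-truncation of an environment given as a dict {time: jump size}.

    Returns the jumps of omega^delta (sizes >= delta) together with the
    residual mass omega_delta(T), i.e. the total size of the discarded jumps.
    """
    big = {t: s for t, s in jumps.items() if s >= delta}
    small_mass = sum(s for s in jumps.values() if s < delta)
    return big, small_mass

# made-up environment with four jumps
omega = {0.2: 0.30, 0.5: 0.04, 0.7: 0.15, 0.9: 0.01}
omega_delta, residual = truncate_env(omega, delta=0.05)
print(omega_delta)   # {0.2: 0.3, 0.7: 0.15}
print(residual)      # about 0.05, vanishes as delta -> 0
\end{verbatim}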
\subsection{Results related to Section \ref{s23}: the Wright--Fisher process as a large population limit}\lambdabel{s32}
We start this section by proving that the SDE \eqref{WFSDE} is well-posed.
\betagin{proposition}[Existence and uniqueness]\lambdabel{eandu}
Let $\sigmagma,\theta\geq 0$, $\nu_0,\nu_1\in[0,1]$ with $\nu_0+\nu_1=1$. Let $J$ be a pure-jump subordinator with L\'{e}vy measure $\mu$ supported in $(0,1)$ and let $B$ be a standard Brownian motion independent of $J$. Then, for any $x_0\in[0,1]$, there is a pathwise unique strong solution $(X(t))_{t\geq 0}$ to the SDE \eqref{WFSDE} such that $X(0)=x_0$. Moreover, $X(t)\in[0,1]$ for all $t\geq 0$.
\end{proposition}
\betagin{remark}
The Wright--Fisher diffusion defined via the SDE \eqref{WFSDE} with $\theta=0$ corresponds to \cite[Eq. 10]{CSW19} with $K_y$, $y\in(0,1)$, being a random variable that takes the value $1$ with probability $1-y$ and the value $2$ with probability $y$.
\end{remark}
\betagin{proof}
In order to prove the existence and the pathwise uniqueness of strong solutions of \eqref{WFSDE} we use \cite[Thms. 3.2, 5.1]{LiPu12}. To this end, we first extend \eqref{WFSDE} to an SDE on ${\mathbb R}b$ and write it in the form of \cite[Eq. 2.1]{LiPu12}. Note that, by the L\'{e}vy--It\^{o} decomposition, the pure-jump subordinator $J$ can be expressed as
$J(t)=\int_{(0,1)} x\, N(t,\dd x),\quad t\geq 0,$
where $N(\dd s,\dd x)$ is a Poisson random measure on $[0,\infty)\times(0,1)$ with intensity measure $\dd s\otimes\mu(\dd x)$. Hence, defining the functions $a,b:{\mathbb R}b\to{\mathbb R}b$ and $g:{\mathbb R}b\times(0,1)\to{\mathbb R}b$ by setting
\[a(x)\coloneqq \sqrt{2x(1-x)},\quad b(x)\coloneqq\sigmagma x(1-x)+\theta\nu_0(1-x)-\theta\nu_1 x, \quad g(x,u)\coloneqq x(1-x)u,\]
for $x\in[0,1]$ and $u\in(0,1)$; $a(x)\coloneqq 0$, $g(x,u)\coloneqq 0$ for $x\notin[0,1]$; $b(x)\coloneqq \theta\nu_0$ for $x<0$ and $b(x)\coloneqq -\theta\nu_1$ for $x>1$, Eq. \eqref{WFSDE} can be extended to the following SDE on ${\mathbb R}b$
\betagin{equation}\lambdabel{ASDE}
X(t)=X(0)+\int_0^t a(X(s))\dd B(s)+\int_0^t\int_{(0,1)}g(X(s-),u)N(\dd s,\dd u)+\int_0^t b(X(s))\dd s,\quad t\geq 0.
\end{equation}
{Thus,} any solution $(X(t))_{t\geq 0}$ of \eqref{ASDE} such that $X(t)\in[0,1]$ for any $t\geq 0$ is a solution of \eqref{WFSDE} and vice-versa. Note that the functions $a, b,g$ are continuous. Moreover, $b=b_1- b_2$, where
\[\betagin{array}{llll}
b_1(x)\coloneqq \theta\nu_0+\sigmagma x\quad\textrm{for $x\in[0,1]$,} & b_1(x)\coloneqq \theta\nu_0\quad\textrm{for}\quad x\leq0,&\textrm{and}\quad b_1(x)\coloneqq \theta\nu_0+\sigmagma\quad\textrm{for}\quad x\geq1,\\
b_2(x)\coloneqq \theta x +\sigmagma x^2\quad\textrm{for $x\in[0,1]$,}& b_2(x)\coloneqq 0\quad\,\,\,\,\,\,\textrm{for}\quad x\leq0,&\textrm{and}\quad b_2(x)\coloneqq \theta+\sigmagma\quad\,\,\,\,\,\,\textrm{for}\quad x\geq1.
\end{array}\]
In addition, $b_2$ is non-decreasing. Thus, in order to apply \cite[Thms. 3.2, 5.1]{LiPu12}, we only need to verify the sufficient conditions (3.a), (3.b) and (5.a) therein. Condition (3.a) in our case amounts to proving that $x\mapsto b_1(x)+\int_{(0,1)}g(x,u)\mu(\dd u)$ is Lipschitz continuous. In fact, a straightforward calculation shows that
\[|b_1(x)-b_1(y)|+\int_{(0,1)}|g(x,u)-g(y,u)|\mu(\dd u)\leq \left(\sigmagma+\int_{(0,1)}u\mu(\dd u)\right)|x-y|,\quad x,y\in{\mathbb R}b,\]
and hence (3.a) follows. Condition (3.b) amounts to proving that $x\mapsto a(x)$ is $1/2$-H\"older continuous, which was already shown in the proof of \cite[Lemma 3.6]{GS18}.
Therefore, \cite[Thm. 3.2]{LiPu12} yields the pathwise uniqueness for \eqref{ASDE}. Condition (5.a) follows from the fact that the functions $a, b$, $x\mapsto \int_{(0,1)} g(x,u)^2\mu(\dd u)$ and $x\mapsto \int_{(0,1)} g(x,u)\mu(\dd u)$ are bounded on ${\mathbb R}b$. Hence, \cite[Thm. 5.1]{LiPu12} ensures the existence of a strong solution to \eqref{ASDE}. It remains to show that any solution of \eqref{ASDE} with $X(0)\in[0,1]$ is such that $X(t)\in[0,1]$ for all $t\geq 0$. Sufficient conditions implying such a result are provided in \cite[Prop. 2.1]{FuLi10}. The conditions on the diffusion and drift coefficients are satisfied, namely, $a$ is $0$ outside $[0,1]$ and $b(x)$ is nonnegative for $x\leq 0$ and nonpositive for $x\geq 1$. However, the condition on the jump coefficient, $x+g(x,u)\in[0,1]$ for every $x\in{\mathbb R}b$, is not fulfilled. Nevertheless, the proof of \cite[Prop. 2.1]{FuLi10} works without modifications under the alternative condition $x+g(x,u)\in[0,1]$ for $x\in[0,1]$ and $g(x,u)=0$ for $x\notin[0,1]$, which is in turn satisfied. This ends the proof.
\end{proof}
\betagin{lemma}\lambdabel{core} The solution of the SDE \eqref{WFSDE} is a Feller process with generator $\As$ satisfying for all $f\in \mathcal{C}^2([0,1],{\mathbb R}b)$
\betagin{align*}
\As f(x)&=x(1-x)f''(x)+ (\sigmagma x(1-x)+\theta\nu_0(1-x)-\theta\nu_1 x) f{'}(x)+\int_{(0,1)}\left(f\left(x+ x(1-x)u\right)-f(x)\right)\mu(\dd u).
\end{align*}
Moreover, $\mathcal{C}^\infty([0,1],{\mathbb R}b)$ is an operator core for $\As$.
\end{lemma}
\betagin{proof}
Since pathwise uniqueness implies weak uniqueness (see \cite[Thm.~1]{BLP15}), we infer from \cite[Cor.~2.16]{Ku11} that the martingale problem associated to $\As$ in $\Cs^{2}([0,1])$ is well-posed. Moreover, an inspection of the proof shows that this is also true in $\Cs^{\infty}([0,1])$. Using \cite[Prop. 2.2]{vanC92}, we deduce that $X$ is Feller. The fact that $\Cs^\infty([0,1])$ is a core then follows from \cite[Thm.~2.5]{vanC92}.
\end{proof}
Now, we proceed to prove the first part of the main result of Section \ref{s23}, i.e. the annealed convergence of a sequence of Moran models towards the solution of the SDE \eqref{WFSDE}.
\betagin{proof}[Proof of Theorem \ref{thm2.2}-(1) (Annealed convergence)]
Let $\As_N^*$ and $\As$ be the infinitesimal generators of the processes {$(X_N(t))_{t\geq 0}$} and $(X(t))_{t\geq 0}$, respectively. {Note that $(X_N(t))_{t\geq 0}$ has state space
{\betagin{equation} \lambdabel{defen}
E_N\coloneqq \{k/N:k\in[N]_0\}.
\end{equation}}} We will prove that, for all $f\in\Cs^\infty([0,1],{\mathbb R}b)$,
{\betagin{equation}\tag{Claim 2}\lambdabel{claimcvgenerator}
\sup\limits_{x\in E_N}|\As_N^* f(x)-\As f(x)|\xrightarrow[N\to\infty]{} 0.
\end{equation}
Provided \eqref{claimcvgenerator}} is true, since $X$ is Feller and $\Cs^\infty([0,1],{\mathbb R}b)$ is an operator core for $\As$ (see Lemma \ref{core}), the result follows by applying \cite[Thm. 19.25]{Kall02}. Thus, it remains to prove {\eqref{claimcvgenerator}}. To this end, we decompose the generator $\As$ as $\As^{1}+\As^{2}+\As^3+\As^4$, where
\betagin{align*}
\As^1f(x)\coloneqq x(1-x)f{''}(x),\quad\qquad\qquad\qquad\qquad\qquad &\As^2f(x)\coloneqq (\sigmagma x(1-x)+\theta\nu_0(1-x)-\theta\nu_1 x) f{'}(x),\\
\As^3f(x)\coloneqq \int_{(0,\varepsilon_N)}\!\!\left(f\left(x+ x(1-x)u\right)-f(x)\right)\mu(\dd u),\quad &
\As^4f(x)\coloneqq \int_{(\varepsilon_N,1)}\!\!\left(f\left(x+ x(1-x)u\right)-f(x)\right)\mu(\dd u).
\end{align*}
We will choose $\varepsilon_N>0$ later in an appropriate way. We also write $\As_N^*=\As_N^{1}+\As_N^{2}+\As_N^3+\As_N^4$, where
\betagin{align*}
\As_N^1f(x)&\coloneqq N^2x(1-x)\left[\Deltalta_{\frac1N}f(x)+\Deltalta_{-\frac1N}f(x)\right],\\
\As_N^2f(x)&\coloneqq N^2(\sigmagma_N x(1-x)+\theta_N\nu_0(1-x))\left[\Deltalta_{\frac1N}f(x)\right]+N^2\theta_N\nu_1x\left[\Deltalta_{-\frac1N}f(x)\right],\\
\As_N^3f(x)&\coloneqq \int_{(0,\varepsilon_N^{})}\!\!\left({\mathbb E}b\left[f\left(x+\xi_N(x,u)\right)\right]-f(x)\right)\mu(\dd u),\quad
\As_N^4f(x)\coloneqq \int_{(\varepsilon_N^{},1)}\!\!\left({\mathbb E}b\left[f\left(x+\xi_N(x,u)\right)\right]-f(x)\right)\mu(\dd u),
\end{align*}
where $\Deltalta_hf(x)\deltafeq f(x+h)-f(x)$, and $\xi_N(x,u)\coloneqq \Hs(N,N(1-x),B_{Nx}(u))/N$, with
$\Hs(N,N(1-x),k)\sigmam\hypdist{N}{N(1-x)}{k}$, and $B_{Nx}(u)\sigmam\bindist{Nx}{u}$ being independent.
Let $f\in\Cs^\infty([0,1],{\mathbb R}b)$. Note that
\betagin{equation}\lambdabel{t0}
\sup_{x\in E_N} |\As_N^* f(x)-\As f(x)|\leq \sum_{i=1}^4 \sup_{x\in E_N} |\As_N^i f(x)-\As^i f(x)|.
\end{equation}
Using Taylor expansions of order three around $x$ for $f(x+1/N)$ and $f(x-1/N)$, we get
\betagin{equation}\lambdabel{t1}
\sup\limits_{x\in E_N}|\As_N^1 f(x)-\As^1 f(x)|\leq \frac{\lVert f'''\rVert_\infty}{3N}.
\end{equation}
Similarly, using the triangular inequality and appropriate Taylor expansions of order two yields
\betagin{equation}\lambdabel{t2}
\sup\limits_{x\in E_N}|\As_N^2 f(x)-\As^2 f(x)|\leq {\frac{(N\sigmagma_N+N\theta_N) \lVert f''\rVert_\infty}{2N}}+ (|\sigmagma-N\sigmagma_N|+|\theta-N\theta_N|)\lVert f'\rVert_{\infty}.
\end{equation}
{Since $N\sigmagma_N\to\sigmagma$ and $N\theta_N\to\theta$, the right-hand side in \eqref{t2} converges to $0$ as $N\to\infty$}. In addition, since ${\mathbb E}b[\xi_N(x,u)]=x(1-x)u$, we have
\[|\As^3_Nf(x)|\leq \lVert f'\rVert_\infty\int_{(0,\varepsilon_N)} u\mu(\dd u),\quad x\in[0,1],\]
and hence,
\betagin{equation}\lambdabel{t3}
\sup\limits_{x\in E_N}|\As_N^3 f(x)-\As^3 f(x)|\leq 2 ||f'||_\infty\int_{(0,\varepsilon_N^{})} u\mu(\dd u).
\end{equation}
For the last term, we first note that
\betagin{align*}
\left|{\mathbb E}b\left[f\left(x+\xi_N(x,u)\right)-f(x+x(1-x)u)\right]\right|&
\leq \lVert f'\rVert_{\infty}{\mathbb E}b\left[\left|\xi_N(x,u)-x(1-x)u\right|\right]\\
&\leq \lVert f'\rVert_{\infty}\sqrt{{\mathbb E}b\left[\left(\xi_N(x,u)-x(1-x)u\right)^2\right]}\leq\lVert f'\rVert_{\infty} \sqrt{\frac{u}{N}}.
\end{align*}
In the last inequality we used that
\betagin{equation}\lambdabel{hybimix}
{\mathbb E}b\left[\left(\xi_N(x,u)-x(1-x)u\right)^2\right]=\frac{x(1-x)^2u(1-u)}{N}+\frac{Nx^2(1-x)u^2}{N^2(N-1)}\leq \frac{u}{N},
\end{equation}
which is obtained from standard properties of the hypergeometric and binomial distributions. Hence,
\betagin{equation}\lambdabel{t4}
\sup\limits_{x\in E_N}|\As_N^4 f(x)-\As^4 f(x)|\leq ||f'||_\infty\int_{(\varepsilon_N^{},1)} \sqrt{\frac{u}{N}}\,\mu(du)\leq \frac{||f'||_\infty}{\sqrt{N\varepsilon_N}}\int_{(\varepsilon_N^{},1)} u\,\mu(\dd u) .
\end{equation}
Now, choose $\varepsilon_N\coloneqq 1/\sqrt{N}$. Since $\int_{(0,1)} u\mu(\dd u)<\infty$, {\eqref{claimcvgenerator}} follows by plugging \eqref{t1}, \eqref{t2}, \eqref{t3} and \eqref{t4} in \eqref{t0} and letting $N\to\infty$.
\end{proof}
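As a numerical sanity check of \eqref{claimcvgenerator} (not part of the proof), the following Python sketch compares $\As_N^* f$ and $\As f$ on the grid $E_N$ for a smooth test function. To keep the integrals finite sums, the L\'evy measure is taken to be a point mass at some $u_0\in(0,1)$, and selection and mutation are scaled so that $N\sigmagma_N$ and $N\theta_N$ are held exactly constant; these choices and all numerical values are assumptions made only for illustration. The expectation appearing in $\As_N^4$ is computed exactly from the binomial and hypergeometric probability mass functions.
\begin{verbatim}
import numpy as np
from scipy.stats import binom, hypergeom

# illustrative parameters; the Levy measure is taken to be a point mass at u0
# (an assumption made only to turn the integrals into finite sums)
sigma, theta, nu0, nu1 = 1.0, 0.8, 0.5, 0.5
lam, u0 = 2.0, 0.3
f   = lambda x: np.sin(np.pi * x)
fp  = lambda x: np.pi * np.cos(np.pi * x)
fpp = lambda x: -np.pi ** 2 * np.sin(np.pi * x)

def A_limit(x):
    # generator of the limiting Wright-Fisher SDE (cf. the preceding lemma)
    jump = lam * (f(x + x * (1 - x) * u0) - f(x))
    drift = sigma * x * (1 - x) + theta * nu0 * (1 - x) - theta * nu1 * x
    return x * (1 - x) * fpp(x) + drift * fp(x) + jump

def E_f_after_env_jump(x, u, N):
    # exact E[f(x + xi_N(x,u))], with xi_N = Hyp(N, N(1-x), Bin(Nx,u)) / N
    k = round(N * x)
    pb = binom.pmf(np.arange(k + 1), k, u)
    val = 0.0
    for b in range(k + 1):
        if pb[b] < 1e-12:
            continue
        h = np.arange(min(b, N - k) + 1)
        val += pb[b] * np.dot(hypergeom.pmf(h, N, N - k, b), f(x + h / N))
    return val

def A_N(x, N):
    sN, tN = sigma / N, theta / N          # moderate selection and mutation
    up, down = f(x + 1 / N) - f(x), f(x - 1 / N) - f(x)
    neutral = N ** 2 * x * (1 - x) * (up + down)
    sel_mut = N ** 2 * ((sN * x * (1 - x) + tN * nu0 * (1 - x)) * up
                        + tN * nu1 * x * down)
    env = lam * (E_f_after_env_jump(x, u0, N) - f(x))
    return neutral + sel_mut + env

for N in (25, 50, 100):
    grid = np.arange(N + 1) / N
    err = max(abs(A_N(x, N) - A_limit(x)) for x in grid)
    print(N, err)      # the sup-distance over E_N decreases as N grows
\end{verbatim}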
Before embarking on the proof of the second part of Theorem \ref{thm2.2}, we prove the following estimates for the Moran model with null environment.
\betagin{lemma}\lambdabel{nullest}
For any $x\in E_N$ (see \eqref{defen} for its definition) and $t\geq 0$, we have
\[{\mathbb E}b\left[\left(X_N^{{\bf{0}}}(t)-x\right)^2\mid X_N^{{\bf{0}}}(0)=x\right]\leq \left(\frac12 + N(\sigmagma_N+3\theta_N)\right)t,\]
and
\[-N\theta_N\nu_1 t\leq {\mathbb E}b\left[X_N^{\bf{0}}(t)-x\mid X_N^{{\bf{0}}}(0)=x\right]\leq N(\sigmagma_N+\theta_N\nu_0) t.\]
\end{lemma}
\betagin{proof}
Fix $x\in E_N$ and consider the functions $f_x,g_x:E_N\to[0,1]$ defined via $f_x(z)\coloneqq (z-x)^2$ and $g_x(z)\coloneqq z-x$, $z\in E_N$. The process $X_N^{\bf{0}}$ is a Markov chain with generator $\As_N^{\star,0}\coloneqq \As_N^1+\As_N^2$, where $\As_N^1$ and $\As_N^2$ are defined in the proof of Theorem \ref{thm2.2}. Moreover, for every $z\in E_N$, we have
\betagin{align*}
\As_N^{\star,0} f_x(z)&{=2z(1-z)+N\left[(\sigmagma_N z+\theta_N \nu_0)(1-z)\left(2(z-x)+\frac1N\right)+\theta_N\nu_1 z\left(2(x-z)+\frac1N\right)\right]}\\
&{\leq \frac12+N\left[3\left(\frac{\sigmagma_N}{4}+\theta_N \nu_0\right)+3\theta_N\nu_1\right]}\leq \frac12 + N(\sigmagma_N+3\theta_N),
\end{align*}
and
\betagin{align*}
\As_N^{\star,0} g_x(z)&=N\left[(\sigmagma_N z+\theta_N \nu_0)(1-z)-\theta_N\nu_1 z\right]\in[-N\theta_N\nu_1,N(\sigmagma_N+\theta_N\nu_0)].
\end{align*}
Hence, Dynkin's formula applied to $X_N^{\bf{0}}$ with $f_x$ leads to
\[{\mathbb E}b\left[\left(X_N^{\bf{0}}(t)-x\right)^2\mid X_N^{{\bf{0}}}(0)=x\right]=\int_0^t {\mathbb E}b\left[\As_N^{\star,0} f_x(X_N^{\bf{0}}(s))\mid X_N^{{\bf{0}}}(0)=x\right]\dd s\leq \left(\frac12 + N(\sigmagma_N+3\theta_N)\right)t.\]
Similarly, applying Dynkin's formula to $X_N^{\bf{0}}$ with $g_x$, we obtain
\[{\mathbb E}b\left[X_N^{\bf{0}}(t)-x\mid X_N^{{\bf{0}}}(0)=x\right]=\int_0^t {\mathbb E}b\left[\As_N^{\star,0} g_x(X_N^{\bf{0}}(s))\mid X_N^{{\bf{0}}}(0)=x\right]\dd s\in[-N\theta_N\nu_1 t,N(\sigmagma_N+\theta_N\nu_0)t],\]
which ends the proof.
\end{proof}
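The following tiny Python check (an illustration only) verifies numerically the exact identity used in the previous computation: the neutral part of $\As_N^{\star,0}$ applied to $f_x$ equals $2z(1-z)$.
\begin{verbatim}
import numpy as np

# spot-check of the identity used above: the neutral part of the generator
# applied to f_x(z) = (z - x)^2 equals 2 z (1 - z) exactly
rng = np.random.default_rng(3)
N = 50
for _ in range(5):
    x = rng.integers(0, N + 1) / N
    z = rng.integers(0, N + 1) / N
    fx = lambda w: (w - x) ** 2
    neutral = N ** 2 * z * (1 - z) * (fx(z + 1 / N) + fx(z - 1 / N) - 2 * fx(z))
    print(abs(neutral - 2 * z * (1 - z)) < 1e-9)
\end{verbatim}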
\betagin{proposition}[Quenched tightness]\lambdabel{q-tight}
Assume that $N \sigmagma_N\rightarrow \sigmagma$ and $N\theta_N\rightarrow \theta$ as $N\to\infty$. For any $\omega\in\Db^\star$, the sequence {$(X_N^\omega)_{N\geq 1}$} is tight.
\end{proposition}
\betagin{proof}
Let $(\Fs_s^N)_{s\geq 0}$ denote the natural filtration associated to the process $X_N^\omega$. To prove the tightness of the sequence {$(X_N^\omega)_{N\geq 1}$}, we use \cite[Thm. 1]{BKS16}. For this we need to show that the following conditions hold:
\betagin{itemize}
\item[A1)] For each $T,\varepsilon>0$, there exists a compact set $K\subset{\mathbb R}b$ such that
\[\liminf_{N\to\infty} {\mathbb P}b\left(X_N^\omega(t)\in K,\, \forall t\leq T\right)\geq 1-\varepsilon.\]
\item[A2)] There exist $\alphapha>0$ and non-decreasing, c\`adl\`ag processes $F_N$, $F$ such that $F_N$ is $\Fs_0^N$-measurable,
$F_N\xRightarrow[]{(d)}F$ and for any $N\geq 1$ and every $0\leq s\leq t$
\[{\mathbb E}b\left[1\wedge |X_N^\omega(t)-X_N^\omega(s)|^{\alphapha} \right]\leq F_N(t)-F_N(s).\]
\end{itemize}
Since for all $t\geq 0$ and $N\geq 1$, $X_N^\omega(t)\in E_N \subset [0,1]$ (see \eqref{defen} for the definition of $E_N$), condition A1) is satisfied. Now, we claim that there are constants $c,C>0$, independent of $N$, such that
{\betagin{equation}\tag{Claim 3}\lambdabel{claiml2cont}
{\mathbb E}b\left[(X_N^\omega(t)-X_N^\omega(s))^2\mid \Fs_s^N\right]\leq c\sum_{u\in[s,t]}\Deltalta \omega(u)+ C (t-s),\quad\textrm{for all $0\leq s\leq t$}.
\end{equation}}
If {\eqref{claiml2cont}} is true, then condition A2) is satisfied with $\alphapha=2$ and $F_N(t)=F(t)=c\sum_{u\in[0,t]}\Deltalta \omega(u)+Ct$, and the result follows from \cite[Thm. 1]{BKS16}. The rest of the proof is devoted to proving {\eqref{claiml2cont}}.
For $x\in E_N$ and $t\geq 0$, we set $\psi_x(\omega,t)\coloneqq {\mathbb E}b_x[(X_N^\omega(t)-x)^2].$
For $s\geq 0$, we set $\omega_s(\cdot)\coloneqq \omega(s+\cdot)$. From the definition of $X_N^\omega$, it follows that, for any $0\leq s<t$,
\betagin{equation}\lambdabel{InMP}
{\mathbb E}b\left[(X_N^\omega(t)-X_N^\omega(s))^2\mid \Fs_s^N\right]=\psi_{X_N^\omega(s)}(\omega_s,t-s).
\end{equation}
Let $0\leq s<t$. We split the proof of {\eqref{claiml2cont}} in three cases.
\textbf{Case 1: $\omega$ has no jumps in $(s,t]$.} In particular, $\omega_s$ has no jumps in $(0,t-s]$. Hence, restricted to $[0,t-s]$, $X_N^{\omega_s}$ has the same distribution as $X_N^{\bf{0}}$. Using Lemma \ref{nullest} with $x=X_N^\omega(s)$ and plugging the result in \eqref{InMP}, we infer that {\eqref{claiml2cont}} holds for any $c\geq 1$ and $C\geq C_1\coloneqq 1/2+\sup_{N\in{\mathbb N}b}(N(\sigmagma_N+3\theta_N))$.
\textbf{Case 2: $\omega$ has $n$ jumps in $(s,t]$.} Let $t_1,\ldots,t_n\in(s,t]$ be the jump times of $\omega$ in $(s,t]$ in increasing order. We set $t_0\coloneqq s$ and $t_{n+1}\coloneqq t$. For any $i\in[n+1]$ and any $r\in(t_{i-1},t_{i})$, $\omega$ has no jumps in $(t_{i-1},r]$. In particular, $(t_{i-1},r]$ falls into Case 1. Using {\eqref{claiml2cont}} in $(t_{i-1},r]$ and letting $r\to t_i$, we obtain
\[{\mathbb E}b\left[(X_N^\omega(t_i -)-X_N^\omega(t_{i-1}))^2\mid \Fs_{t_{i-1}}^N\right]\leq C_1(t_i-t_{i-1}).\]
Moreover,
\[{\mathbb E}b\left[(X_N^\omega(t_i)-X_N^\omega(t_{i}-))^2\mid \Fs_{t_{i}-}^N\right]\leq {\mathbb E}b\left[\left(\frac{B_{N}(\Deltalta \omega(t_i))}{N}\right)^2\right]\leq \Deltalta \omega(t_i),\]
{where $B_{N}(\Deltalta \omega(t_i)) \sigmam \bindist{N}{\Deltalta \omega(t_i)}$.} Using the two previous inequalities and the tower property of the conditional expectation, we get
\betagin{equation}\lambdabel{oneint}
{\mathbb E}b\left[(X_N^\omega(t_i)-X_N^\omega(t_{i-1}))^2\mid \Fs_{s}^N\right]\leq 2C_1(t_i-t_{i-1})+2\Deltalta\omega(t_i).
\end{equation}
Now, note that
\betagin{align*}
(X_N^\omega(t)-X_N^\omega(s))^2&=\left(\sum_{i=0}^n( X_N^\omega(t_{i+1})-X_N^\omega(t_i))\right)^2\\
&=\sum_{i=0}^n( X_N^\omega(t_{i+1})-X_N^\omega(t_i))^2+2\sum_{i=0}^n( X_N^\omega(t_{i+1})-X_N^\omega(t_i))(X_N^\omega(t_i)-X_N^\omega(s)).
\end{align*}
Using Eq. \eqref{oneint}, we see that
\[{\mathbb E}b\left[\sum_{i=0}^n(X_N^\omega(t_{i+1})-X_N^\omega(t_{i}))^2\mid \Fs_{s}^N\right]\leq 2C_1(t-s)+2\sum_{i=1}^n\Deltalta\omega(t_i).\]
Moreover, we have
\betagin{align*}
{\mathbb E}b\left[(X_N^\omega(t_{i+1})-X_N^\omega(t_{i}))(X_N^\omega(t_{i})-X_N^\omega(s))\mid \Fs_{t_i}^N\right]=\varphi_{X_N^\omega(s),X_N^\omega(t_i)}(\omega_{t_i},t_{i+1}-t_{i}),
\end{align*}
where {for $x,y\in E_N$ and $t\geq 0$ we set} $\varphi_{x,y}(\omega,t)\coloneqq (y-x){{\mathbb E}b[X_N^\omega(t)-y \mid X_N^\omega(0)=y]}$. Since, for any $r\in(t_i,t_{i+1})$, {$\omega_{t_i}$ has no jumps in $(0,r-t_i]$,} we can use Lemma \ref{nullest} to infer that {for any $x,y\in E_N$,}
\[\varphi_{x,y}(\omega_{t_i}, r-t_i)\leq N((\sigmagma_N+\theta_N\nu_0)\vee\theta_N\nu_1)(r-t_i).\]
Note that $(y-x){\mathbb E}b_y[X_N^{\omega_{t_i}}(t_{i+1}-t_i)-X_N^{\omega_{t_i}}((t_{i+1}-t_i)-)]\leq \Deltalta \omega(t_{i+1})$. Hence, letting $r\to t_{i+1}$, we get
\[\varphi_{x,y}(\omega_{t_i}, t_{i+1}-t_i)\leq N((\sigmagma_N+\theta_N\nu_0)\vee\theta_N\nu_1)(t_{i+1}-t_i)+\Deltalta\omega(t_{i+1}),\]
{for any $x,y\in E_N$, hence in particular for $x=X_N^\omega(s)$ and $y=X_N^\omega(t_i)$.} Altogether, we obtain
\[{\mathbb E}b\left[(X_N^\omega(t)-X_N^\omega(s))^2\mid \Fs_{s}^N\right]\leq C_2(t-s)+3\sum_{i=1}^n\Deltalta\omega(t_i),\]
where $C_2\coloneqq 2C_1+\sup_{N\in{\mathbb N}b}N((\sigmagma_N+\theta_N\nu_0)\vee\theta_N\nu_1)$. Hence, {\eqref{claiml2cont}} holds for any $C\geq C_2$ and $c\geq 3$.
\textbf{Case 3: $\omega$ has infinitely many jumps in $(s,t]$}. For any $\deltalta>0$, we consider $\omega^\deltalta$ as in \eqref{defomdelta} and we couple the processes $X_N^\omega$ and $X_N^{\omega^\deltalta}$ as in the proof of Proposition \ref{sjumps}. Note that $\omega^\deltalta$ has only a finite number of jumps in any compact interval, thus $\omega^\deltalta$ falls into Case 2. Moreover, we have
\betagin{align*}
\psi_x(\omega,t)&\leq 2{\mathbb E}b_x[(X_N^\omega(t)-X_N^{\omega^\deltalta}(t))^2]+2{\mathbb E}b_x[(X_N^{\omega^\deltalta}(t)-x)^2]\\
&\leq 2{\mathbb E}b_x[|X_N^\omega(t)-X_N^{\omega^\deltalta}(t)|]+2{\mathbb E}b_x[(X_N^{\omega^\deltalta}(t)-x)^2]\\
&\leq 2e^{N\sigmagma_N t+\omega(t)} \sum_{u\in[0,t]:\Deltalta \omega(u)<\deltalta}\Deltalta \omega(u)+2{\mathbb E}b_x[(X_N^{\omega^\deltalta}(t)-x)^2],
\end{align*}
where in the last inequality we use Proposition \ref{sjumps}. Now, using {\eqref{claiml2cont}} for $X_N^{\omega^\deltalta}$ and the previous inequality, we obtain
\[{\mathbb E}b\left[(X_N^\omega(t)-X_N^\omega(s))^2\mid \Fs_{s}^N\right]\leq e^{N\sigmagma_N (t-s)+\omega(t-s)}\sum_{u\in[s,t]:\Deltalta \omega(u)<\deltalta}\Deltalta \omega(u)+2C_2(t-s)+6\sum_{u\in[s,t]}\Deltalta\omega(u).\]
We let $\deltalta\to 0$ and conclude that {\eqref{claiml2cont}} holds for any $C\geq 2C_2$ and $c\geq 6$.
\end{proof}
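To make the quantities appearing in \eqref{claiml2cont} concrete, here is a minimal Gillespie-type simulation sketch of the type-$0$ frequency in the quenched Moran model, with the environment given as a finite list of jumps. All parameter values are arbitrary, and the sketch is only an illustration, not a construction used in the proofs.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def moran_freq(N, sigma_N, theta_N, nu0, nu1, env, T, k0):
    """Gillespie-type simulation of the type-0 frequency in the quenched
    Moran model.  `env` is the environment, given as a finite list of
    (time, jump size) pairs (a simple environment); all other parameters
    are illustrative.  Returns event times and frequencies."""
    env = sorted(env)
    t, k, j = 0.0, k0, 0
    times, path = [0.0], [k0 / N]
    while t < T:
        x = k / N
        r_up = N ** 2 * (x * (1 - x) + sigma_N * x * (1 - x) + theta_N * nu0 * (1 - x))
        r_dn = N ** 2 * (x * (1 - x) + theta_N * nu1 * x)
        tot = r_up + r_dn
        t_next = t + rng.exponential(1.0 / tot) if tot > 0 else np.inf
        if j < len(env) and env[j][0] <= min(t_next, T):
            # environmental jump: Bin(k, u) arrows, hitting unfit individuals
            t, u = env[j]
            b = rng.binomial(k, u)
            k += rng.hypergeometric(N - k, k, b) if b > 0 else 0
            j += 1
        elif t_next <= T:
            t = t_next
            k += 1 if rng.random() < r_up / tot else -1
        else:
            t = T
        times.append(t)
        path.append(k / N)
    return np.array(times), np.array(path)

# toy run: N = 100, moderate selection/mutation, two environmental jumps
N = 100
env = [(1.0, 0.2), (2.5, 0.4)]
t, x = moran_freq(N, 1.0 / N, 0.5 / N, 0.5, 0.5, env, T=4.0, k0=30)
print("final frequency:", x[-1])
\end{verbatim}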
Now, we proceed to prove the quenched convergence of the sequence of Moran models to the Wright--Fisher diffusion, under the assumption that the environment is simple.
\betagin{proof}[Proof of Theorem \ref{thm2.2}-(2) (Quenched convergence)]
Let $B\coloneqq (B(t))_{t\geq 0}$ be a standard Brownian motion. We denote by $X^{\bf{0}}$ the unique strong solution of \eqref{WFSDE} associated to $B$ and the null environment. Theorem \ref{thm2.2}-(1) implies in particular that $X_N^{\bf{0}}$ converges to $X^{\bf{0}}$ as $N\to\infty$.
Now, assume that $\omega\neq{\bf{0}}$ is simple. We denote by $T_\omega$ the {(discrete but possibly infinite)} set of jump times of $\omega$ in $(0,\infty)$. Moreover, for $0<i< |T_\omega|+1$, we denote by $t_i\coloneqq t_i(\omega)\in T_\omega$ the time of the $i$-th jump of $\omega$. We set $t_0 \coloneqq 0$ and $t_{|T_\omega|+1} \coloneqq \infty$.
Therefore, we need to prove that
\[(X_N^\omega(t))_{t\geq 0}\xrightarrow[N\to\infty]{(d)} (X^\omega(t))_{t\geq 0},\]
where {we recall that} the process $X^\omega$ {starting at $x_0$} is defined as follows.
\betagin{itemize}
\item[(i)] {$X^\omega(0)=x_0$}.
\item[(ii)] For $i\in{\mathbb N}b$ with $i\leq |T_\omega|+1$, the restriction of $X^{\omega}$ to the interval $(t_{i-1},t_i)$ is given by a version of $X^{\bf{0}}$ started at $X^\omega(t_{i-1})$.
\item[(iii)] For $0<i< |T_\omega|+1$, conditionally on $X^\omega(t_i-)$, \[X^\omega(t_i)=X^\omega(t_i-)+X^\omega(t_i-)(1-X^\omega(t_i-))\Deltalta\omega(t_i).\]
\end{itemize}
Since the sequence $(X_N^\omega)_{N\in{\mathbb N}b}$ is tight {(see Proposition \ref{q-tight}),} it is enough to prove the convergence at the level of the finite dimensional distributions. More precisely, we will prove by induction on $i\in{\mathbb N}b$ with $i\leq |T_\omega|+1$ that for any finite set $I\subset[0,t_i)$, we have \[((X_N^\omega(t))_{t\in I},X_N^\omega(t_i-))\xrightarrow[N\to\infty]{(d)} ((X^\omega(t))_{t\in I},X^\omega(t_i-)).\]
For $i=|T_\omega|+1<\infty$ we remove the components $X_N^\omega(t_i-)$ and $X^\omega(t_i-)$ since they are not defined. Since $X_N^\omega(t_1-)=X_N^{\bf{0}}(t_1)$ and $X^\omega(t_1-)=X^{\bf{0}}(t_1)$ almost surely, the result for $i=1$ follows from Theorem \ref{thm2.2}-(1). Now, assume that the result is true for some $i<|T_\omega|+1$ and let $I\subset(0,t_{i+1})$. Without loss of generality we assume that $I=\{s_1,\ldots,s_k, t_i,r_1,\ldots, r_m\}$, with $s_1<\cdots<s_k<t_i<r_1<\cdots<r_m$. We also assume that $i<|T_\omega|$; the other case, i.e. $i=|T_\omega|<\infty$, follows by an analogous argument.
Let $F:[0,1]^{k+1}\to{\mathbb R}b$ be a Lipschitz function with $\lVert F\rVert_{\textrm{BL}}\leq 1$ {(see \eqref{defnormbl} for the definition of $\lVert \cdot\rVert_{\textrm{BL}}$)}. Note that
\betagin{align*}
{\mathbb E}b\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]&={\mathbb E}b\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i-)+\xi_N(X_N^\omega(t_i-),\Deltalta\omega(t_i)))\right],
\end{align*}
where for $x\in E_N$ (see \eqref{defen} for the definition of $E_N$) and $u\in(0,1)$, $\xi_N(x,u)\coloneqq \Hs(N,N(1-x),B_{Nx}(u))/N$ with $\Hs(N,N(1-x),k)\sigmam\hypdist{N}{N(1-x)}{k}$, $k\in[N]_0$, and $B_{Nx}(u)\sigmam\bindist{Nx}{u}$ being mutually independent and independent of $X_N^\omega$. Now, set
\betagin{align*}
D_N&\coloneqq {\mathbb E}b\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i-)+\xi_N(X_N^\omega(t_i-),\Deltalta\omega(t_i)))\right]\\
&\,\,\,-{\mathbb E}b\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i-)+X_N^\omega(t_i-)(1-X_N^\omega(t_i-))\Deltalta\omega(t_i))\right].
\end{align*}
Using that $\lVert F\rVert_{\textrm{BL}}\leq 1$ and \eqref{hybimix}, we see that
$|D_N| \leq\sqrt{\Deltalta\omega(t_i)/N}\to 0$ as $N\to\infty$.
Therefore, the induction hypothesis yields
\betagin{align*}
{\mathbb E}b\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]&=D_N+{\mathbb E}b\left[F((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i-)+X_N^\omega(t_i-)(1-X_N^\omega(t_i-))\Deltalta\omega(t_i))\right]\\
&\xrightarrow[N\to\infty]{}\,{\mathbb E}b\left[F((X^\omega(s_j))_{j=1}^k,X^\omega(t_i-)+X^\omega(t_i-)(1-X^\omega(t_i-))\Deltalta\omega(t_i))\right]\\
&={\mathbb E}b\left[F((X^\omega(s_j))_{j=1}^k,X^\omega(t_i))\right].
\end{align*}
Therefore,
\betagin{equation}\lambdabel{uptoti}
((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\xrightarrow[N\to\infty]{(d)} ((X^\omega(s_j))_{j=1}^k,X^\omega(t_i)).
\end{equation}
Let $G:[0,1]^{k+m+2}\to{\mathbb R}b$ be a Lipschitz function with $\lVert G\rVert_{\textrm{BL}}\leq 1$. For $x\in E_N$, define
\[H_N(z,x)\coloneqq {\mathbb E}b_x[G(z,x,(X_N^{\bf{0}}(r_j-t_i))_{j=1}^m,X_N^{\bf{0}}(t_{i+1}-t_i))], \ \forall z \in{\mathbb R}b^k.\]
Note that
\betagin{equation}\lambdabel{mpr1}
{\mathbb E}b[G((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i),(X_N^\omega(r_j))_{j=1}^m,X_N^\omega(t_{i+1}-))]={\mathbb E}b\left[H_N((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right].
\end{equation}
Similarly, for $x\in[0,1]$, define
\[H(z,x)\coloneqq {\mathbb E}b_x[G(z,x,(X^{\bf{0}}(r_j-t_i))_{j=1}^m,X^{\bf{0}}(t_{i+1}-t_i))],\quad z\in{\mathbb R}b^k,\]
and note that
\betagin{equation}\lambdabel{mpr2}
{\mathbb E}b[G((X^\omega(s_j))_{j=1}^k,X^\omega(t_i),(X^\omega(r_j))_{j=1}^m,X^\omega(t_{i+1}-))]={\mathbb E}b\left[H((X^\omega(s_j))_{j=1}^k,X^\omega(t_i))\right].
\end{equation}
Using \eqref{uptoti} and the Skorokhod representation theorem, we can assume that $((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))_{N\geq1}$ and $((X^\omega(s_j))_{j=1}^k,X^\omega(t_i))$ {are defined on} the same probability space and that the convergence holds almost surely. In particular, we can write
\betagin{equation}\lambdabel{er1r2}
|{\mathbb E}b\left[H_N((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]-{\mathbb E}b\left[H((X^\omega(s_j))_{j=1}^k,X^\omega(t_i))\right]|\leq R_N^1+R_N^2,
\end{equation}
where
\betagin{align*}
R_N^1&\coloneqq |{\mathbb E}b\left[H_N((X_N^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]-{\mathbb E}b\left[H_N((X^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]|,\\
R_N^2&\coloneqq |{\mathbb E}b\left[H_N((X^\omega(s_j))_{j=1}^k,X_N^\omega(t_i))\right]-{\mathbb E}b\left[H((X^\omega(s_j))_{j=1}^k,X^\omega(t_i))\right]|.
\end{align*}
Using that $\lVert G\rVert_{\textrm{BL}}\leq 1$, we obtain
\betagin{equation}\lambdabel{er1}
R_N^1\leq \sum_{j=1}^k{\mathbb E}b[|X_N^\omega(s_j)-X^\omega(s_j)|]\xrightarrow[N\to\infty]{}0.
\end{equation}
Moreover, since $X_N^\omega(t_i)$ converges to $X^\omega(t_i)$ almost surely, we conclude using Theorem \ref{thm2.2}-(1) that, for any $z\in[0,1]^k$, $H_N(z,X_N^\omega(t_i))$ converges to $H(z,X^\omega(t_i))$ almost surely. Therefore, using the dominated convergence theorem, we conclude that
\betagin{equation}\lambdabel{er2}
R_N^2\xrightarrow[N\to\infty]{}0.
\end{equation}
Plugging \eqref{er1} and \eqref{er2} in \eqref{er1r2} and using \eqref{mpr1} and \eqref{mpr2} completes the proof.
\end{proof}
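The recipe (i)--(iii) defining the limit process $X^\omega$ for a simple environment translates directly into a simulation scheme; the following Python sketch (an illustration only) runs an Euler approximation of the null-environment diffusion between jump times and applies the deterministic jump map at the jump times. The time step and parameter values are arbitrary, and the clipping to $[0,1]$ only compensates for discretisation error, which the exact process does not need.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)

def x_omega_path(x0, env, T, sigma, theta, nu0, nu1, dt=1e-3):
    """Simulation sketch of the limit process X^omega for a simple
    environment: Euler steps for the null-environment diffusion between
    jump times, and the map x -> x + x(1-x) * jump at each jump time.
    `env` is a list of (time, jump size) pairs; all values are illustrative."""
    env = sorted(env)
    x, t, ji = x0, 0.0, 0
    ts, xs = [0.0], [x0]
    while t < T:
        t_end = min(env[ji][0], T) if ji < len(env) else T
        if t_end > t:
            n_sub = max(int(np.ceil((t_end - t) / dt)), 1)
            h = (t_end - t) / n_sub
            for _ in range(n_sub):
                dB = rng.normal(0.0, np.sqrt(h))
                x += np.sqrt(2 * x * (1 - x)) * dB + (sigma * x * (1 - x)
                     + theta * nu0 * (1 - x) - theta * nu1 * x) * h
                x = min(max(x, 0.0), 1.0)   # clip discretisation overshoot
                t += h
                ts.append(t)
                xs.append(x)
            t = t_end
        if ji < len(env) and env[ji][0] <= T:
            x += x * (1 - x) * env[ji][1]    # rule (iii) at the jump time
            ji += 1
            ts.append(t)
            xs.append(x)
    return np.array(ts), np.array(xs)

ts, xs = x_omega_path(0.2, [(1.0, 0.3), (2.0, 0.5)], T=3.0,
                      sigma=1.0, theta=0.5, nu0=0.5, nu1=0.5)
print("X^omega(T) approx.", xs[-1])
\end{verbatim}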
\section{The ASG and its relatives}\lambdabel{S4}
In this section we formalize the definition of the quenched ASG and we provide definitions for the k-ASG and the pLD-ASG both in the annealed and in the quenched setting.
\subsection{Results related to Section \ref{s24}: the quenched ASG}\lambdabel{s41}
The aim of this section is to prove the following result.
\betagin{proposition}[Existence of the quenched ASG]\lambdabel{eqasg}
Let $\omega\in\Db^\star$. For any $n \in{\mathbb N}b$ and $T > 0$, there exists a branching-coalescing particle system $(\Gs_T^\omega(\betata))_{\betata\geq 0}$ starting at $\betata=0-$ with $n$ lines, that almost surely {consists of} finitely many lines at each time $\betata \in [0,T]$, and that satisfies the requirements {(i), (ii), (iii'), (iv) and (v)} of Definition \ref{defannealdasg}.
\end{proposition}
\betagin{proof}
We will explicitly construct a branching-coalescing particle system $(\Gs_T^\omega(\betata))_{\betata\geq 0}$ with the desired properties. The main difficulty is that the environment $\omega$ {may have} infinitely many jumps on each compact interval. Fix $T > 0$ and $n \in {\mathbb N}b$ (sampling size) and define
\[\Lambda_{\textrm{mut}}\coloneqq\{\lambdambda_{i}^{0},\lambdambda_{i}^{1} \}_{i \geq 1}, \ \Lambda_{\textrm{\textrm{sel}}}\coloneqq\{\lambdambda_{i}^{\vartriangle}\}_{i \geq 1}, \ \Lambda_{\textrm{coal}}\coloneqq\{\lambdambda_{i,j}^{\blacktriangle} \}_{i,j \geq 1, i \neq j},\]
where $\lambdambda_{i}^{0}, \lambdambda_{i}^{1}, \lambdambda_{i}^{\vartriangle}$ and $\lambdambda_{i,j}^{\blacktriangle}$ are independent Poisson processes on $[0,T]$ with {parameters $\theta \nu_0, \theta \nu_1, \sigmagma$ and $1$, respectively}. For $\betata \in [0,T]$, let $\tilde \omegaega (\betata) := \omegaega (T) - \omegaega ((T-\betata)-)$ and $I_{\tilde \omegaega} := \{ \betata \in [0,T]: \Deltalta \tilde \omegaega (\betata) > 0 \}$; $I_{\tilde \omegaega}$ is the countable set of jump times of $\tilde \omegaega$. {Let $\mathcal{U}_{\tilde \omegaega} := \{ U_i(\betata) \}_{i \geq 1, \betata \in I_{\tilde \omegaega}}$ be an i.i.d. family of uniform random variables on $(0,1)$}. Assume, without loss of generality, that the arrival times of $\lambdambda_i^0,\lambdambda_i^1, \lambdambda_i^{\vartriangle}$ and $\lambdambda_{i,j}^{\blacktriangle}$, $i,j\in{\mathbb N}b$, $i\neq j$, are countable, pairwise distinct, and distinct from the jump times of $\tilde \omegaega$. Let $I_{\textrm{coal}}$ (resp. $I_{\textrm{\textrm{sel}}}$) be the set of arrival times of $\Lambda_{\textrm{coal}}$ (resp. $\Lambda_{\textrm{\textrm{sel}}}$).
We first construct a set {$\Vs^\omega\subset {\mathbb N}b \times [0,T]$} {of \emph{virtual lines},} representing the set of lines that would be part of the ASG if there were no coalescence events. In particular, once a line enters this set, it will remain there. The set $\Vs^\omega$ is constructed on the basis of the set of potential branching times $I_{\textrm{bran}}\coloneqq I_{\tilde{\omega}}\cup I_{\textrm{sel}}$ as follows. Consider the (countable) set \[S_{\textrm{bran}}\coloneqq\{(\betata_1,\ldots,\betata_k):k\in{\mathbb N}b,\, 0\leq \betata_1<\cdots<\betata_k, \betata_i\in I_{\textrm{bran}}, i\in[k]\},\]
and fix an injective function $i_\star:[n]\times S_{\textrm{bran}}\to{\mathbb N}b\setminus[n]$. The set $\Vs^\omega$ is {determined} as follows.
{\betagin{enumerate}
\item For any $i\in[n]$ (i.e. in the initial sample) and $\betata\in[0,T]$: $(i,\betata)\in\Vs^\omega$.
\item For any $(\betata_1,\ldots,\betata_k)\in S_{\textrm{bran}}$, $j\in[n]$, and $\betata\in[\betata_k,T]$: $(i_\star(j,\betata_1,\ldots,\betata_k),\betata)\in\Vs^\omega$ if and only if:
\betagin{itemize}
\item for any $\ell\in[k]$ with $\betata_\ell\in I_{\tilde{\omega}}$, $U_{i_\star(j,\betata_1,\ldots,\betata_{\ell-1})}(\betata_\ell)\leq \Deltalta \tilde \omegaega(\betata_\ell)$ (or $U_{j}(\betata_1) \leq \Deltalta \tilde \omegaega(\betata_1)$ if $\ell=1$),
\item for any $\ell\in[k]$ such that $\betata_\ell\in I_{\textrm{sel}}$, $\betata_\ell$ is a jump time of $\lambdambda_{i_\star(j,\betata_1,\ldots,\betata_{\ell-1})}^{\vartriangle}$ (of $\lambdambda_{j}^{\vartriangle}$ if $\ell=1$),
\end{itemize}
\end{enumerate}}
and these are all possible virtual lines; see Fig. \ref{fig:asgw}.
\betagin{figure}[t!]
\scalebox{0.6}{
\betagin{tikzpicture}
\pgfmathsetseed{1337}
\draw[dashed, opacity=0.5] (0,-1)--(0,4.5);
\draw[dashed, red, opacity=0.5] (2,-1)--(2,4.5);
\draw[dashed, red, opacity=0.5] (8,-1)--(8,4.5);
\draw[dashed, red, opacity=0.5] (12,-1)--(12,4.5);
\draw[dashed, red, opacity=0.5] (15,-1)--(15,4.5);
\draw[dashed, opacity=0.5] (14,-1)--(14,4.5);
\draw[dashed, opacity=0.5] (10,-1)--(10,4.5);
\node [right, opacity=0.5] at (7.2,-2) {$t$};
\node [right, opacity=0.5] at (1.8,-1.5) {$t_0$};
\node [right, opacity=0.5] at (7.8,-1.5) {$t_1$};
\node [right, opacity=0.5] at (11.8,-1.5) {$t_2$};
\node [right, opacity=0.5] at (14.8,-1.5) {$T$};
\node [right, opacity=0.5] at (-0.2,-1.5) {$0$};
\draw[-{angle 45[scale=5]}, opacity=0.5] (6.5,-1.8) -- (8.5,-1.8) node[text=black, pos=.6, xshift=7pt]{};
\node [right, opacity=1] at (1.8,5) {$\betata_5$};
\node [right, opacity=1] at (7.8,5) {$\betata_4$};
\node [right, opacity=1] at (9.8,5) {$\betata_3$};
\node [right, opacity=1] at (11.8,5) {$\betata_2$};
\node [right, opacity=1] at (13.8,5) {$\betata_1$};
\node [right, opacity=1] at (14.8,5) {$0$};
\node [right, opacity=1] at (-0.2,5) {$T$};
\node [right, opacity=1] at (7.2,5.5) {$\betata$};
\draw[-{angle 45[scale=5]}, opacity=1] (8.5,5.3) -- (6.5,5.3) node[text=black, pos=.6, xshift=7pt]{};
\draw[very thick, opacity=1] (0,-1)--(15,-1);
\draw[opacity=0.4] (0,-0.5)--(15,-0.5);
\draw[very thick,opacity=1] (13,-0.5)--(15,-0.5);
\draw[very thick,opacity=1] (0,0)--(15,0);
\draw[opacity=0.4] (0,0.5)--(14,0.5);
\draw[very thick,opacity=1] (7,0.5)--(14,0.5);
\draw[opacity=0.4] (0,1)--(12,1);
\draw[very thick,opacity=1] (0,1.5)--(12,1.5);
\draw[very thick,opacity=1] (0,2)--(10,2);
\draw[opacity=0.4] (0,2.5)--(8,2.5);
\draw[very thick,opacity=1] (0,3)--(8,3);
\draw[opacity=0.4] (0,3.5)--(2,3.5);
\draw[opacity=0.4] (0,4)--(2,4);
\draw[very thick,opacity=1] (1,4)--(2,4);
\draw[very thick,opacity=1] (0,4.5)--(2,4.5);
\draw[very thick,opacity=1] (14,0) to[out=30,in=-30] (14,0.5);
\draw[opacity=0.4] (12,-0.5) to[out=30,in=-30] (12,1);
\draw[very thick,opacity=1] (12,0.5) to[out=30,in=-30] (12,1.5);
\draw[very thick,opacity=1] (10,-1) to[out=30,in=-30] (10,2);
\draw[opacity=0.4] (8,1) to[out=30,in=-30] (8,2.5);
\draw[very thick,opacity=1] (8,1.5) to[out=30,in=-30] (8,3);
\draw[opacity=0.4] (2,0.5) to[out=30,in=-30] (2,3.5);
\draw[very thick,opacity=1] (2,1.5) to[out=30,in=-30] (2,4);
\draw[very thick,opacity=1] (2,2) to[out=30,in=-30] (2,4.5);
\draw[very thick,opacity=1] (7,1.5)--(7,0.5);
\draw[very thick,opacity=1] (1,3)--(1,4);
\draw[very thick,opacity=1] (13,0)--(13,-0.5);
\node[ultra thick] at (9.5,1.5) {$\bigtimes$} ;
\node[ultra thick] at (6,0) {$\bigtimes$} ;
\draw (3,1.5) circle (1.5mm) [fill=white!100];
\node[ultra thick] at (1,-0.5) {$\bigtimes$} ;
\end{tikzpicture}}
\caption{Illustration of the construction of the quenched ASG. The environment $\omega$ has jumps at forward times $t_0$, $t_1$, $t_2$; backward times $\betata_1,\ldots,\betata_5$ belong to the set of potential branching times $\tilde I_{\rm{bran}}$. Virtual lines are depicted grey or black; active lines are black. The ASG in $[0,T]$ consists of the set of active lines together with their connections and mutation marks.}
\lambdabel{fig:asgw}
\end{figure}
Let $V^\omega(\betata)\deltafeq \{i\in{\mathbb N}b: (i,\betata)\in\Vs^\omega\}$. {According to Lemma \ref{nbfinilines}} below, $V^\omega(\betata)$ is almost surely finite for all $\betata \in [0,T]$. Now, for $\betata \in I_{\textrm{coal}}$, let $(a_{\betata},b_{\betata})$ be the pair $(i,j)$ such that $\betata$ is an arrival time of $\lambdambda_{i,j}^{\blacktriangle}$. Since the Poisson processes $\lambdambda_{i,j}^{\blacktriangle}$, $i\neq j$, have distinct jump times, $(a_{\betata},b_{\betata})$ is uniquely defined. Let
\[\tilde I_{\textrm{coal}}\coloneqq \{ \betata \in I_{\textrm{coal}} : \ a_{\betata},b_{\betata} \in V^\omega(\betata) \}\quad\textrm{and}\quad\tilde I_{\textrm{bran}}\coloneqq \{ \betata \in I_{\textrm{bran}} : \ V^\omega(\betata-)\subsetneq V^\omega(\betata) \}.\]
Since $V^\omega(T)$ is independent of $\Lambda_{\textrm{coal}}$ and almost surely finite, it follows that $\tilde I_{\textrm{coal}}$ and $\tilde I_{\textrm{bran}}$ are almost surely finite. Let $\betata_1 < \cdots < \betata_m$ be the elements of $\tilde I_{\textrm{coal}} \cup \tilde I_{\textrm{bran}}$ (set $\betata_{0} \coloneqq 0$ and $\betata_{m+1} \coloneqq T$ for convenience). We define $V_{\rm{on}}^\omega(\betata)\subset V^\omega(\betata)$, the set of \emph{active} lines at time $\betata$ as follows (see also Fig. \ref{fig:asgw}). {For $\betata=0$, we set $V_{\rm{on}}^\omega(0)\coloneqq V^\omega(0)$ and, for $\betata\in(\betata_\ell,\betata_{\ell+1})$, we set $V_{\rm{on}}^\omega(\betata)\coloneqq V_{\rm{on}}^\omega(\betata_\ell)$. For $\betata=\betata_\ell\in \tilde I_{\textrm{coal}}$, we set {$V_{\rm{on}}^\omega(\betata_\ell)\coloneqq V_{\rm{on}}^\omega(\betata_\ell-)\setminus \{ a_{\betata_\ell} \}$ if $\{a_{\betata_\ell},b_{\betata_\ell}\}\subset V_{\rm{on}}^\omega(\betata_\ell-)$}, and
$V_{\rm{on}}^\omega(\betata_\ell)\coloneqq V_{\rm{on}}^\omega(\betata_\ell-)$ otherwise. Finally, for $\betata=\betata_\ell\in \tilde I_{\textrm{bran}}$, we set $V_{\rm{on}}^\omega(\betata_\ell)\coloneqq V_{\rm{on}}^\omega(\betata_\ell-) \cup J_\ell$,
where the set $J_\ell$ consists of the integers $i \in V^\omega(\betata_\ell) \setminus V^\omega(\betata_\ell-)$ such that: $i=i_\star(j,\betata_\ell)$ for some $j\in [n]\cap V_{\rm{on}}^\omega(\betata_\ell-)$, or $i=i_\star(j,\hat \betata_1,\ldots,\hat \betata_k,\betata_\ell)$ for some $(j,\hat \betata_{1},\ldots,\hat \betata_{k})\in i_\star^{-1}(V_{\rm{on}}^\omega(\betata_\ell-)\setminus [n])$ with $\hat{\betata}_k<\betata_\ell$.}
The {ASG on $[0,T]$} is then the branching-coalescing system starting with $n$ lines at levels in $[n]$, consisting at any time $\betata\in[0,T]$ of the lines in $V_{\rm{on}}^\omega(\betata)$, and where:
\betagin{itemize}
\item[(i)] {For} $\betata \in I_{\textrm{bran}}$ such that $V_{\rm{on}}^\omega(\betata-)\subsetneq V_{\rm{on}}^\omega(\betata)$ and $i\in V_{\rm{on}}^\omega(\betata)\setminus V_{\rm{on}}^\omega(\betata-)$, either there is $(j,\hat \betata_{1},\ldots,\hat \betata_{k})\in[n]\times S_{\textrm{bran}}$ with $\hat \betata_{k}<\betata$ such that $i=i_\star(j,\hat \betata_1,\ldots,\hat \betata_k,\betata)$, or there is $j\in[n]$ such that $i=i_\star(j,\betata)$.
In the first case, line $i_\star(j,\hat \betata_1,\ldots,\hat \betata_k)$ branches at time $\betata$ into $i_\star(j,\hat \betata_1,\ldots,\hat \betata_k)$ (continuing line) and $i$ (incoming line). In the second case, line $j$ branches at time $\betata$ into $j$ (continuing line) and $i$ (incoming line).
\item[(ii)] {For} $\betata \in I_{\textrm{coal}}$ such that $V_{\rm{on}}^\omega(\betata)\subsetneq V_{\rm{on}}^\omega(\betata-)$ and $i\in V_{\rm{on}}^\omega(\betata-)\setminus V_{\rm{on}}^\omega(\betata)$, $i=a_\betata$ and $b_{\betata}\in V_{\rm{on}}^\omega(\betata)$. Thus, lines $i$ and $b_\betata$ merge into $b_\betata$ at time $\betata$.
\item[(iii)] At each $\betata \in [0,T]$ that is an arrival time of $\lambdambda_{i}^{0}$ (resp. $\lambdambda_{i}^{1}$) for some $i \in V_{\rm{on}}^\omega(\betata)$, we mark line $i$ with a beneficial (resp. deleterious) mutation at time $\betata$.
\end{itemize}
It is straightforward to see that the so-constructed branching-coalescing particle system satisfies the requirements {(i), (ii), (iii'), (iv) and (v)} of Definition \ref{defannealdasg}. This ends the proof.
\end{proof}
It remains to prove the following lemma.
\betagin{lemma} \lambdabel{nbfinilines}
The set $V^\omegaega(\betata)$ is almost surely finite for any $\betata \in [0,T]$.
\end{lemma}
\betagin{proof}
We keep using the notation introduced in the proof of Proposition \ref{eqasg}. For $\deltalta>0$, we consider the environment $\omega^\deltalta$ defined via \eqref{defomdelta}. We couple the sets of virtual lines $\Vs^\omega$ and $\Vs^{\omega^\deltalta}$ associated to $\omega$ and $\omega^\deltalta$, respectively, by using the same random {sets $\Lambda_{\rm{sel}}$ and $\mathcal{U}_{\tilde \omegaega}$ (note that for $\betata \in I_{\tilde \omegaega}$ with $\Deltalta \tilde \omega(\betata)<\deltalta$, $\Deltalta \tilde \omega^\deltalta(\betata)=0<U_i(\betata)$).
Let $N_T^\omegaega(\betata)\coloneqq |V^\omega(\betata)|$ and $N_T^{\omegaega^\deltalta}(\betata)\coloneqq |V^{\omega^\deltalta}(\betata)|$, $\betata\in[0,T]$. Since $\betata\mapsto N_T^\omega(\betata)$ is non-decreasing, it is enough to prove that $N_T^\omega(T)<\infty$ almost surely. From the construction of the set of virtual lines, it follows that $N_T^{\omegaega^{\deltalta}}(\betata)$ increases almost surely to $N_T^{\omegaega}(\betata)$ as $\deltalta\to 0$.} By the monotone convergence theorem we get, for all $\betata \in [0,T]$:
\betagin{eqnarray}
\lim_{\deltalta \rightarrow 0} \mathbb{E}[N_T^{\omegaega^{\deltalta}}(\betata)\mid N_T^{\omegaega^{\deltalta}}(0)=n] = \mathbb{E}[N_T^{\omegaega}(\betata)\mid N_T^{\omegaega}(0)=n]. \lambdabel{monotcv}
\end{eqnarray}
Recall that $\tilde{\omega}^\deltalta(\betata)\deltafeq \omega^\deltalta(T)-\omega^\deltalta((T-\betata)-)$, $\betata\in[0,T]$, and that $\tilde \omegaega^{\deltalta}$ has finitely many jumps in $[0,T]$. Let $T_1 <\cdots < T_m$ be the jump times of $\tilde \omegaega^{\deltalta}$. The process $(N_T^{\omegaega^{\deltalta}}(\betata))_{\betata \in [0,T]}$ has the following transitions:
\betagin{enumerate}
\item on $(T_i,T_{i+1})$: $N_T^{\omegaega^{\deltalta}}$ jumps from $k$ to $k+1$ at rate $k\sigmagma,$
\item at time $T_i$: $N_T^{\omegaega^{\deltalta}}$ jumps from $k$ to $k+ B_{k}(\Deltalta \tilde \omegaega^{\deltalta}(T_i))$, where $B_{k}(\Deltalta \tilde \omegaega^{\deltalta}(T_i)) \sigmam \bindist{k}{\Deltalta \tilde \omegaega^{\deltalta}(T_i)}$.
\end{enumerate}
Note that for each $T_i$ we have $\Deltalta \tilde \omegaega^{\deltalta}(T_i)=\Deltalta \tilde \omegaega(T_i)$. This yields in particular
\betagin{eqnarray}
\mathbb{E}[N_T^{\omegaega^{\deltalta}}(T_i)\mid N_T^{\omegaega^{\deltalta}}(T_i-)]=(1+\Deltalta \tilde \omegaega(T_i))N_T^{\omegaega^{\deltalta}}(T_i-). \lambdabel{expectationatjumps}
\end{eqnarray}
Using successively Lemma \ref{expectationbetweenjumps} (see below) and \eqref{expectationatjumps} we get
\[ \mathbb{E}[N_T^{\omegaega^{\deltalta}}(T)\mid N_T^{\omegaega^{\deltalta}}(0)=n]\leq ne^{\sigmagma T} \prod_{\betata \in [0,T]: \Deltalta \tilde \omegaega (\betata) \geq \deltalta}(1+\Deltalta \tilde \omegaega(\betata)). \]
In particular, we have
\betagin{eqnarray}
\mathbb{E}[N_T^{\omegaega^{\deltalta}}(T)\mid N_T^{\omegaega^{\deltalta}}(0)=n]\leq ne^{\sigmagma T} \prod_{\betata \in I_{\tilde \omegaega} \cap [0,T]}(1+\Deltalta \tilde \omegaega(\betata)) < \infty. \lambdabel{expectationatt}
\end{eqnarray}
Letting $\deltalta$ go to $0$ in \eqref{expectationatt} and using \eqref{monotcv} we get
\[ \mathbb{E}[N_T^{\omegaega}(T)\mid N_T^{\omegaega}(0)=n]\leq ne^{\sigmagma T} \prod_{\betata \in I_{\tilde \omegaega} \cap [0,T]}(1+\Deltalta \tilde \omegaega(\betata)) < \infty. \]
This concludes the proof.
\end{proof}
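The transitions of the counting process described in the proof are easy to simulate, which gives a quick empirical check of the expectation bound; the following Python sketch (an illustration only, with made-up jump data) does this.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)

def line_count(n, sigma, jumps, T):
    """One path of the line-counting process in the proof above: pure
    branching at rate sigma per line between the jump times of the
    (truncated, time-reversed) environment, plus a Binomial(k, jump size)
    batch of new lines at each jump time.  `jumps` is a list of
    (time, size) pairs; all values are illustrative."""
    jumps = sorted(jumps)
    t, k, j = 0.0, n, 0
    while t < T:
        t_next = t + rng.exponential(1.0 / (sigma * k))
        if j < len(jumps) and jumps[j][0] <= min(t_next, T):
            t = jumps[j][0]
            k += rng.binomial(k, jumps[j][1])
            j += 1
        elif t_next <= T:
            t, k = t_next, k + 1
        else:
            t = T
    return k

# empirical check of the expectation bound n e^{sigma T} prod(1 + jump sizes)
n, sigma, T = 3, 0.7, 1.5
jumps = [(0.4, 0.3), (1.1, 0.2)]
samples = [line_count(n, sigma, jumps, T) for _ in range(20000)]
bound = n * np.exp(sigma * T) * np.prod([1 + s for _, s in jumps])
print("empirical mean:", np.mean(samples), "<= bound:", bound)
\end{verbatim}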
\betagin{lemma} \lambdabel{expectationbetweenjumps}
Let $0 \leq \betata_1 <\betata_2 \leq T$ be such that $\tilde \omegaega^{\deltalta}$ has no jump times on $(\betata_1,\betata_2]$. Then, we have \[\mathbb{E}[N_T^{\omegaega^\deltalta}(\betata_2)\mid N_T^{\omegaega^\deltalta}(\betata_1)] \leq e^{\sigmagma (\betata_2-\betata_1)} N_T^{\omegaega^\deltalta}(\betata_1).\]
\end{lemma}
\betagin{proof}
Since $\tilde \omegaega^{\deltalta}$ has no jump times on $(\betata_1,\betata_2]$, on this interval $N_T^{\omegaega^\deltalta}$ is the Markov chain on $\mathbb{N}$ with generator
$\mathcal{G}_N f(n) = \sigmagma n (f(n+1)-f(n)).$
Let {$f_M(n)\coloneqq n \wedge M$.} Note that, for any $M,n \geq 1$, we have $\mathcal{G}_N f_M(n) \leq \sigmagma f_M(n)$. Applying Dynkin's formula to $N_T^{\omegaega^\deltalta}$ on $(\betata_1,\betata_2]$ with the function $f_M$ we obtain
\betagin{align*}
\mathbb{E}\left[f_M(N_T^{\omegaega^\deltalta}(\betata_2)) \mid N_T^{\omegaega^\deltalta}(\betata_1)\right]&=f_M(N_T^{\omegaega^\deltalta}(\betata_1))+\mathbb{E}\left[\int_{\betata_1}^{\betata_2} \mathcal{G}_N f_M(N_T^{\omegaega^\deltalta}(\betata))\dd \betata\mid N_T^{\omegaega^\deltalta}(\betata_1)\right] \\
&\leq f_M(N_T^{\omegaega^\deltalta}(\betata_1))+\sigmagma \mathbb{E}\left[\int_{\betata_1}^{\betata_2} f_M(N_T^{\omegaega^\deltalta}(\betata))\dd \betata\mid N_T^{\omegaega^\deltalta}(\betata_1)\right] \\
&= f_M(N_T^{\omegaega^\deltalta}(\betata_1))+\sigmagma \int_{\betata_1}^{\betata_2} \mathbb{E}\left[f_M(N_T^{\omegaega^\deltalta}(\betata))\mid N_T^{\omegaega^\deltalta}(\betata_1)\right] \dd \betata.
\end{align*}
By Gronwall's lemma we deduce that $\mathbb{E}[f_M(N_T^{\omegaega^\deltalta}(\betata_2)) \mid N_T^{\omegaega^\deltalta}(\betata_1)]\leq e^{\sigmagma (\betata_2-\betata_1)} f_M(N_T^{\omegaega^\deltalta}(\betata_1))$. The result follows letting $M\to\infty$ and using the monotone convergence theorem.
\end{proof}
\subsection{Definitions related to Section \ref{s25}: the killed ASG}\lambdabel{s42}
The k-ASG as a branching-coalescing system of particles is defined as follows (see Fig. \ref{fig:backfor}).
\betagin{definition}[The annealed{/quenched} k-ASG]\lambdabel{defkilledasgannealed}
The \emph{annealed k-ASG} with parameters $\sigmagma,\theta,\nu_0,\nu_1$, and environment driven by a pure-jump subordinator with L\'evy measure $\mu$, of a sample of size $n$ is the branching-coalescing particle system $\bar{\Gs}\deltafeq(\bar{\Gs}(\betata))_{\betata\geq 0}$ starting with $n$ lines and with the following dynamic.
\betagin{itemize}
\item[(i)] {Each} line splits into two lines, an incoming line and a continuing line, at rate $\sigmagma$.
\item[(ii)] {Every} given pair of lines {coalesces} into a single line at rate $2$.
\item[(iii)] {Every} group of $k$ lines is subject to a simultaneous branching at rate $\sigmagma_{m,k}$ {(defined in Eq. \eqref{smk}),} where $m$ denotes the total number of lines in the ASG before the simultaneous branching event. At the simultaneous branching event, each line in the group involved splits into two lines, an incoming line and a continuing line.
\item[(iv)] {Each} line is killed at rate $\theta \nu_1$.
\item[(v)] {Each} line sends the process to the cemetery state $\dagger$ at rate $\theta \nu_0$.
\end{itemize}
{Let $\omega\in\Db^\star$. The \emph{quenched k-ASG} with parameters $\sigmagma,\theta,\nu_0,\nu_1$, and environment $\omega$, of a sample of size $n$ at time $T$ is the branching-coalescing particle system $\bar{\Gs}_{T}^\omega\deltafeq(\bar{\Gs}_{T}^{\omega}(\betata))_{\betata\geq 0}$ starting at $\betata=0-$ with $n$ lines and evolving according to (i), (ii), (iv) and (v) of the previous definition and replacing (iii) by
\betagin{itemize}
\item[(iii')] {If} at time $\betata$, we have $\Deltalta \omega(T-\betata)>0$, then any line splits into two lines, an incoming line and a continuing line, with probability $\Deltalta \omega(T-\betata)$, independently from the other lines.
\end{itemize}}
\end{definition}
\betagin{remark}
The branching-coalescing system underlying the quenched k-ASG is well-defined as it can be constructed on the basis of the quenched ASG.
\end{remark}
\subsection{Definitions related to Section \ref{s26}: the pruned lookdown ASG}\lambdabel{s43}
In this section, we give a detailed construction of the pLD-ASG, which incorporates the effect of the environment.
First, we construct the (annealed/quenched) \textit{lookdown ASG} (LD-ASG). The latter is the ASG equipped with a numbering of its lines encoding the hierarchy given by the pecking order. This is done as follows. Consider a realization of the (annealed/quenched) ASG in $[0,T]$ starting with one line, which is assigned level $1$. When the line at level $i$ coalesces with the line at level $j>i$, the resulting line is assigned level $i$; the level of each line having level $k > j$ before the coalescence is decreased by $1$. When a group of lines with levels $i_1< i_2<\ldots<i_N$ experiences a simultaneous branching, the incoming (resp. continuing) line descending from the line with level $i_k$ gets level $i_k+k-1$ (resp. $i_k+k$); a line having level $j$ with $i_k<j<i_{k+1}$ before the branching gets level $j+k$; a line having level $j>i_N$ before the branching gets level $j+N$. Mutations do not affect the levels. See Fig. \ref{LDASG}(left) for an illustration.
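The level bookkeeping at coalescence and simultaneous branching events is purely combinatorial; the following minimal Python sketch (ours, illustrative only; the function names are ad hoc) implements the two relabeling rules just described.
\begin{verbatim}
def relabel_after_coalescence(levels, i, j):
    """Levels after the line at level i coalesces with the line at level j > i.
    `levels` is the list of current levels 1..m; returns the map old -> new
    for the surviving lines (the merged line keeps level i)."""
    assert i < j
    new = {}
    for k in levels:
        if k == j:
            continue                      # line j merges into line i
        new[k] = k - 1 if k > j else k
    return new

def relabel_after_branching(levels, group):
    """Levels after a simultaneous branching of the lines whose levels lie in
    `group`.  For group members the value is (level of incoming, level of
    continuing); for the remaining lines it is the new level."""
    group = sorted(group)
    new = {}
    for lev in levels:
        k = sum(1 for g in group if g <= lev)     # rank of lev w.r.t. the group
        if lev in group:
            new[lev] = (lev + k - 1, lev + k)
        else:
            new[lev] = lev + k
    return new

# tiny check: 3 lines, the lines at levels 1 and 3 branch simultaneously
print(relabel_after_branching([1, 2, 3], [1, 3]))
# -> {1: (1, 2), 2: 3, 3: (4, 5)}
\end{verbatim}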
\betagin{figure}[t!]
\betagin{center}
\betagin{minipage}{0.5 \textwidth}
\centering
\scalebox{0.6}{\betagin{tikzpicture}
\draw[dashed,thick,opacity=0.3] (-0.5,-0.5) --(-0.5,4.5);
\draw[dashed,thick,opacity=0.3] (8.5,-0.5) --(8.5,4.5);
\draw[dashed,thick,opacity=0.3] (0.8,-0.5) --(0.8,0) (0.8,1)--(0.8,4.5);
\draw[dashed,thick,opacity=0.3] (2,-0.5) --(2,1) (2,2)--(2,3) (2,4)--(2,4.5);
\draw[dashed,thick,opacity=0.3] (6.2,-0.5) --(6.2,0) (6.2,1) --(6.2,2) (6.2,4)--(6.2,4.5);
\draw[dashed,thick,opacity=0.3] (7.5,-0.5) --(7.5,1) (7.5,2) --(7.5,4.5) ;
\draw[dashed,thick,opacity=0.3] (4.5,-0.5) --(4.5,1) (4.5,2) --(4.5,4.5) ;
\node [right] at (-0.8,-0.9) {$T$};
\node [right] at (8.3,-0.9) {$0$};
\node [right] at (3.5,-1.5) {$\betata$};
\draw[-{triangle 45[scale=5]}] (4.5,-1.2) -- (2.5,-1.2) node[text=black, pos=.6, xshift=7pt]{};
\node [above] at (8.3,1) {$1$};
\node [above] at (7.2,2) {$1$};
\node [above] at (7.2,1) {$2$};
\node [above] at (5.8,0) {$3$};
\node [above] at (5.8,1) {$4$};
\node [above] at (5.8,2) {$2$};
\node [above] at (5.8,4) {$1$};
\node [above] at (4.2,0) {$3$};
\node [above] at (4.2,2) {$2$};
\node [above] at (4.2,4) {$1$};
\node [above] at (1.7,0) {$5$};
\node [above] at (1.7,1) {$3$};
\node [above] at (1.7,2) {$4$};
\node [above] at (1.7,3) {$1$};
\node [above] at (1.7,4) {$2$};
\node [above] at (0.5,1) {$3$};
\node [above] at (0.5,2) {$4$};
\node [above] at (0.5,3) {$1$};
\node [above] at (0.5,4) {$2$};
\draw[thick] (-0.5,4) -- (6.2,4);
\draw[thick] (-0.5,2) -- (7.5,2);
\draw[thick] (4.5,1) -- (8.5,1);
\draw[thick] (0.8,0) -- (6.2,0);
\draw[thick] (2,1) -- (-0.5,1);
\draw[thick] (2,3) -- (-0.5,3);
\draw[-{triangle 45[scale=5]},thick] (4.5,2) -- (4.5,1) node[text=black, pos=.6, xshift=7pt]{};
\draw[-{triangle 45[scale=5]},thick] (0.8,1) -- (0.8,0);
\draw[-{open triangle 45[scale=5]},thick] (7.5,2) -- (7.5,1);
\draw[-{open triangle 45[scale=5]},thick] (2,3) -- (2,4);
\draw[-{open triangle 45[scale=5]},thick] (2,1) -- (2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,4) -- (6.2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,0) -- (6.2,1);
\node[thick] at (0.2,4) {$\bigtimes$} ;
\node[ultra thick] at (8,1) {$\bigtimes$} ;
\draw[thick] (3.6,2) circle (1.5mm) [fill=white!100];
\draw[thick] (1.5,1) circle (1.5mm) [fill=white!100];
\end{tikzpicture} }
\end{minipage}\betagin{minipage}{0.45 \textwidth}
\centering
\scalebox{0.6}{\betagin{tikzpicture}
\draw[dashed,thick,opacity=0.3] (-0.5,-0.5) --(-0.5,4.5);
\draw[dashed,thick,opacity=0.3] (8.5,-0.5) --(8.5,4.5);
\draw[dashed,thick,opacity=0.3] (0.2,-0.5) --(0.2,4.5) ;
\draw[dashed,thick,opacity=0.3] (1.5,-0.5) --(1.5,4.5) ;
\draw[dashed,thick,opacity=0.3] (3.6,-0.5) --(3.6,4.5) ;
\draw[dashed,thick,opacity=0.3] (8,-0.5) --(8,4.5) ;
\node [right] at (-0.8,-0.9) {$T$};
\node [right] at (-0.1,-0.9) {$\betata_4$};
\node [right] at (1.2,-0.9) {$\betata_3$};
\node [right] at (3.3,-0.9) {$\betata_2$};
\node [right] at (7.7,-0.9) {$\betata_1$};
\node [right] at (8.3,-0.9) {$0$};
\node [right] at (3.5,-1.5) {$\betata$};
\draw[-{triangle 45[scale=5]}] (4.5,-1.2) -- (2.5,-1.2) node[text=black, pos=.6, xshift=7pt]{};
\node [above] at (8.3,1) {$1$};
\node [above] at (7.2,2) {$1$};
\node [above] at (7.2,1) {$2$};
\node [above] at (5.8,0) {$3$};
\node [above] at (5.8,1) {$4$};
\node [above] at (5.8,2) {$2$};
\node [above] at (5.8,4) {$1$};
\node [above] at (4.2,0) {$3$};
\node [above] at (4.2,2) {$2$};
\node [above] at (4.2,4) {$1$};
\node [above] at (1.7,1) {$3$};
\node [above] at (1.7,2) {$4$};
\node [above] at (1.7,3) {$1$};
\node [above] at (1.7,4) {$2$};
\node [above] at (0.5,1) {$3$};
\node [above] at (0.5,3) {$1$};
\node [above] at (0.5,4) {$2$};
\node [above] at (7.8,1) {$1$};
\node [above] at (3.3,4) {$1$};
\node [above] at (3.3,2) {$2$};
\node [above] at (1.2,4) {$2$};
\node [above] at (1.2,3) {$1$};
\node [above] at (1.2,1) {$3$};
\node [above] at (0,3) {$1$};
\node [above] at (0,1) {$2$};
\draw[semithick] (3.6,4) -- (6.2,4);
\draw[semithick] (0.2,4) -- (6.2,4);
\draw[line width=2.5pt] (1.5,2) -- (4.5,2);
\draw[semithick] (4.5,2) -- (7.5,2);
\draw[line width=2.5pt] (4.5,1) -- (8.5,1);
\draw[semithick] (3.6,0) -- (6.2,0);
\draw[semithick] (2,1) -- (1.5,1);
\draw[line width=2.5pt] (-0.5,1) -- (1.5,1);
\draw[semithick] (2,3) -- (-0.5,3);
\draw[-{triangle 45[scale=5]},thick] (4.5,2) -- (4.5,1) node[text=black, pos=.6, xshift=7pt]{};
\draw[-{open triangle 45[scale=5]},thick] (7.5,2) -- (7.5,1);
\draw[-{open triangle 45[scale=5]},thick] (2,3) -- (2,4);
\draw[-{open triangle 45[scale=5]},thick] (2,1) -- (2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,4) -- (6.2,2);
\draw[-{open triangle 45[scale=5]},thick] (6.2,0) -- (6.2,1);
\node[thick] at (0.2,4) {$\bigtimes$} ;
\node[ultra thick] at (8,1) {$\bigtimes$} ;
\draw[thick] (3.6,2) circle (1.5mm) [fill=white!100];
\draw[thick] (1.5,1) circle (1.5mm) [fill=white!100];
\end{tikzpicture} }
\end{minipage}
\end{center}
\caption{LD-ASG (left) and its pLD-ASG (right). Backward time $\betata \in [0,T]$ runs from right to left. In the LD-ASG levels remain constant between the dashed lines, in particular, they are not affected by mutation events. In the pLD-ASG, lines are pruned at mutation events, where an additional updating of the levels takes place. The bold line in the pLD-ASG represents the immune line.}
\lambdabel{LDASG}
\end{figure}
The pLD-ASG is obtained via an appropriate pruning of the lines of the LD-ASG.
Before describing the pruning procedure, we identify a special line in the LD-ASG: \textit{the immune line}. The immune line at time $\beta$ is the line in the ASG present at time $\beta$ that is the ancestor of the starting line if all the lines at time $\beta$ are assigned the unfit type. In the absence of mutations, the immune line changes only if it is involved in a coalescence or branching event. If it is involved in a coalescence event, the merged line is the new immune line. If it is involved in a branching event, the continuing line is the new immune line.
In the presence of mutations, the pLD-ASG is constructed simultaneously with the immune line as follows. Let $\betata_1<\cdots<\betata_m$ be the times at which mutations occur in the LD-ASG in $[0,T]$. In the time interval $[0,\betata_1)$ the pLD-ASG coincides with the LD-ASG and the immune line evolves as before. Now, assume that we have constructed the pLD-ASG together with its immune line up to time $\betata_i-$, where the pLD-ASG contains $n$ lines and the immune line has level $k_0\in[n]$. The pLD-ASG is extended up to time $\betata_i$ according to the following rules:
\betagin{itemize}
\item[(i)] If at time $\beta_i$ a line with level $k\neq k_0$ at $\beta_i-$ is hit by a deleterious mutation, we stop tracing back this line; all the other lines are extended up to time $\beta_i$; all the lines with level $j>k$ at time $\beta_i-$ decrease their level by $1$ and the others keep their levels unchanged; the immune line continues on the same line (possibly with a different level).
\item[(ii)] If at time $\beta_i$ the line with level $k_0$ at $\beta_i-$ is hit by a deleterious mutation, we extend all the lines up to time $\beta_i$; the immune line gets level $n$, but remains on the same line; all the lines having at time $\beta_i-$ a level $j>k_0$ decrease their level by $1$, the others keep their levels unchanged.
\item[(iii)] If at time $\beta_i$ a line with level $k$ is hit by a beneficial mutation, we stop tracing back all the lines with level $j>k$; the remaining lines are extended up to time $\beta_i$, keeping their levels; the line hit by the mutation becomes the immune line.
\end{itemize}
In $[\betata_i,\betata_{i+1})$, $i\in[m-1]$, and $[\betata_m,T]$, the pLD-ASG evolves as the LD-ASG, and the immune line as in the case without mutations. The next result states the main feature of the pLD-ASG.
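Before stating it, the level and immune-line bookkeeping of rules (i)--(iii) can be summarized in the following minimal Python sketch (ours, purely illustrative; the function name is ad hoc), which returns the number of lines and the level of the immune line just after a mutation event.
\begin{verbatim}
def prune_at_mutation(n, k0, k, beneficial):
    """pLD-ASG update at a mutation event.

    n  : number of lines just before the event (levels 1..n)
    k0 : level of the immune line just before the event
    k  : level of the line hit by the mutation
    beneficial : True for a beneficial mutation, False for a deleterious one

    Returns (new number of lines, new level of the immune line);
    an illustrative transcription of rules (i)-(iii).
    """
    if beneficial:                       # rule (iii): cut everything above level k
        return k, k
    if k != k0:                          # rule (i): the hit line is pruned
        return n - 1, k0 - 1 if k0 > k else k0
    # rule (ii): the immune line is hit, nothing is pruned, it moves to the top level
    return n, n

print(prune_at_mutation(5, 3, 2, beneficial=False))  # -> (4, 2)
print(prune_at_mutation(5, 3, 3, beneficial=False))  # -> (5, 5)
print(prune_at_mutation(5, 3, 4, beneficial=True))   # -> (4, 4)
\end{verbatim}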
\betagin{lemma}\lambdabel{pruningdonnehtx}
If we assign types at (backward) time $T$ in the pLD-ASG, the true ancestor of the single line at (backward) time $0$ is the line of type $0$ with smallest level or, if all lines have type $1$, it is the immune line.
\end{lemma}
\betagin{proof}
The proof is analogous to the proof of {\cite[Theorem 4]{LKBW15} which covers the null environment case.}
\end{proof}
\section{Annealed results}\lambdabel{S5}
\subsection{Annealed results related to Section \ref{s25} }\lambdabel{s51}
{We start this section with the proof of the first part of Theorem \ref{thm2.3}, namely Eqs. \eqref{rmd} and \eqref{md}.}
\betagin{proof} [Proof of Theorem \ref{thm2.3}(Part I: reinforced and annealed moment duality)]
Let $H:[0,1]\times \mathbb{N}_0^\dagger\times [0,\infty)\to{\mathbb R}$ be defined via $H(x,n,j)\defeq(1-x)^nf(j)$. Let $(P_t)_{t\geq0}$ and $(Q_t)_{t\geq 0} $ denote the semigroups of $(X,J)$ and $(R,J)$, respectively, i.e.
\[P_t g(x,j)={\mathbb E}[g(X(t),J(t)+j)\mid X(0)=x]\quad\textrm{and}\quad Q_th(n,j)={\mathbb E}[h(R(t),J(t)+j)\mid R(0)=n].\]
Let $(\hat{R},\hat{J})$ be a copy of $(R,J)$, which is independent of $(X,J)$. A straightforward calculation shows that
\betagin{equation}\lambdabel{conmuta}
P_t(Q_s H)(x,n,j)={\mathbb{E}}[(1-X(t))^{\hat{R}(s)}f(J(t)+\hat{J}(s)+j)\mid X(0)=x,\, \hat{R}(0)=n]=Q_s(P_t H)(x,n,j).
\end{equation}
Let $G$ and $G_{\star}$ be the infinitesimal generators of $(X,J)$ and $(R,J)$, respectively. Clearly, for any $x\in[0,1]$, {the function} $(n,j)\mapsto P_t H(\cdot,n,\cdot)(x,j)$ belongs to the domain of $G_{\star}$. Hence, Eq. \eqref{conmuta} yields
\betagin{equation}\lambdabel{comgensemi}
P_t G_{\star} H(x,n,j)= G_{\star} P_t H(x,n,j).
\end{equation}
We claim that
\betagin{equation}\tag{Claim 4}\lambdabel{claimgeneduality}
G H(\cdot,n,\cdot)(x,j)=G_{\star} H(x,\cdot,\cdot)(n,j).
\end{equation}
Assume that \eqref{claimgeneduality} holds. Set ${u(t,x,n,j)\coloneqq} P_tH(\cdot,n,\cdot)(x,j)$ and ${v(t,x,n,j)\coloneqq} Q_tH(x,\cdot,\cdot)(n,j)$. The Kolmogorov forward equation for $Q$ yields
\betagin{equation}\lambdabel{Kq}
\frac{\dd}{\dd t} v(t,x,n,j)=G_{\star} v(t,x,\cdot,\cdot)(n,j).
\end{equation}
{Moreover, using the Kolmogorov forward equation for $P$, {\eqref{claimgeneduality}} and \eqref{comgensemi}, we get
\[\frac{\dd}{\dd t} u(t,x,n,j)=P_t G H(\cdot,n,\cdot)(x,j)= P_t G_{\star} H(x,\cdot,\cdot)(n,j)= G_{\star} u(t,x,\cdot,\cdot)(n,j).\]
Hence, $u$ and $v$} satisfy Eq. \eqref{Kq}. Since $u(0,x,n,j)=(1-x)^nf(j)=v(0,x,n,j)$, {Eq.~\eqref{rmd} follows from the uniqueness of the initial value problem associated with $G_{\star}$ (see \cite[Thm. 1.3]{D65}). Eq.~\eqref{md} is obtained using $f\equiv 1$ in Eq.~\eqref{rmd}. It remains to prove \eqref{claimgeneduality}.} Note first that
\betagin{align}\lambdabel{gx}
G H(\cdot,n,\cdot)(x,j)=&\left[n(n-1)x(1-x)^{n-1}-\left(\sigmagma x(1-x)+ \theta\nu_0(1-x)-\theta\nu_1 x \right)n(1-x)^{n-1}\right]f(j)\nonumber\\
&+(1-x)^n\int_{(0,1]}\left[(1-xz)^nf(j+z)-f(j)\right]\mu(\dd z).
\end{align}
In addition,
\betagin{align}\lambdabel{gr}
G_\star H(x,\cdot,\cdot)(n,j)=&n((n-1)+\theta\nu_1)[(1-x)^{n-1}\!-(1-x)^n]f(j)\nonumber\\
&+\sigmagma n[(1-x)^{n+1}\!-(1-x)^n]f(j)- n\theta\nu_0 (1-x)^nf(j)\nonumber\\
&+\sum\limits_{k=0}^n\binom{n}{k}\int_{(0,1)}y^k (1-y)^{n-k}[(1-x)^{n+k}f(j+y)-(1-x)^nf(j)]\mu(\dd y)\nonumber\\
=&\left[n(n-1)x(1-x)^{n-1}-\left(\sigmagma x(1-x)+ \theta\nu_0(1-x)-\theta\nu_1 x \right)n(1-x)^{n-1}\right]f(j)\nonumber\\
&+(1-x)^n\sum\limits_{k=0}^n\binom{n}{k}\int_{(0,1)}y^k (1-y)^{n-k}[(1-x)^kf(j+y)-f(j)]\mu(\dd y).
\end{align}
Moreover, using Fubini's theorem, we obtain
\[\sum\limits_{k=0}^n\binom{n}{k}\int_{(0,1)}y^k (1-y)^{n-k}[(1-x)^kf(j+y)-f(j)]\mu(\dd y)=\int_{(0,1)}\left[(1-xy)^nf(j+y)-f(j)\right]\mu(\dd y).
\]
{Hence, \eqref{claimgeneduality} follows after comparing \eqref{gr} with \eqref{gx}}.
\end{proof}
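The binomial identity behind the last display (the Fubini step only interchanges the sum with the integral over $\mu$) can be spot-checked numerically. The following lines (purely illustrative) evaluate both sides of $\sum_{k=0}^n\binom{n}{k} y^k(1-y)^{n-k}[(1-x)^kf(j+y)-f(j)]=(1-xy)^nf(j+y)-f(j)$ for arbitrary values of $f(j)$ and $f(j+y)$.
\begin{verbatim}
from math import comb

def lhs(n, x, y, fj, fjy):
    return sum(comb(n, k) * y**k * (1 - y)**(n - k) * ((1 - x)**k * fjy - fj)
               for k in range(n + 1))

def rhs(n, x, y, fj, fjy):
    return (1 - x * y)**n * fjy - fj

# spot-check with arbitrary values
n, x, y, fj, fjy = 7, 0.3, 0.6, 1.7, 0.4
print(abs(lhs(n, x, y, fj, fjy) - rhs(n, x, y, fj, fjy)) < 1e-12)  # True
\end{verbatim}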
{We now prove Theorem \ref{thm2.4}-(1), which characterizes the asymptotic type frequency in the annealed setting.}
\betagin{proof}[Proof of Theorem \ref{thm2.4}(Asymptotic type frequency)-(1)]
We first show that $X(t)$ has a limit in distribution as $t\to \infty$. Since $\theta >0$ and $\nu_0\in (0,1)$, Eq. \eqref{md} in Theorem \ref{thm2.3} implies that, for any $x\in[0,1]$, the limit of $\mathbb{E}[(1-X(t))^n\mid X(0)=x]$ as $t\to\infty$ exists and satisfies
\begin{eqnarray}
\lim_{t \rightarrow \infty} \mathbb{E}[(1-X(t))^n\mid X(0)=x]=\pi_n,\quad n\in{\mathbb N}_0, \label{mome}
\end{eqnarray}
where $\pi_n$ is defined in \eqref{defpin}. Recall that probability measures on $[0,1]$ are completely determined by their positive integer moments and that convergence of positive integer moments implies convergence in distribution. Therefore, Eq. \eqref{mome} implies that there is $\eta_X\in\Ms_1([0,1])$ such that, for any $x\in[0,1]$, conditionally on $\{X(0)=x\}$, the law of $X(t)$ converges weakly to $\eta_X$ as $t\to\infty$ and
\[\pi_n=\int_{[0,1]} (1-z)^n\eta_X(\dd z), \quad n\in{\mathbb N}.\]
Using dominated convergence, the convergence of the law of $X(t)$ towards $\eta_X$ as $t\to\infty$ extends to any initial distribution. As a consequence of this and the Markov property of $X$, it follows that $X$ admits a unique stationary distribution, which is given by $\eta_X$.
Finally, a first step decomposition for the probability of absorption in $0$ of the process $R$ yields
\[{\left[n(\sigmagma+\theta+ n-1)+\sum_{k=1}^n \binom{n}{k}\sigmagma_{n,k}\right]} \pi_n= n\sigmagma \pi_{n+1}+ n(\theta\nu_1+ n-1)\pi_{n-1}+{\sum\limits_{k=1}^n \binom{n}{k}\sigmagma_{n,k} \pi_{n+k}}.\]
Dividing both sides in the previous identity by $n$ and rearranging terms yields Eq. \eqref{recwn}.
\end{proof}
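The recursion \eqref{recwn} can be solved numerically after truncation. The following Python sketch (illustrative only, not part of the proof) solves the first-step system displayed above, assuming a point-mass L\'evy measure $\mu=\lambda\delta_{y_0}$, for which, consistently with the computations in this section, $\sigma_{n,k}=\lambda\, y_0^k(1-y_0)^{n-k}$ (cf. \eqref{smk}); the values $\pi_m$ are treated as negligible beyond the truncation level.
\begin{verbatim}
import numpy as np
from math import comb

sigma, theta, nu0, nu1 = 0.5, 1.0, 0.3, 0.7   # illustrative parameters
lam, y0 = 2.0, 0.4                            # mu = lam * delta_{y0}
def sig(n, k):                                # sigma_{n,k} for this toy mu
    return lam * y0**k * (1 - y0)**(n - k)

N = 100                                       # truncation: pi_m ~ 0 for m > N
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0], b[0] = 1.0, 1.0                      # pi_0 = 1
for n in range(1, N + 1):                     # first-step decomposition, row n
    A[n, n] = n * (sigma + theta + n - 1) + sum(comb(n, k) * sig(n, k)
                                                for k in range(1, n + 1))
    A[n, n - 1] -= n * (theta * nu1 + n - 1)
    if n + 1 <= N:
        A[n, n + 1] -= n * sigma
    for k in range(1, n + 1):
        if n + k <= N:
            A[n, n + k] -= comb(n, k) * sig(n, k)
pi = np.linalg.solve(A, b)
print(np.round(pi[:6], 4))                    # pi_0, ..., pi_5
\end{verbatim}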
\subsection{Annealed results related to Section \ref{s26}}\lambdabel{s52}
{In this section we prove Theorem \ref{thm2.6}-(1) and Corollary \ref{cor2.7}. Before that we prove the following lemma relating the ancestral type distribution at time $T$ to the number $L(T)$ of lines in the pLD-ASG at time $T$. }
\betagin{lemma}\lambdabel{exprhtannealed}
For all $T \geq 0$ and $x\in[0,1]$, we have
\betagin{equation}
h_T(x)=1-\mathbb{E}[(1-x)^{L(T)}\mid L(0)=1].\lambdabel{pldasgatd}
\end{equation}
\end{lemma}
\betagin{proof} {Since types are assigned to the $L(T)$ lines present in the pLD-ASG at {(backward)} time $T$ according to independent Bernoulli random variables with parameter $x$, the result follows from Lemma \ref{pruningdonnehtx}.}
\end{proof}
{The next result is crucial to describe the asymptotic behavior of $h_T(x)$ as $T\to\infty$.}
\betagin{lemma}[Positive recurrence]\lambdabel{pr}
The process $L$ is positive recurrent.
\end{lemma}
\betagin{proof}
Since $L$ is irreducible, it is enough to prove that the state $1$ is positive recurrent. This holds if $\theta\nu_0>0$, because in this case the hitting time of $1$ is upper bounded by an exponential random variable with parameter $\theta\nu_0$. Now, assume that $\theta=0$ (the case $\theta\nu_0=0$ and $\theta\nu_1>0$ can easily be reduced to this case). We proceed in a similar way as in \cite[Proof of Lem. 2.3]{F13}. Define the function $f:{\mathbb N}\to{\mathbb R}_+$ via
\[f(n)\coloneqq \sum_{i=1}^{n-1} \frac1{i}\ln\left(1+\frac{1}{i}\right),\]
with the convention that an empty sum equals $0$. Note that $f$ is bounded. Note also that, for $n>1$,
\[n(n-1)(f(n-1)-f(n))=-n\ln\left(1+\frac{1}{n-1}\right)\leq -1.\]
This follows using $x=1/n$ in the inequality {$e^{x} < 1/(1-x)$}, which holds for $x<1$. For any $\varepsilon>0$, set ${n_0(\varepsilon)\coloneqq} \lfloor 1/\varepsilon\rfloor +1.$ Note that for $n>n_0(\varepsilon)$
\[n(f(n+i)-f(n))={n\sum_{j=n}^{n+i-1}\frac1{j}\ln\left(1+\frac{1}{j}\right)}\leq n\ln\left(1+\frac1n\right)i\varepsilon\leq i\varepsilon.\]
Hence, for {$n> n_0(\varepsilon)$,}
\betagin{align*}
G_L f(n)&\leq -1 +\frac{\varepsilon}{n}\sum_{i=1}^n \binom{n}{i}\sigmagma_{n,i} \,i+\sigmagma\varepsilon=-1+\varepsilon\int_{(0,1)}\sum_{i=1}^n\binom{n-1}{i-1}y^i(1-y)^{n-i}\mu(dy)+\sigmagma \varepsilon\\
&=-1+\varepsilon\left(\int_{(0,1)}y\mu(dy)+\sigmagma\right),
\end{align*}
where $\sigmagma_{n,i}$ is defined in \eqref{smk}. Set $m_0\coloneqq n_0(\varepsilon_\star)$, where $\varepsilon_\star\coloneqq 1/\big(2\int_{(0,1)}y\mu(dy)+2\sigmagma\big)$ (and we set $m_0\coloneqq 1$ in the particular case $\mu=0$ and $\sigmagma=0$). In particular, for {$n> m_0$}, we have $G_Lf(n)\leq -1/2.$
Define $T_{m_0}\coloneqq \inf\{\betata>0: L(\betata){\leq m_0}\}$. Applying Dynkin's formula to $L$ with the function $f$ and the stopping time $T_{m_0}\wedge k$, $k\in{\mathbb N}b$, we obtain
\[{\mathbb E}b\left[f(L(T_{m_0}\wedge k))\mid L(0)=n\right]=f(n)+{\mathbb E}b\left[\int_0^{T_{m_0}\wedge k}G_Lf(L(\betata))d\betata\mid L(0)=n\right].\]
Therefore, for {$n> m_0$}, we have
\[0\leq {\mathbb E}b\left[f(L(T_{m_0}\wedge k))\mid L(0)=n\right]\leq f(n)-\frac1{2}{\mathbb E}b[T_{m_0}\wedge k\mid L(0)=n].\]
Hence,
${\mathbb E}b[T_{m_0}\wedge k\mid L(0)=n]\leq 2f(n).$
Letting $k\to\infty$ yields
${\mathbb E}b[T_{m_0}\mid L(0)=n]\leq 2f(n)<\infty.$
Since $L$ is irreducible, the result follows by standard arguments.
\end{proof}
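The Lyapunov-type estimate above is easy to check numerically. The sketch below (illustrative only; we again take $\theta=0$ and the toy measure $\mu=\lambda\delta_{y_0}$ with the same ad hoc parameters as before) evaluates $G_Lf(n)$ directly from the transition rates of $L$ that can be read off \eqref{gl}, and verifies that it stays below $-1/2$ beyond $m_0$.
\begin{verbatim}
from math import comb, log

sigma, lam, y0 = 0.5, 2.0, 0.4       # theta = 0; mu = lam * delta_{y0}
def sig(n, k):
    return lam * y0**k * (1 - y0)**(n - k)

F = [0.0]                            # F[m] = sum_{i=1}^{m} (1/i) log(1 + 1/i)
for i in range(1, 800):
    F.append(F[-1] + log(1 + 1.0 / i) / i)
def f(n):                            # f(n) = sum_{i=1}^{n-1} (1/i) log(1 + 1/i)
    return F[n - 1]

def GLf(n):                          # generator of L applied to f (theta = 0)
    up = sigma * n * (f(n + 1) - f(n))               # single branchings
    down = n * (n - 1) * (f(n - 1) - f(n))           # coalescences
    env = sum(comb(n, k) * sig(n, k) * (f(n + k) - f(n))
              for k in range(1, n + 1))              # simultaneous branchings
    return up + down + env

eps_star = 1.0 / (2 * (lam * y0 + sigma))            # here int y mu(dy) = lam * y0
m0 = int(1 / eps_star) + 1
print(all(GLf(n) <= -0.5 for n in range(m0 + 1, m0 + 300)))   # True
\end{verbatim}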
The first part of the proof of Theorem \ref{thm2.6}-(1) builds on the previous two lemmas. The system of equations \eqref{fr} characterizing the tail probabilities $\mathbb{P}(L(\infty) > n)$ is obtained via Siegmund duality. More precisely, consider the continuous-time Markov chain $D\coloneqq (D(\betata))_{\betata \geq 0}$ with values in ${\mathbb N}b^\dagger\deltafeq {\mathbb N}b\cup\{\dagger\}$ with rates
\[q_D(i,j)\coloneqq \left\{\betagin{array}{ll}
(i-1)(\sigmagma + \sigmagma_{i-1,1})&\textrm{if $j=i-1$, $i>1$},\\
(i-1)\theta\nu_1 +i(i-1) &\textrm{if $j=i+1$, $i>1$},\\
\gamma_{i,j}-\gamma_{i,j-1} &\textrm{if {$ 1\leq j< i$}, $i>2$},\\
(i-1)\theta\nu_0&\textrm{if $j=\dagger$, $i>1$,}
\end{array}\right.
\]
where we recall that $\gamma_{i,j}\coloneqq \sum_{k=i-j}^{j}\binom{j}{k}\sigmagma_{j,k}$ {if $1 \leq j<i\leq 2j$ and $\gamma_{i,j}\coloneqq 0$ otherwise,} and $\dagger$ is a cemetery point. Note that $1$ and $\dagger$ are absorbing states of $D$. {The next result relates $L$ and $D$ via duality.}
\betagin{lemma}[Siegmund duality]\lambdabel{sd}
The processes $L$ and $D$ are Siegmund dual, i.e.
\[\mathbb{P}\left(L(\beta) \geq d\mid L(0)=\ell\right)=\mathbb{P}\left(\ell\geq D(\beta) \mid D(0)=d\right),\qquad \textrm{for all }\ell, d\in \mathbb{N},\ \beta\geq0.\]
\end{lemma}
\betagin{proof}
We consider the function $H:\mathbb{N}\times(\mathbb{N}\cup\{\dagger\})\rightarrow \{0,1\}$ defined via $H(\ell,d)\coloneqq 1_{\{\ell\geq d\}}$ (i.e. $H(\ell,d)=1$ if $\ell\geq d$ and $H(\ell,d)=0$ otherwise) and $H(\ell,\dagger)\coloneqq 0$, $\ell,d\in\mathbb{N}$. Let $G_L$ and $G_D$ be the infinitesimal generators of $L$ and $D$, respectively. By \cite[Prop. 1.2]{Jaku} we only have to show that $G_L H(\cdot,d)(\ell)=G_D H(\ell,\cdot)(d)$ for all $\ell,d\in\mathbb{N}$. From \eqref{kratespldasg}, we have
\betagin{align}\lambdabel{gl}
G_L H(\cdot,d)(\ell)&=\sigmagma \ell \,1_{\{\ell+1=d\}} - (\ell-1)(\ell+\theta\nu_1)\,1_{\{\ell=d\}} -\theta\nu_0 \sum\limits_{j=1}^{\ell-1} 1_{\{j<d\leq \ell\}}+ \sum\limits_{k=1}^\ell\binom{\ell}{k}\sigmagma_{\ell,k} \, 1_{\{\ell< d\leq \ell+k\}}\nonumber\\
&=\sigmagma \ell \,1_{\{\ell+1=d\}} - (\ell-1)(\ell+\theta\nu_1)\,1_{\{\ell=d\}} -\theta\nu_0 (d-1) 1_{\{d\leq \ell\}}+ \gamma_{d,\ell} \, 1_{\{\ell< d\}}.
\end{align}
Similarly, we have
\betagin{align}\lambdabel{gd}
G_D H(\ell,\cdot)(d)&={\sigmagma (d-1) \,1_{\{d-1=\ell\}}} - (d-1)(d+\theta\nu_1)\,1_{\{\ell=d\}} -\theta\nu_0 (d-1) 1_{\{d\leq \ell\}}\nonumber\\
& + \sum\limits_{j=1}^{d-1}\left(\gamma_{d,j}-\gamma_{d,j-1}\right) \, 1_{\{j\leq \ell <d\}}.
\end{align}
Summation by parts yields $\sum_{j=1}^{d-1}\left(\gamma_{d,j}-\gamma_{d,j-1}\right) \, 1_{\{j\leq \ell<d\}}{= \gamma_{d,\ell}1_{\{\ell< d\}}}$. Thus, the result follows comparing \eqref{gd} with \eqref{gl}.
\end{proof}
{Now, we have all the ingredients to prove Theorem \ref{thm2.6}-(1).}
\betagin{proof}[Proof of Theorem \ref{thm2.6}(Ancestral type distribution)-(1)]
Since $L$ is positive recurrent, $L(T)$ converges in distribution as $T\to\infty$ towards the stationary distribution $\eta_L$. In particular, we infer from Eq. \eqref{pldasgatd} that the limit $h(x)$ of $h_T(x)$ as $T\to\infty$ exists and satisfies
\betagin{align*}
h(x)&=1-{\mathbb E}b[(1-x)^{L(\infty)}]=1-\sum_{\ell=1}^\infty {\mathbb P}b(L(\infty)=\ell)(1-x)^\ell\\
&=\sum_{\ell=0}^\infty{\mathbb P}b(L(\infty)>\ell)(1-x)^\ell-(1-x)\sum_{\ell=1}^\infty{\mathbb P}b(L(\infty)>\ell-1)(1-x)^{\ell-1},
\end{align*}
and Eq. \eqref{represh(x)tailpldasg} follows. It remains to prove \eqref{fr}. From Lemma \ref{sd} we infer that $a_n=d_{n+1}$, where
\[d_n\coloneqq \mathbb{P}(\exists \betata>0: D(\betata)=1\mid D(0)=n), \quad n\geq 1.\]
Applying a first step decomposition to the process $D$, we obtain, for $n>1$,
\betagin{equation}\lambdabel{r1}
\left[(n-1)(\sigmagma +\theta +n ) +{\gamma_{n,n-1}}\right]d_n= (n-1)\sigmagma d_{n-1}+(n-1)(\theta\nu_1+n)d_{n+1} + \sum\limits_{j=1}^{n-1} (\gamma_{n,j}-\gamma_{n,j-1})d_j.
\end{equation}
Using summation by parts and rearranging terms in \eqref{r1} yields
\betagin{equation}\lambdabel{r2}
(\sigmagma +\theta +n ) d_n= \sigmagma d_{n-1}+(\theta\nu_1+n)d_{n+1} +\frac{1}{n-1} \sum\limits_{j=1}^{n-1} \gamma_{n,j}(d_j-d_{j+1}),\quad n>1.
\end{equation}
The result follows.
\end{proof}
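The linear system \eqref{fr} (equivalently, the first-step system \eqref{r1} for the absorption probabilities $d_n$ of $D$) can also be solved numerically after truncation. The following sketch (illustrative only; same ad hoc point-mass measure $\mu=\lambda\delta_{y_0}$ as in the earlier sketches, and $d_m$ treated as negligible beyond the truncation level) computes $d_n$, and hence the tail probabilities $a_n=d_{n+1}$ entering \eqref{represh(x)tailpldasg}.
\begin{verbatim}
import numpy as np
from math import comb

sigma, theta, nu0, nu1 = 0.5, 1.0, 0.3, 0.7   # illustrative parameters
lam, y0 = 2.0, 0.4                            # mu = lam * delta_{y0}
def sig(n, k):
    return lam * y0**k * (1 - y0)**(n - k)
def gamma(i, j):                              # gamma_{i,j} as defined below (fr)
    if 1 <= j < i <= 2 * j:
        return sum(comb(j, k) * sig(j, k) for k in range(i - j, j + 1))
    return 0.0

N = 100                                       # truncation: d_m ~ 0 for m > N
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0] = 1.0                                 # index 0 is unused
A[1, 1], b[1] = 1.0, 1.0                      # d_1 = 1: state 1 is absorbing for D
for n in range(2, N + 1):                     # first-step system (r1), row n
    A[n, n] = (n - 1) * (sigma + theta + n) + gamma(n, n - 1)
    A[n, n - 1] -= (n - 1) * sigma
    if n + 1 <= N:
        A[n, n + 1] -= (n - 1) * (theta * nu1 + n)
    for j in range(1, n):
        A[n, j] -= gamma(n, j) - gamma(n, j - 1)
d = np.linalg.solve(A, b)
print(np.round(d[1:7], 4))                    # d_1,...,d_6;  a_n = d_{n+1}
\end{verbatim}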
\betagin{proof}[Proof of Corollary \ref{cor2.7}]
Since $\theta = 0$, the line-counting processes $R$ and $L$ have the same distribution. Hence, combining Lemma \ref{exprhtannealed} and {\eqref{md} (from Theorem \ref{thm2.3})} applied to $n=1$, we obtain
\betagin{equation}\lambdabel{idzm}
h_T(x)={\mathbb E}b[X(T)\mid X(0)=x],
\end{equation}
which proves the first part of the statement.
Moreover, for $\theta=0$, $X$ is a bounded submartingale, and hence $X(T)$ has almost surely a limit as $T\to\infty$, which we denote by $X(\infty)$. Letting $T\to \infty$, in the identity \eqref{idzm} yields
\betagin{equation}\lambdabel{auxh0}
h(x)={\mathbb E}b[X(\infty)\mid X(0)=x].
\end{equation}
Moreover, using {\eqref{md} (from Theorem \ref{thm2.3})} with $n=2$, we get
\[{\mathbb E}b[(1-X(T))^2\mid X(0)=x]={\mathbb E}b[(1-x)^{L(T)}\mid L(0)=2].\]
Letting $T\to \infty$ and using that $L$ is positive recurrent, we obtain
\[{\mathbb E}b[(1-X(\infty))^2\mid X(0)=x]=1-h(x).\]
Plugging \eqref{auxh0} in the previous identity yields the desired result.
\end{proof}
\betagin{proof}[Proof of Proposition \ref{comparison}]
Using Eq. \eqref{fr} in Theorem \ref{thm2.6} for the two models, we obtain, for $n\in{\mathbb N}b$,
\[(n+1)\rho_{n+1}^{\rm{sel}}=\sigmagma_\mu \rho_{n}^{\rm{sel}},\quad\textrm{and}\quad
(n+1)\rho_{n+1}^{\rm{env}}= \frac{1}{n}\sum_{j=1}^n \gamma_{n+1,j}\,\rho_j^{\rm{env}}.\]
Multiplying separately these equations with $z^n$, $z\in[0,1]$, and summing over $n\in{\mathbb N}b$, one obtains
\[(p^{\rm{sel}})'(z)=\rho_1^{\rm{sel}}+\sigma_\mu\, p^{\rm{sel}}(z),\quad\textrm{and}\quad (p^{\rm{env}})'(z)=\rho_1^{\rm{env}}+\sum_{j=1}^\infty \rho_j^{\rm{env}} g_j(z),\]
with $g_j(z)\coloneqq \sum_{n=j}^{2j-1}\gamma_{n+1,j}\frac{z^n}{n}.$ Solving the ODE for $p^{\rm{sel}}$ via variation of constants, and using that $p^{\rm{sel}}(0)=0$ and $p^{\rm{sel}}(1)=1$, yields the desired formulas for $\rho_1^{\rm{sel}}$ and $p^{\rm{sel}}$ (see also \cite[Thm. 6.1]{CM19}).
Now, using the definition of the coefficients $\gamma_{n+1,j}$ (defined below \eqref{fr}) followed by a straightforward calculation, one obtains
\[g_j(z) =\sum_{k=1}^{j}\binom{j}{k}\sigmagma_{j,k} \int_0^z \frac{u^{j-1}-u^{k+j-1}}{1-u} \dd u=\int_0^z\dd u\, \frac{u^{j-1}}{1-u}\int_{(0,1)}\mu(\dd y)\,(1-(1-y(1-u))^j),\]
where we have also used the definition of the coefficients $\sigmagma_{m,k}$ (see \eqref{smk}). Since $(1-h)^j\geq 1-jh$ for $h\in(0,1)$, we infer that $g_j(z)\leq \sigmagma_\mu z^{j}$ with equality only if $z=0$ or $j=1$. We conclude that
\[(p^{\rm{env}})'(z)<\rho_1^{\rm{env}}+\sigmagma_\mu \, p^{\rm{env}}(z), \quad z\in(0,1].\]
Letting $f(z)\coloneqq \rho_1^{\rm{env}}+\sigmagma_\mu \, p^{\rm{env}}(z)$ we then have $f'(z)/f(z) \leq \sigmagma_\mu$ so, after integration, $\log(f(z)/f(0)) \leq \sigmagma_\mu z$. Since $p^{\rm{env}}(0)=0$, this yields
\[p^{\rm{env}}(z)\leq \rho_1^{\rm{env}}\left(\frac{e^{\sigmagma_\mu z}-1}{\sigmagma_\mu}\right)=\frac{\rho_1^{\rm{env}}}{\rho_1^{\rm{sel}}}\,p^{\rm{sel}}(z).\]
Moreover, since $p^{\rm{env}}(1)=p^{\rm{sel}}(1)=1$, we conclude that $\rho_1^{\rm{env}}\geq\rho_1^{\rm{sel}}$. Assume now that $\rho_1^{\rm{env}}=\rho_1^{\rm{sel}}$. It follows that $p^{\rm{env}}(z)\leq p^{\rm{sel}}(z)$, for $z\in[0,1]$. Hence,
\[1=\int_{0}^1 (p^{\rm{env}})'(z)\dd z<\int_{0}^1 (\rho_1^{\rm{env}}+\sigmagma_\mu \, p^{\rm{env}}(z))\dd z\leq\int_{0}^1 (\rho_1^{\rm{sel}}+\sigmagma_\mu \, p^{\rm{sel}}(z))\dd z= \int_{0}^1 (p^{\rm{sel}})'(z)\dd z=1,\]
which is a contradiction. Thus, $\rho_1^{\rm{env}}>\rho_1^{\rm{sel}}$. In particular, for $z\neq 0$ sufficiently small, $p^{\rm{env}}(z)>p^{\rm{sel}}(z)$. Hence, the last statement follows using that $h^{\rm{env}}(z)=1-p^{\rm{env}}(1-z)$ and $h^{\rm{sel}}(z)=1-p^{\rm{sel}}(1-z)$.
\end{proof}
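As a sanity check, the closed form for $p^{\rm sel}$ used above, namely $p^{\rm sel}(z)=\rho_1^{\rm sel}(e^{\sigma_\mu z}-1)/\sigma_\mu$ with $\rho_1^{\rm sel}=\sigma_\mu/(e^{\sigma_\mu}-1)$, can be compared with a direct numerical integration of its ODE; a minimal sketch with an arbitrary value of $\sigma_\mu$:
\begin{verbatim}
import math

sigma_mu = 1.3                       # arbitrary illustrative value
rho1 = sigma_mu / (math.exp(sigma_mu) - 1.0)
def p_sel(z):                        # closed form implied by the ODE, p(0)=0, p(1)=1
    return rho1 * (math.exp(sigma_mu * z) - 1.0) / sigma_mu

# Euler integration of p'(z) = rho1 + sigma_mu * p(z), p(0) = 0, up to z = 1
n_steps = 100000
h = 1.0 / n_steps
p = 0.0
for _ in range(n_steps):
    p += h * (rho1 + sigma_mu * p)
print(round(p, 3), round(p_sel(1.0), 3))   # both ~ 1.0
\end{verbatim}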
\section{Quenched results}\lambdabel{S6}
\subsection{Quenched results related to Section \ref{s25}}\lambdabel{s61}
In this section we prove quenched parts of the results stated in Section \ref{s25}. We start with the proof of the second part of Theorem \ref{thm2.3}, which establishes the quenched moment duality \eqref{quenchedual} for almost every environment $\omega$.
\betagin{proof}[Proof of Theorem \ref{thm2.3} (Part II: quenched moment duality)]
Since both sides of \eqref{quenchedual} are right-continuous in $T$, it is sufficient to prove that, for any bounded measurable function $g:\Db^\star_T\to{\mathbb R}b$,
\betagin{equation}\lambdabel{condex}
{\mathbb E}b[(1-X^J(T))^n g((J_s)_{s\in[0,T]})\mid {X^J(0)=x}]={\mathbb E}b[(1-x)^{{R_T^J(T-)}} g((J_s)_{s\in[0,T]})\mid {R_T^J(0-)}=n].
\end{equation}
Let $\Hs\coloneqq\{g:\Db_T^\star \to{\mathbb R}:\textrm{ such that $\eqref{condex}$ is satisfied}\}$. Thanks to the annealed moment duality, i.e. Eq. \eqref{md}, every constant function belongs to $\Hs$. Moreover, $\Hs$ is closed under increasing limits of non-negative bounded functions in $\Hs$. Now, we claim that \eqref{condex} holds for functions of the form $g(\omega)=g_1(\omega(t_1))\cdots g_k(\omega(t_k))$, with $0<t_1<\cdots<t_k<T$ and $g_i\in\Cs^2([0,\infty))$ with compact support. If the claim is true, then, by the monotone class theorem, $\Hs$ contains every bounded measurable function $g$, which completes the proof.
We prove the claim by induction on $k$. For $k=1$, we need to prove that, for $t_1\in(0,T)$,
\betagin{equation}\lambdabel{condexsimple}
{\mathbb E}b[(1-X^J(T))^n g_1(J(t_1))\mid X^J(0)=x]={\mathbb E}b[(1-x)^{{R_T^J(T-)}} g_1(J({t_1}))\mid {R_T^J(0-)}=n].
\end{equation}
Note first that, using the Markov property for $X^J$ in $[0,t_1]$ followed by {Eq. \eqref{md} in Theorem \ref{thm2.3} and the fact that $t_1$ and $T$ are almost surely continuity times for $J$,} we obtain
\betagin{align*}
&{\mathbb E}b\left[(1-X^J(T))^n g_1(J(t_1))\mid X^J(0)=x\right]\\
&= {\mathbb E}b\left[g_1(J(t_1))\hat{{\mathbb E}b}\left[(1-\hat{X}^{\hat{J}}(T-t_1))^n\mid \hat{X}^{\hat{J}}(0)=X^J(t_1)\right]\mid X^J(0)=x\right]\\
&= {\mathbb E}b\left[g_1(J(t_1))\hat{{\mathbb E}b}\left[(1-X^J(t_1))^{{{\hat{R}}_{T-t_1}^{\hat{J}}((T-t_1)-)}}\mid {\hat{R}_{T-t_1}^{\hat{J}}(0-)}=n\right]\mid X^J(0)=x\right],
\end{align*}
where the subordinator $\hat{J}$ is defined via $\hat{J}(h)\coloneqq J(t_1+h)-J(t_1)$. The processes {$\hat{X}^{\hat{J}}$ and $\hat{R}_{T-t_1}^{\hat{J}}$} are independent copies of {$X^J$ and $R_{T-t_1}^J$,} which are driven by $\hat{J}$ (which is in turn independent of $(J(u))_{u\in[0,t_1]}$). Using first Fubini's theorem, and then Eq. \eqref{rmd} in Theorem \ref{thm2.3} {and the fact that $0$ and $t_1$ are almost surely continuity times for $J$,} the last expression equals
\betagin{align*}
&\hat{{\mathbb E}b}\left[{\mathbb E}b\left[g_1(J(t_1))(1-X^J(t_1))^{{{\hat{R}}_{T-t_1}^{\hat{J}}((T-t_1)-)}}\mid X^J(0)=x\right]\mid {{\hat{R}}_{T-t_1}^{\hat{J}}(0-)}=n\right]\\
=&\hat{{\mathbb E}b}\left[{\mathbb E}b\left[g_1(J(t_1))(1-x)^{{R_{t_1}^J(t_1-)}}\mid {R_{t_1}^J(0-)}={{\hat{R}}_{T-t_1}^{\hat{J}}((T-t_1)-)}\right]\mid {{\hat{R}}_{T-t_1}^{\hat{J}}(0-)}=n\right].
\end{align*}
The proof of the claim for $k=1$ is completed by applying the Markov property for {$R_T^J$} in the (backward) interval $[0,T-t_1]$.
Let us now assume that the claim is true up to $k-1$. We proceed as before to prove that the claim holds for $k$. Using the Markov property for {$X^J$} in $[0,t_1]$ followed by the inductive step, we obtain
\betagin{align*}
&{\mathbb E}b\left[(1-X^J(T))^n \prod_{i=1}^k g_i(J(t_i))\mid X^J(0)=x\right]\\
&={\mathbb E}b\left[g_1(J(t_1))\hat{{\mathbb E}b}\left[(1-{\hat{X}}^{\hat{J}}(T-t_1))^n\, G(J(t_1),\hat{J})\mid {\hat{X}}^{\hat{J}}(0)=X^J(t_1)\right]\mid X^J(0)=x\right]\\
&={\mathbb E}b\left[g_1(J(t_1))\hat{{\mathbb E}b}\left[(1-X^J(t_1))^{{{\hat{R}}_{T-t_1}^{\hat{J}}((T-t_1)-)}}\,G(J(t_1),\hat{J})\mid {{\hat{R}}_{T-t_1}^{\hat{J}}(0-)}=n\right]\mid X^J(0)=x\right],
\end{align*}
where $G(J(t_1),\hat{J})\coloneqq\prod_{i=2}^k g_i(J(t_1)+\hat{J}(t_i-t_1))$. Using Fubini's theorem, {the reinforced duality Eq. \eqref{rmd}}, {and the fact that $0$ and $t_1$ are almost surely continuity times for $J$,} the last expression equals
\betagin{align*}
&\hat{{\mathbb E}b}\left[{{\mathbb E}b}\left[(1-X^J(t_1))^{{{\hat{R}}_{T-t_1}^{\hat{J}}((T-t_1)-)}}g_1(J(t_1))\,G(J(t_1),\hat{J})\mid X^J(0)=x\right] \mid {{\hat{R}}_{T-t_1}^{\hat{J}}(0-)}=n\right]\\
=&\hat{{\mathbb E}b}\left[{{\mathbb E}b}\left[(1-x)^{{R_{t_1}^J(t_1-)}}g_1(J(t_1))\,G(J(t_1),\hat{J})\mid {R_{t_1}^J(0-)}={{\hat{R}}_{T-t_1}^{\hat{J}}((T-t_1)-)}\right] \mid {{\hat{R}}_{T-t_1}^{\hat{J}}(0-)}=n\right],
\end{align*}
and the proof is completed by applying the Markov property for {$R^J_T$} in the (backward) interval $[0,T-t_1]$.
\end{proof}
\betagin{proof}[Proof of Theorem \ref{thm2.4}-(2)(Asymptotic type frequency)]
Let $\omega$ be such that the quenched moment duality \eqref{quenchedual} holds between $-\tau$ and $0$. In particular,
\betagin{equation}\lambdabel{mdt0}
\mathbb{E} \left [ (1-X^\omegaega(0))^n|X^\omegaega(-\tau)=x \right ]= \mathbb{E} \left [ (1-x)^{R_0^\omegaega(\tau-)}|R_0^\omegaega(0-)=n \right ].
\end{equation}
Since we assume that $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$, the right hand side converges to ${\mathbb P}i_n(\omegaega)$ (defined in \eqref{defpin}), which proves that the moment of order $n$ of $1-X^\omegaega(0)$ conditionally on $\{ X^\omegaega(-\tau) = x \}$ converges to ${\mathbb P}i_n(\omegaega)$. Since we deal with random variables supported on $[0,1]$, the convergence of the positive integer moments proves the convergence in distribution and the fact that the limit distribution $\mathcal{L}^\omega$ satisfies \eqref{dualimite}.
It remains to prove \eqref{approxwn}. For $\upsilon\in\Ms_1(\mathbb{N}_0^\dagger)$ with finite support, let $\upsilon^\omegaega_s$ denote the distribution of $R_0^\omegaega(s-)$ given that $R_0^\omega(0-)\sigmam\upsilon$.
Let $T_{0, \dagger}^\omega$ be the absorption time of $R_0^\omegaega$ at $\{ 0, \dagger \}$. Note that $T_{0, \dagger}^\omega$ is stochastically bounded by an exponential random variable with parameter $\theta\nu_0$. Therefore,
\betagin{align*}
\upsilon^\omegaega_\tau(\mathbb{N}) & = \mathbb{P}_{\upsilon} \left ( R_0^\omegaega(\tau-) \in \mathbb{N} \right ) = \mathbb{P}_{\upsilon} \left ( T_{0, \dagger}^\omega > \tau \right ) \leq e^{- \theta\nu_0 \tau}. \lambdabel{binomialisation}
\end{align*}
Hence,
we have
\betagin{align*}
\mathbb{P}_{\upsilon} \left ( \exists s\geq 0 \ \text{s.t.} \ R_0^\omegaega(s)=0 \right ) & = \upsilon_{\tau}^\omegaega(\{0\}) + \sum_{k \geq 1} \mathbb{P}_{\upsilon} \left ( R_0^\omegaega(\tau-) = k \ \text{and} \ \exists s\geq \tau \ \text{s.t.} \ R_0^\omegaega(s)=0 \right ) \\
& \leq \upsilon_{\tau}^\omegaega(\{0\}) + \upsilon_\tau^\omegaega(\mathbb{N}) \leq \upsilon_\tau^\omegaega(\{0\}) + e^{-\theta \nu_0 \tau}.
\end{align*}
Thus, we obtain
\betagin{eqnarray}
\upsilon_{\tau}^\omegaega(\{0\}) \leq \mathbb{P}_{\upsilon} \left ( \exists s\geq 0 \ \text{s.t.} \ R_0^\omegaega(s)=0 \right ) \leq \upsilon_{\tau}^\omegaega(\{0\}) + e^{-\theta\nu_0 \tau}. \lambdabel{absetmu0}
\end{eqnarray}
Similarly, we have
\betagin{align*}
\mathbb{E}_{\upsilon} \left [ (1-x)^{R_0^\omegaega(\tau-)} \right ] & = \upsilon_{\tau}^\omegaega(\{0\}) + \sum_{k \geq 1} (1-x)^k \upsilon_{\tau}^\omegaega(\{k\}) \leq \upsilon_{\tau}^\omegaega(\{0\}) + \upsilon_\tau^\omegaega(\mathbb{N}) \leq \upsilon_{\tau}^\omegaega(\{0\}) + e^{-\theta\nu_0 \tau}.
\end{align*}
Hence,
\betagin{eqnarray}
\upsilon_{\tau}^\omegaega(\{0\})\leq \mathbb{E}_{\upsilon} [ (1-x)^{R_0^\omegaega(\tau-)} ] \leq \upsilon_{\tau}^\omegaega(\{0\}) + e^{-\theta\nu_0 \tau}. \lambdabel{absetmu00}
\end{eqnarray}
Recall from Section \ref{s25} that ${\mathbb P}i_n(\omegaega)\coloneqq \mathbb{P}(\exists s\geq 0 \ \text{s.t.} \ R_0^\omegaega(s)=0 \mid R_0^\omegaega(0-)=n)$. Choosing {$\upsilon = \deltalta_n$} in \eqref{absetmu0} and in \eqref{absetmu00} and subtracting both inequalities, we get
\[ \left | \mathbb{E} \left [ (1-x)^{R_0^\omegaega(\tau-)}\mid R_0^\omegaega(0-)=n \right ] - {\mathbb P}i_n(\omegaega) \right | \leq e^{-\theta\nu_0 \tau}. \]
This inequality together with \eqref{quenchedual} (i.e. the quenched moment duality) yields the desired result.
\end{proof}
\betagin{proof}[Proof of Proposition \ref{prop2.5}]
Let $\omega\in\Db^\star$ such that \eqref{quenchedual} holds. Let $J_\omega\coloneqq J\otimes_{\tau_\star} \omega$. Consider the process $X^{J_\omega}$ in $[-\tau,0]$ with $\tau>{\tau_\star^{}}$. Using the Markov property, we obtain
\[{\mathbb E}b\!\left[(1-X^{J_\omega}(0))^n\mid X^{J_\omega}(-\tau)=x\right]=\!\int_0^1\!\!{\mathbb E}b\!\left[(1-X^{\omega}(0))^n\mid X^{\omega}(-{\tau_\star^{}})=y\right]{\mathbb P}b(X(-{\tau_\star^{}})\in \dd y\mid X(-\tau)=x),\]
where $X$ is the solution of \eqref{WFSDE} with subordinator $J$. Combining the previous identity with \eqref{quenchedual} for $X^\omega$ in $(-\tau_\star,0)$, and using translation invariance of $X$, we obtain
\[{\mathbb E}b\!\left[(1-X^{J_\omega}(0))^n\mid X^{J_\omega}(-\tau)=x\right]=\!\int_0^1\!\!{\mathbb E}b\!\left[(1-y)^{R_0^\omega(\tau_\star^{} -)}\mid R_0^{\omega}(0-)=n\right]{\mathbb P}b(X(\tau-\tau_\star)\in \dd y\mid X(0)=x).\]
Hence, letting $\tau\to\infty$ and using Theorem \ref{thm2.4}-(1), we get
\[\lim_{\tau\to\infty}{\mathbb E}b\left[(1-X^{J_\omega}(0))^n\mid X^{J_\omega}(-\tau)=x\right]=\!\int_0^1\!\!{\mathbb E}b\left[(1-y)^{R_0^\omega(\tau_\star^{} -)}\mid R_0^{\omega}(0-)=n\right]{\mathbb P}b(X(\infty)\in \dd y),\]
and the result follows from Eq. \eqref{cvmomentsannealed} in Theorem \ref{thm2.4}-(1).
\end{proof}
\subsection{Quenched results related to Section \ref{s26}}\lambdabel{s62}
This section is devoted to the proof of Theorem \ref{thm2.6}-(2), which describes the asymptotic behavior of the ancestral type distribution.
\betagin{lemma}\lambdabel{hTq}
For all $T \geq 0$, $x\in[0,1]$ and $\omega\in\Db^\star$, we have
\betagin{align}
h^{\omegaega}_T(x)&=1-\mathbb{E}[(1-x)^{L_T^\omegaega(T-)}\mid L_T^\omegaega(0-)=1].\lambdabel{pldasgatdq}
\end{align}
\end{lemma}
\betagin{proof}
The proof is analogous to the proof of Lemma \ref{exprhtannealed}.
\end{proof}
Now, we proceed to prove Theorem \ref{thm2.6}-(2).
\betagin{proof}[Proof of Theorem \ref{thm2.6}(Ancestral type distribution)-(2)]
Recall that by assumption $\theta\nu_0>0$. For $\mu\in\Ms_1(\mathbb{N})$, we denote by $\mu^\omegaega_T(\betata)$ the distribution of $L_T^\omegaega(\betata-)$ given that $L_T^\omegaega(0-)\sigmam\mu$.
Let $t > s > 0$. Note that we have $\mu^{\omegaega}_{t}(t) = (\mu^{\omegaega}_{t}(t - s))^{\omegaega}_{s}(s)$ so
\betagin{eqnarray}
d_{TV} (\mu^{\omegaega}_{t}(t),\mu^{\omegaega}_{s}(s)) = d_{TV} ( (\mu^{\omegaega}_{t}(t - s))^{\omegaega}_{s}(s), \mu^{\omegaega}_{s}(s)), \lambdabel{shifchanges startinglaw}
\end{eqnarray}
where $d_{TV}(\mu_1,\mu_2)$ stands for the total variation distance between $\mu_1$ and $\mu_2$.
Assume now that $L_T^\omega(0-)\sim\mu$. By construction, $L_T^\omega$ jumps from any state $i$ to the state $1$ with rate $q^0(i,1)\geq \theta\nu_0 > 0$ (see \eqref{kratespldasg}). Let $\hat{L}_T^\omega$ be a process with initial distribution $\mu$, evolving as $L_T^\omega$, but jumping from $i$ to $1$ at rate $q^0(i,1) - \theta\nu_0 \geq 0$. We decompose the dynamics of $L_T^\omega$ as follows: (1) $L_T^\omega$ evolves as $\hat{L}_T^\omega$ on $[0, \xi]$, where $\xi$ is an independent exponential random variable with parameter $\theta\nu_0$, (2) at time $\xi$, $L_T^\omega$ jumps to the state $1$ regardless of its current position, and (3) conditionally on $\xi$, $L_T^\omega$ has the same law on $[\xi, \infty)$ as an independent copy of $L_{T-\xi}^\omega$ started with one line. This decomposition allows us to couple $L_T^\omega$ with a copy $\tilde{L}_T^\omega$ of it with starting law $\tilde \mu$, so that the two processes coincide on $[\xi, \infty)$. Since $L_T^\omega(T-) \sim \mu^{\omega}_T(T)$ and $\tilde{L}_T^\omega(T-) \sim \tilde\mu^{\omega}_T(T)$, we have
\betagin{eqnarray}
d_{TV} (\mu^{\omegaega}_T(T), \tilde \mu^{\omegaega}_T(T)) \leq \mathbb{P} \left ( \tilde{L}_T^\omegaega(T-) \neq {L}_T^\omegaega(T-) \right ) \leq \mathbb{P} \left ( \xi > T \right ) = e^{-\theta\nu_0 T}. \lambdabel{rapprochementexpo}
\end{eqnarray}
This together with \eqref{shifchanges startinglaw}, implies that, for any $\mu\in\Ms_1({\mathbb N}b)$ and any $t > s > 0$,
\betagin{eqnarray}
\ d_{TV} (\mu^{\omegaega}_{t}(t), \mu^{\omegaega}_{s}(s)) \leq e^{-\theta\nu_0 s}. \lambdabel{cauchyexpo}
\end{eqnarray}
In particular $(\mu^{\omegaega}_{t}(t))_{t > 0}$ is Cauchy as $t \rightarrow \infty$ for the total-variation distance. Therefore, $(\mu^{\omegaega}_{t}(t))_{t > 0}$ has a limit $\mu^{\omegaega}\in\Ms_1({\mathbb N}b)$. Moreover, \eqref{rapprochementexpo} implies that $\mu^\omegaega$ does not depend on $\mu$, and the first {part} of Theorem \ref{thm2.6}-(2) is proved. Identity \eqref{pldasgatdtpsinftyq} follows then by Lemma \ref{hTq}.
Setting $s = T$ and letting $t\to\infty$ in \eqref{cauchyexpo} yields
\[d_{TV} (\mu^\omegaega, \mu^{\omegaega}_{T}(T)) \leq e^{-\theta\nu_0 T}.\]
Since $h^{\omegaega}(x) = 1 - \mathbb{E} \left[(1-x)^{Z_{\infty}^\omega} \right]$ and $h^{\omegaega}_T(x) = 1 - \mathbb{E} \left[(1-x)^{Z_{T}^\omega} \right]$, where $Z_{\infty}^\omega \sigmam (\deltalta_1)^{\omegaega}$ and $Z_{T}^\omega \sigmam (\deltalta_1)^{\omega}_{T}(T)$, we get
\[\lvert h_T^\omega(x)-h^\omega(x)\rvert\leq d_{TV}((\deltalta_1)^{\omegaega}, (\deltalta_1)^{\omega}_{T}(T))\leq e^{-\theta\nu_0 T},\]
which completes the proof.
\end{proof}
\section{Further quenched results for simple environments}\lambdabel{S7}
In this section we provide, for simple environments, extensions and refinements of the results obtained in Sections \ref{s25} and \ref{s26} in the quenched setting. Recall the quenched diffusion $X^\omegaega$ defined in Section \ref{s23}.
\subsection{Extensions of quenched results in Section \ref{s25} }\lambdabel{s71}
First, we extend the main quenched results in Section \ref{s25}, which hold for almost every environment, to any simple environment.
\betagin{theorem}[Quenched moment duality for simple environments]\lambdabel{thmf1}
The quenched moment duality \eqref{quenchedual} holds for any simple environment.
\end{theorem}
The proof of Theorem \ref{thmf1} has two main ingredients: a moment duality between the jumps of the environment, and a moment duality at the jumps. These results are covered by the next two lemmas.
\betagin{lemma}[Quenched moment duality between the jumps] \lambdabel{momdualbetween}
Let $0\leq s<t\leq T$ and assume that $\omega$ has no jumps in $(s,t)$. For all $x\in[0,1]$ and $n\in \mathbb{N}$, we have
\[ \mathbb{E} \left [ (1-X^\omegaega(t-))^n \mid X^\omegaega(s)=x \right ]= \mathbb{E} \left [ (1-x)^{R_T^\omegaega((T-s)-)}\mid R_{T}^\omegaega(T-t)=n \right ]. \]
\end{lemma}
\betagin{proof}
In $(s,t)$, the processes $X^\omegaega$ and $R_t^\omegaega$ evolve as in the annealed case with $\mu=0$. Therefore, the result follows applying Theorem \ref{thm2.3} with $\mu = 0$.
\end{proof}
\betagin{lemma}[Quenched moment duality at jumps] \lambdabel{momdualatjumps}
Assume that $\omega\in\Db^\star$ is simple and has a jump at time $t<T$. Then, for all $x\in[0,1]$ and $n\in \mathbb{N}$, we have
\[ \mathbb{E} \left [ (1-X^\omegaega(t))^n \mid X^\omegaega(t-)=x \right ]= \mathbb{E}\left [ (1-x)^{R_{T}^\omegaega(T-t)}\mid R_{T}^\omegaega((T-t)-)=n \right ]. \]
\end{lemma}
\betagin{proof}
On the one hand, since $X^\omegaega(t) = X^\omegaega(t-) + X^\omegaega(t-)(1-X^\omegaega(t-))\Deltalta \omegaega(t)$ almost surely, we have
\betagin{equation}\lambdabel{jumponx}
\mathbb{E}\left [ (1-X^\omegaega(t))^n\mid X^\omegaega(t-)=x \right ] = \left [ 1 - x (1 + (1-x) \Deltalta \omegaega(t)) \right ]^n = \left [(1-x)(1-x\Deltalta\omega(t))\right]^n.
\end{equation}
{On the} other hand, conditionally on $\{R_{T}^\omegaega((T-t)-)=n\}$, we have $R_{T}^\omegaega(T-t) \sigmam n + Y$ where $Y \sigmam \bindist{n}{\Deltalta \omegaega(t)}$. Therefore
\betagin{align}
\mathbb{E} \left [ (1-x)^{R_{T}^\omegaega(T-t)}\mid R_{T}^\omegaega((T-t)-)=n \right ] & = \mathbb{E}\left [ (1-x)^{n+Y} \right ] = (1-x)^n \left [ 1-\Deltalta \omegaega(t) + \Deltalta \omegaega(t)(1-x) \right ]^n \nonumber \\
& =\left [(1-x)(1-x\Deltalta\omega(t))\right]^n. \lambdabel{jumponr}
\end{align}
The combination of \eqref{jumponx} and \eqref{jumponr} yields the result.
\end{proof}
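The computation in \eqref{jumponr} amounts to the identity $\mathbb{E}[(1-x)^{n+Y}]=[(1-x)(1-x\Delta)]^n$ for $Y\sim\mathrm{Bin}(n,\Delta)$, which can be spot-checked numerically; a minimal sketch (illustrative only):
\begin{verbatim}
from math import comb

def lhs(n, x, d):        # E[(1-x)^(n+Y)], Y ~ Binomial(n, d)
    return sum(comb(n, k) * d**k * (1 - d)**(n - k) * (1 - x)**(n + k)
               for k in range(n + 1))

def rhs(n, x, d):        # ((1-x)(1-x*d))^n
    return ((1 - x) * (1 - x * d))**n

n, x, d = 6, 0.35, 0.8
print(abs(lhs(n, x, d) - rhs(n, x, d)) < 1e-12)   # True
\end{verbatim}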
\betagin{proof}[Proof of Theorem \ref{thmf1}]
Let $\omega$ be a simple environment. Let $(t_i)_{i=1}^m$ be the increasing sequence of jump times of $\omegaega$ in $[0,T]$. Without loss of generality we assume that $0$ and $T$ are both jump times of $\omegaega$. In particular, $t_1 = 0$ and $t_m=T$. Let $(X^\omegaega( s))_{ s \in[0,T]}$ and $(R_T^\omegaega(\betata))_{\betata\in[0,T]}$ be independent realizations of the Wright--Fisher process and the line-counting process of the k-ASG, respectively. For $s\in[0,T]$, denote by $\mu_s^\omega(x,\cdot)$ and $\bar{\mu}_s^\omega(x,\cdot)$ the law of $X^\omega(s)$ and $X^\omega(s-)$, respectively, given that $X^\omega(0)=x$. Partitioning with respect to the values of $X^\omegaega(t_m-)$ and using Lemma \ref{momdualatjumps} at $t=T$, we get
\betagin{align*}
\mathbb{E} \left [ (1-X^\omegaega(T))^n\mid X^\omegaega(0)=x \right ]
&= \int_0^1 \mathbb{E} \left [ (1-X^\omegaega(t_m))^{n}\mid X^\omegaega(t_m-)=y \right ] \bar{\mu}_{t_m}^\omega(x,\dd y)\\
&= \int_0^1 \mathbb{E} \left [ (1-y)^{R_T^\omegaega(0)}\mid R_{T}^\omegaega(0-) = n \right ] \bar{\mu}_{t_m}^\omega(x,\dd y) \\
&= \mathbb{E} \left [ (1-X^\omegaega(t_m-))^{R_T^\omegaega(0)}\mid R_T^\omegaega(0-)=n, X^\omegaega(0)=x \right ]\eqdef \,I_T^\omega(x,n).
\end{align*}
Set, for $t<T$, $q_{n,k}^\omega(T,t)\deltafeq\mathbb{P} \left ( R_T^\omegaega(t) = k \mid R_T^\omegaega(0-)=n \right ) $. Partitioning with respect to the values of $X^\omegaega(t_{m-1})$ and $R_T^\omegaega(0)$, and using Lemma \ref{momdualbetween}, we get
\betagin{align*}
I_T^\omega(x,n)=& \sum_{k \in \mathbb{N}_0^\dagger} \mathbb{E} \left [ (1-X^\omegaega(t_m-))^{k}\mid X^\omegaega(0)=x \right ] q_{n,k}^\omega(T,0) \\
= & \sum_{k \in \mathbb{N}_0^\dagger}q_{n,k}^\omega(T,0) \int_0^1 \mathbb{E}\left [ (1-X^\omegaega(t_m-))^k \mid X^\omegaega(t_{m-1})=y \right ] {\mu}_{t_{m-1}}^\omega(x,\dd y) \\
= & \sum_{k \in \mathbb{N}_0^\dagger}q_{n,k}^\omega(T,0) \int_0^1 \mathbb{E} \left [ (1-y)^{R_T^\omegaega((T-t_{m-1})-)}\mid R_T^\omegaega(0)=k \right ] \mu_{t_{m-1}}^\omega(x,\dd y) \\
= & \mathbb{E} \left [ (1-X^\omegaega(t_{m-1}))^{R_T^\omegaega((T-t_{m-1}^{})-)}\mid R_T^\omegaega(0-)=n, X^\omegaega(0)=x \right ].
\end{align*}
If $m=2$ the proof of \eqref{quenchedual} is already complete. If $m > 2$, we continue as follows. Partitioning with respect to the values of $R_T^\omegaega((T-t_{m-1})-)$ and of $X^\omegaega(t_{m-1}^{}-)$, and using Lemma \ref{momdualatjumps}, we obtain
\betagin{align*}
&I_T^\omega(x,n)= \sum_{k \in \mathbb{N}_0^\dagger} \mathbb{E} \left [ (1-X^\omegaega(t_{m-1}))^{k}\mid X^\omegaega(0)=x \right ] q_{n,k}^\omega(T,(T-t_{m-1})-) \\
= & \sum_{k \in {\mathbb N}b_0^\dagger} q_{n,k}^\omega(T,(T-t_{m-1})-)\int_0^1 \mathbb{E} \left [ (1-X^\omegaega(t_{m-1}))^{k}\mid X^\omegaega(t_{m-1}-)=y \right] \bar{\mu}_{t_{m-1}}^\omega(x,\dd y) \\
= & \sum_{k \in \mathbb{N}_0^\dagger}q_{n,k}^\omega(T,(T-t_{m-1})-) \int_0^1 \mathbb{E}\left [ (1-y)^{R_T^\omegaega(T-t_{m-1})}\mid R_{T}^\omegaega((T-t_{m-1})-) = k \right ] \bar{\mu}_{t_{m-1}}^\omega(x,\dd y) \\
=& \mathbb{E}\left [ (1-X^\omegaega(t_{m-1}-))^{R_T^\omegaega(T-t_{m-1})}\mid R_T^\omegaega(0-)=n, X^\omegaega(0)=x \right ].
\end{align*}
Iterating this procedure, using successively Lemma \ref{momdualbetween} and Lemma \ref{momdualatjumps} (the first one is applied on the intervals {$(t_{i-1}, t_i)$}, while the second one is applied at the times {$t_{i}$}), we finally obtain
\[\mathbb{E}[ (1-X^\omegaega(T))^n\mid X^\omegaega(0)=x ]= \mathbb{E} \left [ (1-x)^{R_T^\omegaega(T-)}|R_T^\omegaega(0-)=n \right ], \]
which ends the proof.
\end{proof}
\betagin{theorem}[Quenched asymptotic type frequency for simple environments]\lambdabel{thmf2}
The statement of Theorem \ref{thm2.4}-(2) holds for any simple environment.
\end{theorem}
\betagin{proof}
Analogous to the proof of Theorem \ref{thm2.4}-(2), but using Theorem \ref{thmf1} instead of Theorem \ref{thm2.3}.
\end{proof}
\textbf{Refinements for $\sigmagma=0$.} Under this additional assumption, we provide a more explicit expression of ${\mathbb P}i_n(\omega)$ (defined in \eqref{defpin}). This is possible thanks to the following explicit diagonalization of the matrix $Q_\dagger^0$ (the transition matrix of the process $R$ under the null environment).
\betagin{lemma}\lambdabel{diag}
Assume that $\sigmagma=0$ and set, for $k\in{\mathbb N}b_0^\dagger$, $\lambdambda_k^\dagger\coloneqq -q_\dagger^0(k,k)$, and, for $k\in{\mathbb N}b$, $\gamma_k^\dagger\coloneqq q_\dagger^0(k,k-1)$, {where $q_\dagger^\mu(\cdot,\cdot)$ is defined in \eqref{krates}}. In addition, let
\betagin{itemize}
\item[(i)] $D_\dagger$ be the diagonal matrix with diagonal entries $(-\lambdambda_i^\dagger)_{i\in{\mathbb N}b_0^\dagger}$,
\item[(ii)] $U_\dagger\coloneqq (u_{i,j}^\dagger)_{i,j\in{\mathbb N}b_0^\dagger}$, where $u_{\dagger,\dagger}^\dagger \coloneqq 1$ and $u_{\dagger,j}^\dagger\coloneqq 0$ for $j\in{\mathbb N}b_0$, and, for $i\in{\mathbb N}b_0$
\betagin{eqnarray}
u_{i,j}^\dagger \coloneqq \prod_{l=j+1}^{i} \left ( \frac{\gamma_{l}^\dagger}{\lambdambda_l^\dagger - \lambdambda_j^\dagger} \right ) \,\textrm{for } j\in[i]_0, \ u_{i,j}^\dagger\coloneqq 0,\, \textrm{for } j> i\ \textrm{and} \ \ u_{i,\dagger}^\dagger \coloneqq \theta \nu_0 \sum_{k = 1}^{i} \frac{k}{\lambdambda_{k}^\dagger} \prod_{l=k+1}^{i} \frac{\gamma_{l}^\dagger}{\lambdambda_l^\dagger}, \lambdabel{recrelalphani3}
\end{eqnarray}
\item[(iii)] $V_\dagger\coloneqq (v_{i,j}^\dagger)_{i,j\in{\mathbb N}b_0^\dagger}$, where $v_{\dagger,\dagger}^\dagger \coloneqq 1$ and $v_{\dagger,j}^\dagger\coloneqq 0$ for $j\in{\mathbb N}b_0$, and, for $i\in{\mathbb N}b$
\betagin{eqnarray}
v_{i,j}^\dagger \coloneqq \prod_{l=j}^{i-1} \left ( \frac{-\gamma_{l+1}^\dagger}{\lambdambda_i^\dagger - \lambdambda_l^\dagger} \right ) \,\textrm{for } j\in[i]_0, \ v_{i,j}^\dagger\coloneqq 0,\, \textrm{for } j> i\ \textrm{and} \ \ v_{i,\dagger}^\dagger \coloneqq \frac{- \theta \nu_0}{ \lambdambda_i^\dagger} \sum_{k = 1}^{i} k \prod_{l=k}^{i-1} \left ( \frac{- \gamma_{l+1}^\dagger}{\lambdambda_i^\dagger - \lambdambda_l^\dagger} \right ), \lambdabel{recrelalphani2}
\end{eqnarray}
\end{itemize}
with the convention that an empty sum equals $0$ and an empty product equals $1$. Then, we have
\[Q_\dagger^0=U_\dagger D_\dagger V_\dagger\quad \textrm{and}\quad U_\dagger V_\dagger=V_\dagger U_\dagger=Id.\]
\end{lemma}
\betagin{proof}[Proof of Lemma \ref{diag}]
For any $i\in{\mathbb N}b_0^\dagger$, let $e_i\coloneqq (e_{i,j})_{j\in{\mathbb N}b_0^\dagger}$ be the vector defined via $e_{i,i}\coloneqq1$ and $e_{i,j}\coloneqq0$ for $j\neq i$. Order ${\mathbb N}b_0^\dagger$ as $\{ \dagger, 0, 1, 2,... \}$, so that the matrix $(Q_\dagger^0)^{\top}$ is upper triangular with diagonal elements $(-\lambdambda_{\dagger}, -\lambdambda_{0}, -\lambdambda_{1}, -\lambdambda_{2}, \ldots)$. For $n\in{\mathbb N}b_0^\dagger$, let $v_n \in \textrm{Span} \{ e_i: i \in[n]_0\cup\{\dagger\} \}$ be the eigenvector of $(Q_\dagger^0)^{\top}$ associated with the eigenvalue $-\lambdambda_n$ normalized so that its coordinate with respect to $e_n$ is $1$. It is not difficult to see that these eigenvectors exist and that we have $v_{\dagger} = e_{\dagger}$ and $v_{0} = e_{0}$. For $n \geq 1$, writing $v_n = c_\dagger e_\dagger + c_{0} e_{0} + ... + c_{n-1} e_{n-1} + e_{n}$ and multiplying by $\frac1{-\lambdambda_n} (Q_\dagger^0)^{\top}$ on both sides, we obtain another expression of $v_n$ as a linear combination of $e_\dagger, e_{0},\ldots, e_{n-1}, e_{n}$. Identifying both expressions, we obtain that $c_k=v_{n,k}^\dagger$, for $k\leq n-1$. In particular, we have
\[ v_n = v_{n,\dagger}^\dagger e_{\dagger}+ v_{n,0}^\dagger e_0 +\cdots+ v_{n,n-1}^\dagger e_{n-1} + v_{n,n}^\dagger e_{n}.\]
Proceeding in a similar way, one obtains that
\[ e_n = u_{n,\dagger}^\dagger v_\dagger + u_{n,0}^\dagger v_{0} + \cdots + u_{n,n-1}^\dagger v_{n-1} + u_{n,n}^\dagger v_{n}. \]
We thus get that $V_\dagger^{\top} U_\dagger^{\top} = U_\dagger^{\top} V_\dagger^{\top}=Id$ and $(Q_\dagger^0)^{\top} = V_\dagger^{\top} D_\dagger U_\dagger^{\top}$ (the matrix products are well-defined, because they involve sums of finitely many non-zero terms). This ends the proof.
\end{proof}
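The identities $U_\dagger V_\dagger=V_\dagger U_\dagger=Id$ and $Q^0_\dagger=U_\dagger D_\dagger V_\dagger$ can be verified numerically on a finite truncation. In the sketch below (illustrative only) we take $\sigma=0$ and, as a stand-in for \eqref{krates}, the down-jump rate $\gamma^\dagger_k=k(k-1+\theta\nu_1)$ (coalescence plus deleterious mutation) and killing rate $k\theta\nu_0$ suggested by the k-ASG dynamics of Definition \ref{defkilledasgannealed}; the parameter values are arbitrary. States are ordered as $(\dagger,0,1,\ldots,N)$, so all matrices involved are lower triangular and the truncation is exact.
\begin{verbatim}
import numpy as np

theta, nu0, nu1 = 1.0, 0.3, 0.7       # illustrative parameters, sigma = 0
N = 12                                 # truncation; states ordered (dagger, 0, ..., N)
DAG = 0
def pos(k):                            # position of state k >= 0 in the arrays
    return k + 1

lam = np.zeros(N + 2)                  # lam[pos(k)] = lambda_k (total exit rate)
gam = np.zeros(N + 2)                  # gam[pos(k)] = gamma_k  (rate k -> k-1)
Q = np.zeros((N + 2, N + 2))
for k in range(1, N + 1):
    gam[pos(k)] = k * (k - 1 + theta * nu1)
    lam[pos(k)] = gam[pos(k)] + k * theta * nu0
    Q[pos(k), pos(k)] = -lam[pos(k)]
    Q[pos(k), pos(k - 1)] = gam[pos(k)]
    Q[pos(k), DAG] = k * theta * nu0

U = np.zeros((N + 2, N + 2))
V = np.zeros((N + 2, N + 2))
U[DAG, DAG] = 1.0
V[DAG, DAG] = 1.0
V[pos(0), pos(0)] = 1.0                # row 0 of V (the formulas start at i = 1)
for i in range(0, N + 1):
    for j in range(0, i + 1):
        U[pos(i), pos(j)] = np.prod([gam[pos(l)] / (lam[pos(l)] - lam[pos(j)])
                                     for l in range(j + 1, i + 1)])
        if i >= 1:
            V[pos(i), pos(j)] = np.prod([-gam[pos(l + 1)] / (lam[pos(i)] - lam[pos(l)])
                                         for l in range(j, i)])
    U[pos(i), DAG] = theta * nu0 * sum(
        (k / lam[pos(k)]) * np.prod([gam[pos(l)] / lam[pos(l)]
                                     for l in range(k + 1, i + 1)])
        for k in range(1, i + 1))
    if i >= 1:
        V[pos(i), DAG] = (-theta * nu0 / lam[pos(i)]) * sum(
            k * np.prod([-gam[pos(l + 1)] / (lam[pos(i)] - lam[pos(l)])
                         for l in range(k, i)])
            for k in range(1, i + 1))
D = np.diag(-lam)
print(np.allclose(U @ V, np.eye(N + 2)), np.allclose(V @ U, np.eye(N + 2)),
      np.allclose(Q, U @ D @ V))       # True True True
\end{verbatim}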
Now, consider the polynomials $S_k^\dagger$, $k \in{\mathbb N}b_0$, defined via
\betagin{eqnarray}
S_k^\dagger(x) \coloneqq \sum_{i=0}^{k} v_{k,i}^\dagger\, x^i,\quad x\in[0,1]. \lambdabel{newbasis9}
\end{eqnarray}
In addition, for $z\in(0,1)$, we define the matrices $\Bs(z)\coloneqq (\Bs_{i,j}(z))_{i,j\in{\mathbb N}b_0^\dagger}$ and ${\mathbb P}hi^\dagger(z)\coloneqq ({\mathbb P}hi^\dagger_{i,j}(z))_{i,j\in{\mathbb N}b_0^\dagger}$ via
\betagin{equation} \lambdabel{defbetaki}
\Bs_{i,j}(z)\coloneqq \left\{\betagin{array}{ll}
{\mathbb P}b(i+B_i(z)=j)&\textrm{for $i,j\in{\mathbb N}b,$}\\
1&\textrm{for $i=j\in\{0,\dagger\},
$}\\
0&\textrm{otherwise},
\end{array}\right.\quad\textrm{and}\quad
{{\mathbb P}hi^\dagger(z)\coloneqq} U_\dagger^\top \Bs(z)^\top V_\dagger^\top,
\end{equation}
where $B_i(z)\sigmam\bindist{i}{z}$. We will see in the proof of Theorem \ref{thmf3} that ${\mathbb P}hi^\dagger(z)$ is well-defined.
\betagin{theorem} \lambdabel{thmf3}
Assume that $\sigmagma=0$, $\theta >0$ and $\nu_0, \nu_1 \in (0,1)$. Let $\omega$ be a simple environment. Denote by $N\coloneqq N(\tau)$ the number of jumps of $\omega$ in $(-\tau,0)$ and let $(T_i)_{i=1}^N$ be the sequence of the jump times in decreasing order, and set $T_0 \coloneqq 0$. For any $m\in[N]$, define the matrix $A_m^{\dagger}(\omegaega)\coloneqq (A_{i,j}^{\dagger,m}(\omegaega))_{i,j\in{\mathbb N}b_0^\dagger}$ via
\betagin{eqnarray}
A^{\dagger}_m(\omegaega) \coloneqq {\mathbb P}hi^\dagger(\Deltalta \omegaega(T_m)) \exp \left ( (T_{m-1} - T_m) D_\dagger \right ). \lambdabel{defmatA}
\end{eqnarray}
Then, for all $x \in (0,1)$ and $n \in \mathbb{N}$, we have
\betagin{eqnarray}
\mathbb{E}\left [ (1-X^\omegaega(0))^n\mid X^\omegaega(-\tau)=x \right ] = \sum_{k = 0}^{n 2^N} C^\dagger_{n,k}(\omega,\tau) S_k^\dagger(1-x), \lambdabel{exprmomtpst}
\end{eqnarray}
where the matrix $C^\dagger(\omega,\tau)\coloneqq (C^\dagger_{n,k}(\omega,\tau))_{k,n\in{\mathbb N}b_0^\dagger}$ is given by
\betagin{equation}
C^\dagger(\omega,\tau)\coloneqq U_\dagger \left[A_N^\dagger(\omega)A_{N-1}^\dagger(\omega)\cdots A_1^\dagger(\omega)\right]^\top \exp \left ((T_N+\tau)D_\dagger \right ), \lambdabel{defcoeffmom}
\end{equation}
with the convention that an empty product of matrices is the identity matrix. Moreover, for all $n\in{\mathbb N}b$,
\betagin{equation}
\ {\mathbb P}i_n(\omegaega) = C_{n,0}^\dagger(\omegaega,\infty)\coloneqq \lim_{\tau\to\infty} C_{n,0}^\dagger(\omega,\tau)=\lim_{\tau \to \infty} \left(U_\dagger \left[A_{N(\tau)}^\dagger(\omega)A_{{N(\tau)}-1}^\dagger(\omega)\cdots A_1^\dagger(\omega)\right]^\top\right)_{n,0}, \lambdabel{exprfctgenlt4infmom}
\end{equation}
where the previous limits are well-defined.
\end{theorem}
\begin{proof}
Let us first show that the matrix products in \eqref{defbetaki}, \eqref{defmatA} and \eqref{defcoeffmom} are well-defined and that $C^\dagger_{n,k}(\omega,\tau) = 0$ for all $k > n2^N$. To this end, order $\mathbb{N}_0^\dagger$ as $\{ \dagger, 0, 1, 2,\ldots \}$, so that the matrices $U_\dagger^\top$ and $V_\dagger^\top$ are upper triangular. Note also that $\Bs_{j,i}(z)=0$ for $i>2j$. Therefore, for any $n \in\mathbb{N}$ and any $v= (v_i)_{i \in \mathbb{N}_0^\dagger}$ such that $v_i = 0$ for all $i > n$, the vector $\tilde v \coloneqq U_\dagger^\top (\Bs(z)^\top (V_\dagger^\top v))$ is well-defined and satisfies $\tilde v_i = 0$ for all $i > 2n$. It follows that the matrix $\Phi^\dagger(z)$ in \eqref{defbetaki} is well-defined. Moreover, since $\exp ( (T_{m-1} - T_m) D_\dagger )$ is diagonal, the product defining the matrix $A^{\dagger}_m(\omega)$ in \eqref{defmatA} is also well-defined. Furthermore, for any $n \in\mathbb{N}$ and any vector $v = (v_i)_{i \in \mathbb{N}_0^\dagger}$ such that $v_i = 0$ for all $i > n$, the vector $\tilde v \coloneqq A^{\dagger}_m(\omega) v$ satisfies $\tilde v_i = 0$ for all $i > 2n$. In particular, for any $m \geq 1$, the product $\exp ( -(T_N+\tau)D_\dagger) A_m^\dagger(\omega)A_{m-1}^\dagger(\omega)\cdots A_1^\dagger(\omega) U_\dagger^\top$ is well-defined. Additionally, for $n \geq 1$ and a vector $v = (v_i)_{i \in \mathbb{N}_0^\dagger}$ such that $v_i = 0$ for all $i > n$, the vector $\tilde v \coloneqq \exp ( -(T_N+\tau)D_\dagger) A_m^\dagger(\omega)A_{m-1}^\dagger(\omega)\cdots A_1^\dagger(\omega) U_\dagger^\top v$ satisfies $\tilde v_i = 0$ for all $i > 2^m n$. Transposing, we see that the matrix $C^\dagger(\omega,\tau)$ in \eqref{defcoeffmom} is well-defined and satisfies $C^\dagger_{n,k}(\omega,\tau) = 0$ for all $k > n2^N$.
Define, for $s>0$, the stochastic matrix $P^\dagger_s(\omega)\coloneqq (p_{i,j}^\dagger(\omega,s))_{i,j\in\mathbb{N}_0^\dagger}$ via
\[p_{i,j}^\dagger(\omega,s)\coloneqq \mathbb{P}(R_0^\omega(s-)=j\mid R_0^\omega(0-)=i).\]
Hence, defining $\rho(y)\coloneqq (y^i)_{i\in\mathbb{N}_0^\dagger}$, $y\in[0,1]$ (with the convention $y^\dagger\coloneqq 0$), we obtain
\begin{equation}\label{gfrt}
\mathbb{E}[y^{R_0^\omega(\tau-)} \mid R_0^\omega(0-)=n] = (P_\tau^\dagger(\omega) \rho(y))_n=(P_\tau^\dagger(\omega)U_\dagger S_\dagger(y))_n,
\end{equation}
where we used that $\rho(y)=U_\dagger S_\dagger(y)$ with $S_\dagger(y)\coloneqq(S_k^\dagger(y))_{k\in\mathbb{N}_0^\dagger}$.
Thus, Theorem \ref{thmf1} and Eq. \eqref{gfrt} yield
\begin{equation}\label{mxt}
\mathbb{E} [ (1-X^\omega(0))^n\mid X^\omega(-\tau)=x ] = \sum_{k = 0}^{\infty} \left(P_\tau^\dagger(\omega)U_\dagger\right)_{n,k} S_k^\dagger(1-x).
\end{equation}
Now, consider the semi-group $M_\dagger\coloneqq (M_\dagger(s))_{s\geq 0}$ of the line-counting process of the k-ASG in the null environment, which is defined via $M_{\dagger}(s)\coloneqq \exp (sQ_\dagger^0 )$. Thanks to Lemma \ref{diag}, $M_\dagger(\beta)=U_\dagger E_\dagger(\beta)V_\dagger$, where $E_\dagger(\beta)$ is the diagonal matrix with diagonal entries $(e^{-\lambda^{\dagger}_j \beta})_{j\in\mathbb{N}_0^\dagger}$.
Assume first that $N(\tau)=0$ (i.e. $\omega$ has no jumps in $[-\tau,0]$). In this case, we have
\[P_\tau^\dagger(\omega)U_\dagger=M_\dagger(\tau)U_\dagger=U_\dagger E_\dagger(\tau)V_\dagger U_\dagger=U_\dagger E_\dagger(\tau)=C^\dagger(\omega,\tau),\]
where we used that $V_\dagger U_\dagger=Id$. Hence, \eqref{exprmomtpst} follows from \eqref{mxt}.
Assume now that $N(\tau)\geq 1$ (i.e. $\omega$ has at least one jump in $[-\tau,0]$). Disintegrating with respect to the values of $R_0^\omega((-T_i)-)$ and $R_0^\omega(-T_i)$, $i\in [N]$, we get
\begin{equation}\label{ptdec}
P_\tau^\dagger(\omega)=M_\dagger(-T_1)\Bs(\Delta\omega (T_1))M_\dagger(T_1-T_2)\Bs(\Delta\omega(T_2))\cdots \Bs(\Delta\omega(T_N))M_\dagger(T_N+\tau).
\end{equation}
Using this, the relation $M_\dagger(\beta)=U_\dagger E_\dagger(\beta)V_\dagger$, the definition of the matrices $\Phi^\dagger$ and $A_i^\dagger$ (see \eqref{defbetaki} and \eqref{defmatA}), and the fact that $V_\dagger U_\dagger=Id$, we obtain
\begin{align}
P_\tau^\dagger(\omega)U_\dagger&=U_\dagger E_\dagger(-T_1) \Phi^\dagger(\Delta \omega(T_1))^\top E_\dagger(T_1-T_2)\Phi^\dagger(\Delta \omega(T_2))^\top\cdots\Phi^\dagger(\Delta \omega(T_N))^\top E_\dagger(T_N+\tau) \nonumber \\
&= U_\dagger A_1^\dagger(\omega)^\top A_2^\dagger(\omega)^\top\cdots A_N^\dagger(\omega)^\top E_\dagger(T_N+\tau) \nonumber \\
&= U_\dagger \left[A_N^\dagger(\omega)A_{N-1}^\dagger(\omega)\cdots A_1^\dagger(\omega)\right]^\top E_\dagger(T_N+\tau)=C^\dagger(\omega,\tau), \label{ptut=ct}
\end{align}
which proves \eqref{exprmomtpst} also in this case.
It remains to prove that $C^\dagger_{n,0}(\omega,\tau)$ converges to $\Pi_n(\omega)$ as $\tau\to\infty$. For $\omega={\textbf{0}}$ (i.e. the null environment), on the one hand \eqref{defcoeffmom} yields $C_{n,0}^\dagger(\omega,\tau)=e^{-\lambda_0^\dagger \tau} u_{n,0}^\dagger$, and on the other hand \eqref{gfrt} together with $M_\dagger(\beta)=U_\dagger E_\dagger(\beta)V_\dagger$ and $V_\dagger U_\dagger=Id$ yields
\[\mathbb{E}[y^{R_0^{\textbf{0}}(\tau-)} \mid R_0^{\textbf{0}}(0-)=n]=\sum_{k=0}^n e^{-\lambda_k^\dagger \tau} u_{n,k}^\dagger S_k^\dagger(y).\]
Since $\lambda_k^\dagger>0$ for $k\in\mathbb{N}$ and $\lambda_0^\dagger=0$, the desired convergence follows by letting $\tau \to\infty$ in the previous identity. For later use, note that we have $\Pi_n({\textbf{0}})=u_{n,0}^\dagger$. The general case is a direct consequence of the following proposition.
\end{proof}
\begin{proposition} \label{approxmn}
Assume that $\sigma=0$, $\theta >0$, $\nu_0, \nu_1 \in (0,1)$, and $\omega$ is a simple environment. We have
\[ \left | C^\dagger_{n,0}(\omega,\tau) - \Pi_n(\omega) \right | \leq e^{-\theta\nu_0 \tau}. \]
\end{proposition}
\begin{proof}
Let $\omega_\tau$ be the environment that coincides with $\omega$ in $(-\tau,\infty)$ and that is constant and equal to $\omega(-\tau)$ in $(-\infty,-\tau]$ (which means that $\omega_\tau$ has no jumps in $(-\infty,-\tau]$). Since $P_\tau^\dagger(\omega_\tau)=P_\tau^\dagger(\omega)$ and $\omega_\tau$ has no jumps in $(-\infty,-\tau]$, we obtain
\begin{align}\label{wow}
\Pi_n(\omega_\tau)&=\sum_{k\geq 0}p_{n,k}^\dagger(\omega_\tau,\tau)\,\mathbb{P}(\exists \beta\geq \tau \ \text{s.t.} \ R_0^{\omega_\tau}(\beta)=0 \mid R_0^{\omega_\tau}(\tau-)=k)\nonumber\\
&=\sum_{k\geq 0}p_{n,k}^\dagger(\omega,\tau)\,\Pi_k({\textbf{0}})=\sum_{k\geq 0}p_{n,k}^\dagger(\omega,\tau)\,u_{k,0}^\dagger=(P_\tau^\dagger(\omega)U_\dagger)_{n,0}=C_{n,0}^\dagger(\omega,\tau),
\end{align}
where in the last line we used $\Pi_k({\textbf{0}})=u_{k,0}^\dagger$ (from the end of the previous proof) and \eqref{ptut=ct}.
Now combining \eqref{wow} with \eqref{absetmu0} applied to $\omega_\tau$ with $\upsilon=\delta_n$ yields
\begin{equation}\label{coefetmu0}
p_{n,0}^\dagger(\omega,\tau)=p_{n,0}^\dagger(\omega_\tau,\tau)\leq C_{n,0}^\dagger(\omega,\tau) \leq p_{n,0}^\dagger(\omega_\tau,\tau)+e^{-\theta\nu_0 \tau}=p_{n,0}^\dagger(\omega,\tau)+e^{-\theta\nu_0 \tau}.
\end{equation}
Then, combining \eqref{absetmu0} applied to $\omega$ with $\upsilon=\delta_n$ and \eqref{coefetmu0}, we get
\[C_{n,0}^\dagger(\omega,\tau)-e^{-\theta\nu_0 \tau}\leq \Pi_n(\omega)\leq C_{n,0}^\dagger(\omega,\tau)+e^{-\theta\nu_0 \tau},\]
and the result follows.
\end{proof}
\begin{remark}
If $\omega$ has no jumps in $(-\tau,0)$, then $C^\dagger(\omega,\tau)=U_\dagger\exp(\tau D_\dagger)$. In particular, $\Pi_n({\textbf{0}})=u_{n,0}^\dagger$.
\end{remark}
\begin{remark}
Under the assumptions of Theorem \ref{thmf3}, the Simpson index (see Remark \ref{simpsonindexannealed}) is given by
\[ \mathbb{E}[{\rm{Sim}}^\omega(\infty)]=\mathbb{E}[X^\omega(\infty)^2 + (1-X^\omega(\infty))^2] = 1 - 2 C_{1,0}^\dagger(\omega,\infty) + 2C_{2,0}^\dagger(\omega,\infty). \]
\end{remark}
\begin{remark} \label{periodic2}
If $\omega$ is a simple periodic environment with period $T_p > 0$, then \eqref{exprfctgenlt4infmom} can be re-written as $\Pi_n(\omega) =\lim_{m \to \infty} (U_\dagger B(\omega)^m )_{n,0}$, where $B(\omega) \coloneqq [A_{N(T_p)}^\dagger(\omega)A_{N(T_p)-1}^\dagger(\omega)\cdots A_1^\dagger(\omega)]^\top$.
\end{remark}
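A hedged numerical reading of the previous remark: once finite truncations of the one-period matrix $B(\omega)$ and of $U_\dagger$ are available (both are assumed inputs here, and the truncation must be taken large enough for the chosen index $n$ and number of iterations), the absorption probability $\Pi_n(\omega)$ can be approximated by iterating $B(\omega)$ until the $(n,0)$ entry stabilizes.
\begin{verbatim}
# Illustrative sketch only; U and B are assumed finite truncations, and n refers to
# the array row corresponding to the label n in the chosen ordering of N_0^dagger.
import numpy as np

def Pi_n_periodic(U, B, n, m_max=500, tol=1e-12):
    P, last = np.eye(B.shape[0]), np.inf
    for _ in range(m_max):
        P = P @ B                        # B(omega)^m
        val = (U @ P)[n, 0]
        if abs(val - last) < tol:
            return val
        last = val
    return last
\end{verbatim}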
As an application of Theorem \ref{thmf3} we obtain the following refinement of Proposition \ref{prop2.5} for mixed environments composed of a pure-jump subordinator $J$ and a simple environment $\omega$ (see Fig. \ref{fig:mixed}).
\begin{proposition}\label{mixf4}
Assume that $\sigma=0$, $\theta>0$ and $\nu_0,\nu_1\in(0,1)$. For any $\tau_\star>0$, $n\in\mathbb{N}$, $x\in[0,1]$, and any simple environment $\omega$, we have
\begin{equation}\label{formulecompacteexw}
\lim_{\tau\to\infty}\mathbb{E}\left[(1-X^{J\otimes_{\tau_\star} \omega}(0))^n\mid X^{J\otimes_{\tau_\star} \omega}(-\tau)=x\right]= \sum_{j = 0}^{n 2^N} \left ( \sum_{k = j}^{n 2^N} C^\dagger_{n,k}(\omega,\tau_\star) v_{k,j}^\dagger \right ) \pi_j,
\end{equation}
where $N$ denotes the number of jumps of $\omega$ in $[-\tau_\star,0]$.
\end{proposition}
\begin{proof}
Let $\omega$ be a simple environment. Proceeding as in the proof of Proposition \ref{prop2.5}, but using Theorem \ref{thmf1} instead of Theorem \ref{thm2.3}, we obtain
\begin{equation*}
\lim_{\tau\to\infty}\mathbb{E}\left[(1-X^{J\otimes_{\tau_\star} \omega}(0))^n\mid X^{J\otimes_{\tau_\star} \omega}(-\tau)=x\right]=\mathbb{E}\left[\pi_{R_0^{\omega}(\tau_\star -)}\mid R_0^\omega(0-)=n\right].
\end{equation*}
Since $U_\dagger V_\dagger= Id$ and the stochastic matrix $P^\dagger_{\tau_\star}(\omega)\coloneqq (p_{i,j}^\dagger(\omega,{\tau_\star}))_{i,j\in\mathbb{N}_0^\dagger}$, defined via \[p_{i,j}^\dagger(\omega,{\tau_\star})\coloneqq \mathbb{P}(R_0^\omega({\tau_\star}-)=j\mid R_0^\omega(0-)=i),\]
satisfies $C^\dagger(\omega,{\tau_\star})=P_{\tau_\star}^\dagger(\omega)U_\dagger$ (see \eqref{ptut=ct}), the result follows.
\end{proof}
\subsection{Extensions of quenched results in Section \ref{s26}}\label{s72}
In this section we assume that $\sigma=0$ and extend some of the quenched results stated in Section \ref{s26} about the ancestral type distribution for simple environments. The next result allows us to get rid of the condition $\theta\nu_0>0$ in Theorem \ref{thm2.6}-(2).
\begin{theorem}[Ancestral type distribution for simple environments]\label{thmfa}
Assume that $\sigma=0$ and let $\omega\in\Db^\star$ be a simple environment with infinitely many jumps in $[0,\infty)$ and such that the distance between the successive jumps does not converge to $0$. Then the statement of Theorem \ref{thm2.6}-(2), except for Eq. \eqref{approxh(x)4}, remains true.
\end{theorem}
\begin{proof}
The case $\theta\nu_0>0$ is already covered by Theorem \ref{thm2.6}-(2). Assume now that $\theta \nu_0=0$, $\sigma=0$, and that $\omega$ is as in the statement. For $\mu\in\Ms_1(\mathbb{N})$, we denote by $\mu^\omega_T(\beta)$ the distribution of $L_T^\omega(\beta-)$ given that $L_T^\omega(0-)\sim\mu$. We claim that, for all $\mu,\tilde\mu\in\Ms_1(\mathbb{N})$,
\begin{equation}\tag{Claim 5}\label{tvd}
d_{TV}(\mu_T^\omega(T),\tilde\mu_T^\omega(T))\xrightarrow[T\to\infty]{}0,
\end{equation}
where $d_{TV}(\mu_1,\mu_2)$ stands for the total variation distance between $\mu_1$ and $\mu_2$. If \eqref{tvd} is true, the rest of the proof follows as in the proof of Theorem \ref{thm2.6}-(2). In what follows we prove \eqref{tvd}.
Let $0 < T_1 < T_2 <\cdots$ be the sequence of the jump times of $\omega$ and set $T_0 \coloneqq 0$ for convenience. On $(T_i, T_{i+1})$, $L_T^\omega$ has transition rates given by $(q^0(k,j))_{k,j\in\mathbb{N}}$ (see \eqref{kratespldasg}). For any $k > l$, let $H(k,l)$ denote the hitting time of $l$ by a Markov chain starting at $k$ and with transition rates given by $(q^0(i,j))_{i,j\in\mathbb{N}}$. Let $(S_l)_{l \geq 2}$ be a sequence of independent exponential random variables, where $S_l$ has parameter $(l-1) \theta\nu_1 + l(l-1)/2$, and note that $S_l \sim H(l,l-1)$ for $l\geq 2$. Using the Markov property, one can easily see that $H(k,1)$ is equal in distribution to $\sum_{l=2}^k S_l$. Therefore, for any $i$ such that $T_{i+1} < T$ and any $k \geq 1$, we have
\begin{equation}\label{bou1}
\mathbb{P} \left (L_T^\omega((T-T_i)-) = 1 \mid L_T^\omega(T-T_{i+1}) = k \right ) = \mathbb{P} \left ( \sum_{l=2}^k S_l \leq T_{i+1} - T_{i} \right ) \geq \mathbb{P} \left ( \sum_{l=2}^{\infty} S_l \leq T_{i+1} - T_{i} \right ).
\end{equation}
Clearly $\sum_{l=2}^{\infty} \mathbb{E}[S_l] < \infty$, so $S^\infty\coloneqq \sum_{l=2}^{\infty} S_l<\infty$ almost surely. Moreover, since, for $l \geq 2$, the support of $S_l$ contains $0$, the support of $S^\infty$ contains $0$ as well. In particular, $q(s) \coloneqq \mathbb{P} ( \sum_{l=2}^{\infty} S_l \leq s)$ is positive for all $s > 0$. Thus, we obtain from \eqref{bou1} that, for $k\geq1$,
\begin{eqnarray}
\mathbb{P} \left (L_T^\omega((T-T_i)-) = 1 \mid L_T^\omega(T-T_{i+1}) = k \right ) \geq q(T_{i+1} - T_{i}). \label{probatouch0}
\end{eqnarray}
Let $L_T^\omega$, $\tilde L_T^\omega$ and $L_{T_i}^\omega$, $i\geq 0$, be independent realizations of the line-counting process of the pLD-ASG with environment $\omega$ (the subscript indicates the sampling time) and $L_T^\omega(0-)\sim \mu$, $\tilde L_T^\omega(0-)\sim\tilde \mu$, and $L_{T_i}^\omega(0-)=1$. Let
$i(T)\coloneqq \max\{i\in\mathbb{N}_0: T_i<T,\, L_T^\omega((T-T_i)-)=\tilde L_T^\omega((T-T_i)-)=1\}$, with the convention that the maximum of an empty set is $-\infty$. Set $T_{-\infty}\coloneqq-\infty$ for convenience. We define $(U_T^\omega(\beta))_{\beta \geq 0}$ and $(\tilde U_T^\omega(\beta))_{\beta \geq 0}$ by setting $U_T^\omega(\beta)\coloneqq L_T^\omega(\beta)$ and $\tilde U_T^\omega(\beta)\coloneqq \tilde L_T^\omega(\beta)$ for $\beta<T-T_{i(T)}$, and $U_T^\omega(\beta)\coloneqq\tilde U_T^\omega(\beta)\coloneqq L_{T_{i(T)}}^\omega(\beta-(T-T_{i(T)}))$ for $\beta \geq T-T_{i(T)}$.
Note that $U_T^\omega$ and $\tilde U_T^\omega$ have the same distribution as $L_T^\omega$ and $\tilde L_T^\omega$, respectively. In particular, we have $U_T^\omega(T-) \sim \mu_{T}^\omega(T)$ and $\tilde U^\omega_T(T-)\sim \tilde \mu^{\omega}_T(T)$. Moreover, we have $U_T^\omega(\beta)=\tilde U_T^\omega(\beta)$ for all $\beta \geq T-T_{i(T)}$. Therefore,
\begin{eqnarray}
d_{TV}(\mu_T^\omega(T),\tilde\mu_T^\omega(T)) \leq \mathbb{P} \left ( U_T^\omega(T-) \neq \tilde U_T^\omega(T-) \right ) \leq \mathbb{P} \left ( i(T) = -\infty \right ). \label{rapprochementsm}
\end{eqnarray}
Let $N(T)$ be the number of jumps of $\omega$ in $[0,T]$. According to \eqref{probatouch0}, we have, for $k_1, k_2 \geq 1$ with $k_1 \neq k_2$,
\begin{align*}
\mathbb{P} &\left (L_T^\omega((T-T_i)-)= 1,\, \tilde L_T^\omega((T-T_i)-) = 1 \mid L_T^\omega(T-T_{i+1}) = k_1, \tilde L_T^\omega(T-T_{i+1}) = k_2 \right ) \geq q(T_{i+1} - T_{i})^2.
\end{align*}
Therefore, using \eqref{rapprochementsm}, we obtain
\[ d_{TV}(\mu_T^\omega(T),\tilde\mu_T^\omega(T)) \leq \mathbb{P} \left ( i(T) = -\infty \right ) \leq \prod_{i=1}^{N(T)} \left ( 1-q(T_{i} - T_{i-1})^2 \right ) =: \varphi_{\omega}(T). \]
Note that $\varphi_{\omega}$ does not depend on $\mu$ and $\tilde \mu$. Recall that by assumption the sequence of jump times $T_1, T_2,\ldots$ is infinite and the distance between the successive jumps does not converge to $0$. Therefore, there is $\varepsilon > 0$ such that, for infinitely many indices $i$, we have $T_{i+1}-T_i > \varepsilon$. Thus, the number of factors that are at most $1-q(\varepsilon)^2 < 1$ in the product defining $\varphi_{\omega}(T)$ converges to infinity as $T\to\infty$. We deduce that $\varphi_{\omega}(T)\to 0$ as $T\to\infty$, which proves \eqref{tvd}, concluding the proof.
\end{proof}
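The quantity $q(s)=\mathbb{P}(\sum_{l\geq 2}S_l\leq s)$ appearing in the previous proof, and the resulting bound $\varphi_\omega(T)=\prod_i(1-q(T_i-T_{i-1})^2)$ on the total variation distance, are easy to estimate by simulation. The following sketch is not part of the proof; the parameter values are arbitrary samples, and the almost surely convergent series $\sum_{l\geq 2}S_l$ is truncated at a large index, which is harmless since the rates grow like $l^2/2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def q_of_s(s, theta, nu1, n_samples=20_000, l_max=200):
    totals = np.zeros(n_samples)
    for l in range(2, l_max + 1):
        rate = (l - 1) * theta * nu1 + l * (l - 1) / 2.0   # parameter of S_l
        totals += rng.exponential(1.0 / rate, n_samples)
    return np.mean(totals <= s)

def phi_bound(jump_times, theta, nu1):
    ts = np.concatenate(([0.0], np.sort(jump_times)))
    return np.prod([1.0 - q_of_s(g, theta, nu1) ** 2 for g in np.diff(ts)])

# e.g. equally spaced jumps: phi_bound(np.arange(1.0, 21.0), theta=1.0, nu1=0.5)
# decays geometrically in the number of jumps, in line with Claim 5.
\end{verbatim}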
The following explicit diagonalization of the matrix $Q^0$ (the transition
rate matrix of the process $L$ under the null environment) will allow us to obtain a more explicit expression for $h^{\omega}_T(x)$.
\begin{lemma}\label{diagpldasg}
Assume that $\sigma=0$ and set, for $k\in\mathbb{N}$, $\lambda_k\coloneqq -q^0(k,k)$ and $\gamma_k\coloneqq q^0(k,k-1)$, where $q^\mu(\cdot,\cdot)$ is defined in \eqref{kratespldasg}. In addition, let
\begin{itemize}
\item[(i)] $D$ be the diagonal matrix with diagonal entries $(-\lambda_i)_{i\in\mathbb{N}}$.
\item[(ii)] $U\coloneqq (u_{i,j})_{i,j\in\mathbb{N}}$, where, for all $i\in\mathbb{N}$, $u_{i,i} \coloneqq 1$, $u_{i,j} \coloneqq 0$ for $j > i$ and, when $i \geq 2$, $u_{i,i-1} \coloneqq \gamma_{i}/(\lambda_{i} - \lambda_{i-1})$ and the coefficients $(u_{i,j})_{j \in [i-2]}$ are defined via the recurrence relation
\begin{eqnarray}
u_{i,j} \coloneqq \frac{1}{\lambda_i - \lambda_j} \left ( \gamma_i u_{i-1,j} + \theta\nu_0 \left ( \sum_{l=j}^{i-2} u_{l,j} \right ) \right ). \label{recrelalphani4killed}
\end{eqnarray}
\item[(iii)] $V\coloneqq (v_{i,j})_{i,j\in\mathbb{N}}$, where, for all $i\in\mathbb{N}$, $v_{i,i} \coloneqq 1$, $v_{i,j} \coloneqq 0$ for $j > i$ and, when $i \geq 2$, the coefficients $(v_{i,j})_{j \in [i-1]}$ are defined via the recurrence relation
\begin{eqnarray}
v_{i,j} \coloneqq \frac{-1}{(\lambda_i - \lambda_j)} \left [ \left ( \sum_{l=j+2}^i v_{i,l} \right ) \theta\nu_0 + v_{i,j+1} \gamma_{j+1} \right ]. \label{recrelani4killed}
\end{eqnarray}
\end{itemize}
with the convention that an empty sum equals $0$. Then, we have
\[Q^0=U D V\quad \textrm{and}\quad U V=V U=Id.\]
\end{lemma}
\begin{proof}
Analogous to the proof of Lemma \ref{diag}.
\end{proof}
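The recurrences \eqref{recrelalphani4killed} and \eqref{recrelani4killed} are easy to implement on a finite truncation, which gives a quick sanity check of the relations $UV=VU=Id$. The sketch below (Python, illustrative only) uses $\lambda_k=(k-1)(k+\theta)$ and $\gamma_k=\lambda_k-(k-2)\theta\nu_0$ for $k\geq 2$, as computed in the proof of Lemma \ref{majoalphaki} below; since $U$ and $V$ are lower triangular with unit diagonal, truncating does not affect the product on the retained indices.
\begin{verbatim}
import numpy as np

def build_UV(K, theta, nu0):
    lam = np.array([(k - 1) * (k + theta) for k in range(1, K + 1)])
    gam = np.array([0.0] + [lam[k - 1] - (k - 2) * theta * nu0
                            for k in range(2, K + 1)])      # gam[k-1] = gamma_k
    U, V = np.eye(K), np.eye(K)
    for i in range(2, K + 1):
        U[i - 1, i - 2] = gam[i - 1] / (lam[i - 1] - lam[i - 2])
        for j in range(1, i - 1):                            # Eq. (recrelalphani4killed)
            s = U[j - 1:i - 2, j - 1].sum()                  # sum_{l=j}^{i-2} u_{l,j}
            U[i - 1, j - 1] = (gam[i - 1] * U[i - 2, j - 1]
                               + theta * nu0 * s) / (lam[i - 1] - lam[j - 1])
        for j in range(i - 1, 0, -1):                        # Eq. (recrelani4killed)
            tail = V[i - 1, j + 1:i].sum()                   # sum_{l=j+2}^{i} v_{i,l}
            V[i - 1, j - 1] = -(tail * theta * nu0
                                + V[i - 1, j] * gam[j]) / (lam[i - 1] - lam[j - 1])
    return U, V

U, V = build_UV(K=12, theta=1.3, nu0=0.4)
print(np.abs(U @ V - np.eye(12)).max(), np.abs(V @ U - np.eye(12)).max())
\end{verbatim}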
Now, we consider the polynomials $S_k$, $k \in\mathbb{N}$, defined via
\begin{eqnarray}
S_k(x) \coloneqq \sum_{i=1}^{k} v_{k,i} x^i. \label{newbasis9pldasg}
\end{eqnarray}
In addition, for $z\in(0,1)$, we define the matrices $\Bs(z)\coloneqq (\Bs_{i,j}(z))_{i,j\in\mathbb{N}}$ and $\Phi(z)\coloneqq (\Phi_{i,j}(z))_{i,j\in\mathbb{N}}$ via
\begin{equation} \label{defbetakipldasg}
\Bs_{i,j}(z)\coloneqq \mathbb{P}(i+B_i(z)=j),\quad \textrm{$i,j\in\mathbb{N}$},\quad\textrm{and}\quad \Phi(z)\coloneqq U^\top \Bs(z)^\top V^\top,
\end{equation}
where $B_i(z)\sim\bindist{i}{z}$. The fact that the matrix product defining $\Phi(z)$ is well-defined can be justified as in the proof of Theorem \ref{thmf3}. The same is true for the matrix products in \eqref{defmatApldasg} and \eqref{defcoeff4}.
\begin{theorem} \label{thmf4}
Assume that $\sigma=0$ and let $\omega$ be a simple environment with infinitely many jumps on $[0,\infty)$ and such that the distance between the successive jumps does not converge to $0$. Let $N$ be the number of jumps of $\omega$ in $(0,T)$ and let $(T_i)_{i=1}^N$ be the sequence of the jump times in increasing order. We set $T_0 \coloneqq 0$ for convenience. For any $m\in[N]$, we define the matrix $A_m(\omega)\coloneqq (A_{i,j}^{m}(\omega))_{i,j\in\mathbb{N}}$ by
\begin{eqnarray}
A_m(\omega) \coloneqq \exp \left ( (T_m - T_{m-1}) D \right ) \Phi(\Delta \omega(T_m)). \label{defmatApldasg}
\end{eqnarray}
Then, for all $x \in (0,1)$, we have
\begin{eqnarray}
h^{\omega}_T(x) = 1 - \sum_{k = 1}^{2^N} C_{1,k}(\omega,T) S_k(1-x), \label{exprfctgenlt4}
\end{eqnarray}
where the matrix $C(\omega,T)\coloneqq (C_{n,k}(\omega,T))_{k,n\in\mathbb{N}}$ is given by
\begin{align}
C(\omega,T)\coloneqq U \exp \left ( (T-T_N)D \right ) \left[A_1(\omega)A_{2}(\omega)\cdots A_N(\omega)\right]^\top. \label{defcoeff4}
\end{align}
Moreover, for any $x\in(0,1)$,
\begin{eqnarray}
h^{\omega}(x) = 1 - \sum_{k = 1}^{\infty} C_{1,k}(\omega, \infty) S_k(1-x), \label{exprfctgenlt4inf}
\end{eqnarray}
where the series in \eqref{exprfctgenlt4inf} is convergent
and where
\begin{align}
C_{1,k}(\omega, \infty) \coloneqq \lim_{m \rightarrow \infty} \left ( U \left[A_1(\omega)A_{2}(\omega)\cdots A_m(\omega)\right]^\top \right )_{1,k}, \label{defcoeff4inf}
\end{align}
and the above limit is well-defined.
\end{theorem}
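All the ingredients of \eqref{exprfctgenlt4} are explicit: $U$, $V$ and $D$ come from Lemma \ref{diagpldasg}, the matrices $\Bs(z)$ are binomial, and the environment enters only through finitely many jumps. The following Python sketch evaluates $h_T^\omega(x)$ on a finite truncation; it is an illustration rather than part of the proof, the truncation size $K$ is an assumption (it should be increased until the output stabilizes), and the formulas $\lambda_k=(k-1)(k+\theta)$ and $\gamma_k=\lambda_k-(k-2)\theta\nu_0$ are taken from the proof of Lemma \ref{majoalphaki} below.
\begin{verbatim}
import numpy as np
from scipy.stats import binom

def build_UVD(K, theta, nu0):                    # Lemma (diagpldasg), truncated
    lam = np.array([(k - 1) * (k + theta) for k in range(1, K + 1)])
    gam = np.array([0.0] + [lam[k - 1] - (k - 2) * theta * nu0
                            for k in range(2, K + 1)])
    U, V = np.eye(K), np.eye(K)
    for i in range(2, K + 1):
        U[i - 1, i - 2] = gam[i - 1] / (lam[i - 1] - lam[i - 2])
        for j in range(1, i - 1):
            s = U[j - 1:i - 2, j - 1].sum()
            U[i - 1, j - 1] = (gam[i - 1] * U[i - 2, j - 1]
                               + theta * nu0 * s) / (lam[i - 1] - lam[j - 1])
        for j in range(i - 1, 0, -1):
            tail = V[i - 1, j + 1:i].sum()
            V[i - 1, j - 1] = -(tail * theta * nu0
                                + V[i - 1, j] * gam[j]) / (lam[i - 1] - lam[j - 1])
    return U, V, lam

def h_T(x, T, jump_times, jump_sizes, theta, nu0, K=32):
    U, V, lam = build_UVD(K, theta, nu0)
    idx = np.arange(1, K + 1)
    B = lambda z: binom.pmf(idx[None, :] - idx[:, None], idx[:, None], z)
    Phi = lambda z: U.T @ B(z).T @ V.T           # Eq. (defbetakipldasg)
    Ts = [0.0] + list(jump_times)                # increasing jump times in (0, T)
    prod = np.eye(K)
    for m in range(1, len(Ts)):                  # A_m, Eq. (defmatApldasg)
        E = np.diag(np.exp(-(Ts[m] - Ts[m - 1]) * lam))
        prod = prod @ (E @ Phi(jump_sizes[m - 1]))
    C = U @ np.diag(np.exp(-(T - Ts[-1]) * lam)) @ prod.T    # Eq. (defcoeff4)
    return 1.0 - C[0, :] @ (V @ (1.0 - x) ** idx)            # Eq. (exprfctgenlt4)
\end{verbatim}
Increasing $K$ and comparing successive outputs gives a simple check that the chosen truncation is adequate for the given environment.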
\begin{proof}
We are interested in the generating function of $L_T^\omega(T-)$.
For $s>0$, we define the stochastic matrix $P^T_s(\omega)\coloneqq (p^T_{i,j}(\omega,s))_{i,j\in\mathbb{N}}$ via
\[p^T_{i,j}(\omega,s)\coloneqq \mathbb{P}(L_T^\omega(s-)=j\mid L_T^\omega(0-)=i).\]
We also define $(M(s))_{s\geq 0}$ via $M(s)\coloneqq \exp (sQ^0 )$, i.e. $M$ is the semi-group of $L^{{\textbf{0}}}$. Let $T_1<T_{2}<\cdots<T_N$ be the sequence of jump times of $\omega$ in $[0,T]$. Disintegrating with respect to the values of $L_T^\omega( (T-T_i)-)$ and $L_T^\omega( T - T_i)$, $i\in [N]$, we obtain
\begin{equation}\label{ptdecpldasg}
P^T_T(\omega)=M(T-T_N)\Bs(\Delta\omega (T_N))M(T_N-T_{N-1})\Bs(\Delta\omega(T_{N-1}))\cdots \Bs(\Delta\omega(T_1))M(T_1).
\end{equation}
In addition,
\begin{equation}\label{gfrtpldasg}
\mathbb{E}[y^{L_T^\omega(T-)} \mid L_T^\omega(0-)=n] = (P^T_T(\omega) \rho(y))_n,\quad \textrm{where}\quad \rho(y)\coloneqq (y^i)_{i\in\mathbb{N}}.
\end{equation}
Thanks to Lemma \ref{diagpldasg}, we have $M(\beta)=U E(\beta)V$, where $E(\beta)$ is the diagonal matrix with diagonal entries $(e^{-\lambda_j \beta})_{j\in\mathbb{N}}$. Moreover, $\rho(y)=U S(y)$, where $S(y)\coloneqq(S_k(y))_{k\in\mathbb{N}}$. Using this together with Eq. \eqref{ptdecpldasg} and the relations $M(\beta)=U E(\beta)V$ and $V U=Id$, we obtain
\begin{equation}\label{gfrtpldasg2}
P^T_T(\omega) \rho(y)=U E(T-T_N) \Phi(\Delta \omega(T_N))^\top E(T_N-T_{N-1})\Phi(\Delta \omega(T_{N-1}))^\top\cdots\Phi(\Delta \omega(T_1))^\top E(T_1)S(y).
\end{equation}
Thus, using the definition of the matrices $A_i(\omega)$, we get
\begin{align}
P^T_T(\omega) \rho(y)&= U E(T-T_N) A_N(\omega)^\top A_{N-1}(\omega)^\top\cdots A_1(\omega)^\top S(y) \nonumber \\
&= U E(T-T_N) \left[A_1(\omega)A_{2}(\omega)\cdots A_N(\omega)\right]^\top S(y)=C(\omega,T)S(y). \nonumber
\end{align}
Now, using the previous identity, Lemma \ref{hTq} and Eq. \eqref{gfrtpldasg}, we obtain
\begin{align*}
h^{\omega}_T(x)=1-\mathbb{E}\left[(1-x)^{L_T^\omega(T-)} \mid L_T^\omega(0-)=1\right] = 1 - \sum_{k = 1}^{\infty} C_{1,k}(\omega,T) S_k(1-x).
\end{align*}
Proceeding as in the proof of Theorem \ref{thmf3}, one shows that $C_{1,k}(\omega,T)=0$ for $k > 2^N$, and \eqref{exprfctgenlt4} follows.
Let us now analyze $C_{1,k}(\omega,T)$ as $T\to\infty$. First, note that, on the one hand, from \eqref{exprfctgenlt4}, we have
\[\mathbb{E}[y^{L_T^\omega(T-)}\mid L_T^\omega(0-)=1]=\sum_{k=1}^{\infty} C_{1,k}(\omega,T) S_k(y).\]
On the other hand, we have
\[\mathbb{E}[y^{L_T^\omega(T-)}\mid L_T^\omega(0-)=1]=\sum_{k=1}^{\infty} \mathbb{P}(L_T^\omega(T-)=k \mid L_T^\omega(0-)=1) y^k.\]
Since $U^\top$ is the change-of-basis matrix from the basis $(y^k)_{k \in \mathbb{N}}$ to the basis $(S_k(y))_{k \in \mathbb{N}}$, we deduce that
\begin{equation}\label{hamma}
C_{1,k}(\omega,T) = \sum_{i \in \mathbb{N}} u_{i,k} \mathbb{P}(L_T^\omega(T-)=i \mid L_T^\omega(0-)=1) = \mathbb{E} [ u_{L_T^\omega(T-),k} \mid L_T^\omega(0-)=1 ].
\end{equation}
From Theorem \ref{thmfa}, we know that the distribution of $L_T^\omega(T-)$ converges when $T\to\infty$. In addition, according to Lemma \ref{majoalphaki}, the function $i \mapsto u_{i,k}$ is bounded, and hence $C_{1,k}(\omega,T)$ converges to a real number.
Recall that $T_1 < T_2 < \cdots$ is the increasing sequence of the jump times of $\omega$ and that this sequence converges to infinity. Therefore,
\[ \lim_{T \rightarrow \infty} C_{1,k}(\omega,T) = \lim_{m \rightarrow \infty} C_{1,k}(\omega,T_m) = \lim_{m \rightarrow \infty} \left ( U \left[A_1(\omega)A_{2}(\omega)\cdots A_m(\omega)\right]^\top \right )_{1,k}, \]
where we used \eqref{defcoeff4} in the last step. This shows that the limit on the right hand side of \eqref{defcoeff4inf} exists and equals $\lim_{T \rightarrow \infty} C_{1,k}(\omega,T)$.
It remains to prove \eqref{exprfctgenlt4inf} together with the convergence of the corresponding series. We already know from Theorem \ref{thmfa} that $h^{\omega}_T(x)$ converges to $h^{\omega}(x)$ when $T\to\infty$, and we have just proved \eqref{exprfctgenlt4} and that, for any $k \geq 1$, $C_{1,k}(\omega,T)$ converges to $C_{1,k}(\omega,\infty)$, defined in \eqref{defcoeff4inf}, when $T\to\infty$. Now, we claim that, for all $y\in[0,1]$, $k\geq 1$ and $T>T_1$,
\begin{equation}\tag{Claim 6}\label{bornecvdom}
| C_{1,k}(\omega,T) S_k(y) | \leq 4^k \times (2ek)^{(k+\theta)/2} e^{- \lambda_k T_1}.
\end{equation}
Assume that \eqref{bornecvdom} is true. Then \eqref{exprfctgenlt4inf} and the convergence of the series follow using the dominated convergence theorem. It only remains to prove \eqref{bornecvdom}. As in the proof of \eqref{hamma}, one shows that
\[\left(P^T_{(T-T_1)+}(\omega) \rho(y)\right)_1=\mathbb{E}[y^{L_T^\omega(T-T_1)}\mid L_T^\omega(0-)=1]=\sum_{k=1}^{\infty} \tilde{C}_{1,k}(\omega,T) S_k(y),\]
where $\tilde{C}_{1,k}(\omega,T) = \mathbb{E} [ u_{L_T^\omega(T-T_1),k} \mid L_T^\omega(0-)=1 ]$. Proceeding as in the proof of \eqref{gfrtpldasg2}, we can prove that
\begin{eqnarray}
P^T_{(T-T_1)+}(\omega) \rho(y)=U E(T-T_N) \Phi(\Delta \omega(T_N))^\top E(T_N-T_{N-1})\Phi(\Delta \omega(T_{N-1}))^\top\cdots\Phi(\Delta \omega(T_1))^\top S(y). \label{gfrtpldasg3}
\end{eqnarray}
Since $E(T_1)$ is diagonal with entries $(e^{-\lambda_j T_1})_{j\in\mathbb{N}}$, we conclude from \eqref{gfrtpldasg2} and \eqref{gfrtpldasg3} that $C_{1,k}(\omega,T) = e^{-\lambda_k T_1} \tilde C_{1,k}(\omega,T)$. Therefore,
\[ C_{1,k}(\omega,T) = e^{-\lambda_k T_1} \mathbb{E} [ u_{L_T^\omega(T-T_1),k} \mid L_T^\omega(0-)=1 ]. \]
This together with Lemma \ref{majoalphaki} (see below) implies that, for all $k\geq 1$ and $T>T_1$,
\[ |C_{1,k}(\omega,T)| \leq (2ek)^{(k+\theta)/2} e^{- \lambda_k T_1}. \]
Combining this with Lemma \ref{majosommedesaki}, we obtain \eqref{bornecvdom}, which concludes the proof.
\end{proof}
\begin{lemma} \label{majoalphaki} For all $k\geq 1$,
\[ \sup_{j \geq 1} u_{j,k} \leq (2ek)^{(k+\theta)/2}. \]
\end{lemma}
\begin{proof}
Let $k \geq 1$. By the definition of the matrix $U$ in Lemma \ref{diagpldasg}, the sequence $(u_{j,k})_{j \geq 1}$ satisfies
\[u_{j,k} = 0 \ \text{for} \ j < k, \ u_{k,k} = 1, \ u_{k+1,k} = \frac{\gamma_{k+1}}{\lambda_{k+1} - \lambda_{k}},\]
\[u_{k+l,k} = \frac{1}{\lambda_{k+l} - \lambda_k} \left ( \gamma_{k+l} u_{k+l-1,k} + \theta\nu_0 \sum_{j=0}^{l-2} u_{k+j,k} \right ) \ \text{for} \ l \geq 2.\]
Let $M_k^{j} \coloneqq \sup_{i \leq j} u_{i,k}$. By the definitions of $\gamma_{k+1}$, $\lambda_{k+1}$, $\lambda_k$ (see Lemma \ref{diagpldasg}), we have
\[\gamma_{k+1} = \lambda_{k+1}- (k-1) \theta\nu_0 > \lambda_{k+1} - \lambda_{k}.\]
This together with the above expressions yields that
\[M_k^k = 1, \qquad M^{k+1}_k = \frac{\lambda_{k+1}- (k-1) \theta\nu_0}{\lambda_{k+1} - \lambda_{k}} \leq \frac{\lambda_{k+1}}{\lambda_{k+1} - \lambda_{k}} = 1 + \frac{\lambda_{k}}{\lambda_{k+1} - \lambda_k}.\]
Moreover, for $l\geq 2$, we have \[u_{k+l,k}\leq M^{k+l-1}_k \frac{\gamma_{k+l} + (l-1) \theta\nu_0}{\lambda_{k+l}-\lambda_k}=M^{k+l-1}_k \frac{\lambda_{k+l} - (k-1) \theta\nu_0}{\lambda_{k+l}-\lambda_k}\leq M^{k+l-1}_k \frac{\lambda_{k+l}}{\lambda_{k+l}-\lambda_k}.\]
Hence, we have
\[M^{k+l}_k =M^{k+l-1}_k\vee u_{k+l,k} \leq M^{k+l-1}_k \times \frac{\lambda_{k+l}}{\lambda_{k+l} - \lambda_k} =M^{k+l-1}_k \times \left ( 1 + \frac{\lambda_{k}}{\lambda_{k+l} - \lambda_k} \right ).\]
As a consequence, we have
\begin{eqnarray}
\sup_{j \geq 1} u_{j,k} \leq \prod_{l=1}^{\infty} \left ( 1 + \frac{\lambda_{k}}{\lambda_{k+l} - \lambda_k} \right ) =: M^{\infty}_k. \label{majoparprodinf}
\end{eqnarray}
Since $\lambda_{k+l} \sim l^2$ as $l\to\infty$, it is easy to see that the infinite product $M^{\infty}_k$ is finite. Then,
\[ M^{\infty}_k = \exp \left [ \sum_{l=1}^{\infty} \log \left ( 1 + \frac{\lambda_{k}}{\lambda_{k+l} - \lambda_k} \right ) \right ] \leq \exp \left [ \sum_{l=1}^{\infty} \frac{\lambda_{k}}{\lambda_{k+l} - \lambda_k} \right ] \leq \exp \left [ \frac{\lambda_{k} \log(2ek)}{2(k-1)} \right ], \]
where we used Lemma \ref{sommeinverselambda} (see below) in the last step. Since $\lambda_{k} = (k-1)(k+\theta)$ (see Lemma \ref{diagpldasg}), the desired result follows.
\end{proof}
\begin{lemma} \label{sommeinverselambda} For all $k\in\mathbb{N}$,
\[ \sum_{l=1}^{\infty} \frac{1}{\lambda_{k+l} - \lambda_k} \leq \frac{\log(2ek)}{2(k-1)}. \]
\end{lemma}
\begin{proof}
Using the definition of $\lambda_k$ in Lemma \ref{diagpldasg}, we have
\begin{align*}
\sum_{l=1}^{\infty} \frac{1}{\lambda_{k+l} - \lambda_k} & \leq \sum_{l=1}^{\infty} \frac{1}{(k+l)(k+l-1) - k(k-1)}
\leq \frac{1}{(k+1)k - k(k-1)} + \int_k^{\infty} \frac{1}{x(x+1) - k(k-1)} \dd x \\
& = \frac{1}{2k} + \int_1^{\infty} \frac{1}{u^2 + (2k-1)u} \dd u = \frac{1}{2k} + \lim_{a \rightarrow \infty} \int_1^{a} \frac{1}{u(u + 2k-1)} \dd u \\
& \leq \frac{1}{2k-1}\left[1 + \lim_{a \rightarrow \infty} \left ( \int_1^{a} \frac{1}{u} \dd u - \int_1^{a} \frac{1}{u + 2k-1} \dd u \right ) \right]\\
&= \frac{1}{2k-1}\left[1 + \lim_{a \rightarrow \infty} \log\left(\frac{2ka}{a+2k-1}\right ) \right]\leq \frac{\log(2ek)}{2(k-1)}.
\end{align*}
\end{proof}
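A quick numerical check of this bound (illustrative only; $\theta=1.3$ is an arbitrary sample value and the series is truncated at a large index) can be carried out with $\lambda_k=(k-1)(k+\theta)$ as in the proof of Lemma \ref{majoalphaki}; we start at $k=2$ because the right-hand side degenerates at $k=1$.
\begin{verbatim}
import math

theta = 1.3
lam = lambda k: (k - 1) * (k + theta)            # lambda_k
for k in range(2, 8):
    series = sum(1.0 / (lam(k + l) - lam(k)) for l in range(1, 200_000))
    bound = math.log(2 * math.e * k) / (2 * (k - 1))
    print(k, round(series, 4), round(bound, 4), series <= bound)
\end{verbatim}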
\begin{lemma} \label{majosommedesaki} For all $k\in\mathbb{N}$, we have
\[ \sup_{y \in [0,1]} \left | S_k(y) \right | \leq 4^{k}. \]
\end{lemma}
\begin{proof}
By the definition of the polynomials $S_k$ in \eqref{newbasis9pldasg}, we have, for $k\geq 1$,
\begin{eqnarray}
\sup_{y \in [0,1]} \left | S_k(y) \right | \leq \sum_{i=1}^{k} | v_{k,i} |. \label{sup<sumcoef}
\end{eqnarray}
Let us fix $k \geq 1$ and define $S^k_j \coloneqq \sum_{i = j}^{k} | v_{k,i} |$. Note that $S^k_j = 0$ for $j > k$ and that $S^k_k = 1$ by the definition of the matrix $(v_{i,j})_{i,j \in \mathbb{N}}$ in Lemma \ref{diagpldasg}. In particular, the result is true for $k=1$. Thus, we assume that $k > 1$ from now on. Using \eqref{recrelani4killed}, we see that, for any $j \in [k-1]$,
\begin{align*}
S^k_j = S^k_{j+1} + | v_{k,j} | & = S^k_{j+1} + \left | \frac{-1}{(\lambda_k - \lambda_j)} \left [ \left ( \sum_{l=j+2}^k v_{k,l} \right ) \theta\nu_0 + v_{k,j+1} \gamma_{j+1} \right ] \right | \\
& \leq S^k_{j+1} + \frac{1}{(\lambda_k - \lambda_j)} \left [ S^k_{j+2} \theta\nu_0 + (S^k_{j+1}-S^k_{j+2}) \gamma_{j+1} \right ] \\
& \leq \left ( 1 + \frac{\gamma_{j+1}}{\lambda_k - \lambda_j} \right ) S^k_{j+1} + \frac{\theta\nu_0 - \gamma_{j+1}}{\lambda_k - \lambda_j} S^k_{j+2}.
\end{align*}
Note that $\frac{\theta\nu_0 - \gamma_{j+1}}{\lambda_k - \lambda_j}\leq 0$, because of the definition of the coefficients $\gamma_i$ in Lemma \ref{diagpldasg}. Thus, for $j\in[k-1]$,
\begin{eqnarray}
S^k_j \leq \left ( 1 + \frac{\gamma_{j+1}}{\lambda_k - \lambda_j} \right ) S^k_{j+1}. \label{relrecskj}
\end{eqnarray}
By the definitions of $\gamma_{j+1}$, $\lambda_k$, $\lambda_j$ in Lemma \ref{diagpldasg}, and using that $j<k$, we have
\begin{align*}
\frac{\gamma_{j+1}}{\lambda_k - \lambda_j} & = \frac{(j+1)j + j \theta \nu_1 + \theta\nu_0}{k(k-1) - j(j-1) + (k-j) \theta \nu_1 + (k-j) \theta\nu_0} \\
& \leq \frac{(j+1)j}{j(k-1) - j(j-1)} + \frac{j \theta \nu_1}{(k-j) \theta \nu_1} + \frac{\theta\nu_0}{(k-j) \theta\nu_0} \leq 2\, \frac{j+1}{k-j}.
\end{align*}
In particular,
\[ 1 + \frac{\gamma_{j+1}}{\lambda_k - \lambda_j} \leq \frac{k + j + 2}{k-j}. \]
Plugging this into \eqref{relrecskj} yields, for all $j\in[k-1]$,
\[ S^k_j \leq \frac{k + j + 2}{k-j} S^k_{j+1}. \]
Then, applying the previous inequality recursively and combining with $S^k_k = 1$, we get
\[ \sum_{i=1}^{k} | v_{k,i} | = S^k_1 \leq \prod_{j=1}^{k-1} \frac{k + j + 2}{k-j} =\binom{2k+1}{k-1}= \frac{\binom{2k+1}{k-1} + \binom{2k+1}{k+2}}{2} \leq 2^{2k+1}/2 = 4^k. \]
Combining with \eqref{sup<sumcoef}, we obtain the desired result.
\end{proof}
\appendix
\section{\texorpdfstring{$J_1$-Skorokhod topology}{J1-Skorokhod topology} and weak convergence}
\subsection{Definitions and remarks on the \texorpdfstring{$J_1$-Skorokhod topology}{J1-Skorokhod topology}}\label{A1}
For $T>0$, as in the beginning of Section~\ref{S2}, we denote by $\Db_{0,T}$ the space of c\`{a}dl\`{a}g functions on $[0,T]$ with values in $\mathbb{R}$. Let $\Cs_T^\uparrow$ denote the set of increasing, continuous functions from $[0,T]$ onto itself. For $\lambda\in\Cs_T^\uparrow$, we set
\begin{eqnarray}
\lVert \lambda\rVert_T^0\coloneqq \sup_{0\leq u<s\leq T }\left \lvert \log\left(\frac{\lambda(s)-\lambda(u)}{s-u}\right)\right\rvert. \label{normbij}
\end{eqnarray}
We define the Billingsley metric $d_T^0$ on $\Db_{0,T}$ via
\begin{eqnarray}
d_T^0(f,g)\coloneqq \inf_{\lambda\in\Cs_T^\uparrow}\{\lVert \lambda\rVert_T^0\vee \lVert f-g\circ\lambda\rVert_{T,\infty} \}, \ \text{where} \ \lVert f\rVert_{T,\infty}\coloneqq \sup_{s\in[0,T]}|f(s)|. \label{defdt0}
\end{eqnarray}
The metric $d_T^0$ induces the $J_1$-Skorokhod topology on $\Db_{0,T}$. An important feature is that the space $(\Db_{0,T},d_T^0)$ is separable and complete. The role of the time change $\lambda$ in the definition of $d_T^0$ is to capture the fact that two c\`{a}dl\`{a}g functions can be close in spite of a small difference between their jump times.
For $T>0$, a function $\omega\in\Db_{0,T}$ is said to be pure-jump if $\sum_{u\in(0,T]}|\Delta \omega(u)|<\infty$ and,
for all $t\in (0,T]$, \[\omega(t)-\omega(0)=\sum_{u\in(0,t]}\Delta \omega(u),\]
where $\Delta \omega(u)\coloneqq \omega(u)-\omega(u-)$, $u\in[0,T]$. On the set of pure-jump functions, we consider the following metric:
\begin{eqnarray}
d_T^\star (\omega_1,\omega_2)\coloneqq \inf_{\lambda\in\Cs_T^\uparrow}\left\{\lVert \lambda\rVert_T^0\vee \sum_{u\in[0,T]} \left\lvert \Delta \omega_1(u)-\Delta (\omega_2\circ \lambda)(u)\right\rvert \right\}.\label{defdtstar}
\end{eqnarray}
The next result provides comparison inequalities between the metrics $d_T^0$ and $d_T^\star$.
\begin{lemma}\label{zero-star}
Let $\omega_1$ and $\omega_2$ be two pure-jump functions with $\omega_1(0)=\omega_2(0)=0$. Then
\[d_T^0(\omega_1,\omega_2)\leq d_T^\star(\omega_1,\omega_2).\]
If, in addition, $\omega_1$ and $\omega_2$ are non-decreasing and $\omega_1$ jumps exactly $n$ times in $[0,T]$, then
\[d_T^\star(\omega_1,\omega_2)\leq (4n+3) d_T^0(\omega_1,\omega_2).\]
\end{lemma}
\begin{proof}
Let $\lambda\in \Cs_T^\uparrow$ and set $f\coloneqq \omega_1$ and $g\coloneqq \omega_2\circ\lambda$. Since $f$ and $g$ are pure-jump functions with the same value at $0$, we have, for any $t\in[0,T]$,
\[\lvert f(t)-g(t)\rvert=\left\lvert\sum_{u\in[0,t]}(\Delta f(u)-\Delta g(u))\right\rvert\leq \sum_{u\in[0,t]}\left\lvert\Delta f(u)-\Delta g(u)\right\rvert.\]
The first inequality follows. Now, assume that $\omega_1$ and $\omega_2$ are non-decreasing and that $\omega_1$ has $n$ jumps in $[0,T]$. Let $t_1<\cdots<t_n$ be the consecutive jump times of $\omega_1.$ We first prove that, for any $k\in[n]$,
\begin{equation}\label{boundsumjumps}
\sum_{u\in[0,t_k]}\lvert\Delta f(u)-\Delta g(u) \rvert \leq (4k+1) \lVert f-g\rVert_{t_k,\infty},
\end{equation}
where $\lVert \cdot \rVert_{t,\infty}$, $t>0$, is defined in \eqref{defdt0}. We proceed by induction on $k$. Note that
\[\sum_{u\in[0,t_1]}\lvert\Delta f(u)-\Delta g(u) \rvert=\sum_{u\in[0,t_1)}\Delta g(u) +\lvert\Delta f(t_1)-\Delta g(t_1) \rvert
\leq g(t_1-)+2\lVert f-g\rVert_{t_1,\infty}\leq 3\lVert f-g\rVert_{t_1,\infty},\]
which proves \eqref{boundsumjumps} for $k=1$. Now, assuming that \eqref{boundsumjumps} is true for some $k\in[n-1]$, we obtain
\begin{align*}
\sum_{u\in[0,t_{k+1}]}\!\!\!\!\lvert\Delta f(u)-\Delta g(u) \rvert&=\sum_{u\in[0,t_k]}\lvert\Delta f(u)-\Delta g(u) \rvert +\sum_{u\in(t_{k},t_{k+1})}\!\!\!\Delta g(u)+\lvert\Delta f(t_{k+1})-\Delta g(t_{k+1}) \rvert\\
&\leq (4k+1)\lVert f-g\rVert_{t_k,\infty}+g(t_{k+1}-)-g(t_k)+2\lVert f-g\rVert_{t_{k+1},\infty}\\
&= (4k+1)\lVert f-g\rVert_{t_k,\infty}+(g(t_{k+1}-)-f(t_{k+1}-))-(g(t_k)-f(t_{k}))+2\lVert f-g\rVert_{t_{k+1},\infty}\\
&\leq (4k+1)\lVert f-g\rVert_{t_k,\infty}+4\lVert f-g\rVert_{t_{k+1},\infty}\leq (4(k+1)+1)\lVert f-g\rVert_{t_{k+1},\infty}.
\end{align*}
Hence, \eqref{boundsumjumps} also holds for $k+1$. This ends the proof of \eqref{boundsumjumps} by induction. Finally, using \eqref{boundsumjumps}, we get
\begin{align*}
\sum_{u\in[0,T]}\lvert\Delta f(u)-\Delta g(u) \rvert&=\sum_{u\in[0,t_{n}]}\lvert\Delta f(u)-\Delta g(u) \rvert+\sum_{u\in(t_{n},T]}\Delta g(u)\\
&\leq (4n+1)\lVert f-g\rVert_{t_{n},\infty}+ g(T)-g(t_n)\leq (4n+3)\lVert f-g\rVert_{T,\infty},
\end{align*}
ending the proof.
\end{proof}
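The key estimate in the previous proof, $\sum_{u\in[0,T]}\lvert\Delta f(u)-\Delta g(u)\rvert\leq (4n+3)\lVert f-g\rVert_{T,\infty}$, can be illustrated numerically for the identity time change $\lambda(s)=s$ (so $f=\omega_1$ and $g=\omega_2$). The sketch below generates two random non-decreasing pure-jump paths starting at $0$ and compares the two sides; it is a toy illustration with arbitrary parameters, not an evaluation of the infima defining $d_T^0$ and $d_T^\star$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
T, n = 10.0, 5
w1 = (np.sort(rng.uniform(0, T, n)), rng.uniform(0, 1, n))      # (times, sizes)
w2 = (np.sort(rng.uniform(0, T, n + 3)), rng.uniform(0, 1, n + 3))

def value(path, t):                       # right-continuous value at time t
    times, sizes = path
    return sizes[times <= t].sum()

jumps = np.union1d(w1[0], w2[0])
lhs = sum(abs((value(w1, t) - value(w1, t - 1e-9))
              - (value(w2, t) - value(w2, t - 1e-9))) for t in jumps)
pts = np.concatenate(([0.0], jumps, jumps - 1e-9, [T]))
sup = max(abs(value(w1, t) - value(w2, t)) for t in pts)
print(lhs, (4 * n + 3) * sup, lhs <= (4 * n + 3) * sup)
\end{verbatim}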
\subsection{Bounded Lipschitz metric and weak convergence}\label{A2}
Let $(E,d)$ denote a complete and separable metric space. It is well known that the topology of weak convergence of probability measures on $E$ is induced by the Prokhorov metric. An alternative metric inducing this topology is given by the bounded Lipschitz metric, whose definition is recalled in this section.
\begin{definition}[Lipschitz function] A real-valued function $F$ on $(E,d)$ is said to be Lipschitz if there is $K>0$ such that
\[|F(x)-F(y)|\leq K d(x,y),\quad\textrm{for all $x,y\in E$}.\]
We denote by $\textrm{BL}(E)$ the vector space of bounded Lipschitz functions on $E$. The space $\textrm{BL}(E)$ is equipped with the norm
\begin{equation}
\lVert F\rVert_{\textrm{BL}}\coloneqq \sup_{x\in E}|F(x)| \vee \sup_{x,y\in E:\, x\neq y}\left\{\frac{|F(x)-F(y)|}{d(x,y)}\right\},\quad F\in \textrm{BL}(E). \label{defnormbl}
\end{equation}
\end{definition}
\begin{definition}[Bounded Lipschitz metric] \label{defblmenv}
Let $\mu,\nu$ be two probability measures on $E$. The bounded Lipschitz distance between $\mu$ and $\nu$ is defined by
\begin{equation}\varrho_{E}(\mu,\nu)\coloneqq \sup\left\{\left\lvert \int F \,d\mu- \int F \,d\nu\right\rvert: F\in \textrm{BL}(E),\, \lVert F\rVert_{\textrm{BL}}\leq 1 \right\}.\label{defblm} \end{equation}
\end{definition}
The bounded Lipschitz distance defines a metric on the space of probability measures on $E$. Moreover, the bounded Lipschitz distance metrizes the weak convergence of probability measures on $E$, i.e.
\[\varrho_E(\mu_n,\mu)\xrightarrow[n\to\infty ]{}0\quad \Longleftrightarrow\quad \mu_n \xrightarrow[n\to\infty]{(d)}\mu.\]
\section{Table of notations}\label{Nota}
\begin{table}[h!]
\begin{tabularx}{\textwidth}{@{}XXX@{}}
\toprule
\textit{Notation} & \textit{Meaning} & \textit{First appearance} \\
\midrule
$X, (X(t))_{t \geq 0}$ & solution of \eqref{WFSDE} & Introduction\\
$\sigma, \theta, \nu_0, \nu_1$ & parameters of $X$ & Introduction \\
$J, (J(t))_{t \geq 0}$ & cumulative effect of environment & Introduction\\
$\bar{J}^T, (\bar{J}^T(\beta))_{\beta\in[0,T]}$ & time reversal of $J$ & Section \ref{s24} \\
$\zeta, (t_i,p_i)_{i\in I}$ & collection of jumps of $J$ & Introduction \\
$\mu$ & L\'evy measure of $J$ & Introduction\\
$S(t)$ & $\sigma t+J(t)$ & Introduction \\
$\omega, (\omega(t))_{t \geq 0}$ & fixed realization of $J$ & Section \ref{s21} \\
$\Delta \omega(t)$ & jump of $\omega$ at time $t$ & Section \ref{s21} \\
$\omega^\delta/\omega_\delta$ & $\omega$ with small/large jumps removed & Eq. \eqref{defomdelta}\\
$\textbf{0}$ & null environment & Section \ref{s21} \\
$(X^\omega(t))_{t \geq 0}$ & quenched version of $X$ & Section \ref{s23} \\
$\sigma_N, \theta_N$ & parameters of Moran model & Section \ref{s21} \\
$(Z_N^\omega(t))_{t \geq 0}$, $(Z_N^J(t))_{t\geq 0}$ & $\sharp$ of fit individuals (Moran model) & Section \ref{s21}, Section \ref{s22} \\
$\As_N$ & generator of $(Z_N^J(t))_{t\geq 0}$ & Section \ref{s22} \\
$\As_N^0$ & $\As_N$ in the case $\mu=0$ & Section \ref{s22} \\
$\mu_N(\omega)$ & law of $(Z_N^\omega(t))_{t\in[0,T]}$ & Section \ref{s31} \\
$X_N, J_N, X_N^\omega, \omega_N$ & renormalizations of $Z_N^J, J, Z_N^\omega, \omega$ & Theorem \ref{thm2.2} \\
$\As_N^*/\As$ & generator of $X_N/X$ & Section \ref{s32} \\
\bottomrule
\end{tabularx}
\end{table}
\begin{table}
\begin{tabularx}{\textwidth}{@{}XXX@{}}
\toprule
\textit{Notation} & \textit{Meaning} & \textit{First appearance} \\
\midrule
$E_N$ & state space of $X_N$ & Eq. \eqref{defen}\\
$(\Gs(\beta))_{\beta\geq 0}/(\Gs_T^\omega(\beta))_{\beta\geq 0}$ & annealed/quenched ASG & Definition \ref{defannealdasg}\\
$\sigma_{m,k}$ & rates of simultaneous branchings & Eq. \eqref{smk}\\
$(\bar{\Gs}(\beta))_{\beta\geq 0}/(\bar{\Gs}_T^\omega(\beta))_{\beta\geq 0}$ & annealed/quenched killed ASG & Section \ref{s25} \\
$(R(\beta))_{\beta\geq 0}/(R_T^\omega(\beta))_{ \beta \geq 0}$ & line-counting process of $\bar{\Gs}/\bar{\Gs}_T^\omega$ & Section \ref{s25} \\
$Q^\mu_\dagger, (q^\mu_\dagger(i,j))_{i,j\in\mathbb{N}_0^\dagger}$ & generator of $(R(\beta))_{\beta\geq 0}$ & Eq. \eqref{krates}\\
$\pi_n$/$\Pi_n(\omega)$ & absorption probabilities of $R$/$R_T^\omega$ & Eq. \eqref{defpin}\\
$\eta_X/\Ls^\omega$ & limit distribution of $X(t)/X^\omega(0)$ & Theorem \ref{thm2.4}\\
$X(\infty)$ & random variable with law $\eta_X$ & Theorem \ref{thm2.4}\\
$J\otimes_\tau \omega$ & mixed environment & Section \ref{s25} \\
$h_T(x)/h_T^\omega(x)$ & ancestral type distribution at $T$ & Section \ref{s26} \\
$h(x)/h^{\omega}(x)$ & ancestral type distribution & Theorem \ref{thm2.6}\\
$(L(\beta))_{\beta\geq 0}/(L_T^\omega(\beta))_{ \beta \geq 0}$ & pLD-ASG's line-counting process & Section \ref{s26} \\
$Q^\mu, (q^\mu(i,j))_{i,j\in\mathbb{N}}$ & generator of $(L(\beta))_{\beta\geq 0}$ & Eq. \eqref{kratespldasg}\\
$\eta_L/\mu^\omega$ & limit law of $L(T)/L_T^\omega(T-)$ & Theorem \ref{thm2.6}\\
$L(\infty)$ & random variable with law $\eta_L$ & Theorem \ref{thm2.6}\\
$a_n$ & $\mathbb{P}(L(\infty) > n)$, coefficients of $h(x)$ & Theorem \ref{thm2.6}\\
$\gamma_{i,j}$ & coefficients in recursion \eqref{fr} & Theorem \ref{thm2.6}\\
$\Db_{s,t}$, $\Db$ & spaces of c\`{a}dl\`{a}g functions & Section \ref{S2} \\
$\Db_T^\star$/$\Db^\star$ & set of possible $(\omega(t))_{t \in [0,T]}$/$\omega$ & Section \ref{s21} \\
$d_T^0$/$d_T^\star$ & metric on $\Db_{0,T}$/$\Db_T^\star$ & Appendix \ref{A1} \\
$\Cs_T^\uparrow, \textrm{BL}(E)$ & functional sets & Appendix \ref{A1}, \ref{A2} \\
$\lVert \cdot \rVert_T^0, \lVert \cdot \rVert_{T,\infty}, \lVert \cdot \rVert_{\textrm{BL}}$ & functional norms & Appendix \ref{A1}, \ref{A2} \\
$\varrho_{E}(\cdot ,\cdot )$ & bounded Lipschitz metric & Definition \ref{defblmenv}\\
\bottomrule
\end{tabularx}
\end{table}
\addtocontents{toc}{\protect\setcounter{tocdepth}{0}}
\textbf{Acknowledgements}
We are grateful to E. Baake for many interesting discussions. We also thank Sebastian Hummel and two anonymous referees for their valuable suggestions to improve the manuscript. F. Cordero received financial support from Deutsche Forschungsgemeinschaft~(CRC 1283 ``Taming Uncertainty'', Project~C1). This paper is supported by NSFC grant No. 11688101. Gr\'egoire V\'echambre acknowledges the support from Deutsche Forschungsgemeinschaft~(CRC 1283 ``Taming Uncertainty'', Project~C1) for his visit to Bielefeld University in January 2018. Fernando Cordero acknowledges the support of NYU-ECNU Institute of Mathematical Sciences at NYU Shanghai for his visit to NYU Shanghai in July 2018.
\addtocontents{toc}{\protect\setcounter{tocdepth}{2}}
\end{document} |
\begin{document}
\newcommand{\Un}[1]{\underline{#1}}
\newcommand{\ex}[1]{ \left< #1 \right>}
\newcommand{\vc}[1]{{\bf #1}}
\newcommand{\vcg}[1]{{\pmb #1}}
\newcommand{\lr}[1]{\left( #1 \right)}
\newcommand{\intO}[1]{\int_{\Omega} #1 \ {\rm d} {x}}
\newcommand{\intT}[1]{\int_0^T #1 \ {\rm d} t }
\newcommand{\intTO}[1]{\int_0^T\!\!\!\! \int_{\Omega} #1 \ {\rm d} {x}\,{\rm d} t }
\newcommand{\intt}[1]{\int_0^t #1 \ {\rm d} t }
\newcommand{\inttO}[1]{\int_0^t\!\!\!\! \int_{\Omega} #1 \ {\rm d} {x}\,{\rm d} t }
\newcommand{\blue}[1]{\textcolor{blue}{ #1}}
\newcommand{\red}[1]{\textcolor{red}{ #1}}
\newcommand{\eq}[1]{\begin{equation}
\begin{split}
#1
\end{split}
\end{equation}}
\newcommand{\eqh}[1]{\begin{equation*}
\begin{split}
#1
\end{split}
\end{equation*}}
\newtheorem{thm}{Theorem}[section]
\newtheorem{lem}[thm]{Lemma}
\newtheorem{prop}[thm]{Proposition}
\newtheorem{df}{Definition}
\newtheorem{rmk}[thm]{Remark}
\title{On the isothermal compressible multi-component mixture flow:\\
the local existence and maximal $L_p-L_q$ regularity of solutions}
\author{T. Piasecki\footnote{Institute of Applied Mathematics and Mechanics, University of Warsaw,
ul. Banacha 2, 02-097 Warszawa, Poland. E-mail: {[email protected]}.
Supported by the Top Global University Project and the Polish National Science Centre grant 2018/29/B/ST1/00339.},
Y. Shibata\footnote{Department of Mathematics, Waseda University, Ohkubo 3-4-1, Shinjuku-ku, Tokyo 169-8555, Japan.
Adjunct faculty member in the Department of Mechanical
Engineering and Materials Science, University of Pittsburgh. E-mail: {[email protected]}.
Partially supported by
JSPS Grant-in-aid for Scientific Research (A) 17H0109
and Top Global University Project.},
E. Zatorska\footnote{Department of Mathematics, University College London, Gower Street, London WC1E 6BT, United Kingdom. E-mail: {[email protected]}.
Supported by the Top Global University Project and the Polish Government MNiSW research grant 2016-2019 ``Iuventus Plus'' No. 0888/IP3/2016/74.}}
\date{}
\maketitle
\noindent{\bf{Abstract:}} We consider the initial-boundary value problem for the system of equations describing the flow of a compressible isothermal mixture of an arbitrarily large number of components. The system consists of the compressible Navier-Stokes equations and a subsystem of diffusion equations for the species. The subsystems are coupled through the form of the pressure and the strong cross-diffusion effects in the diffusion fluxes of the species. Assuming the existence of solutions to the symmetrized and linearized equations, proven in \cite{PSZ2}, we derive estimates for the nonlinear equations and prove the local-in-time existence and maximal $L_p-L_q$ regularity of solutions.
\normalsize
\section{Introduction}
\subsection{Setting of the problem}
We consider the system of equations describing the motion of an isothermal mixture of compressible gases
\begin{equation} \label{1.1}
\left.
\begin{array}{r}
\partial_{t}\varrho+\operatorname{div} (\varrho \boldsymbol{u}) = 0\\
\partial_{t}(\varrho\boldsymbol{u})+\operatorname{div} (\varrho \boldsymbol{u} \otimes \boldsymbol{u}) - \operatorname{div} \boldsymbol{S}+ \nabla p =\vc{0}\\
\partial_{t}\varrho_k+\operatorname{div} (\varrho_{k} \boldsymbol{u})+ \operatorname{div} \boldsymbol{F}_k = 0
\end{array}\right\}\quad\mbox{in}\ (0,T)\times\Omega
\end{equation}
in a regular domain $\Omega\subset \mathbb{R}^3$, supplemented with the boundary conditions
\begin{equation} \label{bc}
\boldsymbol{u}=0, \; \boldsymbol{F}_k\cdot\boldsymbol{n}=0 \quad\mbox{on}\ (0,T)\times\partial\Omega
\end{equation}
and the initial conditions
\begin{equation} \label{bc}
\boldsymbol{u}|_{t=0}=\boldsymbol{u}^0, \quad \varrho_k|_{t=0}=\varrho_k^0,\; k=1,\ldots, n, \quad \mbox{in} \; \Omega.
\end{equation}
Above, in system \eqref{1.1}, $\varrho$ denotes the mass density of the mixture,
\begin{equation} \label{rho}
\varrho=\sum_{k=1}^{n}\varrho_k,
\end{equation}
$\boldsymbol{u}$ is the mean velocity of the mixture, and $\varrho_k$ is the density of the $k$-th constituent.
The remaining quantities, namely the stress tensor $\boldsymbol{S}$, the total internal pressure $p$ and the diffusion fluxes $\boldsymbol{F}_k$,
are determined as functions of $(\boldsymbol{u},\varrho,\varrho_k)$ by constitutive relations which will be specified later.
The first equation of system \eqref{1.1}, usually called the continuity equation, describes the balance of mass,
and the second equation expresses the balance of momentum.
The last $n$ equations describe the balances of the masses of the separate constituents (species).
Note that the equations of the system cannot be independent, as the last $n$ equations must sum up to the continuity
equation. Thus we meet a serious mathematical obstacle: the subsystem $(\ref{1.1})_3$ is
degenerate parabolic in terms of $\varrho_k$.
{\bf The stress tensor.}
The viscous part of the stress tensor obeys the {\it Newton rheological law}
\begin{equation}\label{chF:Stokes}
\boldsymbol{S}(\boldsymbol{u})= 2\mu{\vc{D}}(\boldsymbol{u})+\nu\operatorname{div} \boldsymbol{u}\,{\bf{I}},
\end{equation}
where ${\vc{D}}(\boldsymbol{u})=\frac{1}{2}\left(\nabla \boldsymbol{u}+(\nabla \boldsymbol{u})^{T}\right)$ and $\mu$, $\nu$ are the nonnegative viscosity coefficients.
{\bf Internal pressure.}
The internal pressure of the mixture is determined through the Boyle law; since the temperature is constant, it is given by
\begin{equation} \label{intpre}
p(\varrho_{1},\ldots,\varrho_{n})=\sum_{k=1}^{n}p_{k}(\varrho_{k})=\sum_{k=1}^{n}\frac{\varrho_{k}}{m_{k}};
\end{equation}
above, $m_{k}$ is the molar mass of the species $k$ and, for simplicity, we set the gas constant equal to 1.
{\bf Diffusion fluxes.}
A key element of the presented model is the structure of laws governing cross-diffusion processes in the mixture.
The diffusion fluxes are given explicitly in the form
\begin{equation}\label{eq:diff}
\boldsymbol{F}_{k}=-\sum_{l=1}^{n} {C}_{kl}\boldsymbol{d}_l, \quad k=1,\ldots,n,
\end{equation}
where ${C}_{kl}$ are multicomponent flux diffusion coefficients and $\boldsymbol{d}_k=(d_{k}^{1},d_{k}^{2},d_{k}^{3})$ is the diffusion force of the species $k$,
\begin{equation}\label{eq:}
d_{k}^{i}=\nabla_{x_{i}}\left(\frac{p_{k}}{p}\right)+\left(\frac{p_{k}}{p}-\frac{\varrho_{k}}{\varrho}\right)\nabla_{x_{i}} \log{p}= \frac{1}{p}\lr{\nabla_{x_i} p_k-\frac{\varrho_k}{\varrho}\nabla_{x_i}p}.
\end{equation}
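The second equality in \eqref{eq:} follows by a direct computation: since $\nabla_{x_i}\lr{\frac{p_k}{p}}=\frac{1}{p}\nabla_{x_i}p_k-\frac{p_k}{p^2}\nabla_{x_i}p$ and $\nabla_{x_i}\log p=\frac{1}{p}\nabla_{x_i}p$, the terms containing $\frac{p_k}{p}$ cancel and only $\frac{1}{p}\lr{\nabla_{x_i}p_k-\frac{\varrho_k}{\varrho}\nabla_{x_i}p}$ remains.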
Moreover, we assume that $\sum_{k=1}^{n}\boldsymbol{F}_k=\vc{0}$ pointwise.
The main properties of the flux diffusion matrix $C$ are
\begin{equation} \label{prop_C}
C{\cal Y}={\cal Y}C^{T},\quad
N(C)=\mbox{lin}\{\vec Y\},\quad
R(C)={U}^{\bot},
\end{equation}
where $Y_k=\frac{\varrho_k}{\varrho}$, ${\cal Y}=\mbox{diag}(Y_{1},\ldots,Y_{n})$, $\vec Y=(Y_1,\ldots,Y_n)^{T}$,
$\mbox{lin}\{\vec Y\}=\{t\vec Y:\ t \in \mathbb{R} \}$,
$N(C)$ is the nullspace of $C$, $R(C)$ is the range of $C$,
$\vec U=(1,\ldots,1)^{T},$ and ${U}^{\bot}$ is the orthogonal complement of $\mbox{lin}\{\vec U\}$.
The second property in \eqref{prop_C} implies
$$
\sum_{l=1}^{n} \frac{1}{p}C_{kl} \frac{\varrho_l}{\varrho}\nabla p=\frac{\nabla p}{p}\sum_{l=1}^{n} C_{kl}Y_l=0,\quad k=1,\ldots, n,
$$
therefore \eqref{eq:diff}, \eqref{eq:} are reduced to
\begin{equation} \label{eq:diff1}
\boldsymbol{F}_{k}=-\frac{1}{p}\sum_{l=1}^{n} {C}_{kl}\nabla p_l.
\end{equation}
We also define
\begin{equation} \label{def_D} D_{kl}=\frac{C_{kl}}{\varrho Y_k},
\end{equation}
thus the properties of $C$ \eqref{prop_C}
imply
\begin{equation} \label{prop_D}
D= D^{T},\quad D \geq 0, \quad
N(D)=\mbox{lin}\{\vec Y\},\quad
R(D)={Y}^{\bot}.
\end{equation}
The first property results from $C_{kl}Y_l=C_{lk}Y_k$, the third from the fact that ${\cal Y}$
is diagonal. Next, $p \in R(D) \iff p_k=\frac{1}{Y_k}\sum_l C_{kl}q_l$
for some $q \in \mathbb{R}^n$.
Finally, $D$ is positive definite over $U^{\bot}$.
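For completeness, let us sketch the last claim: if $\xi\in U^{\bot}$ and $\xi^{T}D\xi=0$, then, since $D$ is symmetric and positive semi-definite, $D\xi=0$, i.e. $\xi\in\mbox{lin}\{\vec Y\}$; but $\vec Y\cdot\vec U=\sum_{k=1}^{n}Y_k=1\neq0$, so $\mbox{lin}\{\vec Y\}\cap U^{\bot}=\{0\}$ and hence $\xi=0$.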
{\bf Exemplary diffusion matrix.}
An example of matrix $C$ satisfying conditions \eqref{prop_C} that will be distinguished throughout the paper is
\begin{equation}\label{Cform}
{C} =\left(
\begin{array}{cccc}
Z_{1} & -Y_{1} & \ldots & -Y_{1}\\
-Y_{2} & Z_{2} & \ldots & -Y_{2}\\
\vdots & \vdots & \ddots & \vdots \\
-Y_{n} & -Y_{n}& \ldots & Z_{n}
\end{array} \right),
\end{equation}
where $Z_{k}=\sum_{{i=1} \atop {i\neq k}}^{n} Y_{i}$.\\
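One can check directly that this matrix satisfies the second property in \eqref{prop_C}: for every $k$,
$$
\sum_{l=1}^{n}C_{kl}Y_l=Z_kY_k-Y_k\sum_{{l=1} \atop {l\neq k}}^{n}Y_l=Y_k\lr{Z_k-Z_k}=0,
$$
so $C\vec Y=\vc{0}$.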
Using expression \eqref{eq:diff1} for the diffusion fluxes and the properties of this matrix, one can rewrite \eqref{eq:diff} in the following form
\eq{\label{difp}
\boldsymbol{F}_k=-\frac{1}{p}\lr{\nabla p_k-Y_k\nabla p}.
}
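For the reader's convenience we record the computation behind \eqref{difp}: by \eqref{eq:diff1} and the definition of $Z_k$,
$$
\boldsymbol{F}_k=-\frac{1}{p}\sum_{l=1}^{n}C_{kl}\nabla p_l
=-\frac{1}{p}\Big(Z_k\nabla p_k-Y_k\sum_{{l=1} \atop {l\neq k}}^{n}\nabla p_l\Big)
=-\frac{1}{p}\big((1-Y_k)\nabla p_k-Y_k(\nabla p-\nabla p_k)\big)
=-\frac{1}{p}\lr{\nabla p_k-Y_k\nabla p},
$$
where we used $Z_k=1-Y_k$ and $\sum_{l=1}^{n}\nabla p_l=\nabla p$.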
Clearly for $C$ given by \eqref{Cform}, the matrix $D_{kl}=\frac{C_{kl}}{\varrho Y_k}$ is symmetric and positive semi-definite.
\subsection{Discussion of the known results}
The main result of this paper concerns the local well-posedness of system \eqref{1.1} in the maximal $L_p-L_q$ regularity setting.
The local well-posedness as well as the global well-posedness for small data for the two-species variant of system \eqref{1.1} have been shown in the authors' previous work \cite{PSZ}. There
the so-called normal form, considered earlier e.g. in \cite{VG}, allows one to immediately write a parabolic equation for one of the species densities. The aim of this paper is to generalize this result to the system with an arbitrary number of constituents, however still isothermal. The key difference is that in the two-species case the part corresponding to the diffusion flux reduces to a single parabolic equation, while now we obtain only a symmetrized system. Nevertheless, the properties of $D$ imply only nonnegativity of its leading order part, so an important step is to show its parabolicity. Dealing with a system of species equations instead of a single equation also requires serious modifications in the linear theory.
The mathematical investigation of multicomponent flows dates back to analysis of a two component
incompressible model assuming Fick law, hence no cross-diffusion, see among others \cite{BdV1}
for inviscid fluid and \cite{BdV2}-\cite{BdV3} in the viscous case.
In the previous results devoted to the complete mixture model, see Giovangigli and Massot \cite{GM1,GM2}, the local smooth solutions and global smooth solutions around constant equilibrium states were considered. Their method of proof was based on normal form of equations, hyperbolic-parabolic estimates and on local strict dissipativity of linearized systems.
It can be seen as an application of more abstract theory proposed for the hyperbolic-parabolic systems of conservation laws by Kawashima and Shizuta \cite{K84,KS88}.
When the species equations are decoupled from the fluid equations, the resulting system of PDEs is related to the Stefan-Maxwell system analyzed for example in \cite{B2010, HMPW13}. In both of these papers isobaric isothermal systems are considered with the barycentric velocity equal to $0$. This means that, in comparison with the last $n$ equations of \eqref{1.1}, the convective term $\operatorname{div}(\varrho_k\boldsymbol{u})$ is absent and the variation of the total pressure in the diffusion fluxes \eqref{difp} is neglected. An essential difference between these systems is that in the present case the diffusion fluxes are explicit combinations of the diffusion driving forces, while for the Stefan-Maxwell system the flux-force relations first need to be inverted. This can be done using the Perron-Frobenius theory, as first noticed in \cite{VG0}. With this at hand, the local-in-time well-posedness and maximal $L_p$ regularity follow from classical results of Amann \cite{Amann} or Pr\"uss \cite{Pruss}. In the present paper we rather rely on the alternative approach of the second author and collaborators \cite{ES1, SS2, Murata, MS16, S17, SS1} tailored to compressible fluid systems. The main result of this paper is the maximal $L_p-L_q$ regularity of solutions to \eqref{1.1}, but it relies on the existence of relevant solutions to the linearized system. The latter result is proved in our other article
\cite{PSZ2}, mostly for the sake of brevity, but also because it can be of independent interest. Indeed, it applies to a whole class of symmetric parabolic systems satisfying certain regularity assumptions on the coefficients, and therefore it may be useful in other contexts.
As far as maximal $L_p-L_q$ regularity is concerned, the coupling between the Stefan-Maxwell and the fluid equations was so far considered only for the incompressible Navier-Stokes system, see \cite{BP2017}. It was also proven, independently in \cite{CJ13} and \cite{MT13}, that the incompressible Navier-Stokes-Stefan-Maxwell system possesses a global-in-time weak solution for arbitrary data.
The approach employed by Chen and J\"ungel in \cite{CJ13} relies on a certain symmetrization of the species subsystem with one of the equations eliminated, see also \cite{JS13}. They noticed that such a reformulation allows one to deduce parabolicity
in terms of the so-called entropic variables. See also \cite{Jungel} for an overview of different problems where a similar approach can be applied.
The idea of our approach is similar, however the change of variables we propose is slightly different, in the spirit of normal variables from \cite{VG}.
Concerning analogous results for the compressible Navier-Stokes-Stefan-Maxwell system, the existence of weak solutions is so far known either for stationary flow of species with the same molar masses \cite{EZ, GPZ, PP1, PP2}, or for exemplary diffusion matrix $C$ and stress tensor $\boldsymbol{S}$ with density-dependent viscosity coefficient \cite{EZ2,EZ3,MPZ1, MPZ2}. There are also relevant results for multi-component systems with diffusion fluxes in the form of the Fick law \cite{FPT}.
\subsection{Notation and functional spaces}
Let us summarize notation used in the paper. We use standard notation $H^k_p, \; k \in \mathbb{N}$ for Sobolev spaces. For a Banach space $X$, by $L_p(0,T;X)$ we denote a Bochner space and
$$
H^1_p(0,T;X)=\{ f \in L_p(0,T;X): \; \partial_t f \in L_p(0,T;X)\}.
$$
Furthermore, for $s \in \mathbb{R}$ a Bessel space $H^{s}_p(\mathbb{R},X)$ is a space of $X$-valued functions for which
\begin{equation*}
\|f\|_{H^{s}_p(\mathbb{R}, X)}
= \Bigl(\int_\mathbb{R} \|\mathcal{F}^{-1}[(1+\tau^2)^{s/2}\mathcal{F}[f](\tau)](t)
\|_X^p\,{\rm d}t\Bigr)^{1/p} < \infty.
\end{equation*}
We also recall that for $0<s<\infty$ and $m$ the smallest integer larger than $s$ we define Besov spaces on domains as intermediate spaces
\begin{equation} \label{def:bsqp0}
B^{s}_{q,p}(\Omega)=(L_q(\Omega),H^m_q(\Omega))_{s/m,p},
\end{equation}
where $(\cdot,\cdot)_{s/m,p}$ is the real interpolation functor, see \cite[Chapter 7]{Ad}. In particular,
\begin{equation} \label{def:bsqp}
B^{2(1-1/p)}_{q,p}(\Omega)=(L_q(\Omega),H^2_q(\Omega))_{1-1/p,p}=(H^2_q(\Omega),L_q(\Omega))_{1/p,p}. \end{equation}
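Let us also record an elementary observation used implicitly below: for a uniform $C^2$ domain and under the assumption $\frac{2}{p}+\frac{3}{q}<1$ imposed in our main result, we have the embedding
$$
B^{2(1-1/p)}_{q,p}(\Omega)\hookrightarrow H^1_\infty(\Omega),
$$
since $2(1-1/p)-\frac{3}{q}>1$ is equivalent to $\frac{2}{p}+\frac{3}{q}<1$.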
Next, for abbreviation and clarity we introduce the following notation:
\begin{enumerate}
\item We will denote by $E(T)$ a continuous function of $T$ such that $E(0)=0$. Moreover, we use $C$ to denote a generic positive constant, and we write $C(X,Y)$ to indicate its dependence on the parameters $X$ and $Y$.
\item By $\vec{\cdot}$ we denote an $(n-1)$-vector of functions, for example $\vec\vartheta=(\vartheta_1,\ldots,\vartheta_{n-1})^\top$.
\item We introduce the norms describing regularity of our solutions; for $T>0$ we define:
\eq{ \label{def:norm}
[{\boldsymbol{v}}]_{T,1}&:=\|{\boldsymbol{v}}\|_{L_p(0,T;H^2_q(\Omega))}+\|\partial_t {\boldsymbol{v}}\|_{L_p(0,T;L_q(\Omega))}, \\
[ \sigma]_{T,2}&:=\|\sigma\|_{H^1_p(0,T;H^1_q(\Omega))},\\
[\sigma,{\boldsymbol{v}},{\vec\vartheta}]_T&:=[{\boldsymbol{v}}]_{T,1}+[\sigma]_{T,2}+\sum_{k=1}^{n-1}[\vartheta_k]_{T,1}.
}
Then, for given $T,M>0$ we define the sets in the functional spaces:
\begin{align} \label{def:H12}
{\mathcal H}_{T,M}^1=\{ {\boldsymbol{v}}: [{\boldsymbol{v}}]_{T,1}\leq M \}, \qquad
{\mathcal H}_{T,M}^{2}=\{\sigma: \; [\sigma]_{T,2} \leq M\}
\end{align}
and
\begin{equation}\label{def:H}
{\mathcal H}_{T,M} = \left\{(\sigma, {\boldsymbol{v}}, \vec\vartheta):
\quad (\sigma, {\boldsymbol{v}}, \vartheta_k)|_{t=0} = (0, \boldsymbol{u}^0, h^0_k) \quad\text{in $\Omega$}, \quad
[\sigma,{\boldsymbol{v}},{\vec\vartheta}]_T \leq M\right\}.
\end{equation}
\end{enumerate}
\section{Symmetrization and main result}
The main result of this paper is the local well-posedness, in the maximal $L_p-L_q$ regularity setting, of a certain reformulation \eqref{sys:normal} of system \eqref{1.1}.
This reformulation is similar to the normal form derived in (\cite{VG}, Chapter 8) for the complete system
with thermal effects. In the case of constant temperature the derivation of the symmetrized equations can be simplified considerably, and, to make our paper self-contained, we show in the Appendix the following result.
\begin{prop} \label{thm:main0}
Let $(\varrho,\boldsymbol{u},\varrho_1,\ldots,\varrho_n)$ be a regular solution to system (\ref{1.1}-\ref{rho}) such that
\eq{\label{all_pos}
\{\varrho_1>C,\ldots,\varrho_n>C\}}
for some constant $C>0$. Then the change of unknowns
\begin{equation} \label{def:psi}
(\varrho,h_1,\ldots,h_{n-1})=\lr{
\sum_{i=1}^{n} \varrho_i,\log\lr{\frac{\varrho_2^{\frac{1}{m_2}}}{\varrho_1^{\frac{1}{m_1}}}},\ldots,\log\lr{\frac{\varrho_n^{\frac{1}{m_n}}}{\varrho_1^{\frac{1}{m_1}}}
}}=:\Psi(\varrho_1,\ldots,\varrho_n)
\end{equation}
is a diffeomorphism, and the system \eqref{1.1} is transformed to
\eq{ \label{sys:normal}
&\partial_{t}\varrho+\operatorname{div}(\varrho\boldsymbol{u})=0,\\
&\varrho \partial_t \boldsymbol{u}+\frac{\varrho\nabla\varrho}{\Sigma_\varrho}+\sum_{l=2}^{n}\lr{\varrho_l-\frac{m_l\varrho_l\varrho}{\Sigma_\varrho}}\nabla h_{l-1}+\varrho(\boldsymbol{u}\cdot \nabla)\boldsymbol{u}
=\mu \Delta \boldsymbol{u} + (\mu+\nu)\nabla{\rm div}\boldsymbol{u},\\
&\sum_{l=1}^{n-1} {\cal{R}}_{kl}(\partial_t h_l+ \boldsymbol{u} \cdot \nabla h_l) + \lr{\varrho_{k+1}-\frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}}\operatorname{div} \boldsymbol{u}
=\operatorname{div} \left( \sum_{l=1}^{n-1}{\cal{B}}_{kl}\nabla h_l\right),
}
with the boundary conditions
\begin{equation} \label{bc:normal}
\boldsymbol{u}=0, \quad \sum_{l=1}^{n-1}{\cal{B}}_{kl}\nabla h_l \cdot \boldsymbol{n} = 0, \quad k=1,\ldots,n-1,\quad\mbox{on}\ (0,T)\times\partial\Omega,
\end{equation}
and the initial conditions
\begin{equation} \label{ic:normal}
(\boldsymbol{u},\varrho,\{h_k\}_{k=1,\ldots,n-1})|_{t=0}=(\boldsymbol{u}^0,\varrho^0,\{h_k^0\}_{k=1,\ldots,n-1}), \qquad (\varrho^0,\{h_k^0\}_{k=1,\ldots,n-1}) = \Psi(\varrho_{1}^0(x),\ldots, \varrho_{n}^0(x)),
\end{equation}
where
\begin{equation} \label{def:sigma}
\Sigma_\varrho=\sum_{k=1}^{n} m_k \varrho_k
\end{equation}
and ${\cal{R}}$ and ${\cal{B}}$ are $(n-1)\times(n-1)$ matrices given by
\begin{equation} \label{def:Rkl}
{\cal R}_{kl}=m_{k+1}\varrho_{k+1}\delta_{kl}-\frac{m_{k+1}m_{l+1}\varrho_{k+1}\varrho_{l+1}}{\Sigma_{\varrho}},
\end{equation}
\begin{equation} \label{lag:5b}
{\cal{B}}_{kl}=\frac{\varrho_{k+1}\varrho_{l+1} D_{k+1,l+1}}{p}.
\end{equation}
for $k,l=1,\ldots,n-1$.
Moreover, the matrix ${\cal{R}}$ is uniformly coercive in $(x,t)$ and the same property holds for ${\cal{B}}$ provided
that either: \\
{\emph{Condition 1:}} The matrix $C$ is of the form \eqref{Cform}\\
or \\
{\emph{Condition 2:}} $\Omega$ is bounded and \eqref{prop_D} is satisfied for $x\in \Ov{\Omega}$, $t\in[0,T]$.\\
\end{prop}
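For orientation, let us illustrate the statement in the simplest case $n=2$ (this example is not used in the proofs). The change of unknowns \eqref{def:psi} then reads
$$
(\varrho,h_1)=\lr{\varrho_1+\varrho_2,\ \log\lr{\frac{\varrho_2^{1/m_2}}{\varrho_1^{1/m_1}}}},
$$
and the matrix ${\cal R}$ reduces to the single entry
$$
{\cal R}_{11}=m_2\varrho_2-\frac{m_2^2\varrho_2^2}{m_1\varrho_1+m_2\varrho_2}
=\frac{m_1m_2\,\varrho_1\varrho_2}{m_1\varrho_1+m_2\varrho_2}>0,
$$
so the species part of \eqref{sys:normal} becomes a single parabolic equation, which is consistent with the two-species normal form used in \cite{PSZ}.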
The local well-posedness of system \eqref{sys:normal},\eqref{bc:normal} in the maximal $L_p-L_q$ regularity setting is provided by our main result below.
\begin{thm}\label{thm:main2}
Assume that
\begin{itemize}
\item $2 < p < \infty$, $3 < q < \infty$, $2/p + 3/q < 1$ and $L > 0$;
\item $\Omega$ is a uniform $C^3$ domain in
$\mathbb{R}^3$;
\item there exists a constant $C>0$ such that
\begin{equation} \label{nablaD}
\forall \, k,l \in \{1,\ldots,n\} \quad \|\nabla D_{kl}(t,\cdot)\|_{L_q(\Omega)}\leq C \sum_{j=1}^{n}\|\nabla \varrho_j(t,\cdot)\|_{L_q(\Omega)} \quad \textrm{a.e. in} \; (0,T);
\end{equation}
\item there exist
positive numbers $a_1$ and $a_2$ for which
\begin{equation}\label{initial:0}
a_1 \leq \varrho_{k}^0(x) \leq a_2 \quad
\forall x \in \overline{\Omega}, \; k \in \{1, \ldots, n\}.
\end{equation}
\end{itemize}
Let $\varrho_{k}^{0}(x),\ k=1,\ldots, n$, and
$\boldsymbol{u}^0(x)$ be initial data for system \eqref{1.1}
and let
$$
(\varrho^0(x),h_1^0(x),\ldots,h_{n-1}^0(x)) = \Psi(\varrho_1^0(x),\ldots \varrho_n^0(x)).
$$
Then, there exists a time $T>0$ depending on
$a_1$, $a_2$ and $L$ such that if
the initial data satisfy the condition:
\begin{equation}\label{initial:1}
\|\nabla(\varrho_1^0,\ldots ,\varrho_n^0)\|_{L_q(\Omega)}
+ \|\boldsymbol{u}^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)^3}
+ \|(h_1^0,\ldots,h_{n-1}^0)\|_{B^{2(1-1/p)}_{q,p}(\Omega)^{n-1}}
\leq L
\end{equation}
and the compatibility condition:
\begin{equation}\label{initial:2}
\boldsymbol{u}^0|_\Gamma=0, \quad \nabla h^0_{k} \cdot \boldsymbol{n}|_\Gamma = 0, \quad k=1,\ldots,n-1,
\end{equation}
then problem \eqref{sys:normal} with boundary conditions \eqref{bc:normal} and initial conditions \eqref{ic:normal} admits a unique solution
$(\varrho, \boldsymbol{u}, h_1,\ldots,h_{n-1})$ with
\begin{gather*}
\varrho - \varrho^0 \in H^1_p((0, T), H^1_q(\Omega)),
\quad \boldsymbol{u} \in H^1_p((0, T), L_q(\Omega)^3) \cap L_p((0, T), H^2_q(\Omega)^3),\\
h_1,\ldots,h_{n-1} \in H^1_p((0, T), L_q(\Omega)) \cap L_p((0, T), H^2_q(\Omega))
\end{gather*}
possessing the estimates:
\begin{gather*}
\|\varrho-\varrho^0\|_{H^1_p((0, T), H^1_q(\Omega))}
+ \|\partial_t(\boldsymbol{u}, h_1,\ldots,h_{n-1})\|_{L_p((0,T), L_q(\Omega)^{n+2})}
+ \|(\boldsymbol{u}, h_1,\ldots,h_{n-1})\|_{L_p((0, T), H^2_q(\Omega)^{n+2})}
\leq CL, \\ a_1 \leq \varrho(x,t) \leq na_2+a_1
\quad\text{for $(x, t) \in \Omega\times(0, T)$}, \quad
\int^T_0\|\nabla\boldsymbol{u}(\cdot, s)\|_{L_\infty(\Omega)}\,ds
\leq \delta.
\end{gather*}
Here, $C$ is some constant independent of $L$, and $\delta$ is a small positive parameter.
\end{thm}
Let us state some remarks concerning our main result.
\begin{rmk}
Notice that due to \eqref{def_D} the requirement \eqref{nablaD} is satisfied for the special form \eqref{Cform} provided $C_1 \leq |\varrho_k| \leq C_2$ for some positive constants $C_1<C_2$.
\end{rmk}
\begin{rmk}
The parameter $\delta$ above remains small for large times. This is especially important for the existence of global-in-time solutions, not included in the present study.
\end{rmk}
\begin{rmk}
Due to conditions \eqref{coerc:B} we can apply the inverse of ${\cal{B}}$ to the boundary conditions \eqref{bc:normal}, which leads to an equivalent formulation of the boundary conditions in the standard form
\begin{equation} \label{bc:normal1}
\boldsymbol{u}=0, \quad \nabla h_{k} \cdot \boldsymbol{n} = 0, \quad k=1,\ldots,n-1,\quad\mbox{on}\ (0,T)\times\partial\Omega .
\end{equation}
\end{rmk}
\begin{rmk}
The condition $\frac{2}{p}+\frac{3}{q}<1$ deserves a more detailed comment. First of all, it is stronger than the condition $\frac{2}{p}+\frac{1}{q} \neq 1$ imposed
in Theorems \ref{thm:lin1} and \ref{thm:main1}, which gives solvability of the associated linear problems. A natural question is whether the condition in Theorem \ref{thm:main2} can be relaxed. The answer is partially positive. One could allow $\frac{2}{p}+\frac{3}{q}>1$ with additional constraints on $p,q$, following \cite{SSZ}. However, this would come at the price of numerous additional technicalities which we omit here for brevity.
\end{rmk}
A key requirement needed to prove our main result is the coercivity of the matrices ${\cal R}$ and ${\cal{B}}$. The details are given in the Appendix; however, it is worth mentioning at this point that we need the partial densities to be bounded from below by a positive constant. Note that the statement of Theorem \ref{thm:main2} provides us only with bounded functions $h_i$ given by \eqref{def:psi}. Let us therefore check that these conditions are in fact equivalent. The implication in one direction follows immediately from \eqref{def:psi}; for the other one we have:
\begin{lem}\label{lem:1}
Let $h_i$ given by \eqref{def:psi} be bounded and let
\begin{equation} \label{vrpos:0}
\varrho \geq C>0.
\end{equation}
Then
\begin{equation}\label{rhoidown}
\varrho_i \geq C>0, \quad i=1,\ldots,n.
\end{equation}
\end{lem}
\emph{Proof.} Suppose, to the contrary, that there exist $i \in\{1,\ldots, n-1\}$ and $(x_0,t_0)$ such that
$$
\lim_{(x,t)\longrightarrow (x_0,t_0)}\varrho_{i+1}(x,t)=0.
$$
Then
\begin{equation} \label{vrpos:1}
\lim_{(x,t)\longrightarrow (x_0,t_0)}\varrho_1(x,t)=0
\end{equation}
since otherwise $h_i(x,t)$ would be unbounded from below. This in turn implies
that
\begin{equation} \label{vrpos:2}
\lim_{(x,t)\longrightarrow (x_0,t_0)}\varrho_{k+1}(x,t)=
0 \quad \forall\; 1 \leq k \leq n-1
\end{equation}
since otherwise the corresponding $h_{k}$ would be unbounded from above. This means that $\sum_{k=1}^{n}\varrho_k(x,t)\to 0$ as $(x,t)\to(x_0,t_0)$, which contradicts \eqref{vrpos:0}. The case when only $\varrho_1$ degenerates is covered by the same argument: if $\varrho_1(x,t)\to 0$ while some $\varrho_{k+1}$ stays bounded away from zero, then $h_k$ is unbounded from above.
\rightline{ $\square$}
Let us finish this section by presenting an outline of the rest of the paper.
In Section \ref{S:Lag} we rewrite the problem in Lagrangian coordinates; this step is necessary to apply
the maximal $L_p-L_q$ regularity theory. In Section \ref{S:lin} we linearize the problem around the initial
condition. Section \ref{S:Nonl} is dedicated to nonlinear estimates which are used to close the fixed point
argument and prove Theorem \ref{thm:main2} using the existence result for linearized system from
Theorem \ref{thm:main1}, the proof of which can be found in \cite{PSZ2}.
\section{Lagrangian coordinates}\label{S:Lag}
We begin the proof of Theorem \ref{thm:main2} by transforming the symmetrized system \eqref{sys:normal} to the Lagrangian coordinates $x = \Phi(y,t)$ related to the vector field $\boldsymbol{v}$:
\begin{equation}\label{lag:1}
x = y + \int^t_0\boldsymbol{v}(y, s)\,ds.
\end{equation}
Then for any differentiable function $f$ we have
\begin{equation} \label{dt_lag}
\frac{{\rm d}}{{\rm d}t} f(\Phi(y,t),t)=\partial_t f+\boldsymbol{u} \cdot \nabla_x f.
\end{equation}
Since
\begin{equation}\label{lag:2}
\frac{\partial x_i}{\partial y_j} = \delta_{ij}
+ \int^t_0\frac{\partial v_i}{\partial y_j}(y, s)\,ds,
\end{equation}
assuming that
\begin{equation}\label{assump:1}
\sup_{t \in (0,T)}\int^t_0\|\nabla\boldsymbol{v}(\cdot, s)\|_{L_\infty(\Omega)}\,ds
\leq \delta
\end{equation}
for a sufficiently small positive constant $\delta$, the matrix
$\partial x/\partial y = (\partial x_i/\partial y_j)$ has the inverse
\begin{equation}\label{lag:3}
\Bigl(\frac{\partial x_i}{\partial y_j}\Bigr)^{-1} = \boldsymbol{I} + {\bf V}^0({\bf k}_{\boldsymbol{v}}), \quad {\bf k}_{\boldsymbol{v}} = \int^t_0\nabla\boldsymbol{v}(y, s)\,ds.
\end{equation}
Here, $\boldsymbol{I}\,$ is the $3\times 3$
identity matrix, and ${\bf V}^0({\bf k})$ is the $3\times 3$ matrix of
smooth functions with ${\bf V}^0(0) = 0$. We have
\begin{equation}\label{lag:4}
\nabla_x = (\boldsymbol{I} + {\bf V}^0({\bk_{\bv}}))\nabla_y,
\quad \frac{\partial}{\partial x_i} = \sum_{j=1}^3 (\delta_{ij} + V^0_{ij}({\bk_{\bv}}))
\frac{\partial}{\partial y_j}.
\end{equation}
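Under \eqref{assump:1}, one admissible choice of ${\bf V}^0$ is, for instance, the Neumann series
$$
{\bf V}^0({\bf k})=\sum_{j=1}^{\infty}(-{\bf k})^{j}=-{\bf k}(\boldsymbol{I}+{\bf k})^{-1},
$$
which converges for $|{\bf k}|\leq\delta<1$, is smooth there and satisfies ${\bf V}^0(0)=0$; this is the standard way to justify \eqref{lag:3}.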
Moreover (see for instance \cite{St1}), the map $\Phi(\cdot, t)$ is a bijection from $\Omega$ onto
$\Omega$.
We define our unknown functions in Lagrangian coordinates:
\begin{equation}\label{lag:5}
\boldsymbol{v}(y, t) = \boldsymbol{u}(x, t),
\quad \eta(y, t) = \varrho(x, t), \quad
\vartheta_i(y, t) = h_i(x, t), \; i=1,\ldots,n-1,
\end{equation}
and we denote
$$\vec{\vartheta}:=(\vartheta_1,\ldots,\vartheta_{n-1})^\top.$$
We now show that $U=(\boldsymbol{v},\eta,\vec\vartheta)$ satisfies the system
\eq{\label{lag:sys}
&\partial_t\eta + \eta{\rm div}\boldsymbol{v} = R_1(U),
\\
&\eta \partial_t \boldsymbol{v} - \mu \Delta \boldsymbol{v} - (\mu+\nu)\nabla{\rm div}\boldsymbol{v} +\frac{\eta}{\Sigma_\varrho}\nabla\eta
+\sum_{l=1}^{n-1}\lr{\varrho_{l+1}-\frac{m_{l+1}\varrho_{l+1}\varrho}{\Sigma_\varrho}}\nabla \vartheta_l = \vc{R}_2(U),\\
&\sum_{l=1}^{n-1} {\cal R}_{kl}\partial_t \vartheta_l + \lr{\varrho_{k+1}-\frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}}\operatorname{div} \boldsymbol{v}-{\rm div}\lr{\sum_{l=1}^{n-1} {\cal{B}}_{kl}\nabla\vartheta_l}=R^k_3(U), \quad k=1,\ldots,n-1,
}
supplemented with the boundary conditions
\begin{equation} \label{lag:bc}
\boldsymbol{v}|_{\partial \Omega}=0, \quad \nabla \vartheta_k\cdot\boldsymbol{n}|_{\partial \Omega}=R^k_4(U), \quad k=1,\ldots,n-1,
\end{equation}
where
\begin{equation} \label{def:vrk}
(\varrho_1,\ldots,\varrho_n)=(\varrho_1,\ldots,\varrho_n)(\eta,\vec{\vartheta})=\Psi^{-1}(\eta,\vec{\vartheta}).
\end{equation}
\begin{rmk}
In the remainder of the paper we write simply $\varrho_k$ keeping in mind that we have
the dependence \eqref{def:vrk} since we work in Lagrangian coordinates.
\end{rmk}
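Let us briefly sketch how $\Psi^{-1}$ can be computed explicitly (an elementary observation which also explains why $\Psi$ is invertible on the set of positive densities): from \eqref{def:psi} we have $\varrho_{k+1}=\varrho_1^{m_{k+1}/m_1}e^{m_{k+1}\vartheta_k}$ for $k=1,\ldots,n-1$, and $\varrho_1$ is then determined as the unique positive solution of
$$
\varrho_1+\sum_{k=1}^{n-1}\varrho_1^{m_{k+1}/m_1}e^{m_{k+1}\vartheta_k}=\eta,
$$
since the left-hand side is strictly increasing in $\varrho_1$ and ranges over $(0,\infty)$.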
We now derive the precise form of terms on the right hand side of \eqref{lag:sys},\eqref{lag:bc}.
First of all we have
\begin{equation}\label{lag:div}
{\rm div}_x = {\rm div}_y + \sum_{i,j=1}^3V^0_{ij}({\bk_{\bv}})\frac{\partial v_i}{\partial y_j},
\end{equation}
therefore we easily obtain \eqref{lag:sys}$_1$ with
\begin{equation}\label{lag:6}
R_1(U) = -\eta\sum_{i,j=1}^3 V^0_{ij}({\bk_{\bv}})\frac{\partial v_i}{\partial y_j}.
\end{equation}
Now we need to transform second order operators.
By \eqref{lag:4}, we have
$$
\Delta_x \boldsymbol{u} = \sum_{k=1}^3\frac{\partial}{\partial x_k}\lr{\frac{\partial \boldsymbol{u}}{\partial x_k}}
= \sum_{k,l,m=1}^3\lr{\delta_{kl} + V^0_{kl}({\bk_{\bv}})}
\frac{\partial}{\partial y_l}
\lr{\lr{\delta_{km} + V^0_{km}({\bk_{\bv}})}\frac{\partial \boldsymbol{v}}{\partial y_m}}.
$$
Therefore
$$\Delta_x \boldsymbol{u} = \Delta_y \boldsymbol{v} + A_{2\Delta}({\bk_{\bv}})\nabla^2_y\boldsymbol{v}
+ A_{1\Delta}({\bk_{\bv}})\nabla_y\boldsymbol{v}
$$
with
\eq{
A_{2\Delta}({\bk_{\bv}})\nabla^2_y\boldsymbol{v} &= 2\sum_{l,m=1}^3V^0_{lm}({\bk_{\bv}})
\frac{\partial^2\boldsymbol{v}}{\partial y_l\partial y_m}
+ \sum_{k,l, m=1}^3
V^0_{kl}({\bk_{\bv}})V^0_{km}({\bk_{\bv}})
\frac{\partial^2\boldsymbol{v}}{\partial y_l \partial y_m}, \label{a2delta} }
\eq{
A_{1\Delta}({\bk_{\bv}})\nabla_y\boldsymbol{v} = &\sum_{l, m=1}^3(\nabla_{\bk_{\bv}} V^0_{l m})({\bk_{\bv}})
\int^t_0(\partial_l\nabla_y\boldsymbol{v})\,ds\, \frac{\partial \boldsymbol{v}}{\partial y_m}\\
&+ \sum_{k,l, m=1}^3
V^0_{kl}({\bk_{\bv}}) (\nabla_{\bk_{\bv}} V^0_{km})({\bk_{\bv}})
\int^t_0\partial_l\nabla_y\boldsymbol{v}\,ds\,\frac{\partial \boldsymbol{v}}{\partial y_m}, \label{a1delta}
}
where $(\nabla_{\bk_{\bv}} V^0_{km})({\bk_{\bv}})$ denotes $ \lr{V^0_{km}}'({\bk_{\bv}})$.
Similarly, for $j\in\{1,2,3\}$ we have
$$\frac{\partial}{\partial x_j}{\rm div}_x\boldsymbol{u}
= \sum_{k=1}^3(\delta_{jk} + V^0_{jk}({\bk_{\bv}}))\frac{\partial}{\partial y_k}
\lr{{\rm div}_y\boldsymbol{v} + \sum_{l, m=1}^3 V^0_{l m}({\bk_{\bv}})\frac{\partial v_l}{\partial y_m}},
$$
so we obtain
$$\frac{\partial}{\partial x_j}{\rm div}_x\boldsymbol{u}
= \frac{\partial}{\partial y_j}{\rm div}_y\boldsymbol{v} + A_{2{\rm div}, j}({\bk_{\bv}})\nabla^2_y\boldsymbol{v}
+ A_{1{\rm div}, j}({\bk_{\bv}})\nabla_y\boldsymbol{v},
$$
where
\eq{
A_{2{\rm div},j}({\bk_{\bv}})\nabla^2_y\boldsymbol{v}
& = \sum_{l, m=1}^3V^0_{l m}({\bk_{\bv}})\frac{\partial^2 v_l}{\partial y_m \partial y_j}
+ \sum_{k=1}^3 V^0_{jk}({\bk_{\bv}})\frac{\partial}{\partial y_k}{\rm div}_y\boldsymbol{v}
+ \sum_{k, l, m=1}^3V^0_{jk}({\bk_{\bv}})V^0_{l m}({\bk_{\bv}})
\frac{\partial^2v_l}{\partial y_k \partial y_m}, \label{a2div}
}
\eq{
A_{1{\rm div}, j}({\bk_{\bv}})\nabla_y\boldsymbol{v}
=& \sum_{l, m=1}^3(\nabla_{{\bk_{\bv}}} V^0_{l m})({\bk_{\bv}})
\int^t_0\partial_j\nabla_y\boldsymbol{v}\,ds\,\frac{\partial v_l}{\partial y_m} \\
&+ \sum_{k,l, m=1}^3V^0_{jk}({\bk_{\bv}})(\nabla_{{\bk_{\bv}}} V^0_{l m})({\bk_{\bv}})
\int^t_0\partial_k\nabla_y\boldsymbol{v}\,ds\,\frac{\partial v_l}{\partial y_m}. \label{a1div}
}
Therefore, transforming also $\nabla_x \varrho$ and $\nabla_x h_l$ we obtain \eqref{lag:sys}$_2$
with
\eq{ \label{lag:7}
\vc{R}_2(U) =& \mu A_{2\Delta}({\bk_{\bv}})\nabla^2_y\boldsymbol{v}
+ \mu A_{1\Delta}({\bk_{\bv}})\nabla_y\boldsymbol{v}
+ (\mu+\nu) A_{2{\rm div}}({\bk_{\bv}})\nabla^2_y\boldsymbol{v} + (\mu+\nu) A_{1{\rm div}}({\bk_{\bv}})\nabla_y\boldsymbol{v} \\
&+ \frac{\eta}{\Sigma_\varrho}{\bf V}^0({\bk_{\bv}})\nabla_y\eta
+ {\bf V}^0({\bk_{\bv}})\sum_{l=2}^{n}\lr{\varrho_l-\frac{m_l\varrho_l\varrho}{\Sigma_\varrho}}\nabla_y \vartheta_{l-1},
}
where $A_{i{\rm div}}\nabla^i_y\boldsymbol{v},\ i=1,2$, are vectors with coordinates $A_{i{\rm div},j}\nabla^i_y\boldsymbol{v},\ j=1,2,3$.
Finally we transform the species balance equations.
We have
\begin{equation*}\begin{split}
{\rm div}_x({\cal{B}}_{kl}\nabla_x h_l)
&={\cal{B}}_{kl}(\Delta_y\vartheta_l+A_{2\Delta}({\bk_{\bv}})\nabla^2_y\vartheta_l+A_{1\Delta}({\bk_{\bv}})\nabla_y \vartheta_l)\\
&\quad+\lr{\nabla_y {\cal{B}}_{kl}+{\bf V}^0({\bk_{\bv}})\nabla_y {\cal{B}}_{kl}}\lr{\nabla_y \vartheta_l+{\bf V}^0({\bk_{\bv}})\nabla_y \vartheta_l}\\
&={\rm div}_y({\cal{B}}_{kl}\nabla_y\vartheta_l)+R^{kl}_3(U),
\end{split}\end{equation*}
where
\eq{\label{lag:8}
R^{kl}_3(U)=&{\cal{B}}_{kl}(A_{2\Delta}({\bk_{\bv}})\nabla^2_y\vartheta_l+A_{1\Delta}({\bk_{\bv}})\nabla_y \vartheta_l)\\
&+{\bf V}^0({\bk_{\bv}})\nabla_y {\cal{B}}_{kl}(\nabla_y\vartheta_l+{\bf V}^0({\bk_{\bv}})\nabla_y\vartheta_l)+(\nabla_y {\cal{B}}_{kl}){\bf V}^0({\bk_{\bv}})\nabla_y\vartheta_l.
}
Therefore, transforming also ${\rm div}\,\boldsymbol{u}$, we obtain \eqref{lag:sys}$_3$ with
\begin{equation}\label{lag:10}
R^k_{3}(U)=\sum_{l=1}^{n-1} R_{3}^{kl}(U)-\lr{\varrho_{k+1}-\frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}}\sum_{j,m=1}^3V^0_{jm}({\bk_{\bv}})\frac{\partial v_j}{\partial y_m}.
\end{equation}
It remains to transform the boundary conditions. For this purpose notice that
$$\boldsymbol{n}(x) = \boldsymbol{n}\lr{y + \int^t_0\boldsymbol{v}(y, s)\,ds}
= \boldsymbol{n}(y) + \int^1_0(\nabla\boldsymbol{n})
\lr{y + \tau\int^t_0\boldsymbol{v}(y, s)\,ds}\,d\tau
\int^t_0\boldsymbol{v}(y, s)\,ds,
$$
and therefore we obtain \eqref{lag:bc} with
\eq{\label{lag:9}
R_4^k(U)=&\boldsymbol{n}\lr{y + \int^t_0\boldsymbol{v}(y, s)\,ds}\cdot ({\bf V}^0({\bk_{\bv}})\nabla_y \vartheta_k)\\
&+ \left\{\int^1_0(\nabla\boldsymbol{n})
\lr{y + \tau\int^t_0\boldsymbol{v}(y, s)\,ds}\,d\tau
\int^t_0\boldsymbol{v}(y, s)\,ds\right\} \cdot\nabla_y\vartheta_k.
}
\section{Linearization}\label{S:lin}
\subsection{Formulation of linearized system}
We now linearize the system in the Lagrangian coordinates \eqref{lag:sys} around the initial conditions.
For this purpose we introduce small perturbations
\begin{equation} \label{lin1:1}
\eta=\sigma+\varrho^0,\quad \varrho_l=\sigma_l+\varrho^0_l,
\end{equation}
following the convention introduced in the previous section that $\varrho_l$ are the functions in the Lagrangian coordinates.
\noindent Let us denote
$$
\Sigma_\varrho^0=\sum_{k=1}^n m_k \varrho^0_k, \quad p^0=\sum_{k=1}^n \frac{\varrho_k^0}{m_k},
$$
and
\begin{equation} \label{lin1:1b}
h^0_k=\frac{1}{m_{k+1}}\log \varrho^0_{k+1} - \frac{1}{m_1}\log \varrho^0_1, \quad k=1,\ldots,n-1 .
\end{equation}
Observe that due to \eqref{initial:0} we have
\begin{equation}
n a_1 \leq \varrho^0 \leq n a_2,
\end{equation}
as well as
$$
|h^0_k|\leq \lr{\frac{1}{m_{k+1}}+\frac{1}{m_1}}\max\{|\log a_1|,|\log a_2|\}.
$$
The linearization of the continuity equation is straightforward, while for the momentum equation we have
$$
\frac{\eta}{\Sigma_{\varrho}}\nabla \eta = \frac{\varrho^0}{\Sigma_\varrho^0} \nabla \sigma
+ \varrho^0 \nabla \sigma \left( \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right) + \frac{\eta}{\Sigma_\varrho}\nabla\varrho^0
+ \frac{\sigma}{\Sigma_\varrho}\nabla\sigma
$$
and
\begin{equation} \label{lin1:2}
\frac{m_l\varrho_l\varrho}{\Sigma_\varrho}=
\frac{m_l\varrho^0_l\varrho^0}{\Sigma_\varrho^0}+m_l\varrho^0\varrho^0_l\left( \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right)
+\frac{m_l}{\Sigma_\varrho}(\varrho_l^0\sigma+\varrho^0\sigma_l).
\end{equation}
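The differences appearing in these decompositions are controlled by the perturbations through the elementary identity
$$
\frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0}
=-\frac{\Sigma_\varrho-\Sigma_\varrho^0}{\Sigma_\varrho\,\Sigma_\varrho^0}
=-\frac{\sum_{k=1}^{n}m_k\sigma_k}{\Sigma_\varrho\,\Sigma_\varrho^0},
$$
and analogously for $\frac{1}{p}-\frac{1}{p^0}$; this will be useful when estimating the right-hand sides.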
Similarly we linearize the ${\cal{R}}_{kl}$ in the species equations
while for the reduced diffusion matrix we use
\begin{equation} \label{lin1:3}
{\cal{B}}_{k-1,l-1}=\frac{\varrho_k \varrho_l D_{kl}}{p}=\frac{\varrho^0_k\varrho^0_lD_{kl}^0}{p^0}
+\frac{\varrho_k\varrho_lD_{kl}-\varrho^0_k\varrho^0_lD_{kl}^0}{p}+\varrho^0_k\varrho^0_lD_{kl}^0\left(\frac{1}{p}-\frac{1}{p^0}\right).
\end{equation}
Therefore we obtain the following linearized system
\begin{align} \label{lin1:sys}
&\partial_{t}\sigma + \varrho^0 {\rm div} \boldsymbol{v} = f_1(U)\\
&\varrho^0 \partial_t \boldsymbol{v} - \mu\Delta\boldsymbol{v}-(\mu+\nu)\nabla{\rm div} \boldsymbol{v} + \gamma_1 \nabla \sigma
+ \sum_{l=1}^{n-1} \gamma_2^{l} \nabla\vartheta_{l}=\boldsymbol{f}_2(U)\\
&\sum_{l=1}^{n-1} {\cal R}_{kl}^0\partial_t \vartheta_l + \gamma_2^k\operatorname{div} \boldsymbol{v}
-{\rm div}\lr{\sum_{l=1}^{n-1} {\cal{B}}_{kl}^0\nabla\vartheta_l}=f^k_3(U), \quad k=1,\ldots,n-1
\end{align}
in $\Omega\times(0, T)$, supplied with the boundary conditions
\begin{equation} \label{lin1:bc}
\boldsymbol{v}|_{\partial \Omega}=0, \quad \nabla \vartheta_k \cdot \boldsymbol{n}|_{\partial \Omega}=f^k_4(U), \quad k=1,\ldots,n-1
\end{equation}
and initial conditions
\begin{equation} \label{lin1:ic}
(\sigma,\boldsymbol{v},\vec\vartheta)|_{t=0}=(0,\boldsymbol{u}^0, \vec h^0),
\end{equation}
where we denote
$$\vec h^0=(h_1^0,\ldots,h_{n-1}^0),$$
$$D_{kl}^0=D_{kl}(\varrho^0), \quad {\cal{R}} _{kl}^0=m_{k+1}\varrho^0_{k+1}\delta_{kl}-\frac{m_{k+1}m_{l+1}\varrho^0_{k+1}\varrho^0_{l+1}}{\Sigma_{\varrho}^0}, \quad {\cal{B}}_{kl}^0=\frac{\varrho^0_{l+1}\varrho^0_{k+1}D_{k+1,l+1}^0}{p^0},$$
$$\gamma_1=\frac{\varrho^0}{\Sigma_\varrho^0}, \quad \gamma_2^k=\varrho^0_{k+1}-\frac{m_{k+1}\varrho^0_{k+1}\varrho^0}{\Sigma_\varrho^0}, $$
and the right hand side is given by
\begin{equation} \label{lin1:5a}
f_1(U)=R_1(U)-\sigma {\rm div} \boldsymbol{v},
\end{equation}
\begin{equation} \label{lin1:5b}
\begin{split}
\boldsymbol{f}_2(U)=&\vc{R}_2(U)-\sigma\partial_t \boldsymbol{v} - \varrho^0\nabla\eta \left( \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right) - \frac{\varrho^0}{\Sigma_\varrho}\nabla\varrho^0
- \frac{\sigma}{\Sigma_\varrho}\nabla\eta\\
&+\sum_{l=1}^{n-1}\lr{-\sigma_{l+1}+m_{l+1}\varrho_{l+1}^0\varrho^0\left( \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right)+\frac{m_{l+1}}{\Sigma_\varrho}(\varrho_{l+1}\sigma+\varrho^0\sigma_{l+1})}\nabla\vartheta_{l},
\end{split}
\end{equation}
\eq{ \label{lin1:5c}
f^k_3(U)
&=R^k_3(U) + \varrho^0_{k+1}{\rm div} \boldsymbol{v} + \left[ m_{k+1}\varrho^0\varrho^0_{k+1}\lr{\frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0}}+\frac{m_{k+1}}{\Sigma_\varrho}(\varrho_{k+1}^0\sigma+\varrho^0\sigma_{k+1})\right]{\rm div} \boldsymbol{v}\\
&+\sum_{l=1}^{n-1}\left( - \delta_{kl}m_{k+1}\sigma_{k+1}
+m_{k+1} m_{l+1}\left[\varrho^0_{k+1}\varrho^0_{l+1}\left(\frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0}\right)+
\frac{\varrho_{k+1}^0\sigma_{l+1}+\varrho^0_{l+1}\sigma_{k+1}}{\Sigma_\varrho}
\right]\right)\partial_t\vartheta_{l}\\
&+{\rm div} \left( \sum_{l=1}^{n-1} \left[\frac{\varrho_{k+1}\varrho_{l+1}D_{k+1,l+1}-\varrho^0_{k+1}\varrho^0_{l+1}D_{k+1,l+1}^0}{p}+\varrho^0_{k+1}\varrho^0_{l+1}D_{k+1,l+1}^0\lr{\frac{1}{p}-\frac{1}{p^0}}\right]\nabla\vartheta_{l} \right),
}
\begin{equation} \label{lin1:5d}
f^k_4(U)=R^k_4(U).
\end{equation}
\subsection{Solvability of the complete linear system}
\subsubsection{Auxiliary results}
To prove the existence of local-in-time strong solutions to system \eqref{lin1:sys} with fixed and given $f_1, \boldsymbol{f}_2, f_3^k$, and $f_4^k$ we will use some auxiliary results for two subsystems. First let us recall a relevant existence result for the fluid part (for the proof see \cite{PSZ}, Theorem 5.1):
\begin{thm} \label{thm:lin1}
Assume $1 < p, q < \infty$,
$2/p + 1/q \neq 1$, $T>0$ and
$\Omega$ is a uniformly $C^2$ domain in $\mathbb{R}^N$ $(N \geq 2)$.
Assume moreover that $\varrho^0\in H^1_q(\Omega)$,
$\boldsymbol{u}^0 \in B^{2(1-1/p)}_{q,p}(\Omega)^N$,
$\tilde f_1 \in L_p((0, T), H^1_q(\Omega)) $ and $\tilde{\boldsymbol{f}}_2 \in L_p((0, T), L_q(\Omega)^N)$.
Then the problem
\begin{equation}
\left\{
\begin{aligned}
\partial_{t}\sigma+ \varrho^0 {\rm div} \boldsymbol{v} &= \tilde f_1 &\quad&\text{in $\Omega\times(0, T)$}, \\
\varrho^0 \partial_t \boldsymbol{v} - \mu\Delta\boldsymbol{v}-(\mu+\nu)\nabla{\rm div} \boldsymbol{v} + \gamma_1 \nabla \sigma &= \tilde{\boldsymbol{f}}_2&\quad&\text{in $\Omega\times(0, T)$}, \\
\boldsymbol{v}|_{\partial \Omega}&=0&\quad&\text{on $\Gamma \times (0, T)$}, \\
(\sigma,\boldsymbol{v})|_{t=0}&=(0,\boldsymbol{u}^0)&\quad&\text{in $\Omega$},
\end{aligned}\right.
\end{equation}
admits a solution $(\sigma,\boldsymbol{v})$ such that
\begin{align}
&\|\boldsymbol{v}\|_{L_p((0, T), H^2_q(\Omega))}
+ \|\partial_t\boldsymbol{v}\|_{L_p((0, T), L_q(\Omega))}
+ \| \sigma \|_{H^1_p(0,T;H^1_q(\Omega))} \\
&\quad
\leq Ce^{cT}\lr{\|\varrho^0\|_{H^1_q(\Omega)}+\|\boldsymbol{u}^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+ \|\tilde f_1\|_{L_p((0, T), H^1_q(\Omega))}
+ \|\tilde{\boldsymbol{f}}_2\|_{L_p((0, T), L_q(\Omega))}}.
\end{align}
\end{thm}
For the species subsystem we recall the following theorem, which gives solvability in a maximal $L_p-L_q$
regime for a linear problem; its proof can be found in our work \cite{PSZ2}.
For general $m$ species we consider $k\in\{1,\ldots,m\}$ and the following set of equations
\begin{equation}\label{1.1?}\left\{
\begin{aligned}
\sum_{\ell=1}^m {\cal{R}}_{k\ell}\partial_t \vartheta_\ell
-{\rm div}\lr{\sum_{\ell=1}^m {\cal{B}}_{k\ell}\nabla \vartheta_\ell} & = \tilde f_3^k
&\quad&\text{in $\Omega\times(0, T)$}, \\
\sum_{\ell=1}^m {\cal{B}}_{k\ell}\nabla \vartheta_\ell \cdot \boldsymbol{n} & = \tilde f_4^k
&\quad&\text{on $\Gamma \times (0, T)$}, \\
\vartheta_k|_{t=0} & = h_{k}^0
&\quad&\text{in $\Omega$},
\end{aligned}\right.
\end{equation}
where
${\cal{B}}= {\cal{B}}(x)$ and ${\cal{R}}={\cal{R}}(x)$ are $m\times m$ matrices whose
$(k, \ell)^{\rm th}$ components are ${\cal{B}}_{k\ell}(x)$ and ${\cal{R}}_{k\ell}(x)$, respectively.
\begin{thm} \label{thm:main1}
Assume that
\begin{itemize}
\item
there exists a number $M_0$ for which
\begin{equation}\label{1.2}\begin{aligned}
&|{\cal{B}}_{k\ell}(x)|, |{\cal{R}}_{k\ell}(x)| \leq M_0,
\quad \text{for any $x \in \Omega$}, \\
&|{\cal{B}}_{k\ell}(x) - {\cal{B}}_{k\ell}(y)|\leq M_0|x-y|^\sigma,
\quad
|{\cal{R}}_{k\ell}(x) - {\cal{R}}_{k\ell}(y)|\leq M_0|x-y|^\sigma
\quad\text{for any $x, y \in \Omega$},\\
&\|\nabla({\cal{B}}_{k\ell}, {\cal{R}}_{k\ell})\|_{L_r(\Omega)} \leq M_0.
\end{aligned}
\end{equation}
\item
the matrices
${\cal{B}}$ and ${\cal{R}}$ are positive and symmetric and
that there exist constants $m_1,m_2 > 0$ for which
\begin{equation}\label{1.3}
\langle {\cal{B}}(x)\xi, \overline{\xi} \rangle \geq m_1|\xi|^2,
\quad
\langle {\cal{R}}(x)\xi, \overline{\xi} \rangle \geq m_2|\xi|^2
\end{equation}
for any complex $m$-vector $\xi$ and $x \in \Omega$.
\item
$1 < p, q < \infty$ and $T > 0$,
$2/p + 1/q \not =1$ and
$\Omega$ is a uniformly $C^2$ domain in $\mathbb{R}^3$.
\item for all $k=1,\ldots,m$,
$h_k^0 \in B^{2(1-1/p)}_{q,p}(\Omega)$,
$\tilde f_3^k \in L_p((0, T), L_q(\Omega))$ and
$\tilde f_4^k \in L_p(\mathbb{R}, H^1_q(\Omega))
\cap
H^{1/2}_p(\mathbb{R}, L_q(\Omega))$ are given functions satisfying
the compatibility conditions:
\begin{equation}\label{1.4}\sum_{\ell=1}^m {\cal{B}}_{k\ell}\nabla h_{\ell}^0
\cdot\boldsymbol{n} = \tilde f_4^k(\cdot, 0)
\quad\text{on $\Gamma$}
\end{equation}
provided $2/p + 1/q < 1$.
\end{itemize}
\noindent
Then, problem \eqref{1.1?}
admits a unique solution $\vec\vartheta = (\vartheta_1, \ldots, \vartheta_m)^\top$
with
\begin{equation}\label{1.5}
\vec\vartheta \in L_p((0, T), H^2_q(\Omega)^m)
\cap H^1_p((0, T), L_q(\Omega)^m)
\end{equation}
possessing the estimate:
\begin{equation}\label{1.6}
\begin{aligned}
&\|\vec\vartheta\|_{L_p((0, T), H^2_q(\Omega))}
+ \|\partial_t\vec\vartheta\|_{L_p((0, T), L_q(\Omega))}\\
&\quad
\leq Ce^{cT}(\|\vec h^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+ \|\vec{\tilde f}_3\|_{L_p((0, T), L_q(\Omega))}
+ \|\vec{\tilde f}_4\|_{L_p((0, T), H^1_q(\Omega))}
+ \|\vec{\tilde f}_4\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega))})
\end{aligned}
\end{equation}
for some constants $C$ and $c$.
\end{thm}
\subsubsection{Fixed point argument}
With Theorems \ref{thm:lin1} and \ref{thm:main1} at hand, it is easy to show solvability, with appropriate estimates, of the complete linear system corresponding to \eqref{lin1:sys}-\eqref{lin1:bc}:
\begin{equation}\label{lin2:sys}
\left\{
\begin{aligned}
&\partial_{t}\sigma + \varrho^0{\rm div} \boldsymbol{v} = f_1 \\
&\varrho^0 \partial_t \boldsymbol{v} - \mu\Delta\boldsymbol{v}-(\mu+\nu)\nabla{\rm div} \boldsymbol{v} + \gamma_1 \nabla \sigma +\sum_{l=1}^{n-1}\gamma^l_2 \nabla \vartheta_l = \boldsymbol{f}_2\\
&\sum_{l=1}^{n-1} {\cal R}_{kl}^0\partial_t \vartheta_l + \gamma_2^k\operatorname{div} \boldsymbol{v}
-{\rm div}\lr{\sum_{l=1}^{n-1} {\cal{B}}_{kl}^0\nabla\vartheta_l}=f^k_3, \quad k=1,\ldots,n-1
\end{aligned}\right.
\end{equation}
with given $\gamma_1,\{\gamma^l_2\}_{l=1,\ldots,n-1}$ and the boundary conditions
\begin{equation} \label{lin2:bc}
\boldsymbol{v}|_{\partial \Omega}=0, \quad \sum_{l=1}^{n-1} {\cal{B}}_{kl}^0 \nabla \vartheta_l\cdot \boldsymbol{n}|_{\partial \Omega}=f_4^k, \quad k=1,\ldots,n-1,
\end{equation}
and initial conditions \eqref{lin1:ic}.
We have the following result.
\begin{thm} \label{thm:lin2}
Assume ${\cal{B}}^0$, ${\cal{R}}^0$, $\Omega$ and $p,q$ satisfy the assumptions of Theorem \ref{thm:main1} with $m=n-1$. Assume moreover
$\boldsymbol{u}^0,\vec h^0 \in B^{2(1-1/p)}_{q,p}(\Omega)$,
$\varrho^0 \in H^1_q(\Omega)$,
$f_1 \in L_p((0, T), H^1_q(\Omega))$,
$(\boldsymbol{f}_2,\vec f_3) \in L_p((0, T), L_q(\Omega)^{n+2})$,
$\vec f_4 \in L_p(\mathbb{R}, H^1_q(\Omega)^{n-1})
\cap
H^{1/2}_p(\mathbb{R}, L_q(\Omega)^{n-1})$.
Then for any $M>0$, if
\begin{align}
&\|\boldsymbol{u}^0,\vec h^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+\|\varrho^0\|_{H^1_q(\Omega)}
+\|f_1\|_{L_p((0, T), H^1_q(\Omega))}\\
&+\|(\boldsymbol{f}_2,\vec f_3)\|_{L_p((0, T), L_q(\Omega)^{n+2})}
+\|\vec f_4\|_{L_p(\mathbb{R}, H^1_q(\Omega)^{n-1})}
+\|\vec f_4\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega)^{n-1})} \leq M,
\end{align}
then there exists $T>0$ such that system
\eqref{lin2:sys}-\eqref{lin2:bc} admits a solution $(\sigma,\boldsymbol{v},\vec\vartheta)$ on $(0,T)$ with
\begin{align} \label{est:lin3}
[\sigma,\boldsymbol{v},\vec\vartheta]_T \leq
C\Big(&\|\boldsymbol{u}^0,\vec h^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
+\|\varrho^0\|_{H^1_q(\Omega)}
+\|f_1\|_{L_p((0, T), H^1_q(\Omega))}\\
&+\|(\boldsymbol{f}_2,\vec f_3)\|_{L_p((0, T), L_q(\Omega))}
+\|\vec f_4\|_{L_p(\mathbb{R}, H^1_q(\Omega))}
+\|\vec f_4\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega)^{n-1})}\Big).
\end{align}
\end{thm}
\emph{Proof.} We use the Banach fixed point argument. For given $\bar{\boldsymbol{v}} \in {\mathcal H}^1_{T,M}$ denote by $\vec\vartheta(\bar{\boldsymbol{v}})$
the solution to \eqref{lin2:sys}$_3$ with $\boldsymbol{v}=\bar{\boldsymbol{v}}$ and boundary condition \eqref{lin2:bc}$_2$.
Since
$
\|\bar{\boldsymbol{v}}\|_{L_\infty(0,T;H^1_\infty(\Omega))}\leq CM
$ (see estimate \eqref{est:04}),
by Theorem \ref{thm:main1} such a solution exists for arbitrary time $T>0$; it is unique and satisfies
\eq{
[\vec\vartheta(\bar{\boldsymbol{v}})]_{T,1}\leq &C(T)\Big(\|\vec h^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}+
\|\vec f_3\|_{L_p(0,T;L_q(\Omega)^{n-1})}+E(T)\|\bar{\boldsymbol{v}}\|_{L_p(0,T;H^1_q(\Omega)^3)}\\
&\quad+\|\vec f_4\|_{L_p(\mathbb{R}, H^1_q(\Omega)^{n-1})}
+\|\vec f_4\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega)^{n-1})}\Big)\\
\leq & C(T,M)\left(1+E(T)\|\bar{\boldsymbol{v}}\|_{L_p(0,T;H^1_q(\Omega)^3)}\right).
}
Therefore for $(\bar{\boldsymbol{v}},\bar \sigma) \in {\mathcal H}^1_{T,M} \times {\mathcal H}^2_{T,M}$ we can define
$(\boldsymbol{v}, \sigma)={\mathcal T}(\bar{\boldsymbol{v}},\bar \sigma)$ as the unique solution of the first two equations of system \eqref{lin2:sys} with
$\vec\vartheta=\vec\vartheta(\bar{\boldsymbol{v}})$ and boundary condition \eqref{lin2:bc}$_1$. By Theorem \ref{thm:lin1} we have
\eq{
[\sigma]_{T,2}+[\boldsymbol{v}]_{T,1} \leq &
C(T)\Big(\|\varrho^0\|_{H^1_q(\Omega)}+\|\boldsymbol{u}^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)^3}\\
&\quad+ \|f_1\|_{L_p((0, T), H^1_q(\Omega))}
+ \|\boldsymbol{f}_2\|_{L_p((0, T), L_q(\Omega)^3)}+\|\nabla\vec\vartheta(\bar{\boldsymbol{v}})\|_{L_p((0, T), L_q(\Omega)^{n-1})}\Big)\\
\leq & C(T,M)\left(1+E(T)\|\bar{\boldsymbol{v}}\|_{L_p(0,T;H^1_q(\Omega)^3)}\right).
}
Moreover, taking different $\bar{\boldsymbol{v}}_1, \bar{\boldsymbol{v}}_2\in {\mathcal H}^1_{T,M}$ corresponding to the same initial data $\boldsymbol{u}^0$, and subtracting the equations for $\vec\vartheta(\bar{\boldsymbol{v}}_1)$ and $\vec\vartheta(\bar{\boldsymbol{v}}_2)$,
we get
$$
[\vec\vartheta(\bar{\boldsymbol{v}}_1)-\vec\vartheta(\bar{\boldsymbol{v}}_2)]_{T,1}\leq C(M)E(T)[\bar{\boldsymbol{v}}_1-\bar{\boldsymbol{v}}_2]_{T,1}.
$$
Therefore, applying Theorem \ref{thm:lin1} to the difference of two solutions we have
\eqh{
[{\mathcal T}(\bar{\boldsymbol{v}}_1,\bar \sigma_1)-{\mathcal T}(\bar{\boldsymbol{v}}_2,\bar \sigma_2)]_{T,1;T,2}&\leq
C(M)E(T) [\bar{\boldsymbol{v}}_1-\bar{\boldsymbol{v}}_2]_{T,1}\\
&\leq C(M)E(T)[(\bar{\boldsymbol{v}}_1-\bar{\boldsymbol{v}}_2,\bar\sigma_1-\bar\sigma_2)]_{T,1;T,2}.
}
Therefore for sufficiently small $T$, $\mathcal T$ is a contraction on a set ${\mathcal H}^1_{T,M}\times{\mathcal H}^2_{T,M}$,
and applying the Banach fixed point theorem we complete the proof.
\rightline{ $\square$}
\section{Proof of Theorem \ref{thm:main2}}\label{S:Nonl}
\subsection{Nonlinear estimates}
The aim of this section is to prove the following proposition, which gives the estimate of the right-hand side of the linearized system in the regularity required to apply Theorem \ref{thm:lin2}. We shall use the notation introduced at the beginning of Section 5.2. For brevity, in this subsection we will not distinguish between scalar- and vector-valued functions in the notation of functional spaces, except for the final estimates.
\begin{prop} \label{prop:est}
Let $\bar U=(\bar \sigma,\bar{\boldsymbol{v}},\bar{\vec\vartheta}) \in {\mathcal H}_{T,M} $ for given $T,M>0$, where the initial conditions satisfy the assumptions of Theorem \ref{thm:main2}.
Let $f_1(U),\boldsymbol{f}_2(U),f_3^k(U)$ and $f^k_4(U)$ be given by \eqref{lin1:5a}-\eqref{lin1:5d},
where $R_1(U),\vc{R}_2(U),R_3^k(U)$ and $R^k_4(U)$ are defined in \eqref{lag:6},\eqref{lag:7},\eqref{lag:8}-\eqref{lag:10} and \eqref{lag:9}, respectively. Then
\eq{\label{est:nonlin}
&\|f_1(\bar U)\|_{L_p(0,T;H^1_q(\Omega))} + \|\boldsymbol{f}_2(\bar U)\|_{L_p(0,T;L_q(\Omega)^3)}+\|\vec f_3(\bar U)\|_{L_p(0,T;L_q(\Omega)^{n-1})} \\
&+\|\vec f_4(\bar U)\|_{L_p(0,T;H^1_q(\Omega)^{n-1})} + \|\vec f_4(\bar U)\|_{H^{1/2}_p(\mathbb{R},L_q(\Omega)^{n-1})} \leq C(M,L) E(T).
}
\end{prop}
Let us start with recalling some auxiliary results. The first one is due to
Tanabe (cf. \cite{Tanabe} p.10):
\begin{lem} \label{L:int}
Let $X$ and $Y$ be two Banach spaces such that
$X$ is a dense subset of $Y$ and $X\subset Y$ is continuous.
Then for each $p \in (1, \infty)$
$$H^1_p((0, \infty), Y) \cap L_p((0, \infty), X)
\subset C([0, \infty), (X, Y)_{1/p,p})$$
and for every $u\in H^1_p((0, \infty), Y) \cap L_p((0, \infty), X)$ we have
$$\sup_{t \in (0, \infty)}\|u(t)\|_{(X, Y)_{1/p,p}}
\leq (\|u\|_{L_p((0, \infty),X)}^p
+ \|u\|_{H^1_p((0, \infty), Y)}^p)^{1/p}.
$$
\end{lem}
Next two results will be needed to estimate the boundary data. For the first one see [Shibata and Shimizu \cite{SS1}, Lemma 2.7]:
\begin{lem}\label{lem:5.1}
Let $1 < p < \infty$, $3 < q < \infty$ and $0 < T \leq 1$. Assume that
$\Omega$ is a uniformly $C^2$ domain. Let
\begin{align*}
f \in H^1_\infty(\mathbb{R}, L_q(\Omega)) \cap L_\infty(\mathbb{R}, H^1_q(\Omega)),
\quad
g \in L_p(\mathbb{R}, H^1_q(\Omega)) \cap H^{1/2}_p(\mathbb{R}, L_q(\Omega)).
\end{align*}
If we assume that $f \in L_p(\mathbb{R}, H^1_q(\Omega))$ and
that $f$ vanishes for $t \notin [0, 2T]$ in addition, then we have
\begin{align*}
&\|fg\|_{L_p(\mathbb{R}, H^1_q(\Omega))} + \|fg\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega))}\\
&\quad\leq C(\|f\|_{L_\infty(\mathbb{R}, H^1_q(\Omega))}
+T^{(q-3)/(pq)}\|\partial_tf\|_{L_\infty(\mathbb{R}, L_q(\Omega))}^{(1-3/(2q))}
\|\partial_tf\|_{L_p(\mathbb{R}, H^1_q(\Omega))}^{3/(2q)})
(\|g\|_{L_p(\mathbb{R}, H^1_q(\Omega))} + \|g\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega))}).
\end{align*}
\end{lem}
\begin{rmk} \thetag1~
The boundary of $\Omega$
was assumed to be bounded in \cite{SS1}.
However, Lemma \ref{lem:5.1} can be proved using Sobolev's
inequality and complex interpolation theorem, and so
employing the same argument as that in the proof of
\cite[Lemma 2.7]{SS1},
we can prove Lemma \ref{lem:5.1}. \\
\thetag2~ By Sobolev's inequality, $\|fg\|_{H^1_q(\Omega)}
\leq C\|f\|_{H^1_q(\Omega)}\|g\|_{L_q(\Omega)}$, and so
the essential part of Lemma \ref{lem:5.1} is the estimate of
$\|fg\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega))}$.
\end{rmk}
The second result has been shown in Shibata and Shimizu \cite{SS2} for $\Omega=\mathbb{R}^3$ and generalized to a uniform $C^2$ domain in Shibata \cite{S17}:
\begin{lem}\label{lem:5.2} Let $1 < p, q < \infty$. Assume that
$\Omega$ is a uniform $C^2$ domain. Then
$$H^1_p(\mathbb{R}, L_q(\Omega)) \cap L_p(\mathbb{R}, H^2_q(\Omega))
\subset H^{1/2}_p(\mathbb{R}, H^1_q(\Omega)), $$
and
$$\|\nabla f\|_{H^{1/2}_p(\mathbb{R}, L_q(\Omega))}
\leq C(\|f\|_{L_p(\mathbb{R}, H^2_q(\Omega))}
+ \|\partial_t f\|_{L_p(\mathbb{R}, L_q(\Omega))}).
$$
\end{lem}
Now we show preliminary estimates for functions from the space ${\mathcal H}_{T,M}$.
\begin{lem}
Let $(\sigma, \boldsymbol{v}, \vec\vartheta) \in {\mathcal H}_{T,M}$ and let ${\bk_{\bv}},{\bf V}^0({\bk_{\bv}})$ be defined in \eqref{lag:3}. Then
\begin{align}
&\|{\bf V}^0({\bk_{\bv}}),\nabla_{{\bk_{\bv}}}{\bf V}^0({\bk_{\bv}})\|_{L_\infty(\Omega\times(0,T))} \leq C(M,L)E(T), \label{est:01}\\
&\sup_{t \in (0,T)} \|\sigma(\cdot,t)\|_{H^1_q(\Omega)}\leq C(M,L)E(T), \label{est:02} \\
&\sup_{t \in (0,T)}\|{\vec \vartheta}(\cdot,t)-{\vec h^0}\|_{B^{2(1-1/p)}_{q,p}}+\sup_{t \in (0,T)}\|\boldsymbol{v}(\cdot,t)-\boldsymbol{u}^0\|_{B^{2(1-1/p)}_{q,p}}\leq C(M,L),\label{est:03}\\
&\|\boldsymbol{v},{\vec \vartheta}\|_{L_\infty(0,T;H^1_\infty(\Omega))}\leq C(M), \label{est:04}\\
&\|\varrho_k-\varrho_k^0\|_{L_\infty(0,T;H^1_q)}\leq C(M,L) \quad \forall k=1,\ldots,n,\label{est:05}\\
&\|\varrho_k-\varrho_k^0\|_{L_\infty(\Omega \times (0,T))} \leq C(M,L)E(T). \label{est:06}
\end{align}
\end{lem}
\emph{Proof}. First of all, we have
\begin{align}
\int^T_0\|\nabla\boldsymbol{v}(\cdot, t)\|_{L_\infty(\Omega)}\,dt
&\leq C\int^T_0\|\boldsymbol{v}(\cdot, t)\|_{H^2_q(\Omega)}\,dt\nonumber\\
&\leq T^{1/{p'}}
\Bigl(\int^T_0\|\boldsymbol{v}(\cdot, t)\|_{H^2_q(\Omega)}^p\,dt\Bigr)^{1/p}
\leq MT^{1/p'},
\end{align}
which implies \eqref{est:01}. Next,
$$\|\sigma(\cdot, t)\|_{H^1_q(\Omega)}
\leq \int^t_0\|\partial_t\sigma(\cdot, s)\|_{H^1_q(\Omega)}\,ds
\leq T^{1/{p'}}\|\partial_t\sigma\|_{L_p((0, T), H^1_q(\Omega))}
\leq C(M)E(T),
$$
and so we have \eqref{est:02}. In order to prove \eqref{est:03} we introduce
extension operator
\begin{equation} \label{def:ext} e_T[f](\cdot, t)
= \begin{cases}
0 \quad &t\in(-\infty,0)\cup (2T,+\infty), \\
f(\cdot, t) \quad &t\in(0,T), \\
f(\cdot, 2T-t)\quad & t\in(T,2T).
\end{cases}
\end{equation}
Obviously, $e_T[f](\cdot, t) = f(\cdot, t)$ for $t \in (0, T)$. If
$f|_{t=0}=0$, then we have
\begin{equation} \label{ext:2} \partial_te_T[f](\cdot, t)
= \begin{cases}
0 \quad &t\in(-\infty,0)\cup (2T,+\infty),\\
(\partial_tf)(\cdot, t) \quad &t\in(0,T), \\
-(\partial_tf)(\cdot, 2T-t)\quad & t\in(T,2T),
\end{cases}
\end{equation}
understood in a weak sense.
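Let us also note the elementary bounds which make $e_T$ useful here: if $f|_{t=0}=0$, then directly from \eqref{def:ext} and \eqref{ext:2},
$$
\|e_T[f]\|_{L_p((0,\infty),X)}\leq 2^{1/p}\|f\|_{L_p((0,T),X)},\qquad
\|\partial_t e_T[f]\|_{L_p((0,\infty),X)}\leq 2^{1/p}\|\partial_t f\|_{L_p((0,T),X)}
$$
for any Banach space $X$.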
Applying Lemma \ref{L:int} with $X=H^2_q(\Omega), \, Y=L_q(\Omega)$ and using
\eqref{def:ext} and \eqref{ext:2} we have
\begin{align*}
&\sup_{t \in (0, T)}\|\vartheta_k(\cdot, t)-h_k^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}
\leq \sup_{t \in (0, \infty)}\|e_T[\vartheta_k-h_k^0]\|_{B^{2(1-1/p)}_{q,p}(\Omega)}\\&\quad
\leq (\|e_T[\vartheta_k-h_k^0]\|_{L_p((0, \infty), H^2_q(\Omega))}^p
+ \|e_T[\vartheta_k-h^0_k]\|_{H^1_p((0, \infty), L_q(\Omega))}^p)^{1/p}\\
&\quad \leq C(\|\vartheta_k-h^0_k\|_{L_p((0, T), H^2_q(\Omega))}
+ \|\partial_t\vartheta_k\|_{L_p((0, T), L_q(\Omega))}) \leq C(M,L),
\end{align*}
and estimating $\|\boldsymbol{v}(\cdot, t)-\boldsymbol{u}^0\|_{B^{2(1-1/p)}_{q,p}(\Omega)}$
in the same way we obtain \eqref{est:03}. Then \eqref{est:04} follows from
\eqref{est:03} due to the Sobolev embedding theorem, since $\frac{2}{p}+\frac{3}{q}<1$.
In order to prove \eqref{est:05} we use the fact that
$$
(\varrho_1,\ldots \varrho_n)=\Psi^{-1}(\varrho,\vartheta_1,\ldots,\vartheta_{n-1}),
$$
where $\Psi$ is the diffeomorphism defined in \eqref{def:psi},
and therefore
\begin{equation} \label{5.9} \begin{split}
&\sup_{t \in (0, T)}\|\varrho_k(\cdot, t)-\varrho_k^0(\cdot)\|_{L_q(\Omega)}
\leq \int^T_0\|\partial_t (\Psi^{-1})({{\vec \vartheta}}(\cdot, t),
\varrho^0(\cdot)+\sigma(\cdot, t))\|_{L_q(\Omega)}\,dt\\
&\quad \leq \int^T_0
\|(\Psi^{-1})'({{\vec \vartheta}}(\cdot, t), \varrho^0(\cdot) + \sigma(\cdot, t))
\|_{L_\infty(\Omega)}
\|(\partial_t{{\vec \vartheta}}(\cdot, t), \partial_t\sigma(\cdot, t))\|_{L_q(\Omega)}\,dt.
\end{split}\end{equation}
By \eqref{est:02} and \eqref{est:04}, we have
\begin{equation} \label{5.8}
\sup_{t \in (0, T)}\|(\Psi^{-1})'({{\vec\vartheta}}(\cdot, t),
\varrho^0(\cdot) + \sigma(\cdot, t))\|_{L_\infty(\Omega)} \leq C.
\end{equation}
Thus, by \eqref{5.9} we have
\begin{equation} \label{5.10}\begin{split}
\sup_{t \in (0, T)}\|\varrho_k(\cdot, t)-\varrho_k^0(\cdot)\|_{L_q(\Omega)}
&\leq C\int^T_0\|(\partial_t{{\vec\vartheta}}(\cdot, t),
\partial_t\sigma(\cdot, t))\|_{L_q(\Omega)}\,dt \\
& \leq CT^{1/{p'}}\|\partial_t({{\vec \vartheta}}, \sigma)\|_{L_p((0, T), L_q(\Omega))}
\leq C(M)E(T).
\end{split}\end{equation}
Similarly, \begin{align*}
&\|\nabla[\varrho_k(\cdot, t)-\varrho_k^0(\cdot)]\|_{L_q(\Omega)} \\
&\quad
\leq \|(\Psi^{-1})'({{\vec \vartheta}}(\cdot, t), \varrho^0(\cdot)+\sigma(\cdot, t))
\|_{L_\infty(\Omega)}
\|(\nabla{{\vec \vartheta}}(\cdot, t), \nabla\varrho^0(\cdot)
+\nabla\sigma(\cdot, t))\|_{L_q(\Omega)}
+ \|\nabla\vec{\varrho^0}\|_{L_q(\Omega)} \leq C(L,M),
\end{align*}
which implies \eqref{est:05}. In order to show \eqref{est:06} note that
$
W^{\frac{3}{q}+\varepsilon}_q(\Omega)\subset L_\infty(\Omega)$ $\forall \varepsilon>0,$
therefore for $\varepsilon < 1-\frac{3}{q}$ we have
\begin{equation}\label{5.12}\begin{split}
&\sup_{t \in (0, T)}\|\varrho_k(\cdot, t)
-\varrho_k^0(\cdot)\|_{L_\infty(\Omega)}\\
&\quad \leq (\sup_{t \in (0, T)}
\|\varrho_k(\cdot, t)
-\varrho_k^0(\cdot)\|_{L_q(\Omega)})^{\theta}\\
&\qquad\times
(\sup_{t \in (0, T)}
\|\varrho_k(\cdot, t)
-\varrho_k^0(\cdot)\|_{H^1_q(\Omega)})^{1-\theta} \leq C(M,L)E(T)
\end{split}\end{equation}
with $\theta = 1-(3/q+\varepsilon) \in (0, 1)$. This way we obtain \eqref{est:06} and complete the proof.
\rightline{ $\square$}
\noindent
The next lemma gives bounds on the terms coming from the change of coordinates.
\begin{lem} \label{l:est_lag}
Let $A_{2\Delta}({\bk_{\bv}})\nabla^2(\cdot),A_{1\Delta}({\bk_{\bv}})\nabla(\cdot),A_{2{\rm div}}({\bk_{\bv}})\nabla^2(\cdot),A_{1{\rm div}}({\bk_{\bv}})\nabla(\cdot)$ be defined in \eqref{a2delta},\eqref{a1delta},\eqref{a2div} and \eqref{a1div}, respectively. Then
\begin{align}
&\|A_{2\Delta}\nabla^2\boldsymbol{v},A_{2{\rm div}}\nabla^2\boldsymbol{v}\|_{L_p(0,T;L_q(\Omega))}+
\|A_{1\Delta}\nabla \boldsymbol{v},A_{1{\rm div}}\nabla \boldsymbol{v}\|_{L_\infty(0,T;L_q(\Omega))}\leq C(M)E(T) \label{est:06}\\
&\|A_{2\Delta}\nabla^2\vartheta_k,A_{2{\rm div}}\nabla^2\vartheta_k\|_{L_p(0,T;L_q(\Omega))}+
\|A_{1\Delta}\nabla \vartheta_k,A_{1{\rm div}}\nabla \vartheta_k\|_{L_\infty(0,T;L_q(\Omega))}\leq C(M)E(T) \label{est:07}
\end{align}
for all $k=1,\ldots,n-1$.
\end{lem}
\emph{Proof.}
By \eqref{est:01} and \eqref{a2delta} we have
$$
\|A_{2\Delta}\nabla^2\boldsymbol{v}\|_{L_p(0,T;L_q(\Omega))}\leq \|{\bf V}^0({\bk_{\bv}})\|_{L_\infty(\Omega\times(0,T))}(1+\|{\bf V}^0({\bk_{\bv}})\|_{L_\infty(\Omega\times(0,T))})\|\nabla^2 \boldsymbol{v}\|_{L_p(0,T;L_q(\Omega))}\leq C(M)E(T).
$$
Next, notice that
$$
\left\| \int_0^t \nabla^2 \boldsymbol{v}\,ds \right\|_{L_q(\Omega)} \leq \int_0^t \|\nabla^2 \boldsymbol{v}\|_{L_q(\Omega)}\,ds \leq t^{1/p'}\|\nabla^2\boldsymbol{v}\|_{L_p(0,T;L_q(\Omega))},
$$
therefore, by \eqref{est:01} and \eqref{est:04},
\eqh{
\left\| \nabla_{{\bk_{\bv}}}V^0_{lm}({\bk_{\bv}}) \left[\int_0^t \partial_l \nabla \boldsymbol{v}\,ds\right] \frac{\partial \boldsymbol{v}}{\partial y_m} \right\|_{L_q(\Omega)}
&\leq \|\nabla_{{\bk_{\bv}}}V^0_{lm}({\bk_{\bv}})\|_{L_\infty(\Omega\times(0,T))} \left\|\int_0^t \nabla^2 \boldsymbol{v}\,ds\right\|_{L_p(0,T;L_q(\Omega))} \|\nabla \boldsymbol{v}\|_{L_\infty(\Omega\times(0,T))} \\
&\leq
C(M)E(T).
}
The other terms in $A_{1\Delta}\nabla \boldsymbol{v}$ have a similar structure, therefore we get
$$
\|A_{1\Delta}\nabla \boldsymbol{v}\|_{L_\infty(0,T;L_q(\Omega))}\leq C(M)E(T).
$$
As $A_{2{\rm div}}({\bk_{\bv}})\nabla^2(\cdot)$ and $A_{1{\rm div}}({\bk_{\bv}})\nabla(\cdot)$ have a structure similar to $A_{2\Delta}\nabla^2(\cdot)$ and $A_{1\Delta}\nabla (\cdot)$, respectively, we conclude \eqref{est:06}. Finally, $\vartheta_k$ has the same regularity as $\boldsymbol{v}$, so we obtain \eqref{est:07} in the same way. The proof of Lemma \ref{l:est_lag} is complete.
\rightline{ $\square$}
\noindent
With these results at hand we can proceed with the proof of Proposition \ref{prop:est}.\\ {\bf Estimate of $f_1(U)$}. Since $f_1(U)$ is
exactly the same as in the two-species case, we obtain (see \cite{PSZ})
\begin{equation} \label{est:1}
\|f_1(U)\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T).
\end{equation}
{\bf Estimate of $f_2(U)$}. Let us start with $R_2(U)$ defined in \eqref{lag:7}.
By \eqref{est:01} we have
\begin{align*}
\|{\bf V}^0({\bk_{\bv}})\lr{\varrho_l-\frac{m_l\varrho_l\varrho}{\Sigma_\varrho}}\nabla \vartheta_{l-1}\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T).
\end{align*}
Applying Lemma \ref{l:est_lag} to the remaining terms we obtain
$$
\|{\bf R}_2(U)\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T).
$$
Next, by \eqref{est:02}
$$
\|\sigma\partial_t \boldsymbol{v}\|_{L_p(0,T;L_q(\Omega))} \leq \|\sigma\|_{L_\infty(\Omega \times (0,T))}
\|\partial_t\boldsymbol{v} \|_{L_p(0,T;L_q(\Omega))} \leq C(M)E(T),
$$
and similarly, using \eqref{est:02}-\eqref{est:05} we get
$$
\left\| \frac{\sigma}{\Sigma_\varrho}\nabla\eta,\sigma_l\nabla\vartheta_{l-1},\frac{m_l}{\Sigma_\varrho}(\varrho_l\sigma+\varrho^0\sigma_l)\nabla\vartheta_{l-1},\frac{\varrho^0}{\Sigma_\varrho}\nabla\varrho^0 \right\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T).
$$
In order to estimate the terms containing $\frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0}$ we write this difference as
$$
\frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} = \frac{\Sigma_\varrho^0-\Sigma_\varrho}{\Sigma_\varrho \Sigma_\varrho^0}.
$$
As the denominator is bounded from below by a positive constant,
using \eqref{est:04} we get
$$
\|\varrho^0\nabla\eta \left(\frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right)\|_{L_p(0,T;L_q(\Omega))} \leq C \sum_{k=1}^n\|\nabla \eta (\varrho_k-\varrho^0_k)\|_{L_p(0,T;L_q(\Omega))}\leq
$$$$
C \sum_{k=1}^n \left[ \int_0^T \|\varrho_k-\varrho^0_k\|_{L_\infty}^p\|\nabla \eta\|_{L_q}^p\,dt \right]^{1/p} \leq C\sum_{k=1}^n \|\varrho_k-\varrho^0_k\|_{L_\infty(H^1_q)}\|\nabla\eta\|_{L_p(0,T;L_q(\Omega))}
\leq C(M,L)E(T),
$$
and similarly
$$
\|m_l\varrho^0\varrho^0_{l}\left( \frac{1}{\Sigma_\varrho}-\frac{1}{\Sigma_\varrho^0} \right)\nabla\vartheta_{l-1}\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T).
$$
Collecting all above estimates we get
\begin{equation} \label{est:2}
\|{\bf f}_2(U)\|_{L_p(0,T;L_q(\Omega)^3)} \leq C(M,L)E(T).
\end{equation}
{\bf Estimate of $f_3(U)$}. First we estimate $R^k_3(U)$ given by \eqref{lag:8}-\eqref{lag:10}. For this purpose we show
\begin{lem}
We have
\begin{align}
\|{\cal{B}}_{kl}\|_{L_\infty(\Omega\times(0,T))}\leq C(M), \label{est:3_1} \\
\|\nabla {\cal{B}}_{kl}\|_{L_p(0,T;L_q(\Omega))} \leq C(M)E(T). \label{est:3_2}
\end{align}
\end{lem}
\emph{Proof.} \eqref{est:3_1} follows directly from \eqref{est:05} and the form of ${\cal{B}}_{kl}$ \eqref{lag:5b}.
To show \eqref{est:3_2} we need a bound on $\nabla D_{kl}$. For this purpose notice that, by \eqref{est:05},
$$
\|\nabla \varrho_k\|_{L_p(0,T;L_q(\Omega))}^p \leq C\left(\int_0^T\|(\nabla\varrho_k - \nabla \varrho^0_k)(t,\cdot)\|_{L_q}^p\, dt + \int_0^T \|\nabla \varrho^0_k\|_{L_q(\Omega)}^p\,dt\right) \leq [C(M,L)E(T)]^p.
$$
Therefore, under the assumption \eqref{nablaD} and using the fact that the fractional densities are bounded from below by a positive constant we obtain
\eqref{est:3_2}.
\rightline{ $\square$}
\noindent
From \eqref{est:07} and \eqref{est:3_1} we get
\begin{equation} \label{est:3_3}
\|{\cal{B}}_{kl}(A_{2\Delta}({\bk_{\bv}})\nabla^2\vartheta_l+A_{1\Delta}({\bk_{\bv}})\nabla \vartheta_l)\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T).
\end{equation}
Next, by \eqref{est:04} and \eqref{est:3_2},
$$
\|\nabla {\cal{B}}_{kl}\nabla \vartheta_l\|_{L_p(0,T;L_q(\Omega))} \leq \|\nabla\vartheta_l\|_{L_\infty(\Omega \times (0,T))}\|\nabla {\cal{B}}_{kl}\|_{L_p(0,T;L_q(\Omega))}\leq C(M,L)E(T).
$$
Therefore
\begin{align} \label{est:3_4}
&\|{\bf V}^0({\bk_{\bv}})\nabla {\cal{B}}_{kl}\left[\nabla\vartheta_l+\nabla\vartheta_l{\bf V}^0({\bk_{\bv}})\right]+(\nabla {\cal{B}}_{kl}){\bf V}^0({\bk_{\bv}})\nabla\vartheta_l \|_{L_p(0,T;L_q(\Omega))} \nonumber\\
&\leq C \|\nabla {\cal{B}}_{kl}\nabla \vartheta_l\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T).
\end{align}
Combining \eqref{est:3_3} and \eqref{est:3_4} we get
\begin{equation} \label{est:3_5}
\|R^{kl}_3(U)\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T).
\end{equation}
Finally, by \eqref{est:01} and \eqref{est:04},
\begin{align*}
\left\|\left(\varrho_{k+1}-\frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}\right)\sum_{j,m=1}^3V^0_{jm}({\bk_{\bv}})\frac{\partial v_j}{\partial y_m}\right\|_{L_p(0,T;L_q(\Omega))} & \leq C \sum_{j,m=1}^3\|V^0_{jm}({\bk_{\bv}})\|_{L_\infty(\Omega \times (0,T))}\left\|\frac{\partial v_j}{\partial y_m}\right\|_{L_p(0,T;L_q(\Omega))}\\ & \leq C(M)E(T),
\end{align*}
which together with \eqref{est:3_5} yields
\begin{equation}
\|R^k_3(U)\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T).
\end{equation}
The remaining terms in \eqref{lin1:5c} contain only components
of the type $\varrho_k \nabla \boldsymbol{v}, \varrho_k \partial_t \vartheta_l, \nabla \varrho_k \nabla \vartheta_l$ and $\varrho_k \nabla^2 \vartheta_l$, therefore we can estimate them in a similar way to ${\bf f}_2(U)$ using \eqref{est:02}-\eqref{est:06}, obtaining
\begin{equation} \label{est:3}
\|f^k_3(U)\|_{L_p(0,T;L_q(\Omega))} \leq C(M,L)E(T), \quad k=1,\ldots,n-1.
\end{equation}
\noindent
{\bf Estimate of $f^k_4(U)$}. This task is more delicate since we have to find a bound on
$\|f^k_4(U)\|_{H^{1/2}_p(\mathbb{R};L_q(\Omega))}$.
However, the structure of the boundary condition \eqref{bc:normal1}
is exactly the same as in the two-species case, therefore
we can repeat the estimate from \cite{PSZ}. For the sake
of completeness we recall the idea here.
First we have to extend $f^k_4(U)$ to the whole real line.
For this purpose we apply the extension operator \eqref{def:ext}.
Let us denote
$$
{\bf J}[\boldsymbol{v}](t)= \boldsymbol{n}(x){\bf V}^0({\bk_{\bv}}) \left\{\int^1_0(\nabla\boldsymbol{n})
(y + \tau\int^t_0\boldsymbol{v}(y, s)\,ds)\,d\tau
\int^t_0\boldsymbol{v}(y, s)\,ds\right\}.
$$
Then \eqref{lag:9} can be rewritten as
$$
R^k_4(U)= - {\bf J}[\boldsymbol{v}] \nabla\vartheta_k.
$$
Since ${\bf J}[\boldsymbol{v}](0)=0$, we can readily define
\begin{equation}
\tilde {\bf J}[u]=e_T({\bf J}[u]).
\end{equation}
Next, we also need to extend $\vartheta_k$. The difference is that it does not vanish at $t=0$; therefore we first extend the initial data to $\tilde \vartheta_k^0$ defined on $\mathbb{R}^3$ and define
\begin{equation} \label{Eh}
E\vartheta_k=e_T [\vartheta_k-{\mathcal T}(t)\tilde\vartheta_k^0] + {\mathcal T}(t)\tilde\vartheta_k^0,
\end{equation}
where ${\mathcal T}(t)$ is an exponentially decaying semigroup
(details can be found in Section 5 of \cite{PSZ}).
The norms of the extensions are equivalent to the corresponding norms on $(0,T)$, therefore it is enough to estimate
$\|E{\bf J}[u] \nabla(E\vartheta_l)\|_{H^{1/2}_p(\mathbb{R},L_q(\Omega))}$.
For this purpose we apply Lemma \ref{lem:5.1}.
As $\partial \Omega$ is uniformly $C^3$, we can extend the normal vector to $E{\bf n}$ defined on $\mathbb{R}^3$ s.t. $\|E{\bf n}\|_{H^2_\infty(\mathbb{R}^3)}\leq C(\Omega)$.
Then we obtain
\begin{equation} \label{est:10}
\|E{\bf J}[\boldsymbol{v}]\|_{L_\infty(0,T;H^1_q(\Omega))} \leq C(M)E(T)
\end{equation}
and, due to \eqref{ext:2},
\begin{equation} \label{est:11}
\|\partial_t E{\bf J}[\boldsymbol{v}]\|_{L_\infty(0,T;L_q(\Omega))}+\|\partial_t E{\bf J}[\boldsymbol{v}]\|_{L_p(0,T;H^1_q(\Omega))} \leq C\left[\|\boldsymbol{v}\|_{L_\infty(0,T;H^1_q(\Omega))}+\|\boldsymbol{v}\|_{L_p(0,T;H^2_q(\Omega))}\right]\leq C(M).
\end{equation}
In order to estimate $E\nabla \vartheta_k$ we apply Lemma \ref{lem:5.2} to obtain
\begin{equation} \label{est:12}
\|E\nabla \vartheta_k\|_{H^{1/2}_p(0,T;L_q(\Omega))}+\|E\nabla \vartheta_k\|_{L_p(0,T;H^1_q(\Omega))}\leq C(M,L).
\end{equation}
Applying Lemma \ref{lem:5.1} with $f=E{\bf J}[u]$ and $g=\nabla(E\vartheta_k)$
and using \eqref{est:10} - \eqref{est:12} we obtain
\begin{equation} \label{est:13}
\|R^k_4(U)\|_{L_p(\mathbb{R},H^1_q(\Omega)) \cap H_p^{1/2}(\mathbb{R},L_q(\Omega))} \leq E(T)C(M,L).
\end{equation}
Now, combining \eqref{est:1},\eqref{est:2},\eqref{est:3},\eqref{est:13} and \eqref{lin1:5d} we obtain \eqref{est:nonlin}, which completes the proof of Proposition \ref{prop:est}.
\subsection{Fixed point argument}
Theorem \ref{thm:lin2} allows us to define an operator
$(\sigma,\boldsymbol{v},\vartheta)={\mathcal S}(\bar \sigma,\bar{\boldsymbol{v}},\bar \vartheta)$ as a solution to system \eqref{lin1:sys} with the right
hand side $f_1(\bar U), {\bf f}_2(\bar U),f^k_3(\bar U),f^k_4(\bar U)$, where $\bar U=(\bar \sigma,\bar{\boldsymbol{v}},\bar \vartheta)$.
From Proposition \ref{prop:est} combined with Theorem \ref{thm:lin2} we easily verify that for any $M>0$
$$
{\mathcal S}:{\mathcal H}_{T,M} \to {\mathcal H}_{T,M}
$$
is well defined provided $T>0$ is sufficiently small. It remains to show that ${\mathcal S}$ is a contraction on ${\mathcal H}_{T,M}$.
For this purpose we show
\begin{prop} \label{prop:est_dif}
Let $\bar U_1=(\bar \sigma_1,\bar{\boldsymbol{v}}_1,\bar \vartheta_1),\bar U_2=(\bar \sigma_2,\bar{\boldsymbol{v}}_2,\bar \vartheta_2) \in {\mathcal H}_{T,M}$ for given $T,M>0$, where the initial conditions satisfy the assumptions of Theorem \ref{thm:main2}.
Let $f_1(U),f_2(U),f_3^k(U)$ and $f^k_4(U)$ be given by \eqref{lin1:5a}-\eqref{lin1:5d},
where $R_1(U),R_2(U),R_3^k(U)$ and $R^k_4(U)$ are defined in \eqref{lag:6},\eqref{lag:7},\eqref{lag:8}-\eqref{lag:10} and \eqref{lag:9}, respectively.
Then
\begin{align} \label{est:dif1}
&\|f_1(U_1)-f_1(U_2)\|_{L_p(0,T;W^1_q(\Omega))} + \|{\bf f}_2(U_1)-{\bf f}_2(U_2)\|_{L_p(0,T;L_q(\Omega)^3)}
+\|f_3^k(U_1)-f_3^k(U_2)\|_{L_p(0,T;L_q(\Omega))} +\nonumber\\
&\|f^k_4(U_1)-f^k_4(U_2)\|_{L_p(0,T,H^1_q(\Omega)^{n-1})}
+ \|f^k_4(U_1)-f^k_4(U_2)\|_{H^{1/2}_p(\mathbb{R},L_q(\Omega)^{n-1})} \leq E(L,M,T) [U_1-U_2]_T.
\end{align}
\end{prop}
\emph{Proof}.
The precise form of the terms on the left-hand side of \eqref{est:dif1} is rather
complicated; however, what is essential is that they contain only products of
$\bar{\boldsymbol{v}}_1-\bar{\boldsymbol{v}}_2$, $\bar \sigma_1-\bar \sigma_2$ or $\bar \vartheta_1-\bar \vartheta_2$ with quantities which are small
for small times. Therefore, following the lines of the proof of Proposition \ref{prop:est} we obtain \eqref{est:dif1}.
\rightline{ $\square$}
Now we can subtract the systems for $U_1$ and $U_2$ to obtain a linear problem for $U_1-U_2$ with the same structure
of the left-hand side as in \eqref{lag:sys}, zero initial and boundary conditions, and a right-hand side which is estimated in
\eqref{est:dif1}. Therefore, combining Proposition \ref{prop:est_dif} and Theorem \ref{thm:lin2} we obtain
\begin{equation}
[{\mathcal S}(U_1)-{\mathcal S}(U_2)]_T \leq E(T) [U_1 - U_2]_T,
\end{equation}
which implies that for any $M>0$, $\mathcal S$ is a contraction on ${\mathcal H}_{T,M}$ for sufficiently small $T$.
Therefore, application of the Banach fixed point theorem to $\mathcal S$ completes the proof of Theorem \ref{thm:main2}.
\rightline{ $\square$}
\section*{Appendix: Proof of Proposition \ref{thm:main0}}
\subsection*{Derivation of the normal form}
The proof of Proposition \ref{thm:main0} is split into a couple of steps.
First we derive the normal form of system \eqref{1.1}. By the change of unknowns \eqref{def:psi} we have
\begin{equation} \label{norm:1}
[\nabla \varrho,\nabla h_1, \ldots, \nabla h_{n-1}]^T = A [\nabla \varrho_1,\ldots, \nabla \varrho_n]^T
\end{equation}
with
\begin{equation} \label{norm:2}
A = \left( \begin{array}{cc}
1 & 1_{1\times (n-1)} \\[5pt]
\left(-\frac{1}{m_1\varrho_1}\right)_{(n-1) \times 1} & {\rm diag}\left(\frac{1}{m_2\varrho_2},\ldots,\frac{1}{m_n\varrho_n}\right) \\
\end{array}\right).
\end{equation}
The matrix $A$ is diagonal except for the first row and the first column, which also have a quite simple structure.
It is therefore easy to check that its inverse reads
\begin{equation} \label{norm:4}
A^{-1} = \left( \begin{array}{cc}
\frac{m_1\varrho_1}{\Sigma_\varrho} & \left[\left(-\frac{m_1\varrho_1 m_k \varrho_k}{\Sigma_\varrho}\right)_{k=2 \ldots n}\right]_{1 \times (n-1)}\\[5pt]
\left[\left(\frac{m_k\varrho_k}{\Sigma_\varrho}\right)_{k=2, \ldots, n}\right]_{(n-1)\times 1} & {\cal R}
\end{array} \right),
\end{equation}
where
\begin{equation} \label{def:sigma}
\Sigma_\varrho=\sum_{k=1}^n m_k \varrho_k
\end{equation}
and ${\cal R}$ is the $(n-1)\times(n-1)$ matrix given by
\begin{equation} \label{def:Rkl}
{\cal R}_{kl}=m_{k+1}\varrho_{k+1}\delta_{kl}-\frac{m_{k+1}m_{l+1}\varrho_{k+1}\varrho_{l+1}}{\Sigma_{\varrho}}, \quad k,l=1,\ldots, n-1.
\end{equation}
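As a quick sanity check of \eqref{norm:2}--\eqref{def:Rkl}, the following Python sketch (an illustration only; it plays no role in the argument) verifies numerically that the matrix in \eqref{norm:4} is indeed the inverse of $A$ for randomly chosen masses and partial densities.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n    = 5
m    = rng.uniform(1.0, 3.0, n)       # molar masses m_1, ..., m_n
rho  = rng.uniform(0.5, 2.0, n)       # partial densities rho_1, ..., rho_n
Sig  = np.sum(m * rho)                # Sigma_rho, cf. (def:sigma)

# A as in (norm:2)
A = np.zeros((n, n))
A[0, :]   = 1.0
A[1:, 0]  = -1.0 / (m[0] * rho[0])
A[1:, 1:] = np.diag(1.0 / (m[1:] * rho[1:]))

# A^{-1} as in (norm:4), with R as in (def:Rkl)
mr = m * rho
R  = np.diag(mr[1:]) - np.outer(mr[1:], mr[1:]) / Sig
Ainv = np.zeros((n, n))
Ainv[0, 0]   = mr[0] / Sig
Ainv[0, 1:]  = -mr[0] * mr[1:] / Sig
Ainv[1:, 0]  = mr[1:] / Sig
Ainv[1:, 1:] = R

print(np.max(np.abs(A @ Ainv - np.eye(n))))   # ~1e-16
\end{verbatim}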
Therefore, from \eqref{norm:1} we obtain
\begin{equation} \label{norm:3}
[\nabla \varrho_1,\ldots, \nabla \varrho_n]^T=A^{-1}[\nabla \varrho,\nabla h_1, \ldots, \nabla h_{n-1}]^T
\end{equation}
and, analogously, for the time derivative
\begin{equation} \label{norm:3a}
[\partial_t \varrho_1,\ldots, \partial_t \varrho_n]^T=A^{-1}[\partial_t \varrho,\partial_t h_1, \ldots, \partial_t h_{n-1}]^T.
\end{equation}
From \eqref{norm:3}, \eqref{norm:3a}, and \eqref{norm:4} we infer
\begin{equation} \label{norm:4a}
\partial_t \varrho_{k+1} + \boldsymbol{u} \cdot \nabla \varrho_{k+1} = \frac{m_{k+1}\varrho_{k+1}}{\Sigma_\varrho}(\partial_t \varrho+\boldsymbol{u} \cdot \nabla \varrho)
+\sum_{l=1}^{n-1}{\cal R}_{kl}(\partial_t h_l+\boldsymbol{u} \cdot \nabla h_l), \quad k=1,\ldots,n-1.
\end{equation}
However, from \eqref{1.1} we have
$$
\partial_t \varrho + \boldsymbol{u} \cdot \nabla \varrho = -\varrho \operatorname{div} \boldsymbol{u}
$$
as well as
$$
\partial_t \varrho_k + \boldsymbol{u} \cdot \nabla \varrho_k = -\varrho_k \operatorname{div} \boldsymbol{u} - \operatorname{div} \boldsymbol{F}_k.
$$
Inserting these relations into \eqref{norm:4a} we obtain
\begin{equation} \label{norm:5}
\sum_{l=1}^{n-1} {\cal R}_{kl}(\partial_t h_l+\boldsymbol{u} \cdot \nabla h_l)+\left(\varrho_{k+1} - \frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}\right)\operatorname{div} \boldsymbol{u} = -\operatorname{div} \boldsymbol{F}_{k+1}.
\end{equation}
We can further rewrite the right-hand side of the above equations. For this purpose we observe that
$$
-\frac{\nabla p_1}{\varrho}\left( \frac{1}{\varrho_1} \sum_{l=2}^n \varrho_l C_{kl} + \frac{\varrho_1}{\varrho_1}C_{k1}\right)=
-\frac{\nabla p_1}{\varrho_1}\sum_{l=1}^n Y_l C_{kl}=0
$$
due to \eqref{prop_C}.
Therefore, denoting
\begin{equation} \label{def:barm}
\bar m = \frac{\varrho}{p}
\end{equation}
we obtain from \eqref{eq:diff1}
\eq{ \label{norm:6}
-\boldsymbol{F}_k &= \frac{1}{p}\sum_{l=1}^n C_{kl} \nabla p_l\\
&=
\frac{\bar m}{\varrho}\left[\sum_{l=1}^n C_{kl}\nabla p_l - \nabla p_1\left(\frac{1}{\varrho_1}\sum_{l=2}^n C_{kl}\varrho_l + C_{k1}\right)\right]\\
&=\frac{\bar m}{\varrho}\sum_{l=2}^n C_{kl} \left(\nabla p_l-\frac{\varrho_l}{m_1}\frac{\nabla \varrho_1}{\varrho_1}\right)\\
&=\frac{\bar m}{\varrho}\sum_{l=2}^n \varrho_l C_{kl}\left( \frac{\nabla \varrho_l}{m_l \varrho_l} - \frac{\nabla \varrho_1}{m_1 \varrho_1}\right)\\
&=\frac{\bar m}{\varrho}\sum_{l=2}^n\varrho_k\varrho_l D_{kl}\nabla h_{l-1}.
}
Now let us transform the pressure term. From \eqref{norm:3} we have
\eq{ \label{norm:7}
\nabla p &= \sum_{k=1}^n \frac{\nabla \varrho_k}{m_k}\\
&=\frac{1}{m_1}\left( \frac{m_1\varrho_1}{\Sigma_\varrho}\nabla \varrho -\sum_{k=2}^n \frac{m_1\varrho_1m_k\varrho_k}{\Sigma_\varrho}\nabla h_{k-1}\right)\\
&\quad +\sum_{l=2}^n\frac{1}{m_l}\left( \frac{m_l\varrho_l}{\Sigma_\varrho}\nabla \varrho + m_l\left(\varrho_l-\frac{m_l\varrho_l^2}{\Sigma_\varrho}\right)\nabla h_{l-1}-\sum_{\substack{k>1\\k\neq l}} \frac{m_l\varrho_lm_k\varrho_k}{\Sigma_\varrho}\nabla h_{k-1}\right)\\
& =\frac{\varrho}{\Sigma_\varrho}\nabla \varrho + \sum_{k=1}^{n-1}A_k\nabla h_k,
}
where we denoted
\begin{equation} \label{norm:8}
A_k = \varrho_{k+1}-\frac{1}{\Sigma_\varrho}\left[m_{k+1}\varrho_{k+1}^2+m_{k+1}\varrho_{k+1}\sum_{l\neq k+1}\varrho_l\right]=\varrho_{k+1}-\frac{m_{k+1}\varrho_{k+1}\varrho}{\Sigma_\varrho}.
\end{equation}
From \eqref{norm:5}-\eqref{norm:8}
we obtain the explicit form of the symmetrized system \eqref{sys:normal}.
Now we have to rewrite the boundary conditions \eqref{bc} for the symmetrized system \eqref{sys:normal}.
First note that, with the equation for $\varrho_1$ omitted, the system \eqref{sys:normal} needs to be supplemented only with boundary conditions for the last $n-1$ species densities; due to \eqref{norm:6} we get
\begin{equation} \label{bc:normalD}
\boldsymbol{u}=0, \quad \frac{\bar m}{\varrho}\sum_{l=2}^n \varrho_k \varrho_l D_{kl}\nabla h_{l-1} \cdot \boldsymbol{n} = 0, \quad k=2,\ldots,n, \quad\mbox{on}\ (0,T)\times\partial\Omega
\end{equation}
which is exactly \eqref{bc:normal}; these are natural boundary conditions in view of the second-order term in \eqref{sys:normal}$_3$.
\subsection*{Coercivity properties}
Recall that Lemma \ref{lem:1} gives a positive lower bound on the fractional densities. We are now ready to prove the coercivity of ${\cal R}$.
Below, $\xi=(\xi_1,\ldots, \xi_{n-1})$ is a vector of complex numbers,
$\overline{\xi}=(\overline{\xi_1},\ldots, \overline{\xi_{n-1}})$ is the vector of their complex conjugates,
and $\langle\cdot,\cdot\rangle$ is the scalar product in $\mathbb{C}^{n-1}$.
\begin{lem} \label{l:R} Let assumptions of Lemma \ref{lem:1} be satisfied.
Then there exists a constant $C_1>0$ independent of $(x,t)$ such that
\begin{equation} \label{coerc:R}
\langle{\cal{R}}(x,t)\xi,\overline{\xi}\rangle \geq C_1|\xi|^2.
\end{equation}
\end{lem}
\emph{Proof.}
Notice first that ${\cal R}_{kk}>0$ for every $k=1,\ldots, n-1$. We rewrite ${\cal R}_{kk}$ as
$$
{\cal R}_{kk}=\frac{1}{\Sigma_{\varrho}}m_{k+1}\varrho_{k+1}(\Sigma_{\varrho}-m_{k+1}\varrho_{k+1})=
\frac{1}{\Sigma_{\varrho}}m_{k+1}\varrho_{k+1}\sum_{l=1,\ l\neq k+1}^{n}m_l\varrho_l.
$$
Then, due to the symmetry of ${\cal R}$, we have
\eq{
\langle{\cal R}\xi,\overline{\xi}\rangle&=\sum_{k=1}^{n-1}{\cal R}_{kk}|\xi_k|^2
+ \sum_{l=1}^{n-1}\sum_{k<l}{\cal R}_{kl}(\xi_k\overline{\xi_l}+\xi_l\overline{\xi_k})\\
&\geq
\sum_{k=1}^{n-1}{\cal R}_{kk}|\xi_k|^2- \sum_{l=1}^{n-1}\sum_{k<l}|{\cal R}_{kl}|(|\xi_k|^2+|\xi_l|^2)\\
&=
\frac{m_1\varrho_1}{\Sigma_{\varrho}}\sum_{k=1}^{n-1}m_{k+1}\varrho_{k+1}|\xi_k|^2\\
&
\geq \frac{m_1\varrho_1}{\Sigma_{\varrho}}{\rm min}_{k\neq 1}\{m_k\varrho_k\}|\xi|^2,
}
which proves \eqref{coerc:R}.
\rightline{ $\square$}
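The constant obtained in the proof can also be checked numerically. The following Python sketch (an illustration only, with randomly chosen masses and densities) confirms that the smallest eigenvalue of ${\cal R}$ dominates $\frac{m_1\varrho_1}{\Sigma_\varrho}\min_{k\neq 1}\{m_k\varrho_k\}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n   = 6
m   = rng.uniform(1.0, 3.0, n)
rho = rng.uniform(0.5, 2.0, n)
Sig = np.sum(m * rho)

mr = m * rho
R  = np.diag(mr[1:]) - np.outer(mr[1:], mr[1:]) / Sig      # (def:Rkl)

lam_min = np.linalg.eigvalsh(R).min()
bound   = mr[0] * mr[1:].min() / Sig                       # constant from the proof
print(lam_min >= bound - 1e-12, lam_min, bound)
\end{verbatim}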
Although \eqref{prop_D} implies only the positive semi-definiteness $D \geq 0$, the change of unknowns introduced in the previous section, and the resulting reduction by one row and one column, enables us to deduce ellipticity of the resulting matrix from the properties of $D$. The next lemma shows the coercivity of ${\cal{B}}$.
\begin{lem} \label{l:B} Assume that one of Conditions 1, 2 from Proposition \ref{thm:main0} holds. Then there exists a constant $C_2>0$ independent of $(x,t)$ such that
\begin{equation} \label{coerc:B}
\langle {\cal{B}}(x,t)\xi,\overline{\xi}\rangle \geq C_2|\xi|^2 \quad \forall \; (x,t) \in \Omega \times [0,T].
\end{equation}
\end{lem}
\noindent \emph{Proof.}
It is convenient to rewrite the entries of ${\cal{B}}$ as
\begin{equation}
{\cal{B}}_{kl}=\frac{\varrho}{p} Y_{k+1}Y_{l+1}\frac{C_{k+1,l+1}}{Y_{k+1}}=\frac{\varrho}{p} Y_{l+1} C_{k+1,l+1}.
\end{equation}
Under Condition 1 we therefore have
\begin{equation}
{\cal{B}}=\frac{\varrho}{p} \left(
\begin{array}{cccc}
Y_2Z_2 & -Y_2Y_3 & \ldots & - Y_2Y_n\\
-Y_3Y_2 & Y_3Z_3 & \ldots & - Y_3Y_n\\
\ldots & & &\\
-Y_nY_2 & \ldots & & Y_nZ_n
\end{array}
\right).
\end{equation}
In order to compute ${\rm det} \,{\cal{B}}$ we transform the matrix with elementary operations.
First we add the first $n-2$ rows to the last one. Denoting the new matrix by ${\cal{B}}^1$ we have
$$
{\cal{B}}^1_{nn}=Y_nZ_n-Y_n \sum_{j=2}^{n-1}Y_j = Y_nY_1
$$
and for $k<n$ we have
$$
{\cal{B}}^1_{nk}=-Y_nY_k+Y_kZ_k-Y_k\sum_{j \neq k,j\geq2}Y_j=Y_kY_1,
$$
therefore
\begin{equation}
{\cal{B}}^1=\frac{\varrho}{p} \left(
\begin{array}{cccc}
Y_2Z_2 & -Y_2Y_3 & \ldots & - Y_2Y_n\\
-Y_3Y_2 & Y_3Z_3 & \ldots & - Y_3Y_n\\
\ldots & & &\\
Y_1Y_2 & Y_1Y_3 & \ldots & Y_1Y_n
\end{array}
\right).
\end{equation}
Notice that all entries of the last column contain $Y_n$ and all entries of the last row contain $Y_1$,
therefore
\begin{equation} \label{B:2}
{\rm det} \,{\cal{B}} = \left(\frac{\varrho}{p}\right)^{n-1} Y_1Y_n
{\rm det}\underbrace{\left(
\begin{array}{cccc}
Y_2Z_2 & -Y_2Y_3 & \ldots & - Y_2\\
-Y_2Y_3 & Y_3Z_3 & \ldots & - Y_3\\
\ldots & & &\\
Y_2 & Y_3 & \ldots & 1
\end{array}
\right)}_{{\cal{B}}^2}.
\end{equation}
Now we can easily diagonalize part of the above matrix. For this purpose
we add to the $k$-th row, $k=1,\ldots, n-2$, the last row multiplied by $Y_{k+1}$.
Then all the entries except the diagonal become zero. Namely, we have
$$
{\cal{B}}^2_{k,\cdot} + Y_{k+1}{\cal{B}}^2_{n-1,\cdot} = Y_{k+1}\sum_{j=1}^n Y_j\, {\bf e}_k.
$$
Therefore \eqref{B:2} yields
\begin{equation}
{\rm det}\, {\cal{B}} = \left(\frac{\varrho}{p}\right)^{n-1}\prod_{k=1}^n Y_k \left(\sum_{k=1}^n Y_k \right)^{n-1} \geq C >0,
\end{equation}
since $Y_k(x,t)>C$ for every $k=1,\ldots,n$ uniformly w.r.t. $(x,t)$, due to \eqref{rhoidown}.
Next, denoting
\begin{equation} \label{minors}
{\rm det} \,{\cal{B}}_k = \left| \begin{array}{ccc}
{\cal{B}}_{11} & \ldots & {\cal{B}}_{1k} \\
\vdots&\ddots&\vdots\\
{\cal{B}}_{k1}&\ldots & {\cal{B}}_{kk}
\end{array} \right|
\end{equation}
we have ${\rm det}\,{\cal{B}}_{k}>0$. Therefore, all the leading principal minors of the matrix ${\cal{B}}$ are positive, and hence we have shown
\begin{equation} \label{detB:pos}
{\cal{B}}(x,t)>0, \quad {\rm det}{\cal{B}}(x,t) \geq C>0 \quad \textrm{uniformly in} \; (x,t).
\end{equation}
Now from \eqref{detB:pos} it is easy to deduce \eqref{coerc:B}. For this purpose note that the eigenvectors $\zeta_i(x,t)$ of ${\cal{B}}(x,t)$ form an orthonormal basis of $\mathbb{R}^{n-1}$ and ${\cal{B}}(x,t)$ in this basis takes the form
\begin{equation} \label{B:diag}
{\cal{B}}(x,t)={\rm diag}(\lambda_1(x,t),\ldots,\lambda_{n-1}(x,t)),\quad \lambda_i(x,t)\geq C>0 \quad \textrm{uniformly in} \;(x,t).
\end{equation}
Therefore, writing $\xi=\sum_{i=1}^{n-1} \alpha_i\zeta_i$ we
have
$$
\langle {\cal B}(x,t)\xi,\overline{\xi}\rangle=\sum_{i=1}^{n-1}\lambda_i(x,t)|\alpha_i|^2 \geq {\rm min}_i \{\lambda_i(x,t)\}\sum_{i=1}^{n-1} |\alpha_i|^2 \geq C |\xi|^2 \quad \textrm{uniformly in} \; (x,t).
$$
Now let us consider a general form of $D$ satisfying the assumptions \eqref{prop_D}. In this case we use the form of ${\cal{B}}$ given in \eqref{lag:5b}. In particular, each entry of the $k$-th row of ${\cal{B}}$ contains $Y_{k+1}$, therefore
\begin{equation}
{\rm det} \,{\cal{B}} = \lr{\frac{\varrho^2}{p}}^{n-1}Y_2 \ldots Y_n \left|
\begin{array}{cccc}
Y_2D_{22} & Y_3D_{23} & \ldots & Y_n D_{2n}\\
Y_2D_{32} & Y_3D_{33} & \ldots & Y_n D_{3n}\\
\ldots & & &\\
Y_2D_{n2} & Y_3D_{n3} & \ldots & Y_n D_{nn}
\end{array}
\right|
\end{equation}
Similarly, since each entry of $k$-th column contains $Y_{k+1}$, we have
\begin{equation}
{\rm det} \,{\cal{B}} = \left(\frac{\varrho^2}{p}\right)^{n-1} (Y_2 \ldots Y_n)^2 \,
{\rm det} \underbrace{ \left(
\begin{array}{cccc}
D_{22} & D_{23} & \ldots & D_{2n}\\
D_{32} & D_{33} & \ldots & D_{3n}\\
\ldots & & &\\
D_{n2} & D_{n3} & \ldots & D_{nn}
\end{array}
\right)}_{:=\bar D}
\end{equation}
Due to \eqref{rhoidown} we have $Y_2 \ldots Y_n \geq C>0$, so the whole coefficient in front of the matrix $\bar D$ is positive. Notice, however, that in general we only have $D \geq 0$, but $\bar D$ is an $(n-1)\times(n-1)$ sub-matrix of $D$
for which we can show positive definiteness. Assume on the contrary that there is a vector $ [v_2,\ldots,v_n]\neq 0$ such that
\begin{equation*}
\bar D [v_2,\ldots,v_n] = 0.
\end{equation*}
Then, since $D\geq 0$, one would also have
$$
\langle D[0,v_2,\ldots,v_n],[0,v_2,\ldots,v_n]\rangle=\langle\bar D[v_2,\ldots,v_n],[v_2,\ldots,v_n]\rangle=0,
\quad\mbox{and hence}\quad D[0,v_2,\ldots,v_n]=0,
$$
which is in contradiction with the fact that ${\rm Ker}\,D = {\rm lin}\{\vec{Y}\}$ and all $Y_k$ are strictly positive.
Similarly we show that the minors \eqref{minors} are positive, hence we conclude that
\begin{equation} \label{Dpos}
\bar D(x,t)>0.
\end{equation}
Now, since for each fixed $(x,t)$ the matrix $\bar D(x,t)$ is positive definite on a finite-dimensional space, we have
\begin{equation} \label{coerc:2}
\forall (x,t) \in \Omega \; \exists c(x,t)>0 \; s.t. \; \langle\bar D(x,t)\xi,\bar \xi\rangle \, \geq \, c(x,t)|\xi|^2,
\end{equation}
where
$$
c(x,t)={\rm min}_{|\xi|=1}\langle\bar D(x,t)\xi,\bar \xi\rangle.
$$
Finally, if Condition 2 is satisfied, the function $c(x,t)>0$ is defined on the compact
set $\overline{\Omega}\times [0,T]$ and depends continuously on $(x,t)$, hence
$$
\exists \kappa>0: \;c(x,t) \geq \kappa \quad \forall \; (x,t) \in \overline{\Omega} \times [0,T],
$$
which completes the proof.
\rightline{ $\square$}
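To illustrate Lemma \ref{l:B} under Condition 1, the following Python sketch (an illustration only) checks the determinant formula and the positive definiteness of ${\cal B}$ for random data. We assume here, consistently with the row operations above, that $Z_k=\sum_{j\neq k}Y_j$ and that the mass fractions satisfy $\sum_{k=1}^n Y_k=1$; these assumptions are not restated in the proof and are made explicit only for this check.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
n   = 6
Y   = rng.uniform(0.05, 1.0, n); Y /= Y.sum()    # mass fractions, sum_k Y_k = 1
Z   = 1.0 - Y                                    # assumption: Z_k = sum_{j != k} Y_j
rho_over_p = 0.7                                 # any positive value of varrho/p

# the Condition 1 matrix displayed in the proof (rows/columns indexed by 2,...,n)
B = -np.outer(Y[1:], Y[1:])
np.fill_diagonal(B, Y[1:] * Z[1:])
B *= rho_over_p

detB   = np.linalg.det(B)
target = rho_over_p**(n - 1) * np.prod(Y)        # (varrho/p)^{n-1} prod_k Y_k
print(np.isclose(detB, target), np.linalg.eigvalsh(B).min() > 0)   # True True
\end{verbatim}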
\begin{rmk}
The method which we applied for the special structure \eqref{Cform} can to some extent be
repeated for a general matrix using the fact that ${\rm Ker}\, D={\rm lin}\{\vec{Y}\}$. However,
in the last step we do not obtain a diagonal sub-matrix but just a matrix with modified entries.
For this matrix, coercivity could probably be shown under some additional assumptions on $D$, also for an unbounded domain; we leave this direction for future investigation.
\end{rmk}
\end{document} |
\begin{document}
\title{Oriented Gain Graphs, Line Graphs and Eigenvalues}
\author{Nathan Reff}
\address{Department of Mathematics\\
The College at Brockport: State University of New York\\
Brockport, NY 14420, USA}
\email{[email protected]}
\keywords{Orientation on gain graphs, gain graph, complex unit gain graph, line graph, oriented gain graph, voltage graph, line graph eigenvalues}
\subjclass[2010]{Primary 05C22, Secondary 05C50, 05C76, 05C25}
\begin{abstract}
A theory of orientation on gain graphs (voltage graphs) is developed to generalize the notion of orientation on graphs and signed graphs. Using this orientation scheme, the line graph of a gain graph is studied. For a particular family of gain graphs with complex units, matrix properties are established. As with graphs and signed graphs, there is a relationship between the incidence matrix of a complex unit gain graph and the adjacency matrix of the line graph.
\end{abstract}
\date{\today}
\maketitle
\section{Introduction}
The study of acyclic orientations of a graph enjoys a rich history of remarkable results, including Greene's bijection between acyclic orientations and regions of an associated hyperplane arrangement \cite{zbMATH03604927}, as well as Stanley's theorem on the number of acyclic orientations \cite{MR0317988}. In \cite{MR1120422}, Zaslavsky develops a more general theory of orientation on {\it signed graphs} (graphs with edges labeled either $+1$ or $-1$), and shows that regions defined by a signed graphic hyperplane arrangement correspond with acyclic orientations of particular signed graphs. Zaslavsky's approach is intimately connected with the theory of oriented matroids \cite{MR1744046}.
A {\it biased graph} is a graph with a list of distinguished cycles, such that if two cycles in the list are contained in a theta graph, then the third cycle of the theta graph must also be in the list \cite{MR1007712}. Recently, Slilaty has developed an orientation scheme for certain biased graphs and their matroids \cite{MR2701091}, answering a general orientation question posed by Zaslavsky \cite{MR2017726}.
A \emph{gain graph} is a graph with the additional structure that each orientation of an edge is given a group element, called a \emph{gain}, which is the inverse of the group element assigned to the opposite orientation. More recently, Slilaty also worked on real gain graphs and their orientations \cite{DANSGGHA}, generalizing graphic hyperplane arrangements.
In this paper, we define a notion of orientation for gain graphs over an arbitrary group. The orientation provides two immediate applications: first, a natural method for studying line graphs of gain graphs; and second, a well-defined incidence matrix. In particular, gain graphs with complex unit gains (also called {\it complex unit gain graphs}) have particularly nice matrix and eigenvalue properties, which were initially investigated in \cite{MR2900705}.
The approach here fits the original orientation methods of Zaslavsky on signed graphs, and opens possibilities for future research. A major question left to answer is what an acyclic orientation is under this setup. This could lead to further connections with hyperplane arrangements and matroids. Additional questions for future projects are also posed throughout the paper.
The reader may also be interested in other recent independent investigations of complex unit gain graphs and specializations that appear in the literature. These include {\it weighted directed graphs} \cite{MR2859913, MR2928567, MR2929181, MR3045222, MR3191876}, which consider gains from the fourth roots of unity instead of the entire unit circle, and so called {\it Hermitian graphs} used to study universal state transfer \cite{MR3217403}. A study of the characteristic polynomial for gain graphs has also been conducted in \cite{MR2890908}.
The paper is organized as follows. In Section \ref{Sec2}, a background on the theory of gain graphs is provided. In Section \ref{SECTIONOrientedGainGraphDEF}, we develop oriented gain graphs. The construction introduced borrows from Edmonds and Johnson's definition of a bidirected graph \cite{MR0267898} and Zaslavsky's more general oriented signed graphs \cite{MR1120422}. Using this construction the line graph of an oriented gain graph is defined and studied in Section \ref{LineGraphsofAbelianGainGraphs}. If the gain group is abelian, the line graph of an oriented gain graph is used to define the line graph of a gain graph. This generalizes Zaslavsky's definition of the line graph of a signed graph \cite{MITTOSSG}.
Finally, in Section \ref{SECTIONMatricesofORIENTEDGAINGRAPHS} we discuss several matrices associated to these various graphs and line graphs. For complex unit gain graphs, we generalize the classical relationship known for graphs and signed graphs between the incidence matrix of a graph and the adjacency matrix of the line graph. This is used to study the adjacency eigenvalues of the line graph.
\section{Background}\label{Sec2}
The set of oriented edges, denoted by $\vec{E}(\Gamma)$, contains two copies of each edge with opposite directions. An oriented edge from $v_i$ to $v_j$ is denoted by $e_{ij}$. Formally, a \emph{gain graph} is a triple $\Phi=(\Gamma,\mathfrak{G},\phi)$ consisting of an \emph{underlying graph} $\Gamma=(V,E)$, the \emph{gain group} $\mathfrak{G}$ and a function $\phi :\vec{E}(\Gamma) \rightarrow \mathfrak{G}$ (called the \emph{gain function}), such that $\phi(e_{ij})=\phi(e_{ji})^{-1}$. For brevity, we write $\Phi=(\Gamma,\phi)$ for a gain graph if the gain group is clear, and call $\Phi$ a $\mathfrak{G}$-gain graph.
The \emph{circle group} is $\mathbb{T}=\{ z\in\mathbb{C} : |z| =1\}\leq \mathbb{C}^{\times}$. One particular family of gain graphs that we will be interested in are $\mathbb{T}$-gain graphs (or \emph{complex unit gain graphs}). The center of a group $\mathfrak{G}$ is denoted by $Z(\mathfrak{G})$.
We will always assume that $\Gamma$ is simple. The set of vertices is $V:=\{v_1,v_2,\ldots,v_n\}$. Edges in $E$ are denoted by $e_{ij}=v_i v_j$. Even though this is the same notation for an oriented edge from $v_i$ to $v_j$ it will always be clear whether an edge or oriented edge is being used. We define $n:=|V|$ and $m:=|E|$.
A \emph{switching function} is any function $\zeta:V\rightarrow \mathfrak{G}$. Switching the $\mathfrak{G}$-gain graph $\Phi=(\Gamma,\phi)$ means replacing $\phi$ by $\phi^{\zeta}$, defined by: $\phi^{\zeta}(e_{ij})=\zeta(v_i)^{-1} \phi(e_{ij}) \zeta(v_j)$; producing the $\mathfrak{G}$-gain graph $\Phi^{\zeta}=(\Gamma,\phi^{\zeta})$. We say $\Phi_1$ and $\Phi_2$ are \emph{switching equivalent}, written $\Phi_1 \sim \Phi_2$, when there exists a switching function $\zeta$, such that $\Phi_2=\Phi_1^{\zeta}$. Switching equivalence forms an equivalence relation on gain functions for a fixed underlying graph. An equivalence class under this equivalence relation is called a \emph{switching class} of $\phi$, and is denoted by $[\Phi]$.
The gain of a walk $W=v_1e_{12}v_2e_{23}v_3\cdots v_{k-1}e_{k-1,k}v_k$ is $\phi(W)=\phi(e_{12})\phi(e_{23})\cdots\phi(e_{k-1,k})$.
Switching conjugates the gain of a closed walk in a gain graph.
\begin{prop}\label{conjugateWalk} Let $W=v_1e_{12}v_2e_{23}v_3\cdots v_{k}e_{k1}v_1$ be a closed walk in a $\mathfrak{G}$-gain graph $\Phi$. Let $\zeta:V\rightarrow \mathfrak{G}$. Then $\phi^{\zeta}(W)=\zeta(v_1)^{-1} \phi(W)\zeta(v_1)$.
\end{prop}
\begin{proof} The following calculation verifies the result:
\begin{align*}
\phi^{\zeta}(W) &= \phi^{\zeta}(e_{12})\phi^{\zeta}(e_{23})\cdots\phi^{\zeta}(e_{k1})\\
&=\zeta(v_1)^{-1}\phi(e_{12})\zeta(v_2)\zeta(v_2)^{-1}\phi(e_{23})\cdots\zeta(v_k)^{-1}\phi(e_{k1})\zeta(v_1)\\
&= \zeta(v_1)^{-1}\phi(W)\zeta(v_1).\qedhere
\end{align*}
\end{proof}
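As a small computational illustration of Proposition \ref{conjugateWalk} (not used elsewhere), the following Python sketch takes $\mathfrak{G}=GL_2(\mathbb{R})$, so that the conjugation is nontrivial, and checks the identity on a closed walk around a triangle; the random matrices and the specific graph are illustrative choices only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
rand_invertible = lambda: rng.normal(size=(2, 2)) + np.eye(2) * 3  # generic invertible

# a triangle on vertices 1, 2, 3 with gains in GL_2(R); phi(e_ji) = phi(e_ij)^{-1}
phi = {}
for (i, j) in [(1, 2), (2, 3), (3, 1)]:
    g = rand_invertible()
    phi[(i, j)] = g
    phi[(j, i)] = np.linalg.inv(g)

zeta  = {v: rand_invertible() for v in (1, 2, 3)}                  # switching function
phi_z = {(i, j): np.linalg.inv(zeta[i]) @ phi[(i, j)] @ zeta[j] for (i, j) in phi}

gain = lambda f, walk: np.linalg.multi_dot([f[e] for e in walk])
W   = [(1, 2), (2, 3), (3, 1)]                  # closed walk based at v_1
lhs = gain(phi_z, W)
rhs = np.linalg.inv(zeta[1]) @ gain(phi, W) @ zeta[1]
print(np.allclose(lhs, rhs))                    # True, as in Proposition (conjugateWalk)
\end{verbatim}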
If $C$ is a cycle in the underlying graph $\Gamma$, then the gain of $C$ in some fixed direction starting from a base vertex $v$ is denoted by $\phi(\vec{C}_v)$. Thus, $\vec{C}_v$ is called a \emph{directed cycle with base vertex $v$}. The \emph{fundamental cycle} associated with edge $e$, denoted $C_T(e)$, is the cycle obtained by adding $e$ to a maximal forest of $\Gamma$. For abelian $\mathfrak{G}$ we can state a sufficient condition to guarantee two $\mathfrak{G}$-gain graphs are switching equivalent. The proof here is an adaptation of a well known construction on signed graphs, see for example \cite[Theorem II.A.4]{Math581Notes}.
\begin{lem}\label{SwitchingFExistence} Let $\mathfrak{G}$ be abelian. Let $\Phi_1$ and $\Phi_2$ be $\mathfrak{G}$-gain graphs with the same underlying graph $\Gamma$. If for every cycle $C$ in $\Gamma$ there exists a directed cycle with base vertex $v$ such that $\phi_1(\vec{C}_v)=\phi_2(\vec{C}_v)$, then there exists a switching function $\zeta$ such that $\Phi_2=\Phi_1^{\zeta}$.
\end{lem}
\begin{proof}
Switching individual components is independent, so we may assume that $\Gamma$ is connected.
Pick a spanning tree $T$ of $\Gamma$, and label the vertices so that for all $i\in \{2,\ldots, n\}$, $v_i$ is always adjacent to a vertex in $\{v_1,\ldots,v_{i-1}\}$.
For all $i\in\{2,\ldots,n\}$, let $e_{ij}$ be the unique oriented edge in $\vec{E}(T)$ from $v_i$ to some $v_j \in \{v_1,\ldots,v_{i-1}\}$. We define $\zeta:V\rightarrow \mathfrak{G}$ recursively by
\begin{equation*}
\zeta(v_i)=
\begin{cases} 1_{\mathfrak{G}} & \text{if }i=1,
\\
\phi_1(e_{ij})\zeta(v_j)\phi_2(e_{ij})^{-1} &\text{otherwise.}
\end{cases}
\end{equation*}
Therefore, for all $e_{ij}\in \vec{E}(T)$, $\phi_2(e_{ij})=\phi_1^{\zeta}(e_{ij})$. Now we need to verify that $\phi_2$ and $\phi_1^{\zeta}$ agree on the oriented edges of $\Gamma$ that are not part of $\vec{E}(T)$.
Let $f\in E(\Gamma)\backslash E(T)$. By hypothesis, applied to the fundamental cycle $C_T(f)$, there is some base vertex $v$ such that $\phi_1(\overrightarrow{C_T(f)}_v)=\phi_2(\overrightarrow{C_T(f)}_v)$. Furthermore, by Proposition \ref{conjugateWalk}, $\phi_1^{\zeta}(\overrightarrow{C_T(f)}_v)=\phi_1(\overrightarrow{C_T(f)}_v)$ since $\mathfrak{G}$ is abelian. Hence, $\phi_2(\overrightarrow{C_T(f)}_v)=\phi_1^{\zeta}(\overrightarrow{C_T(f)}_v)$.
Without loss of generality, suppose $\overrightarrow{C_T(f)}_v=e_{12}e_{23}\cdots e_{k-1,k}e_{k1}$ and $f=e_{k1}$. Since $\phi_2(\overrightarrow{C_T(f)}_v)=\phi_1^{\zeta}(\overrightarrow{C_T(f)}_v)$ we can write
\[ \phi_2(e_{12})\phi_2(e_{23})\cdots \phi_2(e_{k-1,k})\phi_2(e_{k1})= \phi_1^{\zeta}(e_{12})\phi_1^{\zeta}(e_{23})\cdots \phi_1^{\zeta}(e_{k-1,k})\phi_1^{\zeta}(e_{k1}).\]
Also, since $\phi_2(e_{ij})=\phi_1^{\zeta}(e_{ij})$ for every $e_{ij}\in \vec{E}(T)$, by taking inverses we can simplify the product equality to $\phi_2(e_{k1})=
\phi_1^{\zeta}(e_{k1})$. Hence, $\phi_2$ and $\phi_1^{\zeta}$ agree on all oriented edges.
\end{proof}
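The proof above is constructive. The following Python sketch (an illustration only, with $\mathfrak{G}=\mathbb{T}$) carries out the spanning-tree recursion on a small graph: $\Phi_2$ is produced from $\Phi_1$ by a hidden switching, so the hypothesis of the lemma holds automatically, and the reconstructed $\zeta$ indeed satisfies $\Phi_2=\Phi_1^{\zeta}$. The particular graph and spanning tree are arbitrary choices made for the example.
\begin{verbatim}
import cmath, random

random.seed(0)
unit = lambda: cmath.exp(2j * cmath.pi * random.random())   # random element of T

V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]                 # a 4-cycle plus a chord

def make_gain(edges, assign):
    phi = {}
    for (i, j) in edges:
        phi[(i, j)] = assign(i, j)
        phi[(j, i)] = 1 / phi[(i, j)]
    return phi

phi1 = make_gain(E, lambda i, j: unit())
zeta_star = {v: unit() for v in V}                           # hidden switching
phi2 = {(i, j): zeta_star[i].conjugate() * g * zeta_star[j]  # zeta^{-1} = conjugate in T
        for (i, j), g in phi1.items()}

# reconstruct a switching function along the spanning tree {12, 23, 34},
# following the recursion in the proof (vertex 1 plays the role of v_1)
tree_parent = {2: 1, 3: 2, 4: 3}
zeta = {1: 1.0 + 0j}                                         # zeta(v_1) = 1_G
for v in [2, 3, 4]:
    p = tree_parent[v]
    zeta[v] = phi1[(v, p)] * zeta[p] / phi2[(v, p)]

phi1_z = {(i, j): (1 / zeta[i]) * g * zeta[j] for (i, j), g in phi1.items()}
print(all(abs(phi1_z[e] - phi2[e]) < 1e-12 for e in phi1_z))  # True
\end{verbatim}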
\section{Oriented Gain Graphs}\label{SECTIONOrientedGainGraphDEF}
A \emph{$\mathfrak{G}$-phased graph} is a pair $(\Gamma,\omega)$, consisting of a graph $\Gamma$, and a \emph{$\mathfrak{G}$-incidence phase function}
$\omega : V\times E\rightarrow \mathfrak{G}\cup \{0\}$ that satisfies
\begin{align*}
\omega(v,e)&\neq 0 \text{ if $v$ is incident to $e$},\\
\omega(v,e)&= 0 \text{ otherwise}.
\end{align*}
A $\mathfrak{G}$-phased graph with $\text{Image}(\omega)\subseteq \{+1,0,-1\}$ is called a \emph{bidirected graph}, and we call $\omega$ a \emph{bidirection}. If $\beta$ is a bidirection then $\beta(v,e)=+1$ can be thought of as an edge-end arrow proceeding into $v$ and $\beta(v,e)=-1$ as an edge-end arrow exiting $v$. See Figure \ref{GraphBidandPhasedEx} for some examples.
\begin{figure}
\caption{A graph $\Gamma$, a $\mathbb{T}$-phased graph and a bidirected graph.}
\label{GraphBidandPhasedEx}
\end{figure}
A (weak) \emph{involution} of a group $\mathfrak{G}$ is any element $\mathfrak{s}\in\mathfrak{G}$ such that $\mathfrak{s}^2=1_{\mathfrak{G}}$. Henceforth, all involutions are weak involutions, thus the group identity will also be called an involution.
Let $\mathfrak{G}^{\mathfrak{s}}$ be a group with a distinguished central involution $\mathfrak{s}$. Formally, $\mathfrak{s}$ is a distinguished involution of $\mathfrak{G}^{\mathfrak{s}}$ such that $\mathfrak{s}\in Z(\mathfrak{G}^{\mathfrak{s}})$. The notation $\mathfrak{G}^{\mathfrak{s}}$ is to clearly indicate which choice one has made for $\mathfrak{s}$, so the groups $\mathfrak{G}^{\mathfrak{s}}$ and $\mathfrak{G}^{\mathfrak{t}}$ may be equal even though $\mathfrak{s}\neq \mathfrak{t}$. For example, $\mathbb{T}^1=\mathbb{T}^{-1}=\mathbb{T}$ as groups, but the distinguished central involution of interest changes when writing $\mathbb{T}^1$ and $\mathbb{T}^{-1}$.
Let $\omega$ be a $\mathfrak{G}^{\mathfrak{s}}$-incidence phase function. The \emph{$\mathfrak{G}^{\mathfrak{s}}$-gain graph associated to a $\mathfrak{G}^{\mathfrak{s}}$-phased graph $(\Gamma,\omega)$}, denoted by $\Phi(\omega)$, has its gains defined by
\begin{equation}\label{orientationREL}
\phi(e_{ij})=\omega(v_i,e_{ij})\cdot\mathfrak{s}\cdot\omega(v_j,e_{ij})^{-1}.
\end{equation}
Notice that because $\mathfrak{s}$ is an involution,
\begin{align*}
\phi(e_{ij}) &= \omega(v_i,e_{ij})\cdot\mathfrak{s}\cdot\omega(v_j,e_{ij})^{-1} \\
&= \big[\omega(v_j,e_{ij})\cdot\mathfrak{s}^{-1}\cdot\omega(v_i,e_{ij})^{-1}\big]^{-1}\\
&= \big[\omega(v_j,e_{ij})\cdot\mathfrak{s}\cdot\omega(v_i,e_{ij})^{-1}\big]^{-1}\\
&=\phi(e_{ji})^{-1}.
\end{align*}
This means $\Phi(\omega)$ really is a $\mathfrak{G}^{\mathfrak{s}}$-gain graph with appropriately labeled oriented edges.
See Figures \ref{GraphPhasedAssociated} and \ref{GraphPhasedAssociatedMINUS} for distinguishing examples of $\Phi(\omega)$. To make the pictures less cluttered each edge will have only one oriented edge labelled with a gain (since the gain in the opposite direction is immediately determined as the inverse).
\begin{figure}
\caption{A $\mathbb{T}$-phased graph $(\Gamma,\omega)$ and the associated gain graph $\Phi(\omega)$ with $\mathfrak{s}=1$.}
\label{GraphPhasedAssociated}
\end{figure}
\begin{figure}
\caption{A $\mathbb{T}$-phased graph $(\Gamma,\omega)$ and the associated gain graph $\Phi(\omega)$ with $\mathfrak{s}=-1$.}
\label{GraphPhasedAssociatedMINUS}
\end{figure}
For a $\mathfrak{G}^{\mathfrak{s}}$-gain graph $\Phi$, an \emph{orientation of $\Phi$} is any $\mathfrak{G}^{\mathfrak{s}}$-incidence phase function $\omega$ that satisfies Equation \eqref{orientationREL}. An \emph{oriented $\mathfrak{G}^{\mathfrak{s}}$-gain graph} is a pair $(\Phi,\omega)$, consisting of a $\mathfrak{G}^{\mathfrak{s}}$-gain graph $\Phi$, and $\omega$, an orientation of $\Phi$. To obtain an oriented signed graph \cite{MR1120422} one can choose $\mathfrak{G}=\{+1,-1\}$ with $\mathfrak{s}=-1$.
Switching a $\mathfrak{G}^{\mathfrak{s}}$-incidence phase function $\omega$ by $\zeta:V\rightarrow \mathfrak{G}^{\mathfrak{s}}$ means replacing $\omega$ by $\omega^{\zeta}$ defined by $\omega^{\zeta}(v_i,e_{ij})=\zeta(v_i)^{-1}\omega(v_i,e_{ij})$. The $\mathfrak{G}^{\mathfrak{s}}$-gain graph associated to the switched $\omega^{\zeta}$ is the same as the $\zeta$-switched $\mathfrak{G}^{\mathfrak{s}}$-gain graph associated to $\omega$. This generalizes the same result known for signed graphs \cite{MR1120422}.
\begin{prop} If $\zeta:V\rightarrow \mathfrak{G}^{\mathfrak{s}}$, then $\Phi(\omega^{\zeta})=\Phi(\omega)^{\zeta}$.
\end{prop}
\begin{proof} We compute $\phi(e_{ij})$ in $\Phi(\omega^{\zeta})$ as follows:
\begin{align*}
\phi_{\omega^{\zeta}}(e_{ij})&=\omega^{\zeta}(v_i,e_{ij})\cdot\mathfrak{s}\cdot\omega^{\zeta}(v_j,e_{ij})^{-1}\\
&=\zeta(v_i)^{-1}\omega(v_i,e_{ij})\cdot\mathfrak{s}\cdot\omega(v_j,e_{ij})^{-1}\zeta(v_j)\\
&=\zeta(v_i)^{-1}\phi_{\omega}(e_{ij})\zeta(v_j)\\
&=\phi_{\omega}^{\zeta}(e_{ij}).\qedhere
\end{align*}
\end{proof}
\noindent {\bf Question 1:} What is an acyclic orientation in an oriented $\mathfrak{G}^{\mathfrak{s}}$-gain graph? The answer to this could have some interesting connections to matroids and hyperplane arrangement theory. This study could further generalize the results of Greene \cite{zbMATH03604927}, Zaslavsky \cite{MR1120422} and Slilaty \cite{DANSGGHA}.
\section{The Line Graph of an Abelian Gain Graph}\label{LineGraphsofAbelianGainGraphs}
Let $\Lambda_{\Gamma}$ denote the line graph of the unsigned graph $\Gamma$. Let $\omega_{\Lambda}$ be a $\mathfrak{G}^{\mathfrak{s}}$-incidence phase function on $\Lambda_{\Gamma}$ defined by
\begin{equation}\label{LGincidencerel}
\omega_{\Lambda}(e_{ij},e_{ij}e_{jk})=\omega(v_j,e_{ij})^{-1}.
\end{equation}
Let $(\Phi,\omega)$ be an oriented $\mathfrak{G}^{\mathfrak{s}}$-gain graph. The \emph{line graph of $(\Phi,\omega)$} is the oriented $\mathfrak{G}^{\mathfrak{s}}$-gain graph $(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$. See Figure \ref{OLinegraphExample} for an example.
\begin{figure}
\caption{An oriented $\mathfrak{G}^{\mathfrak{s}}$-gain graph $(\Phi,\omega)$ and its line graph $(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$.}
\label{OLinegraphExample}
\end{figure}
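A compact computational illustration (not part of the development) may be helpful here: with $\mathfrak{G}^{\mathfrak{s}}=\mathbb{T}^{-1}$ and $\Gamma$ a triangle, the Python sketch below builds the gains of $\Phi(\omega)$ from Equation \eqref{orientationREL}, forms $\omega_{\Lambda}$ by Equation \eqref{LGincidencerel}, and checks that the gain of the triangle in the line graph agrees with the gain of the corresponding closed walk in $\Gamma$; the triangle and the choice $\mathfrak{s}=-1$ are illustrative only.
\begin{verbatim}
import cmath, random

random.seed(1)
unit = lambda: cmath.exp(2j * cmath.pi * random.random())

s = -1                                    # the distinguished central involution
V = [1, 2, 3]
E = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 1})]   # a triangle

# an arbitrary T-incidence phase function omega, i.e. an orientation of Phi(omega)
omega = {(v, e): unit() for e in E for v in e}

# gains of Phi(omega), cf. (orientationREL)
phi = {(i, j): omega[(i, e)] * s / omega[(j, e)]
       for e in E for (i, j) in [tuple(e), tuple(e)[::-1]]}

# line graph: vertices are the edges of Gamma, adjacent iff they share a vertex
LE = [(e, f) for k, e in enumerate(E) for f in E[k + 1:] if e & f]

# omega_Lambda, cf. (LGincidencerel): omega_Lambda(e, ef) = omega(v, e)^{-1},
# where v is the vertex of Gamma shared by e and f
omega_L = {}
for (e, f) in LE:
    v = next(iter(e & f))
    omega_L[(e, (e, f))] = 1 / omega[(v, e)]
    omega_L[(f, (e, f))] = 1 / omega[(v, f)]

# gains of the line graph Phi(omega_Lambda), again via (orientationREL)
phi_L = {(e, f): omega_L[(e, (e, f))] * s / omega_L[(f, (e, f))] for (e, f) in LE}

# the triangle in the line graph has the same gain as the corresponding
# closed walk 2 -> 3 -> 1 -> 2 in Gamma
a, b, c = E
gain_L = phi_L[(a, b)] * phi_L[(b, c)] / phi_L[(a, c)]
gain_G = phi[(2, 3)] * phi[(3, 1)] * phi[(1, 2)]
print(abs(gain_L - gain_G) < 1e-12)       # True
\end{verbatim}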
The line graph of an oriented gain graph $(\Phi,\omega)$ can be viewed as a generalization of the line graph of a graph in the following way. Suppose $\omega=\mathfrak{s}$ for every incidence of $\Phi$, thus making the gain of every edge $\mathfrak{s}$. By Equation \eqref{LGincidencerel}, $\omega_{\Lambda}=\mathfrak{s}^{-1}=\mathfrak{s}$ for every incidence of $\Phi(\omega_{\Lambda})$. Hence, the gain of every edge in the line graph is also $\mathfrak{s}$.
\begin{prop} The line graph of $(\Phi(\mathfrak{s}),\mathfrak{s})$ is $(\Phi(\mathfrak{s}_{\Lambda}),\mathfrak{s}_{\Lambda})$.
\end{prop}
\begin{comment}
A cycle in $(\Lambda_{\Gamma}(\tau_{\Lambda}),\tau_{\Lambda})$ is called \emph{derived} if its vertices are the edges of a cycle in $\Phi$. The following proposition shows that the gain of a cycle is preserved in the line graph's corresponding derived cycle.
\begin{prop} Let $\mathfrak{G}$ be abelian. Let $C$ be a cycle in an oriented $\mathfrak{G}$-gain graph $(\Phi,\tau)$. Let $D$ be the derived cycle in $(\Lambda_{\Gamma}(\tau_{\Lambda}),\tau_{\Lambda})$ obtained from $C$. Then $\phi(C)=\phi_{\Lambda}(D)$.
\end{prop}
\begin{proof} Let $C=v_1v_2\cdots v_kv_1$ so $D=e_{k1}e_{12}\cdots e_{k-1,k}e_{k1}$. Then
\begin{align*}
\phi(C)&=\phi(v_1v_2)\cdots\phi(v_kv_1) \\
&=(\tau(v_1,e_{12})\tau(v_2,e_{12})^{-1})\cdots (\tau(v_k,e_{k1})\tau(v_1,e_{k1})^{-1}) \\
&=(\tau_{\Lambda}(e_{12},e_{k1}e_{12})^{-1}\tau_{\Lambda}(e_{12},e_{12}e_{23}))\cdots (\tau_{\Lambda}(e_{k1},e_{k-1,k}e_{k1})^{-1}\tau_{\Lambda}(e_{k1},e_{k1}e_{12}))\\
&=(\tau_{\Lambda}(e_{k1},e_{k1}e_{12})\tau_{\Lambda}(e_{12},e_{k1}e_{12})^{-1})\cdots (\tau_{\Lambda}(e_{k-1,k},e_{k-1,k}e_{k1})\tau_{\Lambda}(e_{k1},e_{k-1,k}e_{k1})^{-1})\\
&=\phi_{\Lambda}(e_{k1}e_{12})\cdots \phi_{\Lambda}(e_{k-1,k}e_{k1})=\phi_{\Lambda}(D).\qedhere
\end{align*}
\end{proof}
A triangle in $(\Lambda_{\Gamma}(\tau_{\Lambda}),\tau_{\Lambda})$ is called a \emph{vertex triangle} if its vertices are the edges at a common vertex in $\Phi$.
\begin{prop} Let $\mathfrak{G}$ be abelian. Let $(\Phi,\tau)$ be an oriented $\mathbb{T}$-gain graph. If $T$ is a vertex triangle in $(\Lambda_{\Gamma}(\tau_{\Lambda}),\tau_{\Lambda})$, then $\phi_{\Lambda}(T)=1$.
\end{prop}
\begin{proof} Let $v_i$ be a vertex incident to edges $e_{ij}$, $e_{ik}$ and $e_{il}$ in $\Phi$. Suppose $T$ is the vertex triangle in $(\Lambda_{\Gamma}(\tau_{\Lambda}),\tau_{\Lambda})$ whose vertices are $e_{ij}$, $e_{ik}$ and $e_{il}$. Then
\begin{align*}
\phi_{\Lambda}(T)&=\phi_{\Lambda}(e_{ij}e_{ik})\phi_{\Lambda}(e_{ik}e_{il})\phi_{\Lambda}(e_{il}e_{ij})\\
&=\tau_{\Lambda}(e_{ij},e_{ij}e_{ik})\tau_{\Lambda}(e_{ik},e_{ij}e_{ik})^{-1}\tau_{\Lambda}(e_{ik},e_{ik}e_{il})\tau_{\Lambda}(e_{il},e_{ik}e_{il})^{-1}\tau_{\Lambda}(e_{il},e_{il}e_{ij})\tau_{\Lambda}(e_{ij},e_{il}e_{ij})^{-1}\\
&=\tau(v_i,e_{ij})^{-1}\tau(v_i,e_{ik})\tau(v_i,e_{ik})^{-1}\tau(v_i,e_{il})\tau(v_i,e_{il})^{-1}\tau(v_i,e_{ij})=1.\qedhere
\end{align*}
\end{proof}
\end{comment}
A quick calculation verifies that changing the orientation of a particular edge in an oriented gain graph corresponds to switching the associated vertex in the line graph. The following more general result says that switching equivalent oriented gain graphs produce switching equivalent line graphs. This generalizes the same result known for signed graphs \cite[Lemma 6.1]{MananthavadyNotes} and borrows from its proof methods. This is an essential ingredient in defining the line graph of a gain graph $\Phi$ in general.
\begin{thm}\label{ReOrSwEqLG} Let $\mathfrak{G}^{\mathfrak{s}}$ be abelian. Let $\Phi_1$ and $\Phi_2$ be $\mathfrak{G}^{\mathfrak{s}}$-gain graphs with the same underlying graph $\Gamma$. If $(\Phi_1,\omega)$ and $(\Phi_2,\kappa)$ are oriented $\mathfrak{G}^{\mathfrak{s}}$-gain graphs where $\Phi_1\sim\Phi_2$, then $\Phi(\omega_{\Lambda})\sim\Phi(\kappa_{\Lambda})$.
\end{thm}
\begin{proof}
Let $(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$ and $(\Phi(\kappa_{\Lambda}),\kappa_{\Lambda})$ be the line graphs of $(\Phi_1,\omega)$ and $(\Phi_2,\kappa)$, respectively. Both line graphs have the same underlying graph $\Lambda_{\Gamma}$.
Let $C$ be a cycle in $\Lambda_{\Gamma}$. Assume that $C=e_0e_1\cdots e_{l-1}e_0$, where edges $e_{i-1}$ and $e_{i}$ have a common vertex $v_i$ in $\Gamma$ for every $i\in\{1,\ldots,l-1\}$, and edges $e_{l-1}$ and $e_0$ have a common vertex $v_l$ in $\Gamma$.
Now we calculate the gain of $\vec{C}_{e_0}$ in $(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$ as follows:
\begin{align*}
\phi_{\omega_{\Lambda}}(\vec{C}_{e_0})&=\phi_{\omega_{\Lambda}}(e_0e_1)\phi_{\omega_{\Lambda}}(e_1e_2)\cdots\phi_{\omega_{\Lambda}}(e_{l-1}e_0)\\
&=(\omega_{\Lambda}(e_0,e_0e_1)\cdot\mathfrak{s}\cdot\omega_{\Lambda}(e_1,e_0e_1)^{-1})(\omega_{\Lambda}(e_1,e_1e_2)\cdot\mathfrak{s}\cdot\omega_{\Lambda}(e_2,e_1e_2)^{-1})\cdots\\
&\qquad\qquad(\omega_{\Lambda}(e_{l-1},e_{l-1}e_0)\cdot\mathfrak{s}\cdot\omega_{\Lambda}(e_0,e_{l-1}e_0)^{-1})\\
&=\mathfrak{s}^{l}\omega(v_1,e_0)^{-1}\omega(v_1,e_1)\omega(v_2,e_1)^{-1}\omega(v_2,e_2)\cdots\omega(v_l,e_{l-1})^{-1}\omega(v_l,e_0).
\end{align*}
Notice that the resulting product only involves the orientation $\omega$, and it appears that after rearranging the factors this product may simplify to the gain of a closed walk in $\Gamma$. However, since $v_i$ may be the same as $v_{i+1}$, the product might not be the gain of a closed walk in $\Gamma$. If $v_i=v_{i+1}$ for some $i$, then $\omega(v_i,e_i)\omega(v_{i+1},e_i)^{-1}=1_{\mathfrak{G}^{\mathfrak{s}}}$, so we can reduce the product above. Notice that if $v_i=v_{i+1}$, then $v_i$ is incident to $e_{i-1}$, $e_{i}$ and $e_{i+1}$ in $\Gamma$; thus the vertices $e_{i-1}$, $e_{i}$ and $e_{i+1}$ in the line graph $\Lambda_{\Gamma}$ form a triangle. Therefore, $C'=e_0e_1\cdots e_{i-1}e_{i+1}\ldots e_{l-1}e_0$ is a cycle in $(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$ and $\phi_{\omega_{\Lambda}}(\vec{C'}_{e_0})=\mathfrak{s}\cdot\phi_{\omega_{\Lambda}}(\vec{C}_{e_0})$.
Let $C''=f_0 f_1 \cdots f_{k-1}f_0$ be the cycle in $(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$ obtained from $C$ by reducing all consecutive equal vertices. Suppose that for all $i\in\{1,\ldots,k-1\}$ the edges $f_{i-1}$ and $f_{i}$ have a common vertex $w_i$ in $\Gamma$, and edges $f_{k-1}$ and $f_0$ have common vertex $w_k$ in $\Gamma$. Now notice that $W_{\Gamma}=w_1 f_1 w_2 \cdots f_{k-1} w_k f_0 w_1$ is a closed walk of length $k$ in $\Gamma$. Therefore,
\begin{align*}
\phi_{\omega_{\Lambda}}(\vec{C''}_{f_0})&=\phi_{\omega_{\Lambda}}(f_0f_1)\phi_{\omega_{\Lambda}}(f_1f_2)\cdots\phi_{\omega_{\Lambda}}(f_{k-1}f_0)\\
&=(\omega_{\Lambda}(f_0,f_0f_1)\cdot\mathfrak{s}\cdot\omega_{\Lambda}(f_1,f_0f_1)^{-1})(\omega_{\Lambda}(f_1,f_1f_2)\cdot\mathfrak{s}\cdot\omega_{\Lambda}(f_2,f_1f_2)^{-1})\cdots\\
& \qquad\qquad(\omega_{\Lambda}(f_{k-1},f_{k-1}f_0)\cdot\mathfrak{s}\cdot\omega_{\Lambda}(f_0,f_{k-1}f_0)^{-1})\\
&=\omega(w_1,f_0)^{-1}\cdot\mathfrak{s}\cdot\omega(w_1,f_1)\cdot\omega(w_2,f_1)^{-1}\cdot\mathfrak{s}\cdot\omega(w_2,f_2)\cdots\\
& \qquad\qquad\omega(w_k,f_{k-1})^{-1}\cdot\mathfrak{s}\cdot\omega(w_k,f_0)\\
&=\omega(w_1,f_0)^{-1}\cdot\mathfrak{s}\cdot\omega(w_1,f_1)\cdot\mathfrak{s}\cdot\omega(w_2,f_1)^{-1}\omega(w_2,f_2)\cdot\mathfrak{s}\cdots\\
& \qquad\qquad\mathfrak{s}\cdot\omega(w_k,f_{k-1})^{-1}\omega(w_k,f_0)\\
&=\omega(w_1,f_0)^{-1}\cdot\mathfrak{s}\cdot\phi_1(f_1)\phi_1(f_2)\cdots\phi_1(f_{k-1})\cdot\omega(w_k,f_0)\\
&=\phi_1(f_1)\phi_1(f_2)\cdots\phi_1(f_{k-1})\omega(w_k,f_0)\cdot\mathfrak{s}\cdot\omega(w_1,f_0)^{-1}\\
&=\phi_1(f_1)\phi_1(f_2)\cdots\phi_1(f_{k-1})\phi_1(f_0)\\
&=\phi_1(W_{\Gamma}).
\end{align*}
Thus, $\phi_{\omega_{\Lambda}}(\vec{C}_{e_0})=\mathfrak{s}^{l-k}\phi_1(W_{\Gamma})$. Similarly, we can calculate $\phi_{\kappa_{\Lambda}}(\vec{C}_{e_0})=\mathfrak{s}^{l-k}\phi_2(W_{\Gamma})$.
Since $\mathfrak{G}^{\mathfrak{s}}$ is abelian, Proposition \ref{conjugateWalk} says that switching a $\mathfrak{G}^{\mathfrak{s}}$-gain graph does not change the gain of a closed walk. Hence, our assumption of $\Phi_1\sim \Phi_2$ implies $\phi_1(W_{\Gamma})=\phi_2(W_{\Gamma})$. Therefore, $\phi_{\omega_{\Lambda}}(\vec{C}_{e_0})=\mathfrak{s}^{l-k}\phi_1(W_{\Gamma}) = \mathfrak{s}^{l-k}\phi_2(W_{\Gamma})= \phi_{\kappa_{\Lambda}}(\vec{C}_{e_0})$. Since $C$ was arbitrary, the proof is complete by Lemma \ref{SwitchingFExistence}.
\end{proof}
An arbitrary $\mathfrak{G}^{\mathfrak{s}}$-gain graph $\Phi$ has many possible orientations $\omega$. By selecting one particular orientation, we can then produce a single line graph as above. However, the gains in the line graph depend on the original chosen orientation. Therefore, the line graph of a $\mathfrak{G}^{\mathfrak{s}}$-gain graph cannot be a single gain graph. Theorem \ref{ReOrSwEqLG} allows us to instead define the line graph of $\Phi$ as a switching class. If $\omega$ is an arbitrary orientation of $\Phi$, then the \emph{line graph of $\Phi$}, written $\Lambda(\Phi)$, is the switching class $[\Phi(\omega_{\Lambda})]$. See Figure \ref{LinegraphExample} for an example. We can even define the line graph of a switching class $[\Phi]$ as $[\Phi(\omega_{\Lambda})]$; that is, $\Lambda([\Phi])=[\Phi(\omega_{\Lambda})]$. If $\mathfrak{G}=\{+1,-1\}$ with $\mathfrak{s}=-1$, then these definitions generalize the line graph of a signed graph \cite{MananthavadyNotes}.
\begin{figure}
\caption{A $\mathbb{T}$-gain graph $\Phi$ and its line graph $\Lambda(\Phi)$.}
\label{LinegraphExample}
\end{figure}
\noindent {\bf Question 2:} Is there an analogue of Theorem \ref{ReOrSwEqLG} for nonabelian $\mathfrak{G}^{\mathfrak{s}}$?
\section{Matrices}\label{SECTIONMatricesofORIENTEDGAINGRAPHS}
Now we study several matrices associated to the various gain graphic structures defined above. Several of these matrices are new and generalize some previously known concepts for graphs, signed graphs and $\mathbb{T}$-gain graphs.
Let $\Phi$ be a $\mathfrak{G}^{\mathfrak{s}}$-gain graph. The \emph{adjacency matrix} $A(\Phi)=(a_{ij})$ is an $n\times n$ matrix with entries in $\mathfrak{G}^{\mathfrak{s}}\cup\{0\}$, and is defined by
\begin{equation*}
a_{ij}=
\begin{cases} \phi(e_{ij}) & \text{if }v_i\text{ is adjacent to }v_j,
\\
0 &\text{otherwise.}
\end{cases}
\end{equation*}
Let $\Phi$ be a $\mathfrak{G}^{\mathfrak{s}}$-gain graph. The \emph{incidence matrix} $\Eta(\Phi)=(\eta_{ve})$ is an $n\times m$ matrix with entries in $\mathfrak{G}^{\mathfrak{s}}\cup\{0\}$, defined by
\begin{equation*}
\eta_{v_i e}=
\begin{cases} \eta_{v_j e}\cdot\mathfrak{s}\cdot\phi(e_{ij}) &\text{if } e=e_{ij} \in E,
\\
0 &\text{otherwise;}
\end{cases}
\end{equation*}
furthermore, $\eta_{v_i e}\in \mathfrak{G}^{\mathfrak{s}}$ if $e_{ij}\in E$. We say ``an'' incidence matrix, because with this definition $\Eta(\Phi)$ is not unique. Each column can be left multiplied by any element in $\mathfrak{G}^{\mathfrak{s}}$ and the result can still be called an incidence matrix. For example, we can choose $\eta_{v_je}=1_{\mathfrak{G}^{\mathfrak{s}}}$ so $\eta_{v_i e}=\mathfrak{s}\cdot\phi(e_{ij})$ for each $e=e_{ij}\in E$.
Providing a $\mathfrak{G}^{\mathfrak{s}}$-gain graph $\Phi$ with an orientation results in a well defined incidence matrix for the oriented $\mathfrak{G}^{\mathfrak{s}}$-gain graph $(\Phi,\omega)$. The \emph{incidence matrix} $\Eta(\Phi,\omega)=(\eta_{ve})$ is an $n\times m$ matrix with entries in $\mathfrak{G}^{\mathfrak{s}}\cup\{0\}$, defined by
\begin{equation}\label{OCUGGincidencerel}
\eta_{v e}=\omega(v,e) \text{ for every } (v,e) \in V\times E.
\end{equation}
The adjacency matrix of $\Lambda(\Phi)$ is not well defined since the gain of a specific oriented edge is unknown. However, the adjacency matrix of $(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$ is well defined because the gain of every oriented edge is determined by $\omega_{\Lambda}$.
\subsection{Complex Unit Gain Graphs}
We can now state the following generalization of a signed graphic result due to Zaslavsky \cite{MITTOSSG} to the setting of complex unit gain graphs. This also generalizes the well known relationship between the oriented incidence matrix of a graph and the adjacency matrix of the corresponding line graph.
\begin{thm}\label{LineGraphHAEq} Let $(\Phi,\omega)$ be an oriented $\mathbb{T}^{\mathfrak{s}}$-gain graph, where $\mathfrak{s}$ is fixed as either $+1$ or $-1$. Then
\begin{equation}
\Eta(\Phi,\omega)^*\Eta(\Phi,\omega)=2I+\mathfrak{s}A(\Phi(\omega_{\Lambda}),\omega_{\Lambda}).
\end{equation}
\end{thm}
\begin{proof}
Notice that $\Eta(\Phi,\omega)^*\Eta(\Phi,\omega)$ is an $E\times E$ matrix. Consider the dot product of row $\mathbf{r}_i$ of $\Eta(\Phi,\omega)^*$ with column $\mathbf{c}_j$ of $\Eta(\Phi,\omega)$. Suppose column $\mathbf{c}_i$ corresponds to edge $e_{qr}$ and $\mathbf{c}_j$ corresponds to edge $e_{lk}$. Figure \ref{OLinegraphExample} is a helpful reference for the following calculation.
Case 1: $i=j$ (same edge). Then $\mathbf{r}_i=\mathbf{c}_j^*$ and thus,
\begin{align*}
\mathbf{r}_i\cdot \mathbf{c}_j=\mathbf{c}_j^*\cdot \mathbf{c}_j&=\bar{\eta}_{v_ke_{lk}}\eta_{v_ke_{lk}}+ \bar{\eta}_{v_le_{lk}}\eta_{v_le_{lk}}\\
&=\overline{\omega(v_k,e_{lk})}\omega(v_k,e_{lk})+ \overline{\omega(v_l,e_{lk})}\omega(v_l,e_{lk})\\
&=|\omega(v_k,e_{lk})|^2+ |\omega(v_l,e_{lk})|^2\\
&=2.
\end{align*}
Case 2: $i\neq j$ and $r=l$ (distinct adjacent edges).
\begin{align*}
\mathbf{r}_i\cdot \mathbf{c}_j=\mathbf{c}_i^*\cdot \mathbf{c}_j&= \bar{\eta}_{v_l e_{ql}}\eta_{v_le_{lk}}\\
&= \omega(v_l,e_{ql})^{-1} \omega(v_l,e_{lk})\\
&= \omega_{\Lambda}(e_{ql},e_{ql}e_{lk})\omega_{\Lambda}(e_{lk},e_{ql}e_{lk})^{-1}\\% \qquad (\text{by } \eqref{LGincidencerel})\\
&=\mathfrak{s}\phi_{\Lambda}(e_{ql}e_{lk}).
\end{align*}
Case 3: $i\neq j$ and the edges $e_{qr}$ and $e_{lk}$ share no vertex. Then no row contributes to the dot product, so $\mathbf{r}_i\cdot \mathbf{c}_j=0$; the remaining adjacency configurations are handled as in Case 2. Therefore, $\Eta(\Phi,\omega)^*\Eta(\Phi,\omega)=2I+\mathfrak{s}A(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$.
\end{proof}
\begin{lem}[\cite{MR2900705}, Lemma 4.1]\label{simASwitch} Let $\Phi_1=(\Gamma,\phi_1)$ and $\Phi_2=(\Gamma,\phi_2)$ both be $\mathbb{T}$-gain graphs. If $\Phi_1\sim \Phi_2$, then $A(\Phi_1)$ and $A(\Phi_2)$ have the same spectrum.
\end{lem}
Even though $A(\Lambda(\Phi))$ is only well defined up to switching, its eigenvalues are well defined by Lemma \ref{simASwitch}.
\begin{cor}\label{LGAdjSpec}
Let $(\Phi,\omega)$ be an oriented $\mathbb{T}^{\mathfrak{s}}$-gain graph, where $\mathfrak{s}$ is fixed as either $+1$ or $-1$. Then $A(\Lambda(\Phi))$ and $A(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$ have the same spectrum.
\end{cor}
For a graph, there is a classic bound on the eigenvalues which says that all adjacency eigenvalues of the line graph are greater than or equal to $-2$. For a signed graph, a similar statement can be made, which says that all adjacency eigenvalues of the line graph of a signed graph are less than or equal to 2 \cite{MITTOSSG}. A generalization for $\mathbb{T}^{-1}$-gain graphs can be stated, establishing an upper bound on the eigenvalues of $A(\Lambda(\Phi))$.
\begin{thm}\label{LHEignM1} Let $\Phi=(\Gamma,\phi)$ be a $\mathbb{T}^{-1}$-gain graph. If $\lambda$ is an eigenvalue of $A(\Lambda(\Phi))$, then $\lambda \leq 2$.
\end{thm}
\begin{proof}
Let $\omega$ be an arbitrary orientation of $\Phi$. Suppose that $\mathbf{x}$ is an eigenvector of $A(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$ with associated eigenvalue $\lambda$. By Theorem \ref{LineGraphHAEq} the following simplification can be made:
\[ \Eta(\Phi,\omega)^*\Eta(\Phi,\omega) \mathbf{x} =\big (2I-A(\Phi(\omega_{\Lambda}),\omega_{\Lambda})\big)\mathbf{x} = (2-\lambda)\mathbf{x}.\]
Therefore, $2-\lambda$ is an eigenvalue of $\Eta(\Phi,\omega)^*\Eta(\Phi,\omega)$. Since $\Eta(\Phi,\omega)^*\Eta(\Phi,\omega)$ is positive semidefinite it must be that $2-\lambda\geq 0$ and therefore, $2\geq \lambda$. By Corollary \ref{LGAdjSpec}, the result follows.
\end{proof}
Similarly, for $\mathbb{T}^{1}$-gain graphs a lower bound on the eigenvalues of $A(\Lambda(\Phi))$ can be obtained.
\begin{thm}\label{LHEignP1} Let $\Phi=(\Gamma,\phi)$ be a $\mathbb{T}^{1}$-gain graph. If $\lambda$ is an eigenvalue of $A(\Lambda(\Phi))$, then $-2 \leq \lambda$.
\end{thm}
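A proof can be sketched along the same lines as Theorem \ref{LHEignM1}. Let $\omega$ be an orientation of $\Phi$ and let $\mathbf{x}$ be an eigenvector of $A(\Phi(\omega_{\Lambda}),\omega_{\Lambda})$ with eigenvalue $\lambda$. Theorem \ref{LineGraphHAEq} with $\mathfrak{s}=+1$ gives
\[ \Eta(\Phi,\omega)^*\Eta(\Phi,\omega) \mathbf{x} =\big(2I+A(\Phi(\omega_{\Lambda}),\omega_{\Lambda})\big)\mathbf{x} = (2+\lambda)\mathbf{x},\]
so positive semidefiniteness of $\Eta(\Phi,\omega)^*\Eta(\Phi,\omega)$ forces $2+\lambda\geq 0$, and Corollary \ref{LGAdjSpec} transfers the bound $\lambda\geq -2$ to $A(\Lambda(\Phi))$.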
\noindent {\bf Question 3:} Can we classify all $\mathbb{T}^{-1}$-gain graphs with adjacency eigenvalues less than 2? Similarly, can we classify all $\mathbb{T}^{1}$-gain graphs with adjacency eigenvalues greater than $-2$? Perhaps one might consider these questions for the group of $n$th roots of unity $\boldsymbol\mu_n$ instead of the full circle group $\mathbb{T}$. The bound in Theorem \ref{LHEignM1} holds for $\boldsymbol\mu_{2n}^{-1}$-gain graphs and the bound in Theorem \ref{LHEignP1} holds for all $\boldsymbol\mu_{n}^{1}$-gain graphs.
Recently, Krishnasamy, Lehrer
and Taylor have classified indecomposable star-closed line systems in $\mathbb{C}^n$ \cite{MR2470539, MR2542964}, which vastly generalizes the results of Cameron, Goethals, Seidel, and Shult \cite{MR0441787} as well as Cvetkovi{\'c}, Rowlinson, and Simi{\'c} \cite{MR2120511}. If the questions above can be answered in terms of these indecomposable line systems in $\mathbb{C}^n$, then a study of the exceptional graphs could generalize the related work of Chawathe and G.~R. Vijayakumar on signed graphs \cite{MR1164766, MR1234728, MR884068, MR1078708}.
\section{Acknowledgements}
The author would like to thank Thomas Zaslavsky and Marcin Mazur for their valuable comments and suggestions regarding this work.
\end{document} |
\begin{document}
\title{Hom-Groups, Representations and Homological Algebra}
\author{Mohammad Hassanzadeh}
\curraddr{University of Windsor, Department of Mathematics and Statistics, Lambton Tower, Ontario, Canada.}
\email{[email protected]}
\subjclass[2010]{17D99, 06B15, 20J05}
\keywords{Nonassociative rings and algebras, Representation theory, Homological methods in group theory}
\begin{abstract}
A Hom-group $G$ is a nonassociative version of a group in which associativity, invertibility, and unitality are twisted by a map $\alpha: G\longrightarrow G$.
Introducing the Hom-group algebra $\mathbb{K}G$, we observe that Hom-groups provide examples of Hom-algebras, Hom-Lie algebras and Hom-Hopf algebras.
We introduce two types of modules over a Hom-group $G$. To find out more about these modules,
we introduce Hom-group (co)homology with coefficients in them. Our (co)homology theories generalize group (co)homology.
In contrast to the associative case, we observe that the coefficients used for Hom-group homology differ from those used for Hom-group cohomology.
We show that the inverse elements provide a relation between Hom-group (co)homology with coefficients in right and left $G$-modules.
It will be shown that our (co)homology theories for Hom-groups with coefficients can be reduced to the Hochschild (co)homologies of Hom-group algebras.
For certain coefficients the functoriality of Hom-group (co)homology will be shown.
\end{abstract}
\section{Introduction}
The notion of Hom-Lie algebra is a generalization of Lie algebras which appeared
first in $q$-deformations of Witt and Virasoro algebras, where the Jacobi identity is deformed
by a linear map \cite{as}, \cite{ckl}, \cite{cz}. There are several interesting examples of Hom-Lie algebras. For example,
the authors in \cite{gr} have shown that any algebra of dimension 3 is a Hom-Lie algebra.
The related algebra structure is called a Hom-algebra and was introduced in \cite{ms1}.
Later, other objects such as Hom-bialgebras and Hom-Hopf algebras were studied in \cite{ms2}, \cite{ms3}, \cite{ya2}, \cite{ya3}, \cite{ya4}.
We refer the reader to \cite{cs}, \cite{hls}, \cite{ls}, \cite{bm} for more work on
Hom-Lie algebras, to \cite{gmmp}, \cite{fg}, \cite{hms} for Hom-algebras, and to \cite{cq}, \cite{gw}, \cite{pss} for representations of Hom-objects.
It is well known that the study of Hopf algebras is closely related to groups and Lie algebras. The set of group-like elements and the set of primitive elements of a
Hopf algebra form a group and a Lie algebra, respectively. Conversely, any group gives a Hopf algebra, called the group algebra, and
any Lie algebra has a universal enveloping algebra. There has been much work relating Hom-Lie algebras and Hom-Hopf algebras.
However, some relations were missing in the context of Hom-type objects due to the lack of Hom-type notions for groups and group algebras.
Here we briefly explain how Hom-groups came into the context of Hom-type objects.
The universal enveloping algebra of a Hom-Lie algebra has a Hom-bialgebra structure, see \cite{ya4}.
However, it does not have a Hom-Hopf algebra structure in the sense of \cite{ms2}. This is due to the fact that the antipode is not an inverse of the identity
map in the convolution product. This motivated the authors in \cite{lmt} to modify the notion of invertibility in Hom-algebras and to introduce a new definition
of the antipode of Hom-Hopf algebras. Solving this problem, they arrived at the axioms of Hom-groups, which appear naturally in the structure of the group-like
elements of Hom-Hopf algebras. They were also motivated by the problem of constructing a Hom-Lie group integrating a Hom-Lie algebra.
Simultaneously with this paper, the author in \cite{h1} introduced and studied several fundamental notions for Hom-groups whose twisting map $\alpha$ is invertible. It was shown that Hom-groups are examples of quasigroups. Furthermore, the Lagrange theorem for finite Hom-groups was proved. The Hom-Hopf algebra structure of the Hom-group algebra $\mathbb{K}G$ was introduced in \cite{h2}.

In this paper we investigate different aspects of Hom-groups such as modules and homological algebra.
In Section 2, we study the basics of Hom-groups. We introduce the Hom-algebra associated to a Hom-group $G$, which we call the Hom-group algebra and denote by $\mathbb{K}G$.
It has been shown in \cite{ms1} that the commutator of a Hom-associative algebra $A$ is a Hom-Lie algebra $\mathfrak{g}_A$.
The authors in \cite{ya4}, \cite{lmt} showed that the universal enveloping algebra of a Hom-Lie algebra is endowed with a
Hom-Hopf algebra structure. Therefore Hom-groups are sources of examples of Hom-algebras, Hom-Lie algebras, and Hom-Hopf algebras, as follows:
$$ G\hookrightarrow \mathbb{K}G\hookrightarrow \mathfrak{g}_{\mathbb{K}G}\hookrightarrow U(\mathfrak{g}_{\mathbb{K}G}).$$
We refer the reader to \cite{h1} for more examples and fundamental notions for Hom-groups.
In Section 3, we introduce two types of modules over Hom-groups. The first type is called dual Hom-modules.
Using inverse elements in a Hom-group $G$, we show that a left dual $G$-module can be turned into a right dual $G$-module and vice versa.
Then we introduce $G$-modules and we show that for any right $G$-module $M$ the algebraic dual $\mathop{\rm Hom}\nolimits(M, \mathbb{K})$ is a
dual left $G$-module, where $\mathbb{K}$ is a field.
It is known that group (co)homology provides an important set of tools for studying modules over a group.
This motivates us to introduce (co)homology theories for Hom-groups in order to find out more about representations of Hom-groups.
Generally, introducing homological algebra for nonassociative objects is a difficult task. The first attempts to introduce homological tools for
Hom-algebras and Hom-Lie algebras appeared in \cite{aem}, \cite{ms3}, \cite{ms4}, \cite{ya1}. The authors in \cite{hss} defined Hochschild and
cyclic (co)homology for Hom-algebras.
In Section 4, we introduce Hom-group cohomology with coefficients in dual left (right) $G$-modules. The conditions on
$M$ in our work also appeared in other contexts such as \cite{cg}, where the authors used the category of Hom-modules over
Hom-algebras to obtain a monoidal category for modules over Hom-bialgebras.
A noticeable difference between the homology theories of Hom-algebras introduced in \cite{hss} and the ones for Hom-groups in this paper is
that the first needs bimodules over Hom-algebras while the second requires one-sided modules (left or right).
We show that Hom-group cohomology with coefficients in a dual right $G$-module is isomorphic to Hom-group cohomology with
coefficients in the dual left $G$-module
whose left action is given by the inverse elements. We compute $0$- and $1$-cocycles and we show the functoriality of Hom-group cohomology for certain coefficients.
Since any Hom-group gives a Hom-group algebra, a natural question is the relation between the cohomology theories of these two different objects.
We show that Hom-group cohomology of a Hom-group $G$ with coefficients in a dual left module is isomorphic to the Hom-Hochschild cohomology of the Hom-group algebra $\mathbb{K}G$ with coefficients
in the dual $\mathbb{K}G$-bimodule whose right $\mathbb{K}G$-action is trivial. Later we
introduce Hom-group homology with coefficients in left (right) $G$-modules.
In contrast to the associative case, the Hom-associativity condition leads us to use different types of
representations for the cohomology and homology theories of Hom-groups. We look into similar results in the homology case.
The (co)homology theories for Hom-algebras in \cite{hss}, \cite{aem}, \cite{ms3} and for Hom-groups in this paper give us hope of solving the open problem of introducing homological tools for other nonassociative objects such as Jordan algebras and alternative algebras.
\bigskip
\tableofcontents
\section{Hom-groups}
Here we recall the definition of a Hom-group from \cite{lmt}.
\begin{definition}\label{def-hom}{\rm
A Hom-group consists of a set $G$ together with
a distinguished member $1$ of $G$,
a set map $\alpha: G\longrightarrow G$,
an operation $\mu: G\times G\longrightarrow G$, and an operation written as
$^{-1}:G\longrightarrow G$.
These pieces of structure are subject to the following axioms:\\
i) The product map $\mu: G\times G\longrightarrow G$ satisfies the Hom-associativity property
$$\mu(\alpha(g), \mu(h, k))= \mu(\mu(g,h), \alpha(k)).$$
For simplicity, when there is no confusion we omit the sign $\mu$.
ii) The map $\alpha$ is multiplicative, i.e., $\alpha(gk)=\alpha(g)\alpha(k)$.
iii) The element $1$ is called the unit and it satisfies the Hom-unitality condition
$$g1=1g=\alpha(g), \quad\quad ~~~~~ \alpha(1)=1.$$
iv) The map $g\longmapsto g^{-1}$ satisfies the anti-morphism property $(gh)^{-1}=h^{-1} g^{-1}$.
v) For any $g\in G$
there exists a natural number $n$ satisfying the Hom-invertibility condition
$$\alpha^n(g g^{-1})=\alpha^n(g^{-1}g)=1.$$
The smallest such
$n$ is called the invertibility index of $g$.
}
\end{definition}
Since we have the anti-morphism $g\longmapsto g^{-1}$, by definition the inverse of any element $g\in G$ is
unique, although different elements may have different invertibility indices.
The inverse of the unit element $1$ of a Hom-group $(G, \alpha)$ is itself because $\alpha(\mu(1, 1))=\alpha(1)=1$.
For any Hom-group $(G, \alpha)$ we have $\alpha(g)^{-1}=\alpha(g^{-1})$. Indeed, if $g^{-1}$ is the unique inverse of $g$ and its invertibility index is $k$, then
$$\alpha^{k-1}( \alpha(g) \alpha(g^{-1}))=\alpha^k(g) \alpha^k(g^{-1})=1.$$ So the invertibility index of $\alpha(g)$ is at most $k-1$;
in particular, if $k=1$ then $\alpha(g)\alpha(g^{-1})=1$.
Non-associativity of the product prevents us from easily defining the notion of the order of an element $g$. Therefore many basic results of group theory are affected by the missing associativity condition.
\begin{example}\label{deformation of groups}
{\rm
Let $(G, \mu, 1)$ be any group and $\alpha: G\longrightarrow G$ be a group homomorphism. We define a new product $\mu_{\alpha}: G\times G\longrightarrow G$ given by $$\mu_{\alpha}(g, h)= \alpha(\mu(g, h))=\mu(\alpha(g), \alpha(h)).$$ Then $(G, \mu_{\alpha})$ is a Hom-group, which we denote by $G_{\alpha}$. We note that the inverse of any element $g\in G_{\alpha}$ is also $g^{-1}$ because
$$\alpha(\mu_{\alpha}(g, g^{-1}))= \alpha(\alpha(g)\alpha(g^{-1}))=\alpha(\alpha(1))=1.$$
The invertibility index of every element of $G_{\alpha}$ is one.
}
\end{example}
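\begin{example}{\rm
As a concrete instance of Example \ref{deformation of groups}, one may take the additive group $(\mathbb{Z},+,0)$ and the group homomorphism $\alpha(n)=2n$. The twisted product is $\mu_{\alpha}(m,n)=2(m+n)$, and in additive notation one checks Hom-associativity directly:
$$\mu_{\alpha}(\alpha(m),\mu_{\alpha}(n,p))=2\big(2m+2(n+p)\big)=4m+4n+4p=2\big(2(m+n)+2p\big)=\mu_{\alpha}(\mu_{\alpha}(m,n),\alpha(p)).$$
Moreover $\mu_{\alpha}(m,0)=2m=\alpha(m)$ and $\alpha(\mu_{\alpha}(m,-m))=\alpha(0)=0$, so $(\mathbb{Z},\mu_{\alpha})$ is a Hom-group which is not associative; for example $\mu_{\alpha}(\mu_{\alpha}(1,0),0)=4\neq 2=\mu_{\alpha}(1,\mu_{\alpha}(0,0))$.
}
\end{example}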
\begin{remark}
{\rm
In this paper we use the general definition of Hom-groups given in Definition \ref{def-hom}. However, the author in \cite{h1} considered the special case when $\alpha$ is invertible. In that case the invertibility axiom changes to: for any $g\in G$ there exists $g^{-1}\in G$ such that
$$gg^{-1}= g^{-1}g=1.$$
It was shown that the inverse element $g^{-1}$ is unique and also that $(gh)^{-1}= h^{-1}g^{-1}$. Therefore some of the axioms in Definition \ref{def-hom} can be obtained from Hom-associativity. See \cite{h1}.
}
\end{remark}
\begin{definition}{\rm
Let $\mathbb{K}$ be a field. For any Hom-group $(G, \alpha)$ we can define a free $\mathbb{K}$-Hom-algebra $\mathbb{K}G$, called the Hom-group algebra.
More precisely, $\mathbb{K}G$ denotes the set of all formal linear combinations $\sum cg$ where $c\in \mathbb{K}$ and $g\in G$. The multiplication of $\mathbb{K}G$ is defined
by $(cg)(c'g')= (cc')(gg')$ for all $c,c'\in \mathbb{K}$ and $g,g'\in G$.
For the Hom-algebra structure we extend $\alpha:G\longrightarrow G$ to a $\mathbb{K}$-linear map $\mathbb{K}G\longrightarrow \mathbb{K}G$ in the obvious way.
}
\end{definition}
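One may check Hom-associativity of $\mathbb{K}G$ directly from the Hom-associativity of $G$ and bilinearity of the product; on basis elements,
$$\alpha(cg)\big((c'g')(c''g'')\big)=(cc'c'')\,\alpha(g)(g'g'')=(cc'c'')\,(gg')\alpha(g'')=\big((cg)(c'g')\big)\alpha(c''g'').$$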
\begin{remark}{\rm
It is shown in \cite{ms1} that the commutator of a Hom-associative algebra $A$, given by $[a,b]=ab-ba$, is a Hom-Lie algebra $\mathfrak{g}_A$.
Furthermore, the authors in \cite{ya4}, \cite{lmt} showed that the universal enveloping algebra of a Hom-Lie algebra is endowed with a
Hom-Hopf algebra structure. One notes that the Hopf algebra structure in \cite{ya4} is different from the one in \cite{lmt}. Therefore Hom-groups
are a source of examples of Hom-algebras, Hom-Lie algebras, and Hom-Hopf algebras, as follows:
$$ G\hookrightarrow \mathbb{K}G\hookrightarrow \mathfrak{g}_{\mathbb{K}G}\hookrightarrow U(\mathfrak{g}_{\mathbb{K}G}).$$
}
\end{remark}
By \cite{lmt}, an element $x$ in a unital Hom-algebra $(A, \alpha, 1)$ is invertible if there is an element $x^{-1}\in A$ and a
non-negative integer $k$ such that $$ \alpha^k(x x^{-1})= \alpha^k(x^{-1}x)=1.$$
The element $x^{-1}$ is called the Hom-inverse of $x$.
The Hom-inverse of an element in a Hom-algebra may not be unique. This is different from
Hom-groups, where the inverse of an element is unique. This prevents the Hom-invertible elements of a Hom-algebra from forming a Hom-group in general.
The authors in \cite{lmt} showed that for any unital Hom-algebra, the unit $1$ is Hom-invertible, the
product of any two Hom-invertible elements is Hom-invertible and every inverse of a Hom-invertible element
is Hom-invertible. Furthermore, they proved that the set of group-like elements in a Hom-Hopf algebra is a Hom-group.
The inverse of an element $cg$ in the Hom-group algebra $\mathbb{K}G$ is the unique element $c^{-1}g^{-1}$, where $c\in \mathbb{K}$ and $g\in G$.
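Indeed, if $k$ denotes the invertibility index of $g$ in $G$, then
$$\alpha^k\big((cg)(c^{-1}g^{-1})\big)=\alpha^k\big((cc^{-1})(gg^{-1})\big)=\alpha^k(gg^{-1})=1,$$
and similarly $\alpha^k\big((c^{-1}g^{-1})(cg)\big)=1$, so $c^{-1}g^{-1}$ is a Hom-inverse of $cg$ in the sense recalled above.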
\begin{definition}
{\rm
A subset $H$ of a Hom-group $(G,\alpha)$ is called a Hom-subgroup of $G$ if $(H,\alpha)$ is itself a Hom-group.
}
\end{definition}
One notes that if $H$ is a Hom-subgroup of $G$, then $\alpha(h) = 1h\in H$ for all $h\in H$. Therefore $\alpha(H)\subseteq H$.
\begin{example}
{\rm
Let $G$ be a group and $\alpha: G\longrightarrow G$ be a group homomorphism. If $H$ is a subgroup of
$G$ such that $\alpha(H)\subseteq H$, then $(H_{\alpha}, \alpha)$ is a Hom-subgroup of $G_{\alpha}$.
}
\end{example}
\begin{definition}
{\rm
Let $(G, \alpha)$ and $(H, \beta)$ be two Hom-groups. A map $f: G\longrightarrow H$ is called a morphism of Hom-groups if $\beta(f(g))=f(\alpha(g))$ and $f(gk)=f(g)f(k)$ for all $g,k\in G$.
Two Hom-groups $G$ and $H$ are called isomorphic if there exists a bijective morphism of Hom-groups $f: G\longrightarrow H$.
}
\end{definition}
\begin{proposition}
Let $(G, \alpha)$ and $(H, \beta)$ be two Hom-groups and $f: G\longrightarrow H$ be a morphism of Hom-groups.
If the invertibility index of the element $f(1_G)\in H$ is $n$, then $\beta^{n+2}(f(1_G))=1_H$.
\end{proposition}
\begin{proof}
Since $f$ is multiplicative, $$f(1_G) f(1_G)= f(1_G 1_G) = f(1_G).$$
Also $$1_H f(1_G)=\beta(f(1_G))= f(\alpha(1_G))= f(1_G).$$
Therefore $$f(1_G) f(1_G)= 1_H f(1_G).$$
Applying $\beta^n$ we get $\beta^n(f(1_G)) \beta^n(f(1_G))= \beta^n(1_H) \beta^n(f(1_G))$,
so $\beta^n(f(1_G)) \beta^n(f(1_G))= 1_H \beta^n(f(1_G))$. Then $$[\beta^n(f(1_G)) \beta^n(f(1_G))] \beta^{n+1}(f(1_G)^{-1})= [1_H \beta^n(f(1_G))] \beta^{n+1}(f(1_G)^{-1}).$$
By Hom-associativity,
$$ \beta^{n+1}(f(1_G)) [ \beta^n(f(1_G) f(1_G)^{-1})] = \beta(1_H) [\beta^n(f(1_G) f(1_G)^{-1})] .$$ Then
$$\beta^{n+1}(f(1_G))\, 1_H = \beta(1_H)\, 1_H. $$ Therefore $\beta^{n+2}(f(1_G))= \beta^2(1_H)= 1_H.$
\end{proof}
This proposition shows that, in general, for a morphism of Hom-groups $f: G\longrightarrow H$ the unitality condition $f(1)=1$ need not hold.
\begin{lemma}
Let $(G, \alpha)$ and $(H, \beta)$ be two Hom-groups and $f: G\longrightarrow H$ be a morphism of Hom-groups. If $f(1_G)=1_H$, then
$f(g^{-1})= f(g)^{-1}$.
\end{lemma}
\begin{proof}
Suppose that the invertibility index of $g$ is $n$. Then $$ \beta^n(f(g) f(g^{-1}))=f(\alpha^n(g g^{-1}))= f(1_G)=1_H,$$
and similarly $\beta^n(f(g^{-1}) f(g))=1_H$. Since inverses in a Hom-group are unique, $f(g^{-1})= f(g)^{-1}$.
\end{proof}
\begin{lemma}
Let $(G, \alpha)$ and $(H, \beta)$ be two Hom-groups and $f: G\longrightarrow H$ be a morphism of Hom-groups. If $f(1_G)=1_H$, then
$\ker f=\{ g\in G \mid f(g)=1_H\}$ is a Hom-subgroup of $G$.
\end{lemma}
\begin{proof}
Since $f$ is multiplicative, $\ker f$ is closed under multiplication. Also, if $x\in \ker f$ then $x^{-1}\in \ker f$ because, by the previous lemma,
$$f(x^{-1})= f(x)^{-1}= 1_H^{-1}=1_H.$$
\end{proof}
\section{$G$-modules}
In this section we introduce two different types of modules over a Hom-group $G$. The first type is called dual $G$-modules, and we use them
later to introduce a cohomology theory for Hom-groups. The other type is called $G$-modules, and they will be used to define a homology theory for Hom-groups.
\begin{definition}
Let $(G, \alpha)$ be a Hom-group. An abelian group $M$ is called a dual left $G$-module if there are maps $\cdot: G\times M\longrightarrow M$ and $\beta: M\longrightarrow M$ such that
\begin{equation}\label{left-dual-module}
g\cdot (\alpha(h)\cdot m)=\beta((gh)\cdot m), \quad\quad g,h\in G,
\end{equation}
and $$1\cdot m= \beta(m).$$
Similarly, $M$ is called a dual right $G$-module if
\begin{equation}\label{right-dual-module}
(m\cdot \alpha(h))\cdot g= \beta(m\cdot (hg)), \quad\quad m\cdot 1=\beta(m).
\end{equation}
Finally, we call $M$ a dual $G$-bimodule if it is both a dual left and a dual right $G$-module satisfying the bimodule property
$$\alpha(a)\cdot (v\cdot b)=(a\cdot v)\cdot \alpha(b).$$
\end{definition}
\begin{lemma}\label{result of left dual module}
If $(G, \alpha)$ is a Hom-group and $M$ a dual left $G$-module, then
\begin{equation}
g\cdot \beta(m)= \beta(\alpha(g)\cdot m), \quad\quad g\in G, ~m\in M.
\end{equation}
Similarly, for a dual right $G$-module we have
\begin{equation}
\beta(m\cdot \alpha(g))= \beta(m)\cdot g.
\end{equation}
\end{lemma}
\begin{proof}
This follows by substituting $h=1$ in \eqref{left-dual-module} and \eqref{right-dual-module} and using $1\cdot m=\beta(m)= m\cdot 1$.
\end{proof}
It is known that for every group $G$, a right $G$-module $M$ can be turned into a left $G$-module $\widetilde{M}=M$
with the left action given by $g\cdot m:= mg^{-1}$. This can also be done for Hom-groups, as follows.
\begin{lemma}\label{right to left}
Let $(G, \alpha)$ be a Hom-group. A dual right $G$-module $M$ can be turned into a dual left $G$-module $\widetilde{M}=M$ by the left action given by
\begin{equation}
g\cdot m:= m\cdot g^{-1}.
\end{equation}
\end{lemma}
\begin{proof}
This follows from
\begin{align*}
g\cdot (\alpha(k)\cdot m)&= g\cdot (m\cdot \alpha(k)^{-1})\\
&= (m\cdot \alpha(k)^{-1})\cdot g^{-1}= (m\cdot \alpha(k^{-1}))\cdot g^{-1}\\
&= \beta(m\cdot (k^{-1}g^{-1}))=\beta(m\cdot (gk)^{-1}) =\beta((gk)\cdot m),
\end{align*}
and also
\begin{equation*}
1\cdot m=m\cdot 1^{-1}=m\cdot 1=\beta(m).
\end{equation*}
\end{proof}
The following notion of modules over Hom-groups will be used to introduce Hom-group homology.
\begin{definition}\label{modules over Hom-groups}
Let $(G, \alpha)$ be a Hom-group. An abelian group $M$ equipped with maps $\cdot :G \times M \longrightarrow M$, $(g, m)\mapsto g\cdot m$, and $\beta:M\longrightarrow M$, is called a left $G$-module if
\begin{equation}\label{aux-Hom-module}
(gk)\cdot \beta(m) = \alpha(g)\cdot (k\cdot m), \quad\quad~~~~~~~~ 1\cdot m=\beta(m),
\end{equation}
for all $g,k\in G$ and $m\in M$.
Similarly, $(M,\beta)$ is called a right $G$-module if
\begin{equation*}
\beta(m)\cdot(gk)= (m\cdot g)\cdot \alpha(k), \quad\quad ~~~~ m\cdot 1= \beta(m).
\end{equation*}
Furthermore, $M$ is called a $G$-bimodule if
\begin{equation}\label{aux-A-bimodule}
\alpha(g)\cdot (m \cdot k) = (g \cdot m)\cdot \alpha(k),
\end{equation}
for all $g,k\in G$ and $m \in M$.
\end{definition}
\begin{example}\rm{
For a Hom-group $G$, the Hom-group algebra $\mathbb{K}G$ is a bimodule over $G$ via the left and right actions defined by its multiplication, with $\beta=\alpha$.
More precisely, the left action is defined by $g\cdot (ch)= c(gh)$ where $g,h\in G$ and $c\in \mathbb{K}$.
}
\end{example}
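The left $G$-module axiom \eqref{aux-Hom-module} for $\mathbb{K}G$ is a direct consequence of Hom-associativity: for $g,k,h\in G$ and $c\in \mathbb{K}$,
$$(gk)\cdot \alpha(ch)= c\,(gk)\alpha(h)= c\,\alpha(g)(kh)= \alpha(g)\cdot (k\cdot (ch)),$$
and the right module and bimodule conditions are verified in the same way.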
\begin{lemma}\label{right to left-2}
Let $(G, \alpha)$ be a Hom-group. A right $G$-module $M$ can be turned into a left $G$-module $\widetilde{M}=M$ by the left action
\begin{equation}
g\cdot m:= m\cdot g^{-1}.
\end{equation}
\end{lemma}
\begin{proof} This is because
\begin{align*}
&(gk)\cdot\beta(m)=\beta(m) \cdot (k^{-1}g^{-1})= (m\cdot k^{-1})\cdot \alpha(g^{-1})\\
&= (m\cdot k^{-1})\cdot \alpha(g)^{-1}= \alpha(g)\cdot (m\cdot k^{-1}) =\alpha(g)\cdot (k\cdot m).
\end{align*}
\end{proof}
\begin{example}\rm{
Let $G$ be a Hom-group and $M$ be a right $G$-module. If $\mathbb{K}$ is a field, then the algebraic dual
${{M}}^*= \mathop{\rm Hom}\nolimits(M,\mathbb{K})$ can be turned into a dual left $G$-module by the dual action given by
\begin{equation}
(g\cdot f)(m)= f(m\cdot g).
\end{equation}
}
\end{example}
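Here the structure map on ${{M}}^*$ may be taken to be $\beta_{M^*}(f)=f\circ\beta$ (a natural choice, assumed here since it is not specified above). With it, for $g,h\in G$ and $f\in M^*$,
$$\big(g\cdot(\alpha(h)\cdot f)\big)(m)=(\alpha(h)\cdot f)(m\cdot g)=f\big((m\cdot g)\cdot\alpha(h)\big)=f\big(\beta(m)\cdot(gh)\big)=\beta_{M^*}\big((gh)\cdot f\big)(m),$$
and $(1\cdot f)(m)=f(m\cdot 1)=f(\beta(m))=\beta_{M^*}(f)(m)$, which are exactly the dual left $G$-module conditions \eqref{left-dual-module}.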
\section{Hom-group cohomology}
In this section we introduce Hom-group cohomology for Hom-groups. To do this we use dual modules as the coefficients.
\begin{theorem}\label{Hom-group cohomology for left}
Let $(G, \alpha)$ be a Hom-group and $(M, \beta)$ be a dual left $G$-module. Let $C^n_{Hom}(G, M)$ be the space of all maps $\varphi: G^{\times n}\longrightarrow M$. Then
\begin{equation*}
C_{Hom}^\ast(G, M)=\bigoplus_{n\geq0} C_{Hom}^n(G, M),
\end{equation*}
with the coface maps
\begin{align}\label{aux-cosimplisial-structure-vp}
\begin{split}
&\delta_0\varphi(g_1, \cdots , g_{n+1})=g_1\cdot \varphi(\alpha(g_2), \cdots , \alpha(g_{n+1})),\\
&\delta_i\varphi(g_1 , \cdots ,g_{n+1})=\beta(\varphi(\alpha(g_1), \cdots , g_i g_{i+1}, \cdots , \alpha(g_{n+1}))), ~~ 1\leq i \leq n,\\
&\delta_{n+1}\varphi(g_1, \cdots , g_{n+1})= \beta(\varphi(\alpha(g_1), \cdots , \alpha(g_{n}))),
\end{split}
\end{align}
is a cosimplicial module.
\end{theorem}
\begin{proof}
We need to show that $\delta_i \delta_j= \delta_j \delta_{i-1}$ for $j< i$. Let us first show that $\delta_1\delta_0=\delta_0\delta_0$.
\begin{align*}
\delta_0(\delta_0\varphi)(g_1,\cdots , g_{n+2}) &=g_1\cdot \delta_0\varphi(\alpha(g_2),\cdots ,\alpha(g_{n+2}))\\
&= g_1\cdot (\alpha(g_2)\cdot\varphi(\alpha^2(g_3),\cdots ,\alpha^2(g_{n+2}))) \\
& =\beta((g_1g_2)\cdot\varphi(\alpha^2(g_3),\cdots ,\alpha^2(g_{n+2}))) \\
&= \beta(\delta_0\varphi(g_1g_2, \alpha(g_3),\cdots ,\alpha(g_{n+2})))\\
&=\delta_1\delta_0\varphi(g_1,\cdots ,g_{n+2}).
\end{align*}
We used the dual left module property in the third equality. Now we show that $\delta_{n+1} \delta_n= \delta_n \delta_{n}$.
\begin{align*}
\delta_{n+1} \delta_n\varphi(g_1,\cdots, g_{n+1}) &=\beta(\delta_n\varphi(\alpha(g_1), \cdots, \alpha(g_{n})))\\
&=\beta^2(\varphi(\alpha(g_1), \cdots, \alpha(g_{n-1})))\\
&=\beta(\delta_n\varphi(\alpha(g_1),\cdots,\alpha( g_{n-1}), \alpha(g_{n}g_{n+1})))\\
&=\beta(\delta_n\varphi(\alpha(g_1),\cdots,\alpha( g_{n-1}), \alpha(g_{n})\alpha(g_{n+1})))\\
&=\delta_n \delta_{n}\varphi(g_1,\cdots, g_{n+1}).
\end{align*}
We used the multiplicativity of $\alpha$ in the fourth equality.
The following demonstrates that $\delta_{n+1}\delta_0=\delta_0\delta_{n}$. We have
\begin{align*}
\delta_{n+1}\delta_0\varphi(g_1,\cdots , g_{n+1}) &=\beta(\delta_0\varphi(\alpha(g_1),\cdots ,\alpha(g_{n})))\\
&=\beta(\alpha(g_1)\cdot \varphi(\alpha^2(g_2),\cdots ,\alpha^2(g_{n})))\\
&=g_1 \cdot\beta(\varphi(\alpha^2(g_2),\cdots ,\alpha^2(g_{n})))\\
&=g_1\cdot \delta_n\varphi(\alpha(g_2),\cdots, \alpha(g_{n+1}))\\
&=\delta_0\delta_n\varphi(g_1,\cdots ,g_{n+1}).
\end{align*}
We used Lemma \ref{result of left dual module} in the third equality.
The relations $\delta_{j+1} \delta_j= \delta_j \delta_{j}$ follow from the Hom-associativity of $G$.
\end{proof}
Similarly, we have the following result.
\begin{proposition}
Let $(G, \alpha)$ be a Hom-group and $(M, \beta)$ be a dual right $G$-module. Let $C^n_{Hom}(G, M)$ be the space of all maps $\varphi: G^{\times n}\longrightarrow M$. Then
\begin{equation*}
C_{Hom}^\ast(G, M)=\bigoplus_{n\geq0} C_{Hom}^n(G, M),
\end{equation*}
with the coface maps
\begin{align}\label{aux-cosimplisial-structure-vp}
\begin{split}
&\delta_0\varphi(g_1, \cdots , g_{n+1})= \varphi(\alpha(g_1), \cdots , \alpha(g_{n}))\cdot g_{n+1},\\
&\delta_i\varphi(g_1 , \cdots ,g_{n+1})=\beta(\varphi(\alpha(g_1), \cdots , g_ig_{i+1}, \cdots , \alpha(g_{n+1}))), ~~ 1\leq i \leq n,\\
&\delta_{n+1}\varphi(g_1, \cdots , g_{n+1})= \beta(\varphi(\alpha(g_2), \cdots , \alpha(g_{n+1}))),
\end{split}
\end{align}
is a cosimplicial module.
\end{proposition}
\begin{proof}
Here we show that $\delta_{n+1}\delta_0=\delta_0\delta_{n}$.
\begin{align*}
\delta_{n+1}\delta_0\varphi(g_1,\cdots , g_{n+1}) &=\beta(\delta_0\varphi(\alpha(g_2),\cdots , \alpha(g_{n+1})))\\
&=\beta(\varphi(\alpha^2(g_2),\cdots ,\alpha^2(g_{n}))\cdot\alpha(g_{n+1}))\\
&= \beta(\varphi(\alpha^2(g_2),\cdots, \alpha^2(g_{n})))\cdot g_{n+1}\\
&= \delta_n\varphi(\alpha(g_1),\cdots ,\alpha(g_{n}))\cdot g_{n+1}\\
&=\delta_0\delta_n\varphi(g_1,\cdots , g_{n+1}).
\end{align*}
We used Lemma \ref{result of left dual module} in the third equality.
The rest of the relations can be proved as in Theorem \ref{Hom-group cohomology for left}.
\end{proof}
Now we define the coboundary $b= \sum_{i=0}^{n+1}(-1)^i \delta_i$. The previous theorem and proposition imply $b^2=0$.
The cohomology of the cochain complex
$$
\begin{CD}
0 @>b>> C_{Hom}^0(G, M) @>b>> C_{Hom}^1(G, M) @>b>> C_{Hom}^2(G, M) @>b>> C_{Hom}^3(G, M) \ldots
\end{CD}
$$
is called the Hom-group cohomology of $G$ with coefficients in $M$. Here $M= C_{Hom}^0(G, M)$.
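For later reference, with coefficients in a dual right $G$-module the coface maps above give, on a $1$-cochain $f:G\longrightarrow M$ and a $0$-cochain $m\in M$,
$$bf(g,h)= f(\alpha(g))\cdot h-\beta(f(gh))+\beta(f(\alpha(h))),\qquad bm(g)=m\cdot g-\beta(m).$$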
The following proposition shows the relation between Hom-group cohomology with coefficients in dual left and dual right modules.
\begin{proposition}
Let $(G, \alpha)$ be a Hom-group and $M$ be a dual right $G$-module. Then $\widetilde{M} =M$ with the left action $g\cdot m= m\cdot g^{-1}$ is a dual left $G$-module. Furthermore
$$H^*(G, M)\cong H^*(G, \widetilde{M}).$$
\end{proposition}
\begin{proof}
The space $\widetilde{M}$ is a dual left $G$-module by Lemma \ref{right to left}.
We define
$$F: C^n(G, \widetilde{M})\longrightarrow C^n(G, M),$$ given by
$$F(\varphi)(g_1, \cdots , g_n)=\varphi(g_n^{-1}, \cdots , g_1^{-1}).$$
Here we show $F \delta^{\widetilde{M}}_0= \delta^{M}_0 F$, where $\delta^{\widetilde{M}}$ and $\delta^{M}$ stand for the coface maps when
the coefficients are $\widetilde{M}$ and $M$, respectively.
\begin{align*}
F \delta^{\widetilde{M}}_0\varphi(g_1, \cdots , g_{n+1})&=\delta^{\widetilde{M}}_0\varphi(g_{n+1}^{-1}, \cdots, g_1^{-1})\\
&=g_{n+1}^{-1}\cdot \varphi(\alpha(g_n^{-1}), \cdots , \alpha(g_1^{-1}))\\
&=\varphi(\alpha(g_n^{-1}), \cdots , \alpha(g_1^{-1}))\cdot g_{n+1}\\
&=\varphi(\alpha(g_n)^{-1}, \cdots , \alpha(g_1)^{-1})\cdot g_{n+1}\\
&= F\varphi(\alpha(g_1), \cdots , \alpha(g_n))\cdot g_{n+1}\\
&=\delta^{M}_0 F\varphi(g_1, \cdots , g_{n+1}).
\end{align*}
Similarly, $F$ commutes with all the $\delta_i$'s and therefore with the coboundary maps $b=\sum_i (-1)^i\delta_i$. Thus $F$ is a
map of cochain complexes and induces a map on the level of cohomology.
Furthermore, $F$ is a bijection on the level of cochain complexes because inverse elements are unique in Hom-groups.
\end{proof}
The following two examples show that the cohomology classes can contain important information about a Hom-group.
\begin{example}{\rm \textbf{($H^0$ and twisted invariant elements)}\\
Let $(G, \alpha)$ be a Hom-group and $M$ be a dual right $G$-module.
Then $$H^0(G, M)= \{ m\in M \mid m\cdot g=\beta(m), ~\forall g\in G\}.$$
So the zeroth cohomology group is the subspace of $M$ consisting of those elements that are invariant under the $G$-action up to $\beta$.
}
\end{example}
\begin{example}
{\rm \textbf{($H^1$ and twisted crossed homomorphisms)}\\
Let $(G, \alpha)$ be a Hom-group and $M$ be a dual right $G$-module. To compute $H^1(G, M)$ we need to compute $\ker b$, which
consists of the 1-cochains $f: G\longrightarrow M$ with $bf(g, h)=0$. This means
$$ f(\alpha(g))\cdot h -\beta(f(gh))+\beta(f(\alpha(h)))=0,$$ or
$$\beta(f(gh))= f(\alpha(g))\cdot h +\beta(f(\alpha(h))).$$
These maps are called twisted crossed homomorphisms of $G$. Also, $\mathrm{Im}\, b$ consists of all $\varphi: G \longrightarrow M$ for which there exists $m\in M$ such that
$\varphi(g)=m\cdot g - \beta(m)$. These maps are called twisted principal crossed homomorphisms of $G$. Therefore the first cohomology group is the quotient of the
twisted crossed homomorphisms by the twisted principal crossed homomorphisms.
}
\end{example}
\begin{example}{\rm
We recall that for a Hom-group $G$ the Hom-group algebra $V=\mathbb{K}G$ is a $G$-bimodule via the multiplication of $G$.
Therefore, by the examples of the previous section, $(\mathbb{K}G)^\ast$ is a dual $G$-bimodule.
Now we consider the Hom-group cohomology of $G$ with coefficients in the dual $G$-bimodule $(\mathbb{K}G)^\ast$.
We show that the coboundary map can be written differently in this case.
One identifies $\varphi\in C^n(G,(\mathbb{K}G)^\ast)$ with
\begin{equation*}\label{aux-identification}
\phi:G^{\times\,(n+1)}\longrightarrow \mathbb{K},\qquad \phi(g_0, g_1, \cdots, g_n):=\varphi(g_1, \cdots, g_n)(g_0).
\end{equation*}
As a result, the coboundary map becomes
\begin{align*}
b:C^n(G,(\mathbb{K}G)^\ast)&\longrightarrow C^{n+1}(G,(\mathbb{K}G)^\ast),\\
b\phi(g_0, \cdots, g_{n+1})&=\phi(g_0g_1, \alpha(g_2) , \cdots , \alpha(g_{n+1}))\\
&\quad+\sum_{j=1}^n (-1)^j\phi(\alpha(g_0), \cdots, g_jg_{j+1}, \cdots , \alpha(g_{n+1}))\\
&\quad+(-1)^{n+1} \phi(g_{n+1}g_0, \alpha(g_1), \cdots, \alpha(g_n)).
\end{align*}
Also, the cosimplicial structure translates into
\begin{align}\label{aux-cosimplisial-structure-phi}
\begin{split}
&\delta_0\phi(g_0 , \cdots , g_n)= \phi(g_0g_1, \alpha(g_2) , \cdots, \alpha(g_{n})),\\
&\delta_i\phi(g_0 , \cdots , g_n)=\phi(\alpha(g_0) , \cdots , g_i g_{i+1} , \cdots , \alpha(g_{n})), ~~ 1\leq i \leq n-1,\\
&\delta_{n}\phi(g_0 , \cdots , g_n)=\phi(g_{n}g_0, \alpha(g_1), \cdots ,\alpha(g_{n-1})).
\end{split}
\end{align}
}
\end{example}
The following proposition shows the functoriality of Hom-group cohomology with certain coefficients.
\begin{proposition}
Let $(G, \alpha_{G})$ and $(G', \alpha_{G'})$ be two Hom-groups. Then any morphism $f: G\longrightarrow G'$ of Hom-groups induces a map
\begin{equation*}
\widehat{f}: H_{Hom}^n(G', (\mathbb{K}G')^\ast)\longrightarrow H_{Hom}^n(G, ({\mathbb{K}G})^\ast).
\end{equation*}
\end{proposition}
\begin{proof}
We define $F: C^n(G',(\mathbb{K}G')^\ast)\longrightarrow C^n(G,(\mathbb{K}G)^\ast)$ by
$$F\varphi(g_0, \cdots, g_n)= \varphi (f(g_0), \cdots, f(g_n)).$$
The map $F$ commutes with all the coface maps $\delta_i$ in \eqref{aux-cosimplisial-structure-phi} because $f(\alpha_G(g))=\alpha_{G'}(f(g))$ and $f(gk)=f(g) f(k)$. Here we only show that $F$ commutes with $\delta_0$ and leave the other commutativity relations to the reader.
\begin{align*}
&\delta_0^G F\varphi (g_0, \cdots , g_n)\\
&= F\varphi(g_0g_1, \alpha_G(g_2), \cdots , \alpha_G(g_n))\\
&=\varphi(f(g_0g_1), f(\alpha_G(g_2)), \cdots , f(\alpha_G(g_n)))\\
&=\varphi(f(g_0) f(g_1), \alpha_{G'}(f(g_2)), \cdots, \alpha_{G'}(f(g_n)))\\
&=\delta_0^{G'}\varphi(f(g_0), \cdots , f(g_n))\\
& =F\delta_0^{G'}\varphi (g_0, \cdots , g_n).
\end{align*}
\end{proof}
One notes that, even in the case of associative groups, $H^*(G, \mathbb{K}G)$ does not have a similar functoriality property.
This reminds us that the coefficients $(\mathbb{K}G)^*$, as a dual $G$-module, play an important role in Hom-group cohomology.
\begin{example}{\rm \textbf{(Trace $0$-cocycles)}
Let $(G, \alpha)$ be a Hom-group. Using the coface maps in \eqref{aux-cosimplisial-structure-phi} we have
\begin{equation*}
H_{Hom}^{0}(G, (\mathbb{K}G)^*)=\{ \varphi: G\longrightarrow \mathbb{K} \mid \varphi(gh)=\varphi(hg) ~\text{for all}~ g,h\in G\}.
\end{equation*}
More precisely, the $0$-cocycles of $G$ are the trace maps on $G$.
}
\end{example}
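This follows from a short computation with \eqref{aux-cosimplisial-structure-phi}: for a $0$-cochain $\varphi: G\longrightarrow \mathbb{K}$ we have
$$b\varphi(g_0,g_1)=\delta_0\varphi(g_0,g_1)-\delta_1\varphi(g_0,g_1)=\varphi(g_0g_1)-\varphi(g_1g_0),$$
which vanishes for all $g_0, g_1\in G$ exactly when $\varphi$ is a trace map.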
Here we aim to find the relation between the Hom-group cohomology of a Hom-group $G$ and the Hochschild cohomology of the Hom-group algebra $\mathbb{K}G$.
For this we recall the Hochschild cohomology of Hom-algebras introduced in \cite{hss}. First we recall the definition of dual modules for Hom-algebras from \cite{hss}.
Let $(\mathcal{A}, \alpha)$ be a Hom-algebra. A vector space $V$ is called a dual left $\mathcal{A}$-module if there are linear maps $\cdot: \mathcal{A}\otimes V\longrightarrow V$ and $\beta: V\longrightarrow V$ such that
\begin{equation}
a\cdot (\alpha(b)\cdot v)=\beta((ab)\cdot v).
\end{equation}
Similarly, $V$ is called a dual right $\mathcal{A}$-module if $(v\cdot \alpha(a))\cdot b= \beta(v\cdot (ab))$. Finally, we call $V$ a dual $\mathcal{A}$-bimodule if $\alpha(a)\cdot (v\cdot b)=(a\cdot v)\cdot \alpha(b).$
Let $(\mathcal{A}, \alpha)$ be a Hom-algebra, $(M, \beta)$ be a dual $\mathcal{A}$-bimodule and $C^n(\mathcal{A}, M)$ be the space of all $\mathbb{K}$-linear maps $\varphi: \mathcal{A}^{\otimes n}\longrightarrow M$. Then the authors in \cite{hss} showed that
\begin{equation*}
C^\ast(\mathcal{A}, M)=\bigoplus_{n\geq0} C^n(\mathcal{A}, M),
\end{equation*}
with the coface maps
\begin{align}\label{aux-cosimplisial-structure-vp}
\begin{split}
&d_0\varphi(a_1\otimes\cdots\otimes a_{n+1})=a_1\cdot \varphi(\alpha(a_2)\otimes\cdots\otimes \alpha(a_{n+1})),\\
&d_i\varphi(a_1\otimes\cdots\otimes a_{n+1})=\beta(\varphi(\alpha(a_1)\otimes\cdots\otimes a_i a_{i+1}\otimes\cdots\otimes \alpha(a_{n+1}))), ~~ 1\leq i \leq n,\\
&d_{n+1}\varphi(a_1\otimes\cdots\otimes a_{n+1})= \varphi(\alpha(a_1)\otimes\cdots\otimes \alpha(a_{n}))\cdot a_{n+1},
\end{split}
\end{align}
is a cosimplicial module. The cohomology of the complex $(C^\ast(\mathcal{A}, M),b)$, where $b=\sum_{i=0}^{n+1}(-1)^i d_i$, is the Hochschild cohomology of the Hom-algebra $\mathcal{A}$ with coefficients in $M$, and is denoted by $H^\ast(\mathcal{A}, M)$.
The following theorem shows that if the dual left $G$-module
$M$ satisfies the extra condition $\alpha(a)\cdot \beta(m)= \beta(a\cdot m)$, $a\in \mathbb{K}G, m\in M$, then
the group cohomology of a Hom-group $G$ with coefficients in $M$ reduces to the Hochschild cohomology of
the Hom-group algebra $\mathbb{K}G$ with coefficients in the dual $\mathbb{K}G$-bimodule $\widetilde{M}=M$ whose left action
comes from the left action of $G$ and whose right action is trivial. One notes that if $\alpha=\beta=\mathop{\rm Id}\nolimits$ then we
obtain the corresponding well-known result in the associative case.
\begin{theorem}\label{homology-relations}
Let $(G, \alpha)$ be a Hom-group and $M$ a dual left $G$-module.
If $\alpha(a)\cdot \beta(m)= \beta(a\cdot m)$, then
$\widetilde{M}=M$ is a dual $\mathbb{K}G$-bimodule where the left action comes from the
original left action of $G$ and the right action is the trivial action $m\cdot g := m\cdot 1 = \beta(m)$. Furthermore
$$H^*(G, M)\cong H^\ast(\mathbb{K}G,\widetilde{M}).$$
\end{theorem}
\begin{proof}
The condition $\alpha(a)\cdot \beta(m)= \beta(a\cdot m)$ ensures that $\widetilde{M}$, with the given dual left action and the trivial right action $m\cdot g = m\cdot 1 = \beta(m)$, is a dual $G$-bimodule, and therefore a dual $\mathbb{K}G$-bimodule, because
$$\alpha(g)\cdot (m\cdot k)= \alpha(g)\cdot \beta(m)= \beta(g\cdot m)=(g\cdot m)\cdot \alpha(k).$$
Now all the coface maps $d_i$ of the Hochschild cohomology of $\mathbb{K}G$ coincide with the coface maps $\delta_i$ of the group cohomology of $G$. Therefore the identity map $\mathop{\rm Id}\nolimits: C^n(G, M)\longrightarrow C^n(\mathbb{K}G, \widetilde{M})$ induces an isomorphism on the level of complexes.
\end{proof}
One knows that if $G$ is an associative group and $\mathbb{K}G$ the group algebra, then any $\mathbb{K}G$-bimodule $M$ can be turned into a
right (or left) $G$-module by the adjoint action. It is not clear how to obtain a similar
result for Hom-groups, in particular because we do not know how to define the adjoint action for Hom-groups.
\section{Hom-group homology}
In this section we introduce a homology theory for Hom-groups. To do this we use the notion of modules, instead of dual modules, for the coefficients.
\begin{theorem}
Let $(G, \alpha)$ be a Hom-group and $(M, \beta)$ be a right
$G$-module satisfying
$$\beta(m\cdot g)= \beta(m)\cdot \alpha(g).$$
Let $C_n^{Hom}(G, M)= M\times G^{\times n}$. Then
\begin{equation*}
C_\ast^{Hom}(G, M)=\bigoplus_{n\geq0} C_n^{Hom}(G, M),
\end{equation*}
with the face maps
\begin{align}\label{aux-cosimplisial-structure-vp}
\begin{split}
&d_0(m, g_1, \cdots , g_{n})=(m\cdot g_1, \alpha(g_2), \cdots , \alpha(g_{n})),\\
&d_i(m, g_1 , \cdots ,g_{n})=(\beta(m), \alpha(g_1), \cdots , g_i g_{i+1}, \cdots , \alpha(g_{n})), ~~ 1\leq i \leq n-1,\\
&d_{n}(m, g_1, \cdots , g_{n})= (\beta(m), \alpha(g_1), \cdots , \alpha(g_{n-1})),
\end{split}
\end{align}
is a simplicial module.
\end{theorem}
\begin{proof}
First we show $d_0d_0=d_0d_1$.
\begin{align*}
d_0d_0(m, g_1, \cdots , g_{n})&=d_0(m\cdot g_1, \alpha(g_2), \cdots , \alpha(g_{n}))\\
&=((m\cdot g_1)\cdot \alpha(g_2), \alpha^2(g_3), \cdots , \alpha^2(g_{n}))\\
&= (\beta(m)\cdot (g_1g_2), \alpha^2(g_3), \cdots , \alpha^2(g_{n}))\\
&= d_0(\beta(m), g_1g_2, \alpha(g_3), \cdots , \alpha(g_{n}))\\
&=d_0d_1(m, g_1, \cdots , g_{n}).
\end{align*}
We used the right $G$-module property in the third equality. Next we show $d_0 d_j=d_{j-1}d_0$ for $j> 1$.
\begin{align*}
d_0d_j(m, g_1,&\cdots, g_n)\\
&=d_0(\beta(m), \alpha(g_1), \cdots, g_j g_{j+1}, \cdots, \alpha(g_n)) \\
&=(\beta(m)\cdot\alpha(g_1), \alpha^2(g_2), \cdots, \alpha(g_j g_{j+1}), \cdots, \alpha^2(g_n)) \\
&=(\beta(m\cdot g_1), \alpha^2(g_2), \cdots, \alpha(g_j)\alpha(g_{j+1}), \cdots, \alpha^2(g_n)) \\
&=d_{j-1}(m\cdot g_1, \alpha(g_2), \cdots, \alpha(g_n)) \\
&=d_{j-1}d_0(m, g_1, \cdots, g_n).
\end{align*}
We used the condition $\beta(m\cdot g)= \beta(m)\cdot\alpha(g)$ in the third equality. Now we show $d_0d_n=d_{n-1}d_0$. We have
\begin{align*}
d_0d_n(m, g_1&, \cdots, g_n)\\
&=d_0(\beta(m), \alpha(g_1),\cdots , \alpha(g_{n-1}))\\
&=(\beta(m)\cdot \alpha(g_1), \alpha^2(g_2), \cdots, \alpha^2(g_{n-1}))\\
&=(\beta(m\cdot g_1), \alpha^2(g_2),\cdots, \alpha^2(g_{n-1}))\\
&=d_{n-1}(m\cdot g_1, \alpha(g_2),\cdots, \alpha(g_n))\\
&=d_{n-1}d_0(m, g_1,\cdots, g_n).
\end{align*}
Now we show $d_i d_n=d_{n-1}d_i$ for $1\leq i\leq n-1$.
\begin{align*}
d_i d_n(m, g_1&, \cdots ,g_n)\\
&=d_i (\beta(m), \alpha(g_1), \cdots, \alpha(g_{n-1}))\\
&= (\beta^2(m), \alpha^2(g_1), \cdots, \alpha(g_i)\alpha(g_{i+1}), \cdots , \alpha^2(g_{n-1}))\\
&= d_{n-1}(\beta(m), \alpha(g_1), \cdots, g_i g_{i+1}, \cdots, \alpha(g_n))\\
&=d_{n-1}d_i(m, g_1, \cdots ,g_n).
\end{align*}
The remaining commutation relations can be verified in the same way.
\end{proof}
We define the boundary map $b= \sum_{i=0}^{n}(-1)^i d_i$. By the previous theorem we have $b^2=0$.
The homology of the chain complex
$$
\begin{CD}
0 @<b<< M= C^{Hom}_0(G, M) @<b<< C^{Hom}_1(G, M) @<b<< C^{Hom}_2(G, M) @<b<< C^{Hom}_3(G, M) \ldots
\end{CD}
$$
is called the Hom-group homology of $G$ with coefficients in $M$.
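In low degrees, the face maps above give $b(m,g)=d_0(m,g)-d_1(m,g)=m\cdot g-\beta(m)$ on $C_1^{Hom}(G,M)$, so that
$$H_0(G,M)= M\big/\big\langle\, m\cdot g-\beta(m) \mid m\in M,\ g\in G\,\big\rangle,$$
a twisted analogue of the coinvariants of $M$.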
Similarly, one has Hom-group homology with coefficients in left modules, as follows.
\begin{proposition}
Let $(G, \alpha)$ be a Hom-group and $(M, \beta)$ be a left
$G$-module satisfying
$$\beta(g\cdot m)= \alpha(g)\cdot \beta(m).$$
Let $C_n^{Hom}(G, M)=G^{\times n}\times M$. Then
\begin{equation*}
C_\ast^{Hom}(G, M)=\bigoplus_{n\geq0} C_n^{Hom}(G, M),
\end{equation*}
with the face maps
\begin{align}\label{aux-cosimplisial-structure-vp}
\begin{split}
&d_0( g_1, \cdots , g_{n}, m)=(\alpha( g_1), \alpha(g_2), \cdots ,\alpha( g_{n-1}), g_n\cdot m),\\
&d_i(g_1 , \cdots ,g_{n}, m)=( \alpha(g_1), \cdots , g_i g_{i+1}, \cdots , \alpha(g_{n}), \beta(m)), ~~ 1\leq i \leq n-1,\\
&d_{n}(g_1, \cdots , g_{n}, m)= ( \alpha(g_2), \cdots , \alpha(g_{n}), \beta(m)),
\end{split}
\end{align}
is a simplicial module.
\end{proposition}
\begin{proof}
Similar to the previous theorem.
\end{proof}
Here we state the relation between Hom-group homology with coefficients in left and right modules.
\begin{proposition}
Let $(G, \alpha)$ be a Hom-group and $M$ be a right $G$-module.
Then $\widetilde{M} =M$ with the left action $g\cdot m= m\cdot g^{-1}$ is a left $G$-module and
$$H_*(G, M)\cong H_*(G, \widetilde{M}).$$
\end{proposition}
\begin{proof}
The space $\widetilde{M}$ is a left module by Lemma \ref{right to left-2}.
We set
$$F: C_n(G, \widetilde{M})\longrightarrow C_n(G, M),$$ given by
$$F(g_1, \cdots , g_n, m)=(m, g_n^{-1}, \cdots , g_1^{-1}).$$
Here we show $d_0^M F= F d_0^{\widetilde{M}}$.
\begin{align*}
&d_0^M F( g_1, \cdots , g_{n}, m)\\
&=d_0^M(m, g_n^{-1}, \cdots , g_1^{-1})\\
&=(m\cdot g_n^{-1}, \alpha(g_{n-1}^{-1}), \cdots , \alpha(g_1^{-1}))\\
&=(m\cdot g_n^{-1}, \alpha(g_{n-1})^{-1}, \cdots , \alpha(g_1)^{-1})\\
&=F( \alpha(g_1), \cdots , \alpha(g_{n-1}), m\cdot g_n^{-1})\\
&=F( \alpha(g_1), \cdots , \alpha(g_{n-1}), g_n\cdot m)\\
&=F d_0^{\widetilde{M}}( g_1, \cdots , g_{n}, m).
\end{align*}
Now we prove
$d_n^M F= F d_n^{\widetilde{M}}$.
\begin{align*}
&d_n^M F( g_1, \cdots , g_{n}, m)\\
&= d_n^M(m, g_n^{-1}, \cdots , g_1^{-1})\\
&= (\beta(m), \alpha(g_n^{-1}), \cdots , \alpha(g_2^{-1}))\\
&=(\beta(m), \alpha(g_n)^{-1}, \cdots , \alpha(g_2)^{-1})\\
&=F(\alpha( g_2), \cdots , \alpha(g_{n}), \beta(m))\\
&= F d_n^{\widetilde{M}}( g_1, \cdots , g_{n}, m).
\end{align*}
Similarly, $F$ commutes with the remaining faces, and it is a bijection since inverse elements in a Hom-group are unique.
\end{proof}
Here we show that the Hom-group homology of a Hom-group with coefficients in a right (left) module
reduces to the Hochschild homology of the Hom-group algebra with coefficients in a certain bimodule.
To do this, we recall that the authors in \cite{hss} introduced the Hochschild homology of a Hom-algebra $A$ as follows.
Let $(A,\mu, \alpha)$ be a Hom-algebra and $(V, \beta)$ be an $A$-bimodule \cite{hss} such that
$$\beta(v\cdot a) = \beta(v)\cdot \alpha(a) \quad \text{and} \quad \beta(a\cdot v)=\alpha(a)\cdot \beta(v).$$ Then
\begin{equation*}
C^{Hom}_\ast(A, V)=\bigoplus_{n\geq 0}C^{Hom}_n(A, V),\qquad C^{Hom}_n(A, V):=V\otimes A^{\otimes n},
\end{equation*}
with the face maps
\begin{align*}
&\delta_0(v\otimes a_1\otimes \cdots \otimes a_{n})= v \cdot a_1 \otimes \alpha(a_2)\otimes \cdots \otimes \alpha(a_{n}),\\
&\delta_i(v\otimes a_1\otimes \cdots \otimes a_{n})=\beta(v)\otimes \alpha(a_1)\otimes \cdots \otimes a_i a_{i+1}\otimes \cdots\otimes \alpha(a_{n}), ~~ 1\leq i \leq n-1,\\
&\delta_{n}(v\otimes a_1\otimes \cdots \otimes a_n)= a_{n} \cdot v \otimes \alpha(a_1)\otimes \cdots\otimes \alpha(a_{n-1}),
\end{align*}
is a simplicial module.
Similar to the cohomology case, we have the following result.
\begin{theorem}
Let $(G, \alpha)$ be a Hom-group and $M$ be a right $G$-module satisfying $\beta(m\cdot g)= \beta(m) \cdot \alpha(g)$.
Let $\widetilde{M}=M$ be endowed with the trivial left action $ g \cdot m:= m\cdot 1 = \beta(m)$. Then $\widetilde{M}$ is a $\mathbb{K}G$-bimodule and furthermore
$$H_*(G, M)\cong H_\ast(\mathbb{K}G, \widetilde{M}).$$
\end{theorem}
\begin{proof}
The condition $\beta(m\cdot g)= \beta(m) \cdot \alpha(g)$ ensures that $\widetilde{M}$, with the given right action and the trivial left action $g \cdot m:= m\cdot 1 = \beta(m)$, is a $G$-bimodule, and therefore a $\mathbb{K}G$-bimodule, because
$$\alpha(g)\cdot (m\cdot k)= \beta (m\cdot k)= \beta(m) \cdot\alpha(k)=(g\cdot m)\cdot \alpha(k).$$
Therefore all the faces $\delta_i$ of the Hochschild homology of $\mathbb{K}G$ coincide with the faces $d_i$ of the group homology of $G$,
and the identity map $\mathop{\rm Id}\nolimits: C_n(G, M)\longrightarrow C_n(\mathbb{K}G, \widetilde{M})$ induces an isomorphism on the level of complexes.
\end{proof}
The conditions on $M$ in the previous theorem also appeared in other contexts such as \cite{cg}, where the authors used the category of Hom-modules over
Hom-algebras to obtain a monoidal category for modules over Hom-bialgebras.
The functoriality of Hom-group homology is shown as follows.
\begin{proposition}
Let $(G, \alpha_{G})$ and $(G', \alpha_{G'})$ be two Hom-groups. A morphism $f: G\longrightarrow G'$ of Hom-groups induces a map
\begin{equation*}
\widehat{f}: H^{Hom}_n(G, \mathbb{K}G)\longrightarrow H^{Hom}_n(G', \mathbb{K}G')
\end{equation*}
given by $$(g_0, \cdots , g_n)\mapsto (f(g_0), \cdots, f(g_n)).$$
\end{proposition}
\begin{proof}
The map $\widehat{f}$ commutes with all the faces $d_i$ because $f(\alpha_G(g))=\alpha_{G'}(f(g))$ and $f(gk)=f(g) f(k)$.
\end{proof}
\begin{thebibliography}{9}
\bibitem[AEM]{aem} F. Ammar, Z. Ejbehi, and A. Makhlouf, \emph{Cohomology and deformations of Hom-algebras}, J. Lie Theory 21 (2011), no. 4, p. 813-836.
\bibitem[AS]{as} N. Aizawa and H. Sato, \emph{$q$-deformation of the Virasoro algebra with central extension}, Phys. Lett. B 256 (1991), p. 185-190.
\bibitem[BM]{bm} S. Benayadi and A. Makhlouf, \emph{Hom-Lie algebras with symmetric invariant nondegenerate bilinear forms}, J. Geom. Phys. 76 (2014), p. 38-60.
\bibitem[CG]{cg} S. Caenepeel and I. Goyvaerts, \emph{Monoidal Hom-Hopf algebras}, Comm. Algebra 39 (2011), no. 6, p. 2216-2240.
\bibitem[CKL]{ckl} M. Chaichian, P. Kulish, and J. Lukierski, \emph{$q$-deformed Jacobi identity, $q$-oscillators and $q$-deformed infinite-dimensional algebras}, Phys. Lett. B 237 (1990), p. 401-406.
\bibitem[CQ]{cq} Y. Cheng and H. Qi, \emph{Representations of BiHom-Lie algebras}, arXiv:1610.04302v1, (2016).
\bibitem[CZ]{cz} T. L. Curtright and C. K. Zachos, \emph{Deforming maps for quantum algebras}, Phys. Lett. B 243 (1990), p. 237-244.
\bibitem[CS]{cs} A. J. Calder\'on and J. M. S\'anchez, \emph{The structure of split regular BiHom-Lie algebras}, J. Geom. Phys. 110 (2016), p. 296-305.
\bibitem[FG]{fg} Y. Fregier and A. Gohr, \emph{On unitality conditions for hom-associative algebras}, arXiv:0904.4874, (2009).
\bibitem[GMMP]{gmmp} G. Graziani, A. Makhlouf, C. Menini, and F. Panaite, \emph{BiHom-associative algebras, BiHom-Lie algebras and BiHom-bialgebras}, SIGMA 11 (2015), 086, 34 pages.
\bibitem[GW]{gw} S. Guo and S. Wang, \emph{Symmetric pairs and pseudosymmetries in Hom-Yetter-Drinfeld categories}, J. Algebra Appl. 16 (2017), 1750125, 21 pages.
\bibitem[GR]{gr} M. Goze and E. Remm, \emph{On the algebraic variety of Hom-Lie algebras}, arXiv:1706.02484, (2017).
\bibitem[HLS]{hls} J. T. Hartwig, D. Larsson, and S. D. Silvestrov, \emph{Deformations of Lie algebras using $\sigma$-derivations}, J. Algebra 295 (2006), no. 2, p. 314-361.
\bibitem[H1]{h1} M. Hassanzadeh, \emph{Lagrange theorem for Hom-groups}, arXiv:1803.07678, (2018), 13 pages.
\bibitem[H2]{h2} M. Hassanzadeh, \emph{On antipodes of Hom-Hopf algebras}, arXiv:1803.01441, (2018).
\bibitem[HSS]{hss} M. Hassanzadeh, I. Shapiro and S. S\"utl\"u, \emph{Cyclic homology for Hom-associative algebras}, J. Geom. Phys. 98 (2015), p. 40-56.
\bibitem[HMS]{hms} L. Hellstr\"om, A. Makhlouf, and S. D. Silvestrov, \emph{Universal algebra applied to hom-associative algebras, and more}, Algebra, geometry and mathematical physics, Springer Proc. Math. Stat., vol. 85, Springer, Heidelberg, (2014), p. 157-199.
\bibitem[LS]{ls} D. Larsson and S. D. Silvestrov, \emph{Quasi-hom-Lie algebras, central extensions and 2-cocycle-like identities}, J. Algebra 288 (2005), no. 2, p. 321-344.
\bibitem[LMT]{lmt} C. Laurent-Gengoux, A. Makhlouf, and J. Teles, \emph{Universal algebra of a Hom-Lie algebra and group-like elements}, J. Pure Appl. Algebra 222 (2018), no. 5, p. 1139-1163.
\bibitem[MS1]{ms1} A. Makhlouf and S. D. Silvestrov, \emph{Hom-algebra structures}, J. Gen. Lie Theory Appl. 2 (2008), no. 2, p. 51-64.
\bibitem[MS2]{ms2} A. Makhlouf and S. Silvestrov, \emph{Hom-Lie admissible Hom-coalgebras and Hom-Hopf algebras}, Generalized Lie theory in mathematics, physics and beyond, Springer, Berlin, (2009), p. 189-206.
\bibitem[MS3]{ms3} A. Makhlouf and S. Silvestrov, \emph{Hom-algebras and Hom-coalgebras}, J. Algebra Appl. 9 (2010), no. 4, p. 553-589.
\bibitem[MS4]{ms4} A. Makhlouf and S. Silvestrov, \emph{Notes on 1-parameter formal deformations of Hom-associative and Hom-Lie algebras}, Forum Math. 22 (2010), no. 4, p. 715-739.
\bibitem[PSS]{pss} F. Panaite, P. Schrader, and M. D. Staic, \emph{Hom-tensor categories and the Hom-Yang-Baxter equation}, arXiv:1702.08475, (2017).
\bibitem[Ya1]{ya1} D. Yau, \emph{Hom-algebras and homology}, J. Lie Theory 19 (2009), no. 2, p. 409-421.
\bibitem[Ya2]{ya2} D. Yau, \emph{Hom-bialgebras and comodule Hom-algebras}, Int. Electron. J. Algebra 8 (2010), p. 45-64.
\bibitem[Ya3]{ya3} D. Yau, \emph{Hom-quantum groups: I. Quasi-triangular Hom-bialgebras}, J. Phys. A 45 (2012), no. 6.
\bibitem[Ya4]{ya4} D. Yau, \emph{Enveloping algebras of Hom-Lie algebras}, J. Gen. Lie Theory Appl. 2 (2008), p. 95-108.
\end{thebibliography}
\end{document}
\begin{document}
\title{Optimal Exploitation of a Resource \\ with Stochastic Population Dynamics\\ and Delayed Renewal}
\author{Idris Kharroubi \footnote{The research of the author benefited from the support of the French ANR research grant LIQUIRISK} \\
\footnotesize{Sorbonne Universit\'e,} \\
\footnotesize{LPSM} \\
\footnotesize{CNRS, UMR 8001}\\
\footnotesize{\texttt{idris.kharroubi @ upmc.fr}}\and
Thomas Lim \footnote{The research of the author benefited from the support of the ``Chaire March\'e en mutation'', F\'ed\'eration Bancaire Fran\c caise}\\\footnotesize{ENSIIE,}\\
\footnotesize{LaMME} \\
\footnotesize{CNRS UMR 8071} \\
\footnotesize{\texttt{lim @ ensiie.fr} }
\and
Vathana Ly Vath \footnote{The research of the author benefited from the support of the ``Chaire March\'e en mutation'', F\'ed\'eration Bancaire Fran\c caise}\\\footnotesize{ENSIIE,}\\
\footnotesize{LaMME} \\
\footnotesize{CNRS UMR 8071} \\
\footnotesize{\texttt{lyvath @ ensiie.fr} }}
\maketitle
\begin{abstract}
In this work, we study the problem of optimally exploiting a renewable
resource in finite time. The resource is assumed to evolve
according to a logistic stochastic differential equation. The
manager may partially harvest the resource at any time and sell it
at a stochastic market price. She may also decide to renew part
of the resource, but only at deterministic times. Moreover, we
realistically assume that renewal orders are executed with a delay.
Using dynamic programming theory, we obtain a PDE
characterization of the value function. To complete our study, we
provide an algorithm to compute the value function and an optimal
strategy, together with some numerical illustrations.
\end{abstract}
\noindent {\bf Key words}~: impulse control, renewable resource,
optimal harvesting, execution delay, viscosity solutions, state
constraints.
\noindent {\bf MSC Classification (2010)~: 93E20, 62L15, 49L20, 49L25, 92D25}
\section{Introduction}
The management of renewable resources is fundamental for the
survival and growth of the human population. An excessive
exploitation of such resources may lead to their extinction and
may therefore affect the economies of dependent populations with,
for instance, sharp price increases and higher uncertainty about
the future. Typical examples are fishery \cite{C90, GKL05,
LW76} or forest management \cite{AOK09, CR89}. Most early studies
in fishery or forest management mainly focused on
identifying the optimal harvesting policy. In forest economics
literature, it may be illustrated by the well-known
``tree-cutting'' problem. The most basic ``tree-cutting'' problem
is about identifying the optimal time to harvest a given forest.
Studies extending this initial tree-cutting problem have been
carried out by many authors. We may, for instance, refer to
\cite{CR89} and \cite{RC90}, where the authors investigate both
single and ongoing rotation problems under stochastic prices and
forest's age or size. Rotation problem means once all the trees
are harvested, plantation takes place and planted trees may grow
up to the next harvest. In terms of mathematical formulation,
rotation problem may be reduced to an iterative optimal stopping
problem. In \cite{LSHA07}, the authors go a step further by
studying optimal replanting strategy. To be more precise, they
analyze optimal tree replanting on an area of recently harvested
forest land. However, the attempt to incorporate replanting policy
in the study of tree-cutting problem remains relatively very few,
especially when delay has to be taken into account. Indeed, the
renewed resources need some delay to become available for
harvesting. There is also an uncertainty on the renewed
quantities. In other words, the resource obtained after a renewing
decision may differ from the expected one due to some losses.
To our knowledge, the above aspects are not taken into account
in the existing literature on renewable resource management. The
aim of this paper is precisely to provide a more realistic model
for the study of optimal exploitation problems of renewable
resources by taking all of the above features into account.
We suppose that the resource population evolves according to a
stochastic logistic diffusion model. Such a logistic dynamics is
classic in the modelling of population evolution. The stochastic
aspect allows us to take into account the uncertainties of the
evolution. Since the interventions of the manager are not
continuous in practice, we consider a stochastic impulse control
problem on the resource population. We suppose that the operator
has the ability to act on the resource population through two
types of interventions. First, the manager may decide to harvest
the resource and sell the harvested resource at a given exogenous
market price. The second kind of intervention consists in renewing
the resource. Due to physical or biological constraints, the
effect of renewing orders may have some delay, i.e. a lag between
the times at which renewing decisions are taken and the times at
which renewed quantities appear in the global inventory of the
available resources. Renewing or harvesting orders are assumed
to carry both fixed and proportional costs.
From a mathematical point of view, control problems with delay
have been studied in \cite{BP09} and \cite{OS08}, where all
interventions are delayed. Our model may be considered as more
general since some interventions are delayed while some others are
not. Another novelty of our model is the state constraints.
Indeed, the level of owned resource is a physical quantity, and
hence cannot be negative. Control problems under state
constraints, but without delay, have been studied in the
literature, see for instance \cite{MLP07} for the study of optimal
portfolio management under liquidity constraints. To deal with
such problems, the usual approach is to consider the notion of
constrained viscosity solutions introduced by Soner in
\cite{S86I,S86II}. This definition means that the value function
associated to the constrained problem is a viscosity solution in
the interior of the domain and only a semi-solution on the
boundary. In particular, the uniqueness of the viscosity solution
is usually obtained only on the interior of the domain.
In our case, we are able to characterize the behavior of the value
function on the boundary by deriving the PDE satisfied on the
frontier of the constrained domain. We therefore get the
uniqueness property of the value function on the whole closure of
the constrained domain. As a by-product, we obtain the continuity
of the value function on the closure of the domain (except at
renewing dates), which improves the existing literature where this
property is obtained only on the interior of the domain, see for
instance \cite{MLP07}.
To complete our study, we provide an algorithm to compute the
value function and an associated strategy that is expected to be
optimal, and we apply this algorithm to a specific example.
The rest of the paper is organized as follows. In Section
\ref{sec2}, we describe the model and the associated impulse
control problem. In Section \ref{sec3}, we give a characterization
of the value function as the unique viscosity solution to a PDE in
the class of functions satisfying a given growth condition. In
Section \ref{sec4}, we provide an algorithm to compute the value
function and an optimal strategy. Finally Section \ref{sec5} is
devoted to the proof of the main results.
\section{Problem formulation}\label{sec2}
\subsection{The control problem}
Let $(\Omega, \Fc, \P)$ be a complete probability space, equipped
with two mutually independent one-dimensional standard Brownian
motions $B$ and $W$. We denote by $\F:= (\Fc_t)_{t \geq 0}$ the
right-continuous and complete filtration generated by $B$ and $W$.
We consider a manager who owns a field of some given resource,
which may be exploited up to a finite horizon time $T>0$. The aim
of the manager is to optimally manage this resource in order to
maximize the expected terminal wealth that may be extracted.
In resource management, the manager may decide to either harvest
part of the resource or renew it. Resource renewal may be done
only at discrete times $(t_i)_{1 \leq i \leq n}$ with $t_i=i{T\over
n}$, where $n \in \N^*$. We consider an impulse control strategy $\alpha = (t_i,
\xi_i)_{1 \leq i \leq n} \cup (\tau_k, \zeta_k)_{ k \geq 1}$ where
\begin{itemize}
\item $\xi_i$, $1 \leq i \leq n$, is an $\Fc_{t_i}$-measurable
random variable valued in the compact set $[0,K]$, where $K$ is a
positive constant representing the maximal quantity of resource
that the manager can renew at once; $\xi_i$ corresponds to the
quantity of resource renewed at time $t_i$,
\item $(\tau_k)_{k\geq 1}$ is a nondecreasing finite or infinite
sequence of $\F$-stopping times representing the harvest times
before $T$,
\item $\zeta_k$, $k \geq 1$, is an $\Fc_{\tau_k}$-measurable random
variable, valued in $\R_+$, corresponding to the harvested
quantity of resource at time $\tau_k$.
\end{itemize}
We assume the quantity of resource renewed at time $t_i$ cannot be
harvested before time $t_i + \delta$ for any $1 \leq i \leq n$
where $\delta=m{T\over n}$ with $m$ a nonnegative integer. We
suppose that for a given quantity $\xi_i$ of resource renewed at
time $t_i$, the manager may get an additional $g(\xi_i)$
harvestable resource at time $t_i+\delta=t_{i+m}$, with $g$ being
a function satisfying the following assumption.
\ni \textbf{(H$g$)} $g:~\R_+\rightarrow\R_+$ is a nondecreasing
and Lipschitz continuous function: there exists a positive
constant $L$ such that \begin{eqnarray*}
|g(x)-g(x')| & \leq & L|x-x'| \;,
\end{eqnarray*}
for all $x,x'\in\R_+$.
For a given strategy $\alpha=(t_i, \xi_i)_{1 \leq i \leq n} \cup
(\tau_k, \zeta_k)_{ k \geq 1}$, we denote by $R^\alpha_t$ the
associated size of resource which is available for harvesting at
time $t$. When no intervention of the manager occurs, the
evolution of the process $R^\alpha$ is assumed to be governed by the
logistic stochastic differential equation
\begin{eqnarray}\label{edsR}
dR^\alpha_t & = & \eta R^\alpha_t(\lambda-
R^\alpha_t)dt+\gamma R^\alpha_t dB_t\;,
\end{eqnarray}
where $\eta$,
$\lambda$ and $\gamma$ are three positive constants. Since at each
time $\tau_k$, the quantity $\zeta_k$ is harvested we have
\begin{eqnarray*}
R^\alpha_{\tau_k} & = & R^\alpha_{\tau_k^-}-\zeta_k\;.
\end{eqnarray*}
Moreover, we suppose that there is a natural renewal of the
resource at each time $t_i$ of a deterministic quantity
$g_0\geq0$. Since the renewed quantity $\xi_i$ at time $t_i$ only
appears in the total resource at time $t_i+\delta=t_{i+m}$ and
increases it by $g(\xi_i)$, we have
\begin{eqnarray*}
R^\alpha_{t_{i}} & = & R^\alpha_{t_{i}^-}+g_0+g(\xi_{i-m}) \;,
\end{eqnarray*}
for $i=m + 1,\ldots,n$, and
\begin{eqnarray*}
R^\alpha_{t_i} & = & R^\alpha_{t_i^-}+g_0 \;,
\end{eqnarray*}
for $i=1,\ldots,m$.
\ni The process $R^\alpha$ is then given by
\begin{eqnarray} \nonumber
R^\alpha_t & = & R_0+\int_0^t\eta R^\alpha_s(\lambda- R^\alpha_s)ds+\int_0^t\gamma R^\alpha_s dB_s\\
& & - \sum_{k \geq 1} \zeta_k \1_{\tau_k \leq t} + \sum_{i = 1}^n
g(\xi_i) \1_{t_{i + m} \leq t}+ g_0 \sum_{i = 1}^n \1_{t_i \leq
t}\;,\quad t\geq 0\;. \label{eq croissance 1}
\end{eqnarray}
We assume that the price $P$ per unit of the resource
is governed by the following stochastic differential equation \begin{eqnarray}
\label{eq prix 1} P_t &=&P_0+\int_0^t\mu P_u du + \int_0^t\sigma
P_u dW_u \;, \quad t \geq 0 \;, \end{eqnarray} with $\mu$ and $\sigma$ two
positive constants.
We also denote by $Q_t$ the cost at time $t$ of renewing one unit of the
resource. We suppose that it follows the stochastic
differential equation \begin{eqnarray} \label{eq prix 2} Q_t &=& Q_0 +
\int_0^t\rho Q_udu + \int_0^t\varsigma Q_udW_u \;, \quad t \geq 0
\;, \end{eqnarray}
where $\rho$ and $\varsigma$ are two positive constants.
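\noindent To fix ideas, the uncontrolled dynamics \reff{edsR}, \reff{eq prix 1} and \reff{eq prix 2} can be simulated directly between two intervention dates. The following sketch (in Python) is purely illustrative: it uses an Euler scheme for $R$ and exact log-normal steps for $P$ and $Q$; the values of $\eta$, $\lambda$, $\gamma$, $\mu$ and $\sigma$ are those of the example of Section \ref{sec4}, while $\rho$, $\varsigma$ and the initial conditions are arbitrary placeholders.
\begin{verbatim}
import numpy as np

# Euler scheme for the logistic resource R (driven by B) and exact log-normal
# steps for the prices P and Q (both driven by the same Brownian motion W).
eta, lam, gamma = 1.0, 0.7, 0.1   # logistic parameters (example of Section 4)
mu, sigma = 0.07, 0.1             # market price P (example of Section 4)
rho, varsig = 0.07, 0.1           # renewal cost Q (placeholders)
T, n_steps = 3.0, 3000
dt = T / n_steps
rng = np.random.default_rng(0)

R, P, Q = 0.5, 1.0, 1.0           # arbitrary initial conditions
for _ in range(n_steps):
    dB, dW = rng.normal(0.0, np.sqrt(dt), size=2)   # independent increments
    R = max(R + eta * R * (lam - R) * dt + gamma * R * dB, 0.0)
    P *= np.exp((mu - 0.5 * sigma ** 2) * dt + sigma * dW)
    Q *= np.exp((rho - 0.5 * varsig ** 2) * dt + varsig * dW)
print(R, P, Q)
\end{verbatim}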
For a given strategy $\alpha=(t_i, \xi_i)_{1 \leq i \leq n} \cup
(\tau_k, \zeta_k)_{ k \geq 1}$, there are several costs that the
manager has to face.
\begin{itemize}
\item At each time $\tau_k$, the manager has to pay a cost
$c_1\zeta_k+c_2$ to harvest the quantity $\zeta_k$, where $c_1$
and $c_2$ are two positive constants. As such, by selling the
harvested quantity $\zeta_k$ at price $P_{\tau_k}$, she may get
$(P_{\tau_k} - c_1) \zeta_k - c_2$ at time $\tau_k$.
\item To renew quantity $\xi_i$ of resource at time $t_i$, the
manager has to pay $(Q_{t_i} + c_3)\xi_i$, where $c_3$ is a
positive constant.
\end{itemize}
Given a control $\alpha=(t_i, \xi_i)_{1 \leq i \leq n} \cup
(\tau_k, \zeta_k)_{ k \geq 1}$ and an initial wealth $X_0$, the
wealth process $X^\alpha $ may be expressed as follows
\begin{eqnarray*} X_t^\alpha &=& X_0+\sum_{k \geq 1} \big[ (P_{\tau_k} -
c_1) \zeta_k - c_2 \big] \1_{\tau_k \leq t}
- \sum_{i = 1}^n (Q_{t_i} + c_3) \xi_i \1_{t_i \leq t}\;.
\end{eqnarray*}
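\noindent For concreteness, given realized prices and a finite set of interventions, the terminal wealth $X^\alpha_T$ is a plain bookkeeping sum; a minimal sketch (in Python; the cost values and the example figures are arbitrary) reads as follows.
\begin{verbatim}
# Terminal wealth X_T for a finite strategy.
# harvests: list of (price at harvest time, harvested quantity zeta_k);
# renewals: list of (renewal cost at order time, ordered quantity xi_i).
def terminal_wealth(x0, harvests, renewals, c1=0.1, c2=0.01, c3=0.1):
    x = x0
    for p_tau, zeta in harvests:   # sell zeta at price p_tau, pay c1*zeta + c2
        x += (p_tau - c1) * zeta - c2
    for q_ti, xi in renewals:      # pay (q_ti + c3) * xi for the renewal order
        x -= (q_ti + c3) * xi
    return x

print(terminal_wealth(0.0, harvests=[(1.2, 0.3), (1.1, 0.2)],
                      renewals=[(1.0, 0.25)]))
\end{verbatim}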
We define the set $\Ac$ of admissible controls as the set of
strategies $\alpha$ such that \begin{eqnarray}\label{admissiblity constraints}
\E\Big[ (X_T^\alpha)^- \Big] ~<~+\infty &\text{ and }& R^\alpha_t
\geq 0 \quad \text{for } 0 \leq t \leq T\;, \end{eqnarray} where $(.)^-$
denotes the negative part. We note that for $R_0\geq0$, the set
$\Ac$ is nonempty as it contains the strategy with no
intervention.
We denote by $\Zc$ the set
$\Zc:=\R\times\R_+\times\R_+^*\times\R_+^*$. We define the
liquidation function $L:~\Zc \rightarrow\R$ by \begin{eqnarray*} L(z) &: = &
\max\{x+(p-c_1)r-c_2,x\} \;,\quad \mbox{ for }~z=(x,r,p,q)\in
\Zc\;. \end{eqnarray*}
\ni From condition \reff{admissiblity constraints}, the
expectation $\E[L(X_T^\alpha,R_T^\alpha,P_T,Q_T)]$ is well defined
for any $\alpha\in \Ac$. We can therefore consider the objective
of the manager which consists in computing the optimal value
\begin{eqnarray}\label{pb initial}
V_0 &:=& \sup_{\alpha \in \Ac} \E \big[L(X_T^\alpha,R_T^\alpha,P_T,Q_T) \big]\;,
\end{eqnarray}
and finding a strategy $\alpha^*\in\Ac$ such that
\begin{eqnarray}\label{opt strat}
V_0 &=& \E\big[ L(X_T^{\alpha^*},R_T^{\alpha^*},P_T,Q_T) \big]\;.
\end{eqnarray}
\subsection{Value functions with pending orders}
In order to provide an analytic characterization of the value
$V_0$ defined by the control problem \reff{pb initial}, we
need to extend the definition of this control problem to general
initial conditions. Moreover, since the renewing decisions are
delayed, we have to take into account the possible pending orders.
Given an impulse control $\alpha \in \Ac$, we notice that the
state of the system $R^\alpha$ is not only defined by its current
state value at time $t$ but also by the quantity at time $t$ of
the resource that has been renewed between $t-\delta$ and $t$.
We therefore introduce the following definitions and notations. For any $t\in [0,T]$, we denote by $N(t)$ the number of possible renewing dates before $t$
\begin{eqnarray*}
N(t) & := & \# \Big\{ i\in\{1,\ldots,n\} ~:~ t_i\leq t \Big\}\;,
\end{eqnarray*}
and by $D_t$ the set of resource renewal times and the associated quantities between $t-\delta$ and $t$
\begin{eqnarray}\label{def D_t} D_t &:=& \Big\{d=(t_i,e_i)_{N(t-\delta)+1 \leq
i \leq N(t)}~:~ e_i\in \R_+ \mbox{ for }
i=N(t-\delta)+1,\ldots,N(t) \Big\} \;,\quad \end{eqnarray} with the
convention that $D_t= \emptyset$ if $N(t-\delta)=N(t)$.
For any $t \in [0,T]$ and $d=(t_i,e_i)_{N(t-\delta)+1 \leq i \leq
N(t)} \in D_t$, we denote by $\tilde \Ac_{t,d}$ the set of
strategies which take into account the pending renewing decisions
taken between $t - \delta$ and $t$ \begin{eqnarray*}
\tilde \Ac_{t,d} &:=& \Big\{\alpha=(t_i, \xi_i)_{N(t-\delta)+1 \leq i \leq n} \cup (\tau_k, \zeta_k)_{ k \geq 1} ~: ~ \\
& & \quad \xi_i=e_i ~\mbox{ for }~i=N(t-\delta)+1, \ldots , N(t)\;; \\
& & \quad\xi_i \mbox{ is } \Fc_{t_i}-\mbox{measurable for }~N(t)+1 \leq i \leq n \;; \\
& & \quad (\tau_k)_{k \geq 1} \mbox{ is a nondecreasing finite or infinite sequence of } \F-\mbox{stopping times} \mbox{ with }\tau_1 > t\;; \\
& & \quad \zeta_k \mbox{ is } \Fc_{\tau_k}-\mbox{measurable for }~k\geq 1 \Big\}\;.
\end{eqnarray*}
For $z=(x,r,p,q) \in \Zc$, $d \in D_t$ and $\alpha \in \tilde
\Ac_{t,d}$, we denote by $Z^{t,z,\alpha}=
(X^{t,z,\alpha},R^{t,r,\alpha},P^{t,p},Q^{t,q})$ the quadruple of
processes defined by
\begin{eqnarray} R^{t,r,\alpha}_s &=& r+ \int_t^s \eta
R^{t,r,\alpha}_u(\lambda-R^{t,r,\alpha}_u)du + \int_t^s \gamma
R^{t,r,\alpha}_u dB_u - \sum_{k\geq 1} \zeta_k \1_{\tau_k \leq s}
\nonumber \\\label{eq croissance 1 bis}
& & + \sum_{i = N(t-\delta)+1}^n g(\xi_i) \1_{t_{i + m} \leq s} +g_0\big(N(s)-N(t)\big)
\;, \\\label{dyn X}
X^{t,z,\alpha}_s & = & x+ \sum_{k \geq 1} \big[ (P^{t,p}_{\tau_k} - c_1) \zeta_k- c_2 \big] \1_{\tau_k \leq s} - \hspace{-4mm}\sum_{i = N(t)+1}^n \hspace{-4mm}(Q^{t,q}_{t_i} +c_3 ) \xi_i \1_{t_i \leq s}\;,\\ \label{dyn P}
P^{t,p}_s & = & p+\int_t^s\mu P^{t,p}_u du + \int_t^s\sigma P^{t,p}_u dW_u\;,\\\label{dyn Q}
Q^{t,q}_s & = & q+\int_t^s\rho Q^{t,q}_u du + \int_t^s\varsigma Q^{t,q}_u dW_u \;,
\end{eqnarray}
for $s\in[ t,T]$.
We denote by $\Ac_{t,z,d}$ the set of strategies $\alpha\in \tilde \Ac_{t,d} $ such that
\begin{eqnarray}
\E\Big[ (X^{t,z,\alpha}_{T})^-\Big]~<~+\infty &\text{and} & R^{t,r,\alpha}_s \geq 0
\quad
\text{for all } s \in[t, T]\;. \qquad \label{cond-adm}
\end{eqnarray}
We then consider for $(t,z) \in [0,T] \times \Zc$, $d \in D_t$, $\alpha \in \Ac_{t,z,d}$
the following benefit criterion
\begin{eqnarray*}
J(t,z,\alpha) &:=& \E \Big[ L(Z^{t,z,\alpha}_T)
\Big]\;,
\end{eqnarray*}
which is well defined under conditions \reff{cond-adm}.
We define the corresponding value function by
\begin{eqnarray*}
v(t,z,d) &:=& \sup_{\alpha \in \Ac_{t,z,d}} J(t,z,\alpha) \;, \quad (t,z,d) \in \mathcal D\;,
\end{eqnarray*}
where $\mathcal D$ is the definition domain of $v$ defined by
\begin{eqnarray*}
\Dc & = & \Big\{(t,z,d)~:~(t,z)\in[0,T]\times \Zc \mbox{ and } d\in D_t\Big\}\;.
\end{eqnarray*}
\ni For simplicity, we also introduce the operators $\Gamma^{c}$,
$\Gamma^{rn}_1$ and $\Gamma^{rn}_2$ given by \begin{eqnarray*}
\Gamma^{c}(z,\ell) & := & (x+(p-c_1)\ell-c_2,r-\ell,p,q) \;,\\
\Gamma^{rn}_1(z,\ell) & := & (x-(q+c_3)\ell,r+g_0,p,q) \;,\\
\Gamma^{rn}_2(z,\ell) & := & (x,r+g(\ell),p,q)\;, \end{eqnarray*} for all
$z=(x,r,p,q)\in\Zc$ and $\ell\in\R_+$. The operator $\Gamma^c$
corresponds to the new position of the state process after a
resource consumption decision: if the manager harvests $\zeta_k$
at time $\tau_k$, then the state process is \begin{eqnarray*}
Z^{t,z,\alpha}_{\tau_k} & = &
\Gamma^c(Z^{t,z,\alpha}_{\tau_k^-},\zeta_k) \;, \end{eqnarray*} and
$\Gamma^{rn}_1$ and $\Gamma^{rn}_2$ correspond to the new position
of the state process after a renewal decision: if the manager
renews $(\xi_i)_{1 \leq i \leq n}$ at times $(t_i)_{1 \leq i \leq
n}$, then the state process is given by \begin{eqnarray*}
Z^{t,z,\alpha}_{t_i} & = & \Gamma^{rn}_1(Z^{t,z,\alpha}_{t_i^-},\xi_i)\;, \quad \mbox{for } i = 1 , \ldots, m \;,\\
Z^{t,z,\alpha}_{t_{i}} & = &
\Gamma^{rn}_1(\Gamma^{rn}_2(Z^{t,z,\alpha}_{t_{i}^-},\xi_{i-m}),\xi_i
) \;, \quad \mbox{for } i = m+1 , \ldots, n \;.
\end{eqnarray*}
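\noindent As a plain illustration, the intervention operators can be coded directly on a state $z=(x,r,p,q)$; the sketch below (in Python) only assumes this tuple layout and takes the cost parameters and the renewal function $g$ as arguments.
\begin{verbatim}
# The three intervention maps acting on a state z = (x, r, p, q).
def gamma_c(z, a, c1, c2):
    """Harvest a: sell at price p, pay the proportional and fixed costs."""
    x, r, p, q = z
    return (x + (p - c1) * a - c2, r - a, p, q)

def gamma_rn1(z, e, c3, g0):
    """Place a renewal order e: pay (q + c3)*e, add the natural renewal g0."""
    x, r, p, q = z
    return (x - (q + c3) * e, r + g0, p, q)

def gamma_rn2(z, e, g):
    """A pending order e matures: the available resource increases by g(e)."""
    x, r, p, q = z
    return (x, r + g(e), p, q)

z = (0.0, 0.4, 1.2, 1.0)
print(gamma_c(z, 0.1, c1=0.1, c2=0.01))   # approximately (0.1, 0.3, 1.2, 1.0)
\end{verbatim}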
We first give a new expression of the value function $v$. To this
end, we introduce the set \begin{eqnarray*} \hat \Ac_{t,z,d} & = &
\Big\{\alpha=(t_i, \xi_i)_{N(t-\delta)+1 \leq i \leq n} \cup
(\tau_k, \zeta_k)_{ k \geq 1}\in \tilde \Ac_{t,d} ~: \\
& &
~\big(P^{t,p}_{\tau_k}-c_1\big)\zeta_k-c_2~\geq~ 0~ \forall
\,k\geq 1~ \mbox{ and } R^{t,r,\alpha}_s \geq 0~ \forall
\,s\in[t,T] \Big\}\;. \end{eqnarray*}
\begin{Proposition}
The value function $v$ can be expressed as follows
\begin{eqnarray}\label{eq vA hat}
v(t,z,d) & = & \sup_{\alpha \in \hat \Ac_{t,z,d}} J(t,z,\alpha) \;, \quad (t,z,d) \in \mathcal D\;.
\end{eqnarray}
\end{Proposition}
\ni \textbf{Proof.} Fix $(t,z,d) \in \mathcal D$ with $z=(x,r,p,q)$ and denote by $\hat v(t,z,d) $ the right hand side of \reff{eq vA hat}.
We first notice that $\hat \Ac_{t,z,d}\subset \Ac_{t,z,d}$. Indeed, for $\alpha \in \hat \Ac_{t,z,d}$, we have
\begin{eqnarray*}
X^{t,z,\alpha}_T & = & x+\sum_{k \geq 1} \big[ (P^{t,p}_{\tau_k} - c_1) \zeta_k- c_2 \big] \1_{\tau_k \leq T} - \hspace{-4mm}\sum_{i = N(t)+1}^n \hspace{-4mm}(Q^{t,q}_{t_i} +c_3 ) \xi_i \\
& \geq & x-nK\big(\sup_{s\in[t,T]}Q^{t,q}_s+ c_3\big)\;.
\end{eqnarray*}
Since $Q^{t,q}$ follows the dynamics \reff{dyn Q}, we have $\E[\sup_{s\in[t,T]}Q^{t,q}_s]<+\infty$ and we get $\E[(X^{t,z,\alpha}_T)^-] < +\infty$. We therefore deduce that
\begin{eqnarray*}
v(t,z,d) & \geq & \hat v(t,z,d)\;.
\end{eqnarray*}
We turn to the reverse inequality. Fix $\alpha=(t_i, \xi_i)_{N(t-\delta)+1 \leq i \leq n} \cup (\tau_k, \zeta_k)_{ k \geq 1}\in \Ac_{t,z,d}$ and define the associated strategy $\hat \alpha=(t_i, \xi_i)_{N(t-\delta)+1 \leq i \leq n} \cup (\hat \tau_k, \hat \zeta_k)_{ k \geq 1}\in \hat \Ac_{t,z,d}$ by
\begin{eqnarray*}
(\hat \tau_j,\hat \zeta_j) & = & (\tau_{k_j},\zeta_{k_j})\quad ~~\mbox{ for } j\geq 1\;,
\end{eqnarray*}
where the sequence $(k_j)_{j\geq 1}$ is defined by
\begin{eqnarray*}
k_1 & = & \min \{k\geq 1~:~(P^{t,p}_{\tau_k}-c_1)\zeta_k-c_2\geq 0\}\;,\\
k_j & = & \min\{k\geq k_{j-1}+1~:~(P^{t,p}_{\tau_k}-c_1)\zeta_k-c_2\geq 0\}\;,
\end{eqnarray*}
i.e. $\hat \alpha$ is obtained from $\alpha$ by keeping only the harvesting orders such that $(P^{t,p}_{\tau_k}-c_1)\zeta_k-c_2\geq 0$.
We then easily check from dynamics \reff{eq croissance 1 bis} and \reff{dyn X} that
\begin{eqnarray*}
X_s^{t,z,\alpha} ~ \leq ~ X_s^{t,z,\hat \alpha} & \mbox{ and } & R_s^{t,r,\alpha} ~ \leq ~ R_s^{t,r,\hat \alpha}
\end{eqnarray*}
for all $s\in[t,T]$.
Therefore we get
\begin{eqnarray*}
L(Z_T^{t,z,\alpha}) & \leq & L(Z_T^{t,z,\hat \alpha})\;,
\end{eqnarray*}
which gives
\begin{eqnarray*}
\hat v(t,z,d) & \geq & v(t,z,d)\;.
\end{eqnarray*}
\ep
\section{PDE characterization}\label{sec3}
\subsection{Boundary condition and dynamic programming principle}
We first provide a boundary condition for the value function associated with the optimal management of the renewable resource.
\begin{Proposition}\label{Bound value function}
The value function $v$ satisfies the following growth condition: there exists a constant $C$ such that
\begin{eqnarray}\label{growth-property}
x
~~\leq~~ v(t,z,d) & \leq & x+C\Big(1+|r|^{4}+|p|^{4}+|q|^{4}\Big) \;,\qquad
\end{eqnarray}
for all $t\in[0,T]$, $z=(x,r,p,q)\in\Zc$, and $d\in D_t$.
\end{Proposition}
\ni The proof of this proposition is postponed to Section \ref{preuve1}.
With this bound, we are able to state the dynamic programming
relation on the value function of our control problem with
execution delay. For any $t\in [0,T]$, $d\in D_t$ and $\alpha
=(t_i,\xi_i)_{N(t-\delta)+1\leq i\leq n}\cup(\tau_k,\zeta_k)_{k
\geq 1} \in \hat \Ac_{t,z,d}$, we denote \begin{eqnarray*}
d(u,\alpha)&=& (t_{i}, \xi_{i})_{N(u-\delta)+1 \leq i \leq N(u)}\;,\quad u\in[t,T] \;,
\end{eqnarray*}
with the convention that $d(u,\alpha)= \emptyset$ if $N(u-\delta)=N(u)$.
We notice that $d(u,\alpha)$ corresponds to the set of renewing orders that have been given before $u$ and whose delayed effects appear after $u$. We also denote by $\Tc_{[t,T]}$ the set of $\F$-stopping times valued in $[t,T]$.
\begin{Theorem}\label{DP}
The value function $v$ satisfies the following dynamic programming principle.
\begin{enumerate}[(DP1)]
\item First dynamic programming inequality:
\begin{eqnarray*}
v(t,z,d)& \geq & \E \Big[ v_{}(\vartheta,Z^{t,z,\alpha}_\vartheta,d(\vartheta,\alpha))\Big]\;,
\end{eqnarray*}
for all $\alpha \in \hat \Ac_{t,z,d}$ and all $\vartheta\in\Tc_{[t,T]}$.
\item Second dynamic programming inequality: for any $\varepsilon >0$, there exists $\alpha \in \hat \Ac_{t,z,d}$ such that
\begin{eqnarray*}
v(t,z,d) - \varepsilon & \leq & \E \Big[ v_{}(\vartheta,Z_\vartheta^{t,z,\alpha},d(\vartheta,\alpha))\Big] \;,
\end{eqnarray*}
for all $\vartheta\in\Tc_{[t,T]}$.
\end{enumerate}
\end{Theorem}
\ni The proof of this theorem is postponed to Section \ref{preuve2}.
\subsection{Viscosity properties and uniqueness}
The PDE system associated to our control problem is formally
derived from the dynamic programming relations. We first decompose
the domain $\Dc$ as follows
\begin{eqnarray*} \Dc & = & \bigcup_{k=0}^{n}\Dc_k
\;, \end{eqnarray*} where \begin{eqnarray*} \Dc_k &= & \Big\{(t,z,d)\in \Dc
~:~~t\in\big[t_k,t_{k+1}\big)\Big\}\;, \end{eqnarray*} for $k=0,\ldots,n-1$
and \begin{eqnarray*} \Dc_n & = & \Big\{(t,z,d)\in \Dc~:~t=T\Big\}\;. \end{eqnarray*} We
also decompose the sets $\Dc_k$, $k=0,\ldots,n$, as follows \begin{eqnarray*}
\Dc_k & = & \Dc_k^1\cup\Dc_k^2 \;, \end{eqnarray*} where \begin{eqnarray*}
\Dc_k^1 & = & \big\{(t,z,d)\in\Dc_k~:~z=(x,r,p,q) \mbox{ with } r=0\big\}\;,\\
\Dc_k^2 & = & \big\{(t,z,d)\in\Dc_k~:~z=(x,r,p,q) \mbox{ with } r>0\big\}\;.
\end{eqnarray*}
We define the operators $\Hc$, $\Nc_1$, $\bar\Nc_1$, $\Nc_2$ and $\bar \Nc_2$ by
\begin{eqnarray*}
\Hc \phi (t,z,d) & = & \sup_{0\leq a \leq r} \phi\big(t,\Gamma^c(z,a),d \big) \;,
\end{eqnarray*}
for any $(t,z,d)\in \Dc$ and any function $\phi$ defined on $\Dc$,
\begin{eqnarray*}
\Nc_1 \phi (t_k,z,d) & = & \sup_{
0 \leq e \leq K} \phi \Big(t_k,\Gamma^{rn}_1 \big(\Gamma^{rn}_2(z,e_{k-m}),e \big),d\cup(t_k,e)\setminus (t_{k-m}, e_{k-m}) \Big) \;,\\
\bar \Nc_1 \phi (t_k,z,d) & = & \sup_{
\begin{tiny}\begin{array}{c}
0 \leq e \leq K\\
0\leq a \leq r \end{array}\end{tiny}
} \phi \Big(t_k,\Gamma^{rn}_1 \big(\Gamma^{rn}_2\big(\Gamma^c(z,a),e_{k-m}\big),e \big),d\cup(t_k,e)\setminus (t_{k-m}, e_{k-m}) \Big) \;,
\end{eqnarray*}
for any $(t_k,z,d)\in \Dc$ with $k=m + 1,\ldots,n$, and any function $\phi$ defined on $\Dc$, and
\begin{eqnarray*}
\Nc_2 \phi (t_k,z,d) & = & \sup_{0 \leq e \leq K
} \phi \Big(t_k,\Gamma^{rn}_1 \big(z,e \big),d\cup(t_k,e) \Big) \;,\\
\bar \Nc_2 \phi (t_k,z,d) & = & \sup_{
\begin{tiny}\begin{array}{c}
0\leq e \leq K\\
0\leq a\leq r \end{array}\end{tiny}
} \phi \Big(t_k,\Gamma^{rn}_1 \big(\Gamma^c(z,a),e \big),d\cup(t_k,e) \Big) \;,
\end{eqnarray*}
for any $(t_k,z,d)\in \Dc$ with $k=0,\ldots,m$, and any function $\phi$ defined on $\Dc$.
This provides equations for the value function $v$, which take the
following nonstandard form \begin{eqnarray}\label{EDP0}
- \Lc v(t,z,d) & = & 0
\end{eqnarray}
for $(t,z,d)\in \Dc^1_k$, with $k=0,\ldots,n$,
\begin{eqnarray}\label{EDP1}
\min\Big\{ - \Lc v(t,z,d) ~,~
v(t,z,d) - \Hc v(t,z,d)
\Big\} &=&0
\end{eqnarray}
for $(t,z,d)\in \Dc^2_k$, with $k=0,\ldots,n-1$,
\begin{eqnarray}
v(T^-,z,d) & = & \max\big\{\Nc_1 L(z,d)~,~\bar \Nc_1 L(z,d)\big\}\qquad\label{EDP2}
\end{eqnarray}
for $(T,z,d)\in \Dc$,
\begin{eqnarray}\label{EDP4}
v(t_{k}^-,z,d) & = & \max\{ \Nc_1 v(t_k,z,d)~,~\bar \Nc_1v(t_k,z,d)\}
\end{eqnarray}
for $(t_k,z,d)\in \Dc_k$, with $k=m+1,\ldots,n-1$,
and
\begin{eqnarray}
v(t_{k}^-,z,d) & = & \max\big\{\Nc_2 v(t_k,z,d)~,~\bar \Nc_2 v(t_k,z,d)\big\}\qquad\label{EDP6}
\end{eqnarray}
for $(t_k,z,d)\in \Dc_k$, with $k=0,\ldots,m$.
Here $\Lc$ is the second order local operator associated to the diffusion $(P,Q,R)$ with no intervention. It is given by
\begin{eqnarray*}
\Lc \varphi (t,z) & = & \partial_t \varphi(t,z) + \mu p \partial_p\varphi(t,z)+\rho q\partial_q\varphi(t,z)+\eta r(\lambda-r)\partial_r\varphi(t,z) \\
& & +{1\over 2} \Big(\sigma^2p^2\partial^2_{pp}\varphi(t,z)+\varsigma^2q^2\partial^2_{qq}\varphi(t,z)+2\sigma\varsigma pq \partial ^2_{pq}\varphi(t,z)+\gamma^2r^2 \partial ^2_{rr}\varphi(t,z)\Big)
\end{eqnarray*}
for any $(t,z)\in[0,T]\times\Zc$ with $z=(x,r,p,q)$ and any function $\varphi\in C^{1,2}([0,T]\times \Zc )$.
As usual, we do not have a priori any regularity property of the value
function $v$. We therefore work with the notion of (discontinuous)
viscosity solutions. Since our system of PDEs \reff{EDP0} to
\reff{EDP6} is nonstandard, we have to adapt the definition to our
framework.
First, for a locally bounded function $w$ defined on $\Dc$, we define its lower semicontinuous (resp. upper semicontinuous) envelope $w_*$ (resp. $w^*$) by
\begin{eqnarray*}
w _* (t,z,d) & = & \liminf_{\begin{tiny}\begin{array}{c}(t',z',d')\rightarrow(t,z,d)\\(t',z',d')\in \Dc_k\end{array}\end{tiny}} w(t',z',d') \;,\\
w ^* (t,z,d) & = & \limsup_{\begin{tiny}\begin{array}{c}(t',z' ,d')\rightarrow(t,z,d)\\(t',z',d')\in \Dc_k\end{array}\end{tiny}} w(t',z',d') \;,\end{eqnarray*}
for $(t,z,d)\in \Dc_k $, with $k=0,\ldots,n-1$. We also define its left lower semicontinuous (resp. upper semicontinuous) envelope at time $t_k$ by
\begin{eqnarray*}
w _* (t_k^-,z,d) & = & \liminf_{\begin{tiny}\begin{array}{c}(t',z',d')\rightarrow(t_k^-,z,d)\\(t',z',d')\in \Dc_{k-1}\end{array}\end{tiny}} w(t',z',d)\;,\\
w ^* (t_k^-,z,d) & = & \limsup_{\begin{tiny}\begin{array}{c}(t',z',d')\rightarrow(t_k^-,z,d)\\(t',z',d')\in \Dc_{k-1}\end{array}\end{tiny}} w(t',z',d) \;,
\end{eqnarray*}
for $k \in \{1,\ldots,n\}$.
\begin{Definition}[Viscosity solution to \reff{EDP0} -- \reff{EDP6}]
A locally bounded function $w$ defined on $\Dc$ is a viscosity supersolution (resp. subsolution) if
\ni (i) for any $k=0,\ldots,n-1$, $(t,z,d)\in\Dc_k^1$ and $\varphi\in C^{1,2}(\Dc_k)$ such that
\begin{eqnarray*}
(w_*-\varphi)(t,z,d) & = & \min_{\Dc_k} (w_*-\varphi)\\
\mbox{(resp. } ( w ^*-\varphi)(t,z,d) & = & \max_{\Dc_k} (w ^*-\varphi)\mbox{)}
\end{eqnarray*}
we have
\begin{eqnarray*}
- \Lc \varphi(t,z,d) &\geq &0\\
\mbox{(resp. } - \Lc \varphi(t,z,d) & \leq & 0 \mbox{)}\;,
\end{eqnarray*}
(ii) for any $k=0,\ldots,n-1$, $(t,z,d)\in\Dc^2_k$ and $\varphi\in C^{1,2}(\Dc_k)$ such that
\begin{eqnarray*}
(w_*-\varphi)(t,z,d) & = & \min_{\Dc_k} (w_*-\varphi)\\
\mbox{(resp. } ( w ^*-\varphi)(t,z,d) & = & \max_{\Dc_k} (w ^*-\varphi)\mbox{)}
\end{eqnarray*}
we have
\begin{eqnarray*}
\min\Big\{ - \Lc \varphi(t,z,d) ~,~
w_*(t,z,d) - \Hc w_*(t,z,d)
\Big\} &\geq &0\\
\mbox{(resp. } \min\Big\{ - \Lc \varphi(t,z,d) ~,~
w^*(t,z,d) - \Hc w^*(t,z,d)
\Big\} & \leq & 0 \mbox{)}\;,
\end{eqnarray*}
(iii) for any $(T,z,d)\in \Dc$ we have
\begin{eqnarray*}
w_*(T^-,z,d) & \geq & \max\{ \Nc_1 L(z,d)~,~\bar \Nc_1 L(z,d)\}\\
\mbox{(resp. } w^*(T^-,z,d) & \leq & \max\{\Nc_1 L(z,d)~,~\bar \Nc_1 L(z,d)\} \mbox{)}\;,
\end{eqnarray*}
(iv) for any $k=m+1,\ldots,n-1$, $(t_k,z,d)\in \Dc$ we have
\begin{eqnarray*}
w_*(t_{k}^-,z,d) & \geq & \max\{ \Nc_1 w_*(t_k,z,d)~,~\bar \Nc_1w_*(t_k,z,d)\}\\
\mbox{(resp. } w^*(t_{k}^-,z,d) & \leq & \max\{ \Nc_1 w^*(t_k,z,d)~,~\bar \Nc_1w^*(t_k,z,d)\}\mbox{)}\;,
\end{eqnarray*}
(v) for any $k=0,\ldots,m$, $(t_k,z,d)\in \Dc$ we have
\begin{eqnarray*}
w_*(t_{k}^-,z,d) & \geq & \max\{ \Nc_2 w_*(t_k,z,d)~,~\bar \Nc_2w_*(t_k,z,d)\}\\
\mbox{(resp. } w^*(t_{k}^-,z,d) & \leq & \max\{ \Nc_2 w^*(t_k,z,d)~,~\bar \Nc_2w^*(t_k,z,d)\}\mbox{)}\;.
\end{eqnarray*}
A locally bounded function $w$ defined on $\Dc$ is said to be a
viscosity solution to \reff{EDP0}--\reff{EDP6} if it is a
supersolution and a subsolution to \reff{EDP0}--\reff{EDP6}.
\end{Definition}
\ni The next result provides the viscosity properties of the value function $v$.
\begin{Theorem}[Viscosity characterization]
The value function $v$ is the unique viscosity solution to
\reff{EDP0}--\reff{EDP6} satisfying the growth condition
\reff{growth-property}. Moreover, $v$ is continuous on $\Dc_k$ for
all $k=0,\ldots,n-1$.
\end{Theorem}
\section{Numerics}\label{sec4}
We describe, in this section, a backward algorithm to approximate
the value function and an optimal strategy. Some numerical
illustrations are also provided.
\subsection{Approximation of the value function $v$}
\paragraph{Initialization step.} For $(t,z,d)\in \Dc_{n-1}^1$ we have
\begin{eqnarray*}
v(t,z,d) & = & \E\Big[ \max\{ \Nc_1 L(Z^{t,z,d}_T,d)~,~\bar \Nc_1 L(Z^{t,z,d}_T,d)\} \Big]\;.
\end{eqnarray*}
We can therefore approximate it by $\hat v(t,z,d)$ which is the associated Monte Carlo estimator.
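\noindent Note that, inside the expectation above, the quantity $\max\{ \Nc_1 L~,~\bar \Nc_1 L\}$ evaluated at a simulated terminal state is a deterministic maximization over the renewal quantity $e\in[0,K]$ and the harvested quantity $a\in[0,r]$. A brute-force evaluation on grids could read as in the following sketch (in Python), where the cost values, the bound $K$, the pending quantity and the grid sizes are illustrative choices, $g$ is taken to be the identity, and the intervention maps repeat those sketched in Section \ref{sec2}.
\begin{verbatim}
import numpy as np

# Brute-force evaluation of max{ N1 L(z,d), N1bar L(z,d) } at a terminal
# state z = (x, r, p, q) with a single pending order e_pending.
c1, c2, c3, g0, K = 0.1, 0.01, 0.1, 0.03, 0.5       # illustrative values

def L(z):
    x, r, p, q = z
    return max(x + (p - c1) * r - c2, x)

def gamma_c(z, a):
    x, r, p, q = z
    return (x + (p - c1) * a - c2, r - a, p, q)

def gamma_rn1(z, e):
    x, r, p, q = z
    return (x - (q + c3) * e, r + g0, p, q)

def gamma_rn2(z, e):
    x, r, p, q = z
    return (x, r + e, p, q)                          # g = identity

def terminal_value(z, e_pending, n_grid=51):
    es = np.linspace(0.0, K, n_grid)
    harvests = np.linspace(0.0, z[1], n_grid)
    n1 = max(L(gamma_rn1(gamma_rn2(z, e_pending), e)) for e in es)
    n1bar = max(L(gamma_rn1(gamma_rn2(gamma_c(z, a), e_pending), e))
                for e in es for a in harvests)
    return max(n1, n1bar)

print(terminal_value((0.0, 0.4, 1.2, 1.0), e_pending=0.1))
\end{verbatim}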
On $\Dc_{n-1}^2$ the function $v$ is solution to the PDE
\reff{EDP1} with the terminal condition \reff{EDP2}. Therefore, we
can compute an approximation $\hat v$ using an algorithm
computing optimal values of impulse control problems with boundary
condition on $\Dc_{n-1}^1$ and the terminal value given by \reff{EDP2}
(see e.g. \cite{COS02}).
\paragraph{Step $k+1\rightarrow k$.} Once we have an approximation $\hat v(t,z,d)$ of $v(t,z,d)$ for
$(t,z,d)\in \Dc_{k+1}$ we are able to get an approximation of $v$
on $\Dc_{k}$ as follows.
\ni $\bullet$ Case 1: $m\leq k\leq n-1$.
For $(t,z,d)\in \Dc_{k}^1$ we have
\begin{eqnarray*}
v(t,z,d) & = & \E\Big[ \max\{ \Nc_1 v(t_{k+1},Z^{t,z,d}_{t_{k+1}},d)~,~\bar \Nc_1v(t_{k+1},Z^{t,z,d}_{t_{k+1}},d)\} \Big]\;.
\end{eqnarray*}
We can therefore approximate it by $\hat v(t,z,d)$ which is the Monte Carlo estimator of
\begin{eqnarray*}
\E\Big[ \max\{ \Nc_1\hat v(t_{k+1},Z^{t,z,d}_{t_{k+1}},d)~,~\bar \Nc_1 \hat v(t_{k+1},Z^{t,z,d}_{t_{k+1}},d)\} \Big]\;.
\end{eqnarray*}
On $\Dc_{k}^2$ the function $v$ is solution to the PDE
\reff{EDP1} with the terminal condition \reff{EDP4}. Since we
already have approximations of $v$ on $\Dc^1_{k}$ and
$\Dc_{k+1}$, we can compute an approximation $\hat v$ using an
algorithm computing optimal values of impulse control problem
with boundary on $\Dc_{k}^1$ (see e.g. \cite{COS02}) and the
terminal value given by \begin{eqnarray*} \hat v(t_{k+1}^-,z,d) & = & \max\{
\Nc_1 \hat v(t_{k+1},z,d)~,~\bar \Nc_1\hat v(t_{k+1},z,d)\}\;.
\end{eqnarray*}
\ni $\bullet$ Case 2: $0\leq k\leq m - 1$. The procedure is the same as in Case 1 but with $\Nc_2$ and $\bar \Nc_2$ instead of $\Nc_1$ and $\bar \Nc_1$ respectively.
\subsection{An optimal strategy for the approximated problem}
We turn to the computation of an optimal strategy. Based on the general optimal stopping theory (see \cite{EK81}), we propose the following strategy $\hat \alpha$. This strategy is constructed as is usually done for optimal strategies of impulse control problems, but using the approximation $\hat v$ instead of the value function $v$.
We start with an initial data $(t,z,d)$. We denote by $\hat \alpha=(t_i,\hat \xi_i)_{N(t-\delta)+1\leq i\leq n}\cup(\hat \tau_k,\hat \zeta_k)_{k \geq 1}$ the strategy constructed step by step and by $\hat Z^\kappa=(\hat X^\kappa,\hat R^\kappa,\hat P^\kappa,\hat Q^\kappa)$ the process controlled by the truncated strategy $\hat \alpha ^\kappa:= (t_i,\hat \xi_i)_{N(t-\delta)+1\leq i\leq n}\cup(\hat \tau_k,\hat \zeta_k)_{\kappa\geq k \geq 1}$. We also denote by $\hat d _s=(t_i,\hat e_i)_{N(s-\delta)+1\leq i\leq N(s)}$ the pending orders at time $s\in[t,T]$.
\paragraph{Initialization step.} We first start by computing the first harvesting time $\hat \tau_1$ by
\begin{eqnarray*}
\hat \tau_1 & = & \inf\Big\{s\geq t~:~\hat v(s, \hat Z^0_s, \hat d_s)~=~\Hc \hat v(s, \hat Z^0_s, \hat d_s) \Big\}
\end{eqnarray*}
and the associated harvested quantity $\hat \zeta _1$ by
\begin{eqnarray*}
\hat \zeta_1 & \in & \text{arg}\max_{0\leq a\leq \hat R^0_{\hat \tau_1}} \hat v(\hat \tau_1, \Gamma^c(\hat Z^0_{\hat \tau_1},a), \hat d_{\hat \tau_1})\;.
\end{eqnarray*}
\paragraph{Step $k\rightarrow k+1$ for harvesting orders.} We then compute the $(k+1)$-th harvesting time $\hat \tau_{k+1}$ by
\begin{eqnarray*}
\hat \tau_{k+1} & = & \inf\Big\{s\geq \hat \tau_{k}~:~\hat v(s, \hat Z^k_s, \hat d_s)~=~\Hc \hat v(s, \hat Z^k_s, \hat d_s) \Big\}
\end{eqnarray*}
and the associated harvested quantity $\hat \zeta _{k+1}$ by
\begin{eqnarray*}
\hat \zeta_{k+1} & \in & \text{arg}\max_{0\leq a\leq \hat R^k_{\hat \tau_{k+1}}} \hat v(\hat \tau_{k+1} , \Gamma^c(\hat Z^{k}_{\hat \tau_{k+1} },a), \hat d_{\hat \tau_{k+1} })\;.
\end{eqnarray*}
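\noindent In practice, the detection of a harvesting time amounts to testing, along a simulated path, whether $\hat v$ and $\Hc\hat v$ coincide up to a numerical tolerance, the harvested quantity being the corresponding maximizer. The following toy sketch (in Python) only illustrates the mechanics: the function \texttt{v\_hat} below is a stand-in chosen for the example, not the approximation computed by the algorithm of the previous subsection.
\begin{verbatim}
import numpy as np

# Toy harvesting rule: intervene when v_hat = Hc v_hat (up to a tolerance)
# and harvest the arg max quantity.  v_hat is a stand-in, NOT the computed
# approximation of the value function.
c1, c2 = 0.1, 0.01

def v_hat(t, z):
    x, r, p, q = z
    return x + max((p - c1) * r - c2, 0.0)           # toy value: liquidate now

def Hc_v_hat(t, z, n_grid=101):
    x, r, p, q = z
    grid = np.linspace(0.0, r, n_grid)
    vals = [v_hat(t, (x + (p - c1) * a - c2, r - a, p, q)) for a in grid]
    i = int(np.argmax(vals))
    return vals[i], grid[i]

def harvest_decision(t, z, tol=1e-8):
    best, a_star = Hc_v_hat(t, z)
    return a_star if v_hat(t, z) <= best + tol else None

print(harvest_decision(0.0, (0.0, 0.4, 1.2, 1.0)))   # harvests the whole stock
\end{verbatim}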
\paragraph{Step $i$ for renewing orders.} Denote by $\hat k_s$ the (random) number of harvesting orders on $[t,s]$.
We then distinguish two cases.
\ni $\bullet$ Case 1: $0\leq i\leq m$.
\ni Suppose first that
\begin{eqnarray*}
\Nc_2 \hat v( t_i, \hat Z ^{\hat k_{t_i}} _{t_i},\hat d_{t_{i-1}}) & \geq & \bar \Nc_2 \hat v( t_i, \hat Z ^{\hat k_{t_i}} _{t_i},\hat d_{t_{i-1}})\;.
\end{eqnarray*}
Then we compute the optimal renewed resource $\hat \xi_i$ at time $t_i$ by
\begin{eqnarray*}
\hat \xi_i & = & \text{arg} \max_{0\leq e\leq K} \hat v\Big( t_i, \Gamma_1^{rn}\big(\hat Z ^{\hat k_{t_i}} _{t_i},e\big),\hat d_{t_{i-1}}\cup(t_i,e)\Big) \;.
\end{eqnarray*}
\ni If we now suppose that
\begin{eqnarray*}
\Nc_2 \hat v( t_i, \hat Z ^{\hat k_{t_i}} _{t_i},\hat d_{t_{i-1}}) & < & \bar \Nc_2 \hat v( t_i, \hat Z ^{\hat k_{t_i}} _{t_i},\hat d_{t_{i-1}})\;.
\end{eqnarray*}
Then we compute the optimal renewed resource $\hat \xi_i$ at time $t_i$ by
\begin{eqnarray*}
\hat \xi_i & = & \text{arg} \max_{0\leq e\leq K} \hat v\Big( t_i, \Gamma_1^{rn}\big(\Gamma^{c}\big(\hat Z ^{\hat k_{t_i^-}} _{t_i}, \hat \zeta_{\hat k_{t_i}}\big),e\big),\hat d_{t_{i-1}}\cup(t_i,e)\Big)
\end{eqnarray*}
which coincides with the expression obtained in the first case
\begin{eqnarray*}
\hat \xi_i & = & \text{arg} \max_{0\leq e\leq K} \hat v\Big( t_i, \Gamma_1^{rn}\big(\hat Z ^{\hat k_{t_i}} _{t_i},e\big),\hat d_{t_{i-1}}\cup(t_i,e)\Big)\;.
\end{eqnarray*}
\ni $\bullet$ Case 2: $m+1\leq i\leq n$.
\ni As in the first case we do not need to distinguish the subcases $\Nc_1 \hat v\geq \bar \Nc_1 \hat v$ and $\Nc_1 \hat v<\bar \Nc_1 \hat v$ and the optimal renewed quantity at time $t_i$ is given by
\begin{eqnarray*}
\hat \xi_i & = & \text{arg} \max_{0\leq e\leq K} \hat v\Big( t_i, \Gamma_1^{rn}\big(\Gamma_2^{rn}\big(\hat Z ^{\hat k_{t_i}} _{t_i},\hat e_{i-m}\big),e\big),\hat d_{t_{i-1}}\cup(t_i,e)\setminus (t_{i-m},\hat e_{i-m})\Big)\;.
\end{eqnarray*}
\subsection{Examples}
In this part, we present numerical illustrations obtained by using an implicit finite difference scheme combined with an iterative procedure, which leads to the resolution of a controlled Markov chain; we assume that the resource is a forest. This class of problems has been intensively studied by Kushner and Dupuis \cite{KushDup}. The convergence of the solution of the numerical scheme towards the solution of the HJB equation, when the time-space step goes to zero, can be shown using the standard local consistency argument, i.e. the first and second moments of the approximating Markov chain converge to those of the continuous process $(R,P)$. We assume that the maximal size of the forest is 1 and we use a discretization step of $1/151$ for the size of the forest. For the discretization of the price, we discretize the process $S = \log(P)$ with $P_0=1$: we consider $S_{\min} = - |\mu - \sigma^2/2|\, T - 3 \sigma \sqrt{T}$ and $S_{\max} = |\mu - \sigma^2/2|\, T + 3 \sigma \sqrt{T}$, and the discretization step is $1/101$.
We compute the optimal harvesting and renewal strategy, and the value function. We assume that the parameters of the logistic SDE are $\eta=1$, $\lambda=0.7$ and $\gamma=0.1$. The parameter of natural renewal is $g_0=3\%$ of the forest. The delay before being able to harvest a renewed tree is $1$, and the function $g$ is the identity, $g(x)=x$. The initial price is $1$. The parameters of the price $P$ are $\mu=0.07$ and $\sigma=0.1$, and the costs to harvest and renew are $c_1=0.1$, $c_2=0.01$ and $c_3=0.1$. We assume that the renewal cost $Q$ is equal to the price $P$. Renewal is possible at times $\{1,2\}$ and the terminal time is $T=3$.
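\noindent For reproducibility, the grids described above can be built as follows (in Python); apart from the choice of including the endpoints, this is a direct transcription of the discretization parameters.
\begin{verbatim}
import numpy as np

# Grid of the finite difference scheme: forest size R in [0, 1] with step
# 1/151, and S = log(P) with step 1/101 on [S_min, S_max].
mu, sigma, T = 0.07, 0.1, 3.0

dr, ds = 1.0 / 151.0, 1.0 / 101.0
r_grid = np.arange(0.0, 1.0 + dr / 2, dr)
s_max = abs(mu - sigma ** 2 / 2) * T + 3 * sigma * np.sqrt(T)
s_grid = np.arange(-s_max, s_max + ds / 2, ds)
p_grid = np.exp(s_grid)                # back to the price scale (P_0 = 1)
print(len(r_grid), len(s_grid))
\end{verbatim}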
\begin{figure}
\caption{The value function with respect to the price $P$ and the size of the forest $R$.}
\label{fig:valeur}
\end{figure}
We remark that the value function is increasing w.r.t. the price and the size of the forest, as expected.
\begin{figure}
\caption{The optimal strategy with respect to the price $P$ and the size of the forest $R$. The blue region corresponds to the plantation region, the yellow region corresponds to the harvesting region, the green region corresponds to the continuation region}
\label{fig1}
\end{figure}
We note that the harvesting region increases with the price, while the renewal region decreases with the price. We never plant and harvest at the same time.
We now study the sensitivity w.r.t. the different parameters. To do so, we change the parameters one at a time.
\begin{figure}
\caption{In this figure the parameter $\lambda$ is now $0.9$}
\label{fig2}
\end{figure}
If $\lambda$ is bigger, the harvesting region is larger and the renewal region is smaller, since the growth is stronger.
\begin{figure}
\caption{In this figure the parameter $\eta$ is now $0.8$}
\label{fig3}
\end{figure}
With this value of $\eta$, the renewal region is smaller when the price is low, since the growth is slower and it is not worthwhile to renew unless the size of the forest is really small.
\begin{figure}
\caption{In this figure the drift $\mu$ of the price is now $0.09$}
\label{fig4}
\end{figure}
If the drift of the price is larger, the harvesting region is smaller for low prices, since the manager prefers to wait, except when the size is too large, in which case the growth is negative; the renewal region is larger because the price is expected to be higher in the future.
\begin{figure}
\caption{In this figure the proportional costs $c_1$ and $c_3$ are now $0.15$}
\label{fig5}
\end{figure}
If the costs are higher, the renewal region is smaller because it is expensive to renew and harvest, so we renew only if the size is really small.
\section{Proof of the main results}\label{sec5}
\subsection{Growth condition on $v$}\label{preuve1}
We provide in this subsection an upper bound on the growth of the function $v$.
For any $(t,r)\in[0,T]\times\R_+$, we define the process $\bar R^{t,r}$ by $\bar R^{t,r}_t=r$ and
\begin{eqnarray*}
d \bar R^{t,r}_s & = & \eta \bar R^{t,r}_s(\lambda-\bar R^{t,r}_s)ds + \gamma \bar R^{t,r}_s dB_s \;, \quad \forall \; s \in [t,T] \setminus \{ t_i ~:~ N(t) +1 \leq i \leq n\}\;,\\
\bar R^{t,r}_{t_i} &=&\bar R^{t,r}_{t_i^-} + M \;, \quad \mbox{ for } ~N(t) +1 \leq i \leq n\;,
\end{eqnarray*}
where $M:= \max_{\xi\in[0,K]} g(\xi)+g_0$.
We remark that the process $ \bar R^{t,r}$ can be written under the following form
\begin{eqnarray*}
\bar R^{t,r}_s & = & r+ \int_t^s \eta \bar R^{t,r}_u(\lambda-\bar R^{t,r}_u)du + \int_t^s \gamma \bar R^{t,r}_u dB_u+ \big(N(s)-N(t)\big)M \;,
\end{eqnarray*}
for $s\in[t,T]$. This corresponds to the strategy of never harvesting and always renewing the maximal quantity.
We then have the following estimate on the process $\bar R ^{t,r}$.
\begin{Lemma}\label{lem-estim bar R}
For any $\ell\geq1$, there exists a constant $C_\ell$ such that
\begin{eqnarray}\label{estim 2 bar R}
\E\Big[\sup_{s\in[t,T]}\big|\bar R^{t,r}_s\big|^\ell\Big] & \leq & C_\ell\big(1+|r|^{\ell}\big)\;,
\end{eqnarray}
for all $(t,r)\in[0,T]\times \R_+$.
\end{Lemma}
\ni \textbf{Proof.}
We first prove that for any $\ell\geq 1$, there exists a constant $C_\ell$ such that
\begin{eqnarray}\label{estim 1 bar R}
\sup_{s\in[t,T]}\E\Big[\big|\bar R^{t,r}_s\big|^\ell\Big] & \leq & C_\ell\big(1+|r|^{\ell}\big)\;,
\end{eqnarray}
for all $(t,r)\in[0,T]\times \R_+$.
We argue by induction and we prove that for each $i=N(t),\ldots,n-1$ there exists a constant $C_{\ell,i}$ such that
\begin{eqnarray}\label{estim 1bis bar R}
\E\Big[\big|\bar R^{t,r}_s\big|^\ell\Big] & \leq & C_{\ell,i}\big(1+|r|^{\ell}\big)\;,
\end{eqnarray}
for all $r\in \R_+$ and $s\in[t_i\vee t, (t_{i+1}\vee t)\wedge T)$.
\ni $\bullet$ For $i=N(t)$, using the closed formula of the logistic diffusion, we have
\begin{eqnarray*}
\bar R^{t,r}_s & = & \frac{e^{(\eta\lambda-{\gamma^2\over 2}) (s-t) + \gamma (B_s-B_{t})}}{{ 1\over r} + \eta \int_{t}^s e^{(\eta\lambda-{\gamma^2\over 2}) (u-t) + \gamma (B_u-B_{t})} du } \;,
\end{eqnarray*}
for all $s\in[ t, t_{N(t)+1}\wedge T)$.
Therefore we get
\begin{eqnarray*}
\E\Big[\big|\bar R^{t,r}_s \big|^\ell\Big] & \leq & |r |^\ell\E\Big[\big|e^{(\eta\lambda-{\gamma^2\over 2}) (s-t) + \gamma (B_s-B_{t})}\big|^\ell\Big]\\
& \leq & |r|^\ell e^{(\ell|\eta\lambda-{\gamma^2\over 2}|+{|\ell\gamma|^2\over 2})(T-t)}
\end{eqnarray*}
for all $s\in[ t, t_{N(t)+1} \wedge T)$. Therefore \reff{estim 1bis bar R} holds true.
\ni $\bullet$ Suppose that the property holds for $i-1$. Still using the closed formula of the logistic diffusion, we have
\begin{eqnarray*}
\bar R^{t,r}_s & = & \frac{e^{(\eta\lambda-{\gamma^2\over 2}) (s-t_i) + \gamma (B_s-B_{t_i})}}{{ 1\over \bar R^{t,r}_{t_i^-}+M} + \eta \int_{t_i}^s e^{(\eta\lambda-{\gamma^2\over 2}) (u-t_i) + \gamma (B_u-B_{t_i})} du }\;,
\end{eqnarray*}
for all $s\in[t_i\vee t, (t_{i+1}\vee t)\wedge T)$.
Therefore we get
\begin{eqnarray*}
\E\Big[\big|\bar R^{t,r}_s \big|^\ell\Big] & \leq & \E\Big[\big|(\bar R^{t,r}_{t_i^-}+M)e^{(\eta\lambda-{\gamma^2\over 2}) (s-t_i) + \gamma(B_s-B_{t_i})}\big|^\ell\Big]\\
& \leq & \E\Big[\big|\bar R^{t,r}_{t_i^-}+M\big|^\ell\Big]e^{(\ell|\eta\lambda-{\gamma^2\over 2}|+{|\ell\gamma|^2\over 2})(T-t_{i})}\\
& \leq & C'\Big( 1+ \E\Big[\big|\bar R^{t,r}_{t_i^-}\big|^\ell\Big]\Big)\;.
\end{eqnarray*}
Using the induction assumption and Fatou's Lemma, we get the result, and \reff{estim 1bis bar R} holds true for each $i=N(t),\ldots,n$. Taking $C_\ell=\max_{N(t)\leq i\leq n} C_{\ell,i}$, we get \reff{estim 1 bar R}.
We now prove \reff{estim 2 bar R}. Still using the closed formula of the logistic diffusion we have
\begin{eqnarray*}
\big|\bar R^{t,r}_s \big|^\ell & \leq & \max_{N(t)\leq i\leq n} \big|(\bar R^{t,r}_{t_i^-}+M)\sup_{u\in[t_i\vee t, (t_{i+1}\vee t)\wedge T)}e^{(\eta\lambda-{\gamma^2\over 2}) (u-t_i) + \gamma (B_u-B_{t_i})}\big|^\ell\\
& \leq & \sum_{i=N(t)}^{n}\big|(\bar R^{t,r}_{t_i^-}+M)\sup_{u\in[t_i\vee t, (t_{i+1}\vee t)\wedge T)}e^{(\eta\lambda-{\gamma^2\over 2}) (u-t_i) + \gamma (B_u-B_{t_i})}\big|^\ell\;,
\end{eqnarray*}
for all $s\in[t,T]$. Therefore, we get from the independence of $(B_u-B_{t_i})_{u\geq t_i}$ with $\Fc_{ t_i}$ and \reff{estim 1 bar R}
\begin{eqnarray*}
\E\Big[\sup_{s\in[t,T]}\big|\bar R^{t,r}_s\big|^\ell\Big] & \leq & C \Big[\sum_{i=N(t)+1}^{n}\E\Big[\big|\bar R^{t,r}_{t_i^-}+M\big|^\ell\Big] + (1+|r|^\ell)\Big]
\\
& \leq & C'_\ell ( 1+ |r|^\ell)\;,
\end{eqnarray*}
for some constant $C'_\ell$.
\ep
\begin{Proposition}\label{estim R somme zeta}
(i) For any $\ell\geq 1$, there exists a constant $C_\ell$ such that
\begin{eqnarray*}
\E\Big[\sup_{s\in[t,T]}\big|R_s ^{t,r,\alpha}\big|^\ell\Big] & \leq & C_\ell \big(1+ |r|^\ell\big)
\end{eqnarray*}
for any strategy $\alpha \in \hat \Ac_{t,z,d}$.
\ni (ii) There exists a constant $C$ such that
\begin{eqnarray*}
\E\Big[ \Big(\sum_{k\geq 1} \zeta_k\mathds{1}_{\tau_k\leq T}\Big)^2\Big] & \leq & C \big(1+ |r|^4\big)
\end{eqnarray*}
for any strategy $\alpha \in \hat \Ac_{t,z,d}$.
\end{Proposition}
\ni \textbf{Proof.} (i) Fix $\alpha=(t_i,\xi_i)_{N(t-\delta) +1 \leq i \leq n} \cup (\tau_k,\zeta_k)_{k \geq 1} \in \hat \Ac_{t,z,d}$. Using the definition of $\bar R ^{t,r}$ we have
\begin{eqnarray*}
0~~\leq~~ R^{t,r,\alpha}_s & \leq & \bar R^{t,r}_s
\end{eqnarray*}
for all $s\in[t,T]$. Therefore we get from Lemma \ref{lem-estim bar R}
\begin{eqnarray*}
\E\Big[\sup_{s\in[t,T]}\big|R ^{t,r,\alpha}_s\big|^\ell\Big] & \leq & \E\Big[\sup_{s\in[t,T]} \big|\bar R^{t,r}_s\big|^\ell\Big] ~~ \leq ~~ C_\ell \big(1+ |r|^\ell\big)\;.
\end{eqnarray*}
(ii) We turn to the second estimate. From the dynamics \reff{eq croissance 1 bis} of $R^{t,r,\alpha}$, and since $R_T^{t,r,\alpha}\geq 0$ we have
\begin{eqnarray*}
\sum_{k\geq 1} \zeta_k\mathds{1}_{\tau_k\leq T} & \leq & r+ \int_t^T \eta R^{t,r,\alpha}_u(\lambda-R^{t,r,\alpha}_u)du + \int_t^T \gamma R^{t,r,\alpha}_u dB_u + n M \;,
\end{eqnarray*}
where we recall that $M=\max_{\xi\in[0,K]}g(\xi)+g_0$. Therefore, we get
\begin{eqnarray*}
\E\Big[\Big(\sum_{k\geq 1} \zeta_k\mathds{1}_{\tau_k\leq T}\Big)^2\Big] & \leq & 4\Big(|r|^2+ \E\Big[\Big|\int_t^T \eta R^{t,r,\alpha}_u(\lambda-R^{t,r,\alpha}_u)du\Big|^2 + \Big|\int_t^T \gamma R^{t,r,\alpha}_u dB_u\Big|^2 \Big]+ n^2 M^2\Big)\;.
\end{eqnarray*}
Therefore there exists a constant $C$ depending only on $T$, $\eta$, $\lambda$, $\gamma$, $M$ and $n$ such that
\begin{eqnarray*}
\E\Big[\Big(\sum_{k\geq 1} \zeta_k\mathds{1}_{\tau_k\leq T}\Big)^2\Big] & \leq & C\Big(|r|^2+ 1+ \E\Big[\sup_{s\in [t,T]}|R^{t,r,\alpha}_s|^4\Big]\Big) \;.
\end{eqnarray*}
Using estimate (i) we get the result.
\ep
We turn to the proof of the growth estimation for the value function $v$.
\ni\textbf{Proof of Proposition \ref{Bound value function}.} Fix $(t,z,d)\in \Dc$. From the definition of the function $L$ and the dynamics \reff{dyn X} and \reff{dyn P} of $X$ and $P$ we have
\begin{eqnarray*}
\E\Big[ L\big(Z^{t,z,\alpha}_T\big)\Big] & \leq & \E\Big[X^{t,z,\alpha}_T\Big]+ \E\Big[\big|P^{t,p}_T\big|^2\Big]+ \E\Big[\big|R^{t,r,\alpha}_T\big|^2\Big]\\
& \leq & x+ \E\Big[\sup_{s\in[t,T]} |P^{t,p}_s|^2 \Big] + \E\Big[\Big(\sum_{k\geq 1}\zeta_k\mathds{1}_{t\leq \tau_k\leq T}\Big)^2\Big] + \E\Big[\big|R^{t,r,\alpha}_T\big|^2\Big]\\
& & + e^{(2\mu+\sigma^2)(T-t)}|p|^2
\end{eqnarray*}
for any strategy $\alpha=(t_i,\xi_i)_{N(t-\delta) +1 \leq i \leq n} \cup (\tau_k,\zeta_k)_{k \geq 1} \in \hat \Ac_{t,z,d}$. From classical estimates there exists a constant $C$ such that
\begin{eqnarray*}
\E\Big[\sup_{s\in[t,T]} |P^{t,p}_s|^2 \Big] & \leq & C\Big(1+ |p|^2\Big)
\end{eqnarray*}
for all $p\in\R_+^*$. Using this estimate and Proposition \ref{estim R somme zeta} we get
\begin{eqnarray*}
v(t,z,d) & \leq & x+C\big(1+|r|^4+|p|^4+|q|^4\big)\;.
\end{eqnarray*}
Then, by considering the strategy $\alpha^0=d\in \hat \Ac_{t,z,d}$ with no interventions other than those in $d$, we get
\begin{eqnarray*}
x
& \leq & J(t,z,\alpha^0)~~\leq~~v(t,z,d)\;.
\end{eqnarray*}
\ep
\subsection{Dynamic programming principle}\label{preuve2}
Before proving the dynamic programming principle, we need the
following results.
\begin{Lemma}\label{rem-ppte-model}{\rm
For any $(t,z,d)\in\Dc$ and any control $\alpha\in\hat \Ac_{t,z,d}$ we have the following properties.
\begin{enumerate}[(i)]
\item The pair $(Z^{t,z,\alpha}, d(.,\alpha))$ satisfies the following Markov property
\begin{eqnarray*}
\E\big[ \phi(Z^{t,z,\alpha}_{\vartheta_2}) \big| \Fc_{\vartheta_1} \big] &=& \E\big[ \phi(Z^{t,z,\alpha}_{\vartheta_2}) \big| (Z^{t,z,\alpha}_{\vartheta_1},d(\vartheta_1,\alpha)) \big]
\end{eqnarray*}
for any bounded measurable function $\phi$, and any $\vartheta_1,\vartheta_2\in\Tc_{[t,T]}$ such that $\P\big(\vartheta_1\leq\vartheta_2\big)=1$.
\item Causality of the control
\begin{eqnarray*}
\alpha^\vartheta \in \hat \Ac_{\vartheta,Z_\vartheta^{t,z,d}, d(\vartheta,\alpha)} \quad & \text{ and } & \quad d(\vartheta,\alpha) \in D_\vartheta \quad a.s.
\end{eqnarray*}
for any $\vartheta\in\Tc_{[t,T]}$ where we set $\alpha^\vartheta =(t_{i},\xi_{i})_{ N(\vartheta-\delta)+1\leq i\leq n}\cup(\tau_{k},\zeta_{k})_{k\geq \kappa(\vartheta,\alpha)+1}$ and
\begin{eqnarray*}
\kappa(\vartheta,\alpha) & = & \#\big\{ k\geq 1 ~:~\tau_k< \vartheta\big\}\;.
\end{eqnarray*}
\item The state process $Z^{t,z,\alpha}$ satisfies the following flow property
\begin{eqnarray*}
Z^{t,z,\alpha} & = & Z^{\vartheta, Z^{t,z,\alpha}_{\vartheta},\alpha^{\vartheta}} \quad \text{on } [\vartheta,T]\;,
\end{eqnarray*}
for any $\vartheta\in\Tc_{[t,T]}$.
\end{enumerate}}
\end{Lemma}
\ni \textbf{Proof.} These properties are direct consequences of the dynamics of $Z^{t,z,\alpha}$. \ep
We turn to the proof of the dynamic programming principles (DP1) and (DP2).
Unfortunately, we do not have enough information on the value function $v$ to prove these results directly. In particular, we do not know whether $v$ is measurable, which prevents us from computing expectations involving $v$ as in (DP1) and (DP2). We therefore provide weaker dynamic programming principles involving the envelopes $v_*$ and $v^*$, as in \cite{BT11}. Since we eventually obtain the continuity of $v$, these results imply (DP1) and (DP2).
\begin{Proposition}\label{PropDP1}
For any $(t,z,d)\in\Dc$ we have
\begin{eqnarray*}
v(t,z,d)& \geq & \sup_{\alpha \in \hat \Ac_{t,z,d}} \sup_{\vartheta \in \Tc_{[t,T]}} \E \Big[ v_*(\vartheta,Z_\vartheta^{t,z,d},d(\vartheta,\alpha))\Big]\;.
\end{eqnarray*}
\end{Proposition}
\ni \textbf{Proof.}
Fix $(t,z,d) \in \Dc$, $\alpha \in \hat \Ac_{t,z,d}$ and $\vartheta \in \Tc_{[t,T]}$. By definition of the value function $v$, for any $\varepsilon >0$ and $\omega \in \Omega$, there exists $\alpha^{\varepsilon, \omega} \in \hat \Ac_{\vartheta(\omega),Z^{t,z,\alpha}_{\vartheta(\omega)}(\omega),d(\vartheta(\omega), \alpha)}$, which is an $\varepsilon$-optimal control at $(\vartheta,Z_\vartheta^{t,z,\alpha}, d(\vartheta, \alpha))(\omega)$, i.e.
\begin{eqnarray*}
v\big(\vartheta(\omega),Z^{t,z,\alpha}_{\vartheta(\omega)}(\omega), d(\vartheta(\omega), \alpha(\omega))\big) - \varepsilon &\leq &
J(\vartheta(\omega),Z^{t,z,\alpha}_{\vartheta(\omega)}(\omega), \alpha^{\varepsilon,\omega})\;.
\end{eqnarray*}
By a measurable selection theorem (see e.g. Theorem 82 in the appendix of Chapter III in \cite{delmey75}) there exists $\bar \alpha_\varepsilon =(t_i,\bar \xi_i)_{N(\vartheta)+1\leq i\leq n}\cup(\bar \tau_k,\bar \zeta_k)_{k\geq 1}\in \hat\Ac_{\vartheta,Z^{t,z,\alpha}_{\vartheta},d(\vartheta,\alpha)}$ such that $\bar \alpha_{\varepsilon}(\omega)= \alpha^{\varepsilon, \omega}(\omega)$ a.s., and so
\begin{eqnarray} \label{inegalite DP1 omega}
v\big(\vartheta,Z^{t,z,\alpha}_{\vartheta}, d(\vartheta, \alpha)\big) - \varepsilon &\leq & J(\vartheta,Z^{t,z,\alpha}_{\vartheta}, \bar \alpha_{\varepsilon})\;,\qquad \P-a.s.
\end{eqnarray}
We now define by concatenation the control strategy $\bar \alpha$ consisting of the impulse control components of $\alpha$ on $[t,\vartheta)$, and the impulse control components $\bar \alpha_\varepsilon$ on $[\vartheta,T]$. By construction of the control $\bar \alpha$
we have $\bar \alpha\in \hat\Ac_{t,z,d}$, $Z^{t,z,\bar \alpha} = Z^{t,z,\alpha}$ on $[t,\vartheta)$,
$d(\vartheta,\bar \alpha)= d(\vartheta,\alpha)$, and $\bar \alpha^\vartheta=\bar \alpha_\varepsilon$.
From Markov property, flow property, and causality features of our model, given by Lemma \ref{rem-ppte-model}, the definition of the performance criterion and the law of iterated conditional expectations, we get
\begin{eqnarray*}
J(t,z,\bar \alpha)&=& \E \Big[ J(\vartheta,Z_\vartheta^{t,z,\alpha}, \bar \alpha_\varepsilon)\Big]\;.
\end{eqnarray*}
Together with \reff{inegalite DP1 omega}, this implies
\begin{eqnarray*}
v(t,z,d) & \geq & J(t,z,\bar \alpha)\\
& \geq & \E \Big[ v_{*}(\vartheta,Z^{t,z,\alpha}_{\vartheta},d(\vartheta,\alpha))\Big] - \varepsilon \;.
\end{eqnarray*}
Since $\eps$, $\vartheta$ and $\alpha$ are arbitrarily chosen, we get the result.
\ep
We now prove (DP2), which is equivalent to the following proposition.
\begin{Proposition}\label{DP2}
For all $(t,z,d) \in \Dc$, we have
\begin{eqnarray*}
v(t,z,d)& \leq & \sup_{\alpha \in \hat \Ac_{t,z,d}} \inf_{\vartheta \in \Tc_{[t,T]}} \E \Big[ v^*\big(\vartheta,Z_\vartheta^{t,z,\alpha},d(\vartheta,\alpha)\big)\Big]\;.
\end{eqnarray*}
\end{Proposition}
\ni \textbf{Proof.}
Fix $(t,z,d) \in \Dc$, $\alpha \in \hat \Ac_{t,z,d}$ and $\vartheta \in \Tc_{[t,T]}$. From the definitions of the performance criterion and the value functions, the law of iterated conditional expectations, Markov property, flow property, and causality features of our model given by Lemma \ref{rem-ppte-model}, we get
\begin{eqnarray*}
J(t,z,\alpha) & = & \E\Big[ \E\Big[L \Big( Z^{\vartheta,Z^{t,z,\alpha}_\vartheta,\alpha^\vartheta}_T\Big) \Big| \Fc_\vartheta \Big]\Big]
~~= ~~ \E \Big[ J\big(\vartheta, Z_\vartheta^{t,z,\alpha},\alpha^\vartheta\big)\Big]\\
&\leq& \E \Big[ v^*\big(\vartheta,Z_\vartheta^{t,z,\alpha},d(\vartheta,\alpha)\big)\Big]\;.
\end{eqnarray*}
Since $\vartheta$ and $\alpha$ are arbitrary, we obtain the required inequality.
\ep
\subsection{Viscosity properties}\label{preuve3}
We first need the following comparison result. We recall that $\Zc=\R\times\R_+\times\R_+^*\times\R_+^*$ and $D_{t}$ is given by \reff{def D_t}.
\begin{Proposition}\label{Lemma-comp}
Fix $k\in\{0,\ldots,m-1\}$ (resp. $k\in\{m,\ldots,n-1\}$) and a continuous function $g:\Zc\times D_{t_{k+1}}\rightarrow\R$. Let $\underline w:\Dc_k\rightarrow \R$ be a viscosity subsolution to \reff{EDP0}-\reff{EDP1} satisfying
\begin{eqnarray}\label{CondTerm1}
\underline w(t_{k+1}^-,z,d) & \geq & \max\big\{\Nc_2 g(z,d)~,~\bar \Nc_2 g(z,d)\big\} \;,\quad (z,d)\in\Zc\times D_{t_{k+1}}\\ \nonumber
\mbox{( resp. } \underline w(t_{k+1}^-,z,d) & \geq & \max\big\{\Nc_1 g(z,d)~,~\bar \Nc_1 g(z,d)\big\} \;,\quad (z,d)\in\Zc\times D_{t_{k+1}} \mbox{ ) } \;,
\end{eqnarray}
and let $\bar w:\Dc_k\rightarrow \R$ be a viscosity supersolution to \reff{EDP0}-\reff{EDP1} satisfying
\begin{eqnarray}\label{CondTerm2}
\bar w(t_{k+1}^-,z,d) & \leq & \max\big\{\Nc_2 g(z,d)~,~\bar \Nc_2 g(z,d)\big\} \;,\quad (z,d)\in\Zc\times D_{t_{k+1}}\\ \nonumber
\mbox{( resp. } \bar w(t_{k+1}^-,z,d) & \leq & \max\big\{\Nc_1 g(z,d)~,~\bar \Nc_1 g(z,d)\big\} \;,\quad (z,d)\in\Zc\times D_{t_{k+1}} \mbox{ ) } \;.
\end{eqnarray}
Suppose there exists a constant $C>0$ such that
\begin{eqnarray}\label{growth-cond1}
\underline w(t,z,d) & \leq & x+C\big(1+|r|^4+|p|^4+|q|^4+|d|^4\big)\\ \label{growth-cond2}
\bar w(t,z,d) & \geq & x
\;,
\end{eqnarray}
for all $(t,z,d)\in\Dc_k$ with $z=(x,r,p,q)$.
Then $\underline w\leq \bar w$ on $\Dc_k$.
In particular, there exists at most one viscosity solution $w$ to \reff{EDP0}-\reff{EDP1}-\reff{CondTerm1}-\reff{CondTerm2} satisfying \reff{growth-cond1}-\reff{growth-cond2}, and such a solution $w$ is continuous on $[t_k,t_{k+1})\times \Zc$.
\end{Proposition}
The proof is postponed to the end of this section.
We are now able to state viscosity properties and uniqueness of $v$.
\paragraph{Viscosity property on $\Dc^1_k$.} Fix $k=0,\ldots,n-1$ and $(t,z,d)\in\Dc^1_k$ with $z=(x,r,p,q)$ and $r=0$.
1) We first prove the viscosity supersolution property. Let $\varphi\in C^{1,2}(\Dc_k)$ such that
\begin{eqnarray}\label{cond-fct-test r=0sursol}
(v_*-\varphi)(t,z,d) & = & \min_{\Dc_k} (v_*-\varphi) \;.
\end{eqnarray}
Consider a sequence $(s_\ell,z_\ell,d_\ell)_{\ell \in \N}$ of $\Dc_k$ such that
\begin{eqnarray*}
\big(s_\ell,z_\ell,d_\ell,v(s_\ell,z_\ell,d_\ell)\big) & \xrightarrow[\ell\rightarrow+\infty]{} & \big(t,z,d,v_*(t,z,d)\big) \;.
\end{eqnarray*}
Applying Proposition \ref{PropDP1} with $\vartheta=s_\ell+h_\ell$, where $h_\ell \in(0,s_{\ell+1}-s_\ell)$, we have for $\ell$ large enough
\begin{eqnarray*}
v(s_\ell,z_\ell,d_\ell) & \geq & \E\Big[v_*(s_\ell+h_\ell,Z^\ell_{s_\ell+h_\ell},d_\ell)\Big]\;,
\end{eqnarray*}
where $Z^\ell$ stands for $Z^{s_\ell,z_\ell,\alpha^0}$ with $\alpha^0$ the strategy that makes no interventions other than those already recorded in $d$. From \reff{cond-fct-test r=0sursol}, we get
\begin{eqnarray*}
\chi_\ell +\varphi(s_\ell,z_\ell,d_\ell)& \geq & \E\Big[\varphi(s_\ell+h_\ell,Z^\ell_{s_\ell+h_\ell},d_\ell)\Big]\;,
\end{eqnarray*}
with $\chi_\ell := v(s_\ell,z_\ell,d_\ell)-v_*(t,z,d) - \varphi(s_\ell,z_\ell,d_\ell) +\varphi(t,z,d)\rightarrow0$ as $\ell\rightarrow \infty$.
Taking $h_\ell=\sqrt{|\chi_\ell|}$ and applying Ito's formula we get
\begin{eqnarray*}
{1\over h_\ell} \E\Big[\int_{s_\ell}^{s_\ell+h_\ell}-\Lc\varphi(s,Z^\ell_{s},d_\ell)ds\Big] & \geq & -\sqrt{|\chi_\ell|}\;.
\end{eqnarray*}
Sending $\ell$ to $\infty$, we get the supersolution property from the mean value theorem.
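In more detail: since $h_\ell=\sqrt{|\chi_\ell|}\rightarrow0$ and $(s_\ell,z_\ell,d_\ell)\rightarrow(t,z,d)$, the mean value theorem applied to the time integral gives (the passage to the limit inside the expectation being understood exactly as in the argument above)
\begin{eqnarray*}
-\Lc\varphi(t,z,d) & = & \lim_{\ell\rightarrow+\infty}{1\over h_\ell} \E\Big[\int_{s_\ell}^{s_\ell+h_\ell}-\Lc\varphi(s,Z^\ell_{s},d_\ell)ds\Big]~~\geq~~\lim_{\ell\rightarrow+\infty}-\sqrt{|\chi_\ell|}~~=~~0\;,
\end{eqnarray*}
which is the required supersolution inequality at $(t,z,d)$.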
2) We turn to the viscosity subsolution property. Let $\varphi\in C^{1,2}(\Dc_k)$ such that
\begin{eqnarray}\label{cond-fct-test r=0soussol}
(v^*-\varphi)(t,z,d) & = & \max_{\Dc_k} (v^*-\varphi) \;.
\end{eqnarray}
Consider a sequence $(s_\ell,z_\ell,d_\ell)_{\ell \in \N}$ of $\Dc_k$ such that
\begin{eqnarray*}
\big(s_\ell,z_\ell,d_\ell,v(s_\ell,z_\ell,d_\ell)\big) & \xrightarrow[\ell\rightarrow+\infty]{} & \big(t,z,d,v^*(t,z,d)\big) \;.
\end{eqnarray*}
From Proposition \ref{DP2} we can find for each $\ell \in \N$ a control $\alpha^\ell=(t_i,\xi^\ell_i)_{N(t_\ell-\delta)+1\leq i\leq n}\cup (\tau_k^\ell,\zeta_k^\ell)_{k\geq1}\in\hat \Ac_{s_\ell,z_\ell,d_\ell}$ such that
\begin{eqnarray*}
v(s_\ell,z_\ell,d_\ell) & \leq & \E\Big[v^*(s_\ell+h_\ell,Z^\ell_{s_\ell+h_\ell},d)\Big]+{1\over \ell} \;,
\end{eqnarray*}
where $Z^\ell$ stands for $Z^{s_\ell,z_\ell,\alpha^\ell}$ and $h_\ell\in (0,s_{\ell+1}-s_\ell)$ is a constant that will be chosen later.
We first notice that
\begin{eqnarray}\label{conv zero r alpha}
\sup_{s\in[s_\ell,s_\ell+h_\ell]}|R^\ell_s| & \xrightarrow[\ell\rightarrow \infty]{\P-a.s.} & 0 \;.
\end{eqnarray}
Indeed, we have
\begin{eqnarray}\label{majRell}
0~~ \leq ~~ R^\ell_s & \leq & \bar R^\ell_s\;,\quad s\geq s_\ell
\end{eqnarray}
where $\bar R^\ell$ is given by
\begin{eqnarray*}
\bar R^\ell_s & = & r_\ell+\int_{s_\ell}^s \eta\bar R^\ell_u(\lambda-\bar R^\ell_u)du+\int_{s_\ell}^s\bar R^\ell_udB_u\;,\quad s\geq s_\ell \;.
\end{eqnarray*}
Since $r_\ell\xrightarrow[\ell\rightarrow \infty]{} r$ (and $r=0$), we have $\sup_{s\in[s_\ell,s_\ell+h_\ell]}|\bar R^\ell_s| \xrightarrow[\ell\rightarrow \infty]{}0$ and we get \reff{conv zero r alpha}. In particular, we deduce that up to a subsequence
\begin{eqnarray}\label{conv somme zeta}
\sum_{k\geq1} \zeta_k^\ell\mathds{1}_{\tau_k^\ell\leq s_\ell+h_\ell} & \xrightarrow[\ell\rightarrow+\infty]{\P-a.s.} & 0\;.
\end{eqnarray}
Indeed, we have from \reff{eq croissance 1 bis} and \reff{majRell}
\begin{eqnarray}
\sum_{k\geq1} \zeta_k^\ell\mathds{1}_{\tau_k^\ell\leq s_\ell+h_\ell} & \leq & r_\ell+\int_{s_\ell}^{s_\ell+h_\ell}\eta \lambda R^\ell_udu+\int_{s_\ell}^{s_\ell+h_\ell}\eta R^\ell_udB_u\nonumber\\\label{majsomme coupe}
& \leq & r_\ell+h_\ell\eta\lambda\sup_{s\in[s_\ell,s_\ell+h_\ell]}|\bar R^\ell_s| +\big|\int_{s_\ell}^{s_\ell+h_\ell}\eta R^\ell_udB_u\big|\;.
\end{eqnarray}
From the BDG inequality, \reff{majRell} and \reff{conv zero r alpha}, we get
\begin{eqnarray*}
\E\Big[\big|\int_{s_\ell}^{s_\ell+h_\ell}\eta R^\ell_udB_u\big|\big] & \xrightarrow[\ell\rightarrow+\infty]{} & 0\;,
\end{eqnarray*}
and hence, up to a subsequence, $\big|\int_{s_\ell}^{s_\ell+h_\ell}\eta R^\ell_udB_u\big|\rightarrow0$ $\P$-a.s. as $\ell\rightarrow+\infty$.
From this convergence, \reff{conv zero r alpha} and \reff{majsomme coupe}, we get \reff{conv somme zeta}.
We then define the process $\tilde X ^\ell$ by
\begin{eqnarray*}
\tilde X ^\ell _s & = & x_\ell+\sum_{k\geq1} P_{\tau_k^\ell}\zeta_k^\ell\mathds{1}_{\tau_k^\ell\leq s}
\end{eqnarray*}
and observe that from \reff{conv somme zeta}
\begin{eqnarray}\label{Xtilde domine X}
\tilde X ^\ell_{s_\ell+h_\ell} & \xrightarrow[\ell\rightarrow+\infty]{\P-a.s.} & x\;,\\
\tilde X ^\ell & \geq & X ^\ell\;.\nonumber
\end{eqnarray}
Since $v$ is nondecreasing in the $x$ component, the same holds for $v^*$. We get
\begin{eqnarray*}
v(s_\ell,z_\ell,d_\ell) & \leq & \E\Big[v^*(s_\ell+h_\ell,\tilde Z^\ell_{s_\ell+h_\ell},d)\Big]+{1\over \ell}
\end{eqnarray*}
where $\tilde Z^\ell=(\tilde X^\ell,R^\ell,P^\ell,Q^\ell)$. We then get
from \reff{cond-fct-test r=0soussol}
\begin{eqnarray*}
\chi_\ell +\varphi(s_\ell,z_\ell,d_\ell)& \leq & \E\Big[\varphi(s_\ell+h_\ell,\tilde Z^\ell_{s_\ell+h_\ell},d_\ell)\Big] +{1\over \ell}\;,
\end{eqnarray*}
where
$\chi_\ell := v(s_\ell,z_\ell,d_\ell)-v^*(t,z,d) - \varphi(s_\ell,z_\ell,d_\ell) + \varphi(t,z,d)\rightarrow0$ as $\ell\rightarrow+\infty$.
Applying Ito's formula and taking $h_\ell=\sqrt{|\chi_\ell|}$ we get by sending $\ell$ to $\infty$ as previously
\begin{eqnarray*}
-\Lc\varphi(t,z,d) & \leq & 0\;.
\end{eqnarray*}
\paragraph{Viscosity property on $\Dc^2_k$.} Fix $k=0,\ldots,n-1$ and $(t,z,d)\in\Dc^2_k$. Then $v(.,d)$ is the value function associated to an optimal impulse control problem with nonlocal operator $\Hc$. Using the same arguments as in the proof of Theorem 5.1 in \cite{MLP07}, we obtain that $v$ is a viscosity solution to \reff{EDP1} on $\Dc_k^2$.
\paragraph{Viscosity property and continuity on $\{t_k\}\times \Zc\times D_{t_k}$.}
We prove it by a backward induction on $k=0,\ldots,n$.
\ni $\bullet$ Suppose that $k=n$, i.e. $t_k=T$.
1) We first prove the subsolution property. Fix some $z=(x,r,p,q)\in \Zc$ and $d=(t_i,e_i)_{n-m+1\leq i \leq n}\in D_{t_{n}}$ and consider a sequence $(s_\ell,z_\ell,d_\ell)_{\ell \in \N}$ with $z_\ell=(x_\ell,r_\ell,p_\ell,q_\ell)$ and $d_\ell=(t_i,e_i^\ell)_{n-m+1\leq i \leq n}$ such that
\begin{eqnarray*}
(s_\ell,z_\ell,d_\ell,v(s_\ell,z_\ell,d_\ell)) & \xrightarrow[\ell\rightarrow+\infty]{} & (T^-,z,d,v_*(T^-,z,d)) \;.
\end{eqnarray*}
By considering a strategy $\alpha^\ell\in \hat \Ac_{s_\ell,z_\ell,d_\ell}$ with a single renewing order $(T,e)$ with $e \leq K$
and the stopping time $\vartheta=T$, we get from the definition of $v$
\begin{eqnarray*}
v(s_\ell,z_\ell,d_\ell) & \geq &
\E\Big[L\Big(\Gamma^{rn}_1 \big(\Gamma^{rn}_2(Z^{s_\ell,z_\ell,\alpha^\ell}_{T^-},e^\ell_{n-m+1}),e \big) \Big)\Big] \;.
\end{eqnarray*}
From the continuity of the functions $L$, $\Gamma^{rn}_1$ and $\Gamma^{rn}_2$, we get
\begin{eqnarray*}
L\Big(\Gamma^{rn}_1 \big(\Gamma^{rn}_2(Z^{s_\ell,z_\ell,\alpha^\ell}_{T^-},e^\ell_{n-m+1}),e \big)\Big) & \xrightarrow[\ell\rightarrow+\infty]{\P-a.s.} & L\Big(\Gamma^{rn}_1 \big(\Gamma^{rn}_2(z,e_{n-m+1}),e \big)\Big) \;.
\end{eqnarray*}
From Fatou's Lemma and since $e\leq K$ is arbitrarily chosen, we get by sending $\ell$ to $\infty$
\begin{eqnarray}\label{estim v_* 1}
v_*(T^-,z,d) & \geq & \Nc_1 L(z,d)\;.
\end{eqnarray}
Fix now $a\in[0, r]$ and denote $a_\ell= \min\{a,r_\ell\}$. By considering a strategy $\alpha^\ell$ with an immediate harvesting order $(s_\ell,a_\ell)$ and a single renewing order $(T,e)$ and $\vartheta=T$, we get from the definition of $v$
\begin{eqnarray*}
v(s_\ell,z_\ell,d_\ell) & \geq &
\E\Big[L\Big(\Gamma^{rn}_1 \big(\Gamma^{rn}_2 \big(Z^{s_\ell,\Gamma^c(z_\ell,a_\ell),\alpha^\ell}_{T^-},e_{n-m+1} \big),e \big) \Big)\Big] \;.
\end{eqnarray*}
From the continuity of the functions $L$, $\Gamma^c$, $\Gamma^{rn}_1$ and $\Gamma^{rn}_2$, we get
\begin{eqnarray*}
L\Big(\Gamma^{rn}_1 \big(\Gamma^{rn}_2 \big(Z^{s_\ell,\Gamma^c(z_\ell,a_\ell),\alpha^\ell}_{T^-},e_{n-m+1} \big),e \big) \Big) & \xrightarrow[\ell\rightarrow+\infty]{\P-a.s.} & L\Big(\Gamma^{rn}_1 \big(\Gamma^{rn}_2 \big(\Gamma^c(z,a),e_{n-m+1} \big),e \big) \Big)\;.
\end{eqnarray*}
From Fatou's Lemma and since $e\leq K$ and $a\in[0,r]$ are arbitrarily chosen, we get by sending $\ell$ to $\infty$
\begin{eqnarray}\label{estim v_*2}
v_*(T^-,z,d) & \geq & \bar \Nc_1 L(z,d)\;.
\end{eqnarray}
From \reff{estim v_* 1} and \reff{estim v_*2}, we get the subsolution property at $(T^-,z,d)$.
2) We turn to the supersolution property.
We argue by contradiction and suppose that there exist $z=(x,r,p,q)\in \Zc$ and $d\in D_{t_{n}}$ such that
\begin{eqnarray*}
v^*(T^-,z,d) & \geq & \max\Big\{\Nc_1L(z,d)~,~\bar \Nc_1L(z,d)\Big\} + 2\eps\;,
\end{eqnarray*}
with $\eps>0$.
We fix a sequence $(s_\ell,z_\ell,d_\ell)_{\ell \in \N}$ in $\Dc$ such that
\begin{eqnarray}\label{approxlimsup}
(s_\ell,z_\ell,d_\ell,v(s_\ell,z_\ell,d_\ell)) & \xrightarrow[\ell\rightarrow+\infty]{} & (T^-,z,d,v^*(T^-,z,d))\;.
\end{eqnarray}
We then can find $s>0$ and a sequence of smooth functions $(\varphi^h)_{h\geq 1}$ on $[T-s,T]\times \Zc \times D_{t_{n}}$ such that $\varphi^h\downarrow v^*$ on $[T-s,T)\times \Zc \times D_{t_{n}}$, $\varphi^h\downarrow v^*(.^-,.,.)$ on $\{T\}\times \Zc \times D_{t_{n}}$ as $h\uparrow+\infty$ and
\begin{eqnarray}\label{cond1 varphi}
\varphi^h(t',z',d') & \geq & \max\Big\{\Nc_1L(z',d')~,~\bar \Nc_1L(z',d')\Big\} + \eps\;,
\end{eqnarray}
on some neighborhood $\Bc^h$ of $(T,z,d)$ in $[t_{n},T]\times \Zc \times D_{t_{n}}$. Up to a subsequence, we can assume that $\Bc^h_\ell := [s_\ell,T]\times B((z_\ell,d_\ell), \delta_\ell^h)\subset \Bc^h$ for $\delta_\ell^h$ sufficiently small.
Since $v^*$ is locally bounded, there is some $\iota > 0$ such that $|v^*|\leq\iota$ on $\Bc^h$. We therefore get $\varphi^h\geq -\iota$ on $\Bc^h$. We then define the function $\varphi^h_\ell$ by
\begin{eqnarray*}
\varphi^h_\ell(t',z',d') & = & \varphi^h(t',z',d') + 3 \iota {|(z',d')-(z_\ell,d_\ell)|^2\over |\delta^h_\ell|^2 }+\sqrt{T-t'} \;,
\end{eqnarray*}
and we observe that
\begin{eqnarray}\label{cond2 varphi}
(v^*-\varphi_\ell^h) & \leq & -\iota~~<~~0 \quad \mbox{ on } [s_\ell,T]\times \partial B((z_\ell,d_\ell), \delta_\ell^h)\;.
\end{eqnarray}
Since ${\partial\sqrt{T-t}\over \partial t}\rightarrow -\infty$ as $t\rightarrow T^-$, we can choose $h$ large enough such that
\begin{eqnarray}\label{cond3 varphi}
-\Lc \varphi _\ell ^h & \geq & 0 ~\mbox{ on }~ \Bc_\ell^h\;.
\end{eqnarray}
From the definition of $v$ we can find $\alpha^\ell =(t_i,\xi_i^\ell)_{N(t_\ell-\delta)+1\leq i\leq n} \cup (\tau_k^\ell,\zeta_k^\ell)_{k\geq 1}\in \hat \Ac_{s_\ell,z_\ell,d_\ell}$ such that
\begin{eqnarray}\label{Dynpro1/n}
v(s_\ell,z_\ell,d_\ell) & \leq & \E\Big[ L\big(Z^{\ell}_{T} \big)\Big]+{1\over \ell}\;,
\end{eqnarray}
where $Z^\ell$ stands for $Z^{s_\ell,z_\ell,\alpha^\ell}$. Denote by $\theta_\ell^h = \inf\{s\geq s_\ell~:~(s,Z^\ell_s,d_\ell)\notin \Bc_\ell^h\}\wedge \tau^\ell_1$.
From Ito's formula, \reff{cond1 varphi}, \reff{cond2 varphi} and \reff{cond3 varphi} we have
\begin{eqnarray*}
\varphi_\ell^h(s_\ell,z_\ell,d_\ell) & \geq &
\E\Big[\Big(v\big(T,\Gamma^{rn}(\Gamma^c(Z^\ell_{T^-}, \zeta_1^\ell),\xi_{n-m}^\ell),d_\ell\cup(t_{n-m},\xi^\ell_{n-m}) \big)\mathds{1}_{\tau_1^\ell = T} \\
& & \qquad+v^*\big(\theta_\ell^h,\Gamma^c(Z^\ell_{{\theta_\ell^h}^-}, \zeta_1^\ell),d_\ell \big)\mathds{1}_{\tau_1^\ell < T}\Big)\mathds{1}_{\tau^\ell_1\leq \theta_\ell^h}\Big] \\
& & + \E\Big[\Big(v\big(T,\Gamma^{rn}(Z^\ell_{T^-}, \xi^\ell_{n-m}),d_\ell\cup(t_{n-m},\xi^\ell_{n-m}) \big)\mathds{1}_{ \theta_\ell^h=T} \\
& & \qquad+v^*\big(\theta_\ell^h,Z^\ell_{{\theta_\ell^h}^-},d_\ell \big)\mathds{1}_{ \theta_\ell^h<T}\Big)\mathds{1}_{\tau^\ell_1> \theta_\ell^h}\Big]
+\eps\wedge\iota\;.
\end{eqnarray*}
From \reff{Dynpro1/n} and the Markov property given by Lemma \ref{rem-ppte-model} (i), we get by taking the conditional expectation given $\Fc_{\theta_\ell^h}$,
\begin{eqnarray*}
v(s_\ell,z_\ell,d_\ell) & \leq &
\E\Big[\Big(v\big(T,\Gamma^{rn}(\Gamma^c(Z^\ell_{T^-}, \zeta_1^\ell),\xi_{n-m}^\ell),d_\ell\cup(t_{n-m},\xi^\ell_{n-m}) \big)\mathds{1}_{\tau_1^\ell = T}\\
& & \qquad +v^*\big(\theta_\ell^h,\Gamma^c(Z^\ell_{{\theta_\ell^h}^-}, \zeta_1^\ell),d_\ell \big)\mathds{1}_{\tau_1^\ell < T}\Big)\mathds{1}_{\tau^\ell_1\leq \theta_\ell^h}\Big] \\
& & + \E\Big[\Big(v\big(T,\Gamma^{rn}(Z^\ell_{T^-}, \xi^\ell_{n-m}),d_\ell\cup(t_{n-m},\xi^\ell_{n-m}) \big)\mathds{1}_{ \theta_\ell^h=T}\\
& & \qquad +v^*\big(\theta_\ell^h,Z^\ell_{{\theta_\ell^h}^-},d_\ell \big)\mathds{1}_{ \theta_\ell^h<T}\Big)\mathds{1}_{\tau_1^\ell> \theta_\ell^h}\Big]
+{1\over \ell} \;.
\end{eqnarray*}
We therefore get
\begin{eqnarray*}
\varphi^h(s_\ell,z_\ell,d_\ell) +\sqrt{T-s_\ell}~~=~~\varphi_\ell^h(s_\ell,z_\ell,d_\ell) & \geq & v(s_\ell,z_\ell,d_\ell)+\eps\wedge\iota-{1\over \ell} \;.
\end{eqnarray*}
Sending $\ell$ and $h$ to $+\infty$ we get a contradiction with \reff{approxlimsup}.
\ni $\bullet$ Suppose that the property holds true for $k+1$. From Proposition \ref{Lemma-comp}, the function $v$ is continuous on $\Dc_{k+1}$. Therefore, we get from Propositions \ref{PropDP1} and \ref{DP2}
\begin{eqnarray*}
v(t,z,d) & = & \sup_{\alpha\in\hat \Ac_{t,z,d}}\E\Big[v\big(t_{k+1},Z^{t,z,\alpha}_{t_{k+1}},d(t_{k+1},\alpha)\big)\Big]
\end{eqnarray*}
for all $(t,z,d)\in\Dc_k$.
We can then apply the same arguments as for $k=n$ and we get the viscosity property at $(t_{k+1}^-,z,d)$ for all $(z,d)\in \Zc\times D_{t_{k+1}}$.
\paragraph{Proof of Proposition \ref{Lemma-comp}.}
We fix the functions $\underline w$ and $\bar w$ as in the statement of Proposition \ref{Lemma-comp}. We then introduce, as is classically done, a perturbation of $\bar w$ to make it a strict supersolution.
\begin{Lemma}
Consider the function $\psi$ defined by
\begin{eqnarray*}
\psi(t,z,d) & = & x+pr+\tilde C_1e^{-\tilde C_2 t}\big( 1+|r|^4+|p|^4+|q|^4+|d|^4\big) \;,
\end{eqnarray*}
where $\tilde C_1$ and $\tilde C_2$ are two positive constants and define for $m\geq 1$ the function $\bar w_m$ on $\Dc_k$ by
\begin{eqnarray*}
\bar w_m & = & \bar w +{1\over m}\psi\;.
\end{eqnarray*}
Then there exist $\tilde C_1$ and $\tilde C_2$ (large enough) such that the following properties hold.
\begin{itemize}
\item The function $\bar w_m$ is a strict viscosity supersolution to \reff{EDP0}-\reff{EDP1} on $[t_k,t_{k+1})\times\Kc$ for any compact subset $\Kc$ of $\Zc\times D_{t_k}$ and any $m\geq 1$: there exists a constant $\delta>0$ (depending on $\Kc$ and $m$) such that
\begin{eqnarray*}
&&- \Lc \varphi(t,z,d) \geq \delta\\
&&\mbox{(resp. }\min\Big\{ - \Lc \varphi(t,z,d) ~,~
\bar w_m(t,z,d) - \Hc \bar w_m(t,z,d)
\Big\} \geq \delta\mbox{)}
\end{eqnarray*}
for any $(t,z,d)\in\Dc_k^1$ (resp. $(t,z,d)\in\Dc_k^2$) and $\varphi\in C^{1,2}(\Dc_k)$ such that $(z,d)\in \Kc$ and
\begin{eqnarray*}
(\bar w_m-\varphi)(t,z,d) & = & \min_{\Dc_k} (\bar w_m-\varphi)\;.
\end{eqnarray*}
\item We have
\begin{eqnarray}\label{lim-sursol-per}
\lim_{|(z,d)|\rightarrow+\infty}(\underline w-\bar w_m)(t,z,d) & = & -\infty\;.
\end{eqnarray}
\end{itemize}
\end{Lemma}
\ni \textbf{Proof.} A straightforward computation shows that
\begin{eqnarray*}
\psi-\Hc\psi & \geq & c_2>0\;,
\end{eqnarray*}
on $\Dc_{k}$.
Since $\bar w$ is a viscosity supersolution to \reff{EDP1}, we get
\begin{eqnarray}\label{estim obstacle wm}
\bar w_m-\Hc \bar w_m& \geq & {c_2\over m}~~=:~~\delta_0~~>~~0\;,
\end{eqnarray}
on $\Dc^2_k$.
Then, from the definition of the operator $\Lc$ we get for $\tilde C _2$ large enough
\begin{eqnarray*}
-\Lc\psi & > & 0\quad \mbox{ on } \Dc_{k}\;.
\end{eqnarray*}
In particular, since $-\Lc\psi$ is continuous, we get
\begin{eqnarray}\label{estim -L psi}
\inf_{[t_k,t_{k+1})\times\Kc} -{1\over m}\Lc\psi~~=:~~ \delta_1 & > & 0
\end{eqnarray}
for any compact subset $\Kc$ of $\Zc\times D_{t_k}$.
By writing the viscosity supersolution property of $\bar w$, we deduce from \reff{estim obstacle wm} and \reff{estim -L psi} the desired strict viscosity supersolution property for $\bar w_m$, with $\delta=\delta_0\wedge\delta_1$.
\ni Finally, from growth conditions \reff{growth-cond1} and \reff{growth-cond2}, we get \reff{lim-sursol-per} for $\tilde C_1$ large enough.
\ep
\ni To prove the comparison result, it suffices to prove that
\begin{eqnarray*}
\sup_{\Dc_k} \;(\underline w - \bar w_m ) & \leq & 0\;,
\end{eqnarray*}
for all $m\geq1$. We argue by contradiction and suppose that there exists $m\geq1$ such that
\begin{eqnarray*}
\bar \Delta~~:=~~ \sup_{\Dc_k}\; (\underline w - \bar w_m ) & > & 0\;.
\end{eqnarray*}
Since $\underline w-\bar w_m$ is u.s.c. on $\Dc_k$ and $(\underline w-\bar w_m)(t_{k+1}^-,.)\leq 0$, we get from \reff{lim-sursol-per} the existence of an open subset $\Oc$ of $\Zc\times D_{t_k}$ and $(t_0,z_0,d_0)\in[t_k,t_{k+1})\times \Oc$ such that $\bar \Oc$ is compact and
\begin{eqnarray*}
(\underline w-\bar w_m)(t_0,z_0,d_0) & = & \bar \Delta\;.
\end{eqnarray*}
We then consider the functions $\Phi_i$ and $\Theta_i$ defined on $[t_k,t_{k+1})\times \bar \Oc$ by
\begin{eqnarray*}
\Phi_i(t,t',z,z',d,d') & = & \underline w(t,z,d)-\bar w_m(t',z',d')-\Theta_i(t,t',z,z',d,d') \\
\Theta_i(t,t',z,z',d,d') & = & |t-t_0|^2+ |z-z_0|^{4}+|d-d_0|^{2}+ {i\over2}\big(|z-z'|^2+ |d-d'|^2\big)
\end{eqnarray*}
for all $(t,z,d),(t',z',d')\in [t_k,t_{k+1})\times\bar\Oc$ and $i\geq 1$. From the growth properties of $\underline w$ and $\bar w_m$, there exists $(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i)\in ([t_k,t_{k+1})\times \bar\Oc)^2$ such that
\begin{eqnarray*}
\bar \Delta_i & := & \sup_{[t_k,t_{k+1})\times \bar\Oc}\Phi_i~~=~~\Phi_i(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i)\;.
\end{eqnarray*}
By classical arguments we get, up to a subsequence, the following convergences
\begin{eqnarray}\nonumber
\big(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i\big) & \xrightarrow[i\rightarrow+\infty]{} & \big(t_0,t_0,z_0,z_0,d_0,d_0 \big)\;,\\\nonumber
\Phi_i(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i) & \xrightarrow[i\rightarrow+\infty]{} & (\underline w-\bar w_m)(t_0,z_0,d_0)\;,\\ \label{conv Theta i 0}
\Theta_i(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i) & \xrightarrow[i\rightarrow+\infty]{} & 0\;.
\end{eqnarray}
In particular, we have $\max\{\hat t_i,\hat t_i'\}<T$ for $i$ large enough.
We then apply Ishii's Lemma (see Theorem 8.3 in \cite{CIL}) to $(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i)$ which realizes the maximum of $\Phi_i$ and we get for any $\eps_i>0$, the existence of $(e_i,f_i,M_i)\in \bar J^{2,+}\underline w(\hat t_i,\hat z _i)$ and $(e'_i,f'_i,M'_i)\in \bar J^{2,-}\bar w_m(\hat t'_i,\hat z' _i)$ such that
\begin{eqnarray}\label{ishii-cond-1}
e_i ~~=~~{\partial \Theta_i\over \partial t}(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i) & & f_i~~=~~{\partial \Theta_i\over \partial z}(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i)\\ \label{ishii-cond-2}
e_i' ~~=~~{\partial \Theta_i\over \partial t'}(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i) & & f_i'~~=~~{\partial \Theta_i\over \partial z'}(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i)
\end{eqnarray}
and
\begin{equation}\label{ishii-cond-3}
\left(\begin{array}{cc}
M_i & 0 \\
0 & -M'_i
\end{array}\right)~~\leq~~ {\partial^2 \Theta_i\over \partial (z, z')^2}(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i)+{1\over i}\Big({\partial^2 \Theta_i\over \partial (z, z')^2}(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i)\Big)^2\;,
\end{equation}
for all $i\geq 1$. We then distinguish two cases.
\ni$\bullet$ Case 1: there exists a subsequence of $(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i)_{i \in \N}$ still denoted $(\hat t_i,\hat t'_i,\hat z_i,\hat z'_i,\hat d_i,\hat d'_i)_{i \in \N}$ such that
\begin{eqnarray*}
(\hat t_i,\hat z_i,\hat d_i) & \in & \Dc_k^2\quad \mbox{ for all }~i\geq 1\;.
\end{eqnarray*}
From the viscosity subsolution property of $\underline w$ and the strict viscosity supersolution property of $\bar w_m$ we have
\begin{eqnarray}\label{soussol}
\min\Big\{-\Lc[\hat z_i,\hat d_i,e_i,f_i,M_i]~;~(\underline w-\Hc \underline w)(\hat t_i,\hat z_i,\hat d_i)\Big\} & \leq & 0\\\label{surssol}
\min\Big\{-\Lc[\hat z'_i,\hat d'_i,e'_i,f_i',M_i']~;~(\bar w_m-\Hc \bar w_m)(\hat t'_i,\hat z'_i,\hat d'_i)\Big\} & \geq & {\delta\over m}
\end{eqnarray}
where
\begin{eqnarray*}
\Lc [z,d,e,f,M] & = & e + \mu p f_3+\rho qf_4+\eta r(\lambda-r)f_2 \\
& & +{1\over 2} \Big(\sigma^2p^2M_{3,3}+\varsigma^2q^2M_{4,4}+2\sigma\varsigma pq M_{3,4}+\gamma^2r^2 M_{2,2}\Big)
\end{eqnarray*}
for any $z\in \Zc$, $d\in D_{t_k}$, $e\in\R$, $f\in \R^4$ and any symmetric matrix $M\in \R^{4\times 4}$\;.
We then distinguish the following two possibilities in \reff{soussol}.
\ni \textbf{1.} Up to a subsequence we have
\begin{eqnarray*}
\underline w(\hat t_i ,\hat z _i,\hat d_i)-\Hc \underline w(\hat t_i,\hat z_i,\hat d_i) \leq 0 ~~\mbox{ for all }~i\geq 1.
\end{eqnarray*}
Using \reff{surssol}, we have $\bar w_m(\hat t'_i ,\hat z'_i,\hat d'_i ) - \Hc \bar w_m(\hat t'_i ,\hat z'_i,\hat d'_i ) \geq
{\delta \over m}$. Therefore, we get
\begin{eqnarray*}
\bar \Delta _i & \leq & \underline w (\hat t_i, \hat z_i, \hat d_i)-\bar w_m (\hat t'_i, \hat z_i', \hat d_i')~~\leq ~~ \Hc\underline w (\hat t_i, \hat z_i, \hat d_i)-\Hc\bar w_m (\hat t'_i, \hat z_i', \hat d_i')-{\delta\over m}\;.
\end{eqnarray*}
Sending $i$ to $+\infty$ we get
\begin{eqnarray*}
\bar \Delta & \leq & \limsup_{i\rightarrow+\infty}\Hc\underline w (\hat t_i, \hat z_i, \hat d_i)-\liminf_{i\rightarrow+\infty}\Hc\bar w_m (\hat t'_i, \hat z_i', \hat d_i')-{\delta\over m}\\
& \leq & \Hc\underline w (t_0, z_0, d_0)-\Hc\bar w_m (t_0, z_0, d_0)-{\delta\over m}\;,
\end{eqnarray*}
where we used the upper semicontinuity of $\Hc\underline w$ and the lower semicontinuity of $\Hc\bar w_m$. Since $\underline w$ is upper semicontinuous there exists $a_0\in[0,r_0]$ (with $z_0=(x_0,r_0,p_0,q_0)$) such that $\Hc \underline w(t_0,z_0,d_0)=\underline w(t_0,\Gamma^c(z_0,a_0),d_0)$. Therefore we get the following contradiction
\begin{eqnarray*}
\bar \Delta & \leq & \underline w(t_0,\Gamma^c(z_0,a_0),d_0) - \bar w_m(t_0,\Gamma^c(z_0,a_0),d_0) - {\delta\over m} ~~\leq~~\bar \Delta - {\delta\over m}\;.
\end{eqnarray*}
\ni \textbf{2.} Up to a subsequence we have
\begin{eqnarray*}
-\Lc[\hat z_i,\hat d_i,e_i,f_i,M_i] & \leq & 0~~\mbox{ for all }~i\geq 1.
\end{eqnarray*}
Using \reff{surssol} we get
\begin{eqnarray}\nonumber
-(e_i-e_i')- \mu \big(\hat p_i [f_i]_3-\hat p'_i [f'_i]_3\big)-\rho \big(\hat q_i[f_i]_4-\hat q'_i[f'_i]_4\big) & & \\ \nonumber-\eta\big( \hat r_i(\lambda-\hat r_i)[f_i]_2 - \hat r'_i(\lambda-\hat r'_i)[f'_i]_2\big) & & \\ \nonumber
-{1\over 2} \Big(\sigma^2\big(\hat p_i^2[M_i]_{3,3}-\hat p_i'^2[M'_i]_{3,3}\big)+\varsigma^2\big(\hat q_i^2[M_i]_{4,4}-\hat {q_i'}^2[M'_i]_{4,4}\big)& & \\
+2\sigma\varsigma \big(\hat p_i\hat q_i [M_i]_{3,4}-\hat p'_i\hat q'_i [M'_i]_{3,4}\big)+\gamma^2\big(\hat r_i^2 [M_i]_{2,2}-{\hat {r'}_i}^2 [M'_i]_{2,2}\big)\Big)& \leq & -{\delta \over m} \;.\label{ineqishii}
\end{eqnarray}
From \reff{ishii-cond-1}-\reff{ishii-cond-2}, we have
\begin{eqnarray*}
e_i ~ = ~ 2(\hat t_i-t_0) & & f_i ~ = ~ 4(\hat z_i-z_0)|\hat z_i-z_0|^2+i(\hat z_i-z_0) \\
e'_i ~ = ~ 2(\hat t'_i-t_0) & & f'_i ~ = ~ 4(\hat z'_i-z_0)|\hat z'_i-z_0|^2+i(\hat z'_i-z_0)
\end{eqnarray*}
and we obtain from \reff{conv Theta i 0} that
\begin{eqnarray}\nonumber
-(e_i-e_i')- \mu \big(\hat p_i [f_i]_3-\hat p'_i [f'_i]_3\big)-\rho \big(\hat q_i[f_i]_4-\hat q'_i[f'_i]_4\big) & & \\-\eta\big( \hat r_i(\lambda-\hat r_i)[f_i]_2 - \hat r'_i(\lambda-\hat r'_i)[f'_i]_2\big) & \xrightarrow[i\rightarrow+\infty]{} & 0\;.\label{first conv}
\end{eqnarray}
Moreover, by \reff{conv Theta i 0} and \reff{ishii-cond-3}, we have, using classical arguments,
\begin{eqnarray*}
\limsup_{i\rightarrow+\infty} \Big(\sigma^2\big(\hat p_i^2[M_i]_{3,3}-\hat p_i'^2[M'_i]_{3,3}\big)+\varsigma^2\big(\hat q_i^2[M_i]_{4,4}-\hat {q_i'}^2[M'_i]_{4,4}\big)\qquad\quad& & \\
+2\sigma\varsigma \big(\hat p_i\hat q_i [M_i]_{3,4}-\hat p'_i\hat q'_i [M'_i]_{3,4}\big)+\gamma^2\big(\hat r_i^2 [M_i]_{2,2}-{\hat {r'}_i}^2 [M'_i]_{2,2}\big)\Big)& \leq & 0\;.
\end{eqnarray*}
From this last inequality and \reff{first conv} and by sending $i$ to $+\infty$ in \reff{ineqishii} we get $0\leq -{\delta\over m}$, which is the required contradiction.
\ni$\bullet$ Case 2: we have
\begin{eqnarray*}
(\hat t_i,\hat z_i,\hat d_i) & \in & \Dc_k^1\quad \mbox{ for all }~i\geq 1\;.
\end{eqnarray*}
Then we are in the same situation as in the second possibility of Case 1 and we get a contradiction. \ep
\end{document} |
\begin{document}
\title{Fuzzification of Fractal Calculus}
\begin{abstract}
In this manuscript, fractal and fuzzy calculus are summarized. Fuzzy calculus is then formulated on fractal curves in terms of the fractal limit, continuity, derivative, and integral. The resulting fractal fuzzy calculus is a new framework that includes fractal fuzzy derivatives and fractal fuzzy integrals. In this framework, fuzzy number-valued functions with fractal support are the solutions of fractal fuzzy differential equations. Different kinds of fractal fuzzy differential equations are given and solved.
\end{abstract}
\textbf{Keywords: }Fractal fuzzy differential equations, fuzzy number-valued functions, fractal fuzzy derivatives, fractal fuzzy integral\\
\textbf{2010 Mathematics Subject Classification:} 26E50,~34A07,~28A80
\section{Introduction}
Fractal geometry mathematically describes complex shapes that are not described by Euclidean geometry \cite{b-1}. Such shapes, called fractals, are found in nature, for example clouds, mountains, and lightning \cite{b-6}. The most important properties of fractals are self-similarity and non-integer dimension. Fractals are non-differentiable in the sense of ordinary calculus since they have a rough rather than smooth structure. Their fractal dimensions exceed their topological dimensions and they appear similar at various scales \cite{falconer1999techniques,ma-12}. Fractals have different measures, like Hausdorff's measure. In this context, ordinary calculus, which is based on length, area, and volume, fails to define derivatives and integrals on them \cite{b-2}.\\
Many researchers have tried to formulate analysis on fractals in order to explain their physical properties \cite{samayoa2020fractal}, using, e.g., harmonic analysis \cite{ma-8,ma-13,freiberg2002harmonic}, measure theory \cite{ma-3}, fractional Brownian motion and probability-theoretical approaches \cite{ma-7,lee2022propagation}, fractional spaces \cite{stillinger1977axiomatic}, and fractional calculus \cite{ma-6,trifce2020fractional}.\\
In seminal papers, ordinary calculus was adapted to define derivatives and integrals of functions with fractal support, such as Cantor sets and Koch curves \cite{parvate2009calculus,AD-2,parvate2011calculus}. This new framework, which is a generalization of ordinary calculus, is called fractal calculus or $F^{\alpha}$-calculus. Fractal calculus is simple, constructive, and algorithmic, and it has been applied in physics \cite{Alireza-book}.
Fractal calculus was developed in different branches such as stability of solutions of fractal differential equations, nonlocal reverse Minkowski's fractal integral inequalities, and properties of staircase function
\cite{khalili2021hyers,khalili2019fractalcat,shapovalov2021invariance,rahman2021nonlocal,cetinkaya2021general,wibowo2021relationship}.
Fractal calculus was generalized to fractal cubes and tartan Cantor spaces, and Laplace equations on fractal cubes were solved \cite{golmankhaneh2018fractalt,khalili2021laplace}. Fractal derivatives and integrals were worked out for fractal interpolation functions and Weierstrass functions \cite{gowrisankar2021fractal}. Random variables, stochastic processes and stable distributions on fractals were defined, and the corresponding stochastic differential equations were solved
\cite{khalili2019random,golmankhaneh2020stochastic}. Fractal Laplace, Fourier and Sumudu transforms were defined in order to solve fractal differential equations, and they were applied to electrical circuits, economics and dynamics \cite{Rewid3,banchuin2022noise,golmankhaneh2016non,golmankhaneh2019sumudu,Fourier1,banchuin20224noise,khalili2021economic}.
Fractal anomalous diffusion has been formulated as a diffusion process in fractal media and it has a power law relationship between the mean squared displacement and time \cite{Alireza-Fernandez-1,golmankhaneh2018sub,golmankhaneh2021equilibrium}.\\
Fuzzy sets, numbers, fuzzy-valued functions, fuzzy derivatives, and integrals were introduced and applied to model the processes with uncertainty in science, physical science, engineering, and social science \cite{kloeden1994metric,allahviranloo2021fuzzy,anastassiou2010fuzzy,lakshmikantham2004theory}.
A linear second-order differential equation with constant coefficients and boundary values expressed by fuzzy numbers has been solved \cite{gasilov2014solution}. The fuzzy optimal control problem has been considered to optimize the expected values of the appropriate objective fuzzy functions \cite{zarei2022suboptimal}. A notion of differentiability of fuzzy number-valued functions based on the Hausdorff distance between fuzzy numbers has been suggested \cite{khastan2022new}. First-order linear fuzzy differential equations have been studied under the differential inclusions and strongly generalized differentiability approaches \cite{khastan2020linear}. Linear fuzzy differential equations have been investigated applying the concept of generalized differentiability, together with conditions for the existence of solutions \cite{khastan2016solutions}.
A first-order fuzzy differential equation (FDE) with a fuzzy initial value was solved \cite{sedaghatfar5method}.\\
In this paper, we introduce a new framework which is a generalization of fractal calculus to include fuzzy-valued functions. \\
The plan of the paper is as follows:\\
In Section \ref{1g}, we summarize the fractal calculus and fuzzy calculus. Fractal fuzzy calculus is formulated and defined in Section \ref{2g}. In Section \ref{3g}, $\alpha$-order fractal fuzzy differential equations are suggested and solved. Section \ref{4g} is devoted to conclusion.
\section{Preliminaries \label{1g}}
In this section we summarize the fractal calculus on fractal curves \cite{parvate2009calculus,AD-2,parvate2011calculus,Alireza-book}.
\subsection{Fractal calculus on fractal curves}
\begin{definition}
For a fractal curve $F$ and a subdivision $P_{[a,b]}$, where $[a,b]\subset [a_{0},b_{0}] \subset \mathbb{R}$, the mass function is defined by
\begin{equation}
\gamma^{\alpha}(F,a,b)=\lim_{\delta\rightarrow0} \inf_{|P|\leq \delta}\sum_{i=0}^{n-1}
\frac{|\mathbf{w}(t_{i+1})-\mathbf{w}(t_{i})|^{\alpha}}{\Gamma(\alpha+1)},
\end{equation}
where $|.|$ denotes the Euclidean norm on $\mathbb{R}^{n}$, $1\leq \alpha\leq n$, $P_{[a,b]}=\{a=t_{0},...,t_{n}=b\}$, $|P|=\max_{0\leq i\leq n-1}(t_{i+1}-t_{i})$ for a subdivision $P$, and $\mathbf{w}:[a_{0},b_{0}]\rightarrow\mathbb{R}^{n}$ is the parametrization of $F$.
\end{definition}
\begin{definition}
The $\gamma$-dimension of $F$ is defined by
\begin{align}
\dim_{\gamma}(F)&=\inf\{\alpha:\gamma^{\alpha}(F,a,b)=0\}\nonumber\\&=
\sup\{\alpha:\gamma^{\alpha}(F,a,b)=\infty\}
\end{align}
\end{definition}
\begin{definition}
The rise function of a fractal curve $F$ is defined by
\begin{equation}
S_{F}^{\alpha}(u)=\left\{
\begin{array}{ll}
\gamma^{\alpha}(F,p_{0},u), & u\geq p_{0} ; \\
-\gamma^{\alpha}(F,u,p_{0}), & u<p_{0}.
\end{array}
\right.
\end{equation}
where $u\in [a_{0},b_{0}]$ and $p_{0}\in[a_{0},b_{0}]$ is an arbitrary fixed point; $S_{F}^{\alpha}(u)$ gives the mass of the fractal curve $F$ up to the point $\mathbf{w}(u)$.
\end{definition}
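As a sanity check of these definitions in the smooth case (a special case which is not needed in the sequel): if $F$ is a continuously differentiable, hence rectifiable, curve and $\alpha=1$, then $\Gamma(\alpha+1)=\Gamma(2)=1$ and the chord sums converge to the arc length as $|P|\rightarrow0$, so that
\begin{equation*}
\gamma^{1}(F,a,b)=\mbox{arc length of the section of } F \mbox{ between } \mathbf{w}(a) \mbox{ and } \mathbf{w}(b),
\end{equation*}
and $S_{F}^{1}(u)$ is the (signed) arc length measured from $\mathbf{w}(p_{0})$.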
\begin{definition}
Let $f:F\rightarrow \mathbb{R}$ be a function. The $F$-limit of $f$ as $\theta'\rightarrow \theta$ through points of $F$ is $l$, if for every given $\epsilon>0$ there exists $\delta>0$ such that
\begin{equation}
\theta'\in F~~and~~|\theta'-\theta|<\delta\Rightarrow |f(\theta')-l|<\epsilon
\end{equation}
or
\begin{equation}
\underset{ \theta'\rightarrow \theta}{F_{-}lim}f(\theta')=l.
\end{equation}
\end{definition}
\begin{definition}
A function $f:F\rightarrow \mathbb{R}$ is said to be $F$-continuous at $\theta$ if
\begin{equation}
\underset{ \theta'\rightarrow \theta}{F_{-}lim}f(\theta')=f(\theta).
\end{equation}
\end{definition}
\begin{definition}
The fractal derivative, or $F^{\alpha}$-derivative, is defined by
\begin{equation}
D_{F}^{\alpha}f(\theta)=\underset{ \theta'\rightarrow \theta}{F_{-}lim}~
\frac{f(\theta')-f(\theta)}{J(\theta')-J(\theta)},
\end{equation}
where $F_{-}lim$ indicates the fractal limit (see \cite{parvate2011calculus}), $\mathbf{w}(u)=\theta$ and $S_{F}^{\alpha}(u)=J(\theta)$.
\end{definition}
\begin{remark}
We note that the Euclidean distance from the origin up to a point $\theta=\mathbf{w}(u)$ is given by $L(\theta)=L(\mathbf{w}(u))=|\mathbf{w}(u)|.$
\end{remark}
\begin{definition}
The fractal integral or $F^{\alpha}$-integral is defined by
\begin{align}
\int_{C(a,b)}f(\theta)d_{F}^{\alpha}\theta&=\sup_{P[a,b]}\sum_{i=0}^{n-1}
\inf_{\theta\in C(t_{i},t_{i+1})}f(\theta)(J(\theta_{i+1})-J(\theta_{i}))\nonumber\\&=
\inf_{P[a,b]}\sum_{i=0}^{n-1}
\sup_{\theta\in C(t_{i},t_{i+1})}f(\theta)(J(\theta_{i+1})-J(\theta_{i})),
\end{align}
where $t_{i}=\mathbf{w}^{-1}(\theta_{i})$, and $C(a,b)$ is the section of the curve lying between points $\mathbf{w}(a)$ and $\mathbf{w}(b)$ on the fractal curve $F$ \cite{parvate2011calculus}.
\end{definition}
\subsection{Fuzzy calculus on real-line}
In this section, we review fuzzy calculus, which will be used in the fuzzification of fractal calculus \cite{kloeden1994metric,allahviranloo2021fuzzy,anastassiou2010fuzzy,lakshmikantham2004theory}.\\
A generalized Hukuhara difference for fuzzy sets and a new generalized differentiability concepts for fuzzy valued functions were given in \cite{bede2013generalized,stefanini2009generalized}.
\begin{definition}
Let $X\neq\emptyset$. Then a fuzzy set $A$ in $X$ is characterized by its membership function $u_{A}(x):X\rightarrow [0,1]$; for each $x\in X$, $u_{A}(x)$ is the degree of membership of the element $x$ in the fuzzy set $A$.
\end{definition}
\begin{definition}
Let $A$ be a fuzzy subset of $\mathbb{R}$ with membership function $u_{A}(x):\mathbb{R}\rightarrow [0,1]$. Then
$A$ is called a fuzzy number if it satisfies the following axioms:
\begin{enumerate}
\item $A$ is normal. It means that there exists $x_{0}$ in $ \mathbb{R}$, such that $u_{A}(x_{0})=1$.
\item $A$ is convex, namely,
\begin{equation}
u_{A}(tx+(1-t)y)\geq \min \{u_{A}(x),u_{A}(y)\},~\forall t\in [0,1],~ x,~ y\in \mathbb{R}.
\end{equation}
\item $u_{A}(x)$ is upper semicontinuous on $\mathbb{R}$, i.e., for every given $\epsilon>0$, there exists $\delta>0$ such that
\begin{equation}
|x-x_{0}|<\delta \Rightarrow u_{A}(x)-u_{A}(x_{0})<\epsilon.
\end{equation}
\item The support of $u_{A}(x)$ is compact, i.e.,
\begin{equation}
supp(u_{A}(x))=cl_{\mathbb{R}}\{x\in \mathbb{R};u_{A}(x)> 0\}
\end{equation}
is compact.
\end{enumerate}
\end{definition}
\begin{definition}\label{oo}
A fuzzy number $A$ is determined by a pair of functions $A=(A^{-}(r),A^{+}(r))$, with $A^{-}(r),A^{+}(r):[0,1]\rightarrow \mathbb{R}$, that satisfy the following conditions:
\begin{enumerate}
\item $A^{-}(r)=A_{r}^{-}\in \mathbb{R}$ is a bounded, monotonically increasing, left-continuous function on $(0,1]$ that is right-continuous at $0$.
\item $A^{+}(r)=A_{r}^{+}\in \mathbb{R}$ is a bounded, monotonically decreasing, left-continuous function on $(0,1]$ that is right-continuous at $0$.
\item For $r\in (0,1]$ we have $A^{-}(r)\leq A^{+}(r)$.
\end{enumerate}
Definition \ref{oo} is called the parametric form of a fuzzy number.
\end{definition}
\begin{definition}
The $r$-cut (or level-wise form) of a fuzzy number $A$ is defined by
\begin{equation}
[A]_{r}=A_{r}=\{x\in \mathbb{R};u_{A}(x)\geq r\}
\end{equation}
where $A_{r}$ is the closed interval $A_{r}=[A_{r}^{-},A_{r}^{+}]$ for any $r\in[0,1]$. We note that $[A]_{0}=supp(u_{A}(x))$, and $F_{\mathbb{R}}$ denotes the space of fuzzy numbers.
\end{definition}
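For instance (a standard example, included here only as an illustration), the triangular fuzzy number $A=(1,2,3)$ has membership function and $r$-cuts
\begin{equation*}
u_{A}(x)=\left\{
\begin{array}{ll}
x-1, & 1\leq x\leq 2 ; \\
3-x, & 2< x\leq 3 ; \\
0, & \mbox{otherwise},
\end{array}
\right.
\qquad
[A]_{r}=[1+r,3-r],\quad r\in[0,1],
\end{equation*}
so that $[A]_{1}=\{2\}$ and $[A]_{0}=[1,3]$.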
\begin{definition}\label{yyhbvgt7}
For every $A,B \in F_{\mathbb{R}}$ and $\lambda\in \mathbb{R},~r\in [0,1]$, addition and scalar multiplication are defined by
\begin{equation}
(A\oplus B)_{r}=A_{r}+B_{r},~~~(\lambda\odot A)_{r}=\lambda A_{r}.
\end{equation}
\end{definition}
\begin{definition}
The Hausdorff distance between two fuzzy numbers $A,~B$ using their $r$-cuts is defined by
\begin{equation}
d_{H}(A,B)=\sup_{0\leq r\leq 1} \max\{|A_{r}^{-}-B_{r}^{-}|,|A_{r}^{+}-B_{r}^{+}|\}
\end{equation}
\end{definition}
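For example, for the triangular fuzzy numbers $A=(1,2,3)$ and $B=(2,3,4)$ (a hand-computed illustration), we have $A_{r}=[1+r,3-r]$ and $B_{r}=[2+r,4-r]$, so that
\begin{equation*}
d_{H}(A,B)=\sup_{0\leq r\leq 1}\max\big\{|(1+r)-(2+r)|,|(3-r)-(4-r)|\big\}=1.
\end{equation*}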
\begin{remark}
The set of fuzzy numbers $F_{\mathbb{R}}$, endowed with the metric $d_{H}$ and the addition and scalar multiplication given in Definition \ref{yyhbvgt7}, is a complete metric space.
\end{remark}
\begin{definition}
Consider $A,~B\in F_{\mathbb{R}}$. The Hukuhara difference of $A,B$ is defined by
\begin{equation}
C=A\ominus B,
\end{equation}
if $A=B\oplus C$.
\end{definition}
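For example (illustration only), if $A=(1,3,5)$ and $B=(0,1,2)$, so that $A_{r}=[1+2r,5-2r]$ and $B_{r}=[r,2-r]$, then $A\ominus B$ is the triangular fuzzy number $C=(1,2,3)$, since
\begin{equation*}
B_{r}+C_{r}=[r+(1+r),(2-r)+(3-r)]=[1+2r,5-2r]=A_{r},\qquad r\in[0,1].
\end{equation*}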
\begin{definition}
Consider a fuzzy number-valued function $f:\mathbb{R}\rightarrow F_{\mathbb{R}}$ and $x_{0}\in \mathbb{R}$. Then $l$ is called the limit of $f$ at the point $x_{0}$ if for every given $\epsilon>0$, there exists $\delta>0$ such that \cite{anastassiou2010fuzzy,ghaffari2022generalized,allahviranloo2021fuzzy}
\begin{equation}
0<|x-x_{0}|<\delta\Rightarrow d_{H}(f(x),l)<\epsilon,
\end{equation}
or,
\begin{equation}
\lim_{x\rightarrow x_{0}} f(x)=l,
\end{equation}
if it exists, where $d_{H}$ is the Hausdorff distance.
\end{definition}
\begin{definition}
The fuzzy function $f$ is called fuzzy continuous if \cite{anastassiou2010fuzzy,ghaffari2022generalized}
\begin{equation}
\lim_{x\rightarrow x_{0}} f(x)=f(x_{0}).
\end{equation}
\end{definition}
\begin{definition}\label{km8558uyh}
A fuzzy number-valued function $f:\mathbb{R}\rightarrow F_{\mathbb{R}}$ is called Hukuhara differentiable if there exists $f'(x)\in F_{\mathbb{R}}$ such that
\begin{itemize}
\item Case 1.($I$-differentiable)
\begin{equation}
f'(x)=\lim_{y\rightarrow x} \frac{f(y)\ominus f(x)}{y-x},~~~y>x
\end{equation}
\item Case 2.($II$-differentiable)
\begin{equation}
f'(x)=\lim_{y\rightarrow x} \frac{f(x)\ominus f(y)}{y-x},~~~y>x
\end{equation}
where $f'(x)$ is called the fuzzy derivative of $f$ at $x$.
\end{itemize}
\end{definition}
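As a simple example (not taken from the cited works), let $A\in F_{\mathbb{R}}$ and $f(x)=A\odot x$ for $x>0$. For $y>x>0$ one has $f(y)=f(x)\oplus\big((y-x)\odot A\big)$ level-wise, so the Hukuhara difference in Case 1 exists and
\begin{equation*}
f'(x)=\lim_{y\rightarrow x} \frac{f(y)\ominus f(x)}{y-x}=\lim_{y\rightarrow x} \frac{(y-x)\odot A}{y-x}=A,
\end{equation*}
that is, $f$ is $I$-differentiable with constant fuzzy derivative $A$.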
\begin{theorem}
Let a fuzzy number valued function $f:\mathbb{R}\rightarrow F_{\mathbb{R}}$ be denoted by $f(x)=(\underline{f}(x,r),\overline{f}(x,r))$ for each $r\in [0,1]$ \cite{chalco2008new}. Then \cite{khastan2011variation,khastan2016solutions}
\begin{enumerate}
\item If $f$ is $I$-differentiable, then we have
\begin{equation}
f'(x)=(\underline{f}'(x,r),\overline{f}'(x,r)).
\end{equation}
\item If $f$ is $II$-differentiable, then we have
\begin{equation}
f'(x)=(\overline{f}'(x,r),\underline{f}'(x,r)).
\end{equation}
\end{enumerate}
\end{theorem}
\begin{definition}
Let $f(x)$ be a fuzzy number-valued function. Then the fuzzy Riemann integral is defined as \cite{allahviranloo2021fuzzy}
\begin{equation}
J=FR\int_{a}^{b}f(x)dx=\oplus\sum_{i=0}^{n}\Delta x_{i}\odot f(x_{i}),
\end{equation}
where $\Delta x_{i}=x_{i+1}-x_{i}$ and $\{a=x_{0}<x_{1}<...< x_{n}=b\}$ is a partition of $I=[a,b]$. \\
The fuzzy Riemann integral of $f(x)$ is $J$ if for every given $\epsilon>0$, there exists $\delta>0$ such that, for every partition of $[a,b]$ with $\max_{i}(x_{i+1}-x_{i})<\delta$,
\begin{equation}
d_{H}\bigg(\oplus\sum_{i=0}^{n}\Delta x_{i}\odot f(x_{i}),J\bigg)<\epsilon.
\end{equation}
where $J$ is a fuzzy number.
\end{definition}
\begin{definition}
Let $f:I\rightarrow F_{\mathbb{R}}$ be a triangular number-valued function and $f(x)=(f_{1}(x),f_{2}(x),f_{3}(x))$ and $x_{0}\in I$. Then the fuzzy integral is defined by \cite{ghaffari2022generalized}
\begin{equation}
\int_{a}^{b}f(x)dx=\bigg(\int_{a}^{b}f_{1}(x)dx,\int_{a}^{b}f_{2}(x)dx,\int_{a}^{b}f_{3}(x)dx\bigg)
\end{equation}
\end{definition}
\section{Fuzzy fractal calculus on fractal curves \label{2g}}
In this section, we introduce fractal fuzzy calculus.
\begin{definition}
Let $f(\theta):F\rightarrow F_{\mathbb{R}}$ be a fuzzy number-valued function on a fractal curve $F$. Then the fuzzy $F$-limit of $f$ at $\theta_{0}$ through $F$ is $l$, if for a given $\epsilon>0$, there exists $\delta>0$, such that
\begin{equation}
\theta\in F,~~~and~~~|\theta-\theta_{0}|<\delta\Rightarrow d_{H}(f(\theta),l)<\epsilon,
\end{equation}
or
\begin{equation}
\underset{ \theta\rightarrow \theta_{0}}{FF_{-}lim}f(\theta)=l
\end{equation}
where $d_{H}$ is the Hausdorff distance.
\end{definition}
\begin{definition}
Let $f(\theta):F\rightarrow F_{\mathbb{R}}$ be a fuzzy number-valued function on $F$. Then, $f$ is called fuzzy $F$-continuous if
\begin{equation}
\underset{ \theta\rightarrow \theta_{0}}{FF_{-}lim}f(\theta)=f(\theta_{0}).
\end{equation}
\end{definition}
\begin{definition}\label{Fuzzyfractalderivative}
Let $f(\theta):F\rightarrow F_{\mathbb{R}}$ be a fuzzy number-valued function. Then the fractal Hukuhara derivative of $f$ at $\theta_{0}\in F$ is defined by
\begin{itemize}
\item Case 1. ($I$-$F^{\alpha}$-differentiable)
\begin{equation}
D_{F,H}^{\alpha}f(\theta_{0})=\underset{ \theta\rightarrow \theta_{0}}{FF_{-}lim}~ \frac{f(\theta)\ominus f(\theta_{0})}{J(\theta)-J(\theta_{0})},~~~\theta>\theta_{0}.
\end{equation}
\item Case 2. ($II$-$F^{\alpha}$-differentiable)
\begin{equation}
D_{F,H}^{\alpha}f(\theta_{0})=\underset{ \theta\rightarrow \theta_{0}}{FF_{-}lim}~ \frac{f(\theta_{0})\ominus f(\theta)}{J(\theta)-J(\theta_{0})},~~~~\theta>\theta_{0}.
\end{equation}
where $D_{F,H}^{\alpha}f(\theta_{0})$ is a fuzzy number.
\end{itemize}
\end{definition}
\begin{definition}
Let $f(\theta):F\rightarrow F_{\mathbb{R}}$ be a fuzzy number-valued function on $F$. Then the fractal fuzzy Riemann integral is defined as \cite{allahviranloo2021fuzzy}
\begin{equation}
J=FFR\int_{C(a,b)}f(\theta)d_{F}^{\alpha}\theta=\oplus\sum_{i=0}^{n}\Delta J_{i}\odot f(\theta_{i}),
\end{equation}
where $\Delta J_{i}=J(\theta_{i+1})-J(\theta_{i})$ and $\theta_{i}=\mathbf{w}(t_{i})$ for a subdivision $P_{[a,b]}=\{a=t_{0},...,t_{n}=b\}$. The fractal fuzzy Riemann integral of $f(\theta)$ is $J$ if for a given $\epsilon>0$, there exists $\delta>0$ such that, for every such subdivision with $\max_{i}(t_{i+1}-t_{i})<\delta$,
\begin{equation}
d_{H}\bigg(\oplus\sum_{i=0}^{n}\Delta J_{i}\odot f(\theta_{i}),J\bigg)<\epsilon.
\end{equation}
\end{definition}
\begin{definition}
Let $f:F\rightarrow F_{\mathbb{R}}$ be a fractal triangular number-valued function, $f(\theta)=(f_{1}(\theta),f_{2}(\theta),f_{3}(\theta))$, and $\theta_{0}\in F$, then
\begin{equation}
\int_{C(a,b)}f(\theta)d_{F}^{\alpha}\theta=
\bigg(\int_{C(a,b)}f_{1}(\theta)d_{F}^{\alpha}\theta,
\int_{C(a,b)}f_{2}(\theta)d_{F}^{\alpha}\theta,
\int_{C(a,b)}f_{3}(\theta)d_{F}^{\alpha}\theta\bigg).
\end{equation}
\end{definition}
\section{Fractal fuzzy differential equations \label{3g}}
First-order linear fuzzy differential equations were solved using the generalized differentiability concept \cite{khastan2011variation,khastan2016solutions}.
In this section, an $\alpha$-order fuzzy differential equation (FDE) is given. Then it is replaced by its equivalent parametric form, and the resulting system of two fractal differential equations is solved.\\
Consider the following fractal fuzzy differential equation with initial condition:
\begin{equation}\label{yyh85}
D_{F,H}^{\alpha}x(\theta)=f(J(\theta),x(\theta)),~~~\tilde{x}(\theta_{0})=
\tilde{x}_{0},~~~\theta\in F,
\end{equation}
where
$f:F\times F_{\mathbb{R}}\rightarrow F_{\mathbb{R}}$ is a fuzzy-valued function and $\tilde{x}_{0}\in F_{\mathbb{R}}$. To solve Eq.\eqref{yyh85}, we first solve the $1$-cut and the $0$-cut of Eq.\eqref{yyh85}, which take the following form
\begin{equation}\label{gggsaws1}
\left\{
\begin{array}{ll}
(D_{F,H}^{\alpha}x)^{[1]}(\theta)=f^{[1]}(J(\theta),x(\theta)), & \\
&\\
x^{[1]}(\theta_{0})=\tilde{x}_{0}^{[1]}~~\theta_{0}\in[0,\Theta]. &
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{gggsaws2}
\left\{
\begin{array}{ll}
(D_{F,H}^{\alpha}x)^{[0]}(\theta)=f^{[0]}(J(\theta),x(\theta)), & \\
&\\
x^{[0]}(\theta_{0})=\tilde{x}_{0}^{[0]}~~\theta_{0}\in[0,\Theta]. &
\end{array}
\right.
\end{equation}
Then by solving Eqs.\eqref{gggsaws1} and \eqref{gggsaws2}, we can find $\tilde{x}(\theta)$ which is the solution of the fractal fuzzy differential equation Eq.\eqref{yyh85}.\\
Here we consider two cases:\\
Case (I): Suppose that $\tilde{x}(\theta)$ is $I$-$F^{\alpha}$-differentiable. Then, we can write
\begin{equation}\label{dds521}
D_{F,H}^{\alpha}x(\theta)=[D_{F,H}^{\alpha}
\underline{x}(\theta,r),D_{F,H}^{\alpha}\overline{x}(\theta,r)].
\end{equation}
In view of Eqs.\eqref{yyh85} and \eqref{dds521} for $r\in [0,1]$, we have
\begin{equation}
\left\{
\begin{array}{ll}
D_{F,H}^{\alpha}
\underline{x}(\theta,r)=\underline{f}(\theta,r) & \theta_{0}\leq\theta \leq \Theta \\
&\\
D_{F,H}^{\alpha}\overline{x}(\theta,r)=\overline{f}(\theta,r) & \theta_{0}\leq\theta \leq \Theta.
\end{array}
\right.
\end{equation}
Hence
\begin{equation}
\left\{
\begin{array}{ll}
(1-r)
D_{F,H}^{\alpha}\underline{x^{[0]}}(\theta)+r D_{F,H}^{\alpha}\underline{x^{[1]}}(\theta)= (1-r)\underline{f^{[0]}}(\theta)+r(\underline{f^{[1]}})(\theta), & \\
& \\
(1-r)
D_{F,H}^{\alpha}\overline{x^{[0]}}(\theta)+r D_{F,H}^{\alpha}\overline{x^{[1]}}(\theta)= (1-r)\overline{f^{[0]}}(\theta)+r(\overline{f^{[1]}})(\theta),
& \\
& \\
\underline{x}(\theta_{0},r)=(1-r)\underline{x^{[0]}}(\theta_{0})+r \underline{x^{[1]}}(\theta_{0}),& \\
& \\
\overline{x}(\theta_{0},r)=(1-r)\overline{x^{[0]}}(\theta_{0})+r \overline{x^{[1]}}(\theta_{0}).
\end{array}
\right.
\end{equation}
It follows that
\begin{equation}\label{AQWZ1}
\left\{
\begin{array}{ll}
D_{F,H}^{\alpha}\underline{x^{[0]}}(\theta)=\underline{f^{[0]}}(\theta), & \\
& \\
D_{F,H}^{\alpha}\overline{x^{[0]}}(\theta)=\overline{f^{[0]}}(\theta) &\\
&\\
\underline{x^{[0]}}(\theta_{0})=\underline{x_{0}}^{[0]}&\\
&\\
\overline{x^{[0]}}(\theta_{0})=\overline{x_{0}}^{[0]},
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{AQWZ2}
\left\{
\begin{array}{ll}
D_{F,H}^{\alpha}\underline{x^{[1]}}(\theta)=\underline{f^{[1]}}(\theta), & \\
& \\
D_{F,H}^{\alpha}\overline{x^{[1]}}(\theta)=\overline{f^{[1]}}(\theta) &\\
&\\
\underline{x^{[1]}}(\theta_{0})=\underline{x_{0}}^{[1]}&\\
&\\
\overline{x^{[1]}}(\theta_{0})=\overline{x_{0}}^{[1]},
\end{array}
\right.
\end{equation}
One can find $\underline{x^{[0]}}(\theta),\overline{x^{[0]}}(\theta),
\underline{x^{[1]}}(\theta),\overline{x^{[1]}}(\theta)$ by solving Eqs.\eqref{AQWZ1} and \eqref{AQWZ2}. Therefore we obtain the solution of Eq.\eqref{yyh85} using $0$-cut and $1$-cut solutions as follows:
\begin{equation}\label{Fistr}
\tilde{x}(\theta)=[\underline{x}(\theta,r),\overline{x}(\theta,r)]=[(1-r)
\underline{x^{[0]}}(\theta)+r \underline{x^{[1]}}(\theta),(1-r)
\overline{x^{[0]}}(\theta)+r\overline{x^{[1]}}(\theta)].
\end{equation}
Case (II). Let $\tilde{x}(\theta)$ be $II$-$F^{\alpha}$-differentiable. Then, we can write
\begin{equation}\label{rdfrw}
D_{F,H}^{\alpha}x(\theta)=[D_{F,H}^{\alpha}
\overline{x}(\theta,r),D_{F,H}^{\alpha}\underline{x}(\theta,r)].
\end{equation}
As in Case (I), we have
\begin{equation}\label{IIAQWZ1}
\left\{
\begin{array}{ll}
D_{F,H}^{\alpha}\underline{x^{[0]}}(\theta)=\overline{f^{[0]}}(\theta), & \\
& \\
D_{F,H}^{\alpha}\overline{x^{[0]}}(\theta)=\underline{f^{[0]}}(\theta) &\\
&\\
\underline{x^{[0]}}(\theta_{0})=\overline{x_{0}}^{[0]}&\\
&\\
\overline{x^{[0]}}(\theta_{0})=\underline{x_{0}}^{[0]},
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{IIAQWZ2}
\left\{
\begin{array}{ll}
D_{F,H}^{\alpha}\underline{x^{[1]}}(\theta)=\overline{f^{[1]}}(\theta), & \\
& \\
D_{F,H}^{\alpha}\overline{x^{[1]}}(\theta)=\underline{f^{[1]}}(\theta) &\\
&\\
\underline{x^{[1]}}(\theta_{0})=\overline{x_{0}}^{[1]}&\\
&\\
\overline{x^{[1]}}(\theta_{0})=\underline{x_{0}}^{[1]},
\end{array}
\right.
\end{equation}
By solving the ordinary fractal differential equations \eqref{IIAQWZ1} and \eqref{IIAQWZ2}, one may obtain the solution of FDE \eqref{yyh85} which is $II$-$F^{\alpha}$-differentiable as
\begin{equation}\label{Mohham}
\tilde{x}(\theta)=[\overline{x}(\theta,r),\underline{x}(\theta,r)].
\end{equation}
\begin{example}
Consider the fractal fuzzy differential equation
\begin{equation}\label{UUUU}
D_{F,H}^{\alpha}x(\theta)=x(\theta)+\tilde{c},
\end{equation}
with the conditions
\begin{equation}
x(0,r)=[r,2-r],~~\tilde{c}=[r-1,1-r], r\in[0,1].
\end{equation}
Here we consider two cases.\\
Case I. Let $ \tilde{x}(\theta)$ be $I$-$F^{\alpha}$-differentiable. Then by using Eqs.\eqref{AQWZ1} and \eqref{AQWZ2} we arrive at
\begin{equation}\label{IIAQWZ1iu}
\left\{
\begin{array}{ll}
D_{F,H}^{\alpha}\underline{x^{[0]}}(\theta)=\underline{x^{[0]}}(\theta)-1 & \\
& \\
D_{F,H}^{\alpha}\overline{x^{[0]}}(\theta)=\overline{x^{[0]}}(\theta)+1 &\\
&\\
\underline{x^{[0]}}(\theta_{0})=0&\\
&\\
\overline{x^{[0]}}(\theta_{0})=2
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{IIAQWZ2pp}
\left\{
\begin{array}{ll}
D_{F,H}^{\alpha}\underline{x^{[1]}}(\theta)=\underline{x^{[1]}}(\theta) & \\
& \\
D_{F,H}^{\alpha}\overline{x^{[1]}}(\theta)=\overline{x^{[1]}}(\theta) &\\
&\\
\underline{x^{[1]}}(\theta_{0})=1&\\
&\\
\overline{x^{[1]}}(\theta_{0})=1
\end{array}
\right.
\end{equation}
By solving Eqs.\eqref{IIAQWZ1iu} and \eqref{IIAQWZ2pp}, we obtain
\begin{align}\label{Man2}
\underline{x^{[0]}}(\theta)&=-\exp(J(\theta))+1,~~~
\overline{x^{[0]}}(\theta)=3\exp(J(\theta))-1\nonumber\\
\underline{x^{[1]}}(\theta)&=\exp(J(\theta)),~~~~~~
\overline{x^{[1]}}(\theta)=\exp(J(\theta))
\end{align}
By substituting Eq.\eqref{Man2} into Eq.\eqref{Fistr}, we get
\begin{equation}
\tilde{x}(\theta)=[\exp(J(\theta))(2r-1)-r+1,r-\exp(J(\theta))(2r-3)-1].
\end{equation}
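As a quick consistency check of this expression, at $r=1$ the interval collapses to a point,
\begin{equation*}
\tilde{x}(\theta)\big|_{r=1}=[\exp(J(\theta)),\exp(J(\theta))],
\end{equation*}
which is the solution of the crisp problem obtained from Eq.\eqref{UUUU} at $r=1$, namely $D_{F}^{\alpha}x(\theta)=x(\theta)$ with initial value $1$.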
Case II. Let $\tilde{x}(\theta)$ be $II$-$F^{\alpha}$-differentiable. Then, by utilizing Eqs.\eqref{IIAQWZ1} and \eqref{IIAQWZ2} we get
\begin{equation}\label{kooo1}
\left\{
\begin{array}{ll}
D_{F,H}^{\alpha}\underline{x^{[0]}}(\theta)=\overline{x^{[0]}}(\theta)+1, & \\
& \\
D_{F,H}^{\alpha}\overline{x^{[0]}}(\theta)=\underline{x^{[0]}}(\theta)-1 &\\
&\\
\underline{x^{[0]}}(\theta_{0})=0&\\
&\\
\overline{x^{[0]}}(\theta_{0})=2,
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{kooo2}
\left\{
\begin{array}{ll}
D_{F,H}^{\alpha}\underline{x^{[1]}}(\theta)=\overline{x^{[1]}}(\theta), & \\
& \\
D_{F,H}^{\alpha}\overline{x^{[1]}}(\theta)=\underline{x^{[1]}}(\theta) &\\
&\\
\underline{x^{[1]}}(\theta_{0})=1&\\
&\\
\overline{x^{[1]}}(\theta_{0})=1,
\end{array}
\right.
\end{equation}
Solving Eqs.\eqref{kooo1} and \eqref{kooo2} and combining the $0$-cut and $1$-cut solutions as in Case I gives
\begin{align}\label{Jann}
\underline{x}(\theta,r)&=
\exp(J(\theta))-r+\frac{(2r-2)}{\exp(J(\theta))}+1,~~~\overline{x}(\theta,r)=r
+\exp(J(\theta))-\frac{(2r-2)}{\exp(J(\theta))}-1\nonumber\\
\underline{x^{[1]}}(\theta)&=\exp(J(\theta)),~~~\hspace{3.5cm} \overline{x^{[1]}}(\theta)=\exp(J(\theta)).
\end{align}
By substituting Eq.\eqref{Jann} into Eq.\eqref{Mohham}, we find the solution of Eq.\eqref{UUUU} as follows
\begin{equation}\label{wwwq}
\tilde{x}(\theta)=\bigg[\exp(J(\theta))-r+\frac{2r-2}{\exp(J(\theta))}+1,
r+\exp(J(\theta))-\frac{2r-2}{\exp(J(\theta))}-1\bigg].
\end{equation}
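One may check directly that this expression is consistent with the data of Eq.\eqref{UUUU}: at the initial point (where $J(\theta)=0$) it reduces to
\begin{equation*}
[1-r+(2r-2)+1,\;r+1-(2r-2)-1]=[r,2-r]=x(0,r),
\end{equation*}
and at $r=1$ both endpoints again collapse to $\exp(J(\theta))$.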
In Figure \ref{dd}, we have plotted Eq.\eqref{wwwq} for the case of $r=0.3$.
\begin{figure}
\caption{Graph of Eq.\eqref{wwwq} for $r=0.3$.}
\label{dd}
\end{figure}
\end{example}
\begin{example}
Consider the fractal differential equation with fuzzy boundary conditions:
\begin{equation}\label{Redree42587}
\left\{
\begin{array}{ll}
(D_{F}^{\alpha})^{2}x(\theta)-4D_{F}^{\alpha}x(\theta)+4 x(\theta)=1-2 J^{2}(\theta), & \\
x(0)=(2,3,4), & \\
x(1)=(1,2,2.5),
\end{array}
\right.
\end{equation}
Let the solution be as $x(\theta)=x_{cr}(\theta)+\tilde{x}_{un}(\theta)$, where $x_{cr}$ is the crisp part of the solution and $\tilde{x}_{un}(\theta)$ is the uncertainty part \cite{gasilov2014solution}. First, we solve the following equation
\begin{equation}\label{EDxs}
\left\{
\begin{array}{ll}
(D_{F}^{\alpha})^{2}x(\theta)-4D_{F}^{\alpha}x(\theta)+4 x(\theta)=1-2 J^{2}(\theta), & \\
x(0)=3, & \\
x(1)=2,
\end{array}
\right.
\end{equation}
The crisp solution of Eq.\eqref{EDxs} is
\begin{equation}
x_{cr}(\theta)=-\frac{1}{2}(J(\theta)+1)^{2}+3.5(1-J(\theta))\exp(2J(\theta))+
4J(\theta)\exp(2(J(\theta)-1))
\end{equation}
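One can verify the boundary conditions of Eq.\eqref{EDxs} directly from this formula, recalling that the boundary data are imposed where $J(\theta)=0$ and $J(\theta)=1$:
\begin{equation*}
x_{cr}\big|_{J=0}=-\frac{1}{2}+3.5=3,\qquad
x_{cr}\big|_{J=1}=-2+0+4=2.
\end{equation*}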
To determine $\tilde{x}_{un}(\theta)$, let us consider the fractal homogeneous equation with fuzzy boundary conditions:
\begin{equation}\label{Wsaq8}
\left\{
\begin{array}{ll}
(D_{F}^{\alpha})^{2}x(\theta)-4D_{F}^{\alpha}x(\theta)+4 x(\theta)= 0, & \\
x(0)=(-1,0,1), & \\
x(1)=(-1,0,0.5)
\end{array}
\right.
\end{equation}
The linearly independent solutions of Eq.\eqref{Wsaq8} are $x_{1}(\theta)=\exp(2J(\theta))$ and $x_{2}(\theta)=J(\theta)\exp(2J(\theta))$.
Then we have
\begin{align}
&\mathbf{p}=(\exp(2J(\theta)),J(\theta)\exp(2J(\theta))),~~
\mathcal{M}=\left(
\begin{array}{cc}
1 & 0 \\
e^{2} & e^{2} \\
\end{array}
\right),\nonumber\\&
~~~\mathbf{q}=\mathbf{p}\mathcal{M}^{-1}=\big((1-J(\theta))\exp(2J(\theta)),
J(\theta)\exp(2(J(\theta)-1))\big).
\end{align}
Thus we obtain
\begin{equation}
\tilde{x}_{un}(\theta)=(1-J(\theta))\exp(2J(\theta))(-1,0,1)+
J(\theta)\exp(2(J(\theta)-1))(-1,0,0.5).
\end{equation}
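Note that the pair $\mathbf{q}=(q_{1},q_{2})$ constructed above satisfies $q_{1}|_{J=0}=1$, $q_{1}|_{J=1}=0$, $q_{2}|_{J=0}=0$ and $q_{2}|_{J=1}=1$, so this expression indeed matches the fuzzy boundary data of Eq.\eqref{Wsaq8}:
\begin{equation*}
\tilde{x}_{un}\big|_{J=0}=(-1,0,1),\qquad \tilde{x}_{un}\big|_{J=1}=(-1,0,0.5).
\end{equation*}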
To represent $ \tilde{x}_{un}(\theta)$ by $\kappa$-cuts, we can write
\begin{equation}\label{ttwqsrffd}
\tilde{x}_{un,\kappa}(\theta)=(1-\kappa)[\underline{x_{un,0}}(\theta),
\overline{x_{un,0}}(\theta)].
\end{equation}
The $\kappa$-cuts representation solution of Eq.\eqref{Redree42587} is
\begin{equation}
x_{\kappa}(\theta)=x_{cr}(\theta)+(1-\kappa)[\underline{x_{un,0}}(\theta),
\overline{x_{un,0}}(\theta)].
\end{equation}
\end{example}
\section{Conclusion \label{4g}}
In this paper, we have formulated fractal fuzzy calculus, which is a generalization of fractal calculus to fuzzy number-valued functions. Fractal calculus is a generalization of ordinary calculus which involves functions with a fractal domain, such as Cantor sets and Koch curves. Fractal fuzzy differential equations can be used to model uncertainty in the initial conditions or dynamics of media with a fractal structure. The research in this direction is in progress.
\\
\textbf{Acknowledgements:} Cristina Serpa acknowledges partial funding by national funds through FCT-Foundation for Science and Technology, project reference: UIDB/04561/2020.
\end{document} |
\begin{document}
\twocolumn[
\icmltitle{Understanding Failures in Out-of-Distribution Detection with \\Deep Generative Models}
\begin{icmlauthorlist}
\icmlauthor{Lily H. Zhang}{nyu}
\icmlauthor{Mark Goldstein}{nyu}
\icmlauthor{Rajesh Ranganath}{nyu}
\end{icmlauthorlist}
\icmlaffiliation{nyu}{New York University}
\icmlcorrespondingauthor{Lily H. Zhang}{[email protected]}
\icmlkeywords{Machine Learning, ICML}
\vskip 0.3in
]
\printAffiliationsAndNotice{}
\begin{abstract}
\Glspl{dgm} seem a natural fit for detecting \gls{ood} inputs,
but such models have been shown to assign higher probabilities or densities to \gls{ood} images than images from the training distribution.
In this work, we explain why this behavior should be attributed to model misestimation.
We first prove that no method can guarantee performance beyond random chance without assumptions on which out-distributions are relevant.
We then interrogate the \emph{typical set hypothesis}, the claim
that relevant out-distributions can lie in high likelihood regions of the data distribution, and that \,\vert\,ls{ood} detection
should be defined based on the data distribution's typical set.
We highlight the consequences implied by assuming support overlap between in- and out-distributions, as well as the arbitrariness of the typical set for \,\vert\,ls{ood} detection.
Our results suggest that estimation error is a more plausible explanation than the misalignment between likelihood-based \,\vert\,ls{ood} detection and out-distributions of interest, and we illustrate how even minimal estimation error can lead to \,\vert\,ls{ood} detection failures, yielding implications for future work in
deep generative modeling and \,\vert\,ls{ood} detection.
\end{abstract}
\section{Introduction}
\glsresetall Predictive models have little performance guarantee on inputs that differ from the training distribution. Thus, detecting such \gls{ood} inputs is an important step towards safe and reliable machine learning \citep{Amodei2016ConcretePI}.
\Gls{ood} detection has been formalized as the task of identifying points with low likelihood\footnote{As is common in the literature, we will use ``likelihood'' to refer to the probability or density of a sample under a distribution, even though the statistical definition of the term refers to a function of the parameters given fixed data.}
under the training distribution, estimated via a model \cite{Bishop}.
\Glspl{dgm}
estimate complex distributions from often high-dimensional inputs
and produce high-quality simulations \cite{Salimans2017PixelCNNIT, Chen2018PixelSNAILAI, Kingma2018GlowGF}.
However, explicit likelihood \glspl{dgm} (e.g. autoregressive models, normalizing flows) have been shown to
assign higher likelihoods to unrelated inputs than even those from the training distribution.
For instance, a model trained on Fashion-\acrshort{mnist}, an image dataset
of clothing items, assigns higher likelihoods to \acrshort{mnist} images. The same is true for the training distribution (or in-distribution) \acrshort{cifar-10}, a dataset of animals and vehicles, and the \gls{ood} distribution (or out-distribution) \acrshort{svhn}, a dataset of house numbers.
For such dataset pairs, \gls{ood} detection based on explicit likelihood
\glspl{dgm} performs worse than random chance
\cite{Nalisnick2019DoDG, Hendrycks2019DeepAD}.
This observation has motivated many alternative methods for \gls{ood} detection which employ the same \glspl{dgm} but modify how they are used \cite{Ren2019LikelihoodRF, Serr2020InputCA, Schirrmeister2020UnderstandingAD, Choi2018WAICBW,DeepMind2019DetectingOI, Wang2020FurtherAO, Morningstar2020DensityOS}.
While these methods have been successful in empirical benchmarks, we
prove that all methods are powerless against some set of out-distributions (\Cref{sec:prop1}).
This result applies to any detection method, regardless of whether \glspl{dgm} are involved, and highlights the need to specify the out-distributions of interest for the task.
Some works have suggested that the failure of deep generative models to assign low likelihoods to \gls{ood} points is not a model failure;
rather, out-distributions of interest can lie in high likelihood regions of the data distribution.
To explain why points with high likelihood are never observed among samples from the data distribution, these works
mention that points assigned high density or probability under the training distribution may lie within regions of small overall probability.
A method that identifies low likelihood points as \gls{ood} will fail to detect such out-distributions, so existing
works suggest instead to flag as \gls{ood} any point which falls outside a distribution's typical set, a set that contains the majority of the probability mass of a distribution but not necessarily the highest density or probability points \cite{Choi2018WAICBW, DeepMind2019DetectingOI, Wang2020FurtherAO, Morningstar2020DensityOS}.
This \emph{typical set hypothesis}---the idea that relevant out-distributions are determined based on the typical set of a distribution---assumes that \gls{ood} regions can lie in the support of the data distribution.
In this work, we highlight problems with the typical set hypothesis (\Cref{sec:typical_set}).
First, the hypothesis assumes that relevant out-distributions (e.g. \acrshort{svhn}) can overlap in support with the data distribution (e.g. \acrshort{cifar-10}). However, when the in- and out-distribution overlap, there is an irresolvable upper bound on \gls{ood} detection performance (\Cref{sec:single}), and
even a perfect model of the in-distribution can yield worse \gls{ood} detection than a misestimated one (\Cref{sec:decomp}).
A preference for the typical set over other similar sets is also arbitrary for \gls{ood} detection (\Cref{sec:typical_arbitrary}).
These results highlight the implausibility of the typical set hypothesis and its support overlap assumption.
In our experiments, we offer empirical demonstrations of the analyses presented.
First, given an \gls{ood} detection method and a specific in-distribution, we provide examples of out-distributions that the method fails to distinguish from the in-distribution
(\Cref{sec:tsexp}). Then, we showcase an instance where a partially-trained \gls{dgm} yields better \gls{ood} detection than the true distribution of the data when supports overlap between the in- and out-distribution (\Cref{sec:decexp}).
Based on the implausible implications of the typical set hypothesis, we conclude that the high likelihoods assigned to certain \gls{ood} images are instead due to model estimation error.
First, it is reasonable to believe that existing dataset pairs have disjoint (rather than overlapping) support, as one would not expect to draw a house number from the \acrshort{cifar-10} distribution, or a digit from the Fashion-\acrshort{mnist} distribution, even given infinite samples. This implies that existing models mistakenly assign high probability or density where they should be assigning zero.
We demonstrate how even a model with good generation quality and held-out likelihood can still exhibit \gls{ood} failures (\Cref{sec:gen_vs_det}). We then discuss what this perspective of estimation error implies for \glspl{dgm} and \gls{ood} detection (\Cref{sec:important}). We illustrate how recent methods that were motivated by the typical set hypothesis may instead correct for model estimation error, and we suggest future modeling directions to improve \glspl{dgm} for \gls{ood} detection.
\section{Defining OOD Detection}
\label{sec:definition}
\Gls{ood} detection has been defined as the task of identifying ``whether a test example is from a different distribution from the training data'' \cite{Hendrycks2017ABF}.
Here, we illustrate why it is critical to specify the out-distributions to consider. In fact, without any constraints on out-distributions, the task of \gls{ood} detection is impossible.
In this section, we first formalize the broadest form of \gls{ood} detection as a single-sample goodness-of-fit test (\Cref{sec:ood_ht}). We then prove that no method can guarantee better than random chance performance under this task definition (\Cref{sec:prop1}). We conclude that any formal analysis of an \gls{ood} detection method must take into account the out-distributions which define the task.
\subsection{OOD Detection as Goodness-of-fit Testing}
\label{sec:ood_ht}
In its unconstrained form, \gls{ood} detection can be formalized
as a single-sample hypothesis test \cite{DeepMind2019DetectingOI, Serr2020InputCA, Wang2020FurtherAO}; given a sample $\textbf{x}$, the test decides whether to reject the null hypothesis that a sample was drawn from the data distribution $P$, in favor of an alternative hypothesis that the sample came from a distribution other than $P$:
\begin{align*}
H_0&: \textbf{x} \sim P \\
H_A&: \textbf{x} \sim Q \in \mathcal{Q}, P \not\in \mathcal{Q}.
\end{align*}
The decision to reject or not reject the null hypothesis (i.e. mark a sample \gls{ood}) is based on the value of a predetermined test statistic $\phi$, which can be any arbitrary function of a single sample $\textbf{x}$. In our analysis, we focus on test statistics which directly utilize knowledge of the input distribution $P$ or an estimate of it via a deep generative model $P_\theta$.
$P_\theta$ can either be a continuous distribution, as is the case for normalizing flows \cite{Dinh2015NICENI,Dinh2017DensityEU,Kingma2018GlowGF}, or a discrete distribution, as is the case for existing autoregressive models \cite{Salimans2017PixelCNNIT,Oord2016ConditionalIG, Oord2016PixelRN, Chen2018PixelSNAILAI}.
An example of a test statistic is $\phi = \log p_\theta$, where
$p_\theta(\textbf{x})$ denotes either the probability of an observation for a discrete distribution or the density of an observation with respect to the Lebesgue measure for a continuous distribution. This test statistic is often accompanied by the rejection rule $\phi(\textbf{x}) < k$, i.e. reject as \gls{ood} points where $\log p_\theta(\textbf{x})$ is low.
We discuss various choices of test statistics, including those which do and do not use an estimate of $P$, in the related work in \Cref{sec:related}.
In order to determine whether an input should be processed through a given classifier,
\gls{ood} tests must make decisions on a single sample at a time. This stands in contrast with most goodness-of-fit testing setups that make a decision based on a collection of samples. We discuss the challenges of this single-sample formulation in \Cref{sec:prop1}.
The quality of a test is measured by its ability to correctly detect \gls{ood} samples without flagging in-distribution samples as \gls{ood}.
The proportion of \gls{ood} samples detected, known as the true positive rate or power of a test, can be plotted as a function of the proportion of in-distribution samples incorrectly rejected, known as the false positive rate or size of a test. This is equivalent to a \gls{roc} curve, and the \gls{auc} is the area under the power vs. size curve.
Using this general formulation of \gls{ood} detection, we
can now interrogate \gls{ood} detection methods more broadly by abstracting away the choice of test statistic $\phi$.
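As a concrete illustration of this evaluation protocol (a self-contained sketch, not code from our experiments), the snippet below computes the power-versus-size curve and the \gls{auc} for a generic test statistic, using synthetic Gaussian scores as placeholders for the statistic's values on in- and out-of-distribution samples.
\begin{verbatim}
# Sketch: power vs. size curve and AUC for a single-sample test statistic.
# Rejection rule: flag x as OOD when phi(x) < k, sweeping the threshold k.
import numpy as np

def power_size_auc(phi_in, phi_out):
    ks = np.sort(np.concatenate([phi_in, phi_out]))
    size = np.array([(phi_in < k).mean() for k in ks])    # false positive rate
    power = np.array([(phi_out < k).mean() for k in ks])  # true positive rate
    auc = (phi_in[:, None] > phi_out[None, :]).mean()     # Pr(phi(x) > phi(y))
    return size, power, auc

rng = np.random.default_rng(0)
phi_in = rng.normal(0.0, 1.0, 2000)    # placeholder statistic values, x ~ P
phi_out = rng.normal(-2.0, 1.0, 2000)  # placeholder statistic values, y ~ Q
print(power_size_auc(phi_in, phi_out)[2])
\end{verbatim}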
\subsection{OOD Detection as a Single-Sample Distributional Test is Impossible}
\label{sec:prop1}
\gls{ood} detection defined as a single-sample goodness-of-fit test is a challenging classification task given that the out-distributions are unknown.
To remove the effect of misestimation, we consider test statistics which can use knowledge of the true in-distribution $P$ via its density or probability function, denoted $\phi_p: \mathcal{X} \rightarrow \mathbb{R}$. We now present
an impossibility result: no test can do well against all alternatives.
\begin{proposition}
\label{prop:prop1}
Let $P$ be the distribution under the null hypothesis $H_0$. Let $\mu$ be the measure associated with the distribution of test statistic $\phi_p(\textbf{x})$ under the null. Then, assuming the conditional $\textbf{x} \,\vert\, \phi_p(\textbf{x})$ is not degenerate on a $\mu$-non-measure zero set, there exists a set of alternative distributions $Q \in \mathcal{Q}$ where $Q \neq_d P$ and the test has power equal to the false positive rate. In other words, the test does no better than random guessing.
\end{proposition}
\begin{proof}
See \Cref{sec: prop1_proof}. The proof sketch is as follows: First we construct distributions $Q \in \mathcal{Q}$ for which the distribution of $\phi_p(\textbf{x})$ is the same but the distribution of $\textbf{x}|\phi_p(\textbf{x})$
differs when $\textbf{x} \sim P$ and $\textbf{x} \sim Q$
for all $\phi_p(\textbf{x})$ in a non-measure-zero set $\Phi$. This implies $q(\textbf{x}) \neq_d p(\textbf{x})$.
We show that the power of the test for any rejection rule for such a pair $P, Q$ is equal to the false positive rate for all false positive rates, which is equivalent to random guessing.
\end{proof}
\Cref{prop:prop1} demonstrates that no test statistic can be useful for all possible out-distributions.
In the context of single-sample distributional testing,
all proposed test statistics trade off power against different out-distributions.
This means that, without additional assumptions on the family of alternative hypotheses for \gls{ood} detection, no test statistic can be uniformly better across out-distributions than another.
To build intuition behind the proposition, imagine that $\mathcal{X}$ is the space of $d$-dimensional reals $\mathbb{R}^d$ and the in-distribution has a density with respect to the
$d$-dimensional Lebesgue measure.
The test statistic is a function that maps from $\mathbb{R}^d \to \mathbb{R}$; thus, the statistic is a one-dimensional projection of the distribution. In the same way that not all differences in two multivariate distributions can be assessed by looking at a single marginal, not all the differences between $P$ and $Q$ can be assessed by looking at their projections on the test statistic. This result is focused on the single-sample formulation of \,\vert\,ls{ood} detection and holds even for test statistics which are consistent in power asymptotically.
\paragraph{An Example: Using $\log p$ as a test statistic.}
When the test statistic is the log probability or density,
the set of alternative distributions $Q \in \mathcal{Q}$ that cannot be distinguished from $P$ are those which yield the same distribution of log probabilities or densities under $P$. These are distributions which collapse any of the level sets of $P$. As an example in the discrete case, imagine a countable sample space and a distribution $P$ where $c$ of the elements are given the same probability.
Any distribution $Q$ which moves the total probability of the $c$ elements in $P$ to any subset of these elements will share the same distribution of probability under $P$.
The analogue for continuous distributions $\mathbb{R}^d$ is collapsing level sets of dimension $\mathbb{R}^{d-1}$.
We illustrate the phenomenon with an example in the continuous case in \Cref{sec:tsexp}.
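The discrete case can be made concrete with a small numerical sketch (ours, for illustration only): take a five-element sample space in which four elements form a level set of $P$, and let $Q$ pile that level set's mass onto a single element. The distribution of $\log p(\textbf{x})$ is then identical under $P$ and $Q$.
\begin{verbatim}
# Sketch: collapsing a level set leaves the distribution of log p(x) unchanged.
import numpy as np

p = np.array([0.4, 0.15, 0.15, 0.15, 0.15])  # elements 1-4 form a level set of P
q = np.array([0.4, 0.60, 0.00, 0.00, 0.00])  # Q moves that mass onto element 1

rng = np.random.default_rng(0)
phi = np.log(p)                              # test statistic under the true P
for name, dist in [("P", p), ("Q", q)]:
    x = rng.choice(len(dist), size=100000, p=dist)
    vals, counts = np.unique(phi[x], return_counts=True)
    print(name, np.round(vals, 3), np.round(counts / counts.sum(), 3))
# Both lines report mass ~0.6 on log(0.15) and ~0.4 on log(0.4).
\end{verbatim}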
\Cref{prop:prop1} emphasizes the need to specify the family of relevant out-distributions for \gls{ood} detection. For instance, likelihood ratio test statistics \cite{Ren2019LikelihoodRF, Serr2020InputCA, Schirrmeister2020UnderstandingAD} are optimal when the alternative hypothesis is correctly specified, but like all test statistics, they trade off power against some other alternative; therefore, comparing different likelihood ratios (and test statistics in general) is only useful when the family of out-distributions is formalized and standardized.
Like any test statistic, $\phi = \log p$ works well for some out-distributions (those whose samples have zero or low likelihood under the data distribution) but poorly for others (those whose samples have high likelihood under the data distribution).
Whether the latter out-distributions are relevant for \gls{ood} detection is a central question underlying our analysis of the typical set hypothesis.
\section{The Implausibility of the Typical Set Hypothesis}
\label{sec:ts}
\citet{Nalisnick2019DoDG, Hendrycks2019DeepAD} observed that \glspl{dgm} trained on \acrshort{cifar-10} samples assign higher likelihoods to \acrshort{svhn} images, and \glspl{dgm} trained on Fashion-\acrshort{mnist} samples assign higher likelihoods to \acrshort{mnist} images.
The explanation for these observations is either (A) such \gls{ood} samples do have high likelihoods under the \emph{data} distribution, or (B) these \gls{ood} samples only have high likelihoods under the \emph{model} distribution due to estimation error.
The typical set hypothesis argues for the former,
that out-distributions can lie in high probability or density regions of the data distribution.
A test based on the $\log p$ test statistic and $\phi(\textbf{x}) < k$ rejection rule lacks power, even under the perfect model, to detect out-distributions whose samples have high likelihood under the data distribution, and the typical set hypothesis assumes that \acrshort{svhn} is such an out-distribution relative to \acrshort{cifar-10}, as is \acrshort{mnist} to Fashion-\acrshort{mnist}.
In this section, we detail the typical set hypothesis
(\Cref{sec:typical_set}),
reveal consequences which fall from its assumptions (\Cref{sec:single}, \Cref{sec:decomp}),
and discuss its relevance for \gls{ood} detection (\Cref{sec:typical_arbitrary}).
\subsection{The Typical Set Hypothesis}
\label{sec:typical_set}
The typical set hypothesis posits that 1. out-distributions of interest can lie in regions of high likelihood but small overall probability under the data distribution, and 2. to detect such distributions, \gls{ood} detection should take into account the data distribution's typical set\footnote{\citet{DeepMind2019DetectingOI} discuss the \emph{model's} typical set rather than the data distribution's but do not mention model estimation error. Subsequent works have interpreted their message in the context of the data distribution's typical set
\cite{Morningstar2020DensityOS}.} \cite{Choi2018WAICBW, Nalisnick2019DoDG, Wang2020FurtherAO, Morningstar2020DensityOS}.
Tests that reject
low likelihood points will perform worse than random chance on out-distributions whose samples have high in-distribution likelihoods, but tests
that consider
the data distribution's typical set can have power over such out-distributions.
Quoting \citet{Wang2020FurtherAO}:
\begin{quote}
Samples
from a high-dimensional distribution will often fall on a typical set with high probability, but the
typical set itself does not necessarily have the highest probability density at any given point. Per this
line of reasoning, to determine if a test sample is an outlier, we should check if it falls on the typical
set of the inlier distribution rather than merely examining its likelihood under a given deep generative model.
\end{quote}
Given a distribution $P$, the typical set $A_\epsilon^{(n)}$ is the set of $n$-length sequences $(x_{i1}, ..., x_{in}), x_{ij} \stackrel{\text{i.i.d.}}{\sim} P$
whose empirical entropy is close to the entropy of $P$, i.e. $H(P) = \scalebox{0.75}[1.0]{$-$}\mathbb{E}_{x_{ij} \sim P}[\log p(x_{ij})]$, within a neighborhood determined via the constant $\epsilon$ \cite{Cover1991ElementsOI}:
\begin{equation} \label{eq:typical}
H(P) - \epsilon \leq -\frac{1}{n} \sum_{j=1}^n \log p(x_{ij}) \leq H(P) + \epsilon.
\end{equation}
The typical set can be viewed as a set of elements from the sample space of the product measure $P^n = P \times P \times ... \times P$ ($n$ copies).
For a sufficiently large $n$, i.e. a sufficiently high-dimensional distribution $P^n$, the typical set is small relative to the total number of possible elements in $P^n$, yet the probability of the set under $P^n$ is close to one.
The idea underlying the typical set hypothesis is the following:
If nearly all of the total probability mass of a distribution is concentrated in a small set, then it is unlikely that a sample generated from the distribution will fall outside of this set. For instance, samples from an out-distribution such as \acrshort{svhn} could have high likelihood in the \acrshort{cifar-10} distribution but fall outside its typical set, which would explain why \acrshort{svhn} samples are not seen in the finite \acrshort{cifar-10} dataset.
Tests based on the typical set have power to detect out-distributions concentrated in high likelihood regions of the data distribution which have small overall probability, but the assumption that an out-distribution like \acrshort{svhn} lies within the support of a data distribution like \acrshort{cifar-10} yields questionable consequences, illustrated in the next two sections.
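To make the definition concrete, the following sketch (our own illustration, with arbitrarily chosen $n$ and $\epsilon$) checks $\epsilon$-typicality for i.i.d.\ Bernoulli$(0.75)$ sequences: nearly all sampled sequences are typical, while the highest-probability (all-ones) sequence is not.
\begin{verbatim}
# Sketch: epsilon-typicality for i.i.d. Bernoulli(0.75) sequences of length n.
import numpy as np

theta, n, eps = 0.75, 1000, 0.05
H = -(theta * np.log(theta) + (1 - theta) * np.log(1 - theta))  # entropy in nats

def per_symbol_neg_log_prob(seqs):
    k = seqs.sum(axis=-1)                                 # number of ones per sequence
    return -(k * np.log(theta) + (n - k) * np.log(1 - theta)) / n

rng = np.random.default_rng(0)
samples = rng.random((10000, n)) < theta                  # draws from P^n
rates = per_symbol_neg_log_prob(samples)
print("fraction typical:", np.mean(np.abs(rates - H) <= eps))   # close to 1

all_ones = np.ones((1, n))                                # highest-probability sequence
print("all-ones typical:", np.abs(per_symbol_neg_log_prob(all_ones) - H) <= eps)  # [False]
\end{verbatim}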
\subsection{No Method Can Guarantee Perfect Detection When Supports Overlap}
\label{sec:single}
In order to explain why \glspl{dgm} trained on \acrshort{cifar-10} or Fashion-\acrshort{mnist} place high likelihood on \acrshort{svhn} or \acrshort{mnist} samples respectively, the typical set hypothesis must assume that the out-distributions overlap in support with the in-distributions.
However,
the probability of classification error is non-zero when the support of a given out-distribution $Q$ overlaps with that of the in-distribution $P$.
Therefore, even with exact knowledge of the in-distribution, no method can achieve perfect detection against out-distributions which overlap in support with the in-distribution.
\begin{proposition}
\label{prop:prop2}
Let $P$ and $Q$ have overlapping support: $\textrm{Pr}_Q(\textbf{x} \in
\text{supp}(p(\textbf{x}))) > 0$. Then, any test has non-zero probability of error.
\end{proposition}
\begin{proof}
Assume there exists a rejection rule $\phi_p(\textbf{x}) \not\in \Phi$ that perfectly separates samples from $P$ and $Q$ i.e.
\begin{align*}
\textrm{Pr}_Q(\phi_p(\textbf{x}) \in \Phi) = 0 \textrm{, and } \textrm{Pr}_P(\phi_p(\textbf{x}) \in \Phi) = 1.
\end{align*}
The above condition requires $\Phi$ to encompass all values in $\{\phi(\textbf{x}) | \textbf{x} \in \text{supp}(p(\textbf{x}))\}$ and none in $\{\phi(\textbf{x}) | \textbf{x} \in \text{supp}(q(\textbf{x}))\}$. However, since $\textrm{Pr}_Q(\textbf{x} \in
\text{supp}(p(\textbf{x}))) > 0$, $\textrm{Pr}_Q(\phi_p(\textbf{x}) \in
\text{supp}(p(\phi_p(\textbf{x})))) > 0$. By contradiction, there exists no subset $\Phi$ that perfectly separates $P$ and $Q$.
\end{proof}
\Cref{prop:prop2} states that if the supports of two distributions (e.g. \acrshort{svhn} and \acrshort{cifar-10}) overlap, then no solution can guarantee perfect discrimination between single samples from these two distributions. This relates to the bound on performance given by the Bayes optimal classifier: even the optimal classifier has non-zero error when the covariate distributions from two classes overlap.
\subsection{A Wrong Model Can Perform Better Than a Perfect One When Supports Overlap}
\label{sec:decomp}
An additional consequence of including support overlap cases in \gls{ood} detection is that for a given out-distribution $Q$, a perfect model can perform worse than a misestimated one.
Define $\phi_p$ using the data distribution $P$ (e.g. $\phi_p = \log p$)
such that the rejection rule is of the form $\phi_p(\textbf{x}) < k$.
(We can recast existing test statistic and rejection rule pairings to follow this form, even if the original pairing does not use rejection rule $\phi_p(\textbf{x}) < k$. See \Cref{sec:rej} for details.)
We can write the \gls{auc} of an \gls{ood} detection procedure using $\phi_p$ as $\textrm{Pr}(\phi_p(\textbf{x}) > \phi_p(\textbf{y}))$ for $\textbf{x} \sim P, \textbf{y} \sim Q$. Perfect discrimination is achieved when $\textrm{Pr}(\phi_p(\textbf{x}) > \phi_p(\textbf{y})) = 1$.
We now show that it is possible for \gls{ood} detection based on a misestimated model to perform better than detection using the true in-distribution when the supports of $P$ and $Q$ overlap. Let $\phi_p = p$, and let $Q$ have support over the entire sample space $\mathcal{X}$. We can construct a $P_\theta$ proportional to the likelihood ratio of $P$ and $Q$:
\[p_\theta(\textbf{x}) = \frac{1}{C}\frac{p(\textbf{x})}{q(\textbf{x})}, C = \int_{\mathcal{X}} \frac{p(\textbf{x})}{q(\textbf{x})} d\textbf{x}, \]
assuming integrability. Then, $\phi_{p_\theta}$ is proportional to the likelihood ratio, and by the Neyman-Pearson Lemma \cite{Neyman1933OnTP}, a likelihood ratio test statistic is uniformly most powerful for a simple hypothesis (i.e. yields the highest power against any single alternative hypothesis $Q$ for a specified test size). Since the ratio is most powerful at every false positive rate specified, \gls{ood} detection via $\phi_{p_\theta}$ achieves the maximal \gls{auc} possible for a given pair $P, Q$. Since a uniformly most powerful decision rule is unique up to sets of measure zero, given $P_\theta \neq P$, \gls{ood} detection using $P_\theta$ is strictly better than detection using $P$. In summary,
\[\textrm{Pr}(\phi_{p_\theta}(\textbf{x}) > \phi_{p_\theta}(\textbf{y})) > \textrm{Pr}(\phi_p(\textbf{x}) > \phi_p(\textbf{y})).\]
The same idea applies even when $Q$ does not place strictly positive density or probability across the sample space $\mathcal{X}$ or when the test statistic is a function other than $\phi_p = \log p$: A model $P_\theta$ which makes values of $\phi_{p_\theta}(\textbf{x}), \textbf{x} \sim P$ higher relative to $\phi_{p_\theta}(\textbf{y}), \textbf{y} \sim Q$ will improve \gls{ood} detection.
We illustrate this phenomenon with an empirical example in \Cref{sec:decexp}, comparing the \gls{ood} performance for a specific $P, Q$ pair when the test statistic utilizes the true distribution $P$ versus a (poor) estimate of it.
Note that this result does not apply when distributions $P$ and $Q$ have disjoint support and the test statistic used is $\phi_p = p$.
Concretely, $\phi_p(\textbf{y}) = 0$ for all $\textbf{y} \sim Q$ and $\phi_p(\textbf{x}) > 0$ for all $\textbf{x} \sim P$, which implies that $\textrm{Pr}(\phi_p(\textbf{x}) > \phi_p(\textbf{y})) = 1$; in this setting, likelihood-based \gls{ood} detection using a perfect model yields optimal performance.
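A minimal synthetic sketch (ours; it does not use the likelihood-ratio construction above, and the distributions are chosen purely for illustration) shows the same effect: with $P=\mathcal{N}(0,1)$ and an overlapping out-distribution $Q=\mathcal{N}(0,0.3^2)$ concentrated near the mode of $P$, ranking by the true $\log p$ performs worse than chance, while a deliberately misestimated bimodal model ranks the \gls{ood} samples below the in-distribution ones.
\begin{verbatim}
# Sketch: with overlapping supports, a wrong model can beat a perfect one.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 5000)   # x ~ P (in-distribution)
y = rng.normal(0.0, 0.3, 5000)   # y ~ Q (out-distribution, overlapping support)

def auc(phi):
    return (phi(x)[:, None] > phi(y)[None, :]).mean()   # Pr(phi(x) > phi(y))

log_p = lambda z: norm.logpdf(z, 0.0, 1.0)               # perfect model of P
log_p_theta = lambda z: np.log(                          # misestimated, bimodal model
    0.5 * norm.pdf(z, -3.0, 1.0) + 0.5 * norm.pdf(z, 3.0, 1.0))

print("AUC, true model:        ", auc(log_p))            # roughly 0.19
print("AUC, misestimated model:", auc(log_p_theta))      # roughly 0.81
\end{verbatim}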
\subsection{OOD Detection based on the Typical Set is Arbitrary}
\label{sec:typical_arbitrary}
The previous two sections (\Cref{sec:single} and \Cref{sec:decomp}) highlight two consequences that result from the assumption in the typical set hypothesis that out-distributions of interest (e.g. \acrshort{svhn}, \acrshort{mnist}) overlap in support with the in-distribution (e.g. \acrshort{cifar-10}, Fashion-\acrshort{mnist}).
Beyond the issues resulting from the support overlap assumption, the typical set hypothesis relies on the idea that the typical set is the preferred subset to demarcate what is in- and out-of-distribution. Here, we question this idea, explaining why the properties of the typical set do not suffice to explain its relevance to \gls{ood} detection.
As discussed in \Cref{sec:typical_set}, the typical set hypothesis uses the fact that the typical set can be small yet high probability to justify why points outside the set should be considered \gls{ood}.
However, there can exist other similarly small sets that also contain nearly all of the probability mass, meaning it is arbitrary to prefer the typical set based on its small size and high probability properties alone.
As an example, consider
a high-dimensional distribution of i.i.d. Bernoullis, each with 75\% probability of success. A vector of 100\% ones, denoted $\mathbf{z}_{100}$, has the highest probability but is not part of the typical set
$\mathcal{A}_\epsilon$ which consists of sequences with close to 75\% ones and 25\% zeros.
The property $\Pr(\textbf{x} \in \mathcal{A}_\epsilon) \approx 1$ is used to justify why it is okay to consider any points outside this set as \gls{ood}, including $\mathbf{z}_{100}$.
However, we can also define a new set $\mathcal{A}'_\epsilon$ which substitutes in $\mathbf{z}_{100}$ in place of one of the 75/25 sequences---for instance, the sequence whose first 75\% of elements are ones and last 25\% are zero, which we will denote $\mathbf{z}_{75}$. This same-sized set also satisfies $\Pr(\textbf{x} \in \mathcal{A}'_\epsilon) \approx 1$ and in fact has strictly greater probability than the typical set, since a sequence of ones is more likely than the particular 75/25 sequence that was removed. Under this newly constructed set, the sequence $\mathbf{z}_{75}$ would be considered \gls{ood} since it is not contained in $\mathcal{A}'_\epsilon$. Since we randomly selected one of the 75/25 sequences to replace, the decision to mark this sequence \gls{ood} based on set membership is arbitrary. Yet, it is just as arbitrary to exclude $\mathbf{z}_{100}$ from the set of in-distribution points; after all, $\Pr(\mathbf{z}_{100}) > \Pr(\mathbf{z}_{75})$.
The unique property of the typical set relative to other small-volume, high-probability sets is that its elements are close to equiprobable. However, the need for this property does not follow from the motivation for \gls{ood} detection.
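For concreteness, taking $n=100$ (a value chosen here only for illustration), the two sequences differ in probability by a large factor,
\begin{equation*}
\frac{\Pr(\mathbf{z}_{100})}{\Pr(\mathbf{z}_{75})}=\frac{0.75^{100}}{0.75^{75}\,0.25^{25}}=3^{25}\approx 8.5\times 10^{11},
\end{equation*}
so excluding $\mathbf{z}_{100}$ while retaining $\mathbf{z}_{75}$ cannot be justified on probability grounds alone.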
In summary, there are several issues with the typical set hypothesis.
First, the hypothesis relies on the assumption that out-distributions can overlap in support with the in-distribution; this support overlap assumption implies that perfect discrimination between image dataset pairs such as \acrshort{cifar-10}/\acrshort{svhn} and Fashion-\acrshort{mnist}/\acrshort{mnist} is impossible (\Cref{sec:single}), and that one could perform better \gls{ood} detection with a wrong model of the data distribution than the right one (\Cref{sec:decomp}). Additionally, there is no clear motivation for preferring the typical set over other small-volume, high-probability sets. These issues suggest the implausibility of the typical set hypothesis as the explanation and solution for existing \gls{ood} failures of \glspl{dgm}.
Consequently, the phenomenon observed in \citet{Nalisnick2019DoDG, Hendrycks2019DeepAD} is likely a result of model estimation error, rather than a property of existing image data distributions requiring an alternative task definition. We detail what this perspective implies about existing models and provide guidance for future work in \Cref{sec:model_perspective}.
\section{Experiments}
The following experiments demonstrate the theoretical properties shown in the preceding sections. To build intuition around the impossibility result of \Cref{prop:prop1}, we give an example of different distributions with the same distribution of densities under $P$, meaning a test based on a likelihood statistic cannot distinguish them even with access to a perfect model of the in-distribution.
We then demonstrate an instance of a partially-trained \gls{dgm} performing better \gls{ood} detection than the actual model of the data, providing an empirical example for the analysis in \Cref{sec:decomp} that an erroneous model can perform better \gls{ood} detection than a perfect one.
\subsection{Any Test Statistic has Failure Modes When Possible Out-distributions are Unrestricted}
\label{sec:tsexp}
\Cref{prop:prop1} states that any test statistic gives up power over certain alternative hypotheses. We show that the test statistic $\phi(\textbf{x}) = \log p(\textbf{x})$ cannot detect as \gls{ood} single samples drawn from out-distributions $Q, R$ which are contained within $P$ but collapse any of its level sets.
Consider an in-distribution $P$ that is bivariate Gaussian with an identity covariance matrix. There are a variety of out-distributions whose samples yield the same distribution of the test statistic $\phi(\textbf{x}) = \log p(\textbf{x})$. We consider two in \Cref{fig:prop1}: $Q$, the distribution obtained by
sampling $(x_1,x_2)$ from a standard bivariate normal and then flipping the sign of $x_2$
if it is in the second or fourth quadrant, and $R$, the distribution
obtained by sampling $(x_1,x_2)$ from a standard bivariate normal and mapping it to the point $(z,z)$ where $z^2 = (x_1^2 + x_2^2)/2$ (i.e. preserving distance from origin).
The out-distributions $Q$ and $R$ maintain the same distribution of log-densities under $P$ as the distribution $P$ since they distribute mass similarly across the upper level sets $\{\textbf{x}: p(\textbf{x}) > t\} \forall t$.
We can use similar logic to construct problematic out-distributions for other test statistics which are a function of $p(\textbf{x})$, including the test statistic associated with the typicality test in \citet{DeepMind2019DetectingOI}.
In fact, this test statistic, $\phi(\textbf{x}) = \big \lvert \scalebox{0.75}[1.0]{$-$} \log p(\textbf{x}) - \hat{H}_p \big \rvert$ where $\hat{H}_p = \scalebox{0.75}[1.0]{$-$} \frac{1}{\lvert \mathcal{D}_{tr}\rvert}\sum_{\textbf{x} \in \mathcal{D}_{tr}} \log p(\textbf{x})$, is no better than random chance
whenever a log-probability test statistic is no better than chance.
To see this, note that $\hat{H}_p$ is a constant, so the resulting distributions over $\phi(\textbf{x})$ are simply shifted and scaled relative to the distributions over the log-likelihoods.
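The construction can be checked numerically; the sketch below (not from our experiments) samples from $P$, builds $Q$ and $R$ as described above, and confirms that $\log p(\textbf{x})$, and hence any statistic that is a function of it, has the same distribution under all three.
\begin{verbatim}
# Sketch: Q and R reshuffle mass within the level sets of a standard
# bivariate normal P, so log p(x) has the same distribution under P, Q, R.
import numpy as np

rng = np.random.default_rng(0)
xp = rng.normal(size=(100000, 2))          # samples from P

xq = xp.copy()                             # Q: flip x2's sign in quadrants II and IV
flip = xq[:, 0] * xq[:, 1] < 0
xq[flip, 1] = -xq[flip, 1]

z = np.sqrt((xp ** 2).sum(axis=1) / 2)     # R: map to (z, z), preserving the radius
xr = np.stack([z, z], axis=1)

def log_p(x):                              # log density of the standard bivariate normal
    return -np.log(2 * np.pi) - 0.5 * (x ** 2).sum(axis=1)

for name, s in [("P", xp), ("Q", xq), ("R", xr)]:
    print(name, np.round(np.percentile(log_p(s), [10, 50, 90]), 3))
# The quantiles coincide: both maps preserve x1^2 + x2^2 sample by sample.
\end{verbatim}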
\begin{figure}
\caption{For a given in-distribution $P$ and choice of test statistic, we can construct out-distributions where \gls{ood}
\label{fig:1a}
\label{fig:1b}
\label{fig:prop1}
\end{figure}
\subsection{A Bad Model Can Beat a Perfect One When the Out- and In-Distributions Overlap in Support}
\label{sec:decexp}
We now provide an empirical example where misestimation can result in better \gls{ood} detection of a particular out-distribution when supports of the in- and out-distribution overlap.
Because we require access to the true model density, we designate a pretrained \gls{dgm} as our in-distribution $P$---specifically the \acrshort{glow} model of \citet{Kingma2018GlowGF} trained on \acrshort{cifar-10}. Next, we train a separate \acrshort{glow} model $P_\theta$ on 40,000 samples from $P$. See \Cref{sec:decexp_supp} for model and training details. We then compare performance of an \gls{ood} detection method using $p$ versus $p_\theta$. We choose the CelebA dataset of celebrity faces \cite{liu2015faceattributes} as our out-distribution $Q$. The in-distribution $P$ represented by the flow overlaps in support with the out-distribution $Q$, as evidenced by the fact that
the pretrained model assigns positive densities to all CelebA images. Our partially-trained model $P_\theta$ (only 50 epochs) achieves an average
\gls{bpd} of 3.67 on the test samples from $P$, versus 3.45 for the true model (lower is better).
However, $P_\theta$ improves \gls{ood} performance relative to the true model for
this choice of $Q$. The misestimation has increased \gls{bpd} for both the in-distribution and out-distribution samples relative to the true model; however, since the extent of the increase is higher for the out-distribution samples, the result is better separation, meaning better \gls{ood} detection under the misestimated model.
See \Cref{fig:decomp}.
\begin{figure}
\caption{A perfect model can perform worse than a misestimated one when supports of the in- and out-distributions overlap.
The \gls{dgm}
\label{fig:3a}
\label{fig:3b}
\label{fig:decomp}
\end{figure}
\section{An Alternative Perspective}
\label{sec:model_perspective}
Rather than conclude that existing image data distributions assign high likelihoods to certain \gls{ood} images, we now consider an alternative explanation, that the phenomenon observed in \citet{Nalisnick2019DoDG, Hendrycks2019DeepAD} is a result of model estimation error.
First,
it is reasonable to assume that the supports of dataset pairs such as \acrshort{cifar-10} and \acrshort{svhn} are disjoint: for instance, we would not expect to draw a house number from the true \acrshort{cifar-10} distribution even given infinite samples.
This assumption is untestable, but if it does hold, then
existing \glspl{dgm} are mistakenly assigning high probability or density in places where they should be assigning none.
Such misestimation may seem surprising given that a \gls{dgm} trained on \acrshort{cifar-10} never seems to generate \acrshort{svhn} images, as previous works have noted.
We can understand this as
poor estimation in small-volume regions of the sample space with negligible total probability mass. In fact, even good generators can be very wrong
in this regard.
From this perspective, training a \gls{dgm} for \gls{ood} detection can require accurate estimation in regions which are unimportant for good generation. We demonstrate this with an example.
\subsection{A Good Generator Can Still Exhibit OOD Detection Failures}
\label{sec:gen_vs_det}
A model $P_\theta$ can output samples from the true data distribution $P$ with probability close to 1 (i.e. good generation) yet still assign higher probability or density to certain out-of-support samples (i.e. bad \gls{ood} detection). As an illustration, consider a finite, discrete sample space where distributions $P$ and $Q$ have disjoint support. If the size of the support of $P$
is much greater than that of the support of $Q$, i.e. $\lvert \text{supp}(P) \rvert \gg \lvert \text{supp}(Q) \rvert$, then a model $P_\theta$ could place higher probability mass on each element of $Q$ than on any element of $P$ yet assign negligible probability, denoted by $\epsilon$, in total to the elements in $\text{supp}(Q)$, i.e.
\begin{align}
\nonumber &\textrm{Pr}_{p_\theta}(\text{supp}(P)) = 1 - \epsilon, \quad \textrm{Pr}_{p_\theta}( \text{supp}(Q)) = \epsilon,\\
&\textrm{Pr}_{p_\theta}(\textbf{x}) < \textrm{Pr}_{p_\theta}(\mathbf{y}), \,\, \forall \, \textbf{x} \in \text{supp}(P), \mathbf{y} \in \text{supp}(Q)
\label{eq:gen_vs_det}
\end{align}
As an example, let $P$ be a distribution which assigns the same probability to each element in its support, i.e. $\textrm{Pr}_{p}(\textbf{x}) = 1/\lvert \text{supp}(P) \rvert$.
Consider $P_\theta$ which moves $\epsilon / \lvert \text{supp}(P) \rvert$ probability away from each element in $\text{supp}(P)$ and splits it equally across all $\mathbf{y} \in \text{supp}(Q)$, i.e. $\textrm{Pr}_{p_\theta}(\mathbf{y}) = \epsilon /\lvert \text{supp}(Q) \rvert$.
To meet the criteria of \Cref{eq:gen_vs_det}, we need $\epsilon/\lvert \text{supp}(Q) \rvert > (1-\epsilon)/\lvert \text{supp}(P) \rvert$, i.e.
\begin{align*}
\epsilon > \frac{\lvert \text{supp}(Q) \rvert} {\lvert \text{supp}(P) \rvert + \lvert \text{supp}(Q) \rvert}.
\end{align*}
When $\lvert \text{supp}(P) \rvert$ is much larger than $\lvert \text{supp}(Q) \rvert$,
the $\epsilon$ needed for $P_\theta$ to assign higher probabilities to points
in the support of $Q$ is small.
Such a model would also have very good in-distribution probabilities, smaller than those of $P$ only by an $\epsilon/\lvert \text{supp}(P) \rvert$ amount. In other words, even small differences in held-out log probabilities can matter for \gls{ood} detection of small-volume out-distributions. For instance, using $\epsilon = \nicefrac{\lvert \text{supp}(Q) \rvert}{ \lvert \text{supp}(P) \rvert}$, if $\lvert \text{supp}(P)\rvert = 10^6$ and $\lvert \text{supp}(Q)\rvert = 10^4$, then $\epsilon = 10^{\scalebox{0.75}[1.0]{$-$} 2}$ and $\textrm{Pr}_{p}(\mathbf{x}) = 10^{\scalebox{0.75}[1.0]{$-$} 6}$ while $\textrm{Pr}_{p_\theta}(\mathbf{x}) = 10^{\scalebox{0.75}[1.0]{$-$} 6} - 10^{\scalebox{0.75}[1.0]{$-$} 8}$. The resulting log-likelihoods are $-13.8155$ and $-13.8255$, respectively. When $\lvert \text{supp}(Q) \rvert$ is even smaller relative to $\lvert \text{supp}(P) \rvert$, the differences can become even smaller. See \Cref{tab:supp} for examples.
\begin{table}[h]
\centering
\caption{ \label{tab:supp} A model $P_\theta$ can place higher probabilities on an out-distribution $Q$ even with in-distribution log-likelihoods close to those of the true model $P$ (Oracle). All calculations are based on a uniform $P$ with $\lvert\text{supp}(P)\rvert = 10^6$ and a uniform transfer of $\epsilon = \nicefrac{\lvert \text{supp}(Q) \rvert}{\lvert \text{supp}(P) \rvert}$ from $P$, divided evenly across supports $\text{supp}(Q)$ of different sizes ($10^4$, $10^3$, $10^2$).}
\begin{tabular}{c?ccc}
\toprule
& \multicolumn{3}{c}{$\lvert \text{supp}(Q) \rvert$} \\
\hline
Oracle & $10^4$ & $10^3$ & $10^2$ \\
-13.8155 & -13.8255 & -13.8165 & -13.8156 \\
\bottomrule
\end{tabular}
\vskip -.1in
\end{table}
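The entries of \Cref{tab:supp} follow directly from this construction; the sketch below reproduces them up to rounding (the support sizes are the ones stated in the caption).
\begin{verbatim}
# Sketch: reproduce the table entries for a uniform P with |supp(P)| = 1e6 and
# a model P_theta that moves eps = |supp(Q)|/|supp(P)| of mass onto supp(Q).
import numpy as np

supp_p = 10 ** 6
print("oracle log-likelihood:", np.log(1 / supp_p))       # about -13.8155
for supp_q in (10 ** 4, 10 ** 3, 10 ** 2):
    eps = supp_q / supp_p
    pr_in = (1 - eps) / supp_p     # model probability of an in-distribution element
    pr_out = eps / supp_q          # model probability of an element of supp(Q)
    print(supp_q, np.round(np.log(pr_in), 4), pr_out > pr_in)
\end{verbatim}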
While the above example considers discrete distributions, the same idea holds for continuous probability measures with bounded support.\footnote{Let densities $p$ and $q$ be defined with respect to probability measures $P$ and $Q$ and base Lebesgue measure $\mu$. Consider
$\mu(\text{supp}(P)) \gg \mu(\text{supp}(Q))$.}
We can construct examples similar to the one above by transferring a small amount of mass, originally spread over a large volume within $\text{supp}(P)$, to a region of small volume outside of $\text{supp}(P)$, e.g. $\text{supp}(Q)$.
\citet{Choi2018WAICBW, DeepMind2019DetectingOI, Wang2020FurtherAO}
have made similar points
to explain how
there can exist
elements with
high density or probability that are almost never generated.
While these works have sought to characterize the true in-distribution $P$,
we describe this phenomenon strictly in
terms of model misestimation $P_\theta \neq P$.
This scenario can apply to existing \gls{ood} failures if the volume of the support of the out-distribution is small in comparison to that of the in-distribution. While it is difficult to determine whether this difference in volumes exists in these image distributions, there is reason to believe it might. For one, there are many more pixel combinations which generate a textured pattern than a smooth one. As an extreme example, consider a distribution over solid-color images versus one of random noise images. The former consists of elements where the image is perfectly predictable from the first pixel, and the size of the support is bounded by the number of possible values of the first pixel. The latter has much larger support.
Likewise, it is plausible that image distributions containing varying textures (e.g. \acrshort{cifar-10}, Fashion-\acrshort{mnist}) have larger support than distributions with smooth textures (e.g. \acrshort{svhn}, \acrshort{mnist}). This suggests that volume differences can exist in real image distributions.
The fact that a good generator can still experience \gls{ood} detection failures suggests that good generation (and a high held-out likelihood) is not sufficient for good \gls{ood} detection.
Poor model likelihoods over relatively small-volume regions may be a concern for \gls{ood} detection.
\subsection{Improving OOD Detection with DGMs}
\label{sec:important}
Given this perspective that model estimation error is the problem, we list several ways to improve \gls{ood} detection. We first discuss how alternative test statistics, including existing ones, can correct for known errors. We then turn to potential future directions to address model bias.
First, given knowledge of a particular model bias, we can construct alternative test statistics to ameliorate the bias for the application of \gls{ood} detection. For instance, consider the issue of \glspl{dgm} placing high probability or density where they should place zero. Assuming the distribution of $\log p_\theta(\textbf{x})$ is the same for a test set drawn from the same distribution as the training set, \gls{ood} images which are assigned higher likelihoods than training images will be further away from the average training likelihood than the in-distribution test images will be. Consequently, a test statistic which considers distance to the average training likelihood---the statistic proposed by \citet{DeepMind2019DetectingOI}---will perform better \gls{ood} detection than $\phi = \log p_\theta$. We can improve this further by fitting a non-parametric density estimator over the probabilities assigned to the training images and rejecting when a particular likelihood value has not yet been seen or is rare---this \gls{ood} procedure is a simplified version of the density of states estimator proposed in \citet{Morningstar2020DensityOS}, who consider several statistics jointly, not just the likelihood under the model.
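As a toy illustration of the last idea (a deliberately simplified sketch with placeholder numbers, not the estimator of \citet{Morningstar2020DensityOS}), one can fit a kernel density estimator to the training log-likelihoods and score a test point by how rare its log-likelihood value is, so that unusually \emph{high} likelihoods are also flagged.
\begin{verbatim}
# Sketch: flag points whose log-likelihood value is rare among training values.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
train_ll = rng.normal(-3500.0, 30.0, 10000)   # placeholder training log-likelihoods
kde = gaussian_kde(train_ll)

def ood_score(test_ll):
    # Low density over training log-likelihoods => more OOD-like, even if
    # the likelihood itself is high.
    return -kde.logpdf(test_ll)

print(ood_score(np.array([-3510.0, -3300.0])))  # the high-likelihood value scores higher
\end{verbatim}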
While test statistics such as those in \citet{DeepMind2019DetectingOI, Morningstar2020DensityOS} were introduced to approximate an alternative definition of \gls{ood} based on the typical set, a more plausible viewpoint is that these test statistics correct for estimation error off the support of the in-distribution.
Alternative test statistics can help in certain cases, but they are not guaranteed to improve detection results over $\log p$ across out-distributions, as shown empirically \cite{Morningstar2020DensityOS}. An alternative fix involves improving the models themselves. To avoid the issues observed in \citet{Nalisnick2019DoDG, Hendrycks2019DeepAD}, it is important that models can sufficiently push down probability or density outside of the support of the in-distribution.
Doing so may require different modeling preferences than the ones developed with other applications of \glspl{dgm}, such as generation, in mind. As an example, certain inductive biases such as convolutional layers which benefit image modeling in general may make \gls{ood} detection between image datasets more difficult; \citet{Schirrmeister2020UnderstandingAD} found that replacing the convolutional layers in a \acrshort{glow} model with fully connected layers improved \gls{ood} detection of problematic image dataset pairs, even though it resulted in worse likelihoods.
The choice of objective may matter as well. \Gls{mle} minimizes $\text{KL}(p||p_\theta)$ which does not allow $p_\theta$ to be zero anywhere $p$ is non-zero \cite{jerfel2021variational}.
This means maximum likelihood favors overdispersed solutions
in the finite-data regime, thus posing a challenge to learning good supports.
\citet{Kirichenko2020WhyNF} show that one can improve a \gls{mle}-trained normalizing flow architecture that assigns high likelihoods to \gls{ood} images by
directly minimizing the density of \gls{ood} images.
After showing the same for \acrshort{pixelcnn++} in \Cref{sec:pcnn_neg}, we show that in-distribution likelihoods can remain high even with the additional constraint of forcing down probabilities. This result suggests that existing model classes
contain similarly good solutions which do not suffer from the problem of high likelihoods for particular \,\vert\,ls{ood} inputs; however, the combination of the model architecture coupled with existing gradient descent-based maximum likelihood optimization does not seem to find such solutions.
\section{Related Work}
\label{sec:related}
\paragraph{Likelihood Ratio Test Statistics.}
Given the poor results of \glspl{dgm} seen in \citet{Nalisnick2019DoDG}, several works propose likelihood ratio test statistics
\cite{Ren2019LikelihoodRF, Serr2020InputCA, Schirrmeister2020UnderstandingAD}. \citet{Ren2019LikelihoodRF} propose a likelihood ratio where the alternative distribution is the same \gls{dgm} model class trained on perturbed samples; \citet{Serr2020InputCA} use a distribution induced by a general image compressor; and \citet{Schirrmeister2020UnderstandingAD} train a \gls{dgm} on a general image distribution such as 80 Million Tiny Images.
Per the discussion in \Cref{sec:definition}, we can conclude that the optimal alternative depends on the set of out-distributions of interest.
\paragraph{Test Statistics Motivated by Typicality.}
\citet{DeepMind2019DetectingOI} devise a goodness-of-fit test based on how close the empirical entropy of a sample deviates from the empirical entropy of the training set. \citet{Morningstar2020DensityOS} learn a kernel density estimator or one-class SVM over multiple statistics jointly, similarly accounting for whether a test example's statistics deviate from values seen in the training set. \citet{Choi2018WAICBW} suggest a typicality test in the latent space of a normalizing flow but see better performance using a likelihood-based
test statistic which takes into account variance across an ensemble of \glspl{dgm}.
\citet{Wang2020FurtherAO} learn a function to map training samples to a white noise sequence and classify as \gls{ood} any input that is not white noise after being transformed via this function.
The empirical success of some of these test statistics has been described as evidence in favor of
the need to take typicality into account,
but we offer an alternative conclusion in \Cref{sec:important}: these test statistics may compensate for a particular model misestimation.
\paragraph{Modifying the DGM.}
\citet{Kirichenko2020WhyNF} and \citet{Schirrmeister2020UnderstandingAD} both modify the invertible layers of a normalizing flow, improving \gls{ood} performance at the expense of worse likelihoods. \citet{BIVA} replace the posterior distributions of lower-level latents in their variational autoencoder with their priors, showing improved \gls{ood} performance but worse approximate likelihoods.
\paragraph{Investigating Density for OOD Detection.}
\citet{Lan2020PerfectDM}
suggest that density may be limited in its use for anomaly detection
because
a transformation applied to a continuous random variable with strictly positive density can arbitrarily re-rank the density.
However, points outside of the support will still be outside of the support under transformations, meaning that such out-distribution
samples cannot be
re-ranked to be higher than points in the in-distribution.
\paragraph{Test Statistics Based on Discriminative Models.}
Methods based on discriminative models use some property of a learned classifier for discriminating in-distribution classes, such as the maximum softmax probability \cite{Hendrycks2017ABF}. Extensions incorporate \gls{ood} loss terms, such as encouraging the softmax probabilities of \gls{ood} samples to be uniform or utilizing temperature scaling to increase the sharpness of the in-sample probabilities \cite{Hendrycks2019DeepAD, Liang2018EnhancingTR}. These methods can be seen as learning a direct mapping from inputs to some \gls{ood} score based on minimizing risk with respect to a given distribution of in- and out-samples. However,
these test statistics suffer from the same issue that motivates \gls{ood} detection in the first place: the learned conditional $\hat{p}(y \,\vert\, \textbf{x})$ must perform extrapolation when the input is unlike what was seen during training, and even the true conditional $p(y \,\vert\, \textbf{x})$ is not defined for inputs where $p(\textbf{x}) = 0$.
\paragraph{Directly Learning a Decision Boundary.} One-class Support Vector Machines \cite{Schlkopf1999SupportVM}, Support Vector Data Description \cite{Tax2004SupportVD}, and
their deep variants
\cite{Ruff2018DeepOC} learn to
separate a subset of the input space from its complement. These methods estimate the distribution's support rather than the density or probability over that set.
Note that the support boundary is all that matters for detection if the family of relevant out-distributions only include those disjoint in support with the in-distribution.
\section{Discussion}
The failures of existing \glspl{dgm} to detect certain out-distributions based on log-likelihood have
prompted
some to wonder whether \gls{ood} detection based on probability models requires additional considerations in high dimensions. The results of our analysis suggest that it is the model that is at fault, not the method for \gls{ood} detection. We additionally highlight the importance of formalizing the out-distributions of interest for \gls{ood} detection in general, as well as the arbitrary choice of the typical set for \gls{ood} detection.
Understanding the \gls{ood} detection failures of \glspl{dgm} as estimation error introduces avenues for future work.
We
suggest
that existing models are incorrectly assigning higher probability or density to certain natural images even when such images should have zero probability or density, and we hypothesize that the issue arises due to not just a single modeling choice, but the combination of the model architecture and maximum likelihood objective.
The extent of misestimation in existing \glspl{dgm} could be a relatively small amount of total probability mass, if the total volume of the out-distribution support (e.g. those of \acrshort{svhn}, \acrshort{mnist}) is relatively small in comparison to that of the in-distribution support (e.g. those of \acrshort{cifar-10}, Fashion-\acrshort{mnist}). Under this scenario, a model could be near-perfect
yet still assign higher probability or density to samples from an out-distribution with disjoint support. This possibility illustrates the additional considerations required for \gls{ood} detection beyond for instance what is necessary for good generation or held-out likelihood.
That said, the bias exhibited by existing models may affect more than a negligible probability set (including if the problem exists for multiple out-distributions), meaning that future work directed towards correcting this bias across existing model classes could benefit not only \gls{ood} detection, but other applications for generative models as well. Further work comparing generative modeling and support detection may also provide insight. Finally, the interplay between \gls{ood} detection and epistemic uncertainty is worth further study, based on their shared relevance to predictive modeling.
\paragraph{Acknowledgements.}
This work was supported by NIH/NHLBI Award R01HL148248, NSF Award 1922658 NRT-HDR: FUTURE Foundations, Translation, and Responsibility for Data Science, NSF Award 1514422 TWC, and a DeepMind Fellowship. We thank the reviewers for their very helpful feedback and \citet{Kingma2018GlowGF} and \citet{Ren2019LikelihoodRF} for open-sourcing their code.
\appendix
\onecolumn
\providecommand{\mbx}{\textbf{x}}
\providecommand{\indicator}[1]{\mathbf{1}\left[#1\right]}
\providecommand{\phiofx}{\phi_p(\mbx)}
\providecommand{\Aphi}{A_{\phiofx}}
\section{Proposition 1}
\label{sec: prop1_proof}
\textit{Proposition} Let $P$ be the distribution under the null hypothesis $H_0$. Let $\mu$ be the measure associated with the distribution of test statistic $\phi_p(\mbx)$ under the null. Then, assuming the conditional $\textbf{x} \,\vert\, \phi_p(\mbx)$ is not degenerate on a $\mu$-non-measure zero set, there exists a set of alternative distributions $Q \in \mathcal{Q}$ where $Q \neq_d P$ and the test has power equal to the false positive rate. In other words, the test does no better than random guessing.
\begin{proof}
We first construct a distribution $q(\textbf{x}) \neq_d p(\textbf{x})$ but where $q(\phi_p(\mbx)) = p(\phi_p(\mbx))$.
The roadmap for this part of the proof is as follows:
for some function $f_p$, we write
\begin{align}
\mathbb{E}_{p(\textbf{x})}(f_p) - \mathbb{E}_{q(\textbf{x})}(f_p) &= \mathbb{E}_{p(\phi_p(\mbx))}\Big[\mathbb{E}_{p(\mbx|\phiofx)}(f_p) - \mathbb{E}_{q(\mbx|\phiofx)}(f_p)\Big]
\end{align}
We then identify $q(\mbx|\phiofx)$ and $f_p$ such that the inner difference of expectations is non-zero, which implies inequality in distribution via $\mathbb{E}_{q(\textbf{x})}(f_p) \neq \mathbb{E}_{p(\textbf{x})}(f_p)$. We do not change the distribution in the outer expectation $p(\phi_p(\mbx))$. We finally define
$q(\textbf{x})=p(\phi_p(\mbx))q(\mbx|\phiofx)$.
We now show how to construct $f_p,q$. Let $(\Omega_{\phi_p(\mbx)}, \mathcal{F}_{\phi_p(\mbx)})$ be the probability space
associated with $\phi_p(\mbx)$, with probability measure $\mu=\mathbb{P}_{p(\phi_p(\mbx))}$. By assumption, $p(\textbf{x} \,\vert\, \phi_p(\mbx))$ is non-degenerate on some $\mu$ non-measure zero set.
This means there exists a set $\Phi \in \mathcal{F}_{\phi_p(\mbx)}$ with $\mu(\Phi) > 0$ such that $ \forall \phi_p(\mbx) \in \Phi$, $\exists A_{\phiofx} \subset \text{supp}(p(\textbf{x} \,\vert\, \phi_p(\mbx)))$
such that $0<\mathbb{P}_{p(\textbf{x} \,\vert\, \phi_p(\mbx))}(A_{\phiofx})<1$.
Let $g$ be any function for which $\mathbb{E}_{p(\mbx|\phiofx)}(g) < \infty$
$\forall \phi_p(\mbx) \notin \Phi$. Then define
\begin{align}
f_{p}(\textbf{x}) \triangleq \indicator{\phiofx\in \Phi} \indicator{\mbx \in \Aphi} + \indicator{\phiofx \notin \Phi} g(\textbf{x})
\end{align}
Define the conditional $q(\mbx|\phiofx)$ with normalization constant $C_{\phi_p(\mbx)}$ and $0<\lambda < 1$:
\begin{equation} \label{eq:conditional}
\begin{split}
q(\mbx|\phiofx) &\triangleq \indicator{\phiofx\in \Phi} \Big[\frac{1}{C_{\phi_p(\mbx)}}\Big(\lambda p(\mbx|\phiofx) \indicator{\mbx \in \Aphi} + p(\mbx|\phiofx) \indicator{\mbx \notin \Aphi}\Big)\Big]\\
&\quad + \indicator{\phiofx \notin \Phi} p(\mbx|\phiofx)
\end{split}
\end{equation}
For $\phi_p(\mbx) \notin \Phi$, $q(\mbx|\phiofx) = p(\mbx|\phiofx)$.
Therefore,
$\indicator{\phiofx \notin \Phi} [\mathbb{E}_{p(\mbx|\phiofx)}(f_p) - \mathbb{E}_{q(\mbx|\phiofx)}(f_p)] = 0$. For
$\phi_p(\mbx) \in\Phi$, $q(\mbx|\phiofx)$ moves density away from points in $A_{\phiofx}$ relative to $p(\mbx|\phiofx)$, given that $0 < \lambda < 1$.
For simplicity, we construct $q$ such that $\text{supp}(q(\textbf{x} \,\vert\, \phi_p(\mbx))) = \text{supp}(p(\textbf{x} \,\vert\, \phi_p(\mbx)))$. This is to avoid an inconsistent joint distribution $q(\textbf{x}, \phi_p(\mbx)) \neq q(\textbf{x})$: if $q(\textbf{x} \,\vert\, \phi_p(\mbx)) = 0$ for some $\textbf{x} \in \text{supp}(p(\textbf{x}))$, the left-hand side would be $0$ while the right-hand side would be greater than $0$.
We now show that this construction leads to $\mathbb{E}_{p(\textbf{x})}(f_p) - \mathbb{E}_{q(\textbf{x})}(f_p) > 0$, implying inequality in distribution.
\begin{align}
&\mathbb{E}_{p(\textbf{x})}(f_p) - \mathbb{E}_{q(\textbf{x})}(f_p) \\
= &\mathbb{E}_{ p(\phi_p(\mbx))}\Big[\mathbb{E}_{p(\mbx|\phiofx)}(f_{p}) - \mathbb{E}_{q(\mbx|\phiofx)}(f_{p})\Big]\\
= &\mathbb{E}_{p(\phi_p(\mbx))}\indicator{\phiofx\in \Phi} \Big[\mathbb{E}_{p(\mbx|\phiofx)}(f_{p}) - \mathbb{E}_{q(\mbx|\phiofx)}(f_{p})\Big] \label{eq:indic} \\
=&\mathbb{E}_{p(\phi_p(\mbx))}\indicator{\phiofx\in \Phi} \Big[\mathbb{E}_{p(\mbx|\phiofx)}(\indicator{\mbx \in \Aphi}) - \mathbb{E}_{q(\mbx|\phiofx)}(\indicator{\mbx \in \Aphi})\Big] \\
=&\mathbb{E}_{p(\phi_p(\mbx))}\indicator{\phiofx\in \Phi} \Big[\int_{A_{\phiofx}}p(\mbx|\phiofx) d\textbf{x} - \int_{A_{\phiofx}}q(\mbx|\phiofx) d\textbf{x}\Big] \\
=&\mathbb{E}_{p(\phi_p(\mbx))}\indicator{\phiofx\in \Phi} \Big[\int_{A_{\phiofx}}p(\mbx|\phiofx) d\textbf{x} - \int_{A_{\phiofx}}\frac{1}{C_{\phi_p(\mbx)}}\lambda p(\mbx|\phiofx) d\textbf{x}\Big] \label{eq:subs} \\
> &0 \label{eq:end}
\end{align}
Line~\ref{eq:subs} follows from the substitution of $q(\mbx|\phiofx)$ defined in \Cref{eq:conditional}. Line~\ref{eq:end} follows from the fact that $\frac{\lambda}{C_{\phi_p(\mbx)}} < 1$, shown below:
\begin{align}
C_{\phi_p(\mbx)} &= \int_{\mathcal{X}} \lambda p(\mbx|\phiofx) \indicator{\mbx \in \Aphi} + p(\mbx|\phiofx) \indicator{\mbx \notin \Aphi} d\textbf{x}\\
&= \lambda \mathbb{P}_{p(\mbx|\phiofx)}(A_{\phiofx}) + \mathbb{P}_{p(\mbx|\phiofx)}(A_{\phiofx}^C) \\
\frac{\lambda}{C_{\phi_p(\mbx)}} &= \frac{\lambda}{\lambda \mathbb{P}_{p(\mbx|\phiofx)}(A_{\phiofx}) + \mathbb{P}_{p(\mbx|\phiofx)}(A_{\phiofx}^C)} \\
&= \frac{1}{\mathbb{P}_{p(\mbx|\phiofx)}(A_{\phiofx}) + \frac{1}{\lambda}\big[\mathbb{P}_{p(\mbx|\phiofx)}(A_{\phiofx}^c)\big]} \\
&< 1 \label{eq:end1}
\end{align}
Line~\ref{eq:end1} holds since the denominator in the previous line is greater than 1:
Since $0<\lambda < 1$, $\frac{1}{\lambda} > 1$. Then, $\mathbb{P}_{p(\mbx|\phiofx)}(A_{\phiofx}) + \frac{1}{\lambda}
\Big[\mathbb{P}_{p(\mbx|\phiofx)}(A_{\phiofx}^c)\Big] > \mathbb{P}_{p(\mbx|\phiofx)}(A_{\phiofx}) + \mathbb{P}_{p(\mbx|\phiofx)}(A_{\phiofx}^c) = 1$.
Having constructed the distribution $Q$, we now proceed with the second part of the proposition: for any specified false positive rate,
any test based on $\phi_p$ has power equal to the false positive rate when the \gls{ood} samples come from $Q$.
Recall that $q(\phi_p(\mbx)) = p(\phi_p(\mbx))$. Then, for any rejection rule $\phi_p(\mbx) \not\in \Phi_{\text{Accept}}$, the probability of rejection is the same
regardless of whether the sample $\textbf{x}$ is drawn from $P$ or $Q$:
\begin{align}
\forall \Phi_{\text{Accept}},
\quad
\mathbb{P}_{\textbf{x} \sim q}(\phi_p(\mbx) \not\in \Phi_{\text{Accept}}) &=
\mathbb{P}_{\textbf{x} \sim p}(\phi_p(\mbx) \not\in \Phi_{\text{Accept}}).
\end{align}
Therefore, the power of the test (i.e., the probability of rejecting under $H_A: \textbf{x} \sim Q$) is equal to the false positive rate (i.e., the probability of rejecting under $H_0: \textbf{x} \sim P$).
When the power equals the false positive rate at every operating point, the ROC curve is the line $y = x$ with \gls{auc} $0.5$. This is equivalent to random guessing with rejection rate equal to the false positive rate chosen for the test.
\end{proof}
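For intuition, the construction in the proof can be instantiated numerically. The following sketch (an illustration under simplified, hypothetical assumptions, not part of the proof) takes $P$ to be a standard normal with statistic $\phi_p(\mbx) = |\mbx|$, so that the conditional $\mbx \,\vert\, \phi_p(\mbx)$ is supported on two points; skewing that conditional, which is the role of $\lambda$ and $A_{\phiofx}$ above, changes the distribution of $\mbx$ while leaving the distribution of the statistic untouched.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, lam = 200_000, 0.2

x_p = rng.normal(size=n)                 # samples from P

phi = np.abs(rng.normal(size=n))         # phi(x)=|x| has the same law as under P
pos = rng.random(n) < 1.0 / (1.0 + lam)  # q(+phi | phi) = 1/(1+lam) instead of 1/2
x_q = np.where(pos, phi, -phi)           # samples from Q

# Any test based on phi(x) = |x| sees identical distributions ...
print(np.quantile(np.abs(x_p), [0.25, 0.5, 0.75]))
print(np.quantile(np.abs(x_q), [0.25, 0.5, 0.75]))
# ... even though P and Q differ in distribution (e.g. different means).
print(x_p.mean(), x_q.mean())
\end{verbatim}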
\section{Rejection Rules Can Be Written in the Form $\phi_p(\mbx) < k$}
\label{sec:rej}
\paragraph{Lemma 1} \label{WLOG}\textit{Any rejection rule based on an interval, i.e.\ $\phi(\textbf{x}) \not\in \Phi$ for an interval $\Phi$, can be recast as a rule of the form $\phi'(\textbf{x}) < k$.}
\textit{Proof} For a one-sided rule, i.e.\ an interval $\Phi$ with one endpoint equal to $-\infty$ or $\infty$, we negate the statistic if necessary so that rejection corresponds to falling below a threshold. For a two-sided rule, i.e.\ a bounded interval, let $m$ denote the midpoint so that $\Phi = [m - k, m + k]$; rejection then occurs exactly when $\lvert \phi(\textbf{x}) - m \rvert > k$, which is the rule $\phi'(\textbf{x}) < -k$ with $\phi'(\textbf{x}) := -\lvert \phi(\textbf{x}) - m \rvert$.
Rejection rules of this form match the same ``rejection'' rules used for binary classification more broadly. For added clarity, we define some \gls{ood} detection methods based on their rejection rules in this form. For instance, the likelihood-based test \cite{Bishop} rejects when the negative log likelihood is above a certain threshold $k$, whereas the typicality test \cite{DeepMind2019DetectingOI} rejects when the distance to the training set entropy is above $k$.
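As a small illustration of this reduction (a hypothetical helper, not code from any released implementation):
\begin{verbatim}
def recast_two_sided(phi, lo, hi):
    """Recast 'reject if phi(x) outside [lo, hi]' as 'reject if phi_prime(x) < k'.

    Sketch of Lemma 1: with midpoint m = (lo + hi) / 2, rejection is
    |phi(x) - m| > (hi - lo) / 2, i.e. -|phi(x) - m| < -(hi - lo) / 2.
    """
    m = 0.5 * (lo + hi)
    k = -0.5 * (hi - lo)
    phi_prime = lambda x: -abs(phi(x) - m)
    return phi_prime, k
\end{verbatim}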
\section{Details for Experiment 5.2}
\label{sec:decexp_supp}
In this experiment, we compare a partially trained \acrshort{glow} model $p_\theta$ with a pretrained \acrshort{glow} model \cite{Kingma2018GlowGF} which we use as our data distribution $P$. First, we generate samples from $P$ by sampling from the \acrshort{glow} model pretrained on \acrshort{cifar-10} \footnote{The pretrained model is available here: \href{https://openaipublic.azureedge.net/glow-demo/logs/abl-1x1-aff.tar}{https://openaipublic.azureedge.net/glow-demo/logs/abl-1x1-aff.tar}}. We use temperature 1 for sampling to ensure our samples come from the distribution specified by the model. We generate 40,000 samples for training and 10,000 samples for evaluation, matching the train and test set sizes of the \acrshort{cifar-10} dataset.
The \gls{glow} model $p_\theta$ consists of 3 blocks, each with 8 affine coupling layers of 400 hidden units per layer. The network is trained with Adamax at a learning rate of 0.001, which stays constant after 10 epochs of warmup. We use batch size 64 during training. We intentionally limit the training (50 epochs with 10 epochs of warmup) to make the model's mis-estimation apparent. Our model achieves an average of 3.67 bits per dimension on the test samples, versus 3.45 for the true model (lower is better).
The true model is a larger model than $p_\theta$, consisting of 3-blocks each with 32 affine coupling layers with 400 units each.
We evaluate \gls{ood} performance on the test set of the model samples and the test set of CelebA.
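For reference, the bits-per-dimension numbers quoted above follow the standard conversion from a per-example log-likelihood in nats; a minimal sketch (the dimension count $32 \times 32 \times 3$ corresponds to \acrshort{cifar-10}-sized images):
\begin{verbatim}
import numpy as np

def bits_per_dim(log_px_nats, num_dims):
    """Convert a per-example log-likelihood (in nats) to bits per dimension."""
    return -log_px_nats / (num_dims * np.log(2.0))

# e.g. for CIFAR-10-sized images: bits_per_dim(log_px, 32 * 32 * 3)
\end{verbatim}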
\section{Existing Model Architectures Can Yield Good OOD Detectors}
\label{sec:pcnn_neg}
We directly optimize a \acrshort{pixelcnn++} to distinguish between Fashion\acrshort{mnist} and \acrshort{mnist} by replacing the maximum likelihood training objective with one which simultaneously maximizes the likelihood of Fashion\acrshort{mnist} images while minimizing the likelihood of \acrshort{mnist} images. Our objective is similar to that of \citet{Kirichenko2020WhyNF}, who show that flows can distinguish problematic \gls{ood} dataset pairs when optimized directly to do so.
\begin{equation} \label{eq:neg_obj}
\frac{1}{N_{in}}\sum_{x \in \mathcal{D}_{in}} \log p_{\theta}(x) - \frac{1}{N_{ood}}\sum_{x' \in \mathcal{D}_{ood}} \min(\log p_{\theta}(x'), c)
\end{equation}
Replicating the architecture of \citet{Ren2019LikelihoodRF}, we train our model across five random seeds using the MLE objective and five seeds with the above objective. We use the same training hyperparameters as \citet{Ren2019LikelihoodRF}: 50,000 steps at a learning rate of 0.0001 with exponential decay rate of 0.999995 per step, batch size of 32, and Adam optimizer with momentum parameters 0.95 and 0.9995. Our results, shown in \Cref{table:pcnn_neg}, demonstrate that the \acrshort{pixelcnn++} architecture has the capacity to push down probabilities on problematic \gls{ood} samples while maintaining high in-distribution likelihoods.
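A minimal sketch of the objective in \Cref{eq:neg_obj} (our paraphrase in code, assuming per-example log-likelihood tensors from the model are available; this is not the released training script):
\begin{verbatim}
import torch

def negative_training_loss(log_px_in, log_px_ood, c):
    """Loss (to minimize) for the negative-training objective: maximize
    in-distribution likelihood while pushing down OOD likelihood, clamped
    at c so already-unlikely OOD points stop contributing."""
    objective = log_px_in.mean() - torch.clamp(log_px_ood, max=c).mean()
    return -objective
\end{verbatim}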
\begin{table}[ht]
\centering
\caption{ \label{tab:neg_train} Existing models can be optimized to distinguish datasets. \acrshort{pixelcnn++} trained via the negative training (NT) objective in \Cref{eq:neg_obj} can achieve near-perfect \gls{ood} detection while maintaining comparable held-out log likelihoods (LL) with models trained via maximum likelihood estimation (MLE). We report mean and standard deviation of the results over 5 random seeds.}
\begin{tabular}{rrr}
\toprule
 & Fashion LL & \gls{ood} \acrshort{auc} \\
\midrule
MLE &$ -1550 \pm$ 6 & $0.097 \pm 0.004$ \\
NT & $-1562 \pm$ 7 & $1.000 \pm 0.000$ \\
\bottomrule
\label{table:pcnn_neg}
\end{tabular}
\end{table}
\end{document} |
\begin{document}
\providecommand{\xspacep}{\ensuremath{^{\bf +}}\xspace}% superscript plus used in the CoCoA+ name
\twocolumn[
\icmltitle{
Adding vs. Averaging in Distributed Primal-Dual Optimization
}
\icmlauthor{Chenxin Ma$^*$}{[email protected]}
\icmladdress{Industrial and Systems Engineering, Lehigh University, USA}
\icmlauthor{Virginia Smith$^*$}{[email protected]}
\icmladdress{University of California, Berkeley, USA}
\icmlauthor{Martin Jaggi}{[email protected]}
\icmladdress{ETH Z\"urich, Switzerland}
\icmlauthor{Michael I. Jordan}{[email protected]}
\icmladdress{University of California, Berkeley, USA}
\icmlauthor{Peter Richt\'arik}{[email protected]}
\icmladdress{School of Mathematics, University of Edinburgh, UK}
\icmlauthor{Martin Tak\'a\v{c}}{[email protected]}
\icmladdress{Industrial and Systems Engineering, Lehigh University, USA}
$^*$Authors contributed equally.
\icmlkeywords{optimization algorithms, large-scale machine learning, distributed systems}
\vskip 0.3in
]
\begin{abstract}
Distributed optimization methods for large-scale machine learning suffer
from a communication bottleneck. It is difficult to reduce this bottleneck while still efficiently and accurately aggregating partial work from different machines.
In this paper, we present a novel generalization of the recent communication-efficient primal-dual framework (\textsc{CoCoA}\xspace) for distributed optimization.
Our framework, \textsc{CoCoA}\xspacep, allows for \emph{additive} combination of local updates
to the global parameters at each iteration, whereas previous schemes with convergence guarantees only
allow conservative averaging.
We give stronger (primal-dual) convergence
rate guarantees for both \textsc{CoCoA}\xspace as well as our new variants, and generalize
the theory for both methods to cover non-smooth convex loss functions.
We provide an extensive experimental comparison that shows the markedly improved performance of \textsc{CoCoA}\xspacep on several real-world distributed datasets, especially when scaling up the number of machines.
\end{abstract}
\section{Introduction}
With the wide availability of large datasets that exceed the storage capacity of single machines, distributed optimization methods for machine learning have become increasingly important.
Existing methods require significant communication between workers, frequently equaling the amount of local computation (or reading of local data). As a result, distributed machine learning suffers significantly from a communication bottleneck on real world systems, where communication is typically several orders of magnitude slower than reading data from main memory.
In this work we focus on optimization problems with empirical loss minimization structure, i.e., objectives that are a sum of the loss functions of each datapoint. This includes the most commonly used regularized variants of linear regression and classification methods.
For this class of problems, the recently proposed \textsc{CoCoA}\xspace
approach \cite{Yang:2013vl,jaggi2014communication} develops a communication-efficient primal-dual
scheme that targets the communication bottleneck, allowing more computation on data-local subproblems native to
each machine before communication. By appropriately
choosing the amount of local computation per round, this framework allows
one to control the trade-off between \emph{communication} and \emph{local
computation} based on the system hardware at hand.
However, the performance of \textsc{CoCoA}\xspace (as well as related primal SGD-based methods) is significantly reduced by the need to average updates between all machines. As the number of machines $K$ grows, the updates get diluted and slowed by $1/K$, e.g., in the case where all machines except one would have already reached the solutions of their respective partial optimization tasks. On the other hand, if the updates are instead added, the algorithms can diverge, as we will observe in the practical experiments below.
To address both described issues, in this paper we develop a novel generalization of the local \textsc{CoCoA}\xspace subproblems
assigned to each worker, making the framework more powerful in the
following sense:
Without extra computational cost, the set of locally computed updates from
the modified subproblems (one from each machine) can be combined
more efficiently between machines.
The proposed \textsc{CoCoA}\xspacep updates can be
aggressively \emph{added} (hence the `+'-suffix),
which yields much faster
convergence both in practice and in theory. This difference is particularly significant as the number of
machines~$K$ becomes large.
\subsection{Contributions}
\paragraph{Strong Scaling.}
To our knowledge, our framework is the first to
exhibit favorable \emph{strong scaling} for the class of problems considered, as
the number of machines~$K$ increases and the data size is kept fixed.
More precisely, while the convergence rate of \textsc{CoCoA}\xspace degrades
as $K$ is increased, the stronger theoretical
convergence rate here is -- in the worst case -- \emph{independent} of $K$.
Our experiments in Section \ref{sec:experiments} confirm the
improved speed of convergence.
Since the number of communicated vectors is only one per round and worker,
this favorable scaling might be surprising. Indeed, for existing methods, splitting data among more machines generally
increases communication requirements \cite{Shamir:2014tp}, which
can severely affect overall runtime.
\paragraph{Theoretical Analysis of Non-Smooth Losses.}
While the existing analysis for \textsc{CoCoA}\xspace in \citep{jaggi2014communication} only
covered smooth loss functions, here we extend the class of functions where
the rates apply, additionally covering, e.g., Support Vector Machines and non-smooth
regression variants.
We provide a primal-dual convergence rate for both \textsc{CoCoA}\xspace as well as our
new method \textsc{CoCoA}\xspacep in the case of general convex ($L$-Lipschitz) losses.
\paragraph{Primal-Dual Convergence Rate.}
Furthermore, we additionally strengthen the rates by showing stronger
primal-dual convergence for both algorithmic frameworks, which are almost
tight to their objective-only counterparts. Primal-dual rates for \textsc{CoCoA}\xspace had not previously been analyzed in the general convex case.
Our primal-dual rates allow efficient and practical certificates for the
optimization quality, e.g., for stopping criteria. The new rates apply to both
smooth and non-smooth losses, and for both \textsc{CoCoA}\xspace as well as the extended
\textsc{CoCoA}\xspacep.
\paragraph{Arbitrary Local Solvers.}
\textsc{CoCoA}\xspace as well as \textsc{CoCoA}\xspacep allow the use of arbitrary local solvers on each machine.
\paragraph{Experimental Results.}
We provide a thorough experimental comparison with competing algorithms using several real-world distributed datasets. Our practical results confirm the strong scaling of \textsc{CoCoA}\xspacep as the number of machines~$K$ grows, while competing methods, including the original \textsc{CoCoA}\xspace, slow down significantly with larger $K$. We implement all algorithms in
\textsf{\small Spark}, and our code is publicly available at: {\small \url{github.com/gingsmith/cocoa}}.
\subsection{History and Related Work}
While optimal algorithms for the serial (single machine) case are already well researched and understood, the literature in the distributed setting is relatively sparse. In particular, details on optimal trade-offs between computation and communication, as well as optimization or statistical accuracy, are still widely unclear.
For an overview over this currently active research field, we refer the reader to \cite{Balcan:2012tc,richtarik2013distributed,Duchi:2013te,Yang:2013vl,WrightAsynchrous14,fercoq2014fast,jaggi2014communication,Shamir:2014tp,DANE,DISCO,ALPHA} and the references therein.
We provide a detailed comparison of our proposed framework to the related work in Section \ref{sec:relatedWork}.
\section{Setup}
We consider regularized empirical loss minimization problems of the following well-established form:
\begin{equation}
\label{eq:primal}
\min_{ {\bf w}\in \mathbb{R}^d}
\left\{
\mathcal{P}( {\bf w}):=
\frac{1}{n} \sum_{i=1}^n \ell_i( {\bf x}_i^T {\bf w}) + \frac{\lambda}{2} \| {\bf w}\|^2 \right\}
\end{equation}
Here the vectors
$\{ {\bf x}_i\}_{i=1}^n \subset \mathbb{R}^d$ represent the training data examples, and the $\ell_i(.)$ are arbitrary convex real-valued loss functions (e.g., hinge loss), possibly depending on label information for the $i$-th datapoint. The constant $\lambda>0$ is the regularization parameter.
The above class includes many standard problems of wide interest in machine learning, statistics, and signal processing, including support vector machines, regularized
linear and logistic regression,
ordinal regression, and others.
\paragraph{Dual Problem, and Primal-Dual Certificates.}
The conjugate dual of \eqref{eq:primal} takes the following form:
\begin{equation}
\label{eq:dual}
\max_{ {\boldsymbol \alpha} \in \mathbb{R}^n}
\Bigg\{
\mathcal{D}( {\boldsymbol \alpha} ):=
-\frac{1}{n} \sum_{j=1}^n \ell_j^*(- \alpha_j)
-\frac{\lambda}{2}
\left\|\frac{A {\boldsymbol \alpha}}{\lambda n} \right\|^2 \Bigg\}
\end{equation}
Here the data matrix $A=[ {\bf x}_1, {\bf x}_2, \dots, {\bf x}_n] \in \mathbb{R}^{d\times n}$ collects all data-points as its columns,
and $\ell_j^*$ is the conjugate function to $\ell_j$. See, e.g., \cite{ShalevShwartz:2013wl} for several concrete applications.
It is possible to assign for any dual vector $ {\boldsymbol \alpha} \in \mathbb{R}^n$
a corresponding primal feasible point
\begin{equation}
\label{eq:PDMapping}
{\bf w}( {\boldsymbol \alpha})
= \tfrac1{\lambda n} A {\boldsymbol \alpha}
\end{equation}
The duality gap function is then given by:
\begin{align}
\label{eq:gap}
G( {\boldsymbol \alpha})
:= \mathcal{P}( {\bf w}( {\boldsymbol \alpha}))-\mathcal{D}( {\boldsymbol \alpha})
\end{align}
By weak duality, every value $\mathcal{D}( {\boldsymbol \alpha})$ at a dual candidate~$ {\boldsymbol \alpha}$ provides a lower bound on every primal value $\mathcal{P}( {\bf w})$. The duality gap is therefore a certificate on the approximation quality: The distance to the unknown true optimum $\mathcal{P}( {\bf w}^*)$ must always lie within the duality gap, i.e., $G( {\boldsymbol \alpha}) = \mathcal{P}( {\bf w})-\mathcal{D}( {\boldsymbol \alpha}) \ge \mathcal{P}( {\bf w}) - \mathcal{P}( {\bf w}^*) \ge 0$.
In large-scale machine learning settings like those considered here, the availability of such a computable measure of approximation quality is a significant benefit during training time. Practitioners using classical primal-only methods such as SGD have no means by which to accurately detect if a model has been well trained, as $\mathcal{P}( {\bf w}^*)$ is unknown.
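As an illustration of how cheaply this certificate can be evaluated, the following sketch computes $G( {\boldsymbol \alpha})$ for the hinge loss $\ell_i(z)=\max(0,1-y_i z)$; it is a hypothetical helper in our notation and assumes dual feasibility $\alpha_i y_i \in [0,1]$, for which $-\ell_i^*(-\alpha_i) = \alpha_i y_i$.
\begin{verbatim}
import numpy as np

def duality_gap_hinge(A, y, alpha, lam):
    """Duality gap G(alpha) = P(w(alpha)) - D(alpha) for the hinge-loss SVM.

    A: d x n matrix whose columns are the examples x_i, y: labels in {-1,+1},
    alpha: dual vector with alpha_i * y_i in [0,1], lam: regularizer lambda.
    """
    n = A.shape[1]
    w = A @ alpha / (lam * n)                      # primal-dual mapping w(alpha)
    primal = np.mean(np.maximum(0.0, 1.0 - y * (A.T @ w))) + 0.5 * lam * w @ w
    dual = np.mean(alpha * y) - 0.5 * lam * w @ w  # -ell_i^*(-alpha_i) = alpha_i y_i
    return primal - dual
\end{verbatim}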
\paragraph{Classes of Loss-Functions.}
To simplify presentation, we assume
that all loss functions $\ell_i$ are non-negative, and
\begin{equation}
\ell_i(0)\leq 1 \qquad \forall i
\label{eq:afswfevfwaefa}
\end{equation}
\begin{definition}[$L$-Lipschitz continuous loss]
A function $\ell_i: \mathbb{R} \to \mathbb{R}$ is $L$-Lipschitz continuous if $\forall a,b \in \mathbb{R}$, we have
\begin{equation}
| \ell_i(a) - \ell_i(b) | \leq L |a-b|
\end{equation}
\end{definition}
\begin{definition}[$(1/\mu)$-smooth loss]
A function $\ell_i: \mathbb{R} \to \mathbb{R}$ is $(1/\mu)$-smooth
if it is differentiable and its derivative is $(1/\mu)$-Lipschitz continuous, i.e.,
$\forall a,b \in \mathbb{R}$, we have
\begin{equation}
| \ell_i'(a) - \ell_i'(b) | \leq \frac1{\mu} |a-b|
\end{equation}
\end{definition}
\section{The \textsc{CoCoA}\xspacep Algorithm Framework}
In this section we present our novel \textsc{CoCoA}\xspacep framework. $\textsc{CoCoA}\xspacep$ inherits the many benefits of CoCoA as it remains a highly flexible and scalable, communication-efficient framework for distributed optimization. $\textsc{CoCoA}\xspacep$ differs algorithmically in that we modify the form of the local subproblems \eqref{eq:subproblem} to allow for more aggressive additive updates (as controlled by $\gamma$). We will see that these changes allow for stronger convergence guarantees as well as improved empirical performance. Proofs of all statements in this section are given in the supplementary material.
\paragraph{Data Partitioning.}
We write $\{\mathcal{P}_k
\}_{k=1}^K$ for the
given partition of the datapoints $[n]:= \{1,2,\dots,n\}$ over the $K$ worker machines.
We denote the size of each part by $n_k=|\mathcal{P}_k|$.
For any $k\in[K]$
and $ {\boldsymbol \alpha}\in \mathbb{R}^n$
we use the notation
$\vsubset{ {\boldsymbol \alpha}}{k}\in \mathbb{R}^n$ for the vector
$$
(\vsubset{ {\boldsymbol \alpha}}{k})_i
:=
\begin{cases}
0,&\mbox{if}\ i\notin \mathcal{P}_k,\\
\alpha_i,&\mbox{otherwise.}
\end{cases}
$$
\paragraph{Local Subproblems in \textsc{CoCoA}\xspacep.}
We can define a data-local subproblem of the original dual optimization problem \eqref{eq:dual}, which can be solved on machine $k$ and only requires accessing data which is already available locally, i.e., datapoints with $i\in\mathcal{P}_k$. More formally, each machine $k$ is assigned the following local subproblem, depending only on the previous shared primal vector $ {\bf w}\in\mathbb{R}^d$, and the change in the local dual variables~$\alpha_i$ with $i\in\mathcal{P}_k$:
\begin{equation}
\max_{\vsubset{\Delta {\boldsymbol \alpha}}{k}\in\mathbb{R}^{n}} \
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\end{equation}
where \begin{align}
&\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
:=
-\frac1n\sum_{i \in \mathcal{P}_k}
\ell_i^*(-\alpha_i - (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i)
\nonumber
\\
&\hspace{-2mm}
- \frac1K
\frac{\lambda}{2}
\| {\bf w}\|^2
-\frac1n
 {\bf w}^T A \vsubset{\Delta {\boldsymbol \alpha}}{k}
- \frac\lambda2
\sigma' \Big\|\frac1{\lambda n} A \vsubset{\Delta {\boldsymbol \alpha}}{k}\Big\|^2
\label{eq:subproblem}
\end{align}
\paragraph{Interpretation.}
The local objective functions $\mathcal{G}^{\sigma'}_k\hspace{-0.08em}$ defined above closely approximate the global dual objective $\mathcal{D}$ as we vary the `local' variable~$\vsubset{\Delta {\boldsymbol \alpha}}{k}$, in the following precise sense:
\begin{lemma}
\label{lem:RelationOfDTOSubproblems}
For any dual
$ {\boldsymbol \alpha}, \Delta {\boldsymbol \alpha}
\in \mathbb{R}^n$, primal $ {\bf w} = {\bf w}( {\boldsymbol \alpha})$ and real values $\gamma,\sigma'$ satisfying~\eqref{eq:sigmaPrimeSafeDefinition}, it holds that
\begin{align}
\mathcal{D}\Big(
 {\boldsymbol \alpha} +\gamma
\sum_{k=1}^K
\vsubset{\Delta {\boldsymbol \alpha}}{k}\!
\Big) \geq
(1-\gamma) \mathcal{D}( {\boldsymbol \alpha})
+ \gamma
\sum_{k=1}^K
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(\vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) .
\end{align}
\end{lemma}
The role of the parameter $\sigma'$ is to measure the difficulty of the given
data partition. For our purposes, we will see that it must be chosen not
smaller than
\begin{equation}
\label{eq:sigmaPrimeSafeDefinition}
\sigma'
\geq
\sigma'_{min}
:=
\gamma
\max_{ {\boldsymbol \alpha}\in \mathbb{R}^n}
\frac{
\|A {\boldsymbol \alpha}\|^2}{\sum_{k=1}^K \|A \vsubset{ {\boldsymbol \alpha}}{k}\|^2} .
\end{equation}
In the following lemma, we show that this parameter can
be upper-bounded by $\gamma K$, which is trivial to calculate for all values $\gamma\in\mathbb{R}$. We show experimentally (Section \ref{sec:experiments}) that this safe upper bound for $\sigma'$ has a minimal effect on the overall performance
of the algorithm. Our main theorems below show
convergence rates dependent on $\gamma \in [\tfrac1K,1]$, which we refer to as the \textit{aggregation parameter}.
\begin{lemma}\label{lem:sigmaPrimeNotBad}
The choice of $\sigma' := \gamma K$ is valid for \eqref{eq:sigmaPrimeSafeDefinition}, i.e.,
\[
\gamma K
\geq
\sigma'_{min}
\]
\end{lemma}
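A quick numerical sanity check of this bound (a sketch on synthetic data; the maximum in \eqref{eq:sigmaPrimeSafeDefinition} is only probed at random points rather than computed exactly):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n, K, gamma = 10, 200, 5, 1.0
A = rng.normal(size=(d, n))
parts = np.array_split(np.arange(n), K)

for _ in range(1000):
    alpha = rng.normal(size=n)
    num = np.linalg.norm(A @ alpha) ** 2
    den = sum(np.linalg.norm(A[:, idx] @ alpha[idx]) ** 2 for idx in parts)
    # gamma * ||A alpha||^2 / sum_k ||A alpha_[k]||^2 never exceeds gamma * K
    assert gamma * num / den <= gamma * K + 1e-9
\end{verbatim}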
\paragraph{Notion of Approximation Quality of the Local Solver.}
\begin{assumption}[$\Theta$-approximate solution]
\label{asm:THeta}
We assume that
there exists $\Theta \in [0,1)$ such that
$\forall k\in [K]$,
the local solver at any outer iteration $t$ produces
a (possibly) randomized approximate solution $\vsubset{\Delta {\boldsymbol \alpha}}{k}$,
which satisfies
\begin{align}
\label{eq:localSolutionQuality}
\mathbb{E}\big[
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
-
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(\vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\big] \
\\ \nonumber
\leq \ \Theta
\left(
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
-
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}({\bf 0}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\right),
\end{align}
where
\begin{align}
\label{eq:asjfcowjfcaw}
\vsubset{\Delta {\boldsymbol \alpha}^*}{k}
\in \argmax_{\Delta {\boldsymbol \alpha} \in \mathbb{R}^n} \
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(\vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) \quad \forall k\in[K]
\hspace{-2.5mm}
\end{align}
\end{assumption}
We are now ready to describe the \textsc{CoCoA}\xspacep framework, shown in Algorithm \ref{alg:cocoa}.
The crucial difference compared to the existing \textsc{CoCoA}\xspace algorithm \cite{jaggi2014communication} is the more general local subproblem, as defined in \eqref{eq:subproblem}, as well as the aggregation parameter $\gamma$.
These modifications allow the option of directly adding updates to the global vector~$ {\bf w}$.
\begin{algorithm}[h]
\caption{\textsc{CoCoA}\xspacep Framework}
\label{alg:cocoa}
\begin{algorithmic}[1]
\STATE {\bf Input:} Datapoints $A$ distributed according to partition $\{\mathcal{P}_k\}_{k=1}^K$.
Aggregation parameter $\gamma\!\in\!(0,1]$,
subproblem parameter $\sigma'$ for the local subproblems
$\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})$ for each $k\in[K]$.\\
Starting point $\vc{ {\boldsymbol \alpha}}{0} := {\bf 0} \in \mathbb{R}^n$, $\vc{ {\bf w}}{0}:= {\bf 0}\in \mathbb{R}^d$.
\FOR {$t = 0, 1, 2, \dots $}
\FOR {$k \in \{1,2,\dots,K\}$ {\bf in parallel over computers}}
\STATE call the local solver, computing
a $\Theta$-approximate solution
$\vsubset{\Delta {\boldsymbol \alpha}}{k}$
of the local subproblem \eqref{eq:subproblem}
\STATE update $\vsubset{\vc{ {\boldsymbol \alpha}}{t+1}}{k} := \vsubset{\vc{ {\boldsymbol \alpha}}{t}}{k} + \gamma \, \vsubset{\Delta {\boldsymbol \alpha}}{k}$
\STATE return $\Delta {\bf w}_k := \frac1{\lambda n} A \vsubset{\Delta {\boldsymbol \alpha}}{k}$
\ENDFOR
\STATE reduce
\begin{equation}\label{eq:primalGlobalUpdate}
\vc{ {\bf w}}{t+1} := \vc{ {\bf w}}{t} +
\gamma \textstyle \sum_{k=1}^K \Delta {\bf w}_k.
\end{equation}
\ENDFOR
\end{algorithmic}
\end{algorithm}
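For concreteness, the outer loop of Algorithm \ref{alg:cocoa} can be simulated on a single machine in a few lines (a sketch only; the local solver is treated as a black box satisfying Assumption \ref{asm:THeta}, and all names below are ours):
\begin{verbatim}
import numpy as np

def cocoa_plus(A, partition, local_solver, lam, gamma, sigma_p, T):
    """Single-process simulation of the CoCoA+ outer loop (Algorithm 1).

    A: d x n data matrix (columns are examples); partition: list of K index
    arrays P_k; local_solver(idx, w, alpha, lam, sigma_p) returns an
    approximate maximizer delta_alpha_k of the local subproblem restricted
    to coordinates idx; gamma = 1/K recovers averaging, gamma = 1 adding.
    """
    d, n = A.shape
    alpha, w = np.zeros(n), np.zeros(d)
    for _ in range(T):
        delta_ws = []
        for idx in partition:                      # "in parallel" over machines
            d_alpha = local_solver(idx, w, alpha, lam, sigma_p)
            alpha[idx] += gamma * d_alpha          # local dual update
            delta_ws.append(A[:, idx] @ d_alpha / (lam * n))
        w += gamma * np.sum(delta_ws, axis=0)      # reduce step
    return w, alpha
\end{verbatim}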
\section{Convergence Guarantees}
\label{sec:convergence}
Before being able to state our main convergence results, we introduce some useful quantities and the following main lemma characterizing the effect of iterations of Algorithm~\ref{alg:cocoa}, for any chosen internal local solver.
\begin{lemma}
\label{lem:basicLemma}
Let $\ell_i^*$ be strongly\footnote{
Note that the case of weakly convex $\ell_i^*(.)$ is explicitly allowed here as well, as the Lemma holds for the case $\mu = 0$.
}
convex with convexity parameter
$\mu \geq 0$ with respect to the norm $\|\cdot\|$, $\forall i\in[n]$.
Then for all iterations~$t$ of Algorithm~\ref{alg:cocoa} under Assumption~\ref{asm:THeta}, and any $s\in [0,1]$, it holds that
\begin{align}
\label{eq:lemma:dualDecrease_VS_dualityGap}
&\mathbb{E}[
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})
-
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
]
\geq
\\ \nonumber
&\qquad\qquad\qquad\qquad
\gamma
(1-\Theta)
\Big(
s G(\vc{ {\boldsymbol \alpha}}{t})
-
\frac{\sigma'}{2\lambda }
\big(\frac{s}{n} \big)^2
\vc{R}{t}
\Big),
\end{align}
where
\begin{align*}
\tagthis \label{eq:defOfR}
\vc{R}{t}&:=
-
\tfrac{ \lambda\mu n (1-s)}{\sigma' s }
\|\vc{ {\bf u}}{t}-\vc{ {\boldsymbol \alpha}}{t}\|^2
\\ \qquad \nonumber &+
\textstyle{\sum}_{k=1}^K
\| A \vsubset{ (\vc{ {\bf u}}{t} - \vc{ {\boldsymbol \alpha}}{t} )}{k}\|^2,
\end{align*}
for $\vc{ {\bf u}}{t} \in\mathbb{R}^n$
with
\begin{equation}
\label{eq:defintionOfUi}
-\vc{u_i}{t}
\in \partial \ell_i( {\bf w}(\vc{ {\boldsymbol \alpha}}{t})^T {\bf x}_i).
\end{equation}
\end{lemma}
The following Lemma provides a uniform bound on~$\vc{R}{t}$:
\begin{lemma}
\label{lemma:BoundOnR}
If $\ell_i$ are $L$-Lipschitz
continuous for all $i\in [n]$, then
\begin{equation}
\label{eq:asfjoewjofa}
\forall t:
\vc{R}{t}
\leq 4L^2
\underbrace{\sum _{k=1}^K
\sigma_k n_k}_{=: \sigma},
\end{equation}
where
\begin{equation}
\label{eq:definitionOfSigmaK}
\sigma_k :=
\max_{\vsubset{ {\boldsymbol \alpha}}{k} \in \mathbb{R}^n}
\frac{\|A \vsubset{ {\boldsymbol \alpha}}{k}\|^2}{
\|\vsubset{ {\boldsymbol \alpha}}{k}\|^2}.
\end{equation}
\end{lemma}
\begin{remark}
\label{rmk:asfwaefwae}
If all data-points $ {\bf x}_i$ are normalized such that
$\| {\bf x}_i\|\leq 1$ $\forall i\in [n]$, then
$\sigma_k \leq |\mathcal{P}_k| = n_k$.
Furthermore, if we assume that the data partition is balanced, i.e., that
$n_k = n/K$ for all $k$, then $\sigma \le n^2/K$. This can be used to bound the constants $\vc{R}{t}$, above, as
$
\vc{R}{t} \leq \frac{4L^2 n^2}{K}.$
\end{remark}
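This bound is easy to verify numerically (a sketch on synthetic data; $\sigma_k$ is the squared spectral norm of the local column block):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
d, n, K = 20, 400, 8
A = rng.normal(size=(d, n))
A /= np.maximum(1.0, np.linalg.norm(A, axis=0))      # enforce ||x_i|| <= 1
parts = np.array_split(np.arange(n), K)

for idx in parts:
    sigma_k = np.linalg.norm(A[:, idx], ord=2) ** 2  # max ||A a_[k]||^2 / ||a_[k]||^2
    assert sigma_k <= len(idx) + 1e-9                # sigma_k <= n_k
\end{verbatim}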
\subsection{Primal-Dual Convergence for General Convex Losses}
The following theorem shows the convergence for non-smooth loss functions, in terms of objective values as well as primal-dual gap.
The analysis in \cite{jaggi2014communication} only covered the case of smooth loss functions.
\begin{theorem}
\label{thm:convergenceNonsmooth}
Consider Algorithm \ref{alg:cocoa} with Assumption \ref{asm:THeta}.
Let $\ell_i(\cdot)$ be $L$-Lipschitz continuous,
and $\epsilon_G > 0$ be the desired duality gap (and hence an upper bound on the primal sub-optimality).
Then after $T$ iterations, where
\begin{align}\label{eq:dualityRequirements}
T
&\geq
T_0 +
\max\Big\{\Big\lceil \tfrac{1}{\gamma (1-\Theta)}\Big\rceil,\
\tfrac{4L^2 \sigma \sigma'}{\lambda n^2 \epsilon_G\, \gamma (1-\Theta)}\Big\},
\\
T_0
&\geq t_0+
\left(
\tfrac{2}{\gamma (1-\Theta)}
\left(
\tfrac{8L^2 \sigma \sigma'}{\lambda n^2 \epsilon_G}
-1
\right)
\right)_+,\notag
\\
t_0 &\geq
\max\Big(0,\Big\lceil \tfrac1{\gamma (1-\Theta)}
\log\Big(
\tfrac{
2\lambda n^2 (\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{0}))
}{4 L^2 \sigma \sigma'}
\Big)
\Big\rceil\Big),\notag
\end{align}
we have that the expected duality gap satisfies
\[
\mathbb{E}[\mathcal{P}( {\bf w}(\overline {\boldsymbol \alpha})) - \mathcal{D}(\overline {\boldsymbol \alpha}) ] \leq \epsilon_G,
\]
at the averaged iterate
\begin{equation}\label{eq:averageOfAlphaDefinition}
\overline {\boldsymbol \alpha}: = \tfrac1{T-T_0}\textstyle{\sum}_{t=T_0+1}^{T-1} \vc{ {\boldsymbol \alpha}}{t}.
\end{equation}
\end{theorem}
The following corollary of the above theorem clarifies our main result: the more aggressive adding of the partial updates, as compared to averaging, offers a very significant improvement in terms of the total iterations needed.
While the convergence in the `adding' case becomes independent of the number of machines $K$, the `averaging' regime shows the known degradation of the rate with growing $K$, which is a major drawback of the original \textsc{CoCoA}\xspace algorithm. This important difference in convergence speed is not merely a theoretical artifact, but is also confirmed in our practical experiments below for different $K$, as shown e.g. in Figure~\ref{fig:scaling_k}.
We further demonstrate below that by choosing $\gamma$ and $\sigma'$ accordingly, we can still recover the original \textsc{CoCoA}\xspace algorithm and its rate.
\begin{corollary}\label{cor:convergence}
Assume that
all datapoints $ {\bf x}_i$ are bounded as $\| {\bf x}_i\|\leq 1$
and that
the data partition is balanced, i.e. that
$n_k = n/K$ for all $k$.
We consider two different possible choices of the aggregation parameter~$\gamma$:
\begin{itemize}
\item
(\textsc{CoCoA}\xspace Averaging, $\gamma := \tfrac1K$):
In this case, $\sigma':=1$ is a valid choice which satisfies
\eqref{eq:sigmaPrimeSafeDefinition}.
Then using $\sigma \le n^2/K$ in light of Remark \ref{rmk:asfwaefwae}, we have that $T$ iterations are sufficient for primal-dual accuracy $\epsilon_G$, with
\begin{align*}
T
&\geq
T_0 +
\max\Big\{\Big\lceil \tfrac{K}{1-\Theta}\Big\rceil,\
\tfrac{4L^2}{\lambda \epsilon_G (1-\Theta)}\Big\},
\\
T_0
&\geq t_0+
\left(
\tfrac{2 K}{1-\Theta}
\left(
\tfrac{8L^2}{\lambda K \epsilon_G}
-1
\right)
\right)_+,
\\
t_0 &\geq
\max\Big(0,\Big\lceil \tfrac{K}{1-\Theta}
\log\Big(
\tfrac{2\lambda (\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{0}))}{4 K L^2}
\Big)
\Big\rceil\Big)
\end{align*}
Hence the more machines $K$, the more iterations are needed (in the worst case).
\item
(\textsc{CoCoA}\xspacep Adding, $\gamma := 1$):
In this case, the choice of $\sigma':=K$ satisfies
\eqref{eq:sigmaPrimeSafeDefinition}.
Then using $\sigma \le n^2/K$ in light of Remark \ref{rmk:asfwaefwae}, we have that $T$ iterations are sufficient for primal-dual accuracy $\epsilon_G$,
with
\begin{align*}
T
&\geq
T_0 +
\max\Big\{\Big\lceil \tfrac{1}{1-\Theta}\Big\rceil,\
\tfrac{4L^2}{\lambda \epsilon_G (1-\Theta)}\Big\},
\\
T_0
&\geq t_0+
\left(
\tfrac{2}{1-\Theta}
\left(
\tfrac{8L^2}{\lambda \epsilon_G}
-1
\right)
\right)_+,
\\
t_0 &\geq
\max\Big(0,\Big\lceil \tfrac{1}{1-\Theta}
\log\Big(
\tfrac{2\lambda n (\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{0}))}{4 K L^2}
\Big)
\Big\rceil\Big)
\end{align*}
This is significantly better than the averaging case.
\end{itemize}
\end{corollary}
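For a rough sense of the difference (an illustrative instance with hypothetical values, ignoring the warm-start terms $T_0$ and $t_0$): with $K = 100$, $\Theta = 0.9$ and $\lambda\epsilon_G = 4L^2$, the leading $\max$-term of the averaging bound is $\lceil K/(1-\Theta)\rceil = 1000$ rounds, whereas for adding it is $\max\{\lceil 1/(1-\Theta)\rceil,\ 1/(1-\Theta)\} = 10$ rounds.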
In practice, we usually have $\sigma \ll n^2/K$, and hence the actual convergence rate can be much better than the proven worst-case bound.
Table \ref{tbl:Sigma}
shows that the actual value of $\sigma$ is typically between one and two orders of magnitudes smaller compared to our used upper-bound $n^2/K$.
\begin{table}[h]
\centering
\caption{The ratio of the upper bound $\tfrac{n^2}{K}$ to the true value of the parameter~$\sigma$, for several real datasets. }
\label{tbl:Sigma}
\scriptsize
\begin{tabular}{rrrrrrr}
\toprule
K & 16 & 32 & 64 & 128 & 256 & 512 \\
\midrule
news & 15.483 & 14.933 & 14.278 & 13.390 & 12.074 & 10.252 \\
real-sim & 42.127 & 36.898 & 30.780 & 23.814 & 16.965 & 11.835 \\
rcv1 & 40.138 & 23.827 & 28.204 & 21.792 & 16.339 & 11.099 \\
\midrule
K & 256 & 512 & 1024 & 2048 & 4096 & 8192 \\
\midrule
covtype & 17.277 & 17.260 & 17.239 & 16.948 & 17.238 & 12.729
\\ \bottomrule
\end{tabular}
\end{table}
\iffalse
\begin{table}[htbp]
\centering
\caption{$K\gamma/\sigma'_{\min} $}
\label{tbl:Sigma'}
\scriptsize
\begin{tabular}{rrrrrrr}
\toprule
K & 16 & 32 & 64 & 128 & 256 & 512 \\
\midrule
news & 1.0808 & 1.1264 & 1.1770 & 1.2553 & 1.3838 & 1.6036 \\
real-sim & 1.5096 & 1.6114 & 1.7430 & 2.0018 & 2.3918 & 3.0307 \\
rcv1 & 1.0887 & 1.1842 & 1.3778 & 1.6908 & 2.2273 & 2.9938 \\
\midrule
K & 256 & 512 & 1024 & 2048 & 4096 & 8192 \\
\midrule
covtype & 1.0000 & 1.0000 & 1.0001 & 1.0001 & 1.0005 & 1.0042 \\
webspam & 1.0000 & 1.0002 & 1.0006 & 1.0001 & 1.0006 & 1.0011 \\
\bottomrule
\end{tabular}
\end{table}
\fi
\subsection{Primal-Dual Convergence for Smooth Losses}
The following theorem shows the convergence for smooth losses, in terms of the objective as well as primal-dual gap.
\begin{theorem}
\label{thm:convergenceSmoothCase}
Assume the loss functions
$\ell_i$ are $(1/\mu)$-smooth $\forall i\in[n]$.
We define $\sigma_{\max} =
\max_{k\in[K]} \sigma_k$. Then after $T$ iterations of Algorithm \ref{alg:cocoa}, with
$$
T
\geq
\tfrac{1}
{\gamma
(1-\Theta)}
\tfrac
{\lambda\mu n+
\sigma_{\max} \sigma'}
{ \lambda\mu n }
\log \tfrac1{\epsilon_\mathcal{D}} ,
$$
it holds that
$$\mathbb{E}[\mathcal{D}( {\boldsymbol \alpha}^*)
- \mathcal{D}(\vc{ {\boldsymbol \alpha}}{T})]
\leq \epsilon_\mathcal{D}.$$
Furthermore, after $T$ iterations with
$$
T
\geq
\tfrac{1}
{\gamma
(1-\Theta)}
\tfrac
{\lambda\mu n+
\sigma_{\max} \sigma'}
{ \lambda\mu n }
\log
\left(
\tfrac{1}
{\gamma
(1-\Theta)}
\tfrac
{\lambda\mu n+
\sigma_{\max} \sigma'}
{ \lambda\mu n }
\tfrac1{\epsilon_G}
\right) ,
$$
we have the expected duality gap
$$
\mathbb{E}[
\mathcal{P}( {\bf w}(\vc{ {\boldsymbol \alpha}}{T})) - \mathcal{D}(\vc{ {\boldsymbol \alpha}}{T})
]\leq \epsilon_G.
$$
\end{theorem}
The following corollary is analogous to Corollary \ref{cor:convergence}, but for the case of smooth losses.
It again shows that while the \textsc{CoCoA}\xspace variant degrades with the increase of the number of machines $K$, the $\textsc{CoCoA}\xspacep$ rate is independent of $K$.
\begin{corollary}\label{cor:convergenceSmooth}
Assume that
all datapoints $ {\bf x}_i$ are bounded as $\| {\bf x}_i\|\leq 1$
and that
the data partition is balanced, i.e., that
$n_k = n/K$ for all $k$.
We again consider the same two different possible choices of the aggregation parameter~$\gamma$:
\begin{itemize}
\item
(\textsc{CoCoA}\xspace Averaging, $\gamma := \tfrac1K$):
In this case, $\sigma':=1$ is a valid choice which satisfies
\eqref{eq:sigmaPrimeSafeDefinition}.
Then using $\sigma_{\max} \le n_k = n/K$ in light of Remark \ref{rmk:asfwaefwae}, we have that $T$ iterations are sufficient for suboptimality
$\epsilon_\mathcal{D}$, with
\begin{align*}
T
&\geq
\tfrac{1}
{1-\Theta}
\tfrac
{\lambda\mu K +
1}
{ \lambda\mu }
\log \tfrac1{\epsilon_\mathcal{D}}
\end{align*}
Hence the more machines $K$, the more iterations are needed (in the worst case).
\item
(\textsc{CoCoA}\xspacep Adding, $\gamma := 1$):
In this case, the choice of $\sigma':=K$ satisfies
\eqref{eq:sigmaPrimeSafeDefinition}.
Then using $\sigma_{\max} \le n_k = n/K$ in light of Remark \ref{rmk:asfwaefwae}, we have that $T$ iterations are sufficient for suboptimality
$\epsilon_\mathcal{D}$, with
\begin{align*}
T
&\geq
\tfrac{1}{1-\Theta}
\tfrac
{\lambda\mu+1}
{ \lambda\mu }
\log \tfrac1{\epsilon_\mathcal{D}}
\end{align*}
This is significantly better than the averaging case.
Both rates hold analogously for the duality gap.
\end{itemize}
\end{corollary}
\subsection{Comparison with Original \textsc{CoCoA}\xspace}
\begin{remark}
If we choose averaging
($\gamma := \tfrac1K$) for aggregating the updates, together with $\sigma' := 1$,
then the resulting Algorithm \ref{alg:cocoa} is identical to \textsc{CoCoA}\xspace analyzed in \cite{jaggi2014communication}.
However, they only provide convergence for smooth loss functions $\ell_i$ and have guarantees for dual sub-optimality and not the duality gap. Formally, when $\sigma' = 1$, the subproblems \eqref{eq:subproblem} will differ from the original dual $\mathcal{D}(.)$ only by an additive constant, which does not affect the local optimization algorithms used within \textsc{CoCoA}\xspace.
\end{remark}
\section{SDCA as an Example Local Solver}
We have shown convergence rates for Algorithm \ref{alg:cocoa}, depending solely on the approximation quality $\Theta$ of the used local solver (Assumption~\ref{asm:THeta}).
Any chosen local solver in each round receives the local $ {\boldsymbol \alpha}$ variables as an input, as well as a shared vector $ {\bf w} \overset{\eqref{eq:PDMapping}}{=} {\bf w}( {\boldsymbol \alpha} )$ being compatible with the last state of all global $ {\boldsymbol \alpha}\in \mathbb{R}^n$ variables.
As an illustrative example for a local solver, Algorithm \ref{alg:localSDCA} below summarizes randomized coordinate ascent (SDCA) applied on the local subproblem \eqref{eq:subproblem}.
The following two Theorems
(\ref{thm:LocalSDCA_smooth2},
\ref{thm:LocalSDCA_smooth1})
characterize the local convergence
for both smooth and non-smooth functions. In all the results we will use
$r_{\max} := \max_{i \in [n]} \| {\bf x}_i\|^2$.
\begin{algorithm}[h]
\caption{\textsc{LocalSDCA}\xspace$( {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}, k, H)$}
\label{alg:localSDCA}
\begin{algorithmic}[1]
\STATE {\bf Input:}
$\vsubset{ {\boldsymbol \alpha}}{k}, {\bf w}= {\bf w}( {\boldsymbol \alpha})$
\STATE {\bf Data:} Local $\{( {\bf x}_i, y_i)\}_{ i \in \mathcal{P}_k}$
\STATE {\bf Initialize:} $\vc{\Delta {\boldsymbol \alpha}_{[k]}}{0} := {\bf 0} \in \mathbb R^{n}$
\FOR {$h = 0, 1, \dots ,H-1$}
\STATE choose $i\in \mathcal{P}_k$ uniformly at random
\STATE
$\displaystyle
\delta^*_i
:= \argmax_{\delta_i \in \mathbb{R}} \,
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vc{\vsubset{\Delta {\boldsymbol \alpha}}{k}}{h}
+ \delta_i {\bf e}_i; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})$
\STATE
$\vsubset{
\Delta {\boldsymbol \alpha}^{(h+1)}}{k} := \vsubset{
\Delta {\boldsymbol \alpha}^{(h)}}{k} + \delta^*_i {\bf e}_i$
\ENDFOR
\STATE {\bf Output:} $\Delta {\boldsymbol \alpha}_{[k]}^{(H)}$
\end{algorithmic}
\end{algorithm}
\begin{theorem}
\label{thm:LocalSDCA_smooth2}
Assume the functions $\ell_i$ are $(1/\mu)$-smooth for $i\in[n]$.
Then Assumption~\ref{asm:THeta} on the local approximation quality $\Theta$ is satisfied
for \textsc{LocalSDCA}\xspace as given in Algorithm \ref{alg:localSDCA}, if we
choose the number of inner iterations $H$ as
\begin{equation}
\label{eq:asjfwjfdwafcea}
H \geq n_k \frac{\sigma' r_{\max} + \lambda n \mu}{\lambda n \mu} \log \frac1{\Theta} .
\end{equation}
\end{theorem}
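For a rough sense of scale (an illustrative instance with hypothetical values, not taken from our experiments): with $n_k = 10^5$, $\sigma' = K = 16$, $r_{\max} = 1$, $\lambda n \mu = 16$ and $\Theta = 0.1$, the bound \eqref{eq:asjfwjfdwafcea} requires $H \geq 10^5 \cdot \tfrac{16 + 16}{16} \cdot \log 10 \approx 4.6 \cdot 10^5$ local coordinate updates per outer round.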
\begin{theorem}
\label{thm:LocalSDCA_smooth1}
Assume the functions $\ell_i$ are $L$-Lipschitz for $i\in[n]$.
Then Assumption~\ref{asm:THeta} on the local approximation quality $\Theta$ is satisfied
for \textsc{LocalSDCA}\xspace as given in Algorithm \ref{alg:localSDCA}, if we
choose the number of inner iterations $H$ as
\begin{equation}
\label{eq:H_convexLoss}
H \geq n_k
\bigg(
\frac{1-\Theta}{\Theta}
+
\frac{\sigma' r_{\max}}
{2 \Theta \lambda n^2}
\frac{\| \vsubset{\Delta {\boldsymbol \alpha}^*}{k}\|^2}
{
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}^*}{k};.)
- \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( {\bf 0};.)
}
\bigg) .
\end{equation}
\end{theorem}
\begin{remark}
Between the different regimes allowed in \textsc{CoCoA}\xspacep (ranging between averaging and adding the updates) the computational cost for obtaining the required local approximation quality varies with the choice of $\sigma'$.
From the above worst-case upper bound, we note that the cost can increase with $\sigma'$, as aggregation becomes more aggressive.
However, as we will see in the practical experiments in Section \ref{sec:experiments} below, the additional cost is negligible compared to the gain in speed from the different aggregation, when measured on real datasets.
\end{remark}
\section{Discussion and Related Work}
\label{sec:relatedWork}
\paragraph{SGD-based Algorithms.}
For the empirical loss minimization problems of interest here, stochastic subgradient descent (SGD) based methods are well-established.
Several distributed variants of SGD have been proposed, many of which build on the idea of a parameter server \cite{Niu:2011wo,Liu:2014wj,Duchi:2013te}.
The downside of this approach, even when carefully implemented, is that the amount of required communication is equal to the amount of data read locally (e.g., mini-batch SGD with a batch size of 1 per worker). These variants are in practice not competitive with the more communication-efficient methods considered here, which allow more local updates per round.
\paragraph{One-Shot Communication Schemes.}
At the other extreme, there are distributed methods using only a single round of communication, such as \cite{Zhang:2013wq, Zinkevich:2010tj,Mann:2009tr,McWilliams:2014tl}.
These require additional assumptions on the partitioning of the data and, furthermore, cannot guarantee convergence to the optimal solution for all regularizers, as shown in, e.g., \cite{DANE}. \cite{Balcan:2012tc} shows additional relevant lower bounds on the minimum number of communication rounds necessary for a given approximation quality for similar machine learning problems.
\paragraph{Mini-Batch Methods.} Mini-batch methods are more flexible and lie within these two communication vs. computation extremes. However,
mini-batch versions of both SGD and coordinate descent (CD) \cite{richtarik2013distributed,MinibatchASDCA,Yang:2013vl, ALPHA, QUARTZ} suffer from their convergence rate degrading towards the rate of batch gradient descent as the size of the mini-batch is increased.
This follows because mini-batch updates are made based on the outdated previous parameter vector $ {\bf w}$, in contrast to methods that allow immediate local updates like \textsc{CoCoA}\xspace.
Furthermore, the aggregation parameter for mini-batch methods is harder to tune, as it can lie anywhere in a range on the order of the mini-batch size.
In the \textsc{CoCoA}\xspace setting, the parameter lies in the smaller range given by $K$.
Our \textsc{CoCoA}\xspacep extension avoids the need to tune this parameter entirely, since updates are simply added.
\newcommand{\smalltrimfig}[1]{\subfigure{\includegraphics[trim = 30 180 30 180, clip, width=.246\linewidth]{#1}}}
\begin{figure*}
\caption{Duality gap vs.\ the number of communicated vectors, as well as duality gap vs.\ elapsed time in seconds for two datasets: Covertype (left, $K$=4) and RCV1 (right, $K$=8). Both are shown on a log-log scale, and for three different values of regularization ($\lambda$=1e-4; 1e-5; 1e-6). Each plot contains a comparison of \textsc{CoCoA}\xspace (averaging) and \textsc{CoCoA}\xspacep (adding), each for several values of $H$.}
\label{fig:add_avg}
\end{figure*}
\paragraph{Methods Allowing Local Optimization.}
Developing methods that allow for local optimization requires carefully devising data-local subproblems to be solved after each communication round. \cite{DANE,DISCO} have proposed distributed Newton-type algorithms in this spirit. However, the subproblems must be solved to high accuracy for convergence to hold, which is often prohibitive as the size of the data on one machine is still relatively large.
In contrast, the \textsc{CoCoA}\xspace framework \cite{jaggi2014communication} allows using any local solver of weak local approximation quality in each round.
By making use of the primal-dual structure in the line of work of \cite{Yu:2012fp,Pechyony:2011wi,Yang:2013vl,Lee:2015vr}, the \textsc{CoCoA}\xspace and \textsc{CoCoA}\xspacep frameworks also allow more control over the aggregation of updates between machines.
The practical variant DisDCA-p proposed in \cite{Yang:2013vl} allows additive updates but is restricted to SDCA updates, and was proposed without convergence guarantees.
DisDCA-p can be recovered as a special case of the \textsc{CoCoA}\xspacep framework when using SDCA as a local solver, if $n_k = n/K$ and $\sigma':=K$, see Appendix~\ref{app:disDCA}.
The theory presented here also therefore covers that method.
\paragraph{ADMM.}
An alternative approach to distributed optimization is to use the alternating direction method of multipliers (ADMM), as used for distributed SVM training in, e.g., \cite{Forero:2010vv}. This uses a penalty parameter balancing between the equality constraint $ {\bf w}$ and the optimization objective \cite{boyd2011distributed}. However, the known convergence rates for ADMM are weaker than the more problem-tailored methods mentioned previously, and the choice of the penalty parameter is often unclear.
\paragraph{Batch Proximal Methods.}
In spirit, for the special case of adding ($\gamma=1$), \textsc{CoCoA}\xspacep resembles a batch proximal method, using the separable approximation \eqref{eq:subproblem} instead of the original dual \eqref{eq:dual}. Known batch proximal methods require high-accuracy subproblem solutions, and do not allow arbitrary solvers of weak accuracy $\Theta$ such as we do here.
\section{Numerical Experiments}
\label{sec:experiments}
We present experiments on several large real-world distributed datasets.
We show that $\textsc{CoCoA}\xspacep$ converges faster
in terms of total rounds as well as elapsed time as compared to \textsc{CoCoA}\xspace in all cases,
despite varying the dataset, the regularization value, the batch size, and the cluster size
(Section \ref{sec:addavg}). In Section \ref{sec:scaling} we demonstrate that this
performance translates to orders of magnitude improvement in convergence when
scaling up the number of machines $K$, as compared to \textsc{CoCoA}\xspace as well as to several
other state-of-the-art methods. Finally, in Section~\ref{sec:sigma} we investigate the
impact of the local subproblem parameter $\sigma'$ in the \textsc{CoCoA}\xspacep framework.
\begin{table}[h]
\caption{Datasets for Numerical Experiments.
}
\label{tab:datasets}
\begin{center}
\begin{tabular}{l| r | r | r }
{\small\textbf{Dataset}} & $n$ &
$d$ & {\small\textbf{Sparsity}} \\
\hline
covertype & 522,911 &
54 & 22.22\% \\
epsilon & 400,000 &
2,000 & 100\% \\
RCV1 & 677,399 &
47,236 & 0.16\%
\end{tabular}
\end{center}
\end{table}
\subsection{Implementation Details}
We implement all algorithms in Apache
\textsf{\small Spark} \cite{Zaharia:2012ve} and run them on m3.large Amazon EC2 instances, applying each method to the binary hinge-loss support vector machine.
The analysis for this non-smooth loss was not covered in
\cite{jaggi2014communication} but is covered by our results here, and thus the setup is both
theoretically and practically justified.
The used datasets are summarized in Table \ref{tab:datasets}.
For illustration and ease of comparison, we here use SDCA \cite{ShalevShwartz:2013wl} as the local solver for both \textsc{CoCoA}\xspace and \textsc{CoCoA}\xspacep.
Note that in this special case, and if additionally $\sigma':=K$ and the partitioning is balanced with $n_k = n/K$, one can show that the \textsc{CoCoA}\xspacep framework reduces to the practical variant of DisDCA \cite{Yang:2013vl} (which had no convergence guarantees so far).
We include more details on the connection in Appendix~\ref{app:disDCA}.
\subsection{Comparison of \textsc{CoCoA}\xspacep and \textsc{CoCoA}\xspace}
\label{sec:addavg}
We compare the \textsc{CoCoA}\xspacep and \textsc{CoCoA}\xspace frameworks directly using two datasets
(Covertype and RCV1) across various values of $\lambda$, the regularizer, in Figure
\ref{fig:add_avg}. For each value of $\lambda$ we consider both methods with
different values of $H$, the number of local iterations performed before
communicating to the master. For all runs of \textsc{CoCoA}\xspacep we use the safe upper bound of
$\gamma K$ for $\sigma'$. In terms of both the total number of communications
made and the elapsed time, \textsc{CoCoA}\xspacep (shown in blue) converges to the optimal solution
faster than \textsc{CoCoA}\xspace (red). The discrepancy is larger for greater values of $\lambda$,
where the strongly convex regularizer has more of an impact and the problem
difficulty is reduced. We also see a greater performance gap for smaller values of $H$,
where there is frequent communication between the machines and the master, and differences between the algorithms therefore play a larger role.
\subsection{Scaling the Number of Machines $K$}
\label{sec:scaling}
In Figure \ref{fig:scaling_k} we demonstrate the ability of \textsc{CoCoA}\xspacep to scale with an
increasing number of machines $K$. The experiments confirm the ability of strong
scaling of the new method, as predicted by our theory in Section~\ref{sec:convergence},
in contrast to the competing methods.
Unlike \textsc{CoCoA}\xspace, which becomes linearly slower when increasing the number of
machines, the performance of \textsc{CoCoA}\xspacep improves with additional
machines, only starting to degrade slightly once~$K$=16 for the RCV1 dataset.
\newcommand{\halftrimfig}[1]{\subfigure{\includegraphics[trim = 40 190 40 180, clip, width=.49\linewidth]{#1}}}
\newcommand{\trimfig}[1]{\subfigure{\includegraphics[trim = 25 240 30 240, clip, width=.6\linewidth]{#1}}}
\begin{figure}
\caption{The effect of increasing $K$ on the time (s) to reach an $\epsilon_\mathcal{D}$-accurate solution.}
\label{fig:scaling_k}
\end{figure}
\subsection{Impact of the Subproblem Parameter $\sigma'$}
\label{sec:sigma}
Finally, in Figure \ref{fig:sigma}, we consider the effect of the choice of the subproblem parameter $\sigma'$ on convergence. We plot both the number of communications and clock time on a log-log scale for the RCV1 dataset with $K$=8 and $H$=$1e4$. For $\gamma=1$ (the most aggressive variant of \textsc{CoCoA}\xspacep, in which updates are added) we consider several different values of~$\sigma'$, ranging from $1$ to $8$. The value $\sigma'$=8 represents the safe upper bound of $\gamma K$. Convergence is fastest around $\sigma'$=4, and the method diverges for $\sigma' \le 2$.
Notably, we see that the easy to calculate upper bound of $\sigma':=\gamma K$
(as given by Lemma \ref{lem:sigmaPrimeNotBad})
has only slightly worse performance than the best possible subproblem parameter in our setting.
\begin{figure}
\caption{The effect of $\sigma'$ on the convergence of \textsc{CoCoA}\xspacep, for the RCV1 dataset with $K$=8 and $H$=$1e4$.}
\label{fig:sigma}
\end{figure}
\section{Conclusion}
In conclusion, we present a novel framework \textsc{CoCoA}\xspacep that allows for fast and
communication-efficient \textit{additive aggregation} in distributed
algorithms for primal-dual optimization.
We analyze the theoretical performance of this method, giving strong
primal-dual convergence rates with outer iterations scaling independently of
the number of machines.
We extend our theory to allow for non-smooth losses. Our
experimental results show significant speedups over previous methods, including the
original \textsc{CoCoA}\xspace framework as well as other state-of-the-art methods.
\paragraph{Acknowledgments.}
We thank Ching-pei Lee and an anonymous reviewer for several helpful insights and comments.
\appendix
\onecolumn
\part*{Appendix}
\section{Technical Lemmas}
\begin{lemma}
[Lemma 21 in \cite{ShalevShwartz:2013wl}]
\label{lemma:ajvoiewffa}
Let $\ell_i : \mathbb{R} \to \mathbb{R}$ be
$L$-Lipschitz continuous. Then for any real value $a$ with $|a|> L$ we have that
$\ell_i^*(a) = \infty$.
\end{lemma}
\begin{lemma}
\label{lemma:asfewfawfcda}
Assuming the loss functions $\ell_i$ satisfy $\ell_i(0) \leq 1$ for all $i\in[n]$ (as assumed in \eqref{eq:afswfevfwaefa} above), then
for the zero vector $\vc{ {\boldsymbol \alpha}}{0}
:= {\bf 0}\in \mathbb{R}^n$, we have
\begin{equation}
\label{eq:afjfjaoefvcwa}
\mathcal{D}( {\boldsymbol \alpha}^*)
- \mathcal{D}(\vc{ {\boldsymbol \alpha}}{0})
=
\mathcal{D}( {\boldsymbol \alpha}^*)
-\mathcal{D}({\bf 0})
\leq 1.
\end{equation}
\end{lemma}
\begin{proof}
For $ {\boldsymbol \alpha} := {\bf 0}\in \mathbb{R}^n$, we have
$ {\bf w}( {\boldsymbol \alpha}) =
\frac1{\lambda n}
A {\boldsymbol \alpha}
= {\bf 0} \in \mathbb{R}^d$.
Moreover, since the losses are non-negative, $\ell_i^*(0) = -\inf_a \ell_i(a) \leq 0$, and hence, by the definition of the dual objective $\mathcal{D}$ given in~\eqref{eq:dual}, $\mathcal{D}({\bf 0}) = -\frac1n\sum_{i=1}^n \ell_i^*(0) \geq 0$.
Therefore, by weak duality,
\begin{align*}
0 &\leq \mathcal{D}( {\boldsymbol \alpha}^*)
-\mathcal{D}({\bf 0})
\leq \mathcal{P}( {\bf w}({\bf 0})) - \mathcal{D}({\bf 0})
= \mathcal{P}({\bf 0}) - \mathcal{D}({\bf 0})
\leq \mathcal{P}({\bf 0})
\overset{\eqref{eq:afswfevfwaefa}}{\leq} 1. \qedhere
\end{align*}
\end{proof}
\section{Proofs}
\subsection{Proof of Lemma \ref{lem:RelationOfDTOSubproblems}}
Indeed, we have
\begin{align}
\label{eq:afijwfcewa}
\mathcal{D}\Big( {\boldsymbol \alpha}
+\gamma
\sum_{k=1}^K
\vsubset{\Delta {\boldsymbol \alpha}}{k}
\Big)
&=
\underbrace{-\frac1n\sum_{i=1}^n
\ell_i^*\Big(-\alpha_i
-\gamma \Big(\sum_{k=1}^K
\vsubset{\Delta {\boldsymbol \alpha}}{k}\Big)_i\Big)}_{A} -\frac\lambda2
\underbrace{\Big\| \frac1{\lambda n}
A \Big( {\boldsymbol \alpha} + \gamma
\sum_{k=1}^K \vsubset{\Delta {\boldsymbol \alpha}}{k}\Big) \Big\|^2}_B.
\end{align}
Now, let us bound the terms $A$ and $B$ separately.
We have
\begin{align*}
A
&=
-\frac1n\sum_{k=1}^K
\left(
\sum_{i\in \mathcal{P}_k}
\ell_i^*(-\alpha_i-\gamma (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i)
\right)
=
-\frac1n\sum_{k=1}^K
\left(
\sum_{i\in \mathcal{P}_k}
\ell_i^*(-(1-\gamma) \alpha_i-\gamma ( {\boldsymbol \alpha} + \vsubset{\Delta {\boldsymbol \alpha}}{k})_i)
\right)
\\
&\geq
-\frac1n\sum_{k=1}^K
\left(
\sum_{i\in \mathcal{P}_k}
(1-\gamma) \ell_i^*(-\alpha_i)
+\gamma \ell_i^*(-( {\boldsymbol \alpha} + \vsubset{\Delta {\boldsymbol \alpha}}{k})_i)
\right),
\end{align*}
where the last inequality follows from Jensen's inequality (convexity of $\ell_i^*$), using $\gamma \in [0,1]$.
Next we bound $B$, using the safe separability measure $\sigma'$ as defined in \eqref{eq:sigmaPrimeSafeDefinition}.
\begin{align*}
B
&=
\Big\|\frac1{\lambda n} A \big( {\boldsymbol \alpha} + \gamma \sum_{k=1}^K \vsubset{\Delta {\boldsymbol \alpha}}{k}\big) \Big\|^2
=
\Big\| {\bf w}( {\boldsymbol \alpha}) + \gamma\frac1{\lambda n} \sum_{k=1}^K A\vsubset{\Delta {\boldsymbol \alpha}}{k} \Big\|^2
\\
& =
\| {\bf w}( {\boldsymbol \alpha}) \|^2
+
\sum_{k=1}^K 2\gamma\frac1{\lambda n} {\bf w}( {\boldsymbol \alpha})^T A\vsubset{\Delta {\boldsymbol \alpha}}{k}
+
\gamma^2 \Big(\frac1{\lambda n}\Big)^2 \Big\| \sum_{k=1}^K A\vsubset{\Delta {\boldsymbol \alpha}}{k} \Big\|^2
\\
& \overset{\eqref{eq:sigmaPrimeSafeDefinition}}{\leq}
\| {\bf w}( {\boldsymbol \alpha}) \|^2
+
\sum_{k=1}^K 2\gamma\frac1{\lambda n} {\bf w}( {\boldsymbol \alpha})^T A\vsubset{\Delta {\boldsymbol \alpha}}{k}
+
\gamma \Big(\frac1{\lambda n}\Big)^2 \sigma' \sum_{k=1}^K \|A \vsubset{\Delta {\boldsymbol \alpha}}{k}\|^2.
\end{align*}
Plugging these bounds on $A$ and $B$ into
\eqref{eq:afijwfcewa}
gives
\begin{align*}
\mathcal{D}\Big( {\boldsymbol \alpha} +\gamma \sum_{k=1}^K \vsubset{\Delta {\boldsymbol \alpha}}{k}\Big)
\ge&
-\frac1n\sum_{k=1}^K
\left(
\sum_{i\in \mathcal{P}_k}
(1-\gamma) \ell_i^*(-\alpha_i)
+\gamma \ell_i^*(-( {\boldsymbol \alpha} + \vsubset{\Delta {\boldsymbol \alpha}}{k})_i)
\right)
\\
&
-\gamma \frac\lambda2 \| {\bf w}( {\boldsymbol \alpha}) \|^2
-(1-\gamma) \frac\lambda2 \| {\bf w}( {\boldsymbol \alpha}) \|^2
-\frac\lambda2 \sum_{k=1}^K 2\gamma\frac1{\lambda n} {\bf w}( {\boldsymbol \alpha})^T A\vsubset{\Delta {\boldsymbol \alpha}}{k}
-\frac\lambda2 \gamma \Big(\frac1{\lambda n}\Big)^2 \sigma' \sum_{k=1}^K \|A \vsubset{\Delta {\boldsymbol \alpha}}{k}\|^2
\\%
=&
\underbrace{
-\frac1n\sum_{k=1}^K
\left(
\sum_{i\in \mathcal{P}_k}
(1-\gamma) \ell_i^*(-\alpha_i)
\right)
-(1-\gamma) \frac\lambda2 \| {\bf w}( {\boldsymbol \alpha}) \|^2
}_{(1-\gamma) \mathcal{D}( {\boldsymbol \alpha})}
\\
&
+
\gamma
\sum_{k=1}^K
\left(
-\frac1n \sum_{i\in \mathcal{P}_k} \ell_i^*(-( {\boldsymbol \alpha} + \vsubset{\Delta {\boldsymbol \alpha}}{k})_i)
- \frac1{K} \frac\lambda2 \| {\bf w}( {\boldsymbol \alpha}) \|^2
- \frac1{n} {\bf w}( {\boldsymbol \alpha})^T A\vsubset{\Delta {\boldsymbol \alpha}}{k}
-\frac\lambda2 \sigma' \Big\|\frac1{\lambda n}A \vsubset{\Delta {\boldsymbol \alpha}}{k} \Big\|^2
\right)
\\
\overset{\eqref{eq:subproblem}}{=}& (1-\gamma) \mathcal{D}( {\boldsymbol \alpha})
+\gamma \sum_{k=1}^K \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}).
\end{align*}
\subsection{Proof of Lemma \ref{lem:sigmaPrimeNotBad}}
See \cite{richtarik2013distributed}.
\subsection{Proof of Lemma \ref{lem:basicLemma}}
For ease of notation, we write
$ {\boldsymbol \alpha}$ instead of $\vc{ {\boldsymbol \alpha}}{t}$,
$ {\bf w}$ instead of $ {\bf w}(\vc{ {\boldsymbol \alpha}}{t})$,
and
$ {\bf u}$ instead of $\vc{ {\bf u}}{t}$.
Now let us estimate the expected change of the dual objective.
Using the definition of the dual update $\vc{ {\boldsymbol \alpha}}{t+1} := \vc{ {\boldsymbol \alpha}}{t} + \gamma \, \sum_k \vsubset{\Delta {\boldsymbol \alpha}}{k}$ performed in Algorithm~\ref{alg:cocoa}, we have
\begin{align*}
\mathbb{E}\big[\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
- \mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})\big]
& =
\mathbb{E}\Big[\mathcal{D}( {\boldsymbol \alpha})
- \mathcal{D}( {\boldsymbol \alpha} +
\gamma \sum_{k=1}^K
\vsubset{\Delta {\boldsymbol \alpha}}{k})\Big]
\\
& \text{(by Lemma \ref{lem:RelationOfDTOSubproblems} on the local function $\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( {\boldsymbol \alpha}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})$ approximating the global objective $\mathcal{D}( {\boldsymbol \alpha})$)}\\
&\leq
\mathbb{E}\Big[\mathcal{D}( {\boldsymbol \alpha})
-(1-\gamma)\mathcal{D}( {\boldsymbol \alpha})
-\gamma
\sum_{k=1}^K
\mathcal{G}^{\sigma'}_k\hspace{-0.08em} (\vsubset{
\vc{\Delta {\boldsymbol \alpha}}{t}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\Big]\\
&=
\gamma
\mathbb{E}\Big[
\mathcal{D}( {\boldsymbol \alpha})
-
\sum_{k=1}^K
\mathcal{G}^{\sigma'}_k\hspace{-0.08em} (\vsubset{
\vc{\Delta {\boldsymbol \alpha}}{t}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\Big]
\\
&
=
\gamma
\mathbb{E}\Big[
\mathcal{D}( {\boldsymbol \alpha})
-
\sum_{k=1}^K
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
+
\sum_{k=1}^K
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
-
\sum_{k=1}^K
\mathcal{G}^{\sigma'}_k\hspace{-0.08em} (\vsubset{
\vc{\Delta {\boldsymbol \alpha}}{t}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\Big]
\\
&\text{(by the notion of quality \eqref{eq:localSolutionQuality} of the local solver, as in Assumption \ref{asm:THeta})}\\
&\leq
\gamma
\bigg(
\mathcal{D}( {\boldsymbol \alpha})
-
\sum_{k=1}^K
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
+
\Theta
\Big(
\sum_{k=1}^K
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
-
\underbrace{ \sum_{k=1}^K
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}({\bf 0}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
}_{\mathcal{D}( {\boldsymbol \alpha})}
\Big)
\bigg)
\\
&=
\gamma
(1-\Theta)
\Big(
\underbrace{
\mathcal{D}( {\boldsymbol \alpha})
-
\sum_{k=1}^K
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
}_{C}
\Big).
\tagthis
\label{eq:Afasfwafewaef}
\end{align*}
Now let us upper bound the term $C$
(where we denote
$\Delta {\boldsymbol \alpha}^* := \sum_{k=1}^K \vsubset{\Delta {\boldsymbol \alpha}^*}{k}$):
\begin{align*}
C&
\overset{\eqref{eq:dual}, \eqref{eq:subproblem}}{=}
\frac1n \sum_{i =1}^n
\left(
\ell_i^*(-\alpha_i - \Delta {\boldsymbol \alpha}^*_i)
-\ell_i^*(- \alpha_i)
\right)
+\frac1n {\bf w}( {\boldsymbol \alpha})^T A \Delta {\boldsymbol \alpha}^*
+ \sum_{k=1}^K \frac\lambda2 \sigma' \Big\|\frac1{\lambda n} A \vsubset{\Delta {\boldsymbol \alpha}^*}{k}\Big\|^2
\\
&\leq
\frac1n \sum_{i =1}^n
\left(
\ell_i^*(-\alpha_i - s (u_i - \alpha_i))
-\ell_i^*(- \alpha_i)
\right)
+\frac1n {\bf w}( {\boldsymbol \alpha})^T A s ( {\bf u} - {\boldsymbol \alpha} )
+ \sum_{k=1}^K \frac\lambda2 \sigma' \Big\|\frac1{\lambda n} A \vsubset{s ( {\bf u} - {\boldsymbol \alpha} )}{k}\Big\|^2
\\
&\overset{\mbox{Strong conv.}}{\leq}
\frac1n \sum_{i =1}^n
\left(
s \ell_i^*(-u_i )
+ (1-s) \ell_i^*(-\alpha_i )
- \frac{\mu}{2} (1-s)s (u_i -\alpha_i)^2
-\ell_i^*(- \alpha_i)
\right)
+\frac1n {\bf w}( {\boldsymbol \alpha})^T A s ( {\bf u} - {\boldsymbol \alpha} )
\\& \quad\quad\quad\quad\quad + \sum_{k=1}^K \frac\lambda2 \sigma' \Big\|\frac1{\lambda n} A \vsubset{s ( {\bf u} - {\boldsymbol \alpha} )}{k}\Big\|^2
\\
&=
\frac1n \sum_{i =1}^n
\left(
s \ell_i^*(-u_i )
-s \ell_i^*(-\alpha_i )
- \frac{\mu}{2} (1-s)s (u_i -\alpha_i)^2
\right)
+\frac1n {\bf w}( {\boldsymbol \alpha})^T A s ( {\bf u} - {\boldsymbol \alpha} )
+ \sum_{k=1}^K \frac\lambda2 \sigma' \Big\|\frac1{\lambda n} A \vsubset{s ( {\bf u} - {\boldsymbol \alpha} )}{k}\Big\|^2.
\end{align*}
The convex conjugate maximal property implies that
\begin{equation}
\label{eq:adjwofcewa}
\ell_i^*(-u_i)
= -u_i {\bf w}( {\boldsymbol \alpha})^T {\bf x}_i
-\ell_i( {\bf w}( {\boldsymbol \alpha})^T {\bf x}_i).
\end{equation}
Moreover, from the definition of the primal and dual optimization problems \eqref{eq:primal},
\eqref{eq:dual}, we can write the duality gap as
\begin{align}
\label{eq:asdfjiwjfeojawfa}
G( {\boldsymbol \alpha}) := \mathcal{P}( {\bf w}( {\boldsymbol \alpha}))-\mathcal{D}( {\boldsymbol \alpha})
&\overset{\eqref{eq:primal}, \eqref{eq:dual}}{=}
\frac1{n} \sum_{i=1}^n
\left(
\ell_i( {\bf x}_i^T {\bf w}( {\boldsymbol \alpha}))
+ \ell_i^*(- \alpha_i)
+ {\bf w}( {\boldsymbol \alpha})^T {\bf x}_i \alpha_i
\right).
\end{align}
Hence,
\begin{align*}
C
&\overset{\eqref{eq:adjwofcewa}}{\leq}
\frac1n \sum_{i =1}^n
\left(
-s u_i {\bf w}( {\boldsymbol \alpha})^T {\bf x}_i
-s\ell_i( {\bf w}( {\boldsymbol \alpha})^T {\bf x}_i)
-s \ell_i^*(-\alpha_i )
\underbrace{-s {\bf w}( {\boldsymbol \alpha})^T {\bf x}_i \alpha_i
+s {\bf w}( {\boldsymbol \alpha})^T {\bf x}_i \alpha_i }_{0}
- \frac{\mu}{2} (1-s)s (u_i -\alpha_i)^2
\right)
\\&\qquad +\frac1n {\bf w}( {\boldsymbol \alpha})^T A s ( {\bf u} - {\boldsymbol \alpha} )
+ \sum_{k=1}^K \frac\lambda2 \sigma' \Big\|\frac1{\lambda n} A \vsubset{s ( {\bf u} - {\boldsymbol \alpha} )}{k}\Big\|^2
\\
&=
\frac1n \sum_{i =1}^n
\left(
-s\ell_i( {\bf w}( {\boldsymbol \alpha})^T {\bf x}_i)
-s\ell_i^*(-\alpha_i )
-s {\bf w}( {\boldsymbol \alpha})^T {\bf x}_i \alpha_i
\right)
+
\frac1n \sum_{i =1}^n
\left( s {\bf w}( {\boldsymbol \alpha})^T {\bf x}_i ( \alpha_i-u_i )
- \frac{\mu}{2} (1-s)s (u_i -\alpha_i)^2
\right)
\\&\qquad +\frac1n {\bf w}( {\boldsymbol \alpha})^T A s ( {\bf u} - {\boldsymbol \alpha} )
+ \sum_{k=1}^K \frac\lambda2 \sigma' \Big\|\frac1{\lambda n} A \vsubset{s ( {\bf u} - {\boldsymbol \alpha} )}{k}\Big\|^2
\\
&\overset{\eqref{eq:asdfjiwjfeojawfa}}{=}
-s G( {\boldsymbol \alpha})
- \frac{\mu}{2} (1-s)s \frac1n \| {\bf u}- {\boldsymbol \alpha}\|^2
+ \frac{\sigma'}{2\lambda } \Big(\frac s{ n}\Big)^2 \sum_{k=1}^K \| A \vsubset{ ( {\bf u} - {\boldsymbol \alpha} )}{k}\|^2.
\tagthis
\label{eq:asdfafdas}
\end{align*}
Now, the claimed improvement bound
\eqref{eq:lemma:dualDecrease_VS_dualityGap}
follows
by plugging
\eqref{eq:asdfafdas}
into \eqref{eq:Afasfwafewaef}.
\subsection{Proof of Lemma
\ref{lemma:BoundOnR}}
For general convex functions, the strong convexity parameter is
$\mu=0$, and hence the definition of $\vc{R}{t}$ becomes
\begin{align*}
\vc{R}{t}
\overset{\eqref{eq:defOfR}}{=}
\sum _{k=1}^K
\| A \vsubset{ (\vc{ {\bf u}} {t} - \vc{ {\boldsymbol \alpha}}{t} )}{k}\|^2
\overset{\eqref{eq:definitionOfSigmaK}}{\leq}
\sum _{k=1}^K
\sigma_k
\| \vsubset{ (\vc{ {\bf u}} {t} - \vc{ {\boldsymbol \alpha}}{t} )}{k}\|^2
\overset{\mbox{Lemma \ref{lemma:ajvoiewffa}}}{\leq}
\sum _{k=1}^K
\sigma_k |\mathcal{P}_k| 4L^2.
\end{align*}
\subsection{Proof of Theorem \ref{thm:convergenceNonsmooth}}
First, let us estimate the expected change of the dual suboptimality. Using the main Lemma \ref{lem:basicLemma}, we have
\begin{align*}
\mathbb{E}[\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})]
&=
\mathbb{E}[\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})+\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})]
\\
&\overset{\eqref{eq:lemma:dualDecrease_VS_dualityGap}}{=}
\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
-\gamma (1-\Theta) s G(\vc{ {\boldsymbol \alpha}}{t})
+ \gamma (1-\Theta) \tfrac{\sigma'}{2\lambda } \big(\tfrac s{ n}\big)^2 \vc{R}{t}
\\
&\overset{\eqref{eq:gap}}{=}
\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
-\gamma (1-\Theta) s (\mathcal{P}( {\bf w}(\vc{ {\boldsymbol \alpha}}{t}))-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t}))
+ \gamma (1-\Theta) \tfrac{\sigma'}{2\lambda } \big(\tfrac s{ n}\big)^2 \vc{R}{t}
\\
&\leq
\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
-\gamma (1-\Theta) s (\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t}) )
+ \gamma (1-\Theta) \tfrac{\sigma'}{2\lambda } \big(\tfrac s{ n}\big)^2 \vc{R}{t} \\
&\overset{\eqref{eq:asfjoewjofa}}{\leq}
\left( 1-\gamma (1-\Theta) s \right)
(\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t}))
+ \gamma (1-\Theta) \tfrac{\sigma'}{2\lambda } \big(\tfrac s{ n}\big)^2 4L^2 \sigma.
\tagthis
\label{eq:asoifejwofa}
\end{align*}
Using
\eqref{eq:asoifejwofa}
recursively, we have
\begin{align*}
\mathbb{E}[\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})]
&\leq
\left( 1-\gamma (1-\Theta) s \right)^t
(\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{0}))
+ \gamma (1-\Theta) \tfrac{\sigma'}{2\lambda } \big(\tfrac s{ n}\big)^2 4L^2 \sigma
\sum_{j=0}^{t-1} \left( 1-\gamma (1-\Theta) s \right)^j
\\
&=
\left( 1-\gamma (1-\Theta) s \right)^t
(\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{0}))
+ \gamma (1-\Theta) \tfrac{\sigma'}{2\lambda } \big(\tfrac s{ n}\big)^2 4L^2 \sigma
\frac{1-\left( 1-\gamma (1-\Theta) s \right)^t}{ \gamma (1-\Theta) s }
\\
&\leq
\left( 1-\gamma (1-\Theta) s \right)^t
(\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{0}))
+ s \frac{4L^2 \sigma \sigma'}{2\lambda n^2}.
\tagthis
\label{eq:asfwefcaw}
\end{align*}
Choosing
$s=1$ and $t= t_0:= \max\{0,\lceil
\frac1{\gamma (1-\Theta)}
\log\big(
2\lambda n^2 (\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{0}))
/ (4 L^2 \sigma \sigma')
\big)
\rceil\}$
leads to
\begin{align}\label{eq:induction_step1}
\mathbb{E}[\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})]
&\leq
\left( 1-\gamma (1-\Theta) \right)^{t_0}
(\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{0}))
+ \frac{4L^2 \sigma \sigma'}{2\lambda n^2}
\leq
\frac{4L^2 \sigma \sigma'}{2\lambda n^2}
+ \frac{4L^2 \sigma \sigma'}{2\lambda n^2}
= \frac{4L^2 \sigma \sigma'}{\lambda n^2}.
\end{align}
Now, we are going to show that
\begin{align}
\label{eq:expectationOfDualFeasibility}
\forall t\geq t_0 : \quad \mathbb{E}[\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})]
&\leq
\frac{4L^2 \sigma \sigma'}{\lambda n^2( 1+ \frac12 \gamma (1-\Theta) (t-t_0))}.
\end{align}
Clearly, \eqref{eq:induction_step1} implies that \eqref{eq:expectationOfDualFeasibility} holds for $t=t_0$.
Now suppose that it holds for some $t\geq t_0$; we show that it then also holds for $t+1$.
Indeed, using
\begin{equation}
\label{eq:asdfjoawjdfas}
s=
\frac{1}
{1+ \frac12 \gamma (1-\Theta) (t-t_0)} \in [0,1]
\end{equation}
we obtain
\begin{align*}
\mathbb{E}[
\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})]
&\overset{\eqref{eq:asoifejwofa}}{\leq}
\left( 1-\gamma (1-\Theta) s \right)
(\mathcal{D}( {\boldsymbol \alpha}^* )-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t}))
+ \gamma (1-\Theta) \tfrac{\sigma'}{2\lambda } \big(\tfrac s{ n}\big)^2 4L^2 \sigma
\\
&\overset{\eqref{eq:expectationOfDualFeasibility}}{\leq}
\left( 1-\gamma (1-\Theta) s \right)
\frac{4L^2 \sigma \sigma'}{\lambda n^2( 1+ \frac12 \gamma (1-\Theta) (t-t_0))}
+ \gamma (1-\Theta) \tfrac{\sigma'}{2\lambda } \big(\tfrac s{ n}\big)^2 4L^2 \sigma
\\
&\overset{\eqref{eq:asdfjoawjdfas}}{=}
\frac{4L^2 \sigma \sigma'}{\lambda n^2}
\left(
\frac{
1+ \frac12 \gamma (1-\Theta) (t-t_0)
-\gamma (1-\Theta)
+ \gamma (1-\Theta) \tfrac{1}{2}
}
{(1+ \frac12 \gamma (1-\Theta) (t-t_0))^2}
\right)
\\
&=
\frac{4L^2 \sigma \sigma'}{\lambda n^2}
\underbrace{\left(
\frac{
1+ \frac12 \gamma (1-\Theta) (t-t_0)
-\frac12 \gamma (1-\Theta)
}
{(1+ \frac12 \gamma (1-\Theta) (t-t_0))^2}
\right)}_{D}.
\end{align*}
Now we upper bound $D$ as follows:
\begin{align*}
D&=
\frac1
{1+ \frac12 \gamma (1-\Theta) (t+1-t_0)}
\underbrace{
\frac{
(1+ \frac12 \gamma (1-\Theta) (t+1-t_0))
(1+ \frac12 \gamma (1-\Theta) (t-1-t_0))
}
{(1+ \frac12 \gamma (1-\Theta) (t-t_0))^2}}_{\leq 1}
\\
&\leq
\frac1
{1+ \frac12 \gamma (1-\Theta) (t+1-t_0)},
\end{align*}
where in the last inequality we used the fact that the geometric mean
is less than or equal to the arithmetic mean.
If $\overline {\boldsymbol \alpha}$ is defined as in \eqref{eq:averageOfAlphaDefinition},
then we obtain
\begin{align*}
\mathbb{E}[G(\overline {\boldsymbol \alpha})] &=
\mathbb{E}\left[G\left(\sum_{t=T_0}^{T-1} \tfrac1{T-T_0} \vc{ {\boldsymbol \alpha}}{t}\right)\right]
\leq
\tfrac1{T-T_0} \mathbb{E}\left[\sum_{t=T_0}^{T-1} G\left( \vc{ {\boldsymbol \alpha}}{t}\right)\right]
\\
&\overset{\eqref{eq:lemma:dualDecrease_VS_dualityGap},\eqref{eq:asfjoewjofa}}{\leq}
\tfrac1{T-T_0} \mathbb{E}\left[\sum_{t=T_0}^{T-1}
\left(
\frac1{\gamma (1-\Theta) s}
(
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})
-
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
)
+
\tfrac{4L^2 \sigma \sigma' s}{2\lambda n^2 }
\right)
\right]
\\
&=
\frac1{\gamma (1-\Theta) s}
\frac1{T-T_0}
\mathbb{E}\left[
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{T})
-
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{T_0})
\right]
+\tfrac{4L^2 \sigma \sigma' s}{2\lambda n^2 }
\\
&\leq
\frac1{\gamma (1-\Theta) s}
\frac1{T-T_0}
\mathbb{E}\left[
\mathcal{D}( {\boldsymbol \alpha}^*)
-
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{T_0})
\right]
+\tfrac{4L^2 \sigma \sigma' s}{2\lambda n^2 }.
\tagthis \label{eq:askjfdsanlfas}
\end{align*}
Now, if $T\geq \lceil
\frac1{\gamma (1-\Theta)}\rceil+T_0$ with $T_0\geq t_0$,
we obtain
\begin{align*}
\mathbb{E}[G(\overline {\boldsymbol \alpha})]
&\overset{\eqref{eq:askjfdsanlfas},\eqref{eq:expectationOfDualFeasibility}}{\leq}
\frac1{\gamma (1-\Theta) s}
\frac1{T-T_0}
\left(
\frac{4L^2 \sigma \sigma'}{\lambda n^2( 1+ \frac12 \gamma (1-\Theta) (T_0-t_0))}
\right)
+\frac{4L^2 \sigma \sigma' s}{2\lambda n^2 }
\\
&=
\frac{ 4L^2 \sigma \sigma'}{\lambda n^2}
\left(
\frac1{\gamma (1-\Theta) s}
\frac1{T-T_0}
\frac{1}{ 1+ \frac12 \gamma (1-\Theta) (T_0-t_0)}
+\frac{ s}{2 }
\right).
\tagthis
\label{eq:fawefwafewa}
\end{align*}
Choosing
\begin{equation}
\label{eq:afskoijewofaw}
s=\frac{1}{(T-T_0) \gamma (1-\Theta)} \in [0,1]
\end{equation}
gives us
\begin{align*}
\mathbb{E}[G(\overline {\boldsymbol \alpha})]
&\overset{\eqref{eq:fawefwafewa},
\eqref{eq:afskoijewofaw}}{\leq}
\frac{ 4L^2 \sigma \sigma'}{\lambda n^2}
\left(
\frac{1}{ 1+ \frac12 \gamma (1-\Theta) (T_0-t_0)}
+\frac{1}{(T-T_0) \gamma (1-\Theta)} \frac{ 1}{2 }
\right). \tagthis
\label{eq:afsjweofjwafea}
\end{align*}
To make the right-hand side of
\eqref{eq:afsjweofjwafea}
smaller than
$\epsilon_G$,
it is sufficient to choose
$T_0$ and $T$ such that
\begin{eqnarray}
\label{eq:sfadwafeewafa}
\frac{4L^2 \sigma \sigma'}{\lambda n^2}
\left(
\frac{1}{ 1+ \frac12 \gamma (1-\Theta) (T_0-t_0)}
\right)
&\leq & \frac12 \epsilon_G,
\\
\label{eq:sfadwafeewafa2}
\frac{4L^2 \sigma \sigma'}{\lambda n^2}
\left(
\frac{1}{(T-T_0) \gamma (1-\Theta)} \frac{ 1}{2 }
\right)
&\leq & \frac12 \epsilon_G.
\end{eqnarray}
Hence, if
\begin{eqnarray*}
t_0+
\frac{2}{ \gamma (1-\Theta) }
\left(
\frac
{8L^2 \sigma \sigma'}
{\lambda n^2 \epsilon_G}
-1
\right)
&\leq &
T_0
,
\\
T_0
+
\frac
{4L^2 \sigma \sigma'}
{\lambda n^2 \epsilon_G
\gamma (1-\Theta)}
&\leq & T,
\end{eqnarray*}
then
\eqref{eq:sfadwafeewafa}
and
\eqref{eq:sfadwafeewafa2}
are satisfied.
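For concreteness, the following minimal sketch evaluates the two sufficient conditions above; the constants are hypothetical, chosen purely for illustration, and are not values from our experiments.
\begin{verbatim}
import math

# Hypothetical constants, for illustration only.
L, sigma, sigma_prime = 1.0, 1.0e5, 8.0   # Lipschitz constant, partition constants
lam, n = 1.0e-4, 100000                   # regularizer, number of examples
gamma, Theta, eps_G = 1.0, 0.9, 1.0e-3    # aggregation, local accuracy, target gap
t0 = 0                                    # warm-up iterations (assumed zero here)

c = 4 * L**2 * sigma * sigma_prime / (lam * n**2)
T0 = t0 + (2.0 / (gamma * (1 - Theta))) * (2 * c / eps_G - 1)  # first condition
T  = T0 + c / (eps_G * gamma * (1 - Theta))                    # second condition
print(math.ceil(T0), math.ceil(T))
\end{verbatim}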
\subsection{Proof of Theorem \ref{thm:convergenceSmoothCase}
}
If the function $\ell_i(.)$ is $(1/\mu)$-smooth then $\ell_i^*(.)$ is $\mu$-strongly convex with respect to the
$\|\cdot\|$ norm.
From \eqref{eq:defOfR}
we have
\begin{align*}
\vc{R}{t}&
\overset{\eqref{eq:defOfR}}{=}
-
\tfrac{ \lambda\mu n (1-s)}{\sigma' s }
\|\vc{ {\bf u}}{t}-\vc{ {\boldsymbol \alpha}}{t}\|^2
+
{\sum}_{k=1}^K
\| A \vsubset{ (\vc{ {\bf u}}{t} - \vc{ {\boldsymbol \alpha}}{t} )}{k}\|^2
\\%
&
\overset{\eqref{eq:definitionOfSigmaK}}{\leq}
-
\tfrac{ \lambda\mu n (1-s)}{\sigma' s }
\|\vc{ {\bf u}}{t}-\vc{ {\boldsymbol \alpha}}{t}\|^2
+
{\sum}_{k=1}^K
\sigma_k
\| \vsubset{ \vc{ {\bf u}}{t} - \vc{ {\boldsymbol \alpha}}{t} }{k}\|^2
\\
&\leq
-
\tfrac{ \lambda\mu n (1-s)}{\sigma' s }
\|\vc{ {\bf u}}{t}-\vc{ {\boldsymbol \alpha}}{t}\|^2
+
\sigma_{\max}
{\sum}_{k=1}^K
\| \vsubset{ \vc{ {\bf u}}{t} - \vc{ {\boldsymbol \alpha}}{t} }{k}\|^2
\\
&=
\left(
-
\tfrac{ \lambda\mu n (1-s)}{\sigma' s }
+\sigma_{\max}
\right)
\|\vc{ {\bf u}}{t}-\vc{ {\boldsymbol \alpha}}{t}\|^2.\tagthis
\label{eq:afjfocjwfcea}
\end{align*}
If we plug
\begin{equation}
s=
\frac{ \lambda\mu n }
{\lambda\mu n+ \sigma_{\max} \sigma'}\in [0,1]
\label{eq:fajoejfojew}
\end{equation}
into
\eqref{eq:afjfocjwfcea}
we obtain that
$\forall t: \vc{R}{t}\leq 0$.
Putting the same $s$
into
\eqref{eq:lemma:dualDecrease_VS_dualityGap}
will give us
\begin{align*}
&\mathbb{E}[
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})
-
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
]
\overset{\eqref{eq:lemma:dualDecrease_VS_dualityGap},\eqref{eq:fajoejfojew}}{\geq}
\gamma (1-\Theta)
\frac{ \lambda\mu n }
{\lambda\mu n+ \sigma_{\max} \sigma'} G(\vc{ {\boldsymbol \alpha}}{t})
\geq
\gamma (1-\Theta)
\frac{ \lambda\mu n }
{\lambda\mu n+ \sigma_{\max} \sigma'} \big(\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})\big).
\tagthis
\label{eq:fasfawfwaf}
\end{align*}
Using the fact that
$\mathbb{E}[\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})]
=\mathbb{E}[\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})-\mathcal{D}( {\boldsymbol \alpha}^*)]
+\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
$
we have
\begin{align*}
\mathbb{E}[\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})-\mathcal{D}( {\boldsymbol \alpha}^*)]
+\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
\overset{\eqref{eq:fasfawfwaf}}{\geq}
\gamma (1-\Theta)
\frac{ \lambda\mu n }
{\lambda\mu n+ \sigma_{\max} \sigma'} \big(\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})\big)
\end{align*}
which is equivalent to
\begin{align*}
\mathbb{E}[\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})]
\leq
\left(
1-\gamma (1-\Theta)
\frac{ \lambda\mu n }
{\lambda\mu n+ \sigma_{\max} \sigma'}\right)
\big(\mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})\big).
\tagthis \label{eq:affpja}
\end{align*}
Therefore, denoting $\vc{\epsilon_\mathcal{D}}{t} := \mathcal{D}( {\boldsymbol \alpha}^*)-\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})$,
we have
\begin{align*}
\mathbb{E}[\vc{\epsilon_\mathcal{D}}{t}]
\overset{\eqref{eq:affpja}}{\leq} \left(
1-\gamma (1-\Theta)
\frac{ \lambda\mu n }
{\lambda\mu n+ \sigma_{\max} \sigma'}
\right)^t \vc{\epsilon_\mathcal{D}}{0}
\overset{\eqref{eq:afjfjaoefvcwa}}{\leq}
\left(
1-\gamma (1-\Theta)
\frac{ \lambda\mu n }
{\lambda\mu n+ \sigma_{\max} \sigma'}
\right)^t
\leq \exp\left(-t \gamma (1-\Theta)
\frac{ \lambda\mu n }
{\lambda\mu n+ \sigma_{\max} \sigma'}
\right).
\end{align*}
The right-hand side will be smaller than some $\epsilon_\mathcal{D}$ if
$$
t
\geq
\frac{1}
{\gamma (1-\Theta)}
\frac
{\lambda\mu n+ \sigma_{\max} \sigma'}
{ \lambda\mu n }
\log \frac1{\epsilon_\mathcal{D}}.
$$
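As a quick numerical illustration of this iteration bound (a minimal sketch with hypothetical constants, not values from our experiments):
\begin{verbatim}
import math

lam, mu, n = 1.0e-4, 1.0, 100000    # regularizer, strong convexity of ell*, #examples
sigma_max, sigma_prime = 50.0, 8.0  # partition-dependent constants
gamma, Theta = 1.0, 0.9             # aggregation parameter, local solver accuracy
eps_D = 1.0e-6                      # target dual suboptimality

factor = (lam * mu * n + sigma_max * sigma_prime) / (lam * mu * n)
t_min = factor / (gamma * (1.0 - Theta)) * math.log(1.0 / eps_D)
print(math.ceil(t_min))             # number of outer rounds sufficient for eps_D
\end{verbatim}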
Moreover, to bound the duality gap, we have
\begin{align*}
\gamma (1-\Theta)
\frac{ \lambda\mu n }
{\lambda\mu n+ \sigma_{\max} \sigma'} G(\vc{ {\boldsymbol \alpha}}{t})
&\overset{\eqref{eq:fasfawfwaf}}{\leq}
\mathbb{E}[
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t+1})
-
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
]
\leq
\mathbb{E}[
\mathcal{D}( {\boldsymbol \alpha}^*)
-
\mathcal{D}(\vc{ {\boldsymbol \alpha}}{t})
].
\end{align*}
Therefore $G(\vc{ {\boldsymbol \alpha}}{t})\leq
\frac1{\gamma (1-\Theta)}
\frac {\lambda\mu n+ \sigma_{\max} \sigma'}
{ \lambda\mu n } \vc{\epsilon_\mathcal{D}}{t}$.
Hence if $\epsilon_\mathcal{D} \leq
\gamma (1-\Theta)
\frac{ \lambda\mu n }
{\lambda\mu n+ \sigma_{\max} \sigma'}
\epsilon_G $
then $G(\vc{ {\boldsymbol \alpha}}{t})\leq \epsilon_G$.
Therefore, after
$$
t
\geq
\frac{1}
{\gamma (1-\Theta)}
\frac
{\lambda\mu n+ \sigma_{\max} \sigma'}
{ \lambda\mu n }
\log
\left(
\frac{1}
{\gamma (1-\Theta)}
\frac
{\lambda\mu n+ \sigma_{\max} \sigma'}
{ \lambda\mu n }
\frac1{\epsilon_G}
\right)
$$
iterations, we have obtained a duality gap less than $\epsilon_G$.
\subsection{Proof of Theorem \ref{thm:LocalSDCA_smooth2}}
Since the $\ell_i$ are $(1/\mu)$-smooth, the functions
$\ell_i^*$ are $\mu$-strongly convex with respect to the norm $\|\cdot\|$.
The proof is based on
techniques developed in recent coordinate descent papers, including
\cite{richtarik,
richtarik2013distributed,richtarikBigData,TTR:IMPROVRED,
marecek2014distributed,APPROX,lu2013complexity,fercoq2014fast,ALPHA,QUARTZ} (Efficient accelerated variants were considered in \cite{APPROX, ASDCA}).
First, let us define the function
$F( {\boldsymbol \zeta}): \mathbb{R}^{n_k} \to \mathbb{R}$
as
$F( {\boldsymbol \zeta}) := -\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\sum_{i \in \mathcal{P}_k} \zeta_i {\bf e}_i; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
$. This function can be written as the sum of two parts,
$F( {\boldsymbol \zeta}) = \Phi( {\boldsymbol \zeta}) + f( {\boldsymbol \zeta})$.
The first part,
$\Phi( {\boldsymbol \zeta})
=\frac1n\sum_{i \in \mathcal{P}_k}
\ell_i^*(-\alpha_i - \zeta_i)$,
is strongly convex
with convexity parameter
$\frac{\mu}{n}$
with respect to the standard Euclidean norm.
In our application, we think of the variable $ {\boldsymbol \zeta}$ as collecting the local dual variables $\vsubset{\Delta {\boldsymbol \alpha}}{k}$.
The second part is
$f( {\boldsymbol \zeta})
=
\frac1K
\frac{\lambda}{2}
\| {\bf w}( {\boldsymbol \alpha})\|^2
+\frac1n
\sum_{i \in \mathcal{P}_k}
{\bf w}( {\boldsymbol \alpha})^T {\bf x}_i \zeta_i
+
\frac\lambda2
\sigma'
\frac1{\lambda^2 n^2}
\| \sum_{i \in \mathcal{P}_k} {\bf x}_i \zeta_i \|^2
$.
It is easy to show
that the gradient of $f$ is coordinate-wise Lipschitz
continuous
with Lipschitz constant
$ \frac{\sigma'}{\lambda n^2} r_{\max}$
with respect to the standard Euclidean norm.
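To make the two parts concrete, here is a minimal numerical sketch of evaluating $F( {\boldsymbol \zeta})$ (our own illustration; the function and argument names, and the conjugate-loss callback, are assumptions and not part of the analysis):
\begin{verbatim}
import numpy as np

def local_objective_F(zeta, alpha_k, X_k, w, lam, n, K, sigma_prime, conj_loss):
    """Sketch of F(zeta) = Phi(zeta) + f(zeta) for one block k.

    Assumptions: the rows of X_k are the local examples x_i, alpha_k holds the
    corresponding dual coordinates, and conj_loss(u) evaluates ell_i^* elementwise.
    """
    # Phi(zeta) = (1/n) sum_i ell_i^*(-alpha_i - zeta_i)
    phi = conj_loss(-(alpha_k + zeta)).sum() / n
    # f(zeta) = (1/K)(lam/2)||w||^2 + (1/n) sum_i w^T x_i zeta_i
    #           + (lam/2) sigma' || (1/(lam n)) sum_i x_i zeta_i ||^2
    Az = X_k.T @ zeta                      # = sum_i x_i zeta_i
    f = (lam / 2) * (w @ w) / K \
        + (X_k @ w) @ zeta / n \
        + (lam / 2) * sigma_prime * (Az @ Az) / (lam * n) ** 2
    return phi + f
\end{verbatim}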
Following the
proof of Theorem 20 in \cite{richtarikBigData},
we obtain that
\begin{align*}
\mathbb{E}[\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
- \mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vc{
\vsubset{\Delta {\boldsymbol \alpha}}{k}
}{h+1}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
]
&\leq
\left(
1-\frac1{n_k}
\frac{\frac{\mu n \lambda}{\sigma' r_{\max}}}
{1+\frac{\mu n \lambda}{\sigma' r_{\max}}}
\right)
\left(
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
- \mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vc{
\vsubset{\Delta {\boldsymbol \alpha}}{k}
}{h}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\right)
\\
&=
\left(
1-\frac1{n_k}
\frac
{ \lambda n \mu }
{\sigma' r_{\max}+ \lambda n \mu }
\right)
\left(
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
- \mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vc{
\vsubset{\Delta {\boldsymbol \alpha}}{k}
}{h}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\right).
\end{align*}
Applying this contraction over all steps up to step $h$ gives
\begin{align*}
\mathbb{E}[\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
- \mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vc{
\vsubset{\Delta {\boldsymbol \alpha}}{k}
}{h}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
]
&\leq
\left(
1-\frac1{n_k}
\frac
{ \lambda n \mu }
{\sigma' r_{\max}+ \lambda n \mu }
\right)^h
\left(
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
- \mathcal{G}^{\sigma'}_k\hspace{-0.08em}({\bf 0}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\right).
\end{align*}
Therefore, choosing
$H$ as in the assumption of our Theorem, given in Equation
\eqref{eq:asjfwjfdwafcea},
we are guaranteed that
$\left(
1-\frac1{n_k}
\frac
{ \lambda n \mu }
{\sigma' r_{\max}+ \lambda n \mu }
\right)^H \leq \Theta$, as desired.
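As a numerical illustration (a minimal sketch with hypothetical constants; here we simply take the smallest $H$ with $(1-q)^H \le \Theta$ for the per-step contraction factor $q$ derived above):
\begin{verbatim}
import math

# Hypothetical constants, for illustration only.
n, n_k = 100000, 12500          # total and local number of examples
lam, mu = 1.0e-4, 1.0           # regularization and strong convexity of ell*
sigma_prime, r_max = 8.0, 1.0   # subproblem parameter, max squared feature norm
Theta = 0.9                     # desired local accuracy

q = (1.0 / n_k) * (lam * n * mu) / (sigma_prime * r_max + lam * n * mu)
H = math.ceil(math.log(Theta) / math.log(1.0 - q))  # smallest H with (1-q)^H <= Theta
print(H)
\end{verbatim}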
\subsection{Proof of Theorem \ref{thm:LocalSDCA_smooth1}
}
As in the proof of Theorem
\ref{thm:LocalSDCA_smooth2},
we define a composite function $F( {\boldsymbol \zeta})
= f( {\boldsymbol \zeta})+\Phi( {\boldsymbol \zeta}) $.
In this case, however, the functions
$\ell_i^*$ are not guaranteed to be strongly convex.
Nevertheless, the smooth part $f$ still has a coordinate-wise Lipschitz continuous gradient with constant
$ \frac{\sigma'}{\lambda n^2} r_{\max}$
with respect to the standard Euclidean norm.
Therefore, from Theorem 3 in \cite{TTR:IMPROVRED}
we have that
\begin{align*}
\mathbb{E}[\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
- \mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vc{
\vsubset{\Delta {\boldsymbol \alpha}}{k}
}{h}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
]
&\leq
\frac{n_k}{n_k+h}
\left(
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
- \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( {\bf 0}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
+\frac12 \frac{\sigma'r_{\max}}{\lambda n^2} \| \vsubset{\Delta {\boldsymbol \alpha}^*}{k}\|^2
\right).
\tagthis
\label{eq:afewfew}
\end{align*}
Now, the choice $h=H$ from
\eqref{eq:H_convexLoss}
is sufficient for the right-hand side of
\eqref{eq:afewfew} to be at most
$\Theta \big(\mathcal{G}^{\sigma'}_k\hspace{-0.08em}(
\vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
- \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( {\bf 0}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) \big)$.
\section{Relationship of DisDCA to \textsc{CoCoA}\xspacep}
\label{app:disDCA}
\newcommand{\uv}{{\bf u}}
We are indebted to Ching-pei Lee for showing the following relationship between the practical variant of DisDCA \cite{Yang:2013vl}, and \textsc{CoCoA}\xspacep when SDCA is chosen as the local solver:
Considering the practical variant of DisDCA (DisDCA-p, see Figure~2 in \cite{Yang:2013vl}) using the scaling parameter $scl=K$, the following holds:
\begin{lemma}\label{lem:equivDisDCA}
Assume that the dataset is partitioned equally between workers, i.e. $\forall k: n_k = \frac{n}{K}$.
If within the \textsc{CoCoA}\xspacep framework, SDCA is used as a local solver, and the subproblems are formulated using our shown ``safe'' (but pessimistic) upper bound of $\sigma'=K$, with aggregation parameter $\gamma=1$ (adding), then the \textsc{CoCoA}\xspacep framework reduces exactly to the DisDCA-p algorithm.
\end{lemma}
\begin{proof}(Due to Ching-pei Lee, with some reformulations).
As defined in \eqref{eq:subproblem}, the data-local subproblem solved by each machine in \textsc{CoCoA}\xspacep is defined as
\[
\max_{\vsubset{\Delta {\boldsymbol \alpha}}{k}\in\mathbb{R}^{n}} \
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\]
where
\[
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
:=
-\frac1n\sum_{i \in \mathcal{P}_k}
\ell_i^*(-\alpha_i - (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i)
- \frac1K
\frac{\lambda}{2}
\| {\bf w}\|^2
-\frac1n
{\bf w}^T A \vsubset{\Delta {\boldsymbol \alpha}}{k}
- \frac\lambda2
\sigma' \Big\|\frac1{\lambda n} A \vsubset{\Delta {\boldsymbol \alpha}}{k}\Big\|^2 \ .
\]
We rewrite the local problem by scaling with $n$, and removing the constant regularizer term $\frac1K \frac{\lambda}{2}\| {\bf w}\|^2$, i.e.
\begin{equation}
\tilde{\mathcal{G}^{\sigma'}_k\hspace{-0.08em}}( \vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w})
:=
-\sum_{j \in \mathcal{P}_k}
\ell_j^*(-\alpha_j - (\vsubset{\Delta {\boldsymbol \alpha}}{k})_j)
-
{\bf w}^T A \vsubset{\Delta {\boldsymbol \alpha}}{k}
- \frac{\sigma'}{2\lambda n}
\Big\| A \vsubset{\Delta {\boldsymbol \alpha}}{k}\Big\|^2 \ .
\end{equation}
For the correspondence of interest, we now restrict to single coordinate updates in the local solver.
In other words, the local solver optimizes exactly one coordinate $i \in \mathcal{P}_k$ at a time.
To relate the single coordinate update to the set of local variables, we will use the notation
\begin{equation}\label{eq:singleCoordNotation}
\vsubset{\Delta {\boldsymbol \alpha}}{k} =: \vsubset{\Delta {\boldsymbol \alpha}^{{\text{\tiny prev}}}}{k} + \delta {\bf e}_i \ ,
\end{equation}
so that $\vsubset{\Delta {\boldsymbol \alpha}^{{\text{\tiny prev}}}}{k}$ are the previous local variables, and $\vsubset{\Delta {\boldsymbol \alpha}}{k}$ will be the updated ones.
From now on, we will consider the special case of \textsc{CoCoA}\xspacep when the quadratic upper bound parameter is chosen as the ``safe'' value $\sigma'=K$, combined with adding as the aggregation, i.e. $\gamma=1$.
Now if the local solver within \textsc{CoCoA}\xspacep is chosen as \textsc{LocalSDCA}\xspace, then one local step on the subproblem~(\ref{eq:subproblem}) will calculate the following coordinate update. Recall that $A=[ {\bf x}_1, {\bf x}_2, \dots, {\bf x}_n] \in \mathbb{R}^{d\times n}$.
\begin{equation}
\delta^\star := \argmax_{\delta\in\mathbb{R}} \
\tilde{\mathcal{G}^{\sigma'}_k\hspace{-0.08em}}( \vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w})
\end{equation}
which -- because it is only affecting one single coordinate, employing \eqref{eq:singleCoordNotation} -- can be expressed as
\begin{align}
\delta^\star :=& \argmax_{\delta\in\mathbb{R}} \
-\ell^*_{i}(-(\alpha_i+(\vsubset{\Delta {\boldsymbol \alpha}^{{\text{\tiny prev}}}}{k})_i+\delta))
-\delta {\bf x}_{i}^T {\bf w}
- \frac{K}{\lambda n} \delta {\bf x}_i^T A \vsubset{\Delta {\boldsymbol \alpha}^{{\text{\tiny prev}}}}{k}
-\frac{K}{2\lambda n}\delta^2 \| {\bf x}_{i}\|_2^2
\notag\\
=& \argmax_{\delta\in\mathbb{R}} \
-\ell^*_{i}(-(\alpha_{i}+(\vsubset{\Delta {\boldsymbol \alpha}^{{\text{\tiny prev}}}}{k})_i+\delta))
-\delta {\bf x}_{i}^T \Big( \underbrace{ {\bf w}
+ \frac{K}{\lambda n} A \vsubset{\Delta {\boldsymbol \alpha}^{{\text{\tiny prev}}}}{k} }_{=:\uv^{\text{\tiny local}}} \Big)
-\frac{K}{2\lambda n}\delta^2 \| {\bf x}_{i}\|_2^2 .
\label{eq:coordupdates}
\end{align}
From this formulation, it is apparent that single coordinate local solvers should maintain their locally updated version of the current primal parameters, which we here denote as
\begin{equation}
\uv^{\text{\tiny local}} = {\bf w} + \frac{K}{\lambda n} A\vsubset{\Delta {\boldsymbol \alpha}^{{\text{\tiny prev}}}}{k} \ .
\end{equation}
In the practical variant of DisDCA, the summarized local primal updates are
$
\Delta \uv^{\text{\tiny local}} = \frac{1}{\lambda n_k} A\vsubset{\Delta {\boldsymbol \alpha}}{k}
$.
For the balanced case $n_k = n/K$, with $K$ being the number of machines, this means that the local update of $\uv^{\text{\tiny local}}$ in DisDCA-p is
\begin{align}
\Delta \alpha_i^\star :=& \argmax_{\Delta \alpha_i\in\mathbb{R}} \
-\ell^*_{i}(-(\alpha_{i}+\Delta \alpha_i))
-\Delta \alpha_i {\bf x}_{i}^T \uv^{\text{\tiny local}}
-\frac{K}{2\lambda n}(\Delta \alpha_i)^2 \| {\bf x}_{i}\|_2^2 \ .
\label{eq:disDCAupdates}
\end{align}
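To make this bookkeeping explicit, the following minimal sketch (our own illustration; the 1-D maximizer is left abstract since its closed form depends on the particular loss) shows how a single-coordinate local solver maintains $\uv^{\text{\tiny local}}$ while accumulating $\vsubset{\Delta {\boldsymbol \alpha}}{k}$:
\begin{verbatim}
import numpy as np

def local_sdca_sketch(X_k, alpha_k, w, lam, n, K, H, solve_1d, rng):
    """H single-coordinate steps on the local subproblem (sigma' = K, gamma = 1).

    Assumptions: rows of X_k are the local examples x_i, and solve_1d(i, a, u)
    returns the 1-D maximizer delta of
      -ell_i^*(-(a + delta)) - delta * x_i^T u - K/(2*lam*n) * delta^2 * ||x_i||^2,
    which is loss-dependent and left abstract here.
    """
    n_k = X_k.shape[0]
    delta_alpha = np.zeros(n_k)      # accumulated local dual update
    u_local = w.copy()               # locally updated primal parameters
    for _ in range(H):
        i = rng.integers(n_k)        # sample a local coordinate uniformly
        delta = solve_1d(i, alpha_k[i] + delta_alpha[i], u_local)
        delta_alpha[i] += delta
        u_local += (K / (lam * n)) * delta * X_k[i]   # keep u_local consistent
    return delta_alpha               # communicated back for aggregation
\end{verbatim}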
It is not hard to show that during one outer round, the evolution of the local dual variables $\vsubset{\Delta {\boldsymbol \alpha}}{k}$ is the same in both methods, such that they will also have the same trajectory of $\uv^{\text{\tiny local}}$. This requires some care if the same coordinate is sampled more than once in a round, which can happen in \textsc{LocalSDCA}\xspace within \textsc{CoCoA}\xspacep and also in DisDCA-p.
\end{proof}
\paragraph{Discussion.}
In view of the above lemma, we summarize the connection between the two methods as follows:
\begin{itemize}
\item \textbf{\textsc{CoCoA}\xspace/+ is Not an Algorithm.}
Rather, it is a framework which allows the use of \textit{any local solver} to perform approximate steps on the local subproblems.
This additional level of abstraction (from the definition of such local subproblems in \eqref{eq:subproblem}) is the first to allow the \textit{reuse} of arbitrary fast, tuned, problem-specific single-machine solvers, while decoupling them from the distributed algorithmic scheme, as presented in Algorithm \ref{alg:cocoa}.
Concerning the choice of local solver to be used within \textsc{CoCoA}\xspace/+, SDCA is \emph{not} the fastest known single machine solver for most applications.
Much recent research has shown improvements on SDCA \cite{ShalevShwartz:2013wl}, such as accelerated variants \cite{MinibatchASDCA} and other approaches including variance reduction, methods incorporating second-order information, and importance sampling.
In this light, we encourage the user of the \textsc{CoCoA}\xspace or \textsc{CoCoA}\xspacep framework to plug in the best and most recent solver available for their particular local problem (within Algorithm \ref{alg:cocoa}), which is not necessarily SDCA. This choice should be made explicit especially when comparing algorithms.
Our presented convergence theory from Section~\ref{sec:convergence} will still cover these choices, since it only depends on the relative accuracy $ {\bf T}heta$ of the chosen local solver.
\item \textbf{\textsc{CoCoA}\xspacep is Theoretically Safe, while still Adaptive to the Data.}
The general definition of the local subproblems, and therefore the treatment of the varying separable bound on the objective -- quantified by $\sigma'$ -- allows our framework to adapt to the difficulty of the data partition and still give convergence results.
The data-dependent measure $\sigma'$ is fully decoupled from what the user of the framework prefers to employ as a local solver (see also the comment below that \textsc{CoCoA}\xspace is not a coordinate solver).
The safe upper bound $\sigma'=K$ is worst-case pessimistic, for the convergence theory to still hold in all cases, when the updates are added.
Using additional knowledge from the input data, better bounds and therefore better step-sizes can be achieved in \textsc{CoCoA}\xspacep.
An example when $\sigma'$ can be safely chosen much smaller is when the data-matrix satisfies strong row/column sparsity, see e.g. Lemma 1 in \cite{richtarik2013distributed}.
\item \textbf{Obtaining DisDCA-p as a Special Case.}
As shown in Lemma \ref{lem:equivDisDCA} above, if in \textsc{CoCoA}\xspacep SDCA is used as the local solver,
the pessimistic upper bound $\sigma'=K$ is used,
and, moreover, the dataset is partitioned equally,
i.e. $\forall k: n_k = \frac{n}{K}$,
then the \textsc{CoCoA}\xspacep framework reduces exactly to the DisDCA-p algorithm of \cite{Yang:2013vl}.
The correspondence breaks down if the subproblem parameter is chosen as a practically better value $\sigma'\ne K$. Also, as noted above, SDCA is often not the best local solver currently available. In our above experiments, SDCA was used just for demonstration purposes and ease of comparison. Furthermore, the data partition might often be unbalanced in practical applications.
While both DisDCA-p and \textsc{CoCoA}\xspace are special cases of \textsc{CoCoA}\xspacep, we note that DisDCA-p cannot be recovered as a special case of the original \textsc{CoCoA}\xspace framework \cite{jaggi2014communication}.
\item \textbf{\textsc{CoCoA}\xspace/+ are Not Coordinate Methods.}
Despite the original name being motivated by this special case, \textsc{CoCoA}\xspace and \textsc{CoCoA}\xspacep are \emph{not} coordinate methods. In fact, \textsc{CoCoA}\xspacep as presented here for the adding case ($\gamma = 1$) is much more closely related to a batch method applied to the dual, using a block-separable proximal term, as follows from our new subproblem formulation \eqref{eq:subproblem}, depending on $\sigma'$. See also the remark in Section \ref{sec:relatedWork}.
The framework here (Algorithm \ref{alg:cocoa}) gives more generality, as the used local solver is not restricted to be a coordinate-wise one. In fact the framework allows to translate recent and future improvements of single machine solvers directly to the distributed setting, by employing them within Algorithm \ref{alg:cocoa}.
DisDCA-p works very well for several applications, but is restricted to using local coordinate ascent (SDCA) steps.
\item \textbf{Theoretical Convergence Results.}
While DisDCA-p \cite{Yang:2013vl} was proposed without theoretical justification (hence the nomenclature), the main contribution in the paper here -- apart from the arbitrary local solvers -- is the convergence analysis for the framework.
The theory proposed in \cite{Yang:2013ui} is given only for the setting of orthogonal partitions, i.e., when $\sigma'=1$ and the problems become trivial to distribute given the orthogonality of data between the workers.
The theoretical analysis here gives convergence rates applying for Algorithm \ref{alg:cocoa} when using arbitrary local solvers, and inherits the performance of the local solver.
As a special case, we obtain the first theoretical justification and convergence rates for original \textsc{CoCoA}\xspace in the case of general convex objective, as well as for the special case of DisDCA-p for both general convex and smooth convex objectives.
\end{itemize}
\end{document}
\begin{table}[htbp]
\centering
\caption{$\sigma'_{\min}/\gamma K$}
\begin{tabular}{rrrrrrr}
\toprule
K & 1 & 2 & 4 & 8 & 16 & 32 \\
\midrule
a1a & 1.0000 & 0.9998 & 0.9986 & 0.9968 & 0.9927 & 0.9840 \\
svmguide3 & 1.0000 & 0.9997 & 0.9995 & 0.9989 & 0.9986 & 0.9973 \\
svmguide1.t & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\
splice\_scale & 1.0000 & 0.9668 & 0.9121 & 0.8008 & 0.6770 & 0.5243 \\
a3a & 1.0000 & 0.9997 & 0.9993 & 0.9982 & 0.9964 & 0.9919 \\
\midrule
K & 4 & 8 & 16 & 32 & 64 & 128 \\
\midrule
w5a & 0.9880 & 0.9714 & 0.9435 & 0.8552 & 0.7978 & 0.7178 \\
gisette\_scale & 0.9998 & 0.9996 & 0.9992 & 0.9984 & 0.9966 & 0.9929 \\
mushrooms & 0.9189 & 0.9132 & 0.9057 & 0.8995 & 0.8946 & 0.8808 \\
\midrule
K & 16 & 32 & 64 & 128 & 256 & 512 \\
\midrule
news & 0.9252 & 0.8878 & 0.8496 & 0.7966 & 0.7227 & 0.6236 \\
real-sim & 0.6624 & 0.6206 & 0.5737 & 0.4996 & 0.4181 & 0.3300 \\
rcv1 & 0.9185 & 0.8444 & 0.7258 & 0.5914 & 0.4490 & 0.3340 \\
w1a.t & 0.9570 & 0.8738 & 0.8163 & 0.7791 & 0.7331 & NaN \\
\midrule
K & 256 & 512 & 1024 & 2048 & 4096 & 8192 \\
\midrule
covtype & 1.0000 & 1.0000 & 0.9999 & 0.9999 & 0.9995 & 0.9958 \\
webspam & 1.0000 & 0.9998 & 0.9994 & 0.9999 & 0.9994 & 0.9989 \\
\bottomrule
\end{tabular}
\label{tab:sigma2}
\end{table}
\begin{table}[htbp]
\centering
\caption{$(1/K)/(\sigma/n^2)$}
\begin{tabular}{rrrrrrr}
\toprule
K & 1 & 2 & 4 & 8 & 16 & 32 \\
\midrule
a1a & 2.23 & 2.23 & 2.23 & 2.22 & 2.21 & 2.19 \\
covtype & 1.12 & 1.11 & 1.11 & 1.09 & 1.09 & 1.08 \\
kdda & 17.06 & 16.78 & 16.45 & 16.03 & 15.63 & 14.88 \\
news & 17.08 & 16.78 & 16.46 & 16.06 & 15.48 & 14.93 \\
url & 1.54 & 1.54 & 1.53 & 1.53 & 1.53 & 15.32 \\
web/web & 18.76 & 18.73 & 18.80 & 18.66 & 18.94 & 18.38 \\
svmguide3 & 11.75 & 11.71 & 11.68 & 11.57 & 11.36 & 10.78 \\
svmguide1.t & 12.52 & 11.79 & 11.79 & 11.79 & 11.79 & 11.57 \\
splice\_scale & 29.94 & 29.07 & 27.78 & 25.51 & 21.55 & 16.45 \\
a3a & 2.23 & 2.23 & 2.23 & 2.23 & 2.22 & 2.20 \\
w5a & 39.53 & 39.06 & 38.46 & 35.71 & 34.72 & 31.25 \\
gisette\_scale & 1.43 & 1.43 & 1.43 & 1.43 & 1.42 & 1.42 \\
real-sim & 78.53 & 70.23 & 60.94 & 54.57 & 48.36 & 45.92 \\
rcv1\_train & 45.11 & 44.78 & 44.00 & 42.89 & 40.16 & 34.84 \\
w1a.t & 7.62 & 7.53 & 7.44 & 6.51 & 6.26 & 5.82 \\
mushrooms & 6.72 & 6.44 & 6.43 & 6.38 & 6.31 & 6.25 \\
\bottomrule
\end{tabular}
\label{tab:sigma1}
\end{table}
\begin{table}[htbp]
\centering
\caption{$K\gamma/\sigma'_{\min} $}
\begin{tabular}{rrrrrrr}
\toprule
K & 1 & 2 & 4 & 8 & 16 & 32 \\
\midrule
a1a & 1.0000 & 1.0002 & 1.0014 & 1.0032 & 1.0074 & 1.0162 \\
svmguide3 & 1.0000 & 1.0003 & 1.0005 & 1.0011 & 1.0014 & 1.0027 \\
svmguide1.t & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 & 1.0000 \\
splice\_scale & 1.0000 & 1.0344 & 1.0964 & 1.2488 & 1.4770 & 1.9072 \\
a3a & 1.0000 & 1.0003 & 1.0007 & 1.0018 & 1.0036 & 1.0082 \\
\midrule
K & 4 & 8 & 16 & 32 & 64 & 128 \\
\midrule
w5a & 1.0121 & 1.0294 & 1.0599 & 1.1694 & 1.2534 & 1.3931 \\
gisette\_scale & 1.0002 & 1.0004 & 1.0008 & 1.0016 & 1.0034 & 1.0071 \\
mushrooms & 1.0883 & 1.0950 & 1.1042 & 1.1117 & 1.1179 & 1.1354 \\
\midrule
K & 16 & 32 & 64 & 128 & 256 & 512 \\
\midrule
news & 1.0808 & 1.1264 & 1.1770 & 1.2553 & 1.3838 & 1.6036 \\
real-sim & 1.5096 & 1.6114 & 1.7430 & 2.0018 & 2.3918 & 3.0307 \\
rcv1 & 1.0887 & 1.1842 & 1.3778 & 1.6908 & 2.2273 & 2.9938 \\
w1a.t & 1.0449 & 1.1445 & 1.2250 & 1.2835 & 1.3641 & Nan \\
\midrule
K & 256 & 512 & 1024 & 2048 & 4096 & 8192 \\
\midrule
covtype & 1.0000 & 1.0000 & 1.0001 & 1.0001 & 1.0005 & 1.0042 \\
webspam & 1.0000 & 1.0002 & 1.0006 & 1.0001 & 1.0006 & 1.0011 \\
\bottomrule
\end{tabular}
\label{tab:sigma2}
\end{table}
\part{WORK IN PROGRESS!!!}
\begin{lemma}
\label{lemma:primaltosubproblem}
Let $k\in [K]$, $ {\boldsymbol \alpha} \in \mathbb{R}^n$
and let us denote by
$ {\bf w} = {\bf w}( {\boldsymbol \alpha})$.
Then consider the following optimization problem
\begin{equation}
\label{eq:localprimal}
\min_{ {\bf w}_k\in \mathbb R^d} \Big\{ \mathcal H_k( {\bf w}_k; {\bf w}):= \frac{1}{n} \sum_{i\in \mathcal P_k} \ell_i( ( {\bf w} + {\bf w}_k)^T {\bf x}_i) + \frac{\lambda}{2\sigma '} \| {\bf w}_k\|^2 - \frac1K \frac{\lambda}{2} \| {\bf w}\|^2+ \lambda {\bf w}^T ( {\bf w}+ {\bf w}_k) \Big\}.
\end{equation}
Then the dual
is given by
\eqref{eq:subproblem}.
Moreover, for any feasible
$\vsubset{\Delta {\boldsymbol \alpha}}{k}$
we can find a feasible point
$ {\bf w}_k$ as follows
\begin{equation}
{\bf w}_k = {\bf w}_k (\vsubset{\Delta {\boldsymbol \alpha}}{k}) =
\frac{\sigma'}{\lambda n}
A \vsubset{\Delta {\boldsymbol \alpha}}{k}.
\end{equation}
Moreover, strong duality holds, i.e.
\begin{equation}
\mathcal H_k( {\bf w}_k(\vsubset{\Delta {\boldsymbol \alpha}}{k}); {\bf w}( {\boldsymbol \alpha}))
-
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
\geq
\mathcal H_k( {\bf w}_k(\vsubset{\Delta {\boldsymbol \alpha}^*}{k}); {\bf w}( {\boldsymbol \alpha}))
-
\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}^*}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})
= 0.
\end{equation}
\end{lemma}
\section{Proof of Lemma \ref{lemma:primaltosubproblem}}
\begin{proof}
The dual problem is derived by plugging in the definition of the conjugate function $\ell_i(( {\bf w} + {\bf w}_k)^T {\bf x}_i ) = \max_{\alpha_i} -(\alpha_i+ (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i) ( {\bf w}+ {\bf w}_k)^T {\bf x}_i - \ell^*_i (-\alpha_i - (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i)$, which gives
\begin{align}
&\min_{ {\bf w}_k\in\mathbb R^d}\quad \frac{1}{n} \sum_{i\in \mathcal P_k} \max_{\alpha_i} \Big( -(\alpha_i+ (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i) ( {\bf w}+ {\bf w}_k)^T {\bf x}_i - \ell^*_i (-\alpha_i - (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i)\Big) \notag\\
& \quad\quad\quad+ \frac{\lambda}{2\sigma '} \| {\bf w}_k\|^2- \frac1K \frac{\lambda}{2} \| {\bf w}\|^2+ \lambda {\bf w}^T ( {\bf w}+ {\bf w}_k) \notag\\
= & \frac{1}{n} \sum_{i\in\mathcal{P}_k} \max_{\alpha_i} - \ell^*_i (-\alpha_i - (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i)- \frac1K \frac{\lambda}{2} \| {\bf w}\|^2 \notag\\
&\quad\quad\quad+ \min_{ {\bf w}_k\in\mathbb R^d} \Big[ \sum_{i \in \mathcal P_k} \Big( - \frac{1}{n} (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i ( {\bf w}+ {\bf w}_k)^T {\bf x}_i\Big) + \frac{\lambda}{2\sigma '} \| {\bf w}_k\|^2 \Big] \notag\\
= & \max_{\Delta {\boldsymbol \alpha}_{[k]}} \Big\{\frac{1}{n} \sum_{i \in \mathcal P_k} - \ell^*_i (-\alpha_i - (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i) - \frac1K \frac{\lambda}{2} \| {\bf w}\|^2 \notag\\
&\quad\quad\quad + \min_{ {\bf w}_k\in\mathbb R^d} \Big[ \sum_{i \in \mathcal P_k} \Big( - \frac{1}{n}(\vsubset{\Delta {\boldsymbol \alpha}}{k})_i ( {\bf w}+ {\bf w}_k)^T {\bf x}_i\Big) + \frac{\lambda}{2\sigma '} \| {\bf w}_k\|^2 \Big] \Big\}
\end{align}
The first order optimality condition for $ {\bf w}_k$, by setting its derivative to zero in the inner minimization, can be written as
\begin{equation}
{\bf w}_k^* = \frac{\sigma'}{\lambda n} \sum_{i \in \mathcal P_k} (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i {\bf x}_i.
\end{equation}
Plugging this back, the inner minimization becomes
\begin{align}
& \sum_{i \in \mathcal P_k} \Big( - \frac{1}{n}(\vsubset{\Delta {\boldsymbol \alpha}}{k})_i ( {\bf w}+ {\bf w}_k)^T {\bf x}_i\Big) +\frac{\lambda }{2\sigma'} \| {\bf w}_k\|^2 \notag \\
= & -\frac{1}{n} {\bf w} ^T A \Delta {\boldsymbol \alpha}_{[k]} -\sigma' \lambda \Big\|\frac{1}{\lambda n} A \Delta {\boldsymbol \alpha}_{[k]} \Big\|^2 + \frac{\lambda \sigma'}{2} \Big\|\frac{1}{\lambda n} A \Delta {\boldsymbol \alpha}_{[k]} \Big\|^2 \notag\\
= & -\frac{1}{n} {\bf w} ^T A \Delta {\boldsymbol \alpha}_{[k]} -\frac{\sigma' \lambda}{2} \Big\|\frac{1}{\lambda n} A \Delta {\boldsymbol \alpha}_{[k]} \Big\|^2
\end{align}
Writing the resulting full problem, we obtain precisely the local dual subproblem \eqref{eq:subproblem} for the $k$-th coordinate block.
\end{proof}
So, we have the local duality gap
\begin{align} \label{eq:localgap}
\mathcal H_k( {\bf w}_k; {\bf w}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) = & \frac{1}{n} \Big(\sum_{i\in \mathcal P_k} \ell_i( ( {\bf w} + {\bf w}_k)^T {\bf x}_i) + \ell_i^*(-\alpha_i - (\vsubset{\Delta {\boldsymbol \alpha}}{k})_i) \notag\\
& \quad\quad + \frac{\lambda}{\sigma '} \| {\bf w}_k\|^2 + \lambda {\bf w}^T( {\bf w}+ {\bf w}_k) + \frac{\lambda}{\sigma'} {\bf w}_k^T {\bf w}\Big).
\end{align}
\begin{lemma}\label{lm:cite_lemma5}
Assume that $\ell_i^*$ is $\mu$-strongly convex (where $\mu\geq 0$). Then for all iterations $h$ of \textsc{LocalSDCA}\xspace and any $s\in[0,1]$ we have
\begin{equation}
\mathbf E [\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h)}_{[k]} ; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h-1)}_{[k]}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})]
\geq \frac{s}{n_k} \big(\mathcal H_k( {\bf w}_k; {\bf w}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}^{(h-1)}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) \big) -\frac{s^2}{2\lambda n^2} \Psi^{(h)},
\end{equation}
where $\Psi^{(h)} := \frac{1}{n_k}\sum_{i\in \mathcal P_k}\big(\sigma' \| {\bf x}_i\|^2 -\frac{\lambda n \mu(1-s)}{s} \big) (u^{(h-1)}_i -\alpha_i^{(h-1)})^2$.
\todo[inline]{
Chenxin, you have to define what is $u_i$'s here}
\end{lemma}
\begin{proof}
Indeed,
\begin{align}
&n\big[\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h)}_{[k]} ; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h-1)}_{[k]}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})\big] \notag\\
=& \underbrace{-\ell_i^*(-\alpha_i-(\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h})_i) - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h})_i {\bf w}^T {\bf x}_i - \frac{\lambda n}{2}\sigma' \| \frac{1}{\lambda n}(\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h})_i {\bf x}_i \|^2 } _{A} \notag\\
& - \underbrace{\Big(-\ell_i^*(-\alpha_i-(\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i {\bf w}^T {\bf x}_i - \frac{\lambda n}{2}\sigma' \| \frac{1}{\lambda n}(\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i {\bf x}_i \|^2 \Big)}_{B} .
\end{align}
By the definition of the update $ \delta^*_i$, we have for all $s\in[0,1]$ that
\begin{align}
A =& \max_{\delta_i} \Big\{-\ell_i^*\Big( -\alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i - \delta_i^* {\bf e}_i \Big) - \big((\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i + \delta^*_i {\bf e}_i\big) {\bf w}^T {\bf x}_i -\notag\\
&\quad\quad\quad \frac{\lambda n}{2}\sigma' \| \frac{1}{\lambda n} \big((\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i + \delta^*_i {\bf e}_i\big) {\bf x}_i \|^2 \Big\} \notag\\
\geq& -\ell_i^*\Big (-\alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i - s( \vc{u_i}{h-1} -\alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i)\Big) - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i {\bf w}^T {\bf x}_i \notag\\
&\quad\quad \quad-s\Big(\vc{u}{h-1}_i - \alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i\Big ) {\bf w}^T {\bf x}_i - \notag\\
& \quad\quad\quad\frac{\lambda n}{2}\sigma' \| \frac{1}{\lambda n}\big ( (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i+ s(\vc{u}{h-1}_i- \alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i)\big ) {\bf x}_i \|^2 .
\end{align}
From the strong convexity we have
\begin{align}\label{eq:stronglyconv}
&\ell_i^*\Big (-\alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i - s( \vc{u_i}{h-1} -\alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i)\Big) \notag\\
\leq &\; s\ell_i^*(-\vc{u}{h-1}_i) + (1-s)\ell_i^*(-\alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) -\frac{\mu}{2}s(1-s)(\vc{u}{h-1}_i - \alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i)^2.
\end{align}
Hence,
\begin{align}
A \overset{\eqref{eq:stronglyconv}}{\geq}& - s\ell_i^*(-\vc{u}{h-1}_i) - (1-s)\ell_i^*(-\alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) + \frac{\mu}{2}s(1-s)(\vc{u}{h-1}_i - \alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i)^2 \notag\\
&\quad - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i {\bf w}^T {\bf x}_i -s\Big(\vc{u}{h-1}_i - \alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i\Big ) {\bf w}^T {\bf x}_i - \notag\\
& \quad\quad\quad\frac{\lambda n}{2}\sigma' \| \frac{1}{\lambda n}\big ( (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i+ s(\vc{u}{h-1}_i- \alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i)\big ) {\bf x}_i \|^2 \notag\\
=& -s\big( \ell_i^*(\vc{u_i}{h-1}) + \vc{u_i}{h-1}( {\bf w}+ {\bf w}_k)^T {\bf x}_i \big) -\ell_i^*(-\alpha_i-(\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i {\bf w}^T {\bf x}_i \notag\\
& - \frac{\lambda n}{2}\sigma' \| \frac{1}{\lambda n}(\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i {\bf x}_i \|^2+ \frac{s}{2} \big( \mu(1-s) - \frac{\sigma'}{\lambda n} s\| {\bf x}_i\|^2 \big) (\vc{u}{h-1}_i -\alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i)^2 \notag\\
&+ s\big(\ell_i^* ( \alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) + (\alpha_i + (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) {\bf x}_i^T \vc{ {\bf w}}{h-1}\big) - \sigma' s \Delta \alpha_k^{(h-1)} (\vc{u_i}{h-1}-\alpha_i- \Delta \alpha_k^{(h-1)}) \| {\bf x}_i\|^2 \notag\\
=& \underbrace{-s\big( \ell_i^*(\vc{u_i}{h-1}) + \vc{u_i}{h-1}( {\bf w}+ {\bf w}_k)^T {\bf x}_i \big)}_{s\ell_i(( {\bf w}+ {\bf w}_k)^T {\bf x}_i )} \notag\\ &+\underbrace{\Big(-\ell_i^*(-\alpha_i-(\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i {\bf w}^T {\bf x}_i - \frac{\lambda n}{2}\sigma' \| \frac{1}{\lambda n}(\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i {\bf x}_i \|^2\Big)}_{B} \notag\\
& + \frac{s}{2} \big( \mu(1-s) - \frac{\sigma'}{\lambda n} s\| {\bf x}_i\|^2 \big) (\vc{u}{h-1}_i -\alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i)^2 \notag\\
&+ s\big(\ell_i^* ( \alpha_i - (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) + (\alpha_i + (\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) {\bf x}_i^T \vc{ {\bf w}}{h-1}\big) + \sigma' s \Delta \alpha_k^{(h-1)} (\alpha_i + \Delta \alpha_k^{(h-1)}) \| {\bf x}_i\|^2.
\end{align}
Therefore,
\begin{align}\label{eq:aacxxsds}
A-B \geq& s\Big[ \ell_i(( {\bf w}+ {\bf w}_k)^T {\bf x}_i ) + \ell_i^* (-\alpha_i-(\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) +(\alpha_i+(\vc{\Delta {\boldsymbol \alpha}_{[k]}}{h-1})_i) {\bf w}^T {\bf x}_i \notag\\
&\quad\quad\quad + \sigma' s \Delta \alpha_k^{(h-1)} (\alpha_i + \Delta \alpha_k^{(h-1)}) \| {\bf x}_i\|^2 + \frac{s}{2} \big( \mu(1-s) - \frac{\sigma'}{\lambda n} s\| {\bf x}_i\|^2 \big) (\vc{u}{h-1}_i-\vc{ {\boldsymbol \alpha}}{h-1}_i)^2 \Big].
\end{align}
By \eqref{eq:localgap}, taking expectation of \eqref{eq:aacxxsds } we obtain
\begin{align}
frac{1}{s} \mathbf E [A-B] \geq &frac{n}{n_k} \underbrace{frac{1}{n} \sum_{i\in \mathcal P_k} l_i( ( {\bf w} + {\bf w}_k)^T {\bf x}_i) + \ell_i^*(-\alpha_i - (\vsubset{\Delta {\boldsymbol \alpha}^{(h-1)}}{k})_i) + frac{\lambda}{\sigma '} \| {\bf w}_k\|^2 + \lambda {\bf w}^T( {\bf w}+ {\bf w}_k) + frac{\lambda}{\sigma'} {\bf w}_k^T {\bf w} }_{ H_k( {\bf w}_k; {\bf w}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}^{(h-1)}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) } \notag\\
& -\frac{s}{2\lambda n} \underbrace{\frac{1}{n_k} \sum_{i\in\mathcal P_k} \Big( \sigma' \| {\bf x}_i\|^2- \frac{\lambda n\mu(1-s)}{s} \Big) (\vc{u}{h-1}_i-\vc{ {\boldsymbol \alpha}}{h-1}_i)^2}_{\vc{\Psi}{h}}.
\end{align}
Therefore, we have obtained the claimed improvement bound
\begin{equation}
\frac{n}{s}\mathbf E [\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h)}_{[k]} ; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h-1)}_{[k]}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})] \geq \frac{n}{n_k} (H_k( {\bf w}_k; {\bf w}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \vsubset{\Delta {\boldsymbol \alpha}^{(h-1)}}{k}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) )-\frac{s}{2\lambda n} \Psi^{(h)}.
\end{equation}
\end{proof}
To determine the number of inner iterations $H$, note that
\begin{align}
\mathbf E [\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h)}_{[k]} ; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h-1)}_{[k]}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})] &\geq \frac{s}{n_k} \mathbf E [\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{*}_{[k]} ; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h-1)}_{[k]}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})] -\frac{s^2}{2\lambda n^2} \vc{\Psi}{h} \notag\\
& \geq \frac{s}{n_k} \mathbf E [\mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{*}_{[k]} ; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h-1)}_{[k]}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})] -\frac{s^2}{2\lambda n^2} \sigma' {\bf x}_{max} \frac{1}{n_k} 4L^2,
\end{align}
where in the last step we use the upper bound $\vc{\Psi}{h} \leq \sigma' {\bf x}_{max} 4L^2 $. Then, let $r^h =\mathbf E[ \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{*}_{[k]} ; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( \Delta {\boldsymbol \alpha}^{(h)}_{[k]}; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k})]$, so that
\begin{align}
\mathbf E [r^{h-1} ] - \frac{s}{n_k} \mathbf E [r^{h-1}] \geq \mathbf E [r^h] - \frac{s^2}{2\lambda n^2} \sigma' {\bf x}_{max} 4L^2,
\end{align}
from which we can derive,
\begin{align}\label{eq:aaaaaadasdasdsa}
r^{h} \leq (1- \frac{s}{n_k} ) r^{h-1}+ (\frac{s}{n})^2 R,
\end{align}
where $R = \frac{4L^2 \sigma' {\bf x}_{max}}{2\lambda}$. Then,
\begin{equation}
r^h \leq (1-\frac{s}{n_k})^h r^0 + \frac{s}{n^2} n_k R.
\end{equation}
Let $ \frac{s}{n^2} n_k R \leq \beta \Theta r^0,$ and choose
\begin{align}
H \geq \frac{n_k^2 R}{\beta r^0 \Theta n^2} \log \Big(\frac{1}{(1-\beta)\Theta}\Big),
\end{align}
to achieve $r^h \leq \Theta r^0$, where $\beta$ is chosen to satisfy $s\leq 1.$
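As a sanity check (purely illustrative, with arbitrary parameter values that are not taken from the analysis above), the following Python snippet iterates the recursion \eqref{eq:aaaaaadasdasdsa} with equality and verifies the unrolled bound $r^h \leq (1-\frac{s}{n_k})^h r^0 + \frac{s}{n^2} n_k R$ at every step.
\begin{verbatim}
import math

# Illustrative parameters only; they are not derived from the text.
n, n_k = 10_000, 1_000
s, r0, R = 1.0, 1.0, 50.0

r, ok = r0, True
for h in range(1, 5_001):
    # recursion, taken with equality
    r = (1 - s / n_k) * r + (s / n) ** 2 * R
    # unrolled bound
    bound = (1 - s / n_k) ** h * r0 + (s / n ** 2) * n_k * R
    ok = ok and (r <= bound + 1e-12)

print("unrolled bound holds at every step:", ok)
print("additive term (s/n^2) n_k R =", (s / n ** 2) * n_k * R)
\end{verbatim}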
An alternative derivation of $H$ proceeds as follows. Inequality \eqref{eq:aaaaaadasdasdsa} is equivalent to
\begin{align}
r^{h} \leq (1- \frac{s}{n_k} ) r^{h-1}+ (\frac{s}{n_k})^2 \frac{R}{2\lambda}, \quad R = \frac{4L^2 n_k^2\sigma' {\bf x}_{max}}{n^2}.
\end{align}
Next we show
\begin{align}\label{eq:asasda}
r^h \leq \frac{2R}{\lambda (2n_k -h_0 +h)}
\end{align}
for all $h\geq h_0=\max(0, \left\lceil n_k\log (2\lambda n_k r^0/ R)\right\rceil ) $. Indeed, let us choose $s=1$; then at $h=h_0$, we have
\begin{align}
r^h\leq (1-\frac{1}{n_k})^h r^0 + \frac{R}{2\lambda n_k^2} n_k \leq e^{-h/n_k} r^0 + \frac{R}{2\lambda n_k}\leq \frac{R}{\lambda n_k}.
\end{align}
This implies that \eqref{eq:asasda} holds at $h=h_0$. For $h>h_0$ we use induction. Suppose the claim holds for $h-1$; then
\begin{align}
r^h\leq \Big(1-\frac{s}{n_k}\Big) r^{h-1} + \Big(\frac{s}{n_k}\Big)^2 \frac{R}{2\lambda} \leq \Big(1-\frac{s}{n_k}\Big) \frac{2R}{\lambda (2n_k-h_0+h-1)} + \Big(\frac{s}{n_k}\Big)^2 \frac{R}{2\lambda}.
\end{align}
Choosing $s = 2n_k/ (2n_k-h_0+h-1)\in[0,1] $ yields
\begin{align}
r^h&\leq \Big(1- \frac{2}{2n_k -h_0+h-1} \Big) \frac{2R}{\lambda (2n_k -h_0 +h-1)} + \Big( \frac{2}{2n_k -h_0+h-1} \Big) ^2 \frac{R}{2\lambda } \notag\\
& = \frac{2R}{\lambda (2n_k -h_0 +h-1)} \Big(1- \frac{1}{2n_k -h_0+h-1} \Big) \notag\\
& = \frac{2R}{\lambda (2n_k -h_0 +h-1)} \Big( \frac{2n_k -h_0+h-2 }{2n_k -h_0+h-1} \Big) \notag\\
& \leq \frac{2R}{\lambda (2n_k -h_0 +h-1)} \Big( \frac{2n_k -h_0+h-1 }{2n_k -h_0+h} \Big) \notag\\
& = \frac{2R}{\lambda (2n_k -h_0 +h)} .
\end{align}
To achieve $r^h \leq \Theta r^0$, choose
$$ h\geq \frac{2R}{ \Theta r^0 \lambda} + h_0 -2n_k.$$
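As a further numerical illustration (again with arbitrary constants), the next snippet iterates the recursion with the step sizes used in the induction, namely $s=1$ for $h\leq h_0$ and $s=2n_k/(2n_k-h_0+h-1)$ afterwards, and checks the bound \eqref{eq:asasda} for all $h\geq h_0$.
\begin{verbatim}
import math

# Illustrative constants only (not derived from the text).
n_k, lam, R, r0 = 1_000, 0.01, 5.0, 10.0

h0 = max(0, math.ceil(n_k * math.log(2 * lam * n_k * r0 / R)))
r, ok = r0, True
for h in range(1, h0 + 20_000):
    s = 1.0 if h <= h0 else 2 * n_k / (2 * n_k - h0 + h - 1)
    r = (1 - s / n_k) * r + (s / n_k) ** 2 * R / (2 * lam)
    if h >= h0:
        ok = ok and (r <= 2 * R / (lam * (2 * n_k - h0 + h)) + 1e-12)

print("h0 =", h0, "; bound 2R/(lam*(2*n_k - h0 + h)) holds for h >= h0:", ok)
\end{verbatim}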
To get an upper bound for $r^0$, we have
\begin{align}
r^0 = H_k(0; {\bf w}) - \mathcal{G}^{\sigma'}_k\hspace{-0.08em}( 0; {\bf w}, \vsubset{ {\boldsymbol \alpha}}{k}) &\leq \frac{1}{n} \sum_{i\in\mathcal P_k} (l_i( {\bf w}^T {\bf x}_i) + l_i^*(-\alpha_i)) + \lambda \| {\bf w}\|^2\notag\\
& = \frac{1}{n} \sum_{i\in\mathcal P_k} (- \alpha_i {\bf w}^T {\bf x}_i) + \lambda \| {\bf w}\|^2 \notag\\
& = 2\lambda \| {\bf w}\|^2.
\end{align}
It remains to bound $\| {\bf w}\|^2$.
\end{document} |
\begin{document}
\preprint{}
\title{Heralded quantum repeater based on the scattering of
photons off single emitters in one-dimensional
waveguides\footnote{published in Ann. Phys. \textbf{378}, 33-46
(2017)}}
\author{Guo-Zhu Song$^{1}$, Mei Zhang$^{1}$, Qing Ai$^{1}$, Guo-Jian Yang$^{1}$,
Ahmed Alsaedi$^{2}$, Aatef Hobiny$^{2}$,
Fu-Guo Deng$^{1,2,}$\footnote{Corresponding author: [email protected]} }
\address{$^{1}$Department of Physics, Applied Optics Beijing Area Major Laboratory,
Beijing Normal University, Beijing 100875, China\\
$^{2}$NAAM-Research Group, Department of Mathematics, Faculty of
Science, King Abdulaziz University, P.O. Box 80203, Jeddah 21589,
Saudi Arabia}
\begin{abstract}
We propose a heralded quantum repeater based on the scattering of
photons off single emitters in one-dimensional waveguides. We show
the details by implementing nonlocal entanglement generation,
entanglement swapping, and entanglement purification modules with
atoms in waveguides, and discuss the feasibility of the repeater
with currently achievable technology. In our scheme, the faulty
events can be discarded by detecting the polarization of the
photons. That is, our protocols are accomplished with a fidelity of
100\% in principle, which is advantageous for implementing realistic
long-distance quantum communication. Moreover, additional atomic
qubits are not required, but only a single-photon medium. Our scheme
is scalable and attractive since it can be realized in solid-state
quantum systems. With the great progress on controlling
atom-waveguide systems, the repeater may be very useful in quantum
information processing in the future.
\end{abstract}
\keywords{Heralded quantum repeater; one-dimensional waveguides;
scattering property; atom-waveguide systems}
\maketitle
\section{Introduction} \label{sec1}
Entanglement plays an important role in quantum communication, such
as quantum key distribution \cite{dis,tri}, quantum secret sharing
\cite{sh}, and quantum secure direct communication
\cite{QSDC1,QSDC2}. However, entangled photon pairs are produced
locally and inevitably suffer from the noise from optical-fiber
channels when they are transmitted to the parties in quantum
communication, which will decrease the coherence of the photon
systems. In order to exchange private information and avoid an
exponential decay of photons over long distance, the scheme for a
quantum repeater was proposed by Briegel et al. \cite{a} in 1998.
Its main idea is to share the entangled photon pairs in small
segments first, avoiding the exponential decay of photons with the
transmission distance, and then use entanglement swapping \cite{b}
and entanglement purification
\cite{EPP1,EPPsimon,EPPsheng1,DEPP1,DEPP2,DEPP3,DEPP4,HEPP2,HEPPWangGY,DuFFHEPP,dengreview}
to create a long-distance entangled quantum channel.
There are some interesting proposals for implementing a quantum
repeater, by utilizing different physical systems
\cite{e,KL,f,litaorepSR,LiTaoPRA}. For example, in 2001, Duan et al.
\cite{e} suggested an interesting proposal to set up a quantum
repeater with atomic ensembles. In 2006, Klein et al. \cite{KL} put
forward a robust scheme for quantum repeaters with decoherence-free
subspaces. In 2007, using the two-photon Hong-Ou-Mandel
interferometer, Zhao et al. \cite{f} proposed a robust quantum
repeater protocol. In 2016, Li, Yang, and Deng \cite{LiTaoPRA}
introduced a heralded quantum repeater for quantum communication
network based on quantum dots embedded in optical microcavities,
resorting to effective time-bin encoding. The building blocks of
quantum repeaters are experimentally realized by some research
groups, and remarkable progress has been reported
\cite{re,ma,rk,ab,led}.
In the past decade, the interaction between photons and atoms in
high-quality optical microcavities has become one of the most
important methods for implementing quantum computation and quantum
information processing. Some significant achievements
\cite{hi,gh,fi,nes,se} have been made in photon-atom systems in both
theory and experiment. With strong coupling and high-quality
cavities, they can obtain a high-fidelity quantum computation. In
2005, an interesting proposal \cite{Shen} was proposed to realize
the coupling between a single quantum emitter and a photon in
one-dimensional (1D) waveguides, which can be considered as a bad
cavity. In 2007, a similar proposal was presented to realize this
coupling using nanoscale surface plasmons \cite{exp}. In 2015,
S\"{o}llner et al. \cite{sollner1} obtained deterministic
photon-emitter coupling in photonic crystal waveguide in experiment.
In their schemes, the coupling between the emitter and the waveguide
is stronger than the atomic decay rate, but weaker than the
waveguide-loss rate, and the atomic spontaneous emission into the
waveguide becomes the main effect, called the Purcell effect. The
emitter-waveguide systems allow for interesting quantum state
manipulation and quantum information processing, such as
entanglement generation \cite{reso,Kuzyk,TCHliew}, efficient optical
switch \cite{switch}, quantum logic gates
\cite{hypercnot1,renbaocang}, and quantum state transfer
\cite{sca,man,Anpra}. However, with emitter decay and finite
coupling strength, the physical device is restricted to finite $P$
(Purcell factor), so that the scattering of photons off single
emitters may not happen at all. To solve this problem, in $2012$, Li
et al. \cite{sca} proposed a simple scattering setup to realize a
robust-fidelity atom-photon entangling gate, in which the faulty
events can be heralded by detecting the polarization of the photon
pulse.
In this paper, we present a heralded quantum repeater that allows
the nonlocal creation of the entangled state over an arbitrarily large distance
with tolerance to errors. In our scheme, since atoms can
provide long coherence time, we choose a four-level atom as the
emitter. With the scattering of photons off single emitters in 1D waveguides,
the parties in quantum communication can realize nonlocal entanglement
creation against collective noise, entanglement swapping, and
entanglement purification. Moreover, our protocols can turn errors into the detection
of photon polarization, which can be discarded. The prediction of faulty events ensures
that our repeater can be completed with a fidelity of 100\% in principle,
which is advantageous for quantum information processing.
\section{The scattering of photons off single emitters in a 1D waveguide}
\label{basic}
Let us consider a quantum system composed of a single emitter
coupled to electromagnetic modes in a 1D waveguide, as shown in Fig.
\ref{fig1}(a). We first choose a simple two-level atom as the
emitter, consisting of the ground state $|g\rangle$ and the excited
state $|e\rangle$ with the frequency difference $\omega_{a}$. Under
the Jaynes-Cummings model, the Hamiltonian for the interactions
between a set of waveguide modes and a two-level emitter reads
\cite{Shen,exp}:
\begin{eqnarray}
H=\sum_{k}\!\hbar\omega_{_k}a_{_k}^{\dag}a_{_k}+\frac{1}{2}\hbar\omega_{a}\sigma_{z}
+\sum_{k}\!\hbar
g(a_{_k}^{\dag}\sigma_{_-}\!+a_{_k}\sigma_{_+}),\;\;
\end{eqnarray}
where $a_{_{k}}$ and $a_{_{k}}^{\dag}$ are the annihilation and creation
operators of the waveguide mode with frequency $\omega_{_{k}}$, respectively.
$\sigma_{z}$, $\sigma_{_{+}}$, and $\sigma_{_{-}}$ are the inversion,
raising, and lowering operators of the two-level atom, respectively.
$g$ is the coupling strength between the atom and the electromagnetic
modes of the 1D waveguide, assumed to be the same for all modes. One can
rewrite the Hamiltonian of the system in real space as \cite{Shen,exp}
\begin{eqnarray} \label{eqa2}
H'=\hbar\!\int\!\! dk\, \omega_{_k}a_{_k}^{\dag}a_{_k}
+\hbar g\!\!\int \!\! dk (a_{_k}\sigma_{_+}e^{ikx_{a}}+h.c.)
+\,\hbar(\omega_{a}-\frac{i\gamma'}{2})\sigma_{ee},
\end{eqnarray}
where $x_{a}$ is the position of the atom, $\sigma_{ee}=|e\rangle\langle
e|$, and $\omega_{_k}=c|k|$ ($c$ is the group velocity of
propagating electromagnetic modes and $k$ is its wave vector).
$\gamma'$ is the decay rate of the atom out of the waveguide (e.g.,
the emission into the free space). Because we only care about the
interactions of the near-resonant photons with the atom, we could
make the approximation that left- and right-propagating photons form
completely separate quantum fields \cite{Shen}. Under this
approximation, the operator ${a}_{_{k}}$ in Eq. (\ref{eqa2}) can be
replaced by (${a}_{_{k,R}}+{a}_{_{k,L}}$).
\begin{figure}
\caption{ (Color online) (a) The basic structure for a photon mirror in which a
two-level atom (an emitter marked by the black dot) is coupled to a
1D waveguide (marked by the cylinder). Here the atom has a ground
state $|g\rangle$ and an excited state $|e\rangle$, and
its position is $x=0$. In an ideal situation, an incident photon
(purple, the upper left wave
packet) is fully reflected (blue, the lower left wave packet) when
it resonates with the atom, but there is a transmitted component
(black, the right wave packet) in a practical scattering \cite{exp}. (b) The heralded scattering setup with a four-level emitter: the input photon (from port $1$) is split by a 50:50 beam splitter (BS), both halves scatter off the emitter, and the output polarization is analyzed by a polarizing beam splitter (PBS).}
\label{fig1}
\end{figure}
To get the reflection and transmission coefficients of single-photon
scattering, we assume that a photon with the energy $E_{k}$ is
propagating from the left. The state of the system is described by \cite{Shen,exp}
\begin{eqnarray}
|E_{k}\rangle=c_{e}|e,vac\rangle+\int\!\!
dx \Large[\phi_{_{L}}(x)c_{_{L}}^{\dag}(x)
+\phi_{_{R}}(x)c_{_{R}}^{\dag}(x)\Large]|g,vac\rangle,
\label{eqa3}
\end{eqnarray}
where $|vac\rangle$ represents the vacuum state of photons, $c_{e}$ is the
probability amplitude of the atom in the excited state, and
$c_{_{L}}^{\dag}(x)$ ($c_{_{R}}^{\dag}(x)$) is a bosonic operator creating a
left-going (right-going) photon at position $x$. $\phi_{_{R}}(x)$ and $\phi_{_{L}}(x)$ are
the probability amplitudes of right- and left-traveling photon, respectively.
Note that the photon propagates from the left, $\phi_{_{R}}(x)$ and
$\phi_{_{L}}(x)$ could take the forms \cite{Shen,exp}
\begin{eqnarray}
\phi_{_R}(x)&=& e^{ikx}\theta(-x)+te^{ikx}\theta(x),\nonumber\\
\phi_{_L}(x)&=& re^{-ikx}\theta(-x).
\end{eqnarray}
Here $t$ and $r$ are the transmission and reflection coefficients,
respectively. The Heaviside step function $\theta(x)$ equals 1 when
$x$ is larger than zero and 0 when $x$ is smaller than zero. By
solving the time-independent Schr\"odinger equation
$H|E_{k}\rangle=E_{k}|E_{k}\rangle$, one can obtain \cite{Shen,exp}
\begin{eqnarray}
r&=&-\frac{1}{1+\gamma'/\gamma_{_{1D}}-2i\Delta/\gamma_{_{1D}}}, \nonumber\\
t&=&1+r,
\label{eqa5}
\end{eqnarray}
where $\Delta=\omega_{_{k}}-\omega_{a}$ is the photon detuning with the
two-level atom, and $\gamma_{_{1D}}=4\pi g^{2}/c$ is the decay rate of
the atom into the waveguide.
Provided that the incident photon resonates with the emitter (i.e.,
$\Delta=0$), one can easily obtain the reflection coefficient
$r=-1/(1+1/P)$, where $P=\gamma_{_{1D}}/{\gamma'}$ is the Purcell
factor. As we know, in the atom-waveguide system, the spontaneous
emission rate $\gamma_{_{1D}}$ into the 1D waveguide can be much
larger than the emission rate $\gamma'$ into all other possible
channels \cite{Shen,exp}. Considering that a high Purcell factor $P$
can be obtained in realistic systems \cite{exp}, one can get the
reflection coefficient $r\approx-1$ for this system in principle. That is, when the
photon is coupled to the emitter, the atom acts as a photon mirror
\cite{Shen}, which puts a $\pi$-phase shift on reflection. However,
when the photon is detuned from the emitter, it transmits
through the atom with no effect.
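To make this behaviour concrete, the following short Python snippet (a numerical illustration only, not part of the derivation) evaluates $r$ and $t$ from Eq. (\ref{eqa5}) on resonance: for $P\rightarrow\infty$ the atom acts as a perfect mirror, while for finite $P$ a transmitted component remains and $|r|^{2}+|t|^{2}<1$ because part of the light is lost into free space.
\begin{verbatim}
# Reflection and transmission of Eq. (eqa5):
# r = -1/(1 + gamma'/gamma_1D - 2i*Delta/gamma_1D), t = 1 + r, P = gamma_1D/gamma'.
def scattering(P, delta_over_gamma1d=0.0):
    r = -1.0 / (1.0 + 1.0 / P - 2j * delta_over_gamma1d)
    return r, 1.0 + r

for P in (1e9, 1e3, 100, 10):   # P = 1e9 approximates the ideal-mirror limit
    r, t = scattering(P)
    print(f"P={P:g}: |r|^2={abs(r)**2:.4f}, |t|^2={abs(t)**2:.4f}, "
          f"loss={1 - abs(r)**2 - abs(t)**2:.4f}")
\end{verbatim}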
Let us consider a four-level atom with degenerate ground states
$|g_{\pm}\rangle$ and excited states $|e_{\pm}\rangle$ as the
emitter in a 1D waveguide, as shown in Fig. \ref{fig1}(b). For the
emitter, the transitions of
$|g_{+}\rangle\leftrightarrow|e_{+}\rangle$ and
$|g_{-}\rangle\leftrightarrow|e_{-}\rangle$ are coupled to two
electromagnetic modes $a_{_{k,R}}$ and $a_{_{k,L}}$, with the
absorption (or emission) of right (R) and left (L) circular
polarization photons, respectively. Assuming that the spatial wave
function of the incident photon is $|\psi\rangle$, with the
scattering properties in the practical situation discussed above,
one can get \cite{sca}
\begin{eqnarray}
|g_{+}\rangle|\psi\rangle|R\rangle &\rightarrow |g_{+}\rangle|\phi\rangle|R\rangle,\;\;\;\;\;\;\;
|g_{-}\rangle|\psi\rangle|R\rangle &\rightarrow |g_{-}\rangle|\psi\rangle|R\rangle,\nonumber\\
|g_{-}\rangle|\psi\rangle|L\rangle &\rightarrow |g_{-}\rangle|\phi\rangle|L\rangle,\;\;\;\;\;\;\;
|g_{+}\rangle|\psi\rangle|L\rangle &\rightarrow |g_{+}\rangle|\psi\rangle|L\rangle.
\end{eqnarray}
Here $|\phi\rangle$ is the spatial state of the photon component
left in the waveguide after the scattering process. In general
situation, $|\phi\rangle=|\phi_{t}\rangle+|\phi_{r}\rangle$, where
$|\phi_{t}\rangle=t|\psi\rangle$ and
$|\phi_{r}\rangle=r|\psi\rangle$ refer to the transmitted and
reflected parts of the photon, respectively. When the Purcell factor
$P$ is infinite, $|\phi\rangle$ is normalized. In contrast, if the input
photon is in the horizontal linear-polarization state
$|H\rangle=(|R\rangle+|L\rangle)/\sqrt{2}$, the transformations turn
into \cite{sca}
\begin{eqnarray}
|g_{+}\rangle|\psi\rangle|H\rangle&\rightarrow
\frac{1}{2}|g_{+}\rangle[(|\phi\rangle+|\psi\rangle)|H\rangle
+(|\phi\rangle-|\psi\rangle)|V\rangle],\nonumber\\
|g_{-}\rangle|\psi\rangle|H\rangle&\rightarrow
\frac{1}{2}|g_{-}\rangle[(|\phi\rangle+|\psi\rangle)|H\rangle
-(|\phi\rangle-|\psi\rangle)|V\rangle],
\label{eqa7}
\end{eqnarray}
where $|V\rangle=(|R\rangle-|L\rangle)/\sqrt{2}$ is the vertical
linear-polarization state. Following the relation in Eq.
(\ref{eqa5}), one gets
$(|\phi\rangle+|\psi\rangle)/2=|\phi_{t}\rangle$ and
$(|\phi\rangle-|\psi\rangle)/2=|\phi_{r}\rangle$ \cite{sca}, and the
transformations in Eq. (\ref{eqa7}) are equivalent to
\begin{eqnarray}
|g_{+}\rangle|\psi\rangle|H\rangle&
\rightarrow\,|g_{+}\rangle|\phi_{t}\rangle|H\rangle+|g_{+}\rangle|\phi_{r}\rangle|V\rangle,\nonumber\\
|g_{-}\rangle|\psi\rangle|H\rangle&
\rightarrow\,|g_{-}\rangle|\phi_{t}\rangle|H\rangle-|g_{-}\rangle|\phi_{r}\rangle|V\rangle.
\end{eqnarray}
It is interesting that the scattering process generates a
vertical-polarized component. Moreover, for the outgoing photon in
state $|H\rangle$, nothing happens to the emitter, while for the
photon component in state $|V\rangle$, a state-dependent $\pi-$phase
shift occurs on the emitter.
With the principle mentioned above, Li et al. \cite{sca} constructed
a heralded setup to realize the scattering between incident photon
and the emitter in a 1D waveguide, as shown in Fig. \ref{fig1}(b).
The input photon in spatial state $|\psi\rangle$ with $|H\rangle$
(from port $1$) is split by a 50 : 50 beam splitter (BS) into two
halves that scatter with the atom simultaneously. Then, the
transmitted and reflected components travel back and exit the beam
splitter from port $1$. The corresponding transformations on the
states can be described as follows \cite{sca}:
\begin{eqnarray}\label{eq8}
\!\!\vert
\Phi_0\rangle&\;\;=&|g_{\pm}\rangle|\psi\rangle|H\rangle^{^{1}}\nonumber\\
\!\!\!\!&\stackrel{ BS}{\longrightarrow}
&\frac{1}{\sqrt{2}}|g_{\pm}\rangle|\psi\rangle|H\rangle^{^{3}}
+\frac{1}{\sqrt{2}}|g_{\pm}\rangle|\psi\rangle|H\rangle^{^{4}}\nonumber\\
\!\!\!\!&\stackrel{ S\!catter}{\longrightarrow}
&\frac{1}{\sqrt{2}}|g_{\pm}\rangle|\phi_{t}\rangle|H\rangle^{^{3}}
+\frac{1}{\sqrt{2}}|g_{\pm}\rangle|\phi_{t}\rangle|H\rangle^{^{4}}
\pm\frac{1}{\sqrt{2}}|g_{\pm}\rangle|\phi_{r}\rangle|V\rangle^{^{3}}
\pm\frac{1}{\sqrt{2}}|g_{\pm}\rangle|\phi_{r}\rangle|V\rangle^{^{4}}\nonumber\\
\!\!\!\!&\stackrel{BS}{\longrightarrow}&|g_{\pm}\rangle|\phi_{t}\rangle
|H\rangle^{^{1}}\pm|g_{\pm}\rangle|\phi_{r}\rangle|V\rangle^{^{1}}.
\end{eqnarray}
Here the superscript $i$ ($i$=1,2,3,4) is the path of the photon, shown in Fig. \ref{fig1}(b).
Note that, due to quantum destructive interference, there is no
photon component coming out from port $2$. Finally, with the help of
$PBS$, discarding the horizontal polarization output from Eq.
(\ref{eq8}) (i.e., the faulty event), one can get the
transformations as follows \cite{sca}:
\begin{eqnarray} \label{eq9}
|g_{-}\rangle|\psi\rangle|H\rangle &&\rightarrow\; -|g_{-}\rangle|\phi_{r}\rangle|V\rangle,\nonumber\\
|g_{+}\rangle|\psi\rangle|H\rangle &&\rightarrow\; +|g_{+}\rangle|\phi_{r}\rangle|V\rangle.
\end{eqnarray}
Similarly, when the incident photon is in state $|V\rangle$, discarding the faulty event
with vertical polarization output, the transformations are described as follows \cite{sca}:
\begin{eqnarray} \label{eq10}
|g_{-}\rangle|\psi\rangle|V\rangle &&\rightarrow\; -|g_{-}\rangle|\phi_{r}\rangle|H\rangle,\nonumber\\
|g_{+}\rangle|\psi\rangle|V\rangle &&\rightarrow\; +|g_{+}\rangle|\phi_{r}\rangle|H\rangle.
\end{eqnarray}
As mentioned above, $|\phi_{r}\rangle$ is the spatial wave function
of the reflected photon component after the scattering process. When
$P\rightarrow\infty$, the perfect scattering process leads to
$|\phi_{r}\rangle=-|\psi\rangle$. In the imperfect situation with a
finite $P$, there is always a transmitted part \cite{exp}, so we get
$|\phi_{r}\rangle\neq-|\psi\rangle$; the output photon with
unchanged polarization is detected and the corresponding scattering
event fails, which can be discarded. That is, the setup for
realizing the scattering event between the incident photon and the
emitter works in a heralded way.
\section{Quantum repeater based on the scattering configuration} \label{sec4}
\subsection{Robust nonlocal entanglement creation against collective noise}
\label{creation}
With the property of a photon scattering with a four-level atom
coupled to a 1D waveguide, we can design a robust scheme for the
entanglement creation on two nonlocal stationary atoms $a$ and $b$,
as shown in Fig. \ref{fig2}. Suppose that the single-photon medium
and the two stationary atoms in 1D waveguides are initially prepared
in the superposition states
$|\psi_{_{0}}\rangle^{p}=\frac{1}{\sqrt{2}}(|H\rangle+|V\rangle)$
and
$|\varphi_{i}\rangle=\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)_{i}$
(here $|0\rangle=|g_{-}\rangle,|1\rangle=|g_{+}\rangle$, $i$ = a,
b), respectively. Then the state of the system composed of the photon
and the two atoms is
\begin{eqnarray}
|\Omega_{0}\rangle=\frac{1}{2\sqrt{2}}(|H\rangle+|V\rangle)
\otimes(|0\rangle+|1\rangle)_{a}\otimes(|0\rangle+|1\rangle)_{b}.\;\;
\end{eqnarray}
Our scheme works with the following steps.
\begin{figure}
\caption{ (Color online) Schematic setup for the creation of maximally entangled
states on two nonlocal atoms $a$ and $b$ in 1D waveguides.
$PBS\pm$ transmits photons with polarization
$|+\rangle$ and reflects photons with polarization $|-\rangle$,
where $|\pm\rangle=(1/\sqrt{2})(|H\rangle\pm|V\rangle)$.}
\label{fig2}
\end{figure}
First, the $|H\rangle$ and $|V\rangle$ components of the input
photon are spatially split by a polarizing beam splitter ($PBS$). In
detail, the photon in state $|H\rangle$ passes through both
$PBS_{1}$ and $PBS_{2}$ towards the setup to scatter with atom $a$.
While the component in state $|V\rangle$ is reflected into the other
arm of the interferometer by $PBS_{1}$, and is reflected by $TR_1$
into the channel, having no interaction with atom $a$. After the
scattering process, the part interacting with atom $a$ travels
through $TR_1$ into the channel, but a little later than the other
part. The state of the whole system at the entrance of the channel
becomes $|\Omega_{1}\rangle$. Here
\begin{eqnarray}
|\Omega_{1}\rangle&=&|\Omega_{_{1S}}\rangle+|\Omega_{_{1L}}\rangle,\nonumber\\
|\Omega_{_{1S}}\rangle&=&\frac{1}{2\sqrt{2}}|V\rangle_{_{S}}
\otimes (|0\rangle+|1\rangle)_{a}
\otimes (|0\rangle+|1\rangle)_{b},\nonumber\\
|\Omega_{_{1L}}\rangle&=&\frac{1}{2\sqrt{2}}|V\rangle_{_{L}}
\otimes (|0\rangle-|1\rangle)_{a}
\otimes (|0\rangle+|1\rangle)_{b},\;\;\;\;\;\;
\end{eqnarray}
where $|\Omega_{_{1S}}\rangle$ and $|\Omega_{_{1L}}\rangle$ represent
the two parts of the photon going through the short path (S) and the long path (L)
to the channel, respectively.
Second, as the two parts in the channel are near and their
polarization states are both in $|V\rangle$, the influences of the
collective noise in the quantum channel on these two parts are the
same \cite{PLA,yamamoto,lixihan}, which can be described by
$|V\rangle\;\rightarrow\;\gamma|V\rangle+\delta|H\rangle$, where
$|\gamma|^{2}+|\delta|^{2}=1$. After the photon travels in the long
quantum channel, the state of the whole system at the output
port becomes
\begin{eqnarray}
|\Omega_{2}\rangle &=&|\Omega_{_{2S}}\rangle+|\Omega_{_{2L}}\rangle,\nonumber\\
|\Omega_{_{2S}}\rangle &=& \frac{1}{2\sqrt{2}}(\gamma|V\rangle_{_{S}}
+\delta|H\rangle_{_{S}})(|0\rangle+|1\rangle)_{a}(|0\rangle+|1\rangle)_{b},\nonumber\\
|\Omega_{_{2L}}\rangle &=& \frac{1}{2\sqrt{2}}(\gamma|V\rangle_{_{L}}
+\delta|H\rangle_{_{L}})(|0\rangle-|1\rangle)_{a}(|0\rangle+|1\rangle)_{b}.
\end{eqnarray}
Third, getting out from the noisy channel, the early part of the
photon in state $|\Omega_{_{2S}}\rangle$ is reflected by the optical
device $TR_{2}$, while the late part in state
$|\Omega_{_{2L}}\rangle$ transmits through $TR_{2}$ into $PBS_{3}$.
After that, the components in states $|H\rangle$ and $|V\rangle$ of
the late part are split into two halves that scatter with atom $b$
and travel back to $PBS_{3}$ simultaneously. Subsequently, the early
part and the late part are rejoined in $PBS_{4}$, and they are
separated into two paths $1$ and $2$. The state of the whole system
evolves into
\begin{eqnarray} \label{eq15}
|\Omega_{3}\rangle&=&\frac{1}{2\sqrt{2}}\gamma(|H\rangle+|V\rangle)_{1}
(|0\rangle|0\rangle+|1\rangle|1\rangle)_{ab}
-\frac{1}{2\sqrt{2}}\gamma(|H\rangle-|V\rangle)_{1}(|0\rangle|1\rangle
+|1\rangle|0\rangle)_{ab}\nonumber\\
&&+\frac{1}{2\sqrt{2}}\delta(|H\rangle+|V\rangle)_{2}(|0\rangle|0\rangle
+|1\rangle|1\rangle)_{ab}
+\frac{1}{2\sqrt{2}}\delta(|H\rangle-|V\rangle)_{2}(|0\rangle|1\rangle
+|1\rangle|0\rangle)_{ab}.\;\;\;\;\;\;
\end{eqnarray}
Finally, the two parts in paths $1$ and $2$ go through $PBS\pm$,
and the photon is detected by one of the four single-photon
detectors $D_{1}$, $D_{2}$, $D_{3}$, and $D_{4}$. If the detector
$D_{2}$ or $D_{3}$ clicks, we should put a $\sigma_{x}$ operation on
atom $b$. If the detector $D_{1}$ or $D_{4}$ clicks, nothing needs
to be done. Eventually, the state of the system composed of atoms
$a$ and $b$ collapses to the maximally entangled state
$|\phi^{+}\rangle_{ab}=\frac{1}{\sqrt{2}}(|0\rangle|0\rangle+|1\rangle|1\rangle)_{ab}$.
Note that, for successful events of imperfect processes, i.e., with
finite $P$, the polarization is swapped but
$|\phi_{r}\rangle\neq-|\psi\rangle$ in Eq. (\ref{eq9}) and Eq.
(\ref{eq10}). This causes a problem that the spatial wave functions
in two arms of the interferometer are not matched at $PBS_{4}$. To
overcome the unbalance between the two spatial wave functions, a
waveform corrector ($WFC$) is adopted in one arm of the
interferometer. In fact, the $WFC$ can be realized by a second
scattering module, which is identical to that of Fig. \ref{fig1}(b).
In detail, we make the auxiliary emitter in $WFC$ permanently stay
in $|g_{-}\rangle$, before and after the scattering process, a
quarter wave plate is needed to implement
$|V\rangle$$\leftrightarrow$$|L\rangle$. With the waveform
correctors, the corresponding wave packet is changed from
$|\psi\rangle$ to $|\phi_{r}\rangle$ without entangling with the
auxiliary emitter. The $WFC$ decreases the overall success
probability of the entanglement creation, but does not affect the
fidelity in principle.
Our setup for the robust entanglement creation on two nonlocal atoms
has some interesting features. First, the early part and the late
part of the photon in the channel are so near that they suffer from
the same collective noise \cite{PLA,yamamoto,lixihan}, and an
arbitrary qubit error caused by the long noisy channel can be
perfectly settled, i.e., as shown in Eq. ($\ref{eq15}$), the
probability of the entanglement creation doesn't depend on the
values of collective noise parameters $\gamma$ and $\delta$. Second,
the faulty interactions between the photon and two atoms can be
heralded by the detectors $D_{1}$, $D_{2}$, $D_{3}$, and $D_{4}$. In
detail, if none of these detectors clicks, the event of the
entanglement creation fails, which can be discarded. These good
features make our setup have good applications in quantum repeaters
for long-distance quantum communication.
\subsection{Entanglement swapping}
\label{swapping}
\begin{figure*}
\caption{ (Color online) Schematic diagram showing the principle of entanglement
swapping. $QWP_{i}$ denotes a quarter-wave plate.}
\label{fig3}
\end{figure*}
The atomic entangled state can be extended to a longer communication
distance via local entanglement swapping. Inspired by the recent
work \cite{Sahandarxiv}, we construct the entanglement swapping
scheme, using the scattering of a single photon combined with
measurements on the atoms, as shown in Fig. \ref{fig3}. The two
pairs of nonlocal atoms $ac$ and $bd$ are both initially prepared in
the maximally entangled states
$|\phi^+\rangle_{ac}=\frac{1}{\sqrt{2}}(|0\rangle|0\rangle+|1\rangle|1\rangle)_{ac}$
and
$|\phi^+\rangle_{bd}=\frac{1}{\sqrt{2}}(|0\rangle|0\rangle+|1\rangle|1\rangle)_{bd}$,
respectively. With the Bell-state measurement on local atoms $cd$
and single-qubit operations, the two nonlocal atoms $ab$ can
collapse to the maximally entangled state
$|\phi^+\rangle_{ab}=\frac{1}{\sqrt{2}}(|0\rangle|0\rangle+|1\rangle|1\rangle)_{ab}$,
which indicates that the nonlocal entanglement for a longer
communication is realized. The principle of quantum swapping is
shown in Fig. \ref{fig3}, and the details are described as follows.
First, suppose that the input photon $p$ is prepared in the
superposition state $|\psi_{_{0}}\rangle^{p}=\frac{1}{\sqrt{2}}(|H\rangle+|V\rangle)$,
and the initial state of the whole system composed of photon $p$ and
the four atoms $acbd$ is $|\Psi_{0}\rangle$. Here,
\begin{eqnarray}
|\Psi_{0}\rangle&=&\frac{1}{2\sqrt{2}}(|H\rangle\!+
\!|V\rangle)\otimes(|0\rangle|0\rangle\!+\!|1\rangle|1\rangle)_{ac}
\otimes(|0\rangle|0\rangle \!+\!|1\rangle|1\rangle)_{bd}.
\end{eqnarray}
The injected photon $p$ passes through $PBS_{1}$, which transmits the
photon in state $|H\rangle$ and reflects the photon in state
$|V\rangle$. The photon in state $|V\rangle$ is reflected by
$PBS_{1}$ and $PBS_{2}$ into the scattering setup composed of atom
$c$, while the other part in state $|H\rangle$ goes through
$QWP_{1}$ and is reflected by $PBS_{3}$ into the scattering setup to scatter with
atom $d$. Then, the two parts of the photon $p$ are
rejoined in $PBS_{4}$. After that,
the state of the whole system is changed from $|\Psi_{0}\rangle$ to $|\Psi_{1}\rangle$. Here,
\begin{eqnarray}
|\Psi_{1}\rangle&=&\frac{(|H\rangle+|V\rangle)}{4\sqrt{2}}\big[(|00\rangle-|11\rangle)_{cd}
\otimes(|00\rangle+|11\rangle)_{ab}
+(|00\rangle+|11\rangle)_{cd}\otimes(|00\rangle-|11\rangle)_{ab}\big]\nonumber\\
&&-\frac{(|H\rangle-|V\rangle)}{4\sqrt{2}}\big[\!(|01\rangle-|10\rangle)_{cd}
\otimes(|01\rangle+|10\rangle)_{ab}
+(|01\rangle+|10\rangle)_{cd}\otimes(|01\rangle-|10\rangle)_{ab}\big].
\end{eqnarray}
Second, a Hadamard operation $H_{a}$ (e.g., using a $\pi/2$ microwave pulse or optical pulse
\cite{Berezovsky,Press}) is performed on the two local atoms $c$ and $d$
in the waveguides, respectively. Then, the state of the whole system becomes
\begin{eqnarray}
|\Psi_{2}\rangle&=&\frac{(|H\rangle+|V\rangle)}{4\sqrt{2}}\big[(|01\rangle+|10\rangle)_{cd}
\otimes(|00\rangle+|11\rangle)_{ab}
+(|00\rangle+|11\rangle)_{cd}\otimes(|00\rangle-|11\rangle)_{ab}\big]\nonumber\\
&&+\frac{(|H\rangle-|V\rangle)}{4\sqrt{2}}\big[\!(|01\rangle-|10\rangle)_{cd}
\otimes(|01\rangle+|10\rangle)_{ab}
-(|00\rangle-|11\rangle)_{cd}\otimes(|01\rangle-|10\rangle)_{ab}\big].
\end{eqnarray}
Then, the photon $p$ travels through $PBS\pm$ and is detected by single-photon detectors.
Meanwhile, the state of atom $c$ ($d$) is measured by an external classical field.
Third, with the outcomes of the detectors for photon $p$ and the
measurements on atoms $cd$, one can see that the four Bell states of
the atoms $a$ and $b$ are completely distinguished. Finally, the
parties can perform corresponding operations (see Table
\ref{tabone}) on atom $a$ to complete the quantum swapping. After
that, the state of the two nonlocal atoms $a$ and $b$ in a longer
distance collapses to the maximally entangled state
$|\phi^{+}\rangle_{ab}=\frac{1}{\sqrt{2}}(|0\rangle|0\rangle+|1\rangle|1\rangle)_{ab}$.
It is important to note that the wrong interactions between the photon
and the atoms are heralded by the photon detectors in our protocol. In
detail, if neither of the detectors $D_{1}$ and $D_{2}$ clicks, the
interactions between the photon and the two atoms in the 1D waveguides are
faulty, and the corresponding event can be discarded. Therefore, with the prediction of
the faulty events, the parties can obtain a high-fidelity nonlocal
atomic entangled state in a longer distance.
\begin{table} [h]
\centering
\caption{The operations on atom $a$ corresponding to the outcomes
of the photon detectors and the states of atoms $cd$.}
\begin{tabular}{ccc|c}
\hline \hline
$\;\;$Photon click$\;\;\;\;\;\;$ & Atom $c$ $\;\;\;\;\;\;$ & Atom $d$
$\;\;$ & $\;$ Operations on atom $a$ $\;$ \\
\hline
$\;\;$$D_{1}$ $\;\;\;\;\;\;$ & $|0\rangle(|1\rangle)$$\;\;\;\;\;\;\;\;$
& $|1\rangle(|0\rangle)$ $\;\;$ &$\;\;$ $I$ $\;\;$ \\
$\;\;$$D_{1}$ $\;\;\;\;\;\;$ & $|0\rangle(|1\rangle)$$\;\;\;\;\;\;\;\;$
& $|0\rangle(|1\rangle)$ $\;\;$ &$\;\;$ $\sigma_{z}$ $\;\;$ \\
$\;\;$$D_{2}$ $\;\;\;\;\;\;$ & $|0\rangle(|1\rangle)$$\;\;\;\;\;\;\;\;$
& $|1\rangle(|0\rangle)$ $\;\;$ &$\;\;$ $\sigma_{x}$ $\;\;$ \\
$\;\;$$D_{2}$ $\;\;\;\;\;\;$ & $|0\rangle(|1\rangle)$$\;\;\;\;\;\;\;\;$
& $|0\rangle(|1\rangle)$ $\;\;$ &$\;\;$ $\sigma_{z}\sigma_{x}$ $\;\;$ \\
\hline \hline
\end{tabular} \label{tabone}
\end{table}
\subsection{Entanglement purification}
\label{sec43}
In section \ref{creation} and section \ref{swapping}, we only discussed
the influence of noise on flying photons in the long quantum
channel. In practical situations, errors also occur in the
stationary atoms embedded in 1D waveguides, which will decrease the
entanglement of the nonlocal two-atom systems. Using entanglement
purification
\cite{EPP1,EPPsimon,EPPsheng1,DEPP1,DEPP2,DEPP3,DEPP4,HEPP2,HEPPWangGY,DuFFHEPP,dengreview},
we can distill some high-fidelity maximally entangled states from a
mixed entangled state ensemble. Now, we start to explain the
principle of our purification protocol for nonlocal atomic entangled
states, assisted by the scattering of photons off single atoms in 1D
waveguides, as shown in Fig. \ref{fig4}.
Suppose that the initial mixed state shared by two remote parties,
say Alice and Bob, can be written as
\begin{eqnarray}
\rho_{ab}=F|\phi^{+}\rangle_{ab}\langle\phi^{+}|+(1-F)|\psi^{+}\rangle_{ab}\langle\psi^{+}|,
\end{eqnarray}
where $|\psi^{+}\rangle_{ab}=\frac{1}{\sqrt{2}}(|0\rangle|1\rangle
+|1\rangle|0\rangle)_{ab}$. The subscripts $a$ and $b$ represent the
single atoms in 1D waveguides owned by Alice and Bob, respectively.
$F$ is the initial fidelity of the state $|\phi^{+}\rangle$. By
selecting two pairs of nonlocal entangled two-atom systems, the four
atoms are in the states
$|\phi^{+}\rangle_{a_{1}b_{1}}|\phi^{+}\rangle_{a_{2}b_{2}}$ with
the probability of $F^{2}$,
$|\phi^{+}\rangle_{a_{1}b_{1}}|\psi^{+}\rangle_{a_{2}b_{2}}$ and
$|\psi^{+}\rangle_{a_{1}b_{1}}|\phi^{+}\rangle_{a_{2}b_{2}}$ with a
probability of $F(1-F)$, and
$|\psi^{+}\rangle_{a_{1}b_{1}}|\psi^{+}\rangle_{a_{2}b_{2}}$ with a
probability of $(1-F)^{2}$, respectively. Our entanglement
purification protocol for nonlocal entangled atom pairs works with
the following steps.
First, both Alice and Bob prepare an optical pulse in the
superposition state $\frac{1}{\sqrt{2}}(|H\rangle+|V\rangle)$ and
let them pass through the equipments shown in Fig.
\ref{fig4}. Here, we choose the case $|\phi^{+}\rangle_{a_{1}b_{1}}|\phi^{+}\rangle_{a_{2}b_{2}}$
to illustrate the principle. To simplify the discussion, we only discuss the
interactions at Alice's side; Bob needs to complete the same process simultaneously.
For Alice, the $|H\rangle$ and $|V\rangle$
components of the input photon 1 are spatially split by $PBS_{1}$.
In detail, the component in $|V\rangle$ is reflected by both
$PBS_{1}$ and $PBS_{2}$ to the scattering setup composed of atom $a_{2}$,
whereas the component in $|H\rangle$ goes through $QWP_{2}$ and is
reflected by $PBS_{3}$ to the scattering setup containing atom $a_{1}$. After that,
the state of the whole system is changed from $|\Phi_{0}\rangle$ to $|\Phi_{1}\rangle$, where
\begin{eqnarray}
|\Phi_{0}\rangle&=&\frac{1}{2}(|H\rangle+|V\rangle)_{1}
\,\,(|H\rangle+|V\rangle)_{2}
\,\,|\phi^{+}\rangle_{a_{1}b_{1}}|\phi^{+}\rangle_{a_{2}b_{2}},\nonumber\\
|\Phi_{1}\rangle&=&\frac{1}{4}|H\rangle_{1}
\,(|H\rangle+|V\rangle)_{2}\otimes(0000-0011+1100-1111)_{a_{1}b_{1}a_{2}b_{2}}\nonumber\\
&&+\frac{1}{4}|V\rangle_{1}\,(|H\rangle+|V\rangle)_{2}
\otimes(0000+0011-1100-1111)_{a_{1}b_{1}a_{2}b_{2}}.
\end{eqnarray}
\begin{figure*}
\caption{(Color online) Schematic setup showing the principle of the
atomic entanglement purification protocol based on the scattering of
photons off single emitters.}
\label{fig4}
\end{figure*}
Second, the two parts of photon 1 are rejoined at $PBS_{4}$ and
travel through a $PBS\pm$. Meanwhile, the photon 2 at Bob's place
undergoes the same process as photon 1 at Alice's side simultaneously. After the
above interactions, the state of the whole system collapses into
$|\Phi_{2}\rangle$. Here
\begin{eqnarray}
|\Phi_{2}\rangle&=&\frac{1}{4}(|H\rangle\!+\!|V\rangle)_{1}(|H\rangle+|V\rangle)_{2}
\,(|0000\rangle\!+\!|1111\rangle)_{a_{1}b_{1}a_{2}b_{2}}\nonumber\\
&&+\frac{1}{4}(|H\rangle-|V\rangle)_{1}(|H\rangle-|V\rangle)_{2}
\,(|0011\rangle+|1100\rangle)_{a_{1}b_{1}a_{2}b_{2}}.
\end{eqnarray}
Finally, photon 1 and photon 2 are probed by single-photon detectors.
Similarly, the evolution of the other three cases can be described as follows:
\begin{eqnarray}
\!\!\!\!&&\frac{1}{2}(|H\rangle+|V\rangle)_{1}(|H\rangle+|V\rangle)_{2}
\,|\phi^{+}\rangle_{a_{1}b_{1}}|\psi^{+}\rangle_{a_{2}b_{2}}\nonumber\\
&&\rightarrow-\frac{1}{4}(|H\rangle+|V\rangle)_{1}(|H\rangle-|V\rangle)_{2}
\,(|0001\rangle+|1110\rangle)_{a_{1}b_{1}a_{2}b_{2}}\nonumber\\
&&\;\;\;\;\;-\frac{1}{4}(|H\rangle-|V\rangle)_{1}\!(|H\rangle+|V\rangle)_{2}
\,(|0010\rangle+|1101\rangle)_{a_{1}b_{1}a_{2}b_{2}},
\end{eqnarray}
\begin{eqnarray}
\!\!\!\!&&\frac{1}{2}(|H\rangle+|V\rangle)_{1}(|H\rangle+|V\rangle)_{2}
\,|\psi^{+}\rangle_{a_{1}b_{1}}|\phi^{+}\rangle_{a_{2}b_{2}}\nonumber\\
&&\rightarrow\frac{1}{4}(|H\rangle+|V\rangle)_{1}(|H\rangle-|V\rangle)_{2}
\,(|0100\rangle+|1011\rangle)_{a_{1}b_{1}a_{2}b_{2}}\nonumber\\
&&\;\;\;\;\;+\frac{1}{4}(|H\rangle-|V\rangle)_{1}(|H\rangle+|V\rangle)_{2}
\,(|0111\rangle+|1000\rangle)_{a_{1}b_{1}a_{2}b_{2}},
\end{eqnarray}
and
\begin{eqnarray}
\!\!\!\!&&\frac{1}{2}(|H\rangle+|V\rangle)_{1}(|H\rangle+|V\rangle)_{2}
\,|\psi^{+}\rangle_{a_{1}b_{1}}|\psi^{+}\rangle_{a_{2}b_{2}}\nonumber\\
&&\rightarrow-\frac{1}{4}(|H\rangle+|V\rangle)_{1}(|H\rangle+|V\rangle)_{2}
\,(|0101\rangle+|1010\rangle)_{a_{1}b_{1}a_{2}b_{2}}\nonumber\\
&&\;\;\;\;\;-\frac{1}{4}(|H\rangle-|V\rangle)_{1}(|H\rangle-|V\rangle)_{2}
\,(|0110\rangle+|1001\rangle)_{a_{1}b_{1}a_{2}b_{2}}.
\end{eqnarray}
The measurement results of all cases are shown in Table \ref{tab2}.
With the outcomes of four detectors, we can distill
$|\phi^{+}\rangle_{a_{1}b_{1}}|\phi^{+}\rangle_{a_{2}b_{2}}$ and
$|\psi^{+}\rangle_{a_{1}b_{1}}|\psi^{+}\rangle_{a_{2}b_{2}}$ from
the four cases mentioned above.
\begin{table} [h] \label{tab2}
\centering \caption{The results of the four single-photon detectors
corresponding to the initial entangled states of the four atoms.}
\begin{tabular}{cc|c}
\hline \hline
$\;\;\;$ Initial &$\;\;\;\;\;\;$ states $\;\;\;\;\;\;\;\;\;\;$
&$\;\;\;\;\;\;\;$ Photons $\;\;$ measurement $\;\;\;\;$ \\
\hline
$\;\;\;$($a_{1}b_{1}$) $\;$ &$\;$ ($a_{2}b_{2}$) $\;\;$ &$\;\;$
Detector $\;\;$ click $\;\;\;\;$ \\
\hline
$\;\;\;$$|\phi^{+}\rangle$ $\;$ &$\;$ $|\phi^{+}\rangle$$\;\;$
&$\;\;$$D_{2}D_{4}$ $\;\;\;\;$or $\;\;\;\;$$D_{1}D_{3}$ \\
$\;\;\;$$|\phi^{+}\rangle$ $\;$ &$\;$ $|\psi^{+}\rangle$$\;\;$
&$\;\;$$D_{2}D_{3}$ $\;\;\;\;$or $\;\;\;\;$$D_{1}D_{4}$ \\
$\;\;\;$$|\psi^{+}\rangle$ $\;$ &$\;$ $|\phi^{+}\rangle$$\;\;$
&$\;\;$$D_{2}D_{3}$ $\;\;\;\;$or $\;\;\;\;$$D_{1}D_{4}$ \\
$\;\;\;$$|\psi^{+}\rangle$ $\;$ &$\;$ $|\psi^{+}\rangle$$\;\;$
&$\;\;$$D_{2}D_{4}$ $\;\;\;\;$or $\;\;\;\;$$D_{1}D_{3}$ \\
\hline \hline
\end{tabular} \label{tab2}
\end{table}
Third, to recover the entangled states of atoms $a_{1}$ and $b_{1}$,
Alice and Bob should perform a Hadamard operation $H_{a}$ on
the two nonlocal atoms $a_{2}$ and $b_{2}$ in the waveguides, respectively. Then,
Alice and Bob measure the states of the two atoms $a_{2}$ and $b_{2}$, and
compare their results with the help of classical communication. If
the results are the same, nothing needs to be done; otherwise,
a $\sigma_{z}$ operation needs to be applied to atom $a_{1}$. From Table \ref{tab2},
one can see that there are two cases in the reserved entangled pairs $a_{1}$ and $b_{1}$.
One is $|\phi^{+}\rangle_{a_{1}b_{1}}$ with a probability of $F^{2}$, and
the other one is $|\psi^{+}\rangle_{a_{1}b_{1}}$ with a probability of $(1-F)^{2}$.
Therefore, in the filtered states, the probability of $|\phi^{+}\rangle_{a_{1}b_{1}}$ is
$F^{'}=\frac{F^{2}}{F^{2}+(1-F)^{2}}$.
That is, after the purification process, the fidelity of $|\phi^{+}\rangle_{a_{1}b_{1}}$
becomes $F^{'}$. When $F>\frac{1}{2}$, one can easily get $F^{'}>F$.
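As a quick numerical illustration of this gain (assuming, as an idealization, that each round consumes fresh pairs of equal fidelity), the map $F\rightarrow F^{2}/[F^{2}+(1-F)^{2}]$ can simply be iterated:
\begin{verbatim}
# Purified fidelity F' = F^2 / (F^2 + (1-F)^2); iterating assumes fresh,
# equal-fidelity pairs in every round (an idealization for illustration only).
def purify(F):
    return F ** 2 / (F ** 2 + (1 - F) ** 2)

F = 0.70
for k in range(1, 4):
    F = purify(F)
    print(f"round {k}: F = {F:.4f}")   # 0.8448, 0.9674, 0.9989
\end{verbatim}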
\section{Discussion and summary}\label{sec6}
\begin{figure}
\caption{(Color online) The success probability $p_{s}$ of the scattering process versus (a) the Purcell factor $P$ on resonance ($\Delta=0$) and (b) the detuning parameter $\Delta/\gamma_{_{1D}}$.}
\label{fig5}
\end{figure}
We have proposed a heralded scheme for a quantum repeater, including
robust nonlocal entanglement creation against collective noise,
entanglement swapping, and entanglement purification modules. The
key element in our protocol is the scattering process between
photons and atoms in 1D waveguides. In the following, we
discuss the performance of our quantum repeater under practical
conditions, defining $p_{s}=|\langle\psi|\phi_{r}\rangle|^{2}$ as
the success probability of the scattering event in the heralded
protocol, as shown in Fig. \ref{fig1}(b). Here $|\psi\rangle$ and
$|\phi_{r}\rangle$ are the spatial wave functions of the incident
photon and the reflected photon component after the scattering
event, respectively. For perfect scattering event, i.e., with
$P\rightarrow\infty$, $|\phi_{r}\rangle=-|\psi\rangle$, and the
success probability $p_{s}$ is $100\%$. Whereas, in realistic
situations, with finite $P$, $|\phi_{r}\rangle\neq-|\psi\rangle$, it
includes two cases: one is the successful event of imperfect
processes, where the polarization of output photon is changed but
$|\phi_{r}\rangle=r|\psi\rangle$ ($|r|<1$); the other one is that
the scattering event between atoms and photons doesn't happen at
all. For the latter case, the output photons with unchanged
polarization are detected at the entrance, and the corresponding
quantum computation is discarded. Assuming that the linear optical
elements are perfect in our protocols, the heralded mechanism
ensures that the faulty events cannot influence the fidelity of our
scheme but only decrease the efficiency, since the success probability
$p_{s}$ is determined by the quality of the atom-waveguide systems.
\begin{figure}
\caption{ (Color online) The success probabilities of our
entanglement creation (solid line, black), entanglement swapping
(dashed line, red), entanglement purification (dotted line, blue)
protocols vs the Purcell factor $P$. Here, the detuning parameter is
$\Delta=0$.}
\label{fig6}
\end{figure}
As mentioned above, a high Purcell factor is needed in our scheme,
which can effectively improve the performance of our protocols. In
the last decade, great progress has been made in the
emitter-waveguide systems in both theory and experiment. In 2005,
Vlasov et al. \cite{Vlasov2005} experimentally demonstrated that a
Purcell factor approaching 60 can be observed in low-loss silicon
photonic crystal waveguides. In 2006, Chang et al. \cite{ChangDE}
presented a scheme that a dipole emitter is coupled to a nanowire or
a metallic nanotip, in which a Purcell factor $P=5.2\times10^2$ is
theoretically obtained for a silver nanowire. Subsequently, using
the surface plasmons of a conducting nanowire, Chang et al.
\cite{ChangPRB} proposed a method to obtain an effective Purcell
factor, which can reach $10^3$ in realistic systems in principle. In
addition, short waveguide lengths of only 10 to 20 unit cells were
theoretically found by Manga Rao and Hughes \cite{MangaRao} to
produce a very large Purcell factor in 2007, and the experimental
progress on short photonic crystal waveguides was reported by
Dewhurst et al. \cite{Dewhurst2010} and Hoang et al.
\cite{Hoang2012}. Later, based on subwavelength confinement of
optical fields near metallic nanostructures, Akimov et al.
\cite{Akimov} demonstrated a broadband approach for manipulating
photon-emitter interactions. In 2008, photonic crystal waveguides
were exploited by Hansen et al. \cite{Hansen} to enable single
quantum dots to exhibit nearly perfect spontaneous emission into the
guided modes ($\gamma_{_{1D}}\gg\gamma'$), where the light-matter
coupling strength is largely enhanced. In 2010, a Purcell factor of
$P=5.2$ was experimentally observed for single quantum dots coupled
to a photonic crystal waveguide \cite{Thyr2010}. In 2012, Goban
\cite{GobanKimble} reported the experimental implementation of a
state-insensitive, compensated optical trap for single Cs atoms,
which provides the precise atomic spectroscopy near dielectric
surfaces. Moreover, in 2013, Hung et al. \cite{Hung} proposed a
protocol that one atom trapped in single nanobeam structure could
provide a resonant probe with transmission $|t|^{2}\leq10^{-2}$ in
theory. A similar scheme was realized in experiment by Goban et al.
\cite{Goban} in 2014. Recently, due to the coupling of a single
emitter to a dielectric slot waveguide, a high Purcell factor
$P=31$ was also observed by Kolchin et al. \cite{Kolchin} in
experiment.
As illustrated in section \ref{basic}, we can obtain the reflection
coefficient for an incident photon scattering with an atom in the 1D
waveguide. On resonance, $r=-1/(1+1/P)$, and we get the relation
between the success probability $p_{s}$ (i.e., $|r|^{2}$) and the
Purcell factor $P$, as shown in Fig. \ref{fig5}(a). Moreover, the
scattering quality is also influenced by the nonzero photonic
detuning, and the details are described in Fig. \ref{fig5}(b), where
$p_{s}$ is plotted as a function of the detuning parameter
$\Delta/\gamma_{_{1D}}$. From Fig. 5, one can see that the success
probability $p_{s}$ would exceed $90\%$ on the condition that the
Purcell factor $P\geq50$ and the detuning parameter
$\Delta/\gamma_{_{1D}}\leq0.13$, which could be achievable in
realistic systems. For instance, when we choose $P=100$ and
$\Delta=0.1\gamma_{_{1D}}$ for atom-waveguide systems, the success
probability $p_{s}$ of the scattering process in Fig. \ref{fig1}(b)
can reach $94.33\%$. In our protocols, the faulty events between
photons and atomic qubits can be heralded by single-photon
detectors. However, the imperfection coming from photon loss is
still an inevitable problem in our scheme. The photon loss is caused
by various drawbacks, such as the fiber absorption, the imperfection
of 1D waveguides, and the inefficiency of the single-photon
detector. In fact, if optical losses appear in the prediction of
faulty events, the fidelities of our protocols cannot be unity as
faulty scattering events will not always be detected. Recently, many
proposals have been presented to solve the problems of photon loss
\cite{Gisin,Kocsis,Osorio,Wangtiejun}.
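For reference, the numbers quoted above follow directly from $p_{s}=|r|^{2}$ with $r=-1/(1+1/P-2i\Delta/\gamma_{_{1D}})$; the short script below is only a numerical illustration.
\begin{verbatim}
# Success probability p_s = |r|^2 of the heralded scattering event.
def p_s(P, delta_over_gamma1d=0.0):
    r = -1.0 / (1.0 + 1.0 / P - 2j * delta_over_gamma1d)
    return abs(r) ** 2

print(f"P=50,  Delta=0.13*gamma_1D: p_s = {p_s(50, 0.13):.4f}")   # about 0.90
print(f"P=100, Delta=0.10*gamma_1D: p_s = {p_s(100, 0.10):.4f}")  # 0.9433, i.e. 94.33%
\end{verbatim}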
Provided that the linear optical elements are perfect in our scheme,
due to the heralded mechanism, only when no faulty scattering events
are detected in our protocols, can we obtain the quantum repeater
successfully. Here, the numbers of basic scattering events (Fig.
\ref{fig1}(b)) involved in our entanglement creation, swapping, and
purification protocols are three, two, and four, respectively. We
calculate the success probabilities of our entanglement creation,
swapping, and purification protocols as a function of the Purcell
factor $P$, as shown in Fig. \ref{fig6}. Recently, Arcari et al.
\cite{Arcari2014} exploited the photonic crystal waveguide to
experimentally realize near-unity coupling efficiency of a quantum
dot to the waveguide mode. In their experiment, the decay rate of
the quantum dot into the waveguide is $\gamma_{_{1D}}=6.182$ GHz,
and the decay rate of the quantum emitter into free space and all
other modes is $\gamma'=98$ MHz, which indicates that the Purcell
factor $P=63.1$ can be implemented. The resonance frequency of the
quantum dot is $\omega_{a}=2\times10^{6}$ GHz. With these
experimental parameters mentioned above, we find that the success
probabilities of our entanglement creation, swapping, and
purification protocols are $p_{1}=91.00\%$, $p_{2}=93.90\%$, and
$p_{3}=88.18\%$, respectively, when the detuning parameter is
$\Delta=0$. With the remarkable progress in photonic nanostructures
\cite{Lodahl2015}, our protocols for the heralded quantum repeater
may be experimentally feasible in the near future.
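These values follow directly from $p_{s}$ and the number of scattering events in each protocol; the short calculation below (an illustration only) reproduces them for $P=63.1$ and $\Delta=0$.
\begin{verbatim}
# With P = 63.1 and Delta = 0, p_s = (1/(1 + 1/P))^2; the creation, swapping and
# purification protocols involve 3, 2 and 4 heralded scattering events, respectively.
P = 63.1
p_s = (1.0 / (1.0 + 1.0 / P)) ** 2
for name, m in (("creation", 3), ("swapping", 2), ("purification", 4)):
    print(f"{name}: p_s^{m} = {p_s ** m:.4f}")   # 0.9100, 0.9390, 0.8818
\end{verbatim}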
Compared with other schemes, the protocols we present for the heralded
quantum repeater have some interesting features. First,
the faulty events caused by frequency mismatches, weak coupling, atomic
decay into free space, or finite bandwidth of the incident photon can be
turned into detection of the output photon polarization, and that just
affects the efficiency of our protocols, not the fidelity. In other words,
our scheme either succeeds with a perfect fidelity or fails in
a heralded way, which is an advantageous feature for quantum
communication. Second, our scheme focuses on 1D waveguides, in which the modes
can be highly dispersive. In a waveguide, the quantum emitter can efficiently couple
single photons to the propagating modes over a wide bandwidth, which provides an
alternative to the high-Q cavity case for enhancing light-matter interaction.
Third, motivated by recent experimental progress, our scheme for the
heralded quantum repeater is feasible in some other quantum
systems, such as superconducting quantum circuit coupled to
transmission lines \cite{Loo}, quantum dot
embedded in a nanowire \cite{Munsch}, and photonic crystal waveguide
with quantum dots \cite{Lodahl2015}.
In summary, we have proposed a heralded quantum repeater based on the
scattering of photons off single emitters in 1D waveguides. The
information of the entangled states is encoded on four-level atoms
embedded in 1D waveguides. As our protocols can transform faulty
events into the detection of photon polarization, we present
a different way for constructing quantum repeaters in solid-state quantum
systems. With the significant progress on manipulating
atom-waveguide systems, our quantum repeater may be very useful for
quantum communication in the future.
\section*{Acknowledgments}
This work was supported by the National Natural Science Foundation
of China under Grants No. 11674033, No. 11475021, No. 11505007,
and No. 11474026, the Fundamental Research Funds for the Central
Universities under Grant No. 2015KJJCA01, the National Key Basic
Research Program of China under Grant No. 2013CB922000, the Youth
Scholars Program of Beijing Normal University under Grant No.
2014NT28, and the Open Research Fund Program of the State Key
Laboratory of Low-Dimensional Quantum Physics, Tsinghua University
Grant No. KF201502.
\end{document} |
\begin{document}
\begin{frontmatter}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{problem}[theorem]{Problem}
\title{Quadrilateral Mesh Generation III : Optimizing Singularity Configuration Based on Abel-Jacobi Theory}
\author[addr1,addr4]{Xiaopeng Zheng}
\author[addr1]{Yiming Zhu}
\author[addr1,addr2]{Na Lei\corref{correspondingauthor}}
\cortext[correspondingauthor]{Corresponding author}
\ead{[email protected]}
\author[addr1,addr4]{Zhongxuan Luo}
\author[addr3]{Xianfeng Gu}
\address[addr1]{Dalian University of Technology, Dalian, China}
\address[addr3]{Stony Brook University, New York, US}
\address[addr4]{Key Laboratory for Ubiquitous Network and Service Software of Liaoning Province, Dalian, China}
\address[addr2]{DUT-RU Co-Research Center of Advanced ICT for Active Life, Dalian, China}
\begin{abstract}
This work proposes a rigorous and practical algorithm for generating meromorphic quartic differentials for the purpose of quad-mesh generation. The work is based on the Abel-Jacobi theory of algebraic curve.
The algorithm pipeline can be summarized as follows: calculate the homology group; compute the holomorphic differential group; construct the period matrix of the surface and Jacobi variety; calculate the Abel-Jacobi map for a given divisor; optimize the divisor to satisfy the Abel-Jacobi condition by integer programming; compute the flat Riemannian metric with cone singularities at the divisor by Ricci flow; isometrically immerse the surface punctured at the divisor into the complex plane and pull back the canonical holomorphic differential to the surface to obtain the meromorphic quartic differential; construct the motor-graph to generate the resulting T-Mesh.
The proposed method is rigorous and practical. The T-mesh results can be applied to construct T-Splines directly. The efficiency and efficacy of the proposed algorithm are demonstrated by experimental results.
\end{abstract}
\begin{keyword}
Quadrilateral Mesh \sep Abel-Jacobi \sep Flat Riemannian Metric \sep Geodesic \sep Discrete Ricci flow \sep Conformal Structure Deformation
\end{keyword}
\end{frontmatter}
\section{Introduction}
In computational mechanics, Computer Aided Design, geometric design, computer graphics, medical imaging, digital geometry processing and many other engineering fields, quadrilateral mesh is a universal and crucial boundary surface representation. Although quadrilateral meshes have been broadly applied in the real industrial world, the theoretical understanding of their geometric structures remains primitive.
Recently, \cite{} made a breakthrough from the algebraic-geometric point of view: basically, a quad-mesh induces a conformal structure and can be treated as a Riemann surface. Furthermore, a quad-mesh is equivalent to a meromorphic quartic differential with closed trajectories, and the singularities satisfy the Abel-Jacobi condition. This discovery provides a solid theoretical foundation for quad-meshing.
\subsection{Abel-Jacobi Condition}
Suppose a surface $(\Sigma,\mathbf{g})$ is embedded in the Euclidean space $\mathbb{R}^3$ with the induced Euclidean Riemannian metric $\mathbf{g}$. Suppose the surface is represented as a quadrilateral mesh $\mathcal{Q}$, then $\mathcal{Q}$ induces a special combinatorial structure, a Riemannian metric structure, and a conformal structure.\\
\noindent{\emph{Combinatorial structure}}: Suppose the number of vertices, edges, faces of $\mathcal{Q}$ are $V,E,F$, then $E=2F$ and the Euler formula holds, $V+F-E=\chi(\Sigma)$, where $\chi(\Sigma)$ is the Euler characteristic number of $\Sigma$. The vertices with topological valence $4$ are called normal; the others are called singular.\\
\noindent{\emph{Riemannian metric structure}}: A flat metric with cone singularities $\mathbf{g}_Q$ can be induced by $\mathcal{Q}$ by treating each face as a unit planar square. A vertex of valence $k$ has discrete curvature $(4-k)\pi/2$, and the total curvature satisfies the \emph{Gauss-Bonnet condition}:
\begin{equation}
\sum_{v} \frac{4-\text{val}(v)}{2} \pi = 2\pi\chi(\Sigma),
\label{eqn:gauss_bonnet}
\end{equation}
where $\text{val}(v)$ is the topological valence of $v$.
The holonomy group induced by the metric $\mathbf{g}_Q$ on the surface $\Sigma\setminus \mathcal{S}$ with punctures at the singular vertices $\mathcal{S}$ is the rotation group
\begin{equation}
\text{Hol}(\Sigma\setminus \mathcal{S},\mathbf{g}_Q)=\{e^{i\frac{\pi}{2}k},k\in \mathbb{Z}\}.
\label{eqn:holonomy}
\end{equation}
This is the so-called \emph{holonomy condition} \cite{CMAME_Quad_Mesh_I}.\\
\noindent{\emph{Conformal structure}}: The quad-mesh
$\mathcal{Q}$ induces a conformal structure and can be treated as a Riemann surface $S_Q$; furthermore, it induces a meromorphic quartic differential $\omega_Q$ whose horizontal and vertical trajectories are finite. The valence-3 and valence-5 singularities of $\mathcal{Q}$ are the poles and zeros of $\omega_Q$, and the divisor of $\omega_Q$, denoted $(\omega_Q)$, represents the configuration of singularities of $\mathcal{Q}$. Suppose $\varphi$ is a holomorphic 1-form on $S_Q$; then $\varphi^4$ is a holomorphic quartic differential, the divisors $(\omega_Q)$ and $4(\varphi)$ are linearly equivalent, and they satisfy the Abel-Jacobi condition: the image of the Abel-Jacobi map is zero in the Jacobian variety $J(S_Q)$, i.e. $\mu((\omega_Q)-4(\varphi))=0$.
\subsection{Construct Meromorphic Quartic Differential}
The procedure to generate quadrilateral meshes can be summarized as follows: 1) locate feature points, which are used as part of the singularities of $\mathcal{Q}$; 2) improve the initial singularity set to satisfy the Abel-Jacobi condition and obtain $\mathcal{S}$; 3) construct a meromorphic quartic differential $\omega$ whose divisor $(\omega)$ equals $\mathcal{S}$; 4) deform the conformal structure such that the horizontal and vertical trajectories of $\omega$ are closed; 5) trace the horizontal and vertical trajectories of $\omega$ to form the quad-mesh $\mathcal{Q}$.
This work focuses on the first 3 steps. If the initial divisor does not satisfy the Gauss-Bonnet condition \ref{eqn:gauss_bonnet}, we add more poles and zeros at the critical points of the Gaussian curvature. Then we minimize the squared norm of the Abel-Jacobi map image of the divisor using a gradient descent algorithm. Once the divisor satisfies the Abel-Jacobi condition, we use surface Ricci flow to compute a flat cone metric that concentrates all the curvature at the poles and zeros of the divisor. We isometrically immerse the surface punctured at the divisor into the plane, and pull the canonical holomorphic quartic differential $(dz)^4$ on $\mathbb{C}$ back to the surface, to get the desired meromorphic quartic differential. The meromorphic quartic differential can be applied to generate T-meshes and construct T-Splines.
\subsection{Contributions}
Based on the Abel-Jacobi theory for quad-mesh generation, this work proposes a novel algorithm to generate meromorphic quartic differentials; the algorithm has a solid theoretical foundation and is practically effective. Conventional methods are heuristic and involve human intervention. In contrast, the proposed method is rigorous and automatic. To the best of our knowledge, this is the first work that constructs meromorphic quartic differentials for quad-meshing based on Riemann surface theory.
The work is organized as follows: section \ref{sec:previous_works} briefly reviews the most related works; section \ref{sec:theory} introduces the theoretical background; section \ref{sec:algorithm} explains the algorithm in detail; the experimental results are reported in section \ref{sec:experiments}; finally, the work concludes in section \ref{sec:conclusion}.
\section{Previous works}
\label{sec:previous_works}
This section briefly reviews the most related works; we refer readers to \cite{survey:Bommes2013Quad} for a more thorough survey. Quad-mesh generation has a vast literature, so in the following we only discuss some popular approaches.
\paragraph{Triangle Mesh Conversion} The Catmull-Clark subdivision method can be applied to convert triangular meshes to quad-meshes; the original vertices then become singularities. Another intuitive way is to merge two triangular faces adjacent to the same edge into a quadrilateral, as proposed in~\cite{Remacle2012Blossom,Marco2010Practical,Gurung2011SQuad,Velho20014}. These types of methods can only produce unstructured quad-meshes, without much quality control.
\paragraph{Patch-Based Approach}
In order to generate semi-regular quad-meshes, this type of method calculates a skeleton first, then partitions the input mesh into several quadrilateral patches, and each patch is regularly tessellated into quads. There are different strategies to cluster the faces forming each patch: one way is to merge neighboring triangle faces based on the similarity of their normals, another is based on the distances among the centers of the faces \cite{Boier2004Parameterization,Carr2006Rectangular}. Poly-cube maps provide a normal-based method to deform the surface into a poly-cube shape, as in \cite{Xia2011Editable,Wang2008User,Lin2008Automatic,He2009A}. The Morse-Smale complex of an eigenfunction of the Laplace operator naturally produces a skeleton structure, which can be utilized to generate quad-meshes; the spectral surface quadrangulation methods \cite{Dong2006Spectral,Huang2008Spectral} follow this approach.
\paragraph{Parameterization Based Approach }
Parameterization methods map the surface onto planar domains, construct a quad-mesh on the parameter domain, and then pull it back to the surface. There are different ways to compute the parameterization, such as discrete harmonic forms \cite{Tong2006Designing}, periodic global parameterization \cite{Alliez2006Periodic} and
the branched covering method \cite{K2010QuadCover}. All these methods rely on solving elliptic partial differential equations on the surface.
\paragraph{Voronoi Based Approach} This approach places samples on the input surface, then computes a Voronoi diagram on the surface using different distances. For example, if the $L^p$ norm is applied, then the cells are similar to rectangles \cite{L2010Lp}. This method can only generate unstructured quad-meshes.
\paragraph{Cross field Based Approach}
This approach generates a cross field first; then, by tracing the stream lines of the cross field \cite{RS:RPS:2014} or by a parameterization induced by the field \cite{survey:Bommes2013Quad}, the quad-mesh can be constructed. The cross fields are represented in different ways, such as the N-RoSy representation \cite{Palacios2007Rotational}, the period jump technique \cite{Li2006Representing} and the complex value representation \cite{Kowalski2013A}. Then, by minimizing a discrete analogue of the harmonic energy \cite{JFH}, the cross field can be smoothed out. The work in \cite{landau} relates the Ginzburg-Landau theory to cross fields in the genus zero case. With this type of method it is difficult to control the positions of the singularities and the global structure of the quad layout. Cross fields can be treated as the horizontal and vertical directions of a meromorphic quartic differential without magnitudes.
Compared with the existing approaches, our method provides an explicit theoretical analysis of the singularities and of the dimension of the solution space. This theoretical rigor greatly improves the efficiency and efficacy of quad-mesh generation.
\section{Theoretic Background}
\label{sec:theory}
This section briefly introduces the most related fundamental concepts and theorems.
\subsection{Basic Concepts of Riemann Surface}
\begin{definition}[Riemann Surface] Suppose $S$ is a two dimensional topological manifold, equipped with an atlas $\mathcal{A}=\{(U_\alpha,\varphi_\alpha)\}$, where every local chart provides complex coordinates $\varphi_\alpha:U_\alpha\to \mathbb{C}$, denoted as $z_\alpha$, and every transition map is biholomorphic,
\[
\varphi_{\alpha\beta}:\varphi_\alpha(U_\alpha\cap U_\beta)\to \varphi_\beta(U_\alpha\cap U_\beta), \quad z_\alpha \mapsto z_\beta,
\]
then the atlas is called a conformal atlas. A topological surface with a conformal atlas is called a Riemann surface.
\end{definition}
Suppose $(\Sigma,\mathbf{g})$ is an oriented surface with a Riemannian metric $\mathbf{g}$. For each point $p\in \Sigma$, we can find a neighborhood $U(p)$, inside $U(p)$ the \emph{isothermal coordinates} $(u,v)$ can be constructed, such that $\mathbf{g}=e^{2\lambda(u,v)}(du^2+dv^2)$. The atlas formed by all the isothermal coordinates is a conformal atlas, therefore we obtain the following:
\begin{theorem}
All oriented, metric surfaces are Riemann surfaces.
\end{theorem}
\begin{definition}[Meromorphic Function on Riemann Surface] Suppose a Riemann surface $(S,\{(U_\alpha,\varphi_\alpha)\})$ is given and a complex function $f:S\to \mathbb{C}\cup \{\infty\}$ is defined on the surface. If on each local chart $(U_\alpha,\varphi_\alpha)$ the local representation of the function, $f\circ \varphi_\alpha^{-1}:\varphi_\alpha(U_\alpha)\to \mathbb{C}\cup\{\infty\}$, is meromorphic, then $f$ is called a meromorphic function defined on $S$.
\end{definition}
A meromorphic function can be treated as a holomorphic map from the Riemann surface to the unit sphere.
\begin{definition}[Zeros and Poles]
Given a meromorphic function $f(z)$, if its Laurent series has the form
\[
f(z) = \sum_{n=k}^\infty a_n(z-z_0)^n,
\]
if $k>0$, then $z_0$ is called a zero of order $k$; if $k<0$, then $z_0$ is called a pole of order $-k$; if $k=0$, then $z_0$ is called a regular point. We denote $\nu_{z_0}(f)=k$.
\end{definition}
\begin{definition}[Meromorphic Differential] Given a Riemann surface $(S,\{z_\alpha\})$, $\omega$ is a meromorphic differential of order $n$, if it has local representation,
\[
\omega = f_\alpha(z_\alpha) (dz_\alpha)^n,
\]
where $f_\alpha(z_\alpha)$ is a meromorphic function, $n$ is an integer; if $f_\alpha(z_\alpha)$ is a holomorphic function, then $\omega$ is called a holomorphic differential of order $n$. If $z_\alpha$ is a pole (or a zero) of $f_\alpha$ with order $k$, then $z_\alpha$ is called a pole (or a zero) of the meromorphic differential $\omega$ of order $k$.
\end{definition}
A holomorphic differential of order $2$ is called a \emph{holomorphic quadratic differential}; a meromorphic differential of order $4$ is called a \emph{meromorphic quartic differential}.
\begin{definition}[Divisor] The Abelian group freely generated by points on a Riemann surface is called the divisor group, every element is called a divisor, which has the form
\[
D = \sum_p n_p p.
\]
The degree of a divisor is defined as $\deg(D)=\sum_p n_p$. Suppose $D_1 = \sum_p n_p p$, $D_2 = \sum_p m_p p$, then $D_1\pm D_2 = \sum_p(n_p\pm m_p)p$; $D_1\le D_2$ if and only if for all $p$, $n_p \le m_p$.
\end{definition}
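The divisor arithmetic above is elementary; the following minimal sketch (ours, not part of the original formulation) records a divisor as a map from point identifiers to integer multiplicities, with the degree, sum and partial order exactly as defined above. All names are illustrative.

\begin{verbatim}
# Minimal sketch of divisor arithmetic; a divisor is a map from a
# point identifier (e.g. a vertex index) to its integer multiplicity.
from collections import defaultdict

def divisor(pairs):
    """Build a divisor D = sum_p n_p p from (point, multiplicity) pairs."""
    D = defaultdict(int)
    for p, n in pairs:
        D[p] += n
    return D

def degree(D):
    """deg(D) = sum_p n_p."""
    return sum(D.values())

def add(D1, D2, sign=+1):
    """D1 + sign*D2, combining multiplicities pointwise."""
    D = defaultdict(int, D1)
    for p, n in D2.items():
        D[p] += sign * n
    return D

def leq(D1, D2):
    """D1 <= D2 iff n_p <= m_p for every point p."""
    points = set(D1) | set(D2)
    return all(D1.get(p, 0) <= D2.get(p, 0) for p in points)

# Example: a zero of order 1 at vertex 3 and a pole at vertex 7.
D = divisor([(3, 1), (7, -1)])
assert degree(D) == 0
\end{verbatim}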
\begin{definition}[Meromorphic Differential Divisor] Suppose $\omega$ is a meromorphic differential on a Riemann surface $S$, suppose $p\in S$ is a point on $S$, we define the order of $\omega$ at $p$ as
\[
\text{ord}_p(\omega) = \text{ord}_p(f_p),
\]
where $f_p$ is the local representation of $\omega$ in a neighborhood of $p$, $\omega= f_p(z)(dz)^n$. The divisor of $\omega$ is defined as
\[
(\omega) = \sum_p \text{ord}_p(\omega) p.
\]
\end{definition}
\subsection{Abel-Jacobi Theorem}
\begin{figure}
\caption{Canonical fundamental group basis.}
\label{fig:fundamental group basis}
\end{figure}
Suppose $\{a_1,b_1,\dots,a_g,b_g\}$ is a canonical basis of the homology group $H_1(S,\mathbb{Z})$, as shown in Fig.~\ref{fig:fundamental group basis}. Each $a_i$ and $b_i$ represents a curve around the inner and outer circumference of the $i$th handle, respectively.
Let $\{\omega_1,\omega_2,\dots, \omega_g\}$ be a normalized basis of $\Omega^1$, the linear space of all holomorphic 1-forms over $\mathbb{C}$. The choice of basis depends on the homology basis chosen above; the normalization signifies that
\[
\int_{a_i} \omega_j = \delta_{ij},\quad i,j = 1,2,\dots,g.
\]
For each curve $\gamma$ in the homology group, we can associate a vector $\lambda_\gamma$ in $\mathbb{C}^g$ by integrating each of the $g$ 1-forms over $\gamma$,
\[
\lambda_\gamma = \left(\int_\gamma \omega_1,\int_\gamma \omega_2,\dots, \int_\gamma \omega_g \right)
\]
We define a $2g$-real-dimensional lattice $\Lambda$ in $\mathbb{C}^g$,
\[
\Lambda = \left\{ \sum_{i=1}^g s_i~\lambda_{a_i} + \sum_{j=1}^g t_j~ \lambda_{b_j}, \quad s_i,t_j\in \mathbb{Z} \right\}
\]
\begin{definition}[Jacobian]
The Jacobian of the Riemann surface $S$, denoted $J(S)$, is the compact quotient $\mathbb{C}^g/\Lambda$.
\end{definition}
\begin{definition}[Abel-Jacobi Map] Fix a base point $p_0\in S$. The Abel-Jacobi map is a map $\mu:S\to J(S)$ defined as follows: for every point $p\in S$, choose a curve $c$ from $p_0$ to $p$ and set
\[
\mu(p) = \left(\int_{p_0}^p \omega_1, \int_{p_0}^p \omega_2, \dots, \int_{p_0}^p \omega_g \right) ~~\mod \Lambda,
\]
where the integrals are all along $c$.
\end{definition}
It can be shown that $\mu(p)$ is well defined: the choice of the curve $c$ does not affect the value of $\mu(p)$ modulo $\Lambda$.
\begin{theorem}[Abel-Jacobi] Let $D$ be a divisor of degree $0$ on $S$. Then $D$ is the divisor of a meromorphic function $f$ if and only if $\mu(D)=0$ in the Jacobian $J(S)$.
\end{theorem}
\subsection{Quad-Meshes and Meromorphic Quartic Forms}
\label{subsec:quad_differential}
We summarize the intrinsic relation between a quad-mesh and a meromorphic quartic differential.
\begin{definition}[Quadrilateral Mesh] Suppose $\Sigma$ is a topological surface, $\mathcal{Q}$ is a cell partition of $\Sigma$, if all cells of $\mathcal{Q}$ are topological quadrilaterals, then we say $(\Sigma,\mathcal{Q})$ is a quadrilateral mesh.
\end{definition}
On a quad-mesh, the \emph{topological valence} of a vertex is the number of faces adjacent to the vertex.
\begin{definition}[Singularity] Suppose $(S,\mathcal{Q})$ is a quadrilateral mesh. If the topological valence of an interior vertex is $4$, then we call it a \emph{regular vertex}, otherwise a \emph{singularity}; if the topological valence of a boundary vertex is $2$, then we call it a \emph{regular boundary vertex}, otherwise a \emph{boundary singularity}. The index of a singularity is defined as follows:
\[
\text{Ind}(v_i) = \left\{
\begin{array}{lcl}
4-\text{val}(v_i) & v_i\not\in \partial (S,\mathcal{Q})\\
2-\text{val}(v_i) & v_i\in \partial (S,\mathcal{Q})\\
\end{array}
\right.
\]
where $\text{Ind}(v_i)$ and $\text{val}(v_i)$ are the index and the topological valence of $v_i$.
\end{definition}
\begin{theorem}[Quad-Mesh to Meromorphic Quartic Differential]
Suppose $(\Sigma,\mathcal{Q})$ is a closed quadrilateral mesh, then
\begin{enumerate}
\item the quad-mesh $\mathcal{Q}$ induces a conformal atlas $\mathcal{A}$, such that $(\Sigma,\mathcal{A})$ form a Riemann surface, denoted as $S_Q$.
\item the quad-mesh $\mathcal{Q}$ induces a quartic differential $\omega_Q$ on $S_Q$. The valence-$k$ singular vertices correspond to poles or zeros of order $k-4$. Furthermore, the trajectories of $\omega_Q$ are finite.
\end{enumerate}
\label{thm:quad_differential}
\end{theorem}
\begin{theorem}[Quartic Differential to Quad-Mesh]
Suppose $(\Sigma,\mathcal{A})$ is a Riemann surface and $\omega$ is a meromorphic quartic differential with finite trajectories; then $\omega$ induces a quadrilateral mesh $\mathcal{Q}$, such that the poles or zeros of $\omega$ with order $k$ correspond to the singular vertices of $\mathcal{Q}$ with valence $k+4$.
\label{thm:differential_quad}
\end{theorem}
\begin{theorem}[Quad-Mesh Singularity Abel-Jacobi Condition] Suppose $\mathcal{Q}$ is a closed quadrilateral mesh, $S_Q$ is the induced Riemann surface, and $\omega_Q$ is the induced meromorphic quartic differential. Assume $\omega_0$ is an arbitrary holomorphic 1-form on $S_Q$; then
\begin{equation}
\mu((\omega_Q) - 4(\omega_0)) = 0\quad \mod \Lambda
\label{eqn:abel_condition}
\end{equation}
in the Jacobian $J(S_Q)$.
\label{thm:Abel_Jacobian_condition}
\end{theorem}
\section{Computational Algorithms}
\label{sec:algorithm}
This section explains the algorithm in detail. The input surface is represented as a triangle mesh $\Sigma$; the output is a meromorphic quartic differential $\omega$, together with the flat metric with cone singularities at the poles and zeros induced by $\omega$. The pipeline of the algorithm is as follows:
\begin{enumerate}[1.]
\item Compute the \emph{homology} group generators of $\Sigma$, $\{a_1,\cdots,a_g; \ b_1,\cdots,$ $b_g\}$;
\item Compute the dual \emph{holomorphic} 1-form basis $\{ \varphi_1, \cdots, \varphi_g \}$; construct a \emph{holomorphic} 1-form $\varphi$ on the Riemann surface $\Sigma$ through a linear combination of the basis $\{\varphi_k\}_{k=1}^g$, and locate the zeros of $\varphi$;
\item Compute the period matrix $(A,B)$ of the surface and construct the lattice $\Gamma$ and the Jacobian variety $J(\Sigma)$;
\item Compute the Abel-Jacobi map of a given divisor $D$ in the Jacobian variety $J(\Sigma)$;
\item Optimize the divisor $D$ to satisfy the Abel-Jacobi condition;
\item Compute the flat metric with cone singularities at the divisor $D$ by surface \emph{Ricci Flow};
\item Compute the cut-graph connecting all singularities; slice the surface along the cut-graph and isometrically immerse it into the complex plane; the immersion pulls $(dz)^4$ back to the surface and produces a meromorphic quartic differential $\omega$.
\item Trace the critical horizontal and vertical trajectories of $\omega$, namely isoparametric curves through singularities, to generate a T-mesh and partition the surface into rectangular patches.
\end{enumerate}
In the following, we explain every step in detail. Each subsection corresponds to one step.
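For orientation only, the sketch below chains the eight steps as function calls. Every function name is a hypothetical placeholder for the correspondingly numbered step and does not refer to an existing implementation or API.

\begin{verbatim}
def quad_mesh_pipeline(mesh, initial_divisor, eps=3.0e-4):
    """Hypothetical driver; every helper is a placeholder for the step
    with the same number in the pipeline above, not an existing API."""
    a_loops, b_loops = canonical_homology_basis(mesh)               # step 1
    phis = holomorphic_one_form_basis(mesh, a_loops, b_loops)       # step 2
    A, B = period_matrix_and_lattice(phis, a_loops, b_loops)        # step 3
    D = optimize_divisor(mesh, initial_divisor, phis, A, B, eps)    # steps 4-5
    metric = ricci_flow(mesh, D)                                    # step 6
    omega = isometric_immersion(mesh, D, metric)                    # step 7
    return trace_motor_graph(mesh, omega)                           # step 8
\end{verbatim}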
\subsection{Homology group basis}
\label{sec:homology_group_basis}
In practice, we compute a special canonical homology group basis, consisting of the tunnel loops $\{a_i\}$ and handle loops $\{b_i\}$, such that each $a_i$ and $b_i$ intersect each other at one point.
Our algorithm is mainly based on the work of Dey et al.~\cite{}, which avoids tetrahedral tessellation and modification of the original triangle mesh. The algorithm utilizes the concept of the \emph{Reeb graph} and the linking number to produce different sets of \emph{homology} basis loops.
As shown in the left frame of Fig.~\ref{fig:sculpt_loops}, the algorithm may generate a homology basis which does not satisfy the intersection condition
\begin{equation}
a_i \cdot b_j = \delta_{ij},~a_i\cdot a_j=0,~b_i\cdot b_j=0, \quad i,j=1,\dots, g,
\label{eqn:intersection_condition}
\end{equation}
where $\alpha\cdot \beta$ represents the algebraic intersection number between $\alpha$ and $\beta$, $g$ is the genus of the mesh.
\begin{figure}
\caption{Tunnel and handle loops on the \textbf{Sculpt} model.}
\label{fig:sculpt_loops}
\end{figure}
We compute the algebraic intersection numbers between the tunnel loops and the handle loops; if the intersection condition (\ref{eqn:intersection_condition}) is violated, we randomly reset the height function used for constructing the Reeb graph and obtain a new set of handle loops and tunnel loops. After several iterations, we obtain a canonical homology group basis, as shown in the right frame of Fig.~\ref{fig:sculpt_loops}.
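As a small illustration of the test against (\ref{eqn:intersection_condition}), the sketch below only checks whether a candidate basis is canonical; it assumes the matrices of pairwise algebraic intersection numbers have already been computed (e.g. by counting signed crossings), which is not shown here.

\begin{verbatim}
import numpy as np

def is_canonical_basis(AB, AA, BB):
    """AB[i, j] = a_i . b_j, AA[i, j] = a_i . a_j, BB[i, j] = b_i . b_j
    are assumed precomputed integer matrices of algebraic intersection
    numbers.  True iff a_i.b_j = delta_ij, a_i.a_j = 0 and b_i.b_j = 0."""
    g = AB.shape[0]
    return (np.array_equal(AB, np.eye(g, dtype=AB.dtype))
            and not AA.any() and not BB.any())
\end{verbatim}

If the check fails, the height function of the Reeb-graph construction is re-randomized and the loops are recomputed, as described above.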
\subsection{Holomorphic 1-form Basis}
\label{sec:holomorphic1form_basis}
\begin{figure}
\caption{A \emph{holomorphic} 1-form basis on the \textbf{Sculpt} model.}
\label{fig:sculpt_holo1form_basis}
\end{figure}
The algorithm for computing the \emph{holomorphic} 1-form basis is based on the work of Gu et al.~\cite{}, which builds on Hodge theory. \textbf{Step 1}: for each loop $\gamma$, we slice the mesh $M$ along $\gamma$ to get an open mesh $\bar{M}_\gamma$ with boundary $\partial \bar{M}_\gamma=\gamma^+-\gamma^-$, and then we construct a function
\[
g_\gamma(v_i) = \left\{
\begin{array}{ll}
1& v_i \in \gamma^{+}\\
0& v_i \in \gamma^{-}\\
\text{rand}&\text{otherwise}
\end{array}
\right.
\]
Then the discrete 1-form $\lambda_{\gamma}=dg_\gamma$ is a closed 1-form.
In this way, we construct a cohomology group basis $\lambda_{a_1},\lambda_{b_1},\cdots, \lambda_{a_g},\lambda_{b_g}$. \textbf{Step 2}: for each closed 1-form $\lambda$, we construct a function $f:M\to\mathbb{R}$ such that $\lambda+df$ is harmonic, namely $f$ satisfies the Poisson equation $\Delta f = -\delta \lambda$. In this way, we deform the cohomology basis into a harmonic 1-form basis, denoted as $\omega_{a_1},\omega_{b_1},\cdots,\omega_{a_g},\omega_{b_g}$. \textbf{Step 3}: each harmonic 1-form $\omega$ is equivalent to a curl-free vector field on $M$; we rotate the vector field by $\frac{\pi}{2}$ about the surface normal to obtain a divergence-free vector field, which is equivalent to another harmonic 1-form ${}^*\omega$. The pair $\omega+\sqrt{-1}\,{}^*\omega$ is a holomorphic 1-form. In this way, we construct the holomorphic 1-form basis $\{\varphi_{a_1},\varphi_{b_1},\dots,\varphi_{a_g},\varphi_{b_g}\}$, where
\[
\varphi_{\gamma}= \omega_\gamma + \sqrt{-1}{}^*\omega_\gamma, \gamma\in \{a_1,\dots,a_g,b_1,\dots,b_g\}.
\]
According to the Riemann-Roch theorem, the above \emph{holomorphic} 1-forms span the linear space $\Omega$ of all holomorphic 1-forms, namely
for any $\varphi\in \Omega$,
\[
\varphi = \sum_{k=1}^g \alpha_k \varphi_{a_k} + \sum_{l=1}^g \beta_l \varphi_{b_l},
\]
where the $\alpha_k$, $\beta_l$ are real linear combination coefficients.
In practice, in order to compute the zeros of $\varphi$ more accurately, we choose the linear combination coefficients such that the conformal factor function of $\varphi$ is as uniform as possible. In our implementation, we set all $\alpha_k$'s and $\beta_l$'s to $1$. Heuristically, the resulting holomorphic 1-form meets our accuracy requirements.
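The following is a minimal sketch of Step 2 above (the harmonic correction), assuming the discrete operators have been assembled elsewhere: a cotangent Laplacian \texttt{L} (sparse, $|V|\times|V|$), the exterior derivative \texttt{d} on vertex functions (sparse edge-vertex incidence matrix, $|E|\times|V|$), the codifferential of the closed 1-form \texttt{delta\_lam} (length $|V|$), and the closed 1-form \texttt{lam} itself (one value per oriented edge). The sign conventions of \texttt{L} and \texttt{delta\_lam} are assumed to match the Poisson equation above; Step 3 (the conjugate form ${}^*\omega$) would additionally require the discrete Hodge star and is not shown.

\begin{verbatim}
import scipy.sparse.linalg as spla

def harmonic_one_form(L, d, delta_lam, lam):
    """Step 2 sketch: solve the Poisson equation  L f = -delta_lam  and
    return the harmonic representative  omega = lam + d f  of the
    cohomology class of lam.  L, d, delta_lam and lam are assumed to be
    assembled elsewhere with matching sign conventions; L is symmetric
    positive semi-definite (constants in its kernel) and the right-hand
    side sums to zero, so conjugate gradients converges to a solution."""
    f, info = spla.cg(L, -delta_lam, atol=1e-10)
    return lam + d @ f
\end{verbatim}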
\subsection{Period matrix and Lattice}
\label{sec:period_matrix}
We can further construct a set of holomorphic 1-form basis $\{\varphi_1,\varphi_2,\dots,\varphi_g\}$, such that
\[
\int_{a_i} \varphi_j = \delta_{ij}, i,j = 1,2,\dots,g.
\]
Then for each $\gamma$ in the homology basis, we construct a $g$ dimensional vector $\lambda_\gamma \in \mathbb{C}^g$,
\[
\lambda_\gamma = \left(\int_\gamma \varphi_1, \int_\gamma \varphi_2,\cdots, \int_\gamma \varphi_g\right)
\]
The period matrix $(A,B)$ can be constructed as
\begin{equation}
A =
\begin{bmatrix}
\int_{a_1} \varphi_{1} & \int_{a_2} \varphi_{1} & \cdots & \int_{a_g} \varphi_{1}\\
\int_{a_1} \varphi_{2} & \int_{a_2} \varphi_{2} & \cdots & \int_{a_g} \varphi_{2}\\
\vdots &\vdots &\ddots &\vdots \\
\int_{a_1} \varphi_{g} & \int_{a_2} \varphi_{g} & \cdots & \int_{a_g} \varphi_{g}\\
\end{bmatrix}
~B =
\begin{bmatrix}
\int_{b_1} \varphi_{1} & \int_{b_2} \varphi_{1} & \cdots & \int_{b_g} \varphi_{1}\\
\int_{b_1} \varphi_{2} & \int_{b_2} \varphi_{2} & \cdots & \int_{b_g} \varphi_{2}\\
\vdots &\vdots &\ddots &\vdots \\
\int_{b_1} \varphi_{g} & \int_{b_2} \varphi_{g} & \cdots & \int_{b_g} \varphi_{g}\\
\end{bmatrix}
\end{equation}
By our construction, $A$ is the $g\times g$ identity matrix.
Then we construct a lattice $\Gamma$ in $\mathbb{C}^g$,
\[
\Gamma = \left\{
\sum_{k=1}^g \left(s_k \lambda_{a_k} + t_k \lambda_{b_k}\right),\quad s_k, t_k\in \mathbb{Z}
\right\}
\]
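Numerically, once each holomorphic 1-form is stored as one complex value per oriented edge and each loop as a list of signed edge indices, the period matrices and lattice points are simple sums. The sketch below makes these conventions explicit; the data layout is our own assumption, not prescribed by the method.

\begin{verbatim}
import numpy as np

def loop_integral(phi, loop):
    """Integrate a discrete 1-form phi (one complex value per oriented
    edge) along a loop given as a list of (edge_index, sign) pairs."""
    return sum(sign * phi[e] for e, sign in loop)

def period_matrices(phis, a_loops, b_loops):
    """A[j, i] = int_{a_i} phi_j and B[j, i] = int_{b_i} phi_j."""
    A = np.array([[loop_integral(phi, a) for a in a_loops] for phi in phis])
    B = np.array([[loop_integral(phi, b) for b in b_loops] for phi in phis])
    return A, B

def lattice_point(A, B, s, t):
    """A point of Gamma: sum_k (s_k lambda_{a_k} + t_k lambda_{b_k});
    lambda_{a_k} and lambda_{b_k} are the k-th columns of A and B."""
    return A @ np.asarray(s) + B @ np.asarray(t)
\end{verbatim}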
\subsection{Abel-Jacobi Map}
\label{sec:Abel_Jacobi}
Given a canonical homology group basis, we slice the surface along the basis to obtain a topological disk $\bar{M}$. Fix a base point $p_0\in \bar{M}$ in the interior of $\bar{M}$; for any point $p\in M$, we can arbitrarily choose a path $\gamma\subset \bar{M}$ connecting $p_0$ and $p$. The Abel-Jacobi map $\mu:M\to J(M)$, $J(M)=\mathbb{C}^g/\Gamma$, is defined as
\[
\mu(p) = \Phi(p)~\mod~\Gamma,
\]
where
\begin{equation}
\Phi(p) = \left(\int_\gamma \varphi_1, \int_\gamma \varphi_2,\cdots, \int_\gamma \varphi_g\right)^T.
\label{eqn:Phi}
\end{equation}
Similarly, given a divisor $D=\sum_{i=1}^n n_i p_i$,
\[
\mu(D) = \sum_{i=1}^n n_i \mu(p_i) = \sum_{i=1}^n n_i\left(\int_{p_0}^{p_i} \varphi_1, \int_{p_0}^{p_i} \varphi_2,\cdots, \int_{p_0}^{p_i} \varphi_g\right)^T ~\mod~\Gamma.
\]
The Abel-Jacobi condition states that if $D$ is a principal divisor, then $\mu(D)$ is $0$, namely
\begin{equation}
\Phi(D) - A
\begin{bmatrix}
s_1\\
\vdots\\
s_g
\end{bmatrix}
- B
\begin{bmatrix}
t_1\\
\vdots\\
t_g
\end{bmatrix}
=
\begin{bmatrix}
0\\
\vdots\\
0
\end{bmatrix}
\label{eqn:Abel_Jacobi_Condition}
\end{equation}
By expansion, we obtain the equation
\begin{equation}
\begin{split}
\centering
\left[
\begin{array}{c}
\mathop{\sum}_{i=1}^{n} n_i \bigintss_{\gamma_i} \varphi_1 \\
\mathop{\sum}_{i=1}^{n} n_i \bigintss_{\gamma_i} \varphi_2 \\
\quad \vdots \\
\mathop{\sum}_{i=1}^{n} n_i \bigintss_{\gamma_i} \varphi_g
\end{array}
\right]
\
-
\
\begin{bmatrix}
\mathop{\sum}_{k = 1}^{g} (s_k \bigintss_{a_k} \varphi_1 + t_k \bigintss_{b_k} \varphi_1) \\
\mathop{\sum}_{k = 1}^{g} (s_k \bigintss_{a_k} \varphi_2 + t_k \bigintss_{b_k} \varphi_2) \\
\vdots \\
\mathop{\sum}_{k = 1}^{g} (s_k \bigintss_{a_k} \varphi_g + t_k \bigintss_{b_k} \varphi_g)
\end{bmatrix}
=
\begin{bmatrix}
0 \\
0\\
\vdots \\
0
\end{bmatrix},
\end{split}
\label{eqn:abel_jacobian_map_expand_2}
\end{equation}
where $s_k$, $t_k$ are integers and $\gamma_i$ is the path connecting $p_0$ and $p_i$ in $\bar{M}$.
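With the same discrete conventions as above (1-forms stored per oriented edge, paths $\gamma_i$ stored as signed edge lists inside $\bar{M}$), $\Phi(D)$ and the lattice reduction are direct sums. The sketch below is only illustrative and assumes the paths have been computed beforehand.

\begin{verbatim}
import numpy as np

def Phi_of_divisor(D, paths, phis):
    """Phi(D) = sum_i n_i (int_{gamma_i} phi_1, ..., int_{gamma_i} phi_g),
    where D maps a point id p_i to its multiplicity n_i and paths[p_i] is
    the signed edge list of a path from the base point p_0 to p_i inside
    the fundamental domain (both assumed precomputed)."""
    Phi = np.zeros(len(phis), dtype=complex)
    for p, n in D.items():
        Phi += n * np.array([sum(s * phi[e] for e, s in paths[p])
                             for phi in phis])
    return Phi

def mu_of_divisor(Phi, A, B, s, t):
    """mu(D) as in the Abel-Jacobi condition: subtract the lattice point
    determined by the integer vectors s and t."""
    return Phi - A @ np.asarray(s) - B @ np.asarray(t)
\end{verbatim}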
\subsection{Abel-Jacobi Condition Optimization}
\label{sec:optimization_system}
Suppose we are given an initial divisor $D_0$. If it does not satisfy the Gauss-Bonnet condition, namely $\deg(D_0)\neq 8g-8$, then we add extra poles or zeros to modify $D_0$ into $D=\sum_{i=1}^k n_i p_i$, such that $\deg(D)=8g-8$. We also choose a holomorphic 1-form $\varphi$.
First, we determine the integer coefficients $s_k$ and $t_k$ in the Abel-Jacobi condition (\ref{eqn:abel_jacobian_map_expand_2}) by minimizing the norm
\begin{equation}
\min_{s_k,t_k\in\mathbb{Z}}\left\|\Phi(D) - \sum_{k=1}^g s_k \lambda_{a_k} - \sum_{k=1}^g t_k \lambda_{b_k} - \Phi(4(\varphi))\right\|^2,
\label{eqn:integer_program}
\end{equation}
this can be accomplished by standard integer programming \cite{}.
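The integer step can be solved with any integer programming or closest-vector solver; as a stand-in, the sketch below solves the real least-squares relaxation, rounds it, and searches a small neighborhood of the rounded point. This heuristic is our own simplification (exponential in $g$, so only intended for small genus) and is not the solver referenced above.

\begin{verbatim}
import numpy as np
from itertools import product

def choose_lattice_integers(r, A, B, radius=1):
    """Find integers s, t approximately minimizing ||r - A s - B t||^2,
    where r = Phi(D) - Phi(4(phi)).  Round-the-relaxation heuristic; a
    genuine integer program or closest-vector solver could replace it."""
    g = A.shape[0]
    M = np.hstack([A, B])                    # g x 2g complex matrix
    Mr = np.vstack([M.real, M.imag])         # stack real/imaginary parts
    rr = np.concatenate([r.real, r.imag])
    x_real, *_ = np.linalg.lstsq(Mr, rr, rcond=None)
    base = np.rint(x_real).astype(int)
    best, best_val = None, np.inf
    # exhaustive search around the rounded relaxation:
    # (2*radius+1)^(2g) candidates, feasible only for small g
    for offset in product(range(-radius, radius + 1), repeat=2 * g):
        x = base + np.array(offset)
        val = np.linalg.norm(rr - Mr @ x) ** 2
        if val < best_val:
            best, best_val = x, val
    return best[:g], best[g:], best_val
\end{verbatim}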
Second, once the integer coefficients $s_k, t_k$, $k=1,2,\dots,g$ are set, we further minimize the
squared norm of $\mu(D)$ with respect to the positions of poles and zeros,
\[
\min_{p_1,\dots p_k\in M} \left\|\Phi(D) - \sum_{k=1}^g s_k \lambda_{a_k} - \sum_{k=1}^g t_k\lambda_{b_k} - \Phi(4(\varphi))\right\|^2.
\]
Let $d\in \mathbb{C}^g$ be
\[
d = \sum_{k=1}^g s_k \lambda_{a_k} + \sum_{k=1}^g t_k\lambda_{b_k} + \Phi(4(\varphi)),
\]
then the above energy becomes
\[
E(p_1,\dots,p_k) := \sum_{j=1}^g \left\|\sum_{i=1}^k n_i \int_{p_0}^{p_i} \varphi_j - d_j \right\|^2.
\]
For each point $p_i$, we choose a local neighborhood $\Delta_i$ of $p_i$ with local parameter $z_i$; then the holomorphic 1-form $\varphi_j$ has the local representation
\[
\varphi_j = h_{j}^i(z_i) dz_i,
\]
where $h_j^i(z_i)$ is a holomorphic function defined on $\Delta_i$.
\begin{equation}
\frac{\partial E}{\partial p_i}=\sum_{j=1}^g \left[ n_i h_j^i(p_i) \left(\sum_{i=1}^k n_i \int_{p_0}^{p_i} \bar{\varphi}_j -\bar{d}_j\right) +n_i \bar{h}_j^i(p_i) \left(\sum_{i=1}^k n_i \int_{p_0}^{p_i} \varphi_j-d_j\right)
\right],
\label{eqn:gradient}
\end{equation}
so we can use the gradient descent method to minimize $\|\mu(D)\|^2$. In practice, we choose the triangle face containing $p_i$ as $\Delta_i$. We isometrically embed $\Delta_i$ onto the plane, and the planar coordinates give the local parameter $z_i$. Then $\varphi_j$ can be represented as a complex linear function on $\Delta_i$, which is $h_j^i(z_i)$. In this way, we can minimize the squared norm of $\mu(D)$.
The algorithm is presented briefly in Alg.~\ref{alg:solve_optimization_system}.
\begin{algorithm}[h]
\caption{Optimize a Divisor to Satisfy the Abel-Jacobi Condition}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\label{alg:solve_optimization_system}
\begin{algorithmic}[1]
\REQUIRE Closed mesh $M$; a group of singularities $D$; a \emph{holomorphic} 1-form $\varphi$; precision threshold $\varepsilon$.
\ENSURE Optimized divisor $D$ satisfying the Abel-Jacobi condition.
\IF{$D$ doesn't satisfy Gauss-Bonnet Condition}
\STATE Locate the vertices on $M$ with local maximal Gaussian curvature as poles, or with local minimal curvature as zeros;
\STATE Add these vertices to the divisor $D$, such that $D$ satisfies the Gauss-Bonnet condition.
\ENDIF
\STATE Locate the zeros of $\varphi$ to obtain the divisor $(\varphi)$;
\STATE Compute $\Phi(D)$ and $\Phi(4(\varphi))$ using Eqn.~\ref{eqn:Phi};
\STATE Compute the Abel-Jacobi map $\mu(D-4(\varphi))$ by solving the integer program in Eqn.~\ref{eqn:integer_program};
\WHILE{\;$\|\mu(D-4(\varphi))\|^2 > \varepsilon$\;}
\FOR {each pole and zero $p_i$ in $D$}
\STATE Locate the face $\Delta_i$ containing $p_i$;
\STATE Compute the local representation $\varphi_j(z_i) = h_j^i(z_i) dz_i$;
\STATE Compute the gradient of the energy Eqn.~\ref{eqn:gradient};
\ENDFOR
\STATE Update the positions of the singularities: $p_i \leftarrow p_i - \partial E/\partial p_i$;
\STATE Recompute the Abel-Jacobi map $\mu(D-4(\varphi))$;
\ENDWHILE
\RETURN The divisor $D$.
\end{algorithmic}
\end{algorithm}
\begin{figure}
\caption{The singularities of the Buddha surface.}
\label{fig:buddha_singularities}
\end{figure}
Fig.~\ref{fig:buddha_singularities} shows the singularities on the Buddha surface satisfying the Abel-Jacobi condition.
\subsection{Discrete Surface Ricci Flow}
Once the divisor $D=\sum_{i=1}^k n_i p_i$ is obtained, we can set the target curvature as
\[
\bar{K}(v_i) = \left\{
\begin{array}{ll}
-n_i\frac{\pi}{2} & v_i \in D\\
0 & \text{otherwise}
\end{array}
\right.
\]
For each vertex $v_i \in M$, we set the initial conformal factor as $u_i=0$. Then the edge length is given by vertex scaling, for edge $e_{ij}=[v_i,v_j]$, its length is given by
\[
l_{ij} = e^{u_i}\beta_{ij} e^{u_j},
\]
where $\beta_{ij}$ is the initial edge length. The corner angles are calculated using Euclidean cosine law,
\[
\theta_{k}^{ij} = \cos^{-1} \frac{l_{ik}^2 + l_{jk}^2 - l_{ij}^2}{2l_{ik}l_{jk}},
\]
the discrete Gaussian curvature is given by
\[
K(v_i) = \left\{
\begin{array}{rl}
2\pi - \sum_{jk} \theta_i^{jk} & v_i \not\in \partial M \\
\pi - \sum_{jk} \theta_i^{jk} & v_i \in \partial M
\end{array}
\right.
\]
The discrete Ricci energy is defined as
\[
E(u_1,\dots,u_n)=\int^{(u_1,\dots,u_n)} \sum_{i=1}^n (\bar{K}_i - K_i ) du_i.
\]
The gradient of the energy is given by
\[
\nabla E=(\bar{K}_1-K_1,\bar{K}_2-K_2,\cdots, \bar{K}_n-K_n)^T.
\]
The Hessian matrix is given by the cotangent edge weights
\[
\frac{\partial^2 E}{\partial u_i \partial u_j} =
\left\{
\begin{array}{lr}
(\cot \theta_{k}^{ij} + \cot \theta_{l}^{ij})/2& e_{ij}\not\in \partial M\\
\cot \theta_{k}^{ij}/2 & e_{ij}\in \partial M
\end{array}
\right.
\]
and
\[
\frac{\partial^2 E}{\partial u_i^2} = -\sum_{j\neq i} \frac{\partial^2 E}{\partial u_i \partial u_j}.
\]
We use Newton's method to optimize the Ricci energy; during the optimization, we keep updating the triangulation so that it remains Delaunay. The convergence is proven in \cite{}. We can compute holonomies using the resulting Riemannian metric.
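As a self-contained toy illustration of the curvature computation and the flow above, the following runs on the boundary surface of a tetrahedron with unit initial edge lengths and prescribed cone curvatures summing to $2\pi\chi = 4\pi$. It uses plain gradient descent $u \leftarrow u + \lambda(\bar{K}-K)$ with a fixed step instead of the Newton iteration with the cotangent Hessian; the target curvatures and the step size are our own choices for the example.

\begin{verbatim}
import numpy as np

# Boundary of a tetrahedron: 4 vertices, 4 faces, unit initial edge lengths.
faces = [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
beta = 1.0                                      # initial edge lengths
Kbar = np.array([1.5, 1.5, 0.5, 0.5]) * np.pi   # targets, sum = 4*pi

def curvatures(u):
    """K(v_i) = 2*pi - sum of corner angles at v_i, with edge lengths
    l_ab = e^{u_a} * beta * e^{u_b} and corner angles from the Euclidean
    cosine law (clipped for numerical safety)."""
    K = 2.0 * np.pi * np.ones(4)
    for i, j, k in faces:
        lij = np.exp(u[i] + u[j]) * beta
        ljk = np.exp(u[j] + u[k]) * beta
        lki = np.exp(u[k] + u[i]) * beta
        for a, la, lb, lc in ((i, lij, lki, ljk),
                              (j, lij, ljk, lki),
                              (k, ljk, lki, lij)):
            cos_a = np.clip((la**2 + lb**2 - lc**2) / (2.0 * la * lb),
                            -1.0, 1.0)
            K[a] -= np.arccos(cos_a)
    return K

u = np.zeros(4)                                 # initial conformal factors
for _ in range(2000):                           # fixed-step gradient descent
    residual = Kbar - curvatures(u)
    if np.max(np.abs(residual)) < 1e-8:
        break
    u += 0.05 * residual
print("final curvature error:", np.max(np.abs(Kbar - curvatures(u))))
\end{verbatim}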
\subsection{Isometric Immersion and Meromorphic Quartic Differential}
We have obtained a canonical homology group basis $\{a_1,\dots,a_g,b_1,\dots,b_g\}$. The union of the basis loops forms a cut graph $\Gamma$ of the mesh. For each pole or zero $p_i$ in $D$, we find a shortest path $\gamma_i$ connecting $p_i$ to the cut graph; furthermore, all such shortest paths $\gamma_i$ are disjoint. Then we slice $M$ along the cut graph and the shortest paths, $\Gamma \cup \left(\bigcup_{i=1}^k \gamma_i\right)$, to obtain a topological disk $\tilde{M}$.
Then we flatten $\tilde{M}$ face by face using the metric obtained by the discrete surface Ricci flow. This produces an immersion $\tau:\tilde{M}\to\mathbb{C}$. On the complex plane there is a canonical differential $(dz)^4$; the pull back $\tau^*(dz)^4$ is a meromorphic quartic differential defined on $M$. We can use $\tau$ as a parameterization, and use checkerboard texture mapping to visualize the quartic differential.
\begin{figure}
\caption{In the left two frames, the red curves form the motor-graph, the blue curves are the original cut graph. The right two frames show the T-Mesh of the Buddha surface.}
\label{fig:motorgraph_TMesh}
\end{figure}
\subsection{T-Mesh Generation}
We trace the critical trajectories of the meromorphic quartic differential $\tau^*(dz)^4$, denoted as $\{\gamma_1(s_1),\gamma_2(s_2),\cdots,\gamma_n(s_n)\}$, where $s_k$ is the arc length parameter of $\gamma_k$; their images $\tau(\gamma_k)$ are the horizontal and vertical lines through the zeros and poles on the parameter plane. If $\gamma_i(s_i)$ intersects $\gamma_j(s_j)$ at a point $p$ and $s_i<s_j$, then $\gamma_j$ stops at $p$ while $\gamma_i$ continues; a schematic sketch of this stopping rule is given below. This procedure generates the \emph{motor graph} on the surface, as shown in the left two frames of Fig.~\ref{fig:motorgraph_TMesh}.
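The sketch below is a schematic version of the stopping rule. It assumes that the pairwise crossing points of the traced trajectories and their arc-length parameters have been computed geometrically beforehand and are supplied as events $(i, j, s_i, s_j)$ with $s_i < s_j$; only the bookkeeping of who stops where is shown.

\begin{verbatim}
import math

def motor_graph_stops(events, n_trajectories):
    """For each trajectory, return the arc length at which it stops
    (math.inf if it never stops).  Each event (i, j, s_i, s_j) with
    s_i < s_j means gamma_i reaches the crossing point at arc length s_i
    and gamma_j at arc length s_j; events are assumed precomputed."""
    stop = [math.inf] * n_trajectories
    # process crossings in the order in which the later trajectory arrives
    for i, j, s_i, s_j in sorted(events, key=lambda e: e[3]):
        # gamma_j stops here only if gamma_i's traced curve actually
        # reaches the crossing point and gamma_j has not stopped earlier
        if s_i <= stop[i] and s_j < stop[j]:
            stop[j] = s_j
    return stop
\end{verbatim}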
The surface is partitioned into rectangular patches as shown in the right two frames of Fig.~\ref{fig:motorgraph_TMesh}. Each surface patch is parameterized to a planar rectangle, as shown in Fig.~\ref{fig:buddha_parameterization}. The corresponding surface patch and the planar rectangle are rendered using the same color.
\begin{figure}
\caption{Each surface patch is parameterized to a planar rectangle.}
\label{fig:buddha_parameterization}
\end{figure}
\section{Experimental Results}
\label{sec:experiments}
In this section, we briefly report our experimental results. All the experiments were conducted on a PC with a 1.60GHz Intel(R) Core(TM) i5-8250U CPU, 16.0GB RAM and a 64-bit Windows 10 operating system. The running times are reported in Table~\ref{tab:Runing_time}.
\subsection{T-Mesh Generation}
The singularities and the resulting T-meshes are illustrated in the figures. As shown in Fig.~\ref{fig:singularities}, the singularities surrounded by red, blue and green circles represent the indices $+1$, $-1$ and $-2$ respectively. The points surrounded by white circles are the T-junctions of the T-mesh. Different surface patches are color-encoded differently. By carefully examining the texture patterns in Fig.~\ref{fig:singularities}, we can see that the adjacent patches differ by horizontal and vertical translations composed with rotations by angle $k\frac{\pi}{2}$, $k\in \mathbb{Z}$. Therefore, we can construct T-Splines on these T-meshes directly.
\begin{figure}
\caption{Singularities.}
\label{fig:singularities}
\end{figure}
\begin{figure}
\caption{Singularities and the T-Mesh of the Loveme model.}
\label{fig:examples_1}
\end{figure}
\begin{figure}
\caption{Singularities and the T-meshes of high genus surfaces.}
\label{fig:examples_2}
\end{figure}
\begin{figure}
\caption{Singularities and T-Meshes of the surfaces with complicated geometries.}
\label{fig:examples_3}
\end{figure}
\begin{figure}
\caption{Singularities and T-Meshes of various surfaces.}
\label{fig:examples_4}
\end{figure}
\begin{figure}
\caption{Singularities and T-Meshs of high genus surfaces.}
\label{fig:examples_5}
\end{figure}
\begin{figure}
\caption{The motor-graph and T-mesh of the genus $3$ 2kids surface.}
\label{fig:2kids}
\end{figure}
\begin{figure}
\caption{Singularities and T-Meshes of high genus surfaces.}
\label{fig:example_6}
\end{figure}
\begin{center}
\begin{table*}[h!]
\caption{Running time}
\label{tab:Runing_time}
\resizebox{\textwidth}{35mm}{
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c}
\hline\hline
\multicolumn{1}{c|}{\multirow{2}*{\textbf{Model}}}
& \multicolumn{4}{|c|}{\textbf{Mesh Information}}
& \textbf{Holo 1-form}
& \textbf{Holo zeros}
& \multicolumn{3}{|c|}{\textbf{Legalization of Singularities} }
& \textbf{Ricci Flow}
& \textbf{Iso. Immersion} \\
\cline{2-12}
\multicolumn{1}{l|}{~}
&\textbf{\#Vertices}
&\textbf{\#Edges}
&\textbf{\#Faces}
&\textbf{Genus}
& \textbf{Time(sec.)}
& \textbf{Time(sec.)}
&\textbf{Error Threshold}
&\textbf{Iterations}
&\textbf{Time(sec.)}
&\textbf{Time(sec.)}
&\textbf{Time(sec.)} \\
\hline
\multicolumn{1}{l|}{\textbf{Kitten}}
& \textbf{10.2k}
& \textbf{30.7k}
& \textbf{20.4k}
& \textbf{1}
& \textbf{10.247}
& ---
& \textbf{3.0e-4}
& \textbf{2132}
& \textbf{0.002}
& \textbf{0.013}
& \textbf{0.006} \\
\multicolumn{1}{l|}{\textbf{Ornament}}
& \textbf{28.8k}
& \textbf{86.5k}
& \textbf{57.7k}
& \textbf{1}
& \textbf{47.954}
& \textbf{---}
& \textbf{3.0e-4}
& \textbf{3382}
& \textbf{0.005}
& \textbf{6.177}
& \textbf{0.014} \\
\multicolumn{1}{l|}{\textbf{Rockerarm}}
& \textbf{40.2k}
& \textbf{120.5k}
& \textbf{80.4k}
& \textbf{1}
& \textbf{39.014}
& \textbf{---}
& \textbf{3.0e-4}
& \textbf{2049}
& \textbf{0.004}
& \textbf{12.134}
& \textbf{0.021} \\
\multicolumn{1}{l|}{\textbf{Dancer}}
& \textbf{43.0k}
& \textbf{129.1k}
& \textbf{86.0k}
& \textbf{1}
& \textbf{43.913}
& \textbf{---}
& \textbf{3.0e-4}
& \textbf{6069}
& \textbf{0.005}
& \textbf{10.0659}
& \textbf{0.027} \\
\multicolumn{1}{l|}{\textbf{Bull}}
& \textbf{75.8k}
& \textbf{227.3k}
& \textbf{151.5k}
& \textbf{1}
& \textbf{95.904}
& \textbf{---}
& \textbf{3.0e-4}
& \textbf{2313}
& \textbf{0.003}
& \textbf{18.160}
& \textbf{0.054} \\
\multicolumn{1}{l|}{\textbf{Sculpt}}
& \textbf{4.0k}
& \textbf{12.2k}
& \textbf{8.0k}
& \textbf{2}
& \textbf{4.828}
& \textbf{0.029}
& \textbf{1.0e-3}
& \textbf{37601}
& \textbf{0.052}
& \textbf{1.485}
& \textbf{0.002} \\
\multicolumn{1}{l|}{\textbf{Starcup}}
& \textbf{30.0k}
& \textbf{90.0k}
& \textbf{60.0k}
& \textbf{2}
& \textbf{51.682}
& \textbf{0.301}
& \textbf{3.0e-4}
& \textbf{1167}
& \textbf{0.005}
& \textbf{6.654}
& \textbf{0.013} \\
\multicolumn{1}{l|}{\textbf{Monk}}
& \textbf{38.5k}
& \textbf{115.5k}
& \textbf{77.0k}
& \textbf{2}
& \textbf{86.741}
& \textbf{4.037}
& \textbf{3.0e-4}
& \textbf{17551}
& \textbf{0.108}
& \textbf{8.624}
& \textbf{0.022} \\
\multicolumn{1}{l|}{\textbf{Hermanubis}}
& \textbf{39.9k}
& \textbf{119.8k}
& \textbf{79.9k}
& \textbf{2}
& \textbf{96.122}
& \textbf{1.441}
& \textbf{3.0e-4}
& \textbf{17540}
& \textbf{0.043}
& \textbf{10.370}
& \textbf{0.025} \\
\multicolumn{1}{l|}{\textbf{Amphora}}
& \textbf{82.6k}
& \textbf{246.5k}
& \textbf{164.3k}
& \textbf{2}
& \textbf{174.396}
& \textbf{0.883}
& \textbf{3.0e-4}
& \textbf{5129}
& \textbf{0.016}
& \textbf{21.4288}
& \textbf{0.046} \\
\multicolumn{1}{l|}{\textbf{Loveme}}
& \textbf{86.7k}
& \textbf{260.2k}
& \textbf{173.5k}
& \textbf{2}
& \textbf{191.663}
& \textbf{1.156}
& \textbf{3.0e-4}
& \textbf{5776}
& \textbf{0.020}
& \textbf{25.7747}
& \textbf{0.057} \\
\multicolumn{1}{l|}{\textbf{Buddha}}
& \textbf{59.4k}
& \textbf{178.1k}
& \textbf{118.7k}
& \textbf{3}
& \textbf{179.568}
& \textbf{2.623}
& \textbf{3.0e-4}
& \textbf{670539}
& \textbf{2.021}
& \textbf{13.117}
& \textbf{0.026} \\
\multicolumn{1}{l|}{\textbf{2Kids}}
& \textbf{61.7k}
& \textbf{185.2k}
& \textbf{123.5k}
& \textbf{3}
& \textbf{201.353}
& \textbf{6.274}
& \textbf{3.0e-4}
& \textbf{94620}
& \textbf{0.546}
& \textbf{14.633}
& \textbf{0.027} \\
\multicolumn{1}{l|}{\textbf{3Holes}}
& \textbf{65.0k}
& \textbf{195.0k}
& \textbf{130.0k}
& \textbf{3}
& \textbf{218.954}
& \textbf{0.819}
& \textbf{1.0e-3}
& \textbf{343710}
& \textbf{0.9333}
& \textbf{16.756}
& \textbf{0.032} \\
\multicolumn{1}{l|}{\textbf{Witch}}
& \textbf{75.0k}
& \textbf{225.0k}
& \textbf{150.0k}
& \textbf{4}
& \textbf{363.533}
& \textbf{10.877}
& \textbf{3.0e-4}
& \textbf{304729}
& \textbf{1.033}
& \textbf{20.343}
& \textbf{0.051} \\
\hline\hline
\multicolumn{1}{c|}{\multirow{2}*{\textbf{Hardware}}}
& \multicolumn{9}{c|}{\textbf{CPU}}
& \multicolumn{2}{c}{\textbf{RAM}}\\
\cline{2-12}
\multicolumn{1}{c|}{~}
&\multicolumn{9}{c|}{\textbf{Intel(R) Core(TM) i5-8250U CPU @ 1.60GHz}}
&\multicolumn{2}{c}{\textbf{16.0GB}}\\
\hline\hline
\end{tabular}
}
\end{table*}
\end{center}
\subsection{Abel-Jacobi and Holonomy condition verification}
\label{sec:holonomy_condition_validation}
\begin{figure}
\caption{The loops on the genus two \textbf{Garniture} model.}
\label{fig:holonomy_validation}
\end{figure}
\begin{center}
\begin{table}[h!]
\caption{The holonomy of the loops in Fig.~\ref{fig:holonomy_validation}: rotation components.}
\label{tab:holonomy_result}
\centering
\small
\renewcommand\arraystretch{0.7}
\centering
\begin{tabular}{p{4cm}<{\centering}|p{1.4cm}<{\centering}|p{1.4cm}<{\centering}|p{1.4cm}<{\centering}|p{1.4cm}<{\centering}}
\hline\hline
\textbf{Loops}
& $\mathbf{a_1}$
& $\mathbf{b_1}$
& $\mathbf{a_2}$
& $\mathbf{b_2}$ \\
\hline
\textbf{Rotation degree($^{\circ}$)}
& 90.31809
& -0.12269
& 0.19303
& 89.81468 \\
\hline
\textbf{Loops}
& $\mathbf{t_1}$
& $\mathbf{t_2}$
& $\mathbf{t_3}$
& $\mathbf{t_4}$\\
\hline
\textbf{Rotation degree($^{\circ}$)}
& 270.00047
& 540.00136
& 450.00192
& 539.99818 \\
\hline\hline
\end{tabular}
\end{table}
\end{center}
We verify the holonomy condition for a genus two surface as shown in Fig.~\ref{fig:holonomy_validation}. We compute the tunnel loops $a_1,a_2$ and the handle loops $b_1,b_2$, as well as several loops enclosing different numbers of singularities. Then we compute their holonomies by parallel transport with respect to the flat metric computed using the Ricci flow; the rotation components are reported in Table~\ref{tab:holonomy_result}. We can see that all the holonomies are very close to $k\cdot 90^\circ$, where $k$ is an integer.
Furthermore, for every surface we compute the image of the singularities under the Abel-Jacobi map; the results are reported in Table~\ref{tab:abel_jacobian_mapping_result}. We can see that all the images are very close to the zero point of the Jacobian lattice, which shows that the singularities satisfy the Abel-Jacobi condition and demonstrates the accuracy of our proposed algorithm.
\begin{center}
\begin{table*}[h!]
\caption{Abel-Jacobi mapping results}
\label{tab:abel_jacobian_mapping_result}
\centering
\small
\renewcommand\arraystretch{0.7}
\centering
\begin{tabular}{p{3cm}<{\centering}|p{8cm}<{\centering}}
\hline\hline
\textbf{Model}
& \textbf{Abel-Jacobi Mapping Result} \\
\hline
\textbf{KITTEN}
& $ \begin{pmatrix} \mathbf{ \ 2.13971e-04 \quad + \quad i \ * \ 7.09315e-05 \ } \end{pmatrix} $ \\
~&~\\
\textbf{ORNAMENT}
& $ \begin{pmatrix}\mathbf{ \ -1.09501e-08 \quad + \quad i \ * \ 5.73307e-08 \ } \end{pmatrix} $ \\
\\
\textbf{ROCKERARM}
& $ \begin{pmatrix}\mathbf{ \ -6.05103e-05 \quad + \quad i \ * \ 6.27266e-06 \ } \end{pmatrix} $ \\
\\
\textbf{DANCER}
& $ \begin{pmatrix}\mathbf{ \ -3.14143e-05 \quad + \quad i \ * \ 1.57991e-05 \ } \end{pmatrix} $ \\
\\
\textbf{BULL}
& $ \begin{pmatrix}\mathbf{ \ -1.55144e-05 \quad + \quad i \ * \ 6.56513e-06 \ } \end{pmatrix} $ \\
\\
\multirow{2}*{\textbf{SCULPT}}
& \multirow{2}*{$ \begin{pmatrix}\mathbf{ \ -3.72147e-04 \quad - \quad i \ * \ 9.82485e-04 \ }\\
\textbf{ \ 8.03122e-04 \quad + \quad i \ * \ 6.25321e-04 \ }\end{pmatrix} $ } \\
~&~\\
\\
\multicolumn{1}{c|}{\multirow{2}*{\textbf{STARCUP}}}
& \multirow{2}*{ $ \begin{pmatrix}\mathbf{ \ 4.59275e-05 \quad - \quad i \ * \ 1.27194e-04 \ }\\
\textbf{ \ 8.14751e-05 \quad - \quad i \ * \ 2.32289e-04 \ }\end{pmatrix} $ } \\
~&~\\
\\
\multicolumn{1}{c|}{\multirow{2}*{\textbf{MONK}}}
& \multirow{2}*{ $ \begin{pmatrix}\mathbf{ \ -1.37142e-05 \quad - \quad i \ * \ 1.84819e-04 \ }\\
\textbf{ \ 4.70251e-05 \quad + \quad i \ * \ 1.90921e-04 \ } \end{pmatrix} $ } \\
~&~\\
\\
\multicolumn{1}{c|}{\multirow{2}*{\textbf{HERMANUBIS}}}
& \multirow{2}*{ $ \begin{pmatrix}\mathbf{ \ -1.05753e-04 \quad - \quad i \ * \ 8.17228e-05 \ }\\
\textbf{ \ 9.29236e-05 \quad + \quad i \ * \ 4.96067e-05 \ }\end{pmatrix} $ } \\
~&~\\
\\
\multicolumn{1}{c|}{\multirow{2}*{\textbf{AMPHORA}}}
& \multirow{2}*{ $ \begin{pmatrix}\mathbf{ \ 1.16072e-04 \quad - \quad i \ * \ 1.37645e-04 \ }\\
\textbf{ \ 1.32789e-05 \quad - \quad i \ * \ 1.56983e-04 \ } \end{pmatrix} $ } \\
~&~\\
\\
\multicolumn{1}{c|}{\multirow{2}*{\textbf{LOVEME}}}
& \multirow{2}*{ $ \begin{pmatrix}\mathbf{ \ -9.65795e-05 \quad + \quad i \ * \ 3.60684e-05 \ }\\
\textbf{ \ -3.69644e-05 \quad - \quad i \ * \ 1.48141e-04 \ } \end{pmatrix} $ } \\
~&~\\
\\
\multicolumn{1}{c|}{\multirow{3}*{\textbf{BUDDHA}}}
& \multirow{3}*{ $ \begin{pmatrix}\mathbf{ \ 1.16965e-04 \quad + \quad i \ * \ 2.90814e-04 \ }\\
\textbf{ \ -1.28974e-04 \quad - \quad i \ * \ 7.77251e-06 \ }\\
\textbf{ \ 1.55074e-04 \quad - \quad i \ * \ 2.54977e-04 \ }
\end{pmatrix} $ } \\
~&~\\
~&~\\
\\
\multicolumn{1}{c|}{\multirow{3}*{\textbf{2KIDS}}}
& \multirow{3}*{ $ \begin{pmatrix}\mathbf{ \ 2.90402e-04 \quad - \quad i \ * \ 2.89651e-04 \ }\\
\textbf{ \ 2.13554e-04 \quad - \quad i \ * \ 5.80312e-05 \ }\\
\textbf{ \ 1.70373e-04 \quad + \quad i \ * \ 2.77541e-04 \ }
\end{pmatrix} $ } \\
~&~\\
~&~\\
\\
\multicolumn{1}{c|}{\multirow{3}*{\textbf{3HOLES}}}
& \multirow{3}*{ $ \begin{pmatrix}\mathbf{ \ 6.85741e-05 \quad + \quad i \ * \ 9.32962e-04 \ }\\
\textbf{ \ 3.55608e-05 \quad - \quad i \ * \ 8.67721e-04 \ }\\
\textbf{ \ -1.36089e-05 \quad + \quad i \ * \ 5.60214e-04 \ }
\end{pmatrix} $ } \\
~&~\\
~&~\\
\\
\multicolumn{1}{c|}{\multirow{4}*{\textbf{WITCH}}}
& \multirow{4}*{ $ \begin{pmatrix}\mathbf{ \ -1.29378e-04 \quad - \quad i \ * \ 2.40348e-04 \ }\\
\textbf{ \ -2.75192e-04 \quad + \quad i \ * \ 1.98399e-04 \ }\\
\textbf{ \ 2.23835e-04 \quad + \quad i \ * \ 2.55373e-04 \ }\\
\textbf{ \ -2.64736e-04 \quad + \quad i \ * \ 2.39598e-04 \ }
\end{pmatrix} $ } \\
~&~\\
~&~\\
~&~\\
\hline\hline
\end{tabular}
\end{table*}
\end{center}
\section{Conclusion}
\label{sec:conclusion}
This work proposes a rigorous and practical algorithm for generating meromorphic quartic differentials for the purpose of quad-mesh generation. We give a variational approach that adjusts the divisor by integer programming to satisfy the Abel-Jacobi condition.
Our experimental results demonstrate that the method can handle surfaces with complicated topology and geometry. The algorithm is efficient and accurate. The resulting T-meshes can be used to construct T-Splines directly.
In the future, we will further explore how to convert the T-meshes to T-Splines, and further optimize the configurations of singularities to improve the quality of the Spline surfaces.
We will also design algorithms for adjusting the conformal structure of the surface to ensure the finiteness of the trajectories of meromorphic differentials and automatic quad-mesh generation.
\section*{Acknowledgment}
The authors thank Dr. Tom Hughes and his Ph.D. candidate Kendric Shpeherd for their encouragement and inspiring discussions.
This work is partially supported by NSFC No. 61907005, 61720106005, 61772105 and 61936002.
\end{document}
\begin{document}
\maketitle
\centerline{\scshape Kim Knudsen and Aksel Kaastrup Rasmussen$^*$}
{\footnotesize
\centerline{Technical University of Denmark}
\centerline{Department of Applied Mathematics and Computer Science}
\centerline{DK-2800 Kgs. Lyngby, Denmark}
}
\begin{abstract}
Electrical Impedance Tomography gives rise to the severely ill-posed
Calderón problem of determining the electrical conductivity distribution
in a bounded domain from knowledge of the associated Dirichlet-to-Neumann
map for the governing equation. The uniqueness and stability questions
for the three-dimensional problem were largely answered in the affirmative in the 1980’s using complex
geometrical optics solutions, and this led further to a direct reconstruction method relying on a non-physical scattering transform. In this paper, the reconstruction problem is taken one step further towards practical applications by considering data contaminated by noise. Indeed, a regularization strategy for the three-dimensional Calderón problem is presented based on a suitable and explicit truncation of the scattering transform. This gives a certified, stable and direct reconstruction method that is robust to small perturbations of the data. Numerical tests on simulated noisy data illustrate the feasibility and regularizing effect of the method, and suggest that the numerical implementation performs better than predicted by theory.
\end{abstract}
\section{Introduction}
Electrical Impedance Tomography (EIT) provides a noninvasive method of obtaining information on the electrical conductivity distribution of electric conductive media from exterior electrostatic measurements of currents and voltages. There are many applications in medical imaging including early detection of breast cancer \cite{cherepenin2002,Zou200379}, hemorrhagic stroke detection \cite{malone2014a,goren2018a}, pulmonary function monitoring \cite{Adler19971762,frerichs2019a,Leonhardt20121917} and targeting control in transcranial brain stimulation \cite{schmidt2015a}. Applications also include industrial testing, for example, crack damage detection in cementitious structures \cite{Hou2009ElectricalIT,Hallaji2014}, and subsurface geophysical imaging \cite{zhao2013a}. The mathematical problem of EIT is called the Calderón problem and was first formulated by A.P. Calderón in 1980 \cite{calderoninverse} as follows:
Consider a bounded Lipschitz domain $\Omega \subset \mathbb{R}^3$ filled with a conductor with a distribution $\gamma \in L^\infty(\Omega)$, $\gamma\geq c >0$. Under the assumption of no sinks or sources of current in the domain, applying an electrical surface potential $f \in H^{1/2}(\partial \Omega)$ induces an electrical potential $u \in H^1(\Omega)$, which uniquely solves the conductivity equation
\begin{equation} \label{eq:cond}
\begin{aligned}
\nabla \cdot (\gamma \nabla u) &=0 & & \text { in } \Omega, \\
u &=f & & \text { on } \partial \Omega.
\end{aligned}
\end{equation}
The Dirichlet-to-Neumann map $\Lambda_{\gamma}: H^{1/2}(\partial \Omega) \rightarrow H^{-1/2}(\partial \Omega)$ is defined as
\begin{equation}
\Lambda_{\gamma} f=\gamma \partial_{\nu} u|_{\partial \Omega},
\end{equation}
and associates a voltage potential on the boundary with a corresponding normal current flux. All pairs $(f,\gamma \partial_{\nu} u|_{\partial \Omega})$, or equivalently the Dirichlet-to-Neumann map, constitute the available {data}.
The forward problem is the problem of determining the Dirichlet-to-Neumann map given the conductivity, and it amounts to solving the boundary value problem \eqref{eq:cond} for all possible $f$. The Calderón problem now asks whether $\gamma$ is uniquely determined by $\Lambda_\gamma$, and how to stably reconstruct $\gamma$ from $\Lambda_\gamma$, if possible.
Uniqueness and reconstruction were considered and solved for sufficiently regular conductivity distributions in dimension $n\geq 3$ in a series of papers \cite{Nachman1988595, nachman1988a, Novikov1988263, sylvester1987a, carorogers2016}. The results are based on complex geometrical optics (CGO) solutions to a Schrödinger equation derived from \eqref{eq:cond}. The first step of the reconstruction method is to recover the CGO solutions on $\partial \Omega$ by solving a weakly singular boundary integral equation with an exponentially growing kernel. The second step is obtaining the so-called non-physical scattering transform, which approximates in a large complex frequency limit the Fourier transform of $\gamma^{-1/2}\Delta \gamma^{1/2}$. Applying the inverse Fourier transform and solving a boundary value problem yields $\gamma$ in the third step. Numerical algorithms following the scattering transform approach in dimension $n\geq 3$ have been developed by approximating the scattering transform \cite{bikowski2011a, knudsen2011a,hamilton2020a,boverman2009a}, by approximating the boundary integral equations \cite{delbary2012a}, and for the full theoretical reconstruction algorithm \cite{delbary2014a}. A reconstruction algorithm for conductivity distributions close to a constant has been suggested, but not implemented \cite{cornean2006a}.
A similar scattering transform approach combined with tools from complex analysis enables uniqueness and reconstruction \cite{nachman1996a} for the two-dimensional Calderón problem. More recently, a final affirmative answer was given to the question of uniqueness for a general bounded conductivity distribution in two dimensions \cite{astala2006a}. Numerical algorithms and implementation for the two-dimensional problem have been considered \cite{knudsen2003a, knudsen2007a, mueller2003a, mueller2002a, siltanen2001a, siltanen2000a} and a regularization analysis and full implementation was given in \cite{Knudsen2009a}. We stress that in any practical case the Calderón problem is three-dimensional, since applying potentials on the boundary of a planar cross section of $\Omega$ leads to current flow leaving the plane.
The Calderón problem is known to be severely ill posed. Conditional stability estimates exist \cite{alessandrini1988a,alessandrini1990a} of the form
\begin{equation}\label{eq:stability}
\|\gamma_1-\gamma_2\|_{L^\infty(\Omega)} \leq f(\|\Lambda_{\gamma_1}-\Lambda_{\gamma_2}\|_{Z}),
\end{equation}
for an appropriate function space $Z$ and a continuous function $f$ of logarithmic type with $f(0)=0$. Furthermore, logarithmic stability is optimal \cite{mandache2001a}. While this is relevant for the theoretical reconstruction, there is no guarantee that a practically measured, perturbed Dirichlet-to-Neumann map $\Lambda_\gamma^\varepsilon$ is the Dirichlet-to-Neumann map of any conductivity. We emphasize that in any practical case we cannot have infinite-precision data, but rather a noisy finite approximation. Consequently, any computational algorithm for the problem needs regularization.
Classical regularization theory for inverse problems is given in \cite{engl1996a,kirsch1996a} with a focus on least squares formulations.
A common approach to regularization for the Calderón problem is based on iterative regularized least-squares, and convergence of such methods is analyzed in \cite{dobson1992, rondi2008a, rondi2016a, lechleiter2008a, jin2012a} in the context of EIT.
A quantitative comparison of CGO-based methods and iterative regularized methods is given in \cite{hamilton2020a}. Reconstruction by statistical inversion is developed in \cite{kaipio2000a, dunlop2016a}, where in the latter, the problem is posed in an infinite-dimensional Bayesian framework. A different statistical approach to the Calderón problem shows stable reconstruction of the surface conductivity on a domain given noisy data \cite{caro2017}. Convergence estimates in probability of a statistical estimator (posterior mean) to the true conductivity given noisy data with a sufficiently small noise level are considered in \cite{kweku2019}.
In this paper we provide a direct CGO-based regularization strategy with an admissible parameter choice rule for reconstruction in the three-dimensional Calderón problem under the following assumptions:
\begin{assumption}
For simplicity of exposition, we assume the domain of interest $\Omega$ is embedded in the unit ball in $\mathbb{R}^3$. Furthermore, we assume $\partial \Omega$ is smooth.
\end{assumption}
\begin{assumption}[Parameter and data space]\label{assumption2}
We consider the forward map $F:D(F)\subset L^\infty(\Omega) \rightarrow Y$, $\gamma\mapsto \Lambda_\gamma$ with the following definition of $D(F)$.
Let $\Pi>0$ and $0<\rho<1$, then $\gamma \in D(F)\subset L^\infty(\Omega)$ satisfies
\begin{equation}\label{Df}
\begin{aligned}
\|\gamma\|_{C^2(\overline{\Omega})}&\leq \Pi,\\
\gamma(x) &\geq \Pi^{-1} \quad \text{for all $x\in \Omega$,}\\
\gamma(x) &\equiv 1 \qquad \, \text{for $\mathrm{dist}(x,\partial \Omega)< \rho$,}
\end{aligned}
\end{equation}
where we assume knowledge of $\Pi$ and $\rho$. We continuously extend $\gamma\equiv 1$ outside $\Omega$. The data space $Y\subset \mathcal{L}(H^{1/2}(\partial \Omega), H^{-1/2}(\partial \Omega))$ consists of bounded linear operators $\Lambda:H^{1/2}(\partial\Omega)\rightarrow H^{-1/2}(\partial\Omega)$ that are Dirichlet-to-Neumann alike in the sense
\begin{equation}
\begin{aligned}
\Lambda(1)&=0,\\
\int_{\partial \Omega} (\Lambda f)(x) \, d\sigma(x) &= 0 \quad \text{for every $f \in H^{1/2}(\partial \Omega)$.}
\end{aligned}
\end{equation}
We equip $D(F)$ and $Y$ with the inherited norms $\|\cdot\|_{D(F)} = \|\cdot \|_{L^\infty(\Omega)}$ and $\|\cdot\|_Y = \|\cdot\|_{H^{1/2}(\partial \Omega)\rightarrow H^{-1/2}(\partial \Omega)}$.
\end{assumption}
There is no reason to believe that the regularity assumptions on $\gamma$ are optimal; in fact, we expect that the strategy generalizes to the less regular setting of \cite{carorogers2016}.
We recall the adaptation of the definitions in \cite{engl1996a,kirsch1996a} presented in \cite{Knudsen2009a} of a regularization strategy in the nonlinear setting.
\begin{definition}\label{def:reg1}
A family of continuous mappings $\mathcal{R}_\alpha:Y\rightarrow L^\infty(\Omega)$, parametrized by the \textit{regularization parameter} $0<\alpha<\infty$, is called a \textit{regularization strategy} for $F$ if
\begin{equation}\label{reqweak}
\lim_{\alpha \rightarrow 0} \|\mathcal{R}_\alpha \Lambda_\gamma - \gamma\|_{L^\infty(\Omega)}=0,
\end{equation}
for each fixed $\gamma \in D(F)$.
\end{definition}
We define the perturbed Dirichlet-to-Neumann map as
\begin{equation}\label{eq:perturbdata}
\Lambda_\gamma^\varepsilon = \Lambda_\gamma+\mathcal{E},
\end{equation}
with $\mathcal{E}\in Y$ and $\|\mathcal{E}\|_Y \leq \varepsilon$ for some $\varepsilon>0$. We call $\varepsilon$ the noise level, since we eventually simulate perturbations $\mathcal{E}$ as random noise.
\begin{definition}\label{def:reg2}
A regularization strategy $\mathcal{R}_\alpha: Y\rightarrow L^\infty(\Omega)$, $0<\alpha<\infty$, is called \textit{admissible} if
\begin{equation}\label{alphaprop}
\alpha(\varepsilon)\rightarrow 0 \quad \text{ as } \quad \varepsilon \rightarrow 0,
\end{equation}
and for any fixed $\gamma \in \mathcal{D}(F)$ we have
\begin{equation}\label{reqstrong}
\sup_{\Lambda_\gamma^\varepsilon\in Y} \{\|\mathcal{R}_{\alpha(\varepsilon)}\Lambda_\gamma^\varepsilon -\gamma\|_{L^\infty(\Omega)}\mid\|\Lambda_\gamma^\varepsilon-\Lambda_\gamma\|_Y\leq \varepsilon\}\rightarrow 0\quad\text{ as }\quad \varepsilon \rightarrow 0.
\end{equation}
\end{definition}
The topology in which we require convergence is essential; we require convergence in the strong operator topology, but not in the norm topology. The main result of this paper is then as follows.
\begin{theorem}\label{maintheorem}
Suppose $\Pi>0$ and $0<\rho<1$ are given and let $D(F)$ be as in Assumption \ref{assumption2}. Then there exists $\varepsilon_0>0$, dependent only on $\Pi$ and $\rho$, such that the family $\mathcal{R}_\alpha$ defined by \eqref{def:regstrat} is an admissible regularization strategy for $F$ with the following choice of regularization parameter:
\begin{equation}\label{alphadef}
\alpha(\varepsilon)=\begin{cases}
(-1/11\log(\varepsilon))^{-1/p} & \text{ for } 0<\varepsilon < \varepsilon_0,\\
\frac{\varepsilon}{\varepsilon_0}(-1/11\log({\varepsilon_0}))^{-1/p} & \text{ for } \varepsilon \geq \varepsilon_0,
\end{cases}
\end{equation}
with $p>3/2$.
\end{theorem}
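For concreteness, the parameter choice rule \eqref{alphadef} and the corresponding truncation radius $M(\varepsilon)=\alpha(\varepsilon)^{-1}$ are straightforward to evaluate numerically. The following Python sketch is our own illustration of \eqref{alphadef}; the values of $\varepsilon_0$ and $p$ used in the example are placeholders and must in practice be supplied from the \textit{a priori} knowledge $\Pi$ and $\rho$.
\begin{verbatim}
import numpy as np

def alpha(eps, eps0, p):
    """Parameter choice rule alpha(eps) of the main theorem."""
    if eps <= 0:
        raise ValueError("the noise level must be positive")
    if eps < eps0:
        return (-np.log(eps) / 11.0) ** (-1.0 / p)
    # linear continuation for eps >= eps0 keeps alpha continuous and increasing
    return (eps / eps0) * (-np.log(eps0) / 11.0) ** (-1.0 / p)

def truncation_radius(eps, eps0, p):
    """Truncation radius M(eps) = 1/alpha(eps) used for eps < eps0."""
    return 1.0 / alpha(eps, eps0, p)

if __name__ == "__main__":
    eps0, p = 1e-2, 1.6        # illustrative placeholder values only
    for eps in [1e-3, 1e-4, 1e-6]:
        print(eps, alpha(eps, eps0, p), truncation_radius(eps, eps0, p))
\end{verbatim}
Note that $\alpha(\varepsilon)$ decreases only logarithmically as $\varepsilon\rightarrow 0$, reflecting the severe ill-posedness of the problem.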
This gives theoretical justification for practical reconstruction in the three-dimensional Calderón problem. This is the first deterministic regularization analysis for the three-dimensional Calderón problem known to the authors. Similar results have been shown for the related two-dimensional D-bar reconstruction \cite{Knudsen2009a}, and we will in fact adapt the spectral truncation from there to our setting. {This extension is non-trivial in part because there are no existence and uniqueness guarantees for the CGO solutions that are independent of the magnitude of the complex frequency in the three-dimensional case. In addition, while the two-dimensional D-bar method enjoys the continuous dependence of the solution to the D-bar equation on the scattering transform, it is not obvious when the frequency information of $\gamma$ is stably recovered from the scattering transform corresponding to a perturbed Dirichlet-to-Neumann map in the three-dimensional case.}
We denote the set of bounded linear operators between Banach spaces $X$ and $Y$ by $\mathcal{L}(X,Y)$ and use $\mathcal{L}(X):=\mathcal{L}(X,X)$. We denote the Euclidean matrix operator norm by $\|\cdot\|_N := \|\cdot\|_{\mathbb{C}^{(N+1)^2}\rightarrow \mathbb{C}^{(N+1)^2}}$. The operator norm of $A:H^{s}(\partial \Omega)\rightarrow H^t(\partial \Omega)$ is denoted by $\|A\|_{s,t}$. We reserve $C$ for generic constants and $C_1,C_2,\hdots$ for constants of specific value. Finally, exponential functions of the form $e^{ix\cdot\zeta}$, $x\in \mathbb{R}^3$, $\zeta\in \mathbb{C}^3$, are denoted by $e_\zeta(x)$.
In Section \ref{sec:2}, the full non-linear reconstruction algorithm for the three-dimensional Calderón problem is given. Section \ref{sec:3} gives technical estimates regarding the boundary integral equation and the scattering transform and provides a regularizing method for perturbed data with $\varepsilon$ sufficiently small. Then Section \ref{sec:35} extends continuously the method to a regularization strategy $\mathcal{R}_\alpha$ defined on $Y$ and proves Theorem \ref{maintheorem}. In Section \ref{sec:4}, the necessary numerical details concerning the representation of the Dirichlet-to-Neumann map and computation of the relevant norm are given. In addition, a noise model is given. Section \ref{sec:5} presents and discusses numerical results of noise tests with a piecewise constant conductivity distribution using an implementation given in \cite{delbary2014a}, which is available from the corresponding author by request.
\section{The full non-linear reconstruction method}\label{sec:2}
Let $v=\gamma^{1/2}u$; then $v$ is a solution to the Schrödinger equation
\begin{equation}\label{eq:schrod}
\begin{aligned}
(-\Delta+q) v &=0 \quad \text { in } \Omega, \\
v &=g \quad \text { on } \partial \Omega,
\end{aligned}
\end{equation}
with $q=\gamma^{-1/2}\Delta\gamma^{1/2}$ if and only if $u$ is a solution to \eqref{eq:cond} with $f=\gamma^{-1/2}g$. Note that in our setting $q=0$ near $\partial \Omega$, that $q$ is extended continuously by $q\equiv 0$ outside $\Omega$, and furthermore that $\Lambda_q g = \partial_\nu v = \Lambda_\gamma f$. The reconstruction method considered here is based on CGO solutions $\psi_\zeta$ to \eqref{eq:schrod}, which solve
\begin{equation}\label{cgo1}
(-\Delta+q)\psi_\zeta = 0 \quad \text{ in } \mathbb{R}^3,
\end{equation}
and satisfy $\psi_\zeta(x) = e^{ix\cdot \zeta}(1+r_\zeta(x))$. Here the complex frequency $\zeta \in \mathbb{C}^3$ satisfies $\zeta \cdot \zeta = 0$, making $e^{ix\cdot \zeta}$ harmonic, and the remainder $r_\zeta$ belongs to certain weighted $L^2$ spaces.
In the three-dimensional case, existence and uniqueness of CGO solutions have been shown for large complex frequencies,
\begin{equation}\label{largezeta}
|\zeta|>C_0\|q\|_{L^\infty(\Omega)}=:D_q
\end{equation}
for some constant $C_0>0$, or alternatively for $|\zeta|$ small \cite{sylvester1987a,cornean2006a}. The analysis involves the Faddeev Green's function
\begin{equation}
G_\zeta(x) := e^{i\zeta\cdot x}g_\zeta(x)\qquad g_\zeta(x):=\frac{1}{(2\pi)^3}\int_{\mathbb{R}^3}\frac{e^{ix\cdot\xi}}{|\xi|^2+2\xi\cdot \zeta}\,d\xi,
\end{equation}
where $g_\zeta$ is defined in the sense of the inverse Fourier transform of a tempered distribution and interpretable as a fundamental solution of $(-\Delta-2i\zeta\cdot \nabla)$.
{Boundedness of convolution by $g_\zeta$ on $\Omega$ is well known \cite{sylvester1987a,brown1996a,salo2006a}:
}
\begin{equation}\label{convgzeta}
\|g_\zeta * f \|_{H^{s}(\Omega)}\leq C|\zeta|^{s-1}\|f\|_{L^2(\Omega)}, \quad s\in \{0,1,2\},
\end{equation}
where $|\zeta|$ is bounded away from zero, and $C$ is independent of $\zeta$ and $f$.
The non-physical scattering transform is defined for all those $\zeta$ that give rise to a unique CGO solution $\psi_\zeta$ as
\begin{equation}\label{eq:tformeq}
\mathbf{t}(\xi, \zeta)=\int_{\mathbb{R}^{3}} e^{-i x \cdot(\xi+\zeta)} \psi_{\zeta}(x) q(x)\, d x, \quad \xi \in \mathbb{R}^3.
\end{equation}
It is useful to see the scattering transform as a non-linear Fourier transform of the potential $q$. Indeed, for $|\zeta|>D_q$ we have
\begin{equation}\label{lemmaont}
|\mathbf{t}(\xi,\zeta)-\hat q(\xi)| \leq C\|q\|_{L^{{\infty}}(\Omega)}^2|\zeta|^{-1},
\end{equation}
for all $\xi \in \mathbb{R}^3$, where $C$ is independent of $\zeta$ and $q$. Whenever $(\zeta+\xi)\cdot (\zeta+\xi)=0$, integration by parts in \eqref{eq:tformeq} yields
\begin{equation}\label{scatter2}
\mathbf{t}(\xi, \zeta)=\int_{\partial \Omega} e^{-i x \cdot(\xi+\zeta)}(\Lambda_{\gamma}-\Lambda_{1})\psi_\zeta(x)\, d\sigma(x),
\end{equation}
where $d\sigma$ denotes the surface measure on $\partial \Omega$. For fixed $\xi \in \mathbb{R}^3$ this gives rise to the set $\mathcal{V}_\xi = \{\zeta\in \mathbb{C}^3\setminus \{0\} \mid \zeta\cdot \zeta = 0,\, (\zeta+\xi)\cdot (\zeta+\xi) = 0\}$ parametrized by
\begin{equation}\label{zetaxieq}
\zeta(\xi)=\left(-\frac{\xi}{2}+\left(\kappa^{2}-\frac{|\xi|^{2}}{4}\right)^{1 / 2} k^{\perp\perp}\right)+i \kappa k^{\perp},
\end{equation}
where $\kappa\geq \frac{|\xi|}{2}$ and $k^\perp, k^{\perp\perp} \in \mathbb{R}^3$ are unit vectors such that $\{\xi, k^\perp, k^{\perp\perp} \}$ is an orthogonal set \cite{delbary2014a}. Note that for $\zeta(\xi)\in \mathcal{V}_\xi$ and $\kappa\geq \frac{|\xi|}{2}$ we have $|\zeta(\xi)|= \sqrt{2}\kappa$; consequently $\lim_{\kappa\rightarrow \infty} |\zeta(\xi)|=\infty$.
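As an illustration of \eqref{zetaxieq}, given $\xi\neq 0$ and $\kappa\geq|\xi|/2$ one may pick any unit vectors $k^\perp,k^{\perp\perp}$ orthogonal to $\xi$ and to each other and check the defining relations of $\mathcal{V}_\xi$ numerically. The following Python sketch is our own check of these algebraic identities and is not taken from the implementation of \cite{delbary2014a}.
\begin{verbatim}
import numpy as np

def zeta_of_xi(xi, kappa):
    """Construct zeta(xi) in V_xi from the parametrization, for kappa >= |xi|/2."""
    xi = np.asarray(xi, dtype=float)
    nxi = np.linalg.norm(xi)
    assert nxi > 0 and kappa >= nxi / 2
    # unit vectors k_perp, k_perpperp orthogonal to xi and to each other
    trial = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(trial, xi)) > 0.9 * nxi:
        trial = np.array([0.0, 1.0, 0.0])
    k_perp = np.cross(xi, trial)
    k_perp /= np.linalg.norm(k_perp)
    k_perpperp = np.cross(xi, k_perp) / nxi
    real_part = -xi / 2 + np.sqrt(kappa**2 - nxi**2 / 4) * k_perpperp
    return real_part + 1j * kappa * k_perp

def bilinear(a, b):
    """Non-Hermitian dot product a.b appearing in the condition zeta.zeta = 0."""
    return np.sum(a * b)

if __name__ == "__main__":
    xi, kappa = np.array([1.0, 2.0, -0.5]), 3.0
    z = zeta_of_xi(xi, kappa)
    print(abs(bilinear(z, z)))                          # ~ 0
    print(abs(bilinear(z + xi, z + xi)))                # ~ 0
    print(abs(np.linalg.norm(z) - np.sqrt(2) * kappa))  # ~ 0
\end{verbatim}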
For each fixed $\zeta$ the trace of the CGO solution $\psi_\zeta|_{\partial \Omega}$ is recoverable from the boundary integral equation
\begin{equation}\label{bie}
\psi_{\zeta}|_{\partial \Omega}+\mathcal{S}_{\zeta}\left(\Lambda_{\gamma}-\Lambda_{1}\right) (\psi_{\zeta}|_{\partial \Omega})=e_{\zeta}|_{\partial \Omega},
\end{equation}
where $\mathcal{S}_{\zeta}: H^{-1/2}(\partial \Omega)\rightarrow H^{1/2}(\partial \Omega)$ is {the boundary single layer operator} defined by
\begin{equation}\label{fsinglelayer2}
\left(\mathcal{S}_{\zeta} \varphi\right)(x)=\int_{\partial \Omega} G_{\zeta}(x-y) \varphi(y) d \sigma(y),\quad x \in \partial\Omega.
\end{equation}
{With $\mathcal{S}_0$ we denote the boundary single layer operator corresponding to the usual Green’s function $G_0$ for the Laplacian. Occasionally we use the same notation when $x \in \mathbb{R}^3\setminus \partial \Omega$ and note it is well known that $\mathcal{S}_0 \varphi$ and hence $\mathcal{S}_\zeta \varphi$ is continuous in $\mathbb{R}^3$ \cite{colton1992a}.} We let
$$B_\zeta:=[I+\mathcal{S}_{\zeta}\left(\Lambda_{\gamma}-\Lambda_{1}\right)],$$
denote the boundary integral operator and we note the boundary integral equation \eqref{bie} is a uniquely solvable Fredholm equation of the second kind for $|\zeta|>D_q$ \cite{nachman1996a}. This gives a method of recovering the Fourier transform of $q$ in every frequency through the scattering transform \eqref{scatter2} as $|\zeta|\rightarrow \infty$. This method of reconstruction for the Calderón problem in three dimensions was first explicitly given in \cite{nachman1988a,Novikov1988263}. We summarize the method in three steps.
\begin{method}\label{method:1} CGO reconstruction in three dimensions
\begin{description}
\item[\textbf{Step} $\mathbf{1}$] Fix $\xi\in \mathbb{R}^3$ and solve the boundary integral equation \eqref{bie} for all $\zeta(\xi)\in \mathcal{V}_\xi$. Compute $\mathbf{t}(\xi,\zeta(\xi))$ by \eqref{scatter2}.
\item[\textbf{Step} $\mathbf{2}$] Compute $\hat{q}(\xi)$ by
\begin{equation}
\lim_{|\zeta(\xi)| \rightarrow \infty} \mathbf{t}(\xi,\zeta(\xi)) = \hat q(\xi), \quad \xi\in\mathbb{R}^3,
\end{equation}
and $q(x)$ by the inverse Fourier transform.
\item[\textbf{Step} $\mathbf{3}$] Solve the boundary value problem
\begin{equation}
\begin{aligned}
(-\Delta+q) \gamma^{1 / 2} &=0 \quad \text { in } \Omega, \\
\gamma^{1 / 2} &=1 \quad \text { on } \partial \Omega,
\end{aligned}
\end{equation}
and extract $\gamma$.
\end{description}
\end{method}
We remark that it is sufficient to solve the boundary integral equation in step 1 for a sequence $\{\zeta_k(\xi)\}_{k=1}^\infty$ of complex frequencies in $\mathcal{V}_\xi$ that tends to infinity.
\section{Regularized reconstruction by truncation}\label{sec:3} We continue by mimicking Method \ref{method:1} with $\Lambda_\gamma$ replaced by $\Lambda_\gamma^\varepsilon$ with $\varepsilon$ small. We note that, in any case, using $\psi_\zeta$ with $|\zeta|$ large is impractical. {Indeed, when using perturbed measurements naively in \eqref{scatter2}, the propagated perturbation of $\mathbf{t}$ is $\varepsilon$ multiplied by a factor growing exponentially in $|\zeta|$. This factor originates from the solution of the perturbed boundary integral equation}
\begin{equation}\label{bienoisy}
B_\zeta^\varepsilon(\psi_{\zeta}^\varepsilon|_{\partial \Omega}):=\psi_{\zeta}^\varepsilon|_{\partial \Omega}+\mathcal{S}_{\zeta}\left(\Lambda^\varepsilon_{\gamma}-\Lambda_{1}\right) (\psi^\varepsilon_{\zeta}|_{\partial \Omega})=e_{\zeta}|_{\partial \Omega},
\end{equation}
{and from the multiplication by} $e^{-ix\cdot(\xi+\zeta{(\xi)})}${; see Lemma \ref{lemma3}}. We will show below that \eqref{bienoisy} is solvable for sufficiently small $\varepsilon$. To mitigate this exponential behavior we propose a reconstruction method that makes use of two coupled truncations: one of the complex frequency $\zeta$ and one of the real frequency content of the signal $q^\varepsilon$, the perturbed analog of $q$.
As we shall see, an upper bound on the magnitude $|\zeta(\xi)|$ determines how closely $\mathbf{t}$ can approximate $\hat{q}$ when using perturbed data. From \eqref{zetaxieq} we have
\begin{equation}
|\zeta(\xi)|\geq\frac{|\xi|}{\sqrt{2}},
\end{equation}
and hence fixing $|\zeta(\xi)|$ gives a bounded region in $\mathbb{R}^3$, $|\xi|<M$ for some $M>0$, in which $\mathbf{t}$ can be computed. This gives the following method.
\begin{method}\label{method:2} Truncated CGO reconstruction in three dimensions
\begin{description}
\item[\textbf{Step} $\mathbf{1}^\varepsilon$] Let $M=M(\varepsilon)>0$ be determined by a sufficiently small $\varepsilon$. For each fixed $\xi$ with $|\xi|<M$, take $\zeta(\xi)\in \mathcal{V}_\xi$ with an appropriate size determined by $M$ and solve \eqref{bienoisy} to recover $\psi_{\zeta}^\varepsilon|_{\partial \Omega}$. Compute the truncated scattering transform by
\begin{equation}
\mathbf{t}^{\varepsilon}_{M(\varepsilon)}(\xi, \zeta(\xi)):= \begin{cases}
\int_{\partial \Omega} e^{-i x \cdot(\xi+\zeta(\xi))}(\Lambda_{\gamma}^\varepsilon-\Lambda_{1})\psi_\zeta^\varepsilon(x) d\sigma(x), & |\xi|< M(\varepsilon),\\
0, &|\xi|\geq M(\varepsilon),
\end{cases}
\end{equation}
\item[\textbf{Step} $\mathbf{2}^\varepsilon$] Set $\widehat{q^\varepsilon}(\xi):=\mathbf{t}^{\varepsilon}_{M(\varepsilon)}(\xi, \zeta(\xi))$ and compute the inverse Fourier transform to obtain $q^\varepsilon$.
\item[\textbf{Step} $\mathbf{3}^\varepsilon$] Solve the boundary value problem
\begin{equation}
\begin{aligned}
(-\Delta+q^\varepsilon)(\gamma^\varepsilon)^{1/2} &=0 \quad && \text { in } \Omega, \\
(\gamma^\varepsilon)^{1/2}&=1 \quad && \text { on } \partial \Omega,
\end{aligned}
\end{equation}
and extract $\gamma^\varepsilon$.
\end{description}
\end{method}
We call $M$ the truncation radius and note it should depend on $\varepsilon$.
Truncation of the scattering transform with truncation radius $M$ is well known in regularization theory for the two-dimensional D-bar reconstruction method \cite{Knudsen2009a}. We can see the real-frequency truncation as a low-pass filtering in the frequency domain; this leads to additional smoothing in the spatial domain. Note that $M$ determines the level of regularization and plays the role of a regularization parameter $\alpha=M^{-1}$ in the sense of \eqref{reqstrong}.
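To make the low-pass interpretation concrete, the following Python sketch applies a sharp frequency cutoff $|\xi|<M$ to a smooth, compactly supported test function sampled on a uniform grid. It only illustrates the smoothing effect of the real-frequency truncation and does not involve the scattering transform; the grid size and cutoff radii are arbitrary.
\begin{verbatim}
import numpy as np

def lowpass_truncate(q, L, M):
    """Zero the Fourier content of q (sampled on a uniform grid on [-L, L)^3)
    outside the ball |xi| < M, mimicking the truncation radius of Method 2."""
    K = q.shape[0]
    freq = 2 * np.pi * np.fft.fftfreq(K, d=2 * L / K)   # angular frequencies
    X, Y, Z = np.meshgrid(freq, freq, freq, indexing="ij")
    mask = np.sqrt(X**2 + Y**2 + Z**2) < M
    return np.real(np.fft.ifftn(np.fft.fftn(q) * mask))

if __name__ == "__main__":
    K, L = 64, 1.5
    x = np.linspace(-L, L, K, endpoint=False)
    X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
    r2 = X**2 + Y**2 + Z**2
    q = np.zeros_like(r2)                  # smooth bump supported in the unit ball
    q[r2 < 0.64] = np.exp(-1.0 / (0.64 - r2[r2 < 0.64]))
    for M in [5.0, 10.0, 20.0]:
        print(M, np.max(np.abs(lowpass_truncate(q, L, M) - q)))
\end{verbatim}
The printed errors should decrease (up to discretization effects) as the truncation radius $M$ grows, while small $M$ visibly smooths the bump; this is the trade-off quantified by the parameter choice rule of Theorem \ref{maintheorem}.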
{
In the following section we derive the required properties of $\mathcal{S}_\zeta$, $B_\zeta^{-1}$ and $(B_\zeta^\varepsilon)^{-1}$. The invertibility of $B_\zeta^\varepsilon$ depends on the invertibility of the unperturbed boundary integral operator $B_\zeta$, which is well known due to the mapping properties of $\mathcal{S}_\zeta$. Although boundedness of $\mathcal{S}_\zeta$ and $B_\zeta^{-1}$ in the three-dimensional case follows by arguments similar to those of the two-dimensional case \cite{Knudsen2009a}, it is not immediately clear when $(B_\zeta^\varepsilon)^{-1}$ exists in the absence of existence and uniqueness guarantees of $\psi_\zeta$ for small $|\zeta|$. Neither is it clear under which circumstances $q^\varepsilon$ approximates $q$ as the noise level goes to zero. This is dealt with in Lemma \ref{lemma4} below by choosing a suitable rate at which $|\zeta|$ and $M$ go to infinity as $\varepsilon$ goes to zero.
}
\subsection{The perturbed boundary integral equation}\label{sec:per}
{
When $|\zeta|$ is bounded away from zero we can bound $\mathcal{S}_\zeta$ using the mapping properties \eqref{convgzeta} of convolution with $g_\zeta$ between Sobolev spaces defined on $\Omega$. We note that, for arbitrarily small $|\zeta|<1$, one can give better bounds than the following result by considering the integral operator $\mathcal{S}_\zeta-\mathcal{S}_0$ with a smooth kernel, see \cite{cornean2006a,Knudsen2009a}.
}
\begin{lemma}\label{lemma1}
Let $\varphi\in H^{-1/2}\left(\partial \Omega\right)$ such that $\int_{\partial \Omega} \varphi(x)\,d\sigma(x) = 0$ and let $\zeta\in \mathbb{C}^3$ with $\zeta\cdot\zeta=0$ {and $|\zeta|>\beta>0$}. Then for the boundary single layer operator, $\mathcal{S}_\zeta$, we have that
\begin{equation}\label{upperboundszeta}
\|\mathcal{S}_\zeta \varphi\|_{H^{1/2}\left(\partial \Omega\right)} \leq C_1 (1+|\zeta|)e^{2|\zeta|}\|\varphi\|_{H^{-1/2}\left(\partial \Omega\right)},
\end{equation}
where the constant $C_1$ is independent of $\zeta$.
\end{lemma}
\begin{proof}
We follow \cite{Knudsen2009a}. Letting $x\in \mathbb{R}^3\setminus \overline{\Omega}$ and introducing $u\in H^1(\Omega)$ with $\Delta u = 0$ and $\partial_\nu u = \varphi$ we write
\begin{align}
(\mathcal{S}_\zeta\varphi)(x) &= \int_{\partial \Omega} G_\zeta(x-y)\varphi(y) \, d\sigma(y),\\
&= \int_{\Omega} \nabla_{y} G_{\zeta}(x-y)\cdot \nabla u(y) d y,\\
&=-\nabla\cdot\left(G_{\zeta} *(\nabla u)\right)(x),\\
&= -\nabla\cdot \left[e^{ix\cdot \zeta}\left(g_{\zeta} *(e^{-iy\cdot \zeta}\nabla u)\right)\right](x),
\end{align}
using integration by parts, the chain rule and the fact that $G_\zeta(x-\cdot)$ is smooth in $\Omega$. By the continuity of $\mathcal{S}_\zeta$ the above holds for $x\in \partial \Omega$ as well. Note from \eqref{convgzeta} and Leibniz' rule that
\begin{equation}
\|\nabla \cdot \left[e^{ix\cdot \zeta}\left(g_{\zeta} *(e^{-iy\cdot \zeta}\nabla u)\right)\right]\|_{L^2(\Omega)}\leq Ce^{2|\zeta|}\|\nabla u\|_{L^2(\Omega)},
\end{equation}
and
\begin{equation}
\|\partial_{x_i}\nabla \cdot \left[e^{ix\cdot \zeta}\left(g_{\zeta} *(e^{-iy\cdot \zeta}\nabla u)\right)\right]\|_{L^2(\Omega)}\leq C|\zeta|e^{2|\zeta|}\|\nabla u\|_{L^2(\Omega)},
\end{equation}
for $i=1,2,3$. This yields
\begin{align}
\|\mathcal{S}_\zeta\varphi\|_{H^{1/2}(\partial\Omega)} &\leq \|\nabla\cdot \left[e^{ix\cdot \zeta}\left(g_{\zeta} *(e^{-iy\cdot \zeta}\nabla u)\right)\right]\|_{H^1(\Omega)},\\
&\leq C(1+|\zeta|)e^{2|\zeta|}\|\nabla u\|_{L^2(\Omega)},\\
&\leq C(1+|\zeta|)e^{2|\zeta|}\|\varphi\|_{H^{-1/2}(\partial\Omega)},
\end{align}
using the trace theorem and stability of the Neumann problem for $u$. Here $C$ is dependent on $\beta$ since $|\zeta|>\beta$.
\end{proof}
We have the following estimate of $B_\zeta^{-1}$. The main idea of the proof is to consider a solution $f\in H^{1/2}(\partial \Omega)$ to $B_\zeta f = h$ for some $h\in H^{1/2}(\partial \Omega)$ and then control the exponential component of $f$ by creating a link to the CGO solutions of the Schrödinger equation.
\begin{lemma}\label{lemma2}
For $\zeta\in \mathbb{C}^3\backslash\{0\}$ with $\zeta \cdot \zeta = 0$ and $|\zeta|>D_q$ as in \eqref{largezeta}, the operator $B_\zeta$ is invertible with
\begin{equation}\label{Bzetainv}
\|B_{\zeta}^{-1}\|_{1 / 2} \leq C_{2} (1+|\zeta|)e^{2|\zeta|},
\end{equation}
where $C_2$ is a constant depending only on the \textit{a priori} knowledge $\Pi$ and $\rho$.
\end{lemma}
\begin{proof}
{ We follow \cite{Knudsen2009a}. Using integration by parts note that $B_\zeta f=f+G_\zeta \ast(qv_f)$ on $\partial \Omega$, where $v_f\in H^1(\Omega)$ is the unique solution to
\begin{equation}
\begin{aligned}
(-\Delta+q)v_f &=0 \quad && \text { in } \Omega, \\
v_f &=f \quad && \text { on } \partial \Omega.
\end{aligned}
\end{equation}
To bound $f$ we bound $v_f$ by writing $v_f = v-u^{\mathrm{exp}}$ with
\begin{equation}
\begin{aligned}
\Delta v &=0 \quad && \text { in } \Omega, \\
v &= B_\zeta f \quad && \text { on } \partial \Omega,
\end{aligned}
\end{equation}
and $u^{\mathrm{exp}}:=G_\zeta *(qv_f)$. From the stability property of the Dirichlet problem it is sufficient to bound $u^{\mathrm{exp}}$ in terms of $v$. Note $(-\Delta+q)u^{\mathrm{exp}}=qv$ and hence conjugating with exponentials yields the equation in $\mathbb{R}^3$,
\begin{equation}\label{eq:condconj}
(-\Delta-2i\zeta \cdot \nabla+q)u = qve^{-ix\cdot \zeta},
\end{equation}
where we set $u = e^{-ix\cdot \zeta}u^{\mathrm{exp}}$. It is well known that $u$ is the unique solution among functions in certain weighted $L^2(\mathbb{R}^3)$-spaces satisfying
$$\|u\|_{L^2(\Omega)} \leq C\|q\|_{L^\infty}\frac{e^{|\zeta|}}{|\zeta|}\|v\|_{L^2(\Omega)},$$
whenever $|\zeta|>D_q$, see \cite{sylvester1987a}. Indeed, convolution with $g_\zeta$ on both sides of \eqref{eq:condconj} gives
$$u = g_\zeta*(-qu+qve^{-ix\cdot \zeta}),$$
which upgrades the estimate to
$$\|u\|_{H^1(\Omega)} \leq C\|q\|_{L^\infty}e^{|\zeta|}\|v\|_{L^2(\Omega)},$$
using \eqref{convgzeta}. Now the estimate \eqref{Bzetainv} follows straightforwardly from the trace theorem.
}
\end{proof}
We note that a main difference between the boundary integral equation in two dimensions and in three dimensions is the possible existence of a certain $\zeta$ for which there exists no unique CGO solution to \eqref{cgo1}. The next result shows that Lemma \ref{lemma1} and Lemma \ref{lemma2} imply solvability of the perturbed boundary integral equation using a Neumann series argument based on the factorization
\begin{equation}
B_{\zeta}^\varepsilon = I+\mathcal{S}_\zeta(\Lambda_\gamma^\varepsilon-\Lambda_\gamma) +\mathcal{S}_\zeta(\Lambda_\gamma-\Lambda_1)=[I+A_\zeta^\varepsilon] B_\zeta,
\end{equation}
where $A^\varepsilon_\zeta:=\mathcal{S}_\zeta \mathcal{E} B_\zeta^{-1}$ is a bounded operator in $H^{1/2}(\partial \Omega)$. {It is clear from Lemma \ref{lemma2} that $q$ fixes a lower bound for $|\zeta|$ above which $B_\zeta$ is certain to be invertible.} When the noise level is sufficiently small such that $D_q<|\zeta|<R(\varepsilon)$, for some $R$, we may invert $B_\zeta^\varepsilon$. We have the following result.
\begin{lemma}\label{lemma3}
Let $R = R(\varepsilon):=-\frac{1}{6}\log{\varepsilon}$, and suppose $D_q<|\zeta |< R(\varepsilon_1)$ for some $0<\varepsilon_1<1$. Then there exists $0<\varepsilon_2\leq\varepsilon_1$ for which $B_\zeta^\varepsilon$ is invertible whenever $0<\varepsilon < \varepsilon_2$. Furthermore we have the estimate
\begin{equation}
\|\psi^\varepsilon_\zeta - \psi_\zeta\|_{H^{1/2}(\partial \Omega)} \leq C_3\varepsilon(1+R)^4e^{7R},
\end{equation}
where $C_3$ is a constant depending only on the \textit{a priori} knowledge of $\Pi$ and $\rho$.
\end{lemma}
\begin{proof}
Since $\mathcal{E}\in Y$, it maps into trace functions that have zero mean on the boundary. Then from Lemma \ref{lemma1} and Lemma \ref{lemma2} we find
\begin{align}\label{epsilon0}
\|A_\zeta^\varepsilon\|_{1/2}= \|\mathcal{S}_\zeta \mathcal{E} B_\zeta^{-1}\|_{1/2}&\leq C_1C_2 \varepsilon (1+R)^2 e^{4R},\\
&\leq C \varepsilon e^{5R}, \label{eq:rhs}
\end{align}
where we have absorbed the polynomial in $R$ into the exponential and thereby obtained a new constant. By the definition of $R$, we note the right-hand side of \eqref{eq:rhs} goes to zero as $\varepsilon$ goes to zero, and hence there exists a $0<\varepsilon_2 \leq \varepsilon_1$ such that $\|A_\zeta^\varepsilon\|_{1/2}<\frac{1}{2}$. Then by a Neumann series argument, $I+A_\zeta^\varepsilon$ is invertible with $\|(I+A_\zeta^\varepsilon)^{-1} \|_{1/2}<2$, and $(B_\zeta^\varepsilon)^{-1}=B_\zeta^{-1}[I+A_\zeta^\varepsilon]^{-1}$. From the boundary integral equations we have $\psi_\zeta=B^{-1}_\zeta (e_\zeta|_{\partial \Omega})$ and $\psi^\varepsilon_\zeta=(B_\zeta^\varepsilon)^{-1}(e_\zeta|_{\partial \Omega})$. Then with the use of Lemma \ref{lemma2} we have for $0<\varepsilon<\varepsilon_2$
\begin{align}
\|\psi^\varepsilon_\zeta\|_{H^{1/2}(\partial \Omega)} &\leq \|(B_\zeta^\varepsilon)^{-1}(e_\zeta|_{\partial \Omega})\|_{H^{1/2}(\partial \Omega)},\\
&\leq 2\|B_\zeta^{-1}\|_{1/2} \|e^{ix\cdot \zeta}\|_{H^{1/2}(\partial \Omega)},\label{Bzetaeps}\\
&\leq C (1+|\zeta|)^2e^{3|\zeta|}.\label{psiestimate}
\end{align}
With the use of Lemma \ref{lemma2} we have for $0<\varepsilon<\varepsilon_2$
\begin{align}
\|(B_\zeta^\varepsilon)^{-1}-B^{-1}_\zeta\|_{1/2}&=\|B_\zeta^{-1}[(I+A_\zeta^\varepsilon)^{-1}-I]\|_{1/2},\\
&\leq \|B_\zeta^{-1}\|_{1/2}\|(I+A_\zeta^\varepsilon)^{-1}[I-(I+A_\zeta^\varepsilon)]\|_{1/2},\\
&\leq \|B_\zeta^{-1}\|_{1/2}\|(I+A_\zeta^\varepsilon)^{-1}\|_{1/2}\|A_\zeta^\varepsilon\|_{1/2},\\
&\leq 2 C_1C_{2}^2 \varepsilon(1+R)^3e^{6R}.
\end{align}
Finally we obtain
\begin{align}
\|\psi^\varepsilon_\zeta - \psi_\zeta\|_{H^{1/2}(\partial \Omega)} &=\|[(B_\zeta^\varepsilon)^{-1}-B^{-1}_\zeta]e_\zeta\|_{H^{1/2}(\partial \Omega)},\\
&\leq \|(B_\zeta^\varepsilon)^{-1}-B^{-1}_\zeta\|_{1/2} \|e^{ix\cdot \zeta}\|_{H^{1/2}(\partial \Omega)},\\
&\leq 2 C_1C_{2}^2 \varepsilon(1+R)^4e^{7R},\label{psidifestimate}
\end{align}
for $0<\varepsilon<\varepsilon_2$.
\end{proof}
\subsection{Truncation of the scattering transform}\label{sec:trunc}
We now show that fixing the magnitude of the complex frequency $|\zeta(\xi)|=(M(\varepsilon))^p$ with $p>3/2$ enables control over the proximity of the truncated scattering transform $\mathbf{t}^\varepsilon_M(\cdot,\zeta)$ to $\hat q$ for small noise levels. This choice is justified by the following result.
\begin{lemma}\label{lemma4}
Let $M(\varepsilon) = (-1/11 \log(\varepsilon))^{1/p}$ be a truncation radius depending on $\varepsilon$ and some exponent $p>3/2$. Fix $\xi \in \mathbb{R}^3$ with $|\xi|< M(\varepsilon)$, suppose $\zeta(\xi)\in \mathcal{V}_\xi$ with
\begin{equation}
|\zeta(\xi)|=(M(\varepsilon))^p=-\frac{1}{11}\log(\varepsilon)
\end{equation}
and let $\varepsilon_2$ be defined as in the proof of Lemma \ref{lemma3}. Further fix $q\in L^\infty(\Omega)$ corresponding to a $\gamma \in D(F)$. Then $\mathbf{t}^\varepsilon_M$ is well defined by \eqref{errtscat} for $0<\varepsilon<\varepsilon_2$ and
\begin{equation}
\lim_{\varepsilon \rightarrow 0} \|\mathbf{t}^\varepsilon_{M(\varepsilon)}-\hat q\|_{L^2(\mathbb{R}^3)}=0.
\end{equation}
\end{lemma}
\begin{proof}
Fix first $|\xi|< M(\varepsilon)$ and note by the triangle inequality that
\begin{equation}\label{trianglerelation}
|\mathbf{t}^{\varepsilon}_{M(\varepsilon)}(\xi,\zeta(\xi))-\hat q(\xi)|\leq |\mathbf{t}^\varepsilon_{M(\varepsilon)}(\xi,\zeta(\xi))-\mathbf{t}(\xi,\zeta(\xi)) |+ |\mathbf{t}(\xi,\zeta(\xi))-\hat q(\xi)|.
\end{equation}
By Lemma \ref{lemma3} there exists a unique solution $\psi^\varepsilon_\zeta$ to the perturbed boundary integral equation and hence $\mathbf{t}^\varepsilon_M$ is well defined. Using \eqref{psiestimate} and \eqref{psidifestimate}, we find the following, in which we set $R=R(\varepsilon)$, $M=M(\varepsilon)$ and $\zeta = \zeta(\xi)$ for simplicity of exposition,
\begin{align}
|\mathbf{t}^\varepsilon_M(\xi,\zeta)-\mathbf{t}(\xi,\zeta)| &= \left|\int_{\partial \Omega} e^{-ix\cdot(\xi+\zeta)}[(\Lambda^\varepsilon_\gamma-\Lambda_1)\psi_\zeta^\varepsilon(x)-(\Lambda_\gamma-\Lambda_1)\psi_\zeta(x)]d\sigma(x) \right|,\\
&\leq \|e ^{-ix\cdot(\xi+\zeta)}\|_{H^{1/2}(\partial \Omega)}\|\Lambda_\gamma-\Lambda_1\|_Y\|\psi_\zeta^\varepsilon-\psi_\zeta\|_{H^{1/2}(\partial \Omega)}\\ \label{testimation}
&\phantom{=}\,\,+ \|e ^{-ix\cdot(\xi+\zeta)}\|_{H^{1/2}(\partial \Omega)}\|\Lambda_\gamma^\varepsilon-\Lambda_\gamma\|_Y\|\psi_\zeta^\varepsilon\|_{H^{1/2}(\partial \Omega)},\\
&\leq C(1+|\zeta|)e^{|\zeta|}\left[\varepsilon(1+|\zeta|)^4e^{7|\zeta|} + \varepsilon(1+|\zeta|)^2e^{3|\zeta|} \right ],
\end{align}
where we use the fact that $\|\Lambda_\gamma - \Lambda_1\|_Y \leq C$, where $C$ depends only on $\Pi$ by the continuity of the forward map $\gamma\mapsto \Lambda_\gamma$. Then,
\begin{equation}
|\mathbf{t}^\varepsilon_M(\xi,\zeta)-\mathbf{t}(\xi,\zeta)|\leq C\varepsilon e^{9|\zeta|}.
\end{equation}
Using \eqref{trianglerelation} and the property \eqref{lemmaont} we conclude for $|\xi|< M(\varepsilon)$ that
\begin{equation}\label{eq:iestimate}
|\mathbf{t}^{\varepsilon}_M(\xi,\zeta)-\hat q(\xi)|\leq C\varepsilon e^{9|\zeta|}+ C|\zeta|^{-1}.
\end{equation}
Then, using the triangle inequality and \eqref{eq:iestimate}, we find
\begin{align}
\|\mathbf{t}^\varepsilon_M-\hat q\|_{L^2(\mathbb{R}^3)}&\leq \|\mathbf{t}^\varepsilon_M-\hat q\|_{L^2(|\xi|< M)}+\|\hat q\|_{L^2(|\xi|\geq M)},\\
&\leq C(\varepsilon e^{9|\zeta|}+M^{-p})\left(\int_{0}^M r^2 \, dr\right)^{1/2}+\|\hat q\|_{L^2(|\xi|\geq M)},\\
& \leq C(\varepsilon e^{10|\zeta|}+M^{3/2-p})+\|\hat q\|_{L^2(|\xi|\geq M)},\\
&\leq C\varepsilon^{1/11}+C(-1/11\log(\varepsilon))^{3/2-p}+\|\hat q\|_{L^2(|\xi|\geq M)},
\end{align}
for $0<\varepsilon<\varepsilon_2$. Since $q\in L^\infty(\Omega)$ is compactly supported in $\Omega$, we have $q\in L^2(\mathbb{R}^3)$, and hence the energy of the tail of $\hat q$ converges to zero as $M(\varepsilon)$ goes to infinity. The result follows as $p>3/2$.
\end{proof}
One may obtain an explicit decay of $\hat q$ by assuming a certain regularity of $q$. Notice that the proof above goes through with the choice $|\zeta| = K_1M^{p}+K_2$ for some $0<K_1<1$, $K_2>0$ and $p>3/2$. A user may choose among such $|\zeta|$ freely, with $p=3/2$ being the critical choice. We now prove that $\gamma^\varepsilon$ exists and is unique and that the propagated reconstruction error tends to zero as $\varepsilon\rightarrow 0$, given that $\|q^\varepsilon-q\|_{L^2(\Omega)}$ is sufficiently small. This is possible in $H^2(\Omega)$ by a Neumann series argument and elliptic regularity. For the boundary value problem
\begin{equation}
\begin{aligned}
(-\Delta+q^\varepsilon)u&=f \quad && \text { in } \Omega, \\
u&=0 \quad && \text { on } \partial \Omega,
\end{aligned}
\end{equation}
with $f\in L^2(\Omega)$, we introduce the notation $L^\varepsilon: H^1_0(\Omega)\cap H^2(\Omega)\rightarrow L^2(\Omega)$, $L^\varepsilon: u\mapsto f$, defined for any $q^\varepsilon\in L^2(\Omega)$ and then note
\begin{equation}\label{eq:finalstep}
\gamma^\varepsilon = [(L^\varepsilon)^{-1}(-q^\varepsilon)+1]^2,
\end{equation}
whenever $(L^\varepsilon)^{-1}$ exists.
\begin{lemma}\label{lemma5}
Let $q=\gamma^{-1/2}\Delta \gamma^{1/2}$ be a potential with $\gamma\in D(F)$. Then there exists a $0<\varepsilon_3<1$ such that for $0<\varepsilon<\min(\varepsilon_2,\varepsilon_3)=:\varepsilon_0$ the boundary value problem
\begin{equation}\label{bvp1}
\begin{aligned}
(-\Delta+q^\varepsilon)(\gamma^\varepsilon)^{1/2}&=0 \quad && \text { in } \Omega, \\
(\gamma^\varepsilon)^{1/2} &=1 \quad && \text { on } \partial \Omega,
\end{aligned}
\end{equation}
has a unique solution in $H^2(\Omega)$. Furthermore the following inequality holds
\begin{equation}\label{estimatelem5}
\|\gamma^{1 / 2}-(\gamma^{\varepsilon})^{1 / 2}\|_{H^{2}(\Omega)} \leq C_4\|q-q^{\varepsilon}\|_{L^{2}(\Omega)},
\end{equation}
where $C_4$ is dependent only on $\Pi$ and $\rho$.
\end{lemma}
\begin{proof}
Note that $(-\Delta+q)^{-1}$ exists and is bounded from $L^2(\Omega)$ into $H^1_0(\Omega)\cap H^2(\Omega)$ with
\begin{equation}\label{boundedsol}
\|u\|_{H^{2}(\Omega)} \leq C\|f\|_{L^{2}(\Omega)},
\end{equation}
by elliptic regularity \cite{evans2010a}. Here $C$ is dependent only on $\Pi$. We construct
\begin{equation}\label{construction}
L^\varepsilon u = (-\Delta+q)[I+(-\Delta+q)^{-1}(q^\varepsilon-q)]u,
\end{equation}
and aim to bound $(-\Delta+q)^{-1}(q^\varepsilon-q)$ as an operator on $H^2(\Omega)$. For any $u\in H^2(\Omega)$
\begin{align}
\|(-\Delta+q)^{-1}(q^\varepsilon-q)u\|_{H^2(\Omega)} \leq C \|q^\varepsilon-q\|_{L^2(\Omega)} \|u\|_{H^2(\Omega)},
\end{align}
using \eqref{boundedsol} and Sobolev embedding theory. By Lemma \ref{lemma4}, there exists a $0<\varepsilon_3<1$ such that for all $0<\varepsilon<\min(\varepsilon_2,\varepsilon_3)$
\begin{equation}
\|(-\Delta+q)^{-1}(q^\varepsilon-q)\|_{H^{2}(\Omega)\rightarrow H^{2}(\Omega)}\leq C\|q^\varepsilon-q \|_{L^2(\Omega)}<\frac{1}{2}.
\end{equation}
Hence $(L^\varepsilon)^{-1}$ exists and is uniformly bounded with respect to $0<\varepsilon \leq \varepsilon_0$. Finally, since $\gamma \in L^\infty(\Omega)$ we have $(q^\varepsilon-q)\gamma^{1/2}\in L^2(\Omega)$, and by solving
\begin{alignat}{2}
L^\varepsilon(\gamma^{1/2}-(\gamma^\varepsilon)^{1/2})&=(q^\varepsilon-q)\gamma^{1/2} \quad && \text { in } \Omega \\
\gamma^{1/2}-(\gamma^\varepsilon)^{1/2}&=0 \quad && \text { on } \partial \Omega,
\end{alignat}
we obtain the estimate \eqref{estimatelem5}.
\end{proof}
We conclude that $\gamma^\varepsilon$ of Method \ref{method:2} exists uniquely and approximates $\gamma$ in the $H^2(\Omega)$-norm, whenever $\varepsilon < \varepsilon_0$.
\section{Extending the method to a regularization strategy}\label{sec:35}
From the definition of an admissible regularization strategy it is clear that $\mathcal{R}_\alpha$ must be defined on $Y$ and not only on an $\varepsilon_0$-neighborhood of $F(\mathcal{D}(F))$. However, $(B_\zeta^\varepsilon)^{-1}$ and $(L^\varepsilon)^{-1}$ exist only for small enough $\varepsilon$. We confront this by extending these operators to $(B_\zeta^\varepsilon)_\alpha^{\dagger}$ and $(L^\varepsilon)_\alpha^{\dagger}$, coinciding with $(B_\zeta^\varepsilon)^{-1}$ and $(L^\varepsilon)^{-1}$ for $\varepsilon < \varepsilon_0$, such that $\mathcal{R}_\alpha$ is continuous and well defined on $Y$. There are several ways to obtain such extensions; however, we will follow \cite{Knudsen2009a} and construct explicit pseudoinverses by means of functional calculus.
Define the normal operator
\begin{equation}
S^\varepsilon_\zeta:=(B_\zeta^\varepsilon)^\ast(B_\zeta^\varepsilon)\in\mathcal{L}(H^{1/2}(\partial \Omega)),
\end{equation}
where $(B_\zeta^\varepsilon)^\ast$ is the adjoint operator of $(B_\zeta^\varepsilon)\in \mathcal{L}(H^{1/2}(\partial \Omega))$. Similarly we define
\begin{equation}
T^\varepsilon:=(L^\varepsilon)^\ast(L^\varepsilon)\in\mathcal{L}(L^2(\Omega)).
\end{equation}
Let $h_\alpha^1$ and $h_\alpha^2$ be two real functions defined for $0<\alpha<\infty$ as
\begin{equation}
h^i_\alpha(t) := \begin{cases}
t^{-1} & \text{ for } t>\kappa_i(\alpha),\\
\kappa_i(\alpha)^{-1} & \text{ for } t\leq \kappa_i(\alpha),
\end{cases}
\end{equation}
for $i=1,2$ with $\kappa_i(\alpha)=\frac{1}{4}r_i(\alpha)^2$, where, as we will see below, the estimates \eqref{estimatelem5} and \eqref{boundB} motivate the definition
\begin{equation}
r_i(\alpha) := \begin{cases}
\frac{1}{C_2(1+\alpha^{-p})e^{2\alpha^{-p}}} & \text{ for } i=1,\\
\frac{1}{C_4} & \text{ for } i=2,
\end{cases}
\end{equation}
with $p>3/2$. We define the $\alpha$-pseudoinverses ${(B_\zeta^\varepsilon)}_\alpha^{\dagger}$ of $B_\zeta^\varepsilon$ and ${(L^\varepsilon)}_\alpha^{\dagger}$ of $L^\varepsilon$ for any $0<\alpha<\infty$ as
\begin{align}
(B_\zeta^\varepsilon)_\alpha^{\dagger} &:= h^1_\alpha(S_\zeta^\varepsilon)(B_\zeta^\varepsilon)^\ast,\\
(L^\varepsilon)_\alpha^{\dagger} &:= h^2_\alpha(T^\varepsilon)(L^\varepsilon)^\ast,
\end{align}
where the operators $h^1_\alpha(S_\zeta^\varepsilon)$ in $\mathcal{L}(H^{1/2}(\partial \Omega))$ and $h^2_\alpha(T^\varepsilon)$ in $\mathcal{L}(L^2(\Omega))$ are defined in the sense of continuous functional calculus (see for example \cite{reed1980a,so2018a}) and depend continuously on $S_\zeta^\varepsilon$ and $T^\varepsilon$, respectively (see for example \cite[Lemma 3.1]{Knudsen2009a}). This implies that $\Lambda_\gamma^\varepsilon \mapsto (B_\zeta^\varepsilon)_\alpha^{\dagger}$ and $q^\varepsilon \mapsto (L^\varepsilon)_\alpha^{\dagger}$ are continuous mappings. Explicitly, for a self-adjoint operator $S:\mathcal{H}\rightarrow \mathcal{H}$ on a Hilbert space $\mathcal{H}$ we set
\begin{equation}\label{spectraldecomp}
h^i_\alpha(S) = \int_{\sigma(S)} h^i_\alpha(\lambda)\, dP(\lambda),
\end{equation}
where $\sigma(S)\subset \mathbb{C}$ denotes the spectrum of $S$, and $P$ is a spectral measure on $\sigma(S)$.
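In a finite-dimensional discretization the $\alpha$-pseudoinverse can be realized directly from an eigendecomposition of the Hermitian, positive semidefinite normal matrix. The following Python sketch is our own matrix analogue and not part of the analysis above: it applies the cutoff $h_\alpha$ to a matrix $B$ standing in for a discretization of $B_\zeta^\varepsilon$, and the threshold plays the role of $\kappa_1(\alpha)$, assumed to be supplied from the constants of the analysis.
\begin{verbatim}
import numpy as np

def h_alpha(t, kappa_alpha):
    """Spectral cutoff: invert above the threshold, saturate below it."""
    return np.where(t > kappa_alpha, 1.0 / t, 1.0 / kappa_alpha)

def alpha_pseudoinverse(B, kappa_alpha):
    """Matrix analogue of the alpha-pseudoinverse: h_alpha(B^* B) B^*."""
    S = B.conj().T @ B                    # normal matrix S = B^* B (Hermitian, PSD)
    evals, evecs = np.linalg.eigh(S)      # spectral decomposition of S
    hS = (evecs * h_alpha(evals, kappa_alpha)) @ evecs.conj().T
    return hS @ B.conj().T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    B = np.eye(5) + 0.1 * rng.standard_normal((5, 5))
    # a threshold below the smallest eigenvalue of B^*B: the pseudoinverse then
    # coincides with the exact inverse, as exploited in the proof of the main theorem
    kappa_alpha = 0.25 / np.linalg.norm(np.linalg.inv(B), 2) ** 2
    Bdag = alpha_pseudoinverse(B, kappa_alpha)
    print(np.max(np.abs(Bdag - np.linalg.inv(B))))   # ~ 0
\end{verbatim}
If the threshold exceeds some eigenvalues of $B^\ast B$, the corresponding components are damped instead of inverted, which is precisely what keeps $\mathcal{R}_\alpha$ continuous and well defined on all of $Y$.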
\begin{method}\label{Method2} Regularized CGO reconstruction in three dimensions
\begin{description}
\item[\textbf{Step} $\mathbf{1_\alpha}$] Given $\alpha>0$, set $M=\alpha^{-1}$. For each $|\xi|<M$ take $\zeta(\xi)\in \mathcal{V}_\xi$ with $|\zeta(\xi)|=M^{p}$ for $p>3/2$ and define
\begin{equation}
\tilde{\psi}_\alpha:=(B_\zeta^\varepsilon)_\alpha^{\dagger}(e_\zeta|_{\partial \Omega})
\end{equation}
and compute the truncated scattering transform $\tilde{\mathbf{t}}_\alpha(\xi,\zeta(\xi))$ for $\zeta(\xi)$ in $\mathcal{V}_\xi$ by
\begin{equation}\label{errtscat}
\tilde{\mathbf{t}}_\alpha(\xi, \zeta(\xi))= \begin{cases}
\int_{\partial \Omega} e^{-i x \cdot(\xi+\zeta(\xi))}(\Lambda_{\gamma}^\varepsilon-\Lambda_{1})\tilde{\psi}_\alpha(x) d\sigma(x) & |\xi|< M,\\
0 &|\xi|\geq M
\end{cases}
\end{equation}
\item[\textbf{Step} $\mathbf{2_\alpha}$] Define $\widehat{q_\alpha}(\xi):=\tilde{\mathbf{t}}_\alpha(\xi,\zeta(\xi))$ and compute the inverse Fourier transform to obtain $q_\alpha$.
\item[\textbf{Step} $\mathbf{3_\alpha}$] Solve the boundary value problem \eqref{bvp1} by computing $(L^\varepsilon)_\alpha^{\dagger}(-q_\alpha)$ and set
\begin{equation}\label{def:regstrat}
\mathcal{R}_\alpha \Lambda_\gamma^\varepsilon := [(L^\varepsilon)_\alpha^{\dagger}(-q_\alpha)+1]^2
\end{equation}
\end{description}
\end{method}
\begin{proof}[Proof of Theorem \ref{maintheorem}]
Given $\Lambda_\gamma^\varepsilon$ in $Y$ we have
\begin{align}
|\tilde{\mathbf{t}}_\alpha(\xi, \zeta(\xi))| &\leq \left|\int_{\partial \Omega} e^{-ix\cdot(\xi+\zeta)}[(\Lambda^\varepsilon_\gamma-\Lambda_\gamma)\tilde{\psi}_\alpha(x)+(\Lambda_\gamma-\Lambda_1)\tilde{\psi}_\alpha(x)]d\sigma(x) \right|,\\
&\leq \|e ^{-ix\cdot(\xi+\zeta)}\|_{H^{1/2}(\partial \Omega)}\|\Lambda^\varepsilon_\gamma-\Lambda_\gamma\|_Y\|\tilde{\psi}_\alpha\|_{H^{1/2}(\partial \Omega)}\\
&\phantom{=}\,\,+ \|e ^{-ix\cdot(\xi+\zeta)}\|_{H^{1/2}(\partial \Omega)}\|\Lambda_\gamma-\Lambda_1\|_Y\|\tilde{\psi}_\alpha\|_{H^{1/2}(\partial \Omega)},\\
&< \infty,\label{teruendelig}
\end{align}
for all $\xi \in \mathbb{R}^3$, since $(B_\zeta^\varepsilon)_\alpha^{\dagger}$ is bounded in $H^{1/2}(\partial\Omega)$. Then by compact support $\tilde{\mathbf{t}}_\alpha\in L^2(\mathbb{R}^3)$. It follows that the inverse Fourier transform of this object is well defined and hence the family of operators $\mathcal{R}_\alpha$ is well defined. Using the continuity of the maps $\Lambda_\gamma^\varepsilon \mapsto (B_\zeta^\varepsilon)_\alpha^{\dagger}$ and $q_\alpha \mapsto (L^\varepsilon)_\alpha^{\dagger}$, an estimate parallel to \eqref{testimation} and the linearity and boundedness of the inverse Fourier transform in $L^2(\mathbb{R}^3)$, it is clear that $\mathcal{R}_\alpha$ is a family of continuous mappings. Now recall from Lemma \ref{lemma2} and \eqref{Bzetaeps} that for $0<\varepsilon<\varepsilon_0$ we have
\begin{equation}\label{boundB}
\|(B_\zeta^\varepsilon)\|_{1/2}^{-1} \leq \|(B_\zeta^\varepsilon)^{-1}\|_{1/2}\leq 2 C_{2} (1+|\zeta|)e^{2|\zeta|}.
\end{equation}
Set $|\zeta|=\alpha^{-p}$ and note
\begin{equation}
S_\zeta^\varepsilon \geq \frac{1}{4}r_1(\alpha)^2 I.
\end{equation}
By definition of the $\alpha$-pseudoinverse and \eqref{spectraldecomp} we have that $(B_\zeta^\varepsilon)_\alpha^{\dagger}=(B_\zeta^\varepsilon)^{-1}$ for $0<\varepsilon<\varepsilon_0$, and hence $\tilde{\psi}_\alpha=\psi_\zeta^\varepsilon$ is unique. It follows by Lemma \ref{lemma4} that $\tilde{\mathbf{t}}_\alpha(\cdot,\zeta(\cdot))$ is well defined and that $q_\alpha = q^\varepsilon$ converges to $q$ as $\varepsilon$ goes to zero. Similarly, for $0<\varepsilon<\varepsilon_0$ we have $(L^\varepsilon)_\alpha^{\dagger}=(L^\varepsilon)^{-1}$, and hence by Lemma \ref{lemma5} and the Sobolev embedding $H^2(\Omega)\subset C^0(\overline{\Omega})$, \eqref{reqstrong} is satisfied. Note also that the weaker requirement \eqref{reqweak} follows analogously. The property \eqref{alphaprop} is satisfied by \eqref{alphadef}.
\end{proof}
A direct consequence of the truncation of the scattering transform is the following property of the reconstruction $\mathcal{R}_{\alpha(\varepsilon)} \Lambda_\gamma^\varepsilon$ for sufficiently small $\varepsilon$: the regularized reconstructions are as smooth as the boundary of $\Omega$ allows.
\begin{proposition}
Suppose $\Lambda_\gamma^\varepsilon=\Lambda_\gamma+\mathcal{E}$ with $\|\mathcal{E}\|_Y\leq \varepsilon < \varepsilon_0$. Then $\mathcal{R}_{\alpha(\varepsilon)} \Lambda_\gamma^\varepsilon\in C^\infty(\overline{\Omega})$.
\end{proposition}
\begin{proof}
Since $\tilde{\mathbf{t}}_\alpha(\cdot, \zeta(\cdot))\in L^1(\mathbb{R}^3)$ has compact support, it follows that $q_\alpha$ is smooth. Since $\partial \Omega$ is smooth, it follows that $\mathcal{R}_{\alpha(\varepsilon)} \Lambda_\gamma^\varepsilon\in C^\infty(\overline{\Omega})$ by elliptic regularity \cite{evans2010a}.
\end{proof}
\section{Computational methods}\label{sec:4}
In this section we outline methods for representing and computing the Dirichlet-to-Neumann map numerically and consider the discretization of the boundary integral equations. We assume $\Omega = B(0,1)$ in order to utilize spherical harmonics in the representation of functions on $\partial \Omega$.
\subsection{Representation and computation of the Dirichlet-to-Neumann map}\label{sec41}
We consider the Hilbert space $H^s(\partial \Omega)$, $s>0$, defined as the space of all functions $f$ in $L^2(\partial \Omega)$ that satisfy
\begin{equation}\label{h1norm1}
\|f\|_{L^{2}(\partial \Omega)}^{2}+\|(-\Delta_S)^{s / 2} f\|_{L^{2}(\partial \Omega)}^{2}<\infty,
\end{equation}
where $(-\Delta_S)^{s / 2}$ is the fractional order spherical Laplace operator on the unit sphere. Since spherical harmonics, say $\{Y_n^m\}_{n\in\mathbb{N}_0, |m|\leq n}$, constitute an orthonormal basis of $L^2(\partial \Omega)$ (see for example \cite{colton1992a}), we may expand $f \in L^2(\partial \Omega)$ as
\begin{equation}
f = \sum_{n=0}^\infty\sum_{m=-n}^n \langle f, Y^m_n \rangle Y^m_n, \qquad \langle f, Y^m_n \rangle = \int_{\partial \Omega} f(x) \overline{Y^m_n(x)}\, d\sigma(x).
\end{equation}
The spherical harmonics are eigenvectors of $(-\Delta_S)$, in particular,
\begin{equation}
\left(-\Delta_{S}\right)^{s / 2} Y=(n(n+1))^{s / 2} Y,
\end{equation}
for any spherical harmonic $Y$ of degree $n$. Then the requirement \eqref{h1norm1} gives rise to a characterization of $H^s(\partial \Omega)$ suitable for $s\in \mathbb{R}$ as those functions $f \in L^2(\partial \Omega)$ that satisfy
\begin{equation}
\sum_{n=0}^\infty\sum_{m=-n}^n (1+n^2)^s|\langle f, Y^m_n \rangle |^2 < \infty.
\end{equation}
See \cite[Chapter 1.7]{MR0350177} for a more general treatment and the case $s<0$. Thus we define the $H^s(\partial \Omega)$ inner products as
\begin{equation}
\langle f,g \rangle_s:=\langle f,g \rangle_{H^s(\partial \Omega)}=\sum_{n=0}^\infty\sum_{m=-n}^n w_s(n)\langle f,Y^m_n\rangle \overline{w_s(n) \langle g,Y^m_n\rangle},
\end{equation}
where the multiplier functions are defined as
\begin{equation}
w_s(n) :=(1+n^2)^{s/2}, \qquad \text{for $n\in \mathbb{N}_0$, $s\in \mathbb{R}$},
\end{equation}
and hence $\|f\|_{H^s(\partial \Omega)} = \langle f, f \rangle_s^{1/2}$. We build an orthonormal basis $\{\phi_{n,m}^s\}_{n\in \mathbb{N}_0, |m|\leq n}$ of $H^s(\partial \Omega)$ with
\begin{equation}
\phi_{n,m}^s = w_{-s}(n)Y^m_n,
\end{equation}
and hence any $g\in H^s(\partial\Omega)$ has the expansion
\begin{equation}
g = \sum_{n=0}^\infty\sum_{m=-n}^n \langle g, \phi_{n,m}^s \rangle_s \phi_{n,m}^s.
\end{equation}
Consider the $L^2(\partial \Omega)$ orthogonal projection $P_N$ to the space spanned by spherical harmonics of degree less than or equal to $N$, as
\begin{equation}
P_Ng = \sum_{n=0}^N\sum_{m=-n}^n \langle g, Y^m_n\rangle Y^m_n.
\end{equation}
Note $\langle g, Y^m_n\rangle$ as an integral over the unit sphere may be approximated by coefficients $c_{n,m}(\underline{g})$ using Gauss-Legendre quadrature in $2(N+1)^2$ appropriately chosen quadrature points $\{x_k\}_{k=1}^{2(N+1)^2}$ on the unit sphere as in \cite{delbary2014a}. Here we denote $\underline{g} = (g(x_1),\hdots, g(x_{2(N+1)^2}))$. Define
\begin{equation}
L_N g := \sum_{n=0}^N\sum_{m=-n}^n c_{n,m}(\underline{g}) Y^m_n.
\end{equation}
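As a sketch of how such quadrature-based coefficients can be computed, the following Python code uses one possible Gauss--Legendre product rule with $2(N+1)^2$ nodes (Gauss--Legendre in the polar direction, equispaced in azimuth); the exact node layout of \cite{delbary2014a} may differ. For a band-limited function of degree at most $N$ the computed coefficients agree with the exact ones up to rounding. Note that \texttt{scipy.special.sph\_harm} takes the azimuthal angle before the polar angle.
\begin{verbatim}
import numpy as np
from scipy.special import sph_harm   # sph_harm(m, n, azimuth, polar)

def sphere_quadrature(N):
    """Gauss-Legendre (polar) x trapezoidal (azimuthal) rule, 2*(N+1)^2 nodes."""
    t, wt = np.polynomial.legendre.leggauss(N + 1)    # nodes in cos(polar angle)
    polar = np.arccos(t)
    azimuth = 2 * np.pi * np.arange(2 * (N + 1)) / (2 * (N + 1))
    P, A = np.meshgrid(polar, azimuth, indexing="ij")
    W = np.outer(wt, np.full(2 * (N + 1), np.pi / (N + 1)))
    return P.ravel(), A.ravel(), W.ravel()

def coefficients(g_vals, N, polar, azimuth, w):
    """Quadrature approximation c_{n,m}(g) of the coefficients <g, Y_n^m>."""
    c = {}
    for n in range(N + 1):
        for m in range(-n, n + 1):
            Y = sph_harm(m, n, azimuth, polar)
            c[(n, m)] = np.sum(w * g_vals * np.conj(Y))
    return c

if __name__ == "__main__":
    N = 8
    polar, azimuth, w = sphere_quadrature(N)
    # band-limited test function g = 2*Y_2^1 - 0.5*Y_5^{-3}
    g = 2.0 * sph_harm(1, 2, azimuth, polar) - 0.5 * sph_harm(-3, 5, azimuth, polar)
    c = coefficients(g, N, polar, azimuth, w)
    print(abs(c[(2, 1)] - 2.0), abs(c[(5, -3)] + 0.5), abs(c[(0, 0)]))   # all ~ 0
\end{verbatim}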
We may approximate any operator $\Lambda:H^{s}(\partial \Omega)\rightarrow H^{-s}(\partial \Omega)$ using $Q$, a matrix in $\mathbb{C}^{2(N+1)^2\times 2(N+1)^2}$ defined by
\begin{equation}\label{lambdaapprox}
(\Lambda g)(x_k) \simeq [{ Q}\underline{g}]_k := \sum_{n=0}^N\sum_{m=-n}^n c_{n,m}(\underline{g}) (\Lambda Y_n^m)(x_k), \quad k=1,\hdots,2(N+1)^2.
\end{equation}
{From here it is clear we can write ${ Q}$ as
\begin{equation}
{Q} = \widetilde{{Q}}{A}, \label{Atransform}
\end{equation}
where ${A}:\underline{g}\mapsto (c_{0,0}(\underline{g}),\hdots,c_{N,N}(\underline{g}))$, and $[\widetilde{{Q}}]_{k\ell} = \Lambda Y_\ell(x_k)$, where $Y_\ell$ is the $\ell$th spherical harmonic in the natural order. We can think of ${A}$ as the matrix that takes a point-cloud representation of a function on $\partial \Omega$ and gives the spherical harmonic representation. }
Similarly to \cite{Knudsen2009a}, an approximation of the operator norm then takes the form
\begin{equation}\label{normapprox}
\|\Lambda\|_{s,-s} \simeq \sup_f \frac{\|{{\mathcal{Q}}}f\|_{\mathbb{C}^{(N+1)^2}}}{\|f\|_{\mathbb{C}^{(N+1)^2}}} = \|{{{\mathcal{Q}}}}\|_{N},
\end{equation}
where $[{{{\mathcal{Q}}}}]_{ij} = \langle \Lambda \phi^s_{n,m}, \phi^{{-}s}_{n',m'} \rangle_{-s}$ with $i = n'^2+n'+m'+1$ and $j = n^2+n+m+1$. We may approximate
\begin{align}
\langle \Lambda \phi^s_{n,m}, \phi^{{-}s}_{n',m'} \rangle_{-s} &= w_{-s}(n)w_{-s}(n')\langle \Lambda Y_n^m, Y_{n'}^{m'}\rangle, \\
&\simeq w_{-s}(n)w_{-s}(n') {c_{n',m'}(\underline{\Lambda Y_{n}^m})}. \label{innerapprox}
\end{align}
{With $\mathcal{B}$ we denote the map that takes the matrix ${Q}$ and gives the approximation of ${{\mathcal{Q}}}$ defined by \eqref{innerapprox}}. For $\Lambda = \Lambda_\gamma$ we denote the approximation \eqref{lambdaapprox} by ${Q}_\gamma$.
From \eqref{lambdaapprox} {it} is clear that to represent $\Lambda_\gamma$ we only need to compute $(\Lambda_\gamma Y_n^m)(x_k)$ in the quadrature points $x_k$. In this paper we compute $(\Lambda_\gamma Y_n^m)(x_k)$ efficiently by the boundary integral approach for piecewise constant conductivities given in \cite{delbary2014a}, an approach which, despite the lack of reconstruction theory, has been shown to perform well.
\subsection{Noise model}
We simulate a perturbation of the Dirichlet-to-Neumann map by adding Gaussian noise to ${Q}_\gamma$. We let
\begin{equation}
{Q}_\gamma^\varepsilon = {Q}_\gamma+\delta {E}, \label{noisemodel}
\end{equation}
where $\delta >0$ and the elements of the $2(N+1)^2\times 2(N+1)^2$ matrix ${E}$ are independent Gaussian random variables with zero mean and unit variance. We modify ${E}$ such that $\mathcal{B}{E}$ has zeros in its first row and column, so that we may consider $\delta\mathcal{B}{E}$ as an approximation of a linear and bounded operator $\mathcal{E}\in Y$. Furthermore, we approximate $\|\mathcal{E}\|_Y$ using \eqref{normapprox} and \eqref{innerapprox} and note that we can specify an absolute level of noise $\|\mathcal{E}\|_Y \approx \varepsilon$ by choosing $\delta$ appropriately. The relative noise level is then
\begin{equation}
\frac{\|\mathcal{E}\|_Y}{\|\Lambda_\gamma\|_Y} \approx \delta \frac{\|{\mathcal{B}}{E}\|_{N}}{\|{\mathcal{B}}{Q}_\gamma\|_{N}}.
\end{equation}
{Note that the noise model in \cite{delbary2014a} scales each element of ${E}$ with the corresponding element of ${Q}_\gamma$. Noise models for electrode data simulation typically take the form
$$V_j^\varepsilon = V_j+\delta_j E_j,$$
as in \cite{hamilton2020a}, where $V_j$ is the voltage vector corresponding to the $j$th current pattern, $\delta_j>0$ is a scaling parameter dependent on $V_j$, and $E_j$ is a Gaussian vector independent of $E_{j'}$ for $j\neq j'$. For our case such a noise model corresponds best to adding to $\widetilde{{Q}}_\gamma$ in \eqref{Atransform} a matrix $\widetilde{{E}}$ whose columns are $\delta_jE_j$. One may check by vectorizing ${A}^T \widetilde{{E}}^T$ that the corresponding ${E}$ of \eqref{noisemodel} consists of independent and identically distributed Gaussian vectors as rows. However, the elements of each row are now correlated with covariance matrix ${A}^T\mathrm{diag}(\delta){A}$.}
{Finally, we define the signal-to-noise ratio as
\begin{equation}
\mathrm{SNR} = \frac{1}{(N+1)^2}\sum_{n=0}^N\sum_{m=-n}^n \frac{\|{Q}_\gamma \underline{Y_n^m}\|_{\mathbb{C}^{2(N+1)^2}}}{\delta\|{E} \underline{Y_n^m}\|_{\mathbb{C}^{2(N+1)^2}}}.
\end{equation}
}
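A minimal Python sketch of the noise model \eqref{noisemodel} is given below. For simplicity it uses the Euclidean operator norm of the coefficient matrix directly as a stand-in for the $Y$-norm approximation \eqref{normapprox} and omits the modification that zeroes the first row and column of $\mathcal{B}E$; the matrix \texttt{Q\_gamma} is a placeholder for a precomputed Dirichlet-to-Neumann matrix.
\begin{verbatim}
import numpy as np

def add_noise(Q_gamma, eps_target, rng):
    """Perturb the DtN matrix: Q_eps = Q_gamma + delta*E with E standard Gaussian,
    where delta is scaled so the perturbation has Euclidean operator norm close
    to eps_target (a simple proxy for the Y-norm used in the text)."""
    n = Q_gamma.shape[0]
    E = rng.standard_normal((n, n))
    delta = eps_target / np.linalg.norm(E, 2)
    Q_eps = Q_gamma + delta * E
    rel_noise = eps_target / np.linalg.norm(Q_gamma, 2)
    return Q_eps, delta, rel_noise

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    N = 25
    n = 2 * (N + 1) ** 2          # matrix size used in Section 4.1
    Q_gamma = np.eye(n)           # placeholder for a computed DtN matrix
    # the SVD-based operator norm takes a moment at this size
    Q_eps, delta, rel = add_noise(Q_gamma, eps_target=1e-3, rng=rng)
    print(delta, rel)
\end{verbatim}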
\subsection{Solving the boundary integral equations}
Following \cite{delbary2014a} we discretize the perturbed boundary integral equations \eqref{bienoisy} by
\begin{equation}\label{discbie}
\left[I+(\mathcal{S}_{0} L_{N}+\mathcal{H}_{\zeta}^{N})(\Lambda_{\gamma}^\varepsilon-\Lambda_{1}) L_{N} \right] ((\psi_{\zeta}^{N})^\varepsilon|_{\partial \Omega})=e_{\zeta}|_{\partial \Omega},
\end{equation}
where $\mathcal{H}_{\zeta}^{N}$ is the approximation of {the integral operator $\mathcal{S}_\zeta-\mathcal{S}_0$} using the Gauss-Legendre quadrature rule of order $N+1$ on the unit sphere in the aforementioned quadrature points $\{x_k\}_{k=1}^{2(N+1)^2}$. We find the following result regarding the convergence of the perturbed solutions $(\psi_\zeta^N)^\varepsilon$ of \eqref{discbie} analogously to \cite{delbary2012a,delbary2014a}.
\begin{theorem}
Suppose $D_q<|\zeta(\xi)|<-\frac{1}{6}\log\varepsilon_2$ and that $\mathcal{E}$ is a bounded linear operator from $H^s(\partial \Omega)$ to $H^t(\partial \Omega)$ for all $s\geq 1/2$ and $t>s$.
Then for all $s>3/2$, there exists $N_0 \in \mathbb{N}$ such that for all $N\geq N_0$ the operator $I+(\mathcal{S}_{0} L_{N}+\mathcal{H}_{\zeta}^{N})(\Lambda_{\gamma}^\varepsilon-\Lambda_{1}) L_{N} $ is invertible in $H^s(\partial \Omega)$. Furthermore we have,
\begin{equation}
\|(\psi_{\zeta}^{N})^\varepsilon - \psi_\zeta^\varepsilon\|_{H^s(\partial \Omega)}\leq \frac{C}{N^{s-3/2}}\|e_\zeta\|_{H^s(\partial\Omega)}.
\end{equation}
\end{theorem}
\begin{proof}
The result follows from a Neumann series argument as in Lemma 3.1 and Theorem 3.2 of \cite{delbary2014a}, since for $D_q<|\zeta(\xi)|<-\frac{1}{6}\log\varepsilon_2$ there exists a bounded inverse $(B_\zeta^\varepsilon)^{-1}$ by Lemma \ref{lemma3}.
\end{proof}
This result ensures that the solutions of the discretized perturbed boundary integral equations are unique and converge to the solutions of \eqref{bienoisy}.
\subsection{Choice of $|\zeta(\xi)|$ and truncation radius}
It is clear from Method \ref{Method2} that we should set $|\zeta(\xi)|=M^{p}$ for some exponent $p>3/2$. Due to the high sensitivity of the CGO solutions with respect to $|\zeta(\xi)|$, we may choose $|\zeta(\xi)|$ differently in practice, although we will not necessarily have a regularization strategy in theory. One idea of \cite{delbary2012a} is to set $|\zeta(\xi)|$ minimal in the admissible set \eqref{zetaxieq}, that is
\begin{equation}\label{fix}
|\zeta(\xi)| = \frac{M}{\sqrt{2}}.
\end{equation}
A different idea is to choose $|\zeta(\xi)|$ independently for each $\xi$ such that $|\zeta(\xi)|$ is minimal with $|\zeta(\xi)| = \frac{|\xi|}{\sqrt{2}}$. We take the critical choice $|\zeta(\xi)|=K_1M^{3/2}$ for some constant $0<K_1<1$ to maintain the smallest $|\zeta|$ within the boundaries of the theory.
In practice we compute $\mathbf{t}^\varepsilon_{M(\varepsilon)}(\xi,\zeta(\xi))$ in a $\xi$-grid of points $|\xi|\leq M$ as in \cite{delbary2014a}. The Shannon sampling theorem ensures that we can recover the inverse Fourier transform uniquely if we sample densely enough. We use the discrete Fourier transform in equidistant $\xi$- and $x$-grids in three dimensions,
\begin{equation}
\xi_k^j = -M+k\frac{2M}{K-1} \quad \text{ and } \quad x_n^j = -x_{\mathrm{max}}+n\frac{2x_{\mathrm{max}}}{K-1},
\end{equation}
for $n,k=0,\hdots,K-1$, $j=1,2,3$ and some $x_{\max}$ determined by $K$ and $M$. Indeed the discrete Fourier transform requires
\begin{equation}\label{def:K}
M= \frac{\pi (K-1)^2}{2Kx_{\mathrm{max}}}
\end{equation}
to recover $q^{\varepsilon}(x_n^j)$ for all $n=0,\hdots,K-1$, $j=1,2,3$. In practical applications, we do not know the noise level, in which case we choose $M$ and $K$ and consequently determine $x_{\mathrm{max}}$. Then we recover $q^{\varepsilon}$ in an appropriate finite element mesh of the unit ball using trilinear interpolation. The discrete Fourier transform is computed efficiently with the use of FFT \cite{frigo2005design} with complexity $\mathcal{O}(K^3\log{K^3})$.
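The relation \eqref{def:K} simply expresses that the spacings of the equidistant grids satisfy the discrete Fourier transform requirement $\Delta\xi\,\Delta x = 2\pi/K$. A small Python check of this grid setup is sketched below; the values of $M$ and $K$ are placeholders.
\begin{verbatim}
import numpy as np

def grids(M, K):
    """Equidistant xi- and x-grids in one coordinate direction, with x_max
    determined from M = pi*(K-1)^2 / (2*K*x_max)."""
    x_max = np.pi * (K - 1) ** 2 / (2 * K * M)
    xi = -M + np.arange(K) * 2 * M / (K - 1)
    x = -x_max + np.arange(K) * 2 * x_max / (K - 1)
    return xi, x, x_max

if __name__ == "__main__":
    M, K = 9.0, 12
    xi, x, x_max = grids(M, K)
    dxi, dx = xi[1] - xi[0], x[1] - x[0]
    print(np.isclose(dxi * dx, 2 * np.pi / K), x_max)  # True and the induced x_max
\end{verbatim}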
The problem of finding the optimal truncation radius given noisy data $\Lambda_\gamma^\varepsilon$ is largely open and is related to the problem of systematically choosing a regularization parameter of regularized reconstruction for an inverse problem. In this paper, we choose the truncation radius by inspection for the simulated data. For further details on the implementation of the reconstruction algorithm we refer to \cite{delbary2012a,delbary2014a}.
\section{Numerical results}\label{sec:5}
We test Method \ref{method:2} as a regularization strategy. We are interested in whether the reconstruction converges to the true conductivity distribution as the noise level goes to zero, and likewise as the regularization parameter $\alpha$ goes to zero for a non-noisy Dirichlet-to-Neumann map. To this end, we simulate a Dirichlet-to-Neumann map for a well-known phantom.
\subsection{Test phantom}
The piecewise constant heart-lungs phantom consists of two spheroidal inclusions and a ball inclusion embedded in the unit ball with a background conductivity of $1$. The phantom is summarized in Table \ref{hl-phantom}. We compute and represent the Dirichlet-to-Neumann map and its noisy counterparts as described in Section \ref{sec41}. In particular, the forward map is computed using $2(N+1)^2$ boundary points on the unit sphere and spherical harmonics up to maximal degree $N=25$.
\begin{table}[ht!]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}lllll@{}}
\toprule
Inclusion & Center & Radii & Axes & Conductivity \\ \midrule
Ball & $(-0.09,-0.55,0)$ & $r = 0.273$ & & 2 \\ \midrule
Left spheroid &
$0.55(-\sin(\frac{5\pi}{12}), \cos(\frac{5\pi}{12}), 0)$ &
\begin{tabular}[c]{@{}l@{}}$r_1 = 0.468$,\\ $r_2 = 0.234$,\\ $r_3 = 0.234$\end{tabular} &
\begin{tabular}[c]{@{}l@{}}$(\cos(\frac{5\pi}{12}),\sin(\frac{5\pi}{12}),0)$, \\ $(-\sin(\frac{5\pi}{12}),\cos(\frac{5\pi}{12}),0)$, \\ $(0,0,1)$\end{tabular} &
0.5 \\ \midrule
Right spheroid &
$0.45(\sin(\frac{5\pi}{12}), \cos(\frac{5\pi}{12}), 0)$ &
\begin{tabular}[c]{@{}l@{}}$r_1 = 0.546$,\\ $r_2 = 0.273$,\\ $r_3 = 0.273$\end{tabular} &
\begin{tabular}[c]{@{}l@{}}$(\cos(\frac{5\pi}{12}),-\sin(\frac{5\pi}{12}),0)$, \\ $(\sin(\frac{5\pi}{12}), \cos(\frac{5\pi}{12}), 0)$, \\ $(0,0,1)$\end{tabular} &
0.5 \\ \bottomrule
\end{tabular}
}
\caption{Summary of piecewise constant heart-lungs phantom consisting of three inclusions}
\label{hl-phantom}
\end{table}
\begin{figure}
\caption{The piecewise constant heart-lungs phantom in a three-dimensional view.}
\label{fig:phantom1}
\end{figure}
\subsection{Regularization in practice}
We now consider the regularization strategy, Method \ref{method:2}, in practice. Alluding to \eqref{reqweak}, we test the reconstruction algorithm by keeping the test data fixed and varying the regularization parameter.
\begin{figure}
\caption{Cross sections $(x^3=0)$ of reconstructions using the regularized reconstruction algorithm with different choices of truncation radius $M$, $K=12$ and $|\zeta(\xi)|=\frac{1}{4}M^{3/2}$.}
\label{fig:reg1}
\end{figure}
In Figure \ref{fig:reg1}, we see cross-sectional plots of reconstructed conductivities for different truncation radii $M=\alpha^{-1}$. We use $|\zeta(\xi)|=\frac{1}{4} M^{3/2}$ as the critical choice such that $\zeta(\xi)\in \mathcal{V}_\xi$ for $M\geq 8$, and use the accurate Dirichlet-to-Neumann map with no added noise. The figure shows increasing accuracy and contrast for increasing truncation radii. Similar to the findings of \cite{delbary2014a}, we experience failing reconstructions for large enough truncation radii, as the frequency data is dominated by exponentially amplified noise inherent to the finite-precision representation of $\Lambda_\gamma$. This happens since there is noise present in the representation of the Dirichlet-to-Neumann map, no matter how accurately it represents the true infinite-precision data. We see the effect of truncation in practice: low resolution, smaller dynamic range and more smoothness caused by the missing high-frequency data. Though not immediately clear from this figure, the reconstructions slightly overshoot the conductivity of the resistive spheroidal inclusions, reaching values as low as 0.38. In addition, the reconstruction algorithm seems to work well in practice on piecewise constant conductivity distributions.
In Figure \ref{fig:reg2}, we see cross-sectional plots of reconstructed conductivities using Dirichlet-to-Neumann maps with added noise and for fixed $|\zeta(\xi)|= \frac{1}{3\sqrt{2}}M^{3/2}$. {Here, $K_1$ is chosen such that $\zeta(\xi)$ is small and admissible for $M\geq 9$.} The truncation radii are chosen optimally by visual inspection. The figure shows reconstructions in the presence of noise of levels ranging from $\varepsilon=10^{-6}$ to $\varepsilon=10^{-3}$ in the Dirichlet-to-Neumann map. We see improving quality of reconstruction as the noise level decreases, in accordance with Definition \ref{def:reg2}. Beyond noise levels of $10^{-3}$, reconstruction is still feasible without corruption by unstable noise, although such reconstructions need heavy regularization and start to lack visible features of the phantom. In Figure \ref{fig:noisyreg}, we see the conductivity reconstruction using noisy data with $\varepsilon=10^{-2}$, corresponding to approximately $1\%$ relative noise. The resistive spheroidal inclusions start to connect {and the conductive spherical inclusion is not as accurately placed. The remaining intensity in the signal compared to the case $M=9.7$ in Figure \ref{fig:noisyreg} could suggest that additional regularization is needed.}
\begin{figure}
\caption{Cross sections $(x^3=0)$ of reconstructions using the regularized reconstruction algorithm on noisy Dirichlet-to-Neumann maps. {The noise levels correspond to relative noise levels $\varepsilon \approx 0.1\%$ with $\mathrm{SNR}
\label{fig:reg2}
\end{figure}
\begin{figure}
\caption{Regularized reconstruction using noisy Dirichlet-to-Neumann maps with $\varepsilon = 10^{-2}
\label{fig:partA}
\label{fig:partB}
\label{fig:noisyreg}
\end{figure}
\begin{figure}
\caption{The truncation radii as predicted by theory $M =(-1/11\log(\varepsilon))^{-1/p}
\label{fig:reg3}
\end{figure}
The truncation radii of the reconstructions in Figures \ref{fig:reg2} and \ref{fig:noisyreg}, chosen by visual inspection, are plotted and compared to the theoretically predicted truncation radius in Figure \ref{fig:reg3}. This comparison suggests that the prediction is somewhat pessimistic and that the practical algorithm allows for lighter regularization than the theoretical estimates portend. {However, the prediction and the practical reconstructions are not directly comparable, since according to the theory we should pick $|\zeta(\xi)|=K_1M^p$ with $p$ strictly larger than $3/2$}. Finally, we note that the noise model utilized by \cite{delbary2014a} and \cite{hamilton2020a} gives somewhat different results compared to our {unnormalized perturbation}.
{The results also raise the question of how practical the reconstruction method is for more realistic data. Had we decreased the resolution of the basis of spherical harmonics onto which voltages and currents are projected, the approximation error of highly oscillatory functions would increase. In this case we expect that a smaller truncation radius would be needed to obtain a stable reconstruction. Investigating the reconstruction method for electrode data is subject to further study and is related to \cite{isaacson2004} for the two-dimensional D-bar method and \cite{hamilton2020a} for the three-dimensional so-called $\mathbf{t}^{\mathrm{exp}}$ approximation. Possible improvements to the truncation strategy include extending the support of $\mathbf{t}$ with prior information using the forward map as in \cite{MR3554880}. In addition, one could experiment with a truncation by thresholding as in \cite{MR3626801}.}
\section{Conclusions}
In this paper we provide and investigate a regularization strategy for the Calder\'on problem in three dimensions. The main result of the paper is Theorem \ref{maintheorem}, which shows that the algorithm defined by Method \ref{method:2} yields reconstructions approximating the true conductivity when using data corrupted by a sufficiently small perturbation. The proof relies on a gap in the magnitude of the complex frequency in which the existence of unique CGO solutions is guaranteed and the noise level allows a stable and unique solution to the boundary integral equation. The reconstructions from this strategy are regular as a result of the spectral filtering. Numerical results show the regularizing behavior of the reconstruction algorithm in practice and suggest that one can utilize higher frequency information in the data than the theory indicates. The reconstructions of piecewise constant conductivity data show promise even in the case of $1\%$ relative noise.
\section*{Acknowledgments}
AKR and KK were supported by The Villum Foundation (grant no. 25893).
\providecommand{\href}[2]{#2}
\providecommand{\arxiv}[1]{\href{http://arxiv.org/abs/#1}{arXiv:#1}}
\providecommand{\url}[1]{\texttt{#1}}
\providecommand{\urlprefix}{URL }
\end{document} |
\begin{document}
\title{Germ-typicality of the coexistence of infinitely many sinks}
\author{Pierre Berger\footnote{Partially supported by the ERC project 818737 \emph{Emergence of wild differentiable dynamical systems.}}, \; Sylvain Crovisier\footnote{Partially supported by the ERC project 692925 \emph{NUHGD}.}, \; Enrique Pujals
}
\date{\today}
\maketitle
\abstract{
In the spirit of Kolmogorov typicality, we introduce the notion of \emph{germ-typicality}: in a space of dynamics, it encompasses those phenomena that occur for an open and dense subset of parameters of any generic parametrized family of systems.
For any $2\le r<\infty$, we prove that the Newhouse phenomenon (the coexistence of infinitely many sinks) is locally $C^r$-germ-typical, nearby a \emph{dissipative bicycle}: a dissipative homoclinic tangency linked to a special heterodimensional cycle.
During the proof we show a result of independent interest: the stabilization of some heterodimensional cycles for any regularity class $r\in \{1, \dots, \infty\}\cup \{\omega\}$ by introducing a new renormalization scheme.
We also continue the study of paradynamics carried out in~\cite{BE15,berger2017emergence,BCP16} and
prove that parablenders appear by unfolding some heterodimensional cycles.
}
\begin{figure}[h]
\begin{center}
\includegraphics{figbicline.pdf}
\caption{Bicycle}\label{bicline}
\end{center}
\end{figure}
\mbox{}\begin{minipage}{0.75\textwidth}
\tableofcontents
\end{minipage}
\bigskip\bigskip
One of the most complex and rich phenomena in differentiable dynamical systems was discovered by Newhouse \cite{Ne74,Ne79}. He showed the existence of locally Baire-generic sets of dynamics displaying infinitely many sinks which accumulate onto a Smale horseshoe (a stably embedded Bernoulli shift).
This property is the celebrated {\em Newhouse phenomenon}. It appears in many classes of dynamics \cite{Bu97,BD99,DNP,Duarte08,Bi20}.
Following Yoccoz, this phenomenon provides a lower bound on the wildness and complexity of the dynamics rather than a complete understanding of it. Indeed, from the topological or statistical viewpoints, these dynamics are presently extremely far from being understood; it is not clear that the current dynamical paradigm would even allow one to state a description of such dynamics.
Since then the problem of the typicality of the Newhouse phenomenon has been fundamental. But the notion of Baire-genericity among dynamical systems is a priori independent of other notions of typicality involving probability.
That is why many important works and programs \cite{ TLY,PT93,PS95,Pa95,Pa05,Pa08,GK} wondered if the complement of the Newhouse phenomenon could be typical in some probabilistic senses
inspired by Kolmogorov.
In his plenary talk ending the ICM 1954, Kolmogorov introduced the notion of typicality for analytic or finitely differentiable dynamics of a manifold $M$. He actually gave two definitions: one was designed to decide that a phenomenon is negligible, the other one to decide that a phenomenon is typical.
He called \emph{negligible} a phenomenon which only holds on a subset of dynamics sent into a Lebesgue-null subset of $\mathbb R^n$ by a finite number of [non-trivial] real-valued functionals $(\mathcal F_i)_{0\le i\le n}$ on the space of dynamics.
To decide if a phenomenon $\mathcal B$ is \emph{typical}, he proposed to start with a dynamics $f_0$ presenting the behavior, and then to consider a deformation $f_a$ of the form
\[f_a(x) = f_0(x)+a\cdot \phi(x,a)\; ,\]
where $\phi$ is a function of both $x$ and $a$, of the same regularity as $f_0$ (e.g. analytic, smooth or finitely differentiable).
Then he called the behavior $\mathcal B$ \emph{typical}, or \emph{stably realizable} if, for every $a$ small enough,
the system $f_a$ displays this behavior. This was presented as a criterion for detecting the importance of a phenomenon:
\begin{center}
\it Any type of behavior of a dynamical system for which there exists at least one example\\ of stable realization should be recognized as being important and not negligible.
\end{center}
\begin{flushright}
Kolmogorov, ICM 1954.
\end{flushright}
Recently, \cite{BE15,berger2017emergence} showed the existence of locally Baire-generic sets of $C^r$-families of dynamics $(f_a)_{a\in \mathbb R^k}$, $r, k<\infty$, such that for
every $\|a\| \leq 1$, the map $f_a$ displays the Newhouse phenomenon. In particular, this showed that the complement of the Newhouse phenomenon is not typical for some interpretations of Kolmogorov's typicality. In \cite{berger2017emergence},
it has been also conjectured that some dynamics with complex statistical behavior should be typical in many senses.
In this work we show that the Newhouse phenomenon is typical according to the following notion
inspired by Kolmogorov's idea and subsequent developments \cite{Ilyashenko-nonlocal-bifurcation,KH-handbook,Mather,KZ20}:
\begin{definition}[Germ-typicality] A behavior $\mathcal B$ is \emph{$C^r$-germ-typical} in $\mathcal U\subset C^r(M,M)$, if
there exist a Baire-generic\footnote{i.e. a set which contains a countable intersection of open and dense sets.} set $\cR$ in the space of $C^r$-families\footnote{In \cref{sec:Preliminary}, we will make precise the topological space of $C^r$-families involved.} $\hat f=(f_a)_{a\in\mathbb R}$ of maps in $\cal U$
and a lower semi-continuous function $\delta\colon \cR \to (0,+\infty)$
such that for every $\hat f\in \mathcal R$
and for all $|a|<\delta(\hat f)$, the map $f_a$ presents the behavior $\mathcal B$.
\end{definition}
To make our setting precise, we consider an open set $U$ with boundary inside a surface $M$
and the space $\mathrm{Diff}^r_{loc}(U,M)$ of local $C^r$-diffeomorphisms from $\overline U$ to $M$.
Working inside this space, we show that any map displaying the homoclinic configuration called a \emph{bicycle} (see \cref{bicline}) lies in the closure of such open sets $\cal U$:
\begin{definition} A local diffeomorphism displays a \emph{bicycle} if one of its saddle points has a homoclinic tangency and a heterocycle. A saddle point $P$ displays a \emph{heterocycle} if $W^u(P)$ contains a projectively hyperbolic source $S$ and if the strong unstable manifold $W^{uu} (S)$ intersects $W^s(P)$.
The bicycle is \emph{dissipative} if the dynamics contracts the area along the orbit of $P$.
\end{definition}
Since a bicycle is a simple configuration, in many cases it may be easy to obtain,
as we will see in \cref{example simple} for the planar dynamics $(x,y)\mapsto (x^2-2,y)$.
The main theorem of this work is the following:
\begin{theo}\label{main}
For every $2\leq r<\infty$ and
for every local $C^r$-diffeomorphism of a surface $f\in\mathrm{Diff}^r_{loc}(U,M)$ which displays a dissipative bicycle, there exists a (non-empty) open set $\mathcal U^r\subset \mathrm{Diff}^r_{loc} (U, M) $ whose closure contains $f$
and where the Newhouse phenomenon is $C^r$-germ-typical.
\end{theo}
Following Kolmogorov's viewpoint, this theorem strengthens the evidence of the importance of the Newhouse phenomenon.
As we mentioned previously, \cite{BE15} discovered the stable coexistence of infinitely many sinks for a generic subset of an open set
of parametrized families, i.e. showed that the Newhouse phenomenon can not be neglected when one crosses a region of the space of systems
along some ``well-chosen directions". In comparison, \cref{main} goes one step further and establishes the typicality of the Newhouse phenomenon in a sense which only depends on a neighborhood inside the space of systems (and not on the neighborhood of a specific family).
For the sake of clarity, we restrict the scope of the present work to the case of surface local diffeomorphisms and to a notion of germ-typicality involving only one-parameter families. However, we are confident that our result could be generalized to a broader setting: for instance, \cite{berger2017emergence} applies also to diffeomorphisms in higher dimension and to finite-dimensional parameter families.
\subsection*{Locus of robust phenomena: stabilization of heterodimensional cycles}
Another main point of the present work is to bring to light
a very simple configuration near which germ-typicality of the Newhouse phenomenon holds true. The idea of associating a homoclinic configuration to a phenomenon goes back to Newhouse.
In \cite{Ne70}, he first showed that it is possible to get a (non-empty) open set of surface diffeomorphisms exhibiting homoclinic tangencies (these diffeomorphisms exhibit \emph{$C^2$-robust homoclinic tangencies}), and then in \cite{Ne74} that this open set can encompass a Baire-generic subset formed by dynamics displaying the Newhouse phenomenon (infinitely many attracting cycles). To obtain such open sets of diffeomorphisms with robust homoclinic tangencies, Newhouse considered horseshoes with large fractal dimension (large thickness in his own nomenclature). Later, in \cite{Ne79}, Newhouse proved that from [the configuration defined by] a homoclinic tangency, a perturbation of the dynamics displays a robust homoclinic tangency (see \cref{Newhouse}).
For local diffeomorphisms, these thick horseshoes can be replaced by
more topological objects, called \emph{blenders}. They were introduced by Bonatti and Diaz \cite{BD96} for diffeomorphisms in
dimension larger than or equal to three, and can be recast in the context of local diffeomorphisms of surfaces as hyperbolic compact sets such that the union of their local unstable manifolds covers $C^r$-robustly a (non-empty) open set of the surface (see \cref{def:blender}). In the same spirit as Newhouse's work, one can wonder near which
homoclinic configurations blenders appear. Bonatti, Diaz and Kiriki \cite{BDK} proved that heterodimensional cycles (which, in the case of local diffeomorphisms of surfaces, correspond to cycles between a saddle and a source) play that role when one considers the $C^1$-topology: a $C^1$-perturbation of the heterodimensional cycle generates open sets of dynamics exhibiting blenders and $C^1$-robust heterodimensional cycles. In the present paper, we extend this result to the context of more regular dynamics:
\begin{thmBintro} \label{robust heterointro}
For every $1\le r\le \infty$ or $r=\omega$,
consider $f\in \mathrm{Diff}^r_{loc}(U,M)$ exhibiting a heterocycle associated to a saddle $P$. Then there exists $\tilde f$, that is $C^r$-close to $f$, with a basic set $K$ containing the hyperbolic continuation of $P$, and which has a $C^r$-robust heterocycle.
\end{thmBintro}
While communicating our result, Li and Turaev informed us that they have independently proved a more general version of Theorem \ref{robust hetero} for higher dimensional systems, using different techniques \cite{LiTur}.
Diaz and Perez have also recently obtained~\cite{DP20} a similar stabilization
of heterodimensional cycles for $C^r$-diffeomorphisms in dimension $3$,
assuming in addition that one of the periodic points exhibits a homoclinic tangency.
\subsection*{Renormalization nearby heterocycles}
In order to prove Theorem \ref{robust hetero} (in \cref{sec:robust robust hetero}), we first show in \cref{pingpong} that nearby heterocycles there are heterocycles satisfying an additional property. These configurations are called \emph{strong heterocycles} and are defined in \cref{def hetero}. Then \cref{PPaffine} introduces a renormalization nearby strong heterocycles to obtain nearly affine blenders.
This renormalization consists in selecting two inverse branches $g^+$ and $g^-$ of large iterates of the dynamics, defined on boxes near the heterocycle, and then rescaling these two branches via the same coordinate change $\phi$, setting $\mathcal R g^- =\phi^{-1}\circ g^- \circ \phi$ and
$\mathcal R g^+ =\phi^{-1}\circ g^+ \circ \phi$.
The maps $\mathcal R g^-, \mathcal R g^+$ are close to affine maps and define a blender, which will be called \emph{nearly affine blender}, see \cref{def:affine blender}.
Theorem \ref{robust hetero} is restated more precisely in \cref{sec:blender}.
\cref{pingpong,PPaffine} are proved in \cref{s.strong,s.blender}, respectively. This renormalization is one of the main technical novelties of the present work. It is further developed to obtain \cref{theosectionparadense} (in \cref{sec:parablender}), a parametric counterpart of Theorem \ref{robust hetero}. \cref{theosectionparadense} is essential to prove Theorem \ref{main}. It states that near paraheterocycles there are nearly affine parablenders. These are objects of paradynamics.
\subsection*{Paradynamics}
To explain the role of these parametric blenders we have to go back to the paper \cite{BE15}: it considered parameter families of
local diffeomorphisms on surfaces and introduced the notion of \emph{paratangencies}: a homoclinic tangency that is ``sticky'' (or unfolded in ``slow motion"). That phenomenon implies that the attracting periodic points created by the unfolding of the tangency have ``a long life in the parameter space".
Moreover, if any perturbation of a parameter family still exhibits a dissipative homoclinic paratangency for all parameters
(in other words the family exhibits \emph{robust homoclinic paratangencies}, the analog in the space of parameter families of the robust homoclinic tangencies in the space of local diffeomorphisms) then, after a small perturbation, the new family displays infinitely many attracting periodic points for all parameters (see \cref{prop34,prop36}).
To provide robust paratangencies, \cite{BE15} introduced a parametric version of the blenders, called \emph{$C^r$-parablenders},
see \cref{def.parablender}. To grasp the idea behind this notion, first recall that any hyperbolic compact set of a map has a unique continuation for a nearby system. Any point in the hyperbolic set has a unique continuation as well (see \cref{sec:Preliminary} for details) and the same holds true for its local stable and unstable manifolds. When the parameter family is of class $C^r$, the continuation of a point defines a curve of class $C^r$. The key property of a $C^r$-parablender is that,
for an open set of parametrized points in the surface,
one of the local unstable manifolds of the parablender moves in slow motion with respect to the parametrized point.
This property can be pushed forward to the unfolding of homoclinic tangencies and allows one to
create robust homoclinic paratangencies.
For that purpose, it is easier to assume that the collection of local unstable manifolds covers a source
homoclinically linked to the parablender.
In \cite{BCP16}, the notion of parablender has been recast:
parameter families of maps naturally induce an action on $C^r$-jets
and the parablenders can be viewed as
blenders for this dynamics on the space of jets. This viewpoint allowed us to systematize the construction of
parablenders: in \cite{BCP16},
using Iterated Function Systems, a special type of parablenders called \emph{nearly affine parablenders} (see \cref{parablenders defi}) is introduced.
In the present paper, we tried to follow Newhouse's approach and looked for a simple bifurcation that generates ``robust paratangencies". According to \cite{berger2017emergence}, it suffices to obtain a parablender covering a source and linked to a dissipative homoclinic tangency. Similarly to \cite{BDK}, one can wonder if the parametric unfolding of a heterodimensional cycle may generate a parablender. We answer by proving that the unfolding of a homoclinic tangency related to a heterocycle (a \emph{bicycle}) is the sought configuration which produces
robust paratangencies.
To be precise, we first prove that, combining a homoclinic tangency with the heterocycle, one obtains an alternate chain of heterocycles (a chain of heterocycles involving saddles with negative eigenvalues, see \cref{def.Nchain}). The unfolding of that special chain produces a \emph{paraheterocycle} (a heterocycle that is unfolded in ``slow motion'', see \cref{def.paraheterocycle} and Theorem \ref{chain}), which then gives birth to nearly affine parablenders (see Theorem \ref{theosectionparadense}) using the aforementioned renormalization technique.
\subsection*{Open problems}
Paradynamics has been useful to prove that several complex and interesting phenomena are robust along a locally Baire-generic set of families of dynamics, see \cite{Be17per,IS17,BR21att,BR21}.
The tools brought by our work should make it possible to show the $C^r$-germ-typicality of these phenomena.
Note that if a behavior $\mathcal B$ is $C^r$-germ-typical in $\mathcal U$ then it occurs on an open and dense set of parameters for a Baire-generic set of $C^r$-families $(f_a)_a$ of dynamics $f_a\in \cal U$. But it does not imply that the Lebesgue measure of this open and dense set of parameters is full. In particular, it remains open whether the Newhouse phenomenon is locally typical with respect to some interpretations of
Kolmogorov typicality given by \cite{KH-handbook}, \cite{BE20} or \cite[Chapter 2, section 1]{Ilyashenko-nonlocal-bifurcation}. The latter is slightly stronger than:
\begin{definition}[Arnold prevalence (soft version)]
For $r, k\ge 1$, a behavior $\mathcal B$ is $C^r$-$k$-Arnold prevalent in $\mathcal U\subset C^r(M,M)$,
if there exists a Baire-generic set $\mathcal R$ of $C^r$-families $(f_a)_{a\in \mathbb R^k}$ formed by maps $f_a\in \cal U$ such that for every $(f_a)_a\in \mathcal R$, for Lebesgue almost every parameter $a\in \mathbb R^k$ , $f_a$ presents the behavior $\mathcal B$.
\end{definition}
Another notion of typicality has been introduced by Hunt, Sauer and Yorke \cite{HSY}, and then developed by Kaloshin-Hunt in \cite{KH-handbook}; it was used by Gorodetski-Kaloshin \cite{GK} to study the typicality of the Newhouse phenomenon,
but leaves open the problem of the typicality of the Newhouse phenomenon following the latter notions\footnote{The original notion of typicality defined by \cite{HSY} is defined for Banach spaces; its counterpart for Banach manifolds (such as the space of dynamics on a compact manifold) is so far not unique (there is no version of this notion which is invariant by coordinate change, contrarily to germ-typicality or Arnold prevalence).}.
Finally, let us emphasize that both the Arnold prevalence and the germ-typicality of the Newhouse phenomenon remain open for the $C^\infty$ and analytic topologies. We hope that the tools developed in the present work will be useful to make progress on these important problems.
\section{Concepts involved in the proof}\label{concepts}
In this section we state the main results which are used to obtain \cref{main}.
In \cref{sec:Preliminary} we recall classical definitions about hyperbolicity in the particular context of local diffeomorphisms.
In \cref{sec:homocycle} and \cref{sec:hetero} we recall the concepts of homoclinic tangency and heterodimensional cycle between fixed points with different indices and the classical results of Newhouse and Bonatti-Diaz associated to these bifurcations.
In \cref{sec:blender} we recall the notions of blenders and nearly affine blenders and we state the main theorem that relates cycles and blenders (\cref{robust hetero}).
In \cref{bicycle}, we state precisely the definition of bicycle (that combines a homocycle and a heterocycle) and
we show in \cref{robust-bicycle} that from bicycles one can obtain robust bicycles (it is worth mentioning that this is done in any $C^r$-regularity, including the analytic one).
In \cref{sec:parahetero} and \cref{sec:parablender} we give the parametric version of the previous results. In \cref{sec:parahetero} we introduce the notion of paraheterocycle and explain how by unfolding heterocycles associated to saddles with negative eigenvalues one can obtain a paraheterocycle (\cref{chain}). In \cref{sec:parablender} we introduce the notions of affine and nearly affine parablenders and explain how they emerge from paraheterocycles (\cref{theosectionparadense}).
\subsection{Preliminaries}\label{sec:Preliminary}
In the following $M$ is a compact surface, $U$ an open subset whose boundary is a smooth submanifold
and $\mathrm{Diff}^r_{loc}(U,M)$, for $r\in \mathbb N\cup \{\infty\}$, denotes the set of restrictions to $U$ of $C^r$-maps
$f\colon \overline U\to M$ whose differential $D_xf$ is invertible at every $x\in \overline U$.
Endowed with the $C^r$-topology,
this is a Baire space.
For some results,
one will also assume that $M$ is a real analytic surface
and let $\tilde M$ be a complex extension.
One then considers the space $\mathrm{Diff}^\omega_{loc}(U,M)$
of real analytic maps
endowed with the analytic topology defined as the inductive limit
of the spaces of holomorphic maps defined on neighborhoods of $M$
in $\tilde M$.
Now let us make precise the space of $C^r$-families
parametrized by the interval $\mathbb I=(-1,1)$.
For the sake of clarity, we will focus only on the space
$\mathcal{D}^r(\mathbb I \times U,M)$ of families $(f_a)_{a\in \mathbb I}$
which are the restriction of a map
$(a,x)\mapsto f_a(x)$ of class $C^r$
on $\overline{\mathbb I} \times \overline U$, which we endow with the uniform $C^r$-topology.
However, all our arguments are also valid for the smaller space $C^r(\overline{\mathbb I},\mathrm{Diff}^r_{loc} (U, M))$ endowed with the topology of $C^r$-maps from $\overline{\mathbb I}$ into $\mathrm{Diff}^r_{loc} (U, M)$.
An \emph{inverse branch} of an iterate $f^n$ of $f\in \mathrm{Diff}^r_{loc}(U,M)$ is the inverse of a restriction $f^n|V$ of $f^n$ to a domain $V\subset U$ such that $f^n|V$ is a diffeomorphism onto its image.
A compact set $K$ is \emph{(saddle) hyperbolic} for $f$ if it is $f$-invariant (i.e. $f(K)= K$) and there exists a continuous, $Df$-invariant subbundle $E^s$ of $TM|K$ which is uniformly contracted and normally uniformly expanded. More precisely,
there exists $N\geq 1$ satisfying:
\[\|D_zf^N|E^s_z\|<1/2 \quad \text{and}\quad \|p_{E^{s\bot}}\circ D_zf^N(v)\|\ge 2\|v\| \; ,\quad \forall z\in K,\; v\in E^{s\bot}_z\; ,\]
where $E^{s\bot}$ is the subbundle of $TM|K$ equal to the orthogonal complement of $E^s_z$ and $p_{E^{s\bot}}$ the orthogonal projection onto it. The hyperbolic set $K$ is a \emph{basic set} if it is transitive and locally maximal. Then $K$ is equal to the closure of its subset of periodic points.
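For instance, in the linear toy example $f(x,y)=(x/3,3y)$ with $K=\{(0,0)\}$, one may take $E^s=\mathbb R\times\{0\}$ and $N=1$: indeed $\|D_zf|E^s_z\|=1/3<1/2$ and $\|p_{E^{s\bot}}\circ D_zf(v)\|=3\|v\|\ge 2\|v\|$ for every $v\in E^{s\bot}_z$.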
Any point $x\in K$ has a stable manifold $W^s(x)$ (also denoted $W^s(x;f)$) which is an injectively immersed curve.
The map $f|K$ being in general not injective, a single point $x\in K$ has in general as many unstable manifolds as preorbits $\underline x$. We denote such a submanifold by $W^u(\underline x)$, or $W^u(\underline x; f)$. The space of preorbits $\underline x$ is denoted by $\overleftarrow K:= \{\underline x=(x_i)_{i\le 0}\in K^{\mathbb Z^-}:\; f(x_{i-1})=x_i\}$. The space $\overleftarrow K$ is canonically endowed with the product topology. The zero-coordinate projection is denoted by $\pi_f\colon \overleftarrow K\to M$; it semi-conjugates the shift dynamics $\sigma$ on $\overleftarrow K$ with $f$.
It is well known (see for instance \cite{BR13}) that a hyperbolic compact set is \emph{$C^1$-inverse limit stable}: for every $C^1$-perturbation $f'$ of $f$, there exists a (unique) map $\pi_{f'}\colon \overleftarrow K\to M$ which is $C^0$-close to $\pi_{f}$ and such that:
\[\pi_{f'}\circ \sigma = f'\circ \pi_{f'}\;.\]
The image $K_{f'}:= \pi_{f'}(\overleftarrow K)$ is also a hyperbolic set. Note that $K_f=K$. Also $K_{f'}$ is called the \emph{hyperbolic continuation} of $K$.
Two basic sets are \emph{(homoclinically) related} if there exists an unstable manifold of the first which has a transverse intersection point with a stable manifold of the second, and vice-versa. Then by the Inclination Lemma, the local unstable manifolds of one basic set are dense in the unstable manifolds of the other.
An $f$-invariant compact space is \emph{projectively hyperbolic expanding} if there exists
a continuous $Df$-invariant subbundle $E^{cu}$ of $TM|K$ which is uniformly expanded and normally uniformly expanded. More precisely,
there exists $N\geq 1$ satisfying:
\[\|D_zf^N|E^{cu}_z\|>2 \quad \text{and}\quad \|p_{E^{cu\bot}}\circ D_zf^N(v)\|\ge 2\cdot \|v\|\cdot \|D_zf^N|E^{cu}_z\| \; ,\quad \forall z\in K,\; v\in E^{cu\bot}_z.\]
If it is transitive and locally maximal, it is equal to the closure of its subset of periodic points.
To any $\underline x\in \overleftarrow K$, one associates a strong unstable manifold $W^{uu}(\underline x)$
as the set of points which converge to the orbit of $\underline x$ in the past transversally to the bundle $E^{cu}$.
A saddle periodic point $P$ of period $p\ge 1$, is \emph{dissipative} if $|\mathrm{det} D_Pf^{p}|<1$.
A source periodic point $S$ is \emph{projectively hyperbolic} if the tangent space at $S$ split into two $Df$-invariant directions, $T_SM=E^{cu}\oplus E^{uu}$, the direction $E^{cu}$ --called \emph{center unstable}-- being less expanded than the direction $E^{uu}$ --called \emph{strong unstable}. Its strong unstable
manifold $W^{uu}(S)$ is the set of points which converge to the orbit of $S$ in the past
in the direction of $E^{uu}$.
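To fix ideas with a linear toy example: for $f(x,y)=(3x,9y)$, the fixed point $S=(0,0)$ is a projectively hyperbolic source with $E^{cu}=\mathbb R\times\{0\}$, $E^{uu}=\{0\}\times\mathbb R$ and $W^{uu}(S)=\{0\}\times\mathbb R$; moreover the set $\{S\}$ is projectively hyperbolic expanding with $N=1$, since $\|D_Sf|E^{cu}\|=3>2$ and $\|p_{E^{cu\bot}}\circ D_Sf(v)\|=9\|v\|\ge 2\cdot\|v\|\cdot\|D_Sf|E^{cu}\|$ for every $v\in E^{cu\bot}$.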
\subsection{Homocycle}\label{sec:homocycle} Given $f\in \mathrm{Diff}^r_{loc}(U,M)$, a saddle periodic point $P\in U$ has a \emph{homoclinic tangency} or \emph{homocycle} for short, if its stable manifold has a non-transverse intersection point $T\in U$ with its unstable manifold.
\begin{equation}\tag{Homocycle} \exists T\in W^s(P)\cap W^u(P) \quad \text{such that}\quad T_TW^s(P)= T_TW^u(P)
\; .\end{equation}
More generally, a basic set $K\subset U$ has a homoclinic tangency if there exist $P\in K$
and $\underline Q\in \overleftarrow K$ (not necessarily periodic) such that $W^s(P)$ is tangent to $W^u(\underline Q)$.
A basic set $K$ has a \emph{$C^r$-robust homoclinic tangency} if for every $C^r$-perturbation of the dynamics, the hyperbolic continuation of $K$ still has a homoclinic tangency.
If $r\geq 2$ and if the phase space is a surface, the tangency $T$ is \emph{quadratic}
if the curvatures of $W^s(P)$ and $W^u(\underline Q)$ at $T$ are not equal.
\begin{figure}[h]
\begin{center}
\includegraphics{homocycle.pdf}
\caption{Homocycle}\label{homocycle}
\end{center}
\end{figure}
Here is a famous theorem by Newhouse~\cite{Ne79}, which stabilizes the homoclinic tangencies.
\begin{theorem}[Newhouse]\label{Newhouse} For $2\leq r\leq \infty$ or $r=\omega$, consider $f\in \mathrm{Diff}^r_{loc}(U,M)$ and a saddle periodic point $P$ exhibiting a homoclinic tangency $T$.
Then there exists $\tilde f$ $C^r$-close to $f$, with a basic set $K$
containing the hyperbolic continuation of $P$, and which has a $C^r$-robust homoclinic tangency.
\end{theorem}
The open set $\mathcal N^r $ of dynamics displaying a $C^r$-robust homoclinic tangency is called the \emph{Newhouse domain}. We denote by $\mathcal N^r(P)\subset \mathcal N^r$ the open set of dynamics for which the hyperbolic continuation of $P$ belongs to a basic set displaying a $C^r$-robust homoclinic tangency. By the Inclination Lemma, the stable and unstable manifolds of $P$ are dense in the stable and unstable sets of $K$. Thus a $C^r$-small perturbation of any dynamics in $\mathcal N^r(P) $ creates a homoclinic tangency for $P$. This proves:
\begin{proposition}\label{denistehomocylcle}
For every $1\le r\le \infty$ or $r=\omega$, there exists a $C^r$-dense set in $ \mathcal {N}^r(P)$, made by maps for which the hyperbolic continuation of $P$ has a homoclinic tangency.
\end{proposition}
Let $\mathcal N_{diss}^r(P)\subset \mathcal N^r(P)$ be the open set formed by dynamics for which the hyperbolic continuation of $P$ is dissipative.
As a periodic sink of arbitrarily large period can be obtained by a small perturbation of a dissipative homoclinic tangency, the latter proposition then implies the Baire-genericity in $\mathcal N_{diss}^r(P)$ of dynamics exhibiting a Newhouse phenomenon (see \cite{Ne74} for more details).
\subsection{Heterocycles}\label{sec:hetero}
In the present section we first recast, for the case of surface endomorphisms, the notion of heterodimensional cycle introduced in \cite{D,BD96}, and present two stronger versions of it, called \emph{heterocycle} and \emph{strong heterocycle}.
\begin{definition}\label{def hetero}
A map $f\in \mathrm{Diff}_{loc}(U,M)$ displays a \emph{heterodimensional cycle} if
it has a saddle periodic point $P$ and
a periodic source $S$ such that
$W^u(S)$ intersects $W^s(P)$ and $S$ is in $W^u(P) $:
\begin{equation}\tag{Heterodimensional cycle} S\in W^u(P) \quad \text{and}\quad W^s(P)\cap W^{u}(S)\neq \varnothing\; .\end{equation}
The heterodimensional cycle forms a \emph{heterocycle} if the source is projectively hyperbolic and $W^{uu}(S)$ intersects $W^s(P)$:
\begin{equation}\tag{Heterocycle} S\in W^u(P) \quad \text{and}\quad W^s(P )\cap W^{uu}(S)\neq \varnothing\; .\end{equation}
This heterocycle is \emph{strong} if furthermore $W^{uu}(S)$ contains $P$:
\begin{equation*}\tag{Strong heterocycle} S\in W^u(P) \quad \mathrm{and}\quad
P\in
W^{uu} (S)
\; .\end{equation*}
\end{definition}
We will see in \cref{pingpong} that any map displaying a heterocycle can be smoothly perturbed to display a strong heterocycle between a saddle point $P'$ homoclinically related to the initial one $P$, and the initial source $S$.
\begin{figure}[h]
\begin{center}
\includegraphics{figcyclehetero2.pdf}
\caption{Heterocycle for a surface map. \label{fig1}}
\end{center}
\end{figure}
A heterocycle is a one-codimensional phenomenon. To show its local density, we shall generalize it as follows. A basic set $K$ and a projectively hyperbolic periodic source $S$ of a surface map display a \emph{heterocycle} if there exists $P\in K$ (not necessarily periodic) such that $ W^s(P)\cap W^{uu}(S)\neq \varnothing$ and there exists $\underline P\in \overleftarrow K$ such that $P=\pi_f(\underline P)$ and $S\in W^u(\underline P)$.
The \emph{heterocycle is $C^r$-robust} if for every $C^r$-perturbation of the dynamics, the hyperbolic continuations of $K$ and $S$ still have a heterocycle.
The $C^r$-open set of surface maps which display a robust heterocycle is called the \emph{Bonatti-Diaz domain} and is denoted by $\mathcal {BD}^r$. We denote by $\mathcal {BD}^r(P,S)\subset \mathcal {BD}^r$ the open set of dynamics for which the hyperbolic continuation of $P$ belongs to a basic set displaying a $C^r$-robust heterocycle with the continuation of $S$.
\subsection{Blenders}\label{sec:blender}
Let us again consider a robust heterocycle between a basic set $K$ and a source $S$.
Since, by a perturbation of the dynamics, $S$ can be moved independently of $K$ and its unstable manifolds, $K$ must be a \emph{blender}:
\begin{definition}[$C^r$-Blender]\label{def:blender}
A \emph{$C^r$-blender} for $f\in \mathrm{Diff}^r_{loc}(U,M)$ is a basic set $K$ such that the union of its local unstable manifolds has $C^r$-robustly non-empty interior: there exists a continuous family of local unstable manifolds whose union contains a non-empty open set $V\subset U$ and the same holds true for their continuations for any $C^r$-perturbation $\tilde f$ of $f$.
The set $V$ is called an \emph{activation domain} of the blender $K$.
\end{definition}
As the periodic points are dense in $K$, the unstable manifolds of periodic points are also dense in the activation domain. Hence for a small $ C^r$-perturbation supported by a small neighborhood of the blender, there exists a
periodic point whose unstable manifold contains the source, defining a heterocycle. This proves the following counterpart of \cref{denistehomocylcle}:
\begin{proposition}\label{denisteheterocylcle}
For every $1\le r\le \infty$ or $r=\omega$, there exists a $C^r$-dense set in $ \mathcal {BD}^r(P,S)$ made by maps
for which the hyperbolic continuation of $P$ and $S$ have a heterocycle.
\end{proposition}
Bonatti and Diaz have introduced the notion of blender
and obtained the first semi-local constructions of robust heterocycles \cite{BD96}.
\begin{question}
All the known $C^r$-blenders are also $C^1$-blenders.
\emph{Is $\mathcal {BD}^r$ equal to $\mathcal {BD}^1$?}
\end{question}
The following notion has been introduced in \cite{BCP16} and will play a key role in a renormalization that we will perform nearby heterocycles.
\begin{definition}[Nearly affine blender]\label{exam.blender}\label{def:affine blender}
For $r\in [1,\infty)$, $\Delta>1$, $x_0\in(-2,2)$,
$\delta>0$,
$f$ has a \emph{$\delta$-$C^r$-nearly affine blender} with contraction $\Delta^{-1}$ if there is a $C^r$-chart $H \colon \mathbb R^2\hookrightarrow M$
such that:
\begin{itemize}
\item[--] there is an inverse branch $g^+$ of an iterate $f^{N^+}$ of $f$ such that $\cR g^+ := H^{-1}\circ g^+\circ H$ is well defined on $[-2,2]^2$ and is $\delta$-$C^r$-close to $(x,y) \mapsto (x_0, \Delta (y-1)+1)$;
\item[--] there is an inverse branch $g^-$ of an iterate $f^{N^-}$ of $f$ such that $\cR g^- := H^{-1}\circ g^-\circ H$ is well defined on $[-2,2]^2$ and is $\delta$-$C^r$-close to $(x,y) \mapsto (x_0, \Delta (y+1)-1)$.
\end{itemize}
\end{definition}
Observe that the maximal invariant set of the map:
\[(\cR g^+)^{-1}\sqcup (\cR g^-)^{-1}\colon \cR g^+([-2,2]^2)\sqcup \cR g^-([-2,2]^2) \to \mathbb R^2\]
is a basic set $K$. The following is easy, see for instance \cite[Section 6]{BCP16} for details.
\begin{proposition}
\label{ are blender}
For every $\Delta>1$ close to $1$, $x_0\in (-2,2)$ and $\eta\in(0,1)$, if $\delta>0$ is sufficiently small, then the set $K$ is a $C^1$-blender and $(-2,2)\times \left[-1+\eta, 1-\eta\right]$ is an activation domain.
\end{proposition}
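To indicate heuristically where this covering comes from (this is only a sketch; the actual proof, including the robustness under $\delta$-$C^1$-perturbations, is in \cite[Section 6]{BCP16}), look at the $y$-coordinate in the affine model: the inverse branches act by $y\mapsto \Delta(y-1)+1$ and $y\mapsto \Delta(y+1)-1$, so the forward branches act by the contractions
\[T_+(y)=\Delta^{-1}(y-1)+1 \quad \mathrm{and}\quad T_-(y)=\Delta^{-1}(y+1)-1\; ,\]
with fixed points $+1$ and $-1$. One computes $T_+([-1,1])=[1-2\Delta^{-1},1]$ and $T_-([-1,1])=[-1,-1+2\Delta^{-1}]$, whose union is $[-1,1]$ as soon as $\Delta\le 2$. Hence, for $\Delta$ close to $1$, every height $y\in[-1,1]$ is the limit of an infinite backward itinerary of the pair $\{T_+,T_-\}$, i.e. the nearly horizontal local unstable manifolds of $K$ pass through every height in $[-1,1]$; this is the covering expressed by the activation domain $(-2,2)\times[-1+\eta,1-\eta]$ above.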
In \cref{Proof robust hetero} we will prove the following analogue of Newhouse's \cref{Newhouse},
which stabilizes the heterocycles.
It will be obtained by introducing a renormalization for a perturbation of $f$ leading to a nearly affine blender.
\begin{theo}\label{robust hetero}
For every $1\le r\le \infty$ or $r=\omega$,
consider $f\in \mathrm{Diff}^r_{loc}(U,M)$ exhibiting a heterocycle formed by a saddle $P$ and a projectively hyperbolic source $S$.
Then for every $\delta>0$ and any number $\rho\le r$, there exists $\tilde f$, $C^r$-close to $f$,
such that $P_{\tilde f}$ is homoclinically related to a $\delta$-$C^\rho$-nearly affine blender whose activation domain contains $S_{\tilde f}$.
\end{theo}
\begin{question}
To what extent do the previous results generalize to heterodimensional cycles?
\end{question}
In that direction, \cite{BDK} proved for diffeomorphisms that it is possible to stabilize by $C^1$-perturbation
any \emph{classical} heterodimensional cycle between saddles whose stable dimension differs by one,
provided that at least one of the saddles involved in the cycle belongs to a nontrivial hyperbolic set. An analogue in any regularity class is obtained in \cite{LiTur}.
\subsection{Bicycles and robust bicycles}\label{bicycle}
Let us make precise the definition of a bicycle mentioned in the introduction:
\begin{definition}
A saddle $P$ and a projectively hyperbolic source $S$
display a \emph{bicycle} if they form a heterocycle and if $P$ has a homocycle.
The bicycle is \emph{dissipative} if the orbit of $P$ is dissipative.
\end{definition}
The notion of bicycle can be extended to basic sets.
\begin{definition} A basic set for $f\in \mathrm{Diff}^r_{loc}(U,M)$ displays a \emph{$C^r$-robust bicycle} if it displays a $C^r$-robust homocycle and forms a $C^r$-robust heterocycle with a projectively hyperbolic source.
\end{definition}
It is easy to build a bicycle by perturbation of some explicit example:
\begin{example}\label{example simple} For every $r\ge 2$, the map $f:= (x,y) \in \mathbb R^2\mapsto (x^2-2,y)$ is $C^r$-accumulated by maps $f_\varepsilon$ exhibiting a bicycle.
Hence by \cref{main}, there is an open set of $C^r$-perturbations $\mathcal U^r$ of $f_\varepsilon$ in which the coexistence of infinitely many sinks is $C^r$-germ-typical.
\end{example}
\begin{proof}[Proof of \cref{example simple}]
First, we choose the parameter $a$ close to $-2$ such that the map $g(x)=x^2+a$
admits two homoclinically related repelling periodic points $s$, $p$, the orbit of the critical point contains $p$
(there exists $n\geq 1$ such that $g^n(0)=p$) and the critical point belongs to the unstable set of $p$
(there exists a sequence of backward iterates of $0$ which accumulates on $p$); such a parameter $a$ is usually called a {\em Misiurewicz parameter}.
Then we consider a function $\rho$ close to $1$ which is equal to $1+\varepsilon$ on a small neighborhood of the orbit of $s$ and
to $1-\varepsilon$ in a small neighborhood of the orbit of $p$.
We now consider the following small perturbation of $f$:
\[f_{a,\varepsilon}\colon (x,y)\mapsto (x^2+a, \rho(x)\, y).\]
Observe that it has a projectively hyperbolic source $S := (s,0)$ and a dissipative saddle point $P:= (p,0)$, such that the unstable manifold of each point intersects the other point and the image of the critical point is still preperiodic. One can now perform a small perturbation in a neighborhood of the critical point that makes the map a local diffeomorphism and that also preserves the image of the critical point, which then defines a homoclinic tangency of $P$.
In this way, one obtains a map with a bicycle involving $P$ and $S$.
\end{proof}
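For orientation, note the elementary endpoint computation: at $a=-2$ the critical orbit of $g(x)=x^2-2$ is $0\mapsto -2\mapsto 2\mapsto 2$, so the critical point falls after two iterates on the fixed point $x=2$, which is repelling since $g'(2)=4$; the Misiurewicz parameters used in the proof above are chosen close to this endpoint.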
Similarly to \cref{denistehomocylcle} and \cref{denisteheterocylcle} we have:
\begin{proposition}\label{densitebicycle}
For every $1\le r\le \infty$ or $r=\omega$,
consider an open set of maps $f\in \mathrm{Diff}^r_{loc}(U,M)$
displaying a $C^r$-robust bicycle involving a saddle $P$ and a projectively hyperbolic source $S$.
It contains a $C^r$-dense subset of maps for which the hyperbolic continuation of $P$ and $S$ form a bicycle.
\end{proposition}
Combining Theorems \ref{Newhouse} and \ref{robust hetero}, one can stabilize the bicycles:
\begin{corollary}\label{robust-bicycle} For $2\leq r\leq \infty$ or $r=\omega$, consider
$f\in \mathrm{Diff}^r_{loc}(U,M)$ and a saddle $P$ exhibiting a bicycle.
Then there exists $\tilde f $, $C^r$-close to $f$, with a hyperbolic basic set $K$ containing the hyperbolic continuation of $P$ which exhibits a $C^r$-robust bicycle.
\end{corollary}
\subsection{Paraheterocycles}\label{sec:parahetero}
Let us fix $1\leq r \leq \infty$,
and a $C^{r}$-family $(f_a)_{a\in \mathbb R}$ of local diffeomorphisms $f_a\in \mathrm{Diff}^r_{loc}(U,M)$.
\paragraph{Hyperbolic sets for families of dynamics}
It is well known that if $f_0$ has a hyperbolic fixed point $P$, then its hyperbolic continuation
$(P_a)_{a\in I} $ is a $C^r$ function of the parameter $a$
on a neighborhood $I\subset \mathbb R$ of $0$. More generally, if $K$ is a hyperbolic set for $f_0$, with $\overleftarrow K$ the inverse limit of $K$, its hyperbolic continuation $(K_a)_{a\in I}$ is given by the ranges $K_a=\pi_a(\overleftarrow K)$ of a family of maps $\pi_a:= \pi_{f_a}\colon \overleftarrow K \to M$ (see Section~\ref{sec:Preliminary}), with the following regularity:
\begin{proposition}[see Prop 3.6 \cite{BE15}]\label{continuation}
There exists a neighborhood $I$ of $0$ where $(\pi_a)_{a\in I}$ is well defined.
For any $\underline z\in \overleftarrow K$, the map $a\in I \mapsto \pi_a(\underline z)$ is of class $C^r$ and depends continuously on $\underline z$ in the $C^r$-topology.
\end{proposition}
The local stable and unstable manifolds $W^s_{loc} (z; f_a) $ and $W^u_{loc} (\underline z; f_a) $ are canonically chosen so that they depend continuously on $a$, $z$ and $\underline z$ in the $C^r$-topology
(see Prop~3.6 in~\cite{BE15}).
They are called the \emph{hyperbolic continuations} of $W^s_{loc} (z; f_0) $ and $W^u_{loc} (\underline z; f_0) $ for $f_a$.
\begin{definition}[Paraheterocycle]\label{def.paraheterocycle}
Given $0\leq d\leq r$,
the family $(f_a)_{a\in \mathbb R}$ displays a \emph{$C^d$-paraheterocycle} at $a_0$ if there exist
a heterocycle for $f_{a_0}$ involving a saddle $P$ and
a projectively hyperbolic source $S$ whose hyperbolic continuations
satisfy for some $N\geq 0$
\begin{equation}\label{e.paracycle}
d(S_a,f^N_a(W^{u}_{loc}(P_a)))=o(|a-a_0|^{d'}), \quad \text{ for any integer $0\leq d'\leq d$.}
\end{equation}
We say it is a \emph{strong $C^d$-paraheterocycle} if furthermore $P$, $S$ form a strong heterocycle.
\end{definition}
Note that if $f_{a_0}$ has a heterocycle then $(f_a)_a$ has a $C^0$-paraheterocycle at $a=a_0$.
\begin{theo} \label{chain}
Consider a $C^\infty$ family of local diffeomorphisms $(f_a)_{a\in \mathbb R}$ in $\mathrm{Diff}^\infty_{loc}(U,M)$
and a heterocycle for $f_0$ between a saddle point $P$ with period $p$ and a projectively hyperbolic source $S$. Let us assume furthermore that the stable eigenvalue of $D_Pf^p_0$ is negative.
Then there exists a family $(\tilde f_a)_{a\in \mathbb R}$, $C^\infty$-close to $(f_a)_{a\in \mathbb R}$,
which displays a $C^\infty$-paraheterocycle at $a=0$ between the continuation of the saddle $P$
and a projectively hyperbolic source $S'$.
\end{theo}
We will see in \cref{negative-eigenvalue} that the assumption on the negative stable eigenvalue can be obtained when the heterocycle is included in a bicycle.
\begin{remark}
The definition of paraheterocycle, the statement of \cref{chain} and its proof extend without difficulty
to families parametrized by $\mathbb R^k$, for any $k\geq 1$, see \cref{r.k-para} and \cref{ss.k-para}.
\end{remark}
\subsection{Parablenders} \label{sec:parablender}
In this section we fix $1\le r<\infty$.
Parablenders are a parametric counterpart of blenders.
The first example of a parablender was given in \cite{BE15}; in \cite{BCP16} a new example of parablender was given and therein the definition of parablender was formulated as:
\begin{definition}[$C^r$-Parablender]\label{def.parablender}
The continuation $(K_a)_{a\in I}$ of a hyperbolic set $K$ for the family $(f_a)_{a\in \mathbb R}$ is a \emph{$C^r$-parablender} at ${a_0}\in \operatorname{Interior}(I)$ if the following condition is satisfied.
There exist a continuous family of local unstable manifolds $(W^u_{loc}(\underline z; f _{a_0}))_{\underline z\in\overleftarrow K}$ and a non-empty open set $O$ of germs at $a_0$ of $C^r$-families of points $(\gamma_a)_{a\in I}$ in $M$
such that for every $(\tilde f_a)_{a\in \mathbb R}$ $C^{r}$-close to $(f_a)_{a\in \mathbb R}$, there exists $\underline z\in \overleftarrow K$
satisfying:
\[\lim_{a\to a_0} |a-a_0|^{-r}\cdot d{\underline b}igg(\gamma_a\;,\; W^u_{loc}(\underline z; \tilde f _a){\underline b}igg)= 0\; .\]
The open set $O$ is called an \emph{activation domain} for the $C^r$-parablender $(K_a)_{a\in I}$.
\end{definition}
Here is the parametric counterpart of the nearly affine blender introduced in Def. \ref{exam.blender}.
\begin{definition}[Nearly affine parablender\footnote{
The coordinates considered in~\cite{BCP16} were slightly different but the same modulo conjugacy: the renormalized inverse branches are of the form:
\[B_b^\pm: (X,Y)\mapsto \left(0, (Y\pm1)/(\Delta^{-1} +b) \right),\]
which is conjugate to the presented form $(A_a^\pm)_\pm$ via the coordinate changes:
\[(X,Y)= (x-x_0, \frac{\Delta-1}{\Delta+a}\cdot y)\quad \mathrm{and}\quad
a=-\frac{b\cdot \Delta^2 }{1+b\cdot \Delta}\, .\]
}
\cite{BCP16}
]\label{parablenders defi}
For $\Delta>1$, $x_0\in (-2,2)$ and $\delta>0$,
a $C^r$-family $(f_a)_{a\in \mathbb R}$ has a \emph{$\delta$-nearly affine $C^r$-parablender}
with contraction $\Delta^{-1}$ at $a=0$ if there exist a neighborhood $I$ of $0$ in $\mathbb R$, a $C^r$-family $(H_a)_{a\in I}$ of charts $H_a \colon \mathbb R^2\hookrightarrow M$,
a diffeomorphism $\theta:J\hookrightarrow I$ fixing $0$ and inverse branches $(g^+_{a})_{a\in I}$, $(g^-_{a})_{a\in I}$ of iterates $f_a^{N^+}$, $f_a^{N^-}$ such that
\[\cR g_{a}^+ := H_a^{-1}\circ g_{\theta(a)}^+\circ H_a
\quad \mathrm{and}\quad \cR g_a^ - := H_a^{-1}\circ g_{\theta(a)}^ -\circ H_a\]
are well defined on $[-2,2]^2$ and $(\cR g_{a}^\pm)_{a\in I} $ are $\delta$-$C^r$-close to the two families $(A_a^\pm)_{a\in I}$
defined by
\[A_a^+: (x,y)\mapsto \left(x_0,(\Delta+a) \cdot y + \Delta-1 \right)
\quad \mathrm{and}\quad
A_a^-: (x,y)\mapsto \left(x_0,(\Delta +a)\cdot y - \Delta+1\right). \]
\end{definition}
Note that a nearly affine parablender defines a germ of family of nearly affine blenders $(K_a)_{a\in I}$ at $a=0$ and so a germ of family of blenders by \cref{ are blender}. In \cite[Section 6]{BCP16}, we showed\footnote{The activation domain is not made explicit in the statements of the results of \cite[Section 6]{BCP16}, but appears in the proof
as a product $W=B\times A$ (see page 67), where $B=[-2,2]\times (-\eta,\eta)^r$ and where
$A$ is a neighborhood of $0$ in $\mathbb R^{r+1}$, obtained as the image of a neighborhood of $0$
by a surjective linear map (page 63).} that it defines also a parablender:
\begin{proposition}\label{nearly are parablender} For every $\Delta>1$ close to $1$ and $x_0\in (-2,2)$, there is $\eta>0$ arbitrarily small such that if $\delta>0$ is sufficiently small,
then $(K_a)_{a\in I}$ is a $C^r$-parablender at $a=0$. Moreover, its activation domain contains:
\[\left\{(z_a)_{a\in I}\in C^r(I, \mathbb R^2) :
z_0\in [-2,2]\times (-\eta,\eta)
\quad \mathrm{and}\quad \| \partial_a^k z_a|_{a=0}\|<\eta , \quad \forall 1\le k\le r \right\}.\]
\end{proposition}
We will show that nearly affine $C^r$-parablenders appear as renormalizations of the dynamics nearby paraheterocycles. This will enable us to show:
\begin{theo}\label{theosectionparadense}
Let us consider a $C^\infty$ family of local diffeomorphisms $(f_a)_{a\in \mathbb R}$ in $\mathrm{Diff}^\infty_{loc}(U,M)$
and, for $r\geq 1$, a family of saddles $(P_a)_{a\in \mathbb R}$ and a family of projectively hyperbolic sources $(S_a)_{a\in \mathbb R}$ exhibiting a $C^r$-paraheterocycle at $a=0$.
Then there exists $(\tilde f_a)_{a\in \mathbb R}$, $C^\infty$-close to $(f_a)_{a\in \mathbb R}$, displaying a $C^r$-parablender at $a=0$ which is homoclinically related to $P_0$ and whose activation domain contains the germ of $(S_a)_{a\in \mathbb R}$ at $a=0$. In particular $(\tilde f_a)_{a\in \mathbb R}$ displays a $C^r$-robust paraheterocycle at $a=0$.
\end{theo}
\section{Structure of the proofs of the theorems}
\label{Proof robust hetero}
\subsection{Proof of \cref{robust hetero}}\label{sec:robust robust hetero}
The strategy of the proof breaks down into two steps.
In a first step, we obtain, by perturbation of the heterocycle, a strong heterocycle.
This is done in Section~\ref{s.strong}.
\begin{proposition} \label{pingpong} For $\rho\in \{\infty ,\omega\}$, let $ f\in \mathrm{Diff}^\rho_{loc}(U,M)$ with a projectively hyperbolic source $S$ and a saddle point $P$ forming a heterocycle. Then there exists a map $\tilde f$, arbitrarily $C^\rho$-close to $f$,
with a saddle point $Q$ homoclinically related to $P_{\tilde f}$ and which forms with $S_{\tilde f}$ a strong heterocycle.
\end{proposition}
In a second step we perturb the strong heterocycle in order to exhibit a nearly affine blender displaying a robust heterocycle. See Section~\ref{s.blender}.
\begin{proposition}\label{PPaffine}
For $\rho\in \{\infty ,\omega\}$, let $f\in \mathrm{Diff}^\rho_{loc}(U,M)$ with a projectively hyperbolic source $S$ and a saddle point $Q$ forming a strong heterocycle. Fix $1\leq r<\infty$ and take $\Lambda>1$ close to $1$.
Then, for every $\delta>0$ there exists a $C^\rho$-perturbation $\tilde f$ exhibiting a $\delta$-$C^r$-nearly affine blender which is homoclinically related to $Q_{\tilde f}$ and whose activation domain contains $S_{\tilde f}$.
\end{proposition}
Note that the conjunction of these two propositions implies \cref{robust hetero} for the topologies $C^\infty$ and $C^\omega$.
When the initial diffeomorphism is $C^r$, $1\leq r<\infty$,
we first perturb in the $C^r$-topology in order to get a $C^\infty$-diffeomorphism
taking care that the source $S$ still belongs to the unstable manifold of the saddle $P$, and we then apply the result
for $C^\infty$-diffeomorphisms.
\qed
\subsection{Proof of \cref{theosectionparadense}}
Similarly to the proof of \cref{robust hetero},
the proof consists in two steps that are the parametric counterparts of \cref{pingpong} and \cref{PPaffine}.
They are detailed in Sections~\ref{s.strong} and~\ref{s.blender}.
\begin{proposition}\label{Ppingpong}
Consider a $C^\infty$ family of local diffeomorphisms $(f_a)_{a\in \mathbb R}$ in $\mathrm{Diff}^\infty_{loc}(U,M)$,
and, for $r\geq 1$, a family of saddles $(P_a)_{a\in \mathbb R}$ and a family of projectively hyperbolic sources $(S_a)_{a\in \mathbb R}$ exhibiting a $C^r$-paraheterocycle at $a=0$.
Then there exists $(\tilde f_a)_{a\in \mathbb R}$, $C^\infty$-close to $( f_a)_{a\in \mathbb R}$
with a family of saddles $(Q_a)_{a\in \mathbb R}$ homoclinically related to $(P_a)_{a\in \mathbb R}$
which forms with $(S_a)_{a\in \mathbb R}$ a strong $C^r$-paraheterocycle at $a=0$.
\end{proposition}
\begin{proposition}\label{PPPaffine}
Consider a $C^\infty$ family of local diffeomorphisms $(f_a)_{a\in \mathbb R}$ in $\mathrm{Diff}^\infty_{loc}(U,M)$,
and, for $r\geq 1$, a family of saddles $(Q_a)_{a\in \mathbb R}$ and a family of projectively hyperbolic sources $(S_a)_{a\in \mathbb R}$ exhibiting a strong $C^r$-paraheterocycle at $a=0$.
Then there exists $(\tilde f_a)_{a\in \mathbb R}$, $C^\infty$-close to $(f_a)_{a\in \mathbb R}$, displaying a $C^r$-parablender at $a=0$ homoclinically related to $Q_0$ and whose activation domain contains the germ of $(S_a)_{a\in \mathbb R}$ at $a=0$.
\end{proposition}
This completes the proof of \cref{theosectionparadense}. \qed
\begin{remark}\label{r.H4}
One can choose the parablender and the family of local unstable manifolds defining its activation domain
in such a way that each local unstable manifold does not have $S_0$ as an endpoint and is not tangent to the weak unstable direction of $S_0$. See \cref{H4}.
\end{remark}
\subsection{Proof of \cref{chain}: chains of heterocycles}
We begin with some preparation lemmas. The first one is proved in section~\ref{s.repu}.
\begin{lemma} \label{Repu} Let $S$ and $P$ be a projectively hyperbolic source and a saddle point forming a heterocycle for a smooth map $f$. Then for a $C^\infty$-small perturbation of the dynamics, the source $S$ belongs to a Cantor set $R$ which is a projectively hyperbolic expanding set.
\end{lemma}
We introduce the following notion.
\begin{definition}\label{def.Nchain}
An \emph{$N$-chain of alternate heterocycles} for a map $f\in \mathrm{Diff}_{loc}(U,M)$ is the data of
$N$ saddle points $P^1,\dots, P^N$ and $N$ projectively hyperbolic sources $S^1, \dots, S^N$ such that:
\begin{itemize}
\item the orbits of $P^1,\dots,P^N,S^1,\dots,S^N$ are pairwise disjoint,
\item the stable eigenvalues of the saddles $P^i$ are negative,
\item $W^u(P^i)$ contains $S^i$ and is transverse to $E^{cu}_{S^i}$ for each $1\leq i\leq N$,
\item $W^{uu}(S^i)$ intersects transversally $W^{s}(P^{i+1})$ for $1\leq i<N$
and $W^{uu}(S^N)$ intersects transversally $W^{s}(P^{1})$.
\end{itemize}
\end{definition}
\begin{figure}[h]
\centering
\includegraphics{fig_def_alternate_hetero.pdf}
\caption{2-Chain of heterocycles.}
\label{para2}
\end{figure}
Chains of alternate heterocycles may be obtained as follows.
\begin{lemma} \label{L.GenGaraheteroPara}
Consider $f\in \mathrm{Diff}^\infty_{loc}(U,M)$
with a heterocycle between a saddle $P$ with period $p$ and a source $S$ such that the stable eigenvalue of $D_Pf^p$ is negative.
Then, for any $N\geq 1$, there exists $\tilde f$, $C^\infty$-close to $f$,
with an $N$-chain of alternate heterocycles whose
saddles $P^1=P,P^2,\dots,P^N$ are homoclinically related to the continuation $P_{\tilde f}$.
\end{lemma}
\begin{proof}
By preliminary perturbations one stabilizes the heterocycle and builds a blender $K$
homoclinically related to $P$, whose activation domain contains $S$ (\cref{robust hetero}).
One also reduces to the case where the source $S$ belongs to a projectively hyperbolic expanding
invariant Cantor set $R$ (\cref{Repu}).
One can also assume that $W^{uu}(S)$ and $W^s(P)$ have a transverse intersection point.
In order to simplify, one will assume that $K$ is topologically mixing
(otherwise one has to decompose $K$ into finitely many pieces permuted by the dynamics
and whose return map is topologically mixing on each piece).
Note that $P^1=P$ and $S^1=S$ define a $1$-chain of alternate heterocycles.
One proves the statement by induction on $N$.
Let us assume that $f$ has an $(N-1)$-chain of alternate heterocycles whose
saddles $P^i$ are homoclinically related to $P$.
One chooses a saddle $P^N$ whose orbit is distinct from the orbits of
$P^1,\dots,P^{N-1}$ and which is homoclinically related to $P$: since $W^{uu}(S^{N-1})$ intersects transversally $W^s(P^{1})$, it also intersects transversally $W^s(P^{N})$.
One also chooses a source $S^N\in R$ in the activation domain of $K$
and whose orbit is distinct from the orbits of $S^1,\dots,S^{N-1}$; one can furthermore assume that it is arbitrarily close to $S$, so that $W^{uu}(S^N)$ intersects transversally $W^s(P)$, hence $W^s(P^1)$.
The blender property implies that $S^N$ belongs to the unstable set of $K$.
More precisely there exists $x\in K$ and $y\in W^u(x)\setminus \text{\rm Orbit}(S^N)$
such that $f(y)\in \text{\rm Orbit}(S^N)$.
Since $K$ is topologically mixing, $W^{u}(P^N)$ is dense in the unstable set of $K$; one can thus find $y'\in W^u(P^N)$ arbitrarily close to $y$
and whose backward orbit is disjoint from a uniform neighborhood of $y$.
One then perturbs $f$ in a small neighborhood of $y$
and gets a map satisfying $\tilde f(y')=f(y)$.
Consequently $P^N$ and $S^N$ define a heterocycle for $\tilde f$
and the properties built at the previous steps of the induction are preserved.
\end{proof}
The existence of a saddle point with negative stable eigenvalue
may be obtained once a saddle belongs to a homocycle, as we recall in the next lemma.
\begin{lemma} \label{negative-eigenvalue}
Let $f\in \mathrm{Diff}loc^\infty(U,M)$ and $P$ be a saddle point with a homoclinic tangency $L$. Then for a $C^\infty$-small perturbation $\tilde f$ of the dynamics supported on a small neighborhood of $L$,
the saddle $P$ belongs to a basic set which contains a point $Q$ with some period $\tau$
and such that the stable eigenvalue of $D_Q\tilde f^\tau$ is negative.
\end{lemma}
\begin{proof}
This is a well-known result.
Up to replacing $L$ by an iterate, one may assume $L\in W^s_{loc}(P)$.
One perturbs $f$ so that the contact of the homoclinic tangency is quadratic.
By unfolding the homoclinic tangency, a horseshoe containing $P$ appears.
Indeed, one considers a thin rectangle $R$ which is a tubular neighborhood of $W^s_{loc}(P)$.
A large iterate $f^\ell(R)$ crosses $R$ twice, with different orientations.
In each component of the intersection, an $\ell$-periodic point is obtained,
and the signs of $Df^\ell$ along the stable direction differ.
See \cite[chapter 3]{PT93} for details.
\end{proof}
\cref{chain} follows from the next proposition, proved in \cref{s.heterocycles}.
\begin{proposition} \label{P.GenGaraheteroPara}
For any $d\geq 0$, there exists $N=N(d)\geq 1$ with the following property.
Consider a $C^\infty$ family
$(f_a)_{a\in \mathbb R}$ in $\mathrm{Diff}loc^\infty(U,M)$
such that $f_0$ has an $N$-chain of alternate heterocycles with saddle points $P^i$ and sources $S^i$.
Then there exists a family $(\tilde f_a)_{a\in \mathbb R}$, $C^\infty$-close to $(f_a)_{a\in \mathbb R}$,
such that the continuations of $P^1$ and $S^{N}$ form a $C^d$-paraheterocycle at $a=0$.
\end{proposition}
\begin{remark}\label{r.k-para}
This result is still valid for families parametrized by $\mathbb R^k$, $k\geq 1$ (see Section~\ref{ss.k-para}).
The length of the chain required is then equal to:
\[N(r,k)={\dim_\mathbb R \{P\in \mathbb R[X_1, \dots, X_k]:\; \deg P\le r,\; P(0)=0\}}.\]
\end{remark}
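For concreteness, the dimension appearing in this formula can be evaluated explicitly; counting the monomials of degree between $1$ and $r$ in $k$ variables gives
\[N(r,k)=\binom{r+k}{k}-1,\qquad \text{e.g. } N(r,1)=r \ \text{ and } \ N(2,2)=5.\]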
\begin{proof}[Proof of \cref{chain}]
For any large integer $d\geq 1$, \cref{L.GenGaraheteroPara} and
\cref{P.GenGaraheteroPara} give after a $C^\infty$-perturbation
a $C^d$-paraheterocycle between the continuation of the saddle $P$
and a projectively hyperbolic source $S'$.
Hence there exists a $C^d$-small perturbation $(f'_a)_{a\in \mathbb R}$
in $\mathrm{Diff}loc^\infty(U,M)$ and an integer $N$ which satisfy
$S'_a\in (f'_a)^N(W^u_{loc}(P_{f'_a}))$ for any $a$ close to $0$.
Since $d$ has been chosen arbitrarily large, the perturbation can
be chosen $C^\infty$-small.
\end{proof}
\subsection{Proof of \cref{main}}
A consequence of the previous results is the following:
\begin{coro}\label{thesis H}
Consider a $C^\infty$ family $(f_a)_{a\in \mathbb R}$ in $\mathrm{Diff}loc^\infty (U,M)$ such that $f_0$ displays a bicycle between a projectively hyperbolic source $S_0$ and a dissipative saddle point $P_0$. Let $r\geq 1$. Then up to a $C^r$-perturbation of the family, and up to replacing $S_0$
by another projectively hyperbolic source, we can assume that:
\begin{enumerate}[$(H_1)$]\setcounter{enumi}{-1}
\item There exists a blender $K_0$ for $f_0$ whose activation domain contains $S_0$.
\item $K_0$ intersects the repulsive basin of $S_0$.
\item $P_0$ is homoclinically related to $K_0$ and $W^s (P_0) $ has a robust tangency with the strong unstable foliation $\mathcal F^{uu}$ of $S_0$.
\item The continuation $(K_a)_{a\in I}$ of $K_0$ is a $C^r$-parablender at $a=0$ and the continuation $(S_a)_{a\in I}$ of $S_0$ belongs to its activation domain.
\item In the continuous family of local unstable manifolds defining the activation domain involved in $(H_0)$ and $(H_3)$,
each local unstable manifold does not have $S_0$ as an endpoint and is not tangent to the weak unstable direction of $S_0$.
\end{enumerate}
\end{coro}
\begin{remark}
The properties $(H_0)$\dots$(H_4)$ are $C^r$-open.
\end{remark}
\begin{proof}
With \cref{robust-bicycle} (\cpageref{robust-bicycle}), one first stabilizes the bicycle.
By \cref{negative-eigenvalue}, up to a small $C^\infty$-perturbation, one gets a saddle $Q_0$
homoclinically related to $P_0$ whose stable eigenvalue at the period is negative.
One thus gets a robust heterocycle between $Q_0$ and $S_0$
and \cref{chain} (\cpageref{chain}) gives a family $(f'_a)_{a\in \mathbb R}$, $C^\infty$-close to the initial one, displaying a $C^\infty$-paraheterocycle between the continuation of $Q_0$ and a projectively hyperbolic source $S'$.
\cref{theosectionparadense} (\cpageref{theosectionparadense}) produces a $C^r$-close family having a
$C^r$-parablender $(K_a)_{a\in I}$ at $a=0$ which is homoclinically related to $Q_0$ (and $P_0$)
and whose activation domain contains the family of sources $(S'_a)_{a\in I}$.
Denoting the new source by $S_0$, we get all the robust properties $(H_0)$, $(H_1)$ and $(H_3)$.
By \cref{r.H4}, $(H_4)$ is also satisfied.
Since $P_0$ and $S_0$ form a robust heterocycle,
one can assume (after a new perturbation) that the strong unstable manifold $W^{uu}(S_0)$
intersects transversally $W^s(P_0)$.
From the robust tangency, we can perturb and produce a homoclinic tangency point $L$
between $W^u_{loc}(P_0)$ and $W^s(P_0)$.
The inclination lemma implies that $W^{uu}(S_0)$ accumulates on $W^u_{loc}(P_0)$.
A last perturbation near $L$ gives a quadratic tangency between $W^{uu}(S_0)$
and $W^s(P_0)$. For maps close, this tangency admits a continuation
which is a quadratic tangency between $W^s(P_0)$
and the leaves of the strong unstable foliation in the repelling basin of $S_0$:
this is $(H_2)$.
\end{proof}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=6cm]{statement.pdf}
\caption{Assumptions $(H_0)$\dots$(H_4)$. \label{fig:assumption H}}
\end{center}
\end{figure}
We now use the following result of \cite[Theorem A, page 11]{berger2017emergence}:
\begin{theorem}\label{t.infinite-sinks}
Consider a $C^\infty$ family $(f_a)_{a\in \mathbb R}$ in $\mathrm{Diff}loc^\infty (U,M)$
with a projectively hyperbolic source $(S_a)_{a\in \mathbb R}$ and a dissipative saddle point $(P_a)_{a\in \mathbb R}$ satisfying $(H_0)$\dots$(H_4)$.
Then, there are $\delta>0$, a $C^r$-neighborhood $\mathcal V$
of the family $(f_a)_a$
in the space of $C^r$-families
and a Baire-generic subset $\cal G\subset \cal V$ such that for any $(\tilde f_a)_a\in \cal G$ and $a\in (-\delta, \delta)$, the map $\tilde f_a$ displays infinitely many sinks.
\end{theorem}
For completeness we sketch its proof.
\begin{proof}[Idea of the proof of \cref{t.infinite-sinks}]
Since the hypotheses are open, they hold on an open neighborhood $\mathcal V$ of the initial family.
Let us consider an arbitrary family $(f'_a)_{a\in \mathbb R}$ in $\mathcal V$.
The robust heterocycle provided by $(H_0)$ and $(H_1)$, together with \cref{Repu}, allows us, after a perturbation, to assume that there are $\delta>0$
and two distinct sources $(S_a)_{a\in [-\delta, \delta]}$, $(S_a')_{a\in [-\delta, \delta]}$ which satisfy $(H_0)$\dots$(H_4)$ at every $a_0\in [-\delta, \delta]$ for each of these sources
and for the family $(f'_a)_{a\in \mathbb R}$.
Then we apply the following key lemma (which uses $(H_3)$):
\begin{lemma}[{\cite[Prop. 3.6]{berger2017emergence}}]\label{prop36}
For every $\varepsilon>0$, there exist $\alpha>0$ and an $\varepsilon$-$C^r$-perturbation $(f''_a)_{a\in [-\delta, \delta]} $ localized at $(S_a)_a$ and $(S_a')_a$ such that:
\begin{enumerate}
\item for every $j\in 2\mathbb Z$, there exists a continuation of a periodic point $(P^{(j)}_a)_a$ in the parablender whose local unstable manifold contains $S_a$ for every $a\in [-\delta, \delta] \cap [\alpha j-\alpha/2,\alpha j+\alpha/2]$,
\item for every $j\in 2\mathbb Z+1$, there exists a continuation of a periodic point $(P^{(j)}_a)_a$ in the parablender whose local unstable manifold contains $S'_a$ for every $a\in [-\delta, \delta] \cap [\alpha j-\alpha/2,\alpha j+\alpha/2]$.
\end{enumerate}
\end{lemma}
We continue with:
\begin{lemma}[{\cite[Prop. 3.4]{berger2017emergence}}] \label{prop34}
After a new $C^\infty$-small perturbation of $(f''_a)_a$, for every $j\in \mathbb Z\cap [-\delta/\alpha, \delta/\alpha]$ the point $P_a$ displays a quadratic homoclinic tangency which persists for every $a\in [-\delta, \delta] \cap [\alpha j-\alpha/2,\alpha j+\alpha/2]$. \end{lemma}
\begin{proof}[Idea of proof of \cref{prop34}]
Assume that $j$ is odd (resp. even) and let us continue with the setting of \cref{prop36}.
As $P_a$ and $P^{(j)}_a$ belong to the same transitive hyperbolic set and using \cref{continuation},
after a small perturbation a fixed iterate of the local unstable manifold of $P_a$ contains $S_a$
for every $a\in [-\delta, \delta] \cap [\alpha j-\alpha/2,\alpha j+\alpha/2]$.
Then we proceed as depicted in \cref{fig:proofofprop34}: we denote by $W^s_a$ a segment of $W^s(P_a)$ which is included in a basin of $S_a$ (resp. $S_a'$) and which, by $(H_2)$, displays a tangency with the strong unstable foliation of the
repelling basin of $S'_a$ (resp. $S_a$). After a perturbation we can assume that this tangency is quadratic. Then, in the Grassmannian bundle $\mathbb P(TM)$ of $M$, the tangent space $TW^s$ of this curve intersects transversally the unstable manifold of $(S_a, E^{uu}(S_a)) $ for the action $\hat f_a$ of $Df_a$ on the Grassmannian. By the inclination lemma, the preimages $TW^s_{n,a}$ of $TW^s_a$ by $\hat f_a^n$ converge to the stable manifold $\{S_a\}\times \mathbb {P R}^1\setminus \{E^{cu}(S_a)\}$ of $(S_a, E^{uu}(S_a))$.
By property $(H_4)$, a piece of $W^u_{loc} (P_a)$ passes through $S_a$ with a direction different from $E^{uu}(S_a)$,
hence the stable manifold of $(S_a, E^{uu}(S_a))$ intersects non-tangentially a piece $TW^u_a$ of $TW^u_{loc} (P_a)$ for every $a\in [-\delta, \delta] \cap [\alpha j-\alpha/2,\alpha j+\alpha/2]$. This enables us to perturb $(f_a)_a$ so that $TW^s_{n,a}\subset TW^s (P_a)$ intersects $TW^u_a$ for every $a\in [-\delta, \delta] \cap [\alpha j-\alpha/2,\alpha j+\alpha/2]$, which yields the persistent quadratic homoclinic tangency. \begin{figure}[h!]
\begin{center}
\includegraphics[width=7cm]{tangencycreation.pdf}
\caption{Inclination lemma used in the bundle $\mathbb P(TM)$ at $E^{uu}(S_a)$. }\label{fig:proofofprop34}
\end{center}
\end{figure}
\end{proof}
In {\cite[Prop. 3.5]{berger2017emergence}} it is shown that for every $N\geq 1$, we can then perturb the family
in the $C^\infty$-topology near the homoclinic tangency of $(P_a)_a$ obtained in \cref{prop34}
so that for every $a\in [\alpha j-\alpha/2,\alpha j+\alpha/2]$ and $j\in \mathbb Z\cap [-\delta/\alpha, \delta/\alpha]$, the new map $\tilde f_a$ displays a periodic sink of period $\ge N$. Hence we have obtained an open and dense subset in $\mathcal V$ of families displaying a sink of period $\ge N$ at every parameter $a\in [-\delta, \delta]$. By taking the intersection $\cal G$ of these open and dense subsets over $N\ge 1$, we obtain \cref{t.infinite-sinks}.
\end{proof}
This allows us to complete the proof of our main theorem.
\begin{proof}[Proof of \cref{main}]
Let us consider a $C^r$ map $f$ with a dissipative bicycle associated to a saddle $P$.
By \cref{robust-bicycle}, there exists a $C^r$-open set $\mathcal U\subset \mathrm{Diff}loc^r(U,M)$, which contains $f$ in its closure,
such that the continuation of $P$ exhibits a robust bicycle for any map in $\mathcal U$.
Let $F:=(f_a)_{a\in \mathbb R}$ be a $C^r$-family consisting of maps $f_a\in\mathcal U$.
By perturbation, one can assume that the family is $C^\infty$ and
by \cref{densitebicycle} that $f_0$ displays a bicycle.
Then, by \cref{thesis H}, there exists a new $C^r$-perturbation which
satisfies
$(H_0)$\dots$(H_4)$.
\cref{t.infinite-sinks} associates to this family a neighborhood $\mathcal{V}_F$, a dense
$G_\delta$-subset ${\mathcal G}_F$ of $\mathcal{V}_F$, and a constant $\delta_F>0$.
Let $\{F_n: n\in\mathbb N\}$ be a dense countable set in the space of families $(f_a)_{a\in \mathbb R}\in \mathcal{D}^r(\mathbb I \times U,M)$
consisting of maps $f_a\in\mathcal U$.
The union ${\mathcal G}=\bigcup_n {\mathcal G}_{F_n}$ is a dense $G_{\delta}$ subset of this space.
By construction, for any family $F=(f_a)_{a\in \mathbb R}$ in ${\mathcal G}$
and any $|a|$ smaller than a semi-continuous function $\delta_F$ of $F$, the map $f_a$ exhibits infinitely many sinks.
\end{proof}
\section{From heterocycles to basic sets and strong heterocycles}\label{s.strong}
In this section we prove \cref{pingpong}, \cref{Ppingpong} and \cref{Repu}.
We consider a $C^\infty$ map $f\in \mathrm{Diff}loc^\infty(U,M)$ with a projectively hyperbolic source $S$ and a saddle point $P$ forming a heterocycle, and we show that by perturbation it can be improved to
a strong heterocycle.
In \cref{Local coordinate for a heterocycle}, we first establish local coordinates around $P$ and $S$. To obtain these coordinates, we need to perturb the dynamics, both to make the eigenvalues non-resonant and to ensure the transversality conditions $(T_1)$--$(T_3)$. Then, near $P$ and $S$, the inverse dynamics $\cP$ and $\cS$ are linear in local coordinates. Furthermore, the heterocycle defines inverse branches of the dynamics that are transitions from one linearizing chart to the other.
As a direct application of these linearizing charts, we build an IFS
and from there an expanding projectively hyperbolic set containing the source: this allows us to prove \cref{Repu} at the beginning of \cref{section basic}.
Later, using again these coordinates, we obtain the existence of a non-trivial basic set $K$ which contains $P$
(\cref{existence of Q}).
After a small perturbation, which consists in perturbing the stable eigenvalue of $P$, the strong unstable manifold of $S$ intersects $K$, whereas $S$ belongs to $W^u(K)$. This will imply \cref{pingpong}. The proof of \cref{Ppingpong} follows similar lines.
\subsection{Local coordinates for a heterocycle}\label{Local coordinate for a heterocycle}
For the sake of simplicity we assume that the periodic points $P$ and $S$ are fixed
and that the eigenvalues of $D_Pf$ and $D_Sf$ are positive.
Indeed, one can always reduce to this case by considering an iterate of the dynamics and performing the forthcoming perturbations
near finitely many points belonging to different orbits.
Up to a smooth perturbation we can assume that the eigenvalues of $D_Pf$ and $D_Sf$ are non-resonant.
Then the Sternberg Theorem \cite{S58} implies the existence of:
\begin{itemize}
\item neighborhoods $V'_S\subset V_S:= f(V'_S)$ of $S$ and coordinates for which $f|V'_S$ has the form:
\[ (x,y)\in V'_S\mapsto (\sigma_{uu}^{-1}\cdot x, \sigma_u^{-1}\cdot y)\in V_S \quad \text{with } 0< \sigma_{uu} < \sigma_u <1\;
.\]
\item neighborhoods $V'_P$ and $V_P:= f(V_P')$ of $P$ and coordinates for which $f|V'_P$ has the form:
\[ (x,y)\in V'_P\mapsto (\sigma^{-1}\cdot x, \lambda^{-1}\cdot y)\in V_P\quad \text{ with } 0< \sigma <1< \lambda \;
.\]
\end{itemize}
This defines the inverse branches $\cP:= (f|V_P')^{-1}$ and
$\cS:= (f|V_S')^{-1}$:
\[\cS: (x,y)\in V_S\mapsto (\sigma_{uu} \cdot x, \sigma_u \cdot y)\in V_S'\quad \mathrm{and}\quad \cP: (x,y)\in V_P\mapsto (\sigma \cdot x, \lambda \cdot y)\in V_P'\; .\]
Up to restricting $V_P$ and $V_P'$ and rescaling the coordinates, we can assume:
\[V_P'\equiv [-\sigma,\sigma]\times [-1,1]\quad \mathrm{and}\quad
V_P\equiv [-1,1]\times [-\lambda^{-1},\lambda^{-1}]
\; .\]
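(These normalizations are consistent: in the linearizing coordinates, $f(V_P')=[-1,1]\times[-\lambda^{-1},\lambda^{-1}]=V_P$.)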
Let $W^u_{loc} (P):= V_P\cap \{y=0\} $, $W^s_{loc} (P):= V'_P\cap \{x=0\} $ and $W^{uu}_{loc}(S):= \{y=0\} \cap V_S$.
Let $H$ be a point in $ W^s(P)\cap W^{uu}(S)$. Up to replacing it by an iterate, we can assume that $H$ belongs to $V'_P$ with $H=:(0,h)$ in the linearizing coordinates of $P$. Up to
conjugating the dynamics by $(x,y)\mapsto (x,-y)$, we can assume moreover that $h>0$.
Also, a preimage $S'$ of $S$ has coordinates $S' =:(s,0)$ in the linearizing coordinates of $P$:
\[S'\equiv (s,0)\quad \mathrm{and}\quad H\equiv (0,h)\;, \quad h>0\; .\]
Furthermore up to a smooth perturbation, we can assume that:
\begin{enumerate}[$(T_1)$]
\item\label{T1} The intersection $ W^s(P)\cap W^{uu}(S)$ is transverse at $H$.
\item\label{T2} The line $ T_S W^u(P)$ is in direct sum with the weak unstable direction $E^{cu}$ of $S$.
\item The line $ T_S W^u(P)$ is in direct sum with the strong unstable direction $E^{uu}$ of $S$.
\end{enumerate}
Let $V''_S \Subset V'_S$ and $V_H \Subset V'_P$ be small neighborhoods of $S$ and $H$;
and let $\TS: V_S''\hookrightarrow V_P$ and $\TH: V_H\hookrightarrow V_S$ be inverse branches
of iterates of $f$ such that $\TS(S)=S'$ and $\TH(H)\in W^{uu}_{loc}(S)$.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=11cm]{def_inversebranches_hetero.pdf}
\caption{Inverse branches $\cS, \cP, \TS, \TH$ induced by the heterocycle. \label{fig:def_inversebranches}}
\end{center}
\end{figure}
\subsection{Basic sets induced by a heterocycle}\label{section basic}
We now build two hyperbolic sets: an expanding projectively hyperbolic set containing the source, and a saddle hyperbolic set containing the saddle.
\subsubsection{Proof of \cref{Repu}: expanding Cantor set linked to the heterocycle} \label{s.repu}
Note that for $n$ large, the point $(s, \lambda^{-n} h)$ belongs to the range of $\TS$.
We perturb $f$ near the point $S'$ and define a map $\tilde f$
which satisfies in coordinates \[\tilde f(s, \lambda^{-n} h)=f(S').\]
This in turn defines a perturbation $\widetilde \TS$ of the inverse branch $\TS$.
As the point $(s, \lambda^{-n} h)$ is sent by $\cP^n$ to the point $(\sigma^n \cdot s, h)\in V_H$, the map $\TH\circ \cP^n \circ \widetilde \TS$ is well defined on a neighborhood $W$ of $S$. Hence for $N$ large compared to $n$, the maps
$\cS_1:=\cS^N\circ \TH\circ \cP^n \circ \widetilde \TS$
and $\cS_2:= \cS^N$ are contractions from $W$ into $ W$ with disjoint images.
So they define a transitive expanding Cantor set $R$ for $\tilde f$ which contains $S$.
Let us fix $\eta>0$ small and introduce the cone
$\mathcal{C}:=\{(u,v):\; |u|<\eta |v|\}$.
Using $(T_1)$, $(T_2)$ and assuming that $n,N$ have been chosen large enough,
for any $x\in W$, the maps $D_x\cS_1$ and $D_x\cS_2$ send $\overline {\mathcal{C}}$
inside $\mathcal{C}\cup\{0\}$.
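For the linear branch $\cS_2=\cS^N$, the cone condition can be checked directly; the branch $\cS_1$ is handled in the same way, using $(T_1)$, $(T_2)$ and the fact that $N$ is large compared to $n$ to control the bounded intermediate factors. Indeed, for $(u,v)\in \overline{\mathcal{C}}\setminus\{0\}$,
\[D\cS_2=\begin{pmatrix}\sigma_{uu}^N&0\\0&\sigma_u^N\end{pmatrix}
\qquad\text{and}\qquad
|\sigma_{uu}^N\, u|\;\leq\;\Big(\tfrac{\sigma_{uu}}{\sigma_u}\Big)^{N}\eta\,|\sigma_u^N\, v|\;<\;\eta\,|\sigma_u^N\, v|,\]
since $0<\sigma_{uu}<\sigma_u$, so that $D\cS_2(\overline{\mathcal{C}})\subset \mathcal{C}\cup\{0\}$.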
The cone field criterion (see for instance~\cite{yoccoz-hyperbolic}) implies that the Cantor set $R$ is projectively hyperbolic. \cref{Repu} is proved.
\qed
\subsubsection{Basic sets linked to the heterocycle}\label{Horseshoes linked to the heterocycle}
The heterocycle configuration implies under the transversality assumptions $(T_1)$ and $(T_2)$
that the saddle $P$ has a transverse homoclinic intersection.
\begin{lemma} \label{existence of Q}
For all $n$ large, the subsegment:
\[W^s_{loc} (\bar H):=\TS\circ \cS^n\circ \TH( W^s_{loc} (P) \cap V_H)\]
of $W^s (P)$ intersects transversally the local unstable manifold $W^u_{loc} (P)$ at a point $\bar H$ which is $\asymp \sigma_{uu}^n$-close to $S'$. The endpoints of $W^s_{loc} (\bar H)$ are $\asymp \sigma_u^n$-distant from $W^u_{loc} (P)$.
\end{lemma}
\begin{proof}
Let $\Gamma:= W^s_{loc} (P)\cap V_H$. This curve is sent by $\TH$ to a curve which intersects transversally $W^{uu}_{loc}(S)$ by $(T_1)$. By projective hyperbolicity, the image by $\cS^n$ of $\TH(\Gamma)$ is a curve
which is tangent to a thin vertical cone field, which is $\asymp\sigma_{uu}^n$-close to $S$ and which has
length $\asymp \sigma_u ^n$. As $(\TS)^{-1}(\{y=0\})$ intersects transversally
$\{x=0\}\cap V_S$ at $S$ by $(T_2)$, it must intersect transversally $\cS^n\circ \TH(\Gamma)$ for $n$ large. Consequently the curve $\TS\circ \cS^n\circ \TH(\Gamma)$ intersects the local unstable manifold $\{y=0\}\cap V_P$ of $P$.
\end{proof}
By Smale's horseshoe theorem (see~\cite[chapter 2]{PT93}), one deduces:
\begin{coro}\label{c.basic}
There exists a basic set $K$ containing $P$ and $\bar H$.
\end{coro}
We will make it more precise.
If $N$ is large, $K$ can be
spanned by the inverse branches
\[\mathcal{G}_1:=\cP^N \quad\text{and}\quad \mathcal{G}_2:= \TS\circ \cS^n\circ \TH\circ \cP^N.\]
\begin{figure}[h!]
\begin{center}
\includegraphics[width=11cm]{def_inversebranches_hetero_closing.pdf}
\caption{The box $B$ and its images.
\label{fig:def_inversebranches_closing}}
\end{center}
\end{figure}
Let $\varepsilon>0$ be small enough so that $\{0\}\times [h-\varepsilon, h+\varepsilon]$ is included in $V_H$ and let
(see \cref{fig:def_inversebranches_closing}):
\[ B:= [-1,1]\times \left[\frac{ h-\varepsilon}{ \lambda^{N}},\frac{ h+\varepsilon}{ \lambda^{N}}\right]
\; .\]
\begin{lemma}\label{condition eigen B} For all $n, N$ large, the map $\mathcal{G}_2$ is well defined on $B$. If $\varepsilon\sigma^n_u\lambda^{N}\gg 1$, the map $\mathcal{G}_2$ displays a saddle fixed point $\bar Q$ in $B\cap K$, which is homoclinically related to $P$.
\end{lemma}
\begin{proof}
The box $B$ is sent by $\cP^N$ to $ (0,h)+ [-\sigma^N,\sigma^N]\times [-\varepsilon,\varepsilon]$ which is included in $V_H$ for $N$ large enough. As $\cS^n\circ \TH (V_H)$ is included in $V_S''$ for $n$ large enough, the map $\mathcal{G}_2$ is well defined on $B$.
Let us decompose the boundary of $B$:
\[\partial^s B:= \{-1,1\}\times \left[\frac{ h-\varepsilon}{ \lambda^{N}},\frac{ h+\varepsilon}{ \lambda^{N}}\right]
\quad \mathrm{and}\quad \partial^u B= \partial B\setminus \partial^s B.\]
Both curves of $\cS^n\circ \TH\circ \cP^N(\partial^s B)$ are
$\sigma_{uu}^n$-close to the vertical arc $W^c_{loc} (S):=\{0\}\times [-1,1]$
and their endpoints are $\asymp\varepsilon\cdot \sigma_u^n$-distant from $W^{uu}_{loc} (S)$, by the transversality
$(T_1)$ at $\TH(H)$ and by the projective hyperbolicity of $S$. Thus they intersect transversally $(\TS)^{-1}(W^u _{loc} (P))$ by property $(T_2)$.
Consequently $\mathcal{G}_2(B)$ intersects
$W^u _{loc} (P)$, and $\mathcal{G}_2(\partial^u B)$ is $\asymp\varepsilon\cdot \sigma_u^n$-distant from $W^u _{loc} (P)$.
By assumption, $\lambda^{-N}$ is small compared to $\varepsilon \cdot \sigma^n_u$; hence
$\mathcal{G}_2(B)$ crosses $B$: it does not meet the vertical boundary $\partial^s B$,
whereas $B$ does not meet the horizontal boundary $\mathcal{G}_2(\partial^u B)$.
Thus $\mathcal{G}_2$ displays a fixed point $\bar Q$ in $B\cap \mathcal{G}_2(B)$.
Note that $D \mathcal{G}_2$ expands vectors in a vertical cone by a factor $\asymp \lambda^N\cdot \sigma_u^n$, which is large, and the images of these vectors are uniformly transverse to the horizontal. On the other hand, by projective hyperbolicity, $D \mathcal{G}_2^{-1}$ sends the vectors in a horizontal cone to uniformly horizontal vectors and expands them by a factor $\sigma^{-n}_{uu} \cdot \sigma^{-N}$.
The point $\bar Q$ is a saddle: its local unstable manifold is a horizontal graph in $B$ over $[-1,1]$,
whereas its local stable manifold connects the two curves in $\mathcal{G}_2(\partial^u B)$
and so crosses the horizontal $W^u_{loc}(P)$.
This shows that $\bar Q$ and $P$ are homoclinically related, as required.
\end{proof}
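Schematically, the rates used in the previous proof can be read off from the chain rule: ignoring the bounded derivatives of the transition maps $\TS$ and $\TH$,
\[D\mathcal{G}_2\;\approx\;\begin{pmatrix}\sigma_{uu}^{n}&0\\0&\sigma_u^{n}\end{pmatrix}\cdot\begin{pmatrix}\sigma^{N}&0\\0&\lambda^{N}\end{pmatrix},\]
so vertical vectors are multiplied by a factor $\asymp\sigma_u^{n}\lambda^{N}\gg 1$ and horizontal vectors by a factor $\asymp\sigma_{uu}^{n}\sigma^{N}\ll 1$; this is the saddle behavior of $\bar Q$ exhibited above.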
\subsubsection{Replacement of the saddle point}
\label{replacement}
Let us consider a saddle periodic point $Q$ homoclinically related to $P$.
The following lemma allows us to replace the saddle $P$ by $Q$ in the heterocycle.
\begin{lemma}\label{l.replace}
Let $Q$ be a periodic saddle point that is homoclinically related to $P$.
Then there exists a map $\tilde f$ that is $C^\infty$ close to $f$
such that $S$ and $Q$ form a heterocycle.
One can choose $\tilde f$ to coincide with $f$ outside an arbitrarily small neighborhood of $f^{-1}(S)\setminus \{S\}$. In particular if $W^{uu}(S;f)$ contains $Q$, then $S$ and $Q$ form a strong heterocycle for $\tilde f$.
\end{lemma}
\begin{proof}
By assumption, there exists a point $z\in W^{u}(P)\cap f^{-1}(S)\setminus \{S\}$.
Let $Q_{-1}$ be the forward iterate of $Q$ which satisfies $f(Q_{-1})=Q$.
Since $Q_{-1}$ is homoclinically related to $P$, there exists $z'\in W^{u}(Q_{-1})$
arbitrarily close to $z$ having a backward orbit which converges to the orbit of $Q$
and which avoids a uniform neighborhood of $z$.
Hence, there exists a $C^\infty$-small perturbation of $f$ supported on a small neighborhood of $z$
satisfying $\tilde f(z')=f(z)$. In particular $W^u(Q)$ contains $S$.
\end{proof}
We state a parametric version of the previous lemma.
\begin{lemma}\label{l.replace-para}
Consider a $C^\infty$ family $(f_a)_{a\in \mathbb R}$ in $\mathrm{Diff}loc^\infty(U,M)$,
and, for $r\geq 1$, families of saddles $(P_a)_{a\in \mathbb R}$ and of projectively hyperbolic sources $(S_a)_{a\in \mathbb R}$ exhibiting a $C^r$-paraheterocycle at $a=0$.
If $(Q_a)_{a\in \mathbb R}$ is a family of saddles homoclinically related to $(P_a)_{a\in \mathbb R}$,
then there exists $(\tilde f_a)_{a\in \mathbb R}$, $C^\infty$-close to $( f_a)_{a\in \mathbb R}$
such that $Q_0$ and $S_0$ form a $C^r$-paraheterocycle at $a=0$.
One can choose $(\tilde f_a)_{a\in \mathbb R}$ to coincide with $( f_a)_{a\in \mathbb R}$ outside an arbitrarily small neighborhood of $f^{-1}_0(S_0)\setminus \{S_0\}$. Hence if $Q_0\in W^{uu}(S_0;f_0)$, then $S_0$ and $Q_0$ form a strong heterocycle for $\tilde f_0$.
\end{lemma}
\begin{proof}
Let $(K_a)_{a\in I}$ be a basic set that contains $P_a$ and $Q_a$ for $a$ in a neighborhood $I$ of $0$.
Let $\underline P$ and $\underline Q$ be the periodic lifts of $P$ and $Q$ in $\overleftarrow K$.
By assumption, there exists a choice of local unstable manifolds $W^u_{loc}(\underline z,f_a)$
for $\underline z\in \overleftarrow K$ and $N\geq 1$ such that
$d(S_a,f^N_a(W^{u}_{loc}(\underline P_a)))=o(|a|^r)$.
Since $P$ and $Q$ are homoclinically related, there exists a sequence
of points $\underline z_n\in \overleftarrow K$ which converges to $\underline P$
and which belong to $W^u_{loc}(\underline Q)$.
Since $W^u_{loc} (\underline z; f_a) $ varies continuously with $\underline z$ for the $C^\infty$-topology,
when $n$ is large there exists a family $(\tilde f_a)_{a\in \mathbb R}$, which is $C^\infty$-close to $( f_a)_{a\in \mathbb R}$,
such that $d(S_a,\tilde f^N_a(W^{u}_{loc}(\underline z_{n},\tilde f_a)))=o(|a|^r)$.
There exists a large integer $\ell\geq 1$ such that
$\tilde f^N_a(W^{u}_{loc}(\underline z_{n},\tilde f_a))\subset \tilde f^\ell_a(W^{u}_{loc}(\underline Q_a,\tilde f_a))$,
hence $d(S_a,\tilde f^\ell_a(W^{u}_{loc}(\underline Q_a,\tilde f_a)))=o(|a|^r)$ as in the definition of $C^r$-paracycle.
Note that the perturbation can be supported
on a neighborhood of a point in $f_0^{-1}(S_0)\setminus \{S_0\}$.
\end{proof}
\subsection{Proof of \cref{pingpong}: from heterocycles to strong heterocycles}
\label{proofpingpong}
The main step in the proof of \cref{pingpong} is contained in the following lemma.
\begin{lemma}\label{l.pingpong}
Let us assume that both stable branches of $P$ intersect $W^u(P)$ transversally.
Then there exists a map $\tilde f$, $C^\infty$-close to $f$,
with a saddle $Q$ homoclinically related to $P_{\tilde f}$
such that:
\begin{itemize}
\item $f$ and $\tilde f$ coincide on $W^u_{loc}(P)$ and outside a small neighborhood of $P$,
\item $W^{uu}(S,\tilde f)$ contains $Q$.
\end{itemize}
\end{lemma}
\begin{proof}
From $(T_1)$,
the curves $W^{uu}_{loc}(S)$ and $\cS^n\circ \TH( W^s_{loc} (P) \cap V_H)$ intersect transversally
at a point whose image under $\TS$ is denoted by $[S',\bar H]$, see~\cref{fig:two-cases}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=15cm]{Case12.pdf}
\caption{The two cases for the position of $[S',\bar H]$.
\label{fig:two-cases}}
\end{center}
\end{figure}
We can reduce to the case depicted on the left part of \cref{fig:two-cases},
where $[S',\bar H]$ belongs to the upper half-plane $\{y>0\}$ (for the chart of $V_P$).
Indeed if we are in the other case (depicted on the right part of \cref{fig:two-cases}), we use the fact that the stable branch
$\{0\}\times [-1,0]$ of $P$ has backward iterates which accumulate on $W^s_{loc}(P)$
in order to replace $H$ by a point $H'=(0,h')$, $h'<0$, which is a transverse intersection
between $W^s(P)$ and $W^{uu}(S)$. The new point $[S',\bar H']$ is close to $[S',\bar H]$,
hence belongs to the lower half-plane.
It remains to conjugate the chart by $(x,y)\mapsto (x,-y)$ in order to find the desired configuration.
Let us consider some large integers $n,N$,
the map $\mathcal{G}_2$ and the box $B$ defined in Section~\ref{Horseshoes linked to the heterocycle}.
The transversality conditions $(T_2)$ and $(T_3)$ imply that
$\TH(W^{uu}_{loc}(S))$ crosses the box $\mathcal{G}_2(B)$ along
a small curve whose vertical coordinate belongs to an interval
$[c_1\cdot \sigma_{uu}^n, c_2\cdot \sigma_{uu}^n]$, where $c_1,c_2$
are independent of the choice of $n,N$.
We choose $n,N$ such that
\begin{equation}\label{choice-nN}
(h-\varepsilon)\lambda^{-N-1}< c_2 \sigma_{uu}^n<(h-\varepsilon)\lambda^{-N}.
\end{equation}
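Let us briefly justify that such a pair $(n,N)$ exists and is compatible with Lemma~\ref{condition eigen B}: since $c_2\,\sigma_{uu}^n$ decreases to $0$ and the intervals $\big[(h-\varepsilon)\lambda^{-N-1},(h-\varepsilon)\lambda^{-N}\big)$, $N\geq 1$, cover a neighborhood of $0$ in $(0,+\infty)$, for every large $n$ one may take $N=N(n)$, with $N(n)\to\infty$, satisfying~\eqref{choice-nN}; moreover the left inequality in~\eqref{choice-nN} gives
\[\varepsilon\,\sigma_u^n\,\lambda^{N}\;>\;\frac{\varepsilon\,(h-\varepsilon)}{c_2\,\lambda}\,\Big(\frac{\sigma_u}{\sigma_{uu}}\Big)^{n}\;\longrightarrow\;+\infty
\qquad (n\to\infty),\]
since $0<\sigma_{uu}<\sigma_u$.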
In particular the condition $\varepsilon\sigma^n_u\lambda^{N}\gg 1$ is satisfied
and Lemma~\ref{condition eigen B} associates a saddle point $\bar Q\in B$
whose vertical coordinate is in $[(h-\varepsilon)\lambda^{-N}, (h+ \varepsilon)\lambda^{-N}]$.
By the previous estimates, $\bar Q$ is ``above'' the graph $\TH(W^{uu}_{loc}(S))$.
Now we consider a family $(f_t)_{t\in [0,1]}$ such that $f_0=f$, and for every $t$, the restrictions of $f_t$ to
$W^u_{loc}(P)$ and to the complement of a neighborhood of $V_P'$ coincide with $f$, while the restriction of the map $f_t|V_P'$ is still linear with eigenvalues $(\lambda_t, \sigma)$ such that:
\[\lambda_t= \frac{\lambda}{ \sqrt[N]{ 1+t \cdot (C-1)}}\quad \text{ with } C= \frac{c_1}{c_2}\frac{h-\varepsilon}{h+\varepsilon} \cdot \lambda^{-1} \; .\]
Note that $(f_t)_{t\in [0,1]}$ is a smooth family which is $C^\infty$-close to being constantly equal to $f$ since $n$ is large. The maps $\cS$, $\TS$, $\TH$ are unchanged, while $\cP_t:= (x,y)\in V_P\mapsto (\sigma \cdot x, \lambda_t \cdot y) $ depends on $t$.
Any map of this family satisfies the assumptions of \cref{Horseshoes linked to the heterocycle}. Let $(\bar Q_t)_{t\in [0,1]}$ be the hyperbolic continuation of $\bar Q$.
The vertical coordinate of $\bar Q_1$ is bounded by
\[(h+\varepsilon)\lambda_1^{-N}= C \cdot (h+\varepsilon)\cdot \lambda^{-N}= (h-\varepsilon)\cdot \lambda ^{-N-1}
\frac{c_1}{c_2}\; .\]
From~\eqref{choice-nN}, it is smaller than $c_1\cdot \sigma_{uu}^n$, hence $\bar Q_1$ is ``below''
the graph $\TH(W^{uu}_{loc}(S))$.
One deduces that there exists a parameter $t\in[0,1]$ such that
$\bar Q_t$ belongs to $\TH(W^{uu}_{loc}(S))$.
This implies that $\bar Q_t$ has an iterate $Q$ which belongs to $W^{uu}_{loc}(S)$.
\end{proof}
\begin{proof}[Proof of \cref{pingpong} in the $C^\infty$ case]
One considers a basic set $K$ provided by Corollary~\ref{c.basic}.
It contains a periodic saddle $P'$ homoclinically related to $P$ such that both of its stable branches intersect $W^{u}(P')$ transversally.
Lemma~\ref{l.replace} allows us, by a first perturbation $\tilde f_1$, to replace $P$ by the saddle $P'$,
so that the assumptions of Lemma~\ref{l.pingpong} are satisfied.
One can then build a new perturbation $\tilde f_2$ such that $W^{uu}(S,\tilde f_2)$ contains a saddle
$Q$ which is homoclinically related to $P$ and $P'$, whereas the heterocycle
between $S$ and $P'$ is not destroyed (since the perturbation does not modify $S$ nor $W^u_{loc}(P')$).
After a third perturbation $\tilde f_3$ provided by Lemma~\ref{l.replace},
a strong heterocycle between $Q$ and $S$ is obtained.
\end{proof}
\subsection{Proof of \cref{pingpong} in the analytic case}
Now we assume $f\in \mathrm{Diff}loc^\omega(U,M)$ and as before $f$ displays a heterocycle between a saddle $P$
and a source $S$. To prove \cref{pingpong} in the analytic case, it suffices to show the following counterparts of
\cref{l.replace,l.pingpong}.
\begin{lemma}\label{l.replace-omega}
Let $Q$ be a periodic saddle point that is homoclinically related to $P$.
Then there exists a map $\tilde f$ that is $C^\omega$ close to $f$
such that $S$ and $Q$ form a heterocycle.
If $W^{uu}(S;f)$ contains $Q$, then, one can choose $\tilde f$ so that
$S$ and $Q$ form a strong heterocycle.
\end{lemma}
\begin{lemma}\label{l.pingpong-omega}
Let us assume that both stable branches of $P$ intersect $W^u(P)$ transversally.
Then there exists a map $\tilde f$, $C^\omega$-close to $f$,
with a saddle $Q$ homoclinically related to $P_{\tilde f}$
such that:
\begin{itemize}
\item $W^{uu}(S,\tilde f)$ contains $Q$.
\item $W^u(P,\tilde f)$ contains $S$.
\end{itemize}
\end{lemma}
\begin{proof}[Proof of \cref{l.replace-omega}]
First recall that $M$ is analytically embedded into a Euclidean space $\mathbb R^N$, see \cite{Gr58}. Hence
there exists an analytic retraction $\pi : U\to M$ of a neighborhood $U$ of $M$ in $\mathbb R^N$.
Let $W^u_{loc}(P)$ be a local unstable manifold of $P$ which contains $S$ in its interior and
let $S'\neq S$ be a point in $W^u_{loc}(P)$ such that $f(S')=S$.
Let $V_{S'}$ be a small neighborhood of $S'$ such that
the backward orbit of $S'$ inside $W^u_{loc}(P)$ meets $V_{S'}$ only at $S'$.
One takes an analytic chart $\phi: V_{S'}\to [-1,1]^2$ sending
$S'$ to $0$ and $V_{S'}\cap W^u_{loc}(P)$ to $[-1,1]\times \{0\}$.
Now consider a $C^\infty$-family $(f_{p})_{p \in [-\varepsilon,\varepsilon]} $ such that $f_0=f$ and each $f_p$ is equal to $f$ outside $V_{S'}$ while on a smaller neighborhood of $S'$, the map $f_p$ coincides with the composition of $f$ with a translation of vector $(0,p)$. In particular the continuation of $W^u_{loc}(P)$ for $f_p$ inside $V_{S'}$ is equal to $W^u_{loc}(P)$, while the continuations $S_p$ of $S$
and of its preimage $S'_p=f^{-1}(S)\cap V_{S'}$ satisfy that
$\partial_p S'_p|_{p=0}$ has non-zero second coordinate.
Remark that $\chi := Df^{-1} \circ (\partial_p f_p|_{p=0}) $ is a smooth vector field defined on the compact subset $\bar U\subset \mathbb R^N$.
Then by the Stone--Weierstrass theorem,
there exists a polynomial vector field $\tilde \chi$ with entries in $\mathbb R[X_1, \dots, X_N]$ whose restriction to $\bar U$ is
$C^1$-close to $\chi$. Also, by reducing $\varepsilon>0$, the following is well defined for any $|p|<\varepsilon$:
\[\tilde f_{p}:= x\in U\mapsto \pi\big( f(x) +p\cdot Df\circ \tilde \chi (x)\big)\; .\]
Note that $\partial_p \tilde f_p|_{p=0}=Df\circ \tilde \chi$ is
$C^1$-close to $\partial_p f_p|_{p=0}$. In particular
the hyperbolic continuation $(\tilde S'_p)_{p\in [-\varepsilon, \varepsilon]}$ of $S'$ for $(\tilde f_p)_p$ is a family
$C^1$-close to $(S'_p)_{p\in [-\varepsilon, \varepsilon]}$. Also the hyperbolic continuation $(W^u_{loc}(P, \tilde f_p))_{p\in [-\varepsilon, \varepsilon]}$ is a family of curves
$C^1$-close to the family constantly equal to $[-1,1]\times \{0\}$.
Hence, assuming that the $C^1$-size of the perturbation is small, the curve $\Gamma:= \bigcup_{p\in [-\varepsilon, \varepsilon]} \{\tilde S'_p\}\times \{p\}$ intersects transversally the surface
$\Sigma:= \bigcup_{p\in [-\varepsilon, \varepsilon]} W^u_{loc}(P, \tilde f_p)\times \{p\}$ at $\{S'\}\times \{0\}$.
By the inclination lemma with parameter, see \cite[Lemma 3.2]{berger2017emergence}, there exists a sequence $(W_{n,p})_n$ of $p$-families of segments $W_{n,p}\subset W^u(Q,\tilde f_p) $ such that the sequence of surfaces
$\Sigma_n := \bigcup_{p\in [-\varepsilon, \varepsilon]} W_{n,p}\times \{p\}$ converges to $\Sigma$ in the $C^1$-topology as $n\to \infty$.
Thus when $n$ is large, the curve $\Gamma$ intersects $\Sigma_n$ at a point close to $\{S'\}\times \{0\}$. Hence there is $p$ arbitrarily small such that the continuations of $S'$ and $Q$ form a heterocycle for $\tilde f_p$. This proves the first part of the lemma since $\tilde f_p$ is $C^\omega$-close to $f$ when $p$ is small.
In the second part of the lemma, the saddle $Q$ belongs to a local strong unstable manifold $W^{uu}_{loc}(S)$ of $S$
and one performs a similar construction.
Let $Q'\neq Q$ be a point in $W^{uu}_{loc}(S)$ which satisfies $f(Q')=Q$, let $V_{Q'}$ be a small neighborhood of $Q'$,
and consider a chart $\psi\colon V_{Q'}\to [-1,1]^2$ sending
$Q'$ to $0$ and $V_{Q'}\cap W^{uu}_{loc}(S)$ to $[-1,1]\times \{0\}$.
One considers a $C^\infty$ family of maps which are equal to $f$ outside $V_{Q'}$
and which coincide with the composition of $f$ with a translation of vector $(0,q)$ on a small neighborhood of $Q'$:
it induces a vector field $\xi$, that can be approximated by a polynomial vector field $\tilde \xi$.
Up to shrinking $\varepsilon>0$, for every $(p,q)\in [-\varepsilon, \varepsilon]^2$, the following is well defined:
\[\tilde f_{p,q}:= x\in M\mapsto \pi{\underline b}ig( f(x) +p\cdot Df\circ \tilde \chi (x)+q\cdot Df\circ \tilde \xi (x){\underline b}ig)\; .\]
Similarly we can consider the continuation $\tilde S_{p,q}$ of $S$, $\tilde Q_{p,q}$ of $Q$,
$W^u_{loc}(P, \tilde f_{p,q})$ of $W^u_{loc}(P, f)$, $W_{n,p, q}$ of $W_{n,p}$ and $W^u_{loc}(Q, \tilde f_{p,q})$ of $W^u_{loc}(Q)$.
From the first part of the proof,
$W^{u}_{loc}(P,\tilde f_{p,q})$ contains $\tilde S_{p,q}$ when $(p,q)$ belongs to graphs $\gamma_n$
that are arbitrarily $C^1$-close to the curve $p=0$ when $n\to \infty$.
By a similar argument, $W^{uu}_{loc}(S,\tilde f_{p,q})$ contains $\tilde Q_{p,q}$ when $(p,q)$
belongs to a one-dimensional submanifold $\sigma$ that contains $0$ and is $C^1$-close to the curve $q=0$.
In particular $\sigma$ is transverse to the graphs $\gamma_n$.
Thus the conclusion of the lemma holds for some map $\tilde f_{p,q}$
with $(p,q)\in\gamma_n\cap \sigma$ which is $C^\omega$-close to $f$
when $n$ is large and $p,q$ are small.
This implies the second part of the lemma. \end{proof}
\begin{proof}[Proof of \cref{l.pingpong-omega}]
The proof of \cref{l.pingpong} was obtained using a smooth family which changes the stable eigenvalue of $P$ without changing the relative position of $S$ with respect to $W^u_{loc}(P; f)$. To obtain the analytic setting, as above, we approximate this family by an analytic one and we add an extra parameter which varies the relative position of $S$ with respect to $W^u_{loc}(P; f)$. While the first parameter enables us to find a saddle $Q$ homoclinically related to $P$ such that $Q\in W^{uu}_{loc} (S)$, in the analytic setting this unfolding might also unfold the heterocycle. However, the new second parameter enables us to restore it. \end{proof}
\subsection{Proof of \cref{Ppingpong}: from paraheterocycles to strong paraheterocycles}
We follow the proof of \cref{pingpong} in the $C^\infty$ case.
After a first $C^\infty$-small perturbation of $f_0$ (and hence of the family $(f_a)_{a\in \mathbb R}$),
there exists a saddle $Q$ homoclinically related to $P$ which belongs to $W^{uu}_{loc}(S)$.
The paracycle property~\eqref{e.paracycle} between $S$ and $P$ may not hold anymore,
but by a new perturbation, with a similar size, it can be restored.
Note that it is supported near $f^{-1}(S){\mathfrak e}tminus \{S\}$, hence the property $Q\in W^{uu}_{loc}(S)$ is not destroyed.
Finally one applies Lemma~\ref{l.replace-para} and gets a $C^\infty$-small perturbation
of the family $(f_a)_{a\in \mathbb R}$ in order to get a strong $C^r$-paraheterocycle at $a=0$
between $S$ and $Q$.
\qed
\section{From chains of heterocycles to paraheterocycles}\label{s.heterocycles}
We prove \cref{P.GenGaraheteroPara} in this section:
an $N$-chain of alternate heterocycles
whose saddles are homoclinically related can be perturbed so as to yield a
$C^d$-paraheterocycle, provided that $N$ is large enough with respect to $d$. This is shown by induction on $d$.
The case $d=0$ follows from the continuity of the family (without any perturbation).
The induction step is given by:
\begin{proposition}\label{theprop}
Consider a $C^\infty$ family
$(f_a)_{a\in \mathbb R}$ in $\mathrm{Diff}loc^\infty(U,M)$ and $d\geq 0$
such that $f_0$ has a $2$-chain of alternate heterocycles with saddle points $P^1,P^2$ and sources $S^1,S^2$
such that $(P^1,S^1)$ and $(P^2,S^2)$ form two $C^d$-paraheterocycles at $a=0$.
Then there is a $C^\infty$-perturbation of $(f_a)_{a\in \mathbb R}$ such that the continuation of $(P^1,S^2)$ forms a $C^{d+1}$-paraheterocycle at $a=0$.
Moreover the perturbation is supported on a small neighborhood of
$ \text{\rm orbit}(S^1)\cup \text{\rm orbit}(S^2)$.
\end{proposition}
\begin{proof}[Proof of \cref{P.GenGaraheteroPara}]
One considers a $2^d$-chain of alternate heterocycles with
periodic points $P^1,S^1,\dots,P^{2^d},S^{2^d}$. \cref{theprop}
allows us to perform a perturbation at $ \text{\rm orbit}(S^1)\cup \text{\rm orbit}(S^2) $,
such that $P^1$ and the continuation of $S^2$ form a $C^1$-paraheterocycle.
Note that $P^1,S^2,P^3,S^3,\dots,P^{2^d},S^{2^d}$ is still a chain of alternate heterocycles.
By induction, one gets a $2^{d-1}$-chain of alternate heterocycles
$P^1,S^2,\dots,P^{2^d-1},S^{2^d}$ such that $P^{2i+1},S^{2i+2}$ form a $C^1$-paraheterocycle at $a=0$,
for each $0\leq i< 2^{d-1}$.
By a new perturbation supported near the sources, one gets
a $2^{d-2}$-chain of alternate heterocycles
$P^1,S^4,\dots,P^{2^d-3},S^{2^d}$ such that each pair $P^{4i+1},S^{4i+4}$ forms a $C^2$-paraheterocycle at $a=0$. Repeating this construction inductively,
one gets a $C^d$-paraheterocycle at $a=0$ between $P^1$ and the continuation of $S^{2^d}$.
\end{proof}
\cref{theprop} is proved in the next two subsections.
In \cref{ss.k-para} we discuss the case where there are several parameters.
\subsection{Notations and local coordinates}
The setting is similar to \cref{Local coordinate for a heterocycle} and is depicted in \cref{notation}.
We choose a large integer $r$ and a small number $\varepsilon>0$, and we look for a smooth perturbation of
$(f_a)_{a\in \mathbb R}$ which is $\varepsilon$-$C^r$-small and such that the continuation of $(P^1,S^2)$ forms a $C^{d+1}$-paraheterocycle at $a=0$.
As in \cref{s.strong} we shall assume that the points $P^2$ and $S^1$
are fixed.
We denote by $|\sigma_a|<1$ and $ \lambda_a<-1$
(resp. by $|\sigma^{uu}_a| < |\sigma_a ^u| < 1$)
the inverse of the eigenvalues of the tangent map of $f_a$ at $P^2_a$ (resp. at $S^1_a$).
After a small perturbation we can assume that the eigenvalues are non-resonant and:
\[\frac {\log|\sigma^u_0|}{\log|\lambda_0|}\in \mathbb R\setminus \mathbb Q\; .\]
Then by \cite{Ta71}, there exist:
\begin{itemize}
\item neighborhoods $V'_S(a)\subset V_S(a):= f_a(V'_S(a))$ of $S^1_a$ endowed with coordinates depending in a $C^r$ way on the parameter and for which the inverse branch $\cS_a:= (f_a|V_S')^{-1}$ has the form:
\[\cS_a: (x,y)\in V_S\mapsto (\sigma^{uu}_a \cdot x, \sigma^u_a \cdot y)\in V_S'\]
\item neighborhoods $V'_P(a)$ and $V_P(a):= f_a(V_P'(a))$ of $P_a^2$ endowed with coordinates depending in a $C^r$ way on the parameter and for which the inverse branch has the form:
\[ \cP_a: (x,y)\in V_P(a)\mapsto (\sigma_a \cdot x, \lambda_a \cdot y)\in V_P'(a)\; .\]
\end{itemize}
Up to restricting $V_P$, $V_P'$ and $V_S$, we can assume that they are filled rectangles containing $0$ in their interior.
We define:
\[W^u_{loc} (P^2_a)\equiv V_P(a)\cap \{y=0\} \; ,\quad W^s_{loc} (P^2_a)\equiv V'_P(a)\cap \{x=0\}\quad \mathrm{and}\quad W^{uu}_{loc}(S^1_a)\equiv \{y=0\} \cap V_S(a)\; .\]
Let $H_0$ be a point in $ W^s(P_0^2)\cap W^{uu}(S_0^1)$. Up to replacing it by an iterate, we can assume that $H_0$ belongs to $V'_P(0)$ with $H_0\equiv (0,h_0)$ in the linearizing coordinates of $P_0^2$.
Also, a preimage $S'^2_a$ of $S^2_a$ by an iterate of $f_a$ has coordinates $S'^2_a =:(x_a,y_a)$ in the linearizing coordinates of $P^2_a$. Let $\cT_a: V_H\hookrightarrow V_S$ be an inverse branch of an iterate of $f_a$ defined on a neighborhood $V_H \Subset V'_P$ of $H_0$ and such that $\cT_0$ sends $H_0$ into $W^{uu}_{loc}(S^1_0)$.
Up to a smooth perturbation, one can require that:
\begin{enumerate}[$(T_1)$]
\item $W^u(P^1)$ is transverse to $E^{cu}_{S^1}$ at $S^1$,
\item $W^{uu}_{loc} (S^1_0)$ and $W^s_{loc}(P^2_0)$ are transverse at $H_0$.
\end{enumerate}
By $(T_1)$, $W^u(P^1_a)$ contains a graph in the chart at $S^1_a$, over a neighborhood
$I\subset \mathbb R$ of $0$:
\[\Gamma_a \equiv \{(x, \gamma_a(x));\; x\in I\} .\]
By $(T_2)$, the transverse intersection $H_0$ admits a continuation $H_{a}$ for $a$ close to $0$.
One sets
$$H_a\equiv (0,h_a) \quad \mathrm{and}\quad \cT_a(H_a)=(z_a, 0)
\, .$$
Since $(P^1,S^1)$ and $(P^2,S^2)$ form two $C^d$-paraheterocycles at $a=0$,
one has for any $0\leq k\leq d$,
\[\partial^k_a \gamma_a(0)|_{a=0}=0
\quad \text{and} \quad \partial^k_a y_a|_{a=0}=0.\]
Up to a small perturbation,
one can also assume that
$$\partial^{d+1}_a \gamma_a(0)|_{a=0}{\underline n}eq 0
\quad \text{and} \quad \partial^{d+1}_a y_a|_{a=0}{\underline n}eq 0.$$
Figure~\ref{notation} summarizes the notation.
\begin{figure}[h]
\centering
\includegraphics[width=12cm]{composition3.pdf}
\caption{Notations.}
\label{notation}
\end{figure}
\subsection{Compositions nearby a paraheterocycle}
Let $\Delta$ be the second coordinate of $ \partial_y \cT_0(H_0)$; it is nonzero by $(T_2)$.
\begin{lemma}\label{l.composition}
Given integers $n,m\geq 1$ large such that $(\sigma^{u}_0)^n\lambda_0^m=O(1)$, there is a $C^r$-perturbation of $(f_a)_a$ localized at $S^2_a$ such that the germ at $a=0$ of $a\mapsto \cS_a^n\circ \cT_a\circ \cP_a^m(S'^2_a)$ is $C^{d+1}$-close to
\[
\left( 0,(\sigma^{u}_0)^n\cdot \Delta \cdot \lambda^m_0 \cdot \frac{ \partial_a^{d+1} y_a|_{a=0} }{(d+1)!} a^{d+1} \right).\]
\end{lemma}
\begin{proof}
For $m$ large, after a $C^r$-small perturbation localized at $S_a^2$ (which is conjugated to a translation in a small neighborhood of $S_a^2$), we can assume $S'^2_a=(x_a, y_a+\varepsilon_a)$ where $a\mapsto \varepsilon_a$ is the $C^\infty$-small function defined by $\varepsilon_a := \lambda_a^{-m}\cdot h_a$ and where as before $(x_a,y_a)$ are the coordinates of $S'^2_a$ before the perturbation.
Then observe that
$\cP_a^m(S'^2_a)= H_a+(\sigma_a^{m} \cdot x_a, \lambda_a^m \cdot y_a)$ forms a family whose germ at $a=0$ is $C^{d}$-close to $(H_a)_a$.
When $m$ is large, the germ at $a=0$ of $a\mapsto \cT_a\circ \cP_a^m(S'^2_a)$ is $C^{d+1}$-close to
\[\cT_a(H_a)+ D_{H_a} \cT_a \left( \sigma_a^{m} \cdot x_a, \lambda^m_a \cdot \frac{ \partial_a^{d+1} y_a|_{a=0} }{(d+1)!} a^{d+1} \right)
\]
and so $C^{d+1}$-close to
\[(z_a, 0)+ \partial_y \cT_0(H_0) \cdot \lambda^m_a \cdot \frac{ \partial_a^{d+1} y_a|_{a=0} }{(d+1)!} a^{d+1}
\; .\]
Consequently, for any $n\ge 0$, the germ at $a=0$ of $a\mapsto \cS_a^n\circ \cT_a\circ \cP_a^m(S'^2_a)$ is $C^{d+1}$-close to
\[((\sigma_a^{uu})^n\cdot z_a, 0)+ \mathrm{diag }((\sigma^{uu}_a)^n,(\sigma^{u}_a)^n )\cdot \partial_y \cT_0(H_0) \cdot \lambda^m_a \cdot \frac{ \partial_a^{d+1} y_a|_{a=0} }{(d+1)!} a^{d+1}
\; .\]
If $(\sigma^{u}_0)^n\lambda_0^m=O(1)$, then both $(\sigma_a^{uu})^n$ and $(\sigma^{uu}_0)^n\lambda_0^m$ are small, and so we obtain the announced bound.
\end{proof}
Since the ratio $\log|\sigma^u_0|/\log|\lambda_0|$ is irrational,
and since $\partial^{d+1}_a \gamma_a(0)|_{a=0}{\underline n}eq 0$
and $\partial^{d+1}_a y_a|_{a=0}{\underline n}eq 0$,
one can choose some large positive integers $n,m$ such that
$$n\log|\sigma^u_0|+m\log|\lambda_0|-\log|\Delta|+
\log|\partial^{d+1}_ay_a|_{a=0}
$$
is arbitrarily close to $\log|\partial^{d+1}_a\gamma_a(0)|_{a=0}$. Since $\lambda_0$ is negative, one can choose $m$ to be odd or even
so that
$\Delta\cdot (\sigma_0^u)^n(\lambda_0)^m\,\partial^{d+1}_a y_a|_{a=0}$
and $\partial ^{d+1}_a\gamma_a(0)|_{a=0}$ have the same sign.
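Such choices of $n,m$ exist by a standard density argument: since $\log|\sigma^u_0|<0<\log|\lambda_0|$ and their ratio is irrational, the set $\{\,n\log|\sigma^u_0|+m\log|\lambda_0|:\; n,m\geq m_0\,\}$ is dense in $\mathbb R$ for every $m_0\geq 1$ (a consequence of the equidistribution of irrational rotations), and the same holds with $m$ restricted to either parity.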
By our assumptions, the $C^d$-jets of $a\mapsto \gamma_a(0)$ and $a\mapsto y_a$ at $a=0$ vanish.
With our choices, this guarantees that the $C^{d+1}$-jet at $a=0$ of
$a\mapsto \Delta \cdot (\sigma_0^u)^n(\lambda_0)^my_a-\gamma_a(0)$ is arbitrarily small.
By~\cref{l.composition}, after a $C^r$-perturbation of $(f_a)_a$ localized at $(S^2_a)_a$, the germ at $a=0$ of the following function is $C^{d+1}$-small:
\[a\mapsto \eta_a:= \gamma_a\circ p_x\circ \cS_a^n\circ \cT_a\circ \cP_a^m(S'^2_a) -p_y\circ \cS_a^n\circ \cT_a\circ \cP_a^m(S'^2_a).\]
A $C^r$-small perturbation localized at $S_a^1$ (which is locally conjugated to a translation)
translates the functions $(\gamma_a)_a$ by $-\eta_a$ for each parameter $a$ close to $0$.
Then we have at $a=0$:
$$d(\Gamma_a, \cS_a^n\circ \cT_a\circ \cP_a^m(S'^2_a))= o( |a|^{d+1}).$$
As a consequence, the continuation of $P^1_0$ and $ S^2_0$ form a $C^{d+1}$-paraheterocycle at $a=0$ for the chosen perturbation.
Since the charts are a priori only $C^r$, the resulting perturbation is only $C^r$.
In a last step, we thus smooth the family near the sources, keeping the paraheterocycle we have obtained
(the latter being a finite codimensional condition on the family). \cref{theprop} is now proved. \qed
\subsection{Families parametrized by $k$-parameters}\label{ss.k-para}
When the family $(f_a)$ is parametrized by $a=(a_1,\dots,a_k)$ in $\mathbb R^k$, $k>1$, the proof follows the same scheme, by canceling one by one the partial derivatives
$\partial_{a_1}^{i_1}\partial_{a_2}^{i_2}\cdots \partial_{a_k}^{i_k}$ of the unfolding of the heterocycle.
To this end, we proceed by induction on $\{\underline i= (i_1,\dots, i_k)\in \mathbb N ^k:\; \sum_j i_j \le d\}$ following an order $\prec$ such that:
\[\sum_j i_j < \sum_j i_j'\;\Rightarrow\; \underline i \prec \underline i'.\]
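For instance, for $k=2$ and $d=2$, one admissible choice of such an order on the six multi-indices involved is
\[(0,0)\prec(1,0)\prec(0,1)\prec(2,0)\prec(1,1)\prec(0,2);\]
any total order refining the grading by $\sum_j i_j$ is equally suitable.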
\section{Nearly affine (para)-blender renormalization}\label{s.blender}
In this section, we prove \cref{PPaffine,PPPaffine}.
We consider a $C^\infty$ map $f\in \mathrm{Diff}loc^\infty(U,M)$ with a projectively hyperbolic source $S$ and a saddle point $Q$ forming a strong heterocycle, and build by perturbation a nearly affine blender homoclinically related to $Q$. It is defined by two inverse branches from a neighborhood of $Q$ to ``vertical rectangles'' stretching across the local unstable manifold of the saddle.
In \textsection\ref{Setting modulo small perturbation} and \textsection\ref{Unfolding of the strong heterocycle}
we choose nice coordinate systems for the inverse dynamics nearby the source, the saddle and the heteroclinic orbits. It requires preliminary perturbations in order to satisfy some non-resonance and transversality conditions. We also explain how to unfold the strong heterocycle.
The heterocycle induces well-defined inverse branches of the dynamics (\textsection \ref{Choosing candidates}) that are transitions from one linearizing chart to the other.
\textsection \ref{bound} provides $C^r$-estimates on rescalings $g^-,g^+$ of the inverse branches.
In \textsection \ref{Tuning} and \textsection \ref{Translating candidates}, we tune the length of the branches and the size of the unfolding so that the inverse branches
define a nearly affine blender with a neat dilation $\Delta$; it is homoclinically related to the saddle point $Q$ and its activation domain contains $S$. In other words, \cref{PPaffine} will be proved.
In \textsection\ref{sectionparadense}, we add a parameter, consider a family $(f_a)_{a\in\mathbb R}$
and apply the previous discussion to $f_0$. The inverse branches admit continuations $(g^-_a)_{a\in\mathbb R}$ and $(g^+_a)_{a\in\mathbb R}$.
After having chosen an adapted reparametrization, we extend the $C^r$-bounds to the parametrized families and check that
they define a nearly affine $C^r$-parablender, concluding the proof of~\cref{PPPaffine}.
\paragraph{\it Notations.}
The proofs will depend on a small number $\varepsilon>0$ and on integers $n^+,n^-,m^+,m^-$.
The notation $A=O(\varepsilon)$ (or more generally $A=O(g(\varepsilon, n^+,n^-,m^+,m^-))$)
will mean that the quantity $A$ has a norm bounded by $C\cdot\varepsilon$
(or by $C\cdot|g(\varepsilon, n^+,n^-,m^+,m^-)|$), where the number $C>0$ depends on the initial map $f$
but not on the choices made during the construction.
Similarly, one will say that a function $h$ (that may depend on coordinates $x,y$, and/or parameters $a$ or ${\underline a}lpha$)
is \emph{$C^r$-dominated} by $\varepsilon$ if $\partial^k h= O(\varepsilon)$
for all its $k^\text{th}$ derivatives with respect to $x,y,a,{\underline a}lpha$ with $0\leq k\leq r$.
Note that if in the $C^r$-topology, $h_i=h'_i+O(\varepsilon)$, $i\in \{1,2\}$, then $h_1\circ h_2=h'_1\circ h'_2+O(\varepsilon)$.
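The last observation follows from the decomposition
\[h_1\circ h_2-h'_1\circ h'_2=(h_1-h'_1)\circ h_2+\big(h'_1\circ h_2-h'_1\circ h'_2\big),\]
both terms of which are $C^r$-dominated by $\varepsilon$ as soon as the maps involved have uniformly bounded derivatives up to order $r+1$ on their (compact) domains, as is the case for the inverse branches and transition maps used in this section.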
\subsection{Coordinates for generic perturbations of strong heterocycles}\label{Setting modulo small perturbation}
We first fix a system of coordinate as depicted in \cref{fig:def_inversebranches2}.
As in \cref{Local coordinate for a heterocycle}, we shall assume that the points $Q$ and $S$
are fixed and that the eigenvalues $1<\sigma_u^{-1}<\sigma_{uu}^{-1}$ of $D_Sf$ and $0<\lambda^{-1}<1<\sigma^{-1}$
of $D_Qf$ are positive and non-resonant.
Furthermore we can assume that:
\begin{equation}\label{palis module irrational} \frac{\log \sigma_u}{\log \lambda} \notin \mathbb Q\; .\end{equation}
The hypothesis of the proposition consists of two finite codimensional conditions:
\begin{equation}\label{heterocycle-condition}
S\in W^u(Q; f) \quad \mathrm{and}\quad Q\in W^{uu}(S; f)\; .
\end{equation}
So after a small smooth perturbation, we can assume moreover:
\begin{equation}\label{transversality strong hetro} T_QW^{uu}(S; f)\oplus T_Q W^s(Q; f)=T_Q M\quad \mathrm{and}\quad E^{cu}(S) \oplus T_S W^u(Q; f)= T_S M \; .\end{equation}
As in \cref{Local coordinate for a heterocycle}, the non-resonance of the eigenvalues and the smoothness of the dynamics imply, by the Sternberg Theorem \cite{S58}, the existence of:
\begin{itemize}
\item Neighborhoods $V'_S\subset V_S:= f(V'_S)$ of $S$ and coordinates for which $f|V'_S$ has the form:
\[f\colon (x,y)\in V'_S\mapsto (\sigma_{uu}^{-1}\cdot x, \sigma_u^{-1}\cdot y)\in V_S.\]
\item Neighborhoods $V'_Q$ and $V_Q= f(V_Q')$ of $Q$ and coordinates in which $f|V'_Q$ has the form:
\[f\colon (x,y)\in V'_Q\mapsto (\sigma^{-1}\cdot x, \lambda^{-1}\cdot y)\in V_Q.\]
\end{itemize}
This defines the inverse branches $\cQ:= (f|V_Q')^{-1}$ and
$\cS:= (f|V_S')^{-1}$:
\[\cS: (x,y)\in V_S\mapsto (\sigma_{uu} \cdot x, \sigma_u \cdot y)\in V_S'\quad \mathrm{and}\quad \cQ: (x,y)\in V_Q\mapsto (\sigma \cdot x, \lambda \cdot y)\in V_Q'\; .\]
Up to restricting $V_Q$ and $V_Q'$ and rescaling their coordinates, we can assume:
\[V_Q\equiv [-2,2]\times [-2\lambda^{-1},2\lambda^{-1}]\quad \mathrm{and}\quad
V_Q'\equiv [-2\sigma,2\sigma]\times [-2,2]
\; .\]
Let $W^u_{loc} (Q):= V_Q\cap \{y=0\} $, $W^s_{loc} (Q):= V'_Q\cap \{x=0\} $ and $W^{uu}_{loc}(S):= \{y=0\} \cap V_S$.
By \cref{heterocycle-condition} there is a neighborhood $V''_S \Subset V'_S$ of $0\equiv S$ and an inverse branch $\TS: V_S''\hookrightarrow V_Q$ of an iterate of $f$ sending $0$ into $[-2,2]\times \{0\}$.
Similarly, there exists a neighborhood $V''_Q \Subset V'_Q\cap V_Q$ of $0\equiv Q$ and
an inverse branch $\TQ: V_Q''\hookrightarrow V_S$ of an iterate of $f$ sending $0$ into $V_S\cap \{y=0\}$.
The inverse branches $\TS$ and $ \TQ$ are called the \emph{transitions maps}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=11cm]{def_inversebranches.pdf}
\caption{Inverse branches given by the strong heterocycle. \label{fig:def_inversebranches2}}
\end{center}
\end{figure}
\noindent
Assuming the neighborhoods $V_S$ and $V_Q$ small enough, it is possible (up to composing with an iterate of $f$) to choose
$\TS,\TQ$ such that $$\TS(0)\in V'_Q\setminus V_Q
\quad \mathrm{and}\quad \TQ(0)\in V_S\setminus V'_S.$$
Let the coordinates of $\TS$ and $\TQ$ be
\[\TS:=(\XS, \YS)\quad \mathrm{and}\quad \TQ:=(\XQ, \YQ)\; .\]
By~\cref{transversality strong hetro},
$ \partial_y \YQ(0)\not= 0$. Thus by rescaling one of the linearizing charts, we can assume:
\begin{equation}\label{condition transition}
\partial_y \YQ(0)=1.
\end{equation}
\subsection{Unfolding of the strong heterocycle}\label{Unfolding of the strong heterocycle}
We will perturb $\TS,\TQ$
so that the following points are close to but \emph{not necessarily} in $\{y=0\}$:
\[S'=(s'_x, s'_y):= \TS(0)\quad \mathrm{and}\quad Q'= (q'_x, q'_y):= \TQ(0)\; .\]
This is enabled by the next claim without changing any derivative of the inverse branches.
\begin{claim} \label{unfolding as we want}
For every small numbers $s'_y$ and $q'_y$, there exists a $C^\infty$ perturbation of the dynamics such that the inverse branches $\cS$ and $\cQ$ remain unchanged, while the continuations of the inverse branches $\TS$ and $\TQ$ have the same derivatives but satisfy:
\[\YS(0)=s'_y \quad \mathrm{and}\quad \YQ(0)=q'_y.\]
\end{claim}
\begin{proof}
First recall that $\TS(0)\in V_Q\setminus V_Q'$.
One perturbs $f$ by composing with a translation supported on a small neighborhood of $\TS(0)$.
This allows us to move the vertical position of $ \TS(0)$ without affecting the other branches. The modification of $\TQ(0)$ is done similarly.
\end{proof}
In the following we will prescribe some values of $s'_y,q'_y$ and consider the perturbed dynamics.
The inverse branches of the new system will still be denoted by $\cQ$, $\cS$, $\TS=(\XS,\YS)$ and $\TQ=(\XQ,\YQ)$. The next lemma allows us to assume that $\partial_y \YS(0)$ is positive.
\begin{lemma}\label{partialy YSpositive} Up to perturbing $f$
and changing $\TS$, we can assume moreover that
$$\partial_y \YS(0)>0.$$
\end{lemma}
\begin{proof}
If $\partial_y \YS(0)<0$, we are going to perturb $f$ and replace $\TS$ by the inverse branch $\widetilde \TS:= \TS\circ \cS^{n } \circ \TQ\circ \cQ^{m } \circ \TS$ for some large $n$ and $m$. First note that for any large $n$ and any $m$,
the map $\widetilde \TS $ is well defined on a small neighborhood of $0\equiv S$. Also $\partial_y \TS(0)$ is a vector with negative vertical component. By hyperbolicity, it is sent by $D\cQ^m$ to a vertical vector. Its vertical component is still negative since $\lambda>0$. It is pointed at a point $\cQ^{m } \circ \TS(0)$ close to $0\equiv Q$. Thus for $m$ sufficiently large, by \cref{condition transition}, its image by $D\TQ$ is a vector with negative vertical component. By projective hyperbolicity of the source $S$, its image by $D\cS^{n}$ is a vertical vector, pointed at a point nearby $S$ when $n$ is large, and with negative vertical component. Consequently it is sent by $D\TS$ to a vector with positive vertical component at a point nearby $\TS(0)$. In other words, the second coordinate of $\partial_y \widetilde \TS(0)$ is positive.
It remains to perform a perturbation of $f$ so that the second coordinate of $\widetilde \TS(0)$ is zero. Let $(\TS_t)_t$ be the family of perturbations of $\TS$ given by \cref{unfolding as we want}, which allow us to move the $y$-coordinate of $\TS(0)$. Note that when $n\gg m$,
\[\partial_t (\TS_t\circ \cS^{n } \circ \TQ\circ \cQ^{m } \circ \TS_t)(0)\approx \partial_t \TS_t(0) + D(\TS\circ \cS^{n } \circ \TQ\circ \cQ^{m })(\partial_t \TS_t(0))\approx \partial_t \TS_t(0)\; .\]
Hence there is a small parameter $t$ such that $ \widetilde {\TS_t}:=(\TS_t\circ \cS^{n } \circ \TQ\circ \cQ^{m } \circ \TS_t)$ satisfies moreover that the $y$-coordinate of $ \widetilde {\TS_t}(0)$ is $0$.
\end{proof}
\subsection{Choice and renormalization of inverse branches}\label{Choosing candidates}
Let us fix $\Delta>1$ sufficiently close to $1$ so that a nearly affine blender of contraction $\Delta^{-1}$ is a blender by \cref{ are blender}.
The construction also depends on a small number $\varepsilon>0$ (it will measure the distance of the rescaled blender to an affine one)
and on large integers $n^-,m^-,n^+,m^+$ that will be chosen later.
The nearly affine blender will be displayed in the neighborhood $V_Q$ of $Q$, using two inverse branches $g^+$ and $g^-$ of different iterates of $f$. We take them of the form:
\[g^\pm := \TS\circ \cS^{n^\pm} \circ \TQ\circ\cQ^{m^{\pm}} .\]
\begin{figure}[h!]
\begin{center}
\includegraphics[width=11cm]{formationblenderbox2ters.pdf}
\caption{Construction of a nearly affine blender. \label{fig32}}
\end{center}
\end{figure}
The inverse branches defining the blender will be rescaled by the map:
\[\cH: (x,y)\in \mathbb R^2 \to
(x,\varepsilon \cdot \lambda^{-m^+} \cdot y)\in V_Q\; .\]
Their renormalizations are given for $\pm\in \{-,+\}$ by:
\begin{equation}\label{def Rg} \cR g^\pm:= \cH^{-1}\circ g^\pm \circ \cH= \cH^{-1}\circ \TS \circ \cS^{n^\pm}\circ \TQ \circ \cQ^{m^\pm} \circ \cH\end{equation}
\begin{lemma}\label{Rg well def}
For every $n^-,m^-,n^+,m^+$ large, with $m^+>m^-$, the renormalizations $\cR g^-,\cR g^+$ are well defined on $B:= [-2,2] ^2$.
\end{lemma}
\begin{proof} Since $m^-< m^+$,
both maps $\cQ^{m^+}\circ \cH ,\cQ^{m^-}\circ \cH$ are well defined on $B$ and equal to:
\[\cQ^{m^+} \circ \cH(x,y)= ( \sigma^{m^+} \cdot x,\varepsilon \cdot y)\quad \mathrm{and}\quad \cQ^{m^-} \circ \cH(x,y)= ( \sigma^{m^-} \cdot x,\varepsilon \cdot \lambda^{m^--m^+} \cdot y)\; . \]
As $\varepsilon$ is small, their ranges are contained in a small neighborhood of $0$ and so in the domain of $\TQ$. Thus both maps $ \TQ \circ \cQ^{m^\pm} \circ \cH$ are well defined on $B$ and their ranges lie in a small neighborhood of $\TQ(0)\in V_S$.
Then as $\cS$ contracts $V_S$ into itself with a fixed point at $0$ and since $n^\pm$ are large,
$\cS^{n^\pm}\circ \TQ \circ \cQ^{m^\pm} \circ \cH$ is well defined on $B$ and its image is included
in the small neighborhood $V_S''$ of $0$. Thus $\TS\circ \cS^{n^\pm}\circ \TQ \circ \cQ^{m^\pm} \circ \cH$ is well defined on $B$.
\end{proof}
\subsection{Bounds on the renormalized maps}\label{bound}
Given $\varepsilon>0$ small, we require the following properties on the large integers $n^\pm,m^\pm$:
\begin{gather}\label{e.hypo1}
n^+> n^- \ge \varepsilon^{-1} \quad \mathrm{and}\quad m^+> m^-\ge \varepsilon^{-1}\;,\\
\{\;\sigma_u^{n^-}\lambda^{m^-}\cdot \partial_y \YS(0),\;
\sigma_u^{n^+}\lambda^{m^+}\cdot \partial_y \YS(0)\;\} \; \subset \;
[\Delta-\varepsilon,\Delta+\varepsilon]\;,\label{e.hypo3}\\
\varepsilon^{-1} \leq \sigma_u^{-(n^+-n^-)} \quad \mathrm{and}\quad \lambda^{m^+-m^-}\leq \varepsilon^2\cdot \min(n^-,m^-)\; . \label{e.hypo2}
\end{gather}
Let us recall that the inverse eigenvalues satisfy
$\kappa:= \max(\sigma_u ,\sigma_{uu} ,\frac{\sigma_{uu}}{\sigma_{u}}, \lambda^{-1},\sigma )<1.$
\begin{fact}\label{fact-eps}
For every $\varepsilon>0$ small and $n>\varepsilon^{-1}$, it holds $\kappa^n<n^{-(r+4)}$.
\end{fact}
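\noindent
Indeed, since $0<\kappa<1$, the sequence $n^{r+4}\kappa^n$ tends to $0$ as $n\to\infty$; hence there is an integer $N$, depending only on $\kappa$ and $r$, such that
\[\kappa^n<n^{-(r+4)}\quad \text{for every } n\geq N,\]
and it suffices to take $\varepsilon<1/N$.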
\noindent
In particular, one has $\sigma^{n^-}<\varepsilon$ and $\sigma_u^{n^-} <\varepsilon$.
We decompose the renormalized maps as
\[\cR g^\pm = \Psi^\pm \circ \Phi^\pm =
[\cH^{-1}\circ \TS\circ \cS^{n^\pm} \circ \cH^\pm]
\circ
[(\cH^\pm)^{-1}\circ \TQ \circ \cQ^{m^\pm} \circ \cH] \; ,\]
\[\text{where} \quad
\cH^\pm:= (x,y)\mapsto (x,\varepsilon \cdot \lambda^{m^\pm-m^+}\cdot y)-Q' .\]
\begin{lemma}\label{preprecondition fro blender}
The maps $(x,y)\mapsto \Phi^\pm(x,y)-(0,y)$
are $C^{r}$-dominated by $\varepsilon$.
\end{lemma}
\begin{proof}
We have $\Phi^\pm(x,y)=(\cH^\pm)^{-1}\circ \TQ \circ \cQ^{m^\pm} \circ \cH(x,y)$.
Since $Q'=\TQ(0)= \TQ\circ \cQ^{m^\pm}\circ \cH (0) $ we get $\Phi^\pm(0)=0$.
Recalling that $\TQ=(\XQ, \YQ)$ and that $Q'= (q'_x, q'_y)$, we obtain:
\begin{eqnarray*}
\Phi^\pm(x,y) &= &( \XQ , \varepsilon^{-1} \cdot \lambda^{m^+-m^\pm} \cdot \YQ )(\sigma^{m^\pm} \cdot x
,\varepsilon \cdot \lambda^{m^\pm-m^+} \cdot y)+ ( q'_x , \varepsilon^{-1} \cdot \lambda^{m^+-m^\pm} \cdot q'_y )\; .\\
\partial_x \Phi^\pm(x,y) &=& \sigma^{m^\pm} \cdot (
\partial_x \XQ , \varepsilon^{-1}\cdot \lambda^{m^+-m^\pm}\cdot \partial_x \YQ )(\sigma^{m^\pm} \cdot x
,\varepsilon \cdot \lambda^{m^\pm-m^+} \cdot y) \; .\\
\partial_y \Phi^\pm(x,y) &= & ( \varepsilon\cdot \lambda^{m^\pm-m^+} \cdot \partial_y \XQ , \partial_y \YQ )(\sigma^{m^\pm} \cdot x
,\varepsilon \cdot \lambda^{m^\pm-m^+} \cdot y) \; .
\end{eqnarray*}
From this, \eqref{e.hypo1} and \cref{fact-eps}, the first coordinate of $D\Phi^\pm$ is $C^{r-1}$-dominated by $\varepsilon$.
Using also \eqref{e.hypo2}, we have
$\sigma^{m^\pm} \cdot \varepsilon^{-1}\cdot \lambda^{m^+-m^\pm}<\sigma^{m^\pm} \cdot \varepsilon \cdot m^-<\varepsilon$
and the second coordinate of $\partial_x \Phi^\pm$ is $C^{r-1}$-dominated by $\varepsilon$.
As $ \partial_y \YQ(0)=1$ by \eqref{condition transition}, the second coordinate of $\partial_y \Phi^\pm$
coincides with the constant function $1$, up to an error term that is $C^{r-1}$-dominated by $\varepsilon$.
\end{proof}
\begin{lemma}\label{precondition fro blender}
The maps $\Psi^\pm$ coincide with
$$(x,y)\mapsto (0,\Delta\cdot y)+(s'_x\; ,\; \varepsilon^{-1} \lambda^{m^+} \cdot s'_y- \varepsilon^{-1}\cdot \lambda ^{m^+}\sigma_u^{n^\pm } \cdot \partial_y \YS(0) \cdot q'_y ),$$
up to an error term that is $C^{r}$-dominated by $\varepsilon$.
\end{lemma}
\begin{proof}
We have $\Psi^\pm= \cH^{-1}\circ \TS\circ \cS^{n^\pm} \circ \cH^\pm$.
With $\TS=(\XS, \YS)$, it holds:
\[\Psi^\pm(x,y) = ( \XS, \varepsilon^{-1} \lambda ^{m^+} \YS )(\sigma_{uu} ^{n^\pm}\cdot (x-q'_x),\; \sigma_{u} ^{n^\pm}
\cdot (\varepsilon \cdot \lambda^{m^\pm-m^+}\cdot y-q'_y))
\; . \]
Thus $\partial_x \Psi^\pm$ is $C^{r-1}$-dominated by $\sigma_{uu} ^{n^-}\cdot \lambda ^{m^+} \cdot \varepsilon^{-1}$,
which by~\eqref{e.hypo1}, \eqref{e.hypo3}, \eqref{e.hypo2} is dominated by
$$(\tfrac{\sigma_{uu}}{\sigma_u}) ^{n^-}\cdot \lambda^{m^+-m^-} \cdot \varepsilon^{-1}<(\tfrac{\sigma_{uu}}{\sigma_u}) ^{n^-} \cdot n^-\cdot \varepsilon<\varepsilon.$$
The first coordinate of $\partial_y \Psi^\pm$ is $C^{r-1}$-dominated by $\varepsilon \cdot \sigma_u^{n^\pm} \cdot \lambda^{m^\pm-m^+}<\varepsilon$.
Similarly, using~\eqref{e.hypo3}, the second coordinate of $\partial_y \Psi^\pm$ coincides with $\sigma_u^{n^\pm} \cdot \lambda^{m^\pm } \cdot \partial_y \YS(0)$,
hence with $\Delta$, up to an error term that is $C^{r-1}$-dominated by $\varepsilon$.
We have thus shown that the derivative of $(x,y)\mapsto \Psi^\pm(x,y)-(0,\Delta\cdot y)$ is $C^{r-1}$-dominated by $\varepsilon$.
Moreover:
\[\Psi^\pm(0) = (
\XS, \varepsilon^{-1} \cdot \lambda ^{m^+} \cdot \YS )( - \sigma_{uu} ^{n^\pm}\cdot q'_x , - \sigma_{u} ^{n^\pm}
\cdot q'_y ) \; . \]
The first coordinate is $\varepsilon$-close to $\XS(0)=s'_x$ and the second coordinate is equal to:
\[ \varepsilon^{-1} \lambda ^{m^+} \YS (- \sigma_{uu} ^{n^\pm}\cdot q'_x, - \sigma_{u} ^{n^\pm}
\cdot q'_y)
=\varepsilon^{-1} \lambda^{m^+}\bigg( \YS (0)
- \partial_x \YS (0) \cdot \sigma_{uu} ^{n^\pm}\cdot q'_x
- \partial_y \YS (0)\cdot \sigma_{u} ^{n^\pm}\cdot q'_y +O(\sigma_{u} ^{2n^\pm})\bigg).\]
As before $\varepsilon^{-1}\cdot \lambda ^{m^+} \cdot \sigma_{uu} ^{n^-}= O(\varepsilon)$.
By \eqref{e.hypo3} and \eqref{e.hypo2},
$\varepsilon^{-1}\cdot\lambda^{m^+}\cdot \sigma_{u} ^{2n^-}$
is dominated by
$\varepsilon^{-1}\cdot\lambda^{m^+-m^-}\cdot \sigma_{u} ^{n^-}= O(\varepsilon).$
As $ \YS (0)=s'_y$, we obtain the announced formula.
\end{proof}
\subsection{Tuning iterates}\label{Tuning}
\begin{lemma}\label{choixirrationel} Given $\varepsilon>0$ small, there exist $n^-, m^-, n^+, m^+$ which satisfy \eqref{e.hypo1},
\eqref{e.hypo3}, \eqref{e.hypo2}.
\end{lemma}
\begin{proof}
By~\eqref{palis module irrational}, there exist $m,n\ge 1$ arbitrarily large such that:
\[ \lambda^{m}\sigma_{u}^{n}\in [1-\tfrac \varepsilon{10}, 1+\tfrac \varepsilon{10}]\; .\]
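Indeed, as $\log\lambda>0$, $\log\sigma_u<0$ and $\tfrac{\log \sigma_u}{\log \lambda}\notin \mathbb Q$ by \eqref{palis module irrational}, the fractional parts of the sequence $\big(n\tfrac{|\log\sigma_u|}{\log\lambda}\big)_{n\geq 1}$ are equidistributed modulo $1$; choosing $n$ arbitrarily large with fractional part small enough and $m:=\lfloor n\tfrac{|\log\sigma_u|}{\log\lambda}\rfloor$, one gets
\[0\leq n|\log \sigma_u|-m\log\lambda< \log(1+\tfrac\varepsilon{10}),\quad \text{hence}\quad \lambda^{m}\sigma_u^{n}=e^{-(n|\log\sigma_u|-m\log\lambda)}\in [1-\tfrac\varepsilon{10},1]\; .\]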
As $\Delta$ and $\partial_y \YS(0)$ have the same sign (by \cref{partialy YSpositive}), there are $n^-, m^- >\varepsilon^{-1} $ such that:
\[n^-\ge \varepsilon^{-2}\cdot \lambda^{m} \quad \mathrm{and}\quad
\partial_y \YS(0)\cdot \lambda^{m^-}\sigma_{u}^{n^-}\in
\Delta+[-\tfrac \varepsilon{10},\tfrac \varepsilon{10}]\; .\]
Then let $m^+ := m+m^-$ and $n^+:= n+n^-$. This gives $\lambda^{m^\pm}\cdot \partial_y \YS(0)\cdot \sigma_u^{n^\pm}\in\Delta+[-\varepsilon,\varepsilon]$. \end{proof}
A consequence of the Lemmas~\ref{preprecondition fro blender}, \ref{precondition fro blender} and~\ref{choixirrationel} is:
\begin{coro}\label{condition for blender}
For every $\varepsilon>0$ there exist $n^-,m^-,n^+,m^+$
such that the renormalized maps $\cR g^\pm$ coincide, up to a term that is $C^r$-dominated by $\varepsilon$, with:
\[(x,y)\mapsto(0, \Delta \cdot y)+ (s'_x\; ,\; \varepsilon^{-1}\cdot \lambda^{m^+} \cdot s'_y- \varepsilon^{-1}\cdot \lambda ^{m^+}\cdot\sigma_u^{n^\pm } \cdot \partial_y \YS(0) \cdot q'_y).\]
\end{coro}
\subsection{
Proof of \cref{PPaffine}: from strong heterocycles to blenders}\label{Translating candidates}
Let $n^-,m^-,n^+,m^+$ be given by \cref{condition for blender}. It remains to choose the values of $s'_y$ and $q_y'$, such that the renormalized maps $\cR g^\pm$ are $C^r$-close to:
\[(x,y)\mapsto(s'_x, \Delta \cdot y)\pm (\Delta-1) .\]
In view of \cref{condition for blender}, it is enough to ask:
\[\varepsilon^{-1}\cdot \lambda^{m^+} \cdot s'_y- \varepsilon^{-1}\cdot \lambda ^{m^+}\cdot \sigma_u^{n^\pm } \cdot \partial_y \YS(0) \cdot q'_y =\pm(\Delta-1)+O(\varepsilon).\]
This is implied by choosing $s'_y$ and $q'_y$ as follows:
\begin{equation}\label{choice-sq}
\varepsilon^{-1}\cdot \lambda^{m^+} \cdot s'_y= \Delta-1
\quad \mathrm{and}\quad
\varepsilon^{-1}\cdot \lambda ^{m^+}\sigma_u^{n^-} \cdot \partial_y \YS(0) \cdot q'_y =2(\Delta-1)\; .
\end{equation}
Indeed, one has $\sigma_u ^{-(n^+-n^-) }\geq \varepsilon^{-1}$ by~\eqref{e.hypo2},
and with \eqref{e.hypo1}, \eqref{e.hypo3}, \cref{fact-eps}, the choices \eqref{choice-sq} give $s'_y=O(\varepsilon^2)$, $q'_y=O(\varepsilon)$
and $\varepsilon^{-1}\cdot \lambda ^{m^+}\sigma_u^{n^+} \cdot \partial_y \YS(0) \cdot q'_y =O(\varepsilon)$.
By \cref{ are blender}, $\{\cR g^+, \cR g^-\}$ defines a nearly affine blender with activation domain containing $[-2,2]\times [-1/2 ,1/2]$.
Thus, $\{ g^+, g^-\}$ defines a blender with activation domain containing $\cH([-2,2]\times [-1/2 ,1/2 ])= [-2,2]\times [-\varepsilon \cdot \lambda^{-m^+} /2, \varepsilon \cdot \lambda^{-m^+}/2 ]$.
Choosing $|\Delta-1|<1/4$, one gets $|s'_y|<\varepsilon \cdot \lambda^{-m^+} /2$
and $S'$ belongs to this activation domain. Also the point $Q\equiv 0$ belongs to this activation domain. Note that the unstable manifold of $Q$ stretches across $\{s_x'\}\times [-\varepsilon \cdot \lambda^{-m^+} /2, \varepsilon \cdot \lambda^{-m^+}/2 ]$, and hence across the stable manifolds of the blender. Hence $Q$ is homoclinically related to the blender.
\cref{PPaffine} is proved in the $C^\infty$ case.
\qed
\subsection{Proof of \cref{PPaffine} in the analytic case}
The whole previous proof remains valid in the analytic setting, except for
\cref{unfolding as we want}.
Note that the proof of \cref{PPaffine} does not use the fact that the first $r$ derivatives of $\TS$ and $\TQ$ remain unchanged, but only that they are bounded. Thus to prove the analytic case of \cref{PPaffine}, it suffices to show:
\begin{claim} \label{unfolding as we want_analytic}
For every small numbers $s'_y$ and $q'_y$, there exists a $C^\omega$ perturbation of the dynamics such that the inverse branches $\cS$ and $\cQ$ remain unchanged, while the continuations of the inverse branches $\TS$ and $\TQ$
have bounded derivatives at $0$ and
satisfy:
\[\YS(0)=s'_y \quad \mathrm{and}\quad \YQ(0)=q'_y.\]
Moreover their $C^r$-norms vary continuously with the parameters $s'_y,q'_y$.
\end{claim}
\begin{proof} The perturbation technique follows the same lines as the proof of \cref{l.replace-omega}. First we embed analytically $M$ into $\mathbb R^N$, and we define an analytic retraction $\pi$ from a neighborhood of $M\subset \mathbb R^N$ to $M$. Then we choose a $C^\infty$-family $(f_p)_{p\in [-\varepsilon, \varepsilon]^8}$ such that $f_0=f$, such that $f_p$ coincides with $f$ outside of a small neighborhood of $\{S, Q\}$, and such that the following map is a local diffeomorphism:
\[\Phi \colon p\in [-\varepsilon, \varepsilon]^8\mapsto (S_p, P_p, \sigma(p), \lambda(p), \sigma_u (p),\sigma_{uu} (p))\in M^2\times \mathbb R^4,\]
where $S_p$ and $P_p$ are the continuations of $S$ and $P$, while
$(\sigma^{-1}(p), \lambda^{-1}(p))$ and $(\sigma^{-1}_u (p),\sigma^{-1}_{uu} (p))$ are their eigenvalues. Then using the Stone--Weierstrass theorem and the retraction $\pi$, we define an analytic family $(\tilde f_p)_{p\in [-\varepsilon, \varepsilon]^8}$ such that $\tilde f_0=f$ and such that the continuation of $\Phi$ remains a diffeomorphism. We can thus extract from this family a $4$-parameter family $(\tilde f_p)_{p\in [-\varepsilon, \varepsilon]^4}$ along which the eigenvalues are constant, but such that
the continuations $\tilde S_p$ and $\tilde P_p$ of $S$ and $P$ still satisfy that the following map is a local diffeomorphism:
\[p\in [-\varepsilon, \varepsilon]^4\mapsto (\tilde S_p,\tilde P_p)\in M^2\; .\]
In \textsection\ref{Setting modulo small perturbation}, we assumed the eigenvalues of these points to be non-resonant. Thus we can apply \cite{Ta71}, which provides $C^r$-families of coordinates at $S$ and $P$ in which $f_p|V_{S}'$ and $f_p|V_Q'$ coincide with the diagonalized linear parts of $D_{S} f_p$ and $D_{Q} f_p$, which do not depend on $p$. Consequently the inverse branches $\cS$ and $\cQ$ (seen in the coordinates) remain unchanged when $p$ varies in $[-\varepsilon, \varepsilon]^4$. Also the continuations of the inverse branches $\TS$ and $\TQ$ vary $C^r$-continuously with $p$. On the other hand, the variation of the relative positions of the continuations of $S$ and $Q$ w.r.t. the local unstable manifold of $Q$ and the strong unstable manifold of $S$ is non-degenerate.
\end{proof}
\subsection{Proof of \cref{PPPaffine}: from strong paraheterocycles to parablenders
}\label{sectionparadense}
We now consider a $C^\infty$ family $(f_a)_{a\in \mathbb R}$
and continue to work in the setting of \textsection \ref{Setting modulo small perturbation}--\ref{Translating candidates} for the map $f=f_0$.
The continuations of the periodic points are $(S_a)_{a\in \mathbb R}$, $(Q_a)_{a\in \mathbb R}$,
with eigenvalues $\sigma_u^{-1}(a),\sigma_{uu}^{-1}(a)$ and $\lambda^{-1}(a),\sigma^{-1}(a)$.
By \cite{Ta71}, their linearizing coordinates can be extended to every $a\in I$, for $I$ sufficiently small, as a $C^{r+1}$-family of $C^{r+1}$-diffeomorphisms.
This enables us to consider the continuations $\cS_a$, $\cQ_a$, $\TQ_a$ and $\TS_a$ of the inverse branches $\cS $, $\cQ $, $\TQ$ and $\TS$.
They are still of the form:
\[\cS_a: (x,y)\in V_S\mapsto (\sigma_{uu} (a)\cdot x, \sigma_u (a)\cdot y), \quad \cQ_{a}: (x,y)\in V_Q\mapsto (\sigma (a)\cdot x, \lambda (a)\cdot y),\;\]
\[\TS_a=(\XS_a,\YS_a): (x,y)\in V''_S \hookrightarrow
V_Q, \quad \TQ_a=(\XQ_a,\YQ_a): (x,y)\in V''_Q\hookrightarrow V_S, \; \]
and they allow us to define the preimages under $f_a$:
$$S'_a=(s'_x(a),s'_y(a)):=\TS_a(0) \quad \mathrm{and}\quad Q'_a=(q'_x(a),q'_y(a)):=\TQ_a(0).$$
Observe that up to a perturbation localized in a neighborhood of $S_0$ we can also assume:
\begin{equation}\label{paraasume} \partial_a \frac{\log \sigma_u(a)}{\log \lambda(a)}\neq 0\quad \text{ at } a=0\; .\end{equation}
We consider $\Delta>1$, $\varepsilon>0$, and the integers $n^+, m^+, n^-, m^-$ as before.
This allows us to extend the definition of $g^\pm$ as families $(g^\pm_a)_{a\in I}$.
We also extend the rescaling maps:
\[\cH_a:\quad (x,y)\mapsto (x,\varepsilon \cdot \lambda^{-m^+}(a)\cdot y),\]
\[\cH^\pm_a:\quad (x,y)\mapsto (x,\varepsilon \cdot \lambda^{m^\pm-m^+}(a)\cdot y)-Q'_a,\]
and, similarly to \cref{def Rg}, the renormalized inverse branches:
\begin{equation*}
\cR g^\pm_a:= \cH^{-1}_a\circ g^\pm_a \circ \cH_a= \Psi^\pm_a \circ \Phi^\pm_a,\end{equation*}
$$ \text{where }\quad\quad
\Phi^\pm_\alpha :=(\cH^\pm_\alpha)^{-1}\circ \TQ_\alpha \circ \cQ_\alpha^{m^\pm} \circ \cH_\alpha \quad \mathrm{and}\quad
\Psi^\pm_\alpha :=(\cH _\alpha)^{-1}\circ \TS_\alpha \circ \cS_\alpha^{n^\pm} \circ \cH^\pm_\alpha.
$$
For $a$ small, $\cR g^+_a, \cR g^-_a$ are well defined on $B:=[-1, 1]\times [-2,2] $ by \cref{Rg well def} and form a $C^r$-nearly affine blender $K_a$
by \cref{Translating candidates}.
We also rescale the parameter space:
\[\alpha(a):=\Delta_-(a)-\Delta_-(0) \quad \text{ where }\quad \Delta_\pm(a) :=\lambda^{m^\pm}(a)\cdot \partial_y \YS_a(0)\cdot \sigma_u^{n^\pm}(a).\]
\begin{lemma}\label{estime alpha}
\begin{enumerate}
\item The map $\alpha$ is a local diffeomorphism at $a=0$.
\item The function $\alpha\mapsto a(\alpha)$ is $C^r$-dominated by $1/n^-$ (and hence by $\varepsilon$).
\item The maps $\alpha \mapsto \Delta_\pm(\alpha)-(\Delta+ \alpha )$ are $C^r$-dominated by $\varepsilon$.
\end{enumerate}
\end{lemma}
\begin{proof}
First observe that $\alpha(0)=0$. Then
\begin{multline*}
\Delta_-^{-1} \cdot \partial_a \alpha=
\partial_a \log \Delta_- = \partial_a \log (\lambda^{m^-} \cdot \sigma_u^{n^-} \cdot \partial_y \YS_a(0))
= \partial_a \left(\frac{\log (\lambda^{m^-} \cdot \sigma_u^{n^-})}{\log \lambda } \cdot \log \lambda+\log \partial_y \YS_a(0)\right)
\\
= n^- \cdot \log \lambda\cdot \partial_a \frac{\log \sigma_u}{\log \lambda} +
\frac{m^-\cdot \log \lambda
+ n^- \log \sigma_u}{\log \lambda} \cdot \partial_a \log \lambda +
\partial_a \log (\partial_y \YS_a(0))\; .
\end{multline*}
Thus by~\eqref{e.hypo3}, when $\varepsilon$ is small, $\partial_a \alpha|_{a=0}$ is invertible, of the order of $n^-$, giving the first item.
By induction, one gets that the higher derivatives can be written as:
\begin{multline}\label{e.higherDelta}
\partial^k_a \Delta_-=\partial^k_a \alpha
= \Delta_-\cdot \bigg(n^- \cdot \log \lambda\cdot \partial_a \frac{\log \sigma_u}{\log \lambda} +
\frac{m^-\cdot \log \lambda
+ n^- \log \sigma_u}{\log \lambda} \cdot \partial_a \log \lambda +\\
\partial_a \log (\partial_y \YS_a(0))\bigg)^k
+\Delta_-\cdot R_k(n^-,m^-)\; ,
\end{multline}
where $R_k(n^-,m^-)$ is a polynomial in $n^-,m^-$ with degree smaller or equal to $k-1$.
Hence $\partial^k_a \alpha$ is dominated by $(n^-)^k$.
Note that $\partial^k_\alpha a\cdot (\partial_a \alpha)^{k+1}$ is a linear combination of terms of the form
$(\partial_a \alpha)^{i_1}\cdot(\partial^2_a \alpha)^{i_2}\cdots (\partial^k_a \alpha)^{i_k}$, where $i_1+2\cdot i_2+\cdots+k\cdot i_k\leq k$.
This implies that $\partial^k_\alpha a$ is dominated by $1/n^-\leq \varepsilon$ as announced in the second item.
The definition of $\alpha$ gives $\Delta_-(\alpha)=\Delta_-(0)+ \alpha$ and $|\Delta_-(0)-\Delta|<\varepsilon$
by~\eqref{e.hypo3}.
In order to get the third item, it is thus enough to prove that each derivative
$\partial^k_\alpha(\Delta_+-\Delta_-)$ is dominated by $\varepsilon$.
By~\eqref{e.hypo3} and \eqref{e.hypo2}, $\Delta_+-\Delta_-$ is dominated by $\varepsilon$,
and $n^+-n^-$ is dominated by $\varepsilon n^-$, whereas $m^-\cdot \log \lambda
+ n^- \log \sigma_u$ and $m^+\cdot \log \lambda
+ n^+ \log \sigma_u$ are uniformly bounded. {
The partial derivative $\partial_a^k\Delta_-$ satisfies Eq.~\eqref{e.higherDelta}.
Replacing $\Delta_-,n_-,m_-$ by $\Delta_+,n_+,m_+$, one obtains a relation for $\partial_a^k\Delta_+$.
Taking the difference, one concludes that $\partial_a^k(\Delta_+-\Delta_-)$
is dominated by $\varepsilon (n^-)^k$.
Since $\partial_\alpha ^k(\Delta_+-\Delta_-)$
is a linear combination of terms $\partial_a^m(\Delta_+-\Delta_-)\cdot \partial^{i_1}_\alpha a\cdots\partial^{i_\ell}_\alpha a$ with $i_1+\cdots+ i_\ell=m$, by the second item of the lemma
it is dominated by $\varepsilon$.}
\end{proof}
\begin{coro}\label{c.bounds}
\begin{enumerate}
\item The maps $\alpha \mapsto S'_\alpha-S'_0$ and $\alpha \mapsto Q'_\alpha-Q'_0$ are $C^r$-dominated by $\varepsilon$.
\item The first derivative of $\alpha \mapsto \varepsilon^{-1}\cdot \lambda ^{m^+-m^\pm }(\alpha)\cdot q'_y(\alpha)$
is $C^{r-1}$-dominated by $\varepsilon$.
\item The first derivative of
$\alpha\mapsto \varepsilon^{-1}\cdot \lambda ^{m^+}(\alpha)\cdot\sigma_u^{n^\pm }(\alpha) \cdot q'_y(\alpha)$ is $C^{r-1}$-dominated by $\varepsilon$.
\end{enumerate}
\end{coro}
\begin{proof}
The first item is a direct consequence of \cref{estime alpha}:
in particular the first derivative of
$\alpha \mapsto q'_y(\alpha)$ is $C^{r-1}$-dominated by $1/n^-$.
By our choice~\eqref{choice-sq}, $q'_y(0)$ is dominated by $\varepsilon\cdot\lambda^{m^--m^+}$.
Similarly, the first derivative of
$\alpha \mapsto \lambda^{m^+-m^-}(\alpha)$ is $C^{r-1}$-dominated by
$$\max\{(\tfrac{m^+-m^-}{n^-})^k\lambda^{m^+-m^-}:\; 1\leq k\leq r\}\leq \tfrac{m^+-m^-}{n^-} \lambda^{m^+-m^-}<\varepsilon^2\lambda^{m^+-m^-} ,$$
using~\eqref{e.hypo2}.
The second item is thus a consequence of~\eqref{e.hypo2}:
$ \varepsilon^{-1} \lambda ^{m^+-m^\pm }/n^-\leq \varepsilon$.
The third item is obtained similarly, by writing $\lambda ^{m^+}(\alpha)\cdot\sigma_u^{n^\pm }(\alpha)=
\lambda ^{m^+-m^\pm}(\alpha)\cdot\lambda^{m^\pm}(\alpha)\sigma_u^{n^\pm }(\alpha)$
and by using~\eqref{e.hypo3}.
\end{proof}
\begin{lemma}\label{condition for parablender1}
With $p_y:(x,y)\mapsto y$, the families of maps $( \Phi^\pm_\alpha-p_y)_\alpha$ are $C^r$-dominated by $\varepsilon$.
\end{lemma}
\begin{proof} In addition to \cref{preprecondition fro blender}, it remains to study the partial derivatives involving $\alpha$.
Let us recall that $\Phi^\pm_\alpha(x,y)$ is given by
$$( \XQ_\alpha , \varepsilon^{-1} \cdot \lambda^{m^+-m^\pm}(\alpha) \cdot \YQ_\alpha )(\sigma^{m^\pm}(\alpha) \cdot x
,\varepsilon \cdot \lambda^{m^\pm-m^+}(\alpha) \cdot y)- ( q'_x(\alpha) , \varepsilon^{-1} \cdot \lambda^{m^+-m^\pm}(\alpha) \cdot q'_y(\alpha) ).$$
By \cref{c.bounds}, one can reduce to considering the family indexed by $\alpha$ and formed by:
\begin{equation}\label{e.simplify}
(x,y) \mapsto ( \XQ_\alpha , \varepsilon^{-1} \cdot \lambda^{m^+-m^\pm}(\alpha) \cdot \YQ_\alpha )(\sigma^{m^\pm}(\alpha) \cdot
x
,\varepsilon \cdot \lambda^{m^\pm-m^+}(\alpha) \cdot y) +Cst .
\end{equation}
By taking the first derivative w.r.t.\ $\alpha$ and further derivatives w.r.t.\ $x,y,\alpha$,
factors of the form $(m^\pm)^i(\sigma)^{m^\pm}$, $(m^+-m^-)^i\lambda^{m^+-m^-}$
appear, together with at least one factor of the form $\partial^i_\alpha a$.
Since $(m^\pm)^r(\sigma)^{m^\pm}<1$ and
$(m^+-m^-)^r\lambda^{m^+-m^-}<1$ by \cref{fact-eps} and using \cref{estime alpha},
the derivative of~\eqref{e.simplify} w.r.t.\ $\alpha$ forms a family $C^{r-1}$-dominated by the map
$\varepsilon^{-1} \cdot \lambda^{m^+-m^\pm}(0) \cdot \tfrac1{n^-}$ which is dominated by $ \varepsilon$ using \cref{estime alpha}.
\end{proof}
\begin{lemma}\label{condition for parablender2}
The families $(\Psi^\pm_\alpha)_\alpha$ coincide, up to the addition of maps $C^r$-dominated by $\varepsilon$, with the families defined by:
\[((x,y),\alpha)\mapsto(0, (\Delta+\alpha) \cdot y)+ (s'_x(0) , \varepsilon^{-1}\cdot \lambda^{m^+}(\alpha) \cdot s'_y(\alpha)- \varepsilon^{-1}\cdot \lambda ^{m^+}(\alpha)
\cdot \sigma_u^{n^\pm }(\alpha) \cdot \partial_y \YS_\alpha (0) \cdot q'_y(\alpha)). \]
\end{lemma}
\begin{proof}
In addition to \cref{precondition fro blender}, we are reduced to examining the families $(\partial_\alpha \Psi_\alpha^\pm)_\alpha$. We have:
\[\Psi^\pm_\alpha(x,y) = ( \XS_\alpha, \varepsilon^{-1} \lambda ^{m^+} (\alpha)\YS_\alpha )(\sigma_{uu}^{n^\pm}(\alpha) \cdot (x-q'_x(\alpha)), \sigma_{u} ^{n^\pm}(\alpha)
\cdot (\varepsilon \cdot \lambda^{m^\pm-m^+}(\alpha)\cdot y-q'_y(\alpha)))
\; . \]
We first discuss the families $(\partial_x\Psi^\pm_\alpha)_\alpha$, $(\partial_y\Psi^\pm_\alpha)_\alpha$ and then the families $(\partial_\alpha\Psi^\pm_\alpha(0))_\alpha$.
\noindent
\emph{Step 1. The families $(\partial_x \Psi_\alpha^\pm)_\alpha$} are controlled as in the proof of~\cref{condition for parablender1}, by bounding the factors $\partial ^k_\alpha a$ by $1/n^-$
with~\cref{estime alpha}.
By~\eqref{e.hypo3}, \eqref{e.hypo2}, $m^-,m^+,n^+$ are dominated by $n^-$.
{
All of this implies that $\log(\lambda^{m^\pm})$, $\sigma_u^{n^\pm}$, $\sigma_{uu}^{n^\pm}$,
as functions of $\alpha$, are $C^r$-bounded.
One deduces that $\partial_x \Psi_\alpha^\pm$ are $C^{r-1}$-dominated by
$\sigma_{uu} ^{n^\pm}\cdot \lambda ^{m^+} \cdot \varepsilon^{-1}$.
Arguing as in the proof of~\cref{precondition fro blender}, $\partial_x \Psi_\alpha^\pm$ are thus $C^{r-1}$-dominated by
$$(\tfrac{\sigma_{uu}}{\sigma_u}) ^{n^-}\cdot \lambda^{m^+-m^-} \cdot \varepsilon^{-1}<(\tfrac{\sigma_{uu}}{\sigma_u}) ^{n^-} \cdot n^-\cdot \varepsilon<\varepsilon.$$
}
\noindent
\emph{Step 2. The families $(\partial_y \Psi_\alpha^\pm)_\alpha$}
have a first coordinate which is $C^{r-1}$-dominated by
$(n^\pm)^r\cdot \sigma_{u} ^{n^\pm}\cdot (m^\pm-m^+)^r\cdot \lambda ^{m^\pm-m^+} \cdot \varepsilon$,
and by $\varepsilon$ by \cref{fact-eps}.
The second coordinate of $\partial_y \Psi_\alpha^\pm$ equals:
\[((x,y),\alpha) \mapsto \sigma_u^{n^\pm}(\alpha) \cdot\lambda^{m^\pm}(\alpha)\cdot\partial_y\YS_\alpha \; \bigg(\sigma_{uu}^{n^\pm}(\alpha) \cdot (x-q'_x(\alpha)), \sigma_{u} ^{n^\pm}(\alpha)
\cdot (\varepsilon \cdot \lambda^{m^\pm-m^+}(\alpha)\cdot y-q'_y(\alpha))\bigg)
\;. \]
It differs from $(\sigma_u^{n^\pm}(\alpha) \cdot \lambda^{m^\pm } (\alpha)\cdot \partial_y \YS_\alpha(0))_\alpha$
up to a map which is $C^{r-1}$-dominated by
$$\sigma_u^{n^\pm} \cdot\lambda^{m^\pm}\cdot \max\bigg\{ (n^\pm)^r\cdot \sigma_{uu}^{n^\pm}\;,\;
(n^\pm)^r\cdot \sigma_{u} ^{n^\pm}\cdot (m^+-m^\pm)^r
\cdot \lambda^{m^\pm-m^+}\cdot \varepsilon\bigg\},$$
and hence by $\varepsilon$ from \eqref{e.hypo3} and \cref{fact-eps}.
By definition $\sigma_u^{n^\pm}(\alpha) \cdot \lambda^{m^\pm} (\alpha)\cdot \partial_y \YS_\alpha(0)=
\Delta_\pm(\alpha)$ and $\Delta_\pm(\alpha)$ coincides with $\Delta+\alpha$ up to a term that is $C^r$-dominated by $\varepsilon$, by \cref{estime alpha}.
Up to here, we have shown that the families $(D\Psi^\pm_\alpha)_\alpha$ coincide with the spatial derivative of the map $((x,y),\alpha)\mapsto (0,(\Delta+\alpha)\cdot y)$,
up to a term $C^{r-1}$-dominated by $\varepsilon$.
\noindent
\emph{Step 3.
The families $(\partial_\alpha\Psi^\pm_\alpha(0))_\alpha$} are given by:
\[\Psi^\pm_\alpha(0) = (
\XS_\alpha, \varepsilon^{-1} \cdot \lambda ^{m^+}(\alpha) \cdot \YS_\alpha )( - \sigma_{uu} ^{n^\pm}(\alpha)\cdot q'_x (\alpha), - \sigma_{u} ^{n^\pm}(\alpha)
\cdot q'_y(\alpha) ) \; . \]
The first coordinate of each derivative $\partial^k_\alpha\Psi^\pm_\alpha(0)$ is dominated by derivatives $\partial^i_\alpha a$, hence the first coordinate of
$\partial_\alpha\Psi^\pm_\alpha(0)$ is dominated by $\varepsilon$ by \cref{estime alpha}.
By similar estimates as in \cref{precondition fro blender}, combined with \cref{estime alpha},
the second coordinate of $\partial_\alpha\Psi^\pm_\alpha(0)$ can be reduced (up to a term $C^{r-1}$-dominated by $\varepsilon$) to:
\[\varepsilon^{-1} \cdot \lambda ^{m^+}(\alpha) \cdot \YS_\alpha ( 0) \; + \;
\varepsilon^{-1} \cdot \lambda ^{m^+}(\alpha) \cdot D\YS_\alpha ( 0) .(0, - \sigma_{u} ^{n^\pm}(\alpha)
\cdot q'_y(\alpha) ) \;,\]
which is also equal to $\varepsilon^{-1}\cdot \lambda^{m^+}(\alpha) \cdot s'_y(\alpha)- \varepsilon^{-1}\cdot \lambda ^{m^+}(\alpha)
\cdot \sigma_u^{n^\pm }(\alpha) \cdot \partial_y \YS_\alpha (0) \cdot q'_y(\alpha)$.
\end{proof}
As a consequence of the Lemmas~\ref{condition for parablender1}, \ref{condition for parablender2} and~\ref{choixirrationel}, we have obtained:
\begin{coro}\label{condition for parablender}
For every $\varepsilon>0$ there exist $n^+, n^-, m^+, m^-$ such that
the families $(\cR g^\pm_\alpha)_\alpha$ coincide, up to a term dominated by $\varepsilon$, with the families defined by:
\[(x,y)\mapsto(0, (\Delta+\alpha) \cdot y)+ (s'_x(0) , \varepsilon^{-1}\cdot \lambda^{m^+}(\alpha) \cdot s'_y(\alpha)- \varepsilon^{-1}\cdot \lambda ^{m^+}(\alpha)\cdot\sigma_u^{n^\pm }(\alpha) \cdot \partial_y \YS_\alpha (0) \cdot q'_y(\alpha))\; . \]
\end{coro}
\begin{proof}[End of the proof of \cref{PPPaffine}]
Corollaries~\ref{condition for parablender} and \ref{c.bounds} reduce the family $(\cR g^\pm_\alpha)_\alpha$ to:
\[(x,y)\mapsto(0, (\Delta+\alpha) \cdot y)+ (s'_x(0) , \varepsilon^{-1}\cdot \lambda^{m^+}(\alpha) \cdot s'_y(\alpha)+ \varepsilon^{-1}\cdot \lambda ^{m^+}(0)\cdot\sigma_u^{n^\pm }(0) \cdot \partial_y \YS_0 (0) \cdot q'_y(0))\; . \]
As in \cref{Translating candidates},
$$\varepsilon^{-1}\cdot \lambda ^{m^+}(0)\cdot\sigma_u^{n^- }(0) \cdot \partial_y \YS_0 (0) \cdot q'_y(0)=2(\Delta-1)
\quad \mathrm{and}\quad \varepsilon^{-1}\cdot \lambda ^{m^+}(0)\cdot\sigma_u^{n^+ }(0) \cdot \partial_y \YS_0 (0) \cdot q'_y(0)=O(\varepsilon).$$
As we started with a strong $C^r$-paraheterocycle, all the first $r$ derivatives of $\alpha \mapsto s'_y(\alpha)$ equal $0$ at $0$. So by \cref{unfolding as we want}, we
can perturb $(f_a)_a$ so that $\alpha \mapsto s'_y(\alpha)$ has the same $r$-jet as the $C^r$-small function $ \alpha\mapsto \varepsilon\cdot \lambda^{-m^+}(\alpha) \cdot (\Delta-1) $ at $\alpha=0$. Then we obtain that $(\cR g^\pm_\alpha)_\alpha$ are $\delta$-$C^r$-close to:
\[(x,y)\mapsto(s'_x(0), (\Delta+{\underline a}lpha) \cdot y\pm (\Delta-1)) \;, \]
and hence defines a $\delta$-nearly affine $C^r$-parablender,
where $\delta$ is arbitrarily close to $0$ when $\varepsilon\to0$. By \cref{nearly are parablender},
one deduces that the maximal invariant set $K_a$ induced by the maps $g_a^+,g_a^-$ is a $C^r$-parablender.
Its activation domain seen in the chart $\cH_\alpha$
contains any germ $\alpha\mapsto z(\alpha)$ with $z(0)\in [-2,2]\times \{0\}$ and $\|\partial_\alpha z(\alpha)\|_{C^{r-1}}\leq \eta$,
where $\eta>0$ is a small number independent of $\varepsilon$.
Note that our perturbation satisfies $\cH_\alpha(S'_\alpha)=(s'_x(\alpha),\varepsilon(\Delta-1))$.
Combining with \cref{c.bounds}, item $1$, one concludes that the activation domain of $(K_\alpha)_{\alpha\in I}$
contains the germ of $(S'_\alpha)_{\alpha}$, and the germ of the source $(S_\alpha)$ at $\alpha=0$.
We also recall that $Q$ is homoclinically related to the (para)-blender.
\cref{PPPaffine} is proved.
\end{proof}
\begin{remark}\label{H4}
For each point $\underline x\in \overleftarrow K$, let $\gamma_{\underline x}$ be the unstable curve of $\underline x$
which is a graph over $[-2,2]$. The activation domain is obtained by considering the local unstable manifolds
of the form $(\TS)^{-1}(\gamma_{\underline x})$.
By assumption~\eqref{transversality strong hetro}, $W^u(Q)$ is transverse to $E^{cu}(S)$.
One deduces that the family of local unstable manifolds defining the activation domain of the parablender
satisfies the property announced in \cref{r.H4}.
\end{remark}
\bibliographystyle{alpha-like}
\bibliography{references}
\bigskip
\bigskip
\hspace{-3.2cm}
\footnotesize
\begin{tabular}{l l l l l}
\emph{\normalsize Pierre Berger}
& &
\emph{\normalsize Sylvain Crovisier}
& &
\emph{\normalsize Enrique Pujals}
\\
\texttt{[email protected]}
&&
\texttt{[email protected]}
&&
\texttt{[email protected]}
\\
Institut de Math. Jussieu-Paris Rive Gauche
&&
Laboratoire de Math\'ematiques d'Orsay
&& Graduate Center-CUNY
\\
Sorbonne Universit\'e, Univ. de Paris, CNRS,
&&
CNRS - UMR 8628, Univ. Paris-Saclay
&& New York, USA\\
F-75005 Paris, France
&&
Orsay 91405, France
&&
\end{tabular}
\end{document} |
\begin{document}
\begin{center}
{
\Large
An algorithm for calculating $D$-optimal designs for polynomial regression with prior information and its applications\\
}
Hiroto Sekido \footnote{[email protected]} \\
{\it Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto 606-8501, Japan.}
\end{center}
\begin{center}
\begin{minipage}{10.5cm}
{
\small
{\bf Abstract}:
Optimal designs are required to make efficient statistical experiments.
D-optimal designs for some models are calculated by using canonical moments.
On the other hand, integrable systems are dynamical systems whose solutions can be written down concretely.
In this paper, polynomial regression models with prior information are discussed.
In order to calculate D-optimal designs for these models,
a useful relationship between canonical moments and discrete integrable systems is used.
By using canonical moments and discrete integrable systems,
an algorithm for calculating D-optimal designs for these models is proposed.
Then some examples of applications of the algorithm are introduced.
{\it Key words}:
D-optimal design, canonical moment, polynomial regression model, discrete integrable system, nonautonomous discrete time Toda equation
}
\end{minipage}
\end{center}
\section{Introduction}
In statistics, design of experiments is a methodology to make efficient experiments.
Optimal designs have been studied by numerous authors in the literature.
Especially D-optimal designs have been investigated by many authors.
One of the approaches for D-optimal designs is to use canonical moments.
D-optimal designs and D${}_s$-optimal designs for polynomial regression models were studied in \cite{as33jtuw}.
In the calculation in \cite{as33jtuw}, an important point is to use canonical moments instead of ordinary moments,
and D-optimal designs can be identified by their canonical moments.
As we see in Section \ref{s:oiauf90a8df}, the objective function is written down in terms of canonical moments.
Besides this, D-optimal designs for various models have been calculated in \cite{nvcuisd987j, afygfuhs83, mdoiudj91j8d, mdoiudj91j9d, ajifa9d78rjio, asfa873hfi, fdrtd34ai, asfa831221i, jjdfayt8367, asiufad74, ioasf873j, ajsdfghh86, auifa987, auifa988, auifa989, ddesa3fdaea3}.
For example, D-optimal designs for weighted polynomial regression with a weight function $x^\alpha (1-x)^\beta$ were found explicitly in \cite{auifa988} by using canonical moments.
However, in most cases, an explicit form of the D-optimal design is unknown.
We here consider polynomial regression models with some prior information.
For example, D-optimal designs for this kind of models have been studied in \cite{afygfuhs83, asfa873hfi, jjdfayt8367}.
D-optimal designs for polynomial regression models with only odd (or even) degree terms are calculated in \cite{afygfuhs83}.
Polynomial regression without intercept is considered in \cite{jjdfayt8367}.
Polynomial regression models through the origin are considered in \cite{asfa873hfi}.
On the other hand,
the term {\it integrable systems} is used for nonlinear dynamical systems whose solutions can be written down concretely.
For Hamiltonian systems with finitely many degrees of freedom, integrability is defined via the Liouville--Arnold theorem \cite{aosdf39jjd}.
However, even now, there is no mathematical definition of integrability for nonlinear systems with infinitely many degrees of freedom.
This is why an explicitly solvable nonlinear system is called an integrable system.
Discrete integrable systems are discrete analogues of integrable systems.
That is, it is well-known that integrable systems can be discretized in such a way that the discretized systems are also solvable.
Discrete integrable systems have been applied to numerical analysis.
Typical examples are matrix eigenvalue algorithms \cite{cva09sau,o1h3ituw}, and
algorithms to compute matrix singular values \cite{daed35dsf}.
An algorithm for calculating D-optimal designs for polynomial regression through a fixed point is proposed in \cite{sekidos1}.
In \cite{sekidos1}, the relationship between canonical moments and discrete integrable systems is used.
In this paper, we propose an algorithm for constructing D-optimal designs for polynomial regression with prior information, which generalizes the algorithm of \cite{sekidos1};
the considered models include polynomial regression through the origin \cite{asfa873hfi} and some weighted polynomial regression models as particular cases.
Moreover the algorithm can be applied to some other optimal design problems, and it allows us to calculate a larger class of optimal designs.
That means that the relationship between canonical moments and discrete integrable systems expands the class of D-optimal designs which can be calculated.
\section{A preliminary} \label{s:oiauf90a8df}
This section gives a brief introduction to polynomial regression models, D-optimal designs, and canonical moments.
At first, we consider the following common linear regression model
\onemathN{
& Y = \theta^{\rm T} f(x) + \varepsilon,\\
& E[\varepsilon] = 0, \quad V[\varepsilon] = \sigma^2,
}
where $f(x) = (f_0(x) \; f_1(x) \; \cdots \; f_{m-1}(x))^{\rm T}$ denotes a known vector of linearly independent functions,
$\theta = (\theta_0 \; \theta_1 \; \cdots \; \theta_{m-1})^{\rm T}$ denotes an unknown vector of parameters,
and $\varepsilon$ denotes the error term.
Here we assume that each experiment has a statistically independent error term $\varepsilon$.
Let $\mathcal P_I$ be the set of probability measures on the Borel sets of $I$.
For a given $\mu \in \mathcal P_I$, let $c_k$ denote the $k$th moment $\int_I x^k {\rm d} \mu(x)$.
Suppose that the number of experiments is $n$, and that the experimental conditions $x_1, x_2, \ldots, x_n$ lie in the interval $[0, 1]$.
Here we identify the design $x_1, x_2, \ldots, x_n$ with the probability measure
$\mu \in \mathcal P_{[0,1]}$ such that $\mu(\{x\}) = \#\{k \mid x_k=x\} / n$.
Then a D-optimal design is defined as a probability measure $\mu \in \mathcal P_{[0,1]}$
which maximizes the determinant of the Fisher information matrix $M_f(\mu) = \int_0^1 f(x) f(x)^{\rm T} {\rm d} \mu (x)$.
In the case of $(m-1)$-th degree polynomial regression, that is, in the case where $f_k(x) = x^k$,
the D-optimal design is the optimal solution of the optimization problem
\onemath{
& \mbox{maximize}\; |c_{i+j}|_{i,j=0}^{m-1} \\
& \mbox{subject to}\; \mu \in \mathcal P_{[0,1]}.
}{opthoefho}
Note that, in the optimization problem, while $\mu(\{x\})$ should be a multiple of $1/n$ for all $x$,
we do not impose such a constraint.
Therefore we consider only the relaxed optimization problem.
See \cite{ifa34jy4u} for details.
In the optimization problem \eqref{opthoefho}, the objective function is written down in terms of moments.
However the constraint is complicated in terms of moments.
To simplify the constraint, sometimes the optimization problem is rephrased in terms of canonical moments.
Now, we define canonical moments.
Suppose we consider the set $\mathcal P_{[0,1]}$ of the probability measures on $[0,1]$.
For a given probability measure $\mu \in \mathcal P_{[0,1]}$, let $c_k^+$ denote the maximum of the
$k$-th moment over the set of all measures having the moments $c_0,c_1,\ldots,c_{k-1}$.
Similarly let $c_k^-$ denote the corresponding minimum.
Canonical moments are defined by
\onemath{ p_k = \frac{c_k - c_k^-}{c_k^+ - c_k^-}, \quad k=1,2,\ldots,N, }{eq:adfi8733}
where $N$ is the minimum of $j$ which satisfy $c_{j+1}^+ = c_{j+1}^-$.
If $c_j^+ > c_j^-$ for every positive integer $j$, we set $N = \infty$.
Canonical moments have the property that
\onemath{
\begin{cases}
p_k \in (0,1), & k=1,2,\ldots,N-1, \\
p_N \in \{0,1\}. &
\end{cases}
}{eq:asfuio7h}
Conversely, for a given arbitrary sequence $\{p_k\}_{k=1}^N$ which satisfies \eqref{eq:asfuio7h},
there is a unique $\mu \in \mathcal P_{[0,1]}$ which has the canonical moments $\{p_k\}_{k=1}^N$.
It is to be noted that canonical moments have a Hankel determinant expression.
Let $H_k^{(n)}, \overline H_k^{(n)}$ be the Hankel determinants defined by
\onemath{
H_k^{(n)} = \left|c_{i+j+n}\right|_{i,j=0}^{k-1} , \quad
\overline H_k^{(n)} = \left|c_{i+j+n} - c_{i+j+n+1}\right|_{i,j=0}^{k-1},
}{eq:ksajsdfoiuaf874}
where $k=1,2,\ldots, \;\; n=0,1,2,\ldots$.
We set $H_0^{(n)}=1$, and $H_k^{(n)} = 0$ if the matrix size $k$ is negative.
The canonical moments $p_k$ have the following Hankel determinant expression:
\onemathN{
p_{2k-1} = \frac{ H_{k}^{(1)} \overline H_{k-1}^{(0)} }{ H_{k}^{(0)} \overline H_{k-1}^{(1)} }, \quad
p_{2k} = \frac{ H_{k+1}^{(0)} \overline H_{k-1}^{(1)} }{ H_{k}^{(1)} \overline H_{k}^{(0)} }, \quad k=1,2,\ldots.
}
We introduce useful variables $\zeta_k$ by the following transformation of the canonical moments $p_k$:
\onemath{
& \zeta_0 = 0, \quad \zeta_1 = p_1, \\
& \zeta_k = (1-p_{k-1})p_k, \quad k=2,3,\ldots,N.
}{eq:iofaou4a}
Then $\zeta_k$ also have the Hankel determinant expression
\onemathN{
\zeta_{2k-1} = \frac{ H_{k}^{(1)} H_{k-1}^{(0)} }{ H_{k}^{(0)} H_{k-1}^{(1)} },\quad
\zeta_{2k} = \frac{ H_{k+1}^{(0)} H_{k-1}^{(1)} }{ H_{k}^{(1)} H_{k}^{(0)} },\quad
k=1,2,\ldots.
}
From this expression, the Hankel determinant $H_m^{(0)}$ can be expressed as a product of the $\zeta_k$ or of the canonical moments $p_k$,
\onemath{
H_m^{(0)} &= \prod_{k=1}^{m-1} (\zeta_{2k-1} \zeta_{2k})^{m-k} \\
&= \left( \prod_{j=1}^{m-1} (1-p_{2j})^{m-j-1} p_{2j}^{m-j} \right) \prod_{j=1}^{m-1} ((1-p_{2j-1}) p_{2j-1})^{m-j},
}{eq:hfoais}
where $2m-2 \leq N$.
D-optimal designs for polynomial regression are calculated in \cite{as33jtuw} through the expression \eqref{eq:hfoais}.
Similar approaches for other D-optimal designs are described in the book \cite{asdfjtuw}.
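As a concrete numerical illustration, the following short Python sketch (the test design is the classical D-optimal design for quadratic regression on $[0,1]$, namely equal masses $1/3$ at $0$, $1/2$ and $1$) computes canonical moments directly from the Hankel determinants $H_k^{(n)}$ and $\overline H_k^{(n)}$ defined above; the printed values $p_1=1/2$, $p_2=2/3$, $p_3=1/2$, $p_4=1$ agree with the values obtained by maximizing \eqref{eq:hfoais}.
\begin{verbatim}
# Illustrative sketch: canonical moments of the design with equal masses
# 1/3 at 0, 1/2, 1, computed via the Hankel determinant expressions.
from fractions import Fraction

support = [Fraction(0), Fraction(1, 2), Fraction(1)]
weights = [Fraction(1, 3)] * 3

def c(k):                      # ordinary moments c_k of the design
    return sum(w * x**k for w, x in zip(weights, support))

def det(m):                    # exact determinant by Laplace expansion
    if not m:
        return Fraction(1)
    return sum((-1)**j * m[0][j] * det([r[:j] + r[j+1:] for r in m[1:]])
               for j in range(len(m)))

def H(k, n):                   # Hankel determinant H_k^{(n)}
    return det([[c(i + j + n) for j in range(k)] for i in range(k)])

def Hbar(k, n):                # Hankel determinant \bar H_k^{(n)}
    return det([[c(i + j + n) - c(i + j + n + 1) for j in range(k)]
                for i in range(k)])

def p(j):                      # canonical moment p_j
    if j % 2 == 1:
        k = (j + 1) // 2
        return H(k, 1) * Hbar(k - 1, 0) / (H(k, 0) * Hbar(k - 1, 1))
    k = j // 2
    return H(k + 1, 0) * Hbar(k - 1, 1) / (H(k, 1) * Hbar(k, 0))

print([p(j) for j in range(1, 5)])    # [1/2, 2/3, 1/2, 1]
\end{verbatim}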
\section{An algorithm for calculating D-optimal designs for polynomial regression with prior information}
In this section, we consider polynomial regression with prior information.
In Subsection \ref{ss::fiu}, polynomial regression with prior information is defined and formulated as a linear regression.
Then we propose an algorithm for calculating the D-optimal design for polynomial regression with prior information.
\subsection{Generalized canonical moments and the nonautonomous discrete time Toda equation}
At first, we generalize ordinary moments.
For given moments $\{c_k\}_{k=0}^\infty$, let $c_k^{(T)}$ be defined by
\onemathN{
c_k^{(T \uplus \{\lambda\})} = c_{k+1}^{(T)} - \lambda c_k^{(T)}, \;\; c_k^{(\phi)} = c_k,
}
where $T$ denotes a multiset.
Let $H_k^{(T)}$ be the corresponding Hankel determinant $|c_{i+j}^{(T)}|_{i,j=0}^{k-1}$.
Then canonical moments $p_k$ and variables $\zeta_k$ are expressed as
\onemath{
& p_{2k} = - \frac{H_{k+1}^{(\phi)} H_{k-1}^{(\{0,-1\})}}{ H_k^{(\{0\})} H_k^{(\{-1\})} },
\;\;
p_{2k+1} = - \frac{H_{k+1}^{(\{0\})} H_{k}^{(\{-1\})}}{ H_{k+1}^{(\phi)} H_k^{(\{0,-1\})} },
\\
& \zeta_{2k} = \frac{H_{k+1}^{(\phi)} H_{k-1}^{(\{0\})}}{ H_k^{(\{0\})} H_k^{(\phi)} },
\;\;
\zeta_{2k+1} = \frac{H_{k+1}^{(\{0\})} H_{k}^{(\phi)}}{ H_{k+1}^{(\phi)} H_k^{(\{0\})} }.
}{eq;oiad90f}
We define generalized canonical moments $p_k^{(T)}$ and variables $\zeta_k^{(T,s)}$ by
\onemath{
& p_{2k}^{(T)} = - \frac{H_{k+1}^{(T)} H_{k-1}^{(T \uplus \{0,-1\})}}{ H_k^{(T \uplus \{0\})} H_k^{(T \uplus \{-1\})} },
\;\;
p_{2k+1}^{(T)} = - \frac{H_{k+1}^{(T \uplus \{0\})} H_{k}^{(T \uplus \{-1\})}}{ H_{k+1}^{(T)} H_k^{(T \uplus \{0,-1\})} },
\\
& \zeta_{2k}^{(T,s)} = \frac{H_{k+1}^{(T)} H_{k-1}^{(T \uplus \{s\})}}{ H_k^{(T \uplus \{s\})} H_k^{(T)} },
\;\;
\zeta_{2k+1}^{(T,s)} = \frac{H_{k+1}^{(T \uplus \{s\})} H_{k}^{(T)}}{ H_{k+1}^{(T)} H_k^{(T \uplus \{s\})} }.
}{eq;oiad90g}
It is clear that $p_k^{(\phi)} = p_k, \zeta_k^{(\phi,0)} = \zeta_k$.
While the generalized canonical moments do not satisfy a property like \eqref{eq:asfuio7h},
they do satisfy a relation analogous to \eqref{eq:iofaou4a}, that is,
\onemath{
& \zeta_0^{(T)} = 0, \quad \zeta_1^{(T)} = p_1^{(T)}, \\
& \zeta_k^{(T,0)} = (1-p_{k-1}^{(T)})p_k^{(T)}, \quad k=2,3,\ldots,N.
}{eq;;as9f}
Additionally, the variables $\zeta_k^{(T,s)}$ give a determinant solution of the nonautonomous discrete time Toda equation,
namely, they satisfy the nonautonomous discrete time Toda equation
\onemath{
& \zeta_{2k}^{(T \uplus \{\lambda_1\}, \lambda_2)} + \zeta_{2k+1}^{(T \uplus \{\lambda_1\}, \lambda_2)} + \lambda_2 = \zeta_{2k+1}^{(T,\lambda_1)} + \zeta_{2k+2}^{(T,\lambda_1)} + \lambda_1, \\
& \zeta_{2k+1}^{(T \uplus \{\lambda_1\}, \lambda_2)} \zeta_{2k+2}^{(T \uplus \{\lambda_1\}, \lambda_2)} = \zeta_{2k+2}^{(T,\lambda_1)} \zeta_{2k+3}^{(T,\lambda_1)}.
}{eq:asiu9aToda}
From \eqref{eq:asiu9aToda}, we obtain the following formula
\onemath{
& \zeta_{2k}^{(T, \lambda_1)} + \zeta_{2k+1}^{(T, \lambda_1)} + \lambda_1 = \zeta_{2k+1}^{(T,\lambda_2)} + \zeta_{2k+2}^{(T,\lambda_2)} + \lambda_2, \\
& \zeta_{2k+1}^{(T, \lambda_1)} \zeta_{2k+2}^{(T, \lambda_1)} = \zeta_{2k+2}^{(T,\lambda_2)} \zeta_{2k+3}^{(T,\lambda_2)}.
}{eq:dafopfa90}
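To illustrate these relations, the following Python sketch (the test measure, the multiset $T=\{\lambda_1\}$ and the shifts $\lambda_1,\lambda_2$ are arbitrary choices made for this illustration) builds the generalized moments $c_k^{(T)}$, the Hankel determinants $H_k^{(T)}$ and the quantities $\zeta_k^{(T,s)}$, and checks both equations of \eqref{eq:asiu9aToda} in exact rational arithmetic for small indices.
\begin{verbatim}
# Illustrative numerical check of the nonautonomous discrete time Toda
# relations for an arbitrary test measure (5 equally weighted points).
from fractions import Fraction

support = [Fraction(j, 4) for j in range(5)]       # {0, 1/4, 1/2, 3/4, 1}
weights = [Fraction(1, 5)] * 5

def c(k):                                          # ordinary moments c_k
    return sum(w * x**k for w, x in zip(weights, support))

def cT(T, k):                                      # generalized moments c_k^{(T)}
    if not T:
        return c(k)
    return cT(T[1:], k + 1) - T[0] * cT(T[1:], k)

def det(mtx):                                      # exact determinant (tiny matrices)
    if not mtx:
        return Fraction(1)
    return sum((-1)**j * mtx[0][j] * det([r[:j] + r[j+1:] for r in mtx[1:]])
               for j in range(len(mtx)))

def H(k, T):                                       # Hankel determinant H_k^{(T)}
    if k < 0:
        return Fraction(0)
    return det([[cT(T, i + j) for j in range(k)] for i in range(k)])

def zeta(k, T, s):                                 # zeta_k^{(T,s)}
    Ts, j = T + [s], k // 2
    if k % 2 == 0:
        return H(j + 1, T) * H(j - 1, Ts) / (H(j, Ts) * H(j, T))
    return H(j + 1, Ts) * H(j, T) / (H(j + 1, T) * H(j, Ts))

l1, l2 = Fraction(2), Fraction(3)
for k in (0, 1):                                   # both Toda relations, small k
    lhs1 = zeta(2*k, [l1], l2) + zeta(2*k + 1, [l1], l2) + l2
    rhs1 = zeta(2*k + 1, [], l1) + zeta(2*k + 2, [], l1) + l1
    lhs2 = zeta(2*k + 1, [l1], l2) * zeta(2*k + 2, [l1], l2)
    rhs2 = zeta(2*k + 2, [], l1) * zeta(2*k + 3, [], l1)
    print(lhs1 == rhs1, lhs2 == rhs2)              # expected: True True
\end{verbatim}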
\subsection{D-optimal designs for polynomial regression with prior information} \label{ss::fiu}
We consider the $(m+S-1)$-th degree of polynomial regression
\onemath{
& Y = \sum_{k=0}^{m+S-1} \theta_k x^k + \varepsilon,\\
& E[\varepsilon] = 0, \quad V[\varepsilon] = \sigma^2
}{eq:oiaufoa9}
with the $S = \sum_{j=0}^{l-1} b_j$ values as prior information
\onemath{
\left. \frac{{\rm d}^k}{{\rm d} x^k} g(x) \right|_{x=\beta_j}, \quad 0 \leq j < l, \quad 0 \leq k < b_j,
}{eq:oripr}
where $g(x) = E[Y|x] = \sum_{k=0}^{m+S-1} \theta_k x^k$,
$b_0, b_1, \ldots, b_{l-1}$ are positive integers,
and $\beta_0, \beta_1, \ldots, \beta_{l-1}$ are arbitrary distinct real values.
That means we consider the cases where the exact $S$ values \eqref{eq:oripr} are known before experiments.
Note that $\beta_k$ do not have to be in $\mathcal X = [0,1]$.
We call the model \eqref{eq:oiaufoa9} ${\rm PRM}_m(\beta, b)$, for short, where $\beta = (\beta_0, \beta_1, \ldots, \beta_{l-1})$, $b = (b_0, b_1, \ldots, b_{l-1})$.
There are multiple ways to regard the model ${\rm PRM}_m(\beta, b)$ as a linear regression model.
However the D-optimal design for the model ${\rm PRM}_m(\beta, b)$ is defined uniquely,
since D-optimal designs for linear regression models depend only on the linear space spanned by the basis functions (see \cite[Theorem 5.5.1]{asdfjtuw}).
The D-optimal designs are formulated as the following theorem.
A proof of the theorem is given in Appendix.
\begin{theo} \label{theo:a89fas}
The D-optimal design for polynomial regression with prior information ${\rm PRM}_m(\beta, b)$ is defined as the optimal solution of the optimization problem
\onemath{
& \mbox{maximize}\; H_m^{(T)} \\
& \mbox{subject to}\; \mu \in \mathcal P_{[0,1]}.
}{oofdifufyu}
where the multiset $T$ is chosen so that the multiplicity of $\beta_k$ in $T$ is $2 b_k$; that is, if $m_T(x)$ denotes the multiplicity function, then
\onemath{
m_T(x) = \begin{cases}
2b_k & (x=\beta_k) \\
0 & (\mbox{\rm otherwise})
\end{cases}.
}{eq:foiuefaoT}
\end{theo}
Note that the D-optimal design for ${\rm PRM}_m(\beta, b)$ is equivalent to the D-optimal design for
weighted polynomial regression model
\onemathN{
& Y = \sum_{k=0}^{m-1} \theta_k x^k + \varepsilon,\\
& E[\varepsilon] = 0, \quad V[\varepsilon] = \frac{\sigma^2}{w(x)}, \quad w(x) = \prod_{k=0}^{l-1} (x-\beta_k)^{2b_k}.
}
Here we propose an algorithm for rephrasing the optimization problem \eqref{oofdifufyu} in terms of canonical moments.
In the algorithm, the two formulas \eqref{eq:afa908}, \eqref{eq:a90dfadk} and the nonautonomous discrete time Toda equation \eqref{eq:asiu9aToda} are used.
At first, we describe the objective function $H_m^{(T)}$ by using the values $\zeta_k^{(T,s)}$ corresponding to the generalized canonical moments.
We obtain the formula which is similar to \eqref{eq:hfoais}
\onemath{
H_m^{(T)} = \left( c_0^{(T)} \right)^m \prod_{j=1}^{m-1} \left( \zeta_{2j-1}^{(T,0)} \zeta_{2j}^{(T,0)} \right)^{m-j}.
}{eq:afa908}
Then we describe the objective function $H_m^{(T)}$ in terms of the values $\zeta_k^{(\phi,0)}$ corresponding to the canonical moments.
Here by using the nonautonomous discrete time Toda equation \eqref{eq:asiu9aToda} and the formula \eqref{eq:dafopfa90},
$\zeta_k^{(T,s)}$ can be expressed in terms of $\zeta_1^{(\phi,0)}, \zeta_2^{(\phi,0)}, \ldots$.
We also obtain the following formula for $c_0^{(T)}$:
\onemath{
\zeta_1^{(T,s)} = \frac{c_0^{(T \uplus \{s\})}}{c_0^{(T)}}, \quad c_0^{(\phi)} = 1,
}{eq:a90dfadk}
so that $c_0^{(T)}$ can also be expressed in terms of $\zeta_1^{(\phi,0)}, \zeta_2^{(\phi,0)}, \ldots$ by using this formula.
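Indeed, writing the multiset as $T=\{s_1,\ldots,s_r\}$ and iterating \eqref{eq:a90dfadk} along $T_0=\phi$, $T_j=T_{j-1}\uplus\{s_j\}$, we obtain the telescoping product
\[
c_0^{(T)} = \prod_{j=1}^{r} \zeta_1^{(T_{j-1},\,s_j)},
\]
and each factor $\zeta_1^{(T_{j-1},\,s_j)}$ is in turn expressed in terms of $\zeta_1^{(\phi,0)}, \zeta_2^{(\phi,0)}, \ldots$ as above.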
Lastly we describe the objective function $H_m^{(T)}$ in terms of canonical moments by using the relationship \eqref{eq:iofaou4a}.
Putting this all together,
the proposed algorithm for calculating the D-optimal design for polynomial regression with prior information \eqref{eq:oiaufoa9}
is described as follows.
\noindent
\begin{center}
\framebox[12cm][c]{
\begin{minipage}{11.5cm}
{\bf The algorithm for calculating D-optimal designs for ${\rm PRM}_m(\beta, b)$}
\begin{enumerate}[Step 1.] \setlength{\parskip}{0pt} \setlength{\itemsep}{0pt}
\item
By using the formula \eqref{eq:afa908},
describe the objective function $H_{m}^{(T)}$ in terms of $\zeta_k^{(T,s)}$ and $c_0^{(T)}$.
\item
By using the nonautonomous discrete time Toda equation \eqref{eq:asiu9aToda} and the formulas \eqref{eq:dafopfa90}, \eqref{eq:a90dfadk},
describe the objective function $H_{m}^{(T)}$ in terms of $\zeta_k^{(\phi,0)} = \zeta_k$.
\item
By using the relationship \eqref{eq:iofaou4a},
describe the objective function $H_{m}^{(T)}$ in terms of canonical moments $p_k$.
\item
Find canonical moments which maximize the objective function $H_{m}^{(T)}$.
\end{enumerate}
\end{minipage}
}
\end{center}
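As a purely illustrative sketch of Step 4 (the numerical maximization), assuming the objective has already been written, via Steps 1--3, as a function of the canonical moments $p=(p_1,\ldots,p_K)$, one may proceed as follows; the function \texttt{objective} below is a hypothetical placeholder, not the expression produced by our formulas, and the sketch assumes the numpy and scipy libraries.
\begin{verbatim}
# A minimal numerical sketch of Step 4 only.
# `objective` is a hypothetical placeholder standing in for H_m^{(T)}
# written in terms of the canonical moments p = (p_1, ..., p_K).
import numpy as np
from scipy.optimize import minimize

def objective(p):
    return float(np.prod(p * (1.0 - p)))    # placeholder only

K = 5                                       # number of free canonical moments
res = minimize(lambda p: -objective(p),     # maximize = minimize the negative
               x0=np.full(K, 0.5),
               bounds=[(1e-6, 1.0 - 1e-6)] * K,
               method="L-BFGS-B")
print(res.x)                                # approximate maximizing moments
\end{verbatim}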
\section{Application of the algorithm for calculating some D-optimal designs}
In this section, we present two applications of our algorithm.
In Subsection \ref{sss:df98dss}, robust D-optimal designs for approximate polynomial regression with prior information are considered.
This model generalizes the model considered in \cite{asfa831221i}.
In Subsection \ref{sss:asd9f9ai}, maximin optimal designs for estimating a function of the parameters in weighted polynomial regression are considered.
This setting generalizes the case considered in \cite{ajifa9d78rjio}.
\subsection{Robust D-optimal designs for approximate polynomial regression with prior information} \label{sss:df98dss}
In this subsection, let the design space be $\mathcal X = [-1,1]$ instead of $[0,1]$.
There is a one-to-one correspondence between $\mu \in \mathcal P_{[0,1]}$ and a symmetric measure $\xi \in \mathcal P_{[-1,1]}$
such that
\onemathN{
\mu([0,x^2]) = \xi([-x,x]).
}
The approximate polynomial regression model is described as
\onemathN{
& Y = \sum_{k=0}^{m-1} \theta_k x^k + x^m \psi(x) + \varepsilon, \\
& E[\varepsilon] = 0, \quad V[\varepsilon] = \sigma^2,
}
where $\psi$ denotes an unknown function.
Let $\hat \theta (\psi)$ be the best linear unbiased estimator obtained when the estimation is carried out as if $\psi(x)$ were identically $0$.
Then we obtain
\onemathN{
& E[\hat \theta (\psi) - \hat \theta (0)] = \left( B_m^{(\phi)}(\xi) \right)^{-1} r(\psi), \\
& V[\hat \theta (\psi)] = (\sigma^2 / n) \left( B_m^{(\phi)}(\xi) \right)^{-1}
}
where $\xi$ is a design, $n$ is the number of observations, and
\onemathN{
& r(\psi) = \int_{-1}^1 (1,x,\ldots,x^{m-1})^{\rm T} x^m \psi(x) {\rm d} \xi(x),\\
& B_m^{(\phi)} = (c_{i+j}(\xi))_{i,j=0}^{m-1}.
}
For a given continuous function $\eta(x)$ and a positive number $d$, the maximin optimal design is defined as the optimal solution of
\onemathN{
& \mbox{maximize}\; H_m^{(\phi)}(\xi) \\
& \mbox{subject to}\; \xi \in \mathcal P_{[-1,1]}, \;\; \sup_{|\psi| \leq |\eta|}r(\psi)^{\rm T} (B_m^{(\phi)})^{-1} r(\psi) \leq d.
}
In \cite{asfa831221i}, the case where
\onemath{
\eta(x) = |x|^\alpha, \;\; \alpha \in {\mathbb Z}_{\geq 0}
}{eq:aioufa9f8}
is considered, and it is shown that the constraint $\sup_{|\psi| \leq |\eta|} r(\psi)^{\rm T} (B_m^{(\phi)})^{-1} r(\psi) \leq d$ is described in terms of canonical moments as
\onemath{
\sup_{|\psi| \leq |\eta|}r(\psi)^{\rm T} (B_m^{(\phi)})^{-1} r(\psi) = \sum_{i = \lfloor \alpha / 2 \rfloor + 1}^{\lfloor (m+\alpha)/2 \rfloor} S_{i, m+\alpha-i}(\mu)^2 \prod_{j=1}^{m+\alpha-2i} \zeta_j (\mu),
}{eq:aiofa908a}
where $S_{i,j}(\mu)$ is defined recursively as
\onemathN{
& S_{i,j}(\mu) = 0, \quad 0 \leq j < i, \\
& S_{i,j}(\mu) = 1, \quad i=0, \;\; j > 0, \\
& S_{i,j}(\mu) = S_{i,j-1}(\mu) + \zeta_{j-i+1}(\mu) S_{i-1,j}(\mu), \quad 0 < i \leq j.
}
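For orientation, the first values produced by this recursion are
\[
S_{1,1}(\mu)=\zeta_1(\mu),\qquad
S_{1,2}(\mu)=\zeta_1(\mu)+\zeta_2(\mu),\qquad
S_{2,2}(\mu)=\zeta_1(\mu)\bigl(\zeta_1(\mu)+\zeta_2(\mu)\bigr).
\]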
Now we consider the approximate polynomial regression model with prior information, namely,
\onemath{
& Y = \sum_{k=0}^{2S+m-1} \theta_k x^k + x^{2S+m} \psi(x) + \varepsilon, \\
& E[\varepsilon] = 0, \quad V[\varepsilon] = \sigma^2,
}{eq:apo90af8f}
with the $2S$ values as prior information
\onemath{
& \left. \frac{{\rm d}^k}{{\rm d} x^k} g(x) \right|_{x=\beta_j},\\
& \left. \frac{{\rm d}^k}{{\rm d} x^k} g(x) \right|_{x=-\beta_j}, \quad 0 \leq j < l, \quad 0 \leq k < b_j,
}{eq:sdapufa90}
where $g(x) = \sum_{k=0}^{2S+m-1} \theta_k x^k$ denotes the polynomial part of $E[Y|x]$, and $\beta_0, \beta_1, \ldots, \beta_{l-1}$ are arbitrary distinct real values.
Note that the prior information \eqref{eq:sdapufa90} must be symmetric with respect to the origin.
By a calculation similar to that of \cite{asfa831221i}, we can show that the optimization problem corresponding to the model \eqref{eq:apo90af8f} is the following:
\onemath{
& \mbox{maximize}\; H_m^{(T')}(\xi) \\
& \mbox{subject to}\; \xi \in \mathcal P_{[-1,1]}, \;\; \sup_{|\psi| \leq |\eta|}r^{(T')}(\psi)^{\rm T} (B_m^{(T')})^{-1} r^{(T')}(\psi) \leq d,
}{auio3pfau}
where $T'$ denotes the multiset satisfying
\onemathN{
m_{T'}(x) = \begin{cases}
2b_k & (x=\beta_k) \\
2b_k & (x=-\beta_k) \\
0 & (\mbox{\rm otherwise})
\end{cases},
}
and
\onemathN{
& B^{(T')}_m (\xi) = (c_{i+j}^{(T')})_{i,j=0}^{m-1}, \\
& r^{(T')} (\psi) = \int_{-1}^1 (1,x,\ldots,x^{m-1})^{\rm T} x^m \psi(x) \left( \prod_{j=0}^{l-1} (x-\beta_j)(x+\beta_j) \right) {\rm d} \xi(x).
}
The constraint of \eqref{auio3pfau} corresponding to \eqref{eq:aiofa908a} is described in terms of generalized canonical moments as
\onemathN{
(c_0^{(T')}(\xi))^{-1} \sum_{i = \lfloor \alpha / 2 \rfloor + 1}^{\lfloor (m+\alpha)/2 \rfloor} S_{i, m+\alpha-i}^{(T)}(\mu)^2 \prod_{j=1}^{m+\alpha-2i} \zeta_j^{(T,0)} (\mu),
}
where $T$ is the same as \eqref{eq:foiuefaoT}, and
\onemathN{
& S_{i,j}^{(T)}(\mu) = 0, \quad 0 \leq j < i, \\
& S_{i,j}^{(T)}(\mu) = 1, \quad i=0, \;\; j > 0, \\
& S_{i,j}^{(T)}(\mu) = S_{i,j-1}^{(T)}(\mu) + \zeta_{j-i+1}^{(T,0)}(\mu) S_{i-1,j}^{(T)}(\mu), \quad 0 < i \leq j.
}
Hence we can obtain the expression of the optimization problem corresponding to \eqref{eq:apo90af8f} in terms of canonical moments by using our algorithm.
\subsection{Maximin optimal designs for estimating a function of parameters on weighted polynomial regression} \label{sss:asd9f9ai}
Consider the weighted polynomial regression
\onemathN{
& Y = \sum_{k=0}^{m-1} \theta_k x^k + \varepsilon, \\
& E[\varepsilon] = 0, \quad V[\varepsilon] = \sigma^2 w(x), \quad w(x) = \prod_{k=0}^{l-1} (x-\beta_k)^{2b_k}.
}
In this subsection, we consider an optimal design for estimating $g_{m-1}(\theta_{m-1}) + g_{m-2}(\theta_{m-2}) + \cdots + g_{0}(\theta_{0})$,
where $g_k$ is a polynomial.
In this case, the inverse of the asymptotic variance of the estimator is
\onemathN{
\gamma(\mu,\theta) = \sum_{k=0}^{m-1} \left( \frac{{\rm d}}{{\rm d}\theta_{k}}g_k(\theta_{k}) \right)^2 \psi_{k}^{(1)}(\mu),
}
where $\psi_k^{(1)}(\mu) = H_{k+1}^{(T)} / H_{k}^{(T)}$, and $T$ is the same as \eqref{eq:foiuefaoT}.
Since the variance of the estimator depends on the unknown parameters $\theta_k$, we consider
the maximin optimal design, defined as the optimal solution of the optimization problem
\onemathN{
& \mbox{maximize}\; \min_{\theta \in \Theta} \gamma(\mu,\theta) \\
& \mbox{subject to}\; \mu \in \mathcal P_{[0,1]}
}
where $\Theta$ is a given parameter space.
Suppose the parameter space is $\Theta = \{\theta \;|\; s_k \leq \theta_k \leq t_k \}$.
From \cite[Theorem 3.1]{ajifa9d78rjio}, the optimal solution of the optimization problem
\onemath{
& \mbox{maximize}\; \int_{\Theta} \gamma(\mu,\theta)^p {\rm d}\pi (\theta) \\
& \mbox{subject to}\; \mu \in \mathcal P_{[0,1]}.
}{eq:aospfa908}
converges weakly to the maximin optimal design as $p \to -\infty$, where
\onemathN{
& {\rm d}\pi(\theta) = {\rm d}\theta \prod_{k=0}^{m-1} h_k(\theta_k), \\
& h_k(\theta_k) = \begin{cases}
\displaystyle \frac{{\rm d}}{{\rm d}\theta_{k}} \left( \frac{{\rm d}}{{\rm d}\theta_{k}}g_k(\theta_{k}) \right)^2 & (\deg g_k \geq 2) \\
\displaystyle 1 & (\mbox{otherwise})
\end{cases}.
}
Then we can calculate the integral in the objective function of \eqref{eq:aospfa908},
and we can express $\psi_k^{(1)}(\mu)$ in terms of canonical moments by our algorithm.
Therefore, after expressing the optimization problem \eqref{eq:aospfa908} in terms of canonical moments,
we can obtain an approximate maximin optimal design by solving \eqref{eq:aospfa908} numerically for small (i.e., large negative) $p$.
\appendix
\section{The proof of Theorem \ref{theo:a89fas}}
It can be shown that the optimization problem \eqref{oofdifufyu} corresponding to ${\rm PRM}_m(\beta, b)$ is the D-optimal design problem for the linear regression model with the vector of basis functions
\onemath{
f(x) = \left( \prod_{j=0}^{l-1} (x - \beta_j)^{b_j} \right) (1, x, \ldots, x^{m-1})^{\rm T}.
}{eqjidioafy}
Let $g_k(x) = (1, x, \ldots, x^{k-1})^{\rm T}$ be the vector of basis functions for polynomial regression;
then the vector \eqref{eqjidioafy} of basis functions corresponding to ${\rm PRM}_m((\beta_0,\beta_1,\ldots,\beta_{l-1})$, $(b_0,b_1,\ldots,b_{l-1}))$ is expressed as
\onemath{
f(x) = \prod_{j=0}^{l-1} (x - \beta_j)^{b_j} g_m(x).
}{eqdiduad}
To prove Theorem \ref{theo:a89fas}, it suffices to show that ${\rm PRM}_m((\beta_0,\beta_1,\ldots,\beta_{l-1})$, $(b_0,b_1,\ldots,b_{l-1}))$ corresponds to the vector \eqref{eqdiduad} of basis functions.
We prove this by induction on $S=\sum_j b_j$; when $S=0$, that is, when there is no prior information, the model is ordinary polynomial regression whose vector of basis functions is $g_m(x)$, so \eqref{eqdiduad} holds.
By the symmetry of the roles of the pairs $(\beta_j, b_j)$ we may work with $\beta_0$ and $b_0$, and as the induction hypothesis we assume that
${\rm PRM}_{m+1}((\beta_0,\beta_1,\ldots,\beta_{l-1})$, $(b_0-1,b_1,\ldots,b_{l-1}))$ corresponds to the vector of basis functions
\onemathN{
f(x) = (x - \beta_0)^{b_0-1} \left( \prod_{j=1}^{l-1} (x - \beta_j)^{b_j} \right) g_{m+1}(x).
}
Let $M(x) = \prod_{j=1}^{l-1} (x - \beta_j)^{b_j}$, then the linear regression model ${\rm PRM}_{m+1}((\beta_0,\beta_1,\ldots,\beta_{l-1})$, $(b_0-1,b_1,\ldots,b_{l-1}))$
is described as
\onemathN{
Y = (x - \beta_0)^{b_0-1} M(x) \sum_{k=0}^{m} \theta_k x^k + \varepsilon.
}
Thus the linear regression model ${\rm PRM}_m((\beta_0,\beta_1,\ldots,\beta_{l-1})$, $(b_0,b_1,\ldots,b_{l-1}))$ is described as
\onemath{
Y = (x - \beta_0)^{b_0-1} M(x) \sum_{k=0}^{m} \theta_k x^k + \varepsilon.
}{aioufoaidfa}
with one given value as prior information
\onemath{
\left. \frac{{\rm d}^{b_0-1}}{{\rm d} x^{b_0-1}} \left( (x - \beta_0)^{b_0-1} M(x) \sum_{k=0}^{m} \theta_k x^k \right) \right|_{x=\beta_0}.
}{auifdfa98ua}
Since
\onemathN{
\left. \frac{{\rm d}^k}{{\rm d} x^k} (x - \beta_0)^{b_0-1} \right|_{x=\beta_0} = 0, \quad k = 0, 1, \ldots, b_0-2,
}
the value \eqref{auifdfa98ua} becomes
\onemathN{
(b_0 -1)! M(\beta_0) \sum_{k=0}^m \theta_k \beta_0^k
}
by the general Leibniz rule.
Hence we obtain the value
\onemath{
\alpha = \sum_{k=0}^m \theta_k \beta_0^k
}{eq:iaufopia}
from prior information \eqref{auifdfa98ua}.
Substituting $\theta_0 = \alpha - \sum_{k=1}^m \theta_k \beta_0^k$, obtained from \eqref{eq:iaufopia}, into the model \eqref{aioufoaidfa},
we obtain
\onemath{
& Y = (x - \beta_0)^{b_0-1} M(x) \left( \sum_{k=1}^{m} \theta_k x^k - \sum_{k=1}^m \theta_k \beta_0^k + \alpha \right) + \varepsilon.
}{eq:afliauf9}
When we obtain a response $y_k$ by an observation at the experimental condition $x_k$,
we can easily calculate the value $y_k - (x_k-\beta_0)^{b_0-1} M(x_k) \alpha$.
Thus we can remove the known term $(x-\beta_0)^{b_0-1} M(x) \alpha$ from the model \eqref{eq:afliauf9}, and we obtain the model
\onemath{
& Y = (x - \beta_0)^{b_0-1} M(x) \sum_{k=1}^{m} \theta_k (x^k - \beta_0^k) + \varepsilon.
}{eq:adfsaauf9}
Here the vector of basis functions corresponding to the model \eqref{eq:adfsaauf9} is
\onemath{
f(x) = (x - \beta_0)^{b_0-1} M(x)
\begin{pmatrix}
x - \beta_0 \\
x^2 - \beta_0^2 \\
\vdots \\
x^m - \beta_0^m
\end{pmatrix}.
}{eq:adfpoaipfo}
Let the non-singular matrix $A$ be
\onemathN{
A = \begin{pmatrix}
1 & 0 & 0 & \cdots & 0 \\
-\beta_0 & 1 & 0 & \cdots & 0 \\
0 & -\beta_0 & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1 \\
\end{pmatrix}
}
then, multiplying the vector \eqref{eq:adfpoaipfo} of basis functions by $A$ from the left gives
\onemathN{
Af(x)
&= (x - \beta_0)^{b_0-1} M(x)
\begin{pmatrix}
x - \beta_0 \\
x^2 - \beta_0^2 - \beta_0 (x - \beta_0) \\
\vdots \\
x^m - \beta_0^m - \beta_0 (x^{m-1} - \beta_0^{m-1})
\end{pmatrix} \\
&= (x - \beta_0)^{b_0-1} M(x)
\begin{pmatrix}
x - \beta_0 \\
x (x - \beta_0) \\
\vdots \\
x^{m-1} (x - \beta_0)
\end{pmatrix} \\
&= (x - \beta_0)^{b_0} M(x) g_m(x).
}
Thus we obtain \eqref{eqdiduad}, which completes the induction.
\end{document} |
\begin{document}
\title[Iterated function systems with a given continuous stationary distribution]{Iterated function systems with a given continuous stationary distribution
}
\author{ \"{O}rjan Stenflo}
\email{[email protected]}
\subjclass[2000]{
Primary: 60J05,
Secondary: 28A80, 37H99, 60F05, 65C05}
\keywords{ Iterated Function Systems, Markov Chain Monte Carlo}
\address{Department of Mathematics,
Uppsala University,
751 06 Uppsala,
Sweden
}
\begin{abstract}
For any continuous probability measure $\mu$ on ${\mathbb R}$ we construct an IFS with probabilities having $\mu$ as its unique measure-attractor.
\end{abstract}
\maketitle
\section{Introduction}
In 1981 Hutchinson \cite{Hutchinson81} presented a theory of fractals and measures supported on fractals based on iterations of functions.
Let $\{ {\mathbb R}^d; f_i,p_i,\ i=1,...,n \}$
be an iterated function system with probabilities (IFSp). That is, $f_i: {\mathbb R}^d \rightarrow {\mathbb R}^d$, $i=1,...,n$, are functions and $p_i$ are associated non-negative numbers with $\sum_{i=1}^n p_i=1$. If the maps
$f_i: {\mathbb R}^d \rightarrow {\mathbb R}^d $ are contractions, i.e.\ if there exists a constant $c<1$ such that $|f_i(x) - f_i(y)| \leq c |x-y|$, for all $x,y \in {\mathbb R}^d$, then there exists a unique nonempty compact set $A$ satisfying
\begin{equation} \label{Attr}
A = \cup_{i=1}^n f_i(A)=
\{ \lim_{k \rightarrow \infty }
f_{i_1} \circ f_{i_2} \cdots \circ f_{i_k}(x);\ \ \ i_1 i_2 i_3...\in \{1,...,n\}^\mathbb{N} \},
\end{equation}
for any $x \in {\mathbb R}^d$, and a unique
probability measure $\mu$, supported on $A$, satisfying the invariance equation
\begin{equation} \label{valborg}
\mu( \cdot ) = \sum_{i=1}^n p_i \mu( f_i^{-1} (\cdot)),
\end{equation}
see Hutchinson \cite{Hutchinson81}.
The set $A$ is sometimes called the set-attractor, and $\mu$ the
measure-attractor of the IFSp.
The set-attractor $A$ will have a self-repeating ``fractal'' appearance
if all maps $f_i$ are similitudes, and the sets, $f_i(A)$, $i=1,...,n$, do not overlap.
This leads to the intuition to regard the set-attractor $A$ in \eqref{Attr} as
being built up by $n$ (in general overlapping and
heavily distorted) ``copies'' of itself, and the measure-attractor as a ``greyscale colouring'' of the set-attractor.
(Note that the probabilities $p_i$ play no role in the definition of $A$.)
In general we can not expect to have a unique set-attractor if the IFS-maps are not assumed to be contractions or more generally if the limits
$\lim_{k \rightarrow \infty} f_{i_1} \circ f_{i_2} \cdots \circ f_{i_k}(x)$
do not exist, with the limit being independent of $x$, for {\em all} $i_1 i_2 i_3...\in \{1,...,n\}^\mathbb{N}$, but unique measure-attractors exist if the limits
\begin{equation} \label{0630}
\widehat{Z}^F(i_1 i_2 ...):=
\lim_{k \rightarrow \infty} f_{i_1} \circ f_{i_2} \cdots \circ f_{i_k}(x)
\end{equation}
exist (with the limit being independent of $x$) for {\em almost all} $i_1 i_2 i_3...\in \{1,...,n\}^\mathbb{N}$.
(Indeed, if the limit in \eqref{0630} exists a.s.\ then
$\widehat{Z}^F$ may be regarded as a random variable, and its distribution
$\mu(\cdot):=P(\widehat{Z}^F \in \cdot)$, is then the unique solution to
\eqref{valborg}.)
The theory of IFSp has a long pre-history within the theory of Markov
chains, starting already with papers in the 1930s by D\"oblin and others.
Let $\{X_k\}_{k=0}^\infty$ be the Markov chain obtained by random (independent)
iterations with the functions, $f_i$, chosen with the
corresponding probabilities, $p_i$. That is, let $\{X_k\}$ be defined
recursively by
$$
X_{k+1}= f_{I_{k+1}}( X_k),\ k \geq 0,
$$
where $\{I_k\}_{k=1}^\infty$ is a sequence of independent random variables with $P(I_k=i)=p_i$, independent of $X_0$, where $X_0$ is some given random variable.
(It is well-known that any Markov chain $\{X_k\}$ (with values in $
\mathbb{R}^d$) can be expressed in the form $X_{k+1}= g( X_k, Y_{k+1})$ where
$g: \mathbb{R}^d \times [0,1] \rightarrow \mathbb{R}^d$ is a measurable function and $\{Y_k\}_{k=1}^\infty$ is a sequence of independent random variables
uniformly distributed on the unit interval, see e.g.\ Kifer \cite{Kifer86}.)
If an IFSp has a unique measure-attractor, $\mu$, then $\mu$ is the
unique stationary distribution of $\{X_k\}$, i.e.\
$\mu$ is the unique probability measure with the property that
if $X_0$ is $\mu$-distributed, then $\{X_k\}$ will be a (strictly) stationary (and ergodic) stochastic process, see e.g.\ Elton \cite{Elton87}.
Therefore a unique measure-attractor can alternatively also be called a unique stationary distribution.
Under standard average contraction conditions it follows that
(\ref{0630}) holds a.s., and
the distribution of $X_k$ converges weakly to $\mu$ (with exponential rate quantified e.g.\ by the Prokhorov metric for arbitrary distributions of the initial random variable $X_0$). Moreover the empirical distribution along trajectories of $\{X_k\}$ converges weakly to $\mu $ a.s.,
and $\{X_k\}$ obeys a central limit theorem.
See e.g.\ Barnsley et al.\ \cite{Barnsleyetal08}, Diaconis and Freedman \cite{DiaconisFreedman99}, and Stenflo \cite{Stenflo11}
for details and further results. These papers also contain surveys of the
literature.
\subsection{The inverse problem}
The inverse problem is, given a probability measure $\mu$, to find an IFSp having $\mu$ as its unique measure-attractor. This problem is of importance in e.g.\ image coding where the image, represented by a probability measure, can be encoded by the parameters in a corresponding IFSp whenever such an encoding exists, see e.g.\ Barnsley \cite{Barnsley93}. For an encoding to be practically
useful it
needs to involve few parameters and the
distribution of $X_k$ needs to converge quickly
to equilibrium (a property ensured by average contractivity
properties of the functions in the IFSp) for arbitrary initial distributions of $X_0$.
It is possible to construct solutions to the inverse problem in some very
particular cases using Barnsley's ``collage theorem'', see \cite{Barnsley93} containing exciting examples of e.g.\ ferns and clouds (interpreted as probability measures on $\mathbb{R}^2$) and their IFSp encodings,
but typically it is very hard to even find approximate solutions to the inverse problem for general probability measures on ${\mathbb R}^d$.
In this paper we present a (strikingly simple) solution to
the inverse problem for
continuous probability measures on ${\mathbb R}$.
\section{Main result}
In order to present our solution to the inverse problem for
continuous probability measures on ${\mathbb R}$, recall the following basic facts
used in the theory of random number generation:
Let $\mu$ be a probability measure on ${\mathbb R}$, and
let $F(x)= \mu((-\infty,x])$ denote its distribution function.
The generalised inverse distribution function is defined by
$$F^{-1}(u)= \inf \{ x \in {\mathbb R} \mid F(x) \geq u \},\ \ \ 0 \leq u \leq 1,$$ and satisfies
$F^{-1}(F(x)) \leq x$ and $ F(F^{-1}(u)) \geq u$ and therefore
$$ F^{-1}(u) \leq x \hspace{5mm} \text{ if and only if } \hspace{5mm}
u \leq F(x).$$
From this it follows that if $U \in U(0,1)$, i.e.\ if $U$ is a random variable
uniformly distributed on the unit interval, then $F^{-1}(U)$ is a $\mu$-distributed random variable.
This basic property reduces the problem of simulating from an arbitrary distribution on ${\mathbb R}$, to the problem of simulating uniform random numbers on the unit interval.
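As a minimal illustration of this reduction (an aside, not needed for the sequel), the following sketch simulates from an exponential distribution by applying the generalised inverse distribution function to uniform random numbers; it assumes the numpy library.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
lam = 1.0
u = rng.uniform(size=100000)      # U(0,1) samples
x = -np.log(1.0 - u) / lam        # F^{-1}(u) for F(x) = 1 - exp(-lam*x)
print(x.mean())                   # close to the expected value 1/lam
\end{verbatim}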
We say that $\mu$ is continuous if $F$ is continuous.
Note that $\mu(\{x\})=0$ for any $x \in {\mathbb R}$ for continuous probability measures in contrast with discrete probability measures where $\sum_{x \in S} \mu(\{x\})=1$ for some countable set $S$.
If $\mu$ is continuous then $ F(F^{-1}(u)) = u$, for $0 <u<1$.
This property is crucial for the following theorem:
\begin{theorem} \label{110622}
A continuous distribution, $\mu$, on $\mathbb{R}$ with
distribution function, $F$, is the measure-attractor of the
IFS with monotone maps
$f_i(x):= F^{-1} \circ u_i \circ F(x) $, for any $x$ with $F(x) >0$,
and probabilities $p_i=1/n$, where
$u_i(u)=u/n + (i-1)/n$, $0 \leq u \leq 1$, $i=1,2,...,n$, for any $n \geq 2$.
\end{theorem}
\begin{proof}
The Markov chain generated by
$u_i(x)=x/n + (i-1)/n$, $i=1,2,...,n$, chosen
with equal probabilities has the uniform distribution on the unit interval as its unique stationary distribution.
That is, if $\{I_k\}_{k \geq 1}$ is a sequence of independent random variables, uniformly distributed on $\{1,2,...,n\}$, then
\begin{equation}
Z_k^U(x)=u_{I_k} \circ \cdots \circ u_{I_1} (x),\ \ \
Z_0^U(x)=x
\end{equation}
is a Markov chain starting at $x \in [0,1]$ having the uniform distribution as its unique stationary distribution.
This can be seen by observing that $Z_k^U(x)$ has the same distribution as the reversed iterates
\begin{equation} \label{reversed}
\widehat{Z}_k^U(x)=u_{I_1} \circ \cdots \circ u_{I_k} (x),\ \ \
\widehat{Z}_0^U(x)=x,
\end{equation}
for any fixed $k$, and the reversed iterates $\widehat{Z}_k^U(x)$ converge almost surely to the $U(0,1)$-distributed random variable $\widehat{Z}^U$
whose $k$th digit in the base $n$ expansion is given by $I_k-1$.
If $\widehat{Z}^F$ denotes the limit of the reversed iterates of the system with $f_i$ chosen with probability $1/n$, then
\begin{eqnarray} \label{0826}
\widehat{Z}^F&:=&
\lim_{k \rightarrow \infty} \widehat{Z}_k^F(x):=
\lim_{k \rightarrow \infty}
f_{I_1} \circ \cdots \circ f_{I_k} (x)
\nonumber \\
&=&
\lim_{k \rightarrow \infty}
F^{-1} \circ u_{I_1} \circ F
\circ
F^{-1} \circ u_{I_2} \circ F
\circ \cdots \circ
F^{-1} \circ u_{I_k} \circ F (x) \nonumber \\
&=&
\lim_{k \rightarrow \infty} F^{-1} ( \widehat{Z}_k^U ( F(x)))= F^{-1} ( \widehat{Z}^U) \ \ \ a.s.,
\end{eqnarray}
where the last equality holds since $F^{-1}$ is non-decreasing and a monotone function can have at most a countable set of
discontinuity points in its domain; thus $F^{-1}$ is continuous at a.a.\ $x \in [0,1]$ w.r.t.\ the Lebesgue measure,
and in particular a.s.\ at the $U(0,1)$-distributed limit $\widehat{Z}^U$.
From the above it follows that
$$ P( \widehat{Z}^F \leq y ) = P( F^{-1} (\widehat{Z}^U) \leq y)=P( \widehat{Z}^U
\leq F(y))=F(y).$$
\end{proof}
\begin{remark}
If $X$ is a $\mu$-distributed random variable with $\mu$ continuous, then $ F(F^{-1}(u)) = u$, so $F(X) \in U(0,1)$. This contrasts with the case when $X$ is discrete, where $F(X)$ will also be discrete, so we cannot expect Theorem \ref{110622} to generalise to discrete distributions.
If an IFS $\{ {\mathbb R}, f_i, p_i, i=1,...,n \}$, has a continuous measure-attractor $\mu$ being the distribution of the a.s.\ limit of the reversed iterates, and the
distribution function $F$ of $\mu$ satisfies $F^{-1} (F(x))=x$, for any $x \in {\mathbb R}$, with $0<F(x)<1$, then, similarly, the IFS
$\{ {[0,1]}, u_i, p_i, i=1,...,n \}$, with
$u_i(u):= F \circ f_i \circ F^{-1}(u)$, $0 <u <1$, has the $U(0,1)$-distribution as its unique stationary distribution.
This is the case for absolutely continuous probability distributions $\mu$ if $F$ is strictly increasing.
\end{remark}
\begin{remark}
From Theorem \ref{110622} it follows that any continuous probability distribution on ${\mathbb R}$ can be approximated by the empirical distribution of
a Markov chain $\{ X_k \}$ on ${\mathbb R}$ generated by an IFSp
with trivial ``randomness'' generated e.g.\ by a coin or a die.
\end{remark}
\begin{remark}
Theorem \ref{110622} may be used to represent a continuous probability measure $\mu$ on ${\mathbb R}$ by the functions suggested in the theorem.
Note that there exist many iterated function systems with probabilities generating the same Markov chain, see e.g.\ Stenflo \cite{Stenflo01}, so
in particular it follows that an IFSp representation of a continuous probability measure on ${\mathbb R}$ is not unique.
The given IFSp representation suggested by Theorem \ref{110622} (for a given $n \geq 2$) is good in the sense that the generated Markov chain converges quickly to the given equilibrium making it possible to quickly simulate it.
If the suggested IFSp representation cannot be described in terms of
few parameters then it might make sense to consider
an approximate representation by approximating the IFS functions
with functions described by few parameters e.g.\ by using Taylor expansions.
\end{remark}
\begin{remark}
From Theorem \ref{110622} it follows
that if $\mu$ is a continuous probability measure on ${\mathbb R} $ being the measure-attractor of
$\{ {\mathbb R}; f_i,p_i,\ i=1,...,n \}$, with $p_i \neq 1/n$ for some $i$,
then there exists another IFSp with uniform probabilities having $\mu$ as its measure-attractor.
\end{remark}
\begin{Exe}
Suppose $F$ is a distribution function satisfying
$$F(1-x)=1-F(x),$$
and
$$F(x)/2= F(a x+b), {\text{ for all }} 0 \leq x \leq 1,$$
where $0 \leq b \leq 1/2$, $0 \leq a+b \leq 1/2$, and $a \neq 0$.
Then
$$F(x)/2+1/2 = 1- F(1-x)/2=
1- F( a(1-x)+b)=F(a x+ 1-a-b).$$
Thus random iterations with the maps
$f_1(x)= ax +b$, and $f_2(x)=ax + 1-a-b$ chosen with equal probabilities generates a Markov chain with stationary distribution $\mu$ having distribution function $F$.
The case $a=1/3$ and $b=0$ corresponds to $F$ being the distribution function of the uniform probability measure on the middle-third Cantor set (the Devil's staircase).
\begin{center}
\resizebox{!}{40mm}{\includegraphics{CantorD.eps}}
\end{center}
\hspace{23mm}\parbox{12cm}{{\em The Cantor set
is the set-attractor of the IFSp \linebreak[4] $\{ {\mathbb R}; f_1(x)=x/3, f_2(x)=x/3+2/3, p_1=1/2, p_2=1/2 \}$
and the distribution function of its measure-attractor (the uniform distribution on the Cantor set) is an increasing continuous function with zero derivative almost everywhere, with $F(0)=0$ and $F(1)=1$ popularly known as the ``Devil's staircase''.
}}
\end{Exe}
\begin{Exe} Let $\mu$ be the probability measure
with triangular density function
$$ d(x) = \begin{cases}
x & 0 \leq x \leq 1 \\
2-x & 1 \leq x \leq 2
\end{cases}. $$
Then $\mu$ is the unique stationary distribution of the Markov chain generated by random iteration with the functions
$$ f_1(x) = \begin{cases}
\frac{x}{\sqrt{2}} & 0 \leq x \leq 1 \\
\sqrt{ 2x- \frac{x^2}{2}-1 } & 1 \leq x \leq 2,
\end{cases} $$
and
$$ f_2(x) = \begin{cases}
2-\sqrt{ 1- \frac{x^2}{2} } & 0 \leq x \leq 1 \\
2- \sqrt{ 2 - 2x + \frac{x^2}{2} } & 1 \leq x \leq 2,
\end{cases}, $$
chosen uniformly at random.
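To see how these maps arise from Theorem \ref{110622} with $n=2$, note that the distribution function and its generalised inverse are
\[
F(x)=\begin{cases} x^2/2 & 0\leq x\leq 1\\ 2x-x^2/2-1 & 1\leq x\leq 2\end{cases},
\qquad
F^{-1}(u)=\begin{cases}\sqrt{2u} & 0\leq u\leq 1/2\\ 2-\sqrt{2-2u} & 1/2\leq u\leq 1\end{cases},
\]
so that, e.g., for $0\leq x\leq 1$,
$f_1(x)=F^{-1}(F(x)/2)=F^{-1}(x^2/4)=\sqrt{x^2/2}=x/\sqrt{2}$; the remaining branches are obtained in the same way.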
\begin{center}
\resizebox{!}{60mm}{\includegraphics{triangular.eps}}
\end{center}
\hspace{23mm}\parbox{12cm}{{\em Histograms of the first $n$ points in a simulated random trajectory of the Markov chain. The empirical distribution along a trajectory converges weakly to the stationary triangular-distribution with probability one.
}}
\end{Exe}
\begin{Exe}
The distribution function for the exponential distribution with
expected value $\mu= \lambda^{-1}$, $\lambda >0$, satisfies
$F(x)=1-e^{- \lambda x}$, $x \geq 0$.
A Markov chain generated by
random iterations with the two maps $f_1=f_1^\mu$ and $f_2=f_2^\mu$
defined as in
Theorem \ref{110622} has the exponential distribution with expected value $\mu$ as its stationary distribution.
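Explicitly, a short computation from Theorem \ref{110622} with $n=2$ gives $F^{-1}(u)=-\lambda^{-1}\log(1-u)$ and
\[
f_1^\mu(x)=F^{-1}\!\left(\tfrac{F(x)}{2}\right)=\frac{1}{\lambda}\log\frac{2}{1+e^{-\lambda x}},
\qquad
f_2^\mu(x)=F^{-1}\!\left(\tfrac{F(x)}{2}+\tfrac12\right)=x+\frac{\log 2}{\lambda},
\qquad x\geq 0.
\]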
We can construct interesting ``new'' distributions by altering such Markov chains in various ways, e.g.\ by alternating the application of two IFSs corresponding to different parameter values.
A result of such a construction is shown in the figure.
\begin{center}
\resizebox{!}{60mm}{\includegraphics{Exponential.eps}} \label{Exponential}
\end{center}
\hspace{23mm}\parbox{12cm}{{\em
}}
The upper figures are histograms of the first 200000 points
in simulations of a trajectory of a Markov chain generated by
random iterations with the two maps $f_1=f_1^\mu$ and $f_2=f_2^\mu$
defined as in
Theorem \ref{110622}
corresponding to the choices $\mu=1$ in the left-hand figure and
$\mu=2$ in the right-hand figure, respectively.
The lower figures are histograms corresponding to trajectories of Markov chains formed by random iterations with the maps
$g_1(x)=
f_1^2 (f_1^1(x))$, $g_2(x)=
f_1^2 (f_2^1(x))$, $g_3(x)=
f_2^2 (f_1^1(x))$, $g_4(x)=
f_2^2 (f_2^1(x))$ and $h_1(x)=
f_1^1 (f_1^2(x))$, $h_2(x)=
f_1^1 (f_2^2(x))$, $h_3(x)=
f_2^1 (f_1^2(x))$, $h_4(x)=
f_2^1 (f_2^2(x))$
respectively, where in both cases the functions are chosen uniformly at random.
\end{Exe}
\begin{remark}
The distributions constructed in the lower figures in the example above are
$1-$variable mixtures of the exponential distributions with expected values $\mu=1$, and $\mu=2$ respectively.
We can, more generally, for any integer $V \geq 1$, generate $V-$variable mixtures between continuous distributions.
See Barnsley et al. \cite{Barnsleyetal05} and \cite{Barnsleyetal08} for more on the theory of
$V-$variable sets and measures.
\end{remark}
\end{document} |
\begin{document}
\title[Self-adjoint free semigroupoid algebras]{All finite transitive graphs admit a self-adjoint free semigroupoid algebra}
\author[A. Dor-On]{Adam Dor-On}
\address{Department of Mathematical Sciences \\ University of Copenhagen \\ Copenhagen\\ Denmark}
\email{[email protected]}
\author[C. Linden]{Christopher Linden}
\address{Department of Mathematics \\ University of Illinois at Urbana-Champaign\\ Urbana\\ IL \\ USA}
\email{[email protected]}
\subjclass[2010]{Primary: 47L55, 47L15, 05C20.}
\keywords{Graph algebra, Cuntz Krieger, free semigroupoid algebra, road coloring, periodic, cyclic decomposition, directed graphs}
\thanks{The first author was supported by an NSF grant DMS-1900916 and by the European Union's Horizon 2020 Marie Sklodowska-Curie grant No 839412.}
\maketitle
\begin{abstract}
In this paper we show that every non-cycle finite transitive directed graph has a Cuntz-Krieger family whose WOT-closed algebra is $B({\mathcal{H}})$. This is accomplished through a new construction that reduces this problem to in-degree $2$-regular graphs, which is then treated by applying the periodic Road Coloring Theorem of B\'eal and Perrin. As a consequence we show that finite disjoint unions of finite transitive directed graphs are exactly those finite graphs which admit self-adjoint free semigroupoid algebras.
\end{abstract}
\section{Introduction}
One of the many instances where non-self-adjoint operator algebra techniques are useful is in distinguishing representations of C*-algebras up to unitary equivalence. By work of Glimm we know that classifying representations of non-type-I C*-algebras up to unitary equivalence cannot be done with countable Borel structures \cite{Gli61}. Hence, in order to distinguish representations of Cuntz algebra ${\mathcal{O}}_n$, one either restricts to a tractable subclass or weakens the invariant. By restricting to permutative or atomic representations, classification was achieved by Bratteli and Jorgensen in \cite{BJ99} and by Davidson and Pitts in \cite{DP99}.
Since general representations of ${\mathcal{O}}_n$ are rather unruly, one can weaken unitary equivalence by considering isomorphism classes of not-necessarily-self-adjoint free semigroup algebras, which are WOT-closed operator algebras generated by the Cuntz isometries of a given representation of ${\mathcal{O}}_n$. The study of free semigroup algebras originates from the work of Popescu on his non-commutative disc algebra \cite{Pop96}, and particularly from work of Arias and Popescu \cite{AP00}, and of Davidson and Pitts \cite{DP98, DP99}. This work was subsequently used by Davidson, Katsoulis and Pitts to establish a general non-self-adjoint structure theorem for any free semigroup algebra \cite{DKP01} which can be used to distinguish many representations of Cuntz algebra ${\mathcal{O}}_n$ via non-self-adjoint techniques.
The works of Bratteli and Jorgensen on iterated function systems were eventually generalized, and classification of Cuntz-Krieger representations of directed graphs found use in the work of Marcolli and Paolucci \cite{MP11} for producing wavelets on Cantor sets, and in work of Bezuglyi and Jorgensen \cite{BJ15} where they are associated to one-sided measure-theoretic dynamical systems called ``semi-branching function systems''.
Towards establishing a non-self-adjoint theory for distinguishing representations of directed graphs, and by building on work of many authors \cite{MS11, DLP05, JK05, KK05, Ken11, Ken13, KP04}, the first author together with Davidson and B. Li extended the theory of free semigroup algebras to classify representations of directed graphs via non-self-adjoint techniques \cite{DDL20}.
\begin{definition}
Let $G=(V,E,r,s)$ be a directed graph with range and source maps $r,s: E \rightarrow V$. A family $\mathrm{S} = (S_v,S_e)_{v\in V,e\in E}$ of operators on Hilbert space $\mathcal{H}$ is a \emph{Toeplitz-Cuntz-Krieger} (TCK) family if
\begin{enumerate}
\item[1.]
$\{S_v\}_{v\in V}$ is a set of pairwise orthogonal projections;
\item[2.]
$S_e^*S_e = S_{s(e)}$ for every $e\in E$;
\item[3.]
$\sum_{e\in F} S_eS_e^* \leq S_v$ for every finite subset $F\subseteq r^{-1}(v)$.
\end{enumerate}
We say that $\mathrm{S}$ is a \emph{Cuntz-Krieger} (CK) family if additionally
\begin{enumerate}
\item[4.]
$\sum_{e \in r^{-1}(v)}S_eS_e^* = S_v$ for every $v\in V$ with $0 < |r^{-1}(v)| < \infty$.
\end{enumerate}
We say $\mathrm{S}$ is a \textit{fully-coisometric} family if additionally
\begin{enumerate}
\item[5.]
\textsc{sot}-$\sum_{e \in r^{-1}(v)} S_e S_e^* = S_v$ for every $v \in V$.
\end{enumerate}
\end{definition}
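For orientation: if $G$ has a single vertex $v$ and $n$ loops $e_1,\ldots,e_n$, then condition 1.\ says that $S_v$ is a projection, conditions 2.\ and 3.\ say that the restrictions of $S_{e_1},\ldots,S_{e_n}$ to $S_v{\mathcal{H}}$ are isometries of $S_v{\mathcal{H}}$ with pairwise orthogonal ranges, and condition 4.\ says that these ranges sum to $S_v{\mathcal{H}}$; when $S_v = I_{{\mathcal{H}}}$ (as in Hypothesis \ref{h:1} below) a CK family is thus exactly a representation of the Cuntz algebra ${\mathcal{O}}_n$ discussed above.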
Given a TCK family $\mathrm{S}$ for a directed graph $G$, we say that $\mathrm{S}$ is \emph{fully supported} if $S_v \neq 0$ for all $v\in V$. When $\mathrm{S}$ is not fully supported, we may induce a subgraph $G_{\mathrm{S}}$ on the support $V_{\mathrm{S}} = \{ \ v \in V \ | \ S_v \neq 0 \ \}$ of $\mathrm{S}$ so that $\mathrm{S}$ is really just a TCK family for the smaller graph $G_{\mathrm{S}}$. Thus, if we wish to detect some property of $G$ from a TCK family $\mathrm{S}$ of $G$, we will have to assume that $\mathrm{S}$ is fully supported. When $G$ is transitive and $S_w \neq 0$ for some $w \in V$ it follows that $\mathrm{S}$ is fully supported.
Studying TCK or CK families amounts to studying representations of Toeplitz-Cuntz-Krieger and Cuntz-Krieger C*-algebras. More precisely, let ${\mathcal{T}}(G)$ and ${\mathcal{O}}(G)$ be the universal C*-algebras generated by TCK and CK families respectively. Then representations of TCK or CK C*-algebras are in bijection with TCK or CK families respectively. The C*-algebra ${\mathcal{O}}(G)$ is the well-known graph C*-algebra of $G$, which generalizes the Cuntz--Krieger algebra introduced in \cite{CK80} for studying subshifts of finite type. We recommend \cite{Raeb05} for further preliminaries on TCK and CK families, as well as C*-algebras associated to directed graphs.
When $G=(V,E,r,s)$ is a directed graph, we denote by $E^{\bullet}$ the collection of finite paths $\lambda = e_1 ... e_n$ in $G$ where $s(e_i) = r(e_{i+1})$ for $i=1,...,n-1$. In this case we say that $\lambda$ is of length $n$, and we regard vertices as paths of length $0$. Given a TCK family $\mathrm{S} = (S_v,S_e)$ and a path $\lambda = e_1 ... e_n$ we define $S_{\lambda} = S_{e_1} \circ ... \circ S_{e_n}$. We extend the range and source maps of paths $\lambda = e_1...e_n$ by setting $r(\lambda) := r(e_1)$ and $s(\lambda) := s(e_n)$, and for a vertex $v \in V$ considered as a path we define $r(v) = v = s(v)$. A path $\lambda$ of length $|\lambda| >0$ is said to be a cycle if $r(\lambda) = s(\lambda)$. We will often not mention the range and source maps $r$ and $s$ in the definition of a directed graph, and understand them from context.
\begin{hypothesis}\label{h:1}
Throughout the paper we will assume that whenever $\mathrm{S}=(S_v,S_e)$ is a TCK family, then \textsc{sot}-$\sum_{v\in V} S_v = I_{{\mathcal{H}}}$. In terms of representations of the C*-algebras this is equivalent to requiring that all $*$-representations of our C*-algebras ${\mathcal{T}}(G)$ and ${\mathcal{O}}(G)$ are non-degenerate.
\end{hypothesis}
\begin{definition}
Let $G=(V,E)$ be a directed graph, and let $\mathrm{S} = (S_v,S_e)$ be a TCK family on a Hilbert space $\mathcal{H}$. The WOT-closed algebra ${\mathfrak{S}}$ generated by $\mathrm{S}$ is called a \emph{free semigroupoid algebra} of $G$.
\end{definition}
The main purpose of this paper is to characterize which finite graphs admit self-adjoint free semigroupoid algebras. For the $n$-cycle graph, we know from \cite[Theorem 5.6]{DDL20} that $M_n(L^{\infty}(\mu))$ is a free semigroupoid algebra when $\mu$ is a measure on the unit circle $\mathbb{T}$ which is not absolutely continuous with respect to Lebesgue measure $m$ on $\mathbb{T}$. Thus, an easy example for a self-adjoint free semigroupoid algebra for the $n$-cycle graph is simply $M_n(\mathbb{C})$, by taking some Dirac measure $\mu = \delta_z$ for $z\in \mathbb{T}$.
On the other hand, for non-cycle graphs, self-adjoint examples are rather difficult to construct, and the first example showing that this is possible was provided by Read \cite{Rea05} for the graph with a single vertex and two loops. More precisely, Read shows that there are two isometries $Z_1, Z_2$ on a Hilbert space ${\mathcal{H}}$ with pairwise orthogonal ranges that sum up to ${\mathcal{H}}$ such that the WOT-closed algebra generated by $Z_1,Z_2$ is $B({\mathcal{H}})$. Read's proof was later streamlined and simplified by Davidson in \cite{Dav06}.
\begin{definition}
Let $G = (V,E)$ be a directed graph. We say that $G$ is
\begin{enumerate}
\item
\emph{transitive} if there is a directed path from any vertex to any other vertex.
\item
\emph{aperiodic} if for any two vertices $v,w \in V$ there is a $K_0$ such that any length $K\geq K_0$ can occur as the length of a path from $v$ to $w$.
\end{enumerate}
\end{definition}
Notice that for \emph{finite} (both finitely many vertices and edges) transitive graphs, the notions of a CK family and fully-coisometric TCK family coincide. In \cite[Theorem 4.3 \& Corollary 6.13]{DDL20} restrictions were found on graphs and TCK families so as to allow for self-adjoint examples.
\begin{theorem}[Theorem 4.3 \& Corollary 6.13 in \cite{DDL20}] \label{t:sa-rest}
Let ${\mathfrak{S}}$ be a free semigroupoid algebra generated by a TCK family $\mathrm{S}$ of a directed graph $G = (V,E)$ such that $S_v \neq 0$ for all $v\in V$. If ${\mathfrak{S}}$ is self-adjoint then
\begin{enumerate}
\item
$\mathrm{S}$ must be fully-coisometric, and;
\item
$G$ must be a disjoint union of transitive components.
\end{enumerate}
\end{theorem}
Showing that non-cycle transitive graphs other than the single vertex with two loops admit a self-adjoint free semigroupoid algebra required new ideas from directed graph theory.
\begin{definition}
Let $G = (V,E)$ be a transitive, finite and in-degree $d$-regular graph. A \emph{strong edge coloring} $c:E \rightarrow \{1,...,d\}$ is one where $c(e)\neq c(f)$ for any two distinct edges $e,f \in r^{-1}(v)$ and $v\in V$.
\end{definition}
Whenever $G = (V,E)$ has a strong edge coloring $c$, it induces a labeling of finite paths $c : E^{\bullet} \rightarrow \mathbb{F}^+_d$ which is defined for $\lambda = e_1...e_n$ via $c(\lambda) = c(e_1) ... c(e_n)$. Since $c$ is a strong edge coloring, whenever $w \in V$ is a vertex and $\gamma = i_1 ... i_n \in \mathbb{F}^+_d$ with $i_j \in \{1,...,d\}$ is a word in colors, we may inductively construct a path $\lambda = e_1... e_n$ such that $c(e_j) = i_j$ and $r(e_1) = w$. In this way, every vertex $w$ and a word in colors uniquely define a back-tracked path whose range is $w$.
\begin{definition}
Let $G = (V,E)$ be a transitive, finite and in-degree $d$-regular graph. A strong edge coloring is called \emph{synchronizing} if for some vertex $v \in V$ there is a word $\gamma_v \in \mathbb{F}^+_d$ in colors $\{1,...,d\}$ such that for any other vertex $w\in V$, the unique back-tracked path $\lambda$ with range $w$ and color $c(\lambda) = \gamma_v$ has source $s(\lambda) = v$.
\end{definition}
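A minimal example: let $G$ have vertices $V=\{a,b\}$ and four edges, namely a loop at $a$, an edge from $b$ to $a$, an edge from $a$ to $b$ and a loop at $b$, so that $G$ is in-degree $2$-regular. Color the loop at $a$ and the edge from $a$ to $b$ with the color $1$, and the remaining two edges with the color $2$. This is a strong edge coloring, and it is synchronizing: for the one-letter word $\gamma_a = 1$, the unique back-tracked path of color $1$ with range $a$ is the loop at $a$ and the one with range $b$ is the edge from $a$ to $b$, so both have source $a$.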
It is easy to see that if a finite in-degree regular graph has a \emph{synchronizing} strong edge coloring then it is aperiodic. The converse of this statement is a famous conjecture made by Adler and Weiss in the late 60s \cite{AW70}. This conjecture was eventually proven by Trahtman \cite{Tra09} and is now called the Road Coloring Theorem. The Road Coloring Theorem was the key device that enabled the construction of self-adjoint free semigroupoid algebras for in-degree regular aperiodic directed graphs in \cite[Theorem 10.11]{DDL20}.
\begin{definition}
Let $G=(V,E)$ be a transitive directed graph. We say that $G$ \emph{has period $p$} if $p$ is the largest integer such that we can partition $V=\sqcup_{i =1}^p V_i$ so that when $e \in E$ is an edge with $s(e) \in V_i$ then $r(e) \in V_{i+1}$ (here we identify $p+1 \equiv 1$). This decomposition is called the \emph{cyclic decomposition} of $G$, and the sets $\{V_i\}$ are called the \emph{cyclic components} of $G$.
\end{definition}
\begin{remark}
In a transitive graph $G$ every vertex $v \in V$ has a cycle of finite length through it. Since following a cycle of length $\ell$ through $v$ returns to the cyclic component of $v$ after $\ell$ steps, the period must divide $\ell$, and in particular the period is at most $\ell$. Hence, transitive graphs have finite periodicity.
\end{remark}
It is a standard fact that $G$ is $p$-periodic exactly when for any two vertices $v,w \in V$ there exists $0 \leq r < p$ and $K_0$ such that for any $K \geq K_0$ the length $pK + r$ occurs as the length of a path from $v$ to $w$, while $pK +r'$ does not occur for any $K$ and $0 \leq r' < p$ with $r\neq r'$. Hence, $G$ is $1$-periodic if and only if it is aperiodic. We will henceforth say that $G$ is periodic when $G$ is $p$-periodic with period $p \geq 2$. For a transitive directed graph, the period $p$ is also equal to the greatest common divisor of the lengths of its cycles. This equivalent definition of periodicity of a transitive directed graph is the one most commonly used in the literature.
In \cite[Question 10.13]{DDL20} it was asked whether periodic in-degree $d$-regular finite transitive graphs with $d\geq 3$ have a self-adjoint free semigroupoid algebra. In this paper we answer this question in the affirmative. In fact, we are able to characterize exactly which finite graphs have a self-adjoint free semigroupoid algebra. A generalization of the road coloring for periodic in-degree regular graphs proven by B\'eal and Perrin \cite{BP14} is then the replacement for Trahtman's aperiodic Road Coloring Theorem when the graph is periodic.
\begin{theorem} \label{t:safsa}
Let $G = (V,E)$ be a finite graph. There exists a fully supported CK family $\mathrm{S}=(S_v,S_e)$ which generates a \emph{self-adjoint} free semigroupoid algebra ${\mathfrak{S}}$ if and only if $G$ is the union of transitive components.
Furthermore, if $G$ is transitive and not a cycle then $B({\mathcal{H}})$ is a free semigroupoid algebra for $G$ where ${\mathcal{H}}$ is a separable infinite dimensional Hilbert space.
\end{theorem}
This paper is divided into four sections including this introduction. In Section \ref{s:prc} we translate the periodic Road Coloring Theorem of B\'eal and Perrin to a more concrete statement that we end up using. In Section \ref{s:B(H)-fsa} a reduction to graphs with in-degree at least $2$ at every vertex is made, and our new construction reduces that case to the in-degree $2$-regular case. Finally in Section \ref{s:main-theorem} we combine everything together for a proof of Theorem \ref{t:safsa} and give some concluding remarks.
\section{Periodic Road coloring} \label{s:prc}
The following is the generalization of the notion of synchronization to $p$-periodic finite graphs that we shall need.
\begin{definition} \label{d:p-synch}
Let $G = (V,E)$ be a transitive, finite and in-degree $d$-regular $p$-periodic directed graph with cyclic decomposition $V=\sqcup_{i =1}^p V_i$. A strong edge coloring $c$ of $G$ with $d$ colors is called \emph{$p$-synchronizing} if there exist
\begin{enumerate}
\item distinguished vertices $v_i \in V_i$ for each $1 \leq i \leq p$, and;
\item a word $\gamma$ such that for any $1 \leq i \leq p$ and $v \in V_i$, the unique backward path $\lambda_v$ with $r(\lambda_v) =v$ and $c(\lambda_v) = \gamma$ has source $v_i$.
\end{enumerate}
Such a $\gamma$ is called a $p$-synchronizing word for the tuple $(v_1,...,v_p)$.
\end{definition}
In order to show that every in-degree $d$-regular $p$-periodic graph is $p$-synchronizing we will translate a periodic version of the Road Coloring Theorem due to B\'eal and Perrin \cite{BP14}. We warn the reader that the graphs we consider here are in-degree regular whereas in \cite{BP14} the graphs are out-degree regular. Thus, so as to fit our choice of graph orientation, we state their definitions and theorem with edges having reversed ranges and sources.
Let $G=(V,E)$ be a transitive, finite, in-degree $d$-regular graph. If $c: E \rightarrow \{1,...,d\}$ is a strong edge coloring, each word $\gamma \in \mathbb{F}_d^+$ in colors gives rise to a map $\gamma : V \rightarrow V$ defined as follows. For $v\in V$ let $\lambda_v$ be the unique path with $r(\lambda_v) =v$ whose color is $c(\lambda_v) = \gamma$. Then we define $v \cdot \gamma := s(\lambda_v)$. In this way, we can apply the function $\gamma$ to each subset $I \subseteq V$ to obtain another subset of vertices $I \cdot \gamma = \{ \ v \cdot \gamma \ | \ v \in I \ \}$.
\begin{definition}
Let $G=(V,E)$ be a transitive, finite and in-degree $d$-regular with a strong edge coloring $c : E \rightarrow \{1,...,d\}$.
\begin{enumerate}
\item
We say that a subset $I$ is a \emph{$c$-image} if there exists a word $\gamma$ such that $V \cdot \gamma = I$.
\item
A $c$-image $I \subseteq V$ is called \emph{minimal} if there is no $c$-image with smaller cardinality.
\end{enumerate}
We define the \emph{rank} of $c$ to be the size of a minimal $c$-image.
\end{definition}
Note that the rank of a strong edge coloring $c$ of a transitive graph is always well-defined, since any two minimal $c$-images have the same cardinality. Next, we explain some of the language used in the statement of B\'eal and Perrin's \cite[Theorem 6]{BP14}.
A (finite) \emph{automaton} is a pair $(G,c)$ where $G=(V,E)$ is a finite directed graph and $c :E \rightarrow \{1,...,d\}$ is some labeling. We say that $(G,c)$ is a \emph{complete deterministic} automaton if $c|_{r^{-1}(v)}$ is bijective for each $v\in V$. This forces $G$ to be in-degree $d$-regular with a strong edge coloring $c$. We say that an automaton is \emph{irreducible} if its underlying graph is irreducible. Finally, we say that two automata $(G,c)$ and $(H,d)$ are \emph{equivalent} if they have isomorphic underlying graphs. The statement of \cite[Theorem 6]{BP14} then says that any irreducible, complete deterministic automaton $(G,c)$ is equivalent to a complete deterministic automaton whose rank is equal to the period of $G$. This leads to the following restatement of the periodic Road Coloring Theorem of B\'eal and Perrin \cite[Theorem 6]{BP14} in the language of directed graphs and their colorings.
\begin{theorem}[Theorem 6 of \cite{BP14}]
Let $G=(V,E)$ be a transitive, finite and in-degree $d$-regular graph. Then $G$ is $p$-periodic if and only if there exists a strong edge coloring $c : E \rightarrow \{1,...,d\}$ with rank $p$.
\end{theorem}
As a corollary, we have that every finite in-degree $d$-regular graph is $p$-synchronizing (in the sense of Definition \ref{d:p-synch}) when its period is $p$. This will be useful to us in the next section.
\begin{corollary} \label{c:p-synch-exists}
Let $G=(V,E)$ be a transitive, finite, in-degree $d$-regular and $p$-periodic graph with cyclic decomposition $V=\sqcup_{i =1}^p V_i$. Then there is a strong edge coloring $c : E \rightarrow \{1,...,d\}$ which is $p$-synchronizing.
Furthermore, if $\gamma$ is a $p$-synchronizing word for $(v_1,...,v_p)$ and $w_i \in V_i$ is some vertex for some $1 \leq i \leq p$, then there are vertices $w_j \in V_j$ for $j \neq i$ and a $p$-synchronizing word $\mu$ for $(w_1,...,w_p)$.
\end{corollary}
\begin{proof}
Let $c$ be a strong edge coloring with a minimal $c$-image of size $p$. This means that there is a word $\gamma \in \mathbb{F}_d^+$ such that $|V \cdot \gamma| = p$. If $\gamma$ is not of length which is a multiple of $p$, we may concatenate $\gamma'$ to $\gamma$ to ensure that $|\gamma \gamma'| = kp$ for some $0 \neq k \in \mathbb{N}$. Since $V \cdot \gamma$ is minimal we have that $V \cdot \gamma \gamma'$ is also of size $p$. Hence, without loss of generality we have that $|\gamma| = kp$ for some $0 \neq k \in \mathbb{N}$. Since $|\gamma| = kp$, we see that the function $\gamma : V \rightarrow V$ must send elements in $V_i$ to elements in $V_i$. Since each $V_i$ is non-empty and $|V \cdot \gamma| = p$, there is a unique $v_i \in V \cdot \gamma$ such that $V_i \cdot \gamma = \{v_i\}$. It is then clear that $(v_1,...,v_p)$ together with $\gamma$ show that $c$ is $p$-synchronizing.
For the second part, without loss of generality assume that $i=1$, and let $\lambda$ be a path with range $v_1$ and source $w_1$, whose length must be a multiple of $p$. Then for $\mu := \gamma \cdot c(\lambda)$, we still have $|V \cdot \mu| = p$ from minimality of $V \cdot \gamma$, and also $v_1 \cdot c(\lambda) = w_1$. Thus, we see that as in the above proof there are some $w_j \in V_j$ with $j\neq 1$ such that $\mu$ is $p$-synchronizing for $(w_1,...,w_p)$.
\end{proof}
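As a computational aside, the maps $\gamma : V \rightarrow V$ and the images $V \cdot \gamma$ used above are easy to compute mechanically. The following sketch assumes a hypothetical encoding of a strong edge coloring as a table sending a pair (vertex, color) to the source of the unique incoming edge of that color; the test data is the two-vertex example given after the definition of a synchronizing coloring in the introduction.
\begin{verbatim}
# back[(v, i)] = source of the unique edge with range v and color i.

def apply_word(back, v, gamma):
    # follows the unique back-tracked path with range v and color word gamma
    for i in gamma:
        v = back[(v, i)]
    return v

def image(back, vertices, gamma):
    # computes the set V . gamma of the text
    return {apply_word(back, v, gamma) for v in vertices}

back = {('a', 1): 'a', ('a', 2): 'b', ('b', 1): 'a', ('b', 2): 'b'}
print(image(back, {'a', 'b'}, (1,)))   # {'a'}: the word 1 is synchronizing
\end{verbatim}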
\section{$B({\mathcal{H}})$ as a free semigroupoid algebra} \label{s:B(H)-fsa}
In \cite{Dav06} Read isometries $Z_1,Z_2$ with additional useful properties are obtained on a separable infinite dimensional Hilbert space ${\mathcal{H}}$. More precisely, from the proof of \cite[Lemma 1.6]{Dav06} we see that there are orthonormal bases $\{h_j \}_{j \in \mathbb{N}}$ and $\{g_i \}_{i \in \mathbb{N}}$ together with a sequence
$$
S_{i,j,k} \in \operatorname{span} \{ \ Z_w \ | \ w\in \mathbb{F}_2^+ \ \text{with} \ |w| = 2^k \ \}
$$
such that $S_{i,j,k}$ converges WOT to the rank one operator $g_i \otimes h_j^*$. As a consequence of this, in \cite[Theorem 1.7]{Dav06} it is shown that the WOT-closed algebra generated by $\{ \ Z_w \ | \ w \in \mathbb{F}_2^+ \ \text{with} \ |w| = 2^k \ \}$ is still $B({\mathcal{H}})$. That $2^k$ above can be replaced with any non-zero $p \in \mathbb{N}$ was claimed after \cite[Question 10.13]{DDL20}, and we provide a proof for it here.
\begin{proposition}\label{p:pgen} For any non-zero $p \in \mathbb{N}$, we have that the WOT-closed algebra ${\mathfrak{Z}}_p$ generated by $\{ \ Z_{\mu} \ | \ \mu \in \mathbb{F}_2^+ \ \text{with} \ |\mu| =p \ \}$ is $B({\mathcal{H}})$.
\end{proposition}
\begin{proof}
As there are finitely many residue classes modulo $p$, there is some $m$ such that $p$ divides $2^k + m$ for infinitely many $k$. Pick a word $w$ with length $|w|=m$, and note that $S_{i,j,k}Z_w \in \operatorname{Alg} \{ \ Z_{\mu} \ | \ \mu \in \mathbb{F}_2^+ \ \text{with} \ |\mu| =p \ \}$ for infinitely many $k$. Thus, the rank one operator $(g_i \otimes h_j^*) Z_w$ is in ${\mathfrak{Z}}_p$. Since any operator in $B({\mathcal{H}})$ is the WOT limit of finite linear combinations of operators of the form $g_i \otimes h_j^*$, we see that $A = AZ^*_w Z_w \in {\mathfrak{Z}}_p$ for any $A \in B({\mathcal{H}})$. Hence, ${\mathfrak{Z}}_p = B({\mathcal{H}})$.
\end{proof}
Next we reduce the problem of showing that $B({\mathcal{H}})$ is a free semigroupoid algebra for a transitive graph $G$ to a problem about vertex corners.
\begin{lemma} \label{l:cornerB(H)}
Let $G$ be a transitive graph. Suppose $\mathrm{S}$ is a TCK family on ${\mathcal{H}}$ such that for any $v \in V$ there exists $w\in V$ such that $S_v {\mathfrak{S}} S_w = S_v B({\mathcal{H}}) S_w$. Then ${\mathfrak{S}} = B({\mathcal{H}})$.
\end{lemma}
\begin{proof}
Let $v',w' \in V$ be arbitrary vertices. By assumption, there is $w \in V$ such that $S_{v'}{\mathfrak{S}} S_w = S_{v'}B({\mathcal{H}})S_w$. Let $\lambda$ be a path from $w'$ to $w$. Then $S_{v'}BS_{\lambda} \in S_{v'} {\mathfrak{S}} S_{w'}$ for any $B\in B({\mathcal{H}})$. Taking $B = A S_{\lambda}^*$ for a general $A\in B({\mathcal{H}})$ gives $S_{v'}BS_{\lambda} = S_{v'} A S_{w'} \in S_{v'} {\mathfrak{S}} S_{w'}$. Hence, we obtain that $S_{v'} {\mathfrak{S}} S_{w'} = S_{v'} B({\mathcal{H}}) S_{w'}$ for any $v',w' \in V$. Since \textsc{sot}-$\sum_{v\in V} S_v = I_{{\mathcal{H}}}$ we get that ${\mathfrak{S}} = B({\mathcal{H}})$.
\end{proof}
Let $G = (V,E)$ be a finite directed graph. For an edge $e \in E$ we define the \emph{edge contraction} $G/e$ of $G$ by $e$ to be the graph obtained by removing the edge $e$ and identifying the vertices $s(e)$ and $r(e)$. When $r(e) \neq s(e)$, our convention will be that the identification of $r(e)$ and $s(e)$ is carried out by removing the vertex $r(e)$, and every edge with source / range $r(e)$ is changed to have source / range $s(e)$ respectively.
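For example, if $G$ has two vertices $u$ and $v$ and three edges, namely a loop at $u$, an edge $e_0$ from $u$ to $v$ and an edge from $v$ to $u$, then $r(e_0) = v$ has in-degree $1$, and the edge contraction $G/e_0$ removes $v$ and $e_0$ and turns the edge from $v$ to $u$ into a second loop at $u$; thus $G/e_0$ is the graph with a single vertex and two loops from Read's example.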
\begin{lemma}\label{l:edge-cont}
Let $G = (V,E)$ be a transitive and finite directed graph which is not a cycle. Let $e_0 \in E$ such that $r(e_0)$ has in-degree $1$. Then:
\begin{enumerate}
\item The edge $e_0$ is not a loop.
\item The edge contraction $G/e_0$ has one fewer vertex of in-degree $1$ than $G$ has.
\item The edge contraction $G/e_0$ is a finite transitive directed graph which is not a cycle.
\end{enumerate}
\end{lemma}
\begin{proof}
If $e_0$ were a loop, then the assumptions that $G$ is transitive and that $r(e_0)$ has in-degree $1$ would imply that $G$ is a cycle with one vertex. This contradicts our assumption that $G$ is not a cycle, so (i) is proved.
Since now we must have $r(e_0) \neq s(e_0)$, we adopt our convention discussed above. Since $r(e_0)$ has in-degree $1$, there are no other edges whose range is $r(e_0)$, so contraction does not change the range of any of the surviving edges. In particular, the in-degree of the remaining vertices is unchanged. Since we have removed a single vertex of in-degree $1$, (ii) is proved.
By construction $G/e_0 $ is a finite directed graph. It is straightforward to verify that transitivity of $G$ implies transitivity of $G/e_0$. A finite, transitive, directed graph is a cycle if and only if every vertex has in-degree $1$. Since $G$ is by assumption not a cycle, it has a vertex of in-degree at least $2$. Since the in-degree of the remaining vertices is unchanged, $G/e_0$ also has a vertex of in-degree at least $2$. Hence $G/e_0$ is not a cycle and (iii) is proved.
\end{proof}
The following proposition shows that such edge contractions preserve the property of having $B({\mathcal{H}})$ as a free semigroupoid algebra.
\begin{proposition} \label{p:edge-cont-FSA}
Let $G = (V,E)$ be a transitive and finite directed graph that is not a cycle, and let $e_0 \in E$ such that $ r(e_0)$ has in-degree $1$. If $G/e_0$ has $B({\mathcal{H}})$ as a free semigroupoid algebra, then so does $G$.
\end{proposition}
\begin{proof}
Let $v_0 := r(e_0)$ so that $G/e_0 = (\tilde{V}, \tilde{E}) = (V \setminus \{v_0\}, E \setminus \{ e_0\})$.
Let $\widetilde{\mathrm{S}} = (\widetilde{S}_{v}, \widetilde{S}_{e})$ be a TCK family for $G/e_0$ on ${\mathcal{H}}$ such that $\widetilde{{\mathfrak{S}}} = B({\mathcal{H}})$. By Theorem \ref{t:sa-rest} we get that $\widetilde{\mathrm{S}}$ is actually a CK family. Write ${\mathcal{H}} = \bigoplus_{v \in \tilde{V}} \widetilde{S}_{v} {\mathcal{H}}$ and let ${\mathcal{H}}_v = \widetilde{S}_{v} {\mathcal{H}}$ for $v \in \tilde{V}$. Let ${\mathcal{H}}_{v_0}$ be a Hilbert space identified with $\widetilde{S}_{s(e_0)}{\mathcal{H}}$ via a fixed unitary identification $J : {\mathcal{H}}_{v_0} \rightarrow \widetilde{S}_{{s(e_0)}}{\mathcal{H}}$ and form the space ${\mathcal{K}} = \bigoplus_{v\in V} {\mathcal{H}}_v$. We define a CK family $\mathrm{S}$ for $G$ on ${\mathcal{K}}$ as follows: Let $S_v$ be the projection onto ${\mathcal{H}}_v$ for each $v\in V$. For edges $e \in \tilde{E}$ with $s(e)\neq v_0$ we extend linearly the rule
$$
S_e \xi = \begin{cases}
\widetilde{S}_{e} \xi & \text{ when } \xi \in {\mathcal{H}}_{s(e)}, \\
0 & \text{ when } \xi \in {\mathcal{H}}_{s(e)}^{\perp},
\end{cases}
$$
for edges $e\in \tilde{E}$ with $s(e) = v_0$ we extend linearly the rule
$$
S_e \xi = \begin{cases}
\widetilde{S}_{e}J \xi & \text{ when } \xi \in {\mathcal{H}}_{v_0}, \\
0 & \text{ when } \xi \in {\mathcal{H}}_{v_0}^{\perp},
\end{cases}
$$
and finally for $e_0$ we extend linearly the rule
$$
S_{e_0} \xi = \begin{cases}
J^* \xi & \text{ when } \xi \in {\mathcal{H}}_{s(e_0)}, \\
0 & \text{ when } \xi \in {\mathcal{H}}_{s(e_0)}^{\perp}.
\end{cases}
$$
Since $J$ is a unitary, and since $\widetilde{\mathrm{S}}$ is a CK family for $G/e_0$, we see that $\mathrm{S} = (S_v,S_e)$ is a CK family for $G$.
We verify that for any vertex $v \in V$ we have $S_v {\mathfrak{S}} S_v = S_v B({\mathcal{K}}) S_v$. First note that for any $v \in V$ we have
$$
S_v {\mathfrak{S}} S_v = \overline{\operatorname{span}}^{\textsc{wot}}\{ \ S_{\lambda} \ | \ r(\lambda) = v = s(\lambda), \ \lambda \in E^{\bullet} \ \},
$$
and similarly for every $v\in \tilde{V}$ we have a description as above for $\widetilde{S}_v \widetilde{{\mathfrak{S}}} \widetilde{S}_v$ in terms of cycles in $G / e_0$.
Suppose now that $v \in \tilde{V}$ and that $\lambda$ is a cycle through $v$ in $G$. If $\lambda$ does not go through $v_0$, then $S_v S_{\lambda} S_v = S_v \widetilde{S}_{\lambda} S_v \in S_v \widetilde{{\mathfrak{S}}} S_v$. Next, if $\lambda = \mu \nu$ is a simple cycle such that $s(\mu) = v_0$, since $e_0$ is the unique edge with $r(e_0) = v_0$, we may write $\nu = e_0 \nu'$. We then get that
$$
S_v S_{\lambda} S_v = S_v S_{\mu}J^* S_{\nu'} S_v = S_v \widetilde{S}_{\mu} \widetilde{S}_{\nu'} S_v \in S_v \widetilde{{\mathfrak{S}}} S_v.
$$
For a general cycle $\lambda$ around $v$ which goes through $v_0$, we may decompose it as a concatenation of simple cycles, and apply the above iteratively to eventually get that $S_v S_{\lambda} S_v \in S_v\widetilde{{\mathfrak{S}}} S_v$. Hence, for $v \in \tilde{V}$ we have
\begin{equation*}
S_{v} {\mathfrak{S}} S_{v} = S_{v} \widetilde{{\mathfrak{S}}} S_{v} = S_{v} B({\mathcal{H}})S_{v} = S_v B({\mathcal{K}}) S_v.
\end{equation*}
Finally, for $v = v_0$ fix $\mu$ some cycle going through $v_0$ in $G$, and write $\mu =e_0 \mu'$. For any $\lambda$ which is a cycle in $G$ going through $s(e_0)$, we have that
$$
J^*S_{\lambda}JS_{\mu} = S_{e_0} S_{\lambda} S_{\mu'} \in S_{v_0} {\mathfrak{S}} S_{v_0}.
$$
Hence, from our previous argument applied to $s(e_0) \neq v_0$, we get
\begin{equation} \label{eq:supset-cycle}
J^* S_{s(e_0)} B({\mathcal{H}}) S_{s(e_0)} J S_{\mu} = J^* S_{s(e_0)} {\mathfrak{S}} S_{s(e_0)} J S_{\mu} \subseteq S_{v_0} {\mathfrak{S}} S_{v_0}.
\end{equation}
Next, for $B\in S_{v_0} B({\mathcal{K}}) S_{v_0}$ we take $A= S_{s(e_0)} J B S_{\mu}^*J^* S_{s(e_0)}$ which is now in $S_{s(e_0)} B({\mathcal{H}}) S_{s(e_0)}$, so that
$$
S_{s(e_0)} J B S_{v_0} = A S_{s(e_0)} J S_{\mu} \in S_{s(e_0)} B({\mathcal{H}}) S_{s(e_0)} J S_{\mu}.
$$
By varying over all $B\in S_{v_0} B({\mathcal{K}}) S_{v_0}$ and multiplying by $J^*$ on the left, we get that
\begin{equation*}
S_{v_0} B({\mathcal{K}}) S_{v_0} \subseteq J^* S_{s(e_0)} B({\mathcal{H}}) S_{s(e_0)} J S_{\mu}.
\end{equation*}
Thus, with equation \eqref{eq:supset-cycle} we get that $S_{v_0} B({\mathcal{K}})S_{v_0} = S_{v_0} {\mathfrak{S}} S_{v_0}$. Now, since for any $v\in V$ we have that $S_{v} {\mathfrak{S}} S_{v} = S_v B({\mathcal{K}}) S_v$, by Lemma \ref{l:cornerB(H)} we are done.
\end{proof}
Hence, edge contraction together with Lemma \ref{l:edge-cont} can be repeatedly applied to any finite transitive directed graph $G$ which is not a cycle in order to obtain another such graph $\widetilde{G}$ which has in-degree at least $2$ for every vertex. By applying Proposition \ref{p:edge-cont-FSA} to this procedure, we obtain the following corollary.
\begin{corollary} \label{c:in-deg-2}
Let $G$ be a transitive and finite directed graph which is not a cycle, and let $\widetilde{G}$ be the graph resulting from repeatedly applying edge contractions to edges $e\in E$ with $r(e)$ of in-degree $1$, until no such edges remain. Then $\widetilde{G}$ has in-degree at least $2$ at every vertex, and if $\widetilde{G}$ has $B({\mathcal{H}})$ as a free semigroupoid algebra, then so does $G$.
\end{corollary}
Let $G = (V,E)$ be a finite directed graph which is transitive and has in-degree at least $2$ at every vertex. For a vertex $v\in V$ with in-degree $d_v \geq 3$ we define the \emph{$v$-lag} of $G$ to be the graph $\widehat{G}_v = (\widehat{V}_v, \widehat{E}_v)$ obtained as follows: all vertices besides $v$, and all edges whose range is such a vertex, remain the same. We list as $(u_0,...,u_{d_v-1})$ the tuple of vertices in $G$ which are the source of an edge with range $v$, counted with repetition so that there is a unique edge $e_j$ from each $u_j$ to $v$. We add $d_v-2$ vertices $v_1,...,v_{d_v-2}$ and set an edge $f_i$ from $v_i$ to $v_{i-1}$ when $2 \leq i \leq d_v-2$ and an edge $f_1$ from $v_1$ to $v$. We then replace each edge $e_j$ from $u_j$ to $v$ in $G$ with an edge $\hat{e}_j$ from $u_j$ to $v_j$ when $1 \leq j \leq d_v-2$, replace the edge $e_0$ from $u_0$ to $v$ in $G$ by an edge $\hat{e}_0$ from $u_0$ to $v$, and replace the edge $e_{d_v-1}$ from $u_{d_v-1}$ to $v$ in $G$ with an edge $\hat{e}_{d_v-1}$ from $u_{d_v-1}$ to $v_{d_v-2}$. The resulting graph is over the vertex set $\widehat{V}_v = V \sqcup \{v_1,...,v_{d_v-2}\}$ and we denote it by $\widehat{G}_v$. The construction replaces only those edges going into $v$ with those shown in Figure \ref{f:split} (the sources $u_0,...,u_{d_v -1}$ of such edges may have repetitions), and everything else remains the same.
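For instance (the smallest case, given only as an illustration), if $d_v = 3$ then a single vertex $v_1$ is added, together with the edge $f_1$ from $v_1$ to $v$; the three edges $e_0,e_1,e_2$ into $v$ are replaced by $\hat{e}_0$ from $u_0$ to $v$, $\hat{e}_1$ from $u_1$ to $v_1$ and $\hat{e}_2$ from $u_2$ to $v_1$, so that both $v$ and $v_1$ have in-degree exactly $2$ in $\widehat{G}_v$.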
\begin{figure}\label{f:split}
\end{figure}
\begin{lemma} \label{l:transitive-less-in-deg-3}
Let $G=(V,E)$ be a transitive and finite directed graph with in-degree at least $2$ at every vertex. Let $v \in V$ be a vertex with in-degree $d_v \geq 3$. Then $\widehat{G}_v$ is a finite transitive directed graph with in-degree at least $2$ at every vertex, and has one fewer vertex of in-degree at least $3$.
\end{lemma}
\begin{proof}
We see from the construction that $v_1,..., v_{d_v-2}$, as well as $v$, all have in-degree $2$ in $\widehat{G}_v$, while every other vertex keeps the in-degree it had in $G$. Hence $\widehat{G}_v$ has one fewer vertex of in-degree at least $3$ than $G$.
We next show that $\widehat{G}_v$ is transitive. Indeed, every one of the vertices $\{v_1,..., v_{d_v-2}\}$ leads to $v$, and each of the vertices $u_1,...,u_{d_v-1}$ leads to one of $v_1,...,v_{d_v-2}$, so it suffices to show that for any two vertices $w,u \in V$ we have a path in $\widehat{G}_v$ from $w$ to $u$. Since $G$ is transitive, we have a path $\lambda = g_1...g_n$ in $G$ from $w$ to $u$. Next, define $\hat{g}_k$ to be $g_k$ if $g_k \neq e_j$ for all $j$, and whenever $g_k = e_j$ for some edge $e_j$ with range $v$ we set $\hat{g}_k:= f_1...f_j\hat{e}_j$, which is a path in $\widehat{G}_v$ from $u_j$ to $v$. This way the new path $\hat{g} = \hat{g}_1...\hat{g}_n$ is a path in $\widehat{G}_v$ from $w$ to $u$.
\end{proof}
Our construction depends on the choice of orderings for the $\{u_j\}$, so when we write $\widehat{G}_v$ we mean a fixed ordering for sources of incoming edges of the vertex $v$ in the above construction.
Next, let $G=(V,E)$ be a finite directed graph with in-degree at least $2$ at every vertex. For a vertex $v$ of in-degree at least $3$, let $\widehat{G}_v$ be the $v$-lag of $G$. We define a map $\theta$ on paths $E^{\bullet}$ of $G$ as follows: if $e\in E$ is any edge with $r(e) \neq v$, we define $\theta(e) = e$; if $e_j$ is the unique edge from $u_j$ to $v$ in $G$, we define $\theta(e_j) = f_1 f_2 ... f_j \hat{e}_j$, which is a path from $u_j$ to $v$ in $\widehat{G}_v$. The map $\theta$ then extends to a map (still denoted by $\theta$) on finite paths $E^{\bullet}$ of $G$ by concatenation, whose restriction to $V$ is the embedding of $V \subseteq \widehat{V}_v$.
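In the $d_v = 3$ illustration above this reads $\theta(e_0)=\hat{e}_0$, $\theta(e_1)=f_1\hat{e}_1$ and $\theta(e_2)=f_1\hat{e}_2$ (for the last edge only the chain $f_1$ appears, since $\hat{e}_2$ already ranges in $v_1$); in general $\theta$ simply replaces each edge into $v$ by the corresponding detour through the new vertices $v_1,...,v_{d_v-2}$.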
\begin{lemma} \label{l:bijection}
Let $G=(V,E)$ be a transitive and finite directed graph with in-degree at least $2$ at every vertex. For a vertex $v\in V$ of in-degree at least $3$, let $\widehat{G}_v$ be the $v$-lag of $G$. Then $\theta$ is a bijection between $E^{\bullet}$ and paths in $\widehat{E}_v^{\bullet}$ whose range and source are both in $V$.
\end{lemma}
\begin{proof}
Since $\theta$ is injective on edges and vertices, it must be injective on paths as it is defined on paths by extension.
Suppose now that $u \in V$ and $\widehat{\lambda} = \hat{g}_1 ... \hat{g}_j$ is a path in $\widehat{G}_v$ from $u$ to $v$ such that $s(\hat{g}_k) \notin V$ for $k=1,...,j-1$. Then it must be that $\widehat{\lambda} = f_1f_2...f_j\hat{e}_j$ with $s(\hat{e}_j) = u_j = u$, so that $\widehat{\lambda} = \theta(e_j)$. A general path in $\widehat{G}_v$ between two vertices in $V$ is then the concatenation of edges in $E$ and paths of the form $f_1f_2...f_j\hat{e}_j$ as above, so that $\theta$ is surjective.
\end{proof}
Applying a $v$-lag at each vertex of in-degree at least $3$ repeatedly until there are no more such vertices, we obtain a directed graph $\widehat{G} = (\widehat{V},\widehat{E})$. Since the construction at each vertex of in-degree at least $3$ changes only edges going into that vertex, the order in which we apply the lags does not matter, and we will get the same directed graph $\widehat{G} = (\widehat{V},\widehat{E})$. The following results from applying Lemma \ref{l:transitive-less-in-deg-3} and Lemma \ref{l:bijection} at each step of this process.
\begin{corollary} \label{c:in-deg-reg-2}
Let $G=(V,E)$ be a transitive and finite directed graph with in-degree at least $2$ at every vertex. Let $\widehat{G} = (\widehat{V},\widehat{E})$ be the graph resulting from repeatedly applying a $v$-lag at every vertex $v$ of in-degree at least $3$. Then $\widehat{G}$ is in-degree $2$-regular and there is an embedding $V\subseteq \widehat{V}$ which extends to a bijection $\theta$ from paths $E^{\bullet}$ to paths in $\widehat{E}^{\bullet}$ whose range and source are both in $V$.
\end{corollary}
Now let $G=(V,E)$ be a finite directed graph with in-degree at least $2$ at every vertex, and $\widehat{G} = (\widehat{V},\widehat{E})$ be the graph in Corollary \ref{c:in-deg-reg-2}. Given a strong edge coloring $c$ of $\widehat{G}$ with two colors $\{1,2\}$, each edge $g \in E$ inherits a label $\ell(g) := c(\theta(g))$, which is a word in $\{1,2\}$. We then extend this labeling to paths in $G$ by setting, for any path $\lambda = g_1...g_n \in E^{\bullet}$ in $G$, the label $\ell(\lambda) = c(\theta(\lambda)) = c(\theta(g_1)) ... c(\theta(g_n))$, where $\theta(\lambda) \in \widehat{E}^{\bullet}$ is the path in $\widehat{G}$ corresponding to $\lambda$, whose range and source are always in $V$.
Using the labeling $\ell$ we construct a Cuntz-Krieger family $\mathrm{S}^{\ell} = (S^{\ell}_v,S^{\ell}_e)$ for our original graph $G$ as follows: let ${\mathcal{H}}$ be a separable infinite dimensional Hilbert space. Let ${\mathcal{K}} = \bigoplus_{v \in V} {\mathcal{H}}_v$ where ${\mathcal{H}}_v$ is a copy of ${\mathcal{H}}$ identified via a unitary $J_v : {\mathcal{H}}_v \rightarrow {\mathcal{H}}$. First we define $S^{\ell}_v$ to be the projection onto ${\mathcal{H}}_v$ for $v \in V$. Then, for $e \in E$ we define $S^{\ell}_e$ by linearly extending the rule
$$
S^{\ell}_e\xi = \begin{cases} J_{r(e)}^*Z_{\ell(e)} J_{s(e)} \xi & \text{ for } \xi \in {\mathcal{H}}_{s(e)}\\
0 & \text{ for } \xi \in {\mathcal{H}}_{s(e)}^{\perp}.
\end{cases}
$$
where $Z_{\ell(e)}$ is the composition of the Read isometries $Z_1$ and $Z_2$ given as follows: for each $e\in E$ there are $f_1,...,f_j \in \widehat{E}$ (or none at all when $r(e)$ has in-degree $2$ in $G$) such that $\theta(e) = f_1 f_2 ... f_j \hat{e}$ as in the iterated construction of $\widehat{G}$. Thus, we get that $Z_{\ell(e)} = Z_{c(\theta(e))} = Z_{c(f_1)} \circ ... \circ Z_{c(f_j)} \circ Z_{c(\hat{e})}$.
\begin{proposition}\label{p:CKgen}
Let $G$ be a transitive and finite directed graph such that all vertices have in-degree at least $2$, and let $\widehat{G}$ be the in-degree $2$ regular graph constructed in Corollary \ref{c:in-deg-reg-2}. Let $p$ be the period of $\widehat{G}$. Then for any strong edge coloring $c: \widehat{E} \rightarrow \{1,2\}$ for $\widehat{G}$ we have that $\mathrm{S}^{\ell}$ is a CK family for $G$. Furthermore, if $c$ is $p$-synchronizing for $\widehat{G}$, then the free semigroupoid algebra ${\mathfrak{S}}^{\ell}$ generated by $\mathrm{S}^{\ell}$ as above is $B({\mathcal{K}})$.
\end{proposition}
\begin{proof}
We first show that $\mathrm{S}^{\ell}$ is a CK family, given a strong edge coloring $c$ for $\widehat{G}$. It is easy to show by definition that $\mathrm{S}^{\ell}$ is a TCK family, so we show the condition that makes it into a CK family. Indeed, for $v\in V$ we have that
\begin{equation} \label{e:labelck}
\sum_{e\in r^{-1}(v)} S_e^{\ell}(S_e^{\ell})^* = S^{\ell}_vJ_v^* \Big( \sum_{e \in r^{-1}(v)}Z_{\ell(e)}Z_{\ell(e)}^* \Big) J_v S^{\ell}_v.
\end{equation}
Now, let the in-degree of $v$ be $d=d_v$, and let $(u_0,...,u_{d-1})$ be the sources of edges incoming to $v$ in $G$. Then in $\widehat{G}$ the path $\theta(e_j)$ (associated to the edge $e_j$ from $u_j$ to $v$ in $G$) is given by $\theta(e_j) = f_1f_2 .. f_j \hat{e}_j$. Hence, since $c$ is a strong edge coloring with two colors we see that
$$
Z_{c(f_1...f_{d-2})}Z_{c(f_1...f_{d-2})}^* = Z_{c(\theta(e_{d-1}))} Z_{c(\theta(e_{d-1}))}^* + Z_{c(\theta(e_{d-2}))}Z_{c(\theta(e_{d-2}))}^*
$$
as well as
$$
Z_{c(f_1...f_j)}Z_{c(f_1...f_j)}^* = Z_{c(\theta(e_j))} Z_{c(\theta(e_j))}^* + Z_{c(f_1...f_jf_{j+1})}Z_{c(f_1...f_jf_{j+1})}^*.
$$
By applying these identities repeatedly we obtain that
$$
\sum_{e \in r^{-1}(v)}Z_{\ell(e)}Z_{\ell(e)}^* = I_{{\mathcal{H}}}.
$$
Thus from equation \eqref{e:labelck} we get that $\mathrm{S}^{\ell}$ is a CK family.
Next, suppose that the strong edge coloring $c$ of $\widehat{G}$ is $p$-synchronizing. We show that the free semigroupoid algebra ${\mathfrak{S}}^{\ell}$ of $\mathrm{S}^{\ell}$ for the graph $G$ is $B({\mathcal{K}})$. Let $v \in V$ be a vertex. By the second part of Corollary \ref{c:p-synch-exists} we have a $p$-synchronizing word $\gamma_v$ for $v \in \widehat{V}$ in the sense that whenever $u \in \widehat{V}$ is in the same cyclic component as $v$ in $\widehat{G}$, then $u \cdot \gamma_v = v$. Let $\gamma$ be a word in the two colors of length divisible by $p$. There is a unique path $\widehat{\lambda}$ in $\widehat{G}$ with $r(\widehat{\lambda}) =v$ such that $c(\widehat{\lambda}) = \gamma$. Since the length of $\widehat{\lambda}$ is divisible by $p$, we have that $s(\widehat{\lambda})$ must also be in the same cyclic component as $v$, so by the $p$-synchronizing property of $\gamma_v$ there is a unique path $\widehat{\lambda}_v$ with $c(\widehat{\lambda}_v) = \gamma_v$ and $r(\widehat{\lambda}_v) = s(\widehat{\lambda})$ and $s(\widehat{\lambda}_v) = v$. This defines a cycle $\widehat{\lambda} \widehat{\lambda}_v$ around $v$ whose color is $c(\widehat{\lambda}\widehat{\lambda}_v) = \gamma\gamma_v$.
By Corollary \ref{c:in-deg-reg-2}, there is a unique cycle $\mu$ around $v$ in $G$ such that $\theta(\mu) = \widehat{\lambda} \widehat{\lambda}_v$. Then $\ell(\mu) = c(\widehat{\lambda}\widehat{\lambda}_v) = \gamma \gamma_v$, and we get that
$$
S^{\ell}_{\mu} = S^{\ell}_vJ_{v}^*Z_{\gamma}Z_{\gamma_v} J_{v} S^{\ell}_v \in S^{\ell}_v \mathfrak{S}S^{\ell}_v.
$$
Since $\widehat{\lambda}$ is a general path with range $v$ whose length is divisible by $p$, we see that $c(\widehat{\lambda}) = \gamma$ is an arbitrary word of length divisible by $p$. Thus, by Proposition \ref{p:pgen} we obtain $S_vJ_v^*BZ_{\gamma_v} J_v S_v \in S_v {\mathfrak{S}} S_v$ for any $B \in B({\mathcal{H}})$. By taking $B = A Z_{\gamma_v}^*$ we see that $S_v J_v^* A J_v S_v \in S_v {\mathfrak{S}} S_v$ for any $A \in B({\mathcal{H}})$. Finally, we have shown that $S_v {\mathfrak{S}} S_v = S_v B({\mathcal{K}}) S_v$ for arbitrary $v\in V$ so that by Lemma \ref{l:cornerB(H)} we conclude that ${\mathfrak{S}} = B({\mathcal{K}})$.
\end{proof}
\begin{example}\label{e:embed}
In the case where the graph $G$ is a single vertex with $d\geq 3$ edges, the construction of $\widehat{G}$ yields the graph shown in Figure \ref{f:onevert}.
\begin{figure}\label{f:onevert}
\end{figure}
A strong $2$-coloring of the graph in Figure \ref{f:onevert} lifts to a coloring of $G$ by words in $\{1,2\}$. This determines a monomial embedding ${\mathcal{O}}_d \hookrightarrow {\mathcal{O}}_2$, where each generator for ${\mathcal{O}}_d$ is sent to the appropriate composition of the generators of $\mathcal{O}_2$. Such monomial embeddings arise and are studied in \cite{Lin20}. If we represent the canonical generators of ${\mathcal{O}}_2$ by a pair of Read isometries $Z_1,Z_2$, we obtain a representation of ${\mathcal{O}}_d$. If the coloring is synchronizing, the generating isometries of ${\mathcal{O}}_d$ will generate $B({\mathcal{H}})$ as a free semigroup algebra. In fact, any strong 2-coloring of the graph in Figure \ref{f:onevert} is synchronizing: if the edge labeled $e$ has color $i \in \{1,2\}$, then it is easy to see that $i^d$ is a synchronizing word for the vertex $v$.
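For instance (a standard computation, recorded only as an illustration and corresponding to one particular choice of coloring), for $d=3$ one may send the generators $s_1,s_2,s_3$ of ${\mathcal{O}}_3$ to $z_1$, $z_2z_1$ and $z_2z_2$ respectively, where $z_1,z_2$ denote the generators of ${\mathcal{O}}_2$; then
$$s_1s_1^*+s_2s_2^*+s_3s_3^*\ \longmapsto\ z_1z_1^*+z_2\big(z_1z_1^*+z_2z_2^*\big)z_2^*=z_1z_1^*+z_2z_2^*=I,$$
so the Cuntz relation is preserved and the assignment indeed defines a monomial embedding ${\mathcal{O}}_3\hookrightarrow{\mathcal{O}}_2$.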
\end{example}
\section{Self-adjoint free semigroupoid algebras} \label{s:main-theorem}
In this section we tie everything together to obtain our main theorem, and make a few concluding remarks.
\noindent {\bf Theorem \ref{t:safsa}} (Self-adjoint free semigroupoid algebras){\bf .} \emph{
Let $G = (V,E)$ be a finite graph. There exists a fully supported CK family $\mathrm{S}=(S_v,S_e)$ which generates a \emph{self-adjoint} free semigroupoid algebra ${\mathfrak{S}}$ if and only if $G$ is the union of transitive components.}
\emph{Furthermore, if $G$ is transitive and not a cycle then $B({\mathcal{H}})$ is a free semigroupoid algebra for $G$ where ${\mathcal{H}}$ is a separable infinite dimensional Hilbert space.}
\begin{proof}
If ${\mathfrak{S}}$ is self-adjoint, Theorem \ref{t:sa-rest} tells us that $G$ must be the disjoint union of transitive components.
Conversely, if $G$ is the union of transitive components, we form, for each transitive component separately, a CK family whose free semigroupoid algebra is self-adjoint, and then obtain one for $G$ by taking their direct sum. Hence, we need only show that every finite transitive graph $G$ has a self-adjoint free semigroupoid algebra. There are two cases:
\begin{enumerate}
\item {\bf If $G$ is a cycle of length $n$:}
By item (1) of \cite[Theorem 5.6]{DDL20} we have that $M_n(L^{\infty}(\mu))$ is a free semigroupoid algebra when $\mu$ is a measure on the unit circle $\mathbb{T}$ which is not absolutely continuous with respect to Lebesgue measure $m$ on $\mathbb{T}$.
\item {\bf If $G$ is not a cycle:}
By Corollary \ref{c:in-deg-2} we may assume without loss of generality that $G$ has no vertices with in-degree $1$. By the first part of Corollary \ref{c:p-synch-exists} there exists a $p$-synchronizing strong edge coloring $c$ for $\widehat{G}$, so we may apply Proposition \ref{p:CKgen} to deduce that $B({\mathcal{H}})$ is a free semigroupoid algebra for $G$.
\end{enumerate}
\end{proof}
\begin{remark}
Note that in the proof of Proposition \ref{p:CKgen}, since $\widehat{G}$ is always in-degree $2$-regular, we only needed the in-degree $2$-regular case of the periodic Road Coloring Theorem in Corollary \ref{c:p-synch-exists}. On the other hand, we could only prove Proposition \ref{p:pgen} for the free semigroup on two generators, so it was also necessary to reduce the problem to the in-degree $2$-regular case. The latter explains why an iterated version of Proposition \ref{p:CKgen} akin to the one for the first construction in Proposition \ref{p:edge-cont-FSA} is not so readily available.
\end{remark}
To construct a suitable Cuntz-Krieger family as in the discussion preceding Proposition \ref{p:CKgen}, for each vertex in $G$ with in-degree at least $3$ we must choose a monomial embedding of ${\mathcal{O}}_d$ into ${\mathcal{O}}_2$. Example \ref{e:embed} shows that choosing a monomial embedding is equivalent to choosing a strong edge coloring of a certain binary tree with $d$ leaves. Constructing such a tree for each vertex gives the construction of $\widehat{G}$, and this is where the intuition for our proof originated.
\subsection*{Acknowledgments} The first author would like to thank Boyu Li for discussions that led to the proof of Proposition \ref{p:pgen}. Both authors are grateful to the anonymous referees for pointing out issues with older proofs, and for providing suggestions that improved the exposition of the paper. Both authors are also grateful to Florin Boca and Guy Salomon for many useful remarks on previous draft versions of the paper.
\end{document} |
\begin{document}
\begin{abstract}
Let $G$ be a connected and not necessarily compact Lie group acting on a connected manifold $M$. In this short note we announce the following result: for a $G$-invariant closed differential form on $M$, the existence of a closed equivariant
extension in the Cartan model for equivariant cohomology is equivalent to the existence of an extension in the homotopy quotient.
\end{abstract}
\title{On the equivariant de Rham cohomology for non-compact Lie groups}
\newlength{\origlabelwidth} \setlength\origlabelwidth\labelwidth
\section{Introduction}
The Cartan model for the equivariant cohomology of the manifold $M$
$$\Omega_G^*M:= (S({\mathfrak g}^*) \otimes \Omega^*M)^G, \ \ d_G= d + \Omega^a\iota_{X_a}$$ can be seen as the
de Rham version for the equivariant cohomology. Whenever the Lie group $G$ is compact, Cartan proved an equivariant
version of the de Rham Theorem, stating that the cohomology of the Cartan complex is canonically isomorphic
to the cohomology with real coefficients of the homotopy quotient, $H^*(\Omega_G^*M) \cong H^*(M \times_G EG; \mathbb{R})$ \cite{Cartan}, cf. \cite[Thm. 2.5.1]{Guillemin-Sternberg}. When the Lie group $G$ is not compact, the cohomology of the complex $\Omega_G^*M$ (which we also call the Cartan complex) fails in many situations to be isomorphic
to the cohomology of the homotopy quotient, and the explicit relation between the two has scarcely been addressed.
Nevertheless, the Cartan complex is very well suited for studying equivariant conditions at the infinitesimal level.
Of particular interest is the study of the
conditions under which there is absence of anomalies in gauged WZW actions on Lie groups. In \cite{Witten} Witten showed that the absence of anomalies in gauged WZW actions on compact
Lie groups was equivalent to the existence of a closed equivariant extension of the WZW term in the Cartan complex, further showing that the existence or absence of anomalies is
purely topological. The arguments of Witten can be extended
without trouble to the non-compact case (see \cite[Chapter 4]{Uribe}), and together with the main result of this paper they show that the absence or existence of anomalies is a purely topological fact, independent of the compactness of the Lie group.
In this short note we investigate the relation between the cohomology
of the $G$-equivariant Cartan complex of $M$ and the cohomology of the homotopy quotient $M \times_G EG$, and we show
that indeed there is a surjective map from the former to the latter. In particular this result implies that
for a $G$-invariant closed differential form on $M$, the existence of a closed equivariant
extension in the Cartan model for equivariant cohomology is equivalent to the existence of an extension in the homotopy quotient.
\section{Equivariant Cartan complex for connected Lie groups}
Let $G$ be a connected Lie group with Lie algebra ${\mathfrak g}$. Let $K \subset G$ be a maximal compact subgroup of $G$ and denote
by $\mathfrak k$ its Lie algebra. The inclusion
of Lie algebras ${\mathfrak k} \hookrightarrow {\mathfrak g}$ induces a dual map ${\mathfrak g}^* \to {\mathfrak k}^*$ which is ${\mathfrak k}$-equivariant. Therefore
we have the $K$-equivariant map
$$S({\mathfrak g}^*) \to S(\mathfrak k^*)$$
from the symmetric algebra on ${\mathfrak g}^*$ to the symmetric algebra on $\mathfrak k^*$.
Consider a manifold $M$ endowed with an action of $G$. The Cartan complex associated to the $G$-manifold $M$ is
$$\Omega_G^*M:= (S({\mathfrak g}^*) \otimes \Omega^*M)^G, \ \ d_G= d + \Omega^a\iota_{X_a}$$
where $a$ runs over a basis of ${\mathfrak g}$, $\Omega^a$ denotes the element in ${\mathfrak g}^*$ dual to $a$ and $X_a$ is the vector field
on $M$ induced by the element $a \in {\mathfrak g}$.
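Recall (a standard computation, recorded here only for later convenience) that $d_G$ indeed squares to zero on this complex: the term quadratic in the $\Omega^a$ vanishes since the $\Omega^a$ commute while the contractions anticommute, and Cartan's formula then gives
$$d_G^2=\sum_a \Omega^a\big(d\,\iota_{X_a}+\iota_{X_a}\,d\big)=\sum_a \Omega^a\,{\mathcal L}_{X_a},$$
which is zero on $G$-invariant elements, $G$ being connected.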
\begin{remark}
In the literature, whenever the Cartan complex is used, it is assumed that the Lie group is compact. In this note
we extend the use of Cartan's construction to the non-compact case.
\end{remark}
The composition of the natural maps
$$ (S({\mathfrak g}^*) \otimes \Omega^*M)^G \hookrightarrow (S({\mathfrak g}^*) \otimes \Omega^*M)^K \to (S(\mathfrak k^*) \otimes \Omega^*M)^K$$
induces a homomorphism of Cartan complexes
$$\Omega_G^* M \to \Omega_K^*M.$$
\begin{theorem}
Let $G$ be a connected Lie group with Lie algebra ${\mathfrak g}$, let $\mathfrak k$ be the Lie algebra
of the maximal compact subgroup $K$ of $G$ and consider a $G$-manifold $M$. Then the map
$$\Omega_G^* M \to \Omega_K^*M$$
induces a surjective map in cohomology
$$H^*(\Omega_G^* M, d_G) \twoheadrightarrow H^*(\Omega_K^* M, d_K).$$
Since there are canonical isomorphisms $ H^*(\Omega_K^* M, d_K)\cong H^*(M \times_K EK,\mathbb{R}) \cong H^*(M \times_G EG,\mathbb{R})$, we conclude that the canonical map
$$H^*(\Omega_G^* M, d_G) \twoheadrightarrow H^*(M \times_G EG,\mathbb{R})$$
is surjective.
\end{theorem}
\begin{proof}
Consider the complex $C^k(G, S({\mathfrak g}^*) \otimes \Omega^\bullet M
)$ defined in \cite[Section 2.1]{Getzler} whose elements are smooth maps
$$f(g_1, \dots , g_k | X) : G^k \times \mathfrak{g} \to \Omega^\bullet M, $$
which vanish if any of the arguments $g_i$ equals the identity of
$G$. The differentials $d$ and $\iota$ are defined by the formulas
\begin{eqnarray*}
(df)(g_1, \dots , g_k | X) &=& (-1)^k df(g_1, \dots , g_k | X) \ \ \ \ \ \ {\rm{and}}\\
(\iota f) (g_1, \dots , g_k | X) &=& (-1)^k \iota(X) f(g_1, \dots
, g_k | X),
\end{eqnarray*} as in the case of the differentials in Cartan's
model for equivariant cohomology \cite{Cartan, Guillemin-Sternberg}.
The differential $\bar{d}: C^k \to C^{k+1}$ is defined by the formula
\begin{eqnarray*}
(\bar{d}f)(g_0, \dots , g_k|X) & = & f( g_1, \dots , g_k | X ) +
\sum_{i=1}^k (-1)^i f(g_0, \dots, g_{i-1}g_i, \dots , g_k | X)\\
& & +(-1)^{k+1} g_k f(g_0, \dots , g_{k-1} | {\rm{Ad}}(g_k^{-1})X),
\end{eqnarray*}
and the fourth differential $\bar{\iota} : C^k \to C^{k-1}$ is defined by
the formula
\begin{eqnarray*}
(\bar{\iota}f)(g_1, \dots , g_{k-1}|X) & = & \sum_{i=0}^{k-1} (-1)^i
\frac{\partial}{\partial t}\Big|_{t=0} f(g_1, \dots, g_i, e^{tX_i}, g_{i+1},
\dots , g_{k-1} | X),
\end{eqnarray*}
where $X_i= {\rm{Ad}}(g_{i+1} \dots g_{k-1})X$.
If the map
$$ f: G^k \to S( \mathfrak g^*) \otimes \Omega^\bullet M $$
takes values in the homogeneous polynomials of degree $l$, then the total degree
of the map $f$ is $\deg(f)=k+l$. The structural
maps $d, \iota, \bar{d}$ and $\bar{\iota}$ are all of degree 1, and
the operator $$d_G = d + \iota +\bar{d} + \bar{\iota}$$ becomes a
degree 1 map that squares to zero.
The cohomology of the complex
$$\left( C^*(G,
S({\mathfrak g}^*) \otimes \Omega^\bullet M ) , d_G \right)$$ will be denoted by
$$H^*(G, S({\mathfrak g}^*) \otimes \Omega^\bullet M )$$
and in \cite[Thm. 2.2.3]{Getzler} it was shown that there is a canonical isomorphism of rings
$$H^*(G, S({\mathfrak g}^*) \otimes \Omega^\bullet M ) \cong H^*(M \times_G
EG ; \mathbb{R}).$$
Note that there are natural maps of complexes
$$C^*(G, S({\mathfrak g}^*) \otimes \Omega^*M) \to C^*(K,S(\mathfrak k^*)\otimes \Omega^*M)$$
inducing an isomorphism on cohomology groups
$$H^*(G, S({\mathfrak g}^*) \otimes \Omega^*M) \stackrel{\cong}{\to} H^*(K,S(\mathfrak k^*) \otimes \Omega^*M).$$
This isomorphism follows from the fact that the inclusion $K \subset G$ is a homotopy equivalence inducing
a homotopy equivalence $$M\times_KEK \simeq M \times_G EG$$ and the fact that $$H^*(M\times_G EG,\mathbb{R}) \cong H^*(G, S({\mathfrak g}^*)\otimes \Omega^*M)$$ for any
connected Lie group $G$.
Filtering the double complex $C^*(G, S({\mathfrak g}^*)\otimes \Omega^*M)$ by the degree of the elements in $S({\mathfrak g}^*)\otimes \Omega^*M$ we obtain a spectral
sequence whose first page is
$$E_1=H^*_d(G,S({\mathfrak g}^*)\otimes \Omega^*M),$$ the differentiable cohomology of $G$ with values in the graded
representation $S({\mathfrak g}^*)\otimes \Omega^*M$. Note that in the 0-th row we obtain
$$E_1^{*,0}= (S({\mathfrak g}^*)\otimes \Omega^*M)^G=\Omega_G^*M.$$
The same degree filtration applied to the complex $C^*(K,S(\mathfrak k^*)\otimes \Omega^*M)$ produces a spectral sequence
whose first page is $\overline{E}_1=H^*_d(K,S(\mathfrak k^*)\otimes \Omega^*M)$; since $K$ is compact, its higher differentiable cohomology vanishes (one may average over $K$), so this simply becomes
$$\overline{E}_1^{*,0}=(S(\mathfrak k^*)\otimes \Omega^*M)^K=\Omega_K^*M$$
with $\overline{E}_1^{p,q}=0$ for $q\neq 0$.
The first differential of the spectral sequence once restricted to the 0-th row $E_1^{*,0}=\Omega_G^*M$ is precisely
the differential of the Cartan complex; therefore we obtain
$$E_2^{*,0}= H^*(\Omega_G^*M).$$
Analogously we obtain
$$\overline{E}_2^{*,0}= H^*(\Omega_K^*M)\cong H^*(M\times_K EK, \mathbb{R}),$$
but in this case the spectral sequence collapses at the second page and the only nonzero elements
in $\overline{E}_\infty$ appear on the 0-th row $\overline{E}_\infty^{*,0}\cong H^*(M\times_K EK, \mathbb{R})$.
The canonical map between the complexes
$$C^*(G, S({\mathfrak g}^*) \otimes \Omega^*M) \to C^*(K,S(\mathfrak k^*)\otimes \Omega^*M)$$ induces a map of spectral sequences $E_\bullet \to \overline{E}_\bullet$, and we know
that on the $E_\infty$-pages it induces an isomorphism $E_\infty^{*,\star} \stackrel{\cong}{\to} \overline{E}_\infty^{*,\star}$. Therefore
the map
$$E_2^{*,0} \to \overline{E}_2^{*,0}$$
must be a surjective map, and hence we have the canonical map
$$\Omega_G^*M = E_1^{*,0} \to \overline{E}_1^{*,0}= \Omega_K^*M$$
inducing the desired
surjective map in cohomology
$$H^*(\Omega_G^* M, d_G) \twoheadrightarrow H^*(\Omega_K^* M, d_K).$$
\end{proof}
Finally, from the previous theorem we may conclude:
\begin{corollary}
For a $G$-invariant closed differential form on $M$, the existence of a closed equivariant
extension in the Cartan model for equivariant cohomology is equivalent to the existence of an extension in the homotopy quotient.
\end{corollary}
\begin{proof}
A $G$-invariant closed differential form on $M$ may be extended to a closed form in the Cartan complex
if and only if its cohomology class lies in the image of the projection map
$$H^*(\Omega_G^* M, d_G) \to H^*(M).$$
This projection map can be seen as the composition of the maps
$$H^*(\Omega_G^* M, d_G) \twoheadrightarrow H^*(\Omega_K^* M, d_K) \to H^*(M).$$
Since the left hand side map is surjective, a $G$-invariant closed differential form on $M$
may be extended to a closed form in the Cartan complex
if and only if its cohomology class lies in the image of the right hand side map.
The canonical isomorphisms
$$H^*(M\times_GEG; \mathbb{R}) \cong H^*(M\times_KEK; \mathbb{R}) \cong H^*(\Omega_K^* M, d_K)$$
imply the result.
\end{proof}
\end{document}
\begin{document}
\author{Olivia Dumitrescu}
\title{Plane curves with prescribed triple points: a toric approach}
\maketitle
\begin{abstract}
We use toric degenerations of the projective plane ${{\mathbb{P}}^ 2}$ to give a new proof of the triple point interpolation problem. We also give a complete list of the toric surfaces that are useful as components in this degeneration.
\end{abstract}
\section{Introduction}
Let ${\mathcal{L}}_{d}(m_{1},...,m_{r})$ denote the linear system of curves in ${{\mathbb{P}}^ 2}$ of degree $d$ that pass through $r$ general points $P_{1},...,P_{r}$ with multiplicity at least $m_{i}$ at $P_i$. A natural question is to compute the projective dimension of the linear system ${\mathcal{L}}$.
The virtual dimension of $\mathcal{L}$ is
$$v({\mathcal{L}}_{d}(m_1,...,m_r)):= \binom {d+2} {2}-\sum^{r}_{i=1} \binom {m_{i}+1} {2}-1$$
and the expected dimension is
$e({\mathcal{L}}):=\max\{v({\mathcal{L}}),-1\}.$
There are some elementary cases for which $\dim({\mathcal{L}}) \neq e({\mathcal{L}})$. A linear system for which $\dim {\mathcal{L}}> e({\mathcal{L}})$ is called a special linear system. However, if we consider the homogeneous case when all multiplicities are equal, $m_1=...=m_r=m$, the linear system (denoted by ${\mathcal{L}}_{d}(m^r)$) is expected to be non-special when $r$ is large enough. In this paper we will only consider the case $m=3$; the virtual dimension becomes $v({\mathcal{L}}_{d}(3^r))=\binom{d+2}{2}-6r-1=\frac{d(d+3)}{2}-6r$. In \cite{CDM07} the authors used a toric degeneration of the Veronese surface into a union of projective planes for the double point interpolation problem, i.e. $m=2$. This paper extends the degeneration used in \cite{CDM07} to the triple point interpolation problem.
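For instance, for $d=4$ and $r=2$ triple points this gives $v({\mathcal{L}}_{4}(3^{2}))=\binom{6}{2}-12-1=2$, which is the expected dimension appearing in the final remark of this paper.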
A triple point in the projective plane imposes six conditions, so in this paper we will classify the toric surfaces $(X, {\mathcal{L}})$ with $h^{0}(X, {\mathcal{L}})=6$ (see Proposition \ref{Classification}). In particular we will analyse the ones for which the linear system becomes empty when imposing a triple point, and call them $Y_{i}$.
We will then use a toric degeneration of the projective plane, embedded via a linear system ${\mathcal{L}}$, into a union of planes, quadrics and $r$ disjoint toric surfaces $Y_{i}$. On each surface $Y_{i}$ we will place one triple point, and by a semicontinuity argument we will prove the non-speciality of ${\mathcal{L}}$, see
Theorem \ref {triple plane}.\\\\
We remark that this result can be generalized to any dimension, i.e. a list of toric varieties becoming empty when imposing a multiplicity $m$ point could be described in a similar way. However, the combinatorial degeneration and the construction of the lifting function are not very well understood.
In \cite{Len08} T. Lenarcik used an algebraic approach to study triple point interpolation in ${\mathbb{P}}^ {1}\times {\mathbb{P}}^ {1}$; however, the list of the algebraic polygons is slightly different from ours and the connection with toric degenerations is not explicit.
Toric degenerations of three dimensional projective space have been used by S. Brannetti to give a new proof of the Alexander-Hirschowitz theorem in dimension three in \cite{Brann08}. Degenerations of $n$-dimensional projective space have been used by E. Postinghel to give a new proof for the Alexander-Hirschowitz theorem in any dimension in \cite{Postinghel}.
\indent
\section{Toric varieties and toric degenerations.}\label{sec:toric}
We recall a few basic facts
about toric degenerations of projective toric varieties. We refer to \cite {hu} and \cite{WF}
for more information on the subject
and to \cite {gath} for relations with tropical geometry.
The datum of a pair $(X,{\mathcal{L}})$,
where $X$ is a projective, $n$--dimensional toric variety
and ${\mathcal{L}}$ is a base point free, ample line bundle on $X$,
is equivalent to the datum of
an $n$-dimensional convex lattice polytope ${\mathcal{P}}$ in ${\mathbb{R}}^ n$, determined up to translation and the action of $SL_{n}^{\pm}({\mathbb{Z}})$.
We consider a polytope ${\mathcal{P}}$ and a \emph{subdivision} ${\mathcal{D}}$ of ${\mathcal{P}}$ into convex subpolytopes, i.e. a finite family of $n$-dimensional convex
polytopes whose union is ${\mathcal{P}}$ and such that
any two of them intersect only along a face (which may be empty).
Such a subdivision is called \emph{regular} if there is a positive convex function $F$ defined on ${\mathcal{P}}$ which is linear on each subpolytope of the subdivision; such a function $F$ will be called a lifting function. Regular subdivisions correspond to degenerations of toric varieties.
We will now prove a technical lemma that will enable us to easily demonstrate the existence of a lifting function whenever we need one.
Let $X$ be a toric surface and ${\mathcal{P}}$ be its associated polygon.
Consider a line $L$ that separates two disjoint polygons ${\mathcal{P}_1}$ and ${\mathcal{P}_2}$ contained in ${\mathcal{P}}$ and does not contain any integer point, and let $X_{1}$ and $X_{2}$ be their corresponding toric surfaces.
\begin{lemma}
The toric surface $X$ degenerates into a union of toric surfaces, two of which are $X_{1}$ and $X_{2}$, and these two are skew.
\end{lemma}
\begin{proof}
We consider the convex piecewise linear function given by
$$f(x,y,z)=\max \{z, L+z \}$$
Consider the image of the points on the boundary of the polygons ${\mathcal{P}_{1}}$ and ${\mathcal{P}_{2}}$ through $f$.
Change the function $f$ by interposing the convex hull of the boundary points separated by $L$ between $l_{1}$ and $l_{2}$ (as in Figure \ref{lift1}).
The function will still be convex and piecewise linear, therefore we get a regular subdivision.
We consider now the toric varieties associated to each
polygon, and since ${\mathcal{P}_{1}}$ and ${\mathcal{P}_{2}}$ are disjoint, we obtain that two of the toric surfaces that appeared in the degeneration, namely $X_1$ and $X_2$, are skew.
\end{proof}
For example, in the picture below we have four polygons, two of which are disjoint. The corresponding degeneration
will contain four toric varieties, two of them $X_{1}$ and $X_{2}$, being skew.
\begin{figure}\label{lift1}
\end{figure}
It is easy to see how we could iterate this process. Let $M$ be a line cutting the polygon associated to $X_{2}$ and not containing any of its interior points. Then $X_{2}$ degenerates into a union of toric surfaces, two of which, $Y_{2}$ and $Y_{3}$, are skew.
In this case, we conclude that $X$ degenerates into nine toric surfaces, three of which, $X_{1}$, $Y_{2}$ and $Y_{3}$, are pairwise skew, as Figure \ref{lift2} indicates.
\begin{figure}\label{lift2}
\end{figure}
Later on, we will ignore the varieties lying in between the disjoint ones; they are only important for the degeneration and not for the analysis itself.
\section{The Classification of polygons.}
We recall that the group $SL_2^{\pm}({\mathbb{Z}})$ acts on the column vectors of ${\mathbb{R}}^ 2$ by left multiplication. This induces an action of $SL_2^{\pm}({\mathbb{Z}})$ on the set of convex polygons ${\mathcal{P}}$ by acting on its enclosed points ($SL_2^{-}({\mathbb{Z}})$ corresponds to orientation reversing lattice equivalences). Obviously,
${\mathbb{Z}^{2}}$ acts on vectors of ${\mathbb{R}}^ 2$ by translation.
Next, we will classify all convex polygons enclosing six integer lattice points modulo the actions described above.
We first start with a definition.
\begin{definition} We say that the polygon ${\mathcal{P}}$ is equivalent to one in \emph{standard position} if
\begin{my_enumerate}
\item It contains $O=(0,0)$ as a vertex
\item $OS$ is an edge, where $S=(m,0)$ and $m$ is the largest edge length
\item $OP$ is an edge where $P=(p,q)$ and $0\leq p <q$
\end{my_enumerate}
\end{definition}
\begin{remark} Every polygon is equivalent to one in standard position.
\end{remark}
Indeed, we first choose the longest edge and then we translate one of its vertices to the origin. We then rotate the polygon to put the longest edge on the positive side of the $x$ axis, and shift it such that the adjacent edge lies in the upper half of the first quadrant.
Indeed, if $OP$ is an edge with $P=(s,q)$ and $s\geq q$, then $s=aq+p$ for some integer $a\geq 1$ and $0\leq p<q$, so we shift left by $a$ (that is, we apply the lattice shear $(x,y)\mapsto(x-ay,y)$).
We will call this procedure normalization.\\
It is easy to see that the standard position of the polygon may not be unique: it depends on the choice of the longest edge and on the choice of the special vertex that becomes the origin.
We can now present the classification of the polygons in standard position according to $m$ (the maximum number of integral points lying on the edges of the polygon), and also according to their number of edges, $n$. Obviously, the polygons ${\mathcal{P}}$ will have at most six edges, and at most five points on an edge.
We leave the elementary details to the reader; we only remark that Pick's lemma is useful (for more details see \cite{Dum10}).
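Recall that Pick's lemma computes the area of a lattice polygon as $A=I+\frac{B}{2}-1$, where $I$ is the number of interior lattice points and $B$ the number of boundary lattice points; for a polygon enclosing six lattice points one has $I+B=6$, so $A=5-\frac{B}{2}$, which severely restricts the possible shapes.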
\begin{proposition}\label{Classification} Any polygon enclosing six lattice points is equivalent to exactly one from the following list
\end{proposition}
We now recall that any rational convex polytope ${\mathcal{P}}$ in ${\mathbb{R}^{n}}$ enclosing a fixed number of integer lattice points defines an $n$-dimensional projective toric variety $X_{\mathcal{P}}$ endowed with an ample line bundle on $X_{\mathcal{P}}$ whose sections correspond to the integer points of the polytope. We get the following result
\begin{corollary} Any toric surface endowed with an ample line bundle with six sections is completely described by exactly one of the polygons from the above list.
\end{corollary}
\section{Triple Point Analysis.}
We first observe that six, the number of integer points enclosed by the polygon, is exactly the number of conditions imposed by a triple point. We will now classify all polygons from Proposition \ref{Classification} whose corresponding linear system becomes empty when imposing a triple point. There are two methods for testing the emptiness of these linear systems, an algebraic one and a geometric one, and we will briefly describe them below. For the algebraic approach, checking that a linear system is non-empty when imposing a triple point reduces to showing that the conditions imposed by a triple point in ${\mathbb{P}^{2}}$ are dependent. For this, one needs to look at the rank of the six by six matrix whose first column contains the values of the sections of the line bundle at the point and whose other five columns contain all their first and second derivatives in $x$ and $y$. We conclude that the six conditions are dependent if and only if the determinant of this matrix is identically zero.
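For instance (a computation recorded only as an illustration, in affine coordinates and with the triple point placed at the origin), for ${\mathbb{P}}^{2}$ embedded by conics the six sections are $1,x,y,x^2,xy,y^2$; evaluating them and their first and second derivatives at $(0,0)$ gives the matrix (rows indexed by the sections, columns by the value and the derivatives $\partial_x,\partial_y,\partial_{xx},\partial_{xy},\partial_{yy}$)
$$\begin{pmatrix}
1&0&0&0&0&0\\
0&1&0&0&0&0\\
0&0&1&0&0&0\\
0&0&0&2&0&0\\
0&0&0&0&1&0\\
0&0&0&0&0&2
\end{pmatrix},$$
whose determinant is $4\neq 0$; hence the determinant is not identically zero, the six conditions are independent, and ${\mathcal{L}}_{2}$ becomes empty after imposing a triple point, in agreement with the example of the projective plane embedded by conics discussed later in this section.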
The geometric method for testing when a planar linear system is empty is to explicitly find it and show that it contains no curve, using ${\mathbb{P}^{2}}$ as a minimal model for the surface $X$ and writing its resolution of singularities.
\begin{remark}\label{Elimination} The corresponding linear systems of the following polygons are non-empty when imposing a base point with multiplicity three.
\end{remark}
\begin{proof} It is easy to check that the algebraic conditions imposed by at least four sections on a line are always dependent. Indeed, we have two possible cases: the line of sections is an edge, or it is enclosed by the polygon. In the first case, we can only have sections on two levels, so the vanishing of the second derivative in $y$ gives a dependent condition (the same argument applies to the case $m=3$ representing the embedded ${\mathbb{P}^{1}}\times {\mathbb{P}^{1}}$). In the second case we notice that the vanishing of the first derivative in $y$ and of the second derivative in $x$ and $y$ give two linearly dependent conditions.
\end{proof}
We will use Remark \ref{Elimination} to eliminate the polygons that do not have the desired property, and we obtain five polygons for which we will study the corresponding algebraic surfaces and linear systems using toric geometry methods.
For any polygon consider its fan, obtained by dualizing the polygon's angles, and in the case that the toric variety obtained by gluing the cones is singular, take its resolution of singularities. In this way we obtain all the toric surfaces using ${\mathbb{P}^{2}}$ as a minimal model. The associated linear system may not have general points. In general, we will use the notations ${\mathcal{L}}_{d}([1,1]), {\mathcal{L}}_{d}([2,1]), {\mathcal{L}}_{d}([1,1,1])$ for linear systems of degree $d$ that pass through a base point with a defined tangent, a double point with a defined tangent, or a point with a defined flex direction. For example, ${\mathcal{L}}_{4}([1,1,1]^3)$ represents quartics with three base points that are flex to the line joining any two of them. Since the base points are special, the linear systems will need a different analysis.
We conclude the emptiness of each linear system with a triple point by applying birational transformations and splitting off $-1$ curves. The last column of the table indicates the geometric conditions corresponding to the infinitely near multiplicities. We obtain the following result:
\begin{lemma}\label {Empty poly}
The linear systems corresponding to the following polygons become empty after imposing a triple point.
\end{lemma}
All the linear systems from the table become empty when imposing a triple point.
For example the fourth polygon in the above table describes a projective plane embedded by a linear system of conics, ${\mathcal{L}}_{2}$. By imposing a triple point ${\mathcal{L}}_{2}$ becomes empty. One can obtain more polygons with an empty linear system by rotating or by shifting the main ones by any integer numbers.
\section{Triple points in ${\mathbb{P}}^ 2$.}
We denote by $V_d$ the image of the Veronese embedding
$v_d:\mathbb{P}^2 \to \mathbb{P}^{d(d+3)/2}$
that transforms the plane curves of degree $d$ to hyperplane sections of the Veronese variety $V_d$.
We degenerate $V_d$ into a union of disjoint surfaces and ordinary planes and we place one point on each one of the disjoint surfaces.
The surfaces are chosen in such a way that the restriction of a hyperplane section to each one of them is a linear system that becomes empty when we impose a triple point.
We conclude that any hyperplane section of $V_d$ passing through the assigned triple points needs to contain all the disjoint surfaces, and in particular all of the coordinate points of the ambient projective space covered in this way. Therefore, if $V_{d}$ degenerates exactly into a union of disjoint special surfaces and planes (or quadrics), with no points left over, we conclude that the desired linear system is empty, and therefore it has the expected dimension. Using semicontinuity this argument can easily be extended to any degeneration as in \cite{CDM07}.
In order to give an inductive proof for triple points in the projective plane we will first analyze triple points in ${\mathbb{P}}^ 1\times{\mathbb{P}}^ 1$.
We will only prove the most difficult case, namely that the linear systems in ${\mathbb{P}}^ 1\times{\mathbb{P}}^ 1$ with virtual dimension $-1$ are empty.
The general case will follow by induction, but it was already proved in a similar way using algebraic methods by T. Lenarcik in \cite{Len08}.
\begin{lemma}\label{bidegree $(5,n)$}
Fix $n\geq 3$. Then the linear systems of bidegree $(5,n)$ with $n\neq 4$, of bidegree $(11,n)$, of bidegree $(2, 4n-9)$ and of bidegree $(8,2n-3)$, with an arbitrary number of triple points, have the expected dimension.
\end{lemma}
\begin{proof}
\begin{itemize}
\item For any linear system of bidegree $(5,n)$ we find a set of $n+1$ pairwise skew surfaces and we place each of the $n+1$ triple points on one of the surfaces. We denote the degenerations presented below by $C_{5}^{5}$, $C_{5}^{6}$, $C_{5}^{8}$ and $C_{5}^{3}$.
For every $n>2$, $n\neq 4$, take $i\in \{3,5,6,8 \}$ such that $\frac{n-i}{4}$ is an integer, say $k$. For any arbitrary $n$ we consider the degeneration
$C_{5}^{n}=C_{5}^{i}+k C_{5}^{3}$, where the sum of two blocks means attaching the two disjoint blocks together along the edge of length $5$.
\item For linear systems of bidegree $(11,n)$ and $2n+2$ triple points we find skew surfaces. We denote the degenerations presented below by $C_{11}^{2}$, $C_{11}^{3}$, and $C_{11}^{4}$. For every $n>2$ take $i\in \{2,3,4\}$ such that $\frac{n-i}{3}$ is an integer, say $k$. For any arbitrary $n$ we consider the degeneration
$C_{11}^{n}=C_{11}^{i}+k C_{11}^{2}.$
\item For curves of bidegree $(2,4n-9)$ we consider the degeneration $C_{2}^{4n-9}$ given by $(n-3)C_{2}^{3}$ (in particular, $C_{2}^{11}=3C_{2}^{3}$), and for $C_{8}^{2n-3}$ we use combinations of $C_{8}^{3}$ and $C_{8}^{5}$.
\end{itemize}
\end{proof}
\begin{corollary}\label{main}
Linear systems in ${\mathbb{P}}^ {1}\times {\mathbb{P}}^ {1}$ with triple points of virtual dimension $-1$ are empty.
\end{corollary}
\begin{proof} We have to prove the statement for linear systems of bidegree $(6k-1, n)$ and $(3k-1, 2n-1).$
For the bidegree $(6k-1, n)$ we distinguish two cases. If $k$ is even, $k=2k'$, we use the degeneration $C_{12k'-1}^{n}=k'C_{11}^{n}$, while if $k$ is odd, of the form $2k'+1$, we use $C_{12k'+5}^{n}=C_{5}^{n}+k'C_{11}^{n}$, for $n\neq 4$. For $n=4$ we use the following degeneration for $C_{17}^{4}$, and we generalize this case by adding $C_{11}^{4}$ blocks.
For the bidegree $(3k-1, 2n-1)$ we reduce to the case when $k$ is odd, of the form $2k'+1$, and, depending on the parity of $k'$, if $6k'=6+12r$ we use the degeneration
$C_{8}^{2n-1}+rC_{11}^{2n-1}$, while if $6k'=12r$ we use $C_{5}^{2n-1}+C_{8}^{2n-1}+(r-1)C_{11}^{2n-1}$.
\end{proof}
We can now obtain the following result
\begin{theorem}\label{triple plane}
$\mathcal{L}_d(3^n)$ has the expected dimension whenever $d \geq 5$.
\end{theorem}
\begin{proof}
It is enough to prove the theorem when the number of triple points is such that the virtual dimension is $-1$; in that case we claim that the linear system is empty.
Note that $\binom{d+2}{2}\equiv 0$ mod $6$ if $d\equiv \{2,7, 10, 11\}$ mod $12$; $\binom{d+2}{2}\equiv 1$ mod $6$ if $d\equiv \{0,9\}$ mod $12$;
$\binom{d+2}{2}\equiv 3$ mod $6$ if $d\equiv \{1, 4, 5, 8\}$ mod $12$ and $\binom{d+2}{2}\equiv 4$ mod $6$ if $d\equiv \{3, 6\}$ mod $12$.
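(As a quick check: $d=7$ gives $\binom{9}{2}=36\equiv 0$, $d=9$ gives $\binom{11}{2}=55\equiv 1$, and $d=8$ gives $\binom{10}{2}=45\equiv 3$, all modulo $6$.)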
We will use the induction step
$V_{12(k+1)+j}=V_{12k+j}+kC^{11}_{11}+C^{11}_{j+1}+V_{10}$
with $j=1,...,12$, $k\geq 0$, $(i,j)\neq (1,4)$ and to finish the proof we present the degenerations of $V_{j}$ if $j\leq 12.$
\end{proof}
\begin{remark}
{\rm
Notice that ${\mathcal{L}_4(3^2)}$ consists of quartics with two triple points and its expected dimension is $2$. This linear system has a fixed part, the double line through the two points, and a movable part ${\mathcal{L}_2(1^2)}$, i.e. conics through two points, that has dimension $3$. A simple argument shows that if $d=4$, the linear system ${\mathcal{L}}$ is $-1$--special (we have a $-1$--curve, the line connecting the two points, splitting off twice) and therefore special.\\
\indent
One could mention that the case $d=4$ is also a special case for the double point interpolation problem, since ${\mathcal{L}_4(2^5)}$ is expected to be empty but consists of the double conic determined by the $5$ general points.
}
\end{remark}
\end{document} |
\begin{document}
\title{On the integral form
of rank 1 Kac-Moody algebras.
}
\date{}
\author{Ilaria Damiani, Margherita Paolini}
\maketitle
\begin{abstract}
\noindent In this paper we shall prove that the $\Z$-subalgebra generated by the divided powers of the Drinfeld generators $x_r^{\pm}$ ($r\in\Z$) of the Kac-Moody algebra of type $A_2^{(2)}$ is an integral form (strictly smaller than Mitzman's, see \cite{DM}) of the enveloping algebra; we shall exhibit a basis generalizing the one provided in \cite{HG} for the untwisted affine Kac-Moody algebras
and we shall determine explicitly the commutation relations. Moreover we prove that both in the untwisted and in the twisted case the positive (respectively negative) imaginary part of the integral form is an algebra of polynomials over $\Z$.
\end{abstract}
\small \small \tableofcontents
\section{Introduction} \label{intr}
\vskip .5truecm
\noindent Recall that the twisted affine Kac-Moody algebra of type $A_2^{(2)}$ is $\hat{{\mathfrak{sl}}_3}^{\!\!\chi}$, the $\chi$-invariant subalgebra of $\hat{{\mathfrak{sl}}_3}$ where $\chi$ is the non-trivial Dynkin diagram automorphism of $A_2$ (see \cite{VK}), and denote by $\tilde{\cal U}$ its enveloping algebra ${\cal U}(\hat{{\mathfrak{sl}}_3}^{\!\!\chi})$.
\noindent The aim of this paper is to give a basis over $\Z$ of the $\Z$-subalgebra of $\tilde{\cal U}$
generated by the divided powers of the Drinfeld generators $x_r^{\pm}$ ($r\in\Z$) (see Definitions \ref{a22} and \ref{thuz}),
thus proving that this $\Z$-subalgebra is an integral form of $\tilde{\cal U}$.
\vskip .3 truecm
\noindent The integral forms for finite dimensional semisimple Lie algebras were first introduced by Chevalley in \cite{Ch} for the study of the Chevalley groups and of their representation theory.
\noindent The construction of the ``divided power'' $\Z$-form for the simple finite dimensional Lie algebras is due to Kostant (see \cite{Ko}); it has been generalized to the untwisted affine Kac-Moody algebras by Garland in \cite{HG}, as we shall quickly recall.
\noindent Given a simple Lie algebra ${\mathfrak g}_0$ and the corresponding untwisted affine Kac-Moody algebra ${\mathfrak g}={\mathfrak g}_0\otimes\C[t,t^{-1}]\oplus\C c$ provided with an (ordered) Chevalley basis, the $\Z$-subalgebra ${\cal U}_{\Z}$ of ${\cal U}={\cal U}({\mathfrak g})$ generated by the divided powers of the
real root vectors is an integral form of ${\cal U}$; a $\Z$-basis of this integral form (hence its $\Z$-module structure) can be described by decomposing ${\cal U}_{\Z}$ as a tensor product of its $\Z$-subalgebras relative respectively to the real root vectors (${\cal U}_{\Z}^{re,+}$ and ${\cal U}_{\Z}^{re,-}$), to the imaginary root vectors (${\cal U}_{\Z}^{im,+}$ and ${\cal U}_{\Z}^{im,-}$) and to the Cartan subalgebra (${\cal U}_{\Z}^{{\mathfrak h}}$):
${\cal U}_{\Z}^{re,+}$ has a basis $B^{re,+}$ consisting of the (finite) ordered products of divided powers of the distinct positive real root vectors,
and $({\cal U}_{\Z}^{re,-},B^{re,-})$ can be described in the same way: $$B^{re,\pm}=\{x_{\pm\beta_1}^{(k_{\beta_1})}\cdot ...\cdot x_{\pm\beta_N}^{(k_{\beta_N})}\,|\,N\geq 0,\ \beta_1>...>\beta_N>0\ {\rm{real\ roots}},\ k_{\beta_j}> 0\ \forall j\}.$$
Here a real root $\beta$ of ${\mathfrak g}$ is said to be positive if there exists a positive root $\alpha$ of ${\mathfrak g}_0$ such that $\beta=\alpha$ or $\beta-\alpha$ is imaginary; $x_{\beta}$ is the Chevalley generator corresponding to the real root $\beta$.
A basis $B^{{\mathfrak h}}$ of ${\cal U}_{\Z}^{{\mathfrak h}}$, which is commutative, consists of the products of the ``binomials'' of the (Chevalley) generators $h_{i}$ ($i\in I$) of the Cartan subalgebra of ${\mathfrak g}$:
$$B^{{\mathfrak h}}=\left\{\prod_i{h_{i}\choose k_i}\,\Big|\,k_i\geq 0\ \forall i\right\};$$
it is worth remarking that ${\cal U}_{\Z}^{{\mathfrak h}}$ is not an algebra of polynomials.
${\cal U}_{\Z}^{im,+}$ (and its symmetric ${\cal U}_{\Z}^{im,-}$) is commutative, too; as a $\Z$-module it is isomorphic to the tensor product of the ${\cal U}_{i,\Z}^{im,+}$'s (each factor corresponding to the $i^{th}$ copy of ${\cal U}(\hat{\mathfrak{sl}}_2)$ inside ${\cal U}$),
so that it is enough to describe it in the rank 1 case: the basis $B^{im,+}$ of ${\cal U}_{\Z}^{im,+}(\hat{\mathfrak{sl}}_2)$ provided by Garland can be described as the set of finite products of the elements
$\Lambda_k{(\xi(m))}$ ($k\in\N$, $m>0$), where the $\Lambda_k{(\xi(m))}$'s are the elements of ${\cal U}^{im,+}=\C[h_{r}(=h\otimes t^r)\,|\,r>0]$ defined recursively (for all $m\neq 0$) by
$$\Lambda_{-1}{(\xi(m))}=1,\ \ k\Lambda_{k-1}{(\xi(m))} =\sum_{r\geq 0,s>0\atop r+s=k}\Lambda_{r-1}{(\xi(m))}\, h_{ms}:$$
$$B^{im,+}=\left\{\prod_{m>0}\Lambda_{k_m-1}{(\xi(m))}\,\Big|\,k_m\geq 0\ \forall m,\ \#\{m>0\,|\,k_m\neq 0\}<\infty\right\};$$
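For instance, the first two steps of the recursion give
$$\Lambda_{0}{(\xi(m))}=h_{m},\qquad \Lambda_{1}{(\xi(m))}=\frac{h_{m}^2+h_{2m}}{2},$$
which already shows that the integrality of these elements is not apparent from the generators $h_{ms}$ themselves.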
\noindent It is not clear from this description that ${\cal U}_{\Z}^{im,+}$ and ${\cal U}_{\Z}^{im,-}$ are algebras of polynomials.
\noindent Thanks to the isomorphism of $\Z$-modules
$${\cal U}_{\Z}\cong{\cal U}_{\Z}^{re,-}\otimes_{\Z}{\cal U}_{\Z}^{im,-}\otimes_{\Z}{\cal U}_{\Z}^{{\mathfrak h}}\otimes_{\Z}{\cal U}_{\Z}^{im,+}\otimes_{\Z}{\cal U}_{\Z}^{re,+}$$
a $\Z$-basis $B$ of ${\cal U}_{\Z}$ is produced as multiplication of $\Z$-bases of these subalgebras:
$$B=B^{re,-}B^{im,-}B^{{\mathfrak h}}B^{im,+}B^{re,+}.$$
\noindent The same result has been proved for all the twisted affine Kac-Moody algebras by Mitzman in \cite{DM}, where the author provides a deeper comprehension and a compact description of the commutation formulas by means of a drastic simplification of both the relations and their proofs. This goal is achieved remarking that the generating series of the elements involved in the basis
can be expressed as suitable exponentials, observation that allows to apply very general tools of calculus, such as the well known properties
$$x\exp(y)=\exp(y)\exp([\cdot,y])(x)$$
if $\exp(y)$ and $\exp([\cdot,y])(x)$ are well defined,
and
$$D(\exp(f))=D(f)\exp(f)$$ if $D$ is a derivation such that $[D(f),f]=0$.
\noindent Here, too, it is not yet clear that ${\cal U}z^{im,\pm}$ are algebras of polynomials.
\noindent However this property, namely ${\cal U}z^{im,+}=\Z[\Lambda_{k-1}=\Lambda_{k-1}{(\xi(1))}=p_{k,1}|k>0]$, is stated in Fisher-Vasta's PhD thesis (\cite{F}), where the author presents the results of Garland for the untwisted case and of Mitzman for $A_2^{(2)}$, aiming at a better understanding of the commutation formulas.
Yet the proof is incomplete: the theorem describing the integral form rests on observations that appear to overlook some necessary commutations, namely those between $(x_r^+)^{(k)}$ and $(x_s^-)^{(l)}$ when $|r+s|>1$; in \cite{F} only the cases $r+s=0$ and $r+s=\pm 1$ are considered, the former producing the binomials appearing in $B^{{\frak h}}$, the latter producing the elements $p_{n,1}$
(and their corresponding negative elements in ${\cal U}z^{im,-}$).
{\cal{V}}skip .3 truecm
\noindent Comparing the Kac-Moody presentation of the affine Kac-Moody algebras with their ``Drinfeld'' presentation as current algebras, one can notice a difference between the untwisted and the twisted case, which is at the origin of our work. As in the simple finite dimensional case, also in the affine cases the generators of ${\cal U}z$ described above are redundant: the $\Z$-subalgebra of ${\cal U}$ generated by $\{e_i^{(k)},f_i^{(k)}|i\in I,\ k\in\N\}$, obviously contained in ${\cal U}z$, is actually equal to ${\cal U}z$.
\noindent On the other hand, the situation changes when we move to the Drinfeld presentation and study the $\Z$-subalgebra $^*{\cal U}_{\Z}$ of ${\cal U}$ generated by the divided powers of the Drinfeld generators $(x_{i,r}^{\pm})^{(k)}$: indeed, while in the untwisted case it is still true that ${\cal U}z=$ $^*{\cal U}_{\Z}$ and (also in the twisted case) it is always true that $^*{\cal U}_{\Z}\subseteq{\cal U}z$, in general we get two different $\Z$-subalgebras of ${\cal U}$; more precisely $^*{\cal U}_{\Z}\subsetneq{\cal U}z$ in case $A_{2n}^{(2)}$, that is when there exists a vertex $i$ whose corresponding rank 1 subalgebra
is not a copy of ${\cal U}(\hat{{\frak sl}_2})$ but a copy of ${\cal U}(\hat{{\frak sl}_3}^{\!\!\chi})$.
Thus in order to complete the description of $^*{\cal U}_{\Z}$ we need to study the case of $A_2^{(2)}$.
{\cal{V}}skip .3 truecm
\noindent In the present paper we prove that the $\Z$-subalgebra generated by $$\{(x_r^+)^{(k)},(x_r^-)^{(k)}|r\in\Z,k\in\N\}$$ is an integral form of the enveloping algebra also in the case of $A_2^{(2)}$; we exhibit a basis generalizing the one provided in \cite{HG} and in \cite{DM} and determine the commutation relations in a compact yet explicit formulation (see theorem \ref{trmA22} and appendix \ref{appendA}). We use the same approach as Mitzman's, with a further simplification consisting in the remark that an element of the form $G(u,v)=\exp(xu)\exp(yv)$ is characterized by two properties: $G(0,v)=\exp(yv)$ and ${dG\over du}=xG$.
\noindent Moreover, studying the rank 1 cases we prove that, both in the untwisted and in the twisted case, ${\cal U}z^{im,+}$ and $^*{\cal U}_{\Z}^{im,+}$ are algebras of polynomials: as stated in \cite{F}, the generators of ${\cal U}z^{im,+}$ are the elements $\Lambda_k$ introduced in \cite{HG} and \cite{DM} (see proposition \ref{tmom} and remark \ref{tmfv}); the generators of $^*{\cal U}_{\Z}^{im,+}$ in the case $A_2^{(2)}$ are elements defined formally like the $\Lambda_k$'s after a deformation of the $h_r$'s (see definition \ref{thuz} and remark \ref{hdiversi}): describing $^*{\cal U}_{\Z}^{im,+}(\hat{{\frak sl}_3}^{\!\!\chi})$ (denoted by $\tilde{\cal U}z^{0,+}$) has been the hard part of this work.
{\cal{V}}skip .3 truecm
\noindent We work over ${\mathbb{Q}}$ and devote particular preliminary care to the description of some integral forms of ${\mathbb{Q}}[x_i|i\in I]$ and of their properties and relations when they appear in some non commutative situations, properties that will be repeatedly used for the computations in ${\frak g}$: fixing the notations helps to understand the construction in the correct setting. With analogous care we discuss the symmetries arising both in $\hat{{\frak sl}_2}$ and in $\hat{{\frak sl}_3}^{\!\!\chi}$. We chose to recall also the case of ${\frak sl}_2$ and to give in a few lines the proof of the theorem describing its divided power integral form, in order to present in this easy context the tools that will be used in the more complicated affine cases.
{\cal{V}}skip .3 truecm
\noindent The paper is organized as follows.
Section \ref{intgpl} is devoted to reviewing the description of some integral forms of the algebra of polynomials (polynomials over $\Z$, divided powers, ``binomials'' and symmetric functions, see \cite{IM}): they are introduced together with their generating series as exponentials of suitable series with null constant term, and their properties are rigorously stated, thus preparing for their use in the Lie algebra setting.
\noindent We have inserted here, in proposition \ref{tmom}, a result about the stability of the symmetric functions with integral coefficients under the homomorphism $\lambda_m$ mapping $x_i$ to $x_i^m$ ($m>0$ fixed), which is almost trivial in the symmetric function context; it is a straightforward consequence of this observation that ${\cal U}z^{im,+}$ is an algebra of polynomials and so is $^*{\cal U}z^{im,+}$ in the twisted case. We also provide a direct, elementary proof of this proposition (see proposition \ref{tdmom}).
In section \ref{ncn} we collect some computations in non commutative situations that we shall systematically refer to in the following sections.
Section \ref{sld} deals with the case of ${\frak sl}_2$: the one-page formulation and proof that we present (see theorem \ref{trdc}) inspire the way we study $\hat{{\frak sl}_2}$ and $\hat{{\frak sl}_3}^{\!\!\chi}$, and offer an easy introduction to the strategy followed also in the harder affine cases: decomposing our $\Z$-algebra as a tensor product of commutative subalgebras; describing these commutative structures thanks to the examples introduced in section \ref{intgpl}; and glueing the pieces together by applying the results of section \ref{ncn}.
\noindent Even if the results of this section imply the commutation rules between $(x_r^+)^{(k)}$ and $(x_{-r}^-)^{(l)}$ ($r\in\Z$, $k,l\in\N$) in the enveloping algebra of $\hat{{\frak sl}_2}$ (see remark \ref{hrs}), it is worth remarking that section \ref{slh} does not depend on section \ref{sld}, and can be read independently (see remark \ref{hev}).
In section \ref{slh} we discuss the case of $\hat{{\frak sl}_2}$.
\noindent The first part of the section is devoted to the choice of the notations in $\hat{\cal U}={\cal U}(\hat{{\frak sl}_2})$; to the definition of its (commutative) subalgebras $\hat{\cal U}^{\pm}$ (corresponding to the real component of $\hat{\cal U}$), $\hat{\cal U}^{0,\pm}$ (corresponding to the imaginary component), $\hat{\cal U}^{0,0}$ (corresponding to the Cartan), of their integral forms $\hat{\cal U}z^{\pm}$, $\hat{\cal U}z^{0,\pm}$, $\hat{\cal U}z^{0,0}$, and of the $\Z$-subalgebra $\hat{\cal U}z$ of $\hat{\cal U}$; and to a detailed reminder about the useful symmetries (automorphisms, antiautomorphisms, homomorphisms and triangular decomposition) thanks to which we can get rid of redundant computations.
\noindent In the second part of the section the apparently tough computations involved in the commutation relations are reduced to four formulas whose proofs are contained in a few lines:
proposition \ref{zzk}, proposition \ref{pum}, lemma \ref{limt}, and proposition \ref{exefh} (together with proposition \ref{tmom}) are all that is needed to show that $\hat{\cal U}z$ is an integral form of $\hat{\cal U}$, to recognize that the imaginary (positive and negative) components $\hat{\cal U}z^{0,\pm}$ of $\hat{\cal U}z$ are the algebras of polynomials $\Z[\Lambda_k(\xi(\pm 1))|k\geq 0]=\Z[\hat h_{\pm k}|k>0]$, and to exhibit a $\Z$-basis of $\hat{\cal U}z$ (see theorem \ref{trm}).
In section \ref{ifa22} we finally present the case of $A_2^{(2)}$.
\noindent As for $\hat{{\frak sl}_2}$ we first highlight some general structures of ${\cal U}(\hat{{\frak sl}_3}^{\!\!\chi})$ (which we denote here by $\tilde{\cal U}$ in order to distinguish it from $\hat{\cal U}={\cal U}(\hat{{\frak sl}_2})$): notations, subalgebras and symmetries. Here we introduce the elements $\tilde h_k$ through the announced deformation of the formulas defining the elements $\hat h_k$ (see definition \ref{thuz} and remark \ref{hdiversi}). We also describe a ${\mathbb{Q}}[w]$-module structure on a Lie subalgebra $L$ of $\hat{{\frak sl}_3}^{\!\!\chi}$ (see definitions \ref{sottoalgebraL} and \ref{qwmodulo}), thanks to which we can further simplify the notations.
\noindent In addition, in remark \ref{emgg} we recall the embeddings of $\hat{\cal U}$ inside $\tilde{\cal U}$, thanks to which a big part of the work can be carried over from section \ref{slh}.
\noindent The heart of the problem is thus reduced to the commutation of $\exp(x_0^+u)$ with $\exp(x_1^-v)$ (which is technically more complicated than for $A_1^{(1)}$ since it is a product involving a higher number of factors) and to deducing from this formula the description of the imaginary part of the integral form as the algebra of polynomials in the $\tilde h_k$'s. To the solution of this problem, which represents the central contribution of this work, we dedicate subsection \ref{sottosezione}, where we concentrate, perform and explain the necessary computations.
{\cal{V}}skip .15 truecm
\noindent At the end of the paper some appendices are added for the sake of completeness.
In appendix \ref{appendA} we collect all the straightening formulas: since not all of them are necessary to our proofs, and in the previous sections we only computed those which were essential for our argument, we give here a complete explicit picture of the commutation relations.
Appendix \ref{appendB} is devoted to the description of a $\Z$-basis of $\Z^{(sym)}[h_r|r>0]$ alternative to the one introduced in example \ref{rvsf}.
\noindent $\Z^{(sym)}[h_r|r>0]$ is the algebra of polynomials $\Z[\hat h_k|k>0]$, and as such it has a $\Z$-basis consisting of the monomials in the $\hat h_k$'s, which is the one considered in our paper. But, as mentioned above, this algebra, which we are naturally interested in because it is isomorphic to the imaginary positive part of the integral form of the rank 1 Kac-Moody algebras, was not recognized by Garland and Mitzman as an algebra of polynomials: in this appendix the $\Z$-basis they introduce is studied from the point of view of the symmetric functions, and thanks to this interpretation it is easily proved to generate freely the same $\Z$-submodule of ${\mathbb{Q}}[h_r|r>0]$ as the monomials in the $\hat h_k$'s.
In appendix \ref{appendC} we compare the Mitzman integral form of the enveloping algebra of type $A_2^{(2)}$ with the one studied here, proving the inclusion stated above. We also show that our commutation relations imply Mitzman's theorem, too.
Finally, in order to help the reader navigate the notations and easily find their definitions, we conclude the paper with an index of symbols, collected in appendix \ref{appendD}.
{\cal{V}}skip .3 truecm
\noindent The study of the integral form of the affine Kac-Moody algebras from the point of view of the Drinfeld presentation, which differs from the one defined through the Kac-Moody presentation (\cite{HG} and \cite{DM}) in the case $A_{2}^{(2)}$ as outlined above, is motivated by the interest in the representation theory over $\Z$: for the affine Kac-Moody algebras the notion of highest weight vector with respect to the $e_i$'s has been usefully replaced with the one defined through the action of the $x_{i,r}^+$'s (see the works of Chari and Pressley \cite{C} and \cite{CP2}); in order to study what happens over the integers it is useful to work with an integral form defined in terms of the same $x_{i,r}^+$'s.
\noindent This work is also intended to be the preliminary classical step in the project of constructing and describing the {\it quantum} integral form for the twisted affine quantum algebras (with respect to the Drinfeld presentation). It is a joint project with Vyjayanthi Chari (see also \cite{CP}),
who proposed it during a period of three months that she passed as a visiting professor at the Department of Mathematics of the University of Rome ``Tor Vergata''.
\noindent The commutation relations involved are extremely complicated and appear to be unworkable by hand without a deeper insight; we hope that a simplified approach can open a viable way to work in the quantum setting.
{\cal{V}}skip .5 truecm
\section{Integral form and commutative examples} \label{intgpl}
{\cal{V}}skip .5 truecm
\noindent In this section we give the definition of integral form and summarize, fixing the notations useful to our purpose, some well known commutative examples (deeply studied and systematically exposed in \cite{IM}), which will play a central role in the non commutative enveloping algebra of finite and affine Kac-Moody algebras.
{\cal{V}}skip .3 truecm
\begin{definition} \label{intu}
\noindent Let $U$ be a ${\mathbb{Q}}$-algebra. An integral form of $U$ is a $\Z$-algebra $U_{\Z}$ such that:
i) $U_{\Z}$ is a free $\Z$-module;
ii) $U={\mathbb{Q}}\otimes_{\Z}U_{\Z}$.
\noindent In particular an integral form of $U$ is (can be identified to) a $\Z$-subalgebra of $U$, and a $\Z$-basis of an integral form of $U$ is a ${\mathbb{Q}}$-basis of $U$.
\end{definition}
{\cal{V}}skip .3 truecm
\begin{example} \label{clpol}
\noindent Of course $\Z[x_i|i\in I]$ is an integral form of ${\mathbb{Q}}[x_i|i\in I]$ with basis the set of monomials in the $x_i$'s, namely
$\{{\bf{x}}^{{\bf{k}}}=\prod_{i\in I}x_i^{k_i}\}$ where ${\bf{k}}:I\to\N$ is finitely supported (that is $\#\{i\in I|k_i\neq 0\}<\infty$).
\noindent If $\{y_i\}_{i\in I}$ and $\{x_i\}_{i\in I}$ are $\Z$-bases of the same $\Z$-module, then $\Z[x_i|i\in I]=\Z[y_i|i\in I]$.
\end{example}
This can be said also as follows:
\noindent {Let $M$ be a free $\Z$-module and $V={\mathbb{Q}}\otimes_{\Z}M$} and consider the functor $S=$ ``symmetric algebra'' from the category of $\Z$-modules (respectively ${\mathbb{Q}}$-vector spaces) to the category of commutative unitary $\Z$-algebras (respectively commutative unitary ${\mathbb{Q}}$-algebras).
\noindent Then $SM$ is an integral form of $SV$ and $SM\cap V=M$.
\noindent By definition, every integral form of $SV$ containing $M$ contains $SM$, that is $SM$ is the least integral form of $SV$ containing $M$.
{\cal{V}}skip .3 truecm
\noindent We are interested in other remarkable integral forms of $SV$ containing $M$.
\begin{remark} \label{srinv}
\noindent Let $U$ be a unitary $\Z$-algebra and $f(u){\cal I}n U[[u]]$. Then:
\noindent 1) If $f(u){\cal I}n 1+uU[[u]]$ then:
i) $f(u)$ is invertible in $U[[u]]$;
ii) the coefficients of $f(u)$, those of $f(-u)$ and those of $f(u)^{-1}$ generate the same $\Z$-subalgebra of $U$;
\noindent 2) If $f(u){\cal I}n uU[[u]]$ then ${{\cal R}m{exp}}(f(u))$ is a well defined element of $1+uU[[u]]$;
\noindent 3) If $f(u){\cal I}n 1+uU[[u]]$ then ${{\cal R}m{ln}}(f(u))$ is a well defined element of $uU[[u]]$;
\noindent 4) ${{\cal R}m{exp}}\,{\scriptstyle\circ}\,{{\cal R}m{ln}}\big|_{1+uU[[u]]}=id$ and ${{\cal R}m{ln}}\,{\scriptstyle\circ}\, {{\cal R}m{exp}}\big|_{uU[[u]]}=id$.
\end{remark}
{\cal{V}}skip .3 truecm
\begin{notation} \label{ntdvd}
\noindent Let $a$ be an element of a unitary ${\mathbb{Q}}$-algebra $U$. The divided powers of $a$ are the elements $$a^{(k)}={a^k\over k!}\, \, (k{\cal I}n\N).$$
Remark that the generating series of the $a^{(k)}$'s is $\exp(a u)$, that is
\begin{equation}\label{gensexp}
\sum_{k\geq 0}a^{(k)}u^k=\exp(au)
\end{equation}.
\end{notation}
{\cal{V}}skip .3 truecm
\begin{example} \label{dvdpw}
\noindent Let $\{x_i\}_{i{\cal I}n I}$ be a $\Z$-basis of $M$. Then it is well known and trivial that:
i) The $\Z$-subalgebra $S^{(div)}M\subseteq SV$ generated by $\{x^{(k)}\}_{x{\cal I}n M,k{\cal I}n\N}$ contains $M$;
ii) $S^{(div)}M\cap V=M$;
iii) $\{x_i^{(k)}\}_{i{\cal I}n I,k{\cal I}n\N}$ is a set of algebra-generators (over $\Z$) of $S^{(div)}M$;
iv) the set $\{{\bf{x}}^{({\bf{k}})}=\prod_{i{\cal I}n I}x_i^{(k_i)}|{\bf{k}}:I\to\N$ is finitely supported$\}$ is a $\Z$-basis of $S^{(div)}M$.
v) $S^{(div)}M$ is an integral form of $SV$ (called the algebra of the divided powers of $M$).
\noindent $S^{(div)}M$ is also denoted $\Z^{(div)}[x_i|i{\cal I}n I]$.
\noindent Remark that if $m(u)=\sum_{r{\cal I}n\N}m_ru^r{\cal I}n uM[[u]]$ then
\begin{equation} \label{divpoweq}
m(u)^{(k)}{\cal I}n S^{(div)}M[[u]]\,\,{\cal F}orall k{\cal I}n\N
\end{equation}
or equivalently
$${{\cal R}m{exp}}(m(u)){\cal I}n S^{(div)}M[[u]].$$
The converse is obviously also true:
\begin{equation}\label{sdivmv}m(u){\cal I}n uV[[u]],\ {{\cal R}m{exp}}(m(u)){\cal I}n S^{(div)}M[[u]]\Leftrightarrow m(u){\cal I}n uM[[u]].\end{equation}
\end{example}
{\cal{V}}skip .3 truecm
\begin{notation} \label{ntbin}
\noindent Let $a$ be an element of a unitary ${\mathbb{Q}}$-algebra $U$. The ``binomials'' of $a$ are the elements $${a\choose k}={a(a-1)\cdot...\cdot(a-k+1)\over k!}\, \,\,\,\, \,\,\,(k{\cal I}n\N).$$
Notice that ${a\choose k}$ is the image of ${x\choose k}$ through the evaluation $ev_a:{\mathbb{Q}}[x]\to U$ mapping $x$ to $a$.
\end{notation}
\noindent Now consider the series $\exp(a\ln(1+u))$: this is a well defined element of $U[[u]]$ (because $a\ln(1+u){\cal I}n uU[[u]]$) whose coefficients are polynomials in $a$; this means that with the notations above
$$\exp(a\ln(1+u))=ev_a(\exp(x{{\cal R}m{ln}}(1+u))).$$
In particular if we want to prove that for all $U$ and for all $a{\cal I}n U$ the generating series of the ${a\choose k}$'s is $\exp(a\ln(1+u))$
it is enough to prove the claim in the case $a=x\in{\mathbb{Q}}[x]$, and to this aim it is enough to compare the evaluations on an infinite subset of ${\mathbb{Q}}$ (for instance on $\N$), thus reducing the proof to the trivial observation that $\forall n\in\N$
$$\sum_{k{\cal I}n\N}{n\choose k}u^k=(1+u)^n=\exp(n\ln(1+u)).$$
Thus in general the generating series $\exp(a\ln(1+u))$ of the ${a\choose k}$'s can and will be denoted as $(1+u)^a$; more explicitly
\begin{equation}\label{gensbin}\sum_{k{\cal I}n\N}{a\choose k}u^k=(1+u)^a=\exp\Big(\sum_{r>0}(-1)^{r-1}{a\over r}u^r\Big).\end{equation}
\noindent It is clear from the definition of $(1+u)^a$ that if $a$ and $b$ are commuting elements of $U$ then $$(1+u)^{a+b}=(1+u)^a(1+u)^b.$$
It is also clear that the $\Z$-submodule of $U$ generated by the coefficients of $(1+u)^{a+m}$ ($a{\cal I}n U$, $m{\cal I}n\Z$) depends only on $a$ and not on $m$; it is actually a $\Z$-subalgebra of $U$: indeed for all $k,l{\cal I}n\N$
$${a\choose k}{a-k\choose l}={k+l\choose k}{a\choose k+l}.$$
\noindent More precisely for each $m{\cal I}n\Z$ and $n{\cal I}n\N$ the $\Z$-submodule of $U$ generated by the ${a+m\choose k}$'s for $k=0,...,n$ ($a{\cal I}n U$) depends only on $a$ and $n$ and not on $m$.
\noindent Finally notice that in $U[[u]]$ we have ${{{\cal R}m{d}}\over{{\cal R}m{d}}u}(1+u)^a=a(1+u)^{a-1}$. {\cal{V}}skip .3 truecm
\begin{example} \label{binex}
\noindent Let $\{x_i\}_{i\in I}$ be a $\Z$-basis of $M$. Then it is well known and trivial that:
i) The $\Z$-subalgebra $S^{(bin)}M\subseteq SV$ generated by $\{{x\choose k}\}_{x\in M,k\in\N}$ contains $M$;
ii) $\{{x_i\choose k}\}_{i\in I,k\in\N}$ is a set of algebra-generators (over $\Z$) of $S^{(bin)}M$;
iii) the set $\{{{\bf{x}}\choose{\bf{k}}}=\prod_{i\in I}{x_i\choose k_i}|{\bf{k}}:I\to\N$ finitely supported$\}$ is a $\Z$-basis of $S^{(bin)}M$;
iv) $S^{(bin)}M\cap V=M$;
v) $S^{(bin)}M$ is an integral form of $SV$ (called the algebra of binomials of $M$).
\noindent $S^{(bin)}M$ is also denoted $\Z^{(bin)}[x_i|i{\cal I}n I]$.
\end{example}
{\cal{V}}skip .3 truecm
\begin{example} {\,(Review of the symmetric functions, see \cite{IM})} \label{rvsf}
\noindent Let $n{\cal I}n\N$. It is well known that $\Z[x_1,...,x_n]^{{\cal{S}}_n}$ is an integral form of ${\mathbb{Q}}[x_1,...,x_n]^{{\cal{S}}_n}$ and that
$\Z[x_1,...,x_n]^{{\cal{S}}_n}=\Z[e_1^{[n]},...,e_n^{[n]}]$, where the (algebraically independent for $k=1,...,n$) elementary symmetric polynomials $e_k^{[n]}$'s are defined by
\begin{equation}\label{mcd}
\prod_{i=1}^n(T-x_i)=\sum_{k{\cal I}n\N}(-1)^ke_k^{[n]}T^{n-k}
\end{equation}
and are homogeneous of degree $k$, that is $e_k^{[n]}{\cal I}n\Z[x_1,...,x_n]_k^{{\cal{S}}_n}\subseteq{\mathbb{Q}}[x_1,...,x_n]_k^{{\cal{S}}_n}$.
\noindent It is also well known that for $n_1\geq n_2$ the natural projection $$\pi_{n_1,n_2}:{\mathbb{Q}}[x_1,...,x_{n_1}]\to{\mathbb{Q}}[x_1,...,x_{n_2}]$$ defined
by $$\pi_{n_1,n_2}(x_i)=\begin{cases}x_i&{\rm{if\, }}i\leq n_2\\ 0 &{\rm{otherwise}}\end{cases}$$ is such that
$\pi_{n_1,n_2}(e_k^{[n_1]})=e_k^{[n_2]}$ for all $k\in\N$.
Then $$\bigoplus_{d\geq 0}\varprojlim\Z[x_1,...,x_n]_d^{{\cal{S}}_n}=\Z[e_1,...,e_k,...]\, \, (e_k\, {\rm{the\, inverse\, limit\, of\, the\, }}e_k^{[n]})$$ is an integral form of $\oplus_{d\geq 0}\varprojlim{\mathbb{Q}}[x_1,...,x_n]_d^{{\cal{S}}_n}$, which is called the algebra of the symmetric functions.
\noindent Moreover the elements
$$p_r^{[n]}=\sum_{i=1}^nx_i^r{\cal I}n\Z[x_1,...,x_n]^{{\cal S}_n}\, \, (r>0,\, n{\cal I}n\N)$$
and their inverse limits $p_r{\cal I}n\Z[e_1,...,e_k,...]$ ($\pi_{n_1,n_2}(p_r^{[n_1]})=p_r^{[n_2]}$ for all $r>0$ and all $n_1\geq n_2$) give another set of generators of the ${\mathbb{Q}}$-algebra of the symmetric functions: the $p_r$'s are algebraically independent and $$\bigoplus_{d\geq 0}{\cal{V}}arprojlim{\mathbb{Q}}[x_1,...,x_n]_d^{{\cal S}_n}={\mathbb{Q}}[p_1,...,p_r,...].$$
Finally
$\Z[e_1,...,e_k,...]$ is an integral form of ${\mathbb{Q}}[p_1,...,p_r,...]$ containing $p_r$ for all $r>0$ (more precisely a linear combination of the $p_r$'s lies in $\Z[e_1,...,e_k,...]$ if and only if it has integral coefficients), the relation between the $e_k$'s and the $p_r$'s being given by:
$$\sum_{k{\cal I}n\N}(-1)^ke_ku^k={{\cal R}m{exp}}\Big(-\sum_{r>0}{p_r\over r}u^r\Big).$$
\noindent In this context we use the notation $$\Z[e_k|k>0]=\Z^{(sym)}[p_r|r>0]\subseteq{\mathbb{Q}}[p_r|r>0];$$
to stress the dependence of the $e_k$'s on the $p_r$'s we set $e=\hat p$, that is
\begin{equation} \label{dfhp}
\hat p(u)=\sum_{k\in\N}\hat p_ku^k={\rm{exp}}\Big(\sum_{r>0}(-1)^{r-1}{p_r\over r}u^r\Big)
\end{equation}
and \begin{equation}\label{zdfhp}\Z[\hat p_k|k>0]=\Z^{(sym)}[p_r|r>0].\end{equation}
\end{example}
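\noindent The following short computation (ours, not part of the paper, and assuming SymPy) checks in three variables, up to degree 3, the exponential relation recalled above between the elementary symmetric polynomials and the power sums.
\begin{verbatim}
from sympy import symbols, exp, expand

x1, x2, x3, u = symbols('x1 x2 x3 u')
xs = [x1, x2, x3]

e = [1,
     x1 + x2 + x3,
     x1*x2 + x1*x3 + x2*x3,
     x1*x2*x3]                                        # elementary symmetric polys
p = {r: sum(x**r for x in xs) for r in range(1, 4)}   # power sums

lhs = sum((-1)**k * e[k] * u**k for k in range(4))
rhs = exp(-sum(p[r] * u**r / r for r in range(1, 4))).series(u, 0, 4).removeO()

# truncating the exponent at r = 3 only affects terms of degree >= 4 in u
assert expand(lhs - rhs) == 0
print("Newton-type identity verified up to degree 3")
\end{verbatim}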
{\cal{V}}skip .3 truecm
\begin{remark} \label{spsym}
\noindent With the notations above, let $\varphi:{\mathbb{Q}}[p_1,...,p_r,...]\to U$ be an algebra-homomorphism and $a=\varphi(p_1)$:
i) if $\varphi(p_r)=0$ for $r>1$ then $\varphi(\hat p_k)=a^{(k)}$ for all $k\in\N$;
ii) if $\varphi(p_r)=a$ for all $r>0$ then $\varphi(\hat p_k)={a\choose k}$ for all $k\in\N$.
\noindent Hence
$\Z^{(sym)}$ is a generalization of both $\Z^{(div)}$ and $\Z^{(bin)}$.
\end{remark}
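\noindent The script below (ours, assuming SymPy) illustrates remark \ref{spsym}: specializing the power sums as in i) and ii) turns the coefficients of $\hat p(u)$ into divided powers and binomials respectively, up to the chosen truncation order.
\begin{verbatim}
from sympy import symbols, exp, factorial, ff, expand

a, u = symbols('a u')
N = 6  # truncation order

# case i): p_1 = a, p_r = 0 for r > 1   ==>   hat p(u) = exp(a u)
hp_div = exp(a * u).series(u, 0, N).removeO().expand()
# case ii): p_r = a for all r           ==>   hat p(u) = (1+u)^a
hp_bin = exp(a * sum((-1)**(r - 1) * u**r / r
                     for r in range(1, N))).series(u, 0, N).removeO().expand()

for k in range(N):
    assert expand(hp_div.coeff(u, k) - a**k / factorial(k)) == 0       # a^(k)
    assert expand(hp_bin.coeff(u, k) - ff(a, k) / factorial(k)) == 0   # C(a,k)
print("specializations give divided powers and binomials up to degree", N - 1)
\end{verbatim}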
{\cal{V}}skip .3 truecm
\begin{remark}\label{funtorialita}
Let $M$ be the $\Z$-module with basis $\{p_r|r>0\}$ and, as above, $V={\mathbb{Q}}\otimes_{\Z}M$. Then:
\noindent i) as for the functors $S$, $S^{(div)}$ and $S^{(bin)}$, we have $\Z^{(sym)}[p_r|r>0]\cap V=M$;
\noindent ii) unlike the functors $S$, $S^{(div)}$ and $S^{(bin)}$, $\Z^{(sym)}[p_r|r>0]$ depends on $\{p_r|r>0\}$ and not only on $M$: for instance
$$\Z^{(sym)}[-p_1,p_r|r>1]\noindenteq \Z^{(sym)}[p_r|r>0]$$
(it is easy to check that these integral forms are different for example in degree 3);
\noindent iii) not all the sign changes
of the $p_r$'s produce different $\Z^{(sym)}$-forms of ${\mathbb{Q}}[p_r|r>0]$:
$$\Z^{(sym)}[(-1)^rp_r|r>0]=\Z^{(sym)}[-p_r|r>0]=\Z^{(sym)}[p_r|r>0]$$
since
$$\exp\left(\sum_{r>0}(-1)^{r-1}{(-1)^rp_r\over r}u^r{\cal R}ight)=\exp\left(\sum_{r>0}(-1)^{r-1}{p_r\over r}(-u)^r{\cal R}ight)$$ and
$$\exp\left(\sum_{r>0}(-1)^{r-1}{-p_r\over r}u^r{\cal R}ight)=\exp\left(\sum_{r>0}(-1)^{r-1}{p_r\over r}u^r{\cal R}ight)^{-1}$$
(see remark {\cal R}ef{srinv},1),ii)).
\end{remark}
{\cal{V}}skip .3 truecm
\noindent In general it is not trivial to understand whether an element of ${\mathbb{Q}}[p_r|r>0]$ belongs or not to $\Z^{(sym)}[p_r|r>0]$; proposition {\cal R}ef{tmom} gives an answer to this question, which is generalized in proposition {\cal R}ef{convoluzioneintera} (the examples in remark {\cal R}ef{funtorialita}, ii) and iii) can be obtained also as applications of proposition {\cal R}ef{convoluzioneintera}).
{\cal{V}}skip .3 truecm
\begin{proposition} \label {tmom}
\noindent Let us fix $m>0$ and let $\lambda_m:{\mathbb{Q}}[p_r|r>0]\to{\mathbb{Q}}[p_r|r>0]$ be the algebra homomorphism defined by
$\lambda_m(p_r)=p_{mr}$ for all $r>0$.
\noindent Then $\Z^{(sym)}[p_r|r>0]$ $(=\Z[\hat p_k|k>0])$ is $\lambda_m$-stable.
\begin{proof}
\noindent For $n{\cal I}n\N$ let $\lambda_m^{[n]}:{\mathbb{Q}}[x_1,...,x_n]\to{\mathbb{Q}}[x_1,...,x_n]$ be the algebra homomorphism defined by $\lambda_m^{[n]}(x_i)=x_i^m$ for all $i=1,...,n$.
\noindent We obviously have that
$$\Z[x_1,...,x_n]\ \ {{\cal R}m{is\ }}\lambda_m^{[n]}{{\cal R}m{-stable}},$$
$${\mathbb{Q}}[x_1,...,x_n]_d\ \ {{\cal R}m{is\ mapped\ to\ }}{\mathbb{Q}}[x_1,...,x_n]_{md}\,\,{\cal F}orall d\geq 0,$$
$$\lambda_m^{[n]}\circ\sigma=\sigma\circ\lambda_m^{[n]}\,\,{\cal F}orall n{\cal I}n\N,\, \sigma{\cal I}n{\cal{S}}_n,$$
$$\pi_{n_1,n_2}\circ\lambda_m^{[n_1]}=\lambda_m^{[n_2]}\circ\pi_{n_1,n_2}\,\,{\cal F}orall n_1\geq n_2,$$
$$\lambda_m^{[n]}(p_r^{[n]})=p_{mr}^{[n]}\,\,{\cal F}orall n{\cal I}n\N,\,r>0,$$
hence there exist the limits of the $\lambda_m^{[n]}\big|_{{\mathbb{Q}}[x_1,...,x_n]_d^{{\cal S}_n}}$'s: their direct sum over $d\geq 0$ stabilizes $\oplus_{d\geq 0}\lim\Z[x_1,...,x_n]_d^{{\cal S}_n}=\Z[\hat p_k|k>0]$ and is $\lambda_m$.
\noindent In particular $\lambda_m(\hat p_k){\cal I}n\Z[\hat p_l|l>0]$ ${\cal F}orall k{\cal I}n\N$.
\end{proof}
\end{proposition}
{\cal{V}}skip .3 truecm
\noindent We also propose a second, direct, proof of proposition {\cal R}ef{tmom}, which provides in addition an explicit expression of the $\lambda_m(\hat p_k)$'s in terms of the $\hat p_l$'s.
\begin{proposition}\label{tdmom}
Let $m$ and $\lambda_m$ be as in proposition {\cal R}ef{tmom} and $\omega {\cal I}n\C$ a primitive $m^{th}$ root of 1. Then
$$\lambda_m(\hat p(-u^m))=\prod_{j=0}^{m-1}\hat p(-\omega^j u){\cal I}n\Z[\hat p_k|k>0][[u]].$$
\begin{proof}
The equality in the statement is an immediate consequence of
$$\sum_{j=0}^{m-1}\omega^{jr}=\begin{cases}m&{{\cal R}m{if}}\ m|r\\0&{{\cal R}m{otherwise}},\end{cases}$$
so that
$$-\sum_{j=0}^{m-1}\sum_{r>0}{p_r\over r}\omega^{jr}u^r=-\sum_{r>0}{p_{mr}\over r}u^{mr}=\lambda_m\left(-\sum_{r>0}{p_r\over r}(u^m)^r{\cal R}ight),$$
whose exponential is the claim.
\noindent Then for all $k>0$
$$\lambda_m(\hat p_k){\cal I}n{\mathbb{Q}}[\hat p_l|l>0]\cap\Z[\omega][\hat p_l|l>0]=\Z[\hat p_l|l>0]$$
since ${\mathbb{Q}}\cap\Z[\omega]=\Z$.
\end{proof}
\end{proposition}
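\noindent As a sanity check (ours, not part of the proof), the identity of proposition \ref{tdmom} can be verified by a truncated series computation for $m=2$, where $\omega=-1$, treating the power sums as free symbols; the script assumes SymPy.
\begin{verbatim}
from sympy import symbols, exp, expand

u = symbols('u')
N = 7                                   # keep terms up to u^6
p = symbols('p1:13')                    # p[0] = p_1, ..., p[11] = p_12

def hp(arg, subst=lambda r: p[r - 1]):
    """Truncated hat p(arg) = exp( sum_r (-1)^(r-1) subst(r) arg^r / r )."""
    expo = sum((-1)**(r - 1) * subst(r) * arg**r / r for r in range(1, N))
    return exp(expo).series(u, 0, N).removeO()

# lambda_2 sends p_r to p_{2r}; check  lambda_2(hat p(-u^2)) = hat p(-u) hat p(u)
lhs = hp(-u**2, subst=lambda r: p[2 * r - 1])
rhs = expand(hp(-u) * hp(u)).series(u, 0, N).removeO()

assert expand(lhs - rhs) == 0
print("lambda_2 identity verified up to degree", N - 1)
\end{verbatim}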
\noindent In order to characterize the functions $a:\Z_+\to{\mathbb{Q}}$ such that
$$\Z^{(sym)}[a_rp_r|r>0]\subseteq\Z^{(sym)}[p_r|r>0]$$
we introduce the notation {\cal R}ef{hcappucciof}, where we rename the $p_r$'s into $h_r$ since in the affine Kac-Moody case the $\Z^{(sym)}$-construction describes the imaginary component of the integral form. Moreover from now on $p_i$ will denote a positive prime number.
\begin{notation}\label{hcappucciof}
Given $a:\Z_+\to{\mathbb{Q}}$ set $$\sum_{k\geq 0}\hat h^{\{ a \} }_ku^k=\hat h^{\{a\}}(u)=\exp\left(\sum_{r>0}(-1)^{r-1}{a_rh_r\over r}u^r{\cal R}ight);$$
$1\!\!\!\!1$ denotes the function defined by $1\!\!\!\!1_r=1$ for all $r{\cal I}n\Z_+$;
\noindent for all $m>0$
$1\!\!\!\!1^{(m)}$ denotes the function defined by $1\!\!\!\!1^{(m)}_r=\begin{cases}m&{{\cal R}m{if}}\ m|r\\0&{{\cal R}m{otherwise}}.\end{cases}$
\noindent Thus
$\hat h^{\{1\!\!\!\!1\}}(u)=\hat h(u)$ and $\hat h^{\{1\!\!\!\!1^{(m)}\}}(-u)=\lambda_m(\hat h(-u^m))$.
\end{notation}
{\cal{V}}skip .3 truecm
\begin{recall}
\noindent The convolution product $*$ in the ring of the ${\mathbb{Q}}$-valued arithmetic functions
$${\cal A}r=\{f:\Z_+\to{\mathbb{Q}}\}$$
is defined by
$$(f*g)(n)=\sum_{r,s:{\cal A}top rs=n}f(r)g(s).$$
The M\"obius function $\mu:\Z_+\to{\mathbb{Q}}$ defined by
$$\mu\left(\prod_{i=1}^n p_i^{r_i}{\cal R}ight)=\begin{cases}(-1)^n&{{\cal R}m {if}}\ r_i=1\ {\cal F}orall i\\0&{{\cal R}m{otherwise}}\end{cases}$$
{\centerline{(where the $p_i$'s are distinct positive prime integers and $r_i\geq 1$ for all $i$)}}
is the inverse of $1\!\!\!\!1$ in the ring of the arithmetic functions.
\end{recall}
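\noindent For concreteness (ours, assuming SymPy), here is a tiny reference implementation of the convolution product and of the M\"obius function, checking on $1,\dots,60$ that $\mu$ is indeed the convolution inverse of the constant function $1\!\!\!\!1$.
\begin{verbatim}
from sympy import primefactors, factorint

def convolve(f, g, n):
    """(f*g)(n) = sum over divisors d of n of f(d) g(n/d)."""
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

def mu(n):
    exponents = factorint(n).values()
    if any(e > 1 for e in exponents):
        return 0
    return (-1) ** len(primefactors(n))

one = lambda n: 1                        # the constant function 1
delta = lambda n: 1 if n == 1 else 0     # identity of the convolution ring

assert all(convolve(mu, one, n) == delta(n) for n in range(1, 61))
print("mu * 1 = delta verified for n = 1..60")
\end{verbatim}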
{\cal{V}}skip .3 truecm
\begin{proposition}\label{convoluzioneintera}
\noindent Let $a:\Z_+\to{\mathbb{Q}}$ be any function; then, with the notations fixed in {\cal R}ef{hcappucciof},
$$\hat h^{\{a\}}_k{\cal I}n\Z[\hat h_l|l>0]\ \ {\cal F}orall k>0\Leftrightarrow n|(\mu*a)(n){\cal I}n\Z\ \ {\cal F}orall n>0.$$
\begin{proof}
\noindent Remark that $a=1\!\!\!\!1*\mu*a$, that is
$${\cal F}orall n>0\ a_n=\sum_{m|n}(\mu*a)(m)=\sum_{m|n}{(\mu*a)(m)\over m}m=\sum_{m>0}{(\mu*a)(m)\over m}1\!\!\!\!1^{(m)}_n,$$
which means
$$a=\sum_{m>0}{(\mu*a)(m)\over m}1\!\!\!\!1^{(m)}.$$
Let $k_m={(\mu*a)(m)\over m}$ for all $m>0$, choose $m_0>0$ such that $k_m{\cal I}n\Z$ ${\cal F}orall m<m_0$ and set $a^{(0)}=\sum_{m<m_0}k_m1\!\!\!\!1^{(m)}$, $a'=a-a^{(0)}$, so that
$$\hat h^{\{a\}}(u)=\hat h^{\{a'\}}(u)\hat h^{\{a^{(0)}\}}(u),$$
and, by proposition {\cal R}ef{tmom} (see also notation {\cal R}ef{hcappucciof}),
$$\ \ \ \hat h^{\{a^{(0)}\}}(u){\cal I}n\Z[\hat h_k|k>0][[u]].$$
It follows that
\noindent i) $\hat h^{\{a\}}(u){\cal I}n\Z[\hat h_k|k>0][[u]]\Leftrightarrow\hat h^{\{a'\}}(u){\cal I}n\Z[\hat h_k|k>0][[u]]$.
\noindent ii) ${\cal F}orall n<m_0$ $\hat h^{\{a'\}}_n=0$, so that $\hat h^{\{a\}}_n=\hat h^{\{a^{(0)}\}}_n{\cal I}n\Z[\hat h_k|k>0]$;
in particular $\hat h^{\{a\}}(u){\cal I}n\Z[\hat h_k|k>0][[u]]$ if $k_m{\cal I}n\Z$ ${\cal F}orall m>0$.
\noindent iii) $a'_{m_0}=(\mu*a)(m_0)=m_0k_{m_0}$ so that $\hat h^{\{a'\}}_{m_0}=k_{m_0}h_{m_0}$, which belongs to $\Z[\hat h_k|k>0]$ if and only if
$k_{m_0}{\cal I}n\Z$ (see remark {\cal R}ef{funtorialita},i));
in particular $\hat h^{\{a\}}(u)\noindentot{\cal I}n\Z[\hat h_k|k>0][[u]]$ if $\exists m_0{\cal I}n\Z_+$ such that $k_{m_0}\noindentot{\cal I}n\Z$.
\end{proof}
\end{proposition}
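\noindent The short script below (ours, assuming SymPy) applies the criterion of proposition \ref{convoluzioneintera} to the sign-change examples of remark \ref{funtorialita}: the choice $a_r=(-1)^r$ passes the divisibility test $n\,|\,(\mu*a)(n)$ for every $n$, while the choice $a_1=-1$, $a_r=1$ ($r>1$) already fails it at $n=3$, in agreement with the statement that the corresponding $\Z^{(sym)}$-forms differ in degree 3.
\begin{verbatim}
from sympy import factorint, primefactors

def mu(n):
    if any(e > 1 for e in factorint(n).values()):
        return 0
    return (-1) ** len(primefactors(n))

def mu_star(a, n):
    """(mu * a)(n) for a function a given as a Python callable."""
    return sum(mu(n // d) * a(d) for d in range(1, n + 1) if n % d == 0)

alternating = lambda r: (-1) ** r                 # a_r = (-1)^r
flip_first  = lambda r: -1 if r == 1 else 1       # a_1 = -1, a_r = 1 otherwise

print([n for n in range(1, 31) if mu_star(alternating, n) % n != 0])  # []
print([n for n in range(1, 31) if mu_star(flip_first, n) % n != 0])   # 3, 5, ...
\end{verbatim}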
\begin{proposition}\label{emmepiallaerre}
\noindent Let $a:\Z_+\to\Z$ be a function satisfying the condition
$$p^r|a_{mp^r}-a_{mp^{r-1}}\ \ {\cal F}orall p,m{\cal I}n\Z_+\ \ {{\cal R}m{with}}\ \ p\ \ {{\cal R}m{prime\ and}}\ (m,p)=1.$$
Then $n|(\mu*a)(n)$ ${\cal F}orall n{\cal I}n\Z_+$.
\begin{proof}
\noindent
The condition $1|(\mu*a)(1)$ is equivalent to the condition $a_1{\cal I}n\Z$.
\noindent For $n>1$ remark that $$n|(\mu*a)(n)\Leftrightarrow p^r|(\mu*a)(n)\ \ {\cal F}orall p\ {{\cal R}m{prime}},\ r>0\ {{\cal R}m{such\ that}}\ p^r||n.$$
Recall that if $P$ is the set of the prime factors of $n$ and $p\in P$ then
$$(\mu*a)(n)=\sum_{S\subseteq P}(-1)^{\#S}a_{{n\over\prod_{q\in S}q}}=$$
\begin{equation}\label{mpr}=\sum_{S'\subseteq P\setminus\{p\}}(-1)^{\#S'}(a_{{n\over\prod_{q\in S'}q}}-a_{{n\over p\prod_{q\in S'}q}}).\end{equation}
The claim follows from the remark that
$p^r||n$ if and only if $p^r||{n\over\prod_{q{\cal I}n S'}q}$.
\end{proof}
\end{proposition}
\begin{remark} \label{vicelambda}
The converse of proposition \ref{emmepiallaerre} is trivially true, too, and is immediately proved by applying
(\ref{mpr})
to the minimal $n>0$ such that there exist $p|n$ and $r>0$ ($p^r|n$, $n=mp^r$) not satisfying the hypothesis of the statement.
\end{remark}
\noindent Proposition {\cal R}ef{tmom} will play an important role in the study of the commutation relations in the enveloping algebra of $\hat{\cal F}rak sl_2$ (see remarks {\cal R}ef{stuz},vi) and {\cal R}ef{exev}) and of $\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$ (see remark {\cal R}ef{ometiomecap} and proposition {\cal R}ef{sttuz},iv)).
\noindent Proposition
{\cal R}ef{convoluzioneintera} is based on and generalizes proposition {\cal R}ef{tmom}; it is a key tool in the study of the integral form in the case of $A_2^{(2)}$, see corollary {\cal R}ef{hcappucciod}.
\noindent A more precise connection between the integral form $\Z^{(sym)}[h_r|r>0]$ of ${\mathbb{Q}}[h_r|r>0]$ and the homomorphisms $\lambda_m$'s, namely another $\Z$-basis of $\Z^{(sym)}[h_r|r>0]$ (basis defined in terms of the elements $\lambda_m(\hat h_k)$'s and arising from Garland's and Mitzman's description of the integral form of the affine Kac-Moody algebras) is discussed in appendix {\cal R}ef{appendB}.
{\cal{V}}skip .5 truecm
\section{Some non commutative cases} \label{ncn}
{\cal{V}}skip .5truecm
\noindent We start this section with a basic remark.
\begin{remark} \label{dbsg}
\noindent i) Let $U_1$, $U_2$ be two ${\mathbb{Q}}$-algebras, with integral forms respectively $\tilde U_1$ and $\tilde U_2$. Then $\tilde U_1\otimes_{\Z}\tilde U_2$ is an integral form of the ${\mathbb{Q}}$-algebra $U_1\otimes_{{\mathbb{Q}}}U_2$
.
\noindent ii) Let $U$ be an associative unitary ${\mathbb{Q}}$-algebra (not necessarily commutative)
and $U_1,U_2\subseteq U$ be two ${\mathbb{Q}}$-subalgebras such that $U\cong U_1\otimes_{{\mathbb{Q}}} U_2$ as ${\mathbb{Q}}$-vector spaces. If $\tilde U_1,\tilde U_2$ are integral forms of $U_1,U_2$, then $\tilde U_1\otimes_{\Z} \tilde U_2$ is an integral form of $U$ if and only if $\tilde U_2\tilde U_1\subseteq\tilde U_1\tilde U_2$.
\end{remark}
\noindent Remark {\cal R}ef{dbsg},ii) suggests that if we have a (linear) decomposition of an algebra $U$ as an ordered tensor product of polynomial algebras $U_i$ ($i=1,...,N$), that is we have a linear isomorphism
$$U\cong U_1\otimes_{{\mathbb{Q}}}...\otimes_{{\mathbb{Q}}} U_N,$$ then one can tackle the problem of finding an integral form of $U$ by studying the commutation relations among the elements of some suitable integral forms of the $U_i$'s.
\noindent Glueing together in a non commutative way the different integral forms of the algebras of polynomials discussed in section {\cal R}ef{intgpl} is the aim of this section, which collects the preliminary work of the paper: the main results of the following sections are applications of the formulas found here.
{\cal{V}}skip .3 truecm
\begin{notation}\label{lard}
\noindent Let $U$ be an associative ${\mathbb{Q}}$-algebra and $a{\cal I}n U$.
\noindent We denote by $L_a$ and $R_a$ respectively the left and right multiplication by $a$; of course $L_a-R_a=[a,\cdot]=-[\cdot,a]$.
\end{notation}
{\cal{V}}skip .3 truecm
\begin{lemma} \label{cle}
\noindent Let $U$ be an associative unitary ${\mathbb{Q}}$-algebra.
\noindent Consider elements $a,b,c{\cal I}n U$,
$f,g{\cal I}n End(U)$ and ${\cal A}lpha(u){\cal I}n U[[u]]$.
Then:
i) if ${\exp}(f)$ and ${\exp}(g)$ converge and $[f,g]=0$ we have $${{\cal R}m{exp}}(f\pm g)={{\cal R}m{exp}}(f){{\cal R}m{exp}}(g)^{\pm 1};$$
ii) $[L_a,R_a]=0$;
iii) if $f$ is an algebra-homomorphism and $f(a)=a$ we have
$$[f,L_a]=[f,R_a]=0;$$
iv) if ${\exp}(a)$ converges so do ${{\cal R}m{exp}}(L_a)$ and ${{\cal R}m{exp}}(R_a)$, and we have
$${{\cal R}m{exp}}(L_a)=L_{{{\cal R}m{exp}}(a)},\,\,{{\cal R}m{exp}}(R_a)=R_{{{\cal R}m{exp}}(a)},\,\,{{\cal R}m{exp}}(R_a)=L_{{{\cal R}m{exp}}(a)}{{\cal R}m{exp}}([\cdot,a]);$$
v) if $\exp(a)$ and ${{\cal R}m{exp}}(c)$ converge we have $$ab=bc\Leftrightarrow {{\cal R}m{exp}}(a)b=b{{\cal R}m{exp}}(c);$$
vi) if $\exp(b)$ converges and $[b,c]=0$ we have $$[a,b]=c\Leftrightarrow a{{\cal R}m{exp}}(b)={{\cal R}m{exp}}(b)(a+c)
;$$
vii) if ${{\cal R}m{exp}}(a)$, ${{\cal R}m{exp}}(b)$ and ${{\cal R}m{exp}}(c)$ converge and $[a,c]=[b,c]=0$ then
$$[a,b]=c\Leftrightarrow {{\cal R}m{exp}}(a){{\cal R}m{exp}}(b)={{\cal R}m{exp}}(b){{\cal R}m{exp}}(a){{\cal R}m{exp}}(c)$$
viii) if ${\exp}(a)$, ${{\cal R}m{exp}}(b)$ and ${{\cal R}m{exp}}(c)$ converge and $[a,c]=[b,c]=0$ then
$$[a,b]=c{\mathbb{R}}ightarrow {{\cal R}m{exp}}(a+b)={{\cal R}m{exp}}(a){{\cal R}m{exp}}(b){{\cal R}m{exp}}(-c/2);$$
ix) if ${{\cal R}m{d}}:U\to U$ is a derivation and $[a,{{\cal R}m{d}}(a)]=0$ we have
$${{\cal R}m{d}}({{\cal R}m{exp}}(a))={{\cal R}m{d}}(a){{\cal R}m{exp}}(a)={{\cal R}m{exp}}(a){{\cal R}m{d}}(a).$$
x) if $\alpha(u)=\sum_{r\in\N}\alpha_ru^r$ ($\alpha_r\in U$ $\forall r\in\N$) we have
$${{\rm{d}}\over{\rm{d}}u}\alpha(u)=\alpha(u)b\Leftrightarrow \alpha(u)=\alpha_0{\rm{exp}}(bu)$$
and
$${{\rm{d}}\over{\rm{d}}u}\alpha(u)=b\alpha(u)\Leftrightarrow \alpha(u)={\rm{exp}}(bu)\alpha_0.$$
\begin{proof}
Statements v) and vi) are immediate consequences respectively of the fact that for all $n\in\N$:
v) $a^nb=bc^n$;
vi) $ab^{(n)}=b^{(n)}a+b^{(n-1)}c$.
\noindent vii) follows from v) and vi).
\noindent viii) follows from vii):
$$(a+b)^{(n)}=\sum_{r,s,t:{\cal A}top r+s+2t=n}{(-1)^t\over 2^t}a^{(r)}b^{(s)}c^{(t)}.$$
\noindent The other points are obvious.
\end{proof}
\end{lemma}
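\noindent A concrete check (ours, not part of the proof, assuming SymPy) of items vii) and viii) can be carried out in the $3\times 3$ Heisenberg example $a=E_{12}$, $b=E_{23}$, $c=[a,b]=E_{13}$, where $c$ is central and all exponentials are finite, exact sums.
\begin{verbatim}
from sympy import Matrix, eye

def E(i, j):
    m = Matrix.zeros(3, 3)
    m[i - 1, j - 1] = 1
    return m

def mexp(m):
    # m is nilpotent with m**3 == 0, so the exponential is a finite, exact sum
    return eye(3) + m + m * m / 2

a, b = E(1, 2), E(2, 3)
c = a * b - b * a                 # = E(1,3), commutes with both a and b

# lemma cle, vii):  exp(a) exp(b) = exp(b) exp(a) exp(c)
assert mexp(a) * mexp(b) == mexp(b) * mexp(a) * mexp(c)
# lemma cle, viii): exp(a+b) = exp(a) exp(b) exp(-c/2)
assert mexp(a + b) == mexp(a) * mexp(b) * mexp(-c / 2)
print("lemma cle vii) and viii) verified in the Heisenberg example")
\end{verbatim}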
{\cal{V}}skip .3 truecm
\begin{proposition} \label{bdm}
\noindent Let us fix $m\in\Z$ and consider the ${\mathbb{Q}}$-algebra structure on $U={\mathbb{Q}}[x]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[h]$ given by $xh=(h-m)x$.
\noindent Then $\Z^{(div)}[x]\otimes_{\Z}\Z^{(bin)}[h]$ and $\Z^{(bin)}[h]\otimes_{\Z}\Z^{(div)}[x]$ are integral forms of $U$: their images in $U$ are closed under multiplication, and coincide. Indeed
\begin{equation}\label{fru}x^{(k)}{h\choose l}={h-mk\choose l}x^{(k)}\,\,\forall k,l\in\N\end{equation}
or equivalently, with a notation that will be useful in the following,
\begin{equation}\label{fu}
{{\cal R}m{exp}}(xu)(1+v)^h=(1+v)^h {{\cal R}m{exp}}\left({xu\over (1+v)^m}{\cal R}ight).
\end{equation}
\begin{proof}
The relation between $x$ and $h$
can be written as
$$xP(h)=P(h-m)x$$ and $$x^{(k)}P(h)=P(h-mk)x^{(k)}$$ for all $P{\cal I}n{\mathbb{Q}}[h]$ and for all $k >0$. In particular it holds for $P(h)={h\choose l}$, that is
\begin{equation} \label{xvh}
x(1+v)^h=(1+v)^{h-m}x=(1+v)^h{x\over(1+v)^m}
\end{equation}
and \begin{equation}\label{xvh2}x^{(k)}(1+v)^h=(1+v)^h\left({x\over(1+v)^m}{\cal R}ight)^{(k)}.\end{equation}
\noindent The conclusion follows multiplying by $u^k$ and summing over $k$.
\end{proof}
\end{proposition}
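\noindent The following $2\times 2$ matrix illustration (ours, assuming SymPy) is a sketch of formula (\ref{fu}) in the smallest representation we could think of: with $x=E_{12}$ and $h={\rm diag}(m,0)$ the defining relation $xh=(h-m)x$ holds, $\exp(xu)=1+xu$, and $(1+v)^h$ is diagonal, so both sides can be compared exactly as matrices of functions of $u,v$.
\begin{verbatim}
from sympy import symbols, Matrix, eye, simplify

m, u, v = symbols('m u v')

x = Matrix([[0, 1], [0, 0]])
h = Matrix([[m, 0], [0, 0]])
assert x * h == (h - m * eye(2)) * x          # the defining relation of U

pow_h = Matrix([[(1 + v)**m, 0], [0, 1]])     # (1+v)^h in this representation
lhs = (eye(2) + x * u) * pow_h
rhs = pow_h * (eye(2) + x * u / (1 + v)**m)

assert (lhs - rhs).applyfunc(simplify) == Matrix.zeros(2, 2)
print("formula (fu) verified in the 2x2 example")
\end{verbatim}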
{\cal{V}}skip .3 truecm
\begin{proposition} \label{jhg}
\noindent Let us fix $m{\cal I}n\Z$ and consider the ${\mathbb{Q}}$-algebra structure on $$U={\mathbb{Q}}[x]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[z]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[y]$$
defined by $[x,z]=[y,z]=0$, $[x,y]=mz$.
\noindent Then $\Z^{(div)}[x]\otimes_{\Z}\Z^{(div)}[z]\otimes_{\Z}\Z^{(div)}[y]$ is an integral form of $U$.
\begin{proof}
Since $z$ commutes with $x$ and $y$ we just have to straighten $y^{(r)}x^{(s)}$. Thus the claim is a straightforward consequence of lemma {\cal R}ef{cle},vii):
\begin{equation}\label{strxx}\exp(yu)\exp(xv)=\exp(xv)\exp(zuv)^{-m}\exp(yu).\end{equation}
\end{proof}
\end{proposition}
{\cal{V}}skip .3 truecm
\begin{proposition} \label{heise}
\noindent Let us fix $m,l{\cal I}n\Z$ and consider the ${\mathbb{Q}}$-algebra structure on $U={\mathbb{Q}}[h_r|r<0]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[h_0,c]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[h_r|r>0]$ given by
$$[c,h_r]=0,\,\,[h_r,h_s]=\delta_{r+s,0}r(m+(-1)^rl)c\,\,{\cal F}orall r,s{\cal I}n\Z.$$
\noindent Then, recalling the notation $\Z[\hat h_{\pm k}|k>0]=\Z^{(sym)}[h_{\pm r}|r>0]$ and defining $U_{\Z}$ to be the $\Z$-subalgebra of $U$ generated by
$U_{\Z}^{\pm}=\Z^{(sym)}[h_{\pm r}|r>0]$ and
$U_{\Z}^0=\Z^{(bin)}[h_0,c]$, we have that
\begin{equation} \label{hhh}
\hat h_+(u)\hat h_-(v)=\hat h_-(v)(1-uv)^{-mc}(1+uv)^{-lc}\hat h_+(u)
\end{equation}
and $U_{\Z}=U_{\Z}^-U_{\Z}^0U_{\Z}^+$, so that
$$U_{\Z}\cong\Z^{(sym)}[h_{-r}|r>0]\otimes_{\Z}\Z^{(bin)}[h_0,c]\otimes_{\Z}\Z^{(sym)}[h_{r}|r>0]$$
is an integral form of $U$.
\begin{proof}
{\cal R}ef{hhh} follows from lemma {\cal R}ef{cle}, vii) remarking that
$$\Big[\sum_{r>0}(-1)^{r-1}{h_r\over r}u^r,\sum_{s>0}(-1)^{s-1}{h_{-s}\over s}v^s\Big]=c\sum_{r>0}{m+(-1)^rl\over r}u^rv^r=$$
$$=-mc{{\cal R}m {ln}}(1-uv)-lc{{\cal R}m {ln}}(1+uv).$$
\noindent Of course $U_{\Z}^0U_{\Z}^-=U_{\Z}^-U_{\Z}^0$ is a $\Z$-subalgebra of $U$, $U_{\Z}^-U_{\Z}^0U_{\Z}^+\subseteq U_{\Z}$, $U_{\Z}$ is generated by
$U_{\Z}^-U_{\Z}^0U_{\Z}^+$ as $\Z$-algebra and $U_{\Z}^-U_{\Z}^0U_{\Z}^+\cong U_{\Z}^-\otimes_{\Z}U_{\Z}^0\otimes_{\Z}U_{\Z}^+$ as $\Z$-modules.
\noindent Hence we need to prove that $U_{\Z}^-U_{\Z}^0U_{\Z}^+$ is a $\Z$-subalgebra of $U$, or equivalently that it is closed under left multiplication by $U_{\Z}^+$ (because it is obviously
closed under left multiplication by $U_{\Z}^-U_{\Z}^0$), which is a straightforward consequence of {\cal R}ef{hhh}.
\end{proof}
\end{proposition}
{\cal{V}}skip .3 truecm
\begin{lemma}\label{lhlh}
Let $U$ be a ${\mathbb{Q}}$-algebra, $T:U\to U$ an automorphism, $$f{\cal I}n\sum_{r>0}\Z T^ru^r\subseteq End(U[[u]]),$$
$h{\cal I}n uU[[u]]$ and $x{\cal I}n U$ such that
$T(h)=h$ and $[x,h]=f(x)$.
Then
$$x{{\cal R}m{exp}}(h)=\exp(h)\cdot \exp(f)(x).$$
\begin{proof}
By proposition {\cal R}ef{cle},iv)
$$x\exp(h)=\exp(h)\exp([\cdot,h])(x),$$
so we have to prove that
$\exp([\cdot,h])(x)=\exp(f)(x),$ or equivalently that
$[\cdot,h]^n(x)=f^n(x)$ for all $n{\cal I}n\N$.
\noindent If $n=0,1$ the claim is obvious; if $n>1$, $f^{n-1}(x)=\sum_{r>0}a_rT^ru^r(x)$ with $a_r{\cal I}n\Z$ for all $r>0$, $f$ commutes with $T$, and by the inductive hypothesis
$$[\cdot,h]^n(x)=[f^{n-1}(x),h]=\left[\sum_{r>0}a_rT^{r}u^r(x),h{\cal R}ight]=$$
$$=\sum_{r>0}a_ru^rT^r([x,h])=\sum a_ru^rT^r f(x)=f\sum a_r u^rT^r(x)=f(f^{n-1}(x))=f^n(x).$$
\end{proof}
\end{lemma}
\begin{proposition} \label{hh}
\noindent Let us fix integers $m_d$'s ($d>0$) and consider elements $\{h_r,\ x_s|r>0,s{\cal I}n\Z\}$ in a ${\mathbb{Q}}$-algebra $U$
such that
$$[h_r,x_s]=\sum_{d|r}dm_dx_{r+s}\ \ {\cal F}orall r>0, s{\cal I}n\Z.$$
\noindent Let $T$ be an algebra automorphism of $U$ such that $$T(h_r)=h_r\,\,{{\cal R}m{and}}\,\, T(x_s)=x_{s-1}\,\,{\cal F}orall r>0, s{\cal I}n\Z.$$
\noindent Then, recalling the notation $\Z[\hat h_k|k>0]=\Z^{(sym)}[h_r|r>0]$, we have that
\begin{equation}\label{cxh}
x_r\hat h_+(u)=\hat h_+(u)\cdot\left(\prod_{d>0}(1-(-T^{-1}u)^d)^{-m_d}{\cal R}ight)(x_r).
\end{equation}
If moreover the subalgebras of $U$ generated by $\{h_r|r>0\}$ and $\{x_r|r{\cal I}n\Z\}$ are isomorphic respectively to
${\mathbb{Q}}[h_r|r>0]$ and ${\mathbb{Q}}[x_r|r{\cal I}n\Z]$ and
there is a ${\mathbb{Q}}$-linear isomorphism
$U\cong{\mathbb{Q}}[h_r|r>0]\otimes_{{\mathbb{Q}}}{\mathbb{Q}}[x_r|r{\cal I}n\Z]$
then
$$\Z^{(sym)}[h_r|r>0]\otimes_{\Z}\Z^{(div)}[x_r|r{\cal I}n\Z]$$ is an integral form of $U$.
\begin{proof}
This is an application of lemma {\cal R}ef{lhlh}: let $h=\sum_{r>0}(-1)^{r-1}{h_r\over r}u^r$; then
$$[x_0,h]=\sum_{r>0}{(-1)^r\over r}u^r\sum_{d|r}dm_dT^{-r}(x_0)=$$
$$=\sum_{d>0}\sum_{s>0}{(-1)^{ds}\over s}m_dT^{-ds}u^{ds}(x_0)=f(x_0)$$
where $$f=
-\sum_{d>0}m_d\ln(1-(-1)^{d}T^{-d}u^d).$$
Then
$$x_0\hat h_+(u)=\hat h_+(u)\cdot\exp(f)(x_0)=\hat h(u)\cdot\left(\prod_{d>0}(1-(-T^{-1}u)^d)^{-m_d}{\cal R}ight)(x_0),$$
and the analogous statement for $x_r$ follows applying $T^{-r}$.
\noindent Remark that $\prod_{d>0}(1-(-T^{-1}u)^d)^{-m_d}=\sum_{r\geq 0}a_rT^{-r}u^r$ with $a_r{\cal I}n\Z$ ${\cal F}orall r{\cal I}n\N$; the hypothesis on the commutativity of the subalgebra generated by the $x_r$'s implies that
$(\sum_{r\geq 0}a_rx_ru^r)^{(k)}$ lies in the subalgebra of $U$ generated by the divided powers $\{x_r^{(k)}|r{\cal I}n\Z,k\geq 0\}$, which allows to conclude the proof thanks to the last hypotheses on the structure of $U$.
\end{proof}
\end{proposition}
{\cal{V}}skip .3 truecm
\begin{remark} \label{praff}
\noindent Proposition {\cal R}ef{hh}, implies proposition {\cal R}ef{bdm}: indeed when $m_1=m$, $m_d=0$ ${\cal F}orall d>1$ we have a projection
$h_r\mapsto h, x_r\mapsto x$, which maps
$\exp(x_0u)$ to $\exp(xu)$, $\hat h(u)$ to $(1+u)^h$ and $T$ to the identity.
\end{remark}
{\cal{V}}skip .5 truecm
\section{The integral form of ${\cal F}rak sl_2$ ($A_1$)} \label{sld}
{\cal{V}}skip .5truecm
\noindent The results about ${\frak sl}_2$ and the $\Z$-basis of the integral form ${\cal U}_{\Z}({\frak sl}_2)$ of its enveloping algebra ${\cal U}({\frak sl}_2)$ are well known (see \cite{Ko} and \cite{S}). Here we recall the description of ${\cal U}z({\frak sl}_2)$ in terms of the non-commutative generalizations described in section \ref{ncn}, with the notations of the commutative examples given in section \ref{intgpl}.
\noindent The proof expressed in this language has the advantage of being easily generalized to the affine case.
{\cal{V}}skip .3 truecm
\begin{definition} \label{sl2}
\noindent ${\cal F}rak sl_2$ (respectively ${\cal U}({\cal F}rak sl_2)$) is the Lie algebra (respectively the associative algebra) over ${\mathbb{Q}}$ generated by $\{e,f,h\}$ with relations $$[h,e]=2e,\, [h,f]=-2f,\, [e,f]=h.$$
\noindent ${\cal U}z({\cal F}rak sl_2)$ is the $\Z$-subalgebra of ${\cal U}({\cal F}rak sl_2)$ generated by $\{e^{(k)},f^{(k)}| \; k{\cal I}n\N\}$.
\end{definition}
{\cal{V}}skip .3 truecm
\begin{theorem}\label{trdc}
\noindent Let ${\cal U}^+$, ${\cal U}^-$, ${\cal U}^0$ denote the ${\mathbb{Q}}$-subalgebras of ${\cal U}({\frak sl}_2)$ generated respectively by $e$, by $f$, by $h$.
\noindent Then ${\cal U}^+\cong{\mathbb{Q}}[e]$, ${\cal U}^-\cong{\mathbb{Q}}[f]$, ${\cal U}^0\cong{\mathbb{Q}}[h]$ and ${\cal U}({\frak sl}_2)\cong{\cal U}^-\otimes{\cal U}^0\otimes{\cal U}^+$; moreover
\begin{equation} \label{usldi}
{\cal U}z({\cal F}rak sl_2)\cong\Z^{(div)}[f]\otimes_{\Z}\Z^{(bin)}[h]\otimes_{\Z}\Z^{(div)}[e]
\end{equation}
is an integral form of ${\cal U}({\cal F}rak sl_2)$.
\begin{proof}
Thanks to proposition {\cal R}ef{bdm}, we just have to study the commutation between
$e^{(k)}$ and $f^{(l)}$ for $k,l{\cal I}n\N$.
\noindent Let us recall the commutation relation
\begin{equation} \label{efu}
e\exp(fu)=\exp(fu)(e+hu-fu^2)
\end{equation}
which is a direct application of lemma {\cal R}ef{cle},iv) and of the relations
$[e,f]=h$, $[h,f]=-2f$ and $[f,f]=0$.
\noindent We want to prove that in ${\cal U}({\cal F}rak sl_2)[[u,v]]$
\begin{equation} \label{cef}
{{\cal R}m{exp}}(eu){{\cal R}m{exp}}(fv)={{\cal R}m{exp}}\Big({fv\over 1+uv}\Big)(1+uv)^h{{\cal R}m{exp}}\Big({eu\over 1+uv}\Big).
\end{equation}
Let $F(u)={{\cal R}m{exp}}\Big({fv\over 1+uv}\Big)(1+uv)^h{{\cal R}m{exp}}\Big({eu\over 1+uv}\Big).$
\noindent It is obvious that $F(0)={{\cal R}m{exp}}(fv)$; hence our claim is equivalent to $${{{\cal R}m{d}}\over{{\cal R}m{d}}u}F(u)=eF(u).$$
To obtain this result we differentiate, using lemma \ref{cle},ix), and then apply formulas \ref{xvh} and \ref{efu}:
$${{{\cal R}m{d}}\over{{\cal R}m{d}}u}F(u)=$$
$$={{\cal R}m{exp}}\Big({fv\over 1+uv}\Big)(1+uv)^h{e\over(1+uv)^2}{{\cal R}m{exp}}\Big({eu\over 1+uv}\Big)+$$
$$+{{\cal R}m{exp}}\Big({fv\over 1+uv}\Big)\Big({hv\over 1+uv}-{fv^2\over(1+uv)^2}\Big)(1+uv)^h{{\cal R}m{exp}}\Big({eu\over 1+uv}\Big)=$$
$$={{\cal R}m{exp}}\Big({fv\over 1+uv}\Big)\Big(e+{hv\over 1+uv}-{fv^2\over(1+uv)^2}\Big)(1+uv)^h{{\cal R}m{exp}}\Big({eu\over 1+uv}\Big)=$$
$$=eF(u).$$
Remarking that $${xu\over 1+v}{\cal I}n\Z[x][[u,v]],\,\,{{\cal R}m{hence}}\,\,\Big({xu\over 1+v}\Big)^{(k)}{\cal I}n\Z^{(div)}[x][[u,v]]\,\,{\cal F}orall k{\cal I}n\N,$$
it follows that the right hand side of \ref{usldi} is an integral form of ${\cal U}({\frak sl}_2)$ (containing ${\cal U}z({\frak sl}_2)$).
\noindent Finally remark that, inverting the exponentials on the right hand side, formula (\ref{cef}) gives an expression of $(1+uv)^h$ in terms of the divided powers of $e$ and $f$, so that
$\Z^{(bin)}[h] \subseteq {\cal U}z({\frak sl}_2)$, which completes the proof.
\end{proof}
\end{theorem}
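\noindent As an independent sanity check (ours, not part of the proof, assuming SymPy), formula (\ref{cef}) can be verified in the defining two-dimensional representation of ${\frak sl}_2$, where $e=E_{12}$, $f=E_{21}$, $h={\rm diag}(1,-1)$; there $\exp(eu)=1+eu$, $\exp(fv)=1+fv$ and $(1+uv)^h={\rm diag}(1+uv,(1+uv)^{-1})$, so both sides are matrices of rational functions in $u,v$.
\begin{verbatim}
from sympy import symbols, Matrix, eye, simplify, zeros

u, v = symbols('u v')

e = Matrix([[0, 1], [0, 0]])
f = Matrix([[0, 0], [1, 0]])

lhs = (eye(2) + e * u) * (eye(2) + f * v)

pow_h = Matrix([[1 + u * v, 0], [0, 1 / (1 + u * v)]])     # (1+uv)^h
rhs = (eye(2) + f * v / (1 + u * v)) * pow_h * (eye(2) + e * u / (1 + u * v))

assert (lhs - rhs).applyfunc(simplify) == zeros(2, 2)
print("formula (cef) verified in the defining representation")
\end{verbatim}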
{\cal{V}}skip .5 truecm
\section{The integral form of $\hat{{\cal F}rak sl_2}$ ($A_1^{(1)}$)} \label{slh}
{\cal{V}}skip .5truecm
\noindent The results about $\hat{{\frak sl}_2}$ and the integral form $\hat{\cal U}_{\Z}$ of its enveloping algebra $\hat{\cal U}$ are due to Garland (see \cite{HG}). Here we simplify the description of the imaginary positive component of $\hat{\cal U}_{\Z}$,
proving that it is an algebra of polynomials over $\Z$, and give a compact and complete proof of the assertion that the set given in theorem \ref{trm} is actually a $\Z$-basis of $\hat{\cal U}z$. This proof has the advantage, following \cite{DM}, of reducing the long and complicated commutation formulas to compact, simply readable and easily proved ones. It is evident from this approach that the results for $\hat{{\frak sl}_2}$ are generalizations of those for ${\frak sl}_2$, so that the commutation formulas
arise naturally recalling the homomorphism
\begin{equation} \label{evaluation}
ev:\hat{{\cal F}rak sl_2}={\cal F}rak sl_2\otimes{\mathbb{Q}}[t^{\pm 1}]\oplus{\mathbb{Q}} c\to{\cal F}rak sl_2\otimes{\mathbb{Q}}[t^{\pm 1}]\to{\cal F}rak sl_2
\end{equation}
induced by the evaluation of $t$ at 1.
\noindent On the other hand these results and the strategy for their proof will be shown to be in turn generalizable to $\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$.
\noindent As announced in the introduction, the proof of theorem {\cal R}ef{trm} is based on a few results:
proposition {\cal R}ef{zzk}, proposition {\cal R}ef{pum}, lemma {\cal R}ef{limt}, and proposition {\cal R}ef{exefh}.
{\cal{V}}skip .3 truecm
\begin{definition} \label{hs2}
\noindent $\hat{{\frak sl}_2}$ (respectively $\hat{\cal U}$) is the Lie algebra (respectively the associative algebra) over ${\mathbb{Q}}$ generated by $\{x_r^+,x_r^-,h_r,c|r\in\Z\}$ with relations $$c\,\,\,{\rm{is\,\,central}},$$
$$[h_r,h_s]=2r\delta_{r+s,0}c,\,\,\,[h_r,x_s^{\pm}]=\pm 2x_{r+s}^{\pm}$$
$$[x_r^+,x_s^+]=0=[x_r^-,x_s^-],$$
$$[x_r^+,x_s^-]=h_{r+s}+r\delta_{r+s,0}c.$$
\noindent Notice that $\{x_r^+,x_r^-|r\in\Z\}$ generates $\hat{\cal U}$.
\noindent $\hat{\cal U}^+$, $\hat{\cal U}^-$, $\hat{\cal U}^0$ are the subalgebras of $\hat{\cal U}$ generated respectively by $\{x_r^+|r\in\Z\}$, $\{x_r^-|r\in\Z\}$, $\{c,h_r|r\in\Z\}$.
\noindent $\hat{\cal U}^{0,+}$, $\hat{\cal U}^{0,-}$, $\hat{\cal U}^{0,0}$ are the subalgebras of $\hat{\cal U}$ (of $\hat{\cal U}^0$) generated respectively by $\{h_r|r>0\}$, $\{h_r|r<0\}$, $\{c,h_0\}$.
\end{definition}
{\cal{V}}skip .3 truecm
\begin{remark} \label{hefp}
\noindent $\hat{\cal U}^+$, $\hat{\cal U}^-$ are (commutative) algebras of polynomials:
$$\hat{\cal U}^+\cong{\mathbb{Q}}[x_r^+|r{\cal I}n\Z],\,\,\,\hat{\cal U}^-\cong{\mathbb{Q}}[x_r^-|r{\cal I}n\Z];$$
$\hat{\cal U}^0$ is not commutative: $[h_r,h_{-r}]=2rc$;
\noindent $\hat{\cal U}^{0,+}$, $\hat{\cal U}^{0,-}$, $\hat{\cal U}^{0,0}$, are (commutative) algebras of polynomials:
$$\hat{\cal U}^{0,+}\cong{\mathbb{Q}}[h_r|r>0],\,\,\,\hat{\cal U}^{0,-}\cong{\mathbb{Q}}[h_r|r<0],\,\,\,\hat{\cal U}^{0,0}\cong{\mathbb{Q}}[c,h_0];$$
Moreover we have the following ``triangular'' decompositions:
$$\hat{\cal U}\cong\hat{\cal U}^-\otimes\hat{\cal U}^0\otimes\hat{\cal U}^+,$$
$$\hat{\cal U}^0\cong\hat{\cal U}^{0,-}\otimes\hat{\cal U}^{0,0}\otimes\hat{\cal U}^{0,+}.$$
Remark that the images in $\hat{\cal U}$ of $\hat{\cal U}^-\otimes\hat{\cal U}^0$ and $\hat{\cal U}^0\otimes\hat{\cal U}^+$ are subalgebras of $\hat{\cal U}$ and the images of
$\hat{\cal U}^{0,-}\otimes\hat{\cal U}^{0,0}$ and $\hat{\cal U}^{0,0}\otimes\hat{\cal U}^{0,+}$ are commutative subalgebras of $\hat{\cal U}^0$.
\end{remark}
{\cal{V}}skip .3 truecm
\begin{definition} \label{hto}
\noindent $\hat{\cal U}$ is endowed with the following anti/auto/homo/morphisms:
\noindent $\sigma$ is the antiautomorphism defined on the generators by:
$$x_r^+\mapsto x_r^+,\,\,\,x_r^-\mapsto x_r^-,\,\,\,({\mathbb{R}}ightarrow h_r\mapsto-h_r,\,\,\,c\mapsto -c);$$
$\Omega$ is the antiautomorphism defined on the generators by:
$$x_r^+\mapsto x_{-r}^-,\,\,\,x_r^-\mapsto x_{-r}^+,\,\,\,({\mathbb{R}}ightarrow h_r\mapsto h_{-r},\,\,\,c\mapsto c);$$
\noindent $T$ is the automorphism defined on the generators by:
$$x_r^+\mapsto x_{r-1}^+,\,\,\,x_r^-\mapsto x_{r+1}^-,\,\,\,({\mathbb{R}}ightarrow h_r\mapsto h_r-\delta_{r,0}c,\,\,\,c\mapsto c);$$
\noindent for all $m{\cal I}n\Z$, $\lambda_m$ is the homomorphism defined on the generators by:
$$x_r^+\mapsto x_{mr}^+,\,\,\,x_r^-\mapsto x_{mr}^-,\,\,\,({\mathbb{R}}ightarrow h_r\mapsto h_{mr},\,\,\,c\mapsto mc).$$
\end{definition}
{\cal{V}}skip .3 truecm
\begin{remark} \label{hti}
\noindent $\sigma^2={{\cal R}m{id}}_{\hat{\cal U}}$, $\Omega^2={{\cal R}m{id}}_{\hat{\cal U}}$, $T$ is invertible of infinite order;
\noindent $\lambda_{-1}^2=\lambda_1={{\cal R}m{id}}_{\hat{\cal U}}$; $\lambda_m$ is not invertible if $m\noindenteq\pm 1$; $\lambda_0=ev$ (through the identification
$<x_0^+,x_0^-,h_0>\cong<e,f,h>$).
\end{remark}
\begin{remark} \label{htc}
{\cal{V}}skip .3 truecm
\noindent $\sigma\Omega=\Omega\sigma$, $\sigma T=T\sigma$, $\sigma\lambda_m=\lambda_m\sigma$ for all $m{\cal I}n\Z$;
\noindent $\Omega T=T\Omega$, $\Omega\lambda_m=\lambda_m\Omega$ for all $m{\cal I}n\Z$;
\noindent $\lambda_m T^{\pm 1}=T^{\pm m}\lambda_m$ for all $m{\cal I}n\Z$;
\noindent $\lambda_m\lambda_n=\lambda_{mn}$, for all $m,n{\cal I}n\Z$.
\end{remark}
{\cal{V}}skip .3 truecm
\begin{remark} \label{hbs}
\noindent $\sigma\big|_{\hat{\cal U}^{\pm}}={{\cal R}m{id}}_{\hat{\cal U}^{\pm}},\,\,\,\sigma(\hat{\cal U}^{0,\pm})=\hat{\cal U}^{0,\pm},\,\,\,\sigma(\hat{\cal U}^{0,0})=\hat{\cal U}^{0,0}$.
\noindent $\Omega(\hat{\cal U}^{\pm})=\hat{\cal U}^{\mp},\,\,\,\Omega(\hat{\cal U}^{0,\pm})=\hat{\cal U}^{0,\mp},\,\,\,\Omega\big|_{\hat{\cal U}^{0,0}}={{\cal R}m{id}}_{\hat{\cal U}^{0,0}}$.
\noindent $T(\hat{\cal U}^{\pm})=\hat{\cal U}^{\pm},\,\,\,T\big|_{\hat{\cal U}^{0,\pm}}={{\cal R}m{id}}_{\hat{\cal U}^{0,\pm}},\,\,\,
T(\hat{\cal U}^{0,0})=\hat{\cal U}^{0,0}$.
\noindent For all $m{\cal I}n\Z$ $\lambda_m(\hat{\cal U}^{\pm})\subseteq\hat{\cal U}^{\pm},\,\,\,\lambda_m(\hat{\cal U}^0)=\hat{\cal U}^0,\,\,\,\lambda_m(\hat{\cal U}^{0,0})\subseteq\hat{\cal U}^{0,0}$, $$\lambda_m(\hat{\cal U}^{0,\pm})\subseteq
\begin{cases}\hat{\cal U}^{0,\pm}&{{\cal R}m{if}}\,m>0\cr \hat{\cal U}^{0,\mp}&{{\cal R}m{if}}\, m<0\cr \hat{\cal U}^{0,0}&{{\cal R}m{if}}\,m=0.
\end{cases}$$
\end{remark}
{\cal{V}}skip .3 truecm
\begin{definition}\label{hhuz}
\noindent Here we define some $\Z$-subalgebras of $\hat{\cal U}$:
\noindent $\hat{\cal U}z$ is the $\Z$-subalgebra of $\hat{\cal U}$ generated by $\{(x_r^{+})^{(k)},(x_r^{-})^{(k)}|r\in\Z,k\in\N\}$;
\noindent $\hat{\cal U}z^{\pm}=\Z^{(div)}[x_r^{\pm}|r\in\Z]$;
\noindent $\hat{\cal U}z^{0,0}=\Z^{(bin)}[h_0,c]$;
\noindent $\hat{\cal U}z^{0,\pm}=\Z^{(sym)}[h_{\pm r}|r>0]$;
\noindent $\hat{\cal U}z^0$ is the $\Z$-subalgebra of $\hat{\cal U}$ generated by $\hat{\cal U}z^{0,-}$, $\hat{\cal U}z^{0,0}$ and $\hat{\cal U}z^{0,+}$.
The notations are those of section \ref{intgpl}.
\end{definition}
{\cal{V}}skip .3 truecm
\noindent We want to prove that $\hat{\cal U}z^0 =\hat{\cal U}z^{0,-}\hat{\cal U}z^{0,0}\hat{\cal U}z^{0,+}$, so that it is an integral form of $\hat{\cal U}^0$, and that $\hat{\cal U}z=\hat{\cal U}z^-\hat{\cal U}z^0\hat{\cal U}z^+$, so that $\hat{\cal U}z$ is an integral form of $\hat{\cal U}$.
\noindent As in the case of ${\frak sl}_2$,
working in $\hat{\cal U}[[u]]$ (see the notation below) simplifies the proofs enormously and gives a deeper insight into the question.
{\cal{V}}skip .3 truecm
\begin{notation}\label{hgens}
\noindent We shall consider the following elements in $\hat{\cal U}[[u]]$:
$$x^+(u)=\sum_{r\geq 0}x_r^+u^r=\sum_{r\ge0} T^{-r}u^r(x_0^+),$$
$$x^-(u)=\sum_{r\geq 0}x_{r+1}^-u^r=\sum_{r\ge0} T^{r}u^r(x_1^-),$$
$$h_{\pm}(u)=\sum_{r\geq 1}(-1)^{r-1}{h_{\pm r}\over r}u^r,$$
$$\hat h_{\pm}(u)={\exp}(h_{\pm}(u))=\sum_{r\geq 0}\hat h_{\pm r} u^r.$$
\end{notation}
{\cal{V}}skip .3 truecm{}
\begin{remark} \label{nev}
\noindent Notice that $ev \circ T=ev$ and
$$ev(x^+(-u))=ev\left({{1}\over{1+T^{-1}u}}x_0^+{\cal R}ight)={e\over 1+u},$$
$$ev(x^-(-u))=ev\left({{T}\over{1+Tu}}x_0^-{\cal R}ight)={f\over 1+u},$$
$$ev(h_{\pm}(u))=h{{\cal R}m{ln}}(1+u),$$
$$ev(\hat h_{\pm}(u))=(1+u)^h.$$
\end{remark}
{\cal{V}}skip .3 truecm
\begin{remark} \label{stuz}
Here we list some obvious remarks.
\noindent i) $\hat{\cal U}z^{\pm}\subseteq\hat{\cal U}z\cap\hat{\cal U}^{\pm}$ and $\hat{\cal U}z$ is the $\Z$-subalgebra of $\hat{\cal U}$ generated by $\hat{\cal U}z^+\cup\hat{\cal U}z^-$;
\noindent ii) $\hat{\cal U}z^{\pm}$, $\hat{\cal U}z^{0,0}$, $\hat{\cal U}z^{0,\pm}$ and $\hat{\cal U}z^{0,\pm}\hat{\cal U}z^{0,0}=\hat{\cal U}z^{0,0}\hat{\cal U}z^{0,\pm}$ are integral forms respectively of
$\hat{\cal U}^{\pm}$, $\hat{\cal U}^{0,0}$, $\hat{\cal U}^{0,\pm}$ and $\hat{\cal U}^{0,\pm}\hat{\cal U}^{0,0}=\hat{\cal U}^{0,0}\hat{\cal U}^{0,\pm}$;
\noindent iii) $\hat{\cal U}z$ and $\hat{\cal U}z^{0,0}$ are
stable under $\sigma$, $\Omega$, $T^{\pm 1}$, $\lambda_m $ for all $m{\cal I}n\Z$;
\noindent iv) $\hat{\cal U}z^{\pm}$ is stable under $\sigma$, $T^{\pm 1}$, $\lambda_m $ for all $m{\cal I}n\Z$ and $\Omega(\hat{\cal U}z^{\pm})=\hat{\cal U}z^{\mp}$;
\noindent v) $\hat{\cal U}z^{0,\pm}$ is stable under $\sigma$, $T^{\pm 1}$ and $\Omega(\hat{\cal U}z^{0,\pm})=\lambda_{-1}(\hat{\cal U}z^{0,\pm})=\hat{\cal U}z^{0,\mp}$: more precisely
$$\sigma(\hat h_{\pm}(u))\!=\!\hat h_{\pm}(u)^{-1},\,\Omega(\hat h_{\pm}(u))\!=\!\lambda_{-1}(\hat h_{\pm}(u))\!=\!\hat h_{\mp}(u),\,T^{\pm 1}(\hat h_{\pm}(u))\!=\!\hat h_{\pm}(u);$$
vi)
for $m{\cal I}n\Z$
$$\lambda_m(\hat{\cal U}z^{0,\pm})\subseteq
\begin{cases}\hat{\cal U}z^{0,\pm}&{{\cal R}m{if}}\,m>0\cr \hat{\cal U}z^{0,\mp}&{{\cal R}m{if}}\, m<0\cr \hat{\cal U}z^{0,0}&{{\cal R}m{if}}\,m=0,
\end{cases}$$
thanks to v), to proposition {\cal R}ef{tmom} and to remarks {\cal R}ef{htc} and {\cal R}ef{nev}.
\end{remark}
{\cal{V}}skip.3 truecm
\begin{remark} \label{tmfv}
\noindent
The elements $\hat h_k$'s with $k>0$ generate the same $\Z$-subalgebra of $\hat{\cal U}$ as the elements $\Lambda_k$'s ($k\geq 0$) defined in \cite{HG}.
\noindent Indeed let
$$\sum_{n\geq 0}p_nu^n=P(u)=\hat h(-u)^{-1};$$ then
remarks {\cal R}ef{srinv},1,ii) and {\cal R}ef{funtorialita},iii) imply that $\Z[\hat h_k|k>0]=\Z[p_{n}|n>0]$; but
$${{{\cal R}m{d}}\over{{\cal R}m{d}}u}P(u)=P(u)\sum_{r>0}h_ru^{r-1},$$
that is
$$p_0=1,\ \ p_n={1\over n}\sum_{r=1}^nh_rp_{n-r}\ {\cal F}orall n>0,$$
hence $p_n=\Lambda_{n-1}$ ${\cal F}orall n\geq 0$.
\noindent On the other hand applying $\lambda_m$ we get
$$\lambda_m(p_0)=1,\ \ \lambda_m(p_n)={1\over n}\sum_{r=1}^nh_{rm}\lambda_m(p_{n-r}),$$ so that
$\lambda_m(p_n)=\lambda_m(\Lambda_{n-1})=\Lambda_{n-1}{(\xi(m))}$ (see \cite{HG}).
\end{remark}
{\cal{V}}skip .3 truecm
\begin{remark} \label{hrs}
\noindent Remark that for all $r{\cal I}n\Z$ the subalgebra of $\hat{{\cal F}rak sl_2}$ generated by $$\{x_r^+,x_{-r}^-,h_0+rc\}$$ maps isomorphically onto ${\cal F}rak sl_2$ through the evaluation homomorphism $ev$ (see formula {\cal R}ef{evaluation}). On the other hand for each $r{\cal I}n\Z$ there is an injection ${\cal U}({\cal F}rak sl_2)\to\hat{\cal U}$:
$$e\mapsto x_r^+,\,\,f\mapsto x_{-r}^-,\,\,h\mapsto h_0+rc.$$
In particular theorem {\cal R}ef{trdc}, implies that the elements ${h_0+rc\choose k}$ belong to $\hat{\cal U}z$ for all $r{\cal I}n\Z, k{\cal I}n\N$ (thus, remarking that the elements ${c\choose k}$'s are central and the example {\cal R}ef{binex}, we get that $\hat{\cal U}z^{0,0}\subseteq\hat{\cal U}z$)
and proposition {\cal R}ef{bdm} implies that
$\hat{\cal U}z^{0,0}\hat{\cal U}z^+$ and $\hat{\cal U}z^-\hat{\cal U}z^{0,0}$ are integral forms respectively of $\hat{\cal U}^{0,0}\hat{\cal U}^+$ and $\hat{\cal U}^-\hat{\cal U}^{0,0}$.
\end{remark}
{\cal{V}}skip .3 truecm
{\cal{V}}skip .3 truecm
\begin{proposition} \label{zzk}
\noindent The following identity holds in $\hat{\cal U}$:
$$\hat h_+(u)\hat h_-(v)=\hat h_-(v)(1-uv)^{-2c}\hat h_+(u).$$
\noindent $\hat{\cal U}z^0=\hat{\cal U}z^{0,-}\hat{\cal U}z^{0,0}\hat{\cal U}z^{0,+}$:
it
is an integral form of $\hat{\cal U}^0$.
\begin{proof}
\noindent Since $[h_r,h_s]=2r\delta_{r+s,0}c$,
the claim is proposition {\cal R}ef{heise} with $m\!=\!2$, $l\!=\!0$.
\end{proof}
\end{proposition}
{\cal{V}}skip .3 truecm
\begin{proposition}\label{pum}
\noindent The following identity holds in $\hat{\cal U}$:
\begin{equation} \label{xup}
x_0^+\hat h_+(u)=
\hat h_+(u)(1+T^{-1}u)^{-2}(x_0^+).
\end{equation}
\noindent Hence for all $k{\cal I}n\N$
\begin{equation} \label{xup2}
(x_0^+)^{(k)}\hat h_+(u)= \hat h_+(u)((1+T^{-1}u)^{-2}(x_0^+))^{(k)}.
\end{equation}
\begin{proof}
The claim follows from proposition {\cal R}ef{hh} with $m_1=2$, $m_d=0 \; {\cal F}orall d>1$, and from {\cal R}ef{divpoweq}.
\end{proof}
\end{proposition}
{\cal{V}}skip .3 truecm
\begin{remark}
The identity ({\cal R}ef{xup}) can be written as $$x_0^+\hat h_+(u)=\hat h_+(u){{{\cal R}m{d}}\over{{\cal R}m{d}}u}(ux^+(-u)).$$ Indeed
$$(1+T^{-1}u)^{-2}(x_0^+)=\sum_{r{\cal I}n\N}(-1)^r(r+1)x_r^+u^r={{{\cal R}m{d}}\over{{\cal R}m{d}}u}(ux^+(-u)).$$
\end{remark}
\begin{remark} Remark that the identity ({\cal R}ef{xup2}) is the affine version of
\begin{equation} \label{xup3}
e^{(k)}(1+u)^h=(1+u)^{h}\left({e\over (1+u)^2}{\cal R}ight)^{(k)}
\end{equation} (see equation ({\cal R}ef{xvh2})); indeed $ev$ maps ({\cal R}ef{xup2}) to ({\cal R}ef{xup3}).
\end{remark}
\begin{corollary}\label{cum}
\noindent $\hat{\cal U}z^+\hat{\cal U}z^{0,\pm}\subseteq\hat{\cal U}z^{0,\pm}\hat{\cal U}z^+$ and $\hat{\cal U}z^{\pm}\hat{\cal U}z^0=\hat{\cal U}z^0\hat{\cal U}z^{\pm}$.
\noindent Then $\hat{\cal U}z^0\hat{\cal U}z^+$ and $\hat{\cal U}z^-\hat{\cal U}z^0$ are integral forms respectively of $\hat{\cal U}^0\hat{\cal U}^+$ and $\hat{\cal U}^-\hat{\cal U}^0$.
\begin{proof}
Applying $T^{-r}$ to ({\cal R}ef{xup2}), we find that $(x_r^+)^{(k)}\hat h_+(u)\subseteq\hat h_+(u)\hat{\cal U}z^+[[u]]$ ${\cal F}orall r{\cal I}n\Z,k{\cal I}n\N$, hence $\hat{\cal U}z^+\hat h_+(u)\subseteq\hat h_+(u)\hat{\cal U}z^+[[u]]$ and $\hat{\cal U}z^+\hat{\cal U}z^{0,+}\subseteq\hat{\cal U}z^{0,+}\hat{\cal U}z^+$. From this, applying $\lambda_{-1}$ we get $\hat{\cal U}z^+\hat{\cal U}z^{0,-}\subseteq\hat{\cal U}z^{0,-}\hat{\cal U}z^+$, hence $\hat{\cal U}z^+\hat{\cal U}z^0\subseteq\hat{\cal U}z^0\hat{\cal U}z^+$ thanks to remark {\cal R}ef{hrs}. Finally applying $\Omega$ we obtain that $\hat{\cal U}z^0\hat{\cal U}z^-\subseteq\hat{\cal U}z^-\hat{\cal U}z^0$ and applying $\sigma$ we get the reverse inclusions.
\end{proof}
\end{corollary}
{\cal{V}}skip .3 truecm
We are now left to prove that $\hat{\cal U}z^+\hat{\cal U}z^-\subseteq\hat{\cal U}z^-\hat{\cal U}z^0\hat{\cal U}z^+$ and that $\hat{\cal U}z^0\subseteq\hat{\cal U}z$.
\noindent To this aim we study the commutation relations between $(x_r^+)^{(k)}$ and $(x_s^-)^{(l)}$ or equivalently between ${{\cal R}m{exp}}(x_r^+u)$ and ${{\cal R}m{exp}}(x_s^-v)$.
{\cal{V}}skip .3 truecm
\begin{remark}\label{exev}
\noindent Remark {\cal R}ef{hrs}, implies that ${{\cal R}m{exp}}(x_r^+u){{\cal R}m{exp}}(x_{-r}^-v){\cal I}n\hat{\cal U}z^-\hat{\cal U}z^0\hat{\cal U}z^+$ for all $r{\cal I}n\Z$.
\noindent In order to prove a similar result for
${{\cal R}m{exp}}(x_r^+u){{\cal R}m{exp}}(x_s^-v)$ when $r+s\noindenteq 0$ remark that in general $${{\cal R}m{exp}}(x_r^+u){{\cal R}m{exp}}(x_s^-v)=T^{-r}\lambda_{r+s}({{\cal R}m{exp}}(x_0^+u){{\cal R}m{exp}}(x_1^-v)),$$ so that remark {\cal R}ef{stuz},iv),v),vi) allows us to
reduce to the case $r=0$, $s=1$.
\noindent This case will turn out to be enough also to prove that $\hat{\cal U}z^0\subseteq\hat{\cal U}z$.
\end{remark}
{\cal{V}}skip .3 truecm
\begin{remark} \label{hev}
\noindent In the study of the commutation relations in $\hat{\cal U}z$ remark that
$$ev(\exp(x_0^+u)\exp(x_1^-v))=\exp(eu)\exp(fv)$$ and that straightening $\exp(x_0^+u)\exp(x_1^-v)$ through the triangular decomposition $\hat{\cal U}\cong\hat{\cal U}^-\otimes\hat{\cal U}^0\otimes\hat{\cal U}^+$ we get an element of
$\hat{\cal U}[[u,v]]$
whose coefficients involve $x_{r+1}^-,h_{r+1},\, x_r^+$ with $r\geq 0$ and whose image through $ev$ is
$$\exp\Big({fv\over 1+uv}\Big)(1+uv)^h\exp\Big({eu\over 1+uv}\Big)$$
(see remark {\cal R}ef{nev}).
\noindent Viceversa once we have such an expression for
$\exp(x_0^+u)\exp(x_1^-v)$ applying $T^{-r}\lambda_{r+s}$ we can deduce from it the identity ({\cal R}ef{cef}) and the expression for
$\exp(x_r^+u)\exp(x_s^-v)$ for all $r,s{\cal I}n\Z$ (also in the case $r+s=0$).
\noindent Remark that
$${{\cal R}m{exp}}(vx^-(-uv))\hat h_+(uv){{\cal R}m{exp}}(ux^+(-uv))$$
is an element of $\hat{\cal U}[[u,v]]$ which has the required properties (see remark {\cal R}ef{nev}) and belongs to $\hat{\cal U}z^-\hat{\cal U}z^0\hat{\cal U}z^+$.
{\cal{V}}skip .3 truecm
\end{remark}
Our aim is to prove that
$$\exp(x_0^+u)\exp(x_1^-v)=\exp(vx^-(-uv))\hat h_+(uv)\exp(ux^+(-uv)).$$
{\cal{V}}skip .3 truecm
\begin{lemma} \label{limt}
\noindent In $\hat{\cal U}[[u,v]]$ we have
$$x_0^+\exp(vx^-(-uv))=\exp(vx^-(-uv))\Big(x_0^++{{{\cal R}m{d}}h_+(uv)\over{{\cal R}m{d}}u}+{{{\cal R}m{d}}vx^-(-uv)\over{{\cal R}m{d}}u}\Big).$$
\begin{proof}
The claim follows from lemma {\cal R}ef{cle},iv) remarking that
$$[x_0^+,vx^-(-uv)]=v\sum_{r{\cal I}n\N}h_{r+1}(-uv)^r={{{\cal R}m{d}}\over{{\cal R}m{d}}u}\sum_{r{\cal I}n\N}{h_{r+1}\over r+1}(-1)^{r}(uv)^{r+1}={{{\cal R}m{d}}h_+(uv)\over{{\cal R}m{d}}u},$$
$$\Big[{{{\cal R}m{d}}h_+(uv)\over{{\cal R}m{d}}u},vx^-(-uv)\Big]=-2v^2\sum_{r,s{\cal I}n\N}x_{r+s+2}^+(-uv)^{r+s}=$$
$$=-2v^2\sum_{r{\cal I}n\N}(r+1)x_{r+2}^-(-uv)^r=2{{{\cal R}m{d}}vx^-(-uv)\over{{\cal R}m{d}}u}$$
and
$$\Big[{{{\cal R}m{d}}vx^-(-uv)\over{{\cal R}m{d}}u},vx^-(-uv)\Big]=0.$$
\end{proof}
\end{lemma}
{\cal{V}}skip .3 truecm
\begin{proposition}\label{exefh}
\noindent In $\hat{\cal U}[[u,v]]$ we have
$${{\cal R}m{exp}}(x_0^+u){{\cal R}m{exp}}(x_1^-v)={{\cal R}m{exp}}(vx^-(-uv))\hat h_+(uv){{\cal R}m{exp}}(ux^+(-uv)).$$
\begin{proof}
Let $F(u)={{\cal R}m{exp}}(vx^-(-uv))\hat h_+(uv){{\cal R}m{exp}}(ux^+(-uv))$.
It is clear that $F(0)={{\cal R}m{exp}}(x_1^-v)$, so that it is enough to prove that
$${{{\cal R}m{d}}\over{{\cal R}m{d}}u}F(u)=x_0^+F(u).$$
Remark that, thanks to the derivation rules (lemma {\cal R}ef{cle},ix)), to proposition {\cal R}ef{pum}, and to lemma {\cal R}ef{limt}, we have:
$${{{\cal R}m{d}}\over{{\cal R}m{d}}u}F(u)={{\cal R}m{exp}}(vx^-(-uv))\hat h_+(uv){{{{\cal R}m d}}\over{{\cal R}m{d}}u}(ux^+(-uv)){{\cal R}m{exp}}(ux^+(-uv))+$$
$$+{{\cal R}m{exp}}(vx^-(-uv))\Big({{{{\cal R}m d}}\over{{\cal R}m{d}}u}h_+(uv)+{{{{\cal R}m d}}\over{{\cal R}m{d}}u}(vx^-(-uv))\Big)\hat h_+(uv){{\cal R}m{exp}}(ux^+(-uv))=$$
$$={{\cal R}m{exp}}(vx^-(-uv))\Big(x_0^++{{{{\cal R}m d}}(h_+(uv)+vx^-(-uv))\over{{\cal R}m{d}}u}\Big)\hat h_+(uv){{\cal R}m{exp}}(ux^+(-uv))=$$
$$=x_0^+{{\cal R}m{exp}}(vx^-(-uv))\hat h_+(uv){{\cal R}m{exp}}(ux^+(-uv))=x_0^+F(u).$$
{\cal{V}}skip .3 truecm
\end{proof}
\end{proposition}
\begin{corollary} \label{tfin}
\noindent $\hat{\cal U}z^0\subseteq\hat{\cal U}z$.
\begin{proof}
That $\hat{\cal U}z^{0,+}\subseteq\hat{\cal U}z$ is a consequence of proposition {\cal R}ef{exefh} inverting the exponentials (see the proof theorem {\cal R}ef{trdc}), which implies also (applying $\Omega$) that
$\hat{\cal U}z^{0,-}\subseteq\hat{\cal U}z$; the claim then follows thanks to remark {\cal R}ef{hrs}.
\end{proof}
\end{corollary}
\begin{proposition} \label{strutmodulo}
$\hat{\cal U}z^-\hat{\cal U}z^0\hat{\cal U}z^+$ is a $\Z$-subalgebra of $\hat{\cal U}$ (hence $\hat{\cal U}z=\hat{\cal U}z^-\hat{\cal U}z^0\hat{\cal U}z^+$).
\begin{proof}
We want to prove that $\hat{\cal U}z^-\hat{\cal U}z^0\hat{\cal U}z^+$ (which is obviously a $\hat{\cal U}z^-$-module and, by corollary {\cal R}ef{cum}, a $\hat{\cal U}z^0$-module) is also a
$\hat{\cal U}z^+$-module, or equivalently that
$
\hat{\cal U}z^+\hat{\cal U}z^-\subseteq\hat{\cal U}z^-\hat{\cal U}z^0\hat{\cal U}z^+$.
\noindent By proposition {\cal R}ef{exefh} together with remark {\cal R}ef{exev}, formula {\cal R}ef{cef} and remark {\cal R}ef{hrs}
we have that $
y_+y_-{\cal I}n\hat{\cal U}z^-\hat{\cal U}z^0\hat{\cal U}z^+$ in the particular case when $y_+=(x_r^+)^{(k)}$ and $y_-=(x_s^-)^{(l)}$, thus we just need to perform the correct induction to deal with the general $y_{\pm}{\cal I}n\hat{\cal U}z^{\pm}$.
\noindent Remark that setting $$deg(x_r^{\pm})=\pm 1,\ \ deg(h_r)=deg(c)=0$$
induces a $\Z$-gradation on $\hat{\cal U}$ (since the relations defining $\hat{\cal U}$ are homogeneous) and on $\hat{\cal U}z$ (since its generators are homogeneous), which is preserved by $\sigma$, $T^{\pm 1}$ and $\lambda_m$ ${\cal F}orall m{\cal I}n\Z$; in particular it induces $\N$-gradations
$$\hat{\cal U}^{\pm}=\bigoplus_{k{\cal I}n\N}\hat{\cal U}_{\pm k}^{\pm}, \qquad \qquad \hat{\cal U}_{\Z}^{\pm}=\bigoplus_{k{\cal I}n\N}\hat{\cal U}_{\Z,\pm k}^{\pm}$$
with the properties that
$$\Omega(\hat{\cal U}_{\Z,\pm k}^{\pm})= \hat{\cal U}_{\Z,\mp k}^{\mp},$$
$$\hat{\cal U}_{\Z,k}^{+}=\sum_{n{\cal I}n\N{\cal A}top k_1+...+k_n=k}\Z(x_{r_1}^{+})^{(k_1)}\cdot ... \cdot(x_{r_n}^{+})^{(k_n)}=\sum_{r{\cal I}n\Z}\Z(x_r^{+})^{(k)}+\sum_{k_1,k_2>0{\cal A}top k_1+k_2=k}\hat{\cal U}_{\Z, k_1}^{+}\hat{\cal U}_{\Z, k_2}^{+},$$
$$\hat{\cal U}_{\Z,k}^{+}\hat{\cal U}z^0=\hat{\cal U}z^0\hat{\cal U}_{\Z,k}^{+}\ \ {{\cal R}m{(because}}\ \hat{\cal U}_{k}\hat{\cal U}^0=\hat{\cal U}^0\hat{\cal U}_{k}\ {{\cal R}m{and}}\ \hat{\cal U}z^{+}\hat{\cal U}z^0=\hat{\cal U}z^0\hat{\cal U}z^{+}{{\cal R}m{)}}$$
and
$$[\hat{\cal U}_k^+,\hat{\cal U}_{-l}^-]\subseteq\sum_{m>0}\hat{\cal U}_{-l+m}^-\hat{\cal U}^0\hat{\cal U}_{k-m}^+\ \ {\cal F}orall k,l{\cal I}n\N.$$
We want to prove that
\begin{equation}\label{uzruzs}\hat{\cal U}_{\Z,k}^+\hat{\cal U}_{\Z,-l}^-\subseteq\sum_{m\geq 0}\hat{\cal U}_{\Z,-l+m}^-\hat{\cal U}z^0\hat{\cal U}_{\Z,k-m}^+\ \ {\cal F}orall k,l{\cal I}n\N,\end{equation}
the claim being obvious for $k=0$ or $l=0$.
\noindent Suppose $k\noindenteq 0$, $l\noindenteq 0$ and the claim true for all $(\tilde k,\tilde l)\noindenteq (k,l)$ with $\tilde k\leq k$ and $\tilde l\leq l$ .
Then:
a) proposition {\cal R}ef{exefh} together with remark {\cal R}ef{exev}, formula ({\cal R}ef{cef}) and remark {\cal R}ef{hrs} imply that $$(x_r^+)^{(k)}(x_s^-)^{(l)}{\cal I}n\sum_{m\geq 0}\hat{\cal U}_{\Z,-l+m}^-\hat{\cal U}z^0\hat{\cal U}_{\Z,k-m}^+\ \ {\cal F}orall r,s{\cal I}n\Z;$$
b) if $k_1,k_2>0$ are such that $k_1+k_2=k$ or $l_1,l_2>0$ are such that $l_1+l_2=l$, then
$$\hat{\cal U}_{\Z,k_1}^+\hat{\cal U}_{\Z,k_2}^+\hat{\cal U}_{\Z,-l}^-\subseteq\sum_{m_2\geq 0}\hat{\cal U}_{\Z,k_1}^+\hat{\cal U}_{\Z,-l+m_2}^-\hat{\cal U}z^0\hat{\cal U}_{\Z,k_2-m_2}^+\subseteq$$
$$\subseteq\sum_{m_1,m_2\geq 0}\hat{\cal U}_{\Z,-l+m_2+m_1}^-\hat{\cal U}z^0\hat{\cal U}_{\Z,k_1-m_1}^+\hat{\cal U}z^0\hat{\cal U}_{\Z,k_2-m_2}^+=$$
$$=\sum_{m_1,m_2\geq 0}\hat{\cal U}_{\Z,-l+m_2+m_1}^-\hat{\cal U}z^0\hat{\cal U}_{\Z,k_1-m_1}^+\hat{\cal U}_{\Z,k_2-m_2}^+\subseteq\sum_{m\geq 0}\hat{\cal U}_{\Z,-l+m}^-\hat{\cal U}z^0\hat{\cal U}_{\Z,k-m}^+$$
and symmetrically applying $\Omega$
$$\hat{\cal U}_{\Z,k}^+\hat{\cal U}_{\Z,-l_1}^-\hat{\cal U}_{\Z,-l_2}^-=
\Omega(\hat{\cal U}_{\Z,l_2}^+\hat{\cal U}_{\Z,l_1}^+\hat{\cal U}_{\Z,-k}^-)\subseteq$$
$$\subseteq\Omega (\sum_{m\ge 0}\hat{\cal U}_{\Z,-k+m}^-\hat{\cal U}_{\Z}^0\hat{\cal U}_{\Z,l-m}^+)=\sum_{m\geq 0}\hat{\cal U}_{\Z,-l+m}^-\hat{\cal U}z^0\hat{\cal U}_{\Z,k-m}^+.$$
({\cal R}ef{uzruzs}) follows from a) and b).
\end{proof}
\end{proposition}
{\cal{V}}skip .3 truecm
\begin{theorem}\label{trm}
\noindent The $\Z$-subalgebra $\hat{\cal U}z$ of $\hat{\cal U}$ generated by $$\{(x_r^{\pm})^{(k)}|r{\cal I}n\Z,k{\cal I}n\N\}$$ is an integral form of $\hat{\cal U}$.
\noindent More precisely
$$\hat{\cal U}z\cong\hat{\cal U}z^-\otimes\hat{\cal U}z^0\otimes\hat{\cal U}z^+\cong\hat{\cal U}z^-\otimes\hat{\cal U}z^{0,-}\otimes\hat{\cal U}z^{0,0}\otimes\hat{\cal U}z^{0,+}\otimes\hat{\cal U}z^+$$
and a $\Z$-basis of $\hat{\cal U}z$ is given by the product
$$B^-B^{0,-}B^{0,0}B^{0,+}B^+$$
where $B^{\pm}$, $B^{0,\pm}$ and $B^{0,0}$ are the $\Z$-bases respectively of $\hat{\cal U}z^{\pm}$, $\hat{\cal U}z^{0,\pm}$ and $\hat{\cal U}z^{0,0}$ given as follows:
$$B^{\pm}=\Big\{{(\bf{x}}^{\pm})^{({\bf{k}})}=\prod_{r{\cal I}n\Z}(x_r^{\pm})^{(k_r)} \;|\; {\bf{k}}:\Z\to\N\,\, {{\cal R}m{is\, finitely\, supported}}\Big\}$$
$$B^{0,\pm}=\Big\{{\hat{{\bf{h}}}_{\pm}^{\bf{k}}}=\prod_{l{\cal I}n\Z_+}{\hat h_{\pm l}^{k_l}}\;|\; {\bf{k}}:\Z_+\to\N\,\,{{\cal R}m{is\, finitely\, supported}}\Big\}$$
$$B^{0,0}=\Big\{{h_0\choose k}{c\choose\tilde k} \;|\; k,\tilde k{\cal I}n\N\Big\}.$$
\end{theorem}
{\cal{V}}skip .5 truecm
\section{The integral form of $\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$ ($A_2^{(2)}$) }\label{ifa22}
{\cal{V}}skip .5truecm
\noindent In this section we describe the integral form $\tilde{\cal U}z$ of the enveloping algebra $\tilde{\cal U}$ of the Kac-Moody algebra of type $A_2^{(2)}$ generated by the divided powers of the Drinfeld generators $x_r^{\pm}$; unlike the untwisted case, this integral form is strictly smaller than the one (studied in \cite{DM}) generated by the divided powers of the Chevalley generators $e_0$, $e_1$, $f_0$, $f_1$ (see appendix {\cal R}ef{appendC}).
\noindent However, the construction of a $\Z$-basis of $\tilde{\cal U}z$ follows the idea of the analogous construction in the case $A_1^{(1)}$, seen in the previous section; this method allows us to overcome the technical difficulties arising in case $A_2^{(2)}$ - difficulties which seem otherwise overwhelming.
\noindent The commutation relations needed to our aim can be partially deduced from the case $A_1^{(1)}$: indeed, underlining some embeddings of $\hat{{\cal F}rak sl_2}$ into $\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$ (see remark {\cal R}ef{emgg}), the commutation relations in $\hat{\cal U}$ can be directly translated into a class of commutation relations in $\tilde{\cal U}$ (see corollary {\cal R}ef{czzp}, proposition {\cal R}ef{czzq} and the appendix {\cal R}ef{appendA} for more details).
\noindent Yet, there are some differences between $A_1^{(1)}$ and $A_2^{(2)}$.
\noindent First of all, the real (positive and negative) components of $\tilde{\cal U}$ are no more commutative (this is well known: it happens in all the affine cases different from $A_1^{(1)}$, as well as in all the finite cases different from $A_1$), hence the study of their integral form requires some - easy - additional observations (see lemma {\cal R}ef{zkp}).
\noindent The non commutativity of the real components of $\tilde{\cal U}$ makes the general commutation formula between the exponentials of positive and negative Drinfeld generators technically more complicated to compute and express than in the case of $\hat{\cal F}rak sl_2$; nevertheless, general and explicit compact formulas can be given in this case, too, always thanks to the exponential notation. As already seen, the simplification provided by the exponential approach lies essentially on lemma {\cal R}ef{cle},iv), which allows to perform the computations in $\tilde{\cal U}$ reducing to much simpler computations in $\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$, and even, thanks to the symmetries highlighted in definition {\cal R}ef{tto}, in the Lie subalgebra $L=\hat{{{\cal F}rak sl_3}}^{\!\!\chi} \cap({{\cal F}rak sl_3}\otimes{\mathbb{Q}}[t])\subseteq\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$ (see definition {\cal R}ef{sottoalgebraL}).
Recognizing a ${\mathbb{Q}}[w]$-module structure on each direct summand of $L=L^-\oplus L^0\oplus L^+$ and unifying them in a
${\mathbb{Q}}[w]$-module structure on $L$ (see definition {\cal R}ef{qwmodulo}) provides a further simplification in the notations: one could have done the same construction for $\hat{\cal F}rak sl_2$, but we have the feeling that in the case of $\hat{\cal F}rak sl_2$ it would be unnecessary and that on the other hand it is useful to present both formulations.
\noindent The most remarkable difference with respect to $A_1^{(1)}$ on one hand and to Mitzman's integral form on the other hand lies in the description of the generators of the imaginary (positive and negative) components; it can be surprising that they are not what one could expect: $\tilde{\cal U}z^{0,+}\noindenteq\Z^{(sym)}[h_r|r>0]$. More precisely (see remark {\cal R}ef{hdiversi} and theorem {\cal R}ef{trmA22})
$$\tilde{\cal U}z^{0,+}\noindentot\subseteq\Z^{(sym)}[h_r|r>0]\ \ {{\cal R}m{ and}}\ \ \Z^{(sym)}[h_r|r>0]\noindentot\subseteq\tilde{\cal U}z^{0,+};$$
as we shall show, we need to somehow ``deform'' the $h_r$'s (by changhing some of their signs) to get a basis of $\tilde{\cal U}z^{0,+}$ by the $(sym)$-construction (see definition {\cal R}ef{thuz}, example {\cal R}ef{rvsf} and remark {\cal R}ef{funtorialita}).
\noindent Notice that in order to prove that $\tilde{\cal U}z$ is an integral form of $\tilde{\cal U}$ and that $B$ is a $\Z$-basis of $\tilde{\cal U}z$ (theorem {\cal R}ef{trmA22}) it is not necessary to find explicitly all the commutation formulas between the basis elements.
In any case, for completeness, we shall collect them in the appendix {\cal R}ef{appendA}.
{\cal{V}}skip .3 truecm
\begin{definition} \label{a22}
\noindent ${\hat{{{\cal F}rak sl_3}}^{\!\!\chi}}$ (respectively $\tilde{\cal U}$) is the Lie algebra (respectively the associative algebra) over ${\mathbb{Q}}$ generated by $\{c,h_r,x_r^{\pm},X_{2r+1}^{\pm}|r{\cal I}n\Z\}$ with relations $$c\,\,\,{{\cal R}m{is\,\,central}}$$
$$[h_r,h_s]=\delta_{r+s,0}2r(2+(-1)^{r-1})c$$
$$[h_r,x_s^{\pm}]=\pm 2(2+(-1)^{r-1})x_{r+s}^{\pm}$$
$$[h_r,X_s^{\pm}]=\begin{cases}\pm 4X_{r+s}^{\pm}&{{\cal R}m{if}}\ 2|r
\\ 0&{{\cal R}m{if}}\ 2\noindentot|r
\end{cases}\leqno{(s\ {{\cal R}m{odd}})}$$
$$[x_r^{\pm},x_s^{\pm}]=\begin{cases}0&{{\cal R}m{if}}\ 2|r+s
\\ \pm(-1)^sX_{r+s}^{\pm}&{{\cal R}m{if}}\ 2\noindentot|r+s
\end{cases}$$
$$[x_r^{\pm},X_s^{\pm}]=[X_r^{\pm},X_s^{\pm}]=0$$
$$[x_r^+,x_s^-]=h_{r+s}
+\delta_{r+s,0}rc$$
$$[x_r^{\pm},X_s^{\mp}]=\pm(-1)^r4x_{r+s}^{\mp}\leqno{(s\ {{\cal R}m{odd}})}$$
$$[X_r^+,X_s^-]=8h_{r+s}
+4\delta_{r+s,0}rc\leqno{(r,s\ {{\cal R}m{odd}})}$$
\end{definition}
\noindent Notice that $\{x_r^+,x_r^-|r{\cal I}n\Z\}$ generates $\tilde{\cal U}$.
\noindent Moreover $\{c,h_r,x_r^{\pm},X_{2r+1}^{\pm}|r{\cal I}n\Z\}$ is a basis of $\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$; hence the ordered monomials in these elements (with respect to any total ordering of the basis) is a PBW-basis of $\tilde{\cal U}$.
\noindent $\tilde{\cal U}^+$, $\tilde{\cal U}^-$, $\tilde{\cal U}^0$ are the subalgebras of $\tilde{\cal U}$ generated respectively by $$\{x_r^+|r{\cal I}n\Z\},\ \{x_r^-|r{\cal I}n\Z\},\ \{c,h_r|r{\cal I}n\Z\}.$$
\noindent $\tilde{\cal U}^{\pm,0}$, $\tilde{\cal U}^{\pm,1}$ and $\tilde{\cal U}^{\pm,c}$ are the subalgebras of $\tilde{\cal U}^{\pm}$ generated respectively by
$$\{x_r^{\pm}|r\equiv 0\ (mod\ 2)\},\ \{x_r^{\pm}|r\equiv 1\ (mod\ 2)\}\ {{\cal R}m{and}}\ \{X_{2r+1}^{\pm}|r{\cal I}n\Z\}.$$
\noindent $\tilde{\cal U}^{0,+}$, $\tilde{\cal U}^{0,-}$, $\tilde{\cal U}^{0,0}$, are the subalgebras of $\tilde{\cal U}$ (of $\tilde{\cal U}^0$) generated respectively by $$\{h_r|r>0\},\ \{h_r|r<0\},\ \{c,h_0\}.$$
{\cal{V}}skip .3 truecm
\noindent The following remark is a consequence of trivial applications of the PBW-theorem to different subalgebras of $\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$.
\begin{remark} \label{tefp}
\noindent $\tilde{\cal U}^+$ and $\tilde{\cal U}^-$ are not commutative: $[x_0^+,x_1^+]=-X_1^+$ and $[x_0^-,x_1^-]=X_1^-$.
\noindent $\tilde{\cal U}^{\pm,0}$, $\tilde{\cal U}^{\pm,1}$ and $\tilde{\cal U}^{\pm,c}$ are (commutative) algebras of polynomials:
$$\tilde{\cal U}^{+,0}\cong{\mathbb{Q}}[x_{2r}^+\; | \; r {\cal I}n \Z],\ \
\tilde{\cal U}^{+,1}\cong{\mathbb{Q}}[x_{2r+1}^+\; | \; r {\cal I}n \Z],\ \
\tilde{\cal U}^{+,c}\cong{\mathbb{Q}}[X_{2r+1}^+\; | \; r {\cal I}n \Z],$$
$$\tilde{\cal U}^{-,0}\cong{\mathbb{Q}}[x_{2r}^-\; | \; r {\cal I}n \Z],\ \
\tilde{\cal U}^{-,1}\cong{\mathbb{Q}}[x_{2r+1}^-\; | \; r {\cal I}n \Z],\ \
\tilde{\cal U}^{-,c}\cong{\mathbb{Q}}[X_{2r+1} ^-\; | \; r {\cal I}n \Z].$$
We have the following ``triangular'' decompositions of $\tilde{\cal U}^{\pm}$:
$$\tilde{\cal U}^{\pm}\cong\tilde{\cal U}^{\pm,0}\otimes\tilde{\cal U}^{\pm,c}\otimes\tilde{\cal U}^{\pm,1}\cong\tilde{\cal U}^{\pm,1}\otimes\tilde{\cal U}^{\pm,c}\otimes\tilde{\cal U}^{\pm,0}$$
Remark that $\tilde{\cal U}^{\pm,c}$ is central in $\tilde{\cal U}^{\pm}$, so that the images in $\tilde{\cal U}^{\pm}$ of $\tilde{\cal U}^{\pm,0}\otimes\tilde{\cal U}^{\pm,c}$ and $\tilde{\cal U}^{\pm,1}\otimes\tilde{\cal U}^{\pm,c}$ are commutative subalgebras of
$\tilde{\cal U}$.
{\cal{V}}skip .15 truecm
\noindent $\tilde{\cal U}^0$ is not commutative: $[h_r,h_{-r}]\noindenteq 0$ if $r\noindenteq 0$;
\noindent $\tilde{\cal U}^{0,+}$, $\tilde{\cal U}^{0,-}$, $\tilde{\cal U}^{0,0}$, are (commutative) algebras of polynomials:
$$\tilde{\cal U}^{0,+}\cong{\mathbb{Q}}[h_r|r>0],\,\,\,\tilde{\cal U}^{0,-}\cong{\mathbb{Q}}[h_r|r<0],\,\,\,\tilde{\cal U}^{0,0}\cong{\mathbb{Q}}[c,h_0];$$
Moreover we have the following triangular decomposition of $\tilde{\cal U}^0$:
$$\tilde{\cal U}^0\cong\tilde{\cal U}^{0,-}\otimes\tilde{\cal U}^{0,0}\otimes\tilde{\cal U}^{0,+}\cong\tilde{\cal U}^{0,+}\otimes\tilde{\cal U}^{0,0}\otimes\tilde{\cal U}^{0,-}.$$
Remark that $\tilde{\cal U}^{0,0}$ is central in $\tilde{\cal U}^0$, so that the images in $\tilde{\cal U}^0$ of
$\tilde{\cal U}^{0,-}\otimes\tilde{\cal U}^{0,0}$ and $\tilde{\cal U}^{0,0}\otimes\tilde{\cal U}^{0,+}$ are commutative subalgebras of $\tilde{\cal U}$.
{\cal{V}}skip .15 truecm
\noindent Finally remark the triangular decomposition of $\tilde{\cal U}$:
$$\tilde{\cal U}\cong\tilde{\cal U}^-\otimes\tilde{\cal U}^0\otimes\tilde{\cal U}^+\cong\tilde{\cal U}^+\otimes\tilde{\cal U}^0\otimes\tilde{\cal U}^-,$$
and observe that the images of $\tilde{\cal U}^-\otimes\tilde{\cal U}^0$ and $\tilde{\cal U}^0\otimes\tilde{\cal U}^+$ are subalgebras of $\tilde{\cal U}$.
\end{remark}
{\cal{V}}skip .3 truecm
\begin{definition}\label{tto}
\noindent $\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$ and $\tilde{\cal U}$ are endowed with the following anti/auto/ho\-mo/morphisms:
\noindent $\sigma$ is the antiautomorphism defined on the generators by:
$$x_r^+\mapsto x_r^+,\,\,\,x_r^-\mapsto x_r^-,\,\,\,({\mathbb{R}}ightarrow X_r^{\pm}\mapsto-X_r^{\pm},\,\,\,h_r\mapsto-h_r,\,\,\,c\mapsto -c);$$
$\Omega$ is the antiautomorphism defined on the generators by:
$$x_r^+\mapsto x_{-r}^-,\,\,\,x_r^-\mapsto x_{-r}^+,\,\,\,({\mathbb{R}}ightarrow X_r^{\pm}\mapsto X_{-r}^{\mp},\,\,\,h_r\mapsto h_{-r},\,\,\,c\mapsto c);$$
\noindent $T$ is the automorphism defined on the generators by:
$$x_r^+\mapsto x_{r-1}^+,\,\,\,x_r^-\mapsto x_{r+1}^-,\,\,\,({\mathbb{R}}ightarrow X_r^{\pm}\mapsto -X_{r\mp 2}^{\pm},\,\,\,h_r\mapsto h_r-\delta_{r,0}c,\,\,\,c\mapsto c);$$
\noindent for all odd integer $m{\cal I}n\Z$, $\lambda_m$ is the homomorphism defined on the generators by:
$$x_r^+\mapsto x_{mr}^+,\,\,\,x_r^-\mapsto x_{mr}^-,\,\,\,({\mathbb{R}}ightarrow X_r^{\pm}\mapsto X_{mr}^{\pm},\,\,\,h_r\mapsto h_{mr},\,\,\,c\mapsto mc).$$
Remark that if $m$ is even $\lambda_m$ is not defined on $\tilde{\cal U}$, but it is still defined on $\tilde{\cal U}^{0,+}={\mathbb{Q}}[h_r|r>0]$.
\end{definition}
{\cal{V}}skip .3 truecm
\begin{remark}\label{tti}
\noindent $\sigma^2={{\cal R}m{id}}_{\tilde{\cal U}}$, $\Omega^2={{\cal R}m{id}}_{\tilde{\cal U}}$, $T$ is invertible of infinite order;
\noindent $\lambda_{-1}^2=\lambda_1={{\cal R}m{id}}_{\tilde{\cal U}}$; $\lambda_m$ is not invertible if $m\noindenteq\pm 1$.
\end{remark}
{\cal{V}}skip .3 truecm
\begin{remark}\label{tct}
$\sigma\Omega=\Omega\sigma$, $\sigma T=T\sigma$, $\Omega T=T\Omega$.
Moreover for all $m,n$ odd we have $\sigma\lambda_m=\lambda_m\sigma$, $\Omega\lambda_m=\lambda_m\Omega$,
$\lambda_m T^{\pm 1}=T^{\pm m}\lambda_m$, $\lambda_m\lambda_n=\lambda_{mn}$.
\end{remark}
{\cal{V}}skip .3 truecm
\begin{remark}\label{tsb}
\noindent $\sigma\big|_{\tilde{\cal U}^{\pm,0}}={{\cal R}m{id}}_{\tilde{\cal U}^{\pm,0}},\,\sigma\big|_{\tilde{\cal U}^{\pm,1}}={{\cal R}m{id}}_{\tilde{\cal U}^{\pm,1}},\,\sigma(\tilde{\cal U}^{\pm,c})=\tilde{\cal U}^{\pm,c},\,\sigma(\tilde{\cal U}^{0,\pm})=\tilde{\cal U}^{0,\pm}$, $\sigma(\tilde{\cal U}^{0,0})=\tilde{\cal U}^{0,0}$.
\noindent $\Omega(\tilde{\cal U}^{\pm,0})\!=\tilde{\cal U}^{\mp,0}$, $\Omega(\tilde{\cal U}^{\pm,1})\!=\tilde{\cal U}^{\mp,1}$, $\Omega(\tilde{\cal U}^{\pm,c})\!=\tilde{\cal U}^{\mp,c}$, $\Omega(\tilde{\cal U}^{0,\pm})\!=\tilde{\cal U}^{0,\mp}$, $\Omega\big|_{\tilde{\cal U}^{0,0}}\!\!=\!{{\cal R}m{id}}_{\tilde{\cal U}^{0,0}}$.
\noindent $T
(\tilde{\cal U}^{\pm,0})=\tilde{\cal U}^{\pm,1}$,
$T
(\tilde{\cal U}^{\pm,1})=\tilde{\cal U}^{\pm,0}$, $T
(\tilde{\cal U}^{\pm,c})=\tilde{\cal U}^{\pm,c}$, $T
\big|_{\tilde{\cal U}^{0,\pm}}={{\cal R}m{id}}_{\tilde{\cal U}^{0,\pm}},\,\,\,
T(\tilde{\cal U}^{0,0})=\tilde{\cal U}^{0,0}$.
\noindent For all odd $m{\cal I}n\Z$:
\noindent $\lambda_m(\tilde{\cal U}^{\pm,0})\subseteq\tilde{\cal U}^{\pm,0},\,\,\,\lambda_m(\tilde{\cal U}^{\pm,1})\subseteq\tilde{\cal U}^{\pm,1},\,\,\,\lambda_m(\tilde{\cal U}^{\pm,c})\subseteq\tilde{\cal U}^{\pm,c},\,\,\,\lambda_m(\tilde{\cal U}^{0,0})\subseteq\tilde{\cal U}^{0,0}$, $$\lambda_m(\tilde{\cal U}^{0,\pm})\subseteq
\begin{cases}\tilde{\cal U}^{0,\pm}&{{\cal R}m{if}}\,m>0\cr \tilde{\cal U}^{0,\mp}&{{\cal R}m{if}}\, m<0.
\end{cases}$$
\end{remark}
{\cal{V}}skip .3 truecm
\begin{definition}\label{sottoalgebraL}
$L$, $L^{\pm}$, $L^0$, $L^{\pm,0}$, $L^{\pm,1}$, $L^{\pm,c}$ are the Lie-subalgebras of $\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$ generated by:
$$L: \{x_r^+,x_r^-|r\geq 0\},$$
$$L^+: \{x_r^+|r\geq 0\},\ \ L^-: \{x_r^-|r\geq 0\},\ \ L^0: \{h_r|r\geq 0\},$$
$$L^{+,0}: \{x_{2r}^+|r\geq 0\},\ \ L^{+,1}: \{x_{2r+1}^+|r\geq 0\},\ \ L^{+,c}: \{X_{2r+1}^+|r\geq 0\}.$$
$$L^{-,0}: \{x_{2r}^-|r\geq 0\},\ \ L^{-,1}: \{x_{2r+1}^-|r\geq 0\},\ \ L^{-,c}: \{X_{2r+1}^-|r\geq 0\}.$$
\end{definition}
{\cal{V}}skip .3 truecm
\begin{remark}\label{Lbase}
$L^0$, $L^{\pm,0}$, $L^{\pm,1}$ and $L^{\pm,c}$ are commutative Lie-algebras; for these subalgebras of $L$ the Lie-generators given in definition {\cal R}ef{sottoalgebraL} are bases over ${\mathbb{Q}}$.
\noindent Moreover we have ${\mathbb{Q}}$-vector space decompositions
$$L=L^-\oplus L^0\oplus L^+,\ \ L^+=L^{+,0}\oplus L^{+,1}\oplus L^{+,c},\ \ L^-=L^{-,0}\oplus L^{-,1}\oplus L^{-,c}.$$
Finally remark that $L^+$ is $T^{-1}$-stable and that $L^-$ is $T$-stable; more in detail $T^{\mp 1}(L^{\pm,0})=L^{\pm,1}$, $T^{\mp 1}(L^{\pm,1})\subseteq L^{\pm,0}$ (so that $L^{\pm,0}$ and $L^{\pm,1}$ are $T^{\mp 2}$-stable); $L^{\pm,c}$ is $T^{\mp1}$-stable.
\end{remark}
{\cal{V}}skip .3 truecm
\begin{definition}\label{qwmodulo}
$L$ is endowed with the ${\mathbb{Q}}[w]$-module structure defined by $w\big|_{L^-}=T\big|_{L^-}$, $w\big|_{L^+}=T^{-1}\big|_{L^+}$, $w.h_r=h_{r+1}$ ${\cal F}orall r{\cal I}n\N$.
\end{definition}
{\cal{V}}skip .3 truecm
\begin{lemma}\label{qwconti}
Let $\xi_1(w),\xi_2(w){\cal I}n{\mathbb{Q}}[w][[u,v]]$. Then:
$$[\xi_1(w^2).x_0^{\pm},\xi_2(w^2).x_1^{\pm}]=\mp(\xi_1\xi_2)(-w).X_1^{\pm};\leqno{i)}$$
$$
[\xi_1(w).x_0^+,\xi_2(w).x_0^-]=(\xi_1\xi_2)(w).h_0;\leqno{ii)}$$
$$[\xi_1(w).x_0^+,\xi_2(w).X_1^-]=4\xi_1(-w)\xi_2(-w^2).x_1^-;\leqno{iii)}$$
$$[\xi_1(w).h_0,\xi_2(w).x_0^{\pm}]=\pm(4\xi_1(w)-2\xi_1(-w))\xi_2(w).x_0^{\pm}.\leqno{iv)}$$
\begin{proof}
The assertions are just a translation of the defining relations of $\tilde{\cal U}$:
$$[x_{2r}^{\pm},x_{2s+1}^{\pm}],\ \ [x_r^+,x_s^-],\ \ [x_r^+,X_{2s+1}^-],\ \ [h_r,x_s^{\pm}].$$
For iv), remark that
$$2(2+(-1)^{r-1})w^r=4w^r-2(-w)^r.$$
\end{proof}
\end{lemma}
{\cal{V}}skip .3 truecm
\begin{definition}\label{thuz}
\noindent Here we define some $\Z$-subalgebras of $\tilde{\cal U}$:
\noindent $\tilde{\cal U}z$ is the $\Z$-subalgebra of $\tilde{\cal U}
$ generated by $\{(x_r^{+})^{(k)},(x_r^{-})^{(k)}|r{\cal I}n\Z,k{\cal I}n\N\}$;
\noindent $\tilde{\cal U}z^+$ and $\tilde{\cal U}z^-$ are the $\Z$-subalgebras of $\tilde{\cal U}
$ (and of $\tilde{\cal U}z$) generated respectively by $\{(x_r^{+})^{(k)}|r{\cal I}n\Z,k{\cal I}n\N\}$, and $\{(x_r^{-})^{(k)}|r{\cal I}n\Z,k{\cal I}n\N\}$;
\noindent $\tilde{\cal U}z^{\pm,0}=\Z^{(div)}[x_{2r}^{\pm}|r{\cal I}n \Z]$;
\noindent $\tilde{\cal U}z^{\pm,1}=\Z^{(div)}[x_{2r+1}^{\pm}|r{\cal I}n \Z]$;
\noindent $\tilde{\cal U}z^{\pm,c}=\Z^{(div)}[X_{2r+1}^{\pm}|r{\cal I}n \Z]$;
\noindent $\tilde{\cal U}z^{0,0}=\Z^{(bin)}[h_0,c]$;
\noindent $\tilde{\cal U}z^{0,\pm}=\Z^{(sym)}[{\cal{V}}arepsilon_rh_{\pm r}|r>0]$ with ${\cal{V}}arepsilon_r=\begin{cases}1&{{\cal R}m{if}}\ 4\noindentot|r\\-1&{{\cal R}m{if}}\ 4|r\end{cases}$;
\noindent $\tilde{\cal U}z^0$ is the $\Z$-subalgebra of $\tilde{\cal U}$ generated by $\tilde{\cal U}z^{0,-}$, $\tilde{\cal U}z^{0,0}$ and $\tilde{\cal U}z^{0,+}$.
The notations are those of section {\cal R}ef{intgpl}.
\noindent In particular remark the definition of $\tilde{\cal U}z^{0,\pm}$ (where the ${\cal{V}}arepsilon_r$'s represent the necessary ``deformation'' announced in the introduction of this section, and discussed in details in proposition {\cal R}ef{convoluzioneintera}) and introduce the notation
$$\Z[\tilde h_k|\pm k>0]=\Z^{(sym)}[{\cal{V}}arepsilon_rh_{\pm r}|r>0]$$ where $$\tilde h_{\pm}(u)=\sum_{k{\cal I}n\N}\tilde h_{\pm k}u^k=\exp\Big(\sum_{r>0}(-1)^{r-1}{{\cal{V}}arepsilon_rh_{\pm r}\over r}u^r\Big).$$
\end{definition}
\begin{remark} \label{hdiversi}
It is worth underlining
that
$\tilde h_{+}(u)\noindenteq\hat h_{+}(u)$, where
$$\Z[\hat h_k|k>0]=\Z^{(sym)}[h_{r}|r>0],$$
that is
$$\hat h_{+}(u)=\sum_{k{\cal I}n\N}\hat h_{k}u^k=\exp\Big(\sum_{r>0}(-1)^{r-1}{h_{r}\over r}u^r\Big).$$
More precisely the $\Z$-subalgebras generated respectively by $\{\hat h_k|k>0\}$ and $\{\tilde h_k|k>0\}$ are different and not included in each other: indeed $\tilde h_1=\hat h_1$, $\tilde h_2=\hat h_2$, $\tilde h_3=\hat h_3$ but $\hat h_4\noindentot{\cal I}n\Z[\tilde h_k|k>0]$ and $\tilde h_4\noindentot{\cal I}n\Z[\hat h_k|k>0]$ (see propositions {\cal R}ef{convoluzioneintera} and {\cal R}ef{emmepiallaerre} and remark {\cal R}ef{vicelambda}).
\end{remark}
{\cal{V}}skip .3 truecm
\begin{remark}\label{whtilde}
Let $\xi(w){\cal I}n{\mathbb{Q}}[w][[u]]$; the elements $$\exp(\xi(w^2).x_0^{\pm}),\ \ \exp(\xi(w^2).x_1^{\pm})\ \ {{\cal R}m{and}}\ \ \exp(\xi(w).X_1^{\pm})$$ lie respectively in $\tilde{\cal U}z^{\pm,0}[[u]]$, $\tilde{\cal U}z^{\pm,1}[[u]]$ and $\tilde{\cal U}z^{\pm,c}[[u]]$ if and only if $\xi(w)$ has integral coefficients, that is if and only if $\xi(w){\cal I}n\Z[w][[u]]$ (see example {\cal R}ef{dvdpw}).
\noindent Remark also that $$\hat h_+(u)=\exp(\ln(1+wu).h_0),$$ while
$$\tilde h_+(u)=\exp\left(\big(\ln(1+uw)+{1\over 2}\ln(1-u^4w^4)\big).h_0{\cal R}ight).$$
\end{remark}
\noindent Before entering the study of the integral forms just introduced, we still dwell on the comparison between $\tilde h_+(u)$ and $\hat h_+(u)$, proving lemma {\cal R}ef{ometiomecap}, that will be useful later.
\begin{lemma} \label{emme}
For all $m{\cal I}n\Z\setminus\{0\}$ we have
$$(1+m^2u)^{{1\over m}}{\cal I}n 1+mu\Z[[u]].$$
\begin{proof}
$(1+\sum_{r> 0}a_ru^r)^m=1+m^2u$ implies
$$1+m^2u=1+m\sum_{r> 0}a_ru^r+\sum_{k>1}{m\choose k}\big(\sum_{r> 0}a_ru^r\big)^k.$$
Let us prove by induction on $s$ that $a_s{\cal I}n m\Z$:
if $s=1$ we have that $ma_1=m^2$;
if $s>1$ the coefficient $c_s$ of $u^s$ in $\sum_{k>1}{m\choose k}\big(\sum_{r> 0}a_ru^r\big)^k$ is a combination with integral coefficients of products of the $a_t$'s with $t<s$, which are all multiple of $m$. Then, since $k\geq 2$, $m^2|c_s$.
But $ma_s+c_s=0$, thus $m|a_s$.
\end{proof}
\end{lemma}
\begin{lemma} \label{ometiomecap}
Let us consider the integral forms
$\Z[\hat h_k|k>0]$ and $\Z[\tilde h_k|k>0]$ of ${\mathbb{Q}}[h_r|r>0]$ (see example {\cal R}ef{rvsf}, formula {\cal R}ef{dfhp}, definiton {\cal R}ef{thuz} and remark {\cal R}ef{hdiversi}); for all $m>0$ recall the ${\mathbb{Q}}$-algebra homomorphism $\lambda_m$ of ${\mathbb{Q}}[h_r|r>0]$ (see proposition {\cal R}ef{tmom}) and define the analogous homomorphism $\tilde \lambda_m$ mapping each ${\cal{V}}arepsilon_rh_r$ to ${\cal{V}}arepsilon_{mr}h_{mr}$ (of course $\Z[\tilde h_k|k>0]$ is $\tilde\lambda_m$-stable ${\cal F}orall m>0$).
\noindent We have that:
\noindent i) if $m$ is odd then $\tilde\lambda_m=\lambda_m$; in particular $\Z[\tilde h_k|k>0]$ is $\lambda_m$-stable;
\noindent ii) $\lambda_2(\hat h_k){\cal I}n\Z[\tilde h_l|l>0]$ for all $k>0$;
\noindent iii) $\hat h_+(4u)^{{1\over 2}}{\cal I}n\Z[\tilde h_k|k>0][[u]]$;
\begin{proof}
\noindent i) If $m$ is odd then $4|mr\Leftrightarrow 4|r$, hence ${\cal{V}}arepsilon_{mr}={\cal{V}}arepsilon_r$ ${\cal F}orall r>0$ and the claim follows from proposition {\cal R}ef{tmom}
\noindent ii) By proposition {\cal R}ef{tmom} we know that $\Z[\tilde h_k|k>0]$ is $\tilde\lambda_2$-stable; but
$$\tilde\lambda_2(\tilde h_+(u^2))=\exp\sum_{r>0}(-1)^{r-1}{{\cal{V}}arepsilon_{2r}h_{2r}\over r}u^{2r}=
\exp\sum_{r>0}{h_{2r}\over r}u^{2r}=\lambda_2(\hat h_+(-u^2))^{-1};$$ equivalently
$$\lambda_2(\hat h_+(u^2))=\tilde\lambda_2(\tilde h_+(-u^2))^{-1},$$
which implies the claim.
\noindent iii) Remark that $$\hat h_+(u)\tilde h(u)_+^{-1}=
\exp\left(-\sum_{r>0}{2h_{4r}\over 4r}u^{4r}{\cal R}ight)=
\tilde\lambda_4(\tilde h_+(-u^4))^{-{1\over 2}};$$
then
$$\hat h_+(4u)^{{1\over 2}}=\tilde h_+(4u)^{{1\over 2}}\tilde\lambda_4(\tilde h_+(-4^4u^4))^{-{1\over 4}}.$$
Since $\tilde h_+(4u){\cal I}n 1+4u\Z[\tilde h_k | k>0][[u]]$ and $\tilde\lambda_4(\tilde h_+(4^4u^4)){\cal I}n 1+4^4u\Z[\tilde h_k| k>0][[u]]$
we deduce from lemma {\cal R}ef{emme} that
$$\tilde h_+(4u)^{{1\over 2}},\ \ \tilde\lambda_4(\tilde h_+(4^4u^4))^{{1\over 4}}{\cal I}n\Z[\tilde h_k | k>0],$$ which implies the claim.
\end{proof}
\end{lemma}
{\cal{V}}skip .3 truecm
\begin{remark}\label{tzp}
\noindent It is obvious that
$\tilde{\cal U}z^{\pm,0}$, $\tilde{\cal U}z^{\pm,1}$, $\tilde{\cal U}z^{\pm,c}$, $\tilde{\cal U}z^{0,\pm}$ and $\tilde{\cal U}z^{0,0}$ are integral forms respectively of
$\tilde{\cal U}^{\pm,0}$, $\tilde{\cal U}^{\pm,1}$, $\tilde{\cal U}^{\pm,c}$, $\tilde{\cal U}^{0,\pm}$ and $\tilde{\cal U}^{0,0}$.
\noindent Hence by the commutativity properties we also have that $\tilde{\cal U}z^{\pm,0}\tilde{\cal U}z^{\pm,c}$ and $\tilde{\cal U}z^{\pm,1}\tilde{\cal U}z^{\pm,c}$ are integral forms respectively of $\tilde{\cal U}^{\pm,0}\tilde{\cal U}^{\pm,c}$ and $\tilde{\cal U}^{\pm,c}\tilde{\cal U}^{\pm,1}$.
\noindent Analogously $\tilde{\cal U}z^{0,0}\tilde{\cal U}z^{0,+}$ and $\tilde{\cal U}z^{0,-}\tilde{\cal U}z^{0,0}$ are integral forms respectively of $\tilde{\cal U}^{0,0}\tilde{\cal U}^{0,+}$ and $\tilde{\cal U}^{0,-}\tilde{\cal U}^{0,0}$.
\noindent We want to prove that:
\noindent 1) $\tilde{\cal U}z^0=\tilde{\cal U}z^{0,-}\tilde{\cal U}z^{0,0}\tilde{\cal U}z^{0,+}$, so that $\tilde{\cal U}z^0$ is an integral form of $\tilde{\cal U}^0$;
\noindent 2) $\tilde{\cal U}z^{\pm}=\tilde{\cal U}z^{\pm,1}\tilde{\cal U}z^{\pm,c}\tilde{\cal U}z^{\pm,0}$, so that $\tilde{\cal U}z^+$ and $\tilde{\cal U}z^-$ are integral forms respectively of $\tilde{\cal U}^+$ and $\tilde{\cal U}^-$;
\noindent 3) $\tilde{\cal U}z=\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$, so that $\tilde{\cal U}z$ is an integral form of $\tilde{\cal U}$.
{\cal{V}}skip .3 truecm
\noindent It is useful to evidentiate the behaviour of the $\Z$-subalgebras introduced above under the symmetries of $\tilde{\cal U}$.
\end{remark}
{\cal{V}}skip .3 truecm
\begin{proposition}\label{sttuz}
\noindent The following stability properties under the action of $\sigma$, $\Omega$, $T^{\pm 1}$ and $\lambda_m$ ($m {\cal I}n \Z$ odd) hold:
\noindent i) $\tilde{\cal U}z$, $\tilde{\cal U}z^+$ and $\tilde{\cal U}z^-$ are $\sigma$-stable, $T^{\pm 1}$-stable, $\lambda_m$-stable.
\noindent \,\,\,\,\,
$\tilde{\cal U}z$ is also $\Omega$-stable, while $\Omega(\tilde{\cal U}z^{\pm})=\tilde{\cal U}z^{\mp}$.
\noindent ii) $\tilde{\cal U}z^{+,0}$, $\tilde{\cal U}z^{+,1}$ and $\tilde{\cal U}z^{+,c}$ are $\sigma$-stable, $T^{\pm 2}$-stable, $\lambda_m$-stable.
\noindent \,\,\,\,\,
$\tilde{\cal U}z^{+,c}$ is also $T^{\pm 1}$-stable, while $T^{\pm 1}(\tilde{\cal U}z^{+,0})=\tilde{\cal U}z^{+,1}$.
\noindent \,\,\,\,\,
$\Omega(\tilde{\cal U}z^{+,0})=\tilde{\cal U}z^{-,0}$,
$\Omega(\tilde{\cal U}z^{+,1})=\tilde{\cal U}z^{-,1}$ and
$\Omega(\tilde{\cal U}z^{+,c})=\tilde{\cal U}z^{-,c}$.
\noindent iii) $\tilde{\cal U}z^{0,0}$, $\tilde{\cal U}z^{0,+}$ and $\tilde{\cal U}z^{0,-}$ are $\sigma$-stable and $T^{\pm 1}$-stable.
\noindent \,\,\,\,\,
$\tilde{\cal U}z^{0,0}$ is also $\Omega$-stable and $\lambda_m$-stable; $\Omega(\tilde{\cal U}z^{0,\pm})=\tilde{\cal U}z^{0,\mp}$; $\tilde{\cal U}z^{0,\pm}$ is $\lambda_m$-stable if $m>0$, while $\lambda_m(\tilde{\cal U}z^{0,\pm})\subseteq\tilde{\cal U}z^{0,\mp}$ if $m<0$.
\noindent iv)
$\tilde{\cal U}z^0$ is $\sigma$-stable, $\Omega$-stable, $T^{\pm 1}$-stable, $\lambda_m$-stable.
\begin{proof}
The only non-trivial assertion is the claim that $\tilde{\cal U}z^{0,+}$ is $\lambda_m$-stable when $m>0$, which was proved in lemma {\cal R}ef{ometiomecap},i).
\noindent The assertion about $\lambda_m(\tilde{\cal U}z^{0,\pm})$ in the general case follows using that $$\Omega(\tilde{\cal U}z^{0,\pm})=\tilde{\cal U}z^{0,\mp}=\lambda_{-1}(\tilde{\cal U}z^{0,\pm}),\ \ \lambda_m\Omega=\Omega\lambda_m\ \ {{\cal R}m{and}}\ \ \lambda_{-m}=\lambda_{-1}\lambda_m.$$
Remark that
$$\sigma(\tilde h_{\pm}(u))\!=\!\tilde h_{\pm}(u)^{-1},\,\Omega(\tilde h_{\pm}(u))\!=\!\lambda_{-1}(\tilde h_{\pm}(u))\!=\!\tilde h_{\mp}(u),\,T^{\pm 1}(\tilde h_{\pm}(u))\!=\!\tilde h_{\pm}(u).$$
\end{proof}
\end{proposition}
{\cal{V}}skip .3 truecm
\begin{remark} \label{autstab}
\noindent The stability properties described in proposition {\cal R}ef{sttuz} imply that:
\noindent i) $\sigma(\tilde{\cal U}z^{0,-}\tilde{\cal U}z^{0,0}\tilde{\cal U}z^{0,+})=\tilde{\cal U}z^{0,+}\tilde{\cal U}z^{0,0}\tilde{\cal U}z^{0,-}$; in particular
$$\tilde{\cal U}z^0=\tilde{\cal U}z^{0,-}\tilde{\cal U}z^{0,0}\tilde{\cal U}z^{0,+}\Leftrightarrow\tilde{\cal U}z^0=\tilde{\cal U}z^{0,+}\tilde{\cal U}z^{0,0}\tilde{\cal U}z^{0,-}.$$
\noindent ii) $T^{\pm 1}(\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,0})=\tilde{\cal U}z^{+,0}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,1}$ and $\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,0}$ is $T^{\pm 2}$-stable and $\lambda_m$-stable ($m{\cal I}n\Z$ odd); in particular:
$$\tilde{\cal U}z^+=\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,0}\Leftrightarrow\tilde{\cal U}z^+=\tilde{\cal U}z^{+,0}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,1}.$$
\noindent iii) $\tilde{\cal U}z^0\tilde{\cal U}z^+$ is $T^{\pm 1}$-stable and $\lambda_{-1}$-stable, and $\Omega(\tilde{\cal U}z^0\tilde{\cal U}z^+)=\tilde{\cal U}z^-\tilde{\cal U}z^0$; in particular
it is enough to prove that $(x_0^+)^{(k)}\tilde h_+(u){\cal I}n\tilde h_+(u)\tilde{\cal U}z^+[[u]]$ ${\cal F}orall k\geq 0$ in order to show that
$$(x_r^+)^{(k)}\tilde h_{\pm}(u){\cal I}n\tilde h_{\pm}(u)\tilde{\cal U}z^+[[u]], \tilde h_{\pm}(u)(x_r^-)^{(k)}{\cal I}n\tilde{\cal U}z^-[[u]]\tilde h_{\pm}(u)\ \ {\cal F}orall r{\cal I}n Z,\ k{\cal I}n\N,$$
or equivalently that $\tilde{\cal U}z^+\tilde{\cal U}z^0\subseteq\tilde{\cal U}z^0\tilde{\cal U}z^+$ and $\tilde{\cal U}z^0\tilde{\cal U}z^-\subseteq\tilde{\cal U}z^-\tilde{\cal U}z^0$.
\noindent iv) $\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$ is $T^{\pm 1}$-stable and $\lambda_m$-stable ($m{\cal I}n\Z$ odd); in particular if one shows that
$(x_0^+)^{(k)}(x_1^-)^{(l)}{\cal I}n\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$ it follows that
${\cal F}orall r,s{\cal I}n\Z$ such that $2\noindentot|r+s$ $$(x_r^+)^{(k)}(x_s^-)^{(l)}=T^{-r}\lambda_{r+s}((x_0^+)^{(k)}(x_1^-)^{(l)}){\cal I}n\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+.$$
\end{remark}
{\cal{V}}skip.3 truecm
\begin{proposition} \label{zkd}
\noindent The following identities hold in $\tilde{\cal U}$:
$$\hat h_+(u)\hat h_-(v)=\hat h_-(v)(1-uv)^{-4c}(1+uv)^{2c}\hat h_+(u)$$
and
$$\tilde h_+(u)\tilde h_-(v)=\tilde h_-(v)(1-uv)^{-4c}(1+uv)^{2c}\tilde h_+(u).$$
In particular $\tilde{\cal U}z^0=\tilde{\cal U}z^{0,-}\tilde{\cal U}z^{0,0}\tilde{\cal U}z^{0,+}$ and $\tilde{\cal U}z^0$ is an integral form of $\tilde{\cal U}^0$.
\begin{proof}
\noindent Since $[h_r,h_s]=[{\cal{V}}arepsilon_rh_r,{\cal{V}}arepsilon_sh_s]=\delta_{r+s,0}2r(2+(-1)^{r-1})c$,
the claim is proposition {\cal R}ef{heise} with $m=4$, $l=-2$.
\end{proof}
\end{proposition}
\begin{lemma} \label{zkp}
\noindent The following identity holds in $\tilde{\cal U}$ for all $r,s{\cal I}n\Z$:
$$\exp(x_{2r}^+u)\exp(x_{2s+1}^+v)=\exp(x_{2s+1}^+v)\exp(-X_{2r+2s+1}^+uv)\exp(x_{2r}^+u).$$
\begin{proof}
The claim is an immediate consequence of lemma {\cal R}ef{cle},vii), thanks to the relation $[x_{2r}^+,x_{2s+1}^+]=-X_{2r+2s+1}^+$.
\end{proof}
\end{lemma}
{\cal{V}}skip.3 truecm
\begin{corollary} \label{zppkd}
$\tilde{\cal U}z^+=\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,0}$; then $\tilde{\cal U}z^{\pm}$ is an integral form of $\tilde{\cal U}^{\pm}$.
\begin{proof}
\noindent From lemma {\cal R}ef{zkp} we deduce that:
\noindent
i) $(X_{2r+1}^+)^{(k)}{\cal I}n\tilde{\cal U}z^+$ ${\cal F}orall k{\cal I}n\N,r{\cal I}n\Z$; this implies that $$\tilde{\cal U}z^{+,c}\subseteq\tilde{\cal U}z^+\ \ {{\cal R}m{and}}\ \ \tilde{\cal U}z^{+,1}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,0}\subseteq\tilde{\cal U}z^+.$$
\noindent
ii) $\tilde{\cal U}z^{+,0}\tilde{\cal U}z^{+,1}\subseteq\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,0}$, hence $\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,0}$ is stable by left multiplication by $\tilde{\cal U}z^{+,0}$, hence by $\tilde{\cal U}z$ (which is generated by $\tilde{\cal U}z^{+,0}$ and $\tilde{\cal U}z^{+,1}$).
\noindent Since $1{\cal I}n\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,0}$, we deduce $\tilde{\cal U}z^+\subseteq\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,0}$, and the claim follows applying $\Omega$ (see proposition {\cal R}ef{sttuz},i)).
\end{proof}
\end{corollary}
\begin{proposition} \label{xtuzzero}
$\tilde{\cal U}z^+\tilde{\cal U}z^{0,0}\subseteq\tilde{\cal U}z^{0,0}\tilde{\cal U}z^+$; more precisely
$$(x_r^+)^{(k)}{h_0\choose l}={h_0-2k\choose l}(x_r^+)^{(k)}\ \ {\cal F}orall r{\cal I}n\Z,\ k,l{\cal I}n\N.$$
\begin{proof}
The claim follows by immediate application of {\cal R}ef{fru}.\end{proof}
\end{proposition}
{\cal{V}}skip .3 truecm
\begin{proposition} \label{zkopp}
In $\tilde{\cal U}$ the following holds:
\noindent i) $x_0^+\tilde h_+(u)=\tilde h_+(u)(1-uT^{-1})^6(1-u^2T^{-2})^{-3}(1+u^2T^{-2})(x_0^+)$;
\noindent ii) $(x_0^+)^{(k)}\tilde h_+(u){\cal I}n\tilde h_+(u)\tilde{\cal U}z^+[[u]]$ ${\cal F}orall k{\cal I}n\N$;
\noindent iii) $\tilde{\cal U}z^+\tilde{\cal U}z^{0,+}\subseteq\tilde{\cal U}z^{0,+}\tilde{\cal U}z^+$.
\begin{proof}
i)
We have that
$[{\cal{V}}arepsilon_r h_r, x_0^+]={\cal{V}}arepsilon_r2(2+(-1)^{r-1})x_r^+$
and $${\cal{V}}arepsilon_r2(2+(-1)^{r-1})=\begin{cases}6&{{\cal R}m{if}}\ 2\noindentot|r\\2=6-4&{{\cal R}m{if}}\ 2|r\ {{\cal R}m{and}}\ 4\noindentot|r\\-2=6-4-4&{{\cal R}m{if}}\ 4|r,\end{cases}$$
hence
proposition {\cal R}ef{hh} applies, with $m_1=6$, $m_2=-2$, $m_4=-1$ and implies that
$$x_0^+\tilde h_+(u)=\tilde h_+(u)(1+uT^{-1})^{-6}(1-u^2T^{-2})^{2}(1-u^4T^{-4})(x_0^+)=$$
$$=\tilde h_+(u)(1-uT^{-1})^6(1-u^2T^{-2})^{-3}(1+u^2T^{-2})(x_0^+).$$
\noindent ii) Let us underline that $(1-u^2)^{-3}(1+u^2){\cal I}n\Z[[u^2]]$, hence from the coefficients of $(1-u)^6$ it can be deduced that
$$(1-u)^6(1-u^2)^{-3}(1+u^2){\cal I}n\Z[[u^2]]+2u\Z[[u^2]]$$
and
$$x_0^+\tilde h_+(u)=\tilde h_+(u)\sum_{r\geq 0}a_rx_r^+u^r\ \ {{\cal R}m{with}}\ \ a_r{\cal I}n\Z\ {\cal F}orall r\geq 0\ {{\cal R}m{and}}\ 2|a_r\ {\cal F}orall r\ {{\cal R}m{odd}}.$$
If we define $y_0=\sum_{r\ge0}a_{2r}x_{2r}^+u^{2r}$, $y_1={1\over 2}\sum_{r \ge0}a_{2r+1}x_{2r+1}^+u^{2r+1}$ we have that, thanks to lemma {\cal R}ef{cle},viii)
$$\exp(x_0^+v)\tilde h_+(u)=\tilde h_+(u)\exp((y_0+2y_1)v)=$$
$$=\tilde h_+(u)\exp(2y_1v)\exp([y_0,y_1]v^2)\exp(y_0v){\cal I}n\tilde h_+(u)\tilde{\cal U}z^+[[u,v]]$$
from which the claim follows thanks to remark {\cal R}ef{whtilde}.
\noindent iii) From the $T^{\pm 1}$-stability of $\tilde{\cal U}z^+$ and the fact that $T^{\pm 1}\big|_{\tilde{\cal U}z^{0,+}}=id$ we deduce that for all $r{\cal I}n\Z,\ k{\cal I}n\N$
$$(x_r^+)^{(k)}\tilde h_+(u){\cal I}n\tilde h_+(u)\tilde{\cal U}z^+[[u]].$$
The claim follows recalling that the $(x_r^+)^{(k)}$'s generate $\tilde{\cal U}z^+$ and the $\tilde h_k$'s generate $\tilde{\cal U}z^{0,+}$.
\end{proof}
\end{proposition}
\begin{corollary} \label{czzp2}
$\tilde{\cal U}z^{\pm}\tilde{\cal U}z^0=\tilde{\cal U}z^0\tilde{\cal U}z^{\pm}$.
In particular $\tilde{\cal U}z^0\tilde{\cal U}z^+$ and $\tilde{\cal U}z^-\tilde{\cal U}z^0$ are
subalgebras of $\tilde{\cal U}z$.
\begin{proof}
$\tilde{\cal U}z^+\tilde{\cal U}z^{0,0}\subseteq \tilde{\cal U}z^{0,0}\tilde{\cal U}z^+$ (see proposition {\cal R}ef{xtuzzero}) and $\tilde{\cal U}z^+\tilde{\cal U}z^{0,+}\subseteq \tilde{\cal U}z^{0,+}\tilde{\cal U}z^+$ (see {\cal R}ef{zkopp},iii)); moreover
$$\tilde{\cal U}z^+\tilde{\cal U}z^{0,-}=\lambda_{-1}(\tilde{\cal U}z^+\tilde{\cal U}z^{0,+})\subseteq\lambda_{-1}(\tilde{\cal U}z^{0,+}\tilde{\cal U}z^+)=\tilde{\cal U}z^{0,-}\tilde{\cal U}z^+.$$
Hence $\tilde{\cal U}z^+\tilde{\cal U}z^0\subseteq\tilde{\cal U}z^0\tilde{\cal U}z^+$.
\noindent Applying $\sigma$ we get the reverse inclusion and applying $\Omega$ we obtain the claim for $\tilde{\cal U}z^-$.
\end{proof}
\end{corollary}
\noindent Now that we have described $\tilde{\cal U}z^0$, $\tilde{\cal U}z^{\pm}$ and the $\Z$-subalgebras generated by $\tilde{\cal U}z^0$ and $\tilde{\cal U}z^+$ (respectively by $\tilde{\cal U}z^0$ and $\tilde{\cal U}z^-$), in order to show that $\tilde{\cal U}z=
\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$ it remains to prove that $$\tilde{\cal U}z^0\subseteq\tilde{\cal U}z\ \ {{\cal R}m{and}}\ \ \tilde{\cal U}z^+\tilde{\cal U}z^-\subseteq
\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+.$$
Before attaching this problem in its generality it is worth evidentiating the existence of some copies of $\hat{\cal F}rak sl_2$ inside $\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$, hence of embeddings $\hat{\cal U}\hookrightarrow\tilde{\cal U}$, that induce some useful commutation relations in $\tilde{\cal U}$.
{\cal{V}}skip .3 truecm
\begin{remark}\label{emgg}
The ${\mathbb{Q}}$-linear maps $f,F:\hat{\cal F}rak sl_2\to\hat{{{\cal F}rak sl_3}}^{\!\!\chi}$ defined by
$$x_r^{\pm}\mapsto x_{2r}^{\pm},\ \ h_r\mapsto h_{2r},\ \ c\mapsto 2c\leqno{f:}$$
$$x_r^{\pm}\mapsto {X_{2r\mp 1}^{\pm}\over 4}
,\ \ h_r\mapsto {h_{2r}\over 2}-\delta_{r,0}{c\over 4},\ \ c\mapsto {c\over 2}\leqno{F:}$$
are Lie-algebra homomorphisms, obviously injective, inducing embeddings $f,F:\hat{\cal U}\hookrightarrow\tilde{\cal U}$.
\end{remark}
\begin{corollary}\label{czzp}
$f(\hat{\cal U}z^{0,0})\subseteq\tilde{\cal U}z^{0,0}\subseteq\tilde{\cal U}z$.
\begin{proof}
Since $f(\hat{\cal U}z^{\pm})\subseteq\tilde{\cal U}z^{\pm,0}\subseteq\tilde{\cal U}z$ we have that $f$ maps $\hat{\cal U}z$ (which is generated by $\hat{\cal U}z^+$ and $\hat{\cal U}z^-$) into $\tilde{\cal U}z$; in particular
$f(\hat{\cal U}z^{0,0})\subseteq\tilde{\cal U}z$. But
$$f(\hat{\cal U}z^{0,0})=f(\Z^{(bin)}[h_0,c])=\Z^{(bin)}[h_0,2c],$$ thus $\Z^{(bin)}[h_0,2c]\subseteq\tilde{\cal U}z.$
Since $\tilde{\cal U}z$ is $T$-stable and $T(h_0)=h_0-c$ we also have $\Z^{(bin)}[h_0-c]\subseteq\tilde{\cal U}z$, so that
$$f(\hat{\cal U}z^{0,0})=\Z^{(bin)}[h_0,2c]\subseteq\Z^{(bin)}[h_0,c]=\Z^{(bin)}[h_0,h_0-c]\subseteq\tilde{\cal U}z$$
which is the claim because $\tilde{\cal U}z^{0,0}=\Z^{(bin)}[h_0,c]$.
\end{proof}
\end{corollary}
\begin{proposition} \label{czzq}
$\tilde{\cal U}z^{+,0}\tilde{\cal U}z^{-,0}\subseteq\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$ and $\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{-,1}\subseteq\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$.
\begin{proof}
$\tilde{\cal U}z^{+,0}\tilde{\cal U}z^{-,0}=f(\hat{\cal U}z^+\hat{\cal U}z^-)\subseteq f(\hat{\cal U}z^-\hat{\cal U}z^0\hat{\cal U}z^+)=\tilde{\cal U}z^{-,0}f(\hat{\cal U}z^0)\tilde{\cal U}z^{+,0}$: we want to prove that
$f(\hat{\cal U}z^0)=f(\hat{\cal U}z^{0,-}\hat{\cal U}z^{0,0}\hat{\cal U}z^{0,+})\subseteq\tilde{\cal U}z^0$.
\noindent By corollary {\cal R}ef{czzp} $f(\hat{\cal U}z^{0,0})\subseteq\tilde{\cal U}z^{0,0}$.
\noindent On the other hand
$$f(\hat{\cal U}z^{0,+})=f(\Z^{(sym)}[h_r|r>0])=\Z^{(sym)}[h_{2r}|r>0]=\lambda_2(\Z[\hat h_k|k>0]),$$
hence $f(\hat{\cal U}z^{0,+})\subseteq\Z[\tilde h_k|k>0]=\tilde{\cal U}z^{0,+}$ thanks to lemma {\cal R}ef{ometiomecap} ii).
\noindent Finally remark that $f\Omega=\Omega f$, thus $f(\hat{\cal U}z^{0,-})=f\Omega(\hat{\cal U}z^{0,+})\subseteq\Omega\tilde{\cal U}z^{0,+}\subseteq\tilde{\cal U}z^{0,-}$.
\noindent It follows that $f(\hat{\cal U}z^0)\subseteq\tilde{\cal U}z^0$ and $\tilde{\cal U}z^{+,0}\tilde{\cal U}z^{-,0}\subseteq\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$.
\noindent The assertion for $\tilde{\cal U}z^{\pm,1}$ follows applying $T$, see proposition {\cal R}ef{sttuz},i),ii) and iv).
\end{proof}
\end{proposition}
\subsection{$\exp(x_0^+u)\exp(x_1^-v)$ and $\tilde{\cal U}z^{0,+}$: here comes the hard work}\label{sottosezione}
\noindent We shall deal with the commutation between $\tilde{\cal U}z^{+,0}$ and $\tilde{\cal U}z^{-,1}$ following the strategy already proposed for $\hat{\cal U}z$ and recalling remark {\cal R}ef{autstab},iv): finding an explicit expression involving suitable exponentials for
$$\exp(x_0^+u)\exp(x_1^-v){\cal I}n\tilde{\cal U}^{-,1}\tilde{\cal U}^{-,c}\tilde{\cal U}^{-,0}\tilde{\cal U}^{0,+}\tilde{\cal U}^{+,1}\tilde{\cal U}^{+,c}\tilde{\cal U}^{+,0}[[u,v]]$$
and proving that all its coefficients lie in $$\tilde{\cal U}z^{-,1}\tilde{\cal U}z^{-,c}\tilde{\cal U}z^{-,0}\tilde{\cal U}z^{0,+}\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{+,0}\subseteq\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+.$$
\noindent Since here there are more factors involved, the computation is more complicated than in the case of $\hat{\cal F}rak sl_2$ and the simplification provided by this approach is even more evident. On the other hand it is not immediately clear from the commutation formula that our element belongs to $\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$, or better: the factors relative to the (negative, resp. positive) real root vectors will be evidently elements of $\tilde{\cal U}z^-$, resp. $\tilde{\cal U}z^+$, while proving that the null part lies indeed in $\tilde{\cal U}z^0$ is not evident at all and will require a deeper inspection (see remark {\cal R}ef{hhdehh}, lemma {\cal R}ef{contidp} and corollary {\cal R}ef{hcappucciod}).
\noindent As we shall see, in order to complete the proof that $\tilde{\cal U}z^{0,+}\subseteq\tilde{\cal U}z$ (see proposition {\cal R}ef{samealg}), it is useful to compute also $\exp(x_0^+u)\exp(X_1^-v)$. The two computations ($\exp(x_0^+u)\exp(yv)$ with $y=x_1^-$ or $y=X_1^-$) are essentially the same and will be performed together (see the considerations from remark {\cal R}ef{gesponenziale} to lemma {\cal R}ef{wderivcomm}, of which the propositions {\cal R}ef{xmenogrande} and {\cal R}ef{x0piux1meno} are straightforward applications); even though $\exp(x_0^+u)\exp(x_1^-v)$ presents more symmetries than $\exp(x_0^+u)\exp(X_1^-v)$ (see remark {\cal R}ef{gtg},iii)), its interpretation will require more work, since it is not evident the connection with $\tilde{\cal U}z^{0,+}$, as just mentioned.
{\cal{V}}skip .3 truecm
\begin{remark}\label{gesponenziale}
Let $G=G(u,v){\cal I}n\tilde{\cal U}[[u,v]]$ and $y{\cal I}n L^-$ (see definition {\cal R}ef{sottoalgebraL}); then $$G(u,v)=\exp(x_0^+u)\exp(yv)$$ if and only if the following two conditions hold:
a) $G(0,v)=\exp(yv)$;
b) ${d\over du}G(u,v)=x_0^+G(u,v)$.
\end{remark}
{\cal{V}}skip .3 truecm
\begin{notation}\label{gabg}
In the following $G^-$, $G^0$, $G^+$ will denote elements of $\tilde{\cal U}[[u,v]]$ of the form
$$G^-=\exp({{\cal A}lpha_-})\exp({\beta_-})\exp({\gamma_-}),$$
$$G^+=\exp(\gamma_+)\exp(\beta_+)\exp({\cal A}lpha_+),$$
$$G^0=\exp(\eta)$$
with
$${\cal A}lpha_-{\cal I}n{\mathbb{Q}}[w^2][[u,v]].x_1^-,\ \beta_-{\cal I}n{\mathbb{Q}}[w][[u,v]].X_1^-,\ \gamma_-{\cal I}n{\mathbb{Q}}[w^2][[u,v]].x_0^-,$$
$${\cal A}lpha_+{\cal I}n{\mathbb{Q}}[w^2][[u,v]].x_0^+,\ \beta_+{\cal I}n{\mathbb{Q}}[w][[u,v]].X_1^+,\ \gamma_+{\cal I}n{\mathbb{Q}}[w^2][[u,v]].x_1^+,$$
$$\eta{\cal I}n w{\mathbb{Q}}[w][[u,v]].h_0.$$
$G(u,v)$ will denote the element $G(u,v)=G=G^-G^0G^+$.
\end{notation}
\vskip .3 truecm
\begin{remark}\label{gtg}
Let $G=G^-G^0G^+\in\tilde{\cal U}[[u,v]]$ be as in notation \ref{gabg}. Then:
\noindent i) Of course $${dG\over du}={dG^-\over du}G^0G^++G^-{dG^0\over du}G^++G^-G^0{dG^+\over du}$$
where, considering the commutativity properties, we have that
$${dG^-\over du}=\exp({\alpha_-})\exp({\beta_-}){d(\alpha_-+\beta_-+\gamma_-)\over du}\exp({\gamma_-}),$$
$${dG^+\over du}=\exp(\gamma_+){d(\alpha_++\beta_++\gamma_+)\over du}\exp(\beta_+)\exp({\alpha_+}),$$
$${dG^0\over du}={d\eta\over du}G^0
.$$
\noindent ii) If moreover $G=\exp(x_0^+u)\exp(yv)$ with $y\in L^-$, property b) of remark \ref{gesponenziale} translates into
$$x_0^+G=\exp({\alpha_-})\exp({\beta_-}){d(\alpha_-+\beta_-+\gamma_-)\over du}\exp({\gamma_-})G^0G^++$$
$$+G^-{d\eta\over du}G^0G^++G^-G^0\exp(\gamma_+){d(\alpha_++\beta_++\gamma_+)\over du}\exp(\beta_+)\exp({\alpha_+}).$$
\noindent iii) If in particular $y=x_1^-$, then $T\lambda_{-1}\Omega(G(u,v))=G(v,u)$; hence $$G^-(u,v)=T\lambda_{-1}\Omega(G^+)(v,u),$$
$$\alpha_-(u,v)=T\lambda_{-1}\Omega(\alpha_+)(v,u),$$
$$\beta_-(u,v)=T\lambda_{-1}\Omega(\beta_+)(v,u),$$
$$\gamma_-(u,v)=T\lambda_{-1}\Omega(\gamma_+)(v,u),$$
$$\eta(u,v)=\eta(v,u).$$
Observe that $T\lambda_{-1}\Omega(X_{2r+1}^+)=-X_{2r+3}^-$ $\forall r\in\Z$.
\end{remark}
\vskip .3 truecm
\noindent The following lemma is based on lemma \ref{cle}, iv) and on the defining relations of $\tilde{\cal U}$ (definition \ref{a22}).
\begin{lemma}\label{derivcomm}
With the notations fixed in \ref{gabg} we have that:
$$x_0^+\exp(\alpha_-)=\leqno{i)}$$
$$=\exp(\alpha_-)\left(x_0^++[x_0^+,\alpha_-]+{1\over 2}[[x_0^+,\alpha_-],\alpha_-]+{1\over 6}[[[x_0^+,\alpha_-],\alpha_-],\alpha_-]\right);$$
$$x_0^+\exp(\alpha_-)\exp(\beta_-)=\exp(\alpha_-)\exp(\beta_-)\cdot\leqno{ii)}$$ $$
\cdot\left(x_0^++[x_0^+,\alpha_-]+{1\over 2}[[x_0^+,\alpha_-],\alpha_-]+{1\over 6}[[[x_0^+,\alpha_-],\alpha_-],\alpha_-]+[x_0^+,\beta_-]\right);$$
$$(x_0^++[x_0^+,\alpha_-])\exp(\gamma_-)=\leqno{iii)}$$
$$=\exp(\gamma_-)\left(x_0^++[x_0^+,\alpha_-]+[x_0^+,\gamma_-]\right)+$$
$$+\left([[x_0^+,\alpha_-],\gamma_-]+{1\over 2}[[x_0^+,\gamma_-],\gamma_-]-{1\over 2}[[[x_0^+,\alpha_-],\gamma_-],\gamma_-]\right)\exp(\gamma_-);$$
iv) $x_0^+\exp(\eta)=\exp(\eta)(y_0+y_1)$ with $$y_0\in{\mathbb{Q}}[w^2][[u,v]].x_0^+,\ \ y_1\in w{\mathbb{Q}}[w^2][[u,v]].x_0^+;$$
$$(y_0+y_1)\exp(\gamma_+)=\exp(\gamma_+)(y_0+y_1+[y_0,\gamma_+]).\leqno{v)}$$
vi) In conclusion
$$x_0^+G={dG\over du}$$ if and only if the following relations hold:
$${d\alpha_-\over du}=[x_0^+,\beta_-]+[[x_0^+,\alpha_-],\gamma_-]$$
$${d\beta_-\over du}={1\over 6}[[[x_0^+,\alpha_-],\alpha_-],\alpha_-]-{1\over 2}[[[x_0^+,\alpha_-],\gamma_-],\gamma_-]$$
$${d\gamma_-\over du}={1\over 2}[[x_0^+,\alpha_-],\alpha_-]+{1\over 2}[[x_0^+,\gamma_-],\gamma_-]$$
$${d\eta\over du}=[x_0^+,\gamma_-]+[x_0^+,\alpha_-]$$
$${d\alpha_+\over du}=y_0$$
$${d\beta_+\over du}=[y_0,\gamma_+]$$
$${d\gamma_+\over du}=y_1.$$
\begin{proof}
i)-v) are straightforward repeated applications of lemma \ref{cle},iv), remarking that:
\noindent i) and ii): $[[[x_0^+,\alpha_-],\alpha_-],\alpha_-]\in\tilde{\cal U}^{-,c}[[u,v]]$, hence it commutes with both $\alpha_-$ and $\beta_-$ (which are in $\tilde{\cal U}^-[[u,v]]$);
\noindent ii): $\beta_-\in\tilde{\cal U}^{-,c}[[u,v]]$, hence it commutes also with $[[x_0^+,\alpha_-],\alpha_-]$ and $[x_0^+,\beta_-]$ (which belong to $\tilde{\cal U}^-[[u,v]]$) and with $[x_0^+,\alpha_-]$ (because $[h_{2r+1},\tilde{\cal U}^{-,c}]=0$ $\forall r\in\Z$);
\noindent iii): $[[x_0^+,\gamma_-],\gamma_-]$ and $[[[x_0^+,\alpha_-],\gamma_-],\gamma_-]$ belong respectively to $\tilde{\cal U}^{-,0}[[u,v]]$ and $\tilde{\cal U}^{-,c}[[u,v]]$, so that they commute with $\gamma_-\in\tilde{\cal U}^{-,0}[[u,v]]$; the claim follows from the identities
$$(x_0^++[x_0^+,\alpha_-])\exp(\gamma_-)=\exp(\gamma_-)\cdot
\Big(x_0^++[x_0^+,\alpha_-]+$$
$$+[x_0^+,\gamma_-]+[[x_0^+,\alpha_-],\gamma_-]+{1\over 2}[[x_0^+,\gamma_-],\gamma_-]+{1\over 2}[[[x_0^+,\alpha_-],\gamma_-],\gamma_-]\Big)$$
and
$$\exp(\gamma_-)[[x_0^+,\alpha_-],\gamma_-]=([[x_0^+,\alpha_-],\gamma_-]-[[[x_0^+,\alpha_-],\gamma_-],\gamma_-])\exp(\gamma_-);$$
iv): lemma \ref{lhlh} implies that $\exp(\eta)^{-1}x_0^+\exp(\eta)\in{\mathbb{Q}}[w][[u,v]].x_0^+;$
\noindent v): $\gamma_+\in\tilde{\cal U}^{+,1}[[u,v]]$ commutes with both $y_1\in\tilde{\cal U}^{+,1}[[u,v]]$ and $[y_0,\gamma_+]\in\tilde{\cal U}^{+,c}[[u,v]]$.
\noindent Point vi) is a consequence of points i)-v) and remark \ref{gtg},i).
\end{proof}
\end{lemma}
\begin{lemma}\label{wderivcomm}
By abuse of notation let $\alpha_{\pm}$, $\beta_{\pm}$, $\gamma_{\pm}$, $\eta$ and $y_0$ (see notation \ref{gabg} and lemma \ref{derivcomm},iv)) denote also the elements of ${\mathbb{Q}}[w][[u,v]]$ such that
$$\alpha_+=\alpha_+(w^2).x_0^+,\ \ \beta_+=\beta_+(w).X_1^+,\ \ \gamma_+=\gamma_+(w^2).x_1^+,$$
$$\alpha_-=\alpha_-(w^2).x_1^-,\ \ \beta_-=\beta_-(w).X_1^-,\ \ \gamma_-=\gamma_-(w^2).x_0^-,$$
$$\eta=\eta(w).h_0.$$
Then the relations of lemma \ref{derivcomm},vi) become:
$${d\alpha_-(w^2)\over du}=4\beta_-(-w^2)-6\alpha_-(w^2)\gamma_-(w^2),$$
$${d\beta_-(w)\over du}=\alpha_-(-w)(w\alpha_-^2(-w)-3\gamma_-^2(-w)),$$
$${d\gamma_-(w^2)\over du}=-3w^2\alpha_-^2(w^2)-\gamma_-^2(w^2),$$
$${d\eta(w)\over du}=w\alpha_-(w^2)+\gamma_-(w^2),$$
$${d(\alpha_+(w^2)+w\gamma_+(w^2))\over du}=\exp(-4\eta(w)+2\eta(-w)),$$
$${d\beta_+(w)\over du}=-{d{\cal A}lpha_+(-w)\over du}\gamma_+(-w).$$
\begin{proof}
The claim is obtained using lemma \ref{qwconti}.\end{proof}
\end{lemma}
\begin{proposition}\label{xmenogrande}
$$\exp(x_0^+u)\exp(X_1^-v)=$$
$$=\exp({\alpha_-})\exp({\beta_-})\exp({\gamma_-})\exp(\eta)\exp(\gamma_+)\exp(\beta_+)\exp({\alpha_+})$$
where, with the notations of lemma \ref{wderivcomm},
$$\alpha_-(w)={4uv\over 1-4^2wu^4v^2},\ \ \ \ \alpha_+(w)={u\over 1-4^2wu^4v^2},$$
$$\beta_-(w)={(1+3\cdot 4^2wu^4v^2)v\over (1+4^2wu^4v^2)^2},\ \ \ \ \beta_+(w)={(1-4^2wu^4v^2)u^4v\over (1+4^2wu^4v^2)^2},$$
$$\gamma_-(w)={-4^2wu^3v^2\over 1-4^2wu^4v^2},\ \ \ \ \gamma_+(w)={-4u^3v\over 1-4^2wu^4v^2},$$
$$\eta(w)={1\over 2}\ln(1+4wu^2v).$$
In particular:
\noindent i) $(x_0^+)^{(k)}(X_1^-)^{(l)}\in\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$ for all $k,l\in\N$;
\noindent ii) $\hat h_+(4u)^{{1\over 2}}\in\tilde{\cal U}z[[u]]$.
\begin{proof}
We use the notation fixed in \ref{gabg}.
\noindent It is obvious that $G(0,v)=\exp(X_1^-v)$, so that condition a) of remark \ref{gesponenziale} is fulfilled, and we need to verify condition b), following lemmas \ref{derivcomm},vi) and \ref{wderivcomm}.
\noindent Remark that
$${d\eta(w)\over du}={4wuv\over 1+4wu^2v}={4wuv(1-4wu^2v)\over 1-4^2w^2u^4v^2}=w\alpha_-(w^2)+\gamma_-(w^2)$$
and
$$\exp(-4\eta(w)+2\eta(-w))={1-4wu^2v\over(1+4wu^2v)^2},$$
$$\alpha_+(w^2)+w\gamma_+(w^2)={u(1-4wu^2v)\over1-4^2w^2u^4v^2}={u\over 1+4wu^2v},$$
so that
$${d(\alpha_+(w^2)+w\gamma_+(w^2))\over du}={1+4wu^2v-8wu^2v\over (1+4wu^2v)^2}=\exp(-4\eta(w)+2\eta(-w)).$$
Now let us recall that for all $n,m\in\N$
$${d\over du}{u^n\over (1-au^4)^m}={nu^{n-1}+(4m-n)au^{n+3}\over (1-au^4)^{m+1}},$$
hence, fixing $a=4^2w^2v^2$, we get
$${d\alpha_-(w^2)\over du}={4v(1+3au^4)\over(1-au^4)^2},$$
$${d\beta_-(-w^2)\over du}={-4au^3v(1+3au^4)\over(1-au^4)^3},$$
$${d\gamma_-(w^2)\over du}={-a(3u^2+au^6)\over(1-au^4)^2},$$
$${d\alpha_+(w^2)\over du}={1+3au^4\over(1-au^4)^2},$$
$${d\beta_+(-w^2)\over du}={4vu^3(1+3au^4)\over(1-au^4)^3}.$$
The relations to prove are then equivalent to the following:
$$4v(1+3au^4)=4(1-3au^4)v+6\cdot 4uv\cdot au^3,$$
$$-4au^3v(1+3au^4)=4uv(-w^24^2u^2v^2-3a^2u^6),$$
$$-a(3u^2+au^6)=-3w^2\cdot 4^2u^2v^2-a^2u^6,$$
$$4u^3v(1+3au^4)=(1+3au^4)4u^3v,$$
which are easily verified.
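\noindent For instance, for the first of them: $4(1-3au^4)v+6\cdot 4uv\cdot au^3=4v-12au^4v+24au^4v=4v(1+3au^4)$.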
\noindent Then, since $\alpha_{\pm}$, $\beta_{\pm}$, $\gamma_{\pm}$ have integral coefficients, i) follows from example \ref{dvdpw}, remark \ref{whtilde} and lemma \ref{ometiomecap},iii).
\noindent ii) follows at once from the above considerations, inverting the exponentials.
\end{proof}
\end{proposition}
\begin{proposition}\label{x0piux1meno}
$$\exp(x_0^+u)\exp(x_1^-v)=$$
$$=\exp({\alpha_-})\exp({\beta_-})\exp({\gamma_-})\exp(\eta)\exp(\gamma_+)\exp(\beta_+)\exp({\alpha_+})$$
where, with the notations of lemma \ref{wderivcomm},
$$\alpha_+(w)={(1+wu^2v^2)u\over 1-6wu^2v^2+w^2u^4v^4},\ \ \ \ \alpha_-(w)={(1+wu^2v^2)v\over 1-6wu^2v^2+w^2u^4v^4},$$
$$\beta_+(w)={(1-4wu^2v^2-w^2u^4v^4)u^3v\over (1+6wu^2v^2+w^2u^4v^4)^2},\ \ \ \
\beta_-(w)={(1-4wu^2v^2-w^2u^4v^4)wuv^3\over (1+6wu^2v^2+w^2u^4v^4)^2},$$
$$\gamma_+(w)={(-3+wu^2v^2)u^2v\over 1-6wu^2v^2+w^2u^4v^4},\ \ \ \
\gamma_-(w)={(-3+wu^2v^2)wuv^2\over 1-6wu^2v^2+w^2u^4v^4},$$
$$\eta(w)={1\over 2}\ln(1+2wuv-w^2u^2v^2).$$
\begin{proof}
\noindent We use the notations fixed in \ref{gabg}.
\noindent It is obvious that $G(0,v)=\exp(x_1^-v)$, so that condition a) of remark \ref{gesponenziale} is fulfilled, and we need to verify condition b), following lemma \ref{wderivcomm}.
\noindent First of all remark that $$1-6t^2+t^4=(1+2t-t^2)(1-2t-t^2)$$ and that
$$1+t^2+(-3+t^2)t=1-3t+t^2+t^3=(1-t)(1-2t-t^2);$$
thus,
replacing $t$ by $wuv$, we get
$$\alpha_+(w^2)+w\gamma_+(w^2)={(1-wuv)u\over 1+2wuv-w^2u^2v^2}$$
and
$$w\alpha_-(w^2)+\gamma_-(w^2)={(1-wuv)wv\over 1+2wuv-w^2u^2v^2}.$$
Hence the relations of lemma \ref{wderivcomm} involving $\eta$ are easily proved:
$${d\eta(w)\over du}={(1-wuv)wv\over 1+2wuv-w^2u^2v^2}=w\alpha_-(w^2)+\gamma_-(w^2)$$
and
$$\exp(-4\eta(w)+2\eta(-w))={1-2wuv-w^2u^2v^2\over(1+2wuv-w^2u^2v^2)^2}$$
while, on the other hand,
$${d\over dt}{t-t^2\over 1+2t-t^2}={1-2t-t^2\over (1+2t-t^2)^2}$$
so that
$${d\over du}(\alpha_+(w^2)+w\gamma_+(w^2))={1-2wuv-w^2u^2v^2\over(1+2wuv-w^2u^2v^2)^2}$$
and
$$\exp(-4\eta(w)+2\eta(-w))={d\over du}(\alpha_+(w^2)+w\gamma_+(w^2)).$$
\noindent In order to prove the remaining relations remark that for all $n,m\in\N$
$${d\over dt}{t^n\over (1-6t^2+t^4)^m}={nt^{n-1}+6(2m-n)t^{n+1}+(n-4m)t^{n+3}\over (1-6t^2+t^4)^{m+1}},$$
which helps to compute the derivatives of $\alpha_{\pm}(w^2)$, $\beta_{\pm}(-w^2)$, $\gamma_-(w^2)$, fixing $t=wuv$ and recalling that ${d\over du}=wv{d\over dt}$:
$${d\alpha_-(w^2)\over du}={wv^2(14t-4t^3-2t^5)\over (1-6t^2+t^4)^2},$$
$${d\beta_-(-w^2)\over du}={w^2v^3(-1-30t^2-12t^4+14t^6-3t^8)\over (1-6t^2+t^4)^3},$$
$${d\gamma_-(w^2)\over du}={w^2v^2(-3-15t^2+3t^4-t^6)\over (1-6t^2+t^4)^2},$$
$${d\alpha_+(w^2)\over du}={1+9t^2-9t^4-t^6\over (1-6t^2+t^4)^2},$$
$${d\beta_+(-w^2)\over du}={w^{-2}v^{-1}(3t^2+26t^4-36t^6+6t^8+t^{10})\over (1-6t^2+t^4)^3}.$$
The relations to prove are then equivalent to the following:
$$14t-4t^3-2t^5=-4(1+4t^2-t^4)t-6(1+t^2)(-3+t^2)t,$$
$$-1-30t^2-12t^4+14t^6-3t^8=(1+t^2)(-(1+t^2)^2-3(-3+t^2)^2t^2), $$
$$-3-15t^2+3t^4-t^6=-3(1+t^2)^2-(-3+t^2)^2t^2,$$
$$3t^2+26t^4-36t^6+6t^8+t^{10}=-(1+9t^2-9t^4-t^6)(-3+t^2)t^2,$$
which are easily verified.
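\noindent For instance, for the first of them: $-4(1+4t^2-t^4)t-6(1+t^2)(-3+t^2)t=(-4t-16t^3+4t^5)+(18t+12t^3-6t^5)=14t-4t^3-2t^5$.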
\end{proof}
\end{proposition}
\begin{remark}
Since $(1+2t-t^2)^{-1}\in\Z[[t]]$, proposition \ref{x0piux1meno} implies that $G^{\pm}\in\tilde{\cal U}z^{\pm}[[u,v]]$
(see notation \ref{gabg}). Then, in order to prove that $$(x_0^+)^{(k)}(x_1^-)^{(l)}\in\tilde{\cal U}z^{-}\tilde{\cal U}z^{0}\tilde{\cal U}z^{+},$$ we just need to show that $\exp(\eta)\in\tilde{\cal U}z^{0}[[u,v]]$. This will imply that $\tilde{\cal U}z^{-}\tilde{\cal U}z^{0}\tilde{\cal U}z^{+}$ is closed under multiplication, hence it is an integral form of $\tilde{\cal U}$, obviously containing $\tilde{\cal U}z$.
\noindent In order to prove that $\tilde{\cal U}z=\tilde{\cal U}z^{-}\tilde{\cal U}z^{0}\tilde{\cal U}z^{+}$ we need to show in addition that $\tilde{\cal U}z^0\subseteq\tilde{\cal U}z$.
\noindent The last part of this paper is devoted to proving that $$\exp\left({1\over 2}\ln(1+2u-u^2).h_0\right)\in\tilde{\cal U}z^0[[u]]$$ (see corollary \ref{hcappucciod}) and that $\tilde{\cal U}z^0\subseteq\tilde{\cal U}z$ (see proposition \ref{samealg}).
\end{remark}
\begin{notation} \label{notedn}
\noindent In the following $d:\Z_+\to{\mathbb{Q}}$ denotes the function defined by
$$\sum_{n>0}(-1)^{n-1}{d_n\over n}u^n={1\over 2}\ln(1+2u-u^2)$$
and $\tilde d=\varepsilon d$ (that is, $\tilde d_n=\varepsilon_n d_n$ for all $n>0$, where $\varepsilon_n$ has been defined in definition \ref{thuz}).
\noindent Remark that with this notation we have $\exp(\eta)=\hat h_+^{\{d\}}(uv)$ ($\eta$ as in lemma \ref{wderivcomm} and proposition \ref{x0piux1meno}, $\hat h_+^{\{d\}}(u)$ as in notation \ref{hcappucciof}, where we replace $\hat h^{\{d\}}(u)$ by $\hat h_+^{\{d\}}(u)$ in order to distinguish it from its symmetric counterpart $\hat h_-^{\{d\}}(u)=\Omega(\hat h_+^{\{d\}}(u))$).
\end{notation}
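\noindent For instance, ${1\over 2}\ln(1+2u-u^2)=u-{3\over 2}u^2+{7\over 3}u^3-{17\over 4}u^4+\dots$, so that $d_1=1$, $d_2=3$, $d_3=7$, $d_4=17$.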
\begin{remark}\label{hhdehh}
From $1+2u-u^2=(1+(1+\sqrt 2)u)(1+(1-\sqrt 2)u)$, we get that:
\noindent i) for all $n\in\Z_+$, $d_n={1\over 2}((1+\sqrt{2})^n+(1-\sqrt{2})^n)$; equivalently $\exists\delta_n\in\Z$ such that
$$\forall n\in\Z_+\ \ \ (1+\sqrt{2})^n=d_n+\delta_n\sqrt{2}.$$
\noindent ii) $d_n$ is odd for all $n\in\Z_+$; $\delta_n$ is odd if and only if $n$ is odd.
\noindent iii) $\Z[\hat h_k^{\{d\}}|k>0]\not\subseteq\Z[\hat h_k|k>0]$ (indeed $(\mu*d)(4)=d_4-d_2=17-3=14$, which is not a multiple of 4, see propositions \ref{convoluzioneintera} and \ref{emmepiallaerre}).
\noindent iv) $\Z[\hat h_k^{\{d\}}|k>0]\subseteq\Z[\tilde h_k|k>0]$ if and only if $\Z[\hat h_k^{\{\tilde d\}}|k>0]\subseteq\Z[\hat h_k|k>0]$.
\end{remark}
\begin{lemma}\label{contidp}
\noindent Let $p,m,r\in\Z_+$ be such that $p$ is prime and $(m,p)=1$. Then
$${\rm{if}}\ p^r=4\ \ \ p^r=4|d_{4m}+d_{2m},$$
$${\rm{if}}\ p^r\neq 4\ \ \ p^r|d_{p^rm}-d_{p^{r-1}m}.$$
\begin{proof}
\noindent The claim is obvious for $p^r=2$ since the $d_n$'s are all odd.
\noindent In general if $n$ is any positive integer it follows from remark \ref{hhdehh} that $$d_{np}+\delta_{np}\sqrt{2}=(d_n+\delta_n\sqrt{2})^p.$$
If $p=2$ this means that
$$d_{2n}=d_n^2+2\delta_n^2,$$
$$\delta_{2n}=2d_n\delta_n,$$
hence
$$2^r||\delta_{2^rm}\ \ {\rm{(recall\ that}}\ \delta_m\ {\rm{is\ odd\ since}}\ m\ {\rm{is\ odd)}}$$
$$d_{2^rm}\equiv
d_{2^{r-1}m}^2\ \ (mod\ 2^{2r-1}),$$
from which it follows that
$$d_{2m}\equiv -1\ \ (mod\ 4),$$
$$d_{2^rm}\equiv 1\ \ (mod\ 2^{r+1})\ \ \ {\rm{if}}\ r>1:$$
indeed, since $d_m$ and $\delta_m$ are odd,
$$d_{2m}\equiv_{(8)}1+2\equiv_{(4)}-1,$$
while if $r\geq 2$ then $2r-1\geq r+1$ and by induction on $r$ we get
$$d_{2^rm}\equiv d^2_{2^{r-1}m}=(\pm 1+2^rk)^2\equiv 1\ \ (mod\ 2^{r+1}).$$
These last relations immediately imply the claim for $p=2$.
\noindent Now let $p\neq 2$. Then
$$d_{pn}=\sum_{h\geq 0}{p\choose 2h}2^hd_{n}^{p-2h}\delta_n^{2h},$$
$$\delta_{pn}=\sum_{h\geq 0}{p\choose 2h+1}2^hd_{n}^{p-2h-1}\delta_n^{2h+1}.$$
Suppose that $d_n=d+p^{r-1}k$, $\delta_n=\delta+p^{r-1}k'$ with $k=k'=0$ if $r=1$. Then
$$d_{pn}\equiv\sum_{h\geq 0}{p\choose 2h}2^hd^{p-2h}\delta^{2h}\ \ (mod\ p^r),$$
$$\delta_{pn}\equiv\sum_{h\geq 0}{p\choose 2h+1}2^hd^{p-2h-1}\delta^{2h+1}\ \ (mod\ p^r).$$
The above relations allow us to prove by induction on $r>0$ that if $\zeta_p$ is defined by the properties $\zeta_p\in\{\pm 1\}$, $\zeta_p\equiv_{(p)} 2^{{p-1\over 2}}$ then
$$d_{p^rm}\equiv d_{p^{r-1}m}\ \ (mod\ p^r)\ \ \ {\rm{and}}\ \ \ \delta_{p^rm}\equiv \zeta_p\delta_{p^{r-1}m}\ \ (mod\ p^r):$$
indeed
if $r=1$
$$d_{pm}\equiv d_m^p\equiv d_m\ \ (mod\ p),$$
$$\delta_{pm}\equiv 2^{{p-1\over 2}}\delta_m^p\equiv\zeta_p\delta_m\ \ (mod\ p);$$
if $r>1$ then
$$d_{p^rm}\equiv_{(p^r)}\sum_{h\geq 0}{p\choose 2h}2^hd_{p^{r-2}m}^{p-2h}\delta_{p^{r-2}m}^{2h}\equiv_{(p^r)}d_{p^{r-1}m},$$
$$\delta_{p^rm}\equiv_{(p^r)}\zeta_p\sum_{h\geq 0}{p\choose 2h+1}2^hd_{p^{r-2}m}^{p-2h-1}\delta_{p^{r-2}m}^{2h+1}\equiv_{(p^r)}\zeta_p\delta_{p^{r-1}m}.$$
\end{proof}
\end{lemma}
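\noindent For instance, taking $m=1$ in lemma \ref{contidp}: $4\,|\,d_4+d_2=17+3=20$, $3\,|\,d_3-d_1=7-1=6$ and $8\,|\,d_8-d_4=577-17=560$.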
\vskip .3 truecm
\begin{corollary}\label{hcappucciod}
\noindent $\hat h_n^{\{d\}}\in\Z[\tilde h_k|k>0]$ for all $n>0$.
\noindent In particular $(x_0^+)^{(k)}(x_1^-)^{(l)}\in\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$ $\forall k,l\in\N$.
\begin{proof}
\noindent The claim follows from propositions \ref{convoluzioneintera} and \ref{emmepiallaerre}, remark \ref{hhdehh} and lemma \ref{contidp}, remarking that if $m$ is odd then
$$ d_{4m}+d_{2m}=-(\tilde d_{4m}-\tilde d_{2m})$$
while if $(m,p)=1$ and $p^r\neq 4$ then
$$d_{p^rm}-d_{p^{r-1}m}=\pm(\tilde d_{p^rm}-\tilde d_{p^{r-1}m})
.$$
Thus for all $n>0$ $\hat h_n^{\{\tilde d\}}\in\Z[\hat h_k|k>0]$ and
$\hat h_n^{\{d\}}\in\Z[\tilde h_k|k>0]$.
\end{proof}
\end{corollary}
\begin{corollary}\label{b22}
$\tilde{\cal U}z^+\tilde{\cal U}z^-\subseteq\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$; equivalently $\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$ is an integral form of $\tilde{\cal U}$.
\begin{proof}
The proof is identical to that of proposition \ref{strutmodulo}, replacing $\hat{\cal U}$ with $\tilde{\cal U}$ and taking care to remark that in this case, too,
$$(x_r^+)^{(k)}(x_s^-)^{(l)}\in\sum_{m\geq 0}\tilde{\cal U}_{\Z,-l+m}^-\tilde{\cal U}z^0\tilde{\cal U}_{\Z,k-m}^+\ \ \forall r,s\in\Z,\ \forall k,l\in\N:$$
if $r+s$ is even this follows at once comparing proposition \ref{czzq} with the properties of the gradation, while if $r+s$ is odd it is true by proposition \ref{x0piux1meno} and remark \ref{autstab},iv).
\end{proof}
\end{corollary}
\begin{proposition} \label{samealg}
$\tilde{\cal U}z^0\subseteq\tilde{\cal U}z$ and $\tilde{\cal U}z=\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+$.
\begin{proof}
\noindent Let ${\cal Z}$ be the $\Z$-subalgebra of ${\mathbb{Q}}[h_r|r>0]$ generated by the coefficients of
$\hat h_+^{\{d\}}(u)$ and of $\hat h_+(4u)^{1/2}$.
Remark that, by propositions \ref{xmenogrande} and \ref{x0piux1meno}, ${\cal Z}\subseteq\tilde{\cal U}z$.
\noindent We have already proved that ${\cal Z}\subseteq\Z[\tilde h_k|k>0]$ (see lemma \ref{ometiomecap},iii) and corollary \ref{hcappucciod}). Let us prove, by induction on $j$, that $\tilde h_j\in{\cal Z}$ for all $j>0$.
\noindent If $j=1$ the claim depends on the equality $\tilde h_1=h_1=\hat h^{\{d\}}_1$ (since $\varepsilon_1=d_1=1$).
\noindent Let $j>1$ and suppose that $\tilde h_1,...,\tilde h_{j-1}\in{\cal Z}$.
\noindent We notice that if $a:\Z_+\to\Z$ is such that $\hat h_j^{\{a\}}\in{\cal{Z}}$ then $a_j\tilde h_j\in{\cal{Z}}$: indeed it is always true that
$$\tilde h_j-{\varepsilon_jh_j\over j}\in{\mathbb{Q}}[h_1,...,h_{j-1}]$$
and
$$\hat h_j^{\{a\}}-{a_jh_j\over j}\in{\mathbb{Q}}[h_1,...,h_{j-1}]$$
from which we get that
$$\hat h_j^{\{a\}}-\varepsilon_ja_j\tilde h_j\in{\mathbb{Q}}[h_1,...,h_{j-1}];$$
but the condition $\hat h_j^{\{a\}}\in{\cal{Z}}\subseteq\Z[\tilde h_k|k>0]$ and the inductive hypothesis
$\Z[\tilde h_1,...,\tilde h_{j-1}]\subseteq{\cal{Z}}$
imply that
$$\hat h_j^{\{a\}}-\varepsilon_ja_j\tilde h_j\in{\mathbb{Q}}[h_1,...,h_{j-1}]
\cap\Z[\tilde h_k|k>0]=\Z[\tilde h_1,...,\tilde h_{j-1}]
\subseteq{\cal{Z}}$$
hence $a_j\tilde h_j\in{\cal{Z}}$.
\noindent This in particular holds for $a=d$ and for $\hat h^{\{a\}}(u)=\hat h_+(4u)^{{1\over 2}}$, hence
$$d_j\tilde h_j\in{\cal Z}\ \ {\rm{and}}\ \ 2^{2j-1}\tilde h_j\in{\cal Z}.$$
But $(d_j,2^{2j-1})=1$ because $d_j$ is odd, hence $\tilde h_j\in{\cal Z}$.
\noindent Then $\tilde{\cal U}z^{0,+}=\Z[\tilde h_k|k>0]={\cal Z}\subseteq\tilde{\cal U}z$ and, applying $\Omega$, $\tilde{\cal U}z^{0,-}\subseteq\tilde{\cal U}z$. The claim follows recalling corollary \ref{czzp}.
\end{proof}
\end{proposition}
\noindent We can now collect the results obtained so far into the main theorem of this work.
\vskip .3 truecm
\begin{theorem}\label{trmA22}
The $\Z$-subalgebra $\tilde{\cal U}z$ of $\tilde{\cal U}$ generated by $$\{(x_r^+)^{(k)},(x_r^-)^{(k)}|r\in\Z,k\in\N\}$$ is an integral form of $\tilde{\cal U}$.
\noindent More precisely
$$\tilde{\cal U}z\cong
\tilde{\cal U}z^{-,1}\otimes\tilde{\cal U}z^{-,c}\otimes\tilde{\cal U}z^{-,0}\otimes\tilde{\cal U}z^{0,-}\otimes\tilde{\cal U}z^{0,0}\otimes\tilde{\cal U}z^{0,+}\otimes\tilde{\cal U}z^{+,1}\otimes\tilde{\cal U}z^{+,c}\otimes\tilde{\cal U}z^{+,0}$$
and a $\Z$-basis of $\tilde{\cal U}z$ is given by the product
$$B^{-,1}B^{-,c}B^{-,0}B^{0,-}B^{0,0}B^{0,+}B^{+,1}B^{+,c}B^{+,0}$$
where $B^{\pm,0}$, $B^{\pm,1}$, $B^{\pm,c}$, $B^{0,\pm}$ and $B^{0,0}$ are the $\Z$-bases respectively of $\tilde{\cal U}z^{\pm,0}$, $\tilde{\cal U}z^{\pm,1}$, $\tilde{\cal U}z^{\pm,c}$, $\tilde{\cal U}z^{0,\pm}$ and $\tilde{\cal U}z^{0,0}$ given as follows:
$$B^{\pm,0}=\Big\{{(\bf{x}}^{\pm,0})^{({\bf{k}})}=\prod_{r\in\Z}(x_{2r}^{\pm})^{(k_r)}|{\bf{k}}:\Z\to\N\,\, {\rm{is\, finitely\, supported}}\Big\}$$
$$B^{\pm,1}=\Big\{{(\bf{x}}^{\pm,1})^{({\bf{k}})}=\prod_{r\in\Z}(x_{2r+1}^{\pm})^{(k_r)}|{\bf{k}}:\Z\to\N\,\, {\rm{is\, finitely\, supported}}\Big\}$$
$$B^{\pm,c}=\Big\{{(\bf{X}}^{\pm})^{({\bf{k}})}=\prod_{r\in\Z}(X_{2r+1}^{\pm})^{(k_r)}|{\bf{k}}:\Z\to\N\,\, {\rm{is\, finitely\, supported}}\Big\}$$
$$B^{0,\pm}=\Big\{{\tilde{{\bf{h}}}_{\pm}^{\bf{k}}}=\prod_{l\in\N}{\tilde h_{\pm l}^{k_l}}|{\bf{k}}:\N\to\N\,\,{\rm{is\, finitely\, supported}}\Big\}$$
$$B^{0,0}=\Big\{{h_0\choose k}{c\choose\tilde k}|k,\tilde k\in\N\Big\}.$$
\end{theorem}
\vskip .5 truecm
\appendix\label{appendix}
\addcontentsline{toc}{section}{Appendices}
\renewcommand{\thesubsection}{\Alph{subsection}}
\subsection{Straightening formulas of $A^{(2)}_2$}\label{appendA}
\newtheorem{mydefinition}{Definition}
\numberwithin{mydefinition}{subsection}
\newtheorem{mytheorem}[mydefinition]{Theorem}
\newtheorem{myremark}[mydefinition]{Remark}
\newtheorem{mynotation}[mydefinition]{Notation}
\newtheorem{mylemma}[mydefinition]{Lemma}
\newtheorem{mycorollary}[mydefinition]{Corollary}
\numberwithin{equation}{subsection}
For the sake of completeness we collect here the commutation formulas of $A^{(2)}_2$, inserting also the formulas that we didn't need for the proof of theorem
\ref{trmA22}.
\noindent Notation \ref{pppm} and remark \ref{epppm} will help in writing some of the following straightening relations and in understanding the origin of some apparently mysterious terms.
\begin{mynotation}\label{pppm}
Given $p(t)\in{\mathbb{Q}}[[t]]$ let us define $p_+(t), p_-(t)\in{\mathbb{Q}}[[t^2]]$ and $p_0(t)\in{\mathbb{Q}}[[t]]$ by $$p(t)=p_+(t)+tp_-(t),\ p_0(t^2)={1\over 2}p_+(t)p_-(t).$$
Remark that the maps $p(t)\mapsto p_+(t)$ and $p(t)\mapsto p_-(t)$ are homomorphisms of ${\mathbb{Q}}[[t^2]]$-modules while $q(t)\in{\mathbb{Q}}[[t^2]], \tilde q(t^2)=q(t)\Rightarrow
(qp)_0(t)=\tilde q(t)^2p_0(t)$.
\end{mynotation}
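\noindent For example, for $p(t)={1\over 1-t}$ one gets $p_+(t)={1\over 1-t^2}$, $p_-(t)={1\over 1-t^2}$ and $p_0(t)={1\over 2(1-t)^2}$.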
\begin{myremark}\label{epppm}
Given $p(t)\in{\mathbb{Q}}[[t]]$ we have that
$$\exp(p(uw).x_0^+)=$$
$$=\exp(p_+(uw).x_0^+)\exp(up_0(-u^2w).X_1^+)\exp(up_-(uw).x_1^+)=$$
$$=\exp(up_-(uw).x_1^+)\exp(-up_0(-u^2w).X_1^+)\exp(p_+(uw).x_0^+).$$
\end{myremark}
\vskip .3 truecm
\noindent We shall now list a complete set of {\bf{straightening formulas}} in $\tilde{\cal U}z$.
\vskip .3 truecm
\noindent I) Zero commutations regarding $\tilde{\cal U}z^{0,0}$:
$${c\choose k}\ {\rm{is\ central\ in\ }}\tilde{\cal U}z;$$
$${h_0\choose k}\ {\rm{is\ central\ in\ }}\tilde{\cal U}z^0:\left[{h_0\choose k},\tilde h_l\right]=0\ \forall k\geq 0,\ l\neq 0.$$
\noindent II) Relations in $\tilde{\cal U}z^{0,+}$ (from which those in $\tilde{\cal U}z^{0,-}$ follow as well):
$$\tilde{\cal U}z^{0,+}\ {\rm{is\ commutative}}:\ [\tilde h_k,\tilde h_l]=0\ \forall k,l>0;$$
$$\tilde \lambda_m(\tilde h_+(-u^m))=\prod_{j=1}^{m}\tilde h_+(-\omega^ju)\ \forall m\in \Z_+$$
${\rm{where}}\ \omega\ {\rm{is\ a\ primitive\ }}m^{{\rm{th}}}\ {\rm{root\ of\ }}1$, that is $$\tilde \lambda_m(\tilde h_k)=(-1)^{(m-1)k}\sum_{(k_1,...,k_m):\atop k_1+...+k_m=mk} \omega^{\sum_{j=1}^m j k_j}\tilde h_{k_1} \dots \tilde h_{k_m} ;$$
if $m$ is odd
$$\lambda_m(\tilde h_k)=\tilde \lambda_m(\tilde h_k)\ \forall k\geq 0;$$
if $m$ is even
$$\lambda_{m}(\hat h_+(u))=\tilde\lambda_{m}(\tilde h_+((-1)^{m\over 2}u)^{-1});$$
$$\hat h_+(u)=\tilde h_+(u)\tilde\lambda_4(\tilde h_+(-u^4)^{-{1\over 2}})=\tilde h_+(u)^{{1\over 2}}\tilde h_+(-u)^{-{1\over 2}}\tilde h_+(iu)^{-{1\over 2}}\tilde h_+(-iu)^{-{1\over 2}},$$
$$\hat h_+^{\{d\}}(u)=\hat h_+((1+\sqrt{2})u)^{{1\over 2}}\hat h_+((1-\sqrt{2})u)^{{1\over 2}}=\prod_{m>0}\tilde\lambda_m(\tilde h_+(u^m))^{k_m}$$
where the $k_m$'s are integers defined by the identity
$$1+2u-u^2=(1-2u-u^2)(1+6u^2+u^4)\prod_{m>0}(1+u^m)^{4k_m}.$$
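\noindent (Comparing the coefficients of $u$, $u^2$ and $u^3$ on the two sides of this identity one finds, for instance, $k_1=1$, $k_2=-1$ and $k_3=2$.)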
\noindent The corresponding relations in $\tilde{\cal U}z^{0,-}$ are obtained applying $\Omega$, that is just replacing $\tilde h_k$, $\tilde h_+(u)$ and $\hat h_+(u)$ with $\tilde h_{-k}$, $\tilde h_-(u)$ and $\hat h_-(u)$.
\noindent III) Other straightening relations in $\tilde{\cal U}z^0$:
$$\tilde h_+(u)\tilde h_-(v)=\tilde h_-(v)(1-uv)^{-4c}(1+uv)^{2c}\tilde h_+(u).$$
\noindent IV) Commuting elements and straightening relations in $\tilde{\cal U}z^+$ (and in $\tilde{\cal U}z^-$):
$$(X_{2r+1}^+)^{(k)}\ {\rm{is\ central\ in\ }}\tilde{\cal U}z^+:$$ $$[(X_{2r+1}^+)^{(k)},(x_s^+)^{(l)}]=0=[(X_{2r+1}^+)^{(k)},(X_{2s+1}^+)^{(l)}]\ \forall r,s\in\Z,\ k,l\in\N;$$
$${\rm{if}}\ r+s\ {\rm{is\ even\ }}[(x_r^+)^{(k)},(x_s^+)^{(l)}]=0\ \forall k,l\in\N;$$
$${\rm{if}}\ r+s\ {\rm{is\ odd\ }}
\exp(x_r^+ u)\exp(x_{s}^+ v)=\exp(x_s^+ v)\exp((-1)^sX_{r+s}^+ uv)\exp(x_{r}^+ u).$$
\noindent All the relations in $\tilde{\cal U}z^-$ are obtained from those in $\tilde{\cal U}z^+$ applying the antiautomorphism $\Omega$; in particular
if $r+s$ is odd
$$\exp(x_r^- u)\exp(x_{s}^- v)=\exp(x_s^- v)\exp((-1)^rX_{r+s}^- uv)\exp(x_{r}^- u).$$
\noindent V) Straightening relations for $\tilde{\cal U}z^+\tilde{\cal U}z^{0,0}$ (and for $\tilde{\cal U}z^{0,0} \tilde{\cal U}z^-$): $\forall r\in\Z,\ k,l\in\N$
$$(x_r^+)^{(k)}{h_0\choose l}={h_0-2k\choose l}(x_r^+)^{(k)},$$
$$(X_{2r+1}^+)^{(k)}{h_0\choose l}={h_0-4k\choose l}(X_{2r+1}^+)^{(k)},$$
and
$${h_0\choose l}(x_r^-)^{(k)}=(x_r^-)^{(k)}{h_0-2k\choose l},$$
$${h_0\choose l}(X_{2r+1}^-)^{(k)}=(X_{2r+1}^-)^{(k)}{h_0-4k\choose l}.$$
\noindent VI) Straightening relations for $\tilde{\cal U}z^+\tilde{\cal U}z^{0,+}$ (and for $\tilde{\cal U}z^+\tilde{\cal U}z^{0,-}$, $\tilde{\cal U}z^{0,\pm} \tilde{\cal U}z^-$):
$$(X_{2r+1}^+)^{(k)}\tilde h_+(u)=\tilde h_+(u)\left((1-u^2T^{-1})^2X_{2r+1}^+\right)^{(k)}$$
and $$(x_r^+)^{(k)}\tilde h_+(u)=\tilde h_+(u)\left({(1-uT^{-1})^6(1+u^2T^{-2})\over(1-u^2T^{-2})^3}x_r^+\right)^{(k)};$$
the expression for $\left({(1-uT^{-1})^6(1+u^2T^{-2})\over(1-u^2T^{-2})^3}x_r^+\right)^{(k)}$ can be straightened more explicitly: setting $p(t)=(1-t)^6$ we have $$
p_+(t)=1+15t^2+15t^4+t^6,$$ $$
p_-(t)=-6-20t^2-6t^4,$$ $$
p_0(t)=-(1+15t+15t^2+t^3)(3+10t+3t^2),$$
so that
$$\exp(x_r^+v)\tilde h_+(u)=\tilde h_+(u)\exp\left({(1-uT^{-1})^6(1+u^2T^{-2})\over(1-u^2T^{-2})^3}x_r^+v\right)=$$
$$=\tilde h_+(u)\exp\left({p_-(uT^{-1})(1+u^2T^{-2})\over(1-u^2T^{-2})^3}x_{r+1}^+uv\right)\cdot$$
$$\cdot\exp\left({(-1)^{r-1}p_0(-u^2T^{-1})(1-u^2T^{-1})^2\over(1+u^2T^{-1})^6}X_{2r+1}^+uv^2\right)\cdot$$
$$\cdot\exp\left({p_+(uT^{-1})(1+u^2T^{-2})\over(1-u^2T^{-2})^3}x_r^+v\right).$$
Applying the homomorphism $\lambda_{-1}$ (that is $x_s^+\mapsto x_{-s}^+$, $X_s^+\mapsto X_{-s}^+$, $\tilde h_+\mapsto\tilde h_-$, $T^{-1}\mapsto T$) one immediately gets the expression for $(X_{2r+1}^+)^{(k)}\tilde h_-(u)$ and for $\exp(x_{r}^+v)\tilde h_-(u)$.
\noindent Applying the antiautomorphism $\Omega$ ($x_s^+\mapsto x_{-s}^-$, $X_s^+\mapsto X_{-s}^-$, $\tilde h_+\leftrightarrow\tilde h_-$) one gets analogously the expression for
$\tilde h_{\pm}(u)(X_{2r+1}^-)^{(k)}$ and for $\tilde h_{\pm}(u)\exp(x_{r}^-v)$.
\noindent VII) Straightening relations for $\tilde{\cal U}z^+\tilde{\cal U}z^-$:
\noindent VII,a) $\frak sl_2$-like relations: $\forall r\in\Z$
$${\rm{exp}}(x_r^+u){\rm{exp}}(x_{-r}^-v)={\rm{exp}}\Big({x_{-r}^-v\over 1+uv}\Big)(1+uv)^{h_0+rc}{\rm{exp}}\Big({x_r^+u\over 1+uv}\Big),$$
$${\rm{exp}}(X_{2r+1}^+u){\rm{exp}}(X_{-2r-1}^-v)={\rm{exp}}\Big({X_{-2r-1}^-v\over 1+4^2uv}\Big)(1+4^2uv)^{{h_0\over 2}+{(2r+1)c\over 4}}{\rm{exp}}\Big({X_{2r+1}^+u\over 1+4^2uv}\Big).$$
\noindent VII,b) $\hat{\frak sl}_2$-like relations:
\noindent if $r+s\neq 0$ is even
$$\exp(x_{r}^+u)\exp(x_s^-v)=$$
$$=\exp\left({1\over 1+uvT^{r+s}}x_s^-v\right)
\lambda_{r+s}(\hat h_+(uv))
\exp\left({1\over 1+uvT^{-r-s}}x_r^+u\right),$$
while $\forall r+s\neq 0$
$$\exp(X_{2r+1}^+u)\exp(X_{2s-1}^-v)=$$
$$=\exp\left({1\over 1+4T^{s+r}uv}X_{2s-1}^-v\right)\cdot$$
$$\cdot\lambda_{2(r+s)}(\hat h_+(4^2uv)^{{1\over 2}})\cdot$$
$$\cdot\exp\left({1\over1+4uvT^{-s-r}}X_{2r+1}^+u\right).$$
\noindent VII,c) Straightening relations for $\tilde{\cal U}z^{+,0}\tilde{\cal U}z^{-,c}$ (and $\tilde{\cal U}z^{+,1}\tilde{\cal U}z^{-,c}$, $\tilde{\cal U}z^{+,c}\tilde{\cal U}z^{-,{0\atop 1}}$):
$$\exp(x_0^+u)\exp(X_{1}^-v)=$$
$$=\exp\left({4\over 1-4^2w^2u^4v^2}x_1^-uv\right)\exp\left({-4^2w^2\over 1-4^2w^2u^4v^2}x_0^-u^3v^2\right)\cdot$$
$$\cdot\exp\left({1+3\cdot 4^2wu^4v^2\over (1+4^2wu^4v^2)^2}X_1^-v\right)
\hat h_+(4u^2v)^{{1\over 2}}\exp\left({1-4^2w^{-1}u^4v^2\over (1+4^2w^{-1}u^4v^2)^2}X_1^+u^4v\right)\cdot$$
$$\cdot\exp\left({-4\over 1-4^2w^{-2}u^4v^2}x_1^+u^3v\right)\exp\left({1\over 1-4^2w^{-2}u^4v^2}x_0^+u\right),$$
which can be written in a more compact way observing that
$${1\over 1-4^2t^2}=\left({1\over 1+4t}\right)_+,\ {-4\over 1-4^2t^2}=\left({1\over 1+4t}\right)_-,\ \left({1\over 1+4t}\right)_0={-2\over (1-4^2t)^2}:$$
$$\exp(x_0^+u)\exp(X_{1}^-v)=$$
$$=\exp\left({4\over 1+4wu^2v}x_1^-uv\right)\exp\left({1\over 1+4^2wu^4v^2}X_1^-v\right)\cdot$$
$$\cdot\hat h_+(4u^2v)^{{1\over 2}}\exp\left({1\over 1+4w^{-1}u^2v}x_0^+u\right)\exp\left(-{1\over 1+4^2w^{-1}u^4v^2}X_1^+u^4v\right);$$
this is more symmetric but less explicit in terms of the given basis of $\tilde{\cal U}z$.
\noindent Applying the homomorphism $T^{-r}\lambda_{2r+2s+1}$ (that is $x_l^{\pm}\mapsto x_{l(2r+2s+1)\pm r}^{\pm}$, $X_1^{\pm}\mapsto (-1)^rX_{2r+2s+1\pm 2r}^{\pm}$, $\hat h_k\mapsto \lambda_{2r+2s+1}(\hat h_k)$, $w\big|_{L^{\pm}}\mapsto T^{\mp(2r+2s+1)}$) one deduces the expression for $\exp(x_r^+u)\exp(X_{2s+1}^-v)$.
\noindent Applying $\Omega$ one analogously gets the expression for
$\exp(X_{2r+1}^+u)\exp(x_s^-v)$.
\noindent VII,d) The remaining relations:
$$\exp(x_0^+u)\exp(x_1^-v)=$$
$$=\exp \left ({{1+w^2u^2v^2}\over{1-6w^2u^2v^2+w^4u^4v^4}}x_1^-v\right) \exp \left({{-3+w^2u^2v^2}\over{1-6w^2u^2v^2+w^4u^4v^4}}x_2^-uv^2\right) \cdot$$
$$\cdot \exp \left (-{{1-4wu^2v^2-w^2u^4v^4}\over{(1+6wu^2v^2+w^2u^4v^4)^2}}X_3^-uv^3\right ) \hat h_+^{\{d\}}(uv)\cdot$$
$$\cdot\exp \left({{1-4wu^2v^2-w^2u^4v^4}\over{(1+6wu^2v^2+w^2u^4v^4)^2}}X_1^+u^3v\right)\cdot$$
$$\cdot \exp \left({{-3+w^2u^2v^2}\over{1-6w^2u^2v^2+w^4u^4v^4}}x_1^+u^2v\right)\exp \left({{1+ w^2u^2v^2}\over{1-6w^2u^2v^2+w^4u^4v^4}}x_0^+u\right)$$
or, as well,
$$\exp(x_0^+u)\exp(x_1^-v)=$$
$$=\exp\left({1-wuv\over 1+2wuv -w^{2}u^2v^2} x_{1}^{-}v\right) \exp\left({1\over 2(1+6wu^2v^2 +w^2u^4v^4)} X_{3}^{-}uv^3\right) \cdot$$
$$\cdot\hat h_+^{\{d\}}(uv)\cdot $$
$$\cdot \exp\left({1-wuv\over 1+2wuv -w^2u^2v^2}x_0^{+}u\right)\exp\left({-1\over 2(1+6wu^2v^2 +w^2u^4v^4)} X_{1}^{+}u^3v\right).$$
\noindent The general straightening formula for $\exp(x_r^+u)\exp(x_s^-v)$ when $r+s$ is odd is obtained from the case $r=0$, $s=1$ applying $T^{-r}\lambda_{r+s}$, remarking that $w\big|_{L^{\pm}}\mapsto T^{\mp(r+s)}$.
\subsection{Garland's description of ${\cal U}z^{im,+}$}\label{appendB}
In this appendix we focus on the imaginary positive part ${\cal U}z^{im,+}$ of ${\cal U}z={\cal U}z(\frak g)$ (see section \ref{intr}) when $\frak g$ is an affine Kac-Moody algebra of rank 1 (that is $\frak g=\hat{\frak sl}_2$ or $\frak g=\hat{{\frak sl_3}}^{\!\!\chi}$: these cases are enough to understand also the cases of higher rank): we aim at a better understanding of Garland's (and Mitzman's) basis of ${\cal U}z^{im,+}$
and of its connection with the basis consisting of the monomials in the $\hat h_k$'s, a basis which arises naturally from the description of ${\cal U}z^{im,+}$ as $\Z^{(sym)}[h_r|r>0]=\Z[\hat h_k|k>0]$.
\noindent First of all let us fix some notations and recall Garland's description of ${\cal U}z^{im,+}$.
\begin{definition}\label{bun}
With the notations of example \ref{rvsf} and proposition \ref{tmom} let us define the following elements and subsets in ${\mathbb{Q}}[h_r|r>0]$:
i) $b_{{\bf{k}}}=\prod_{m>0}\lambda_m(\hat h_{k_m})$ where ${\bf{k}}:\Z_+\to\N$ is finitely supported;
\begin{equation} \label{GarlandBase}{\rm{ii)}}\,\,
B_{\lambda}=\left\{b_{{\bf{k}}}
|{\bf{k}}:\Z_+\to\N\,\,{\rm{is\, finitely\, supported}}\right\};
\end{equation}
iii) $\Z_{\lambda}[h_r|r>0]=\sum_{{\bf{k}}}\Z b_{{\bf{k}}}$ is the $\Z$-submodule of ${\mathbb{Q}}[h_r|r>0]$ generated by $B_{\lambda}$.
\end{definition}
\noindent Then, with our notation, Garland's description of ${\cal U}z^{im,+}$ can be stated as follows:
\begin{theorem}
${\cal U}z^{im,+}$ is a free $\Z$-module with basis $B_{\lambda}$.
\noindent Equivalently:
\noindent i) ${\cal U}z^{im,+}=\Z_{\lambda}[h_r|r>0]$;
\noindent ii) $B_{\lambda}$ is linearly independent.
\end{theorem}
\begin{remark}
Once proved that ${\cal U}z^{im,+}$ is the $\Z$-subalgebra of ${\cal U}$ generated by $\{\lambda_m(\hat h_k)|m>0,k\geq 0\}$ (hence by $B_{\lambda}$ or equivalently by $\Z_{\lambda}[h_r|r>0]$), proceeding in two different directions leads to the two descriptions of ${\cal U}z^{im,+}$ that we want to compare:
\noindent $\star$) $\Z_{\lambda}[h_r|r>0]$ is a $\Z$-subalgebra of ${\mathbb{Q}}[h_r|r>0]$ (that is $\Z_{\lambda}[h_r|r>0]$ is closed under multiplication): this implies that $${\cal U}z^{im,+}=\Z_{\lambda}[h_r|r>0];$$
it also implies that $\Z[\hat h_k|k>0]\subseteq\Z_{\lambda}[h_r|r>0]$;
\noindent $\star\star$) $\Z[\hat h_k|k>0]$ is $\lambda_m$-stable for all $m>0$ (see proposition \ref{tmom}): this implies that $${\cal U}z^{im,+}=\Z[\hat h_k|k>0];$$
it also implies that $\Z_{\lambda}[h_r|r>0]\subseteq\Z[\hat h_k|k>0]$.
Hence $\star$) and $\star\star$) imply that ${\cal U}z^{im,+}=\Z_{\lambda}[h_r|r>0]=\Z[\hat h_k|k>0]$.
\noindent $\star$) has been proved in \cite{HG} by induction on a suitably defined degree. The first step of the induction is
the second assertion of \cite{HG}-lemma 5.11(b), proved in \cite{HG}-section 9: for all $k,l\in\N$, $\hat h_k\hat h_l-{k+l\choose k}\hat h_{k+l}$ is a linear combination {\it {with integral coefficients}} of elements of $B_{\lambda}$ {\it{of degree lower}} than the degree of $\hat h_{k+l}$.\\ In the proof the author uses that $B_{\lambda}$ is a ${\mathbb{Q}}$-basis of ${\mathbb{Q}}[h_r|r>0]$ and concentrates on the integrality of the coefficients: he studies the action of $\frak h$ on $\hat{{\frak sl_3}}^{\otimes N}$ where $\frak h$ is the commutative Lie algebra with basis $\{h_r|r>0\}$ and $N\in\N$ is large enough ($N$ is the maximum among the degrees of the elements of $B_{\lambda}$ appearing in $\hat h_k\hat h_l$ with non-integral coefficient, assuming that such an element exists): $\frak h$ is a subalgebra of $\hat{\frak sl}_2$ and there is an embedding of $\hat{\frak sl}_2$ in $\hat{{\frak sl_3}}$ for every vertex of the Dynkin diagram of ${\frak sl}_3$, so that fixing a vertex of the Dynkin diagram of ${\frak sl}_3$ induces an embedding ${\frak h}\subseteq\hat{\frak sl}_2\hookrightarrow\hat{{\frak sl_3}}$, hence an action of ${\frak h}$ on $\hat{{\frak sl_3}}$.
But the integral form of $\hat{{\frak sl_3}}$ defined as the $\Z$-span of a Chevalley basis is ${\cal U}z(\hat{{\frak sl_3}})$-stable;
since the stability under ${\cal U}z(\hat{{\frak sl_3}})$ is preserved by tensor products (\cite{HG}-section 6), the author can finally deduce the desired integrality property of $\hat h_k\hat h_l$ from the study of the $\frak h$-action on $\hat{{\frak sl_3}}^{\otimes N}$.
\vskip .3 truecm
\noindent Garland's argument has sometimes been misunderstood: this is the case for instance in \cite{JM}, where the authors affirm (in lemma 1.5) that \cite{HG}-lemma 5.11(b) implies that ${\cal U}z^{im,+}=\Z[\hat h_k|k>0]$, while, as discussed above, it just implies the inclusion $\Z[\hat h_k|k>0]\subseteq{\cal U}z^{im,+}=\Z_{\lambda}[h_r|r>0]$.
\noindent On the other hand Garland's argument strongly involves many results of the (integral) representation theory of the Kac-Moody algebras, while $\star$) is a property of the algebra ${\mathbb{Q}}[h_r|r>0]$ and of its integral forms that can be stated in a way completely independent of the Kac-Moody algebra setting:
$$\Z^{(sym)}[h_r|r>0]\subseteq\Z_{\lambda}[h_r|r>0].$$
\noindent The above considerations motivate the present appendix, whose aim is to propose a self-contained proof of $\star$), independent of the Kac-Moody algebra context: on one hand we think that a direct proof can help bring out the essential structure of the integral form of ${\mathbb{Q}}[h_r|r>0]$ arising from our study; on the other hand the idea of isolating the single pieces
and gluing them together after studying them separately is much in the spirit of this work, so that it is natural for us to explain also Garland's basis of ${\cal U}z^{im,+}$ through this approach; and finally we hope that presenting a different proof can also help to clarify the steps which appear more difficult in Garland's proof.
\vskip .3 truecm
\noindent In the following we go back to the description of $\Z[\hat h_k|k>0]$ as the algebra of the symmetric functions and we show that $B_{\lambda}$ is a basis of $\Z[\hat h_k|k>0]$ by comparing it with a well known $\Z$-basis of this algebra.
\end{remark}
\begin{remark}\label{rcdcp}
Recall that $\Z[\hat h_k|k>0]$ is the algebra of the symmetric functions and that $\forall n\in\N$ the projection $\pi_n:\Z[\hat h_k|k>0]\to\Z[x_1,...,x_n]^{{\cal{S}}_n}$ induces an isomorphism $\Z[\hat h_1,...,\hat h_n]\cong\Z[x_1,...,x_n]^{{\cal{S}}_n}$ through which $\hat h_k$ corresponds to the $k^{{\rm{th}}}$ elementary symmetric polynomial $e_k^{[n]}$, while $\pi_n(\hat h_k)=0$ if $k>n$ and $h_r$ corresponds to the sum of the $r^{{\rm{th}}}$-powers $\sum_{i=1}^nx_i^r$ $\forall r>0$ (see example \ref{rvsf}).
\noindent Then it is well known and obvious that:
\noindent i) $\forall{\bf{k}}:\Z_+\to\N$ finitely supported $\exists!(\sigma x)_{{\bf{k}}}\in\Z[\hat h_k|k>0]$ such that
$$\pi_n((\sigma x)_{\bf{k}})=\sum_{{a_1,...,a_n\atop\#\{i|a_i=m\}=k_m\ \forall m>0}}\prod_{i=1}^nx_i^{a_i}\in\Z[x_1,...,x_n]^{{\cal{S}}_n}\ \ \forall n\in\N;$$
\noindent ii) $\{(\sigma x)_{\bf{k}}|{\bf{k}}:\Z_+\to\N$ finitely supported$\}$ is a $\Z$-basis of $\Z[\hat h_k|k>0]$.
\noindent (It is the basis that in \cite{IM} is denoted by $\{m_{\lambda}|\lambda=(\lambda_1\geq\lambda_2\geq...\geq 0)\}$: $m_{\lambda}=(\sigma x)_{{\bf{k}}}$ where $\forall m>0$ $k_m=\#\{i|\lambda_i=m\}$).
\end{remark}
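\noindent For example, if $k_1=k_2=1$ and $k_m=0$ for $m>2$ (that is $\lambda=(2,1)$), then $\pi_n((\sigma x)_{\bf{k}})=\sum_{i\neq j}x_i^2x_j$ for all $n\geq 2$.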
\begin{notation}
As in remark \ref{rcdcp}, for all ${\bf{k}}:\Z_+\to\N$ finitely supported let us denote by $(\sigma x)_{\bf{k}}$ the limit of the elements $$\sum_{{a_1,...,a_n\atop\#\{i|a_i=m\}=k_m\ \forall m>0}}\prod_{i=1}^nx_i^{a_i}\ \ (n\in\N).$$
By abuse of notation, when $n\geq\sum_{m>0}k_m$ we shall write
$$(\sigma x)_{\bf{k}}=\sum_{{a_1,...,a_n\atop\#\{i|a_i=m\}=k_m\ \forall m>0}}\prod_{i=1}^nx_i^{a_i},$$
which is justified because, under the hypothesis that $n\geq\sum_{m>0}k_m$, ${\bf{k}}$ is determined by the set $\{(a_1,...,a_n)|\#\{i=1,...,n|a_i=m\}=k_m$ $\forall m>0\}$.
\end{notation}
\begin{definition} \label{basi}
$\forall n\in\N$ define $B_{\lambda}^{(n)}$, $B_x^{(n)}$, $\Z_{\lambda}^{(n)}$, $\Z_x^{(n)}\subseteq{\mathbb{Q}}[h_r|r>0]={\mathbb{Q}}[\hat h_k|k>0]$ as follows:
$$B_{\lambda}^{(n)}=\left\{b_{\bf{k}}=\prod_{m>0}\lambda_m(\hat h_{k_m})\in B_{\lambda}|\sum_{m>0}k_m\leq n\right\},$$
$$B_x^{(n)}=\left\{(\sigma x)_{\bf{k}}|\sum_{m>0}k_m\leq n\right\},$$
$\Z_{\lambda}^{(n)}$ is the $\Z$-module generated by $B_{\lambda}^{(n)}$,
$\Z_x^{(n)}$ is the $\Z$-module generated by $B_x^{(n)}$.
\end{definition}
\begin{remark}\label{vdbx}
By the very definition of $B_x^{(n)}$ we have that:
\noindent i) $B_x^{(n)}$ is a basis of $\Z_x^{(n)}\subseteq\Z[\hat h_k|k>0]=\sum_{n'\in\N}\Z_x^{(n')}$, see remark \ref{rcdcp}, ii);
\noindent ii) $h\in\Z_x^{(n)}$ means that for all $N\geq n$ each monomial in the $x_i$'s appearing in $\pi_N(h)$ with nonzero coefficient involves no more than $n$ indeterminates $x_i$; hence in particular $$h\in\Z_x^{(n)}, h'\in\Z_x^{(n')}\Rightarrow hh'\in\Z_x^{(n+n')}.$$
\end{remark}
\begin{lemma} \label{sxkp}
Let $n,n',n''\in\N$ and ${\bf{k}}',{\bf{k}}'':\Z_+\to\N$ be such that $n'+n''=n$, $\sum_{m>0}k'_m=n'$, $\sum_{m>0}k''_m=n''$. Then:
\noindent i) $(\sigma x)_{{\bf{k}}'}\cdot(\sigma x)_{{\bf{k}}''}\in\Z(\sigma x)_{{\bf{k}}'+{\bf{k}}''}\oplus\Z_x^{(n-1)}$;
\noindent ii) if $k'_mk''_m=0$ $\forall m>0$ then $(\sigma x)_{{\bf{k}}'}(\sigma x)_{{\bf{k}}''}-(\sigma x)_{{\bf{k}}'+{\bf{k}}''}\in\Z_x^{(n-1)}$.
\begin{proof}
That $(\sigma x)_{{\bf{k}}'}\cdot(\sigma x)_{{\bf{k}}''}$ lies in $\Z_x^{(n)}$ follows from remark \ref{vdbx},ii), so we just need to:
\noindent i) prove that if $\prod_{i=1}^nx_i^{a_i}$ with $a_i\neq 0$ $\forall i=1,...,n$ is the product of two monomials $M'$ and $M''$ appearing with nonzero coefficient respectively in $(\sigma x)_{{\bf{k}}'}$ and in $(\sigma x)_{{\bf{k}}''}$ then $\#\{i|a_i=m\}=k'_m+k''_m$ for all $m>0$;
\noindent ii) compute the coefficient of $(\sigma x)_{{\bf{k}}'+{\bf{k}}''}$ in the expression of $(\sigma x)_{{\bf{k}}'}\cdot(\sigma x)_{{\bf{k}}''}$ as a linear combination of the $(\sigma x)_{{\bf{k}}}$'s when $\forall m>0$ $k'_m$ and $k''_m$ are not simultaneously nonzero, and find that it is 1.
\noindent i) is obvious because the condition $a_i\neq 0$ $\forall i=1,...,n$ implies that the indeterminates involved in $M'$ and those involved in $M''$ are disjoint sets.
\noindent For ii) it is enough to show that, under the further condition on $k'_m$ and $k''_m$, the monomial $\prod_{i=1}^nx_i^{a_i}$ chosen in i) uniquely determines $M'$ and $M''$ such that $\prod_{i=1}^nx_i^{a_i}=M'M''$: indeed $$M'=\prod_{i:k'_{a_i}\neq 0}x_i^{a_i}\ \ {\rm{and}}\ M''=\prod_{i:k''_{a_i}\neq 0}x_i^{a_i}.$$
\end{proof}
\end{lemma}
\begin{lemma}\label{sldftv}
Let ${\bf{k}}:\Z_+\to\N$, $n\in\N$ be such that $\sum_{m>0}k_m=n$. Then:
\noindent i) if $\exists m>0$ such that $k_{m'}= 0$ for all $m'\neq m$ (equivalently $k_m=n$)
we have
$$(\sigma x)_{{\bf{k}}}=\lambda_m(\hat h_{n})=b_{{\bf{k}}}\in\Z_x^{(n)}\cap\Z_{\lambda}^{(n)};$$
\noindent ii) in general $b_{{\bf{k}}}-(\sigma x)_{{\bf{k}}}\in\Z_x^{(n-1)}$.
\begin{proof}
i) $\forall N\geq n$ we have $$(\sigma x)_{{\bf{k}}}=\sum_{1\leq i_1<...<i_n\leq N}x_{i_1}^m\cdot...\cdot x_{i_n}^m=\lambda_m\left(\sum_{1\leq i_1<...<i_n\leq N}x_{i_1}\cdot...\cdot x_{i_n}\right)=\lambda_m(e_n^{[N]})$$ so that $(\sigma x)_{{\bf{k}}}=\lambda_m(\hat h_{n})$.
\noindent ii) $b_{{\bf{k}}}=\prod_{m>0}\lambda_m(\hat h_{k_m})=\prod_{m>0}(\sigma x)_{{\bf{k}}^{[m]}}$ where $k^{[m]}_{m'}=\delta_{m,m'}k_m$ $\forall m,m'>0$; thanks to lemma \ref{sxkp},ii) we have that
$\prod_{m>0}(\sigma x)_{{\bf{k}}^{[m]}}-(\sigma x)_{\sum_m{\bf{k}}^{[m]}}\in\Z_x^{(n-1)}$; but $\sum_{m>0}{\bf{k}}^{[m]}={\bf{k}}$ and the claim follows.
\end{proof}
\end{lemma}
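\noindent For example, if $k_1=k_2=1$ and $k_m=0$ for $m>2$ (so that $n=2$), then $b_{{\bf{k}}}=\lambda_1(\hat h_1)\lambda_2(\hat h_1)$ corresponds to $(\sum_ix_i)(\sum_jx_j^2)=\sum_{i\neq j}x_ix_j^2+\sum_ix_i^3$, that is $b_{{\bf{k}}}=(\sigma x)_{{\bf{k}}}+\lambda_3(\hat h_1)$ with $\lambda_3(\hat h_1)\in\Z_x^{(1)}$, as claimed in ii).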
\begin{theorem}\label{gmrvs}
$B_{\lambda}$ is a $\Z$-basis of $\Z[\hat h_k|k>0]$ (thus $\Z[\hat h_k|k>0]=\Z_{\lambda}[h_r|r>0]$).
\begin{proof}
We prove by induction on $n$ that $B_{\lambda}^{(n)}$ is a $\Z$-basis of $\Z_x^{(n)}=\Z_{\lambda}^{(n)}$ $\forall n\in\N$, the case $n=0$ being obvious.
\noindent Let $n>0$: by the inductive hypothesis $B_{\lambda}^{(n-1)}$ and $B_x^{(n-1)}$ are both $\Z$-bases of $\Z_x^{(n-1)}=\Z_{\lambda}^{(n-1)}$; by definition
$B_x^{(n)}\setminus B_x^{(n-1)}$ represents a $\Z$-basis of $\Z_x^{(n)}/\Z_x^{(n-1)}$ while $B_{\lambda}^{(n)}\setminus B_{\lambda}^{(n-1)}$ represents a set of generators of the $\Z$-module $\Z_{\lambda}^{(n)}/\Z_{\lambda}^{(n-1)}$.
\noindent Now lemma \ref{sldftv},ii) implies that if $\sum_{m>0}k_m=n$ then $b_{{\bf{k}}}$ and $(\sigma x)_{{\bf{k}}}$ represent the same element in ${\mathbb{Q}}[\hat h_k|k>0]/\Z_x^{(n-1)}={\mathbb{Q}}[\hat h_k|k>0]/\Z_{\lambda}^{(n-1)}$.
\noindent Hence $B_{\lambda}^{(n)}\setminus B_{\lambda}^{(n-1)}$ represents a $\Z$-basis of $\Z_x^{(n)}/\Z_x^{(n-1)}=\Z_x^{(n)}/\Z_{\lambda}^{(n-1)}$, that is $B_{\lambda}^{(n)}$ is a $\Z$-basis of $\Z_x^{(n)}$; but $B_{\lambda}^{(n)}$ generates $\Z_{\lambda}^{(n)}$ and the claim follows.
\end{proof}
\end{theorem}
\subsection{Comparison with the Mitzman integral form}\label{appendC}
In the present appendix we compare the integral form $\tilde{\cal U}z=$ $^*{\cal U}_{\Z}(\hat{{\frak sl_3}}^{\!\!\chi})$ of $\tilde{\cal U}$ described in section \ref{ifa22} with the integral form ${\cal U}z(\hat{{\frak sl_3}}^{\!\!\chi})$ of the same algebra $\tilde{\cal U}$ introduced and studied by Mitzman in \cite{DM}, that we denote here by ${\cal U}m$ and that is easily defined as the $\Z$-subalgebra of $\tilde{\cal U}$ generated by the divided powers of the Kac-Moody generators $e_i,f_i$ ($i=0,1$): see also remark \ref{rfink}.
\noindent More precisely:
\begin{mydefinition}\label{ka22}
\noindent $\tilde{\cal U}$ is the enveloping algebra of the Kac-Moody algebra whose generalized Cartan matrix is $A_2^{(2)}=(a_{i,j})_{i,j\in\{0,1\}}=\left(\begin{matrix}2&-1\\-4&2\end{matrix}\right)$ (see \cite{VK}): it has generators $\{e_i,f_i,h_i|i=0,1\}$ and relations
$$[h_i,h_j]=0,\ \ [h_i,e_j]=a_{i,j}e_j,\ \ [h_i,f_j]=-a_{i,j}f_j,\ \ [e_i,f_j]=\delta_{i,j}h_i\ \ (i,j\in \{0,1\})$$
$$({\rm{ad}}e_i)^{1-a_{i,j}}(e_j)=0=({\rm{ad}}f_i)^{1-a_{i,j}}(f_j)\ \ (i\neq j\in \{0,1\}).$$
\end{mydefinition}
\begin{mydefinition} \label{mif}
The Mitzman integral form ${\cal U}m$ of $\tilde{\cal U}$ is the $\Z$-subalgebra of $\tilde{\cal U}$ generated by $\{e_i^{(k)},f_i^{(k)}|i=0,1,\ k\in\N\}$.
\end{mydefinition}
\begin{myremark}
The Kac-Moody presentation of $\tilde{\cal U}$ (definition \ref{ka22}) and its presentation given in definition \ref{a22} are identified through the following isomorphism:
$$e_1\mapsto x_0^+,\ \ f_1\mapsto x_0^-,\ \ h_1\mapsto h_0,\ \ e_0\mapsto{1\over 4}X_1^-,\ \ f_0\mapsto{1\over 4}X_{-1}^+,\ \ h_0\mapsto {1\over 4}c-{1\over 2} h_0.$$
\end{myremark}
\begin{mynotation} \label{mitNota}
In order to avoid confusion and heavy notation in what follows, we set:
$$y_{2r+1}^{\pm}={1\over 4}X_{2r+1}^{\pm},\ \ \mathcal{h}_r={1\over 2}h_r,\ \ \tilde c={1\over 4}c$$
where the $X_{2r+1}^{\pm}$'s, the $h_r$'s and $c$ are those introduced in definition \ref{a22}
(thus $e_0=y_1^-$, $f_0=y_{-1}^+$, while the Kac-Moody $h_0$ and $h_1$ appearing in definition \ref{ka22} are respectively $\tilde c-\mathcal{h}_0$ and $2\mathcal{h}_0$; moreover
${\cal U}m$ is the $\Z$-subalgebra of $\tilde{\cal U}$ generated by $\{(x_0^{\pm})^{(k)},(y_{\mp 1}^{\pm})^{(k)}|k\in\N\}$).
\end{mynotation}
\begin{myremark}\label{stau}
${\cal U}m$ is $\Omega$-stable, $\exp(\pm{\rm{ad}}e_i)$-stable and $\exp(\pm{\rm{ad}}f_i)$-stable.
In particular ${\cal U}m$ is stable under the action of
$$\tau_0=\exp({\rm{ad}}e_0)\exp(-{\rm{ad}}f_0)\exp({\rm{ad}}e_0)=\exp({\rm{ad}}y_1^-)\exp(-{\rm{ad}}y_{-1}^+)\exp({\rm{ad}}y_1^-)$$ and
$$\tau_1=\exp({\rm{ad}}e_1)\exp(-{\rm{ad}}f_1)\exp({\rm{ad}}e_1)=\exp({\rm{ad}}x_0^+)\exp(-{\rm{ad}}x_0^-)\exp({\rm{ad}}x_0^+)$$
(cf. \cite{JH}).
\begin{proof}
The claim for $\Omega$ follows at once from the definitions; the remaining claims are an immediate consequence of the identity
$({\rm{ad}}a)^{(n)}(b)=\sum_{r+s=n}(-1)^sa^{(r)}ba^{(s)}$.
\end{proof}
\end{myremark}
\begin{myremark}\label{emgfg}
Recalling the embedding $F:\hat{\cal U}\to\tilde{\cal U}$ defined in remark \ref{emgg}, theorem \ref{trm} implies that the $\Z$-subalgebra of $\tilde{\cal U}$ generated by the divided powers of the $y_{2r+1}^{\pm}$'s is the tensor product of the $\Z$-subalgebras
$\Z^{(div)}[y_{2r+1}^{\pm}|r{\cal I}n\Z]$, $\Z^{(sym)}[\mathcal{h}_{\pm r}|r>0]$, $\Z^{(bin)}[\mathcal{h}_0-\tilde c, 2\tilde c]$.
\end{myremark}
\noindent Mitzman completely described the integral form generated by the divided powers of the Kac-Moody generators in all the twisted cases; in case $A_2^{(2)}$ his result can be stated as follows, using our notations (see examples \ref{dvdpw}, \ref{binex} and \ref{rvsf}, definition \ref{bun} and notation \ref{mitNota}):
\begin{mytheorem}\label{mitz}
${\cal U}m\cong{\cal U}m^-\otimes_{\Z}{\cal U}m^0\otimes_{\Z}{\cal U}m^+$ where
$${\cal U}m^{\pm}\cong\Z^{(div)}[x_{2r}^{\pm}|r{\cal I}n\Z]\otimes_{\Z}\Z^{(div)}[y_{2r+1}^{\pm}|r{\cal I}n\Z]\otimes_{\Z}\Z^{(div)}[x_{2r+1}^{\pm}|r{\cal I}n\Z]\cong$$
$$\cong\Z^{(div)}[x_{2r+1}^{\pm}|r{\cal I}n\Z]\otimes_{\Z}\Z^{(div)}[y_{2r+1}^{\pm}|r{\cal I}n\Z]\otimes_{\Z}\Z^{(div)}[x_{2r}^{\pm}|r{\cal I}n\Z],$$
$${\cal U}m^0\cong\Z_{\lambda}[\mathcal{h}_{-r}|r>0]\otimes_{\Z}\Z^{(bin)}[2\mathcal{h}_0,\tilde c-\mathcal{h}_0]\otimes_{\Z}\Z_{\lambda}[\mathcal{h}_r|r>0].$$
The isomorphisms are all induced by the product in $\tilde{\cal U}$.
Remark that $\Z^{(bin)}[2\mathcal{h}_0,\tilde c-\mathcal{h}_0]=\Z^{(bin)}[\mathcal{h}_0-\tilde c,2\tilde c]$ (see example \ref{binex}) and $\Z_{\lambda}[\mathcal{h}_r|r>0]=\Z^{(sym)}[\mathcal{h}_r|r>0]$
(see theorem \ref{gmrvs}).
\end{mytheorem}
\vskip.3 truecm
\begin{myremark} \label{tmnv}
\noindent As in the case of $\hat{\frak sl}_2$ (see remark \ref{tmfv}) we can make explicit the relation between the
elements $\hat \mathcal{h}_k$ ($k>0$) and the elements $p_{n,1}$ ($n>0$) defined in \cite{F} following Garland's $\Lambda_{k}$'s.
\noindent Setting
$$\sum_{n\geq 0}p_nu^n=P(u)=\hat \mathcal{h}(-u)^{-1}$$ we have on one hand
$\Z[\hat \mathcal{h}_k|k>0]=\Z[p_{n}|n>0]$ and on the other hand $$p_0=1,\ \ p_n={1\over n}\sum_{r=1}^n\mathcal{h}_rp_{n-r}\ \forall n>0,$$
hence $p_n=p_{n,1}$ $\forall n\geq 0$ (see \cite{F}) and $\Z[\hat \mathcal{h}_k|k>0]=\Z[p_{n,1}|n>0]$.
\end{myremark}
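\noindent For instance, the recursion gives $p_{1,1}=p_1=\mathcal{h}_1$ and $p_{2,1}=p_2={1\over 2}(\mathcal{h}_1^2+\mathcal{h}_2)$.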
\begin{mycorollary}\label{yins}
$\tilde{\cal U}z\subsetneq{\cal U}m$.
\noindent More precisely:
$$\Z^{(div)}[X_{2r+1}^{\pm}|r\in\Z]\subsetneq\Z^{(div)}[y_{2r+1}^{\pm}|r\in\Z],$$
so that $\tilde{\cal U}z^+\subsetneq{\cal U}m^+$ and $\tilde{\cal U}z^-\subsetneq{\cal U}m^-$;
$$\Z^{(bin)}[h_0,c]=\Z^{(bin)}[2\mathcal{h}_0,4\tilde c]\subsetneq\Z^{(bin)}[2\mathcal{h}_0,\tilde c-\mathcal{h}_0]$$
and (see definition \ref{thuz})
$$\Z^{(sym)}[\varepsilon_rh_r|r>0]\subsetneq\Z^{(sym)}[\mathcal{h}_r|r>0]$$
(and similarly for the negative part of ${\cal U}m^0$), so that $\tilde{\cal U}z^0\subsetneq{\cal U}m^0$.
\begin{proof}
For $\Z^{(div)}$ and $\Z^{(bin)}$ the claim is obvious.
For $\Z^{(sym)}$ the inequality follows at once from the fact that $\mathcal{h}_1={h_1\over 2}$ does not belong to $\Z^{(sym)}[\varepsilon_rh_r|r>0]$, while
the inclusion follows from propositions \ref{convoluzioneintera} and \ref{emmepiallaerre} remarking that for all $r>0$ $\varepsilon_rh_r=2\varepsilon_r\mathcal{h}_r$.
\noindent Then the assertion for $\tilde{\cal U}z$ and ${\cal U}m$ follows from theorems \ref{trmA22} and \ref{mitz}.
\end{proof}
\end{mycorollary}
\begin{myremark} \label{mizRem}
Theorem \ref{mitz} can be deduced from the commutation formulas discussed in this paper and collected in appendix \ref{appendA}, thanks to the triangular decompositions (see remark \ref{tefp})
and to the following observations:
\noindent
i) ${\cal U}m^0$ is a $\Z$-subalgebra of $\tilde{\cal U}$:
\noindent indeed, since the map $h_r\mapsto \mathcal{h}_r$, $c\mapsto\tilde c$ defines an automorphism of $\tilde{\cal U}^0$, proposition \ref{zkd} implies that
$$\hat \mathcal{h}_+(u)\hat \mathcal{h}_-(v)=\hat \mathcal{h}_-(v)(1-uv)^{-4\tilde c}(1+uv)^{2\tilde c}\hat \mathcal{h}_+(u).$$
\noindent ii) ${\cal U}m^+$ and ${\cal U}m^-$ are $\Z$-subalgebras of $\tilde{\cal U}$:
\noindent indeed the $[(x_{2r}^+)^{(k)},(x_{2s+1}^+)^{(l)}]$'s (the only non trivial commutators in ${\cal U}m^+$) lie in $\tilde{\cal U}z^+\subseteq{\cal U}m^+$; on the other hand ${\cal U}m^-=\Omega({\cal U}m^+)$.
\noindent iii) $\exp\left(\sum_{r>0}a_rx_r^+\right)\in{\cal U}m^+$ if $a_r\in\Z$ for all $r>0$:
\noindent see lemma \ref{cle},viii), formula (\ref{sdivmv}) and the relation $[x_{2r}^+,x_{2s+1}^+]=-4y_{2r+2s+1}^+$.
\noindent iv) ${\cal U}m^0{\cal U}m^+$ and ${\cal U}m^-{\cal U}m^0$ are $\Z$-subalgebras of $\tilde{\cal U}$:
\noindent that $(y_{2r+1}^+)^{(k)}{\cal U}m^0\subseteq{\cal U}m^0{\cal U}m^+$ follows from remark \ref{emgfg}; moreover by propositions \ref{bdm} and \ref{hh} we get
$$(x_r^+)^{(k)}{\mathcal{h}_0-\tilde c\choose l}={\mathcal{h}_0-\tilde c-k\choose l}(x_r^+)^{(k)},$$
$$(x_r^+)^{(k)}\hat \mathcal{h}_+(u)=\hat \mathcal{h}_+(u)\left({1-uT^{-1}\over(1+uT^{-1})^2}x_r^+{\cal R}ight)^{(k)},$$
$$\lambda_{-1}(x_r^+)=x_{-r}^+,\ \
\lambda_{-1}(\hat \mathcal{h}_+(u))=\hat \mathcal{h}_-(u).$$
On the other hand ${\cal U}m^-{\cal U}m^0=\Omega({\cal U}m^0{\cal U}m^+)$.
\noindent v) ${\cal U}m^-{\cal U}m^0{\cal U}m^+$ is a $\Z$-subalgebra of $\tilde{\cal U}$:
$$(x_r^+)^{(k)}(x_s^-)^{(l)}\in\tilde{\cal U}z=\tilde{\cal U}z^-\tilde{\cal U}z^0\tilde{\cal U}z^+\subseteq{\cal U}m^-{\cal U}m^0{\cal U}m^+$$
(see theorem \ref{trmA22} and corollary \ref{yins}),
$$(y_{2r+1}^+)^{(k)}(y_{2s+1}^-)^{(l)}\in{\cal U}m^-{\cal U}m^0{\cal U}m^+$$
(see remark \ref{emgfg}), and
$$\exp(x_0^+u)\exp(y_1^-v)=$$
$$=\exp({\alpha_-})\exp({\beta_-})\exp({\gamma_-})\hat \mathcal{h}_+(u^2v)
\exp(\gamma_+)\exp(\beta_+)\exp({\alpha_+})$$
where
$$\alpha_-={uv\over 1-w^2u^4v^2}.x_1^-,\ \ \ \ \alpha_+={u\over 1-w^2u^4v^2}.x_0^+,$$
$$\beta_-={(1+3\cdot wu^4v^2)v\over (1+wu^4v^2)^2}.y_1^-,\ \ \ \ \beta_+={(1-wu^4v^2)u^4v\over (1+wu^4v^2)^2}.y_1^+,$$
$$\gamma_-={-w^2u^3v^2\over 1-w^2u^4v^2}.x_0^-,\ \ \ \ \gamma_+={-u^3v\over 1-w^2u^4v^2}.x_1^+$$
\noindent (see proposition \ref{xmenogrande} recalling definition \ref{qwmodulo} and remark \ref{whtilde}), so that
$(x_0^+)^{(k)}(y_1^-)^{(l)}$ lies in ${\cal U}m^-{\cal U}m^0{\cal U}m^+$ for all $k,l\geq 0$; from this it follows that
$(x_r^+)^{(k)}(y_{2s+1}^-)^{(l)}$ and $(y_{2s+1}^+)^{(l)}(x_r^-)^{(k)}$ lie in ${\cal U}m^-{\cal U}m^0{\cal U}m^+$ for all $r,s\in\Z$, $k,l\geq 0$ because
${\cal U}m^-{\cal U}m^0{\cal U}m^+$ is stable under $T^{\pm 1}$, $\lambda _m$ ($m\in\Z$ odd) and $\Omega$, and
$$x_r^+=T^{-r}\lambda_{2r+2s+1}(x_0^+),\ \ y_{2s+1}^-=(-1)^rT^{-r}\lambda_{2r+2s+1}(y_1^-),$$
$$y_{2s+1}^+=\Omega(y_{-2s-1}^-),\ \ x_r^-=\Omega(x_{-r}^+);$$
\noindent vi) ${\cal U}m\subseteq{\cal U}m^-{\cal U}m^0{\cal U}m^+
$:
\noindent it follows from v) since $(x_0^{\pm})^{(k)}\in\Z^{(div)}[x_{2r}^{\pm}|r\!\in\!\Z]$ and $(y_{\mp 1}^{\pm})^{(k)}\in\Z^{(div)}[y_{2r+1}^{\pm}|r\!\in\!\Z]$.
\noindent vii) ${\cal U}m^{\pm}\subseteq{\cal U}m$:
\noindent this follows from remark \ref{stau}, observing that
$$\tau_0(x_r^+)=(-1)^{r-1}x_{r+1}^-,\ \ \tau_1(x_r^-)=x_r^+,\ \ \tau_1(y_{2r+1}^-)=y_{2r+1}^+,\ \ \tau_0(y_{2r+1}^+)=-y_{2r+3}^-.$$
\noindent viii) ${\cal U}m^0\subseteq{\cal U}m$:
\noindent it follows from vii), v) and the stability under $\Omega$.
\noindent ix) ${\cal U}m^-{\cal U}m^0{\cal U}m^+\subseteq{\cal U}m$:
\noindent this is just vii) and viii) together.
Then ${\cal U}m={\cal U}m^-{\cal U}m^0{\cal U}m^+$, which is the claim.
\end{myremark}
\begin{myremark}\label{rfink}
As one can see from remark \ref{mizRem},vii), $$\{x_r^{\pm},y_{2r+1}^{\pm},\mathcal{h}_s,2\mathcal{h}_0,\tilde c-\mathcal{h}_0|r,s\in\Z, s\neq 0\}$$
is, up to signs, a Chevalley basis of $\hat{{\frak sl_3}}^{\!\!\chi}$ (see \cite{DM}).
\noindent It is actually through these basis elements that Mitzman introduces, following \cite{HG}, the integral form of $\tilde{\cal U}$, as the $\Z$-subalgebra of $\tilde{\cal U}$ generated by
$$\{(x_r^{\pm})^{(k)},(y_{2r+1}^{\pm})^{(k)}|r\in\Z,k\in\N\};$$
but this $\Z$-subalgebra is precisely the algebra ${\cal U}m$ introduced in definition \ref{mif}: indeed
it turns out to be generated over $\Z$ just by $\{e_i^{(k)},f_i^{(k)}|i=0,1,\ k\geq 0\}$, that is by $\{(x_0^{\pm})^{(k)},(y_{\mp 1}^{\pm})^{(k)}|k\geq 0\}$, thanks to remarks \ref{stau} and \ref{mizRem},vii).
\end{myremark}
\subsection{List of Symbols}\label{appendD}
\underline{Lie Algebras and Commutative Algebras}:
\begin{abbrv}
\item[$S^{(div)}$]
Example \ref{dvdpw} \\
\item[$S^{(bin)}$]
Example \ref{binex} \\
\item[$S^{(sym)}$]
Example \ref{rvsf} \\
\item[$\frak {sl_2}$]
Definition \ref{sl2} \\
\item[$\hat{\frak sl_2}$]
Definition \ref{hs2} \\
\item[$\hat{{\frak sl_3}}^{\!\!\chi}$]
Definition \ref{a22} \\
\end{abbrv}
\underline{Enveloping Algebras}:
\begin{abbrv}
\item[$ {\cal U}z ^{re, \pm}, \;{\cal U}z^{im,\pm}, \; {\cal U}z^{\frak h},\; ^*{\cal U}_{\Z}, \; ^*{\cal U}_{\Z}^{imm, \pm}$]
Section \ref{intr} \\
\item[${\cal U}, \; {\cal U}z$]
Definition \ref{sl2} \\
\item[${\cal U}^+,\; {\cal U}^-,\; {\cal U}^0$]
Theorem \ref{trdc} \\
\item[$\hat{\cal U},\; \hat{\cal U}^{+},\; \hat{\cal U}^{-}, \; \hat{\cal U}^{0}, \; \hat{\cal U}^{0,\pm},\; \hat{\cal U}^{0,0}$ ]
Definition \ref{hs2} \\
\item[$\hat{\cal U}z,\; \hat{\cal U}z^\pm, \; \hat{\cal U}z^{0,\pm}, \; \hat{\cal U}z^{0,0}$ ]
Definition \ref{hhuz} \\
\item[$\tilde{\cal U},\; \tilde{\cal U}^{\pm},\; \tilde{\cal U}^{0},\; \tilde{\cal U}^{\pm,0},\; \tilde{\cal U}^{\pm,1},\; \tilde{\cal U}^{\pm,c}, \; \tilde{\cal U}^{0,\pm},\; \tilde{\cal U}^{0,0}$]
Definition \ref{a22} \\
\item[$\tilde{\cal U}z,\; \tilde{\cal U}z^{\pm},\; \tilde{\cal U}z^{0},\; \tilde{\cal U}z^{\pm,0},\; \tilde{\cal U}z^{\pm,1},\; \tilde{\cal U}z^{\pm,c}, \; \tilde{\cal U}z^{0,\pm},\; \tilde{\cal U}z^{0,0}$]
Definition \ref{thuz} \\
\item[${\cal U}m$]
Definition \ref{mif}\\
\item[${\cal U}m^-, \;{\cal U}m^0, \;{\cal U}m^+ \;$]
Theorem \ref{mitz}\\
\end{abbrv}
\underline{Bases}:
\begin{abbrv}
\item[$B^{re,\pm}, \; B^{im,\pm}, \;B^{\frak h}$]
Section \ref{intr} \\
\item[$B^\pm,\; B^{0, \pm},\; B^{0,0} $ ]
Theorem \ref{trm} \\
\item[$B^{\pm,0},\; B^{\pm,1},\; B^{\pm,c} $ ]
Theorem \ref{trmA22} \\
\item[$B_\lambda, \; B_\lambda^{[n]}, \; B_x, \;B_x^{[n]}$]
Definitions \ref{bun} and \ref{basi} \\
\end{abbrv}
\underline{Elements and their generating series}:
\begin{abbrv}
\item[$\Lambda_r(\xi(k))$]
Section \ref{intr} \\
\item[$a^{(k)}, \; \exp(au)$]
Notation \ref{ntdvd} \\
\item[$\binom{a}{k}, \; (1+u)^a$]
Notation \ref{ntbin} \\
\item[$\hat p(u),\; \hat p_r$]
Example \ref{rvsf} \\
\item[$\hat h_r^{\{a\}},\; \hat h_+^{\{a\}}(u)$]
Notation \ref{hcappucciof} \\
\item[$x^\pm_r, \; h_r, \;c$]
Definition \ref{hs2} and Definition \ref{a22}\\
\item[$X^\pm_{2r+1}$]
Definition \ref{a22}\\
\item[$x^\pm(u),\; h_\pm(u), \; \hat h_\pm(u), \; \hat h_r$]
Notation \ref{hgens} \\
\item[$\tilde h_{\pm}(u), \; \tilde h_{\pm r}$]
Definition \ref{thuz} \\
\item[$e_i, \; f_i, \; h_i$]
Remark \ref{ka22}\\
\item[$y_{2r+1}^\pm , \; \mathcal{h}_r, \; \tilde c$]
Notation \ref{mitNota}\\
\item[$\mathcal{h}_\pm(u)$]
Remark \ref{mizRem}\\
\end{abbrv}
\underline{Anti/auto/homomorphisms}:
\begin{abbrv}
\item[$\lambda_m, \; \lambda_m ^{[n]}$]
Proposition \ref{tmom} \\
\item[$ev$]
Equation \ref{evaluation} \\
\item[$\sigma, \; \Omega,\; T, \; \lambda_m$]
Definition \ref{hto} and Definition \ref{tto} \\
\item[$\tilde \lambda_m$]
Lemma \ref{ometiomecap} \\
\end{abbrv}
\underline{Other symbols}:
\begin{abbrv}
\item[$ 1\!\!\!\!1, \; 1\!\!\!\!1^{(m)}, \; 1\!\!\!\!1_r, \; 1\!\!\!\!1^{(m)}_r$]
Notation \ref{hcappucciof} \\
\item[$L_a, \; R_a$]
Notation \ref{lard} \\
\item[$\varepsilon_r$]
Definition \ref{thuz} \\
\item[$L, \; L^\pm,\; L^0, \; L^{\pm, 0}, \; L^{\pm, 1}, \; L^{\pm, c}$]
Definition \ref{sottoalgebraL} \\
\item[$w.$]
Definition \ref{qwmodulo} \\
\item[$d, \; \tilde d, \; d_n, \; \tilde d_n$]
Notation \ref{notedn} \\
\item[$\delta_n$]
Remark \ref{hhdehh} \\
\end{abbrv}
\vskip .5 truecm
\end{document} |
\begin{document}
\title{Broadband teleportation}
\author{P.\ van Loock and Samuel L.\ Braunstein}
\address{Quantum Optics and Information Group,\\
School of Informatics, University of Wales, Bangor LL57 1UT, United Kingdom}
\author{H.\ J.\ Kimble}
\address{Norman Bridge Laboratory of Physics 12-33,\\
California Institute of Technology, Pasadena, California 91125}
\maketitle
\begin{abstract}
Quantum teleportation of an unknown broadband electromagnetic
field is investigated. The continuous-variable teleportation
protocol by Braunstein and Kimble [Phys.\ Rev.\ Lett.\
{\bf 80}, 869 (1998)] for teleporting
the quantum state of a single mode of the electromagnetic field is
generalized for the case of a multimode field with finite bandwidth.
We discuss criteria for continuous-variable teleportation with
various sets of input states and apply them to the teleportation
of broadband fields. We first consider as a set of input fields
(from which an independent state preparer draws the inputs to be
teleported) arbitrary pure Gaussian states with unknown coherent
amplitude (squeezed or coherent states).
This set of input states, further restricted to an alphabet of
coherent states, was used in the experiment by Furusawa {\it et al.}
[Science {\bf 282}, 706 (1998)]. It requires unit-gain teleportation
for optimizing the teleportation fidelity.
In our broadband scheme, the excess noise added through unit-gain
teleportation due to the finite degree of the squeezed-state entanglement
is just twice the (entanglement) source's squeezing spectrum for its
``quiet quadrature.'' The teleportation of one half of an
entangled state (two-mode squeezed vacuum state), i.e.,
``entanglement swapping,'' and its verification are optimized under
a certain nonunit gain condition. We will also give a broadband
description of this continuous-variable
entanglement swapping based on the single-mode scheme by
van Loock and Braunstein [Phys.\ Rev. A {\bf 61}, 10302 (2000)].
\end{abstract}
\section{Introduction}
Teleportation of an unknown quantum state is its disembodied transport
through a classical channel, followed by its reconstitution, using the
quantum resource of entanglement. Quantum information cannot
be transmitted reliably via a classical channel alone, as this would
allow us to replicate the classical signal and so produce copies of the
initial state, thus violating the no-cloning theorem \cite{Wootters}.
More intuitively, any attempted measurement of the initial state only
obtains partial information due to the Heisenberg uncertainty principle
and the subsequently collapsed wave packet forbids information gain
about the original state from further inspection. Attempts to circumvent
this disability with more generalized measurements also fail \cite{Kraus}.
Quantum teleportation was first proposed to transport an unknown state
of any discrete quantum system, e.g., a spin-$\frac{1}{2}$ particle
\cite{Benn}. In order to accomplish the teleportation, classical and
quantum methods must go hand in hand. A part of the information encoded
in the unknown input state is transmitted via the
quantum correlations between two separated subsystems in an entangled
state shared by the sender and the receiver. In addition, classical
information must be sent via a conventional channel. For the teleportation
of a spin-$\frac{1}{2}$-particle state, the entangled state required
is a pair of spins in a Bell state \cite{Sam2}. The classical information
that has to be transmitted contains two bits in this case.
Important steps toward the experimental implementation of quantum
teleportation of single-photon polarization states have already
been accomplished \cite{Bou,Mart}. However, a complete realization of
the original teleportation proposal \cite{Benn} has not been achieved
in these experiments, as either the state to be teleported is not
independently coming from the outside \cite{Mart} or destructive detection
of the photons in the teleported state is employed as part of the protocol
\cite{Bou}.
In the latter case, a teleported state did not emerge for subsequent
examination or exploitation. This situation has been termed ``a posteriori''
teleportation, being accomplished via post selection of photoelectric
counting events \cite{Sam3}. Without post selection, the fidelity would not
have exceeded the value $\case{2}{3}$ required.
The teleportation of
continuous quantum variables such as position and momentum of a particle
\cite{Vaid} relies on the entanglement of the states in the original
Einstein, Podolsky, and Rosen (EPR) paradox \cite{Einst}.
In quantum optical terms, the observables analogous to the two conjugate
variables position and momentum of a particle are the quadrature
amplitudes of a single mode of the electromagnetic field \cite{Walls}.
By considering the finite (nonsingular) degree of correlation between
these quadratures in a two-mode squeezed state \cite{Walls},
a realistic implementation for the teleportation of continuous
quantum variables was proposed \cite{Sam}.
Based on this proposal, in fact, quantum teleportation of arbitrary
coherent states has been achieved with a fidelity $F=0.58\pm 0.02$
\cite{Furu}.
Without using entanglement, by purely classical communication,
an average fidelity of 0.5 is the best that can be achieved
if the set of input states contains all coherent states \cite{Fuchs}.
The scheme with continuous quadrature amplitudes of a single
mode enables an {\it ``a priori"} (or ``unconditional") teleportation with
high efficiency \cite{Sam}, as reported in Refs.~\onlinecite{Sam4,Furu}.
In this experiment, three criteria necessary for quantum teleportation were
achieved:
1. An unknown quantum state enters the sending station for teleportation.
2. A teleported state emerges from the receiving station for subsequent
evaluation or exploitation.
3. The degree of overlap between the input and the teleported states is
higher than that which could be achieved if the sending and the receiving
stations were linked only by a classical channel.
In continuous-variable teleportation, the teleportation process acts on an
infinite-dimensional Hilbert space instead of the two-dimensional Hilbert
space for the discrete spin variables. However, an arbitrary electromagnetic
field has an infinite number of modes, or in other words, a finite bandwidth
containing a continuum of modes. Thus, the teleportation of the quantum
state of a broadband electromagnetic field requires the teleportation of a
quantum state which is defined in the tensor product space of an infinite
number of infinite-dimensional Hilbert spaces.
The aim of this paper is to extend the treatment of Ref.~\onlinecite{Sam}
to the case of a broadband field, and thereby to provide the theoretical
foundation for laboratory investigations as in Refs.~\onlinecite{Sam4,Furu}.
In particular, we demonstrate that the two-mode squeezed state output of
a nondegenerate optical parametric amplifier (NOPA) \cite{Ou} is a suitable
EPR ingredient for the efficient teleportation of a broadband
electromagnetic field.
In the three above mentioned teleportation experiments,
in Innsbruck \cite{Bou}, in Rome \cite{Mart}, and in Pasadena \cite{Furu},
the nonorthogonal input states to be teleported were
single-photon polarization states (``qubits'') \cite{Bou,Mart} and
coherent states \cite{Furu}.
From a true quantum teleportation device, however, we would also require
the capability of teleporting the entanglement source itself.
This teleportation of one half of an entangled state (``entanglement
swapping'' \cite{Zuk}) means to entangle two quantum systems
that have never directly interacted with each other.
For discrete variables, a demonstration of entanglement swapping
with single photons has been reported by Pan {\it et al.} \cite{Pan}.
For continuous variables, experimental entanglement swapping has not yet
been realized in the laboratory, but there have been several theoretical
proposals of such an experiment.
Polkinghorne and Ralph \cite{Polk} suggested teleporting
polarization-entangled states of single photons using squeezed-state
entanglement where the output correlations are verified via Bell
inequalities. Tan \cite{Tan} and van Loock and Braunstein \cite{PvL}
considered the unconditional teleportation (without post selection
of ``successful'' events by photon detections) of one half of
a two-mode squeezed state using different protocols and verification.
Based on the single-mode scheme of Ref.~\onlinecite{PvL},
we will also present a broadband description of continuous-variable
entanglement swapping.
\section{Teleportation of a single mode}
In the teleportation scheme of a single mode of the electromagnetic field
(for example, representing a single pulse or wave packet),
the shared entanglement is a two-mode squeezed vacuum state \cite{Sam}.
For infinite squeezing, this state contains exactly analogous
quantum correlations as does the state described in the original EPR
paradox, where the quadrature amplitudes of the two modes play the roles
of position and momentum \cite{Sam}. The entangled state is sent in two
halves: one to ``Alice'' (the teleporter or sender) and the
other one to ``Bob'' (the receiver), as illustrated in Fig.~1.
In order to perform the teleportation, Alice has to couple the input
mode she wants to teleport with her ``EPR mode'' at a beam splitter.
The ``Bell detection'' of the $x$ quadrature at one beam splitter
output, and of the $p$ quadrature at the other output, yields the
classical results to be sent to Bob via a classical communication channel.
In the limit of an infinitely squeezed EPR source, these classical results
contain no information about the mode to be teleported. This is analogous
to the Bell-state measurement of the spin-$\frac{1}{2}$-particle pair by
Alice for the teleportation of a spin-$\frac{1}{2}$-particle state.
The measured Bell state of the spin-$\frac{1}{2}$-particle pair determines
whether the particles have equal or different spin projections.
The spin projection of the individual particles, i.e., Alice's EPR particle
and her unknown input particle, remains completely unknown \cite{Benn}.
According to this analogy, we call Alice's quadrature measurements for the
teleportation of the state of a single mode (and of a multimode field in
the following sections) ``Bell detection.''
Due to this Bell detection, the entanglement between Alice's ``EPR mode''
and Bob's ``EPR mode'' means that suitable phase-space displacements of
Bob's mode convert it into a replica of Alice's unknown input mode
(a perfect replica for infinite squeezing).
In order to perform these displacements, Bob needs the classical
results of Alice's Bell measurement.
The previous protocol for the quantum teleportation of
continuous variables used the Wigner distribution and its convolution
formalism \cite{Sam}. The teleportation of a single mode
of the electromagnetic field can also be recast in terms of Heisenberg
equations for the quadrature amplitude operators, which is the formalism
that we employ in this paper. For that purpose,
the Wigner function $W_{\rm EPR}$ describing the entangled state shared
by Alice and Bob \cite{Sam} is replaced by equations for the
quadrature amplitude operators of a two-mode squeezed vacuum state. Two
independently squeezed vacuum modes can be described by \cite{Walls}
\begin{eqnarray}\label{1.1}
&&\hat{\bar{x}}_1=e^{r} \hat{\bar{x}}^{(0)}_1,\;\;\;\hat{\bar{p}}_1=
e^{-r} \hat{\bar{p}}^{(0)}_1,
\nonumber\\
&&\hat{\bar{x}}_2=e^{-r} \hat{\bar{x}}^{(0)}_2,\;\;\;\hat{\bar{p}}_2=
e^{r} \hat{\bar{p}}^{(0)}_2,
\end{eqnarray}
where a superscript `$(0)$' denotes initial vacuum modes and $r$ is the
squeezing parameter.
Superimposing the two squeezed modes at a 50/50 beam splitter
yields the two output modes
\begin{eqnarray}\label{1.2}
&&\hat{x}_1=\frac{1}{\sqrt{2}}e^{r} \hat{\bar{x}}^{(0)}_1
+\frac{1}{\sqrt{2}}e^{-r} \hat{\bar{x}}^{(0)}_2,\;\;\;
\hat{p}_1=\frac{1}{\sqrt{2}}e^{-r} \hat{\bar{p}}^{(0)}_1
+\frac{1}{\sqrt{2}}e^{r} \hat{\bar{p}}^{(0)}_2,\nonumber\\
&&\hat{x}_2=\frac{1}{\sqrt{2}}e^{r} \hat{\bar{x}}^{(0)}_1
-\frac{1}{\sqrt{2}}e^{-r} \hat{\bar{x}}^{(0)}_2,\;\;\;
\hat{p}_2=\frac{1}{\sqrt{2}}e^{-r} \hat{\bar{p}}^{(0)}_1
-\frac{1}{\sqrt{2}}e^{r} \hat{\bar{p}}^{(0)}_2.
\end{eqnarray}
The output modes 1 and 2 are now entangled to a finite degree in a
two-mode squeezed vacuum state. In the limit of infinite squeezing,
$r\to\infty$, both output modes become infinitely noisy, but also the
EPR correlations between them become ideal:
$(\hat{x}_1-\hat{x}_2)\to 0$, $(\hat{p}_1+\hat{p}_2)\to 0$.
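For later use it is worth writing these correlations out explicitly; subtracting and adding the quadratures of Eqs.~(\ref{1.2}) directly gives
\begin{eqnarray}
\hat{x}_1-\hat{x}_2=\sqrt{2}e^{-r}\hat{\bar{x}}^{(0)}_2,\;\;\;
\hat{p}_1+\hat{p}_2=\sqrt{2}e^{-r}\hat{\bar{p}}^{(0)}_1,\nonumber
\end{eqnarray}
so that each of these combinations has variance
$2e^{-2r}\langle\Delta\hat{x}^2\rangle_{\rm vacuum}$, vanishing
exponentially with the squeezing parameter $r$.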
Now mode 1 is sent to Alice and mode 2 is sent to Bob. Alice's mode is
then superimposed at a 50/50 beam splitter with the input mode
``in'':
\begin{eqnarray}\label{1.3}
&&\hat{x}_{\rm u}=\frac{1}{\sqrt{2}}\hat{x}_{\rm in}-\frac{1}{\sqrt{2}}
\hat{x}_1,\;\;\;
\hat{p}_{\rm u}=\frac{1}{\sqrt{2}}\hat{p}_{\rm in}-\frac{1}{\sqrt{2}}
\hat{p}_1,\nonumber\\
&&\hat{x}_{\rm v}=\frac{1}{\sqrt{2}}\hat{x}_{\rm in}+\frac{1}{\sqrt{2}}
\hat{x}_1,\;\;\;
\hat{p}_{\rm v}=\frac{1}{\sqrt{2}}\hat{p}_{\rm in}+\frac{1}{\sqrt{2}}
\hat{p}_1.
\end{eqnarray}
Using Eqs.~(\ref{1.3}) we will find it useful to write Bob's mode 2 as
\begin{eqnarray}\label{mode2}
\hat{x}_2&=&\hat{x}_{\rm in}-(\hat{x}_1-\hat{x}_2)
-\sqrt{2}\hat{x}_{\rm u}\nonumber\\
&=&\hat{x}_{\rm in}-\sqrt{2}e^{-r} \hat{\bar{x}}^{(0)}_2
-\sqrt{2}\hat{x}_{\rm u},\nonumber\\
\hat{p}_2&=&\hat{p}_{\rm in}+(\hat{p}_1+\hat{p}_2)
-\sqrt{2}\hat{p}_{\rm v}\nonumber\\
&=&\hat{p}_{\rm in}+\sqrt{2}e^{-r} \hat{\bar{p}}^{(0)}_1
-\sqrt{2}\hat{p}_{\rm v}.
\end{eqnarray}
Alice's Bell detection yields certain classical values
$x_{\rm u}$ and $p_{\rm v}$ for $\hat{x}_{\rm u}$ and $\hat{p}_{\rm v}$.
The quantum variables $\hat{x}_{\rm u}$ and $\hat{p}_{\rm v}$
become classically determined, random variables.
We indicate this by turning
$\hat{x}_{\rm u}$ and $\hat{p}_{\rm v}$ into
$x_{\rm u}$ and $p_{\rm v}$.
The classical probability distribution of
$x_{\rm u}$ and $p_{\rm v}$ is associated with the quantum statistics
of the previous operators \cite{Sam}.
Now, due to the entanglement, Bob's mode 2 collapses into
states that for $r\to\infty$ differ from Alice's input state
only in (random) classical phase-space displacements. After receiving
Alice's classical results $x_{\rm u}$ and $p_{\rm v}$, Bob displaces
his mode,
\begin{eqnarray}\label{1.5}
\hat{x}_2\longrightarrow\hat{x}_{\rm tel}&=&\hat{x}_2+\Gamma\sqrt{2}
x_{\rm u},\nonumber\\
\hat{p}_2\longrightarrow\hat{p}_{\rm tel}&=&\hat{p}_2+\Gamma\sqrt{2}
p_{\rm v},
\end{eqnarray}
thus accomplishing the teleportation \cite{Sam}. The parameter $\Gamma$
describes a normalized gain for the transformation from classical
photocurrent to complex field amplitude. For $\Gamma=1$,
Bob's displacement eliminates $x_{\rm u}$ and $p_{\rm v}$
appearing in Eqs.~(\ref{mode2}) after the collapse of $\hat{x}_{\rm u}$
and $\hat{p}_{\rm v}$ due to the Bell detection.
The teleported field then becomes
\begin{eqnarray}\label{1.6}
\hat{x}_{\rm tel}&=&\hat{x}_{\rm in}-\sqrt{2}e^{-r}
\hat{\bar{x}}^{(0)}_2,
\nonumber\\
\hat{p}_{\rm tel}&=&\hat{p}_{\rm in}+\sqrt{2}e^{-r}
\hat{\bar{p}}^{(0)}_1.
\end{eqnarray}
For an arbitrary gain $\Gamma$, we obtain
\begin{eqnarray}\label{gain}
\hat{x}_{\rm tel}&=&\Gamma\hat{x}_{\rm in}-\frac{\Gamma-1}{\sqrt{2}}
e^{r}\hat{\bar{x}}^{(0)}_1-\frac{\Gamma+1}{\sqrt{2}}e^{-r}
\hat{\bar{x}}^{(0)}_2,\nonumber\\
\hat{p}_{\rm tel}&=&\Gamma\hat{p}_{\rm in}+\frac{\Gamma-1}{\sqrt{2}}
e^{r}\hat{\bar{p}}^{(0)}_2+\frac{\Gamma+1}{\sqrt{2}}e^{-r}
\hat{\bar{p}}^{(0)}_1.
\end{eqnarray}
Note that these equations take no Bell detector inefficiencies into
account.
Consider the case $\Gamma=1$. For infinite squeezing
$r\to\infty$, Eqs.~(\ref{1.6}) describe perfect teleportation of the
quantum state of the input mode. On the other hand, for the classical
case of $r=0$, i.e., no squeezing and hence no entanglement, each of
the teleported quadratures has {\it two} additional units of vacuum
noise compared to the original input quadratures. These two units are
so-called quantum duties or ``quduties'' which have to be paid when
crossing the border between quantum and classical domains \cite{Sam}.
The two quduties represent the minimal tariff for every
``classical teleportation'' scheme \cite{Fuchs}.
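Explicitly, Eqs.~(\ref{1.6}) give
$\hat{x}_{\rm tel}-\hat{x}_{\rm in}=-\sqrt{2}e^{-r}\hat{\bar{x}}^{(0)}_2$ and
$\hat{p}_{\rm tel}-\hat{p}_{\rm in}=\sqrt{2}e^{-r}\hat{\bar{p}}^{(0)}_1$, so that
\begin{eqnarray}
\langle\Delta(\hat{x}_{\rm tel}-\hat{x}_{\rm in})^2\rangle=
\langle\Delta(\hat{p}_{\rm tel}-\hat{p}_{\rm in})^2\rangle=
2e^{-2r}\langle\Delta\hat{x}^2\rangle_{\rm vacuum},\nonumber
\end{eqnarray}
which indeed amounts to two vacuum units in each quadrature for $r=0$
and vanishes in the limit $r\to\infty$.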
One quduty, the unit of vacuum noise due to Alice's detection,
arises from her attempt to
simultaneously measure the two conjugate variables $x_{\rm in}$ and
$p_{\rm in}$ \cite{Arth}. This is the standard quantum limit for the
detection of both quadratures \cite{Yama} when attempting to gain as
much information as possible about the quantum state of a light field
\cite{Leon}.
The standard quantum limit yields a product of the measurement accuracies
which is twice as large as the Heisenberg minimum uncertainty product.
This product of the measurement accuracies contains the intrinsic quantum
limit (Heisenberg uncertainty of the field to be detected) plus an additional
unit of vacuum noise due to the detection \cite{Yama}. The second quduty
arises when Bob uses the information of Alice's detection to generate the
state at amplitude $\sqrt{2}x_{\rm u}+i\sqrt{2}p_{\rm v}$ \cite{Sam}. It can
be interpreted as the standard quantum limit imposed on state broadcasting.
\section{Teleportation criteria}
The teleportation scheme with Alice and Bob is complete
without any further measurement. The quantum state teleported remains
unknown to both Alice and Bob and need not be demolished in a detection
by Bob as a final step.
However, maybe Alice and Bob are cheating. Instead of using an EPR
channel, they try to get away without entanglement and use only a
classical channel. In particular, for the realistic experimental
situation with finite squeezing and inefficient detectors where perfect
teleportation is unattainable, how may we verify that successful
quantum teleportation has taken place?
To make this verification we shall introduce a third party,
``Victor'' (the verifier), who is independent of Alice and Bob (Fig.~2).
We assume that he prepares the initial input state
(drawn from a fixed set of states) and passes it on to
Alice. After accomplishing the supposed teleportation, Bob sends the
teleported state back to Victor. Victor's knowledge about the input state
and detection of the teleported state enable Victor to verify
if quantum teleportation has really taken place.
For that purpose, however, Victor needs some measure that helps him to
assess when the similarity between the teleported state and the input state
exceeds a boundary that can only be surpassed with entanglement.
\subsection{Teleporting Gaussian states with a coherent amplitude}
The single-mode teleportation scheme from Ref.~\onlinecite{Sam}
works for arbitrary input states, described by any Wigner function
$W_{\rm in}$. Teleporting states with a coherent amplitude
as reliably as possible requires unit-gain teleportation (unit gain
in Bob's final displacement). Only in this case do the coherent
amplitudes of the teleported mode always match those of the input mode
when Victor draws states with different amplitudes from the set
of input states in a sequence of trials.
For this unit-gain teleportation, the teleported state $W_{\rm tel}$
is a convolution of the input $W_{\rm in}$ with a complex Gaussian of
variance $e^{-2r}$. Classical teleportation with $r=0$ then means
the teleported mode has an excess noise of two units of vacuum
$\case{1}{2}+\case{1}{2}$ compared to the input, as also discussed in
the previous section. Any $r>0$ beats this classical scheme, i.e.,
if the input state is always recreated with the right amplitude and
less than two units of vacuum excess noise, we may call this already
quantum teleportation. Let us derive this result using the
least noisy model for classical communication.
For the input quadratures of Alice's sending station and the output
quadratures at Bob's receiving station, the least noisy (linear)
model if Alice and Bob are only classically communicating can be
written as
\begin{eqnarray}\label{model}
\hat{x}_{\rm out,j}&=&\Gamma_x\,\hat{x}_{\rm in}+\Gamma_x\,
s_a^{-1}\hat{x}_a^{(0)}+s_{b,j}^{-1}\hat{x}^{(0)}_{b,j},\nonumber\\
\hat{p}_{\rm out,j}&=&\Gamma_p\,\hat{p}_{\rm in}-\Gamma_p\,
s_a\hat{p}_a^{(0)}+s_{b,j}\hat{p}^{(0)}_{b,j}.
\end{eqnarray}
This model takes into account that Alice and Bob can only
communicate via classical signals, so that Bob can make arbitrarily
many copies of the output mode; the subscript $j$
labels the $j$th copy.
quadratures satisfy the commutation relations
\begin{eqnarray}\label{commut}
[\hat{x}_{\rm out,j},\hat{p}_{\rm out,k}]&=&(i/2)\;\delta_{jk},\nonumber\\
\left[\hat{x}_{\rm out,j},\hat{x}_{\rm out,k}\right]&=&[\hat{p}_{\rm out,j},
\hat{p}_{\rm out,k}]=0\,\,\,.
\end{eqnarray}
Since we are only interested in one single copy of the output we drop
the label $j$.
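As a brief check of the structure of Eqs.~(\ref{model}), the commutators
of Eqs.~(\ref{commut}) are preserved for arbitrary gains: using
$[\hat{x}^{(0)},\hat{p}^{(0)}]=i/2$ for each vacuum mode,
\begin{eqnarray}
[\hat{x}_{\rm out,j},\hat{p}_{\rm out,k}]=
\Gamma_x\Gamma_p\,[\hat{x}_{\rm in},\hat{p}_{\rm in}]
-\Gamma_x\Gamma_p\,[\hat{x}_a^{(0)},\hat{p}_a^{(0)}]
+s_{b,j}^{-1}s_{b,k}\,[\hat{x}^{(0)}_{b,j},\hat{p}^{(0)}_{b,k}]
=\frac{i}{2}\,\delta_{jk},\nonumber
\end{eqnarray}
the contribution of the input mode being cancelled by the noise added
through Alice's detection, while Bob's added vacuum modes render the
different copies mutually commuting.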
The parameter $s_a$ is given by Alice's measurement strategy
and determines the noise penalty due to her homodyne detections.
The gains $\Gamma_x$ and $\Gamma_p$ can be manipulated by Bob as well
as the parameter $s_b$ determining the noise distribution of
Bob's original mode. The set of input states may contain pure Gaussian
states with a coherent amplitude, described by
$\hat{x}_{\rm in}=\langle\hat{x}_{\rm in}\rangle+s_v^{-1}\hat{x}^{(0)}$
and $\hat{p}_{\rm in}=
\langle\hat{p}_{\rm in}\rangle+s_v\hat{p}^{(0)}$,
where Victor can choose in each trial the coherent amplitude
and if and to what extent the input is squeezed (parameter $s_v$).
Since Bob always wants to reproduce the input amplitude,
he is restricted to unit gain, symmetric in both quadratures,
$\Gamma_x=\Gamma_p=1$. First, after obtaining the output states from Bob,
Victor verifies if their amplitudes match the corresponding input
amplitudes. If not, all the following considerations concerning the
excess noise are redundant, because Alice and Bob can always
manipulate this noise by fiddling the gain (less than unit gain reduces
the excess noise). If Victor finds overlapping amplitudes in all trials
(at least within some error range),
he looks at the excess noise in each trial.
For that purpose, let us define the normalized variance
\begin{eqnarray}\label{telin}
V_{\rm out,in}^{\hat{x}}&\equiv&\frac{\langle\Delta(\hat{x}_{\rm out}-
\hat{x}_{\rm in})^2\rangle}{\langle\Delta\hat{x}^2
\rangle_{\rm vacuum}}\,\,\,,
\end{eqnarray}
and analogously $V_{\rm out,in}^{\hat{p}}$ with
$\hat{x}\rightarrow\hat{p}$ throughout
$[\langle\Delta\hat{o}^2\rangle\equiv{\rm var}(\hat{o})]$.
Using Eqs.~(\ref{model}) with unit gain, we obtain the product
\begin{eqnarray}\label{product}
V_{\rm out,in}^{\hat{x}}V_{\rm out,in}^{\hat{p}}&=&
(s_a^{-2}+s_b^{-2})(s_a^2+s_b^2).
\end{eqnarray}
It is minimized for $s_a=s_b$, yielding
$V_{\rm out,in}^{\hat{x}}V_{\rm out,in}^{\hat{p}}=4$.
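The minimization is elementary,
\begin{eqnarray}
(s_a^{-2}+s_b^{-2})(s_a^2+s_b^2)=2+\frac{s_a^2}{s_b^2}+\frac{s_b^2}{s_a^2}
\geq 4,\nonumber
\end{eqnarray}
with equality if and only if $s_a=s_b$, i.e., when Alice's detection noise
and Bob's mode are symmetrically distributed between the two quadratures.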
The optimum value of 4 is exactly the result we obtain for what we may
call classical teleportation,
$V_{\rm tel,in}^{\hat{x}}(r=0)V_{\rm tel,in}^{\hat{p}}(r=0)=4$,
using Eqs.~(\ref{1.6}) with subscript `out' $\rightarrow$ `tel'
in Eq.~(\ref{telin}). Thus, we can write our first ``fundamental''
limit for teleporting states with a coherent amplitude as
\begin{eqnarray}\label{limit}
V_{\rm out,in}^{\hat{x}}V_{\rm out,in}^{\hat{p}}&\geq&
V_{\rm tel,in}^{\hat{x}}(r=0)V_{\rm tel,in}^{\hat{p}}(r=0)=4\,\,\,.
\end{eqnarray}
If Victor, comparing the output states with the input states, always
finds violations of this inequality, he may already have considerable
confidence in Alice's and Bob's honesty (i.e., that they indeed have
used entanglement). Equation~(\ref{limit}) may also already enable us
to assess whether a scheme or protocol is capable of quantum teleportation.
Alternatively, instead of looking at the products
$V_{\rm out,in}^{\hat{x}}V_{\rm out,in}^{\hat{p}}$, we could also
use the sums $V_{\rm out,in}^{\hat{x}}+V_{\rm out,in}^{\hat{p}}=
s_a^{-2}+s_b^{-2}+s_a^2+s_b^2$ that are minimized for $s_a=s_b=1$.
Then we find the classical boundary
$V_{\rm out,in}^{\hat{x}}+V_{\rm out,in}^{\hat{p}}\geq 4$.
However, taking into account all the assumptions made for the
derivation of Eq.~(\ref{limit}), this boundary appears to be less
fundamental. First, we have only assumed a linear model.
Secondly, we have only considered the variances of two conjugate
observables and a certain kind of measurement of these.
An entirely rigorous criterion for quantum teleportation
should take into account all possible variables,
measurements and strategies that can be used by Alice and Bob.
Another ``problem'' of our boundary Eq.~(\ref{limit}) is
that the variances $V_{\rm out,in}$ are not directly measurable,
because the input state is destroyed by the teleportation process.
However, for Gaussian input states, Victor can combine his
knowledge of the input variances $V_{\rm in}$ with the detected
variances $V_{\rm out}$ in order to infer $V_{\rm out,in}$.
With a more specific set of Gaussian input states,
namely coherent states, the least noisy model for classical
communication allows us to determine the directly measurable
``fundamental'' limit for the normalized variances of the
output states
\begin{eqnarray}\label{limit2}
V_{\rm out}^{\hat{x}}V_{\rm out}^{\hat{p}}\geq 9\,\,\,.
\end{eqnarray}
But still we need to bear in mind that we did not consider
all possible strategies of Alice and Bob.
Also for arbitrary $s_v$ (set of input states contains all
coherent and squeezed states), Eq.~(\ref{limit2}) represents a classical
boundary, as
\begin{eqnarray}\label{product2}
V_{\rm out}^{\hat{x}}V_{\rm out}^{\hat{p}}&=&
(s_v^{-2}+s_a^{-2}+s_b^{-2})(s_v^2+s_a^2+s_b^2)
\end{eqnarray}
is minimized for $s_v=s_a=s_b$, yielding
$V_{\rm out}^{\hat{x}}V_{\rm out}^{\hat{p}}=9$.
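Here the lower bound follows from the Cauchy-Schwarz inequality,
\begin{eqnarray}
(s_v^{-2}+s_a^{-2}+s_b^{-2})(s_v^2+s_a^2+s_b^2)\geq(1+1+1)^2=9,\nonumber
\end{eqnarray}
with equality exactly for $s_v=s_a=s_b$.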
However, since $s_v$ is unknown to Alice and Bob in every trial,
they can attain this classical minimum only by accident.
For $s_v$ fixed, e.g., $s_v=1$ (set of input states contains ``only''
coherent states), Alice and Bob knowing this $s_v$ can always
satisfy $V_{\rm out}^{\hat{x}}V_{\rm out}^{\hat{p}}=9$ in the classical
model. Alternatively, the sums $V_{\rm out}^{\hat{x}}+
V_{\rm out}^{\hat{p}}=s_v^{-2}+s_a^{-2}+s_b^{-2}+s_v^2+s_a^2+s_b^2$
are minimized with $s_a=s_b=1$. In this case, we obtain the
$s_v$-dependent boundary $V_{\rm out}^{\hat{x}}+
V_{\rm out}^{\hat{p}}\geq s_v^{-2}+s_v^2+4$. Without knowing
$s_v$, Alice and Bob can always attain this minimum in the classical
model. In every trial, Victor must combine his knowledge of $s_v$
with the detected output variances in order to find violations
of this sum inequality.
Ralph and Lam \cite{Ralph} define the classical boundaries
\begin{eqnarray}\label{ralph1}
V_{\rm c}^{\hat{x}}+V_{\rm c}^{\hat{p}}\geq 2
\end{eqnarray}
and
\begin{eqnarray}\label{ralph2}
T_{\rm out}^{\hat{x}}+T_{\rm out}^{\hat{p}}\leq 1\,\,\,,
\end{eqnarray}
using the conditional variance
\begin{eqnarray}\label{cond}
V_{\rm c}^{\hat{x}}&\equiv&\frac{\langle\Delta\hat{x}^2_{\rm out}
\rangle}{\langle\Delta\hat{x}^2\rangle_{\rm vacuum}}
\left(1-\frac{|\langle\Delta\hat{x}_{\rm out}\Delta\hat{x}_{\rm in}
\rangle|^2}{\langle\Delta\hat{x}^2_{\rm out}\rangle\langle
\Delta\hat{x}^2_{\rm in}\rangle}\right),
\end{eqnarray}
and analogously for $V_{\rm c}^{\hat{p}}$ with
$\hat{x}\rightarrow\hat{p}$ throughout,
and the transfer coefficient
\begin{eqnarray}\label{trans1}
T_{\rm out}^{\hat{x}}&\equiv&\frac{{\rm SNR}_{\rm out}^{\hat{x}}}{{\rm SNR}
_{\rm in}^{\hat{x}}}\,\,\,,
\end{eqnarray}
and analogously $T_{\rm out}^{\hat{p}}$ with $\hat{x}\rightarrow\hat{p}$
throughout. Here, SNR denotes the signal to noise ratio for the square of
the mean amplitudes, namely ${\rm SNR}_{\rm out}^{\hat{x}}=\langle
\hat{x}_{\rm out}\rangle^2/\langle\Delta\hat{x}_{\rm out}^2\rangle$.
Alice and Bob using only classical communication are not able to
violate {\it either} of the two inequalities
Eq.~(\ref{ralph1}) and Eq.~(\ref{ralph2}).
In fact, these boundaries are two independent limits,
each of them unexceedable in a classical scheme.
However, Alice and Bob can simultaneously approach
$V_{\rm c}^{\hat{x}}+V_{\rm c}^{\hat{p}}=2$ and
$T_{\rm out}^{\hat{x}}+T_{\rm out}^{\hat{p}}=1$ using either
an asymmetric classical detection and transmission scheme
with coherent-state inputs or a symmetric classical scheme with
squeezed-state inputs \cite{Ralph}.
For quantum teleportation, Ralph and Lam \cite{Ralph} require their
classical limits be simultaneously exceeded, $V_{\rm c}^{\hat{x}}+
V_{\rm c}^{\hat{p}}<2$ and $T_{\rm out}^{\hat{x}}+T_{\rm out}^{\hat{p}}>1$.
This is only possible using more than 3 dB squeezing in the entanglement
source \cite{Ralph}.
Apparently, these criteria determine a classical boundary different from
ours in Eq.~(\ref{limit}). For example, in unit-gain teleportation,
our inequality Eq.~(\ref{limit}) is violated for any nonzero squeezing
$r>0$. Let us briefly explain why we encounter this discrepancy.
We have a priori assumed unit gain in our scheme to achieve outputs
and inputs overlapping in their mean values. This assumption is, of course,
motivated by the assessment that good teleportation means good similarity
between input and output {\it states} (here, to be honest, we already have
something in mind similar to the fidelity, introduced in the next section).
First, Victor has to check the match of the amplitudes before looking
at the variances. Ralph and Lam permit arbitrary gain,
because they are not interested in the similarity of input and output
{\it states}, but in certain correlations that manifest
separately in the individual quadratures \cite{Ralph2}.
This point of view originates from the context of quantum nondemolition
(QND) measurements \cite{Brag}, which are focused on a single QND variable
while the conjugate variable is not of interest.
For arbitrary gain, an inequality as in Eq.~(\ref{ralph2}),
containing the input and output mean values, has to be added to an
inequality only for variances as in Eq.~(\ref{ralph1}).
Ralph and Lam's {\it best} classical protocol permits output states
completely different from the input states, e.g., via asymmetric detection
where the lack of information in one quadrature leads on average
to output states with amplitudes completely different from the input
states. The asymmetric scheme means that Alice is {\it not}
attempting to gain as much information about the {\it quantum state}
as possible, as in an Arthurs-Kelly measurement \cite{Arth}.
The Arthurs-Kelly measurement,
however, is exactly what Alice should do in our {\it best} classical
protocol, i.e., classical teleportation.
Therefore, our best classical protocol always achieves output states
already pretty similar to the input states.
Apparently, ``the best'' that can be classically achieved
has a different meaning from Ralph and Lam's point of view and
from ours. Then it is no surprise that the classical boundaries
differ as well.
Apart from these differences, however, Ralph and Lam's criteria do
have something in common with our criterion given by Eq.~(\ref{limit}):
they also do not satisfy the rigor
we require from criteria for quantum teleportation taking into account
everything Alice and Bob can do. By limiting the set of input states
to coherent states, we are able to present such a rigorous criterion
in the next section.
\subsection{The fidelity criterion for coherent-state teleportation}
The rigorous criterion we are looking for to determine the best classical
teleportation and to quantify the distinction between classical and
quantum teleportation relies on the fidelity $F$, for an arbitrary input
state $|\psi_{\rm in}\rangle$ defined by \cite{Fuchs}
\begin{eqnarray}\label{fid1}
F\equiv\langle\psi_{\rm in}|\hat{\rho}_{\rm out}|
\psi_{\rm in}\rangle.
\end{eqnarray}
It is an excellent measure for the similarity between the input
and the output state and equals one only if
$\hat{\rho}_{\rm out}=|\psi_{\rm in}\rangle\langle\psi_{\rm in}|$.
Now Alice and Bob know that Victor draws his states
$|\psi_{\rm in}\rangle$ from a fixed set, but they do not know
which particular state is drawn in a single trial.
Therefore, an average fidelity should be considered \cite{Fuchs},
\begin{eqnarray}\label{fid2}
F_{\rm av}=\int P(|\psi_{\rm in}\rangle)
\langle\psi_{\rm in}|\hat{\rho}_{\rm out}|
\psi_{\rm in}\rangle d|\psi_{\rm in}\rangle,
\end{eqnarray}
where $P(|\psi_{\rm in}\rangle)$ is the probability of drawing
a particular state $|\psi_{\rm in}\rangle$, and the integral runs
over the entire set of input states.
If the set of input states contains simply all possible quantum states
in an infinite-dimensional Hilbert space (i.e., the input state is
completely unknown apart from the Hilbert-space dimension), the
best average fidelity achievable without entanglement is zero.
If the set of input states is restricted to coherent states of amplitude
$\alpha_{\rm in}=x_{\rm in}+ip_{\rm in}$ and
$F=\langle\alpha_{\rm in}|\hat{\rho}_{\rm out}|\alpha_{\rm in}\rangle$,
on average, the fidelity achievable in a purely classical scheme
(when averaged across the entire complex plane) is bounded by
\cite{Fuchs}
\begin{eqnarray}\label{fid3}
F_{\rm av}\leq\frac{1}{2}\,\,\,.
\end{eqnarray}
Let us illustrate these nontrivial results with our single-mode
teleportation equations.
Up to a factor $\pi$, the fidelity
$F=\langle\alpha_{\rm in}|\hat{\rho}_{\rm tel}|\alpha_{\rm in}\rangle$
is the $Q$ function of the teleported mode evaluated for
$\alpha_{\rm in}$:
\begin{eqnarray}\label{fid4}
F=\pi Q_{\rm tel}(\alpha_{\rm in})=\frac{1}{2\sqrt{\sigma_x\sigma_p}}
\exp\left[-(1-\Gamma)^2\left(\frac{x_{\rm in}^2}{2\sigma_x}
+\frac{p_{\rm in}^2}{2\sigma_p}\right)\right],
\end{eqnarray}
where $\Gamma$ is the gain from the previous sections and $\sigma_x$
and $\sigma_p$ are the variances of the
Q function of the teleported mode for the corresponding quadratures.
These variances are according to Eqs.~(\ref{gain}) for a coherent-state
input and $\langle\Delta\hat{x}^2\rangle_{\rm vacuum}
=\langle\Delta\hat{p}^2\rangle_{\rm vacuum}=\case{1}{4}$ given by
\begin{eqnarray}\label{fid5}
\sigma_x=\sigma_p=
\frac{1}{4}(1+\Gamma^2)+\frac{e^{2r}}{8}(\Gamma-1)^2+
\frac{e^{-2r}}{8}(\Gamma+1)^2.
\end{eqnarray}
For classical teleportation ($r=0$) and $\Gamma=1$, we obtain
$\sigma_x=\sigma_p=\case{1}{2}+\case{1}{4}V_{\rm tel,in}^{\hat{x}}(r=0)=
\case{1}{2}+\case{1}{4}V_{\rm tel,in}^{\hat{p}}(r=0)=\case{1}{2}+
\case{1}{2}=1$ and indeed $F=F_{\rm av}=\case{1}{2}$.
In order to obtain a better fidelity, entanglement is necessary.
Then, if $\Gamma=1$, we obtain $F=F_{\rm av}>\case{1}{2}$ for any $r>0$.
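In fact, for unit gain the exponential in Eq.~(\ref{fid4}) equals one and
Eq.~(\ref{fid5}) reduces to $\sigma_x=\sigma_p=\case{1}{2}(1+e^{-2r})$,
so that
\begin{eqnarray}
F=F_{\rm av}=\frac{1}{1+e^{-2r}}\,\,\,,\nonumber
\end{eqnarray}
which grows monotonically from $\case{1}{2}$ at $r=0$ towards $1$ in the
limit of infinite squeezing.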
For $r=0$, the fidelity drops to zero as
$\Gamma\to\infty$ since the mean amplitude of the teleported state does not
match that of the input state and the excess noise increases.
For $r=0$ and $\Gamma=0$, the fidelity becomes
$F=\exp(-|\alpha_{\rm in}|^2)$. Upon averaging over all possible
coherent-state inputs, this fidelity also vanishes.
Assuming nonunit gain, it is crucial to consider the average
fidelity $F_{\rm av}\neq F$. When averaging across the entire complex
plane, any nonunit gain yields $F_{\rm av}=0$.
This is exactly why Victor should first check the match of the
amplitudes for different input states. If Alice and Bob are cheating
and fiddle the gain in a classical scheme, a sufficiently large input
amplitude reveals the truth.
These considerations also apply to the asymmetric classical detection
and transmission scheme with a coherent-state input
\cite{Ralph} discussed in the previous section.
Of course, the asymmetric scheme does not provide an improvement in the
fidelity. In fact, the average fidelity drops to zero, if Alice
detects only one quadrature (and gains complete information about this
quadrature) and Bob obtains the full information about the measured
quadrature, but no information about the second quadrature.
In an asymmetric classical scheme, Alice and Bob stay far within
the classical domain $F_{\rm av}<\case{1}{2}$. The best classical scheme
with respect to the fidelity is the symmetric one (``classical
teleportation'') with $F_{\rm av}=\case{1}{2}$.
The supposed limitation of the fidelity
criterion that the set of input states contains ``only'' coherent
states is compensated by having an entirely rigorous criterion.
Of course, the fidelity criterion does not limit the possible
input states for which the presented protocol works.
It does not mean we can only teleport coherent states (as we will clearly
see in the next section). However, so far, it is the only criterion
that enables the experimentalist to rigorously verify quantum
teleportation. That is why Furusawa {\it et al.} \cite{Furu} were happy
to have used coherent-state inputs, because they could rely on
a strict and rigorous criterion (and not only because coherent states
are the most readily available source for the state preparer Victor).
\subsection{Teleporting entangled states: entanglement swapping}
From a true quantum teleportation device,
we require that it can not only teleport
nonorthogonal states very similar to classical states (such as coherent
states), but also extremely nonclassical states such as entangled states.
When teleporting one half of an entangled state (``entanglement
swapping''), we are certainly much more interested in the
preservation of the inseparability than in the match of any input
and output amplitudes. We can say that entanglement swapping is
successful, if the initially unentangled modes become entangled
via the teleportation process (even, if this is accompanied by a decrease
of the quality of the initial entanglement).
In Ref.~\onlinecite{PvL} it has been shown that the single-mode
teleportation scheme enables entanglement swapping
for any nonzero squeezing ($r>0$) in the two initial entangled states
(of which one provides the teleporter's input and the other one the
EPR channel or vice versa).
Let us introduce ``Claire'' who performs the Bell detection of modes 2 and
3 (Fig.~3). Before her measurement, mode 1 (Alice's mode) is entangled
with mode 2, and mode 3 is entangled with mode 4 (Bob's mode) \cite{PvL}.
Due to Claire's detection, modes 1 and 4 are projected onto entangled
states. Entanglement is teleported in every single projection
(for every measured value of $x_{\rm u}$ and $p_{\rm v}$) without
any further local displacement \cite{PvL3}.
How can we verify that entanglement
swapping was successful? Simply, by verifying that Alice and Bob,
who initially did not share any entanglement, are able to perform
quantum teleportation using mode 1 and 4 after entanglement swapping
\cite{PvL}. But then we urgently need a rigorous criterion for
quantum teleportation that unambiguously recognizes when Alice and
Bob have used entanglement and when they have not.
Now, again, we can rely on the fidelity criterion for coherent-state
teleportation. Alice and Bob again have to convince Victor that they
are using entanglement and are not cheating. Of course, this is only a
reliable verification scheme of entanglement swapping, if one can be
sure that Alice and Bob did not share entanglement prior to entanglement
swapping and that Claire is not allowed to perform unit-gain
displacements (or that Claire is not allowed to receive any classical
information). Otherwise, Victor's coherent-state input could be
teleported step by step from Alice to Claire (with unit gain) and
from Claire to Bob (with unit gain). This protocol, however, requires
more than 3 dB squeezing in both entanglement sources (if equally
squeezed) to ensure $F_{\rm av}>\case{1}{2}$ \cite{PvL}.
Using entanglement swapping, Alice and Bob can achieve
$F_{\rm av}>\case{1}{2}$ for any squeezing, but one of them has to
perform local displacements based on Claire's measurement results.
Any gain is allowed in these displacements, since in entanglement
swapping, we are not interested
in the transfer of coherent amplitudes (and the two initial two-mode
squeezed states are vacuum states anyway).
But only the optimum gain $\Gamma_{\rm swap}=\tanh 2r$ ensures
$F_{\rm av}>\case{1}{2}$ for any squeezing and provides the optimum
fidelity \cite{PvL}.
Unit gain $\Gamma_{\rm swap}=1$ in entanglement swapping would
require more than 3 dB squeezing in both entanglement sources
(if equally squeezed) to achieve $F_{\rm av}>\case{1}{2}$ \cite{PvL},
or to confirm the teleportation of entanglement via detection of the
combined entangled modes \cite{Tan}.
We will also give a broadband protocol of entanglement swapping as a
``nonunit-gain teleportation.'' The verification of entanglement
swapping via the fidelity criterion for coherent-state teleportation
demonstrates how useful this criterion is. Less rigorous criteria,
as presented in Sec. III A, cannot reliably tell us if Alice
and Bob use entanglement emerging from entanglement swapping.
Furthermore, the entanglement swapping scheme demonstrates that
a two-mode squeezed state enables {\it true} quantum teleportation
for any nonzero squeezing. Requiring more than 3 dB squeezing,
as is necessary for quantum teleportation according to
Ralph and Lam \cite{Ralph}, is not necessary for the teleportation
of entanglement.
\section{Broadband entanglement}
In this section, we demonstrate that the EPR state required for
broadband teleportation can be generated either directly by
nondegenerate parametric down conversion or by combining
two independently squeezed fields produced via degenerate down
conversion or any other nonlinear interaction.
First, we review the results of Ref.~\onlinecite{Ou} based on the
input-output formalism of Collett and Gardiner \cite{Coll} where a
nondegenerate optical parametric amplifier in a cavity (NOPA) is
studied. We will see that the upper and lower sidebands of the NOPA
output have correlations similar to those of the two-mode squeezed state in
Eqs.~(\ref{1.2}). The optical parametric oscillator is considered
polarization nondegenerate but frequency ``degenerate'' (equal
center frequency for the orthogonally polarized output modes).
The interaction between the two modes is due to the nonlinear $\chi^{(2)}$
medium (in a cavity) and may be described by the interaction Hamiltonian
\begin{eqnarray}\label{1.8}
\hat{H}_{I}=i\hbar\kappa(\hat{a}^{\dagger}_1\hat{a}^{\dagger}_2
e^{-2i\omega_0t}-\hat{a}_1\hat{a}_2 e^{2i\omega_0t}).
\end{eqnarray}
The undepleted pump field amplitude at frequency $2\omega_0$ is described
as a {\it c} number and has been absorbed into the coupling $\kappa$ which
also contains the $\chi^{(2)}$ susceptibility. Without loss of generality
$\kappa$ can be taken to be real. The dynamics of the two cavity modes
$\hat{a}_1$ and $\hat{a}_2$ are governed by the above interaction
Hamiltonian, and input-output relations can be derived relating the
cavity modes to the external vacuum input modes $\hat{b}^{(0)}_1$
and $\hat{b}^{(0)}_2$, the external output modes $\hat{b}_1$ and
$\hat{b}_2$, and two unwanted vacuum modes $\hat{c}^{(0)}_1$ and
$\hat{c}^{(0)}_2$ describing cavity losses (Fig.~4). Recall, the
superscript `$(0)$' refers to vacuum modes. We define uppercase
operators in the rotating frame about the center frequency $\omega_0$,
\begin{eqnarray}\label{1.9}
\hat{O}(t)=\hat{o}(t)e^{i\omega_0t},
\end{eqnarray}
with $\hat{O}=[\hat{A}_{1,2};\hat{B}_{1,2};\hat{B}^{(0)}_{1,2};
\hat{C}^{(0)}_{1,2}]$ and the full Heisenberg operators $\hat{o}=
[\hat{a}_{1,2};\hat{b}_{1,2};\hat{b}^{(0)}_{1,2};\hat{c}^{(0)}_{1,2}]$.
By the Fourier transformation
\begin{eqnarray}\label{1.10}
\hat{O}(\Omega)=\frac{1}{\sqrt{2\pi}}\int dt\,\hat{O}(t)e^{i\Omega t},
\end{eqnarray}
the fields are now described as functions of the modulation frequency
$\Omega$ with commutation relation $[\hat{O}(\Omega),\hat{O}^{\dagger}
(\Omega')]=\delta(\Omega - \Omega')$ for $\hat{B}_{1,2}$,
$\hat{B}^{(0)}_{1,2}$ and $\hat{C}^{(0)}_{1,2}$ since $[\hat{O}(t),
\hat{O}^{\dagger}(t')]=\delta(t - t')$. Expressing the outgoing modes
in terms of the incoming vacuum modes, one obtains \cite{Ou}
\begin{eqnarray}\label{1.11}
\hat{B}_j(\Omega)=G(\Omega)\hat{B}^{(0)}_j(\Omega)+g(\Omega)
\hat{B}^{(0)\dagger}_k(-\Omega)
+\bar{G}(\Omega)\hat{C}^{(0)}_j(\Omega)+\bar{g}(\Omega)\hat{C}
^{(0)\dagger}_k(-\Omega),
\end{eqnarray}
where $k=3-j$, $j=1,2$ (so $k$ refers to the opposite mode to $j$),
and with coefficients to be specified later.
The two cavity modes have been assumed to be both on resonance with
half the pump frequency at $\omega_0$.
Let us investigate the lossless case where the output fields become
\begin{eqnarray}\label{1.12}
\hat{B}_j(\Omega)=G(\Omega)\hat{B}^{(0)}_j(\Omega)+g(\Omega)\hat{B}^{(0)
\dagger}_k(-\Omega),
\end{eqnarray}
with the functions $G(\Omega)$ and $g(\Omega)$ of Eq.~(\ref{1.11})
simplifying to
\begin{eqnarray}\label{1.13}
G(\Omega)&=&\frac{\kappa^2+\gamma^2/4+\Omega^2}{(\gamma/2-i
\Omega)^2-\kappa^2}\,\,\,,\nonumber\\
g(\Omega)&=&\frac{\kappa\gamma}{(\gamma/2-i \Omega)^2-\kappa^2}\,\,\,.
\end{eqnarray}
Here, the parameter $\gamma$ is a damping rate of the cavity
(Fig.~4) and is assumed to be equal for both polarizations.
Equation (\ref{1.12}) represents the input-output relations for a
lossless NOPA.
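Note that in this lossless case $|G(\Omega)|^2-|g(\Omega)|^2=1$ for all
$\Omega$, as can be checked directly from Eqs.~(\ref{1.13}),
\begin{eqnarray}
|G(\Omega)|^2-|g(\Omega)|^2=
\frac{(\kappa^2+\gamma^2/4+\Omega^2)^2-\kappa^2\gamma^2}
{[(\gamma/2-\kappa)^2+\Omega^2][(\gamma/2+\kappa)^2+\Omega^2]}=1\,\,\,,\nonumber
\end{eqnarray}
which guarantees that Eq.~(\ref{1.12}) preserves the free-field
commutators $[\hat{B}_j(\Omega),\hat{B}^{\dagger}_j(\Omega')]=
\delta(\Omega-\Omega')$.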
Following Ref.~\onlinecite{Schum}, we introduce frequency resolved
quadrature amplitudes given by
\begin{eqnarray}\label{1.14}
\hat{X}_j(\Omega)&=&\case{1}{2}[\hat{B}_j(\Omega)+\hat{B}^{\dagger}_j
(-\Omega)],\nonumber\\
\hat{P}_j(\Omega)&=&\case{1}{2i}[\hat{B}_j(\Omega)-\hat{B}^{\dagger}_j
(-\Omega)],\nonumber\\
\hat{X}^{(0)}_j(\Omega)&=&\case{1}{2}[\hat{B}^{(0)}_j(\Omega)+\hat{B}
^{(0)\dagger}_j(-\Omega)],\nonumber\\
\hat{P}^{(0)}_j(\Omega)&=&\case{1}{2i}[\hat{B}^{(0)}_j(\Omega)-\hat{B}
^{(0)\dagger}_j(-\Omega)],
\end{eqnarray}
provided $\Omega\ll \omega_0$. Using these, Eq.~(\ref{1.12}) becomes
\begin{eqnarray}\label{1.15}
\hat{X}_j(\Omega)&=&G(\Omega)\hat{X}^{(0)}_j(\Omega)+g(\Omega)
\hat{X}^{(0)}_k(\Omega),\nonumber\\
\hat{P}_j(\Omega)&=&G(\Omega)\hat{P}^{(0)}_j(\Omega)-g(\Omega)
\hat{P}^{(0)}_k(\Omega).
\end{eqnarray}
Here, we have used $G(\Omega)=G^*(-\Omega)$ and $g(\Omega)=g^*(-\Omega)$.
At this juncture, we show that the output quadratures of a lossless NOPA
in Eqs.~(\ref{1.15}) correspond to two independently squeezed modes
coupled to a two-mode squeezed state at a beam splitter.
The operational significance of this fact is that
the EPR state required for broadband teleportation can be created
either by nondegenerate parametric down conversion as described by the
interaction Hamiltonian in Eq.~(\ref{1.8}), or by combining at a
beam splitter two independently squeezed fields generated via degenerate
down conversion \cite{Kimb} (as done in the teleportation
experiment of Ref.~\onlinecite{Furu}).
Let us thus define the superpositions of the two output modes
(barred quantities)
\begin{eqnarray}\label{1.16}
\hat{\bar{B}}_1&\equiv&\case{1}{\sqrt{2}}(\hat{B}_1+\hat{B}_2),
\nonumber\\
\hat{\bar{B}}_2&\equiv&\case{1}{\sqrt{2}}(\hat{B}_1-\hat{B}_2),
\end{eqnarray}
and of the two vacuum input modes
\begin{eqnarray}\label{1.17}
\hat{\bar{B}}^{(0)}_1&\equiv&\case{1}{\sqrt{2}}(\hat{B}^{(0)}_1+
\hat{B}^{(0)}_2),\nonumber\\
\hat{\bar{B}}^{(0)}_2&\equiv&\case{1}{\sqrt{2}}(\hat{B}^{(0)}_1-
\hat{B}^{(0)}_2).
\end{eqnarray}
In terms of these superpositions, Eq.~(\ref{1.12}) becomes
\begin{eqnarray}\label{1.18}
\hat{\bar{B}}_1(\Omega)&=&G(\Omega)\hat{\bar{B}}^{(0)}_1(\Omega)+
g(\Omega)\hat{\bar{B}}^{(0)\dagger}_1
(-\Omega),\nonumber\\
\hat{\bar{B}}_2(\Omega)&=&G(\Omega)\hat{\bar{B}}^{(0)}_2(\Omega)-
g(\Omega)\hat{\bar{B}}^{(0)\dagger}_2
(-\Omega).
\end{eqnarray}
In Eqs.~(\ref{1.18}), the initially coupled modes of Eq.~(\ref{1.12})
are decoupled, corresponding to two independent degenerate parametric
amplifiers.
In the limit $\Omega\to 0$, the two modes of Eqs.~(\ref{1.18}) are
each in the same single-mode squeezed state as the two modes in
Eqs.~(\ref{1.1}). More explicitly, by setting $G(0)=\cosh r$ and
$g(0)=\sinh r$, the annihilation operators
\begin{eqnarray}\label{1.19}
\hat{\bar{B}}_1&=&\cosh r\hat{\bar{B}}^{(0)}_1+\sinh r\hat{\bar{B}}^{(0)
\dagger}_1,\nonumber\\
\hat{\bar{B}}_2&=&\cosh r\hat{\bar{B}}^{(0)}_2-\sinh r\hat{\bar{B}}^{(0)
\dagger}_2,
\end{eqnarray}
have the quadrature operators
\begin{eqnarray}\label{1.20}
&&\hat{\bar{X}}_1=e^{r} \hat{\bar{X}}^{(0)}_1,\;\;\;\hat{\bar{P}}_1=
e^{-r} \hat{\bar{P}}^{(0)}_1,
\nonumber\\
&&\hat{\bar{X}}_2=e^{-r} \hat{\bar{X}}^{(0)}_2,\;\;\;\hat{\bar{P}}_2=
e^{r} \hat{\bar{P}}^{(0)}_2.
\end{eqnarray}
From the alternative perspective of superimposing two independently
squeezed modes at a 50/50 beam splitter to obtain the EPR state, we must
simply invert the transformation of Eqs.~(\ref{1.16}) and recouple the two
modes:
\begin{eqnarray}\label{1.21}
\hat{B}_1&=&\case{1}{\sqrt{2}}(\hat{\bar{B}}_1+\hat{\bar{B}}_2)\nonumber\\
&=&\case{1}{\sqrt{2}}[\cosh r(\hat{\bar{B}}^{(0)}_1+\hat{\bar{B}}^{(0)}_2)
+\sinh r(\hat{\bar{B}}^{(0)\dagger}_1
-\hat{\bar{B}}^{(0)\dagger}_2)]\nonumber\\
&=&\cosh r\hat{B}^{(0)}_1+\sinh r \hat{B}^{(0)\dagger}_2,\nonumber\\
\hat{B}_2&=&\case{1}{\sqrt{2}}(\hat{\bar{B}}_1-\hat{\bar{B}}_2)\nonumber\\
&=&\case{1}{\sqrt{2}}[\cosh r(\hat{\bar{B}}^{(0)}_1-\hat{\bar{B}}^{(0)}_2)
+\sinh r(\hat{\bar{B}}^{(0)\dagger}_1
+\hat{\bar{B}}^{(0)\dagger}_2)]\nonumber\\
&=&\cosh r\hat{B}^{(0)}_2+\sinh r \hat{B}^{(0)\dagger}_1,
\end{eqnarray}
and
\begin{eqnarray}\label{1.22}
\hat{X}_1&=&\case{1}{\sqrt{2}}(\hat{\bar{X}}_1+\hat{\bar{X}}_2)=\case{1}
{\sqrt{2}}(e^{r} \hat{\bar{X}}^{(0)}_1+e^{-r} \hat{\bar{X}}^{(0)}_2),
\nonumber\\
\hat{P}_1&=&\case{1}{\sqrt{2}}(\hat{\bar{P}}_1+\hat{\bar{P}}_2)=\case{1}
{\sqrt{2}}(e^{-r} \hat{\bar{P}}^{(0)}_1+e^{r} \hat{\bar{P}}^{(0)}_2),
\nonumber\\
\hat{X}_2&=&\case{1}{\sqrt{2}}(\hat{\bar{X}}_1-\hat{\bar{X}}_2)=\case{1}
{\sqrt{2}}(e^{r} \hat{\bar{X}}^{(0)}_1-e^{-r} \hat{\bar{X}}^{(0)}_2),
\nonumber\\
\hat{P}_2&=&\case{1}{\sqrt{2}}(\hat{\bar{P}}_1-\hat{\bar{P}}_2)=\case{1}
{\sqrt{2}}(e^{-r} \hat{\bar{P}}^{(0)}_1-e^{r} \hat{\bar{P}}^{(0)}_2),
\end{eqnarray}
as the two-mode squeezed state in Eqs.~(\ref{1.2}). The coupled modes in
Eqs.~(\ref{1.21}) expressed in terms of $\hat{B}^{(0)}_1$ and
$\hat{B}^{(0)}_2$ are the two NOPA output modes of Eq.~(\ref{1.12}),
if $\Omega\to 0$ and $G(0)=\cosh r$, $g(0)=\sinh r$.
More generally, for $\Omega\neq 0$, the quadratures corresponding
to Eqs.~(\ref{1.18}),
\begin{eqnarray}\label{1.23}
&&\hat{\bar{X}}_1(\Omega)=[G(\Omega)+g(\Omega)] \hat{\bar{X}}^{(0)}_1
(\Omega),\;\;\;\hat{\bar{P}}_1(\Omega)=[G(\Omega)-g(\Omega)]
\hat{\bar{P}}^{(0)}_1(\Omega),
\nonumber\\
&&\hat{\bar{X}}_2(\Omega)=[G(\Omega)-g(\Omega)] \hat{\bar{X}}^{(0)}_2
(\Omega),\;\;\;\hat{\bar{P}}_2(\Omega)=[G(\Omega)+g(\Omega)]
\hat{\bar{P}}^{(0)}_2(\Omega),
\end{eqnarray}
are coupled to yield
\begin{eqnarray}\label{1.24}
\hat{X}_1(\Omega)&=&\case{1}{\sqrt{2}}[G(\Omega)+g(\Omega)]
\hat{\bar{X}}^{(0)}_1(\Omega)+\case{1}{\sqrt{2}}[G(\Omega)-g(\Omega)]
\hat{\bar{X}}^{(0)}_2(\Omega),\nonumber\\
\hat{P}_1(\Omega)&=&\case{1}{\sqrt{2}}[G(\Omega)-g(\Omega)]
\hat{\bar{P}}^{(0)}_1(\Omega)+\case{1}{\sqrt{2}}[G(\Omega)+g(\Omega)]
\hat{\bar{P}}^{(0)}_2(\Omega),\nonumber\\
\hat{X}_2(\Omega)&=&\case{1}{\sqrt{2}}[G(\Omega)+g(\Omega)]
\hat{\bar{X}}^{(0)}_1(\Omega)-\case{1}{\sqrt{2}}[G(\Omega)-g(\Omega)]
\hat{\bar{X}}^{(0)}_2(\Omega),\nonumber\\
\hat{P}_2(\Omega)&=&\case{1}{\sqrt{2}}[G(\Omega)-g(\Omega)]
\hat{\bar{P}}^{(0)}_1(\Omega)-\case{1}{\sqrt{2}}[G(\Omega)+g(\Omega)]
\hat{\bar{P}}^{(0)}_2(\Omega).
\end{eqnarray}
The quadratures in Eqs.~(\ref{1.24}) are precisely the NOPA output
quadratures of Eqs.~(\ref{1.15}) as anticipated. With the functions
$G(\Omega)$ and $g(\Omega)$ of Eqs.~(\ref{1.13}), we obtain
\begin{eqnarray}\label{1.25}
G(\Omega)-g(\Omega)&=&\frac{\gamma/2 - \kappa + i\Omega}{\gamma/2 +
\kappa - i\Omega}\,\,\,,\nonumber\\
G(\Omega)+g(\Omega)&=&\frac{(\gamma/2 + \kappa)^2 + \Omega^2}
{(\gamma/2-i \Omega)^2-\kappa^2}\,\,\,.
\end{eqnarray}
For the limits $\Omega\to 0$, $\kappa\to\gamma/2$ (the limit of
infinite squeezing), we obtain $[G(\Omega)-g(\Omega)]\to 0$ and
$[G(\Omega)+g(\Omega)]\to\infty$. If $\Omega\to 0$, $\kappa\to 0$
(the classical limit of no squeezing), then $[G(\Omega)-g(\Omega)]\to 1$
and $[G(\Omega)+g(\Omega)]\to 1$. Thus for $\Omega\to 0$,
Eqs.~(\ref{1.24}) in the above mentioned limits correspond to
Eqs.~(\ref{1.22}) in the analogous limits $r\to\infty$
(infinite squeezing) and $r\to 0$ (no squeezing).
For large squeezing, apparently the individual modes of the
``broadband two-mode squeezed state'' in Eqs.~(\ref{1.24}) are very
noisy. In general, the input vacuum modes are amplified in the NOPA,
resulting in output modes with large fluctuations. But the correlations
between the two modes increase simultaneously, so that $[\hat{X}_1
(\Omega)-\hat{X}_2(\Omega)]\to 0$ and $[\hat{P}_1(\Omega)+\hat{P}_2
(\Omega)]\to 0$ for $\Omega\to 0$ and $\kappa\to\gamma/2$.
The squeezing spectra of the independently squeezed modes can be derived
from Eqs.~(\ref{1.23}) and are given by the spectral variances
\begin{eqnarray}\label{spectra}
\langle\Delta\hat{\bar{X}}_1^{\dagger}(\Omega)\Delta\hat{\bar{X}}_1
(\Omega')\rangle&=&\langle\Delta\hat{\bar{P}}_2^{\dagger}(\Omega)
\Delta\hat{\bar{P}}_2(\Omega')\rangle=
\delta(\Omega-\Omega')|S_+(\Omega)|^2\langle\Delta\hat{X}^2
\rangle_{\rm vacuum},\nonumber\\
\langle\Delta\hat{\bar{X}}_2^{\dagger}(\Omega)\Delta\hat{\bar{X}}_2
(\Omega')\rangle&=&\langle\Delta\hat{\bar{P}}_1^{\dagger}(\Omega)
\Delta\hat{\bar{P}}_1(\Omega')\rangle=
\delta(\Omega-\Omega')|S_-(\Omega)|^2\langle\Delta\hat{X}^2
\rangle_{\rm vacuum},
\end{eqnarray}
where $|S_+(\Omega)|^2=|G(\Omega)+g(\Omega)|^2$ and
$|S_-(\Omega)|^2=|G(\Omega)-g(\Omega)|^2$ ($\langle\Delta\hat{X}^2
\rangle_{\rm vacuum}=\case{1}{4}$).
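As a numerical illustration (not part of the original derivation), the
squeezing spectra $|S_\pm(\Omega)|^2$ of the NOPA can be evaluated directly
from Eqs.~(\ref{1.25}); the following minimal Python sketch, with purely
illustrative parameter values, shows the quiet quadrature
$|S_-(\Omega)|^2\to 0$ and the noisy one $|S_+(\Omega)|^2$ growing as
$\kappa\to\gamma/2$ and $\Omega\to 0$:
\begin{verbatim}
import numpy as np

def S_plus_minus(Omega, kappa, gamma):
    """S_+ = G+g and S_- = G-g for the lossless NOPA, Eqs. (1.25)."""
    S_m = (gamma/2 - kappa + 1j*Omega)/(gamma/2 + kappa - 1j*Omega)
    S_p = ((gamma/2 + kappa)**2 + Omega**2)/((gamma/2 - 1j*Omega)**2 - kappa**2)
    return S_p, S_m

gamma = 1.0                                # cavity damping rate sets the scale
for kappa in (0.25, 0.45, 0.499):          # kappa -> gamma/2: strong squeezing
    for Omega in (0.0, 0.1, 0.5):          # modulation frequency, units of gamma
        S_p, S_m = S_plus_minus(Omega, kappa, gamma)
        print(kappa, Omega, abs(S_p)**2, abs(S_m)**2)
\end{verbatim}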
In general, Eqs.~(\ref{spectra}) may define arbitrary squeezing spectra
of two statistically identical but independent broadband
squeezed states. The two corresponding squeezed modes
\begin{eqnarray}\label{general}
&&\hat{\bar{X}}_1(\Omega)=S_+(\Omega) \hat{\bar{X}}^{(0)}_1
(\Omega),\;\;\;\hat{\bar{P}}_1(\Omega)=S_-(\Omega)
\hat{\bar{P}}^{(0)}_1(\Omega),
\nonumber\\
&&\hat{\bar{X}}_2(\Omega)=S_-(\Omega) \hat{\bar{X}}^{(0)}_2
(\Omega),\;\;\;\hat{\bar{P}}_2(\Omega)=S_+(\Omega)
\hat{\bar{P}}^{(0)}_2(\Omega),
\end{eqnarray}
where $S_-(\Omega)$ refers to the quiet quadratures and $S_+(\Omega)$
to the noisy ones, can be used as an EPR source for the following broadband
teleportation scheme when they are combined at a beam splitter:
\begin{eqnarray}\label{general2}
\hat{X}_1(\Omega)&=&\case{1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{X}}^{(0)}_1(\Omega)+\case{1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{X}}^{(0)}_2(\Omega),\nonumber\\
\hat{P}_1(\Omega)&=&\case{1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{P}}^{(0)}_1(\Omega)+\case{1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{P}}^{(0)}_2(\Omega),\nonumber\\
\hat{X}_2(\Omega)&=&\case{1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{X}}^{(0)}_1(\Omega)-\case{1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{X}}^{(0)}_2(\Omega),\nonumber\\
\hat{P}_2(\Omega)&=&\case{1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{P}}^{(0)}_1(\Omega)-\case{1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{P}}^{(0)}_2(\Omega).
\end{eqnarray}
Before obtaining this ``broadband two-mode squeezed vacuum state,''
the squeezing of the two initial modes may be generated by
any nonlinear interaction, e.g., apart from the OPA, also
by four-wave mixing in a cavity \cite{Slush}.
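To illustrate the EPR character of the coupled modes in
Eqs.~(\ref{general2}), one may sample the input vacuum quadratures from the
Gaussian vacuum Wigner function, i.e.\ as independent Gaussian variables of
variance $\case{1}{4}$, and treat $S_\pm$ as real numbers $e^{\pm r}$. This
is only a heuristic sketch with an arbitrary squeezing parameter; the sample
variances of $\hat{X}_1-\hat{X}_2$ and $\hat{P}_1+\hat{P}_2$ then approach
$\case{1}{2}e^{-2r}$ and vanish for large squeezing:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
r = 1.5                                   # squeezing parameter (illustrative)
S_plus, S_minus = np.exp(r), np.exp(-r)   # noisy / quiet quadrature factors

N = 200000
# vacuum quadratures of the two squeezed input modes, variance 1/4 each
X1_0, X2_0, P1_0, P2_0 = rng.normal(0.0, 0.5, size=(4, N))

# beam-splitter outputs, Eqs. (general2) with real S_+ and S_-
X1 = (S_plus*X1_0 + S_minus*X2_0)/np.sqrt(2)
X2 = (S_plus*X1_0 - S_minus*X2_0)/np.sqrt(2)
P1 = (S_minus*P1_0 + S_plus*P2_0)/np.sqrt(2)
P2 = (S_minus*P1_0 - S_plus*P2_0)/np.sqrt(2)

print(np.var(X1 - X2), np.var(P1 + P2))   # both approach exp(-2r)/2
print(np.exp(-2*r)/2)
\end{verbatim}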
\section{Teleportation of a broadband field}
For the teleportation of an electromagnetic field with finite bandwidth,
the EPR state shared by Alice and Bob should be a broadband two-mode
squeezed state, as discussed in the previous section.
The incoming electromagnetic field
to be teleported $\hat{E}_{\rm in}(z,t)=\hat{E}_{\rm in}^{(+)}(z,t)+
\hat{E}_{\rm in}^{(-)}(z,t)$, traveling in positive-z direction and
having a single polarization, can be described by the positive-frequency
part
\begin{eqnarray}\label{field}
\hat{E}_{\rm in}^{(+)}(z,t)&=&[\hat{E}_{\rm in}^{(-)}(z,t)]^{\dagger}=
\int_{\rm W} d\omega\frac{1}{\sqrt{2\pi}}\left(\frac{u\hbar\omega}
{2cA_{\rm tr}}\right)^{1/2}\hat{b}_{\rm in}(\omega)e^{-i\omega(t-z/c)}.
\end{eqnarray}
The integral runs over a relevant bandwidth W centered on $\omega_0$;
$A_{\rm tr}$ represents the transverse structure of the field, and $u$ is
a units-dependent constant (in Gaussian units $u=4\pi$) \cite{Schum}.
The annihilation and creation operators $\hat{b}_{\rm in}(\omega)$ and
$\hat{b}_{\rm in}^{\dagger}(\omega)$ satisfy the commutation relations
$[\hat{b}_{\rm in}(\omega),\hat{b}_{\rm in}(\omega')]=0$ and
$[\hat{b}_{\rm in}(\omega),\hat{b}_{\rm in}^{\dagger}(\omega')]=\delta
(\omega-\omega')$.
The incoming electromagnetic field may now be described in a rotating
frame as
\begin{eqnarray}\label{1.26}
\hat{B}_{\rm in}(t)=\hat{X}_{\rm in}(t)+i\hat{P}_{\rm in}(t)=
[\hat{x}_{\rm in}(t)+i\hat{p}_{\rm in}(t)]e^{i\omega_0t}=
\hat{b}_{\rm in}(t)e^{i\omega_0t},
\end{eqnarray}
as in Eq.~(\ref{1.9}) with
\begin{eqnarray}\label{1.27}
\hat{B}_{\rm in}(\Omega)=\frac{1}{\sqrt{2\pi}}\int dt\hat{B}_{\rm in}(t)
e^{i\Omega t},
\end{eqnarray}
as in Eq.~(\ref{1.10}) and commutation relations $[\hat{B}_{\rm in}
(\Omega),\hat{B}_{\rm in}(\Omega')]=0$, $[\hat{B}_{\rm in}
(\Omega),\hat{B}_{\rm in}^{\dagger}(\Omega')]=\delta(\Omega - \Omega')$.
Of course, the unknown input field is not completely arbitrary.
In the case of an EPR state from the NOPA, we will see
that for successful quantum teleportation, the center of the input field's
spectral range W should be around the NOPA center frequency $\omega_0$
(half the pump frequency of the NOPA). Further, as we shall see, its
spectral width should be small with respect to the NOPA bandwidth to
benefit from the EPR correlations of the NOPA output. As for the
transverse structure and the single polarization of the input field, we
assume that both are known to all participants.
In spite of these complications, the teleportation protocol is performed
in a fashion almost identical to the zero-bandwidth case. The EPR state
of modes 1 and 2 is produced either directly as the NOPA output or by the
superposition of two independently squeezed beams, as discussed in the
preceding section. Mode 1 is sent to Alice and mode 2 is sent to Bob
(see Fig.~1) where for the case of the NOPA, these modes correspond to
two orthogonal polarizations. Alice arranges to superimpose mode 1 with
the unknown input field at a 50/50 beam splitter,
yielding for the relevant quadratures
\begin{eqnarray}\label{1.28}
\hat{X}_{\rm u}(\Omega)&=&\case{1}{\sqrt{2}}\hat{X}_{\rm in}(\Omega)
-\case{1}{\sqrt{2}}\hat{X}_1(\Omega),\nonumber\\
\hat{P}_{\rm v}(\Omega)&=&\case{1}{\sqrt{2}}\hat{P}_{\rm in}(\Omega)+
\case{1}{\sqrt{2}}\hat{P}_1(\Omega).
\end{eqnarray}
Using Eqs.~(\ref{1.28}) we will find it useful to write the quadrature
operators of Bob's mode 2 as
\begin{eqnarray}\label{Bob2}
\hat{X}_2(\Omega)&=&\hat{X}_{\rm in}(\Omega)-
[\hat{X}_1(\Omega)-\hat{X}_2(\Omega)]-
\sqrt{2}\hat{X}_{\rm u}(\Omega)\nonumber\\
&=&\hat{X}_{\rm in}(\Omega)-
\sqrt{2}S_-(\Omega)\hat{\bar{X}}^{(0)}_2(\Omega)-
\sqrt{2}\hat{X}_{\rm u}(\Omega),\nonumber\\
\hat{P}_2(\Omega)&=&\hat{P}_{\rm in}(\Omega)+
[\hat{P}_1(\Omega)+\hat{P}_2(\Omega)]-
\sqrt{2}\hat{P}_{\rm v}(\Omega)\nonumber\\
&=&\hat{P}_{\rm in}(\Omega)+
\sqrt{2}S_-(\Omega)\hat{\bar{P}}^{(0)}_1(\Omega)-
\sqrt{2}\hat{P}_{\rm v}(\Omega).
\end{eqnarray}
Here we have used Eqs.~(\ref{general2}).
How is Alice's ``Bell detection,'' which yields classical photocurrents,
performed? The photocurrent operators for the two homodyne detections,
$\hat{i}_{\rm u}(t)\propto |E_{\rm LO}^{X}|\hat{X}_{\rm u}(t)$ and
$\hat{i}_{\rm v}(t)\propto |E_{\rm LO}^{P}|\hat{P}_{\rm v}(t)$,
can be written (without loss of generality we assume $\Omega>0$) as
\begin{eqnarray}\label{currents}
\hat{i}_{\rm u}(t)&\propto&|E_{\rm LO}^{X}|\int_{\rm W} d\Omega\,
h_{\rm el}(\Omega)\left[\hat{X}_{\rm u}(\Omega)e^{-i\Omega t}+
\hat{X}^{\dagger}_{\rm u}(\Omega)e^{i\Omega t}\right],\nonumber\\
\hat{i}_{\rm v}(t)&\propto&|E_{\rm LO}^{P}|\int_{\rm W} d\Omega\,
h_{\rm el}(\Omega)\left[\hat{P}_{\rm v}(\Omega)e^{-i\Omega t}+
\hat{P}^{\dagger}_{\rm v}(\Omega)e^{i\Omega t}\right],
\end{eqnarray}
with a noiseless, classical local oscillator (LO) and $h_{\rm el}(\Omega)$
representing the detectors' responses within their electronic bandwidths
$\Delta\Omega_{\rm el}$: $h_{\rm el}(\Omega)=1$ for $\Omega\leq
\Delta\Omega_{\rm el}$ and zero otherwise. We assume that the relevant
bandwidth W ($\sim$ MHz) is fully covered by the electronic bandwidth
of the detectors ($\sim$ GHz). Therefore, $h_{\rm el}(\Omega)\equiv 1$
in Eqs.~(\ref{currents}).
These photocurrents are measured continuously in time and fed forward
to Bob via a classical channel of sufficient RF bandwidth.
Each of them must be treated as a complex quantity in order to keep
track of the RF phase.
The whole feedforward process, continuously performed in the time domain
(i.e., performed every inverse-bandwidth time),
includes Alice's detections, her classical transmission and corresponding
amplitude and phase modulations of Bob's EPR beam.
Any {\it relative} delay between the classical information conveyed by
Alice and Bob's EPR beam must satisfy $\Delta t\ll 1/\Delta\Omega$,
where $1/\Delta\Omega$ is the inverse bandwidth of the EPR source
(for an EPR state from the NOPA: $\Delta t\ll\gamma^{-1}$).
Expressed in the frequency domain, the final modulations can be described
by the classical ``displacements''
\begin{eqnarray}\label{1.29}
\hat{X}_2(\Omega)\longrightarrow\hat{X}_{\rm tel}(\Omega)&=&\hat{X}_2
(\Omega)+\Gamma(\Omega)\sqrt{2}X_{\rm u}(\Omega),\nonumber\\
\hat{P}_2(\Omega)\longrightarrow\hat{P}_{\rm tel}(\Omega)&=&\hat{P}_2
(\Omega)+\Gamma(\Omega)\sqrt{2}P_{\rm v}(\Omega).
\end{eqnarray}
The parameter $\Gamma(\Omega)$ is again a suitably normalized gain
(now, in general, depending on $\Omega$).
For $\Gamma(\Omega)=1$, Bob's displacements from Eqs.~(\ref{1.29})
exactly eliminate $\hat{X}_{\rm u}(\Omega)$ and $\hat{P}_{\rm v}(\Omega)$
in Eqs.~(\ref{Bob2}). The same applies to the Hermitian conjugate
versions of Eqs.~(\ref{Bob2}) and Eqs.~(\ref{1.29}).
We obtain the teleported field
\begin{eqnarray}\label{1.30}
\hat{X}_{\rm tel}(\Omega)&=&\hat{X}_{\rm in}(\Omega)-
\sqrt{2}S_-(\Omega)\hat{\bar{X}}^{(0)}_2(\Omega),\nonumber\\
\hat{P}_{\rm tel}(\Omega)&=&\hat{P}_{\rm in}(\Omega)+
\sqrt{2}S_-(\Omega)\hat{\bar{P}}^{(0)}_1(\Omega).
\end{eqnarray}
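The algebra leading to Eqs.~(\ref{1.30}) can be checked symbolically by
combining Eqs.~(\ref{general2}), (\ref{1.28}) and the unit-gain
displacements of Eqs.~(\ref{1.29}), identifying the classical measurement
outcomes with the corresponding operators. The following Python sketch is a
consistency check only; the operators are treated as commuting symbols,
which suffices for these linear relations:
\begin{verbatim}
import sympy as sp

# operators treated as commuting symbols
Xin, Pin, X1_0, X2_0, P1_0, P2_0, Sp, Sm = sp.symbols(
    'Xin Pin X1_0 X2_0 P1_0 P2_0 Sp Sm')
s2 = sp.sqrt(2)

# EPR modes 1 and 2, Eqs. (general2)
X1 = (Sp*X1_0 + Sm*X2_0)/s2
P1 = (Sm*P1_0 + Sp*P2_0)/s2
X2 = (Sp*X1_0 - Sm*X2_0)/s2
P2 = (Sm*P1_0 - Sp*P2_0)/s2

# Alice's beam splitter, Eqs. (1.28)
Xu = (Xin - X1)/s2
Pv = (Pin + P1)/s2

# Bob's displacements, Eqs. (1.29) with unit gain
Xtel = sp.expand(X2 + s2*Xu)
Ptel = sp.expand(P2 + s2*Pv)

print(Xtel)   # Xin - sqrt(2)*Sm*X2_0, i.e. Eq. (1.30)
print(Ptel)   # Pin + sqrt(2)*Sm*P1_0, i.e. Eq. (1.30)
\end{verbatim}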
For an arbitrary gain $\Gamma(\Omega)$, the teleported field becomes
\begin{eqnarray}\label{arbgain}
\hat{X}_{\rm tel}(\Omega)=\Gamma(\Omega)\hat{X}_{\rm in}(\Omega)
&-&\frac{\Gamma(\Omega)-1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{X}}^{(0)}_1(\Omega)\nonumber\\
&-&\frac{\Gamma(\Omega)+1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{X}}^{(0)}_2(\Omega),\nonumber\\
\hat{P}_{\rm tel}(\Omega)=\Gamma(\Omega)\hat{P}_{\rm in}(\Omega)
&+&\frac{\Gamma(\Omega)-1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{P}}^{(0)}_2(\Omega)\nonumber\\
&+&\frac{\Gamma(\Omega)+1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{P}}^{(0)}_1(\Omega).
\end{eqnarray}
In general, these equations contain non-Hermitian operators with
nonreal coefficients. Let us assume an EPR state from the NOPA,
$S_{\pm}(\Omega)=G(\Omega)\pm g(\Omega)$.
In the zero-bandwidth limit, the
quadrature operators are Hermitian and the coefficients in
Eqs.~(\ref{1.30}) and Eqs.~(\ref{arbgain}) are real.
For $\Omega\to 0$ and $\Gamma(\Omega)=1$,
the teleported quadratures computed from the above
equations are, in agreement with the zero-bandwidth results, given by
$\hat{X}_{\rm tel}=\hat{X}_{\rm in}$ and
$\hat{P}_{\rm tel}=\hat{P}_{\rm in}$, if
$\kappa\to\gamma/2$ and hence $[G(\Omega)-g(\Omega)]\to 0$ (infinite
squeezing). Thus, for zero bandwidth and an infinite degree of EPR
correlations, Alice's unknown quantum state of mode ``in'' is exactly
reconstituted by Bob after generating the output mode ``tel'' through
unit-gain displacements.
However, we are particularly interested in the physical case of
finite bandwidth. Apparently, in unit-gain teleportation,
the complete disappearance of the two classical quduties
for perfect teleportation requires $\Omega=0$ (with an EPR state from the
NOPA). Does this mean an increasing bandwidth always leads to
deteriorating quantum teleportation? In order to make quantitative
statements about this issue, we consider input states with a
coherent amplitude (unit-gain teleportation) and calculate
the spectral variances of the teleported quadratures for a coherent-state
input to obtain a ``fidelity spectrum.''
\subsection{Teleporting broadband Gaussian fields with a coherent amplitude}
Let us employ teleportation equations for the real and imaginary parts
of the non-Hermitian quadrature operators. In order to achieve a nonzero
average fidelity when teleporting fields with a coherent amplitude,
we assume $\Gamma(\Omega)=1$. According to
Eqs.~(\ref{1.30}), the real and imaginary parts of the teleported
quadratures are
\begin{eqnarray}\label{1.31}
{\rm Re}\hat{X}_{\rm tel}(\Omega)=
{\rm Re}\hat{X}_{\rm in}(\Omega)
&-&\sqrt{2}{\rm Re}[S_-(\Omega)]{\rm Re}\hat{\bar{X}}^{(0)}_2(\Omega)
\nonumber\\
&+&\sqrt{2}{\rm Im}[S_-(\Omega)]{\rm Im}\hat{\bar{X}}^{(0)}_2(\Omega),
\nonumber\\
{\rm Re}\hat{P}_{\rm tel}(\Omega)=
{\rm Re}\hat{P}_{\rm in}(\Omega)
&+&\sqrt{2}{\rm Re}[S_-(\Omega)]{\rm Re}\hat{\bar{P}}^{(0)}_1(\Omega)
\nonumber\\
&-&\sqrt{2}{\rm Im}[S_-(\Omega)]{\rm Im}\hat{\bar{P}}^{(0)}_1(\Omega),
\nonumber\\
{\rm Im}\hat{X}_{\rm tel}(\Omega)=
{\rm Im}\hat{X}_{\rm in}(\Omega)
&-&\sqrt{2}{\rm Im}[S_-(\Omega)]{\rm Re}\hat{\bar{X}}^{(0)}_2(\Omega)
\nonumber\\
&-&\sqrt{2}{\rm Re}[S_-(\Omega)]{\rm Im}\hat{\bar{X}}^{(0)}_2(\Omega),
\nonumber\\
{\rm Im}\hat{P}_{\rm tel}(\Omega)=
{\rm Im}\hat{P}_{\rm in}(\Omega)
&+&\sqrt{2}{\rm Im}[S_-(\Omega)]{\rm Re}\hat{\bar{P}}^{(0)}_1(\Omega)
\nonumber\\
&+&\sqrt{2}{\rm Re}[S_-(\Omega)]{\rm Im}\hat{\bar{P}}^{(0)}_1(\Omega).
\end{eqnarray}
Their only nontrivial commutators are
\begin{eqnarray}\label{1.32}
[{\rm Re}\hat{X}_j(\Omega),{\rm Re}\hat{P}_j(\Omega')]=[{\rm Im}
\hat{X}_j(\Omega),{\rm Im}\hat{P}_j(\Omega')]=(i/4)\;\delta(\Omega-\Omega'),
\end{eqnarray}
where we have used Eqs.~(\ref{1.14}) and $[\hat{B}_j(\Omega),
\hat{B}_j^{\dagger}(\Omega')]=\delta(\Omega - \Omega')$.
We define spectral variances similar to Eq.~(\ref{telin}),
\begin{eqnarray}\label{def1}
\frac{\langle\Delta[{\rm Re}\hat{X}_{\rm tel}(\Omega)-
{\rm Re}\hat{X}_{\rm in}(\Omega)]\Delta[{\rm Re}
\hat{X}_{\rm tel}(\Omega')-{\rm Re}\hat{X}_{\rm in}(\Omega')]
\rangle}{\langle\Delta{\rm Re}\hat{X}^2\rangle
_{\rm vacuum}}&\equiv&\delta(\Omega-\Omega')
V_{\rm tel,in}^{{\rm Re}\hat{X}}(\Omega).
\end{eqnarray}
We analogously define
$V_{\rm tel,in}^{{\rm Re}\hat{P}}(\Omega)$, $V_{\rm tel,in}^{{\rm Im}
\hat{X}}(\Omega)$, and $V_{\rm tel,in}^{{\rm Im}\hat{P}}(\Omega)$ with
${\rm Re}\hat{X}\rightarrow{\rm Re}\hat{P}$ etc. throughout.
From Eqs.~(\ref{1.31}), we obtain
\begin{eqnarray}\label{1.34}
V_{\rm tel,in}^{{\rm Re}\hat{X}}(\Omega)&=&
V_{\rm tel,in}^{{\rm Re}\hat{P}}(\Omega)=
V_{\rm tel,in}^{{\rm Im}\hat{X}}(\Omega)=
V_{\rm tel,in}^{{\rm Im}\hat{P}}(\Omega)\nonumber\\
&=&2\;|S_-(\Omega)|^2.
\end{eqnarray}
Here we have used that
\begin{eqnarray}
\langle\Delta{\rm Re}\hat{\bar{X}}^{(0)}_j(\Omega)
\Delta{\rm Re}\hat{\bar{X}}^{(0)}_j(\Omega')\rangle&=&
\delta(\Omega-\Omega')\langle\Delta{\rm Re}\hat{X}^2\rangle
_{\rm vacuum}=\nonumber\\
\langle\Delta{\rm Im}\hat{\bar{X}}^{(0)}_j(\Omega)
\Delta{\rm Im}\hat{\bar{X}}^{(0)}_j(\Omega')\rangle&=&
\delta(\Omega-\Omega')\langle\Delta{\rm Im}\hat{X}^2\rangle
_{\rm vacuum},
\end{eqnarray}
and analogously for the other quadrature, and
\begin{eqnarray}
\langle\Delta{\rm Re}\hat{\bar{X}}^{(0)}_j(\Omega)
\Delta{\rm Im}\hat{\bar{X}}^{(0)}_j(\Omega')\rangle=
\langle\Delta{\rm Re}\hat{\bar{P}}^{(0)}_j(\Omega)
\Delta{\rm Im}\hat{\bar{P}}^{(0)}_j(\Omega')\rangle=0\,\,\,.
\end{eqnarray}
Thus, for unit-gain teleportation at all frequencies, it turns out that
{\it the variance of each teleported quadrature is given
by the variance of the input quadrature plus
twice the squeezing spectrum of the quiet quadrature of a decoupled
mode in a ``broadband squeezed state''} as in Eqs.~(\ref{general}).
The excess noise in each teleported quadrature after the teleportation
process is, relative to the vacuum noise, {\it twice} the
squeezing spectrum $|S_-(\Omega)|^2$ from Eqs.~(\ref{spectra}).
We also obtain these results by directly defining
\begin{eqnarray}\label{def2}
\frac{\langle\Delta[\hat{X}^{\dagger}_{\rm tel}(\Omega)-
\hat{X}^{\dagger}_{\rm in}(\Omega)]\Delta[\hat{X}_{\rm tel}(\Omega')-
\hat{X}_{\rm in}(\Omega')]\rangle}{\langle\Delta\hat{X}^2
\rangle_{\rm vacuum}}&\equiv&\delta(\Omega-\Omega')
V_{\rm tel,in}^{\hat{X}}(\Omega).
\end{eqnarray}
We analogously define
$V_{\rm tel,in}^{\hat{P}}(\Omega)$ with $\hat{X}\rightarrow\hat{P}$
throughout. Using Eqs.~(\ref{1.30}), these variances become for
$\Gamma(\Omega)=1$
\begin{eqnarray}\label{quadrvar}
V_{\rm tel,in}^{\hat{X}}(\Omega)&=&
V_{\rm tel,in}^{\hat{P}}(\Omega)=2\;|S_-(\Omega)|^2.
\end{eqnarray}
We calculate some limits for $V_{\rm tel,in}^{\hat{X}}(\Omega)$ of
Eq.~(\ref{quadrvar}), assuming an EPR state from the NOPA,
$S_-(\Omega)=G(\Omega)-g(\Omega)$.
Since $V_{\rm tel,in}^{\hat{X}}(\Omega)=
V_{\rm tel,in}^{\hat{P}}(\Omega)$ and $\Gamma(\Omega)=1$, we can name
the limits according to the criterion of Eq.~(\ref{limit}).\\
{\bf Classical teleportation, $\kappa\to 0$}:\\
$V_{\rm tel,in}^{\hat{X}}(\Omega)=2$, which is independent of the
modulation frequency $\Omega$.\\
{\bf Zero-bandwidth quantum teleportation, $\Omega\to 0$, $\kappa> 0$}:\\
$V_{\rm tel,in}^{\hat{X}}(\Omega)=2\,
[1-2\kappa\gamma/(\kappa+\gamma/2)^2]$, and in the
ideal case of infinite squeezing $\kappa\to \gamma/2$: $V_{\rm tel,in}
^{\hat{X}}(\Omega)=0$.\\
{\bf Broadband quantum teleportation, $\Omega> 0$, $\kappa> 0$}:\\
$V_{\rm tel,in}^{\hat{X}}(\Omega)=2\,
\{1-2\kappa\gamma/[(\kappa+\gamma/2)^2+\Omega^2]\}$,
and in the ideal case $\kappa\to \gamma/2$:
$V_{\rm tel,in}^{\hat{X}}(\Omega)=2\,[\Omega^2/(\gamma^2+\Omega^2)]$.
So it turns out that ideal quantum teleportation can be approached also for
finite bandwidth, provided $\Omega\ll \gamma$.
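These limits are easily checked numerically from
$V_{\rm tel,in}^{\hat{X}}(\Omega)=2|G(\Omega)-g(\Omega)|^2$; in the
following sketch the parameter values are illustrative only:
\begin{verbatim}
import numpy as np

def V_tel_in(Omega, kappa, gamma):
    """V = 2 |S_-|^2 = 2 |G - g|^2 for the lossless NOPA."""
    S_m = (gamma/2 - kappa + 1j*Omega)/(gamma/2 + kappa - 1j*Omega)
    return 2*abs(S_m)**2

gamma = 1.0
print(V_tel_in(0.0, 1e-6, gamma))        # classical limit, kappa -> 0:  ~ 2
print(V_tel_in(0.0, gamma/2, gamma))     # zero bandwidth, kappa -> gamma/2: 0
Omega = 0.1*gamma
print(V_tel_in(Omega, gamma/2, gamma),   # broadband, kappa = gamma/2
      2*Omega**2/(gamma**2 + Omega**2))  # compare 2 Omega^2/(gamma^2+Omega^2)
\end{verbatim}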
We can express $V_{\rm tel,in}^{\hat{X}}(\Omega)$ in terms of
experimental parameters relevant to the NOPA. For this purpose,
we use the dimensionless quantities from Ref.~\onlinecite{Ou},
\begin{eqnarray}\label{1.35}
\epsilon=\frac{2\kappa}{\gamma+\rho}=
\sqrt{\frac{P_{\rm pump}}{P_{\rm thres}}}\,\,\,,\;\;\;\;
\omega=\frac{2\Omega}{\gamma+\rho}=\frac{\Omega}{2\pi}\,
\frac{2F_{\rm cav}}{\nu_{\rm FSR}}\,\,\,.
\end{eqnarray}
Here, $P_{\rm pump}$ is the pump power, $P_{\rm thres}$ is the threshold
value, $F_{\rm cav}$ is the measured finesse of the cavity,
$\nu_{\rm FSR}$ is its free spectral range, and the parameter $\rho$
describes cavity losses (see Fig.~4). Note that we now use $\omega$
as a normalized modulation frequency in contrast to Eq.~(\ref{field})
and the following commutators where it was the frequency of the field
operators in the nonrotating frame.
The spectral variances for the lossless case ($\rho=0$)
can be written as a function of $\epsilon$ and $\omega$, namely,
\begin{eqnarray}\label{1.36}
V_{\rm tel,in}^{\hat{X}}(\epsilon,\omega)=
V_{\rm tel,in}^{\hat{P}}(\epsilon,\omega)=2\;\left[1-
\frac{4\epsilon}{(\epsilon+1)^2 + \omega^2}\right].
\end{eqnarray}
Now, the classical limit is $\epsilon\to 0$ ($V_{\rm tel,in}
^{\hat{X}}=2$, independent of $\omega$) and the ideal case
is $\epsilon\to 1$ [$V_{\rm tel,in}^{\hat{X}}(\epsilon,\omega)=
2\omega^2/(4+\omega^2)$]. Obviously,
perfect quantum teleportation is achieved for $\epsilon\to 1$ and
$\omega\to 0$. In fact, this limit can also be approached for finite
$\Omega\neq 0$ provided $\omega\ll 1$ or $\Omega\ll \gamma$. Note
that this condition is not specific to broadband teleportation, but is
simply the condition for broadband squeezing, i.e., for the generation of
highly squeezed quadratures at nonzero modulation frequencies $\Omega$.
Let us now assume coherent-state inputs
$\langle\Delta\hat{X}_{\rm in}^{\dagger}(\Omega)
\Delta\hat{X}_{\rm in}(\Omega')\rangle=
\langle\Delta\hat{P}_{\rm in}^{\dagger}(\Omega)
\Delta\hat{P}_{\rm in}(\Omega')\rangle=\case{1}{4}\delta(\Omega-\Omega')$
[and $\langle\Delta{\rm Re}\hat{X}_{\rm in}(\Omega)
\Delta{\rm Re}\hat{X}_{\rm in}(\Omega')\rangle=\case{1}{8}
\delta(\Omega-\Omega')$, etc.],
at all frequencies $\Omega$ in the relevant bandwidth W.
In order to obtain a spectrum of the fidelities in Eq.~(\ref{fid4})
with $\Gamma\rightarrow\Gamma(\Omega)=1$, we need the spectrum
of the Q functions of the teleported field with the spectral variances
$\sigma_x(\Omega)=\sigma_p(\Omega)=\case{1}{2}+\case{1}{4}
V_{\rm tel,in}^{\hat{X}}(\Omega)$.
We obtain the ``fidelity spectrum''
\begin{eqnarray}\label{fspec}
F(\Omega)=\frac{1}{1+|S_-(\Omega)|^2}\,\,\,.
\end{eqnarray}
Finally, with the new quantities
$\epsilon$ and $\omega$, the fidelity spectrum for quantum teleportation
of arbitrary broadband coherent states using broadband entanglement from the
NOPA ($\rho=0$) is given by
\begin{eqnarray}\label{fidspec}
F(\epsilon,\omega)=\left[2-\frac{4\epsilon}{(\epsilon+1)^2
+\omega^2}\right]^{-1}\,\,\,.
\end{eqnarray}
For different $\epsilon$ values, the spectrum of fidelities is shown
in Fig.~5. From the single-mode protocol (with ideal
detectors), we know that any nonzero
squeezing enables quantum teleportation and coherent-state inputs
can be teleported with $F=F_{\rm av}>\case{1}{2}$ for any $r>0$.
Correspondingly, the fidelity from Eq.~(\ref{fidspec}) exceeds
$\case{1}{2}$ for any nonzero $\epsilon$ at all finite frequencies,
since, provided $\epsilon>0$, the squeezing vanishes completely only as
$\omega\to\infty$. However, we had assumed [see after Eqs.~(\ref{1.14}):
$\Omega\ll \omega_0$] modulation frequencies $\Omega$ much smaller than
the NOPA center frequency $\omega_0$. In fact, for $\Omega\to
\omega_0$, squeezing becomes impossible at the frequency $\Omega$
\cite{Schum}. But also within the region $\Omega\ll \omega_0$,
the squeezing bandwidth is effectively limited, and hence so is the
bandwidth of quantum teleportation, $\Delta\omega\equiv 2\omega_{\rm max}$,
where $F(\omega)\approx\case{1}{2}$ ($<0.51$)
for all $\omega>\omega_{\rm max}$ and $F(\omega)>\case{1}{2}$
($\geq 0.51$)
for all $\omega\leq\omega_{\rm max}$.
According to Fig.~5, the ``effective teleportation
bandwidth'' is about $\Delta\omega\approx 5.8$ ($\epsilon=0.1$),
$\Delta\omega\approx 8.6$ ($\epsilon=0.2$),
$\Delta\omega\approx 12.4$ ($\epsilon=0.4$),
$\Delta\omega\approx 15.2$ ($\epsilon=0.6$), and
$\Delta\omega\approx 19.6$ ($\epsilon=1$). The maximum fidelities at
frequency $\omega=0$ are $F_{\rm max}\approx 0.6$ ($\epsilon=0.1$),
$F_{\rm max}\approx 0.69$ ($\epsilon=0.2$),
$F_{\rm max}\approx 0.84$ ($\epsilon=0.4$),
$F_{\rm max}\approx 0.94$ ($\epsilon=0.6$), and, of course,
$F_{\rm max}=1$ ($\epsilon=1$).
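The numbers quoted above follow from Eq.~(\ref{fidspec}). The sketch below
evaluates $F(\epsilon,0)$ and estimates $\omega_{\rm max}$ by reading off
where $F$ drops to $0.51$; since this threshold is only approximate, the
resulting bandwidths agree with the quoted values only to within a few
percent:
\begin{verbatim}
import numpy as np

def F(eps, omega):
    """Fidelity spectrum of Eq. (fidspec), lossless NOPA (rho = 0)."""
    return 1.0/(2.0 - 4.0*eps/((eps + 1.0)**2 + omega**2))

omega = np.linspace(0.0, 30.0, 300001)
for eps in (0.1, 0.2, 0.4, 0.6, 1.0):
    Fw = F(eps, omega)
    omega_max = omega[Fw > 0.51].max()     # largest frequency with F > 0.51
    print(eps, F(eps, 0.0), 2*omega_max)   # maximum fidelity and bandwidth
\end{verbatim}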
\subsection{Broadband entanglement swapping}
As discussed in Sec. III, we particularly want our teleportation device
to be capable of teleporting entanglement. We now present the
broadband theory of this entanglement swapping for continuous variables,
as it was proposed in Ref.~\onlinecite{PvL} for single modes.
Before any detections (see Fig.~3), Alice (mode 1) and
Claire (mode 2) share the
broadband two-mode squeezed state from Eqs.~(\ref{general2}), whereas
Claire (mode 3) and Bob (mode 4) share the corresponding entangled state
of modes 3 and 4 given by
\begin{eqnarray}\label{general3}
\hat{X}_3(\Omega)&=&\case{1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{X}}^{(0)}_3(\Omega)+\case{1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{X}}^{(0)}_4(\Omega),\nonumber\\
\hat{P}_3(\Omega)&=&\case{1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{P}}^{(0)}_3(\Omega)+\case{1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{P}}^{(0)}_4(\Omega),\nonumber\\
\hat{X}_4(\Omega)&=&\case{1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{X}}^{(0)}_3(\Omega)-\case{1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{X}}^{(0)}_4(\Omega),\nonumber\\
\hat{P}_4(\Omega)&=&\case{1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{P}}^{(0)}_3(\Omega)-\case{1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{P}}^{(0)}_4(\Omega).
\end{eqnarray}
Let us interpret the entanglement swapping here as quantum teleportation
of mode 2 to mode 4 using the entanglement of modes 3 and 4. This means
we want Bob to perform ``displacements'' based on the classical results
of Claire's Bell detection, i.e., the classical determination of
$\hat{X}_{\rm u}(\Omega)=[\hat{X}_2(\Omega)-\hat{X}_3(\Omega)]/\sqrt{2},
\hat{P}_{\rm v}(\Omega)=[\hat{P}_2(\Omega)+\hat{P}_3(\Omega)]/\sqrt{2}$.
These final ``displacements'' (amplitude and phase modulations) of mode 4
are crucial in order to reveal the entanglement from entanglement
swapping and, for verification, to finally exploit it in a second round of
quantum teleportation using the previously unentangled modes 1 and 4
\cite{PvL}.
The entire teleportation process with arbitrary gain $\Gamma(\Omega)$
that led to Eqs.~(\ref{arbgain}) now yields, for the teleportation of
mode 2 to mode 4, the teleported mode $4'$
[where in Eqs.~(\ref{arbgain}) simply
$\hat{X}_{\rm tel}(\Omega)\rightarrow\hat{X}_4'(\Omega),
\hat{P}_{\rm tel}(\Omega)\rightarrow\hat{P}_4'(\Omega),
\hat{X}_{\rm in}(\Omega)\rightarrow\hat{X}_2(\Omega),
\hat{P}_{\rm in}(\Omega)\rightarrow\hat{P}_2(\Omega),
\hat{\bar{X}}^{(0)}_1(\Omega)\rightarrow\hat{\bar{X}}^{(0)}_3(\Omega),
\hat{\bar{P}}^{(0)}_1(\Omega)\rightarrow\hat{\bar{P}}^{(0)}_3(\Omega),
\hat{\bar{X}}^{(0)}_2(\Omega)\rightarrow\hat{\bar{X}}^{(0)}_4(\Omega),
\hat{\bar{P}}^{(0)}_2(\Omega)\rightarrow\hat{\bar{P}}^{(0)}_4(\Omega)$,
and $\Gamma(\Omega)\rightarrow\Gamma_{\rm swap}(\Omega)$],
\begin{eqnarray}\label{arbgain2}
\hat{X}_4'(\Omega)&=&\frac{\Gamma_{\rm swap}(\Omega)}{\sqrt{2}}
[S_+(\Omega)\hat{\bar{X}}^{(0)}_1(\Omega)-S_-(\Omega)
\hat{\bar{X}}^{(0)}_2(\Omega)]\nonumber\\
&-&\frac{\Gamma_{\rm swap}(\Omega)-1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{X}}^{(0)}_3(\Omega)
-\frac{\Gamma_{\rm swap}(\Omega)+1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{X}}^{(0)}_4(\Omega),\nonumber\\
\hat{P}_4'(\Omega)&=&\frac{\Gamma_{\rm swap}(\Omega)}{\sqrt{2}}
[S_-(\Omega) \hat{\bar{P}}^{(0)}_1(\Omega)-S_+(\Omega)
\hat{\bar{P}}^{(0)}_2(\Omega)]\nonumber\\
&+&\frac{\Gamma_{\rm swap}(\Omega)+1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{P}}^{(0)}_3(\Omega)
+\frac{\Gamma_{\rm swap}(\Omega)-1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{P}}^{(0)}_4(\Omega).
\end{eqnarray}
Provided entanglement swapping is successful, Alice and Bob can
use their modes 1 and $4'$ for a further quantum teleportation.
Assuming unit gain in this ``second teleportation,'' where the unknown
input state $\hat{X}_{\rm in}(\Omega)$, $\hat{P}_{\rm in}(\Omega)$
is to be teleported, the teleported field becomes
\begin{eqnarray}\label{arbgain3}
\hat{X}_{\rm tel}(\Omega)&=&\hat{X}_{\rm in}(\Omega)
+\frac{\Gamma_{\rm swap}(\Omega)-1}{\sqrt{2}}
S_+(\Omega)\hat{\bar{X}}^{(0)}_1(\Omega)
-\frac{\Gamma_{\rm swap}(\Omega)+1}{\sqrt{2}}
S_-(\Omega)\hat{\bar{X}}^{(0)}_2(\Omega)\nonumber\\
&-&\frac{\Gamma_{\rm swap}(\Omega)-1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{X}}^{(0)}_3(\Omega)
-\frac{\Gamma_{\rm swap}(\Omega)+1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{X}}^{(0)}_4(\Omega),\nonumber\\
\hat{P}_{\rm tel}(\Omega)&=&\hat{P}_{\rm in}(\Omega)
+\frac{\Gamma_{\rm swap}(\Omega)+1}{\sqrt{2}}
S_-(\Omega)\hat{\bar{P}}^{(0)}_1(\Omega)
-\frac{\Gamma_{\rm swap}(\Omega)-1}{\sqrt{2}}
S_+(\Omega)\hat{\bar{P}}^{(0)}_2(\Omega)\nonumber\\
&+&\frac{\Gamma_{\rm swap}(\Omega)+1}{\sqrt{2}}S_-(\Omega)
\hat{\bar{P}}^{(0)}_3(\Omega)
+\frac{\Gamma_{\rm swap}(\Omega)-1}{\sqrt{2}}S_+(\Omega)
\hat{\bar{P}}^{(0)}_4(\Omega).
\end{eqnarray}
We calculate a fidelity spectrum for coherent-state inputs and obtain
\begin{eqnarray}\label{fidswap}
F(\Omega)=\{1&+&[\Gamma_{\rm swap}(\Omega)-1]^2|S_+(\Omega)|^2/2\nonumber\\
&+&[\Gamma_{\rm swap}(\Omega)+1]^2|S_-(\Omega)|^2/2\}^{-1}\,\,\,.
\end{eqnarray}
The optimum gain, depending on the amount of squeezing, that maximizes
this fidelity \cite{PvL} at different frequencies turns out to be
\begin{eqnarray}\label{gswap}
\Gamma_{\rm swap}(\Omega)=\frac{|S_+(\Omega)|^2-|S_-(\Omega)|^2}
{|S_+(\Omega)|^2+|S_-(\Omega)|^2}\,\,\,.
\end{eqnarray}
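That this gain indeed maximizes Eq.~(\ref{fidswap}) is easily confirmed by a
brute-force one-dimensional search; in the sketch below the values chosen
for $|S_\pm(\Omega)|^2$ are arbitrary illustrations:
\begin{verbatim}
import numpy as np

def F_swap(Gamma, Sp2, Sm2):
    """Fidelity of Eq. (fidswap) for given |S_+|^2 and |S_-|^2."""
    return 1.0/(1.0 + (Gamma - 1.0)**2*Sp2/2.0 + (Gamma + 1.0)**2*Sm2/2.0)

Sp2, Sm2 = 4.0, 0.25                     # illustrative squeezing spectra
Gammas = np.linspace(-1.0, 2.0, 300001)
G_num = Gammas[np.argmax(F_swap(Gammas, Sp2, Sm2))]   # brute-force optimum
G_ana = (Sp2 - Sm2)/(Sp2 + Sm2)                        # Eq. (gswap)
print(G_num, G_ana)                      # agree to the grid resolution
\end{verbatim}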
Let us now assume that the broadband entanglement comes from the NOPA
(two NOPA's with equal squeezing spectra),
$|S_-(\Omega)|^2\rightarrow |S_-(\epsilon,\omega)|^2=
1-4\epsilon/[(\epsilon+1)^2+\omega^2]$,
$|S_+(\Omega)|^2\rightarrow |S_+(\epsilon,\omega)|^2=
1+4\epsilon/[(\epsilon-1)^2+\omega^2]$.
The optimized fidelity then becomes
\begin{eqnarray}\label{optfidswap}
F_{\rm opt}(\epsilon,\omega)=
\left\{1+2\;\frac{[(\epsilon+1)^2+\omega^2][(\epsilon-1)^2+\omega^2]}
{[(\epsilon+1)^2+\omega^2]^2+[(\epsilon-1)^2+\omega^2]^2}\right\}^{-1}
\,\,\,.
\end{eqnarray}
The spectrum of these optimized fidelities is shown
in Fig.~6 for different $\epsilon$ values. Again, we know from
the single-mode protocol \cite{PvL} with ideal detectors that any nonzero
squeezing in both initial entanglement sources is sufficient for
entanglement swapping to occur. In this case, modes 1 and $4'$
enable quantum teleportation and coherent-state inputs
can be teleported with $F=F_{\rm av}>\case{1}{2}$.
The fidelity from Eq.~(\ref{optfidswap}) is $\case{1}{2}$ for
$\epsilon=0$ and becomes $F_{\rm opt}(\epsilon,\omega)>\case{1}{2}$
for any $\epsilon>0$, provided that $\omega$ does not become infinite
(however, we had assumed $\Omega\ll \omega_0$).
In this sense, the squeezing or entanglement bandwidth is preserved
through entanglement swapping. At each frequency where the initial states
were squeezed and entangled, also the output state of modes 1 and $4'$
is entangled, but with less squeezing and worse quality of entanglement
(unless we had infinite squeezing in the initial states so that the
entanglement is perfectly teleported) \cite{PvL2}.
Correspondingly, at frequencies with initially very small entanglement,
the entanglement becomes even smaller after entanglement swapping
(but never vanishes completely). Thus, the effective bandwidth of
squeezing or entanglement decreases through entanglement swapping.
Then, compared to the teleportation bandwidth using broadband two-mode
squeezed states without entanglement swapping,
the bandwidth of teleportation using the output of entanglement
swapping is effectively smaller.
The spectrum of the fidelities from Eq.~(\ref{optfidswap}) is narrower
and the ``effective teleportation bandwidth''
is now about $\Delta\omega\approx 1.2$ ($\epsilon=0.1$),
$\Delta\omega\approx 2.6$ ($\epsilon=0.2$),
$\Delta\omega\approx 4.2$ ($\epsilon=0.4$),
$\Delta\omega\approx 5.2$ ($\epsilon=0.6$),
and $\Delta\omega\approx 6.8$ ($\epsilon=1$). The maximum fidelities at
frequency $\omega=0$ are $F_{\rm max}\approx 0.52$ ($\epsilon=0.1$),
$F_{\rm max}\approx 0.57$ ($\epsilon=0.2$),
$F_{\rm max}\approx 0.74$ ($\epsilon=0.4$),
$F_{\rm max}\approx 0.89$ ($\epsilon=0.6$), and, still,
$F_{\rm max}=1$ ($\epsilon=1$).
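As before, these maximum fidelities and effective bandwidths follow from
Eq.~(\ref{optfidswap}); the following sketch reads the bandwidth off at
$F_{\rm opt}=0.51$, so small deviations from the quoted numbers are to be
expected:
\begin{verbatim}
import numpy as np

def F_opt(eps, omega):
    """Optimized entanglement-swapping fidelity, Eq. (optfidswap)."""
    A = (eps + 1.0)**2 + omega**2
    B = (eps - 1.0)**2 + omega**2
    return 1.0/(1.0 + 2.0*A*B/(A**2 + B**2))

omega = np.linspace(0.0, 15.0, 150001)
for eps in (0.1, 0.2, 0.4, 0.6, 1.0):
    Fw = F_opt(eps, omega)
    omega_max = omega[Fw > 0.51].max()
    print(eps, F_opt(eps, 0.0), 2*omega_max)   # maximum fidelity and bandwidth
\end{verbatim}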
\section{Cavity losses and Bell detector inefficiencies}
We extend the previous calculations and include losses for the particular
case of the NOPA cavity and inefficiencies in Alice's Bell detection.
For this purpose, we use Eq.~(\ref{1.11}) for the outgoing NOPA modes.
We consider losses and inefficiencies for unit-gain teleportation
(teleportation of Gaussian states with a coherent amplitude).
For the case of entanglement swapping (nonunit-gain teleportation),
detector inefficiencies have been included in the single-mode
treatment of Ref.~\onlinecite{PvL}.
By superimposing the unknown input mode with the NOPA
mode 1, the relevant quadratures from Eqs.~(\ref{1.28}) now become
\begin{eqnarray}\label{1.40}
\hat{X}_{\rm u}(\Omega)&=&\frac{\eta}{\sqrt{2}}\hat{X}_{\rm in}(\Omega)
-\frac{\eta}{\sqrt{2}}\hat{X}_1(\Omega)+\sqrt{\frac{1-\eta^2}{2}}
\hat{X}^{(0)}_{D}(\Omega)+\sqrt{\frac{1-\eta^2}{2}}\hat{X}^{(0)}_{E}
(\Omega),\nonumber\\
\hat{P}_{\rm v}(\Omega)&=&\frac{\eta}{\sqrt{2}}\hat{P}_{\rm in}(\Omega)
+\frac{\eta}{\sqrt{2}}\hat{P}_1(\Omega)+\sqrt{\frac{1-\eta^2}{2}}
\hat{P}^{(0)}_{F}(\Omega)+\sqrt{\frac{1-\eta^2}{2}}\hat{P}^{(0)}_{G}
(\Omega).
\end{eqnarray}
The last two terms in each quadrature in Eqs.~(\ref{1.40}) represent
additional vacua due to homodyne detection inefficiencies (the detector
amplitude efficiency $\eta$ is assumed to be constant over the bandwidth
of interest).
Using Eqs.~(\ref{1.40}) it is useful to write the quadratures of NOPA
mode 2 corresponding to Eq.~(\ref{1.11}) as
\begin{eqnarray}\label{mod2ineff}
\hat{X}_2(\Omega)&=&\hat{X}_{\rm in}(\Omega)-[G(\Omega)-
g(\Omega)] [\hat{X}^{(0)}_1(\Omega)-\hat{X}^{(0)}_2(\Omega)]\nonumber\\
&-&[\bar{G}(\Omega)-\bar{g}(\Omega)] [\hat{X}^{(0)}_{C,1}(\Omega)-
\hat{X}^{(0)}_{C,2}(\Omega)]+\sqrt{\frac{1-\eta^2}{\eta^2}}\hat{X}^{(0)}_{D}
(\Omega)+\sqrt{\frac{1-\eta^2}{\eta^2}}\hat{X}^{(0)}_{E}(\Omega)\nonumber\\
&-&\frac{\sqrt{2}}{\eta}\hat{X}_{\rm u}(\Omega),\nonumber\\
\hat{P}_2(\Omega)&=&\hat{P}_{\rm in}(\Omega)+[G(\Omega)-g(\Omega)]
[\hat{P}^{(0)}_1(\Omega)+\hat{P}^{(0)}_2(\Omega)]\nonumber\\
&+&[\bar{G}(\Omega)-\bar{g}(\Omega)] [\hat{P}^{(0)}_{C,1}(\Omega)+
\hat{P}^{(0)}_{C,2}(\Omega)]+\sqrt{
\frac{1-\eta^2}{\eta^2}}\hat{P}^{(0)}_{F}(\Omega)+\sqrt{\frac{1-
\eta^2}{\eta^2}}\hat{P}^{(0)}_{G}(\Omega)\nonumber\\
&-&\frac{\sqrt{2}}{\eta}\hat{P}_{\rm v}(\Omega),
\end{eqnarray}
where now \cite{Ou}
\begin{eqnarray}\label{1.41}
G(\Omega)&=&\frac{\displaystyle\kappa^2+\left({\gamma-\rho\over2}+i\Omega
\right)\left(\frac{\gamma+\rho}{2}-i\Omega\right)}{\displaystyle\left
(\frac{\gamma+\rho}{2}-i\Omega\right)^2-\kappa^2}\,\,\,,\nonumber\\
g(\Omega)&=&\frac{\kappa\gamma}{\displaystyle\left(\frac{\gamma+\rho}{2}-
i\Omega\right)^2-\kappa^2}\,\,\,,\nonumber\\
\bar{G}(\Omega)&=&\frac{\displaystyle\sqrt{\gamma\rho}\left(\frac
{\gamma+\rho}{2}
-i\Omega\right)}{\displaystyle\left(\frac{\gamma+\rho}{2}-i\Omega\right)^2
-\kappa^2}\,\,\,,\nonumber\\
\bar{g}(\Omega)&=&\frac{\kappa\sqrt{\gamma\rho}}{\displaystyle\left
(\frac{\gamma+\rho}{2}-i\Omega\right)^2-\kappa^2}\,\,\,,
\end{eqnarray}
still with $G(\Omega)=G^*(-\Omega)$, $g(\Omega)=g^*(-\Omega)$, and also
$\bar{G}(\Omega)=\bar{G}^*(-\Omega)$, $\bar{g}(\Omega)=\bar{g}^*(-\Omega)$.
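As a consistency check (not carried out in the text), one may verify
numerically that for $\rho\to 0$ the loss terms $\bar{G}(\Omega)$ and
$\bar{g}(\Omega)$ vanish and $G(\Omega)\pm g(\Omega)$ reduce to the lossless
combinations of Eqs.~(\ref{1.25}); the parameter values below are arbitrary:
\begin{verbatim}
import numpy as np

def nopa_with_loss(Omega, kappa, gamma, rho):
    """G, g, Gbar, gbar of Eqs. (1.41)."""
    d = (0.5*(gamma + rho) - 1j*Omega)**2 - kappa**2
    G = (kappa**2 + (0.5*(gamma - rho) + 1j*Omega)*(0.5*(gamma + rho) - 1j*Omega))/d
    g = kappa*gamma/d
    Gbar = np.sqrt(gamma*rho)*(0.5*(gamma + rho) - 1j*Omega)/d
    gbar = kappa*np.sqrt(gamma*rho)/d
    return G, g, Gbar, gbar

gamma, kappa, Omega = 1.0, 0.4, 0.3          # arbitrary illustrative values
G, g, Gbar, gbar = nopa_with_loss(Omega, kappa, gamma, rho=0.0)
print(abs(Gbar), abs(gbar))                  # -> 0, 0
S_m = (gamma/2 - kappa + 1j*Omega)/(gamma/2 + kappa - 1j*Omega)      # Eqs. (1.25)
S_p = ((gamma/2 + kappa)**2 + Omega**2)/((gamma/2 - 1j*Omega)**2 - kappa**2)
print(abs(G - g - S_m), abs(G + g - S_p))    # -> 0, 0 (up to rounding)
\end{verbatim}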
The quadratures $\hat{X}^{(0)}_{C,j}(\Omega)$ and
$\hat{P}^{(0)}_{C,j}(\Omega)$ are those of the vacuum modes
$\hat{C}^{(0)}_j(\Omega)$ in Eq.~(\ref{1.11}) according to
Eqs.~(\ref{1.14}).
Again, $\hat{X}_{\rm u}(\Omega)$ and $\hat{P}_{\rm v}(\Omega)$ in
Eqs.~(\ref{mod2ineff}) can be considered as classically determined
quantities $X_{\rm u}(\Omega)$ and $P_{\rm v}(\Omega)$ due to Alice's
measurements. The appropriate amplitude and
phase modulations of mode 2 by Bob depending on the classical results of
Alice's detections are described by
\begin{eqnarray}\label{1.42}
\hat{X}_2(\Omega)\longrightarrow\hat{X}_{\rm tel}(\Omega)&=&\hat{X}_2
(\Omega)+\Gamma(\Omega)\frac{\sqrt{2}}{\eta}X_{\rm u}(\Omega),\nonumber\\
\hat{P}_2(\Omega)\longrightarrow\hat{P}_{\rm tel}(\Omega)&=&\hat{P}_2
(\Omega)+\Gamma(\Omega)\frac{\sqrt{2}}{\eta}P_{\rm v}(\Omega).
\end{eqnarray}
For $\Gamma(\Omega)=1$, the teleported quadratures become
\begin{eqnarray}\label{1.43}
\hat{X}_{\rm tel}(\Omega)&=&\hat{X}_{\rm in}(\Omega)-[G(\Omega)-
g(\Omega)] [\hat{X}^{(0)}_1(\Omega)-\hat{X}^{(0)}_2(\Omega)]\nonumber\\
&-&[\bar{G}(\Omega)-\bar{g}(\Omega)] [\hat{X}^{(0)}_{C,1}(\Omega)-
\hat{X}^{(0)}_{C,2}(\Omega)]+\sqrt{\frac{1-\eta^2}{\eta^2}}\hat{X}^{(0)}_{D}
(\Omega)+\sqrt{\frac{1-\eta^2}{\eta^2}}\hat{X}^{(0)}_{E}(\Omega),\nonumber\\
\hat{P}_{\rm tel}(\Omega)&=&\hat{P}_{\rm in}(\Omega)+[G(\Omega)-g(\Omega)]
[\hat{P}^{(0)}_1(\Omega)+\hat{P}^{(0)}_2(\Omega)]\nonumber\\
&+&[\bar{G}(\Omega)-\bar{g}(\Omega)]
[\hat{P}^{(0)}_{C,1}(\Omega)+\hat{P}^{(0)}_{C,2}(\Omega)]+\sqrt{
\frac{1-\eta^2}{\eta^2}}\hat{P}^{(0)}_{F}(\Omega)+\sqrt{\frac{1-
\eta^2}{\eta^2}}\hat{P}^{(0)}_{G}(\Omega).
\end{eqnarray}
We calculate again spectral variances and obtain with the
dimensionless variables of Eqs.~(\ref{1.35})
\begin{eqnarray}\label{1.44}
V_{\rm tel,in}^{\hat{X}}(\epsilon,\omega)=
V_{\rm tel,in}^{\hat{P}}(\epsilon,\omega)=
2\;\left[1-\frac{4\epsilon\beta}
{(\epsilon+1)^2 + \omega^2}\right]+2\,\frac{1-\eta^2}{\eta^2}\,\,\,,
\end{eqnarray}
where $\beta=\gamma/(\gamma+\rho)$ is a ``cavity escape efficiency''
which contains losses \cite{Ou}. With the spectral Q-function variances
of the teleported field $\sigma_x(\Omega)=\sigma_p(\Omega)=\case{1}{2}
+\case{1}{4}V_{\rm tel,in}^{\hat{X}}(\Omega)$, now for coherent-state
inputs, we find the fidelity spectrum (unit gain)
\begin{eqnarray}\label{fidspec2}
F(\epsilon,\omega)=\left[2-\frac{4\epsilon\beta}{(\epsilon+1)^2
+\omega^2}+\frac{1-\eta^2}{\eta^2}\right]^{-1}\,\,\,.
\end{eqnarray}
Using the values $\epsilon=0.77$, $\omega=0.56$, and $\beta=0.9$, the
measured values in the EPR experiment of Ref.~\onlinecite{Ou}
for maximum pump power (but still below threshold),
and a Bell detector efficiency $\eta^2=0.97$
(as in the teleportation experiment of Ref.~\onlinecite{Furu}), we obtain
$V_{\rm tel,in}^{\hat{X}}=V_{\rm tel,in}^{\hat{P}}=0.453$
and a fidelity $F=0.815$.
The measured value for the ``normalized analysis frequency'' $\omega=0.56$
corresponds to the measured finesse $F_{\rm cav}=180$, the free spectral
range $\nu_{\rm FSR}=790$ MHz and the spectrum analyzer frequency
$\Omega/2\pi=1.1$ MHz \cite{Ou}.
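These numbers follow directly from Eqs.~(\ref{1.44}) and (\ref{fidspec2});
the short check below uses only the parameter values quoted above:
\begin{verbatim}
eps, omega, beta, eta2 = 0.77, 0.56, 0.9, 0.97   # values quoted in the text

V = 2.0*(1.0 - 4.0*eps*beta/((eps + 1.0)**2 + omega**2)) + 2.0*(1.0 - eta2)/eta2
F = 1.0/(2.0 - 4.0*eps*beta/((eps + 1.0)**2 + omega**2) + (1.0 - eta2)/eta2)

print(round(V, 3), round(F, 3))   # -> 0.453 0.815
\end{verbatim}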
In the teleportation experiment of Ref.~\onlinecite{Furu}, the teleported
states described fields at modulation frequency $\Omega/2\pi=2.9$ MHz
within a bandwidth $\pm\Delta\Omega/2\pi=30$ kHz.
Due to technical noise at low modulation frequencies, the nonclassical
fidelity was achieved at these higher frequencies $\Omega$.
The amount of squeezing at these frequencies was about 3 dB.
The spectrum of the fidelities from Eq.~(\ref{fidspec2}) is shown
in Fig.~7 for different $\epsilon$ values.
\section{Summary and conclusions}
We have presented the broadband theory for quantum teleportation
using squeezed-state entanglement. Our scheme allows the broadband
transmission of nonorthogonal quantum states. We have discussed
various criteria determining the boundary between classical
teleportation (i.e., measuring the state to be transmitted as
well as quantum theory permits and classically conveying the
results) and quantum teleportation (i.e., using entanglement for the
state transfer). Depending on the set of input states, different
criteria can be applied that are best met with the optimum gain used
by Bob for the phase-space displacements of his EPR beam.
Given an alphabet of arbitrary Gaussian states with unknown coherent
amplitudes, on average, the optimum teleportation fidelity is attained
with unit gain at all relevant frequencies. Optimal teleportation
of an entangled state (entanglement swapping) requires a
squeezing-dependent, and hence frequency-dependent, nonunit gain.
Effectively, also with optimum gain, the bandwidth of entanglement
becomes smaller after entanglement swapping compared to the bandwidth
of entanglement of the initial states, as the quality of the entanglement
deteriorates at each frequency for finite squeezing.
In the particular case of the NOPA as the entanglement source,
the best quantum teleportation occurs in the frequency regime close to the
center frequency (half the NOPA's pump frequency).
In general, a suitable EPR source for broadband teleportation can be
obtained by combining two independent broadband squeezed states
at a beamsplitter (actually, even one squeezed state split at a
beamsplitter is sufficient to create entanglement for quantum
teleportation \cite{PvL4,PvL}). Provided ideal Bell detection, unit-gain
teleportation will then in general produce an excess noise in each
teleported quadrature of twice the squeezing spectrum of the quiet
quadrature in the corresponding broadband squeezed state (for the NOPA,
cavity loss appears in the squeezing spectrum).
Thus, good broadband teleportation requires good broadband squeezing.
However, the entanglement source's squeezing
spectrum for its quiet quadrature need not be a minimum near the center
frequency ($\Omega=0$) as for the optical parametric oscillator. In general,
it might have large excess noise there and be quiet at $\Omega\neq 0$ as
for four-wave mixing in a cavity \cite{Slush}.
The spectral range to be teleported $\Delta\Omega$ always should be in
the ``quiet region'' of the squeezing spectrum.
The scheme presented here allows very efficient teleportation of
broadband quantum states: the quantum state at the input
(a coherent, a squeezed, an entangled or any other state), describing
the input field at modulation frequency $\Omega$ within a bandwidth
$\Delta\Omega$, is teleported on each and every trial (where the
duration of a single trial is given by the inverse-bandwidth time
$1/\Delta\Omega$). Every inverse-bandwidth time, a quantum state is
teleported with nonclassical fidelity or previously unentangled fields
become entangled. Also the output of entanglement swapping can therefore
be used for efficient quantum teleportation, succeeding every
inverse-bandwidth time.
In contrast, the discrete-variable schemes involving weak down conversion
enable only relatively rare transfers of quantum states.
For the experiment of
Ref.~\onlinecite{Bou}, a fourfold coincidence (i.e., ``successful''
teleportation \cite{Sam3}) at a rate of 1/40 Hz and a UV pulse rate of 80
MHz \cite{Weinf} yield an overall efficiency of $3\times 10^{-10}$ (events
per pulse). Note that due to filtering and collection difficulties the
photodetectors in this experiment operated with an effective efficiency of
10\% \cite{Weinf}.
The theory presented in this paper applies to the experiment of
Ref.~\onlinecite{Furu} where coherent states were teleported
using the entanglement built from two squeezed fields generated
via degenerate down conversion.
The experimentally determined fidelity in this
experiment was $F=0.58\pm 0.02$ (this fidelity was
achieved at higher frequencies $\Omega\neq 0$ due to technical noise
at low modulation frequencies) which proved the quantum nature of
the teleportation process by exceeding the classical limit
$F\leq\case{1}{2}$.
Our analysis was also intended to provide the theoretical foundation
for the teleportation of quantum states that are more nonclassical
than coherent states, e.g., squeezed states or, in particular,
entangled states (two-mode squeezed states).
This is yet to be realized in the laboratory.
\acknowledgments
The authors would like to thank C.\ M.\ Caves for helpful suggestions.
P.v.L. thanks T.\ C.\ Ralph, H.\ Weinfurter, and A.\ Sizmann for their help.
This work was supported by EPSRC Grant No.~GR/L91344.
P.v.L. was funded in part by a ``DAAD Doktorandenstipendium im
Rahmen des gemeinsamen Hochschulsonderprogramms III von Bund und
L\"{a}ndern." H.J.K. is supported by DARPA via the QUIC Institute which
is administered by ARO, by the National Science Foundation, and by
the Office of Naval Research.
\begin{references}
\bibitem{Wootters} W.\ K.\ Wootters and W.\ H.\ Zurek, Nature {\bf 299},
802 (1982).
\bibitem{Kraus} K.\ Kraus, {\it States, Effects, and Operations},
Springer-Verlag Berlin (1983).
\bibitem{Benn} C.\ H.\ Bennett et al., Phys.\ Rev.\ Lett.\ {\bf 70}, 1895
(1993).
\bibitem{Sam2} S.\ L.\ Braunstein, A.\ Mann, and M.\ Revzen, Phys.\ Rev.\
Lett.\ {\bf 68}, 3259 (1992).
\bibitem{Bou} D.\ Bouwmeester et al., Nature {\bf 390}, 575 (1997).
\bibitem{Mart} D.\ Boschi et al., Phys.\ Rev.\ Lett.\ {\bf 80}, 1121 (1998).
\bibitem{Sam3} S.\ L.\ Braunstein and H.\ J.\ Kimble, Nature {\bf 394},
840 (1998); D.\ Bouwmeester et al., Nature {\bf 394}, 841 (1998);
P.\ Kok and S.\ L.\ Braunstein, Phys.\ Rev.\ A {\bf 61}, 042304 (2000);
D.\ Bouwmeester et al., quant-ph/9910043.
\bibitem{Vaid} L.\ Vaidman, Phys.\ Rev.\ A {\bf 49}, 1473 (1994).
\bibitem{Einst} A.\ Einstein, B.\ Podolsky, and N.\ Rosen, Phys.\ Rev.\
{\bf 47}, 777 (1935).
\bibitem{Walls} D.\ F.\ Walls and G.\ J.\ Milburn, {\it Quantum Optics},
Springer-Verlag Berlin Heidelberg New York (1994).
\bibitem{Sam} S.\ L.\ Braunstein and H.\ J.\ Kimble, Phys.\ Rev.\ Lett.\
{\bf 80}, 869 (1998).
\bibitem{Furu} A.\ Furusawa et al., Science {\bf 282}, 706 (1998).
\bibitem{Fuchs} S.\ L.\ Braunstein, C.\ A.\ Fuchs, and H.\ J.\ Kimble,
J.\ Mod.\ Opt. {\bf 47}, 267 (2000); quant-ph/9910030.
\bibitem{Sam4} S.\ L.\ Braunstein et al.,
{\it International Quantum Electronics Conference}, Vol.~7 of the 1998
OSA Technical Digest Series (Optical Society of America, Washington DC,
1998), p.~133.
\bibitem{Ou} Z.\ Y.\ Ou, S.\ F.\ Pereira, and H.\ J.\ Kimble, Appl.\
Phys.\ B {\bf 55}, 265 (1992).
\bibitem{Zuk} M.\ Zukowski {\it et al.}, Phys.\ Rev.\ Lett.\ {\bf 71},
4287 (1993).
\bibitem{Pan} J.-W.\ Pan {\it et al.}, Phys.\ Rev.\ Lett.\ {\bf 80},
3891 (1998).
\bibitem{Polk} R.\ E.\ S.\ Polkinghorne and T.\ C.\ Ralph,
Phys.\ Rev.\ Lett.\ {\bf 83}, 2095 (1999); quant-ph/9906066.
\bibitem{Tan} S.\ M.\ Tan, Phys.\ Rev.\ A {\bf 60}, 2752 (1999).
\bibitem{PvL} P.\ van Loock and S.\ L.\ Braunstein, Phys.\ Rev.\ A
{\bf 61}, 10302 (2000).
\bibitem{Arth} E.\ Arthurs and J.\ L.\ Kelly, Jr., Bell Syst.\ Tech.\ J.
{\bf 44}, 725 (1965).
\bibitem{Yama} Y.\ Yamamoto et al., {\it Quantum Mechanical Limit in
Optical Precision Measurement and Communication}, Progress in Optics
XXVIII (1990), ed. E.\ Wolf, pp. 99-101.
\bibitem{Leon} U.\ Leonhardt, {\it Measuring the Quantum State of Light},
Cambridge University Press, Cambridge (1997).
\bibitem{Ralph} T.\ C.\ Ralph and P.\ K.\ Lam, Phys.\ Rev.\ Lett.\
{\bf 81}, 5668 (1998).
\bibitem{Ralph2} T.\ C.\ Ralph, R.\ E.\ S.\ Polkinghorne, and P.\ K.\ Lam,
quant-ph/9903003.
\bibitem{Brag} C.\ M.\ Caves et al., Rev.\ Mod.\ Phys.\ {\bf 52}, 341 (1980);
M.\ J.\ Holland et al., Phys.\ Rev.\ A {\bf 42}, 2995 (1990).
\bibitem{PvL3} P.\ van Loock and S.\ L.\ Braunstein, in preparation.
\bibitem{Coll} M.\ J.\ Collett and C.\ W.\ Gardiner, Phys.\ Rev.\ A
{\bf 30}, 1386 (1984); {\bf 31}, 3761 (1985).
\bibitem{Schum} C.\ M.\ Caves and B.\ L.\ Schumaker, Phys.\ Rev.\ A
{\bf 31}, 3068 (1985).
\bibitem{Kimb} H.\ J.\ Kimble, in {\it Fundamental Systems in Quantum Optics,
Les Houches, Session LIII, 1990}, eds. J.\ Dalibard, J.\ M.\ Raimond, and
J.\ Zinn-Justin (Elsevier Science Publishers, Amsterdam, 1992), pp. 549-674.
\bibitem{Slush} R.\ E.\ Slusher et al., Phys.\ Rev.\ Lett.\ {\bf 55}, 2409
(1985).
\bibitem{Reid} M.\ D.\ Reid, Phys.\ Rev.\ A {\bf 40}, 913 (1989).
\bibitem{PvL2} That indeed after entanglement swapping, accomplished by
appropriate final displacements, the outgoing (average or ensemble)
state of modes 1 and $4'$ is again a {\it pure}
two-mode squeezed state with less squeezing than in the initial
states is explained in more detail for single modes in a future publication
\cite{PvL3}.
\bibitem{PvL4} P.\ van Loock and S.\ L.\ Braunstein,
Phys.\ Rev.\ Lett. {\bf 84}, 3482 (2000); quant-ph/9906021.
\bibitem{Weinf} H.\ Weinfurter, private communication.
\end{references}
${~}$
\vskip 3truein
\begin{figure}
\caption{Teleportation of a single mode of the electromagnetic field as
in Ref.~\protect\cite{Sam}.}
\label{fig1}
\end{figure}
${~}$
\vskip 3truein
\begin{figure}
\caption{Verification of quantum teleportation. The verifier ``Victor''
is independent of Alice and Bob. Victor prepares the input states
which are known to him, but unknown to Alice and Bob. After a supposed
quantum teleportation from Alice to Bob, the teleported states are given
back to Victor. Due to his knowledge of the input states,
Victor can compare the teleported states with the input states.}
\label{fig2}
\end{figure}
${~}$
\vskip 3truein
\begin{figure}
\caption{Entanglement swapping using the two entangled two-mode squeezed
vacuum states of modes 1 and 2 (shared by Alice and Claire) and of
modes 3 and 4 (shared by Claire and Bob) as in Ref.~\protect\cite{PvL}.}
\label{fig3}
\end{figure}
${~}$
\vskip 3truein
\begin{figure}
\caption{The NOPA as in Ref.~\protect\cite{Ou}.}
\label{fig4}
\end{figure}
${~}$
\vskip 3truein
\begin{figure}
\caption{Fidelity spectrum of coherent-state teleportation using
entanglement from the NOPA. The fidelities here are
functions of the normalized modulation frequency $\pm\omega$ for
different parameter $\epsilon$ ($=0.1$, $0.2$, $0.4$, $0.6$, and $1$).}
\label{fig5}
\end{figure}
${~}$
\vskip 3truein
\begin{figure}
\caption{Fidelity spectrum of coherent-state teleportation using the output
of entanglement swapping with two equally squeezed (entangled) NOPA's.
The fidelities here are functions of the normalized modulation
frequency $\pm\omega$ for different parameter $\epsilon$
($=0.1$, $0.2$, $0.4$, $0.6$, and $1$).}
\label{fig6}
\end{figure}
${~}$
\vskip 3truein
\begin{figure}
\caption{Fidelity spectrum of coherent-state teleportation using
entanglement from the NOPA. The fidelities here are
functions of the normalized modulation frequency $\pm\omega$ for
different parameter $\epsilon$ ($=0.1$, $0.2$, $0.4$, $0.6$, and $1$).
Bell detector efficiencies $\eta^2=0.97$ and cavity losses with
$\beta=0.9$ have been included here.}
\label{fig7}
\end{figure}
\end{document} |
\begin{document}
\centerline {\Large \bf Quantum Computing in Arrays}
\centerline {\Large \bf Coupled by `Always On' Interactions}
\centerline {{\bf S. C. Benjamin$^{1,2}$ and S. Bose$^{3}$}}
{\footnotesize
\centerline {$^1$Ctr. for Quantum Computation, Clarendon Lab., Univ. of Oxford, {\scriptsize OX1 3PU}, UK.}
\centerline {$^2$Dept. of Materials, Parks Road, Univ. of Oxford, {\scriptsize OX1 3PH}, UK.}
\centerline {$^3$Dept. of Physics and Astronomy, University College London,}
\centerline{Gower St., London {\scriptsize WC1E 6BT}, UK.}
}
{\bf It has recently been shown that one can perform quantum computation in a Heisenberg chain in which the interactions are `always on', provided that one can abruptly tune the Zeeman energies of the individual (pseudo-)spins. Here we provide a more complete analysis of this scheme, including several generalizations. We generalize the interaction to an anisotropic form (incorporating the XY, or Forster, interaction as a limit), providing a proof that a chain coupled in this fashion tends to an effective Ising chain in the limit of far off-resonant spins. We derive the primitive two-qubit gate that results from exploiting abrupt Zeeman tuning with such an interaction. We also demonstrate, via numerical simulation, that the same basic scheme functions in the case of smoothly shifted Zeeman energies. We conclude with some remarks regarding generalisations to two- and three-dimensional arrays.}
There has recently been considerable interest in the question of whether one can perform quantum computation (QC) in Heisenberg-type systems (e.g. interacting electron spins) when the interaction is `always-on' \cite{ourPRL, zhou, newLANL}. This question follows on from earlier work concerning Heisenberg systems in which the interactions are presumed to be switchable, either individually\cite{3qubitExchangeOnly, Levy, myABqubitPaper} or collectively\cite{ababPRL}. Numerous proposals exist\cite{DiVincenzo1,kane,spinResTrans} for experimental realization of such a model; however, interaction switching is liable to prove very challenging to realize, and this motivates the interest in `always-on' interactions. In Ref. [\onlinecite{ourPRL}] we proposed a scheme for exploiting a simple one-dimensional Heisenberg chain with constant, isotropic nearest neighbor interactions. The scheme involved adjusting the single-spin level splittings (the Zeeman energies) to bring neighbors in and out of resonance with one another. We exploited the fact that far off-resonance spins do not exchange energy, but rather interact in an Ising `ZZ' form. We argued that by separating the qubit-bearing spins by passive `barrier' spins, one can negate this residual interaction (thus achieving a passive state for the array) - yet one can invoke an interaction on demand simply by bringing a barrier into resonance with its neighbors.
In the present paper we elaborate on several aspects of that earlier Letter, and we provide certain extensions. Whereas previously we considered only one specific form for the interaction, i.e. the isotropic Heisenberg form ($\sigma^X\sigma^X+\sigma^Y\sigma^Y+\sigma^Z\sigma^Z$), we now generalize our arguments to accommodate different magnitudes for the in-plane and perpendicular components. Thus we subsume the prior isotropic form, and the purely planar ``XY'' interaction, as special cases. There is a wide variety of promising physical systems associated with this family of interactions (for the isotropic limit, see e.g. Refs. [\onlinecite{DiVincenzo1,kane,spinResTrans}], and for the anisotropic case, Refs. [\onlinecite{Imamoglu,Mozyrsky,Seiwert}]). The XY limit is also referred to as the Forster interaction, especially when studied in the context of excitonic exchange in biological molecules. With this generalized form of interaction, we first present an analysis of the effect of far off-resonant neighbors in a long chain, obtaining the anticipated Ising-like form as the lowest order term. We then explain in detail how Zeeman tuning can be exploited to perform an elementary two-qubit gate, and we show how the resulting unitary operation depends on the Z versus XY asymmetry in the interaction.
Whereas the original paper assumed a perfectly abrupt transition between on-resonant and far off-resonant Zeeman energies, here we follow our analysis with a numerical simulation demonstrating that smoothly changing Zeeman energies can implement the gate process equally well. This observation considerably increases the practicality of the scheme. Finally, we discuss the generalization to two- and three-dimensional arrays.
\noindent{\bf Analysis of Heisenberg Chain with Large Zeeman Discrepancies}
The analysis is presented in full in Appendix I. Here we summarize it. We start from a total Hamiltonian $H$ given by
$$
H=H_{single}+H_{int},
$$
where
$$
H_{single} = \sum_j B_j \sigma_j^Z. \nonumber
$$
and the exchange interaction is as follows, where the factor $\alpha$ allows for a possible anisotropy between the in-plane and z-direction components.
\begin{eqnarray}
H_{int}&=&J\sum_j (\sigma^X_j\sigma^X_{j+1}+\sigma^Y_j\sigma^Y_{j+1}+\alpha\sigma^Z_j\sigma^Z_{j+1})\nonumber \\
&=&\frac{J}{2}(\sum_j \sigma^{+}_j
\sigma^{-}_{j+1}+\sigma^{-}_j\sigma^{+}_{j+1})+J\alpha\sum_j\sigma_j^Z\sigma_{j+1}^Z, \nonumber
\end{eqnarray}
where $\sigma^\pm\equiv\sigma^X\pm i\sigma^Y$. Here and below, the sum ranges over all $N$ qubits, but subscripts such as $j+1$ are understood to be modulo $N$, i.e. we {\bf assume a closed circular topology}. This considerably simplifies the analysis, but it is not a real constraint - in the limit of large chains the open and closed topologies will be equivalent.
We rewrite $H=H_{1}+H_{2}$ where
$$
H_{1} = \sum_j B_j \sigma_j^Z+J\alpha\sum_j\sigma_j^Z\sigma_{j+1}^Z. \nonumber
$$
and
$$
H_{2}=\frac{J}{2}(\sum_j \sigma^{+}_j
\sigma^{-}_{j+1}+\sigma^{-}_j\sigma^{+}_{j+1}). \nonumber
$$
Notice that $H_1$ is simply the Hamiltonian for an Ising spin chain with varying Zeeman energies. We will find that this term dominates the time evolution when the spins are far off-resonance with their neighbors; the contribution of $H_2$ then vanishes.
Our approach is to exploit the Trotter formula to manipulate the time evolution operator into a form that can be recognized as Ising and non-Ising parts. This is detailed in Appendix I. The exact expression for the time evolution is found to be:
\begin{eqnarray}
U(t)=R(t)\exp{(-iH_1 t)}\ \ \ \ {\rm where}& R=\{\prod_{m=1}^{m=n}\exp\lgroup\frac{-it}{n}H_R(\frac{mt}{n})\rgroup\}_{n\rightarrow\infty}\nonumber\\
H_R(\eta)=\sum_j X_j(\eta)\sigma^+_j\sigma^-_{j+1}+X^\dagger_j(\eta)\sigma^-_j\sigma^+_{j+1}\ \ &{\rm with}\ \ X_j(\eta)=\exp(i\eta(\Delta_j+2J\alpha(\sigma^Z_{j+2}-\sigma^Z_{j-1})))\nonumber
\end{eqnarray}
\noindent where $\Delta_j\equiv 2(B_{j+1}-B_j)$. The right hand term in $U(t)$ is the pure Ising chain evolution we seek, but the subsequent `residual' operator $R$ is more complex. In the second part of Appendix II we expand $R$ as a power series and inspect the terms. We conclude that, for a regular chain with a characteristic $\Delta$ (such as an $ABABAB..$ chain where $\Delta_j=(-1)^j\Delta$), the time evolution can be written as
$$
U(t)=(1-\delta P(t))\exp{(-iH_1 t)}
$$
where $\delta\equiv J/\Delta$, for some finite operator $P(t)$ whose magnitude does not increase with $\Delta$. {\bf Thus for any given time period $t$ the non-Ising evolution will be negligible if $\Delta$ is {\em sufficiently} large compared to $J$.} Assuming that we can dynamically change a $\Delta_j$, switching it between zero and a large value, we can then exploit this result to produce a form of `gate' for quantum computation.
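This convergence is readily illustrated numerically for a small system. The
following Python sketch (the ring size, couplings and Zeeman offsets are
arbitrary illustrative choices, with $\hbar=1$) builds $H_1$ and $H_2$ for an
$ABAB$ ring of four spins and shows that the distance between the exact
propagator and the pure Ising propagator shrinks as the Zeeman discrepancy
grows:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2, dtype=complex)

def op(single, site, n):
    """Embed a single-spin operator at `site` of an n-spin register."""
    mats = [I2]*n
    mats[site] = single
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

n, J, alpha, t = 4, 1.0, 1.0, 1.0        # four-spin ring, isotropic case
for D in (5.0, 50.0, 500.0):             # Zeeman offset of the B sublattice
    B = [0.0 if j % 2 == 0 else D for j in range(n)]   # ABAB pattern
    H1 = sum(B[j]*op(sz, j, n) for j in range(n))
    H2 = np.zeros((2**n, 2**n), dtype=complex)
    for j in range(n):
        k = (j + 1) % n                  # closed ring
        H1 = H1 + J*alpha*op(sz, j, n) @ op(sz, k, n)
        H2 = H2 + J*(op(sx, j, n) @ op(sx, k, n) + op(sy, j, n) @ op(sy, k, n))
    U = expm(-1j*(H1 + H2)*t)            # exact propagator
    U1 = expm(-1j*H1*t)                  # pure Ising propagator
    print(D, np.linalg.norm(U - U1, 2))  # decreases as the discrepancy grows
\end{verbatim}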
\noindent {\bf Exploitation of the Heisenberg-to-Ising Transition to Perform QC}
Assume that we have some array in which every pair of adjacent spins is far off resonance from one another, i.e. $\Delta_j\gg J$, $\forall j$. Now assume that we abruptly tune one (or more) of the spin Zeeman energies so that we have a triplet $ABA$ where energies $A$ and $B$ are comparable. Let us refer to these spins by the labels $1$ to $3$, and similarly label the external neighboring spins as $0$ and $4$. Suppose spins $0$, $2$ and $4$ are initially in state $\ket{\uparrow}$. Since spin $0$ remains far off resonance from $1$, their interaction is effectively of the Ising form $J\alpha\sigma_0^Z\sigma_1^Z$. Similarly the interaction between $3$ and $4$ is $J\alpha\sigma_3^Z\sigma_4^Z$. Moreover, those external spins (having only an Ising interaction with their neighbors) are `frozen' in the $\ket{\uparrow}$ state; thus their interaction with the triplet reduces to $J\alpha\sigma_1^Z$ and $J\alpha\sigma_3^Z$, and the dynamics of the triplet are described by the Hamiltonian:
$$
H_{\rm triplet}=H_{\rm zeeman}+H_{\rm int}
$$
$$
H_{\rm zeeman}=(A+\alpha J)(\sigma_1^Z+\sigma_3^Z)+B\sigma_2^Z
$$
$$
H_{\rm int}=J\sum_{j=1,2}\left(\sigma_j^X\sigma_{j+1}^X+\sigma_j^Y\sigma_{j+1}^Y+\alpha\sigma_j^Z\sigma_{j+1}^Z\right)
$$
In the following we will use the notation $J_{XY}\equiv J$, $J_Z\equiv \alpha J$, $a \equiv A+J_Z$ (the effective Zeeman energy of spins 1 and 3) and $b \equiv B$ for consistency. The Hamiltonian is easy to analyze; the states $\ket{\uparrow\uparrow\uparrow}$ and $\ket{\downarrow\downarrow\downarrow}$ of course remain eigenstates while the remaining states form two distinct subspaces. For the `up' subspace spanned by $\{\ket{\downarrow\uparrow\uparrow}$, $\ket{\uparrow\downarrow\uparrow}$, $\ket{\uparrow\uparrow\downarrow}\}$ we have Hamiltonian and eigenvectors given by
$${\hat H}_U= b\ {\bf I} + 2J_{XY}\left(
\begin{array}{ccc}
0 & 1 & 0 \\
1 & p & 1 \\
0 & 1 & 0
\end{array}
\right)\ \ \Rightarrow\ \ \
\ket{a}_U=\left(
\begin{array}{c}
1 \\
0 \\
-1
\end{array}
\right)\ \ \ {\rm and}\ \ \
\ket{\pm}_U=\left(
\begin{array}{c}
1 \\
\frac{1}{2}(p\pm S_p) \\
1
\end{array}
\right).
$$
The corresponding energies are $E_a^U=b$ and $E_\pm^U=b+J_{XY}(p\pm S_p)$. Here $p\equiv (a-b-J_Z)/J_{XY}$ and $S_p\equiv\sqrt{8+p^2}$.
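As a quick cross-check of this diagonalization, the following minimal numerical sketch (with arbitrary illustrative values of $J_{XY}$, $J_Z$, $a$ and $b$, and $\hbar=1$) verifies the spectrum of ${\hat H}_U$:
\begin{verbatim}
import numpy as np

# Illustrative (hypothetical) parameters, hbar = 1
J_XY, J_Z, a, b = 1.0, 0.3, 1.7, 1.2
p   = (a - b - J_Z) / J_XY
S_p = np.sqrt(8.0 + p**2)

# 'up' subspace Hamiltonian in the basis {|dn,up,up>, |up,dn,up>, |up,up,dn>}
H_U = b * np.eye(3) + 2.0 * J_XY * np.array([[0, 1, 0],
                                             [1, p, 1],
                                             [0, 1, 0]])

numeric  = np.sort(np.linalg.eigvalsh(H_U))
analytic = np.sort([b, b + J_XY * (p + S_p), b + J_XY * (p - S_p)])
print(numeric, analytic)              # the two lists agree
assert np.allclose(numeric, analytic)
\end{verbatim}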
Similarly for the complementary `down' space $\{\ket{\uparrow\downarrow\downarrow}$, $\ket{\downarrow\uparrow\downarrow}$, $\ket{\downarrow\downarrow\uparrow}\}$ we have
$${\hat H}_D= -b\ {\bf I} + 2J_{XY}\left(
\begin{array}{ccc}
0 & 1 & 0 \\
1 & q & 1 \\
0 & 1 & 0
\end{array}
\right)\ \ \Rightarrow\ \ \
\ket{a}_D=\left(
\begin{array}{c}
1 \\
0 \\
-1
\end{array}
\right)\ \ \ {\rm and}\ \
\ket{\pm}_D=\left(
\begin{array}{c}
1 \\
\frac{1}{2}(q\pm S_q) \\
1
\end{array}
\right).
$$
The corresponding energies are $E_a^D=-b$ and $E_\pm^D=-b+J_{XY}(q\pm S_q)$, where $q\equiv (b-a-J_Z)/J_{XY}$ and $S_q\equiv\sqrt{8+q^2}$. Now, we know that the initial computational qubit states are
$$
\begin{array}{ccc}
\ket{00} =& \ket{\downarrow\uparrow\downarrow}& \ \ \Leftarrow {\rm\ composed\ of }\ \ket{+}_D\ {\rm and}\ \ket{-}_D\ \ \ \ \ \ \\
\ket{01} = & \ket{\downarrow\uparrow\uparrow}& \ \ \Leftarrow {\rm\ composed\ of }\ \ket{a}_U, \ket{+}_U\ {\rm and}\ \ket{-}_U\\
\ket{10} = & \ket{\uparrow\uparrow\downarrow}& \ \ \Leftarrow {\rm\ composed\ of }\ \ket{a}_U, \ket{+}_U\ {\rm and}\ \ket{-}_U\\
\ket{11} = & \ket{\uparrow\uparrow\uparrow}& \ \ \Leftarrow {\rm \ eigenstate\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ }
\end{array}
$$
During the gate operation, the states (other than $\ket{11}$) will rotate within their subspaces. We must arrange to `revive' both the $\ket{00}$ state and the states $\ket{01}$ \& $\ket{10}$ at the same instant, i.e. we must arrange that at some time $t_R$ the central spin is in the definite state $\ket{\uparrow}$ for all computational basis states. (Note that this condition does permit a net rotation in the plane defined by $\ket{01}$ \& $\ket{10}$). Thus at that moment we can effectively switch off the exchange interaction (by switching to far off-resonant Zeeman energies) and we will have performed some unitary transform in the computational basis. Whether such a transform constitutes a useful gate depends on entanglement criteria as mentioned later.
The times for which $\ket{00}$ revives are determined by $E_+^D-E_-^D$. The times at which a state, initially in the $\ket{01}$, $\ket{10}$ plane, returns to that plane are determined by $E_+^U-E_-^U$. Now the parameter which we can experimentally vary is the Zeeman detuning $a-b$; although there may be various detunings for which the revivals coincide (which could be found numerically), there is one value that is immediately obvious by inspection: $a-b=0$ (corresponding to tuning the central barrier spin to $A+J_Z$). In this case we see that $p=q=-J_Z/J_{XY}$, $S_p=S_q=\sqrt{8+(J_Z/J_{XY})^2}$ and thus both revivals coincide at time $t_R=\pi\hbar(8J_{XY}^2+J_Z^2)^{-\frac{1}{2}}$. At this instant, the transformation in the computational basis $\{\ket{00}$, $\ket{01}$, $\ket{10}$, $\ket{11}\}$ is given by the following matrix (neglecting a global phase)
$$
U=\left(
\begin{array}{cccc}
1 &0 &0 &0\\
0 &iQs &Qc &0\\
0 &Qc &iQs &0\\
0 &0 &0 &W
\end{array}
\right)
$$
Here $Q=-\exp(i\phi)$, $s/c=\sin/\cos(\phi)$ and $W=-\exp(-2i\phi)$ with $\phi=\frac{\pi}{2}(8J_{XY}^2/J_Z^2+1)^{-\frac{1}{2}}$. The phases in this matrix are with respect to the passive state of the device (i.e. if we had not tuned the triplet into resonance), under the assumption that the resonance was achieved by shifting the Zeeman energy of the central spin. (If in fact the Zeeman energies of the qubit-bearing spins were adjusted to achieve resonance, then we simply have the above matrix together with two trivial single qubit $Z$ gates.) This transformation $U$ is entangling, and is therefore adequate to construct a universal gate set when combined with single qubit gates\cite{Nielsen}. Using the procedure described in Refs. [\onlinecite{Nielsen, gatePaper}] one can confirm that no more than four uses of this gate are required to form a Control-NOT, for a wide range of $J_Z$ including the $J_Z=0$ and $J=J_Z$ cases, which represent the XY interaction and the isotropic Heisenberg interaction, respectively. It is easier to appreciate the nature of the transform if we apply a couple of single-qubit Z-rotations; defining
$$
Z(\theta)\equiv\left(
\begin{array}{cc}
\exp (i \theta) &0\\
0 & \exp ( -i \theta )
\end{array}\right)
$$
then neglecting a global phase,
\begin{equation}
Z_1(\psi).Z_2(\psi).U=
\left(
\begin{array}{cccc}
1 &0 &0 &0 \\
0 &-Q's &iQ'c &0\\
0 &iQ'c &-Q's &0\\
0 &0 &0 &1
\end{array}
\right)
\label{dressedMat}
\end{equation}
Here $\psi=\frac{\pi}{4}(1-(8J^2/J_Z^2+1)^{-\frac{1}{2}})$ and $Q'=Q^2$ while $s/c$ are as before.
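As a quick numerical confirmation of the simultaneous revival at $a-b=0$ that underlies these matrices (a minimal sketch with illustrative couplings, $\hbar=1$): the two $3\times3$ blocks are propagated to $t_R$, and the central spin is then checked to be definitely $\ket{\uparrow}$ for the relevant inputs.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

# Illustrative parameters (hbar = 1); the central spin is tuned so that a - b = 0
J_XY, J_Z, b = 1.0, 0.4, 2.0
p = q = -J_Z / J_XY
t_R = np.pi / np.sqrt(8 * J_XY**2 + J_Z**2)

M = lambda d: np.array([[0, 1, 0], [1, d, 1], [0, 1, 0]], dtype=complex)
H_U =  b * np.eye(3) + 2 * J_XY * M(p)  # basis {|dn,up,up>, |up,dn,up>, |up,up,dn>}
H_D = -b * np.eye(3) + 2 * J_XY * M(q)  # basis {|up,dn,dn>, |dn,up,dn>, |dn,dn,up>}

U_up, U_dn = expm(-1j * H_U * t_R), expm(-1j * H_D * t_R)

# |01> = |dn,up,up>: at t_R the amplitude on |up,dn,up> (central spin down) vanishes
print(abs(U_up[1, 0]))   # essentially zero (machine precision)
# |00> = |dn,up,dn>: at t_R all population is back on |dn,up,dn>
print(abs(U_dn[1, 1]))   # essentially one
\end{verbatim}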
Notice that in the $J_Z=0$ limit, i.e. the case of a pure XY interaction, the primitive matrix $U$ takes a particularly simple form \cite{recentConfirmation}
\begin{equation}
U_P=\left(
\begin{array}{cccc}
1 &0 &0 &0\\
0 &0 &-1 &0\\
0 &-1 &0 &0\\
0 &0 &0 &-1
\end{array}
\right)
\label{XYgate}
\end{equation}
using which one can construct a CNOT with only two applications, as shown in Fig. 1(b). In this limit, the dressed matrix (\ref{dressedMat}) is recognizable as the ``iSWAP'' which has been studied in the context of an XY interaction between adjacent qubits\cite{Schuch}. Indeed, in the limit of a {\em strict} XY interaction, one might choose to abandon the barrier spin architecture completely, and adopt a trivial architecture in which qubits are adjacent (since the primary function of the barrier spins is to negate the effect of the residual Ising interaction, absent for the pure XY form).
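A minimal numerical sketch tying these observations together (the anisotropy $J_Z/J_{XY}$ used below is an arbitrary illustrative choice): it builds the primitive $U$ from the expressions above, confirms that it is unitary and entangling, and checks that the $J_Z\rightarrow0$ limit reproduces Eq.~(\ref{XYgate}).
\begin{verbatim}
import numpy as np

def primitive_U(jz_over_jxy):
    """Two-qubit transform defined in the text, as a function of J_Z/J_XY."""
    phi = 0.5 * np.pi / np.sqrt(8.0 / jz_over_jxy**2 + 1.0)
    Q, W = -np.exp(1j * phi), -np.exp(-2j * phi)
    s, c = np.sin(phi), np.cos(phi)
    return np.array([[1, 0,          0,          0],
                     [0, 1j * Q * s, Q * c,      0],
                     [0, Q * c,      1j * Q * s, 0],
                     [0, 0,          0,          W]])

U = primitive_U(0.5)                              # illustrative choice J_Z = J_XY / 2
assert np.allclose(U @ U.conj().T, np.eye(4))     # unitarity

# J_Z -> 0 limit: phi -> 0 and U approaches the XY-limit gate U_P
U_P = np.array([[1, 0, 0, 0], [0, 0, -1, 0], [0, -1, 0, 0], [0, 0, 0, -1]])
assert np.allclose(primitive_U(1e-6), U_P, atol=1e-5)

# entangling check: act on the product state (|0> + |1>)|0>/sqrt(2)
psi = U @ (np.kron([1, 1], [1, 0]) / np.sqrt(2))
schmidt = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)
print(schmidt)   # two non-zero Schmidt coefficients, so U creates entanglement
\end{verbatim}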
\begin{figure}
\caption{(a) Schematic showing the basic two-qubit gate.}
\label{figure1}
\end{figure}
\begin{figure}
\caption{(a) An abrupt Zeeman shift, corresponding to our analytic treatment. (b) \& (c) Numerical simulation demonstrating that other, smooth switching functions can also suffice.}
\label{smoothFigure}
\end{figure}
\begin{figure}
\caption{Structures that are efficient in terms of $R_Q$, the ratio of the number of qubits stored to the {\em total} number of spins.}
\label{figure3}
\end{figure}
Note that the second form of gate presented in Ref.~[\onlinecite{ourPRL}] can also be generalised to anisotropic Heisenberg interactions, although it does require some finite $Z$ component, since this is exploited to accumulate a phase during the gate operation.
In Ref.~[\onlinecite{ourPRL}] and in the above analysis, we consider an abrupt change from far off-resonance spins into resonance. This may be difficult to achieve in many otherwise promising implementations; we therefore now investigate the effect of smooth switching. Figure 2 shows various profiles for the dynamically changing Zeeman energy of the central spin, given that the Zeeman energies of the outer spins are static (Fig.~2(a) corresponds to the analytic treatment given above). For numerical convenience we have built these switching profiles as piece-wise combinations of analytic functions, as defined in the figure caption. In both cases (b) and (c) we fixed the time for the switching transition to the arbitrary choice $t_\Delta=1.25$ and varied just a single parameter, the time for which the detuning is zero. The values shown in the figure provided a complete revival of the central spin for all qubit basis states, just as in the case of the abrupt transition. The specific transformation achieved in the qubit basis (i.e. the analogue of Eq.~(\ref{dressedMat})) is of course different for these smooth switching profiles, but it remains strongly entangling and therefore equally suitable as a primitive two-qubit gate.
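The sketch below illustrates the kind of numerical experiment just described. It is not the calculation behind Fig.~2: the $\cos^2$ ramp, the couplings, the detuning and the time grid are all illustrative assumptions, and the scan simply reports how complete the central-spin revival is as a function of the on-resonance hold time.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
I2 = np.eye(2)

def op(o, site):                       # embed a one-spin operator on site 0, 1 or 2
    mats = [I2, I2, I2]; mats[site] = o
    return np.kron(np.kron(mats[0], mats[1]), mats[2])

J, Jz, a, detuning = 1.0, 0.3, 5.0, 40.0   # illustrative units (hbar = 1), detuning >> J
t_ramp, n_ramp = 1.25, 2000                # switching time and integration steps

def H(b_central):                      # triplet Hamiltonian, outer spins at Zeeman energy a
    h = a * (op(sz, 0) + op(sz, 2)) + b_central * op(sz, 1)
    for j in (0, 1):
        h += J * (op(sx, j) @ op(sx, j + 1) + op(sy, j) @ op(sy, j + 1)) \
             + Jz * op(sz, j) @ op(sz, j + 1)
    return h

def ramp_propagator(direction):        # cos^2 ramp of the central Zeeman energy
    dt, U = t_ramp / n_ramp, np.eye(8, dtype=complex)
    for k in range(n_ramp):
        f = np.cos(0.5 * np.pi * (k + 0.5) * dt / t_ramp) ** 2
        if direction == 'up':
            f = 1.0 - f
        U = expm(-1j * H(a + detuning * f) * dt) @ U
    return U

U_down, U_up, H_res = ramp_propagator('down'), ramp_propagator('up'), H(a)
P_up = 0.5 * (np.eye(8) + op(sz, 1))   # projector onto 'central spin up'

for t_hold in np.linspace(0.0, 3.0, 61):          # scan the on-resonance hold time
    U_tot = U_up @ expm(-1j * H_res * t_hold) @ U_down
    # the relevant inputs have the central spin up: basis indices 0, 1, 4, 5
    revival = min(np.real(np.conj(U_tot[:, i]) @ P_up @ U_tot[:, i]) for i in (0, 1, 4, 5))
    print(f"t_hold = {t_hold:4.2f}   min P(central spin up) = {revival:.4f}")
\end{verbatim}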
The analysis presented in the present paper has been phrased in terms of a one-dimensional array (Fig.~3(a)). However, the basic gate construction, involving two qubit-bearing spins and one barrier spin, can immediately be generalised to many geometries in either two or three dimensions. In principle one can produce a suitable structure by taking {\em any} arrangement of qubit-bearing spins, and introducing a barrier spin between each (hitherto) adjacent pair. One possible measure of the efficiency of the implementation would be the ratio of qubit-bearing spins to the total number of spins, which we can denote $R_Q$. The value $R_Q=\frac{1}{2}$ corresponds to the one-dimensional arrangement (Fig.~3(a)). For a two, or higher, dimensional geometry at least some of the qubits must of course have three or more neighbors. If we restrict ourselves to considering regular structures in which every qubit has the same number of neighbors, then it is apparent that the highest possible value of $R_Q$ is $2/5$. Two arrangements which achieve this value are the hexagonal geometry of Fig.~3(b), and the 3D structure illustrated in Fig.~3(c).
In order to do better than this ratio it would be necessary for barrier spins to do `double duty', in the sense that each barrier would not be unique to a specific qubit pair. Figure 3(d) shows an example arrangement achieving $R_Q=3/5$ in 2D. Note that (d) is the complement of (b), i.e. the qubit and barrier roles are reversed; similarly, one could reverse Fig.~3(c) for a 3D form. In such a structure, bringing a barrier into resonance with its neighbors would initiate a three-qubit gate process: to successfully complete the gate one would require the simultaneous revival of all qubit basis states at some subsequent moment. As the number of qubits involved increases, this quickly becomes infeasible (see Appendix II), but both the three-qubit gate shown in Fig.~3(d), and a four-qubit variant, do appear possible \cite{SCBunpub}.
Of course, such multi-qubit gates are quite exotic and may be rather inefficient primitives for implementing algorithms.
In conclusion, we have extended the results presented in Ref. [\onlinecite{ourPRL}] in several significant respects. The first, fundamental generalization is from a pure isotropic Heisenberg interaction to a more general anisotropic interaction, including the in-plane ``XY'' interaction as a special case.
All the results presented here incorporate this generality. We have provided a proof that an interaction of this general form tends to a simple Ising interaction in the limit of far off-resonance neighbors. We have presented an analysis of the basic gate of Ref.~[\onlinecite{ourPRL}] with this general interaction, and exhibited the resulting primitive two-qubit gate. In the special case of an XY interaction, we note that the gate has an especially simple form, and we have provided an explicit circuit for an efficient CNOT based on this primitive. We have also considered the effect of a non-abrupt switching of the Zeeman energy: by numerical simulation we demonstrate that simply varying the duration of the on-resonance phase (while the switching time remains constant) allows one to achieve the necessary revival of the barrier spin, and therefore abrupt switching is not a requirement of the scheme. Finally we have remarked upon the simplicity of generalizing to two- and three-dimensional arrays, noting that the array geometry then determines the scheme's cost in terms of the proportion of barrier spins.
SCB wishes to acknowledge support from a Royal Society URF, and from the Foresight LINK project ``Nanoelectronics at the Quantum Edge''.
\noindent{\bf Appendix I: Analysis of Heisenberg Chain with Large Zeeman Discrepancies}
Given the definition of $H\equiv H_{1}+H_{2}$ introduced in the main body of the paper, we can proceed to use the Trotter formula to write the time evolution operator $U(t)$ as
\begin{equation}
U(t)=\lgroup \exp{(-iH_1 t/n)}\exp{(-iH_2 t/n)}\rgroup^n\ \ \ {\rm as} \ \ \ n\rightarrow \infty.
\label{trotter}
\end{equation}
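As a concrete illustration of Eq.~(\ref{trotter}), the following sketch (an arbitrary small $N=4$ ring with illustrative couplings; $\hbar=1$) compares the exact propagator with the split form for increasing $n$:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

N = 4                                        # small closed ring, illustrative
def op(o, j):                                # operator o acting on site j (mod N)
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, o if k == j % N else np.eye(2))
    return out

J, alpha = 1.0, 0.3
B = [0.7, 3.1, 0.7, 3.1]                     # illustrative ABAB pattern of Zeeman energies

H1 = sum(B[j] * op(sz, j) + J * alpha * op(sz, j) @ op(sz, j + 1) for j in range(N))
# J/2 (s+ s- + s- s+) = J (s^X s^X + s^Y s^Y) for sigma(+/-) = sigma^X +/- i sigma^Y
H2 = sum(J * (op(sx, j) @ op(sx, j + 1) + op(sy, j) @ op(sy, j + 1)) for j in range(N))

t = 1.0
U_exact = expm(-1j * (H1 + H2) * t)
for n in (1, 10, 100, 1000):
    U_trot = np.linalg.matrix_power(expm(-1j * H1 * t / n) @ expm(-1j * H2 * t / n), n)
    print(n, np.linalg.norm(U_trot - U_exact))   # error shrinks roughly as 1/n
\end{verbatim}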
Now we will seek to move all $H_1$ terms to the right, thus separating the Ising and non-Ising parts. Note first that since
$$
[\sigma_i^Z,\sigma_j^Z]=0\nonumber \ \ \ \ [\sigma_i^Z,\sigma_j^Z\sigma_k^Z]=0\ \ \ \ [\sigma_i^Z\sigma_j^Z,\sigma_k^Z\sigma_m^Z]=0
\nonumber
$$
we can write the following, using $\tau\equiv t/n$,
\begin{equation}
\exp(-i H_1\tau)=\lgroup\prod_j\exp(-iB_j \tau \sigma_j^Z)\rgroup\lgroup\prod_j\exp(-i\alpha J \tau \sigma^Z_j\sigma^Z_{j+1})\rgroup
\label{sigZfactor}
\end{equation}
and in fact we can reorder these terms as we wish.
Moreover we can use
\begin{eqnarray}
\exp(i \tau B_j \sigma_j^Z)&=&\cos(\tau B_j)1 + i\sin(\tau B_j)\sigma_j^Z\nonumber\\
\exp(i \tau J\alpha\sigma_j^Z\sigma_{j+1}^Z)&=&\cos(\tau J\alpha)1 + i\sin(\tau J \alpha)\sigma_j^Z\sigma_{j+1}^Z
\label{sincosexpand}
\end{eqnarray}
We will also find it useful to employ
\begin{equation}
\sigma^Z\sigma^\pm=\pm\sigma^\pm,\ \ \ \sigma^\pm\sigma^Z=\mp\sigma^\pm\ \ \Rightarrow\ \ \sigma^Z\sigma^\pm=-\sigma^\pm\sigma^Z
\label{sigmaZabsorb}
\end{equation}
where $\sigma^{\pm}$ are as defined in the main body of the paper. We will introduce a generalisation of $H_2$,
$$
H_{2}^W\equiv\frac{J}{2}(\sum_j W_j\sigma^{+}_j
\sigma^{-}_{j+1}+W_j^\dagger\sigma^{-}_j\sigma^{+}_{j+1})
$$
where the $W_j$ are any functions involving scalar constants and $\sigma^Z_k$ for any/all $k$. Now expand
\begin{equation}
\exp(-iH^W_{2}\tau)=\sum_{p=0}^\infty \frac{1}{p!}\lgroup\frac{-iJ}{2}(\sum_j W_j\sigma^{+}_j
\sigma^{-}_{j+1}+W_j^\dagger\sigma^{-}_j\sigma^{+}_{j+1})\tau\rgroup^p
\label{powerSeries}
\end{equation}
and note the following using (\ref{sincosexpand}) and (\ref{sigmaZabsorb})
\begin{eqnarray}
&&\exp(-i\tau B_j\sigma_j^Z)\lgroup W_j\sigma^{+}_j
\sigma^{-}_{j+1}+W^\dagger_j\sigma^{-}_j\sigma^{+}_{j+1}\rgroup\nonumber\\
&&\ \ \ =\lgroup W_j\sigma^{+}_j
\sigma^{-}_{j+1}+W^\dagger_j\sigma^{-}_j\sigma^{+}_{j+1}\rgroup\exp(i \tau B_j\sigma_j^Z)\nonumber \\
&&\ \ \ =\lgroup W_j\sigma^{+}_j
\sigma^{-}_{j+1}+W^\dagger_j\sigma^{-}_j\sigma^{+}_{j+1}\rgroup\exp(2i \tau B_j\sigma_j^Z)\exp(-i \tau B_j\sigma_j^Z)\nonumber \\
&&\ \ \ =\lgroup e^{-2i\tau B_j}W_j\sigma^{+}_j
\sigma^{-}_{j+1}+e^{2i\tau B_j}W^\dagger_j\sigma^{-}_j\sigma^{+}_{j+1}\rgroup\exp(-i B_j\tau\sigma_j^Z)
\nonumber
\end{eqnarray}
and similarly
\begin{eqnarray}
&&\exp(-i\tau B_{j+1}\sigma_{j+1}^Z)(W_j\sigma^{+}_j
\sigma^{-}_{j+1}+W^\dagger_j\sigma^{-}_j\sigma^{+}_{j+1})\nonumber\\
&&\ \ \ =(e^{2i\tau B_{j+1}}W_j\sigma^{+}_j
\sigma^{-}_{j+1}+e^{-2i\tau B_{j+1}}W^\dagger_j\sigma^{-}_j\sigma^{+}_{j+1})\exp(-i B_{j+1}\tau\sigma_{j+1}^Z).
\nonumber
\end{eqnarray}
Then
\begin{eqnarray}
&&\lgroup\prod_j\exp(-i\tau B_j\sigma_j^Z)\rgroup(W_k\sigma^{+}_k
\sigma^{-}_{k+1}+W^\dagger_k\sigma^{-}_k\sigma^{+}_{k+1})\nonumber \\
&&\ =(\exp(i\tau\Delta_k)W_k\sigma^{+}_k
\sigma^{-}_{k+1}+\exp(-i\tau\Delta_k)W_k^\dagger\sigma^{-}_k\sigma^{+}_{k+1})\lgroup\prod_j\exp(-i\tau B_j\sigma_j^Z)\rgroup
\nonumber
\end{eqnarray}
where $\Delta_j\equiv 2(B_{j+1}-B_{j})$. Now combining this with (\ref{sigZfactor}) and (\ref{powerSeries}) we can write
\begin{eqnarray}
&&\exp(-i H_1\tau)\exp(-i H_2^W\tau)\nonumber\\
&&\ =\lgroup\prod_j\exp(-i\alpha J \tau\sigma^Z_j\sigma^Z_{j+1})\rgroup
\exp(-i H_2^{V}\tau)\lgroup\prod_j\exp(-iB_j \tau\sigma_j^Z)\rgroup \label{halfDone}
\end{eqnarray}
where $V_j\equiv\exp(i\tau\Delta_j)W_j$. Now we can commute the remaining left side product through to the right in a similar way. Again using (\ref{sincosexpand}) and (\ref{sigmaZabsorb}) we note that:
\begin{eqnarray}
&&\exp(-i\tau \alpha J\sigma_{j-1}^Z\sigma_j^Z)\lgroup W_j\sigma^{+}_j
\sigma^{-}_{j+1}+W^\dagger_j\sigma^{-}_j\sigma^{+}_{j+1}\rgroup\nonumber\\
&&\ \ \ =\lgroup W_j\sigma^{+}_j
\sigma^{-}_{j+1}+W^\dagger_j\sigma^{-}_j\sigma^{+}_{j+1}\rgroup\exp(i\tau \alpha J\sigma_{j-1}^Z\sigma_j^Z)\nonumber \\
&&\ \ \ =\lgroup W_j\sigma^{+}_j\sigma^{-}_{j+1}+W^\dagger_j\sigma^{-}_j\sigma^{+}_{j+1}\rgroup\exp(2i\tau \alpha J\sigma_{j-1}^Z\sigma_j^Z)\exp(-i\tau \alpha J\sigma_{j-1}^Z\sigma_j^Z)\nonumber \\
&&\ \ \ =\lgroup W_j\exp(-2i \tau \alpha J\sigma_{j-1}^Z)\sigma^{+}_j
\sigma^{-}_{j+1}+W^\dagger_j\exp(2i \tau \alpha J\sigma_{j-1}^Z)\sigma^{-}_j\sigma^{+}_{j+1}\rgroup\exp(-i\tau \alpha J\sigma_{j-1}^Z\sigma_j^Z)\nonumber
\end{eqnarray}
Similarly
\begin{eqnarray}
&&\exp(-i\tau \alpha J\sigma_{j+1}^Z\sigma_{j+2}^Z)\lgroup W_j\sigma^{+}_j
\sigma^{-}_{j+1}+W^\dagger_j\sigma^{-}_j\sigma^{+}_{j+1}\rgroup\nonumber\\
&&\ \ \ =\lgroup W_j\exp(2i \tau \alpha J\sigma_{j+2}^Z)\sigma^{+}_j
\sigma^{-}_{j+1}+W^\dagger_j\exp(-2i \tau \alpha J\sigma_{j+2}^Z)\sigma^{-}_j\sigma^{+}_{j+1}\rgroup\exp(-i\tau \alpha J\sigma_{j+1}^Z\sigma_{j+2}^Z)\nonumber
\end{eqnarray}
However, for the $\sigma_j^Z\sigma_{j+1}^Z$ term we see that
$$
[\exp(-i\tau \alpha J\sigma_{j}^Z\sigma_{j+1}^Z)\ ,\ W\sigma^{+}_j
\sigma^{-}_{j+1}+W^\dagger\sigma^{-}_j\sigma^{+}_{j+1}]=0
$$
since there is a double sign inversion. Then combining these three results we can write
\begin{eqnarray}
&&\lgroup\prod_j\exp(-i\tau \alpha J\sigma_j^Z\sigma_{j+1}^Z)\rgroup\lgroup W_k\sigma^{+}_k
\sigma^{-}_{k+1}+W^\dagger_k\sigma^{-}_k\sigma^{+}_{k+1}\rgroup\nonumber \\
&&\ =\lgroup W_k\exp(2i\tau J\alpha (\sigma^Z_{k+2}-\sigma^Z_{k-1}))\sigma^{+}_k
\sigma^{-}_{k+1}\nonumber \\
&&\ \ \ +W^\dagger_k\exp(-2i\tau J\alpha (\sigma^Z_{k+2}-\sigma^Z_{k-1}))\sigma^{-}_k\sigma^{+}_{k+1}\rgroup\lgroup\prod_j\exp(-i\tau\alpha J\sigma_j^Z\sigma_{j+1}^Z)\rgroup
\nonumber
\end{eqnarray}
Now combining this with (\ref{powerSeries}) and (\ref{halfDone}) we have
$$
\exp(-i H_1\tau)\exp(-i H_2^W\tau)=\exp(-i H_2^{Q}\tau)\exp(-i H_1\tau)
$$
where $Q_j\equiv W_j\exp\left(i\tau(\Delta_j+2J\alpha(\sigma^Z_{j+2}-\sigma^Z_{j-1}))\right)$.
Now because these $Q_j$ fit within the original definition of $W_j$ (i.e. they are simply ``functions involving scalar constants and $\sigma^Z_k$ for any/all $k$''), we can just repeat the argument to commute {\em all} terms $\exp(-i H_1 \tau)$ to the far right. The term originally identified as the $m^{th}$ element $\exp(-i H_2 \tau)$ in the Trotter expansion (\ref{trotter}) will have $m$ terms ``$\exp(-i H_1 \tau)$'' pass `through' it, and will thus accumulate a final $Q(m)=\exp\left(im\frac{t}{n} (\Delta_k+2J\alpha(\sigma^Z_{k+2}-\sigma^Z_{k-1}))\right)$. So the exact expression for the time evolution finally becomes:
\begin{eqnarray}
U(t)=R(t)\exp{(-iH_1 t)}\ \ \ \ {\rm where}& R=\{\prod_{m=1}^{m=n}\exp\lgroup\frac{-it}{n}H_R(\frac{mt}{n})\rgroup\}_{n\rightarrow\infty}\nonumber\\
H_R(\eta)=\frac{J}{2}\sum_j X_j(\eta)\sigma^+_j\sigma^-_{j+1}+X^\dagger_j(\eta)\sigma^-_j\sigma^+_{j+1}\ \ &{\rm with}\ \ X_j(\eta)=\exp\left(i\eta(\Delta_j+2J\alpha(\sigma^Z_{j+2}-\sigma^Z_{j-1}))\right)\nonumber
\end{eqnarray}
The right-hand factor in $U(t)$ is the pure Ising chain evolution we seek, but the additional `residual' operator $R$ is more complex. We would like to show that it tends to unity as $\delta_j\equiv J/\Delta_j\rightarrow 0$ for all $j$. We cannot simply integrate the terms in the product $R$ since they do not commute, and thus we cannot immediately gather the elements with a $\frac{t}{n}$ coefficient. Therefore we proceed by making the expansion $\exp A = 1+A+A^2/2+...$:
$$
\prod_{m=1}^{m=n}\exp{(-iH_R(\frac{mt}{n}) t/n)}=\prod_{m=1}^{m=n}\lgroup1-iH_R(\frac{mt}{n}) t/n+\frac{1}{2}\lgroup -iH_R(\frac{mt}{n}) t/n\rgroup^2+...\rgroup
$$
as ${n\rightarrow\infty}$. We cannot truncate this series since $H_R()$ is not small, but we will seek to gather and sum all terms of given order in $H_R$. We introduce
\begin{eqnarray}
g_a&\equiv&\sum_{m=a}^n \frac{t}{n} H_R(mt/n)\nonumber\\
&\rightarrow& \int_{\eta=at/n}^t d\eta\ H_R(\eta)\nonumber\\
&=&\frac{J}{2}\sum_j\lgroup\lgroup\int_{\eta=at/n}^t d\eta X_j(\eta)\rgroup\sigma^+_j\sigma^-_{j+1}+\lgroup\int_{\eta=at/n}^t d\eta X^\dagger_j(\eta)\rgroup\sigma^-_j\sigma^+_{j+1}\rgroup\nonumber
\end{eqnarray}
\noindent We can evaluate the required indefinite integrals\cite{exceptSpecial} as follows.
Defining $\Delta\equiv\Delta_0$, $\rho_j\equiv \Delta/\Delta_j$ and $\delta\equiv J/\Delta$,
$$
\frac{J}{2}\int d\eta X_j(\eta) = \frac{-i\delta}{2}x_j X_j(\eta)\ \ \ \ \ \ \ \ \frac{J}{2}\int d\eta X^\dagger_j(\eta) = \frac{i\delta}{2}x_j X^\dagger_j(\eta)
$$
where
$$
x_j\equiv\frac{\rho_j(1-8\alpha^2\delta_j^2(1+\sigma_{j-1}^Z\sigma_{j+2}^Z))(1-2\alpha\delta_j(\sigma_{j+2}^Z-\sigma_{j-1}^Z))}
{1-16\alpha^2\delta_j^2}\approx \rho_j
$$
with the approximation holding in the limit that all $\delta_j\equiv\frac{J}{\Delta_j}\ll1$. Note that $\rho_j$, and thus $x_j$, is a modest ratio in our periodic chains (e.g. for an $ABAB..$ chain $\rho_j=(-1)^j$; for an $ABCABC..$ chain $\rho_j$ might run $1,1,-\frac{1}{2},1,1,-\frac{1}{2},..$ say). We can write the indefinite integral
\begin{eqnarray} \int d\eta\ H_R(\eta)&=&\frac{i\delta}{2}\sum_j x_j\lgroup -X_j(\eta)\sigma_j^+\sigma_{j+1}^- + X_j^\dagger(\eta)\sigma_j^-\sigma_{j+1}^+\rgroup\nonumber\\
&\equiv& \frac{i\delta}{2}K(\eta)
\label{ints}
\end{eqnarray}
Using $K()$ defined above we can write
\begin{equation}
g_a=\frac{i\delta}{2}(K(t)-K(at/n))
\label{gaeqn}
\end{equation}
Now returning to the expansion, the lowest order in $H_R$ is of course $1$, and the sum of all terms of $1^{st}$ order in $H_R$ is precisely $-ig_0=\frac{1}{2}\delta\{K(t)-K(0)\}$. Thus so far we are seeing the anticipated behavior: the `residual' part of the dynamics, after the Ising-like behavior is allowed for, appears to vanish with $\delta$. However, since we are using an expansion in $H_R()$, where $H_R()$ is not small, we should evaluate and sum the higher terms. Let us use the symbol $S_N$ to represent the sum of terms of order $H_R()^N$; then we have already found $S_1=\frac{1}{2}\delta\{K(t)-K(0)\}$, and
\begin{eqnarray}
S_2&=&(-i\frac{t}{n})^2\sum_{m=1}^n \{\frac{1}{2}(H_R(mt/n))^2+H_R(mt/n)\sum_{p>m} H_R(pt/n)\} \nonumber \\
&=&-(\frac{t}{n})^2\sum_{m=1}^n \{-\frac{1}{2}(H_R(mt/n))^2+H_R(mt/n)\sum_{p\geq m} H_R(pt/n)\} \nonumber
\end{eqnarray}
Now the factor $(t/n)^2$ causes the first term here to vanish in the limit $n\rightarrow\infty$, since it contains only $n$ terms, each of order $H_R\sim J$. For the second term
\begin{eqnarray}
S_2=-\int_0^t H_R(\eta)\lgroup \int_\eta^t H_R(\eta') d \eta'\rgroup d\eta
\end{eqnarray}
but the inner integral is given by (\ref{gaeqn}) so that
\begin{equation}
S_2=\frac{-i\delta}{2}\int_0^t H_R(\eta)\lgroup K(t)-K(\eta) \rgroup d\eta
\label{order2terms}
\end{equation}
This integral can be fully evaluated\cite{SCBunpub} but the key point is that it can already be seen to be of order $\delta$ (or less). Note that the $\delta\equiv\frac{J}{\Delta}$ factor {\em cannot} be absorbed by the remaining integral since the variable $\Delta$ occurs only as a phase $\sim\exp(i\Delta\tau)$. Thus in expanding and evaluating the integral we will see some terms with an additional factor of $\frac{1}{\Delta}$, and in the special case that a term exhibits cancellation of the $\Delta$ elements in the phase we would apply a factor of order unity - but we can never introduce a factor of $\Delta$.
Generalizing this observation we can consider $S_N$. This involves terms of the form $H_R(m_1t/n)H_R(m_2t/n)...H_R(m_Nt/n)$ for some set of integers $m_1\geq m_2\geq...m_N$. By the same reasoning above, we can neglect terms where two or more of the $m_i$ are the same value, since they collectively constitute a negligible portion $\frac{1}{n}$ of the sum as $n\rightarrow\infty$. Then we find
\begin{eqnarray}
S_N&=&(-i)^N\int_0^t H_R(\xi_1) \int_{\xi_1}^t H_R(\xi_2)\int_{\xi_2}^t ....\int_{\xi_{N-1}}^t H_R(\xi_N)\ d\xi_1\ d\xi_2\ ... d\xi_N\nonumber\\
&=&\frac{(-i)^{N-1}}{2}\delta\int_0^t H_R(\xi_1) \int_{\xi_1}^t H_R(\xi_2) \int_{\xi_2}^t ....\int_{\xi_{N-2}}^t (K(t)-K(\xi_{N-1}))\ d\xi_1\ d\xi_2\ ... d\xi_{N-1}\nonumber
\end{eqnarray}
And as before we can argue that although the remaining $N-1$ integrals may produce additional factors of $1/\Delta$, they cannot absorb any. Thus the factor $\delta$ will remain and we can conclude that {\bf all terms $S_N$ ($N\geq 1$) in the expansion are of order $\delta$ or less.} Therefore the time evolution operator is
$$
U(t)=(1-\delta P(t))\exp{(-iH_1 t)}
$$
for some finite operator $P(t)$ whose magnitude does not increase with $\Delta$. This is the result presented in the main body of the paper.
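This limiting behaviour is straightforward to probe numerically. The sketch below (illustrative only: a small $ABAB$ ring, arbitrary couplings and evolution time, $\hbar=1$) shows the distance between the full propagator and the pure Ising one shrinking as $\Delta$ grows, consistent with the $\delta=J/\Delta$ suppression derived above.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)

N = 4                                       # small closed ABAB ring (illustrative)
def op(o, j):
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, o if k == j % N else np.eye(2))
    return out

J, alpha, t = 1.0, 0.3, 2.0
H2 = sum(J * (op(sx, j) @ op(sx, j + 1) + op(sy, j) @ op(sy, j + 1)) for j in range(N))

for Delta in (5.0, 10.0, 20.0, 40.0, 80.0):
    B  = [0.25 * (-1) ** j * Delta for j in range(N)]   # gives |Delta_j| = Delta
    H1 = sum(B[j] * op(sz, j) + J * alpha * op(sz, j) @ op(sz, j + 1) for j in range(N))
    err = np.linalg.norm(expm(-1j * (H1 + H2) * t) - expm(-1j * H1 * t))
    # the deviation from pure Ising evolution shrinks as Delta grows (of order J/Delta)
    print(f"Delta = {Delta:6.1f}   ||U(t) - exp(-i H1 t)|| = {err:.4e}")
\end{verbatim}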
\noindent{\bf Appendix II: Regarding Revivals}
In the discussion of two and three dimensional arrays, we stated that it will be difficult to achieve the crucial simultaneous `revivals' for multi-qubit gates involving more than a few qubits. Of course, one can observe that if we choose {\em any} detuning $A-B$ for which the revival periods of the various qubit basis states are related by irrational factors (i.e. the general case), then there will eventually be a complete revival to any desired accuracy (although never perfect). However one would typically need to wait an extremely long time for the level of precision required for QC and therefore this type of revival is not a practical choice. Instead we seek to arrange rapid revivals by looking for values of the detuning (and potentially, other parameters) such that the various revival periods are all related by small rational factors. Fulfilling this condition will become unfeasible as the number of qubits increases.
\begin{references}
\bibitem{ourPRL} S. C. Benjamin and S. Bose, Phys. Rev. Lett. {\bf 90} 247901 (2003).
\bibitem{zhou}X. Zhou {\em et al.}, Phys. Rev. Lett. {\bf 89}, 197903 (2002).
\bibitem{newLANL} Preprint: M.-H. Yung, D. W. Leung and S. Bose,
http://arxiv.org/abs/quant-ph/0312105.
\bibitem{3qubitExchangeOnly} D. P. DiVincenzo {\em et al.}, Nature {\bf 408}, 339 (2000).
\bibitem{Levy} J. Levy, PRL and online at http://arxiv.org/abs/quant-ph/0101057.
\bibitem{myABqubitPaper} S. C. Benjamin, Phys. Rev. A {\bf 64}, 054303 (2001).
\bibitem{ababPRL} S. C. Benjamin, Phys. Rev. Lett. {\bf 88}, 017904 (2002).
\bibitem{DiVincenzo1} D. Loss \& D. P. DiVincenzo, Phys. Rev. A {\bf 57}, 120 (1998).
\bibitem{kane} B. E. Kane, Nature {\bf 393}, 133 (1998).
\bibitem{spinResTrans} R. Vrijen {\em et al.}, Phys. Rev. A {\bf 62}, 012306 (2000).
\bibitem{Imamoglu} A. Imamoglu {\em et al.}, Phys. Rev. Lett. {\bf 83}, 4204 (1999).
\bibitem{Mozyrsky} D. Mozyrsky, V. Privman, and M. Glasser, Phys. Rev. Lett. {\bf 86}, 5112 (2001).
\bibitem{Seiwert} J. Siewert {\em et al.}, J. Low Temp. Phys. {\bf 118}, 795 (2000).
\bibitem{Nielsen} J. L. Dodd {\em et al.}, Phys. Rev. A {\bf 65}, 040301 (2002).
\bibitem{gatePaper} M. J. Bremner {\em et al.}, http://arxiv.org/abs/quant-ph/0207072.
\bibitem{recentConfirmation}A very recent online preprint, which restricts itself to the XY limit, has also exhibited this matrix - see Ref. \onlinecite{newLANL}.
\bibitem{Schuch} See N. Schuch, J. Siewert, Phys. Rev. A {\bf 67}, 032301 (2003)
and references therein.
\bibitem{sin4function} The function here is a symmetric two-part composite. Defining $\tau\equiv t-t_0$, the Zeeman shift is introduced (`switched on') in the period $0<\tau<t_\Delta$ by a function of the form $-\sin^4(\phi\tau)$ for $\tau<\frac{t_\Delta}{2}$, and $\sin^4(\phi(\tau-t_\Delta))-1$ for $\tau>\frac{t_\Delta}{2}$, where $\phi=2\arctan(2^{-1/4})/(t_\Delta)$.
\bibitem{completeR} For each profile we performed a series of numerical simulations, manually adjusting duration until we obtained revivals that were perfect to within an error probability of about $1$ part in $10^6$; apparently one could continue to refine the value arbitrarily.
\bibitem{exceptSpecial} The integral is valid provided $4J\alpha\neq \Delta_j$ $\forall \Delta_j$, which is of course the case since we are interested in the $\alpha J\ll \Delta_j$ limit.
\bibitem{SCBunpub} S. C. Benjamin, unpublished.
\end{references}
\end{document} |
\begin{document}
\author{J.~Nobakht}
\affiliation{Department of Physics, Sharif University of Technology, Azadi avenue, Tehran, Iran}
\author{M.~Carlesso}
\email{[email protected]}
\affiliation{Department of Physics, University of Trieste, Strada Costiera 11, 34151 Trieste, Italy}
\affiliation{Istituto Nazionale di Fisica Nucleare, Trieste Section, Via Valerio 2, 34127 Trieste, Italy}
\author{S.~Donadi}
\affiliation{Institut f\"ur Theoretische Physik, Universit\"at Ulm, D-89069, Germany}
\author{M.~Paternostro}
\affiliation{Centre for Theoretical Atomic, Molecular and Optical Physics, School of Mathematics and Physics,
Queen's University Belfast, Belfast BT7 1NN, United Kingdom}
\author{A.~Bassi}
\affiliation{Department of Physics, University of Trieste, Strada Costiera 11, 34151 Trieste, Italy}
\affiliation{Istituto Nazionale di Fisica Nucleare, Trieste Section, Via Valerio 2, 34127 Trieste, Italy}
\title{Unitary unravelling for the Dissipative Continuous Spontaneous Localization model: application to optomechanical experiments}
\date{\today}
\begin{abstract}
The Continuous Spontaneous Localization (CSL) model strives to describe the quantum-to-classical transition from the viewpoint of collapse models. However, its original formulation suffers from a fundamental inconsistency in that it is explicitly energy non-conserving. Fortunately, a dissipative extension to CSL has been recently formulated that solves such energy-divergence problem. We compare the predictions of the dissipative and non-dissipative CSL models when various optomechanical settings are used, and contrast such predictions with available experimental data, thus building the corresponding exclusion plots.
\end{abstract}
\maketitle
\section{Introduction}
Collapse models predict the occurrence of the quantum-to-classical transition in light of an intrinsic dynamical loss of quantum coherence, when the mass and complexity of the system increase~\cite{Ghirardi:1986aa,Ghirardi:1990aa,Bassi:2003ab,Bassi:2013aa,Adler:2009aa}. This is achieved by modifying the standard Schr{\"o}dinger equation with the addition of a {\it non-linear} interaction with an external {classical noise field}. The latter induces the localization of the wave function in space. Such interaction is negligible for microscopic systems, and is amplified by an intrinsic in-built mechanism that makes it stronger for macroscopic objects. In this way collapse models account for the quantum behaviour of microscopic systems, as well as for the emergence of classicality in the macroscopic world.
The most studied collapse model is the Continuous Spontaneous Localization (CSL) model \cite{Ghirardi:1990aa}. Here, the interaction of a quantum system with the collapse noise depends on two phenomenological parameters: the collapse rate $\lcsl$, which measures the strength of the noise, and the correlation distance $\rC$, which sets the spatial resolution of the collapse, i.e.~the typical distances above which superpositions are suppressed.
The quantitative determination of such parameters has been the focus of speculations. The original estimates put forward in Ref.~\cite{Ghirardi:1986aa} have set $\rC=10^{-7}\, $m and $\lcsl=10^{-16}\,\text{s}^{-1}$, later modified to $\lcsl=10^{-9}\,\text{s}^{-1}$, based on the analysis of the process of latent image formation~\cite{Adler:2007ab,Adler:2007ac}.
Recently, a significant amount of work has been devoted to the identification of experiment-based upper bounds on the CSL parameters. Experiments using matter-wave interferometry~\cite{Arndt:1999aa,Eibenberger:2013aa,Toros:2016aa,Toros:2016ab}, entangled macroscopic diamonds \cite{Belli:2016aa}, cantilevers \cite{Vinante:2016aa,Vinante:2016ab}, cold atoms \cite{Laloe:2014ab,Bilardello:2016aa}, X-rays emission~\cite{Curceanu:2016aa,Piscicchia:2017aa}, and gravitational wave detectors \cite{Carlesso:2016ac,Helou:2016aa} have been instrumental to the drawing of an {\it exclusion plot} aiming at narrowing down the range of acceptable values for the collapse parameters.
A well-known drawback of the phenomenological nature of the CSL model is the prediction of a constant increase of the kinetic energy of a system due to its interaction with the collapse noise. The most conservative prediction of the rate of energy increase is in the range of $10^{-15}$\,K/year~\cite{Bassi:2003aa} (which becomes $10^{-7}$\,K/year when the parameters predicted in Ref.~\cite{Adler:2007ab,Adler:2007ac} are assumed). While such a rate is very small, the fact that it is non-zero entails a fundamental limitation of the theory behind the current formulation of CSL. Surely, the interaction with an external noise is expected to break energy conservation for the system alone; however, one does not expect the noise to keep transferring energy to the system forever. Thermalization to the temperature of the noise field would eventually be achieved, thus stopping the net energy increase, a mechanism that is not contemplated in the original CSL formulation.
This has called for the proposal of a dissipative three-parameter extension (which we will dub as ``dCSL'' model)~\cite{Smirne:2014aa,Smirne:2015aa}: besides $\lcsl$ and $\rC$, the dCSL model requires the introduction of an effective temperature $T_\text{\tiny CSL}$, which can be interpreted as the temperature of the collapse noise. Dissipation guarantees that the energy of any system interacting with this noise approaches an asymptotic finite value. In the limit $\tcsl\to\infty$, which implies that the system never thermalizes with the collapse noise, one recovers the standard CSL model, as expected.
While there is currently no fundamental estimate of $\tcsl$, if we assume the noise to be of cosmological origin (a reasonable guess, taking into account its supposed universality), then $T_{\text{\tiny CSL}}\sim 1$\,K stands out as reasonable~\cite{Smirne:2015aa}.
The quest for the ruling-out, or the confirmation, of collapse models requires the identification of a credible and physically robust framework. It is thus important to test the predictions of the dCSL model, in particular in relation to the extent to which the bounds on the CSL parameters change if one assumes a finite temperature for the collapse noise. This analysis was initiated in the study of collapse-model effects on matter-wave interferometry~\cite{Toros:2016aa} and cold atoms~\cite{Bilardello:2016aa}. In this paper, we extend this investigation to optomechanical systems, which now play a privileged role as they set some of the strongest bounds on the collapse parameters.
While CSL-induced effects can be easily embedded as additional noise on the motion of a mechanical system~\cite{Bahrami:2014aa}, we show that for the dCSL model this is no longer the case and
a different strategy must be followed. Specifically, we will construct a {unitary} unravelling of the dCSL master equation, following the approach described in Ref.~\cite{Barchielli:2015aa},
which has the advantage of greatly simplifying all the necessary calculations, while providing a rigorous approach to the quantification of the effects of the collapse mechanism. Such a unitary unravelling is built around a bosonic quantum noise instead of a standard classical noise, as is customary for the CSL model.
The paper is organized as follows:
In Sec.~\ref{2}, after introducing the master equation for the dCSL dynamics, we build a unitary unravelling for it. In Sec.~\ref{3}, we consider a multiparticle system and
we derive the master equation for the center of mass under the assumption of rigid body and small displacements.
We then build a unitary unravelling for such a master equation and use it to derive the Langevin equations of motion for the center of mass of an optomechanical system, which are then solved in Sec.~\ref{4}. The results are applied to several experiments in Sec.~\ref{cwed} to set physically relevant bounds on the parameters characterizing the dCSL model.
\section{Unitary unravelling of the \lowercase{d}CSL model}\label{2}
The mass-proportional dCSL master equation for the density matrix $\hat \rho(t)$ of an $N$-particle system reads \cite{Smirne:2015aa}
\begin{equation}\label{dcslmaster}
\dfrac{\text{d} \hat \rho(t)}{\text{d} t}=-\frac{i}{\hbar}[\hat H, \hat \rho(t)]+\mathcal{L}[ \hat \rho(t)],
\end{equation}
where $\hat H$ is the Hamiltonian of the system and
\begin{align}\label{Ldcslmaster}
\mathcal{L}[ \hat \rho(t)]=\nu^2\!\!\!\int\!\!\text{d}\y\!\left(\hat L(\y)\hat\rho(t)\hat L^\dag(\y)-\tfrac12\{\hat L^\dag(\y)\hat L(\y),\hat\rho(t)\}\right)
\end{align}
with $\nu=\sqrt{\lcsl \rC^3(4\pi)^{3/2}}/m_0$, where $m_0$ is the mass of each particle and $\y$ is a spatial coordinate. The Lindblad operator $\hat L(\y)$ is defined as~\footnote{Note that, in order to include dissipation in the standard CSL model, different choices for
$\hat L(\y)$ are possible. The one in Eq.~\eqref{defLy} resembles dissipative collisional decoherence (e.g., see~\cite{Petruccione:2005aa}), with the same interpretation for the mechanism of energy exchange with the noise~\cite{Smirne:2015aa}.}
\begin{align}\label{defLy}
\hat L(\y)=\frac{m_0}{(2\pi\hbar)^3}\sum_{n=1}^N\int\text{d}\Q\,e^{\tfrac{i}{\hbar}\Q\cdot(\hat{\bf x}_n-\y)}\cdot\\
\cdot\exp\left(-\frac{\rC^2}{2\hbar^2}\left| (1+\chi)\Q+2\chi\hat{\bf p}_n\right|^2\right),
\end{align}
where $\hat \x_n$ and $\hat \p_n$ denote the position and the momentum operator of the $n$-th particle of the system, respectively, and the dimensionless parameter $\chi$ is related to the dCSL temperature $\tcsl$ by the relation
\begin{equation}\label{chie}
\chi=\frac{\hbar^2}{8m_0\kB T_{\text{\tiny CSL}}\rC^2},
\end{equation}
where $\kB$ is the Boltzmann constant.
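For orientation, a one-line numerical estimate of $\chi$ (a sketch assuming $m_0$ equal to the nucleon mass, together with the conventional $\rC=10^{-7}\,$m and the $\tcsl\sim1\,$K guess mentioned above):
\begin{verbatim}
import scipy.constants as const

hbar, kB = const.hbar, const.k
m0   = const.m_n      # nucleon (neutron) mass taken as the reference mass (assumption)
rC   = 1e-7           # m, conventional CSL correlation length
Tcsl = 1.0            # K, cosmologically motivated guess for the noise temperature

chi = hbar**2 / (8 * m0 * kB * Tcsl * rC**2)
print(f"chi = {chi:.2e}")   # about 6e-6, i.e. chi << 1 for these values
\end{verbatim}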
We wish to construct a unitary unravelling of Eq.~\eqref{dcslmaster}, i.e.~a unitary dynamics $\hat{\mathcal U}_t$ for the state vector $|\psi\rangle$ of the system such that $\hat \rho(t)=\mathbb E[\hat{\mathcal U}_t\ket{\psi}\bra{\psi}\hat{\mathcal U}^{\dagger}_t]$ is solution of the master equation. Here $\mathbb E[\ \cdot\ ]$ denotes the stochastic average over the noise.
For the CSL model, it is straightforward to show that a classical noise is perfectly suited, as the associated Lindblad operators, which can be obtained from Eq.~\eqref{defLy} by setting $\chi=0$, are self-adjoint. For the dCSL model, this is no longer possible in light of the lack of self-adjointness of $\hat L(\y)$.
Ref.~\cite{Hudson:1984aa} shows the way around: given a master equation in the Lindblad form [such as Eq.~\eqref{dcslmaster}], it is always possible to build a unitary unravelling by introducing quantum noise operators describing the effects of a bosonic bath.
We thus consider the following stochastic differential equation for the state vector
\begin{equation}\label{dU1}
\text{d} \ket{\psi_t}=\text{d} \mathcal U_t \ket\psi=\left\{-\frac{i}{\hbar}{\hat H}\text{d} t+\text{d}\hat C-\frac12\mathbb E\left[\text{d}\hat C^\dag\text{d}\hat C\right]\right\}\ket{\psi_t},
\end{equation}
where $\hat C$ is a quantum noise operator that is assumed to take the following form
\begin{equation}
\hat C=\nu\int\text{d}\y\,\left(\hat L(\y)\, \hat B^\dag(\y)-\hat L^\dag(\y)\, \hat B(\y)\right).
\end{equation}
Here $\hat B(\y)$ is a noise field operator, whose statistical properties are identified by the It\^o rules
\begin{align}\label{noiserule}
\mathbb E[\text{d}\hat B_t(\x)]=\mathbb E[\text{d}\hat B_t^{\dag}(\x)]&=\mathbb E[\text{d}\hat B_t^\dag(\y)\text{d}\hat B_t(\x)]=0,\\
\mathbb E[\text{d}\hat B_t(\y)\text{d}\hat B_t^\dag(\x)]&=\delta(\y-\x)\,\text{d} t.
\end{align}
Eq.~\eqref{dU1} leads to a unitary evolution of the system, and a simple application of the It\^o rules shows that it reproduces Eq.~\eqref{dcslmaster} for the density matrix. For a more exhaustive description of stochastic Schr\"odinger equations under the action of a quantum noise, we refer to Ref.~\cite{Barchielli:2015aa}.
\section{Master equation for the motion of the center of mass of a mechanical resonator}\label{3}
Let us denote with ${\x}_{n}^{(0)}$ ($n=1,\dots,N$) the classical equilibrium position of each particle. We call $\mu(\x)=m_0\sum_n\delta^{(3)}(\x-\x_{n}^{(0)})$ the mass density of the system. We assume that each particle jiggles very little around its equilibrium position, so that the position operator $\hat{\bm x}_n$ of the $n$-th particle can be written as~\cite{Nimmrichter:2014aa,Belli:2016aa}
\begin{equation}
\hat{\bm x}_n={\bm x}^{(0)}_{n}+\delta \hat{\bm x}_{n}+\hat {\bm x},
\end{equation}
where $\hat {\bm x}$ measures the fluctuations of the center of mass, while $\delta \hat{\bm x}_{n}$ measures the remaining fluctuations of the $n$-th particle, which are not already included in $\hat {\bm x}$. Under the rigid-body assumption, which we adopt from here on, the latter fluctuations are negligible. Consequently, we set $\delta \hat{\bm x}_{n}=0$, and after tracing Eq.~\eqref{Ldcslmaster} over the relative degrees of freedom, we obtain the dissipator of the master equation for the center-of-mass state $ \rhocm$
\begin{align}\label{masteralphabeta}
\mathcal{L}[ \rhocm(t)]=\frac{\nu^2}{(2\pi\hbar)^3}\int\text{d}\Q\,|\tilde\mu(\Q)|^2e^{-\tfrac{\rC^2(1+\chi)^2}{\hbar^2}\Q^2}\cdot\\
\cdot\left[\hat S(\Q)\rhocm(t)\hat S^\dag(\Q)-\tfrac12\acom{\hat S^\dag(\Q)\hat S(\Q)}{\rhocm(t)}\right],
\end{align}
where $\tilde \mu(\Q)=\int\text{d}\x\, \mu(\x)e^{i{\Q\cdot\x/\hbar}}$ and
\begin{equation}\label{defSalpha}
\hat S(\Q)=e^{\tfrac{i}{\hbar}\Q\cdot\hat{\x}}
\exp\left[-2\frac{\rC^2}{\hbar^2}\left( \chi(1+\chi)\frac{\Q\cdot\hat{\bf p}}{N}+\frac{\chi^2\hat{\bf p}^2}{N^2}\right)\right].
\end{equation}
As the motion of the center of mass of the rigid body is assumed to have a very small amplitude, a condition that we will shortly define quantitatively, we Taylor expand $\hat S(\Q)$. To this end, it is convenient to represent Eq.~\eqref{masteralphabeta} in the position basis. The first term in the second line becomes
\begin{align}\label{Gdouble}
&\bra{\x}\hat S(\Q)\rhocm(t)\hat S^\dag(\Q)\ket{\x'}=e^{-\tfrac i\hbar \Q\cdot(\x'-\x)}\cdot
\\
&\int\text{d}\p\int\text{d}\p'\,e^{-\tfrac i \hbar(\x\cdot\p-\x'\cdot\p')}\braket{\p|\rhocm|\p'}\cdot\\
&\cdot \exp\left[ -\frac{2\rC^2}{\hbar^2}\left(\frac{\chi(1+\chi)}{N}\Q\cdot(\p+\p' )+\frac{\chi^2}{N^2}( {\p^2}+{\p'^2}) \right) \right].
\end{align}
Due to the Gaussian factor in Eq.~\eqref{masteralphabeta}, the main contribution to the integral comes from values of $\Q$ whose modulus is smaller than, or comparable to, $\hbar/[\rC(1+\chi)]$. One can then Taylor expand Eq.~\eqref{Gdouble} under the conditions
\begin{equation}\label{cond2}
|{\x}'-{\x}|\ll \rC(1+\chi),\quad\text{and}\quad |{\p}|,|{\p}'|\ll \frac{N\hbar}{\rC \chi}.
\end{equation}
The same procedure can be applied to the term in Eq.~\eqref{masteralphabeta} containing the anticommutator.
Then
Eq.~\eqref{masteralphabeta} becomes
\begin{align}\label{linearmasteralphabeta}
&\mathcal{L}[ \rhocm(t)]=\frac{\nu^2}{(2\pi\hbar)^3}\int\text{d}\Q\,|\tilde\mu(\Q)|^2e^{-\tfrac{\rC^2(1+\chi)^2}{\hbar^2}\Q^2}\cdot\\
&\cdot\left(\tfrac12\com{\hat K(\Q)-\hat K^\dag(\Q)+\hat M(\Q)-\hat M^\dag(\Q)}{ \rhocm(t)}+\right.\\
&\left.+\hat K(\Q)\rhocm(t)\hat K^\dag(\Q)-\tfrac12\acom{\hat K^\dag(\Q)\hat K(\Q)}{\rhocm(t)}\right),
\end{align}
with
\begin{align}\label{defKM}
\hat{K}(\Q)&=-\frac{\kappa{}}{\hbar^{2}}\Q\cdot\hat{\p}+\frac{i}{\hbar}\Q\cdot\hat{\x},\\
\hat{M}{}(\Q)&=-\frac{\kappa{}^{2}}{2\hbar^{2}(1+\chi)^{2}\rC^{2}}\hat{\p}^{2}+\frac{\kappa^{2}}{2\hbar^{4}}(\Q\cdot\hat{\p})^{2}+\\
&-\frac{i}{\hbar^{3}}\kappa(\Q\cdot\hat{\x})(\Q\cdot\hat{\p})-\frac{1}{2\hbar^{2}}(\Q\cdot\hat{\x})^{2}
\end{align}
and
$\kappa={2\rC^{2}\chi(1+\chi)}/{N}$.
Now, considering the motion of a system only in one direction (say the $x$ direction), the master equation for the center of mass state becomes
\begin{align}\label{dqmuplmaster}
\dfrac{\text{d} \rhocm(t)}{\text{d} t}=&-\frac{i}{\hbar}[ \hat H, \rhocm(t)]-\frac{\eta}{2}\com{\hat x}{\com{\hat x}{ \rhocm(t)}}\\
&-\frac{\gamma_{\text{\tiny CSL}}^{2}}{8\eta\hbar^2}\com{\hat p}{\com{\hat p}{ \rhocm(t)}}-\frac{i\gamma_{\text{\tiny CSL}}}{2\hbar}\com{\hat x}{\acom{\hat p}{ \rhocm(t)}},
\end{align}
with
\begin{eqnarray}
\label{etaapp1}
\eta & = &\frac{\nu^2}{(2\pi\hbar)^3\hbar^2}\int\text{d}\Q\,|\tilde\mu(\Q)|^2e^{-\tfrac{\rC^2(1+\chi)^2}{\hbar^2}\Q^2}Q^2_x, \;\;\;\;\\
\gamma_{\text{\tiny CSL}} & = &\eta\frac{4\rC^{2}\chi(1+\chi)}{N},
\label{gammaapp}
\end{eqnarray}
where $Q_x$ denotes the $x$ component of $\Q$.
The second and third terms on the right-hand side of Eq.~\eqref{dqmuplmaster} describe decoherence in position and momentum, respectively, while the last one accounts for dissipation.
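To make $\eta$ concrete, the sketch below evaluates Eq.~\eqref{etaapp1} numerically for a homogeneous sphere. The spherical mass distribution, its radius and density, and the chosen dCSL parameters are assumptions made purely for illustration (they are not the systems analysed later in this paper); for such a sphere the definition of $\tilde\mu$ gives $\tilde\mu(\Q)=3m\left[\sin u-u\cos u\right]/u^3$ with $u=|\Q|R/\hbar$.
\begin{verbatim}
import numpy as np
from scipy.constants import hbar, k as kB
from scipy.integrate import quad

# --- illustrative assumptions (not values used elsewhere in the paper) ---
lam, rC, Tcsl = 1e-8, 1e-7, 1.0     # collapse rate [1/s], length [m], noise temperature [K]
m0 = 1.66e-27                       # nucleon reference mass [kg]
R, rho = 1e-7, 2200.0               # hypothetical homogeneous silica sphere: radius, density
# --------------------------------------------------------------------------
m   = rho * 4 * np.pi * R**3 / 3
chi = hbar**2 / (8 * m0 * kB * Tcsl * rC**2)
nu2 = lam * rC**3 * (4 * np.pi)**1.5 / m0**2

def form_factor(u):                 # mu_tilde(Q)/m for a homogeneous sphere, u = Q R / hbar
    return 3.0 * (np.sin(u) - u * np.cos(u)) / u**3

# eta = nu^2/((2 pi hbar)^3 hbar^2) * (4 pi / 3) *
#       Int dQ Q^4 |mu_tilde(Q)|^2 exp(-rC^2 (1+chi)^2 Q^2 / hbar^2);
# substituting s = Q rC (1+chi)/hbar leaves a dimensionless O(1) integral
c = R / (rC * (1 + chi))
I, _ = quad(lambda s: s**4 * form_factor(c * s)**2 * np.exp(-s**2), 1e-6, 12.0)
eta = nu2 * m**2 / ((2 * np.pi * hbar)**3 * hbar**2) \
      * (4 * np.pi / 3) * (hbar / (rC * (1 + chi)))**5 * I
print(f"chi = {chi:.2e}   eta = {eta:.3e} 1/(m^2 s)")
\end{verbatim}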
In order to write down the stochastic unravelling, it is convenient to rewrite Eq.~\eqref{dqmuplmaster} in the Lindblad form
\begin{align}\label{masteragain}
\dfrac{\text{d} \rhocm(t)}{\text{d} t}=&-\frac{i}{\hbar}\left[\hat H_{\text{\tiny eff}},\rhocm(t)\right]+\\
&+\eta\left(\hat{L}{\rhocm}(t)\hat{L}^{\dagger}-\frac{1}{2}\left\{ \hat{L}^{\dagger}\hat{L},{\rhocm}(t)\right\} \right),
\end{align}
where
$\hat{L} =\hat{x}+i \varkappa\hat{p}$, $\varkappa=\frac{\gamma_{\text{\tiny CSL}}}{2\eta\hbar}$ and $\hat H_{\text{\tiny eff}} = \hat H+\frac{\gamma_{\text{\tiny CSL}}}{4}\left\{ \hat{x},\hat{p}\right\}$.
As described in the previous section, the unitary unravelling is thus given by
\begin{equation}\label{Unravel1}
\text{d}|\psi_{t}\rangle=\left\{ -\frac{i}{\hbar}\hat H_{\text{\tiny eff}}\,\text{d}t+\text{d}\hat{C}-\frac{\eta}{2}\hat{L}^{\dagger}\hat{L}\,\text{d}t\right\} |\psi_{t}\rangle ,
\end{equation}
where
$\text{d}\hat{C}=\hat{L}\,\text{d}\hat{B}_{t}^{\dagger}-\hat{L}^{\dagger}\,\text{d}\hat{B}_{t}$,
and the only non-zero term of the It\^o rules for the quantum noise operator is
\begin{equation}\label{noise1}
\mathbb{E}\left[\text{d}\hat{B}_{t}\text{d}\hat{B}_{t}^{\dagger}\right]=\eta\,\text{d}t.
\end{equation}
Making use of the unravelling in Eq.~\eqref{Unravel1}, it is now rather straightforward to derive the Langevin equations for $\hat x$ and $\hat p$, moving to the Heisenberg picture.
In general, given the unitary state evolution $|\psi_t\rangle=\hat{\mathcal U}_t|\psi_0\rangle$, the stochastic variation of a generic operator $\hat O$ reads
\begin{align}\label{diffO00}
\text{d} \hat O(t)=\text{d}\hat {\mathcal U}_t^\dag\,\hat O\,\hat{\mathcal U}_t+\hat {\mathcal U}_t^\dag\,\hat O\,\text{d}\hat{\mathcal U}_t+\mathbb E[\text{d}\hat {\mathcal U}_t^\dag\,\hat O\,\text{d}\hat{\mathcal U}_t],
\end{align}
where the last term accounts for the It\^o contribution.
Starting from the unravelling describing the center of mass motion in Eq.~\eqref{Unravel1}, we find the time evolution for the generic operator ${\hat O}(t)$ by differentiating Eq.~\eqref{diffO00} with respect to time (from here on, we will omit the explicit time dependence of all the operators but the noises):
\begin{align}\label{diffO}
\frac{\text{d} \hat O}{\text{d} t}&=\frac{i}{\hbar}\com{\hat H_{\text{\tiny eff}}}{\hat O}+\eta\left(\hat L^\dag\hat O\hat L-\tfrac12\acom{\hat L^\dag\hat L}{\hat O}\right)+\\
&+\left(\hat b^\dag(t)\com{\hat O}{\hat L}+\hat b (t)\com{\hat L^\dag}{\hat O}\right),
\end{align}
where we introduced $\hat b(t) =\tfrac{\text{d}}{\text{d} t} \hat B_t$, whose only non-zero correlation reads
\begin{equation}\label{corr:bb}
\mathbb E[ \hat b(t)\hat b^\dag(s)]=\eta\,\delta(t-s).
\end{equation}
The corresponding Langevin equations for $\hat O=\hat x, \, \hat p$ are
\begin{align}\label{langevin1}
\frac{{\text{d} \hat{x}}}{\text{d} t}&= \frac i\hbar \com{\hat H}{\hat x}
-\varkappa \hbar \, \hat w_x(t),\\
\frac{{\text{d} \hat{p}}}{\text{d} t}&=\frac i\hbar \com{\hat H}{\hat p}-\gamma_{\text{\tiny CSL}} \hat{p}
- \hbar \hat w_p(t),
\end{align}
where we introduced $ \hat w_x(t)=\hat{b}^{\dagger }(t)+ \hat{b}(t)$ and $ \hat w_p(t)=i(\hat{b}^{\dagger }(t)- \hat{b}(t))$, whose correlations follow from Eq.~\eqref{corr:bb}
\begin{align}
&\mathbb E[ \hat w_x(t) \hat w_x(t')]=\mathbb E[ \hat w_p(t) \hat w_p(t')]=\eta\delta(t-t'), \\
&\mathbb E[ \hat w_x(t) \hat w_p(t')]=-\mathbb E[ \hat w_p(t) \hat w_x(t')]=i\eta\delta(t-t').
\end{align}
Compared to the classical Langevin equation, an extra noise appears in the equation for the position operator. This is in agreement with the results in Ref.~\cite{Barchielli:2015aa}, where it is also discussed how the presence of this noise, which in this context appears naturally, is required for having a well defined momentum operator.
\section{Application to optomechanics}\label{4}
Let us consider a one-dimensional mechanical resonator of mass $m$ in an externally driven cavity. Assuming the relevant coordinate to be along the $x$ direction, the resonator and cavity field are coupled according to the radiation pressure Hamiltonian $\hat H_{\text{rp}}=\hbar g\hat a^\dag\hat a\hat x$, with $\hat a$ and $\hat a^\dag$ the annihilation and creation operators of the cavity field, $\hat x$ now to be interpreted as the position operator for the center of mass of the resonator, and $g$ the optomechanical coupling rate. The radiation pressure term enters the total Hamiltonian of the system, which also comprises the free dynamics of the field and resonator, characterized by the frequencies $\omegaC$ and $\omega_0$, respectively. The motion of the system is thus described by the Langevin equations~\cite{Mancini:1994aa}
\begin{subequations}\label{free.langevin}
\begin{align}
\frac{\text{d}{\hat x}}{\text{d} t}&={\hat p/m},\\
\frac{\text{d}{\hat p}}{\text{d} t}&=-m\omega_0^2\hat x+\hbar g\hat a^\dag\hat a-\gamma_m \hat p+\hat \xi,\label{free.langevin.a}\\
\frac{\text{d}{\hat a}}{\text{d} t}&=-i\delta_0 \hat a+i g\hat a\hat x-\kappa \hat a+\sqrt{2\kappa}\ain.\label{free.langevin.b}
\end{align}
\end{subequations}
The terms $-\gamma_m \hat p$ and $\hat \xi$ in Eq.~\eqref{free.langevin.a} describe the dissipative (at rate $\gamma_m$) and stochastic action of the phononic environment (at temperature $T$) affecting the mechanical resonator~\cite{Caldeira:1983aa,Hu:1992aa,Ford:2001aa,Diosi:2014aa,Ferialdi:2016aa,Carlesso:2016aa}. Here, $\hat \xi$ is an environment noise operator having zero mean and correlation function
\begin{equation}\label{corr_env_noise}
\mathbb E[{\hat\xi_t\hat \xi_s}]
=\hbar m\gamma_m\int\frac{\text{d}\omega}{2\pi}e^{-i\omega(t-s)}\omega\left[1+\coth\left(\tfrac{\hbar\omega}{2\kB T}\right)\right],
\end{equation}
with $\kB$ the Boltzmann constant. In Eq.~\eqref{free.langevin.b}, $\delta_0=\omegaC-\omega_\text{\tiny L}$ is the detuning between the cavity frequency $\omegaC$ and the frequency of the external driving field $\omega_\text{\tiny L}$. Moreover, $\kappa$ is the cavity dissipation rate and $\ain=\alpha_\text{\tiny in}+\delta\ain$ describes the driving field, characterized by the steady average amplitude $\alpha_\text{\tiny in}=\sqrt{P_\text{\tiny in}/(\hbar\omegaC)}$, where $P_\text{\tiny in}$ is the input power, and a fluctuating part that is quantum mechanically accounted for by the fluctuation operator $\delta\ain$ such that $\braket{\delta\ain(t)}=0$ and $\braket{\delta\ain(t)\delta\ain^\dag(s)}=\delta(t-s)$.
The steady-state density noise spectrum of the mechanical motion provides an informative inference tool for the long-time properties of the resonator~\cite{Gardiner:2004aa,Paternostro:2006aa}. It
is defined as
\begin{align}\label{sd-def-the}
\text{DNS}(\omega)&=\frac12\int_{-\infty}^{+\infty} \text{d} \tau \,e^{-i \omega \tau}{\mathbb E} [\braket{\{\delta\hat x(t), \delta \hat x(t+\tau)\}}],\\
&=\frac1{4\pi}\int_{-\infty}^{+\infty}\text{d} \omega'\ \mathbb E[\braket{\{\delta \tilde x(\omega),\delta \tilde x(\omega')\}}],
\end{align}
where
$\delta \hat x(t)=\hat x(t)-\hat x_\text{\tiny st}$ is the fluctuation around the steady-state position $\hat x_\text{\tiny st}=\lim_{t\rightarrow\infty}\hat x(t)$, and $\delta\tilde x(\omega)$ denotes the Fourier transform of $\delta \hat x(t)$.
Our goal now is to explicitly compute $\text{DNS}(\omega)$ in Eq.~\eqref{sd-def-the}, under the assumption of the dCSL dynamics for the mechanical resonator.
To this end, we modify the set of optomechanical Langevin equations according to the prescriptions in Eq.~\eqref{langevin1}. We thus get
\begin{subequations}\label{langevin.all}
\begin{align}
\frac{\text{d}{\hat x}}{\text{d} t}&=\frac{\hat p}{m}-\varkappa \hbar \, \hat w_x(t),\\
\frac{\text{d}{\hat p}}{\text{d} t}&=-m\omega_0^2\hat x+\hbar g\hat a^\dag\hat a-\gamma \hat p+\hat \xi
- \hbar \hat w_p(t),\\
\frac{\text{d}{\hat a}}{\text{d} t}&=-i\delta_0 \hat a+i g\hat a\hat x-\kappa \hat a+\sqrt{2\kappa}\ain,
\end{align}
\end{subequations}
where $\gamma=\gamma_m+\gamma_{\text{\tiny CSL}}$ is the total damping rate.
We move to the frequency domain, where the equations above become algebraic, and find
\begin{equation}\label{deltaxlaser}
\delta\tilde x(\omega)=\frac{\tilde \xi(\omega)+\tilde{\mathcal N}_\text{\tiny C}(\omega)+\tilde{\mathcal N}_\text{\tiny CSL}(\omega)}{d(\omega)},
\end{equation}
where $d(\omega)=m[(\omega_\text{\tiny eff}^2(\omega)-\omega^2)-i\gamma_\text{\tiny eff}(\omega)\omega]$ depends on the effective resonance frequency $\omega_\text{\tiny eff}(\omega)$ and damping $\gamma_\text{\tiny eff}(\omega)$, whose full expressions are given in Appendix~\ref{app.noiselaser}. Three independent sources of noise contribute to $\delta\tilde x(\omega)$: $\tilde \xi(\omega)$, which is the Fourier transform of $\hat \xi$, accounts for the phononic noise inducing Brownian motion of the mechanical system;
$\tilde{\mathcal N}_\text{\tiny C}(\omega)$ is the source of noise due to the open nature of the cavity and induced by the driving field, and its explicit expression is given in Appendix \ref{app.noiselaser}; finally, $\tilde{\mathcal N}_\text{\tiny CSL}(\omega)$ refers to the dCSL contribution to the noise, and is the key of our analysis. It reads
\begin{equation}\label{defNcsl}
\tilde{\mathcal N}_\text{\tiny CSL}(\omega)=\varkappa\hbar m(i\omega-\gamma)\tilde w_x(\omega)-\hbar \tilde w_p(\omega),
\end{equation}
where $\tilde w_x(\omega)$ and $\tilde w_p(\omega)$ are, respectively, the Fourier transform of $\hat w_x(t)$ and $\hat w_p(t)$.
It is worth remarking that the dCSL noise enters $\text{DNS}(\omega)$ not only through $\tilde{\mathcal N}_\text{\tiny CSL}(\omega)$, but also in light of the presence of $\gamma_\text{\tiny CSL}$ in $d(\omega)$. The density noise spectrum of the mechanical system then reads
\begin{widetext}
\begin{equation}\label{dns1}
\text{DNS}(\omega)=\frac{1}{|d(\omega)|^2}\left[\hbar \gamma_mm\omega\coth\left(\tfrac{\hbar\omega}{2\kB T}\right)+\frac{2\hbar^2g^2\kappa^2|\alpha|^2(\delta^2+\kappa^2+\omega^2)}{\left[\kappa^2+(\delta-\omega)^2\right]\left[\kappa^2+(\delta+\omega)^2\right]} +\hbar^2\eta\left(1+\varkappa^2m^2(\gamma^2+\omega^2)\right)\right],
\end{equation}
\end{widetext}
where $\alpha=\braket{\hat a}=\sqrt{2\kappa}\alpha_\text{\tiny in}/(\kappa-i\delta)$ and $\delta=\delta_0-g\braket{\hat x}$.
Eq.~\eqref{dns1} can be used to test the dCSL model in optomechanical experiments, to compare the corresponding predictions with those computed for the CSL model~\cite{Bahrami:2014aa,Nimmrichter:2014aa,Vinante:2016aa,Belli:2016aa,Carlesso:2016ac,Carlesso:2018ab}.
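As a usage sketch, Eq.~\eqref{dns1} can be evaluated directly. Since the expressions for $\omega_\text{\tiny eff}(\omega)$ and $\gamma_\text{\tiny eff}(\omega)$ are relegated to Appendix~\ref{app.noiselaser}, the snippet below simply assumes negligible optomechanical back-action, $\omega_\text{\tiny eff}\simeq\omega_0$ and $\gamma_\text{\tiny eff}\simeq\gamma$; all numerical values are illustrative placeholders rather than the parameters of the experiments discussed below.
\begin{verbatim}
import numpy as np
from scipy.constants import hbar, k as kB

# --- illustrative placeholder parameters ---
m, omega0, gamma_m, T = 1e-11, 2*np.pi*1e5, 2*np.pi*1e-2, 1e-3   # kg, rad/s, rad/s, K
g, kappa, alpha, delta = 2*np.pi*1e2, 2*np.pi*1e6, 1e4, 0.0      # optomechanical parameters
eta, gamma_csl = 1e20, 1e-9                                      # assumed dCSL diffusion/damping
# -------------------------------------------
gamma  = gamma_m + gamma_csl
varkap = gamma_csl / (2 * eta * hbar)

def dns(w):
    """Density noise spectrum, with omega_eff ~ omega0 and gamma_eff ~ gamma assumed."""
    d2 = m**2 * ((omega0**2 - w**2)**2 + (gamma * w)**2)         # |d(omega)|^2
    thermal = hbar * gamma_m * m * w / np.tanh(hbar * w / (2 * kB * T))
    cavity  = 2 * hbar**2 * g**2 * kappa**2 * abs(alpha)**2 * (delta**2 + kappa**2 + w**2) \
              / ((kappa**2 + (delta - w)**2) * (kappa**2 + (delta + w)**2))
    dcsl    = hbar**2 * eta * (1 + varkap**2 * m**2 * (gamma**2 + w**2))
    return (thermal + cavity + dcsl) / d2

for w in np.linspace(0.9 * omega0, 1.1 * omega0, 5):
    print(f"omega/2pi = {w/(2*np.pi):10.1f} Hz   DNS = {dns(w):.3e} m^2 s")
\end{verbatim}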
\section{Characterization of the \lowercase{d}CSL model: Comparison with experimental data}\label{cwed}
We can now apply the theoretical framework derived in the previous Sections to set experimental upper bounds on the dCSL parameters.
We focus on nanomechanical cantilevers~\cite{Vinante:2016aa,Vinante:2016ab} and gravitational wave detectors~\cite{Carlesso:2016ac,Helou:2016aa}. These are the optomechanical experiments whose data currently set the strongest bounds on $\lambda$ and $\rC$ for the standard CSL model. We first perform the theoretical analysis of the setups and then compare with the experimental data.
\subsection{Nanomechanical cantilever}\label{nc}
In~\cite{Usenko:2011aa,Vinante:2016aa, Vinante:2016ab} the position variance of a cantilever, which is proportional to its temperature, is measured for different temperatures of the surrounding environment. For our analysis, we consider the experiment reported in \cite{Vinante:2016ab}.
The system consists of a silicon cantilever, of size $450\times57\times2.5\,\mu$m, stiffness $k_\text{\tiny stiff}=(0.40\pm0.02)\,$N/m and density 2330\,kg/m$^3$, to which a ferromagnetic micro-sphere (radius $15.5\,\mu$m and density 7430\,kg/m$^3$) is attached.
The latter has two functions: it enhances the effect of the CSL noise on the system (its density being much larger than that of silicon), and it allows the motion to be monitored with a SQUID instead of a laser, as considered above~\footnote{As with the laser, the SQUID also disturbs the system. This can be directly taken into account in the data extrapolation; see~\cite{Vinante:2016aa,Vinante:2016ab} for a detailed discussion.}.
Then, without the laser contribution, the $\text{DNS}$ becomes
\begin{equation}\label{eq.dns.cant}
\text{DNS}=\frac{1}{m^2}\frac{2m\gamma_m\kB T+\hbar^2\eta\left[1+\varkappa^2m^2\left(\gamma^2+\omega^2\right) \right]}{(\omega_0^2-\omega^2)^2+\gamma^2\omega^2},
\end{equation}
where $\gamma=\gamma_m+\gamma_\text{\tiny CSL}$, $\omega_0=\sqrt{k_\text{\tiny stiff}/m}$ and we have considered the high temperature limit for the environmental noise. For further details we refer to \cite{Vinante:2016ab}.
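A minimal numerical sketch of Eq.~\eqref{eq.dns.cant} (not part of the original analysis; all parameter values are to be supplied by the user) reads:
\begin{verbatim}
hbar, kB = 1.054571817e-34, 1.380649e-23  # SI units

def dns_cantilever(omega, m, omega0, gamma_m,
                   gamma_csl, T, eta, varkappa):
    # Eq. (eq.dns.cant), high-temperature limit
    gamma = gamma_m + gamma_csl
    num = (2 * m * gamma_m * kB * T
           + hbar**2 * eta * (1 + varkappa**2 * m**2
                              * (gamma**2 + omega**2)))
    den = (omega0**2 - omega**2)**2 + gamma**2 * omega**2
    return num / (m**2 * den)
\end{verbatim}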
By integrating the $\text{DNS}$ around the resonant frequency we obtain the temperature $T_\text{\tiny S}$ of the system
\begin{equation}
T_\text{\tiny S}=\frac{m\omega_0^2}{\kB}\int\text{d}\omega\,\text{DNS}=T+\delta T_\text{\tiny dCSL},
\end{equation}
where $T$ is the environmental temperature and $\delta T_\text{\tiny dCSL}$ the dCSL contribution. The latter is given by
\begin{equation}\label{deltaTdcsl}
\delta T_\text{\tiny dCSL}=\frac{\hbar^2\eta\left[1+\varkappa^2m^2\left(\gamma^2+\omega_0^2\right) \right]}{2\kB m\gamma}-\frac{\gamma_\text{\tiny CSL}}{\gamma}T.
\end{equation}
The first term increases the temperature of the system (as in the standard CSL case), while the second term cools it; this cooling is a fingerprint of the dCSL model. As an explicit example, in an experiment where the environmental temperature is much higher than $T_\text{\tiny CSL}$, the system is cooled by the dCSL noise, contrary to the CSL case, where the system can only be heated~\cite{Bilardello:2016aa}.
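For illustration, Eq.~\eqref{deltaTdcsl} can be evaluated with the short Python sketch below (not part of the original analysis), which makes the competition between the heating and cooling terms explicit; all parameters are user-supplied placeholders.
\begin{verbatim}
hbar, kB = 1.054571817e-34, 1.380649e-23  # SI units

def delta_T_dcsl(m, omega0, gamma_m, gamma_csl,
                 T, eta, varkappa):
    # Eq. (deltaTdcsl): heating term minus cooling term
    gamma = gamma_m + gamma_csl
    heating = (hbar**2 * eta
               * (1 + varkappa**2 * m**2
                  * (gamma**2 + omega0**2))
               / (2 * kB * m * gamma))
    cooling = (gamma_csl / gamma) * T
    return heating - cooling
\end{verbatim}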
\begin{figure*}
\caption{(Color online) Experimental bounds on the dCSL parameters $\lambda$ and $\rC$ for two values of the noise temperature $T_\text{\tiny CSL}$ (see main text).}
\label{fig1}
\end{figure*}
\subsection{Gravitational wave detectors}
Following the analysis performed in \cite{Carlesso:2016ac}, we can easily derive the dCSL experimental bounds from gravitational wave detectors. The three experiments considered here are AURIGA \cite{Vinante:2006aa}, Advanced LIGO \cite{Abbott:2016ab} and LISA Pathfinder \cite{Armano:2018aa}.
AURIGA consists of an aluminium cylinder of radius $0.3\,$m, length $3\,$m and mass $2300\,$kg, cooled to 4.2\,K, whose resonant deformation at frequency $\omega_0/2\pi\sim900\,$Hz is monitored by a SQUID-based readout \cite{Baggio:2005aa}. We model the system as two half-length cylinders oscillating in counterphase, as done in \cite{Carlesso:2016ac}. The minimum value of the force noise that could be attributed to dCSL \cite{Carlesso:2016ac} is $\mathcal S_\text{\tiny F}=12\,$pN/Hz$^{1/2}$.
LIGO is a Michelson interferometer whose arms are configured as Fabry-Perot cavities, each with two cylindrical silica mirrors (density 2200\,kg/m$^3$, radius 17\,cm and length 20\,cm) separated by a distance of 4\,km. We estimate that the minimum effective noise $\mathcal S_\text{\tiny F}=95\,$fN/Hz$^{1/2}$ is reached at $\omega/2\pi=30$--$35$\,Hz \cite{Abbott:2016ab,Abbott:2016aa}.
LISA Pathfinder consists of a pair of cubic masses (each of mass 1.928\,kg and side length 4.6\,cm) separated by 37.6\,cm. The two masses are in free fall inside a satellite that follows them, orbiting around the first Lagrangian point of the Sun--Earth system. The minimum force noise is $\mathcal S_\text{\tiny F}=1.77\,$fN/Hz$^{1/2}$, reached just above the mHz regime \cite{Armano:2018aa}.
In contrast to the cantilever, where one measures the center-of-mass motion, here the relevant quantity is the relative distance $\R_{12}$ between the two masses (in the case of AURIGA this corresponds to the elongation of the single mass).
The equations of motion must be changed accordingly. We derive them explicitly in Appendix \ref{app:composite} and obtain for the corresponding $\text{DNS}$
\begin{equation}\label{dns2A}
\text{DNS}=\frac{\hbar^2(\eta-\sigma)}{m^2}\,\frac{1+m^2 \varkappa^2(\gamma^2+\omega^2)}{(\tilde\omega_0^2-\omega^2)^2+\tilde\gamma^2\omega^2},
\end{equation}
where $\tilde \omega_0^2=\omega_0^2-2\gamma \varkappa \sigma\hbar$, $\tilde\gamma=\gamma -2 \varkappa\sigma \hbar$ and the explicit form of $\sigma$ is given in Eq.~\eqref{sigmaA}. Since we are primarily interested in estimating the effect of the dCSL noise, we neglect all other noise sources, at the price of setting more conservative bounds.
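For completeness, a minimal Python sketch (not part of the original analysis) implementing Eq.~\eqref{dns2A} with user-supplied parameters reads:
\begin{verbatim}
hbar = 1.054571817e-34  # J s

def dns_relative(omega, m, omega0, gamma,
                 eta, sigma, varkappa):
    # Eq. (dns2A) for the relative coordinate
    w0t2 = omega0**2 - 2 * gamma * varkappa * sigma * hbar
    gt = gamma - 2 * varkappa * sigma * hbar
    num = (hbar**2 * (eta - sigma) / m**2) \
        * (1 + m**2 * varkappa**2 * (gamma**2 + omega**2))
    den = (w0t2 - omega**2)**2 + gt**2 * omega**2
    return num / den
\end{verbatim}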
\subsection{Bounds on dCSL parameters}\label{subsec.exp}
In Fig.~\ref{fig1} we report the bounds on the parameters $\lcsl$ and $\rC$ by choosing two different values of $T_{\text{\tiny CSL}}$. The value of $T_{\text{\tiny CSL}}=1\,$K is a natural choice if one assumes that the CSL noise has a cosmological origin. Compared to the results presented in \cite{Bilardello:2016aa,Carlesso:2016ac,Carlesso:2018ab,Vinante:2016ab}, which refer to the CSL model ($T_{\text{\tiny CSL}}=+\infty$), the first panel shows no appreciable difference. Hence, for any $T_{\text{\tiny CSL}} >1\,$K bounds on the dCSL model are practically equivalent to those on the standard CSL model.
The situation changes if we take lower values of the noise temperature. Specifically, we consider as an example the value $T_\text{\tiny CSL}=10^{-7}\,$K.
As Fig.~\ref{fig1} shows, the bounds from gravitational wave detectors are stable, still coinciding with those obtained in \cite{Carlesso:2016ac,Helou:2016aa,Carlesso:2018ab} with reference to the CSL model. The reason is that the diffusion constant $\eta$ defined in Eq.~\eqref{etaapp1} is the only relevant quantity here, and it differs from its CSL counterpart only if $1+\chi$ cannot be approximated by unity. This happens for noise temperatures such that [cf.~Eq.~\eqref{chie}]
\begin{equation}
T_{\text{\tiny CSL}}\rC^2\ll\frac{\hbar^2}{8m_0\kB }\sim10^{-18}\,\text{m$^2$K}.
\end{equation}
Thus, changes are expected for $T_\text{\tiny CSL}\leq10^{8}\,$K when $\rC\ll10^{-13}\,$m and for $T_\text{\tiny CSL}\leq10^{-7}\,$K when $\rC\ll10^{-5}\,$m. This can be seen in the bound coming from LISA Pathfinder, which becomes slightly weaker for $\rC<10^{-6}\,$m at $T_\text{\tiny CSL}=10^{-7}\,$K as shown in the right panel of Fig.~\ref{fig1}.
A strong effect of the dissipative extension of the model is shown in the bounds from the nanomechanical cantilever for $T_\text{\tiny CSL}=10^{-7}\,$K. Such a change is driven not only by changes in $\eta$ as discussed before, but also by the change of the dissipation rate $\gamma=\gamma_m+\gamma_\text{\tiny CSL}$, with $\gamma_\text{\tiny CSL}=\eta\gamma'$, where from Eq.~\eqref{gammaapp} we have
\begin{equation}
\gamma'= \frac{4\rC^{2}m_0\chi(1+\chi)}{m}.
\end{equation}
Moreover, for this experiment there is an additional dCSL contribution which, in contrast to the case of the gravitational wave detectors considered above, is not negligible. It comes from the term $\varkappa^2m^2(\gamma^2+\omega^2)$ in the $\text{DNS}$ defined in Eq.~\eqref{dns1}, where
\begin{equation}\label{varkappam}
\varkappa m=\frac{\hbar}{4\kB T_{\text{\tiny CSL}}}\left(1+\frac{\hbar^2}{8m_0\kB T_{\text{\tiny CSL}}\rC^2}\right)
\end{equation}
is independent of the system parameters. The term $\varkappa^2m^2(\gamma^2+\omega^2)$ becomes relevant when it is significantly larger than 1. For the cantilever under consideration, the transition occurs at $T_\text{\tiny CSL}\lesssim 10^{-5}\,$K for $\rC=10^{-8}\,$m and at $T_\text{\tiny CSL}\lesssim 10^{-7}\,$K for $\rC=10^{-4}\,$m.
Such a term affects the system more than the modification of the diffusion constant, and consequently the corresponding bound becomes stronger for small $\rC$.
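As a rough order-of-magnitude cross-check (not part of the original analysis), one can evaluate $\varkappa m$ from Eq.~\eqref{varkappam} and the combination $\varkappa^2m^2\omega_0^2$ for assumed values of $T_\text{\tiny CSL}$, $\rC$ and of the mechanical resonance frequency; the numerical inputs below are placeholders.
\begin{verbatim}
import math

hbar, kB = 1.054571817e-34, 1.380649e-23  # SI units
m0 = 1.67e-27  # reference (nucleon) mass, kg

def varkappa_m(T_csl, rC):
    # Eq. (varkappam): the product varkappa*m
    chi = hbar**2 / (8 * m0 * kB * T_csl * rC**2)
    return hbar / (4 * kB * T_csl) * (1 + chi)

# placeholder inputs: T_CSL = 1e-5 K, rC = 1e-8 m,
# assumed mechanical resonance omega0/2pi ~ 1e4 Hz
km = varkappa_m(1e-5, 1e-8)
omega0 = 2 * math.pi * 1e4
# the term is relevant when this quantity exceeds ~1
print(km**2 * omega0**2)
\end{verbatim}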
\section{Conclusions}
We have provided a description of the dCSL model in terms of Langevin equations resulting from a unitary unravelling of the collapse master equation. Our linear and unitary unravelling is able to mimic the non-linear and stochastic action of the dCSL model, including its dissipative nature. The approach that we have put forward is fully suited for optomechanical setups such as cantilevers and gravitational wave detectors, which were discussed in Sec.~\ref{cwed}.
We have identified the bounds on the dCSL parameters $\lcsl$ and $\rC$ for two values of the noise temperature $T_\text{\tiny CSL}$. For $T_{\text{\tiny CSL}}>1\,$K the dissipative effects are negligible and the bounds are {\it de facto} the same as those obtained with the standard CSL model \cite{Carlesso:2016ac,Carlesso:2018ab}. For $T_{\text{\tiny CSL}}=10^{-7}\,$K, the cantilever bound in the region $\rC\ll 10^{-5}\,$m is modified. Conversely, the bounds given by gravitational wave detectors are almost completely unaffected by such a dissipative extension for the considered range of temperatures. Lower values of the temperature seem unrealistic and were therefore not considered.
Our approach can also be suitably applied to other non-interferometric tests of collapse models, such as spontaneous photon emission from Germanium \cite{Fu:1997aa,Curceanu:2016aa,Piscicchia:2017aa} and phonon excitations in crystals \cite{Adler:2018aa,Bahrami:2018aa}. However, in these cases the conditions in Eq.~\eqref{cond2} are not fulfilled, so the approximations used throughout the text cannot be applied and one has to proceed in a different way. One should note that these bounds, coming from photon emission and phonon excitations, depend significantly on the spectrum of the noise and disappear for a frequency cut-off in the range $10^{11}-10^{15}\,$Hz \cite{Adler:2007ad,Adler:2013aa,Donadi:2014aa,Bassi:2014aa,Donadi:2015aa,Carlesso:2018aa}. Therefore, analyzing how these bounds are affected by dissipative effects does not seem particularly relevant.
Our investigation is well placed within the current research effort towards the sharpening of collapse models in light of possible (and indeed foreseeable) experimental assessment of their effects on massive systems. We believe that {\it curing} a physically significant drawback of CSL-like mechanisms such as their inherent energy non-conserving nature provides more robust theoretical models to be contrasted to the evidence of experimental data gathered in any of the settings that we have analyzed here, and thus a more compelling case for the exploration of possible alternative models for the quantum-to-classical transition.
\section*{Acknowledgments}
We are grateful to Giulio Gasbarri and Luca Ferialdi for many useful and valuable comments on the paper. JN and AB acknowledge financial support from the University of Trieste (Grant FRA 2016). MC, MP and AB acknowledge financial support from the EU Collaborative Project TEQ (Grant Agreement 766900).
AB acknowledges financial support from the Istituto Nazionale di Fisica Nucleare (INFN). SD, MP and AB acknowledge the COST Action CA15220 QTSpace. SD acknowledges financial support from Fondazione Angelo Della Riccia and The Foundation BLANCEFLOR Boncompagni Ludovisi, n\'ee Bildt.
\begin{thebibliography}{56}
\makeatletter
\providecommand \@ifxundefined [1]{
\@ifx{#1\undefined}
}
\providecommand \@ifnum [1]{
\ifnum #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \@ifx [1]{
\ifx #1\expandafter \@firstoftwo
\else \expandafter \@secondoftwo
\fi
}
\providecommand \natexlab [1]{#1}
\providecommand \enquote [1]{``#1''}
\providecommand \bibnamefont [1]{#1}
\providecommand \bibfnamefont [1]{#1}
\providecommand \citenamefont [1]{#1}
\providecommand \href@noop [0]{\@secondoftwo}
\providecommand \href [0]{\begingroup \@sanitize@url \@href}
\providecommand \@href[1]{\@@startlink{#1}\@@href}
\providecommand \@@href[1]{\endgroup#1\@@endlink}
\providecommand \@sanitize@url [0]{\catcode `\\12\catcode `\$12\catcode
`\&12\catcode `\#12\catcode `\^12\catcode `\_12\catcode `\%12\relax}
\providecommand \@@startlink[1]{}
\providecommand \@@endlink[0]{}
\providecommand \url [0]{\begingroup\@sanitize@url \@url }
\providecommand \@url [1]{\endgroup\@href {#1}{\urlprefix }}
\providecommand \urlprefix [0]{URL }
\providecommand \Eprint [0]{\href }
\providecommand \doibase [0]{http://dx.doi.org/}
\providecommand \selectlanguage [0]{\@gobble}
\providecommand \bibinfo [0]{\@secondoftwo}
\providecommand \bibfield [0]{\@secondoftwo}
\providecommand \translation [1]{[#1]}
\providecommand \BibitemOpen [0]{}
\providecommand \bibitemStop [0]{}
\providecommand \bibitemNoStop [0]{.\EOS\space}
\providecommand \EOS [0]{\spacefactor3000\relax}
\providecommand \BibitemShut [1]{\csname bibitem#1\endcsname}
\let\auto@bib@innerbib\@empty
\bibitem [{\citenamefont {Ghirardi}\ \emph {et~al.}(1986)\citenamefont
{Ghirardi}, \citenamefont {Rimini},\ and\ \citenamefont
{Weber}}]{Ghirardi:1986aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~C.}\ \bibnamefont
{Ghirardi}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Rimini}}, \
and\ \bibinfo {author} {\bibfnamefont {T.}~\bibnamefont {Weber}},\ }\href
{http://link.aps.org/doi/10.1103/PhysRevD.34.470} {\bibfield {journal}
{\bibinfo {journal} {Phys.~Rev.~D}\ }\textbf {\bibinfo {volume} {34}},\
\bibinfo {pages} {470} (\bibinfo {year} {1986})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ghirardi}\ \emph {et~al.}(1990)\citenamefont
{Ghirardi}, \citenamefont {Pearle},\ and\ \citenamefont
{Rimini}}]{Ghirardi:1990aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~C.}\ \bibnamefont
{Ghirardi}}, \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Pearle}}, \
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Rimini}},\ }\href
{http://link.aps.org/doi/10.1103/PhysRevA.42.78} {\bibfield {journal}
{\bibinfo {journal} {Phys.~Rev.~A}\ }\textbf {\bibinfo {volume} {42}},\
\bibinfo {pages} {78} (\bibinfo {year} {1990})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bassi}\ and\ \citenamefont
{Ghirardi}(2003{\natexlab{a}})}]{Bassi:2003ab}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bassi}}\ and\ \bibinfo {author} {\bibfnamefont {G.~C.}\ \bibnamefont
{Ghirardi}},\ }\href
{http://www.sciencedirect.com/science/article/pii/S0370157303001030}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Rep.}\ }\textbf {\bibinfo
{volume} {379}},\ \bibinfo {pages} {257 } (\bibinfo {year}
{2003}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bassi}\ \emph {et~al.}(2013)\citenamefont {Bassi}
\emph {et~al.}}]{Bassi:2013aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bassi}} \emph {et~al.},\ }\href
{http://link.aps.org/doi/10.1103/RevModPhys.85.471} {\bibfield {journal}
{\bibinfo {journal} {Rev.~Mod.~Phys.}\ }\textbf {\bibinfo {volume} {85}},\
\bibinfo {pages} {471} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Adler}\ and\ \citenamefont
{Bassi}(2009)}]{Adler:2009aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont
{Adler}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bassi}},\
}\href {http://www.sciencemag.org/content/325/5938/275.short} {\bibfield
{journal} {\bibinfo {journal} {Science}\ }\textbf {\bibinfo {volume}
{325}},\ \bibinfo {pages} {275} (\bibinfo {year} {2009})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Adler}(2007{\natexlab{a}})}]{Adler:2007ab}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont
{Adler}},\ }\href {http://stacks.iop.org/1751-8121/40/i=12/a=S03} {\bibfield
{journal} {\bibinfo {journal} {J. Phys. A}\ }\textbf {\bibinfo {volume}
{40}},\ \bibinfo {pages} {2935} (\bibinfo {year}
{2007}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Adler}(2007{\natexlab{b}})}]{Adler:2007ac}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont
{Adler}},\ }\href {http://stacks.iop.org/1751-8121/40/i=44/a=C01} {\bibfield
{journal} {\bibinfo {journal} {J. Phys. A}\ }\textbf {\bibinfo {volume}
{40}},\ \bibinfo {pages} {13501} (\bibinfo {year}
{2007}{\natexlab{b}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Arndt}\ \emph {et~al.}(1999)\citenamefont {Arndt}
\emph {et~al.}}]{Arndt:1999aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Arndt}} \emph {et~al.},\ }\href {http://dx.doi.org/10.1038/44348} {\bibfield
{journal} {\bibinfo {journal} {Nature}\ }\textbf {\bibinfo {volume}
{401}},\ \bibinfo {pages} {680} (\bibinfo {year} {1999})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Eibenberger}\ \emph {et~al.}(2013)\citenamefont
{Eibenberger} \emph {et~al.}}]{Eibenberger:2013aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Eibenberger}} \emph {et~al.},\ }\href {http://dx.doi.org/10.1039/C3CP51500A}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Chem.~Chem.~Phys.}\
}\textbf {\bibinfo {volume} {15}},\ \bibinfo {pages} {14696} (\bibinfo {year}
{2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Toro{\v s}}\ \emph {et~al.}(2017)\citenamefont
{Toro{\v s}}, \citenamefont {Gasbarri},\ and\ \citenamefont
{Bassi}}]{Toros:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Toro{\v s}}}, \bibinfo {author} {\bibfnamefont {G.}~\bibnamefont
{Gasbarri}}, \ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bassi}},\ }\href
{http://www.sciencedirect.com/science/article/pii/S0375960117309465}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Lett.~A}\ }\textbf
{\bibinfo {volume} {381}},\ \bibinfo {pages} {3921 } (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Toro{\v s}}\ and\ \citenamefont
{Bassi}(2018)}]{Toros:2016ab}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Toro{\v s}}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bassi}},\ }\href {http://stacks.iop.org/1751-8121/51/i=11/a=115302}
{\bibfield {journal} {\bibinfo {journal} {J.~Phys.~A}\ }\textbf {\bibinfo
{volume} {51}},\ \bibinfo {pages} {115302} (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Belli}\ \emph {et~al.}(2016)\citenamefont {Belli}
\emph {et~al.}}]{Belli:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Belli}} \emph {et~al.},\ }\href
{http://link.aps.org/doi/10.1103/PhysRevA.94.012108} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {94}},\
\bibinfo {pages} {012108} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vinante}\ \emph {et~al.}(2016)\citenamefont {Vinante}
\emph {et~al.}}]{Vinante:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Vinante}} \emph {et~al.},\ }\href
{http://link.aps.org/doi/10.1103/PhysRevLett.116.090402} {\bibfield
{journal} {\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf {\bibinfo
{volume} {116}},\ \bibinfo {pages} {090402} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vinante}\ \emph {et~al.}(2017)\citenamefont {Vinante}
\emph {et~al.}}]{Vinante:2016ab}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Vinante}} \emph {et~al.},\ }\href {\doibase 10.1103/PhysRevLett.119.110401}
{\bibfield {journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf
{\bibinfo {volume} {119}},\ \bibinfo {pages} {110401} (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Lalo\"e}\ \emph {et~al.}(2014)\citenamefont
{Lalo\"e}, \citenamefont {Mullin},\ and\ \citenamefont
{Pearle}}]{Laloe:2014ab}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Lalo\"e}}, \bibinfo {author} {\bibfnamefont {W.~J.}\ \bibnamefont {Mullin}},
\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Pearle}},\ }\href
{http://link.aps.org/doi/10.1103/PhysRevA.90.052119} {\bibfield {journal}
{\bibinfo {journal} {Phys.~Rev.~A}\ }\textbf {\bibinfo {volume} {90}},\
\bibinfo {pages} {052119} (\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bilardello}\ \emph {et~al.}(2016)\citenamefont
{Bilardello}, \citenamefont {Donadi}, \citenamefont {Vinante},\ and\
\citenamefont {Bassi}}]{Bilardello:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Bilardello}}, \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Donadi}},
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vinante}}, \ and\
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bassi}},\ }\href
{http://dx.doi.org/10.1016/j.physa.2016.06.134} {\bibfield {journal}
{\bibinfo {journal} {Phys.~A}\ }\textbf {\bibinfo {volume} {462}},\ \bibinfo
{pages} {764 } (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Curceanu}\ \emph {et~al.}(2016)\citenamefont
{Curceanu} \emph {et~al.}}]{Curceanu:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Curceanu}} \emph {et~al.},\ }\href {\doibase 10.1007/s10701-015-9923-4}
{\bibfield {journal} {\bibinfo {journal} {Foundations of Physics}\ }\textbf
{\bibinfo {volume} {46}},\ \bibinfo {pages} {263} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Piscicchia}\ \emph {et~al.}(2017)\citenamefont
{Piscicchia} \emph {et~al.}}]{Piscicchia:2017aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Piscicchia}} \emph {et~al.},\ }\href
{http://www.mdpi.com/1099-4300/19/7/319} {\bibfield {journal} {\bibinfo
{journal} {Entropy}\ }\textbf {\bibinfo {volume} {19}} (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Carlesso}\ \emph {et~al.}(2016)\citenamefont
{Carlesso}, \citenamefont {Bassi}, \citenamefont {Falferi},\ and\
\citenamefont {Vinante}}]{Carlesso:2016ac}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Carlesso}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bassi}},
\bibinfo {author} {\bibfnamefont {P.}~\bibnamefont {Falferi}}, \ and\
\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vinante}},\ }\href
{http://link.aps.org/doi/10.1103/PhysRevD.94.124036} {\bibfield {journal}
{\bibinfo {journal} {Phys.~Rev.~D}\ }\textbf {\bibinfo {volume} {94}},\
\bibinfo {pages} {124036} (\bibinfo {year} {2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Helou}\ \emph {et~al.}(2017)\citenamefont {Helou},
\citenamefont {Slagmolen}, \citenamefont {McClelland},\ and\ \citenamefont
{Chen}}]{Helou:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Helou}}, \bibinfo {author} {\bibfnamefont {B.~J.~J.}\ \bibnamefont
{Slagmolen}}, \bibinfo {author} {\bibfnamefont {D.~E.}\ \bibnamefont
{McClelland}}, \ and\ \bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont
{Chen}},\ }\href {https://link.aps.org/doi/10.1103/PhysRevD.95.084054}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Rev.~D}\ }\textbf {\bibinfo
{volume} {95}},\ \bibinfo {pages} {084054} (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bassi}\ and\ \citenamefont
{Ghirardi}(2003{\natexlab{b}})}]{Bassi:2003aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bassi}}\ and\ \bibinfo {author} {\bibfnamefont {G.~C.}\ \bibnamefont
{Ghirardi}},\ }\href {\doibase
http://dx.doi.org/10.1016/S0370-1573(03)00103-0} {\bibfield {journal}
{\bibinfo {journal} {Phys.~Rep.}\ }\textbf {\bibinfo {volume} {379}},\
\bibinfo {pages} {257 } (\bibinfo {year} {2003}{\natexlab{b}})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Smirne}\ \emph {et~al.}(2014)\citenamefont {Smirne},
\citenamefont {Vacchini},\ and\ \citenamefont {Bassi}}]{Smirne:2014aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Smirne}}, \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont {Vacchini}}, \
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bassi}},\ }\href
{https://link.aps.org/doi/10.1103/PhysRevA.90.062135} {\bibfield {journal}
{\bibinfo {journal} {Phys.~Rev.~A}\ }\textbf {\bibinfo {volume} {90}},\
\bibinfo {pages} {062135} (\bibinfo {year} {2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Smirne}\ and\ \citenamefont
{Bassi}(2015)}]{Smirne:2015aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Smirne}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bassi}},\
}\href {\doibase 10.1038/srep12518} {\bibfield {journal} {\bibinfo
{journal} {Sci. Rep.}\ }\textbf {\bibinfo {volume} {5}},\ \bibinfo {pages}
{12518} (\bibinfo {year} {2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bahrami}\ \emph {et~al.}(2014)\citenamefont {Bahrami}
\emph {et~al.}}]{Bahrami:2014aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Bahrami}} \emph {et~al.},\ }\href
{http://link.aps.org/doi/10.1103/PhysRevLett.112.210404} {\bibfield
{journal} {\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf {\bibinfo
{volume} {112}},\ \bibinfo {pages} {210404} (\bibinfo {year}
{2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Barchielli}\ and\ \citenamefont
{Vacchini}(2015)}]{Barchielli:2015aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Barchielli}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Vacchini}},\ }\href {http://stacks.iop.org/1367-2630/17/i=8/a=083004}
{\bibfield {journal} {\bibinfo {journal} {New J.~Phys.}\ }\textbf {\bibinfo
{volume} {17}},\ \bibinfo {pages} {083004} (\bibinfo {year}
{2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hudson}\ and\ \citenamefont
{Parthasarathy}(1984)}]{Hudson:1984aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {R.~L.}\ \bibnamefont
{Hudson}}\ and\ \bibinfo {author} {\bibfnamefont {K.~R.}\ \bibnamefont
{Parthasarathy}},\ }\href
{https://link.springer.com/article/10.1007/BF01258530} {\bibfield {journal}
{\bibinfo {journal} {Comm.~Math.~Phys.}\ }\textbf {\bibinfo {volume} {93}},\
\bibinfo {pages} {301} (\bibinfo {year} {1984})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Nimmrichter}\ \emph {et~al.}(2014)\citenamefont
{Nimmrichter}, \citenamefont {Hornberger},\ and\ \citenamefont
{Hammerer}}]{Nimmrichter:2014aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Nimmrichter}}, \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Hornberger}}, \ and\ \bibinfo {author} {\bibfnamefont {K.}~\bibnamefont
{Hammerer}},\ }\href {http://link.aps.org/doi/10.1103/PhysRevLett.113.020405}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf
{\bibinfo {volume} {113}},\ \bibinfo {pages} {020405} (\bibinfo {year}
{2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Mancini}\ and\ \citenamefont
{Tombesi}(1994)}]{Mancini:1994aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Mancini}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Tombesi}},\ }\href {\doibase 10.1103/PhysRevA.49.4055} {\bibfield {journal}
{\bibinfo {journal} {Phys. Rev. A}\ }\textbf {\bibinfo {volume} {49}},\
\bibinfo {pages} {4055} (\bibinfo {year} {1994})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Caldeira}\ and\ \citenamefont
{Leggett}(1983)}]{Caldeira:1983aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.~O.}\ \bibnamefont
{Caldeira}}\ and\ \bibinfo {author} {\bibfnamefont {A.~J.}\ \bibnamefont
{Leggett}},\ }\href {http://dx.doi.org/10.1016/0378-4371(83)90013-4}
{\bibfield {journal} {\bibinfo {journal} {Phys.~A}\ }\textbf {\bibinfo
{volume} {121}},\ \bibinfo {pages} {587 } (\bibinfo {year}
{1983})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Hu}\ \emph {et~al.}(1992)\citenamefont {Hu},
\citenamefont {Paz},\ and\ \citenamefont {Zhang}}]{Hu:1992aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~L.}\ \bibnamefont
{Hu}}, \bibinfo {author} {\bibfnamefont {J.~P.}\ \bibnamefont {Paz}}, \ and\
\bibinfo {author} {\bibfnamefont {Y.}~\bibnamefont {Zhang}},\ }\href
{\doibase 10.1103/PhysRevD.45.2843} {\bibfield {journal} {\bibinfo
{journal} {Phys.~Rev.~D}\ }\textbf {\bibinfo {volume} {45}},\ \bibinfo
{pages} {2843} (\bibinfo {year} {1992})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ford}\ and\ \citenamefont
{O'Connell}(2001)}]{Ford:2001aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {G.~W.}\ \bibnamefont
{Ford}}\ and\ \bibinfo {author} {\bibfnamefont {R.~F.}\ \bibnamefont
{O'Connell}},\ }\href {\doibase 10.1103/PhysRevD.64.105020} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. D}\ }\textbf {\bibinfo {volume}
{64}},\ \bibinfo {pages} {105020} (\bibinfo {year} {2001})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Di\'osi}\ and\ \citenamefont
{Ferialdi}(2014)}]{Diosi:2014aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Di\'osi}}\ and\ \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Ferialdi}},\ }\href {http://link.aps.org/doi/10.1103/PhysRevLett.113.200403}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf
{\bibinfo {volume} {113}},\ \bibinfo {pages} {200403} (\bibinfo {year}
{2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Ferialdi}(2016)}]{Ferialdi:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Ferialdi}},\ }\href {\doibase 10.1103/PhysRevLett.116.120402} {\bibfield
{journal} {\bibinfo {journal} {Phys. Rev. Lett.}\ }\textbf {\bibinfo
{volume} {116}},\ \bibinfo {pages} {120402} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Carlesso}\ and\ \citenamefont
{Bassi}(2017)}]{Carlesso:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Carlesso}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bassi}},\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.95.052119}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Rev.~A}\ }\textbf {\bibinfo
{volume} {95}},\ \bibinfo {pages} {052119} (\bibinfo {year}
{2017})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Gardiner}\ and\ \citenamefont
{Zoller}(2004)}]{Gardiner:2004aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {C.}~\bibnamefont
{Gardiner}}\ and\ \bibinfo {author} {\bibfnamefont {P.}~\bibnamefont
{Zoller}},\ }\href@noop {} {\emph {\bibinfo {title} {{Quantum Noise}}}}\
(\bibinfo {publisher} {Springer-Verlag Berlin Heidelberg},\ \bibinfo {year}
{2004})\BibitemShut {NoStop}
\bibitem [{\citenamefont {Paternostro}\ \emph {et~al.}(2006)\citenamefont
{Paternostro} \emph {et~al.}}]{Paternostro:2006aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Paternostro}} \emph {et~al.},\ }\href
{http://stacks.iop.org/1367-2630/8/i=6/a=107} {\bibfield {journal} {\bibinfo
{journal} {New J.~Phys.}\ }\textbf {\bibinfo {volume} {8}},\ \bibinfo
{pages} {107} (\bibinfo {year} {2006})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Carlesso}\ \emph
{et~al.}(2018{\natexlab{a}})\citenamefont {Carlesso}, \citenamefont
{Paternostro}, \citenamefont {Ulbricht}, \citenamefont {Vinante},\ and\
\citenamefont {Bassi}}]{Carlesso:2018ab}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Carlesso}}, \bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Paternostro}}, \bibinfo {author} {\bibfnamefont {H.}~\bibnamefont
{Ulbricht}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vinante}}, \
and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bassi}},\ }\href
{http://stacks.iop.org/1367-2630/20/i=8/a=083022} {\bibfield {journal}
{\bibinfo {journal} {New J.~Phys.}\ }\textbf {\bibinfo {volume} {20}},\
\bibinfo {pages} {083022} (\bibinfo {year} {2018}{\natexlab{a}})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Usenko}\ \emph {et~al.}(2011)\citenamefont {Usenko},
\citenamefont {Vinante}, \citenamefont {Wijts},\ and\ \citenamefont
{Oosterkamp}}]{Usenko:2011aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {O.}~\bibnamefont
{Usenko}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Vinante}},
\bibinfo {author} {\bibfnamefont {G.}~\bibnamefont {Wijts}}, \ and\ \bibinfo
{author} {\bibfnamefont {T.~H.}\ \bibnamefont {Oosterkamp}},\ }\href
{http://dx.doi.org/10.1063/1.3570628} {\bibfield {journal} {\bibinfo
{journal} {App.~Phys.~Lett.}\ }\textbf {\bibinfo {volume} {98}},\ \bibinfo
{pages} {133105} (\bibinfo {year} {2011})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Armano}\ \emph {et~al.}(2018)\citenamefont {Armano}
\emph {et~al.}}]{Armano:2018aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Armano}} \emph {et~al.},\ }\href
{https://link.aps.org/doi/10.1103/PhysRevLett.120.061101} {\bibfield
{journal} {\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf {\bibinfo
{volume} {120}},\ \bibinfo {pages} {061101} (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Abbott}\ \emph
{et~al.}(2016{\natexlab{a}})\citenamefont {Abbott} \emph
{et~al.}}]{Abbott:2016ab}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~P.}\ \bibnamefont
{Abbott}} \emph {et~al.} (\bibinfo {collaboration} {LIGO Scientific
Collaboration and Virgo Collaboration}),\ }\href
{http://link.aps.org/doi/10.1103/PhysRevLett.116.131103} {\bibfield
{journal} {\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf {\bibinfo
{volume} {116}},\ \bibinfo {pages} {131103} (\bibinfo {year}
{2016}{\natexlab{a}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Vinante}\ \emph {et~al.}(2006)\citenamefont {Vinante}
\emph {et~al.}}]{Vinante:2006aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Vinante}} \emph {et~al.} (\bibinfo {collaboration} {AURIGA Collaboration}),\
}\href {http://stacks.iop.org/0264-9381/23/i=8/a=S14} {\bibfield {journal}
{\bibinfo {journal} {Class.~Quantum Grav.}\ }\textbf {\bibinfo {volume}
{23}},\ \bibinfo {pages} {S103} (\bibinfo {year} {2006})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Kovachy}\ \emph {et~al.}(2015)\citenamefont {Kovachy}
\emph {et~al.}}]{Kovachy:2015ab}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {T.}~\bibnamefont
{Kovachy}} \emph {et~al.},\ }\href
{https://link.aps.org/doi/10.1103/PhysRevLett.114.143004} {\bibfield
{journal} {\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf {\bibinfo
{volume} {114}},\ \bibinfo {pages} {143004} (\bibinfo {year}
{2015})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Baggio}\ \emph {et~al.}(2005)\citenamefont {Baggio}
\emph {et~al.}}]{Baggio:2005aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {L.}~\bibnamefont
{Baggio}} \emph {et~al.},\ }\href
{http://link.aps.org/doi/10.1103/PhysRevLett.94.241101} {\bibfield {journal}
{\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf {\bibinfo {volume} {94}},\
\bibinfo {pages} {241101} (\bibinfo {year} {2005})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Abbott}\ \emph
{et~al.}(2016{\natexlab{b}})\citenamefont {Abbott} \emph
{et~al.}}]{Abbott:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {B.~P.}\ \bibnamefont
{Abbott}} \emph {et~al.} (\bibinfo {collaboration} {LIGO Scientific
Collaboration and Virgo Collaboration}),\ }\href
{http://link.aps.org/doi/10.1103/PhysRevLett.116.061102} {\bibfield
{journal} {\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf {\bibinfo
{volume} {116}},\ \bibinfo {pages} {061102} (\bibinfo {year}
{2016}{\natexlab{b}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Fu}(1997)}]{Fu:1997aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {Q.}~\bibnamefont
{Fu}},\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.56.1806} {\bibfield
{journal} {\bibinfo {journal} {Phys.~Rev.~A}\ }\textbf {\bibinfo {volume}
{56}},\ \bibinfo {pages} {1806} (\bibinfo {year} {1997})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Adler}\ and\ \citenamefont
{Vinante}(2018)}]{Adler:2018aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont
{Adler}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Vinante}},\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.97.052119}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Rev.~A}\ }\textbf {\bibinfo
{volume} {97}},\ \bibinfo {pages} {052119} (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bahrami}(2018)}]{Bahrami:2018aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Bahrami}},\ }\href {https://link.aps.org/doi/10.1103/PhysRevA.97.052118}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Rev.~A}\ }\textbf {\bibinfo
{volume} {97}},\ \bibinfo {pages} {052118} (\bibinfo {year}
{2018})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Adler}\ and\ \citenamefont {Ramazano{\u
g}lu}(2007)}]{Adler:2007ad}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont
{Adler}}\ and\ \bibinfo {author} {\bibfnamefont {F.~M.}\ \bibnamefont
{Ramazano{\u g}lu}},\ }\href {http://stacks.iop.org/1751-8121/40/i=44/a=017}
{\bibfield {journal} {\bibinfo {journal} {J.~Phys.~A}\ }\textbf {\bibinfo
{volume} {40}},\ \bibinfo {pages} {13395} (\bibinfo {year}
{2007})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Adler}\ \emph {et~al.}(2013)\citenamefont {Adler},
\citenamefont {Bassi},\ and\ \citenamefont {Donadi}}]{Adler:2013aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.~L.}\ \bibnamefont
{Adler}}, \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bassi}}, \ and\
\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Donadi}},\ }\href
{http://stacks.iop.org/1751-8121/46/i=24/a=245304} {\bibfield {journal}
{\bibinfo {journal} {J.~Phys.~A}\ }\textbf {\bibinfo {volume} {46}},\
\bibinfo {pages} {245304} (\bibinfo {year} {2013})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Donadi}\ \emph {et~al.}(2014)\citenamefont {Donadi},
\citenamefont {Deckert},\ and\ \citenamefont {Bassi}}]{Donadi:2014aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Donadi}}, \bibinfo {author} {\bibfnamefont {D.-A.}\ \bibnamefont {Deckert}},
\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bassi}},\ }\href
{http://www.sciencedirect.com/science/article/pii/S0003491613002443}
{\bibfield {journal} {\bibinfo {journal} {Ann.~Phys.}\ }\textbf {\bibinfo
{volume} {340}},\ \bibinfo {pages} {70 } (\bibinfo {year}
{2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Bassi}\ and\ \citenamefont
{Donadi}(2014)}]{Bassi:2014aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {A.}~\bibnamefont
{Bassi}}\ and\ \bibinfo {author} {\bibfnamefont {S.}~\bibnamefont {Donadi}},\
}\href {http://www.sciencedirect.com/science/article/pii/S0375960114000073}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Lett.~A}\ }\textbf
{\bibinfo {volume} {378}},\ \bibinfo {pages} {761 } (\bibinfo {year}
{2014})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Donadi}\ and\ \citenamefont
{Bassi}(2015)}]{Donadi:2015aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {S.}~\bibnamefont
{Donadi}}\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bassi}},\
}\href {http://stacks.iop.org/1751-8121/48/i=3/a=035305} {\bibfield
{journal} {\bibinfo {journal} {J.~Phys.~A}\ }\textbf {\bibinfo {volume}
{48}},\ \bibinfo {pages} {035305} (\bibinfo {year} {2015})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Carlesso}\ \emph
{et~al.}(2018{\natexlab{b}})\citenamefont {Carlesso}, \citenamefont
{Ferialdi},\ and\ \citenamefont {Bassi}}]{Carlesso:2018aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Carlesso}}, \bibinfo {author} {\bibfnamefont {L.}~\bibnamefont {Ferialdi}},
\ and\ \bibinfo {author} {\bibfnamefont {A.}~\bibnamefont {Bassi}},\ }\href
{https://doi.org/10.1140/epjd/e2018-90248-x} {\bibfield {journal} {\bibinfo
{journal} {Eur.~Phys.~J. D}\ }\textbf {\bibinfo {volume} {72}},\ \bibinfo
{pages} {159} (\bibinfo {year} {2018}{\natexlab{b}})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Petruccione}\ and\ \citenamefont
{Vacchini}(2005)}]{Petruccione:2005aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {F.}~\bibnamefont
{Petruccione}}\ and\ \bibinfo {author} {\bibfnamefont {B.}~\bibnamefont
{Vacchini}},\ }\href {\doibase 10.1103/PhysRevE.71.046134} {\bibfield
{journal} {\bibinfo {journal} {Phys.~Rev.~E}\ }\textbf {\bibinfo {volume}
{71}},\ \bibinfo {pages} {046134} (\bibinfo {year} {2005})}\BibitemShut
{NoStop}
\bibitem [{\citenamefont {Jain}\ \emph {et~al.}(2016)\citenamefont {Jain} \emph
{et~al.}}]{Jain:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {V.}~\bibnamefont
{Jain}} \emph {et~al.},\ }\href
{https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.116.243601}
{\bibfield {journal} {\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf
{\bibinfo {volume} {116}},\ \bibinfo {pages} {243601} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\bibitem [{\citenamefont {Armano}\ \emph {et~al.}(2016)\citenamefont {Armano}
\emph {et~al.}}]{Armano:2016aa}
\BibitemOpen
\bibfield {author} {\bibinfo {author} {\bibfnamefont {M.}~\bibnamefont
{Armano}} \emph {et~al.},\ }\href
{http://link.aps.org/doi/10.1103/PhysRevLett.116.231101} {\bibfield
{journal} {\bibinfo {journal} {Phys.~Rev.~Lett.}\ }\textbf {\bibinfo
{volume} {116}},\ \bibinfo {pages} {231101} (\bibinfo {year}
{2016})}\BibitemShut {NoStop}
\end{thebibliography}
\appendix
\section{Study of the validity of the conditions in Eq.~\eqref{cond2} for the analysis in Section~\ref{nc}}\label{app:ce}
\begin{figure}
\caption{Regime of validity of the conditions in Eq.~\eqref{cond2}: the shaded region shows the values of $\rC$ and $\tcsl$ for which they are fulfilled.}
\label{EC}
\end{figure}
As shown in the main text, Eq.~\eqref{masteralphabeta} can be approximated by Eq.~\eqref{dqmuplmaster} when the two assumptions of Eq.~\eqref{cond2} are fulfilled, which we report here:
\begin{equation}
|{\x}'-{\x}|\ll \rC(1+\chi),\quad\text{and}\quad |{\p}|,|{\p}'|\ll \frac{N\hbar}{\rC \chi}.
\end{equation}
Since we work in a reference frame where the average velocity of the system is zero, a good estimate of $|\bm{v}|$ is given by its fluctuation $\delta\bm{v}$.
We consider the system in the steady state, i.e.~when it has thermalized with the environment, which allows us to use the equipartition theorem to estimate the fluctuations of its position and velocity:
\begin{equation}
\delta{\x} \sim \sqrt{\frac{\kB T}{m\, \omega_{0}^{2}}},\quad\delta\bm{v} \sim \sqrt{\frac{\kB T}{m}}.
\end{equation}
For the cantilever considered in section~\ref{nc}, we find
\begin{equation}
\delta{\x} \sim10^{-12}\,\text{m},\quad\delta\bm{v} \sim10^{-8}\,\text{m/s}.
\end{equation}
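The estimates above follow from a direct evaluation of the equipartition expressions; a minimal sketch (not part of the original analysis, with $T$, $m$ and $\omega_0$ as user-supplied placeholders) is:
\begin{verbatim}
import math

kB = 1.380649e-23  # J/K

def thermal_fluctuations(T, m, omega0):
    # equipartition estimates of position and velocity
    dx = math.sqrt(kB * T / (m * omega0**2))
    dv = math.sqrt(kB * T / m)
    return dx, dv
\end{verbatim}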
The shaded region in Fig.~\ref{EC} shows the values of $\rC$ and $\tcsl$ which fulfil the conditions of Eq.~\eqref{cond2}, taking the environmental temperature as $T \simeq 1\,$mK. As we can see, even for very low CSL temperatures, such as $\tcsl\sim10^{-10}\,$K, the conditions of Eq.~\eqref{cond2} are satisfied for any $\rC\geq10^{-10}\,\text{m}$ and the analysis in the main text is valid. Only at much lower temperatures, such as $\tcsl\sim10^{-15}\,\text{K}$, does the range of values of $\rC$ satisfying Eq.~\eqref{cond2} shrink significantly.
\section{Density Noise Spectrum details}
\label{app.noiselaser}
The explicit forms of the effective resonance frequency $\omega_\text{\tiny eff}(\omega)$, of the effective damping $\gamma_\text{\tiny eff}(\omega)$ and of the laser noise $\tilde{\mathcal N}_\text{\tiny C}(\omega)$ appearing in Eq.~\eqref{deltaxlaser} can be derived by following the standard procedure \cite{Gardiner:2004aa,Paternostro:2006aa}. They read, respectively,
\begin{align}
\omega_\text{\tiny eff}^2(\omega)&=\omega_0^2-\frac{2|\alpha|^2\hbar g^2\delta(\delta^2-\omega^2+\kappa^2)}{m\left((\delta+\omega)^2+\kappa^2\right)\left((\delta-\omega)^2+\kappa^2\right)},\\
\gamma_\text{\tiny eff}(\omega)&=\gamma+\frac{4|\alpha|^2\hbar g^2\kappa\delta}{m\left((\delta+\omega)^2+\kappa^2\right)\left((\delta-\omega)^2+\kappa^2\right)},\\
\tilde{\mathcal N}_\text{\tiny C}(\omega)&=\hbar g\sqrt{2\kappa}\left[\frac{\alpha^*\tilde a_\text{\tiny in}(\omega)}{\kappa+i(\delta-\omega)}+\frac{\alpha\tilde a^\dag_\text{\tiny in}(-\omega)}{\kappa-i(\delta+\omega)} \right],
\end{align}
where $\delta=\delta_0-g\braket{\hat x}$ and the only non-zero correlation is given by $\braket{\tilde a_\text{\tiny in}(\omega)\tilde a_\text{\tiny in}^\dag(-\bar\omega)}=2\pi\delta(\omega+\bar\omega)$.
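A minimal Python sketch (not part of the original derivation) implementing the effective frequency and damping above, with all parameters supplied by the user, reads:
\begin{verbatim}
hbar = 1.054571817e-34  # J s

def omega_eff_sq(omega, omega0, g, alpha2, delta, kappa, m):
    # effective squared resonance frequency (first line above)
    den = (((delta + omega)**2 + kappa**2)
           * ((delta - omega)**2 + kappa**2))
    return omega0**2 - (2 * alpha2 * hbar * g**2 * delta
                        * (delta**2 - omega**2 + kappa**2)
                        / (m * den))

def gamma_eff(omega, gamma, g, alpha2, delta, kappa, m):
    # effective damping (second line above)
    den = (((delta + omega)**2 + kappa**2)
           * ((delta - omega)**2 + kappa**2))
    return gamma + 4 * alpha2 * hbar * g**2 * kappa * delta \
        / (m * den)
\end{verbatim}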
\section{dCSL for composite systems}\label{app:composite}
We consider a system containing $N$ particles, which can be divided into two subsets labeled by the indices $\alpha,\beta=1,2$, where the $\alpha$-th subset contains $N_\alpha$ particles. The mass density of each subset is described by $\mu_\alpha(\x)=m_0\sum_n\delta^{(3)}(\x-\x_{n,\alpha}^{(0)})$, where ${\bf x}_{n,\alpha}^{(0)}$ denotes the classical equilibrium position of the $n$-th nucleon (belonging to the $\alpha$-th mass distribution). Then, similarly to the procedure shown in the main text, we can express Eq.~\eqref{Ldcslmaster} as:
\begin{align}\label{masteralphabetaA}
\mathcal{L}[ \hat \rho(t)]=\frac{\nu^2}{(2\pi\hbar)^3}\sum_{\alpha,\beta}\int\text{d}\Q\,\tilde\mu_\alpha(\Q)\tilde\mu_\beta^*(\Q)e^{-\tfrac{\rC^2(1+\chi)^2}{\hbar^2}\Q^2}\cdot\\
\cdot\left[\hat S_\alpha(\Q)\hat\rho(t)\hat S_\beta^\dag(\Q)-\tfrac12\acom{\hat S_\beta^\dag(\Q)\hat S_\alpha(\Q)}{\hat\rho(t)}\right],
\end{align}
where $\tilde\mu_\alpha(\Q)=\int\text{d}\x\,\mu_\alpha(\x)e^{i\Q\cdot\x/\hbar}$ and $\hat S_\alpha(\Q)$ takes the same form as $\hat S(\Q)$ in Eq.~\eqref{defSalpha}, with the following substitutions:
\begin{equation}\label{xtoxa}
\hat\x\to\hat\x_\alpha,\quad \hat\p\to\hat\p_\alpha \quad\text{and}\quad N\to N_\alpha.
\end{equation}
The dissipator in Eq.~\eqref{masteralphabetaA} describes the $N$-particle system when it is regarded as divided into subsets labeled by $\alpha$.
Under the short-length approximation, valid for
\begin{equation}\label{condappendix}
|{\x}'_\beta-{\x}_\alpha|\ll \rC(1+\chi),\quad\text{and}\quad |{\p}_\alpha|,|{\p}_\beta|\ll \frac{N\hbar}{\rC \chi},
\end{equation}
Eq.~\eqref{masteralphabetaA} can be approximated by
\begin{align}\label{linearmasteralphabetaA}
&\mathcal{L}[ \hat \rho(t)]=\frac{\lcsl \rC^3}{\pi^{3/2}m_0^2\hbar^3}\sum_{\alpha,\beta}\int\text{d}\Q\,\tilde\mu_\alpha(\Q)\tilde\mu_\beta^*(\Q)e^{-\tfrac{\rC^2(1+\chi)^2}{\hbar^2}\Q^2}\\
&\cdot\left(\tfrac12\com{\hat K_\alpha(\Q)-\hat K_\beta^\dag(\Q)+\hat M_\alpha(\Q)-\hat M_\beta^\dag(\Q)}{\hat \rho(t)}+\right.\\
&\left.+\hat K_\alpha(\Q)\hat\rho(t)\hat K_\beta^\dag(\Q)-\tfrac12\acom{\hat K_\beta^\dag(\Q)\hat K_\alpha(\Q)}{\hat\rho(t)}\right),
\end{align}
where $\hat K_\alpha$ and $\hat M_\alpha$ can be obtained from Eq.~\eqref{defKM} with the replacements in Eq.~\eqref{xtoxa}. Note that the first condition in Eq.~\eqref{condappendix} is fulfilled, assuming that $|\x_\alpha|$ and $|\x_\beta|\ll \rC$, even when $\alpha$ and $\beta$ belong to different subsets, centred around points separated by more than $\rC$. Indeed, $\x_{\alpha}$ and $\x_{\beta}$ describe the fluctuations of the $\alpha$ and $\beta$ subsets around the corresponding centers of mass, not their actual positions. Consequently, we have $|{\x}'_\beta-{\x}_\alpha|\leq|\x_\alpha|+|\x_\beta|$, which is smaller than $\rC$ [cf.~Eq.~\eqref{cond2}].
As already stated in the main text, there are two situations of interest. The first one is when the system is not divided into subsets, i.e.~when $\alpha=\beta=1$. Then Eq.~\eqref{linearmasteralphabetaA} reduces to Eq.~\eqref{linearmasteralphabeta}, describing the motion of the center of mass of the system. This first case is discussed in the main text; examples of systems which can be well described just by studying the center-of-mass motion are cantilevers \cite{Vinante:2016aa,Vinante:2016ab} or optically levitated nanospheres \cite{Jain:2016aa}. On the other hand, in interferometric experiments involving two masses, such as LIGO and LISA Pathfinder \cite{Abbott:2016ab,Armano:2016aa}, one is interested in the relative motion between two distinct objects. In such a case the dynamics is described by Eq.~\eqref{linearmasteralphabetaA} with $\alpha,\beta=1,2$.
We now restrict to the case of two subsets having the same mass density distribution, at positions displaced by $\R_{12}$. Accordingly, $N_2=N_1$ and
\begin{equation}\label{assumptionidenticalA}
\tilde \mu_2(\Q)=\tilde \mu_1(\Q)e^{-i\Q\cdot\R_{12}/\hbar}.
\end{equation}
Under this assumption, Eq.~\eqref{linearmasteralphabetaA} becomes:
\begin{align}\label{master2A}
\mathcal{L}[ \hat \rho(t)]&=\frac{\lcsl \rC^3}{\pi^{3/2}m_0^2\hbar^3}\int\text{d}\Q\,|\tilde\mu_1(\Q)|^2e^{-\tfrac{\rC^2(1+\chi)^2}{\hbar^2}\Q^2}\cdot\\
&\cdot\left(\hat f_{11}+\hat f_{22}+\hat f_{12}e^{-i\Q\cdot\R_{12}/\hbar}+\hat f_{21}e^{i\Q\cdot\R_{12}/\hbar}\right),
\end{align}
where we introduced
\begin{align}
\hat f_{\alpha\beta}&=\tfrac12\com{\hat K_\alpha-\hat K_\beta^\dag+\hat M_\alpha-\hat M_\beta^\dag}{\hat \rho(t)}+\\
&+\hat K_\alpha(\Q)\hat\rho(t)\hat K_\beta^\dag(\Q)-\tfrac12\acom{\hat K_\beta^\dag(\Q)\hat K_\alpha(\Q)}{\hat\rho(t)}.
\end{align}
The meaning of the four terms in Eq.~\eqref{master2A} is the following: the terms $\hat f_{11}$ and $\hat f_{22}$ give, respectively, the contribution to the master equation due to the mass distributions $\mu_1$ and $\mu_2$ as if they were alone; this is the incoherent contribution. The last two terms instead account for correlation effects between the two mass distributions.
To better understand the meaning of Eq.~\eqref{master2A}, let us consider two limiting cases.
The first limit is given by $|\R_{12}|\gg \rC$, for which the phases multiplying $\hat f_{12}$ and $\hat f_{21}$ oscillate very rapidly, giving a negligible contribution compared to that of $\hat f_{11}$ and $\hat f_{22}$. This means that for large distances the noise acting on the first mass distribution is totally uncorrelated with the one acting on the second mass.
In the opposite limit, i.e.~when $|\R_{12}|\ll \rC$, we have $e^{i\Q\cdot\R_{12}/\hbar}\simeq 1$. In this case, the same noise acts on the two mass distributions, and the contributions from the cross terms become relevant.
By considering the problem only along the direction of motion ($x$-direction), the master equation becomes
\begin{align}
\frac{\text{d}\hat\rho(t)}{\text{d}t}&=-\frac{i}{\hbar}\left[\hat{H}_{\text{\tiny eff}}^{(2)},\hat \rho(t)\right]+\\
&+\sum_{\alpha,\beta=1}^{2}K_{\alpha,\beta}\left(\hat{L}_{\alpha}\hat{\rho}(t)\hat{L}_{\beta}^{\dagger}-\frac{1}{2}\left\{ \hat{L}_{\beta}^{\dagger}\hat{L}_{\alpha},\hat{\rho}(t)\right\} \right),
\end{align}
where
\begin{align}\label{Heff2A}
\hat{H}_{\text{\tiny eff}}^{(2)}&=\hat{H}+\left(\frac{\gamma_{\text{\tiny CSL}}}{4}+\frac{\varkappa\sigma\hbar}{2}\right)\left(\left\{ \hat{x}_{1},\hat{p}_{1}\right\}+\left\{ \hat{x}_{2},\hat{p}_{2}\right\} \right)+\\
&+\varkappa\Omega\hbar^{2}(\hat{p}_{1}-\hat{p}_{2}),
\end{align}
\begin{equation}\label{sigmaA}
\sigma\!=\!\frac{\nu^2}{(2\pi\hbar)^3\hbar^2}\!\!\int \!\! \text{d}{\Q}\,|\tilde{\mu}({\Q})|^{2}e^{-\tfrac{\rC^{2}(1+\chi)^{2}}{\hbar^{2}}{\Q}^{2}}\!\!Q_{x}^{2}\cos\!\left(\frac{Q_{x}R_{12}}{\hbar}\right)\!\!,
\end{equation}
\begin{equation}\label{BigsigmaA}
\Omega\!=\!\!\frac{\nu^2}{(2\pi\hbar)^3\hbar^2}\!\!\int\!\!\text{d}{\Q}\,|\tilde{\mu}({\Q})|^{2}e^{-\tfrac{\rC^{2}(1+\chi)^{2}}{\hbar^{2}}{\Q}^{2}}\!\!Q_{x}\sin\!\left(\frac{Q_{x}R_{12}}{\hbar}\right)\!\!,
\end{equation}
and
\begin{align}\label{BigKA}
&K_{\alpha,\beta}=\left(\begin{array}{cc}
\eta & \sigma\\
\sigma & \eta
\end{array}\right),\quad\text{and}\quad \varkappa=\frac{\gamma_{\text{\tiny CSL}}}{2\eta\hbar},\\
\label{Lalpha2A}
&\hat{L}_{\alpha}=\hat{x}_{\alpha}+i \varkappa\hat{p}_{\alpha}\;\;\;\;\text{with}\;\;\;\;\alpha=1,2.
\end{align}
Note that the parameters $\sigma$ and $\Omega$, similarly to $\eta$ defined in Eq.~\eqref{etaapp1}, depend on the phenomenological constants $\rC$, $\lambda_\text{\tiny CSL}$ and $T_\text{\tiny CSL}$ of the dCSL model, as well as on the mass distribution of the system. In the limit in which the centers of mass of the two subsystems coincide, i.e.~$R_{12}\to0$, one finds $\sigma\to\eta$ and $\Omega\to0$.
Following the same scheme as in the main text, we can write the corresponding unitary unravelling:
\begin{equation}\label{unravel2A}
\text{d}|\psi_{t}\rangle=\left\{ -\frac{i}{\hbar}\hat{H}_{\text{\tiny eff}}^{(2)}\text{d}t+\text{d}\hat{C}_{2}-\frac{1}{2}\mathbb{E}\left[\text{d}\hat{C}_{2}^{\dagger}\text{d}\hat{C}_{2}\right]\right\} |\psi_{t}\rangle,
\end{equation}
where
\begin{equation}\label{C2A}
\text{d}\hat{C}_{2}=\sum_{\alpha=1}^{2}\left(\hat{L}_{\alpha}\,\text{d}\hat{B}_{\alpha t}^{\dagger}-\hat{L}_{\alpha}^{\dagger}\,\text{d}\hat{B}_{\alpha t}\right),
\end{equation}
and with the It\^o rules
\begin{equation}\label{corrnoise2A}
\mathbb{E}\left[\text{d}\hat{B}_{\alpha t}\text{d}\hat{B}_{\beta t}^{\dagger}\right]=K_{\beta,\alpha}\text{d}t,
\end{equation}
and all other It\^o products vanish.
Given Eq.~\eqref{unravel2A}, the Langevin equation for a generic operator
$\hat O$ is:
\begin{align}\label{diffO2A}
\frac{\text{d} \hat O}{\text{d} t}&=\frac{i}{\hbar}\com{\hat{H}_{\text{\tiny eff}}^{(2)}}{\hat O}+\!\sum_{\alpha=1}^{2}\left(\hat b_\alpha^\dag(t)\!\com{\hat O}{\hat L_\alpha}\!+\!\hat b_\alpha(t)\com{\hat L_\alpha^\dag}{\hat O}\right)\!+\\
&+\!\sum_{\alpha,\beta=1}^{2}\!K_{\alpha,\beta}\left(\hat L_\alpha^\dag\hat O\hat L_\beta-\tfrac12\acom{\hat L_\alpha^\dag\hat L_\beta}{\hat O}\right),
\end{align}
where we introduced $\hat b_\alpha (t) =\tfrac{\text{d}}{\text{d} t} \hat B_{\alpha,t}$.
By considering
\begin{equation}
\hat H=\sum_{\alpha=1}^2\frac{\hat p_\alpha^2}{2m}+\frac12m\omega_0^2\hat x_\alpha^2,
\end{equation}
in Eq.~\eqref{Heff2A},
the Langevin equations for the relative coordinates $\hat x =\hat x_1-\hat x_2$ and $\hat p =\tfrac12(\hat p_1-\hat p_2)$ of the two masses become
\begin{align}\label{langevin2A}
\frac{\text{d} \hat x}{\text{d} t}&=\frac{2\hat p}{m}+2 \varkappa\sigma\hbar \hat x+2 \varkappa\Omega\hbar^2- \varkappa \hbar \hat w_{x}(t),\\
\frac{\text{d} \hat p}{\text{d} t}&=-\frac{m}{2} \omega_0^2\hat x-\gamma \hat p-\frac{\hbar}{2} \hat w_{p}(t),
\end{align}
where we introduced
\begin{align}\label{wrel}
\hat w_x(t)&=\left[\hat{b}^{\dagger }_1(t)+ \hat{b}_1(t)\right]-\left[\hat{b}^{\dagger }_2(t)+ \hat{b}_2(t)\right],\\
\hat w_p(t)&=i\left[\hat{b}^{\dagger }_1(t)-\hat{b}_1(t)\right]-i\left[\hat{b}^{\dagger }_2(t)- \hat{b}_2(t)\right],
\end{align}
with
\begin{align}\label{corrrel}
\mathbb E[ \hat w_{x}(t) \hat w_{x}(s)]&=2(\eta-\sigma)\delta(t-s),\\
\mathbb E[ \hat w_{x}(t) \hat w_{p}(s)]&=2i(\eta-\sigma)\delta(t-s),\\
\mathbb E[ \hat w_{p}(t) \hat w_{x}(s)]&=-2i(\eta-\sigma)\delta(t-s),\\
\mathbb E[ \hat w_{p}(t) \hat w_{p}(s)]&=2(\eta-\sigma)\delta(t-s),
\end{align}
describing the correlations between the noises.
We now compute the density noise spectrum for the relative position. Starting from Eqs.~\eqref{langevin2A}, the position fluctuation in Fourier space is
\begin{equation}
\delta\tilde x(\omega)=\frac{\hbar}{m} \frac{m \varkappa(i \omega-\gamma)\tilde w_{x}-\tilde w_p}{(\omega_0^2-\omega^2-2\gamma \varkappa\sigma\hbar)-i\omega(\gamma-2 \varkappa\sigma\hbar)},
\end{equation}
where the correlations of the Fourier transformed noises read:
\begin{align}
\mathbb E[ \tilde w_x(\omega) \tilde w_x(\omega')]&=4\pi(\eta-\sigma)\delta(\omega+\omega'),\\
\mathbb E[ \tilde w_x(\omega) \tilde w_p(\omega')]&=4\pi i(\eta-\sigma)\delta(\omega+\omega'), \\
\mathbb E[ \tilde w_p(\omega) \tilde w_x(\omega')]&=-4\pi i(\eta-\sigma)\delta(\omega+\omega'), \\
\mathbb E[ \tilde w_p(\omega) \tilde w_p(\omega')]&=4\pi(\eta-\sigma)\delta(\omega+\omega').
\end{align}
The corresponding density noise spectrum, calculated using Eqs.~\eqref{sd-def-the} with the relative coordinates, is given in Eq.~\eqref{dns2A}.
\end{document} |
\begin{document}
\title{How to compute the Stanley depth of a module}
\author{Bogdan Ichim}
\address{Simion Stoilow Institute of Mathematics of the Romanian Academy, Research Unit 5, C.P. 1-764,
014700 Bucharest, Romania} \email{[email protected]}
\author{Lukas Katth\"an}
\address{Universit\"at Osnabr\"uck, FB Mathematik/Informatik, 49069
Osnabr\"uck, Germany}\email{[email protected]}
\author{Julio Jos\'e Moyano-Fern\'andez}
\address{Universitat Jaume I, Campus de Riu Sec, Departamento de Matem\'aticas \& Institut Universitari de Matem\`atiques i Aplicacions de Castell\'o, 12071
Caste\-ll\'on de la Plana, Spain} \email{[email protected]}
\subjclass[2010]{Primary: 05A18; 05E40; Secondary: 16W50.}
\keywords{Graded modules; Hilbert depth; Stanley depth; Stanley decomposition.}
\thanks{The first author was partially supported by the project PN-II-RU-TE-2012-3-0161, granted by the Romanian National Authority for Scientific Research,
CNCS -- UEFISCDI. The second author was partially supported by
the German Research Council DFG-GRK~1916. The third author was partially supported by the Spanish Government---Ministerio de Econom\'ia y Competitividad (MINECO), grant MTM2012-36917-C03-03.}
\begin{abstract}
In this paper we introduce an algorithm for computing the Stanley depth of a finitely generated multigraded module $M$ over the polynomial ring $\mathds{K}[X_1, \ldots, X_n]$.
As an application, we give an example of a module whose Stanley depth is strictly greater than the Stanley depth of its syzygy module.
In particular, we obtain complete answers for two open questions raised by Herzog in \cite{H}. Moreover, we show that the question whether $M$ has Stanley depth at least $r$ can be reduced to the question whether a certain combinatorially defined polytope $\mathscr{P}$ contains a $\mathds{Z}^n$-lattice point.
\end{abstract}
\maketitle
\section{Introduction}
Let $\mathds{K}$ be a field. Let $R=\mathds{K}[X_1, \ldots , X_n]$ be the standard $\mathds{Z}^n$-graded polynomial ring, and let $M=\oplus M_{\mathbf{a}}$ be a finitely generated $\mathds{Z}^n$-graded $R$-module (also called multigraded in the sequel). The \emph{Stanley depth} of $M$, denoted $\sdep M$, is a combinatorial invariant of $M$ related
to a conjecture of Stanley from 1982 \cite[Conjecture 5.1]{St}, which states that, in the case when $\mathds{K}$ is infinite, the inequality $\depth M \leq \sdep M$ holds; this is nowadays called the \emph{Stanley conjecture}. We refer the reader to \cite{intro} for a short introduction to the subject and to \cite{H} for a comprehensive survey.
After the initial submission of this paper, a counterexample to Stanley's conjecture was given by Duval, Goeckner, Klivans, and Martin in \cite{counterexample}.
We would like to mention that the counterexample may be checked directly by computational methods.
The Stanley depth is an interesting invariant which naturally arises in various combinatorial and computational contexts, but which has remained rather elusive so far, cf.~\cite{Sturmfels1991,Murdock2002,N1,N2}.
Our goal is to answer the following natural question, which was raised by Herzog:
\begin{question}\label{Q:Herzog65}\cite[Question 1.65]{H}
Does there exist an algorithm to compute the Stanley depth of finitely generated multigraded $R$-modules?
\end{question}
In the particular cases of $R$-modules which are either monomial ideals $I\subset R $, or quotients thereof, this question has been answered by Herzog, Vladoiu, and Zheng \cite{HVZ}. The majority of the published articles concerning Stanley depth are related to this result. A key remark is that, in the cases studied by Herzog, Vladoiu and Zheng, the Hilbert series already determines the module structure. So, the Stanley depth may be computed directly from the Hilbert series of $M$.
This leads to another interesting combinatorial invariant, called the \emph{Hilbert depth} of $M$, which was introduced by Bruns, Krattenthaler, and Uliczka in \cite{BKU}. In fact, the method of \cite{HVZ} extends directly to an algorithm for computing the Hilbert depth of finitely generated multigraded $R$-modules, introduced by the first and third author \cite{IJ} and the first author together with Zarojanu \cite{IZ}.
However, until now, little is known about the computation of the Stanley depth in general.
For computing either the Stanley or the Hilbert depth one has to consider certain combinatorial decompositions. They are called \emph{Stanley decompositions} in the first case, respectively \emph{Hilbert decompositions} in the second case.
An interesting fact is that---besides the interest raised among algebraists and combinatorialists by the conjecture of Stanley---Stanley decompositions have a separate life in applied mathematics.
This goes back to Sturmfels and White \cite{Sturmfels1991}, where it is shown how Stanley decompositions can be used to describe finitely generated graded algebras, e.g. rings of invariants under some group action.
More recently, this found applications in the normal form theory for systems of differential equations with nilpotent linear part (see Murdock \cite{Murdock2002}, Murdock and Sanders \cite{Murdock2007}, Sanders \cite{Sanders2007}).
It is also worth mentioning that, in the particular case of a normal affine monoid, suitable Stanley (or Hilbert) decompositions have already been used successfully to design arguably the fastest available algorithms for computing Hilbert series (see \cite{N1} and \cite{N2}). Further, these algorithms have been used for computing the Hilbert series (and subsequently the associated probability generating functions) corresponding to three well-studied (but difficult to compute) voting situations with four candidates arising from the field of social choice: the Condorcet paradox, the Condorcet efficiency of plurality voting and Plurality versus Plurality Runoff (see \cite{Sch} and \cite{N2} for details).
We remark that every Stanley decomposition induces a Hilbert decomposition, but the converse is not true in general. Nevertheless, in many of the particular cases studied so far the converse does hold (for example in \cite{HVZ} and the related results). More generally, it makes sense to ask:
\begin{question}\label{Q:2}
Which Hilbert decompositions are induced by Stanley decompositions?
\end{question}
A precise answer to Question \ref{Q:2} implies an answer to Question~\ref{Q:Herzog65}.
This is the main contribution of the present article:
In Theorem \ref{thm:main} we give an effective criterion to decide whether a given Hilbert decomposition is induced by a Stanley decomposition.
This leads directly to an algorithm for the computation of the Stanley depth of a finitely generated multigraded $R$-module, which we present in Section \ref{Algo}.
Further, as an application of our main result, we are able to construct a counterexample which gives a negative answer to the following open question, also raised by Herzog (see Subsection \ref{subsection:counterex}):
\begin{question}\label{Q:Herzog63}\cite[Question 1.63]{H}\label{q:syz}
Let $M$ be a finitely generated multigraded $R$-module with syzygy module $Z_k$ for $k = 1,2,\ldots$.
Is it true that $\sdep Z_{k+1}\ge \sdep Z_k$?
\end{question}
Moreover, we define and study the structure of the set of all \emph{$\mathbf{g}$-determined Stanley decompositions} of a finitely generated multigraded $R$-module $M$.
In Section \ref{ssec:rado} we show that this set naturally corresponds to the set of solutions of a certain system of linear Diophantine inequalities.
In other words, the question whether $M$ has Stanley depth at least $r$ can be reduced to the question whether a certain combinatorially defined polytope $\mathscr{P}$ contains a $\mathds{Z}^n$-lattice point.
This polytope $\mathscr{P}$ turns out to be an intersection of a certain affine subspace with the positive orthant and \emph{finitely many polymatroids}.
Finally, we would like to point out that we have not been able to either prove or disprove \cite[Conjecture~2]{A1} and \cite[Conjecture 1.64]{H}, despite several computational and theoretical attempts. Further research on these open conjectures is certainly desirable in view of the recent important advance made by Duval, Goeckner, Klivans and Martin in \cite{counterexample}.
The article is organized as follows. In Section \ref{Sect:pre}, we fix the notation, recall the definitions and the necessary previous results. In Section \ref{AlgStanley}, we formulate and prove Theorem \ref{thm:main}, which is the answer to Question \ref{Q:2}. In Section \ref{Algo}, we deduce an algorithm for the computation of the Stanley depth. This fully responds to Herzog's Question \ref{Q:Herzog65}.
In Section \ref{ApplicationsExamples} we answer Question \ref{Q:Herzog63} and we present several interesting applications of the main result.
\section{Prerequisites} \label{Sect:pre}
In this section we recall the basics about both Stanley and Hilbert decompositions.
We refer the reader to \cite{H} for a more comprehensive treatment.
Let $\mathds{K}$ be a field, $R=\mathds{K}[X_1, \ldots ,X_n]$ be the polynomial ring with the fine $\mathds{Z}^n$-grading, and let $M$ be a finitely generated $\mathds{Z}^n$-graded $R$-module.
Throughout the paper, we denote the cardinality of a set $S$ by $|S|$ and we set $[n] := \set{1,\dotsc, n}$.
Moreover, $n$-tuples in $\mathds{Z}^n$ will be denoted by boldface letters as $\mathbf{a}, \mathbf{b}, \ldots$, while $a_i$ will denote the $i$-th component of $\mathbf{a} \in \mathds{Z}^n$. Further, for $\mathbf{a}\in\mathds{N}^n$, we set $\supp(\mathbf{a}) := \set{i \in [n] \,:\, a_i \neq 0}$ and $\mathbf{X}^\mathbf{a} := X_1^{a_1}\dotsm X_n^{a_n}$.
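For instance, if $n = 3$ and $\mathbf{a} = (2,0,1)$, then $\supp(\mathbf{a}) = \set{1,3}$ and $\mathbf{X}^{\mathbf{a}} = X_1^2X_3$.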
\begin{defi}
\begin{enumerate}
\item A \emph{Stanley decomposition} of $M$ is a finite family $(R_i,m_i)_{i\in I}$, in which all $m_i \in M$ are homogeneous and $R_i$ are subalgebras of $R$ generated by a subset of the indeterminates of $R$,
such that $R_i\cap \Ann m_i=0$ for each $i\in I$, and
\begin{equation}\label{eq:stadec}
M=\Dirsum_{i\in I} m_iR_i
\end{equation}
as a multigraded $\mathds{K}$-vector space.
\item A \emph{Hilbert decomposition} of $M$ is a finite family $(R_i,\mathbf{s}_i)_{i\in I}$, where $\mathbf{s}_i\in \mathds{Z}^n$ and the $R_i$ are again subalgebras of $R$ generated by a subset of the indeterminates of $R$ for each $i\in I$, such that
\begin{equation}\label{eq:hildec}
M\iso\Dirsum_{i\in I} R_i(-\mathbf{s}_i)
\end{equation}
as a multigraded $\mathds{K}$-vector space.
\end{enumerate}
\end{defi}
Note that every Stanley decomposition $(R_i, m_i)_{i\in I}$ of $M$ gives rise to the Hilbert decomposition $(R_i, \deg m_i)_{i \in I}$.
In the sequel, we will say that a Hilbert decomposition \emph{is induced by a Stanley decomposition} if it arises in this way.
Moreover, observe that, in general, the $R$-module structure of a Stanley decomposition is different from that of $M$, and that Hilbert decompositions depend only on the Hilbert series of $M$, i.e. they do not take the $R$-module structure of $M$ into account.
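As a small illustrative example (used only here): let $R = \mathds{K}[X_1,X_2]$ and $M = R/(X_1X_2)$. Then
\[ M = 1\cdot\mathds{K}[X_1] \oplus X_2\cdot\mathds{K}[X_2] \]
is a Stanley decomposition of $M$: indeed, $\mathds{K}[X_1]\cap\Ann(1)=0$ and $\mathds{K}[X_2]\cap\Ann(X_2)=0$, and every monomial of $R$ not divisible by $X_1X_2$ lies in exactly one of the two summands. The induced Hilbert decomposition is $M\iso \mathds{K}[X_1]\oplus\mathds{K}[X_2](0,-1)$, and both decompositions have depth $1$; in fact $\sdep M = \depth M = 1$ here.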
\begin{defi}
The \emph{depth} of a Hilbert (resp.~Stanley) decomposition is the minimal dimension of the subalgebras $R_i$ in the decomposition.
Equivalently, it is the depth of the right-hand side of (\ref{eq:hildec}) (resp.~(\ref{eq:stadec})), considered as $R$-module.
The \emph{multigraded Hilbert depth} (resp.~the \emph{Stanley depth}) of $M$ is then the maximal depth of a Hilbert (resp.~Stanley) decomposition of $M$.
We write $\hdep M$ and $\sdep M$ for the multigraded Hilbert resp. Stanley depth.
\end{defi}
We denote by $\preceq$ the componentwise order on $\mathds{Z}^n$ and we set
\[[\mathbf{a},\mathbf{b}] := \set{\mathbf{c} \in \mathds{Z}^n \,:\, \mathbf{a} \preceq \mathbf{c} \preceq \mathbf{b}}\]
with $\mathbf{a},\mathbf{b} \in \mathds{Z}^n$.
For the computation of the Hilbert resp.~Stanley depth, one may restrict the attention to a certain \emph{finite} class of decompositions.
Let us briefly recall the details.
The module $M$ is said to be {\it positively $\mathbf{g}$-determined} for $\mathbf{g} \in \mathds{N}^n$ if $M_\mathbf{a}=0$ for $\mathbf{a} \notin \mathds{N}^n$ and the multiplication
map $\cdot X_k : M_{\mathbf{a}} \longrightarrow M_{\mathbf{a}+\mathbf{e}_k}$ is an isomorphism whenever $a_k \ge g_k$, see Miller \cite{M}.
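For example, the graded maximal ideal $\mathfrak{m} = (X_1,\dotsc,X_n) \subset R$ is positively $\mathbf{g}$-determined for $\mathbf{g} = (1,\dotsc,1)$: for $\mathbf{a}\neq\mathbf{0}$ the component $\mathfrak{m}_\mathbf{a}$ is spanned by $\mathbf{X}^\mathbf{a}$, and if $a_k\geq 1$ then multiplication by $X_k$ sends this basis vector to $\mathbf{X}^{\mathbf{a}+\mathbf{e}_k}$, hence is an isomorphism $\mathfrak{m}_{\mathbf{a}}\to\mathfrak{m}_{\mathbf{a}+\mathbf{e}_k}$.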
A characterization of positively $\mathbf{g}$-determined modules is given by the following:
\begin{propo}\label{prop:ezra}\cite[Proposition 2.5]{M}
The module $M$ is positively $\mathbf{g}$-determined if and only if the multigraded Betti numbers of $M$ satisfy $\beta_{0,\mathbf{a}}^R(M)=\beta_{1,\mathbf{a}}^R(M)=0$ unless $\mathbf{a} \in [\mathbf{0}, \mathbf{g}]$.
\end{propo}
In particular, if $M$ has no components with negative degrees, then it is always $\mathbf{g}$-determined for a sufficiently large $\mathbf{g}\in \mathds{N}^n$.
For our purpose, the importance of $M$ being positively $\mathbf{g}$-determined is that it allows us to restrict the search space for possible Hilbert or Stanley decompositions, as we explain in the following.
For a given Hilbert decomposition $(R_i, \mathbf{s}_i)_{i \in I}$ of $M$ and a multidegree $\mathbf{a} \in \mathds{N}^n$, let
\[ \mathcal{C}(\mathbf{a}) := \set{i \in I \,:\, (R_i(-\mathbf{s}_i))_{\mathbf{a}} \neq 0 } \]
be the set of indices of those summands $R_i(-\mathbf{s}_i)$ that contribute to degree $\mathbf{a}$.
\begin{propo}\label{lem:gstandart}
Let $M$ be a positively $\mathbf{g}$-determined module.
The following statements are equivalent for a Hilbert decomposition $(\mathds{K}[Z_i], \mathbf{s}_i)_{i \in I}$ of $M$:
\begin{enumerate}
\item $\mathbf{s}_i \preceq \mathbf{g}$ for all $i \in I$.
\item $\set{j \,:\, (\mathbf{s}_i)_j = g_j} \subseteq Z_i$ for all $i \in I$.
\item $\mathcal{C}(\mathbf{a}) = \mathcal{C}(\mathbf{a} \wedge \mathbf{g})$ for all $\mathbf{a} \in \mathds{N}^n$. (Here, $\mathbf{a} \wedge \mathbf{g}$ denotes the componentwise minimum.)
\item $\bigoplus_{i} \mathds{K}[Z_i](-\mathbf{s}_i)$ is $\mathbf{g}$-determined as $R$-module.
\end{enumerate}
\end{propo}
\begin{proof}
{\bf (1)$\Longrightarrow$(2)}
Let $i \in I$ and $j \in [n]$ such that $g_j = (\mathbf{s}_i)_j$.
By assumption (1), it holds that $(\mathbf{s}_{i'})_{j} \leq g_j = (\mathbf{s}_i)_j$ for every $i' \in I$.
Hence, $i' \in \mathcal{C}(\mathbf{s}_i + \mathbf{e}_j)$ implies that $j \in Z_{i'}$ and $\mathds{K}[Z_{i'}](-\mathbf{s}_{i'})_{\mathbf{s}_i} \neq 0$.
In particular, $\mathcal{C}(\mathbf{s}_i + \mathbf{e}_j) \subseteq \mathcal{C}(\mathbf{s}_i)$.
But $M$ is $\mathbf{g}$-determined, so
\[|\mathcal{C}(\mathbf{s}_i + \mathbf{e}_j)| = \dim_\mathds{K} M_{\mathbf{s}_i + \mathbf{e}_j} = \dim_\mathds{K} M_{\mathbf{s}_i} = |\mathcal{C}(\mathbf{s}_i)| \]
Thus $i \in \mathcal{C}(\mathbf{s}_i) = \mathcal{C}(\mathbf{s}_i + \mathbf{e}_j)$ and the claim follows.
{\bf (2) $\Longrightarrow$ (3)}
We first show that $\mathcal{C}(\mathbf{a} \wedge \mathbf{g}) \subseteq \mathcal{C}(\mathbf{a})$ for $\mathbf{a} \in \mathds{N}^n$.
Let $i \in \mathcal{C}(\mathbf{a} \wedge \mathbf{g})$.
It suffices to prove that for each $j \in [n]$ with $(\mathbf{s}_i)_j < a_j$, it holds that $j \in Z_i$.
Note that $(\mathbf{s}_i)_j \leq (\mathbf{a} \wedge \mathbf{g})_j$ for every $j$.
Moreover, if $(\mathbf{s}_i)_j < (\mathbf{a} \wedge \mathbf{g})_j$ then $i \in \mathcal{C}(\mathbf{a} \wedge \mathbf{g})$ implies that $j \in Z_i$.
On the other hand, if $(\mathbf{s}_i)_j = (\mathbf{a} \wedge \mathbf{g})_j$ and $(\mathbf{s}_i)_j < a_j$, then $(\mathbf{s}_i)_j = g_j$ and thus $j \in Z_i$ by assumption.
It follows that $\mathcal{C}(\mathbf{a} \wedge \mathbf{g}) \subseteq \mathcal{C}(\mathbf{a})$.
Further, $M$ being $\mathbf{g}$-determined implies as above that $|\mathcal{C}(\mathbf{a})| = |\mathcal{C}(\mathbf{a} \wedge \mathbf{g})|$ and thus $\mathcal{C}(\mathbf{a}) = \mathcal{C}(\mathbf{a} \wedge \mathbf{g})$.
{\bf (3) $\Longrightarrow$ (1)}
For each $i \in I$, it holds that $i \in \mathcal{C}(\mathbf{s}_i) = \mathcal{C}(\mathbf{s}_i \wedge \mathbf{g})$. Therefore $\mathbf{s}_i \preceq \mathbf{s}_i \wedge \mathbf{g} \preceq \mathbf{g}$.
{\bf (1) and (2) $\Longleftrightarrow$ (4)}
This follows easily by considering the Betti numbers of
$$\bigoplus_{i} \mathds{K}[Z_i](-\mathbf{s}_i).$$
\end{proof}
Note that the conditions are not equivalent if $M$ is not $\mathbf{g}$-determined.
Moreover, the existence of a Hilbert decomposition satisfying these conditions for some $\mathbf{g} \in \mathds{N}^n$ does not imply that $M$ is $\mathbf{g}$-determined.
Motivated by the preceding proposition we introduce the following:
\begin{defi}
\begin{enumerate}
\item A Hilbert decomposition $\mathfrak{D}$ of $M$ is called \emph{$\mathbf{g}$-determined} if $M$ is positively $\mathbf{g}$-determined and $\mathfrak{D}$ satisfies the equivalent conditions of Proposition \ref{lem:gstandart}.
\item A Stanley decomposition $(R_i, m_i)_{i \in I}$ of $M$ is called \emph{$\mathbf{g}$-determined} if the underlying Hilbert decomposition $(R_i, \deg m_i)_{i \in I}$ is $\mathbf{g}$-determined.
\end{enumerate}
\end{defi}
Every Hilbert decomposition of $M$ is $\mathbf{g}$-determined for a sufficiently large $\mathbf{g} \in \mathds{N}^n$.
On the other hand, for a fixed $\mathbf{g} \in \mathds{N}^n$ there are only \emph{finitely} many $\mathbf{g}$-determined Hilbert decompositions.
By the following result, it is essentially sufficient to consider $\mathbf{g}$-determined Hilbert (Stanley) decompositions if $M$ is $\mathbf{g}$-determined:
\begin{propo}\label{prop:gdet}
Let $M$ be positively $\mathbf{g}$-determined.
\begin{enumerate}
\item There exists a $\mathbf{g}$-determined Hilbert decomposition of $M$ whose depth equals the Hilbert depth of $M$.
\item Similarly, there exists a $\mathbf{g}$-determined Stanley decomposition of $M$ whose depth equals the Stanley depth of $M$.
\end{enumerate}
\end{propo}
\begin{proof}
This is immediate from Corollary 3.4 and Corollary 4.7 of \cite{IJ}, since the decompositions used there are $\mathbf{g}$-determined.
\end{proof}
\section{Which Hilbert decompositions are induced by Stanley decompositions?}\label{AlgStanley}
In this section we characterize those Hilbert decompositions which are induced by Stanley decompositions.
Throughout the section, we fix a finitely generated $\mathds{Z}^n$-graded $R$-module $M$
and a Hilbert decomposition $\mathfrak{D} = (R_i, \mathbf{s}_i)_{i \in I}$ of $M$.
Without loss of generality, we shall assume that both $M$ and $\mathfrak{D}$ are (positively) $\mathbf{g}$-determined for some $\mathbf{g} \in \mathds{N}^n$.
As above, we set
\[ \mathcal{C}(\mathbf{a}) := \set{i \in I \,:\, (R_i(-\mathbf{s}_i))_{\mathbf{a}} \neq 0 } \]
for each multidegree $\mathbf{a} \in \mathds{N}^n$.
Then, Proposition 4.4 of \cite{IJ} may be reformulated as follows:
\begin{propo}\cite[Proposition 4.4]{IJ}\label{hilbert:stanley}
The given Hilbert decomposition of $M$ is induced by a Stanley decomposition if and only if
there exist homogeneous elements $(m_i)_{i \in I} \subset M$ with $\deg m_i = \mathbf{s}_i$ such that the following holds:
For all $i\in I$ we have that $R_i\cap \Ann m_i=0$, and for all $\mathbf{a} \in \mathds{N}^n, \mathbf{a} \preceq \mathbf{g}$, the set
\begin{equation}\label{eq:testset}
\set{ \mathbf{X}^{\mathbf{a}-\mathbf{s}_i} m_i \,:\, i \in \mathcal{C}(\mathbf{a})}\subset M_{\mathbf{a}}
\end{equation}
is $\mathds{K}$-linearly independent.
\end{propo}
The difficulty for applying this result is that one has to choose the right elements $m_i\in M_{\mathbf{s}_i}$ in order to determine whether a given Hilbert decomposition is induced by a Stanley decomposition.
In the sequel we present a method for circumventing this problem.
The idea is to consider (for all $i\in I$) \qq{generic} elements $\tilde{m}_i \in M_{\mathbf{s}_i}$ and to test (for all $\mathbf{a} \in [\mathbf{0},\mathbf{g}]$) the linear independence of the sets \eqref{eq:testset} via computations of determinants.
We make this precise in the following manner.
\begin{construction}\label{const}
For the given Hilbert decomposition $(R_i, \mathbf{s}_i)_{i \in I}$ of $M$, we construct a collection of matrices $(A_{\mathbf{a}})_{\mathbf{a} \in [\mathbf{0},\mathbf{g}]}$ as follows.
First, for each $\mathbf{a} \in [\mathbf{0},\mathbf{g}]$, we choose a basis $\set{b_{\mathbf{a},1}, \dotsc, b_{\mathbf{a},l_\mathbf{a}}}$ for the $\mathds{K}$-vector space $M_\mathbf{a}$. Then,
for each $i \in I$, we set $\tilde{m}_i := \sum_{j} Y_{i,j} b_{\mathbf{s}_i,j}$ with indeterminate coefficients $Y_{i,1}, \dotsc, Y_{i,l_{\mathbf{s}_i}}$.
The matrix $A_{\mathbf{a}}$ has one row for each of the basis vectors of $M_{\mathbf{a}}$ and one column for each $i \in \mathcal{C}(\mathbf{a})$.
For every such $i$, expand $\mathbf{X}^{\mathbf{a} - \mathbf{s}_i} \tilde{m}_i$ in the chosen basis of $M_{\mathbf{a}}$ and write the coefficients into $A_{\mathbf{a}}$. More explicitly, if
\[
\mathbf{X}^{\mathbf{a} - \mathbf{s}_i}b_{\mathbf{s}_i,j}=\sum_{k}c_{j,k}b_{\mathbf{a},k}
\]
with $c_{j,k}\in \mathds{K}$, then
\[
\mathbf{X}^{\mathbf{a} - \mathbf{s}_i} \tilde{m}_i = \sum_{k}\big(\sum_{j} c_{j,k} Y_{i,j} \big) b_{\mathbf{a},k}.
\]
We set $A_{\mathbf{a}}=(\sum_{j} c_{j,k} Y_{i,j})_{i,k}$.
For the ease of reference, we also set
\[\tilde{I} := \set{(i,j) \,:\, i \in I, 1\leq j \leq l_{\mathbf{s}_i}},\]
so that the entries of $A_{\mathbf{a}}$ live in the polynomial ring $\mathds{K}[Y_{i,j} \,:\, (i,j) \in \tilde{I}]$.
\end{construction}
Note that the entries of $A_{\mathbf{a}}$ are linear polynomials in the $Y_{i,j}$.
Moreover, the matrices $A_{\mathbf{a}}$ are square matrices, because the number of rows equals $\dim M_{\mathbf{a}}$, while the number of columns equals the cardinality of $\mathcal{C}(\mathbf{a})$. But this is also $\dim M_{\mathbf{a}}$, as we started with a Hilbert decomposition.
\begin{ex}
We give a simple example to illustrate the construction.
Let $R = \mathds{K}[X_1, X_2]$ and $M = (X_1, X_2) \oplus (X_1X_2) \subset R^2$.
The module $M$ is positively $\mathbf{g}$-determined for $\mathbf{g} = (1,1)$.
Let $e_1,e_2$ be the generators of $R^2$.
We choose as vector space bases $X_1 e_1, X_2 e_1, X_1 X_2 e_1$ and $X_1 X_2 e_2$ for the corresponding components of $M$.
Consider the Hilbert decomposition
\[ M \cong R(-1,0) \oplus R(0,-1). \]
We have $\tilde{m}_1 = Y_{1,1} X_1 e_1$ and $\tilde{m}_2 = Y_{2,1} X_2 e_1$.
The matrices $A_{\mathbf{a}}$ constructed above are in this case
\begin{equation*}
A_{(1,0)} = \begin{pmatrix} Y_{1,1} \end{pmatrix}
\qquad
A_{(0,1)} = \begin{pmatrix} Y_{2,1} \end{pmatrix}
\qquad
A_{(1,1)} = \begin{pmatrix} Y_{1,1} & Y_{2,1} \\ 0 & 0 \end{pmatrix}.
\end{equation*}
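Note that $\det A_{(1,1)}$ vanishes identically. As will follow from Theorem \ref{thm:main} below, this Hilbert decomposition is therefore not induced by any Stanley decomposition; indeed, any candidate generators $m_1 \in M_{(1,0)}$ and $m_2 \in M_{(0,1)}$ lie in $Re_1$, so $m_1R + m_2R$ cannot contain $X_1X_2e_2$.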
\end{ex}
The next theorem is the main result of this paper.
\begin{theo}\label{thm:main}
With the notation introduced in Construction \ref{const}, the following holds:
\begin{itemize}
\item[(a)] Assume $|\mathds{K}| = \infty$. Then the given Hilbert decomposition of $M$ is induced by a Stanley decomposition if and only if the determinant of $A_{\mathbf{a}}$ is not the zero polynomial for all $\mathbf{a} \in [\mathbf{0},\mathbf{g}]$.
\item[(b)] Assume $|\mathds{K}| = q < \infty$. Let $P := \prod_{\mathbf{a} \in [\mathbf{0},\mathbf{g}]} \det A_{\mathbf{a}}$.
Let further $\tilde{P}$ be the polynomial obtained from $P$ as follows: From every exponent of every monomial in $P$, subtract $q-1$ until the remainder is less than $q$. Then the given Hilbert decomposition of $M$ is induced by a Stanley decomposition if and only if $\tilde{P} \neq 0$.
\end{itemize}
\end{theo}
\begin{proof}
We use the characterization of Proposition~\ref{hilbert:stanley}.
First, note that the assumption $R_i\cap \Ann m_i=0$ in Proposition \ref{hilbert:stanley} is not really needed:
If the sets
$$
\set{ \mathbf{X}^{\mathbf{a}-\mathbf{s}_i} m_i \,:\, i \in \mathcal{C}(\mathbf{a})}
$$
are $\mathds{K}$-linearly independent for all $\mathbf{a} \in [\mathbf{0},\mathbf{g}]$, then the fact that $R_i\cap \Ann m_i=0$ for all $i$ follows automatically.
To see this, assume to the contrary that $R_j\cap \Ann m_j\neq0$ for some $j$. Then there exists a multidegree $\mathbf{d} \in \mathds{N}^n$ such that $\mathbf{X}^\mathbf{d} m_j = 0$ and $\mathbf{X}^\mathbf{d} \in R_j$. But as $M$ is $\mathbf{g}$-determined, this implies that there exists $\mathbf{d}' \in \mathds{N}^n$ with $\mathbf{d}'\preceq \mathbf{d}$ such that $\mathbf{X}^{\mathbf{d}'} m_j = 0$ and $\mathbf{d}' + \mathbf{s}_j \preceq \mathbf{g}$ (remember that the multiplication
map $\cdot X_k : M_{\mathbf{d}+\mathbf{s}_j-\mathbf{e}_k} \longrightarrow M_{\mathbf{d}+\mathbf{s}_j}$ is an isomorphism if $(\mathbf{d}+\mathbf{s}_j)_k > g_k$).
Then the set $\set{ \mathbf{X}^{\mathbf{d}'} m_i \,:\, i \in \mathcal{C}(\mathbf{d}' + \mathbf{s}_j)}$ contains the zero vector and therefore cannot be linearly independent.
Next, consider a choice of elements $m_i = \sum_j y_{i,j} b_{\mathbf{s}_i,j}$ with $(y_{i,j})_{(i,j) \in \tilde{I}} \subset \mathds{K}$.
We now observe that for a fixed $\mathbf{a} \in \mathds{N}^n$, the set $\set{ \mathbf{X}^{\mathbf{a}-\mathbf{s}_i} m_i \,:\, i \in \mathcal{C}(\mathbf{a})}$ is $\mathds{K}$-linearly independent
if and only if $\det A_{\mathbf{a}} ((y_{i,j})_{(i,j) \in \tilde{I}}) \neq 0$.
Hence the elements $m_i$ build a Stanley decomposition if and only if $\prod_{\mathbf{a} \in[\mathbf{0},\mathbf{g}]}\det A_{\mathbf{a}} ((y_{i,j})_{(i,j) \in \tilde{I}}) \neq 0$.
If the field is infinite, then it is possible to choose such $y_{i,j}$ if and only if $P := \prod_{\mathbf{a} \in [\mathbf{0},\mathbf{g}]} \det A_{\mathbf{a}}$ is not the zero polynomial.
This is clearly equivalent to each of the factors $\det A_{\mathbf{a}}$ being nonzero.
If $\mathds{K}$ is finite,
then $P$ has a non-zero value over $\mathds{K}^{|\tilde{I}|}$ if and only if it is not contained
in the ideal $(Y_{i,j}^{q} - Y_{i,j} \,:\, (i,j) \in \tilde{I})$.
This set of generators is already a (universal) Gr\"obner basis, hence $P$ is contained in the ideal if and only if its remainder modulo this Gr\"obner basis is zero, see Cox, Little, O'Shea \cite[p.~82, Corollary 2]{CLS}.
Clearly, $\tilde{P}$ is the remainder of $P$ with respect to this Gr\"obner basis, so the claim follows.
\end{proof}
Note that this theorem gives an effectively computable criterion to decide whether a Hilbert decomposition is induced by a Stanley decomposition.
\begin{rema}\label{rk:char2} Let us add some remarks.
\begin{enumerate}
\item We can say a little more about the structure of $\det A_{\mathbf{a}}$.
Endow the polynomial ring $\mathds{K}[Y_{i,j} \,:\, (i,j) \in \tilde{I}]$ with an $\mathds{N}^{|I|}$-grading by setting $\deg Y_{i,j} := \mathbf{e}_i$.
It follows from the definition that the entries of the column of $A_{\mathbf{a}}$ corresponding to $\tilde{m}_i$ are homogeneous of degree $\mathbf{e}_i$.
Hence $\det A_{\mathbf{a}}$ is a homogeneous polynomial (with respect to this grading) and its degree is a $0/1$-vector.
In particular, all monomials in $\det A_{\mathbf{a}}$ are squarefree.
\item Consider the case that $\dim_\mathds{K} M_{\mathbf{a}} \leq 1$ for all $\mathbf{a} \in \mathds{Z}^n$.
Then, by the above remark, the single entry of $A_{\mathbf{a}}$ is either zero or of the form $c Y_{i,1}$ for some $i \in I$ and $c \in \mathds{K}\setminus\set{0}$.
Hence the Hilbert decomposition is induced by a Stanley decomposition if and only if none of the $A_{\mathbf{a}}$ is the zero matrix.
So, in this case our Theorem \ref{thm:main} specializes to \cite[Proposition 2.8]{BKU}.
In particular, the assumption that $\mathds{K}$ is infinite can be removed from \cite[Conjecture 5.1]{St} in the case that $M$ is an $R$-module with $\dim_\mathds{K} M_{\mathbf{a}} \leq 1$ for all $\mathbf{a} \in \mathds{Z}^n$. While this seems to be known, we could not find a precise reference for it.
\end{enumerate}
\end{rema}
In general, the case distinction on the cardinality of the field cannot be removed.
In fact, if $\mathds{K}$ is finite, then the condition that $\det A_{\mathbf{a}} \neq 0$ for all $\mathbf{a}$ is not sufficient.
On the positive side, we know that the determinants of the $A_{\mathbf{a}}$ are polynomials with squarefree monomials.
Hence, if they are nonzero, then they do not vanish identically even over a finite field.
On the other hand, it might not be possible to find values for the $Y_{i,j}$ such that all determinants are nonzero simultaneously.
The following example shows this phenomenon.
\begin{figure}
\caption{The Hilbert decomposition of Example \ref{ex:finite}.}
\label{fig:exfinite}
\end{figure}
\begin{ex}\label{ex:finite}
Let $R = \mathds{K}[X_1, X_2]$ endowed with the standard $\mathbb{Z}^2$-grading.
Consider the module $M$ with generators $e_1,\dots, e_5$ in degrees $(3,0),(3,0),(2,1),(1,2),(0,3)$ and relations
\[ X_2e_1=X_1e_3,\quad X_2^2e_2=X_1^2e_4,\quad X_2^3e_1+X_2^3e_2=X_1^3e_5.\]
A Hilbert decomposition of $M$ is given by
\begin{gather*}
R_1 = \mathds{K}[X_1, X_2], \qquad R_2 = R_3 = R_4 = \mathds{K}[X_1],\\
R_5 = \mathds{K}[X_1,X_2],\qquad R_6 = R_7 = R_8 = \mathds{K}[X_2],
\end{gather*}
and
\[ \mathbf{s}_1 = \mathbf{s}_2 = (3,0), \mathbf{s}_3 = (2,1), \mathbf{s}_4 = (1,2), \mathbf{s}_5 = (0,3), \mathbf{s}_6 = (2,2), \mathbf{s}_7 = (2,3), \mathbf{s}_8 = (1,3), \]
see Figure \ref{fig:exfinite}.
In each degree there is a matrix $A_{\mathbf{a}}$. Let us compute the matrices in the degrees $(3,1), (3,2)$ and $(3,3)$.
For this we set $\tilde{m}_1 = Y_{11} e_1 + Y_{12} e_2$, $\tilde{m}_3 = Y_3 e_3$, $\tilde{m}_4 = Y_4 e_4$ and $\tilde{m}_5 = Y_5 e_5$.
Moreover, we choose $X_2^k e_1, X_2^k e_2$ as basis for $M_{(3,k)}$. With these conventions, we have the following matrices:
\begin{equation}\label{eq:matrix}
A_{(3,1)} = \begin{pmatrix} Y_{11} & Y_{3} \\ Y_{12} & 0 \end{pmatrix}
\qquad
A_{(3,2)} = \begin{pmatrix} Y_{11} & 0 \\ Y_{12} & Y_{4} \end{pmatrix}
\qquad
A_{(3,3)} = \begin{pmatrix} Y_{11} & Y_{5} \\ Y_{12} & Y_{5} \end{pmatrix}
\end{equation}
Their determinants are, up to sign, $Y_{12} Y_3$, $Y_{11} Y_4$, and $(Y_{11} - Y_{12}) Y_5$.
Hence over the finite field $\mathbb{F}_2$ with two elements, it is not possible to choose values $y_{11}, y_{12}$ for $Y_{11}, Y_{12}$, such that all three determinants are nonzero.
Thus the Hilbert decomposition given above is induced by a Stanley decomposition over $\mathbb{F}_4$, say, but not over $\mathbb{F}_2$.
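For comparison with the criterion of Theorem \ref{thm:main}(b): up to sign, the product of the three determinants above is
\[ Y_{11}Y_{12}Y_3Y_4Y_5\,(Y_{11}-Y_{12})=Y_{11}^{2}Y_{12}Y_3Y_4Y_5-Y_{11}Y_{12}^{2}Y_3Y_4Y_5. \]
For $q=2$ the reduction replaces both squares by first powers, so the two monomials coincide after reduction and cancel. Hence this partial product, and therefore the full product $P$, lies in the ideal generated by the polynomials $Y^{2}-Y$ in the occurring variables, and $\tilde{P}=0$ over $\mathbb{F}_2$, in agreement with the conclusion above.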
\end{ex}
For later use, we note the following consequence of Theorem \ref{thm:main}:
\begin{coro}\label{coro:localglobal}
Assume that $\mathds{K}$ is infinite and let $(R_i, \mathbf{s}_i)_{i \in I}$ be a Hilbert decomposition of $M$.
Then $(R_i, \mathbf{s}_i)_{i \in I}$ is induced by a Stanley decomposition if and only if for each $\mathbf{a} \in \mathds{N}^n, \mathbf{a} \preceq \mathbf{g}$, there exists a linearly independent subset $(m_i)_{i \in \mathcal{C}(\mathbf{a})}$ of $M_\mathbf{a}$, such that $m_i \in \mathbf{X}^{\mathbf{a}-\mathbf{s}_i} M_{\mathbf{s}_i}$ for $i \in \mathcal{C}(\mathbf{a})$.
\end{coro}
\begin{proof}
The condition is clearly equivalent to the non-vanishing of the determinants of $A_\mathbf{a}$ for $\mathbf{a} \in [\mathbf{0},\mathbf{g}]$.
\end{proof}
\section{An algorithm for computing the Stanley depth of a module}\label{Algo}
In this section we describe how Theorem \ref{thm:main} can be used to effectively compute the Stanley depth of a given (finitely generated $\mathds{Z}^n$-graded) module.
We assume (as in Section \ref{AlgStanley}) that $M$ is a fixed finitely generated $\mathds{Z}^n$-graded $R$-module and we fix $\mathbf{g} \in \mathds{N}^n$ such that $M$ is positively $\mathbf{g}$-determined.
By Proposition \ref{prop:gdet}, one only needs to consider $\mathbf{g}$-determined Stanley decompositions.
Hence the Stanley depth of $M$ can be expressed as
\newcommand{\Df}[1]{\mathfrak{D}(#1)}
\[
\sdep M = \max \left\{\depth \mathfrak{D} \,:\,
\begin{aligned}
&\mathfrak{D} \text{ is a $\mathbf{g}$-determined Hilbert decomposition of } M\\
& \text{which is induced by a Stanley decomposition.}
\end{aligned}
\right\}.
\]
A key remark is that there are only \emph{finitely many} $\mathbf{g}$-determined Hilbert decompositions of $M$ for a fixed $\mathbf{g}$.
To actually compute the Stanley depth using this formula, one needs to
\begin{enumerate}
\item iterate over all $\mathbf{g}$-determined Hilbert decompositions $\mathfrak{D}$ of $M$; and
\item decide whether $\mathfrak{D}$ is induced by a Stanley decomposition of $M$.
\end{enumerate}
An algorithm for the first task was presented in \cite[Algorithm 1]{IZ}.
In this section we shall follow this approach and we modify \cite[Algorithm 1]{IZ}, so that it may be used for computing the Stanley depth.
We would like to remark at this point that an alternative approach for this first task is to use a description of
the set of $\mathbf{g}$-determined Hilbert decompositions as the set of lattice points in a certain polytope. We give a precise description of this polytope later, in Proposition \ref{prop:dioph}.
So, in fact one may use standard software to enumerate these points, for example \texttt{SCIP} \cite{SCIP} or \texttt{Normaliz} \cite{N1, N2}.
This idea for enumerating Hilbert decompositions was originally suggested by W. Bruns and described in Katth\"an \cite[Section 7.2.1]{K}.
For the second task, we suggest to apply Theorem \ref{thm:main}. In order to make this effective, one has to choose bases for the components $M_{\mathbf{a}}$ of $M$.
One possibility is to choose standard monomials with respect to some Gr\"obner bases, cf. Eisenbud~\cite[Theorem 15.3]{Eis}.
The computation of the matrices $A_{\mathbf{a}}$ and their determinant can then be done using standard algorithms from constructive module theory.
We refer to Chapter 15 of \cite{Eis} or Chapter 10.4 of Becker and Weispfenning~\cite{BW}.
A possible alternative for the second task is provided by Theorem \ref{thm:stanleypolytop}.
\begin{rema}\label{rem:finiteinfinite}
For the case distinction of Theorem \ref{thm:main}, one has to decide whether the field is finite or not.
We describe one way to avoid this.
With the notation introduced in Construction \ref{const}, let
\[P=P(\mathfrak{D}) := \prod_{\mathbf{a} \in [\mathbf{0},\mathbf{g}]} \det A_\mathbf{a} \in \mathds{K}[Y_{i,j} \,:\, (i,j) \in \tilde{I}].\]
If the field is finite, one has to reduce $P$ to $\tilde{P}$ as described in Theorem \ref{thm:main}, while in the infinite case one can directly use $P$.
But even in the finite case, $P$ equals $\tilde{P}$ if the largest exponent occurring in $P$ is strictly smaller than the cardinality of $\mathds{K}$.
Note that this condition is trivially satisfied if $\mathds{K}$ is infinite, so we can base the case distinction on the question whether the largest exponent in $P$ is at least the cardinality of $\mathds{K}$.
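For instance, if $P = Y_{1,1}^{3}Y_{2,1}$, then $\tilde{P} = Y_{1,1}Y_{2,1}$ over $\mathbb{F}_2$ and over $\mathbb{F}_3$, whereas $P = \tilde{P}$ over every field with at least four elements.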
\end{rema}
\subsection{Enumerating \texorpdfstring{$\mathbf{g}$}{g}-determined Hilbert decompositions via Hilbert partitions}
In the following, we present a modified version of \cite[Algorithm 1]{IZ} for the computation of the Stanley depth, see Algorithm \ref{algo:hdep} below.
Hence we obtain an algorithm for the computation of the Stanley depth of $M$.
As the algorithm in \cite{IZ} is formulated in terms of Hilbert \emph{partitions}, we recall the necessary definitions from \cite{IJ}.
Let the polynomial
\[
H_M(t)_{\preceq \mathbf{g}}:=\sum_{0\preceq \mathbf{a} \preceq \mathbf{g}} (\dim_\mathds{K} M_{\mathbf{a}}) t^\mathbf{a}
\]
be the truncated $\mathds{Z}^n$-graded Hilbert series of $M$.
For $\mathbf{a},\mathbf{b} \in \mathds{Z}^n$ such that $\mathbf{a}\preceq \mathbf{b}$, we set
\[
Q[\mathbf{a},\mathbf{b}](t):=\sum_{\mathbf{a} \preceq \mathbf{c} \preceq \mathbf{b} } t^{\mathbf{c}}
\]
and call it the \emph{polynomial induced by the interval} $[\mathbf{a},\mathbf{b}]$.
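For instance, for $n=2$ we have $Q[(0,0),(1,1)](t) = 1 + t_1 + t_2 + t_1t_2$ and $Q[(1,0),(1,1)](t) = t_1 + t_1t_2$.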
\begin{defi}[\cite{IJ}]\label{defi:Hpartition}
We define a \emph{Hilbert partition} of the polynomial $H_M(t)_{\preceq \mathbf{g}}$ to be a finite sum
\[
\mathfrak{P}: H_M(t)_{\preceq \mathbf{g}}=\sum_{i \in I} Q[\mathbf{a}^i,\mathbf{b}^i](t)
\]
of polynomials induced by the intervals $[\mathbf{a}^i,\mathbf{b}^i]$.
\end{defi}
Note that there are only finitely many Hilbert partitions of $H_M(t)_{\preceq \mathbf{g}}$. On one hand,
every Hilbert partition $\mathfrak{P}$ induces a $\mathbf{g}$-determined Hilbert decomposition $\mathfrak{D}(\mathfrak{P})$ by the following construction.
\begin{construction}[\cite{IJ}]
Let $\mathfrak{P}: \sum_{i \in I} Q[\mathbf{a}^i,\mathbf{b}^i](t)$ be a Hilbert partition of $H_M(t)_{\preceq \mathbf{g}}$.
For $\mathbf{0} \preceq \mathbf{a} \preceq \mathbf{b} \preceq \mathbf{g}$ we set
\[ \mathcal{G}[\mathbf{a},\mathbf{b}] := \set{\mathbf{c} \in [\mathbf{a},\mathbf{b}] \,:\, c_j = a_j \text{ for all } j \text { with } b_j = g_j}.\]
Further, for $\mathbf{b} \preceq \mathbf{g}$ let $Z_\mathbf{b} := \set{j \in [n] \,:\, b_j = g_j}$, $\rho(\mathbf{b})=|Z_\mathbf{b}|$ and let $\mathds{K}[Z_{\mathbf{b}}] := \mathds{K}[X_j \,:\, j \in Z_\mathbf{b}]$.
Then we define
\[
\mathfrak{D}(\mathfrak{P}): M\iso \bigoplus_{i=1}^r\Big(\bigoplus_{\mathbf{c}\in \mathcal{G}[\mathbf{a}^i,\mathbf{b}^i]} K[Z_{\mathbf{b}^i}](-\mathbf{c})\Big).
\]
\end{construction}
On the other hand, by Proposition \ref{lem:gstandart}, each $\mathbf{g}$-determined Hilbert decomposition $(\mathds{K}[Z_i], \mathbf{s}_i)_{i \in I}$ is induced by the Hilbert partition
\[
\mathfrak{P}: H_M(t)_{\preceq \mathbf{g}} = \sum_{i \in I} Q[\mathbf{s}_i,\mathbf{b}^i](t),
\]
where
\[
(\mathbf{b}^i)_j = \begin{cases}
(\mathbf{s}_i)_j &\text{ if } j \notin Z_i,\\
g_j &\text{ if } j \in Z_i.\\
\end{cases}
\]
Hence the $\mathbf{g}$-determined Hilbert decompositions are exactly those Hilbert decompositions which are induced by a Hilbert partition.
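As a small illustration (used only here), let $M = (X_1,X_2) \subset \mathds{K}[X_1,X_2]$ and $\mathbf{g} = (1,1)$, so that $H_M(t)_{\preceq \mathbf{g}} = t_1 + t_2 + t_1t_2$. Then
\[ \mathfrak{P}: H_M(t)_{\preceq \mathbf{g}} = Q[(1,0),(1,1)](t) + Q[(0,1),(0,1)](t) \]
is a Hilbert partition, and the construction above yields the $\mathbf{g}$-determined Hilbert decomposition
\[ \mathfrak{D}(\mathfrak{P}): M \iso \mathds{K}[X_1,X_2](-(1,0)) \oplus \mathds{K}[X_2](-(0,1)), \]
which is induced by the Stanley decomposition $M = X_1\mathds{K}[X_1,X_2] \oplus X_2\mathds{K}[X_2]$ and has depth $1 = \sdep M$.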
Our modified version \cite[Algorithm 1]{IZ} for the computation of the Stanley depth is presented in Algorithm \ref{algo:hdep}.
\allowdisplaybreaks
\begin{algorithm}
\SetKwFunction{FindElementsToCover}{{\bf FindElementsToCover}}
\SetKwFunction{FindPossibleCovers}{{\bf FindPossibleCovers}}
\SetKwFunction{Beg}{begin}
\SetKwFunction{En}{end}
\SetKwFunction{size}{size}
\SetKwFunction{AddInterval}{{\bf AddInterval}}
\SetKwFunction{ComputeDeterminantsProduct}{{\bf ComputeDeterminantsProduct}}
\SetKwFunction{CheckStanleyDepth}{{\bf CheckStanleyDepth}}
\SetKwFunction{Reduce}{{\bf Reduce}}
\SetKwData{Boolean}{Boolean}
\SetKwData{Container}{Container}
\SetKwData{Polynomial}{Polynomial}
\SetKwData{Integer}{Integer}
\caption{Function that checks recursively whether $\sdep M \ge s$} \label{algo:hdep}
\KwData{$\mathbf{g}\in \mathds{N}^n$, $s \in \mathds{N}$, an $R$-module $M$, a polynomial $H(t)=H_M(t)_{\preceq \mathbf{g}}\in \mathds{N}[t_1,...,t_n]$, a \Container $\mathfrak{P}$ and $q \in \mathds{N} \cup\set{\infty}$}
\KwResult{{\it true} if $\sdep M \geq s$}
\Boolean \CheckStanleyDepth{$\mathbf{g},s,M,H,\mathfrak{P},q$}\;
\Begin{
\If {$H\notin \mathds{N}[t_1,...,t_n]$}{\Return{false}\;}
\Container $E=$\FindElementsToCover{$\mathbf{g},s,H$}\;
\nl \If {$\size{E}=0$}{
\nl \Polynomial $P(Y)$:=\ComputeDeterminantsProduct{$\mathbf{g},M,\mathfrak{P}$}\;
\nl P=\Reduce{$P,q$}\;
\nl \If {$P \neq 0$}{\Return{true}\;}
\Return{false}\;}
\Else{
\For { i=\Beg{E} \KwTo i=\En{E} }{
\Container $C[i]$:=\FindPossibleCovers{$\mathbf{g},s,H,E[i]$}\;
\If {\size{$C[i]$}$=0$}{\Return{false}\;}
\For { j=\Beg{$C[i]$} \KwTo j=\En{$C[i]$} }{
\Polynomial $\tilde{H}(t)=H(t)-Q[E[i],C[i][j]](t)$\;
\nl \Container $\tilde{\mathfrak{P}}$:=\AddInterval{$\mathfrak{P},Q[E[i],C[i][j]](t)$}\;
\If{\CheckStanleyDepth{$\mathbf{g},s,M,\tilde{H},\tilde{\mathfrak{P}},q$}=true}{\Return{true}\;}
}
}
\Return{false}\;
}
}
\end{algorithm}
The differences from \cite[Algorithm 1]{IZ} appear at lines 1--5 and in the usage of the extra parameters $M$, $\mathfrak{P}$, and $q$. The container $\mathfrak{P}$ is used for storing the intervals of the Hilbert partition computed so far, and it may be initialized empty.
The $R$-module structure of $M$ is needed for computing the matrices $A_{\mathbf{a}}$ (for all $\mathbf{a} \in [\mathbf{0},\mathbf{g}]$). Moreover, $q$ is the cardinality of the field, which is needed for the reduction.
Assuming that the reader is familiar with \cite[Algorithm 1]{IZ}, we describe below the new key steps of the algorithm:
\begin{itemize}
\item line~1. ~If $E$ is empty, then we have computed a complete Hilbert partition in $\mathfrak{P}$ (since there are no elements in $E$ to cover). Then we have to check using Theorem \ref{thm:main} whether the Hilbert decomposition $\mathfrak{D}(\mathfrak{P})$ is induced by a Stanley partition.
\item line~2. The function {\bf ComputeDeterminantsProduct} computes $P(\mathfrak{D}(\mathfrak{P}))$ as in Remark \ref{rem:finiteinfinite}. Since $P$ depends on the $R$-module structure of $M$, we have to pass it as a parameter.
\item line~3. Here we compute the reduction of $P$ with respect to the cardinality $q$ of the field.
We point out that we can skip this step if $\mathds{K}$ is infinite.
\item line~4. We apply Theorem \ref{thm:main}, so we check whether $P \neq 0$. If the answer is positive, then we are done. We have reached a good leaf of the searching tree.
\item line~5. The child partition $\tilde{\mathfrak{P}}$ is generated here and further investigated in the recursive call.
\end{itemize}
\section{Applications and Examples}\label{ApplicationsExamples}
In this section, we present several applications of Theorem \ref{thm:main}.
To simplify the discussion we assume throughout this section that $|\mathds{K}|=\infty$.
\subsection{Stanley depth of syzygies}\label{subsection:counterex}
In this subsection, we present an example of an $R$-module $M$, such that $\sdep M > \sdep \Syz_R^1(M)$.
This answers Question 1.63 in \cite{H} in the negative.
Let us describe the idea of the construction.
It was observed in \cite{IZ} that there are modules $M$ such that $\sdep M < \sdep M \oplus R$.
But it always holds that $\Syz_R^1(M \oplus R) = \Syz_R^1(M)$.
Hence, we will look for a module whose Stanley depth increases sufficiently under adding copies of the ring, to obtain
\[
\sdep \Syz_R^1(M)=\sdep \Syz_R^1(M\oplus R^{a}) < \sdep M\oplus R^{a}.
\]
In fact, it is already sufficient to choose $M = \mathfrak{m}$, the maximal ideal in some polynomial ring.
It follows from \cite[Proposition 3.6]{BKU} that
\[\sdep \Syz_R^1(\mathfrak{m}) \leq n - \lceil\frac{n-2}{3}\rceil,\]
where $n$ is the number of variables. As $\sdep \mathfrak{m} = \lceil\frac{n}{2}\rceil$, we see that in order to use this upper bound,
we need the Stanley depth of $\mathfrak{m}$ to increase by at least two after adding a suitable number of copies of the ring.
The smallest $n$ where this is possible is six. Indeed, an easy computation following Popescu \cite{PopescuA} shows that the \emph{$\mathds{Z}$-graded Hilbert depth} of $\mathfrak{m}_6 \oplus R^9$ equals $5$,
while $\sdep \Syz_R^1(\mathfrak{m}_6\oplus R^9) = \sdep \Syz_R^1(\mathfrak{m}_6) \leq 6 - \lceil\frac{6-2}{3}\rceil = 4$ (see Uliczka \cite{U} for details about the $\mathds{Z}$-graded Hilbert depth). So $M = \mathfrak{m}_6 \oplus R^9$ is our candidate for a counterexample.
We need to compute a Hilbert decomposition $\mathfrak{D}$ of $M$ with $\depth \mathfrak{D}=5$.
Unfortunately, this module is already too large for the CoCoA implementation of the algorithm in \cite{IZ}.
By Proposition \ref{lem:gstandart}, it is enough to search for a $\mathbf{g}$-determined Hilbert decomposition, where $\mathbf{g}=(1,1,1,1,1,1)$.
These decompositions are described by a system of linear Diophantine inequalities (see Section \ref{ssec:rado} for details) and we can solve the system with the software \texttt{SCIP} \cite{SCIP}.
This yields the Hilbert decomposition of $M$, which is summarized in Table \ref{tab:largehdec}. There, an entry such as $2\times[001111 ,101111]$ is to be interpreted as two copies of the vector space $$\mathds{K}[X_1,X_3,X_4,X_5,X_6](0,0,-1,-1,-1,-1)$$ in the Hilbert decomposition.
\begin{table}[th]
\begin{tabular}{rrr}
$4\times[000000 ,111110]$ & $2\times[000000 ,111101]$ & $3\times[000000 ,111011]$ \\{}
$[000001 ,111101]$ & $[000001 ,111011]$ & $[000001 ,110111]$ \\{}
$[000001 ,101111]$ & $[000001 ,011111]$ & $[000010 ,110111]$ \\{}
$[000010 ,101111]$ & $[000010 ,011111]$ & $[000100 ,111101]$ \\{}
$[000100 ,110111]$ & $[000100 ,101111]$ & $[000100 ,011111]$ \\{}
$[001000 ,101111]$ & $[010000 ,011111]$ & $[100000 ,110111]$ \\{}
$[000111 ,110111]$ & $[001011 ,111011]$ & $[001101 ,101111]$ \\{}
$[001110 ,111110]$ & $[010011 ,011111]$ & $[010101 ,011111]$ \\{}
$[010110 ,111110]$ & $[011001 ,111101]$ & $[011010 ,011111]$ \\{}
$[011100 ,111101]$ & $[100011 ,111011]$ & $[100101 ,110111]$ \\{}
$[100110 ,101111]$ & $[101001 ,111101]$ & $[101010 ,111110]$ \\{}
$[101100 ,111101]$ & $[110001 ,110111]$ & $[110010 ,111110]$ \\{}
$[110100 ,110111]$ & $[111000 ,111011]$ & $2\times[001111 ,101111]$ \\{}
$[101011 ,111011]$ & $[110011 ,111011]$ & $[111100 ,111110]$ \\{}
$3\times[011111 ,011111]$ & $2\times[101111 ,101111]$ & $2\times[110111 ,110111]$ \\{}
$[111011 ,111011]$ & $2\times[111101 ,111101]$ & $[111110 ,111110]$\\{}
$10\times[111111 ,111111]$ &&
\end{tabular}
\caption{A Hilbert decomposition $\mathfrak{D}$ of $M$ with $\depth \mathfrak{D}=5$.}\label{tab:largehdec}
\end{table}
In particular, the Hilbert depth of $M$ equals $5$.
It remains to show that this Hilbert decomposition is induced by a Stanley decomposition of $M$.
Then we can conclude that
\[\sdep M = 5 > 4 \geq \sdep \Syz_R^1(M).\]
For this we prove the following general result:
\begin{propo}\label{prop:max}
Let $\mathfrak{m} \subset R$ be the maximal monomial ideal. Assume that $\mathds{K}$ is infinite. Then for all $\alpha,\beta \in \mathds{N}$ it holds that
\[ \hdep \mathfrak{m}^{\oplus \alpha} \oplus R^{\oplus \beta} = \sdep \mathfrak{m}^{\oplus \alpha} \oplus R^{\oplus \beta}. \]
In fact, every Hilbert decomposition of this module is induced by a Stanley decomposition.
\end{propo}
\begin{proof}
Let $M := \mathfrak{m}^{\oplus \alpha} \oplus R^{\oplus \beta}$.
Further, let $e_1, \dotsc, e_\alpha, f_1, \dotsc, f_\beta$ be the natural set of generators of $R^{\oplus \alpha} \oplus R^{\oplus \beta}$ and consider $M$ as a submodule of this module.
In every nonzero multidegree $\mathbf{a} \in \mathds{N}^n$, the elements $\mathbf{X}^\mathbf{a} e_1, \dotsc, \mathbf{X}^\mathbf{a} e_\alpha, \mathbf{X}^\mathbf{a} f_1, \dotsc, \mathbf{X}^\mathbf{a} f_\beta$ form a vector space basis of $M_{\mathbf{a}}$.
Moreover, a vector space basis of $M_\mathbf{0}$ is given by $f_1, \dotsc, f_\beta$.
Now consider a Hilbert decomposition $(R_i, \mathbf{s}_i)_{i \in I}$ of $M$.
We distinguish two kinds of summands in this decomposition.
First, there are those $i$ where $\mathbf{s}_i = 0$.
Here we set $m_i := \sum_j Z_{ij} f_j$ and we call these generators of the first type.
As we start from a Hilbert decomposition, it is clear that there are exactly $\dim M_\mathbf{0} = \beta$ generators of the first type.
Further, for $i$ with ${\mathbf{s}_i} \neq 0$ we set $m_i := \sum_j Y_{ij} \mathbf{X}^{\mathbf{s}_i} e_j + \sum_j Z_{ij} \mathbf{X}^{\mathbf{s}_i} f_j$.
We call these the generators of the second type.
Next we consider the corresponding matrices as in Theorem \ref{thm:main}.
In the multidegree $\mathbf{0}$, it is easy to see that $A_\mathbf{0}$ is a generic (square) matrix in the variables $Z_{ij}$, and thus its determinant is non-zero.
So consider a multidegree $\mathbf{a} \neq \mathbf{0}$.
Both types of generators can contribute to $M_{\mathbf{a}}$, so the matrix $A_{\mathbf{a}}$ has the following shape:
\[ \begin{tikzpicture}
\matrix[style={
matrix of math nodes, nodes in empty cells,
every node/.append style={text width=0.55cm,align=center,minimum height=5ex},
left delimiter=(, right delimiter=),
}] (mat) {
& & & \\
& & & \\
& & & \\
& & & \\
};
\draw (mat-2-1.south west) -- (mat-2-4.south east);
\draw (mat-1-2.north east) -- (mat-4-2.south east);
\node[font=\large] at (mat-1-1.south east) {$0$};
\node[font=\large] at (mat-1-3.south east) {$Y_{**}$};
\node[font=\large] at (mat-3-1.south east) {$Z_{**}$};
\node[font=\large] at (mat-3-3.south east) {$Z_{**}$};
\draw[decoration={brace,raise=13pt},decorate]
(mat-1-4.north east) -- node[right=15pt] {$\alpha$} (mat-2-4.south east);
\draw[decoration={brace,raise=13pt},decorate]
(mat-3-4.north east) -- node[right=15pt] {$\beta$} (mat-4-4.south east);
\draw[decoration={brace,raise=5pt,mirror},decorate]
(mat-4-1.south west) -- node[below=8pt] {$u$} (mat-4-2.south east);
\end{tikzpicture} \]
Here $u$ stands for the number of generators of the first type contributing to the multidegree $\mathbf{a}$.
Note that every entry on the antidiagonal of $A_{\mathbf{a}}$ is non-zero.
Indeed, for an antidiagonal entry the sum of its row and column indices equals $\alpha + \beta + 1$, while for every entry of the zero block this sum is at most $\alpha + u \leq \alpha + \beta$.
Hence the antidiagonal contributes a non-zero monomial to the Leibniz expansion of the determinant, and since all non-zero entries of the matrix are distinct variables, no cancellation can occur. Thus the determinant is non-zero and the claim follows from Theorem \ref{thm:main}.
\end{proof}
\begin{rema}
\begin{enumerate}
\item
Proposition \ref{prop:max} does not hold as stated for arbitrary ideals.
Consider the case $R = \mathds{K}[X_1,X_2]$ and $M = (X_1 X_2) \oplus R$.
Then $\mathds{K} \oplus X_1\mathds{K}[X_1,X_2] \oplus X_2\mathds{K}[X_1,X_2]$ is a Hilbert decomposition of $M$ that is not induced by a Stanley decomposition.
\item The result also does not hold if one adds shifted copies of the ring. Consider $R = \mathds{K}[X_1,X_2]$ and $M = (X_1,X_2) \oplus R(-1,-1)$. Then $X_1 \mathds{K}[X_1,X_2] \oplus X_2 \mathds{K}[X_1,X_2]$ is a Hilbert decomposition of $M$ which is not induced by a Stanley decomposition.
In fact, by adding shifted copies of the ring, one can always obtain a Hilbert decomposition of Hilbert depth $n$ for an arbitrary graded module $M$. For this, consider a finite free resolution of $M$,
\[ 0 \rightarrow F_p \rightarrow F_{p-1} \rightarrow \dotsb \rightarrow F_0 \rightarrow M \rightarrow 0. \]
Then the sum of the Hilbert series of the even modules equals the Hilbert series of $M$ plus the sum of the Hilbert series of the odd modules, so the former is a Hilbert decomposition of the latter.
\end{enumerate}
\end{rema}
Based on several examples, we conjecture the following strengthening of Proposition \ref{prop:max}:
\begin{conjecture}
For every number of variables and any $\alpha,\beta \in \mathds{N}$, the $\mathds{Z}$-graded Hilbert depth \cite{U} and the Stanley depth of $\mathfrak{m}^{\oplus \alpha} \oplus R^{\beta}$ coincide.
\end{conjecture}
\subsection{The set of \texorpdfstring{$\mathbf{g}$}{g}-determined Stanley decompositions}\label{ssec:rado}
In this section, we show that the set of all $\mathbf{g}$-determined Stanley decompositions can be described by a (large) system of linear Diophantine inequalities, or, equivalently, by the set of $\mathds{Z}^n$-lattice points inside a polytope $\mathscr{P}$.
Consider a finitely generated $\mathds{N}^n$-graded $R$-module $M$ which is $\mathbf{g}$-determined for some $\mathbf{g} \in \mathds{N}^n$.
Let
\[ \Omega := \set{ (\mathds{K}[Z], \mathbf{a}) \,:\, \mathbf{a} \in \mathds{N}^n, \mathbf{a} \preceq \mathbf{g}, Z \subseteq [n], \set{j \,:\, g_j = a_j} \subseteq Z } \]
be the set of all possible building blocks for a $\mathbf{g}$-determined Hilbert decomposition of $M$ (according to Proposition \ref{lem:gstandart}).
We write $\mathds{N}^\Omega$ for the free commutative monoid with generators $\set{\mathbf{e}_\omega \,:\, \omega \in \Omega}$.
A $\mathbf{g}$-determined Hilbert decomposition $(R_i, \mathbf{s}_i)_{i\in I}$ of $M$ can then be identified with the element $\sum_{i\in I} e_{(R_i, \mathbf{s}_i)} \in \mathds{N}^\Omega$.
For a vector $u \in \mathds{N}^\Omega$, we write $u(\mathbf{a},Z)$ for the component of $u$ corresponding to $Z \subseteq [n]$ and $\mathbf{a} \in [\mathbf{0},\mathbf{g}]$.
Note that for $(\mathds{K}[Z], \mathbf{b}) \in \Omega$ and $\mathbf{a} \succeq \mathbf{b}$, it holds that $(\mathds{K}[Z](-\mathbf{b}))_\mathbf{a} \neq 0$ if and only if $\supp(\mathbf{a}-\mathbf{b}) \subseteq Z$. Now, $\mathbf{g}$-determined Hilbert decompositions may be characterized easily.
\begin{propo}\label{prop:dioph}
A vector $u \in \mathds{N}^\Omega$ corresponds to a $\mathbf{g}$-determined Hilbert decomposition of $M$ if and only if it satisfies the following equalities:
\begin{align}\label{eq:hilb}
\sum_{\mathbf{b} \in [\mathbf{0},\mathbf{a}]} \sum_{\substack{Z \subseteq [n] \\ \supp(\mathbf{a}-\mathbf{b}) \subseteq Z}} u(\mathbf{b},Z) &= \dim_\mathds{K} M_\mathbf{a} & \text{ for } \mathbf{a} \in [\mathbf{0},\mathbf{g}].
\end{align}
\end{propo}
So, the set of $\mathbf{g}$-determined Hilbert decompositions corresponds naturally to the set of $\mathds{Z}^n$-lattice points in the polytope $\mathscr{H}$ of non-negative solutions to \eqref{eq:hilb}.
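For instance (a small illustration, not needed later), take $M = (X_1,X_2) \subset \mathds{K}[X_1,X_2]$ and $\mathbf{g} = (1,1)$. For $\mathbf{a} = (1,1)$, the membership $(\mathds{K}[Z],\mathbf{b}) \in \Omega$ together with $\supp(\mathbf{a}-\mathbf{b}) \subseteq Z$ forces $Z = \set{1,2}$ for every $\mathbf{b} \preceq \mathbf{a}$, so the corresponding equality in \eqref{eq:hilb} reads
\[ u((0,0),\set{1,2}) + u((1,0),\set{1,2}) + u((0,1),\set{1,2}) + u((1,1),\set{1,2}) = \dim_\mathds{K} M_{(1,1)} = 1. \]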
The set of \emph{$\mathbf{g}$-determined Stanley decompositions} is a subset of this.
By the following result, this subset may be defined by linear inequalities as well, i.e.\ the $\mathbf{g}$-determined Hilbert decompositions of $M$ which are induced by $\mathbf{g}$-determined Stanley decompositions correspond to the $\mathds{Z}^n$-lattice points in a certain polytope $\mathscr{P}$.
This is the main result of this subsection.
\begin{theo}\label{thm:stanleypolytop}
A vector $u \in \mathds{N}^\Omega$ corresponds to a $\mathbf{g}$-determined Hilbert decomposition of $M$ which is induced by a $\mathbf{g}$-determined Stanley decomposition, if and only if it satisfies both \eqref{eq:hilb} and in addition the following inequalities:
\begin{align}\label{eq:stan}
\sum_{\mathbf{b} \in J} \sum_{\substack{Z \subseteq [n] \\ \supp(\mathbf{a}-\mathbf{b}) \subseteq Z}} u(\mathbf{b},Z) &\leq \dim_\mathds{K} \sum_{\mathbf{b} \in J} \mathbf{X}^{\mathbf{a}-\mathbf{b}} M_{\mathbf{b}} & \text{ for } \mathbf{a} \in [\mathbf{0},\mathbf{g}], J \subseteq [\mathbf{0},\mathbf{a}].
\end{align}
Here, the sum on the right-hand side is a sum of vector spaces.
\end{theo}
\begin{rema}
The system of inequalities \eqref{eq:stan} is rather large, so it does not seem to be feasible for the actual computation of the Stanley depth.
However, the theorem shows that the set of all Stanley decompositions has a nice structure.
Note that the integer solutions of \eqref{eq:stan} for a fixed $\mathbf{a} \in [\mathbf{0}, \mathbf{g}]$ form a discrete polymatroid, cf.~Herzog and Hibi \cite{HH2}.
So the set of $\mathbf{g}$-determined Stanley decompositions may also be seen as an intersection of discrete polymatroids with the polytope $\mathscr{H}$.
\end{rema}
The proof uses Rado's theorem, which we recall for the reader's convenience.
Recall that a \emph{transversal} of a set system $A_1, \dotsc , A_r$ is a collection of pairwise different elements $a_1 \in A_1, a_2 \in A_2, \dotsc, a_r \in A_r$.
\begin{theo}[Rado's theorem, VIII.2.3 \cite{Aigner}]\label{thm:rado}
Let $M$ be a matroid on a ground set $B$ with rank function $r$ and let $\mathfrak{A}: A_1, \dotsc , A_r \subseteq B$ be a collection of subsets of $B$.
Then $\mathfrak{A}$ has an independent transversal if and only if
\[ |I| \leq r\left(\bigcup_{i \in I} A_i\right) \]
for every subset $I \subset [r]$.
\end{theo}
\noindent We use the following variant of Rado's theorem.
\begin{coro}\label{cor:rado}
Let $V$ be a vector space and $\mathcal{V}: V_1, \dotsc, V_s$ a collection of linear subspaces of $V$.
The following are equivalent:
\begin{enumerate}
\item There exists an independent transversal of $\mathcal{V}$, i.e. a linearly independent family of vectors $v_1 \in V_1, v_2 \in V_2, \dotsc, v_s \in V_s$.
\item For each subset $I \subseteq \set{1, \dots, s}$, the following inequality holds:
\[ |I| \leq \dim_\mathds{K} \sum_{i \in I} V_i. \]
Here, the sum on the right-hand side is a sum of vector spaces.
\end{enumerate}
\end{coro}
\begin{proof}
The inequality is clearly necessary, so we only need to show the sufficiency.
Let $A_i$ be a basis for $V_i, 1 \leq i \leq s$.
Consider the union $\bigcup_i A_i$ as a matroid, where a set of vectors is independent if it is linearly independent over $\mathds{K}$.
By Rado's theorem \ref{thm:rado}, $A_1, \dotsc, A_s$ has an independent transversal if and only if
\[ |I| \leq \dim_\mathds{K} \lin\left(\bigcup_{i \in I} A_i\right) = \dim_\mathds{K} \sum_{i \in I} V_i \]
for every subset $I \subset [s]$.
Hence the inequality in our claim is sufficient.
\end{proof}
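For instance, if $V = \mathds{K}^2$ and $V_1 = V_2 = \lin\set{v}$ for some $v \neq 0$, then $\dim_\mathds{K}(V_1 + V_2) = 1 < 2 = |I|$ for $I = \set{1,2}$, so condition (2) fails and indeed no linearly independent transversal $v_1 \in V_1$, $v_2 \in V_2$ exists.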
\begin{proof}[Proof of Theorem \ref{thm:stanleypolytop}]
Assume that $u \in \mathds{N}^\Omega$ is indeed a $\mathbf{g}$-determined Hilbert decomposition of the module $M$.
By Corollary \ref{coro:localglobal}, the Hilbert decomposition $u$ corresponds to a Stanley decomposition
if and only if for each $\mathbf{a} \in [\mathbf{0},\mathbf{g}]$, there are linearly independent elements $( m(\mathbf{b},Z,i) )_{(\mathbf{b},Z,i) \in \Lambda}$, such that $m(\mathbf{b},Z,i) \in \mathbf{X}^{\mathbf{a}-\mathbf{b}} M_\mathbf{b}$ for all $(\mathbf{b},Z,i) \in \Lambda$, where
\[ \Lambda := \Lambda(\mathbf{a},u) := \set{(\mathbf{b},Z,i) \,:\, \mathbf{b} \in [\mathbf{0},\mathbf{a}], Z \subset [n], \supp(\mathbf{a}-\mathbf{b}) \subset Z, 1\leq i \leq u(\mathbf{b},Z)}. \]
So in particular, the inequality in our claim is necessary.
We apply the preceding Corollary \ref{cor:rado} to the vector space $M_\mathbf{a}$ and the collection $(\mathbf{X}^{\mathbf{a}-\mathbf{b}} M_\mathbf{b})_{(\mathbf{b},Z,i) \in \Lambda}$ of subspaces.
For a subset $I \subset \Lambda$, consider
\[ \bar{I} := \set{(\mathbf{b},Z,i) \in \Lambda \,:\, (\mathbf{b},Z', i') \in I \text{ for some } Z', i'}. \]
It clearly holds that
\[ \sum_{(\mathbf{b},Z,i) \in I} \mathbf{X}^{\mathbf{a}-\mathbf{b}} M_\mathbf{b} = \sum_{(\mathbf{b},Z,i) \in \bar{I}} \mathbf{X}^{\mathbf{a}-\mathbf{b}} M_\mathbf{b}, \]
hence it suffices to consider subsets of the form $\bar{I}$, and these are in bijection with subsets $J \subseteq [\mathbf{0},\mathbf{a}]$.
Hence our inequalities are also sufficient.
\end{proof}
\end{document} |
\begin{document}
\title{Layer potentials for general linear elliptic systems}
\author{Ariel Barton}
\address{Ariel Barton, Department of Mathematical Sciences,
309 SCEN,
University of Ar\-kan\-sas,
Fayetteville, AR 72701}
\email{[email protected]}
\subjclass[2010]{Primary
35J58,
Secondary
31B10,
31B20
}
\begin{abstract}
In this paper we construct layer potentials for elliptic differential operators using the Lax-Milgram theorem, without recourse to the fundamental solution; this allows layer potentials to be constructed in very general settings. We then generalize several well known properties of layer potentials for harmonic and second order equations, in particular the Green's formula, jump relations, adjoint relations, and Verchota's equivalence between well-posedness of boundary value problems and invertibility of layer potentials.
\end{abstract}
\keywords{Higher order differential equation, layer potentials, Dirichlet problem, Neumann problem}
\maketitle
\tableofcontents
\section{Introduction}
There is by now a very rich theory of boundary value problems for Laplace's operator, and more generally for second order divergence form operators $-\Div \mat A\nabla$. The Dirichlet problem
\begin{equation*}-\Div \mat A\nabla u=0 \text{ in }\Omega,\quad u=f \text{ on }\partial\Omega, \quad \doublebar{u}_\XX\leq C\doublebar{f}_\DD\end{equation*}
and the Neumann problem
\begin{equation*}-\Div \mat A\nabla u=0 \text{ in }\Omega,\quad \nu\cdot\mat A\nabla u=g \text{ on }\partial\Omega, \quad \doublebar{u}_\XX\leq C\doublebar{g}_\NN\end{equation*}
are known to be well-posed for many classes of coefficients $\mat A$ and domains~$\Omega$, and with solutions in many spaces $\XX$ and boundary data in many boundary spaces $\DD$ and~$\NN$.
A great deal of current research consists in extending these well posedness results to more general situations, such as operators of order $2m\geq 4$ (for example, \cite{MazMS10,KilS11B,MitMW11, MitM13A, BreMMM14, BarHM17pC}; see also the survey paper \cite{BarM16B}), operators with lower order terms (for example, \cite{BraBHV12,Tao12,Fel16,PanT16,DavHM16p}) and operators acting on functions defined on manifolds (for example, \cite{MitMT01,MitMS06,KohPW13}).
Two very useful tools in the second order theory are the double and single layer potentials given by
\begin{align}
\label{eqn:introduction:D}
\D^{\mat A}_\Omega f(x) &= \int_{\partial\Omega} \overline{\nu\cdot \mat A^*(y)\nabla_{y} E^{L^*}(y,x)} \, f(y)\,d\sigma(y)
,\\
\label{eqn:introduction:S}
\s^L_\Omega g(x) &= \int_{\partial\Omega}\overline{E^{L^*}(y,x)} \, g(y)\,d\sigma(y)
\end{align}
where $\nu$ is the unit outward normal to~$\Omega$ and where $E^L(y,x)$ is the fundamental solution for the operator~$L=-\Div \mat A\nabla$, that is, the formal solution to $L E^L(\,\cdot\,,x)=\delta_x$. These operators are inspired by a formal integration by parts
\begin{align*}u(x)
&= \int_\Omega \overline{L^*E^{L^*}(\,\cdot\,,x)}\,u
\\&=- \int_{\partial\Omega}\!\! \overline{\nu\cdot \mat A^*\nabla E^{L^*}(\,\cdot\,,x)} \, u\,d\sigma
+\int_{\partial\Omega}\!\!\overline{E^{L^*}(\,\cdot\,,x)} \, \nu\cdot \mat A\nabla u\,d\sigma
+\int_\Omega \overline{E^{L^*}(\,\cdot\,,x)}\,Lu\end{align*}
which gives the Green's formula
\begin{equation*}u(x) = -\D^{\mat A}_\Omega (u\big\vert_{\partial\Omega})(x) + \s^L_\Omega (\nu\cdot \mat A\nabla u)(x)\quad\text{if $x\in\Omega$ and $Lu=0$ in $\Omega$}\end{equation*}
at least for relatively well-behaved solutions~$u$.
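For instance, in the prototypical case $\mat A=\mat I$, $L=-\Delta$ and $d\geq 3$, the fundamental solution is $E(y,x)=c_d\abs{x-y}^{2-d}$ for a suitable positive dimensional constant $c_d$, and formulas \eqref{eqn:introduction:D} and~\eqref{eqn:introduction:S} reduce to the classical harmonic double and single layer potentials
\begin{equation*}\D^{\mat I}_\Omega f(x) = \int_{\partial\Omega} \nu(y)\cdot\nabla_{y} E(y,x) \, f(y)\,d\sigma(y),\qquad \s^{-\Delta}_\Omega g(x) = \int_{\partial\Omega} E(y,x)\, g(y)\,d\sigma(y);\end{equation*}
here $E$ is real-valued and $L^*=L$, so the complex conjugates play no role.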
Such potentials have many well known properties beyond the above Green's formula, including jump and adjoint relations. In particular, by a clever argument of Verchota \cite{Ver84} and some extensions in \cite{BarM13,BarM16A}, well posedness of the Dirichlet problem is equivalent to invertibility of the operator $g\mapsto \s^L_\Omega g\big\vert_{\partial\Omega}$, and well posedness of the Neumann problem is equivalent to invertibility of the operator $f\mapsto \nu\cdot\mat A\nabla\D^{\mat A}_\Omega f$.
This equivalence has been used to solve boundary value problems in many papers, including
\cite{FabJR78,Ver84,DahK87,FabMM98,Zan00} in the case of harmonic functions (that is, the case $\mat A=\mat I$ and $L=-\Delta$) and
\cite{AlfAAHK11, Bar13,Ros13, HofKMP15B, HofMayMou15,HofMitMor15, BarM16A} in the case of more general operators under various assumptions on the coefficients~$\mat A$. Layer potentials have been used in other ways in \cite{PipV92,KenR09,Rul07,Mit08,Agr09,MitM11,BarM13,AusM14}. Boundary value problems were studied using a functional calculus approach in \cite{AusAH08,AusAM10,AusA11, AusR12, AusM14, AusS14p, AusM14p}; in \cite{Ros13} it was shown that certain operators arising in this theory coincided with layer potentials.
Thus, it is desirable to extend layer potentials to more general situations. One may proceed as in the homogeneous second order case, by constructing the fundamental solution, formally integrating by parts, and showing that the resulting integral operators have appropriate properties. In the case of higher order operators with constant coefficients, this has been done in \cite{Agm57,CohG83, CohG85, Ver05, MitM13A, MitM13B}. However, all three steps are somewhat involved in the case of variable coefficient operators (although see \cite{DavHM16p,Bar16} for fundamental solutions, for second order operators with lower order terms, and for higher order operators without lower order terms, respectively).
An alternative, more abstract construction is possible. The fundamental solution for various operators was constructed in \cite{HofK07,Bar16,DavHM16p} as the kernel of the Newton potential, which may itself be constructed very simply using the Lax-Milgram theorem. It is possible to rewrite the formulas \eqref{eqn:introduction:D} and~\eqref{eqn:introduction:S} for the second order layer potential directly in terms of the Newton potential, without mediating by the fundamental solution, and this construction generalizes very easily. It is this approach that was taken in \cite{BarHM15p,BarHM17pA}.
In this paper we will provide the details of this construction in a very general context. Roughly, this construction is valid for all differential operators $L$ that may be inverted via the Lax-Milgram theorem, and all domains $\Omega$ for which suitable boundary trace operators exist. We will also show that many properties of traditional layer potentials are valid in the general case.
The organization of this paper is as follows. The goal of this paper is to construct layer potentials associated to an operator~$L$ as bounded linear operators from a space $\DD_2$ or $\NN_2$ to a Hilbert space $\HH_2$ given certain conditions on $\DD_2$, $\NN_2$ and $\HH_2$.
In Section~\ref{sec:dfn} we will list these conditions and define our terminology. Because these properties are somewhat abstract, in
Section~\ref{sec:example} we will give an example of spaces $\HH_2$, $\DD_2$ and $\NN_2$ that satisfy these conditions in the case where $L$ is a higher order differential operator in divergence form without lower order terms.
This is the context of the paper \cite{BarHM17pC}; we intend to apply the results of the present paper therein to solve the Neumann problem with boundary data in $L^2$ for operators with $t$-independent self-adjoint coefficients.
In Section~\ref{sec:D:S} of this paper we will provide the details of the construction of layer potentials.
We will prove the higher order analogues for the Green's formula, adjoint relations, and jump relations in Section~\ref{sec:properties}.
Finally, in Section~\ref{sec:invertible} we will show that the equivalence between well posedness of boundary value problems and invertibility of layer potentials of \cite{Ver84,BarM13,BarM16A} extends to the higher order case.
\section{Terminology}
\label{sec:dfn}
We will construct layer potentials $\D^\B_\Omega$ and $\s^L_\Omega$ using the following objects.
\begin{itemize}
\item Two Hilbert spaces $\HH_1$ and $\HH_2$.
\item Six vector spaces $\widehat\HH_1^\Omega$, $\widehat\HH_1^\CC$, $\widehat\HH_2^\Omega$, $\widehat\HH_2^\CC$, $\widehat\DD_1$ and $\widehat\DD_2$.
\item Bounded bilinear functionals $\B:\HH_1\times\HH_2\mapsto \C$, $\B^\Omega:\HH_1^\Omega\times\HH_2^\Omega\mapsto \C$, and $\B^\CC:\HH_1^\CC\times\HH_2^\CC\mapsto \C$. (We will define the spaces $\HH_j^\Omega$, $\HH_j^\CC$ momentarily.)
\item Bounded linear operators $\Tr_1:\HH_1\mapsto\widehat\DD_1$ and $\Tr_2:\HH_2\mapsto\widehat\DD_2$.
\item Bounded linear operators from $\HH_j$ to $\widehat\HH_j^\Omega$ and $\widehat\HH_j^\CC$; we shall denote these operators $\big\vert_\Omega$ and~$\big\vert_\CC$.
\end{itemize}
We will work not with the spaces $\widehat \HH_j^\Omega$, $\widehat\HH_j^\CC$ and $\widehat\DD_j$, but with the spaces
$ \HH_j^\Omega$, $\HH_j^\CC$ and $\DD_j$ defined as follows.
\begin{gather}
\HH_j^\Omega=\{F\big\vert_\Omega:F\in\HH_j\}/\sim\text{ with norm }\doublebar{f}_{\HH^\Omega_j} = \inf\{\doublebar{F}_{\HH_j}: F\big\vert_\Omega=f\}
,\\
\HH_j^\CC=\{F\big\vert_\CC : F\in\HH_j\}/\sim\text{ with norm }\doublebar{f}_{\HH^\CC_j} = \inf\{\doublebar{F}_{\HH_j}: F\big\vert_\CC=f\}
,\\
\DD_j=\{\Tr_j F:F\in\HH_j\}/\sim\text{ with norm }\doublebar{f}_{\DD_j} = \inf\{\doublebar{F}_{\HH_j}: \Tr_j F=f\}
\end{gather}
where $\sim$ denotes the equivalence relation $f\sim g$ if $\doublebar{f-g}=0$.
We impose the following conditions on the given function spaces and operators. We require that there is some $\lambda>0$ such that for every $u\in\HH_1$, $v\in \HH_2$ and $\varphi$,~$\psi\in \HH_j$ for $j=1$ or $j=2$, the following conditions are valid.
\begin{gather}
\label{cond:coercive}
\sup_{w\in \HH_1\setminus\{0\}} \frac{\abs{\B(w,v)}}{\doublebar{w}_{\HH_1}}\geq \lambda \doublebar{v}_{\HH_2},\quad
\sup_{w\in \HH_2\setminus\{0\}} \frac{\abs{\B(u,w)}}{\doublebar{w}_{\HH_2}}\geq \lambda \doublebar{u}_{\HH_1}.
\\
\label{cond:local}
\B(u,v) = \B^\Omega(u\big\vert_{\Omega},v\big\vert_\Omega) +\B^\CC(u\big\vert_{\CC}, v\big\vert_{\CC}).
\\
\label{cond:trace:extension}
{\text{If $\Tr_j \varphi=\Tr_j \psi$, then there is a $w\in\HH_j$ with}}
\qquad\qquad \qquad\qquad\qquad \qquad
\\\nonumber \qquad\qquad \qquad\qquad\qquad \qquad
{\text{
$w\big\vert_\Omega=\varphi\big\vert_\Omega$, $w\big\vert_{\CC}=\psi\big\vert_{\CC}$ and $\Tr_j w= \Tr_j\varphi=\Tr_j\psi$.}}
\end{gather}
We now introduce some further terminology.
We will define the linear operator $L$ as follows. If $u\in \HH_2$, let $Lu$ be the element of the dual space $\HH_1^*$ to $\HH_1$ given by
\begin{equation}\label{dfn:L}\langle\varphi,Lu\rangle = \B(\varphi,u).\end{equation}
Notice that $L$ is bounded $\HH_2\mapsto\HH_1^*$.
If $u\in \HH_2^\Omega$, we let $(Lu)\big\vert_\Omega$ be the element of the dual space to $\{\varphi\in\HH_1:\Tr_1 \varphi=0\}$ given by
\begin{equation}\label{dfn:L:interior}
\langle\varphi,(Lu)\big\vert_\Omega\rangle = \B^\Omega(\varphi\big\vert_\Omega,u)\quad\text{for all $\varphi\in\HH_1$ with $\Tr_1\varphi=0$}
.\end{equation}
If $u\in\HH_2$, we will often use $(Lu)\big\vert_\Omega$ as shorthand for $(L(u\big\vert_\Omega))\big\vert_\Omega$.
We will primarily be concerned with the case $(Lu)\big\vert_\Omega=0$.
Let
\begin{equation}\NN_2=\DD_1^*, \qquad \NN_1=\DD_2^*\end{equation}
denote the dual spaces to $\DD_1$ and $\DD_2$.
We will now define the Neumann boundary values of an element $u$ of $\HH_2^\Omega$ that satisfies $(Lu)\big\vert_\Omega=0$.
If $\Tr_1\varphi=\Tr_1\psi$ and $(Lu)\big\vert_\Omega=0$, then $\B^\Omega(\varphi\big\vert_\Omega-\psi\big\vert_\Omega,u)=0$ by definition of $(Lu)\big\vert_\Omega$. Thus, $\B^\Omega(\varphi\big\vert_\Omega,u)$ depends only on~$\Tr_1\varphi$, not on~$\varphi$, and so $\M_\Omega^\B u$ defined as follows is a well defined element of $\NN_2$.
\begin{equation}\label{eqn:Neumann}\langle \Tr_1\varphi,\M_\Omega^\B u \rangle = \B^\Omega(\varphi\big\vert_\Omega,u)\quad\text{for all $\varphi\in \HH_1$}.\end{equation}
We may compute
\begin{equation*}\abs{\langle \arr f,\M_\Omega^\B u \rangle} \leq \doublebar{\B^\Omega} \inf\{\doublebar{\varphi}_{\HH_1}:\Tr_1\varphi=\arr f\} \doublebar{u}_{\HH_2^\Omega}
=
\doublebar{\B^\Omega} \doublebar{\arr f}_{\DD_1}\doublebar{u}_{\HH_2^\Omega}\end{equation*}
and so we have the bound
$\doublebar{\M_\Omega^\B u }_{\NN_2}\leq \doublebar{\B^\Omega}\doublebar{u}_{\HH_2^\Omega}$.
If $(Lu)\big\vert_\Omega\neq 0$, then the linear operator given by $\varphi\mapsto \B^\Omega(\varphi\big\vert_\Omega,u)$ is still of interest. We will denote this operator $L(u\1_\Omega)$; that is,
if $u\in\HH_2^\Omega$ (or $u\in\HH_2$ as before), then $L(u\1_\Omega)\in \HH_1^*$ is defined by
\begin{equation}\label{dfn:L:singular}
\langle\varphi,L(u\1_\Omega)\rangle = \B^\Omega(\varphi\big\vert_\Omega,u)\quad\text{for all $\varphi\in\HH_1$}
.\end{equation}
\section{An example: higher order differential equations}
\label{sec:example}
In this section, we provide an example of a situation in which the terminology of Section~\ref{sec:dfn} and the construction and properties of layer potentials of Sections~\ref{sec:D:S} and~\ref{sec:properties} may be applied. We remark that this is the situation of \cite{BarHM17pC}, and that we will therein apply the results of this paper.
Let $m\geq 1$ be an integer, and let $L$ be an elliptic differential operator of the form
\begin{equation}
\label{eqn:L}
Lu=(-1)^m\sum_{\abs\alpha=\abs\beta= m} \partial^\alpha(A_{\alpha\beta} \partial^\beta u)\end{equation}
for some bounded measurable coefficients~$\mat A$ defined on $\R^d$. Here $\alpha$ and $\beta$ are multiindices in $\N_0^d$, where $\N_0$ denotes the nonnegative integers.
As is standard in the theory, we say that $Lu=0$ in an open set $\Omega$ in the weak sense if
\begin{equation}\label{eqn:weak}\int_\Omega \sum_{\abs\alpha=\abs\beta= m} \partial^\alpha\varphi \,A_{\alpha\beta}\, \partial^\beta u=0 \quad\text{for all $\varphi\in C^\infty_0(\Omega)$}.\end{equation}
We impose the following ellipticity condition: we require that for some $\lambda>0$,
\begin{equation*}\Re \sum_{{\abs\alpha=\abs\beta= m}} \int_{\R^d} \overline{\partial^\alpha\varphi}\,A_{\alpha\beta}\,\partial^\beta\varphi\geq \lambda \doublebar{\nabla^m\varphi}_{L^2(\R^d)}^2 \quad \text{for all $\varphi\in\dot W^2_m(\R^d)$.}
\end{equation*}
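We remark, for orientation, that a simple sufficient condition for this bound is pointwise ellipticity of the coefficients: if
\begin{equation*}\Re \sum_{\abs\alpha=\abs\beta= m} \overline{\xi_\alpha}\,A_{\alpha\beta}(x)\,\xi_\beta\geq \lambda \sum_{\abs\alpha= m}\abs{\xi_\alpha}^2\end{equation*}
for almost every $x\in\R^d$ and every array $(\xi_\alpha)_{\abs\alpha=m}$ of complex numbers, then taking $\xi_\alpha=\partial^\alpha\varphi(x)$ and integrating over $\R^d$ recovers the ellipticity condition above, with the convention $\abs{\nabla^m\varphi}^2=\sum_{\abs\alpha=m}\abs{\partial^\alpha\varphi}^2$. Only the integrated inequality is assumed in this paper.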
Let $\Omega\subset\R^d$ be a Lipschitz domain with connected boundary, and let $\CC=\R^d\setminus\bar\Omega$ denote the interior of its complement. Observe that $\partial\Omega=\partial\CC$.
The following function spaces and linear operators satisfy the conditions of Section~\ref{sec:dfn}.
\begin{itemize}
\item $\HH_1=\HH_2=\HH$ is the homogeneous Sobolev space $\dot W^2_m(\R^d)$ of locally integrable functions~$\varphi$ (or rather, of equivalence classes of functions modulo polynomials of degree $m-1$) with weak derivatives of order $m$, and such that the $\HH$-norm given by $\doublebar{\varphi}_\HH=\doublebar{\nabla^m\varphi}_{L^2(\R^d)}$ is finite. This space is a Hilbert space with inner product $\langle \varphi,\psi\rangle =\sum_{\abs\alpha=m} \int_{\R^d} \overline{\partial^\alpha\varphi}\,\partial^\alpha\psi$.
\item $\widehat\HH^\Omega$ and $\widehat\HH^\CC$ are the Sobolev spaces $\widehat\HH^\Omega=\dot W^2_m(\Omega)=\{\varphi:\nabla^m\varphi\in L^2(\Omega)\}$ and $\widehat\HH^\CC=\dot W^2_m(\CC)=\{\varphi:\nabla^m\varphi\in L^2(\CC)\}$ with the obvious norms.
\item $\widehat \DD$ denotes the (vector-valued) Besov space $ \dot B^{2,2}_{1/2}(\partial\Omega)$ of locally integrable functions modulo constants with norm
\begin{equation*}\doublebar{f}_{\dot B^{2,2}_{1/2}(\partial\Omega)}
= \biggl(\int_{\partial\Omega}\int_{\partial\Omega} \frac{\abs{f(x)-f(y)}^2}{\abs{x-y}^d}\,d\sigma(x)\,d\sigma(y)\biggr)^{1/2}.\end{equation*}
\item In \cite{Bar16pA,BarHM15p,BarHM17pC}, $\Tr$ is the linear operator defined on $ \HH$ by $\Tr u=\Trace^\Omega \nabla^{m-1}u\big\vert_\Omega$, where $\Trace^\Omega$ is the standard boundary trace operator of Sobolev spaces. (Given a suitable modification of the trace space $\DD$, it is also possible to choose
$\Tr u = \{\Trace^\Omega \partial^\gamma u\}_{\abs\gamma\leq m-1}$, or more concisely $\Tr u = (\Trace^\Omega u,\partial_\nu u,\dots,\partial_\nu^{m-1} u)$, where $\nu$ is the unit outward normal, so that the boundary derivatives of $u$ of all orders are recorded. See, for example,
\cite{
PipV95B,She06B,
Agr07,
MazMS10,MitM13A}.)
\item $\B$ is the bilinear operator on $\HH\times\HH$ given by
\begin{equation*}\B(\psi,\varphi) = \sum_{\abs\alpha=\abs\beta= m}\int_{\R^d} \overline{\partial^\alpha\psi}\,A_{\alpha\beta}\,\partial^\beta\varphi.\end{equation*}
$\B^\Omega$ and $\B^\CC$ are defined analogously to~$\B$, but with the integral over $\R^d$ replaced by an integral over $\Omega$ or~$\CC$.
\end{itemize}
$\B$, $\B^\Omega$ and $\B^\CC$ are clearly bounded and bilinear, and the restriction operators $\big\vert_\Omega:\HH\mapsto\widehat\HH^\Omega$,
$\big\vert_\CC:\HH\mapsto\widehat\HH^\CC$ are bounded and linear.
The trace operator $\Tr$ is linear. If $\Omega=\R^d_+$ is the half-space, then boundedness of $\Tr:\HH\mapsto\widehat\DD$ was established in \cite[Section~5]{Jaw77}; this extends to the case where $\Omega$ is the domain above a Lipschitz graph via a change of variables. If $\Omega$ is a bounded Lipschitz domain, then boundedness of $\Tr:W\mapsto\widehat\DD$, where $W$ is the inhomogeneous Sobolev space with norm $\sum_{k=0}^m \doublebar{\nabla^k\varphi}_{L^2(\R^d)}$, was established in \cite[Chapter~V]{JonW84}. Then boundedness of $\Tr:\HH\mapsto\widehat\DD$ follows by the Poincar\'e inequality.
By assumption, the coercivity condition~\eqref{cond:coercive} is valid. If $\partial\Omega$ has Lebesgue measure zero, then Condition~\eqref{cond:local} is valid. A straightforward density argument shows that if $\Tr$ is bounded, then Condition~\eqref{cond:trace:extension} is valid.
Thus, the given spaces and operators satisfy the conditions imposed at the beginning of Section~\ref{sec:dfn}.
We now comment on a few of the other quantities defined in Section~\ref{sec:dfn}.
If $u\in \HH$, and if $Lu=0$ in $\Omega$ in the weak sense of formula~\eqref{eqn:weak}, then by density $\B^\Omega(\varphi,u)=0$ for all $\varphi\in\HH$ with $\Tr\varphi=0$; that is, $(Lu)\big\vert_\Omega$ as defined in Section~\ref{sec:dfn} satisfies $(Lu)\big\vert_\Omega=0$.
For many classes of domains there is a bounded extension operator from $\widehat\HH^\Omega$ to $\HH$, and so $\HH^\Omega=\widehat\HH^\Omega=\dot W^2_m(\Omega)$ with equivalent norms. (If $\Omega$ is a Lipschitz domain then this is a well known result of Calder\'on \cite{Cal61} and Stein \cite[Theorem~5, p.~181]{Ste70}; the result is true for more general domains, see for example \cite{Jon81}.)
As mentioned above, if $\Omega\subset\R^d$ is a Lipschitz domain, then $\Tr$ is a bounded operator $\HH \mapsto \widehat \DD$. If $\partial\Omega$ is connected, then $\Tr$ moreover has a bounded right inverse. (See \cite{JonW84} or \cite[Proposition~7.3]{MazMS10} in the inhomogeneous case, and \cite{Bar16pB} in the present homogeneous case.) Thus, the norm in $\DD$ is comparable to the Besov norm.
Furthermore, $\{\nabla^{m-1}\varphi\big\vert_{\partial\Omega}:\varphi\in C^\infty_0(\R^d)\}$ is dense in $\DD$. Thus, if $m=1$ then $\DD=\widehat\DD=\dot B^{2,2}_{1/2}(\partial\Omega)$. If $m\geq 2$ then $\DD$ is a closed {proper} subspace of $\widehat\DD$, as the different partial derivatives of a common function must satisfy certain compatibility conditions. In this case $\DD$ is the Whitney-Sobolev space
used in many papers, including \cite{AdoP98,
MazMS10, MitMW11, MitM13A, MitM13B, BreMMM14, Bar16pA}.
If $m=1$, then by an integration by parts argument we have that $\M_\B^\Omega u = \nu\cdot \mat A\nabla u$, where $\nu$ is the unit outward normal to~$\Omega$, whenever $u$ is sufficiently smooth. The weak formulation of Neumann boundary values of formula~\eqref{eqn:Neumann} coincides with the formulation of higher order Neumann boundary data of \cite{BarHM15p,Bar16pA,BarHM17pC} given the above choice of $\Tr=\Trace^\Omega \nabla^{m-1}$, and with that of \cite{
Ver05,Agr07,
MitM13A} if we instead choose $\Tr u=(\Trace^\Omega u, \partial_\nu u,\dots,\partial_\nu^{m-1} u)$ or $\Tr u = \{\Trace^\Omega \partial^\gamma u\}_{\abs\gamma\leq m-1}$.
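To make the identification $\M_\B^\Omega u=\nu\cdot\mat A\nabla u$ explicit: if $u$ and $\partial\Omega$ are smooth and $-\Div\mat A\nabla u=0$ in $\Omega$, then for every $\varphi\in\HH$ the divergence theorem gives
\begin{equation*}\B^\Omega(\varphi\big\vert_\Omega,u)=\int_\Omega \overline{\nabla\varphi}\cdot\mat A\nabla u = \int_{\partial\Omega}\overline{\varphi}\,\nu\cdot\mat A\nabla u\,d\sigma,\end{equation*}
and so the functional $\M_\B^\Omega u$ of formula~\eqref{eqn:Neumann} is represented by the conormal derivative $\nu\cdot\mat A\nabla u$.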
\section{Construction of layer potentials}
\label{sec:D:S}
We will now use the Babu\v{s}ka-Lax-Milgram theorem to construct layer potentials. This theorem may be stated as follows.
\begin{thm}[{\cite[Theorem~2.1]{Bab70}}]
\label{thm:lax-milgram}
Let $\HH_1$ and $\HH_2$ be two Hilbert spaces, and let $\B$ be a bounded bilinear form on $\HH_1\times \HH_2$ that is coercive in the sense that for some fixed $\lambda>0$, formula~\eqref{cond:coercive} is valid for every $u\in\HH_1$ and $v\in\HH_2$.
Then for every linear functional $T$ defined on ${\HH_1}$ there is a unique $u_T\in {\HH_2}$ such that $\B(v,u_T)=\overline{T(v)}$. Furthermore, $\doublebar{u_T}_{\HH_2}\leq \frac{1}{\lambda}\doublebar{T}_{\HH_1\mapsto\C}$.
\end{thm}
We construct layer potentials as follows.
Let $\arr g\in\NN_2$. Then the operator $T_{\arr g}\varphi = \langle \arr g,\Tr_1\varphi\rangle$ is a bounded linear operator on~$\HH_1$. By the Lax-Milgram lemma, there is a unique $u_T=\s^L_\Omega\arr g\in \HH_2$ such that
\begin{equation}\label{eqn:S}
\B( \varphi, \s^L_\Omega\arr g ) = \langle \Tr_1 \varphi,\arr g\rangle
\quad\text{for all $\varphi\in\HH_1$}.\end{equation}
We will let $\s^L_\Omega\arr g$ denote the single layer potential of~$\arr g$. Observe that the dependence of $\s^L_\Omega$ on the parameter $\Omega$ consists entirely of the dependence of the trace operator on~$\Omega$, and the connection between $\Tr_1$ and $\Omega$ is given by formula~\eqref{cond:trace:extension}. This formula is symmetric about an interchange of $\Omega$ and $\CC$, and so $\s^L_\Omega \arr g=\s^L_\CC\arr g$.
The double layer potential is somewhat more involved. We begin by defining the Newton potential.
Let $H$ be an element of the dual space $\HH_1^*$ to~$\HH_1$. By the Lax-Milgram theorem, there is a unique element $\PP^L H$ of $\HH_2$ that satisfies
\begin{equation}
\label{eqn:newton}
\B(\varphi,\PP^L H) = \langle \varphi,H\rangle\quad\text{for all $\varphi\in\HH_1$}.\end{equation}
We refer to $\PP^L$ as the Newton potential.
In some applications, it is easier to work with the Newton potential rather than the single layer potential directly; we remark that
\begin{equation}\s^L_\Omega \arr g = \PP^L (T_{\arr g}) \quad\text{where } \langle T_{\arr g},\varphi \rangle = \langle \arr g,\Tr_1\varphi\rangle.\end{equation}
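We also remark that in the classical case $L=-\Delta$ (with $d\geq 3$), $\PP^L H$ is, for suitably decaying data $H$, the Newtonian potential of~$H$, that is, the convolution of $H$ with the fundamental solution of the Laplacian; this motivates the terminology.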
We now return to the double layer potential. Let $\arr f\in\DD_2$. Then there is some $F\in \HH_2$ such that $\Tr_2 F=\arr f$. Let
\begin{equation}
\label{eqn:D:+}\D_\Omega^\B \arr f = -F\big\vert_\Omega + \PP^L (L(\1_\Omega F))\big\vert_\Omega
\qquad\text{if $\Tr_2 F=\arr f$}.\end{equation}
Notice that $\D_\Omega^\B \arr f$ is an element of $\HH_2^\Omega$, not of $\HH_2$.
We will conclude this section by showing that $\D^\B_\Omega\arr f$ is well defined, that is, does not depend on the choice of $F$ in formula~\eqref{eqn:D:+}. We will also establish that layer potentials are bounded operators.
\begin{lem}\label{lem:potentials:bounded}
The double layer potential is well defined. Furthermore, we have the bounds
\begin{gather*}\doublebar{\D^\B_\Omega\arr f}_{\HH_2^\Omega} \leq \frac{\doublebar{\B^\CC}}{\lambda}\doublebar{\arr f}_{\DD_2},
\quad
\doublebar{\D^\B_\CC\arr f}_{\HH_2^\CC} \leq \frac{\doublebar{\B^\Omega}}{\lambda}\doublebar{\arr f}_{\DD_2},
\quad
\doublebar{\s^L_\Omega\arr g}_{\HH_2} \leq \frac{1}{\lambda}\doublebar{\arr g}_{\NN_2}.
\end{gather*}
\end{lem}
\begin{proof}
By Theorem~\ref{thm:lax-milgram}, we have that
\begin{equation*}\doublebar{\s^L_\Omega \arr g}_{\HH_2}
\leq
\frac{1}{\lambda}
\doublebar{T_{\arr g}}_{\HH_1\mapsto\C}
\leq
\frac{1}{\lambda}
\doublebar{\Tr_1}_{\HH_1\mapsto\DD_1}\doublebar{\arr g}_{\DD_1\mapsto\C}.
\end{equation*}
By definition of $\DD_1$ and $\NN_2$, $\doublebar{\Tr_1}_{\HH_1\mapsto\DD_1}=1$ and $\doublebar{\arr g}_{\DD_1\mapsto\C}=\doublebar{\arr g}_{\NN_2}$, and so $\s^L_\Omega:\NN_2\mapsto\HH_2$ is bounded with operator norm at most $1/\lambda$.
We now turn to the double layer potential. We will begin with a few properties of the Newton potential.
By definition of~$L$, if $\varphi\in\HH_1$ then $\langle \varphi, LF\rangle = \B(\varphi, F)$. By definition of~$\PP^L$, $\B(\varphi,\PP^L (LF)) = \langle\varphi, LF\rangle$. Thus, by coercivity of~$\B$,
\begin{equation}F=\PP^L(LF)\quad\text{for all }F\in\HH_2.\end{equation}
By definition of $\B^\Omega$, $\B^\CC$ and $L(\1_\Omega F)$,
\begin{align*}
\langle\varphi,LF\rangle
&= \B(\varphi,F)
=\B^\Omega(\varphi\big\vert_\Omega,F\big\vert_\Omega)
+\B^\CC(\varphi\big\vert_{\CC},F\big\vert_{\CC})&
\\&=
\langle\varphi, L(F\1_\Omega)\rangle + \langle \varphi, L(F\1_{\CC})\rangle
&\text{for all $\varphi\in\HH_1$}.\end{align*}
Thus, $LF=L(F\1_\Omega)+L(F\1_{\CC})$ and so
\begin{align}
\label{eqn:D:alternate:extensions}
-F + \PP^L (L(F\1_\Omega))
&= -F + \PP^L (LF) - \PP^L (L(F\1_{\CC}))
\\\nonumber&= - \PP^L (L(F\1_{\CC}))
.\end{align}
In particular, suppose that $\arr f=\Tr_2 F=\Tr_2 F'$. By Condition~\eqref{cond:trace:extension}, there is some $F''\in \HH_2$ such that $F''\big\vert_\Omega=F\big\vert_\Omega$ and $F''\big\vert_{\CC}=F'\big\vert_{\CC}$. Then
\begin{align*} -F\big\vert_\Omega + \PP^L (L(\1_\Omega F))\big\vert_\Omega
&= -F''\big\vert_\Omega + \PP^L (L(\1_\Omega F''))\big\vert_\Omega
= - \PP^L (L(F''\1_{\CC}))\big\vert_\Omega
\\&= - \PP^L (L(F'\1_{\CC}))\big\vert_\Omega
= -F'\big\vert_\Omega + \PP^L (L(\1_\Omega F'))\big\vert_\Omega
\end{align*}
and so $\D_\Omega^\B \arr f$ is well-defined, that is, depends only on $\arr f$ and not the choice of function $F$ with $\Tr_2 F=\arr f$.
Furthermore, we have the alternative formula
\begin{equation}\label{eqn:D:alternate}
\D_\Omega^\B \arr f = - \PP^L (L(\1_{\CC} F))\big\vert_\Omega
\qquad\text{if $\Tr_2 F=\arr f$}
.\end{equation}
Thus,
\begin{equation*}\doublebar{\D_\Omega^\B\arr f}_{\HH_2^\Omega}
\leq
\inf_{\Tr_2 F=\arr f} \doublebar{\PP^L (L(\1_{\CC} F))\big\vert_\Omega}_{\HH_2^\Omega}
\leq
\inf_{\Tr_2 F=\arr f} \doublebar{\PP^L (L(\1_{\CC} F))}_{\HH_2}
\end{equation*}
by definition of the $\HH_2^\Omega$-norm.
By Theorem~\ref{thm:lax-milgram} and definition of $\PP^L$, we have that
\begin{equation*}\doublebar{\PP^L (L(\1_{\CC} F))}_{\HH_2}\leq \frac{1}{\lambda} \doublebar{L(\1_{\CC} F)}_{\HH_1\mapsto\C}.\end{equation*}
Since $L(\1_{\CC} F)(\varphi) = \B^\CC(\varphi\big\vert_{\CC}, F\big\vert_{\CC})$, we have that \begin{equation*}\doublebar{L(\1_{\CC} F)}_{\HH_1\mapsto\C}\leq \doublebar{\B^\CC}\doublebar{F\big\vert_{\CC}}_{\HH_2^\CC}
\leq \doublebar{\B^\CC}\doublebar{F}_{\HH_2}\end{equation*}
and so
\begin{equation*}\doublebar{\D_\Omega^\B\arr f}_{\HH_2^\Omega}
\leq
\inf_{\Tr_2 F=\arr f} \frac{1}{\lambda} \doublebar{\B^\CC}\doublebar{F}_{\HH_2}
=\frac{1}{\lambda} \doublebar{\B^\CC} \doublebar{\arr f}_{\DD_2}
\end{equation*}
as desired.
\end{proof}
\section{Properties of layer potentials}
\label{sec:properties}
We will begin this section by showing that layer potentials are solutions to the equation $(Lu)\big\vert_\Omega=0$ (Lemma~\ref{lem:potentials:solutions}). We will then prove the Green's formula (Lemma~\ref{lem:green}), the adjoint formulas for layer potentials (Lemma~\ref{lem:adjoint}), and conclude this section by proving the jump relations for layer potentials (Lemma~\ref{lem:jump}).
\begin{lem}\label{lem:potentials:solutions} Let $\arr f\in\DD_2$, $\arr g\in\NN_2$, and let $u=\D^\B_\Omega\arr f$ or $u=\s^L_\Omega\arr g\big\vert_\Omega$. Then
$(Lu)\big\vert_\Omega=0$.
\end{lem}
\begin{proof}
Recall that $(Lu)\big\vert_\Omega=0$ if $\B^\Omega(\varphi_+\big\vert_\Omega,u)=0$ for all $\varphi_+\in\HH_1$ with $\Tr_1\varphi_+=0$.
If $\Tr_1\varphi_+=0=\Tr_1 0$, then by Condition~\eqref{cond:trace:extension} there is some $\varphi\in\HH_1$ with $\varphi\big\vert_\Omega=\varphi_+$, $\varphi\big\vert_{\CC} = 0$ and $\Tr_1\varphi=0$.
By the definition~\eqref{eqn:S} of the single layer potential,
\begin{equation*}0=\B(\varphi,\s^L_\Omega\arr g)
=\B^\Omega(\varphi\big\vert_\Omega,\s^L_\Omega\arr g\big\vert_\Omega)
+\B^\CC(\varphi\big\vert_{\CC},\s^L_\Omega\arr g\big\vert_{\CC})
=\B^\Omega(\varphi_+\big\vert_\Omega,\s^L_\Omega\arr g\big\vert_\Omega)
\end{equation*}
as desired.
Turning to the double layer potential, if $\varphi\in\HH_1$, then by the definition~\eqref{eqn:D:+} of $\D_\Omega^\B$, formula~\eqref{eqn:D:alternate} for~$\D_\CC^\B$ and linearity of~$\B^\Omega$,
\begin{align*}
\B^\Omega(\varphi\big\vert_\Omega,\D^\B_\Omega \arr f)
&= -\B^\Omega\bigl(\varphi\big\vert_\Omega, F\big\vert_\Omega\bigr)
+\B^\Omega\bigl(\varphi\big\vert_\Omega, \PP^L(L(\1_\Omega F))\big\vert_\Omega\bigr)
,\\
\B^\CC(\varphi\big\vert_\CC,\D^\B_\CC \arr f\big\vert_{\CC})
&=-\B^\CC\bigl(\varphi\big\vert_{\CC}, \PP^L(L(\1_\Omega F))\big\vert_{\CC}\bigr)
.\end{align*}
Subtracting and applying Condition~\eqref{cond:local},
\begin{align*}
\B^\Omega(\varphi\big\vert_\Omega,\D^\B_\Omega \arr f)
-\B^\CC(\varphi\big\vert_\CC,\D^\B_\CC \arr f\big\vert_{\CC})
&= -\B^\Omega\bigl(\varphi\big\vert_\Omega, F\big\vert_\Omega\bigr)
+\B\bigl(\varphi, \PP^L(L(\1_\Omega F))\bigr)
.\end{align*}
By the definition~\eqref{eqn:newton} of $\PP^L$,
\begin{equation*}\B\bigl(\varphi, \PP^L(L(\1_\Omega F))\bigr) = \langle \varphi, L(\1_\Omega F)\rangle\end{equation*}
and by the definition~\eqref{dfn:L:singular} of $L(\1_\Omega F)$,
\begin{equation*}\B\bigl(\varphi, \PP^L(L(\1_\Omega F))\bigr) = \B^\Omega(\varphi\big\vert_\Omega,F\big\vert_\Omega).\end{equation*}
Thus,
\begin{align}\label{eqn:D:solution}
\B^\Omega(\varphi\big\vert_\Omega,\D^\B_\Omega \arr f)
-\B^\CC(\varphi\big\vert_\CC,\D^\B_\CC \arr f)
&= 0
\quad\text{for all $\varphi\in\HH_1$.}
\end{align}
In particular, as before if $\Tr_1 \varphi_+=0$ then there is some $\varphi$ with $\varphi\big\vert_\Omega=\varphi_+\big\vert_\Omega$, $\varphi\big\vert_\CC=0$ and so $\B^\Omega(\varphi\big\vert_\Omega,\D^\B_\Omega \arr f)=0$. This completes the proof.
\end{proof}
\begin{lem}\label{lem:green}
If $u\in\HH_2^\Omega$ and $(Lu)\big\vert_\Omega=0$, then
\begin{equation*}u = -\D^\B_\Omega (\Tr_2 U) + \s^L_\Omega (\M^\B_\Omega u)\big\vert_\Omega,\quad
0 = \D^\B_\CC (\Tr_2 U) + \s^L_\CC (\M^\B_\Omega u)\big\vert_{\CC}\end{equation*}
for any $U\in\HH_2$ with $U\big\vert_\Omega=u$.
\end{lem}
\begin{proof}
By definition~\eqref{eqn:D:+} of the double layer potential,
\begin{equation*}
-\D_\Omega^\B (\Tr_2 U)
= U\big\vert_\Omega - \PP^L (L(\1_\Omega U))\big\vert_\Omega
= u - \PP^L (L(\1_\Omega u))\big\vert_\Omega
\end{equation*}
and by formula~\eqref{eqn:D:alternate}
\begin{equation*}\D_\CC^\B (\Tr_2 U) = -\PP^L (L(\1_\Omega u))\big\vert_{\CC}.\end{equation*}
It suffices to show that $\PP^L(L(\1_\Omega u))=\s^L_\Omega(\M^\B_\Omega u)$.
Let $\varphi\in\HH_1$. By formulas~\eqref{eqn:S} and~\eqref{eqn:Neumann},
\begin{equation*}\B(\varphi,\s^L_\Omega (\M^\B_\Omega u))
=\langle \Tr_1 \varphi,\M^\B_\Omega u\rangle
=\B^\Omega(\varphi\big\vert_\Omega,u)
.\end{equation*}
By formula~\eqref{eqn:newton} for the Newton potential
and by the definition~\eqref{dfn:L:singular} of $L(\1_\Omega u)$,
\begin{equation*}
\B(\varphi, \PP^L(L(\1_\Omega u)))
= \langle \varphi, L(\1_\Omega u)\rangle
=\B^\Omega(\varphi\big\vert_\Omega,u)
.\end{equation*}
Thus, $\B(\varphi, \PP^L(L(\1_\Omega u))) = \B(\varphi,\s^L_\Omega (\M^\B_\Omega u))$ for all $\varphi\in\HH_1$; by coercivity of $\B$, we must have that $\PP^L(L(\1_\Omega u))=\s^L_\Omega(\M^\B_\Omega u)$. This completes the proof.
\end{proof}
Let $\B^*(\varphi,\psi)=\overline{\B(\psi,\varphi)}$ and define $\B^\Omega_*$, $\B^\CC_*$ analogously. Then $\B^*$ is a bounded and coercive operator $\HH_2\times \HH_1\mapsto\C$, and so we can define the double and single layer potentials $\D^{\B^*}_\Omega:\DD_1\mapsto \HH_1^\Omega$, $\s^{L^*}_\Omega:\NN_1\mapsto \HH_1$.
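In the setting of Section~\ref{sec:example}, $\B^*$ is simply the bilinear form of the formal adjoint operator $L^*$: writing out the definition of $\B$ and relabeling the indices, for $u$, $\varphi\in\HH$ we have
\begin{equation*}\B^*(u,\varphi) = \sum_{\abs\alpha=\abs\beta= m}\int_{\R^d} \overline{\partial^\alpha u}\,\overline{A_{\beta\alpha}}\,\partial^\beta\varphi,\end{equation*}
that is, $\B^*$ arises from the coefficients $A^*_{\alpha\beta}=\overline{A_{\beta\alpha}}$.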
We then have the following adjoint relations.
\begin{lem}\label{lem:adjoint}
We have the adjoint relations
\begin{align}
\label{eqn:neumann:D:dual}
\langle \arr \varphi, \M_\B^{\Omega} \D^{\B}_\Omega \arr f\rangle
&= \langle\M_{\B^*}^{\Omega} \D^{\B^*}_\Omega \arr \varphi, \arr f\rangle
,\\
\label{eqn:dirichlet:S:dual}
\langle \arr \gamma, \Tr_2 \s^L_\Omega \arr g\rangle
&= \langle \Tr_1 \s^{L^*}_\Omega \arr \gamma, \arr g\rangle
\end{align}
for all $\arr f\in \DD_2$, $\arr \varphi\in\DD_1$, $\arr g\in\NN_2$ and $\arr\gamma\in\NN_1$.
If we let $\Tr_2^\Omega \D^{\B}_\Omega \arr f = -\Tr_2 F + \Tr_2 \PP^L(L(\1_\Omega F))$, where $F$ is as in formula~\eqref{eqn:D:+}, then $\Tr_2^\Omega \D^{\B}_\Omega \arr f $ does not depend on the choice of $F$, and we have the duality relations
\begin{align}\label{eqn:dirichlet:D:dual}
\langle \arr \gamma, \Tr_2^\Omega \D^{\B}_\Omega \arr f\rangle
&= \langle-\arr \gamma+\M_{\B^*}^{\Omega}\s^{L^*}_\Omega\arr \gamma, \arr f\rangle
.\end{align}
\end{lem}
\begin{proof}
By formula~\eqref{eqn:S},
\begin{gather*}\langle \Tr_1 \s^{L^*}_\Omega \arr \gamma, \arr g\rangle
=\B(\s^{L^*}_\Omega\arr \gamma,\s^L_\Omega\arr g),
\\
\langle \Tr_2 \s^{L}_\Omega \arr g, \arr \gamma\rangle
=\B^*(\s^{L}_\Omega\arr g,\s^{L^*}_\Omega\arr \gamma)\end{gather*}
and so formula~\eqref{eqn:dirichlet:S:dual} follows by definition of~$\B^*$.
Let $\Phi\in\HH_1$ and $F\in\HH_2$ with $\Tr_1\Phi=\arr\varphi$, $\Tr_2F=\arr f$.
Then by formulas \eqref{eqn:Neumann} and~\eqref{eqn:D:+},
\begin{align*}\langle \arr \varphi, \M_\B^{\Omega} \D^{\B}_\Omega \arr f\rangle
&=
\B^\Omega(\Phi\big\vert_\Omega, \D^{\B}_\Omega \arr f)
=
-\B^\Omega(\Phi\big\vert_\Omega, F\vert_\Omega)
+\B^\Omega(\Phi\big\vert_\Omega, \PP^L(L(\1_\Omega F))\big\vert_\Omega)
\\&=
-\overline{\B^\Omega_*(F\vert_\Omega, \Phi\big\vert_\Omega)}
+\overline{\B^\Omega_*(\PP^L(L(\1_\Omega F))\big\vert_\Omega, \Phi\big\vert_\Omega)}
.\end{align*}
By formula~\eqref{dfn:L:singular},
\begin{equation*}\B^\Omega_*(\PP^L(L(\1_\Omega F))\big\vert_\Omega, \Phi\big\vert_\Omega)
= \langle \PP^L(L(\1_\Omega F)), L^*(\1_\Omega\Phi)\rangle.\end{equation*}
By formula~\eqref{eqn:newton},
\begin{equation*}\B^\Omega_*(\PP^L(L(\1_\Omega F))\big\vert_\Omega, \Phi\big\vert_\Omega)
= \B^\Omega_*( \PP^L(L(\1_\Omega F)), \PP^{L^*}(L^*(\1_\Omega\Phi))).\end{equation*}
Thus,
\begin{align*}\langle \arr \varphi, \M_\B^{\Omega} \D^{\B}_\Omega \arr f\rangle
&=
-\overline{\B^\Omega_*(F\vert_\Omega, \Phi\big\vert_\Omega)}
+\overline{\B^\Omega_*( \PP^L(L(\1_\Omega F)), \PP^{L^*}(L^*(\1_\Omega\Phi)))}
.\end{align*}
By the same argument
\begin{align*}\langle
\arr f,\M_{\B^*}^{\Omega} \D^{\B^*}_\Omega \arr \varphi\rangle
&=
-\overline{\B^\Omega(\Phi\vert_\Omega, F\big\vert_\Omega)}
+\overline{\B^\Omega( \PP^{L^*}(L^*(\1_\Omega \Phi)), \PP^{L}(L(\1_\Omega F)))}
\end{align*}
and by definition of $\B^\Omega_*$ formula~\eqref{eqn:neumann:D:dual} is proven.
Finally, by definition of $\Tr_2^\Omega \D^{\B}_\Omega$,
\begin{equation*}\langle \arr \gamma, \Tr_2^\Omega \D^{\B}_\Omega \arr f\rangle
=
-\langle \arr \gamma, \Tr_2 F\rangle
+
\langle \arr \gamma, \Tr_2 \PP^L(L(\1_\Omega F)) \rangle
.\end{equation*}
By the definition~\eqref{eqn:S} of the single layer potential,
\begin{equation*}
\langle \arr \gamma, \Tr_2 \PP^L(L(\1_\Omega F)) \rangle
=
\overline{\B^*( \PP^L(L(\1_\Omega F)) ,\s^{L^*}_\Omega\arr \gamma)}
.\end{equation*}
By definition of $\B^*$ and the definition~\eqref{eqn:newton} of the Newton potential,
\begin{equation*}\overline{\B^*( \PP^L(L(\1_\Omega F)) ,\s^{L^*}_\Omega\arr \gamma)}
= {\langle \s^{L^*}_\Omega\arr \gamma, L(\1_\Omega F) \rangle}
\end{equation*}
and by the definition~\eqref{dfn:L:singular} of $L(\1_\Omega F)$,
\begin{equation*}{\langle \s^{L^*}_\Omega\arr \gamma, L(\1_\Omega F) \rangle}
= \B^\Omega (\s^{L^*}_\Omega\arr \gamma\big\vert_\Omega, F\big\vert_\Omega).\end{equation*}
By the definition of $\B^\Omega_*$ and the definition~\eqref{eqn:Neumann} of Neumann boundary values,
\begin{equation*}\B^\Omega (\s^{L^*}_\Omega\arr \gamma\big\vert_\Omega, F\big\vert_\Omega)=\overline{\B^\Omega_*(F\big\vert_\Omega,\s^{L^*}_\Omega\arr \gamma\big\vert_\Omega)} = \overline{\langle \Tr_2 F, \M^\Omega_{\B^*}(\s^{L^*}_\Omega\arr\gamma\big\vert_\Omega)\rangle}\end{equation*}
and so
\begin{equation*}\langle \arr \gamma, \Tr_2^\Omega \D^{\B}_\Omega \arr f\rangle
=
-\langle\arr\gamma,\arr f\rangle +
{\langle \M^\Omega_{\B^*}(\s^{L^*}_\Omega\arr\gamma\big\vert_\Omega), \arr f\rangle}
\end{equation*}
for any choice of $F$. Thus $\Tr_2^\Omega \D^{\B}_\Omega $ is well-defined and formula~\eqref{eqn:dirichlet:D:dual} is valid.
\end{proof}
\begin{lem}\label{lem:jump}
Let $\Tr_2^\Omega \D^{\B}_\Omega $ be as in Lemma~\ref{lem:adjoint}.
If $\arr f\in\DD_2$ and $\arr g\in\NN_2$, then we have the jump and continuity relations
\begin{align}
\label{eqn:D:jump}
\Tr_2^\Omega\D^\B_\Omega\arr f +\Tr_2^\CC\D^\B_\CC\arr f
&=-\arr f
,\\
\label{eqn:S:jump}
\M_\B^\Omega (\s^L_\Omega \arr g\big\vert_\Omega)
+\M_\B^{\CC} (\s^L_\Omega\arr g\big\vert_{\CC})
&=\arr g
,\\
\label{eqn:D:cts}
\M_\B^{\Omega} (\D^\B_\Omega\arr f) - \M_\B^{\CC} (\D^\B_\CC\arr f )
&=0
.\end{align}
If there are bounded operators $\Tr_2^\Omega:\HH_2^\Omega\mapsto\DD_2$ and $\Tr_2^\CC:\HH_2^\CC\mapsto\DD_2$ such that $\Tr_2 F = \Tr_2^\Omega (F\big\vert_\Omega)= \Tr_2^\CC (F\big\vert_\CC)$ for all $F\in\HH_2$, then in addition
\begin{align}
\label{eqn:S:cts}
\Tr_2^\Omega(\s^L_\Omega \arr g\big\vert_\Omega) -\Tr_2^\CC(\s^L_\Omega \arr g\big\vert_{\CC})
&=0
.\end{align}
\end{lem}
The given condition on $\Tr_2^\Omega$, $\Tr_2^\CC$ is very natural if $\Omega\subset\R^d$ is an open set, $\CC=\R^d\setminus\bar\Omega$ and $\Tr_2$ denotes a trace operator restricting functions to the boundary~$\partial\Omega$.
Observe that if such operators $\Tr_2^\Omega$ and $\Tr_2^\CC$ exist, then by the definition~\eqref{eqn:D:+} of the double layer potential and by the definition of $\Tr_2^\Omega \D^\B_\Omega$ in Lemma~\ref{lem:adjoint}, $\Tr_2^\Omega (\D^\B_\Omega \arr f) = (\Tr_2^\Omega \D^\B_\Omega)\arr f$ and so there is no ambiguity of notation.
\begin{proof}[Proof of Lemma~\ref{lem:jump}]
We first observe that by the definition~\eqref{eqn:Neumann} of Neumann boundary values, if $u_+\in\HH_2^\Omega$ and $u_-\in\HH_2^\CC$ with $(Lu_+)\big\vert_\Omega=0$ and $(Lu_-)\big\vert_\CC=0$, then
\begin{equation*}
\M_\B^\Omega u_+
+\M_\B^{\CC} u_-=\arr \psi
\text{ if and only if }
\langle\Tr_1\varphi,\arr\psi\rangle = \B^\Omega(\varphi\big\vert_\Omega,u_+)
+\B^\CC(\varphi\big\vert_{\CC},u_-)
\end{equation*}
for all $\varphi\in\HH_1$.
The continuity relation \eqref{eqn:D:cts} follows from formula~\eqref{eqn:D:solution}.
The jump relation~\eqref{eqn:D:jump} follows from the definition of $\Tr_2^\Omega\D^\B_\Omega$ and by using formula~\eqref{eqn:D:alternate:extensions} to rewrite $\Tr_2^\CC\D^\B_\CC$.
The jump relation \eqref{eqn:S:jump} follows from the definition~\eqref{eqn:S} of the single layer potential.
The continuity relation~\eqref{eqn:S:cts} follows because $\s^L_\Omega\arr g\in \HH_2$ and by the definition of $\Tr_2^\Omega$, $\Tr_2^\CC$.
\end{proof}
\section{Layer potentials and boundary value problems}
\label{sec:invertible}
If $\HH_2^\Omega$ and $\DD_2$ are as in Section~\ref{sec:example}, then by the Lax-Milgram lemma there is a unique solution to the Dirichlet and Neumann boundary value problems
\begin{equation*}\left\{\begin{aligned}
(Lu)\big\vert_\Omega &= 0,\\
\Tr^\Omega_2 u &= \arr f,\\
\doublebar{u}_{\HH_2^\Omega} &\leq C \doublebar{\arr f}_{\DD_2},
\end{aligned}\right.\qquad
\left\{\begin{aligned}
(Lu)\big\vert_\Omega &= 0,\\
\M_\B^\Omega u &= \arr g,\\
\doublebar{u}_{\HH^\Omega_2} &\leq C \doublebar{\arr g}_{\NN_2}
.\end{aligned}\right.
\end{equation*}
We routinely wish to establish existence and uniqueness to the Dirichlet and Neumann boundary value problems
\begin{equation*}
(D)^{\widehat L}_\XX\left\{\begin{aligned}
(\widehat Lu)\big\vert_\Omega &= 0,\\
\WTr_\XX^\Omega u &= \arr f,\\
\doublebar{u}_{\XX^\Omega} &\leq C \doublebar{\arr f}_{\DD_\XX},
\end{aligned}\right.\qquad
(N)^\B_\XX\left\{\begin{aligned}
(\widehat Lu)\big\vert_\Omega &= 0,\\
\MM_\B^\Omega u &= \arr g,\\
\doublebar{u}_{\XX^\Omega} &\leq C \doublebar{\arr g}_{\NN_\XX}
\end{aligned}\right.
\end{equation*}
for some constant $C$ and some other solution space $\XX$ and spaces of Dirichlet and Neumann boundary data $\DD_\XX$ and $\NN_\XX$. For example, if $L$ is a second-order differential operator, then as in \cite{JerK81B,KenP93,KenR09,DinPR13p} we might wish to establish well-posedness with $\DD_\XX=\dot W_1^p(\partial\Omega)$, $\NN_\XX=L^p(\partial\Omega)$ and $\XX=\{u:\widetilde N(\nabla u)\in L^p(\partial\Omega)\}$, where $\widetilde N$ is the nontangential maximal function introduced in \cite{KenP93}.
The classic method of layer potentials states that if layer potentials, originally defined as bounded operators $\D^\B_\Omega:\DD_2\mapsto\HH_2^\Omega$ and $\s^L_\Omega:\NN_2\mapsto\HH_2$, may be extended to operators $\D^\B_\Omega:\DD_\XX\mapsto\XX$ and $\s^L_\Omega:\NN_\XX\mapsto\XX$, and if certain of the properties of layer potentials of Section~\ref{sec:properties} are preserved by that extension, then well posedness of boundary value problems is equivalent to certain invertibility properties of layer potentials.
In this section we will make this notion precise.
As in Sections~\ref{sec:dfn}, \ref{sec:D:S} and~\ref{sec:properties}, we will work with layer potentials and function spaces in a very abstract setting.
\subsection{From invertibility to well posedness}
\label{sec:invertible:well-posed}
In this section we will need the following objects.
\begin{itemize}
\item Quasi-Banach spaces $\XX^\Omega$, $\DD_\XX$ and $\NN_\XX$.
\item A linear operator $u\mapsto(\widehat L u)\big\vert_\Omega$ acting on~$\XX^\Omega$.
\item Linear operators $\WTr_\XX^\Omega:\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}\mapsto\DD_\XX$ and $\MM_\B^\Omega:\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}\mapsto\NN_\XX$.
\item Linear operators $\widehat\D^\B_\Omega:\DD_\XX\mapsto \XX^\Omega$ and $\widehat\s^L_\Omega:\NN_\XX\mapsto \XX^\Omega$.
\end{itemize}
\begin{rmk}
Recall that $\s^L_\Omega=\s^L_\CC$ is defined in terms of a ``global'' Hilbert space $\HH_2$. If $\XX^\Omega=\HH_2^\Omega$, then $\widehat \s^L_\Omega\arr g = \s^L_\Omega\arr g\big\vert_\Omega$. In the general case, we do not assume the existence of a global quasi-Banach space $\XX$ whose restrictions to $\Omega$ lie in~$\XX^\Omega$, and thus we will let $\widehat \s^L_\Omega\arr g$ be an element of $\XX^\Omega$ without assuming a global extension.
\end{rmk}
In applications it is often useful to define $\Tr^\Omega$, $\M_\B^\Omega$, $L$, $\D^\B_\Omega$ and $\s^L_\Omega$ in terms of some Hilbert spaces $\HH_j$, $\HH_j^\Omega$ and to extend these operators to operators with domain or range $\XX^\Omega$ by density or some other means. See, for example, \cite{BarHM17pC}. We will not assume that the operators $\WTr_\XX^\Omega$, $\MM_\B^\Omega$, $\widehat L$, $\widehat\D^\B_\Omega$ and $\widehat\s^L_\Omega$ arise by density; we will merely require that they satisfy certain properties similar to those established in Section~\ref{sec:properties}.
Specifically, we will often use the following conditions; observe that if $\XX^\Omega=\HH_2^\Omega$ for some $\HH_2^\Omega$ as in Section~\ref{sec:dfn}, these properties are valid.
\begin{description}
\item[\hypertarget{ConditionT}{\condTname}]
$\WTr_\XX^\Omega$ is a bounded operator $\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}\mapsto \DD_\XX$.
\item[\hypertarget{ConditionM}{\condMname}]
$\MM_\B^\Omega$ is a bounded operator $\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}\mapsto \NN_\XX$.
\item[\hypertarget{ConditionS}{\condSname}] The single layer potential $\widehat\s^L_\Omega$ is bounded $\NN_\XX\mapsto \XX^\Omega$, and if $\arr g\in \NN_\XX$ then $(\widehat L (\widehat\s^L_\Omega\arr g))\big\vert_\Omega=0$.
\item[\hypertarget{ConditionD}{\condDname}] The double layer potential $\widehat\D^\B_\Omega$ is bounded $\DD_\XX\mapsto\XX^\Omega$, and if $\arr f\in \DD_\XX$ then $(\widehat L (\widehat\D^\B_\Omega\arr f))\big\vert_\Omega=0$.
\item[\hypertarget{ConditionG}{\condGname}] If $u \in \XX^\Omega$ and $(\widehat L u)\big\vert_\Omega =0$, then we have the Green's formula
\begin{equation*}u = -\widehat\D^\B_\Omega (\WTr_\XX^\Omega u) + \widehat\s^L_\Omega (\MM_\B^\Omega u).\end{equation*}
\end{description}
We remark that the linear operator $u\mapsto(\widehat L u)\big\vert_\Omega$ is used only to characterize the subspace $\XX^\Omega_{ L}=\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}$. We could work directly with $\XX^\Omega_{ L}$; however, we have chosen to use the more cumbersome notation $\{u\in\XX^\Omega:(\widehat L u)\big\vert_\Omega =0\}$ to emphasize that the following arguments, presented here only in terms of linear operators, are intended to be used in the context of boundary value problems for differential equations.
The following theorem is straightforward to prove and is the core of the classic method of layer potentials.
\begin{thm}\label{thm:surjective:existence}
Let $\XX$, $\DD_\XX$, $\NN_\XX$, $\widehat\D^\B_\Omega$, $\widehat\s^L_\Omega$, $\WTr_\XX^\Omega$ and $\MM_\B^\Omega$ be quasi-Banach spaces and linear operators with domains and ranges as above.
Suppose that Conditions~\condT\ and \condS\ are valid, and that $\WTr_\XX^\Omega \widehat\s^L_\Omega: \NN_\XX \mapsto \DD_\XX$ is surjective.
Then for every $\arr f\in \DD_\XX$, there is some $u$ such that
\begin{equation}\label{eqn:Dirichlet:weak}
(\widehat L u)\big\vert_\Omega = 0, \quad \WTr_\XX^\Omega u = \arr f, \quad u\in\XX^\Omega.\end{equation}
Suppose in addition $\WTr_\XX^\Omega \widehat\s^L_\Omega: \NN_\XX \mapsto \DD_\XX$ has a bounded right inverse, that is, there is a constant $C_0$ such that if $\arr f\in\DD_\XX$, then there is some preimage $\arr g$ of $\arr f$ with $\doublebar{\arr g}_{\NN_\XX}\leq C_0\doublebar{\arr f}_{\DD_\XX}$. Then there is some constant $C_1$ depending on $C_0$ and the implicit constants in Conditions~\condT\ and~\condS\ such that if $\arr f\in\DD_\XX$, then there is some $u\in\XX^\Omega$ such that
\begin{equation}
\label{eqn:Dirichlet:strong}
(\widehat L u)\big\vert_\Omega = 0, \quad \WTr_\XX^\Omega u = \arr f, \quad \doublebar{u}_{\XX^\Omega}\leq C_1\doublebar{\arr f}_{\DD_\XX}.\end{equation}
Suppose that Conditions~\condM\ and \condD\ are valid, and that $\MM_\B^\Omega\widehat\D^\B_\Omega: \DD_\XX \mapsto \NN_\XX$ is surjective.
Then for every $\arr g\in \NN_\XX$, there is some $u$ such that
\begin{equation}\label{eqn:Neumann:weak}(\widehat L u)\big\vert_\Omega = 0, \quad \MM_\B^\Omega u = \arr g, \quad u\in\XX^\Omega.\end{equation}
If $\MM_\B^\Omega\widehat\D^\B_\Omega: \DD_\XX \mapsto \NN_\XX$ has a bounded right inverse, then there is some constant $C_1$ depending on the bound on that inverse and the implicit constants in Conditions~\condM\ and~\condD\ such that if $\arr g\in\NN_\XX$, then there is some $u\in\XX^\Omega$ such that
\begin{equation}\label{eqn:Neumann:strong}(\widehat L u)\big\vert_\Omega = 0, \quad \MM_\B^\Omega u = \arr g, \quad \doublebar{u}_{\XX^\Omega}\leq C_1\doublebar{\arr g}_{\NN_\XX}.\end{equation}
\end{thm}
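Indeed, the argument is a direct verification. Given $\arr f\in\DD_\XX$, choose $\arr g\in\NN_\XX$ with $\WTr_\XX^\Omega \widehat\s^L_\Omega\arr g=\arr f$ and set $u=\widehat\s^L_\Omega\arr g$; by Condition~\condS, $u\in\XX^\Omega$ and $(\widehat L u)\big\vert_\Omega=0$, and so $u$ satisfies~\eqref{eqn:Dirichlet:weak}. If $\arr g$ may be chosen with $\doublebar{\arr g}_{\NN_\XX}\leq C_0\doublebar{\arr f}_{\DD_\XX}$, then the boundedness of $\widehat\s^L_\Omega$ in Condition~\condS\ yields the estimate in~\eqref{eqn:Dirichlet:strong}. The Neumann statements follow in the same way, with $u=\widehat\D^\B_\Omega\arr f$ for an appropriate $\arr f\in\DD_\XX$ and with Conditions~\condM\ and~\condD\ in place of~\condT\ and~\condS.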
Thus, surjectivity of layer potentials implies existence of solutions to boundary value problems.
We may also show that injectivity of layer potentials implies uniqueness of solutions to boundary value problems. This argument appeared first in \cite{BarM16A} and is the converse to an argument of \cite{Ver84}.
\begin{thm}\label{thm:injective:unique}
Let $\XX$, $\DD_\XX$, $\NN_\XX$, $\widehat\D^\B_\Omega$, $\widehat\s^L_\Omega$, $\WTr_\XX^\Omega$ and $\MM_\B^\Omega$ be quasi-Banach spaces and linear operators with domains and ranges as above. Suppose that Conditions~\condT, \condM, \condD, \condS\ and~\condG\ are all valid.
Suppose that the operator
$\WTr_\XX^\Omega \widehat\s^L_\Omega: \NN_\XX \mapsto \DD_\XX$ is one-to-one. Then for each $\arr f\in\DD_\XX$, there is at most one solution $u$ to the Dirichlet problem
\begin{equation*}(\widehat L u)\big\vert_\Omega = 0, \quad \WTr_\XX^\Omega u = \arr f, \quad u\in\XX^\Omega.\end{equation*}
If $\WTr_\XX^\Omega \widehat\s^L_\Omega: \NN_\XX \mapsto \DD_\XX$ has a bounded left inverse, that is, there is a constant $C_0$ such that the estimate $\doublebar{\arr g}_{\NN_\XX}\leq C_0 \doublebar{\WTr_\XX^\Omega \widehat\s^L_\Omega \arr g}_{\DD_\XX}$ is valid, then there is some constant $C_1$ such that every $u\in\XX^\Omega$ with $(\widehat L u)\big\vert_\Omega=0$ satisfies $\doublebar{u}_{\XX^\Omega}\leq C_1\doublebar{\WTr_\XX^\Omega u}_{\DD_\XX}$ (that is, if $u$ satisfies the Dirichlet problem~\eqref{eqn:Dirichlet:weak} then $u$ must satisfy the Dirichlet problem~\eqref{eqn:Dirichlet:strong}).
Similarly, if the operator $\MM_\B^\Omega \widehat\D^\B_\Omega:\DD_\XX \mapsto \NN_\XX$ is one-to-one, then for each $\arr g\in\NN_\XX$, there is at most one solution $u$ to the Neumann problem
\begin{equation*}(\widehat L u)\big\vert_\Omega = 0, \quad \MM_\B^\Omega u = \arr g, \quad u\in\XX^\Omega.\end{equation*}
If $\MM_\B^\Omega \widehat\D^\B_\Omega:\DD_\XX \mapsto \NN_\XX$ has a bounded left inverse, then there is some constant $C_1$ such that every $u\in\XX^\Omega$ with $(\widehat L u)\big\vert_\Omega=0$ satisfies $\doublebar{u}_{\XX^\Omega}\leq C_1\doublebar{\MM_\B^\Omega u}_{\NN_\XX}$.
\end{thm}
\begin{proof}
We present the proof only for the Neumann problem; the argument for the Dirichlet problem is similar.
Suppose that $u$, $v\in\XX^\Omega$ with $(\widehat Lu)\big\vert_\Omega=(\widehat Lv)\big\vert_\Omega=0$ in~$\Omega$ and $\MM_\B^\Omega u=\arr g=\MM_\B^\Omega v$. By Condition~\condG,
\begin{align*}
u
= -\widehat\D^\B_\Omega (\WTr_\XX^\Omega u) + \widehat\s^L_\Omega (\MM_\B^\Omega u)
&= -\widehat\D^\B_\Omega (\WTr_\XX^\Omega u) + \widehat\s^L_\Omega \arr g
,\\
v= -\widehat\D^\B_\Omega (\WTr_\XX^\Omega v) + \widehat\s^L_\Omega (\MM_\B^\Omega v)
&= -\widehat\D^\B_\Omega (\WTr_\XX^\Omega v) + \widehat\s^L_\Omega \arr g
.\end{align*}
In particular, $\MM_\B^\Omega \widehat\D^\B_\Omega (\WTr_\XX^\Omega u) = \MM_\B^\Omega \widehat\D^\B_\Omega (\WTr_\XX^\Omega v)$. If $\MM_\B^\Omega \widehat\D^\B_\Omega$ is one-to-one, then $\WTr_\XX^\Omega u = \WTr_\XX^\Omega v$. Another application of Condition~\condG\ yields that $u=v$.
Now, suppose that we have the estimate $\doublebar{\arr f}_{\DD_\XX}\leq C \doublebar{\MM_\B^\Omega \widehat\D^\B_\Omega \arr f}_{\NN_\XX}$. (This implies injectivity of $\MM_\B^\Omega \widehat\D^\B_\Omega$.) Let $u\in {\XX^\Omega}$ with $(\widehat L u)\big\vert_\Omega=0$; we want to show that $\doublebar{u}_{\XX^\Omega}\leq C \doublebar{\MM_\B^\Omega u}_{\NN_\XX}$.
By Condition~\condG, and because $\XX^\Omega$ is a quasi-Banach space,
\begin{equation*}\doublebar{u}_{\XX^\Omega}\leq C\doublebar{\widehat\D^\B_\Omega (\WTr_\XX^\Omega u)}_{\XX^\Omega} + C\doublebar{\widehat\s^L_\Omega (\MM_\B^\Omega u)}_{\XX^\Omega}.\end{equation*}
By Conditions~\condD\ and~\condS,
\begin{equation*}\doublebar{u}_{\XX^\Omega}\leq C\doublebar{\WTr_\XX^\Omega u}_{\DD_\XX} + C\doublebar{\MM_\B^\Omega u}_{\NN_\XX}.\end{equation*}
Applying our estimate on $\MM_\B^\Omega \widehat\D^\B_\Omega$, we see that
\begin{equation*}\doublebar{u}_{\XX^\Omega}\leq C\doublebar{\MM_\B^\Omega \widehat\D^\B_\Omega\WTr_\XX^\Omega u}_{\NN_\XX} + C\doublebar{\MM_\B^\Omega u}_{\NN_\XX}.\end{equation*}
By Condition~\condG,
$\widehat\D^\B_\Omega(\WTr_\XX^\Omega u) =\widehat\s^L_\Omega(\MM_\B^\Omega u) - u $, and so
\begin{equation*}\doublebar{u}_{\XX^\Omega}\leq C\doublebar{\MM_\B^\Omega \widehat \s^L_\Omega\MM_\B^\Omega u}_{\NN_\XX} + C\doublebar{\MM_\B^\Omega u}_{\NN_\XX}.\end{equation*}
Another application of Condition~\condS\ and of Condition~\condM\ completes the proof. \end{proof}
\subsection{From well posedness to invertibility}
\label{sec:well-posed:invertible}
We are now interested in the converse results. That is, we have shown that results for layer potentials imply results for boundary value problems; we would like to show that results for boundary value problems imply results for layer potentials.
Notice that the above results were built on the Green's formula~\condG. The converse results will be built on jump relations, as in Lemma~\ref{lem:jump}. Recall that jump relations treat the interplay between layer potentials in a domain and in its complement; thus we will need to impose conditions in both domains.
In this section we will need the following spaces and operators.
\begin{itemize}
\item Quasi-Banach spaces $\XX^\UU$, $\XX^\VV$, $\DD_\XX$ and $\NN_\XX$.
\item Linear operators $u\mapsto(\widehat L u)\big\vert_\UU$ and $u\mapsto(\widehat L u)\big\vert_\VV$ acting on~$\XX^\UU$ and~$\XX^\VV$.
\item Linear operators $\WTr_\XX^\UU $, $\MM_\B^\UU$, $\WTr_\XX^\VV $, and $\MM_\B^\VV$ acting on $\{u\in\XX^\UU:(\widehat L u)\big\vert_\UU =0\}$ or $\{u\in\XX^\VV:(\widehat L u)\big\vert_\VV =0\}$.
\item Linear operators $\widehat\D^\B_\UU$, $\widehat\D^\B_\VV$ acting on $\DD_\XX$ and $\widehat\s^L_\UU$, $\widehat\s^L_\VV$ acting on $\NN_\XX$.
\end{itemize}
In the applications $\UU$ is an open set in $\R^d$ or in a smooth manifold, and $\VV=\R^d\setminus\overline\UU$ is the interior of its complement. The space $\XX^\VV$ is then a space of functions defined in~$\VV$ and is thus a different space from $\XX^\UU$. However, we emphasize that we have defined only one space $\DD_\XX$ of Dirichlet boundary values and one space $\NN_\XX$ of Neumann boundary values; that is, the traces from both sides of the boundary must lie in the same spaces.
We will often use the following conditions.
\begin{description}
\item[\hypertarget{ConditionTT}{\condTTname}, \hypertarget{ConditionMM}{\condMMname}, \hypertarget{ConditionSS}{\condSSname}, \hypertarget{ConditionDD}{\condDDname}, \hypertarget{ConditionGG}{\condGGname}] Condition \condT, \condM, \condS, \condD, or \condG\ holds for both $\Omega=\UU$ and $\Omega=\VV$.
\item[\hypertarget{ConditionJScts}{\condJSctsname}] If $\arr g\in\NN_\XX$, then we have the continuity relation
\begin{align*}
\WTr_\XX^\UU (\widehat\s^L_\UU \arr g) -\WTr_\XX^\VV (\widehat\s^L_\VV \arr g)
&=0
.\end{align*}
\item[\hypertarget{ConditionJDcts}{\condJDctsname}] If $\arr f\in\DD_\XX$, then we have the continuity relation
\begin{align*}
\MM_\B^\UU (\widehat\D^\B_\UU\arr f) - \MM_\B^\VV (\widehat\D^\B_\VV\arr f )
&=0
.\end{align*}
\item[\hypertarget{ConditionJSjump}{\condJSjumpname}] If $\arr g\in\NN_\XX$, then we have the jump relation
\begin{align*}
\MM_\B^\UU (\widehat\s^L_\UU \arr g)
+\MM_\B^\VV (\widehat\s^L_\VV\arr g)
&=\arr g
.\end{align*}
\item[\hypertarget{ConditionJDjump}{\condJDjumpname}] If $\arr f\in\DD_\XX$, then we have the jump relation
\begin{align*}
\WTr_\XX^\UU (\widehat\D^\B_\UU\arr f) +\WTr_\XX^\VV (\widehat\D^\B_\VV\arr f)
&=-\arr f
.\end{align*}
\end{description}
We now move from well posedness of boundary value problems to invertibility of layer potentials.
The following theorem uses an argument of Verchota from \cite{Ver84}.
\begin{thm}\label{thm:unique:injective}
Assume that Conditions~\condMM,
\condSS, \condJScts, and \condJSjump\ are valid.
Suppose that, for any $\arr f\in\DD_\XX$, there is at most one solution $u_+$ or $u_-$ to each of the two Dirichlet problems
\begin{gather*}
(\widehat L u_+)\big\vert_\UU = 0, \quad \WTr_\XX^\UU u_+ = \arr f, \quad u_+\in\XX^\UU
,\\
(\widehat L u_-)\big\vert_\VV = 0, \quad \WTr_\XX^\VV u_- = \arr f, \quad u_-\in\XX^\VV
.\end{gather*}
Then $\WTr_\XX^{\UU}\widehat\s^L_\UU:\NN_\XX\mapsto\DD_\XX$ is one-to-one.
If in addition there is a constant $C_0$ such that every $u_+\in\XX^\UU$ and $u_-\in\XX^\VV$ with $(\widehat L u_+)\big\vert_\UU =0$ and $(\widehat L u_-)\big\vert_\VV = 0$ satisfies
\begin{equation*}\doublebar{u_+}_{\XX^\UU}\leq C_0\doublebar{\WTr_\XX^\UU u_+}_{\DD_\XX},
\quad
\doublebar{u_-}_{\XX^\VV}\leq C_0\doublebar{\WTr_\XX^\VV u_-}_{\DD_\XX},
\end{equation*}
then there is a constant $C_1$ such that the bound $\doublebar{\arr g}_{\NN_\XX} \leq C_1 \doublebar{\WTr_\XX^{\UU}\widehat\s^L_\UU\arr g}_{\DD_\XX}$ is valid for all $\arr g\in\NN_\XX$.
Similarly, assume that Conditions~
\condTT, \condDD, \condJDcts, and \condJDjump\ are valid. Suppose that for any $\arr g\in\NN_\XX$, there is at most one solution $u_\pm$ to each of the two Neumann problems
\begin{gather*}
(\widehat L u_+)\big\vert_\UU = 0, \quad \MM_\B^\UU u_+ = \arr g, \quad u_+\in\XX^\UU
,\\
(\widehat L u_-)\big\vert_\VV = 0, \quad \MM_\B^\VV u_- = \arr g, \quad u_-\in\XX^\VV
.\end{gather*}
Then $\MM_\B^\UU\widehat\D^\B_\UU:\DD_\XX\mapsto\NN_\XX$ is one-to-one.
If there is a constant $C_0$ such that every $u_+\in\XX^\UU$ and $u_-\in\XX^\VV$ with $(\widehat L u_+)\big\vert_\UU =0$ and $(\widehat L u_-)\big\vert_\VV = 0$ satisfies
\begin{equation*}\doublebar{u_+}_{\XX^\UU}\leq C_0\doublebar{\MM_\B^\UU u_+}_{\NN_\XX},
\quad
\doublebar{u_-}_{\XX^\VV}\leq C_0\doublebar{\MM_\B^\VV u_-}_{\NN_\XX},
\end{equation*}
then there is a constant $C_1$ such that the bound $\doublebar{\arr f}_{\DD_\XX} \leq C_1 \doublebar{\MM_\B^\UU\widehat\D^\B_\UU\arr f}_{\NN_\XX}$ is valid for all $\arr f\in\DD_\XX$.
\end{thm}
\begin{proof}
As in the proof of Theorem~\ref{thm:injective:unique}, we will consider only the relationship between the Neumann problem and the double layer potential.
Let $\arr f$, $\arr h\in\DD_\XX$. By Condition~\condDD, $u_+=\widehat\D^\B_\UU\arr f\in\XX^\UU$ and $v_+=\widehat\D^\B_\UU \arr h\in\XX^\UU$. If $\MM_\B^\UU \widehat\D^\B_\UU \arr f = \MM_\B^\UU \widehat\D^\B_\UU \arr h$, then $\MM_\B^\UU u_+=\MM_\B^\UU v_+$. Because there is at most one solution to the Neumann problem, we must have that $u_+=v_+$, and in particular $\WTr_\XX^\UU \widehat\D^\B_\UU \arr f = \WTr_\XX^\UU \widehat\D^\B_\UU \arr h$.
By Condition~\condJDcts, we have that $\MM_\B^\VV \widehat\D^\B_\VV \arr f = \MM_\B^\VV \widehat\D^\B_\VV \arr h$. By Condition~\condDD\ and uniqueness of solutions to the $\VV$-Neumann problem, $\WTr_\XX^\VV \widehat\D^\B_\VV \arr f = \WTr_\XX^\VV \widehat\D^\B_\VV \arr h$. By \condJDjump, we have that
\begin{equation*}\arr f
= \WTr_\XX^\UU \widehat\D^\B_\UU \arr f + \WTr_\XX^\VV \widehat\D^\B_\VV \arr f
= \WTr_\XX^\UU \widehat\D^\B_\UU \arr h + \WTr_\XX^\VV \widehat\D^\B_\VV \arr h
=\arr h\end{equation*}
and so $\MM_\B^\UU \widehat\D^\B_\UU$ is one-to-one.
Now assume the stronger condition, that is, that $C_0<\infty$. Because $\DD_\XX$ is a quasi-Banach space, if $\arr f\in\DD_\XX$ then by Condition~\condJDjump,
\begin{equation*}\doublebar{\arr f}_{\DD_\XX} \leq C\bigl(\doublebar{\WTr_\XX^\UU \widehat\D^\B_\UU \arr f}_{\DD_\XX}+\doublebar{\WTr_\XX^\VV \widehat\D^\B_\VV \arr f}_{\DD_\XX}\bigr).\end{equation*}
By Condition~\condDD, $\widehat\D^\B_\UU \arr f\in\XX^\UU$ with $(\widehat L(\widehat\D^\B_\UU \arr f))\big\vert_\UU=0$. Thus by Condition~\condTT, $\doublebar{\WTr_\XX^\UU \widehat\D^\B_\UU \arr f}_{\DD_\XX}\leq C\doublebar{\widehat\D^\B_\UU \arr f}_{\XX^\UU}$, and similarly in~$\VV$.
Thus,
\begin{equation*}\doublebar{\arr f}_{\DD_\XX} \leq C\bigl(\doublebar{\widehat\D^\B_\UU \arr f}_{\XX^\UU}+\doublebar{\widehat\D^\B_\VV \arr f}_{\XX^\VV}\bigr).\end{equation*}
By definition of~$C_0$,
\begin{equation*}\doublebar{\widehat\D^\B_\UU \arr f}_{\XX^\UU}\leq C_0\doublebar{\MM_\B^\UU\widehat\D^\B_\UU \arr f}_{\NN_\XX}\quad\text{and}\quad\doublebar{\widehat\D^\B_\VV \arr f}_{\XX^\VV}\leq C_0\doublebar{\MM_\B^\VV\widehat\D^\B_\VV \arr f}_{\NN_\XX}.\end{equation*} By Condition~\condJDcts, $\MM_\B^\VV\widehat\D^\B_\VV \arr f=\MM_\B^\UU\widehat\D^\B_\UU \arr f$ and so
\begin{equation*}\doublebar{\arr f}_{\DD_\XX} \leq 2CC_0\doublebar{\MM_\B^\UU\widehat\D^\B_\UU \arr f}_{\NN_\XX}\end{equation*}
as desired.
\end{proof}
Finally, we consider the relationship between existence and surjectivity. The following argument appeared first in \cite{BarM13}.
\begin{thm} \label{thm:existence:surjective}
Assume that Conditions~\condMM, \condGG,
\condJScts, and \condJDjump\ are valid.
Suppose that, for any $\arr f\in\DD_\XX$, there is at least one pair of solutions $u_\pm$ to the pair of Dirichlet problems
\begin{equation}\label{eqn:Dirichlet:ES}
(\widehat L u_+)\big\vert_\UU = (\widehat L u_-)\big\vert_\VV = 0, \>\>\> \WTr_\XX^\UU u_+ = \WTr_\XX^\VV u_- = \arr f, \>\>\> u_+\in\XX^\UU
, \>\>\> u_-\in\XX^\VV
.\end{equation}
Then $\Tr_\XX^{\UU}\widehat\s^L_\UU:\NN_\XX\mapsto\DD_\XX$ is onto.
Suppose that there is some $C_0<\infty$ such that, if $\arr f\in\DD_\XX$, there is some pair of solutions $u_\pm$ to the problem~\eqref{eqn:Dirichlet:ES} with
\begin{equation*}\doublebar{u_+}_{\XX^\UU}\leq C_0\doublebar{\arr f}_{\DD_\XX},
\quad
\doublebar{u_-}_{\XX^\VV}\leq C_0\doublebar{\arr f}_{\DD_\XX}.
\end{equation*}
Then there is a constant $C_1$ such that for any $\arr f\in\DD_\XX$, there is a $\arr g\in\NN_\XX$ such that ${\Tr_\XX^{\UU}\widehat\s^L_\UU\arr g}=\arr f$ and
$\doublebar{\arr g}_{\NN_\XX} \leq C_1 \doublebar{\arr f}_{\DD_\XX}$.
Similarly, assume that Conditions~\condTT, \condGG,
\condJDcts, and \condJSjump\ are valid. Suppose that for any $\arr g\in\NN_\XX$, there is at least one pair of solutions $u_\pm$ to the pair of Neumann problems
\begin{equation}\label{eqn:Neumann:ES}
(\widehat L u_+)\big\vert_\UU = (\widehat L u_-)\big\vert_\VV = 0, \>\>\> \MM_\B^\UU u_+ = \MM_\B^\VV u_- = \arr g, \>\>\> u_+\in\XX^\UU
, \>\>\> u_-\in\XX^\VV
.\end{equation}
Then $\MM_\B^\UU\widehat\D^\B_\UU:\DD_\XX\mapsto\NN_\XX$ is onto.
Suppose that there is some $C_0<\infty$ such that, if $\arr g\in\NN_\XX$, there is some pair of solutions $u_\pm$ to the problem~\eqref{eqn:Neumann:ES} with
\begin{equation}\label{eqn:Neumann:bound}\doublebar{u_+}_{\XX^\UU}\leq C_0\doublebar{\arr g}_{\NN_\XX},
\quad
\doublebar{u_-}_{\XX^\VV}\leq C_0\doublebar{\arr g}_{\NN_\XX}.
\end{equation}
Then there is a constant $C_1$ such that for any $\arr g\in\NN_\XX$, there is an $\arr f\in\DD_\XX$ such that ${\MM_\B^\UU\widehat\D^\B_\UU\arr f}=\arr g$ and
$\doublebar{\arr f}_{\DD_\XX} \leq C_1 \doublebar{\arr g}_{\NN_\XX}$.
\end{thm}
\begin{proof}
As usual we present the proof for the Neumann problem.
Choose some $\arr g\in\NN_\XX$ and let $u_+$ and $u_-$ be the solutions to the problem~\eqref{eqn:Neumann:ES} assumed to exist. (If $C_0<\infty$ we further require that the bound~\eqref{eqn:Neumann:bound} be valid.)
By Condition~\condTT, $\arr f_+=\WTr_\XX^\UU u_+$ and $\arr f_-=\WTr_\XX^\VV u_-$ exist and lie in~$\DD_\XX$. By Condition~\condGG,
\begin{align*}
2\arr g &= \MM_\B^\UU u_+ + \MM_\B^\VV u_-
\\&= \MM_\B^\UU (-\widehat\D^\B_\UU \arr f_+ + \widehat\s^L_\UU \arr g) + \MM_\B^\VV (-\widehat\D^\B_\VV \arr f_- + \widehat\s^L_\VV \arr g)
.\end{align*}
By Conditions~\condJDcts\ and \condJSjump\ and linearity of the operators $\MM_\B^\UU$, $\MM_\B^\VV$, we have that
\begin{align*}
2\arr g
&=
-\MM_\B^\UU\widehat\D^\B_\UU \arr f_+
+ \MM_\B^\UU\widehat\s^L_\UU \arr g
-\MM_\B^\UU\widehat\D^\B_\UU \arr f_-
+\arr g- \MM_\B^\UU\widehat\s^L_\UU \arr g
\\&=
\arr g -\MM_\B^\UU \widehat\D^\B_\UU (\arr f_+ + \arr f_-)
.\end{align*}
Thus, $\MM_\B^\UU \widehat\D^\B_\UU$ is surjective. If $C_0<\infty$, then because $\DD_\XX$ is a quasi-Banach space and by Condition~\condTT,
\begin{equation*}\doublebar{\arr f_+ + \arr f_-}_{\DD_\XX}
\leq C C_0\doublebar{\arr g}_{\NN_\XX}
\end{equation*}
as desired.
\end{proof}
\newcommand{\etalchar}[1]{$^{#1}$}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\end{document} |
\begin{document}
\title{Restrained double Roman domination of a graph}
\author{{\small Doost Ali Mojdeh$^{a}$\thanks{Corresponding author}, Iman Masoumi$^{b}$, Lutz Volkmann$^{c}$}\\ \\ {\small $^{a}$Department of
Mathematics, Faculty of Mathematical Sciences}\\{\small University of Mazandaran, Babolsar, Iran}\\{\small email: [email protected]}\\ \\
{\small $^{b}$Department of Mathematics, University of Tafresh}\\{\small Tafresh, Iran}\\ {\small email: i\[email protected]}\\ \\
{\small $^{c}$Lehrstuhl II f\"{u}r Mathematik, RWTH Aachen University}\\ {\small 52056 Aachen, Germany}\\ {\small email: [email protected]} }
\date{}
\maketitle
\begin{abstract}
For a graph $G=(V,E)$, a restrained double Roman dominating function is a function $f:V\rightarrow\{0,1,2,3\}$ having the property that if $f(v)=0$,
then the vertex $v$ must have at least two neighbors
assigned $2$ under $f$ or one neighbor $w$ with $f(w)=3$, and if $f(v)=1$, then the vertex $v$ must have at least one neighbor $w$ with $f(w)\geq2$,
and at the same time, the subgraph $G[V_0]$ induced by the
vertices with label zero has no isolated vertex. The weight of a restrained double Roman dominating function $f$ is the sum
$f(V)=\sum_{v\in V}f(v)$, and the minimum weight of a restrained
double Roman dominating function on $G$ is the restrained double Roman domination number of $G$. We initiate the study of restrained double Roman
domination by proving that the problem of computing
this parameter is $NP$-hard. Then we present an upper bound on the restrained double Roman domination number of a connected graph $G$ in terms of the
order of $G$ and characterize the graphs attaining this bound.
We study the restrained double Roman domination versus the restrained Roman domination. Finally, we characterize all trees $T$ attaining the exhibited
bound.
\end{abstract}
\textbf{2010 Mathematics Subject Classification:} 05C69
\textbf{Keywords}: Domination, restrained Roman domination, restrained double Roman domination.
\section{Introduction}
Throughout this paper, we consider $G$ as a finite simple graph with vertex set $V=V(G)$ and edge set $E=E(G)$. We use \cite{west} as a
reference for terminology and notation which are not explicitly defined
here. The open neighborhood of a vertex $v$ is denoted by $N(v)$, and its closed neighborhood is
$N[v]=N(v)\cup \{v\}$. The minimum and maximum degrees of $G$ are denoted by $\delta(G)$ and $\Delta(G)$,
respectively. Given subsets $A,B \subseteq V(G)$, by $[A,B]$ we mean the set of all edges with one end
point in $A$ and the other in $B$. For a given subset $S\subseteq V(G)$, by $G[S]$ we represent the subgraph
induced by $S$ in $G$. A tree $T$ is a double star if it contains exactly two vertices that are not leaves.
A double star with $p$ and $q$ leaves attached to each support vertex, respectively, is denoted by
$S_{p,q}$. A wounded spider is a tree obtained from subdividing at most $n-1$ edges of a star $K_{1,n}$.
A wounded spider obtained by subdividing $t \le n-1$ edges of $K_{1,n}$ is denoted by $ws(1,n, t)$.\\
A set $S\subseteq V(G)$ is called a dominating set if every vertex not in $S$ has a neighbor in $S$. The domination number
$\gamma(G)$ of $G$ is the minimum cardinality among all dominating sets of $G$. A
restrained dominating set ($RD$ set) in a graph $G$ is a dominating set $S$ in $G$ for which every vertex
in $V(G)-S$ is adjacent to another vertex in $V(G)-S$. The restrained domination number ($RD$
number) of $G$, denoted by
$\gamma_r(G)$, is the smallest cardinality of an $RD$ set of $G$. This concept was
formally introduced in \cite{domke} (albeit, it was indirectly introduced in \cite{hattingh, haynes}).
Several variants of restrained domination have already been studied. For instance, a total restrained dominating set of a graph $G$ is an $RD$ set of $G$
for which the subgraph induced by the dominating set
has no isolated vertex; see \cite{chen}. A secure restrained dominating set ($SRDS$) is a set
$S \subseteq V(G)$ such that $S$ is a restrained dominating set and for every $u \in V\setminus S$ there exists $v \in S\cap N(u)$ such that
$(S\setminus \{v\})\cup \{u\}$ is a restrained dominating set \cite{roushini}.
The restrained Roman dominating function is a Roman dominating function $f: V(G) \to \{0,1,2\}$ such that the subgraph induced by the set
$\{v\in V(G): f(v)=0\}$ has no isolated vertex, \cite{roushini1}.
The restrained Italian dominating function ($RIDF$) is an Italian dominating function $f: V(G) \to \{0,1,2\}$ such that the subgraph induced by the
set $\{v\in V(G): f(v)=0\}$ has no isolated vertex, \cite{samadi}.
These results motivate us to consider a double Roman dominating function $f$ for which the subgraph induced by $V_0^f$ has no isolated vertex;
this new parameter, called restrained double Roman domination, is the one investigated in this paper.
Beeler \emph{et al}. (2016) \cite{bhh} introduced the concept of double Roman domination of a graph.\\
If $ f:V(G)\rightarrow \{0,1,2,3\}$ is a function, then let $(V_0,V_1,V_2,V_3)$ be the ordered partition of $V(G)$ induced by $f$, where
$V_i=\{v\in V(G):f(v)=i\}$
for $i=0,1,2,3$. There is a 1-1 correspondence between the function $f$ and the ordered partition $(V_0,V_1,V_2,V_3)$. So we will write
$f=(V_0,V_1,V_2,V_3)$.
A double Roman dominating function (DRD function for short) of a graph $G$ is a function $ f:V(G)\rightarrow \{0,1,2,3\}$ for which the following
conditions are satisfied.
\begin{itemize}
\item[(a)] If $f(v)=0$, then the vertex $v$ must have at least two neighbors in $V_2$ or one neighbor in $V_3$.
\item[(b)] If $f(v)=1$, then the vertex $v$ must have at least one neighbor in $V_2\cup V_3$.
\end{itemize}
This parameter was also studied in \cite{al}, \cite{jr}, \cite{mojdeh} and \cite{zljs}.
Accordingly, a restrained double Roman dominating function ({$RDRD$} function for short) is a double Roman dominating function
$f:V\rightarrow\{0,1,2,3\}$ with the additional property that
the subgraph $G[V_0]$ induced by $V_0$ (the vertices with label zero under $f$) has no isolated vertex. The restrained double Roman domination
number ($RDRD$ number) $\gamma_{rdR}(G)$ is the minimum weight of an $RDRD$ function $f$ of $G$. For the sake of convenience, an $RDRD$
function $f$ of a graph $G$ with weight $\gamma_{rdR}(G)$ is called a $\gamma_{rdR}(G)$-function.
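To illustrate the definition, consider the path $P_4=v_1v_2v_3v_4$ and the function $f=(V_0,V_1,V_2,V_3)$ with $V_3=\{v_1,v_4\}$, $V_0=\{v_2,v_3\}$ and $V_1=V_2=\emptyset$. Each of $v_2$ and $v_3$ has a neighbor with label $3$, and the edge $v_2v_3$ shows that $G[V_0]$ has no isolated vertex, so $f$ is an $RDRD$ function of weight $6$; by Theorem \ref{the-path} below, this is in fact optimal, that is, $\gamma_{rdR}(P_4)=6$.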
This paper is organized as follows. We prove that the restrained double Roman domination problem is $NP$-hard even for general graphs. Then,
we present an upper bound on the restrained double Roman domination number of a connected graph $G$ in terms of the order of $G$ and characterize
the graphs attaining this bound.
We study the restrained double Roman domination versus the restrained Roman domination. Finally, we characterize the trees $T$ attaining the
exhibited bounds on the restrained double Roman domination number of $T$.
\section{Complexity and computational issues}
We consider the problem of deciding whether a graph $G$ has an $RDRD$ function of weight at most
a given integer; it is stated formally as the $RDRD$ problem below.\\
We shall prove $NP$-completeness by a reduction from the following
vertex cover decision problem, which is known to be $NP$-complete.
\vspace{3mm}
\framebox{
\parbox{1\linewidth}{
VERTEX COVER DECISION PROBLEM\\
INSTANCE: A graph $G = (V,E)$ and a positive integer $p \le |V (G)|$.\\
QUESTION: Does there exist a subset $C \subseteq V (G)$ of size at most $p$ such that
for each edge $xy \in E(G)$ we have $x \in C$ or $y \in C$?}}
\vspace{3mm}
\begin{theorem}
\emph{(Karp \cite{karp} )}\label{the-karp} Vertex cover decision problem is $NP$-complete for general
graphs.
\end{theorem}
\vspace{3mm}
\framebox{
\parbox{1\linewidth}{
RESTRAINED DOUBLE ROMAN DOMINATION problem ($RDRD$ problem)\\
INSTANCE: A graph $G$ and an integer $p\leq |V(G)|$.\\
QUESTION: Is there an $RDRD$ function $f$ for $G$ of weight at most $p$?}}\\
\vspace{3mm}
\begin{theorem}\label{the-NP}
The restrained double Roman domination problem is $NP$-complete for general
graphs.
\end{theorem}
\begin{proof}
We transform the vertex cover decision problem for general graphs to the
restrained double Roman domination decision problem for general graphs. For a given
graph $G = (V(G), E(G))$, let $ m= 3|V (G)| + 4$ and construct a graph $H = (V(H),E(H))$ as
follows. Let $V(H) = \{x_i : 1 \le i \le m\} \cup \{y\} \cup V (G) \cup \{u_{j_i} : 1 \le i \le m\ \mbox{for\ each}\ e_j \in E(G)\}$, and let
$$E(H)=\{x_ix_{i+1}: (\mbox{mod}\ m)\ 1\le i\le m\}$$ $$\ \ \ \cup \{x_iy: 1\le i\le m\} \cup \{vy: v\in V(G)\}$$ $$\ \ \
\cup \{vu_{j_i}: v\ \mbox{is\ the\ vertex\ of\ edge}\ e_j\in E(G)\ \mbox{and}\ 1\le i \le m \}$$ $$\ \ \ \cup \{u_{j_i}u_{j_{(i+1)}}\
(\mbox{mod}\ m) : 1 \le i \le m\}.$$
Figure 1 shows the graph $H$ obtained from $G = P_4=a_1a_2a_3a_4$ by the above procedure. Note that, since $m= 3|V (G)| + 4=16$ for this example and $G$
\begin{figure}
\caption{The graph $G=P_4$ and $H$.}
\label{fig:g1-g2-g3}
\end{figure}
has three edges $e_1, e_2, e_3$,
$$H[\{x_i: 1\le i\le 16\}] \cong H[\{u_{1_i}: 1\le i\le 16\}] \cong H[\{u_{2_i}: 1\le i\le 16\}]\cong H[\{u_{3_i}: 1\le i\le 16\}]\cong C_{16}$$
$y$ is adjacent to $x_i$ for $1\le i \le 16$ and $a_l$ for $1\le l\le 4$; $u_{j_i}$ is adjacent to both $a_j$ and $a_{j+1}$ for $1\le j\le 3$ and
$1\le i\le 16$.
We claim that $G$ has a vertex cover of size at most $k$ if and only if $H$ has an RDRDF with weight at most $3k+3$. Hence the $NP$-completeness of the
restrained double Roman domination problem in general
graphs will be equivalent to the $NP$-completeness of vertex cover problem. First, if $G$ has a vertex cover $C$ of size at most $k$, then the
function $f$ defined on $V(H)$ by $f(v) = 3$ for
$v \in C \cup \{y\}$ and $f(v) = 0$ otherwise,
is an RDRDF with weight at most $3k + 3$. On the other hand, suppose that $g$ is an RDRDF on $H$ with weight at most $3k + 3$. If $g(y)\ne 3$, then
there exist two cases.
Case 1. Let $g(y)\in \{0,1\}$. Then
$$\sum_{i=1}^{m}g(x_i)\ge \gamma_{rdR}(C_m) \ge \gamma_{dR}(C_m)\ge m >3|V(G)|+3 \ge 3k+3$$ that is a contradiction.
Case 2. Let $g(y)=2$ and $C_m=\{x_ix_{i+1}: (\mod\ m)\ 1\le i\le m\}$. Then $g(C_m)\ge 2m/3$ and $g(H)\ge 2m/3 +2k+2 = 2(3|V (G)| + 4)/3 +2k+2\ge
4k+14/3> 3k+3$ which is a contradiction. Thus $g(y) = 3$. Similarly, we have $g(u) = 3$ or $g(v) = 3$
for any $e = uv \in E(G)$. Therefore $C = \{v \in V : g(v) = 3\}$ is a vertex cover of $G$ and
$3|C| + 3 \le w(g) \le 3k + 3$. Consequently, $|C| \le k$.
\end{proof}
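The construction of $H$ from $G$ in the proof above is entirely mechanical. The following Python sketch is purely illustrative and not part of the original argument; the function name and the vertex encodings are ad hoc, and the vertices of $G$ are assumed to be arbitrary hashable labels.
\begin{verbatim}
def build_H(vertices, edges):
    """Build the graph H used in the reduction from G = (vertices, edges)."""
    m = 3 * len(vertices) + 4
    V, E = set(), set()

    def add_edge(a, b):
        E.add(frozenset((a, b)))

    # the cycle x_1 ... x_m and the vertex y, joined to every x_i
    xs = [('x', i) for i in range(1, m + 1)]
    y = 'y'
    V.update(xs)
    V.add(y)
    for i in range(m):
        add_edge(xs[i], xs[(i + 1) % m])  # x_i x_{i+1}, indices mod m
        add_edge(xs[i], y)
    # every vertex of G is joined to y
    for v in vertices:
        V.add(('v', v))
        add_edge(('v', v), y)
    # for each edge e_j = uv of G, a cycle u_{j,1} ... u_{j,m},
    # all of whose vertices are joined to both u and v
    for j, (u, v) in enumerate(edges, start=1):
        us = [('u', j, i) for i in range(1, m + 1)]
        V.update(us)
        for i in range(m):
            add_edge(us[i], us[(i + 1) % m])
            add_edge(us[i], ('v', u))
            add_edge(us[i], ('v', v))
    return V, E

# Example: G = P_4 with vertices a1,...,a4 as in Figure 1;
# then m = 16 and H has 69 vertices and 180 edges.
V_H, E_H = build_H(['a1', 'a2', 'a3', 'a4'],
                   [('a1', 'a2'), ('a2', 'a3'), ('a3', 'a4')])
print(len(V_H), len(E_H))
\end{verbatim}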
\section{$RDRD$ number of some graphs}
In this section we investigate the exact value of the restrained double Roman domination number of some graphs.
\begin{observation}\label{the-com-par} For complete graph $K_n$ and complete bipartite graph $K_{m,n}$,\\
\emph{(i)} $\gamma_{rdR}(K_n)=3$ for $n\ge 2$.\\
\emph{(ii)} $\gamma_{rdR}(K_{n,m})=6$ for $m,n\ge 2$.\\
\emph{(iii)} $\gamma_{rdR}(K_{1,m})=m+2.$\\
\emph{(iv)} $\gamma_{rdR}(K_{n_1,n_2,\cdots, n_m})=\left\{
\begin{array}{ll}
3, & \mbox{if}\ \min\{n_1 ,n_2,\cdots, n_m\}=1,\\
6, & \hbox{otherwise.}
\end{array}
\right.$
\end{observation}
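The values above, as well as the path and cycle values established in Theorems \ref{the-path} and \ref{the-cycle} below, can be confirmed mechanically for small graphs. The following brute-force Python sketch is not taken from the paper; it simply enumerates all $4^n$ labelings of a graph given as an adjacency list, so it is intended only as a sanity check on very small graphs.
\begin{verbatim}
from itertools import product

def is_rdrd(adj, f):
    """Check the restrained double Roman conditions for the labeling f."""
    for v, nbrs in enumerate(adj):
        if f[v] == 0:
            # needs two neighbors labeled 2 or one neighbor labeled 3 ...
            if sum(f[u] == 2 for u in nbrs) < 2 and not any(f[u] == 3 for u in nbrs):
                return False
            # ... and, since G[V_0] has no isolated vertex, a neighbor labeled 0
            if not any(f[u] == 0 for u in nbrs):
                return False
        elif f[v] == 1 and not any(f[u] >= 2 for u in nbrs):
            # a vertex labeled 1 needs a neighbor labeled at least 2
            return False
    return True

def gamma_rdrd(adj):
    """Minimum weight of an RDRD function, by exhaustive search."""
    n = len(adj)
    return min(sum(f) for f in product(range(4), repeat=n) if is_rdrd(adj, f))

# P_4 and C_5 as adjacency lists on the vertex set {0, ..., n-1}
p4 = [[1], [0, 2], [1, 3], [2]]
c5 = [[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]
print(gamma_rdrd(p4), gamma_rdrd(c5))  # prints 6 7
\end{verbatim}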
\begin{theorem}\label{the-path}
For a path $P_n$ $(n\geq 4)$, $\gamma_{rdR}(P_n)=n+2$.\\
\end{theorem}
\begin{proof}
Assume that $n\ge 4$ and $P_n=v_1v_2\cdots v_n$. Define $h:V(P_n) \to \{0,1,2,3\}$ by $h(v_{3i+2})=3$ for $0\le i \le n/3-1,\ h(v_{1})=h(v_n)=1$
and $h(v)=0$ otherwise, whenever
$n \equiv 0 \,({\rm mod}\, 3)$.\\
Define $h:V(P_n) \to \{0,1,2,3\}$ by $h(v_{3i+1})=3$ for $0\le i \le (n-1)/3$ and $h(v)=0$ otherwise, whenever $n \equiv 1\, ({\rm mod}\, 3)$. \\
Define $h:V(P_n) \to \{0,1,2,3\}$ by $h(v_{3i+2})=3$ for $0\le i \le (n-2)/3,\
h(v_{1})=1$ and $h(v)=0$ otherwise, whenever $n \equiv 2\, ({\rm mod}\, 3)$. Therefore $\gamma_{rdR}(P_n)\le n+2$ for $n\ge 4$.
Now we prove the reverse inequality. It is straightforward to verify that $\gamma_{rdR}(P_n)=n+2$ for $4\le n\le 6$. For $n\ge 7$ we proceed by
induction on $n$. Let $n\ge 7$ and
let the reverse inequality be true for every path of order less than $n$. Assume that $f = (V_0, V_1, V_2, V_3)$ is a $\gamma_{rdR}$-function of $P_n$.
Note that $f(v_n)\ne 0$, since a vertex with label $0$ needs both a neighbor with label $0$ and a neighbor with label at least $2$, which is impossible for the leaf $v_n$.
If $f(v_n)=1$, then $f(v_{n-1}) \ge 2$. Define $g: P_{n-1} \to \{0,1,2,3\}$, $g(v_i)=f(v_i)$ for $1\le i\le n-1$. Clearly, $g$ is an RDRD-function
of $P_{n-1}$. It follows from the induction hypothesis that
$$\gamma_{rdR}(P_n)=w(f)=w(g)+1\ge \gamma_{rdR}(P_{n-1})+1\ge (n-1)+2 +1\ge n+2.$$
If $f(v_{n}) =2$, then $f(v_{n-1})= 1$ and $f(v_{n-2})\ge 1$. Define $g: P_{n-2} \to \{0,1,2,3\}$, $g(v_i)=f(v_i)$ for $1\le i\le n-2$. Clearly,
$g$ is a $RDRD$-function of $P_{n-2}$. As above we obtain,
$$\gamma_{rdR}(P_n)=w(f)=w(g)+3\ge \gamma_{rdR}(P_{n-2})+3\ge (n-2)+2 +3=n+3.$$
If $f(v_{n}) =3$, then $f(v_{n-1})= 0$, $f(v_{n-2})= 0$ and $f(v_{n-3})= 3$. Define $g: P_{n-3} \to \{0,1,2,3\}$, $g(v_i)=f(v_i)$ for $1\le i\le n-3$.
Clearly, $g$ is a $RDRD$-function
of $P_{n-3}$. It also follows from the induction hypothesis that $$\gamma_{rdR}(P_n)=w(f)=w(g)+3\ge \gamma_{rdR}(P_{n-3})+3\ge (n-3)+2 +3=n+2.$$
Thus the proof is complete.\\
\end{proof}
\begin{theorem}\label{the-cycle}
For a cycle $C_n$, $(n\ge 3)$,
$\gamma_{rdR}(C_n)=\left\{
\begin{array}{ll}
n, & \mbox{if}\ n \equiv 0\ (\mbox{mod}\ 3), \\
n+2, & \hbox{otherwise.}
\end{array}
\right.$\\
\end{theorem}
\begin{proof}
Assume that $n\ge 3$ and $C_n=v_1v_2\cdots v_nv_1$. Define $h:V(C_n) \to \{0,1,2,3\}$ by $h(v_{3i})=3$ for $1\le i \le n/3$ and
$h(v)=0$ otherwise, whenever $n \equiv 0\, ({\rm mod}\, 3)$.\\
Define $h:V(C_n) \to \{0,1,2,3\}$ by $h(v_{3i+1})=3$ for $0\le i \le (n-1)/3$ and $h(v)=0$ otherwise, whenever $n \equiv 1\, ({\rm mod}\, 3)$. \\
Define $h:V(C_n) \to \{0,1,2,3\}$ by $h(v_{3i+2})=3$ for $0\le i \le (n-2)/3,\ h(v_{1})=1$ and $h(v)=0$ otherwise, whenever
$n \equiv 2\, ({\rm mod}\, 3)$. Therefore
$$\gamma_{rdR}(C_n)\le \left\{
\begin{array}{ll}
n, & \mbox{if}\ n \equiv 0\ (\mbox{mod}\ 3), \\
n+2, & \hbox{otherwise.}
\end{array}
\right.$$
Now we prove the reverse inequality. For $n \equiv 0 \,({\rm mod}\, 3)$, since $\gamma_{rdR}(C_n)\ge \gamma_{dR}(C_n)=n$ (see \cite{al, bhh}),
clearly the result holds.
Let $n \not \equiv 0\ (\mbox{mod}\ 3)$ and let $f = (V_0, V_1, V_2, V_3)$ be a $\gamma_{rdR}$-function of $C_n$. On the cycle, every vertex of weight $0$
has exactly two neighbors, one of which must have weight $0$ (by the restrained condition) and the other weight $3$; thus, if no two adjacent vertices had positive weight, the labels around the cycle would follow the repeating pattern $3,0,0$ and $n$ would be divisible by $3$.
Since $n \not \equiv 0\ (\mbox{mod}\ 3)$, there are two adjacent vertices $v_i, v_{i+1}$ in $C_n$ such that their weights are positive.
Now, if $f(v_i)\ge 2$ and $f(v_{i+1})\ge 2$, then by removing the edge $v_iv_{i+1}$, the resulting graph is $P_n$. Define
$g: P_{n} \to \{0,1,2,3\}$, $g(v_i)=f(v_i)$ for $1\le i\le n$. Clearly,
$g$ is an RDRD-function of $P_{n}$ with $w(g)=w(f)$. Since $w(g)\ge n+2$ then $w(f)\ge n+2$.\\
Let $f(v_i)\ge 2$ and $f(v_{i+1})= 1$. Then $f(v_{i+2})\ge 1$. Now remove the edge $v_{i+1}v_{i+2}$ and obtain a $P_n$. Define
$g: P_{n} \to \{0,1,2,3\}$, $g(v_i)=f(v_i)$ for $1\le i\le n$.
Clearly, $g$ is an RDRD-function of $P_{n}$ with $w(g)=w(f)$. Thus $w(f)\ge n+2$. \\
Let $f(v_i)=f(v_{i+1})= 1$. As above, we remove the edge $v_iv_{i+1}$, and the restriction of $f$ to the resulting graph $P_n$ is an RDRD-function $g$ with $w(g)=w(f)$.
That is $w(f)\ge n+2$. Therefore the proof is complete.
\end{proof}
\section{Upper bounds on the $RDRD$ number}
In this section we obtain sharp upper bounds on the restrained double Roman domination number of a graph.
\begin{proposition}\label{2n-1} Let $G$ be a connected graph of order $n\ge 2$. Then
$\gamma_{rdR}(G) \le 2n-1$, with equality if and only if $n=2$.
\end{proposition}
\begin{proof} If $w$ is a vertex of $G$, then define the function $f$ by $f(w)=1$ and $f(x)=2$ for $x\in V(G)\setminus\{w\}$.
Since $G$ is connected of order $n\ge 2$, we observe that $f$ is an RDRD function of $G$ of weight $2n-1$ and thus $\gamma_{rdR}(G) \le 2n-1$.
If $n\ge 3$, then $G$ contains a vertex $w$ with at least two neighbors $u$ and $v$. Now define the function $g$ by $g(u)=g(v)=1$ and $g(x)=2$ for $x\in V(G)\setminus\{u,v\}$.
Then $g$ is an RDRD function of $G$ of weight $2n-2$ and so $\gamma_{rdR}(G) \le 2n-2$ in this case. Since $\gamma_{rdR}(K_2)=3=2\cdot 2-1$, the proof is complete.
\end{proof}
\begin{proposition}\label{diam} Let $G$ be a connected graph of order $n\ge 2$. Then
$\gamma_{rdR}(G) \le 2n+1 - diam(G)$ and this bound is sharp for the path $P_n$ ($n\ge 4$).
\end{proposition}
\begin{proof} By Theorem \ref{the-path}, $\gamma_{rdR}(P_n) \le n+2$. Let $P=v_1v_2\cdots v_{diam(G)+1}$ be a diametrical path in $G$. Let $g$
be a $\gamma_{rdR}$-function of $P$. Then $w(g)\le diam(G)+3$. Now we define an RDRD-function $f$ as:\\
$$f(x)=\left\{
\begin{array}{ll}
2, & x \notin V(P),\\
g(x), & \hbox{otherwise.}
\end{array}
\right.$$
It is clear that $f$ is an RDRD-function of $G$ of weight $w(f) \le 2(n-(diam(G)+1)) + diam (G)+3$. Therefore $\gamma_{rdR}(G) \le 2n+1 - diam(G)$.\\
Theorem \ref{the-path} shows the sharpness of this bound.
\end{proof}
\begin{proposition} Let $G$ be a connected graph of order $n$ and circumference $c(G)<\infty$. Then
$\gamma_{rdR}(G) \le 2n +2 - c(G)$, and this bound is sharp for each cycle $C_n$ with $3 \nmid n$.
\end{proposition}
\begin{proof} Let $C$ be a longest cycle of $G$, that means $|V(C)|=c(G)$. By Theorem \ref{the-cycle}, $\gamma_{rdR}(C) \le c(G)+2$. Let $h$
be a $\gamma_{rdR}$-function on $C$. Then $w(h)\le c(G)+2$. Now we define an RDRD-function $f$ as:\\
$$f(x)=\left\{
\begin{array}{ll}
2, & x\notin V(C), \\
h(x), & \hbox{otherwise.}
\end{array}
\right.$$\\
It is clear that $f$ is an RDRD-function of $G$ of weight $w(f) \le 2(n-c(G)) + c(G)+2$. Therefore $\gamma_{rdR}(G) \le 2n+2 - c(G)$.\\
For sharpness, if $G=C_n$ and $3\nmid n$, then $\gamma_{rdR}(C_n)=n+2= 2n+2 - n=2n+2-c(G)$.
\end{proof}
\begin{observation}\label{1}
Let $G$ be a graph and $f=(V_0,V_1,V_2)$ a $\gamma_{rR}$-function of $G$. Then $\gamma_{rdR}(G)\leq 2|V_1|+3|V_2|$.
\end{observation}
\begin{proof}
Let $G$ be a graph and $f=(V_0,V_1,V_2)$ a $\gamma_{rR}$-function of $G$. We define a function $g=(V_0',V_1'=\emptyset,V_2',V_3')$ as follows:
$V_0'=V_0$, $V_2'=V_1$, $V_3'=V_2$. Note that under $g$, every vertex with a label $0$ has a neighbor assigned $3$ and each vertex with
label $1$ becomes a vertex with label $2$ and also $G[V_0']$ has no isolated vertex. Hence, $g$ is a restrained double Roman dominating function.
Thus, $\gamma_{rdR}(G)\leq 2|V_2'|+3|V_3'|=2|V_1|+3|V_2|$.
\end{proof}
Clearly, the bound of Observation \ref{1} is sharp, as can be seen with the path $G=P_4$, where $\gamma_{rR}(G)=4$ and $\gamma_{rdR}(G)=6$.
We also note that strict inequality in the bound can be achieved by the subdivided star $G=S(K_{1,k})$, which is formed by subdividing each edge of
the star $K_{1,k}$, for $k\geq 3$, exactly once. Then it is simple to check that $\gamma_{rR}(G)=2k+1$ and $\gamma_{rdR}(G)=3k$. Hence, $|V_1|=1$
and $|V_2|=k$, and so, $3k=\gamma_{rdR}(G)<2|V_1|+3|V_2|=2+3k.$
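To make the value $\gamma_{rdR}(S(K_{1,k}))=3k$ concrete, one RDRD function of weight $3k$ (for $k\ge 3$) is the following: with center $c$, subdivision vertices $s_1,\dots,s_k$ and leaves $l_1,\dots,l_k$, put $f(c)=0$, $f(s_1)=f(s_2)=2$, $f(l_1)=f(l_2)=1$, $f(s_3)=0$, $f(l_3)=3$, and $f(s_i)=2$, $f(l_i)=1$ for $4\le i\le k$. Indeed, $c$ has the two neighbors $s_1,s_2$ with label $2$ and the neighbor $s_3$ with label $0$, the vertex $s_3$ has the neighbor $l_3$ with label $3$ and the neighbor $c$ with label $0$, and every vertex with label $1$ has a neighbor with label $2$; the weight is $3k$, and the matching lower bound is the routine part of the verification mentioned above.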
\begin{lemma}\label{lem1} If a graph $G$ has a non-pendant edge, then there is a $\gamma_{rdR}(G)$-function $f = (V_0, V_1, V_2, V_3)$ such that
$V_0\cup V_1 \ne \emptyset$.
\end{lemma}
\begin{proof}
If $\gamma_{rdR}(G)<2n$, then obviously $V_0\cup V_1 \ne \emptyset$. Now we show that $\gamma_{rdR}(G)<2n$.
Let $uw$ be a non-pendant edge, so that $\deg(u)\ge 2$ and $\deg(w)\ge 2$.\\
Assume that $N_{G}(u)\cap N_{G}(w)\ne \emptyset$, and let $v$ be a vertex in $N_{G}(u)\cap N_{G}(w)$. Then the function
$f=(V_0 =\{u,w\}, V_1 =\emptyset, V_2= V(G) \setminus \{u,w,v\}, V_3=\{v\})$
is an RDRD-function of $G$ with $w(f)\le 2n-3$.\\
Assume that $N_{G}(u)\cap N_{G}(w)= \emptyset$, and let $a\in N_G(u)\setminus \{w\}$ and $b\in N_G(w)\setminus \{u\}$.
Then the function $f=(V_0= \{u,w\}, V_1=\emptyset, V_2= V(G) \setminus \{u,w,a,b\}, V_3=\{a,b\})$ is an RDRD-function of $G$
with $w(f)\le 2n-2$. In both cases $\gamma_{rdR}(G)<2n$, and the proof is complete.
\end{proof}
\vspace{3mm}
\begin{figure}
\caption{The graphs $H_{10}$ and $F_9$.}
\label{fig:H10-F9}
\end{figure}
For an even integer $n \ge 4$, let $H_n$ be the graph obtained from $(n-2)/2$ copies
of $K_2$ and a copy of $K_1$ by adding a new vertex and joining it to both vertices of each $K_2$ and to the given $K_1$, and
for an odd integer $n\ge 3$, let $F_n$ be the graph obtained from $(n-1)/2$ copies of $K_2$ by adding a new vertex and joining it to both vertices of each $K_2$. Thus
for $n \ge 4$, $H_n$ has a vertex of degree $n-1$, a vertex of degree $1$ and all other vertices
of degree two, and for $n \ge 3$, $F_n$ has a vertex of degree $n-1$ and all other vertices
of degree two. Figure 2 shows the graphs $H_{10}$ and $F_9$. Let $\mathcal{H} = \{H_n :n \ge 4\ \mbox{is\ even}\}$ and
$\mathcal{F} = \{F_n :n \ge 3\ \mbox{is\ odd}\}$.
\begin{theorem}\label{the} For every connected graph $G$ of order $n \ge 3$ with $m$ edges,
$\gamma_{rdR}(G) \ge 2n + 1- \lceil(4m-1)/3\rceil$, with equality if and only if
$G \in \mathcal{H}\cup \mathcal{F}$ or $G\in\{K_{1,2},K_{1,3},K_{1,4}\}$.
\end{theorem}
\begin{proof}
If $G=K_{1,n-1}$ is a star, then $\gamma_{rdR}(G)=n+1$ and $m=n-1$. Now it is easy to see that
$\gamma_{rdR}(K_{1,n-1})= 2n + 1- \lceil(4m-1)/3\rceil$ for $3\le n\le 5$ and
$\gamma_{rdR}(K_{1,n-1})>2n + 1- \lceil(4m-1)/3\rceil$ for $n\ge 6$.
Next assume that $G$ is not a star. By Lemma \ref{lem1} there is a $\gamma_{rdR}(G)$-function $f= (V_0, V_1, V_2, V_3)$ such that
$V_0\cup V_1\ne \emptyset$. By the definition of an $RDRD$ function,
the induced subgraph $G[V_0]$ has no isolated vertex. Therefore, $|E(G[V_0])| \ge |V_0|/2$. Let $V'_0=\{v\in V_0: N(v) \subseteq V_2\}$ and
$V''_0=\{v\in V_0: v\ \mbox{has a neighbor in}\ V_3\}$. Then $|E(V_0,V_2)| \ge 2|V'_0|$, $|E(V_0,V_3)| \ge |V''_0|$ and
$|E(V_1,V_2\cup V_3)| \ge |V_1|$.
Therefore
$$|E(G)|=m\ge |V_0|/2+ 2|V'_0|+ |V''_0|+ |V_1|.$$
Since $|V_0|=|V'_0|+|V''_0|$, we deduce that
\begin{equation}\label{EQ11}
(4m-1)/3\ge 2|V_0|+ 4/3|V'_0|+ 4/3|V_1|-1/3
\end{equation}
and thus
\begin{equation}\label{EQ12}
2n+1 - \lceil(4m-1)/3\rceil \le 2n+1 - (4m-1)/3 \le 2n+1 -2|V_0|-4/3|V'_0|- 4/3|V_1|+1/3.
\end{equation}
Since $\gamma_{rdR}(G) = |V_1|+2|V_2|+3|V_3|$, $|V_0|+|V_1|+|V_2|+|V_3|=n$ and $2n+1=2|V_0|+2|V_1|+2|V_2|+2|V_3|+1$, we obtain
\begin{eqnarray*}
2n+1 -2|V_0|-4/3|V'_0|- 4/3|V_1|+1/3 & = & -4/3|V'_0|+ 2/3|V_1|+2|V_2|+2|V_3|+4/3\\
& = & \gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3.
\end{eqnarray*}
Next we will show that
\begin{equation}\label{EQ13}
\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G)
\end{equation}
or $\gamma_{rdR}(G) \ge 2n + 1- \lceil(4m-1)/3\rceil$.
If $|V'_0|\ge 1$, then $-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le 0$ and so $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G)$.\\
Let now $|V'_0|=0$. Note that the condition $V_0\cup V_1\ne \emptyset$ implies $V''_0 \cup V_1\ne \emptyset$.\\
Assume next that $V_1= \emptyset$. We deduce that $|V''_0|\ge 1$ and therefore $|V_3|\ge 1$. If there are at least two vertices of weight $3$,
then $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3<\gamma_{rdR}(G)$.\\
If there is only one vertex of weight $3$, then $m\ge n-1+\frac{n-1}{2}=\frac{3(n-1)}{2}$. We deduce that
$\gamma_{rdR}(G)\ge 3 \ge 2n+1 -\left\lceil \frac{6(n-1)-1}{3}\right\rceil
\ge 2n+1 -\left\lceil \frac{4m-1}{3}\right\rceil$, with equality if and only if $|V_2|=0$, $n$ is odd and $m=\frac{3(n-1)}{2}$, that means
$G\in{\cal F}$.\\
Now assume that $|V_1|\ge 1$. If $|V''_0|\ge 1$, then $|V_3|\ge 1$ and thus $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G)$.
Next let $|V''_0|=0$. If $|V_3|\ge 1$, then $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3\le \gamma_{rdR}(G)$. Now assume that $|V_3|=0$.
This implies that all vertices have weight $1$ or $2$. If $3\le n\le 5$, then it is easy to see that
$\gamma_{rdR}(G)> 2n+1-\left\lceil\frac{4m-1}{3}\right\rceil$.
Let now $n\ge 6$. If $|V_1|\ge 5$, then $\gamma_{rdR}(G)-4/3|V'_0|-1/3|V_1|-|V_3|+4/3<\gamma_{rdR}(G)$. Otherwise $|V_1|\le 4$, $|V_2|\ge n-4$ and
$m\ge n-1$. This implies
$$\gamma_{rdR}(G)\ge 2(n-4)+4=2n-4>2n+1-\left\lceil\frac{4(n-1)-1}{3}\right\rceil\ge 2n+1-\left\lceil\frac{4m-1}{3}\right\rceil.$$
Thus $\gamma_{rdR}(G) \ge 2n+1 -(4m-1)/3\ge 2n+1 -\lceil(4m-1)/3\rceil$.\\
For equality: If $G\in \mathcal{H}$, then $G=H_n$ for $n\ge 4$ even and $|E(H_n)|=3(n-2)/2+1$.
Thus $2n+1- (4(3(n-2)/2+1)-1)/3= 2n+1- \lceil (4(3(n-2)/2+1)-1)/3 \rceil = 2n+1 -2(n-2)-1=4= \gamma_{rdR}(H_n)$.
If $G\in \mathcal{F}$, then $G=F_n$ for $n\ge 3$ odd and $|E(F_n)|=3(n-1)/2$.
Thus $2n+1- \lceil(4(3(n-1)/2)-1)/3\rceil = 2n+1 -2(n-1)=3= \gamma_{rdR}(F_n)$.
Conversely, assume that $\gamma_{rdR}(G)=2n+1-\lceil(4m-1)/3\rceil$. Then all inequalities occurring in the proof become equalities.
In the case $|V_1|=0$, we have seen above that we have equality if and only if $G\in{\cal F}$. In the case $|V_1|\ge 1$, we have seen above
that $|V_3|\ge 1$. Therefore the equality in Inequality
(\ref{EQ13}) leads to $|V_3|=|V_1|=1$ and $|V'_0|=0$. Hence $V_0=V''_0$. Thus equality in Inequality (\ref{EQ11}) or equivalently, in the
inequality $|E(G)|=m\ge |V_0|/2+ 2|V'_0|+ |V''_0|+ |V_1|$
leads to $m=3/2|V''_0|+1$. Now let the vertices $v, u$ be of weight $3,1$ respectively.
Then $m=|E(G)| \ge |E(v, V''_0)| + |E(G[V''_0])| +1 \ge |V''_0|+ 1/2|V''_0|+1=3/2|V''_0|+1$.
If $|V_2|\ne 0$, then the connectivity of $G$ leads to the contradiction $m\ge 3/2|V''_0|+2$. Consequently, $|V_2|=0, |V_0|=(2m-2)/3$ and $u$ and $v$
are adjacent. Since $G$ is connected, $G\in \mathcal{H}$.
\\
\end{proof}
\section{$RDRD$-set versus $RRD$-set}
One aim of studying these parameters is to see how they are related and to compare them with each other.
\begin{proposition}\label{cor1}
For any graph $G$, $\gamma_{rdR}(G)\leq 2\gamma_{rR}(G)$ with equality if and only if $G=\overline{K_n}$.
\end{proposition}
\begin{proof}
Let $f=(V_0,V_1,V_2)$ be a $\gamma_{rR}$-function of $G$. Since $\gamma_{rR}(G)=|V_1|+2|V_2|$, by Observation \ref{1}, we have that
$\gamma_{rdR}(G)\leq 2|V_1|+3|V_2|=\gamma_{rR}(G)+|V_1|+|V_2|\leq 2\gamma_{rR}(G)$.
If $\gamma_{rdR}(G)=2\gamma_{rR}(G)=2|V_1|+4|V_2|$, then since $\gamma_{rdR}(G)\leq 2|V_1|+3|V_2|$, we must have $V_2=\emptyset$. Hence,
$V_0=\emptyset$ must hold, and so $V=V_1$. By definition of $\gamma_{rR}$-function, we deduce that no two vertices in $G$ are adjacent, for otherwise,
if $u$ and $v$ are adjacent, then only one of them in every $\gamma_{rdR}$-function on $G$ has a label of $2$, which contradicts
$\gamma_{rdR}(G)=2\gamma_{rR}(G)$.
\end{proof}
The proof of Lemma \ref{lem1} shows the next proposition.
\begin{proposition} If $G$ contains a triangle, then $\gamma_{rdR}(G) \le 2n-3$.
\end{proposition}
\begin{theorem}\label{the-111}
For every graph $G$, $\gamma_{rR}(G) < \gamma_{rdR}(G)$.
\end{theorem}
\begin{proof}
Let $f=(V_0,V_1,V_2,V_3)$ be a $\gamma_{rdR}(G)$-function. If $V_3 \ne \emptyset$, then $(V'_0=V_0, V'_1=V_1 , V'_2=V_2\cup V_3)$ is an
RRD-function $g$ such that $w(g)< w(f)$. Let $V_3=\emptyset$. If $V_0=\emptyset$, then, since $V_2 \ne \emptyset$,
$g=(\emptyset, V'_1=V_1\cup V_2, \emptyset)$ is an RRD-function such that $w(g)< w(f)$. If $V_0 \ne \emptyset$, then $|V_2|\ge 2$. Let $f(v)=2$
for a vertex $v$. Then $g=(V'_0=V_0, V'_1=V_1\cup \{v\}, V'_2=V_2-\{v\})$ is an RRD-function $g$ for which $w(g)< w(f)$. Therefore
$\gamma_{rR}(G) < \gamma_{rdR}(G)$.
\end{proof}
\vspace{2mm}
As an immediate consequence of Proposition \ref{cor1}, we have the following.
\begin{corollary}
For any nontrivial connected graph $G$, $\gamma_{rdR}(G) < 2\gamma_{rR}(G)$.
\end{corollary}
\begin{theorem}\label{the-222} Let $G$ be a graph of order $n$. Then $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$ if and only if $G$ is one of the following
graphs.\\
\emph{1}. $G$ has a vertex of degree $n-1$.\\
\emph{2}. There exists a subset $S$ of $V(G)$ such that:\\
\emph{2.1}. every vertex of $V-S$ is adjacent to a vertex in $S$,\\
\emph{2.2}. there are two subsets $A_0$ and $A_1$ of $V-S$ with $A_0\cup A_1=V-S$ such that $A_0$ is the set of non-isolated
vertices in $N(S)$ and each vertex in $A_0$ has at least two neighbors in $S$,\\
\emph{2.3}. for any $2$-subset $\{a,b\}$ of $S$, $N(\{a,b\})\cap A_0 \ne \emptyset$ and for a $3$-subset $\{x,y,z\}$ of $S$,
if $\{x,y,z\} \cap A_0 \ne \emptyset$, then there are three vertices $u,v,w$ in $A_0$ such that $N(u)\cap S=\{x,y\}$, $N(v)\cap S=\{x,z\}$ and
$N(w)\cap S=\{y,z\}$.
\end{theorem}
\begin{proof} Let $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$ with a $\gamma_{rdR}(G)$-function $f=(V_0,V_1,V_2,V_3)$ and a $\gamma_{rR}(G)$-function
$g=(U_0,U_1,U_2)$.
If $V_3\ne \emptyset$, then $|V_3|=1$. Because if $|V_3|\ge 2$ then by changing $3$ to $2$ we obtain a RRD-function $h$ with $w(h)< w(g)$, a
contradiction. Let $V_3=\{v\}$.
In addition, we note that $|V_2|=0$. If we suppose that $|V_2|\ge 1$, then let $u\in V_2$. Then
$h=(V'_0=V_0, V'_1=V_1\cup \{u\}, V'_2=(V_2\setminus\{u\})\cup \{v\})$ is an RRD-function for which $w(h)< w(g)$, a contradiction.
Thus all vertices different from $v$ are adjacent to the vertex $v$ such that the non-isolated vertices in $N(v)$ are assigned with $0$ and the isolated
vertices in $N(v)$ are assigned with $1$.
In this case $U_0=V_0, U_1=V_1$ and $U_2=V_3$.\\
If $V_3= \emptyset$, then $V_2\ne \emptyset$ and $|V_2| \ge 2$. In this case, there must exist a vertex $v\in V_2$ such that
$U_0= V_0, U_1=V_1\cup \{v\}$ and $U_2=V_2-\{v\}$. There is such a function $f$ if we guarantee a subset $S$ of $V(G)$ with each vertex of
weight $2$ for which every other vertex in $V-S$ is adjacent to a vertex of $S$; that is, condition 2.1 holds.\\ Since we can only
change one of the vertices of weight $2$ in $f$ to a vertex of weight $1$ in $g$,
there must exist two subsets $A_0$ and $A_1$ of $V-S$ such that conditions 2.2 and 2.3 hold.\\
Conversely, if the condition 1 holds, then $f=(V_0, V_1, \emptyset, V_3=\{v\})$ and $g=(U_0=V_0, U_1=V_1, U_2=\{v\})$ are
$\gamma_{rdR}(G)$-function and $\gamma_{rR}(G)$-function respectively where $V_0$ is the set of non-isolated vertices in $N(v)$ and
$V_1$ is the set of isolated vertices in $N(v)$. Thus $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$.\\
If the condition 2 holds, then we can have only one vertex of weight $2$ in $G$ under $f$ such that it changes to the weight $1$ in $G$ under $g$.
Thus $\gamma_{rdR}(G)=\gamma_{rR}(G)+1$.
\end{proof}
We showed that for any graph $G$, $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)$, and that equality holds if and only if $G$ is the trivial graph $\overline{K_n}$.
Hence, for any nontrivial graph $G$, $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)-1$. Now we characterize the graphs $G$ with the property
$\gamma_{rdR}(G)= 2\gamma_{rR}(G)-1$.
\begin{theorem}
If $G$ is a nontrivial graph, then $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)-1$. If $\gamma_{rdR}(G)=2\gamma_{rR}(G)-1$, then $G$ consists of a $K_2$ and $n-2$
isolated vertices or $G$ consists of a vertex $h$ and two disjoint vertex sets $H$ and $R$ such that $H=N(h)$, $G[H]$ does not have isolated vertices,
$G[R]$ is trivial, there is no edge between $h$ and $R$ and $N(h)\cap N(R)\neq N(h)$.
\end{theorem}
\begin{proof} Since $G$ is a nontrivial graph, Proposition \ref{cor1} implies $\gamma_{rdR}(G)\le 2\gamma_{rR}(G)-1$. Now we investigate the equality.\\
Let $\gamma_{rdR}(G)= 2\gamma_{rR}(G)-1$, where $f=(V_0,V_1,V_2,V_3)$ is a $\gamma_{rdR}(G)$-function and $g=(U_0,U_1,U_2)$ is a
$\gamma_{rR}(G)$-function.
Then $2|U_1|+4|U_2|-1=|V_1|+2|V_2|+3|V_3|$. On the other hand, since $2|U_1|+4|U_2|-1=|V_1|+2|V_2|+3|V_3|=\gamma_{rdR}(G) \le 2|U_1|+3|U_2|$,
it follows that $|U_2|\le 1$.
If $U_2=\emptyset$, then $|U_0|=0$ and therefore $|U_1|=n$. Using the inequality above, we obtain
$$2n-1=2|U_1|-1\le\gamma_{rdR}(G)\le 2|U_1|=2n.$$
If $\gamma_{rdR}(G)=2n$, then $G$ is trivial, a contradiction. If $\gamma_{rdR}(G)=2n-1$, then Proposition \ref{2n-1} shows that
$G$ consists of a $K_2$ and $n-2$ isolated vertices.
Let now $|U_2|= 1$ such that $U_2=\{h\}$, $H=N(h)$, $R=V(G)\setminus N[h]=\{u_1,u_2,\ldots,u_p\}$.
Clearly, $U_0\subseteq H$ and $R\subseteq U_1$.
If $H$ contains exactly $s\ge 1$ isolated vertices, then $\gamma_{rR}(G)=2+s+p$ and therefore
$\gamma_{rdR}(G)\le 3+s+2p\le 2\gamma_{rR}(G)-2$, a contradiction. Hence $H=N(h)$ does not contain isolated vertices and thus
$\gamma_{rR}(G)=p+2$.
If $G[R]$ contains an edge, then we obtain the contradiction $\gamma_{rdR}(G)\le 3+2p-1=2p+2\le 2\gamma_{rR}(G)-2$. Thus $G[R]$ is trivial.
If there is an edge between $h$ and $R$, then we also obtain the contradiction $\gamma_{rdR}(G)\le 3+2p-1=2p+2\le 2\gamma_{rR}(G)-2$.
If $N(h)\cap N(R)=N(h)$, then $f=(H,\emptyset,\{h\}\cup R,\emptyset)$ is an RDRD function of $G$, and hence
$\gamma_{rdR}(G)\le 2p+2\le 2\gamma_{rR}(G)-2$, a contradiction.
\end{proof}
\section{Trees}
In this section we study the restrained double Roman domination of trees.\\
\begin{theorem}\label{the-tree1}
If $T$ is a tree of order $n\geq2$, then $\gamma_{rdR}(T)\leq \lceil\frac{3n-1}{2}\rceil$. The equality holds if $T\in\{P_2,P_3,P_4,P_5, S_{1,2}, ws(1,n, n-1), ws(1,n, n-2)\}$.
\end{theorem}
\begin{proof}
Let $T$ be a tree of order $n\geq2$. We will proceed by induction on $n$. If $n=2$, then $\gamma_{rdR}(T)=3=\lceil\frac{3n-1}{2}\rceil$. If $n\geq3$,
then $diam(T)\geq2$.
If $diam(T)=2$, then $T$ is the star $K_{1,n-1}$ for $n\geq3$ and $\gamma_{rdR}(T)=n+1\leq \lceil\frac{3n-1}{2}\rceil$. If $diam(T)=3$, then $T$ is a
double star $S_{r,s}$ for $1\leq r\leq s$. Hence, $n=r+s+2\geq4$.
If $r=1=s$, then $T=P_4$ and $\gamma_{rdR}(T)=6\leq\lceil\frac{12-1}{2}\rceil$. If $r=1, s\ge 2$, then $n=s+3$ and
$\gamma_{rdR}(T)=s+5\leq\lceil\frac{3(s+3)-1}{2}\rceil$. If $r\ge 2, s\ge 2$,
then $n=r+s+2$ and $\gamma_{rdR}(T)=r+s+4\leq\lceil\frac{3(r+s+2)-1}{2}\rceil$.\\
Hence, we may assume that $diam(T)\geq4$. This implies that $n\geq5$. Assume that any tree $T'$ with order $2\leq n'<n$ has
$\gamma_{rdR}(T')\leq \lceil\dfrac{3n'-1}{2}\rceil$. Among all longest paths in $T$,
choose $P$ to be one that maximizes the degree of its next-to-last vertex $v$, and let $w$ be a leaf neighbor of $v$. Note that by our
choice of $v$, every child of $v$ is a leaf. Since $deg(v)\geq2$, the vertex $v$
has at least one leaf as a child. Now we put $T'=T-T_v$ where the order of the substar $T_v$ is $k+1$ with $k\geq1$. Note that since
$diam(T)\geq4$, $T'$ has at least three vertices, that is, $n'\geq3$. Let $f'$ be a
$\gamma_{rdR}$-function of $T'$. Form $f$ from $f'$ by letting $f(x)=f'(x)$ for all $x\in V(T')$, $f(v)=2$, and $f(z)=1$ for all leaf neighbors of $v$.
Thus $f$ is a restrained double Roman dominating function of $T$,
implying that $\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+k+2 \le \lceil\dfrac{3(n-k-1)-1}{2}\rceil+k+2=\lceil\dfrac{3n-k}{2}\rceil
\leq \lceil\dfrac{3n-1}{2}\rceil$.\\
If $T\in \{P_2,P_3,P_4,P_5, S_{1,2}, ws(1,n, n-1), ws(1,n, n-2)\}$, then clearly
$\gamma_{rdR}(T)=\lceil\dfrac{3n-1}{2}\rceil$.
\end{proof}
\vspace{2mm}
\begin{theorem}\label{the-tree2}
For every tree $T$ of order $n\geq 3$, with $l$ leaves and $s$ support vertices, we have $\gamma_{rdR}(T)\leq\dfrac{4n+2s-l}{3}$, and this
bound is sharp for the family of stars ($K_{1,n-1}$ $n\geq 3$),
double stars, caterpillars for which each vertex is a leaf or a support vertex and all support vertices have even degree
$2m$, or at most two end support vertices have degree $2m-1$ and the other support vertices have degree $2m$, and wounded spiders in which
the central vertex is adjacent with at least two leaves.
\end{theorem}
\begin{proof}
Let $T$ be a tree with order $n\geq3$. Since $n\geq3$, $diam(T)\geq2$. If $diam(T)=2$, then $T$ is the star $K_{1,n-1}$ for $n\geq3$ and
$\gamma_{rdR}(T)=n+1\leq\dfrac{4n+2-(n-1)}{3}=\dfrac{3n+3}{3}=n+1$.
If $diam(T)=3$, then $T$ is a double star $S_{r,t}$ for $1\leq r\leq t$. We have $\gamma_{rdR}(T)=n+2=\dfrac{4n+2s-l}{3}$. Hence, we may assume
$diam(T)\geq4$. Thus, $n\geq5$. Assume that any tree
$T'$ with order $3\leq n'<n$, $l'$ leaves and $s'$ support vertices has $\gamma_{rdR}(T')\leq\dfrac{4n'+2s'-l'}{3}$. Among all longest paths in $T$,
choose $P$ to be one that maximizes the degree of its next-to-last vertex $u$,
and let $x$ be a leaf neighbor of $u$, $w$ be a parent vertex of $v$ and $v$ be a parent vertex of $u$. Note that by our choice of $u$, every child
of $u$ is a leaf. Since $t=deg(u)\geq2$, the vertex $u$ has at least one leaf child.
We now consider the following two cases:\\
\textbf{Case 1}. $deg(v)\geq3$. In this case, we put $T'=T-T_u$, where the order of the star $T_u$ is $t$ with $t\geq2$. Note that since $diam(T)\geq4$,
$T'$ has at least three vertices, that is, $n'\geq3$. Let $f'$ be a
$\gamma_{rdR}$-function of $T'$. Thus we have $n'=n-t$, $l'=l-(t-1)$ and $s'=s-1$. Clearly,
$\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+t+1\leq\dfrac{4(n-t)+2(s-1)-(l-(t-1))}{3}+t+1=\dfrac{4n+2s-l}{3}$.\\
\textbf{Case 2}. $deg(v)=2$. We now consider the following two subcases.\\
\textbf{i}. $deg(w)>2$. Then we put $T'=T-T_v$ where order of subtree $T_v$ is $t+1$. Clearly, we have $n'=n-(t+1)$, $s'=s-1$ and $l'=l-(t-1)$. Thus,
$\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+t+2\leq \dfrac{4(n-t-1)+2(s-1)-(l-(t-1))}{3}+t+2=\dfrac{4n+2s-l-1}{3}\leq \dfrac{4n+2s-l}{3}$.\\
\textbf{ii}. $deg(w)=2$. Then we put $T'=T-T_v$, where the order of the subtree $T_v$ is $t+1$. Thus in this case, $w$ in the subtree $T'$
becomes a leaf and we have $n'=n-(t+1)$, $s'\le s$ and $l'= l-(t-1)+1$. Thus,
$\gamma_{rdR}(T)\leq \gamma_{rdR}(T')+t+2\leq \dfrac{4(n-t-1)+2(s)-(l-(t-1)+1)}{3}+t+2=\dfrac{4n+2s-l}{3}$.
\end{proof}
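As a quick illustration of the bound, the double star $S_{1,2}$ has $n=5$, $s=2$ and $l=3$, so the bound gives $(4\cdot 5+2\cdot 2-3)/3=7=\gamma_{rdR}(S_{1,2})$ and is attained, whereas the path $P_7$ has $n=7$, $s=2$ and $l=2$, so the bound gives $(28+4-2)/3=10$ while $\gamma_{rdR}(P_7)=9$ by Theorem \ref{the-path}; the inequality can thus be strict for caterpillars having vertices that are neither leaves nor support vertices.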
\begin{theorem}\label{the-tree3}
If $T$ is a tree, then $\gamma_r(T)+1\le \gamma_{rdR}(T)\le 3\gamma_r(T)$, and equality for the lower bound holds if and only if $T$ is a star.
The upper bound is sharp for the paths $P_{m}$ ($m\equiv 1\ \mbox{mod}\ 3$),
the cycles $C_n$ ($n\equiv 0,\,1\ \mbox{mod}\ 3$), the complete graphs $K_n$, the complete bipartite graphs
$K_{n,m}\ (m,n \ge 2)$ and the multipartite graphs $K_{n_1,n_2,\cdots, n_m},\ (m\ge 3)$.
\end{theorem}
\begin{proof}
Let $T$ be a tree and let $f$ be a $\gamma_{rdR}(T)$-function. The set of vertices with positive labels under $f$ is a restrained dominating set of $T$, and at least one of these vertices has label at least $2$; hence $\gamma_r(T)+1 \le \gamma_{rdR}(T)$.
If we assign
the value $3$ to the vertices in a $\gamma_r(T)$-set and $0$ to all other vertices, then we obtain an RDRD function of $T$. Therefore $\gamma_{rdR}(T)\le 3\gamma_r(T)$.\\
The sharpness of the upper bound is deduced from Propositions 1-7 of \cite{domke} and Observation \ref{the-com-par}, Theorem
\ref{the-path} and Theorem \ref{the-cycle}.\\
For equality of the lower bound, if $T=K_{1,n-1}$ is a star, then it is clear $\gamma_{rdR}(T)=n+1$ and $\gamma_{r}(T)=n$. If $T$ is a tree and
$\gamma_{rdR}(T)=\gamma_{r}(T)+1$, then we have only one vertex of value $2$ in any $\gamma_{rdR}(T)$-function and the other vertices of positive
weight have value $1$.
In addition, the vertices of value 1 are adjacent to the vertex of value 2, and therefore $T$ is a star.
\end{proof}
The following result gives an upper bound on the $RDRD$ number of $G$ in terms of the size and the order of $G$.
\begin{proposition}\label{prop-tree4} Let $G$ be a connected graph of order $n\ge 2$ with $m$ edges. Then
$\gamma_{rdR}(G) \le 4m-2n+3$, with equality if and only if $G$ is a tree with
$\gamma_{rdR}(G) = 2n-1$.
\end{proposition}
\begin{proof}
For the given connected graph, $m\ge n-1$, and according to Proposition \ref{2n-1}, $\gamma_{rdR}(G) \le 2n-1 =4(n-1) -2n +3\le 4m-2n+3$.\\
If $\gamma_{rdR}(G) = 4m-2n+3$, then, since $\gamma_{rdR}(G)\le 2n-1=4(n-1)-2n+3$, we must have
$m=n-1$, so $G$ is a tree with $\gamma_{rdR}(G) = 2n-1$.\\
Conversely, assume that $G$ is a tree with $\gamma_{rdR}(G) = 2n-1$. Hence $\gamma_{rdR}(G) = 4m-2n+3$.
\end{proof}
\section{Conclusions and problems}
The concept of restrained double Roman domination in graphs was initially investigated in this paper. We studied the computational complexity of this
concept and proved some bounds on the $RDRD$ number of graphs. In the case of trees, we characterized all trees attaining the exhibited bound. We now
conclude the paper with some problems suggested by this research.
\vspace{1mm}\\
$\bullet$ Provide characterizations of the graphs with small or large $RDRD$ numbers.
$\bullet$ It is also worthwhile proving some other nontrivial sharp bounds on $\gamma_{rdR}(G)$ for general graphs $G$ or some well-known families
such as chordal, planar, triangle-free, or claw-free graphs.
\vspace{1mm}\\
$\bullet$ The decision problem RESTRAINED DOUBLE ROMAN DOMINATION is NP-complete for general graphs, as proved in Theorem \ref{the-NP}. However,
the problem may remain $NP$-complete for certain restricted families of graphs, while for other well-known families, for instance trees, there may be polynomial-time algorithms for computing
the $RDRD$ number. Can such families be identified?\\
$\bullet$ In Theorems \ref{the-tree1} and \ref{the-tree2} we showed upper bounds for $\gamma_{rdR}(T)$. Finding necessary and sufficient conditions for equality in these bounds remains an open problem.
\vspace{3mm}
\end{document} |
\begin{document}
\title[$P$-Partitions]{A Historical Survey of $P$-Partitions}
\author{Ira M. Gessel}
\address{Brandeis University}
\email{[email protected]}
\subjclass[2010]{Primary 05A15. Secondary 05A17}
\date{June 2, 2015}
\begin{abstract}
We give a historical survey of the theory of $P$-partitions, starting with MacMahon's work, describing Richard Stanley's contributions and his related work, and continuing with more recent developments.
\end{abstract}
\maketitle
\emph{Dedicated to Richard Stanley on the occasion of his seventieth birthday.}
\section{Introduction}
Richard Stanley's 1971 Harvard Ph.D.~thesis \cite{stanley-thesis} studied two related topics, plane partitions and $P$-partitions. A short article on his work on $P$-partitions appeared in 1970
\cite{chromatic}, and a detailed exposition of this thesis work appeared as an American Mathematical Society Memoir \cite{ordered} in 1972. In this paper, I will describe some of the historical background of the theory of $P$-partitions, sketch Stanley's contribution to the theory, and mention some more recent developments.
The basic idea of the theory of $P$-partitions was discovered by P. A. MacMahon, generalized by Knuth, and independently rediscovered by Kreweras. In order to understand the different notations and approaches that these authors used, it will be helpful to start with a short exposition of Stanley's approach.
\section{Stanley's theory of $P$-partitions}
\label{s-theory}
I will usually follow Stanley's notation, but with some minor modifications. Stanley has given a more recent account of the theory of $P$-partitions in \emph{Enumerative Combinatorics}, Vol.~1 \cite{ec1}, Sections 1.4 and 3.15.
Let $P$ be a finite partially ordered set (poset) with $p$ elements and let $\omega$ be a bijection from $P$ to $[p]=\{1,2,\dots, p\}$, called a \emph{labeling} of $P$. We use the symbols $\preceq$ and $\prec$ for the partial order relation of $P$. Then a $(P,\omega)$-partition is a map $\sigma$ from $P$ to the set $\mathbb{N}$ of nonnegative integers satisfying the conditions
\begin{enumerate}
\item[(i)] If $X\prec Y$ then $\sigma(X)\ge\sigma(Y)$.
\item[(ii)] If $X\prec Y$ and $\omega(X)>\omega(Y)$ then $\sigma(X)> \sigma(Y)$.
\end{enumerate}
If $\sum_{X\in P}\sigma(X) = n$ then we call $\sigma$ a $P$-partition of $n$.
For simplicity, we assume that the elements of $P$ are $1,2,\dots, p$.
We denote by $\mathscr{A}(P,\omega)$ the set of all $(P,\omega)$-partitions and by $\mathscr{A}(P,\omega;m)$ the set of all $(P,\omega)$-partitions with largest part at most $m$.
Thus for the labeled poset $P$ of Figure \ref{f-1}, where the elements of $P$ are identified with their labels, $\sigma: \{1,2,3\} \to \mathbb{N}$ is a $P$-partition if and only if $\sigma(2)>\sigma(1)$ and $\sigma(2)\ge\sigma(3)$.
\begin{figure}[thbp]
\centering
\includegraphics[width=1in]{poset1}
\caption{A poset with three elements}
\label{f-1}
\end{figure}
If $\omega(X)<\omega(Y)$ whenever $X\prec Y$, then $\omega$ is called a \emph{natural labeling}, and if
$\omega(X)>\omega(Y)$ whenever $X\prec Y$, then $\omega$ is called a \emph{strict labeling}.
If $\omega$ is a natural labeling, then all of the inequalities in the definition of a $P$-partition are weak, and if $\omega$ is a strict labeling then all the inequalities are strict.
A \emph{linear extension} of $P$ is a total order (or chain) on $P$ that contains $P$. Every linear extension of $P$ inherits the labeling $\omega$ of $P$. We denote by $\mathscr{L}(P,\omega)$ the set of linear extensions of $P$. We may identify a linear extension of $P$ with the permutation of $[p]$ obtained by listing the labels of its elements in increasing order. Thus for $P$ in Figure \ref{f-1}, $\mathscr{L}(P,\omega)$ consists of the two permutations $213$ and $231$. Then the ``fundamental theorem of $P$-partitions'' (stated somewhat differently, but equivalently, by Stanley \cite[Lemma 6.1]{ordered}) asserts that the set $\mathscr{A}(P,\omega)$ of $P$-partitions is the disjoint union of $\mathscr{A}(\pi,\omega)$ over all elements $\pi$ of $\mathscr{L}(P,\omega)$.
Thus for the poset $P$ of Figure \ref{f-1}, the fundamental theorem says that the set of $P$-partitions is the disjoint union of the solutions of $\sigma(2)>\sigma(1)\ge \sigma(3)$ and the solutions of $\sigma(2)\ge\sigma(3)>\sigma(1)$.
We sketch here Stanley's proof. Given a $P$-partition $\sigma$, we can arrange its values in weakly decreasing order and thus find a permutation $\pi$ of $P$ such that
\begin{equation*}
\sigma(\pi(1)) \ge \sigma(\pi(2)) \ge \dots \ge \sigma(\pi(p)).
\end{equation*}
There may be many ways to do this, but we get a unique permutation if we require that the labels of equal values be arranged in increasing order; i.e., if $\sigma(\pi(i)) = \sigma(\pi({i+1}))$ then $\omega(\pi(i)) < \omega(\pi({i+1}))$. But the contrapositive of this property is (since the labels are all distinct and $\sigma(\pi(i)) \ge \sigma(\pi({i+1}))$) that if $\omega(\pi(i)) > \omega(\pi({i+1}))$ then $\sigma(\pi(i)) > \sigma(\pi({i+1}))$. This shows that every $P$-partition is in $\mathscr{A}(\pi,\omega)$ for a unique $\pi\in \mathscr{L}(P,\omega)$, and it is clear from the definitions that $\mathscr{A}(\pi,\omega)\subseteq \mathscr{A}(P,\omega)$ for each $\pi\in \mathscr{L}(P,\omega)$.
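For instance, for the labeled poset of Figure \ref{f-1}, the $P$-partition $\sigma$ with $\sigma(1)=2$, $\sigma(2)=5$, $\sigma(3)=2$ has its values arranged in weakly decreasing order as $\sigma(2)\ge\sigma(1)\ge\sigma(3)$, where the tie between the equal values at $1$ and $3$ is broken by listing the labels in increasing order; thus $\sigma$ lies in $\mathscr{A}(\pi,\omega)$ for the unique linear extension $\pi=213$, in agreement with the decomposition given before the proof.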
The fundamental theorem has enumerative consequences, since $(P,\omega)$-partitions are easy to count when $P$ is a total order. In particular, Stanley defines
\begin{align*}
U_m(P,\omega) &= \sum_{\sigma\in \mathscr{A}(P,\omega;m)}q^{\sigma(1)+\cdots+\sigma(p)},\\
U(P,\omega) &= \lim_{m\to\infty}U_m(P,\omega) =\sum_{\sigma\in \mathscr{A}(P,\omega)}q^{\sigma(1)+\cdots+\sigma(p)},\\
\Omega(P,\omega;m)&=U_{m-1}(P,\omega)|_{q=1};
\end{align*}
$\Omega(P,\omega;m)$ is called the \emph{order polynomial} of $(P,\omega)$.
(In section \ref{s-recip} we will write $U_m(P,\omega;q)$ and $U(P,\omega;q)$ when we need to show dependence on $q$.)
To evaluate these sums we apply the fundamental theorem, which reduces the problem to the case in which $P$ is a total order.
If $\pi$ is a permutation of $[p]$, we define the \emph{descent set} $\mathscr{S}(\pi)$ to be the set $\{\,j \mid \pi(j) > \pi(j+1)\,\}$, and we denote by $\des(\pi)$ the number of elements of $\mathscr{S}(\pi)$ and by $\maj(\pi)$ (the \emph{major index}\footnote{Stanley used the term ``index''.} of $\pi$) the sum of the elements of $\mathscr{S}(\pi)$.
Then the basic result on these quantities is
\begin{equation}
\label{e-majdes}
\sum_{m=0}^\infty U_m(P,\omega) t^m = \frac{\sum_{\pi\in \mathscr{L}(P,\omega)}q^{\maj(\pi)} t^{\des(\pi)}}{(1-t)(1-tq)\cdots(1-tq^p)}.
\end{equation}
Equation \eqref{e-majdes} is easily proved directly if $P$ is a total order (chain), and the general case follows from the fundamental theorem. As consequences of \eqref{e-majdes} we have
\begin{gather}
\label{e-U}
U(P,\omega) = \frac{\sum_{\pi\in \mathscr{L}(P,\omega)}q^{\maj(\pi)}}{(1-q)\cdots(1-q^p)}
\intertext{and}
\label{e-order}
\sum_{m=0}^\infty \Omega(P,\omega;m) t^m = \frac{\sum_{\pi\in \mathscr{L}(P,\omega)} t^{1+\des(\pi)}}{(1-t)^{p+1}}.
\end{gather}
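As a quick check of \eqref{e-U} and \eqref{e-order}, take again the labeled poset of Figure \ref{f-1}, for which $\mathscr{L}(P,\omega)=\{213,231\}$. The permutation $213$ has descent set $\{1\}$ and the permutation $231$ has descent set $\{2\}$, so $\maj(213)=1$, $\maj(231)=2$ and $\des(213)=\des(231)=1$. Hence
\begin{equation*}
U(P,\omega)=\frac{q+q^2}{(1-q)(1-q^2)(1-q^3)}
\qquad\text{and}\qquad
\sum_{m=0}^\infty \Omega(P,\omega;m)t^m=\frac{2t^2}{(1-t)^4},
\end{equation*}
so that $\Omega(P,\omega;m)=2\binom{m+1}{3}$; for example, $\Omega(P,\omega;2)=2$ counts the two $P$-partitions with largest part at most $1$, namely $\sigma(2)=1$, $\sigma(1)=0$, $\sigma(3)\in\{0,1\}$.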
\section{MacMahon}
The story of $P$-partitions begins with Percy A. MacMahon's work on plane partitions \cite{macmahon1911memoir} (see also \cite[Vol.~2, Section X, Chapter 3]{MR0141605}). The problem that MacMahon considers is that of counting plane partitions of a given shape; that is, arrangements of nonnegative integers with a given sum in a ``lattice'' such as
\begin{equation*}
\begin{array}{cccc}
4&4&2&1\\
4&3&2\\
2&1
\end{array}
\end{equation*}
in which the entries are weakly decreasing in each row and column.
MacMahon gives a simple example to illustrate his idea. We want to count arrays of nonnegative integers
\begin{equation*}
\begin{array}{cc}
p&q\\
r&s
\end{array}
\end{equation*}
satisfying $p\ge q\ge s$ and $p\ge r\ge s$, and we assign to such an array the weight $x^{p+q+r+s}$.
The set of solutions of these inequalities is the disjoint union of the solution sets of the inequalities
\begin{equation*}
\textrm{(i) } p\ge q\ge r\ge s \quad \textrm{and}\quad \textrm{(ii) } p\ge r> q\ge s.
\end{equation*}
Setting $r=s+A$, $q=s+A+B$, and $p=s+A+B+C$, where $A$, $B$, and $C$ are arbitrary nonnegative integers, we see that the sum $\sum x^{p+q+r+s}$ over solutions of
$p\ge q\ge r\ge s $
is equal to
\begin{equation*}
\sum_{A,B,C,s\ge0}x^{C+2B+3A+4s}=\frac{1}{\bq 1\bq 2\bq 3 \bq 4},
\end{equation*}
where $\betaq n= (1-x^n)$. Similarly, the generating function for solutions of $ p\ge r> q\ge s$ is $x^2/\betaq 1 \betaq 2 \betaq 3 \betaq 4$, so the generating function for all of the arrays is
\begin{equation}
\label{e-gf1}
\frac{1+x^2}{\bq 1\bq2 \bq3 \bq4}.
\end{equation}
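The decomposition can be checked by brute force; the following Python sketch (an illustration we add here, not MacMahon's) compares the coefficients of \eqref{e-gf1} with a direct count of the arrays:
\begin{verbatim}
N = 30  # compare coefficients of x^0 .. x^N

# Direct count of arrays  p q / r s  with p >= q >= s and p >= r >= s.
direct = [0] * (N + 1)
for p in range(N + 1):
    for q in range(p + 1):
        for r in range(p + 1):
            for s in range(min(q, r) + 1):
                if p + q + r + s <= N:
                    direct[p + q + r + s] += 1

# Coefficients of (1 + x^2) / ((1-x)(1-x^2)(1-x^3)(1-x^4)), truncated.
series = [0] * (N + 1)
series[0] = 1
series[2] = 1                       # numerator 1 + x^2
for k in (1, 2, 3, 4):              # multiply by 1/(1-x^k)
    for n in range(k, N + 1):
        series[n] += series[n - k]

assert direct == series
print("generating function (1+x^2)/((1-x)(1-x^2)(1-x^3)(1-x^4)) verified")
\end{verbatim}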
MacMahon explains (but does not prove) that a similar decomposition exists for counting plane partitions of any shape, and moreover, the terms that appear in the numerator have combinatorial interpretations. They correspond to what MacMahon calls \emph{lattice arrangements}, which are essentially what we now call \emph{standard Young tableaux}. In the example under discussion there are two lattice arrangements,
\begin{equation*}
\begin{array}{cc}
4&3\\
2&1
\end{array}
\text{ and }
\begin{array}{cc}
4&2\\
3&1
\end{array}.
\end{equation*}
They are the plane partitions of the shape under consideration in which the entries are $1, 2, \dots, n$, where $n$ is the number of positions in the shape. To each lattice arrangement MacMahon associates a \emph{lattice permutation}: the $i$th entry in the lattice permutation corresponding to an arrangement is the row of the arrangement in which $n+1-i$ appears, where the rows are represented by the Greek letters $\alpha$, $\beta$, \dots. So the lattice permutation associated to the first arrangement is $\alpha\alpha\beta\beta$ and to the second is $\alpha\beta\alpha\beta$. (A sequence of Greek letters is called a lattice permutation if any initial segment contains at least as many $\alpha$s as $\beta$s, at least as many $\beta$s as $\gamma$s, and so on.)

To each lattice permutation, MacMahon associates an inequality relating $p$, $q$, $r$, and $s$: the $\alpha$s are replaced, in left-to-right order, with the first-row variables $p$ and $q$, and the $\beta$s are replaced with the second-row variables $r$ and~$s$. A greater-than-or-equal sign is inserted between two Greek letters that are in alphabetical order and a greater-than sign is inserted between two Greek letters that are out of alphabetical order. So the lattice permutation $\alpha\alpha\beta\beta$ gives the inequalities $p\ge q\ge r\ge s$ and the lattice permutation $\alpha\beta\alpha\beta$ gives the inequalities $p\ge r>q\ge s$. Each lattice permutation contributes one term to the numerator of \eqref{e-gf1}, and the power of $x$ in such a term is the sum of the positions of the Greek letters that are followed by a smaller Greek letter.

MacMahon then describes the variation in which a restriction on part sizes is imposed. The decomposition into disjoint inequalities works exactly as in the unrestricted case, and reduces the problem to counting partitions with a given number of parts and a bound on the largest part. Most of the rest of the paper is devoted to applications of this idea to the enumeration of plane partitions. In a postscript, MacMahon considers the analogous situation in which only decreases in the rows are required, not in the columns. The enumeration of such arrays is not of much interest in itself, since the generating function for an array with $p_1, p_2,\dots, p_n$ nodes in its $n$ rows is clearly
\begin{equation*}
\frac{1}{\bq 1 \cdots \bq {p_1}\bq 1 \cdots \bq {p_2}\cdots\cdots \bq 1\cdots \bq {p_n}}.
\end{equation*}
However the same decomposition that is used in the case of plane partitions yields interesting results about permutations.
In a follow-up paper \cite{macmahon1913indices} (see also \cite[Vol. 2, Section IX, Chapter 3]{MR0141605}), MacMahon elaborates on this idea. Given a sequence of elements of a totally ordered set, MacMahon defines a \emph{major contact}\footnote{Now usually called a \emph{descent}.} to be a pair of consecutive entries in which the first is greater than the second, and he defines the \emph{greater index}\footnote{MacMahon's greater index is now usually called the \emph{major index}. The term ``major index" was introduced by Foata and Sch\"utzenberger \cite{MR519777,MR506852} in reference to MacMahon's military rank.
Curiously, MacMahon used the term ``major index" for a related concept that does not seem to have been further studied.} to be the sum of the positions of the first elements of the major contacts.
Thus the greater index of $\beta\alpha\alpha\alpha\gamma\gamma\beta\alpha\gamma$, where the letters are ordered alphabetically, is $1+6+7=14$. (He similarly defines the ``equal index" and ``lesser index" but these do not play much of a role in what follows.)
MacMahon's main result in the paper is that the sum $\sum x^p$, where $p$ is the greater index, over all ``permutations of the assemblage $\alpha^i \beta^j\gamma^k\cdots$" is
\begin{equation*}
\frac{\bq 1\bq 2 \cdots \bq{i+j+k+\cdots}}
{\bq 1\bq 2 \cdots \bq i \cdot \bq 1\bq 2 \cdots \bq j \cdot \bq 1\bq 2 \cdots \bq k \cdots}.
\end{equation*}
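This product formula is easy to test by machine; the sketch below (ours, with the assemblage $\alpha^3\beta^2\gamma^2$ chosen just as an example) compares the sum $\sum x^p$ over the distinct permutations of the multiset with the quotient of products, evaluated exactly at a few integer values of $x$:
\begin{verbatim}
from itertools import permutations
from fractions import Fraction

def greater_index(word):
    """MacMahon's greater index: sum of positions i with word[i-1] > word[i]."""
    return sum(i for i in range(1, len(word)) if word[i - 1] > word[i])

def bq(n, x):
    return 1 - Fraction(x) ** n          # the factor (1 - x^n)

def q_multinomial(parts, x):
    """bq(1)...bq(i+j+...) divided by bq(1)..bq(i) * bq(1)..bq(j) * ..."""
    num = Fraction(1)
    for n in range(1, sum(parts) + 1):
        num *= bq(n, x)
    den = Fraction(1)
    for p in parts:
        for n in range(1, p + 1):
            den *= bq(n, x)
    return num / den

parts = (3, 2, 2)
word = sum(([letter] * m for letter, m in enumerate(parts, start=1)), [])
perms = set(permutations(word))           # distinct multiset permutations
for x in (2, 3, 5):
    lhs = sum(Fraction(x) ** greater_index(w) for w in perms)
    assert lhs == q_multinomial(parts, x)
print("greater-index generating function verified for", parts)
\end{verbatim}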
As in the previous paper MacMahon illustrates with an example, but does not give a formal proof, nor even an informal explanation of why the procedure works. We consider the sum of $x^{a_1+a_2+a_3+b_1+b_2}$ over all solutions of the inequalities $a_1\ge a_2\ge a_3$, $b_1\ge b_2$. We see directly that the sum is
\begin{equation*}
\frac{1}{\bq 1 \bq 2 \bq 3\cdot \bq 1 \bq 2}.
\end{equation*}
MacMahon breaks up these inequalities just as before into subsets corresponding to all the permutations of $\alpha^3\beta^2$; for example, to the permutation $\alpha\beta\alpha\beta\alpha$ correspond the inequalities $a_1\ge b_1 > a_2 \ge b_2 > a_3$, where the strict inequalities correspond to the major contacts, which in this example are all of the form $\beta \alpha$. The generating function for the solutions of these inequalities is
\begin{equation*}
\frac{x^6}{\bq 1\bq 2\bq 3\bq 4 \bq 5};
\end{equation*}
here $6=2+4$ is the greater index of the permutation $\alpha\beta\alpha\beta\alpha$. Summing the contributions from all ten permutations of $\alpha^3\beta^2$ gives
\begin{equation*}
\frac{\sum x^p}{\bq 1\bq 2\bq 3\bq 4 \bq 5} = \frac{1}{\bq 1 \bq 2 \bq 3\cdot \bq 1 \bq 2}.
\end{equation*}
In his book \emph{Combinatory Analysis} \cite[Vol.~2, Section IX]{MR0141605} MacMahon discusses the analogous result when a bound is imposed on the part sizes. The sum of $x^{a_1+\cdots+a_p}$ over all solutions of $n\ge a_1\ge \cdots \ge a_p$ is
$\bq {n+1}\cdots \bq{n+p}/\bq1\cdots \bq p$, and MacMahon derives in Art.~462 an important, though not well-known, formula that he writes as
\begin{equation}
\label{e-qSN1}
\begin{multlined}
\sum_{n=0}^\infty g^n \frac{\bq{n+1}\cdots \bq{n+p_1}\cdot \bq{n+1}\cdots \bq{n+p_2}\cdots\cdots \bq{n+1}\cdots \bq{n+p_m}}
{\bq 1 \bq 2 \cdots \bq{p_1} \cdot\bq 1 \bq 2 \cdots \bq{p_2}\cdots\cdots \bq 1 \bq 2 \cdots \bq{p_m}}\\
= \frac{1+g\mathbb{P}F_1+g^2\mathbb{P}F_2+\cdots+g^\nu \mathbb{P}F_\nu}
{(1-g)(1-gx)(1-gx^2)\cdots(1-gx^{p_1+\cdots+p_\nu})}.
\end{multlined}
\end{equation}
Here $\mathbb{P}F_s$ is the generating function, by greater index, of permutations of the assemblage $\alpha_1^{p_1}\alpha_2^{p_2}\cdots \alpha_m^{p_m}$ with $s$ major contacts. This result is worth restating in more modern terminology: Let
\[A_{p_1,\dots, p_m}(t,q) = \sum_\pi t^{\des(\pi)}q^{\maj(\pi)},\]
where the sum is over all permutations $\pi$ of the multiset $\{1^{p_1}, 2^{p_2}, \dots, m^{p_m}\}$, and if $\pi=a_1\cdots a_p$, where $p=p_1+\cdots+p_m$, then $\des(\pi)$ is the number of descents of $\pi$, that is, the number of indices $i$ for which $a_i>a_{i+1}$, and $\maj(\pi)$ is the sum of the descents of $\pi$. Let $(a;q)_n$ be the $q$-rising factorial $(1-a)(1-aq)\cdots (1-aq^{n-1})$, let $(q)_n$ denote $(q;q)_n = (1-q)\cdots (1-q^n)$ and let
$\qbinom{m}{n}$ denote the $q$-binomial coefficient
\begin{equation*}
\frac{(q)_{m}}{(q)_n (q)_{m-n}}.
\end{equation*}
Then
\begin{equation}
\label{e-qSN2}
\sum_{n=0}^\infty t^n \qbinom{n+p_1}{p_1}\qbinom{n+p_2}{p_2}\cdots \qbinom{n+p_m}{p_m}
=\frac {A_{p_1,\dots, p_m}(t,q)}{(t;q)_{p+1}}.
\end{equation}
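Identity \eqref{e-qSN2} can be verified symbolically for small parameters. The sketch below (ours, assuming SymPy is available; the choice $p_1,p_2,p_3=2,1,2$ is arbitrary) expands the right-hand side as a power series in $t$ and compares coefficients with the product of $q$-binomial coefficients:
\begin{verbatim}
import sympy as sp
from itertools import permutations

t, q = sp.symbols('t q')

def qbinom(m, n):
    """Gaussian binomial [m choose n]_q."""
    num = sp.prod([1 - q**(m - k) for k in range(n)])
    den = sp.prod([1 - q**(k + 1) for k in range(n)])
    return sp.cancel(num / den)

def A_poly(parts):
    """A_{p_1,...,p_m}(t,q), summed over distinct multiset permutations."""
    word = []
    for letter, mult in enumerate(parts, start=1):
        word += [letter] * mult
    result = 0
    for w in set(permutations(word)):
        des = [i for i in range(1, len(w)) if w[i - 1] > w[i]]
        result += t**len(des) * q**sum(des)
    return sp.expand(result)

parts = (2, 1, 2)
p = sum(parts)
N = 6  # compare coefficients of t^0 .. t^{N-1}

rhs = A_poly(parts) / sp.prod([1 - t * q**j for j in range(p + 1)])
rhs_series = sp.series(rhs, t, 0, N).removeO()

for n in range(N):
    lhs_coeff = sp.expand(sp.prod([qbinom(n + pi, pi) for pi in parts]))
    assert sp.simplify(lhs_coeff - rhs_series.coeff(t, n)) == 0
print("identity verified for parts", parts)
\end{verbatim}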
Several specializations of \eqref{e-qSN2} are worth mentioning.
For $q=1$
the polynomials $A_{p_1,\dots, p_m}(t,1)$ solve Simon Newcomb's problem, the problem of counting permutations of a multiset by descents.\footnote{In \cite{macmahon1908second} MacMahon describes Simon Newcomb's problem as the equivalent problem of counting permutations of a multiset by consecutive pairs which are \emph{not} descents, though in his book \cite[Vol.~1, p.~187]{MR0141605} he counts by descents.} (MacMahon had solved Simon Newcomb's problem earlier by a different method \cite{macmahon1908second}; however, he does not note here the connection with Simon Newcomb's problem.) In the case $q=1$, $p_1=\cdots=p_m=1$, the polynomials $A_{1^m}(t,1)$ are the Eulerian polynomials\footnote{The Eulerian polynomials are often defined to be our $tA_{1^m}(t,1)$, making the generating function the nicer
$\sum_{n=0}^\infty t^n n^m$.
}, satisfying
\begin{equation*}
\sum_{n=0}^\infty t^n (n+1)^m = \frac{A_{1^m}(t,1)}{(1-t)^{m+1}}.
\end{equation*}
For $p_1=\cdots=p_m=1$, \eqref{e-qSN2} becomes
\begin{equation*}
\sum_{n=0}^\infty t^n (1+q+\cdots +q^n)^m
=\frac {A_{1^m}(t,q)}{(t;q)_{m+1}},
\end{equation*}
a result often attributed to Carlitz \cite{MR0366683} (though Carlitz stated an equivalent result much earlier \cite{MR0060538}, attributing it to John Riordan).
\section{Kreweras}
In 1967, Germain Kreweras \cite{kreweras1967} (see also \cite{MR0200180} for a briefer account) used an approach similar to MacMahon's, though stated very differently, to solve a common generalization of Simon Newcomb's problem\footnote{Kreweras seems to have been unaware of MacMahon's work on Simon Newcomb's problem and refers only to Riordan \cite[216--219]{MR0096594} as a reference.} and what he calls ``Young's problem". In Young's problem, we are given two weakly decreasing sequences $Y=(y_1,\dots, y_h)$ and $Y'=(y_1',\dots, y_h')$ with $y_i\ge y_i'$ for each $i$ and we ask how many ``Young chains" there are from $Y'$ to $Y$, which are sequences of partitions (weakly decreasing sequences of integers) starting with $Y'$ and ending with $Y$ in which each partition is obtained from the previous one by increasing one part by 1. In modern terminology, these are standard tableaux of shape $Y/Y'$; that is, fillings of a Young diagram of shape $Y$ with the squares of a Young diagram of shape $Y'$ removed from it, with the integers $1,2,\dots, m$ (where $m$ is the total number of squares), so that the entries are increasing in every row and column. For example, if $Y'= (2,1,0)$ and $Y=(3,2,2)$ then one of the Young chains from $Y'$ to $Y$ is $Y'=(2,1,0), (3,1,0), (3,1,1), (3,2,1), (3,2,2) = Y$. This corresponds to the skew Young tableau
\begin{equation*}
\ytableaushort{\none\none1,\none3,24}
\end{equation*}
in which the entry $i$ occurs in row $j$ if the $i$th step in the chain is an increase by 1 in the $j$th position.
Kreweras defines a ``return" (\emph{retour en arri\`ere}) of a Young chain to consist of three consecutive partitions $UVW$ such that the entry augmented in passing from $V$ to $W$ has an index that is strictly less than which is augmented in passing from $U$ to $V$. In terms of Young tableau, a return corresponds to an entry $i$ which is in a higher row than $i+1$. (In our example, 1 and 3 correspond to returns.) In MacMahon's approach, the returns correspond to major contacts of lattice permutations. Kreweras writes $\theta_r(Y,Y')$ for the number of Young chains from $Y'$ to $Y$ with $r$ returns.
He observes that Simon Newcomb's problem is equivalent to a special case of computing $\theta_r(Y,Y')$; thus the number of
permutations of the multiset $\{1^3, 2, 3^2\}$ with $r$ descents is equal to the number of skew Young tableaux of shape
\begin{equation*}
\ydiagram{3+3,2+1,2}
\end{equation*}
with $r$ returns.
He then gives the solution to this problem in the form
\begin{equation*}
\frac{\sum_{r\ge0} \theta_r(Y,Y') t^r}{(1-t)^{\eta-\eta'+1}}=\sum_{r\ge0} w_r t^r.
\end{equation*}
Here $\eta$ is the sum of the entries of $Y$, $\eta'$ is the sum of the entries of $Y'$, and $w_r$ is the number of chains
\begin{equation}
\label{e-chains}
Y' \le Z_1 \le \cdots \le Z_r \le Y;
\end{equation}
in an earlier work \cite{kreweras1965}, Kreweras had given the formula
\begin{equation*}
w_r = \det \left(\binom{y_i-y_j'+r}{i-j+r}\right)_{i,j=1,\dots, h},
\end{equation*}
where $Y=(y_1,\dots, y_h)$ and $Y' = (y'_1,\dots, y'_h)$.
In the case of Simon Newcomb's problem, the determinant is upper triangular, and is therefore a product of binomial coefficients (as can also be seen directly).
Kreweras's method of proof is ultimately equivalent to MacMahon's approach, though described very differently: he associates to every chain \eqref{e-chains} a Young chain from $Y'$ to $Y$ in such a way that the contribution to $\sum_r w_rt^r$ corresponding to a given Young chain with $r$ returns is $t^r/(1-t)^{\eta-\eta'+1}$.
In a later paper \cite{MR623036}, Kreweras studied what is in Stanley's terminology the order polynomial of a naturally labeled poset. Although published in 1981, long after Stanley's memoir \cite{ordered}, Kreweras stated that Stanley's work was unknown to him when the paper was written.
\section{Knuth}
In 1970, Donald E. Knuth \cite{MR0277401} used MacMahon's approach to study solid (i.e., three-dimensional) partitions. MacMahon had conjectured that the generating function for solid partitions was $\prod_{i=1}^\infty (1-z^i)^{-\binom{i+1}{2}}$. This conjecture had been disproved earlier \cite{MR0217029}, but Knuth wanted to compute the number $c(n)$ of solid partitions of $n$ for larger values of $n$ in an (unsuccessful) attempt to find patterns. Knuth realized that MacMahon's approach would work for arbitrary partially ordered sets, not just those corresponding to plane partitions.
Knuth takes a set $P$ partially ordered by the relation $\prec$ and well-ordered by the total order $<$, where $x\prec y$ implies $x<y$. He defines a $P$-partition of $N$ to be a function $n$ from $P$ to the set of nonnegative integers satisfying (i) $x\prec y$ implies $n(x)\ge n(y)$, (ii) only finitely many $x$ have $n(x)>0$, and (iii) $\sum_{x\in P} n(x) = N$.
Knuth proves that there is a bijection from $P$-partitions of $N$ to pairs of sequences
\begin{gather*}
n_1\ge n_2\ge\cdots \ge n_m\\
x_1, x_2, \dots, x_m
\end{gather*}
where $m\ge0$, the $n_i$ are positive integers with sum $N$, and the $x_i$ are distinct elements of $P$ satisfying
\begin{enumerate}
\item[(S1)] For $1\le j\le m$ and $x\in P$, $x\prec x_j$ implies $x=x_i$ for some $i<j$.
\item[(S2)] $x_i>x_{i+1}$ implies $n_i > n_{i+1}$ for $1\le i < m$.
\end{enumerate}
Knuth is interested primarily in the case in which $P$ is countably infinite, for which he uses a modification of this bijection to prove that if $P$ is an infinite poset and $s(n)$ is the number of $P$-partitions of $n$ then
\begin{equation*}
1+s(1)z+s(2)z^2 + \cdots = (1+t(1)z+t(2)z^2+\cdots)/(1-z)(1-z^2)(1-z^3)\cdots
\end{equation*}
where $t(k)$ is the number of linear extensions of finite order ideals of $P$ with ``index" $k$; Knuth's index is a variant of MacMahon's greater index.
\section{Thomas}
Gl\^anffrwd Thomas's 1977 paper \cite{thomas}, based on his 1974 Ph.D. thesis \cite{thomas-thesis}, appeared after Stanley's memoir, but it was written without knowledge of Stanley's work (but with knowledge of MacMahon's). Thomas's motivation was the combinatorial definition of Schur functions. If $\lambda$ is a partition, then a Young tableau of shape $\lambda$ is a filling of the Young diagram of $\lambda$ that is weakly increasing in rows and strictly increasing in columns. For example, if $\lambda$ is the partition $(4,2,1)$ then a Young tableau of shape $\lambda$ is
\begin{equation}
\label{e-ssyt}
\ytableaushort{4411,21,1}
\end{equation}
The \emph{Schur function} $s_\lambda$ is the sum of the weights of all Young tableaux of shape $\lambda$, where the weight of a Young tableau is the product of $x_i$ over all of its entries $i$. (So the weight of the tableau \eqref{e-ssyt} is $x_1^4x_2x_4^2$.) Schur functions are symmetric in the variables $x_i$ and have important applications in enumeration and in the representation theory of symmetric and general linear groups.
Thomas considers a more general situation, in which he allows as shapes (which he calls ``frames") any subset of $\mathbb{Z}\times \mathbb{Z}$ and he defines a \emph{numbering} of a frame to be a filling with positive integers that is weakly increasing in rows and strictly increasing in columns. For example,
\begin{equation}
\label{e-frame}
\ytableaushort{12,\none4,2\none4}
\end{equation}
is a numbering. To any numbering he associates an \emph{index numbering} by replacing its entries in increasing order with $1,2,\dots, m$, where $m$ is the number of entries, and ties are broken from bottom to top and then left to right.
Thus the index numbering corresponding to \eqref{e-frame} is
\begin{equation*}
\ytableaushort{13,\none5,2\none4}
\end{equation*}
Thomas calls two numberings equivalent if they have the same index numbering. He defines the monomial of a numbering of a frame to be the product $\prod x_i$ over all the entries $i$ of the numbering. (So the sum of the monomials of all the numberings of a Young diagram is a Schur function.) His goal is to determine the sum of the monomials of an equivalence class of numberings.
The numberings of frames are a particular case of $P$-partitions corresponding to subposets of $\mathbb{Z}\times \mathbb{Z}$, and except for the case of skew Schur functions, which are symmetric, one might just as well study general $P$-partitions with his weighting. The interesting aspect of his work is that it seems to be the earliest appearance (after a brief mention by Stanley \cite[p.~81]{ordered}) of what are now called quasi-symmetric generating functions for $P$-partitions, which we will discuss in more detail in Section~\ref{s-qs}.
As we have seen, the study of $P$-partitions leads to inequalities like $j_1\ge j_2 > j_3 \ge j_4$, or equivalently (following Thomas),
\[i_1\le i_2< i_3 \le i_4.\]
While MacMahon wanted to compute $\sum x^{i_1+\cdots +i_4},$ Thomas was interested in the more refined multivariable generating function
\begin{equation*}
\sum_{i_1\le i_2< i_3 \le i_4} x_{i_1}x_{i_2}x_{i_3}x_{i_4}.
\end{equation*}
This is a \emph{fundamental quasi-symmetric function}; these form a basis for the algebra of quasi-symmetric functions, which will be discussed in Section \ref{s-qs}.
Thomas applies \emph{Baxter operators} to construct quasi-symmetric functions.
A Baxter operator on a commutative algebra $A$ over a field $K$ is a linear operator $B: A\to A$ such that for some fixed nonzero $\theta\in K$,
\begin{equation*}
B(aB(b)) + B(bB(a)) = B(a)B(b) + B(\theta ab)
\end{equation*}
for all $a,b\in A$.
Now let $A$ be the algebra of infinite sequences $(a_1,a_2,\dots)$ with entries in a field, with componentwise operations.
We define two maps $A\to A$; first a map introduced by Rota and Smith \cite{MR0343094}
\begin{align*}
S(a_1,a_2,\dots) &= \biggl(0,a_1, a_1+a_2,\dots, \sum_{i=1}^{r-1}a_i, \dots\biggr),
\end{align*}
which we write as
$\left(\dots, \sum_{i=1}^{r-1}a_i, \dots\right)$,
and a variant
\begin{align*}
P(a_1,a_2,\dots)&= \biggl(a_1, a_1+a_2,\dots, \sum_{i=1}^{r}a_i, \dots\biggr)\\
&=\biggl(\dots, \sum_{i=1}^{r}a_i, \dots\biggr).
\end{align*}
Then $S$ and $P$ are Baxter operators.
We show by an example the connection between these operators and quasi-symmetric functions:
Let $x=(x_1, x_2, x_3,\dots)$. Then
\begin{equation*}
xS(xS(xP(x))) = \biggl(\dots, \sum_{1\le i\le j<k<r} x_i x_j x_k x_r,\dots \biggr).
\end{equation*}
Thus the fundamental quasi-symmetric function
\begin{equation*}
\sum_{i\le j<k<l} x_i x_j x_k x_l
\end{equation*}
is obtained by adding all the entries of $xS(xS(xP(x)))$.
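The displayed computation is easy to confirm numerically: the sketch below (ours) implements $S$ and $P$ on truncated sequences and checks that the entries of $xS(xS(xP(x)))$ agree with the quadruple sum, for sample numeric values of the $x_i$:
\begin{verbatim}
def S(a):
    """Rota--Smith operator: r-th entry is the sum of the first r-1 entries."""
    out, total = [], 0
    for x in a:
        out.append(total)
        total += x
    return out

def P(a):
    """Variant operator: r-th entry is the sum of the first r entries."""
    out, total = [], 0
    for x in a:
        total += x
        out.append(total)
    return out

def times(a, b):
    """Componentwise product."""
    return [x * y for x, y in zip(a, b)]

x = [3, 1, 4, 1, 5, 9, 2, 6]
left = times(x, S(times(x, S(times(x, P(x))))))

for r in range(1, len(x) + 1):
    brute = sum(x[i - 1] * x[j - 1] * x[k - 1] * x[r - 1]
                for i in range(1, r + 1)
                for j in range(i, r + 1)
                for k in range(j + 1, r + 1)
                if k < r)
    assert left[r - 1] == brute
print("entries of x S(x S(x P(x))) match the quadruple sum")
\end{verbatim}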
Thomas's approach has been further developed by Rudolf Winkel \cite{MR1612387}.
\section{Stanley}
In this section we discuss a few highlights of Stanley's memoir \cite{ordered}.
\subsection{Reciprocity theorems}
\label{s-recip}
If $\omega$ is a labeling of a poset $P$ of size $p$, we define the \emph{complementary labeling} $\bar\omega$ of $P$ by $\bar\omega(i) = p+1-\omega(i)$. Thus when we change the labeling of $P$ from $\omega$ to $\bar\omega$, the strict and weak inequalities in the definition of a $P$-partition are switched. If the permutation $\pi$ in $\mathscr{L}(P,\omega)$ corresponds to the permutation $\bar\pi$ in $\mathscr{L}(P, \bar\omega)$ then the descent sets $\mathscr{S}(\pi)$ and $\mathscr{S}(\bar\pi)$ are complementary subsets of $[p-1]$, and thus by \eqref{e-majdes}--\eqref{e-order}, $U_m(P,\bar\omega;q)$, $ U(P, \bar\omega;q)$, and $\Omega(P,\bar\omega;m)$ are determined by $U_m(P,\omega;q)$, $ U(P, \omega;q)$, and $\Omega(P,\omega;m)$. The formulas expressing these relations are surprisingly simple. We first note that if $P$ is a chain and $(P,\omega)$ corresponds to a permutation $\pi$ with $s$ descents then
$U_m(P, \omega;q)= q^{\maj(\pi)}\qbinom{p+m-s}{p}$. It follows that in general, $U_m(P,\omega;q)$ is a polynomial in $q^m$ and $\Omega(P,\omega;m)$ is a polynomial in $m$, so $U_m(P,\omega;q)$ and $\Omega(P,\omega; m)$ can be extended in a natural way to negative values of $m$. Moreover, $U(P,\omega;q)$ is a rational function of $q$, so $U(P,\omega; q^{-1})$ is well-defined as a rational function of $q$. Then we have the following reciprocity formulas
\begin{align}
\notag
q^p U_m(P,\bar\omega;q) &= (-1)^p U_{-(m+2)}(P,\omega; q^{-1}), \text{ with $U_{-1}(P,\omega) = 0$}\\
\notag
q^p U(P, \bar\omega;q) &= (-1)^p U(P, \omega; q^{-1})\\
\label{e-rec-ord}
\Omega(P,\bar\omega;m) &= (-1)^p\Omega(P,\omega; -m).
\end{align}
When $P$ satisfies certain ``chain conditions" there is an additional relation between the enumerative quantities associated with $(P,\omega)$ and $(P,\bar\omega)$ \cite[18--19]{ordered}. We state here one of these results \cite[Proposition 19.3]{ordered}: Suppose that $(P,\omega)$ is naturally labeled and that every maximal chain in $P$ has length $l$. Then for all $m$,
\begin{equation*}
\Omega(P,\omega;m)=(-1)^p \Omega(P,\omega; -l-m)=\Omega(P,\bar\omega; l+m)
\end{equation*}
and the number of permutations in $\mathscr{L}(P,\omega)$ with $s$ descents is equal to the number of permutations in $\mathscr{L}(P,\omega)$ with $p-l-1-s$ descents.
A nice application of the reciprocity theorem for order polynomials \eqref{e-rec-ord} is Stanley's result \cite{MR0317988} that if $\chi(\lambda)$ is the chromatic polynomial of a graph $G$ with $p$ vertices then $(-1)^p\chi(-1)$ is the number of acyclic orientations of $G$. Any proper coloring of $G$ with the integers $1, 2, \dots, \lambda$ yields an acyclic orientation of $G$ in which edges are directed from the lower color to the higher. The number of proper colorings of $G$ in $\lambda$ colors associated to an acyclic orientation $O$ is $\Omega(P_O,\omega;\lambda)$ where $P_O$ is the poset associated to $O$ and $\omega$ is a strict labeling. Then by \eqref{e-rec-ord}, $(-1)^p\Omega(P_O,\omega;-1)=\Omega(P_O, \bar\omega; 1) = 1$, since $\bar\omega$ is a natural labeling, and thus each acyclic orientation of $G$ contributes exactly 1 to
$(-1)^p\chi(-1)$.
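The theorem is easy to test on small graphs. The following sketch (ours; the graph, a four-cycle with a chord, is arbitrary) evaluates the chromatic polynomial at $-1$ by interpolation and counts acyclic orientations by brute force:
\begin{verbatim}
from fractions import Fraction
from itertools import product

p = 4
vertices = range(p)
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def proper_colorings(k):
    """Number of proper colorings with k colors, by brute force."""
    return sum(1 for c in product(range(k), repeat=p)
               if all(c[u] != c[v] for u, v in edges))

def chromatic_at(x):
    """Lagrange interpolation of chi through the points (0,...,p)."""
    pts = [(k, proper_colorings(k)) for k in range(p + 1)]
    value = Fraction(0)
    for xi, yi in pts:
        term = Fraction(yi)
        for xj, _ in pts:
            if xj != xi:
                term *= Fraction(x - xj, xi - xj)
        value += term
    return value

def is_acyclic(arcs):
    """Repeatedly remove sinks; a directed cycle blocks this process."""
    remaining, arcs = set(vertices), set(arcs)
    while remaining:
        sinks = [v for v in remaining
                 if not any(a == v for a, b in arcs if b in remaining)]
        if not sinks:
            return False
        remaining -= set(sinks)
        arcs = {(a, b) for a, b in arcs if a in remaining and b in remaining}
    return True

acyclic = sum(1 for choice in product([0, 1], repeat=len(edges))
              if is_acyclic([(u, v) if c else (v, u)
                             for (u, v), c in zip(edges, choice)]))

assert (-1) ** p * chromatic_at(-1) == acyclic
print("(-1)^p chi(-1) =", acyclic, "= number of acyclic orientations")
\end{verbatim}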
\subsection{Disjoint unions}
We may allow the labeling $\omega$ of a poset $P$ to be an arbitrary function from $P$ to the positive integers as long as incomparable elements of $P$ have distinct labels. Then if $(P,\omega_1)$ and $(Q,\omega_2)$ are labeled posets in which the images of $\omega_1$ and of $\omega_2$ are disjoint, we can construct the disjoint union labeled poset $(P+Q, \omega_1+\omega_2)$ where the labeling $\omega_1+\omega_2$ is $\omega_1$ on $P$ and is $\omega_2$ on $Q$. It is clear that, with the notation of Section \ref{s-theory},
\begin{equation*}
U_m(P+Q,\omega_1+\omega_2) = U_m(P,\omega_1)U_m(Q,\omega_2)
\end{equation*}
and similarly for a disjoint union of more than two posets.
Moreover, there is a simple description of $\mathscr{L}(P+Q,\omega_1+\omega_2)$: it is the set of ``shuffles" of $\mathscr{L}(P,\omega_1)$ and $\mathscr{L}(Q,\omega_2)$. Thus to obtain MacMahon's formula \eqref{e-qSN2} from \eqref{e-majdes} we take $(P,\omega) = (P_1+\cdots+P_m, \omega_1+\cdots +\omega_m)$ where $P_i$ is a chain of
size $p_i$ with every label equal to $i$, so that $U_m(P_i, \omega_i) = \qbinom{m+p_i}{p_i}$.
Now let
\begin{equation*}
W_s(P,\omega) = \sum_{\pi} q^{\maj(\pi)},
\end{equation*}
where the sum is over all permutations $\pi\in \mathscr{L}(P,\omega)$ with $s$ descents.
Stanley \cite[Prop.~12.6]{ordered} proves the formula
\begin{multline*}
W_s(P+Q,\omega_1+\omega_2) \\
= \sum_{i=0}^{|P|-1}\sum_{j=0}^{|Q|-1} q^{(s-i)(s-j)}\qbinom{|P|+j-i}{s-i}\qbinom{|Q|+i-j}{s-j}W_i(P,\omega_1)W_j(Q,\omega_2)
\end{multline*}
which is especially interesting in (and equivalent to) the case in which $P$ and $Q$ are chains, where it describes the enumeration by descents and major index of the shuffles of two permutations.
Bijective proofs of this formula were later found by Goulden \cite{MR773053} and by Stadler \cite{MR1713460}.
\subsection{$\alpha$ and $\beta$}
\label{e-ab}
For any subset $S$ of $[p-1]$, let $\alpha(P,\omega;S)$ be the number of permutations in $\mathscr{L}(P,\omega)$ with descent set contained in $S$ and let $\beta(P,\omega;S)$ be the number of permutations in $\mathscr{L}(P,\omega)$ with descent set equal to $S$. Then $\alpha(P,\omega;S)=\sum_{T\subseteq S}\beta(P,\omega;T)$, so by inclusion-exclusion, \[\beta(P,\omega;S)=\sum_{T\subseteq S}(-1)^{|S|-|T|}\alpha(P,\omega;T).\]
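Concretely, given the multiset of descent sets of the permutations in $\mathscr{L}(P,\omega)$, the numbers $\beta(P,\omega;S)$ can be recovered from the $\alpha(P,\omega;S)$ by this inclusion-exclusion. The sketch below (ours; it uses all of ${\mathfrak S}_4$, i.e., the antichain case, as sample data) verifies the formula:
\begin{verbatim}
from itertools import permutations, combinations

p = 4
positions = range(1, p)    # possible descent positions 1, ..., p-1

def descent_set(pi):
    return frozenset(j for j in positions if pi[j - 1] > pi[j])

descent_sets = [descent_set(pi) for pi in permutations(range(1, p + 1))]

def alpha(S):
    return sum(1 for D in descent_sets if D <= S)

def beta(S):
    return sum(1 for D in descent_sets if D == S)

for r in range(p):
    for S in combinations(positions, r):
        S = frozenset(S)
        total = sum((-1) ** (len(S) - k) * alpha(frozenset(T))
                    for k in range(len(S) + 1)
                    for T in combinations(sorted(S), k))
        assert total == beta(S)
print("inclusion-exclusion between alpha and beta verified")
\end{verbatim}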
We can give another interpretation to $\alpha(P,\omega;S)$. An \emph{order ideal} of $P$ is a subset $I$ of $P$ such that if $X\in I$ and $Y\prec X$ then $Y\in I$. A chain of order ideals in $P$
\begin{equation*}
\varnothing=I_0\subset I_1\subset\cdots\subset I_k=P
\end{equation*}
is called \emph{$\omega$-compatible} if the restriction of $\omega$ to each $I_{i+1}-I_i$ is order-preserving. (If $\omega$ is natural, then any chain of order ideals is $\omega$-compatible.) It is not hard to see that if the elements of $S$ are
$m_1<m_2<\cdots<m_s$ then $\alpha(P,\omega;S)$ is the number of $\omega$-compatible chains
$\varnothing=I_0\subset I_1\subset\cdots\subset I_{s+1}=P$ in which $|I_i|=m_i$ for $i=1,\dots, s$: given such a chain we associate to it
the permutation consisting of the labels of $I_1$ in increasing order, followed by the labels of $I_2-I_1$ in increasing order, and so on.
We call two labelings $\omega_1$ and $\omega_2$ of $P$ \emph{equivalent} if $\mathscr{A}(P,\omega_1) = \mathscr{A}(P,\omega_2)$. Alternatively, $\omega_1$ and $\omega_2$ are equivalent if whenever $Y$ covers $X$ in $P$, $\omega_1(X)>\omega_1(Y)$ if and only if $\omega_2(X)>\omega_2(Y)$. If $\omega_1$ and $\omega_2$ are equivalent labelings, then for every $S\subseteq [p-1]$, we have $\alpha(P,\omega_1;S)=\alpha(P,\omega_2;S)$, and there is a simple bijection between the permutations counted by $\alpha(P,\omega_1;S)$ and those counted by $\alpha(P,\omega_2;S)$. It follows that $\beta(P,\omega_1;S)=\beta(P,\omega_2;S)$. This fact has interesting consequences; for example \cite[Exercise 7.95]{ec2}, it can be used to show the existence of Solomon's descent algebra \cite{solomon} for the symmetric group.
\section{Further Developments}
\subsection{Posets}
As Stanley noted in \cite[Section 4]{ordered}, $P$-partitions are closely related to the distributive lattice $J(P)$ of order ideals of $P$, ordered by inclusion. In particular, if $\omega$ is a natural labeling then the order polynomial $\Omega(P,\omega;m)$ is the number of chains of order ideals
\begin{equation*}
\varnothing=I_0\subseteq I_1\subseteq\cdots\subseteq I_m=P,
\end{equation*}
and as described in Section \ref{e-ab}, $\alpha(P,\omega;S)$, and thus $\beta(P,\omega;S)$, can be defined in terms of chains in $J(P)$. These counts of chains make sense in any graded poset with a unique minimal and maximal element, and in this context the analogue of the order polynomial is called the \emph{zeta polynomial} and $\alpha$ and $\beta$ are called the \emph{flag $f$-vector} and \emph{flag $h$-vector} (or \emph{rank-selected M\"obius invariant}). An account of their basic properties can be found in \cite[Sections 3.12 and 3.13]{ec1}. Stanley studied aspects of these concepts in \cite{MR0309815}, \cite{MR0354472}, \cite{MR0354473}, and \cite{MR0409206}. Without further conditions the numbers $\beta(S)$ need not be nonnegative, but if the edges of the Hasse diagram of the poset can be labeled with integers so that whenever $s\le t$ there is a unique saturated chain from $s$ to $t$ with nondecreasing labels (an ``R-labeling") then $\beta(S)$ has a combinatorial interpretation completely analogous to the $P$-partition case. (See \cite[Section 3.14]{ec1}.)
\emph{Cohen-Macaulay posets} \cite{MR661307,MR570784} are another important class of posets for which $\beta(S)$ can be shown to be nonnegative by algebraic or topological methods, but for which $\beta(S)$ does not in general have a combinatorial interpretation.
\subsection{Counting lattice points}
If $\mathscr{P}$ is a lattice polytope in $\mathbb{R}^p$ (the convex hull of a set of lattice points) then the number of lattice points in $m\mathscr{P}$ ($\mathscr{P}$ dilated by a factor of $m$) is a polynomial in $m$, called the \emph{Ehrhart polynomial} of $\mathscr{P}$. If $(P,\omega)$ is naturally labeled then $\Omega(P,\omega; m+1)$ is the Ehrhart polynomial of the \emph{order polytope} of $P$, which is the set of all $(x_1,\dots, x_p)$ in $\mathbb{R}^p$ satisfying $x_i\ge x_j$ whenever $i\prec j$, and $0\le x_i\le 1$ for all $i$. Some of the properties of order polynomials generalize to Ehrhart polynomials, including the reciprocity theorem for order polynomials, equation \eqref{e-rec-ord}. Stanley has made important contributions to the theory of Ehrhart polynomials and their generalizations, as described in Matthias Beck's paper \cite{beck} in this volume; see also \cite[sections 4.5--4.6]{ec1} and \cite{MR2911976}.
In \cite{MR824105}, Stanley defines the \emph{chain polytope} of the poset $P$ to be the set of points $(x_1,\dots, x_p)$ in $\mathbb{R}^p$ satisfying
$x_i\ge0$ for all $i$ and $x_{i_1}+\cdots +x_{i_k}\le 1$ for every chain $i_1\prec\cdots\prec i_k$ in $P$, and proves that this polytope has the same Ehrhart polynomial as the order polytope of $P$.
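This equality of Ehrhart polynomials can be checked by brute-force lattice-point counting on small posets; the sketch below (ours; the poset is a four-element `diamond') does so for small dilations:
\begin{verbatim}
from itertools import product

# Poset on {0,1,2,3} with covers 0<1, 0<2, 1<3, 2<3 (a "diamond").
relations = [(0, 1), (0, 2), (1, 3), (2, 3)]
p = 4
closure = set(relations) | {(0, 3)}     # transitive closure, hard-coded

chains = []                              # all nonempty chains of the poset
for mask in range(1, 2 ** p):
    elems = [i for i in range(p) if mask >> i & 1]
    if all((a, b) in closure or (b, a) in closure
           for a in elems for b in elems if a != b):
        chains.append(elems)

def order_points(m):
    """Lattice points of m * (order polytope): 0<=x_i<=m, x_i>=x_j if i<j in P."""
    return sum(1 for x in product(range(m + 1), repeat=p)
               if all(x[a] >= x[b] for a, b in relations))

def chain_points(m):
    """Lattice points of m * (chain polytope): x_i>=0, sum over each chain <= m."""
    return sum(1 for x in product(range(m + 1), repeat=p)
               if all(sum(x[i] for i in c) <= m for c in chains))

for m in range(5):
    assert order_points(m) == chain_points(m)
print("order and chain polytopes of the diamond agree for m <= 4")
\end{verbatim}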
\subsection{Root systems}
Gessel \cite[p.~300]{MR777705} suggested that the inequalities that define $P$-partitions could be generalized to the inequalities determined by the reflecting hyperplanes of a Coxeter group.
Victor Reiner \cite{MR1101971,reiner-signed} observed that the definition of a $P$-partition can be restated in terms of the \emph{root system} of type $A_{p-1}$, which is the set of vectors in $\mathbb{R}^p$ of the form $e_i-e_j$, with $i\ne j$, where $e_i$ is the $i$th standard basis vector in $\mathbb{R}^p$. The \emph{positive roots} are the roots $e_i -e_j$ where $i<j$ and the \emph{negative roots} are the negatives of these.
Given a set $R$ of roots, we can consider the set of vectors $\sigma=(\sigma_1,\dots, \sigma_p)\in \mathbb{N}^p$ satisfying
\begin{align*}
\langle \alpha,\sigma\rangle&\ge 0, \text{ for all positive roots $\alpha$ in $R$},\\
\langle \alpha,\sigma\rangle& > 0, \text{ for all negative roots $\alpha$ in $R$},
\end{align*}
where $\langle\, \cdot\,, \cdot\,\rangle$ is the usual inner product on $\mathbb{R}^p$.
Then if this system of inequalities is consistent, the set of solutions will be the set of $(P,\omega)$-partitions for some partial order on $P=[p]$ with the labeling $\omega(i) = i$ for all $i\in [p]$. For example, the poset of Figure \ref{f-1} corresponds to the set of roots $\{e_2-e_1, e_2-e_3\}$; the negative root $e_2-e_1$ gives the inequality $\sigma_2-\sigma_1>0$ and the (positive) root $e_2-e_3$ gives the inequality $\sigma_2-\sigma_3\ge 0$.
Reiner \cite{MR1248063,reiner-signed} generalized this idea to arbitrary root systems, which are sets of vectors in $\mathbb{R}^p$ satisfying certain properties; each root system consists of a set of positive roots and their negatives, the negative roots. To each root system is associated its \emph{Weyl group}, which is the finite group of isometries of $\mathbb{R}^p$ generated by the reflections in the roots. (So for the root system of type $A_{p-1}$, the Weyl group is the symmetric group ${\mathfrak S}_p$.) Reiner shows that there is a generalization of the fundamental theorem of $P$-partitions to root systems, in which the role of the symmetric group is replaced by the corresponding Weyl group.
He then studies in particular the case of the root system of type $B_p$, in which the positive roots are $e_i$ for $1\le i\le p$ and $e_i+e_j$ and $e_i-e_j$ for $1\le i<j\le p$. Thus, for example, if we take the roots $e_1$, $-e_2-e_3$, and $e_3-e_1$, then the corresponding inequalities are $\sigma_1\ge0$, $-\sigma_2-\sigma_3>0$ and $\sigma_3 -\sigma_1>0$. (Unlike the case of ordinary $P$-partitions, here we allow the $\sigma_i$ to take on arbitrary integer values.) The elements of the Weyl group of type $B_p$, the hyperoctahedral group, may be viewed as \emph{signed permutations}, which are permutations $\pi$ of the set
$\pm[p]=\{-p,\dots, -1, 1, \dots, p\}$, such that $\pi(-i) = -\pi(i)$ for all $i\in \pm[p]$; a signed permutation is determined by its values on $[p]$. The general definition of descent set for a Weyl group reduces in this case to
\begin{equation*}
\mathscr{S}(\pi) = \{\,i \in [p] \mid \ \pi(i)>\pi(i+1)\,\},
\end{equation*}
where we take $\pi(p+1) =p+1$ and use the order\footnote{Different choices for the roots would allow similar results with the usual order on $\pm[p]$.}
$1<2<\cdots < p+1 <-p< \cdots <-2<-1$. Reiner then obtains for signed permutations analogues of all the basic $P$-partition results for ordinary permutations.
Chak-On Chow \cite[Section 2]{MR2717011} studied ``$P$-partitions of type $B$" with a closely related, but somewhat different approach, using ``type $B$ posets" which are partial orders $\prec$ on the set $\{-p,\dots, -1, 0, 1,\dots,p\}$ with the property that $i\prec j$ if and only if $-j \prec -i$.
Further work on root system analogues of $P$-partitions was undertaken by John Stembridge in his study of Coxeter cones \cite{MR2388082}.
\subsection{Lexicographic inequalities}
MacMahon, in \cite{multi} and other papers (see \cite[Chapter 8]{MR514405}), studied \emph{multipartite partitions} which are expressions of ``multipartite numbers" as sums of multipartite numbers, where a multipartite number is a tuple of nonnegative integers. The number of partitions of the multipartite number $(n_1,\dots, n_s)$ with $p$ parts, where $(0,\dots,0)$ is allowed as a part, is the coefficient of $q_1^{n_1}\cdots q_{s}^{n_s}$ in $\phi_p(q_1,\dots, q_s)$, where
\begin{equation*}
\sum_{p=0}^\infty \phi_p(q_1,\dots, q_s)z^p = \prod_{k_1,\dots, k_s=0}^\infty \frac{1}{1-q_1^{k_1}\cdots q_s^{k_s}z}.
\end{equation*}
It is not hard to show that there exist polynomials $\Lambda_p(q_1,\dots, q_s)$ such that
\begin{equation*}
\phi_p(q_1,\dots, q_s) = \frac{\Lambda_p(q_1,\dots, q_s)}{(q_1;q_1)_p\cdots (q_s;q_s)_p}
\end{equation*}
where $(q;q)_s=(1-q)\cdots (1-q^s)$. E. M. Wright \cite{MR0084012} conjectured that the polynomials $\Lambda_p(q_1,\dots, q_s)$ have nonnegative coefficients, and his conjecture was proved by Basil Gordon \cite{MR0157959} in 1963. We illustrate Gordon's approach with the simplest example, when $p=s=2$. An unordered pair $(a_1,b_1), (a_2,b_2)$ of bipartite numbers may be arranged in decreasing lexicographic order, so counting such pairs is equivalent to counting solutions of the lexicographic inequality $(a_1,b_1)\ge (a_2, b_2)$. The set of solutions of this lexicographic inequality is the disjoint union of the solutions of
\begin{gather*}
a_1\ge a_2, \ b_1\ge b_2\\
\shortintertext{and}
a_1>a_2, \ b_1 < b_2.
\end{gather*}
The solutions of the first inequalities contribute $1/(q_1;q_1)_2(q_2;q_2)_2$ to $\Lambda_2(q_1,q_2)$ and the solutions of the second inequalities contribute $q_1q_2/(q_1;q_1)_2(q_2;q_2)_2$, so $\Lambda_2(q_1,q_2) = 1+q_1q_2$.
Gordon proved Wright's conjecture by showing that the lexicographic inequalities specifying the terms in $\phi_p(q_1,\dots, q_s)$ can in general be decomposed in a similar way. D. P. Roselle \cite{MR0342406}, using essentially the same approach, gave a simple combinatorial interpretation to the coefficient of $q_1^{i_1}q_2^{i_2}$ in $\Lambda_p(q_1,q_2)$; it is the number of permutations $\pi$ of $[p]$ such that $\maj(\pi) = i_1$ and $\maj(\pi^{-1})=i_2$. Garsia and Gessel \cite{MR532836} showed that, more generally, the coefficient of $q_1^{i_1}\cdots q_s^{i_s}$ in $\Lambda_p(q_1,\dots,q_s)$ is the number of $s$-tuples $(\pi_1,\dots, \pi_s)$ of permutations of $[p]$ whose product $\pi_1\cdots \pi_s$ is the identity permutation such that $\maj(\pi_j)=i_j$ for each $j$. They also showed that by considering multipartite partitions with bounded part sizes, one can count these $s$-tuples of permutations by major index and number of descents. Further results along these lines have been found by a number of authors \cite{MR2780854, MR2195428, MR2196519, moynihan,MR2181371,MR1101971,MR1248063}.
Gessel \cite{MR777705} studied ``multipartite $P$-partitions" in which the inequalities defining $(P,\omega)$-partitions are applied to multipartite numbers ordered lexicographically; his results enumerate $s$-tuples of permutations whose product is in $\mathscr{L}(P,\omega)$ by their descent sets.
Another application of lexicographic inequalities related to $P$-partitions was given by Gessel and Reu\-ten\-auer \cite{MR1245159}. A \emph{Lyndon word} is a sequence of nonnegative integers that is lexicographically strictly less than all of its cyclic permutations. Thus the word $(a_1, a_2, a_3)$ is a Lyndon word if and only if $(a_1, a_2, a_3) < (a_2, a_3, a_1)$ and $(a_1, a_2, a_3)<(a_3, a_1, a_2)$. The set of solutions of these lexicographic inequalities is the disjoint union of the solutions of
\begin{gather*}
a_1\le a_2 < a_3\\
\shortintertext{and}
a_1 < a_3 \le a_2.
\end{gather*}
Gessel and Reutenauer showed that a similar decomposition exists for Lyndon words of any length, and more generally, for multisets of Lyndon words, and this allowed them to count permutations of a given cycle type by their descent sets. A generalization to hyperoctahedral groups was given by St\'ephane Poirier \cite{MR1603753}.
\subsection{Quasi-symmetric functions}
\label{s-qs}
In his memoir \cite{ordered}, Stanley considered the generating function for $(P,\omega)$-partitions
\begin{equation*}
F(P,\omega)=\sum_{\sigma\in \mathscr{A}(P,\omega)}x_1^{\sigma(1)}x_2^{\sigma(2)}\cdots x_p^{\sigma(p)}
\end{equation*}
in which different $(P,\omega)$-partitions contribute different terms. The theory of Schur functions (see, for example, \cite[Section 7.10]{ec2}) suggests looking at the less refined generating function
\begin{equation}
\label{e-qsgf}
\Gamma(P,\omega) = \sum_{\sigma\in \mathscr{A}(P,\omega)}x_{\sigma(1)}x_{\sigma(2)}\cdots x_{\sigma(p)}
\end{equation}
discussed briefly by Stanley \cite[p.~81]{ordered} and in more detail by
Gessel\footnote{Gessel took $P$-partitions to be order-preserving, rather than order-reversing, so his $\Gamma(P,\omega)$ is slightly different from that defined here.} \cite{MR777705}. (See also \cite[Section 7.19]{ec2}.)
By the fundamental theorem of $P$-partitions,
\begin{equation*}
\Gamma(P,\omega) = \sum_{\pi\in \mathscr{L}(P,\omega)} \Gamma(\pi,\omega).
\end{equation*}
The \emph{fundamental quasi-symmetric functions}, denoted $F_\alpha$ or $L_\alpha$, are indexed by compositions (sequences of positive integers) and defined
as follows: if $\alpha=(\alpha_1,\dots,\alpha_k)$ is a composition of $p$ then
\begin{equation*}
F_\alpha = \sum x_{i_1} x_{i_2}\cdots x_{i_p}
\end{equation*}
where the sum is over all $i_1\le i_2\le\cdots\le i_p$ satisfying $i_j < i_{j+1}$ if $j\in \{\alpha_1, \alpha_1+\alpha_2,\dots, \alpha_1+\cdots+\alpha_{k-1}\}$. It is not hard to show that the $F_\alpha$ are linearly independent and generate a ring, called the \emph{ring of quasi-symmetric functions}, that contains the ring of symmetric functions \cite[Chapter 7]{ec2}. Thus the information contained in $\Gamma(P,\omega)$ is precisely the multiset of descent sets of the
permutations in $\mathscr{L}(P,\omega)$; i.e., the numbers $\beta(P,\omega;S)$. The quasi-symmetric generating function \eqref{e-qsgf} extends to an encoding of the flag $h$-vector of a graded poset, as studied by Richard Ehrenborg \cite{MR1383883}.
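For computations it is convenient to work with finitely many variables; the sketch below (ours) builds $F_\alpha$ in $x_1,\dots,x_5$ directly from the defining inequalities and checks the quasi-symmetry property, namely that the coefficient of $x_{i_1}^{a_1}\cdots x_{i_k}^{a_k}$ depends only on the exponent sequence $(a_1,\dots,a_k)$ and not on $i_1<\cdots<i_k$:
\begin{verbatim}
from itertools import combinations
from collections import Counter

def fundamental(alpha, n):
    """F_alpha in x_1,...,x_n as a Counter {exponent vector: coefficient}."""
    p = sum(alpha)
    strict = set()                       # positions j where i_j < i_{j+1} is forced
    partial = 0
    for a in alpha[:-1]:
        partial += a
        strict.add(partial)
    poly = Counter()
    def extend(seq):
        if len(seq) == p:
            exponents = [0] * n
            for i in seq:
                exponents[i - 1] += 1
            poly[tuple(exponents)] += 1
            return
        lo = seq[-1] if seq else 1
        if seq and len(seq) in strict:
            lo = seq[-1] + 1
        for i in range(lo, n + 1):
            extend(seq + [i])
    extend([])
    return poly

F = fundamental((2, 1), 5)
for exps in [(3,), (2, 1), (1, 2), (1, 1, 1)]:
    coeffs = set()
    for supp in combinations(range(5), len(exps)):
        vec = [0] * 5
        for pos, e in zip(supp, exps):
            vec[pos] = e
        coeffs.add(F[tuple(vec)])
    assert len(coeffs) == 1              # same coefficient for every support
print("F_(2,1) in five variables is quasi-symmetric")
\end{verbatim}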
The theory of quasi-symmetric functions has proven useful in a number of enumeration problems. For example, Stanley \cite{MR782057} used them to define what are now called ``Stanley symmetric functions" in the study of reduced decompositions in symmetric groups.
There is a comultiplication on the ring of quasi-symmetric functions that makes it into a Hopf algebra (and an ``internal" comultiplication that makes it a bialgebra). The dual Hopf algebra is the algebra of \emph{non-commutative symmetric functions} that has been studied extensively by Jean-Yves Thibon and others in a series of papers beginning with
\cite{MR1327096}; see also Malvenuto and Reutenauer \cite{MR1358493}. Type $B$ quasi-symmetric functions and noncommutative symmetric functions were studied by Chow \cite{MR2717011}. Another related algebra with enumerative applications is the Malvenuto-Reutenauer algebra \cite{MR2103213,malvenuto,MR1358493}.
A different encoding of the flag $h$-vector of a poset is the \emph{$\mathbf{ab}$-index}, which is especially useful in studying Eulerian posets \cite{MR1651249,MR1283084}.
\subsection{Enriched $P$-partitions}
John Stembridge \cite{enriched} introduced a generalization of $(P,\omega)$-partitions that interpolates between $(P,\omega)$-partitions and $(P,\bar\omega)$-partitions. We introduce the following total ordering on the set $\mathbb{P}'$ of nonzero integers:
\begin{equation*}
-1< +1 < -2 < +2 < -3 < +3<\cdots;
\end{equation*}
for $k\in \mathbb{P}'$ the notations $k>0$ and $|k|$ retain their usual meanings.
Then an enriched $(P,\omega)$-partition is a map $\sigma: P\to \mathbb{P}'$ such that
for all $X\prec Y$ in $P$ we have
\footnote{Stembridge defined enriched $P$-partitions to be order-preserving; for consistency we define them here to be order-reversing.}
\begin{enumerate}
\item[(i)] $\sigma(X)\ge\sigma(Y)$
\item[(ii)] If $\sigma(X) =\sigma(Y)>0$ then $\omega(X) <\omega (Y)$
\item[(iii)] If $\sigma(X) = \sigma(Y) <0$ then $\omega(X) > \omega(Y)$.
\end{enumerate}
Note that if the image of $\sigma$ lies in $\{+1,+2,\dots\}$ then (iii) is vacuous and (ii) is equivalent to the condition that if $\omega(X)>\omega(Y)$ then $\sigma(X)>\sigma(Y)$, so $\sigma$ is an ordinary $(P,\omega)$-partition, and if the image of $\sigma$ lies in $\{-1, -2, \dots \}$ then (ii) is vacuous and (iii) is equivalent to the condition that if $\omega(X)<\omega(Y)$ then $\sigma(X) > \sigma(Y)$, so $\sigma$ is essentially a $(P,\bar\omega)$-partition. Stembridge proves a version of the fundamental theorem for enriched $(P,\omega)$-partitions, and defines the quasi-symmetric generating function
\begin{equation*}
\Delta(P,\omega) = \sum_\sigma \prod_{X\in P}x_{|\sigma(X)|},
\end{equation*}
so by the fundamental theorem,
\begin{equation*}
\Delta(P,\omega) = \sum_{\pi\in \mathscr{L}(P,\omega)} \Delta(\pi,\omega).
\end{equation*}
It is a remarkable fact that $\Delta(\pi,\omega)$ depends only on the \emph{peak set} of $\pi$, that is, the set $\{\, i \mid \pi(i-1) < \pi(i) > \pi(i+1)\,\}$. The distinct $\Delta(\pi,\omega)$ form a basis for a subalgebra of the algebra of quasi-symmetric functions.
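For reference, a short Python sketch (ours) computing the peak set of a permutation:
\begin{verbatim}
def peak_set(pi):
    """Peaks of a permutation: positions i with pi(i-1) < pi(i) > pi(i+1)."""
    return {i for i in range(2, len(pi))
            if pi[i - 2] < pi[i - 1] > pi[i]}

print(peak_set((1, 3, 2, 5, 4)))   # {2, 4}
\end{verbatim}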
Stembridge also discusses the analogue of the order polynomial for enriched $P$-partitions and studies cases in which $\Delta(P,\omega)$ is symmetric, which are related to Schur's $Q$-functions.
Kathryn Nyman \cite{MR2001673} used enriched $P$-partitions to prove the existence of the ``peak algebra" of the symmetric group. T.~Kyle Petersen \cite{MR2296309} studied type $B$ enriched $P$-partitions and applied them to type $B$ peak algebras.
Enriched $P$-partitions have also been applied to the study of chains in Eulerian posets \cite{MR1982883}.
\subsection{Additional applications and developments}
SeungKyung Park \cite{park} studied naturally labeled posets $P$ whose order polynomial $\Omega_P(n)$ is the Stirling number of the second kind $S(k+n,n)$, thereby giving a new combinatorial interpretation and $q$-analogue to the polynomials $B_k(t)$ defined by
\begin{equation*}
\sum_{n=0}^\infty S(k+n,n) t^n = \frac{B_k(t)}{(1-t)^{2k+1}}.
\end{equation*}
Combinatorial interpretations for these polynomials had been given earlier by John Riordan \cite{MR0429582} and by Gessel and Stanley \cite{MR0462961}.
Sangwook Ree \cite{sree} applied $P$-partitions to count restricted lattice paths in the plane by left turns, obtaining generalizations of $q$-Narayana numbers.
Petter Br\"and\'en \gammaite{MR2047757}, taking a similar approach, gave several interpretations to $q$-Narayana numbers in counting Dyck paths.
Joseph Neggers \cite{MR0551484} conjectured in 1978 that for any naturally labeled poset $(P,\omega)$ the polynomial
$\sum_{\pi\in \mathscr{L}(P,\omega)} t^{\des(\pi)}$ has all real roots. In 1986, Stanley conjectured that this holds more generally for any labeled poset $(P,\omega)$ (see
\cite{MR963833}). Stanley's conjecture was disproved by Petter Br\"and\'en \cite{MR2119757} in 2004 and Neggers's conjecture was disproved by John Stembridge
\cite{MR2262844}
in 2007.
Peter McNamara and Christophe Reutenauer \cite{MR2195427} used $P$-partitions to study idempotents in the group algebra of the symmetric group.
McNamara and Ryan Ward \cite{MR3245895} studied the question of when two different labeled posets have the same generating function \eqref{e-qsgf}.
Valentin F\'eray and Victor Reiner \cite{MR2913529} explored connections between $P$-par\-ti\-tions and commutative algebra, and in particular described a class of naturally labeled posets for which the sum $\sum_{\pi\in \mathscr{L}(P)}q^{\maj(\pi)}$ factors nicely.
Lo\"\i c Foissy and Claudia Malvenuto \gammaite{malv} reinterpreted the fundamental theorem of $P$-partitions as an injective Hopf algebra morphism and generalized it to pre-orders, leading to a Hopf algebra on finite topologies.
\subsection*{Acknowledgments} I would like to thank
Christian Krattenthaler,
Claudia Malvenuto,
T. Kyle Petersen,
Victor Reiner, and
John Stembridge
for their helpful suggestions, and Richard Stanley for his development of the theory of $P$-partitions.
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\begin{thebibliography}{10}
\bibitem{MR2103213}
Marcelo Aguiar and Frank Sottile, \emph{Structure of the
{M}alvenuto-{R}eutenauer {H}opf algebra of permutations}, Adv. Math.
\textbf{191} (2005), no.~2, 225--275.
\bibitem{MR0217029}
A.~O.~L. Atkin, P.~Bratley, I.~G. Macdonald, and J.~K.~S. McKay, \emph{Some
computations for {$m$}-dimensional partitions}, Proc. Cambridge Philos. Soc.
\textbf{63} (1967), 1097--1100.
\bibitem{beck}
Matthias Beck, \emph{Stanley's major contributions to {E}hrhart theory},
arXiv:1407.0255 [math.CO].
\bibitem{MR2911976}
Matthias Beck, \emph{Combinatorial reciprocity theorems}, Jahresber. Dtsch.
Math.-Ver. \textbf{114} (2012), no.~1, 3--22.
\bibitem{MR2780854}
Riccardo Biagioli and Jiang Zeng, \emph{Enumerating wreath products via
{G}arsia-{G}essel bijections}, European J. Combin. \textbf{32} (2011), no.~4,
538--553.
\bibitem{MR1982883}
Louis~J. Billera, Samuel~K. Hsiao, and Stephanie van Willigenburg, \emph{Peak
quasisymmetric functions and {E}ulerian enumeration}, Adv. Math. \textbf{176}
(2003), no.~2, 248--276.
\bibitem{MR661307}
A.~Bj{\"o}rner, A.~M. Garsia, and R.~P. Stanley, \emph{An introduction to
{C}ohen-{M}acaulay partially ordered sets}, Ordered sets ({B}anff, {A}lta.,
1981), NATO Adv. Study Inst. Ser. C: Math. Phys. Sci., vol.~83, Reidel,
Dordrecht-Boston, Mass., 1982, pp.~583--615.
\bibitem{MR570784}
Anders Bj{\"o}rner, \emph{Shellable and {C}ohen-{M}acaulay partially ordered
sets}, Trans. Amer. Math. Soc. \textbf{260} (1980), no.~1, 159--183.
\bibitem{MR2119757}
Petter Br{\"a}nd{\'e}n, \emph{Counterexamples to the {N}eggers-{S}tanley
conjecture}, Electron. Res. Announc. Amer. Math. Soc. \textbf{10} (2004),
155--158 (electronic).
\bibitem{MR2047757}
\bysame, \emph{{$q$}-{N}arayana numbers and the flag {$h$}-vector of {$J(\mathbf
2\times\mathbf n)$}}, Discrete Math. \textbf{281} (2004), no.~1-3, 67--81.
\bibitem{MR963833}
Francesco Brenti, \emph{Unimodal, log-concave and {P}\'olya frequency sequences
in combinatorics}, Mem. Amer. Math. Soc. \textbf{81} (1989), no.~413,
viii+106.
\bibitem{malv}
Lo\"\i c~Foissy and Claudia Malvenuto, \emph{The {H}opf algebra of finite
topologies and {T}-partitions}, preprint, 2014, \url{http://arxiv.org/abs/1407.0476v2}.
\bibitem{MR0060538}
L.~Carlitz, \emph{{$q$}-{B}ernoulli and {E}ulerian numbers}, Trans. Amer. Math.
Soc. \textbf{76} (1954), 332--350.
\bibitem{MR0366683}
\bysame, \emph{A combinatorial property of {$q$}-{E}ulerian numbers}, Amer.
Math. Monthly \textbf{82} (1975), 51--54.
\bibitem{MR2717011}
Chak-On Chow, \emph{Noncommutative {S}ymmetric {F}unctions of {T}ype {B}},
Ph.D. thesis, Massachusetts Institute of Technology, 2001.
\bibitem{MR1383883}
Richard Ehrenborg, \emph{On posets and {H}opf algebras}, Adv. Math.
\textbf{119} (1996), no.~1, 1--25.
\bibitem{MR1651249}
Richard Ehrenborg and Margaret Readdy, \emph{Coproducts and the {$cd$}-index},
J. Algebraic Combin. \textbf{8} (1998), no.~3, 273--299.
\bibitem{MR2913529}
Valentin F{\'e}ray and Victor Reiner, \emph{{$P$}-partitions revisited}, J.
Commut. Algebra \textbf{4} (2012), no.~1, 101--152.
\bibitem{MR519777}
Dominique Foata, \emph{Distributions eul\'eriennes et mahoniennes sur le groupe
des permutations}, Higher combinatorics ({P}roc. {NATO} {A}dvanced {S}tudy
{I}nst., {B}erlin, 1976), NATO Adv. Study Inst. Ser., Ser. C: Math. Phys.
Sci., vol.~31, Reidel, Dordrecht-Boston, Mass., 1977, With a comment by
Richard P. Stanley, pp.~27--49.
\bibitem{MR2195428}
Dominique Foata and Guo-Niu Han, \emph{Signed words and permutations. {II}.
{T}he {E}uler-{M}ahonian polynomials}, Electron. J. Combin. \textbf{11}
(2004/06), no.~2, Research Paper 22, 18.
\bibitem{MR2196519}
\bysame, \emph{Signed words and permutations. {III}. {T}he {M}ac{M}ahon
{V}erfahren}, S\'em. Lothar. Combin. \textbf{54} (2005/07), Art. B54a, 20.
\bibitem{MR506852}
Dominique Foata and Marcel-Paul Sch{\"u}tzenberger, \emph{Major index and
inversion number of permutations}, Math. Nachr. \textbf{83} (1978), 143--159.
\bibitem{MR532836}
A.~M. Garsia and I.~Gessel, \emph{Permutation statistics and partitions}, Adv.
in Math. \textbf{31} (1979), no.~3, 288--305.
\bibitem{MR1327096}
Israel~M. Gelfand, Daniel Krob, Alain Lascoux, Bernard Leclerc, Vladimir~S.
Retakh, and Jean-Yves Thibon, \emph{Noncommutative symmetric functions}, Adv.
Math. \textbf{112} (1995), no.~2, 218--348.
\bibitem{MR0462961}
Ira Gessel and Richard~P. Stanley, \emph{Stirling polynomials}, J.
Combinatorial Theory Ser. A \textbf{24} (1978), no.~1, 24--33.
\bibitem{MR777705}
Ira~M. Gessel, \emph{Multipartite {$P$}-partitions and inner products of skew
{S}chur functions}, Combinatorics and algebra ({B}oulder, {C}olo., 1983),
Contemp. Math., vol.~34, Amer. Math. Soc., Providence, RI, 1984,
pp.~289--317.
\bibitem{MR1245159}
Ira~M. Gessel and Christophe Reutenauer, \emph{Counting permutations with given
cycle structure and descent set}, J. Combin. Theory Ser. A \textbf{64}
(1993), no.~2, 189--215.
\bibitem{MR0157959}
B.~Gordon, \emph{Two theorems on multipartite partitions}, J. London Math. Soc.
\textbf{38} (1963), 459--464.
\bibitem{MR773053}
I.~P. Goulden, \emph{A bijective proof of {S}tanley's shuffling theorem},
Trans. Amer. Math. Soc. \textbf{288} (1985), no.~1, 147--160.
\bibitem{MR0277401}
Donald~E. Knuth, \emph{A note on solid partitions}, Math. Comp. \textbf{24}
(1970), 955--961.
\bibitem{kreweras1965}
G.~Kreweras, \emph{Sur une classe de probl\`emes de d\'enombrement li\'es au
treillis des partitions des entiers}, Cahier de B.U.R.O. (1965), no.~6,
15--67.
\bibitem{MR0200180}
\bysame, \emph{Sur une extension du probl\`eme dit ``de {S}imon {N}ewcomb''},
C. R. Acad. Sci. Paris S\'er. A-B \textbf{263} (1966), A43--A45.
\bibitem{kreweras1967}
\bysame, \emph{Traitement simultan\'e du ``{P}robl{\`e}me de {Y}oung'' et du
``{P}robl\`eme de {S}imon {N}ewcomb"}, Cahiers de B.U.R.O. (1967), no.~10,
23--31.
\bibitem{MR623036}
\bysame, \emph{Polyn\^omes de {S}tanley et extensions lin\'eaires d'un ordre
partiel}, Math. Sci. Humaines (1981), no.~73, 97--116.
\bibitem{macmahon1908second}
P.~A. MacMahon, \emph{Second memoir on the compositions of numbers},
Philosophical Transactions of the Royal Society of London. Series A
\textbf{207} (1908), 65--134, Reprinted in \cite[pp.~687--756]{MR514405}.
\bibitem{macmahon1911memoir}
\bysame, \emph{Memoir on the theory of the partitions of numbers. {P}art {V}.
{P}artitions in two-dimensional space}, Proceedings of the Royal Society of
London. Series A \textbf{85} (1911), no.~578, 304--305, Reprinted in
\cite[pp.~1328--1363]{MR514405}.
\bibitem{macmahon1913indices}
\bysame, \emph{The indices of permutations and the derivation therefrom of
functions of a single variable associated with the permutations of any
assemblage of objects}, American Journal of Mathematics (1913), 281--322,
Reprinted in \cite[pp.~508--549]{MR514405}.
\bibitem{multi}
\bysame, \emph{Seventh memoir on the partition of numbers. {A} detailed study
of the enumeration of the partitions of multipartite numbers}, Philosophical
Transactions of the Royal Society of London. Series A \textbf{217} (1917),
81--113.
\bibitem{MR0141605}
Percy~A. MacMahon, \emph{Combinatory {A}nalysis}, Two volumes (bound as one),
Chelsea Publishing Co., New York, 1960, Originally published in two volumes
by Cambridge University Press, 1915--1916.
\bibitem{MR514405}
Percy~Alexander MacMahon, \emph{Collected papers. {V}ol. {I}. {C}ombinatorics},
MIT Press, Cambridge, Mass.-London, 1978, Mathematicians of Our Time, 24.
Edited and with a preface by George E. Andrews. With an introduction by
Gian-Carlo Rota.
\bibitem{malvenuto}
Claudia Malvenuto, \emph{Produits et coproduits des fonctions
quasi-sym\'etriques et de l'alg\`ebre des descentes}, Ph.D. thesis,
Universit\'e du Qu\'ebec \`a Montr\'eal, 1994, Publications du LaCIM, 16.
\bibitem{MR1358493}
Claudia Malvenuto and Christophe Reutenauer, \emph{Duality between
quasi-symmetric functions and the {S}olomon descent algebra}, J. Algebra
\textbf{177} (1995), no.~3, 967--982.
\bibitem{MR2195427}
Peter McNamara and Christophe Reutenauer, \emph{{$P$}-partitions and a
multi-parameter {K}lyachko idempotent}, Electron. J. Combin. \textbf{11}
(2004/06), no.~2, Research Paper 21, 18 pp. (electronic).
\bibitem{MR3245895}
Peter R.~W. McNamara and Ryan~E. Ward, \emph{Equality of {$P$}-partition
generating functions}, Ann. Comb. \textbf{18} (2014), no.~3, 489--514.
\bibitem{moynihan}
Matthew Moynihan, \emph{The {F}lag {D}escent {A}lgebra and the {C}olored
{E}ulerian {D}escent {A}lgebra}, Ph.D. thesis, Brandeis University, 2012,
arXiv:1210.4122 [math.CO].
\bibitem{MR0551484}
Joseph Neggers, \emph{Representations of finite partially ordered sets}, J.
Combin. Inform. System Sci. \textbf{3} (1978), no.~3, 113--133.
\bibitem{MR2001673}
Kathryn~L. Nyman, \emph{The peak algebra of the symmetric group}, J. Algebraic
Combin. \textbf{17} (2003), no.~3, 309--322.
\bibitem{park}
SeungKyung Park, \emph{{$P$}-partitions and {$q$}-{S}tirling numbers}, J.
Combin. Theory Ser. A \textbf{68} (1994), no.~1, 33--52.
\bibitem{MR2181371}
T.~Kyle Petersen, \emph{Cyclic descents and {$P$}-partitions}, J. Algebraic
Combin. \textbf{22} (2005), no.~3, 343--375.
\bibitem{MR2296309}
\bysame, \emph{Enriched {$P$}-partitions and peak algebras}, Adv. Math.
\textbf{209} (2007), no.~2, 561--610.
\bibitem{MR1603753}
St{\'e}phane Poirier, \emph{Cycle type and descent set in wreath products},
Discrete Math. \textbf{180} (1998), no.~1-3, 315--343.
\bibitem{sree}
Sangwook Ree, \emph{Enumeration of {L}attice {P}aths and $p$-{P}artitions},
Ph.D. thesis, Brandeis University, 1994.
\bibitem{MR1101971}
Victor Reiner, \emph{Quotients of {C}oxeter complexes and {$P$}-partitions},
Mem. Amer. Math. Soc. \textbf{95} (1992), no.~460, vi+134.
\bibitem{MR1248063}
\bysame, \emph{Signed permutation statistics}, European J. Combin. \textbf{14}
(1993), no.~6, 553--567.
\bibitem{reiner-signed}
\bysame, \emph{Signed posets}, J. Combin. Theory Ser. A \textbf{62} (1993),
no.~2, 324--360.
\bibitem{MR0096594}
John Riordan, \emph{An {I}ntroduction to {C}ombinatorial {A}nalysis}, Wiley
Publications in Mathematical Statistics, John Wiley \& Sons, Inc., New York;
Chapman \& Hall, Ltd., London, 1958.
\bibitem{MR0429582}
\bysame, \emph{The blossoming of {S}chr\"oder's fourth problem}, Acta Math.
\textbf{137} (1976), no.~1--2, 1--16.
\bibitem{MR0342406}
D.~P. Roselle, \emph{Coefficients associated with the expansion of certain
products}, Proc. Amer. Math. Soc. \textbf{45} (1974), 144--150.
\bibitem{MR0343094}
Gian-Carlo Rota and D.~A. Smith, \emph{Fluctuation theory and {B}axter
algebras}, Symposia {M}athematica, {V}ol. {IX} ({C}onvegno di {C}alcolo delle
{P}robabilit\`a, {INDAM}, {R}ome, 1971), Academic Press, London, 1972,
pp.~179--201.
\bibitem{solomon}
Louis Solomon, \emph{A {M}ackey formula in the group ring of a {C}oxeter
group}, J. Algebra \textbf{41} (1976), no.~2, 255--264.
\bibitem{MR1713460}
Jonathan~D. Stadler, \emph{Stanley's shuffling theorem revisited}, J. Combin.
Theory Ser. A \textbf{88} (1999), no.~1, 176--187.
\bibitem{MR0309815}
R.~P. Stanley, \emph{Supersolvable lattices}, Algebra Universalis \textbf{2}
(1972), 197--217.
\betaibitem{chromatic}
Richard~P. Stanley, \emph{A chromatic-like polynomial for ordered sets}, Proc.
{S}econd {C}hapel {H}ill {C}onf. on {C}ombinatorial {M}athematics and its
{A}pplications ({U}niv. {N}orth {C}arolina, {C}hapel {H}ill, {N}.{C}., 1970),
Univ. North Carolina, Chapel Hill, N.C., 1970, pp.~421--427.
\betaibitem{stanley-thesis}
\betaysame, \emph{{O}rdered {S}tructures and {P}artitions}, Ph.D. thesis, Harvard
University, 1971.
\betaibitem{MR0354472}
\betaysame, \emph{Supersolvable semimodular lattices}, M\"obius algebras ({P}roc.
{C}onf., {U}niv. {W}aterloo, {W}aterloo, {O}nt., 1971), Univ. Waterloo,
Waterloo, Ont., 1971, pp.~80--142.
\betaibitem{ordered}
\betaysame, \emph{Ordered {S}tructures and {P}artitions}, American Mathematical
Society, Providence, R.I., 1972, Memoirs of the American Mathematical
Society, No. 119.
\betaibitem{MR0317988}
\betaysame, \emph{Acyclic orientations of graphs}, Discrete Math. \textbf{5}
(1973), 171--178.
\betaibitem{MR0354473}
\betaysame, \emph{Finite lattices and {J}ordan-{H}\"older sets}, Algebra
Universalis \textbf{4} (1974), 361--371.
\betaibitem{MR0409206}
\betaysame, \emph{Binomial posets, {M}\"obius inversion, and permutation
enumeration}, J. Combinatorial Theory Ser. A \textbf{20} (1976), no.~3,
336--356.
\betaibitem{MR782057}
\betaysame, \emph{On the number of reduced decompositions of elements of {C}oxeter
groups}, European J. Combin. \textbf{5} (1984), no.~4, 359--372.
\betaibitem{MR824105}
\betaysame, \emph{Two poset polytopes}, Discrete Comput. Geom. \textbf{1} (1986),
no.~1, 9--23.
\betaibitem{MR1283084}
\betaysame, \emph{Flag {$f$}-vectors and the {$cd$}-index}, Math. Z. \textbf{216}
(1994), no.~3, 483--499.
\betaibitem{ec2}
\betaysame, \emph{Enumerative combinatorics. {V}olume 2}, Cambridge Studies in
Advanced Mathematics, vol.~62, Cambridge University Press, Cambridge, 1999.
\betaibitem{ec1}
\betaysame, \emph{Enumerative combinatorics. {V}olume 1}, second ed., Cambridge
Studies in Advanced Mathematics, vol.~49, Cambridge University Press,
Cambridge, 2012.
\betaibitem{enriched}
John~R. Stembridge, \emph{Enriched {$P$}-partitions}, Trans. Amer. Math. Soc.
\textbf{349} (1997), no.~2, 763--788.
\betaibitem{MR2262844}
\betaysame, \emph{Counterexamples to the poset conjectures of {N}eggers,
{S}tanley, and {S}tembridge}, Trans. Amer. Math. Soc. \textbf{359} (2007),
no.~3, 1115--1128 (electronic).
\betaibitem{MR2388082}
\betaysame, \emph{Coxeter cones and their {$h$}-vectors}, Adv. Math. \textbf{217}
(2008), no.~5, 1935--1961.
\betaibitem{thomas}
Gl\^anffrwd~P. Thomas, \emph{Frames, {Y}oung tableaux, and {B}axter sequences},
Advances in Math. \textbf{26} (1977), no.~3, 275--289.
\betaibitem{thomas-thesis}
Gl\^anffrwd~Powell Thomas, \emph{Baxter {A}lgebras and {S}chur functions},
Ph.D. thesis, University College of Swansea, 1974.
\betaibitem{MR1612387}
Rudolf Winkel, \emph{Sequences of symmetric polynomials and combinatorial
properties of tableaux}, Adv. Math. \textbf{134} (1998), no.~1, 46--89.
\betaibitem{MR0084012}
E.~M. Wright, \emph{Partitions of multi-partite numbers}, Proc. Amer. Math.
Soc. \textbf{7} (1956), 880--890.
\end{thebibliography}
\end{document} |
\begin{document}
\title{On bi-exactness of discrete quantum groups}
\begin{abstract}
We extend Ozawa's notion of bi-exactness to discrete quantum groups, and then prove some structural properties of the associated von Neumann algebras. In particular, we prove that any non-amenable subfactor of a free quantum group von Neumann algebra which is the image of a faithful normal conditional expectation has no Cartan subalgebras.
\end{abstract}
\section{\bf Introduction}
A countable discrete group $\Gamma$ is said to be \textit{bi-exact} (or said to be in \textit{class} $\cal S$)
if it is exact and there exists a map $\mu\colon \Gamma \rightarrow \mathrm{Prob}(\Gamma)\subset \ell^1(\Gamma)$ such that $\limsup_{x\rightarrow\infty}\|\mu(sxt)-s\mu(x)\|_1=0$ for any $s,t\in \Gamma$.
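As an elementary illustration (included only for the reader's orientation and not used in the sequel), for a finitely generated free group $\Gamma$ one may take $\mu(x)$ to be the uniform probability measure on the vertices of the geodesic from $e$ to $x$ in the Cayley tree; for fixed $s,t\in\Gamma$, the geodesics supporting $\mu(sxt)$ and $s\mu(x)$ differ in at most a bounded number of vertices (depending only on $s$ and $t$), while their lengths grow like $|x|$, so that $\|\mu(sxt)-s\mu(x)\|_1\rightarrow 0$ as $x\rightarrow\infty$.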
This notion was introduced and studied by Ozawa in his book with Brown $\cite[\textrm{Section 15}]{BO08}$. In particular, he gave the following two different characterizations of bi-exactness (Lemma 15.1.4 and Proposition 15.2.3(2) in the book):
\begin{itemize}
\item[$\rm (i)$] The group $\Gamma$ is exact and the algebra $L\Gamma$ satisfies condition $\rm (AO)^+$ with the dense $C^*$-algebra $C^*_\lambda(\Gamma)$.
\item[$\rm (ii)$] There exists a unital $C^*$-subalgebra ${\cal B}\subset \ell^\infty (\Gamma)$ such that
\begin{itemize}
\item[$\bullet$] the algebra $\cal B$ contains $c_0(\Gamma)$ (so that we can define ${\cal B}_\infty:={\cal B}/c_0(\Gamma)$);
\item[$\bullet$] the left translation action on $\ell^\infty(\Gamma)$ induces an amenable action on ${\cal B}_{\infty}$, and the right one induces the trivial action on ${\cal B}_{\infty}$.
\end{itemize}
\end{itemize}
Here we recall that a von Neumann algebra $M\subset \mathbb{B}(H)$ with a standard representation satisfies \textit{condition} $\rm (AO)^+$ $\cite[\textrm{Definition 3.1.1}]{Is12_2}$ if there exist a unital $\sigma$-weakly dense locally reflexive $C^*$-subalgebra $A\subset M$ and a unital completely positive (u.c.p.\ for short) map $\theta\colon A\otimes A^{\rm op} \rightarrow \mathbb{B}(H)$ such that $\theta(a\otimes b^{\rm op})-ab^{\rm op}\in \mathbb{K}(H)$ for any $a,b \in A$.
Here $A^{\rm op}$ means the opposite algebra of $A$, which acts canonically on $H$ from the right.
These characterizations of bi-exactness have been used in different contexts. For example, condition (i) is very close to condition (AO) introduced in $\cite{Oz03}$. In particular Ozawa's celebrated theorem in the same paper says that von Neumann algebras of bi-exact groups are solid,
meaning that any relative commutant of a diffuse amenable subalgebra is still amenable.
Condition (ii) is used to show that hyperbolic groups are bi-exact. This follows from the fact that every hyperbolic group acts amenably on its Gromov boundary.
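Concretely (again only as an illustration), in the notation of condition (ii) one may take ${\cal B}:=C(\Gamma\cup\partial\Gamma)\subset\ell^\infty(\Gamma)$, where $\Gamma\cup\partial\Gamma$ is the Gromov compactification of a hyperbolic group $\Gamma$: then ${\cal B}_\infty\simeq C(\partial\Gamma)$, the left translation action on $\partial\Gamma$ is amenable, and right translations act trivially on ${\cal B}_\infty$ because $x$ and $xt$ stay within bounded distance of each other.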
The definition of bi-exactness itself is also interesting since it is closely related to an \textit{array}, which was introduced and studied in $\cite{CS11}$ and $\cite{CSU11}$.
In the present paper, we generalize bi-exactness to discrete quantum groups in two different ways, which correspond to conditions (i) and (ii).
This is not a difficult task since condition (i) does not rely on the structure of group $C^*$-algebras (this is a $C^*$-algebraic condition) and all objects in condition (ii) are easily defined for discrete quantum groups.
We then study some basic facts on these conditions. In particular, we prove that free products of free quantum groups or quantum automorphism groups are bi-exact, showing that a condition close to bi-exactness is closed under taking free products.
After these observations, we prove some structural properties of von Neumann algebras associated with bi-exact quantum groups.
We first give the following theorem which generalizes $\cite[\textrm{Theorem 3.1}]{PV12}$.
This is a natural generalization from discrete groups to discrete quantum groups of Kac type.
In the proof, we need only slight modifications since the original proof for groups does not rely on the structure of group von Neumann algebras very much.
Note that a special case of this theorem was already generalized in $\cite{Is12_2}$ for general $\rm II_1$ factors.
\begin{ThmA}\label{A}
Let $\mathbb{G}$ be a compact quantum group of Kac type whose dual acts on a tracial von Neumann algebra $B$ by a trace-preserving action. Write $M:=\hat{\mathbb{G}}\ltimes B$.
Let $q$ be a projection in $M$ and $A\subset qMq$ a von Neumann subalgebra which is amenable relative to $B$ in $M$. Assume that $\hat{\mathbb{G}}$ is weakly amenable and bi-exact. Then one of the following statements holds:
\begin{itemize}
\item[$\rm (i)$] We have $A\preceq_{M} B$.
\item[$\rm (ii)$] The algebra ${\cal N}_{qMq}(A)''$ is amenable relative to $B$ in $M$ (or equivalently, $L^2(qM)$ is left ${\cal N}_{qMq}(A)''$-amenable as a $qMq$-$B$-bimodule).
\end{itemize}
\end{ThmA}
The same arguments as in the group case give the following corollary. As mentioned above, free quantum groups and quantum automorphism groups provide examples to which the corollary applies (see also Theorem \ref{C}).
\begin{CorA}
Let $\mathbb{G}$ be a compact quantum group of Kac type. Assume that $\hat{\mathbb{G}}$ is weakly amenable and bi-exact.
\begin{itemize}
\item[$\rm (1)$] The algebra $L^\infty(\mathbb{G})$ is strongly solid. Moreover any non-amenable von Neumann subalgebra of $L^\infty(\mathbb{G})$ has no Cartan subalgebras.
\item[$\rm (2)$] If $\hat{\mathbb{G}}$ is non-amenable, then $L^\infty(\mathbb{G})\otimes B$ has no Cartan subalgebras for any finite von Neumann algebra $B$.
\end{itemize}
\end{CorA}
We also mention that if a non-amenable, weakly amenable and bi-exact discrete quantum group $\hat{\mathbb{G}}$ admits an action on a commutative von Neumann algebra $L^\infty(X,\mu)$,
so that $\hat{\mathbb{G}}\ltimes L^\infty(X,\mu)$ is a $\rm II_1$ factor and $L^\infty(X,\mu)$ is a Cartan subalgebra, then $L^\infty(X,\mu)$ is the unique Cartan subalgebra up to unitary conjugacy.
However, no such example is known.
Next we investigate similar results for discrete quantum groups of non-Kac type. For this, let us recall condition $\rm (AOC)^+$, which is an analogue of condition $\rm (AO)^+$ for continuous cores.
We say that a von Neumann algebra $M$ and a faithful normal state $\phi$ on $M$ satisfy \textit{condition} $\rm (AOC)^+$ $\cite[\textrm{Definition 3.2.1}]{Is12_2}$ if there exists a unital $\sigma$-weakly dense $C^*$-subalgebra $A\subset M$ such that
the modular action of $\phi$ gives a norm continuous action on $A$ (so that we can define $A \rtimes_{\rm r} \mathbb{R}$), $A \rtimes_{\rm r} \mathbb{R}$ is locally reflexive, and there exists a u.c.p.\ map
$\theta\colon (A\rtimes_{\rm r}\mathbb{R})\otimes (A\rtimes_{\rm r}\mathbb{R})^{\rm op} \rightarrow \mathbb{B}(L^2(M,\phi)\otimes L^2(\mathbb{R}))$
such that $\theta(a\otimes b^{\rm op})-ab^{\rm op}\in \mathbb{K}(L^2(M,\phi))\otimes\mathbb{B}(L^2(\mathbb{R}))$ for any $a,b\in A\rtimes_{\rm r}\mathbb{R}$.
In $\cite{Is12_2}$, we proved that von Neumann algebras of free quantum groups, together with their Haar states, satisfy condition $\rm (AOC)^+$, and then deduced that they have no Cartan subalgebras which are invariant under the modular action of the Haar state, provided they have the $\rm W^*$CBAP (and are non-amenable).
This was a partial answer to the absence of Cartan subalgebras in two senses: the $\rm W^*$CBAP of free quantum groups was not known, and we could prove only the absence of special Cartan subalgebras under the $\rm W^*$CBAP.
Very recently, De Commer, Freslon and Yamashita solved the first problem. In fact, they proved that free quantum groups and quantum automorphism groups are weakly amenable, and hence their von Neumann algebras have the $\rm W^*$CBAP $\cite{DFY13}$.
Thus only the second problem remained to be solved.
In this paper, we solve the second problem. In fact, the following theorem is an analogue of Theorem \ref{A} for continuous cores and it gives a complete answer to our Cartan problem (except for amenability).
In the theorem below, $C_h(L^\infty(\mathbb{G}))$ means the continuous core of $L^\infty(\mathbb{G})$ with respect to the Haar state $h$.
\begin{ThmA}\label{B}
Let $\mathbb{G}$ be a compact quantum group, $h$ a Haar state of $\mathbb{G}$ and $(B,\tau_B)$ a tracial von Neumann algebra. Denote $M:= C_h(L^\infty(\mathbb{G}))\otimes B$ and $\mathrm{Tr}_M:=\mathrm{Tr} \otimes\tau_B$, where $\mathrm{Tr}$ is the canonical trace on $C_h(L^\infty(\mathbb{G}))$.
Let $q$ be a $\mathrm{Tr}_M$-finite projection in $M$ and $A\subset qMq$ an amenable von Neumann subalgebra.
Assume that $L^\infty(\mathbb{G})$ has the $\rm W^*$CBAP and $(L^\infty(\mathbb{G}),h)$ satisfies condition $\rm (AOC)^+$ with the dense $C^*$-algebra $C_{\rm red}(\mathbb{G})$. Then one of the following statements holds:
\begin{itemize}
\item[$\rm (i)$] We have $A\preceq_{M} L\mathbb{R}\otimes B$.
\item[$\rm (ii)$] The algebra $L^2(qM)$ is left ${\cal N}_{qMq}(A)''$-amenable as a $qMq$-$B$-bimodule.
\end{itemize}
\end{ThmA}
\begin{CorA}
Let $\mathbb{G}$ be a compact quantum group.
Assume that $L^\infty(\mathbb{G})$ has the $ W^*$CBAP and $(L^\infty(\mathbb{G}),h)$ satisfies condition $\rm (AOC)^+$ with the dense $C^*$-subalgebra $C_{\rm red}(\mathbb{G})$.
\begin{itemize}
\item[$\rm (1)$]
Any non-amenable von Neumann subalgebra of $L^\infty(\mathbb{G})$ that is the image of a faithful normal conditional expectation has no Cartan subalgebras.
\item[$\rm (2)$] If $L^\infty(\mathbb{G})$ is non-amenable, then $L^\infty(\mathbb{G})\otimes B$ has no Cartan subalgebras for any finite von Neumann algebra $B$.
\end{itemize}
\end{CorA}
To provide further examples to which the corollaries above apply, we give the following theorem. Note that this theorem does not say anything about amenability of $\hat{\mathbb{G}}$ or $L^\infty(\mathbb{G})$.
\begin{ThmA}\label{C}
Let $\mathbb{G}$ be one of the following compact quantum groups.
\begin{itemize}
\item[$\rm (i)$] A co-amenable compact quantum group.
\item[\rm (ii)] The free unitary quantum group $A_u(F)$ for any $F\in\mathrm{GL}(n,\mathbb{C})$.
\item[\rm (iii)] The free orthogonal quantum group $A_o(F)$ for any $F\in\mathrm{GL}(n,\mathbb{C})$.
\item[\rm (iv)] The quantum automorphism group $A_{\rm aut}(B, \phi)$ for any finite dimensional $C^*$-algebra $B$ and any faithful state $\phi$ on $B$.
\item[\rm (v)] The dual of a bi-exact and weakly amenable discrete group $\Gamma$ with $\Lambda_{\rm cb}(\Gamma)=1$.
\item[\rm (vi)] The dual of a free product $\hat{\mathbb{G}}_1*\cdots *\hat{\mathbb{G}}_n$, where each $\mathbb{G}_i$ is as in $\rm (i)$--$\rm (v)$ above.
\end{itemize}
Then the dual $\hat{\mathbb{G}}$ is weakly amenable and bi-exact.
The associated von Neumann algebra $L^\infty(\mathbb{G})$ has the $W^*$CBAP and satisfies condition $\rm (AOC)^+$ with the Haar state and the dense $C^*$-subalgebra $C_{\rm red}(\mathbb{G})$.
\end{ThmA}
\noindent
Throughout the paper, we always assume that discrete groups are countable, quantum group $C^*$-algebras are separable, von Neumann algebras have separable predual, and Hilbert spaces are separable.
\noindent
{\bf Acknowledgement.} The author would like to thank Yuki Arano, Cyril Houdayer, Yasuyuki Kawahigashi, Narutaka Ozawa, Sven Raum and Stefaan Vaes for fruitful conversations.
He was supported by JSPS, Research Fellow of the Japan Society for the Promotion of Science and FMSP, Frontiers of Mathematical Sciences and Physics.
This research was carried out while he was visiting the Institut de Math\'ematiques de Jussieu. He gratefully acknowledges their kind hospitality.
\section{\bf Preliminaries}
\subsection{\bf Compact (and discrete) quantum groups}\label{CQG}
Let $\mathbb{G}$ be a compact quantum group. In this paper, we basically use the following notation.
We denote the comultiplication by $\Phi$, the Haar state by $h$, the set of equivalence classes of irreducible unitary corepresentations by $\mathrm{Irred}(\mathbb{G})$, and the right and left regular representations by $\rho$ and $\lambda$, respectively.
We regard $C_{\rm red}(\mathbb{G}):=\rho(C(\mathbb{G}))$ as our main object, and we frequently omit $\rho$ when we work with the dense Hopf $*$-algebra.
The canonical unitary $\mathbb{V}$ is defined as $\mathbb{V}=\bigoplus_{x\in\mathrm{Irred}(\mathbb{G})} u^x$.
The GNS representation of $h$ is written as $L^2(\mathbb{G})$ and it has a decomposition $L^2(\mathbb{G})=\sum_{x\in\mathrm{Irred}(\mathbb{G})}\oplus (H_x\otimes H_{\bar{x}})$.
Along the decomposition, the modular operator of $h$ is of the form $\Delta^{it}=\sum_{x\in\mathrm{Irred}(\mathbb{G})}\oplus (F_x^{it}\otimes F_{\bar{x}}^{-it})$.
The canonical conjugation is denoted by $J$.
All dual objects are written with hat (e.g.\ $\hat{\mathbb{G}}, \hat{\Phi},\ldots$).
We frequently use the unitary element $U:=J\hat{J}$ which satisfies $U\rho(C(\mathbb{G}))U=\lambda(C(\mathbb{G}))$ and $U\hat{\rho}(c_0(\hat{\mathbb{G}}))U=\hat{\lambda}(c_0(\hat{\mathbb{G}}))$.
{\bf $\bullet$ Crossed products}
For crossed products of quantum groups, we refer the reader to $\cite{BS93}$.
Let $\hat{\mathbb{G}}$ be a discrete quantum group and $A$ a $C^*$-algebra. Recall the following notions:
\begin{itemize}
\item A (\textit{left}) \textit{action} of $\hat{\mathbb{G}}$ on $A$ is a non-degenerate $*$-homomorphism $\alpha\colon A \rightarrow M(c_0(\hat{\mathbb{G}})\otimes A)$ such that $(\iota\otimes \alpha)\alpha=(\hat{\Phi}\otimes\iota)\alpha$ and $(\hat{\epsilon}\otimes\iota)\alpha(a)=a$ for any $a\in A$, where $\hat{\epsilon}$ is the counit.
\item A \textit{covariant representation} of an action $\alpha$ into a $C^*$-algebra $B$ is a pair $(\theta,X)$ such that
\begin{itemize}
\item $\theta$ is a non-degenerate $*$-homomorphism from $A$ into $M(B)$;
\item $X \in M(c_0(\hat{\mathbb{G}})\otimes B)$ is a unitary representation of $\hat{\mathbb{G}}$;
\item they satisfy the covariant relation $(\iota\otimes \theta)\alpha(a)=X^*(1\otimes \theta(a))X$ ($a\in A$).
\end{itemize}
\end{itemize}
Let $\alpha$ be an action of $\hat{\mathbb{G}}$ on $A$. Then for a covariant representation $(\theta,X)$ into $B$, the closed linear span
\begin{equation*}
\overline{\mathrm{span}}\{\theta(a)(\omega\otimes\iota)(X) \mid a\in A, \omega\in \ell^\infty(\hat{\mathbb{G}})_* \} \subset M(B)
\end{equation*}
becomes a $C^*$-algebra.
The \textit{reduced crossed product $C^*$-algebra} is the $C^*$-algebra associated with the covariant representation $((\hat{\lambda}\otimes\iota)\alpha, \hat{\mathscr{W}}\otimes 1)$ into ${\cal L}(L^2(\mathbb{G})\otimes A)$, where $\hat{\lambda}$ is the left regular representation, $\hat{\mathscr{W}}:=(\iota\otimes\rho)(\mathbb{V})$, and ${\cal L}(L^2(\mathbb{G})\otimes A)$ is the $C^*$-algebra of all right $A$-module maps on the Hilbert module $L^2(\mathbb{G})\otimes A$.
The \textit{full crossed product $C^*$-algebra} is defined as a universal object for all covariant representations.
When $A\subset \mathbb{B}(H)$ is a von Neumann algebra, the \textit{crossed product von Neumann algebra} is defined as the $\sigma$-weak closure of the reduced crossed product in $\mathbb{B}(L^2(\mathbb{G})\otimes H)$.
We denote them by $\hat{\mathbb{G}}\ltimes_{\rm r}A$, $\hat{\mathbb{G}}\ltimes_{\rm f}A$ and $\hat{\mathbb{G}}\ltimes A$ respectively.
The crossed product von Neumann algebra has a canonical conditional expectation $E_A$ onto $A$, given by
\begin{equation*}
\hat{\mathbb{G}}\ltimes A \ni \theta(a)(\omega\otimes\iota\otimes\iota)(\hat{\mathscr{W}}\otimes 1) \mapsto a (\omega\otimes h \otimes\iota)(\hat{\mathscr{W}}\otimes 1)\in A,
\end{equation*}
where $h$ is the Haar state (more explicitly it sends $\theta(a)\rho(u_{i,j}^z)$ to $a h(u_{i,j}^z)$).
For a faithful normal state $\phi$ on $A$, we can define a canonical faithful normal state on $\hat{\mathbb{G}}\ltimes A$ by $\tilde{\phi}:=\phi\circ E_A$. When $\phi$ is a trace and the given action $\alpha$ is $\phi$-preserving (i.e.\ $(\iota\otimes\phi)\alpha(a)=\phi(a)$ for $a\in A$), $\tilde{\phi}$ also becomes a trace.
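As an orienting special case, if $B=\mathbb{C}$ with the trivial action, then the reduced crossed product is the closed linear span of the slices $(\omega\otimes\iota)(\hat{\mathscr{W}})=\rho((\omega\otimes\iota)(\mathbb{V}))$, namely $C_{\rm red}(\mathbb{G})$, so that $\hat{\mathbb{G}}\ltimes\mathbb{C}=L^\infty(\mathbb{G})$ and the canonical conditional expectation $E_{\mathbb{C}}$ is the Haar state $h$.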
{\bf $\bullet$ Free products}
For free products of discrete quantum groups, we refer the reader to $\cite{Wa95}$.
We first recall fundamental facts of free products of $C^*$-algebras. Let $A_i$ $(i=1,\ldots,n)$ be unital $C^{\ast}$-algebras and $\phi_i$ non-degenerate states on $A_i$ (i.e.\ GNS representations of $\phi_i$ are faithful).
Denote GNS-representations of $\phi_i$ by $(\pi_{i},H_{i},\xi_{i})$ and decompose $H_{i}=\mathbb{C}\xi_{i}\oplus H_{i}^{0}$ as Hilbert spaces, where $H_{i}^{0}:=(\mathbb{C}\xi_{i})^{\perp}$. Define two Hilbert spaces by
\begin{eqnarray*}
H_1*\cdots*H_n:=\mathbb{C}\Omega \oplus \bigoplus_{k\geq1}\hspace{0.3em}&\bigoplus_{(i_1,\dots,i_k)\in N_k}& H_{i_{1}}^{0}\otimes H_{i_{2}}^{0}\otimes\cdots \otimes H_{i_{k}}^{0},\\
H(i):=\mathbb{C}\Omega \oplus \bigoplus_{k\geq1}\hspace{0.3em}&\bigoplus_{(i_1,\dots,i_k)\in N_k, i\neq i_{1}}& H_{i_{1}}^{0}\otimes H_{i_{2}}^{0}\otimes\cdots \otimes H_{i_{k}}^{0},
\end{eqnarray*}
where $\Omega$ is a fixed norm one vector and $N_k:=\{(i_1,\dots,i_k)\in \{1,\ldots,n\}^k \mid i_l\neq i_{l+1} \textrm{ for all }l=1,\dots,k-1\}$. Write $H:=H_1*\cdots*H_n$. Let $W_i$ be unitary operators given by
\begin{eqnarray*}
W_{i}\colon H
&=& H(i)\oplus(H_{i}^{0}\otimes H(i))\\
&\simeq& (\mathbb{C}\xi_{i}\otimes H(i))\oplus(H_{i}^{0}\otimes H(i))\\
&\simeq&(\mathbb{C}\xi_{i} \oplus H_{i}^{0})\otimes H(i)\\
&=&H_{i}\otimes H(i)
\end{eqnarray*}
Then a canonical $A_i$-action (more generally $\mathbb{B}(H_i)$-action) on $H$ is given by $\lambda_i(a):=W_i^* (a\otimes 1)W_i$ $(a\in A_i)$. The \textit{free product $C^*$-algebra} $(A_1,\phi_1)*\cdots *(A_n,\phi_n)$ is the $C^*$-algebra generated by $\lambda_i(A_i)$ $(i=1,\ldots, n)$.
The vector state of the canonical cyclic vector $\Omega$ is called the free product state and denoted by $\phi_1*\cdots *\phi_n$.
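We note, for the reader's orientation, that the free product state satisfies the freeness relation: if $a_j\in A_{i_j}$ with $\phi_{i_j}(a_j)=0$ and $i_1\neq i_2,\, i_2\neq i_3,\ldots,\, i_{k-1}\neq i_k$, then $(\phi_1*\cdots*\phi_n)(\lambda_{i_1}(a_1)\cdots\lambda_{i_k}(a_k))=0$, since the vector $\lambda_{i_1}(a_1)\cdots\lambda_{i_k}(a_k)\Omega$ lies in $H_{i_1}^{0}\otimes\cdots\otimes H_{i_k}^{0}$, which is orthogonal to $\Omega$.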
When each $A_i$ is a von Neumann algebra and $\phi_i$ is normal, the \textit{free product von Neumann algebra} is defined as the $\sigma$-weak closure of the free product $C^*$-algebra in $\mathbb{B}(H)$. We denote it by $(A_1,\phi_1)\bar{*}\cdots \bar{*}(A_n,\phi_n)$.
Let $M_i$ $(i=1,\ldots,n)$ be von Neumann algebras and $\phi_i$ be faithful normal states on $M_i$. Denote modular objects of them by $\Delta_i$ and $J_i$. Then modular objects on the free product von Neumann algebra are given by
\begin{alignat*}{3}
\Delta^{it} &\colon H_{i_{1}}^{0}\otimes \cdots \otimes H_{i_{k}}^{0} \ni \xi_{i_{1}}\otimes \cdots \otimes \xi_{i_{k}} \longmapsto \Delta^{it}_{i_1}\xi_{i_{1}}\otimes \cdots \otimes \Delta^{it}_{i_k}\xi_{i_{k}} &\in& H_{i_{1}}^{0}\otimes \cdots \otimes H_{i_{k}}^{0},\\
J \hspace{0.7em} &\colon H_{i_{1}}^{0}\otimes \cdots \otimes H_{i_{k}}^{0} \ni \xi_{i_{1}}\otimes \cdots \otimes \xi_{i_{k}} \longmapsto J_{i_k}\xi_{i_{k}}\otimes \cdots \otimes J_{i_1}\xi_{i_{1}} &\in& H_{i_{k}}^{0}\otimes \cdots \otimes H_{i_{1}}^{0}.
\end{alignat*}
The unitary element $V_i:=\Sigma(J_i\otimes J|_{JH(i)})W_iJ$, where $\Sigma$ is the flip from $H_i\otimes H(i)$ to $H(i)\otimes H_i$, gives a right $M_i$-action (more generally $\mathbb{B}(H_i)$-action) by $\rho_i(a):=J\lambda_i(a)J=V_i^*(1\otimes J_iaJ_i)V_i$.
Let $\mathbb{G}_i$ $(i=1,\ldots,n)$ be compact quantum groups and $h_i$ Haar states of them. Then the reduced free product $(C_{\rm red}(\mathbb{G}_1), h_1)*\cdots *(C_{\rm red}(\mathbb{G}_n), h_n)$ has a structure of compact quantum group with the Haar state $h_1*\cdots *h_n$.
We write its dual as $\hat{\mathbb{G}}_1*\cdots *\hat{\mathbb{G}}_n$ and call it the \textit{free product} of $\hat{\mathbb{G}}_i$ $(i=1,\ldots,n)$.
Modular objects $\hat{J}$ and $U:=J\hat{J}$ are given by
\begin{alignat*}{3}
\hat{J} \hspace{0.5em} &\colon H_{i_{1}}^{0}\otimes \cdots \otimes H_{i_{n}}^{0} \ni \xi_{i_{1}}\otimes \cdots \otimes \xi_{i_{n}} \longmapsto \hat{J}_{i_1}\xi_{i_1}\otimes \cdots \otimes \hat{J}_{i_n}\xi_{i_{n}} &\in& H_{i_{1}}^{0}\otimes \cdots \otimes H_{i_{n}}^{0},\\
U \hspace{0.5em} &\colon H_{i_{1}}^{0}\otimes \cdots \otimes H_{i_{n}}^{0} \ni \xi_{i_{1}}\otimes \cdots \otimes \xi_{i_{n}} \longmapsto U_{i_n}\xi_{i_{n}}\otimes \cdots \otimes U_{i_1}\xi_{i_{1}} &\in& H_{i_{n}}^{0}\otimes \cdots \otimes H_{i_{1}}^{0}.
\end{alignat*}
We have a formula $W_i\hat{J}=(\hat{J}_i\otimes \hat{J}|_{H(i)}) W_i$.
By the natural inclusion $(C_{\rm red}(\mathbb{G}_i), h_i)\subset (C_{\rm red}(\mathbb{G}_1), h_1)*\cdots *(C_{\rm red}(\mathbb{G}_n), h_n)$ $(i=1,\ldots,n)$, we can regard all corepresentations of $\mathbb{G}_i$ as those of the dual of $\hat{\mathbb{G}}_1*\cdots *\hat{\mathbb{G}}_n$.
A complete set of representatives of the irreducible corepresentations of the dual of $\hat{\mathbb{G}}_1*\cdots *\hat{\mathbb{G}}_n$ is given by the trivial corepresentation together with all corepresentations of the form $w_1\boxtimes w_2\boxtimes \cdots \boxtimes w_k$, where each $w_l$ is a nontrivial element of $\mathrm{Irred}(\mathbb{G}_i)$ for some $i$ and no two adjacent factors are taken from the same quantum group.
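For instance, when each $\hat{\mathbb{G}}_i$ is a discrete group $\Gamma_i$ (so that $\mathrm{Irred}(\mathbb{G}_i)=\Gamma_i$ and all the corepresentations above are one-dimensional), this description simply recovers the normal form in the free product group $\Gamma_1*\cdots*\Gamma_n$: every nontrivial element is written uniquely as an alternating word in nontrivial elements of the $\Gamma_i$.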
{\bf $\bullet$ Amenability of actions}
Let $\alpha$ be an action of a discrete quantum group $\hat{\mathbb{G}}$ on a unital $C^*$-algebra $A$. Denote by $L^2(\mathbb{G})\otimes A$ the Hilbert module obtained from $L^2(\mathbb{G})\otimes_{\rm alg} A$ with $\langle\xi\otimes a | \eta\otimes b\rangle:=\langle\xi|\eta\rangle b^*a$ for $\xi,\eta\in L^2(\mathbb{G})$ and $a,b\in A$.
We say $\alpha$ is \textit{amenable} $\cite[\textrm{Definition 4.1}]{VV05}$ if
there exists a sequence $\xi_n\in L^2(\mathbb{G})\otimes A$ satisfying
\begin{itemize}
\item[$\rm (i)$] $\langle\xi_n |\xi_n\rangle \rightarrow 1$ in $A$;
\item[$\rm (ii)$] for all $x\in \mathrm{Irred}(\mathbb{G})$, $\|((\iota\otimes \alpha)(\xi_n)-\hat{\mathscr{V}}_{12}(\xi_n)_{13}) (1\otimes p_x \otimes 1)\|\rightarrow 0$;
\item[$\rm (iii)$] $((\hat{\rho}\times\hat{\lambda})\circ\hat{\Phi}\otimes \iota)\alpha(a)\xi_n =\xi_n a$ for any $n\in\mathbb{N}$ and $a\in A$,
\end{itemize}
where $\hat{\mathscr V}:=(\lambda\otimes \iota)(\mathbb{V}_{21})$ and
$\hat{\rho}\times\hat{\lambda}$ is the multiplication map from $\ell^\infty(\hat{\mathbb{G}})\otimes\ell^\infty(\hat{\mathbb{G}})$ into $\mathbb{B}(L^2(\mathbb{G}))$, which is bounded since $\ell^\infty(\hat{\mathbb{G}})$ is amenable.
It is clear that $\hat{\mathbb{G}}$ is amenable if and only if any trivial action of $\hat{\mathbb{G}}$ is amenable.
{\bf $\bullet$ Free quantum groups and quantum automorphism groups}
Let $F$ be a matrix in $\mathrm{GL}(n,\mathbb{C})$. The \textit{free unitary quantum group} (resp.\ \textit{free orthogonal quantum group}) associated with $F$ $\cite{VW96}\cite{Wa95}$ is the compact quantum group $A_u(F)$ (resp.\ $A_o(F)$) whose $C^*$-algebra $C(A_u(F))$ (resp.\ $C(A_o(F))$) is defined as the universal unital $C^*$-algebra generated by all the entries of a unitary $n$ by $n$ matrix $u=(u_{i,j})_{i,j}$ satisfying that $F(u_{i,j}^*)_{i,j}F^{-1}$ is unitary (resp.\ $F(u_{i,j}^*)_{i,j}F^{-1}=u$).
Recall that a \textit{coaction} of a compact quantum group $\mathbb{G}$ on a unital $C^*$-algebra $B$ is a unital $*$-homomorphism $\beta\colon B\rightarrow B\otimes C(\mathbb{G})$ satisfying $(\beta\otimes \iota)\circ \beta=(\iota\otimes \Phi)\circ \beta$ and $\overline{{\rm span}}\{\beta(B)(1\otimes C(\mathbb{G}))\}=B\otimes C(\mathbb{G})$.
Let $(B,\phi)$ be a pair of a finite dimensional $C^*$-algebra and a faithful state on $B$. Then the \textit{quantum automorphism group} of $(B,\phi)$ $\cite{Wa98_1}\cite{Ba01}$ is the universal compact quantum group $A_{\rm aut}(B,\phi)$ which can be endowed with a coaction $\beta\colon B\rightarrow B\otimes C(A_{\rm aut}(B,\phi))$ satisfying $(\phi\otimes \iota)\circ \beta(x)=\phi(x)1$ for $x\in B$.
\subsection{\bf Weak amenability and the $\rm \bf W^*$CBAP}
Let $\hat{\mathbb{G}}$ be a discrete quantum group. Denote the dense Hopf $*$-algebra by ${\mathscr C}(\hat{\mathbb{G}})$.
To any element $a\in\ell^\infty(\hat{\mathbb{G}})$ we can associate a linear map $m_a$ from ${\mathscr C}(\hat{\mathbb{G}})$ to ${\mathscr C}(\hat{\mathbb{G}})$, given by $(m_a\otimes \iota)(u^x)=(1\otimes ap_x)u^x$ for any $x\in \mathrm{Irred}(\mathbb{G})$, where $p_x\in c_0(\hat{\mathbb{G}})$ is the canonical projection onto the $x$-component.
We say $\hat{\mathbb{G}}$ is \textit{weakly amenable} if there exists a net $(a_i)_i$ of elements of $\ell^\infty(\hat{\mathbb{G}})$ such that
\begin{itemize}
\item each $a_i$ has finite support, namely, $a_ip_x=0$ except for finitely many $x\in \mathrm{Irred}(\mathbb{G})$;
\item $(a_i)_i$ converges to 1 pointwise, namely, $a_ip_x$ converges to $p_x$ in $\mathbb{B}(H_x)$ for any $x\in \mathrm{Irred}(\mathbb{G})$;
\item $\limsup_i\|m_{a_i}\|_{\rm c.b.}$ is finite.
\end{itemize}
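When $\hat{\mathbb{G}}$ is a discrete group $\Gamma$ (so that $\mathrm{Irred}(\mathbb{G})=\Gamma$ and $u^x=\lambda_x$ for $x\in\Gamma$), the maps $m_a$ are the multipliers $\lambda_x\mapsto a(x)\lambda_x$, and the definition above recovers the classical notion of weak amenability of $\Gamma$ via finitely supported Herz--Schur multipliers.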
We also recall that a von Neumann algebra $M$ has the \textit{weak$^*$ completely bounded approximation property} (or $\rm W^*$CBAP, in short)
if there exists a net $(\psi_i)_i$ of normal, finite rank, completely bounded (c.b.\ for short) maps on $M$ such that $\limsup_i\|\psi_i\|_{\rm c.b.}<\infty$ and $\psi_i$ converges to $\mathrm{id}_M$ in the point $\sigma$-weak topology.
Then the \textit{Cowling--Haagerup constants} of $\hat{\mathbb{G}}$ and of $M$ are defined as
\begin{eqnarray*}
&&\Lambda_{\rm c.b.}(\hat{\mathbb{G}}):=\inf\{\ \limsup_i\|m_{a_i}\|_{\rm c.b.}\mid (a_i)_i \textrm{ satisfies the above condition}\},\\
&&\Lambda_{\rm c.b.}(M):=\inf\{\ \limsup_i\|\psi_i\|_{\rm c.b.}\mid (\psi_i) \textrm{ satisfies the above condition}\}.
\end{eqnarray*}
It is known that $\Lambda_{\rm c.b.}(\hat{\mathbb{G}})\geq\Lambda_{\rm c.b.}(L^\infty(\mathbb{G}))$.
De Commer, Freslon, and Yamashita recently proved that $\Lambda_{\rm c.b.}(\hat{\mathbb{G}})=1$, where $\mathbb{G}$ is a free quantum group or a quantum automorphism group $\cite{DFY13}$.
Note that a special case of this was already solved by Freslon $\cite{Fr12}$.
\subsection{\bf Popa's intertwining techniques and relative amenability}
We first recall Popa's intertwining techniques in both non-finite and semifinite situations.
\begin{Def}\label{popa embed def}\upshape
Let $M$ be a von Neumann algebra, $p$ and $q$ projections in $M$, $A\subset qMq$ and $B\subset pMp$ von Neumann subalgebras, and let $E_{B}$ be a faithful normal conditional expectation from $pMp$ onto $B$. Assume $A$ and $B$ are finite. We say $A$ \textit{embeds in $B$ inside} $M$ and denote by $A\preceq_M B$ if there exist non-zero projections $e\in A$ and $f\in B$, a unital normal $\ast$-homomorphism $\theta \colon eAe \rightarrow fBf$, and a partial isometry $v\in M$ such that
\begin{itemize}
\item $vv^*\leq e$ and $v^*v\leq f$,
\item $v\theta(x)=xv$ for any $x\in eAe$.
\end{itemize}
\end{Def}
\begin{Thm}[\textit{non-finite version, }{\cite{Po01}\cite{Po03}\cite{Ue12}\cite{HV12}}]\label{popa embed}
Let $M,p,q,A,B$, and $E_{B}$ be as in the definition above, and let $\tau$ be a faithful normal trace on $B$. Then the following conditions are equivalent.
\begin{itemize}
\item[$\rm (i)$]The algebra $A$ embeds in $B$ inside $M$.
\item[$\rm (ii)$]There exists no sequence $(w_n)_n$ of unitaries in $A$ such that $E_{B}(b^*w_n a)\rightarrow 0$ strongly for any $a,b\in qMp$.
\end{itemize}
\end{Thm}
\begin{Thm}[\textit{semifinite version, }{\cite{CH08}\cite{HR10}}]\label{popa embed2}
Let $M$ be a semifinite von Neumann algebra with a faithful normal semifinite trace $\mathrm{Tr}$, and $B\subset M$ be a von Neumann subalgebra with $\mathrm{Tr}_{B}:=\mathrm{Tr}|_{B}$ semifinite. Denote by $E_{B}$ the unique $\mathrm{Tr}$-preserving conditional expectation from $M$ onto $B$.
Let $q$ be a $\mathrm{Tr}$-finite projection in $M$ and $A\subset qMq$ a von Neumann subalgebra. Then the following conditions are equivalent.
\begin{itemize}
\item[$\rm (i)$]There exists a non-zero projection $p\in B$ with $\mathrm{Tr}_{B}(p)<\infty$ such that $A\preceq_{eMe}pBp$, where $e:=p\vee q$.
\item[$\rm (ii)$]There exists no sequence $(w_n)_n$ of unitaries in $A$ such that $E_{B}(b^{\ast}w_n a)\rightarrow 0$ strongly for any $a,b\in qM$.
\end{itemize}
We use the same symbol $A\preceq_{M}B$ if one of these conditions holds.
\end{Thm}
\noindent
By the proof of the semifinite version, we have that $A\not\preceq_M B$ if and only if there exists a net $(p_j)$ of $\mathrm{Tr}_B$-finite projections in $B$ which converges to 1 strongly and satisfies $A\not\preceq_{e_jMe_j}p_jBp_j$ for all $j$, where $e_j:=p_j\vee q$.
We also mention that when $A$ is diffuse and $B$ is atomic, then $A\not\preceq_M B$. This follows from the existence of a normal unital $*$-homomorphism $\theta$ in the definition.
We next recall relative amenability introduced in $\cite{OP07}$ and $\cite{PV11}$.
\begin{Def}\upshape
Let $(M,\tau)$ be a tracial von Neumann algebra, $q\in M$ a projection, and $B\subset M$ and $A\subset qMq$ be von Neumann subalgebras.
We say $A$ is amenable relative to $B$ in $M$ if there exists a state $\phi$ on $q\langle M,e_B\rangle q$ such that $\phi$ is $A$-central and the restriction of $\phi$ to $A$ coincides with $\tau$.
\end{Def}
\begin{Def}\upshape
Let $(M,\tau)$ and $(B,\tau_B) $ be tracial von Neumann algebras and $A\subset M$ be a von Neumann subalgebra.
We say an $M$-$B$-bimodule ${}_M K_B$ is left $A$-amenable if there exists a state $\phi$ on $\mathbb{B}(K)\cap (B^{\rm op})'$ such that $\phi$ is $A$-central and the restriction of $\phi$ to $A$ coincides with $\tau$.
\end{Def}
We note that for any $B\subset M$ and $A\subset qMq$, amenability of $A$ relative to $B$ in $M$ is equivalent to left $A$-amenability of $qL^2(M)$ as a $qMq$-$B$-bimodule, since $q\langle M,e_B\rangle q= q(\mathbb{B}(L^2(M))\cap (B^{\rm op})')q=\mathbb{B}(qL^2(M))\cap (B^{\rm op})'$.
We also mention that when $B$ is amenable, then since $\mathbb{B}(K)\cap (B^{\rm op})'$ is amenable, there exists a conditional expectation from $\mathbb{B}(K)$ onto $\mathbb{B}(K)\cap (B^{\rm op})'$.
In this case, relative amenability of $A$ (or left $A$-amenability) means amenability of $A$.
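As an illustration, if $B=\mathbb{C}$ then $q\langle M,e_B\rangle q=\mathbb{B}(qL^2(M))$, and the definition asks for an $A$-central state on $\mathbb{B}(qL^2(M))$ whose restriction to $A$ is $\tau$; the existence of such a hypertrace is one of the standard characterizations of amenability of $A$.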
\section{\bf Bi-exactness}
\subsection{\bf Two definitions of bi-exactness}
We introduce two notions of bi-exactness for discrete quantum groups. These notions are equivalent for discrete groups, as we have seen in the Introduction.
Recall that $C_{\rm red}(\mathbb{G})=\rho(C(\mathbb{G}))$ and $UC_{\rm red}(\mathbb{G})U=\lambda(C(\mathbb{G}))=C_{\rm red}(\mathbb{G})^{\rm op}$, where $U=J\hat{J}=\hat{J}J$. Basically we use $UC_{\rm red}(\mathbb{G})U$ instead of $C_{\rm red}(\mathbb{G})^{\rm op}$.
\begin{Def}\upshape\label{bi-ex}
Let $\hat{\mathbb{G}}$ be a discrete quantum group. We say $\hat{\mathbb{G}}$ is \textit{bi-exact} if it satisfies the following conditions:
\begin{itemize}
\item[$\rm (i)$] the quantum group $\hat{\mathbb{G}}$ is exact (i.e.\ $C_{\rm red}(\mathbb{G})$ is exact);
\item[$\rm (ii)$] there exists a u.c.p.\ map $\theta\colon C_{\rm red}(\mathbb{G})\otimes C_{\rm red}(\mathbb{G}) \rightarrow \mathbb{B}(L^2(\mathbb{G}))$ such that $\theta(a\otimes b)-aUbU\in \mathbb{K}(L^2(\mathbb{G}))$ for any $a,b \in C_{\rm red}(\mathbb{G})$.
\end{itemize}
\end{Def}
\begin{Def}\upshape\label{st bi-ex}
Let $\hat{\mathbb{G}}$ be a discrete quantum group. We say $\hat{\mathbb{G}}$ is \textit{strongly bi-exact} if there exists a unital $C^*$-subalgebra ${\cal B}$ in $\ell^{\infty}(\hat{\mathbb{G}})$ such that
\begin{itemize}
\item[$\rm (i)$] the algebra $\cal B$ contains $c_0(\hat{\mathbb{G}})$, and the quotient ${\cal B}_{\infty}:={\cal B}/c_0(\hat{\mathbb{G}})$ is nuclear;
\item[\rm (ii)] the left translation action on $\ell^\infty(\hat{\mathbb{G}})$ induces an amenable action on ${\cal B}_{\infty}$, and the right one induces the trivial action on ${\cal B}_{\infty}$.
\end{itemize}
\end{Def}
\begin{Rem}\upshape
Amenability implies strong bi-exactness. In fact, we can choose ${\cal B}:=c_0(\hat{\mathbb{G}})+\mathbb{C}1$.
In this case, both the left and right actions on ${\cal B}_\infty(\simeq \mathbb{C})$ are trivial.
\end{Rem}
We first observe the relationship between bi-exactness and strong bi-exactness.
In (i) of the definition of strong bi-exactness, nuclearity of ${\cal B}_\infty$ is equivalent to that of $\cal B$. Moreover, together with the condition (ii), the $C^*$-subalgebra ${\cal C}_l\subset \mathbb{B}(L^2(\mathbb{G}))$ generated by $\hat{\lambda}({\cal B})$ and $C_{\rm red}(\mathbb{G})(=\rho(C(\mathbb{G})))$ is also nuclear.
In fact, the quotient image of ${\cal C}_l$ in $\mathbb{B}(L^2(\mathbb{G}))/\mathbb{K}(L^2(\mathbb{G}))$ is nuclear since there is a canonical surjective map from $\hat{\mathbb{G}} \ltimes_{\rm f} {\cal B}_\infty$ onto this quotient image and $\hat{\mathbb{G}} \ltimes_{\rm f} {\cal B}_\infty$ is nuclear by amenability of the action.
Then ${\cal C}_l$ is an extension of $\mathbb{K}(L^2(\mathbb{G}))$ by this quotient image, and hence is nuclear.
Note that ${\cal C}_l$ contains $\mathbb{K}(L^2(\mathbb{G}))$, since it contains $C_{\rm red}(\mathbb{G})$ and the orthogonal projection from $L^2(\mathbb{G})$ onto $\mathbb{C}\hat{1}$.
We put ${\cal C}_r:=U{\cal C}_lU$, where $U:=J\hat{J}$. Triviality of the right action in (ii) implies that all commutators of $UC_{\rm red}(\mathbb{G})U$ and $\hat{\lambda}({\cal B})$ (respectively $C_{\rm red}(\mathbb{G})$ and $U\hat{\lambda}({\cal B})U$) are contained in $\mathbb{K}(L^2(\mathbb{G}))$.
This implies that all commutators of ${\cal C}_l$ and ${\cal C}_r$ are also contained in $\mathbb{K}(L^2(\mathbb{G}))$.
Thus we obtain the following $\ast$-homomorphism:
\begin{equation*}
\nu\colon {\cal C}_l \otimes {\cal C}_r \longrightarrow \mathbb{B}(L^2(\mathbb{G}))/\mathbb{K}(L^2(\mathbb{G})); a\otimes b \longmapsto ab.
\end{equation*}
This map is an extension of the multiplication map on $C_{\rm red}(\mathbb{G}) \otimes UC_{\rm red}(\mathbb{G})U$, and so this multiplication map is nuclear since so is ${\cal C}_l \otimes {\cal C}_r$. Finally by the lifting theorem of Choi and Effros $\cite{CE76}$ (or see $\cite[\textrm{Theorem C.3}]{BO08}$), we obtain a u.c.p.\ lift $\theta$ of the multiplication map.
Thus we observed that strong bi-exactness implies bi-exactness (exactness of $\hat{\mathbb{G}}$ follows from nuclearity of ${\cal C}_l$). The intermediate object ${\cal C}_l$ is important for us, and we will use this algebra in the next subsection.
We summarize these observations as follows.
\begin{Lem}\label{intermediate}
Strong bi-exactness implies bi-exactness.
The following condition is an intermediate condition between bi-exactness and strong bi-exactness:
\begin{itemize}
\item There exists a nuclear $C^*$-algebra ${\cal C}_l\subset \mathbb{B}(L^2(\mathbb{G}))$ which contains $C_{\rm red}(\mathbb{G})$ and $\mathbb{K}(L^2(\mathbb{G}))$, and all commutators of ${\cal C}_l$ and ${\cal C}_r(:=U{\cal C}_lU)$ are contained in $\mathbb{K}(L^2(\mathbb{G}))$.
\end{itemize}
\end{Lem}
Examples of bi-exact quantum groups were first given by Vergnioux $\cite{Ve05}$, who constructed a u.c.p.\ lift directly for free quantum groups. Then, in joint work with Vaes $\cite{VV05}$, he gave a new proof of bi-exactness of $\hat{A}_o(F)$; in fact, they proved strong bi-exactness.
In the proof, they only used the fact that $A_o(F)$ is monoidally equivalent to some $\mathrm{SU}_q(2)$ with $-1<q<1$ and $q\neq0$, together with some estimates on intertwiner spaces of $\mathrm{SU}_q(2)$ $\cite[\textrm{Lemma 8.1}]{VV05}$.
Since the dual of $\mathrm{SO}_q(3)$ is a quantum subgroup of some dual of $\mathrm{SU}_q(2)$, intertwiner spaces of $\mathrm{SO}_q(3)$ have the same estimates.
From this fact, we can deduce strong bi-exactness of a dual of a compact quantum group which is monoidally equivalent to $\mathrm{SO}_q(3)$ (by the same argument as that for $\mathrm{SU}_q(2)$).
We also mention that strong bi-exactness of $A_u(F)$ was proved by the same argument $\cite{VV08}$.
We summarize these observations as follows.
\begin{Thm}\label{example st bi-exact}
Let $\mathbb{G}$ be a compact quantum group which is monoidally equivalent to $\mathrm{SU}_q(2)$, $\mathrm{SO}_q(3)$, or $A_u(F)$, where $-1<q<1$, $q\neq 0$, $F$ is not a scalar multiple of a $2$ by $2$ unitary. Then the dual $\hat{\mathbb{G}}$ is strongly bi-exact.
\end{Thm}
In $\cite{Is12_2}$, we introduced condition $\rm (AOC)^+$, which is similar to condition $\rm (AO)^+$ on continuous cores, and proved that von Neumann algebras of free quantum groups satisfy this condition.
In the proof we also used only the fact that $A_o(F)$ is monoidally equivalent to some $\mathrm{SU}_q(2)$ and hence we actually proved the following statement.
\begin{Thm}\label{example bi-exact}
Let $\mathbb{G}$ be a compact quantum group which is monoidally equivalent to $\mathrm{SU}_q(2)$, $\mathrm{SO}_q(3)$, or $A_u(F)$, where $-1<q<1$, $q\neq 0$, $F$ is not a scalar multiple of a $2$ by $2$ unitary. Then $L^\infty(\mathbb{G})$ and its Haar state satisfy condition $\rm (AOC)^+$ with the dense $C^*$-algebra $C_{\rm red}(\mathbb{G})$.
\end{Thm}
In the proof we gave a sufficient condition for condition $\rm (AOC)^+$, which was formulated for general von Neumann algebras $\cite[\textrm{Lemma 3.2.3}]{Is12_2}$.
For quantum group von Neumann algebras, we have a more concrete sufficient condition, as follows. To verify this, see Subsection 3.2 of the same paper.
In the lemma below, $\pi$ means the canonical $*$-homomorphism from $\mathbb{B}(L^2(\mathbb{G}))$ into $\mathbb{B}(L^2(\mathbb{G})\otimes L^2(\mathbb{R}))$ defined by $(\pi(x)\xi)(t):=\Delta_h^{-it}x\Delta_h^{it}\xi(t)$ for $x\in\mathbb{B}(L^2(\mathbb{G}))$, $t\in\mathbb{R}$, and $\xi\in L^2(\mathbb{G})\otimes L^2(\mathbb{R})$.
\begin{Lem}\label{AOC}
Let $\mathbb{G}$ be a compact quantum group and ${\cal C}_l\subset C^*\{C_{\rm red}(\mathbb{G}), \hat{\lambda}(\ell^\infty(\hat{\mathbb{G}})) \}$ a $C^*$-subalgebra which contains $C_{\rm red}(\mathbb{G})$ and $\mathbb{K}(L^2(\mathbb{G}))$. Put ${\cal C}_r:=U{\cal C}_lU$. Assume the following conditions:
\begin{itemize}
\item[$\rm (i)$] the algebra ${\cal C}_l$ is nuclear;
\item[$\rm (ii)$] a family of maps $\mathrm{Ad}\Delta_h^{it}$ $(t\in\mathbb{R})$ gives a norm continuous action of $\mathbb{R}$ on ${\cal C}_l$;
\item[$\rm (iii)$] all commutators of $\pi({\cal C}_l)$ and ${\cal C}_r \otimes1$ are contained in $\mathbb{K}(L^2(\mathbb{G}))\otimes\mathbb{B}(L^2(\mathbb{R}))$.
\end{itemize}
Then $L^\infty(\mathbb{G})$ and its Haar state satisfy condition $\rm (AOC)^+$ with the dense $C^*$-algebra $C_{\rm red}(\mathbb{G})$.
\end{Lem}
\begin{Rem}\upshape\label{amenable}
When $\hat{\mathbb{G}}$ is amenable, then $L^\infty(\mathbb{G})$ and its Haar state satisfy condition $\rm (AOC)^+$. In fact, we can choose ${\cal B}:=c_0(\hat{\mathbb{G}})+\mathbb{C}1$ and ${\cal C}_l:=C^*\{C_{\rm red}(\mathbb{G}), \hat{\lambda}({\cal B})\}$.
In this case, all conditions in this lemma are easily verified.
\end{Rem}
\begin{Rem}\upshape\label{group}
When $\hat{\mathbb{G}}$ is a strongly bi-exact discrete quantum group of Kac type (for example, a bi-exact discrete group) with ${\cal B}\subset \ell^\infty(\hat{\mathbb{G}})$ as in Definition \ref{st bi-ex}, then since the modular operator is trivial, ${\cal C}_l:=C^*\{C_{\rm red}(\mathbb{G}), \hat{\lambda}({\cal B})\}$ satisfies these conditions.
\end{Rem}
\begin{Rem}\upshape
In the proof of $\cite[\textrm{Proposition 3.2.4}]{Is12_2}$, we put ${\cal C}_l=\hat{\mathbb{G}}\ltimes_{\rm r} {\cal B}_\infty$ (here we write it as $\tilde{\cal C}_l$), and in this subsection we are putting ${\cal C}_l=C^*\{C_{\rm red}(\mathbb{G}), \hat{\lambda}({\cal B})\}$.
Both of them are nuclear $C^*$-algebras containing $C_{\rm red}(\mathbb{G})$ and serve the same purpose in obtaining condition $\rm (AOC)^+$.
The difference between them is that ${\cal C}_l$ is contained in $\mathbb{B}(L^2(\mathbb{G}))$ but $\tilde{\cal C}_l$ is not.
Hence ${\cal C}_l$ is more useful and $\tilde{\cal C}_l$ is more general (since $\tilde{\cal C}_l$ produces ${\cal C}_l$). In the previous paper, we preferred the generality and hence used $\tilde{\cal C}_l$ in the proof.
\end{Rem}
\subsection{\bf Free products of bi-exact quantum groups}
Free products of bi-exact discrete groups (more generally, free products of von Neumann algebras with condition (AO)) were studied in $\cite{Oz04}\cite[\textrm{Section 4}]{GJ07}\cite[\textrm{Section 15.3}]{BO08}$.
In this subsection we will prove similar results on discrete quantum groups. We basically follow the strategy in $\cite{Oz04}$.
\begin{Lem}[{\cite[\rm Lemma\ 2.4]{Oz04}}]\label{nuclear}
Let $B_i\subset \mathbb{B}(H_i)$ $(i=1,2)$ be $C^*$-subalgebras with $B_i$-cyclic vectors $\xi_i$ and denote by $\omega_i$ the corresponding vector states (note that each $\omega_i$ is non-degenerate). If each $B_i$ contains $P_i$, the orthogonal projection onto $\mathbb{C}\xi_i$, and is nuclear, then the free product $(B_1,\omega_1)*(B_2,\omega_2)$ is also nuclear.
\end{Lem}
\begin{Rem}\upshape
In this Lemma, each $B_i$ contains $\mathbb{K}(H_i)$ since it contains $P_i$ and the vector $\xi_i$ is $B_i$-cyclic.
Projections $\lambda_i(P_i)\in(B_1,\omega_1)*(B_2,\omega_2)$ are orthogonal projections onto $H(i)$ and hence the orthogonal projection onto $\mathbb{C}\Omega$ is of the form $\lambda_1(P_1)+\lambda_2(P_2)-1$, which is contained in $(B_1,\omega_1)*(B_2,\omega_2)$.
Since the vector $\Omega$ is cyclic for $(B_1,\omega_1)*(B_2,\omega_2)$, $(B_1,\omega_1)*(B_2,\omega_2)$ contains all compact operators.
\end{Rem}
For free product von Neumann algebras, we use the same notation $W_i$, $V_i$, $\Delta$, $\Delta_i$, $J$ and $J_i$ as in the free product part of Subsection \ref{CQG}.
\begin{Lem}[{\cite[\rm Lemma\ 3.1]{Oz04}}]
For a free product von Neumann algebra $(M_1,\phi_1)*\cdots*(M_n,\phi_n)$, we have the following equations:
\begin{eqnarray*}
\lambda_i(a) = J\rho_i(a)J
&=&JV_i^*(1_{JH(i)}\otimes J_iaJ_i)V_iJ \\
&=&V_i^*(P_\Omega\otimes a+\lambda_i(a)\mid_{JH(i)\ominus \mathbb{C}\Omega}\otimes 1_{H_i})V_i\\
&=&V_j^*(\lambda_i(a)\mid_{JH(j)}\otimes 1_{H_j})V_j,
\end{eqnarray*}
for any $a\in \mathbb{B}(H_i)$ and $i\neq j$, where $P_\Omega$ is the orthogonal projection onto $\mathbb{C}\Omega$.
\end{Lem}
\begin{Rem}\upshape\label{commutator}
Simple calculations with the lemma show that $[\lambda_i(\mathbb{B}(H_i)),J\lambda_j(\mathbb{B}(H_j))J]=0$ when $i\neq j$, and that
\begin{equation*}
[\lambda_i(a), J\lambda_i(b)J]= V_i^* (P_\Omega \otimes [a, J_ibJ_i])V_i
\end{equation*}
for $a,b\in \mathbb{B}(H_i)$.
Since $V_i=\Sigma(J_i\otimes J|_{JH(i)})W_iJ$, where $\Sigma$ is the flip, this equation means
\begin{eqnarray*}
[\lambda_i(a), J\lambda_i(b)J]
&=& V_i^* (P_\Omega \otimes [a, J_ibJ_i])V_i \\
&=&J^* W_i^* (J_i\otimes J|_{JH(i)})^*\Sigma^* (P_\Omega \otimes [a, J_ibJ_i]) \Sigma(J_i\otimes J|_{JH(i)})W_iJ \\
&=& J^* W_i^*(J_i[a, J_ibJ_i]J_i\otimes P_\Omega) W_iJ.
\end{eqnarray*}
Hence we get
\begin{equation*}
[\lambda_i(a), J\lambda_i(b)J]
=W_i^*([a, J_ibJ_i]\otimes P_\Omega) W_i \quad (a,b\in \mathbb{B}(H_i)).
\end{equation*}
This means that, as an operator on $H_1*\cdots *H_n$, $[\lambda_i(a), J\lambda_i(b)J]$ acts as $[a, J_ibJ_i]$ on $\mathbb{C}\Omega\oplus H_i^0(=H_i)$ and as $0$ on its orthogonal complement.
\end{Rem}
\begin{Pro}\label{free prod bi-exact}
Let $\mathbb{G}_i$ $(i=1,\ldots,n)$ be compact quantum groups. If each $\hat{\mathbb{G}}_i$ satisfies the intermediate condition in \textrm{Lemma $\ref{intermediate}$} with ${\cal C}_l^i$, then the free product $\hat{\mathbb{G}}_1*\cdots*\hat{\mathbb{G}}_n$ satisfies the same condition with the nuclear $C^*$-algebra $({\cal C}_l^1,h_1)*\cdots* ({\cal C}_l^n,h_n)$, where $h_i$ are the vector states of $\hat{1}_{\mathbb{G}_i}$.
In particular, $\hat{\mathbb{G}}_1*\cdots *\hat{\mathbb{G}}_n$ is bi-exact if each $\hat{\mathbb{G}}_i$ is strongly bi-exact.
\end{Pro}
\begin{proof}
We may assume $n=2$. Write $H:=L^2(\mathbb{G}_1)*L^2(\mathbb{G}_2)$. By Lemma \ref{nuclear} and the remark following it, ${\cal C}_l^1* {\cal C}_l^2$ is nuclear and contains $\mathbb{K}(H)$.
So it remains to show that the commutators of ${\cal C}_l^1* {\cal C}_l^2$ and $U({\cal C}_l^1* {\cal C}_l^2)U$ are contained in $\mathbb{K}(H)$.
We have only to check that $[\lambda_i({\cal C}_l^i),U\lambda_j({\cal C}_l^j)U]$ $(i,j=1,2)$ are contained in $\mathbb{K}(H)$, since $\mathbb{K}(H)$ is an ideal.
This is easily verified by Remark \ref{commutator}.
\end{proof}
\begin{Pro}\label{free prod AOC^+}
Let $\mathbb{G}_i$ $(i=1,\ldots,n)$ be compact quantum groups. If each $\hat{\mathbb{G}}_i$ satisfies the conditions in \textrm{Lemma $\ref{AOC}$} with ${\cal C}_l^i$, then the free product $\hat{\mathbb{G}}_1*\cdots*\hat{\mathbb{G}}_n$ satisfies the same conditions with the nuclear $C^*$-algebra $({\cal C}_l^1,h_1)*\cdots* ({\cal C}_l^n,h_n)$, where $h_i$ are the vector states of $\hat{1}_{\mathbb{G}_i}$.
\end{Pro}
\begin{proof}
We may assume $n=2$ and write $H:=L^2(\mathbb{G}_1)*L^2(\mathbb{G}_2)$.
In the same manner as in the last proposition, ${\cal C}_l^1* {\cal C}_l^2$ is nuclear and contains $\mathbb{K}(H)$.
This algebra is contained in $C^*\{C_{\rm red}(\mathbb{G}_1)*C_{\rm red}(\mathbb{G}_2), \hat{\lambda}(\ell^\infty(\hat{\mathbb{G}}_1*\hat{\mathbb{G}}_2)) \}$.
Norm continuity of the modular action is trivial since it is continuous on each $\lambda_k({\cal C}_l^k)$.
By Remark \ref{commutator}, our commutators in the algebra $\mathbb{B}(H)$ (not the algebra $\mathbb{B}(H\otimes L^2(\mathbb{R}))$) are of the form
\begin{equation*}
[\lambda_k(a), U\lambda_k(b)U]
=W_k^*([a, U_kbU_k]\otimes P_\Omega) W_k \quad (a,b\in {\cal C}^k_l)
\end{equation*}
(or $[\lambda_k(a), U\lambda_l(b)U]=0$ when $k\neq l$). Applying the modular action to $a$, we obtain
\begin{equation*}
[\Delta^{it}\lambda_k(a)\Delta^{-it}, U\lambda_k(b)U]
=[\lambda_k(\Delta_k^{it}a\Delta_k^{-it}), U\lambda_k(b)U]
=W_k^*([\Delta_k^{it}a\Delta_k^{-it}, U_kbU_k]\otimes P_\Omega) W_k,
\end{equation*}
where $\Delta_k$ is the modular operator for $\mathbb{G}_k$.
Hence, when we consider commutators of $\pi({\cal C}_l^k)$ and ${\cal C}_r^l\otimes 1$ in $\mathbb{B}(H\otimes L^2(\mathbb{R}))$, we may first assume $k=l$, since these commutators are zero when $k\neq l$.
When we consider these commutators for a fixed $k$ (with $k=l$), we actually work on $\mathbb{B}(L^2(\mathbb{G}_k)\otimes L^2(\mathbb{R}))$ with the modular action of $\mathbb{G}_k$, where we regard $L^2(\mathbb{G}_k)\simeq\mathbb{C}\Omega\oplus L^2(\mathbb{G}_k)^0\subset H$.
Hence by the assumption on $\mathbb{G}_k$, we get
\begin{equation*}
[\pi({\cal C}^k_l), {\cal C}^k_r\otimes 1]\subset \mathbb{K}(L^2(\mathbb{G}_k))\otimes \mathbb{B}(L^2(\mathbb{R})) \subset \mathbb{K}(H)\otimes \mathbb{B}(L^2(\mathbb{R})).
\end{equation*}
Thus we get the condition on commutators.
\end{proof}
Now we can give new examples of bi-exact quantum groups and von Neumann algebras with condition $\rm (AOC)^+$.
\begin{Cor}\label{free prod AOC}
Let $\mathbb{G}_i$ $(i=1,\ldots,n)$ be compact quantum groups. Assume that each $\mathbb{G}_i$ is monoidally equivalent to $\mathrm{SU}_q(2)$, $\mathrm{SO}_q(3)$, or $A_u(F)$, where $-1<q<1$, $q\neq 0$, $F$ is not a scalar multiple of a $2$ by $2$ unitary. Then the free product $\hat{\mathbb{G}}_1*\cdots *\hat{\mathbb{G}}_n$ is bi-exact.
The associated von Neumann algebra $(L^\infty(\mathbb{G}_1),h_1)\bar{*}\cdots \bar{*}(L^\infty(\mathbb{G}_n),h_n)$ and its Haar state $h_1*\cdots *h_n$ satisfy condition $\rm (AOC)^+$ with the dense $C^*$-algebra $(C_{\rm red}(\mathbb{G}_1),h_1)*\cdots *(C_{\rm red}(\mathbb{G}_n),h_n)$.
\end{Cor}
\subsection{\bf Proof of Theorem \ref{C}}
We first recall some known properties on free quantum groups and quantum automorphism groups. They were originally proved in $\cite{Ba97}\cite{Wa98_2}\cite{BDV05}$ for free quantum groups and $\cite{RV06}\cite{So08}\cite{Br12}$ for quantum automorphism groups. See $\cite[\textrm{Section 4}]{DFY13}$ for the details.
When $F\in\mathrm{GL}(2,\mathbb{C})$ is a scalar multiple of a $2$ by $2$ unitary, we have $A_u(F)=A_u(1_2)$, and the dual of $A_u(1_2)$ is a quantum subgroup of $\mathbb{Z}*\hat{A}_o(1_2)$.
For an arbitrary $F\in\mathrm{GL}(n,\mathbb{C})$, the dual of $A_o(F)$ is isomorphic to a free product of some $\hat{A}_o(F_1)$ and $\hat{A}_u(F_1)$ with $F_1\bar{F_1}\in \mathbb{R}\cdot \mathrm{id}$.
For a matrix $F$ such that $F\bar{F}=c \cdot\mathrm{id}$ for some $c\in\mathbb{R}$, the quantum group $A_o(F)$ is monoidally equivalent to $\mathrm{SU}_q(2)$, where $-\mathrm{Tr}(FF^*)/c=q+q^{-1}$, $-1\leq q\leq 1$ and $q\neq0$.
When $q=\pm1$, we have $\mathrm{dim}_q(u)=|-\mathrm{Tr}(FF^*)/c|=2$, where $u$ is the fundamental representation of $A_o(F)$, and hence $A_o(F)=\mathrm{SU}_{\pm1}(2)$.
Thus every $\hat{A}_o(F)$ and $\hat{A}_u(F)$ is a discrete quantum subgroup of a free product of amenable discrete quantum groups and duals of compact quantum groups
which are monoidally equivalent to $\mathrm{SU}_q(2)$ or $A_u(F)$, where $-1< q< 1$, $q\neq0$, $F\in\mathrm{GL}(n,\mathbb{C})$ is not a scalar multiple of a $2$ by $2$ unitary.
The quantum automorphism group $A_{\rm aut}(M,\phi)$ is co-amenable if and only if $\mathrm{dim}(M)\leq4$.
For any finite dimensional $C^*$-algebra $M$ and any state $\phi$ on $M$,
$\hat{A}_{\rm aut}(M,\phi)$ is isomorphic to a free product of duals of quantum automorphism groups with $\delta$-form. Such quantum automorphism groups are co-amenable or monoidally equivalent to $\mathrm{SO}_q(3)$, where $\delta=q+q^{-1}$ and $0< q\leq1$.
When $q=1$ and $\delta=2$, since $\mathrm{dim}(M)\leq\delta^2=4$, $A_{\rm aut}(M,\phi)$ is co-amenable.
Thus every $\hat{A}_{\rm aut}(M,\phi)$ is a free product of amenable discrete quantum groups and duals of compact quantum groups
which are monoidally equivalent to $\mathrm{SO}_q(3)$ for some $q$ with $0<q<1$.
We record the following easy lemma before the proof.
\begin{Lem}
Let $\mathbb{G}$ and $\mathbb{H}$ be compact quantum groups. Assume that $\hat{\mathbb{H}}$ is a quantum subgroup of $\hat{\mathbb{G}}$.
If $\hat{\mathbb{G}}$ is bi-exact (resp.\ $(L^\infty(\mathbb{G}),h)$ satisfies condition $\rm (AOC)^+$ with the $C^*$-algebra $C_{\rm red}(\mathbb{G})$),
then $\hat{\mathbb{H}}$ is bi-exact (resp.\ $(L^\infty(\mathbb{H}),h)$ satisfies condition $\rm (AOC)^+$ with the $C^*$-algebra $C_{\rm red}(\mathbb{H})$).
\end{Lem}
\begin{proof}
By assumption, there exists a unique Haar-state-preserving conditional expectation $E_{\mathbb{H}}$ from $L^\infty(\mathbb{G})$ onto $L^\infty(\mathbb{H})$ (and from $C_{\rm red}(\mathbb{G})$ onto $C_{\rm red}(\mathbb{H})$).
It extends to a projection $e_{\mathbb{H}}$ from $L^2(\mathbb{G})$ onto $L^2(\mathbb{H})$.
Let $\theta$ be a u.c.p.\ map as in the definition of bi-exactness (resp.\ condition $\rm (AOC)^+$). Then a u.c.p.\ map given by $a\otimes b^{\rm op}\mapsto e_{\mathbb{H}}\theta(a\otimes b^{\rm op})e_{\mathbb{H}}$ for $a, b\in C_{\rm red}(\mathbb{H})$
(resp.\ $a\otimes b^{\rm op}\mapsto (e_{\mathbb{H}}\otimes 1)\theta(a\otimes b^{\rm op})(e_{\mathbb{H}}\otimes 1)$ for $a, b\in C_{\rm red}(\mathbb{H})\rtimes_{\rm r}\mathbb{R}$) does the job.
Note that modular objects $J$ and $\Delta^{it}$ of the Haar state commute with $e_{\mathbb{H}}$.
Local reflexivity of $C_{\rm red}(\mathbb{H})$ (resp.\ $C_{\rm red}(\mathbb{H})\rtimes_{\rm r}\mathbb{R}$) follows from that of $C_{\rm red}(\mathbb{G})$ (resp.\ $C_{\rm red}(\mathbb{G})\rtimes_{\rm r}\mathbb{R}$) since it is a subalgebra.
\end{proof}
\begin{proof}[\bf Proof of Theorem \ref{C}]
Weak amenability and the $\rm W^*$CBAP are already discussed in $\cite[\textrm{Section 5}]{DFY13}$ (see also $\cite[\textrm{Theorem 4.6}]{Fr11}$). Hence we discuss only bi-exactness and condition $\rm (AOC)^+$.
Let $\mathbb{G}$ be as in the statement. Then, thanks to the observation above, $\hat{\mathbb{G}}$ is a discrete quantum subgroup of $\hat{\mathbb{G}}_1*\cdots *\hat{\mathbb{G}}_n$, where each $\mathbb{G}_i$ is
co-amenable, the dual of a bi-exact discrete group, or monoidally equivalent to $\mathrm{SU}_q(2)$, $\mathrm{SO}_q(3)$, or $A_u(F)$, where $-1<q<1$, $q\neq 0$, and $F$ is not a scalar multiple of a $2$ by $2$ unitary.
Hence by Theorems \ref{example st bi-exact} and \ref{example bi-exact}, Remarks \ref{amenable} and \ref{group}, Propositions \ref{free prod bi-exact} and \ref{free prod AOC^+}, and the last lemma, $\hat{\mathbb{G}}$ is bi-exact and $(L^\infty(\mathbb{G}),h)$ satisfies condition $\rm (AOC)^+$ with $C_{\rm red}(\mathbb{G})$.
\end{proof}
\section{\bf Proofs of main theorems}
\subsection{\bf Proof of Theorem \ref{A}}
To prove Theorem \ref{A}, we can argue in the same manner as in $\cite[\rm Theorem\ 3.1]{PV12}$, except for (i) Proposition 3.2 and (ii) Subsection 3.5 (case 2) in $\cite{PV12}$.
To see (ii) in our situation, we need a structural property of quantum group von Neumann algebras, which is weaker than that of group von Neumann algebras but sufficient to solve our problem. Since we will see a similar (and more general) phenomenon in the next subsection (Lemma \ref{case2}), we omit it.
Hence here we give a proof of (i). To do so, we first prove a proposition which is a quantum analogue of a well-known property of crossed products by discrete groups.
\begin{Pro}
Let $\mathbb{G}$ be a compact quantum group of Kac type, whose dual acts on a tracial von Neumann algebra $(B,\tau_B)$ by a trace-preserving action. Write $M:= \hat{\mathbb{G}}\ltimes B$. Let $p$ be a projection in $M$ and $A\subset pMp$ a von Neumann subalgebra. Then the following conditions are equivalent:
\begin{itemize}
\item[$\rm (i)$] $A\not\preceq_M B$;
\item[$\rm (ii)$] there exists a net $(w_n)_n$ of unitaries in $A$ such that $\lim_n \|(w_n)_{i,j}^x \|_{2,\tau_B}=0$ for any $x\in \mathrm{Irred}(\mathbb{G})$ and $i,j$, where $(w_n)_{i,j}^x$ is given by $w_n=\sum_{x\in \mathrm{Irred}(\mathbb{G}), i,j}(w_n)_{i,j}^x u_{i,j}^x$ for a fixed basis $(u_{i,j}^x)$ satisfying $h(u_{i,j}^x u_{k,l}^{y*})=\delta_{i,k}\delta_{j,l}\delta_{x,y}h(u_{i,j}^x u_{i,j}^{x*})$.
\end{itemize}
\end{Pro}
\begin{proof}
We first assume (i). Then by definition, there exists a net $(w_n)_n$ in ${\cal U}(A)$ such that $\lim_n \|E_B(b^*w_na) \|_2=0$ for any $a,b \in M$. Putting $b=1$ and $a=u_{k,l}^{y*}$ and since $E_B(w_n u_{k,l}^{y*})=\sum_{x,i,j}(w_n)_{i,j}^x h(u_{i,j}^x u_{k,l}^{y*})=(w_n)_{k,l}^y h(u_{k,l}^y u_{k,l}^{y*})$, we have
\begin{equation*}
\|(w_n)_{k,l}^y\|_2= |h(u_{k,l}^y u_{k,l}^{y*})|^{-1}\|E_B(w_n u_{k,l}^{y*})\|_2\rightarrow 0 \quad (n\rightarrow \infty).
\end{equation*}
Conversely, assume (ii). We will show $\lim_n \|E_B(b^*w_na) \|_2=0$ for any $a,b \in M$. To see this, we may assume $a=u_{\alpha,\beta}^y$ and $b=u_{k,l}^z$.
For any $c\in B$, $u_{k,l}^{z*}c$ is a linear combination of $c' u_{k',l'}^{z*}$ for some $c'\in B$ and $k',l'$. When we apply $E_B$ to $u_{k,l}^{z*} c u_{i,j}^x u_{\alpha,\beta}^y$, it does not vanish only if $\bar{z}\otimes x\otimes y$ contains the trivial representation. For fixed $y,z$, the number of such $x$ is finite (since this means $x\in z\otimes \bar{y}$). So we have
\begin{eqnarray*}
\|E_B(b^*w_na) \|_2 &=& \|\sum_{\textrm{finitely many }x, i,j} E_B(b^*(w_n)_{i,j}^x u_{i,j}^xa) \|_2\\
&\leq& \sum_{\textrm{finitely many }x, i,j} \|b^*(w_n)_{i,j}^x u_{i,j}^xa \|_2\\
&\leq& \sum_{\textrm{finitely many }x, i,j} \|b^*\|\|(w_n)_{i,j}^x \|_2 \|u_{i,j}^xa\| \rightarrow 0.
\end{eqnarray*}
\end{proof}
The following proposition is a corresponding one to (i) Proposition 3.2 in $\cite{PV12}$.
\begin{Pro}
Theorem $\ref{A}$ is true if it is true for any trivial action.
\end{Pro}
\begin{proof}
Let $\mathbb{G}$, $B$, $M$, $p$, and $A$ be as in Theorem \ref{A}.
By Fell's absorption principle, we have the following $*$-homomorphism:
\begin{equation*}
\Delta\colon M= \hat{\mathbb{G}}\ltimes B \ni bu_{i,j}^x \longmapsto \sum_{k=1}^{n_x} bu_{i,k}^x\otimes u_{k,j}^x\in M\otimes L^\infty(\mathbb{G})=:{\cal M}.
\end{equation*}
Put ${\cal A}:=\Delta(A)$, $\tilde{q}:=\Delta(q)$ and ${\cal P}:={\cal N}_{\tilde{q}{\cal M}\tilde{q}}({\cal A})''$. Then consider the following statements:
\begin{itemize}
\item[(1)] If $A$ is amenable relative to $B$ in $M$, then $\cal A$ is amenable relative to $M\otimes1$ in $\cal M$.
\item[(2)] If $\cal P$ is amenable relative to $M\otimes1$ in $\cal M$, then ${\cal N}_{qMq}(A)''$ is amenable relative to $B$ in $M$.
\item[(3)] If ${\cal A}\preceq_{\cal M}M\otimes 1$, then $A\preceq_M B$.
\end{itemize}
Once these statements are known, (1) and our assumption yield that either $\cal P$ is amenable relative to $M\otimes1$ or ${\cal A}\preceq_{\cal M}M\otimes 1$. Then (2) and (3) imply our desired conclusion.
To show (1) and (2), we only need the property $\Delta\circ E_B=E_{M\otimes 1}\circ \Delta=E_{B\otimes1}\circ \Delta$. So we can use the same strategy as in the group case. To show (3) we need the previous proposition, and once we accept it, (3) is easily verified.
\end{proof}
\subsection{\bf Proof of Theorem \ref{B}}
We use the same symbol as in the statement of Theorem \ref{B}. We moreover use the following notation:
\begin{eqnarray*}
&&P:= {\cal N}_{qMq}(A)'', \
C_h:=C_h(L^\infty(\mathbb{G})),\
\tau:=\mathrm{Tr}_M(q\cdot q),\\
&&L^2(M):=L^2(M,\mathrm{Tr}_M)=L^2(B,\tau_B)\otimes L^2(C_h,\mathrm{Tr}),\\
&&\pi\colon M=B\otimes C_h \ni b\otimes x \mapsto (b\otimes_A q \otimes x) \in \mathbb{B}((L^2(M)q \otimes_A L^2(P))\otimes L^2(C_h,\mathrm{Tr})),\\
&&\theta\colon P^{\rm op} \ni y^{\rm op}\mapsto (q\otimes_A y^{\rm op}) \otimes 1 \in \mathbb{B}((L^2(M)q \otimes_A L^2(P))\otimes L^2(C_h,\mathrm{Tr})),\\
&&N:=W^*\{\pi(B\otimes 1), \theta(P^{\rm op})\},\ {\cal N}:=N\otimes C_h.
\end{eqnarray*}
We first recall the following theorem which is a generalization of $\cite[\textrm{Theorem 3.5}]{OP07}$ and $\cite[\textrm{Theorem B}]{Oz10}$.
Here we work with a semifinite von Neumann algebra $\cal N$, but the theorem remains true.
To verify this, we need the following observation.
If a semifinite von Neumann algebra has the $\rm W^*$CBAP, $(\phi_i)_i$ is an approximate identity, and $(p_j)_j$ is a net of trace-finite projections converging to 1 strongly, then a subnet of $(\psi_j\circ\phi_i)$, where $\psi_j$ is the compression by $p_j$, is an approximate identity.
Hence we can find an approximate identity whose images are contained in a trace-finite part of the semifinite von Neumann algebra.
As finite rank maps relative to $B$ $\cite[\textrm{Definition 5.3}]{PV11}$, one can take linear spans of
\begin{equation*}
\psi_{y,z,r,t}\colon qMq\rightarrow qMq; x\mapsto y(\mathrm{id}_B\otimes \mathrm{Tr})(zxr)t,
\end{equation*}
where $y,z,r,t\in M$ satisfying $y=qy$, $t=tq$, $z=(1\otimes p)z$, and $r=r(1\otimes p')$ for some Tr-finite projections $p,p'\in C_h$. Note that for fixed $p\in C_h$ with $\mathrm{Tr}(p)<\infty$, $\mathrm{Tr}(p)^{-1}(\mathrm{id}_B\otimes \mathrm{Tr})$ is a conditional expectation from $B\otimes pC_hp$ onto $B$.
\begin{Thm}[{\cite[\textrm{Theorem 5.1}]{PV12}}]
There exists a net $(\omega_i)_i$ of normal states on $\pi(q){\cal N}\pi(q)$ such that
\begin{itemize}
\item[$\rm (i)$] $\omega_i(\pi(x))\rightarrow \tau(x)$ for any $x\in qMq$;
\item[$\rm (ii)$] $\omega_i(\pi(a)\theta(\bar{a}))\rightarrow 1$ for any $a\in {\cal U}(A)$;
\item[$\rm (iii)$] $\|\omega_i\circ\mathrm{Ad}(\pi(u)\theta(\bar{u}))-\omega_i\| \rightarrow 0$ for any $u \in {\cal N}_{qMq}(A)$.
\end{itemize}
Here $\bar a$ means $(a^{\rm op})^*$.
\end{Thm}
Let $H$ be a standard Hilbert space of $N$ with a canonical conjugation $J_N$.
From now on, as a standard representation of $C_h$, we use $L^2(C_h):=L^2(C_h,\hat{h})=L^2(\mathbb{G})\otimes L^2(\mathbb{R})$, where $\hat{h}$ is the dual weight of $h$.
Then since $H\otimes L^2(C_h)$ is standard for $\cal N$ with a canonical conjugation ${\cal J}:=J_N\otimes J_{C_h}$, there exists a unique net $(\xi_i)_i$ of unit vectors in the positive cone of $\pi(q){\cal J}\pi(q){\cal J}(H\otimes L^2(C_h))$ such that $\omega_i(\pi(q)x\pi(q))=\langle x \xi_i | \xi_i \rangle$ for $x\in {\cal N}$. Then conditions on $\omega_i$ are translated to the following conditions:
\begin{itemize}
\item[$\rm (i)'$] $\langle \pi(x) \xi_i | \xi_i \rangle\rightarrow \tau(qxq)$ for any $x\in M$;
\item[$\rm (ii)'$] $\|\pi(a)\theta(\bar a)\xi_i -\xi_i\|\rightarrow 0$ for any $a\in {\cal U}(A)$;
\item[$\rm (iii)'$] $\|\mathrm{Ad}(\pi(u)\theta(\bar{u}))\xi_i -\xi_i\| \rightarrow 0$ for any $u \in {\cal N}_{qMq}(A)$.
\end{itemize}
Here $\mathrm{Ad}(x)\xi_i:=x{\cal J}x{\cal J}\xi_i$. To see $\rm (iii)'$, we need a generalized Powers--St{\o}rmer inequality (e.g.\ $\cite[\textrm{Theorem IX.1.2.(iii)}]{Ta2}$).
The following lemma is a very similar statement to that in $\cite[\textrm{Subsection 3.5 (case 2)}]{PV12}$. Since we treat quantum groups and moreover our object $C_h$ is semifinite and twisted (not a tensor product), we need more careful treatment. So here we give a complete proof of the lemma.
\begin{Lem}\label{case2}
Let $(\xi_i)_i$ be as above. Assume $A\not\preceq_{M} B\otimes L\mathbb{R}$.
Then we have
\begin{equation*}
\limsup_i\| (1_{\mathbb{B}(H)}\otimes x \otimes 1_{L\mathbb{R}})\xi_i\|=0 \quad \textrm{for any } x \in \mathbb{K}(L^2(\mathbb{G})).
\end{equation*}
\end{Lem}
\begin{proof}
Suppose by contradiction that there exist $\delta>0$ and a finite subset ${\cal F}\subset \mathrm{Irred}(\mathbb{G})$ such that
\begin{equation*}
\limsup_i\| (1_{\mathbb{B}(H)}\otimes P_{\cal F} \otimes 1_{L\mathbb{R}})\xi_i\| >\delta,
\end{equation*}
where $P_{\cal F}$ is the orthogonal projection onto $\sum_{x\in {\cal F}}H_x \otimes H_{\bar x}$.
Replacing with a subnet of $(\xi_i)_i$, we may assume that
\begin{equation*}
\liminf_i\| (1_{\mathbb{B}(H)}\otimes P_{\cal F} \otimes 1_{L\mathbb{R}})\xi_i\| >\delta.
\end{equation*}
Our goal is to find a finite set ${\cal F}_1\subset \mathrm{Irred}(\mathbb{G})$ and a subnet of $(\xi_i)_i$ which satisfy
\begin{equation*}
\liminf_i\| (1_{\mathbb{B}(H)}\otimes P_{{\cal F}_1} \otimes 1_{L\mathbb{R}})\xi_i\| >2^{1/2}\delta.
\end{equation*}
Then repeating this argument, we get a contradiction since
\begin{equation*}
1\geq\liminf_i\| (1_{\mathbb{B}(H)}\otimes P_{{\cal F}_k} \otimes 1_{L\mathbb{R}})\xi_i\| >2^{k/2}\delta \quad (k\in \mathbb{N}).
\end{equation*}
\noindent
\textit{{\bf claim 1.} There exist a $\mathrm{Tr}$-finite projection $r\in L\mathbb{R}$ and a subnet of $(\xi_i)_i$ such that }
\begin{equation*}
\liminf_i\| (1_{\mathbb{B}(H)}\otimes P_{\cal F} \otimes r)\xi_i\| >\delta.
\end{equation*}
\begin{proof}[\bf proof of claim 1]
Define a state on $\mathbb{B}(H\otimes L^2(C_h))$ by $\Omega(X):=\mathrm{Lim}_i\langle X\xi_i | \xi_i\rangle$. Then the condition $\rm (i)'$ of $(\xi_i)_i$ says that $\Omega(\pi(x))=\tau(qxq)$ for $x\in M$. Let $r_j$ be a net of $\mathrm{Tr}$-finite projections in $L\mathbb{R}$ which converges to 1 strongly. Then we have
\begin{eqnarray*}
|\Omega((1\otimes P_{\cal F}\otimes 1) \pi(1-r_j))|^2&\leq&\Omega((1\otimes P_{\cal F}\otimes 1))\Omega(\pi(1-r_j)) \\
&=&\Omega((1\otimes P_{\cal F}\otimes 1))\tau(q(1-r_j)q) \rightarrow 0 \quad (j\rightarrow \infty).
\end{eqnarray*}
This implies that
\begin{eqnarray*}
0&\leq&
\mathrm{Lim}_i\| (1\otimes P_{\cal F} \otimes 1)\xi_i\| - \mathrm{Lim}_i\| (1\otimes P_{\cal F} \otimes r_j)\xi_i\| \\
&=&\Omega(1\otimes P_{\cal F} \otimes 1) - \Omega( (1\otimes P_{\cal F} \otimes 1)\pi(r_j))\\
&=&\Omega((1\otimes P_{\cal F}\otimes 1) \pi(1-r_j)) \rightarrow 0 \quad (j\rightarrow \infty)
\end{eqnarray*}
Hence we can find a $\mathrm{Tr}$-finite projection $r\in L\mathbb{R}$ such that $\mathrm{Lim}_i\| (1\otimes P_{\cal F} \otimes r)\xi_i\|>\delta$. Finally by taking a subnet of $(\xi_i)_i$, we have $\liminf_i\| (1\otimes P_{\cal F} \otimes r)\xi_i\|>\delta$.
\end{proof}
We fix a basis $\{u_{i,j}^x \}_{x,i,j}$ $(=:X)$ of the dense Hopf $*$-algebra of $C(\mathbb{G})$ and use the notation
\begin{eqnarray*}
&&\mathrm{Irred}(\mathbb{G})_x:=\{ u_{i,j}^x\in X \mid i,j=1,\ldots,n_x\} \quad (x\in \mathrm{Irred}(\mathbb{G})),\\
&&\mathrm{Irred}(\mathbb{G})_{\cal E}:=\cup_{x\in{}\cal E}\mathrm{Irred}(\mathbb{G})_x \quad ({\cal E} \subset \mathrm{Irred}(\mathbb{G})).
\end{eqnarray*}
We may assume that each $\hat{u}_{i,j}^x\in L^2(\mathbb{G})$ is an eigenvector of the modular operator of $h$, namely, for any $u_{i,j}^x$ there exists $\lambda>0$ such that $\Delta_h^{it}\hat{u}_{i,j}^x=\lambda^{it}\hat{u}_{i,j}^x$ $(t\in \mathbb{R})$.
In this case we have a formula $\sigma_t^h(u_{i,j}^x)=\lambda^{it}u_{i,j}^x$ and hence $\sigma_t^h(u_{i,j}^{x*} a u_{i,j}^{x})=u_{i,j}^{x*} \sigma_t^h(a) u_{i,j}^{x}$ for any $a\in L^\infty(\mathbb{G})$.
Let $P_e$ be the orthogonal projection from $L^2(\mathbb{G})$ onto $\mathbb{C}\hat 1$.
For any $u\in X$, consider a compression map
\begin{equation*}
\Phi_u(x):=h(u^*u)^{-1}(1\otimes P_eu^* \otimes 1)\pi(x)(1\otimes uP_e \otimes 1)
\end{equation*}
for $x\in M$, which gives a normal map from $M$ into $B\otimes \mathbb{C}P_e\otimes\mathbb{B}(L^2(\mathbb{R}))\simeq B\otimes \mathbb{B}(L^2(\mathbb{R}))$.
\noindent
\textit{{\bf claim 2.}
For any $u\in X$, we have}
\begin{equation*}
\Phi_{u}(b\otimes af)= b\otimes h(u^*u)^{-1}h(u^{*}au)f \quad (b\in B, a\in L^\infty(\mathbb{G}), f\in L\mathbb{R}).
\end{equation*}
\textit{In particular, $\Phi_u$ is a normal conditional expectation from $M$ onto $B\otimes L\mathbb{R}$.}
\begin{proof}[\bf proof of claim 2]
Assume $B=\mathbb{C}$ for simplicity. Recall that any element $a\in L^\infty(\mathbb{G})$ in the continuous core is of the form $a=\int\sigma_{-t}^h(a)\otimes e_t \cdot dt$, which means $(a\xi)(s)=\sigma_{-s}^h(a)\xi(s)$ for $\xi \in L^2(\mathbb{G})\otimes L^2(\mathbb{R})$ and $s\in \mathbb{R}$.
Hence a simple calculation shows that for any $a\in L^\infty(\mathbb{G})$,
\begin{eqnarray*}
h(u^*u)\Phi_u(a)&=&(P_eu^* \otimes 1)\int\sigma_{-t}^h(a)\otimes e_t \cdot dt (uP_e \otimes 1)\\
&=&\int P_eu^*\sigma_{-t}^h(a)uP_e\otimes e_t \cdot dt\\
&=&\int h(u^*au)P_e\otimes e_t \cdot dt= h(u^*au)P_e\otimes 1,
\end{eqnarray*}
where we used $P_eu^*\sigma_{-t}^h(a)uP_e=h(u^*\sigma_{-t}^h(a)u)P_e=h(\sigma_{-t}^h(u^*au))P_e=h(u^*au)P_e$. Thus $\Phi_u$ satisfies our desired condition.
\end{proof}
\noindent
\textit{{\bf claim 3.} For any $u\in X$ and $x\in M$, we have}
\begin{equation*}
\limsup_i\|\pi(x)(1\otimes P_u\otimes r)\xi_i\|\leq h(u^*u)^{-1/2} \|x(1\otimes r)\|_{2,\mathrm{Tr}_M\circ\Phi_u},
\end{equation*}
\textit{where $P_u$ is the orthogonal projection from $L^2(\mathbb{G})$ onto $\mathbb{C}\hat{u}$.}
\begin{proof}[\bf proof of claim 3]
This follows from a direct calculation. Since $P_u=h(u^*u)^{-1} uP_e u^*$, we have
\begin{eqnarray*}
&&h(u^*u)\|\pi(x)(1\otimes P_u\otimes r)\xi_i\|^2\\
&=&h(u^*u)\langle (1\otimes P_u \otimes 1)\pi(\tilde{x}^*\tilde{x}) (1\otimes P_u \otimes 1)\xi_i | \xi_i\rangle \qquad (\tilde x := x(1\otimes r))\\
&=&h(u^*u)^{-1}\langle (1\otimes P_eu^* \otimes 1)\pi(\tilde{x}^*\tilde{x}) (1\otimes uP_e \otimes 1)\tilde{\xi}_i | \tilde{\xi}_i\rangle \qquad (\tilde{\xi}_i :=(1\otimes u^* \otimes 1) \xi_i)\\
&=&\langle (1\otimes P_e \otimes 1)\pi(\Phi_u(\tilde{x}^*\tilde{x})) (1\otimes P_e \otimes 1)\tilde{\xi}_i | \tilde{\xi}_i\rangle\\
&=&\| \pi(y) (1\otimes P_eu^* \otimes 1)\xi_i \|^2 \qquad (y:=\Phi_u(\tilde{x}^*\tilde{x})^{1/2})\\
&\leq& \| \pi(y) \xi_i \|^2 \qquad (\textrm{since $\pi(y)$ and $(1\otimes P_eu^* \otimes 1)$ commute})\\
&\rightarrow& \mathrm{Tr}_M(qy^*yq) \leq \mathrm{Tr}_M(y^*y) = \mathrm{Tr}_M(\Phi_u(\tilde{x}^*\tilde{x}))= \mathrm{Tr}_M\circ\Phi_u((1\otimes r)x^*x(1\otimes r)).
\end{eqnarray*}
\end{proof}
\noindent
\textit{{\bf claim 4.} For any $\epsilon>0$ and any finite subset ${\cal E}\subset \mathrm{Irred}(\mathbb{G})$, there exist $a\in {\cal U}(A)$ and $v\in M$ such that
\begin{itemize}
\item $v$ is a finite sum of elements of the form $b\otimes u f$, where $b\in B$, $f\in L\mathbb{R}$ and $u\in X \setminus \mathrm{Irred}(\mathbb{G})_{\cal E}$;
\item $h(u^*u)^{-1/2}\|(a-v)(1\otimes r)\|_{2,\mathrm{Tr}_M\circ\Phi_u}<\epsilon$ for any $u\in\mathrm{Irred}(\mathbb{G})_{\cal F}$.
\end{itemize}
}
\begin{proof}[\bf proof of claim 4]
Since $A\not\preceq_{{}_e M_e}B\otimes L\mathbb{R}r$, where $e:=q\vee r$, for any $\epsilon>0$ and any finite subset ${\cal E}\subset \mathrm{Irred}(\mathbb{G})$, there exists $a\in {\cal U}(A)$ such that $\|E_{B\otimes L\mathbb{R}}((1\otimes r)(1\otimes u^*)a(1\otimes r))\|_{2,\mathrm{Tr}_M}<\epsilon$ for any $u\in \mathrm{Irred}(\mathbb{G})_{\cal E}$.
Since the linear span of $\{b\otimes u\lambda_t| b\in B, t\in\mathbb{R},u\in X\}$ is strongly dense in $M$, we can find a bounded net $(z_j)$ of elements in this linear span which converges to $a$ in the strong topology.
Hence we can find $z$ in the linear span such that $\|(a-z)(1\otimes r)\|_{2,\mathrm{Tr}_M}$ and $\|(a-z)(1\otimes r)\|_{2,\mathrm{Tr}_M\circ\Phi_u}$ $(u\in\mathrm{Irred}(\mathbb{G})_{\cal F})$ are very small.
In this case we may assume that $\|E_{B\otimes L\mathbb{R}}((1\otimes r)(1\otimes u^*)z(1\otimes r))\|_{2,\mathrm{Tr}_M}<\epsilon$ for any $u\in \mathrm{Irred}(\mathbb{G})_{\cal E}$.
Write $z=\sum_{\rm finite} b_{i,j}^x\otimes u_{i,j}^x f_{i,j}^x$. Then for any $y\in{\cal E}$ the above inequality implies
\begin{eqnarray*}
\epsilon&>&\|E_{B\otimes L\mathbb{R}}((1\otimes r)(1\otimes u^{y*}_{k,l})z(1\otimes r))\|_{2,\mathrm{Tr}_M}\\
&=&\|\sum_{\rm finite} (1\otimes r)(b_{i,j}^x\otimes h(u^{y*}_{k,l}u_{i,j}^x)f_{i,j}^x)(1\otimes r)\|_{2,\mathrm{Tr}_M}\\
&=&\sum_{\rm finite}\delta_{x,y}\delta_{i,k}\delta_{j,l}h(u^{y*}_{k,l}u_{i,j}^x)\|b_{i,j}^x\otimes f_{i,j}^xr\|_{2,\mathrm{Tr}_M}
=\|u^{y}_{k,l}\|^2_{2,h}\|b_{k,l}^y\otimes f_{k,l}^yr\|_{2,\mathrm{Tr}_M}.
\end{eqnarray*}
Hence if we write $z=\sum_{x\in {\cal E}}b_{i,j}^x\otimes u_{i,j}^x f_{i,j}^x + \sum_{x\not\in {\cal E}}b_{i,j}^x\otimes u_{i,j}^x f_{i,j}^x $, and say $v$ for the second sum, then we have
\begin{eqnarray*}
\|(z-v)(1\otimes r)\|_{2,\mathrm{Tr}_M\circ\Phi_u}
&\leq&\sum_{x\in{\cal E},i,j}\| b_{i,j}^x\otimes u_{i,j}^x f_{i,j}^xr \|_{2,\mathrm{Tr}_M\circ\Phi_u} \\
&\leq&\sum_{x\in{\cal E},i,j}\|u_{i,j}^x\| \| b_{i,j}^x\otimes f_{i,j}^xr \|_{2,\mathrm{Tr}_M\circ\Phi_u} \\
&\leq&\sum_{x\in{\cal E},i,j}\| b_{i,j}^x\otimes f_{i,j}^xr \|_{2,\mathrm{Tr}_M} \\
&<&C({\cal E})\cdot\epsilon,
\end{eqnarray*}
for any $u\in\mathrm{Irred}(\mathrm{G})_{\cal F}$, where $C({\cal E})>0$ is a constant which depends only on ${\cal E}$. Thus this $v$ is our desired one.
\end{proof}
Now we return to the proof. Since $1\otimes P_{\cal F}\otimes r$ is a projection, we have
\begin{eqnarray*}
\limsup_i\|\xi_i-(1\otimes P_{\cal F}\otimes r)\xi_i\|^2
&=&\limsup_i(\|\xi_i\|^2-\|(1\otimes P_{\cal F}\otimes r)\xi_i\|^2)\\
&=&1-\liminf_i\|(1\otimes P_{\cal F}\otimes r)\xi_i\|^2<1-\delta^2.
\end{eqnarray*}
So there exists $\epsilon>0$ such that $\limsup_i\|\xi_i-(1\otimes P_{\cal F}\otimes r)\xi_i\|<(1-\delta^2)^{1/2}-\epsilon$. We apply claim 4 to $(\sum_{x\in{\cal F}}\mathrm{dim}(H_x)^2)^{-1}\epsilon$ and ${\cal E}:={\cal F}\bar{\cal F}$ $(=\{z| z\in x\otimes \bar{y} \textrm{ for some }x,y\in{\cal F}\})$, and get $a$ and $v$. Then we have
\begin{eqnarray*}
&&\limsup_i\|\xi_i-\theta(\bar{a})\pi(v)(1\otimes P_{\cal F}\otimes r)\xi_i\|\\
&\leq&\limsup_i\|\xi_i-\theta(\bar{a})\pi(a)(1\otimes P_{\cal F}\otimes r)\xi_i\|+\limsup_i\|\theta(\bar{a})\pi(a-v)(1\otimes P_{\cal F}\otimes r)\xi_i\|\\
&\leq&\limsup_i\|\theta(a^{\rm op})\pi(a^*)\xi_i-(1\otimes P_{\cal F}\otimes r)\xi_i\| + \sum_{u\in \mathrm{Irred}(\mathbb{G})_{\cal F}} h(u^*u)^{-1/2}\|(a-v)(1\otimes r)\|_{2, \mathrm{Tr}_M\circ\Phi_u} \\
&<&\limsup_i\|\xi_i-(1\otimes P_{\cal F}\otimes r)\xi_i\| + \epsilon < (1-\delta^2)^{1/2},
\end{eqnarray*}
where we used condition $\rm (ii)'$ of $(\xi_i)$ and claim 3.
By the choice of $v$, it is of the form $\sum_{x\in{\cal S},i,j}b_{i,j}^x\otimes u_{i,j}^x f_{i,j}^x$ for some finite set ${\cal S}\subset \mathrm{Irred}(\mathbb{G})$ with ${\cal S}\cap{\cal F}\bar{\cal F}=\emptyset$. Note that this means ${\cal SF}\cap{\cal F}=\emptyset$.
The vector $\theta(\bar{a})\pi(v)(1\otimes P_{\cal F}\otimes r)\xi_i$ is contained in the range of $1\otimes P_{\cal SF}\otimes 1$. This is because the modular operator $\Delta_h^{it}$ commutes with $P_{\cal SF}$ and $P_{\cal F}$ and hence
\begin{equation*}
(1\otimes P_{{\cal SF}}\otimes 1)\pi(v)(1\otimes P_{\cal F}\otimes 1)=\pi(v)(1\otimes P_{\cal F}\otimes 1).
\end{equation*}
Then we have
\begin{eqnarray*}
1-\delta^2
&>&\limsup_i\|\xi_i-\theta(\bar{a})\pi(v)(1\otimes P_{\cal F}\otimes r)\xi_i\|^2 \\
&\geq&\limsup_i\|(1\otimes P_{\cal SF}\otimes 1)^{\perp}\xi_i\|^2\\
&=&1-\liminf_i\|(1\otimes P_{\cal SF}\otimes 1)\xi_i\|^2.
\end{eqnarray*}
This means $\delta<\liminf_i\|(1\otimes P_{\cal SF}\otimes 1)\xi_i\|$.
Finally put ${\cal F}_1:={\cal F}\cup{\cal SF}$. Then since ${\cal SF}\cap{\cal F}=\emptyset$, we get
\begin{equation*}
\liminf_i\|(1\otimes P_{{\cal F}_1}\otimes 1)\xi_i\|^2
\geq\liminf_i\|(1\otimes P_{\cal F}\otimes 1)\xi_i\|^2 + \liminf_i\|(1\otimes P_{\cal SF}\otimes 1)\xi_i\|^2>2\delta^2.
\end{equation*}
Thus ${\cal F}_1$ is our desired one and we can end the proof.
\end{proof}
Now we start the proof. We follow that of $\cite[\textrm{Subsection 3.4 (case 1)}]{PV12}$. Since this part of the proof does not rely on the structure of group von Neumann algebras, we need only slight modifications, which were already observed in $\cite{Is12_2}$. Hence here we give a rough sketch of the proof.
We use the following notation which is used in $\cite{PV12}$.
\begin{eqnarray*}
&&{\cal D}:=M\odot M^{\rm op} \odot P^{\rm op} \odot P \supset (C_{\rm red}(\mathbb{G})\rtimes_{\rm r}\mathbb{R})\odot (C_{\rm red}(\mathbb{G})\rtimes_{\rm r}\mathbb{R})^{\rm op} \odot P^{\rm op} \odot P =:{\cal D}_0,\\
&&\Psi\colon {\cal D}\rightarrow \mathbb{B}(H\otimes L^2(C_h)\otimes L^2(C_h)), \quad
\Theta \colon {\cal D}\rightarrow \mathbb{B}(H\otimes L^2(C_h)),\\
&&\Psi((b_1\otimes x_1) \otimes (b_2\otimes x_2)^{\rm op}\otimes y^{\rm op}\otimes z)=b_1J_Nb_2^*J_Ny^{\rm op}J\bar{z}J\otimes x_1\otimes x_2^{\rm op}, \\
&&\Theta((b_1\otimes x_1) \otimes (b_2\otimes x_2)^{\rm op}\otimes y^{\rm op}\otimes z)=b_1J_Nb_2^*J_Ny^{\rm op}J\bar{z}J\otimes x_1x_2^{\rm op}\\
&&\phantom{\Theta((b_1\otimes x_1) \otimes (b_2\otimes x_2)^{\rm op}\otimes y^{\rm op}\otimes z)}=\pi(b_1\otimes x_1){\cal J}\pi(b_2\otimes x_2)^*{\cal J}\theta(y^{\rm op}){\cal J}\theta(\bar{z}){\cal J}.
\end{eqnarray*}
\begin{proof}[\bf Proof of Theorem \ref{B}]
Define a state on $\mathbb{B}(H\otimes L^2(C_h))$ by $\Omega_1(X):=\mathrm{Lim}_i\langle X\xi_i|\xi_i\rangle$. Our condition $\rm (AOC)^+$, together with Lemma \ref{case2}, implies that $|\Omega_1(\Theta(X))|\leq \|\Psi(X)\|$ for any $X\in{\cal D}_0$.
We extend this inequality to $\cal D$ by using an approximate identity of $C_h$ which takes values in $C_{\rm red}(\mathbb{G})\rtimes_{\rm r}\mathbb{R}$ (such a net exists, see $\cite[\textrm{Lemma 2.3.1}]{Is12_2}$).
Then a new functional $\Omega_2$ on $C^*(\Psi({\cal D}))$ is defined by $\Omega_2(\Psi(X)):=\Omega_1(\Theta(X))$ $(X\in{\cal D})$.
Its Hahn--Banach extension on $\mathbb{B}(H\otimes L^2(C_h)\otimes L^2(C_h))$ restricts to a $P$-central state on $q(B\otimes \mathbb{B}(C_h))q\otimes \mathbb{C}$. More precisely we have a $P$-central state on $\mathbb{B}(qL^2(M))\cap(B^{\rm op})'(=q(B\otimes \mathbb{B}(C_h))q)$ which restricts to the trace $\tau$ on $qMq$.
\end{proof}
\subsection{\bf Proofs of corollaries}
For the Kac type case, the same proofs as in the group case work. So we give only the proof of Corollary \ref{B}.
Let $\mathbb{G}$ be a compact quantum group in the statement of the corollary, $h$ the Haar state of $\mathbb{G}$, and $(B,\tau_B)$ be a tracial von Neumann algebra.
Write $M:=B\otimes L^\infty(\mathbb{G})$. Let $(A,\tau_A)$ be an amenable tracial von Neumann subalgebra in $M$ with expectation $E_A$.
Then since the modular actions of $\tau_A$ and $\tau_A\circ E_A$ satisfy the relation $\sigma^{\tau_A}=\sigma^{\tau_A\circ E_A}|_A$ (see the proof of $\cite[\rm{Theorem\ IX.4.2}]{Ta2}$),
we have an inclusion $A\otimes L\mathbb{R}=C_{\tau_A}(A)\subset C_{\tau_A\circ E_A}(M)$ with a faithful normal conditional expectation $\tilde{E}_A$ given by $\tilde{E}_A(x\lambda_t)=E_A(x)\lambda_t$ for $x\in M$ and $t\in \mathbb{R}$.
Since continuous cores are canonically isomorphic with each other, there is a canonical isomorphism from $C_{\tau_A\circ E_A}(M)$ onto $C_{\tau_B\otimes h}(M)=B\otimes C_h(L^\infty(\mathbb{G}))$ $(:=\tilde{M})$. We denote by $\tilde{A}$ the image of $C_{\tau_A}(A)$ in $\tilde{M}$.
Put $\tilde{P}:={\cal N}_{\tilde{M}}(\tilde{A})''$.
Note that there is a faithful normal conditional expectation $E_{\tilde{P}}$ from $\tilde{M}$ onto $\tilde{P}$, since $\tilde{A}$ is an image of an expectation (use $\cite[\rm{Theorem\ IX.4.2}]{Ta2}$).
\begin{Lem}
Under the setting above, for any projection $z\in{\cal Z}(A)\cap (1_B\otimes L^\infty(\mathbb{G}))$ (possibly $z=1$) we have either
\begin{itemize}
\item[$\rm (i)$] $Az\preceq_M B$;
\item[$\rm (ii)$] there exists a conditional expectation from $z(B\otimes \mathbb{B}(L^2(C_h)))z$ onto $z\tilde{P}z$, where we regard $z\in 1_B\otimes C_h$ by the canonical inclusion $L^\infty(\mathbb{G})\subset C_h$.
\end{itemize}
\end{Lem}
\begin{proof}
Suppose $Az\not\preceq_M B$. Then by the same manner as in $\cite[\textrm{Proposition 2.10}]{BHR12}$, we have $\tilde{A}zq\not\preceq_{\tilde{M}}B\otimes L\mathbb{R}r$ for any projections $q\in {\cal Z}(\tilde{A})$ and $r\in L\mathbb{R}$ with $(\tau_B\otimes\mathrm{Tr})(q)<\infty$ and $\mathrm{Tr}(r)<\infty$.
By the comment below Theorem \ref{popa embed2}, this means $\tilde{A}zq\not\preceq_{\tilde{M}}B\otimes L\mathbb{R}$ for any projection $q\in {\cal Z}(\tilde{A})$ with $(\tau_B\otimes\mathrm{Tr})(q)<\infty$.
We fix such $q$ and assume $z=1$ for simplicity.
Apply Theorem \ref{B} to them and get that $L^2(q\tilde{M})$ is left ${\cal N}_{q\tilde{M}q}(q\tilde{A}q)''$-amenable as a $q\tilde{M}q$-$B$-bimodule.
Note that ${\cal N}_{q\tilde{M}q}(q\tilde{A}q)''=q\tilde{P}q$ (e.g.\ $\cite[\rm Lemma\ 2.2]{FSW10}$).
By $\cite[\textrm{Proposition 2.4}]{PV12}$ this means that ${}_{q\tilde{M}q}L^2(q\tilde{M}q)_{q\tilde{P}q} \prec {}_{q\tilde{M}q}L^2(q\tilde{M})\otimes_B L^2(\tilde{M}q)_{q\tilde{P}q}$.
Let $\nu_q$ (or $\nu$ for $q=1$) be the following canonical multiplication $*$-homomorphism:
\begin{alignat*}{5}
& \hspace{5em}\mathbb{B}(\tilde{q}K) && \hspace{1em}\mathbb{B}(qq^{\rm op}L^2(\tilde{M}))\\
& \hspace{6em}\cup && \hspace{3.5em}\cup \\
& \textrm{$*$-alg}\{q\tilde{M}q\otimes_B q^{\rm op}, q\otimes_B (q\tilde{P}q)^{\rm op}\} &\quad\xrightarrow{\nu_q}\quad& \textrm{$*$-alg}\{q\tilde{M}q, (q\tilde{P}q)^{\rm op}\}&
\end{alignat*}
where $K:=L^2(\tilde{M})\otimes_B L^2(\tilde{M})$ and $\tilde{q}:=(q\otimes_B1)(1\otimes_B q^{\rm op})\in\mathbb{B}(K)$. The weak containment above means that $\nu_q$ is bounded.
Let $(q_i)_i$ be a net of $(\tau_B\otimes\mathrm{Tr})$-finite projections in ${\cal Z}(\tilde{A})$ which converges to 1 strongly. Then since each $q_i$ satisfies the weak containment above, we have for any $x\in \textrm{$*$-alg}\{\tilde{M}\otimes_B1, 1\otimes_B (\tilde{P})^{\rm op}\}\subset\mathbb{B}(K)$,
\begin{equation*}
\|x\|_{\mathbb{B}(K)}=\sup_i\|\tilde{q_i}x\tilde{q_i}\|_{\mathbb{B}(\tilde{q_i}K)}\geq
\sup_i\|\nu_{q}(\tilde{q_i}x\tilde{q_i})\|_{\mathbb{B}(q_iq_i^{\rm op}L^2(\tilde{M}))}=\|\nu(x)\|_{\mathbb{B}(L^2(\tilde{M}))}.
\end{equation*}
Hence $\nu$ is bounded and we have
${}_{\tilde{M}}L^2(\tilde{M})_{\tilde{P}} \prec {}_{\tilde{M}}L^2(\tilde{M})\otimes_B L^2(\tilde{M})_{\tilde{P}}$.
(For a general $z$, we have ${}_{z\tilde{M}z}L^2(z\tilde{M}z)_{z\tilde{P}z} \prec {}_{z\tilde{M}z}L^2(z\tilde{M})\otimes_B L^2(\tilde{M}z)_{z\tilde{P}z}$.)
By Arveson's extension theorem, we extend $\nu$ on $C^*\{\tilde{M}\otimes_B1, 1\otimes_B (B'\cap \mathbb{B}(L^2(\tilde{M})))\}$ as a u.c.p.\ map into $\mathbb{B}(L^2(\tilde{M}))$ and denote it by $\Phi$.
Then since $\tilde{M}\otimes 1$ is contained in the multiplicative domain of $\Phi$, the image of $1\otimes_B (B'\cap \mathbb{B}(L^2(\tilde{M})))$ is contained in $\tilde{M}'=\tilde{M}^{\rm op}$.
Consider the following composition map:
\begin{equation*}
B^{\rm op}\otimes \mathbb{B}(L^2(C_h))= (B'\cap\mathbb{B}(L^2(\tilde{M})))\simeq1\otimes_B (B'\cap\mathbb{B}(L^2(\tilde{M})))\xrightarrow{\Phi}\tilde{M}^{\rm op}\xrightarrow{E_{\tilde{P}}^{\rm op}} \tilde{P}^{\rm op}.
\end{equation*}
(For a general $z$, we have $z^{\rm op}(B^{\rm op}\otimes \mathbb{B}(L^2(C_h)))z^{\rm op}\rightarrow z(z\tilde{M}z)^{\rm op}\simeq (z\tilde{M}z)^{\rm op}\rightarrow (z\tilde{P}z)^{\rm op}.$)
Finally composing with $\mathrm{Ad}(J_{\tilde{M}})=\mathrm{Ad}(J_B\otimes J_{C_h})$, we get a conditional expectation from $B\otimes \mathbb{B}(L^2(C_h))$ onto $\tilde{P}$.
\end{proof}
Now assume that $L^\infty(\mathbb{G})$ is non amenable and $A\subset M$ is a Cartan subalgebra.
Let $w$ be a central projection in $L^\infty(\mathbb{G})$ such that $L^\infty(\mathbb{G})w$ has no amenable direct summand. Write $z=1_B\otimes w$. Then $z\in A$ since $A$ is maximal abelian.
Since $\tilde{A}$ is Cartan in $\tilde{M}$ (e.g.\ $\cite[\textrm{Subsection 2.3}]{HR10}$), we have $\tilde{M}=\tilde{P}$.
The lemma above says that we have either (i) $Az\preceq_MB$ or (ii) there exists a conditional expectation from $B\otimes \mathbb{B}(wL^2(C_h))$ onto $z\tilde{M}z=B\otimes C_hw$.
If (ii) holds, then composing with $\tau_B\otimes \mathrm{id}_{C_h}$, we get a conditional expectation from $\mathbb{B}(wL^2(C_h))$ onto $C_hw$. This is a contradiction since $C_hw=(L^\infty(\mathbb{G})w)\rtimes \mathbb{R}$ is non amenable.
If (i) holds, then we have $Az\preceq_{Mz}B\otimes \mathbb{C}w$ since $z\in {\cal Z}(M)$, and hence $(B\otimes \mathbb{C}w)'\cap Mz \preceq_{Mz} A'\cap Mz$ by $\cite[\textrm{Lemma 3.5}]{Va08}$ (this is true for non finite $M$). This means $1_B \otimes L^\infty(\mathbb{G})w\preceq_{Mz} Az$ and hence $L^\infty(\mathbb{G})w$ has an amenable direct summand. Thus we get a contradiction.
Next assume that $B=\mathbb{C}$. Let $N\subset L^\infty(\mathbb{G})(=M)$ be a non-amenable subalgebra with expectation $E_N$ and assume that $A\subset N$ is a Cartan subalgebra with expectation $E_A$.
Let $z$ be a central projection in $N$ such that $Nz$ is non-amenable and diffuse. Since $A$ is maximal abelian, $z\in A$ and $Az$ is a Cartan subalgebra in $Nz$.
Hence $Az$ is diffuse, which means $Az\not\preceq_{M}\mathbb{C}$. The above lemma implies that there exists a conditional expectation from $\mathbb{B}(zL^2(C_h))$ onto $z{\cal N}_{\tilde{M}}(\tilde{A})''z={\cal N}_{z\tilde{M}z}(\tilde{A}z)''$.
Now ${\cal N}_{z\tilde{M}z}(\tilde{A}z)''$ is non amenable and hence a contradiction. In fact, we have inclusions $C_{\tau_A}(A)z\subset C_{\tau_A\circ E_A}(N)z\subset zC_{\tau_A\circ E_A \circ E_N}(M)z$, and since the first inclusion is Cartan, the normalizer of $\tilde{A}z$ in $z\tilde{M}z$ generates a non-amenable subalgebra.
\section{\bf Further remarks}
\subsection{\bf Continuous and discrete cores}
In $\cite[\textrm{Subsection 5.2}]{Is12_2}$, we discussed semisolidity and non-solidity of continuous cores of free quantum group $\rm III_1$ factors. Non-solidity follows from the non-amenability of the relative commutant of the diffuse algebra $L\mathbb{R}$, which contains a non-amenable centralizer algebra.
Let $\mathbb{G}$ be a compact quantum group as in Theorem \ref{C} and assume that $L^\infty(\mathbb{G})$ is a full $\rm III_1$ factor and the Haar state is $\mathrm{Sd}(L^\infty(\mathbb{G}))$-almost periodic. Denote its discrete core by $D(L^\infty(\mathbb{G}))$ (see $\cite{Co74}\cite{Dy94}$).
Then $qD(L^\infty(\mathbb{G}))q$ is always strongly solid for any trace finite projection $q$,
since $D(L^\infty(\mathbb{G}))$ is isomorphic to $L^\infty(\mathbb{G})_h\otimes \mathbb{B}(H)$ for some separable Hilbert space $H$ and $L^\infty(\mathbb{G})_h$ is a strongly solid $\rm II_1$ factor (this follows from Theorem \ref{B}). Hence the structures of these cores are different. In this subsection, we observe this difference from several viewpoints.
Write $\Gamma\leq \mathbb{R}^*$ for the Sd-invariant of $L^\infty(\mathbb{G})$.
Then regarding $\mathbb{R}\subset \hat{\Gamma}$, take the crossed product $L^\infty(\mathbb{G})\rtimes \hat{\Gamma}$ by the extended modular action of the Haar state.
This algebra is an explicit realization of the discrete core, namely, $D(L^\infty(\mathbb{G}))\simeq L^\infty(\mathbb{G})\rtimes \hat{\Gamma}$.
Since this extended action restricts to the modular action on the dense subgroup $\mathbb{R}$, we can easily verify that $L^\infty(\mathbb{G})\rtimes \hat{\Gamma} (=:M)$ satisfies the analogue of condition $\rm (AOC)^+$ with the $\mathbb{R}$-action replaced by the $\hat{\Gamma}$-action.
Under this setting, we can prove Theorem \ref{B}, namely, for any amenable subalgebra $A\subset qMq$ (with trace finite projection $q\in L\hat{\Gamma}$), we have either (i) $A\preceq_{M} L\hat{\Gamma}$ or (ii) ${\cal N}_{qMq}(A)''$ is amenable.
This obviously implies strong solidity of $qMq$, since any diffuse subalgebra $A$ automatically satisfies $A\not\preceq_{qMq}L\hat{\Gamma}$ (because $L\hat{\Gamma}$ is atomic).
Roughly speaking, this observation implies that the only obstruction to the solidity of $C_h(L^\infty(\mathbb{G}))(=:C_h)$ is the diffuse subalgebra $L\mathbb{R}$.
More precisely, $C_h$ is non solid since the subalgebra $L\mathbb{R}$ is diffuse, and $D(L^\infty(\mathbb{G}))$ is strongly solid since the subalgebra $L\hat{\Gamma}$ is atomic.
The next viewpoint concerns property Gamma (or non-fullness). It is known that a $\rm II_1$ factor $M$ has property Gamma if and only if $C^*\{M,M' \}\cap \mathbb{K}(L^2(M))=0$ $\cite{aaa}$.
Moreover it is not difficult to see that if a von Neumann algebra $M$ satisfies condition (AO) and $C^*\{M,M' \}\cap \mathbb{K}(L^2(M))=0$, then it is amenable.
Hence a non-amenable $\rm II_1$ factor cannot have property Gamma and condition (AO) at the same time.
When $L^\infty(\mathbb{G})$ is a full $\rm III_1$ factor, on the one hand, for any $\mathrm{Tr}$-finite projection $p$, $pC_hp$ is non-full, since it admits a non-trivial central sequence (use the almost periodicity of $h$).
Hence we can deduce $C^*\{C_h,C_h' \}\cap \mathbb{K}(L^2(C_h))=0$. In particular $C_h$ (and $pC_hp$ for \textit{any} projection $p$) does not satisfy condition (AO).
On the other hand, for the discrete core $D(L^\infty(\mathbb{G}))$, $qD(L^\infty(\mathbb{G}))q$ always satisfies condition $\rm (AO)^+$ for any projection $q\in L\hat{\Gamma}\simeq \ell^\infty(\Gamma)$ with finite support.
This is because $D(L^\infty(\mathbb{G}))$ satisfies condition $\rm (AO)^+$ with respect to the quotient $\mathbb{K}(L^2(\mathbb{G}))\otimes \mathbb{B}(\ell^2(\Gamma))$ and hence $qD(L^\infty(\mathbb{G}))q$ satisfies $\rm (AO)^+$ for $\mathbb{K}(L^2(\mathbb{G}))\otimes q\mathbb{B}(\ell^2(\Gamma))q=\mathbb{K}(L^2(\mathbb{G}))\otimes \mathbb{K}(q\ell^2(\Gamma))$.
By $\cite[\textrm{Theorem B}]{Is12_2}$, we again get the strong solidity of $qD(L^\infty(\mathbb{G}))q$.
We also get fullness of $qD(L^\infty(\mathbb{G}))q$ (since this is non amenable) for any $q\in \ell^\infty(\Gamma)$ with finite support and hence that of $D(L^\infty(\mathbb{G}))$.
\subsection{\bf Primeness of crossed products by bi-exact quantum groups}
In this subsection, we consider only compact quantum groups of Kac type. Let $M=\hat{\mathbb{G}}\ltimes B$ be a finite von Neumann algebra as in the statement of Theorem \ref{A}. Assume that $B$ is amenable.
Then for any amenable subalgebra $A\subset qMq$ (where $q\in M$ is a projection), we have either (i) $A\preceq_M B$ or (ii) ${\cal N}_{qMq}(A)''$ is amenable.
This is a sufficient condition for semisolidity when $B$ is abelian, and for semiprimeness (meaning that any tensor decomposition has an amenable tensor component) when $B$ is non-abelian.
Thus we proved that any non-amenable von Neumann subalgebra $N\subset p(\hat{\mathbb{G}}\ltimes B)p$ is prime when $B$ is abelian, and semiprime when $B$ is amenable.
\end{document} |
\begin{document}
\title{\bf Revisiting Offspring Maxima in Branching Processes}
\author{George P. Yanev \\
Department of Mathematics and Statistics \\
University of South Florida \\
Tampa, Florida 33620 \\
e-mail: [email protected]}
\date{\empty}
\maketitle
\begin{abstract}
We present a progress report for studies on maxima related to
offspring in branching processes. We summarize and discuss the
findings on the subject that appeared in the last ten years. Some
of the results are refined and illustrated with new examples.
\end{abstract}
\section{Introduction}
There is a significant amount of research in the theory of
branching processes devoted to extreme value problems concerning
different population characteristics. The history of such studies
goes back to works in the 1950s by Zolotarev \cite{Zol54} and
Urbanik \cite{Urb56} (see also \cite{Har63}), who considered the
maximum generation size. Our goal here is to summarize and discuss
results on maxima related to the offspring. Papers directly
addressing this area of study have begun to appear in the last ten
years (though see the ``hero mothers'' example in \cite{JagNer84}).
Let ${\cal M}_n$ denote the maximum offspring size of all individuals
living in the $(n-1)$-st generation of a branching process. This
is a maximum of a random number of independent and identically
distributed (i.i.d.) integer-valued random variables, where the
random index is the population size of the process. ${\cal M}_n$ has
two characteristic features: (i) the i.i.d. random variables are
integer-valued and (ii) the distribution of the random index is
connected to the distribution of the terms involved through the
branching mechanism. These two characteristics distinguish the
maxima considered here from those studied in general extreme
value theory.
The study of the sequence \ $\{{\cal M}_n\}$\ might be motivated in
different ways. It provides a fertility measure characterizing the
most prolific individual in one generation. It also measures the
maximum litter (or family) size. In the branching tree context,
it is the maximum degree of a vertex. The asymptotic behavior of
${\cal M}_n$\ gives us some information about the influence of the
largest families on the size and survival of the entire
population.
The paper is organized as follows. Next section deals with maxima
in simple branching processes with or without immigration. In
Section~3 we derive results about maxima of a triangular array of
zero-inflated geometric variables. Later we apply these to
branching processes with varying geometric environments. Section 4
begins with limit theorems for the max-domain of attraction of
bivariate geometric variables. Then we discuss one application to
branching processes with promiscuous matting. The final section
considers a different construction in which a random score (a
continuous random variable) is associated with each individual in
a simple branching process. We present briefly limiting results
for the score's order statistics. At the end of the section, we
give an extension to two-type processes.
\section{Maximum family size in simple branching processes}
Define a Bienaym\'{e}--Galton--Watson (BGW) branching process and
its $n$-th generation maximum family size by $Z_0=1$;
\[ Z_n=\sum_{i=1}^{Z_{n-1}}X_i(n)\quad \mbox{and}\quad {\cal M}_n=
\max_{ 1\le i\le Z_{n-1}}X_i(n) \quad (n=1,2,\ldots),
\]
respectively, where the offspring variables $X_i(n)$ are i.i.d.
nonnegative and integer-valued.
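For later reference we note that the distribution of ${\cal M}_n$ is determined by the iterates $f_k$ of the offspring p.g.f. $f$ (this remark and the accompanying script are added here purely as an illustration and are not part of the original survey): with the convention ${\cal M}_n=0$ on $\{Z_{n-1}=0\}$, we have $P({\cal M}_n\le x,\ Z_{n-1}=k)=P(Z_{n-1}=k)F(x)^k$, hence
\[
P({\cal M}_n\le x)=f_{n-1}(F(x)), \qquad
P({\cal M}_n\le x\mid Z_{n-1}>0)=\frac{f_{n-1}(F(x))-f_{n-1}(0)}{1-f_{n-1}(0)}.
\]
The following short Python sketch evaluates these exact probabilities by iterating the p.g.f.; the geometric offspring law is the one from the first example of Subsection~2.1 below, the function names and the choice $p=0.6$, $n=30$ are ours and purely illustrative.
\begin{verbatim}
# Sketch: exact law of the maximum family size M_n via p.g.f. iteration.
def f(s, p):
    """Geometric offspring p.g.f. f(s) = p/(1 - q s), q = 1 - p."""
    return p / (1.0 - (1.0 - p) * s)

def F(k, p):
    """Offspring c.d.f. P(X <= k) = 1 - q^(k+1) for the geometric law."""
    return 1.0 - (1.0 - p) ** (k + 1)

def iterate_pgf(s, p, n):
    """n-th functional iterate f_n(s) = f(f(...f(s)...)), with f_0(s) = s."""
    for _ in range(n):
        s = f(s, p)
    return s

def max_cdf(k, p, n):
    """P(M_n <= k | Z_{n-1} > 0) for the BGW process with Z_0 = 1."""
    extinct = iterate_pgf(0.0, p, n - 1)
    return (iterate_pgf(F(k, p), p, n - 1) - extinct) / (1.0 - extinct)

if __name__ == "__main__":
    p, n = 0.6, 30                        # subcritical: m = q/p = 2/3
    m = (1.0 - p) / p
    for k in range(5):
        # limiting d.f. gamma(F(k)) from the subcritical geometric example below
        limit = (1 - m) * F(k, p) / (1 - m * F(k, p))
        print(k, round(max_cdf(k, p, n), 4), round(limit, 4))
\end{verbatim}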
Along with the BGW process $\{Z_n\}$, we consider the process with
immigration $\{Z^{im}_n\}$ and its offspring maximum
\[ Z^{im}_n=\sum_{i=1}^{Z^{im}_{n-1}}X_i(n)+ Y_n \quad \mbox{and}
\quad {\cal M}^{im}_n=\max_{1\le i\le Z^{im}_{n-1}}X_i(n) \quad
(n=1,2,\ldots),
\]
respectively, where $\{Y_n, \ n=1,2,...\}$ are independent of the
offspring variables, i.i.d. and integer-valued non-negative random
variables.
Finally, let us modify the immigration component such that
immigrants may enter the $n$-th generation only
if the $(n-1)$-st generation size is zero. Thus, we have the Foster-Pakes
process and its offspring maximum
\[ Z^{0}_n=\sum_{i=1}^{Z^0_{n-1}}X_i(n)+ I_{\displaystyle \{Z^{0}_{n-1}=0\}}Y_n
\quad \mbox{and} \quad {\cal M}^{0}_n=\max_{1\le i \le
Z^0_{n-1}}X_i(n)\quad (n=1,2,\ldots),
\]
where $I_{A} $ stands for the indicator of $A$.
Denote by $F(x)=P(X_i(n)\leq x)$\ the common distribution function of
the offspring variables with mean \ $0<m<\infty$\ and variance \
$0<\sigma^2 \leq \infty$. In this section, we deal with the
subcritical $(m<1)$, critical $(m=1)$, and supercritical $(m>1)$
processes separately.
\subsection{Subcritical processes}
Let $\hat{{\cal M}}_n$ denote the maximum family size in all three
processes defined above: $\{ Z_n\}$, $\{Z^{im}_n\}$, and
$\{Z^0_n\}$. Let $g(s)$ be the immigration p.g.f. Also, let
${\cal A}_n=\{Z_{n-1}>0\}$ for processes without immigration, and
${\cal A}_n$ be the certain event otherwise. The following
result is true.
\begin{theorem} If $0<m<1$, then for $x\ge 0$
\begin{equation} \label{distr_limit}\lim_{n\to \infty}P(\hat{{\cal M}}_n\le x|{\cal
A}_n)=\gamma(F(x)) \end{equation} and \begin{equation} \label{moment_limit} \lim_{n\to
\infty}E(\hat{{\cal M}}_n|{\cal A}_n)=\sum_{k=0}^\infty
[1-\gamma(F(k))] \end{equation} where
(i) in case of $\{Z_n\}$, $\gamma$ is the unique p.g.f. solution
of $\gamma(f(s))=m\gamma(s)+1-m$ and (\ref{moment_limit}) holds
if, in addition, $EX_i(n)\log(1+ X_i(n))<\infty$.
(ii) in case of process $\{Z^{im}_n\}$, (\ref{distr_limit}) holds
provided $E\log(1+Y_n)<\infty$ and
$\gamma$ is the unique p.g.f. solution of $
\gamma(s)=g(s)\gamma(f(s))$. (\ref{moment_limit}) is true if, in
addition, $EY_n<\infty$.
(iii) in case of process $\{Z^{0}_n\}$ we assume that
$E\log(1+Y_n)<\infty$. Then $ \gamma(s)= 1 - \sum_{n=0}^\infty
[1-g(f_n(s))]$ $(0< s\le 1)$ and $\gamma(0) = \{1 +
\sum_{n=0}^\infty [1-g(f_n(0))]\}^{-1}$. Also,
(\ref{moment_limit}) holds if, in addition, $EY_n<\infty$.
\end{theorem}
\begin{example} Consider $\{Z_n\}$ with geometric offspring
p.g.f. $f(s)=p/(1-qs),$\ where \ $1/2<p=1-q<1$. \ Then \
$m=q/p<1$\ and it is not difficult to see that
$\gamma(s)=(1-m)s/(1-ms).$\ Hence
\[
\lim_{n\to\infty}P({\cal M}_n\leq k\mid Z_{n-1}>0)= \frac{\displaystyle
(p-q)(1-q^{k+1})}{\displaystyle p-q(1-q^{k+1})}.
\]
It can also be seen (\cite{RahYan99}) that
\[
\frac{m}{1-pm}\le \lim_{n\to \infty}E({\cal M}_n|Z_n>0)\le
\frac{m}{1-m}.
\]
\end{example}
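The verification below is added for completeness and is not part of the original text: with $f(s)=p/(1-qs)$ and $m=q/p$, the function $\gamma(s)=(1-m)s/(1-ms)$ indeed satisfies the functional equation $\gamma(f(s))=m\gamma(s)+1-m$ from part (i) of the theorem above. Using $mp=q$ and $p+q=1$,
\[
\gamma(f(s))=\frac{(1-m)\,\frac{p}{1-qs}}{1-m\,\frac{p}{1-qs}}
=\frac{(1-m)p}{1-qs-mp}
=\frac{(1-m)p}{p-qs}
=\frac{1-m}{1-ms},
\]
while
\[
m\gamma(s)+1-m=(1-m)\left[\frac{ms}{1-ms}+1\right]=\frac{1-m}{1-ms}.
\]
Evaluating $\gamma$ at $F(k)=1-q^{k+1}$ then gives the limiting distribution function displayed above.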
\begin{example} Consider $\{Z^{im}_n\}$ (see \cite{Pak71}) with
\[
f(s)=(1+m-ms)^{-1} \quad (0<m<1) \quad \mbox{and} \quad g(s)=
f^{\nu}(s) \qquad (\nu
>0).\]
Then $ \gamma(s)=((1-m)/(1-ms))^\nu$, a negative binomial p.g.f.,
and the above theorem yields
\[
\lim_{n \to \infty} P({\cal M}^{im}_n \le x) =
\left( \frac{1-m}{1-mF(x)}\right)^\nu \ \mbox{and}\ \ \lim_{n\to
\infty} E{\cal M}^{im}_n=\sum_{j=0}^{\infty}\left( 1-\left[
\frac{1-m}{1-mF(j)}\right]^{\nu}\right)\le \frac{\nu m^2}{1-m}.
\]
\end{example}
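Similarly (this check is added here and is not in the original text), one verifies directly that $\gamma(s)=((1-m)/(1-ms))^\nu$ solves the equation $\gamma(s)=g(s)\gamma(f(s))$ from part (ii) of the theorem: since
\[
1-mf(s)=1-\frac{m}{1+m-ms}=\frac{1-ms}{1+m-ms},
\]
we get
\[
g(s)\gamma(f(s))=(1+m-ms)^{-\nu}\left[\frac{(1-m)(1+m-ms)}{1-ms}\right]^{\nu}
=\left(\frac{1-m}{1-ms}\right)^{\nu}=\gamma(s).
\]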
\begin{example} Let $\mu=EY_n$. Consider $\{Z^0_n\}$ with
\[
f(s)=(1+m-ms)^{-1} \qquad \mbox{and} \qquad g(s)= 1 - (\mu /
m)\log (1+m - ms)\qquad (0<m<1).\]
In this case $\gamma(s) = (m-\mu \log(1-ms))/(m-\mu \log (1-m))$ and by the
theorem
\[
\displaystyle \lim_{n \to \infty} P\{{\cal M}^0_n \le x\} = {m - \mu \log
(1-mF(x)) \over m -\mu \log (1-m)}\ ,
\]
and
\[
\lim_{n \to \infty}E{\cal M}^0_n = \mu\ {\displaystyle m+\sum_{k=0}^\infty \log
\frac{1-m[(1+m)^{k+1}-m^{k+1}]}{1-m} \over m-\mu\log (1-m)}\le {\displaystyle
\mu m \over m -\mu \log(1-m)}\ {\displaystyle m \over 1-m}.
\]
\end{example}
\subsection{Critical processes}
In the rest of this section we need some asymptotic results for
the maxima of i.i.d. random variables. Recall that a distribution
function $F(x)$ belongs to the max-domain of attraction of a
distribution function $H(x,\theta)$ (i.e., $F\in D(H)$) if and
only if there exist sequences $ a(n)>0$\ and \ $b(n)$\ such that
\begin{equation}\label{3.8}
\lim_{n\to\infty}F^n(a(n)x+b(n))=H(x, \theta)\ ,
\end{equation}
weakly. According to Gnedenko's classical result, \
$H(x;\theta)$
has the following (von Mises) form
\begin{equation} \label{dom_attr}H(x;\theta)=\exp\{-h(x;\theta)\}
=\exp\left\{-(1+x\theta^{-1})^{-\theta}\right\}, \quad
1+x\theta^{-1}>0; \ -\infty<\theta<\infty. \end{equation} Necessary and
sufficient conditions for \ $F\in D(H)$ are well-known. In
particular, \ $F\in D(\exp\{-x^{-a}\})$, \ $a>0$\ if and only if
for \ $x>0$ the following regularity condition on the tail
probability holds
\begin{equation} \label{24}
1-F(x)=x^{-a}L(x)\ ,
\end{equation}
where $L(x)$ is a slowly varying at infinity function (s.v.f.).
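To illustrate (\ref{3.8}) and (\ref{24}) in the simplest case (this illustration is added here and is not part of the original text), take the Pareto tail $1-F(x)=x^{-a}$ for $x\ge 1$, so that $L(x)\equiv 1$. With $a(n)=n^{1/a}$ and $b(n)=0$ we obtain, for $x>0$,
\[
F^n\bigl(n^{1/a}x\bigr)=\left(1-\frac{x^{-a}}{n}\right)^{n}\longrightarrow \exp\{-x^{-a}\},
\]
so that $F\in D(\exp\{-x^{-a}\})$; this limit is of the same type as (\ref{dom_attr}) with $\theta=a$, since $\exp\{-x^{-a}\}=H(a(x-1);a)$.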
{\bf A. Processes without immigration.}\ In case of
a simple BGW process, the following result holds.
\begin{theorem} Let $m=1$ and $\sigma^2<\infty$. (i) If
(\ref{3.8}) holds, then \begin{equation} \label{thm2_dist}\lim_{n\to
\infty}P\left(\frac{{\cal M}_n-b(n)}{a(n)}\le
x|Z_{n-1}>0\right)=\frac{1}{1+\sigma^2h(x, \theta)/2}. \end{equation}
(ii) If
(\ref{24}) holds, then \begin{equation} \label{thm2_exp} \lim_{n\to
\infty}\frac{E({\cal M}_n|Z_{n-1}>0)}{n^{1/a}L_1\left(n\right)}=
\frac{\pi/a}{\sin(\pi/a)} \qquad (a\ge 2), \end{equation} where $L_1(x)$ is
certain s.v.f. with known asymptotics.
\end{theorem}
The theorem implies that if $F\in D(\exp\{-e^{-x}\})$ then the
limiting distribution is logistic with c.d.f.
$\left(1+e^{-x}\right)^{-1}$; and if $F\in D(\exp\{-x^{-a}\})$
then the limiting distribution is log-logistic with c.d.f.
$\left(1+x^{-a}\right)^{-1}$.
\begin{theorem} Let $m=1$, $\sigma^2=\infty$, and (\ref{24}) holds. Then for $x \geq
0$ and $1<a\le 2$ \begin{equation} \label{thm3_dist}\lim_{n\to\infty}P\left(
\frac{\displaystyle {\cal M}_n}{\displaystyle n^{1/[a(a-1)]}L_2\left(n\right)} \leq x\mid
Z_{n-1}>0\right) = 1-\frac{1}{\displaystyle
\left(1+x^{a(a-1)}\right)^{1/(a-1)}} , \end{equation} which is a Burr Type
XII distribution (e.g. \cite{Tad80}) and
\begin{equation}\label{4.77}
\lim_{n\rightarrow \infty}\frac{E({\cal M}_n|Z_{n-1}>0)
}{n^{1/[a(a-1)]}L_2\left(n\right)}\! \! = \frac{1}{a-1} B \left(
\frac{1}{a-1}-\frac{1}{a(a-1)}, 1+\frac{1}{a(a-1)} \right) \quad
(1<a\le 2),
\end{equation}
where $B(u,v)$ is the Beta function and $L_2(x)$ is certain s.v.f.
with known asymptotics.
\end{theorem}
Note that for $a=2$ the right-hand sides in (\ref{thm2_dist})
(under assumption (\ref{24})) and (\ref{thm2_exp}) coincide with
those in (\ref{thm3_dist}) and (\ref{4.77}), respectively. The
right-hand side in (\ref{4.77}) is the expected value of the limit
in (\ref{thm3_dist}) (see \cite{Tad80}).
\begin{example}
Let $1-F(x)\sim x^{-2}\log x$. In this case one can check (see
\cite{RahYan99}) that Theorem~3 with $a=2$ implies
\[
\lim_{n\to\infty}P\left( \frac{\displaystyle {\cal M}_n}{\displaystyle n^{1/2}(\log n)^{3/2}}\leq
x\mid Z_{n-1}>0\right) = \frac{4x^2}{1+4x^2}\ ,
\]
for $x \geq 0$ and
\[
\lim_{n\rightarrow \infty}\frac{\displaystyle E({\cal M}_n|Z_{n-1}>0)}{\displaystyle
n^{1/2}(\log n)^{3/2}} = \frac{\pi}{2}\ .
\]
\end{example}
{\bf B. Processes with immigration $\{Z^{im}_n\}$.}\
Let $\mu=EY_n$. We have the following theorem.
\begin{theorem} Assume that $ m=1, \ 0<\sigma^2<\infty$, and $0 <
\mu < \infty.$ (i) If (\ref{3.8}) holds, then \begin{equation}
\label{thm4_dist} \lim_{n\to \infty}P\left(
\frac{{\cal M}^{im}_n-b(n)}{a(n)} \leq x \right) = \frac{1}{\displaystyle
(1+\sigma^2 h(x, \theta)/2)^{2\mu/\sigma^2}}.\end{equation} (ii) If
(\ref{24}) is true, then \begin{equation} \label{thm4_exp} \lim_{n\to
\infty}\frac{\displaystyle E{\cal M}^{im}_n}{\displaystyle n^{1/a}L_2(n)} =
\frac{2\mu}{\sigma^2} B\left(\frac{2\mu}{\sigma^2}+\frac{1}{a},
1-\frac{1}{a}\right) \quad (a\ge 2),\end{equation} where $B(u,v)$ is the Beta
function and $L_2(x)$ is certain s.v.f. with known asymptotics.
\end{theorem}
The theorem implies that if $F\in D(\exp\{-e^{-x}\})$ then the
limiting distribution is generalized logistic with c.d.f.
$\left(1+\sigma^2 e^{-x}/2\right)^{-2\mu/\sigma^2}$; if $F\in
D(\exp\{-x^{-a}\})$ then the limiting distribution is a Burr Type
III (e.g. \cite{Tad80}) with c.d.f.
$\left(1+\sigma^2x^{-a}/2\right)^{-2\mu/\sigma^2}$. The right-hand
side in (\ref{thm4_exp}) is the expected value of the limit in
(\ref{thm4_dist}) (see \cite{Tad80}).
\begin{theorem} Let $m=1$, $\sigma^2=\infty$, and (\ref{24}) holds.
In addition, suppose \begin{equation} \label{theta_cond}\Theta(x):=-\int_0^x
\log[1-P(Z^{im}_t>0)]dt=c\log x+d+\varepsilon(x), \end{equation} where
$\lim_{x\to \infty}\varepsilon(x)=0$, $c>0$, and $d$ are
constants. Then for $x \geq 0$, \begin{equation}
\label{thm5_dist}\lim_{n\to\infty}P\left( \frac{\displaystyle {\cal M}^{im}_n}{\displaystyle
n^{1/[a(a-1)]}L_2\left(n\right)} \leq x\right) =
\frac{1}{(1+x^{-a(a-1)})^c} \quad (1<a\le 2), \end{equation} which is a Burr
Type III distribution (e.g. \cite{Tad80}) and \begin{equation}
\label{thm5_exp}\lim_{n\rightarrow \infty}\frac{E{\cal M}_n^{im}
}{n^{1/[a(a-1)]}L_2\left(n\right)}= cB\left(c+\frac{1}{a(a-1)},
1-\frac{1}{a(a-1)}\right)\quad (1<a\le 2), \end{equation} where $B(u,v)$ is
the Beta function and $L_2(x)$ is certain s.v.f. with known
asymptotics. The right-hand side in (\ref{thm5_exp}) is the
expected value of the limit in (\ref{thm5_dist}).
\end{theorem}
Note that for $c=1$ and $a=2$ the right-hand sides in
(\ref{thm5_dist}) and (\ref{thm5_exp}) coincide with those in
(\ref{thm3_dist}) and (\ref{4.77}), respectively. The condition
(\ref{theta_cond}) holds even when the immigration mean is not
finite. The next example illustrates this point.
\begin{example} Following \cite{Pak75}, we consider offspring and
immigrants generated by
\[
f(s)=1-(1-s)(1+(a-1)(1-s))^{-1/(a-1)}\ \mbox{and}\
g(s)=\exp\{-\lambda (1-s)^{a-1}\},
\]
respectively. Then (\ref{24}) holds and (\ref{theta_cond}) yields
\[
\Theta(t)=(\lambda/(a-1))[\log t+\log (a-1)+\log (1+((a-1)t)^{-1})].
\]
Therefore,
\[
\lim_{n\to\infty}P\left( \frac{\displaystyle {\cal M}^{im}_n}{\displaystyle
n^{1/[a(a-1)]}L_3\left(n\right)} \leq x\right) =
\frac{1}{(1+x^{-a(a-1)})^{\lambda/(a-1)}} \quad (1<a\le 2),
\]
and
\[
\lim_{n\rightarrow \infty}\frac{E {\cal M}_n^{im}
}{n^{1/[a(a-1)]}L_3\left(n\right)}=
\frac{\lambda}{a-1}B\left(\frac{\lambda }{a-1}-\frac{1}{a(a-1)},
1-\frac{1}{a(a-1)}\right) \quad (1<a\le 2),
\]
where $B(u,v)$ is the Beta function and $L_3(x)$ is certain s.v.f.
with known asymptotics.
\end{example}
{\bf C. Foster-Pakes processes
$\{Z^0_n\}$.} The following limit theorem for ${\cal M}^0_n$ under a
non-linear normalization holds.
\begin{theorem} Assume that $ m=1, \ 0<\sigma^2<\infty$, and $0 <
\mu < \infty.$ If \begin{equation}\label{cond} \lim_{n\to\infty} {\displaystyle
P(X_1(1)>n) \over P(X_1(1)>n+1)} = 1\ \end{equation} then for \ $0<x <1$,
\begin{equation}\label{crthm} \lim_{n\to\infty} P\left( {\displaystyle \log U({\cal M}^0_n)
\over \log n} \leq x \right) = x, \end{equation} where $U(y)=1/(1-F(y))$.
\end{theorem}
Note that (\ref{cond}) is a necessary condition for $X_1(n)$ to be
in a max-domain of attraction.
\subsection{Supercritical processes}
Denote by $\hat{{\cal M}}_n$ (as in the subcritical case above) the
maximum family size in all three processes: $\{ Z_n\}$,
$\{Z^{im}_n\}$, and $\{Z^0_n\}$. The following result is true.
\begin{theorem} Assume that $m>1$ and $EX_i(n)\log(1+ X_i(n))<\infty$. If (\ref{3.8}) holds, then
\[
\lim_{n\to \infty}P\left( \frac{\hat{{\cal M}}_n-b(m^n)}{a(m^n)} \leq x
\right) =\psi(h(x, \theta)).\] If (\ref{24}) is true, then
\[\lim_{n\to
\infty}\frac{E\hat{{\cal M}}_n}{m^{-n/a}L_1\left(m^{-n/a}\right)}=\int_0^\infty
[1-\psi(x^{-a})]dx,
\]
where $L_1(x)$ is certain s.v.f. with known asymptotics.
(i) in case of $\{Z_n\}$, $\psi$ is the unique, among the Laplace
transforms, solution of \begin{equation} \label{psi_eqn}
\psi(u)=f(\psi(um^{-1})), \qquad (u>0).\end{equation}
(ii) in case of $\{Z^{im}_n\}$, we assume in addition that $E
\log(1+ Y_n) < \infty$ and \[
\psi(u) = \prod_{k=1}^{\infty}g(\varphi(um^{-k})) \qquad (u>0) \ ,
\] where $\varphi(u)$ is the unique, among the
Laplace transforms, solution of (\ref{psi_eqn}).
(iii) in case of $\{Z^0_n\}$, we assume in addition that $E Y_n <
\infty$ and \[ \psi(u) = g(\varphi(u)) - \sum_{n=0}^\infty [1 -
f(\varphi({u m^{-n}}))]P(Z^0_n=0) \qquad (u>0)\] and $\varphi(u)$
is the unique, among the Laplace transforms, solution of
(\ref{psi_eqn}). \end{theorem}
It is interesting to compare the limiting behavior of the maximum
family size in the processes allowing immigration with that when
the processes evolve in ``isolation'', i.e., without immigration. In
the supercritical case, as might be expected, the immigration has
little effect on the asymptotics of the maximum family size. The
limits differ only in the form of the Laplace transform $\psi(u)$.
In the subcritical and critical cases the mechanism of immigration
eliminates the conditioning on non--extinction. Theorem~6 for the
Foster-Pakes process differs from the rest of the results by the
non-linear norming of ${\cal M}_n$. The study of the limiting behavior
of the expectation in this case needs additional effort.
It is known that some of the most popular discrete distributions,
like geometric and Poisson, do not belong to any max-domain of
attraction. This restricts the applicability of the results in the
critical and supercritical cases above. A general construction of
discrete distributions attracted to a max-domain is given in Wilms
(1994). As proved there, if $X$ is attracted by a Gumbel or
Fr\'{e}chet distribution, then the same holds for the integer
part $[X]$. Next we follow a different approach, considering
triangular arrays of geometric variables which leads to branching
processes with varying environments.
The results in this section are published in \cite{Mit98},
\cite{MitYan99}, and \cite{RahYan96}-\cite{RahYan99}. In
\cite{YanTso00} an extension for order statistics is considered.
\section{Maximum family size in processes with
varying environments}
It is well-known that the geometric law is not attracted to any
max-stable law. Therefore, the limit theorems for maxima in the
critical and supercritical cases above do not apply to geometric
offspring. In this section we utilize a triangular array of
zero-modified geometric (ZMG) offspring distributions, instead.
\subsection{Maxima of arrays of zero-modified geometric variables}
In this subsection we prove limit theorems for the maximum of ZMG variables with
p.m.f. \begin{eqnarray*} P(X_i(n)=j)=\cases{a_np_n(1-p_n)^{j-1} & if $j\ge 1$,
\cr 1-a_n & if $j=0$, $\qquad (n=1,2,\ldots)$} \end{eqnarray*} For a
positive integer $\nu_n$ consider the triangular array of variables
\begin{eqnarray*}
X_1(1), X_2(1), & \ldots, & X_{\nu_1}(1)\\
X_1(2), X_2(2), & \ldots, & \qquad X_{\nu_2}(2) \\
& \ldots & \\
X_1(n), X_2(n), & \ldots, & \qquad \qquad \qquad X_{\nu_n}(n)
\end{eqnarray*} We prove limit theorems as $\nu_n\to \infty$ for the row
maxima
\[
{\cal M}_n=\max_{1\le i\le \nu_n}X_i(n).
\]
Let $\Lambda$ have the standard Gumbel law with c.d.f. $
\exp(-e^{-x})$ for $-\infty<x<\infty$.
\begin{theorem} Assume that for some real
$c$
\[
\lim_{n\to \infty}p_n=0 \quad \mbox{and} \quad \lim_{n\to
\infty}p_n\log(\nu_na_n)=2c.
\]
A. If $\lim_{n\to \infty}\log(\nu_na_n)=\infty$, then $c\ge0$ and
\[
p_n{\cal M}_n-\log(\nu_na_n)\stackrel{d}\to \Lambda -c.
\]
B. If $\lim_{n\to \infty}\log(\nu_na_n)=\alpha$,
$(-\infty<\alpha<\infty)$, then
\[
p_n{\cal M}_n \stackrel{d}\to (\Lambda+\alpha)^+.
\]
\end{theorem}
The idea of the proof is to exploit: (i) the exponential
approximation to the zero-modified geometric law when its mean
$a_n/p_n$ is large; (ii) the fact that exponential law is
attracted by Gumbel distribution.
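The theorem can also be checked by simulation. The following Python sketch is our own illustration; the parameter choices $a_n=1/2$, $p_n=1/n$, $\nu_n=n^2$ (for which $c=0$ in case A) are not from the text. It generates row maxima of ZMG variables and compares the mean of the normalized maxima with the mean $\gamma\approx0.5772$ of the standard Gumbel law.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def zmg(a, p, size):
    # zero-modified geometric: P(X=0)=1-a, P(X=j)=a*p*(1-p)^(j-1), j>=1
    return np.where(rng.random(size) < a, rng.geometric(p, size), 0)

for n in (50, 100, 200):
    a_n, p_n, nu_n = 0.5, 1.0 / n, n * n   # p_n*log(nu_n*a_n) -> 0, so c = 0
    reps = 400
    M = np.array([zmg(a_n, p_n, nu_n).max() for _ in range(reps)])
    # case A: p_n*M - log(nu_n*a_n) is approximately Gumbel, mean ~ 0.5772
    print(n, (p_n * M - np.log(nu_n * a_n)).mean())
\end{verbatim}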
\subsection{Processes with varying geometric environments}
Consider a branching process with ZMG offspring law defined over
the triangular array above. Thus, we have a simple branching
process with geometric varying environments. For this process we
prove limit theorems for the offspring maxima in all three
classes: subcritical, critical, and supercritical. Define $\mu_0=1$,
\[
\mu_n=E(Z_n|Z_0=1)=\prod_{j=1}^nm_j \qquad (n\ge1).
\]
If the environments are weakly varying, i.e., $\mu=\lim_{n\to \infty}\mu_n$ exists, then the processes can be classified (see \cite{MitPakYan03}) as follows.
\begin{eqnarray*}
\{Z_n\}\ \mbox{is}\ \
\cases{\mbox{supercritical} & if $\mu=\infty$ \quad \quad \ \ \mbox{i.e.} \ $\sum_n(m_n-1)\to \infty$ \cr
\mbox{critical} & if $\mu\in(0,\infty)$ \quad \mbox{i.e.} \ $\sum_n(m_n-1)< \infty$ \cr
\mbox{subcritical} & if $\mu=0$ \quad \quad \quad \ \mbox{i.e.} \ $\sum_n(m_n-1)\to -\infty$}
\end{eqnarray*}
Define the maximum family size for the process with varying geometric environments as
\[
{\cal M}_n^{ge}=\max_{1\le i\le Z_n}X_i(n), \qquad (n=1,2,\ldots)
\]
In the result below the role played by $\nu_n$ before is played by $B_{n-1}$, where
\[
B_n=\mu_n\sum_{j=1}^n \frac{p_j^{-1}-1}{\mu_j}.
\]
Let ${\cal V}$ be a standard logistic random variable with c.d.f. $(1+e^{-x})^{-1}$ for $-\infty<x<\infty$.
\begin{theorem} Suppose that $\lim_{n\to \infty} B_n=\infty$ and for $c$ real
\[
\lim_{n\to \infty}p_n=0 \quad \mbox{and} \quad \lim_{n\to \infty}p_n\log(B_{n-1}a_n)=2c.
\]
A. If $\lim_{n\to \infty}\log(B_{n-1}a_n)=\infty$, then
\[
(p_n{\cal M}_n^{ge}-\log(B_{n-1}a_n)|Z_{n-1}>0)\stackrel{d}\to {\cal V} -c.
\]
B. If $\lim_{n\to \infty}\log(B_{n-1}a_n)=\alpha$, $(-\infty<\alpha<\infty)$, then
\[
(p_n{\cal M}_n^{ge}|Z_{n-1}>0) \stackrel{d}\to ({\cal V}+\alpha)^+.
\]
\end{theorem}
Referring to the above theorem, we can say that the branching
mechanism transforms the Gumbel distribution into the logistic distribution. It is
interesting to notice that this parallels results for the
maximum of i.i.d. random variables with a random, geometrically
distributed index discussed in \cite{GneGne82}.
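The parallel just mentioned can be illustrated directly: if $N$ is geometric with small success probability $q$ and $E_1,E_2,\ldots$ are i.i.d. standard exponential variables, then $\max_{i\le N}E_i-\log(1/q)$ is approximately standard logistic. The small Monte Carlo sketch below in Python is our own illustration (it is not taken from \cite{GneGne82}); the value $q=10^{-3}$ is an arbitrary choice.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
q, reps = 1e-3, 20000
samples = np.empty(reps)
for i in range(reps):
    n = rng.geometric(q)                        # random index N in {1,2,...}
    samples[i] = rng.exponential(size=n).max() - np.log(1.0 / q)

x = np.linspace(-3, 3, 7)
emp = [(samples <= t).mean() for t in x]
# empirical c.d.f. versus the standard logistic c.d.f. 1/(1+e^{-x})
print(np.c_[x, emp, 1.0 / (1.0 + np.exp(-x))])
\end{verbatim}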
\begin{example} Let us sample a
linear birth and death process $({\cal B}_t)$ at irregular times. Let
$Z_n={\cal B}_{t_n}$, where $0<t_n<t_{n+1}\to t_{\infty}\le \infty$. If
$\lambda$ and $\mu$ are the birth and death rates, respectively,
and $d_n=t_n-t_{n-1}$, then $a_n=m_np_n$,
$$p_n=\cases{{\displaystyle \lambda-\mu\over \displaystyle \lambda m_n-\mu} & if $\lambda \not=\mu$,\cr
\frac{\displaystyle 1}{\displaystyle 1+\lambda d_n} & if $\lambda=\mu$,} \quad
m_n=e^{(\lambda -\mu)d_n},$$ and
$$B_n=\cases{{\displaystyle \lambda (\mu_n-1)\over \displaystyle \lambda -\mu} & if $\lambda \not=\mu$,\cr
\lambda t_n & if $\lambda =\mu$,} \quad
\mu_n=e^{(\lambda-\mu)t_n}.$$
A. If $\lambda>\mu$ and
$$\lim_{n\to \infty}\frac{\displaystyle t_n}{\displaystyle m_n}=\frac{2c}{\lambda-\mu}\in [0,\infty),$$ then
\[
\left(\frac{{\cal M}_n^{ge}}{m_n}-(\lambda-\mu)t_n\ |\ Z_{n-1}>0\right)\stackrel{d}\to {\cal V} -c.
\]
B. If $\lambda=\mu$ and $t_n=n^{\delta}l(n)$ \ \ $(\delta \ge 1)$, then
\[
\left(\frac{{\cal M}_n^{ge}}{\lambda\delta n^{\delta-1}l(n)}-\log n\ |\ Z_{n-1}>0\right) \stackrel{d}\to {\cal V}.
\]
\end{example}
The results in this section can be found in \cite{MitPakYan03}.
\section{Maxima in bisexual processes}
In this section we consider maxima of triangular arrays of
bivariate geometric random vectors. The obtained results are
applied to a class of bisexual branching processes.
\subsection{Max-domain of attraction of bivariate geometric arrays}
The following construction is due to Marshall and Olkin
\cite{MO85}. Consider a random vector $(U, V)$ having Bernoulli
marginals, i.e., it takes on four possible values (0,0), (0,1),
(1,0), and (1,1) with probabilities $p_{00}, \ p_{01}, \ p_{10}$,
and $p_{11}$, respectively. Thus the marginal probabilities for
$U$ and $V$ are
\begin{eqnarray*}
P(U=0)=p_{0+}=p_{00}+p_{01}, & & P(U=1)=p_{1+}=p_{10}+p_{11} \\
P(V=0)=p_{+0}=p_{00}+p_{10}, & & P(V=1)=p_{+1}=p_{01}+p_{11} .
\end{eqnarray*}
Consider a sequence $\{(U_n, V_n)\}_{n=1}^\infty$ of random
vectors, independent and identically distributed as $(U, V)$. Let $\xi$ and $\eta$ be the number of zeros preceding the
first 1 in the sequences $\{U_n\}_{n=1}^\infty$ and
$\{V_n\}_{n=1}^\infty$, respectively. Both $\xi$ and $\eta$ follow
a geometric distribution and, in general, they are dependent variables.
The vector $(\xi, \eta)$ has a bivariate geometric distribution
with probability mass function for integer $l$ and $k$
\begin{equation}\label{def1}
P(\xi=l, \eta=k) =
\left\{
\begin{array}{ll}
p_{00}^lp_{10}p_{+0}^{k-l-1}p_{+1} & \mbox{if} \quad 0\leq l <k,\\
p_{00}^l p_{11} & \mbox{if} \quad l=k, \\
p_{00}^k p_{01} p_{0+}^{l-k-1}p_{1+} & \mbox{if} \quad 0\leq k <l,\\
\end{array}
\right.
\end{equation}
and
\begin{equation}\label{def2}
P(\xi > l, \eta > k)=
\left\{
\begin{array}{ll}
p_{00}^{l+1}p_{+0}^{k-l}& \mbox{if} \quad 0\leq l \leq k,\\
p_{00}^{k+1} p_{0+}^{l-k} & \mbox{if} \quad 0\leq k <l.\\
\end{array}
\right.
\end{equation}
The marginals of $\xi$ and $\eta$ for integer $l$ and $k$ are
$P(\xi=l)=p_{1+}p_{0+}^l \quad (l\geq 0)$ and
$P(\eta=k)=p_{+1}p_{+0}^k \quad (k\geq 0)$, respectively, and
\begin{equation}
\label{marg}
\bar{F}_\xi(l)=P(\xi > l)=p_{0+}^{l+1} \quad (l\geq 0), \qquad \bar{F}_\eta(k)=P(\eta > k)=p_{+0}^{k+1} \quad (k\geq 0).
\end{equation}
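The construction above is easy to simulate. The following Python sketch is our own illustration (the probabilities $p_{ij}$ are chosen arbitrarily); it samples $(\xi,\eta)$ by scanning i.i.d. copies of $(U,V)$ and compares the empirical joint survival function with formula (\ref{def2}).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
p00, p01, p10, p11 = 0.5, 0.2, 0.2, 0.1       # illustrative Bernoulli-pair probabilities
vals = [(0, 0), (0, 1), (1, 0), (1, 1)]

def sample_xi_eta():
    xi = eta = None
    count = 0
    while xi is None or eta is None:
        u, v = vals[rng.choice(4, p=[p00, p01, p10, p11])]
        if u == 1 and xi is None:
            xi = count                         # zeros before the first 1 in {U_n}
        if v == 1 and eta is None:
            eta = count                        # zeros before the first 1 in {V_n}
        count += 1
    return xi, eta

data = np.array([sample_xi_eta() for _ in range(50000)])
l, k = 1, 3
emp = np.mean((data[:, 0] > l) & (data[:, 1] > k))
# compare with P(xi>l, eta>k) = p00^{l+1} * p+0^{k-l} from (def2), l <= k
print(emp, p00 ** (l + 1) * (p00 + p10) ** (k - l))
\end{verbatim}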
For $n=1,2,\ldots$, let $\nu_n$ be a positive integer and
$\{(\xi_i(n), \eta_i(n)): i=1,2,\ldots, \nu_n\}$ be a triangular
array of independent random vectors with the same bivariate
geometric distribution (\ref{def1}), where $p_{ij}$ are replaced by
$p_{ij}(n)\ \ (i,j=0,1)$ for $n=1,2,\ldots$ That is,
\begin{eqnarray*}
(\xi_1(1), \eta_1(1)), (\xi_2(1), \eta_2(1)), & \ldots, & (\xi_{\nu_1}(1), \eta_{\nu_1}(1))\\
(\xi_1(2), \eta_1(2)), (\xi_2(2), \eta_2(2)), & \ldots, & \qquad (\xi_{\nu_2}(2), \eta_{\nu_2}(2)) \\
& \ldots & \\
(\xi_1(n), \eta_1(n)), (\xi_2(n), \eta_2(n)), & \ldots, & \qquad \qquad \qquad (\xi_{\nu_n}(n), \eta_{\nu_n}(n))
\end{eqnarray*}
Below we prove a limit theorem as $\nu_n \to \infty$ for the bivariate row maximum
\[
({\cal M}_n^\xi, {\cal M}_n^\eta)=\left(\max_{1\leq i\leq \nu_n} \xi_i(n), \max_{1\leq i\leq \nu_n}\eta_i(n)\right).
\]
\begin{theorem} Let $\lim_{n\to \infty}\nu_n=\infty$. If there are
constants $0\leq a, b, c < \infty$, such that
\begin{equation} \label{assum2}
\lim_{n\to \infty}p_{11}(n)\log \nu_n = 2c, \quad \lim_{n\to \infty} \frac{p_{10}(n)}{p_{11}(n)}\log \nu_n = a, \quad \mbox{and} \quad \lim_{n\to \infty} \frac{p_{01}(n)}{p_{11}(n)}\log \nu_n = b,
\end{equation}
then for $x, y \geq 0$
\begin{eqnarray*}
\lefteqn{ \lim_{n\to\infty} P\left( p_{11}(n){\cal M}_n^\xi-\log\nu_n\leq x, \ \ p_{11}(n){\cal M}_n^\eta-\log\nu_n\leq y \right)}\\
& & = \exp\left\{ -e^{ -x-a-c}-e^{ -y-b-c}+e^{ -\max\{x,y\}-a -b -c}\right\}.
\end{eqnarray*}
\end{theorem}
{\bf Proof}\ Set $x_n=(x+\log \nu_n)/p_{11}(n)$ and $y_n=(y+\log \nu_n)/p_{11}(n)$. Then
\begin{eqnarray*}
P\left( {\cal M}_n^\xi\leq x_n, {\cal M}_n^\eta\leq y_n\right) & = & (F(x_n, y_n))^{\nu_n} \\
& = & \left(1-\bar{F}_\xi(x_n)-\bar{F}_\eta(y_n)+P(\xi_i(n)>x_n, \eta_i(n)>y_n)\right)^{\nu_n}.
\end{eqnarray*}
Let $x<y$ and thus $x_n<y_n$. Taking logarithms, expanding in a Taylor series, and using (\ref{def2}) and (\ref{marg}), we obtain
\begin{eqnarray}
\label{lim0}
\lefteqn{\log P\left( {\cal M}_n^\xi\leq x_n, {\cal M}_n^\eta\leq y_n\right)}\\
& = & \nu_n \log \left(1-\bar{F}_\xi(x_n)-\bar{F}_\eta(y_n)+P(\xi_i(n)>x_n, \eta_i(n)>y_n)\right) \nonumber \\
& = & -\nu_n \left\{ [\bar{F}_\xi(x_n)+\bar{F}_\eta(y_n)-P(\xi_i(n)>x_n, \eta_i(n)>y_n)](1+o(1))\right\}\nonumber \\
& = & -\left( \nu_n p_{0+}(n)^{[x_n]+1} + \nu_n p_{+0}(n)^{[y_n]+1} - \nu_n p_{00}(n)^{[x_n]+1}p_{+0}(n)^{[y_n]-[x_n]}\right) (1+o(1)) . \nonumber
\end{eqnarray}
Write $[x_n]=x_n-\{x_n\}$, where $0\leq \{x_n\}<1$ is the fractional part of $x_n$. It is easily seen that
$\lim_{n\to \infty}(p_{0+}(n))^{[x_n]+1}=\lim_{n\to \infty}(p_{0+}(n))^{x_n+1-\{x_n\}}= \lim_{n\to \infty}(p_{0+}(n))^{x_n}$.
Furthermore, taking into account (\ref{assum2}), we have
\begin{eqnarray*}
\lefteqn{\log \left(\nu_n p_{0+}^{x_n}(n)\right)= \log \nu_n + \frac{x+\log \nu_n}{p_{11}(n)}\log(1-p_{1+}(n))}\nonumber \\
& = & \log \nu_n - \frac{x+\log \nu_n}{p_{11}(n)}\left(p_{11}(n)+p_{10}(n) +\frac{1}{2}(p_{11}(n)+p_{10}(n))^2+O(p^3_{1+}(n))\right) \nonumber \\
& = & -x(1+o(1)) - \frac{p_{10}(n)}{p_{11}(n)}\log \nu_n - \frac{(p_{11}(n)+p_{10}(n))^2}{2p_{11}(n)}\log \nu_n + O(p^2_{11}(n)) \nonumber \\
& = & -x(1+o(1))-\left(\frac{p_{10}(n)}{p_{11}(n)}+ \frac{1}{2}p_{11}(n)\right)\log\nu_n (1+o(1)) + O(p^2_{11}(n)) \nonumber \\
& \to & -x -a -c \ . \nonumber
\end{eqnarray*}
Therefore
\begin{equation} \label{lim1}
\lim_{n \to \infty}\nu_n p_{0+}(n)^{[x_n]+1}=e^{\displaystyle -x -a -c} \ .
\end{equation}
Similarly we arrive at
\begin{equation} \label{lim2}
\hspace{-0.3cm} \lim_{n \to \infty}\nu_n p_{+0}(n)^{[y_n]+1}=e^{\displaystyle -y -b -c} \ \mbox{and} \ \lim_{n \to \infty}\nu_n p_{00}(n)^{[x_n]+1}=e^{\displaystyle -x -a -b -c} \ .
\end{equation}
Finally,
\begin{eqnarray*}
\lefteqn{\log p_{+0}^{y_n-x_n}(n)= \frac{(y+\log\nu_n)-(x+\log\nu_n)}{p_{11}(n)}\log (1-p_{11}(n)-p_{01}(n))} \\
& = & -\frac{y-x}{p_{11}(n)}\left(p_{11}(n)+p_{01}(n) +\frac{1}{2}(p_{11}(n)+p_{01}(n))^2+O(p^3_{+1}(n))\right)\\
& = & x-y -(y-x)\left( \frac{p_{01}(n)}{p_{11}(n)}(1+o(1))+\frac{1}{2}p_{11}(n)(1+o(1)) + O(p^2_{+1}(n))\right)\\
& \to & x-y .
\end{eqnarray*}
Thus,
\begin{equation} \label{lim3}
\lim_{n \to \infty}p_{+0}(n)^{[y_n]-[x_n]}=e^{\displaystyle x-y} \ .
\end{equation}
The assertion of the theorem for $x<y$ follows from
(\ref{lim0})-(\ref{lim3}). The case $y<x$ is treated
similarly. This completes the proof.
In particular, if $a=b=0$ then
\[
\lim_{n\to\infty} P\left( p_{11}(n){\cal M}_n^\xi-\log\nu_n\leq x, p_{11}(n){\cal M}_n^\eta-\log\nu_n\leq y \right) = \exp\left\{ -e^{ -\min\{x,y\}-c}\right\}.
\]
Note that in this case the limit is proportional to the upper bound for the possible asymptotic
distribution of a multivariate maximum given in \cite{Gal87}, Theorem~5.4.1.
For the componentwise maxima, applying Theorem~10, one can obtain
the following limiting results. If $p_{1+}(n)\log \nu_n \to 2c_1<\infty$, then
\[
\lim_{n\to\infty}P( p_{11}(n){\cal M}_n^\xi-\log\nu_n\leq x) = \exp\left\{ -e^{ -x-c_1}\right\}.
\]
If $p_{+1}(n)\log \nu_n \to 2c_2<\infty$, then
\[
\lim_{n\to\infty} P\left(p_{11}(n){\cal M}_n^\eta-\log\nu_n\leq y \right) = \exp\left\{ -e^{ -y-c_2}\right\}.
\]
\subsection{Bisexual processes with varying geometric environments}
Consider the array of bivariate random vectors
$\{(\xi_i(n),\eta_i(n)):\ i=1,2,\ldots; \ n=0,1,\ldots\}$, which
are independent with respect to both indices. Let
$L:{\cal R}^+\times{\cal R}^+\to{\cal R}^+$ be a mating function. A bisexual
process with varying environments is defined (see \cite{MMR04}) by
the recurrence: $Z_0=N>0$,
\[
(Z^F_{n+1},Z^M_{n+1})=\sum_{i=1}^{Z_n}(\xi_i(n),\eta_i(n))
\]
and
\[
Z_{n+1}=L(Z^F_{n+1},Z^M_{n+1}) \quad (n=0,1,\ldots).
\]
Define the mean growth rate per mating unit
\[
r_{nj}=j^{-1}E(Z_{n+1}|Z_n=j) \quad (j=1,2,\ldots) \quad \mbox{and} \quad \mu_n=\prod_{i=0}^{n-1}r_{i1}, \ \mu_0=1 \quad (n=1,2, \ldots)
\]
{\bf Lemma } (\cite{MMR04}) {\it If
\begin{equation} \label{assump1}
\sum_{n=0}^\infty \left( 1- \frac{r_{n1}}{r_n}\right) < \infty \ ,
\end{equation}
where $r_n=\lim_{j\to \infty}r_{nj}$, then
\[
\lim_{n\to \infty} \frac{Z_n}{\mu_n} = W \quad \mbox{a.s.},
\]
where $W$ is a nonnegative random variable with $E(W)<\infty$.
If, in addition, there exist constants $A>0$ and $c>1$ such that
\begin{equation} \label{assump2}
\prod_{i=j}^{n+j-1}r_{i1}\geq Ac^n \qquad (j=1,2,\ldots; \ n=0,1,\ldots)
\end{equation}
and there exists a random variable $X$ with $E(X\log(1+X))<\infty$ such that for any $u$
\begin{equation} \label{assump3}
P(X\leq u) \leq P\left(\frac{L(\xi_i(n), \eta_i(n))}{r_{n1}}\leq u\right)\qquad (n=0,1,\ldots),
\end{equation}
then $P(W>0)>0$.}
Further on we assume that $(\xi_i(n),\eta_i(n))$ are i.i.d. copies
of the bivariate geometric vector $(\xi, \eta)$ introduced above
and that the mating is promiscuous, i.e.,
\begin{equation} \label{mating}
L(\xi(n),\eta(n))=\xi(n)\min\{1,\eta(n)\}.
\end{equation}
\begin{theorem} \label{cpm2} Let $\{ Z_n\}$ be a bisexual branching process with varying geometric environments
and mating function (\ref{mating}). If
\begin{equation} \label{cpm_lim_assum1}
\prod_{j=1}^\infty p_{+0}(j)p_{0+}(j)\neq 0 \quad \mbox{and} \quad \sum_{n= 0}^\infty p_{+1}(n)<\infty \ ,
\end{equation}
then
\begin{equation} \label{cpm_Wlim1}
\lim_{n\to \infty} \frac{Z_n}{\mu_n} = W \quad \mbox{a.s.},
\end{equation}
where $W$ is a nonnegative random variable with $E(W)<\infty$ and $P(W>0)>0$.
\end{theorem}
{\bf Proof}\ To prove the theorem it is sufficient to verify
assumptions (\ref{assump1})-(\ref{assump3}) of the above lemma.
First, we prove that (\ref{assump1}) holds. Indeed, for $j\geq 1$
\begin{eqnarray}
jr_{nj} & = & E(Z^F_{n+1}\min \{1,Z^M_{n+1}\}) \label{growth rate}\\
& = & EE(Z^F_{n+1}\min \{1,Z^M_{n+1}\}\ |\ Z^M_{n+1}) \nonumber \\
& = & (1-P(Z^M_{n+1}=0))EZ^F_{n+1} \nonumber \\
& = & (1-p_{+1}^j(n))\frac{jp_{0+}(n)}{p_{1+}(n)}\ , \nonumber
\end{eqnarray}
where we have used that both $Z^M_{n+1}$ and $Z^F_{n+1}$ are
negative binomial with parameters $(j, p_{+1}(n))$ and $(j,p_{1+}(n))$, respectively. Thus,
\begin{equation} \label{cpm_rn}
r_n=\lim_{j\to \infty}r_{nj}=\lim_{j\to \infty} (1-p_{+1}^j(n))\frac{p_{0+}(n)}{p_{1+}(n)}=\frac{p_{0+}(n)}{p_{1+}(n)} \ .
\end{equation}
Now, (\ref{growth rate}) and (\ref{cpm_rn}) imply $1-r_{n1}/r_n=p_{+1}(n)$, which along with (\ref{cpm_lim_assum1}) leads to (\ref{assump1}).
Let us prove (\ref{assump3}). Indeed, for $k\geq 1$
\begin{eqnarray*}
P(L(\xi(n),\eta(n))=k) & = & \sum_{j=1}^\infty P(\xi(n)\min\{1,\eta(n)\}=k|\eta(n)=j)P(\eta(n)=j) \\
& = & P(\xi(n)=k)\sum_{j=1}^\infty P(\eta(n)=j) \\
& = & p_{1+}(n)p_{0+}^k(n)\sum_{j=1}^\infty p_{+1}(n)p_{+0}^j(n) \\
& = & p_{+0}(n)p_{1+}(n)p_{0+}^k(n)\ .
\end{eqnarray*}
Therefore,
$P(L(\xi(n),\eta(n))/r_{n1}\geq u)=p_{0+}^{[ur_{n1}]+1}(n)$
and hence, similarly to (\ref{lim1}), taking into account (\ref{cpm_lim_assum1}), we obtain
\begin{eqnarray*}
\log P\left(\frac{\displaystyle L(\xi(n),\eta(n))}{\displaystyle r_{n1}}\geq u\right) & \sim & ur_{n1}\log p_{0+}(n) \\
& = & -u \frac{p_{+0}(n)p_{0+}(n)}{p_{1+}(n)}p_{1+}(n)(1+o(1)) \\
& \to & -u .
\end{eqnarray*}
Thus, $\lim_{n\to \infty}P(L(\xi(n),\eta(n))/r_{n1}\geq u)=e^{-u}$, which implies (\ref{assump3}).
Finally, to prove (\ref{assump2}), observe that (\ref{growth rate}) implies for any $j$ and $n$
\begin{eqnarray*}
\prod_{i=j}^{n+j-1}r_{i1} & = & \prod_{i=j}^{n+j-1}p_{+0}(i)p_{0+}(i) \prod_{i=j}^{n+j-1}p_{11}^{-1}(i) \\
& \geq & \prod_{i=1}^\infty p_{+0}(i)p_{0+}(i) \prod_{i=j}^{n+j-1}p_{11}^{-1}(i) \\
& \geq & Ac^n \ ,
\end{eqnarray*}
where $A=\prod_{i=1}^\infty p_{+0}(i)p_{0+}(i)>0$ (the product being convergent by (\ref{cpm_lim_assum1})) and
$c=\min_{i\geq j}p^{-1}_{11}(i)>1$ (note that $p_{11}(i)\to 0$ under (\ref{cpm_lim_assum1})). Now, referring to the above lemma, we complete the proof of the theorem.
Define offspring maxima in the bisexual process $\{Z_n\}$ by
\[
({\cal M}_n^F, {\cal M}_n^M)=\left(\max_{1\le i\le Z_{n}}\xi_i(n),\ \max_{1\le i\le Z_{n}}\eta_i(n)\right).
\]
\begin{theorem} Assume that $\mu_n\to \infty$ and there are constants $0\leq a, b, c < \infty$, such that
\begin{equation} \label{assum_1}
\lim_{n\to \infty}p_{11}(n)\log \mu_n = 2c, \quad \lim_{n\to \infty} \frac{p_{10}(n)}{p_{11}(n)}\log \mu_n = a, \quad \mbox{and} \quad \lim_{n\to \infty} \frac{p_{01}(n)}{p_{11}(n)}\log \mu_n = b.
\end{equation}
Also assume that
\begin{equation} \label{assum_2}
\prod_{j=1}^\infty p_{+0}(j)p_{0+}(j)\neq 0 \quad \mbox{and} \quad \sum_{n= 0}^\infty p_{+1}(n)<\infty \ .
\end{equation}
Then
\[
\lim_{n\to\infty} P\left( p_{11}(n){\cal M}_n^F -\log\mu_n\le x, p_{11}(n){\cal M}_n^M-\log\mu_n \le y\right) = \int_0^\infty (G(x,y))^zdP(W\le z),
\]
where
\[
G(x,y)=\exp\left\{ -e^{ -x-a-c}-e^{ -y-b-c}+e^{ -\max\{x,y\}-a -b -c}\right\}.
\]
\end{theorem}
{\bf Proof}\ Set $x_n=(x+\log \mu_n)/p_{11}(n)$ and $y_n=(y+\log \mu_n)/p_{11}(n)$. Under assumption (\ref{assum_1}), Theorem 11 implies
\begin{equation} \label{theorem_11}
P\left( {\cal M}_n^F\leq x_n, {\cal M}_n^M\leq y_n \ | \ Z_n=k\right) = (F(x_n, y_n))^{k}\to G(x,y).
\end{equation}
Under (\ref{assum_2}), Theorem 12 implies
\begin{equation} \label{theorem_12}
\lim_{n\to \infty}P\left( \frac{Z_n}{\mu_n}\le x\right)=P(W\le x).
\end{equation}
Therefore, by (\ref{theorem_11}) and (\ref{theorem_12}),
\begin{eqnarray*}
\lefteqn{\hspace{-2cm}P\left({\cal M}_n^F\leq \frac{x+\log \mu_n}{p_{11}(n)}, {\cal M}_n^M\le \frac{y+\log \mu_n}{p_{11}(n)}\right)=\sum_{k= 0}^\infty P\left(Z_n=k \right)(F(x_n, y_n))^{k}}\\
& = & \sum_{k=0}^\infty P\left( \frac{Z_n}{\mu_n}=\frac{k}{\mu_n}\right)(F(x_n, y_n))^{\mu_n k/\mu_n} \\
& \to & \int_0^\infty (G(x,y))^zdP(W\le z).
\end{eqnarray*}
This completes the proof.
The next example, adapted from \cite{Mit05}, shows that the various
conditions in Theorem~13 can be satisfied.
\begin{example} Let $\alpha>1$ and $\beta>1$. Set
\[
p_{11}(n)=n^{-\alpha} \qquad \mbox{and}\qquad p_{01}(n)=p_{10}(n)=n^{-(\alpha+\beta)} \qquad (n\ge 2).
\]
It is not difficult to see that with this choice of $p_{ij}(n)$ $(i,j=0,1)$, we have
\[
\log \mu_n\sim \alpha n \log n \qquad \mbox{as}\qquad n\to \infty
\]
and both (\ref{assum_1}) (with $a=b=c=0$) and (\ref{assum_2}) are satisfied.
\end{example}
The exposition in this section follows \cite{Mit05}, extending some of the results there.
\section{Maximum score}
In this section we assume that every individual in a Galton-Watson
family tree has a continuous random characteristic whose maximum
is of interest.
\subsection{Maximum scores in Galton-Watson processes}
Let us go back to the simple BGW process and attach random scores
to each individual in the family tree. More specifically,
associate with the $j$-th individual in the $n$-th generation a
continuous random variable $Y_j(n)$. Arnold and Villase\~{n}or
(1996) published the first paper studying the maximal individual
scores (``heights''). Pakes (1998) proves more general results
concerning the laws of offspring score order statistics. Quoting
\cite{Pak98}, ``these results provide examples of the behavior of
extreme order statistics of observations from samples of random
size.'' Define by $M_{(k),n}$ the $k$-th largest score within the
$n$-th generation and by $\bar{M}_{(k),n}$ the $k$-th largest
among the random variables $\{Y_{i}(n):\ 1\le i\le Z_\nu, 0\le \nu\le n\}$, i.e., the $k$-th largest score up to and including
the $n$-th generation. Pakes (1998) studies the limiting behavior
of ``near maxima'', i.e., (upper) extreme order statistics
$M_{(k),n}$ and $\bar{M}_{(k),n}$ when $n\to \infty$ and $k$
remains fixed. The two general cases that arise are whether the
law of $Z_n$ (or of the total progeny $T_n=\sum_{\nu=0}^nZ_\nu$),
conditional on survival, does not require or does require
normalization in order to converge to a non-degenerate limit.
If no normalization is required, then no particular restriction
needs to be placed on the score distribution function $S$, but the
limit laws are rather complex mixtures of the laws of extreme
order statistics. The principal result states that
\[
\lim_{n\to\infty}P(M_{(k),n}\le x| {\cal A}_n)=\sum_{j=1}^\infty\sum_{i=0}^{k-1}{j \choose i}(1-S(x))^iS^{j-i}(x)g_j,
\]
where it is assumed that the conditional law ${\cal G}_n$ of $Z_n$
given ${\cal A}_n$ (${\cal A}_n$ includes non-extinction)
converges to a discrete and non-defective limit ${\cal G}$ and
$g_j$ denotes the mass attributed to $j$ by ${\cal G}$.
If normalization is required, then one must assume that the score
distribution function $S$ is attracted to an extremal law, and
then the limit laws are mixtures of the classical limiting laws of
extreme order statistics. Let us assume that there are positive
constants $C_n\uparrow \infty$ such that for the conditional law
${\cal G}_n$ we have ${\cal G}_n(xC_n)\Rightarrow N(x)$, where
$N(x)$ is a non-defective but possibly degenerate distribution
function. Assume also that the score distribution function $S$ is
in the domain of attraction of an extremal law given by
(\ref{dom_attr}). The general result in \cite{Pak98} is
\begin{equation}
\label{norm_limit}
\lim_{n\to\infty}P\left(\frac{M_{(k),n}-b(C_n)}{a(C_n)}\le x|{\cal A}_n\right)=\sum_{i=0}^{k-1}\frac{(h(x, \theta))^i}{i!}\int_0^\infty y^ie^{-yh(x, \theta)}dN(y).
\end{equation}
\begin{example} Consider an immortal (i.e., $P(X=0)=0$) supercritical process with shifted geometric offspring
law given by its p.g.f. $f(s)=s/(1+m-ms)$ $(m>1)$; then (see Pakes (1998)) (\ref{norm_limit}) becomes
\[
\lim_{n\to\infty}P\left(\frac{M_{(k),n}-b(C_n)}{a(C_n)}\le x|{\cal A}_n\right)=1-\left(\frac{h(x, \theta)}{1+h(x, \theta)}\right)^k.
\]
Thus the limit has a generalized logistic law when the score law
is attracted to the Gumbel law, $h(x, \theta)=e^{-x}$; and a
Pareto-type law results when $S$ is attracted to the Fr\'{e}chet law.
\end{example}
Phatarford (see \cite{Pak98}) has raised the question (in the context
of horse racing), ``What is the probability that the founder of a
family tree is better than all its descendants?'' The answer turns
out to be $E\left(T^{-1}\right)$, where $T={\displaystyle \sum_{n=0}^\infty Z_n}$ is the total number of individuals in the
family tree. More generally, if $\tau_n$ is the index of the
generation up to the $n$-th which contains the largest score,
Pakes (1998) proves that
\[
P(\tau_n=k)=E\left(\frac{Z_k}{T_n}\right), \quad (k=0,1,\ldots,n),
\]
as well as limit theorems for $\tau_n$ as $n\to \infty$.
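The answer $E\left(T^{-1}\right)$ to the question above admits a quick Monte Carlo check. The Python sketch below is our own illustration (the subcritical Poisson offspring law with mean $0.8$ is an arbitrary choice ensuring that the total progeny is a.s. finite); it compares the frequency with which the founder carries the largest score against the empirical mean of $T^{-1}$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def total_progeny(mean=0.8):
    # subcritical Galton-Watson process: total progeny T is a.s. finite
    total, current = 1, 1
    while current > 0:
        current = rng.poisson(mean, current).sum()
        total += current
    return total

reps = 20000
T = np.array([total_progeny() for _ in range(reps)])
# attach i.i.d. continuous scores; index 0 plays the role of the founder
founder_best = np.array([rng.random(t).argmax() == 0 for t in T])
print(founder_best.mean(), (1.0 / T).mean())   # the two estimates should agree
\end{verbatim}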
This subsection is based on \cite{ArnVil96} and \cite{Pak98}.
\subsection{Maximum scores in two-type processes}
Let each individual in a two-type branching process be equipped
with a non-negative continuous random variable, its individual score.
We present limit theorems for the maximum individual score.
Consider two independent sets of independent random vectors with
integer nonnegative components
\[\{{\bf X}^1(n)\}=\{(X^1_{1j}(n), X^1_{2j}(n))\} \ \ \mbox{and} \ \ \{{\bf X}^2(n)\}=\{(X^2_{1j}(n), X^2_{2j}(n))\} \ \ (j\ge 1; n\ge 0).
\]
A two-type branching process $\{{\bf Z}(n)\}=\{(Z_1(n),Z_2(n))\}$
is defined as follows: ${\bf Z}(0)\neq {\bf 0}$ a.s. and for $n=1,2,\ldots$
$$ Z_1(n)=\sum_{j=1}^{Z_1(n-1)} X^1_{1j}(n) + \sum_{j=1}^{Z_2(n-1)} X^2_{1j}(n),$$
\[Z_2(n)=\sum_{j=1}^{Z_1(n-1)} X^1_{2j}(n) + \sum_{j=1}^{Z_2(n-1)} X^2_{2j}(n).\]
Here $X^i_{kj}(n)$ refers to the number of
offspring of type $k$ produced by the $j$-th individual of type
$i$. With the $j$-th individual of type $i$ living in the $n$-th
generation we associate a non-negative continuous random variable
$\zeta_{ij}(n)$ $(i=1,2)$, its ``score'', say. Assume that the offspring
of type $1$ and type $2$ have scores which are independent and
identically distributed within each type. Define the maximum score
within the $n$-th generation by
\[{\cal M}_n^\zeta =\max\{{\cal M}_n^{\zeta_1},\ {\cal M}_n^{\zeta_2}\}, \qquad \mbox{where}\quad
{\cal M}_n^{\zeta_i}=\max_{1\le j\le Z_i(n-1)}\zeta_{ij}(n)\qquad (i=1,2).
\]
Note that this is the maximum of a random number of independent but
non-identically distributed random variables. Let
$F_i(x)=P(\zeta_i\le x)$ $(i=1,2)$ be the c.d.f.'s of the scores
of type 1 and type 2 individuals, respectively.
\noindent{\it Assumption 1}\ (tail-equivalence)\ We assume that
$F_1$ and $F_2$ are tail equivalent, i.e., they have the same
right endpoint $x_0$ and for some $A>0$
$$ \lim_{x \uparrow x_0} \frac{1-F_1(x)}{1-F_2(x)}=A.$$
\noindent{\it Assumption 2}\ (max-stability)\ Suppose $F_1$ is in
a max-domain of attraction, i.e., (\ref{3.8}) holds.
We consider the critical branching process ${\bf Z}(n)$ with mean
matrix ${\bf M}$, which is positively regular and nonsingular. Let
${\bf M}$ have maximum eigenvalue 1 and associated right and left
eigenvectors ${\bf u}=(u_1,u_2)$ and ${\bf v}=(v_1,v_2)$,
normalized such that ${\bf u \cdot v}=1$ and ${\bf u\cdot 1}=1$.
\begin{theorem}\ Let $\{{\bf Z}(n)\}$ be the above critical
two-type branching process. If the offspring variance $2B<\infty$
and both Assumptions 1 and 2 hold, then
\begin{equation} \label{theorem_2D}
\hspace{-0.3cm} \lim_{n \to \infty} P\left(\frac{{\cal M}_n^\zeta-b(v_1Bn)}{a(v_1Bn)} \le x|{\bf Z}(n) \ne {\bf 0} \right)=\frac{1}{1+h(x,\theta)+(v_2/v_1)h(cx+d,\theta)},
\end{equation}
where if $-\infty<\theta<\infty$ is fixed, then
$c=A^{1/|\theta |}$ and $d=0$; if $\theta\to \pm \infty$, then
$c=1$ and $d=\ln A$. \end{theorem}
{\bf Proof}. Since $F_1(x)$ and $F_2(x)$ are tail-equivalent, we have (see \cite{Res87}, p.67)
\[
\lim_{n\to \infty}\left(F_2(a(n)x + b(n))\right)^n = H(cx+d,\theta),
\]
where the constants $c$ and $d$ are as in (\ref{theorem_2D}). On
the other hand, it is well known (see \cite{AthNey72}, p.191) that
for $x>0$ and $y>0$
\[
\lim_{n\to\infty} P\left(\frac{Z_1(n)}{v_1Bn}\le x,\frac{Z_2(n)}{v_2Bn}\le y|{\bf Z}(n) \ne {\bf 0} \right)=G(x,y),
\]
where the limiting distribution has Laplace transform
\begin{equation}\label{LT}
\psi(\lambda, \mu)=\frac{1}{1+\lambda+\mu} \qquad (\lambda>0, \ \mu > 0).
\end{equation}
Set $x_n=a(v_1Bn)x+b(v_1Bn)$, $s_n=k/v_1Bn$, and $t_n=l/v_2Bn$.
Referring to the definition of both ${\cal M}_n^\zeta$ and the process $\{{\bf Z}(n)\}$, we obtain
\begin{eqnarray*}
\lefteqn{P\left({\cal M}_n^\zeta \leq x_n|{\bf Z}(n)\neq {\bf 0}\right)=\sum_{(k,l)={\bf 0}}^\infty P\left( {\bf Z}(n)=(k,l)|{\bf Z}(n)\neq {\bf 0} \right) P\left(\max\left\{{\cal M}_n^{\zeta_1}, {\cal M}_n^{\zeta_2}\right\}\le x_n\right)} \\
& = & \sum_{(k,l)={\bf 0}}^\infty P\left( \frac{Z_1(n)}{v_1Bn}=\frac{k}{v_1Bn}, \frac{Z_2(n)}{v_2Bn}=\frac{l}{v_2Bn}|{\bf Z}(n)\neq {\bf 0} \right) \left[F_1(x_n)\right]^k\left[F_2(x_n)\right]^l \\
& = & \sum_{(k,l)={\bf 0}}^\infty P\left( \frac{Z_1(n)}{v_1Bn}=s_n, \frac{Z_2(n)}{v_2Bn}=t_n|{\bf Z}(n)\neq {\bf 0} \right) \left[F_1(x_n)\right]^{(v_1Bn)s_n}\left[F_2(x_n)\right]^{(v_1Bn)t_n(v_2/v_1)} \\
& \to & \int_0^\infty \int_0^\infty H(x,\theta)^s H(cx+d,\theta)^{(v_2/v_1)t}d G(s,t) \\
& = & \int_0^\infty \int_0^\infty \exp \left\{ -sh(x,\theta)-t\frac{v_2}{v_1} h(cx+d, \theta)\right\}d G(s,t)\\
& = & \left[ 1+h(x, \theta)+\frac{v_2}{v_1} h(cx+d, \theta)\right]^{-1},
\end{eqnarray*}
where in the last formula we used the Laplace transform of $G(u,v)$ given in (\ref{LT}). The proof is complete.
The two examples below illustrate the kind of limit laws that can be encountered.
\begin{example} Let $F_1$ and $F_2$ be Pareto c.d.f.'s given for \ $x_i >\theta_i>0$ and $c>0$ by
\[
F_i(x_i)=1-\left(\frac{\theta_i}{x_i}\right)^c \qquad (i=1,2).
\]
Note that the two distributions share the same value of the
parameter $c$. It is not difficult to check that the limit is
log-logistic, given by
$$\lim_{n \to \infty} P\left\{\frac{{\cal M}_n^\zeta}{\theta_1(v_1Bn)^{1/c}} \le x|{\bf Z}(n) \ne {\bf 0} \right\}=\left[1+\left(1+\frac{v_2}{v_1}\left(\frac{\theta_1}{\theta_2}\right)^{-c}\right)x^{-c}\right]^{-1}.$$
\end{example}
\begin{example} Let $F_1$ and $F_2$ be exponential and logistic c.d.f.'s given by
\[
F_1(x_1)=1-e^{-x_1} \quad (0<x_1<\infty)\quad \mbox{and}\quad F_2(x_2)=\frac{1}{1+e^{-x_2}} \quad (-\infty < x_2 < \infty),
\]
respectively. It is known that both are in the max-domain of
attraction of $H(x)=\exp\{-\exp\{-x\}\}$ and share (see
\cite{AhsNev01}, p.91) the same normalizing constants $a(n)=1$ and
$b(n)=\ln n$. This fact, after inspecting the proof of the
theorem, allows us to bypass the tail-equivalence assumption and
obtain a logistic limiting distribution, i.e., for $-\infty <x <\infty$
$$\lim_{n \to \infty} P\left\{{\cal M}_n^\zeta-\log (v_1Bn) \le x\ |\ {\bf Z}(n) \ne {\bf 0} \right\}= \left[1+\left(1+\frac{v_2}{v_1}\right)e^{-x}\right]^{-1}.$$
\end{example}
The results in this subsection are modifications of those in \cite{MitYan02}.
\section*{Acknowledgments.} I thank I. Rahimov for igniting my interest in
extremes in branching processes. Thanks to the organizers of ISCPS
2007 for the excellent conference. This work is partially
supported by NFSI-Bulgaria, MM-1101/2001.
\begin{thebibliography}{999}
\bibitem {AhsNev01} Ahsanullah, M. and Nevzorov, V. B.,
{\it Ordered Random Variables}, Nova Science Publishers, Inc., Huntington, NY, 2001.
\bibitem {ArnVil96} Arnold, B. C. and Villase\~{n}or, J. A.,
The tallest man in the world. In: {\it Statist. Theory and Appl.: Papers in Honor of Herbert A. David},
Eds. Nagaraja, H. N., Sen, P. K. and Morrison, D. F., pp. 81--88, Springer, Berlin, 1996.
\bibitem {AthNey72} Athreya, K. B. and Ney, P. E., {\it Branching Processes}, Springer, New York, 1972.
\bibitem {Gal87} Galambos, J., {\it The Asymptotic Theory of Extreme Order Statistics}, 2nd Edn., Krieger, Melbourne, Florida, 1987.
\bibitem{GneGne82} Gnedenko, B. V. and Gnedenko, D. B.
Laplace distributions and the logistic distribution as limit distributions in probability theory.
{\it Serdica}, 8(1982), 2:229--234 (In Russian).
\bibitem {Har63} Harris, T. E., {\it The Theory of Branching Processes}, Springer, Berlin, 1963.
\bibitem {JagNer84} Jagers, P. and Nerman, O. Limit theorems for sums determined by branching and exponentially growing processes.
{\it Stoch. Proc. Appl.}, 17(1984), 47--71.
\bibitem{MO85} Marshall, A. W. and Olkin, I. A family of bivariate distributions generated by the Bernoulli distribution.
{\it J. Amer. Statist. Assoc.}, 80(1985), 332--338.
\bibitem {Mit98} Mitov, K. V. The maximal number of offspring of one particle in a branching process with state-dependent immigration.
{\it Proc. of the 27th Spring Conference of the Union of Bulgarian Mathematicians, Math. and Math. Education}, 1998, 92--97.
\bibitem {Mit05} Mitov, K. V. Extremes of bivariate geometric variables with application to bisexual branching processes.
{\it Pliska Studia Math. Bulgarica}, 17(2005), 349--362.
\bibitem {MitPakYan03} Mitov, K. V., Pakes, A. G., and Yanev, G. P. Extremes of geometric variables with applications to branching processes.
{\it Statist. and Probab. Letters}, 65(2003), 379--388.
\bibitem {MitYan99} Mitov, K. V. and Yanev, G. P. Maximum family size in branching processes with state-dependent immigration.
{\it Proc. of the 28th Spring Conference of the Union of Bulgarian Mathematicians, Montana, Math. and Math. Education}, 1999, 142--144.
\bibitem {MitYan02} Mitov, K. V. and Yanev, G. P. Maximum individual score in critical two-type branching processes.
{\it C. R. Acad. Bulg. Sci.}, 55(2002), 11:17--22.
\bibitem{MMR04} Molina, M., Mota, M., and Ramos, A. Limiting behaviour for superadditive bisexual Galton-Watson processes in varying environments.
{\it Test}, 13(2004), 2:481--499.
\bibitem {Pak71} Pakes, A. G. Branching processes with immigration. {\it J. Appl. Prob.}, 8(1971), 32--42.
\bibitem{Pak75} Pakes, A. G. Some new limit theorems for the critical branching process allowing immigration.
{\it Stoch. Proc. Appl.}, 3(1975), 175--185.
\bibitem {Pak98} Pakes, A. G. Extreme order statistics on Galton--Watson trees. {\it Metrika}, 47(1998), 95--117.
\bibitem {RahYan96} Rahimov, I. and Yanev, G. P. On a maximal sequence associated with simple branching processes.
{\it Institute of Mathematics and Informatics}, Sofia, Preprint No. 6, 1996, 14 pp.
\bibitem {RahYan97} Rahimov, I. and Yanev, G. P. Maximal number of direct offspring in simple branching processes.
{\it Nonlinear Analysis, Theory, Methods and Applications}, 30(1997), 2015--2023.
\bibitem {RahYan99} Rahimov, I. and Yanev, G. P. On maximum family size in branching processes.
{\it J. Appl. Probab.}, 36(1999), 632--643.
\bibitem {Res87} Resnick, S., {\it Extreme Values, Regular Variation, and Point Processes}, Springer, Berlin, 1987.
\bibitem {Tad80} Tadikamalla, P. R. A look at the Burr and related distributions.
{\it International Statistical Review}, 48(1980), 337--344.
\bibitem {Urb56} Urbanik, K. On a problem concerning birth and death processes.
{\it Acta Math. Acad. Sci. Hungar.}, 7(1956), 99--106 (In Russian).
\bibitem {Wil94} Wilms, R. Fractional parts of random variables. Limit theorems and infinite divisibility.
Ph.D. Thesis, Technical University of Eindhoven, Eindhoven, Holland, 1994.
\bibitem {YanTso00} Yanev, G. P. and Tsokos, C. P. Family size order statistics in branching processes with immigration.
{\it Stoch. Anal. Appl.}, 18(2000), 4:655--670.
\bibitem {Zol54} Zolotarev, V. M. On a problem in the theory of branching processes.
{\it Uspehi Matemat. Nauk}, 9(1954), 147--156 (In Russian).
\end{thebibliography}
\end{document}
\begin{document}
\title[Lipschitz Equivalence Class, Ideal Class and Class number Problem]
{Lipschitz Equivalence Class, Ideal Class and the Gauss Class Number Problem}
\author{Li-Feng Xi}
\address{Institute of Mathematics, Zhejiang Wanli University, Ningbo,
Zhejiang, 315100, P.~R. China}
\email{[email protected]}
\author{Ying Xiong}
\address{Department of Mathematics, South China University of Technology,
Guangzhou, 510641, P.~R. China}
\email{[email protected]}
\subjclass[2000]{Primary 28A80, Secondary 11R29}
\keywords{Lipschitz equivalence, self-similar set, ideal class, class number}
\thanks{Ying Xiong is the corresponding author.}
\thanks{Supported by National Natural Science Foundation of China (Grant Nos
11101159, 11071224, 11071082), NCET, Fundamental Research Funds for the
Central Universities, SCUT (2012ZZ0073), and Morningside Center of
Mathematics.}
\begin{abstract}
Classifying fractals under bi-Lipschitz mappings in fractal geometry is just
as important as classifying topological spaces under homeomorphisms in
topology. This paper concerns the Lipschitz equivalence of totally
disconnected self-similar sets in~$\R^d$ satisfying the OSC and with
commensurable ratios. We obtain the complete Lipschitz invariants for
such self-similar sets. The key invariant we found is an ideal related to
the IFS. This discovery establishes a one-to-one correspondence between
the Lipschitz equivalence classes of self-similar sets and the ideal
classes in a related ring. Accordingly, two self-similar sets $A$ and $B$
with the same dimension and ratio root are Lipschitz equivalent if and
only if their ideals $I_A$ and $I_B$ are equivalent, i.e., $aI_A=bI_B$
for some elements $a$ and $b$ in the related ring~$R$. This result
reveals an interesting relationship between the Lipschitz class number
problem and the Gauss class number one problem for real quadratic fields,
which was proposed by Gauss in~1801 but still remains an open question
today. Our result implies that progress on the Lipschitz class
number problem may lead to a deeper understanding of the Gauss class number
problem.
By the Jordan-Zassenhaus Theorem in algebraic number theory on the
finiteness of ideal classes, we further prove a finiteness result about
the Lipschitz equivalence classes under the commensurable condition. This
result says that the geometrical structures of such self-similar sets are
essentially finite in view of Lipschitz equivalence, although the OSC
allows small copies of the self-similar sets to touch in infinitely many
geometric manners. In other words, the above finiteness result describes the open
set condition in terms of Lipschitz equivalence.
By contrast, we also study the non-commensurable case. It turns out that
the difference between the commensurable case and the non-commensurable
case is essential. In fact, we consider the family of self-similar sets
under the same restrictions only dropping the commensurable condition,
and then find that there may exist infinitely many Lipschitz equivalent
classes.
The simplest case of our result is that the related ring is a principal
ideal domain. In this case the class number is one, there is only one Lipschitz
equivalence class, and all self-similar sets in this class are Lipschitz
equivalent to a symbolic metric space. For example, the ring $\Z[1/N]$ is
a principal ideal domain for positive integer $N\ge 2$, then the above
result implies: suppose $A$ and $B$ are totally disconnected self-similar
sets satisfying the open set condition, if both $A$ and $B$ are generated
by $N$ contracting similarities with the same ratio $r$, then $A$ and $B$
are Lipschitz equivalent. This very special corollary of our main result
generalizes many known results on the Lipschitz equivalence of
self-similar sets.
\end{abstract}
\maketitle
\tableofcontents
\section{Introduction}\label{sec:intro}
\subsection{Background}
Two metric spaces $(X_1,\dst_1)$ and $(X_2,\dst_2)$ are said to be
\emph{Lipschitz equivalent}, denoted by $X_1\simeq X_2$, if there is a
bijection $f\colon X_1\to X_2$ which is \emph{bi-Lipschitz}, i.e., there
exists a constant $L\ge1$ such that
\[L^{-1}\dst_1(x,y)\le\dst_2\bigl(f(x),f(y)\bigr)\le L\dst_1(x,y)\quad
\text{for all $x,y\in X_1$}.
\]
Roughly speaking, the spaces $X_1$ and $X_2$ are ``almost the same'' from the
viewpoint of metric.
Two Lipschitz equivalent fractals can be considered as having the same
geometrical structure since many important geometrical properties are
invariant under the bi-Lipschitz mappings, such as
\begin{itemize}
\item fractal dimensions: Hausdorff dimension, packing dimension, etc;
\item properties of measures: doubling, Ahlfors-David regularity, etc;
\item metric properties: uniform perfectness, uniform disconnectedness,
etc.
\end{itemize}
By contrast, Gromov pointed out in~\cite{Gromo07} that ``isometry'' leads to
a poor and rather boring category and ``continuity'' takes us out of geometry
to the realm of pure topology. So Lipschitz equivalence is suitable for the
study of fractal geometry of sets. Classifying fractals under bi-Lipschitz
mappings in fractal geometry is just as important as classifying topological spaces
under homeomorphisms in topology.
Another interesting motivation of studying Lipschitz equivalence of fractals
comes from geometric group theory (see~\cite{Bonk06,FarMo98}). For example,
Farb and Mosher~\cite{FarMo98} established a quasi-isometry (in Gromov's
sense) from the group
\begin{equation*}
\xb{BS}(1,n)=\langle a,b\,|\, aba^{-1}=b^{n}\rangle
\end{equation*}
to some space with its upper boundary $C_n$ being a self-similar fractal.
Then they proved that $\xb{BS}(1,n)$ and $\xb{BS}(1,m)$ are quasi-isometric
if and only if two self-similar fractals $C_{n}$ and $C_{m}$ are Lipschitz
equivalent. In the appendix of~\cite{FarMo98}, Cooper obtained that $C_n
\simeq C_m$ if and only if $\log m/\log n\in \Q$.
In general it is very difficult to determine whether two fractals are
Lipschitz equivalent or not. Indeed, there is \emph{little} known about the
Lipschitz equivalence of fractals, even for the most familiar fractals---self
similar sets in Euclidean spaces.
This paper concerns the Lipschitz equivalence of totally disconnected
self-similar sets in~$\R^d$ satisfying the OSC and with commensurable ratios
(Definition~\ref{d:cms}). We obtain the \emph{complete} Lipschitz invariants
for such self-similar sets (Theorem~\ref{t:TOCElip}). This is the first
\emph{general} result on the Lipschitz equivalence of self-similar sets with
overlaps. The key invariant we found is an ideal related to the IFS
(Definition~\ref{d:ideal}). This discovery establishes the connection between
Lipschitz equivalence class of self-similar sets and \emph{ideal class} in
algebraic number theory. (See, e.g., \cite{Lang94,Neuki99} for detailed
introduction of algebraic number theory).
\begin{defn}[ideal class]\label{d:class}
Two nonzero ideals $I$ and $J$ of an integral domain~$R$ are said to be in
the same class if $aI=bJ$ for some $a,b\in R$. The corresponding
equivalence classes are called the \emph{ideal classes} of~$R$. The
\emph{class number} of~$R$, denoted by $h(R)$, is defined to be the number
of ideal classes.
\end{defn}
Historically, ideal theory was developed in the investigation of Fermat's
Last Theorem. In 1844, Kummer proved Fermat's Last Theorem for every odd
prime number $p\le 19$, based on the fact that the ring $\Z[e^{2\pi i/p}]$ is
a unique factorization domain for such~$p$. This is equivalent to $\Z[e^{2\pi
i/p}]$ having class number $h_{p}=1$. But $h_{p}>1$ for every odd prime number
$p\ge 23$, and so this proof failed for such cases. To settle this problem,
in 1847, Kummer introduced ``ideal numbers'' to recover a form of unique
factorization for the ring~$\Z[e^{2\pi i/p}]$. As a result, Kummer could prove
Fermat's Last Theorem for all regular prime numbers $p$, which are the prime
numbers such that $p\nmid h_{p}$ (all odd prime numbers less than $100$ are
regular except for $37,59,67$). Kummer's idea of ``ideal numbers'' was further
developed by Dedekind, who established the modern theory of ideals in algebra.
The study of ideal classes goes back to Lagrange and Gauss, before Kummer and
Dedekind's work on ideals. In 1773, Lagrange developed a general theory to
handle the problem of when an integer~$m$ is representable by a given binary
quadratic form
\[m=ax^2+bxy+cy^2,\]
where $a$, $b$ and $c$ are fixed integers with $\gcd(a,b,c)=1$. Some special
cases of this problem had been studied by Fermat and Euler. Lagrange defined
two quadratic forms $ax^2+bxy+cy^2$ and $AX^2+BXY+CY^2$ to be equivalent if
there exists an invertible integral linear change of variables
\[\begin{pmatrix}
x \\
y \\
\end{pmatrix}=
\begin{pmatrix}
\alpha & \beta \\
\gamma & \delta \\
\end{pmatrix}
\begin{pmatrix}
X \\
Y \\
\end{pmatrix},\quad
\text{where $\alpha,\beta,\gamma,\delta\in\Z$ with}\
\begin{vmatrix}
\alpha & \beta \\
\gamma & \delta \\
\end{vmatrix}=\pm1,
\]
that transforms $ax^2+bxy+cy^2$ to $AX^2+BXY+CY^2$. Note that equivalent
forms have the same discriminant~$D=b^2-4ac$, and more important that
equivalent forms represent the same set of integers. Let $h(D)$ denote the
number of equivalence classes of binary quadratic forms with discriminant
$D$. In \emph{Disquisitiones Arithmeticae} published in~1801, Gauss further
studied this problem and proved that $h(D)$ is finite for every value~$D$.
Moreover, Gauss put forward his famous class number problems (see
Section~\ref{ssec:clsnmb}). The relationship between the equivalence class of
quadratic forms and ideal class is that, for every negative square free
integer~$D$, the equivalence classes of binary quadratic forms with
discriminant~$D$ correspond \emph{one-to-one} to the ideal classes of the
ring~$\xc O_D$, which is the ring of all algebraic integers
in~$\mathbb{Q}(\sqrt D)$. In other words, $h(D)$ equals the class number
of~$\xc O_D$ for every negative square free integer~$D$ (see, e.g.,
\cite{Goldf85,Stark07}).
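For small discriminants the form class number $h(D)$ is easy to compute by enumerating reduced forms. The Python sketch below is an illustrative aside of ours (not part of the argument): it counts primitive reduced forms $ax^2+bxy+cy^2$ with $|b|\le a\le c$ (and $b\ge0$ when $|b|=a$ or $a=c$) of a given negative discriminant $D\equiv0,1\pmod 4$; for fundamental $D<0$ this count agrees with the ideal class number discussed above.
\begin{verbatim}
from math import gcd, isqrt

def form_class_number(D):
    # count reduced primitive binary quadratic forms of discriminant D < 0
    assert D < 0 and D % 4 in (0, 1)
    h, b = 0, D % 2
    while b * b <= -D // 3:          # reduced forms satisfy 3b^2 <= -D
        ac4 = b * b - D              # 4ac = b^2 - D
        for a in range(max(b, 1), isqrt(ac4 // 4) + 1):
            if ac4 % (4 * a) == 0:
                c = ac4 // (4 * a)
                if gcd(gcd(a, b), c) == 1:
                    # +-b give distinct forms unless b=0, a=b or a=c
                    h += 1 if (b == 0 or a == b or a == c) else 2
        b += 2
    return h

print([form_class_number(D) for D in (-3, -4, -23, -163)])   # 1, 1, 3, 1
\end{verbatim}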
Just like the binary quadratic forms, for some appropriate families of
self-similar sets, we can establish a \emph{one-to-one} correspondence
between the Lipschitz equivalence classes and the ideal classes of a related
ring (Theorem~\ref{t:TOCEpr} and~\ref{t:TOCELCN}).
\subsection{Main results}
For convenience, we recall some basic notions of self-similar sets,
see~\cite{Falco97,Falco03,Hutch81} for more details. Let
$\ms=\{S_1,S_2,\dots,S_N\}$ be an \emph{iterated functions system} (IFS) on a
complete metric space~$(X,\dst)$ where each $S_i$ is a contracting similarity
of ratio~$r_i\in(0,1)$, i.e., $\dst\bigl(S_i(x),S_i(y)\bigr)=r_i\dst(x,y)$.
The self-similar set generated by the IFS~$\ms$ is the unique nonempty
compact set $E_\ms\subset X$ such that $E_\ms=\bigcup_{i=1}^N S_i(E_\ms)$. We
say that the IFS~$\ms$ satisfies the \emph{strong separation condition} (SSC)
if the sets $\{S_i(E_\ms)\}$ are pairwise disjoint. The IFS~$\ms$ satisfies
the \emph{open set condition} (OSC) if there exists a nonempty bounded open
set~$O$ such that the sets $S_i(O)$ are disjoint and contained in~$O$. If
furthermore $O\cap E_\ms\ne\emptyset$, we say that the IFS~$\ms$ satisfies
the \emph{strong open set condition} (SOSC). Obviously, we have
\[\text{SSC}\Rightarrow\text{SOSC}\Rightarrow\text{OSC}.\]
For IFSs on Euclidean spaces, Hutchinson~\cite{Hutch81}, Bandt and
Graf~\cite{BanGr92} and Schief~\cite{Schie94} proved that $\hdim E_\ms$
equals the \emph{similarity dimension}~$s$ (the unique positive solution of
$\sum_{i=1}^N r_i^s=1$) if $\ms$ satisfies the OSC, and that
\[\text{SOSC}\Leftrightarrow\text{OSC}\Leftrightarrow\xc H^s(E_\ms)>0.\]
Here $\xc H^s$ denotes the $s$-dimensional Hausdorff measure.
\begin{defn}[commensurable]\label{d:cms}
The ratios~$r_1,\dots,r_N$ of the IFS~$\ms$ are said to be
\emph{commensurable} if the multiplicative group generated by
$\{r_1,\dots,r_N\}$ can be generated by a single number $r_\ms\in(0,1)$. In
other words, $r_i=r_\ms^{\lambda_i}$ with $\lambda_i\in\N$ for each $i$ and
$\gcd(\lambda_1,\dots,\lambda_N)=1$. We call $r_\ms$ the \emph{ratio root}
of~$\ms$.
\end{defn}
\begin{rem}
The ratios $r_1,\dots,r_N$ are commensurable if and only if $\log r_i/\log
r_j\in\Q$ for $1\le i,j\le N$.
\end{rem}
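Checking commensurability and recovering the ratio root is a finite computation once the log-ratios are known to be rational. The following Python sketch is our own helper (the float-to-fraction recovery via \texttt{limit\_denominator} is an illustrative tolerance choice, not part of the theory); it recovers $r_\ms$ and the exponents $\lambda_i$ from a list of ratios.
\begin{verbatim}
from fractions import Fraction
from functools import reduce
from math import gcd, log

def ratio_root(ratios, max_den=10**6):
    # assume log(r_i)/log(r_j) are rational; recover r and integer exponents
    logs = [log(r) for r in ratios]
    fracs = [Fraction(l / logs[0]).limit_denominator(max_den) for l in logs]
    den = reduce(lambda a, b: a * b // gcd(a, b), (f.denominator for f in fracs))
    lambdas = [int(f * den) for f in fracs]
    g = reduce(gcd, lambdas)
    lambdas = [l // g for l in lambdas]      # make gcd(lambda_1,...,lambda_N)=1
    r = ratios[0] ** (1.0 / lambdas[0])
    return r, lambdas

print(ratio_root([1/4, 1/8, 1/2]))           # expect r = 1/2, exponents [2, 3, 1]
\end{verbatim}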
We say the IFS~$\ms$ satisfies the TDC, denoted by $\ms\in\TDC$, if the
corresponding self-similar set~$E_\ms$ is totally disconnected. Write
\begin{multline*}
\OEC=\bigl\{\ms\colon\text{$\ms$ is an IFS on a Euclidean space}\\
\text{satisfying the OSC and with commensurable ratios}\bigr\},
\end{multline*}
and write $\ms\in\TOEC$ if $\ms\in\TDC\cap\OEC$, i.e., if in addition $E_\ms$ is totally disconnected.
In this paper, we introduce an ideal related to~$\ms\in\TOEC$ which turns out
to be a very important Lipschitz invariant. For an IFS
$\ms=\{S_1,\dots,S_N\}$ satisfying the OSC, the \emph{natural
measure}~$\mu_\ms$ of~$\ms$ is defined to be the normalized $s$-dimensional
Hausdorff measure restricted to~$E_\ms$, where $s=\hdim E_\ms$, i.e.,
$\mu_\ms=\xc H^s|_{E_\ms}/\xc H^s(E_\ms)$. Note that $\mu_\ms$ is the unique
Borel probability measure such that
\[\mu_\ms(A)=\sum_{i=1}^Nr_i^s\mu_\ms\bigl(S_i^{-1}(A)\bigr)\quad
\text{for all Borel sets~$A$}.
\]
For $\ms\in\OEC$, we call $p_\ms=r_\ms^s$ the \emph{measure root} of~$\ms$
(see Remark~\ref{r:muB}). It follows from $\sum_{i=1}^Nr_i^s=1$ and $\log
r_i/\log r_\ms$ is a positive integer that $p_\ms^{-1}$ is an algebraic
integer. Let
\[\Z[p_\ms]=\bigl\{P(p_\ms)\colon\text{$P$ is a polynomial with integer
coefficients}\bigr\}\]
be the ring generated by $p_\ms$ over the integer set~$\Z$.
\begin{defn}[interior separated set]
Suppose that $\ms\in\TOEC$. A compact set $F\subset E_\ms$ is called an
\emph{interior separated set} of~$E_\ms$ if $E_\ms\setminus F$ is also
compact and $F\subset O$ for some open set $O$~satisfying the OSC.
\end{defn}
We remark that $\mu_\ms(F)\in\Z[p_\ms]$ for every interior separated set~$F$.
\begin{defn}[ideal of IFS]\label{d:ideal}
Suppose that $\ms\in\TOEC$. The ideal of~$\ms$, denoted by~$I_\ms$, is
defined to be the ideal of~$\Z[p_\ms]$ generated by
\[\bigl\{\mu_\ms(F)\colon\text{$F$ is an interior separated set
of~$E_\ms$}\bigr\}.\]
\end{defn}
\begin{rem}
It is worth noting that the ideal~$I_\ms$ depends not only on the algebraic
properties of ratios, but also on the geometrical structure of the
self-similar set~$E_\ms$.
\end{rem}
\begin{rem}
At first sight it seems that we need to find all open sets which satisfy the
SOSC if we want to determine the ideal of an IFS. Fortunately, one such
open set is enough, see Remark~\ref{r:ideal} in Section~\ref{ssec:BDnum}.
\end{rem}
\begin{exmp}\label{e:idSSC}
We have $I_\ms=\Z[p_\ms]$ when $\ms$ satisfies the SSC. Indeed, the open
set $O=\{x\colon\dist(x,E_\ms)<\eps\}$ satisfies the OSC for $\eps$ small
enough. And so $E_\ms\subset O$ is an interior separated set. Therefore
$1=\mu_\ms(E_\ms)\in I_\ms$ and $I_\ms=\Z[p_\ms]$.
\end{exmp}
Our main result gives the complete Lipschitz invariants of self-similar sets
generated by IFSs in $\TOEC$. This is the first \emph{general} result on the
Lipschitz equivalence of self-similar sets with overlaps.
\begin{thm}\label{t:TOCElip}
Suppose that $\ms,\mt\in\TOEC$, then $E_\ms\simeq
E_\mt$ if and only if
\begin{enumerate}[\upshape(i)]
\item $\hdim E_\ms=\hdim E_\mt$;
\item $\log r_\ms/\log r_\mt\in\Q$;
\item $I_\ms=aI_\mt$ for some $a\in\R$.
\end{enumerate}
\end{thm}
\begin{rem}
We emphasize that, in Theorem~\ref{t:TOCElip}, the two IFSs~$\ms$ and~$\mt$
are allowed to be defined on Euclidean spaces of different dimensions. For
example, the IFS~$\ms$ in Example~\ref{e:NPI} is defined on~$\R^1$ and the
IFS~$\ms$ in Example~\ref{e:ideal} is defined on~$\R^2$. By
Theorem~\ref{t:TOCElip}, the two corresponding self-similar sets are
Lipschitz equivalent, see Figure~\ref{fig:NPI} and~\ref{fig:ideal}.
\end{rem}
Theorem~\ref{t:TOCElip} offers deep insight into the geometrical
structure of self-similar sets generated by IFSs in~$\TOEC$. To make this
more clear, we shall consider a family of self-similar sets with the same
ratio root to eliminate the influence of ratios. Let $\OEC(p,r)$ denote the
set consisting of all IFS~$\ms\in\OEC$ with $p_\ms=p$ and $r_\ms=r$. Given
$\ms,\mt\in\TOEC(p,r)$, we have $\hdim E_\ms=\hdim E_\mt=\log p/\log r$ and
$r_\ms=r_\mt=r$. Therefore, Conditions~(i) and~(ii) in
Theorem~\ref{t:TOCElip} are fulfilled and the ideals $I_\ms$ and~$I_\mt$
belong to the same ring~$\Z[p]$. Consequently, we have
\begin{thm}\label{t:TOCEpr}
Suppose that $\ms,\mt\in\TOEC(p,r)$, then the two self-similar sets $E_\ms$
and $E_\mt$ are Lipschitz equivalent if and only if their ideals $I_\ms$
and $I_\mt$ belong to the same ideal class of~$\Z[p]$.
\end{thm}
Roughly speaking, Theorem~\ref{t:TOCEpr} tells us that different Lipschitz
equivalence classes correspond to different ideal classes, see
Example~\ref{e:NPI}. It is natural to ask whether the correspondence induced
by Theorem~\ref{t:TOCEpr} is one-to-one. Our next result gives an affirmative
answer to this question. We define the number of Lipschitz equivalence
classes of self-similar sets generated by IFSs in~$\TOEC(p,r)$ to be the
\emph{Lipschitz class number} of~$\TOEC(p,r)$, denoted by $\LCN(p,r)$.
\begin{thm}\label{t:TOCELCN}
Suppose that $\TOEC(p,r)\ne\emptyset$. Then the Lipschitz equivalent
classes of self-similar sets generated by IFSs in~$\TOEC(p,r)$ correspond
one-to-one to the ideal classes of~$\Z[p]$. This means
$\LCN(p,r)=h(\Z[p])$. Moreover, the Lipschitz class number $\LCN(p,r)$ is
finite for every pair $p,r$.
\end{thm}
The most significant aspect of Theorem~\ref{t:TOCELCN} is the one-to-one
correspondence between the Lipschitz equivalence classes and the ideal
classes. We will further discuss this point in the next subsection.
It is also worth noting that the finiteness result about the Lipschitz
equivalence classes gives some interesting information about the OSC. For
self-similar fractals, the OSC is a generally accepted separation condition,
but it is too complicated to describe completely in geometry. For example,
the OSC allows the small copies of self-similar set touch in infinitely many
geometric manners. However, the finiteness result in Theorem~\ref{t:TOCELCN}
says that the touching manners are essentially finite in view of Lipschitz
equivalence. In other words, this finiteness result describes the open set
condition in terms of Lipschitz equivalence.
We present two examples to illustrate Theorem~\ref{t:TOCELCN}. For
positive square free integer~$D$, let $\xc O_D$ be the ring of all algebraic
integers in the field~$\Q(\sqrt D)$. We know from algebraic number theory
that
\[\xc O_D=\begin{cases}
\Z[\sqrt D], &\text{if $D\equiv 2$ or $3\pmod4$};\\
\Z[\frac{1+\sqrt D}2], &\text{if $D\equiv1\pmod4$};
\end{cases}\]
and
\begin{align*}
h(\xc O_D)=1\quad & \text{for $D=2,3,5,6,7,11,13,14,17,19,21,22,23,29,\dots$}; \\
h(\xc O_D)=2\quad & \text{for $D=10,15,26,30,34,35,39,42,51,55,58,65,\dots$}.
\end{align*}
For more square free $D>0$ with $h(\xc O_D)=1$ or $2$,
see~\cite{MolWi91,MolWi92}. Using the above facts about the class number
of~$\xc O_D$, we give the following two examples.
\begin{exmp}
Let $p=(\sqrt5-1)/2$, then $p^4+p^3+p=1$ and $p^3+2p^2=1$. Suppose that
$\ms,\mt\in\TOEC(p,r)$ with ratios $r^4,r^3,r$ and $r^3,r^2,r^2$,
respectively, then $p_\ms=p_\mt=p$. Since $\Z[p]=\Z[p+1]=\xc O_5$ has
class number one, we have $E_\ms\simeq E_\mt$. In this example, the
relative positions of the small copies of self-similar sets $E_\ms$ and
$E_\mt$ do \emph{not} affect the Lipschitz equivalence.
\end{exmp}
The following example involves the ring~$\xc O_{10}$, which has class
number~$2$. We consider the IFS family $\TOEC(p,r)$ with $p=\sqrt{10}-3$ and
$r=1/10$. Theorem~\ref{t:TOCELCN} says that there are two Lipschitz
equivalence classes since $\Z[p]=\Z[\sqrt{10}]=\xc O_{10}$.
\begin{figure}
\caption{The structure of~$E_\ms$ in Example~\ref{e:NPI}}
\label{fig:NPI}
\end{figure}
\begin{exmp}\label{e:NPI}
Let $\ms=\{S_1,S_2,\dots,S_7\}\in\TOEC(p,r)$, where $p=\sqrt{10}-3$,
$r=1/10$ and
\begin{gather*}
S_1\colon x\mapsto rx,\quad S_2\colon x\mapsto -r^2x+3r,\quad
S_3\colon x\mapsto rx+3r,\\
S_4\colon x\mapsto -rx+6r,\quad S_5\colon x\mapsto rx+6r,\quad
S_6\colon x\mapsto -rx+9r,\quad S_7\colon x\mapsto rx+9r.
\end{gather*}
See Figure~\ref{fig:NPI}. (In Figure~\ref{fig:NPI}, the
symbol~$\bm\circlearrowleft$ means that there is a minus sign in the
contraction coefficient of the corresponding similarity. In geometry, this
means a rotation by the angle~$\pi$.) Then $p_\ms=p=\sqrt{10}-3$ satisfies
the equation $p^2+6p=1$, and Example~\ref{e:gdid} shows that
\[I_\ms=(2,p+1)=(2,\sqrt{10})=\bigl\{2a+b\sqrt{10}\colon a,b\in\Z\bigr\}.\]
One can check that $I_\ms$ is not a principal ideal of the ring
$\Z[p_\ms]=\xc O_{10}$.
Let $\mt\in\TOEC(p,r)$ be an IFS satisfying the SSC with ratios $r^2,r,r,r,r,r,r$.
Then $p_\mt=p=\sqrt{10}-3$, and by Example~\ref{e:idSSC},
$I_\mt=\Z[p_\mt]=\xc O_{10}$ is a principal ideal.
Theorem~\ref{t:TOCELCN} implies that $E_\ms\not\simeq E_\mt$ and that
either $E\simeq E_\ms$ or $E\simeq E_\mt$ for every self-similar set~$E$
generated by an IFS in $\TOEC(p,r)$. In this example, the relative positions
of the small copies of self-similar sets do affect the Lipschitz
equivalence.
\end{exmp}
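The non-principality of $I_\ms=(2,\sqrt{10})$ asserted above can be verified by a standard norm argument, which we sketch; the symbols $\alpha$, $a$, $b$ below are introduced only for this computation. Every element of $(2,\sqrt{10})$ has the form $2x+\sqrt{10}\,y$ with $x,y\in\Z[\sqrt{10}]$, so the ideal consists exactly of the elements $a+b\sqrt{10}$ with $a$ even and has index~$2$ in $\Z[\sqrt{10}]$. If it were principal with generator $\alpha=a+b\sqrt{10}$, then
\[[\Z[\sqrt{10}]:(\alpha)]=|a^2-10b^2|=2,\qquad\text{hence}\qquad a^2\equiv\pm2\pmod5,\]
which is impossible because the squares modulo~$5$ are $0$, $1$ and~$4$.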
We close this subsection with a necessary and sufficient condition for the
IFS family $\TOEC(p,r)$ to be nonempty.
\begin{prop}\label{p:p,r}
The IFS family $\TOEC(p,r)\ne\emptyset$ if and only if $p,r\in(0,1)$ and
there exist positive integers $\lambda_1,\lambda_2,\dots,\lambda_N$ with
$\gcd(\lambda_1,\dots,\lambda_N)=1$ such that
\[p^{\lambda_1}+p^{\lambda_2}+\dots+p^{\lambda_N}=1.\]
Here we allow that $\lambda_i=\lambda_j$ for $i\ne j$.
\end{prop}
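As a quick illustration of how the criterion in Proposition~\ref{p:p,r} is applied (this instance is chosen only as an example), take $p=(\sqrt5-1)/2$, $N=2$ and $\lambda_1=1$, $\lambda_2=2$: then
\[p^{\lambda_1}+p^{\lambda_2}=p+p^2=1,\qquad \gcd(\lambda_1,\lambda_2)=1,\]
so $\TOEC(p,r)\ne\emptyset$ for this $p$ and every $r\in(0,1)$.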
\subsection{Class number problem}\label{ssec:clsnmb}
In this subsection, we will discuss the Lipschitz class number
of~$\TOEC(p,r)$ by making use of Theorem~\ref{t:TOCELCN}. This problem is
closely related to the famous class number problems posed by Gauss in
Articles~303 and~304 of \emph{Disquisitiones Arithmeticae} of~1801.
Recall that the class number of an algebraic number field is defined to be
the class number of the ring of all algebraic integers in it. We state some
Gauss class number problems in modern terminology.
\begin{cthm}[Gauss class number conjecture]
The number of imaginary quadratic fields $\Q(\sqrt D)$ $(D<0)$ which have a
given class number~$n$ is finite.
\end{cthm}
This conjecture was solved by Heilbronn in~1934. Gauss further posed the
following problem.
\begin{cthm}[Gauss class number problem]
For small~$n$, determine all $D<0$ such that the class number of~$\Q(\sqrt
D)$ equals~$n$.
\end{cthm}
Watkins~\cite{Watki04} gave the solution for $n\le100$ in~2004. As a special
case,
\begin{cthm}[Gauss class number one problem for imaginary quadratic fields]
There are only nine imaginary quadratic fields~$\Q(\sqrt D)$ $(D<0)$ with
class number one. They are
\[D\in\{-1,-2,-3,-7,-11,-19,-43,-67,-163\}.\]
\end{cthm}
This conjecture was solved by Baker~\cite{Baker68} and Stark~\cite{Stark67}
independently in~1967. For the contrasting case of real quadratic fields,
Gauss made the following conjecture.
\begin{cthm}[Gauss class number one problem for real quadratic fields]
There are infinitely many real quadratic fields~$\Q(\sqrt D)$ $(D>0)$ with
class number one.
\end{cthm}
This conjecture is still open. It is not even known whether there are
infinitely many algebraic number fields (of arbitrary degree) with class
number one.
Like the Gauss class number problems, we can pose the Lipschitz class number
problem.
\begin{ques}\label{q:LCN}
For a given integer~$n>0$, determine all~$p,r$ such that $\LCN(p,r)=n$.
\end{ques}
Theorem~\ref{t:TOCELCN} says that $\LCN(p,r)=n$ if and only if
$\TOEC(p,r)\ne\emptyset$ (see Proposition~\ref{p:p,r}) and $h(\Z[p])=n$.
However, from the Gauss class number problems, we know that, in algebraic
number theory, it is very difficult to determine all the algebraic
numbers~$p$ such that the ring~$\Z[p]$ has a given class number~$n$. In other
words, it seems very hard to solve Question~\ref{q:LCN} by making use of
algebraic number theory. A natural question is: can we obtain some results on
Question~\ref{q:LCN} by analyzing the geometrical structure of self-similar
sets directly? Such results, if obtained, might lead to a deeper understanding
of the Gauss class number problems. Of course, this is also very difficult,
and at present we do not know how to do it.
\subsection{More results on principal ideals}
The simplest case of Theorem~\ref{t:TOCELCN} is that the ring~$\Z[p]$ is a
principal ideal domain.
\begin{thm}\label{t:PID}
Suppose that $\ms,\mt\in\TOEC(p,r)$. If $\Z[p]$ is a principal ideal
domain, then $E_\ms\simeq E_\mt$.
\end{thm}
\begin{rem}
Like the SSC, in this case the relative positions of the small copies of
self-similar sets do \emph{not} affect the Lipschitz equivalence.
\end{rem}
The simplest example where $\Z[p]$ is a principal ideal domain is
$p=1/N$ for an integer $N\ge2$. Given $\ms=\{S_1,\dots,S_N\}\in\TOEC$ whose
$N$ ratios are all equal to $r$, the ring related to~$\ms$ is just $\Z[1/N]$.
This leads to the following theorem.
\begin{thm}\label{t:srt}
Suppose that $E\subset\R^d$ and $E'\subset\R^{d'}$ are totally disconnected
self-similar sets satisfying the open set condition. If both $E$ and $E'$
are generated by $N$ contracting similarities with the same ratio $r$, then
$E$ and $E'$ are Lipschitz equivalent.
\end{thm}
\begin{rem}\label{r:srt}
Theorem~\ref{t:srt} generalizes many known results on the Lipschitz
equivalence of self-similar sets (see Section~\ref{ssec:OSC}), although it
is only a very special corollary of Theorem~\ref{t:TOCELCN}.
\end{rem}
On the other hand, it is natural to think that a self-similar set has the
simplest geometrical structure if it is Lipschitz equivalent to a
self-similar set satisfying the SSC. If the set is generated by an
IFS~$\ms\in\TOEC$, then by Theorem~\ref{t:TOCElip} and Example~\ref{e:idSSC} this
is equivalent to the ideal $I_\ms$ being a principal ideal. Let $\PI$
denote the set of all IFSs $\ms\in\TOEC$ such that the ideal~$I_\ms$ is a
principal ideal. The following theorem then follows from Theorem~\ref{t:TOCElip}.
\begin{thm}\label{t:PI}
Suppose that $\ms,\mt\in\PI$. Then $E_\ms\simeq E_\mt$ if and only if
\begin{enumerate}[\upshape(i)]
\item $\hdim E_\ms=\hdim E_\mt$;
\item $\log r_\ms/\log r_\mt\in\Q$;
\item $\Z[p_\ms]=\Z[p_\mt]$.
\end{enumerate}
\end{thm}
The point is that the condition $I_\ms=aI_\mt$ in Theorem~\ref{t:TOCElip} is
equivalent to $\Z[p_\ms]=\Z[p_\mt]$ provided that $I_\ms$ and $I_\mt$ are
both principal ideals. In fact, under the assumption that both ideals are
principal, $I_\ms=aI_\mt$ is equivalent to $\Z[p_\ms]=b\Z[p_\mt]$ for some
$b\in\R$. Then observe that $b\in\Z[p_\ms]$ since $1\in\Z[p_\mt]$, and so
$b\Z[p_\ms]\subset\Z[p_\ms]=b\Z[p_\mt]$, i.e., $\Z[p_\ms]\subset\Z[p_\mt]$.
By symmetry we have $\Z[p_\ms]=\Z[p_\mt]$. However, in general $I_\ms=aI_\mt$
is not equivalent to $\Z[p_\ms]=\Z[p_\mt]$, see Example~\ref{e:NPIZ}.
\begin{exmp}\label{e:NPIZ}
Let $p_\ms=\sqrt{10}-3$ be the positive solution of the equation
$p_\ms^2+6p_\ms=1$ and $p_\mt=37\sqrt{10}-117$ the positive solution of the
equation $p_\mt^2+234p_\mt=1$. Then
$\Z[p_\ms]=\Z[\sqrt{10}]\ne\Z[p_\mt]=\Z[37\sqrt{10}]$. Let
$I_\ms=I_\mt=37(\sqrt{10}\Z+2\Z)$. One can check that $I_\ms$ is an ideal
of~$\Z[p_\ms]$ and $I_\mt$ is an ideal of~$\Z[p_\mt]$. Thus, $I_\ms=I_\mt$
but $\Z[p_\ms]\ne\Z[p_\mt]$.
\end{exmp}
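A short computation relates the two measure roots in the example above; it uses only the relation $p_\ms^2=1-6p_\ms$:
\[p_\ms^3=p_\ms\,p_\ms^2=p_\ms(1-6p_\ms)=p_\ms-6p_\ms^2=p_\ms-6(1-6p_\ms)=37p_\ms-6=37\sqrt{10}-117=p_\mt.\]
Thus $p_\mt=p_\ms^3$, from which the relation $p_\mt^2+234p_\mt=1$ and the equality $\Z[p_\mt]=\Z[37\sqrt{10}]$ (note $p_\mt+117=37\sqrt{10}$) can be checked directly.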
A natural question arises:
\begin{ques}\label{q:PI}
For what IFS~$\ms\in\TOEC$, is the ideal $I_\ms$ a principal ideal?
\end{ques}
We remark that Example~\ref{e:idSSC} says that $\PI$ contains all IFSs
$\ms\in\TOEC$ satisfying the SSC. It is also obvious that $\ms\in\PI$ if
$\Z[p_\ms]$ is a principal ideal domain. We further give another partial
answer to Question~\ref{q:PI}. An IFS~$\ms$ on~$\R^d$ is said to be
\emph{orthogonal homogeneous} if there is a $d\times d$ orthogonal
matrix~$\bm A$ such that each $S_i\in\ms$ has the form $S_i\colon x\mapsto
r_i\bm Ax+b_i$ with $r_i\in(0,1)$. In other words, the similarities in~$\ms$
have the same orthogonal part but their ratios may be different. We say an
IFS~$\ms$ satisfies the convex open set condition (COSC) if $\ms$ satisfies
the OSC with a convex open set. Let
\begin{multline}\label{eq:PIS}
\xs S:=\Bigl\{\ms\in\TOEC\colon \text{$\ms$ satisfies the COSC}\\
\text{and is orthogonal homogeneous}\Bigr\}.
\end{multline}
\begin{thm}\label{t:covopen}
For every $\ms\in\xs S$, we have $I_\ms=\Z[p_\ms]$. As a result,
$\xs S\subset\PI$.
\end{thm}
\begin{rem}
We don't know whether Theorem~\ref{t:covopen} is still true if we drop the
COSC. On the other hand, Examples~\ref{e:NPI} and~\ref{e:ideal} imply that
the condition that $\ms$ is orthogonal homogeneous cannot be relaxed too
much.
\end{rem}
\begin{rem}
In the commensurable case, Theorems~\ref{t:PI} and~\ref{t:covopen} extend
the results of~\cite{RuWaX12,XiRu07} in a very general setting for IFSs
on~$\R^d$ ($d\ge1$); see Section~\ref{ssec:OSC}.
\end{rem}
The paper is organized as follows. In Section~\ref{sec:SSS}, we review some
known results about the Lipschitz equivalence of self-similar sets and
present some new results on the non-commensurable case, including
Theorems~\ref{t:Z+} and~\ref{t:sublip}. Section~\ref{sec:Zp} concerns the
algebraic properties of the measure root. As a result, we give the proof of
Proposition~\ref{p:p,r}. Section~\ref{sec:ideal} is devoted to the proofs of
Theorems~\ref{t:TOCELCN} and~\ref{t:covopen}. This is based on some techniques
for computing the ideal of an IFS; see Theorems~\ref{t:gdid}
and~\ref{t:separated}. Section~\ref{sec:BD} introduces the notions of block
decomposition, interior blocks and measure polynomials. This section also
proves some basic results, such as the finiteness of the measure polynomials
(Proposition~\ref{p:finiteCP}) and the cardinality of boundary blocks and
interior blocks (Lemmas~\ref{l:C(k)}, \ref{l:CP(k)} and~\ref{l:CB(k)}). All of
these are fundamental to our study. Section~\ref{sec:idea} discusses the main
ideas behind the proof of Theorem~\ref{t:TOCElip}, including the cylinder
structure (Definition~\ref{d:qscldr}), the dense island structure
(Definition~\ref{d:dnsilnd}), the measure linear property
(Definition~\ref{d:msrln}) and the suitable decomposition
(Definition~\ref{d:suitde}). We conclude that section with
Lemma~\ref{l:suitde}, which is the tool used to construct the same cylinder
structure. The proof of Theorem~\ref{t:TOCElip} is presented in
Sections~\ref{sec:LipBE} and~\ref{sec:proof}. By making use of the cylinder
structure and the dense island structure, we first prove that the whole self-similar
set is Lipschitz equivalent to its interior blocks
(Proposition~\ref{p:BlipE}), and then deal with the Lipschitz equivalence between
interior blocks of different self-similar sets (Proposition~\ref{p:BlipB}).
Thus the proof of Theorem~\ref{t:TOCElip} is complete. Finally, we study the
non-commensurable case and give the proofs of Theorems~\ref{t:Z+}
and~\ref{t:sublip} in Section~\ref{sec:NC}.
\section{Geometrical Structure of Self-similar Sets}
\label{sec:SSS}
\subsection{About self-similar sets}
Self-similar sets in Euclidean spaces are fundamental objects in fractal
geometry. However, we do not know much about them.
Given an IFS $\ms=\{S_i\}_{i=1}^N$ consisting of contracting similarities
on~$\R^d$, Hutchinson~\cite{Hutch81} showed that there
is a unique nonempty compact set~$E_\ms\subset\R^d$, called the self-similar set,
such that
\[E_\ms=\bigcup_{i=1}^NS_i(E_\ms).\]
Conversely, given a self-similar set~$E$, it is not easy to determine all the
IFSs which generate~$E$, even under some reasonable additional conditions.
This is why we state our results in terms of IFSs rather than self-similar sets. This
problem is rather fundamental and has some relationship with the Lipschitz
equivalence problem of self-similar sets. In fact, if two IFSs generate the
same self-similar set, they must satisfy the conditions necessary for
Lipschitz equivalence. It is somewhat surprising that so little is known
about the generating IFSs of a given self-similar set. We refer
to~\cite{DenLa,FenWa09} for detailed study of this problem.
Another basic problem is to determine the dimension of self-similar sets. In
general this problem is very difficult. An open conjecture of Furstenberg says
that $\hdim E_\lambda=1$ for any irrational~$\lambda$, where
$E_\lambda=E_\lambda/3\cup (E_\lambda/3+\lambda/3)\cup (E_\lambda/3+2/3)$.
Although the IFSs involved are rather simple, the conjecture remained open
from the 1970s until it was settled by Hochman~\cite{Hochm12} very recently; see
also~\cite{Kenyo97,RaoWe98,SVe02}. We know much more about the dimension of
self-similar sets if some separation conditions hold. Such conditions control
the overlaps between small copies of the self-similar set. The OSC, which means
the overlaps are \emph{small}, was introduced by Moran~\cite{Moran46}. For
IFSs on Euclidean spaces, it is well known from Hutchinson~\cite{Hutch81}
that if $\ms$ satisfies the OSC, then $\hdim E_\ms$ equals the
\emph{similarity dimension}~$s$ (the unique positive solution of
$\sum_{i=1}^N r_i^s=1$) and the Hausdorff measure $\xc H^s(E_\ms)>0$.
Moreover, Bandt and Graf~\cite{BanGr92} and Schief~\cite{Schie94} proved that
\[\text{SOSC}\Leftrightarrow\text{OSC}\Leftrightarrow\xc H^s(E_\ms)>0.\]
Although various conditions equivalent to the OSC were obtained
in~\cite{BanGr92,Moran46,Schie94}, in general it is not known how to
determine whether a given IFS satisfies the OSC. We refer
to~\cite{BaHuR06,BanRa07} for more studies on the OSC. Another well-studied
separation condition is the \emph{weak separation property} (WSP), which
extends the OSC while allowing overlaps under iteration;
see~\cite{DasEd11,LauNg99,Zerner1996}.
If one wants to know more about the geometrical structure of self-similar
sets, the dimension alone is not enough, since it only tells us about
the size of the sets. It is natural to think that the self-similar sets in the
same Lipschitz equivalence class have the same geometrical structure. In this
sense, our result is a step towards a better understanding of the geometrical
structure of self-similar sets satisfying the OSC. In the remainder of this
section, we review some known results about Lipschitz equivalence of
self-similar sets in Euclidean spaces and generalize almost all of them by
making use of our new results. For other related works on Lipschitz
equivalence, see~\cite{DavSe97,FalMa89,MatSa09,Xi04,Xi07,XiXi12,XioXi09}.
\subsection{The SSC case}
When the self-similar sets satisfy the SSC, their geometrical structure is
clear since there are no overlaps between the small copies $S_i(E_\ms)$. But
the problem of Lipschitz equivalence in this case is rather difficult. It is
not hard to see that, in the SSC case, the algebraic properties of the ratios
of the self-similar sets completely determine whether or not they are
Lipschitz equivalent. However, we do not yet know completely what algebraic
properties affect the Lipschitz equivalence.
Cooper and Pignataro~\cite{CooPi88} studied order-preserving bi-Lipschitz
mappings between self-similar subsets of $\R^1$ and proved the measure linear
property (see Section~\ref{ssec:ML}). Falconer and Marsh~\cite{FalMa92}
obtained two necessary conditions in terms of algebraic properties of ratios.
Based on the ideas in~\cite{CooPi88,FalMa92}, Rao, Ruan and
Wang~\cite{RaRuW12} completely characterized the Lipschitz equivalence for
several \emph{special} kinds of self-similar sets satisfying the SSC. Some
sufficient and necessary conditions on the Lipschitz equivalence in the SSC
case were obtained in Xi~\cite{Xi10}, Llorente and Mattila~\cite{LloMa10} and
Deng and Wen et~al.~\cite{DeWXX11}. But these conditions are not based on the
algebraic properties of ratios and so it is impossible to verify them for
given IFSs.
Our results substantially improve the study of the SSC case. By
Theorem~\ref{t:PI} and Example~\ref{e:idSSC}, we find the complete Lipschitz
invariants in terms of algebraic properties of ratios under the commensurable
condition.
\begin{thm}\label{t:SSC}
Suppose that $\ms,\mt$ both satisfy the SSC and that the ratios of each of
them are commensurable. Then $E_\ms\simeq E_\mt$ if and only if
\begin{enumerate}[\upshape(i)]
\item $\hdim E_\ms=\hdim E_\mt$;
\item $\log r_\ms/\log r_\mt\in\Q$;
\item $\Z[p_\ms]=\Z[p_\mt]$.
\end{enumerate}
\end{thm}
We remark that Conditions~(ii) and~(iii) are independent; see the
following two examples.
\begin{exmp}\label{e:ssc1}
Let $\ms$ be an IFS satisfying the SSC with ratios~$3^{-1}$, $3^{-1}$,
$3^{-2}$ and~$3^{-2}$, and $\mt$ an IFS satisfying the SSC with ratios
\[\underbrace{3^{-3},\dots,3^{-3}}_{20},
\underbrace{3^{-6},\dots,3^{-6}}_{8}.\]
Then $p_\ms=\frac{\sqrt3-1}2$ is the positive solution of the equation
$2p_\ms^2+2p_\ms=1$ and $p_\mt=\frac{3\sqrt3-5}4$ is the positive solution
of the equation $8p_\mt^2+20p_\mt=1$. We have $\hdim E_\ms=\hdim E_\mt$,
\[\frac{\log p_\ms}{\log p_\mt}=\frac13\in\Q\quad \text{and}\quad
\Q(p_\ms)=\Q(p_\mt)=\Q(\sqrt3),\]
but
\[\Z[p_\ms]=\Z[\sqrt3,\frac12]\ne\Z[3\sqrt3,\frac12]=\Z[p_\mt].\]
\end{exmp}
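The value $\log p_\ms/\log p_\mt=1/3$ in the example above reflects the identity $p_\mt=p_\ms^3$, which can be verified directly from $p_\ms^2=(2-\sqrt3)/2$:
\[p_\ms^3=p_\ms\cdot p_\ms^2=\frac{(\sqrt3-1)(2-\sqrt3)}4=\frac{3\sqrt3-5}4=p_\mt.\]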
\begin{exmp}\label{e:ssc2}
Let $p_\ms=\frac{\sqrt5-1}4$ be the positive solution of the equation
$4p_\ms^2+2p_\ms=1$ and $p_\mt=\frac{\sqrt5-2}2$ the positive solution of
the equation $4p_\mt^2+8p_\mt=1$. Then
\[\Z[p_\ms]=\Z[p_\mt]=\Z[\sqrt5,\frac12],\quad \text{but}\
\log p_\ms/\log p_\mt\notin\Q\]
since $p_\mt=4p_\ms^3$.
\end{exmp}
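Similarly, the relation $p_\mt=4p_\ms^3$ used above is a direct computation:
\[4p_\ms^3=4\Bigl(\frac{\sqrt5-1}4\Bigr)^3=\frac{(\sqrt5-1)^3}{16}=\frac{8\sqrt5-16}{16}=\frac{\sqrt5-2}2=p_\mt.\]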
\subsection{The non-commensurable case}
It is interesting to compare Theorem~\ref{t:SSC} with Falconer and Marsh's
classic result in~\cite{FalMa92}. Without assuming the commensurable
condition, they obtained some \emph{necessary} conditions for $E_\ms\simeq
E_\mt$.
\begin{thm*}[Falconer and Marsh~\cite{FalMa92}]
Suppose that $\ms,\mt$ both satisfy the SSC, that $r_1,\dots,r_n$ are the ratios
of~$\ms$, and that $t_1,\dots,t_m$ are the ratios of~$\mt$. The following conditions are
necessary for $E_\ms\simeq E_\mt$.
\begin{enumerate}[\upshape(i)]
\item $\hdim E_\ms=\hdim E_\mt$;
\item there exist positive integers $u,v$ such that
\begin{gather*}
\sgp(r_1^u,\dots,r_n^u)\subset\sgp(t_1,\dots,t_m), \
\sgp(t_1^v,\dots,t_m^v)\subset\sgp(r_1,\dots,r_n),
\end{gather*}
where $\sgp(a_1,\dots,a_n)$ denotes the multiplicative sub-semigroup
of positive real numbers generated by $a_1,\dots,a_n$;
\item $\Q(r_1^s,\dots,r_n^s)=\Q(t_1^s,\dots,t_m^s)$, where $s=\hdim
E_\ms=\hdim E_\mt$.
\end{enumerate}
\end{thm*}
If we assume the commensurable condition, then Condition~(ii) in
Theorem~\ref{t:SSC} is equivalent to Condition~(ii) in Falconer and
Marsh's theorem, while Condition~(iii) in Theorem~\ref{t:SSC} is strictly
stronger than Condition~(iii) in Falconer and Marsh's theorem. In fact,
if $\ms$ and $\mt$ are as in Example~\ref{e:ssc1}, then $\ms$ and $\mt$
satisfy all the conditions in Falconer and Marsh's theorem. However,
Condition~(iii) of Theorem~\ref{t:SSC} says that the self-similar sets
$E_\ms$ and $E_\mt$ are not Lipschitz equivalent.
This observation inspires the following theorem. For positive numbers
$a_1,\dots,a_n$, let $\Z^+[a_1,\dots,a_n]$ denote the smallest set that
contains $a_1,\dots,a_n$ and all positive integers, and is closed under
addition and multiplication. In other words,
\begin{multline}\label{eq:Z+[a]}
\Z^+[a_1,\dots,a_n]=\bigl\{P(a_1,\dots,a_n)\colon\\
\text{$P$ is a polynomial with positive integer coefficients}\bigr\}.
\end{multline}
\begin{thm}\label{t:Z+}
Let $\ms$ and $\mt$ be two IFSs satisfying the SSC. Suppose that
$E_\ms\simeq E_\mt$ and $\hdim E_\ms=\hdim E_\mt=s$. Then
\begin{equation}\label{eq:Z+}
\Z^+[r_1^s,\dots,r_n^s]=\Z^+[t_1^s,\dots,t_m^s],
\end{equation}
where $r_1,\dots,r_n$ are the ratios of~$\ms$ and $t_1,\dots,t_m$ the
ratios of~$\mt$.
\end{thm}
\begin{rem}
Theorem~\ref{t:Z+} strengthens Condition~(iii) in Falconer and Marsh's
theorem. For this, note that
$\Z^+[r_1^s,\dots,r_n^s]=\Z^+[t_1^s,\dots,t_m^s]$ implies
$\Z[r_1^s,\dots,r_n^s]=\Z[t_1^s,\dots,t_m^s]$, and the latter implies
$\Q(r_1^s,\dots,r_n^s)=\Q(t_1^s,\dots,t_m^s)$.
\end{rem}
\begin{rem}\label{r:eqcond}
Under the commensurable condition, if we assume that $\hdim E_\ms=\hdim
E_\mt=s$ and the Condition~(ii) in Theorem~\ref{t:SSC}, then the
Condition~(iii) in Theorem~\ref{t:SSC} is equivalent to~\eqref{eq:Z+}, see
Lemma~\ref{l:Z[p]}(e).
\end{rem}
For convenience of further discussion, we introduce some notation. Let $\ms$
be an IFS consisting of contracting similarities with ratios $r_1,\dots,r_n$.
Write
\begin{equation}\label{eq:sgpZ+}
\sgp\ms=\sgp(r_1,\dots,r_n)\quad\text{and}\quad
\Z^+[\ms]=\Z^+[r_1^s,\dots,r_n^s],
\end{equation}
where $s=\hdim E_\ms$. We say that two multiplicative sub-semigroups $G_1$ and
$G_2$ of~$(0,1)$ are equivalent, denoted by $G_1\sim G_2$, if there exist two
positive integers $u$ and $v$ such that $g_1^u\in G_2$ for all $g_1\in G_1$
and $g_2^v\in G_1$ for all $g_2\in G_2$. With this notation, we can rewrite
the above necessary conditions as: if $E_\ms\simeq E_\mt$, then
\begin{enumerate}[\upshape(i)]
\item $\hdim E_\ms=\hdim E_\mt$;
\item $\sgp\ms\sim\sgp\mt$;
\item $\Z^+[\ms]=\Z^+[\mt]$.
\end{enumerate}
From Theorem~\ref{t:SSC}, Theorem~\ref{t:Z+} and Remark~\ref{r:eqcond}, one
might expect that the above necessary conditions are also sufficient for
$E_\ms\simeq E_\mt$. Unfortunately, it turns out that these conditions are
far from being sufficient. Indeed, we can find infinitely many IFSs
satisfying the SSC such that any two of them satisfy the above conditions~(i),
(ii) and~(iii), but are not Lipschitz equivalent (Example~\ref{e:infSSC}).
This fact implies that the difference between the commensurable case and the
non-commensurable case is \emph{essential} and that the problem for the
non-commensurable case is much more difficult. This is also why we cannot
drop the commensurable assumption in Theorem~\ref{t:TOCElip}. Among many
difficulties, the lack of some finiteness result like
Proposition~\ref{p:finiteCP} in the non-commensurable case may be the biggest
obstacle. How to settle the problem for the non-commensurable case is still
not clear.
The insufficiency of the conditions (i), (ii) and (iii) follows from a new
criterion for the Lipschitz equivalence. To state it, we need some more
notation.
Let $\ms$ be an IFS consisting of contracting similarities. For every
multiplicative sub-semigroup $G$ of $(0,1)$, write
\[\ms^G=\bigl\{S\in\ms\colon (r\cdot\sgp\ms)\cap G\ne\emptyset,\
\text{where $r$ is the ratio of~$S$}\bigr\}.\]
\begin{exmp}
Let $\ms=\{S_1,S_2,S_3,S_4\}$ with the corresponding ratios
\[r_1=a,\quad r_2=a^2,\quad r_3=ab,\quad r_4=b,\]
where $a,b\in(0,1)$ such that $\log a/\log b\notin\Q$. Then $\sgp\ms$ is
the multiplicative semigroup generated by $a$ and $b$. Let $G_1$ be the
multiplicative semigroup generated by $a$, $G_2$ the multiplicative
semigroup generated by $b$, and $G_3$ the multiplicative semigroup
generated by $ab$. Then
\[\ms^{G_1}=\{S_1,S_2\},\quad \ms^{G_2}=\{S_4\},\quad \ms^{G_3}=\ms.\]
\end{exmp}
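To illustrate how such memberships are checked, here are two of the verifications from the example above written out; they use only the definition of $\ms^G$ and the assumption $\log a/\log b\notin\Q$:
\[r_2\cdot b^2=a^2b^2=(ab)^2\in G_3,\qquad r_1\cdot a^ib^j=a^{1+i}b^j\notin G_2\ \text{for all admissible $i,j$},\]
the latter because $a^{1+i}b^j=b^k$ would force $\log a/\log b=(k-j)/(1+i)\in\Q$.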
To simplify notation, we write $\ms\simeq\mt$ instead of $E_\ms\simeq E_\mt$.
When the IFS~$\mt$ is empty or contains only one similarity~$S$, we keep the
conventions that $\ms\simeq\emptyset$ if and only if $\ms=\emptyset$ and that
$\ms\simeq\{S\}$ if and only if $\ms$ also contains only one similarity.
\begin{thm}\label{t:sublip}
Let $\ms$ and $\mt$ be two IFSs satisfying the SSC. Then $\ms\simeq\mt$ if
and only if $\ms^G\simeq\mt^G$ for all multiplicative sub-semigroups $G$ of
$(0,1)$.
\end{thm}
Theorem~\ref{t:sublip} yields the following example.
\begin{exmp}\label{e:infSSC}
Let $\ms$ be an IFS satisfying the SSC with ratios $1/9$ and $4/9$; then
$\hdim E_\ms=1/2$. Let $\ms_1=\ms$; $\ms_2$ an IFS satisfying the SSC
with ratios $1/81$, $1/81$, $1/81$ and $4/9$; \dots; $\ms_n$ an IFS
satisfying the SSC with ratios
\[\underbrace{9^{-n},\dots,9^{-n}}_{3^{n-1}},4/9;\]
and so on. Then we have
\begin{enumerate}[\upshape(i)]
\item $\hdim E_{\ms_1}=\hdim E_{\ms_2}=\dots=\hdim
E_{\ms_n}=\dots=1/2$,
\item $\sgp\ms_1\sim\sgp\ms_2\sim\dots\sim\sgp\ms_n\sim\cdots$,
\item $\Z^+[\ms_1]=\Z^+[\ms_2]=\dots=\Z^+[\ms_n]=\dots=\Z^+[1/3]$,
\end{enumerate}
but $\ms_i\not\simeq\ms_j$ whenever $i\ne j$ since $\hdim
E_{\ms_n^G}=\frac{n-1}{2n}$, where $G$ is the multiplicative semigroup
generated by~$1/9$.
\end{exmp}
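The dimension claims in the example above follow from the Moran equation under the SSC; we record the computation as a sketch. With $s=1/2$,
\[3^{n-1}\cdot\bigl(9^{-n}\bigr)^{1/2}+\Bigl(\frac49\Bigr)^{1/2}=3^{n-1}\cdot3^{-n}+\frac23=\frac13+\frac23=1,\]
so $\hdim E_{\ms_n}=1/2$ for every~$n$. Moreover, multiplying $4/9$ by any element of $\sgp\ms_n$ leaves a factor $4^k$ with $k\ge1$, which can never be cancelled against a power of~$1/9$, so $\ms_n^G$ consists precisely of the $3^{n-1}$ maps of ratio~$9^{-n}$; the Moran equation $3^{n-1}\bigl(9^{-n}\bigr)^t=1$ then gives $\hdim E_{\ms_n^G}=t=\frac{n-1}{2n}$.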
Using the same idea, we have the following more general result.
\begin{prop}\label{p:infSSC}
Let $\ms$ be an IFS satisfying the SSC and $\hdim E_\ms=s$. Suppose that
one of the ratios of~$\ms$, say $r$, satisfies
\begin{itemize}
\item $\ms^G\ne\ms$, where $G$ is the multiplicative semigroup
generated by~$r$;
\item there exist positive integers $\lambda_1$, $\lambda_2$, \dots,
$\lambda_m$ such that
\[r^{\lambda_1}+r^{\lambda_2}+\dots+r^{\lambda_m}=1.\]
\end{itemize}
Then there exist infinitely many IFSs $\ms_1$, $\ms_2$, \dots\ satisfying
the SSC such that for each $n\ge1$,
\begin{enumerate}[\upshape(i)]
\item $\hdim E_{\ms_n}=s$,
\item $\sgp\ms_n\sim\sgp\ms$,
\item $\Z^+[\ms_n]=\Z^+[\ms]$,
\end{enumerate}
but $\ms_i\not\simeq\ms_j$ whenever $i\ne j$.
\end{prop}
\begin{rem}
Note that, if we assume the commensurable condition, then
Conditions~(i), (ii) and~(iii) in Proposition~\ref{p:infSSC} ensure that
there is only one Lipschitz equivalence class in the SSC case
(Theorem~\ref{t:SSC}), or that there are only finitely many Lipschitz
equivalence classes in the OSC case (Theorem~\ref{t:TOCELCN}). However,
Proposition~\ref{p:infSSC} says that such a finiteness result does not hold
without the commensurable condition. In other words, the difference between
the commensurable case and the non-commensurable case is essential.
\end{rem}
\subsection{The OSC case}\label{ssec:OSC}
If the SSC does not hold, the situation is much more complicated. Unlike the
SSC case, generally, the geometrical structure depends not only on the
algebraic properties of the ratios, but also on the relative positions of the
small copies of self-similar sets due to the occurrence of the overlaps. In
fact, we know \emph{very little} about the geometrical structure of
self-similar sets with overlaps. This is a fundamental but extremely
difficult problem in fractal geometry. Here we only discuss some known
results about Lipschitz equivalence in the OSC case.
Wen and Xi~\cite{WenXi03} studied self-similar arcs, a kind of connected
self-similar set satisfying the OSC. They constructed two self-similar arcs
of the same Hausdorff dimension which are not Lipschitz equivalent. This
means that the Hausdorff dimension is not enough to determine the Lipschitz
equivalence in this case. In general, Lipschitz invariants other than the
Hausdorff dimension remain unknown for self-similar sets with non-trivial
connected components. In fact, we still have no efficient method to
investigate such self-similar sets.
In the OSC case, we do know more if the self-similar sets satisfy the
\emph{total disconnectedness condition} (TDC). One reason is that the
geometrical structure of the totally disconnected self-similar sets
satisfying the OSC is similar to that of self-similar sets satisfying the
SSC, and so we can make use of some ideas appearing in the study of the SSC
case. Up to now, all known results in the OSC and TDC case are generalized
versions of the $\{1,3,5\}$-$\{1,4,5\}$ problem. Let
\begin{gather*}
E_{1,3,5}=(E_{1,3,5}/5)\cup(E_{1,3,5}/5+2/5)
\cup(E_{1,3,5}/5+4/5),\\
E_{1,4,5}=(E_{1,4,5}/5)\cup(E_{1,4,5}/5+3/5)
\cup(E_{1,4,5}/5+4/5).
\end{gather*}
\begin{figure}
\caption{The $\{1,3,5\}$-set and the $\{1,4,5\}$-set\label{fig:135145}}
\end{figure}
The two sets are called the $\{1,3,5\}$-set and the $\{1,4,5\}$-set, respectively
(see Figure~\ref{fig:135145}). David and Semmes~\cite{DavSe97} asked whether
$E_{1,3,5}$ and $E_{1,4,5}$ are Lipschitz equivalent, and the question is
called the $\{1,3,5\}$-$\{1,4,5\}$ problem. Rao, Ruan and Xi~\cite{RaRuX06}
gave an affirmative answer to this problem by using
graph-directed systems (Definition~\ref{d:graphdir}) to investigate the
self-similar sets. So far, all further developments depend more or less on
this method.
\begin{figure}
\caption{Generalized $\{1,4,5\}$-sets in the line\label{fig:135145g}}
\end{figure}
Xi and Ruan~\cite{XiRu07} studied generalized $\{1,4,5\}$-sets in the line
(see Figure~\ref{fig:135145g}). This is a version of the \{1,3,5\}-\{1,4,5\}
problem with different ratios. Given $r_1,r_2,r_3\in(0,1)$ with
$r_1+r_2+r_3<1$, let $\ms=\{S_1,S_2,S_3\}$, where
\[S_1\colon x\mapsto r_1x,\quad S_2\colon x\mapsto r_2x+(1-r_2-r_3),\quad
S_3\colon x\mapsto r_3x+(1-r_3).\]
Let $\mt$ be an IFS satisfying the SSC with ratios~$r_1$, $r_2$ and $r_3$.
They showed that
\[E_\ms\simeq E_\mt\Longleftrightarrow\log r_1/\log r_3\in\Q.\]
Recently, Ruan, Wang and Xi~\cite{RuWaX12} further studied this problem for
IFSs containing more than three similarities. Although the IFSs studied
by~\cite{RuWaX12,XiRu07} are allowed to have non-commensurable ratios, their
settings, which only consider IFSs on~$\R^1$ and require an open interval to
satisfy the OSC and some other additional conditions, are very special. The
method of~\cite{RuWaX12,XiRu07}, depending heavily on the special settings,
sheds no light on how to settle the problem for the non-commensurable case in
general. Under the assumption that the ratios are commensurable,
Theorems~\ref{t:PI} and~\ref{t:covopen} extend their results in a very general
setting for IFSs on~$\R^d$ ($d\ge1$).
\begin{figure}
\caption{The $\{1,3,5\}$-$\{1,4,5\}$ problem in~$\R^d$\label{fig:135145h}}
\end{figure}
The authors~\cite{XiXi10} considered the \{1,3,5\}-\{1,4,5\} problem in~$\R^d$
(see Figure~\ref{fig:135145h}) and showed that if the two self-similar sets
\[E_A=\bigcup_{a\in A}N^{-1}(E_A+a),\quad E_B=\bigcup_{b\in B}N^{-1}(E_B+b)\]
are totally disconnected, where $A,B\subset\{0,\dots,N-1\}^d$, then
$E_A\simeq E_B$ if and only if $\card A=\card B$. Recently, Luo and
Lau~\cite{LuoLa12} and Deng and He~\cite{DenHe12} also studied the IFSs with
equal ratios in a more general setting than~\cite{RaRuX06,XiXi10} and proved
some special cases of Theorem~\ref{t:srt}.
\begin{figure}
\caption{A rotation version of the $\{1,3,5\}$-$\{1,4,5\}$ problem\label{fig:135145r}}
\end{figure}
The authors~\cite{XioXi12} also proved a rotation version of the
\{1,3,5\}-\{1,4,5\} problem (see Figure~\ref{fig:135145r}). Let $S_1\colon
x\mapsto x/5$, $S_2\colon x\mapsto (-x+4)/5$ and $S_3\colon x\mapsto
(x+4)/5$. The self-similar set $E_{1,-4,5}=\bigcup_{i=1}^3S_i(E_{1,-4,5})$ is
called the $\{1,-4,5\}$-set. Then $E_{1,-4,5}\simeq E_{1,4,5}\simeq
E_{1,3,5}$. (In Figure~\ref{fig:135145r}, the symbol~$\bm\circlearrowleft$
means that there is a minus sign in the contraction coefficient of the
corresponding similarity. In geometry, this means a rotation by the
angle~$\pi$.)
As we have seen, the method of using graph-directed systems can deal with various
versions of the $\{1,3,5\}$-$\{1,4,5\}$ problem. But this method cannot give
a general result for the Lipschitz equivalence problem of self-similar sets,
since it is in general very hard or even impossible to find a suitable
graph-directed system for a given family of self-similar sets. In other words,
only very special self-similar sets can be studied using graph-directed
systems.
In this paper, we introduce blocks to study self-similar sets and
replace the graph-directed system by the interior blocks (see
Section~\ref{ssec:BDdef}). This new and powerful method leads to deeper
insights into the geometrical structure of self-similar sets than the method of
the graph-directed system. Consequently, we are able to generalize almost all
known results in the OSC and TDC case. In fact, all the results
in~\cite{DenHe12,LuoLa12,RaRuX06,XiXi10,XioXi12} are very special cases of
Theorem~\ref{t:srt}, which is only a very special corollary of
Theorem~\ref{t:TOCELCN}, while Theorems~\ref{t:PI} and~\ref{t:covopen} also
generalize the results in~\cite{RuWaX12,XiRu07} in the commensurable case.
More importantly, we think this new method will also be useful for further study
of Lipschitz equivalence and other related problems.
\section{The Algebraic Properties of Measure Root}
\label{sec:Zp}
This section concerns the algebraic properties of the measure root. As a result,
we give the proof of Proposition~\ref{p:p,r}.
The following lemma collects some algebraic properties of the measure
root. These properties may be known; we include the proofs only for
self-containedness, since we did not find appropriate references (note that
$\Z[p]$ is in general not a Dedekind domain).
\begin{lem}\label{l:Z[p]}
Let $p\in(0,1)$. Suppose that there exist positive integers
$\lambda_1,\lambda_2,\dots,\lambda_N$ with
$\gcd(\lambda_1,\dots,\lambda_N)=1$ such that
\[p^{\lambda_1}+p^{\lambda_2}+\dots+p^{\lambda_N}=1.\]
Then we have the following conclusions.
\begin{enumerate}[\upshape(a)]
\item $p^{-1}$ is an algebraic integer and $p^{-1}\in\Z[p]$.
\item The quotient ring $\Z[p]/I$ is finite for every nonzero ideal~$I$.
\item For each nonzero ideal~$I$ of~$\Z[p]$, there exists a positive
integer~$\ell$ such that $1-p^\ell\in I$.
\item $\Z[p]$ is noetherian, i.e., every ideal is finitely generated.
\item For each positive number $a\in\Z[p]$, there exists a polynomial~$P$
with positive integer coefficients such that $a=P(p)$. In other
words,
\[\{a>0\colon a\in\Z[p]\}=\Z^+[p],\]
where $\Z^+[p]$ is defined by~\eqref{eq:Z+[a]}.
\item Let $a_1,\dots,a_m$ be positive numbers and $I=(a_1,\dots,a_m)$ the
ideal of~$\Z[p]$ generated by $a_1,\dots,a_m$. Then, for each
positive number $a\in I$, there exist positive numbers
$b_1,\dots,b_m\in\Z[p]$ such that
\[a=a_1b_1+a_2b_2+\dots+a_mb_m.\]
\item $h(\Z[p])\le h(\Z[p^{-1}])$.
\item $h(\xc O_p)\le h(\Z[p^{-1}])$, where $\xc O_p$ denotes the ring of
all algebraic integers in the field~$\Q(p)$.
\end{enumerate}
\end{lem}
We remark that the inequalities in Lemma~\ref{l:Z[p]}(g) and~(h) may be
strict; see Examples~\ref{e:clane1} and~\ref{e:clane2}. We need the following
fact.
\begin{fct}\label{f:square}
The class number $h(\Z[\sqrt{n}\,])>1$ if the nonzero integer $n$ is not
square-free (i.e., $m^2\mid n$ for some integer $m>1$). To see this, one can
check that the ideal $(m,\sqrt n)$ is not a principal ideal, where $m$ is a
prime number such that $m^2\mid n$.
\end{fct}
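As a concrete instance of this fact (the one relevant to Example~\ref{e:clane1} below, since $\Z[2\sqrt{10}]=\Z[\sqrt{40}\,]$), take $n=40$ and $m=2$. The ideal $(2,\sqrt{40})$ consists of the elements $a+b\sqrt{40}$ with $a$ even, so it has index~$2$ in $\Z[\sqrt{40}\,]$, and a principal generator would have to satisfy
\[|a^2-40b^2|=2,\]
which is impossible because $a^2\equiv\pm2\pmod8$ has no solution (the squares modulo~$8$ are $0$, $1$ and~$4$). Hence $h(\Z[\sqrt{40}\,])>1$.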
\begin{exmp}\label{e:clane1}
Let $p=(\sqrt{10}-3)/2$ be the solution of $4p^2+12p=1$. Then
\[h(\Z[p])=h(\Z[\sqrt{10},\tfrac12])=1<h(\Z[p^{-1}])=h(\Z[2\sqrt{10}]).\]
To see that $h(\Z[\sqrt{10},\tfrac12])=1$, observe that the mapping
\[\pi\colon I\to I^*=\{2^{-\ell}\alpha\colon\alpha\in I,\ell\ge0\}\]
from the nonzero ideal~$I$ of~$\Z[\sqrt{10}]$ to the nonzero ideal~$I^*$
of~$\Z[\sqrt{10},\tfrac12]$ is a surjection, and that $I=a J$ implies
$I^*=a J^*$. Then since a nonzero ideal~$I$ of~$\Z[\sqrt{10}]$ is either a
principal ideal or belongs to the same ideal class as~$(2,\sqrt{10})$, it
follows from $\pi(2,\sqrt{10})=\Z[\sqrt{10},\tfrac12]$ that
$h(\Z[\sqrt{10},\tfrac12])=1$.
\end{exmp}
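The arithmetic behind Example~\ref{e:clane1} is elementary; we record it for completeness. With $p=(\sqrt{10}-3)/2$,
\[4p^2+12p=(19-6\sqrt{10})+6(\sqrt{10}-3)=1,\qquad p^{-1}=\frac2{\sqrt{10}-3}=2\sqrt{10}+6,\]
so $\Z[p^{-1}]=\Z[2\sqrt{10}]$, whose class number exceeds~$1$ by Fact~\ref{f:square} since $\Z[2\sqrt{10}]=\Z[\sqrt{40}\,]$ and $40$ is not square-free. Moreover $\sqrt{10}=2p+3\in\Z[p]$ and $\tfrac12=(\sqrt{10}+3)p\in\Z[p]$, which gives $\Z[\sqrt{10},\tfrac12]\subset\Z[p]$; the reverse inclusion is clear from $p=(\sqrt{10}-3)/2$.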
\begin{exmp}\label{e:clane2}
Let $p=5\sqrt2-7$ be the solution of $p^2+14p=1$. Then
\[h(\xc O_p)=h(\Z[\sqrt2])=1<h(\Z[p^{-1}])=h(\Z[5\sqrt2]).\]
\end{exmp}
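Similarly, for Example~\ref{e:clane2}: with $p=5\sqrt2-7$,
\[p^2+14p=(99-70\sqrt2)+(70\sqrt2-98)=1,\qquad p^{-1}=\frac1{5\sqrt2-7}=5\sqrt2+7,\]
so $\Z[p^{-1}]=\Z[5\sqrt2]=\Z[\sqrt{50}\,]$, whose class number exceeds~$1$ by Fact~\ref{f:square} ($50$ is not square-free), while $h(\xc O_p)=h(\Z[\sqrt2])=1$ (recall that $h(\xc O_2)=1$ from the list of class numbers given after Theorem~\ref{t:TOCELCN}).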
We remark that in Example~\ref{e:clane1}, $h(\Z[p])=1<h(\xc
O_p)=h(\Z[\sqrt{10}])=2$, while in Example~\ref{e:clane2}, $h(\xc
O_p)=1<h(\Z[p])=h(\Z[5\sqrt2])$.
The remainder of this section is devoted to the proof of Lemma~\ref{l:Z[p]}
and Proposition~\ref{p:p,r}. We begin with a technical lemma.
\begin{lem}\label{l:primitive}
Let
\[\bm\Xi=\begin{pmatrix}
\xi_1 & 1 & 0 & \hdotsfor4 & 0 \\
\xi_2 & 0 & 1 & 0 & \hdotsfor3 & 0 \\
\xi_3 & 0 & 0 & 1 & 0 & \hdotsfor2 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
\xi_{n-2} & 0 & \hdotsfor3 & 0 & 1 & 0 \\
\xi_{n-1} & 0 & \hdotsfor4 & 0 & 1 \\
\xi_n & 0 & \hdotsfor5 & 0 \\
\end{pmatrix}\]
be an $n\times n$ matrix, where $\xi_1,\xi_2,\dots,\xi_n$ are nonnegative
integers. Let $\lambda_1,\dots,\lambda_m$ be all the indices such that
$\xi_{\lambda_i}>0$. If $\gcd(\lambda_1,\dots,\lambda_m)=1$ and
$n\in\{\lambda_1,\dots,\lambda_m\}$, i.e., $\xi_n>0$, then the
matrix~$\bm\Xi$ is primitive. Moreover, let $p$ be the unique positive
solution of the equation $\xi_1p+\xi_2p^2+\dots+\xi_np^n=1$ and $\bm
p=(1,p,\dots,p^{n-1})$, then
\[\bm p\bm\Xi=p^{-1}\bm p.\]
In other words, the value $p^{-1}$ is the Perron-Frobenius eigenvalue
of~$\bm\Xi$ and the vector~$\bm p$ is the corresponding left-hand
Perron-Frobenius eigenvector.
\end{lem}
In what follows, $\bm A\ge\bm B$ means that each $a_{ij}\ge b_{ij}$ and $\bm
A>\bm B$ that each $a_{ij}>b_{ij}$ for arbitrary matrices $\bm A=(a_{ij})$
and $\bm B=(b_{ij})$.
\begin{proof}
The equality $\bm p\bm\Xi=p^{-1}\bm p$ is obvious. It remains to show that
the matrix~$\bm\Xi$ is primitive. Let $\bm A_{ij}$ be the $n\times n$
matrix such that the $(i,j)$-entry of $\bm A_{ij}$ is~$1$ and all other
entries are zero. Let $\bm B=(b_{ij})$ be the $n\times n$ matrix defined by
\[b_{ij}=\begin{cases}
1, & i+1 \equiv j \pmod n;\\
0, & \text{otherwise}.
\end{cases}\]
It follows from the definition of~$\lambda_1,\dots,\lambda_m$ that
\[\bm\Xi\ge\bm B+\sum_{\lambda_i\ne n}\bm A_{\lambda_i1}.\]
Since $\gcd\{\lambda_1,\lambda_2,\dots,\lambda_m\}=1$ and
$n\in\{\lambda_1,\lambda_2,\dots,\lambda_m\}$, there exist positive
integers $l_i$ and $l$ such that
\[\sum_{\lambda_i\ne n}l_i\cdot\lambda_i=ln+1.\]
Observe that $\bm B^{k-1} \bm A_{k1}=\bm A_{11}$ and $\bm B^n$ is the
identity matrix. We have
\[\biggl(\bm B +\sum_{\lambda_i\ne n}\bm A_{\lambda_i1}\biggr)
^{ln+1}\ge \bm B^{ln+1}+\prod_{\lambda_i\ne n}(\bm B^{\lambda_i-1}
\bm A_{\lambda_i1})^{l_i}=\bm B+\bm A_{11}.\]
Finally, in the directed graph associated with $\bm B+\bm A_{11}$ one can pass
from any vertex to vertex~$1$ in at most $n-1$ steps, stay at the loop at
vertex~$1$, and pass from vertex~$1$ to any vertex in at most $n-1$ steps;
hence $(\bm B+\bm A_{11})^{2n-2}>\bm0$.
\end{proof}
The following lemma is a well-known property of primitive matrices.
\begin{lem}\label{l:limMatrix}
Let $\bm\Xi$, $p$ and $\bm p$ be as in Lemma~\ref{l:primitive}. Suppose
that $\bm q$ is the right-hand Perron-Frobenius eigenvector of~$\bm\Xi$ such
that $\bm p\cdot\bm q=1$. Then
\[\lim_{k\to\infty}p^k\bm\Xi^k=\bm q\cdot\bm p.\]
\end{lem}
Now we are able to prove Lemma~\ref{l:Z[p]} and Proposition~\ref{p:p,r}.
\begin{proof}[Proof of Lemma~\ref{l:Z[p]}]
(a) Dividing the relation $p^{\lambda_1}+\dots+p^{\lambda_N}=1$ by $p$ gives
$p^{-1}=p^{\lambda_1-1}+\dots+p^{\lambda_N-1}\in\Z[p]$, and dividing it by
$p^{\lambda}$ with $\lambda=\max_i\lambda_i$ shows that $x=p^{-1}$ satisfies the
monic integer polynomial equation $x^{\lambda}=\sum_{i=1}^N x^{\lambda-\lambda_i}$.
(b) First observe that $\Z[p]/(m)$ is finite for every nonzero integer~$m$.
It remains to show that each nonzero ideal~$I$ contains a nonzero integer.
Pick a nonzero number $a\in I$. By (a), for $\ell$ large enough,
$p^{-\ell}a\in I$ is an algebraic integer. Thus for a fixed such $\ell$, we
can find a polynomial~$P$ with integer coefficients such that
$P(p^{-\ell}a)\in I$ is a nonzero integer.
(c) By (b), we can find two integers $\ell_2>\ell_1>0$ with
$p^{\ell_1}-p^{\ell_2}\in I$. We have $1-p^{\ell_2-\ell_1}\in I$ since
$p^{-1}\in\Z[p]$ by (a).
(d) Suppose on the contrary that $I$ is an ideal that is not finitely
generated. Then the quotient group $I/(a)$ is infinite for every nonzero $a\in I$,
which contradicts (b) since $I/(a)\subset\Z[p]/(a)$.
(e) Suppose that
\[p^{\lambda_1}+p^{\lambda_2}+\dots+p^{\lambda_N}=
\xi_1p+\xi_2p^2+\dots+\xi_np^n,\]
where $\xi_n>0$. Let $\bm\Xi$ be the matrix as in Lemma~\ref{l:primitive}.
Since $\gcd(\lambda_1,\dots,\lambda_N)=1$, the conditions of
Lemma~\ref{l:primitive} are fulfilled. Let $\bm p$ and $\bm q$ be the
Perron-Frobenius eigenvectors as in Lemma~\ref{l:limMatrix}. Since
$a\in\Z[p]$ is positive, there exist $\ell\ge0$ and a column vector $\bm
a=(a_1,a_2,\dots,a_n)^T$ with integer entries such that $a=p^\ell\bm
p\cdot\bm a>0$. Recall that $p^{-1}$ and $\bm p$ are the eigenvalue and the
eigenvector of the matrix~$\bm\Xi$, respectively. And so $a=p^{\ell+k}\bm
p\bm\Xi^k\bm a$ for all $k\ge0$. By Lemma~\ref{l:limMatrix},
\[p^k\bm\Xi^k\bm a\to\bm q\cdot(\bm p\cdot\bm a)>\bm0\quad
\text{as $k\to\infty$}.\]
This implies that $\bm\Xi^k\bm a>\bm0$ for sufficiently large~$k$. Thus,
Conclusion~(e) follows.
(f) We prove this by induction on~$m$. The case $m=1$ is obvious. Now
suppose that it is true for $m-1$, and let $a\in(a_1,\dots,a_m)$ be a
positive number. We have $a=a_1b'_1+\dots+a_mb'_m$ for some
$b'_1,\dots,b'_m\in\Z[p]$. Suppose without loss of generality that
$b'_m>0$. By (c), we can find a positive integer~$\ell$ such that
$1-p^\ell\in(a_1,\dots,a_{m-1})$. Pick $k$ large enough such that
$a-a_mb'_mp^{k\ell}>0$. The proof is completed by the induction
assumption since
\[0<a-a_mb'_mp^{k\ell}=a-a_mb'_m+a_mb'_m(1-p^{k\ell})
\in(a_1,\dots,a_{m-1}).\]
(g) For each nonzero ideal $I$ of~$\Z[p]$, write $I^*=I\cap\Z[p^{-1}]$,
then $I^*$ is a nonzero ideal of~$\Z[p^{-1}]$. It suffices to show the fact
that if $I^*=a J^*$ for some $a\in\R$, then $I=a J$, where $I$ and $J$ are
two nonzero ideals of~$\Z[p]$. Indeed, for each $b\in J$, there exists an
integer~$\ell$ with $bp^{-\ell}\in J^*$, so $abp^{-\ell}\in aJ^*=I^*$. Thus
$ab\in I$, i.e., $aJ\subset I$. By symmetry, $a^{-1}I\subset J$ and so
$I=aJ$.
(h) Recall that $p^{-1}$ is an algebraic integer and that
$\Q(p)=\Q(p^{-1})$. Together with the fact that $\xc O_p$ is a finitely
generated $\Z$-module, we know that there exists a positive integer $m$
such that $m\xc O_p\subset\Z[p^{-1}]$. For each nonzero ideal $I$ of~$\xc
O_p$, write $I^*=mI$, then $I^*$ is a nonzero ideal of~$\Z[p^{-1}]$. It is
obvious that $aI^*=J^*$ if and only if $aI=J$, where $I$ and $J$ are
two nonzero ideals of~$\xc O_p$. Therefore, $h(\xc O_p)\le h(\Z[p^{-1}])$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{p:p,r}]
Suppose first that there is an IFS $\ms\in\TOEC(p,r)$. By the definitions
of~$p$ and~$r$, we may assume that the ratios of~$\ms$ are
$r^{\lambda_1},r^{\lambda_2},\dots,r^{\lambda_N}$, where
$\gcd(\lambda_1,\dots,\lambda_N)=1$. Then we have
\[p^{\lambda_1}+p^{\lambda_2}+\dots+p^{\lambda_N}=1.\]
The conclusion that $p,r\in(0,1)$ is obvious.
Conversely, fix an integer $\ell>0$ such that $r^{\ell}<1/2$ and
$1-p^\ell-p^{\ell+1}>0$. By Lemma~\ref{l:Z[p]}(e), there exist nonnegative
integers $\ell_1,\dots,\ell_m$ such that
\[p^{-\ell}(1-p^\ell-p^{\ell+1})=p^{\ell_1}+p^{\ell_2}+\dots+p^{\ell_m}.\]
Let $\ms$ be an IFS satisfying the OSC and consisting of $m+2$ similarities
with ratios $r^\ell$, $r^{\ell+1}$,
$r^{\ell+\ell_1},r^{\ell+\ell_2},\dots,r^{\ell+\ell_m}$. Since all the
ratios are less than~$1/2$, such an IFS does exist on~$\R^d$ with $2^d\ge m+2$.
For example, let $\xc S=\{S_1,\dots,S_{m+2}\}$ with
\begin{align*}
S_1&\colon x\mapsto r^\ell x+(\underbrace{0,\dots,0,0}_d)/2,&
S_2&\colon x\mapsto r^{\ell+1}x+(\underbrace{0,\dots,0,1}_d)/2, \\
S_3&\colon x\mapsto r^{\ell+\ell_1}x+(\underbrace{0,\dots,1,0}_d)/2,&
S_4&\colon x\mapsto r^{\ell+\ell_2}x+(\underbrace{0,\dots,1,1}_d)/2,\\
&\dots\dots&\dots\dots
\end{align*}
Then $\ms$ satisfies the OSC with the open set $(0,1)^d$. Also note that
$\gcd(\ell,\ell+1)=1$, so we have $r_\ms=r$ and $p_\ms=p$. Therefore,
$\ms\in\TOEC(p,r)\ne\emptyset$.
\end{proof}
\section{The Ideal of IFS}\label{sec:ideal}
This section is devoted to the proofs of Theorem~\ref{t:TOCELCN} and
Theorem~\ref{t:covopen}, which are closely related to the problem of
determining the ideal of an IFS in $\TOEC$. The difficulty is that there is no
general method to determine such ideals. For our purposes, we consider the
problem in two special cases: self-similar sets with a graph-directed
structure and self-similar sets generated by IFSs in~$\xs S$, where $\xs
S$ is defined by~\eqref{eq:PIS}.
\subsection{The graph-directed structure}
The key point of the proof of Theorem~\ref{t:TOCELCN} is the following
theorem.
\begin{thm}\label{t:ideal}
Suppose that $\TOEC(p,r)\ne\emptyset$. Then for each nonzero ideal~$I$ of
the ring~$\Z[p]$, there exists an IFS $\ms\in\TOEC(p,r)$ such that
$I_\ms=I$.
\end{thm}
We make use of the graph-directed sets to prove Theorem~\ref{t:ideal}. For
convenience, we recall the definition of graph-directed sets
(see~\cite{MauWi88}).
\begin{defn}[graph-directed sets]\label{d:graphdir}
Let $G=(\xc V,\xc E)$ be a directed graph with vertex set $\xc V$ and
directed-edge set~$\xc E$. Suppose that for each edge $e\in\xc E$, there is
a corresponding similarity $S_e\colon\R^d\to\R^d$ of ratio~$r_e\in(0,1)$.
The graph-directed sets on~$G$ with the similarities $\{S_e\}_{e\in\xc E}$
are defined to be the unique nonempty compact sets $\{E_i\}_{i\in\xc V}$
satisfying
\begin{equation}\label{eq:graphdir}
E_i=\bigcup_{j\in\xc V}\bigcup_{e\in\xc E_{i,j}}S_e(E_j)
\qquad\text{for $i\in\xc V$},
\end{equation}
where $\xc E_{i,j}$ is the set of edges starting at~$i$ and ending at~$j$.
In particular, if \eqref{eq:graphdir}~is a disjoint union for each $i\in\xc
V$, we call $\{E_i\}_{i\in\xc V}$ \emph{dust-like} graph-directed sets
on~$(\xc V,\xc E)$.
\end{defn}
If the self-similar set $E$ has the graph-directed structure, we can
determine its ideal easily. Let $\ms\in\TOEC(p,r)$. Suppose that $E_\ms$ is
one of the dust-like graph-directed sets $\{E_i\}_{i\in\xc V}$ on $G=(\xc
V,\xc E)$, and that for all $e\in\xc E$, $\log r_e/\log r\in\N$. Without loss
of generality, we also suppose that $\xc V=\{0,1,\dots,n\}$ and $E_\ms=E_0$.
Let $\xc E_{i,j}^k$ denote the set of sequences of $k$ edges
$(e_1,e_2,\dots,e_k)$ which form a directed path from vertex~$i$ to
vertex~$j$. Let $O$ be an open set of~$\ms$ satisfying the SOSC. We use $\xc
V_O$ to denote the set of vertices~$i$ such that there exists
$(e_1,e_2,\dots,e_k)\in\xc E_{0,i}^k$ for some $k\ge1$ satisfying
\[S_{e_1}\circ S_{e_2}\circ\dots\circ S_{e_k}(E_i)\subset O.\]
\begin{thm}\label{t:gdid}
The ideal $I_\ms$ is generated by $\{\xc H^s(E_i)/\xc H^s(E_\ms)\colon
i\in\xc V_O\}$, where $s=\hdim E_\ms$.
\end{thm}
The proof of Theorem~\ref{t:gdid} will be given in Section~\ref{ssec:BDnum}
since it requires a basic fact about the ideal of an IFS (Remark~\ref{r:ideal}).
We give an example here.
\begin{exmp}\label{e:gdid}
Let $\ms$ be as in Example~\ref{e:NPI}. Let $E_0=E_\ms$, $E_1=-rE_\ms\cup
E_\ms$ and $E_2=-E_\ms\cup E_\ms$. It is easy to check that $E_0$, $E_1$ and
$E_2$ form a family of graph-directed sets. Let $O=(0,1)$; then $\xc
V_O=\{1,2\}$. By Theorem~\ref{t:gdid},
\[I_\ms=(\xc H^s(E_1)/\xc H^s(E_0),\xc H^s(E_2)/\xc H^s(E_0))=(p+1,2).\]
\end{exmp}
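For the reader checking the example above: write $s=\hdim E_\ms$ and $p=r^s$ (this is how the measure root enters the relation $p^2+6p=1$ for the ratios $r,r^2,r,\dots,r$ of~$\ms$). Since $E_\ms\subset[0,1]$ and $0\in E_\ms$, the two pieces of $E_1$, and likewise of $E_2$, meet only at the origin, an $\xc H^s$-null set, so
\[\frac{\xc H^s(E_1)}{\xc H^s(E_0)}=\frac{\xc H^s(-rE_\ms)+\xc H^s(E_\ms)}{\xc H^s(E_\ms)}=p+1,\qquad
\frac{\xc H^s(E_2)}{\xc H^s(E_0)}=2,\]
which are exactly the generators given by Theorem~\ref{t:gdid}.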
By Theorem~\ref{t:gdid}, we are able to give the proof of
Theorem~\ref{t:ideal}.
\begin{proof}[Proof of Theorem~\ref{t:ideal}]
Fix a nonzero ideal $I$ of $\Z[p]$. By Lemma~\ref{l:Z[p]}(c), there exists
a positive integer~$\ell$ such that $1-p^\ell\in I$. We can further require
that $r^\ell<1/6$ since $1-p^{k\ell}\in I$ for all $k\ge1$. By
Lemma~\ref{l:Z[p]}(d), we can choose positive numbers
$a_1,a_2,\dots,a_m\in\Z[p]$ such that $I=(a_1,\dots,a_m)$. By
Lemma~\ref{l:Z[p]}(f), there exist positive numbers $b_1,\dots,b_m\in\Z[p]$ such
that
\begin{equation}\label{eq:pk}
1-p^\ell=a_1b_1+\dots+a_mb_m.
\end{equation}
By Lemma~\ref{l:Z[p]}(e), for $1\le i\le m$, there exist nonnegative integers
$u_{i,j}$ ($1\le j\le N_i$) and positive integers $v_{i,j}$ with
$r^{v_{i,j}}<1/6$ ($1\le j\le M_i$) such that
\begin{equation}\label{eq:aibi}
\begin{aligned}
a_i&=p^{u_{i,1}}+p^{u_{i,2}}+\dots+p^{u_{i,N_i}}, \\
b_i&=p^{v_{i,1}}+p^{v_{i,2}}+\dots+p^{v_{i,M_i}}.
\end{aligned}
\end{equation}
We can further require that
\begin{equation}\label{eq:u,u+1}
u_{1,1}+1=u_{1,2}
\end{equation}
since there exists a nonnegative integer $u$ such that $a_1-p^u-p^{u+1}>0$,
then set $u_{1,1}=u$, $u_{1,2}=u+1$ and apply Lemma~\ref{l:Z[p]}(e) to
$a_1-p^u-p^{u+1}$. Finally, choose a positive integer $d$ such that
\begin{equation}\label{eq:d}
2^d\ge\max(N_1,N_2,\dots,N_m)\quad\text{and}\quad
2^d\ge M_1+M_2+\dots+M_m.
\end{equation}
Now we are ready to construct the desired IFS~$\ms$. In the remainder of this
proof, we use $\bm x$ and~$\bm y$ to denote the points in~$\R^d$. For
$\Lambda\subset\{1,2,\dots,d\}$, define an isometric mapping
$T_\Lambda\colon\R^d\to\R^d$ by
\begin{equation}\label{eq:T}
(T_\Lambda\bm x)_i=\begin{cases}
x_i,& i\notin\Lambda,\\
-x_i, & i\in\Lambda,
\end{cases}\quad
\text{for $\bm x=(x_1,\dots,x_d)\in\R^d$}.
\end{equation}
Since $2^d\ge\max(N_1,\dots,N_m)$, for each $i\in\{1,2,\dots,m\}$, we can
choose distinct
\[\Lambda_{i,1},\Lambda_{i,2},\dots,\Lambda_{i,N_i}\subset\{1,\dots,d\},\]
and then define an IFS $\mt_i$ as
\begin{equation}\label{eq:Ti}
\mt_i=\bigl\{r^{u_{i,1}}T_{\Lambda_{i,1}},r^{u_{i,2}}T_{\Lambda_{i,2}},
\dots,r^{u_{i,N_i}}T_{\Lambda_{i,N_i}}\bigr\}.
\end{equation}
Let
\[Y=\bigl\{\bm y=(y_1,\dots,y_d)\in\R^d\colon\text{$y_i=1/3$ or $2/3$ for
$i=1,\dots,d$}\bigr\}.\]
Since $2^d\ge M_1+\dots+M_m$, we can choose distinct points $\bm y_{i,j}\in
Y$ for $1\le i\le m$ and $1\le j\le M_i$. Define a contracting similarity
$S_0\colon\bm x\mapsto r^\ell\bm x$ on~$\R^d$ and IFSs
\begin{equation}\label{eq:Sij}
\ms_{i,j}=\bigl\{r^{v_{i,j}}T+\bm y_{i,j}\colon T\in\mt_i\bigr\}
\end{equation}
for $1\le i\le m$, $1\le j\le M_i$. Finally, define
\[\ms=\{S_0\}\cup\bigcup_{i=1}^{m}\bigcup_{j=1}^{M_i}\ms_{i,j}.\]
It remains to show that $\ms\in\TOEC(p,r)$ and $I_\ms=I$.
We first prove that $\ms\in\OEC(p,r)$. Note that the ratios of $\ms$ are
\[r^\ell\quad\text{and}\quad r^{u_{i,j}+v_{i,j'}}\
(1\le i\le m,1\le j\le N_i,1\le j'\le M_i).\]
By~\eqref{eq:u,u+1}, we have $r_\ms=r$. By~\eqref{eq:pk} and~\eqref{eq:aibi},
we have $p_\ms=p$. We will show that $\ms$ satisfies the OSC for the open
set~$(0,1)^d$. Note that the ratios of~$\ms$ are all less than $1/6$ since
$r^\ell<1/6$, $r^{v_{i,j}}<1/6$ and $u_{i,j}\ge0$. And so
$S_0(0,1)^d\subset(0,1/6)^d$; $S(0,1)^d\subset(-1/6,1/6)^d+\bm y_{i,j}$ for
$S\in\ms_{i,j}$. Therefore, $S(0,1)^d\subset(0,1)^d$ for all $S\in\ms$. On
the other hand, for distinct $S,S'\in\ms$, we need to show $S(0,1)^d\cap
S'(0,1)^d=\emptyset$. There are three cases to consider.
\begin{description}
\item[Case~1] One of $S,S'$ is $S_0$. Then there exists some $\bm y\in Y$
such that
\[S(0,1)^d\cap S'(0,1)^d\subset(0,1/6)^d\cap
\bigl((-1/6,1/6)^d+\bm y\bigr)=\emptyset.\]
\item[Case~2] $S\in\ms_{i,j}$ and $S'\in\ms_{i',j'}$ with
$(i,j)\ne(i',j')$. Then the corresponding $\bm y_{i,j},\bm
y_{i',j'}\in Y$ are distinct. And so
\[S(0,1)^d\cap S'(0,1)^d\subset\bigl((-1/6,1/6)^d+\bm y_{i,j}\bigr)
\cap\bigl((-1/6,1/6)^d+\bm y_{i',j'}\bigr)=\emptyset.\]
\item[Case~3] $S,S'\in\ms_{i,j}$. By the definition of $\ms_{i,j}$, we
have $S(0,1)^d\cap S'(0,1)^d=\emptyset$.
\end{description}
This completes the proof of $\ms\in\OEC(p,r)$.
Let $E_0=E_\ms$ be the self-similar set generated by~$\ms$ and
$E_i=\bigcup_{T\in\mt_i}T(E_0)$ for $1\le i\le m$. Define $S_{i,j}\colon
x\mapsto r^{v_{i,j}}x+\bm y_{i,j}$ for $1\le i\le m$, $1\le j\le M_i$. The
proof of $\ms\in\TDC$ and $I_\ms=I$ is based on the fact that the sets
$\{E_i\}_{i=0}^m$ are \emph{dust-like graph-directed sets}. Indeed, we have
\begin{equation}\label{eq:E0}
E_0=S_0(E_0)\cup\bigcup_{i=1}^m\bigcup_{j=1}^{M_i}S_{i,j}(E_i),
\end{equation}
and for $1\le i\le m$,
\begin{multline*}
E_i=\bigcup_{T\in\mt_i}T(E_0)=\bigcup_{T\in\mt_i}T\circ S_0(E_0)\cup
\bigcup_{T\in\mt_i}\bigcup_{i'=1}^m\bigcup_{j=1}^{M_{i'}}
T\circ S_{i',j}(E_{i'}) \\
=\bigcup_{T\in\mt_i}S_0\circ T(E_0)\cup
\bigcup_{T\in\mt_i}\bigcup_{i'=1}^m\bigcup_{j=1}^{M_{i'}}
T\circ S_{i',j}(E_{i'})
\end{multline*}
since $T\circ S_0=S_0\circ T$ for $T\in\mt_i$. It follows from
$E_i=\bigcup_{T\in\mt_i}T(E_0)$ that
\begin{equation}\label{eq:Ei}
E_i=S_0(E_i)\cup\bigcup_{T\in\mt_i}\bigcup_{i'=1}^m\bigcup_{j=1}^{M_{i'}}
T\circ S_{i',j}(E_{i'})\quad
\text{for $1\le i\le m$}.
\end{equation}
We will show that all the unions in~\eqref{eq:E0} and~\eqref{eq:Ei} are
disjoint. By the definition of~$\ms$, we know that $E_0\subset[0,1]^d$,
$E_0\setminus(0,1)^d=\{\bm0\}$ and $E_i\subset(-1,1)^d$ for all $1\le i\le
m$. Note that the ratios of $S_0$ and $S_{i,j}$ are $r^\ell$ and
$r^{v_{i,j}}$, all less than $1/6$. This means
\[S_0(E_0)\subset[0,1/6]^d,\quad
S_{i,j}(E_{i})\subset(-1/6,1/6)^d+\bm y_{i,j}.\]
Recalling that $\bm y_{i,j}\in Y$, we have
\[\bigcup_{i=1}^m\bigcup_{j=1}^{M_i}S_{i,j}(E_i)\subset(1/6,5/6)^d,\]
and
\begin{equation}\label{eq:SijDJNT}
S_{i,j}(E_i)\cap S_{i',j'}(E_{i'})=\emptyset\quad
\text{when $(i,j)\ne(i',j')$},
\end{equation}
since $\bm y_{i,j}\ne\bm y_{i',j'}$. Therefore, the unions in~\eqref{eq:E0}
are disjoint. For the unions in~\eqref{eq:Ei}, observe that, for $1\le i\le
m$ and distinct $T,T'\in\mt_i$, $T(0,1)^d\cap T'[0,1]^d=\emptyset$ by the
definition of~$\mt_i$. Two results follow from this observation. The
first is
\[\bigcup_{i'=1}^m\bigcup_{j=1}^{M_{i'}}T\circ S_{i',j}(E_{i'})\cap
\bigcup_{i'=1}^m\bigcup_{j=1}^{M_{i'}}T'\circ S_{i',j}(E_{i'})\subset
T(1/6,5/6)^d\cap T'(1/6,5/6)^d=\emptyset\]
for distinct $T,T'\in\mt_i$. The second is, for $T\in\mt_i$, $1\le i'\le m$
and $1\le j\le M_{i'}$,
\begin{multline*}
T\circ S_{i',j}(E_{i'})\cap S_0(E_i)=T\circ S_{i',j}(E_{i'})
\cap\bigcup_{T'\in\mt_i}T'\circ S_0(E_0) \\
=T\circ S_{i',j}(E_{i'})\cap T\circ S_0(E_0)=\emptyset
\end{multline*}
since $S_{i',j}(E_{i'})\cap S_0(E_0)=\emptyset$. It follows from the two
results and~\eqref{eq:SijDJNT} that the unions in~\eqref{eq:Ei} are also
disjoint. Thus, we have proved that the sets $\{E_i\}_{i=0}^m$ are dust-like
graph-directed sets, and so $\ms\in\TDC$ follows.
Finally, we turn to prove that $I_\ms=I$ by making use of
Theorem~\ref{t:gdid}. Let $O=(0,1)^d$, then $\xc V_O=\{1,2,\dots,m\}$. For
$1\le i\le m$, we have
\begin{equation*}
\xc H^s(E_i)=\sum_{T\in\mt_i}\xc H^s(T(E_0))
=(p^{u_{i,1}}+p^{u_{i,2}}+\dots+p^{u_{i,N_i}})\xc H^s(E_0)
=a_i\xc H^s(E_0)
\end{equation*}
by~\eqref{eq:aibi} and~\eqref{eq:Ti}. Therefore, we have
$I_\ms=(a_1,\dots,a_m)=I$.
\end{proof}
\begin{figure}
\caption{The structure of~$E_\ms$ in Example~\ref{e:ideal}\label{fig:ideal}}
\end{figure}
\begin{exmp}\label{e:ideal}
Let $r=1/10$ and $p=\sqrt{10}-3$ be the positive solution of the equation
$p^2+6p=1$. Let $I=(2,\sqrt{10})$ be an ideal of the ring
$\Z[p]=\Z[\sqrt{10}]$. It is worth noting that $I$ is not a principal
ideal.
It follows from Proposition~\ref{p:p,r} that $\TOEC(p,r)\ne\emptyset$. We
will construct an IFS $\ms\in\TOEC(p,r)$ such that $I_\ms=I$ according to
the proof of Theorem~\ref{t:ideal}.
Observe that $I=(2,p+1)$ and $1-p=p(p+1)+2p\cdot 2$. By~\eqref{eq:pk},
\eqref{eq:aibi}, \eqref{eq:u,u+1} and~\eqref{eq:d}, we may set
\[\begin{cases}
a_1=1+p,\\ a_2=2=1+1,
\end{cases}\quad
\begin{cases}
b_1=p,\\ b_2=2p=p+p,
\end{cases}
\begin{cases}
\ell=1,\\ d=2.
\end{cases}\]
By~\eqref{eq:T}, \eqref{eq:Ti} and~\eqref{eq:Sij}, we may set
\[\mt_1=\{T_\emptyset,r\cdot T_{\{1,2\}}\},\quad
\mt_2=\{T_\emptyset,T_{\{1\}}\},\]
$S_0=rT_\emptyset$ and
\begin{gather*}
\ms_{1,1}=\bigl\{rT_\emptyset+(2/3,2/3),r^2T_{\{1,2\}}+(2/3,2/3)\bigr\},\\
\ms_{2,1}=\bigl\{rT_\emptyset+(2/3,1/3),rT_{\{1\}}+(2/3,1/3)\bigr\},\\
\ms_{2,2}=\bigl\{rT_\emptyset+(1/3,2/3),rT_{\{1\}}+(1/3,2/3)\bigr\}.
\end{gather*}
Finally, let $\ms=\{S_0\}\cup\ms_{1,1}\cup\ms_{2,1}\cup\ms_{2,2}$, see
Figure~\ref{fig:ideal} for the corresponding self-similar set~$E_\ms$. By
the proof of Theorem~\ref{t:ideal}, we know that $I_\ms=I$.
\end{exmp}
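One can quickly confirm that the data chosen above satisfy the decomposition~\eqref{eq:pk} with $\ell=1$; the check uses only $p^2=1-6p$:
\[a_1b_1+a_2b_2=(1+p)p+2\cdot2p=p^2+5p=(1-6p)+5p=1-p.\]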
We also need the following special version of the Jordan-Zassenhaus Theorem (see,
e.g., \cite{CurRe62}) to prove Theorem~\ref{t:TOCELCN}.
\begin{cthm}[Jordan-Zassenhaus Theorem]
Suppose that $\alpha$ is an algebraic integer. Then the class number
$h(\Z[\alpha])$ is finite.
\end{cthm}
We remark that $\Z[\alpha]$ is in general not a Dedekind domain, so the
conclusion on the finiteness of the class number $h(\Z[\alpha])$ cannot be
derived directly from the corresponding result for Dedekind domains.
\begin{proof}[Proof of Theorem~\ref{t:TOCELCN}]
By Theorems~\ref{t:TOCEpr} and~\ref{t:ideal}, we have $\LCN(p,r)=h(\Z[p])$
when $\TOEC(p,r)\ne\emptyset$. Then by Lemma~\ref{l:Z[p]}(g), we have
$\LCN(p,r)=h(\Z[p])\le h(\Z[p^{-1}])$. Finally, since $p^{-1}$ is an
algebraic integer (Lemma~\ref{l:Z[p]}(a)), by the Jordan-Zassenhaus
Theorem, the class number $\LCN(p,r)\le h(\Z[p^{-1}])$ is finite.
\end{proof}
\subsection{Principal ideals}
Let $\ms=\{S_1,S_2,\dots,S_N\}$ be an IFS satisfying the OSC. We write $\xs
O_\ms$ to denote all the open sets satisfying the OSC for the IFS~$\ms$ and
$\partial_\ms=E_\ms\setminus\bigcup_{O\in\xs O_\ms}O$, where $E_\ms$ is the
self-similar set generated by~$\ms$. Notice that $\partial_\ms=\emptyset$ if
and only if $\ms$ satisfies the SSC. We say that a point~$x\in\partial_\ms$
is \emph{separated} if there is a finite word $\bm i=i_1\dots
i_n\in\{1,\dots,N\}^n$ such that
\begin{equation}\label{eq:xbdr}
S_{\bm i}(x)\notin\partial_\ms\quad\text{and}\quad
S_{\bm i}(x)\notin S_{\bm j}(E_\ms)
\end{equation}
for every word~$\bm j$ of the same length as~$\bm i$ but $\bm j\ne\bm i$,
where $S_{\bm i}=S_{i_1}\circ\dots\circ S_{i_n}$.
We need the following theorem to prove Theorem~\ref{t:covopen}; it is also
of interest in itself.
\begin{thm}\label{t:separated}
Let $\ms\in\TOEC$. If the points in~$\partial_\ms$ are all separated, then
$I_\ms=\Z[p_\ms]$.
\end{thm}
\begin{proof}
For each $x\in\partial_\ms$, let $\bm i_x$ be a word of finite length
satisfying~\eqref{eq:xbdr}. Choose a compact subset $F_x\subset E_\ms$
containing~$x$ such that $E_\ms\setminus F_x$ is also compact and $S_{\bm
i_x}(F_x)\cap S_{\bm j}(E_\ms)=\emptyset$ for every word~$\bm j$ of the same
length as~$\bm i_x$ but $\bm j\ne\bm i_x$. Such an $F_x$ exists since
$E_\ms$ is totally disconnected and $\bm i_x$ satisfies~\eqref{eq:xbdr}. We
can further require $S_{\bm i_x}(F_x)$ to be an interior separated set, since
$S_{\bm i_x}(x)\notin\partial_\ms$. Note that $F_x$ is also an open subset of
the topological space~$E_\ms$. So $\{F_x\}_x$ is an open cover of the compact
set~$\partial_\ms$, and thus there is a finite subcover $F_1$, \dots, $F_n$ with
corresponding words $\bm i_1$, \dots, $\bm i_n$. Let
\[F^*_0=E_\ms\setminus\bigcup_{k=1}^nF_k,\quad
F^*_1=F_1,\quad F^*_2=F_2\setminus F_1,\ldots,\quad
F^*_n=F_n\setminus\bigcup_{k=1}^{n-1}F_k.\]
Then $\{F^*_k\}_{k=0}^n$ is a disjoint cover of~$E_\ms$. We claim that
$\mu_\ms(F^*_k)\in I_\ms$ for all $0\le k\le n$. Then
\[\sum_{k=0}^n\mu_\ms(F^*_k)=\mu_\ms(E_\ms)=1\in I_\ms.\]
Thus $I_\ms=\Z[p_\ms]$. It remains to prove the claim. For $1\le k\le n$,
observe that $S_{\bm i_k}(F^*_k)$ are all interior separated sets. And so
$\mu_\ms(F^*_k)\in I_\ms$ for $1\le k\le n$ since $p_\ms^{-1}\in\Z[p_\ms]$
(Lemma~\ref{l:Z[p]}(a)). For $F^*_0$, since $F^*_0$ is compact and each point
in $F^*_0$ can be covered by an interior separated set, we have $F^*_0$ is a
finite union of interior separated sets. Thus the claim $\mu_\ms(F^*_0)\in
I_\ms$ follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{t:covopen}]
According to Theorem~\ref{t:separated}, we only need to prove that the
points in~$\partial_\ms$ are all separated. For this, fix a point
$x_0\in\partial_\ms$. Let $F$ be an interior separated set. It is easy to
see that, for $n$ large enough, there is a $\Lambda\subset\{1,\dots,N\}^n$
such that
\[F=\bigcup_{\bm i\in\Lambda}S_{\bm i}(E_\ms).\]
Choose a word in $\Lambda$, say $\bm i^*$. If $S_{\bm i^*}(x_0)\notin
S_{\bm j}(E_\ms)$ for all $\bm j\in\Lambda$ and $\bm j\ne\bm i^*$, then
$x_0$ is separated since $F$ is an interior separated set. Thus, the proof
is completed. Otherwise, suppose that $S_{\bm i^*}(x_0)\in S_{\bm
j}(E_\ms)$ for some $\bm j$ other than~$\bm i^*$. Let $O$ be the convex
open set satisfying the OSC. Since $S_{\bm i^*}(O)\cap S_{\bm j}(O)
=\emptyset$, by convexity, there is a linear function~$H$ such that
$H(x)<H(y)$ for all $x\in S_{\bm i^*}(O)$ and all $y\in S_{\bm j}(O)$.
Since $S_{\bm i^*}(x_0)\in\overline{S_{\bm i^*}(O)}\cap\overline{S_{\bm
j}(O)}$, we have
\[H(S_{\bm i^*}(x_0))=\sup_{x\in S_{\bm i^*}(O)}H(x)
=\max_{x\in S_{\bm i^*}(E_\ms)}H(x).\]
Since each $S\in\ms$ has the form $S\colon x\mapsto r_S\bm Ax+b_S$ with
$r_S\in(0,1)$, we have
\[S_{\bm i}\colon x\mapsto r_{\bm i}\bm A^nx+b_{\bm i}\quad
\text{for all $\bm i\in\Lambda$}.\]
It follows that
\[H(S_{\bm i}(x_0))=\sup_{x\in S_{\bm i}(O)}H(x)
=\max_{x\in S_{\bm i}(E_\ms)}H(x)\quad\text{for all $\bm i\in\Lambda$}.\]
And so $H(S_{\bm j}(x_0))>H(S_{\bm i^*}(x_0))$. Now let $\Lambda_1$ denote
the set of all~$\bm i\in\Lambda$ such that $H(S_{\bm i}(x_0))>H(S_{\bm
i^*}(x_0))$. We have
\begin{itemize}
\item $\Lambda_1\subsetneq\Lambda$ (since $\bm i^*\notin\Lambda_1$).
\item For $\bm i\in\Lambda_1$, if $S_{\bm i}(x_0)\in S_{\bm j}(E_\ms)$
for some $\bm j\in\Lambda$, then $\bm j\in\Lambda_1$ (since
$H(S_{\bm i}(x_0))\le H(S_{\bm j}(x_0))$).
\end{itemize}
Repeat the above argument with $\Lambda$ replaced by~$\Lambda_1$: either
we find a word $\bm i^*\in\Lambda_1$ such that $S_{\bm
i^*}(x_0)\notin S_{\bm j}(E_\ms)$ for all $\bm j\in\Lambda_1$ with $\bm
j\ne\bm i^*$, which means that $x_0$ is separated, or we get a
subset~$\Lambda_2\subsetneq\Lambda_1$ such that for $\bm i\in\Lambda_2$, if
$S_{\bm i}(x_0)\in S_{\bm j}(E_\ms)$ for some $\bm j\in\Lambda_1$, then
$\bm j\in\Lambda_2$. In the latter case, we repeat the argument again.
Since $\Lambda$ is finite, the process stops after finitely many steps with
a word $\bm i^*$ showing that $x_0$ is separated, which completes the
proof.
\end{proof}
\section{The Blocks Decomposition of Self-similar Sets}\label{sec:BD}
To understand the geometric structure of self-similar sets generated by IFSs
$\ms\in\TOEC$, we shall make use of the \emph{blocks decomposition}. Indeed,
the whole proof of our result is based on it.
In this section, we introduce the basic definitions of blocks decomposition
and give some important properties. From now on, fix an
$\ms=\{S_1,S_2,\dots,S_N\}\in\TOEC$. For notational convenience, we will
write $E_\ms$, $\mu_\ms$, $r_\ms$ and~$p_\ms$ as~$E$, $\mu$, $r$ and~$p$,
respectively. Let $r_i$ be the contraction ratio of~$S_i$ for $1\le i\le N$
and $s=\hdim E$. Write
\begin{equation}\label{eq:lambda}
\lambda_i=\log r_i/\log r\quad\text{and}\quad
\lambda=\max_{1\le i\le N}\lambda_i-1.
\end{equation}
Recall that $\gcd(\lambda_1,\dots,\lambda_N)=1$. For $\bm i=i_1i_2\dots
i_n\in\{1,2,\dots,N\}^n$, write $\bm i^-=i_1i_2\dots i_{n-1}$ and
\begin{equation}\label{eq:Siripi}
S_{\bm i}=S_{i_1}\circ S_{i_2}\circ\dots\circ S_{i_n},
\quad r_{\bm i}=r_{i_1}r_{i_2}\cdots r_{i_n},
\quad p_{\bm i}=r_{\bm i}^s.
\end{equation}
Define
\begin{equation}\label{eq:Sk}
\ms_k=\bigl\{S_{\bm i}\colon r_{\bm i}\le r^k<r_{\bm i^-}
\bigr\}.
\end{equation}
\subsection{The definition of blocks decomposition}\label{ssec:BDdef}
\begin{defn}[level-$k$ blocks decomposition]\label{d:block}
The decomposition $E=\bigcup_{j=1}^{n_k}B_{k,j}$ is called the level-$k$
blocks decomposition of~$E$ ($k\ge0$), if each set
\[\bigl\{x\colon\dist(x,B_{k,j})<r^k|E|/2\bigr\}\]
is connected for $1\le j\le n_k$ and
\[\dist(B_{k,i},B_{k,j})\ge r^k|E|,\quad\text{for $i\ne j$},\]
where $|E|$ denotes the diameter of~$E$. The set $B_{k,j}$ is called
a level-$k$ block of~$\ms$. The family of all the level-$k$ blocks will
be denoted by~$\xs B_k$. Write $\xs B=\bigcup_{k\ge0}\xs B_k$.
\end{defn}
\begin{rem}\label{r:block}
For $B\in\xs B_k$, write $\ms_B=\{S\in\ms_k\colon S(E)\subset B\}$.
According to the above definition, it is easy to check that
$B=\bigcup_{S\in\ms_B}S(E)$.
\end{rem}
We shall use the natural measure~$\mu$ to describe the size of blocks.
Notice that $r_{\bm i} \in \{r^k, r^{k+1}, \dots, r^{k+\lambda}\}$ for
$S_{\bm i}\in\ms_k$. This leads to the following definition.
\begin{defn}[measure polynomial]\label{d:PB}
For $B\in\xs B_k$ and $0\le \ell\le\lambda$, write
\[\xi_{B,\ell}=\card\bigl\{S\in\ms_B\colon
\text{the ratio of $S$ is $r^{k+\ell}$}\bigr\}.\]
The polynomial
\[P_B\colon t\mapsto \xi_{B,0} + \xi_{B,1}t + \dots +
\xi_{B,\lambda}t^\lambda\]
is called the measure polynomial of level-$k$ block~$B$. Write
\[\xc P_k=\left\{P_B\colon B\in\xs B_k\right\}\quad\text{and}
\quad \xc P=\bigcup_{k=0}^\infty \xc P_k.\]
\end{defn}
\begin{rem}\label{r:muB}
For $B\in\xs B_k$, we have $\mu(B)=p^k P_B(p)$. This is why we call $P_B$
the measure polynomial and $p$ the measure root of~$\ms$.
\end{rem}
\begin{rem}
The measure polynomial of a block $B$ depends not only on~$B$ but also on
the level of~$B$, since the level of~$B$ may not be unique. For example, let
$\ms=\{S_1,S_2\}$ with $S_1\colon x\mapsto x/9$ and $S_2\colon x\mapsto
x/3+2/3$. Then $r_\ms=1/3$ and
\[\xs B_1=\bigl\{S_1(E_\ms),S_2(E_\ms)\bigr\},\quad
\xs B_2=\bigl\{S_1(E_\ms),S_2\circ S_1(E_\ms),S_2\circ S_2(E_\ms)\bigr\}.\]
Note that $S_1(E_\ms)\in\xs B_1\cap\xs B_2$, so the level of
$S_1(E_\ms)$ may be~$1$ or~$2$. Consequently, the measure polynomial of
$S_1(E_\ms)$ may be $t\mapsto t$ at level~$1$ or $t\mapsto 1$ at
level~$2$.
\end{rem}
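Continuing the example in the above remark, note that the ambiguity is
harmless for the measure: here $\lambda_1=2$ and $\lambda_2=1$, so the
measure root~$p$ is the positive root of $p^2+p=1$ (since
$p^{\lambda_1}+p^{\lambda_2}=1$), and Remark~\ref{r:muB} yields the same
value
\[\mu\bigl(S_1(E_\ms)\bigr)=p^1\cdot p=p^2\cdot 1=p^2\]
whichever of the two levels, and hence measure polynomials, is used.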
Now we introduce the definition of interior blocks, which is the key to our
study.
\begin{defn}[interior block]\label{d:inblcok}
For $k\ge0$, $B\in\xs B_k$ is called a level-$k$ \emph{interior} block if
$\dist(B,O^c)\ge r^k|E|$ for some open set $O$ satisfying the SOSC;
otherwise, $B\in\xs B_k$ is called a level-$k$ \emph{boundary} block. Let
$\xs B^\circ_k$ and $\xs B^\partial_k$ denote
the family of all level-$k$ interior blocks and the family of all level-$k$
boundary blocks, respectively. Write
\[\xs B^\circ=\bigcup_{k\ge0}\xs B^\circ_k,\quad
\xc P^\circ_k=\left\{P_B\colon B\in\xs B^\circ_k\right\},
\quad \xc P^\circ=\bigcup_{k=0}^\infty \xc P^\circ_k.\]
\end{defn}
\begin{rem}\label{r:inblock}
Suppose that $B\in\xs B^\circ_k$, then
\begin{enumerate}[(a)]
\item for all level-$l$ blocks $A\subset B$, we have $A\in\xs
B^\circ_l$;
\item suppose that $r_{\bm i}=r^l$, then $S_{\bm i}(B)\in\xs
B^\circ_{k+l}$ and $P_{S_{\bm i}(B)}=P_B$.
\end{enumerate}
\end{rem}
For the further study of the blocks decomposition, we introduce some more notation.
\begin{defn}[notations]\label{d:ntt}
(a) Let $\xs C$ be a family of sets. Write
\[\bigsqcup\xs C=\bigcup_{C\in\xs C}C.\]
(b) Let $\xs A\subset\xs B_l$ be a nonempty family of level-$l$ blocks. For
$k\ge0$, write
\[\xs B_k(\xs A)=\Bigl\{B\in\xs B_{l+k}\colon B\subset
\bigsqcup\xs A\Bigr\}.\]
(c) Let $A\in\xs B_l$ be a level-$l$ block. For $k\ge0$, write
\[\xs B^\circ_k(A)=\{B\in\xs B^\circ_{l+k}\colon B\subset A\},\quad
\xs B^\partial_k(A)=\{B\in\xs B^\partial_{l+k}\colon B\subset A\}\]
and $\xs B_k(A)=\{B\in\xs B_{l+k}\colon B\subset A\}$.
\end{defn}
Interior blocks have several advantages. The first is that, under the OSC,
different small copies of the self-similar set may overlap, but interior
blocks lying in different small copies must be disjoint. The second is that
blocks contained in an interior block are still interior blocks (see
Remark~\ref{r:inblock}(a)), so a form of disjointness is recovered for
interior blocks. Therefore, in some sense the geometric structure of interior
blocks resembles that of self-similar sets satisfying the SSC. Last but not
least is the following lemma, which reveals the relationship
between the measure polynomials of interior blocks and the ideal of~$\ms$.
\begin{lem}\label{l:ideal}
Let $I$ be the ideal of~$\Z[p]$ generated by $\{P(p)\colon P\in\xc
P^\circ\}$, then $I=I_\ms$.
\end{lem}
\begin{proof}
It follows from Remark~\ref{r:muB} and $p^{-1}\in\Z[p]$
(Lemma~\ref{l:Z[p]}(a)) that $I$ is just the ideal generated by
$\{\mu(B)\colon B\in\xs B^\circ\}$. We have $I\subset I_\ms$ since every
interior block is an interior separated set. On the other hand, observe
that each interior separated set can be written as a finite disjoint union
of interior blocks, so $I_\ms\subset I$ holds too.
\end{proof}
\subsection{Finiteness of measure polynomials}
This subsection is devoted to the finiteness of the measure polynomials,
which is the starting point of our study. It follows from the total
disconnectedness of the self-similar set.
\begin{prop}\label{p:finiteCP}
There are only finitely many measure polynomials for every IFS
$\ms\in\TOEC$.
\end{prop}
We need some lemmas to prove Proposition~\ref{p:finiteCP}. The first two,
Lemma~\ref{l:4compact} and~\ref{l:discnt}, are known facts in topology.
\begin{lem}[{\cite[\S2.10.21]{Feder69}}]\label{l:4compact}
Let $X$ be a compact metric space and $\xs K(X)$ the set of all nonempty
compact subsets of~$X$, then $\xs K(X)$ is compact under the Hausdorff
metric.
\end{lem}
\begin{lem}[see also~\cite{XiXi10}]\label{l:discnt}
Let $\{F_i\}_{i=1}^n$ be a finite family of totally disconnected and
compact subsets of a Hausdorff topological space, then $\bigcup_{i=1}^n F_i$
is also totally disconnected.
\end{lem}
\begin{lem}\label{l:finite}
We have
\[M=\sup_{x\in\R^d,\,k\ge0}\card\bigl\{S\in\ms_k\colon
\dist\bigl(S(E),x\bigr)\le r^k|E|\bigr\}<\infty.\]
\end{lem}
\begin{proof}
This is a simple consequence of the OSC. Let $O$ be an open
set satisfying the OSC, then $\dist(O,E)=0$. And so
$\dist\bigl(S(O),x\bigr)\le 2r^k|E|$ for all $S\in\ms_k$ such that
$\dist\bigl(S(E),x\bigr)\le r^k|E|$. It follows that
\[M\le\frac{\xc L\bigl(U(0,2|E|+|O|)\bigr)}{r^{\lambda d}\xc L(O)}\]
since the sets $S(O)$ are pairwise disjoint, each of them has Lebesgue
measure at least $r^{(k+\lambda)d}\xc L(O)$, and all of them are contained in
the ball $U\bigl(x,r^k(2|E|+|O|)\bigr)$.
Here $\xc L$ denotes the Lebesgue measure and $U(x,\rho)$ the open ball of
radius~$\rho$ centered at~$x$.
\end{proof}
\begin{lem}\label{l:compact}
Given $M\ge1$, let $\xs F$ be the family of all nonempty compact
subsets~$F$ of~$\R^d$ such that
\begin{enumerate}[\upshape(i)]
\item $F=\bigcup_{i=1}^MT_i(E)$, where each $T_i$ is a similar mapping
with ratio lying in $\{1,r,\dots, r^\lambda\}$ (we allow
$T_i=T_j$ for $i\ne j$);
\item $\dist\bigl(T_i(E),0\bigr)\le|E|$ for $1\le i\le M$;
\item $0\in F$.
\end{enumerate}
Then $\xs F$ is compact under the Hausdorff metric.
\end{lem}
\begin{proof}
By Lemma~\ref{l:4compact} and Condition~(ii), it is sufficient to prove
that $\xs F$ is closed. Suppose that $F_i=\bigcup_{j=1}^MT_{i,j}(E)\in\xs
F$ and $F_i\to F$ under the Hausdorff metric; we shall show that $F\in\xs
F$. Notice that the family of functions~$T_{i,j}$ is equicontinuous. By the
Arzel\`a-Ascoli Theorem and Condition~(ii), after passing to a subsequence we
may assume that, for each $1\le j\le M$, $T_{i,j}$ converges to some
continuous mapping~$T_j$ in the compact-open topology as $i\to\infty$ (that
is, $T_{i,j}$ converges to~$T_j$ uniformly on each compact set).
Now let $F^*=\bigcup_{j=1}^MT_j(E)$. It is not difficult to check that
$F_i\to F^*$ and $F^*\in\xs F$, and so $F=F^*\in\xs F$.
\end{proof}
\begin{lem}\label{l:continuous}
Let $\xs F$ be as in Lemma~\ref{l:compact}. For $F\in\xs F$, let
$F_\delta=\{x\colon \dist(x,F)\le\delta\}$ be the $\delta$-neighbourhood
of~$F$ and $F_{\delta,0}$ the connected component of~$F_\delta$
containing~$0$. Define
\[\Delta(F)=\sup\bigl\{\delta\ge0\colon
\text{$|x|<|E|$ for all $x\in F_{\delta,0}$}\bigr\},\] where $|x|$ denotes
the usual Euclidean norm of $x\in\R^d$ and $|E|$ the diameter of~$E$.
Then $\Delta(F)>0$ for all $F\in\xs F$ and $\Delta$ is continuous on
$\xs F$.
\end{lem}
\begin{proof}
We first show that $\Delta(F)>0$ for all $F$. Suppose otherwise that
$\Delta(F)=0$ for some $F\in\xs F$. Then for every $\delta>0$,
$F_{\delta,0}$ contains an $x_\delta$ with $|x_\delta|\ge|E|$. We can pick
$\delta_i\to0$ such that $F_{\delta_i,0}\to F_0$ (under the Hausdorff
metric) and $x_{\delta_i}\to x_0$ for some compact set~$F_0$ and some point
$x_0$ with $|x_0|\ge|E|$. It follows that $F_0$ is a connected subset of
$F$ containing two distinct points, $0$ and~$x_0$. This contradicts the
fact that $F$ is totally disconnected (by Lemma~\ref{l:discnt}).
Next we claim that $|\Delta(F)-\Delta(G)|\le\hdist(F,G)$ for $F,G\in\xs
F$, where $\hdist$ denotes the Hausdorff metric, and so $\Delta$ is
continuous. By the symmetry, it is sufficient to show that
$\Delta(F)\le\Delta(G)+\hdist(F,G)$. Pick a $\delta>\Delta(G)$. Then
$G_{\delta,0}$ contains an $x$ with $|x|\ge|E|$; since $G_{\delta,0}$ is a
connected subset of $F_{\delta+\hdist(F,G)}$ containing~$0$, it lies in
$F_{\delta+\hdist(F,G),0}$. So we have
$\Delta(F)\le\delta+\hdist(F,G)$. The desired inequality follows from the
arbitrariness of~$\delta$.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{p:finiteCP}]
We claim that there exists a positive integer $K$ such that for all $k\ge
K$ and all $B\in\xs B_k$, we have $|B|\le 2r^{k-K}|E|$. Assume this is
true, pick an open set~$O$ satisfying the OSC; we will show that
$\sup_{P\in\xc P}P(r^d)<\infty$, and since every $P\in\xc P$ has degree at
most~$\lambda$ and nonnegative integer coefficients, this forces $\xc P$ to
be finite.
Let $B\in\xs B_k$ with $k\ge K$. By the claim and Remark~\ref{r:block}, we
have $\bigl|\bigcup_{S\in\ms_B}S(E)\bigr|=|B|\le 2r^{k-K}|E|$. It follows
from $E\subset\overline O$ (the closure of~$O$) that
\[\biggl|\bigcup_{S\in\ms_B}S(O)\biggr|\le r^k\bigl(2r^{-K}|E|+2|O|\bigr)\]
since the diameter of~$S(O)$ is not greater than $r^k|O|$ for $S\in\ms_B$.
The sets $S(O)$, $S\in\ms_B$, are pairwise disjoint by the OSC, so, with
$\xc L$ denoting the Lebesgue measure,
\[\xc L(O)\,P_B(r^d)=r^{-dk}\sum_{S\in\ms_B}\xc L\bigl(S(O)\bigr)
\le\xc L\bigl(U(0,2r^{-K}|E|+2|O|)\bigr).\]
This implies that $\sup_{P\in\xc P}P(r^d)<\infty$.
It remains to verify our claim. Consider the family~$\xs F$ in
Lemma~\ref{l:compact} with the constant~$M$ being as in
Lemma~\ref{l:finite}. By Lemma~\ref{l:continuous}, we can find a positive
integer~$K$ such that
\[0<r^K|E|<\inf_{F\in\xs F}\Delta(F).\]
We will show that this~$K$ is as desired. For this, let $B\in\xs B_k$ with
$k\ge K$ and $x\in B$, and consider the similar mapping $T\colon y\mapsto
r^{K-k}(y-x)$. Let
\[F=\bigcup_{\substack{S\in\ms_{k-K}\\\dist(T\circ S(E),0)\le|E|}}
T\circ S(E).\]
Then $F\in\xs F$ by Lemma~\ref{l:finite}. Thus $0=T(x)\in T(B)\subset
F_{r^K|E|,0}\subset U(0,|E|)$. This means that $|B|\le 2r^{k-K}|E|$.
\end{proof}
\begin{rem}\label{r:diameter}
From the proof of Proposition~\ref{p:finiteCP}, we conclude that there
exists a constant $\varpi>1$ such that for all $k\ge0$ and all $B\in\xs
B_k$,
\[\varpi^{-1}r^k|E|\le|B|\le\varpi r^k|E|.\]
\end{rem}
\subsection{The cardinality of blocks}\label{ssec:BDnum}
In this subsection, we show that almost all blocks are interior blocks. This
conclusion follows from two lemmas.
\begin{lem}\label{l:C(k)}
Let $\zeta(k)=\card\xs B^\partial_k$ be the number of all level-$k$ boundary
blocks, then
\[\lim_{k\to\infty}p^k\cdot \zeta(k)=0.\]
\end{lem}
\begin{proof}
Let $O$ be an open set satisfying the SOSC. Write
\[\xs B^O_k=\bigl\{B\in\xs B_k\colon\dist(B,O^c)\ge r^k|E|\bigr\},\]
then $\xs B^O_k\subset\xs B^\circ_k$. Notice that $p^k \cdot \zeta(k) \le
\bigl(\min_{P\in\xc P}P(p)\bigr)^{-1} \sum_{B\in\xs
B^\partial_k}p^kP_B(p)$. By Remark~\ref{r:muB},
\[\begin{split}
\sum_{B\in\xs B^\partial_k}p^kP_B(p)
&\le\sum_{B\in\xs B_k\setminus\xs B^O_k}p^kP_B(p)
=\sum_{B\in\xs B_k\setminus\xs B^O_k}\mu(B)\\
&=\mu\biggl(\bigcup_{B\in\xs B_k\setminus\xs B^O_k}B
\biggr)\to\mu(E\setminus O)=0,\quad\text{as $k\to\infty$}.\qedhere
\end{split}\]
\end{proof}
\begin{lem}\label{l:CP(k)}
Let $\zeta_P(k)=\card\{B\in\xs B^\circ_k\colon P_B=P\}$ be the number of all
level-$k$ interior blocks whose measure polynomial is~$P$. Then for each
measure polynomial $P\in\xc P^\circ$,
\[\liminf_{k\to\infty}p^k\cdot \zeta_P(k)>0.\]
\end{lem}
\begin{proof}
For $\ell\in\{0,1,\dots,\lambda\}$ and $k\ge1$, write
\begin{equation}\label{eq:numkj}
\xi_{k,\ell}=\card\{S\in\ms_k\colon
\text{the ratio of $S$ is $r^{k+\ell}$}\}.
\end{equation}
Let $\bm\xi_k=(\xi_{k,0},\xi_{k,1},\dots,\xi_{k,\lambda})^T$ and
\begin{equation}\label{eq:Xi}
\bm\Xi=\begin{pmatrix}
\xi_{1,0} & 1 & 0 & \hdotsfor4 & 0 \\
\xi_{1,1} & 0 & 1 & 0 & \hdotsfor3 & 0 \\
\xi_{1,2} & 0 & 0 & 1 & 0 & \hdotsfor2 & 0 \\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\
\xi_{1,\lambda-2} & 0 & \hdotsfor3 & 0 & 1 & 0 \\
\xi_{1,\lambda-1} & 0 & \hdotsfor4 & 0 & 1 \\
\xi_{1,\lambda} & 0 & \hdotsfor5 & 0 \\
\end{pmatrix}
\end{equation}
be a $(\lambda+1)\times(\lambda+1)$ matrix. Then we have the following
recursion formula.
\begin{equation}\label{eq:numktok+1}
\bm\xi_{k+1}=\bm\Xi\bm\xi_k,\qquad k\ge1.
\end{equation}
Note that $\ms_1=\ms$, and so
\[\xi_{1,0}p+\xi_{1,1}p^2+\dots+\xi_{1,\lambda}p^{\lambda+1}=
p^{\lambda_1}+p^{\lambda_2}+\dots+p^{\lambda_N}=1,\]
where $\lambda_i$ are as in~\eqref{eq:lambda}. Therefore, the
matrix~$\bm\Xi$ satisfies the conditions of Lemma~\ref{l:primitive}. So
$\bm\Xi$ is primitive and $p^{-1}$ is the Perron-Frobenius eigenvalue.
For $P\in\xc P^\circ$, there exists a level-$l$ interior block $B$ for some
$l\ge1$ such that $P_B=P$. It follows from Remark~\ref{r:inblock}(b) that
\[\zeta_P(k)\ge\xi_{k-l,0}\quad\text{for $k>l$}.\]
Then this lemma follows from the recursion formula~\eqref{eq:numktok+1} and
the fact that $p^{-1}$ is the Perron-Frobenius eigenvalue of the primitive
matrix~$\bm\Xi$.
\end{proof}
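To make the objects in the preceding proof concrete, the following sketch is
a purely illustrative check, not part of the proof. It uses the IFS
$S_1\colon x\mapsto x/9$, $S_2\colon x\mapsto x/3+2/3$ from the remark
following Remark~\ref{r:muB}, for which $r=1/3$, $\lambda_1=2$,
$\lambda_2=1$, $\lambda=1$ and the measure root $p$ is the positive root of
$p^2+p=1$; it enumerates the families $\ms_k$, verifies the
recursion~\eqref{eq:numktok+1} for small~$k$, and compares the
Perron-Frobenius eigenvalue of~$\bm\Xi$ with~$p^{-1}$.
\begin{verbatim}
import numpy as np
from itertools import product

# Illustrative sketch for the IFS S_1(x)=x/9, S_2(x)=x/3+2/3.
ratios = {1: 1/9, 2: 1/3}        # contraction ratios r_i
r, lam = 1/3, 1                  # here lambda = 1

def level_family(k):
    """Words i with r_i <= r^k < r_{i^-}, i.e. the family ms_k."""
    eps, words = 1e-9, []
    for n in range(1, k + 1):    # for this IFS, words in ms_k have length <= k
        for w in product(ratios, repeat=n):
            rw = float(np.prod([ratios[i] for i in w]))
            rp = float(np.prod([ratios[i] for i in w[:-1]])) if n > 1 else 1.0
            if rw <= r**k * (1 + eps) and r**k < rp * (1 - eps):
                words.append(w)
    return words

def xi_vector(k):
    """xi_{k,l} = number of maps in ms_k whose ratio is r^{k+l}."""
    xi = np.zeros(lam + 1, dtype=int)
    for w in level_family(k):
        rw = float(np.prod([ratios[i] for i in w]))
        xi[round(np.log(rw) / np.log(r)) - k] += 1
    return xi

xi1 = xi_vector(1)                            # here equals (1, 1)
Xi = np.array([[xi1[0], 1], [xi1[1], 0]])     # the matrix Xi for lambda = 1
for k in range(1, 6):                         # recursion xi_{k+1} = Xi xi_k
    assert np.array_equal(xi_vector(k + 1), Xi @ xi_vector(k))
p = (np.sqrt(5) - 1) / 2                      # measure root: p^2 + p = 1
print(max(abs(np.linalg.eigvals(Xi))), 1/p)   # both approx 1.618...
\end{verbatim}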
\begin{rem}\label{r:ideal}
Let $O$ and $\xs B^O_k$ be as in the proof of Lemma~\ref{l:C(k)}. From the
proof of Lemma~\ref{l:C(k)}, we have
\[\lim_{k\to\infty}p^k\card(\xs B_k\setminus\xs B^O_k)=0.\]
Together with Lemma~\ref{l:ideal}, Lemma~\ref{l:C(k)} and
Lemma~\ref{l:CP(k)}, we see that $I_\ms$ can be generated by
$\bigl\{\mu(B)\colon B\in\xs B^O_k\ \text{for some $k\ge1$}\bigr\}$, where
$O$ is an arbitrary open set satisfying the SOSC. This means that we need
only find a specific open set satisfying the SOSC when we want to determine
the ideal of an IFS.
\end{rem}
The following lemma is a corollary of Lemmas~\ref{l:C(k)} and~\ref{l:CP(k)}.
Recall the notation in Definition~\ref{d:ntt}(c). For $k\ge0$, define
\[\zeta^\partial(k)=\max_{B\in\xs B}\card\xs B^\partial_k(B)\quad\text{and}
\quad\zeta^\circ(k)=\min_{B\in\xs B,\,P\in\xc P^\circ}
\card\bigl\{A\in\xs B^\circ_k(B)\colon P_A=P\bigr\}.\]
\begin{lem}\label{l:CB(k)}
We have
\[\lim_{k\to\infty}p^k\zeta^\partial(k)=0\quad\text{and}\quad
\liminf_{k\to\infty}p^k\zeta^\circ(k)>0.\]
\end{lem}
\begin{proof}
Let $B\in\xs B_l$ and $P_B$ the measure polynomial of~$B$. Recall
that $P_B(t)=\sum_{\ell=0}^\lambda\xi_{B,\ell}t^\ell$, where
$\xi_{B,\ell}=\card\{S\in\ms_B\colon\text{the ratio of~$S$ is $r^{l+\ell}$}\}$.
To prove the first limit, we use Remark~\ref{r:inblock}(b) to obtain that,
for $k>\lambda$,
\[\card\xs B^\partial_k(B)\le\sum_{\ell=0}^\lambda\xi_{B,\ell}\zeta(k-\ell)
\le P_B(1)\sum_{\ell=0}^\lambda \zeta(k-\ell).\]
So for $k>\lambda$,
\[\zeta^\partial(k)\le\max_{P\in\xc P}P(1)\sum_{\ell=0}^\lambda \zeta(k-\ell).\]
By Lemma~\ref{l:C(k)}, we have $\lim_{k\to\infty}p^k\zeta^\partial(k)=0$.
To prove the second limit, let $P\in\xc P^\circ$, also by
Remark~\ref{r:inblock}(b), we have,
\[\card\bigl\{A\in\xs B^\circ_k(B)\colon P_A=P\bigr\}
\ge\sum_{\ell=0}^\lambda\xi_{B,\ell}\zeta_P(k-\ell)
\ge\min_{0\le \ell\le\lambda}\zeta_P(k-\ell)\]
for $k>\lambda$. And so for $k>\lambda$,
\[\zeta^\circ(k)\ge\min_{0\le \ell\le\lambda,\,P\in\xc P^\circ}\zeta_P(k-\ell).\]
By Lemma~\ref{l:CP(k)}, we have $\liminf_{k\to\infty}p^k\zeta^\circ(k)>0$.
\end{proof}
We close this subsection with the proof of Theorem~\ref{t:gdid}.
\begin{proof}[Proof of Theorem~\ref{t:gdid}]
Let $I$ denote the ideal generated by
\[\bigl\{\xc H^s(E_i)/\xc H^s(E_\ms)\colon i\in\xc V_O\bigr\}.\]
Since the graph-directed sets $\{E_i\}_{i\in\xc V}$ are dust-like, the set
$S_{e_1}\circ S_{e_2}\circ\dots\circ S_{e_k}(E_i)\subset O$, where
$(e_1,e_2,\dots,e_k)\in\xc E_{0,i}^k$, is an interior separated set
of~$E_\ms=E_0$. So the ideal $I_\ms$ contains $\mu(S_{e_1}\circ\dots\circ
S_{e_k}(E_i))$. By the condition that $\log r_e/\log r\in\N$ for all
$e\in\xc E$, we have
\[\mu(S_{e_1}\circ\dots\circ S_{e_k}(E_i))=p^\ell\xc H^s(E_i)/\xc H^s(E_\ms)
\]
for some positive integer $\ell$. This means that $\xc H^s(E_i)/\xc
H^s(E_\ms)\in I_\ms$ for all $i\in\xc V_O$ since $p^{-1}\in\Z[p]$ by
Lemma~\ref{l:Z[p]}(a). Thus we have $I\subset I_\ms$.
Conversely, by Remark~\ref{r:ideal}, we know that $I_\ms$ is generated by
\[\bigl\{\mu(B)\colon B\in\xs B^O_k\ \text{for some $k\ge1$}\bigr\}.\]
Fix a $B\in\bigcup_{k\ge1}\xs B^O_k$. Since $B$ is a block, for every large
enough positive integer~$\ell$ we have
\[B=\bigcup_{i\in\xc V_O}\bigcup_{
\substack{(e_1,\dots,e_\ell)\in\xc E_{0,i}^\ell\\
S_{e_1}\circ\dots\circ S_{e_\ell}(E_i)\subset B}} S_{e_1}\circ\dots\circ
S_{e_\ell}(E_i).\]
We have $\mu(B)\in I$ since the above union is disjoint. And so
$I_\ms\subset I$.
\end{proof}
\section{Main Ideas of the Proof}\label{sec:idea}
The most difficult part in our proof is the sufficient part of
Theorem~\ref{t:TOCElip}. It is rather tedious and technical, requiring
delicate composition and decomposition of blocks. Although the proof is very
complicated, the main ideas behind it are simple. This section is devoted to
introducing these ideas.
\subsection{Cylinder structure and dense island structure}
It is usually very difficult to define bi-Lipschitz mappings between given
sets. However, if these sets have some special structure, things become
somewhat easier. In this paper, we make use of two special structures: the
cylinder structure and the dense island structure.
Lemma~\ref{l:cylin} concerns the Lipschitz equivalence of sets with the
structure of nested Cantor sets, which we call the cylinder structure
(Definition~\ref{d:qscldr}). Lemma~\ref{l:djt} concerns the dense island
structure (Definition~\ref{d:dnsilnd}), which involves the idea of extending
bi-Lipschitz mappings. This idea is also used by Llorente and
Mattila~\cite{LloMa10}.
\begin{defn}
A family of disjoint subsets of a set~$F$ is called a \emph{partition}
of~$F$ if the union of the family is~$F$.
\end{defn}
\begin{defn}
Let $\xs C_1$ and $\xs C_2$ be two partitions of a set $F$. We say that
$\xs C_2$ is \emph{finer} than~$\xs C_1$, denoted by $\xs C_1\prec\xs C_2$,
if each set in~$\xs C_2$ is a subset of some set in~$\xs C_1$. This is
equivalent to that each set in~$\xs C_1$ is a union of some sets in~$\xs
C_2$.
\end{defn}
\begin{defn}[cylinder structure]\label{d:qscldr}
Let $F$ be a compact subset of a metric space. We say $F$ has
\emph{$(\varrho,\iota)$-cylinder structure} for $\varrho\in(0,1)$ and
$\iota\ge1$ if there exist families $\xs C_k$ for $k\ge1$ such that
\begin{enumerate}[(i)]
\item each $\xs C_k$ is a partition of~$F$;
\item $\xs C_1\prec\xs C_2\prec\dots\prec\xs C_k\prec\xs
C_{k+1}\prec\dotsb$;
\item for each $k\ge1$,
\begin{alignat*}{2}
\iota^{-1}\varrho^k&\le|C|/|F|\le\iota\varrho^k &&\qquad
\text{for all $C\in\xs C_k$}; \\
\iota^{-1}\varrho^k&\le\dist(C_1,C_2)/|F| &&\qquad
\text{for distinct $C_1,C_2\in \xs C_k$};
\end{alignat*}
where $|\cdot|$ denotes the diameter.
\end{enumerate}
The sets in $\xs C_k$ ($k\ge1$) are called cylinders and the families $\xs
C_k$ are called cylinder families.
\end{defn}
\begin{exmp}
Let $X=\{0,1\}^\N$ be the symbolic space with metric
\[\rho(x,y)=2^{-\inf\{k\colon x_k\ne y_k\}}\]
for $x=x_1x_2\dotsc$ and $y=y_1y_2\dotsc$. For each $k\ge1$ and each word
$w=w_1w_2\dots w_k$ of length~$k$, define cylinder
\[[w]=\{x\in X\colon x_1x_2\dots x_k=w_1w_2\dots w_k\}.\]
Let $\xs C_k=\{[w]\colon\text{$w$ has length~$k$}\}$ for $k\ge1$. We see
that $X$ has $(1/2,1)$-cylinder structure.
\end{exmp}
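Indeed, in this example $|X|=1/2$, each cylinder $[w]$ with $|w|=k$ has
diameter $2^{-(k+1)}$, and distinct cylinders of the same length~$k$ are at
distance at least $2^{-k}$; hence $|C|/|X|=2^{-k}$ and
$\dist(C_1,C_2)/|X|\ge 2^{-(k-1)}$, so the conditions of
Definition~\ref{d:qscldr} hold with $\varrho=1/2$ and $\iota=1$.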
\begin{exmp}
Let $\ms\in\TOEC$ and $\xs C_k=\xs B_k$. By the definition of blocks
(Definition~\ref{d:block}) and Remark~\ref{r:diameter}, we know that the
self-similar set~$E_\ms$ has $(r_\ms,\varpi_\ms)$-cylinder structure.
\end{exmp}
\begin{defn}\label{d:smcyl}
Suppose that $F$ and $F'$ have $(\varrho,\iota)$-cylinder structure with
cylinder families $\xs C_k$ and $\xs C'_k$, respectively. We say $F$ and
$F'$ have the same $(\varrho,\iota)$-cylinder structure if there exists a
one-to-one mapping $\tilde f$ of $\bigcup_{k=1}^\infty\xs C_k$ onto
$\bigcup_{k=1}^\infty\xs C'_k$ such that
\begin{enumerate}[(i)]
\item $\tilde f$ maps $\xs C_k$ onto $\xs C'_k$ for all $k\ge1$;
\item for $C_1\in\xs C_{k_1}$ and $C_2\in\xs C_{k_2}$, where $k_1<k_2$,
we have $\tilde f(C_1)\supset\tilde f(C_2)$ if and only if
$C_1\supset C_2$.
\end{enumerate}
We call $\tilde f$ the cylinder mapping.
\end{defn}
\begin{lem}\label{l:cylin}
Suppose that $F$ and $F'$ have the same $(\varrho,\iota)$-cylinder
structure, then there is a bi-Lipschitz mapping~$f$ of~$F$ onto~$F'$ such
that
\[\varrho\iota^{-2}\rho(x,y)/|F|\le\rho(f(x),f(y))/|F'|\le
\varrho^{-1}\iota^2\rho(x,y)/|F|\quad\text{for $x,y\in F$}.\]
Here $\rho$ denotes the metric. In particular, $F\simeq F'$.
\end{lem}
\begin{proof}
We use the same notations as in Definition~\ref{d:smcyl}. For $x\in F$,
there exists a unique $C_k\in\xs C_k$ for each $k\ge1$ such that $x\in C_k$
since $F=\bigsqcup\xs C_k$ and the union is disjoint. Since $C_1\supset
C_2\supset\dots\supset C_k\supset\dotsb$, we have
\[\tilde f(C_1)\supset\tilde f(C_2)\supset\dots\supset\tilde
f(C_k)\supset\dotsb.\]
Together with the fact that $|\tilde f(C_k)|\to0$ as $k\to\infty$, we know
that there is a unique $x'\in\bigcap_{k=1}^\infty\tilde f(C_k)$. This leads
to a mapping $f\colon F\to F'$, $x\mapsto x'$. It remains to show that $f$
is the desired mapping. For this, let $x,y\in F$. Then there exists a
$k\ge1$, $C\in\xs C_k$ and distinct $C_x,C_y\in\xs C_{k+1}$ such that
$x,y\in C$ and $x\in C_x$, $y\in C_y$. By the definition of~$f$, we have
$f(x),f(y)\in\tilde f(C)$ and $f(x)\in\tilde f(C_x)$, $f(y)\in\tilde
f(C_y)$. It follows from the definition of the cylinder structure that
\begin{gather*}
\iota^{-1}\varrho^{k+1}\le\dist(C_x,C_y)/|F|\le\rho(x,y)/|F|\le|C|/|F|
\le\iota\varrho^k, \\
\iota^{-1}\varrho^{k+1}\le\dist(\tilde f(C_x),\tilde f(C_y))/|F'|\le
\rho(f(x),f(y))/|F'|\le|\tilde f(C)|/|F'|\le\iota\varrho^k.
\end{gather*}
Thus, $f$ satisfies the inequality in this lemma. Finally, we have
$f(F)=F'$ since $f(F)$ is compact and dense in~$F'$.
\end{proof}
Recall that $\bigsqcup\xs D=\bigcup_{D\in\xs D}D$ for any family~$\xs D$ of
sets (Definition~\ref{d:ntt}(a)).
\begin{defn}[dense island structure]\label{d:dnsilnd}
Let $F$ be a compact set in a metric space. A subset $D$ of $F$ is called
an $\iota$-island for $\iota>0$ if
\[|D|\le\iota\dist(D,F\setminus D).\]
We say that $F$ has dense $\iota$-island structure if there exists a
family~$\xs D$ of disjoint $\iota$-islands of~$F$ such that $\bigsqcup\xs
D$ is dense in~$F$.
\end{defn}
\begin{exmp}
Let $X=\{0,1\}^\N$ be the symbolic space with metric
\[\rho(x,y)=2^{-\inf\{k\colon x_k\ne y_k\}}\]
for $x=x_1x_2\dotsc$ and $y=y_1y_2\dotsc$. Then $X$ has dense $1/2$-island
structure with the family $\xs D=\{[0^k1]\colon k\ge0\}$.
\end{exmp}
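To see this, note that a point of the complement of $[0^k1]$ closest to
$[0^k1]$ begins with $0^{k+1}$, so
$\dist\bigl([0^k1],X\setminus[0^k1]\bigr)=2^{-(k+1)}$, while
$\bigl|[0^k1]\bigr|=2^{-(k+2)}$; hence each $[0^k1]$ is a $1/2$-island. The
islands are pairwise disjoint and their union is $X\setminus\{000\dotsc\}$,
which is dense in~$X$.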
\begin{defn}\label{d:smdi}
Suppose that $F$ and $F'$ have dense $\iota$-island structure with disjoint
$\iota$-island families $\xs D$ and $\xs D'$, respectively. We say that $F$
and $F'$ have the same dense $\iota$-island structure if there exist a
one-to-one mapping $\tilde f$ of $\xs D$ onto $\xs D'$ and a constant
$\tilde L>1$ such that
\begin{enumerate}[(i)]
\item $\tilde L^{-1}\dist(D_1,D_2)/|F|\le\dist(\tilde f(D_1),\tilde
f(D_2))/|F'|\le\tilde L\dist(D_1,D_2)/|F|$ for each two distinct
$\iota$-islands $D_1,D_2\in\xs D$;
\item for each $\iota$-island $D\in\xs D$, there is a bi-Lipschitz
mapping $f_D$ of $D$ onto $\tilde f(D)$ such that
\[\tilde L^{-1}\rho(x,y)/|F|\le\rho(f_D(x),f_D(y))/|F'|\le
\tilde L\rho(x,y)/|F|\quad\text{for $x,y\in D$}.\]
Here $\rho$ denotes the metric.
\end{enumerate}
We call $\tilde f$ the island mapping.
\end{defn}
\begin{lem}\label{l:djt}
Let $F$ and $F'$ have dense $\iota$-island structure with the
$\iota$-island families $\xs D$ and $\xs D'$, respectively. Suppose that $F$
and $F'$ have the same dense $\iota$-island structure with island
mapping~$\tilde f$ and constant~$\tilde L$. Then there is a bi-Lipschitz
mapping~$f$ of~$F$
onto~$F'$ such that
\begin{equation}\label{eq:eddjt}
L^{-1}\rho(x,y)/|F|\le\rho(f(x),f(y))/|F'|\le L\rho(x,y)/|F|
\quad\text{for $x,y\in F$}.
\end{equation}
Here $L=(2\iota+1)\tilde L$. In particular, $F\simeq F'$.
\end{lem}
\begin{proof}
Let $f$ be the one-to-one mapping of $\bigsqcup\xs D$ onto $\bigsqcup\xs
D'$ such that the restriction of $f$ to $D$ is just $f_D$, i.e.,
$f|_D=f_D$, for every $D\in\xs D$, where $f_D$ is as in
Definition~\ref{d:smdi}. We claim that $f$ and $L=(2\iota+1)\tilde L$
satisfy the inequality~\eqref{eq:eddjt} for $x,y\in\bigsqcup\xs D$. For
this, let $x,y\in\bigsqcup\xs D$. There are two cases. If $x,y\in D$ for
some $D\in\xs D$, then $f$ satisfies the inequality~\eqref{eq:eddjt} for
$L=\tilde L$ since $f|_D=f_D$. Suppose otherwise that $x\in D_x$ and $y\in
D_y$ for distinct $D_x,D_y\in\xs D$. Then by the definition of~$f$,
$f(x)\in\tilde f(D_x)$ and $f(y)\in\tilde f(D_y)$. Therefore,
\begin{gather*}
\dist(D_x,D_y)\le\rho(x,y)\le
|D_x|+\dist(D_x,D_y)+|D_y|\le(2\iota+1)\dist(D_x,D_y), \\
\begin{split}
\dist(\tilde f(D_x),\tilde f(D_y))\le\rho(f(x),f(y))
&\le|\tilde f(D_x)|+\dist(\tilde f(D_x),\tilde f(D_y))+|\tilde f(D_y)|\\
&\le(2\iota+1)\dist(\tilde f(D_x),\tilde f(D_y)).
\end{split}
\end{gather*}
Together with Condition~(i) of Definition~\ref{d:smdi}, we have
\[((2\iota+1)\tilde L)^{-1}\rho(x,y)/|F|\le\rho(f(x),f(y))/|F'|
\le(2\iota+1)\tilde L\rho(x,y)/|F|.\]
A summary of the above two cases shows that $f$ and $L=(2\iota+1)\tilde L$
satisfy the inequality~\eqref{eq:eddjt} for $x,y\in\bigsqcup\xs D$.
Finally, note that $f$ can be extended to a bi-Lipschitz mapping from $F$
onto $F'$ since $\bigsqcup\xs D$ and $\bigsqcup\xs D'$ are dense in $F$ and
$F'$, respectively. We also denote this mapping by~$f$. Then $f$ and $L$
satisfy the inequality~\eqref{eq:eddjt} for $x,y\in F$.
\end{proof}
\subsection{Measure linear}\label{ssec:ML}
To make use of Lemmas~\ref{l:cylin} and~\ref{l:djt}, we need to construct
the corresponding structures for the given sets. The difficulty is how to do
this. An important observation obtained by Cooper and
Pignataro~\cite{CooPi88}, called measure linearity, gives the key hint.
\begin{defn}[measure linear]\label{d:msrln}
Let $(X,\mu)$ and $(Y,\nu)$ be two measure spaces. A map $f\colon X\to Y$
is called \emph{measure linear} if there is a constant $a>0$ such that for
all $\mu$-measurable sets $A\subset X$, $f(A)$ is $\nu$-measurable and
$\nu(f(A))=a\mu(A)$.
\end{defn}
Let $f$ be a bi-Lipschitz mapping from a self-similar set~$E$ onto another
self-similar set. Cooper and Pignataro~\cite{CooPi88} showed that the
restriction of $f$ to some small copy of~$E$ is measure linear for
$s$-dimensional Hausdorff measure, where $s=\hdim E$, provided that the two
self-similar sets both satisfy the SSC (see Lemma~\ref{l:ML}). In fact,
measure linear property also holds in our setting (see Lemma~\ref{l:ml}).
Inspired by the measure linear property, it is natural to require
\begin{equation}\label{eq:ML}
\xc H^s(\tilde f(C))/\xc H^s(C)=\xc H^s(F')/\xc H^s(F)
\quad\text{for all cylinders $C$}
\end{equation}
in the construction of same cylinder structure for $F$ and $F'$. In fact,
this is just the case in the proofs of Lemma~\ref{l:4blockslip} and the
sufficient part of Proposition~\ref{p:BlipB}. We remark that a cylinder
mapping~$\tilde f$ satisfying~\eqref{eq:ML} induces a bi-Lipschitz
mapping~$f$ (as in the proof of Lemma~\ref{l:cylin}) such that $f$ is
measure linear on the whole set~$F$. We also remark that all the bi-Lipschitz
mappings appearing in~\cite{DenHe12,LuoLa12,RaRuX06,RuWaX12,XiRu07,XiXi10}
have the measure linear property on the whole set.
However, in many cases, e.g., the rotation version of the
\{1,3,5\}-\{1,4,5\} problem in~\cite{XioXi12}, bi-Lipschitz mappings which
are measure linear on the whole set do not exist; there only exist
bi-Lipschitz mappings which are measure linear on subsets. In fact, this is
exactly what was obtained by Cooper and Pignataro~\cite{CooPi88}. In such
cases, we cannot construct the same cylinder structure guided by measure
linearity, and this makes our proof much more complicated. Instead, we must
consider all the subsets on which some bi-Lipschitz mapping is measure
linear. Falconer and Marsh~\cite{FalMa92} showed that the union of such
subsets is dense in the whole set. This is why we introduce the dense island
structure.
Lemma~\ref{l:djt} is our tool to deal with these cases, see the proofs of
Lemma~\ref{l:4blockslip'} and Proposition~\ref{p:BlipE}.
We close this subsection with an example in~\cite{XioXi12} showing that
bi-Lipschitz mappings which are measure linear on the whole set need not exist.
\begin{exmp}\label{e:NML}
We consider the $\{1,3,5\}$-set and the $\{1,-4,5\}$-set (see
Figure~\ref{fig:135145r}). Recall that the two sets are the self-similar
sets such that
\begin{align*}
E_{1,3,5} & =(E_{1,3,5}/5)\cup(E_{1,3,5}/5+2/5)\cup(E_{1,3,5}/5+4/5), \\
E_{1,-4,5} & =(E_{1,-4,5}/5)\cup(-E_{1,-4,5}/5+4/5)\cup(E_{1,-4,5}/5+4/5).
\end{align*}
It follows from Theorem~\ref{t:srt} that $E_{1,3,5}\simeq E_{1,-4,5}$. But
bi-Lipschitz mappings of~$E_{1,3,5}$ onto~$E_{1,-4,5}$ which are measure
linear on~$E_{1,3,5}$ do not exist.
Suppose on the contrary that $f$ is such a mapping. Then we have
\[\nu(f(A))=\mu(A)\quad\text{for all Borel sets $A\subset E_{1,3,5}$},\]
where $\mu$ and $\nu$ are the natural measure of $E_{1,3,5}$ and
$E_{1,-4,5}$, respectively. Let $F=2E_{1,3,5}$; then we can rewrite
\[E_{1,-4,5}=(E_{1,-4,5}/5)\cup(F/5+3/5).\]
We say that $A$ is a small copy of a self-similar set~$E$ if $A=S(E)$,
where $S$ can be written as a composition of similarities in the IFS
of~$E$. Now let $A$ be a small copy of~$E_{1,3,5}$ such that
$f(A)\subset(F/5+3/5)$, then $\mu(A)=3^{-k}$ for some positive integer~$k$
and
\[f(A)=(F_1\cup\dots\cup F_n)/5+3/5\subset(F/5+3/5),\]
where $F_i$ is a small copy of $F$ for $1\le i\le n$. So for each $1\le
i\le n$, there is a positive integer~$k_i$ such that $\nu(F_i)=2\cdot
3^{-k_i}$. Therefore,
\[\nu(f(A))=2\cdot3^{-1}(3^{-k_1}+\dots+3^{-k_n})=\mu(A)=3^{-k}.\]
But this is impossible: multiplying both sides by
$3^{1+\max\{k,k_1,\dots,k_n\}}$ turns the left-hand side into an even integer
and the right-hand side into a positive power of~$3$, which is odd.
\end{exmp}
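The impossibility at the end of the example can also be checked mechanically
on a finite range of exponents. The following brute-force search is purely
illustrative (the search bounds are chosen arbitrarily) and simply agrees
with the parity argument above.
\begin{verbatim}
from fractions import Fraction
from itertools import product

# 2*3^{-1}*(3^{-k_1}+...+3^{-k_n}) is never a power 3^{-k}.
powers_of_third = {Fraction(1, 3**k) for k in range(0, 15)}
for n in range(1, 4):
    for ks in product(range(1, 7), repeat=n):
        value = Fraction(2, 3) * sum(Fraction(1, 3**m) for m in ks)
        assert value not in powers_of_third
print("no counterexample in the searched range")
\end{verbatim}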
\subsection{Suitable decomposition}\label{ssec:BDsd}
In the construction of cylinder mappings satisfying~\eqref{eq:ML}, it is
often necessary to decompose interior blocks into small parts with measures
equal to given numbers. We call such decompositions suitable decompositions
(Definition~\ref{d:suitde}). This subsection deals with the existence of
suitable decompositions (Lemma~\ref{l:suitde}).
We begin with the definition of suitable decomposition by using the notations
in Definition~\ref{d:ntt}.
\begin{defn}[suitable decomposition]\label{d:suitde}
Let $\xs A\subset\xs B_l$ be a nonempty family of level-$l$ blocks. We call
\[\xs B_k(\xs A)=\bigcup_{i=1}^n\xs A_i\]
an order-$k$ suitable decomposition for positive numbers
$a_1,a_2,\dots,a_n$, if $\xs A_i\cap\xs A_j=\emptyset$ for $i\ne j$ and
\[\mu\left(\bigsqcup\xs A_i\right)=\sum_{B\in\xs A_i}\mu(B)
=a_i\quad\text{for $1\le i\le n$}.\]
\end{defn}
\begin{rem}\label{r:suit}
It is plain to observe that the existence of an order-$k$ suitable
decomposition ensures the existence of an order-$K$ suitable decomposition
for all $K\ge k$ since each level-$(l+k)$ block can be written as a
disjoint union of some level-$(l+K)$ blocks.
\end{rem}
\begin{defn}\label{d:suit}
Let $\alpha_1$, $\alpha_2$, $\dots$, $\alpha_\eta$ be positive numbers and
$\xs A\subset\xs B_l$ a nonempty family of level-$l$ blocks. We say that
$a_1$, $a_2$, \dots, $a_n$ are $(\xs
A;\alpha_1,\dots,\alpha_\eta)$-suitable if
\[\sum_{i=1}^na_i=\sum_{B\in\xs A}P_B(p)\]
and $a_i\in\{\alpha_1,\dots,\alpha_\eta\}$ for all $i=1,2,\dots,n$.
\end{defn}
\begin{lem}\label{l:suitde}
Given positive numbers $\alpha_1,\alpha_2,\dots,\alpha_\eta\in I_\ms$,
there exists an integer $K\ge1$, depending only on $\ms$ and $\alpha_1$,
$\dots$, $\alpha_\eta$, with the following property. For each ${l\ge1}$, each
nonempty family of level-$l$ interior blocks $\xs A\subset\xs B^\circ_l$
and each $(\xs A;\alpha_1,\dots,\alpha_\eta)$-suitable sequence
$a_1,a_2,\dots,a_n$, there is an order-$K$ suitable decomposition
\[\xs B_K(\xs A)=\bigcup_{i=1}^n\xs A_i\]
for $p^la_1,\dots,p^la_n$.
\end{lem}
Lemma~\ref{l:suitde} ensures the measure linear property of the bi-Lipschitz
mappings in our proof. Before proving it, we present two technical lemmas.
\begin{lem}\label{l:idealP}
Let $\alpha\in I_\ms$ be a positive number. Then for each sufficiently
large integer $\ell$, there exist integers $\gamma_{\ell,P}\ge0$ for
$P\in\xc P^\circ$ such that
\[\alpha=p^\ell\sum_{P\in\xc P^\circ}\gamma_{\ell,P}P(p).\]
\end{lem}
\begin{proof}
Let $I^*$ denote the set of all positive numbers $\alpha\in I_\ms$
satisfying the property in the lemma. We claim that $p^m P(p)\in I^*$ for
all integers~$m$ and all $P\in\xc P^\circ$. For this, suppose that $P=P_A$
for some $A\in\xs B^\circ_l$. By Remark~\ref{r:muB} and
Remark~\ref{r:inblock}(a), for all $k\ge1$,
\[p^m P(p)=p^{m-l}\mu(A)=p^{m-l}\sum_{B\in\xs B_k(A)}
\mu(B)=p^{m+k}\sum_{B\in\xs B^\circ_k(A)}P_{B}(p),\]
where $\xs B_k(A)=\{B\in\xs B_{l+k}\colon B\subset A\}$ and $\xs
B^\circ_k(A)=\{B\in\xs B^\circ_{l+k}\colon B\subset A\}$. This means that $p^m
P(p)\in I^*$. Now let $\alpha\in I_\ms$ be a positive number, then by
Lemma~\ref{l:ideal}, Lemma~\ref{l:Z[p]}(e) and~(f), we have
\[\alpha=\sum_{P\in\xc P^\circ}b_PP(p),\]
where each $b_P$ can be written as $p^{m_1}+\dots+p^{m_\kappa}$ for some
integers $m_1$, \dots, $m_\kappa$. Observe that $I^*+I^*\subset I^*$.
This fact together with $p^m P(p)\in I^*$ yields $\alpha\in I^*$, and so
$I^*=\{\alpha\in I_\ms\colon \alpha>0\}$.
\end{proof}
\begin{lem}\label{l:dvsum}
Given positive numbers $\alpha_1,\alpha_2,\dots,\alpha_\eta$ and
$\beta_1,\beta_2,\dots,\beta_\tau$, let $\Sigma_\theta$ be the set of all
vectors $(\kappa_1,\kappa_2,\dots,\kappa_{\eta+\tau})$ with nonnegative
integer entries such that
\[\kappa_1\alpha_1 + \kappa_2\alpha_2 + \dots + \kappa_\eta\alpha_\eta =
\kappa_{\eta+1}\beta_1 + \kappa_{\eta+2}\beta_2 + \dots +
\kappa_{\eta+\tau}\beta_\tau>\theta,\]
for $\theta\ge0$. Suppose that $\Sigma_0\ne\emptyset$. Then there exists a
constant $\theta>0$ such that each vector
$(\kappa_1,\kappa_2,\dots,\kappa_{\eta+\tau})\in\Sigma_\theta$ can be
written as the sum of two vectors in~$\Sigma_0$, i.e., there exist
$(\kappa'_1, \dots, \kappa'_{\eta+\tau}), (\kappa''_1, \dots,
\kappa''_{\eta+\tau}) \in \Sigma_0$ such that
\[\kappa_i=\kappa'_i+\kappa''_i,\quad\text{for}\ 1\le i\le \eta+\tau.\]
\end{lem}
\begin{proof}
Suppose to the contrary that no such $\theta$ exists. Then we can find a
sequence of vectors $(\kappa^{(j)}_1, \dots, \kappa^{(j)}_{\eta+\tau}) \in
\Sigma_0$ such that no vector $(\kappa^{(j)}_1, \dots,
\kappa^{(j)}_{\eta+\tau})$ can be written as the sum of two vectors
in~$\Sigma_0$ and the sequence $\sum_{i=1}^\eta \kappa^{(j)}_i\alpha_i =
\sum_{i=1}^\tau \kappa^{(j)}_{\eta+i}\beta_i$ is strictly increasing in~$j$.
Now consider the sequence of nonnegative integers~$\{\kappa_1^{(j)}\}$. If
$\sup_j\kappa^{(j)}_1<\infty$, then it has a constant subsequence;
otherwise $\sup_j\kappa^{(j)}_1=\infty$ and it has a strictly increasing
subsequence.
Therefore, by taking a subsequence, we can assume that
$\{\kappa_1^{(j)}\}$ is either constant or strictly increasing. Applying
the same argument, we can further assume that $\{\kappa_i^{(j)}\}$ is
either constant or strictly increasing for $1\le i\le\eta+\tau$. But then
$(\kappa^{(2)}_1-\kappa^{(1)}_1, \dots, \kappa^{(2)}_{\eta+\tau} -
\kappa^{(1)}_{\eta+\tau}) \in \Sigma_0$ and
\[\kappa^{(2)}_i = (\kappa^{(2)}_i-\kappa^{(1)}_i)+\kappa^{(1)}_i,\qquad
1\le i\le \eta+\tau,\]
contradicting the fact that $(\kappa^{(2)}_1, \dots,
\kappa^{(2)}_{\eta+\tau})$ cannot be written as the sum of two vectors
in~$\Sigma_0$.
\end{proof}
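For concreteness, here is a small brute-force illustration of
Lemma~\ref{l:dvsum}. The data $\alpha=(2,3)$, $\beta=(1)$ and the bound
$\theta=3$ are chosen for illustration only and do not come from the paper.
\begin{verbatim}
from itertools import product

# Sigma_0 = {(k1, k2, k3) >= 0 : 2*k1 + 3*k2 = k3 > 0}; we check on a
# finite range that every vector of value > 3 is a sum of two vectors
# in Sigma_0, so theta = 3 works for this example data.
def in_sigma0(v):
    k1, k2, k3 = v
    return 2*k1 + 3*k2 == k3 and k3 > 0

def splits(v):
    k1, k2, k3 = v
    return any(in_sigma0((a, b, c)) and in_sigma0((k1 - a, k2 - b, k3 - c))
               for a in range(k1 + 1)
               for b in range(k2 + 1)
               for c in range(k3 + 1))

for v in product(range(8), range(8), range(40)):
    if in_sigma0(v) and 2*v[0] + 3*v[1] > 3:
        assert splits(v)
print("theta = 3 suffices in the searched range")
\end{verbatim}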
\begin{proof}[Proof of Lemma~\ref{l:suitde}]
We begin by applying Lemma~\ref{l:dvsum}, taking the
$\alpha_1,\dots,\alpha_\eta$ there to be the given
$\alpha_1,\dots,\alpha_\eta$ and $\{\beta_1,\dots,\beta_\tau\}=\{P(p)\colon
P\in\xc P^\circ\}$. According to Lemma~\ref{l:dvsum}, we may assume that
\begin{equation}\label{eq:<theta}
\sum_{B\in\xs A}P_B(p)=\sum_{i=1}^na_i\le\theta.
\end{equation}
We shall prove the lemma for specified $\xs A$ and $a_1,\dots,a_n$, and
show that the constant~$K$ depends only on the two sequences $P_B$
($B\in\xs A$) and $a_1,\dots,a_n$. Since there are only finitely many such
pairs of sequences under the assumption~\eqref{eq:<theta}, the lemma follows.
Now fix $\xs A\subset\xs B^\circ_l$ and $a_1,\dots,a_n$ with
$a_i\in\{\alpha_1,\dots,\alpha_\eta\}$ such that $\sum_{B\in\xs
A}P_B(p)=\sum_{i=1}^na_i$. We divide the proof into two steps.
In the first step, we consider a closely related decomposition
of the family
\[\ms_{\xs A,k}=\Bigl\{S\in\ms_{l+k}\colon S(E)\subset\bigsqcup\xs A\Bigr\},
\]
where $\ms_{l+k}$ is defined by~\eqref{eq:Sk}. We will find an integer
$K'>0$, which depends on the two sequences $P_B$ ($B\in\xs A$) and
$a_1,\dots,a_n$, such that there is a decomposition
\begin{equation}\label{eq:4dvsum}
\ms_{\xs A,K'}=\bigcup_{i=1}^n\xc A_i
\end{equation}
satisfying
\begin{equation}\label{eq:4dvsum'}
\sum_{S\in\xc A_i}\mu\bigl(S(E)\bigr)=p^la_i
\quad\text{for $1\le i\le n$}.
\end{equation}
By Lemma~\ref{l:Z[p]}(e), there exist an integer $K_1\ge1$ and integers
$a_{i,\ell}\ge0$ such that
\begin{equation}\label{eq:Zpnum}
a_i=p^{K_1}\sum_{\ell=0}^\lambda a_{i,\ell}p^\ell
\quad \text{for}\ 1\le i\le n.
\end{equation}
Here $\lambda$ is as in~\eqref{eq:lambda}. Note that $K_1$ depends on
$\alpha_1,\dots,\alpha_\eta$ since $a_i\in\{\alpha_1,\dots,\alpha_\eta\}$.
Write $\bm a_i=\bigl(a_{i,0},\dots,a_{i,\lambda}\bigr)^T$ for $1\le i\le
n$. Let $\bm\Xi$ be the matrix in~\eqref{eq:Xi} and $\bm
p=(1,p,\dots,p^\lambda)$. Recall that $\bm\Xi$ is primitive, $p^{-1}$ is
the Perron-Frobenius eigenvalue of~$\bm\Xi$ and $\bm p$ is the
corresponding left-hand Perron-Frobenius eigenvector. Consequently,
by~\eqref{eq:Zpnum},
\[a_i=p^{K_1}\bm p\bm a_i=p^{K_1+k}\bm p\bm\Xi^k\bm a_i,\quad
\text{for all $k\ge0$}.\]
Write
\[b_{k,\ell} = \card\bigl\{S\in\ms_{\xs A,k}\colon\text{the ratio of $S$ is
$r^{l+k+\ell}$}\bigr\}\quad\text{for $0\le \ell\le\lambda$},\]
and $\bm b=(b_{K_1,0},\dots,b_{K_1,\lambda})^T$. Then we have
\[\bm p\sum_{i=1}^n\bm a_i=p^{-K_1}\sum_{i=1}^na_i=p^{-K_1}\sum_{B\in\xs A}
P_B(p)=p^{-l-K_1}\sum_{B\in\xs A}\mu(B) =\bm p\bm b.\]
By Lemma~\ref{l:limMatrix}, as $k\to\infty$,
\[p^k\bm\Xi^k\bm a_i\to(\bm p\cdot\bm a_i)\bm q\ \text{for}\ 1\le i\le n
\quad\text{and}\quad p^k\bm\Xi^k\bm b\to(\bm p\cdot\bm b)\bm q.\]
It follows that there exists an integer~$K_2>0$ such that
\begin{equation}\label{eq:a<b}
\bm\Xi^{K_2}\sum_{i=1}^{n-1}\bm a_i<\bm\Xi^{K_2}\bm b.
\end{equation}
Let $K'=K_1+K_2$. Note that $K'$ depends on the two sequences $P_B$
($B\in\xs A$) and $a_1,\dots,a_n$ since this holds for $K_1$ and $K_2$. We
shall show that $K'$ has the desired property.
Write $\bm\Xi^{K_2}\bm a_i=(a'_{i,0},\dots,a'_{i,\lambda})^T$ for $1\le
i\le n-1$. By the definition of the matrix~$\bm\Xi$, we have
$\bm\Xi^{K_2}\bm b=(b_{K',0}, \dots, b_{K',\lambda})^T$. It follows
from~\eqref{eq:a<b} that
\[a'_{1,\ell}+a'_{2,\ell}+\dots+a'_{n-1,\ell}<b_{K',\ell}\quad
\text{for $0\le\ell\le\lambda$}.\]
Note that $a'_{i,\ell}\ge0$ for $1\le i\le n-1$ and $0\le \ell\le\lambda$
since $a_{i,\ell}\ge0$. Recall that
\[b_{K',\ell}= \card\bigl\{S\in\ms_{\xs A,K'}\colon\text{the ratio of $S$ is
$r^{l+K'+\ell}$}\bigr\}\quad\text{for $0\le \ell\le\lambda$}.\]
So there exist disjoint subsets $\xc A_1,\dots,\xc A_{n-1}$ of $\ms_{\xs
A,K'}$ such that
\[\card\bigl\{S\in\xc A_i \colon \mu\bigl(S(E)\bigr)
=p^{l+K'+\ell}\bigr\}=a'_{i,\ell}\]
for $1\le i\le n-1$ and $0\le \ell\le\lambda$. Write $\xc A_n=\ms_{\xs
A,K'}\setminus\bigcup_{i=1}^{n-1}\xc A_i$. We claim that the
decomposition
\[\ms_{\xs A,K'}=\bigcup_{i=1}^n\xc A_i\]
satisfies
\[\sum_{S\in\xc A_i}\mu\bigl(S(E)\bigr)=p^la_i
\quad\text{for $1\le i\le n$}.\]
In fact, for $1\le i\le n-1$,
\[\sum_{S\in\xc A_i}\mu\bigl(S(E)\bigr)=p^{l+K'}\bm p
\bm\Xi^{K_2}\bm a_i=p^{l+K_1}\bm p\bm a_i=p^la_i.\]
And so
\begin{multline*}
\sum_{S\in\xc A_n}\mu\bigl(S(E)\bigr)=\sum_{S\in\ms_{\xs A,K'}}\mu
\bigl(S(E)\bigr)-\sum_{i=1}^{n-1}\sum_{S\in\xc A_i}\mu\bigl(S(E)\bigr) \\
=\sum_{B\in\xs A}\mu(B)-p^l\sum_{i=1}^{n-1}a_i=p^la_n.
\end{multline*}
In the second step, we will use the decomposition~\eqref{eq:4dvsum} to
obtain a suitable decomposition. The idea is to approximate
$\bigcup_{S\in\xc A_i}S(E)$ by $\bigcup_{S\in\xc A_i}\bigcup_{B\in\xs
B^\circ_k}S(B)$, the latter being a disjoint union of interior blocks
(see Remark~\ref{r:inblock}(b)).
For $1\le i\le n$, by~\eqref{eq:4dvsum'},
\begin{multline*}
\sum_{S\in\xc A_i}\biggl(\mu\bigl(S(E)\bigr)-\sum_{B\in\xs B^\circ_k}
\mu\bigl(S(B)\bigr)\biggr)
=\sum_{S\in\xc A_i}\sum_{B\in\xs B^\partial_k}
\mu\bigl(S(B)\bigr)\\
=\sum_{S\in\xc A_i}\sum_{B\in\xs B^\partial_k}
\mu\bigl(S(E)\bigr)\cdot\mu(B)
=p^{l+k}a_i\sum_{B\in\xs B^\partial_k}P_B(p).
\end{multline*}
Note that $a_iP_B(p)>0$ and $a_iP_B(p)\in I_\ms$ since $a_i\in I_\ms$. By
Lemma~\ref{l:idealP}, we can further require that the positive integer $K'$
in the first step also satisfies the condition that there exist integers
$\gamma_{i,P_B,P}\ge0$ such that
\[a_iP_B(p)=p^{K'}\sum_{P\in\xc P^\circ}\gamma_{i,P_B,P}P(p)\]
for all $1\le i\le n$ and all $B\in\xs B^\partial_k$. Since
$a_i\in\{\alpha_1,\dots,\alpha_\eta\}$ and $P_B\in\xc P$, such $K'$ depends
on $\alpha_1,\dots,\alpha_\eta$ and $\ms$ rather than on~$k$ or~$\xs
B^\partial_k$. By the above computation,
\begin{equation}\label{eq:addition}
\sum_{S\in\xc A_i}\biggl(\mu\bigl(S(E)\bigr)-\sum_{B\in\xs B^\circ_k}
\mu\bigl(S(B)\bigr)\biggr)=p^{l+K'+k}\sum_{P\in\xc P^\circ}
\sum_{B\in\xs B^\partial_k}\gamma_{i,P_B,P}P(p).
\end{equation}
Recall that $\sum_{i=1}^na_i\le\theta$, and so $n\le\theta\max_{1\le
i\le\eta}\alpha_i^{-1}$. Write
\[\gamma^*=\max_{i,P_B,P} \gamma_{i,P_B,P}.\]
By Lemmas~\ref{l:C(k)} and~\ref{l:CP(k)}, there exists an integer~$K_3>0$
depending on~$\ms$ and $\alpha_1,\dots,\alpha_\eta$ such that for all
$P\in\xc P^\circ$,
\begin{equation}\label{eq:CP(K3)}
\zeta_P(K_3)\ge\gamma^*\zeta(K_3)\theta\max_{1\le i\le\eta}\alpha_i^{-1}\ge
n\gamma^*\zeta(K_3)\ge\sum_{i=1}^n\sum_{B\in\xs B^\partial_{K_3}}
\gamma_{i,P_B,P}.
\end{equation}
For $1\le i\le n$ and $P\in\xc P^\circ$, write
\[\xs A'_i=\bigl\{S(B)\colon S\in\xc A_i,B\in\xs B^\circ_{K_3}\bigr\}
\quad\text{and}\quad\Upsilon_{i,P}=\sum_{B\in\xs B^\partial_{K_3}}
\gamma_{i,P_B,P}.\]
Now we consider $\xs A'_1$, $\xs A'_2$, \dots, $\xs A'_{n-1}$, which are
families consisting of interior blocks. Equality~\eqref{eq:addition} says
that, for each $P\in\xc P^\circ$ and each $1\le i\le n-1$, if we can find
$\Upsilon_{i,P}$ many level-$(l+K'+K_3)$ interior blocks~$B$ such that
$P_B=P$, and add them to~$\xs A'_i$, then $\mu(\bigsqcup\xs A'_i)$ becomes
exactly $p^la_i$. So we need to find that many interior blocks outside of
$\xs A'_1$, $\xs A'_2$, \dots, $\xs A'_{n-1}$.
It follows from \eqref{eq:a<b} that
\[a'_{1,0}+a'_{2,0}+\dots+a'_{n-1,0}<b_{K',0}.\]
According to the definition of~$\xc A_i$, the above inequality implies that
$\xc A_n$ contains at least one $S$, say $S^*$, whose ratio is $r^{l+K'}$.
By Remark~\ref{r:inblock}(b) and the definition of~$\zeta_P$, we know that, for
each $P\in\xc P^\circ$, the family $\bigl\{S^*(B)\colon B\in\xs
B^\circ_{K_3}\bigr\}$ contains $\zeta_P(K_3)$ many level-$(l+K'+K_3)$ interior
blocks~$B$ such that $P_B=P$. Then inequality~\eqref{eq:CP(K3)} implies
that there exist disjoint subfamilies $\xs A''_1$, \dots, $\xs A''_{n-1}$
of $\bigl\{S^*(B)\colon B\in\xs B^\circ_{K_3}\bigr\}$ such that
\[\card\{B\in\xs A''_i\colon P_B=P\}=\Upsilon_{i,P}\quad
\text{for $1\le i\le n-1$ and $P\in\xc P^\circ$}.\]
Since $S^*\in\xc A_n$, the families $\xs A'_1$, \dots, $\xs A'_{n-1}$,
$\xs A''_1$, \dots, $\xs A''_{n-1}$ are disjoint.
Notice that, for $B\in\bigcup_{i=1}^{n-1}\xs A'_i$, the level of~$B$ is at
most $l+K'+K_3+\lambda$; while for all $B\in\bigcup_{i=1}^{n-1}\xs A''_i$
the level of~$B$ is $l+K'+K_3$. Let $K=K'+K_3+\lambda$, then $K$ depends on
$\ms$ and $\alpha_1$, \dots, $\alpha_\eta$. Define
\[\xs A_i=\Bigl\{B\in\xs B_K(\xs A)\colon
B\subset\bigsqcup\xs A'_i\cup\bigsqcup\xs A''_i\Bigr\}
\quad\text{for $1\le i\le n-1$},
\]
and $\xs A_n=\xs B_K(\xs A)\setminus\bigcup_{i=1}^{n-1}\xs A_i$. Then $\xs
B_K(\xs A)=\bigcup_{i=1}^n\xs A_i$ is a suitable decomposition, since
\[\sum_{B\in\xs A_i}\mu(B)=\sum_{B\in\xs A'_i\cup\xs A''_i}\mu(B)
=\sum_{S\in\xc A_i}\mu\bigl(S(E)\bigr)=p^la_i
\]
for $1\le i\le n-1$ due to~\eqref{eq:addition}, and so $\sum_{B\in\xs
A_n}\mu(B)=p^la_n$.
\end{proof}
\section{Interior Blocks and the Whole Set}\label{sec:LipBE}
Fix an $\ms=\{S_1,S_2,\dots,S_N\}\in\TOEC$. For notational convenience, we
denote $E_\ms$, $\mu_\ms$, $r_\ms$ and~$p_\ms$ simply by~$E$, $\mu$, $r$
and~$p$, respectively. We also use the notation of Definition~\ref{d:ntt}.
The aim of this section is to prove the following proposition.
\begin{prop}\label{p:BlipE}
We have $B_0\simeq E$ for all interior blocks~$B_0$.
\end{prop}
Let us say something about the proof of Proposition~\ref{p:BlipE}.
Example~\ref{e:NML} implies that, in some cases, it is impossible to
construct the same cylinder structure for $E$ and~$B_0$ under the guidance of
the measure linear property. Fortunately, we find that $E$ and $B_0$ have the
same dense island structure, and so Proposition~\ref{p:BlipE} follows from
Lemma~\ref{l:djt}. The difficulty is how to define the islands and the
bi-Lipschitz mapping $f_D$ between two islands $D$ and $\tilde f(D)$. The
results in Section~\ref{ssec:BDnum} say that almost all blocks are interior
blocks. So it is natural to define the islands to be finite unions of
interior blocks. This means that we need to consider bi-Lipschitz mappings
between two finite unions of interior blocks. We do this in
Section~\ref{ssec:LipBB}, and then give the proof of
Proposition~\ref{p:BlipE} in Section~\ref{ssec:LipBE}.
\subsection{The Lipschitz equivalence of interior blocks}\label{ssec:LipBB}
For $A\in\xs B_l$ and $\xs A\subset\xs B_l$ ($l\ge0$), recall that
$\bigsqcup\xs A=\bigcup_{A\in\xs A}A$,
\[\xs B_k(A)=\bigl\{B\in\xs B_{l+k}\colon B\subset A\bigr\}
\quad\text{and}\quad
\xs B_k(\xs A)=\left\{B\in\xs B_{l+k}\colon B\subset\bigsqcup\xs A\right\}.\]
\begin{defn}\label{d:strc}
Let $\xs A$ be a nonempty family of interior blocks. We say that $\xs A$
has $(A;l,k)$-structure if $A\in\xs B_l$ is a level-$l$ block and $\xs
A\subset\xs B_k^\circ(A)$.
\end{defn}
\begin{lem}\label{l:blockslip}
For each $k_0\ge1$, there exists a constant~$L$ depending only on~$k_0$
with the following property. Let $\xs A$ and $\xs A'$ be two nonempty
families of interior blocks such that $\xs A$ has $(A;l_1,k_1)$-structure
and $\xs A'$ has $(A';l_2,k_2)$-structure, where $\max(k_1,k_2)\le k_0$.
Then there is a bi-Lipschitz mapping $f\colon\bigsqcup\xs A\to\bigsqcup\xs
A'$ such that
\[L^{-1}r^{-l_1}|x-y|\le r^{-l_2}|f(x)-f(y)|\le Lr^{-l_1}|x-y|
\quad\text{for}\ x,y\in\bigsqcup\xs A.\]
In particular, we have $\bigsqcup\xs A\simeq\bigsqcup\xs A'$ for any two
nonempty families of interior blocks.
\end{lem}
The proof of Lemma~\ref{l:blockslip} is based on two special cases,
Lemmas~\ref{l:4blockslip} and~\ref{l:4blockslip'}.
\begin{lem}\label{l:4blockslip}
For every $k_0\ge1$, there exists a constant~$L$ depending only on~$k_0$
with the following property. Let $\xs A$ and $\xs A'$ be two nonempty
families of interior blocks such that $\xs A$ has $(A;l_1,k_1)$-structure
and $\xs A'$ has $(A';l_2,k_2)$-structure, where $\max(k_1,k_2)\le k_0$. If
\[\sum_{B\in\xs A}P_B(p)=\sum_{B\in\xs A'}P_B(p),\]
then there is a bi-Lipschitz mapping $f\colon\bigsqcup\xs
A\to\bigsqcup\xs A'$ such that
\[L^{-1}r^{-l_1}|x-y|\le r^{-l_2}|f(x)-f(y)|\le Lr^{-l_1}|x-y|
\quad\text{for}\ x,y\in\bigsqcup\xs A.\]
\end{lem}
\begin{proof}
Let $F=\bigsqcup\xs A$ and $F'=\bigsqcup\xs A'$. We shall show that $F$ and
$F'$ have the same cylinder structure; then this lemma follows from
Lemma~\ref{l:cylin}. For this, we make use of Lemma~\ref{l:suitde} to
define a cylinder mapping satisfying~\eqref{eq:ML}. Take the positive
numbers $\alpha_1,\dots,\alpha_\eta$ in Lemma~\ref{l:suitde} to be
$\{P(p)\colon P\in\xc P^\circ\}$ and $K$ the corresponding integer
constant. The cylinder families $\xs C_k$ and $\xs C'_k$ are defined as
follows. Define $\xs C_1=\xs A$ and
\[\begin{cases}
\xs C_k=\xs B_{(k-1)K}(\xs A)&\text{for $k$ odd};\\
\xs C'_k=\xs B_{(k-1)K}(\xs A')&\text{for $k$ even}.
\end{cases}\]
\begin{center}
\begin{tabular}{*8c}
\hline
& $k=1$ & $k=2$ & $k=3$ & $k=4$ & $k=5$ & $k=6$ & \dots
\rule[-1ex]{0pt}{3.5ex} \\
\hline
$\xs C_k$: & $\xs A$ & & $\xs B_{2K}(\xs A)$ &
& $\xs B_{4K}(\xs A)$ & & \dots \rule[-1ex]{0pt}{3.5ex} \\
$\xs C'_k$: & & $\xs B_K(\xs A')$ & & $\xs B_{3K}(\xs A')$
& & $\xs B_{5K}(\xs A')$ & \dots \rule[-1ex]{0pt}{3.5ex} \\
\hline
\end{tabular}
\end{center}
It remains to define $\xs C_k$ for even~$k$, $\xs C'_k$ for odd~$k$,
and the cylinder mapping~$\tilde f$.
We begin with the definition of~$\xs C'_1$ by making use of
Lemma~\ref{l:suitde}. Since $P_B\in\xc P^\circ$ for all $B\in\xs C_1=\xs A$
and
\[\sum_{B\in\xs C_1}P_B(p)=\sum_{B\in\xs A}P_B(p)=\sum_{B\in\xs A'}P_B(p),\]
we know that the sequence $\{P_B(p)\colon B\in\xs C_1\}$ is $(\xs
A';\alpha_1,\dots,\alpha_\eta)$-suitable by Definition~\ref{d:suit},
where $\{\alpha_1,\dots,\alpha_\eta\}=\{P(p)\colon P\in\xc P^\circ\}$. Since
$\xs A'\subset\xs B_{k_2}^\circ(A')\subset\xs B_{l_2+k_2}^\circ$, by
Lemma~\ref{l:suitde}, for positive numbers $\{p^{l_2+k_2}P_B(p)\colon
B\in\xs C_1\}$, there is a suitable decomposition
\[\xs B_K(\xs A')=\bigcup_{B\in\xs C_1}\xs C_B\]
such that
\[\sum_{B'\in\xs C_B}\mu(B')=p^{l_2+k_2}P_B(p)\quad
\text{for all $B\in\xs C_1=\xs A$}.\]
Define
\[\xs C'_1=\left\{\bigsqcup\xs C_B\colon B\in\xs C_1\right\},\]
and $\tilde f\colon\xs C_1\to\xs C'_1$ by $\tilde f(B)=\bigsqcup\xs C_B$.
It is easy to see that $\xs C'_1\prec\xs C'_2=\xs B_K(\xs A')$. We also
have that
\[p^{-l_1-k_1}\mu(B)=p^{-l_2-k_2}\mu(\tilde f(B))\quad
\text{for all $B\in\xs C_1$},\]
since $p^{-l_2-k_2}\mu(\tilde f(B))=p^{-l_2-k_2}\sum_{B'\in\xs C_B}\mu(B')
=P_B(p)=p^{-l_1-k_1}\mu(B)$.
Now suppose that the cylinder families $\xs C_1$, \dots, $\xs C_{k-1}$, $\xs
C'_1$, \dots, $\xs C'_{k-1}$ and the cylinder mapping $\tilde f$ have
been defined such that $\tilde f$ maps $\xs C_j$ onto $\xs C'_j$ for
$1\le j\le k-1$ and
\begin{equation}\label{eq:muC}
p^{-l_1-k_1}\mu(C)=p^{-l_2-k_2}\mu(\tilde f(C))\quad
\text{for all $C\in\bigcup_{j=1}^{k-1}\xs C_j$}.
\end{equation}
We shall define $\xs C_k$, $\xs C'_k$ and $\tilde f\colon\xs C_k\to\xs
C'_k$. Suppose without loss of generality that $k$ is even, then $\xs
C'_k=\xs B_{(k-1)K}(\xs A')$. We consider the suitable decomposition of $\xs
B_K(B_0)$ for each $B_0\in\xs C_{k-1}=\xs B_{(k-2)K}(\xs A)$.
By~\eqref{eq:muC},
\begin{multline*}
\sum_{\substack{B'\subset\tilde f(B_0)\\B'\in\xs C'_k}}P_{B'}(p)
=p^{-l_2-k_2-(k-1)K}\mu(\tilde f(B_0))\\
=p^{-l_1-k_1-(k-1)K}\mu(B_0)
=\sum_{B\in\xs B_K(B_0)}P_B(p).
\end{multline*}
And so the sequence $\{P_{B'}(p)\colon B'\subset\tilde f(B_0), B'\in\xs
C'_k\}$ is $(\xs B_K(B_0);\alpha_1,\dots,\alpha_\eta)$-suitable by
Definition~\ref{d:suit}, where
$\{\alpha_1,\dots,\alpha_\eta\}=\{P(p)\colon P\in\xc P^\circ\}$. By
Lemma~\ref{l:suitde}, for positive numbers
\[\Bigl\{p^{l_1+k_1+(k-1)K}P_{B'}(p)\colon B'\subset\tilde f(B_0),
B'\in\xs C'_k\Bigr\},\]
there is a suitable decomposition
\begin{equation}\label{eq:cylin}
\xs B_K(\xs B_K(B_0))=\xs B_{2K}(B_0)=
\bigcup_{\substack{B'\subset\tilde f(B_0)\\B'\in\xs C'_k}}\xs C_{B'}
\end{equation}
such that
\[\sum_{B\in\xs C_{B'}}\mu(B)=p^{l_1+k_1+(k-1)K}P_{B'}(p)\quad
\text{for all $B'\in\xs C'_k$ and $B'\subset\tilde f(B_0)$}.\]
Indeed, we obtain $\xs C_{B'}$ for all $B'\in\xs C'_k$ by the above argument
since for every $B'\in\xs C'_k$, there is a unique $B_0\in\xs C_{k-1}$ such
that $B'\subset\tilde f(B_0)$. Then we define
\[\xs C_k=\left\{\bigsqcup\xs C_{B'}\colon B'\in\xs C'_k\right\}\]
and $\tilde f\colon\xs C_k\to\xs C'_k$ by $\tilde f\left(\bigsqcup\xs
C_{B'}\right)=B'$. By~\eqref{eq:cylin}, we know that
\[\xs B_{(k-2)K}(\xs A)=\xs C_{k-1}\prec\xs C_k\prec\xs C_{k+1}
=\xs B_{kK}(\xs A).\]
We also have $p^{-l_1-k_1}\mu(C)=p^{-l_2-k_2}\mu(\tilde f(C))$ for all
$C\in\xs C_k$. If $k$ is odd, we can define $\xs C_k$, $\xs C'_k$ and
$\tilde f\colon\xs C_k\to\xs C'_k$ by a similar argument. Thus, by
induction on~$k$, we finally obtain all the cylinder families $\xs C_k$,
$\xs C'_k$ and the cylinder mapping~$\tilde f$.
To prove $F=\bigsqcup\xs A$ and $F'=\bigsqcup\xs A'$ have the same cylinder
structure, it remains to compute the constants $\varrho$ and $\iota$. Since
$F\subset A\in\xs B_{l_1}$ and $F$ contains at least one level-$(l_1+k_1)$
interior block, by Remark~\ref{r:diameter}, we have
\[\varpi^{-1}r^{l_1+k_1}|E|\le|F|\le\varpi r^{l_1}|E|.\]
Let $C,C_1,C_2\in\xs C_k$, where $C_1$ and $C_2$ are distinct. If $k$ is
odd, then $\xs C_k=\xs B_{(k-1)K}(\xs A)\subset\xs
B_{l_1+k_1+(k-1)K}^\circ$, by Remark~\ref{r:diameter} and the definition of
blocks (Definition~\ref{d:block}), we have
\begin{align*}
\varpi^{-1}r^{l_1+k_1+(k-1)K}|E|&\le|C|\le\varpi r^{l_1+k_1+(k-1)K}|E|;\\
r^{l_1+k_1+(k-1)K}|E|&\le\dist(C_1,C_2).
\end{align*}
If $k$ is even, then by the definition of $\xs C_k$, we know that $\xs
B_{(k-2)K}(\xs A)=\xs C_{k-1}\prec\xs C_k\prec\xs C_{k+1}=\xs B_{kK}(\xs
A)$. And so
\begin{align*}
\varpi^{-1}r^{l_1+k_1+kK}|E|&\le|C|\le\varpi r^{l_1+k_1+(k-2)K}|E|;\\
r^{l_1+k_1+kK}|E|&\le\dist(C_1,C_2).
\end{align*}
In summary, $F$ has the $(\varrho,\iota)$-cylinder structure for
$\varrho=r^K$ and $\iota=\varpi^2r^{-k_0-2K}$ (recall that
$k_0\ge\max(k_1,k_2)$). A similar argument shows that $F'$ also has the
$(\varrho,\iota)$-cylinder structure for $\varrho=r^K$ and
$\iota=\varpi^2r^{-k_0-2K}$. Then by the cylinder mapping~$\tilde f$, we
know that $F$ and $F'$ have the same $(\varrho,\iota)$-cylinder structure.
Therefore, by Lemma~\ref{l:cylin}, there is a bi-Lipschitz mapping~$f$ such
that
\[\varrho\iota^{-2}|x-y|/|F|\le|f(x)-f(y)|/|F'|\le
\varrho^{-1}\iota^2|x-y|/|F|\quad\text{for distinct $x,y\in F$}.\]
It is easy to check that
\[\varpi^{-2}r^{l_1-l_2+k_0}\le|F|/|F'|\le\varpi^2r^{l_1-l_2-k_0}.\]
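One way to check this: Remark~\ref{r:diameter} gives
$\varpi^{-1}r^{l_1+k_1}|E|\le|F|\le\varpi r^{l_1}|E|$ as above and, by the same
reasoning applied to~$\xs A'$, $\varpi^{-1}r^{l_2+k_2}|E|\le|F'|\le\varpi
r^{l_2}|E|$; since $0<r<1$ and $\max(k_1,k_2)\le k_0$, this yields
\[\frac{|F|}{|F'|}\le\varpi^2r^{l_1-l_2-k_2}\le\varpi^2r^{l_1-l_2-k_0}
\quad\text{and}\quad
\frac{|F|}{|F'|}\ge\varpi^{-2}r^{l_1-l_2+k_1}\ge\varpi^{-2}r^{l_1-l_2+k_0}.\]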
Therefore, we have
\[L^{-1}r^{-l_1}|x-y|\le r^{-l_2}|f(x)-f(y)|\le Lr^{-l_1}|x-y|
\quad\text{for}\ x,y\in F=\bigsqcup\xs A,\]
where $L=\varrho^{-1}\iota^2\varpi^2r^{-k_0}=\varpi^6r^{-3k_0-5K}$ depends
only on~$k_0$ since $\varpi$ and $K$ are all constants related to the
IFS~$\ms$.
\end{proof}
\begin{lem}\label{l:4blockslip'}
For each $k_0\ge1$, there exists a constant~$L$ depending only on~$k_0$ with
the following property. Let $\xs A$ and $\xs A'$ be two nonempty families
of interior blocks such that $\xs A$ has $(A;l_1,k_1)$-structure and $\xs
A'$ has $(A';l_2,k_2)$-structure, where $\max(k_1,k_2)\le k_0$. If all
interior blocks in~$\xs A\cup\xs A'$ have the same measure polynomial, then
there is a bi-Lipschitz mapping $f\colon\bigsqcup\xs A\to\bigsqcup\xs A'$
such that
\[L^{-1}r^{-l_1}|x-y|\le r^{-l_2}|f(x)-f(y)|\le Lr^{-l_1}|x-y|
\quad\text{for}\ x,y\in\bigsqcup\xs A.\]
\end{lem}
\begin{proof}
It suffices to prove the lemma in the case that $\xs A$ or $\xs A'$
consists of only one interior block. Indeed, if this is true, then for general
$\xs A$ and $\xs A'$ we pick $B\in\xs A$; by the assumption, there are a
constant $L$ and bi-Lipschitz mappings $f_1\colon B\to\bigsqcup\xs A$,
$f_2\colon B\to\bigsqcup\xs A'$ satisfying the condition in the lemma. Thus
$f_2\circ f_1^{-1}\colon\bigsqcup\xs A\to\bigsqcup\xs A'$ satisfies
\[L^{-2}r^{-l_1}|x-y|\le r^{-l_2}|f_2\circ f_1^{-1}(x)-f_2\circ f_1^{-1}(y)|
\le L^2r^{-l_1}|x-y|\quad\text{for $x,y\in\bigsqcup\xs A$}.\]
And so the lemma holds for the general case. Thus we need only to prove the
case that $\xs A=\{B_1\}$, $\xs A'=\{B'_1,\dots,B'_m\}$ and
$P_{B_1}=P_{B'_1}=\dots=P_{B'_m}=P_0$ for some $m>1$. (If $m=1$, this
follows from Lemma~\ref{l:4blockslip}.)
By Lemma~\ref{l:CB(k)}, there exists an integer~$\ell$ such that
\[\zeta^\circ(\ell)\ge p^{-k_0}\frac{\max_{P\in\xc P}P(p)}
{\min_{P\in\xc P}P(p)}.\]
Since
\[mp^{l_2+k_0}P_0(p)\le mp^{l_2+k_2}P_0(p)=\sum_{i=1}^m\mu(B'_i)
\le\mu(A')\le p^{l_2}P_{A'}(p),\]
we have
\[m\le p^{-k_0}\frac{\max_{P\in\xc P}P(p)}{\min_{P\in\xc P}P(p)}
\le\zeta^\circ(\ell).\]
This means that, for every $B\in\xs B$ and every $P\in\xc P^\circ$, $\xs
B_\ell(B)$ contains at least $m$ interior blocks whose measure polynomial
is~$P$. Notice that the constant $\ell$ depends on~$k_0$ rather than~$m$.
Let $F=B_1$ and $F'=\bigcup_{i=1}^m B'_i$. We shall show that $F$ and $F'$
have the same dense $\iota$-island structure. Then the lemma follows from
Lemma~\ref{l:djt}. For this, we need to define $\iota$-island families $\xs
D$, $\xs D'$, the island mapping~$\tilde f$ and bi-Lipschitz mappings $f_D$
for each~$D$.
By the property of~$\ell$, we have that both $\xs B_\ell(B_1)$ and $\xs
B_\ell(B'_1)$ contain at least $m$ interior blocks, say, $B^{(1)}_1$,
\dots, $B^{(1)}_m$ and $B'^{(1)}_1$, \dots, $B'^{(1)}_m$, respectively,
whose measure polynomial are all~$P_0$. By induction, suppose that
$B^{(j)}_1$, \dots, $B^{(j)}_m$ and $B'^{(j)}_1$, \dots, $B'^{(j)}_m$ have
been defined for $j=1,\dots,k-1$. We define $B^{(k)}_1$, \dots, $B^{(k)}_m$
and $B'^{(k)}_1$, \dots, $B'^{(k)}_m$ to be $m$ interior blocks in~$\xs
B_\ell(B^{(k-1)}_1)$ and $\xs B_\ell(B'^{(k-1)}_1)$, respectively, whose
measure polynomial are all~$P_0$. For convenience, we also write
\[B^{(0)}_1=B_1\quad\text{and}\quad
B'^{(0)}_i=B'_i\ \text{for $i=1,\dots,m$}.\]
For $k\ge1$, define
\begin{gather*}
\xs A^{(k)}_1=\{B^{(k)}_2,\dots,B^{(k)}_m\},\quad
\xs A^{(k)}_2=\xs B_\ell(B^{(k-1)}_1)\setminus
\{B^{(k)}_1,\dots,B^{(k)}_m\}, \\
\xs A'^{(k)}_1=\{B'^{(k-1)}_2,\dots,B'^{(k-1)}_m\},\quad
\xs A'^{(k)}_2=\xs B_\ell(B'^{(k-1)}_1)\setminus
\{B'^{(k)}_1,\dots,B'^{(k)}_m\},
\end{gather*}
and
\begin{gather*}
D^{(k)}_1=\bigsqcup\xs A^{(k)}_1,\quad D^{(k)}_2=\bigsqcup\xs A^{(k)}_2,
\quad \xs D=\bigcup_{k=1}^\infty\left\{D^{(k)}_1,D^{(k)}_2\right\},\\
D'^{(k)}_1=\bigsqcup\xs A'^{(k)}_1,\quad
D'^{(k)}_2=\bigsqcup\xs A'^{(k)}_2,\quad
\xs D'=\bigcup_{k=1}^\infty\left\{D'^{(k)}_1,D'^{(k)}_2\right\}.
\end{gather*}
Observe that there are $x$ and $x'$ such that
\[F\setminus\bigsqcup\xs D=\bigcap_{k=1}^\infty B^{(k)}_1=\{x\}
\quad\text{and}\quad F'\setminus\bigsqcup\xs D'=\bigcap_{k=1}^\infty
B'^{(k)}_1=\{x'\},\]
so $\bigsqcup\xs D$ and $\bigsqcup\xs D'$ are both dense in $F$ and $F'$,
respectively. For $k\ge1$, $\xs A^{(k)}_1,\xs A^{(k)}_2\subset\xs
B_\ell(B^{(k-1)}_1)\subset\xs B^\circ_{l_1+k_1+k\ell}$; $\xs
A'^{(k+1)}_1,\xs A'^{(k)}_2\subset\xs B_\ell(B'^{(k-1)}_1)\subset\xs
B^\circ_{l_2+k_2+k\ell}$ and $\xs A'^{(1)}_1\subset\xs
B^\circ_{k_2}(A')\subset\xs B^\circ_{l_2+k_2}$. By Remark~\ref{r:diameter},
for $k\ge1$
\begin{gather*}
\varpi^{-1}r^{l_1+k_1+k\ell}|E|\le|D^{(k)}_1|,|D^{(k)}_2|
\le\varpi r^{l_1+k_1+(k-1)\ell}|E|, \\
\varpi^{-1}r^{l_2+k_2+k\ell}|E|\le|D'^{(k+1)}_1|,|D'^{(k)}_2|
\le\varpi r^{l_2+k_2+(k-1)\ell}|E|,\\
\varpi^{-1}r^{l_2+k_2}|E|\le|D'^{(1)}_1|\le\varpi r^{l_2}|E|.
\end{gather*}
By the definition of blocks (Definition~\ref{d:block}), for $k\ge1$,
\begin{gather*}
r^{l_1+k_1+k\ell}|E|\le\dist(D^{(k)}_i,F\setminus D^{(k)}_i),
\quad i=1,2; \\
r^{l_2+k_2+(k-1)\ell}|E|\le\dist(D'^{(k)}_1,F'\setminus D'^{(k)}_1);\\
r^{l_2+k_2+k\ell}|E|\le\dist(D'^{(k)}_2,F'\setminus D'^{(k)}_2).
\end{gather*}
In summary, we see that both $F$ and $F'$ have dense $\iota$-island
structure (Definition~\ref{d:dnsilnd}) for $\iota=\varpi r^{-k_0-2\ell}$
(recall that $\max(k_1,k_2)\le k_0$).
Now define the island mapping $\tilde f\colon\xs D\to\xs D'$ by $\tilde
f(D^{(k)}_i)=D'^{(k)}_i$ for $k\ge1$ and $i=1,2$. To show that $F$ and $F'$
have the same dense $\iota$-island structure, we shall verify the
conditions of Definition~\ref{d:smdi}.
For Condition~(i), note that $D^{(k)}_i,D^{(k')}_j\subset B^{(k-1)}_1$ for
$1\le k\le k'$; $D'^{(k)}_i,D'^{(k')}_j\subset B'^{(k-2)}_1$ for $2\le k\le
k'$ and $D'^{(1)}_i,D'^{(k')}_j\subset A'$. Together with the definition of
blocks (Definition~\ref{d:block}), we have
\begin{gather*}
r^{l_1+k_1+k\ell}|E|\le\dist(D^{(k)}_i,D^{(k')}_j)
\le\varpi r^{l_1+k_1+(k-1)\ell}|E|
\quad\text{for $k\ge1$}, \\
r^{l_2+k_2+k\ell}|E|\le\dist(D'^{(k)}_i,D'^{(k')}_j)
\le\varpi r^{l_2+k_2+(k-2)\ell}|E|
\quad\text{for $k\ge2$},\\
r^{l_2+k_2+\ell}|E|\le\dist(D'^{(1)}_i,D'^{(k')}_j)
\le\varpi r^{l_2}|E|.
\end{gather*}
Since $F=B_1\in\xs B^\circ_{l_1+k_1}$ and $F'=\bigcup_{i=1}^m B'_i\subset
A'\in\xs B_{l_2}$, where $B'_i\in\xs B^\circ_{l_2+k_2}$, by
Remark~\ref{r:diameter},
\begin{gather*}
\varpi^{-1}r^{l_1+k_1}|E|\le|F|\le\varpi r^{l_1+k_1}|E|, \\
\varpi^{-1}r^{l_2+k_2}|E|\le|F'|\le\varpi r^{l_2}|E|.
\end{gather*}
By the definition of $\tilde f$, for distinct $D_1,D_2\in\xs D$,
\[L_1^{-1}\dist(D_1,D_2)/|F|\le\dist(\tilde f(D_1),\tilde f(D_2))/|F'|
\le L_1\dist(D_1,D_2)/|F|,\]
where $L_1=\varpi^3r^{-k_0-2\ell}$.
For Condition~(ii) of Definition~\ref{d:smdi}, we use
Lemma~\ref{l:4blockslip} to obtain~$f_D$ for each $D$. Observe that, for
$k\ge1$,
\begin{gather*}
\sum_{B\in\xs A^{(k)}_1}P_B(p)=\sum_{B\in\xs A'^{(k)}_1}
P_B(p)=(m-1)P_0(p), \\
\sum_{B\in\xs A^{(k)}_2}P_B(p)=\sum_{B\in\xs A'^{(k)}_2}
P_B(p)=(p^{-\ell}-m)P_0(p);
\end{gather*}
and $\xs A^{(k)}_1$, $\xs A^{(k)}_2$ have
$(B^{(k-1)}_1;l_1+k_1+(k-1)\ell,\ell)$-structure; $\xs A'^{(k+1)}_1$, $\xs
A'^{(k)}_2$ have $(B'^{(k-1)}_1;l_2+k_2+(k-1)\ell,\ell)$-structure for
$k\ge1$; $\xs A'^{(1)}_1$ has $(A';l_2,k_2)$-structure. By
Lemma~\ref{l:4blockslip}, there are $L_2>1$ and bi-Lipschitz mappings~$f_D$
of~$D$ onto~$\tilde f(D)$ for each $D\in\xs D$, such that
\[L_2^{-1}r^{-l_1}|x-y|\le r^{-l_2}|f_D(x)-f_D(y)|
\le L_2r^{-l_1}|x-y|\quad\text{for $x,y\in D$}.\]
Here $L_2$ depends on~$\ell$ and~$k_0$, so finally depends on~$k_0$. Note
that
\begin{equation}\label{eq:F/F'}
\varpi^{-2}r^{l_1-l_2+k_0}\le|F|/|F'|\le\varpi^2r^{l_1-l_2-k_0}.
\end{equation}
This means that the mappings $f_D$ satisfy Condition~(ii) of Definition~\ref{d:smdi}
with $L_3=L_2\varpi^2r^{-k_0}$. Therefore, the mappings $f_D$ and $\tilde
L=\max(L_1,L_3)$ satisfy the conditions of Definition~\ref{d:smdi}. It
follows that $F$ and $F'$ have the same dense $\iota$-island structure with
$\iota=\varpi r^{-k_0-2\ell}$.
Finally, by Lemma~\ref{l:djt}, there is a bi-Lipschitz mapping~$f$ of~$F$
onto~$F'$ such that
\begin{equation*}
L_4^{-1}\rho(x,y)/|F|\le\rho(f(x),f(y))/|F'|\le L_4\rho(x,y)/|F|
\quad\text{for $x,y\in F$}.
\end{equation*}
Here $L_4=(2\iota+1)\tilde L$ depends on~$k_0$. Together with
inequality~\eqref{eq:F/F'}, the lemma follows.
\end{proof}
\begin{proof}[Proof of Lemma~\ref{l:blockslip}]
By Lemma~\ref{l:CB(k)}, there is an integer~$\ell$, depending only on the
IFS~$\ms$, such that for all $B\in\xs B$ and all $P\in\xc P^\circ$, the
family $\xs B_\ell(B)$ contains at least one block whose measure polynomial
is~$P$.
Now for each $P\in\xc P^\circ$, write
\[\xs C_P=\left\{B\in\xs B_\ell(\xs A)\colon P_B=P\right\}
\quad\text{and}\quad
\xs D_P=\left\{B\in\xs B_\ell(\xs A')\colon P_B=P\right\}.\]
Then $\xs C_P,\xs D_P\ne\emptyset$ and
\[\bigsqcup\xs A=\bigcup_{P\in\xc P^\circ}\bigsqcup\xs C_P,\quad
\bigsqcup\xs A'=\bigcup_{P\in\xc P^\circ}\bigsqcup\xs D_P.\]
By Lemma~\ref{l:4blockslip'}, for each $P\in\xc P^\circ$, there is a
bi-Lipschitz mapping $f_P\colon\bigsqcup\xs C_P\to\bigsqcup\xs D_P$ such
that
\begin{equation}\label{eq:|x-y|fP}
L_1^{-1}r^{-l_1}|x-y|\le r^{-l_2}|f_P(x)-f_P(y)|\le L_1r^{-l_1}|x-y|
\quad\text{for $x,y\in\bigsqcup\xs C_P$},
\end{equation}
where the constant $L_1$ depends only on $k_0+\ell$, and hence only
on~$k_0$ and the IFS~$\ms$.
Let $f\colon\bigsqcup\xs A\to\bigsqcup\xs A'$ be the bijection such that
the restriction of~$f$ to $\bigsqcup\xs C_P$ is~$f_P$ for every $P\in\xc
P^\circ$. We will show that $f$ is the desired bi-Lipschitz mapping. Let
$x,y$ be two distinct points in~$\bigsqcup\xs A$, there are two cases to
consider.
\textit{Case~1}. $x,y\in\bigsqcup\xs C_P$ for some $P\in\xc P^\circ$. Then
$f$ and $x,y$ satisfy the inequality~\eqref{eq:|x-y|fP}.
\textit{Case~2}. $\{x,y\}\not\subset\bigsqcup\xs C_P$ for all $P\in\xc P^\circ$. In
this case, there are distinct $B_x,B_y\in\xs B_\ell(\xs A)$ such that $x\in
B_x$ and $y\in B_y$. By the definition of~$f$, there are distinct
$B'_x,B'_y\in\xs B_\ell(\xs A')$ such that $f(x)\in B'_x$ and $f(y)\in
B'_y$. Therefore, by Remark~\ref{r:diameter},
\begin{gather*}
r^{l_1+k_1+\ell}|E|\le\dist(B_x,B_y)\le|x-y|\le|A|\le r^{l_1}\varpi|E|, \\
r^{l_2+k_2+\ell}|E|\le\dist(B'_x,B'_y)\le|f(x)-f(y)|\le|A'|
\le r^{l_2}\varpi|E|.
\end{gather*}
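Spelling out Case~2: since $\max(k_1,k_2)\le k_0$ and $0<r<1$, these estimates give
\[r^{-l_2}|f(x)-f(y)|\ge r^{k_2+\ell}|E|\ge\varpi^{-1}r^{k_0+\ell}\,r^{-l_1}|x-y|
\quad\text{and}\quad
r^{-l_2}|f(x)-f(y)|\le\varpi|E|\le\varpi r^{-(k_0+\ell)}\,r^{-l_1}|x-y|.\]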
It follows from Case~1 and~2 that
\[L^{-1}r^{-l_1}|x-y|\le r^{-l_2}|f(x)-f(y)|\le Lr^{-l_1}|x-y|,\]
where the constant $L=\max(L_1,\varpi r^{-(k_0+\ell)})$ depends only on~$k_0$.
\end{proof}
\subsection{Proof of Proposition~\ref{p:BlipE}}\label{ssec:LipBE}
Let $F=E$ and $F'=B_0$. Suppose that $B_0$ is a level-$l$ interior block. We
shall show that $F$ and $F'$ have the same dense $\iota$-island structure.
Then Proposition~\ref{p:BlipE} follows from Lemma~\ref{l:djt}. For this, we
need to define $\iota$-island families $\xs D$, $\xs D'$, the island
mapping~$\tilde f$ and bi-Lipschitz mappings $f_D$ for each~$D$. The key
point behind our construction is that almost all blocks are interior blocks,
see Lemmas~\ref{l:C(k)}, \ref{l:CP(k)} and~\ref{l:CB(k)}.
By Lemma~\ref{l:CB(k)}, there exists an integer $\ell>0$ such that
\begin{equation}\label{eq:bd<in}
\zeta^\partial(\ell)<\min_{B\in\xs B}\card\xs B_\ell(B).
\end{equation}
Let $\xs A_0=\{E\}$ and $\xs A_k=\xs B^\partial_{k\ell}$ for $k\ge1$. We
first define an injection
\[\Gamma\colon\bigcup_{k=0}^\infty\xs A_k\to
\bigcup_{k=0}^\infty\xs B_{k\ell}(B_0)\]
such that
\begin{enumerate}[(a)]
\item $\Gamma$ maps $\xs A_k$ into $\xs B_{k\ell}(B_0)$ for all
$k\ge1$;
\item for $A_1\in\xs A_{k_1}$ and $A_2\in\xs A_{k_2}$, where $k_1\le
k_2$, we have $\Gamma(A_1)\supset\Gamma(A_2)$ if and only if
$A_1\supset A_2$.
\end{enumerate}
We do this by induction on~$k$. When $k=0$, define $\Gamma(E)=B_0$. Now
suppose that $\Gamma$ has been defined on~$\xs A_k$. By
Remark~\ref{r:inblock}(a), $\xs A_{k+1}=\bigcup_{A\in\xs A_k}\xs
B^\partial_\ell(A)$. So we need only to define $\Gamma$ on each $\xs
B^\partial_\ell(A)$. Suppose that $\card\xs B^\partial_\ell(A)>0$, otherwise
there is nothing more to do. By~\eqref{eq:bd<in}, we have
\[\card\xs B^\partial_\ell(A)<\card\xs B_\ell\bigl(\Gamma(A)\bigr)\quad
\text{for all $A\in\xs A_k$}.\]
So there is an injection $\Gamma$ on $\xs B^\partial_\ell(A)$ such that
$\Gamma(B)\in\xs B_\ell\bigl(\Gamma(A)\bigr)\subset\xs B_{(k+1)\ell}(B_0)$
for all $B\in\xs B^\partial_\ell(A)$. Thus, we have finished the definition
of~$\Gamma$ on~$\xs A_{k+1}$. It is easy to check Conditions~(a) and~(b).
Then for $A\in\bigcup_{k=0}^\infty\xs A_k$, define
\[\xs C_A=\xs B^\circ_\ell(A)\quad\text{and}\quad
\xs C'_A=\bigl\{B'\in\xs B_\ell\bigl(\Gamma(A)\bigr)\colon B'\ne\Gamma(B)\
\text{for all $B\in\xs B^\partial_\ell(A)$}\bigr\}.\]
By~\eqref{eq:bd<in}, we have $\xs C_A,\xs C'_A\ne\emptyset$ for all
$A\in\bigcup_{k=0}^\infty\xs A_k$. Let
\[D_A=\bigsqcup\xs C_A\quad\text{and}\quad D'_A=\bigsqcup\xs C'_A.\]
Define
\[\xs D=\biggl\{D_A\colon A\in\bigcup_{k=0}^\infty\xs A_k\biggr\}
\quad\text{and}\quad
\xs D'=\biggl\{D'_A\colon A\in\bigcup_{k=0}^\infty\xs A_k\biggr\}.\]
We shall show that both $F$ and $F'$ have the dense $\iota$-island structure
(Definition~\ref{d:dnsilnd}). First, it is easy to see that $\bigsqcup\xs D$
and $\bigsqcup\xs D'$ are both dense in $F=E$ and $F'=B_0$, respectively.
Second, for $D_A\in\xs D$ and $D'_A\in\xs D'$ with $A\in\xs A_k=\xs
B^\partial_{k\ell}$, the family $\xs C_A$ has $(A;k\ell,\ell)$-structure and
$\xs C'_A$ has $(\Gamma(A);k\ell+l,\ell)$-structure. So by
Remark~\ref{r:diameter},
\begin{gather*}
\varpi^{-1}r^{(k+1)\ell}|E|\le|D_A|\le\varpi r^{k\ell}|E|, \\
\varpi^{-1}r^{(k+1)\ell+l}|E|\le|D'_A|\le\varpi r^{k\ell+l}|E|.
\end{gather*}
By the definition of blocks (Definition~\ref{d:block}),
\begin{gather*}
\dist(D_A,F\setminus D_A)\ge r^{(k+1)\ell}|E|, \\
\dist(D'_A,F'\setminus D'_A)\ge r^{(k+1)\ell+l}|E|.
\end{gather*}
It follows that both $F$ and $F'$ have the dense $\iota$-island structure for
$\iota=\varpi r^{-\ell}$.
Now define the island mapping $\tilde f\colon\xs D\to\xs D'$ by $\tilde
f(D_A)=D'_A$ for $A\in\bigcup_{k=0}^\infty\xs A_k$. To show that $F$ and $F'$
have the same dense $\iota$-island structure, we shall verify the conditions
of Definition~\ref{d:smdi}.
For Condition~(i), we consider $D_{A_1}$ and $D_{A_2}$ for distinct
$A_1\in\xs A_{k_1},A_2\in\xs A_{k_2}$, where $k_1\le k_2$. There are two
cases to consider.
\textit{Case~1}. $A_1\supset A_2$. By the properties of~$\Gamma$, we have
$\Gamma(A_1)\in\xs B_{k_1\ell}(B_0)$ and $\Gamma(A_1)\supset\Gamma(A_2)$.
Recall that $D_A=\bigsqcup\xs C_A$, $\xs C_A=\xs B^\circ_\ell(A)$ and
$D'_A=\bigsqcup\xs C'_A$, $\xs C'_A\subset\xs B^\circ_\ell(\Gamma(A))$. We
have
\begin{gather*}
r^{(k_1+1)\ell}|E|\le\dist(D_{A_1},D_{A_2})\le|A_1|
\le\varpi r^{k_1\ell}|E|, \\
r^{(k_1+1)\ell+l}|E|\le\dist(D'_{A_1},D'_{A_2})\le|\Gamma(A_1)|
\le\varpi r^{k_1\ell+l}|E|.
\end{gather*}
\textit{Case~2}. $A_1\cap A_2=\emptyset$. Then there are $k\ge0$ and
$B_3\in\xs A_k$ and $B_1,B_2\in\xs A_{k+1}$ such that $A_1\cup A_2\subset
B_3$, $A_1\subset B_1$, $A_2\subset B_2$ and $B_1\ne B_2$. By the properties
of~$\Gamma$, we have $\Gamma(B_3)\in\xs B_{k\ell}(B_0)$,
$\Gamma(B_1),\Gamma(B_2)\in\xs B_{(k+1)\ell}(B_0)$,
$\Gamma(A_1)\cup\Gamma(A_2)\subset\Gamma(B_3)$,
$\Gamma(A_1)\subset\Gamma(B_1)$, $\Gamma(A_2)\subset\Gamma(B_2)$ and
$\Gamma(B_1)\ne\Gamma(B_2)$. And so
\begin{gather*}
r^{(k+1)\ell}|E|\le\dist(B_1,B_2)\le\dist(D_{A_1},D_{A_2})\le|B_3|
\le\varpi r^{k\ell}|E|, \\
r^{(k+1)\ell+l}|E|\le\dist(\Gamma(B_1),\Gamma(B_2))
\le\dist(D'_{A_1},D'_{A_2})\le|\Gamma(B_3)|\le\varpi r^{k\ell+l}|E|.
\end{gather*}
For the diameter of~$F=E$ and $F'=B_0$, recall that $B_0$ is a level-$l$
interior block. By Remark~\ref{r:diameter}, we have
\[|F|=|E|\quad\text{and}\quad\varpi^{-1}r^l|E|\le|F'|\le\varpi r^l|E|.\]
By the definition of $\tilde f$, for distinct $D_1,D_2\in\xs D$,
\[L_1^{-1}\dist(D_1,D_2)/|F|\le\dist(\tilde f(D_1),\tilde f(D_2))/|F'|
\le L_1\dist(D_1,D_2)/|F|,\]
where $L_1=\varpi^2r^{-\ell}$.
For Condition~(ii) of Definition~\ref{d:smdi}, we use Lemma~\ref{l:blockslip}
to obtain~$f_D$ for each~$D$. Observe that, for $A\in\xs A_k$ with $k\ge0$,
$\xs C_A$ has $(A;k\ell,\ell)$-structure and $\xs C'_A$ has
$(\Gamma(A);k\ell+l,\ell)$-structure. Recall that $D_A=\bigsqcup\xs C_A$ and
$D'_A=\bigsqcup\xs C'_A$. By Lemma~\ref{l:blockslip}, there are $L_2>1$ and
bi-Lipschitz mappings~$f_D$ of~$D$ onto~$\tilde f(D)$ for each $D\in\xs D$,
such that
\[L_2^{-1}|x-y|\le r^{-l}|f_D(x)-f_D(y)|\le L_2|x-y|
\quad\text{for $x,y\in D$}.\]
Here $L_2$ depends on~$\ell$. Note that
\begin{equation}\label{eq:F/F'2}
\varpi^{-1}r^{-l}\le|F|/|F'|\le\varpi r^{-l}.
\end{equation}
This means that the mappings $f_D$ satisfy Condition~(ii) of Definition~\ref{d:smdi}
with $L_3=L_2\varpi$. Therefore, the mappings $f_D$ and $\tilde L=\max(L_1,L_3)$ satisfy
the conditions of Definition~\ref{d:smdi}. It follows that $F$ and $F'$ have
the same dense $\iota$-island structure with $\iota=\varpi r^{-\ell}$.
Finally, by Lemma~\ref{l:djt}, we have $E=F\simeq F'=B_0$.
\section{The proof of Theorem~\ref{t:TOCElip}}\label{sec:proof}
This section is devoted to the proof of Theorem~\ref{t:TOCElip}. To this end,
we consider two IFSs $\ms,\mt\in\TOEC$. Throughout this section, we adopt the
following notational conventions: $r_\ms$, $I_\ms$, $p$, $\mu$, $\xc
P^\circ$, $\xs A^\circ$, $\xs A^\circ_k$ will denote the ratio root, the
ideal, the measure root, the natural measure, the set of measure polynomials
of interior blocks, the family of interior blocks and the family of level-$k$
interior blocks of~$\ms$, respectively; while $r_\mt$, $I_\mt$, $q$, $\nu$,
$\xc Q^\circ$, $\xs B^\circ$, $\xs B^\circ_k$ will denote the ratio root, the
ideal, the measure root, the natural measure, the set of measure polynomials
of interior blocks, the family of interior blocks and the family of level-$k$
interior blocks of~$\mt$, respectively. For $A\in\xs A^\circ_l$ and $B\in\xs
B^\circ_l$, we use $P_A$ and $Q_B$ to denote the corresponding measure
polynomials. We also write
\[\xs A_k(A)=\bigl\{A'\in\xs A_{l+k}\colon A'\subset A\bigr\}
\quad\text{and}\quad\xs B_k(B)=\bigl\{B'\in\xs B_{l+k}\colon
B'\subset B\bigr\}.\]
By Proposition~\ref{p:BlipE} and Lemma~\ref{l:blockslip},
Theorem~\ref{t:TOCElip} can be obtained from the following proposition.
\begin{prop}\label{p:BlipB}
Let $A_0\in\xs A^\circ$ and $B_0\in\xs B^\circ$ be two interior blocks of
$\ms$ and $\mt$, respectively. Then $A_0\simeq B_0$ if and only if
\begin{enumerate}[\upshape(i)]
\item $\hdim E_\ms=\hdim E_\mt$;
\item $\log r_\ms/\log r_\mt\in\Q$;
\item $I_\ms=a I_\mt$ for some $a\in\R$.
\end{enumerate}
\end{prop}
\subsection{Necessity}
This subsection is devoted to the proof of the necessary part of
Proposition~\ref{p:BlipB}. For this, we always assume that $A_0\in\xs
A^\circ$, $B_0\in\xs B^\circ$ and $A_0\simeq B_0$. Fix a bi-Lipschitz mapping
$f\colon A_0\to B_0$. The idea used in the proof of the necessary part is similar
to that given in~\cite{CooPi88,FalMa92}. In particular, the following two
lemmas are essentially the same as the corresponding results
in~\cite{CooPi88,FalMa92}; see also Lemmas~\ref{l:ML} and~\ref{l:fE}.
\begin{lem}[measure linear, \cite{CooPi88}]\label{l:ml}
There exists an interior block~$A_f\subset A_0$ such that the restriction
$f|_{A_f}\colon A_f\to f(A_f)$ is measure linear, i.e., for any Borel
subset $F\subset A_f$ with $\mu(F)>0$,
\[\frac{\mu(F)}{\nu(f(F))}=\frac{\mu(A_f)}{\nu(f(A_f))}.\]
\end{lem}
\begin{lem}[\cite{FalMa92}]\label{l:image}
There exists an integer~$K_f$ such that for each interior block $A\subset
A_0$, there are an interior block $B_A\subset B_0$ and a family $\xs B_A\subset\xs
B_{K_f}(B_A)$ satisfying $f(A)=\bigsqcup\xs B_A\subset B_A$.
\end{lem}
We omit the proofs of the above two lemmas, since the arguments are similar
to those used in~\cite{CooPi88,FalMa92}. We only remark that the proof of
Lemma~\ref{l:ml} requires the finiteness of measure polynomials
(Proposition~\ref{p:finiteCP}).
Now we turn to the proof of the necessity. Condition~(i) is obviously
necessary since $\hdim A_0=\hdim E_\ms$ and $\hdim B_0=\hdim E_\mt$.
For condition~(ii), let $\xs B_A$ be as in Lemma~\ref{l:image} for all
interior blocks $A\subset A_0$. By Proposition~\ref{p:finiteCP}, there are
only finitely many measure polynomials. It follows that the set
\[\biggl\{\sum_{B\in\xs B_A}Q_B(q)\colon
\text{$A\subset A_0$ is an interior block}\biggr\}\]
is finite, where $Q_B$ is the measure polynomial of~$B$. Together with
Lemma~\ref{l:CB(k)}, we know that there are two interior blocks $A_1,A_2$
with the same measure polynomial $P$ such that $A_2\subsetneq A_1\subsetneq
A_f$ and
\[\sum_{B\in\xs B_{A_1}}Q_B(q)=\sum_{B\in\xs B_{A_2}}Q_B(q),\]
where $A_f$ is as in Lemma~\ref{l:ml} and $\xs B_{A_1},\xs B_{A_2}$ are as in
Lemma~\ref{l:image}. Suppose that $A_1$ is of level~$k_1$, $A_2$ of
level~$k_2$ and $\xs B_{A_1}\subset\xs B^\circ_{l_1}$, $\xs B_{A_2}\subset\xs
B^\circ_{l_2}$, then by Remark~\ref{r:muB} and Lemma~\ref{l:ml},
\[\frac{p^{k_1}P(p)}{q^{l_1}\sum_{B\in\xs B_{A_1}}Q_B(q)}
=\frac{\mu(A_1)}{\nu(f(A_1))}=\frac{\mu(A_2)}{\nu(f(A_2))}
=\frac{p^{k_2}P(p)}{q^{l_2}\sum_{B\in\xs B_{A_2}}Q_B(q)}.
\]
This reduces to $p^{k_2-k_1}=q^{l_2-l_1}$. We have $\log r_\ms/\log
r_\mt\in\Q$ since $k_1\ne k_2$, $l_1\ne l_2$ and $p=r_\ms^s$, $q=r_\mt^s$,
where $s=\hdim E_\ms=\hdim E_\mt$.
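Explicitly, taking logarithms in $p^{k_2-k_1}=q^{l_2-l_1}$ and using
$p=r_\ms^s$, $q=r_\mt^s$ with $s>0$, one gets
\[(k_2-k_1)\log r_\ms=(l_2-l_1)\log r_\mt,
\quad\text{so}\quad
\frac{\log r_\ms}{\log r_\mt}=\frac{l_2-l_1}{k_2-k_1}\in\Q.\]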
For condition~(iii), let $A_f$ be as in Lemma~\ref{l:ml}, set
\[\frac{\mu(A_f)}{\nu(f(A_f))}=a,\]
we need to show that $I_\ms=a I_\mt$. By Lemma~\ref{l:idealP} and symmetry,
it suffices to prove that $p^lP(p)\in a I_\mt$ for all $l\ge0$ and all
$P\in\xc P^\circ$. By condition~(ii), we can assume that $p^m=q^n$ for two
positive integers $m$ and~$n$. It follows from Lemma~\ref{l:CB(k)} that there
are positive integer $k$ and interior block $A\in\xs A^\circ_{km+l}$ such
that $A\subset A_f$ and $P_A=P$. Thus
\[p^lP(p)=p^{-km}\mu(A)=a\cdot q^{-kn}\nu(f(A))\in a I_\mt\]
since $q^{-1}\in\Z[q]$ and $f(A)$ is an interior separated set.
\subsection{Sufficiency}
In this subsection, we will prove the sufficient part.
\begin{lem}\label{l:Is>0}
Let $\alpha\in I_\ms$ be a positive number. Then there exist an integer $l$
and a family $\xs C\subset\xs A^\circ_k$ for some $k\ge0$ such that
\[\alpha=p^l\sum_{B\in\xs C}\mu(B).\]
\end{lem}
\begin{proof}
By Lemma~\ref{l:idealP}, there exist integers $k_1$ and $\gamma_P\ge0$ such
that
\[\alpha=p^{k_1}\sum_{P\in\xc P^\circ}\gamma_PP(p).\]
By Lemma~\ref{l:CP(k)}, there exists an integer $k_2>k_1$ such that
\[\zeta_P(k_2)\ge\gamma_P\quad\text{for all $P\in\xc P^\circ$}.\]
So there exists $\xs C\subset\xs A^\circ_{k_2}$ such that
\[\sum_{B\in\xs C}\mu(B)=p^{k_2}\sum_{P\in\xc P^\circ}
\gamma_PP(p)=p^{k_2-k_1}\alpha.\]
The proof is completed by taking $l=k_1-k_2$.
\end{proof}
It follows from the conditions and Lemma~\ref{l:Is>0} that there exist $\xs
C\subset\xs A^\circ_l$ and $\xs C'\subset\xs B^\circ_{l'}$ for some positive
integers $l,l'$ such that $p^l=q^{l'}$ and
\begin{equation}\label{eq:meq}
\sum_{A\in\xs C}\mu(A)=a\sum_{B\in\xs C'}\nu(B).
\end{equation}
We will show that $\bigsqcup\xs C\simeq\bigsqcup\xs C'$; this conclusion,
together with Lemma~\ref{l:blockslip}, implies $A_0\simeq B_0$. The proof of
$\bigsqcup\xs C\simeq\bigsqcup\xs C'$ is similar to the proof of
Lemma~\ref{l:4blockslip}.
Let $F=\bigsqcup\xs C$ and $F'=\bigsqcup\xs C'$. We shall show that $F$ and
$F'$ have the same cylinder structure; then the desired conclusion follows from
Lemma~\ref{l:cylin}. For this, we make use of Lemma~\ref{l:suitde} to define
a cylinder mapping satisfying~\eqref{eq:ML}. In Lemma~\ref{l:suitde}, take
the IFS to be~$\ms$ and the constants $\{\alpha_i\}$ to be $\{a Q(q)\colon
Q\in\xc Q^\circ\}$, suppose that the corresponding integer is~$K$. Use
Lemma~\ref{l:suitde} again by taking the IFS to be~$\mt$ and the constants
$\{\alpha_i\}$ to be $\{a^{-1}P(p)\colon P\in\xc P^\circ\}$, suppose that the
corresponding integer is~$K'$. By Remark~\ref{r:suit} and $\log r_\ms/\log
r_\mt\in\Q$, we can further require that $p^K=q^{K'}$.
The cylinder families $\xs C_k$ and $\xs C'_k$ are defined as follows. Define
$\xs C_1=\xs C$ and
\[\begin{cases}
\xs C_k=\xs A_{(k-1)K}(\xs C)&\text{for $k$ odd};\\
\xs C'_k=\xs B_{(k-1)K'}(\xs C')&\text{for $k$ even}.
\end{cases}\]
\begin{center}
\begin{tabular}{*8c}
\hline
& $k=1$ & $k=2$ & $k=3$ & $k=4$ & $k=5$ & $k=6$ & \dots
\rule[-1ex]{0pt}{3.5ex} \\
\hline
$\xs C_k$: & $\xs C$ & & $\xs A_{2K}(\xs C)$ &
& $\xs A_{4K}(\xs C)$ & & \dots \rule[-1ex]{0pt}{3.5ex} \\
$\xs C'_k$: & & $\xs B_{K'}(\xs C')$ & & $\xs B_{3K'}(\xs C')$
& & $\xs B_{5K'}(\xs C')$ & \dots \rule[-1ex]{0pt}{3.5ex} \\
\hline
\end{tabular}
\end{center}
It remains to define $\xs C_k$ for $k$ even, $\xs C'_k$ for $k$ odd and
the cylinder mapping~$\tilde f$.
We begin with the definition of~$\xs C'_1$ by making use of
Lemma~\ref{l:suitde}. By~\eqref{eq:meq} and $p^l=q^{l'}$,
\[a^{-1}\sum_{A\in\xs C_1}P_A(p)=a^{-1}\sum_{A\in\xs C}P_A(p)
=\sum_{B\in\xs C'}Q_B(q).\]
Since $P_A\in\xc P^\circ$ for all $A\in\xs C_1=\xs C$, we know that the
sequence $\{a^{-1}P_A(p)\colon A\in\xs C_1\}$ is $(\xs
C';\alpha_1,\dots,\alpha_\eta)$-suitable by Definition~\ref{d:suit}, where
\[\{\alpha_1,\dots,\alpha_\eta\}=\{a^{-1}P(p)\colon P\in\xc P^\circ\}.\]
Since $\xs C'\subset\xs B_{l'}$, by Lemma~\ref{l:suitde}, for positive
numbers $\{q^{l'}a^{-1}P_A(p)\colon A\in\xs C_1\}$, there is a suitable
decomposition
\[\xs B_{K'}(\xs C')=\bigcup_{A\in\xs C_1}\xs C'_A\]
such that
\[\sum_{B\in\xs C'_A}\nu(B)=q^{l'}a^{-1}P_A(p)\quad
\text{for all $A\in\xs C_1=\xs C$}.\]
Define
\[\xs C'_1=\left\{\bigsqcup\xs C'_A\colon A\in\xs C_1\right\},\]
and $\tilde f\colon\xs C_1\to\xs C'_1$ by $\tilde f(A)=\bigsqcup\xs C'_A$. It
is easy to see that $\xs C'_1\prec\xs C'_2=\xs B_{K'}(\xs C')$. We also have
that
\[\mu(A)=a\nu(\tilde f(A))\quad\text{for all $A\in\xs C_1$},\]
since $a\nu(\tilde f(A))=a\sum_{B\in\xs C'_A}\nu(B)=q^{l'}P_A(p)
=p^lP_A(p)=\mu(A)$.
Now suppose that the cylinder families $\xs C_1$, \dots, $\xs C_{k-1}$, $\xs
C'_1$, \dots, $\xs C'_{k-1}$ and the cylinder mapping $\tilde f$ have been
defined such that $\tilde f$ maps $\xs C_j$ onto $\xs C'_j$ for $1\le j\le
k-1$ and
\begin{equation}\label{eq:muCanu}
\mu(C)=a\nu(\tilde f(C))\quad
\text{for all $C\in\bigcup_{j=1}^{k-1}\xs C_j$}.
\end{equation}
We shall define $\xs C_k$, $\xs C'_k$ and $\tilde f\colon\xs C_k\to\xs C'_k$.
Suppose without loss of generality that $k$ is even, then $\xs C'_k=\xs
B_{(k-1)K'}(\xs C')$. We consider the suitable decomposition of $\xs
A_K(A_0)$ for each $A_0\in\xs C_{k-1}=\xs A_{(k-2)K}(\xs C)$. Recall that
$p^l=q^{l'}$ and $p^K=q^{K'}$, by~\eqref{eq:muCanu},
\[\sum_{\substack{B\subset\tilde f(A_0)\\B\in\xs C'_k}}aQ_B(q)
=aq^{-l'-(k-1)K'}\nu(\tilde f(A_0))=p^{-l-(k-1)K}\mu(A_0)
=\sum_{A\in\xs A_K(A_0)}P_A(p).\]
And so the sequence $\{aQ_B(q)\colon B\subset\tilde f(A_0),B\in\xs C'_k\}$
is $(\xs A_K(A_0);\alpha_1,\dots,\alpha_\eta)$-suitable by
Definition~\ref{d:suit}, where $\{\alpha_1,\dots,\alpha_\eta\}=\{a Q(q)\colon
Q\in\xc Q^\circ\}$. By Lemma~\ref{l:suitde}, for positive numbers
\[\Bigl\{p^{l+(k-1)K}aQ_B(q)\colon
B\subset\tilde f(A_0),B\in\xs C'_k\Bigr\},\]
there is a suitable decomposition
\begin{equation}\label{eq:finer}
\xs A_K(\xs A_K(A_0))=\xs A_{2K}(A_0)=
\bigcup_{\substack{B\subset\tilde f(A_0)\\B\in\xs C'_k}}\xs C_B
\end{equation}
such that
\[\sum_{A\in\xs C_B}\mu(A)=p^{l+(k-1)K}aQ_B(q)\quad
\text{for all $B\in\xs C'_k$ and $B\subset\tilde f(A_0)$}.\]
Indeed, we obtain $\xs C_B$ for all $B\in\xs C'_k$ by the above argument since
for every $B\in\xs C'_k$, there is a unique $A_0\in\xs C_{k-1}$ such that
$B\subset\tilde f(A_0)$. Then we define
\[\xs C_k=\left\{\bigsqcup\xs C_B\colon B\in\xs C'_k\right\}\]
and $\tilde f\colon\xs C_k\to\xs C'_k$ by $\tilde f\left(\bigsqcup\xs
C_B\right)=B$. By~\eqref{eq:finer}, we know that
\[\xs A_{(k-2)K}(\xs C)=\xs C_{k-1}\prec\xs C_k\prec\xs C_{k+1}
=\xs A_{kK}(\xs C).\]
We also have $\mu(C)=a\nu(\tilde f(C))$ for all $C\in\xs C_k$. If $k$ is odd,
we can define $\xs C_k$, $\xs C'_k$ and $\tilde f\colon\xs C_k\to\xs C'_k$ by
a similar argument. Thus, by induction on~$k$, we finally obtain all the
cylinder families $\xs C_k$, $\xs C'_k$ and the cylinder mapping~$\tilde f$.
To prove $F=\bigsqcup\xs C$ and $F'=\bigsqcup\xs C'$ have the same cylinder
structure, it remains to compute the constants $\varrho$ and $\iota$. Since
$\xs C$ contains at least one level-$l$ interior block, by
Remark~\ref{r:diameter}, we have
\[\varpi_\ms^{-1}r_\ms^l|E_\ms|\le|F|\le|E_\ms|.\]
Let $C,C_1,C_2\in\xs C_k$, where $C_1$ and $C_2$ are distinct. If $k$ is odd,
then $\xs C_k=\xs A_{(k-1)K}(\xs C)\subset\xs A_{l+(k-1)K}^\circ$, by
Remark~\ref{r:diameter} and the definition of blocks
(Definition~\ref{d:block}), we have
\begin{align*}
\varpi_\ms^{-1}r_\ms^{l+(k-1)K}|E_\ms|&\le|C|
\le\varpi_\ms r_\ms^{l+(k-1)K}|E_\ms|;\\
r_\ms^{l+(k-1)K}|E_\ms|&\le\dist(C_1,C_2).
\end{align*}
If $k$ is even, then by the definition of $\xs C_k$, we know that $\xs
A_{(k-2)K}(\xs C)=\xs C_{k-1}\prec\xs C_k\prec\xs C_{k+1}=\xs A_{kK}(\xs C)$.
And so
\begin{align*}
\varpi_\ms^{-1}r_\ms^{l+kK}|E_\ms|&\le|C|
\le\varpi_\ms r_\ms^{l+(k-2)K}|E_\ms|;\\
r_\ms^{l+kK}|E_\ms|&\le\dist(C_1,C_2).
\end{align*}
In summary, $F$ has the $(\varrho_1,\iota_1)$-cylinder structure for
$\varrho_1=r_\ms^K$ and $\iota_1=\varpi_\ms^2r_\ms^{-l-2K}$. A similar
argument shows that $F'$ also has the $(\varrho_2,\iota_2)$-cylinder
structure for $\varrho_2=r_\mt^{K'}$ and
$\iota_2=\varpi_\mt^2r_\mt^{-l'-2K'}$. Recall that $r_\ms^K=r_\mt^{K'}$. Then
by the cylinder mapping~$\tilde f$, we know that $F$ and $F'$ have the same
$(\varrho,\iota)$-cylinder structure for $\varrho=\varrho_1=\varrho_2$ and
$\iota=\max(\iota_1,\iota_2)$.
Finally, by Lemma~\ref{l:cylin}, we have $\bigsqcup\xs C=F\simeq
F'=\bigsqcup\xs C'$.
\section{The Non-Commensurable Case}\label{sec:NC}
\subsection{Proof of Theorem~\ref{t:Z+}}
Let $\ms=\{S_1,S_2,\dots,S_N\}$ with ratios $r_1,r_2,\dots,r_N$ and
$\mt=\{T_1,T_2,\dots,T_M\}$ with ratios $t_1,t_2,\dots,t_M$ be two IFSs
satisfying the SSC. For convenience, write $E=E_\ms$ and $F=E_\mt$. We also
use the following notation:
\begin{alignat*}{2}
E_{\bm i}&=S_{i_1}\circ\dots\circ S_{i_n}(E),\qquad &
r_{\bm i}&=r_{i_1}r_{i_2}\cdots r_{i_n}, \\
F_{\bm j}&=T_{j_1}\circ\dots\circ T_{j_m}(F),\qquad &
t_{\bm j}&=t_{j_1}t_{j_2}\cdots t_{j_m},
\end{alignat*}
where $\bm i=i_1i_2\dots i_n\in\{1,\dots,N\}^n$ and $\bm j=j_1j_2\dots
j_m\in\{1,\dots, M\}^m$.
Suppose that $E\simeq F$ and $\hdim E=\hdim F=s$. Let $f$ be a bi-Lipschitz
mapping from $E$ onto $F$. We need two known lemmas.
\begin{lem}[measure linear, \cite{CooPi88}]\label{l:ML}
There is an $E_{\xb i}$ such that $f|_{E_{\xb i}}$ is measure linear, i.e.,
for all Borel subsets $A\subset E_{\xb i}$ with $\xc H^s(A)>0$, we have
\[\frac{\xc H^s(A)}{\xc H^s(f(A))}=\frac{\xc H^s(E_{\xb i})}
{\xc H^s(f(E_{\xb i}))}.\]
\end{lem}
\begin{lem}[\cite{FalMa92}]\label{l:fE}
There exists a positive integer~$K$ dependent on~$f$ such that for each
word $\bm i$ of finite length, there exist a subset
$\Lambda\subset\{1,\dots,M\}^K$ and a word $\bm j$ of finite length such
that
\[f(E_{\bm i})=\bigcup_{\bm j^*\in\Lambda}F_{\bm{jj}^*}.\]
\end{lem}
\begin{proof}[Proof of Theorem~\ref{t:Z+}]
By symmetry, it suffices to prove that $t_j^s\in\Z^+[r_1^s,\dots,r_N^s]$
for each~$j$. Let $E_{\xb i}$ be as in Lemma~\ref{l:ML}. By
Lemma~\ref{l:fE}, we have
\[f(E_{\xb i})=\bigcup_{\bm j^*\in\Lambda}F_{\xb j\bm j^*}
\quad\text{for some $\Lambda\subset\{1,\dots,M\}^K$}.\]
For each $j\in\{1,\dots,M\}$, there is a set $\Lambda_j$ consisting of finitely
many words of finite length such that
\[f^{-1}\biggl(\bigcup_{\bm j^*\in\Lambda}F_{\xb j\bm j^*j}\biggr)=
\bigcup_{\bm i^*\in\Lambda_j}E_{\xb i\bm i^*}.\]
Applying Lemma~\ref{l:ML} with $A=\bigcup_{\bm i^*\in\Lambda_j}E_{\xb i\bm
i^*}$, we have
\[\frac{\xc H^s(E_{\xb i})}{\xc H^s(f(E_{\xb i}))}=
\frac{\xc H^s(A)}{\xc H^s(f(A))}=\frac{\xc H^s
\bigl(\bigcup_{\bm i^*\in\Lambda_j}E_{\xb i\bm i^*}\bigr)}
{\xc H^s\bigl(\bigcup_{\bm j^*\in\Lambda}F_{\xb j\bm j^*j}\bigr)}
=\frac{\xc H^s(E_{\xb i})\cdot\sum_{\bm i^*\in\Lambda_j}
r_{\bm i^*}^s}{\xc H^s(f(E_{\xb i}))\cdot t_j^s}.
\]
This means $t_j^s=\sum_{\bm i^*\in\Lambda_j}r_{\bm i^*}^s$, and so
$t_j^s\in\Z^+[r_1^s,\dots,r_N^s]$.
\end{proof}
\subsection{Proof of Theorem~\ref{t:sublip}}
With notation as in the proof of Theorem~\ref{t:Z+}, we need a lemma
obtained by Rao, Ruan and Wang~\cite{RaRuW12}, which is a corollary of
Lemmas~\ref{l:ML} and~\ref{l:fE}. Fix a bi-Lipschitz mapping~$f$ of~$E$
onto~$F$. Let $E_{\xb i}$ be as in Lemma~\ref{l:ML}. According to
Lemma~\ref{l:fE}, for each word~$\bm i$ of finite length, there are a subset
$\Lambda\subset\{1,\dots,M\}^K$ and a word~$\bm j$ of finite length such that
\begin{equation}\label{eq:fEii}
f(E_{\xb i\bm i})=\bigcup_{\bm j^*\in\Lambda}F_{\bm j\bm j^*}.
\end{equation}
\begin{lem}[\cite{RaRuW12}]\label{l:Mfnt}
The set
$\bigl\{\xc H^s(E_{\xb i\bm i})/\xc H^s(F_{\bm j})\colon
\text{$\bm i$ and $\bm j$ satisfy~\eqref{eq:fEii}}\bigr\}$
is finite.
\end{lem}
Let $G\subset(0,1)$ be a multiplicative semigroup. For $\bm i=i_1i_2\dots
i_n\in\{1,\dots,N\}^n$ and $\bm j=j_1j_2\dots j_m\in\{1,\dots,M\}^m$, define
\[\card_G\bm i=\card\{k\colon S_{i_k}\notin\ms^G\}\quad\text{and}\quad
\card_G\bm j=\card\{k\colon T_{j_k}\notin\mt^G\}.\]
As a corollary of Lemma~\ref{l:Mfnt}, we have
\begin{lem}\label{l:cardG}
$\sup\bigl\{\card_G\bm j\colon\text{$\bm i$ and $\bm j$
satisfy~\eqref{eq:fEii} and $\card_G\bm i=0$}\bigr\}<\infty$.
\end{lem}
\begin{proof}
Write $\card_l\bm j=\card\{k\colon j_k=l\}$ for $1\le l\le M$, $\card_l\bm
i=\card\{k\colon i_k=l\}$ for $1\le l\le N$. If this lemma is not true, we
can find a sequence $(\bm i_k,\bm j_k)_{k\ge1}$ such that for all $k\ge1$,
\begin{itemize}
\item $\bm i_k$ and $\bm j_k$ satisfy~\eqref{eq:fEii} and $\card_G\bm
i_k=0$;
\item $\card_G\bm j_k<\card_G\bm j_{k+1}$.
\end{itemize}
If $\sup_k\card_1\bm i_k=\infty$, by choosing a subsequence, we can assume
that $\card_1\bm i_k<\card_1\bm i_{k+1}$; otherwise, $\sup_k\card_1\bm
i_k<\infty$, by choosing a subsequence, we can assume that $\card_1\bm i_k$
is equal to a constant for all~$k$. In both cases, we can require that
$\card_1\bm i_k\le\card_1\bm i_{k+1}$. Repeating the same argument, we can
further require that for all $k\ge1$,
\begin{itemize}
\item $\card_l\bm i_k\le\card_l\bm i_{k+1}$ for $1\le l\le N$.
\item $\card_l\bm j_k\le\card_l\bm j_{k+1}$ for $1\le l\le M$.
\end{itemize}
We shall show that
\[\frac{\xc H^s(E_{\xb i\bm i_a})}{\xc H^s(F_{\bm j_a})}\ne
\frac{\xc H^s(E_{\xb i\bm i_b})}{\xc H^s(F_{\bm j_b})}\quad \text{whenever
$a\ne b$}.\] This contradicts Lemma~\ref{l:Mfnt}, and so the lemma
follows. To verify the inequality, suppose $a<b$, we have
\[\left(\frac{\xc H^s(E_{\xb i\bm i_a})}{\xc H^s(F_{\bm j_a})}\biggm/
\frac{\xc H^s(E_{\xb i\bm i_b})}{\xc H^s(F_{\bm j_b})}\right)^{1/s}=
\frac{t_{\bm j_b}/t_{\bm j_a}}{r_{\bm i_b}/r_{\bm i_a}}=
\frac{t_1^{\beta_1}t_2^{\beta_2}\cdots t_M^{\beta_M}}
{r_1^{\alpha_1}r_2^{\alpha_2}\cdots r_N^{\alpha_N}}:=\frac\varphi\phi,\]
where $\alpha_l=\card_l\bm i_b-\card_l\bm i_a\ge0$ ($1\le l\le N$) and
$\beta_l=\card_l\bm j_b-\card_l\bm j_a\ge0$ ($1\le l\le M$). Suppose that
$T_1,\dots,T_\ell\notin\mt^G$ and $T_{\ell+1},\dots,T_M\in\mt^G$, then
\[\card_G\bm j=\card_1\bm j+\card_2\bm j+\dots+\card_\ell\bm j.\]
Since
\[\beta_1+\dots+\beta_\ell=\sum_{l=1}^\ell(\card_l\bm j_b-\card_l\bm j_a)
=\card_G\bm j_b-\card_G\bm j_a>0,\]
we may assume that $\beta_1>0$. Then $\varphi/t_1\in\sgp\mt$. It follows
from $\card_G\bm i_a=\card_G\bm i_b=0$ that there exists a $g\in\sgp\ms$
such that $\phi g\in G$. Since $\sgp\ms\sim\sgp\mt$, there exists a
positive integer~$u$ such that $g^u\in\sgp\mt$. Therefore,
\[\varphi^ug^u=t_1\cdot\bigl(t_1^{u-1}(\varphi/t_1)^ug^u\bigr)\in
t_1\cdot\sgp\mt.\]
Notice that $(t_1\cdot\sgp\mt)\cap G=\emptyset$ since $T_1\notin\mt^G$.
Therefore, $\varphi^ug^u\notin G$. Together with $\phi^ug^u\in G$, we have
$\phi\ne\varphi$. The desired inequality follows.
\end{proof}
To prove Theorem~\ref{t:sublip}, we also need the following theorem obtained
independently by Llorente and Mattila~\cite{LloMa10} and by Deng,
Wen~et~al.~\cite{DeWXX11}.
\begin{thm*}[\cite{DeWXX11,LloMa10}]
Let $E$ and $F$ be two self-similar sets satisfying the SSC. Then $E\simeq
F$ if and only if there exist bi-Lipschitz mappings $f_1$ and $f_2$ such
that $f_1\colon E\to f_1(E)\subset F$ and $f_2\colon F\to f_2(F)\subset
E$.
\end{thm*}
\begin{proof}[Proof of Theorem~\ref{t:sublip}]
First note that we have $\ms^G=\ms$ and $\mt^G=\mt$ when $G=(0,1)$. So we
only need to show that $\ms\simeq\mt$ implies $\ms^G\simeq\mt^G$. Now fix a
multiplicative sub-semigroup $G\subset(0,1)$. By the above theorem and
symmetry, it remains to find a bi-Lipschitz mapping $f_1$ such that
$f_1\colon E_{\ms^G}\to f_1(E_{\ms^G})\subset E_{\mt^G}$.
According to Lemma~\ref{l:cardG}, we can find $\bm i_0,\bm j_0$
satisfying~\eqref{eq:fEii} and $\card_G\bm i_0=0$ such that
\[\card_G\bm j_0=\sup\bigl\{\card_G\bm j\colon\text{$\bm i$ and $\bm j$
satisfy~\eqref{eq:fEii} and $\card_G\bm i=0$}\bigr\}<\infty.\]
We shall show that $f\circ S_{\xb i\bm i_0}(E_{\ms^G})\subset T_{\bm
j_0}(E_{\mt^G})$, where $f$ is the fixed bi-Lipschitz mapping of~$E$
onto~$F$. Then taking $f_1=T_{\bm j_0}^{-1}\circ f\circ S_{\xb i\bm i_0}$, we
have $f_1(E_{\ms^G})\subset E_{\mt^G}$ and the proof is complete.
Let $x\in E_{\ms^G}$ and $\bm i_k$ be the unique word of length~$k$
satisfying $x\in S_{\bm i_k}(E_{\ms^G})$ for each $k\ge1$; then $\card_G\bm
i_k=0$. For each $k\ge1$, let $\bm j_k$ satisfy
\[f(E_{\xb i\bm i_0\bm i_k})=\bigcup_{\bm j^*\in\Lambda_k}F_{\bm j_k\bm j^*},
\quad\text{where $\Lambda_k\subset\{1,\dots,M\}^K$.}\]
Note that
\[f(E_{\xb i\bm i_0})=\bigcup_{\bm j^*\in\Lambda}F_{\bm j_0\bm j^*}
\quad\text{for some $\Lambda\subset\{1,\dots,M\}^K$}\]
since $\bm i_0$ and $\bm j_0$ satisfy~\eqref{eq:fEii}. So we can write $\bm
j_k=\bm j_0\bm j'_k$ for each $k\ge1$. Since $\card_G\bm i_0\bm i_k=0$ and
$\card_G\bm j_0$ is maximal, we have $\card_G\bm j'_k=0$ for each $k\ge1$.
This means $f\circ S_{\xb i\bm i_0}(x)\in T_{\bm j_0}(E_{\mt^G})$, and so
$f\circ S_{\xb i\bm i_0}(E_{\ms^G})\subset T_{\bm j_0}(E_{\mt^G})$.
\end{proof}
\end{document} |
\begin{document}
\author{R. Vilela Mendes \\
Grupo de F\'\i sica-Matem\'atica\\
Complexo II, Universidade de Lisboa \\
Av. Gama Pinto, 2, 1699 Lisboa Codex Portugal\\
e-mail: [email protected]}
\title{Saddle scars: Existence and applications}
\date{}
\maketitle
\begin{abstract}
A quantum scar is a wave function which displays a high intensity in the
region of a classical unstable periodic orbit. Saddle scars are states
related to the unstable harmonic motions along the stable manifold of a
saddle point of the potential. Using a semiclassical method it is shown
that, independently of the overall structure of the potential, the local
dynamics of the saddle point is sufficient to ensure the general existence
of this type of scar, and their factorized structure is obtained.
Potentially useful situations are identified, where these states appear
(directly or in disguise) and might be used for quantum control purposes.
\end{abstract}
\section{Introduction}
Until the early eighties it was widely believed that, for systems with an
ergodic classical motion, the squared eigenfunctions must coincide, in the
semiclassical limit, with the projection of the microcanonical phase space
measure. This idea found solid ground in the mathematical results of
Shnirelman\cite{Shnirelman}, Zelditch\cite{Zelditch} and Colin de Verdi\`ere
\cite{Colin}. On close scrutiny however, what these results state is that,
for a quantum system that is classically ergodic, there is an eigenvalue
sequence of density one such that the corresponding quantum densities $
\left| \psi (x)\right| ^2$ converge weakly to the Liouville measure.
Therefore the observation of states that do not fit these expectations does
not contradict the mathematical results. For one thing the convergence may
be very slow and, on the other hand, nothing forbids the existence of other
subsequences converging to measures different from the Liouville measure.
In fact wave functions were found which are concentrated near the classical
unstable periodic orbits. When this happens one says that {\it the quantum
state is scarred by the unstable periodic orbit} or that one has a {\it
quantum scar}. Such states were first observed in numerical
simulations\cite{Taylor} \cite{Heller1} \cite{Heller2} and, more recently,
experimental evidence was found\cite{Wilkinson} in a semiconductor
quantum-well tunneling experiment.
The first theory of scars was proposed by Heller\cite{Heller1}; other
theoretical formulations followed, developed by several authors\cite
{Bogomolny} \cite{Berry} \cite{Feingold1} \cite{Feingold2}. Heller's theory
studies the overlap integral
\begin{equation}
\label{1.1}C(t)=\left\langle \Psi (t,x)\mid \Psi (0,x)\right\rangle
\end{equation}
for a propagating wave packet which at time zero has a Gaussian shape and
initial conditions $(p_0,x_0)$ corresponding to an unstable periodic orbit.
Expanding $\Psi (0,x)$ in energy eigenstates
\begin{equation}
\label{1.2}\Psi (0,x)=\sum_nc_n\Psi _n(x)
\end{equation}
one sees that the Fourier transform $S(E)$ of the overlap $C(t)$ is the
spectral density weighted by the probabilities $\left| c_n\right| ^2$.
\begin{equation}
\label{1.3}S(E)=\sum_n\left| c_n\right| ^2\delta (E-E_n)
\end{equation}
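Indeed, writing $\Psi (t,x)=\sum_nc_ne^{-iE_nt/\hbar }\Psi _n(x)$ one finds,
up to the sign convention adopted for the time evolution,
\[C(t)=\sum_n\left| c_n\right| ^2e^{iE_nt/\hbar }\]
and the Fourier transform of this sum of phases is, up to normalization,
precisely the weighted spectral density (\ref{1.3}).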
Now, if the period $\tau $ of the classical periodic orbit and the largest
positive Lyapunov exponent $\lambda $ are such that $e^{-\tau \lambda /2}$
is not very small, the overlap $C(t)$ will display peaks at times $n\tau $.
As the wave packet spreads, the amplitude of the peaks decreases after each
orbit traversal at the rate $e^{-\tau \lambda /2}$. The Fourier transform of
$C(t)$ will therefore have peaks of width $\lambda $ with spacing $\omega =
\frac{2\pi }\tau $. Referring to (\ref{1.3}) one concludes that only the
eigenstates that lie under the peaks contribute to the expansion of the wave
packet. Since the wave packet has an enhanced intensity along the region of
the periodic orbit, this is expected to carry over to the contributing energy
eigenstates. The stronger the overlap resurgences are, the stronger the
effect is expected to be. Therefore the intensity of the effect varies like
$1/(\tau \lambda )$.
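A rough way to see this peak structure (a schematic sketch, not the full
argument of \cite{Heller1}): if each traversal simply contributes a copy of
the initial overlap damped by $e^{-\tau \lambda /2}$, the Fourier transform
of $C(t)$ acquires the factor
\[\sum_{n\geq 0}e^{-n\tau \lambda /2}e^{inE\tau /\hbar }=\frac
1{1-e^{-\tau \lambda /2}e^{iE\tau /\hbar }}\]
whose modulus is peaked at $E\tau /\hbar =2\pi k$ ($k$ integer), that is with
spacing $\omega =2\pi /\tau $ in frequency, and with peak widths controlled
by $\lambda $.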
The above qualitative derivation\cite{Heller1} of the scar effect is flawed
if the product $\lambda d(E)$ (where $d(E)$ is the mean level density) is
very large. Then the number of contributing eigenstates is very large and no
individual eigenstate is required to show a significant intensity
enhancement near the periodic orbit. Also the argument assumes the low
period unstable orbits to be isolated. If there are several nearby orbits of
different periods the argument breaks down. However, if it happens that many
periodic orbits of the same period are present in the same configuration
space region, the effect may even be enhanced. This is the situation for the
periodic motions in the neighborhood of an unstable critical point (a
saddle) of the potential $V(x)\,$. Near the critical point there are
unstable harmonic periodic motions along the stable manifold of the critical
point. As long as anharmonic corrections are unimportant, all the orbits
will have the same period independently of their amplitude. The scars
associated to these unstable periodic orbits are called {\it saddle scars}
in this paper.
Whenever a dynamical system has a phase-space region with sensitive
dependence on initial conditions, the periodic orbits in that region are
unstable and, even when they are dense, they form a set of zero measure with
respect to the smooth (Liouville) measure on the energy surface. Therefore, unstable
classical orbits are in practice never observed, because all typical motions
are aperiodic and uniformly cover the support of the Liouville measure. The
phenomenon of quantum scars may therefore have far-reaching implications for
the applications of quantum systems. Whenever an unstable periodic orbit
scars a quantum eigenstate, the system may easily be made to behave like the
unstable orbit by resonant excitation to the corresponding energy level. In
this sense, scars are a gift of Nature, for they allow the exploration of
dynamical configurations that in classical mechanics are washed away by
ergodicity.
Saddle points are the typical critical points of generic (Morse) functions.
Therefore, once their existence is established, saddle scars are expected to
be quite abundant. In the remainder of the paper a semiclassical method is
used to establish that, independently of the overall structure of the
potential, the local dynamics of the saddle point is sufficient to ensure
the existence of this type of scar. Then, in the closing section,
potentially useful situations are identified where these states appear
(directly or in disguise) and may be used for quantum control purposes.
\section{Semiclassical estimates}
In the neighborhood of a saddle point, there is a choice of coordinates such
that, up to higher order terms, the potential is
\begin{equation}
\label{2.1}V(x)=\sum_i\sigma _ix_i^2+\cdots
\end{equation}
In two dimensions $\sigma _1>0$ and $\sigma _2<0$; in the formulas below
$\sigma _2$ stands for the magnitude of the negative coefficient. In this case
one obtains the following result:
{\bf \# There are scar states concentrated along the stable manifold of the
saddle point and, in the neighborhood of the stable manifold,}
\begin{equation}
\label{2.2}\left| \Psi _{\textnormal{scar}}\right| ^2\propto \cos \left( \frac{W_n
}{2\hbar }x_2^2\right) \left| \psi _n(x_1)\right| ^2
\end{equation}
$\psi _n(x_1)$ {\bf being close to a harmonic oscillator wave function and }
$W_n$ {\bf a function of the monodromy matrix in the transverse direction. }
(Explicit expressions for $W_n$ under different approximations are given
below.)
The result is obtained following Bogomolny's semiclassical construction of
wave functions\cite{Bogomolny}. From the series expansion of the energy
Green's function in terms of eigenfunctions it follows that the averaged
squared wave function is proportional to the imaginary part of the Green's
function
\begin{equation}
\label{2.3}\left\langle \left| \Psi _{E_0}(x)\right| ^2\right\rangle \propto
\left\langle \textnormal{Im}G(x,x,E_0)\right\rangle
\end{equation}
the average being taken over a small energy interval $\Delta E$ around $E_0$
, which corresponds, in the semiclassical approximation, to restricting the
contributions to orbits with times of motion of order $\leq \frac \hbar
{\Delta E}$ . Likewise, if an average is taken over small intervals of the
variable $x$, the dominant contributions come from classical trajectories
for which the change of momentum on the closed orbit is small. I will
discuss later the role of these two averages.
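The relation (\ref{2.3}) itself is the standard connection between the
Green's function and the local density of states: from the spectral
representation $G(x,x_0,E)=\sum_n\Psi _n(x)\Psi _n^{*}(x_0)/(E-E_n+i0)$ one
has, in the sense of distributions,
\[\textnormal{Im}\,G(x,x,E)=-\pi \sum_n\left| \Psi _n(x)\right| ^2\delta
(E-E_n)\]
so that the average over the energy window $\Delta E$ retains only the
eigenstates with $E_n$ close to $E_0$.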
The next step is to use the semiclassical approximation for the Green's
function
\begin{equation}
\label{2.4}G(x_0,x,E)=\overline{G}(x_0,x,E)+\left( \frac 1\hbar \right)
^{(d+1)/2}G_{\textnormal{osc}}(x_0,x,E)
\end{equation}
\begin{equation}
\label{2.5}G_{\textnormal{osc}}(x_0,x,E)=i^{-1}\left( \frac 1{2\pi i}\right) ^{
\frac{d-1}2}\sum_\beta \sqrt{\left| \det D_\beta \right| }\exp \left\{ \frac
i\hbar S_\beta \left( x_0,x,E\right) -i\frac \pi 2\nu _\beta \right\}
\end{equation}
For the contributions in the neighborhood of a classical periodic orbit it
is convenient to choose one of the coordinates along the orbit $(x_1)$ and
the others $(x_i$ ; $i=2,\cdots ,n)$ along the transverse directions\cite
{Gutzwiller} \cite{Bogomolny}. On the neighborhood of an unstable periodic
orbit along the stable manifold of the saddle point the action is expanded
up to quadratic terms in the transversal coordinates and one has
\begin{equation}
\label{2.6}G_{\textnormal{osc}}(x,x,E)=\frac 1{i\left( 2\pi i\right)
^{1/2}}\sum_\beta \frac{D^{1/2}(x_1)}{\left| \stackrel{\bullet }{x}_1\right|
}\exp \left\{ \frac i\hbar \left( \overline{S}_\beta +\frac
12\sum_{i,j=2}^nW_{ij}(x_1)x_ix_j\right) -i\frac \pi 2\nu _\beta \right\}
\end{equation}
with $D(x_1)$ and $W_{ij}(x_1)$ functions of the monodromy matrix in the
transverse coordinates
\begin{equation}
\label{2.7}\left(
\begin{array}{c}
x_{\bot }(\tau _1) \\
p_{\bot }(\tau _1)
\end{array}
\right) =\left(
\begin{array}{cc}
m_{11}(x_1) & m_{12}(x_1) \\
m_{21}(x_1) & m_{22}(x_1)
\end{array}
\right) \left(
\begin{array}{c}
x_{\bot }(0) \\
p_{\bot }(0)
\end{array}
\right)
\end{equation}
with
\begin{equation}
\label{2.8}
\begin{array}{c}
D(x_1)=\left| \det \left( m_{12}^{-1}\right) \right| \\
W(x_1)=m_{12}^{-1}m_{11}+\left( m_{22}-1\right) m_{12}^{-1}-\left(
m_{12}^T\right) ^{-1}
\end{array}
\end{equation}
and $\tau _1$ the period of the periodic orbit along $x_1$. These
expressions hold for any number of transverse directions. I now specialize
to a two dimensional saddle point. Non-trivial orbits along the stable
manifold are harmonic motions of period $\tau _1=2\pi \sqrt{\frac{m_1}{
2\sigma _1}}$ independent of the amplitude of the oscillation. Therefore
defining
\begin{equation}
\label{2.9}D=\frac{\sqrt{2m_2\sigma _2}}{\sinh \left( 2\pi \sqrt{\frac{
m_1\sigma _2}{m_2\sigma _1}}\right) }
\end{equation}
\begin{equation}
\label{2.10}W=\frac{2\cosh \left( 2\pi \sqrt{\frac{m_1\sigma _2}{m_2\sigma _1
}}\right) -2}{\frac 1{\sqrt{2m_2\sigma _2}}\sinh \left( 2\pi \sqrt{\frac{
m_1\sigma _2}{m_2\sigma _1}}\right) }
\end{equation}
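A sketch of where (\ref{2.9}) and (\ref{2.10}) come from: in the harmonic
approximation, with $\sigma _2$ standing for the magnitude of the negative
coefficient and writing $\theta =2\pi \sqrt{m_1\sigma _2/(m_2\sigma _1)}$ for
the argument of the hyperbolic functions above, the transverse motion over
one period $\tau _1$ has the monodromy matrix
\[\left(
\begin{array}{cc}
\cosh \theta & \frac{\sinh \theta }{\sqrt{2m_2\sigma _2}} \\
\sqrt{2m_2\sigma _2}\sinh \theta & \cosh \theta
\end{array}
\right) \]
and inserting $m_{11}=m_{22}=\cosh \theta $, $m_{12}=\sinh \theta /\sqrt{
2m_2\sigma _2}$ in Eq.(\ref{2.8}) reproduces (\ref{2.9}) and (\ref{2.10}).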
$D$ and $W$ do not depend on $x_1$ and the dependence on the transverse
coordinate factors out for each term of the sum in Eq.(\ref{2.6}). However,
for each primitive trajectory, one also has to sum over multiple passings
obtaining the following sum
\begin{equation}
\label{2.11}\sum_n\left( 2m_2\sigma _2\right) ^{\frac 14}\sinh ^{-\frac
12}\left( n\theta \right) \exp \left\{ \frac i\hbar \left( n\overline{S}+
\sqrt{2m_2\sigma _2}\,\frac{\cosh \left( n\theta \right) -1}{\sinh \left( n\theta
\right) }x_2^2\right) -i\frac \pi 2n\nu \right\}
\end{equation}
where $\theta =2\pi \sqrt{\frac{m_1\sigma _2}{m_2\sigma _1}}$ . There are
two situations where simple closed form results may be obtained:
\# When $\exp \left( \theta \right) \gg 1$ , the $x_2$-dependence factors
out from the sum and the result is
\begin{equation}
\label{2.12}\left\langle \left| \Psi _E(x)\right| ^2\right\rangle \propto
\left\langle \textnormal{Im}\left\{ \exp \left( \frac i\hbar \frac W2x_2^2\right)
G(x_1,E)\right\} \right\rangle
\end{equation}
$G(x_1,E)$ being the harmonic oscillator Green's function. From
\begin{equation}
\label{2.13}G(x_1,E)=\sum_n\left| \Psi _n(x_1)\right| ^2\left\{ P\left( \frac
1{E-E_n}\right) -i\pi \delta \left( E-E_n\right) \right\}
\end{equation}
it follows that the principal part drops out under averages over small
energy intervals and the result (\ref{2.2}) follows with $W_n=W$ for $
n=1,2,\cdots $ . The lowest lying state however corresponds to the orbit of
the unstable fixed point and the period of this orbit is no longer $\tau _1$
. The value of $W_0$ in this case may be calculated by considering an orbit
from the coordinate $(0,x_2)$ to the fixed point $(0,0)$ and back along the
unstable manifold, which is equivalent to taking the limit $\sigma
_1\rightarrow 0$ in Eq.(\ref{2.10}). Then
\begin{equation}
\label{2.14}W_0=2\sqrt{2m_2\sigma _2}
\end{equation}
\# If $\exp \left( \theta \right) $ is not large then the $x_2$-dependence
does not factor out in the sum (\ref{2.11}). Notice however that the sum in (
\ref{2.11}) is not a sum over multiple passings of the same orbit, but a sum
over different orbits because, for $x_2\neq 0$, the initial and final
momenta are different, and the difference grows with $n$. Therefore if, in
addition to the average over small energy intervals, one also averages over
small coordinate intervals, then the contribution of the primitive orbit
dominates the leading semiclassical approximation. The same factorization of
the $x_2$-dependence, as before, is obtained. However, the $\psi (x_1)$
function in this case is not exactly a harmonic oscillator wave function,
but a function corresponding to a sum restricted to the primitive orbits.
\section{Applications}
The canonical form of the potential near a saddle point establishes a local
separation of variables which, of course, does not hold far away from the
saddle point. However, what the semiclassical estimate of the previous
section shows is that the local dynamics of the saddle point is sufficient
to ensure the local existence of a factorized quantum state (\ref{2.2}). If
the separation of variables extends over a sufficiently large range, then
the transverse shape of the wave function may be approximated by the
solution of a one-dimensional problem. Consider, for example, a
one-dimensional Hamiltonian
\begin{equation}
\label{3.1}H=-\frac 1{2m}\frac{d^2}{dx^2}+2g\cos (x)
\end{equation}
with $2\pi -$periodic boundary conditions. The eigenstates are Mathieu
functions and there is a state of energy slightly above $2g$ with the
squared amplitude $\left| F\right| ^2$ as shown in Fig.1 (for $g=50$,
$\frac 1{2m}=1$, $E=101.189$). In the figure the state is also compared with
the local approximation following from (\ref{2.2}) and (\ref{2.14}). The
width of the state depends on the factor $\sqrt{2mg}$ and the energy is
approximately $2g$ plus the kinetic localization energy. For a higher-dimensional
problem one must also add the quantized energy of the harmonic
oscillation along the stable manifold. These considerations provide a simple
rule for the approximate energies at which saddle scars are expected to be
found.
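As an illustration of this estimate, the state of Fig.1 can be located numerically by diagonalizing (\ref{3.1}) in a truncated plane-wave basis $e^{inx}$, in which the kinetic term is diagonal and $2g\cos (x)$ couples neighbouring momenta. The following minimal sketch (the truncation size $N$ and the use of NumPy are our own illustrative choices, not part of the text) finds the eigenvalue closest to the saddle energy $2g$:
\begin{verbatim}
# Sketch: H = -(1/2m) d^2/dx^2 + 2 g cos(x) with 2*pi-periodic boundary conditions.
# In the plane-wave basis e^{i n x}:  <n|H|n> = n^2/(2m),  <n|H|n+-1> = g.
import numpy as np

g, inv2m, N = 50.0, 1.0, 60            # parameters of the text; N is a truncation choice
n = np.arange(-N, N + 1)
H = np.diag(inv2m * n**2) + g * (np.eye(2*N + 1, k=1) + np.eye(2*N + 1, k=-1))
energies, vectors = np.linalg.eigh(H)  # exact diagonalization of the truncated matrix
idx = np.argmin(np.abs(energies - 2*g))
print("eigenvalue nearest 2g:", energies[idx])
\end{verbatim}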
Among the physical situations where saddle scars might appear, an
interesting example is probably the quantum collision of systems containing
both attractive and repulsive interactions (chemical ions, nuclei, etc)\cite
{Vilela}. When a system contains several positively and negatively charged
particles, there are classical configurations of close proximity of the
particles which are of low energy because the repulsion between the
like-charged particles is compensated by the attraction of unlike-charged
particles. These configurations however are highly unstable and the chance
to observe (or stabilize) them in classical mechanics is nil. They are zero
measure configurations in the energy surface. For smooth potentials these
unstable configurations would be saddle points of the potential, hence they
are expected to give rise to saddle scars. These states would correspond to
well-defined energy levels and might be prepared by resonant excitation.
That is, quantum control through scars makes accessible some states that,
classically, are essentially unobservable.
Another interesting aspect of saddle scars is their generality, because
saddle points are the typical critical points of generic functions. There
are also other features of the classical phase space which the wave
functions imitate and that, in some limit or by a change of coordinates, may
be related to the saddle scars. This concerns in particular the
regularization of singular potentials. An example is provided by the collision states
found for a three-dimensional periodic Coulomb problem\cite{Vilela}. In this
case the quantum collision states correspond to wave functions concentrated
along a phase space feature which is not an actual orbit, but the separatrix
of two classes of unstable orbits.
In Jacobi coordinates ($r=x_1-x_2$ , $\eta =x_3-\frac 12\left(
x_1+x_2\right) $) the potential between three unlike-charged particles is
\begin{equation}
\label{3.2}V(r,\eta )=\frac 1{\left| r\right| }-\frac 1{\left| \frac r2-\eta
\right| }-\frac 1{\left| \frac r2+\eta \right| }
\end{equation}
The dynamics of binary collisions in the three-body problem may be
regularized, but the case of interest here is a triple collision which,
except for exceptional cases\cite{McGe}, is not regularizable. The
potential, however, may be regularized by addition of a small quantity to
the definition of the distances
\begin{equation}
\label{3.3}\left| \rho \right| _\epsilon =\left( \rho _1^2+\rho _2^2+\rho
_3^2+\epsilon ^2\right) ^{\frac 12}
\end{equation}
In the neighborhood of the triple collision point $\overrightarrow{r}=
\overrightarrow{\eta }=0$ , the regularized potential is
\begin{equation}
\label{3.4}V_\epsilon =-\frac 1\epsilon -\frac 1{4\epsilon ^3}\left|
r\right| ^2+\frac 1{\epsilon ^3}\left| \eta \right| ^2+\cdots
\end{equation}
and this is a saddle point with three stable and three unstable directions.
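For completeness, we note that the quadratic terms in (\ref{3.4}) follow from the expansion of each regularized distance,
\begin{equation*}
\frac 1{\left| \rho \right| _\epsilon }=\frac 1\epsilon -\frac{\left| \rho \right| ^2}{2\epsilon ^3}+\cdots ,
\end{equation*}
together with $\left| \frac r2-\eta \right| ^2+\left| \frac r2+\eta \right| ^2=\frac{\left| r\right| ^2}2+2\left| \eta \right| ^2$, which gives
\begin{equation*}
V_\epsilon \simeq -\frac 1\epsilon -\frac{\left| r\right| ^2}{2\epsilon ^3}+\frac 1{2\epsilon ^3}\left( \frac{\left| r\right| ^2}2+2\left| \eta \right| ^2\right) =-\frac 1\epsilon -\frac 1{4\epsilon ^3}\left| r\right| ^2+\frac 1{\epsilon ^3}\left| \eta \right| ^2 .
\end{equation*}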
According to the discussion above one would expect the quantum collision
states to have scarred wave functions concentrated along the stable
manifolds and with a small dispersion in the transverse (unstable manifold)
directions. This is the situation that is indeed found in the numerical
computations\cite{Vilela} of the three-dimensional periodic Coulomb problem.
It shows that the effect seems to survive the $\epsilon \rightarrow 0$
limit. Triple collisions in a 3-dimensional 3-body problem lie on an
analytic 10-dimensional submanifold of the 12-dimensional dynamical
manifold. Hence it is a zero measure effect, essentially non-observable in
classical systems. It is interesting that, through the scar effect, they do
correspond to well-defined energy levels, accessible by resonant excitation.
\section{Figure caption}
Fig.1 - One dimensional density for a wave function concentrated around an
unstable point and the semiclassical approximation (+).
\end{document} |
\begin{document}
\begin{abstract}
R.~S.~Kulkarni showed that a finite group acting pseudofreely, but not freely, preserving orientation, on an even-dimensional sphere (or suitable sphere-like space) is either a periodic group acting semifreely with two fixed points, a dihedral group acting with three singular orbits, or one of the polyhedral groups, occurring only in dimension 2. It is shown here that the dihedral group does not act pseudofreely and locally linearly on an actual $n$-sphere when $n\equiv 0\mod 4$. The possibility of such an action when $n\equiv 2\mod 4$ and $n>2$ remains open. Orientation-reversing actions are also considered.
\end{abstract}
\dedicatory{Dedicated to Jos\'e Mar\'ia Montesinos on the occasion of his 65th birthday}
\title{Pseudofree Group Actions on Spheres}
\section{Introduction}
The focus of this note is the question of what finite groups can act pseudofreely (but not freely) on some sphere. Recall that a group action is pseudofree if the fixed point set of each non-identity element is discrete.
A good part of this question was already answered by R.~S.~Kulkarni for pseudofree actions on (cohomology) manifolds with the homology of a sphere. But there were left open questions about existence of certain actions on actual spheres. We quote from Kulkarni. (For consistency with the rest of this paper, we have made minor alterations in the notation.)
\begin{thm}[Kulkarni \cite{Kulkarni1982}]
Let $X$ be an admissible space which is a $d$-dimensional $\mathbb{Z}$-cohomology manifold with the mod $2$ cohomology isomorphic to that of an even-dimensional sphere $S^{n}$. Let $G$ be a finite group acting pseudofreely on $X$ and trivially on $H^{*}(X;\mathbb{Q})$. Then either
\begin{enumerate}[a.]
\item $G$ acts semifreely with two fixed points and has periodic cohomology of period $d$, or
\item $G\approx $ a dihedral group of order $2k$, $k$ odd, or
\item $n=2$ and $G\approx $ a dihedral, tetrahedral, octahedral or icosahedral group.
\end{enumerate}
\end{thm}
Actions of the first type in the theorem arise as suspensions of free actions. Actions of the third type arise as classical actions on the $2$-sphere.
Kulkarni, however, remarked (p. 222) that he did not know whether a dihedral group of order $2k$, $k$ odd, actually can act pseudofreely on a $\mathbb{Z}$-cohomology manifold which is a $\mathbb{Z}_{2}$-cohomology sphere of even dimension $>2$.
It should be remarked that aside from the standard actions on spheres $S^{1}$ and $S^{2}$ there are no such pseudofree \emph{linear} actions.
We restrict attention primarily to locally linear actions on actual closed manifolds,
and we find it useful to consider the slightly broader class of \emph{tame} actions. We understand a pseudofree action of a finite group $G$ to be \emph{tame} if each point $x$ has a disk neighborhood invariant under the isotropy group $G_{x}$.
We denote the dihedral group of order $2k$ by $D_{k}$ and the cyclic group of order $k$ by $C_{k}$. We fix an expression of $D_{k}$ as a semidirect product with group extension
\[
1\to C_{k} \to D_{k} \to C_{2} \to 1
\]
where the quotient $C_{2}$ acts on the normal subgroup $C_{k}$ by inversion. There are $k$ involutions in $D_{k}$, all conjugate, and all determining the various splittings as a semidirect product.
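For instance, for $k=3$ one may take $D_{3}=\langle a,b \mid a^{3}=b^{2}=1,\ bab^{-1}=a^{-1}\rangle \cong S_{3}$; its three involutions $b$, $ab$, $a^{2}b$ are conjugate, and each generates a complement to $C_{3}$, realizing one of the splittings above.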
\begin{thm}\label{thm:nonexistence}
The dihedral group $D_{k}$ of order $2k$, $k$ odd, does not act locally linearly or tamely, pseudofreely, and preserving orientation, on $S^{n}$ when $n\equiv 0\mod 4$.
\end{thm}
In fact the argument shows that there do not exist orientation-preserving actions of $D_{k}$ on closed $n$-manifolds, $n\equiv 0\mod 4$, having exactly three orbit types $(2,2,k)$, i.e., with isotropy groups $C_{2}, C_{2}, C_{k}$.
We do not consider here the possibilities for nontrivial pseudofree actions on higher dimensional manifolds with the cohomology of $S^{n}$, such as $S^{n}\times\mathbb{R}^{m}$.
It remains to ponder the case $n\equiv 2\mod 4$, which encompasses the classical actions on the $2$-sphere. In this case we obtain a weak positive result.
\begin{thm}\label{thm:existence}
If $n\equiv 2\mod 4$ and $k$ is an odd positive integer, then there is a smooth, closed, orientable $n$--manifold on which the dihedral group $D_{k}$ of order $2k$ acts smoothly, pseudofreely, preserving orientation, with exactly three singular orbits of types $(2,2,k)$.
\end{thm}
The argument shows that when $n\ge 6$ such $n$-manifolds can be chosen to be $2$-connected. But it remains an open question whether the manifold can be chosen to be a sphere or a mod 2 homology sphere.
We also consider the case of orientation-reversing pseudofree actions on spheres.
\begin{thm}\label{thm:orientation_reversing_cases}
Suppose that a finite group $G$ acts locally linearly and pseudofreely on a sphere $S^{n}$, with some elements of $G$ reversing orientation. If $n$ is odd, then $G$ must be a dihedral group and $n\equiv 1\mod 4$; and if $n$ is even, then $G$ must be a periodic group with a subgroup of index $2$.\end{thm}
When $n$ is odd the prototype is the standard action of the dihedral group on a circle, but we do not know if there are analogous actions in higher odd dimensions. The existence is closely related to the existence of orientation-preserving actions in neighboring even dimensions.
When $n$ is even, these kinds of actions arise as ``twisted suspensions'' of free actions. Such actions by even order cyclic groups have been studied and classified by S.~E.~Cappell and J.~L.~Shaneson \cite{CappellShaneson1978}, in the piecewise linear case, and by S.~Kwasik and R.~Schultz \cite{KwasikSchultz1990, KwasikSchultz1991}, in the purely topological case.
\section{Proof of Theorem \ref{thm:nonexistence}}
Suppose that $D_{k}$ acts pseudofreely on $S^{n}$. Such an action cannot be free since $D_{k}$ does not satisfy the Milnor condition that every element of order two lies in the center. Similarly $n$ cannot be odd. For otherwise a nontrivial isotropy group would act freely, preserving orientation, on an even-dimensional sphere linking a point with nontrivial isotropy group. But this would violate the Lefschetz fixed point theorem.
So henceforth we assume that $n>2$ and $n$ is even. Now by the Lefschetz Fixed Point theorem we also conclude that every nontrivial element of $D_{k}$ has exactly two fixed points. The two fixed points of an element of order $k$ are interchanged by all the elements of order $2$, since otherwise the dihedral group would act freely on a linking sphere to one of the fixed points. On the other hand the cyclic subgroup $C_{k}$ permutes the fixed points of the elements of order two in two orbits of size $k$. Now removing small invariant disk neighborhoods of the singular set and passing to the quotient we obtain an $n$-manifold $Y^{n}$ whose boundary consists of two homotopy real projective $(n-1)$-spaces $P_{1}$ and $P_{2}$ and a single homotopy lens space $L_{k}$. We have $\pi_{1}(Y)=D_{k}$, $\pi_{1}(P_{j})\approx C_{2}$, and $\pi_{1}(L_{k})=C_{k}$.
The regular covering over $Y$ is classified by a map $f:Y\to K(D_{k},1)$. Note that although there are $k$ different subgroups of order $2$, they are all conjugate in $D_{k}$ and hence there is a well-defined inclusion-induced homomorphism $H_{*}(C_{2})\to H_{*}(D_{k})$, as well as the usual homomorphism $H_{*}(C_{k})\to H_{*}(D_{k})$.
We will make use of the following elementary, well-known, homology calculation. The proof is an exercise in the spectral sequence of the split extension
$
1\to C_{k}\to D_{k}\to C_{2}\to 1.
$
\begin{prop}\label{prop:trivialcoefs}
For $k$ an odd integer, the homology of $D_{k}$ with $\mathbb{Z}$ coefficients is given by
\[
H_{q}(D_{k};\mathbb{Z})=
\begin{cases}
\mathbb{Z} & \text{for } q=0\\
\mathbb{Z}/2 & \text{for } q\equiv 1\mod 4\\
\mathbb{Z}/2k & \text{for } q\equiv 3\mod 4\\
0 & \text{for even } q>0
\end{cases}
\]
Moreover, when $ q\equiv 3\mod 4$, the inclusion $C_{k}\to D_{k}$ induces an injection $$ \mathbb{Z}_{k}=H_{q}(C_{k};\mathbb{Z})\to H_{q}(D_{k};\mathbb{Z})=\mathbb{Z}_{2k}$$ and the projection $D_{k}\to C_{2}$ induces a surjection $$ \mathbb{Z}_{2k}=H_{q}(D_{k};\mathbb{Z})\to H_{q}(C_{2};\mathbb{Z})=\mathbb{Z}_{2}.$$ \qed
\end{prop}
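In outline (we only sketch the exercise): the Lyndon--Hochschild--Serre spectral sequence of the extension has
\[
E^{2}_{p,q}=H_{p}(C_{2};H_{q}(C_{k};\mathbb{Z})).
\]
Since $k$ is odd, $E^{2}_{p,q}=0$ for $p,q>0$, the coefficients being $k$-torsion while $H_{p}(C_{2};-)$ is annihilated by $2$ for $p>0$. On the base edge $E^{2}_{p,0}=H_{p}(C_{2};\mathbb{Z})$; on the fiber edge $E^{2}_{0,q}$ is the group of coinvariants of $H_{q}(C_{k};\mathbb{Z})\cong\mathbb{Z}/k$ ($q$ odd) under the action induced by inversion, which is multiplication by $(-1)^{j}$ in degree $2j-1$. Hence $E^{2}_{0,q}\cong\mathbb{Z}/k$ for $q\equiv 3\mod 4$ and $E^{2}_{0,q}=0$ for $q\equiv 1\mod 4$. All differentials between the two edges vanish, being maps between $2$-torsion and $k$-torsion groups, and the resulting extensions split since the orders are coprime; the two edge homomorphisms then give the stated injectivity and surjectivity.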
\begin{proof}[Completion of Proof of Theorem \ref{thm:nonexistence}]
The proof when $n\equiv 0\mod 4$ follows easily from Proposition \ref{prop:trivialcoefs}. Under the classifying map $f:Y\to K(D_{k},1)$ restricted to $\partial Y$, $f_{*}[P_{i}]$ is the element of order $2$ in $H_{n-1}(D_{k})=\mathbb{Z}/2k$, for $i=1,2$. The element $f_{*}[L_{k}]$ is an element of order $k$. But obviously $f_{*}[P_{1}]+f_{*}[P_{2}]+f_{*}[L_{k}]=0$, since the classifying map is defined on all of $Y$. Since $f_{*}[P_{1}]+f_{*}[P_{2}]=2f_{*}[P_{1}]=0$ in $\mathbb{Z}/2k$, this implies $f_{*}[L_{k}]=0$, a contradiction.
\end{proof}
\begin{remark}
This argument shows that when $n\equiv 0\mod 4$ there is no orientation-preserving, pseudofree action of $D_{k}$ on any closed, orientable, $n$-manifold of type $(2,2,k)$, i.e., having singular orbit structure consisting of one orbit with isotropy group $C_{k}$ and two orbits with isotropy groups of order $2$.
\end{remark}
\section{Proof of Theorem \ref{thm:existence}}
We will also need a somewhat less precise statement for oriented bordism.
\begin{prop}\label{prop:}
For $k$ odd and $q\equiv 1\mod 4$, the map $\Omega_{q}(BC_{k})\to \Omega_{q}(BD_{k})$ induced by inclusion is zero.
\end{prop}
\begin{proof}
We indicate a proof, based on ``big guns''. According to results of Thom and Milnor, the oriented cobordism ring $\Omega_{*}$ is finitely generated, has no odd order torsion, and is finite except in dimensions divisible by $4$. Indeed, all $2$-torsion elements have order exactly 2, and the torsion subgroup is finitely generated in each dimension. Moreover, modulo torsion, $\Omega_{*}$ is a polynomial algebra with one generator in each dimension $\equiv 0\mod 4$. Rationally these generators can be taken to be complex projective spaces $\mathbb{C}P^{2m}$. For these and related facts, we refer to R.~E.~Stong \cite{Stong1968}.
Now we can calculate both $\Omega_{q}(BC_{k})$ and $\Omega_{q}(BD_{k})$ via the Atiyah-Hirzebruch spectral sequences
\[
H_{i}(BC_{k};\Omega_{j})\Rightarrow \Omega_{i+j}(BC_{k})
\]
and
\[
H_{i}(BD_{k};\Omega_{j})\Rightarrow \Omega_{i+j}(BD_{k})
\]
Since $k$ is odd, the groups $H_{i}(BC_{k};\Omega_{j})$, $i>0$, are zero unless $j\equiv 0\mod 4$, according to the above remarks. Therefore suppose $j\equiv 0\mod 4$.
Now $q=i+j$ is congruent to $1\mod 4$ if and only if $i\equiv 1\mod 4$. But then
$H_{i}(BC_{k};\Omega_{j})$ is $k$--torsion, since $H_{i}(BC_{k};\mathbb{Z})$ is $k$-torsion (for $i\ne 0$), while $H_{i}(BD_{k};\Omega_{j})$ is $2$--torsion by the Universal Coefficient formula. Therefore, in all such cases, with $i+j\equiv 1\mod 4$, the map
\[
H_{i}(BC_{k};\Omega_{j})\to
H_{i}(BD_{k};\Omega_{j})
\]
between the spectral sequences is trivial. The result follows by comparison of spectral sequences.
\end{proof}
\begin{cor}
If $L$ is a homotopy lens space of dimension $q\equiv 1\mod 4$ and fundamental group $\pi_{1}(L)=C_{k}$, then the composition
$L\to BC_{k}\to BD_{k}$ of the classifying map with the inclusion-induced map is null-bordant.\qed
\end{cor}
\begin{proof}[Proof of Theorem \ref{thm:existence}]
We show how to construct a smooth, pseudofree action of $D_{k}$ on a smooth $n$-manifold, for any $n\equiv 2\mod 4$, with this same orbit structure. At this writing we are not sure whether the manifold can be chosen to be a sphere or even a mod 2 homology sphere, however, when $n>2$. We conjecture that it cannot.
In dimension $2$ there is a standard $D_{k}$ action on the $2$--sphere such that when one removes invariant disk neighborhoods of the singular points and passes to the orbit space one has a disk with two holes, i.e., a pair of pants. One boundary circle has isotropy type $C_{k}$ and the other two boundary circles have isotropy type $C_{2}$. One can think of the classifying map we examined above as given by choosing two distinct elements of order $2$ in $D_{k}$, with product of order $k$.
We now consider higher dimensions $n\equiv 2\mod 4$.
Start with a disjoint union of two real projective $(n-1)$-spaces $P_{1}$ and $P_{2}$ and a single lens space $L_{k}$. We define a regular $D_{k}$ covering of $P_{1}\sqcup P_{2}\sqcup L_{k}$ by mapping $P_{i}\to K(D_{k},1)$, representing the nonzero element of $H_{n-1}(D_{k})=\mathbb{Z}/2$, that is, taking the canonical map $P_{i}\to K(C_{2},1)$ followed by a map $K(C_{2},1)\to K(D_{k},1)$ induced by an inclusion $C_{2}\to D_{k}$. Similarly we take a standard inclusion $L_{k}\to K(C_{k},1)$ composed with the natural map $K(C_{k},1)\to K(D_{k},1)$. Note that $\mathbb{R}P^{n-1}$ admits an orientation-reversing diffeomorphism, hence represents an element of order $2$ in $\Omega_{n-1}(BC_{2})$.
It follows from the preceding remarks that the combined map $P_{1}\sqcup P_{2}\sqcup L_{k}\to K(D_{k},1)$ is null-bordant. Indeed, $P_{1}\sqcup P_{2}\to K(D_{k},1)$ and $L_{k}\to K(D_{k},1)$ are separately null-bordant.
Choose such a manifold with the desired boundary and a $D_{k}$ covering extending the given one. Passing to the $2k$-fold covering and capping off all the boundary spheres with disks provides the required $n$-manifold with pseudofree action of $D_{k}$ with the desired singular orbit structure.
\end{proof}
\begin{remark}Of course we can arrange that the manifold constructed is connected, by forming the connected sum of components in the orbit space and noting that the resulting classifying map for the covering, $W^{n}\to BD_{k}$ must be surjective on fundamental group.
We can also easily arrange that the manifold with group action constructed above be simply connected. Just use surgery to kill the normal subgroup of the fundamental group of the oriented manifold with boundary $P_{1}\sqcup P_{2}\sqcup L_{k}$ with quotient group $D_{k}$.
One can further arrange that the manifold with group action is $2$-connected.
According to Stong \cite{Stong1968}, for example,
$\Omega_{q}^{\text{Spin}}$, like $\Omega_{q}$, has no odd order torsion and has elements of infinite order only in dimensions divisible by $4$. Using this, the preceding spectral sequence argument shows that $\Omega_{q}^{\text{Spin}}(BC_{k})\to \Omega_{q}^{\text{Spin}}(BD_{k})$ is $0$ for $k$ odd
and $q\equiv 1\mod 4$. From this it follows that the $n$-manifold with group action can be chosen to be $2$-connected (when $n\ge 6$) by arranging that the orbit manifold is spin and then doing spin surgery on $0$-, $1$- and $2$-spheres in the orbit space.
\end{remark}
\section{Proof of Theorem \ref{thm:orientation_reversing_cases}: The orientation-reversing cases}
Suppose a finite group $G$ acts pseudofreely on $S^{n}$, but with not every element of $G$ preserving orientation. Let $H<G$ be the subgroup of index two that does preserve orientation.
\subsection{Dimension $n$ odd}
The fundamental example in this case is the action of the dihedral group on the unit circle.
It follows from the Lefschetz Fixed Point Formula that each orientation-reversing element has exactly two fixed points. The pseudofree condition implies that nontrivial, orientation-preserving elements act without fixed points. That is, the subgroup $H$ acts freely.
Thus each orientation-reversing element has order two, since the square of an orientation-reversing element has fixed points and the orientation-preserving subgroup acts freely.
Moreover, each orientation-reversing element $x\in G- H$ acts on $y\in H$ by inversion, for $xyxy=e\Rightarrow xyx=y^{-1}$. The fact that inversion is an automorphism of $H$ implies that $H$ is abelian. Since $H$ acts freely on a sphere it satisfies the property that every subgroup of order $p^{2}$ is cyclic. It follows that $H$ is cyclic of some order, hence that $G$ is dihedral.
It remains to decide whether one can actually construct such orientation-reversing dihedral actions in higher dimensions. If there were such an orientation-reversing action in an odd dimension $n$, then one could promote it to an orientation-preserving, ``tame'' pseudofree action in dimension $n+1$ by twisted suspension. It then follows from Kulkarni's result that $H$ has odd order. We would therefore conclude that $n+1\equiv 2\mod 4$, by Theorem \ref{thm:nonexistence}. Thus we have ruled out $n\equiv 3\mod 4$, and must have $n\equiv 1\mod 4$.
\subsection{Dimension $n$ even}
It follows from the Lefschetz Fixed Point Formula that no orientation-reversing (pseudofree) element has a fixed point.
In this case, by the results from the preceding section, the orientation-preserving subgroup $H$ must be one of those described by Kulkarni.
\subsubsection{$H$ a periodic group acting semifreely with two fixed points}
Deleting the two $H$ fixed points, we see that $G$ acts freely on $S^{n-1}\times\mathbb{R}$, and hence $G$ must have periodic cohomology and has $H$ as a subgroup of index 2, as required. \qed
\begin{remark}
A standard example arises when $G$ acts freely, preserving orientation, on an odd dimensional equatorial sphere. Such an action can be extended by twisted suspension, using a projection $G\to \{\pm 1\}$.
Cappell and Shaneson \cite{CappellShaneson1978} argue that every PL pseudofree action of $\mathbb{Z}_{2N}$ is a twisted suspension.
They also point out that for other groups not every such action is equivalent to a twisted suspension; this fails, for instance, for the quaternion group of order 8. Note also that an even order periodic group need not have a subgroup of index 2. An example is the binary icosahedral group, which is perfect.
\end{remark}
\subsubsection{$H$ a dihedral group $D_{k}$, acting with three singular orbits of types $2,2,k$}
Note that we have already ruled this case out when $n\equiv 0\mod 4$. With the extra orientation-reversing elements, we are able to rule out such actions in all cases when $n\equiv 0\mod 2$, as we now explain.
Now a transfer argument shows that the orbifold $S^{n}/H$ has the rational homology of $S^{n}$, since $H$ acts homologically trivially. In particular, $\chi(S^{n}/H)=2$. On the other hand, the same transfer argument shows that the orbifold $S^{n}/G$ has the rational homology of a point, since $G$ acts homologically nontrivially. In particular, $\chi(S^{n}/G)=1$.
It follows that the action of $G/H\approx C_{2}$ on $S^{n}/H$ has no fixed points. On the other hand, the action of $G/H\approx C_{2}$ on $S^{n}/H$ must preserve the image of the \emph{three} $H$ singular orbits. And one of these singular orbits (at least the one of type $k$) must be fixed by $G/H$. This contradiction completes the proof. \qed
\end{document} |
\begin{document}
\title[Lebesgue mixed norm estimates for Bergman Projectors]{Lebesgue mixed norm estimates for Bergman projectors:
from tube domains over homogeneous cones to Homogeneous Siegel Domains of Type II }
\author[D. B\'ekoll\'e]{David B\'ekoll\'e}
\address{Department of Mathematics, Faculty of Science, University of Ngaound\'er\'e\\ P.O. Box 454, Ngaound\'er\'e, Cameroon }
\email{{\tt [email protected]}}
\author[J. Gonessa]{Jocelyn Gonessa}
\address{Universit\'e de Bangui, Facult\'e des Sciences, D\'epartement de math\'ematiques et Informatique, BP. 908, Bangui, R\'epublique Centrafricaine}
\email{[email protected]}
\author[C. Nana]{Cyrille Nana}
\address{Faculty of Science, Department of Mathematics, University of Buea, P.O. Box 63, Buea, Cameroon}
\email{{\tt [email protected]}}
\subjclass{} \keywords{Homogeneous cones - Homogeneous Siegel domains of type II - Bergman
spaces - Bergman projectors - Box operator.}
\begin{abstract}
We present a transference principle of Lebesgue mixed norm estimates for Bergman projectors from tube domains over homogeneous cones to homogeneous Siegel domains of type II associated to the same cones. This principle implies improvements of these estimates for homogeneous Siegel domains of type II associated with Lorentz cones, e.g. the Pyateckii-Shapiro Siegel domain of type II.
\end{abstract}
\maketitle
\section{Introduction }
Let $D$ be a domain in $\mathbb C^n$ and $dv$ the Lebesgue measure defined in $\mathbb C^n.$
We denote by $P$ the Bergman projector, i.e., the orthogonal projector of the Hilbert space $L^2 (D, dv)$ onto its closed subspace $A^2 (D, dv)$
consisting of holomorphic functions on $D.$ It is well-known that $P$ is an integral operator defined on $L^p (D, dv)$ whose kernel $B(\cdot,\cdot),$
called the Bergman kernel, is the reproducing kernel of $A^2 (D, dv).$ In this work, we consider the case where $D$ is a homogeneous Siegel
domain of type II and we are interested in the values of $p\geq 1$ for which the Bergman projector $P$ can be extended as a bounded
operator on $L^p (D, dv).$ More generally, we investigate the values $1\leq p, q \leq \infty$ for which the Bergman projector extends
to a bounded operator on Lebesgue mixed norm spaces $L^{p, q} (D).$
In fact, C. Nana \cite{Nana} determined a range of values $1\leq p, q \leq \infty$ for which the Bergman projector of a homogeneous
Siegel domain of type II extends as a bounded operator on Lebesgue mixed norm spaces $L^{p, q} (D).$ He even considered the case where
the Lebesgue measure $dv$ is replaced by standard weighted measures. Earlier in a joint work \cite{NT} with B. Trojan, the same author
considered the particular case of tube domains over homogeneous cones (homogeneous Siegel domains of type I). The purpose of the present
paper is to present a transference principle to deduce mixed norm estimates for Bergman projectors on homogeneous Siegel domains of
type II from analogous estimates on tube domains over associated cones. As an application, the results of \cite{Nana} can be
obtained as consequences of the results of \cite{NT}.
\section{Description of homogeneous cones and homogeneous Siegel domains of type II. Statement of the main results}
In this section, we recall the description of a homogeneous cone within the framework of $T$-algebras.
Next, we introduce homogeneous Siegel domains of type II and state our main results.
\subsection{Homogeneous cones}
We use the same notations as in \cite{Chua} and \cite{NT}. We denote by $\mathcal U$ a (real)
matrix algebra of rank $r$ with canonical decomposition
\begin{equation*}
\mathcal U = \bigoplus\limits_{1\leq i, j \leq r} \mathcal U_{ij}
\end{equation*}
such that $\mathcal U_{ij} \mathcal U_{jk} \subset \mathcal U_{ik}$ and $\mathcal U_{ij} \mathcal U_{lk} = \{0\} $ if $j\neq l.$ We assume that $\mathcal U$ has a structure of $T$-algebra (in the sense of \cite{V}) in which an involution is given by $x\mapsto x^\star.$ This structure implies that the subspaces $\mathcal U_{ij}$ satisfy: $\mathcal U_{ii} = \mathbb Rc_i$ where $c_i^2 = c_i$ and $dim \hskip 1truemm \mathcal U_{ij} = n_{ij} = n_{ji}.$ Also, the matrix
\begin{equation*}
\mathbf e = \sum_{j=1}^r c_j
\end{equation*}
is a unit element for the algebra $\mathcal U.$
Let $\rho$ be the unique isomorphism from $\mathcal U_{ii}$ onto $\mathbb R$ with $\rho (c_i) = 1$ for all $i=1,...,r.$ We shall consider the subalgebra
\begin{equation*}
\mathcal T = \bigoplus\limits_{1\leq i \leq j \leq r} \mathcal U_{i j}
\end{equation*}
of $\mathcal U$ consisting of upper triangular matrices and let
\begin{equation*}
H = \{t\in \mathcal T: \rho (t_{ii}) > 0, \hskip 2truemm i=1,...,r\}
\end{equation*}
be the subgroup of upper triangular matrices whose diagonal elements are positive.
Denote by $V$ the vector space of ``Hermitian matrices'' in $\mathcal U$
\begin{equation*}
V= \{x\in \mathcal U: \hskip 2truemm x^\star = x\}.
\end{equation*}
If we set
\begin{equation*}
n_i = \sum_{j=1}^{i-1} n_{ji}, \quad \quad m_i = \sum_{j=i+1}^{r} n_{ij},
\end{equation*}
then
\begin{equation}\label{dim}
dim \hskip 1truemm V = n= r+\sum_{i=1}^r m_i = r+\sum_{i=1}^r n_i.
\end{equation}
The vector space $V$ becomes a Euclidean space with the inner product
\begin{equation*}
(x|y) = tr \hskip 1truemm (xy^\star)
\end{equation*}
where
\begin{equation*}
tr \hskip 1truemm (x) = \sum_{i=1}^r \rho (x_{ii}).
\end{equation*}
Next we define
\begin{equation*}
\Omega= \{ss^\star: \hskip 2truemm s\in H\}.
\end{equation*}
By a theorem of Vinberg (\cite[p. 384]{V}), $\Omega$ is an open convex homogeneous cone containing no entire straight lines, in which the group $H$ acts simply transitively via the transformations
\begin{equation}\label{simplytransitive}
\pi (w): uu^\star \mapsto \pi (w)[uu^\star] = (wu)(u^\star w^\star) \quad \quad (w, u \in H).
\end{equation}
Thus, to every element $y\in \Omega$ corresponds a unique $t\in H$ such that
$$y=\pi (t)[\textbf e].$$
As in \cite{NT}, we shall adopt the notation:
$$t\cdot \textbf e = \pi (t)[\textbf e].$$
We shall assume that $\Omega$ is irreducible, and hence rank $(\Omega)=r.$ All homogeneous convex cones can be constructed in this way
(\cite[p. 397]{V}).
As in \cite{NT}, we denote by $Q_j$ the fundamental rational functions in $\Omega$ given by
\begin{equation*}
Q_j (y) = \rho (t_{jj})^2, \quad \quad {\rm when} \hskip 2truemm y=t \cdot {\mathbf e} \in \Omega.
\end{equation*}
We consider the matrix algebra $\mathcal U'$ which differs from $\mathcal U$ only in its grading, in the sense that
\begin{equation*}
\mathcal U'_{ij} = \mathcal U_{r+1-i, r+1-j} \quad \quad (i, j = 1,...,r).
\end{equation*}
It is proved in \cite{V} that $\mathcal U'$ is also a $T$-algebra and $V' = V$ where $V'$ is the subspace of $\mathcal U'$
consisting of Hermitian matrices. We define accordingly its subalgebra
\begin{equation*}
\mathcal T' = \bigoplus\limits_{1\leq i \leq j \leq r} \mathcal U'_{ij}
\end{equation*}
of $\mathcal U$ consisting of lower triangular matrices
and the subgroup $H'$ of $\mathcal T'$ whose diagonal elements are positive. We have
$$\mathcal T' = \{t^\star: t\in \mathcal T\} \quad {\rm and} \quad H' = H^* = \{t^\star: t\in H\}.$$
The corresponding homogeneous cone coincides with the dual cone of $\Omega,$ namely
\begin{equation*}
\Omega^*= \{\xi \in V': (x|\xi) > 0, \hskip 2truemm \forall x \in \overline {\Omega} \setminus \{0\}\}.
\end{equation*}
One also has
\begin{equation*}
\Omega^*= \{t^\star t: \hskip 2truemm t\in H\}.
\end{equation*}
(See \cite[p. 390]{V}).
For $\xi = t^\star t \in \Omega^*,$ we shall define
\begin{equation*}
Q^*_j (\xi) = \rho (t_{jj}^2).
\end{equation*}
The group $H'$ acts simply transitively on the cone $\Omega^*$ via the transformations
\begin{eqnarray}\label{starsimplytransitive}
\pi (w^\star): u^\star u\mapsto \pi (w^\star)[u^\star u] = (w^\star u^\star)(uw) \,\,\,(w^\star, u^\star \in H').
\end{eqnarray}
We write
$$t^\star \cdot \textbf e = \pi (t^\star)[\textbf e]\quad \quad (t^\star \in H').$$
We have the following identity.
\begin{equation}\label{star}
Q_j^* (t^\star \cdot \mathbf e) = Q_j (t\cdot \mathbf e).
\end{equation}
In the sequel, we shall use the following notations: for all $x\in \Omega, \hskip 2truemm \xi \in \Omega^*$ and $\alpha = (\alpha_1, \alpha_2,...,\alpha_r) \in \mathbb R^r,$
\begin{equation*}
Q^\alpha (x) = \prod_{j=1}^r Q_j^{\alpha_j} (x) \quad {\rm and} \quad (Q^*)^\alpha (\xi) = \prod_{j=1}^r (Q^*_j)^{\alpha_j} (\xi).
\end{equation*}
We identify a real number $\beta$ with the vector $(\beta, ...,\beta)\in \mathbb R^r$ and we write
\begin{equation*}
Q^\beta (x) = \prod_{j=1}^r Q_j^{\beta} (x) \quad {\rm and} \quad (Q^*)^\beta (\xi) = \prod_{j=1}^r (Q^*_j)^{\beta} (\xi),
\end{equation*}
\begin{equation*}
Q^{\alpha + \beta} (x) = \prod_{j=1}^r Q_j^{\alpha_j + \beta} (x) \quad {\rm and} \quad (Q^*)^{\alpha +\beta} (\xi) = \prod_{j=1}^r (Q^*_j)^{\alpha_j + \beta} (\xi).
\end{equation*}
We put $\tau = (\tau_1, \tau_2,...,\tau_r) \in \mathbb R^r$ with
$$\tau_i = 1+\frac 12 (m_i +n_i).$$
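(For instance, for the spherical cone of Example \ref{type 2} below, where $r=2$ and $n_{12}=1,$ these formulas give $n_1=m_2=0,$ $n_2=m_1=1,$ $n=3$ and $\tau =\left(\frac 32, \frac 32\right).$)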
For $x\in \Omega$ and $t\in H,$ we have for $j=1,...,r$
\begin{equation}\label{Qj}
Q_j (\pi (t)[x]) = Q_j (t\cdot \mathbf e)Q_j (x).
\end{equation}
Therefore, for any $t\in H,$
$$Q^\tau (\pi (t)[x]) = det \hskip 1truemm \pi (t)Q^\tau (x)$$
since
$$det \hskip 1truemm \pi (t) = Q^\tau (t\cdot {\mathbf e}).$$
(See \cite[p. 388]{V}). The above properties are also valid if we replace $Q_j$ by $Q^*_j$ and $x\in \Omega$ by $\xi \in \Omega^*.$
In particular, for all $\xi\in \Omega^*$ and $t^\star \in H',$ we have for $j=1,...,r$
\begin{equation}\label{QjStar}
Q^*_j (\pi (t^\star)[\xi]) = Q^*_j (t^\star\cdot \mathbf e)Q^*_j (\xi).
\end{equation}
\subsection{Homogeneous Siegel domains of type II}
Let $V^{\mathbb C} = V+iV$ be the complexification of $V.$ Then each element of $V^{\mathbb C}$ is identified with a vector in $\mathbb C^n.$
The coordinates of a point $z\in \mathbb C^n$ are arranged in the form
\begin{equation}\label{z}
z=(z_{11}, z_2, z_{22},...,z_r, z_{rr})
\end{equation}
where
\begin{equation}\label{z_j}
z_j = (z_{1j},...,z_{j-1, j}), \quad \quad j=2,...,r
\end{equation}
and
\begin{equation}\label{z_{ij}}
z_{jj} \in \mathbb C, \quad z_{ij} = (z^{(1)}_{ij},...,z^{(n_{ij})}_{ij})\in \mathbb C^{n_{ij}}, \quad 1\leq i < j \leq r.
\end{equation}
For all $j=1,...,r,$ we denote by $e_{jj}$ the vector $z$ with $z_{jj} = 1$ and all other coordinates equal to zero, and we set
\begin{equation*}
e=\sum_{j=1}^r e_{jj} = (1, 0, 1,...,0, 1).
\end{equation*}
Let $m\in \mathbb N.$ For each row vector $u\in \mathbb C^m,$ we denote by $u'$ the transpose of $u.$ Given $m\times m$ Hermitian matrices
$\widetilde{H}_{11},\widetilde{H}_2,\widetilde{H}_{22},...,\widetilde{H}_r, \widetilde{H}_{rr}$ such that for every $j=1,...,r,$ we have
$$u\widetilde{H}_{jj} \bar v' \in \mathbb C, \quad \quad u\widetilde{H}_j \bar v' \in \mathbb C^{n_j},$$
we define an $\Omega$-Hermitian, homogeneous form $F: \mathbb C^m \times \mathbb C^m \rightarrow \mathbb C^n$ as
\begin{equation}\label{form}
F (u, v) = (u\widetilde{H}_{11} \bar v',u\widetilde{H}_2 \bar v',u\widetilde{H}_{22} \bar v',...,u\widetilde{H}_r \bar v', u\widetilde{H}_{rr} \bar v'), \quad \quad (u, v) \in \mathbb C^m \times \mathbb C^m
\end{equation}
such that
\begin{enumerate}
\item[(i)]
$F(u, u) \in \overline {\Omega};$
\item[(ii)]
$F(u, u) = 0$ if and only if $u=0;$
\item[(iii)]
for every $t\in H,$ there exists $\tilde t \in GL (m, \mathbb C)$ such that
\begin{equation}\label{tilde}
t\cdot F(u,u) = F(\tilde tu, \tilde tu).
\end{equation}
\end{enumerate}
The point set
\begin{equation}\label{Siegel}
D(\Omega, F) = \{(z, u) \in \mathbb C^n \times \mathbb C^m: \Im m \hskip 1truemm z - F(u, u) \in \Omega\}
\end{equation}
in $\mathbb C^{n+m}$ is called a Siegel domain of type II associated to the open convex homogeneous cone $\Omega$ and to the $\Omega-$Hermitian,
homogeneous form $F.$ Recall that if $m=0,$ the domain $D$ is a tube type Siegel domain or a homogeneous Siegel domain of type I,
associated with the cone $\Omega,$ or the tube domain over the homogeneous cone $\Omega,$ considered by the authors of \cite{NT}.
Using (\ref{z}), we write
$$F(u, u) = (F_{11}(u, u), F_2(u, u), F_{22}(u, u),...,F_r(u, u), F_{rr}(u, u))$$
where for $i=1,...,r$ and $j=2,...,r,$
$$F_{ii} (u, u) = u\widetilde{H}_{ii} \bar u', \quad F_j (u, u) = u\widetilde {H}_j \bar u'= (F_{1j} (u, u),...,F_{j-1, j} (u, u))$$
and for $1\leq i < j \leq r$ and $\lambda=1,...,n_{ij},$
$$F_{ij} (u, u)= (F^{(1)}_{ij} (u, u),...,F^{(n_{ij})}_{ij} (u, u)), \quad F^{(\lambda)}_{ij} (u, u)=u\widetilde{H}^{(\lambda)}_{ij} \bar u'.$$
The space $\mathbb C^m$ decomposes into the direct sum of subspaces $\mathbb C^{b_1} \oplus...\oplus \mathbb C^{b_r}$ on which are
concentrated the Hermitian forms $F_{jj},$ that is, with appropriate coordinates, we have for $i=1,...,r,$
\begin{equation}\label{decomp}
\widetilde{H}_{ii} = \mbox{diag} (0_{(b_1)},...,0_{(b_{i-1})}, I_{(b_i)},0_{(b_{i+1})},...,0_{(b_r)})
\end{equation}
where $0_{(b_k)}$ and $I_{(b_k)}$ denote respectively the null matrix and the identity matrix of the vector space
$\mathbb C^{b_k}$ for all $k=1,...,r.$ (See for instance \cite[ pp. 127-129]{X}.)
In the sequel, we denote by $b$ the vector
$$b=(b_1,...,b_r)\in \mathbb N^r$$
and by $dv$ the Lebesgue measure in $\mathbb C^m.$ Let $\nu = (\nu_1,...,\nu_r) \in \mathbb R^r.$ For all $(x+iy, u) \in D,$ we shall consider the measure
$$dV_\nu (x+iy, u) = Q^{\nu-\frac b2 - \tau} (y - F(u,u))dxdydv(u).$$
We denote by $L^p_\nu (D), \hskip 2truemm 1\leq p \leq \infty,$ the Lebesgue space $L^p (D, dV_\nu (z, u)).$ The weighted Bergman
space $A^p_\nu (D)$ is the (closed) subspace of $L^p_\nu (D)$ consisting of holomorphic functions. In order to have a non-trivial subspace,
we take $\nu = (\nu_1,...,\nu_r) \in \mathbb R^r$ such that $\nu_i > \frac {m_i + b_i}2, \hskip 2truemm i=1,...,r.$ (See \cite{Nana}.)
The orthogonal projector of the Hilbert space $L^2_\nu (D)$ on its closed subspace $A^2_\nu (D)$ is the weighted Bergman
projector $P_\nu.$ We recall that $P_\nu$ is defined by the integral
$$P_\nu f(z, u) = \int_D B_\nu ((z, u), (w, v))f(w, v)dV_\nu (w, v), \quad (z, u) \in D,$$
where for a suitable constant $d_{\nu, b},$
$$B_\nu ((z, u), (w, v))=d_{\nu, b}Q^{-\nu - \frac b2 -\tau} \left(\frac {z-\bar w}{2i}-F(u,v)\right)$$
is the weighted Bergman kernel, i.e., the reproducing kernel of $A^2_\nu (D).$ (See \cite[Proposition II.5]{BT}.) The scalar product $\langle \cdot, \cdot \rangle_\nu$ is given by
$$\langle f, g\rangle_\nu = \int_{D} f(z, u)\overline{g(z, u)}dV_\nu (z, u).$$
Let us now introduce mixed norm spaces. For $1\leq p \leq \infty$ and $1\leq q < \infty,$ let $L^{p, q}_\nu (D)$ be the space of measurable functions on $D$ such that
$$||f||_{L^{p, q}_\nu (D)} := \left(\int_{\mathbb C^m} \int_{\Omega + F(u, u) } \left(\int_V |f(x+iy, u)|^p dx\right)^{\frac qp} Q^{\nu-\tau-\frac b2} (y-F(u, u))dydv(u)\right)^{\frac 1q}$$
is finite (with obvious modification if $p=\infty)$. As before, we call $A^{p, q}_\nu (D)$ the (closed) subspace of $L^{p, q}_\nu (D)$ consisting of holomorphic functions.
Note that for $p=q,$ the Lebesgue mixed norm space $L^{p, q}_\nu (D)$ coincides with the Lebesgue space $L^p_\nu (D)$ and the mixed norm Bergman space $A^{p, q}_\nu (D)$ coincides with the Bergman space $A^p_\nu (D).$
The unweighted case corresponds to $\nu = \tau + \frac b2.$
\subsection{Example}\label{type 2} The homogeneous (non-symmetric) Siegel domain of type II introduced by Pyateckii-Shapiro is associated to the spherical cone
$$\Gamma:= \left\{(y_{11}, y_{12}, y_{22})\in \mathbb R^3: Q_1 (y) = y_{11}> 0, Q_2 (y) = y_{22}-\frac {(y_{12})^2}{y_{11}} > 0\right\}$$
and to the $\Gamma-$Hermitian, homogeneous form
\begin{eqnarray*}
\begin{array}{clcr}
F:&\mathbb C^2 \longrightarrow \mathbb C^3&\\
&(u,v)\mapsto F(u, v) = (0, 0, u\bar v).&
\end{array}
\end{eqnarray*}
In this domain $D(\Gamma, F),$ we have
$$n=3, \hskip 2truemm r=2, \hskip 2truemm n_1=m_2=0, \hskip 2truemm n_2=m_1=1,\,\,\,b_1=0,\,\,\,b_2=1, \hskip 2truemm \tau =\left(\frac 32, \frac 32\right).$$
The (unweighted) Bergman kernel of $D(\Gamma, F)$ has the following expression:
$$B ((z, u), (w, v)) = CQ_1^{-3}\left(\frac {z-\overline {w}}{2i}-F(u, v)\right)Q_2^{-4}\left(\frac {z-\overline {w}}{2i}-F(u, v)\right)$$
which can be written
$$B ((z, u), (w, v)) = C\left(\frac {z_{11}-\overline {w_{11}}}{2i}\right)^{-3}\left(\frac {z_{22}-\overline {w_{22}}}{2i}-u\bar v-\frac{\left(\frac {z_{12}-\overline {w_{12}}}{2i}\right)^2}{\frac {z_{11}-\overline {w_{11}}}{2i}}\right)^{-4}.$$
For $\nu =(\nu_1, \nu_2) \in \mathbb R^2$ such that $\nu_j > \frac 12, \hskip 2truemm j=1, 2,$ the associated (weighted) Bergman kernel is given by
$$B_\nu ((z, u), (w, v)) = d_\nu Q_1^{-\nu_1-\frac 32}\left(\frac {z-\overline {w}}{2i}-F(u, v)\right)Q_2^{-\nu_2-2}\left(\frac {z-\overline {w}}{2i}-F(u, v)\right).$$
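As a consistency check (our own remark), the unweighted case corresponds here to $\nu = \tau + \frac b2 = \left(\frac 32, 2\right),$ so that the kernel exponent $-\nu - \frac b2 -\tau$ equals $(-3, -4),$ in agreement with the unweighted kernel displayed above.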
\subsection{Statement of the results}
The main result of our paper is the following.
\begin{thm}
Let $\nu = (\nu_1,...,\nu_r) \in \mathbb R^r$ such that $\nu_j > \frac {m_j +n_j + b_j}2, \hskip 2truemm j=1,...,r.$ Assume that the Bergman projector $\mathbb P_{\nu - \frac b2}$ of the tube domain $T_\Omega$ over the homogeneous cone $\Omega$ is bounded on $L^{p, q}_{\nu - \frac b2} (T_\Omega).$ Then the Bergman projector $P_\nu$ of the homogeneous Siegel domain $D$ of type II associated to $\Omega$ and to the $\Omega-$Hermitian homogeneous form $F$ is bounded on $L^{p, q}_\nu (D).$
\end{thm}
In particular, the following result of \cite{Nana} for $D$ is a
consequence of the corresponding result of \cite{NT} for tube domains over homogeneous cones (see Theorem \ref{boundedness} below).
\begin{thm}
Let $\nu = (\nu_1,...,\nu_r) \in \mathbb R^r$ such that $\nu_j > \frac {m_j +n_j + b_j}2, \hskip 2truemm j=1,...,r.$ We set
$q_\nu := 1+\min \limits_{1\leq j \leq r} \frac {\nu_j - \frac {m_j}2 -\frac {b_j}2}{\frac {n_j}2}.$ The Bergman projector
$P_\nu$ extends to a bounded operator on $L^{p, q}_\nu (D)$ for
\begin{eqnarray*}
\left \{
\begin{array}{clcr}
0&\leq \frac 1p &\leq \frac 12\\
\frac 1{q_\nu p'}&<\frac 1q &< 1-\frac 1{q_\nu p'}
\end{array}
\right.
\quad \quad {\rm or} \quad \quad
\left \{
\begin{array}{clcr}
\frac 12&\leq \frac 1p &\leq 1\\
\frac 1{q_\nu p}&<\frac 1q &< 1-\frac 1{q_\nu p}
\end{array}
\right.
.
\end{eqnarray*}
\end{thm}
Our theorem implies improvements of $L^{p, q}_\nu$ estimates for Bergman projectors in homogeneous
Siegel domains of type II associated to Lorentz cones for some particular values of $\nu \in \mathbb R^2$. An interesting case is the Pyateckii-Shapiro Siegel domain of type II
defined in Example \ref{type 2}.
For this domain, the problem under study was investigated in \cite[section 6]{Go} and a particular case of Theorem 2.1 was used there. We point out that the underlying spherical cone is isomorphic to the Lorentz cone of $\mathbb R^3.$
The $L^{p, q}_\nu$ estimates (with $\nu$ real, $\nu =(\nu,\cdots, \nu))$ for the Bergman projectors on tube
domains over Lorentz cones are now completely settled after the works of \cite{BBPR}, \cite{BBGR} and \cite{BBGRS}
(cf. also \cite{BBGNPR} and \cite{B}), and the recent proof of the $l^2$-decoupling conjecture by Bourgain and Demeter \cite{BD}. This goal is achieved via a particular case of the following more general theorem where $\nu= (\nu_1, \nu_2)$ is a vector of $\mathbb R^2.$ We denote by $\Lambda_n$ the Lorentz cone of $\mathbb R^n, \hskip 2truemm n\geq 3,$ and we adopt the following notations.
$$p_\nu =1+\frac {\nu_2 +\frac n2}{(\frac n2 -1 -\nu_2)_+};$$
$$q_\nu= 1+\frac {\nu_2}{\frac {n}2-1};$$
$$q_\nu (p)= p_\sharp q_\nu \quad {\rm with} \quad p_\sharp = \min (p, p');$$
$$\tilde q_{\nu, p}
=\frac {\nu_2 + \frac n2-1}{(\frac n{2p'} -1)_+}.$$
\begin{thm}
Let $\nu= (\nu_1, \nu_2)\in \mathbb R^2$ such that $\nu_1 > \frac n2 -1$ and $\nu_2 >0.$ The weighted Bergman projector $P_\nu$ of the tube domain $T_{\Lambda_n}$ is bounded on $L^{p, q}_\nu (T_{\Lambda_n})$ for the following values of $p, q$ and $\nu.$
\begin{enumerate}
\item
$\frac {n-2}{\nu_2 +\frac n2 -1}<p<\frac {n-2}{\frac n2 -1-\nu_2 }, \hskip 2truemm q'_\nu (p)<q<q_\nu (p)$ provided $0<\nu_2 < \frac n2 -1;$
\item
$1\leq p \leq \infty, \hskip 2truemm q'_\nu (p)<q<q_\nu (p)$ provided $\nu_2 \geq \frac n2 -1;$
\item
$2\leq p\leq \frac {2n}{n-2}, \hskip 2truemm p<p_\nu$ and $q'_\nu (p) <q<2q_\nu;$
\item
$p_\nu >p>\frac {2n}{n-2}$ and $2<q<\widetilde q_{\nu, p}$
provided $\frac {n-2}{2n}<\nu_2 < \frac n2 -1;$
\item
$p>\frac {2n}{n-2}$ and $2<q<\widetilde q_{\nu, p}$ provided $\nu_2 \geq \frac n2 -1;$
\item
the couples $(p, q)$ obtained by complex interpolation from the previous couples.
\end{enumerate}
\end{thm}
Figures \ref{fig1}, \ref{fig2}, \ref{fig3} and \ref{fig4} illustrate the regions of boundedness of the Bergman projector $P_\nu$ of $T_{\Lambda_n}.$
An application of Theorem 2.1 (the transference principle) then gives the following result (which improves \cite[Theorem 6.2.15]{Go} and \cite[Theorem 2.3]{Nana} for $D=D(\Gamma,\,F)$):
\begin{thm}
Let $\nu = (\nu_1, \nu_2)\in\mathbb{R}^2$ with $\nu_1>\frac 12$ and $\nu_2>1.$ The Bergman projector $P_\nu$ of the Pyateckii-Shapiro
domain $D(\Gamma,\,F)$ extends to a bounded operator on $L^{p, q}_\nu (D(\Gamma,\,F))$ for the following values of $p$ and $q:$
\begin{enumerate}
\item
$1\leq p \leq \infty$ and $\frac {2p_\sharp \nu_2}{2p_\sharp \nu_2 - 1}<q<2p_\sharp \nu_2;$
\item
$2\leq p \leq 6$ and $\frac {\nu_2}{\nu_2 -\frac 1{2p'}}<q<4\nu_2;$
\item
$p>6$ and $2<q<\frac {\nu_2}{\frac 3{2p'}-1};$
\item
the couples $(p, q)$ obtained by symmetry and complex interpolation from the previous couples.
\end{enumerate}
\end{thm}
The couples $(p, q)$ described in the previous theorem are represented in the figure depicted below.
Further improvements will follow for homogeneous Siegel domains of type II associated to symmetric cones if
one could solve the conjecture
stated in \cite{BBGRS} for tube domains over symmetric cones.
The plan of the sequel of the present paper is as follows. In section 3, we review the analysis on homogeneous
cones and on homogeneous Siegel domains of type II. Most of the results in this section are taken from \cite{Nana}.
In section 4, we restrict to tube domains over homogeneous cones and we introduce the action of the Box operator,
generalizing results from \cite{BBPR} for Lorentz cones. For these domains, we exhibit a necessary and sufficient
condition for the boundedness of the Bergman projector in terms of a Hardy type inequality for the Box operator. In section 5,
we prove the Hardy type inequality for homogeneous Siegel domains of type II and we apply it to prove the main result, namely Theorem 2.1.
The proofs of Theorem 2.3 and Theorem 2.4 are given in section 6.
\section{Analysis on homogeneous cones and in homogeneous Siegel domains of type II}
Let $n\geq 3$ and $D$ be a homogeneous Siegel domain of type II associated to the homogeneous cone $\Omega$ and the $\Omega$-Hermitian form $F.$
\subsection{Basic results on homogeneous cones and in homogeneous Siegel domains of type II}
In this section, we recall the following results whose proofs are essentially in \cite{NT}.
\begin{lem}
\label{integ}\cite[Corollary 4.16]{NT} Let $\nu=(\nu_1,\ldots,\nu_r)\in\mathbb{R}^r$ such that $\nu_j>\frac{m_j}{2},\quad j=1,\ldots,r.$ Then
$$\int_\Omega e^{-(\xi|y)}Q^{\nu-\tau}(y)dy=\Gamma_{\Omega}(\nu)(Q^*)^{-\nu}(\xi),\,\,\,\xi\in\Omega^*,$$
where $\Gamma_\Omega(\nu)$ denotes the gamma integral \cite{NT} in the cone $\Omega.$
\end{lem}
\begin{remark}
{\rm It is well-known that the fundamental rational functions $Q_j$ (resp. $Q^*_j)$ can be extended as zero-free analytic functions $Q_j (\frac zi)$ (resp. $Q^*_j (\frac zi))$ on the tube domain $V+i\Omega$ (resp. $V'+i\Omega^*)$. In particular, it follows from Lemma \ref{integ} that
if $\zeta\in V'+i\Omega^*$ and $\nu_j>\frac{m_j}{2},\quad j=1,\ldots,r,$ then}
\begin{eqnarray}\label{int}\label{cor1}
\int_\Omega e^{i(\zeta|y)}Q^{\nu-\tau}(y)dy=\Gamma_{\Omega}(\nu)(Q^*)^{-\nu}\left(\frac{\zeta}{i}\right).
\end{eqnarray}
\end{remark}
\begin{lem} \label{13} \cite[Lemma 4.19]{NT} Let $\mu=(\mu_1,\mu_2,\ldots,\mu_r)\in\mathbb{R}^r$ and
$\lambda=(\lambda_1,\lambda_2,\ldots,\lambda_r)\in\mathbb{R}^r.$ For all $y\in \Omega,$ the integral
$$J_{\mu\lambda}(y)=\int_\Omega Q^\mu(y+v) Q^{\lambda-\tau}(v) dv$$
is finite if and only if
\begin{eqnarray*}
\lambda_j>\frac{m_j}{2}, \quad \mu_j+\lambda_j<-\frac{n_j}{2},\quad \quad j=1,\ldots,r.
\end{eqnarray*}
In this case, there is a positive constant $M_{\lambda\mu}$ such that
$$J_{\mu\lambda}(y)=M_{\lambda\mu}Q^{\mu+\lambda}(y).$$
\end{lem}
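To fix ideas (this illustration is ours), in the rank-one case $r=1,$ $\Omega =(0,\infty),$ $m_1=n_1=0,$ $\tau =1$ and $Q(y)=y,$ the substitution $v=yt$ gives
$$J_{\mu\lambda}(y)=\int_0^\infty (y+v)^{\mu}v^{\lambda-1}dv=y^{\mu+\lambda}\int_0^\infty (1+t)^{\mu}t^{\lambda-1}dt=B(\lambda,-\mu-\lambda)\,y^{\mu+\lambda},$$
which is finite precisely when $\lambda>0$ and $\mu+\lambda<0.$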
\begin{lem} \label{14} \cite[Lemma 4.20]{NT} Let $\a=(\a_1,\a_2,\ldots,\a_r)\in \mathbb{R}^r.$
The integral
\begin{eqnarray}\label{Ja}
J_\a(y)=\int_V \left|Q^{-\a}\left(\frac{x+iy}{i}\right)\right|dx\quad \quad (y\in\Omega)
\end{eqnarray}
converges if and only if $\a_j>1+n_j+\frac{m_j}{2},\,\,j=1,\ldots,r.$ In this case, there is a positive constant $c_\a$ such that
\begin{eqnarray*}
J_\a(y)=c_\a Q^{-\a+\tau}(y).
\end{eqnarray*}
\end{lem}
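Again in the rank-one case (our illustration), $Q\left(\frac{x+iy}{i}\right)=y-ix,$ so that
$$J_\a(y)=\int_{-\infty}^{+\infty}\left(x^2+y^2\right)^{-\frac \a2}dx=c_\a\, y^{1-\a},$$
which converges exactly when $\a>1,$ in agreement with the condition $\a_1>1+n_1+\frac{m_1}{2}=1.$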
The following two results were stated in \cite{Nana}.
\begin{cor}
$A^{p, q}_\nu (D)$ is a Banach space.
\end{cor}
\begin{lem}\label{density}
Assume that $\mu = (\mu_1,...,\mu_r)$ and $\nu = (\nu_1,...,\nu_r)$ belong to $\mathbb R^r$ and satisfy $\mu_j, \hskip 1truemm \nu_j > \frac {m_j + b_j}2, \hskip 2truemm j=1,...,r.$ The subspace $(A^{p, q}_\nu \cap A^{2}_\mu)(D)$ is dense in $A^{p, q}_\nu (D).$
\end{lem}
Let $\nu=(\nu_1,\nu_2,\ldots,\nu_r)\in\mathbb{R}^r$
such that $\nu_j>\frac{m_j+b_j}{2},\,\,j=1,\ldots,r.$ Following \cite{BT}, we shall
denote by $L_{(-\nu)}^2(\Omega^*\times \mathbb{C}^m)$ the Hilbert space of functions $g:\Omega^*\times\mathbb{C}^m\to\mathbb{C}$ such that:
\begin{itemize}
\item[i)] for every compact subset $K_1$ of $\mathbb{C}^n$ contained in $\Omega^*$ and every compact subset $K_2$ of $\mathbb{C}^m,$ the mapping $u\mapsto g(\cdot,u)$ is holomorphic on $K_2$ with values in $L^2(K_1,-\nu),$ where $$L^2(K_1,-\nu)=\{f:K_1\to\mathbb{C}:\int_{K_1}|f(\xi)|^2(Q^*)^{-\nu+\frac{b}{2}}(\xi)d\xi<\infty\};$$
\item[ii)] the function $g\in L^2(\Omega^*\times\mathbb{C}^m,\,(Q^*)^{-\nu+\frac{b}{2}}(\xi) e^{-2(F(u,\,u)|\xi)}d\xi dv(u)).$
\end{itemize}
We then
define by
$$\mathcal L g(z,\,u)=(2\pi)^{-\frac{n}{2}}\int_{\Omega^*}e^{i(z|\xi)}g(\xi,\,u)d\xi$$
the ``Laplace transform'' of any function $g \in L_{(-\nu)}^2(\Omega^*\times\mathbb{C}^m).$ Now, we
recall the Plancherel-Gindikin result found in \cite[Theorem II.2]{BT} which is a generalization of the Paley-Wiener Theorem \cite[Theorem 5.1]{NT}.
\begin{thm}\label{16}
Let $\nu=(\nu_1,\nu_2,\ldots,\nu_r)\in \mathbb{R}^r$ with
$\nu_j>\frac{m_j+b_j}{2},\,\,j=1,\ldots,r.$ A function $G$ belongs
to $A_\nu^2 (D)$ if and only if $G=\mathcal L g$, with $g\in
L_{(-\nu)}^2(\Omega^*\times \mathbb{C}^m).$ Moreover there is a positive constant $e_{\nu,b}$ such that
\begin{eqnarray}\label{PW1}
\|G\|_{A_\nu^2 (D)}=e_{\nu,b}\|g\|_{L_{(-\nu)}^2(\Omega^*\times\mathbb{C}^m)}.
\end{eqnarray}
\end{thm}
\subsection{Integral operators associated to the Bergman projector}
Let $\mu,\a\in\mathbb{R}^r$ and $f$ be a bounded function with compact support on $D.$ We consider the integral operators:
\begin{equation}\label{operator}
T_{\mu,\a}f(z,u)=Q^\a(\Im m\, z-F(u,u))\int_DB_{\mu+\a}((z,u),(w,t))f(w,t)dV_\mu(w,t)
\end{equation}
and
\begin{equation}
T^+_{\mu,\a}f(z,u)=Q^\a(\Im m\, z-F(u,u))\int_D \left \vert B_{\mu+\a}((z,u),(w,t))\right \vert f(w,t)dV_\mu(w,t).
\end{equation}
\begin{thm}\label{essai}
Let $\a,\nu,\mu\in\mathbb{R}^r$ and $1\leq p,q\leq \infty.$ Then the operator $T^+_{\mu,\a}$ extends boundedly to $L^{p,q}_\nu(D)$ whenever for all $j=1,\ldots,r$
the parameters satisfy the following:
$$\mu_j+\a_j>\frac 12(m_j+n_j+b_j)$$ and
$$\mu_jq-\nu_j>(q-1)\left(\frac{m_j}{2}+\frac{b_j}{2}\right)+\frac{n_j}{2},\,\,\,\,\a_jq+\nu_j>\frac{m_j}{2}+\frac{b_j}{2}+(q-1)\frac{n_j}{2}.$$
\end{thm}
\begin{proof}
We follow the scheme of \cite[Proof of Theorem 2.1]{Nana}. Let $\mu=(\mu_1,\mu_2,\ldots,\mu_r)\in\mathbb{R}^r$ be such that $\mu_j>\frac{m_j+b_j}{2},\,\,j=1,\ldots,r.$
We shall denote $U=\{(t,u): u\in\mathbb{C}^m, t\in\Omega+F(u,u)\}.$ We define the measure $\mathcal V_\mu, \hskip 1truemm \mu \in \mathbb R^r,$ on $U$ by
$$d\mathcal V_\mu (t, u) = Q^{\mu - \frac b2 -\tau} (t-F(u, u))dtdv(u)$$
and we define
$L_\mu^q(U)$ as the space of all $g:U\to\mathbb{C}$ with norm given by
\begin{eqnarray*}\|g\|_{L_\mu^q(U)}^q&=&\int_{\mathbb{C}^m}\int_{\Omega+F(u,u)}|g(t,u)|^qd\mathcal V_\mu(t,u)\\&=&\int_{\mathbb{C}^m}\int_{\Omega}|g(y+F(u,u),u)|^qQ^{\mu-\frac{b}{2}-\tau}(y)dydv(u).\end{eqnarray*}
We will need the following result.
\begin{prop}\cite[Proposition 5.2]{Nana}\label{new}
Let $u,s\in\mathbb{C}^m;\,\,y\in\Omega+F(u,u)$ and $t\in\Omega+F(s,s).$ For $\lambda=(\lambda_1,\lambda_2,\cdots,\lambda_r)\in\mathbb{R}^r,$
the integral $$I_\lambda(y,u,t)=\int_{\mathbb{C}^m}Q^{-\lambda}(y+t+F(s,s)-2\Re e\,F(u,s))dv(s)$$
converges if $\lambda_j-b_j>\frac{n_j}{2},\,j=1,\ldots,r.$ In this case, there is a positive constant $C_\lambda$ such that
\begin{eqnarray}\label{new1}
I_\lambda(y,u,t)=C_\lambda Q^{-\lambda+b}(y-F(u,u)+t).
\end{eqnarray}
\end{prop}
Next, we shall use the following notation: for all $u,s\in\mathbb{C}^m,$ $$A=\Re e\,F(u,s).$$
Also we shall denote
$$f_{y,u}(x)=f(x+iy,u).$$
Thus, for $f\in L_\mu^{p,q}(D),$ using Minkowski's inequality for integrals, Young's inequality and Lemma \ref{14}, we get
\begin{eqnarray*}
\|T^+_{\mu,\a}f\|_{L_\nu^{p,q}(D)}\leq C\|R_{\mu,\a}g\|_{L^q_\nu(U)}
\end{eqnarray*}
where for $s\in\mathbb{C}^m$ and $t\in\Omega+F(s,s),$
$$g(t,s)=\|f_{t,s}\|_p$$
and $R_{\mu,\a}$ is the integral operator with positive kernel defined on $L^q_\nu(U)$ by
\begin{equation}
\label{T_g} R_{\mu,\a}g(y,\,u)=Q^\a(y-F(u,u))\int_{\mathbb{C}^m}\int_{\Omega+F(s,\,s)} Q^{-\mu-\a-\frac{b}{2}}(y-2A+t)g(t,s)d\mathcal V_\mu(t,s).
\end{equation}
Observe that $R_{\mu,\a}$ is a self-adjoint operator if $\a=0$ and $\mu=\nu.$
To prove Theorem \ref{essai}, it suffices therefore to prove the boundedness of the operator $R_{\mu,\a}$ on $L^q_\nu(U).$
\begin{thm}\label{T}
Let $\mu=(\mu_1,\ldots,\mu_r)\in \mathbb{R}^r$ and $\a=(\a_1,\ldots,\a_r)\in \mathbb{R}^r$ such that
$\mu_j+\a_j>\frac{m_j+n_j+b_j}{2},\,\,j=1,\ldots,r.$ The operator $R_{\mu,\a}$ is
bounded on $L_\nu^q(U)$ whenever
$$\mu_jq-\nu_j>(q-1)\left(\frac{m_j}{2}+\frac{b_j}{2}\right)+\frac{n_j}{2},\,\,\,\,\a_jq+\nu_j>\frac{m_j}{2}+\frac{b_j}{2}+(q-1)\frac{n_j}{2}.$$
\end{thm}
\begin{proof}
We will use Schur's Lemma (See \cite{FR}). The kernel of the operator
$R_{\mu,\a}$ relative to the measure $dV_\nu(t,s)$ is given by
$$N(y,u;t,s)= Q^\a(y-F(u,u))Q^{-\mu-\a-\frac{b}{2}}(y-2A+t)Q^{\mu-\nu}(t-F(s, s))$$
and it is positive. By Schur's Lemma, it is sufficient to find a
positive and measurable function $\varphi$ defined on $U$ such
that
\begin{eqnarray}
\label{q} \int_{\mathbb{C}^m}\int_{\Omega+F(u,\,u)} N(y,u;t,s) \varphi(y,\,u)^{q}d\mathcal V_\nu(y,u)=
C\varphi(t,\,s)^{q}
\end{eqnarray}
and
\begin{eqnarray}
\label{q'} \int_{\mathbb{C}^m}\int_{\Omega+F(s,\,s)} N(y,u;t,s)
\varphi(t,\,s)^{q'}d\mathcal V_\nu(t,s)= C\varphi(y,\,u)^{q'}.
\end{eqnarray}
We take as test functions $ \varphi(t,\,s)=Q^{\gamma}(t-F(s,\,s))$ where
$\gamma=(\gamma_1,\ldots,\gamma_r)\in\mathbb{R}^r$ has to be determined. The left-hand side of (\ref{q}) equals
$$K(t,s)=Q^{\mu-\nu}(t-F(s,s))\int_\Omega I_{\mu+\a+\frac{b}{2}}(t,s,y)Q^{\gamma q+\nu+\a-\frac{b}{2}-\tau}(y)dy.$$
Using (\ref{new1}), we get
\begin{equation}
\label{new2}
K(t,s)=CQ^{\mu-\nu}(t-F(s,s))\int_\Omega Q^{-\mu-\a+\frac{b}{2}}(y+t-F(s,s))Q^{\gamma q+\nu+\a-\frac{b}{2}-\tau}(y)dy.
\end{equation}
An
application of Lemma \ref{13} to the integral in (\ref{new2}) shows that it is finite, and that (\ref{q}) holds, whenever
\begin{eqnarray*}
\frac{-\nu_j-\a_j+\frac{m_j}{2}+\frac{b_j}{2}}{q}<\gamma_j<\frac{\mu_j-\nu_j-\frac{n_j}{2}}{q},\quad
j=1,\ldots,r.
\end{eqnarray*}
Likewise, (\ref{q'}) holds when
\begin{eqnarray*}
\frac{-\mu_j+\frac{m_j}{2}+\frac{b_j}{2}}{q'}<\gamma_j<\frac{\a_j-\frac{n_j}{2}}{q'},\quad
j=1,\ldots,r.
\end{eqnarray*}
For these intervals to be non-empty, we need $\mu_j+\a_j>\frac{m_j+n_j+b_j}{2},\,\,j=1,\ldots,r.$
The identities (\ref{q}) and (\ref{q'}) are
simultaneously satisfied if each $\gamma_j,\,\,j=1,\ldots,r,$ satisfies
the following condition
\begin{equation}
\label{cond2} \gamma_j\in
\left]\frac{-\nu_j-\a_j+\frac{m_j}{2}+\frac{b_j}{2}}{q}, \frac{\mu_j-\nu_j-\frac{n_j}{2}}{q}\right[\bigcap\left]\frac{-\mu_j+\frac{m_j}{2}+\frac{b_j}{2}}{q'},\frac{\a_j-\frac{n_j}{2}}{q'}\right[.
\end{equation}
The intersection in (\ref{cond2}) is not empty if
$\frac{-\nu_j-\a_j+\frac{m_j}{2}+\frac{b_j}{2}}{q}<\frac{\a_j-\frac{n_j}{2}}{q'}$ and
$\frac{-\mu_j+\frac{m_j}{2}+\frac{b_j}{2}}{q'}<\frac{\mu_j-\nu_j-\frac{n_j}{2}}{q};$ that is,
for any $j=1,\ldots,r,$
$\a_jq+\nu_j>(q-1)\frac{n_j}{2}+\frac{m_j}{2}+\frac{b_j}{2}$ and $\mu_jq-\nu_j>\frac{n_j}{2}+(q-1)\left(\frac{m_j}{2}+\frac{b_j}{2}\right).$
\end{proof}
\end{proof}
\begin{cor}\label{P+}
Let $\mu=(\mu_1,\mu_2,\ldots,\mu_r)\in\mathbb{R}^r$ and $\nu=(\nu_1,\nu_2,\ldots,\nu_r)\in\mathbb{R}^r$ such that
$\mu_j>\frac{m_j+n_j+b_j}{2},\,\,\,j=1,\ldots,r$ and $\nu_j>\frac{m_j+b_j}{2},\,\,\,j=1,\ldots,r.$ Assume that $1\leq p,q\leq \infty.$ Then $P_\mu^+$ is bounded on $L^{p,q}_\nu(D)$ whenever
for all $j=1,\ldots,r,$ we have
$$\frac{\nu_j-\frac{m_j}{2}-\frac{b_j}{2}+\frac{n_j}{2}}{\mu_j-\frac{m_j}{2}-\frac{b_j}{2}}<q<1+\frac{\nu_j-\frac{m_j}{2}-\frac{b_j}{2}}{\frac{n_j}{2}}.$$
In this case, the Bergman projector $P_\mu$ extends to a bounded operator from $L^{p,q}_\nu(D)$ onto $A^{p,q}_\nu(D).$
\end{cor}
\begin{proof}
Just take $\a=0$ in Theorem \ref{essai}. The easier case $q=1$ is left to the reader; see also \cite{BT}.
\end{proof}
\begin{remark}\label{3.12}
Let $k$ be a positive integer, let $\rho \in \mathbb R^r$ be such that $\rho_j >0$ for every $j=1,\cdots, r.$
Let $1\leq p \leq \infty$ and $2<q<\infty.$ In view of Corollary \ref{P+}, the operator $P^+_{\nu+k\rho}$
(and hence the Bergman projector $P_{\nu+k\rho}$) is bounded on $L^{p, q}_{\nu+kq\rho}$ for $\nu=(\nu_1,\nu_2,\ldots,\nu_r)\in\mathbb{R}^r$ such that
$\nu_j>\frac{m_j+n_j+b_j}{2},\,\,\,j=1,\ldots,r$ and $k$ large.
\end{remark}
\begin{prop}
Let $\nu=(\nu_1,\nu_2,\ldots,\nu_r)\in\mathbb{R}^r$ be such that
$\nu_j>\frac{m_j+b_j}{2},\,\,\,j=1,\ldots,r.$ Assume $\mu=(\mu_1,\mu_2,\ldots,\mu_r)\in\mathbb{R}^r$ and $p,q$ are such that $P_\mu$ extends as a bounded operator
in $L^{p,q}_\nu(D).$ Then
\begin{itemize}
\item[(i)]
the Bergman space $A^{p,q}_\nu(D)$ is the closed linear span of the set $\{B_\mu(\cdot,(w,t)),\,(w,t)\in D\}.$ In particular,
$B_\mu(\cdot,(w,t))\in L^{p,q}_\nu(D).$
\item[(ii)]
$P_\mu$ is the identity on $A^{p,q}_\nu(D);$ in particular, $P_\mu (L^{p,q}_\nu(D))=A^{p,q}_\nu(D).$
\end{itemize}
\end{prop}
\begin{proof}
\begin{itemize}
\item[(i)]
We start by establishing that $B_\mu(\cdot,(w,t))\in A^{p,q}_\nu(D)$ for all $(w,t)\in D.$ Let $f,g\in \mathcal C_c (D)$ (continuous functions on $D$ with compact support). Then
\begin{eqnarray*}
\langle P_\mu g,f\rangle_\nu=\int_D P_\mu g(z,u)\overline{f(z,u)}dV_\nu(z,u)=\langle g,P^*_\mu f\rangle_\nu
\end{eqnarray*}
where
\begin{equation} \label{Pstar}
P^*_\mu f(z,u)=Q^{\mu-\nu}(\Im m\,z-F(u,u))\int_D B_\mu((z,u),(w,t))f(w,t)dV_\nu(w,t)
\end{equation}
i.e.
\begin{equation}\label{Pstarplus}
P^*_\mu f=T_{\nu,\mu-\nu}f
\end{equation}
according to (\ref{operator}). By density of $\mathcal{C}_c(D)$ in $L^{p',q'}_\nu(D)$ and continuity of $P^*_\mu$ on $L^{p',q'}_\nu(D),$ we conclude that
\begin{equation}
P^*_\mu f=T_{\nu,\mu-\nu}f,\,\,\,\forall f\in L^{p',q'}_\nu(D).
\end{equation}
Now, the boundedness of $P_\mu^*$ on $L^{p',q'}_\nu(D)$ means boundedness of $T_{\nu,\mu-\nu}$ on $L^{p',q'}_\nu(D).$
Therefore, $B_\mu(\cdot,(w,t))\in L^{p,q}_\nu(D)$ for all $(w,t)\in D.$
To conclude, we show that for $f\in L^{p',q'}_\nu(D)$ such that
\begin{equation}\label{duality}
\langle f,B_\mu(\cdot,(w,t))\rangle_\nu=0,\,\,\,\forall (w,t)\in D,
\end{equation}
we also have $\langle f,F\rangle_\nu=0$ for all $F$ in a dense subspace of $A^{p,q}_\nu(D).$ Now, identities (\ref{duality}), (\ref{Pstar})
and (\ref{Pstarplus}) imply that
$$T_{\nu,\mu-\nu}f=0.$$
Thus, for all $F\in A^{p,q}_\nu(D)\cap A^{2,2}_\mu(D),$ we have
$$\langle f,F\rangle_\nu=\langle f,P_\mu F\rangle_\nu=\langle P_\mu^*f,F\rangle_\nu=\langle T_{\nu,\mu-\nu}f,F\rangle_\nu=0.$$
\item[(ii)]
To prove $(ii),$ we just observe that $P_\mu$ is the identity on the subspace $(A^{p, q}_\nu \cap A^{2, 2}_\mu) (D)$ which is dense on the space $A^{p, q}_\nu (D)$ by Lemma \ref{density}.
\end{itemize}
\end{proof}
\section{The action of the Box operator in tube domains over convex homogeneous cones}
In this section, we restrict to the case where the $\Omega$-Hermitian form $F$ is identically zero and $m=0$. We denote by $T_\Omega$ the tube domain over the homogeneous cone $\Omega$ of rank $r,$ i.e. $T_\Omega = V+i\Omega.$ For $\nu \in \mathbb R^r, \hskip 2truemm 1\leq p \leq \infty$ and $1\leq q < \infty,$ let $L^{p, q}_\nu (T_\Omega)$ be the space of measurable functions on $T_\Omega$ such that
$$||f||_{L^{p, q}_\nu (T_\Omega)} := \left(\int_{\Omega} \left(\int_V |f(x+iy)|^p dx\right)^{\frac qp} Q^{\nu - \tau} (y)dy\right)^{\frac 1q}$$
is finite (with the obvious modification if $p=\infty$).
We call $A^{p, q}_\nu (T_\Omega)$ the subspace of $L^{p, q}_\nu (T_\Omega)$ consisting of holomorphic functions. In order that $A^{p, q}_\nu (T_\Omega)\neq \{0\},$ we assume in the sequel that $\nu = (\nu_1,\cdots,\nu_r)$ is such that $\nu_j > \frac {m_j}2$ for each $j=1, \cdots, r.$
We call $\mathbb P_\nu$ the weighted Bergman projector on $T_\Omega$ and $\mathbb B_\nu$ the associated weighted Bergman kernel on $T_\Omega.$
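For orientation, in the rank-one case $\Omega=(0,\infty)\subset V=\mathbb R$ the generalized power function $Q^{\nu-\tau}(y)$ is simply the power $y^{\nu-\tau}$ and the norm above reduces to the classical mixed norm
$$||f||_{L^{p, q}_\nu (T_\Omega)} = \left(\int_0^{\infty} \left(\int_{\mathbb R} |f(x+iy)|^p dx\right)^{\frac qp} y^{\nu - \tau}dy\right)^{\frac 1q}$$
used for Bergman-type spaces of the upper half-plane; we record this special case only to fix ideas.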
\begin{defn}
Let $\rho\in \mathbb R^r$ be such that $\rho_j > 0$ for every $j=1,\cdots,r.$
We say that $\rho$ is an $\Omegamega-$integral vector if the fundamental compound function $(Q^*)^\rho (\xi)$ is a polynomial in $\xi.$
\end{defn}
In the sequel, we fix an $\Omegamega$-integral vector $\rho.$ We shall now adapt some proofs of \cite[section 6]{BBPR} to tube domains over open convex homogeneous cones.
\begin{defn} The generalized wave operator (the Box) $\mathcal Box=\mathcal Box_x$ on the cone $\Omega$ is the differential operator defined by the equality
$$\mathcal Box_x [e^{i(x|\xi)}] = (Q^*)^\rho (\xi)e^{i(x| \xi)} \hskip 2truemm {\rm where} \hskip 2truemm \xi \in \mathbb R^n.$$
When applied to a holomorphic function on the tube domain $T_\Omega$ over the cone $\Omega,$ we have $\mathcal Box = \mathcal Box_z = \mathcal Box_x$ where $z=x+iy.$ In view of Remark 3.2, for every $\mu \in \mathbb R^r$ such that $\mu_j > \frac {m_j}2, \hskip 2truemm j=1,\cdots, r,$ we have
\begin{equation}\label{box}
\mathcal Box \left(Q^{-\mu} \left(\frac zi\right)\right) = c_\mu Q^{-\mu -\rho} \left(\frac zi\right), \quad z\in T_\Omega.
\end{equation}
\end{defn}
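To fix ideas, consider the elementary rank-one case $\Omega=(0,\infty)$, $Q(y)=y$, $(Q^*)^\rho(\xi)=\xi$ and $\rho=1$: the defining relation $\mathcal Box_x[e^{ix\xi}]=\xi e^{ix\xi}$ forces $\mathcal Box=-i\frac{d}{dx}$, hence $\mathcal Box=-i\frac{d}{dz}$ on holomorphic functions, and a direct computation consistent with (\ref{box}) gives
$$\mathcal Box \left(\left(\frac zi\right)^{-\mu}\right)=-i\frac{d}{dz}\left(\frac zi\right)^{-\mu}=\mu \left(\frac zi\right)^{-\mu-1},$$
so that $c_\mu=\mu$ in this simplest situation.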
\begin{lem}
For every $f\in A^{2, 2}_\nu (T_\Omegamega)$ and every $h\in H,$ if we set $f_h (z) = f(hz),$ then
$$\mathcal Box (f_h) = Q^\rho(h\cdot e)(\mathcal Box f)_h.$$
\end{lem}
\begin{proof}
In view of the previous theorem, we have
$$f_h (z) = (2\pi)^{-\frac n2} \int_{\Omega^*} e^{i(\pi[h] z|\xi)}g(\xi)d\xi = (2\pi)^{-\frac n2} \int_{\Omega^*} e^{i(z|\pi[h^\star]\xi)}g(\xi)d\xi$$
for some $g\in L^2_{(-\nu)} (\Omega^*).$ The definition of $\mathcal Box$ and identity (6) give
\begin{eqnarray*}
\mathcal Box (f_h) (z) &=& (2\pi)^{-\frac n2} \int_{\Omega^*} (Q^*)^\rho (\pi[h^\star]\xi)e^{i(z|h^\star \cdot \xi)}g(\xi)d\xi\\
&=&(2\pi)^{-\frac n2} (Q^*)^\rho (h^\star \cdot e)\int_{\Omega^*} (Q^*)^\rho (\xi)e^{i(\pi [h]z|\xi)}g(\xi)d\xi\\
&=&Q^\rho (h\cdot e)(\mathcal Box f)_h (z)
\end{eqnarray*}
since by (3), $Q^* (h^\star \cdot e)=Q(h\cdot e).$
\end{proof}
\begin{prop}\label{Boxee}
The Box operator $\mathcal Box^k$ is bounded from $A^{p, q}_\nu (T_\Omega)$ to $A^{p, q}_{\nu + kq\rho } (T_\Omega)$ for all $1\leq p, q\leq \infty$ and $\nu=(\nu_1,\cdots,\nu_r)\in \mathbb R^r$ such that $\nu_j > \frac {m_j}2, \hskip 2truemm j=1,...,r.$
\end{prop}
\begin{proof} It suffices to prove the proposition for $k=1.$ The other cases are obtained by induction on $k.$ We denote by $d$ the invariant distance on the cone $\Omegamega.$ The Cauchy integral formula for derivatives implies that, if $f$ is holomorphic,
$$|\mathcal Box f (x+i\mathbf e)| \leq C\int_{|\xi|<1, \hskip 1truemm d(\eta, {\mathbf e})<1 } |f(x-\xi+i\eta)|d\xi d\eta.$$
Hence by the Minkowski integral inequality,
$$||\mathcal Box f (\cdot +i\mathbf e)||_p \leq C\int_{d(\eta, {\mathbf e})<1} ||f(\cdot +i\eta)||_p Q^{-\tau} (\eta)d\eta.$$
Here we have introduced the $H$-invariant measure on the cone $\Omegamega:$
$$dm(\eta) = Q^{-\tau} (\eta)d\eta.$$
We recall that $Q(\eta) \sim Q(y)$ if $d(\eta, y) <1.$ (See for instance \cite[section 4]{NT}).
Let $f$ be in the dense subspace $(A^{p, q}_\nu \cap A^{2}_\nu) (T_\Omegamega)$ and let $y=h\cdot e$ with $h\in H.$
A change of variables combined with the previous lemma gives that
$$||\mathcal Box f (\cdot +iy)||_p \leq CQ(y)^{-\rho}\int_{d(\eta, y)<1} ||f(\cdot +i\eta)||_p dm (\eta).$$
Then
\begin{eqnarray*}
||\mathcal Box f||^q_{A^{p, q}_{\nu + q\rho} (T_\Omegamega)} &\leq&C\int_\Omegamega \left(\int_{d(\eta, y)<1} ||f(\cdot +i\eta)||_p dm (\eta)\right)^q Q^{\nu - \tau} (y)dy\\
&\leq&C\int_\Omegamega \left(\int_{d(\eta, y)<1} ||f(\cdot +i\eta)||_p^q dm(\eta)\right)\left(\int_{d(\eta, y)<1} dm(\eta)\right)^{q-1}Q^{\nu - \tau} (y)dy
\end{eqnarray*}
by the H\"older inequality. Since
\begin{equation}\label{inv}
\int_{d(\eta, y)<1} dm(\eta)=\int_{d(\eta, \textbf e)<1} dm(\eta)=const.,
\end{equation}
we obtain
\begin{eqnarray*}
||\mathcal Box f||^q_{A^{p, q}_{\nu + q\rho} (T_\Omega)} &\leq&C\int_\Omega \left(\int_{d(\eta, y)<1} ||f(\cdot +i\eta)||_p^q Q^{- \tau} (\eta)d\eta\right)Q^{\nu-\tau} (y)dy\\
&\leq&C'\int_\Omega \left(\int_{d(\eta, y)<1} ||f(\cdot +i\eta)||_p^q Q^{\nu-\tau}(\eta)d\eta\right)Q^{-\tau} (y)dy
\end{eqnarray*}
since $Q_j (y) \sim Q_j (\eta)$ when $d(\eta, y) <1.$ An application of the Fubini-Tonelli Theorem and of identity (\ref{inv}) gives that
\begin{eqnarray*}
||\mathcal Box f||^q_{A^{p, q}_{\nu + q\rho} (T_\Omega)} &\leq&C\int_\Omega \left(\int_{d(\eta, y)<1} Q^{- \tau} (y)dy\right)||f(\cdot +i\eta)||_p^q Q^{\nu - \tau} (\eta)d\eta\\
&\leq&C'\int_\Omega ||f(\cdot +i\eta)||_p^q Q^{\nu-\tau}(\eta)d\eta=C'||f||^q_{A^{p, q}_{\nu} (T_\Omega)}.
\end{eqnarray*}
\end{proof}
We show next the relations between properties of $\mathcal Box$ and boundedness of $\mathbb P_\nu.$ They will give us a necessary condition for the boundedness of $\mathbb P_\nu.$\\
Our considerations are based on the identity stated in the next lemma. We set
$$M_k f(x+iy) = Q^{k\rho} (y)f(x+iy).$$
\begin{lem}
Let $f$ be a continuous function with compact support in $T_\Omegamega.$ Then, for every positive integer $k,$
\begin{equation}\label{multiplication}
\mathcal Box^k (\mathbb P_\nu f) = \gamma_{\nu, k} \mathbb P_{\nu+k\rho} (M_{-k} f),
\end{equation}
where $\gamma_{\nu, k}$ is a non-zero constant and $\mathcal Box^k = \mathcal Box \circ ... \circ \mathcal Box \quad k$ times.
\end{lem}
\begin{prop}\label{Boxe}
Assume that $\mathbb P_\nu$ is bounded on $L^{p, q}_\nu (T_\Omegamega).$ Then for every positive integer $k, \hskip 2truemm \mathcal Box^k$ is an
isomorphism of $A^{p, q}_\nu (T_\Omegamega)$ onto $A^{p, q}_{\nu +kq\rho} (T_\Omegamega).$ Moreover, for $f\in A^{p, q}_\nu (T_\Omegamega),$
\begin{equation}\lambdabel{diagram}
\mathbb P_\nu \circ M_k (\mathcal Box^k f) = \gammamma_{\nu, k}f.
\end{equation}
\end{prop}
\begin{proof}
Clearly, $M_{-k}$ is an isometric isomorphism of $L^{p, q}_\nu (T_\Omegamega)$ onto $L^{p, q}_{\nu +kq\rho} (T_\Omegamega)$ with
$(M_k)^{-1}=M_{-k}.$ Let $f$ be a continuous function with compact support in $T_\Omegamega.$ By (\ref{multiplication}),
$$\mathbb P_{\nu+k\rho} f = \gamma_{\nu, k}^{-1} \mathcal Box^k \circ \mathbb P_\nu \circ M_k f.$$
By density, $P_{\nu+k\rho}$ is bounded on $L^{p, q}_{\nu +kq\rho} (T_\Omegamega).$ We then have the following commutative diagram.
\begin{eqnarray*}
L^{p, q}_\nu (T_\Omegamega) &\stackrel{\mathbb P_\nu}\longrightarrow&A^{p, q}_\nu (T_\Omegamega)\\
M_{-k} \downarrow &&\downarrow \gamma_{\nu, k}^{-1} \mathcal Box^k \\
L^{p, q}_{\nu +kq\rho} (T_\Omega) &\stackrel{\mathbb P_{\nu+k\rho}}\longrightarrow& A^{p, q}_{\nu +kq\rho} (T_\Omega)
\end{eqnarray*}
where each map is continuous. By Remark 3.12 and Proposition 3.13, $\mathbb P_{\nu + k\rho}$ is onto; then also $\mathcal Box^k$ is onto.
The rest of the proof goes in the following order. We first prove that $\mathcal Box^k$ is one to one for $p=q=2$ (in which case $P_\nu$
is obviously bounded). Then we prove (\ref{diagram}) and finally we show that $\mathcal Box^k$ is one to one for general $p, q.$
Let $f=\mathcal L g \in A^{2}_\nu (T_\Omegamega).$ Then
$$\mathcal Box^k f = C_k \mathcal L (Q^{k\rho} g)\in A^{2}_{\nu +2k\rho} (T_\Omegamega).$$
By Theorem 3.7, if $\mathcal Box^k f =0,$ then $g=0$ a.e. and hence $f=0.$
In order to prove (\ref{diagram}), it suffices to take $f \in (A^{p, q}_\nu \cap A^{2}_\nu) (T_\Omegamega),$
since the left-hand side of (\ref{diagram}) involves continuous operators.
Calling $G$ the left-hand side of (\ref{diagram}), then $G\in (A^{p, q}_\nu \cap A^{2, 2}_\nu)(T_\Omegamega)$
and, by the commutativity of the diagram above,
$$\mathcal Box^k G = \gamma_{\nu, k} \mathbb P_{\nu + k\rho} (\mathcal Box^k f) = \gamma_{\nu, k} \mathcal Box^k f.$$
By the injectivity of $\mathcal Box^k$ on $A^{2, 2}_\nu (T_\Omega),$ we conclude that $G=\gamma_{\nu, k} f.$
Finally assume that $f\in A^{p, q}_\nu (T_\Omegamega).$ Then the assumption $\mathcal Box^k f =0$ implies that $f=0$ by (\ref{diagram}).
\end{proof}
The following theorem was proved in \cite [Theorem 6.8]{NT}.
\begin{thm}\label{boundedness}
Let $\nu =(\nu_1,\cdots,\nu_r) \in \mathbb R^r$ such that $\nu_j > \frac {m_j+n_j+b_j}2$ for each $j=1,\cdots,r.$ We set $q_\nu := 1+\min \limits_{1\leq j \leq r} \frac {\nu_j - \frac {m_j}2}{\frac {n_j}2}.$ Let $1\leq p\leq \infty$ and $2\leq q<\infty.$
Then the weighted Bergman projector $\mathbb P_\nu$ is bounded from $L^{p, q}_\nu (T_\Omegamega)$ to $A^{p, q}_\nu (T_\Omegamega)$ for\\
$
\left \{
\begin{array}{clcr}
0&\leq \frac 1p &\leq \frac 12\\
\frac 1{q_\nu p'}&<\frac 1q &< 1-\frac 1{q_\nu p'}
\end{array}
\right.
$
\quad \quad or \quad \quad
$
\left \{
\begin{array}{clcr}
\frac 12&\leq \frac 1p &\leq 1\\
\frac 1{q_\nu p}&<\frac 1q &< 1-\frac 1{q_\nu p}
\end{array}
\right.
$
\end{thm}
The following statement is a direct consequence of Proposition \ref{Boxe} and Theorem \ref{boundedness}.
\begin{cor}\label{Box}
For the values of $p, q$ given in Theorem 4.7, $\mathcal Box^k$ is an isomorphism of $A^{p, q}_\nu (T_\Omegamega)$ onto $A^{p, q}_{\nu +k\rho q} (T_\Omegamega)$
for every positive integer $k.$
\end{cor}
A partial converse of Corollary \ref{Box} also holds. (See \cite{BBGNPR, BBGRS} for tube domains over symmetric cones and \cite{BBPR}
for tube domains over Lorentz cones).
\begin{thm}
Assume that $1\leq p<\infty$ and $2\leq q<\infty$ and that for some $k\geq k_0,$ the inequality
$$||f||_{A^{p, q}_\nu (T_\Omegamega)} \leq C||\mathcal Box^k f||_{A^{p, q}_{\nu+k\rho q} (T_\Omegamega)}$$
holds. Then $\mathbb P_\nu$ is bounded on $L^{p, q}_\nu (T_\Omegamega).$
\end{thm}
\section{Proof of Theorem 2.1}
In this section, we start by proving the Hardy-type inequality for the homogeneous Siegel domain $D$ of type II. The notations are those of section 2.
\begin{thm}
Let $\nu \in \mathbb R^r$ be such that $\nu_i > \frac {m_i + n_i + b_i}2, \hskip 2truemm i=1,...,r.$ Let $1\leq p<\infty$ and $2\leq q<\infty.$
Assume that there exists a positive integer $k$ and a positive constant $C=C(k, p, q, \nu)$ such that for all $f\in A^{p, q}_\nu (D),$ the following Hardy type inequality holds.
\begin{equation}\label{hardy}
\int_{\mathbb C^n} \int_{\Omegamega + F(u, u)} \left(\int_V |f(x+iy, u)|^p dx\right)^{\frac qp} Q^{\nu -\tau - \frac b2} (y-F(u, u))dy dv(u)
\end{equation}
$$
\leq C\int_{\mathbb C^n} \int_{\Omegamega + F(u, u)} \left(\int_V |\mathcal Box^k_x f(x+iy, u)|^p dx\right)^{\frac qp} Q^{\nu +kq\rho-\tau - \frac b2} (y-F(u, u))dy dv(u).
$$
Then the Bergman projector $P_\nu$ of $D$ admits a bounded extension to $L^{p, q}_\nu (D).$
\end{thm}
\begin{proof}
We adapt the proof of \cite[Theorem 1.3]{BBGRS}, for tube domains over symmetric cones. In this reference, $\nu$ is a real number. We want to prove the existence of some constant $C$ such that, for $f\in (L^{p, q}_\nu \cap L^{2, 2}_\nu)(D),$ we have the inequality
$$||P_\nu f||_{A^{p, q}_\nu (D)} \leq C||f||_{ L^{p, q}_\nu (D)}.$$
Consider such an $f$ with $||f||_{ L^{p, q}_\nu (D)} = 1.$ Call $G=P_\nu f.$ By Fatou's Lemma, it is sufficient to prove that
the functions $G_\epsilon (z, u) := G(z+i\epsilon \mathbf e, u),$ which belong to $A^{p, q}_\nu (D),$ have norms uniformly bounded.
So using (\ref{hardy}), it is sufficient to show that $\mathcal Box_z^k G_\epsilon$ is uniformly in $L^{p, q}_{\nu + kq\rho} (D).$
To prove this, we apply (\ref{box}) to obtain the identity
$$\mathcal Box_z^k G_\epsilon (z, u) = C\int_D B_{\nu+k\rho} ((z+i\epsilon \textbf e, u), (w, t))f(w, t)dV_\nu (w, t).$$
In view of Proposition \ref{Boxee}, it suffices to prove that $P_{\nu +k\rho}$ is bounded on $L^{p, q}_{\nu + kq\rho} (D)$ for $k$ large.
We apply Remark \ref{3.12} to conclude.
\end{proof}
By Theorem 5.1, it suffices to prove the existence of a positive integer $k$ such that the Hardy type inequality (\ref{hardy}) is valid.
If we make the change of variable $y' = y-F(u, u),$ this inequality takes the following form.
$$
\int_{\mathbb C^n} \left(\int_{\Omega} \left(\int_V |f(x+iy' +iF(u, u), u)|^p dx\right)^{\frac qp} Q^{\nu -\tau - \frac b2} (y')dy'\right)dv(u)
$$
$$
\leq C\int_{\mathbb C^n} \left(\int_{\Omega} \left(\int_V |\mathcal Box^k_x f(x+iy'+iF(u,u), u)|^p dx\right)^{\frac qp} Q^{\nu +kq\rho-\tau - \frac b2} (y')dy'\right)dv(u).
$$
For any fixed $u\in \mathbb C^n,$ we consider the holomorphic function $f_u: T_\Omegamega \rightarrow \mathbb C$ defined by
$$f_u (x+iy') = f(x+iy'+iF(u, u), u).$$
Also observe that the property $f\in A^{p, q}_\nu (D)$ can be expressed in the following form.
$$||f||^q_{A^{p, q}_\nu (D)} = \int_{\mathbb C^n} \left(\int_{\Omegamega} \left(\int_V |f(x+iy' +iF(u, u), u)|^p dx\right)^{\frac qp} Q^{\nu -\tau - \frac b2} (y')dy'\right) dv(u) < \infty.$$
An application of the Fubini Theorem gives that, for almost all $u\in \mathbb C^n,$ the function $f_u$ belongs to $A^{p, q}_\nu (T_\Omegamega).$
Assume that the Bergman projector $\mathbb P_{\nu - \frac b2}$ of $T_\Omegamega$ is bounded on $A^{p, q}_{\nu -\frac b2} (T_\Omegamega).$
By Proposition \ref{Boxe}, this implies that for every positive integer $k,$ $\mathcal Box^k$ is an isomorphism of $A^{p, q}_{\nu -\frac b2} (T_\Omegamega)$
onto $A^{p, q}_{\nu +kq\rho-\frac b2} (T_\Omegamega).$ So there exists a positive constant $C$ such that
$$\int_{\Omegamega} \left(\int_V |f_u(x+iy)|^p dx\right)^{\frac qp} Q^{\nu -\tau - \frac b2} (y)dy$$
$$ \leq C\int_{\Omegamega} \left(\int_V |\mathcal Box^k_x f_u(x+iy)|^p dx\right)^{\frac qp} Q^{\nu +kq\rho -\tau - \frac b2} (y)dy$$
for almost all $u\in \mathbb C^n.$ An integration with respect to $u$ finishes the proof.
\section{The case of the Pyateckii-Shapiro Siegel domain of type II}
\subsection{Bergman projections and Besov-type spaces in tube domains over symmetric cones}
Let $\Omega$ be a symmetric cone of rank $r$ in a Euclidean Jordan algebra $V.$ We describe a Littlewood-Paley decomposition adapted to the geometry of $\Omega.$ Referring to \cite{BBGR}, we call $d$ the invariant distance in $\Omega.$ Let $\{\xi_j\}$ be a fixed $(\frac 12, 2)$-lattice in $\Omega$ and let $B_j$ be the $d$-ball $B_1 (\xi_j)$ with centre $\xi_j$ and radius 1. These balls $\{B_j\}$ form a covering of $\Omega.$ We choose a real function $\varphi_0\in C_c^\infty (B_2 (\mathbf e))$ such that
$$0\leq \varphi_0 \leq 1, \quad {\rm and} \quad \varphi_0\vert_{B_1 (\mathbf {e})} \equiv 1.$$
We write $\xi_j =g_j \mathbf e,$ for some $g_j\in T.$ Then, we can define $\varphi_j (\xi) = \varphi_0 (g_j^{-1} \xi),$ so that
$$\varphi_j \in C_c^\infty (B_2 ( {\xi_j})), \quad 0\leq \varphi_j \leq 1, \quad {\rm and} \quad \varphi_j\vert_{B_j} \equiv 1.$$
We assume that $\xi_0 = \bf e$ to avoid ambiguity of notation. By the finite intersection property of the lattice $\{\xi_j\},$ there exists a constant $c>0$ such that
$$\frac 1c \leq \Phi (\xi) := \sum \limits_j \varphi_j (\xi) \leq c.$$
We define the function $\psi_j$ by $\widehat \psi_j=\frac {\varphi_j}{\Phi}.$
The Besov-type spaces, $B_\nu^{p,q}, \hskip 2truemm \nu=(\nu_1,\ldots,\nu_r) \in \mathbb R^r, \hskip 1truemm 1\leq p \leq \infty, \hskip 1truemm 1\leq q<\infty,$ adapted to this
Littlewood-Paley decomposition are defined as the equivalence classes of tempered distributions which have
finite seminorms
\begin{equation}\label{besov}
\|f\|_{B_\nu^{p,q}}=\left[\sum_j(Q^*)^{-\nu}(\xi_j)\|f*\psi_j\|_p^q\right]^{\frac{1}{q}}.
\end{equation}
When $n=1,$ and $\xi_j=2^j,$ the norm (\ref{besov}) corresponds to the classical Besov space $B_{p,q}^{-\nu/q}(\mathbb{R}).$
We denote by $S_{\Omega}$ the space of Schwartz functions
$f: V\rightarrow \mathbb C$ with $\mbox{Supp}\hskip 1truemm \widehat{f}\subset \overline{\Omega}.$ One basic tool is a special decomposition for functions in $S_{\Omega},$
\begin{equation}\label{Littlewood-Paley}
f=\sum_jf*\psi_j,\,\,\,\mbox{for all}\,\,f\in S_{\Omega}.
\end{equation}
Moreover, we call $\mathcal D_{\Omega}$ the subspace of $S_{\Omega}$ consisting of those functions whose support is compact in $\Omega.$ We point out that the subspace $\mathcal D_{\Omega}$ is dense in $B_\nu^{p,q}.$
We further refer to \cite{DD}, \cite{DD1} and \cite{BBGR}.
We normalize the Fourier transform by
$$\widehat{f}(\xi)=\mathcal F f(\xi)=\frac{1}{(2\pi)^n}\int_Ve^{-i(x|\xi)}f(x)dx,\,\,\,\mbox{for}\,\,\xi\in V$$
and like in section 3, we define the Laplace transform $\mathcal L$ by
$$\mathcal L (f)(z) =\frac{1}{(2\pi)^{\frac n2}}\int_\Omega e^{i(z|\xi)}f(\xi)d\xi,\,\,\,\mbox{for}\,\,z\in T_\Omega.$$
We call $\mathcal C$ the operator $C(f) = \mathcal L(\widehat f).$ We call $p'$ the conjugate index of $p$ and for $\nu =(\nu_1,\cdots,\nu_r)\in \mathbb R^r,$ we adopt the following notations.
$$p_\sharp = \min (p, p');$$
$$p_\nu = 1+\min \limits_{j=1,\cdots,r} \frac {\nu_j +\frac nr}{(\frac {n_j}2 -\nu_j)_+};$$
$$q_\nu = 1+\min \limits_{j=1,\cdots,r} \frac {\nu_j -\frac {m_j}2}{\frac {n_j}2};$$
$$q_\nu (p) = p_\sharp q_\nu;$$
$$\widetilde q_{\nu, p} = \min \limits_{j=1,\cdots,r} \frac {\nu_j +\frac {n_j}2}{\left(\frac {n_j}{2p'} -\left(1+\frac {m_j}2\right)\right)_+}.$$
We record the following theorem due to D. Debertol. (See for instance
\cite[Theorem 4.6]{DD}, \cite[Theorem 1.2 and Corollary 4.7]{DD1}).
\begin{thm}
Let $\nu =(\nu_1,\cdots,\nu_r)\in \mathbb R^r,$ such that $\nu_j >\frac{m_j}{2},\,\,j=1,\ldots,r$ and let $1<p<\infty$ and $1<q<\widetilde q_{\nu, p}.$ Then, for every $F\in A^{p,q}_\nu (T_\Omega)$, there is a tempered distribution $f$ in $V$ such that $||f||_{B^{p,q}_\nu } <\infty$ and $F=\mathcal C f.$ Moreover we have
\begin{enumerate}
\item
$\lim \limits_{y\rightarrow 0, \hskip 1truemm y\in \Omegamega} F(\cdot +iy) = f$ both in ${\mathcal S}'(V)$ and in $B^{p,q}_\nu;$
\item
$||f||_{B^{p,q}_\nu} \lesssim ||F||_{A^{p,q}_\nu}.$
\end{enumerate}
\end{thm}
D. Debertol also proved the following theorem found in \cite[Theorem 5.8]{DD} and \cite[Theorem 1.3]{DD1}.
\begin{thm}
Let $\nu=(\nu_1,\cdots,\nu_r)\in \mathbb R^r$ such that $\nu_j >\frac {m_j}2, \hskip 2truemm j=1,\cdots,r$ and $1<p<p_\nu, \hskip 2truemm q'_\nu (p) < q < \widetilde q_{\nu, p}.$ The following assertions are equivalent.
\begin{enumerate}
\item
The Bergman projector $\mathbb P_\nu$ of $T_\Omega$ admits a bounded extension from $L^{p, q}_\nu (T_\Omegamega)$ to $A^{p, q}_\nu (T_\Omegamega).$
\item
The operator $\mathcal C$ is an isomorphism from $B^{p, q}_\nu $ to $A^{p, q}_\nu (T_\Omegamega).$
\end{enumerate}
\end{thm}
The following two results are generalizations of \cite[Theorem 4.11]{BBGR} and \cite[Lemma 4.14]{BBGR}. For the analysis on symmetric cones, we also refer to \cite{FaKo}.
\begin{thm}\label{4.11}
Let $\nu=(\nu_1,\ldots,\nu_r)\in\mathbb{R}^r$ such that $\nu_j>\frac{m_j}{2},\,\,j=1,\ldots,r$ and $1\leq p,\,s<\infty.$ Assume that there exist a number
$\delta>0,$ a vector $\mu=(\mu_1,\ldots,\mu_r)\in\mathbb{R}^r$ with $\mu_j >0,\,\,j=1,\ldots,r$ and a constant $C=C(\mu,\delta)>0$ such that the estimate
\begin{equation}\label{4.12}
\left\|\sum_j f_j\right\|_p\leq C\left[\sum_j(Q^*)^{-\mu}(\xi_j)e^{\delta(\xi_j|{\bf e})}\|f_j\|^s_p\right]^\frac{1}{s}
\end{equation}
holds for every finite sequence $\{f_j\}\subset L^p(V)$ with $\mbox{Supp}\hskip 1truemm \widehat{f_j}\subset B_2 (\xi_j).$
We assume that the index $q$ satisfies one of the following conditions.
\begin{enumerate}
\item[(i)]
$1\leq q\leq s$ and $q < s\min \limits_{j=1,\ldots,r}\frac{\nu_j-\frac{m_j}{2}}{\mu_j};$
\item[(ii)]
$s<q<\min \left \{s\min\limits_{j=1,\ldots,r}\frac{\nu_j-\frac{m_j}{2}+\frac{n_j}{2}}{\mu_j+\frac{n_j}{2}}, \widetilde{q}_{\nu,p}\right\}.$
\end{enumerate}
Then for every function $f\in \mathcal S_{\Omega},$ the function $F=\mathcal{C}(f)$ belongs to $A_\nu^{p,q}(T_\Omega),$
and moreover,
$$\|F\|_{A_\nu^{p,q}(T_\Omega)}\lesssim\|f\|_{B_\nu^{p,q}(T_\Omega)}.$$
\end{thm}
\begin{lem}\label{4.14}
Let $1\leq p,\,s<\infty$ and assume that {\rm (\ref{4.12})} holds for some number $\delta>0$ and some vector $\mu=(\mu_1,\ldots,\mu_r)\in\mathbb{R}^r.$ Then for every
$f\in\mathcal{D}_{\Omega}$ and $y\in\Omega,$ the function $F(\cdot+iy)=\mathcal F^{-1}(\widehat{f}e^{-(y|\cdot)})$ belongs to $L^p(V).$ Moreover,
\begin{equation}\label{4.15}
\|F(\cdot+iy)\|_p\lesssim Q^{-\frac \mu s}(y)\|f\|_{B_\mu^{p,s}}
\end{equation}
with constants independent of $f$ or $y\in\Omega.$
\end{lem}
In the proofs, we denote by $\{\chi_j\}$ a family of functions defined as $\widehat \chi_j (\xi):=\widehat \chi (g_j^{-1} \xi)$ from an arbitrary $\widehat \chi\in C_c^\infty (B_4 (\mathbf e))$ so that $0\leq \widehat \chi \leq 1$ and $\widehat \chi$ is identically 1 in $B_2 (\mathbf e).$ We shall use the following estimate (formula (3.47) of \cite{BBGR}): there exist two positive numbers $C$ and $\gamma$ such that
\begin{equation}\label{3.47}
||\mathcal F^{-1} (\widehat \chi_j e^{-(y\vert \cdot)})||_1 \leq Ce^{-\frac {(g_j y\vert \mathbf e)}\gamma}.
\end{equation}
\begin{proof}[Proof of Lemma 6.4]
\hskip 2truemm By homogeneity (see \cite[Proposition 3.19 ]{DD}), it is sufficient to prove (\ref{4.15}) when $y=\eta{\bf e},$ for some fixed $\eta>0$ to be chosen below.
Let us denote $\widehat{g}=\widehat{f}e^{-\eta({\bf e}|\cdot)},$ so that $g=\sum\limits_jg*\psi_j\in\mathcal{S}_{\Omega}.$ Applying (\ref{4.12}) to the sequence $\{f_j = g*\psi_j\}$
and using the Young inequality, we obtain
\begin{eqnarray*}
\|F(\cdot+i\eta{\mathbf e})\|_p &=&\|g\|_p=\|\sum_jg*\psi_j\|_p\\
&\lesssim& \left[\sum_j(Q^*)^{-\mu}(\xi_j)e^{\delta(\xi_j|{\bf e})}\|g*\psi_j\|^s_p\right]^\frac{1}{s} \\
&\lesssim& \left[\sum_j(Q^*)^{-\mu}(\xi_j)e^{\delta(\xi_j|{\bf e})}\|\mathcal F^{-1}(\widehat{f}\hskip 1truemm \widehat{\psi_j}e^{-\eta({\bf e}|\cdot)})\|^s_p\right]^\frac{1}{s}\\
&\lesssim& \left[\sum_j(Q^*)^{-\mu}(\xi_j)e^{\delta(\xi_j|{\bf e})}\|f*\psi_j\|_p^s\|\mathcal F^{-1}(e^{-\eta({\bf e}|\cdot)}\widehat{\chi_j})\|^s_1\right]^\frac{1}{s}.
\end{eqnarray*}
Now, $\|\mathcal F^{-1}(e^{-\eta({\mathbf e}|\cdot)}\widehat{\chi_j})\|_1$ is bounded by a constant times $e^{-\gamma\eta(\xi_j|{\bf e})}$ by formula (\ref
{3.47}). Therefore,
\begin{eqnarray*}
\|F(\cdot+i\eta{\mathbf e})\|_p&\lesssim& \left[\sum_j(Q^*)^{-\mu}(\xi_j)e^{\delta(\xi_j|{\bf e})-\gamma\eta(\xi_j|{\bf e})}\|f*\psi_j\|_p^s\right]^\frac{1}{s}.
\end{eqnarray*}
We only need to choose $\eta$ larger than $\delta/\gamma.$
\end{proof}
We now conclude the proof of Theorem \ref{4.11}. Given $f\in\mathcal{D}_{\Omega}$ and $F:=\mathcal C f,$ Lemma \ref{4.14} applied to
$\mathcal F^{-1}(\widehat{f}e^{-(y|\cdot)})$ gives us
\begin{eqnarray*}
\|F(\cdot+i2y)\|_p &\lesssim& Q^{-\frac \mu s}(y)\left[\sum_j(Q^*)^{-\mu}(\xi_j)\|\mathcal F^{-1}(\widehat{f}\hskip 1truemm \widehat{\psi_j}e^{-(y|\cdot)})\|^s_p\right]^\frac{1}{s}\\
&\lesssim&Q^{-\frac \mu s}(y)\left[\sum_j(Q^*)^{-\mu}(\xi_j)e^{-\gamma(\xi_j|y)}\|f*\psi_j\|_p^s\right]^\frac{1}{s},
\end{eqnarray*}
where we have used formula (\ref{3.47}) and Young's inequality again. Thus
\begin{eqnarray*}
I&:=&\int_\Omega\|F(\cdot+i2y)\|_p^qQ^{\nu-\frac nr}(y)dy\\
&\lesssim&\int_\Omega Q^{-\frac{\mu q}{s}}(y)\left[\sum_j(Q^*)^{-\mu}(\xi_j)e^{-\gamma(\xi_j|y)}\|f*\psi_j\|_p^s\right]^\frac{q}{s}Q^{\nu-\frac nr}(y)dy.
\end{eqnarray*}
When $q/s\leq 1$ i.e. $q\leq s,$ then
\begin{eqnarray*}
I&\lesssim&\int_\Omega Q^{-\frac{\mu q}{s}}(y)\sum_j(Q^*)^{-\mu\frac{q}{s}}(\xi_j)e^{-\gamma\frac{q}{s}(\xi_j|y)}\|f*\psi_j\|_p^qQ^{\nu-\frac nr}(y)dy\\
&\lesssim&\sum_j(Q^*)^{-\mu\frac{q}{s}}(\xi_j)\|f*\psi_j\|_p^q\int_\Omega e^{-\gamma\frac{q}{s}(\xi_j|y)}Q^{-\frac{\mu q}{s}+\nu-\frac nr}(y)dy\\
&\lesssim&\sum_j(Q^*)^{-\mu\frac{q}{s}+\frac{\mu q}{s}-\nu}(\xi_j)\|f*\psi_j\|_p^q=\sum_j(Q^*)^{-\nu}(\xi_j)\|f*\psi_j\|_p^q
\end{eqnarray*}
provided $-\mu_j\frac{q}{s}+\nu_j>\frac{m_j}{2},\,\,j=1,\ldots,r$ thanks to Lemma \ref{integ}. This leads to condition (i).
Assume now that $q/s> 1$ i.e. $q>s.$ Let $\beta=(\beta_1,\ldots,\beta_r)\in\mathbb{R}^r$ be a vector to be chosen below. We have
\begin{eqnarray*}
I&\lesssim&\int_\Omega Q^{-\frac{\mu q}{s}}(y)\left[\sum_j(Q^*)^{-\mu-\beta}(\xi_j)\|f*\psi_j\|_p^s(Q^*)^{\beta}(\xi_j)e^{-\gamma(\xi_j|y)}\right]^\frac{q}{s}Q^{\nu-\frac nr}(y)dy.
\end{eqnarray*}
Then by H\"older's inequality,
\begin{eqnarray}
\left[\sum_j(Q^*)^{-\mu-\beta}(\xi_j)\|f*\psi_j\|_p^s(Q^*)^{\beta}(\xi_j)e^{-\gamma(\xi_j|y)}\right]^\frac{q}{s}\leq \nonumber\\
\left[\sum_j(Q^*)^{-\frac{q}{s}(\mu+\beta)}(\xi_j)\|f*\psi_j\|_p^qe^{-\gamma(\xi_j|y)}\right]
\left[\sum_j(Q^*)^{\beta\left(\frac{q}{s}\right)'}(\xi_j)e^{-\gamma(\xi_j|y)}\right]^{\left(\frac{q}{s}\right)/\left(\frac{q}{s}\right)'}.
\end{eqnarray}
But
$$\sum_j(Q^*)^{\beta(\frac{q}{s})'}(\xi_j)e^{-\gamma(\xi_j|y)}\sim\int_{\Omega^*}e^{-\gamma(\xi|y)}(Q^*)^{\beta\left(\frac{q}{s}\right)'-\frac nr}(\xi)d\xi
\sim Q^{-\beta\left(\frac{q}{s}\right)'}(y)$$
provided $\beta_j \left(\frac{q}{s}\right)'>\frac{n_j}{2},\,\,j=1,\ldots,r$ thanks to Lemma \ref{integ}. It follows that
\begin{eqnarray*}
I&\lesssim& \int_\Omega Q^{-\frac{\mu q}{s}-\beta\left(\frac{q}{s}\right)'}(y)\left[\sum_j(Q^*)^{-\frac{q}{s}(\mu+\beta)}(\xi_j)\|f*\psi_j\|_p^qe^{-\gamma(\xi_j|y)}\right]Q^{\nu-\frac nr}(y)dy \\
&\lesssim&\sum_j(Q^*)^{-\frac{q}{s}(\mu+\beta)}(\xi_j)\|f*\psi_j\|_p^q \int_\Omega e^{-\gamma(\xi_j|y)}Q^{-\frac{q}{s}(\mu+\beta)+\nu-\frac nr}(y)dy.
\end{eqnarray*}
The last integral converges if $-\frac{q}{s}(\mu_j+\beta_j)+\nu_j>\frac{m_j}{2},\,\,j=1,\ldots,r$ thanks to Lemma \ref{integ}. It follows that
\begin{eqnarray*}
I&\lesssim&\sum_j(Q^*)^{-\nu}(\xi_j)\|f*\psi_j\|_p^q
\end{eqnarray*}
whenever $\beta_j \left(\frac{q}{s}\right)'>\frac{n_j}{2},\,\,j=1,\ldots,r$ and $-\frac{q}{s}(\mu_j+\beta_j)+\nu_j>\frac{m_j}{2},\,\,j=1,\ldots,r.$ That is
we will conclude with $I:=\int_\Omega\|F(\cdot+i2y)\|_p^qQ^{\nu-\tau}(y)dy\lesssim\|f\|^q_{B_\nu^{p,q}}$ if we can choose real numbers $\beta_j$ so that
$$\frac{n_j}{2}\left(1-\frac{1}{q/s}\right)<\beta_j<\frac{1}{q/s}\left(\nu_j-\frac{m_j}{2}\right)-\mu_j,\,\,j=1,\ldots,r.$$
Solving for $q/s$ we get
$$\frac qs<\min_{j=1,\ldots,r}\frac{\nu_j-\frac{m_j}{2}+\frac{n_j}{2}}{\mu_j+\frac{n_j}{2}}$$
which gives condition (ii).
From Theorems 6.1 and 6.2, we deduce the following corollary for tube domains over symmetric cones.
\begin{cor}
Let $\nu=(\nu_1,\cdots,\nu_r)\in \mathbb R^r$ such that $\nu_j >\frac {m_j}2, \hskip 2truemm j=1,\cdots,r,$ $1<p<p_\nu$ and $1<s<\infty.$ Assume further that there is a positive number $\delta$ and a vector $\mu = (\mu_1,\cdots,\mu_r)\in \mathbb R^r$ with $\mu_j >0, \hskip 2truemm j=1,\cdots,r$ and a constant $C=C(\mu, \delta)>0$ such that
$$\left\|\sum_j f_j\right\|_p\leq C\left[\sum_j(Q^*)^{-\mu}(\xi_j)e^{\delta(\xi_j|{\bf e})}\|f_j\|^s_p\right]^\frac{1}{s}$$
holds for every finite sequence $\{f_j\}\in L^p (V)$ with $\mbox Supp \hskip 2truemm \widehat f_j \in B_2 (\xi_j).$ We assume that for the index $q,$ we are in one of the following two situations.
\begin{enumerate}
\item[(a)]
If $q'_\nu (p) < s,$ then there are two cases:
\begin{enumerate}
\item[(i)]
$q'_\nu (p) < q \leq s$ and $q < \min \left \{s\min \limits_{\hskip 2truemm j=1,\cdots,r} \frac {\nu_j -\frac {m_j}2}{\mu_j}, \widetilde q_{\nu, p}\right \};$
\item[(ii)]
$s < q < \min \left \{s\min \limits_{\hskip 2truemm j=1,\cdots,r} \frac {\nu_j - \frac {m_j}2 +\frac {n_j}2}{\mu_j +\frac {n_j}2}, \widetilde q_{\nu, p}\right \}.$
\end{enumerate}
\item[(b)]
If $q'_\nu (p) \geq s,$ then $$q'_\nu (p) <q< \min \left \{s\min \limits_{\hskip 2truemm j=1,\cdots,r} \frac {\nu_j - \frac {m_j}2 +\frac {n_j}2}{\mu_j +\frac {n_j}2}, \widetilde q_{\nu, p}\right \}.$$
\end{enumerate}
Then the Bergman projector $P_\nu$ admits a bounded extension from $L^{p, q}_\nu (T_\Omegamega)$ to $A^{p, q}_\nu (T_\Omegamega).$
\end{cor}
\subsection{The proof of Theorem 2.3}
We refer to \cite[section 5]{BBGR}. Let $\Omegamega = \mathcal Lambdambda_n$ denote the Lorentz cone in $\mathbb R^n, \hskip 2truemm n\geq 3,$ defined by
$$\mathcal Lambdambda_n = \{y=(y_1,y') \in \mathbb R^n: \hskip 1truemm \mathcal Delta_1 (y) >0, \hskip 2truemm \mathcal Delta_2 (y) >0\},$$
with $\mathcal Delta_1 (y)=y_1$ and $\mathcal Delta_2 (y) =y_1^2 -|y'|^2.$ The rank of $\mathcal Lambdambda_n$ is $r=2.$ For $j\geq 1,$ take a maximal $2^{-j}$-separated sequence $\{\omegamega_k^{(j)}\}_{k=1}^{k_j}$ of points of the sphere $\mathbb S^{n-2} \subset \mathbb R^{n-1},$ with respect to the Euclidean distance (so that $k_j \sim 2^{j(n-2)}).$ Then define the sets\\
$$E_{j, k} =$$
$$ \left \{(\xi_1, \xi')\in \mathcal Lambdambda_n: 2^{-1} <\tau <2, \hskip 2truemm 2^{-2j-2} <1-\frac {|\xi'|^2}{\xi_1^2} <2^{-2j+2}
{\rm and} \hskip 1truemm \left |\frac {\xi'}{|\xi'|} - \omegamega_k^{(j)}\right | \leq \delta 2^{-j} \right \}$$
where the constant $\delta >0$ is suitably chosen \cite{BBGR}.
Recall that for $\mathcal Lambdambda_n$ we have $r=2,\,\,m_j=(2-j)d,\,\,n_j=(j-1)d$ with $d=n-2.$ Thus for $\nu=(\nu_1,\nu_2)\in\mathbb{R}^2$
such that $\nu_1>\frac n2-1,\,\,\nu_2>0,$ we have
$$p_\nu =1+\frac {\nu_2 +\frac n2}{\left(\frac n2 -1 -\nu_2\right)_+};$$
$$q_\nu= 1+\min \limits_{1\leq i \leq 2} \frac {\nu_i - \frac {m_i}2}{\frac {n_i}2}=1+\frac {\nu_2}{\frac {n}2-1};$$
$$q'_\nu (p)= \frac {\nu_2 +\frac n2 -1}{\nu_2 +\left(1-\frac 1{p_\sharp}\right)\left(\frac n2 -1\right)};$$
$$\widetilde q_{\nu, p} =\min \limits_{1\leq j \leq 2} \frac {\nu_j + (j-1)\frac d2}{\left(\frac n{2p'} -1-(2-j)\frac d2\right)_+}
=\frac {\nu_2 + \frac n2-1}{\left(\frac n{2p'} -1\right)_+}.$$
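For instance, for $n=3$ and $\nu_1=\nu_2=\frac 32$ (recorded here only to illustrate the formulas), these expressions give $q_\nu=1+\frac{3/2}{1/2}=4$ and $p_\nu=\infty$ since $\frac n2-1-\nu_2<0$, while for $p=2$ (so $p_\sharp=2$ and $p'=2$) one finds $q'_\nu (p)=\frac{2}{\frac 32+\frac 12\cdot\frac 12}=\frac 87$ and $\widetilde q_{\nu, p}=\infty$ because $\frac n{2p'}-1=-\frac 14<0.$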
The following theorem is a consequence of \cite[Proposition 5.5]{BBGR} and Corollary 6.5 above.
\begin{thm}\lambdabel{main}
Let $1\leq p < p_\nu, \hskip 2truemm 1\leq s <\infty.$ Suppose that for some $\mu \geq 0$
there exists a constant $C_{\mu}$ such that
\begin{equation}\label{dec}
\left \Vert\sum_{k=1}^{k_j} f_k \right \Vert_p \leq C_{\mu} 2^{\frac {2j{\mu}}{s}}\left [\sum_{k=1}^{k_j} \Vert f_k \Vert_p^s \right ]^{\frac 1s}
\quad {\rm for \hskip 2truemm all} \hskip 2truemm j\geq 1,
\end{equation}
for every sequence $\{f_k\}$ satisfying $Supp \hskip 1truemm \widehat f_k \subset E_{j, k}.$
We assume that for the index $q,$ we are in one of the following two situations.
\begin{enumerate}
\item[(a)]
If $q'_\nu (p) < s,$ then there are two cases:
\begin{enumerate}
\item[(i)]
$q'_\nu (p) < q \leq s$ and $q < \min \left \{s\min \limits_{\hskip 2truemm j=1,\cdots,r} \frac {\nu_j -\frac {m_j}2}{\mu}, \widetilde q_{\nu, p}\right \};$
\item[(ii)]
$s < q < \min \left \{s\min \limits_{\hskip 2truemm j=1,\cdots,r} \frac {\nu_j - \frac {m_j}2 +\frac {n_j}2}{\mu+\frac {n_j}2}, \widetilde q_{\nu, p}\right \}.$
\end{enumerate}
\item[(b)]
If $q'_\nu (p) \geq s,$ then $$q'_\nu (p) < q< \min \left \{s\min \limits_{\hskip 2truemm j=1,\cdots,r} \frac {\nu_j - \frac {m_j}2 +\frac {n_j}2}{\mu +\frac {n_j}2}, \widetilde q_{\nu, p}\right \}.$$
\end{enumerate}
Then $P_\nu$ is bounded in $L^{p, q}_\nu.$
\end{thm}
The following theorem is a consequence of the $l^2$-decoupling theorem recently proved by Bourgain and Demeter \cite{BD}.
\begin{thm}\lambdabel{bourgain}
The estimate {\rm (\ref{dec})} is valid for $s=2$ and the following values of $p$ and $\mu:$
\begin{enumerate}
\item
$2\leq p \leq \frac {2n}{n-2}$ and $\mu=0;$
\item
$p\geq \frac {2n}{n-2}$ and $\mu =\frac {n-2}2 - \frac np.$
\end{enumerate}
\end{thm}
We deduce the following corollary.
\begin{cor}\lambdabel{resultat}
Let $n\geq 3$ and $\nu=(\nu_1,\nu_2)\in \mathbb{R}^2$ such that $\nu_1>\frac n2-1,\,\,\nu_2>0.$
The weighted Bergman projector $P_\nu$ is bounded in $L^{p, q}_\nu(T_{\mathcal Lambdambda_n})$ for the following values of $p, q$ and $\nu:$
\begin{enumerate}
\item
$2\leq p \leq \frac {2n}{n-2}, \hskip 2truemm p<p_\nu$ and $q'_\nu (p) < q < 2q_\nu;$
\item
$p_\nu >p \geq \frac {2n}{n-2}, \hskip 2truemm q'_\nu (p) < q<\min \left \{2\frac {\nu_1 - \frac n2 +1}{\frac n2 -1 -\frac np}, \widetilde q_{\nu, p}\right \}$ provided $0 < \nu_2 < \frac n2 -1;$
\item
$p_\nu >p > \frac {2n}{n-2}$ and $2 < q< \min \left \{2\frac {\nu_1 - \frac n2 +1}{\frac n2 -1 -\frac np}, \widetilde q_{\nu, p}\right \}$ provided $\nu_2 \geq \frac n2 -1.$
\end{enumerate}
\end{cor}
\begin{proof}
We first notice that if $p\geq 2,$ the following equivalence holds
\begin{equation}
q'_\nu (p) <2 \quad {\rm if \hskip 2truemm and \hskip 2truemm only \hskip 2truemm if}\quad \nu_2 > \left(\frac n2 -1\right)\left(1-\frac 2p\right).
\end{equation}
Also,
\begin{equation}
\widetilde q_{\nu, p} \geq 2q_\nu >2 \quad {\rm for \hskip 2truemm all}\quad 1\leq p \leq \frac {2n}{n-2} \quad {\rm and} \quad \nu_2 >0.
\end{equation}
\vskip 2truemm
1) Suppose first that $2\leq p \leq \frac {2n}{n-2}$ and $\mu=0.$ Then by Theorem 6.7, estimate (40) is satisfied for $s=2.$ We distinguish two cases.\\
{\underline {Case}} 1. We suppose that $\nu_2 > \left(\frac n2 -1\right)\left(1-\frac 2p\right).$ Then by equation (41), we have $q'_\nu (p) <s=2.$ It follows from Theorem 6.6 that $P_\nu$ is bounded in $L^{p, q}_\nu (T_{\mathcal Lambdambda_n})$ if\\
$\bullet
\quad q'_\nu (p) <q\leq 2$ and $q<\widetilde q_{\nu, p}$\\
$\bullet
\quad 2<q <\min \{2q_\nu, \widetilde q_{\nu, p}\}=2q_\nu$\\
and the last equality above follows from equation (42).\\
{\underline {Case}} 2. We suppose that $0<\nu_2 \leq \left(\frac n2 -1\right)\left(1-\frac 2p\right).$ Then by equation (41), we have $q'_\nu (p) \geq s=2.$ It follows from Theorem 6.6 that $P_\nu$ is bounded in $L^{p, q}_\nu (T_{\mathcal Lambdambda_n})$ if
$q'_\nu (p) <q<\min \{2q_\nu, \widetilde q_{\nu, p}\}=2q_\nu$
and the last equality above again follows from equation (42). This proves the assertion (1) of the corollary.
\vskip 2truemm
2) Observe that $p=\frac {2n}{n-2} $ means that $\mu=0.$ We assume then that $p>\frac {2n}{n-2}.$ Notice that
$$p_\nu = 1+\frac {\nu_2 +\frac n2}{(\frac n2 -1-\nu_2)_+}=\left \{
\begin{array}{clcr}
1+\frac {\nu_2 +\frac n2}{\frac n2 -1-\nu_2}&\rm if &0<\nu_2 <\frac n2 -1\\
\infty & &\rm elsewhere
\end{array}
\right.
$$
This suggests two cases: $0<\nu_2 <\frac n2 -1$ and $\nu_2 \geq \frac n2 -1.$ Also, the case $p_\nu \leq \frac {2n}{n-2}$ is irrelevant since we must have $p<p_\nu:$ it would refer to assertion (1) of the corollary. This suggests to replace the first case by $\frac {n-2}{2n} <\nu_2 <\frac n2 -1.$ Furthermore, note that $p>\frac {2n}{n-2}$ is equivalent to $\mu=\frac n2 -1-\frac np >0.$ Also, $p>\frac {2n}{n-2}$ implies that $\frac n{2p'} -1 >\frac {n-2}{4}>0$ and so
$$\widetilde q_{\nu, p} = 2\frac {\nu_2 +\frac n2 -1}{n-2-\frac np}.$$
Moreover, we check that $\widetilde q_{\nu, p} >2$ in the following two cases:
\begin{itemize}
\item
$\frac{n-2}{2n}<\nu_2 <\frac n2 -1$ and $p_\nu >p>\frac {2n}{n-2};$
\item
$\nu_2 \geq \frac n2 -1$ and $p>\frac {2n}{n-2}.$
\end{itemize}
{\underline {Case}} 1. We suppose first that $\frac {n-2}{2n} <\nu_2 <\frac n2 -1, \quad p_\nu >p>\frac {2n}{n-2}$ and $\mu=\frac n2 -1-\frac np.$
By Theorem 6.7, estimate (40) is satisfied for $s=2.$ We check easily that $\frac {n-2}{2n} < (\frac n2 -1)(1-\frac 2p)$ whenever $p>\frac {2n}{n-2}.$ According to equation (41), we distinguish two subcases:
\begin{enumerate}
\item[(i)]
If $\frac {n-2}{2n} <\nu_2 \leq \left(\frac n2 -1\right)\left(1-\frac 2p\right),$ then $\quad q'_\nu (p) \geq s=2.$ It follows from Theorem 6.6 that $P_\nu$ is bounded in $L^{p, q}_\nu (T_{\mathcal Lambdambda_n})$ if
$$\quad q'_\nu (p) < q< \min \left \{2\min \left \{\frac {\nu_1 -\frac n2 +1}{\frac n2 -1-\frac np}, \frac {\nu_2 +\frac n2 -1}{n-2-\frac np}\right \}, \widetilde q_{\nu, p}\right \}=\min \left \{2\frac {\nu_1 - \frac n2 +1}{\frac n2 -1 -\frac np}, \widetilde q_{\nu, p}\right \}.$$
\item[(ii)]
If $\left(\frac n2 -1\right)\left(1-\frac 2p\right) <\nu_2 < \frac n2 -1,$ then $\quad q'_\nu (p) < s=2.$ It follows from Theorem 6.6 that $P_\nu$ is bounded in $L^{p, q}_\nu (T_{\mathcal Lambdambda_n})$ if\\
$\bullet
\quad q'_\nu (p) <q\leq 2$ and $q<\min \left \{2\min \left \{\frac {\nu_1 -\frac n2 +1}{\frac n2 -1-\frac np}, \frac {\nu_2 }{\frac n2-1-\frac np}\right \}, \widetilde q_{\nu, p}\right \}=\min \left \{2\frac {\nu_1 - \frac n2 +1}{\frac n2 -1 -\frac np}, \widetilde q_{\nu, p}\right \}$\\
$\bullet
\quad 2<q <\min \left \{2\min \left \{\frac {\nu_1 -\frac n2 +1}{\frac n2 -1-\frac np}, \frac {\nu_2 +\frac n2 -1}{n-2-\frac np}\right \}, \widetilde q_{\nu, p}\right \}=\min \left \{2\frac {\nu_1 - \frac n2 +1}{\frac n2 -1 -\frac np}, \widetilde q_{\nu, p}\right \}.$\\
\end{enumerate}
This proves the assertion (2) of the corollary.\\
{\underline {Case}} 2. We suppose that $\nu_2 \geq \frac n2 -1, \quad p_\nu =\infty >p>\frac {2n}{n-2}$ and $\mu=\frac n2 -1-\frac np.$
By Theorem 6.7, estimate (40) is satisfied for $s=2.$ In this case, $q'_\nu (p) <s=2$ since $\nu_2 \geq \frac n2 -1.$ It follows from Theorem 6.6 that $P_\nu$ is bounded in $L^{p, q}_\nu (T_{\mathcal Lambdambda_n})$ if\\
$\bullet
\quad q'_\nu (p) <q\leq 2$ and $q<\min \left \{2\min \left \{\frac {\nu_1 -\frac n2 +1}{\frac n2 -1-\frac np}, \frac {\nu_2 }{\frac n2-1-\frac np}\right \}, \widetilde q_{\nu, p}\right \}=\min \left \{2\frac {\nu_1 - \frac n2 +1}{\frac n2 -1 -\frac np}, \widetilde q_{\nu, p}\right \}$\\
$\bullet
\quad 2<q <\min \left \{2\min \left \{\frac {\nu_1 -\frac n2 +1}{\frac n2 -1-\frac np}, \frac {\nu_2 +\frac n2 -1}{n-2-\frac np}\right \}, \widetilde q_{\nu, p}\right \}=\min \left \{2\frac {\nu_1 - \frac n2 +1}{\frac n2 -1 -\frac np}, \widetilde q_{\nu, p}\right \}.$
This proves the assertion (3) of the corollary.
\end{proof}
Assertion (1) of the previous corollary is just assertion (2) of Theorem 2.3. Assertions (1) and (2) of Theorem 2.3 are particular cases of \cite[Corollary 1.4]{DD1} for tube domains over Lorentz cones with $\mu=\nu.$ For assertions (3) and (4), for $p_\nu >p>\frac {2n}{n-2},$ we obtain by interpolation the following result which is sharper than assertions (2) and (3) of the previous corollary.
\begin{cor}
Let $n\geq 3$ and $\nu=(\nu_1, \nu_2)\in \mathbb R^2$ such that $\nu_1 >\frac n2 -1, \hskip 2truemm \nu_2 >0.$ The weighted Bergman projector $P_\nu$ is bounded in $L^{p, q}_\nu (T_{\mathcal Lambdambda_n})$ for the following values of $p, q$ and $\nu.$
\begin{enumerate}
\item
$\frac {n-1}{\frac n2 -1 - \nu_2} > p > \frac {2n}{n-2}$ and $\quad 2<q< \widetilde q_{\nu, p}$ provided
$\frac {n-2}{2n}<\nu_2 <\frac n2 -1;$
\item
$ p > \frac {2n}{n-2}$ and $\quad 2<q< \widetilde q_{\nu, p}$ provided $\nu_2 \geq \frac n2 -1.$
\end{enumerate}
\end{cor}
\begin{proof}
\hskip 2truemm The situation is represented in Figure \ref{fig6} (Figure 1.1 of [2]) . From assertion (1) of Theorem 2.3,
we obtain that $P_\nu$ is bounded in $L^{p, q}_\nu (T_{\mathcal Lambdambda_n})$ if
$$0\leq \frac 1p \leq 1 \quad {\rm and} \quad \frac 1{q_\nu}<\frac 1q<\frac 1{q'_\nu}.$$
Combining with assertion (1) of Corollary 6.8, we deduce by interpolation that $P_\nu$ is bounded in $L^{p, q}_\nu
(T_{\mathcal Lambdambda_n})$ for
$$\frac 1{p_\nu} < \frac 1p < \frac {n-2}{2n} \quad {\rm and} \quad \frac {n-2}{2n}<\nu_2 <\frac n2 -1$$
$$({\rm resp.} \quad \frac 1p < \frac {n-2}{2n} \quad {\rm and} \quad \nu_2 \geq \frac n2 -1),$$
if the couple $(\frac 1p, \frac 1q)$ lies in the triangle given by the inequalities
\begin{equation}\lambdabel{ineq}
y>\frac {n-2}n (-q_\nu x +1), \quad \frac 1{2q_\nu} <x<\frac 1{q_\nu}.
\end{equation}
Recall that for such values of $p,$ we have $\frac n{2p'} -1>0$ and so $\widetilde q_{\nu, p} =
2\frac {\nu_2 +\frac n2 -1}{n-2-\frac np}.$ It is now easy to conclude that the first inequality in (\ref{ineq})
can be written in the form
$$q < \widetilde q_{\nu, p}.$$
\end{proof}
\begin{remark}
{\rm According to Theorem 2.3, the conjecture stated in the introduction of [2] for $\nu_1=\nu_2$
is valid for tube domains over Lorentz cones. More precisely, the weighted Bergman projector $P_\nu$ is
bounded in $L^{p, q}_\nu$ when the couple $(\frac 1p, \frac 1q)$ lies in the blank region of Figure 1.1 of [2]
depicted below. This result has been proved in the blue region in [4] and in the red region in [2] for $\nu_1=\nu_2$.
The result in the blank region is given by Theorem 2.3. In particular, the case $\nu_1=\nu_2=\frac n2$ and $p=q$ in
Theorem 2.3 corresponds to} \cite[Theorem 1.2]{BoNa}.
\end{remark}
\vskip 60truemm
\begin{figure}
\caption{Region of boundedness of $P_\nu$ for Lorentz cones and $\nu_1=\nu_2$}
\label{fig6}
\end{figure}
Finally, the proof of Theorem 2.4 is just a combination of Theorem 2.3 for $n=3$ and Theorem 2.1 for the Pyateckii-Shapiro domain.
\vskip 2truemm
\noindent
\textbf{Acknowledgements.} The authors wish to express their gratitude to Aline Bonami and Gustavo Garrig\'os for valuable discussions.
\end{document} |
\begin{document}
\title{On the maximum CEI of graphs with parameters}
\author{Fazal Hayat\footnote{E-mail: [email protected]}\\
School of Mathematical Sciences, South China Normal University, \\
Guangzhou 510631, PR China}
\date{}
\maketitle
\begin{abstract}
The connective eccentricity index (CEI) of a graph $G$ is defined as $\xi^{ce}(G)=\sum_{v \in V(G)}\frac{d_G(v)}{\varepsilon_G(v)}$, where $d_G(v)$ is the degree of $v$ and $\varepsilon_G(v)$ is the eccentricity of $v$. In this paper, we characterize the unique graphs with maximum CEI from three classes of graphs: the $n$-vertex graphs with fixed connectivity and diameter, the $n$-vertex graphs with fixed connectivity and independence number, and the $n$-vertex graphs with fixed connectivity and minimum degree. \\ \\
{\bf Key Words}: connective eccentricity index, connectivity, diameter, minimum degree, independence number.\\\\
{\bf 2010 Mathematics Subject Classification:} 05C07; 05C12; 05C35
\end{abstract}
\section{Introduction}
All graphs considered in this paper are simple and connected. Let $G$ be a graph on $n$ vertices with vertex set $V(G)$ and edge set $E(G)$. For $v \in V(G)$, let $N_G(v)$ be the set of all neighbors of $v$ in $G$. The degree of $v \in V(G)$, denoted by $d_G(v)$, is the cardinality of $N_G(v)$. The maximum and minimum degree of $G$ are denoted by $\Delta(G)$ and $\delta(G)$, respectively. The graph formed from $G$ by deleting any vertex $v \in V(G)$ (resp. edge $uv \in E(G)$) is denoted by $G-v$ (resp. $G-uv$). Similarly, the graph formed from $G$ by adding an edge $uv$ is denoted by $G+uv$, where $u$ and $v$ are non-adjacent vertices of $G$. For a vertex subset $A$ of $V(G)$, denote by $G[A]$ the subgraph induced by $A$. The distance between vertices $u$ and $v$ of $G$, denoted by $d_G(u,v)$, is the length of a shortest path connecting $u$ and $v$ in $G$. For $v\in V(G)$, the eccentricity of $v$ in $G$, denoted by $\varepsilon_G(v)$, is the maximum distance from $v$ to all other vertices of $G$. The diameter of a graph $G$ is the maximum eccentricity of all vertices in $G$. As usual, by $S_n$ and $P_n$ we denote the star and path on $n$ vertices, respectively.
A subset $A$ of $V(G)$ is called a vertex cut of $G$ if $G-A$ is disconnected. The minimum size of a set $A$ such that $G-A$ is disconnected or has exactly one vertex is called the connectivity of $G$, and is denoted by $k(G)$. A graph is $k$-connected if its connectivity is at least $k$.
A subset $I$ of $V(G)$ is called an independent set of $G$ if $I$ consists of pairwise non-adjacent vertices. The independence number of $G$, denoted by $\alpha(G)$, is the maximum cardinality of an independent set of $G$.
The join of two disjoint graphs $M_1$ and $M_2$, denoted by $M_1 \vee M_2$, is the graph formed from $M_1 \cup M_2$ by adding the edges $\{e=xy : x\in V(M_1), y\in V(M_2) \}$. Let $M_1 \vee M_2 \vee M_3 \vee \dots \vee M_t=(M_1\vee M_2)\cup (M_2\vee M_3)\cup \dots \cup (M_{t-1}\vee M_t)$. The sequential join of $t$ disjoint copies of a graph $G$ is denoted by $[t]G$, and the union of $m$ disjoint copies of a graph $G$ is denoted by $mG$.
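To illustrate the notation with a small example, $[3]K_1=K_1\vee K_1\vee K_1\cong P_3$ and, more generally, $[t]K_1\cong P_t$, while $[2]K_k=K_k\vee K_k\cong K_{2k}$.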
Topological indices are numbers reflecting certain structural features of a molecule that are derived from its molecular graph. They are used in theoretical chemistry for the design of chemical compounds with given physicochemical properties or given pharmacological and biological activities.
The eccentric connectivity index of a graph $G$ is a topological index based on degree and eccentricity, defined as
\[
\xi^c (G)=\sum_{v \in V(G)}d_G(v)\varepsilon_G(v).
\]
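For instance, every vertex of $K_n$ has degree $n-1$ and eccentricity $1$, so $\xi^c(K_n)=n(n-1)$, while for the star $S_n$ with $n\geq 3$ one gets $\xi^c(S_n)=(n-1)\cdot 1+(n-1)\cdot 1\cdot 2=3(n-1)$; we record these elementary values only to illustrate the definition.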
It has been studied extensively, see \cite{1,2,3,4,6}.
Gupta et al. in 2000 \cite{GSM}, proposed a new topological index involving degree and eccentricity called the connective eccentricity index, defined as
\[
\xi^{ce}(G)=\sum_{v \in V(G)}\frac{d_G(v)}{\varepsilon_G(v)}.
\]
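As a quick illustration of the definition (the two values below are immediate and serve only for orientation), every vertex of $K_n$ has degree $n-1$ and eccentricity $1$, while in $S_n$ ($n\geq 3$) the center has degree $n-1$ and eccentricity $1$ and each of the $n-1$ leaves has degree $1$ and eccentricity $2$; hence
\[
\xi^{ce}(K_n)=n(n-1), \qquad \xi^{ce}(S_n)=\frac{n-1}{1}+(n-1)\cdot\frac{1}{2}=\frac{3(n-1)}{2}.
\]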
In experiments on the treatment of hypertension with chemical compounds such as non-peptide N-benzylimidazole derivatives, the results obtained using the connective eccentricity index were better than the corresponding values obtained using Balaban's mean square distance index. Therefore it is worth studying the mathematical properties of the connective eccentricity index.
Mathematical properties of the connective eccentricity index have been studied extensively for trees, unicyclic graphs and general graphs. In particular, Yu and Feng \cite {YF} obtained upper or lower bounds for the connective eccentricity index of graphs in terms of many graph parameters such as radius, maximum degree,
independence number, vertex connectivity, minimum degree, number of pendant vertices and number of cut edges. Li and Zhao \cite {LZ} studied the extremal properties of the connective eccentricity index among $n$-vertex trees with given graph parameters such as the number of pendant vertices, matching number, domination number, diameter, and vertex bipartition. Xu et al. \cite {XDL} characterized the extremal graphs for the connective eccentricity index among all connected graphs with fixed order and fixed matching number.
For more studies on the connective eccentricity index of graphs we refer to \cite{LLZ, TWLF, YQTF} and the references cited therein.
In the present paper, as a continuation of this line of research, we mainly study the mathematical properties of the connective eccentricity index of graphs in terms of various graph invariants. First, we determine the graph which attains the maximum CEI among $n$-vertex graphs with fixed connectivity and diameter, then we identify the unique graph with given connectivity and independence number having the maximum CEI. Finally, we characterize the graphs with maximum CEI among $n$-vertex graphs with fixed connectivity and minimum degree.
\begin{lemma}\cite{YF}\label{L1}
Let $G$ be a graph with a pair of non-adjacent vertices $u, v$. Then $\xi ^{ce}(G) < \xi ^{ce}(G+uv)$.
\end{lemma}
\section{Results}
Let $\mathbb{G}_k(n,d)$ be the class of all $k$-connected graphs of order $n$ with diameter $d$. If $d=1$, then $K_n$ is the unique graph in $\mathbb{G}_k(n,1)$. Therefore, we consider $d\geq 2$ in what follows.
Denote by
$
G(n,k,d)= K_1 \vee [(d-2)/2]K_k \vee K_{n-kd+2k-2} \vee [(d-2)/2]K_k \vee K_1
$
for even $d \geq 4$, and let
$\mathcal{H}(n,k,d)$ be the set of graphs of the form $K_1 \vee [(d-3)/2]K_k \vee K_{s+1} \vee K_{t+1} \vee [(d-3)/2]K_k \vee K_1$, where $s, t \geq k-1$ and $s+ t= n-kd+3k-4$ for odd $d \geq 3$.
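(As a quick check of the parameters, $G(n,k,d)$ has $2+k(d-2)+(n-kd+2k-2)=n$ vertices and its two end copies of $K_1$ are at distance $d$, while every graph in $\mathcal{H}(n,k,d)$ has $2+k(d-3)+(s+1)+(t+1)=n$ vertices.)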
\begin{theorem}\label{T1}
Let $G$ be a graph in $\mathbb{G}_k(n,d)$ with maximum CEI, where $d\geq 3$. Then $G \cong G(n,k,d)$ if $d$ is even, and $G \in \mathcal{H}(n,k,d)$ otherwise.
\end{theorem}
\begin{proof}
Let $G \in \mathbb{G}_k(n,d)$ such that $\xi ^{ce}(G)$ is as large as possible. Let $P:= u_0u_1 \dots u_d$ be a diametral path in $G$. Let $A_i= \{v\in V(G) : d_G(v, u_0)=i\}$. Then $|A_0|=1$ and $A_0\cup A_1\cup \dots \cup A_d$ is a partition of $V(G)$.
Since $G$ is a $k$-connected graph, we have $|A_i| \geq k $ for $i\in \{1,2, \dots , d-1\}$.
By Lemma \ref{L1}, $G[A_i]$ and $G[A_{i-1}\cup A_i]$ are complete graphs for $i\in \{1, \dots , d\}$; indeed, adding any missing edge inside $A_i$ or between $A_{i-1}$ and $A_i$ keeps $G$ in $\mathbb{G}_k(n,d)$ and strictly increases the CEI, contradicting the choice of $G$.
We claim that $|A_d|=1$; otherwise, choose a vertex $v\in A_d\setminus \{u_d\}$ and let $G^\ast =G+\{vx : x\in A_{d-2}\}$. Clearly, $G^\ast \in \mathbb{G}_k(n,d)$. By Lemma \ref{L1}, $\xi ^{ce}(G) < \xi ^{ce}(G^\ast)$, a contradiction. So $|A_d|=1$.
Thus, we have $|A_0|=|A_d|=1$, and $|A_i|\ge k$ for $i\in \{1, \dots, d-1\}$.
\noindent {\bf Case 1}. $d$ is even with $d \geq 4$. We show that $|A_1|= |A_2|=\dots = |A_{\frac{d}{2}-1}|= |A_{\frac{d}{2}+1}| =\dots = |A_{d-1}|=k $ and $|A_{\frac{d}{2}}|= n-kd+2k-2$.
First, we claim that $|A_1|=|A_{d-1}|= k$. Suppose that $|A_1|\geq k+1$. Then we choose $w\in A_1\setminus \{u_1\}$ and let $G' =G-wu_0+\{wx : x\in A_3\}$. Clearly, $A_0\cup (A_1\setminus \{w\})\cup (A_2\cup \{w\}) \cup A_3\cup \dots \cup A_d$ is a partition of $ V(G')$. From the construction of $G'$, we have $d_G(v) = d_{G'}(v), \varepsilon_G(v) = \varepsilon_{G'}(v)$ for all $v \in V(G)\setminus (A_3\cup \{u_0,w\})$. Moreover,
\begin{eqnarray*}
d_G(u_0) &=& d_{G'}(u_0)+1, \\
\varepsilon_G(u_0) &=& \varepsilon_{G'}(u_0)=d, \\
d_G(w) &=& d_{G'}(w)+1-|A_3|, \\
\varepsilon_G(w) &>& \varepsilon_{G'}(w) , \\
d_G(x) &=& d_{G'}(x)-1, \\
\varepsilon_G(x) &=& \varepsilon_{G'}(x)< d \hbox{ for all $x \in A_3$}.
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G) - \xi^{ce}(G') &=& \frac{d_G(u_0)}{\varepsilon_G(u_0)} - \frac{d_{G'}(u_0)}{\varepsilon_{G'}(u_0)}
+\frac{d_G(w)}{\varepsilon_G(w)}- \frac{d_{G'}(w)}{\varepsilon_{G'}(w)}\\
&&+ \sum_{x \in A_3}\left( \frac{d_G(x)}{\varepsilon_G(x)}-
\frac{d_{G'}(x)}{\varepsilon_{G'}(x)}\right)\\
&&+ \sum_{v \in V(G)\setminus (A_3\cup \{u_0,w\})}\left( \frac{d_G(v)}{\varepsilon_G(v)}-
\frac{d_{G'}(v)}{\varepsilon_{G'}(v)}\right)\\
&=& \frac{1}{d} +\frac{d_G(w)}{\varepsilon_G(w)}- \frac{d_{G'}(w)}{\varepsilon_{G'}(w)}+
|A_3| \left(\frac{-1}{\varepsilon_{G}(x)}\right)\\
&<& \frac{1}{d} +\frac{1-|A_3|}{\varepsilon_{G'}(w)}- \frac{|A_3|}{\varepsilon_{G}(x)}\\
&\leq& \frac{1}{d}- \frac{|A_3|}{\varepsilon_{G}(x)}\\
&<& 0,
\end{eqnarray*}
where the last inequality follows from the fact that $\varepsilon_{G}(x) < d $ and $|A_3|\ge k\ge 1$. Thus, $\xi^{ce}(G) < \xi^{ce}(G')$, a contradiction to the choice of $G$. Therefore, $|A_1|= k$. Similarly, $|A_{d-1}|=k$, as claimed.
By similar argument as above we may also show that $|A_2|=|A_{d-2}|=k$, \dots, $|A_{\frac{d}{2}-1}|= |A_{\frac{d}{2}+1}|=k$.
Then we have $|A_{\frac{d}{2}}|= n-kd+2k-2$.
Therefore, $G \cong G(n,k,d)$.
\noindent {\bf Case 2.} $d$ is odd with $d \geq 3$. By a similar argument as in Case 1, we have $|A_1|= |A_{d-1}|=k$, $|A_2|=|A_{d-2}|=k$, \dots, $|A_{\frac{d-3}{2}}|= |A_{\frac{d+3}{2}}| =k$. It follows that $|A_{\frac{d-1}{2}}|+|A_{\frac{d+1}{2}}|= n-kd+3k-2$. Therefore, $G \in \mathcal{H}(n,k,d)$. Now
we only need to show that all graphs in $\mathcal{H}(n,k,d)$ have equal CEI. Let $G_1= K_1 \vee [(d-3)/2]K_k \vee K_{z+1} \vee K_k \vee [(d-3)/2]K_k \vee K_1$, where $z= n-kd+2k-3$; this is the graph in $\mathcal{H}(n,k,d)$ obtained by taking $t=k-1$. For a graph $G_2= K_1 \vee [(d-3)/2]K_k \vee K_{s+1} \vee K_{t+1} \vee [(d-3)/2]K_k \vee K_1$, we assume its vertex partition $A_0\cup A_1\cup \dots \cup A_d$ is defined as above.
If one of $s,t$ is $k-1$, then $G_2\cong G_1$ and hence $\xi ^{ce}(G_2) = \xi ^{ce}(G_1)$. Suppose that $s,t \geq k$. Let $M\subseteq A_{\frac{d+1}{2}} \setminus \{u_{\frac{d+1}{2}}\}$ and $|M|=t-k+1$. Now we obtain $G_1$ from $G_2$ by the following graph transformation:
\[
G_1=G_2-\{xy : x \in M, y \in A_{\frac{d+3}{2}} \} + \{xy : x \in M, y \in A_{\frac{d-3}{2}} \}.
\]
Then, it is easy to see that $A_0\cup A_1\cup \dots \cup A_{\frac{d-3}{2}} \cup ( A_{\frac{d-1}{2}}\cup M)\cup ( A_{\frac{d+1}{2}}\setminus M)\cup A_{\frac{d+3}{2}} \cup \dots \cup A_d$ is a partition of $V(G_1)$.
From the construction of $G_1$, we have $\varepsilon_{G_1}(v) = \varepsilon_{G_2}(v)$ for all $v \in V(G_2)$ and $d_{G_1}(v) = d_{G_2}(v)$ for all $v \in V(G_2) \setminus (A_{\frac{d+3}{2}} \cup A_{\frac{d-3}{2}})$, it follows that
\begin{eqnarray*}
d_{G_2}(x) &=& d_{G_1}(x)+t-k+1 \hbox{ for each $x \in A_{\frac{d+3}{2}} $}, \\
d_{G_2}(x) &=& d_{G_1}(x)-(t-k+1)\hbox{ for each $x \in A_{\frac{d-3}{2}} $}.
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G_2) - \xi^{ce}(G_1) &=& \sum_{x \in A_{\frac{d+3}{2}}}\left( \frac{d_{G_2}(x)}{\varepsilon_{G_2}(x)}
-\frac {d_{G_1}(x)}{\varepsilon_{G_1}(x)}\right)\\
&&+ \sum_{x \in A_{\frac{d-3}{2}}}\left( \frac{d_{G_2}(x)}{\varepsilon_{G_2}(x)}- \frac{d_{G_1}(x)}{\varepsilon_{G_1}(x)}\right)\\
&&+ \sum_{v \in V(G_2) \setminus (A_{\frac{d+3}{2}} \cup A_{\frac{d-3}{2}})}\left( \frac{d_{G_2}(v)}{\varepsilon_{G_2}(v)}-
\frac{d_{G_1}(v)}{\varepsilon_{G_1}(v)}\right)\\
&=& k\left( \frac{t-k+1}{\varepsilon_{G_2}(x)}
-\frac {t-k+1}{\varepsilon_{G_2}(x)}\right)\\
&=&0.
\end{eqnarray*}
Thus, $\xi^{ce}(G_2) = \xi^{ce}(G_1)$. This completes the proof.
\end{proof}
Let $\mathbb{G}_k(n,\alpha)$ be the class of all $k$-connected graphs of order $n$ with independence number $\alpha$. If $\alpha=1$, then $K_n$ is the unique graph in $\mathbb{G}_k(n,1)$ with maximum CEI. Therefore, we consider $\alpha\geq 2$ in what follows. Let
\[
S_{n,\alpha }= K_k \vee (K_1 \cup ( K_{n-k-\alpha} \vee (\alpha -1)K_1)).
\]
Obviously, $S_{n,\alpha } \in \mathbb{G}_k(n,\alpha)$.
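For illustration, assuming $n\geq k+\alpha+1$ (so that $K_{n-k-\alpha}$ is non-empty), the CEI of $S_{n,\alpha }$ can be computed directly from its structure: the $k$ vertices of $K_k$ have degree $n-1$ and eccentricity $1$, the vertex of the isolated $K_1$ has degree $k$ and eccentricity $2$, the vertices of $K_{n-k-\alpha}$ have degree $n-2$ and eccentricity $2$, and the remaining $\alpha-1$ vertices have degree $n-\alpha$ and eccentricity $2$, so that
\[
\xi^{ce}(S_{n,\alpha })= k(n-1)+\frac{k+(n-k-\alpha)(n-2)+(\alpha-1)(n-\alpha)}{2}.
\]
In particular, for fixed $k$ and $\alpha$, this value grows quadratically in $n$.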
\begin{theorem}
Among all graphs in $\mathbb{G}_k(n,\alpha)$ with $\alpha\geq 2$, $S_{n,\alpha }$ is the unique graph with maximum CEI.
\end{theorem}
\begin{proof}
Note that since $G \in \mathbb{G}_k(n,\alpha)$, we have $k+\alpha \le n$. If $k+\alpha= n$ then $G \cong K_k \vee \alpha K_1$, and the result holds in this case. Therefore, we consider the case $k+\alpha +1 \le n$ in what follows.
Let $G \in \mathbb{G}_k(n,\alpha)$ be such that $\xi ^{ce}(G)$ is as large as possible. Let $I$ be a maximum independent set of $G$ and let $A$ be a vertex cut of $G$, with $|I|=\alpha$ and $|A|=k$. Let $G_1, G_2, \dots ,G_s$ be the components of $G-A$, where $s\geq 2$. Assume that $|G_1|\geq |G_2|\geq \dots \geq|G_s|$. We claim that $G_1$ is non-trivial; otherwise, $G_i$ is trivial for every $i\in \{1,2, \dots , s\}$, and the independence number of $G$ is at least $n-k ~(\geq \alpha +1)$, a contradiction. So, $G_1$ is non-trivial. Let $|A\cap I|=a$, $|A\setminus I|=b$ and $|V(G_i)\cap I|=n_i$, $|V(G_i)\setminus I|=m_i$ for $i\in \{1,2, \dots , s\}$. Obviously, $k=a+b$ and $|V(G_i)|=n_i+m_i$ for $i\in \{1,2, \dots , s\}$. We proceed with the following claims.
\noindent {\bf Claim 1.} $G-A$ contains exactly two components, i.e., $s=2.$
\noindent {\bf Proof of Claim 1.}
Suppose that $s\geq 3$. Since $G_1$ is non-trivial, we have $V(G_1)\setminus I \neq \emptyset$. Then choose $u \in V(G_1)\setminus I$ and $v \in V(G_2)$. Let $H=G+uv$. Clearly, $H \in \mathbb{G}_k(n,\alpha)$. By Lemma \ref{L1}, we have $\xi ^{ce}(G) < \xi ^{ce}(H)$, a contradiction. So, $s=2.$
\noindent {\bf Claim 2.} $G[A] \cong K_b \vee aK_1$, $G_i \cong K_{m_i}\vee n_iK_1$ and $G[V(G_i)\cup A] \cong K_{b+m_i}\vee (a+n_i)K_1$ for $i=1,2$.
\noindent {\bf Proof of Claim 2.}
First, we show that $G[A] \cong K_b \vee aK_1$. Suppose that $G[A] \ncong K_b \vee aK_1$. Then there exist two non-adjacent vertices $u,v$ with either $u,v \in A \setminus I$, or $u \in A \setminus I$ and $v \in A\cap I$. Let $Q=G+uv$. Clearly, $Q \in \mathbb{G}_k(n,\alpha)$. By Lemma \ref{L1}, we have $\xi ^{ce}(G) < \xi ^{ce}(Q)$, a contradiction. So, $G[A] \cong K_b \vee aK_1$. By similar techniques we can show that $G_i \cong K_{m_i}\vee n_iK_1$ and $G[V(G_i)\cup A] \cong K_{b+m_i}\vee (a+n_i)K_1$ for $i=1,2$.
\noindent {\bf Claim 3.} $G_2$ is trivial.
\noindent {\bf Proof of Claim 3.}
Suppose that $G_2$ is non-trivial. Then we have the following two possible cases.
\noindent {\bf Case 1.} $n_2=0$.
If $a=0$, then $I=V(G_1)\cap I$. Choosing $w \in V(G_2)\setminus I$, we get that $I\cup \{w\}$ is an independent set with $|I\cup \{w\}|=\alpha +1$, a contradiction. So, $a \geq 1$.
Choose $w \in V(G_2)$ and let $G' =G-\{wx : x \in V(G_2)\setminus \{w\}\}+ \{xy : x \in V(G_2)\setminus \{w\}, y\in V(G_1)\}$.
Clearly, $G' \in \mathbb{G}_k(n,\alpha)$. From the construction of $G'$, we have $\varepsilon_G(v) = \varepsilon_{G'}(v)=1$ for each $v \in A \setminus I$ and $\varepsilon_G(v) = \varepsilon_{G'}(v)=2$ for each $v \in V(G)\setminus (A \setminus I)$. Moreover,
\begin{eqnarray*}
d_G(w) &=& d_{G'}(w)+m_2-1, \\
\varepsilon_G(w) &=& \varepsilon_{G'}(w)=2,\\
d_G(x) &=& d_{G'}(x)+1- n_1- m_1 \hbox{ for all $x \in V(G_2)\setminus \{w\}$}, \\
\varepsilon_G(x) &=& \varepsilon_{G'}(x)=2 \hbox{ for all $x \in V(G_2)\setminus \{w\}$}, \\
d_G(x) &=& d_{G'}(x)+1- m_2 \hbox{ for all $x \in V(G_1)$},\\
\varepsilon_G(x) &=& \varepsilon_{G'}(x)=2 \hbox{ for all $x \in V(G_1)$},\\
d_G(x) &=& d_{G'}(x) \hbox{ for all $x \in A$}.
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G) - \xi^{ce}(G') &=& \frac{d_G(w)}{\varepsilon_G(w)} - \frac{d_{G'}(w)}{\varepsilon_{G'}(w)}
+\sum_{x \in V(G_2)\setminus \{w\}}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G'}(x)}{\varepsilon_{G'}(x)}\right)\\
&&+ \sum_{x \in V(G_1)}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G'}(x)}{\varepsilon_{G'}(x)}\right)\\
&&+ \sum_{x \in A}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G'}(x)}{\varepsilon_{G'}(x)}\right)\\
&=& \frac{1}{2}\left\{(m_2-1)+(m_2-1)(1-n_1-m_1)+(n_1+m_1)(1-m_2)\right\}\\
&=& (m_2-1)(1-n_1-m_1)\\
&<&0,
\end{eqnarray*}
where the last inequality holds because $m_2\geq 2$ (as $G_2$ is non-trivial and $n_2=0$) and $n_1+m_1\geq 2$ (as $G_1$ is non-trivial), a contradiction.
\noindent {\bf Case 2.} $n_2 \neq 0$.
Choose $w \in V(G_2)\cap I$. Let
$G'' =G-\{wx : x \in V(G_2)\}+ \{xy : x \in V(G_1) \cap I , y\in V(G_2)\setminus I\}+ \{xy : x \in V(G_1) \setminus I , y \in V(G_2)\setminus \{w\}\}
$.
Clearly, $G'' \in \mathbb{G}_k(n,\alpha)$. From the construction of $G''$, we have $\varepsilon_G(v) = \varepsilon_{G''}(v)$ for all $v \in V(G)$. Moreover,
\begin{eqnarray*}
d_G(v) &=& d_{G''}(v) \hbox{ for all $v \in A$}, \\
d_G(w) &=& d_{G''}(w)+m_2,\\
d_G(x) &=& d_{G''}(x)+1-n_1-m_1 \hbox{ for all $x \in V(G_2)\setminus I$}, \\
d_G(x) &=& d_{G''}(x)-m_1 \hbox{ for all $x \in (V(G_2)\cap I)\setminus \{w\}$}, \\
d_G(x) &=& d_{G''}(x)-m_2 \hbox{ for all $x \in V(G_1)\cap I$}, \\
d_G(x) &=& d_{G''}(x)+1-n_2-m_2 \hbox{ for all $x \in V(G_1)\setminus I$}. \\
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G) - \xi^{ce}(G'') &=& \frac{d_G(w)}{\varepsilon_G(w)} - \frac{d_{G''}(w)}{\varepsilon_{G''}(w)}+
\sum_{x \in V(G_2)\setminus I}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G''}(x)}{\varepsilon_{G''}(x)}\right)\\
&&+ \sum_{x \in (V(G_2)\cap I)\setminus \{w\}}\left( \frac{d_G(x)}{\varepsilon_G(x)}-
\frac{d_{G''}(x)} {\varepsilon_{G''}(x)}\right)\\
&&+ \sum_{x \in V(G_1)\cap I}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G''}(x)}{\varepsilon_{G''}(x)}\right)\\
&&+ \sum_{x \in V(G_1)\setminus I}\left( \frac{d_G(x)}{\varepsilon_G(x)}-\frac{d_{G''}(x)}{\varepsilon_{G''}(x)}\right)\\
&&+ \sum_{v \in A}\left( \frac{d_G(v)}{\varepsilon_G(v)}-\frac{d_{G''}(v)}{\varepsilon_{G''}(v)}\right)\\
&=& \frac{1}{\varepsilon_G(v)} \{ m_2-m_2(n_1+m_1-1)-m_1(n_2-1)\\
&& -\, n_1m_2 - m_1(n_2+m_2-1) \}\\
&<&0,
\end{eqnarray*}
a contradiction. So, $G_2$ is trivial.
\noindent {\bf Claim 4.} $V(G_2) \subseteq I$.
\noindent {\bf Proof of Claim 4.}
Suppose that $V(G_2) \nsubseteq I$; since $G_2$ is trivial, this means that the unique vertex of $G_2$ does not belong to $I$. We first show that $a \geq 2$. Suppose to the contrary that $a\leq 1$. If $a=1$, write $A\cap I =\{w_1\}$ and let
$G^\ast =G+\{w_1x : x \in V(G_1)\cap I\}$.
Clearly, $G^\ast \in \mathbb{G}_k(n,\alpha)$. By Lemma \ref{L1}, we have $\xi ^{ce}(G) < \xi ^{ce}(G^\ast)$, a contradiction. If $a=0$, then $I\cup V(G_2)$ is an independent set of $G$ such that $|I\cup V(G_2)|=\alpha +1$, a contradiction. So, $a \geq 2$.
Since $G_1$ is non-trivial, we have $V(G_1)\setminus I \neq \emptyset $. Choose $w_1 \in A\cap I$ and $w_2 \in V(G_1)\setminus I$, and write $V(G_2)=\{v\}$. Let
$G^{\ast\ast} =G-w_1v+ w_2v$.
Clearly, $G^{\ast\ast} \in \mathbb{G}_k(n,\alpha)$.
From the construction of $G^{\ast\ast}$, we have $\varepsilon_G(x) = \varepsilon_{G^{\ast\ast}}(x)=1$ for all $x \in A\setminus I$, $\varepsilon_G(x) = \varepsilon_{G^{\ast\ast}}(x)=2$ for all $x \in (A\cap I)\cup(V(G_1)\setminus \{w_2\})\cup\{v\}$, and $d_G(x) = d_{G^{\ast\ast}}(x)$ for all $x \in V(G)\setminus \{w_1,w_2\}$. Moreover,
\begin{eqnarray*}
d_G(w_1) &=& d_{G^{\ast\ast}}(w_1)+1, \\
\varepsilon_G(w_1) &=& \varepsilon_{G^{\ast\ast}}(w_1)=2,\\
d_G(w_2) &=& d_{G^{\ast\ast}}(w_2)-1, \\
\varepsilon_G(w_2)=2 &>& \varepsilon_{G^{\ast\ast}}(w_2)=1. \\
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G) - \xi^{ce}(G^{\ast\ast}) &=& \frac{d_G(w_1)}{\varepsilon_G(w_1)} - \frac{d_{G^{\ast\ast}}(w_1)}{\varepsilon_{G^{\ast\ast}}(w_1)}
+\frac{d_G(w_2)}{\varepsilon_G(w_2)}- \frac{d_{G^{\ast\ast}}(w_2)}{\varepsilon_{G^{\ast\ast}}(w_2)}\\
&&+ \sum_{x \in V(G)\setminus \{w_1,w_2\}}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G^{\ast\ast}}(x)}{\varepsilon_{G^{\ast\ast}}(x)}\right)\\
&=& -\frac{1+d_G(w_2)}{2}\\
&<&0,
\end{eqnarray*}
a contradiction. So, $V(G_2) \subseteq I$. From Claims 1--4, we have $G \cong S_{n,\alpha }$. This completes the proof.
\end{proof}
Let $\mathbb{G}_k(n,\delta)$ be the class of all $k$-connected graphs of order $n$ with minimum degree at least $\delta$. Let
\[
M_{n,\delta }= K_k \vee (K_{\delta-k+1} \cup K_{n-\delta-1}).
\]
Obviously, $M_{n,\delta } \in \mathbb{G}_k(n,\delta)$.
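For illustration, assuming $\delta\geq k$ and $n\geq \delta+2$ (so that both cliques $K_{\delta-k+1}$ and $K_{n-\delta-1}$ are non-empty), the CEI of $M_{n,\delta }$ can be computed directly: the $k$ vertices of $K_k$ have degree $n-1$ and eccentricity $1$, the vertices of $K_{\delta-k+1}$ have degree $\delta$ and eccentricity $2$, and the vertices of $K_{n-\delta-1}$ have degree $n+k-\delta-2$ and eccentricity $2$, so that
\[
\xi^{ce}(M_{n,\delta })= k(n-1)+\frac{(\delta-k+1)\delta+(n-\delta-1)(n+k-\delta-2)}{2}.
\]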
\begin{theorem}
Among all graphs in $\mathbb{G}_k(n,\delta)$, $M_{n,\delta }$ is the unique graph with maximum CEI.
\end{theorem}
\begin{proof}
Note that since $G \in \mathbb{G}_k(n,\delta)$, we have $k+1 \le n$. If $k+1= n$, then $G \cong K_{k+1} \cong M_{n,\delta }$, and the result holds in this case. Therefore, we consider the case $k+2 \le n$ in what follows.
Let $G \in \mathbb{G}_k(n,\delta)$ be such that $\xi ^{ce}(G)$ is as large as possible. Let $A$ be a vertex cut of $G$ with $|A|=k$. Let $G_1, G_2, \dots ,G_r$ be the components of $G-A$, where $r\geq 2$. We claim that $r=2$; otherwise $r\geq 3$, and we may consider three components $G_1, G_2, G_3$ of $G-A$. Let $G' =G+\{xy : x \in V(G_2), y \in V(G_3) \}$. Clearly, $G' \in \mathbb{G}_k(n,\delta)$. By Lemma \ref{L1}, we have $\xi ^{ce}(G) < \xi ^{ce}(G')$, a contradiction. So, $r=2.$ Also by Lemma \ref{L1}, $G[V(G_1)\cup A] $ and $G[V(G_2)\cup A] $ are complete. Thus, we have $G\cong K_k \vee (K_{a_1} \cup K_{a_2})$, where $a_1= |V(G_1)|$, $a_2= |V(G_2)|$, and $a_1 + a_2 = n-k.$
Without loss of generality, we assume that $a_1 \leq a_2$.
To complete the proof it suffices to show that $a_1= \delta -k +1$. Since every vertex of $V(G_1)$ has degree $a_1+k-1\geq \delta$ in $G$, we have $a_1\geq \delta -k +1$, so it suffices to show that $a_1\leq \delta -k +1$. Suppose that $a_1 > \delta -k +1$. For $w \in V(G_1)$, let $G' =G-\{wx : x \in V(G_1)\setminus \{w\}\}+\{wx : x \in V(G_2)\} $. Clearly, $G' \in \mathbb{G}_k(n,\delta)$.
From the construction of $G'$, we have
\begin{eqnarray*}
d_G(w) &=& d_{G'}(w)+a_1-a_2-1, \\
\varepsilon_G(w) &=& \varepsilon_{G'}(w)=2,\\
d_G(z) &=& d_{G'}(z)+1, \\
\varepsilon_G(z) &=& \varepsilon_{G'}(z)=2 \hbox{ for all $z \in V(G_1)\setminus \{w\}$}, \\
d_G(t) &=& d_{G'}(t)-1, \\
\varepsilon_G(t) &=& \varepsilon_{G'}(t)=2 \hbox{ for all $t \in V(G_2)$}, \\
d_G(x) &=& d_{G'}(x),\\
\varepsilon_G(x) &=& \varepsilon_{G'}(x)=1 \hbox{ for all $x \in A$}.
\end{eqnarray*}
By the definition of CEI, we have
\begin{eqnarray*}
\xi^{ce}(G) - \xi^{ce}(G') &=& \frac{d_G(w)}{\varepsilon_G(w)} - \frac{d_{G'}(w)}{\varepsilon_{G'}(w)}+
\sum_{z \in V(G_1)\setminus \{w\}}\left( \frac{d_G(z)}{\varepsilon_G(z)}- \frac{d_{G'}(z)}{\varepsilon_{G'}(z)}\right)\\
&&+ \sum_{t \in V(G_2)}\left( \frac{d_G(t)}{\varepsilon_G(t)}-
\frac{d_{G'}(t)} {\varepsilon_{G'}(t)}\right)\\
&&+ \sum_{x \in A}\left( \frac{d_G(x)}{\varepsilon_G(x)}- \frac{d_{G'}(x)}{\varepsilon_{G'}(x)}\right)\\
&=& a_1-1-a_2\\
&<&0,
\end{eqnarray*}
where the last inequality follows from the fact that $a_1 \leq a_2$, a contradiction. So $a_1= \delta -k +1$, and hence $a_2= n-\delta-1$. Thus, $G \cong M_{n,\delta }$. This completes the proof.
\end{proof}
\end{document} |
\begin{document}
\title{Fixed points of $n$-valued maps, the fixed point property and the case of surfaces~--~a braid approach}
\author{DACIBERG~LIMA~GON\c{C}ALVES\\
Departamento de Matem\'atica - IME-USP,\\
Caixa Postal~66281~-~Ag.~Cidade de S\~ao Paulo,\\
CEP:~05314-970 - S\~ao Paulo - SP - Brazil.\\
e-mail:~\url{[email protected]}\vspace*{4mm}\\
JOHN~GUASCHI\\
Normandie Universit\'e, UNICAEN,\\
Laboratoire de Math\'ematiques Nicolas Oresme UMR CNRS~\textup{6139},\\
CS 14032, 14032 Caen Cedex 5, France.\\
e-mail:~\url{[email protected]}}
\maketitle
\begin{abstract}
\noindent We study the fixed point theory of $n$-valued maps of a space $X$ using the fixed point theory of maps between $X$ and its configuration spaces. We give some general results to decide whether an $n$-valued map can be deformed to a fixed point free $n$-valued map. In the case of surfaces, we provide an algebraic criterion in terms of the braid groups of $X$ to study this problem. If $X$ is either the $k$-dimensional ball or an even-dimensional real or complex projective space, we show that the fixed point property holds for $n$-valued maps for all $n\geq 1$, and we prove the same result for even-dimensional spheres for all $n\geq 2$. If $X$ is the $2$-torus, we classify the homotopy classes of $2$-valued maps in terms of the braid groups of $X$. We do not currently have a complete characterisation of the homotopy classes of split $2$-valued maps of the $2$-torus that contain a fixed point free representative, but we give an infinite family of such homotopy classes.
\end{abstract}
\section{Introduction}\label{sec:intro}
Multifunctions and their fixed point theory have been studied for a number of years, see for example the books~\cite{Be,Gor}, where fairly general classes of multifunctions and spaces are considered. Continuous $n$-valued functions are of particular interest, and more information about their fixed point theory on finite complexes may be found in~\cite{Brr1,Brr2,Brr3,Brr4,Brr5,Brr6,Bet1,Bet2,Sch0,Sch1,Sch2}. In this paper, we will concentrate our attention on the case of metric spaces, and notably that of surfaces. In all of what follows, $X$ and $Y$ will be topological spaces, and $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ will be a multivalued function \emph{i.e.}\ a function that to each $x\in X$ associates a non-empty subset $\phi(x)$ of $Y$. Following the notation and the terminology of the above-mentioned papers, a multifunction $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ is \emph{upper semi-continuous} if for all $x\in X$, $\phi(x)$ is closed, and given an open set $V$ in $Y$, the set $\set{x\in X}{\phi(x)\subset V}$ is open in $X$, is \emph{lower semi-continuous} if the set $\set{x \in X}{\phi(x)\cap V \neq \varnothing}$ is open in $X$, and is \emph{continuous} if it is upper semi-continuous and lower semi-continuous. Let $I$ denote the unit interval $[0,1]$. We recall the definitions of (split) $n$-valued maps.
\begin{defns}
Let $X$ and $Y$ be topological spaces, and let $n\in \ensuremath{\mathbb N}$.
\begin{enumerate}
\item An \emph{$n$-valued map} (or multimap) $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ is a continuous multifunction that to each $x\in X$ assigns an unordered subset of $Y$ of cardinality exactly $n$.
\item A \emph{homotopy} between two $n$-valued maps $\phi_1,\phi_2\colon\ensuremath{\up{th}}inspace X \multimap Y$ is an $n$-valued map $H\colon\ensuremath{\up{th}}inspace X\times I \multimap Y$ such that $\phi_1=H ( \cdot , 0)$ and $\phi_2=H ( \cdot , 1)$.
\end{enumerate}
\end{defns}
\begin{defn}[\cite{Sch0}]
An $n$-valued function $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ is said to be a \emph{split $n$-valued map} if there exist single-valued maps $f_1, f_2, \ldots, f_n\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} Y$ such that $\phi(x)=\brak{f_1(x),\ldots,f_n(x)}$ for all $x\in X$. This being the case, we shall write $\phi=\brak{f_1,\ldots,f_n}$. Let $\splitmap{X}{Y}{n}$ denote the set of split $n$-valued maps between $X$ and $Y$.
\end{defn}
\emph{A priori}, $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ is just an $n$-valued function, but if it is split then it is continuous by \repr{multicont} in the Appendix, which justifies the use of the word `map' in the above definition. Partly for this reason, split $n$-valued maps play an important r\^ole in the theory.
We now recall the notion of coincidence of a pair $(\phi, f)$ where $\phi$ is an $n$-valued map and $f\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} Y$ is a single-valued map (meaning continuous)~\emph{cf.}~\cite{Brr5}. Let $\ensuremath{\operatorname{\text{Id}}}_{X}\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} X$ denote the identity map of $X$.
\begin{defns}
Let $\phi\colon\ensuremath{\up{th}}inspace X\multimap Y$ be an $n$-valued map, and let $f\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} Y$ be a single-valued map. The set of coincidences of the pair $(\phi, f)$ is denoted by $\operatorname{\text{Coin}}(\phi, f)=\set{x\in X}{f(x)\in \phi(x)}$. If $X=Y$ and $f=\ensuremath{\operatorname{\text{Id}}}_X$ then $\operatorname{\text{Coin}}(\phi, \ensuremath{\operatorname{\text{Id}}}_X)=\set{x\in X}{x\in \phi(x)}$ is called the \emph{fixed point set} of $\phi$, and will be denoted by $\operatorname{\text{Fix}}(\phi)$. If $f$ is the constant map $c_{y_0}$ at a point $y_0\in Y$ then $\operatorname{\text{Coin}}(\phi, c_{y_0})=\set{x\in X}{y_0\in \phi(x)}$ is called the set of \emph{roots} of $\phi$ at $y_0$.
\end{defns}
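The following elementary example, included only to fix ideas, illustrates these notions. Consider the split $2$-valued map $\phi\colon I \multimap I$ given by $\phi=\brak{f_1,f_2}$, where
\begin{equation*}
f_1(x)=\frac{x}{3},\qquad f_2(x)=\frac{x+2}{3},
\end{equation*}
so that $f_1(x)\neq f_2(x)$ for all $x\in I$. Then $\operatorname{\text{Fix}}(\phi)=\brak{0,1}$, since $x=f_1(x)$ only for $x=0$ and $x=f_2(x)$ only for $x=1$, while the set of roots of $\phi$ at $y_0=1/3$ is $\brak{1}$.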
Recall that a space $X$ is said to have the \emph{fixed point property} if any self-map of $X$ has a fixed point. This notion may be generalised to $n$-valued maps as follows.
\begin{defn}\label{fppro}
If $n\in \ensuremath{\mathbb N}$, a space $X$ is said to have the \emph{fixed point property} for $n$-valued maps if any $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ has a fixed point.
\end{defn}
If $n=1$ then we obtain the classical notion of the fixed point property. It is well known that the fixed point theory of surfaces is more complicated than that of manifolds of higher dimension. This is also the case for $n$-valued maps. A number of results for single-valued maps of manifolds of dimension at least three may be generalised to the setting of $n$-valued maps; see for example the results of Schirmer from the 1980's~\cite{Sch0,Sch1,Sch2}. In dimension one or two, the situation is more complex, and has only been analysed within the last ten years or so; see~\cite{Brr1} for the study of $n$-valued maps of the circle. The papers~\cite{Brr4,Brr6} illustrate some of the difficulties that occur when the manifold is the $2$-torus $\ensuremath{\mathbb{T}^{2}}$. Our expectation is that the case of surfaces of negative Euler characteristic will be much more involved.
In this paper, we explore the fixed point property for $n$-valued maps, and we extend the famous result of L.~E.~J.~Brouwer that every self-map of the disc has a fixed point to this setting \cite{Bru}. We will also develop some tools to decide whether an $n$-valued map can be deformed to a fixed point free $n$-valued map, and we give a partial classification of those split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ that can be deformed to fixed point free $2$-valued maps. Our approach to the study of fixed point theory of $n$-valued maps makes use of the homotopy theory of configuration spaces. It is probable that these ideas can also be adapted to coincidence theory. This viewpoint is fairly general. It helps us to understand the theory, and provides some means to perform (not necessarily easy) computations in general. Nevertheless, for some specific situations, such as for surfaces of non-negative Euler characteristic, these calculations are often tractable. To explain our approach, let $F_{n}(Y)$ denote the \emph{$n\ensuremath{\up{th}}$ (ordered) configuration space} of a space $Y$, defined by:
\begin{equation*}
F_n(Y)=\setr{(y_1,\ldots,y_n)}{\text{$y_i\in Y$, and $y_i\neq y_j$ if $i\neq j$}}.
\end{equation*}
Configuration spaces play an important r\^ole in several branches of mathematics and have been extensively studied, see~\cite{CG,FH} for example. The symmetric group $S_n$ on $n$ elements acts freely on $F_n(Y)$ by permuting coordinates. The corresponding quotient space, known as the \emph{$n\ensuremath{\up{th}}$ (unordered) configuration space of $Y$}, will be denoted by $D_n(Y)$, and the quotient map will be denoted by $\pi \colon\ensuremath{\up{th}}inspace F_{n}(Y) \ensuremath{\longrightarrow} D_{n}(Y)$. The \emph{$n\ensuremath{\up{th}}$ pure braid group $P_n(Y)$} (respectively the \emph{$n\ensuremath{\up{th}}$ braid group $B_n(Y)$}) of $Y$ is defined to be the fundamental group of $F_n(Y)$ (resp.\ of $D_n(Y)$), and there is a short exact sequence:
\begin{equation}\label{eq:sesbraid}
1\ensuremath{\longrightarrow} P_n(Y) \ensuremath{\longrightarrow} B_n(Y) \stackrel{\tau}{\ensuremath{\longrightarrow}} S_n \ensuremath{\longrightarrow} 1,
\end{equation}
where $\tau$ is the homomorphism that to a braid associates its induced permutation. For $i=1,\ldots,n$, let $p_{i}\colon\ensuremath{\up{th}}inspace F_{n}(Y)\ensuremath{\longrightarrow} Y$ denote projection onto the $i\ensuremath{\up{th}}$ factor. The notion of intermediate configuration spaces was defined in~\cite{GG2,GG4}. More precisely, if $n, m\in \ensuremath{\mathbb N}$, the subgroup $S_n\times S_m \subset S_{n+m}$ acts freely on $F_{n+m}(Y)$ by restriction, and the corresponding orbit space $F_{n+m}(Y)/(S_n\times S_m)$ is denoted by $D_{n, m}(Y)$. Let $B_{n,m}=\pi_{1}(D_{n, m}(Y))$ denote the associated `mixed' braid group. The space $F_{n+m}(Y)$ is equipped with the topology induced by the inclusion $F_{n+m}(Y)\subset Y^{n+m}$, and $D_{n, m}(Y)$ is equipped with the quotient topology. If $Y$ is a manifold without boundary then the natural projections $\overline{p}_{m,n}\colon\ensuremath{\up{th}}inspace D_{m,n}(Y) \ensuremath{\longrightarrow} D_m(Y)$ onto the first $m$ coordinates are fibrations. For maps whose target is a configuration space, we have the following notions.
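Before proceeding, it may be helpful to recall the simplest instance of these constructions (only standard facts are used here). For $Y=\ensuremath{\mathbb R}^{2}$ and $n=2$, the homeomorphism $(y_1,y_2)\longmapsto (y_1,y_2-y_1)$ identifies $F_2(\ensuremath{\mathbb R}^{2})$ with $\ensuremath{\mathbb R}^{2}\times (\ensuremath{\mathbb R}^{2}\setminus\brak{0})$, which deformation retracts onto a circle, so $P_2(\ensuremath{\mathbb R}^{2})$ and $B_2(\ensuremath{\mathbb R}^{2})$ are both infinite cyclic, generated by the full twist $\sigma_1^{2}$ and the half twist $\sigma_1$ respectively, and the short exact sequence~\reqref{sesbraid} becomes
\begin{equation*}
1\ensuremath{\longrightarrow} \ang{\sigma_1^{2}} \ensuremath{\longrightarrow} \ang{\sigma_1} \stackrel{\tau}{\ensuremath{\longrightarrow}} S_2\ensuremath{\longrightarrow} 1.
\end{equation*}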
\begin{defns}
Let $X$ and $Y$ be topological spaces, and let $n\in \ensuremath{\mathbb N}$. A map $\Phi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_n(Y)$ will be called an \emph{$n$-unordered map}, and a map $\Psi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_n(Y)$ will be called an \emph{$n$-ordered map}. For such an $n$-ordered map, for $i=1,\ldots,n$, there exist maps $f_i\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} Y$ such that $\Psi(x)=(f_1(x),\ldots, f_n(x))$ for all $x\in X$, and for which $f_i(x)\neq f_j(x)$ for all $1\leq i,j\leq n$, $i\neq j$, and all $x\in X$. In this case, we will often write $\Psi=(f_{1},\ldots,f_{n})$.
\end{defns}
The fixed point-theoretic concepts that were defined earlier for $n$-valued maps carry over naturally to $n$-unordered and $n$-ordered maps as follows.
\begin{defns}
Let $X$ and $Y$ be topological spaces, let $f\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} Y$ be a single-valued map, let $y_0\in Y$, and let $n\in \ensuremath{\mathbb N}$.
\begin{enumerate}[(a)]
\item Given an $n$-unordered map $\Phi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_n(Y)$, $x\in X$ is said to be a \emph{coincidence} of the pair $(\Phi, f)$ if there exist $(x_1,\ldots,x_n)\in F_n(Y)$ and $j\in\brak{1,\ldots,n}$ such that $\Phi(x)= \pi(x_1,\ldots, x_n)$ and $f(x)=x_j$. The set of coincidences of the pair $(\Phi, f)$ will be denoted by $\operatorname{\text{Coin}}(\Phi, f)$.
If $X=Y$ and $f=\ensuremath{\operatorname{\text{Id}}}_X$ then $\operatorname{\text{Coin}}(\Phi, \ensuremath{\operatorname{\text{Id}}}_X)$
is called the \emph{fixed point set} of $\Phi$, and is denoted by $\operatorname{\text{Fix}}(\Phi)$. If $f$ is the constant map $c_{y_0}$ at $y_0$ then $\operatorname{\text{Coin}}(\Phi, c_{y_0})$
is called the set of \emph{roots} of $\Phi$ at $y_0$.
\item Given an $n$-ordered map $\Psi\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} F_n(Y)$, the set of coincidences of the pair $(\Psi, f)$ is defined by $\operatorname{\text{Coin}}(\Psi, f)= \set{x\in X}{\text{$f(x)= p_j\circ \Psi(x)$ for some $1\leq j\leq n$}}$. If $X=Y$ and $f=\ensuremath{\operatorname{\text{Id}}}_X$ then $\operatorname{\text{Coin}}(\Psi, \ensuremath{\operatorname{\text{Id}}}_X)=\set{x\in X}{\text{$x=p_j\circ \Psi(x)$ for some $1\leq j\leq n$}}$ is called the \emph{fixed point set} of $\Psi$, and is denoted by $\operatorname{\text{Fix}}(\Psi)$. If $f$ is the constant map $c_{y_0}$ then $\operatorname{\text{Coin}}(\Psi, c_{y_0})=\set{x\in X}{\text{$y_0=p_j\circ \Psi(x)$ for some $1\leq j\leq n$}}$ is called the set of \emph{roots} of $\Psi$ at $y_0$.
\end{enumerate}
\end{defns}
In order to study $n$-valued maps via single-valued maps, we use the following natural relation between multifunctions and functions. First observe that there is an obvious bijection between the set of $n$-point subsets of a space $Y$ and the unordered configuration space $D_{n}(Y)$. This bijection induces a one-to-one correspondence between the set of $n$-valued functions from $X$ to $Y$ and the set of functions from $X$ to $D_n(Y)$. In what follows, given an $n$-valued function $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$, we will denote the corresponding function whose target is the configuration space $D_{n}(Y)$ by $\Phi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_n(Y)$, and \emph{vice-versa}. Since we are concerned with the study of continuous multivalued functions, we wish to ensure that this correspondence restricts to a bijection between the set of (continuous) $n$-valued maps and the set of continuous single-valued maps whose target is $D_{n}(Y)$. It follows from \reth{metriccont} that this is indeed the case if $X$ and $Y$ are metric spaces. This hypothesis will clearly be satisfied throughout this paper. If the map $\Phi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_n(Y)$ associated to $\phi$ admits a lift $\widehat{\Phi}\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_n(Y)$ via the covering map $\pi$ then we shall say that $\widehat{\Phi}$ is a \emph{lift} of $\phi$ (see \resec{pres} for a formal statement of this definition). We will make use of this notion to develop a correspondence between split $n$-valued maps and maps from $X$ into $F_n(Y)$. As we shall see, the problems that we are interested in for $n$-valued maps, such as coincidence, fixed point and root problems, may be expressed within the context of $n$-unordered maps, to which we may apply the classical theory of single-valued maps.
Our main aims in this paper are to explore the fixed point property of spaces for $n$-valued maps, and to study the problem of whether an $n$-valued map can be deformed to a fixed point free $n$-valued map. We now give the statements of the main results of this paper. The first theorem shows that for simply-connected metric spaces, the usual fixed point property implies the fixed point property for $n$-valued maps.
\begin{thm}\label{th:sccfpp}
Let $X$ be a simply-connected metric space that has the fixed point property, and let $n\in \ensuremath{\mathbb N}$. Then every $n$-valued map of $X$ has at least $n$ fixed points, so $X$ has the fixed point property for $n$-valued maps. In particular, for all $n,k\geq 1$, the $k$-dimensional disc $\dt[k]$ and the $2k$-dimensional complex projective space $\ensuremath{\mathbb C} P^{2k}$ have the fixed point property for $n$-valued maps.
\end{thm}
It may happen that a space does not have the (usual) fixed point property but that it has the fixed point property for $n$-valued maps for $n>1$. This is indeed the case for the $2k$-dimensional sphere $\St[2k]$.
\begin{prop}\label{prop:S2fp}
If $n\geq 2$ and $k\geq 1$, $\St[2k]$ has the fixed point property for $n$-valued maps.
\end{prop}
\reth{sccfpp} and \repr{S2fp} will be proved in \resec{pres}. Although the $2k$-dimensional real projective space $\ensuremath{\mathbb R} P^{2k}$ is not simply connected, in \resec{rp2k} we will show that it has the fixed point property for $n$-valued maps for all $n\in \ensuremath{\mathbb N}$.
\begin{thm}\label{th:rp2Kfpp}
Let $k,n\geq 1$. The real projective space $\ensuremath{\mathbb R} P^{2k}$ has the fixed point property for $n$-valued maps. Further, any $n$-valued map of $\ensuremath{\mathbb R} P^{2k}$ has at least $n$ fixed points.
\end{thm}
We do not know of an example of a space that has the fixed point property, but that does not have the fixed point property for $n$-valued maps for some $n\geq 2$.
In \resec{fixfree}, we turn our attention to the question of deciding whether an $n$-valued map of a surface $X$ of non-negative Euler characteristic $\chi(X)$ can be deformed to a fixed point free $n$-valued map. In the following result, we give algebraic criteria involving the braid groups of $X$.
\begin{thm}\label{th:defchineg}
Let $X$ be a compact surface without boundary such that $\chi(X)\leq 0$, let $n\geq 1$, and let $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ be an $n$-valued map.
\begin{enumerate}[(a)]
\item\label{it:defchinega} The $n$-valued map $\phi$ can be deformed to a fixed point free $n$-valued map if and only if there is a homomorphism
$\varphi\colon\ensuremath{\up{th}}inspace \pi_1(X) \ensuremath{\longrightarrow} B_{1,n}(X)$ that makes the following diagram commute:
\begin{equation}\label{eq:commdiag1}
\begin{tikzcd}[ampersand replacement=\&]
\&\& B_{1,n}(X) \ar{d}{(\iota_{1,n})_{\#}}\\
\pi_{1}(X) \ar[swap]{rr}{(\ensuremath{\operatorname{\text{Id}}}_{X}\times \Phi)_{\#}} \ar[dashrightarrow, end anchor=south west]{rru}{\varphi} \&\&\pi_{1}(X) \times B_{n}(X),
\end{tikzcd}
\end{equation}
where $\iota_{1,n}\colon\ensuremath{\up{th}}inspace D_{1,n}(X) \ensuremath{\longrightarrow} X \times D_{n}(X)$ is the inclusion map.
\item\label{it:defchinegb} If the $n$-valued map $\phi$ is split, it can be deformed to a fixed point free $n$-valued map if and only if there is a homomorphism $\widehat{\varphi}\colon\ensuremath{\up{th}}inspace \pi_1(X) \ensuremath{\longrightarrow} P_{n+1}(X)$ that makes the following diagram commute:
\begin{equation*}
\begin{tikzcd}[ampersand replacement=\&]
\&\& P_{n+1}(X) \ar{d}{(\widehat{\iota}_{n+1})_{\#}}\\
\pi_{1}(X) \ar[swap]{rr}{(\ensuremath{\operatorname{\text{Id}}}_{X}\times \widehat{\Phi})_{\#}} \ar[dashrightarrow, end anchor=south west]{rru}{\widehat{\varphi}} \&\& \pi_{1}(X) \times P_{n}(X),
\end{tikzcd}
\end{equation*}
where $\widehat{\iota}_{n+1}\colon\ensuremath{\up{th}}inspace F_{n+1}(X) \ensuremath{\longrightarrow} X \times F_{n}(X)$ is the inclusion map.
\end{enumerate}\end{thm}
If $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ is a split $n$-valued map given by $\phi=\{f_1, \cdots ,f_n\}$ that can be deformed to a fixed point free $n$-valued map, then certainly each of the single-valued maps $f_i$ can be deformed to a fixed point free map. The question of whether the converse of this statement holds for surfaces is open. We do not know the answer for any compact surface without boundary different from $\St$ or $\ensuremath{{\mathbb R}P^2}$, but it is likely that the converse does not hold. More generally, one would like to know if the homotopy class of $\phi$ contains a representative for which the number of fixed points is exactly the Nielsen number. Very little is known about this question, even for the $2$-torus. Recall that the Nielsen number of an $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X\multimap X$, denoted $N(\phi)$, was defined by Schirmer~\cite{Sch1}, and generalises the usual Nielsen number in the single-valued case. She showed that $N(\phi)$ is a lower bound for the number of fixed points among all $n$-valued maps homotopic to $\phi$.
Within the framework of \reth{defchineg}, it is natural to study first the case of $2$-valued maps of the $2$-torus $\ensuremath{\mathbb{T}^{2}}$, which is the focus of \resec{toro}. In what follows, $\mu$ and $\lambda$ will denote the meridian and the longitude respectively of $\ensuremath{\mathbb{T}^{2}}$. Let $(e_1, e_2)$ be a basis of $\pi_1(\ensuremath{\mathbb{T}^{2}})$ such that $e_1=[\mu]$ and $e_2=[\lambda]$. For self-maps of $\ensuremath{\mathbb{T}^{2}}$, we will not be overly concerned with the choice of basepoints since the fundamental groups of $\ensuremath{\mathbb{T}^{2}}$ with respect to two different basepoints may be canonically identified. In \resec{toro2}, we will study the groups $P_{2}(\ensuremath{\mathbb{T}^{2}})$, $B_{2}(\ensuremath{\mathbb{T}^{2}})$ and $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$, and in \reco{compactpres}, we will see that $P_{2}(\ensuremath{\mathbb{T}^{2}})$ is isomorphic to the direct product of a free group $\mathbb{F}_2(u,v)$ of rank $2$ and $\ensuremath{\mathbb Z}^2$. In what follows, the elements of $P_2(\ensuremath{\mathbb{T}^{2}})$ will be written with respect to the decomposition $\mathbb{F}_2(u,v) \times \ensuremath{\mathbb Z}^2$, and $\operatorname{\text{Ab}}\colon\ensuremath{\up{th}}inspace \mathbb{F}_2(u,v) \ensuremath{\longrightarrow} \ensuremath{\mathbb Z}^{2}$ will denote Abelianisation. \reth{helgath01}, which is a result of~\cite{Sch1} for the Nielsen number of split $n$-valued maps, will be used in part of the proof of the following proposition.
\begin{prop}\label{prop:exisfpf}
Let $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a split $2$-valued map of the torus $\ensuremath{\mathbb{T}^{2}}$, and let $\widehat{\Phi}=(f_1,f_2) \colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a lift of $\phi$ such that $\widehat{\Phi}_{\#}(e_{1})=(w^r,(a,b))$ and $\widehat{\Phi}_{\#}(e_{2})= (w^s, (c,d))$, where $(r,s)\in \ensuremath{\mathbb Z}^{2}\setminus \brak{(0,0)}$, $a,b,c,d\in \ensuremath{\mathbb Z}$ and $w\in \mathbb{F}_2(u,v)$. Then the Nielsen number of $\phi$ is given by:
\begin{equation*}
N(\phi)=\left\lvert\det\begin{pmatrix}
a-1 & c \\
b & d-1
\end{pmatrix}\right\rvert
+
\left\lvert\det\begin{pmatrix}
rm+a-1 & sm+c \\
rn+b & sn+d-1
\end{pmatrix}\right\rvert,
\end{equation*}
where $\operatorname{\text{Ab}}(w)=(m,n)\in \ensuremath{\mathbb Z}^{2}$. If the map $\phi$ can be deformed to a fixed point free $2$-valued map, then both of the maps $f_1$ and $f_2$ can be deformed to fixed point free maps. Furthermore, $f_1$ and $f_2$ can be deformed to fixed point free maps if and only if either:
\begin{enumerate}[(a)]
\item\label{it:exisfpfa} the pairs of integers $(a-1, b),(c,d-1)$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$, or
\item\label{it:exisfpfb} $s(a-1, b)=r(c,d-1)$.
\end{enumerate}
\end{prop}
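To illustrate how this formula is evaluated, suppose for the sake of example that a lift satisfies $r=1$, $s=0$, $(a,b)=(3,0)$, $(c,d)=(0,2)$ and $\operatorname{\text{Ab}}(w)=(m,n)=(1,0)$. Then
\begin{equation*}
N(\phi)=\left\lvert\det\begin{pmatrix} 2 & 0\\ 0 & 1\end{pmatrix}\right\rvert+\left\lvert\det\begin{pmatrix} 3 & 0\\ 0 & 1\end{pmatrix}\right\rvert=2+3=5,
\end{equation*}
so any split $2$-valued map realising these data (if one exists) has at least five fixed points.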
Within the framework of \repr{exisfpf}, given a split $2$-valued map $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ for which $N(\phi)=0$, we would like to know whether $\phi$ can be deformed to a fixed point free $2$-valued map. If $N(\phi)=0$, then by this proposition, one of the conditions~(\ref{it:exisfpfa}) or~(\ref{it:exisfpfb}) must be satisfied. The following result shows that condition~(\ref{it:exisfpfb}) is also sufficient.
\begin{thm}\label{th:necrootfree3}
Let $\widehat{\Phi}\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2({\ensuremath{\mathbb{T}^{2}}})$ be a lift of a split $2$-valued map $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ that satisfies $\widehat{\Phi}_{\#}(e_{1})=(w^{r},(a,b))$ and $\widehat{\Phi}_{\#}(e_{2})= (w^{s}, (c,d))$, where $w\in \mathbb{F}_2(u,v)$, $a,b,c,d\in \ensuremath{\mathbb Z}$ and $(r,s)\in \ensuremath{\mathbb Z}^{2}\setminus \brak{(0,0)}$ satisfy $s(a-1,b)=r(c,d-1)$. Then $\phi$ may be deformed to a fixed point free $2$-valued map.
\end{thm}
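As a simple illustration of the hypothesis of \reth{necrootfree3}, the values $r=s=1$, $(a,b)=(2,3)$ and $(c,d)=(1,4)$ satisfy $s(a-1,b)=(1,3)=r(c,d-1)$, and for these values both determinants appearing in the formula of \repr{exisfpf} vanish, independently of $w$:
\begin{equation*}
\det\begin{pmatrix} 1 & 1\\ 3 & 3\end{pmatrix}=\det\begin{pmatrix} m+1 & m+1\\ n+3 & n+3\end{pmatrix}=0,
\end{equation*}
so that $N(\phi)=0$ for any split $2$-valued map realising these data, in accordance with the conclusion of \reth{necrootfree3}.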
With respect to condition~(\ref{it:exisfpfa}), we obtain a partial converse for certain values of $a,b,c,d,m$ and $n$.
\begin{thm}\label{th:construct2val} Suppose that $(a-1, b),(c,d-1)$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$ generated by an element of the form $(0,q), (1,q), (p,0)$ or $(p,1)$, where $p,q\in \ensuremath{\mathbb Z}$, and let $r,s\in \ensuremath{\mathbb Z}$. Then there exist $w\in \mathbb{F}_2(u,v)$, a split fixed point free $2$-valued map $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ and a lift $\widehat{\Phi} \colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ of $\phi$ such that $\operatorname{\text{Ab}}(w)=(m,n)$, $\widehat{\Phi}_{\#}(e_1)=(w^{r},(a,b))$ and $\widehat{\Phi}_{\#}(e_2)= (w^{s}, (c,d))$.
\end{thm}
\repr{exisfpf} and Theorems~\ref{th:necrootfree3} and~\ref{th:construct2val} will be proved in \resec{toro}. Besides the introduction and an appendix, this paper is divided into 4 sections. In \resec{pres}, we give some basic definitions, we establish the connection between multimaps and maps whose target is a configuration space, and we show that simply-connected spaces have the fixed point property for $n$-valued maps if they have the usual fixed point property. In \resec{rp2k}, we show that even-dimensional real projective spaces have the fixed point property for $n$-valued maps. In \resec{fixfree}, we provide general criteria of a homotopic and algebraic nature, to decide whether an $n$-valued map can be deformed or not to a fixed point free $n$-valued map, and we give the corresponding statements for the case of roots. In \resec{toro}, we study the fixed point theory of $2$-valued maps of the $2$-torus. In \resec{toro2}, we give presentations of certain braid groups of $\ensuremath{\mathbb{T}^{2}}$, in \resec{descript}, we describe the set of homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$, and in \resec{fptsplit2}, we study the fixed point theory of split $2$-valued maps. In the Appendix, written with R.~F.~Brown, in \reth{metriccont}, we show that for the class of metric spaces that includes those considered in this paper, $n$-valued maps can be regarded as single-valued maps whose target is the associated unordered configuration space.
\section{Generalities and the $n$-valued fixed point property}\label{sec:pres}
In \resec{relnsnvm}, we begin by describing the relations between $n$-valued maps and $n$-unordered maps. We will assume throughout that $X$ and $Y$ are metric spaces, so that we can apply \reth{metriccont}. Making use of unordered configuration space, in \relem{split} and \reco{inject}, we prove some properties about the fixed points of $n$-valued maps. In \resec{disc}, we give an algebraic condition that enables us to decide whether an $n$-valued map is split. We also study the case where $X$ is simply connected (the $k$-dimensional disc for example, which has the usual fixed point property) and we prove \reth{sccfpp}, and in \resec{sph}, we analyse the case of the $2k$-dimensional sphere (which does not have the usual fixed point property), and we prove \repr{S2fp}.
\subsection{Relations between $n$-valued maps, $n$-(un)ordered maps and their fixed point sets}\label{sec:relnsnvm}
A proof of the following result may be found in the Appendix.
\begin{thm}\label{th:metriccont}
Let $X$ and $Y$ be metric spaces, and let $n\in \ensuremath{\mathbb N}$. An $n$-valued function $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ is continuous if and only if the
corresponding function $\Phi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_n(Y)$ is continuous.
\end{thm}
It would be beneficial for the statement of \reth{metriccont} to hold under weaker hypotheses on $X$ and $Y$. See~\cite{Brr7} for some recent results in this direction.
\begin{defn}
If $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ is an $n$-valued map and $\Phi\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} D_{n}(Y)$ is the associated $n$-unordered map, an $n$-ordered map $\widehat{\Phi}\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_n(Y)$ is said to be a \emph{lift} of $\phi$ if the composition $\pi\circ \widehat{\Phi}\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_{n}(Y)$ of $\widehat{\Phi}$ with the covering map $\pi\colon\ensuremath{\up{th}}inspace F_n(Y) \ensuremath{\longrightarrow} D_n(Y)$ is equal to $\Phi$.
\end{defn}
If $\phi=\brak{f_1,\ldots,f_n}\colon\ensuremath{\up{th}}inspace X \multimap Y$ is a split $n$-valued map, then it admits a lift $\widehat{\Phi}=(f_1,\ldots,f_n)\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_n(Y)$. For any such lift, $\operatorname{\text{Fix}}(\widehat{\Phi})=\operatorname{\text{Fix}}(\phi)$, and the map
$\widehat{\Phi}$ determines an ordered set of $n$ maps $(f_1=p_1\circ \widehat{\Phi},\ldots, f_n=p_n\circ \widehat{\Phi})$ from $X$ to $Y$ for which $f_i(x)\ne f_j(x)$ for all $x\in X$ and all $1\leq i<j\leq n$. Conversely, any ordered set of $n$ maps $(f_1,\ldots,f_n)$ from $X$ to $Y$ for which $f_i(x)\ne f_j(x)$ for all $x\in X$ and all $1\leq i<j\leq n$ determines an $n$-ordered map $\Psi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_n(Y)$ defined by $\Psi(x)=(f_1(x),\ldots,f_n(x))$ and a split $n$-valued map $\phi=\brak{f_1,\ldots,f_n}\colon\ensuremath{\up{th}}inspace X \multimap Y$ of which $\Psi$ is a lift.
So the existence of such a split $n$-valued map $\phi$ is equivalent to that of an $n$-ordered map $\Psi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_n(Y)$, where $\Psi=(f_1,\ldots,f_n)$. This being the case, the composition $\pi\circ \Psi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_n(Y)$ is the map $\Phi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_n(Y)$ that corresponds (in the sense described in \resec{intro}) to the $n$-valued map $\phi$. Consequently, an $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ admits a lift if and only if it is split. As we shall now see, \reth{metriccont} will be of help in the description of the relations between (split) $n$-valued maps and $n$-(un)ordered maps of metric spaces. As we have seen, to each $n$-valued map (resp.\ split $n$-valued map), we may associate an $n$-unordered map (resp.\ a lift), and \emph{vice-versa}. Note that the symmetric group $S_n$ not only acts (freely) on $F_n(Y)$ by permuting coordinates, but it also acts on the set of ordered $n$-tuples of maps between $X$ and $Y$. Further, the restriction of the latter action to the subset $F_{n}(Y)^{X}$ of $n$-ordered maps, \emph{i.e.}\ maps of the form $\Psi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_{n}(Y)$, where $\Psi(x)=(f_1(x),\ldots,f_n(x))$ for all $x\in X$ for which $f_i(x)\ne f_j(x)$ for all $x\in X$ and $1\leq i< j\leq n$, is also free. In what follows, $[X,Y]$ (resp.\ $[X,Y]_{0}$) will denote the set of homotopy classes (resp.\ based homotopy classes) of maps between $X$ and $Y$.
\begin{lem}\label{lem:split}\mbox{} Let $X$ and $Y$ be metric spaces, and let $n\in \ensuremath{\mathbb N}$.
\begin{enumerate}[(a)]
\item\label{it:splitII} The set $\splitmap{X}{Y}{n}$ of split $n$-valued maps from $X$ to $Y$ is in one-to-one correspondence with the orbits of the set of maps $F_{n}(Y)^{X}$ from $X$ to $F_n(Y)$ modulo the free action defined above of $S_n$ on $F_{n}(Y)^{X}$.
\item\label{it:splitIII} If two $n$-valued maps from $X$ to $Y$ are homotopic and one is split, then the other is also split. Further, the set $\splitmap{X}{Y}{n}/\!\sim$ of homotopy classes of split $n$-valued maps from $X$ to $Y$ is in one-to-one correspondence with the orbits of the set $[X,F_{n}(Y)]$ of homotopy classes of maps from $X$ to $F_n(Y)$ under the action of $S_n$ induced by that of $S_n$ on $F_{n}(Y)^{X}$.
\item\label{it:splitIV} Suppose that $X=Y$. If an $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ is split and deformable to a fixed point free map, then a lift
$\widehat{\Phi}\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_{n}(X)$ of $\phi$
may be written as $\widehat{\Phi}=(f_1,\ldots,f_n)$, where for all $i=1,\ldots,n$, the map $f_i\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} X$ is deformable to a fixed point free map.
\end{enumerate}
\end{lem}
\begin{proof}\mbox{}
\begin{enumerate}[(a)]
\item Let $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ be a split $n$-valued map. From the definition, there exists an $n$-ordered map $\widehat{\Phi}\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} F_n(Y)$ such that $\Phi=\pi\circ \widehat{\Phi}$, up to the identification given by \reth{metriccont}.
If $\widehat{\Phi}=(f_1,\ldots,f_n)$, the other lifts of $\phi$ are obtained via the action of the group of deck transformations of the covering space, this group being $S_n$ in our case, and so are of the form $(f_{\sigma(1)},\ldots,f_{\sigma(n)})$, where $\sigma\in S_{n}$. This gives rise to the stated one-to-one correspondence between $\splitmap{X}{Y}{n}$ and the orbit space $F_{n}(Y)^{X}/S_{n}$.
\item By naturality, the map $\pi\colon\ensuremath{\up{th}}inspace F_n(Y) \ensuremath{\longrightarrow} D_n(Y)$ induces a map $\widehat{\pi}\colon\ensuremath{\up{th}}inspace [X, F_n(Y)]\ensuremath{\longrightarrow} [X, D_n(Y)]$ defined by $\widehat{\pi}([\Psi])=[\pi\circ \Psi]$ for any $n$-ordered map $\Psi\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} F_{n}(Y)$. Given two homotopic $n$-valued maps between $X$ and $Y$, which we regard as maps from $X$ to $D_n(Y)$ using \reth{metriccont}, if the first has a lift to $F_n(Y)$, then the lifting property of a covering implies that the second also admits a lift to $F_n(Y)$, so if the first map is split then so is the second map. To prove the second part of the statement, first note that there is a surjective map $f\colon\ensuremath{\up{th}}inspace F_{n}(Y)^{X} \ensuremath{\longrightarrow} \splitmap{X}{Y}{n}$ given by $f(g)=\pi\circ g$,
where we identify $\splitmap{X}{Y}{n}$ with the set of maps $D_{n}(Y)^{X}$ from $X$ to $D_{n}(Y)$, that induces a surjective map $\overline{f}\colon\ensuremath{\up{th}}inspace [X,F_{n}(Y)] \ensuremath{\longrightarrow} \splitmap{X}{Y}{n}/\!\sim$ on the corresponding sets of homotopy classes. Further, if ${\Psi}_1, {\Psi}_2\in F_{n}(Y)^{X}$ are two $n$-ordered maps that are homotopic via a homotopy $H$, and if $\alpha\in S_n$, then the maps $\alpha \circ\Psi_1,\alpha \circ \Psi_2 \in F_{n}(Y)^{X}$ are also homotopic via the homotopy $\alpha \circ H$, and so we obtain a quotient map $q\colon\ensuremath{\up{th}}inspace [X,F_{n}(Y)] \ensuremath{\longrightarrow} [X,F_{n}(Y)]/S_{n}$. We claim that $\overline{f}$ factors through $q$ via the map $\overline{\overline{f}}\colon\ensuremath{\up{th}}inspace [X,F_{n}(Y)]/S_{n}\ensuremath{\longrightarrow} \splitmap{X}{Y}{n}/\!\sim$ defined by $\overline{\overline{f}}([g])=[f(g)]$. To see this, let $g, h\in F_{n}(Y)^{X}$ be such that $q([g])=q([h])$. Then there exists $\alpha\in S_{n}$ such that $\alpha([g])=[h]$. Then $\overline{f}(\alpha[g])=[\alpha f(h)]= \overline{f}([h])=[f(h)]$. But from the definition of $\splitmap{X}{Y}{n}$, $[\alpha f(g)]=[f(g)]$, and so $[f(g)]=[f(h)]$, which proves the claim. By construction, the map $\overline{\overline{f}}$ is surjective. It remains to show that it is injective. Let $g,h\in F_{n}(Y)^{X}$ be such that $\overline{\overline{f}}([g])=\overline{\overline{f}}([h])$. Then $[f(g)]=[f(h)]$, and thus $f(g)$ and $f(h)$ are homotopic via a homotopy $H$ in $\splitmap{X}{Y}{n}$, where $H(0,f(g))=f(g)$ and $H(1,f(g))=f(h)$. Then $H$ lifts to a homotopy $\widetilde{H}$ such that $\widetilde{H}(0,g)=g$, and $\widetilde{H}(1,g)$ is a lift of $f(h)$. But $h$ is also a lift of $f(h)$, so there exists $\alpha\in S_{n}$ such that $f(h)=\alpha h$. Further, $g$ is homotopic to $f(h)$, so is homotopic to $\alpha(h)$, and hence $q([g])=q([\alpha(h)])= q([h])$ from the definition of $q$, which proves the injectivity of $\overline{\overline{f}}$.
\item Since $\phi$ is split, we may choose a lift $\widehat{\Phi}=(f_1,\ldots,f_n)\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_{n}(X)$ of $\phi$. By hypothesis, there is a homotopy $H\colon\ensuremath{\up{th}}inspace X\times I \ensuremath{\longrightarrow} D_n(X)$ such that $H(\cdot ,0)=\Phi$, and $H(\cdot ,1)$ is fixed point free. Since the initial part of the homotopy $H$ admits a lift, there exists a lift $\widehat{H}\colon\ensuremath{\up{th}}inspace X\times I \ensuremath{\longrightarrow} F_n(X)$ of $H$ such that $\widehat{H}(\cdot, 0)=\widehat{\Phi}$, and $\widehat{H}(\cdot, 1)$ is fixed point free. So $\widehat{H}(\cdot, 1)$ is of the form $(f_1,\ldots,f_n)$, where $f_i$ is fixed point free for $1\leq i\leq n$,
and the conclusion follows.\qedhere
\end{enumerate}
\end{proof}
\begin{rems}\mbox{}
\begin{enumerate}[(a)]
\item The action of $S_n$ on the set of homotopy classes $[X, F_n(Y)]$ is not necessarily free (see \repr{classif}(\ref{it:classifb})).
\item The question of whether the converse of \relem{split}(\ref{it:splitIV}) is valid for surfaces is open, see the introduction.
\end{enumerate}
\end{rems}
The following consequence of \relem{split}(\ref{it:splitIV}) will be useful in what follows, and implies that if a split $n$-valued map can be deformed to a fixed point free $n$-valued map (through $n$-valued maps), then the deformation
is through split $n$-valued maps.
\begin{cor}\label{cor:inject}
Let $X$ be a metric space, and let $n\in \ensuremath{\mathbb N}$.
A split $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X\multimap X$ may be deformed within $\splitmap{X}{X}{n}$ to a fixed point free $n$-valued map
if and only if any lift $\widehat{\Phi}=(f_1,\ldots,f_n)\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_n(X)$ of $\phi$ may be deformed within $F_{n}(X)$
to a fixed point free map $\widehat{\Phi}'=(f_1',\ldots,f_n')\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_n(X)$. In particular, for all $1\leq i\leq n$, there exists a homotopy $H_i\colon\ensuremath{\up{th}}inspace X\times I\ensuremath{\longrightarrow} X$ between $f_i$ and $f_i'$, where $f_i'$ is a fixed point free map, and $H_j(x, t)\ne H_k(x, t)$ for all $1\leq j<k\leq n$, $x\in X$ and $t\in [0,1]$.
\end{cor}
\begin{proof}
The `if' part of the statement may be obtained by composing the deformation between $\widehat{\Phi}$ and $\widehat{\Phi}'$ with the projection $\pi$ and by applying \reth{metriccont}.
The `only if' part follows in a manner similar to that of the proof of the first part of \relem{split}(\ref{it:splitIII}).
\end{proof}
\subsection{The fixed point property of simply connected spaces and the $k$-disc $\dt[k]$ for $n$-valued maps} \label{sec:disc}
In this section, we analyse the case where $X$ is a simply-connected metric space that possesses the fixed point property, such as the closed $k$-dimensional disc $\dt[k]$. In \relem{split1}, we begin by proving a variant of the so-called `Splitting Lemma' that is more general than the versions that appear in the literature, such as that of Schirmer given in~\cite[Section~2, Lemma~1]{Sch0} for example. The hypotheses are expressed in terms of the homomorphism on the level of the fundamental group of the target $Y$, rather than that of the domain $X$, and the criterion is an algebraic condition, in terms of the fundamental group, for an $n$-valued map from $X$ to $Y$ to be split. This allows us to prove \reth{sccfpp}, which says that a simply-connected metric space that has the fixed point property also possesses the fixed point property for $n$-valued maps for all $n\geq 1$. In particular, $\dt[k]$ satisfies this property for all $k\geq 1$. The $2$-disc will be the only surface with boundary that will be considered in this paper. The cases of other surfaces with boundary, such as the annulus and the M\"obius band, will be studied elsewhere.
\begin{lem}\label{lem:split1}
Let $n\geq 1$, let $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ be an $n$-valued map between metric spaces, where $X$ is connected and locally arcwise-connected, and let $\Phi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_n(Y)$ be the associated $n$-unordered map. Then $\phi$ is split if and only if the image of the induced homomorphism $\Phi_{\#}\colon\ensuremath{\up{th}}inspace \pi_1(X) \ensuremath{\longrightarrow} B_n(Y)$ is contained in the image of the homomorphism $\pi_{\#}\colon\ensuremath{\up{th}}inspace P_n(Y) \ensuremath{\longrightarrow} B_{n}(Y)$ induced by the covering map $\pi\colon\ensuremath{\up{th}}inspace F_{n}(Y)\ensuremath{\longrightarrow} D_{n}(Y)$. In particular, if $X$ is simply connected then all $n$-valued maps from $X$ to $Y$ are split.
\end{lem}
\begin{proof}
Since $X$ and $Y$ are metric spaces, using \reth{metriccont}, we may consider the $n$-unordered map $\Phi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_n(Y)$ that corresponds to $\phi$. The first part of the statement follows from standard results about the lifting property of a map to a covering space in terms of the fundamental group~\cite[Chapter~5, Section~5, Theorem~5.1]{Mas}. The second part is a consequence of the first part.
\end{proof}
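A standard example, recalled here only to illustrate the criterion, is the $2$-valued map $\phi\colon \mathbb S^{1} \multimap \mathbb S^{1}$ that to each $z\in \mathbb S^{1}\subset \ensuremath{\mathbb C}$ associates its two square roots. As $z$ traverses $\mathbb S^{1}$ once, the two roots are exchanged, so the induced homomorphism $\Phi_{\#}\colon \pi_1(\mathbb S^{1}) \ensuremath{\longrightarrow} B_2(\mathbb S^{1})$ sends a generator of $\pi_1(\mathbb S^{1})$ to a braid whose induced permutation is the transposition of $S_2$. The image of $\Phi_{\#}$ is therefore not contained in the image of $P_2(\mathbb S^{1})$, and \relem{split1} shows that $\phi$ is not split.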
As a consequence of \relem{split1}, we are able to prove~\reth{sccfpp}.
\begin{proof}[Proof of \reth{sccfpp}]
Let $X$ be a simply-connected metric space that has the fixed point property. By \relem{split1}, any $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ is split. Writing $\phi=\{f_1,\ldots,f_n\}$, each of the maps $f_i\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} X$ is a self-map of $X$ that has at least one fixed point. Since the maps $f_1,\ldots,f_n$ are pairwise coincidence-free, their fixed points are pairwise distinct, so $\phi$ has at least $n$ fixed points, and in particular, $X$ has the fixed point property for $n$-valued maps. The last part of the statement then follows.
\end{proof}
\subsection{$n$-valued maps of the sphere $\mathbb S^{2k}$}\label{sec:sph}
Let $k\geq 1$. Although $\mathbb S^{2k}$ does not have the fixed point property for self-maps, we shall show in this section that it has the fixed point property for $n$-valued maps for all $n>1$, which is the statement of \repr{S2fp}. We first prove a lemma.
\begin{lem}\label{lem:S2split}
Let $n\geq 1$ and $k\geq 2$. Then any $n$-valued map of $\mathbb S^{k}$ is split.
\end{lem}
\begin{proof}
The result follows from \relem{split1} using the fact that $\mathbb S^k$ is simply connected.
\end{proof}
\begin{proof}[Proof of \repr{S2fp}]
Let $n\geq 2$, and let $\phi \colon\ensuremath{\up{th}}inspace \mathbb S^{2k} \multimap \mathbb S^{2k}$ be an $n$-valued map. By \relem{S2split}, $\phi$ is split, so it admits a lift $\widehat{\Phi}\colon\ensuremath{\up{th}}inspace \mathbb S^{2k} \ensuremath{\longrightarrow} F_{n}(\St[2k])$, where $\widehat{\Phi}=(f_1,f_2,\ldots,f_n)$. Since $f_1(x)\ne f_2(x)$ for all $x\in \St[2k]$, the point $f_2(x)$ is never the antipode of $-f_1(x)$, and it follows that $f_2$ is homotopic to $-f_1$ via the homotopy that for all $x\in \mathbb S^{2k}$, takes $-f_1(x)$ to $f_2(x)$ along the unique geodesic that joins them. In particular, $\deg(f_2)=\deg(-f_1)=-\deg(f_1)$, so the degree of at least one of the maps $f_1$ and $f_2$ is different from $-1$. A fixed point free self-map of $\St[2k]$ is homotopic to the antipodal map, whose degree is $-1$, so this map has a fixed point, which implies that $\phi$ has a fixed point.
\end{proof}
\begin{rem}
If $n>2$ and $k=1$ then the result of \repr{S2fp} is clearly true since by~\cite[pp.~43--44]{GG5}, the set $[\St, F_{n}(\St)]$ of homotopy classes of maps between $\St$ and $F_{n}(\St)$ contains only one class, which is that of the constant map. So any representative of this class is of the form $\phi=(f_1,\ldots,f_n)$, where all of the maps $f_i\colon\ensuremath{\up{th}}inspace \St\ensuremath{\longrightarrow} \St$ are homotopic to the constant map. Such a map always has a fixed point, and hence $\phi$ has at least $n$ fixed points.
\end{rem}
\section{$n$-valued maps of the projective space $\ensuremath{\mathbb R} P^{2k}$}\label{sec:rp2k}
In this section, we will show that the projective space $\ensuremath{\mathbb R} P^{2k}$ also has the fixed point property for $n$-valued maps, which is the statement of \reth{rp2Kfpp}. Since $\ensuremath{\mathbb R} P^{2k}$ is not simply connected, we will require more elaborate arguments than those used in Sections~\ref{sec:disc} and~\ref{sec:sph}. We separate the discussion into two cases, $k=1$, and $k>1$.
\subsection{$n$-valued maps of \ensuremath{{\mathbb R}P^2}}\label{sec:rp2ka}
The following result is the analogue of \relem{S2split} for $\ensuremath{{\mathbb R}P^2}$.
\begin{lem}\label{lem:rp2split}
Let $n\geq 1$. Then any $n$-valued map of the projective plane is split.
\end{lem}
\begin{proof}
Let $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ be an $n$-valued map, let $\Phi\colon\ensuremath{\up{th}}inspace \ensuremath{{\mathbb R}P^2} \ensuremath{\longrightarrow} D_n(\ensuremath{{\mathbb R}P^2})$ be the associated $n$-unordered map, and let $\Phi_{\#}\colon\ensuremath{\up{th}}inspace \pi_{1}(\ensuremath{{\mathbb R}P^2}) \ensuremath{\longrightarrow} B_{n}(\ensuremath{{\mathbb R}P^2})$ be the homomorphism induced on the level of fundamental groups. Since $\pi_{1}(\ensuremath{{\mathbb R}P^2})$ is isomorphic to the cyclic group of order $2$, it follows that $\im{\Phi_{\#}}$ is contained in the subgroup $\ang{\ft}$ of $B_{n}(\ensuremath{{\mathbb R}P^2})$ generated by the full twist braid $\ft$ of $B_n(\ensuremath{{\mathbb R}P^2})$ because $\ft$ is the unique element of $B_n(\ensuremath{{\mathbb R}P^2})$ of order $2$ \cite[Proposition~23]{GG3}. But $\ft$ is a pure braid, so $\im{\Phi_{\#}}\subset P_{n}(\ensuremath{{\mathbb R}P^2})$. Thus $\Phi$ factors through $\pi$, and hence $\phi$ is split by \relem{split1} as required.
\end{proof}
\begin{prop}\label{prop:RP2fp}
Let $n\geq 1$. Then any $n$-valued map of $\ensuremath{{\mathbb R}P^2}$ has at least $n$ fixed points; in particular, $\ensuremath{{\mathbb R}P^2}$ has the fixed point property for $n$-valued maps.
\end{prop}
\begin{proof}
Let $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{{\mathbb R}P^2} \multimap \ensuremath{{\mathbb R}P^2}$ be an $n$-valued map of $\ensuremath{{\mathbb R}P^2}$. Then $\phi$ is split by \relem{rp2split}, and so $\phi=\brak{f_1,\ldots,f_n}$, where $f_{1},\ldots, f_{n}\colon\ensuremath{\up{th}}inspace \ensuremath{{\mathbb R}P^2} \ensuremath{\longrightarrow} \ensuremath{{\mathbb R}P^2}$ are pairwise coincidence-free self-maps of $\ensuremath{{\mathbb R}P^2}$. But $\ensuremath{{\mathbb R}P^2}$ has the fixed point property, and so for $i=1,\ldots,n$, $f_{i}$ has a fixed point. Since the maps $f_{i}$ are pairwise coincidence free, these fixed points are pairwise distinct, and hence $\phi$ has at least $n$ fixed points.
\end{proof}
\subsection{$n$-valued maps of $\ensuremath{\mathbb R} P^{2k}$, $k>1$}\label{sec:proje}
The aim of this section is to prove that $\ensuremath{\mathbb R} P^{2k}$ has the fixed point property for $n$-valued maps for all $n\geq 1$ and $k>1$. Indeed, we will show that every such $n$-valued map has at least $n$ fixed points.
Given an $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ of a topological space $X$, we consider the corresponding map $\Phi\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_n(X)$, and the induced homomorphism $\Phi_{\#}\colon\ensuremath{\up{th}}inspace \pi_1(X) \ensuremath{\longrightarrow} \pi_1(D_n(X))$ on the level of fundamental groups, where $\pi_1(D_n(X))=B_n(X)$. By the short exact sequence~\reqref{sesbraid}, $P_n(X)$ is a normal subgroup of $B_n(X)$ of finite index $n!$, so the subgroup $H=\Phi_{\#}^{-1}(P_n(X))$ is a normal subgroup of $\pi_1(X)$ of finite index. Further, setting $L=\pi_1(X)/H$, the composition $\pi_1(X) \stackrel{\Phi_{\#}}{\ensuremath{\longrightarrow}} B_n(X) \stackrel{\tau}{\ensuremath{\longrightarrow}} S_n$ is a homomorphism that induces a homomorphism from $L$ to $S_n$.
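Note that, by the short exact sequence~\reqref{sesbraid}, $P_n(X)=\ker{\tau}$, so that
\begin{equation*}
H=\Phi_{\#}^{-1}(P_n(X))=\Phi_{\#}^{-1}(\ker{\tau})=\ker(\tau\circ \Phi_{\#}),
\end{equation*}
and hence the induced homomorphism from $L$ to $S_n$ is injective.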
\begin{prop}\label{prop:nielsen}
Let $n\in \ensuremath{\mathbb N}$. Suppose that $X$ is a connected, locally arcwise-connected metric space.
With the above notation, there exists a covering $q\colon\ensuremath{\up{th}}inspace \widehat{X} \ensuremath{\longrightarrow} X$ of $X$ that corresponds to the subgroup $H$, and the $n$-valued map $\phi_1=\phi \circ q\colon\ensuremath{\up{th}}inspace \widehat{X}\multimap X$ admits exactly $n!$ lifts, which are $n$-ordered maps from $\widehat{X}$ to $F_n(X)$. If one such lift $\widehat{\Phi}_{1}\colon\ensuremath{\up{th}}inspace \widehat{X}\ensuremath{\longrightarrow} F_n(X)$ is given by $\widehat{\Phi}_{1}=(f_1,\ldots, f_n)$, where for $i=1,\ldots,n$, $f_i$ is a map from $\widehat{X}$ to $X$, then the other lifts are of the form $(f_{\tau(1)},\ldots,f_{\tau(n)})$, where $\tau\in S_n$.
\end{prop}
\begin{proof}
The first part is a consequence of~\cite[Theorem~5.1, Chapter~V, Section~5]{Mas}, using the observation that $S_{n}$ is the deck transformation group corresponding to the covering $\pi\colon\ensuremath{\up{th}}inspace F_n(X) \ensuremath{\longrightarrow} D_n(X)$. The second part follows from the fact that $S_n$ acts freely on the covering space $F_n(X)$ by permuting coordinates.
\end{proof}
The fixed points of the $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X\multimap X$ may be described in terms of the coincidences of the covering map $q\colon\ensuremath{\up{th}}inspace \widehat{X} \ensuremath{\longrightarrow} X$ with the maps $f_{1},\ldots, f_{n}$ given in the statement of \repr{nielsen}.
\begin{prop}\label{prop:coinfix} Let $n\in \ensuremath{\mathbb N}$, let
$X$ be a connected, locally arcwise-connected, metric space, let $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ be an $n$-valued map, and let ${\widehat \Phi_1}=(f_1,\ldots,f_n)\colon\ensuremath{\up{th}}inspace \widehat{X} \ensuremath{\longrightarrow} F_n(X)$ be an $n$-ordered map that is a lift of $\phi_1=\phi\circ q\colon\ensuremath{\up{th}}inspace \widehat{X} \multimap X$ as in \repr{nielsen}. Then the map $q$ restricts to a surjection $q\colon\ensuremath{\up{th}}inspace \bigcup_{i=1}^{n} \operatorname{\text{Coin}}(q, f_i) \ensuremath{\longrightarrow} \operatorname{\text{Fix}}(\phi)$.
Furthermore, the pre-image of a point $x\in \operatorname{\text{Fix}}(\phi)$ by this map is precisely $q^{-1}(x)$, namely the fibre over $x\in X$ of the covering map $q\colon\ensuremath{\up{th}}inspace \widehat{X} \ensuremath{\longrightarrow} X$.
\end{prop}
\begin{proof}
Let $\widehat{x}\in \operatorname{\text{Coin}}(q, f_i)$ for some $1\leq i\leq n$, and let $x=q(\widehat{x})$. Then $f_i(\widehat{x})=q(\widehat{x})$, and since $\Phi(x)= \pi\circ \widehat{\Phi}_1(\widehat{x})=\brak{f_1(\widehat{x}),\ldots,f_n(\widehat{x})}$, it follows that $x\in \phi(x)$, \emph{i.e.}\ $x\in \operatorname{\text{Fix}}(\phi)$, so the map is well defined. To prove surjectivity and the second part of the statement, it suffices to show that if $x \in \operatorname{\text{Fix}}(\phi)$, then any element $\widehat{x}$ of $q^{-1}(x)$ belongs to $\bigcup_{i=1}^{n} \operatorname{\text{Coin}}(q, f_i) $. So let $x\in \phi(x)$, and let $\widehat{x}\in \widehat{X}$ be such that $q(\widehat{x})=x$. By commutativity of the following diagram:
\begin{equation*}
\begin{tikzcd}[ampersand replacement=\&]
\&\& F_{n}(X) \ar{d}{\pi}\\
\widehat{X} \ar[swap]{r}{q} \ar[dashrightarrow, end anchor=south west]{rru}{\widehat{\Phi}_1} \& X \ar[swap]{r}{\Phi} \& D_{n}(X),
\end{tikzcd}
\end{equation*}
and the fact that $x\in \operatorname{\text{Fix}}(\phi)$, it follows that $x$ is one of the coordinates, the $j\up{th}$ coordinate say, of $\widehat{\Phi}_1(\widehat{x})$. This implies that $\widehat{x}\in \operatorname{\text{Coin}}(q, f_j)$, which completes the proof of the proposition.
\end{proof}
\begin{prop}\label{prop:nullhomo}
Let $n,k>1$. If $\phi\colon\ensuremath{\up{th}}inspace \St[2k] \multimap \ensuremath{\mathbb R} P^{2k}$ is an $n$-valued map, then $\phi$ is split, and for $i=1,\ldots,n$, there exist maps $f_i\colon\ensuremath{\up{th}}inspace \St[2k] \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$ for which $\phi=\{f_1,\ldots,f_n\}$. Further, $f_i$ is null homotopic for all $i\in \brak{1,\ldots,n}$.
\end{prop}
\begin{proof}
The first part follows from \relem{split1}. It remains to prove the second part, \emph{i.e.}\ that each $f_i$ is null homotopic. Since $\St[2k]$ is simply connected, the set $[\mathbb{S}^{2k}, \mathbb{S}^{2k}]=[\mathbb{S}^{2k}, \mathbb{S}^{2k}]_{0}=\pi_{2k}(\mathbb{S}^{2k})$, where $[ \cdot , \cdot ]_{0}$ denotes basepoint-preserving homotopy classes of maps. Let $x_0\in \mathbb{S}^{2k}$ be a basepoint, let $p\colon\ensuremath{\up{th}}inspace \St[2k] \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$ be the two-fold covering, and let $\overline{x}_0=p(x_{0})\in \ensuremath{\mathbb R} P^{2k}$ be the basepoint of $\ensuremath{\mathbb R} P^{2k}$. Consider the natural map $[(\mathbb{S}^{2k}, x_0), (\mathbb{S}^{2k}, x_0)] \ensuremath{\longrightarrow} [(\mathbb{S}^{2k}, x_0), (\ensuremath{\mathbb R} P^{2k}, \overline{x}_0)]$ from the set of based homotopy classes of self-maps of $\St[2k]$ to the based homotopy classes of maps from $\St[2k]$ to $\ensuremath{\mathbb R} P^{2k}$,
which, to the homotopy class of a basepoint-preserving self-map of $\St[2k]$, associates the homotopy class of the composition of this self-map with $p$.
This correspondence may be identified with the homomorphism $p_{\#}\colon\ensuremath{\up{th}}inspace \pi_{2k}(\St[2k]) \ensuremath{\longrightarrow} \pi_{2k}(\ensuremath{\mathbb R} P^{2k})$ induced by $p$, which is an isomorphism because $p$ is a covering map and $2k\geq 2$. The covering map $p$ has topological degree $2$~\cite{Ep}, and every map $f\colon\ensuremath{\up{th}}inspace \St[2k] \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$ is homotopic to the composition of a self-map of $\St[2k]$ with $p$, so the degree of such a map $f$ is an even integer (we use the system of local coefficients given by the orientation of $\ensuremath{\mathbb R} P^{2k}$~\cite{Ol}). Since $H^{l}(\ensuremath{\mathbb R} P^{2k}, \widetilde{\mathbb{Q}})=0$ for all $l\ne 2k$, where $\widetilde{\mathbb{Q}}$ denotes the rational coefficients twisted by the orientation, it follows that for $i\neq j$, the Lefschetz coincidence number $L(f_{i},f_{j})$ is equal to $\deg(f_i)$. But $f_i$ and $f_j$ are coincidence free, so their Lefschetz coincidence number must be zero,
which implies that $\deg(f_i)=0$~\cite{GJ}. Since $n>1$, we conclude that $\deg(f_i)=0$ for all $1\leq i\leq n$, and the result follows.
\end{proof}
We are now able to prove the main result of this section, that $\ensuremath{\mathbb R} P^{2k}$ has the fixed point property for $n$-valued maps for all $k,n\geq 1$.
\begin{proof}[Proof of \reth{rp2Kfpp}]
The case $n=1$ is classical, so assume that $n\geq 2$. We use the notation introduced at the beginning of this section, taking $X=\ensuremath{\mathbb R} P^{2k}$. Since $\pi_1(\ensuremath{\mathbb R} P^{2k})\cong\ensuremath{\mathbb Z}_2$, $H$ is either $\pi_1(\ensuremath{\mathbb R} P^{2k})$ or the trivial group. In the former case, $\im{\Phi_{\#}}$ is contained in $P_n(\ensuremath{\mathbb R} P^{2k})$, so the $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb R} P^{2k} \multimap \ensuremath{\mathbb R} P^{2k}$ is split, and $\operatorname{\text{Fix}}(\phi)=\bigcup_{i=1}^{n} \operatorname{\text{Fix}}(f_i)$,
where $\phi=\brak{f_1,\ldots,f_n}$, and for all $i=1,\ldots,n$, $f_{i}$ is a self-map of $\ensuremath{\mathbb R} P^{2k}$. Since $\ensuremath{\mathbb R} P^{2k}$ has the fixed point property and the maps $f_{i}$ are pairwise coincidence free, it follows that $\phi$ has at least $n$ fixed points. So suppose that $H$ is the trivial subgroup of $\pi_1(\ensuremath{\mathbb R} P^{2k})$. Then $\widehat{\ensuremath{\mathbb R} P^{2k}}=\St[2k]$, and $q$ is the covering map $p\colon\ensuremath{\up{th}}inspace \St[2k] \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$.
We first consider the case $n=2$. Let $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb R} P^{2k} \multimap \ensuremath{\mathbb R} P^{2k}$ be a $2$-valued map, and let $\widehat{\Phi}_1\colon\ensuremath{\up{th}}inspace \mathbb{S}^{2k} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb R} P^{2k})$ be a lift of the map $\Phi_1=\Phi\circ p\colon\ensuremath{\up{th}}inspace \mathbb{S}^{2k} \ensuremath{\longrightarrow} D_2(\ensuremath{\mathbb R} P^{2k})$ that factors through the projection $\pi\colon\ensuremath{\up{th}}inspace F_2(\ensuremath{\mathbb R} P^{2k}) \ensuremath{\longrightarrow} D_2(\ensuremath{\mathbb R} P^{2k})$. By \repr{nielsen}, $\widehat{\Phi}_1=(f_1, f_2)$, where for $i=1,2$, $f_i\colon\ensuremath{\up{th}}inspace \mathbb{S}^{2k} \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$ is a single-valued map, and it follows from \repr{nullhomo} that $f_1$ and $f_2$ are null homotopic.
If $\operatorname{\text{Coin}}(f_i, p)= \ensuremath{\varnothing}$ for some $i\in \brak{1,2}$, then arguing as in the second part of the proof of \repr{nullhomo}, it follows that $L(p,f_i)=\deg(p)=2$, which is non-zero, and so $p$ and $f_i$ admit a coincidence, a contradiction. So $\operatorname{\text{Coin}}(f_i, p)\ne \ensuremath{\varnothing}$ for all $i\in \brak{1,2}$. Using the fact that $f_{1}$ and $f_{2}$ are coincidence free, we conclude that $\phi$ has at least two fixed points, and the result follows in this case.
Finally suppose that $n>2$. Arguing as in the case $n=2$, we obtain a lift of $\phi\circ p$ of the form $(f_1, \ldots, f_n)$, where for $i=1,\ldots,n$, $f_i\colon\ensuremath{\up{th}}inspace \mathbb{S}^{2k} \ensuremath{\longrightarrow} \ensuremath{\mathbb R} P^{2k}$ is a map. For each $i\in \brak{1,\ldots,n}$, choosing some $j\in \brak{1,\ldots,n}$ with $j\neq i$, we may apply the above argument to $f_i$ and $f_j$ to obtain $\operatorname{\text{Coin}}(f_i, p)\ne \ensuremath{\varnothing}$. Hence $\phi$ has at least $n$ fixed points, and the result follows.
\end{proof}
\begin{rem}
If $n>1$, we do not know whether there exists a non-split $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb R} P^{2k} \multimap \ensuremath{\mathbb R} P^{2k}$.
\end{rem}
\section{Deforming (split) $n$-valued maps to fixed point and root-free maps}\label{sec:fixfree}
In this section, we generalise to the $n$-valued case a standard procedure for deciding whether a single-valued map may be deformed to a fixed point free map. We start by giving a necessary and sufficient condition for an $n$-valued map (resp.\ a split $n$-valued map) $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ to be deformable to a fixed point free $n$-valued map (resp.\ split fixed point free $n$-valued map), at least in the case where $X$ is a manifold without boundary. This enables us to prove \reth{defchineg}. We then go on to give the analogous statements for roots. Recall from \resec{intro} that $D_{1,n}(X)$ is the quotient of $F_{n+1}(X)$ by the action of the subgroup $\brak{1} \times S_n$ of the symmetric group $S_{n+1}$, and that $B_{1,n}(X)=\pi_1(D_{1,n}(X))$.
\begin{prop}\label{prop:defor} Let $n\in \ensuremath{\mathbb N}$, let $X$ be a metric space, and let $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ be an $n$-valued map.
If $\phi$ can be deformed to a fixed point free $n$-valued map, then there exists a map $\Theta\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} D_{1,n}(X)$ such that the following diagram is commutative up to homotopy:
\begin{equation}\label{eq:commdiag3}
\begin{tikzcd}[ampersand replacement=\&]
\& \& D_{1,n}(X) \ar{d}{\iota_{1,n}}\\
X \ar[swap]{rr}{\ensuremath{\operatorname{\text{Id}}}_{X}\times \Phi} \ar[dashrightarrow, end anchor=south west]{rru}{\Theta}
\& \& X \times D_{n}(X),
\end{tikzcd}
\end{equation}
where $\iota_{1,n}\colon\ensuremath{\up{th}}inspace D_{1,n}(X) \ensuremath{\longrightarrow} X \times D_n(X)$ is the inclusion map. Conversely, if $X$ is a manifold without boundary and there exists a map $\Theta\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} D_{1,n}(X)$ such that diagram~\reqref{commdiag3} is commutative up to homotopy, then the $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ may be deformed to a fixed point free $n$-valued map.
\end{prop}
\begin{proof}
For the first part, if $\phi'$ is a fixed point free deformation of $\phi$ and $\Phi'$ is its associated $n$-unordered map, then we may take the factorisation map $\Theta\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} D_{1,n}(X)$ to be that defined by $\Theta(x)= (x, \Phi'(x))$. For the converse, the argument is similar to that of the single-valued case, and is as follows.
Let $\Theta\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} D_{1,n}(X)$ be a homotopy factorisation that satisfies the hypotheses. Composing $\Theta$ with the projection $\overline{p}_{1,n}\colon\ensuremath{\up{th}}inspace D_{1,n}(X)\ensuremath{\longrightarrow} X$ onto the first coordinate, we obtain a self-map of $X$ that is homotopic to the identity. Let $H\colon\ensuremath{\up{th}}inspace X\times I \ensuremath{\longrightarrow} X$ be a homotopy between $\overline{p}_{1,n}\circ \Theta$ and $\ensuremath{\operatorname{\text{Id}}}_{X}$. Since $\overline{p}_{1,n}\colon\ensuremath{\up{th}}inspace D_{1,n}(X)\ensuremath{\longrightarrow} X$ is a fibration and $\Theta$ is a lift of the restriction of $H$ to $X\times \brak{0}$, $H$ lifts to a homotopy $\widetilde{H}\colon\ensuremath{\up{th}}inspace X\times I \ensuremath{\longrightarrow} D_{1,n}(X)$. The restriction of $\widetilde{H}$ to $X\times \brak{1}$ yields the required deformation.
\end{proof}
For split $n$-valued maps, the correspondence given by \relem{split}(\ref{it:splitII}) gives rise to a statement analogous to that of \repr{defor} in terms of $F_{n}(X)$.
\begin{prop}\label{prop:equivsplit} Let $n\in \ensuremath{\mathbb N}$, let $X$ be a metric space, and let $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ be a split $n$-valued map. If $\phi$ can be deformed to a fixed point free $n$-valued map, and if $\widehat{\Phi}\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_{n}(X)$ is a lift of $\phi$, then there exists a map $\widehat{\Theta}\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_{n+1}(X)$ such that the following diagram is commutative up to homotopy:
\begin{equation}\label{eq:commdiag4}
\begin{tikzcd}[ampersand replacement=\&]
\&\& F_{n+1}(X) \ar{d}{\widehat{\iota}_{n+1}}\\
X \ar[swap]{rr}{\ensuremath{\operatorname{\text{Id}}}_{X}\times \widehat{\Phi}} \ar[dashrightarrow, end anchor=south west]{rru}{\widehat{\Theta}} \&\& X \times F_{n}(X),
\end{tikzcd}
\end{equation}
where $\widehat{\iota}_{n+1}\colon\ensuremath{\up{th}}inspace F_{n+1}(X) \ensuremath{\longrightarrow} X \times F_n(X)$ is the inclusion map. Conversely, if $X$ is a manifold without boundary and there exists a map $\widehat{\Theta}\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} F_{n+1}(X)$ such that diagram~\reqref{commdiag4} is commutative up to homotopy, then the split $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X \multimap X$ may be deformed through split maps to a fixed point free split $n$-valued map.
\end{prop}
\begin{proof}
Similar to that of \repr{defor}.
\end{proof}
We now apply Propositions~\ref{prop:defor} and~\ref{prop:equivsplit} to prove \reth{defchineg}, which treats the case where $X$ is a compact surface without boundary (orientable or not) of non-positive Euler characteristic.
\begin{proof}[Proof of \reth{defchineg}] The space $D_{1,n}(X)$ is a finite covering of $D_{1+n}(X)$, so it is a $K(\pi, 1)$, since $D_{1+n}(X)$ is a $K(\pi, 1)$ by~\cite[Corollary~2.2]{FaN}. To prove the `only if' implication of part~(\ref{it:defchinega}), note that since the space $D_{1,n}(X)$ is a $K(\pi, 1)$, the existence of diagram~\reqref{commdiag1} implies that of diagram~\reqref{commdiag3}, where $\varphi=\Theta_{\#}$ is the homomorphism induced by $\Theta$ on the level of fundamental groups. Conversely, diagram~\reqref{commdiag3} implies that the two maps $\iota_{1,n}\circ \Theta$ and $\ensuremath{\operatorname{\text{Id}}}_X\times \Phi$ are homotopic, but not necessarily by a basepoint-preserving homotopy, so diagram~\reqref{commdiag1} is commutative up to conjugacy. Let $\delta\in\pi_{1}(X) \times B_{n}(X)$ be such that $(\iota_{1,n})_{\#}\circ \varphi(\alpha)=\delta (\ensuremath{\operatorname{\text{Id}}}_X\times \Phi)_{\#}(\alpha) \delta^{-1}$ for all $\alpha\in \pi_1(X)$, and let $\widehat{\delta}\in B_{1,n}(X)$ be an element such that $(\iota_{1,n})_{\#}(\widehat{\delta})=\delta$. Considering the homomorphism $\varphi'\colon\ensuremath{\up{th}}inspace \pi_1(X) \ensuremath{\longrightarrow} B_{1,n}(X)$ defined by $\varphi'(\alpha)=\widehat{\delta}^{-1}\varphi (\alpha)\widehat{\delta}$ for all $\alpha\in \pi_1(X)$, we obtain the commutative diagram~\reqref{commdiag1}, where we replace $\varphi$ by $\varphi'$. The proof of part~(\ref{it:defchinegb}) is similar, and is left to the reader.
\end{proof}
For the case of roots, we now give statements analogous to those of Propositions~\ref{prop:defor} and~\ref{prop:equivsplit} and of \reth{defchineg}. The proofs are similar to those of the corresponding statements for fixed points, and the details are left to the reader.
\begin{prop}\label{prop:rootI} Let $n\in \ensuremath{\mathbb N}$, let $X$ and $Y$ be metric spaces, let $y_0\in Y$ be a basepoint, and let $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ be an $n$-valued map. If $\phi$ can be deformed to a root-free $n$-valued map, then there exists a map $\Theta\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} D_{n}(Y\backslash\{y_0\})$ such that the following diagram is commutative up to homotopy:
\begin{equation}\label{eq:commdiag3I}
\begin{tikzcd}[ampersand replacement=\&]
\& \& D_{n}(Y\backslash\{y_0\}) \ar{d}{\iota_{n}}\\
X \ar[swap]{rr}{ \Phi} \ar[dashrightarrow, end anchor=south west]{rru}{\Theta}
\& \& D_{n}(Y),
\end{tikzcd}
\end{equation}
where the map $\iota_{n}\colon\ensuremath{\up{th}}inspace D_{n}(Y\backslash\{y_0\}) \ensuremath{\longrightarrow} D_n(Y)$ is induced by the inclusion map $Y\backslash \{y_0\} \mathrel{\lhook\joinrel\ensuremath{\longrightarrow}} Y$. Conversely, if $Y$ is a manifold without boundary and there exists a map $\Theta\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} D_{n}(Y\backslash\{y_0\})$ such that diagram~\reqref{commdiag3I} is commutative up to homotopy, then the $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ may be deformed to a root-free $n$-valued map.
\end{prop}
For split $n$-valued maps, the correspondence of \relem{split}(\ref{it:splitII}) gives rise to a statement analogous to that of Proposition \ref{prop:rootI} in terms of $F_{n}(Y)$.
\begin{prop}\label{prop:equivsplitI} Let $n\in \ensuremath{\mathbb N}$, let $X$ and $Y$ be metric spaces, let $y_0\in Y$ be a basepoint, and let $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ be a split $n$-valued map. If $\phi$ can be deformed to a root-free $n$-valued map, then there exist a map $\widehat{\Theta}\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_{n}(Y\backslash \{y_0\})$ and a lift $\widehat{\Phi}$ of $\phi$ such that the following diagram is commutative up to homotopy:
\begin{equation}\label{eq:commdiag4I}
\begin{tikzcd}[ampersand replacement=\&]
\&\& F_{n}(Y\backslash \{y_0\}) \ar{d}{\widehat{\iota}_{n}}\\
X \ar[swap]{rr}{ \widehat{\Phi}} \ar[dashrightarrow,
end anchor=south west]{rru}{\widehat{\Theta}} \&\& F_{n}(Y),
\end{tikzcd}
\end{equation}
where the map $\widehat{\iota}_{n}\colon\ensuremath{\up{th}}inspace F_{n}(Y\backslash \{y_0\}) \ensuremath{\longrightarrow} F_n(Y)$ is induced by the inclusion map $Y\backslash \{y_0\} \mathrel{\lhook\joinrel\ensuremath{\longrightarrow}} Y$. Conversely, if $Y$ is a manifold without boundary, and there exists a map $\widehat{\Theta}\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} F_{n}(Y\backslash \{y_0\})$ such that diagram~\reqref{commdiag4I} is commutative up to homotopy, then the split $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ may be deformed through split maps to a root-free split $n$-valued map.
\end{prop}
Propositions~\ref{prop:rootI} and~\ref{prop:equivsplitI} may be applied to the case where $X$ and $Y$ are compact surfaces without boundary of non-positive Euler characteristic to obtain the analogue of \reth{defchineg} for roots.
\begin{thm}\label{th:defchinegI}
Let $n\in \ensuremath{\mathbb N}$, and let $X$ and $Y$ be compact surfaces without boundary of non-positive Euler characteristic.
\begin{enumerate}[(a)]
\item\label{it:defchinegal} An $n$-valued map
$\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ can be deformed to a root-free $n$-valued map if and only if there is a homomorphism
$\varphi\colon\ensuremath{\up{th}}inspace \pi_1(X) \ensuremath{\longrightarrow} B_{n}(Y\backslash\{y_0\})$ that makes the following diagram commute:
\begin{equation*}
\begin{tikzcd}[ampersand replacement=\&]
\&\& B_{n}(Y\backslash\{y_0\}) \ar{d}{(\iota_{n})_{\#}}\\
\pi_{1}(X) \ar[swap]{rr}{\Phi_{\#}} \ar[dashrightarrow, end anchor=south west]{rru}{\varphi} \&\& B_{n}(Y),
\end{tikzcd}
\end{equation*}
where $\iota_{n}\colon\ensuremath{\up{th}}inspace D_{n}(Y\backslash\{y_0\}) \ensuremath{\longrightarrow} D_{n}(Y)$ is induced by the inclusion map $Y\backslash \{y_0\} \mathrel{\lhook\joinrel\ensuremath{\longrightarrow}} Y$, and $\Phi\colon\ensuremath{\up{th}}inspace X\ensuremath{\longrightarrow} D_{n}(Y)$ is the $n$-unordered map associated to $\phi$.
\item\label{it:defchinegbI} A split $n$-valued map $\phi\colon\ensuremath{\up{th}}inspace X \multimap Y$ can be deformed to a root-free $n$-valued map if and only if there exist a lift $\widehat{\Phi}\colon\ensuremath{\up{th}}inspace X \ensuremath{\longrightarrow} F_n(Y)$ of $\phi$ and a homomorphism $\widehat{\varphi}\colon\ensuremath{\up{th}}inspace \pi_1(X) \ensuremath{\longrightarrow} P_{n}(Y\backslash\{y_0\})$ that make the following diagram commute:
\begin{equation*}
\begin{tikzcd}[ampersand replacement=\&]
\&\& P_{n}(Y\backslash\{y_0\}) \ar{d}{(\widehat{\iota}_{n})_{\#}}\\
\pi_{1}(X) \ar[swap]{rr}{\widehat{\Phi}_{\#}} \ar[dashrightarrow, end anchor=south west]{rru}{\widehat{\varphi}} \&\& P_{n}(Y),
\end{tikzcd}
\end{equation*}
where $\widehat{\iota}_{n}\colon\ensuremath{\up{th}}inspace F_{n}(Y\backslash\{y_0\}) \ensuremath{\longrightarrow} F_{n}(Y)$ is induced by the inclusion map $Y\backslash \{y_0\} \mathrel{\lhook\joinrel\ensuremath{\longrightarrow}} Y$.
\end{enumerate}
\end{thm}
\section{An application to split $2$-valued maps of the $2$-torus}\label{sec:toro}
In this section, we will use some of the ideas and results of \resec{fixfree} to study the fixed point theory of $2$-valued maps of the $2$-torus $\ensuremath{\mathbb{T}^{2}}$. We restrict our attention to the case where the maps are split, \emph{i.e.}\ we consider $2$-valued maps of the form $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ that admit a lift $\widehat{\Phi}\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$, where $\widehat{\Phi}=(f_1, f_2)$, $f_1$ and $f_2$ being coincidence-free self-maps of $\ensuremath{\mathbb{T}^{2}}$. We classify the set of homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$, and we study the question of the characterisation of those split $2$-valued maps that can be deformed to fixed point free $2$-valued maps. The case of arbitrary $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ will be treated in a forthcoming paper. In \resec{toro2}, we give presentations of the groups $P_{2}(\ensuremath{\mathbb{T}^{2}})$, $B_{2}(\ensuremath{\mathbb{T}^{2}})$ and $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ that will be used in the following sections, where $1$ denotes a basepoint of $\ensuremath{\mathbb{T}^{2}}$. In \resec{descript}, we describe the set of based and free homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$. In \resec{fptsplit2}, we give a formula for the Nielsen number, and we derive a necessary condition for such a split $2$-valued map to be deformable to a fixed point free $2$-valued map. We then give an infinite family of homotopy classes of
split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ that satisfy this condition and that
may be deformed to fixed point free $2$-valued maps. To facilitate the calculations, in \resec{p2Tminus1}, we shall show that the fixed point problem is equivalent to a root problem.
\subsection{The groups $P_{2}(\ensuremath{\mathbb{T}^{2}})$, $B_{2}(\ensuremath{\mathbb{T}^{2}})$ and $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$}\label{sec:toro2}
In this section, we give presentations of $P_{2}(\ensuremath{\mathbb{T}^{2}})$, $B_{2}(\ensuremath{\mathbb{T}^{2}})$ and $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ that will be used in the following sections. Other presentations of these groups may be found in the literature, see~\cite{Bel,Bi,GM,Sco} for example. We start by considering the group $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$. If $u$ and $v$ are elements of a group $G$, we denote their commutator $uvu^{-1}v^{-1}$ by $[u,v]$, and the commutator subgroup of $G$ by $\Gamma_{2}(G)$. If $A$ is a subset of $G$ then $\ang{\!\ang{A}\!}_{G}$ will denote the normal closure of $A$ in $G$.
\begin{prop}\label{prop:presP2T1}
The group $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ admits the following presentation:
\begin{enumerate}
\item[generators:] $\rho_{1,1}$, $\rho_{1,2}$, $\rho_{2,1}$, $\rho_{2,2}$, $B_{1,2}$, $B$ and $B'$.
\item[relations:]\mbox{}
\begin{enumerate}[(a)]
\item\label{it:presP2T1a} $\rho_{2,1}\rho_{1,1}\rho_{2,1}^{-1}=B_{1,2}\rho_{1,1}B_{1,2}^{-1}$.
\item $\rho_{2,1}\rho_{1,2}\rho_{2,1}^{-1}=B_{1,2}\rho_{1,2}\rho_{1,1}^{-1}B_{1,2}\rho_{1,1}B_{1,2}^{-1}$.
\item $\rho_{2,2}\rho_{1,1}\rho_{2,2}^{-1}=\rho_{1,1}B_{1,2}^{-1}$.
\item $\rho_{2,2}\rho_{1,2}\rho_{2,2}^{-1}=B_{1,2}\rho_{1,2}B_{1,2}^{-1}$.
\item\label{it:presP2T1e} $\rho_{2,1}B\rho_{2,1}^{-1}=B$ and $\rho_{2,2}B\rho_{2,2}^{-1}=B$.
\item\label{it:presP2T1f} $\rho_{2,1}B_{1,2}\rho_{2,1}^{-1}=B_{1,2} \rho_{1,1}^{-1}B_{1,2} \rho_{1,1}B_{1,2}^{-1}$ and $\rho_{2,2}B_{1,2}\rho_{2,2}^{-1}=B_{1,2} \rho_{1,2}^{-1}B_{1,2} \rho_{1,2}B_{1,2}^{-1}$.
\item\label{it:relf} $B'\rho_{1,1}B'^{-1}=\rho_{1,1}$ and $B'\rho_{1,2}B'^{-1}=\rho_{1,2}$.
\item\label{it:relg} $B'B_{1,2}B'^{-1}=B_{1,2}^{-1}B^{-1} B_{1,2}BB_{1,2}$ and $B'BB'^{-1}=B_{1,2}^{-1}BB_{1,2}$.
\item\label{it:relh} $[\rho_{1,1},\rho_{1,2}^{-1}]=BB_{1,2}$ and $[\rho_{2,1},\rho_{2,2}^{-1}]=B_{1,2}B'$.
\end{enumerate}
\end{enumerate}
In particular, $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ is a semi-direct product of the free group of rank three generated by $\brak{\rho_{1,1},\rho_{1,2},B_{1,2}}$ by the free group of rank two generated by $\brak{\rho_{2,1},\rho_{2,2}}$.
\end{prop}
Geometric representatives of the generators of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ are illustrated in Figure~\ref{fig:gens}. The torus is obtained from this figure by identifying the boundary to a point.
\begin{figure}
\caption{The generators of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$.}
\label{fig:gens}
\end{figure}
\begin{rem}
The inclusion of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$ in $P_{2}(\ensuremath{\mathbb{T}^{2}})$ induces a surjective homomorphism $\map{\alpha}{P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})}{P_{2}(\ensuremath{\mathbb{T}^{2}})}$ that sends $B$ and $B'$ to the trivial element, and sends each of the remaining generators (considered as an element of $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$) to itself (considered as an element of $P_2(\ensuremath{\mathbb{T}^{2}})$). Applying this to the presentation of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$ given by \repr{presP2T1}, we obtain the presentation of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ given in~\cite{FH}.
\end{rem}
\begin{proof}[Proof of \repr{presP2T1}]
Consider the following Fadell-Neuwirth short exact sequence:
\begin{equation}\label{eq:fnses}
1\ensuremath{\longrightarrow} P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1}) \ensuremath{\longrightarrow} P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2})) \xrightarrow{(p_{2})_{\#}} P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2}) \ensuremath{\longrightarrow} 1,
\end{equation}
where $(p_{2})_{\#}$ is the homomorphism given geometrically by forgetting the first string and induced by the projection $p_{2}\colon\ensuremath{\up{th}}inspace F_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})\ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}\setminus\brak{1}$ onto the second coordinate. The kernel $K=P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1})$ of $(p_{2})_{\#}$ (resp.\ the quotient $Q=P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$) is a free group of rank three (resp.\ two). It will be convenient to choose presentations for these two groups that have an extra generator. From Figure~\ref{fig:gens}, we take $K$ (resp.\ $Q$) to be generated by $X=\brak{\rho_{1,1},\rho_{1,2},B_{1,2},B}$ (resp.\ $Y=\brak{\rho_{2,1},\rho_{2,2},B'}$) subject to the single relation $[\rho_{1,1},\rho_{1,2}^{-1}]=BB_{1,2}$ (resp.\ $[\rho_{2,1},\rho_{2,2}^{-1}]=B'$). We apply standard methods to obtain a presentation of the group extension $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))$~\cite[Proposition~1, p.~139]{Jo}. This group is generated by the union of $X$ with coset representatives of $Y$, which we take to be the same elements geometrically, but considered as elements of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))$. This yields the given generating set. There are three types of relation. The first is that of $K$. The second type of relation is obtained by lifting the relation of $Q$ to $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))$, which gives rise to the relation $[\rho_{2,1},\rho_{2,2}^{-1}]B'^{-1}=B_{1,2}$. The third type of relation is obtained by rewriting the conjugates of the elements of $X$ by the chosen coset representatives of the elements of $Y$ in terms of the elements of $X$ using the geometric representatives of $X$ and $Y$ illustrated in Figure~\ref{fig:gens}. We leave the details to the reader. The last part of the statement is a consequence of the fact that $K$ (resp.\ $Q$) is a free group of rank three (resp.\ two), so the short exact sequence~\reqref{fnses} splits.
\end{proof}
\begin{rem}
For future purposes, it will be convenient to have the following relations at our disposal:
\begin{align*}
\rho_{2,1}^{-1}\rho_{1,1}\rho_{2,1}&=\rho_{1,1}B_{1,2}^{-1}\rho_{1,1}B_{1,2}\rho_{1,1}^{-1} &
\rho_{2,1}^{-1}\rho_{1,2}\rho_{2,1}&=\rho_{1,1}B_{1,2}^{-1}\rho_{1,1}^{-1} \rho_{1,2}B_{1,2}^{-1} \rho_{1,1}B_{1,2}\rho_{1,1}^{-1}\\
\rho_{2,2}^{-1}\rho_{1,1}\rho_{2,2}&=\rho_{1,1}\rho_{1,2}B_{1,2}\rho_{1,2}^{-1} &
\rho_{2,2}^{-1}\rho_{1,2}\rho_{2,2}&=\rho_{1,2}B_{1,2}^{-1} \rho_{1,2}B_{1,2}\rho_{1,2}^{-1}\\
\rho_{2,1}^{-1}B_{1,2}\rho_{2,1}&=\rho_{1,1}B_{1,2}\rho_{1,1}^{-1} &
\rho_{2,2}^{-1}B_{1,2}\rho_{2,2}&=\rho_{1,2}B_{1,2}\rho_{1,2}^{-1}.
\end{align*}
As in the proof of \repr{presP2T1}, these equalities may be derived geometrically.
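They may also be derived algebraically from the relations of \repr{presP2T1}. For example, relations~(\ref{it:presP2T1a}) and~(\ref{it:presP2T1f}) give
\begin{equation*}
\rho_{2,1}\bigl(\rho_{1,1}B_{1,2}\rho_{1,1}^{-1}\bigr)\rho_{2,1}^{-1}= B_{1,2}\rho_{1,1}B_{1,2}^{-1}\cdot B_{1,2}\rho_{1,1}^{-1}B_{1,2}\rho_{1,1}B_{1,2}^{-1}\cdot B_{1,2}\rho_{1,1}^{-1}B_{1,2}^{-1}=B_{1,2},
\end{equation*}
which is equivalent to the fifth of the relations displayed above, namely $\rho_{2,1}^{-1}B_{1,2}\rho_{2,1}=\rho_{1,1}B_{1,2}\rho_{1,1}^{-1}$.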
\end{rem}
The presentation of $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\}, (x_1,x_2))$ given by \repr{presP2T1} may be modified to obtain another presentation that highlights its algebraic structure as a semi-direct product of free groups of finite rank.
\begin{prop}\label{prop:presTminus1alta}
The group $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\}, (x_1,x_2))$ admits the following presentation:
\begin{enumerate}
\item[generators:] $B$, $u$, $v$, $x$ and $y$.
\item[relations:]\mbox{}
\begin{enumerate}[(a)]
\item\label{it:altpresa} $xux^{-1}=u$.
\item\label{it:altpresb} $xvx^{-1}=v[v^{-1}, u]B^{-1}[u, v^{-1}]$.
\item\label{it:altpresc} $xBx^{-1}=u[v^{-1}, u]B[u, v^{-1}]u^{-1}$.
\item\label{it:altpresd} $yuy^{-1}=v[v^{-1}, u]Buv^{-1}$.
\item\label{it:altprese} $yvy^{-1}=v$.
\item\label{it:altpresf} $yBy^{-1}=v[v^{-1},u]B[u,v^{-1}]v^{-1}=uvu^{-1}Buv^{-1}u^{-1}$.
\end{enumerate}
\end{enumerate}
In particular, $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\}, (x_1,x_2))$ is a semi-direct product of the free group of rank three generated by $\brak{u,v,B}$ by the free group of rank two generated by $\brak{x,y}$.
\end{prop}
\begin{proof}
Using relation~(\ref{it:relh}) of \repr{presP2T1}, we define $B'$ as $B_{1,2}^{-1} [\rho_{2,1},\rho_{2,2}^{-1}]$ and $B_{1,2}$ as $B^{-1} [\rho_{1,1},\rho_{1,2}^{-1}]$. We then apply the following change of variables:
\begin{equation}\label{eq:uvxy}
\text{$u=\rho_{1,1}$, $v=\rho_{1,2}$, $x=\rho_{1,1}B_{1,2}^{-1}\rho_{2,1}$ and $y=\rho_{1,2}B_{1,2}^{-1}\rho_{2,2}$.}
\end{equation}
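Conversely, using the equality $B_{1,2}=B^{-1}[\rho_{1,1},\rho_{1,2}^{-1}]=B^{-1}[u,v^{-1}]$, the change of variables~\reqref{uvxy} may be inverted as follows, and these expressions will be used in the computations below:
\begin{equation*}
\rho_{1,1}=u,\quad \rho_{1,2}=v,\quad \rho_{2,1}=B_{1,2}u^{-1}x=B^{-1}[u,v^{-1}]\,u^{-1}x \quad\text{and}\quad \rho_{2,2}=B_{1,2}v^{-1}y=B^{-1}[u,v^{-1}]\,v^{-1}y.
\end{equation*}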
Relations~(\ref{it:presP2T1a})--(\ref{it:presP2T1e}) of \repr{presP2T1} may be seen to give rise to relations~(\ref{it:altpresa})--(\ref{it:altpresf}) of \repr{presTminus1alta}. Rewritten in terms of the generators of \repr{presTminus1alta}, relations~(\ref{it:presP2T1f})--(\ref{it:relg}) of \repr{presP2T1} are consequences of relations~(\ref{it:altpresa})--(\ref{it:altpresf}) of \repr{presTminus1alta}. To see this, using the relations of \repr{presTminus1alta}, first note that:
\begin{align*}
xB_{1,2}x^{-1}&= x B^{-1} [u,v^{-1}] x^{-1}=u[v^{-1}, u]B^{-1} [u, v^{-1}]u^{-1} \ldotp \bigl[u, [v^{-1}, u] B[u, v^{-1}] v^{-1}\bigr]\\
&= B^{-1} [u,v^{-1}] = B_{1,2}\;\text{and}\\
yB_{1,2}y^{-1} &= y B^{-1} [u,v^{-1}] y^{-1} = v[v^{-1},u]B^{-1}[u,v^{-1}]v^{-1}\ldotp \bigl[ v[v^{-1}, u]Buv^{-1}, v^{-1}\bigr]\\
&= B^{-1} [u,v^{-1}] = B_{1,2}.
\end{align*}
In light of these relations, it is convenient to carry out the calculations using $B_{1,2}$ instead of $B$.
In conjunction with the relations of the preceding remark, we obtain the following relations:
\begin{equation}\label{eq:conjxyuv}
\left\{
\begin{aligned}
yuy^{-1}&=vB_{1,2}^{-1}uv^{-1},\; xvx^{-1}=uvu^{-1} B_{1,2}\\
y^{-1}uy&=B_{1,2} v^{-1}uv,\; x^{-1}vx=u^{-1}v B_{1,2}^{-1}u,
\end{aligned}
\right.
\end{equation}
from which it follows that:
\begin{align*}
\rho_{2,1}B_{1,2}\rho_{2,1}^{-1}&=B_{1,2}u^{-1}x B_{1,2} x^{-1}u B_{1,2}^{-1}= B_{1,2}u^{-1} B_{1,2}u B_{1,2}^{-1}=B_{1,2} \rho_{1,1}^{-1}B_{1,2} \rho_{1,1}B_{1,2}^{-1}\;\text{and}\\
\rho_{2,2}B_{1,2}\rho_{2,2}^{-1}&=B_{1,2}v^{-1}y B_{1,2} y^{-1}v B_{1,2}^{-1}=B_{1,2}v^{-1} B_{1,2} v B_{1,2}^{-1}= B_{1,2} \rho_{1,2}^{-1}B_{1,2} \rho_{1,2}B_{1,2}^{-1}.
\end{align*}
Thus relations~(\ref{it:altpresa})--(\ref{it:altpresf}) of \repr{presTminus1alta} imply relations~(\ref{it:presP2T1f}) of \repr{presP2T1}. Now $B'= B_{1,2}^{-1}[\rho_{2,1},\rho_{2,2}^{-1}]$, and a straightforward computation shows that:
\begin{align*}
B'=u^{-1}x y^{-1}vB_{1,2}^{-1} x^{-1}u v^{-1}y&=u^{-1}x y^{-1}vB_{1,2}^{-1} x^{-1}u v^{-1} \ldotp xyx^{-1}\ldotp xy^{-1}x^{-1}y\\
&=[v^{-1},u]B_{1,2}\ldotp xy^{-1}x^{-1}y.
\end{align*}
One may then check that:
\begin{align*}
xy^{-1}x^{-1}y u y^{-1}xyx^{-1}&=B_{1,2}^{-1}[u,v^{-1}] u[v^{-1}, u] B_{1,2}\;\text{and}\\
xy^{-1}x^{-1}y v y^{-1}xyx^{-1}&=B_{1,2}^{-1}[u,v^{-1}] v [v^{-1}, u] B_{1,2},
\end{align*}
and that:
\begin{align*}
B' \rho_{1,1}B'^{-1}&=B' uB'^{-1}=[v^{-1},u] B_{1,2}\ldotp xy^{-1}x^{-1}y u y^{-1}xyx^{-1} B_{1,2}^{-1} [u,v^{-1}]=u=\rho_{1,1}\; \text{and}\\
B' \rho_{1,2} B'^{-1}&= B' vB'^{-1}=[v^{-1},u] B_{1,2}\ldotp xy^{-1}x^{-1}y v y^{-1}xyx^{-1} B_{1,2}^{-1} [u,v^{-1}]=v=\rho_{1,2}.
\end{align*}
Hence relations~(\ref{it:altpresa})--(\ref{it:altpresf}) of \repr{presTminus1alta} imply relations~(\ref{it:relf}) of \repr{presP2T1}. Furthermore,
\begin{align*}
B' B_{1,2}B'^{-1} &= v^{-1}uvu^{-1} B_{1,2} xy^{-1}x^{-1}y B_{1,2} y^{-1}xyx^{-1} B_{1,2}^{-1} uv^{-1}u^{-1}v\\
&= v^{-1}uvu^{-1} B_{1,2} uv^{-1}u^{-1}v=B_{1,2}^{-1}\ldotp B_{1,2} [v^{-1},u] B_{1,2} [u,v^{-1}] B_{1,2}^{-1} \ldotp B_{1,2}\\
&= B_{1,2}^{-1} B^{-1} B_{1,2} B B_{1,2},
\end{align*}
and since
\begin{align*}
xy^{-1}x^{-1}y [u , v^{-1}] y^{-1}xyx^{-1} &= \left[B_{1,2}^{-1}[u,v^{-1}] u[v^{-1}, u] B_{1,2}, B_{1,2}^{-1}[u,v^{-1}] v^{-1} [v^{-1}, u] B_{1,2}\right]\\
&= B_{1,2}^{-1}[u,v^{-1}] B_{1,2},
\end{align*}
we obtain:
\begin{align*}
B' B B'^{-1} &=[v^{-1},u]B_{1,2} B_{1,2}^{-1}[u,v^{-1}] B_{1,2}^{-1} B_{1,2} B_{1,2}^{-1}[u,v^{-1}]=B_{1,2}^{-1} [u,v^{-1}] B_{1,2}^{-1} \ldotp B_{1,2}\\
&=B_{1,2}^{-1} B B_{1,2}.
\end{align*}
Hence relations~(\ref{it:altpresa})--(\ref{it:altpresf}) of \repr{presTminus1alta} imply relations~(\ref{it:relg}) of \repr{presP2T1}. This proves the first part of the statement. The last part of the statement is a consequence of the nature of the presentation.
\end{proof}
The homomorphism $\map{\alpha}{P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})}{P_{2}(\ensuremath{\mathbb{T}^{2}})}$ mentioned in the remark that follows the statement of \repr{presP2T1} may be used to obtain a presentation of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ in terms of the generators of \repr{presTminus1alta}. To do so, we first show that $\ker{\alpha}$ is the normal closure in $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ of $B$ and $B'$.
\begin{prop}\label{prop:presTminus2alta}
$\ker{\alpha}=\ang{\!\ang{B,B'}\!}_{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}$.
\end{prop}
\begin{proof}
Consider the presentation of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ given in \repr{presP2T1}, as well as the short exact sequence~\reqref{fnses} and the surjective homomorphism $\map{\alpha}{P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})}{P_{2}(\ensuremath{\mathbb{T}^{2}})}$. In terms of the generators of \repr{presP2T1}, for $i=1,2$, $\alpha$ sends $\rho_{i,j}$ (resp.\ $B_{1,2}$) (considered as an element of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))$) to $\rho_{i,j}$ (resp.\ to $B_{1,2}$) (considered as an element of $P_{2}(\ensuremath{\mathbb{T}^{2}})$), and it sends $B$ and $B'$ to the trivial element. Since $B,B'\in \ker{\alpha}$, it is clear that $\ang{\!\ang{B,B'}\!}_{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}\subset \ker{\alpha}$. We now proceed to prove the converse inclusion. Using the projection $(p_{2})_{\#}$ and $\alpha$, we obtain the following commutative diagram of short exact sequences:
\begin{equation}\label{eq:keralpha}
\begin{gathered}
\begin{tikzcd}[ampersand replacement=\&]
\& 1 \ar{d} \& 1 \ar{d} \& 1 \ar{d} \&\\
1 \ar{r} \& \ker{\overline{\alpha}} \ar{r} \ar[dashed]{d}{\tau\left\lvert_{\ker{\overline{\alpha}}}\right.} \& P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1}) \ar[dashed]{r}{\overline{\alpha}} \ar{d}{\tau} \& P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{x_{2}},x_{1}) \ar{r} \ar{d} \& 1\\
1 \ar{r} \& \ker{\alpha} \ar{r} \ar[dashed, shift left=1]{d}{(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.} \& P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2})) \ar{r}{\alpha} \ar{d}{(p_{2})_{\#}} \& P_{2}(\ensuremath{\mathbb{T}^{2}}, (x_{1},x_{2})) \ar{r} \ar{d}{(p_{2}')_{\#}} \& 1\\
1 \ar{r} \& \ker{\alpha'} \ar{r} \ar{d} \ar[dashed, shift left=1]{u}{s} \& P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2}) \ar{r}{\alpha'} \ar{d} \& P_{1}(\ensuremath{\mathbb{T}^{2}},x_{2}) \ar{r} \ar{d} \& 1.\\
\& 1 \& 1 \& 1 \&
\end{tikzcd}
\end{gathered}
\end{equation}
Diagram~\reqref{keralpha} is constructed in the following manner: the surjective homomorphism $\map{\alpha'}{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})}{P_{1}(\ensuremath{\mathbb{T}^{2}},x_{2})}$ is induced by the inclusion $\map{\iota'}{\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}}{\ensuremath{\mathbb{T}^{2}}}$, and the right-hand column is a Fadell-Neuwirth short exact sequence, where the fibration $\map{p_{2}'}{F_{2}(\ensuremath{\mathbb{T}^{2}})}{\ensuremath{\mathbb{T}^{2}}}$ is given by projecting onto the second coordinate. The homomorphism $\map{\tau}{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1})}{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))}$ is interpreted as inclusion. Taking $\brak{\rho_{2,1},\rho_{2,2},B'}$ to be a generating set of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$ subject to the relation $[\rho_{2,1},\rho_{2,2}^{-1}]=B'$, as for $\alpha$, we see that $\alpha'(\rho_{2,j})=\rho_{2,j}$ and $\alpha'(B')=1$. Alternatively, we may consider $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$ to be the free group on $\brak{\rho_{2,1},\rho_{2,2}}$, and $P_{1}(\ensuremath{\mathbb{T}^{2}},x_{2})$ to be the free Abelian group on $\brak{\rho_{2,1},\rho_{2,2}}$, so $\alpha'$ is Abelianisation, and $\ker{\alpha'}=\Gamma_{2}(P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2}))$. Interpreting $\alpha'$ as the canonical projection of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$ onto its quotient by the normal closure of the element $[\rho_{2,1},\rho_{2,2}^{-1}]$ in $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$, we see that $\ker{\alpha'}=\ang{\!\ang{\,[\rho_{2,1},\rho_{2,2}^{-1}]\,}\!}_{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})}$. But $[\rho_{2,1},\rho_{2,2}^{-1}]=B'$ in $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$, hence:
\begin{equation}\label{eq:keralphaprime}
\ker{\alpha'}=\ang{\!\ang{B'}\!}_{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1},x_{2})}.
\end{equation}
The fact that $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$ is a free group implies that $\ker{\alpha'}$ is too (albeit of infinite rank). The commutativity of the lower right-hand square of~\reqref{keralpha} is a consequence of the following commutative square:
\begin{equation*}
\begin{tikzcd}[ampersand replacement=\&]
F_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}) \ar[r, "\iota"] \ar[d, "p_{2}"] \& F_{2}(\ensuremath{\mathbb{T}^{2}}) \ar[d, "p_{2}'"]\\
F_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}) \ar[r, "\iota'"] \& F_{1}(\ensuremath{\mathbb{T}^{2}}),
\end{tikzcd}
\end{equation*}
where $\map{\iota}{F_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}{F_{2}(\ensuremath{\mathbb{T}^{2}})}$ is induced by the inclusion $\iota'$. Together with $\alpha$ and $\alpha'$, the second two columns of~\reqref{keralpha} give rise to a homomorphism $\map{\overline{\alpha}}{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})}{P_{1}(\ensuremath{\mathbb{T}^{2}} \setminus\brak{x_{2}},x_{1})}$. Taking $\brak{\rho_{1,1},\rho_{1,2},B_{1,2},B}$ to be a generating set of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})$, by commutativity of the diagram, we see that $\overline{\alpha}$ sends each of $\rho_{1,1}$, $\rho_{1,2}$ and $B_{1,2}$ (considered as an element of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})$) to itself (considered as an element of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{x_{2}}, x_{1})$), and sends $B$ to the trivial element. In particular, $\overline{\alpha}$ is surjective. As for $\alpha'$, we may consider $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})$ to be the free group of rank three generated by $\brak{\rho_{1,1},\rho_{1,2},B_{1,2}}$, and $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{x_{2}}, x_{1})$ to be the group generated by $\brak{\rho_{1,1},\rho_{1,2},B_{1,2}}$ subject to the relation $[\rho_{1,1},\rho_{1,2}^{-1}]=B_{1,2}$. The homomorphism $\overline{\alpha}$ may thus be interpreted as the canonical projection of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})$ onto its quotient by $\ang{\!\ang{[\rho_{1,1},\rho_{1,2}^{-1}]B_{1,2}^{-1}}\!}_{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}}, x_{1})}$. But by relation~(\ref{it:relh}) of \repr{presP2T1}, $B=[\rho_{1,1},\rho_{1,2}^{-1}]B_{1,2}^{-1}$, hence:
\begin{equation*}
\ker{\overline{\alpha}}=\ang{\!\ang{B}\!}_{P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1})}.
\end{equation*}
By exactness, the first (resp.\ second) two rows of~\reqref{keralpha} give rise to an induced homomorphism $\map{\tau\left\lvert_{\ker{\overline{\alpha}}}\right.}{\ker{\overline{\alpha}}}{\ker{\alpha}}$ (resp.\ $\map{(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.}{\ker{\alpha}}{\ker{\alpha'}}$), and $\tau\left\lvert_{\ker{\overline{\alpha}}}\right.$ is injective because $\tau$ is. The homomorphism $(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.$ is surjective, because by~\reqref{keralphaprime}, any element $x$ of $\ker{\alpha'}$ may be written as a product of conjugates of $B'$ and its inverse by products of $\rho_{2,1}$, $\rho_{2,2}$ and $B'$. This expression, considered as an element of $P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, (x_{1},x_{2}))$, belongs to $\ker{\alpha}$, and its image under $(p_{2})_{\#}$ is equal to $x$.
The fact that $\im{\tau\left\lvert_{\ker{\overline{\alpha}}}\right.}\subset \ker{(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.}$ follows from exactness of the second column of~\reqref{keralpha}. Conversely, if $z\in \ker{(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.}$ then $z\in \im{\tau}$ by exactness of the second column. So there exists $y\in P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1,x_{2}},x_{1})$ such that $\tau(y)=z$. But $\overline{\alpha}(y)=\alpha(\tau(y))=\alpha(z)=1$, and hence $y\in \ker{\overline{\alpha}}$. This proves that $\ker{(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.}\subset \im{\tau\left\lvert_{\ker{\overline{\alpha}}}\right.}$, and we deduce that the first column is exact.
Finally, since $\ker{\alpha'}$ is free, we may pick an infinite basis consisting of conjugates of $B'$ by certain elements of $P_{1}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1}, x_{2})$. Further, there exists a section $\map{s}{\ker{\alpha'}}{\ker{\alpha}}$ for $(p_{2})_{\#}\left\lvert_{\ker{\alpha}}\right.$ that consists in sending each of these conjugates (considered as an element of $\ker{\alpha'}$) to itself (considered as an element of $\ker{\alpha}$). In particular, $\ker{\alpha}$ is an internal semi-direct product of $\tau(\ker{\overline{\alpha}})$, which is contained in $\ang{\!\ang{B}\!}_{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}$, by $s(\ker{\alpha'})$, which is contained in $\ang{\!\ang{B'}\!}_{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}$. Hence $\ker{\alpha}$ is contained in the subgroup $\ang{\!\ang{B,B'}\!}_{P_{2}(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})}$. This completes the proof of the proposition.
\end{proof}
We may thus deduce the following useful presentation of $P_{2}(\ensuremath{\mathbb{T}^{2}})$.
\begin{cor}\label{cor:compactpres}
The group $P_2(\ensuremath{\mathbb{T}^{2}}, (x_1,x_2))$ admits the following presentation:
\begin{enumerate}
\item[generators:] $u$, $v$, $x$ and $y$.
\item[relations:]\mbox{}
\begin{enumerate}[(a)]
\item\label{it:altpresA} $xux^{-1}=u$ and $yuy^{-1}=u$.
\item\label{it:altpresB} $xvx^{-1}=v$ and $yvy^{-1}=v$.
\item\label{it:altpresC} $xyx^{-1}=y$.
\end{enumerate}
\end{enumerate}
In particular, $P_2(\ensuremath{\mathbb{T}^{2}}, (x_1,x_2))$ is isomorphic to the direct product of the free group of rank two generated by $\brak{u,v}$ and the free Abelian group of rank two generated by $\brak{x,y}$.
\end{cor}
\begin{rem}
The decomposition of \reco{compactpres} is a special case of~\cite[Lemma~17]{BGG}.
\end{rem}
\begin{proof}[Proof of \reco{compactpres}]
By \repr{presTminus2alta}, a presentation of $P_2(\ensuremath{\mathbb{T}^{2}})$ may be obtained from the presentation of $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$ given in \repr{presTminus1alta} by setting $B$ and $B'$ equal to $1$. Under this operation, relations~(\ref{it:altpresa}) and~(\ref{it:altpresd}) (resp.~(\ref{it:altpresb}) and~(\ref{it:altprese})) of \repr{presTminus1alta} are sent to relations~(\ref{it:altpresA}) (resp.~(\ref{it:altpresB})) of \reco{compactpres}, and relations~(\ref{it:altpresc}) and~(\ref{it:altpresf}) of \repr{presTminus1alta} become trivial. We must also take into account the fact that $B'=1$ in $P_2(\ensuremath{\mathbb{T}^{2}})$. In $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$, we have:
\begin{equation*}
B' = B_{1,2}^{-1} [\rho_{2,1}, \rho_{2,2}^{-1}]= [v^{-1},u] B \bigl[B^{-1}[u,v^{-1}]\ldotp u^{-1}x, y^{-1}v [v^{-1},u] B\bigr],
\end{equation*}
and taking the image of this equation by $\alpha$, we obtain:
\begin{align*}
1 &= [v^{-1},u] \bigl[[u,v^{-1}] u^{-1}x, y^{-1}v [v^{-1},u] \bigr]\\
& = [v^{-1},u] \ldotp [u,v^{-1}] u^{-1}x \ldotp y^{-1}v [v^{-1},u] \ldotp x^{-1}u [v^{-1},u] \ldotp [u,v^{-1}] v^{-1} y = u^{-1}x y^{-1}uvu^{-1} x^{-1}u v^{-1} y
\end{align*}
in $P_2(\ensuremath{\mathbb{T}^{2}})$. Using relations~(\ref{it:altpresA}) and~(\ref{it:altpresB}) of \reco{compactpres}, it follows that $[x, y^{-1}]=1$, which yields relation~(\ref{it:altpresC}) of \reco{compactpres}. The last part of the statement is a consequence of the nature of the presentation.
\end{proof}
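For instance, the effect of setting $B$ and $B'$ equal to $1$ on relation~(\ref{it:altpresd}) of \repr{presTminus1alta} may be checked directly:
\begin{equation*}
yuy^{-1}=v[v^{-1},u]uv^{-1}=v\bigl(v^{-1}uvu^{-1}\bigr)uv^{-1}=u,
\end{equation*}
which is the second equality of relation~(\ref{it:altpresA}) of \reco{compactpres}.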
\begin{rem}
As we saw in \repr{presTminus1alta}, $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\}, (x_1, x_2))$ is a semi-direct product of the form $\mathbb{F}_3(u,v,B)\rtimes \mathbb{F}_2(x,y)$, the action being given by the relations of that proposition. Transposing the second two columns of the commutative diagram~\reqref{keralpha}, we obtain, up to isomorphism, the following commutative diagram:
\begin{equation}\label{eq:f3f2}
\begin{tikzcd}[ampersand replacement=\&]
1 \ar{r}\& \mathbb{F}_3(u,v,B) \ar{d}{\overline{\alpha}} \ar{r}{\tau} \& \mathbb{F}_3(u,v,B)\rtimes \mathbb{F}_2(x,y) \ar{d}{\alpha} \ar{r}{(p_{2})_{\#}} \& \mathbb{F}_2(x,y) \ar{d}{\alpha'} \ar{r} \& 1\\
1 \ar{r}\& \mathbb{F}_2(u,v) \ar{r} \& \mathbb{F}_2(u,v) \times \ensuremath{\mathbb Z}^{2} \ar{r}{(p_{2}')_{\#}} \& \ensuremath{\mathbb Z}^{2} \ar{r} \& 1,
\end{tikzcd}
\end{equation}
where $\alpha(u)=u$, $\alpha(v)=v$, $\alpha(B)=1$, $\alpha(x)=(1; (1,0))$ and $\alpha(y)=(1; (0,1))$. This is a convenient setting to study the question of whether a $2$-valued map may be deformed to a root-free $2$-valued map (which implies that the corresponding map may be deformed to a fixed point free map), since using~\reqref{f3f2} and \reth{defchinegI}(\ref{it:defchinegbI}), the question is equivalent to a lifting problem, to which we will refer in \resec{fptsplit2}, notably in Propositions~\ref{prop:necrootfree2} and~\ref{prop:construct2valprop}.
\end{rem}
Using the short exact sequence~\reqref{sesbraid}, we obtain the following presentation of the full braid group $B_2(\ensuremath{\mathbb{T}^{2}}, (x_1,x_2))$ from that of $P_2(\ensuremath{\mathbb{T}^{2}}, (x_1,x_2))$ given by \reco{compactpres}.
\begin{prop}\label{prop:presful}
The group $B_2(\ensuremath{\mathbb{T}^{2}}, (x_1,x_2))$ admits the following presentation:
\begin{enumerate}
\item[generators:] $u$, $v$, $x$, $y$ and $\sigma$.
\item[relations:]\mbox{}
\begin{enumerate}[(a)]
\item\label{it:presB2Ta} $xux^{-1}=u$ and $yuy^{-1}=u$.
\item $xvx^{-1}=v$ and $yvy^{-1}=v$.
\item\label{it:presB2Tc} $xyx^{-1}=y$.
\item\label{it:presB2Td} $\sigma^{2}=[u,v^{-1}]$.
\item\label{it:presB2Te} $\sigma x \sigma^{-1}=x$ and $\sigma y \sigma^{-1}=y$.
\item\label{it:presB2Tf} $\sigma u \sigma^{-1}=[u,v^{-1}]u^{-1}x$ and $\sigma v \sigma^{-1}=[u,v^{-1}]v^{-1}y$.
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{proof}
Once more, we apply the methods of~\cite[Proposition~1, p.~139]{Jo}, this time to the short exact sequence~\reqref{sesbraid} for $X=\ensuremath{\mathbb{T}^{2}}$ and $n=2$, where we take $P_{2}(\ensuremath{\mathbb{T}^{2}})$ to have the presentation given by \reco{compactpres}. A coset representative of the generator of $\ensuremath{\mathbb Z}_{2}$ is given by the braid $\sigma=\sigma_{1}$ that swaps the two basepoints. Hence $\brak{u,v,x,y,\sigma}$ generates $B_{2}(\ensuremath{\mathbb{T}^{2}})$. Relations~(\ref{it:presB2Ta})--(\ref{it:presB2Tc}) emanate from the relations of $P_{2}(\ensuremath{\mathbb{T}^{2}})$. Relation~(\ref{it:presB2Td}) is obtained by lifting the relation of $\ensuremath{\mathbb Z}_{2}$ to $B_{2}(\ensuremath{\mathbb{T}^{2}})$ and using the fact that $\sigma^{2}=B_{1,2}=[u,v^{-1}]$. To obtain relations~(\ref{it:presB2Te}) and~(\ref{it:presB2Tf}), by geometric arguments, one may see that for $j\in \brak{1,2}$, $\sigma \rho_{1,j} \sigma^{-1}= \rho_{2,j}$ and $\sigma \rho_{2,j} \sigma^{-1}= B_{1,2}\rho_{1,j}B_{1,2}^{-1}$, and one then uses \req{uvxy} to express these relations in terms of $u,v,x$ and $y$.
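For example, the first equality of relation~(\ref{it:presB2Tf}) is obtained by combining the geometric relation $\sigma\rho_{1,1}\sigma^{-1}=\rho_{2,1}$ with \req{uvxy} and relation~(\ref{it:presB2Td}):
\begin{equation*}
\sigma u \sigma^{-1}=\sigma\rho_{1,1}\sigma^{-1}=\rho_{2,1}=B_{1,2}\rho_{1,1}^{-1}x=[u,v^{-1}]\,u^{-1}x.
\end{equation*}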
\end{proof}
\subsection{A description of the homotopy classes of $2$-ordered and split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$, and the computation of the Nielsen number}\label{sec:descript}
In this section, we describe the homotopy classes of $2$-ordered (resp.\ split $2$-valued) maps of $\ensuremath{\mathbb{T}^{2}}$ using the group structure of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ (resp.\ of $B_{2}(\ensuremath{\mathbb{T}^{2}})$) given in \resec{toro2}.
\begin{prop}\label{prop:baseT2}\mbox{}
\begin{enumerate}
\item\label{it:baseT2a} The set $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]_0$ of based homotopy classes of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ is in one-to-one correspondence with the set of commuting, ordered pairs of elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$.
\item\label{it:baseT2b} The set $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]$ of homotopy classes of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ is in one-to-one correspondence with the set of conjugacy classes of commuting, ordered pairs of elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$, i.e.\ two commuting pairs $(\alpha_1, \beta_1)$ and $(\alpha_2, \beta_2)$ of elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ give rise to the same homotopy class of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ if and only if there exists $\delta\in P_{2}(\ensuremath{\mathbb{T}^{2}})$ such that $\delta\alpha_1\delta^{-1}=\alpha_2$ and $\delta\beta_1\delta^{-1}=\beta_2$.
\item\label{it:baseT2c} Under the projection $\widehat{\pi}\colon\ensuremath{\up{th}}inspace [\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})] \ensuremath{\longrightarrow} [\ensuremath{\mathbb{T}^{2}},D_{2}(\ensuremath{\mathbb{T}^{2}})]$ induced by the covering map $\pi\colon\ensuremath{\up{th}}inspace F_2(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} D_2(\ensuremath{\mathbb{T}^{2}})$, two homotopy classes of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ are sent to the same homotopy class of $2$-unordered maps if and only if any two pairs of braids that represent the maps are conjugate in $B_{2}(\ensuremath{\mathbb{T}^{2}})$.
\end{enumerate}
\end{prop}
\begin{proof} Let $x_0\in \ensuremath{\mathbb{T}^{2}}$ and $(y_0, z_0)\in F_2(\ensuremath{\mathbb{T}^{2}})$ be basepoints, and let $\Psi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a basepoint-preserving $2$-ordered map. The restriction of $\Psi$ to the meridian $\mu$ and the longitude $\lambda$ of $\ensuremath{\mathbb{T}^{2}}$, which are geometric representatives of the elements of the basis $(e_{1},e_{2})$ of $\pi_1(\ensuremath{\mathbb{T}^{2}})$, gives rise to a pair of geometric braids. The resulting pair $(\Psi_{\#}(e_{1}),\Psi_{\#}(e_{2}))$ of elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ obtained via the induced homomorphism $\Psi_{\#}\colon\ensuremath{\up{th}}inspace \pi_{1}(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} P_{2}(\ensuremath{\mathbb{T}^{2}})$ is an invariant of the based homotopy class of the map $\Psi$, and the two braids $\Psi_{\#}(e_{1})$ and $\Psi_{\#}(e_{2})$ commute. Conversely, given a pair of braids $(\alpha, \beta)$ of $P_{2}(\ensuremath{\mathbb{T}^{2}})$, let $f_{1}\colon\ensuremath{\up{th}}inspace \St[1] \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ and $f_{2}\colon\ensuremath{\up{th}}inspace \St[1] \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be geometric representatives of $\alpha$ and $\beta$ respectively, \emph{i.e.}\ $\alpha=[f_{1}]$ and $\beta=[f_{2}]$. Then we define a geometric map from the wedge of two circles into $F_{2}(\ensuremath{\mathbb{T}^{2}})$ by sending $x\in \St[1] \vee \St[1]$ to $f_{1}(x)$ (resp.\ to $f_{2}(x)$) if $x$ belongs to the first (resp.\ second) copy of $\St[1]$. By classical obstruction theory in low dimension, this map extends to $\ensuremath{\mathbb{T}^{2}}$ if and only if $\alpha$ and $\beta$ commute as elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$, and part~(\ref{it:baseT2a}) follows. Parts~(\ref{it:baseT2b}) and~(\ref{it:baseT2c}) are consequences of part~(\ref{it:baseT2a}) and classical general facts about maps between spaces of type $K(\pi, 1)$, see~\cite[Chapter~V, Theorem~4.3]{Wh} for example.
\end{proof}
Applying \repr{equivsplit} to $\ensuremath{\mathbb{T}^{2}}$, we obtain the following consequence.
\begin{prop}\label{prop:lifttorus}
If $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ is a split $2$-valued map and $\widehat{\Phi}\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ is a lift of $\phi$, the map $\phi$ can be deformed to a fixed point free $2$-valued map if and only if there exist commuting elements $\alpha_1, \alpha_2 \in P_{3}(\ensuremath{\mathbb{T}^{2}})$ such that $\alpha_1$ (resp.\ $\alpha_2$) projects to $(e_1, \widehat{\Phi}_{\#}(e_1))$ (resp.\ $(e_2, \widehat{\Phi}_{\#}(e_2))$) under the homomorphism induced by the inclusion map $\widehat{\iota}_{3}\colon\thinspace F_{3}(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}\times F_{2}(\ensuremath{\mathbb{T}^{2}})$.
\end{prop}
\begin{proof}
Since $\ensuremath{\mathbb{T}^{2}}$ is a space of type $K(\pi, 1)$, the existence of diagram~\reqref{commdiag4} is equivalent to that of the corresponding induced diagram on the level of fundamental groups. It then suffices to take $\alpha_{1}=\Theta_{\#}(e_1)$ and $\alpha_{2}=\Theta_{\#}(e_2)$ in the statement of \repr{equivsplit}.
\end{proof}
Proposition~\ref{prop:lifttorus} gives a criterion to decide whether a split $2$-valued map of $\ensuremath{\mathbb{T}^{2}}$ can be deformed to a fixed point free $2$-valued map. However, from a computational point of view, it seems better to use an alternative condition in terms of roots (see \resec{exrfm}).
In the following proposition, we make use of the identification of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ with $\mathbb{F}_{2} \times \ensuremath{\mathbb Z}^{2}$ given in \reco{compactpres}.
\begin{prop}\label{prop:exismaps}\mbox{}
\begin{enumerate}
\item\label{it:exismapsa} The set $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]_0$ of based homotopy classes of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ is in one-to-one correspondence with the set of pairs $(\alpha, \beta)$ of elements of $\mathbb{F}_2 \times \ensuremath{\mathbb Z}^{2}$ of the form $\alpha=(w^r,(a,b))$, $\beta=(w^s, (c,d))$, where $(a,b), (c,d), (r,s)\in \ensuremath{\mathbb Z}^{2}$ and $w\in \mathbb{F}_2$. Further, up to taking a root of $w$ if necessary, we may assume that $w$ is either trivial or is a primitive element of $\mathbb{F}_2$ (i.e.\ $w$ is not a proper power of another element of $\mathbb{F}_2$).
\item\label{it:exismapsb} The set $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]$ of homotopy classes of $2$-ordered maps of $\ensuremath{\mathbb{T}^{2}}$ is in one-to-one correspondence with the set of the equivalence classes of pairs $(\alpha, \beta)$ of elements of $\mathbb{F}_2 \times \ensuremath{\mathbb Z}^{2}$ of the form given in part~(\ref{it:exismapsa}), where the equivalence relation is defined as follows: the pairs of elements $( (w_1^{r_{1}},(a_1,b_1)), (w_1^{s_{1}}, (c_1,d_1)))$ and $( (w_2^{r_{2}},(a_2,b_2)), (w_2^{s_{2}}, (c_2,d_2)))$ of $\mathbb{F}_2 \times \ensuremath{\mathbb Z}^{2}$ are equivalent if and only if $(a_1,b_1,c_1,d_1) = (a_2,b_2,c_2,d_2)$, and either:
\begin{enumerate}[(i)]
\item $w_1=w_2=1$, or
\item $w_1$ and $w_2$ are primitive, and there exists $\ensuremath{\varepsilon}\in \brak{1,-1}$ such that $w_1$ and $w_2^{\ensuremath{\varepsilon}}$ are conjugate in $\mathbb{F}_2$,
and $(r_1,s_{1})=\ensuremath{\varepsilon} (r_2,s_{2})\ne (0,0)$.
\end{enumerate}
\end{enumerate}
\end{prop}
\begin{proof}
Part~(\ref{it:exismapsa}) follows using \repr{baseT2}(\ref{it:baseT2a}), the identification of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ with $\mathbb{F}_2\times \ensuremath{\mathbb Z}^2$ given by \reco{compactpres}, and the fact that two elements of $\mathbb{F}_2$ commute if and only if they are powers of some common element of $\mathbb{F}_2$. Part~(\ref{it:exismapsb}) is a consequence of \repr{baseT2}(\ref{it:baseT2b}), \reco{compactpres}, and the straightforward description of the conjugacy classes of the group $\mathbb{F}_2\times \ensuremath{\mathbb Z}^2$.
\end{proof}
To describe the homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$, let us consider the set $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]$ of homotopy classes of $2$-ordered maps and the action of $\ensuremath{\mathbb Z}_2$ on this set that is induced by the action of $\ensuremath{\mathbb Z}_2$ on $F_2(\ensuremath{\mathbb{T}^{2}})$. By \relem{split}(\ref{it:splitIII}), the corresponding set of orbits $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]/\ensuremath{\mathbb Z}_{2}$ is in one-to-one correspondence with the set $\splitmap{\ensuremath{\mathbb{T}^{2}}}{\ensuremath{\mathbb{T}^{2}}}{2}/\!\sim$ of homotopy classes of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$.
Given a homotopy class of a $2$-ordered map of $\ensuremath{\mathbb{T}^{2}}$, choose a based representative $f\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$. The based homotopy class of $f$ is determined by the element $f_{\#}$ of $\operatorname{\text{Hom}}(\ensuremath{\mathbb Z}^{2}, P_2(\ensuremath{\mathbb{T}^{2}}))$. In turn, by \repr{exismaps}(\ref{it:exismapsa}),
$f_{\#}$ is determined by a pair of elements of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ of the form $( (w^r,(a,b)), (w^s, (c,d)))$, where $(a,b)$, $(c,d)$ and $(r,s)$ belong to $\ensuremath{\mathbb Z}^2$, and $w\in \mathbb{F}_2$. To characterise the equivalence class of $f$ in $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]/\ensuremath{\mathbb Z}_{2}$, we first consider the set of conjugates of this pair by the elements of $P_2(\ensuremath{\mathbb{T}^{2}})$, which by \repr{baseT2}(\ref{it:baseT2b}) describes the homotopy class of $f$ in $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]$, and secondly, we take into account the $\ensuremath{\mathbb Z}_2$-action by conjugating by the elements of $B_2(\ensuremath{\mathbb{T}^{2}})$. So the equivalence class of $f$ in $[\ensuremath{\mathbb{T}^{2}},F_{2}(\ensuremath{\mathbb{T}^{2}})]/\ensuremath{\mathbb Z}_{2}$ is characterised by the set of conjugates of the pair $( (w^r,(a,b)), (w^s, (c,d)))$ by elements of $B_2(\ensuremath{\mathbb{T}^{2}})$. The presentation of $B_{2}(\ensuremath{\mathbb{T}^{2}})$ given by~\repr{presful} contains the action by conjugation of $\sigma$ on $P_{2}(\ensuremath{\mathbb{T}^{2}})$. Consider the homomorphism involution of $\mathbb{F}_2(u,v)$ that is defined on the generators of $\mathbb{F}_2(u,v)$ by $u \longmapsto u^{-1}$ and $v \longmapsto v^{-1}$. The image of an element $w\in \mathbb{F}_2(u,v)$ by this automorphism will be denoted by $\widehat{w}$. With respect to the decomposition of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ given by \reco{compactpres}, let $\gamma\colon\ensuremath{\up{th}}inspace P_2(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} \mathbb{F}_2(u,v)$ denote the projection onto $\mathbb{F}_2(u,v)$. Let $\operatorname{\text{Ab}}\colon\ensuremath{\up{th}}inspace \mathbb{F}_2(u,v) \ensuremath{\longrightarrow} \ensuremath{\mathbb Z}^{2}$ denote the Abelianisation homomorphism that sends $u$ to $(1,0)$ and $v$ to $(0,1)$. We write $\operatorname{\text{Ab}}(w)=(|w|_{u}, |w|_{v})$, where $|w|_{u}$ (resp.\ $|w|_{v}$) denotes the exponent sum of $u$ (resp.\ $v$) in the word $w$, and $\ell(w)$ will denote the word length of $w$ with respect to $u$ and $v$. One may check easily that $(\widehat{w})^{-1}=\widehat{w^{-1}}$ and $\ell(w)=\ell(\widehat{w})$ for all $w\in \mathbb{F}_2(u,v)$. Note also that if $\lambda\in \mathbb{F}_2(u,v)$ and $r\in \ensuremath{\mathbb Z}$ then:
\begin{equation}\label{eq:lambdahat}
\widehat{\lambda} (\lambda \widehat{\lambda})^{r} \lambda=\widehat{\lambda} (\lambda \widehat{\lambda})^{r} \widehat{\lambda}^{-1} \ldotp \widehat{\lambda} \lambda=(\widehat{\lambda} \lambda \widehat{\lambda}\widehat{\lambda}^{-1})^{r} \widehat{\lambda} \lambda=(\widehat{\lambda}\lambda)^{r+1}.
\end{equation}
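To illustrate~\reqref{lambdahat} on a concrete word (the choice of $\lambda$ here is ours, made purely for illustration), take $\lambda=uv$ and $r=1$, so that $\widehat{\lambda}=u^{-1}v^{-1}$. Then
\begin{equation*}
\widehat{\lambda}(\lambda\widehat{\lambda})\lambda=u^{-1}v^{-1}\ldotp uvu^{-1}v^{-1}\ldotp uv=(u^{-1}v^{-1}uv)^{2}=(\widehat{\lambda}\lambda)^{2},
\end{equation*}
in agreement with~\reqref{lambdahat}.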
\begin{lem}\label{lem:conjhat}
For all $w\in \mathbb{F}_{2}(u,v)$, $\widehat{w}=vu^{-1}\gamma(\sigma w\sigma^{-1})uv^{-1}$ and
\begin{equation}\label{eq:conjsigma}
\sigma w \sigma^{-1}=(uv^{-1}\widehat{w} vu^{-1}, (|w|_{u}, |w|_{v})).
\end{equation}
In particular, $\widehat{w}$ is conjugate to $\gamma(\sigma w\sigma^{-1})$ in $\mathbb{F}_2(u,v)$.
\end{lem}
\begin{proof}
By \repr{presful},
we have:
\begin{equation*}
\text{$\sigma u \sigma^{-1} = [u,v^{-1}]u^{-1} x=uv^{-1}\widehat{u} vu^{-1} x^{|u|_{u}}$ and
$\sigma v \sigma^{-1} = [u,v^{-1}]v^{-1} y=uv^{-1}\widehat{v} vu^{-1} y^{|v|_{v}}$.}
\end{equation*}
If $w\in \mathbb{F}_{2}(u,v)$ then~\reqref{conjsigma} follows because
$x$ and $y$ belong to the centre of $P_{2}(\ensuremath{\mathbb{T}^{2}})$. Thus $\gamma(\sigma w \sigma^{-1})=uv^{-1}\widehat{w} vu^{-1}$ as required.
\end{proof}
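As a quick check of~\reqref{conjsigma} (this verification is ours and is included only as an illustration), take $w=u$, so that $\widehat{u}=u^{-1}$ and $(|u|_{u},|u|_{v})=(1,0)$. Then~\reqref{conjsigma} gives $\sigma u\sigma^{-1}=(uv^{-1}u^{-1}vu^{-1},(1,0))$, which agrees with the relation $\sigma u \sigma^{-1} = [u,v^{-1}]u^{-1} x$ displayed above, since $[u,v^{-1}]u^{-1}=uv^{-1}u^{-1}vu^{-1}$ and $x$ corresponds to $(1,0)$ in the $\ensuremath{\mathbb Z}^{2}$ factor of the decomposition of \reco{compactpres}.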
\begin{lem}\label{lem:conjhat1}\mbox{}
\begin{enumerate}
\item\label{it:classifI} Let $a,b\in \mathbb{F}_{2}(u,v)$ be such that $ab$ is written in reduced form. Then $ab=\widehat{b}\,\widehat{a}$ if and only if there exist $\lambda\in \mathbb{F}_{2}(u,v)$ and $r,s\in \ensuremath{\mathbb Z}$ such that:
\begin{equation}\label{eq:concl1}
\text{$a=(\widehat{\lambda} \lambda)^s\widehat{\lambda}$ and $b=(\lambda\widehat{\lambda})^r\lambda$.}
\end{equation}
\item\label{it:classifII} For all $w\in \mathbb{F}_{2}(u,v)$, $w$ and $\widehat{w}$ are conjugate in $\mathbb{F}_2(u,v)$ if and only if there exist $\lambda\in \mathbb{F}_2(u,v)$ and $l\in \ensuremath{\mathbb Z}$ such that:
\begin{equation}\label{eq:concl2}
w=(\lambda \widehat{\lambda})^l.
\end{equation}
\end{enumerate}
\end{lem}
\begin{rem}
By modifying the definition of the $\widehat{\cdot}$ homomorphism appropriately, \relem{conjhat1} and its proof may be generalised to any free group of finite rank on a given set.
\end{rem}
\begin{proof}[Proof of \relem{conjhat1}]
We first prove the `if' implications of~(\ref{it:classifI}) and~(\ref{it:classifII}). For part~(\ref{it:classifI}), let $a,b\in \mathbb{F}_{2}(u,v)$ be such that $ab$ is written in reduced form, and that~\reqref{concl1} holds. Using~\reqref{lambdahat}, we have:
\begin{equation*}
ab=(\widehat{\lambda} \lambda)^s\widehat{\lambda}(\lambda\widehat{\lambda})^r\lambda=(\widehat{\lambda} \lambda)^{r+s+1}=(\widehat{\lambda} \lambda)^{r}\widehat{\lambda}(\lambda \widehat{\lambda})^{s}\lambda=
\widehat b\,\widehat a.
\end{equation*}
For part~(\ref{it:classifII}), if~\reqref{concl2} holds then $\widehat{w}=(\widehat{\lambda} \lambda)^l=\widehat{\lambda}(\lambda\widehat{\lambda})^l(\widehat{\lambda})^{-1}=\widehat{\lambda} w\widehat{\lambda}^{-1}$ by~\reqref{lambdahat}, so $w$ and $\widehat{w}$ are conjugate in $\mathbb{F}_2(u,v)$.
Finally, we prove the `only if' implications of~(\ref{it:classifI}) and~(\ref{it:classifII}) simultaneously by induction on the length $k$ of the words $ab$ and $w$, which we assume to be non trivial and written in reduced form. Let~(E1) denote the equation $ab=\widehat{b}\,\widehat{a}$, and let~(E2) denote the equation $\widehat{w}=\theta w \theta^{-1}$, where $\theta\in \mathbb{F}_2(u,v)$. Note that if $a$ and $b$ satisfy~(E1) then both $a$ and $b$ are non trivial, and that $\widehat{b}\,\widehat{a}$ is also in reduced form. Further, if $z\in \mathbb{F}_2(u,v)$, then since $|z|_{y}=-|\widehat{z}|_{y}$ for $y\in\brak{u,v}$, it follows from the form of~(E1) and~(E2) that $k$ must be even in both cases. We carry out the proof of the two implications by induction as follows.
\begin{enumerate}[(i)]
\item If $k\leq 4$ then (E1) implies~\reqref{concl1}. One may check easily that $k$ cannot be equal to $2$. Suppose that $k=4$ and that~(E1) holds. Now $\ell(a)\neq 1$, for otherwise $b$ would start with $a^{-1}$, but then $ab$ would not be reduced. Similarly, $\ell(b)\neq 1$, so we must have $\ell(a)=\ell(b)=2$, in which case $b=\widehat{a}$, and it suffices to take $r=s=0$ and $\lambda=b$ in~\reqref{concl1}.
\item If $k\leq 4$ then (E2) implies~\reqref{concl2}. Once more, it is straightforward to see that $k$ cannot be equal to $2$. Suppose that $k=4$. Since $w$ and $\widehat{w}$ are conjugate, we have $|w|_{y}=|\widehat{w}|_{y}$ for $y\in\brak{u,v}$, and so $|w|_{y}=0$. Since $w$ is in reduced form, one may then check that~\reqref{concl2} holds, where $\ell(\lambda)=2$.
\item\label{it:induct} Suppose by induction that for some $k\geq 4$,~(E1) implies~\reqref{concl1} if $\ell(ab)<k$ and~(E2) implies~\reqref{concl2} if $\ell(w)<k$. Suppose that $a$ and $b$ satisfy~(E1) and that $\ell(ab)=k$. If $\ell(a)=\ell(b)$ then $b=\widehat{a}$, and as above, it suffices to take $r=s=0$ and $\lambda=b$ in~\reqref{concl1}. So assume that $\ell(a)\neq \ell(b)$. By applying the automorphism $\widehat{\cdot}$ and exchanging the r\^{o}les of $a$ and $b$ if necessary, we may suppose that $\ell(a)<\ell(b)$. We consider two subcases.
\begin{enumerate}[(A)]
\item $\ell(a)\leq \ell(b)/2$: since both sides of~(E1) are in reduced form, $b$ starts and ends with $\widehat{a}$, and there exists $b_{1}\in \mathbb{F}_2(u,v)$ such that $b=\widehat{a}b_{1}\widehat{a}$, written in reduced form. Substituting this into~(E1), we see that $a\widehat{a}b_{1}\widehat{a}= a\widehat{b}_{1}a\widehat{a}$, and thus $\widehat{a}b_{1}=\widehat{b}_{1}a$, written in reduced form. This equation is of the form~(E1), and since $\ell(\widehat{a}b_{1})<\ell(b)<\ell(ab)$, we may apply the induction hypothesis. Thus there exist $\lambda\in \mathbb{F}_{2}(u,v)$ and $r,s\in \ensuremath{\mathbb Z}$ such that
$\widehat{a}=(\widehat{\lambda} \lambda)^s\widehat{\lambda}$, and $b_1=(\lambda\widehat{\lambda})^{r}\lambda$.
Therefore $a=(\lambda\widehat{\lambda} )^s \lambda$ and
\begin{equation*}
b=\widehat{a} b_1 \widehat{a}=(\widehat{\lambda} \lambda)^s\widehat{\lambda} (\lambda\widehat{\lambda})^{r}\lambda (\widehat{\lambda} \lambda)^s\widehat{\lambda}= (\widehat{\lambda} \lambda)^{2s+r+1}\widehat{\lambda},
\end{equation*}
using~\reqref{lambdahat}, which proves the result in this case.
\item $\ell(b)/2< \ell(a) <\ell(b)$: since both sides of~(E1) are in reduced form, $b$ starts with $\widehat{a}$, and there exists $b_{1}\in \mathbb{F}_2(u,v)$, $b_{1}\neq 1$, such that $b=\widehat{a}b_{1}$. Substituting this into~(E1), we obtain $a\widehat{a}b_{1}=a\widehat{b}_{1}\widehat{a}$, which is equivalent to $\widehat{a}b_{1}\widehat{a}^{-1}=\widehat{b}_{1}$. This equation is of the form~(E2), and since $\ell(\widehat{b}_{1})<\ell(b)<\ell(ab)=k$, we may apply the induction hypothesis. Thus there exist $\lambda\in \mathbb{F}_{2}(u,v)$ and $l\in \ensuremath{\mathbb Z}$ such that $b_1=(\lambda\widehat{\lambda})^{l}$. The fact that $b_{1}\neq 1$ implies that $\lambda\widehat{\lambda}\neq 1$ and $l\neq 0$. We claim that $\lambda\widehat{\lambda}$ may be chosen to be primitive. To prove the claim, suppose that $\lambda\widehat{\lambda}$ is not primitive. Since $\mathbb{F}_{2}(u,v)$ is a free group of rank $2$, the centraliser of $\lambda\widehat{\lambda}$ in $\mathbb{F}_{2}(u,v)$ is infinite cyclic, generated by a primitive element $v$, and replacing $v$ by $v^{-1}$ if necessary, there exists $s\geq 2$ such that $v^s= \lambda\widehat{\lambda}$. Therefore $b_{1}=v^{sl}$, and substituting this into the relation $\widehat{b}_{1}=\widehat{a}b_{1}\widehat{a}^{-1}$, we obtain $\widehat{v}^{sl}=\widehat{a} v^{sl}\widehat{a}^{-1}=(\widehat{a} v\widehat{a}^{-1})^{sl}$ in the free group $\mathbb{F}_{2}(u,v)$, from which we conclude that $\widehat{v}=\widehat{a} v\widehat{a}^{-1}$. We may thus apply the induction hypothesis to this relation because $\ell(v)<\ell(b_1)<k$, and since $v$ is primitive, there exists $\gamma\in \mathbb{F}_{2}(u,v)$ for which $v=\gamma\widehat{\gamma}$. Hence $b_{1}=(\gamma\widehat{\gamma})^{sl}$, where $\gamma\widehat{\gamma}$ is primitive, which proves the claim. Substituting $b_1=(\lambda\widehat{\lambda})^{l}$ into the relation $\widehat{a} b_1\widehat{a}^{-1}=\widehat{b}_1$, we obtain:
\begin{equation*}
(\widehat{a} \lambda\widehat{\lambda} \widehat{a}^{-1})^{l}=\widehat{a} (\lambda\widehat{\lambda})^{l}\widehat{a}^{-1}=\widehat{(\lambda\widehat {\lambda})^{l}}=(\widehat{\lambda} \lambda)^{l},
\end{equation*}
where we take $\lambda\widehat{\lambda}$ to be primitive. Once more, since $\mathbb{F}_{2}(u,v)$ is a free group of rank $2$ and $l\neq 0$, it follows that $\widehat{a} \lambda\widehat{\lambda} \widehat{a}^{-1}=\widehat{\lambda} \lambda=\widehat{\lambda} \lambda \widehat{\lambda} \widehat{\lambda}^{-1}$, from which we conclude that $\widehat{\lambda}^{-1}\widehat{a}$ belongs to the centraliser of $\lambda\widehat{\lambda}$. But $\lambda\widehat{\lambda}$ is primitive, so there exists $t\in \ensuremath{\mathbb Z}$ such that $\widehat{\lambda}^{-1}\widehat{a}=(\lambda\widehat{\lambda})^{t}$, and hence $\widehat{a}=\widehat{\lambda} (\lambda\widehat{\lambda})^{t}$. Hence $a=\lambda (\widehat\lambda \lambda)^{t}=(\lambda \widehat\lambda)^{t}\lambda$ and $b=\widehat a b_1=\widehat{\lambda} (\lambda\widehat{\lambda})^{t+l}= (\widehat{\lambda}\lambda)^{t+l}\widehat{\lambda}$ in a manner similar to that of~\reqref{lambdahat}, so~\reqref{concl1} holds.
\end{enumerate}
\item By the induction hypothesis and~(\ref{it:induct}), we may suppose that for some $k\geq 4$,~(E1) implies~\reqref{concl1} if $\ell(ab)\leq k$ and~(E2) implies~\reqref{concl2} if $\ell(w)<k$. Suppose that $\ell(w)=k$ and that $w$ and $\widehat{w}$ are conjugate. Let $\widehat{w}=\theta w\theta^{-1}$, where $\theta\in \mathbb{F}_{2}(u,v)$. If $\theta=1$ then $w=\widehat{w}$, which is impossible. So $\theta\neq 1$, and since $\ell(w)=\ell(\widehat{w})$, there must be cancellation in the expression $\theta w\theta^{-1}$. Taking the inverse of the relation $\widehat{w}=\theta w\theta^{-1}$ if necessary, we may suppose that cancellation occurs between $\theta$ and $w$. So there exist $\theta_1,\theta_2\in \mathbb{F}_{2}(u,v)$ such that $\theta=\theta_1\theta_2$ written in reduced form, and such that the cancellation between $\theta$ and $w$ is maximal \emph{i.e.}\ if $w_{1}=\theta_{2}w$ is written in reduced form then $\theta_{1}w_{1}$ is also reduced. Let $\ell(\theta)=n$ and $\ell(\theta_2)=r$. We again consider two subcases.
\begin{enumerate}[(A)]
\item Suppose first that $r=n$. Then $\theta_{1}=1$, $\theta_{2}=\theta$ and $w=\theta^{-1}w_{1}$, so:
\begin{equation}\label{eq:ellwinequ}
\ell(w)=\ell(\theta^{-1}w_{1})\leq \ell(\theta^{-1})+\ell(w_{1})=n+\ell(w)-n=\ell(w).
\end{equation}
Hence $\ell(\theta^{-1}w_{1})= \ell(\theta^{-1})+\ell(w_{1})$, from which it follows that $w=\theta^{-1}w_1$ is written in reduced form. Therefore $\widehat{w}=\widehat{\theta}^{-1}\widehat{w}_{1}$ is also written in reduced form. Now $\widehat{w}=\theta w \theta^{-1}=w_{1}\theta^{-1}$, and applying an inequality similar to that of~\reqref{ellwinequ}, we see that $\widehat{w}=w_{1}\theta^{-1}$ is written in reduced form. Hence $\widehat{\theta}^{-1}\widehat{w}_{1}=w_{1}\theta^{-1}$, which is in the form of~(E1), both sides being written in reduced form. Thus $\ell(w_{1}\theta^{-1})= \ell(\theta^{-1}w_1)=\ell(w)=k$, and by the induction hypothesis, there exist $\lambda\in \mathbb{F}_{2}(u,v)$ and $r,s\in \ensuremath{\mathbb Z}$ such that $w_1=(\widehat{\lambda} \lambda)^s\widehat{\lambda}$ and $\theta^{-1}=(\lambda\widehat{\lambda})^r\lambda$. So by~\reqref{lambdahat}, $w=\theta^{-1}w_1= (\lambda\widehat{\lambda})^r\lambda(\widehat{\lambda} \lambda)^s\widehat{\lambda} =(\lambda \widehat{\lambda})^{r+s+1}$, which proves the result in the case $r=n$.
\item Now suppose that $r<n$. Then there must be cancellation on both sides of $w$. Taking the inverse of both sides of the equation $\widehat{w}=\theta w\theta^{-1}$ if necessary, we may suppose that the length of the cancellation on the left is less than or equal to that on the right. So there exist $\theta_1,\theta_2,\theta_3, w_2\in \mathbb{F}_{2}(u,v)$ such that $\theta=\theta_1\theta_2\theta_3$, $w=\theta_3^{-1}w_2 \theta_2\theta_3$ and $\widehat{w}=\theta_1\theta_2 w_2 \theta_1^{-1}$, all these expressions being written in reduced form. Since $\ell(w)=\ell(\widehat{w})$, it follows from the second two expressions that $\ell(\theta_1)=\ell(\theta_3)$, and that $w=\theta_3^{-1}w_2 \theta_2\theta_3=\widehat{\theta}_1\widehat{\theta}_2 \widehat{w}_2 \widehat{\theta}_1^{-1}$, written in reduced form, from which we conclude that $\widehat{\theta}_1=\theta_3^{-1}$, and that $w_2 \theta_2=\widehat{\theta}_2 \widehat{w}_2$, written in reduced form, which is in the form of~(E1). Now $\ell(w_2 \theta_2)<\ell(w)=k$, and applying the induction hypothesis, there exist $\lambda\in \mathbb{F}_{2}(u,v)$ and $r,s\in \ensuremath{\mathbb Z}$ such that $w_2=(\widehat{\lambda} \lambda)^s\widehat{\lambda}$ and $\theta_2=(\lambda\widehat{\lambda})^r\lambda$. Hence $w=\theta_3^{-1}w_2 \theta_2\theta_3= \theta_3^{-1} (\widehat{\lambda} \lambda)^{r+s+1} \theta_3= (\theta_3^{-1}\widehat{\lambda} \widehat{\theta}_3 \ldotp\widehat{\theta}_3^{-1} \lambda \theta_3)^{r+s+1}=(\gamma \widehat{\gamma})^{r+s+1}$, where $\gamma=\theta_3^{-1}\widehat{\lambda} \widehat{\theta}_3$. This completes the proof of the induction step, and hence that of the lemma.\qedhere
\end{enumerate}
\end{enumerate}
\end{proof}
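For instance (an example of ours, included only as an illustration of part~(\ref{it:classifII}) of \relem{conjhat1}), the commutator $w=uvu^{-1}v^{-1}$ satisfies $\widehat{w}=u^{-1}v^{-1}uv=(uv)^{-1}w(uv)$, so $w$ and $\widehat{w}$ are conjugate in $\mathbb{F}_{2}(u,v)$, and indeed $w=(\lambda\widehat{\lambda})^{1}$, where $\lambda=uv$.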
\begin{prop}\mbox{}\label{prop:classif}
\begin{enumerate}
\item\label{it:classifa} Let $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a split $2$-valued map, and let $\widehat{\Phi}\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}}\ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a lift of $\phi$ that is determined by the pair $((w^r,(a,b)), (w^s, (c,d)))$ as described in \repr{exismaps}. Let $\mathcal{O}_{\phi}$ denote the set of conjugates of this pair by elements of $B_2(\ensuremath{\mathbb{T}^{2}})$. Then $\mathcal{O}_{\phi}$ is the union of the sets $\mathcal{O}_{\phi}^{(1)}$ and $\mathcal{O}_{\phi}^{(2)}$, where $\mathcal{O}_{\phi}^{(1)}$ is the subset of pairs of the form $((w_1^r,(a,b)), (w_1^s, (c,d)))$, where $w_1$ runs over the set of conjugates of $w$ in $\mathbb{F}_2(u,v)$, and $\mathcal{O}_{\phi}^{(2)}$ is the subset of pairs of the form $((w_2^r,(a+r|w|_{u},b+r|w|_{v})), (w_2^s, (c+s|w|_{u},d+s|w|_{v})))$, where $w_{2}$ runs over the set of conjugates of $\widehat{w}$ in $\mathbb{F}_2(u,v)$. Further, the correspondence that to $\phi$ associates $\mathcal{O}_{\phi}$ induces a bijection between the set of homotopy classes of split $2$-valued maps and the set of conjugates of the pairs of the form given by \repr{exismaps}(\ref{it:exismapsa}) by elements of $B_2(\ensuremath{\mathbb{T}^{2}})$.
\item\label{it:classifb} Let $f=(f_1,f_2)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ be a $2$-ordered map of $\ensuremath{\mathbb{T}^{2}}$ determined by the pair $((w^r,(a,b)), (w^s, (c,d)))$, let $g=(f_2,f_1)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ be the $2$-ordered map obtained from $f$ by permuting its coordinates, and let $\widehat{\pi}\colon\thinspace [\ensuremath{\mathbb{T}^{2}}, F_2(\ensuremath{\mathbb{T}^{2}})] \ensuremath{\longrightarrow} [\ensuremath{\mathbb{T}^{2}}, D_2(\ensuremath{\mathbb{T}^{2}})]$ be the projection defined in the proof of \relem{split}(\ref{it:splitIII}). Then $\widehat{\pi}^{-1}(\widehat{\pi}([f]))= \brak{[f],[g]}$. Further, $[f]=[g]$ if and only if there exist $\lambda\in \mathbb{F}_2(u,v)$ and $l\in \ensuremath{\mathbb Z}$ such that $w=(\lambda \widehat{\lambda})^l$.
\end{enumerate}
\end{prop}
\begin{proof}\mbox{}
\begin{enumerate}[(a)]
\item To compute $\mathcal{O}_{\phi}$, we determine the conjugates of the pair $((w^r,(a,b)), (w^s, (c,d)))$ by elements of $B_2(\ensuremath{\mathbb{T}^{2}})$, namely by words of the form $\sigma^{\ensuremath{\varepsilon}}z$, where $\ensuremath{\varepsilon} \in \brak{0,1}$, and $z\in P_2(\ensuremath{\mathbb{T}^{2}})$. With respect to the decomposition of \reco{compactpres}, if $\ensuremath{\varepsilon}=0$, we obtain the elements of $\mathcal{O}_{\phi}^{(1)}$. If $\ensuremath{\varepsilon}=1$, using the computation for $\ensuremath{\varepsilon}=0$ and the fact that
$\sigma z^{m} \sigma^{-1}=(uv^{-1}\widehat{z}^{m} vu^{-1}, (m|z|_{u}, m|z|_{v}))$ for all $z\in \mathbb{F}_{2}(u,v)$ and $m\in \ensuremath{\mathbb Z}$ by \relem{conjhat},
we obtain the elements of $\mathcal{O}_{\phi}^{(2)}$. This proves the first part of the statement. The second part is a consequence of classical general facts about maps between spaces of type $K(\pi, 1)$, see~\cite[Chapter~V, Theorem~4.3]{Wh} for example.
\item Let $\alpha\in [\ensuremath{\mathbb{T}^{2}}, F_2(\ensuremath{\mathbb{T}^{2}})]$ and let $f=(f_1,f_2)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ be such that $f\in \alpha$. Taking $g=(f_2,f_1)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$, under the projection $\widehat{\pi}\colon\thinspace [\ensuremath{\mathbb{T}^{2}}, F_2(\ensuremath{\mathbb{T}^{2}})] \ensuremath{\longrightarrow} [\ensuremath{\mathbb{T}^{2}}, D_2(\ensuremath{\mathbb{T}^{2}})]$, $\widehat{\pi}(\beta)=\widehat{\pi}(\alpha)$, where $\beta=[g]$. From \resec{relnsnvm}, $\alpha$ and $\beta$ are the only elements of $[\ensuremath{\mathbb{T}^{2}}, F_2(\ensuremath{\mathbb{T}^{2}})]$ that project under $\widehat{\pi}$ to $\widehat{\pi}(\alpha)$, which proves the first part. It remains to decide whether $\alpha=\beta$. Suppose that $f$ is determined by the pair $P_1=((w^r,(a,b)), (w^s, (c,d)))$. Since $\widehat{\pi}(\beta)=\widehat{\pi}(\alpha)$, $g$ is determined by a pair belonging to $\mathcal{O}_{\phi}$. Using the fact that $g$ is obtained from $f$ via the $\ensuremath{\mathbb Z}_2$-action on $F_2(\ensuremath{\mathbb{T}^{2}})^{\ensuremath{\mathbb{T}^{2}}}$ that arises from the covering map $\pi$ and applying covering space arguments, there exists $g'\in \beta$ that is determined by the pair $P_2=((\widehat{w}^r,(a+r|w|_{u},b+r|w|_{v})), (\widehat{w}^s, (c+s|w|_{u},d+s|w|_{v})))$. Then $\alpha=\beta$ if and only if $P_1$ and $P_2$ are conjugate by an element of $P_2(\ensuremath{\mathbb{T}^{2}})$, which is the case if and only if $w$ and $\widehat{w}$ are conjugate in the free group $\mathbb{F}_2(u,v)$ (recall from the proof of \relem{conjhat1} that if $w$ and $\widehat{w}$ are conjugate then $|w|_{u}=|w|_{v}=0$). By part~(\ref{it:classifII}) of that lemma, this is equivalent to the existence of $\lambda\in \mathbb{F}_2(u,v)$ and $l\in \ensuremath{\mathbb Z}$ such that $w=(\lambda \widehat{\lambda})^l$.\qedhere
\end{enumerate}
\end{proof}
\subsection{Fixed point theory of split $2$-valued maps}\label{sec:fptsplit2}
In this section, we give a sufficient condition for a split $2$-valued map of $\ensuremath{\mathbb{T}^{2}}$ to be deformable to a fixed point free $2$-valued map. \repr{lifttorus} already provides one such condition. We shall give an alternative condition in terms of roots, which seems to provide a more convenient framework from
a computational point of view. To obtain fixed point free $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$, we have two possibilities at our disposal: we may either use \reth{defchineg}(\ref{it:defchinegb}), in which case we should determine the group $P_{3}(\ensuremath{\mathbb{T}^{2}})$, or \reth{defchinegI}(\ref{it:defchinegbI}), in which case we may make use of the results of \resec{toro2}, and notably \repr{presTminus1alta}. We choose the second possibility. We divide the discussion into three parts. In \resec{proofexisfpf}, we prove \repr{exisfpf}. In \resec{p2Tminus1}, we give the analogue for roots of the second part of \repr{exisfpf}, and in
\resec{exrfm}, we will give some examples of split $2$-valued maps that may be deformed to root-free $2$-valued maps.
\subsubsection{The proof of \repr{exisfpf}}\label{sec:proofexisfpf}
Let $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a $2$-valued map of $\ensuremath{\mathbb{T}^{2}}$. As in \reco{compactpres}, we identify $P_{2}(\ensuremath{\mathbb{T}^{2}})$ with $\mathbb{F}_{2} \times \ensuremath{\mathbb Z}^{2}$. The Abelianisation homomorphism is denoted by $\operatorname{\text{Ab}}\colon\thinspace \mathbb{F}_2(u,v) \ensuremath{\longrightarrow} \ensuremath{\mathbb Z}^2$. Recall from the beginning of \resec{relnsnvm} that if $\phi$ is split and $\widehat{\Phi}\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ is a lift of $\phi$, then $\operatorname{\text{Fix}}(\widehat{\Phi})=\operatorname{\text{Fix}}(\phi)$. In this section, we compute the Nielsen number $N(\phi)$ of $\phi$,
and we give necessary conditions for $\phi$ to be homotopic to a fixed point free $2$-valued map. Although the Nielsen number is not the main subject of this paper, it is an important invariant in fixed point theory. The Nielsen number of $n$-valued maps was defined in~\cite{Sch1}. The following result of that paper will enable us to compute $N(\phi)$ in our setting.
\begin{thm}[{\cite[Corollary~7.2]{Sch1}}]\label{th:helgath01}
Let $n\in \ensuremath{\mathbb N}$, let $K$ be a compact polyhedron, and let $\phi=\{f_1,\ldots,f_n\}\colon\thinspace K \multimap K$ be a split $n$-valued map. Then $N(\phi)=N(f_1)+\cdots+N(f_n)$.
\end{thm}
\begin{proof}[Proof of \repr{exisfpf}]
Let $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a split $2$-valued map of $\ensuremath{\mathbb{T}^{2}}$, and let $\widehat{\Phi}=(f_1,f_2) \colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a lift of $\phi$ such that $\widehat{\Phi}_{\#}(e_{1})=(w^r,(a,b))$ and $\widehat{\Phi}_{\#}(e_{2})= (w^s, (c,d)))$, where $(r,s)\in \ensuremath{\mathbb Z}^{2}\setminus \brak{(0,0)}$, $a,b,c,d\in \ensuremath{\mathbb Z}$ and $w\in \mathbb{F}_2(u,v)$. For $i=1,2$, we shall compute the matrix $M_{i}$ of the homomorphism $f_{i\#}\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb Z}^{2} \ensuremath{\longrightarrow} \ensuremath{\mathbb Z}^{2}$ induced by $f_i$ on the fundamental group of $\ensuremath{\mathbb{T}^{2}}$ with respect to the basis $(e_{1},e_{2})$ of $\pi_1(\ensuremath{\mathbb{T}^{2}})$ (up to the canonical identification of $\pi_1(\ensuremath{\mathbb{T}^{2}})$ for different basepoints if necessary). In practice, the bases in the target are the images of the elements $(\rho_{1,1},\rho_{1,2})$ and $(\rho_{2,1},\rho_{2,2})$ by the homomorphism $p_{i\#}\colon\ensuremath{\up{th}}inspace P_2(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} \pi_{1}(\ensuremath{\mathbb{T}^{2}})$ induced by the projection $p_i\colon\ensuremath{\up{th}}inspace F_2(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$ onto the $i\up{th}$ coordinate, where $i=1,2$. Note that $f_{i\#}=p_{i\#}\circ \widehat{\Phi}_{\#}$. Setting $w=w(u,v)$, and using multiplicative notation and \req{uvxy}, we have:
\begin{align*}
\widehat{\Phi}_{\#}(e_{1})&=w^r x^{a} y^{b}=(w(\rho_{1,1},\rho_{1,2}))^r (\rho_{1,1}B_{1,2}^{-1}\rho_{2,1})^{a} (\rho_{1,2}B_{1,2}^{-1}\rho_{2,2})^{b}\\
\widehat{\Phi}_{\#}(e_{2})&=w^s x^{c} y^{d}=(w(\rho_{1,1},\rho_{1,2}))^s (\rho_{1,1}B_{1,2}^{-1}\rho_{2,1})^{c} (\rho_{1,2}B_{1,2}^{-1}\rho_{2,2})^{d}.
\end{align*}
Projecting onto the first (resp.\ second) coordinate, it follows that $f_{1\#}(e_{1})=\rho_{1,1}^{rm+a}\rho_{1,2}^{rn+b}$ and $f_{1\#}(e_{2})=\rho_{1,1}^{sm+c}\rho_{1,2}^{sn+d}$ (resp.\ $f_{2\#}(e_{1})=\rho_{2,1}^{a}\rho_{2,2}^{b}$ and $f_{2\#}(e_{2})=\rho_{2,1}^{c}\rho_{2,2}^{d}$), so $M_{1} =\left( \begin{smallmatrix}
rm+a & sm+c \\
rn+b & sn+d
\end{smallmatrix}\right)$ and $M_{2} =\left( \begin{smallmatrix}
a & c \\
b & d
\end{smallmatrix}\right)$.
One then obtains the equation for $N(\phi)$ as a consequence of \reth{helgath01} and the usual formula for the Nielsen number of a self-map of $\ensuremath{\mathbb{T}^{2}}$~\cite{BrooBrPaTa}. The second part of the statement is clear, since if $\phi$ can be deformed to a fixed point free $2$-valued map then $f_1, f_2$ can both be deformed to fixed point free maps. To prove the last part, $f_1$ and $f_2$ can both be deformed to fixed point free maps if and only if $\det(M_{i}-I_{2})=0$ for $i=1,2$, which using linear algebra is equivalent to:
\begin{gather}
\text{$\det(M_{2}-I_{2})=0$, and}\label{eq:onedet}\\
\det\begin{pmatrix}
a-1 & sm \\
b & sn
\end{pmatrix}+
\det\begin{pmatrix}
rm & c \\
rn & d-1
\end{pmatrix}=0.\label{eq:twodet}
\end{gather}
Equation~\reqref{onedet} is equivalent to the proportionality of $(c,d-1)$ and $(a-1,b)$. Suppose that \req{twodet} holds. If one of the determinants in that equation is zero, then so is the other, and it follows that $(a-1, b),(c,d-1)$ and $(m,n)$ generate a subgroup of $\ensuremath{\mathbb Z}^{2}$ isomorphic to $\ensuremath{\mathbb Z}$, which yields condition~(\ref{it:exisfpfa}) of the statement. If both of these determinants are non zero then $(m,n)$ is neither proportional to $(a-1,b)$ nor to $(c,d-1)$, and since $(c,d-1)$ and $(a-1,b)$ are proportional, $(m,n)$ is not proportional to any linear combination of the two. Further,~\reqref{twodet} may be written as:
\begin{equation*}
0=s\det\begin{pmatrix}
a-1 & m \\
b & n
\end{pmatrix}+
r\det\begin{pmatrix}
m & c \\
n & d-1
\end{pmatrix}=
\det\begin{pmatrix}
s(a-1)-rc & m \\
sb-r(d-1) & n
\end{pmatrix},
\end{equation*}
from which it follows that $s(a-1, b)=r(c,d-1)$, which is condition~(\ref{it:exisfpfb}) of the statement. The converse is straightforward.
\end{proof}
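By way of illustration (the numerical values below are ours and are included only as an example), suppose that $(a,b)=(1,0)$, $(c,d)=(0,1)$, $r=s=1$, and $w\in \mathbb{F}_2(u,v)$ is arbitrary with $\operatorname{\text{Ab}}(w)=(m,n)$. Then $M_{2}=I_{2}$, condition~(\ref{it:exisfpfb}) holds since $s(a-1,b)=(0,0)=r(c,d-1)$, and indeed
\begin{equation*}
M_{1}-I_{2}=\begin{pmatrix}
m & m \\
n & n
\end{pmatrix},
\end{equation*}
so that $\det(M_{1}-I_{2})=\det(M_{2}-I_{2})=0$, and both $f_{1}$ and $f_{2}$ may be deformed to fixed point free maps.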
\begin{rem}
Within the framework of \repr{exisfpf}, the fact that $f_1$ and $f_2$ can each be deformed to a fixed point free map does not necessarily imply that the pair $(f_1, f_2)$, regarded as a map from $\ensuremath{\mathbb{T}^{2}}$ to $F_2(\ensuremath{\mathbb{T}^{2}})$, can be deformed to a pair $(f_1', f_2')$ for which both $f_1'$ and $f_2'$ are fixed point free. Deciding whether the $2$-ordered map $(f_1, f_2)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ can be deformed to a fixed point free $2$-ordered map, under the hypothesis that each of $f_1$ and $f_2$ can be deformed to a fixed point free map, would be a major step in understanding the fixed point theory of $n$-valued maps, and would help in deciding whether the Wecken property holds for $\ensuremath{\mathbb{T}^{2}}$ for the class of split $n$-valued maps.
\end{rem}
\subsubsection{Deformations to root-free $2$-valued maps}\label{sec:p2Tminus1}
Recall from the introduction that a root of an $n$-valued map $\phi_0\colon\thinspace X \multimap Y$, with respect to a basepoint $y_0\in Y$, is a point $x$ such that $y_0\in \phi_0(x)$. If $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ is an $n$-valued map then we may construct another $n$-valued
map $\phi_0\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ as follows.
If $x\in \ensuremath{\mathbb{T}^{2}}$ and $\phi(x)=\{ x_1,\ldots,x_n\}$, then let $\phi_0(x)=\{x_1\ldotp x^{-1},\ldots,x_n \ldotp x^{-1}\}$. The correspondence that to $\phi$ associates $\phi_{0}$ is bijective. Moreover, if $\phi$ is split, so that $\phi=\{f_1, f_2,\ldots, f_n\}$, where the self-maps $f_{i}\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$, $i=1,\ldots,n$, are coincidence-free, then $\phi_{0}$ is also split, and is given by $\phi_{0}(x)=\{f_1(x)\ldotp x^{-1},\ldots, f_n(x)\ldotp x^{-1}\}$ for all $x\in \ensuremath{\mathbb{T}^{2}}$. The restriction of the above-mentioned correspondence to the case where the $n$-valued maps are split is also a bijection. The following lemma implies that the question of deciding whether an $n$-valued map $\phi$ can be deformed to a fixed point free map is equivalent to deciding whether the associated map $\phi_{0}$ can be deformed to a root-free map. Let $1$ denote the basepoint of $\ensuremath{\mathbb{T}^{2}}$.
\begin{lem}\label{lem:equivroot}
With the above notation, a point $x_0\in \ensuremath{\mathbb{T}^{2}}$ is a fixed point of an $n$-valued map $\phi$ of $\ensuremath{\mathbb{T}^{2}}$ if and only if it is a root of the $n$-valued map $\phi_0$ (i.e.\ $1\in \phi_0(x_0)$). Further, $\phi$ may be deformed to an $n$-valued map $\phi'$ such that $\phi'$ has $k$ fixed points if and only if $\phi_0$ may be deformed to an $n$-valued map $\phi_0'$ such that $\phi_0'$ has $k$ roots.
\end{lem}
\begin{proof}
Straightforward, and left to the reader.
\end{proof}
The algebraic condition given by \reth{defchinegI}(\ref{it:defchinegbI}) is equivalent to the existence of a homomorphism $g_{\#}\colon\ensuremath{\up{th}}inspace \pi_1(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} P_{2}(\ensuremath{\mathbb{T}^{2}})$ that factors through $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\})$. The following result is the analogue for roots of the second part of \repr{exisfpf}.
\begin{prop}\label{prop:exisrf}
Let $g\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ be a split $2$-valued map, and let $\widehat{g}=(g_1, g_2)\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_{2}(\ensuremath{\mathbb{T}^{2}})$ be a lift of $g$ such that $\widehat{g}_{\#}(e_{1})=(w^{r},(a',b))$ and $\widehat{g}_{\#}(e_{2})=(w^{s}, (c,d'))\in P_{2}(\ensuremath{\mathbb{T}^{2}})$,
where $(r,s)\in \ensuremath{\mathbb Z}^{2}\setminus \brak{(0,0)}$, $a',b,c,d'\in \ensuremath{\mathbb Z}$, $w\in \mathbb{F}_2(u,v)$ and $\operatorname{\text{Ab}}(w)=(m,n)$.
If $g$ can be deformed to a root-free map, then each of the maps $g_1, g_2\colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$ can be deformed to a root-free map. Further, $g_1$ and $g_2$ can both be deformed to root-free maps if and only if either:
\begin{enumerate}
\item\label{it:exisrfa} the pairs $(a',b),(c,d')$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$, or
\item\label{it:exisrfb} $s(a',b)=r(c,d')$.
\end{enumerate}
\end{prop}
\begin{proof}
If $g$ can be deformed to a root-free map then clearly the maps $g_1, g_2\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$ can be deformed to root-free maps. For the second part of the statement, for $i=1,2$, consider the maps $f_{i}\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$, where $f_i(x)=g_i(x)\ldotp x$, and where $g_1$ and $g_2$ can both be deformed to root-free maps. Then $f_1$ and $f_2$ can be deformed to fixed point free maps, and the maps $f_1, f_2$ are determined by the elements $(w^r, (a,b))$, $(w^s, (c,d))$ of $P_2(\ensuremath{\mathbb{T}^{2}})$, where $a=a'+1$ and $d=d'+1$. By \repr{exisfpf}, either the elements $(a-1,b),(c,d-1)$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$, or $s(a-1,b)=r(c,d-1)$, which is the same as saying that either the elements $(a',b),(c,d')$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$, or $s(a',b)=r(c,d')$, and the result follows.
\end{proof}
\subsubsection{Examples of split $2$-valued maps that may be deformed to root-free $2$-valued maps}\label{sec:exrfm}
We now give a family of examples of split $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ that satisfy the necessary condition of \repr{exisrf} for such a map to be deformable to a root-free map. To do so, we exhibit a family of $2$-ordered maps that we compose with the projection $\pi\colon\thinspace F_2(\ensuremath{\mathbb{T}^{2}}) \ensuremath{\longrightarrow} D_2(\ensuremath{\mathbb{T}^{2}})$ to obtain a family of split $2$-valued maps. We begin by studying a $2$-ordered map of $\ensuremath{\mathbb{T}^{2}}$ determined by a pair of braids of the form $((w^{r},(a,b)), (w^{s}, (c,d)))$, where $s(a,b)=r(c,d)$ (we make use of the notation of \repr{exisrf}).
\begin{prop}\label{prop:necrootfree2}
If $\widehat{\Phi}\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2({\ensuremath{\mathbb{T}^{2}}})$ is a lift of a split $2$-valued map $\phi\colon\ensuremath{\up{th}}inspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ that satisfies $\widehat{\Phi}_{\#}(e_{1})=(w^{r},(a,b))$ and $\widehat{\Phi}_{\#}(e_{2})= (w^{s}, (c,d))$, where $w\in \mathbb{F}_2(u,v)$, $a,b,c,d\in \ensuremath{\mathbb Z}$ and $(r,s)\in \ensuremath{\mathbb Z}^{2}\setminus \brak{(0,0)}$ satisfy $s(a,b)=r(c,d)$, then $\phi$ may be deformed to a root-free $2$-valued map.
\end{prop}
\begin{proof} By hypothesis, the subgroup $\Gamma$ of $\ensuremath{\mathbb Z}^2$ generated by $(a,b)$ and $(c,d)$ is contained in a subgroup isomorphic to $\ensuremath{\mathbb Z}$. Let $\gamma$ be a generator of $\Gamma$. Suppose first that $r$ and $s$ are both non zero. Then we may take $\gamma=(a_0, b_0)=(\ell/r)(a,b)= (\ell/s)(c,d)$, where $\ell=\gcd(r,s)$, and the elements $(w^{r},(a,b))$ and $(w^{s}, (c,d))$ belong to the subgroup of $P_{2}(\ensuremath{\mathbb{T}^{2}})$ generated by $(w^{\ell},(a_0,b_0))$. Let $z\in P_2(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ be an element that projects to $(w^{\ell},(a_0,b_0))$ under the homomorphism $\alpha\colon\thinspace P_2(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})\ensuremath{\longrightarrow} P_2(\ensuremath{\mathbb{T}^{2}})$ induced by the inclusion $\ensuremath{\mathbb{T}^{2}}\setminus\brak{1} \ensuremath{\longrightarrow} \ensuremath{\mathbb{T}^{2}}$. The map $\varphi\colon\thinspace \pi_1(\ensuremath{\mathbb{T}^{2}})\ensuremath{\longrightarrow} P_2(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ defined by $\varphi(e_1)=z^{r/\ell}$ and $\varphi(e_2)=z^{s/\ell}$ extends to a homomorphism, and is a lift of $\widehat{\Phi}_{\#}$. The result in this case follows by \reth{defchinegI}(\ref{it:defchinegbI}). Now suppose that $r=0$ (resp.\ $s=0$). Then $(a,b)=(0,0)$ (resp.\ $(c,d)=(0,0)$) and $(w^{r},(a,b))$ (resp.\ $(w^{s}, (c,d))$) is trivial in $P_{2}(\ensuremath{\mathbb{T}^{2}})$. Let $z\in P_2(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ be an element that projects to $(w^{s},(c,d))$ (resp.\ to $(w^{r},(a,b))$). Then we define $\varphi(e_1)=1$ and $\varphi(e_2)=z$ (resp.\ $\varphi(e_1)=z$ and $\varphi(e_2)=1$), and once more the result follows.
\end{proof}
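To illustrate the first case of the proof above (the numerical values are ours, and serve only as an example), take $r=2$, $s=4$, $(a,b)=(1,0)$ and $(c,d)=(2,0)$, so that $s(a,b)=r(c,d)=(4,0)$. Then $\ell=\gcd(2,4)=2$ and $\gamma=(a_0,b_0)=(1,0)$, and in $P_{2}(\ensuremath{\mathbb{T}^{2}})\cong \mathbb{F}_{2}\times \ensuremath{\mathbb Z}^{2}$ we have $(w^{2},(1,0))=(w^{\ell},(a_0,b_0))$ and $(w^{4},(2,0))=(w^{\ell},(a_0,b_0))^{2}$, in accordance with the proof.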
\begin{proof}[Proof of \reth{necrootfree3}] This follows directly from \repr{necrootfree2} and the relation between the fixed point and root problems described by \relem{equivroot}.
\end{proof}
\begin{lem}\label{lem:exfreero}
Let $k,l,p,q\in \ensuremath{\mathbb Z}$, and suppose that either $p\in \{0,1\}$ or $q\in \{0,1\}$. With the notation of \repr{presTminus1alta}, the elements $(x^py^q)^k$ and $(u^pv^q)^l$ of $P_2(\ensuremath{\mathbb{T}^{2}}\setminus\{1\}; (x_1,x_2))$ commute.
\end{lem}
\begin{proof}
We will make use of \repr{presTminus1alta} and some of the relations obtained in its proof. If $p$ or $q$ is zero then the result follows easily. So it suffices to consider the two cases $p=1$ and $q=1$. By \req{conjxyuv}, for $\ensuremath{\varepsilon}\in \brak{1,-1}$, we have $x^{\ensuremath{\varepsilon}}v x^{-\ensuremath{\varepsilon}}=u^{\ensuremath{\varepsilon}}v (u^{-1}B_{1,2})^{\ensuremath{\varepsilon}}$ and $y^{\ensuremath{\varepsilon}}u y^{-\ensuremath{\varepsilon}}=(vB_{1,2}^{-1})^{\ensuremath{\varepsilon}} uv^{-\ensuremath{\varepsilon}}$, and by induction on $r$, it follows that $x^{r} v
x^{-r}=u^{r} v (u^{-1}B_{1,2})^{r}$ and $y^{r}u y^{-r}=(vB_{1,2}^{-1})^{r} uv^{-r}$ for all $r\in \ensuremath{\mathbb Z}$. So if $p=1$ or $q=1$ then we have respectively:
\begin{align*}
xy^{q} uv^{q} y^{-q} x^{-1} &= x (vB_{1,2}^{-1})^{q} uv^{-q} \ldotp
v^{q}x^{-1}= uv^{q}u^{-1} u =uv^{q}, \;\text{and}\\
x^{p}y u^{p}v y^{-1} x^{-p} &= x^{p} (yvy^{-1})^{p} vx^{-p}= x^{p}
v(B_{1,2}^{-1}u)^{p} v^{-1} \ldotp v x^{-p}= u^{p} v
(u^{-1}B_{1,2})^{p}(B_{1,2}^{-1}u)^{p}=u^{p} v,
\end{align*}
as required.
\end{proof}
This enables us to prove the following proposition and \reth{construct2val}.
\begin{prop}\label{prop:construct2valprop}
Suppose that $(a, b),(c,d)$ and $(m,n)$ belong to a cyclic subgroup of $\ensuremath{\mathbb Z}^2$ generated by an element of the form $(0,q), (1,q), (p,0)$ or $(p,1)$, where $p,q\in \ensuremath{\mathbb Z}$, and let $r,s\in \ensuremath{\mathbb Z}$. Then there exist $w\in \mathbb{F}_2(u,v)$, a split $2$-valued map $\phi\colon\thinspace \ensuremath{\mathbb{T}^{2}} \multimap \ensuremath{\mathbb{T}^{2}}$ and a lift $\widehat{\Phi} \colon\thinspace \ensuremath{\mathbb{T}^{2}} \ensuremath{\longrightarrow} F_2(\ensuremath{\mathbb{T}^{2}})$ of $\phi$ for which $\operatorname{\text{Ab}}(w)=(m,n)$, $\widehat{\Phi}_{\#}(e_1)=(w^{r},(a,b))$ and $\widehat{\Phi}_{\#}(e_2)= (w^{s}, (c,d))$, and such that $\phi$ can be deformed to a root-free $2$-valued map.
\end{prop}
\begin{proof}
Once more we apply \reth{defchinegI}(\ref{it:defchinegbI}). Let $(p,q)$ be a generator of the cyclic subgroup given in the statement. So there exist $\lambda_{1}, \lambda_{2}, \lambda_{3}\in \ensuremath{\mathbb Z}$ such that $(a, b)=\lambda_1(p,q)$, $(c,d)=\lambda_2(p,q)$ and $(m,n)=\lambda_3(p,q)$. We define $\varphi\colon\ensuremath{\up{th}}inspace \pi_1(\ensuremath{\mathbb{T}^{2}})\ensuremath{\longrightarrow} P_2(\ensuremath{\mathbb{T}^{2}}\setminus\brak{1})$ by $\varphi(e_1)=(u^pv^q)^{\lambda_3r}(x^py^q)^{\lambda_1}$ and $\varphi(e_2)=(u^pv^q)^{\lambda_3s}(x^py^q)^{\lambda_2}$. \relem{exfreero} implies that $\varphi$ extends to a well-defined homomorphism, and we may take $w=(u^pv^q)^{\lambda_3}$.
\end{proof}
\begin{proof}[Proof of \reth{construct2val}] This follows directly from \repr{construct2valprop} and the relation between the fixed point and root problems described in \relem{equivroot}.
\end{proof}
\begin{rem}
\reth{construct2val} implies that there is an infinite family of homotopy classes of $2$-valued maps of $\ensuremath{\mathbb{T}^{2}}$ that satisfy the necessary condition of \repr{exisrf}(\ref{it:exisrfa}), and can be deformed to root-free maps. We do not know whether there exist examples of maps that satisfy this condition but that cannot be deformed to root-free maps; however, it is likely that such examples exist.
\end{rem}
\section*{Appendix: Equivalence between $n$-valued maps and maps into configuration spaces}
This appendix constitutes joint work with R.~F.~Brown. Let $n\in \ensuremath{\mathbb N}$. As observed in \resec{intro}, the set of $n$-valued functions from $X$ to $Y$ is in one-to-one correspondence with the set of functions from $X$ to $D_n(Y)$. As we have seen in the main part of this paper, this correspondence facilitates the study of $n$-valued maps, and more specifically of their fixed point theory. In this appendix, we prove \reth{metriccont} that clarifies the topological relationship preserved by the correspondence under some mild hypotheses on $X$ and $Y$. For the sake of completeness, we will include the proof of a simple fact (\repr{multicont}) mentioned in~\cite{Sch0} that relates the splitting of maps and the continuity of multifunctions.
Given a metric space $Y$, let $\mathcal K'$ be the family of non-empty compact sets of $Y$. We equip $\mathcal K'$ with the topology induced by the Hausdorff metric on $\mathcal K'$ defined in~\cite[Chapter~VI, Section~6]{Be}.
\begin{thm}[{\cite[Chapter~VI, Section~6, Theorem~1]{Be}}]\label{th:berge}
Let $X$ and $Y$ be metric spaces, let $\mathcal K'$ denote the family of non-empty compact sets of $Y$, let $\Gamma\colon\thinspace X \multimap Y$ be a multifunction such that for all $x\in X$, $\Gamma(x)\in \mathcal K'$ and $\Gamma(x)\ne \varnothing$. Then $\Gamma$ is continuous if and only if it is a single-valued continuous mapping from $X$ to $\mathcal K'$.
\end{thm}
As we mentioned in \resec{intro}, $F_n(Y)$ may be equipped with the topology induced by the inclusion of $F_{n}(Y)$ in $Y^n$, and $D_n(Y)$
may be equipped with the quotient topology using the quotient map $\pi\colon\thinspace F_n(Y) \ensuremath{\longrightarrow} D_n(Y)$, a subset $W$ of $D_n(Y)$ being open if and
only if $\pi^{-1}(W)$ is open in $F_n(Y)$. If $Y$ is a metric space with metric $d$, the set $D_n(Y)$ is a subset of $\mathcal K'$, and the Hausdorff metric on $\mathcal K'$ mentioned above restricts to a Hausdorff metric $d_H$ on $D_n(Y)$ defined as follows. If $z,w \in D_n(Y)$ then there exist $(z_1, \ldots , z_n), (w_1, \ldots , w_n)\in F_n(Y)$ such that $z=\pi(z_1, \ldots , z_n)$ and $w=\pi(w_1, \ldots , w_n)$, and
we define $d_H$ by:
\begin{equation*}
d_H(z,w)=\max\Bigl(\max_{1\leq i\leq n} d(z_i,w), \max_{1\leq i\leq n} d(w_i,z) \Bigr),
\end{equation*}
where $\displaystyle d(z_i,w)=\min_{1\leq j\leq n} d(z_i,w_j)$ for all $1\leq i\leq n$. Notice that $d_H(z,w)$ does not depend on the choice of representatives in $F_n(Y)$. We now prove \reth{metriccont}.
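Before giving the proof, we record a small illustrative computation of $d_H$ (the choice $Y=\mathbb{R}$ with its usual metric, $n=2$, and the specific points below are ours, included purely for concreteness). Take $z=\pi(0,3)$ and $w=\pi(1,5)$ in $D_2(\mathbb{R})$. Then $d(z_{1},w)=1$, $d(z_{2},w)=2$, $d(w_{1},z)=1$ and $d(w_{2},z)=2$, so that $d_{H}(z,w)=\max(2,2)=2$.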
\begin{proof}[Proof of \reth{metriccont}]
By \reth{berge}, it suffices to show that the set $D_n(Y)$ equipped with the Hausdorff metric $d_{H}$ is homeomorphic to the unordered configuration space $D_n(Y)$ equipped with the quotient topology, or equivalently, to show that a subset of $D_n(Y)$ is open with respect to the Hausdorff metric topology if and only if it is open with respect to the quotient topology. Let $y\in D_n(Y)$, and let $(y_1,\ldots,y_n) \in F_n(Y)$ be such that $\pi(y_1,\ldots,y_n)=y$.
For the `if' part, let $U_1,\ldots, U_n$ be open balls in $Y$ whose centres are $y_1,\ldots,y_n$ respectively. Without loss of generality, we may assume that they have the same radius $\ensuremath{\varepsilon}>0$, and are pairwise disjoint. Consider the Hausdorff ball $U_{H}$ of radius $\ensuremath{\varepsilon}$ in $D_n(Y)$ whose centre is $y$.
Let $z$ be an element of $U_{H}$, and let $(z_1,\ldots,z_n) \in F_n(Y)$ be such that $\pi(z_1,\ldots,z_n)=z$. Suppose for a contradiction that $z \notin \pi(U_1 \times \cdots \times U_n)$. Since the balls $U_1,\ldots,U_n$ are pairwise disjoint and there are as many balls as points, if every $U_i$ contained some $z_j$ then each $U_i$ would contain exactly one $z_j$, and we would have $z\in \pi(U_1 \times \cdots \times U_n)$. Hence there exists a ball $U_{i}$ such that
$z_j \notin U_i$ for all $j\in \brak{1,\ldots,n}$.
So $d(y_i,z)\geq \ensuremath{\varepsilon}$, and from the definition of $d_H$, it follows that $d_H(y,z)\geq \ensuremath{\varepsilon}$, which contradicts the choice of $z$. Hence $z\in \pi(U_1 \times \cdots \times U_n)$, and the `if' part follows.
For the `only if' part, let us consider an open ball $U_{H}$ of radius $\ensuremath{\varepsilon}>0$ in the Hausdorff metric $d_H$ whose
centre is $y$.
We will show that there are open balls $U_1, \ldots ,U_n$ in $Y$ whose centres are $y_1, \ldots , y_n$ respectively, such that the subset of elements $z$ of $D_{n}(Y)$, where $z=\pi(z_1,\ldots,z_n)$ for some $(z_1,\ldots,z_n) \in F_n(Y)$, and where
each $U_i$ contains exactly one point of $\brak{z_1,\ldots,z_n}$, is a subset of $U_{H}$.
Define $\delta>0$ to be the minimum of $\ensuremath{\varepsilon}$ and the distances $d(y_i, y_j)/2$ for all $i \ne j$. For $j = 1, \dots , n$ let $U_{j}$ be the open ball in $Y$ of radius $\delta$ with respect to $d$ and whose centre is $y_j$. Clearly, $U_i \cap U_j = \varnothing$ for all $i \ne j$. So for all $z=\brak{z_1, \dots , z_n}$ belonging to the set $\pi(U_1 \times \cdots \times U_n)$, each $U_j$ contains exactly one point of $\brak{z_1,\ldots,z_n}$, which up to permuting indices, we may suppose to be $z_j$. Further, for all $j = 1, \dots , n$, $d(z_j,y)=d(z_j,y_j)<\delta$ and $d(y_j,z)=d(y_j,z_j)<\delta$.
So from the definition of $d_H$, $d_H(y,z)<\delta\leq\ensuremath{\varepsilon}$, hence $z\in U_H$, and the `only if' part follows.
\end{proof}
Just above Lemma~1 of \cite[Section~2]{Sch0}, Schirmer wrote `Clearly a multifunction which splits into maps is continuous'. For the sake of completeness, we provide a short proof of this fact.
\begin{prop}\label{prop:multicont}
Let $n\in \ensuremath{\mathbb N}$, let $X$ be a topological space, and let $Y$ be a Hausdorff topological space. For $i=1,\ldots,n$, let $f_i\colon\thinspace X \ensuremath{\longrightarrow} Y$ be continuous. Then the split $n$-valued map $\phi= \brak{f_1,\ldots,f_n}\colon\thinspace X \multimap Y$ is continuous.
\end{prop}
\begin{proof}
Let $x_0 \in X$, and let $V$ be an open subset of $Y$ such that $\phi(x_0) \cap V\ne \varnothing$. Then there exists $j\in \brak{1,\ldots,n}$ such that $f_j(x_0) \in V$. Since $f_j$ is continuous, there exists an open subset $U_j$ containing $x_0$ such that if $x \in U_j$ then $f_j(x) \in V$, so $\phi(x)\cap V\ne \varnothing$. Therefore $\phi$ is lower semi-continuous. To prove upper semi-continuity, first note that for all $x\in X$, $\phi(x)$ is closed in $Y$ because $\phi(x)$ is a finite set and $Y$ is Hausdorff. Now let $x_0 \in X$ be such that $\phi(x_0) \subset V$. We then use the continuity of the $f_j$ to define the $U_j$ as before, and we set $U = \bigcap_{j=1}^n U_j$. Since $\phi(U) \subset V$, we have proved that $\phi$ is also upper semi-continuous, and so it is continuous.
\end{proof}
\end{document} |
\begin{document}
\title{Long mutation cycles}
\author{Sergey Fomin}
\address{Department of Mathematics, University of Michigan,
Ann Arbor, MI 48109, USA}
\email{[email protected]}
\author{Scott Neville}
\address{Department of Mathematics, University of Michigan, Ann Arbor, MI 48109, USA}
\email{[email protected]}
\newcommand{u}{u}
\newcommand{\oboxed}[1]{\ovalbox{$#1$}}
\cornersize*{12pt}
\setlength{\fboxsep}{2pt}
\date{April 22, 2023}
\thanks{Partially supported by NSF grants DMS 1840234 and DMS-2054231 and by a Simons Fellowship.
}
\subjclass{
Primary
13F60,
Secondary
05C20.
}
\keywords{Quiver mutation, mutation cycle, cluster algebras.}
\begin{abstract}
A mutation cycle is a cycle in a graph whose vertices are labeled by~the quivers
in a given mutation class and whose edges correspond to single mutations.
For any fixed $n\ge 4$, we describe arbitrarily long mutation cycles involving $n$-vertex quivers.
Each of these mutation cycles allows for an arbitrary choice of $\binom{n}{2}$ positive integer parameters.
None of the mutation cycles we construct can be paved by short mutation cycles.
\end{abstract}
\maketitle
\section{Introduction}
Quiver mutations play a fundamental role in the theory of cluster algebras and its numerous applications,
see, e.g., \cite{FWZ} and references therein.
In this paper, we focus on a particular aspect of the combinatorics of quiver mutations, namely
the study of mutation cycles.
These are cycles in the \emph{mutation graph} of a mutation equivalence class,
the graph whose vertices correspond to the quivers in the class
and whose edges correspond to single mutations that relate quivers to each other.
A~bit more precisely, a \emph{mutation cycle} of length~$N$ rooted at a quiver~$Q$
is a sequence of mutation steps that, when successively applied to~$Q$, ends up returning to~$Q$:
\begin{equation*}
Q=Q^{(0)} \mutation{i_1}
Q^{(1)} \mutation{i_2}
\cdots
\mutation{i_{N-1}} Q^{(N-1)} \mutation{i_N}
Q^{(N)}=Q;
\end{equation*}
here we require $i_1\neq i_2\neq \cdots\neq i_{N-1}\neq i_N\neq i_1$ to ensure that we never
apply the same mutation twice in a row.
Not much is known about mutation cycles. Existing constructions involve
\begin{itemize}[leftmargin=.15in]
\item
quivers of finite mutation type, e.g., those associated with triangulated surfaces~\cite{MR2448067},
where elements of the mapping class group give rise to mutation cycles;
\item
quivers related to square products of Dynkin diagrams~\cite{keller-annals, MR2767952} and more generally,
flip moves in plabic graphs \cite{balitskiy-wellman};
\item
mutation-periodic quivers of A.~Fordy and B.~R.~Marsh~\cite{Fordy-Marsh}.
\end{itemize}
All of these mutation cycles rely on small arrow multiplicities and/or particular symmetries of the quivers involved.
Perhaps more importantly, the only way to get a very long mutation cycle using any of these constructions
(provided we treat quivers up to isomorphism)
is to use quivers with a lot of vertices.
In this paper, we considerably expand the zoo of mutation cycles by constructing,
for any fixed~$n\ge 4$,
arbitrarily long mutation cycles involving $n$-vertex quivers. \linebreak[3]
Our main result (cf.\ Theorem~\ref{thm:Summary} below)
is a construction of a family of $n$-vertex quivers
each of which lies on a mutation cycle of length~${n+4k}$.
All quivers along any such mutation cycle are distinct.
Moreover, none of these cycles can be paved by mutation cycles of length~${\le 4k}$.
Thus, if $n$ is fixed and $k$ is large, say, $n=5$ and $k=100$,
then our construction produces quivers on 5~vertices that lie on mutation cycles of length~405 that cannot be paved by
cycles of length at most~400.
Furthermore, there are lots of such mutation cycles!
For fixed $n\ge 4$ and $k\ge 1$, our construction depends on an arbitrary choice of
$\binom{n}{2}$ parameters $q_{ij}\in {\mathbb Z}_{\ge2}$; here $1\le i<j\le n$.
Different choices of parameters produce different mutation cycles.
The parametrization $(q_{ij})\mapsto Q$
(as well as its inverse) is given by polynomials over~${\mathbb Z}$.
We next state our main result, which combines
Theorems~\ref{thm:GeneralSemiCycles}, \ref{thm:GeneralSemiCycleDistinct}, and~\ref{thm:primitive}.
\begin{theorem}
\label{thm:Summary}
Let $n\ge 4$ and $k\ge 1$.
Choose integers $q_{ij} \geq 2$ for all pairs $1 \leq i < j \leq n$.
Define the sequence $p_0,p_1,p_2,\dots$ by the recurrence
\begin{equation}
p_{j+1}=q_{12} p_j -p_{j-1},
\end{equation}
with initial values $p_0=1$, $p_1=q_{12}$.
Let $Q$ be the quiver on the vertex set $\{1,\dots,n\}$ whose exchange matrix $B(Q)=(b_{ij})$ is defined as follows.
For $1\le i<j\le n$, set
\[
b_{ij} =
\begin{cases}
-p_{2k-2} q_{1j} - p_{2k-1} q_{2j} & \text{if $i=1$ and $3\le j \le n-1$}; \\
p_{2k-1} q_{1j} + p_{2k} q_{2j} & \text{if $i=2$ and $3\le j \le n-1$}; \\
q_{ij} & \text{otherwise};
\end{cases}
\]
also set $b_{ii}=0$ and $b_{ji}=-b_{ij}$.
Then
\begin{itemize}[leftmargin=.2in]
\item
applying the following sequence of mutations to~$Q$ recovers the same quiver~$Q$:
\begin{equation}
\label{eq:main-cycle}
n, \underbrace{1,2,\dots ,1,2}_{\textup{$k$ times}}, n-1, n-2, \dots, 2, 1, \underbrace{2,1,\dots ,2,1}_{\textup{$k$ times}};
\end{equation}
\item
all $n+4k$ quivers lying on this mutation cycle are distinct;
\item
this mutation cycle cannot be paved by mutation cycles of length $\le 4k$.
\end{itemize}
\end{theorem}
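To illustrate the recurrence: for $q_{12}=2$ it gives $p_j=j+1$, while for $q_{12}=3$ it gives $1,3,8,21,55,\dots$ (every other Fibonacci number); in the notation of Definition~\ref{def:chebyshev-poly} below, $p_j=u_j(q_{12})$.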
Figure~\ref{fig:generic-8-cycle} shows the simplest instance of this construction
(for $n=4$ and $k=1$), a mutation cycle of length~8 involving 4-vertex quivers.
This mutation cycle cannot be paved by shorter cycles.
\begin{figure}
\caption{Mutation cycle of length~8 involving 4-vertex quivers.
The $\binom{4}{2}=6$ weights $a,b,c,d,e,f\ge2$ can be chosen arbitrarily.}
\label{fig:generic-8-cycle}
\end{figure}
For $n\ge 5$, the statements in Theorem~\ref{thm:Summary}
remain true even if we view our quivers up to isomorphism and/or global reversal of arrows.
We conjecture (and prove in the $n=4$ case) that each of these mutation cycles is the unique cycle in the corresponding mutation graph,
so that the associated cluster modular group~\cite{FG}
(or cluster automorphism group~\cite{ASS}) is isomorphic to~${\mathbb Z}$.
All quivers appearing in the mutation cycles that we construct
have arrow multiplicities~$\ge2$, i.e., any two vertices are connected by at least two arrows.
The construction from Theorem~\ref{thm:Summary} can be extended to yield mutation cycles (with different initial quivers)
whose mutation sequences are obtained by replacing the $1, 2, \dots, 1, 2$ and $2, 1, \dots, 2, 1$ fragments in~\eqref{eq:main-cycle}
by much more general subwords.
This has been independently observed by Tucker Ervin~\cite{ervin}.
We describe several other ways to obtain nontrivial mutation cycles, see, e.g.,
Examples~\ref{eg:6-cycle-vortices} and \ref{eg:rosette}--\ref{eg:big-horseshoe}.
\pagebreak[3]
Some of our original motivation came from the major unsolved problem of detecting mutation equivalence of quivers.
For $3$-vertex quivers, the problem can be solved (cf.~\cite{ABBS}) using a descent algorithm that
repeatedly decreases the arrow multiplicities until it reaches a canonical minimal representative within a given mutation class.
In light of our results, any descent algorithm for detecting mutation equivalence of quivers
would have to be confluent over arbitrarily large diamonds.
It also follows that an algorithm for finding a shortest sequence of mutations
that connects two mutation-equivalent quivers cannot be based on a steepest descent heuristic.
\pagebreak[3]
None of the quivers appearing in this paper have frozen vertices.
In fact, we did not observe any large mutation cycles in which a particular vertex is frozen.
To rephrase, each of the large mutation cycles that we found involves mutations at all vertices.
Throughout this paper, we use the following conventions.
All our quivers are labeled, so that an $n$-vertex quiver~$Q$ typically has the vertex set $[1,n]=\{1,\ldots, n\}$.
(Subquivers of such a quiver~$Q$ have vertex sets $V\subset [1,n]$.)
It is natural to consider three notions of quiver equivalence:
(i)~equality of quivers as labeled graphs,
(ii)~isomorphism of quivers (i.e., quivers are the same up to a relabeling of vertices), and
(iii)~isomorphism that may be combined with the global reversal of arrows.
While we default to notion~(i), many of our results hold with respect to (ii) and/or~(iii).
The paper is organized as follows.
General background on quiver mutations is intro\-duced in Sections~\ref{sec:quiver-mut}--\ref{sec:3Vertex}.
Various aspects of our main result are established in
Sections~\ref{Sec:GeneralSemiCycles}--\ref{sec:genericity}.
In Section~\ref{Sec:GeneralSemiCycles}, we verify that applying the mutation sequence~\eqref{eq:main-cycle}
recovers the original quiver~$Q$ (cf.\ Theorem~\ref{thm:GeneralSemiCycles}).
In Section~\ref{sec:distinctness}, we prove that all quivers lying on this mutation cycle are distinct
(cf.\ Theorem~\ref{thm:GeneralSemiCycleDistinct}),
even if we treat them up to isomorphism and/or reversal of all arrows
(under mild assumptions, cf.\ Theorem~\ref{thm:GeneralSemiCycleDistinct-noniso}).
In Section~\ref{sec:exits}, we introduce and study certain properties of quivers that propagate under mutation in some directions.
This combinatorial machinery is then used in Sections~\ref{Sec:primitiveCycles}--\ref{sec:genericity}.
In~Section~\ref{Sec:primitiveCycles}, we show that the long mutation cycles constructed in Theorem~\ref{thm:GeneralSemiCycles}
cannot be paved by mutation cycles of bounded length.
In Section~\ref{sec:genericity}, we demonstrate that the set of quivers that appear in our main construction
is in some sense ``full-dimensional:''
it can be parametrized by $\binom{n}{2}$-tuples of nonnegative integers.
In Section~\ref{sec:no-seed-cycles}, we show that
none of the mutation cycles we construct gives rise to a cycle in the exchange graph of the associated cluster algebra.
More generally, we show (see Theorem~\ref{th:no-seed-cycle}) that
no mutation cycle consisting entirely of quivers with at least two arrows between each pair of vertices
yields a cycle in the exchange graph.
Each birational map obtained by composing seed mutations along one of our mutation cycles
gives rise to a discrete dynamical system that deserves further study.
In Section~\ref{sec:moreCycles} we present several additional mutation cycles
that do not come from the construction of Theorem~\ref{thm:GeneralSemiCycles}.
While each of these cycles belongs to a family of arbitrarily long (for a fixed~$n$) mutation cycles,
we do not attempt to describe these families explicitly.
\section*{Acknowledgments}
Some of our results were reported at the OPAC conference in Minneapolis (May 2022) and at the AMS special session on cluster algebras, positivity, and related topics (April 2023).
We thank the organizers of these events.
After having developed the auxiliary machinery presented in Section~\ref{sec:exits},
we found out that a substantial part of it has appeared, in different form, in the earlier work by M.~Warkentin~\cite{Warkentin}.
We are grateful to Tucker Ervin for bringing our attention to~\cite{Warkentin} and for sharing his own observations.
We thank Danielle Ensign for stimulating discussions and assistance with computer simulations.
\section{Quiver mutations}
\label{sec:quiver-mut}
In this section, we establish terminology and remind the reader of the relevant definitions and results.
For a systematic introduction to the combinatorics of quiver mutations, see~\cite{FWZ}.
For a sampling of additional results, see, e.g., \cite{BBH, Fordy-Marsh, Lawson-Mills, Warkentin}.
\begin{definition}
\label{def:quiver}
A \emph{quiver} $Q$ is a finite directed graph with no directed cycles of length 1 or~2.
Its (directed) edges are called \emph{arrows}.
Multiple arrows between a given pair of vertices are allowed.
When drawing a quiver, we will typically write edge multiplicities next to single arrows,
rather than drawing multiple arrows.
Thus, we would draw $\bullet\!\stackrel{\scriptstyle2}{\longrightarrow}\!\bullet$ instead of
$\bullet \!\rightrightarrows\! \bullet\,$.
All our quivers will be \emph{labeled}, i.e., we will distinguish between different isomorphic quivers on the same vertex set.
Accordingly, if $Q$ and $Q'$ are two quivers, then
notation $Q = Q'$ will mean that $Q$ and $Q'$ are \emph{equal} as labeled graphs.
We note that in the literature, quivers are often considered up to graph isomorphism (a relabeling of vertices) and/or a global reversal of arrows; we do not follow this convention.
Cf.\ Example~\ref{eg:acyclicClasses}.
For an $n$-vertex quiver~$Q$, we will typically use the~set
\[
[1,n]=\{1,2,\dots,n\}
\]
as the set of (labels of the) vertices of~$Q$.
\end{definition}
\begin{definition}
\label{def:B(Q)}
Each quiver gives rise to a skew-symmetric matrix $B(Q) = (b_{ij})$
whose entries $b_{ij}=b_{ij}(Q)$ indicate how many arrows run between each pair of vertices and what their orientation is.
The \emph{weight} $|b_{ij}(Q)|$ is the number of arrows between the vertices $i$ and~$j$.
Notation $i \!\dasharrow\! j$ will indicate that all arrows between the vertices $i$ and~$j$
are directed from $i$ to~$j$; in other words, $b_{ij}\geq 0$.
\end{definition}
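For example, if $Q$ is the $3$-vertex quiver with two arrows from $1$ to~$2$, three arrows from $3$ to~$2$, and no arrows between $1$ and~$3$, then
\[
B(Q)=\begin{pmatrix} 0 & 2 & 0 \\ -2 & 0 & -3 \\ 0 & 3 & 0\end{pmatrix},
\]
so that $1\!\dasharrow\!2$ and $3\!\dasharrow\!2$.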
\begin{definition}
A vertex $i$ in a quiver~$Q$ is called a \emph{sink} (resp., \emph{source}) if $j \!\dasharrow\! i$ (resp.,~$i \!\dasharrow\! j$) for all other vertices $j$ in $Q$.
If $i$ is either a sink or a source (and we do not care which), we say that $i$ is a sink/source.
\end{definition}
\begin{definition}
We always use the term \emph{subquiver} to mean ``full subquiver'' (i.e., an induced subgraph).
We denote the (full) subquiver of~$Q$ with vertex set $S$ by~$Q|_S$.
For example, $Q|_{ijk}$ denotes the subquiver of~$Q$ supported by the vertices~$\{i,j,k\}$.
\end{definition}
\begin{definition}
\label{def:quiver-mutation}
To \emph{mutate} a quiver $Q$ at a vertex~$i$, perform the following steps:
\begin{enumerate}[leftmargin=.3in]
\item for each path $j \rightarrow i \rightarrow k$ in~$Q$, add a new arrow from $j$ to $k$.
(Thus, if we have $j \stackrel{\scriptstyle a}{\longrightarrow} i \stackrel{\scriptstyle b}{\longrightarrow} k$ in~$Q$,
then we should add $ab$ new arrows from $j$ to~$k$.)
\item reverse all arrows incident to $i$.
\item repeatedly remove oriented $2$-cycles until there are none left.
\end{enumerate}
The transformed (mutated) quiver is denoted by~$\T{i}{Q}$. We will write $Q \mutation{i} Q'$ to mean that $Q' = \T{i}{Q}$.
\end{definition}
The notation $\mu[i]$ is non-standard; in most of the literature, mutation at a vertex~$i$ is denoted by~$\mu_i$.
We break convention in this paper to avoid nested subscripts and improve legibility.
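Equivalently, at the level of exchange matrices, $\mu[k]$ acts by the standard matrix mutation rule (see, e.g., \cite{FWZ}): writing $B(Q)=(b_{ij})$ and $B(\T{k}{Q})=(b'_{ij})$, we have
\[
b'_{ij}=
\begin{cases}
-b_{ij} & \text{if $i=k$ or $j=k$;}\\
b_{ij}+[b_{ik}]_+\,[b_{kj}]_+ -[-b_{ik}]_+\,[-b_{kj}]_+ & \text{otherwise,}
\end{cases}
\]
where $[t]_+=\max(t,0)$.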
\pagebreak[3]
Important properties of quiver mutation include:
\begin{itemize}[leftmargin=.2in]
\item
$\mu[i]$ is an involution;
\item
$\mu[i]$ commutes with restriction, provided $i$ is in the restricted subset;
\item
$\mu[i]$ commutes with the reversal of all arrows in the quiver.
\end{itemize}
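The following minimal Python sketch (included purely for illustration; the helper \texttt{mutate} is ad hoc and simply implements the matrix mutation rule displayed above) can be used to experiment with these properties; here it checks the first and third ones on a sample $3$-vertex quiver.
\begin{verbatim}
# Illustrative sketch: quiver mutation on exchange matrices (0-indexed vertices).
def mutate(B, k):
    """Return the exchange matrix of mu[k](Q), given B = B(Q)."""
    n = len(B)
    pos = lambda t: max(t, 0)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]            # reverse the arrows incident to k
            else:
                Bp[i][j] = (B[i][j] + pos(B[i][k]) * pos(B[k][j])
                            - pos(-B[i][k]) * pos(-B[k][j]))
    return Bp

# A sample 3-vertex cyclic quiver: 1->2 (weight 2), 2->3 (weight 3), 3->1 (weight 3).
B = [[0, 2, -3], [-2, 0, 3], [3, -3, 0]]
rev = lambda M: [[-x for x in row] for row in M]     # global reversal of arrows
assert mutate(mutate(B, 0), 0) == B                  # mu[i] is an involution
assert mutate(rev(B), 0) == rev(mutate(B, 0))        # mu[i] commutes with reversal
\end{verbatim}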
\begin{definition}
A \emph{sink/source mutation} $\mu[i]$ is a mutation at a sink or source vertex~$i$.
Such a mutation simply reverses all the arrows incident to~$i$.
\end{definition}
\begin{remark}
If $i$ is a sink or source in $Q$ and $S$ is a vertex set not containing~$i$, then
\[
\T{i}{Q}|_S = Q|_S.
\]
\end{remark}
We introduce the following notational shorthand to improve legibility.
\begin{definition}
\label{def:stringOfMutations}
We denote $\mu[i_k i_{k-1}\cdots i_2 i_1] = \mu[i_k] \circ \cdots \circ \mu[i_2] \circ \mu[i_1]$,
so that
\begin{align*}
\mu[i_k i_{k-1}\cdots\, i_2 i_1](Q) &= \mu[i_k] \circ \mu[i_{k-1}] \circ \cdots \circ \mu[i_2] \circ \mu[i_1](Q).
\end{align*}
In other words, we apply the mutations indexed by the bracketed symbols
in the right-to-left order (as is usual when composing maps).
\end{definition}
\begin{example}
\label{ex:notation-mu}
The following identities use the notation introduced in Definition~\ref{def:stringOfMutations}:
\begin{align*}
\T{i i}{Q} &= Q, \\
\T{j}{\T{i}{Q}} &= \T{j i }{Q}, \\
\T{i j } {\T{k l m}{Q}}&= \T{i j k l m}{Q}, \\
\T{1 2 3 2 3}{Q|_{123}} &= \T{12323}{Q}|_{123}.
\end{align*}
\end{example}
\begin{definition}
\label{def:121212}
Notation $(i j)^k$ will denote the sequence $i j i j\cdots i j$ of length $2k$, alternating between $i$ and~$j$.
Thus
\[
\T{(i j)^k}{Q} \stackrel{\rm def}{=} \mu[i] \circ \mu[j] \circ \cdots \circ \mu[i] \circ \mu[j](Q)
\]
denotes the result of applying $k$ iterations of $\mu[i] \circ \mu[j]$ to~$Q$.
\end{definition}
\begin{definition}
\label{def:mutationClass}
The \emph{mutation class} $[Q]$ of a (labeled) quiver $Q$ is the set of (labeled) quivers
that can be obtained from~$Q$ by applying a sequence of mutations.
\end{definition}
\begin{example}[\emph{Type~$\mathbf{A}_3$}]
Let $Q$ be an orientation of the 3-vertex tree
\begin{equation*}
\bullet\!\!-\!\!\!-\!\!\bullet\!\!-\!\!\!-\!\!\bullet
\end{equation*}
(the Dynkin diagram of type~$A_3$).
The mutation class $\mathbf{A}_3\!=\![Q]$ consists of $14$ quivers:
\begin{itemize}[leftmargin=.2in]
\item
two oriented $3$-cycles on the vertices $1,2,3$;
\item
$12$ quivers obtained by choosing all possible vertex labelings and edge orientations of the 3-vertex tree.
\end{itemize}
By comparison, there are only $4$ quivers up to isomorphism in the mutation class~$\mathbf{A}_3$,
namely the $3$-cycle and three different orientations of the 3-vertex tree:
\begin{equation*}
\bullet\!\rightarrow\!\bullet\!\rightarrow\!\bullet
\qquad\qquad
\bullet\!\rightarrow\!\bullet\!\leftarrow\!\bullet
\qquad\qquad
\bullet\!\leftarrow\!\bullet\!\rightarrow\!\bullet
\end{equation*}
The last two of these are equivalent up to global reversal of arrows, so there are $3$ quivers up to isomorphism and global reversal of arrows in the mutation class~$\mathbf{A}_3$.
\end{example}
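As a computational sanity check of this count, the following Python sketch (illustrative only; it reuses the ad hoc \texttt{mutate} helper implementing the standard matrix mutation rule) enumerates, by breadth-first search, all labeled quivers reachable from the orientation $1\to2\to3$ of the $3$-vertex path.
\begin{verbatim}
# Illustrative sketch: enumerate the labeled mutation class of 1 -> 2 -> 3.
from collections import deque

def mutate(B, k):
    n = len(B)
    pos = lambda t: max(t, 0)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                Bp[i][j] = (B[i][j] + pos(B[i][k]) * pos(B[k][j])
                            - pos(-B[i][k]) * pos(-B[k][j]))
    return Bp

start = ((0, 1, 0), (-1, 0, 1), (0, -1, 0))      # exchange matrix of 1 -> 2 -> 3
seen, queue = {start}, deque([start])
while queue:
    B = queue.popleft()
    for k in range(3):
        C = tuple(tuple(row) for row in mutate(B, k))
        if C not in seen:
            seen.add(C)
            queue.append(C)
print(len(seen))     # prints 14, matching the count in the example above
\end{verbatim}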
\begin{remark}
Two isomorphic (labeled) quivers may belong to different mutation classes; see Example~\ref{eg:acyclicClasses} below.
\end{remark}
\begin{definition}
\label{def:mutationGraph}
The \emph{mutation graph} of a mutation class is the graph whose vertices are the quivers in the mutation class
and whose edges correspond to mutations: for each pair of quivers $Q$ and~$Q'=\T{i}{Q}$, there is an edge
labeled $i$ connecting $Q$ and~$Q'$.
\end{definition}
\begin{definition}
\label{def:mutation cycle}
A \emph{mutation cycle} in a mutation graph is a closed walk in which no two consecutive edges or vertices coincide.
More precisely, a mutation cycle of length $N>0$ is defined by a quiver~$Q$ and a sequence $\mathbf{i}=i_1\cdots i_N$ such that
\begin{itemize}[leftmargin=.2in]
\item
$\mu[i_N\cdots i_1](Q)=Q$;
\item
$i_1\neq i_2\neq \cdots\neq i_{N-1}\neq i_N\neq i_1$;
\item
no mutation is applied at an isolated vertex.
\end{itemize}
We then say this mutation cycle is \emph{based} at~$Q$.
We note that $\mu[i_N\cdots i_1](Q)=Q$ if and only if $\mu[i_1\cdots i_N](Q)=Q$.
\end{definition}
\begin{definition}
\label{def:acyclic}
A quiver is \emph{acyclic} if it contains no oriented cycles.
\end{definition}
Note that the above definitions involve two kinds of cycles:
(a) oriented cycles within a particular quiver and
(b) mutation cycles within a mutation graph (whose vertices correspond to quivers in a given mutation class).
\begin{proposition}
\label{pr:acyclic-mutation-cycle}
Let $Q$ be an acyclic quiver on the vertex set $[1,n]$,
with $b_{ij}(Q)\ge 0$ for $i<j$.
Then $\T{12\cdots n}{Q}=Q$ and, equivalently, $\T{n \cdots 21}{Q}=Q$.
\end{proposition}
\begin{proof}
When we mutate $Q$ at the sink~$n$, the latter vertex becomes a source
and the quiver becomes acyclic with respect to the linear ordering $n\to 1\to 2\to\cdots \to n-1$,
a cyclic rearrangement of the original linear ordering. (The weights do not change.)
After $n$ rotations, we return to the original quiver. See Figure~\ref{fig:central-triangle}.
\end{proof}
\begin{figure}
\caption{Acyclic quivers forming a mutation cycle. }
\label{fig:central-triangle}
\end{figure}
Proposition~\ref{pr:acyclic-mutation-cycle} shows that any $n$-vertex acyclic quiver~$Q$ lies on a mutation cycle of length~$n$
consisting entirely of acyclic quivers.
All of them are re-orientations~of~$Q$.
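For instance, let $n=3$ and let $Q$ be the acyclic quiver with $b_{12}=a$, $b_{23}=b$, $b_{13}=c$ (all nonnegative). The sink mutation $\mu[3]$ merely reverses the arrows incident to~$3$, producing the acyclic quiver with linear ordering $3\to1\to2$ and the same weights; mutating next at the new sink~$2$, and then at the new sink~$1$, returns to~$Q$, in agreement with $\T{123}{Q}=Q$.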
In this paper, we will mostly study quivers satisfying the following condition:
\begin{definition}
\label{def:largeWeights}
We say that a quiver~$Q$ has \emph{large weights} if $|b_{ij}(Q)| \geq 2$ for all pairs of distinct vertices~$i$ and~$j$.
(Such quivers were called ``abundant'' in~\cite{Warkentin} and ``2-complete'' in~\cite{Felikson-Tumarkin-2018, LeeLee}.)
\end{definition}
Various results concerning quiver mutation simplify considerably when restricted to quivers with large weights.
In particular, this is the case for the classification of mutation classes of 3-vertex quivers, to be discussed in Section~\ref{sec:3Vertex}.
\section{Quivers on three vertices}
\label{sec:3Vertex}
Mutations of 3-vertex quivers are well understood, see especially~\cite{ABBS}.
In this section, we state and prove, without claiming any originality, all basic results about 3-vertex quivers that will be needed in the sequel.
To be specific, we will only need to treat the case of \emph{mutation-acyclic} 3-vertex quivers,
i.e., those quivers that are mutation equivalent to an acyclic quiver.
\begin{definition}
\label{def:cyclic-quiver}
A 3-vertex quiver~$Q$ is called \emph{cyclic}
if it is not acyclic, i.e., if $Q$ contains an oriented 3-cycle.
We will only use the term ``cyclic'' for 3-vertex quivers.
\end{definition}
\begin{definition}
\label{def:elbow}
Let $Q$ be an acyclic 3-vertex quiver with nonzero weights.
Then $Q$ contains one sink and one source.
The remaining vertex is called the \emph{elbow} of~$Q$.
\end{definition}
A mutation $\mu[i]$ in a 3-vertex quiver leaves two of the three weights unchanged.
\begin{definition}
\label{def:ascent}
In a 3-vertex quiver~$Q$,
a vertex~$i$ (or the mutation~$\mu[i]$)
is called an \emph{ascent} (resp., \emph{descent})
if $\mu[i]$ strictly increases (resp., decreases) one of the weights:
\[
|b_{jk}(\T{i}{Q})| > |b_{jk}(Q)| \quad \bigl(\text{resp., } |b_{jk}(\T{i}{Q})| < |b_{jk}(Q)|\bigr),
\]
where $j$ and $k$ are the two vertices distinct from~$i$.
Thus, $i$ is an ascent in $Q$ if and only if $i$ is a descent in~$\T{i}{Q}$.
\end{definition}
\begin{example}
\label{eg:ascents}
Let $Q$ be a 3-vertex cyclic quiver shown below, with $b, c\ge 2$:
\begin{equation}
\label{eq:2bc-3vertex}
Q=\begin{tikzcd}[arrows={-stealth}]
1 \arrow[r, "2"]
& 2 \arrow[d, "b" ]
\\
& 3 \arrow[lu, "c"]
\end{tikzcd}
\end{equation}
(Thus $Q$ has large weights.) Then the ascents and descents of $Q$ are as follows:
\begin{itemize}[leftmargin=.2in]
\item
if $2=b=c$ (the Markov quiver), 1, 2, 3 are neither ascents nor descents;
\item
if $2<b=c$, then 3 is an ascent, 1 and 2 are neither ascents nor descents;
\item
if $2\le b<c$, then 1 and 3 are ascents and 2 is a descent;
\item
if $2\le c<b$, then 2 and 3 are ascents and 1 is a descent.
\end{itemize}
\end{example}
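For instance, to verify the third case: mutating the quiver~\eqref{eq:2bc-3vertex} at~$2$ replaces the weight $c$ (the weight opposite~$2$) by $|2b-c|<c$, so $2$ is a descent; mutating at~$1$ replaces $b$ by $2c-b>b$, and mutating at~$3$ replaces $2$ by $bc-2>2$, so $1$ and~$3$ are ascents.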
The following lemma is immediate from Definition~\ref{def:ascent}.
\begin{lemma}
\label{lem:acyclic-descent}
Let $Q$ be an acyclic 3-vertex quiver with nonzero weights.
Then $Q$ has one ascent (namely the elbow) and no descents.
\end{lemma}
\begin{lemma}
\label{lem:3VertexUniqueDescents}
A cyclic $3$-vertex quiver $Q$ with large weights has at most one descent.
If $Q$ has a descent, then it is unique, and is the vertex opposite the maximum weight.
Moreover, if $Q$ has a descent, then the other two vertices are ascents.
\end{lemma}
\begin{proof}
By reversing arrows if necessary, we may assume that $1 \!\dasharrow\! 2 \!\dasharrow\! 3 \!\dasharrow\! 1$ in $Q.$ By relabeling the vertices, we may further assume that vertex $2$ is a descent of $Q.$ So we can compute explicitly
\[
|b_{13}(\mu[2](Q))| = |b_{31}(Q) - b_{12}(Q) b_{23}(Q)| < b_{31}(Q)
\]
(where the inequality is due to $2$ being a descent of~$Q$). Since all of $b_{12}(Q), b_{23}(Q)$ and $b_{31}(Q)$ are positive and at least $2$,
this implies that
\[
2 b_{31}(Q) > b_{12}(Q) b_{23}(Q) \geq 2 \max (b_{12}(Q), b_{23}(Q)).
\]
So $b_{31}(Q),$ the weight opposite our descent, is the largest weight in~$Q.$
But $2$ was an arbitrary descent, so if we repeat the argument for another descent we would get two different maximal weights, a contradiction.
Next we check that if $Q$ has descent $2$, then the other vertices are ascents in $Q$.
We compute
\[
b_{32}(\T{1}{Q}) = b_{31}(Q) b_{12}(Q) - b_{23}(Q) \ge 2 b_{31}(Q) - b_{23}(Q) > b_{23}(Q) > 0,
\]
so $1$ is an ascent in~$Q$. (Here we have used that the maximum weight is unique.)
An analogous argument shows that $3$ is an ascent too.
\end{proof}
\begin{lemma}
\label{lem:3VertexAcyclicAscents}
Let $Q$ be an acyclic quiver with large weights on the vertex set $\{1,2,3\}$,
with the elbow at~2.
Consider a sequence $i_1, i_2, i_3,\ldots\in\{1,2,3\}$ such that $i_1=2$ and $i_j\ne i_{j+1}$ for $j\ge 1$.
Set $Q^{(0)}=Q$, $Q^{(1)}=\mu[i_1](Q)$, and more generally,
\begin{equation}
\label{eq:Q^{(k)}}
Q^{(k)}=\mu[i_k \cdots i_2 i_1](Q) \quad (k\ge1).
\end{equation}
Then for any $k\ge 1$:
\begin{itemize}[leftmargin=.2in]
\item
the mutation $\mu[i_{k}]$ is a descent in $Q^{(k)}$;
\item
the quiver $Q^{(k)}$ is cyclic;
\item
the quivers $Q^{(k)}$ and $Q^{(k+1)}$ have opposite edge orientations.
\end{itemize}
\end{lemma}
\begin{proof}
We argue by induction on~$k$.
\emph{Base case:} $k=1$.
We have $i_1=2$ and $Q^{(1)} = \mu[2](Q).$
Suppose, without loss of generality, that in~$Q$ we have $1 \!\dasharrow\! 2 \!\dasharrow\! 3$ and hence $1 \!\dasharrow\! 3.$
Therefore
\begin{align*}
b_{13}(Q^{(1)}) &= b_{13}(Q) + b_{12}(Q) b_{23}(Q) > b_{13}(Q), \\
b_{12}(Q^{(1)}) &= - b_{12}(Q), \\
b_{23}(Q^{(1)}) &= - b_{23}(Q).
\end{align*}
In particular, $2$ is a descent in $Q^{(1)}$ and $Q^{(1)}$ is cyclic.
Without loss of generality, we may assume that $i_2=1$, so that $Q^{(2)}= \mu[1](Q^{(1)})$.
By Lemma~\ref{lem:3VertexUniqueDescents}, $1$ is an ascent in $Q^{(1)}$ (hence~$|b_{32}(Q^{(2)})| \geq |b_{32}(Q^{(1)})|$).
Since the weights are nonzero and $2 \!\dasharrow\! 1 \!\dasharrow\! 3$ in $Q^{(1)},$ we also have $b_{32}(Q^{(2)}) < b_{32}(Q^{(1)})$.
It follows that $b_{32}(Q^{(2)}) < 0,$ so $Q^{(1)}$ and $Q^{(2)}$ have opposite edge orientations.
\emph{Induction step:}
suppose that the claims are true for $Q^{(k-1)}.$
We will treat the case where $i_k = 2, i_{k+1}=1$ and $1 \!\dasharrow\! 2,$ all other cases being similar.
By induction,~$Q^{(k)}$ has opposite edge orientations from~$Q^{(k-1)},$ so in particular is cyclic.
Also by induction,~$Q^{(k-1)}$ has a descent at $i_{k-1} \neq 2,$ and so Lemma~\ref{lem:3VertexUniqueDescents} implies that
$Q^{(k-1)}$ has ascent $2$, or equivalently $Q^{(k)}$ has descent~$2$.
Finally, the mutation $\mu[1]$ reverses all arrows incident to $1$ in $Q^{(k+1)} = \mu[1](Q^{(k)}),$
so it only remains to show that $3 \!\dasharrow\! 2$ in $Q^{(k+1)}.$
If not, then $Q^{(k+1)}$ is acyclic and~${b_{23}(Q^{(k)}) > b_{23}(Q^{(k+1)})}$ (since $Q^{(k)}$ has a path~$3 \!\dasharrow\! 1 \!\dasharrow\! 2$).
But then $Q^{(k)}$ has two descents, contradicting Lemma~\ref{lem:3VertexUniqueDescents}.
\end{proof}
\pagebreak[3]
The mutation class of an arbitrary 3-vertex acyclic quiver with large weights can be parametrized as follows.
\begin{lemma}
\label{lem:3VertexAcyclicAscents-1}
Let $Q$ be an acyclic quiver with large weights on the vertex set $\{1,2,3\}$,
with the elbow at the vertex~2.
Then the map
\begin{equation*}
(i_1,\dots,i_k) \mapsto \mu[i_k \cdots i_2 i_1](Q)
\end{equation*}
(cf.~\eqref{eq:Q^{(k)}}) is a bijection between
\begin{itemize}[leftmargin=.2in]
\item
the set of finite sequences of vertices $(i_1,\dots,i_k)$, $k\ge 0$, satisfying $i_j \neq i_{j+1}$ and $i_2\ne 2$, and
\item
the mutation class~$[Q]$ (see Definition~\ref{def:mutationClass}).
\end{itemize}
Thus the mutation graph of $Q$ consists of three complete infinite binary trees of cyclic quivers, with each root connected to a different acyclic quiver.
The three acyclic quivers form a mutation cycle of sink/source mutations (see Figure~\ref{fig:central-triangle}).
See Figure~\ref{fig:largeWeightAcyclic}.
\end{lemma}
\begin{figure}
\caption{
Part of the mutation graph of a $3$-vertex acyclic quiver with large weights,
showing the vertices at distance $\le 3$ from the acyclic triangle.
The green (resp.,~blue) vertices correspond to acyclic (resp., cyclic) quivers.
Each edge is labeled with the mutation performed.
This label is in brackets when the mutation is at a sink/source.
}
\label{fig:largeWeightAcyclic}
\end{figure}
\begin{proof}
Every quiver in $[Q]$ is given by $\mu[i_k \cdots i_1](Q)$, for some sequence $(i_1,\dots,i_k)$.
If~$i_j=i_{j+1}$ for some $j$, then we can shorten the sequence by removing these two entries.
Similarly, if $i_2=2$, then we can shorten the sequence via
\begin{align*}
\T{\cdots i_4 i_3 2 1}{Q}&=\T{\cdots i_4 i_3 3}{Q}, \\
\T{\cdots i_4 i_3 2 3}{Q}&=\T{\cdots i_4 i_3 1}{Q}
\end{align*}
(here we use that $\T{321}{Q}=\T{123}{Q}=Q$, see Proposition~\ref{pr:acyclic-mutation-cycle}).
We may therefore assume that $i_j \neq i_{j+1}$ for all~$j$ and moreover
$i_2\neq 2$. Thus we have a surjection.
\pagebreak[3]
Let us show that for any $Q'\in[Q]$, the sequence $(i_1,\dots,i_k)$ with the required properties is unique.
Indeed, it follows from Lemma~\ref{lem:3VertexAcyclicAscents} that $i_k$ is the unique descent in~$Q'$,~$i_{k-1}$
is the unique descent in~$\mu[i_k](Q')$,
$i_{k-2}$ is the unique descent in~$\mu[i_{k-1} i_k](Q')$, etc., until we reach an acyclic quiver
(either $Q$ or $\mu[1](Q)$ or~$\mu[3](Q)$).
\end{proof}
\begin{remark}
\label{rem:3-vertex-arbitrary}
One can use the results in~\cite{ABBS}
to extend the above description to arbitrary mutation classes of 3-vertex quivers.
This can be further generalized to $3\times 3$ skew-symmetrizable matrices, cf.~\cite{Seven3x3}.
\end{remark}
\begin{definition}
\label{def:height}
Let $Q$ be a $3$-vertex quiver which is mutation equivalent to an acyclic quiver with large weights.
By Lemma~\ref{lem:3VertexAcyclicAscents}, the quiver~$Q$ also has large weights.
The \emph{descent sequence} of~$Q$ is the (unique) longest sequence $\mathbf{i}=i_1\cdots i_k$ of successive descents
originating at~$Q$; in other words, $i_1$ is the unique descent of~$Q$,
$i_2$ is the unique descent of $\mu[i_1](Q)$, etc.
Alternatively, $\mathbf{i}$ is the (unique) shortest sequence of mutations such that the quiver $\T{\mathbf{i}}{Q}$ is acyclic.
Cf.\ the last paragraph of the proof of Lemma~\ref{lem:3VertexAcyclicAscents-1}.
\end{definition}
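For example, the descent sequence of an acyclic quiver is empty, while mutating an acyclic quiver with large weights at its elbow yields a cyclic quiver whose descent sequence consists of that single vertex.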
\begin{example}
\label{eg:acyclicClasses}
Let $R$ and $R'$ be acyclic $3$-vertex quivers shown below, with distinct large weights $a,b,c$:
\begin{equation}
\label{eq:RR'-3vertex}
R=\begin{tikzcd}[arrows={-stealth}]
1 \arrow[r, "a"]
\arrow[rd, swap,"c"]
& 2 \arrow[d, "b" ]
\\
& 3
\end{tikzcd}
\qquad\qquad
R'=\begin{tikzcd}[arrows={-stealth}]
1
& 2 \arrow[d, "a" ] \arrow[l, swap, "c"]
\\
& 3 \arrow[lu, "b"]
\end{tikzcd}
\end{equation}
The quivers $R$ and $R'$ are isomorphic but \emph{not} mutation equivalent.
Indeed, by Lemma~\ref{lem:3VertexAcyclicAscents-1},
all acyclic quivers in the mutation class $[R]$ are obtained from~$R$ by reorienting its arrows
(while keeping the labeling intact), see Figure~\ref{fig:central-triangle}.
By contrast, getting $R'$ from~$R$ requires relabeling of the vertices.
\end{example}
To state our next technical result, we will need the following notation.
\begin{definition}
\label{def:chebyshev-poly}
We denote by $u_j(a)\in{\mathbb Z}[a]$, $j=0,1,2,\dots$, the \emph{monic Chebyshev polynomials of the second kind}
defined by
\begin{equation*}
u_j(2\cos\theta)=\frac{\sin((j+1)\theta)}{\sin\theta}
\end{equation*}
or by the recurrence
\begin{equation}
\label{eq:chebyshev-rec}
u_{j+1}(a)=au_j(a)-u_{j-1}(a),
\end{equation}
with initial values $u_0(a)=1$, $u_1(a)=a$.
Thus
$u_2(a)=a^2-1$, $u_3(a)=a^3-2a$, \dots
\end{definition}
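For example, $u_j(2)=j+1$ for all $j\ge0$; more generally, for any integer $a\ge2$, the values $u_0(a)<u_1(a)<u_2(a)<\cdots$ form a strictly increasing sequence of positive integers.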
\begin{proposition}
\label{pr:3VertexAlternatingMutations}
Let $Q^{(1)}$ be an acyclic quiver on the vertex set $\{1,2,3\}$, with large weights and~${3\!\dasharrow\!1\!\dasharrow\!2}$ (thus~${3\!\dasharrow\!2}$).
Let $a\!=\! b_{12}(Q^{(1)})$, $b\!=\!b_{31}(Q^{(1)})$, $c\!=\!b_{32}(Q^{(1)})$.
For $j=0,1,2,\dots$,~set
\begin{align*}
Q^{(2j+1)}&=\T{(2 1)^j}{Q^{(1)}}, \\
Q^{(2j+2)}&=\T{1(2 1)^j}{Q^{(1)}},
\end{align*}
so that
\begin{equation}
\label{eq:mu12121...}
Q^{(1)} \mutation{1}
Q^{(2)} \mutation{2}
Q^{(3)} \mutation{1}
Q^{(4)} \mutation{2}
\cdots,
\end{equation}
cf.\ Figure~\ref{fig:mu1mu2}.
Then the weights of the cyclic quivers $Q^{(2)}$, $Q^{(3)}$, $Q^{(4)}$, \dots are given by the following formulas.
For $j\ge0$ (with the convention $u_{-1}(a)=0$), we have
\begin{align*}
b_{21}(Q^{(2j+2)}) &= a,\\
b_{13}(Q^{(2j+2)}) &= u_{2j}(a)b + u_{2j-1}(a)c,\\%convention, u_{-1}(a)=0
b_{32}(Q^{(2j+2)}) &= u_{2j+1}(a)b + u_{2j}(a)c.
\end{align*}
and for $j>0$, we have
\begin{align*}
b_{12}(Q^{(2j+1)}) &= a,\\
b_{23}(Q^{(2j+1)}) &= u_{2j-1}(a)b + u_{2j-2}(a)c,\\
b_{31}(Q^{(2j+1)}) &= u_{2j}(a)b + u_{2j-1}(a)c.\\
\end{align*}
\end{proposition}
\begin{figure}
\caption{The first three mutations in Proposition~\ref{pr:3VertexAlternatingMutations}.}
\label{fig:mu1mu2}
\end{figure}
\begin{proof}
Note that $Q^{(2)}, Q^{(3)}, \ldots$ are cyclic by Lemma~\ref{lem:3VertexAcyclicAscents}, since our first mutation is at the elbow~$1.$
Each of the mutations \eqref{eq:mu12121...} keeps two of the three weights intact and creates one new weight.
These new weights, together with $b$, form the sequence
\begin{equation*}
b, ab+c, (a^2-1)b+ac, (a^3-2a)b+(a^2-1)c, \dots
\end{equation*}
that satisfies the same recurrence as the monic Chebyshev polynomials, see~\eqref{eq:chebyshev-rec}.
The desired formulas follow.
\end{proof}
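For a concrete instance, take $a=2$, $b=3$, $c=5$: the successive new weights are $3,\,11,\,19,\,27,\dots$; indeed, $u_1(2)\,b+u_0(2)\,c=2\cdot3+1\cdot5=11$, $u_2(2)\,b+u_1(2)\,c=3\cdot3+2\cdot5=19$, $u_3(2)\,b+u_2(2)\,c=4\cdot3+3\cdot5=27$, and each term satisfies the recurrence~\eqref{eq:chebyshev-rec} with $a=2$.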
We will later need the following simple observation about 3-vertex subquivers of a larger quiver.
\begin{lemma}
\label{lem:4VertexWeightChanges}
Let $i,j,u,v$ be four vertices in a quiver~$Q$.
Suppose that $i$ is an ascent or descent in $Q|_{iju}$ and an ascent or descent in $Q|_{ijv}$.
Then $b_{uv}(Q) = b_{uv}(\T{i}{Q})$.
\end{lemma}
\begin{proof}
Assume, without loss of generality, that we have orientation $j \!\dasharrow\! i$ in $Q$.
In order for $i$ to be an ascent or descent in $Q|_{iju}$,
we must have $i \!\dasharrow\! u$, and similarly~$i \!\dasharrow\! v$.
But then $i$ is a source in $Q|_{iuv}$, so $\mu[i]$ does not affect $b_{uv}$.
\end{proof}
\section{Main construction}
\label{Sec:GeneralSemiCycles}
For $a,b\in{\mathbb Z}$, we will use the notation $[a,b]=\{a,a+1,\dots,b-1,b\}$.
\begin{theorem}
\label{thm:GeneralSemiCycles}
Let $n\ge 4$ and $k> 0$.
Let $\tilde R$ be a quiver on the vertex set $[1,n-1]$
such that $b_{ij}(\tilde R)\ge 2$
whenever $i<j$.
(In particular, $\tilde R$ is acyclic and has large weights.)
Define the quiver $Q$ on the vertex set $[1,n]$
by setting
\begin{equation}
\label{eq:Q{[1,n]}}
Q|_{[1,n-1]}= \T{(12)^k}{\tilde R}
\end{equation}
and choosing the values $b_{in}(Q) \geq 2$ arbitrarily;
in particular, $n$~is a sink in~$Q$.
Then the quiver $Q$ lies on the following mutation cycle of length $n+4k$:
\begin{equation}
\label{eq:GeneralSemiCycle}
Q = \T{(1 2)^k 1 2 3 \cdots (n-2) (n-1) (2 1)^k n}{Q}.
\end{equation}
\end{theorem}
\begin{example}
\label{eg:n=4,k=2}
The $n=4$, $k=1$ case of Theorem~\ref{thm:GeneralSemiCycles} is shown in Figure~\ref{fig:generic-8-cycle},
with the quiver~$Q$ appearing in the top left corner.
We assume that $a,b,c,d,e,f \geq 2$.
\end{example}
\begin{example}
\label{eg:longcycle5}
Here is a specific example with $n=5$ and $k=2$.
Choose
\begin{equation*}
\tilde R=\begin{tikzcd}[arrows={-stealth}, row sep=40]
1 \arrow[rr, "3"] \arrow[rrd, near start, "6"] \arrow[d, swap, "5" ]
&& 2 \arrow[d, "8" ] \arrow[dll,swap, near end, "7"]
\\
4 && 3 \arrow[ll, swap, "4" ]
\end{tikzcd}
\end{equation*}
so that
\begin{equation*}
Q|_{[1,4]}\!=\!\T{1212}{\tilde R} = \begin{tikzcd}[arrows={-stealth}, row sep=40]
1 \arrow[rr, "3"]
&& 2 \arrow[d, "566" ] \arrow[dll,swap, near end, outer sep=-1, "490"]
\\
4 \arrow[u, "187" ] && 3 \arrow[ll, swap, "4" ] \arrow[llu, swap, near end, outer sep=-1, "216"]
\end{tikzcd}
.
\end{equation*}
Now extend $Q|_{[1,4]}$ to $Q$ by setting arbitrary values $b_{15},b_{25}, b_{35},b_{45}\ge2$.
E.g.,
\begin{equation}
Q= \begin{tikzcd}[arrows={-stealth}, row sep=30]
1 \arrow[rr, "3"] \arrow[rddd, near end, swap, outer sep=-1.5, "13"]
&& 2 \arrow[dd, "566" ] \arrow[ddll, swap, "490", pos=0.62, outer sep=-1.5] \arrow[lddd, near end, outer sep=-1.5, "17"]
\\
\\
4 \arrow[uu, "187" ] \arrow[rd, swap, outer sep=-1.5, "11" ] &&
3 \arrow[ll, swap, "4" ] \arrow[lluu, "216", swap, near end, outer sep=-1] \arrow[ld, outer sep=-1.5, "19" ]
\\
& 5
\end{tikzcd}
\end{equation}
Then \eqref{eq:GeneralSemiCycle} asserts that
$\T{1212123421215}{Q}=Q$, as in Figure~\ref{fig:egCycle}.
\end{example}
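The identity $\T{1212123421215}{Q}=Q$ can also be checked by machine: the following Python sketch (illustrative only; the exchange matrix below is read off the picture of~$Q$ above, and the ad hoc \texttt{mutate} helper implements the standard matrix mutation rule) applies the thirteen mutations, rightmost first, and verifies that the original matrix is recovered.
\begin{verbatim}
# Illustrative sketch: check mu[1212123421215](Q) = Q for the quiver Q above.
def mutate(B, k):                        # k is 0-indexed here
    n = len(B)
    pos = lambda t: max(t, 0)
    Bp = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                Bp[i][j] = (B[i][j] + pos(B[i][k]) * pos(B[k][j])
                            - pos(-B[i][k]) * pos(-B[k][j]))
    return Bp

# Exchange matrix of Q (vertices 1..5), read off the picture above.
upper = {(1, 2): 3, (1, 3): -216, (1, 4): -187, (1, 5): 13,
         (2, 3): 566, (2, 4): 490, (2, 5): 17,
         (3, 4): 4, (3, 5): 19, (4, 5): 11}
B = [[0] * 5 for _ in range(5)]
for (i, j), w in upper.items():
    B[i - 1][j - 1], B[j - 1][i - 1] = w, -w

word = "1212123421215"                   # mutation sequence, applied right to left
C = B
for ch in reversed(word):
    C = mutate(C, int(ch) - 1)
assert C == B                            # we return to Q, as claimed
\end{verbatim}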
\begin{figure}
\caption{The mutation cycle from Theorem~\ref{thm:GeneralSemiCycles}.}
\label{fig:egCycle}
\end{figure}
The remainder of this section is devoted to the proof of Theorem~\ref{thm:GeneralSemiCycles}.
To make it easier to discuss particular mutations and quivers along the mutation sequence
appearing in~\eqref{eq:GeneralSemiCycle},
we denote by $Q^{(j)}$ the result of applying the first $j$ mutations in this sequence (starting with $\mu[n]$) to~$Q$,
so that
\begin{equation}
\label{eq:Q123}
Q^{(0)}=Q,\ \ Q^{(1)}=\T{n}{Q},\ \ Q^{(2)}=\T{1n}{Q}, \ \ Q^{(3)}=\T{21n}{Q}, \dots,
\end{equation}
see Figure~\ref{fig:GeneralSemiCycles-notation}.
We then denote
\begin{align}
\label{eq:RLQ-R}
R&=Q^{(2k+1)}=\T{(2 1)^k n}{Q}, \\
\label{eq:RLQ-L}
L&= Q^{(2k+n)}=\T{1 \cdots (n-1) (21)^k n}{Q}=\T{1 \cdots (n-1)}{R} , \\
\label{eq:Qpdef}
Q' &= Q^{(4k+n)}= \T{(1 2)^k 1 2 3 \cdots (n-2) (n-1) (2 1)^k n}{Q}.
\end{align}
\begin{figure}
\caption{The putative mutation cycle, see \eqref{eq:Q123}.}
\label{fig:GeneralSemiCycles-notation}
\end{figure}
Our goal is to show that $Q = Q'$.
We will gradually demonstrate that certain subquivers of $Q$ and $Q'$ are equal,
see Lemmas~\ref{lem:SubquiverTildeQ}, \ref{lem:Subquiver12n}, and~\ref{lem:bin}.
\begin{lemma}
\label{lem:SubquiverTildeQ}
We have $L|_{[1,n-1]}\!=\!R|_{[1, n-1]}\!=\!\tilde R$ and $Q|_{[1,n-1]}\!=\!Q^{(1)}|_{[1,n-1]}\!=\!Q'|_{[1, n-1]}$.
More generally,
\begin{equation}
\label{eq:SubquiverTildeQ-2}
Q^{(4k+n-\ell)}|_{[1,n-1]} = Q^{(\ell+1)}|_{[1,n-1]} \quad (0 \leq \ell \leq 2k).
\end{equation}
\end{lemma}
\begin{proof}
Since $n$ is a sink in~$Q$, we have
\begin{equation}
\label{eq:Q[n-1]=Q1}
Q|_{[1,n-1]}=Q^{(1)}|_{[1,n-1]}.
\end{equation}
Therefore
\begin{equation}
\label{eq:R[n-1]=tildeR}
R|_{[1,n-1]}\stackrel{\eqref{eq:RLQ-R}}{=}\T{(21)^k}{Q^{(1)}}|_{[1,n-1]}
\stackrel{\eqref{eq:Q[n-1]=Q1}}{=}\T{(21)^k}{Q}|_{[1,n-1]}\stackrel{\eqref{eq:Q{[1,n]}}}{=}\tilde R.
\end{equation}
Recall that $\tilde R$ is acyclic with $b_{ij}(\tilde R)>0$ for $i<j$.
It follows by Proposition~\ref{pr:acyclic-mutation-cycle} that
\begin{equation}
\label{eq:L[n-1]=tildeL}
L|_{[1, n-1]}\stackrel{\eqref{eq:RLQ-L}}{=}\T{12\cdots(n-1)}{R}|_{[1,n-1]}
\stackrel{\eqref{eq:R[n-1]=tildeR}}{=}\T{12\cdots(n-1)}{\tilde R}=\tilde R .
\end{equation}
We conclude that
\begin{equation*}
Q'|_{[1, n-1]}
\!\stackrel{\eqref{eq:Qpdef}}{=}\! \T{(12)^k}{L}|_{[1, n-1]}
\!\stackrel{\eqref{eq:L[n-1]=tildeL}}{=\!\!=}\! \T{(12)^k}{R}|_{[1, n-1]}
\!\stackrel{\eqref{eq:RLQ-R}}{=}\! Q^{(1)}|_{[1,n-1]}=Q|_{[1,n-1]}.
\end{equation*}
Identity \eqref{eq:SubquiverTildeQ-2} is deduced in the same way, by applying the mutations
$\mu[\cdots 212]$ to the identity $L|_{[1,n-1]}\!=\!R|_{[1, n-1]}$.
\end{proof}
Now that we have shown that $Q|_{[1,n-1]}=Q'|_{[1, n-1]}$,
it remains to demonstrate that the multiplicities and directions of arrows incident to the vertex~$n$
are the same in $Q$ and~$Q'$; that is, we need to show that $b_{in}(Q)=b_{in}(Q')$ for $i=1,\dots,n-1$.
\begin{lemma}
\label{lem:Subquiver12nStart}
The quiver $Q^{(1)}|_{12n}$ is acyclic with large weights, with elbow at~$1$. \\
The quiver $Q|_{12n}$ is acyclic with large weights, with elbow at~$2$.
\end{lemma}
\begin{proof}
We have $b_{12}(R) =b_{12}(\tilde R)\ge 2$ by construction.
Since $Q^{(1)}=\T{(12)^k}{R}$, it follows that $b_{12}(Q^{(1)})=b_{12}(R)\ge 2$.
We have $b_{1n}(Q),b_{2n}(Q)\ge 2$ by construction.
Since $Q^{(1)}=\T{n}{Q}$, it follows that~$b_{n1}(Q^{(1)}), b_{n2}(Q^{(1)})\ge 2$,
and we are done with $Q^{(1)}|_{12n}$.
The claim regarding $Q|_{12n}=\T{n}{Q^{(1)}|_{12n}}$ immediately follows.
\end{proof}
For $1\le \ell\le 2k$, we denote
\begin{equation}
\label{eq:varepsilon}
\varepsilon(\ell)=
\begin{cases}
1 & \text{if $\ell$ is odd;} \\
2 & \text{if $\ell$ is even,}
\end{cases}
\end{equation}
so that the mutations along the right rim of the diagram in Figure~\ref{fig:GeneralSemiCycles-notation}
take the form
\begin{equation*}
Q^{(\ell)}\mutation{\varepsilon(\ell)}Q^{(\ell+1)}.
\end{equation*}
\begin{lemma}
\label{lem:mut-epsilon}
Let $1\le \ell\le 2k$ and $i\!\in\! [3,n\!-\!1]$.
Then $\varepsilon(\ell)$ is a descent in $Q^{(\ell)}|_{12i}$.
Furthermore, $Q^{(\ell)}|_{12n}$ has large weights and an ascent at~$\varepsilon(\ell)$; for $\ell\ge2$, this subquiver is cyclic.
In particular, $R|_{12n}$ has large weights and the orientation $1\!\dasharrow\! 2\!\dasharrow\! n\!\dasharrow\! 1$.
\end{lemma}
\begin{proof}
By construction, $R|_{12i}=\tilde R|_{12i}$ is acyclic with elbow at~$2$ and large weights.
It~follows from Lemma~\ref{lem:3VertexAcyclicAscents} that every mutation
directed away from~$R$ along the right rim of Figure~\ref{fig:GeneralSemiCycles-notation}
ascends~the subquiver on the vertices $1,2,i$.
Hence going in the opposite direction gives a descent.
By Lemma~\ref{lem:Subquiver12nStart}, $Q^{(1)}|_{12n}$ is acyclic with elbow at~$1$ and large weights.
By Lemma~\ref{lem:3VertexAcyclicAscents}, every mutation
directed towards~$R$ along the right rim of Figure~\ref{fig:GeneralSemiCycles-notation}
ascends~the subquiver on the vertices $1,2,n$ and makes this subquiver cyclic.
Since $1 \!\dasharrow\! 2$ in $R$, the last claim follows.
\end{proof}
\begin{lemma}
\label{lem:binRQ}
We have
\begin{equation}
\label{eq:binRQ}
Q^{(1)}|_{[3,n]} =Q^{(2)}|_{[3,n]} = \cdots =Q^{(2k+1)}|_{[3,n]} = R|_{[3,n]}.
\end{equation}
\end{lemma}
\begin{proof}
Let $1\le \ell\le 2k$ and $3\le i<j\le n$.
By Lemma~\ref{lem:mut-epsilon},
$\varepsilon(\ell)$ is an ascent or descent in both $Q^{(\ell)}|_{12i}$ and $Q^{(\ell)}|_{12j}$.
Then Lemma~\ref{lem:4VertexWeightChanges} implies that $b_{ij}$ is unchanged by the
mutation $Q^{(\ell)}\mutation{\varepsilon(\ell)}Q^{(\ell+1)}$.
The claim follows.
\end{proof}
\label{Sec:Sinks}
\begin{lemma}
\label{lem:Sinks}
Every mutation $\mu[i]$ with $i\ge 3$ in Figure~\ref{fig:GeneralSemiCycles-notation}
is a sink mutation (assuming we are moving clockwise along the top or bottom rim).
\end{lemma}
We note that in Figure~\ref{fig:GeneralSemiCycles-notation},
there is exactly one mutation $\mu[i]$ for every $i\ge 3$.
\begin{proof}
The mutation $Q\mutation{n}Q^{(1)}$ is a sink mutation by construction.
Let us examine the mutations $\mu[\ell]$ ($3\le \ell \le n-1$) along the bottom rim:
\begin{equation}
\label{eq:bottom(n-2)}
Q^{(2k+n-2)} \mutation{3} Q^{(2k+n-3)} \mutation{4} \cdots \mutation{n-2}
Q^{(2k+2)} \mutation{n-1} Q^{(2k+1)} =R.
\end{equation}
Recall that $\tilde R=R|_{[1,n-1]}$ is acyclic with $b_{ij}(\tilde R)>0$ for $i<j$,
so all such mutations are sink mutations
within the subquiver on $[1,n-1]$, cf.\ Proposition~\ref{pr:acyclic-mutation-cycle}.
By Lemma~\ref{lem:binRQ}, for $i \in [3,n-1]$, we have~$b_{in}(R) = b_{in}(Q^{(1)}) = - b_{in}(Q) < 0$,
so~$R|_{[3,n]}$ is acyclic with linear order~$n\to 3\to 4\to\cdots\to n-1$.
Consequently, all mutations~$\mu[\ell]$ ($3\le \ell \le n-1$) are sink mutations
within the subquiver on~$[3,n]$, cf.~Proposition~\ref{pr:acyclic-mutation-cycle}.
Putting everything together, we see that all mutations~$\mu[\ell]$~(${3\le \ell \le n-1}$) are sink mutations.
\end{proof}
In order to prove that $Q$ and $Q'$ agree on their subquivers with vertices $1,2,n$
(see Lemma~\ref{lem:Subquiver12n}), we are going to completely describe all subquivers $Q^{(\ell)}|_{12n}$, cf.\ Figure~\ref{fig:Q12nShapes}.
\begin{figure}
\caption{Subquivers on the vertices $1,2,n$ along the mutation cycle.
Dashed lines indicate equality of these subquivers.}
\label{fig:Q12nShapes}
\end{figure}
\begin{lemma}
\label{lem:Subquiver12nBottom}
We have
$Q^{(2k+n-2)}|_{12n}=\cdots =
Q^{(2k+2)}|_{12n} = Q^{(2k+1)}|_{12n} = R|_{12n}$.
\end{lemma}
Note that these are precisely the subquivers of the quivers appearing in~\eqref{eq:bottom(n-2)}.
\begin{proof}
By Lemma~\ref{lem:Sinks}, each mutation at $3,\dots,n-2,n-1$ is a sink mutation.
As such, it does not change the subquiver supported on the remaining vertices $1,2,n$.
\end{proof}
\begin{lemma}
\label{lem:Subquiver12nLeft}
We have $Q^{(4k+n-2-j)}|_{12n} = Q^{(j+1)}|_{12n}$ for $ 0 \leq j \leq 2k$.
\end{lemma}
\begin{proof}
In the case $j=2k$, the claim
$Q^{(2k+n-2)}|_{12n}=Q^{(2k+1)}|_{12n}$ holds by Lemma~\ref{lem:Subquiver12nBottom}.
Applying mutations at $2, 1, 2,\dots$, we obtain the desired identities for all~$j$.
\end{proof}
\begin{lemma}
\label{lem:Subquiver12n}
We have $Q|_{12n} = Q'|_{12n}$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:Subquiver12nLeft} (with $j=0$), we have $Q^{(4k+n-2)}|_{12n} = Q^{(1)}|_{12n}$. By Lemma~\ref{lem:Subquiver12nStart}, the subquiver~$Q^{(1)}|_{12n}$ lies on the mutation 3-cycle
\begin{equation*}
Q|_{12n} \mutation{n} Q^{(1)}|_{12n} = Q^{(4k+n-2)}|_{12n}
\mutation{2} Q^{(4k+n-1)}|_{12n} \mutation{1} Q'|_{12n}. \qedhere
\end{equation*}
\end{proof}
We next examine the entries $b_{in}(Q^{(j)})$ for $i\in [3,n-1]$.
Lemma~\ref{lem:fourChanges} describes the evolution of these entries under the mutations
\begin{align*}
Q^{(2k+n-2)} \mutation{2}Q^{(2k+n-1)}\mutation{1}Q^{(2k+n)} \\
Q^{(4k+n-2)} \mutation{2}Q^{(4k+n-1)}\mutation{1}Q^{(4k+n)}
\end{align*}
located near the bottom left and top left corners of
Figure~\ref{fig:GeneralSemiCycles-notation}.
\begin{lemma}
\label{lem:fourChanges}
Set $a\!=\!b_{12}(Q)\!=\!b_{12}(R), x\!=\!b_{1n}(Q)$, and $y\!=\!b_{2n}(Q)$.
For $i\!\in\! [3,n-1]$,
\begin{align}
\label{eq:change1}
b_{in}(Q^{(2k+n-1)}) - b_{in}(Q^{(2k+n-2)}) &= b_{2i}(R) (u_{2k-1}(a) x + u_{2k-2}(a) y), \\
\label{eq:change2}
b_{in}(Q^{(2k+n)}) - b_{in}(Q^{(2k+n-1)}) &= b_{1i}(R) (u_{2k-2}(a) x + u_{2k-3}(a) y), \\
\label{eq:change3}
b_{in}(Q^{(4k+n-1)}) - b_{in}(Q^{(4k+n-2)}) &= -y (u_{2k-2}(a) b_{2i}(R) + u_{2k-3}(a) b_{1i}(R)), \\
\label{eq:change4}
b_{in}(Q') - b_{in}(Q^{(4k+n-1)}) &= -x (u_{2k-2}(a) b_{1i}(R) + u_{2k-1}(a) b_{2i}(R)).
\end{align}
\end{lemma}
We note that the numbers $b_{12}(R), b_{1i}(R), b_{2i}(R)$ and $b_{1n}(Q), b_{2n}(Q), b_{in}(Q)$
are all at least $2$ (hence positive) by construction.
\begin{proof}
Let us prove the formula~\eqref{eq:change1};
the proofs of \eqref{eq:change2}--
\eqref{eq:change4} are analogous.
As $Q^{(2k+n-2)} \mutation{2} Q^{(2k+n-1)},$ Lemma~\ref{lem:Subquiver12nBottom} implies $b_{2n}(Q^{(2k+n-2)}) = b_{2n}(R|_{12n}).$
Thus by applying Proposition~\ref{pr:3VertexAlternatingMutations} to $Q^{(1)}|_{12n}$ (with $n$ relabeled to $3$):
\[
b_{2n}(Q^{(2k+n-2)}) =b_{2n}(R|_{12n}) = b_{2n}(\T{(21)^k}{Q^{(1)}}) = u_{2k-1}(a) x + u_{2k-2}(a) y > 0.
\]
Lemma~\ref{lem:SubquiverTildeQ} implies that $b_{2i}(Q^{(2k+n-2)}|_{[1,n-1]}) = b_{2i}(\T{21}{\tilde R}).$
Since $1$ is a source in $\tilde R$ and $2$ is a source in $\T{1}{\tilde R}$, both of these mutations are source mutations, and we get
$b_{2i}(Q^{(2k+n-2)}) = -b_{2i}(\tilde R) <0$.
Therefore $i \!\dasharrow\! 2 \!\dasharrow\! n$ in $Q^{(2k+n-2)}$ and we compute:
\begin{align*}
b_{in}(Q^{(2k+n-1)}) - b_{in}(Q^{(2k+n-2)}) &=b_{i2}(Q^{(2k+n-2)}) b_{2n}(Q^{(2k+n-2)}) \\
&=b_{2i}(R) (u_{2k-1}(a) x + u_{2k-2}(a) y).
\end{align*}
To show \eqref{eq:change3}, \eqref{eq:change4}, one should apply Proposition~\ref{pr:3VertexAlternatingMutations} to relabelings of the quivers $R|_{12i}$ instead of $Q^{(1)}|_{12n}$.
\end{proof}
\begin{lemma}
\label{lem:binUnchanged}
We have $Q^{(2k+n)}|_{[3,n]} = Q^{(2k+n+1)}|_{[3,n]} = \cdots = Q^{(4k+n-2)}|_{[3,n]}$.
\end{lemma}
\begin{proof}
Let $2 \leq \ell \leq 2k$ and $3 \leq i < j \leq n$.
By Equation~\eqref{eq:SubquiverTildeQ-2} (proved in Lemma~\ref{lem:SubquiverTildeQ}) and Lemma~\ref{lem:Subquiver12nLeft},
$Q^{(4k+n-\ell)}|_{12i}$ and $Q^{(4k+n-\ell)}|_{12j}$ correspond to subquivers with the same support in either $Q^{(\ell+1)}$ or $Q^{(\ell-1)}$.
Lemma~\ref{lem:mut-epsilon} implies that the mutation~$\varepsilon(\ell)$ is an ascent or descent in both of these subquivers.
Thus Lemma~\ref{lem:4VertexWeightChanges} implies that~$b_{ij}$ is unchanged by the mutation
$Q^{(4k+n-\ell)}~\mutation{\varepsilon(\ell)}~Q^{(4k+n+1-\ell)}$.
\end{proof}
\begin{lemma}
\label{lem:bin}
We have $b_{in}(Q) = b_{in}(Q')$ for $3 \leq i \leq n-1$.
\end{lemma}
\begin{proof}
We have
\[
b_{in}(Q) =-b_{in}(Q^{(1)}) = -b_{in}(R) = b_{in}(Q^{(2k+n-2)})
\]
by Lemmas~\ref{lem:binRQ},~\ref{lem:Sinks}.
Further we can rewrite:
\begin{align*}
b_{in}(Q') - b_{in}(Q^{(2k+n-2)}) &= (b_{in}(Q') - b_{in}(Q^{(4k+n-2)})) \\
&\quad + (b_{in}(Q^{(4k+n-2)}) - b_{in}(Q^{(2k+n)})) \\
&\quad + (b_{in}(Q^{(2k+n)}) - b_{in}(Q^{(2k+n-2)})).
\end{align*}
Lemma~\ref{lem:binUnchanged} implies $b_{in}(Q^{(4k+n-2)})\! =\! b_{in}(Q^{(2k+n)})$.
Summing equations~\eqref{eq:change1}--\eqref{eq:change4} from Lemma~\ref{lem:fourChanges} implies that
\begin{align*}
b_{in}(Q') - b_{in}(Q^{(2k+n-2)}) &= b_{2i}(R) (u_{2k-1}(b_{12}(R)) b_{1n}(Q) + u_{2k-2}(b_{12}(R)) b_{2n}(Q))\\
&\quad + b_{1i}(R) (u_{2k-2}(b_{12}(R)) b_{1n}(Q) + u_{2k-3}(b_{12}(R)) b_{2n}(Q))\\
&\quad - b_{2n}(Q) (u_{2k-2}(b_{12}(R)) b_{2i}(R) + u_{2k-3}(b_{12}(R)) b_{1i}(R))\\
&\quad - b_{1n}(Q) (u_{2k-2}(b_{12}(R)) b_{1i}(R) + u_{2k-1}(b_{12}(R)) b_{2i}(R))\\
&= b_{1i}(R)b_{1n}(Q) (u_{2k-2}(b_{12}(R)) - u_{2k-2}(b_{12}(R)))\\
&\quad + b_{1i}(R)b_{2n}(Q) (u_{2k-3}(b_{12}(R)) - u_{2k-3}(b_{12}(R))) \\
&\quad + b_{2i}(R)b_{1n}(Q) (u_{2k-1}(b_{12}(R)) - u_{2k-1}(b_{12}(R))) \\
&\quad + b_{2i}(R)b_{2n}(Q) (u_{2k-2}(b_{12}(R)) - u_{2k-2}(b_{12}(R))) \\
&= 0,
\end{align*}
as claimed.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:GeneralSemiCycles}]
We have $b_{ij}(Q') = b_{ij}(Q)$ for all $i<j;$ first for $i,j < n$ in Lemma~\ref{lem:SubquiverTildeQ},
then for $i=1,2$ and $j=n$ in Lemma~\ref{lem:Subquiver12n},
and finally by checking for $i>2$ and $j=n$ in Lemma~\ref{lem:bin}.
\end{proof}
\section{Distinctness}
\label{sec:distinctness}
In this section, we show that each mutation cycle described in Theorem~\ref{thm:GeneralSemiCycles}
does not visit the same quiver more than once.
\begin{definition}
\label{def:minimal-cycle}
A~mutation cycle is called \emph{minimal} if any two quivers lying on the cycle are distinct.
\end{definition}
\begin{theorem}
\label{thm:GeneralSemiCycleDistinct}
The mutation cycle in Theorem~\ref{thm:GeneralSemiCycles} is minimal.
\end{theorem}
\begin{theorem}
\label{thm:GeneralSemiCycleDistinct-noniso}
Suppose that either $n\ge 5$ or else $n=4$ and $b_{14}(Q) \neq b_{23}(R)$.
Then no two quivers on the mutation cycle described in Theorem~\ref{thm:GeneralSemiCycles}
are related by an isomorphism and/or global reversal of arrows.
\end{theorem}
Our proofs of Theorems \ref{thm:GeneralSemiCycleDistinct}--\ref{thm:GeneralSemiCycleDistinct-noniso}
will rely on the descriptions,
given in Lemmas~\ref{lem:orientations}--\ref{lem:descents} below, of
the various 3-vertex subquivers~$Q^{(j)}|_{pqr}$ of the quivers $Q^{(j)}$ (cf.\ \eqref{eq:Q123}).
These lemmas, in turn, summarize the results obtained in Lemmas~\ref{lem:R12nSubquiv}--\ref{lem:Q[3,n]Subquiv}.
\begin{definition}
For an $n$-vertex quiver $Q$ and a permutation~$\sigma$ in the symmetric group~$S_n$,
we will denote by $\sigma(Q)$ the result of relabeling the vertices of~$Q$
by~$\sigma$.
\end{definition}
For example, if $\sigma\in S_3$ is defined by $\sigma(1)=2$, $\sigma(2)=3$, $\sigma(3)=1$,
then $\sigma(R)=R'$, where $R$ and~$R'$ are the quivers from Example~\ref{eg:acyclicClasses}.
\begin{remark}
See Example~\ref{eg:4vertexSymmetric} for why the condition $b_{14}(Q) \neq b_{23}(R)$
is included in the $n=4$ case of
Theorem~\ref{thm:GeneralSemiCycleDistinct-noniso}.
This condition is guaranteed by
choosing a value $b_{14}(Q)\ge 2$ different from $b_{23}(\tilde R)$.
In fact, this restriction can be further relaxed.
Let $\sigma \in S_4$ be the permutation defined by
${\sigma(1) = 2}$, ${\sigma(2)=1}$, ${\sigma(3)=4}$, ${\sigma(4)=3}$.
Then the conclusion of
Theorem~\ref{thm:GeneralSemiCycleDistinct-noniso} holds whenever reversing the arrows of $\sigma(Q)$ gives a quiver different from~$Q^{(2k+2)}$.
\end{remark}
\begin{lemma}
\label{lem:R12nSubquiv}
Each 3-vertex quiver $Q^{(j)}|_{12n}$, for $0\le j\le 4k+n$, is mutation equivalent to an acyclic quiver with large weights.
More specifically, the descent sequences of the quivers $Q^{(j)}|_{12n}$ are determined from:
\begin{align}
\label{eq:R12nDesc}
& R|_{12n} = Q^{(2k+1)}|_{12n} \mutation{2} Q^{(2k)}|_{12n} \mutation{1} \cdots \mutation{1} Q^{(1)}|_{12n} =
{\begin{tikzcd}[arrows={-stealth}, sep=small, ampersand replacement=\&]
\scriptstyle 1 \arrow[r, dashed]
\& \scriptstyle 2
\\
\& \scriptstyle n \arrow[u, dashed] \arrow[lu, dashed]
\end{tikzcd}},
\\
\label{eq:C12n}
& Q^{(2k+1)}|_{12n} = Q^{(2k+2)}|_{12n} = \cdots = Q^{(2k+n-2)}|_{12n} ,
\\
\label{eq:Lp12nDesc}
& Q^{(2k+n-2)}|_{12n} \mutation{2} Q^{(2k+n-1)}|_{12n} \mutation{1} \cdots \mutation{1} Q^{(4k+n-2)}|_{12n} =
\begin{tikzcd}[arrows={-stealth}, sep=small, ampersand replacement=\&]
\scriptstyle 1 \arrow[r, dashed]
\& \scriptstyle 2
\\
\& \scriptstyle n \arrow[u, dashed] \arrow[lu, dashed]
\end{tikzcd}, \\
\label{eq:Top12n}
& \text{$Q^{(4k+n-1)}|_{12n}$ and~$Q^{(0)}|_{12n}=Q^{(4k+n)}|_{12n}$ are acyclic.}
\end{align}
If $Q^{(j)}|_{12n}$ is cyclic, then its descent sequence appears as a subsequence of consecutive mutations
in either \eqref{eq:R12nDesc} or~\eqref{eq:Lp12nDesc}.
(For $2k+1\le j\le 2k+n-2$, first use \eqref{eq:C12n}.)
\end{lemma}
To illustrate, $Q^{(2k)}|_{12n}$ has descent sequence $1(21)^{k-1}$.
\pagebreak[3]
\begin{proof}
Lemma~\ref{lem:Subquiver12nStart} states $Q^{(1)}|_{12n}$ is acyclic with large weights and with elbow at~$1$.
Thus Lemma~\ref{lem:3VertexAcyclicAscents-1} implies~\eqref{eq:R12nDesc}.
Now \eqref{eq:C12n} follows from Lemma~\ref{lem:Subquiver12nBottom}.
(Thus sequences of quivers~\eqref{eq:R12nDesc} and~\eqref{eq:Lp12nDesc} are identical to each other.)
By Lemma~\ref{lem:Subquiver12nStart}, vertex $1$ is a source in $Q|_{12n}$, and \eqref{eq:Top12n} follows.
\end{proof}
\begin{lemma}
\label{lem:Q12iSubquiv}
Each 3-vertex quiver $Q^{(j)}|_{12i}$, for $0\le j\le 4k+n$ and $3 \le i \le n-1$, is mutation equivalent to an acyclic quiver with large weights.
More specifically, the descent sequences of the quivers $Q^{(j)}|_{12i}$ are determined from:
\begin{align}
\label{eq:Qone12iDesc}
& Q^{(1)}|_{12i} \mutation{1} Q^{(2)}|_{12i} \mutation{2} \cdots \mutation{2} Q^{(2k+1)}|_{12i} = R|_{12i}=
\begin{tikzcd}[arrows={-stealth}, sep=small, ampersand replacement=\&]
1 \arrow[r, dashed] \arrow[rd, dashed]
\& 2 \arrow[d, dashed]
\\
\& i
\end{tikzcd},
\\
\label{eq:C12i}
& \text{$Q^{(2k+1)}|_{12i}, \ldots, Q^{(2k+n)}|_{12i}$ are acyclic,}
\\
\label{eq:Q12iDesc}
& Q^{(0)}|_{12i} \mutation{1} Q^{(4k+n-1)}|_{12i} \mutation{2} \cdots \mutation{2} Q^{(2k+n)}|_{12i} = L|_{12i}=
\begin{tikzcd}[arrows={-stealth}, sep=small, ampersand replacement=\&]
1 \arrow[r, dashed] \arrow[rd, dashed]
\& 2 \arrow[d, dashed]
\\
\& i
\end{tikzcd}.
\end{align}
If $Q^{(j)}|_{12i}$ is cyclic, then its descent sequence appears as a subsequence of consecutive mutations in either~\eqref{eq:Qone12iDesc} or~\eqref{eq:Q12iDesc}.
In particular, $Q^{(0)}|_{12i}$ has descent sequence $(12)^{k}$.
\end{lemma}
\begin{proof}
By construction, $R|_{12i}$ is acyclic with large weights and with elbow at $2$.
Thus Lemma~\ref{lem:3VertexAcyclicAscents-1} implies~\eqref{eq:Qone12iDesc}.
Now \eqref{eq:C12i} follows from Lemma~\ref{lem:Subquiver12nBottom}.
Since $n$ is a sink in $Q^{(0)},$ we have $Q^{(0)}|_{12i} = Q^{(1)}|_{12i}$ and so the sequences of quivers~\eqref{eq:Qone12iDesc} and~\eqref{eq:Q12iDesc} are identical to each other.
\end{proof}
\begin{lemma}
\label{lem:Q1inOr2inSubquiv}
Let $3\!\le i \!\le\! n-1$ and $0\!\le\! j\!\le\! 4k+n$.
Then the quivers $Q^{(j)}|_{1in}$ and~$Q^{(j)}|_{2in}$ are mutation equivalent to acyclic quivers with large weights.
More specifically,
\begin{itemize}[leftmargin=.2in]
\item
if $j\notin \{2k+n,4k+n-1\}$, then $Q^{(j)}|_{1in}$ is acyclic;
\item
if $j\!\in\! \{2k+n,4k+n-1\}$, then $Q^{(j)}|_{1in}$ is mutation acyclic with one-term descent sequence~$(1)$;
\item
if $j\notin \{2k+n-1,4k+n-2\}$, then $Q^{(j)}|_{2in}$ is acyclic;
\item
if $j\in \{2k+n-1,4k+n-2\}$, then $Q^{(j)}|_{2in}$ is mutation acyclic with one-term descent sequence~$(2)$.
\end{itemize}
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:fourChanges}, all four quivers $Q^{(2k+n)}|_{1in}$, $Q^{(4k+n-1)}|_{1in}$, $Q^{(2k+n-1)}|_{2in}$, $Q^{(4k+n-2)}|_{2in}$ are cyclic with descents at $1$, $1$, $2$, and~$2$, respectively.
We will next check that $Q^{(j)}|_{1in}$ and $Q^{(j)}|_{2in}$ are acyclic otherwise,
which will imply that these descents are actually the entire descent sequences.
Vertex $n$ is a sink or source in each of $Q|_{1in}$, $Q|_{2in}$, $Q^{(1)}|_{1in}$, and $Q^{(1)}|_{2in}$, so all four quivers are acyclic.
In $R,$ we have $1 \!\dasharrow\! i$ and $2 \!\dasharrow\! i$, while $n \!\dasharrow\! i$ by Lemma~\ref{lem:binRQ}.
Thus $R|_{1in}$ and $R|_{2in}$ are acyclic.
It then follows by Lemma~\ref{lem:Sinks} that $Q^{(j)}|_{1in}$ and $Q^{(j)}|_{2in}$ are acyclic for~${2k+1 \le j \le 2k+n-2.}$
By inspection, at most two of the four $3$-vertex subquivers of a $4$-vertex quiver can be cyclic.
By Lemmas~\ref{lem:R12nSubquiv}--\ref{lem:Q12iSubquiv}, both $Q^{(j)}|_{12i}$ and $Q^{(j)}|_{12n}$ are cyclic if ${2\le j \le2k}$ or~${2k+n+1 \le j \le 4k+n-3}$.
So $Q^{(j)}|_{12in}$ has two cyclic $3$-vertex subquivers and therefore both~$Q^{(j)}|_{1in}$ and $Q^{(j)}|_{2in}$ are acyclic if
${0 \le j \le 2k}$ or ${2k+n+1 \le j \le 4k+n-3}$.
By Lemma~\ref{lem:binRQ}, we have $n \!\dasharrow\! i$ in~$R$. After applying a sink mutation at each vertex in $[2,n-\!1],$ we find that $i \!\dasharrow\! n$ in~$Q^{(2k+n-1)}$.
Also, since $1$ is a source in $L|_{[1,n-1]}$ by Lemma~\ref{lem:SubquiverTildeQ}, $1$ is a sink in~$Q^{(2k+n-1)}|_{[1,n-1]}$.
Thus $i \!\dasharrow\! 1$ and $i \!\dasharrow\! n,$ so $Q^{(2k+n-1)}|_{1in}$ is acyclic.
Likewise, in $Q^{(4k+n-2)},$ Lemma~\ref{lem:3VertexAcyclicAscents} applied to $L|_{12i}$ implies that $i \!\dasharrow\! 1 \!\dasharrow\! 2$
while Lemma~\ref{lem:Subquiver12nStart} implies that $1$ is an elbow in $Q^{(4k+n-2)}|_{12n}$,
so~$n \!\dasharrow\! 1$. Thus $Q^{(4k+n-2)}|_{1in}$ is acyclic too.
The case of the quivers $Q^{(j)}|_{2in}$ is handled similarly.
By Lemma~\ref{lem:binRQ}, $n \!\dasharrow\! i$ in~$R$. After applying a sink mutation at each vertex in ${[3,n-1]},$ we find that $i \!\dasharrow\! n$ in $Q^{(2k+n-2)}$ and $2$ is a sink in~$Q^{(2k+n-2)}|_{[1,n-1]}$.
Thus ${i \!\dasharrow\! 2},$ so $Q^{(2k+n-2)}|_{2in}$ is acyclic.
Likewise, in $Q^{(4k+n-1)},$ Lemma~\ref{lem:3VertexAcyclicAscents} applied to $L|_{12i}$ implies that $i \!\dasharrow\! 2 \!\dasharrow\! 1$ while Lemma~\ref{lem:Subquiver12nStart} implies that $2$ is a source in $Q^{(4k+n-2)}|_{12n}$, so~$n \!\dasharrow\! 2$.
Thus $Q^{(4k+n-1)}|_{2in}$ is acyclic too.
\end{proof}
\begin{lemma}
\label{lem:Q[3,n]Subquiv}
For any~$j$, the subquivers $Q^{(j)}|_{[3,n]}$, $Q^{(j)}|_{\{1\}\cup[3,n-1]}$, and $Q^{(j)}|_{\{2\}\cup[3,n-1]}$ are acyclic,
with large weights.
\end{lemma}
\begin{proof}
{\ }
\noindent
\textbf{Case~1:} subquivers $Q^{(j)}|_{[3,n]}$.
For ${1\le j\le 4k+n-2}$,
Lemmas~\ref{lem:binRQ}, \ref{lem:Sinks}, and~\ref{lem:binUnchanged} imply that the quiver $Q^{(j)}|_{[3,n]}$ is acyclic.
As $Q^{(1)}|_{[3,n]}$ is acyclic and $n$ is a sink, $Q|_{[3,n]}$ is acyclic.
The remaining case of $Q^{(4k+n-1)}|_{[3,n]}$ follows from Lemma~\ref{lem:fourChanges}.
\noindent
\textbf{Case~2:} subquivers $Q^{(j)}|_{\{1\}\cup[3,n-1]}$ and $Q^{(j)}|_{\{2\}\cup[3,n-1]}$.
Lemmas~\ref{lem:R12nSubquiv}--\ref{lem:Q12iSubquiv} imply that
for $\ell \in[i+1,n-1]$ and ${j \in[0,2k] \cup [2k+n+1, 4k+n]}$,
both $Q^{(j)}|_{12i}$ and $Q^{(j)}|_{12\ell}$ are cyclic.
Since a $4$-vertex quiver can have at most two cyclic $3$-vertex subquivers,
both $Q^{(j)}|_{1i\ell}$ and $Q^{(j)}|_{2i\ell}$ are acyclic.
Since $R|_{[1,n-1]}$ is acyclic by construction, Lemma~\ref{lem:Sinks} implies that $Q^{(j)}|_{1i\ell}$ and $Q^{(j)}|_{2i\ell}$ are acyclic for all~$j$.
Given that we have already shown that $Q^{(j)}|_{[3,n]}$ is acyclic, it follows that $Q^{(j)}|_{[3,n-1]}$ is acyclic.
Thus both $Q^{(j)}|_{\{1\}\cup[3,n-1]}$ and~$Q^{(j)}|_{\{2\}\cup[3,n-1]}$ are acyclic.
\end{proof}
We summarize some useful consequences of Lemmas~\ref{lem:R12nSubquiv}--\ref{lem:Q[3,n]Subquiv}
in Lemmas \ref{lem:orientations}--\ref{lem:cycleIsSinksAndDescents} below.
\begin{lemma}
\label{lem:orientations}
Let $0\le j\le 4k+n$.
Any $3$-vertex subquiver of $Q^{(j)}$ is mutation equivalent to an acyclic quiver with large weights.
Such a 3-vertex subquiver is cyclic if and only if it appears on the list below (here $i\in [3,n-1]$):
\begin{itemize}[leftmargin=.2in]
\item $Q^{(j)}|_{12n}$, for $j \in [2, 4k+n-3]$;
\item $Q^{(j)}|_{12i}$, for $j \not \in [2k+1, 2k+n]$;
\item $Q^{(j)}|_{1in}$, for $j \in \{2k+n, 4k+n-1\}$;
\item $Q^{(j)}|_{2in}$, for $j \in \{2k+n-1, 4k+n-2\}$.
\end{itemize}
\end{lemma}
\begin{lemma}
\label{lem:Qj-large}
Each quiver $Q^{(j)}$ has large weights.
\end{lemma}
\begin{lemma}
\label{lem:descents}
Let $0\le j \le 4k+n$. Then:
\noindent
1. The descent sequence of each $3$-vertex subquiver of $Q^{(j)}$ consists of $1$'s and~$2$'s.
\noindent
2. Both $1$ and $2$ appear within descent sequences of $3$-vertex subquivers of~$Q^{(j)}$.
\noindent
3. Vertex $1$ is a descent of some $3$-vertex subquiver of $Q^{(j)}$ if and only if $j \!\notin\! [{2k+1}, \linebreak[3]
2k+n-2]$.
\noindent
4. Vertex $2$ is a descent of some $3$-vertex subquiver of $Q^{(j)}$ if and only if $j \not \in \{0, 1\}$.
\end{lemma}
\pagebreak[3]
\begin{lemma}
\label{lem:no2cycles}
Let $ 0 \leq j < 4k+n$.
If $\T{v}{Q^{(j)}}=\T{u}{Q^{(j)}}$, then $u=v$.
\end{lemma}
\begin{proof}
Suppose $u\ne v$. Pick a vertex $i \notin \{u,v\}$.
By Lemma~\ref{lem:orientations}, $Q^{(j)}|_{iuv}$ is mutation equivalent to an acyclic quiver with large weights.
But $\T{v}{Q^{(j)}|_{iuv}} = \T{u}{Q^{(j)}|_{iuv}}$, contradicting Lemma~\ref{lem:3VertexAcyclicAscents-1}.
\end{proof}
\begin{lemma}
\label{lem:cycleIsSinksAndDescents}
For any $0 \le j < 4k+n$ and $1 \le v \le n$, the following are equivalent:
\begin{itemize}[leftmargin=.55in]
\item[\rm(\ref{lem:cycleIsSinksAndDescents}a)]
$\T{v}{Q^{(j)}} = Q^{(j \pm 1)}$ (with the superscript $j\pm 1$ taken modulo $4k+n$);
\item[\rm(\ref{lem:cycleIsSinksAndDescents}b)]
one of the following conditions holds:
\begin{itemize}[leftmargin=.2in]
\item vertex $v$ is a sink/source in~$Q^{(j)}$;
\item vertex $v$ is a descent of some $3$-vertex subquiver of $Q^{(j)}$ and there is at most one sink/source in~$Q^{(j)}$.
\end{itemize}
\end{itemize}
\end{lemma}
\begin{proof}
The claim can be checked using Lemmas~\ref{lem:R12nSubquiv}--\ref{lem:Q[3,n]Subquiv}.
We note that by Lemma~\ref{lem:no2cycles}, $\T{v}{Q^{(j)}} = Q^{(j \pm 1)}$ implies that the mutation $Q^{(j)}\mutation{v}Q^{(j \pm 1)}$
lies on the mutation cycle in Theorem~\ref{thm:GeneralSemiCycles}
(so for example, if $j=1$, then $v=1$ or $v=n$).
\end{proof}
\begin{lemma}
\label{lem:oneSinkSource}
The only quivers $Q^{(j)}$ with exactly one sink/source vertex are
$Q$, $Q^{(1)}$, $R$, and~$Q^{(2k+n-2)}$.
\end{lemma}
\begin{proof}
Lemma~\ref{lem:Sinks} implies that each of $Q$, $Q^{(1)}$, $R$, and $Q^{(2k+n-2)}$ has either a sink or a source.
Lemma~\ref{lem:orientations} and the construction of $\tilde R$ imply that this sink/source is unique.
More explicitly:
\begin{itemize}[leftmargin=.25in]
\item[(a)]
in both $Q$ and~$Q^{(1)}$, vertex $n$ is a sink/source;
every other vertex lies in some cyclic $3$-vertex subquiver;
\item[(b)]
in $R$, vertex $n-1$ is the sink;
vertex $1$ is the source of $\tilde R=R|_{[1,n-1]}$, but is not a source in~$R$ since
$1$ is contained in the cyclic subquiver $R|_{12n}$;
\item[(c)]
the same argument applies to $Q^{(2k+n-2)}$, with $1$ and $n-1$ replaced by $2$ and~$3$, respectively.
\end{itemize}
Again by Lemmas~\ref{lem:Sinks} and~\ref{lem:orientations},
every quiver $Q^{(j)}$ other than $Q$, $Q^{(1)}$, $R$ or $Q^{(2k+n-2)}$ has either
both a sink and a source, or no sink/source.
\end{proof}
\begin{lemma}
\label{lem:Q-neq-Qj}
For any $j\in [1,4k+n-1]$, we have $Q=Q^{(0)}\neq Q^{(j)}$.
Also, $Q$ is distinct from $Q^{(j)}$ with all arrows reversed.
\end{lemma}
\begin{proof}
Quiver $Q$ has a sink at $n$ and no sources.
Therefore by Lemma~\ref{lem:oneSinkSource},
$Q$ is distinct from every quiver $Q^{(j)}$ (even allowing the reversal of all arrows)
except possibly $Q^{(1)}, R,$ and~$Q^{(2k+n-2)}$.
However, $R$ has a sink at $n-1$, whereas $Q^{(2k+n-2)}$ has a source at~$3$.
Finally, $Q^{(1)}|_{12n}$ has an elbow at~1, whereas $Q|_{12n}$ has an elbow~at~2.
\end{proof}
\begin{lemma}
\label{lem:QDistinct}
None of the quivers $Q^{(j)}$, for $1\le j\le 4k+n-1$, is isomorphic to~$Q$,
even if we allow the global reversal of arrows---unless $n=4$ and $b_{14}(Q)=b_{23}(R)$.
\end{lemma}
\begin{proof}
Any isomorphism, possibly involving a global reversal of arrows, must map sinks/sources to sinks/sources.
Likewise, it must map the descent sequence (resp., the elbow) of a $3$-vertex subquiver
to another descent sequence (resp., elbow).
Lemma~\ref{lem:oneSinkSource} implies that if the quivers $Q$ and $Q^{(j)}$ are isomorphic, possibly with a global reversal of arrows,
then the quiver $Q^{(j)}$ is equal to either $Q^{(1)}$, $R$, or~$Q^{(2k+n-2)}$.
We consider each possibility in turn.
Suppose that $\sigma\in S_n$ is a permutation such that
$\sigma(Q)$ coincides with $Q^{(1)}$ up to global reversal of arrows.
Then ${\sigma(n) = n}$, ${\sigma(1) = 1}$, and~${\sigma(2) = 2}$.
By Lemma~\ref{lem:Subquiver12nStart}, the elbow of $Q|_{12n}$ is $2$,
yet the elbow of $Q^{(1)}|_{12n}$ is $1,$ a contradiction.
Lemma~\ref{lem:orientations} implies that $Q$ has $n-3$ cyclic subquivers, whereas both
$R$ and $Q^{(2k+n-2)}$ have exactly one.
Thus if $n\geq 5$, then $Q^{(j)}$ cannot be $R$ nor~$Q^{(2k+n-2)}$.
If $n=4,$ then each of $Q$, $R$, and~$Q^{(2k+4-2)}=Q^{(2k+2)}$ has one sink/source and one cyclic subquiver.
The cyclic subquiver $Q|_{123}$ has descent $1$ while the cyclic subquiver $Q^{(2k+2)}|_{124} = R|_{124}$ has descent $2,$
and both $Q^{(2k+2)}$ and $R$ have a sink/source at~$3$.
So~the only permutation $\sigma\in S_4$ that could potentially work is given by
$\sigma(1) = 2$, $\sigma(2)=1$, $\sigma(3)=4$, and $\sigma(4)=3$.
But $2$ is an elbow in both $R|_{123}$ and $Q|_{124},$ so $Q^{(j)}$ is not $R$.
As $Q^{(2k+2)}$ has a source, $\sigma(Q)$ could only be $Q^{(2k+2)}$ after a global reversal of arrows.
But then
\[
b_{23}(R) = -b_{23}(\T{3}{R}) = b_{23}(\sigma(Q)) = b_{14}(Q),
\]
as claimed.
\end{proof}
\pagebreak[3]
\begin{lemma}
\label{lem:QAutomorphisms}
The quiver $Q=Q^{(0)}$ has no nontrivial automorphisms, even if we allow a global reversal of arrows.
\end{lemma}
\begin{proof}
The quiver $Q$ has a (unique) sink~$n$ and no sources.
Therefore an automorphism~$\sigma$ of~$Q$
must leave the sink~$n$ in~$Q$ in place: $\sigma(n)=n$.
Also, since the quiver obtained by reversing all arrows in~$Q$ has no sinks,
there are no isomorphisms between $Q$ and that quiver.
By Lemma~\ref{lem:orientations}, the only cyclic 3-vertex subquivers of~$Q=Q^{(0)}$ are
the quivers $Q|_{12i}$, for $i=3,\dots,n-1$;
their descent sequences, by Lemma~\ref{lem:Q12iSubquiv}, are of the form~$(12)^k$.
It follows that ${\sigma(1)=1}$ and~${\sigma(2)=2}$.
By Lemmas~\ref{lem:binRQ}--\ref{lem:Sinks}, the subquiver~$Q|_{[3,n-1]}$ is acyclic with orientations
\begin{equation*}
3 \!\dasharrow\! 4 \!\dasharrow\! \cdots \!\dasharrow\! n-1.
\end{equation*}
We conclude that $\sigma(i)=i$ for all~$i$.
\end{proof}
\begin{lemma}
\label{lem:isosMove}
Suppose that an isomorphism $\sigma$ sends $Q^{(\ell)}$ to $Q^{(j)}$ (or to $Q^{(j)}$ with the arrows reversed).
Then one of the following statements holds:
\begin{itemize}[leftmargin=.2in]
\item
$\sigma$ sends $Q^{(\ell-m)}$ to $Q^{(j+m)}$, for all $m \geq 0$;
\item
$\sigma$ sends $Q^{(\ell-m)}$ to $Q^{(j+m)}$, with all the arrows reversed, for all $m \geq 0$;
\item
$\sigma$ sends $Q^{(\ell-m)}$ to $Q^{(j-m)}$, for all $m \geq 0$;
\item
$\sigma$ sends $Q^{(\ell-m)}$ to $Q^{(j-m)}$, with all the arrows reversed, for all $m \geq 0$.
\end{itemize}
(Here the superscripts are taken modulo~$4k+n$.)
In particular, taking $m=\ell$, we see that $\sigma$ sends $Q$ to $Q^{(j \pm \ell)}$, possibly with all arrows reversed.
\end{lemma}
\begin{proof}
We argue by induction on $m$.
For $m=0$, there is nothing to show.
Suppose the claim is true for some~$m$, i.e., $\sigma$ sends $Q^{(\ell-m)}$ to $Q^{(j\pm m)}$,
potentially with the arrows reversed.
We will argue that the analogous statement holds for $m+1$.
Let $v_m$ be such that $Q^{(\ell-(m+1))} \mutation{v_m} Q^{(\ell-m)}$.
By Lemma~\ref{lem:cycleIsSinksAndDescents},
the vertex $v_m$ satisfies condition~(\ref{lem:cycleIsSinksAndDescents}b) with respect to the quiver~$Q^{(\ell-m)}$.
It follows that $\sigma(v_m)$ satisfies~(\ref{lem:cycleIsSinksAndDescents}b) with respect to $\sigma(Q^{(\ell-m)})$,
i.e., with respect to $Q^{(j\pm m)}$ (possibly with the arrows reversed).
Again by Lemma~\ref{lem:cycleIsSinksAndDescents}, this implies that
$Q^{(j \pm m \pm 1)} \mutation{\sigma(v_m)} Q^{(j \pm m)}$.
Therefore
\[
\sigma(Q^{(\ell-(m+1))}) = \sigma(\T{v_m}{Q^{(\ell-m)}}) = \T{\sigma(v_m)}{Q^{(j\pm m)}} = Q^{(j \pm m \pm 1)}
\]
(or the same with all arrows in $Q^{(j \pm m)}$ and $Q^{(j \pm m\pm 1)}$ reversed).
It remains to show that the signs in the superscript $j \pm m \pm 1$ must agree.
Suppose that $\sigma(Q^{(\ell-m)}) = Q^{(j-m)}$; the other cases are completely analogous.
Let $v_{m-1}$ be such that $Q^{(\ell-m)} \mutation{v_{m-1}} Q^{(\ell-(m-1))}$.
Then $v_{m-1} \neq v_m$ and therefore $\sigma(v_m) \neq \sigma(v_{m-1})$.
The claim then follows by Lemma~\ref{lem:no2cycles}.
\end{proof}
\begin{lemma}
\label{lem:noIsoFixesQ}
Let $\ell, j \in[0,4k+n-1]$.
If $\sigma(Q^{(\ell)})=Q^{(j)}$ and $\sigma(Q)=Q$, then $\ell = j$.
Also, if $\sigma$ sends $Q^{(\ell)}$ to $Q^{(j)}$ with all arrows reversed
and sends $Q$ to itself with all arrows reversed, then $\ell = j$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:isosMove} (with $m=\ell$), $\sigma$ sends $Q = Q^{(0)}$ to $Q^{(j \pm \ell)}$,
possibly with all arrows reversed.
Assume that there is no reversal of arrows, as the other cases are similar.
Then $Q=Q^{(j \pm \ell)}$.
Lemma~\ref{lem:Q-neq-Qj} implies that $j \pm \ell \equiv 0 \bmod\!(4k+n)$.
Since $\ell, j \in [0, 4k+n-1],$ we conclude that either $\ell=j$ or $j + \ell = 4k+n$,
in which case (cf.\ Lemma~\ref{lem:isosMove}, first bullet), $\sigma(Q^{(\ell-m)})=Q^{(j+m)}$.
If $\ell=j$, then we are~done. Otherwise, Lemma~\ref{lem:isosMove} (first bullet, with $m = \ell-1 = 4k+n-j-1$)
implies ${\sigma(Q^{(1)})=Q^{(4k+n-1)}}$.
Now $Q \shortmutation{n} Q^{(1)}$ implies $Q \mutation{\sigma(n)} {Q^{(4k+n-1)}}$.
But $Q \shortmutation{1} Q^{(4k+n-1)} $, so $\sigma(n)=1$ by Lemma \ref{lem:no2cycles}.
Thus $\sigma$ is a nontrivial automorphism of~$Q$, contradicting Lemma~\ref{lem:QAutomorphisms}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:GeneralSemiCycleDistinct}]
We need to show that all the quivers $Q^{(j)}$ are distinct.
Suppose ${Q^{(\ell)} = Q^{(j)}}$.
Applying Lemma~\ref{lem:noIsoFixesQ} with $\sigma=\textup{id}$, we get $\ell = j$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:GeneralSemiCycleDistinct-noniso}]
Suppose $\sigma$ sends $Q^{(\ell)}$ to $Q^{(j)}$, possibly with all arrows reversed.
By Lemma~\ref{lem:isosMove}, the isomorphism $\sigma$ sends $Q$ to $Q^{(j \pm \ell)}$,
possibly with all arrows reversed.
By Lemma~\ref{lem:QDistinct}, $j \pm \ell \equiv 0\bmod\!(4k+n)$.
Then $\ell=j$ by Lemma~\ref{lem:noIsoFixesQ}.
\end{proof}
\begin{example}
\label{eg:4vertexSymmetric}
Consider the quiver $Q$ defined by the matrix
\[
B(Q) = \begin{pmatrix}
0 & a & -(a^2-1)b -ac & c \\
-a & 0 & c + ab & b \\
(a^2-1)b +ac & -c-ab & 0 & x \\
-c & -b & -x & 0
\end{pmatrix}
\]
for some integers~$a,b,c,x\ge 2$.
This quiver $Q$ lies on the (minimal) mutation cycle from Theorem~\ref{thm:GeneralSemiCycles}, with $n=4$, $k=1$,
and $\tilde R$ being the first quiver in~\eqref{eq:RR'-3vertex}.
That~is, $Q = \T{1 2 123 21 4}{Q}$.
Let $\sigma(1)=2, \sigma(2)=1, \sigma(3)=4$ and $\sigma(4)=3$.
Then $\sigma(Q)$ coincides with $\T{3 2 1 4}{Q}$ with all arrows reversed.
In fact, each quiver $Q^{(j)}$, for $0\le j\le 4$, is isomorphic to $Q^{(4-j)}$ with all arrows reversed,
cf.\ Lemma~\ref{lem:isosMove}.
See Figure~\ref{fig:cycleCollapse}.
\end{example}
\begin{figure}
\caption{Example~\ref{eg:4vertexSymmetric}.}
\label{fig:cycleCollapse}
\end{figure}
\section{Vortices, global descents, and exits}
\label{sec:exits}
In this section, we introduce some technical tools that will be used in Sections~\ref{Sec:primitiveCycles}--\ref{sec:genericity}.
Several ideas appearing in this section were discovered much earlier, albeit in somewhat different form,
in the Ph.D.\ thesis of M.~Warkentin,
see \cite[pages 13--18]{Warkentin}. \linebreak[3]
The key distinguishing features of the approach developed below are the use of vortex-free quivers
and the notion of global descent, see Definitions~\ref{def:vortex} and~\ref{def:globalDescent}, respectively.
\begin{remark}
Instead of the aforementioned concepts, \cite{Warkentin}~utilizes the notions of a ``fork'' and a ``point of return.''
For example, per \cite[Definition~2.1]{Warkentin},
a cyclic quiver~$Q$ with large weights and vertex set $\{i,j,k\}$
such that $|b_{ij}(Q)|>\max(|b_{ik}|,|b_{jk}|)$
would be called a ``fork with point of return~$k$.''
Our Corollary~\ref{cor:weak-propagates} and Proposition~\ref{pr:weak-is-exit}
can be viewed as loose counterparts of \cite[Lemmas 2.5, 2.8, 3.5]{Warkentin},
although the exact statements and some of the proof arguments are different.
\end{remark}
The following terminology is an adaptation of one introduced by D.~Knuth~\cite[Section~4]{MR1226891}.
The papers \cite{brouwer, cameron, moon} use different terms for the same objects.
\begin{definition}
\label{def:vortex}
A \emph{vortex} is a 4-vertex quiver $Q$ such that
\begin{itemize}[leftmargin=.2in]
\item
all weights in $Q$ are nonzero;
\item
one of the vertices of $Q$ is a source or a sink;
\item
the remaining three vertices of $Q$ support a cyclic 3-vertex subquiver.
\end{itemize}
The (unique) sink/source of a vortex is called its \emph{apex}.
See Figure~\ref{fig:vortices}.
A quiver is \emph{vortex-free} if none of its (full) 4-vertex subquivers is a vortex.
\end{definition}
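The vortex condition is easy to test directly on the matrix $B(Q)$.
The following Python sketch is purely illustrative and is not used in any of the proofs;
it assumes the sign convention that $b_{ij}>0$ encodes arrows $i\to j$, and all function names are ad hoc.
\begin{verbatim}
import itertools

def same_sign(xs):
    return all(x > 0 for x in xs) or all(x < 0 for x in xs)

def is_cyclic_triangle(B, i, j, k):
    # A 3-vertex quiver with nonzero weights is cyclic iff
    # B[i][j], B[j][k], B[k][i] all have the same sign.
    return same_sign([B[i][j], B[j][k], B[k][i]])

def is_vortex(B):
    # B: 4x4 skew-symmetric integer matrix; B[i][j] > 0 means i -> j.
    verts = range(4)
    if any(B[i][j] == 0 for i, j in itertools.combinations(verts, 2)):
        return False                       # all weights must be nonzero
    for v in verts:                        # candidate apex
        rest = [w for w in verts if w != v]
        if same_sign([B[v][w] for w in rest]) and is_cyclic_triangle(B, *rest):
            return True
    return False

# Example: arrows 1 -> 2 -> 3 -> 1 and 1, 2, 3 -> 4, all of weight 2 (apex 4).
B = [[ 0,  2, -2,  2],
     [-2,  0,  2,  2],
     [ 2, -2,  0,  2],
     [-2, -2, -2,  0]]
print(is_vortex(B))                        # True
\end{verbatim}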
\begin{figure}
\caption{Four vortices with apex at vertex $4$. Weights are not shown.}
\label{fig:vortices}
\end{figure}
\begin{definition}
\label{def:globalDescent}
We say that an $n$-vertex quiver $Q$ has a \emph{global descent}
at vertex~$i$ if
\begin{itemize}[leftmargin=.2in]
\item $Q$ contains at least one cyclic $3$-vertex subquiver,
and
\item
all such subquivers have descent at~$i$.
\end{itemize}
(The first condition simply means that $Q$ is not acyclic.)
\end{definition}
In a quiver with large weights, the global descent vertex~$i$ is unique by Lemma~\ref{lem:3VertexUniqueDescents}.
(In all applications appearing in this paper, the weights are large.)
If the particular vertex~$i$ is not important, we will just say that $Q$ has a global descent.
\begin{remark}
In general, a global descent vertex~$i$ does not have to be a ``descent:''
mutating at~$i$ might increase some weights.
However, if $Q$ has large weights and is vortex-free,
then mutating at the global descent vertex does not increase any weights.
\end{remark}
\begin{lemma}
\label{lem:Q-i-is-acyclic}
Let $Q$ be a quiver on the vertex set $[1,n]$ that has global descent at~$i$.
Then the $(n-1)$-vertex subquiver $Q|_{[1,n]-\{i\}}$ is acyclic.
\end{lemma}
\begin{proof}
If $Q|_{[1,n]-\{i\}}$ contains a cyclic 3-vertex subquiver, then $i$ cannot be its descent, a contradiction.
\end{proof}
\pagebreak[3]
\begin{lemma}
\label{lem:4VertexGlobalDescentTest}
Let $Q$ be a quiver with at least 4 vertices.
Assume that $Q$ is not acyclic and has large weights. Fix a vertex~$i$.
The following are equivalent:
\begin{itemize}[leftmargin=.2in]
\item $Q$ has global descent at~$i$.
\item every $4$-vertex subquiver of $Q$ containing $i$ either has global descent at $i$ or is acyclic.
\end{itemize}
\end{lemma}
\begin{proof}
Suppose that $Q$ has global descent at~$i$. Let $Q'$ be a subquiver of~$Q$.
Every cyclic 3-vertex subquiver of~$Q'$ is a cyclic 3-vertex subquiver of~$Q$, so it must have descent~$i$.
It follows that either $Q'$ has global descent~$i$ or else is acyclic, cf.\ Definition~\ref{def:globalDescent}.
Now suppose that every $4$-vertex subquiver of $Q$ containing $i$ either has global descent at $i$ or is acyclic.
Every cyclic $3$-vertex subquiver $Q'$ of~$Q$ appears in a $4$-vertex subquiver~$Q''$ which contains~$i$.
Since $Q''$ is not acyclic, it has global descent at~$i$; thus, in particular, $Q'$ has descent~$i$.
\end{proof}
\begin{lemma}
\label{lem:strong=>weak}
Let $i$ be a vertex in a quiver $Q$ with large weights.
Suppose that the pair $(Q,i)$ satisfies the conditions
\begin{align}
\label{eq:Q-global-descent}
&\text{$Q$ has global descent at a vertex different from~$i$; \hspace{1.8in}} \\
&\label{eq:Q-vortex-free}
\text{$Q$ is vortex-free.}
\end{align}
Then $(Q,i)$ satisfies the conditions
\begin{align}
\label{eq:Q-ascent-3}
&\text{$i$ is an ascent in every cyclic 3-vertex subquiver of~$Q$ that contains it; \hspace{.35in}} \\
&\label{eq:Q-not-sink/source}
\text{$i$ is not a sink/source in~$Q$;}\\
&\label{eq:Q-not-apex}
\text{$i$ is not the apex of a vortex in~$Q$.}
\end{align}
\end{lemma}
\begin{proof}
Suppose $Q$ has global descent~$v$.
\begin{itemize}[leftmargin=.42in]
\item[\eqref{eq:Q-ascent-3}:]
Since $v$ is a global descent of~$Q$, any cyclic 3-vertex subquiver of~$Q$ has descent~$v$.
Thus, by Lemma~\ref{lem:3VertexUniqueDescents}, $i$ is an ascent of any such subquiver.
\item[\eqref{eq:Q-not-sink/source}:]
Since $Q$ has global descent~$v$, it must contain a cyclic subquiver $Q|_{uvw}$.
If $i$ were a sink/source, then $Q|_{iuvw}$ would be a vortex with apex~$i$.
\item[\eqref{eq:Q-not-apex}:]
By assumption $Q$ is vortex-free, so $i$ is not the apex of a vortex. \qedhere
\end{itemize}
\end{proof}
\begin{lemma}
\label{lem:StrictAscents}
Let $i$ and $i'$ be distinct vertices in a quiver $Q$ with large weights.
Let~$Q'\!=\!\T{i}{Q}$.
If the pair $(Q,i)$ satisfies \eqref{eq:Q-ascent-3}-\eqref{eq:Q-not-apex}, then
\begin{itemize}[leftmargin=.2in]
\item
$(Q',i')$ satisfies~\eqref{eq:Q-global-descent}, with global descent~$i$;
\item
$(Q',i')$ satisfies~\eqref{eq:Q-vortex-free};
\item
$|b_{uv}(Q)| \leq |b_{uv}(Q')|$ for all $u$ and~$v$, with at least one strict inequality.
\end{itemize}
\end{lemma}
\begin{proof}
Let $Q$ be a quiver on the vertex set $[1,n]$.
We will first prove the result for $n=1, 2, 3, 4$, then use the $n=4$ case to prove the general case.
If $n=1$ or $n=2$, the result is vacuous: every vertex is a sink/source, so \eqref{eq:Q-not-sink/source} is never satisfied.
For $n=3$, the claim is trivial, cf.\ Lemmas~\ref{lem:acyclic-descent}--\ref{lem:3VertexUniqueDescents}.
Let $n=4$ and say $i=2$.
Since $2$ is not a sink/source, we may assume, without loss of generality, that $1\!\dasharrow\! 2$, $2\!\dasharrow\! 3$, and $2\!\dasharrow\! 4$.
We may further assume that $3\!\dasharrow\! 4$.
Thus $Q|_{123}$ and $Q|_{124}$ are either cyclic or have an elbow at~$2$.
Regardless, $2$~is an ascent in both $Q|_{123}$ and $Q|_{124}$.
By Lemma~\ref{lem:acyclic-descent}, both $Q'|_{123}$ and $Q'|_{124}$ are cyclic with descent at~$2$.
We conclude that $Q'$ is oriented as follows:
\begin{equation*}
Q \quad \begin{tikzcd}[arrows={stealth-, dashed}, sep=normal, ampersand replacement=\&]
1 \arrow[d, -] \arrow[dr, -]
\& 2 \arrow[l]
\\
4 \arrow[ur] \arrow[r] \& 3 \arrow[u]
\end{tikzcd}
\mutation{2}
\begin{tikzcd}[arrows={-stealth, dashed}, sep=normal, ampersand replacement=\&]
1 \arrow[d] \arrow[dr]
\& 2 \arrow[l]
\\
4 \arrow[ur] \& 3 \arrow[u] \arrow[l]
\end{tikzcd}
\quad
Q'
\end{equation*}
In particular, $Q'$ is vortex-free and all cyclic subquivers have descent~$2$.
Finally, since $2$ is an ascent in $Q|_{123}$ and $Q|_{124}$,
we have $|b_{13}(Q)| < |b_{13}(Q')|$ and $|b_{14}(Q)| < |b_{14}(Q')|$.
Since all other weights in~$Q'$ are unchanged from~$Q$, we are done with the $n=4$ case.
\pagebreak[3]
Now suppose that $n > 4$.
Since $i$ is not a sink/source in~$Q$, let $u\!\dasharrow\! i\!\dasharrow\! v$ in~$Q$.
As $i$ is an ascent of every cyclic $3$-vertex subquiver of~$Q$ containing~$i$,
it follows that
\begin{itemize}[leftmargin=.3in]
\item[{\rm (a)}]
the weights don't decrease when we mutate from~$Q$ to~$Q'$;
moreover, at least one weight does increase: $|b_{uv}(Q)| < |b_{uv}(Q')|$;
\item[{\rm (b)}]
the subquiver $Q'|_{iuv}$ has descent at~$i$, hence $Q'$ is not acyclic.
\end{itemize}
Statement (a) above establishes the last claim in Lemma~\ref{lem:StrictAscents}.
To prove \eqref{eq:Q-global-descent}-\eqref{eq:Q-vortex-free}, we will need the following statements for all distinct vertices $u,v,w$:
\begin{itemize}[leftmargin=.3in]
\item[{\rm (c)}]
any 4-vertex subquiver $Q'|_{iuvw}$ is vortex-free;
\item[{\rm (d)}]
any 4-vertex subquiver $Q'|_{iuvw}$ is either acyclic or has global descent~$i$.
\end{itemize}
Indeed, if $i$ is not a sink/source in $Q|_{iuvw}$, then $(Q|_{iuvw}, i)$ satisfies \eqref{eq:Q-ascent-3}-\eqref{eq:Q-not-apex};
hence $Q'|_{iuvw}= \T{i}{Q|_{iuvw}}$ is vortex-free and has global descent~$i$ by the $n=4$ case.
If instead $i$ is a sink/source in~$Q|_{iuvw}$ (but not the apex of a vortex), then~$Q|_{iuvw}$ is acyclic,
so~$Q'|_{iuvw}$ is acyclic as well (in particular, not a vortex).
In either case, statements (c) and (d) follow.
Combining Lemma~\ref{lem:4VertexGlobalDescentTest} (for the quiver~$Q'$)
with statements (b) and (d) above,
we conclude that $Q'$ has global descent~$i$.
In particular, $(Q', i')$ satisfies~\eqref{eq:Q-global-descent}.
It remains to show that $Q'$ satisfies~\eqref{eq:Q-vortex-free}.
By Lemma~\ref{lem:Q-i-is-acyclic}, the subquiver~$Q'|_{[1,n]-\{i\}}$ is acyclic.
So a vortex in~$Q'$ must contain~$i$; but this is impossible by statement~(c).
\end{proof}
Lemmas~\ref{lem:strong=>weak}--\ref{lem:StrictAscents} imply that conditions \eqref{eq:Q-ascent-3}-\eqref{eq:Q-not-apex}
(resp., \eqref{eq:Q-global-descent}-\eqref{eq:Q-vortex-free})
propagate:
\begin{corollary}
\label{cor:weak-propagates}
Let $i$ and $i'$ be distinct vertices in a quiver $Q$ with large weights.
Let~$Q'=\T{i}{Q}$.
Suppose that the pair $(Q,i)$ satisfies \eqref{eq:Q-ascent-3}-\eqref{eq:Q-not-apex} (resp., \eqref{eq:Q-global-descent}-\eqref{eq:Q-vortex-free}).
Then the pair $(Q',i')$ satisfies the same conditions;
in fact, $Q'$~has global descent~$i$.
Moreover, $|b_{uv}(Q)| \leq |b_{uv}(Q')|$ for all $u$ and~$v$, with at least one strict inequality.
\end{corollary}
The following notion and its properties discussed below will be used in Section~\ref{Sec:primitiveCycles}.
\begin{definition}
\label{def:transient}
In a quiver $Q$, a vertex~$i$ is an \emph{exit} if for every sequence of vertices
$i\neq i_1\neq i_2\neq\cdots\neq i_{\ell-1}\neq i_\ell$, we have~$Q \neq \T{i_\ell \cdots i_1\,i}{Q}$.
Informally, once we mutate at~$i$, we cannot return to~$Q$.
Consequently, the edge $Q\shortmutation{i} \mu[i](Q)$ of the mutation graph does not lie on any mutation cycle.
\end{definition}
\begin{lemma}
\label{lem:exit-leads-to-tree}
Suppose that $i$ is an exit in a quiver~$Q$. Let $Q'=\mu[i](Q)$.
Consider the subgraph of the mutation graph that ``lies beyond'' the edge $Q \shortmutation{i}Q'$.
That is, take the induced subgraph of the mutation graph whose vertices correspond to all possible quivers of the form $\T{i_\ell \cdots i_1\,i}{Q}$ as above (including~$Q'$).
This subgraph is a complete rooted $(n-1)$-ary tree with root~$Q'$.
\end{lemma}
\begin{proof}
Suppose for contradiction that there are two mutation sequences, both based at~$Q$ and starting with~$i$, that lead to the same quiver.
Reversing one of the sequences, concatenating them, and removing consecutive entries equal to each other
produces a sequence $i\neq i_1\neq i_2\neq\cdots\neq i_{\ell-1}\neq i_\ell$ satisfying $Q = \T{i_\ell \cdots i_1\,i}{Q}$, a contradiction.
\end{proof}
\begin{remark}
The description given in Lemma~\ref{lem:exit-leads-to-tree} still holds
if we identify quivers up to isomorphism and global reversal of arrows,
except that the degree of each non-root vertex in a tree will be \emph{at most}~$n$.
Cf.\ \cite[Lemmas 2.7--2.8]{Warkentin}.
\end{remark}
\pagebreak[3]
\begin{proposition}
\label{pr:weak-is-exit}
If $(Q,i)$ satisfies \eqref{eq:Q-ascent-3}-\eqref{eq:Q-not-apex}, then $i$ is an exit (cf.\ Definition~\ref{def:transient}).
\end{proposition}
\begin{proof}
Let $i=i_0\neq i_1\neq i_2\neq\cdots$.
Repeatedly applying Corollary~\ref{cor:weak-propagates}, we conclude that for any~$j$,
the pair $(\T{i_j \cdots i_1\,i}{Q}, i_{j+1})$ satisfies \eqref{eq:Q-ascent-3}-\eqref{eq:Q-not-apex}
and moreover the total number of arrows in $\T{i_j \cdots i_1\,i}{Q}$ increases with~$j$.
Thus $\T{i_j \cdots i_1\,i}{Q}\neq Q$ for all~$j$, as desired.
\end{proof}
We conclude this section with some observations that will be used in Section~\ref{sec:genericity}.
\begin{lemma}
\label{lem:orientations-fixed}
Let $Q$ be a quiver with large weights and let $i$ be a vertex in~$Q$
such that the pair $(Q, i)$ satisfies conditions \eqref{eq:Q-ascent-3}--\eqref{eq:Q-not-apex}.
Let $i\,i_1\cdots i_m$ be a sequence that begins with~$i$
and satisfies $i\neq i_1\neq \cdots\neq i_m$.
Then the orientations of arrows in the quiver $\T{i_m\cdots i_1\,i}{Q}$ are uniquely determined by
the orientations of arrows in~$Q$.
\end{lemma}
That is, suppose that $Q'$ is another quiver on the same set of vertices such that \linebreak[3]
(a) $Q'$ has large weights, (b) $Q$ and $Q'$ have the same orientations of arrows,
and \linebreak[3]
(c) the pair $(Q',i)$ satisfies conditions \eqref{eq:Q-ascent-3}--\eqref{eq:Q-not-apex}.
Then the quivers $\T{i_m\cdots i_1\,i}{Q}$ and $\T{i_m\cdots i_1\,i}{Q'}$
have the same orientations of arrows.
\begin{proof}
By Corollary~\ref{cor:weak-propagates}, the quiver $\mu[i](Q)$ and the sequence $i_1\cdots i_m$ satisfy
the conditions of Lemma~\ref{lem:orientations-fixed}.
It therefore suffices to check that the orientations of arrows in $\mu[i](Q)$ are uniquely determined by
the orientations of arrows in~$Q$.
By definition, $\mu[i]$ reverses all arrows in~$Q$ that are incident to~$i$.
For any $u \!\dasharrow\! v$ in $Q$ with $u\neq i$ and $v\neq i$, we have the following cases.
\noindent
\emph{Case~1:} $Q|_{iuv}$ is acyclic.
Then $u \!\dasharrow\! v$ in $\T{i}{Q}$, unchanged from~$Q$.
\noindent
\emph{Case~2:} $Q|_{iuv}$ is cyclic.
Then, by condition \eqref{eq:Q-ascent-3},
\begin{equation*}
b_{uv}(\mu[i](Q)) = b_{uv}(Q) - b_{ui}(Q) b_{iv}(Q) < b_{uv}(Q) < |b_{uv}(\mu[i](Q))|,
\end{equation*}
implying that $v\!\dasharrow\! u$ in $\T{i}{Q}$.
\end{proof}
\begin{corollary}
\label{cor:orientations-fixed}
Let $Q$ be an acyclic quiver with large weights.
Then for any sequence $i_1\cdots i_m$,
the orientations of arrows in the quiver $\T{i_m\cdots i_1}{Q}$ are uniquely determined by
the orientations of arrows in~$Q$.
\end{corollary}
\begin{proof}
If $i_1$ is a sink/source, then $\mu[i_1](Q)$ is again acyclic.
Therefore, without loss of generality, we may assume that $i_1$ is not a sink/source.
We may moreover assume that $i_1\neq \cdots\neq i_m$.
Now Lemma~\ref{lem:orientations-fixed} applies, proving the claim.
\end{proof}
\section{Mutation cycles that cannot be paved by short cycles}
\label{Sec:primitiveCycles}
In this section, we show that the long mutation cycles constructed in Theorem~\ref{thm:GeneralSemiCycles}
cannot be paved by mutation cycles of bounded length:
\begin{theorem}
\label{thm:primitive}
Fix~$n \geq 4$ and $k\ge 1$.
Let $Q$ be an $n$-vertex quiver constructed as in Theorem~\ref{thm:GeneralSemiCycles}.
Then $Q$ does not lie on any mutation cycle of length $\le 4k+1$.
Consequently, the mutation cycle~\eqref{eq:GeneralSemiCycle}
(which has length $4k+n$) cannot be paved by mutation cycles of length $\le 4k+1$.
\end{theorem}
\begin{theorem}
\label{thm:primitive-nonisom}
If we also assume that either $n\ge 5$ or else $n=4$ and ${b_{14}(Q) \neq b_{23}(R)}$,
then the above statements remain true if we identify quivers that differ by an isomorphism and/or a reversal of all arrows.
\end{theorem}
The proofs of these theorems will rely on Lemmas~\ref{lem:QTildeGlobalDescentVF}--\ref{lem:cycle-exits} below.
We will need some notation that will be used throughout.
We will use~$L$ (instead of~$Q$) as a base point for our mutation cycle~\eqref{eq:GeneralSemiCycle}.
We will denote by~$L^{(\ell)}$ the result of applying $\ell$ mutations to~$L$, going clockwise along the cycle.
Thus $L^{(\ell)}=Q^{(n+2k+\ell)}$.
\begin{lemma}
\label{lem:QTildeGlobalDescentVF}
The subquivers $L^{(\ell)}|_{[1,n-1]}$ are vortex-free.
\end{lemma}
\begin{proof}
By Lemmas~\ref{lem:orientations}--\ref{lem:Qj-large}, the quiver $L|_{[1,n-1]}$ is acyclic, has large weights,
and its vertex~$2$ is not a sink/source.
So Lemma~\ref{lem:StrictAscents} implies that
the quiver $L^{(1)}|_{[1,n-1]} = \T{2}{L|_{[1,n-1]}}$ is vortex-free and has global descent at~$2$.
Then, by induction on~$\ell$, Corollary~\ref{cor:weak-propagates} implies that $L^{(\ell)}|_{[1,n-1]}$ is vortex-free
for~$0 < \ell \leq 2k$.
By Lemma~\ref{lem:SubquiverTildeQ}, ${L^{(\ell)}|_{[1,n-1]}=L^{(4k+1 - \ell)}|_{[1,n-1]}}$,
implying the claim for $2k<\ell\le 4k$.
For the remaining values $4k\le \ell\le 4k+n$, the claim follows from the fact
(cf.~\eqref{eq:L[n-1]=tildeL})
that the mutations at the bottom of Figure~\ref{fig:GeneralSemiCycles-notation},
when restricted to the vertex set $[1,n-1]$, are sink/source mutations,
hence they preserve the property of being vortex-free.
\end{proof}
\begin{lemma}
\label{lem:VortexFreeEll}
If $0\le \ell \le 2k-1$ or $2k+2\le \ell \leq 4k$, then $L^{(\ell)}$ is vortex-free.
If $\ell = 2k$ or $\ell=2k+1$, then $L^{(\ell)}$ has vortices, but all of them have apex~$n$.
\end{lemma}
\begin{proof}
By Lemma~\ref{lem:QTildeGlobalDescentVF}, the quiver $L^{(\ell)}|_{[1,n-1]}$ is vortex-free.
Let us now assume that $\ell \notin \{2k, 2k+1\}$.
We can check using Lemma~\ref{lem:orientations}
that in every 4-vertex subquiver of $L^{(\ell)}$ containing~$n$,
the number of cyclic 3-vertex subquivers is either 0 or~2---hence this subquiver is not a vortex.
Finally, $n$ is a sink/source in both $L^{(2k)}$ and $L^{(2k+1)}$, so $n$ is the only possible apex
for the vortices contained in these quivers.
\end{proof}
The following result shows that for a quiver $L^{(\ell)}$
not lying on the bottom rim of Figure~\ref{fig:GeneralSemiCycles-notation},
every mutation that takes us away from the mutation cycle~\eqref{eq:GeneralSemiCycle}
gives rise to a tree in the mutation graph.
\begin{lemma}
\label{lem:cycle-exits}
For all $0 < \ell \le 4k$ and every vertex $i$, either
$\T{i}{L^{(\ell)}} = L^{(\ell \pm 1)}$ or
$i$ is an exit in $L^{(\ell)}$.
\end{lemma}
\begin{proof}
Suppose that $\T{i}{L^{(\ell)}} \neq L^{(\ell \pm 1)}$.
It follows from Lemma~\ref{lem:VortexFreeEll} that $i$ is not an apex of a vortex in~$L^{(\ell)}$.
One can check, via repeated use of Lemma~\ref{lem:orientations},
that $L^{(\ell)}$ has at most one sink/source.
(Specifically, $L^{(\ell)}$ has no sinks/sources unless $\ell = 2k$ or $\ell=2k+1$,
in which case $n$ is the only sink/source.)
Combining Lemma~\ref{lem:cycleIsSinksAndDescents},
the assumption $\T{i}{L^{(\ell)}} \neq L^{(\ell \pm 1)}$, and the fact that $L^{(\ell)}$ has at most one sink/source,
we conclude that $i$ is an ascent of every cyclic $3$-vertex subquiver of~$L^{(\ell)}$ that contains~$i$.
Now Proposition~\ref{pr:weak-is-exit} implies that $i$ is an exit in $L^{(\ell)}$.
\end{proof}
\begin{remark}
It may well be that the restriction on $\ell$ in Lemma~\ref{lem:cycle-exits} is unnecessary.
The current proof of the lemma does not extend to all~$\ell$ since Proposition~\ref{pr:weak-is-exit} may not apply.
This is illustrated in Figure~\ref{fig:mutations-which-may-not-be-exits}.
\end{remark}
\begin{figure}
\caption{The cycle in Theorem~\ref{thm:GeneralSemiCycles}.}
\label{fig:mutations-which-may-not-be-exits}
\end{figure}
\begin{proof}[Proof of Theorem~\ref{thm:primitive}]
By Lemmas~\ref{lem:cycle-exits} and~\ref{lem:exit-leads-to-tree},
no mutation cycle can contain a mutation of the form
$L^{(\ell)}\shortmutation{i} L'\neq L^{(\ell\pm1)}$ for $0<\ell\le 4k$.
That is, we cannot deviate from the original mutation cycle anywhere outside the bottom rim of Figure~\ref{fig:GeneralSemiCycles-notation}.
By Theorem~\ref{thm:GeneralSemiCycleDistinct}, all the quivers $L^{(\ell)}$ are distinct,
so any mutation cycle containing $Q=L^{(2k)}$ has to contain all the quivers $L^{(\ell)}$, for $0\le \ell\le 4k+1$.
The claim follows.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:primitive-nonisom}]
We claim that no cycle in the mutation graph (where we identify quivers up to isomorphism and/or reversal of all arrows)
can go through~$Q$ and contain a mutation of the form
$L^{(\ell)}\shortmutation{i} L'\neq L^{(\ell\pm1)}$ for $0<\ell\le 4k$.
Otherwise, continuing the cycle beyond~$L'$, we would eventually reach a quiver~$Q'$ isomorphic to~$Q$
(up to global reversal of arrows).
The mutation graph contains a cycle passing through~$Q'$,
namely an appropriate version of the cycle~\eqref{eq:GeneralSemiCycle}.
On the other hand, $i$~is an exit by Lemma~\ref{lem:cycle-exits}, so the portion of the mutation graph
that lies beyond~$L'$ is a tree (by Lemma~\ref{lem:exit-leads-to-tree}), a contradiction.
We conclude that we cannot deviate from the original mutation cycle anywhere outside the bottom rim of Figure~\ref{fig:GeneralSemiCycles-notation}.
By Theorem~\ref{thm:GeneralSemiCycleDistinct-noniso}, all the quivers $L^{(\ell)}$ are pairwise non-isomorphic,
even allowing a reversal of all arrows.
Hence any cycle in the mutation graph (where we identify quivers up to isomorphism and/or reversal of all arrows)
that contains $Q=L^{(2k)}$ has to contain all the quivers $L^{(\ell)}$, for $0\le \ell\le 4k+1$.
The claim follows.
\end{proof}
\section{Genericity}
\label{sec:genericity}
We next demonstrate that the quivers
that appear in Theorem~\ref{thm:GeneralSemiCycles} are in some sense ``generic.''
Making this statement precise will require some preliminaries.
\begin{definition}
\label{def:quivers=lattice-points}
We denote by $\Quiv_n$ the set of all quivers on the vertex set $[1,n]$.
A subset $\mathbf{Q}\subset\Quiv_n$ is called a \emph{family of quivers} (on this vertex set).
Each quiver $Q\in\Quiv_n$ is encoded by a skew-symmetric matrix ${B(Q)=(b_{ij})}$, as explained in Definition~\ref{def:B(Q)}.
Restricting to the upper-triangular part $(b_{ij})_{1\le i<j\le n}$,
we can~view~$\Quiv_n$ as an ${\binom{n}{2}}$-dimensional integer lattice.
Consequently, any family $\mathbf{Q}\subset\Quiv_n$ can be viewed as a collection of lattice points.
\end{definition}
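Under this encoding, quiver mutation becomes an explicit operation on skew-symmetric integer matrices.
The following Python sketch of the standard matrix mutation rule may be used to experiment with the examples in this section;
it is purely illustrative; it assumes the sign convention that $b_{ij}>0$ encodes arrows $i\to j$, and the function name is ad hoc.
\begin{verbatim}
def mutate(B, k):
    # Standard matrix mutation at vertex k (0-indexed);
    # B is a skew-symmetric integer matrix given as a list of lists.
    n = len(B)
    Bp = [row[:] for row in B]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bp[i][j] = -B[i][j]
            else:
                Bp[i][j] = B[i][j] + (abs(B[i][k]) * B[k][j]
                                      + B[i][k] * abs(B[k][j])) // 2
    return Bp

# A 3-vertex acyclic quiver 1 -> 2, 3 -> 1, 3 -> 2 with weights a, b, c:
a, b, c = 2, 3, 4
B = [[0, a, -b], [-a, 0, -c], [b, c, 0]]
print(mutate(B, 0))   # the weight from 3 to 2 becomes c + a*b
\end{verbatim}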
\begin{definition}
\label{def:z-biregular}
Let $A$ and $B$ be two subsets of ${\mathbb Z}^d$.
A map
\begin{align*}
A&\stackrel{f}{\to} B \\[-3pt]
(a_1,\dots,a_d)&\mapsto (b_1,\dots,b_d)
\end{align*}
is called \emph{${\mathbb Z}$-biregular} if
\begin{itemize}[leftmargin=.2in]
\item
$f$ is a bijection;
\item
both $f$ and its inverse $g=f^{-1}$ are given by polynomials with integer coefficients:
\begin{align*}
b_i&=f_i(a_1,\dots,a_d)\in{\mathbb Z}[a_1,\dots,a_d], \quad i=1,\dots, d; \\
a_i&=g_i(b_1,\dots,b_d)\in{\mathbb Z}[b_1,\dots,b_d], \quad i=1,\dots, d.
\end{align*}
\end{itemize}
In light of Definition~\ref{def:quivers=lattice-points}, we can extend this notion to the cases where
either $A$ or $B$ (or both) are subsets of~$\Quiv_n\cong {\mathbb Z}^{\binom{n}{2}}$.
\end{definition}
\begin{remark}
We will typically apply the above notion to infinite subsets of~${\mathbb Z}^d$ defined by algebraic inequalities.
\end{remark}
\begin{remark}
The inverse of a ${\mathbb Z}$-biregular map (resp., a composition of ${\mathbb Z}$-biregular maps) is ${\mathbb Z}$-biregular.
Also, if $f:A\to B$ is a ${\mathbb Z}$-biregular map and $A'\subset A$, then the restriction $f|_{A'}:A'\to f(A')$
is again a ${\mathbb Z}$-biregular map.
\end{remark}
\begin{remark}
\label{rem:relabeling-regular}
The notion of ${\mathbb Z}$-biregularity, when applied to maps involving families of quivers, does not depend on the choice of vertex labeling.
Put differently, any permutation of the vertex labels of a quiver induces a ${\mathbb Z}$-biregular map.
\end{remark}
\begin{example}
\label{eg:QQ'}
Let $\mathbf{Q}$ be the following family of acyclic 3-vertex quivers:
\begin{equation*}
\mathbf{Q}=\Bigl\{
Q(a,b,c)=
\begin{tikzcd}[arrows={-stealth}, sep=small, cramped]
\scriptstyle 1 \arrow[r, "a"]
& \scriptstyle 2
\\
\scriptstyle 3 \arrow[u,"b"] \arrow[ur, swap, "c" , outer sep=-1pt] &
\end{tikzcd}
\,\Bigl| \, a,b,c\ge 0
\Bigr\}\subset\Quiv_3.
\end{equation*}
The parametrization map ${\mathbb Z}_{\ge0}^3\to\mathbf{Q}$ given by $(a,b,c)\mapsto Q(a,b,c)$ is clearly ${\mathbb Z}$-biregular.
A more interesting example is the restriction of the mutation map $\mu[1]$ to~$\mathbf{Q}$:
\begin{align*}
\mu[1]: \mathbf{Q}&\to\mathbf{Q'}=\{Q'(a,b,c')=\begin{tikzcd}[arrows={-stealth}, sep=small, cramped, ampersand replacement=\&]
\scriptstyle 1 \arrow[d, swap, "b"]
\& \scriptstyle 2 \arrow[l, swap, "a" ]
\\
\scriptstyle 3 \arrow[ur, swap, "{c'}", outer sep=-2pt] \&
\end{tikzcd} \mid a\ge 0, b\ge 0, c'\ge ab\}\subset\Quiv_3.
\end{align*}
Composing the two maps, we get the ${\mathbb Z}$-biregular parametrization ${\mathbb Z}^3_{\ge0}\to\mathbf{Q'}$ defined by
$(a,b,c)\mapsto Q'(a,b,c+ab)$.
The inverse map sends $Q'(a,b,c')\in \mathbf{Q'}$ to $(a,b,c'-ab)\in{\mathbb Z}^3_{\ge0}$.
\end{example}
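As a quick sanity check (not needed for the argument), the two parametrizations in Example~\ref{eg:QQ'} can be verified to be mutually inverse on integer points; the following sketch uses ad hoc names.
\begin{verbatim}
import random

def param(a, b, c):        # (a, b, c) in Z_{>=0}^3  |->  Q'(a, b, c + a*b)
    return (a, b, c + a * b)

def param_inv(a, b, cp):   # Q'(a, b, c') with c' >= a*b  |->  (a, b, c' - a*b)
    return (a, b, cp - a * b)

for _ in range(1000):
    p = tuple(random.randint(0, 50) for _ in range(3))
    assert param_inv(*param(*p)) == p
\end{verbatim}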
\begin{example}
\label{eg:Q1Q3}
Let $a,b,c\ge0$ and let the quiver $Q$ be as shown in~\eqref{eq:Q1Q2Q3-12} below.
Apply the mutation sequence $\mu[21]$ to~$Q$ and assume that the resulting quivers are oriented as follows:
\begin{equation}
\label{eq:Q1Q2Q3-12}
\!\!\!\!\begin{array}{ccccccc}
\begin{tikzcd}[arrows={-stealth}]
1 \arrow[r, "a"]
& 2
\\
3 \arrow[u,"b"] \arrow[ur, swap, "c" ] &
\end{tikzcd}
& &
\begin{tikzcd}[arrows={-stealth}]
1 \arrow[d, swap, "b"]
& 2 \arrow[l, swap, "a" ]
\\
3 \arrow[ur, swap, "ab+c"] &
\end{tikzcd}
& &
\!\begin{tikzcd}[arrows={-stealth}]
1 \arrow[r, "a"]
& 2 \arrow[dl, "ab+c" ]
\\
3 \arrow[u, "\!\!\!(a^2\!-1)b+ac"] &
\end{tikzcd}
\\
Q
& \!\!\!\!\!\!\mutation{1}\!\!\!\!\!\!
& Q'
& \!\!\!\!\!\!\mutation{2}\!\!\!\!\!\!
& \qquad\qquad Q''.
\end{array}
\end{equation}
In other words, we assume that
\begin{equation*}
Q\in \mathbf{Q} = \{ Q(a,b,c) \mid a,b,c, (a^2-1)b+ac\ge 0\} \subset\Quiv_3.
\end{equation*}
Alternatively, one could start with a quiver $Q''$ as in~\eqref{eq:Q1Q2Q3-12-tilde} below and apply $\mu[12]$.
Assume that the orientations are the same as in~\eqref{eq:Q1Q2Q3-12}:
\begin{equation}
\label{eq:Q1Q2Q3-12-tilde}
\!\!\!\!\begin{array}{ccccccc}
\begin{tikzcd}[arrows={-stealth}]
1 \arrow[r, "a"]
& 2
\\
3 \arrow[u,"a {c''} - {b''}"] \arrow[ur, swap, "a{b''} - ( a^2 - 1){c''}" ] &
\end{tikzcd}
& &
\begin{tikzcd}[arrows={-stealth}]
1 \arrow[d, swap, "a {c''} - {b''}"]
& 2 \arrow[l, swap, "a" ]
\\
3 \arrow[ur, swap, "{c''}"] &
\end{tikzcd}
& &
\!\begin{tikzcd}[arrows={-stealth}]
1 \arrow[r, "a"]
& 2 \arrow[dl, "{c''}" ]
\\
3 \arrow[u, "{b''}"] &
\end{tikzcd}
\\
Q
& \!\!\!\!\!\!\!\mutation{1}\!\!\!\!\!\!
& \qquad Q'
& \!\!\!\!\!\!\mutation{2}\!\!\!\!\!\!
& Q''
\end{array}
\end{equation}
In other words, we assume that
\begin{equation*}
Q''\in \mathbf{Q''} = \Bigl\{ \begin{tikzcd}[arrows={-stealth}, sep=small, cramped]
\scriptstyle 1 \arrow[r, "a"]
& \scriptstyle 2 \arrow[dl, "{c''}" , outer sep=-2]
\\
\scriptstyle 3 \arrow[u, "{b''}"] &
\end{tikzcd} \mid a,{b''},{c''}, a{c''}-{b''}, a{b''} -(a^2-1){c''} \ge 0
\Bigr\} .
\end{equation*}
The natural map $\mu[21]:\mathbf{Q}\to \mathbf{Q''}$ given by
${b''} = (a^2\!-1)b+ac$ and ${c''} = ab+c$
is ${\mathbb Z}$-biregular.
The inverse map
$\mu[12]:\mathbf{Q''}\to \mathbf{Q}$
is given by
$b=a{c''}-{b''}$, $c=a{b''} - ( a^2 - 1){c''}$.
One could also define the family $\mathbf{Q'}$ of quivers~$Q'$ that appear in mutation sequences
of the form \eqref{eq:Q1Q2Q3-12}--\eqref{eq:Q1Q2Q3-12-tilde}, with specified orientations of all quivers.
We would then get ${\mathbb Z}$-biregular maps $\mathbf{Q}\to \mathbf{Q'}\to \mathbf{Q''}$.
\end{example}
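The formulas above for $\mu[21]$ and $\mu[12]$ can likewise be checked to be mutually inverse; the following sketch (ad hoc names, purely illustrative) does so on random integer inputs.
\begin{verbatim}
import random

def mu21(a, b, c):        # (b'', c'') in terms of (a, b, c)
    return ((a * a - 1) * b + a * c, a * b + c)

def mu12(a, bpp, cpp):    # recover (b, c) from (a, b'', c'')
    return (a * cpp - bpp, a * bpp - (a * a - 1) * cpp)

for _ in range(1000):
    a, b, c = (random.randint(0, 20) for _ in range(3))
    assert mu12(a, *mu21(a, b, c)) == (b, c)
\end{verbatim}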
\begin{remark}
Generalizing Example~\ref{eg:Q1Q3}, we can see that fixing orientations of the quivers related to each other by
a sequence of successive mutations yields a ${\mathbb Z}$-biregular map
between the families of quivers defined by appropriate inequalities.
\end{remark}
Let $\mathbf{Acyc}_n$ denote the family of all $n$-vertex acyclic quivers
with standard orientation and large weights:
\begin{equation}
\label{eq:Acyc_n}
\mathbf{Acyc}_n=\{Q\in\Quiv_n\mid b_{ij}(Q)\ge 2\ \text{for all $i<j$}\}.
\end{equation}
\begin{lemma}
\label{lem:mut-acyclic-orientations}
Fix a mutation sequence $\mathbf{i}=i_1\cdots i_m$.
Then the restriction of the map $\mu[\mathbf{i}]:\Quiv_n\to\Quiv_n$ to $\mathbf{Acyc}_n$
is ${\mathbb Z}$-biregular onto its image.
\end{lemma}
\begin{proof}
By Corollary~\ref{cor:orientations-fixed}, the orientations of arrows in each quiver $\T{i_j\cdots i_1}{Q}$
do not depend on the choice of $Q\in\mathbf{Acyc}_n$.
This implies that every map
\begin{align*}
\mu[i_j]:\T{i_{j-1} \cdots i_1}{\mathbf{Acyc}_n} &\longrightarrow \T{i_j\cdots i_1}{\mathbf{Acyc}_n} \\
\T{i_{j-1}\cdots i_1}{Q} &\longmapsto\T{i_j\cdots i_1}{Q}
\end{align*}
is given by polynomials, as is its inverse.
Thus, all these maps are ${\mathbb Z}$-biregular.
Composing them all, we obtain the claim.
\end{proof}
\begin{definition}
\label{def:generic-quivers}
Fix an integer $n\ge 2$.
A family $\mathbf{Q}$ of $n$-vertex quivers is called \emph{fully generic} if
it contains a subset that is related to ${\mathbb Z}_{\ge0}^{\binom{n} 2}$ via a ${\mathbb Z}$-biregular parametrization.
\end{definition}
\begin{example}[Quivers with a given orientation]
\label{eg:fixed-orientation-generic}
For each pair of indices $i,j \in [1,n]$, fix a sign $\varepsilon_{ij}\in\{-1,1\}$.
Then the following quiver family is fully generic:
\begin{equation*}
\mathbf{Q}=\{Q\in\Quiv_n \mid b_{ij}(Q)\cdot\varepsilon_{ij}\ge 0\}.
\end{equation*}
This remains true if we replace (some of) the weak inequalities $b_{ij}(Q)\cdot\varepsilon_{ij}\ge 0$
by the strict inequalities $b_{ij}(Q)\cdot\varepsilon_{ij}> 0$,
or require $Q$ to have large weights.
In particular, the family $\mathbf{Acyc}_n$ (cf.\ \eqref{eq:Acyc_n}) is fully generic.
\end{example}
\begin{definition}
\label{def:generic-mut-cycle}
We say that a quiver family
$\mathbf{Q}\subset\Quiv_n$ and a sequence $\mathbf{i}=i_1\cdots i_k$ define a \emph{fully generic mutation cycle} if
\begin{itemize}[leftmargin=.2in]
\item
$\mathbf{Q}$ is a fully generic family of quivers;
\item
for any $Q\in\mathbf{Q}$, the sequence $\mathbf{i}$ is a mutation cycle based at~$Q$, cf.\ Definition~\ref{def:mutation cycle}.
\end{itemize}
\end{definition}
\begin{example}
\label{eg:acyclic-generic-cycle}
Each acyclic quiver $Q\in\mathbf{Acyc}_n$ lies on the mutation cycle $\mathbf{i}=12\cdots n$,
see Proposition~\ref{pr:acyclic-mutation-cycle}.
Since $\mathbf{Acyc}_n$ is a fully generic family (see Example~\ref{eg:fixed-orientation-generic}),
this mutation cycle is fully generic.
\end{example}
\begin{example}[{cf.\ \cite[Example~12.30]{Warkentin}}]
\label{eg:6-cycle-vortices}\
Figure~\ref{fig:generic-6-cycle} shows a fully generic mutation cycle of length~6 consisting
of 4-vertex quivers, all of them vortices.
\end{example}
\begin{figure}
\caption{A fully generic mutation cycle involving 6 vortices.
Here $a,b,c,d,e,f \geq 0$.}
\label{fig:generic-6-cycle}
\end{figure}
\pagebreak[3]
\begin{example}[\emph{Non-example \#1:} Mutation cycles of type~$A_2$]
\label{eg:A2-nongeneric-cycle}
Let $\mathbf{Q}=\{Q\in\Quiv_n\mid b_{12}(Q)=1\}$.
Then $\mathbf{i}=(12)^5$ is a mutation cycle, see \cite{FWZ, ca2}.
However, the family of quivers $\mathbf{Q}$ is not fully generic: the weight $b_{12}$ is constant on~$\mathbf{Q}$, whereas no coordinate can be constant on a ${\mathbb Z}$-biregular image of~${\mathbb Z}_{\ge0}^{\binom{n}{2}}$.
\end{example}
\begin{example}[\emph{Non-example \#2:} Fordy-Marsh mutation cycles]
\label{eg:fordy-marsh-nongeneric-cycle}
Mutation cycles constructed in \cite{Fordy-Marsh} are not fully generic:
the weights in the quivers that they consider need to satisfy some algebraic constraints.
\end{example}
\begin{remark}
While we will not rely on this statement, it can be shown that the property of being fully generic does not depend
on the choice of a base point on a mutation cycle.
Thus, replacing $\mathbf{Q}$ (resp., $\mathbf{i}=i_1\cdots i_k$)
by $\mu[i_1](\mathbf{Q})$ (resp., $\mathbf{i}'=i_2\cdots i_k i_1$)
produces another fully generic mutation cycle, etc.
\end{remark}
For $n\ge 4$ and~$k\ge 1$,
we denote by $\mathbf{Q}_{n,k}\subset\Quiv_n$ the family of quivers~$Q$ described in Theorem~\ref{thm:GeneralSemiCycles}.
\begin{theorem}
\label{th:long-cycle-generic}
For any $n\ge 4$ and~$k\ge 1$,
the quiver family $\mathbf{Q}_{n,k}$ is fully generic.
Consequently, the mutation cycle in Theorem~\ref{thm:GeneralSemiCycles} is fully generic.
\end{theorem}
Before proving Theorem~\ref{th:long-cycle-generic}, we illustrate it by examining its simplest instance.
\begin{example}
For $n=4$ and $k=1$, we get the mutation cycle of length~8 shown in
Figure~\ref{fig:generic-8-cycle}, with $a,b,c,d,e,f\ge2$.
The family $\mathbf{Q}_{4,1}$ consists of all 4-vertex quivers~$Q$ of the form
shown in the upper-left corner of Figure~\ref{fig:generic-8-cycle}.
To see that this family is fully generic, we will verify that the map $(a,b,c,d,e,f)\mapsto Q$
gives a ${\mathbb Z}$-biregular parametrization of $\mathbf{Q}_{4,1}$ by the lattice points in~${\mathbb Z}_{\ge2}^6$.
This amounts to checking that the inverse map $(b_{ij})\mapsto (a,b,c,d,e,f)$,
where $(b_{ij})=B(Q)$ (see Definition~\ref{def:B(Q)}), is given by polynomials over~${\mathbb Z}$.
It is straightforward to check that, indeed,
\begin{align*}
a&=b_{12}, \quad \\
b&=-b_{23}+b_{12}b_{31}, \quad \\
c&=b_{34},\quad \\
d&=b_{31}+b_{12}b_{23}-b_{12}^2b_{31}, \quad \\
e&=b_{24}, \quad \\
f&=b_{14}.
\end{align*}
\end{example}
\begin{proof}[Proof of Theorem~\ref{th:long-cycle-generic}]
We will prove this theorem by constructing a ${\mathbb Z}$-biregular parametrization
${\mathbb Z}_{\ge2}^{\binom{n}{2}} \to \mathbf{Q}_{n,k}$.
The parameters of this parametrization are:
\begin{itemize}[leftmargin=.2in]
\item
the weights $b_{ij}(R)$ ($1\le i< j\le n-1$) of the subquiver~$\tilde R$, and
\item
the numbers $b_{in}(Q)$ for $i\in [1,n-1]$.
\end{itemize}
(These are precisely the parameters $q_{ij}$ in Theorem~\ref{thm:Summary}.)
By Lemma~\ref{lem:mut-acyclic-orientations}, the map
\begin{align*}
\mu[(12)^k]: \mathbf{Acyc}_{n-1} &\longrightarrow \mu[(12)^k](\mathbf{Acyc}_{n-1}) \\
\tilde R &\longmapsto Q|_{[1,n-1]}
\end{align*}
is ${\mathbb Z}$-biregular.
Adding the $n-1$ parameters $b_{in}(Q)$, we obtain the claim.
\end{proof}
\pagebreak[3]
It is not hard to show that for $n\ge5$ and $k\ge1$,
different choices of $\binom{n}{2}$ parameters in Theorem~\ref{thm:Summary}/\ref{thm:GeneralSemiCycles}
produce mutation cycles that are different
from each other even if we allow rotating the cycle, reversing the direction along it,
treating quivers up to isomorphism, and/or global reversal of arrows.
The $n=4$ case is a bit different:
\begin{proposition}
\label{pr:double-cycle}
Let $n=4$ and $k\ge1$.
Let $Q=Q(a,b,c,d,e,f)$ denote the quiver constructed as in Theorem~\ref{thm:GeneralSemiCycles},
with parameters
\begin{equation*}
\begin{array}{lll}
b_{12}(\tilde R)\!=\!a, \quad & b_{13}(\tilde R)\!=\!d, \quad & b_{14}(Q)\!=\!f, \\[2pt]
& b_{23}(\tilde R)\!=\!b, \quad & b_{24}(Q)\!=\!e, \\[2pt]
& & b_{34}(Q)\!=\!c.
\end{array}
\end{equation*}
Let $\mathcal{C}(a,b,c,d,e,f)$ be the mutation cycle~\eqref{eq:GeneralSemiCycle} based at $Q(a,b,c,d,e,f)$, cf.\ Figure~\ref{fig:generic-8-cycle}.
Transform this mutation cycle as follows:
\begin{itemize}[leftmargin=.2in]
\item[-]
Move the base point to the opposite quiver~$Q^{(2k+2)}$ (cf.\ Figure~\ref{fig:GeneralSemiCycles-notation}).
\item[-]
Reverse the arrows of all quivers.
\item[-]
Relabel their vertices using the permutation $1\leftrightarrow2$, $3\leftrightarrow4$.
\item[-]
Relabel the mutations in the same way.
\item[-]
Reverse the direction along the cycle from clockwise to counterclockwise.
\end{itemize}
Then the resulting mutation cycle is $\mathcal{C}(a,f,c,e,d,b)$.
\end{proposition}
We omit the proof, which is a straightforward application of Proposition~\ref{pr:3VertexAlternatingMutations}.
\begin{example}
Consider the case $k=1$ of Proposition~\ref{pr:double-cycle}.
In this case, the quiver $Q^{(2k+2)}=Q^{(4)}$ appears in the bottom-right corner of Figure~\ref{fig:generic-8-cycle}.
Changing the labels and reversing all arrows produces the following quiver:
\begin{equation*}
\qquad
\begin{tikzcd}[arrows={-stealth}, sep=6em]
1 \arrow[r, "a"]
& 2 \arrow[d, "fa^2-f+ae"]
\\
4 \arrow[ur, "d", swap, bend left=12, near start, outer sep=-1.8, stealth-] \arrow[u, "b", stealth-] \arrow[r, "c", swap, stealth-]
& 3 \arrow[ul, "e+af", swap, bend left=12, very near end, outer sep=-1.8]
\end{tikzcd}
\end{equation*}
Swapping $b\leftrightarrow f$ and $d\leftrightarrow e$ everywhere recovers the original quiver~$Q$
shown in the upper-left corner of Figure~\ref{fig:generic-8-cycle}, in agreement with Proposition~\ref{pr:double-cycle}.
In other words, all our theorems about~$Q$ apply to~$Q^{(4)}$ as well, after a suitable relabeling.
\end{example}
\begin{theorem}
For $n=4$ and $k \geq 1,$ let $Q$ be a quiver constructed as in Theorem~\ref{thm:GeneralSemiCycles}. The mutation cycle described in Theorem~\ref{thm:GeneralSemiCycles} is the unique minimal mutation cycle in the mutation class of~$Q$.
\end{theorem}
\begin{proof}
The minimality of this mutation cycle follows from Theorems~\ref{thm:GeneralSemiCycles} and~\ref{thm:GeneralSemiCycleDistinct}.
Let us show that every mutation away from the cycle is an exit.
That is, we claim that for every $0 \leq j < 4k+4$ and every vertex $v \in [1,4]$,
either $\T{v}{Q^{(j)}} = Q^{(j\pm1)}$ or $Q^{(j)}$ has exit $v$.
By Lemma~\ref{lem:cycle-exits} applied to $Q$ (respectively $Q^{(2k+2)}$), the claim holds for $2k+4 < j \leq 4k+3$ and $0 \leq j \leq 2k$ (respectively $2 < j \leq 4k+2$).
Since $k \ge 1$, the claim holds for all $j$.
\end{proof}
\section{No cycles in exchange graphs}
\label{sec:no-seed-cycles}
It is natural to ask whether any of the mutation cycles discussed above give rise to cycles of seed mutations
(equivalently, cycles in the exchange graph of an associated cluster algebra).
It turns out that they do not:
\begin{theorem}
\label{th:no-seed-cycle}
Let $C$ be a mutation cycle of $n$-vertex quivers ($n\geq 2$) such that all quivers along the cycle have large weights.
Then $C$ does not underlie a mutation cycle of seeds in the associated cluster algebra (with arbitrary coefficients).
In particular, the mutation cycle in Theorem~\ref{thm:GeneralSemiCycles} is never a mutation cycle of seeds.
\end{theorem}
\begin{proof}
We replicate the ``tropical degeneration'' approach used in the proof of \cite[Theorem~5.1.1]{FWZ} and in many other places.
Roughly, the idea is to track the degrees of cluster variables with respect to a particular initial variable,
verifying that these degrees increase strictly as we go around the cycle.
It is enough to consider the cluster algebra~$\mathcal{A}$ with trivial coefficients, as the general case will follow.
Assume that our cycle $C$ is based at a quiver $Q$, with mutation sequence $i_1 i_2\cdots$:
\begin{equation*}
Q=Q^{(0)} \mutation{i_1}
Q^{(1)} \mutation{i_2}
Q^{(2)} \mutation{i_3}
\cdots
.
\end{equation*}
Let $(Q,{\mathbf{x}})$ be the initial seed in~$\mathcal{A}$, with ${\mathbf{x}}\!=\!(x_1,\dots,x_n)$ the initial cluster.
Let $(Q^{(\ell)},{\mathbf{x}}^{(\ell)})$ be the seed obtained after $\ell$ mutations as above, with ${\mathbf{x}}^{(\ell)}=(x^{(\ell)}_1,\dots,x^{(\ell)}_n)$.
Then every $x^{(\ell)}_i$ is a Laurent polynomial in $x_1,\dots,x_n$.
Assume, without loss of generality, that $i_1 \neq 1$.
Let $\deg_1$ denote the degree with respect to the variable~$x_1$.
We will argue by induction on $\ell$ that $\deg_1(x_{i_{\ell+1}}^{(\ell +1)}) > \deg_1(x_j^{(\ell)}) \geq 0$ for all~$j$.
This will imply that, as the degrees of cluster variables keep increasing, there can be no cycle in the exchange graph.
\emph{Base:} $\ell = 0$.
Since $i_1 \neq 1$, we have
$x_{i_1}^{(1)} = \frac{ x_1^{b}M + N}{ x_{i_1}}$,
where $b=|b_{1i_1}(Q)|\ge 2$
and $M$ and $N$ are monomials in $x_2, \ldots, x_n$.
Therefore
\begin{equation*}
\deg_1(x_{i_1}^{(1)}) = b > 1 = \deg_1(x_1^{(0)}) \geq \deg_1(x_j^{(0)})
\end{equation*}
for all $j$.
\emph{Induction step.}
Suppose that $\deg_1(x_{i_{\ell}}^{(\ell)}) > \deg_1(x_j^{(\ell-1)}) \geq 0$.
Since $x_{j}^{(\ell)} = x_j^{(\ell-1)}$ for $j \neq i_{\ell}$, we conclude that $\deg_1(x_j^{(\ell)}) \geq 0$ for all~$j$.
Since $i_{\ell+1} \neq i_{\ell}$, it follows that
\begin{equation*}
x_{i_{\ell+1}}^{(\ell+1)} = \frac{ (x_{i_{\ell}}^{(\ell)})^b\,M + N}{ x_{i_{\ell+1}}^{(\ell)}},
\end{equation*}
where $b=|b_{i_{\ell}i_{\ell+1}}(Q^{(\ell)})|\ge2$ and $M$ and $N$ are monomials in $x_1^{(\ell)}, \ldots, x_n^{(\ell)}$.
Since $\deg_1(x_j^{(\ell)}) \geq 0$ for all $j$, we have $\deg_1(M)\ge0$ and $\deg_1(N) \geq 0$.
Consequently
\begin{align*}
\deg_1(x_{i_{\ell+1}}^{(\ell+1)})
&\geq b \deg_1(x_{i_{\ell}}^{(\ell)}) - \deg_1(x_{i_{\ell+1}}^{(\ell)}) \\
&=b \deg_1(x_{i_{\ell}}^{(\ell)}) - \deg_1(x_{i_{\ell+1}}^{(\ell-1)})
> \deg_1(x_{i_{\ell}}^{(\ell)})
\geq \deg_1(x_j^{(\ell)})
\end{align*}
for all $j$, as desired.
\end{proof}
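The following example, included only as an illustration, traces the degree growth in the smallest case.
\begin{example}
Let $n=2$ and $b_{12}(Q)=b\ge 2$, and mutate alternately at $2,1,2,1,\dots$ (so that $i_1\neq1$).
The exchange relations give
\begin{equation*}
x_2^{(1)}=\frac{x_1^{b}+1}{x_2},\qquad
x_1^{(2)}=\frac{(x_2^{(1)})^{b}+1}{x_1},\qquad\dots,
\end{equation*}
so the degrees $d_\ell=\deg_1(x_{i_\ell}^{(\ell)})$ of the newly created variables satisfy
$d_1=b$, $d_2=b^2-1$, and in general $d_{\ell+1}=b\,d_\ell-d_{\ell-1}$
(with the conventions $d_{-1}=\deg_1(x_2)=0$ and $d_0=\deg_1(x_1)=1$).
Since $b\ge2$, this sequence is strictly increasing, in agreement with the inductive estimate above.
\end{example}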
\section{A gallery of mutation cycles}
\label{sec:moreCycles}
In this section, we discuss several additional families of mutation cycles.
\begin{example}
\label{eg:rosette}
Figure~\ref{fig:generic-12-cycle-cyclic} shows a fully generic mutation cycle of length~12.
Let $\mathbf{C}=\mathbf{C}(a,b,c,d,e,f)$ denote the mutation class containing this mutation cycle.
It can be shown that (a) this mutation cycle is unique within~$\mathbf{C}$,
(b) none of the quivers in $\mathbf{C}$ has a sink/source (in particular, none is acyclic), but
(c) every proper subquiver of every quiver in~$\mathbf{C}$ is mutation-acyclic.
\end{example}
\begin{figure}
\caption{
Fully generic mutation cycle of length~12 involving 4-vertex quivers.
The integer parameters $a,b,c,d,e,f$ can be chosen as follows.
First pick any $a,b,c,f\ge 2$.
Then choose $d\ge\max(2, 2-ab+cf)$ and $e\ge\max(2,2+af-bc)$.}
\label{fig:generic-12-cycle-cyclic}
\end{figure}
\begin{example}
\label{eg:rosette-2}
Figure~\ref{fig:generic-12-cycle-fractured-cyclic} shows another fully generic mutation cycle of length~12.
Properties (a)--(c) from Example~\ref{eg:rosette} hold in this example as well.
\end{example}
\newcommand{-stealth}{-stealth}
\begin{figure}
\caption{
Fully generic mutation cycle of length~12 involving 4-vertex quivers. \linebreak[3]
The integer parameters $a,b,c,d,e,f$ can be chosen as follows.
Begin by picking any ${b,d,e,f \geq 2}
\label{fig:generic-12-cycle-fractured-cyclic}
\end{figure}
\begin{example}
\label{eg:big-horseshoe}
Figure~\ref{fig:generic-10-cycle-allmutations} shows a fully generic mutation cycle of length~10.
Like in our construction from Theorem~\ref{thm:GeneralSemiCycles}, there are two quivers with a sink/source at $n=4$ on top, identical sequences of mutations on the left and right sides,
and mutations at $3,2,1$ along the bottom rim (though their order is reversed).
Unlike in our main construction, the bottom rim has no sink/source mutations.
Surprisingly, the weights still line up, yielding another generic cycle.
This mutation cycle again satisfies properties (a) and (c) (but not (b)) from Example~\ref{eg:rosette}.
\end{example}
\begin{figure}
\caption{
Fully generic mutation cycle of length~10 involving 4-vertex quivers. \linebreak[3]
The integer parameters $a,b,c,d,e,f \geq 2$ can be chosen arbitrarily.
For ease of notation, define the (positive) weights $\overline{b}
\label{fig:generic-10-cycle-allmutations}
\end{figure}
\begin{example}
\label{eg:7-cycle}
Figure~\ref{fig:7-cycle} shows a 5-vertex quiver that lies on a fully generic mutation cycle of length~7.
\end{example}
\begin{figure}
\caption{
A 5-vertex quiver lying on a fully generic mutation cycle of length~7.
Apply mutations at $1, 5, 2, 3, 5, 4, 5$, in this order, to return to the original quiver.
}
\label{fig:7-cycle}
\end{figure}
\begin{remark}
The constructions in Examples~\ref{eg:6-cycle-vortices} and~\ref{eg:7-cycle} are special cases (for $n=4$ and $n=5,$ respectively) of a general construction of a mutation cycle of $n$-vertex quivers. We next sketch this construction.
Start with $n-1$ integers $q_{in} \ge 2$, for $i=1,\dots,n-1$.
Pick integers $a$ and~$b$ satisfying $1 \!<\! a \!<\! b \!< \!n\!-\!1$.
Create an acyclic quiver $\tilde Q$ on the vertices $1,\dots,n\!-\!1$, with standard orientation and very large weights,
say, bigger than any product $q_{in} q_{jn}$.
Construct the quiver $Q$ as follows.
Set $Q|_{[1,n-1]} = \tilde Q$.
Set $n\stackrel{q_{in}}{\longrightarrow} i$ if $a<i\le b$; set $i\stackrel{q_{in}}{\longrightarrow} n$ otherwise.
Then quiver~$Q$ lies on the following mutation cycle:
\[
\T{n (n-1) \cdots (b+1) n b \cdots (a+1) n a \cdots 1}{Q} = Q.
\]
In this construction, every mutation $\mu[i]$ with $i \neq n$ is a sink/source mutation, whereas the mutations at $n$ are not.
\end{remark}
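The cycles above can be checked mechanically at the level of exchange matrices.
The following sketch implements the standard matrix mutation rule for
skew-symmetric integer matrices; applying it along a proposed mutation sequence
and comparing the result with the initial matrix verifies a cycle. The matrix
$B$ in the sanity check is an arbitrary example, not one of the quivers above.
\begin{verbatim}
import numpy as np

def mutate(B, k):
    """Matrix mutation of a skew-symmetric integer matrix B at index k (0-based)."""
    Bnew = B.copy()
    n = B.shape[0]
    for i in range(n):
        for j in range(n):
            if i == k or j == k:
                Bnew[i, j] = -B[i, j]
            else:
                Bnew[i, j] = B[i, j] + (abs(B[i, k]) * B[k, j]
                                        + B[i, k] * abs(B[k, j])) // 2
    return Bnew

def apply_sequence(B, seq):
    for k in seq:
        B = mutate(B, k)
    return B

# Sanity check: mutation is an involution, mu_k(mu_k(B)) = B.
B = np.array([[0, 2, -3], [-2, 0, 2], [3, -2, 0]])
assert np.array_equal(apply_sequence(B, [0, 0]), B)
\end{verbatim}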
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}
}
\providecommand{\href}[2]{#2}
\end{document} |
\begin{document}
\title{Optimal quantum control of Bose-Einstein condensates in
magnetic microtraps: Comparison of GRAPE and Krotov optimization
schemes}
\author{Georg J\"ager}
\affiliation{Institut f\"ur Physik, Karl-Franzens-Universit\"at Graz,
Universit\"atsplatz 5, 8010 Graz, Austria}
\author{Daniel M. Reich}
\author{Michael H. Goerz}
\author{Christiane P. Koch}
\affiliation{Theoretische Physik, Universit\"at Kassel,
Heinrich-Plett-Str. 40, 34132 Kassel, Germany}
\author{Ulrich Hohenester}
\affiliation{Institut f\"ur Physik, Karl-Franzens-Universit\"at Graz,
Universit\"atsplatz 5, 8010 Graz, Austria}
\date{September 11, 2014}
\begin{abstract}
We study optimal quantum control of the dynamics of trapped Bose-Einstein
condensates: The targets are to split a condensate, residing initially in
a single well, into a double well, without inducing excitation; and to excite
a condensate from the ground to the first excited state of a single well. The
condensate is described in the mean-field approximation of the Gross-Pitaevskii
equation. We compare two optimization approaches in terms of their performance
and ease of use, namely gradient ascent pulse engineering (GRAPE) and Krotov's method. Both approaches are derived from the variational principle but
differ in the way the control is updated, additional costs are accounted for,
and second order derivative information can be included.
We find that GRAPE produces smoother control fields and works in a black-box manner, whereas Krotov with a suitably chosen step size parameter converges faster but can produce sharp features in the control fields.
\end{abstract}
\pacs{03.75.-b,39.20.+q,39.25.+k,02.60.Pn}
\maketitle
\section{Introduction}
Controlling complex quantum dynamics is a recurring theme in many
different areas of AMO physics and physical chemistry. Recent examples
include quantum state preparation~\cite{SayrinNat11,buecker:11},
interferometry~\cite{vanfrank:14} and
imaging~\cite{LapertSciRep12,HaeberlePRL13},
or reaction control~\cite{RybakPRL11,GonzalezPRA12}. The central idea
of quantum control is to employ external fields to steer the dynamics
in a desired way~\cite{RiceBook,ShapiroBook}. The fields that
realize the desired dynamics can be determined by optimal
control theory (OCT) \cite{Tannor92,WerschnikJPB07}.
An expectation value that encodes the target is then taken to be a
functional of the external field which is minimized or maximized.
The target can be simply a desired final state~\cite{Tannor92}, or a
unitary operator~\cite{JosePRL02}, a
prescribed value of energy or position~\cite{DoriaPRL11}, or an
experimental signal such as a pump-probe trace~\cite{KaiserJCP04}.
The algorithms that can be employed for optimizing the target
functional broadly
fall into two categories -- those where changes in the field are
determined solely by evaluating the functional, such as simplex
algorithms~\cite{DoriaPRL11,CanevaPRA11},
and those that utilize derivative information, such as Krotov's
method~\cite{Konnov99,sklarz:02} or
gradient ascent pulse engineering (GRAPE)~\cite{KhanejaJMR05}, possibly
combined with quasi-Newton methods~\cite{MachnesPRA11,eitan:12}.
The solutions that one obtains typically do not only depend on
the target functional but also on the specific algorithm that is
employed and the initial guess field. This is due to the fact that
numerical optimization is always a local search which may find one of
possibly many optimal solutions or get stuck in a local
extremum. It is thus important to understand which features of an
optimal control solution are due to the optimization procedure and
which reflect truly physical properties of the quantum system.
For example, when seeking to identify, by use of optimal control
theory, the quantum
speed limit, i.e., the shortest possible time in which a quantum
operation can be carried out~\cite{CanevaPRL09}, the answer should be
independent of the algorithm. Moreover, in view of employing
calculated solutions in an experiment, conditions such as limited
power, limited time resolution, or limited bandwidth need to be
met. The way in which the various optimization approaches can
accommodate such requirements differs greatly.
Here, we study control of a Bose-Einstein condensate in a magnetic
microtrap, comparing several variants of a GRAPE-type
algorithm~\cite{hohenester.pra:07,hohenester.cpc:14a}
with Krotov's method~\cite{Konnov99,sklarz:02,reich:12}. We consider
two targets --
splitting the condensate, which resides initially in the ground state of
a single well, into a double well, without inducing excitation, and
exciting the condensate from the ground to the first excited state of
a single well. The latter is important for stimulated processes in
matter waves, whereas the former presents a crucial step in
interferometry~\cite{shin:04,schumm:05,grond.njp:10}. A challenging
aspect of controlling a condensate is the non-linearity of the equation of
motion which can compromise or even prevent convergence of the
optimization~\cite{sklarz:02}. The two methods tackle this problem in
different ways, GRAPE by computing the search direction for new control fields
within the framework of Lagrange parameters and submitting the optimal control
search to generic minimization routines~\cite{peirce:88,hohenester.pra:07},
Krotov's method by accounting for the non-linearity of the equations of
motion in the monotonicity conditions when constructing the algorithm~\cite{Konnov99,sklarz:02,reich:12}.
Furthermore, the methods differ in the way in which additional
requirements such as smoothness of the control can be accounted
for. We compare the two optimization approaches with respect to the
solutions they yield as well as their
performance, and ease of use. Our study extends an earlier comparison of GRAPE-type algorithms with Krotov's method~\cite{MachnesPRA11} that was concerned with the linear
Schr\"odinger equation and with finite-size (spin-type) quantum systems.
Our paper is organized as follows:
After introducing the equation of motion for the condensate dynamics
together with the control targets in
Sec.~\ref{sec:theory}, we briefly review the two optimization schemes in
Sec.~\ref{sec:octmethods}. Section~\ref{sec:results}
presents our results for wavefunction splitting and shaking. Moreover,
we investigate the influence of the nonlinearity,
the performance of the two algorithms, and the smoothness of the optimized
control in Secs.~\ref{subsec:nonlin}
to~\ref{subsec:smooth}. Our conclusions are presented in
Sec.~\ref{sec:conclusions}.
\section{Model and Optimization Problem}
\label{sec:theory}
In this paper we consider a quasi-1D condensate residing in a magnetic
confinement potential $V(x,\lambda(t))$ that can be controlled by some external
\textit{control parameter} $\lambda(t)$
\cite{hohenester.pra:07,buecker:11,buecker:13,hohenester.cpc:14a}. We describe
the condensate dynamics within the mean-field framework of the Gross-Pitaevskii
equation, where $\psi(x,t)$ is the condensate wavefunction, normalized to one, whose time evolution
is governed by \cite{leggett:01} ($\hbar=1$)
\begin{equation}\label{eq:gp}
i\frac{\partial\psi(x,t)}{\partial t}=
\Bigl(-\frac{1}{2M}\frac{\partial^2}{\partial x^2} + V(x,\lambda(t)) +
\kappa\bigl|\psi(x,t)\bigr|^2\Bigr)\psi(x,t)\,.
\end{equation}
The first term on the right-hand side is the operator for the kinetic
energy, the second one is the confinement potential, and the last term
is the nonlinear atom-atom interaction in the mean field
approximation. $M$ is the atom mass and $\kappa$ is the strength of
the nonlinear atom-atom interactions, which is related to the effective one-dimensional interaction strength $U_0$ and the number of atoms $N$ through $\kappa=U_0(N-1)$ \cite{grond.pra:09b}.
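For reference, a single time step of the standard split-step Fourier method for
Eq.~\eqref{eq:gp} can be sketched as follows. This is an illustrative sketch and
not the OCTBEC implementation used below; the potential array \texttt{V} must be
supplied by the chosen trap model.
\begin{verbatim}
import numpy as np

def gpe_step(psi, x, V, dt, M, kappa):
    """One split-step Fourier step of the 1D Gross-Pitaevskii equation (hbar = 1).

    psi: complex wavefunction on the grid x; V: trap potential V(x, lambda(t))
    evaluated on the same grid (model dependent, supplied by the caller)."""
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=dx)
    half_kinetic = np.exp(-1j * k**2 / (2 * M) * dt / 2)
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))            # kinetic half step
    psi = psi * np.exp(-1j * (V + kappa * np.abs(psi)**2) * dt)  # potential + mean field
    psi = np.fft.ifft(half_kinetic * np.fft.fft(psi))            # kinetic half step
    return psi
\end{verbatim}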
We can now formulate our optimal control problem. Suppose that the
condensate is initially described by the wavefunction
$\psi(x,0)=\psi_0(x)$ and the potential is varied in the time interval
$[0,T]$. We now seek an optimal time variation of
$\lambda(t)$ that brings the terminal wavefunction $\psi(x,T)$ as
close as possible to a \textit{desired} wavefunction $\psi_d(x)$. To
quantify the success of a given control, we introduce the cost function
\begin{equation}\label{eq:statecost}
J_T(\psi(T))=\frac 12\left[1-\left| \left< \psi_d |\psi(T)\right> \right |^2 \right]\,,
\end{equation}
which becomes zero when the terminal wavefunction matches the desired
one up to an arbitrary phase. Optimal control theory aims at a
$\lambda_{\rm OCT}(t)$ that minimizes Eq.~\eqref{eq:statecost}.
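On a spatial grid, the terminal cost of Eq.~\eqref{eq:statecost} is a one-liner;
a minimal sketch, assuming both wavefunctions are sampled on the same grid with
spacing \texttt{dx}:
\begin{verbatim}
import numpy as np

def terminal_cost(psi_T, psi_d, dx):
    """J_T = 0.5*(1 - |<psi_d|psi(T)>|^2) with a Riemann-sum inner product."""
    overlap = np.sum(np.conj(psi_d) * psi_T) * dx
    return 0.5 * (1.0 - np.abs(overlap)**2)
\end{verbatim}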
\section{Optimization Methods}
\label{sec:octmethods}
\begin{table*}
\centering
\begin{ruledtabular}
\begin{tabular}{lccccccl}
Algorithm & line search & free parameter & deriv & penalty
& penalty equation & update & update equation \\
\colrule
GRAPE grad L2 & yes & $\gamma$ & 1 & $\dot\lambda^2$ & Eq.~\eqref{eq:cost} & concurrent & Eq.~\eqref{eq:L2} \\
GRAPE grad H1 & yes & $\gamma$ & 1 & $\dot\lambda^2$ & Eq.~\eqref{eq:cost} & concurrent & Eq.~\eqref{eq:H1} \\
GRAPE BFGS L2 & yes & $\gamma$ & 2 & $\dot\lambda^2$ & Eq.~\eqref{eq:cost} & concurrent & Eq.~\eqref{eq:L2} \\
GRAPE BFGS H1 & yes & $\gamma$ & 2 & $\dot\lambda^2$ & Eq.~\eqref{eq:cost} & concurrent & Eq.~\eqref{eq:H1} \\
Krotov & no & $k$ & 1 &
$(\Delta\lambda)^2$ & Eq.~\eqref{eq:controlkrotov} & sequential & Eq.~\eqref{eq:controlkrotov2} \\
KBFGS & no & $k$ & 2 &
$(\Delta\lambda)^2$ & Eq.~\eqref{eq:controlkrotov} & sequential & Eq.~(38) of Ref.~\cite{eitan:12} \\
\end{tabular}
\end{ruledtabular}
\caption{Optimization approaches used in this paper. For each algorithm, we
specify whether a line search is used; what free parameter is
available to influence the optimization; the order of the derivative for the determination of the new control parameter; the
penalty term that is added to Eq.~\eqref{eq:statecost}, with
$\Delta \lambda = \lambda - \lambda_{\rm ref}$; the equation for the cost function; the type of the update of the control; and the update equation used in our simulations.
} \label{tab:algos}
\end{table*}
In this paper, we apply two different optimal control approaches, namely
a gradient ascent pulse engineering (GRAPE) scheme~\cite{KhanejaJMR05} and
Krotov's method~\cite{Konnov99,sklarz:02,reich:12}, which will be discussed
separately below. An overview of the control approaches is given in
Table~\ref{tab:algos}.
\subsection{GRAPE: Functional and Optimization Scheme}
\label{subsec:grape}
The GRAPE scheme for Bose-Einstein condensates has been presented in detail
elsewhere~\cite{hohenester.pra:07,buecker:13,jaeger.pra:13,hohenester.cpc:14a},
for this reason we only briefly introduce the working equations.
Experimentally, strong variations of the control parameter are difficult to
achieve. Therefore, we add to the cost function an additional term
\cite{hohenester.pra:07,borzi:08,vonwinckel:08},
\begin{equation}\label{eq:cost}
J(\psi(T),\lambda)=J_T(\psi(T))+\frac{\gamma}{2} \int_0^T[\dot{\lambda}(t)]^2 dt\,.
\end{equation}
Mathematically, the additional term penalizes strong variations of
the control parameter and is needed to make the OCT problem
well-posed~\cite{hohenester.pra:07,borzi:08,vonwinckel:08}.
Through $\gamma$ it is possible to weight the relative importance of
wavefunction matching and control smoothness. Below we will set $\gamma\ll 1$
such that $J$ is dominated by the terminal cost $J_T$.
In order to bring the system from the initial state $\psi_0$ to the
terminal state $\psi(T)$ we have to fulfill the Gross-Pitaevskii
equation, which enters as a \textit{constraint} in our optimization
problem. The constrained optimization problem can be turned into an
unconstrained one by means of Lagrange multipliers $p(x,t)$, whose
time evolution is governed by~\cite{hohenester.pra:07}
\begin{equation}\label{eq:adjoint}
i\dot p=\left(-\frac{1}{2M}\frac{\partial^2}{\partial x^2} +
V(x,\lambda(t))+2\kappa|\psi|^2\right)p+\kappa\psi^2 p^*\,,
\end{equation}
subject to the terminal condition
$p(T)=i\langle\psi_d|\psi(T)\rangle\psi_d$. The optimal control
problem is then composed of the Gross-Pitaevskii equation~\eqref{eq:gp} and
Eq.~\eqref{eq:adjoint}, which must be fulfilled simultaneously
together with \cite{hohenester.pra:07}
\begin{equation}\label{eq:optcontrol}
\gamma\ddot\lambda=
-\mathfrak{Re}\bigl<p\bigr|\frac{\partial V}{\partial\lambda}\bigl|\psi\bigr>
\end{equation}
for the optimal control. This expression differs from standard
GRAPE~\cite{KhanejaJMR05} and results from minimizing changes in the
control, cf. Eq.~\eqref{eq:cost}.
This set of equations can also be employed for a non-optimal control where
Eq.~\eqref{eq:optcontrol} is not fulfilled. In this case Eq.~\eqref{eq:gp} is
solved forwards in time and
Eq.~\eqref{eq:adjoint} backwards in time, and the search direction
$\nabla_\lambda J$ for an improved control is calculated from one of the
equations \cite{hohenester.pra:07,vonwinckel:08,hohenester.cpc:14a}
\begin{eqnarray}
\nabla_\lambda J\phantom{]}\!\!&=&\!\!
-\gamma\ddot\lambda-
\mathfrak{Re}\,\bigl<p\bigr|\frac{\partial V}{\partial\lambda}\bigl|\psi\bigr> \,\,\,
\mbox{for $L^2$ norm}\label{eq:L2}\qquad\\
-\frac{d^2 }{dt^2}\bigl[\nabla_\lambda J\bigr]\!\!&=&\!\!
-\gamma\ddot\lambda-
\mathfrak{Re}\,\bigl<p\bigr|\frac{\partial V}{\partial\lambda}\bigl|\psi\bigr> \,\,\,
\mbox{for $H^1$ norm.}\label{eq:H1}\qquad
\end{eqnarray}
These two expressions are obtained by interpreting, on the right hand
side of Eq.~\eqref{eq:cost}, the integral
$\int_0^T[\dot\lambda]^2\,dt=\langle \dot\lambda,\dot\lambda\rangle_{L^2}=
\langle \lambda,\lambda\rangle_{H^1}$
in terms of an $L^2$ or $H^1$
norm~\cite{vonwinckel:08,grond.pra:09b}. The $H^1$ norm implies
that one additionally has to solve a Poisson equation, see
the derivative operator on the left hand side of
Eq.~\eqref{eq:H1}. This generally results in a much smoother
time dependence of the control parameters while the additional
numerical effort for solving the Poisson equation is negligible. For an
optimal control, both Eqs.~\eqref{eq:L2} and
\eqref{eq:H1} yield $\nabla_\lambda J=0$.
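To illustrate the difference between Eqs.~\eqref{eq:L2} and \eqref{eq:H1}: once
the common right-hand side has been assembled on the time grid, the $H^1$ search
direction follows from one tridiagonal Poisson solve. A minimal sketch, assuming
fixed control values at $t=0$ and $t=T$ (homogeneous Dirichlet boundary
conditions; the boundary treatment in the actual implementation may differ):
\begin{verbatim}
import numpy as np

def h1_direction(rhs, dt):
    """Solve -d^2/dt^2 g = rhs with g(0) = g(T) = 0 on a uniform time grid."""
    n = rhs.size
    A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / dt**2
    return np.linalg.solve(A, rhs)  # the L2 direction would simply be rhs itself
\end{verbatim}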
Here we solve the optimal control equations using the Matlab toolbox
\textsc{octbec} \cite{hohenester.cpc:14a}. The ground and desired states of the Gross-Pitaevskii equation are computed using the optimal damping algorithm \cite{dion:07,hohenester.cpc:14a}.
The control parameters are obtained
iteratively using either a conjugate gradient method (GRAPE grad), which
only uses first order information, or a quasi-Newton BFGS
scheme~\cite{bertsekas:99} (GRAPE BFGS), which also takes into account
second-order information via an approximated Hessian. In both cases, the
optimization employs a line search to determine the optimal step size in the
direction of a given gradient. The pulse update is calculated for all time
points simultaneously, making the GRAPE schemes \emph{concurrent}.
\subsection{Krotov's method: Functional and Optimization scheme}
\label{subsec:krotov}
Krotov's method~\cite{Konnov99} provides an alternative optimal control
implementation. The main idea is to add to Eq.~\eqref{eq:statecost} a
vanishing term~\cite{Konnov99,sklarz:02,reich:12}, which is chosen
such that the minimum of the new function is also a minimum of $J$.
However, for non-optimal $\lambda(t)$ one can devise a scheme that
always gives a new control corresponding to a lower cost
function. Thus, Krotov's method leads to a monotonically convergent
optimization algorithm that is expected to exhibit much faster
convergence.
Our implementation closely follows
Refs.~\cite{sklarz:02,reich:12,eitan:12}. Specifically, the cost reads
\begin{equation}\label{eq:costkrotov}
J(\psi(T),\lambda)
= J_T(\psi(T))+
\int_0^T \frac{[\lambda(t)-\lambda_{\rm ref}(t)]^2}{S(t)} dt\,,
\end{equation}
where the reference field $\lambda_{\rm ref}(t)$
is typically chosen to be the control from the previous
iteration~\cite{PalaoPRA03}. The second term in
Eq.~\eqref{eq:costkrotov} penalizes changes in the control from one
iteration to the next, and ensures that as an optimum is approached the value of
the functional is increasingly determined by only $J_T$.
$S(t)=k s(t)$ is a shape function that
controls the turning on and off of the control fields, $k$ is a step size
parameter, and $s(t)\in[0,1]$.
Let $\psi^{(i)}(t)$ and $\lambda^{(i)}(t)$ denote the wavefunction and control
parameter, respectively, in the $i$th iteration of the optimal control loop.
To get started, we first solve for an initial guess $\lambda^{(0)}(t)$ the
Gross-Pitaevskii equation \eqref{eq:gp} and the adjoint equation \eqref{eq:adjoint}
for the co-state $p(t)$, which is backward-propagated in time with the same terminal condition as in GRAPE,
in order to obtain $\psi^{(0)}(t)$ and $p^{(0)}(t)$. In the next step, we
solve the Gross-Pitaevskii equation \textit{simultaneously} with the equation
for the new control field
\begin{widetext}
\begin{equation}
\label{eq:controlkrotov}
\lambda^{(i+1)}(t) = \lambda^{(i)}(t)
+ S(t) \mathfrak{Re}\, \Braket{p^{(i)}(t)|
\left[
\frac{\partial V}{\partial \lambda}
\right]_{\lambda^{(i+1)}(t)}
| \psi^{(i+1)}(t)}
+ \mathfrak{Re}\, \frac{\sigma(t)}{2i}
\Braket{\Delta \psi(t)|
\left[
\frac{\partial V}{\partial \lambda}
\right]_{\lambda^{(i+1)}(t)}
| \psi^{(i+1)}(t)}\,,
\end{equation}
\end{widetext}
where $\psi^{(i+1)}(t)$ is obtained by propagating $\psi(t=0)$ forward
in time using the updated pulse.~\footnote{The co-states of this work and of Ref.~\cite{reich:12} are related through $p=i\chi$. With this definition the adjoint equation \eqref{eq:adjoint} and the terminal condition $p(T)$ are the same for GRAPE and Krotov. As a consequence, the scalar products on the right hand side of Eq.~\eqref{eq:controlkrotov} involve the real rather than the imaginary part.}
The fact that $\psi^{(i+1)}(t)$ appears on the right hand side of the update
equation implies that the update at a given time $t$ depends on the updates at
all earlier times, making Krotov's method \emph{sequential}. This type of
update makes it non-straightforward to include a cost term on the
derivative of the control as in Eq.~\eqref{eq:cost}, since the derivative at
a given time $t$ requires knowledge of past \emph{and} future values of
$\psi(t)$.
The last term in Eq.~\eqref{eq:controlkrotov}
with $\Delta\psi(t)=\psi^{(i+1)}(t)-\psi^{(i)}(t)$ is generally
needed to ensure convergence in presence of the nonlinear mean-field
term $\kappa|\psi(t)|^2$ of the Gross-Pitaevskii equation. Convergence is
achieved through a proper choice of $\sigma(t)$
\cite{sklarz:02,reich:12}. In this work we neglect this additional
contribution for simplicity, as it is of only minor importance for
the moderate $\kappa$ values of our present concern.
\begin{figure}
\caption{(Color online) Wavefunction splitting through the transformation of the
confinement potential from a single to a double well. (a) The solid lines report the control parameters $\lambda(t)$ for the GRAPE and Krotov
optimizations, respectively. The potential is held constant after the terminal
time $T=2$ ms. The dashed line shows the shape function $s(t)$ of
Eq.~\eqref{eq:costkrotov}
\label{fig:splitting1}
\end{figure}
\begin{figure}
\caption{(Color online) Cost function versus number of solved equations (either Gross-Pitaevskii or adjoint equation) for
GRAPE and Krotov. For GRAPE one optimization iteration consists of numerous
solutions of the Gross-Pitaevskii equation \eqref{eq:gp}
\label{fig:splitting2}
\end{figure}
The derivative ${\partial V}/{\partial\lambda}$ in
Eq.~\eqref{eq:controlkrotov} has to be computed for $\lambda^{(i+1)}(t)$,
thus leading to an implicit equation for $\lambda^{(i+1)}(t)$. When $k$
is chosen sufficiently small, such that the control parameter varies
only moderately from one iteration to the next, one can obtain the new
control fields approximately from
\begin{equation}
\begin{split}
\label{eq:controlkrotov2}
\lambda^{(i+1)}(t) & \approx \lambda^{(i)}(t) \\
& + S(t) \, \mathfrak{Re} \Braket{
p^{(i)}(t) |
\left[
\frac{\partial V}{\partial \lambda}
\right]_{\lambda^{(i)}(t)}
| \psi^{(i+1)}(t)
}\,.
\end{split}
\end{equation}
Otherwise one can employ an iterative Newton scheme for the
calculation of $\lambda^{(i+1)}(t)$, as briefly described in
Appendix~\ref{sec:newtonkrotov}. In all our simulations we found
Eq.~\eqref{eq:controlkrotov2} to provide sufficiently accurate
results.
Once the new wavefunctions $\psi^{(i+1)}(t)$ and control parameters
$\lambda^{(i+1)}(t)$ are computed, we get the adjoint variables $p^{(i+1)}(t)$ through
the solution of Eq.~\eqref{eq:adjoint} and continue with the Krotov optimization
loop until the cost function $J$ is small enough or a certain number of
iterations is exceeded.
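Schematically, one iteration of the sequential update of
Eq.~\eqref{eq:controlkrotov2} can be organized as follows. This is an
illustrative sketch only: the propagators, the shape function $s(t)$, and the
matrix element of $\partial V/\partial\lambda$ are placeholders to be supplied
by the concrete trap model and solver.
\begin{verbatim}
def krotov_iteration(lam_old, psi0, k, s, propagate_backward,
                     propagate_forward_step, dV_matrix_element):
    """One sequential Krotov update (sketch).

    lam_old: control values on the time grid (previous iteration)
    s: shape function values s(t) on the same grid, S(t) = k*s(t)
    propagate_backward(lam_old) -> co-states p^(i)(t) on the grid
    propagate_forward_step(psi, lam, i) -> psi at the next grid point
    dV_matrix_element(p, psi, lam) -> Re <p| dV/dlambda |psi>"""
    p = propagate_backward(lam_old)      # backward pass with the old control
    psi = psi0
    lam_new = []
    for i, lam in enumerate(lam_old):    # forward pass with sequential update
        lam_upd = lam + k * s[i] * dV_matrix_element(p[i], psi, lam)
        lam_new.append(lam_upd)
        psi = propagate_forward_step(psi, lam_upd, i)
    return lam_new
\end{verbatim}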
As a variant, we also use a combination of Krotov's method with the
BFGS method (KBFGS) \cite{eitan:12}.
It includes an approximated Hessian via the Krotov gradient as an additional
term in the update equation~\eqref{eq:controlkrotov}. However, for technical
reasons and differently from the GRAPE BFGS algorithm, no line search is
employed.
\section{Results}
\label{sec:results}
In this paper, we consider two control problems. The first one is \textit{condensate
splitting}, where the condensate initially resides in one well which is
subsequently split into a double well. In our simulations we employ the
confinement potential of Lesanovsky et al.~\cite{lesanovsky:06} where the
control parameter $\lambda(t)$ is associated with a radio frequency magnetic field \cite{hohenester.pra:07}.
The objective is to bring at the terminal time $T$ the condensate wavefunction
to the ground state of the double well potential.
In the second control problem the condensate wavefunction is
excited from the ground to the first excited state of a single well potential.
The confinement potential is an anharmonic single-well
potential, details and a parameterized form of $V(x)$ can be found
in~\cite{buecker:11,buecker:13,hohenester.cpc:14a}. The shakeup is achieved by
displacing the potential origin according to $V(x-\lambda(t))$, where
$\lambda(t)$ now corresponds to the position of the potential minimum, i.e.,
through \textit{wavefunction shaking}. Experimental
realizations of such shaking protocols have been reported in
\cite{buecker:11,buecker:13,vanfrank:14}.
In our simulations GRAPE and Krotov start with the same initial guess.
The terminal time is set to $T=2$ ms throughout.
Unless stated differently, we use a nonlinearity
$\kappa/\hbar=2\pi\times 250$ Hz ($\kappa=\pi/2$ for
units with $\hbar=1$ and time measured in milliseconds, as used in our
simulations \cite{hohenester.cpc:14a}).
\subsection{Splitting vs.\ Shaking}
\label{subsec:splitting}
\begin{figure}
\caption{(Color online) Same as Fig.~\ref{fig:splitting1}
\label{fig:shaking1}
\end{figure}
Figure~\ref{fig:splitting1} shows (a) the controls obtained from our
GRAPE and Krotov optimizations for condensate splitting, together with (b,c) the density maps of the condensate wavefunction.
The potential is held constant after the terminal
time $T=2$ ms of the control process. Figs.~\ref{fig:splitting1}(d,e) show the
square moduli of the terminal (solid lines) and desired (dashed lines)
condensate wavefunctions, which are almost indistinguishable, thus demonstrating
the success of both control protocols. This can be also seen from the density
maps which show no time variations at later times, when the potential is held
constant, in accordance with the fact that the terminal wavefunction is the
ground state of the double well trap.
Figure~\ref{fig:splitting2} compares the efficiency of the GRAPE and Krotov
optimizations. We plot the cost function $J_T$ versus the number $n$ of
equations solved during optimization. For both GRAPE and Krotov, $n$ counts the
solutions of either the Gross-Pitaevskii or the adjoint equation.
The actual computer run
times depend on the details of the numerical implementation, but are comparable
for both schemes. As can be seen in Fig.~\ref{fig:splitting2}, in the GRAPE
optimization the cost function decreases in large steps after a given number of
solved equations, whereas in the Krotov optimization $J_T$ decreases continuously.
The cost evolution of GRAPE can be attributed to the BFGS search algorithm,
where a line search is performed along a given search direction. Once the
minimum is found, the step is accepted ($J_T$ drops) and a new search direction
is obtained through the solution of the adjoint equation. In contrast, the
Krotov algorithm is constructed such that $J_T$ decreases monotonically in each
iteration step. Altogether, GRAPE and Krotov optimizations
perform equally well.
\begin{figure}
\caption{(Color online) Same as Fig.~\ref{fig:splitting2}
\label{fig:shaking2}
\end{figure}
In comparison to condensate splitting, the shakeup process is a
considerably more complicated control problem.
Figure~\ref{fig:shaking1} shows the optimized control parameters as well as the
time evolution of the condensate densities. Both GRAPE and Krotov succeed
comparably well. Regarding the control fields, the GRAPE one is smoother
than the Krotov one, due to the penalty term on $\dot\lambda(t)$ in
Eq.~\eqref{eq:cost}. From Fig.~\ref{fig:shaking2} we observe that a much higher
number of optimization iterations is needed, in comparison to wavefunction
splitting, for both optimization methods to significantly reduce $J_T$.
Initially, $J_T$ decreases more rapidly for the Krotov optimization, but after
a larger number $n$ of solved equations, say around $n\sim 600$, GRAPE
performs better.
\subsection{Influence of Nonlinearity}
\label{subsec:nonlin}
We investigate the influence of the nonlinear atom-atom interaction on
the convergence of the optimization loop. The dashed lines in
Fig.~\ref{fig:splitting2} report results for splitting simulations with a larger
nonlinearity $\kappa/\hbar=2\pi\times 1000$ Hz. While the GRAPE convergence
depends only weakly on $\kappa$, Krotov converges significantly slower for
larger $\kappa$ values.
Things are different for the shaking shown in Fig.~\ref{fig:shaking2}.
While the GRAPE performance again depends only weakly on $\kappa$, Krotov
converges \textit{faster} with increasing $\kappa$.
Because of the lack of a line search in the Krotov algorithm, the convergence
behavior is far more dependent on specific features of the control landscape
which depend strongly on $\kappa$.
\subsection{Convergence Behavior}
\label{subsec:convergence}
Next, we inquire into the details of the convergence properties for the
optimization of the shakeup process. By comparing GRAPE with Krotov, we will
identify the advantages and disadvantages of the respective optimization
methods.
\begin{figure}
\caption{(Color online) (a) Cost function versus number of solved equations for
conjugate gradient (grad) and BFGS optimization schemes, and for search
directions obtained from Eqs.~(\ref{eq:L2}
\label{fig:shakingGRAPE}
\end{figure}
Figure~\ref{fig:shakingGRAPE}(a) shows
the terminal cost function $J_T$ versus the number of solved equations of motion $n$ for
the different GRAPE schemes. It is evident that the conjugate gradient
solutions reach a plateau after a certain number of iterations.
In contrast, the BFGS solutions decrease significantly even at later stages of the optimization. We attribute this behavior to the use of the second order derivative
information. The GRAPE BFGS scheme, which estimates the Hessian of $J$ in
addition to $\nabla_\lambda J$, can take larger steps to cross flat regions of
$J$, contrary to the (first-order) GRAPE gradient scheme, which gets stuck.
Figure~\ref{fig:shakingGRAPE}(b) shows the control fields for the
GRAPE BFGS schemes. Although both optimization strategies perform equally well, the solutions
obtained with $H^1$ norm are smoother and probably better suited for
experimental implementation.
\begin{figure}
\caption{(Color online) Same as Fig.~\ref{fig:shakingGRAPE}
\label{fig:shakingKrotov}
\end{figure}
Figure~\ref{fig:shakingKrotov} presents (a) $J_T$ versus $n$ and (b) the control
parameters for the Krotov optimization. The solid line with $k=0.005$ in
panel (a) is identical to the one shown in Fig.~\ref{fig:shaking2}. When we
increase $k$ (black line) the cost function drops more rapidly. However, we
found that larger $k$ values can lead to sharp variations in $\lambda(t)$ which
might be problematic for experimental implementations, as will be discussed in
more detail below.
\begin{figure*}
\caption{(Color online) (a,c) Evolution of control parameters during the
optimization process for GRAPE. (b,d) Density plot of power spectra of the
control parameters displayed in panels (a,c). We use a logarithmic color scale. In panels (b) and (d) the numbers of iterations are chosen such that the final cost function $J_T$ becomes approximately $10^{-2}
\label{fig:performanceGRAPE}
\end{figure*}
\begin{figure*}
\caption{(Color online) Same as Fig.~\ref{fig:performanceGRAPE}
\label{fig:performanceKrotov}
\end{figure*}
In Fig.~\ref{fig:shakingKrotov} we additionally display results for a
simulation using a combination of Krotov's method with the BFGS scheme
(KBFGS)~\cite{eitan:12}. The performance of KBFGS is similar to the
simpler optimization procedure of Eq.~\eqref{eq:controlkrotov2}, a
finding in accordance with
Ref.~\cite{eitan:12}.
We attribute this to the fact that within the
Krotov scheme only a small portion of the control landscape is
explored, because the monotonic convergence enforces small control
updates, in contrast to GRAPE where larger regions are scanned by the
line search. As a consequence, the improvement in the
Krotov search direction via the Hessian is minimal.
Finally, the dashed line for adaptive $k$ shows results for an optimization that starts
with a small $k$ value, which subsequently increases in each iteration until the cost
decreases by a desired amount (here 2.5 per cent) within one iteration. This $k$
value is then kept constant for the rest of the optimization. The idea behind this
strategy is that the choice of $k$ is crucial for convergence, but the optimal
value is different for each problem. Generally, finding a suitable value for
$k$ requires some trial and error.
\subsection{Features of the Control}
\label{subsec:smooth}
For many experimental implementations it is indispensable to use smooth control
parameters. In the following we investigate the smoothness of the optimal
controls obtained by the different optimization methods.
Figure~\ref{fig:performanceGRAPE}(a) shows for GRAPE BFGS H1 the evolution of the $\lambda(t)$
values during optimization. One observes that during the first few
iterations the characteristic features of $\lambda(t)$ emerge, which then become
refined in the course of further iterations. Fig.~\ref{fig:performanceGRAPE}(b)
reports the power spectra (square moduli of Fourier transforms) of the
$\lambda(t)$ history during optimization. During the first, say, 20 iterations
the Fourier-transformed control parameter $\tilde\lambda(\nu)$ spectrally
broadens, indicating the emergence of sharp features during optimization. With
increasing iterations the spectral width of $\tilde\lambda(\nu)$ remains
approximately constant.
Results of the GRAPE BFGS L2 optimization are shown in Figs.~\ref{fig:performanceGRAPE}(c,d). We
observe that, in contrast to the H1 results, $\lambda(t)$ acquires sharp
features during optimization, as also reflected by the broad power spectrum.
This is because initially the gradient $\nabla_\lambda J$, which determines the
search direction for improved control parameters, exhibits strong variations.
These variations are washed out in the H1 optimization through the solution
of the Poisson equation, see Eq.~\eqref{eq:H1}, leading to significantly
smoother control parameters.
In GRAPE, the user must additionally provide the weighting factor $\gamma$ of
Eq.~\eqref{eq:cost} that determines the relative importance of terminal cost and
control smoothness. For the problems under study, we found that the performance
of GRAPE does not depend sensitively on the value of $\gamma$, and we usually
use a small value such that the cost is dominated by the terminal cost.
Figs.~\ref{fig:performanceKrotov}(a,c) show the $\lambda(t)$ history during
a Krotov optimization for different step sizes $k$, and panels (b,d) report the
corresponding power spectra. In comparison to the GRAPE BFGS H1 optimization, the power
spectra are significantly broader, in particular for the larger $k$ values. This is
due to the fact that in the functional used for the Krotov optimization there
is no penalty term that enforces smoothness of the control (and thus a narrow
spectrum).
The choice of the step size $k$ is rather critical for the Krotov performance. With
increasing $k$ the cost function decreases more rapidly during optimization.
However, values of $k$ that are too large can lead to numerical instabilities.
These instabilities result from the discretization of the update equation;
mathematically, Krotov's method is guaranteed to converge monotonically for any
value of $k$ only in the time-continuous limit.
\begin{figure}
\caption{(Color online) Same as Fig.~\ref{fig:shaking2}
\label{fig:GRAPEKrotov}
\end{figure}
One might wonder whether a combination of both approaches would give the best of
two worlds. In Fig.~\ref{fig:GRAPEKrotov} we present results for simulations
where we start with a Krotov optimization and switch to GRAPE after a given
number of iterations. As can be seen, the performance of this combined
optimization does not offer a particular advantage over
genuine GRAPE or Krotov optimizations. This is probably due to
differences between the optimal
control fields $\lambda(t)$ obtained by the two approaches, such that
$\lambda(t)$ needs to be significantly modified when changing from one scheme to
the other. In addition, the BFGS search algorithm of GRAPE uses the information
of previous iterations in order to estimate the Hessian of the control space,
and this information is missing when changing schemes.
\section{Conclusions and Outlook}\label{sec:conclusions}
Based on the two examples investigated in the previous section, namely
wavefunction splitting and shaking in a magnetic microtrap, we now set out to
analyze the advantages and disadvantages of the GRAPE and Krotov optimization
methods which are tied to the functional that is minimized in each case.
First, when the optimization converges fast to an optimal solution, such as for wavefunction splitting
investigated in Sec.~\ref{subsec:splitting}, both optimization algorithms
perform equally well, even without carefully tuning the free parameters $\gamma$ or $k$. For such problems, the choice of algorithm is a matter of
personal preference. On the other hand, for optimization problems with slow convergence, such as wavefunction shaking, more care has to be taken.
Specifically, there are significant differences between the two algorithms in
terms of free parameters vs.\ speed of convergence as well as possible cost
functionals vs.\ features of the obtained optimal control.
While GRAPE BFGS utilizes a line search to ensure monotonic convergence and to
obtain the optimal step size in each iteration, the speed of convergence in
Krotov's method is mainly determined by the free parameter $k$. On the one hand this
means that GRAPE BFGS works better ``out of the box'' since it automatically
determines the best step size in each step. On the other hand,
the convergence is slowed down due to the necessity of a line search.
It is also evident from our results that both algorithms yield controls with
features that can be understood in terms of additional costs
introduced in the
functional. For GRAPE we use a cost that penalizes a large derivative of
the control which results in smooth controls in the end. For Krotov's
method we employ
a penalty on changes in the amplitude of the control in each iteration.
Correspondingly, this leads to controls that have a smaller integrated
intensity, at the cost of reduced smoothness.
In principle it is conceivable to modify the Krotov algorithm to take into
account an additional cost term on the derivative of the control. While we
conjecture that this will lead to controls that are comparable with those
obtained in the GRAPE framework, the necessary modification of the Krotov
algorithm is beyond the scope of the current work.
In the context of controlling Bose-Einstein condensates with experimentally
smooth controls, the optimization with the GRAPE BFGS method, a functional
enforcing smoothness, and use of the $H^1$ norm appears to be the method of
choice. It is a black-box scheme with practically no problem-dependent
parameters, it gives the desired smooth control fields, and works for various
nonlinearity parameters $\kappa$.
In contrast, the Krotov optimization without an appropriate penalty term in the
functional can converge faster but usually also leads to sharp features in
the control. A sensitive choice of the step size $k$ is indispensable to
achieve a compromise between fast convergence and smoothness. If smoothness is
not an issue or extremely fast convergence is needed, the Krotov method is
preferable.
A combination of GRAPE and Krotov in the sense of switching from
one method to the other during the optimization did not result in any
significant gain. This is explained by the different control solutions
that are found by the different methods which do not easily facilitate
a transition between them. It points to the fact that many control
solutions exist and which solution is identified by the optimization depends
strongly on the additional constraints~\cite{JosePRA13} as well as the
optimization method.
\section*{Acknowledgments}
This work has been supported in part by the Austrian Science Fund (FWF)
under project P24248, by NAWI Graz, and the European Union under Grant
No. 297861 (QUAINT).
\begin{appendix}
\section{}\label{sec:newtonkrotov}
In this appendix we briefly show how to numerically solve the equation
\begin{widetext}
\begin{equation}\label{eq:controlkrotov3}
\lambda^{(i+1)}(t) =\lambda^{(i)}(t)
+S(t)\,\mathfrak{Re}\,
\bigl<p^{(i)}(t)\bigr|\left[\frac{\partial V}{\partial\lambda}\Bigl|_{\lambda^{(i+1)}(t)}\right]
\bigl|\psi^{(i+1)}(t)\bigr>\,,
\end{equation}
which differs from Eq.~\eqref{eq:controlkrotov2} in that the potential
derivative is evaluated for $\lambda^{(i+1)}(t)$. The scheme is easily generalized
to include the additional $\sigma(t)$ term of Eq.~\eqref{eq:controlkrotov}. Let
$\lambda_0(t)$ denote an initial guess for the solution of
Eq.~\eqref{eq:controlkrotov3}, e.g.\ the solution of
Eq.~\eqref{eq:controlkrotov2}. We now set
$\lambda^{(i+1)}(t)=\lambda_0(t)+\delta\lambda(t)$, where $\delta\lambda(t)$ is assumed to
be a small quantity. Thus, we can expand the second term on the right hand side
of Eq.~\eqref{eq:controlkrotov3} in lowest order of $\delta\lambda(t)$ to obtain
\begin{equation}\label{eq:controlkrotov4}
\lambda_0(t)+\delta\lambda(t)\approx\lambda^{(i)}(t)+S(t)\,\mathfrak{Re}\,
\bigl<p^{(i)}(t)\bigr|\left[\frac{\partial V}{\partial\lambda}\Bigl|_{\lambda_0(t)}
+\frac{\partial^2 V}{\partial\lambda^2}\Bigl|_{\lambda_0(t)}\delta\lambda(t)
\right]\bigl|\psi^{(i+1)}(t)\bigr>\,.
\end{equation}
Separating the contributions of $\delta\lambda$ from the rest, we get
\begin{equation}
\left(1-S(t) \,\mathfrak{Re}\,
\bigl<p^{(i)}(t)\bigr|\left[
\frac{\partial^2 V}{\partial\lambda^2}\Bigl|_{\lambda_0(t)}
\right]\bigl|\psi^{(i+1)}(t)\bigr>\right)\delta\lambda(t)\approx
-(\lambda_0(t)-\lambda^{(i)}(t))+
S(t)\,\mathfrak{Re}\,\bigl<p^{(i)}(t)\bigr|\left[\frac{\partial V}{\partial\lambda}\Bigl|_{\lambda_0(t)}\right]
\bigl|\psi^{(i+1)}(t)\bigr>\,,
\end{equation}
\end{widetext}
which can be solved for $\delta\lambda(t)$. If $|\delta\lambda(t)|$ is
smaller than some small tolerance $\varepsilon$, we set
$\lambda^{(i+1)}(t)\to\lambda_0(t)+\delta\lambda(t)$. Otherwise we set
$\lambda_0(t)\to\lambda_0(t)+\delta\lambda(t)$ and repeat the Newton iteration until
convergence. Typically only few iterations are needed to reach tolerances of
the order of $\varepsilon=10^{-6}$.
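A pointwise sketch of this Newton iteration (one time-grid point at a time) is
given below. The scalar functions \texttt{g1} and \texttt{g2}, standing for
$\mathfrak{Re}\langle p|\partial V/\partial\lambda|\psi\rangle$ and
$\mathfrak{Re}\langle p|\partial^2 V/\partial\lambda^2|\psi\rangle$, are assumed
to be provided by the caller, since they depend on the trap parameterization.
\begin{verbatim}
def newton_lambda(lam_prev, lam0, S, g1, g2, eps=1e-6, maxit=50):
    """Solve lam = lam_prev + S*g1(lam) for lam by Newton iteration, from lam0."""
    lam = lam0
    for _ in range(maxit):
        delta = (-(lam - lam_prev) + S * g1(lam)) / (1.0 - S * g2(lam))
        lam += delta
        if abs(delta) < eps:
            break
    return lam
\end{verbatim}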
\end{appendix}
\end{document} |
\begin{document}
\begin{frontmatter}
\title{Quartic diophantine equation $X^4 - Y^4 = R^2 - S^2 $
}
\author{S.Muthuvel}
\ead{[email protected], [email protected]}
\author{R.Venkatraman
\corref{mycorrespondingauthor}}
\cortext[mycorrespondingauthor]{Corresponding author}
\ead{[email protected]}
\address{Department of Mathematics, College of Engineering and Technology, SRM Institute of Science and Technology, Vadapalani Campus,
No.1 Jawaharlal Nehru Salai, Vadapalani, Chennai-600026, Tamilnadu, India.}
\begin{abstract}
In this paper, we deal with the quartic Diophantine equation $X^4 - Y^4 = R^2 - S^2$ and present infinitely many of its integer solutions.
\end{abstract}
\begin{keyword}
Quartic diophantine equation \sep Elementary method
\MSC[2020] 11D25 \sep 11D45
\end{keyword}
\end{frontmatter}
\section{Introduction}\label{sec1}
Consider the general Diophantine equation
$$x^n + y^n = u^n +v^n, \qquad \qquad n \in \mathbb{N}.$$
The case $n=2$ has been treated in \cite{Davies,Kersey book,Pasternak}. For $n=4$, parametric solutions of the above equation are described in \cite{Choudhry-1,Euler ppr, Hardy book}. More general Diophantine equations, with more variables or with integer coefficients that are not all equal to one, were considered by several researchers \cite{Choudhry-h,Elkies,Izadi & Nabardi,Janfada & Nabardi}. The authors of \cite{Babic & Nabardi} provided an infinite number of positive integral solutions for various powers.
The objective of this work is to obtain infinitely many integral solutions of
\begin{eqnarray}
X^4 - Y^4=R^2 - S^2 \label{1}
\end{eqnarray}
by each one of four parametric methods.
In \cite{Baghlaghdam & Izadi form}, the equations
\begin{eqnarray}
a\left(X_{1}^{'5}+X_{2}^{'5}\right)+\sum_{i=0}^{m}a_{i}X_{i}^{5}=b\left(Y_{1}^{'3}+Y_{2}^{'3}\right)+\sum_{i=0}^{n} b_{i}Y_{i}^{3} \label{s1}
\end{eqnarray}
where $m,n \in \mathbb{N} \cup \{0\}$ and $a,b \neq 0, \ a_{i},b_{i}$ are fixed arbitrary rational numbers, are
examined. The solution to \eqref{s1}, which is converted into a cubic or a quartic elliptic curve with a positive rank, is found using the theory of elliptic curves.
It is demonstrated in [\cite{Baghlaghdam & Izadi high}, Main Theorem 2] that the equation
$$\sum_{i=1}^{n}p_{i} x_{i}^{a_i}= \sum_{j=1}^{m}q_{j} y_{j}^{b_j}$$
where $m,n,a_{i},b_{j} \in \mathbb{N}, \ p_{i},q_{j} \in \mathbb{Z}, \ i=1,2,\dots,n, \ j=1,2,\dots,m$,
has a parametric solution and
infinitely many solutions in nonzero integers if there exists an $i$ such that $p_i = 1$
and $(a_{i},a_{1}a_{2} \dots a_{i-1}a_{i+1}\dots a_{n}b_{1}b_{2}\dots b_{m}) = 1$ or there exists a $j$ such that $q_j = 1$ and
$(b_{j},a_{1}\dots a_{n}b_{1}\dots b_{j-1}b_{j+1}\dots b_{m}) = 1$. Although linear transformations are also employed in this article, we propose a different strategy and some different conditions for the integer coefficients in order to solve \eqref{1}.
\section{Solving the Diophantine equation $X^4 - Y^4 = R^2 - S^2$}
The trivial solution of the equation \eqref{1} is $(X,Y,R,S)=(m,n,m^2 , n^2)$ for $m, n \in \mathbb{Z}$.
Four different linear transformations are considered and for each one of them, we give a different class of infinitely many integer solutions of equation \eqref{1}.
\subsection{Method-1}
Consider the linear transformations,
\begin{eqnarray}
X=px+u, \qquad Y=qx-u, \qquad R=x+v, \qquad S=px+v \label{M1LT}
\end{eqnarray}
\noindent where $p,q,x,u,v \in \mathbb{Z}$. Introducing \eqref{M1LT} in \eqref{1}, we get
\begin{eqnarray}
\alpha x^4 + \beta x^3 + \gamma x^2 + \delta x &=& 0 \label{M1_1}
\end{eqnarray}
\noindent where
\begin{align}
\begin{aligned}
\alpha &= p^4 - q^4, \\ \gamma &= 6p^2 u^2 - 6q^2 u^2 + p^2 - 1,
\end{aligned}
&&
\begin{aligned}
\beta &= 4p^3 u + 4q^3 u, \\ \delta &= 4pu^3 + 4qu^3 - 2v + 2pv \label{M1_2}
\end{aligned}
\end{align}
\noindent For $\delta=0$ in \eqref{M1_2}, we obtain
$$(2p+2q)u^3 = v(1-p)$$
Further, we put $u=t, v=t^3$ and get $p=\frac{1-2q}{3}$. \\ Next, setting $\gamma = 0$ in \eqref{M1_2} gives
$$\left(q + 1\right)\left[\left(-15t^2 +2\right)q + \left(3t^2 -4\right)\right]= 0$$
\noindent Simplifying the above expression, we have $q=\frac{4 - 3t^2}{2 - 15t^2}$ and therefore \eqref{M1_1} becomes
\begin{eqnarray*}
\alpha x^4 + \beta x^3 &=& 0 \\
x&=& \frac{-t\left(405t^8 - 459t^6 + 1404t^4 - 600t^2 + 56\right)}{3\left(27t^6 - 27t^4 + 36t^2 - 10\right)}
\end{eqnarray*}
\noindent Plugging the values $p,q,u,v$ in \eqref{M1LT}, we acquire
\begin{eqnarray}
\begin{aligned}
X &= \frac{t\left(1215t^{10} - 1782t^8 + 4671t^6 - 774t^4 - 366t^2 + 52\right)}{3(2-15t^2)\left(27t^6 - 27t^4 + 36t^2 - 10\right)} \\
Y &= \frac{t\left(1215t^{10} - 1782t^8 + 4671t^6 - 5634t^4 + 1902t^2 - 164\right)}{3(2-15t^2)\left(27t^6 - 27t^4 + 36t^2 - 10\right)} \\
R &= \frac{-t\left(324t^8 - 378t^6 + 1296t^4 - 570t^2 + 56\right)}{3\left(27t^6 - 27t^4 + 36t^2 - 10\right)} \\
S &= \frac{t\left(810t^8 + 1512t^6 + 1674t^4 - 1092t^2 + 112\right)}{3(2-15t^2)\left(27t^6 - 27t^4 + 36t^2 - 10\right)}
\end{aligned}
\end{eqnarray}
\noindent Eliminating the denominators from the above equations,
\begin{eqnarray*}
\begin{aligned}
X &= 1215t^{11} - 1782t^9 + 4671t^7 - 774t^5 - 366t^3 + 52t \\
Y &= 1215t^{11} - 1782t^9 + 4671t^7 - 5634t^5 + 1902t^3 - 164t \\
R&=5904900t^{19}-14368590t^{17}+41898546t^{15}-55842858t^{13}+58236894t^{11} \\
& \qquad -36547200t^9+12314916t^7-2186784t^5+193392t^3-6720t \\
S&=984150t^{17}+721710t^{15}+1395306t^{13}-1476954t^{11}+3664440t^9-3124332t^7 \\
& \qquad +1027296t^5-140112t^3+6720t
\end{aligned}
\end{eqnarray*}
We get an integer solution $(X,Y,R,S)$ of equation \eqref{1} for every $t \in \mathbb{Z}$. So,
the presented method generates infinitely many integer solutions of the initial
equation \eqref{1}.
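The parametrization obtained above can be verified symbolically. The following
sketch (using the SymPy computer algebra system) rebuilds $p,q,u,v$ and $x$ from
the conditions $\delta=0$, $\gamma=0$ and $\alpha x+\beta=0$, and checks the
identity \eqref{1}; the same check applies, with the corresponding
transformations, to the methods of the following subsections.
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
q = (4 - 3*t**2) / (2 - 15*t**2)
p = (1 - 2*q) / 3
u, v = t, t**3
x = -4*u*(p**3 + q**3) / (p**4 - q**4)      # root of alpha*x + beta = 0
X, Y, R, S = p*x + u, q*x - u, x + v, p*x + v
assert sp.simplify(X**4 - Y**4 - (R**2 - S**2)) == 0
\end{verbatim}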
\subsection{Method-2}
In this method, we deal with a different transformation in \eqref{1}. Let
\begin{eqnarray}
X=px+u, \qquad Y=qx-u, \qquad R=x+v, \qquad S=px-v \label{M2LT}
\end{eqnarray}
\noindent where $p,q,x,u,v \in \mathbb{Z}$. As in the previous subsection, introducing these linear transformations in \eqref{1} leads to an equation of the form
\begin{eqnarray}
Ax^4 + Bx^3 + Cx^2 + Dx &=& 0 \label{M2_1}
\end{eqnarray}
\noindent where
\begin{align}
\begin{aligned}
A &= p^4-q^4, \\ C &= 6p^2 u^2 - 6q^2 u^2 + p^2 - 1,
\end{aligned}
&&
\begin{aligned}
B &= 4p^3 u + 4q^3 u, \\ D &= 4pu^3 + 4qu^3 - 2v - 2pv \label{M2_2}
\end{aligned}
\end{align}
\noindent For $D=0$ in \eqref{M2_2}, we get
$$(2p+2q)u^3 = v(1+p)$$
Further, we set $u=t, v=t^3$ and obtain $p=1-2q$. Next, setting $C = 0$ in \eqref{M2_2} gives
$$\left(q - 1\right)\left[\left(9t^2 +2\right)q - 3t^2\right]= 0$$
\noindent Using the above equation, we obtain $q=\frac{3t^2}{9t^2 + 2}$ and therefore \eqref{M2_1} becomes
\begin{eqnarray*}
A x^4 + B x^3 &=& 0 \\
x&=& \frac{-t\left(243t^8 + 297t^6 + 216t^4 + 72t^2 +8\right)}{\left(27t^6 + 27t^4 + 12t^2 + 2\right)}
\end{eqnarray*}
\noindent Applying the values $p,q,u,v$ in \eqref{M2LT}, we acquire
\begin{eqnarray}
\begin{aligned}
X &= \frac{-3t\left(27t^8 + 36t^6 + 27t^4 + 12t^2 + 2\right)}{\left(27t^6 + 27t^4 + 12t^2 + 2\right)} \\
Y &= \frac{-t\left(81t^8 + 108t^6 + 81t^4 + 24t^2 + 2\right)}{\left(27t^6 + 27t^4 + 12t^2 + 2\right)} \\
R &= \frac{-2t\left(108t^8 + 135t^6 + 102t^4 + 35t^2 + 4\right)}{\left(27t^6 + 27t^4 + 12t^2 + 2\right)} \\ \label{M2_F}
S &= \frac{-2t\left(54t^8 + 81t^6 + 60t^4 + 25t^2 + 4\right)}{\left(27t^6 + 27t^4 + 12t^2 + 2\right)}
\end{aligned}
\end{eqnarray}
\noindent By cancelling the denominators in \eqref{M2_F},
\begin{eqnarray*}
\begin{aligned}
X&=2187t^{15}+5103t^{13}+6075t^{11}+4617t^9+2322t^7+756t^5+144t^3+12t \\
Y&=2187t^{15}+5103t^{13}+6075t^{11}+4293t^9+1890t^7+504t^5+72t^3+4t \\
R&=4251528t^{27}+18068994t^{25}+38381850t^{23}+52986636t^{21}+52435512t^{19}\\
& \qquad+38959218t^{17}+22221378t^{15}+9803592t^{13}+3331692t^{11}+858168t^9 \\
& \qquad +162216t^7+21216t^5+1712t^3+64t \\
S&=2125764t^{27}+9565938t^{25}+21139542t^{23}+30154356t^{21}+30784212t^{19}\\
& \qquad +23632722t^{17}+13977846t^{15}+6426864t^{13}+2290356t^{11}+623160t^9 \\
& \qquad +125496t^7+17664t^5+1552t^3+64t
\end{aligned}
\end{eqnarray*}
For any $t \in \mathbb{Z}$, we obtain an integer solution $(X,Y,R,S)$ to equation \eqref{1}. Consequently, the proposed method yields an infinite number of integer solutions to the starting equation \eqref{1}.
\subsection{Method-3}
In this method, we deal with a different transformation in \eqref{1}. Let
\begin{eqnarray}
X=v, \qquad Y=px+v, \qquad R=qx+u, \qquad S=x+u \label{M3LT}
\end{eqnarray}
\noindent where $p,q,x,u,v \in \mathbb{Z}$. As in the first subsection, introducing these linear transformations in \eqref{1} leads to an equation of the form
\begin{eqnarray}
ax^4 + bx^3 + cx^2 + dx &=& 0 \label{M3_1}
\end{eqnarray}
\noindent where
\begin{align}
\begin{aligned}
a &= p^4, \\ c &= 6p^2 v^2 + q^2 - 1,
\end{aligned}
&&
\begin{aligned}
b &= 4p^3 v, \\ d &= 4pv^3 - 2u + 2qu \label{M3_2}
\end{aligned}
\end{align}
\noindent For $d=0$ in \eqref{M3_2}, we obtain
$$(2p)v^3 = u(1 - q)$$
Additionally, we put $u=t^3, v=t$ and get $p=\frac{1 - q}{2}$. Next, setting $c = 0$ in \eqref{M3_2} gives
$$\left(q - 1\right)\left[\left(3t^2 +2\right)q - \left(3t^2 - 2\right)\right]= 0$$
\noindent Thus, we get $q=\frac{3t^2-2}{3t^2 + 2}$ and therefore \eqref{M3_1} becomes
\begin{eqnarray*}
a x^4 + b x^3 &=& 0 \\
x&=& \frac{-2t\left(81t^8 + 216t^6 + 216t^4 + 96t^2 + 16\right)}{\left(27t^6 + 54t^4 + 36t^2 + 8\right)}
\end{eqnarray*}
\noindent Taking the values $p,q,u,v$ and applying it in \eqref{M3LT}, we get
\begin{eqnarray}
\begin{aligned}
X &= t \\
Y &= \frac{-3t\left(81t^8 + 216t^6 + 216t^4 + 96t^2 + 16\right)}{(3t^2 + 2)\left(27t^6 + 54t^4 + 36t^2 + 8\right)} \\
R &= \frac{-t\left(405t^{10} + 756t^8 + 216t^6 - 384t^4 - 304t^2 - 64\right)}{(3t^2 + 2)\left(27t^6 + 54t^4 + 36t^2 + 8\right)} \\
S &= \frac{-t\left(135t^8 + 378t^6 + 396t^4 + 184t^2 + 32\right)}{\left(27t^6 + 54t^4 + 36t^2 + 8\right)}
\end{aligned}
\end{eqnarray}
\noindent Eliminating the denominators from the above equations,
\begin{eqnarray*}
\begin{aligned}
X&= 81t^{10}+216t^8+216t^6+96t^4+16t^2 \\
Y&=243t^{10}+648t^8+648t^6+288t^4+48t^2 \\
R&=32805t^{21}+148716t^{19}+268272t^{17}+217728t^{15}+18144t^{13}-120960t^{11} \\
& \qquad -112896t^9-49152t^7-11008t^5-1024t^3 \\
S&=32805t^{21}+196344t^{19}+532008t^{17}+849312t^{15}+874656t^{13}+600000t^{11} \\
&\qquad +273536t^9+79872t^7+13568t^5+1024t^3
\end{aligned}
\end{eqnarray*}
For each $t \in \mathbb{Z}$, equation \eqref{1} has an integer solution $(X,Y,R,S)$. The resulting method produces an infinite number of integer solutions to the initial equation \eqref{1}.
\subsection{Method-4}
In this method, we deal with a different transformation in \eqref{1}. Let
\begin{eqnarray}
X=-v, \qquad Y=px-v, \qquad R=qx-u, \qquad S=x+u \label{M4LT}
\end{eqnarray}
\noindent where $p,q,x,u,v \in \mathbb{Z}$. As in the first subsection, introducing these linear transformations in \eqref{1} leads to an equation of the form
\begin{eqnarray}
Mx^4 + Nx^3 + Px^2 + Qx &=& 0 \label{M4_1}
\end{eqnarray}
\noindent where
\begin{align}
\begin{aligned}
M &= p^4, \\ P &= 6p^2 v^2 + q^2 - 1,
\end{aligned}
&&
\begin{aligned}
N &= -4p^3 v, \\ Q &= -4pv^3 - 2u - 2qu \label{M4_2}
\end{aligned}
\end{align}
\noindent For $Q=0$ in \eqref{M4_2}, we obtain
$$(2p)v^3 = u(-1 - q)$$
Further, we put $u=t^3, v=t$ and get $q=-1-2p$. Next, setting $P = 0$ in \eqref{M4_2} gives
$$p = \frac{-2}{3t^2 + 2}$$
\noindent Therefore \eqref{M4_1} becomes
\begin{eqnarray*}
M x^4 + N x^3 &=& 0 \\
x&=& -2t\left(3t^2 + 2\right)
\end{eqnarray*}
\noindent Substituting the values $p,q,u,v$ in \eqref{M4LT}, we obtain
\begin{eqnarray*}
\begin{aligned}
X &= -t \\
Y &= 3t \\
R &= 5t^3 -4t \\
S &= -5t^3 -4t
\end{aligned}
\end{eqnarray*}
We get an integer solution $(X,Y,R,S)$ of equation \eqref{1} for every $t \in \mathbb{Z}$. So,
the presented method generates infinitely many integer solutions of the initial
equation \eqref{1}.
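The family produced by Method-4 is simple enough to be verified directly; a
short symbolic check (a sketch, assuming the SymPy library is available):
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
X, Y, R, S = -t, 3*t, 5*t**3 - 4*t, -5*t**3 - 4*t
assert sp.expand(X**4 - Y**4 - (R**2 - S**2)) == 0   # both sides equal -80*t**4
\end{verbatim}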
\end{document} |
\begin{document}
\title{A Necessary and Sufficient Condition for Entanglement Witness}
\author{F. Masillo}
\email{[email protected]}
\affiliation{Dipartimento di Fisica, Universit\`{a} del Salento, I-73100 Lecce, Italy}
\date{\today}
\keywords{Quantum Entanglement, Positive Maps, Entanglement Witness}
\begin{abstract}
An Entanglement Witness (EW) is an observable that permits the identification of Entangled States.
While the structure of separable (non entangled) states is very complicated and not yet completely understood, the qubit, \emph{i.e.} the simplest nontrivial quantum system, has a satisfactory geometrical description in terms of the Bloch sphere. In this paper, we show that this geometrical representation of qubits can be used to understand separable states and, in particular, to formulate a simple necessary and sufficient condition for EW.
\end{abstract}
\maketitle
\section*{Introduction}
Quantum entanglement \cite{EPR35}, \cite{S35} is responsible for the most fascinating quantum mechanical effects \cite{E91,BW92, BBCJP93, EJ96} and, in particular, it plays a crucial role in all quantum informational processes that have a significant speed up with respect to the analogous classical ones \cite{NC00}.
It follows immediately that the identification and manipulation of entangled states play a crucial role in this fascinating field of modern physics.
The search for valid criteria for the identification of entanglement has a long history \cite{HHHH09}, and simple operational criteria exist only for low dimensional cases \cite{P96,HHH96}.
In this paper we concentrate our attention on the set of \emph{entanglement witnesses} (EW) \cite{P96,HHH96,T00},
\emph{i.e.} on the set of linear functionals, defined on the set of states of a multipartite quantum system, that are
positive on the set of separable states.
The existence of EW can be proven by applying the Hahn-Banach theorem to the convex set of separable states and a compact convex set containing only one entangled state \cite{HHH96}.
In particular, as an immediate consequence, it is possible to prove that for every entangled state at least one EW exists that ``separates'' it from the set of separable states.
Moreover, noticing that EW are observables, the identification of entangled states is reduced to measure particular physical quantities.
Another important motivation to investigate EW is due to the famous Choi-Jamio{\l}kowski (CJ) isomorphism \cite{J74,C82}.
The latter, in fact, permits the construction of positive, but not completely positive (CP), maps starting from the set of bipartite EW and vice versa \cite{HHH96}.
We recall briefly that a map $\Lambda$ is positive if and only if it transforms positive operators into positive operators (a classical example of a positive map is transposition).
It is a relevant result that for low dimensional cases every positive map $\Lambda$ assumes the following form \cite{S63,W76}:
\begin{equation}
\Lambda=\Lambda^{(1)}_{CP}+\Lambda^{(2)}_{CP}\circ T
\end{equation}
where $\Lambda^{(1)}_{CP},\Lambda^{(2)}_{CP}$ are CP maps, \emph{i.e.} maps that assume the Kraus-Stinespring form \cite{KBDW83}, and $T$ is transposition. Unfortunately this result cannot be extended to higher dimensional systems \cite{HHH96}.
The aim of this paper is to introduce a simple criterion to test if an observable is an EW. In particular, the paper is organized as follows.
In the first section we give some definitions and lemmas used throughout the paper. In section \ref{sec.2}, building on the simple case of a qubit, we introduce the concept of \emph{separable tangent space}.
In sections \ref{sec.4} and \ref{sec.5} we present the main result of the paper; in particular, the first is dedicated to proving the necessary part of the main theorem, while the second concerns the sufficient condition.
Some concluding remarks are drawn in the final section.
\section{Some definitions}\label{sec.1}
In what follows we will indicate with $(\mathscr{H}^S,\langle\cdot|\cdot\rangle)$, or simply $\mathscr{H}^S$,
the Hilbert space associated to a quantum system $S$.
It is natural to associate to an $n-$dimensional Hilbert space $\mathscr{H}$ the linear space $\mathcal{L}(\mathscr{H})$ of endomorphisms of $\mathscr{H}$.
The space $\mathcal{L}(\mathscr{H})$ is naturally endowed with the inner product:
\begin{equation}
(a,b)=\mathrm{Tr}(a^\dag b).
\end{equation}
It is simple to prove that the couple $(\mathcal{L}(\mathscr{H}),(\cdot,\cdot))$
is an $n^2-$dimensional complex Hilbert space.
A particularly interesting subset of $\mathcal{L}(\mathscr{H})$ is
\begin{equation}
\mathcal{H}(\mathscr{H})=\{h\in\mathcal{L}(\mathscr{H}):h=h^\dag\}.
\end{equation}
This is the real linear space of Hermitian operators on $\mathscr{H}$. As is well known, the set of Hermitian operators, in quantum mechanics, corresponds to the set of all possible \emph{observables}.
A relevant subset of $\mathcal{H}(\mathscr{H})$ is:
\begin{equation}
\mathcal{H}^+(\mathscr{H})=\{a\in\mathcal{H}(\mathscr{H}): \langle\psi|a|\psi\rangle\geq0,\ \forall\psi\in \mathscr{H}\}.
\end{equation}
This is the set of positive operators on $\mathscr{H}$. In particular, the set
$\mathcal{S}(\mathscr{H})$ of positive trace one operators will be called the \emph{set of states}.
Naturally all the previous definitions can be extended to the Hilbert space $\mathscr{H}^{\Sigma_{i=1}^k(i)}=\bigotimes_{i=1}^k\mathscr{H}^{(i)}$
associated to the multipartite system
$S^{(\Sigma_{i=1}^k(i))}=\sum_{i=1}^k S^{(i)}$.
For typographical convenience, in what follows we put
\begin{equation}
\mathscr{H}^{(l\div m)}=\mathscr{H}^{\Sigma_{i=l}^m(i)}=\bigotimes_{i=l}^m\mathscr{H}^{(i)}
\end{equation}
We recall now the definitions of two important subsets of $\mathcal{S}(\mathscr{H}^{(1\div k)})$:
\begin{definition}
We say \cite{W98} that a state $\varrho\in\mathcal{S}(\mathscr{H}^{(1\div k)})$
is separable if and only if
\begin{equation}
\varrho=\sum_{i}p_{i}\bigotimes_{j=1}^k\varrho_i^{(j)},\qquad p_{i}\geq0,\quad \sum_{i}p_{i}=1,
\end{equation}
where $\forall j=1,\ldots,k$ and $\forall i$, $\varrho^{(j)}_i\in \mathcal{S}({\mathscr H}^{(j)})$.
\end{definition}
The set of separable states will be denoted by $\mathfrak{S}^{(1\div k)}$.
It is simple to note that $\mathfrak{S}^{(1\div k)}$ is the convex hull of the set $\mathfrak{P}^{(1\div k)}$ of pure product states:
\begin{equation}
\mathfrak{P}^{(1\div k)}=\left\{p=\bigotimes_{j=1}^k|e^{(j)}\rangle\langle e^{(j)}|\ \mbox{for some}\ |e^{(j)}\rangle\in{\mathscr H}^{(j)},\ \langle e^{(j)}|e^{(j)}\rangle=1,\ \forall j=1,\ldots,k \right\}.
\end{equation}
\begin{definition} We say \cite{W98} that a state $\varrho\in\mathcal{S}(\mathscr{H}^{(1\div k)})$
is entangled if and only if
\begin{equation}
\varrho\in \mathfrak{E}(\mathscr{H}^{(1\div k)})= \mathcal{S}(\mathscr{H}^{(1\div k)})\setminus \mathfrak{S}(\mathscr{H}^{(1\div k)}).
\end{equation}
\end{definition}
In this paper, we will focus our attention on the convex cone
generated by $\mathfrak{S}$. So we recall the following definition \cite{R96}:
\begin{definition}
A subset $C$ of a linear space $V$ is said to be a cone if and only if it satisfies the
following condition:
\begin{equation}
x\in C \Rightarrow px\in C, \forall p\geq 0.
\end{equation}
\end{definition}
Note that sometimes in the definition of a cone the condition $\forall p\geq 0$ is replaced by the weaker $\forall p> 0$ \cite{R96}.
Important subspaces of a Hilbert space $V$ are hyperplanes \cite{R96}:
\begin{definition}
Given an $n-$dimensional Hilbert space $V$, a subspace $\mathcal{I}$ is said to be a hyperplane if and only if $\mathrm{dim}\,\mathcal{I}=n-1$.
\end{definition}
We recall briefly that a hyperplane $\mathcal{I}\subset V$ can be defined
by one of its nonzero orthogonal vectors $a\in V$ \cite{R96}:
\begin{equation}
\mathcal{I}=\{x\in V|(x,a)=0\}.
\end{equation}
We will write $\mathcal{I}_a$ to denote the hyperplane defined by the vector $a$.
It is well known that $\mathcal{I}_{a}$ defines the following open half-spaces:
\begin{align}
\mathcal{I}^+_{a}&=\{x\in V: (x,a)>0\},\\
\mathcal{I}^-_{a}&=\{x\in V: (x,a)<0\},
\end{align}
and the corresponding closed half-spaces:
\begin{align}
\bar{\mathcal{I} }^+_{a}&=\{x\in V: (x,a)\geq0\},\\
\bar{ \mathcal{I}}^-_{a}&=\{x\in V: (x,a)\leq0\}.
\end{align}
We will use some simple results obtained by applying well-known theorems of linear algebra to hyperplanes and half-spaces. We collect these results here to make the treatment clearer.
Let us consider $W\subseteq V$, a linear subspace of $V$. Let $\mathcal{I}_a$ be a hyperplane of $V$ determined by (one of) its orthogonal vectors $a$. It is simple to prove, using the projection theorem, that the following lemmas hold:
\begin{lemma}\label{th.2.1}
The subspace $\mathcal{I}_a\cap W$ is determined by the following equality:
\begin{equation}
\mathcal{I}_a\cap W=\{x\in W: (x,a_W)=0\},
\end{equation}
where $a_W$ is the projection of $a$ on $W$.
\end{lemma}
\begin{lemma}\label{th.2.2}
The following equalities hold:
\begin{align}
\mathcal{I}^+_{a}\cap W&=\{x\in W: (x,a_W)>0\},\\
\mathcal{I}^-_{a}\cap W&=\{x\in W: (x,a_W)<0\},
\end{align}
and
\begin{align}
\bar{\mathcal{I} }^+_{a}\cap W&=\{x\in W: (x,a_W)\geq0\},\\
\bar{ \mathcal{I}}^-_{a}\cap W&=\{x\in W: (x,a_W)\leq0\}.
\end{align}
\end{lemma}
As was noted, a hyperplane $\mathcal{I}_a$ can be identified with its one-dimensional orthogonal subspace (generated by $a$).
In particular, in the case of $\mathcal{H}(\mathscr{H}^{(1\div k)})$, the vector $a$ is an observable that, under suitable conditions, can detect entangled states:
\begin{definition}
If $(a,p)\geq0$ for all $p\in\mathfrak{S}^{(1\div k)}$ and at least one $q\in\mathfrak{E}^{(1\div k)}$ exists such that $(a,q)<0$ then $a$ is called an Entanglement Witness (EW).
\end{definition}
\section{Some geometrical considerations}\label{sec.2}
To expose the main idea of the paper, let us start by considering the state space $\mathscr{H}^2$ of a single qubit. It is well known that, in the Bloch sphere representation, states can be represented by points inside a sphere in the $3-$dimensional real space generated by the Pauli matrices $\sigma_1$, $\sigma_2$ and $\sigma_3$,
where
\begin{equation}\label{eq.pm}
\sigma_1=\left(
\begin{array}{cc}
0 & 1 \\
1 & 0 \\
\end{array}
\right)
,\quad\sigma_2=\left(
\begin{array}{cc}
0 & -i \\
i & 0\\
\end{array}
\right)
,\quad\sigma_3=\left(
\begin{array}{cc}
1 & 0 \\
0 & -1 \\
\end{array}
\right).
\end{equation}
In fact it is simple to prove that every trace-one Hermitian matrix acting on $\mathds{C}^2$ can be written as:
\begin{equation}\label{eqb}
h= \frac{1}{2}\mathds{I}+\frac{1}{2}(h_1\sigma_1+h_2\sigma_2+h_3\sigma_3)= \frac{1}{2}\mathds{I}+\frac{1}{2}\vec{h}\cdot\vec{\sigma}.
\end{equation}
It is possible to show that if $\|\vec{h}\| = \sqrt{ h_1^2 + h_2^2+ h_3^2}\leq 1$, (\ref{eqb}) is a positive matrix and, in particular, (\ref{eqb}) is a pure state if and only if $\|\vec{h}\| = 1$.
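The correspondence between the Bloch vector and positivity/purity stated above is easy to check numerically. The following script is only an illustrative sketch of this check (it is not used anywhere in the argument).
\begin{verbatim}
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def bloch(h):
    # trace-one Hermitian matrix with Bloch vector h
    return 0.5*I2 + 0.5*(h[0]*s1 + h[1]*s2 + h[2]*s3)

rng = np.random.default_rng(0)
for _ in range(1000):
    h = rng.normal(size=3)
    eigs = np.linalg.eigvalsh(bloch(h))
    # positive if and only if ||h|| <= 1; pure exactly when ||h|| = 1
    assert (eigs.min() >= -1e-12) == (np.linalg.norm(h) <= 1 + 1e-12)
\end{verbatim}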
Let us find the tangent plane $\pi$ to the Bloch sphere through one pure state, for example $p=\frac{1}{2}\mathds{I}+\frac{1}{2}\sigma_3$. It is immediate to note that the elements of $\pi$ have the following form:
\begin{equation} \label{eq3}
\sigma_3+\alpha\sigma_1+\beta\sigma_2,\quad \alpha,\beta\in\mathds{R},
\end{equation}
(see figure \ref{fig.1}).
\begin{figure}
\caption{The tangent plane $\pi$ to the Bloch sphere at the pure state $p$.}
\label{fig.1}
\end{figure}
The bidimensional subspace $C$ associated to the affine plane (\ref{eq3}) is naturally $C=\mathrm{span}\{\sigma_1,\sigma_2\}$.
It is now immediate to note that the hyperplane tangent to $\mathcal{S}(\mathscr{H}^2)$ at $p$ is the subspace:
\begin{equation}\label{eq.p1}
\Pi=\mathrm{span}\{p,\sigma_1,\sigma_2\}.
\end{equation}
\begin{remark}
By simple calculations, we see that the hyperplane $\Pi$ through the point $p$ can be obtained by imposing that $\Pi$ contains the space $C(p)$, where:
\begin{equation} \label{eqC}
C(p)=\{i[p,h]:h=h^\dag\}.
\end{equation}
\end{remark}
\begin{remark}
We note that any vector orthogonal to $\Pi$ must necessarily have the form
\begin{equation}
q=\alpha\left(\frac{1}{2}\mathds{I}-\frac{1}{2}\sigma_3\right).
\end{equation}
Note that $q$ is a positive operator if $\alpha\geq0$ while it is a negative operator if $\alpha\leq0$.
\end{remark}
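Both remarks can be verified directly. The following numerical sketch (illustrative only) checks that every commutator $i[p,h]$ lies in $\mathrm{span}\{\sigma_1,\sigma_2\}$ and that $q$ is orthogonal to $\Pi=\mathrm{span}\{p,\sigma_1,\sigma_2\}$ in the trace inner product.
\begin{verbatim}
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

p = 0.5*I2 + 0.5*s3                       # the chosen pure state
span = np.array([s1.reshape(-1), s2.reshape(-1)]).T

# C(p) = { i[p,h] : h Hermitian } is contained in span{sigma_1, sigma_2}
for h in (I2, s1, s2, s3):
    c = (1j*(p @ h - h @ p)).reshape(-1)
    coeffs, *_ = np.linalg.lstsq(span, c, rcond=None)
    assert np.allclose(span @ coeffs, c)

# q is orthogonal to Pi = span{p, sigma_1, sigma_2}
q = 0.5*I2 - 0.5*s3
for x in (p, s1, s2):
    assert abs(np.trace(q.conj().T @ x)) < 1e-12
\end{verbatim}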
In the next sections, we will prove that these remarks can be suitably generalized in the multipartite case.
\section{Main result (necessary condition)}\label{sec.4}
To simplify the notation we will introduce the following linear spaces
\begin{align}
\tau^{(i)}=\mathrm{span}&\{\mathds{I}^{(1\div(i-1))}\otimes h^{(i)}\otimes\mathds{I}^{((i+1)\div k)}:\\
&h^{(i)}\in\mathcal{H}(\mathscr{H}^{(i)}),\mathrm{Tr}h^{(i)}=0\}
\end{align}
where $\mathds{I}^{(i\div j)}$ is the identity operator on $\mathscr{H}^{(i\div j)}$; obviously $\mathds{I}^{(i\div j)}=\bigotimes_{l=i}^j\mathds{I}^{(l)}$, where $\mathds{I}^{(l)}$ is the identity operator on $\mathscr{H}^{(l)}$.
Let us introduce the following linear space:
\begin{equation}
\tau=\mathrm{span}\left(\bigcup_{i=1}^k\tau^{(i)}\right).
\end{equation}
Noticing that for all $t^{(i)}\in\tau^{(i)}$ and $t^{(j)}\in\tau^{(j)}$, $i\neq j$, $(t^{(i)},t^{(j)})=0$ we can write
\begin{equation}
\tau =\bigoplus_{i=1}^k\tau^{(i)}.
\end{equation}
\begin{remark}
Note that $\tau$ is the set of the generators of local unitary transformations, i.e. the set of all unitary transformations $U$ on $\mathscr{H}^{(1\div k)}$ of the form
\begin{equation}
U^{(1\div k)}=\bigotimes_{i=1}^kU^{(i)},
\end{equation}
where $\forall i=1,\ldots,k$, $U^{(i)}$ is a unitary transformation on $\mathscr{H}^{(i)}$.
\end{remark}
The following set generalizes (\ref{eqC}).
\begin{definition}
Given an element $p\in\mathcal{H}(\mathscr{H}^{(1\div k)})$, we can define the following subspace:
\begin{equation}
\mathfrak{C}(p)=\{i[p,t]| t\in\tau\}.
\end{equation}
\end{definition}
If $p$ is a pure product state, we will call $\mathfrak{C}(p)$ the \emph{separable tangent space at the point $p$}.
Let us consider $k$ finite dimensional Hilbert spaces $\mathscr{H}^{(i)}$, $i=1,\ldots,k$ ($\mathrm{dim}\mathscr{H}^{(i)}=n$, $\forall i$).
Let us now prove the following theorem:
\begin{theorem} \label{mth}
Let us consider a vector $a\in\mathcal{H}(\mathscr{H}^{(1\div k)})$ and its corresponding hyperplane
$\mathcal{I}_a$. If $a$ is an entanglement witness then the following condition holds:
\begin{equation} \label{eq.nsc}
\mbox{if}\,\, p\in\mathfrak{P}^{(1\div k)}\cap\mathcal{I}_a \,\,\mbox{then}\,\, \mathfrak{C}(p)\subseteq\mathcal{I}_a.
\end{equation}
\end{theorem}
\begin{proof}
Let us consider a vector $a\in\mathcal{H}(\mathscr{H}^{(1\div k)})$ and its corresponding hyper plane
$\mathcal{I}_a$.
If $\mathfrak{P}\cap\mathcal{I}_a=\emptyset$ we have nothing to prove, so let us suppose that $\mathfrak{P}\cap\mathcal{I}_a\neq\emptyset$ i.e., a pure product state $p$ exists that lies in $\mathcal{I}_a$.
In particular let us put:
\begin{equation}
p=\bigotimes_{i=1}^k|e_1^{(i)}\rangle\langle e_1^{(i)}|,
\end{equation}
where $\forall i=1,\ldots,k$ $|e_1^{(i)}\rangle\in \mathscr{H}^{(i)}$. Naturally we can choose orthonormal bases $\mathcal{B}^{(i)}$ in $\mathscr{H}^{(i)}$ such that $|e_1^{(i)}\rangle \in\mathcal{B}^{(i)}$, in particular let us put $\mathcal{B}^{(i)}=\{|e_j^{(i)}\rangle\}_{j=1,\ldots,n}$.
Using this representation, $p$ is represented by an $n^k\times n^k$ Hermitian matrix:
\begin{equation}
p= \bigotimes_{i=1}^k p_1^{(i)},
\end{equation}
where $(p_1^{(i)})_{lm}=\delta_{1l}\delta_{1m}$.
Let us first introduce the following traceless Hermitian matrices $\{\sigma_1(lm),\sigma_2(lm),\sigma_3(lm)\}_{l<m=1,\ldots,n}$,
whose nonzero elements are defined as follows:
\begin{align}
({\sigma_1(lm)})_{ij}&=\delta_{li}\delta_{mj}+\delta_{lj}\delta_{mi},\\
{(\sigma_2(lm))}_{ij}&=i\delta_{li}\delta_{mj}-i\delta_{lj}\delta_{mi},\\
{(\sigma_3(lm))}_{ij}&=\delta_{li}\delta_{lj}-\delta_{mj}\delta_{mi}.
\end{align}
The notation emphasizes the fact that the matrices introduced generalize the Pauli matrices (\ref{eq.pm}).
Note that
\begin{equation}
\{\sigma_1(lm),\sigma_2(lm),\sigma_3(lm)\}_{l<m=1,\ldots,n},
\end{equation}
is a spanning set for the set of all Hermitian traceless matrices.
Using this representation, we note that
\begin{equation}
\bigotimes_{i=1}^{j-1}|e_1^{(i)}\rangle\langle e_1^{(i)}|\otimes{\sigma_1(1m)}\otimes\bigotimes_{i=j+1}^{k}|e_1^{(i)}\rangle\langle e_1^{(i)}|\in\mathfrak{C}(p),\label{eq.42}
\end{equation}
and
\begin{equation}
\bigotimes_{i=1}^{j-1}|e_1^{(i)}\rangle\langle e_1^{(i)}|\otimes{\sigma_2(1m)}\otimes\bigotimes_{i=j+1}^{k}|e_1^{(i)}\rangle\langle e_1^{(i)}|\in\mathfrak{C}(p)\label{eq.43}.
\end{equation}
for all $m=2,\ldots,n$ and $j=1,\ldots,k$.
Moreover, it is trivial to observe that the previous operators are a spanning set for $\mathfrak{C}(p)$.
Let us now prove that $\mathfrak{C}(p)$ is a subspace of $\mathcal{I}_a$; in particular, it is sufficient to prove that the elements (\ref{eq.42}) and (\ref{eq.43}) belong to $\mathcal{I}_a$. We start by proving that ${\sigma_1(12)}\otimes\bigotimes_{i=2}^{k}|e_1^{(i)}\rangle\langle e_1^{(i)}|\in \mathcal{I}_a$ and ${\sigma_2(12)}\otimes\bigotimes_{i=2}^{k}|e_1^{(i)}\rangle\langle e_1^{(i)}|\in \mathcal{I}_a$.
Let us consider the subspaces
\begin{align}
W_{12}^{(1)}&=\mathrm{span}\left\{|e_1^{(1)}\rangle,|e_2^{(1)}\rangle\right\},\\
\mathcal{W}_{12}^{(1)}&=\mathrm{span}\left\{|e_1^{(1)}\rangle\otimes\bigotimes_{i=2}^{k}
|e_1^{(i)}\rangle,|e_2^{(1)}\rangle\otimes\bigotimes_{i=2}^{k}|e_1^{(i)}\rangle \right\},
\end{align}
and the corresponding spaces $\mathcal{H}({W}_{12}^{(1)})$ and $\mathcal{H}(\mathcal{W}_{12}^{(1)})$.
Using lemmas \ref{th.2.1} and \ref{th.2.2}
applied to the subspace $\mathcal{H}(\mathcal{W}_{12}^{(1)})$, we obtain that
\begin{equation}
\mathcal{H}(\mathcal{W}_{12}^{(1)})\cap \bar{\mathcal{I}}_a^+=\{x\in \mathcal{H}(\mathcal{W}_{12}^{(1)}): (x,a_\mathcal{W})\geq0\},
\end{equation}
where $a_\mathcal{W}$ is the projection of $a$ on the subspace $\mathcal{H}(\mathcal{W}_{12}^{(1)})$.
Now we recall that $\mathfrak{S}^{(1\div k)}\subset \bar{\mathcal{I}}_a^+$. Immediately we have that
$\mathcal{H}(\mathcal{W}_{12}^{(1)})\cap\mathfrak{S}^{(1\div k)}\subset \mathcal{H}(\mathcal{W}_{12}^{(1)})\cap\bar{\mathcal{I}}_a^+$.
So we have that
$(x,a_\mathcal{W})\geq 0, \forall x\in\mathcal{H}(\mathcal{W}_{12}^{(1)})\cap\mathfrak{S}^{(1\div k)}$.
It is simple to prove that
\begin{equation}
\mathcal{H}(\mathcal{W}_{12}^{(1)})\cap\mathfrak{S}^{(1\div k)}= \left\{p^{(1)}\otimes\bigotimes_{i=2}^{k}|e_1^{(i)}\rangle\langle e_1^{(i)}| , p^{(1)}\in\mathcal{S}(W_{12}^{(1)})\right\}.
\end{equation}
This simple remark implies that $a_\mathcal{W}$ must be either a positive multiple of
$|e_2^{(1)}\rangle\langle e_2^{(1)}|\otimes\bigotimes_{i=2}^{k}|e_1^{(i)}\rangle\langle e_1^{(i)}|$ or $0$.
In both cases it is simple to prove that
\begin{align}
&\left(\tilde{\sigma}_1{(12)},a_\mathcal{W}\right)=0,\\
&\left(\tilde{\sigma}_2{(12)},a_\mathcal{W}\right)=0,
\end{align}
where
\begin{align}
\tilde{\sigma}_1{(12)}=&\sigma_1{(12)}\otimes\bigotimes_{i=2}^{k}|e_1^{(i)}\rangle\langle e_1^{(i)}|,\\
\tilde{\sigma}_2{(12)}=&\sigma_2{(12)}\otimes\bigotimes_{i=2}^{k}|e_1^{(i)}\rangle\langle e_1^{(i)}|.
\end{align}
Repeating the same reasoning for the spaces
$\mathcal{W}_{1m}^{(l)}$, $l=1,\ldots, k$ and $m=2,\ldots,n$,
we can complete the proof.
\end{proof}
Theorem \ref{mth} has an immediate geometrical interpretation. It is evident, following the proof of the theorem, that condition (\ref{eq.nsc}) states that if an Entanglement Witness $a$ is orthogonal to some pure product state
$p=\bigotimes_{i=1}^k|e_1^{(i)}\rangle\langle e_1^{(i)}|\in \mathcal{I}_a$, then $\mathcal{I}_a$ is tangent to all possible Bloch spheres $\mathcal{S}(\mathcal{W}^{(l)}_{12})$,
where
\begin{equation}
\mathcal{W}^{(l)}_{12}=\mathrm{span}\left\{|\psi_1^l\rangle,|\psi_2^l\rangle\right\}
\end{equation}
with
\begin{align}
|\psi_1^l\rangle&=\bigotimes_{j=1}^{l-1}|e^{(j)}_1\rangle\otimes|e^{(l)}_1\rangle\otimes\bigotimes_{j=l+1}^k|e^{(j)}_1\rangle\\
|\psi_2^l\rangle&=\bigotimes_{j=1}^{l-1}|e^{(j)}_1\rangle\otimes|e^{(l)}_2\rangle\otimes\bigotimes_{j=l+1}^k|e^{(j)}_1\rangle
\end{align}
and
$|e^{(l)}_1\rangle\langle e^{(l)}_1|\neq|e^{(l)}_2\rangle\langle e^{(l)}_2|$.
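The commutation relations behind (\ref{eq.42}) and (\ref{eq.43}) can be checked numerically for a single tensor factor; the following sketch is purely illustrative (here for $n=3$) and not part of the proof.
\begin{verbatim}
import numpy as np

def sigma(kind, l, m, n):
    # generalized Pauli matrices sigma_kind(lm) on C^n (1-based indices, l < m)
    M = np.zeros((n, n), dtype=complex)
    if kind == 1:
        M[l-1, m-1] = M[m-1, l-1] = 1
    elif kind == 2:
        M[l-1, m-1], M[m-1, l-1] = 1j, -1j
    else:
        M[l-1, l-1], M[m-1, m-1] = 1, -1
    return M

n = 3
p1 = np.zeros((n, n), dtype=complex)
p1[0, 0] = 1                                   # |e_1><e_1|
comm = lambda a, b: 1j*(a @ b - b @ a)
for m in range(2, n+1):
    # single-factor content of (eq.42) and (eq.43)
    assert np.allclose(comm(p1, sigma(2, 1, m, n)), -sigma(1, 1, m, n))
    assert np.allclose(comm(p1, sigma(1, 1, m, n)),  sigma(2, 1, m, n))
\end{verbatim}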
\section{Main result (sufficient condition)}\label{sec.5}
In the previous section we proved that if a hyperplane which defines an entanglement witness contains a pure product state $p$, then it must necessarily contain the separable tangent space $\mathfrak{C}(p)$.
In this section we will prove that this condition is not only necessary, but also sufficient.
\begin{theorem}\label{thsc}
Let us consider a vector $a\in\mathcal{H}(\mathscr{H}^{(1\div k)})$ and its corresponding hyperplane $\mathcal{I}_a$.
If $\mathcal{I}_a$ satisfies condition (\ref{eq.nsc})
then $a$ satisfies the following equation:
\begin{equation}\label{eq.nsc2}
(p_1,a)(p_2,a)\geq0 \label{eq.sp}
\end{equation}
for all $p_1,p_2\in\mathfrak{P}^{(1\div k)}$.
\end{theorem}
Note that equation (\ref{eq.sp}) implies that if there exists a pure product state $p_1$ (respectively $p_2$) in $\mathfrak{P}^{(1\div k)}$ such that $(p_1,a)<0$ (respectively $(p_2,a)>0$), then
$(p,a)\leq0$ (respectively $(p,a)\geq0$) for all $p\in\mathfrak{P}^{(1\div k)}$.
\begin{proof}
We will show inductively on the number of subsystems $k$ that theorem \ref{thsc} holds.
In particular we will prove that
\begin{enumerate}
\item theorem \ref{thsc} holds for $k=1$;
\item if theorem \ref{thsc} holds for $k-1$ then it holds for $k$.
\end{enumerate}
\textbf{Step 1} ($k=1$).
Note that in this case, obviously, $\mathfrak{P}$ is the set of all pure states and $\mathfrak{S}=\mathcal{S}(\mathscr{H})$.
Let us suppose that $\mathcal{I}_a$ satisfies condition (\ref{eq.nsc})
but not condition (\ref{eq.nsc2}).
This means that, at least, a couple of pure states, for example
$p_1=|e_1\rangle\langle e_1|$ and $p_2= |e_2\rangle\langle e_2|$, exist such that
\begin{equation}\label{eq.50}
(p_1,a)<0,\quad (p_2,a)>0.
\end{equation}
Without loss of generality, see lemmas \ref{th.2.1} and \ref{th.2.2}, we can restrict our attention to the bidimensional space $W$ generated by $|e_1\rangle$ and $|e_2\rangle$.
It is evident that $a_\mathcal{W}$, i.e. the projection of $a$ on the space $\mathcal{W}=\mathcal{H}(W)$, cannot be a definite operator, so we can choose a representation such that
\begin{equation}
a_\mathcal{W}=\alpha\frac{1}{2} (\mathds{I}+\beta\sigma_3),
\end{equation}
where $\alpha,\beta\in\mathds{R}$ and $|\beta|>1$, $\alpha\neq0$.
It is simple to show that
\begin{align}
p_3&=\frac{1}{2} \mathds{I}+\frac{1}{2}\delta \sigma_1-\frac{1}{2\beta}\sigma_3\in\mathcal{I}_a\cap\mathfrak{P},\\
p_4&=\frac{1}{2} \mathds{I}+\frac{1}{2}\delta \sigma_2-\frac{1}{2\beta}\sigma_3\in\mathcal{I}_a\cap\mathfrak{P},
\end{align}
where $\delta=\sqrt{1-\beta^{-2}}$.
Using the hypothesis (\ref{eq.nsc}),
we obtain immediately that $\{\mathds{I}, \sigma_1, \sigma_2, \sigma_3\}\subset\mathcal{I}_a $. It is evident that this last implies
\begin{equation}
(p_1,a)=0\: \mbox{and} \: (p_2,a)=0,
\end{equation}
against the hypothesis (\ref{eq.50}), and so theorem \ref{thsc} holds for $k=1$.
\begin{remark}
Note that for $k=1$, if there exists a pure state $p\in \mathcal{S}(\mathscr{H})$ such that $(p,a)>0$, then condition (\ref{eq.nsc}) represents a necessary and sufficient positivity condition for $a$. This supports the common idea that an Entanglement Witness generalizes the concept of a positive operator.
\end{remark}
\begin{remark}\label{rem.5.2}
By direct calculations, it is possible to show, in the case of $k=1$, that if $\mathcal{I}_a$ satisfies (\ref{eq.nsc}) and contains the projectors associated to two linearly independent vectors $|e_1\rangle,|e_2\rangle\in\mathscr{H}$, then, for all linear combinations $|g\rangle=\alpha_1|e_1\rangle+\alpha_2|e_2\rangle$, $\alpha_1,\alpha_2\in \mathds{C}$, $\mathcal{I}_a$ contains the projectors
\begin{equation}
g=|g\rangle\langle g|.
\end{equation}
Obviously, this result can be generalized to multipartite systems in the following way.
If $\mathcal{I}_a$ satisfies (\ref{eq.nsc}) and contains the projectors associated to the vectors $|e_1^{(1)}\rangle\otimes|f\rangle$ and $|e_2^{(1)}\rangle\otimes|f\rangle$, where $|f\rangle$ is a product vector in $\mathscr{H}^{(2\div k)}$, and
$|e_1^{(1)}\rangle,|e_2^{(1)}\rangle\in\mathscr{H}^{(1)}$ are linearly independent, then, for all linear combinations $|g\rangle=\alpha_1|e_1^{(1)}\rangle+\alpha_2|e_2^{(1)}\rangle$, $\alpha_1,\alpha_2\in \mathds{C}$, $\mathcal{I}_a$ contains the projectors
\begin{equation}
g'=|g\rangle\langle g|\otimes|f\rangle\langle f|.
\end{equation}
\end{remark}
\textbf{Step 2.}
Now suppose that theorem \ref{thsc} holds for $k-1$ component systems. We will now show that it holds for $k$ component systems.
In fact, let us suppose that a hyperplane $\mathcal{I}_a$ exists such that
\begin{equation}
\forall \,\, p\in\mathfrak{P}^{(1\div k)}\cap\mathcal{I}_a \,\,\Rightarrow\,\, \mathfrak{C}(p)\subseteq\mathcal{I}_a
\end{equation}
but $a$ does not satisfy condition (\ref{eq.nsc2}).
This means that at least two pure product states exist, for example
\begin{align}
p_1=&\bigotimes_{i=1}^{k}|e_1^{(i)}\rangle\langle e_1^{(i)}|\\
p_2=&\bigotimes_{i=1}^{k}|e_2^{(i)}\rangle\langle e_2^{(i)}|
\end{align}
such that
\begin{equation} \label{h1}
(a,p_1)<0;\,\,\,(a,p_2)>0.
\end{equation}
Let us consider the spaces
\begin{equation}
W_{12}^{(i)}=\mathrm{span}\{|e_1^{(i)}\rangle,|e_2^{(i)}\rangle\}.
\end{equation}
Now two cases arise:
Case 1)
\begin{equation} \label{eq.1}
\mathrm{dim}W_{12}^{(i)}=1,
\end{equation}
for at least one $i$;
Case 2)
\begin{equation} \label{eq.2}
\mathrm{dim}W_{12}^{(i)}=2
\end{equation}
for all $i$.
In the first case, obviously, the situation can be reduced to the $k-1$ component systems case.
In fact, by equation (\ref{eq.1}) at least one $i$ exists, for example $i=1$, such that $|e_1^{(1)}\rangle=|e_2^{(1)}\rangle$. In this case, let us consider the space
\begin{equation}
W^{(1\div k)}=\bigotimes_{i=1}^k W_{12}^{(i)}.
\end{equation}
The corresponding space $\mathcal{H}(W^{(1\div k)})$ contains only elements $h$ of the form
\begin{equation}
h=|e_1^{(1)}\rangle\langle e_1^{(1)}|\otimes h',
\end{equation}
where $h'\in \mathcal{H}(W^{(2\div k)})$, with $W^{(2\div k)}=\bigotimes_{i=2}^k W_{12}^{(i)}$.
In particular, this simple remark implies
that, given two elements $a_1,a_2\in \mathcal{H}(W^{(1\div k)})$, we have
\begin{align}
a_1&=|e_1^{(1)}\rangle\langle e_1^{(1)}|\otimes a_1',\\
a_2&=|e_1^{(1)}\rangle\langle e_1^{(1)}|\otimes a_2',
\end{align}
and
\begin{equation}
(a_1,a_2)=(a_1',a_2'),
\end{equation}
where the inner product on the lhs (rhs) of the previous equation is to be understood in the Hilbert space $\mathcal{H}(\mathscr{H}^{(1\div k)})$ (respectively, $\mathcal{H}(\mathscr{H}^{(2\div k)})$).
The latter gives
\begin{equation}
(a'_W,p'_1)<0;\,\,\,(a'_W,p'_2)>0,
\end{equation}
where
\begin{align}
p'_1=&\bigotimes_{i=2}^{k}|e_1^{(i)}\rangle\langle e_1^{(i)}|,\\
p'_2=&\bigotimes_{i=2}^{k}|e_2^{(i)}\rangle\langle e_2^{(i)}|,
\end{align}
and
\begin{equation}
a_W=|e_1^{(1)}\rangle\langle e_1^{(1)}|\otimes a'_W.
\end{equation}
($a_W$ is the projection of $a$ on the space $\mathcal{H}(W^{(1\div k)})$).
Let us consider the $(k-1)$-partite system $W^{(2\div k)}$.
It is simple to verify that $\mathcal{I}_{a'_W}\ (\subseteq\mathcal{H}(W^{(2\div k)}))$ satisfies the following condition:
\begin{equation}
\mbox{if}\,\, p\in\mathfrak{P}^{(2\div k)}\cap\mathcal{I}_{a'_W}\,\,\mbox{then}\,\, \mathfrak{C}(p)\cap \mathcal{H}(W^{(2\div k)}) \subseteq\mathcal{I}_{a'_W}.
\end{equation}
We note that:
\begin{align}
&\mathfrak{P}^{(2\div k)}\cap\mathcal{I}_{a'_W}\subseteq\mathfrak{P}^{(2\div k)}\cap\mathcal{H}(W^{(2\div k)})=\\
&=\left\{p=\bigotimes_{j=2}^k|e^{(j)}\rangle\langle e^{(j)}| \mbox{for some}\,\,|e^{(j)}\rangle\in{W^{(j)}_{12}},\forall j=2,\ldots,k \right\},
\end{align}
and
\begin{equation}
\mathfrak{C}(p)\cap \mathcal{H}(W^{(2\div k)})=\{i[p,t]| t\in\tau^{(2\div k)}\cap\mathcal{H}(W^{(2\div k)})\}.
\end{equation}
Using the inductive hypothesis, we obtain immediately a contradiction.\\
Case 2).
By equation (\ref{eq.2}) we have $|e_1^{(i)}\rangle\neq|e_2^{(i)}\rangle$ for all $i$.
In this case, the space $W^{(1\div k)}$ is a $2^k$-dimensional Hilbert space. Let us consider the following
two pure product states in $\mathcal{H}(W^{(1\div k)})$:
\begin{align}
p_3&=|e_1^{(1)}\rangle\langle e_1^{(1)}|\otimes\bigotimes_{i=2}^k|e_2^{(i)}\rangle\langle e_2^{(i)}|,\\
p_4&=|e_2^{(1)}\rangle\langle e_2^{(1)}|\otimes\bigotimes_{i=2}^k|e_1^{(i)}\rangle\langle e_1^{(i)}|.
\end{align}
It is simple to show that
\begin{equation}
(p_3,a_W)=0\:\mbox{and}\: \quad (p_4,a_W)=0.
\end{equation}
In fact if $(p_3,a_W)>0$ ($(p_3,a_W)<0$), then by an argument similar to that of case 1) applied to the states $p_1$ and $p_3$ (respectively $p_2$ and $p_3$) we immediately have a contradiction. Similar reasoning holds for $p_4$.
Let us now consider the state $|g^{(1)} \rangle\langle g^{(1)}|$ where $|g^{(1)}\rangle \in\mathscr{H}^{(1)}$ is a non trivial linear combination of $|e_1^{(1)}\rangle$ and $|e_2^{(1)}\rangle$:
\begin{equation}
|g^{(1)} \rangle=\alpha_1|e_1^{(1)}\rangle+\alpha_2|e_2^{(1)}\rangle,
\end{equation}
$\alpha_1,\alpha_2\neq0$, and the following multipartite states
\begin{align}
p_5=&|g^{(1)} \rangle\langle g^{(1)}|\otimes\bigotimes_{i=2}^k|e_1^{(i)}\rangle\langle e_1^{(i)}|,\\
p_6=&|g^{(1)} \rangle\langle g^{(1)}|\otimes\bigotimes_{i=2}^k|e_2^{(i)}\rangle\langle e_2^{(i)}|.
\end{align}
By arguments similar to those of case 1, we can exclude $(a,p_5)>0$ and $(a,p_6)<0$.
Now let us suppose that $(a,p_5)<0$ and $(a,p_6)>0$ hold simultaneously. We have an immediate contradiction using the same arguments as in case 1.
So at least one of the following possibilities must hold:
\begin{enumerate}
\item $(a,p_5)=0$;
\item $(a,p_6)=0$.
\end{enumerate}
The first possibility, using the fact that $(a,p_4)=0$ and the hypothesis (\ref{eq.nsc}), implies that $(a,p_1)=0$ (see remark \ref{rem.5.2}), against the hypothesis (\ref{h1}).
The second case, using the fact that $(a,p_3)=0$ and the hypothesis (\ref{eq.nsc}), implies that $(a,p_2)=0$, against the hypothesis (\ref{h1}).
So we have a contradiction and then we can conclude that theorem \ref{thsc} holds for $k$ component systems. This completes the proof.
\end{proof}
\section{Concluding remarks}\label{sec.6}
In the previous sections we gave the necessary tools to formulate a necessary and sufficient condition for an Entanglement Witness. In particular, using theorem \ref{mth} and theorem \ref{thsc}, we immediately obtain the following theorem.
\begin{theorem}\label{thfi}
Let us consider a vector $a\in\mathcal{H}(\mathscr{H}^{(1\div k)})$ and its corresponding hyperplane $\mathcal{I}_a$.
Then $a$ is an Entanglement Witness if and only if the following conditions hold:
\begin{enumerate}
\item $\exists p\in\mathfrak{P}^{(1\div k)}: (p,a)>0$;
\item $\mbox{if}\,\, p\in\mathfrak{P}^{(1\div k)}\cap\mathcal{I}_a \,\,\mbox{then}\,\, \mathfrak{C}(p)\subseteq\mathcal{I}_a$;
\item $a\notin\mathcal{H}^+(\mathscr{H}^{(1\div k)})$.
\end{enumerate}
\end{theorem}
Note that conditions 1 and 2 are necessary and sufficient for $a$ to be positive on the set of separable states, while condition 3 ensures that at least one entangled state exists that is detected by $a$.
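As an elementary numerical illustration of conditions 1 and 3 (condition 2 involves the set $\mathfrak{P}^{(1\div k)}\cap\mathcal{I}_a$ and is not tested here), one may take the standard two-qubit witness $a=\frac{1}{2}\mathds{I}-|\Phi^+\rangle\langle\Phi^+|$, a well-known example that is not taken from the construction above. The following sketch also samples its positivity on pure product states; it is illustrative only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
def ket(d):
    v = rng.normal(size=d) + 1j*rng.normal(size=d)
    return v / np.linalg.norm(v)

def expval(op, vec):
    return np.real(np.vdot(vec, op @ vec))

phi = (np.kron([1, 0], [1, 0]) + np.kron([0, 1], [0, 1])) / np.sqrt(2)
a = 0.5*np.eye(4) - np.outer(phi, phi.conj())       # candidate witness

vals = [expval(a, np.kron(ket(2), ket(2))) for _ in range(2000)]
assert min(vals) >= -1e-12                 # (a,p) >= 0 on sampled product states
assert max(vals) > 0                       # condition 1
assert np.linalg.eigvalsh(a).min() < 0     # condition 3: a is not positive
assert expval(a, phi) < 0                  # an entangled state detected by a
\end{verbatim}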
Moreover, as it was noted in the introduction, this theorem can be translated into a criterion for positive maps via the CJ-isomorphism.
Let us consider in fact, the following entangled bipartite vector:
\begin{equation}
|\alpha \rangle=\sum_{i=1}^n|e_i^{(1)} \rangle|e_i^{(2)} \rangle
\end{equation}
where $|e_i^{(1)} \rangle\in\mathscr{H}^{(1)}$, $|e_i^{(2)} \rangle\in\mathscr{H}^{(2)}$ and $\langle e_j^{(l)} |e_i^{(l)} \rangle=\delta_{ij}$, ($l=1,2$).
Given a map $\Lambda:\mathcal{L}(\mathscr{H}^{(1)})\rightarrow \mathcal{L}(\mathscr{H}^{(1)})$, from the CJ-isomorphism, it is simple to show that
\begin{theorem}
The map $\Lambda$ is a positive, but not completely positive, map if and only if
$
\Lambda\otimes \mathbf{I}^{(2)}(|\alpha \rangle\langle \alpha|)
$, where $\mathbf{I}^{(2)}$ is the identity map on $\mathcal{L}(\mathscr{H}^{(2)})$,
is an entanglement witness on $\mathcal{H}(\mathscr{H}^{(1)}\otimes\mathscr{H}^{(2)})$.
\end{theorem}
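As a concrete illustration of this statement (a sketch, not part of the argument), take $\Lambda=T$ (transposition) on $\mathcal{L}(\mathds{C}^2)$: then $\Lambda\otimes\mathbf{I}^{(2)}(|\alpha\rangle\langle\alpha|)$ is the swap operator, which is indeed an entanglement witness. The script below checks this numerically.
\begin{verbatim}
import numpy as np

d = 2
e = np.eye(d)
alpha = sum(np.kron(e[i], e[i]) for i in range(d))          # |alpha>
A = np.outer(alpha, alpha.conj()).reshape(d, d, d, d)
W = A.transpose(2, 1, 0, 3).reshape(d*d, d*d)               # (T (x) I)(|alpha><alpha|)

swap = np.zeros((d*d, d*d))
for i in range(d):
    for j in range(d):
        swap[i*d + j, j*d + i] = 1
assert np.allclose(W, swap)

rng = np.random.default_rng(2)
def ket(n):
    v = rng.normal(size=n) + 1j*rng.normal(size=n)
    return v / np.linalg.norm(v)

for _ in range(1000):                        # non-negative on product states
    prod = np.kron(ket(d), ket(d))
    assert np.real(np.vdot(prod, W @ prod)) >= -1e-12
singlet = (np.kron([1, 0], [0, 1]) - np.kron([0, 1], [1, 0])) / np.sqrt(2)
assert np.real(np.vdot(singlet, W @ singlet)) < 0   # detects the singlet
\end{verbatim}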
We conclude by observing that we can improve the criterion introduced in this paper. In fact, in theorems \ref{mth} and \ref{thsc} we can limit our attention to a linearly independent set of pure product states in $\mathfrak{P}^{(1\div k)}\cap\mathcal{I}_a$.
This last is an immediate consequence of the following trivial remark:
\begin{align}
\mbox{if}& \,p=\sum_{i=1}^l\alpha_i p_i, \alpha_i\in \mathds{R}, p_i\in\mathfrak{P}^{(1\div k)}\cap\mathcal{I}_a \\
&\mbox{then}\,\mathfrak{C}(p)\subseteq \mathfrak{C}(p_1)+\ldots+\mathfrak{C}(p_l).
\end{align}
Finally we observe that the set $\mathfrak{P}^{(1\div k)}\cap\mathcal{I}_a$ was widely studied in \cite{LKCH00} in connection with the optimization of EW. There, the authors gave some useful methods to determine this set, and indeed theorem \ref{thfi} gives a practical method to check whether an observable is an EW.
\end{document} |
\begin{document}
\maketitle
\begin{abstract}
We will show a rigidity of a K\"ahler potential of the Poincar\'e metric with a constant length differential.
\end{abstract}
\section{Introduction}
From the fundamental result of Donnelly-Fefferman~\cite{DF}, the vanishing of the space of $L^2$ harmonic $(p,q)$ forms has been an important research theme in the theory of complex domains. Since M.~Gromov (\cite{Gromov}, see also \cite{Donnelly1994}) suggested the concept of the K\"ahler hyperbolicity and gave a connection to the vanishing theorem, there have been many studies on the K\"ahler hyperbolicity of the Bergman metric, which is a fundamental K\"ahler structure of bounded pseudoconvex domains. The K\"ahler structure $\omega$ is \emph{K\"ahler hyperbolic} if there is a global $1$-form $\eta$ with $d\eta=\omega$ and $\sup\norm{\eta}_\omega<\infty$.
In \cite{Donnelly1997}, H. Donnelly showed the K\"ahler hyperbolicity of the Bergman metric on some class of weakly pseudoconvex domains. For a bounded homogeneous domain $D$ in $\mathbb{C}^n$ and its Bergman metric $\omega_D$ especially, he used a classical result of Gindikin~\cite{Gindikin} to show that $\sup\norm{d\log K_D}_{\omega_D}<\infty$. Here $K_D$ is the Bergman kernel function of $D$, so $\log K_D$ is a canonical potential of $\omega_D$.
In their paper \cite{Kai-Ohsawa}, S.~Kai and T.~Ohsawa gave another approach. They proved that every bounded homogeneous domain has a K\"ahler potential of the Bergman metric whose differential has a constant length.
\begin{theorem}[Kai-Ohsawa \cite{Kai-Ohsawa}]\label{thm:KO1}
For a bounded homogeneous domain $D$ in $\mathbb{C}^n$, there exists a positive real valued function $\varphi$ on $D$ such that $\log\varphi$ is a K\"ahler potential of the Bergman metric $\omega_D$ and $\norm{d\log\varphi}_{\omega_D}$ is constant.
\end{theorem}
It can be obtained by the facts that each homogeneous domain is biholomorphic to a Siegel domain (see \cite{VGP}) and a homogeneous Siegel domain is affine homogeneous (see \cite{KMO}).
More precisely, let us consider a bounded homogeneous domain $D$ in $\mathbb{C}^n$ and a biholomorphism $F:D\to S$ for a Siegel domain $S$. For the Bergman kernel function $K_S$ of $S$ which is a canonical potential of the Bergman metric $\omega_S$, it is easy to show that $d\log K_S$ has a constant length with respect to $\omega_S$ from the affine homogeneity of $S$ (the group of affine holomorphic automorphisms acts transitively on $S$). Since $\log K_S$ is a K\"ahler potential of $\omega_S$, the transformation formula of the Bergman kernel implies that the pullback $F^*\log K_S=\log K_S\circ F$ is also a K\"ahler potential of $\omega_D$. Using the fact that $F:(D,\omega_D)\to(S,\omega_S)$ is an isometry, we have $\norm{d(F^*\log K_S)}_{\omega_D}=\norm{d\log K_S}_{\omega_S}\circ F$. As a function $\varphi$ in Theorem~\ref{thm:KO1}, we can choose the pullback $K_S\circ F$ of the Bergman kernel function of the Siegel domain.
At this juncture, it is natural to ask:
\begin{quote}
\textit{If there is a K\"ahler potential $\log\varphi$ with a constant $\norm{d\log\varphi}_{\omega_D}$, is it always obtained by the pullback of the Bergman kernel function of the Siegel domain?}
\end{quote}
The aim of this paper is to discuss this question in the $1$-dimensional case.
The only bounded homogeneous domain in $\mathbb{C}$, up to biholomorphic equivalence, is the unit disc $\Delta=\{z\in\mathbb{C}:\abs{z}<1\}$, and the $1$-dimensional counterpart of the Bergman metric, namely the holomorphically invariant hermitian structure, is the Poincar\'e metric. Hence the following main theorem gives a positive answer to the question.
\begin{theorem}\label{thm:main thm rough}
Let $\omega_\Delta$ be the Poincar\'e metric of the unit disc $\Delta$. Suppose that there exists a positive real valued function $\varphi:\Delta\to\mathbb{R}$ such that $\log\varphi$ is a K\"ahler potential of the Poincar\'e metric and $\norm{d\log\varphi}_{\omega_\Delta}$ is constant on $\Delta$. Then $\varphi$ is the pullback of the canonical potential on the half-plane $\mathbf{H}=\{z\in\mathbb{C}: \mathrm{Re}\, z<0\}$.
\end{theorem}
Note that the $1$-dimensional Siegel domain is just the half-plane. We will introduce the Poincar\'e metric and related notions in Section~\ref{sec:2}. As an application of the main theorem, we can characterize the half-plane by the canonical potential.
\begin{corollary}\label{cor:main cor rough}
Let $D$ be a simply connected, proper domain in $\mathbb{C}$ with a Poincar\'e metric $\omega_D=i\lambda dz \wedge d\bar z$. If $\norm{d\log \lambda}_{\omega_D}$ is constant on $D$, then $D$ is affine equivalent to the half-plane $\mathbf{H}=\{z\in\mathbb{C}: \mathrm{Re}\, z<0\}$.
\end{corollary}
In Section~\ref{sec:2}, we will introduce notions and a concrete version of the main theorem. Then we will study the existence of a nowhere vanishing complete holomorphic vector field which is tangent to a potential whose differential is of constant length (Section~\ref{sec:3}). Using relations between complete holomorphic vector fields and model potentials in Section~\ref{sec:4}, we will prove the theorems.
\section{Background materials}\label{sec:2}
Let $X$ be a Riemann surface. The Poincar\'e metric of $X$ is a complete hermitian metric with constant Gaussian curvature $-4$. The Poincar\'e metric exists on $X$ if and only if $X$ is a quotient of the unit disc. If $X$ is covered by $\Delta$, the Poincar\'e metric can be induced by the covering map $\pi:\Delta\to X$ and it is uniquely determined. Throughout this paper, the K\"ahler form of the Poincar\'e metric of $X$, denoted by $\omega_X$, stands for the metric also. When $\omega_X=i\lambda dz\wedge d\bar z$ in a local holomorphic coordinate function $z$, the curvature can be written as
\begin{equation*}
\kappa=-\frac{2}{\lambda}\spd{}{z}{\bar z} \log \lambda \;.
\end{equation*}
So the curvature condition $\kappa\equiv -4$ implies that
\begin{equation*}
\spd{}{z}{\bar z} \log \lambda = 2\lambda \;,
\end{equation*}
equivalently
\begin{equation*}
dd^c\log\lambda = 2\omega_X \;,
\end{equation*}
where $d^c=\frac{i}{2}(\overline\partial-\partial)$.
That means the function $\frac{1}{2}\log\lambda$ is a local K\"ahler potential of $\omega_X$. Any other local potential of $\omega_X$ is always of the form $\frac{1}{2}\log\lambda+\log\abs{f}^2$ where $f$ is a local holomorphic function on the domain of $z$.
We call $\frac{1}{2}\log\lambda$ the \emph{canonical potential} with respect to the coordinate function $z$. For a domain $D$ in $\mathbb{C}$, the canonical potential of $D$ means the canonical potential with respect to the standard coordinate function of $\mathbb{C}$.
Let us consider the Poincar\'e metric $\omega_\Delta$ of the unit disc $\Delta$:
\begin{equation*}
\omega_\Delta=i\frac{1}{\paren{1-\abs{z}^2}^2}dz\wedge d\bar z = i\lambda_\Delta dz\wedge d\bar z \;.
\end{equation*}
The canonical potential $\lambda_\Delta$ satisfies
\begin{equation*}
\norm{d\log\lambda_\Delta}_{\omega_\Delta}^2=\norm{\pd{\log\lambda_\Delta}{z}dz+\pd{\log\lambda_\Delta}{\bar z}d\bar z}_{\omega_\Delta}^2
=\pd{\log\lambda_\Delta}{z}\pd{\log\lambda_\Delta}{\bar z}\frac{1}{\lambda_\Delta}=4\abs{z}^2 \;,
\end{equation*}
so does not have a constant length.
In the same way as Kai-Ohsawa~\cite{Kai-Ohsawa}, we can get a model for $\varphi$ in Theorem~\ref{thm:KO1} for the unit disc,
\begin{equation}\label{eqn:model}
\varphi_\theta(z)=\frac{\abs{1+e^{i\theta}z}^4}{\paren{1-\abs{z}^2}^2} \quad\text{for $\theta\in\mathbb{R}$}
\end{equation}
as a pullback of the canonical potential $\lambda_\mathbf{H}=1/\abs{\mathrm{Re}\, w}^2$ on the left-half plane $\mathbf{H}=\{w:\mathrm{Re}\, w<0\}$ by the Cayley transforms (see \eqref{eqn:CT} for instance). The term $\theta$ depends on the choice of the Cayley transform. Since $\log\varphi_\theta=\log\lambda_\Delta+\log\abs{1+e^{i\theta}z}^4$, the function $\frac{1}{2}\log\varphi_\theta$ is a K\"ahler potential. Moreover
\begin{equation*}
\norm{d\log\varphi_\theta}_{\omega_\Delta}^2 \equiv 4 \;.
\end{equation*}
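The two norm computations above can be re-checked symbolically by treating $z$ and $\bar z$ as independent variables; the following script is only an illustrative sketch.
\begin{verbatim}
import sympy as sp

z, zb = sp.symbols('z zbar')
u = sp.symbols('u', nonzero=True)                  # stands for e^{i theta}
lam = 1 / (1 - z*zb)**2                            # lambda_Delta

def sqnorm(logf):
    # ||d log f||^2 = (log f)_z (log f)_zbar / lambda_Delta
    return sp.cancel(sp.together(sp.diff(logf, z)*sp.diff(logf, zb)/lam))

assert sp.simplify(sqnorm(sp.log(lam)) - 4*z*zb) == 0      # = 4|z|^2

phi_theta = ((1 + u*z)*(1 + zb/u))**2 / (1 - z*zb)**2
assert sp.simplify(sqnorm(sp.log(phi_theta)) - 4) == 0     # constant 4
\end{verbatim}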
At this moment, we introduce a significant result of Kai-Ohsawa.
\begin{theorem}[Kai-Ohsawa \cite{Kai-Ohsawa}]\label{thm:KO2}
For a bounded homogeneous domain $D$ in $\mathbb{C}^n$, if there is a K\"ahler potential $\log\psi$ of the Bergman metric $\omega_D$ with constant $\norm{d\log\psi}_{\omega_D}$, then $\norm{d\log\psi}_{\omega_D}=\norm{d\log\varphi}_{\omega_D}$, where $\varphi$ is as in Theorem~\ref{thm:KO1}.
\end{theorem}
Suppose that a positive real valued function $\varphi$ on $\Delta$ satisfies $dd^c\log\varphi=2\omega_\Delta$ and $\norm{d\log\varphi}_{\omega_\Delta}^2\equiv c$ for some constant $c$. Theorem~\ref{thm:KO2} implies that $c$ must be $4$. Therefore, we can rewrite Theorem~\ref{thm:main thm rough} as
\begin{theorem}\label{thm:main thm}
Suppose that there exists a function $\varphi:\Delta\to\mathbb{R}$ satisfying
\begin{equation}\label{eqn:basic condition}
dd^c\log\varphi=2\omega_\Delta \quad\text{and}\quad
\norm{d\log\varphi}_{\omega_\Delta}^2\equiv4
\;.
\end{equation}
Then $\varphi=r\varphi_\theta$ as in \eqref{eqn:model} for some $r>0$ and $\theta\in\mathbb{R}$.
\end{theorem}
Corollary~\ref{cor:main cor rough} can also be written as
\begin{corollary}\label{cor:main cor}
Let $D$ be a simply connected, proper domain in $\mathbb{C}$ with a Poincar\'e metric $\omega_D=i\lambda dz \wedge d\bar z$. If $\norm{d\log \lambda}_{\omega_D}^2\equiv 4$, then $D$ is affine equivalent to the half-plane $\mathbf{H}=\{z\in\mathbb{C}: \mathrm{Re}\, z<0\}$.
\end{corollary}
\section{Existence of nowhere vanishing complete holomorphic vector field}\label{sec:3}
In this section, we will study the existence of a complete holomorphic tangent vector field on a Riemann surface $X$ which admits a K\"ahler potential of the Poincar\'e metric with a constant length differential.
By a holomorphic tangent vector field of a Riemann surface $X$, we mean a holomorphic section $\mathcal{W}$ of the holomorphic tangent bundle $T^{1,0}X$. If the corresponding real tangent vector field $\mathrm{Re}\, \mathcal{W}=\mathcal{W}+\overline{\mathcal{W}}$ is complete, we also say $\mathcal{W}$ is complete. Thus a complete holomorphic tangent vector field generates a $1$-parameter family of holomorphic transformations.
More precisely, we will show the following:
\begin{theorem}\label{thm:existence}
Let $X$ be a Riemann surface with the Poincar\'e metric $\omega_X$. If there is a function $\varphi:X\to\mathbb{R}$ with
\begin{equation}\label{eqn:condition on surface}
dd^c\log\varphi = 2\omega_X
\quad\text{and}\quad
\norm{d\log\varphi}_{\omega_X}^2\equiv 4
\end{equation}
then there is a nowhere vanishing complete holomorphic vector field $\mathcal{W}$ such that $(\mathrm{Re}\, \mathcal{W})\varphi\equiv 0$.
\end{theorem}
\begin{proof}
Take a local holomorphic coordinate function $z$ and let $\omega_X=i\lambda dz\wedge d\bar z$. The equations~\eqref{eqn:condition on surface} can be written as
\begin{equation*}
\paren{\log\varphi}_{z\bar z} = 2\lambda
\quad\text{and}\quad
\paren{\log\varphi}_z \paren{\log\varphi}_{\bar z} = 4\lambda \;.
\end{equation*}
Here, $\paren{\log\varphi}_z=\pd{}{z}\log\varphi$, $\paren{\log\varphi}_{\bar z}=\pd{}{\bar z}\log\varphi$ and $\paren{\log\varphi}_{z\bar z}=\spd{}{z}{\bar z}\log\varphi$. This implies that
\begin{align*}
\paren{\varphi^{-1/2}}_{z}
&=
\pd{}{z}\varphi^{-1/2}
=
-\frac{1}{2}\varphi^{-1/2} \paren{\log\varphi}_{z} \; ;
\\
\paren{\varphi^{-1/2}}_{z\bar z}
&=
\spd{}{z}{\bar z}\varphi^{-1/2}
=
-\frac{1}{2}\varphi^{-1/2} \paren{\log\varphi}_{z\bar z}
+\frac{1}{4}\varphi^{-1/2}\paren{\log\varphi}_z \paren{\log\varphi}_{\bar z}
\\
&=
-\frac{1}{2}\varphi^{-1/2} \paren{
\paren{\log\varphi}_{z\bar z}
-\frac{1}{2}\paren{\log\varphi}_z \paren{\log\varphi}_{\bar z}
}
\\
&= 0 \; .
\end{align*}
Thus the function $\varphi^{-1/2}$ is harmonic, so $\paren{\varphi^{-1/2}}_{z}$ is holomorphic.
Let us consider a local holomorphic vector field,
\begin{equation*}
\mathcal{W}=\frac{i}{\paren{\varphi^{-1/2}}_z}\pd{}{z}
=\frac{-2i\varphi^{3/2}}{\varphi_z}\pd{}{z}
=\frac{-2i\varphi^{1/2}}{\paren{\log\varphi}_z}\pd{}{z}
\;.
\end{equation*}
In any other local holomorphic coordinate function $w$, we have
\begin{equation*}
\mathcal{W}=\frac{i}{\paren{\varphi^{-1/2}}_z}\pd{}{z}
=\frac{i}{\paren{\varphi^{-1/2}}_w\pd{w}{z}}\pd{w}{z}\pd{}{w}
=\frac{i}{\paren{\varphi^{-1/2}}_w}\pd{}{w}
\;.
\end{equation*}
so $\mathcal{W}$ is globally defined on $X$. Now we will show that $\mathcal{W}$ satisfies the conditions in the theorem.
Since
\begin{equation*}
\norm{\varphi^{-1/2}\mathcal{W}}_{\omega_X}^2
=\norm{\frac{-2i}{\paren{\log\varphi}_{z}}\pd{}{z} }_{\omega_X}^2
=\frac{4\lambda}{\paren{\log\varphi}_{z}\paren{\log\varphi}_{\bar z}}
=1 \;,
\end{equation*}
the vector field $\varphi^{-1/2}\mathcal{W}$ has a unit length with respect to the complete metric $\omega_X$, so the corresponding real vector field $\mathrm{Re}\, \varphi^{-1/2}\mathcal{W}=\varphi^{-1/2}(\mathcal{W}+\overline{\mathcal{W}})$ is complete.
Moreover
\begin{equation*}
(\mathrm{Re}\, \mathcal{W})\varphi = \frac{-2i\varphi^{3/2}}{\varphi_{z}}\varphi_z +\frac{2i\varphi^{3/2}}{\varphi_{\bar z}}\varphi_{\bar z} = 0 \;.
\end{equation*}
Hence it remains to show the completeness of $\mathcal{W}$.
Take any integral curve $\gamma:\mathbb{R}\to X$ of $\varphi^{-1/2}\mathrm{Re}\,\mathcal{W}$. It satisfies
\begin{equation*}
\paren{\varphi^{-1/2}(\mathrm{Re}\, \mathcal{W})}\circ \gamma = \dot\gamma \;,
\end{equation*}
equivalently
\begin{equation*}
(\mathrm{Re}\, \mathcal{W})\circ \gamma = \paren{\varphi^{1/2}\circ\gamma} \dot\gamma \;.
\end{equation*}
The condition $(\mathrm{Re}\, \mathcal{W})\varphi\equiv 0$, equivalently $\varphi^{-1/2}(\mathrm{Re}\, \mathcal{W})\varphi\equiv 0$, implies that the curve $\gamma$ lies on a level set of $\varphi$, so $\varphi^{1/2}\circ\gamma\equiv C$ for some constant $C$. The curve $\sigma:\mathbb{R}\to X$ defined by $\sigma(t)=\gamma(Ct)$ satisfies
\begin{equation*}
(\mathrm{Re}\, \mathcal{W})\circ \sigma (t) = (\mathrm{Re}\, \mathcal{W})(\gamma(Ct)) = C\dot\gamma(Ct) =\dot\sigma(t) \;.
\end{equation*}
This means that $\sigma:\mathbb{R}\to X$ is an integral curve of $\mathrm{Re}\, \mathcal{W}$; therefore $\mathrm{Re}\, \mathcal{W}$ is complete. This completes the proof.
\end{proof}
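For the model potential $\varphi_\theta$ of \eqref{eqn:model} with $\theta=0$, the hypotheses \eqref{eqn:condition on surface} on $X=\Delta$ and the harmonicity of $\varphi_\theta^{-1/2}$ obtained in the proof can be re-checked symbolically; the following script is only an illustrative sketch, with $z$ and $\bar z$ treated as independent variables.
\begin{verbatim}
import sympy as sp

z, zb = sp.symbols('z zbar')
lam = 1 / (1 - z*zb)**2
phi0 = ((1 + z)*(1 + zb))**2 / (1 - z*zb)**2
logphi0 = sp.log(phi0)

# the two conditions of the theorem for phi_0
assert sp.cancel(sp.together(sp.diff(logphi0, z, zb) - 2*lam)) == 0
assert sp.cancel(sp.together(
    sp.diff(logphi0, z)*sp.diff(logphi0, zb) - 4*lam)) == 0

# phi_0^{-1/2} = (1-|z|^2)/|1+z|^2 is harmonic
psi = (1 - z*zb) / ((1 + z)*(1 + zb))
assert sp.simplify(sp.diff(psi, z, zb)) == 0
\end{verbatim}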
\section{Complete holomorphic vector fields on the unit disc}\label{sec:4}
In this section, we introduce parabolic and hyperbolic vector fields on the unit disc and discuss their relation to the model potential,
\begin{equation}\label{eqn:model0}
\varphi_0=\frac{\abs{1+z}^4}{\paren{1-\abs{z}^2}^2}
\end{equation}
which is $\varphi_\theta$ in \eqref{eqn:model} with $\theta=0$.
\subsection{Nowhere vanishing complete holomorphic vector fields from the left-half plane}
On the left-half plane $\mathbf{H}=\{w\in\mathbb{C}:\mathrm{Re}\, w<0\}$, there are two kinds of affine transformations:
\begin{equation*}
\mathcal{D}_s(w)=e^{2s} w
\quad\text{and}\quad
\mathcal{T}_s(w)=w+2is
\end{equation*}
for $s\in\mathbb{R}$. Their infinitesimal generators are
\begin{equation*}
\mathcal{D}= 2w\pd{}{w}
\quad\text{and}\quad
\mathcal{T}=2i\pd{}{w}
\end{equation*}
which are nowhere vanishing complete holomorphic vector fields of $\mathbf{H}$. Note that
\begin{equation}\label{eqn:relation}
(\mathcal{T}_s)_*\mathcal{D} = 2(w-2is)\pd{}{w} = \mathcal{D}-2s\mathcal{T} \quad\text{and}\quad
(\mathcal{T}_s)_*\mathcal{T} =2i\pd{}{w}= \mathcal{T}
\end{equation}
for any $s$.
For the Cayley transform $F:\mathbf{H}\to\Delta$ defined by
\begin{equation}\label{eqn:CT}
\begin{aligned}
F:\mathbf{H}&\longrightarrow\Delta \\
w&\longmapsto z=\frac{1+w}{1-w} \;,
\end{aligned}
\end{equation}
we can take two nowhere vanishing complete holomorphic vector fields of $\Delta$:
\begin{equation*}
\mathcal{H}=F_*(\mathcal{D})=(z^2-1)\pd{}{z}
\end{equation*}
and
\begin{equation*}
\mathcal{P}=F_*(\mathcal{T})=i(z+1)^2\pd{}{z}\;.
\end{equation*}
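The two push-forward computations can be re-checked symbolically; the following script is only an illustrative sketch.
\begin{verbatim}
import sympy as sp

w, z = sp.symbols('w z')
F = (1 + w)/(1 - w)                          # Cayley transform
Finv = (z - 1)/(z + 1)

def pushforward(f):                          # F_*( f(w) d/dw ) = g(z) d/dz
    return sp.simplify((f*sp.diff(F, w)).subs(w, Finv))

assert sp.simplify(pushforward(2*w) - (z**2 - 1)) == 0          # F_* D = H
assert sp.simplify(pushforward(2*sp.I) - sp.I*(z + 1)**2) == 0  # F_* T = P
\end{verbatim}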
When we define $\mathcal{H}_s=F\circ\mathcal{D}_s\circ F^{-1}$ and $\mathcal{P}_s=F\circ\mathcal{T}_s\circ F^{-1}$, vector fields $\mathcal{H}$ and $\mathcal{P}$ are infinitesimal generators of $\mathcal{H}_s$ and $\mathcal{P}_s$, respectively. Moreover Equation~\eqref{eqn:relation} can be written by
\begin{equation}\label{eqn:relation'}
(\mathcal{P}_s)_*\mathcal{H} = \mathcal{H}-2s\mathcal{P} \quad\text{and}\quad
(\mathcal{P}_s)_*\mathcal{P}= \mathcal{P} \;.
\end{equation}
There is another complete holomorphic vector field $\mathcal{R}=iz\pdl{}{z}$ generating the rotational symmetry \begin{equation}\label{eqn:rotation}
\mathcal{R}_s(z)=e^{is}z \;.
\end{equation}
Since the holomorphic automorphism group of $\Delta$ is a real $3$-dimensional connected Lie group (see \cite{Cartan,Narasimhan}), we can conclude that any complete holomorphic vector field is a real linear combination of $\mathcal{H}$, $\mathcal{P}$ and $\mathcal{R}$. Since $\mathcal{H}(-1)=\mathcal{P}(-1)=0$ and $\mathcal{R}(-1)=-i\pdl{}{z}$, we have
\begin{lemma}\label{lem:lc}
If $\mathcal{W}$ is a complete holomorphic vector field of $\Delta$ satisfying $\mathcal{W}(-1)=0$, then there exist $a,b\in\mathbb{R}$ with $\mathcal{W}=a\mathcal{H}+b\mathcal{P}$.
\end{lemma}
\subsection{Hyperbolic vector fields} In this subsection, we will show that the hyperbolic vector field $\mathcal{H}$ cannot be tangent to a K\"ahler potential with a constant length differential.
By a simple computation,
\begin{equation*}
\mathcal{H}(\log\varphi_0)
= (z^2-1)\frac{2(1+\bar z)}{(1+z)(1-\abs{z}^2)}
=2\frac{\abs{z}^2+z-\bar z-1}{(1-\abs{z}^2)}
\;,
\end{equation*}
we get
\begin{equation*}
(\mathrm{Re}\,\mathcal{H})\log\varphi_0 \equiv-4 \;.
\end{equation*}
That means $\mathrm{Re}\,\mathcal{H}$ is nowhere tangent to $\varphi_0$. Moreover
\begin{lemma}\label{lem:hyperbolic}
Let $\varphi:\Delta\to\mathbb{R}$ with $dd^c \log\varphi=2\omega_\Delta$ and $\norm{d\log\varphi}_{\omega_\Delta}^2\equiv 4$. If $(\mathrm{Re}\,\mathcal{H}) \log\varphi\equiv c$ for some $c$, then $c=\pm4$.
\end{lemma}
\begin{proof}
Since $dd^c\log\varphi_0=2\omega_\Delta$ also, the function $\log\varphi-\log\varphi_0$ is harmonic; hence we may let $\log\varphi=\log\varphi_0+f+\bar f$ for some holomorphic function $f:\Delta\to\mathbb{C}$. Then the condition $(\mathrm{Re}\,\mathcal{H}) \log\varphi\equiv c$ can be written as
\begin{equation}\label{eqn:basic identity}
(\mathrm{Re}\,\mathcal{H})\log\varphi = -4+(z^2-1)f'+(\bar{z}^2-1)\bar f' \equiv c \;.
\end{equation}
This implies that $(z^2-1)f'$ is constant. Thus we can let
\begin{equation}\label{eqn:hmm}
f'=\frac{C}{z^2-1}
\end{equation}
for some $C\in\mathbb{C}$.
Since
\begin{equation*}
\pd{}{z}\log\varphi=f'+\pd{}{z}\log\varphi_0
= f'+ \frac{2(1+\bar z)}{(1+z)(1-\abs{z}^2)}\;,
\end{equation*}
we have
\begin{multline*}
\norm{d\log\varphi}_{\omega_\Delta}^2=\paren{\pd{}{z}\log\varphi}\paren{\pd{}{\bar z}\log\varphi}\frac{1}{\lambda_\Delta}
\\
=\abs{f'}^2(1-|z|^2)^2
+\frac{2(1+\bar z)(1-\abs{z}^2)}{(1+z)}\bar f'
+\frac{2(1+z)(1-\abs{z}^2)}{(1+\bar z)}f'
+\norm{d\log\varphi_0}_{\omega_\Delta}^2 \;.
\end{multline*}
From the condition $\norm{d\log\varphi}_{\omega_\Delta}^2\equiv 4\equiv\norm{d\log\varphi_0}_{\omega_\Delta}^2$, it follows
\begin{equation*}
\abs{f'}^2(1-|z|^2)^2
=
-\frac{2(1+\bar z)(1-\abs{z}^2)}{(1+z)}\bar f'
-\frac{2(1+z)(1-\abs{z}^2)}{(1+\bar z)}f'
\;,
\end{equation*}
equivalently
\begin{equation}\label{eqn:identity}
\frac{1}{2}\abs{f'}^2(1-|z|^2)
=
-\frac{(1+\bar z)}{(1+z)}\bar f'
-\frac{(1+z)}{(1+\bar z)}f'
\;.
\end{equation}
Applying \eqref{eqn:hmm} to the right side above,
\begin{multline*}
-\frac{(1+\bar z)}{(1+z)}\bar f'
-\frac{(1+z)}{(1+\bar z)}f'
=\frac{(1+\bar z)}{(1+z)}\frac{\bar C}{1-\bar z^2}
+\frac{(1+z)}{(1+\bar z)}\frac{C}{1-z^2}
\\
=\frac{(1+\bar z-z-\abs{z}^2)\bar C + (1-\bar z+z-\abs{z}^2)C}{\abs{1-z^2}^2} \;.
\end{multline*}
Let $C=a+bi$ for $a,b\in\mathbb{R}$, then
\begin{equation*}
(1+\bar z-z-\abs{z}^2)\bar C + (1-\bar z+z-\abs{z}^2)C
= 2a(1-\abs{z}^2) +2bi(z-\bar z) \;.
\end{equation*}
Now Equation~\eqref{eqn:identity} can be written by
\begin{equation*}
\frac{1}{2}\frac{\abs{C}^2}{\abs{z^2-1}^2}(1-|z|^2)
=\frac{2a(1-\abs{z}^2) +2bi(z-\bar z)}{\abs{1-z^2}^2}
\;,
\end{equation*}
so we have
\begin{equation*}
(\abs{C}^2-4a)(1-\abs{z}^2) =4bi(z-\bar z)
\end{equation*}
on $\Delta$.
Applying $\partial\bar\partial$ to the above, we have
\begin{equation*}
\abs{C}^2-4a=0 \;.
\end{equation*}
Simultaneously $b=0$, so $C=a$. Now we have $a^2=4a$, so $a$ is $0$ or $4$. If $f'=4/(z^2-1)$, then $c=4$ from \eqref{eqn:basic identity}; if $f'=0$, then $c=-4$.
\end{proof}
\subsection{Parabolic vector fields}
Since
\begin{equation*}
\mathcal{P}(\log\varphi_0)
= i(z+1)^2\frac{2(1+\bar z)}{(1+z)(1-\abs{z}^2)}
=2i\frac{\abs{1+z}^2}{1-\abs{z}^2}
\;,
\end{equation*}
we have
\begin{equation*}
(\mathrm{Re}\,\mathcal{P})\log\varphi_0 \equiv 0 \;.
\end{equation*}
That means that the parabolic vector field $\mathcal{P}$ is tangent to $\varphi_0$. The vector field $\mathcal{P}$ is indeed the nowhere vanishing complete holomorphic vector field as constructed in Theorem~\ref{thm:existence} corresponding to $\varphi_0$. The main result of this section is the following.
\begin{lemma}\label{lem:parabolic}
Let $\varphi:\Delta\to\mathbb{R}$ with $dd^c \log\varphi=2\omega_\Delta$ and $\norm{d\log\varphi}_{\omega_\Delta}^2\equiv 4$. If $(\mathrm{Re}\,\mathcal{P}) \log\varphi\equiv c$ for some $c$, then $c=0$ and $\varphi=r\varphi_0$ for some $r>0$.
\end{lemma}
\begin{proof}
In the same way as in the proof of Lemma~\ref{lem:hyperbolic}, we let $\log\varphi=\log\varphi_0+f+\bar f$ for some holomorphic $f:\Delta\to\mathbb{C}$. Since
\begin{equation}\label{eqn:basic identity1}
(\mathrm{Re}\,\mathcal{P})\log\varphi = i(z+1)^2f'-i(\bar{z}+1)^2\bar f' \equiv c
\end{equation}
it follows that $(z+1)^2f'$ is constant. Thus we have
\begin{equation}\label{eqn:hmm1}
f'=\frac{C}{(z+1)^2}
\end{equation}
for some $C\in\mathbb{C}$.
Since \eqref{eqn:identity} also holds, we can apply \eqref{eqn:hmm1} to the right side of \eqref{eqn:identity} to get
\begin{multline*}
-\frac{(1+\bar z)}{(1+z)}\bar f'
-\frac{(1+z)}{(1+\bar z)}f'
=
-\frac{(1+\bar z)}{(1+z)}\frac{\bar C}{(\bar z+1)^2}
-\frac{(1+z)}{(1+\bar z)}\frac{C}{(z+1)^2}
\\
=\frac{-\bar C}{\abs{1+z}^2}
+\frac{-C}{\abs{1+z}^2}
=\frac{-\bar C -C}{\abs{1+z}^2}
\end{multline*}
Now Equation \eqref{eqn:identity} can be written as
\begin{equation*}
\frac{\abs{C}^2}{\abs{z+1}^4}(1-|z|^2)=2\frac{-\bar C -C}{\abs{1+z}^2}
\end{equation*}
equivalently
\begin{equation*}
\abs{C}^2(1-|z|^2) =-\paren{2\bar C +2C}\abs{1+z}^2\;.
\end{equation*}
Evaluating at $z=0$, we have $\abs{C}^2=-2\bar C-2C$. Applying
$\partial\bar\partial$ to the above, we have $-\abs{C}^2=-2\bar C-2C$. It follows that $C=0$, so $f$ is constant. Moreover, Equation \eqref{eqn:basic identity1} implies that $c=0$.
\end{proof}
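Before turning to the proofs of the main results, we note that the two tangency computations used above, $(\mathrm{Re}\,\mathcal{H})\log\varphi_0\equiv-4$ and $(\mathrm{Re}\,\mathcal{P})\log\varphi_0\equiv0$, can be re-checked symbolically; the script below is only an illustrative sketch, with $z$ and $\bar z$ treated as independent variables.
\begin{verbatim}
import sympy as sp

z, zb = sp.symbols('z zbar')
logphi0 = 2*sp.log(1 + z) + 2*sp.log(1 + zb) - 2*sp.log(1 - z*zb)

def re_action(fz, fzb):
    # (Re W) log(phi_0) for W = fz d/dz, with conjugate coefficient fzb
    expr = fz*sp.diff(logphi0, z) + fzb*sp.diff(logphi0, zb)
    return sp.simplify(sp.cancel(sp.together(expr)))

assert re_action(z**2 - 1, zb**2 - 1) == -4                    # Re H
assert re_action(sp.I*(z + 1)**2, -sp.I*(zb + 1)**2) == 0      # Re P
\end{verbatim}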
\section{Proof of the main theorem}\label{sec:5}
Now we prove Theorem~\ref{thm:main thm} and Corollary~\ref{cor:main cor}.
\noindent\textit{Proof of Theorem~\ref{thm:main thm}.} Let $\varphi:\Delta\to\mathbb{R}$ be a function with
\begin{equation*}
dd^c\log\varphi=2\omega_\Delta \quad\text{and}\quad
\norm{d\log\varphi}_{\omega_\Delta}^2\equiv 4
\;.
\end{equation*}
By Theorem~\ref{thm:existence}, we can take a nowhere vanishing complete holomorphic vector field $\mathcal{W}$ with $(\mathrm{Re}\, \mathcal{W})\varphi\equiv0$. Since every automorphism of $\Delta$ has at least one fixed point on $\overline\Delta$ and $\mathcal{W}$ is nowhere vanishing on $\Delta$, the nontrivial automorphisms generated by $\mathrm{Re}\, \mathcal{W}$ have no fixed points in $\Delta$ and must have a common fixed point $p$ at the boundary $\partial\Delta$. This means $p$ is a vanishing point of $\mathcal{W}$. Consider a rotational symmetry $\mathcal{R}_\theta$ in \eqref{eqn:rotation} satisfying $\mathcal{R}_\theta(-1)=p$. We will show that $\varphi\circ\mathcal{R}_\theta=r\varphi_0$ where $\varphi_0$ is as in \eqref{eqn:model0} and $r>0$. This implies that $\varphi=r\varphi_{-\theta}$.
Now, abusing notation, we simply write $\varphi$ for $\varphi\circ\mathcal{R}_\theta$ and $\mathcal{W}$ for $(\mathcal{R}_\theta^{-1})_*\mathcal{W}$. Since $-1$ is a vanishing point of $\mathcal{W}$, Lemma~\ref{lem:lc} implies
\begin{equation*}
\mathcal{W}=a\mathcal{H}+b\mathcal{P}
\end{equation*}
for some real numbers $a$, $b$.
Suppose that $a\neq0$. Equation~\eqref{eqn:relation'} implies that
\begin{equation*}
(\mathcal{P}_s)_* \mathcal{W} = (\mathcal{P}_s)_*(a\mathcal{H}+b\mathcal{P})
= a\mathcal{H}-2as\mathcal{P}+b\mathcal{P}
= a\mathcal{H}+(b-2as)\mathcal{P}\;.
\end{equation*}
Take $s=b/2a$; then $\widetilde{\mathcal{W}}=(\mathcal{P}_s)_* \mathcal{W} =a\mathcal{H}$. Let $\tilde\varphi=\varphi\circ\mathcal{P}_{-s}$ for this $s$. Then $\tilde\varphi$ satisfies the conditions in Theorem~\ref{thm:main thm} and $(\mathrm{Re}\, \widetilde{\mathcal{W}})\tilde\varphi\equiv 0$. But Lemma~\ref{lem:hyperbolic} says that $(\mathrm{Re}\, \widetilde{\mathcal{W}})\tilde\varphi=a(\mathrm{Re}\, \mathcal{H})\tilde\varphi\equiv \pm4a\tilde\varphi$, which contradicts $(\mathrm{Re}\, \mathcal{W})\varphi\equiv 0$, equivalently $(\mathrm{Re}\, \widetilde{\mathcal{W}})\tilde\varphi\equiv 0$. Thus $a=0$.
Now $\mathcal{W}=b\mathcal{P}$. Since $\mathcal{W}$ is nowhere vanishing already, $b\neq 0$. The condition $(\mathrm{Re}\,\mathcal{W})\varphi\equiv0$ implies $(\mathrm{Re}\,\mathcal{P})\varphi\equiv 0$. Lemma~\ref{lem:parabolic} says that $\varphi=r\varphi_0$ for some positive $r$. This completes the proof. \qed
\noindent\textit{Proof of Corollary~\ref{cor:main cor}.}
Let $D$ be a simply connected proper domain in $\mathbb{C}$ and let $\omega_D=i\lambda_D dz \wedge d\bar z$ be its Poincar\'e metric with $\norm{d\log \lambda_D}_{\omega_D}^2\equiv 4$. By Theorem~\ref{thm:existence}, there is a nowhere vanishing complete holomorphic vector field $\mathcal{W}$ with $(\mathrm{Re}\,\mathcal{W})\lambda_D\equiv0$. Take a biholomorphism $G:\Delta\to D$ and let
\begin{equation*}
\varphi=\lambda_D\circ G
\quad\text{and}\quad
\mathcal{Z}=(G^{-1})_*\mathcal{W} \,.
\end{equation*}
Note that $(\mathrm{Re}\,\mathcal{Z})\varphi\equiv0$ by assumption. Using the rotational symmetry $\mathcal{R}_\theta$ of $\Delta$ which is also affine, we may assume that $\mathcal{Z}(-1)=0$ and we will prove that $G$ is a Cayley transform.
Since $G:(\Delta,\omega_\Delta)\to(D,\omega_D)$ is an isometry, we have $G^*\omega_D=\omega_\Delta$, equivalently
\begin{equation*}
\varphi=\frac{\lambda_\Delta}{\abs{G'}^2} \;.
\end{equation*}
Moreover, $\log\varphi=G^*\log\lambda_D$ and the fact that $G$ is an isometry imply that $\norm{d\log\varphi}_{\omega_\Delta}^2=\norm{d\log\lambda_D}_{\omega_D}^2\circ G\equiv4$. By Theorem~\ref{thm:main thm}, we have
\begin{equation*}
\frac{\lambda_\Delta}{\abs{G'}^2}=\varphi=r\varphi_0=r\lambda_\Delta \abs{1+z}^4
\end{equation*}
for some positive $r$. This means that $G'=\frac{e^{i\theta'}}{\sqrt{r}(1+z)^2}$ for some $\theta'\in\mathbb{R}$, so that
\begin{equation*}
G=\frac{e^{i\theta'}}{2\sqrt{r}}\frac{z-1}{z+1}+C
\end{equation*}
for some constant $C\in\mathbb{C}$. Since the function $z\mapsto (z-1)/(z+1)$ is the inverse mapping of the Cayley transform $F:\mathbf{H}\to\Delta$ in \eqref{eqn:CT}, we have
\begin{align*}
G\circ F:\mathbf{H}&\to D \\
z&\mapsto \frac{e^{i\theta'}}{2\sqrt{r}}z+C \;.
\end{align*}
This implies that $D=G(F(\mathbf{H}))$ is affine equivalent to $\mathbf{H}$. \qed
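As an editorial sanity check, not part of the original proof, the antiderivative step above, namely that $G=\frac{e^{i\theta'}}{2\sqrt{r}}\frac{z-1}{z+1}+C$ has derivative $\frac{e^{i\theta'}}{\sqrt{r}(1+z)^2}$, can be verified symbolically, assuming SymPy is available.
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
r = sp.symbols('r', positive=True)
theta = sp.symbols('theta', real=True)

G = sp.exp(sp.I*theta)/(2*sp.sqrt(r)) * (z - 1)/(z + 1)   # candidate antiderivative
Gprime_expected = sp.exp(sp.I*theta)/(sp.sqrt(r)*(z + 1)**2)

print(sp.simplify(sp.diff(G, z) - Gprime_expected))       # expected output: 0
\end{verbatim}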
\end{document} |
\begin{document}
\title{Repeated Multimarket Contact with Private Monitoring: A Belief-Free Approach\thanks{
A full version
can be found at http://arxiv.org/abs/1607.03583.
Supported in part by KAKENHI 17H01787, 16KK0003, and 17H00761.}}
\begin{abstract}
This paper studies repeated games where two players play multiple duopolistic games simultaneously (multimarket contact). A key assumption is that each player receives a noisy and private signal about the other's actions (\textit{private monitoring} or \textit{observation errors}).
So far, there has been no game-theoretic support for whether multimarket contact facilitates collusion, in the sense of whether more collusive equilibria in terms of \textit{per-market} profits exist than under a benchmark case of one market.
A natural equilibrium candidate under the benchmark case is the class of \textit{belief-free} strategies.
We are the first to construct a non-trivial class of strategies that exhibits the effect of multimarket contact from the perspectives of \textit{simplicity} and \textit{mild punishment}.
Strategies must be simple because firms in a cartel must coordinate with each other without communication.
Punishment must be mild enough that it does not hurt even the minimum required profits in the cartel.
We thus focus on two-state automaton strategies such that each player remains cooperative in at least one market even while punishing a traitor.
Furthermore, we identify an additional condition (\textit{partial indifference}), under which the collusive equilibrium yields the optimal payoff.
\end{abstract}
\section{Introduction}
\label{sec:introuction}
This paper investigates a simple but fundamental question:
Can two players cooperate better when they face each other in multiple repeated games
than in a single repeated game?
A typical context we
concentrate on is \textit{multimarket contact}.
For example, global enterprises, such as Uber or Lyft, provide their services in multiple distinct markets.
In each area, they face oligopolistic competition, which is often modeled as a prisoners' dilemma (PD).
When they interact repeatedly over the long run (\textit{repeated games}), more cooperative or collusive behavior can be sustained in equilibrium.
They may thus be more likely to collude, and a regulatory agency wants to estimate the extent of such collusion~\cite{chellappaw:isr:2010}.
However, the answer to the above question is negative if we assume that each player can directly observe his opponent's actions (\textit{perfect} monitoring), an assumption often used in the computer science literature.
\citeauthor{bernheim:rand:1990}~\shortcite{bernheim:rand:1990}
show that multimarket contact does not improve the most collusive
per-market equilibrium profit under perfect monitoring, though some of
the vast empirical studies rather suggest the opposite~\cite{evans:qje:1994}.
\begin{comment}
However, traditional game theory does not support the existence of strategies
designed for multiple markets whose per-market equilibrium payoff exceeds one for a single market~\cite{bernheim:rand:1990},
though some of the vast empirical studies suggest its existence.
While most of the literature assumes that each player can observe
his opponents' actions (\textit{perfect} monitoring),
\end{comment}
To resolve this discrepancy, this paper assumes
\textit{private} monitoring where each player may observe a different signal.
For example, although a firm cannot directly observe its rival's actions, e.g., prices,
it can observe a noisy private signal, e.g., its own sales amounts.
Analytical studies on this class of games have not been very successful~\cite{mailath:2006}.
This is because characterizing all equilibria or identifying optimal
equilibria in games with private monitoring is extremely hard.
Indeed, equilibrium candidates may be so complicated as to be
represented only by automaton strategies with a very large state
space, and the complexity, together with privacy of the other player's
signals, would require very involved statistical inferences to
estimate the other player's history at any period and to check optimality of the continuation strategy~\cite{kandori:2010}.
Notably, a \textit{belief-free} approach has successfully established a general characterization
where an equilibrium strategy is constructed
so that the statistical inferences do not matter~\cite{ely:jet:2002,ely:econo:2005}.
However, it is not obvious whether the belief-free approach is helpful in examining
the effects of multimarket contact.
This is because we want to deal with any number of markets, which causes the number of available actions to increase exponentially and may diminish tractability.
The main goal of this paper is to construct, under multimarket contact with private monitoring,
a non-trivial class of strategies which can sustain a better per-market outcome than
an equilibrium strategy for a single market. The secondary concern is twofold.
First, strategies must be simple.
Firms do not communicate with each other after they form a cartel,
so that the cartel can escape monitoring by a regulatory agency.
If they employ a complicated strategy, it will be hard to detect a deviation and
to restore collusion after punishment.
We thus concentrate on simple two-state automaton strategies, which are still difficult to analyze.
Second, we assume firms punish a traitor in only some of the markets, instead of all markets.
When a cartel forms, products therein may not be so differentiated and
the profit from the market may be small. If a firm punishes a traitor
in all markets, it may not be able to earn the minimum necessary profit.
The more collusive equilibrium thus prescribes the players to be
cooperative in at least one market even under a punishment state.
Surprisingly, our analysis reveals that information from such markets is crucial
to admit the multimarket contact effect.
When we employ the belief-free approach, the strategy found by the
work of Ely and V\"{a}lim\"{a}ki is an important benchmark and it
attains the optimal payoff among belief-free equilibria in PD~\cite{ely:jet:2002}.
This strategy, which we call EV, can form a belief-free equilibrium
under private monitoring in the single market case and attains high expected payoffs
with a wide range of parameter settings.
Figure~\ref{fig:EV} illustrates EV, which is a variant of the well-known tit-for-tat strategy.
\footnote{Here, $g$ and $b$ are noisy private signals suggesting that the opponent's action is $C$ and $D$, respectively.
$\varepsilon_R$ or $\varepsilon_P$ represents the transition probability between states. We omit the transitions for the remaining probabilities.}
A player first cooperates and keeps cooperation as long as she observes a signal suggesting cooperation.
Once she observes a signal suggesting defection, she defects with a given probability and cooperates with the remaining probability.
Similarly, when she defects, she keeps defection as long as she observes a signal suggesting defection.
Once she observes a signal suggesting cooperation, she returns to cooperation with another given probability and defects with the remaining probability.
Building upon those considerations, we provide a condition under which a
generalization of the benchmark belief-free strategies,
which we call the \textit{generalized EV} (gEV) strategy, forms an equilibrium.
To the best of our knowledge, we are the first to identify an equilibrium designed for multiple markets
whose per-market equilibrium payoffs exceed that of the benchmark
strategies.
Furthermore, we find an additional condition, which we call \textit{partial indifference}.
It implies that each player is indifferent among all strategies which differ only in play
in a given subset of the markets. Under this condition, the gEV equilibrium yields the optimal payoff.
Let us finally note a simple extension of the EV strategy to multimarket contact. We show that for any number of markets, it is an equally collusive equilibrium under the same condition as the benchmark case. The equilibrium satisfies a special case of partial indifference (\textit{total indifference}),
where the indifferent subset consists of all markets; this equilibrium is most collusive among such equilibria. Moreover, we reveal that, under the condition, each player's continuation play depends only on the number of signals suggesting the other player's defection in the previous period, in a linear manner. This result implies that any strategy improving on the benchmark must be designed in a non-linear manner, as the gEV strategy is.
\section{Model}
\label{sec:model}
Two players play $M$ PDs simultaneously in each period.
In each PD, each player chooses either $C$ (cooperation) or $D$ (defection).
This is regarded as a model of oligopolistic competition, where
$C$ is an action increasing the total payoffs
(for instance, in the case of price competition, charging a collusive high price), and
$D$ is a non-cooperative one (like a price cut).
The players can choose different actions over the $M$ PDs,
so that each player's action set in each period is $\{ C, D \}^M$.
Each player cannot directly observe the other player's actions,
but receives an imperfect signal about them.
In each PD, each player receives either a good signal $g$ or a bad signal $b$.
We assume that each player receives his signals individually, and
cannot observe the other player's signals (private monitoring).
The pair of signals they privately receive in each PD is stochastic,
following a common symmetric probability distribution that depends only on the action pair of that PD.
We denote it by
$o(\omega_1 , \omega_2 |a_1 , a_2 )$, where $(\omega_1 , \omega_2 ) \in \{g, b \}^2$ and $(a_1 , a_2 )\in \{ C, D\}^2$.
We assume that the signals across the $M$ PDs are independent, though
the signals of a given PD may be correlated across the players.
We also assume that the signal distributions are described by one parameter.
There exists $p \in (1/2, 1)$ such that for any $i$, any $\omega_j$ ($j \neq i$) and any $a\in \{ C, D\}^2$,
{\small
\begin{equation*}
\sum_{\omega_i \in \{ g, b\}}o(\omega_i , \omega_j |a )=
\begin{cases}
p & \text{if $(a_i , \omega_j ) \in \big\{ (C, g), (D, b) \big\}$,}\\
1-p & \text{otherwise.}
\end{cases}
\end{equation*}}
The marginal distribution of an individual signal in a given PD is such that the \textit{right} signal ($\omega_j
=g$ if $a_i =C$, and $\omega_j =b$ if $a_i =D$) is received with probability $p$. We let $s =1-p$, which is the
probability of an \textit{error}.
The assumption is consistent with \textit{conditionally independent} monitoring,
which is a representative monitoring structure.
Formally, a signal distribution is conditionally independent
if $o(\omega_i, \omega_j\mid a) = o(\omega_i\mid a)o(\omega_j\mid a)$
for all $\omega_i$, $\omega_j$, and $a$.
Also, it is consistent with nearly perfect monitoring (when $p$ is close to $1$),
but inconsistent with nearly public monitoring (namely, the case where the event $\omega_1 =\omega_2$ is much more
likely than $\omega_1 \neq \omega_2$).
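To make the monitoring structure concrete, the following sketch (an editorial illustration, not part of the model's formal development) builds a conditionally independent joint distribution from the one-parameter marginals and checks the marginal condition displayed above; it assumes a standard Python environment.
\begin{verbatim}
def marginal(omega, a_opponent, p):
    """Prob. of receiving signal omega in one PD; the signal reflects the
    opponent's action: g is 'right' for C, b is 'right' for D."""
    right = (a_opponent == 'C' and omega == 'g') or \
            (a_opponent == 'D' and omega == 'b')
    return p if right else 1 - p

def joint(w1, w2, a1, a2, p):
    """Conditionally independent joint distribution o(w1, w2 | a1, a2):
    player 1's signal depends on a2, player 2's signal depends on a1."""
    return marginal(w1, a2, p) * marginal(w2, a1, p)

p = 0.9
for a1 in 'CD':
    for a2 in 'CD':
        for w2 in 'gb':
            total = sum(joint(w1, w2, a1, a2, p) for w1 in 'gb')
            assert abs(total - marginal(w2, a1, p)) < 1e-12
print("marginal condition verified")
\end{verbatim}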
In each PD, player~$i$'s payoff depends only on his action and the signal of that PD.
The payoff function is common to all PDs, denoted by $\pi_i (a_i , \omega_i )$.
We are more interested in the expected payoff function:
\[\abovedisplayskip=2pt\belowdisplayskip=2pt
g_i (a_1, a_2)=\sum_{(\omega_1 , \omega_2 )}\pi_i (a_i , \omega_i )o(\omega_1 , \omega_2 |a_1 , a_2 ).
\]
We assume that their expected payoff functions are represented by the following payoff matrix:
\begin{center}
\begin{tabular}{|c|c|c|}\hline
& $C$ & $D$ \\\hline
$C$ & $1,1$ & $-y,1+x$ \\\hline
$D$ & $1+x$,$-y$ & $0,0$ \\\hline
\end{tabular}
\end{center}
\noindent We assume $x>0$, $y>0$ and $1>x-y$, so that it indeed represents a PD.
All $M$ PDs are played infinitely, in periods $t=0,1,2,\ldots$. Player~$i$'s {\it private history} at the
beginning of period $t \ge 1$ is an element of $H_{i}^t \equiv \big[ \{ C, D \}^M \times \{g, b\}^M \big]^{t}$. Let
$H_{i}^0$ be an arbitrary singleton, and let $H_i =\cup_{t \ge 0}H_{i}^t$ be the set of player~$i$'s all private histories.
Player~$i$'s strategy of this repeated game is a mapping from $H_i$ to the set of all probability distributions over $\{ C, D \}^M$.
That is, we allow randomized strategies.
If the actual play of the repeated game is such that
the action pair $\big( a_{1}^m (t), a_{2}^m (t) \big)$ is played in the $m$-th PD in period~$t$ for each $m$ and $t$,
player~$i$'s normalized average payoff is
\begin{equation}\label{eq: repeated game path payoff} \abovedisplayskip=2pt\belowdisplayskip=2pt
(1-\delta )\sum_{t=0}^{\infty}\delta^t \sum_{m=1}^M g_i \big( a_{1}^m (t), a_{2}^m (t) \big) ,
\end{equation}
where $\delta \in (0,1)$ is their common discount factor. The average payoff of any strategy pair is the expected
value of Eq.~\ref{eq: repeated game path payoff}, where the expectation is taken with respect to the players'
randomizations and the monitoring structure.
\subsection{Belief-Free Equilibrium}\label{sec:belief-free}
The solution concept for repeated games with imperfect monitoring is
\textit{sequential equilibrium}~\cite{RePEc:ecm:emetrp:v:50:y:1982:i:4:p:863-94}.
However, since it is still highly difficult to analyze our model,
we here focus on a special class called {\it belief-free equilibria}~\cite{ely:econo:2005},
which is standard in the private monitoring literature~\cite{mailath:2006}.
\begin{definition}[Belief-Free Equilibrium]
A strategy pair is a {\it belief-free equilibrium} if for any $t \ge 0$, $h_{1}^t \in H_{1}^t$ and $h_{2}^t \in H_{2}^t$,
each player~$i$'s continuation strategy given $h_{i}^t$ is optimal against player~$j$'s continuation strategy given $h_{j}^t$.
\end{definition}
An important property
is that,
while player~$i$ given her private history should, in principle,
optimize her continuation payoff against her belief about player~$j$'s history (and hence his continuation strategy),
her continuation strategy is optimal even if she were to know $j$'s history with certainty.
\footnote{We refer to player~$i$ or 1 as \textit{her} and to player~$j$ or 2 as \textit{him} throughout this paper.}
In other words, the players playing a belief-free equilibrium need not compute their beliefs in the course of play.
When a strategy pair is represented by finite-state automaton strategies, as will be the case in subsequent analysis,
it is a belief-free equilibrium if any player's continuation strategy
(behavior expanded from the automaton) starting from any state
is a best response (optimal) against the other player's continuation strategy starting from any state.
Note that we never restrict the other's possible strategy space, which includes strategies with an infinite number of states.
Suppose both players employ a common strategy represented by a two-state automaton with state space $\{ R, P\}$.
Let $V_{s_1 s_2}$, where $s_1 \in \{ R, P\}$ and $s_2 \in \{ R, P\}$, be player~1's continuation payoff when
(i) player~2 is currently at $s_2$ and then follows the automaton, and
(ii) player~1 always plays the action prescribed at state~$s_1$ at any subsequent history.
The strategy pair is a belief-free equilibrium if and only if there exist $V_R$ and $V_P$ such that
\begin{equation}\label{eq: payoffs of constant strategies}
V_{RR}=V_{PR}=V_R , \quad V_{RP}=V_{PP}=V_P ,
\end{equation}
and that $V_{s_2}$ ($s_2 \in \{ R, P\}$) is player~1's best response payoff against
player~2's continuation strategy when he is at state~$s_2$.
To see this, note that by Eq.~\ref{eq: payoffs of constant strategies}, player~1 at any history is indifferent
between her continuation strategy at state~$R$ and that at state~$P$ irrespective of her belief about player~2's state.
Since the second condition implies that both continuation strategies give her best response payoff at any history,
the conditions for belief-free equilibrium are all satisfied.
We shall consider a general class of two-state automaton strategies throughout this paper.
\begin{definition}[Two-State Automaton Strategies]
\label{def:FSA}
The state space is $\{ R, P\}$, and $R$ is the initial state.
At state~$s\in\{R,P\}$, the player is prescribed to choose $a^s\in\{C,D\}^M$.
Suppose the current state is $R(P)$. If $\omega=(\omega_k)_{k=1}^M\in\{g,b\}^M$ is observed, then the
state shifts to $P(R)$ with probability $\xi_R(\omega)$ ($\xi_P(\omega)$)
(and stays at $R(P)$ with the remaining probability).
Note that this class is parameterized by $\xi_R(\cdot)$ and $\xi_P(\cdot)$: For any $s\in\{R,P\}$, $\xi_s:\{g,b\}^M\rightarrow [0,1]$,
which we call the transition probability functions.
\end{definition}
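The two-state automaton of this definition is straightforward to represent in code. The sketch below is only an editorial illustration of the definition, not code taken from the paper; it assumes a standard Python environment.
\begin{verbatim}
import random

def make_two_state_automaton(a_R, a_P, xi_R, xi_P):
    """Two-state automaton strategy: state space {R, P}, initial state R.
    a_R, a_P are tuples in {'C','D'}^M; xi_R, xi_P map a signal profile in
    {'g','b'}^M to the probability of switching to the other state."""
    state = ['R']

    def act():
        return a_R if state[0] == 'R' else a_P

    def observe(omega):
        switch_prob = xi_R(omega) if state[0] == 'R' else xi_P(omega)
        if random.random() < switch_prob:
            state[0] = 'P' if state[0] == 'R' else 'R'

    return act, observe
\end{verbatim}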
\begin{figure}
\caption{
EV strategy}
\label{fig:EV}
\caption{sEV strategy}
\label{fig:sEV}
\end{figure}
\subsection{Ely-V\"{a}lim\"{a}ki Strategy}
\label{sec:ev}
Let us next explain an instance of this class for a single PD case ($M=1$). Figure~\ref{fig:EV} illustrates the EV strategy~\cite{ely:jet:2002}
where $a^R=C$, $a^P=D$, $\xi_R(g)=1$, $\xi_R(b)=\varepsilon_R$, $\xi_P(b)=1$, and $\xi_P(g)=\varepsilon_P$.
Note that
\begin{align*}
& \varepsilon_R = \frac{(1-\delta)x}{\delta\Big\{ 2p-1-(1-p)(x+y) \Big\}}, \text{and} \\
& \varepsilon_P = \frac{(1-\delta)y}{\delta\Big\{ 2p-1-(1-p)(x+y) \Big\}}.
\end{align*}
A solid line denotes a deterministic transition and a dashed line denotes a probabilistic transition,
though, for simplicity, we omit some state transitions.
EV is a representative two-state automaton strategy that forms a belief-free equilibrium
under repeated games with private monitoring and attains the highest average payoff among belief-free equilibria in PD.
A player first cooperates at state $R$, but after observing a bad signal,
she punishes (defects) in the next period with probability $\varepsilon_{R}$, or keeps cooperating with probability $1-\varepsilon_{R}$.
Likewise, after she defects at $P$,
if she observes a good signal, she returns to cooperation with probability $\varepsilon_{P}$,
or keeps defecting with probability $1-\varepsilon_{P}$.
\begin{proposition}\label{prop:EV}
There exist $\varepsilon_{R}\in [0,1]$ and $\varepsilon_{P}\in [0,1]$ such that
the EV strategy pair is a belief-free equilibrium if
\begin{equation}
\label{eq:EV-condition}
\delta \big[ 2p-1 - (1-p)(x+y) +\max \{ x, y \} \big] \geq \max \{ x, y \}.
\end{equation}
The average payoff starting from state $R$ is
\begin{equation*}
V_R = V^{EV} \equiv 1 - \frac{(1-p)x}{2p-1}.
\end{equation*}
\end{proposition}
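For a concrete feel of this proposition, the following numerical sketch (an editorial illustration with hypothetical parameter values) evaluates the equilibrium condition and the benchmark payoff $V^{EV}$.
\begin{verbatim}
p, x, y = 0.9, 1.0, 1.0                        # hypothetical parameter values

coeff = 2*p - 1 - (1 - p)*(x + y) + max(x, y)  # bracket in the condition above
delta_min = max(x, y) / coeff                  # smallest delta meeting the condition
V_EV = 1 - (1 - p)*x / (2*p - 1)               # average payoff from state R

print(delta_min)   # approx 0.625
print(V_EV)        # approx 0.875
\end{verbatim}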
\section{Simplified EV Equilibrium}
What happens if there are $M(\geq 2)$ PDs, in comparison with the case of one PD?
If EV forms an equilibrium, it is always an equilibrium to play it in each PD independently.
Obviously, the payoff of this equilibrium is $M$ times the EV equilibrium payoff.
Under this equilibrium, a player's actions in all PDs can be quite different,
depending on the histories of individual PDs. Thus, the corresponding automaton has $2^M$ states.
Interestingly, this equilibrium strategy can be greatly simplified
so that it is an equilibrium with the same payoff and under the same condition.
The simplified strategy just has two states, where
a player cooperates in all PDs at one state and defects in all PDs at the other.
Her actions are therefore perfectly correlated across the PDs.
The class of strategies, which we call \textit{simplified EV} (sEV), is simple,
also in the sense that its transition probabilities from one state to the other
depend only on the number of bad or good signals.
Therefore, the player need not know the exact configuration of the signals.
\begin{definition}[sEV Strategy]
An sEV strategy for $M(\geq 2)$ PDs is a two-state automaton strategy
parameterized by two numbers $\varepsilon_{R}\in [0,1]$ and $\varepsilon_{P}\in [0,1]$:
\begin{itemize}
\item The actions at each state are prescribed by $a^R=(C,C,\ldots,C)$ and $a^P=(D,D,\ldots,D)$,
\item The transition probabilities are defined as
\begin{align*}
\xi_R(\omega)=|\{k\mid \omega_k=b\}|\varepsilon_R, \text{and } \xi_P(\omega)=|\{k\mid \omega_k=g\}|\varepsilon_P.
\end{align*}
\end{itemize}
\end{definition}
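The sEV transition probability functions are easy to write down explicitly; the following sketch (an editorial illustration only, with a hypothetical value of $\varepsilon_R$) evaluates them for $M=2$.
\begin{verbatim}
def xi_R(omega, eps_R):
    # switching probability at state R: proportional to the number of bad signals
    return sum(1 for w in omega if w == 'b') * eps_R

def xi_P(omega, eps_P):
    # switching probability at state P: proportional to the number of good signals
    return sum(1 for w in omega if w == 'g') * eps_P

eps_R = 0.1                      # hypothetical value; note M*eps_R must not exceed 1
print(xi_R(('b', 'g'), eps_R))   # 0.1
print(xi_R(('b', 'b'), eps_R))   # 0.2
\end{verbatim}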
Figure~\ref{fig:sEV} illustrates sEV for two PDs in the same manner as Figure~\ref{fig:EV}.
The next theorem identifies the equilibrium condition and the average payoff starting from state $R$.
\begin{theorem}
\label{thm:sEV}
There exist $\varepsilon_{R}\in [0,1]$ and $\varepsilon_{P}\in [0,1]$ such that
the sEV strategy pair is a belief-free equilibrium if Eq.~\ref{eq:EV-condition} holds.
The average payoff starting from state $R$ is $M V^{EV}$.
\end{theorem}
We place the proof in the full version
because it is similar to Proposition~\ref{prop:EV}.
\section{Generalized EV Equilibrium and \\ Partial Indifference}
The goal of this section is to answer whether there exists a class of strategies
that can yield a better per-market payoff than the EV strategy.
We define such a class, the \textit{generalized EV} (gEV) strategy,
which achieves the optimal payoff
{\small\begin{align}\label{eq:overlineYd}
\overline{Y}^d \equiv M - \frac{1-p}{2p-1}(d+1)x.
\end{align}}
We also use
{\small\begin{align}\label{eq:underlineYd}
\underline{Y}^d \equiv M-d + \frac{1-p}{2p-1}dy + \frac{p^{M-d}}{p^{M-d}-(1-p)^{M-d}}(M-d)x.
\end{align}}
\begin{definition}[gEV Strategy]\label{def:gEV}
A gEV strategy for $M(\geq 2)$ PDs is a two-state automaton strategy parameterized by five parameters, $d\in\{1,2,\ldots,M-1\}$,
$\alpha^R$, $\alpha^P$, $\beta^R$, and $\beta^P$:
{\small
\begin{align*}
& \alpha^{R} = \frac{(1-\delta)x}{\delta (\overline{Y}^d-\underline{Y}^d)(2p-1)}, \\
& \alpha^{P} = \frac{(1-\delta)y}{\delta (\overline{Y}^d-\underline{Y}^d)(2p-1)}, \\
& \beta^{R} = \frac{(1-\delta)x}{\delta (\overline{Y}^d-\underline{Y}^d)(2p-1)(1-p)^{M-d-1}}, \\
& \beta^{P} = \frac{(1-\delta)(M-d)x}{\delta (\overline{Y}^d-\underline{Y}^d)\Big\{p^{M-d}-(1-p)^{M-d}\Big\}}.
\end{align*}}
Let us divide the markets into $A = \{1,2,\ldots, M-d\}$ and $B=\{M-d+1,M-d+2,\ldots,M\}$.
The actions at each state are prescribed by $a^R=(\overbrace{C,C,\ldots,C}^M)$ and $a^P=(\overbrace{C,\ldots,C}^{M-d},\overbrace{D,\ldots,D}^d)$.
The transition probabilities are defined as
{\small
\begin{align*}
\xi_R(\omega) =
\begin{cases}
\alpha^R|\{k \in B \mid \omega_k=b \}| + \beta^R & \text{if $\omega_{k}=b$ for all $k\in A$}, \\
\alpha^R|\{k\in B \mid \omega_k=b \}| & \text{otherwise}.
\end{cases}
\end{align*}
\begin{align*}
\xi_P(\omega) =
\begin{cases}
\alpha^P|\{k \in B \mid \omega_k=g \}| + \beta^P & \text{if $\omega_{k}=g$ for all $k\in A$}, \\
\alpha^P|\{k \in B \mid \omega_k=g \}| & \text{otherwise}.
\end{cases}
\end{align*} }
\end{definition}
Let us explain how we construct this strategy.
A player cooperates in $A$ at state $P$; hence she always cooperates in the PDs in $A$ regardless of which state she is in.
The transition probabilities from $R$ to $P$ distinguish signals from $A$ from those from $B$.
The contribution of the signals from $A$ depends only on whether all of them are bad:
it is zero if she observes at least one good signal from $A$, and $\beta^{R}$ otherwise.
The transition probabilities further increase by $\alpha^R$ for each bad signal from $B$.
Similarly, the transition probabilities from $P$ to $R$ are specified.
The contribution of the signals from $A$ is zero if she observes at least one bad signal from $A$, and $\beta^P$ otherwise.
The transition probabilities increase by $\alpha^P$ for each good signal from~$B$.
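The following sketch (an editorial illustration with hypothetical values of $\alpha^R$ and $\beta^R$, not the paper's code) implements this transition rule and shows the non-linear jump when all signals from $A$ are bad.
\begin{verbatim}
def gev_xi(omega, M, d, alpha, beta, trigger):
    """Transition probability of the gEV strategy.  Use trigger='b' for xi_R
    and trigger='g' for xi_P.  Markets 0..M-d-1 form A, markets M-d..M-1 form B."""
    A = range(0, M - d)
    B = range(M - d, M)
    count_B = sum(1 for k in B if omega[k] == trigger)
    all_A = all(omega[k] == trigger for k in A)
    return alpha*count_B + (beta if all_A else 0.0)

# Example with M = 3, d = 2 (so A has one market) and hypothetical parameters.
alpha_R, beta_R = 0.05, 0.2
print(gev_xi(('g', 'b', 'b'), 3, 2, alpha_R, beta_R, 'b'))  # 0.1: no jump, A saw a good signal
print(gev_xi(('b', 'b', 'g'), 3, 2, alpha_R, beta_R, 'b'))  # 0.25: jump by beta_R, A is all bad
\end{verbatim}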
The following theorem identifies a condition for an equilibrium
outperforming the sEV equilibrium.
\begin{theorem}\label{thm:gEV-existence}
If $\overline{Y}^d > \underline{Y}^d$,
there exists $\underline{\delta}\in (0,1)$ such that for any $\delta\geq \underline{\delta}$,
there is a
belief-free equilibrium with $V_R=\overline{Y}^d$ and $V_P=\underline{Y}^d$.
\end{theorem}
The payoffs equal the upper and the lower bounds derived in Theorems~\ref{thm:optimal payoff} and \ref{thm:worst payoff}. Those bounds of equilibrium payoffs are obtained
under the following condition, which we call \textit{partial indifference}.
Recall that $A=\{1,2,\ldots,M-d\}$ and $B=\{M-d+1,M-d+2,\ldots,M\}$ by parameter $d$.
Let $V_s(a)$ be the continuation payoff when player 2 is at state~$s$ and player 1 chooses action $a$.
\begin{definition}[Partial Indifference]\label{def:PIBF}
Fix $d\in [1,M-1]$.
An equilibrium is \textit{$d$-partially indifferent} if
for any $s\in\{R,P\}$ and any $a',a''\in\{C,D\}^M$ such that $a'_k=a''_k$ for all $k \in A$, we have $V_s(a')=V_s(a'')$.
\end{definition}
\begin{comment}
The indifference is imposed on the behavior only in $B$.
For example, fix $M=2$ and $d=1$. The action space of player~1 is
\[\{(C,C),(C,D),(D,C),(D,D)\}.\] When player 2 is at state~$s$,
\[
V_{s}((C,C))=V_{s}((C,D))\ \mathrm{and}\
V_{s}((D,C))=V_{s}((D,D))
\]
holds. Since Eq.~\ref{eq: payoffs of constant strategies} requires only $V_{s}((C,C))=V_{s}((C,D))$,
this concept implies the belief-freeness, but not vice versa.
\end{comment}
The $d$-partial indifference means that, given the other player's strategy, a player's repeated game payoff does not depend on the play of the $B$ markets once he fixes a play of the $A$ markets. For example, if the player cooperates in all $A$ markets (as in the equilibrium), he receives a common repeated game payoff regardless of his actions in the $B$ markets. Or if he defects in all $A$ markets, he receives a common repeated game payoff regardless of his actions in the $B$ markets. Similar indifference holds for any play of the $A$ markets. In other words, this concept implies that his repeated game payoff given the other player's strategy depends solely on his play in the $A$ markets. Further, our analysis shows that the repeated game payoff is maximized when the player cooperates in all $A$ markets.
Thus, the strategy is optimal among all possible strategies.
Let us outline the proof of Theorem~\ref{thm:gEV-existence}.
We calculate both the upper (lower) bound of the continuation payoffs
from state $R$ ($P$)
among all $d$-partially indifferent equilibria within the class of strategies in Definition~\ref{def:FSA} (Theorems~\ref{thm:optimal payoff} and \ref{thm:worst payoff}).
Since our equilibrium uses a transition to $R$ as a reward and a transition to $P$ as a punishment, the upper bound must exceed the lower bound (Theorem~\ref{thm:VR-VP positive}). Under that condition, we construct a particular gEV equilibrium which achieves those upper and lower bounds
if the discount factor is sufficiently large.
Suppose that $a^R$ and $a^P$
are prescribed as in Definition~\ref{def:gEV}.
Formally, given $d$, $a^R=(a^R_k)_{k=1}^M$, $a^P=(a^P_k)_{k=1}^M$
where for all $k$, $a_k^R=C$, for all $k\in A$, $a_k^P=C$, and for all $k\in B$, $a_k^P=D$.
\begin{theorem}\label{thm:optimal payoff}
Fix $d\in [1,M-1]$.
Suppose $a^R$ is prescribed as in Definition~\ref{def:gEV}.
Any $d$-partially indifferent belief-free equilibrium payoff is at most $\overline{Y}^d$ defined in Eq.~\ref{eq:overlineYd}.
\end{theorem}
\begin{proof}
Consider an arbitrary $d$-partially indifferent belief-free equilibrium. Fix $k\in B$.
For any $a_{-k}\in\{C,D\}^{M-1}$, $d$-partially indifferent belief-freeness implies
{\small
\begin{align}\label{eq:belief-free for k at R}
(1-\delta)x = \delta (V_R-V_P) \Big\{ \zeta_R(D,a_{-k}) - \zeta_R(C,a_{-k}) \Big\}.
\end{align}}
Note that $o_2(\omega_{-k} \mid a_{-k} )$ is the probability that player~2 observes the signal profile $\omega_{-k}$ from the
markets other than $k$ when player 1 chooses the action profile $a_{-k}$.
Also, $\zeta_s(a)$ denotes
$\sum_{\omega\in\{g,b\}^M} o_2(\omega \mid a)\xi_s(\omega)$ for state $s$.
Then we derive for any $a_{-k}$,
{\small
\begin{align}\label{eq:bf_at_R}
(1-\delta)x &= \delta (V_R-V_P)(2p-1)\sum_{\omega_{-k}\in\{g,b\}^{M-1}}\Big\{ \xi_R(b,\omega_{-k}) \\\notag &- \xi_R(g,\omega_{-k}) \Big\} o_2(\omega_{-k}\mid a_{-k}).
\end{align}}
We have, for any $a_{-k}\in \{C,D\}^{M-1}$, $\zeta_R(D,a_{-k}) - \zeta_R(C,a_{-k})=$
{\small
\begin{align*}
(2p-1)\sum_{\omega_{-k}\in\{g,b\}^{M-1}}\Big\{ &\xi_R(b,\omega_{-k}) - \\ &\xi_R(g,\omega_{-k}) \Big\}o_2(\omega_{-k}\mid a_{-k}).
\end{align*}}
Substituting this into Eq.~\ref{eq:belief-free for k at R}, we obtain
\begin{align*}
\xi_R(b,\omega_{-k}) - \xi_R(g,\omega_{-k}) = \frac{(1-\delta)x}{\delta (V_R-V_P)(2p-1)}.
\end{align*}
Intuitively, this implies the increase of the transition probabilities is constant when
the signal observed in a market changes from the good to the bad one.
Since we can arbitrarily choose $k\in B$,
there exists a function $\beta^R: \{g,b\}^{M-d}\rightarrow [0,1]$ such that
{\small\begin{align*}
\xi_R(\omega) = \frac{(1-\delta)x|\{k\in B \mid\omega_k=b\}|}{\delta(V_R-V_P)(2p-1)} + \beta^R (\omega_1,\ldots,\omega_{M-d})
\end{align*}} for all $\omega \in \{g,b\}^M$. Note that the transition probability from state $R$ to $P$ is linear in the number of bad signals from~$B$.
Let us consider the incentive condition that a player at state $R$ defects in a single market.
Recall that $a^R=(C,\ldots,C)$ and let $a'^R$ be an action that defects in exactly one market in $A$ and cooperates in the remaining markets, e.g., $(D,C,\ldots,C)$.
\begin{align*}
V_R& \geq (1-\delta)(M+x) + \delta V_R - \delta (V_R-V_P)\zeta_R(a'^R) \\
\leftrightarrow & (1-\delta)x \leq \delta (V_R-V_P) \Big\{ \zeta_R(a'^R) - \zeta_R(a^R) \Big\}.
\end{align*}
Since the left hand side of this equation is positive, we have $\zeta_R(a'^R) > \zeta_R(a^R)$.
For $a \in \{a^R, a'^R\}$, it holds that
\[
\zeta_R(a) = \frac{(1-\delta)x(1-p)d}{\delta (V_R-V_P)(2p-1)} + \eta_R(a)
\] where $\eta_R(a) = $
{\small
\[
\sum_{(\omega_1,\ldots,\omega_{M-d})\in \{g,b\}^{M-d}} \left[ \prod_{k=1}^{M-d} o_2(\omega_k\mid a)\right] \beta^R(\omega_1,\ldots,\omega_{M-d}).
\]}
Clearly, $\zeta_R(a'^R)-\zeta_R(a^R)=\eta_R(a'^R)-\eta_R(a^R)>0$ holds.
From those equations, let us transform the expected payoff starting from state $R$.
\begin{align*}
V_R = & M - \frac{\delta}{1-\delta}(V_R-V_P)\zeta_R(a^R) \\
\leq & M - \frac{1-p}{2p-1}xd - \frac{x}{\eta_R(a'^R)-\eta_R(a^R)}\eta_R(a^R).
\end{align*}
Note that $\eta_R(a)$ must be non-zero. If not, for any $(\omega_1,\ldots,\omega_{M-d})\in \{g,b\}^{M-d}$,
$\beta^R(\omega_1,\ldots, \omega_{M-d})=0$ and for any two $a,a'\in \{C,D\}^{M}$, $\eta_R(a')-\eta_R(a)=0$.
This contradicts $\eta_R(a'^R)-\eta_R(a^R)>0$.
Therefore, there exists a signal profile $(\omega_1,\ldots,\omega_{M-d})\in\{g,b\}^{M-d}$ such that $\beta^R(\omega_1,\ldots,\omega_{M-d})>0$.
We have
\begin{align*}
1 < \frac{\eta_R(a'^R)}{\eta_R(a^R)}
\leq \frac{p}{1-p}.
\end{align*}
Accordingly, we finally obtain $V_R \leq \overline{Y}^d$.
The proof is complete.
\end{proof}
The upper bound always exceeds the benchmark, unless $d=M-1$.
In this equilibrium, the fact that the $d$-partial indifference decreases
the continuation payoff from $R$
by $\frac{1-p}{2p-1}dx$ follows from the linearity of the transition probabilities in $B$.
The remaining term of $\frac{1-p}{2p-1}x$ follows from the incentive condition that a player chooses $C$ in $A$.
Let us next derive the lower bound.
\begin{theorem}\label{thm:worst payoff}
Fix $d\in [1,M-1]$.
Suppose $a^P$ is prescribed as in Definition~\ref{def:gEV}.
Among the class of strategies in Definition~\ref{def:FSA},
any $d$-partially indifferent belief-free equilibrium payoff is at least $\underline{Y}^d$
defined in Eq.~\ref{eq:underlineYd}.
\end{theorem}
Since the flow of this proof is the same as that of Theorem~\ref{thm:optimal payoff},
we place it in the full version.
Before proving Theorem~\ref{thm:gEV-existence},
we show that in any $d$-partially indifferent belief-free equilibrium,
$\overline{Y}^d \geq V_R > V_P \geq \underline{Y}^d$ must hold.
Otherwise, no such equilibrium exists. To this end, it suffices to show that
the following statement holds.
\begin{theorem}\label{thm:VR-VP positive}
Fix $d\in [1,M-1]$.
In any $d$-partially indifferent belief-free equilibrium, $V_R >V_P$.
\end{theorem}
\begin{proof}
Consider an arbitrary $d$-partially indifferent belief-free equilibrium.
Recall that $\zeta_s(a)$ is $\sum_{\omega\in\{g,b\}^M} o_2(\omega \mid a)\xi_s(\omega)$ for state $s$.
From Definition~\ref{def:FSA}, we have
\begin{align*}
&V_R=(1-\delta)M + \delta V_R - \delta (V_R-V_P) \zeta_R(a^R),\ \textrm{and}\\
&V_P=(1-\delta)(M-d) + \delta V_P + \delta (V_R-V_P) \zeta_P(a^P)
\end{align*}
hold and we obtain
\begin{align*}
V_R- V_P = \frac{(1-\delta)d}{(1-\delta) +\delta \Big\{ \zeta_R(a^R)+\zeta_P(a^P) \Big\} }.
\end{align*}
Thus, $V_R>V_P$ holds because the right hand side is clearly positive.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:gEV-existence}]
Fix $d\in [1,M-1]$ and consider the gEV strategy.
We claim that $\overline{Y}^d>\underline{Y}^d$ implies $\xi_s(\omega)\in [0,1]$ for all $s\in \{R,P\}$ and $\omega\in \{g,b\}^M$.
It is immediate that $\xi_s(\cdot)$ is always positive if $\overline{Y}^d>\underline{Y}^d$.
Next, define $\underline{\delta}\in (0,1)$ as $\delta$ satisfying
\begin{align*}
\max \Big\{ \alpha^Rd+\beta^R, \alpha^Pd+\beta^P \Big\} = 1.
\end{align*}
Since $d$ is the maximum value of $|\{k\in B \mid \omega_k=b \}|$ or $|\{k\in B \mid \omega_k=g \}|$, for any $\delta\geq \underline{\delta}$,
it holds that $\xi_s(\omega) \leq 1$ for all $s\in \{R,P\}$ and $\omega\in \{g,b\}^M$.
Therefore, the gEV strategy is well-defined.
By solving the following system of equations, we obtain $V_R=\overline{Y}^d$ and $V_P=\underline{Y}^d$.
\begin{align*}
V_R & = (1-\delta) g_R(a^R) + \delta V_R -\delta(V_R-V_P)\zeta_R(a^R) \\
& = (1-\delta) g_R(a^P) + \delta V_R -\delta(V_R-V_P)\zeta_R(a^P). \\
V_P & = (1-\delta) g_P(a^R) + \delta V_P + \delta(V_R-V_P)\zeta_P(a^R) \\
& = (1-\delta) g_P(a^P) + \delta V_P + \delta(V_R-V_P)\zeta_P(a^P).
\end{align*}
It suffices to verify that
(i) $V_R$ is a player's best response payoff when the other player is at state $R$, and
(ii) $V_P$ is a player's best response payoff when the other player is at state $P$.
To this end, let $V_s(d_A , d_B)$ be a player's continuation payoff when
he chooses $D$ in some $d_A$ PDs in $A$ and in some $d_B$ PDs in $B$ in the current period,
and then conforms to the strategy from the next period on,
given that the other player is at state~$s$ and conforms to the strategy.
The proof is complete if we show that $V_s(0, 0)\ge V_s(d_A , d_B)$ for any $d_A$, any $d_B$, and any $s$.
First, some calculation verifies
{\small
\begin{align*}
V_R(d_A, d_B)=
(1-\delta )M + \delta V_R - \delta(V_R-V_P)d(1-p)\alpha^R \\
+ (1-\delta)x \Big[ d_A - \Big(\frac{p}{1-p}\Big)^{d_A}\frac{1-p}{2p-1}\Big].
\end{align*}}
Hence, $V_R(d_A, d_B)$ does not depend on $d_B$. For any $d_A$ and $d_B$, $V_R(d_A, d_B) - V_R(d_A+1, d_B) =$
{\small
\begin{align*}
(1-\delta)x\Big[ -1 + \Big( \frac{p}{1-p} \Big)^{d_A} \Big].
\end{align*}}
Since $p > 1-p$, $V_R(d_A, d_B) \geq V_R(d_A+1, d_B)$ holds for any $d_A$ and $d_B$.
We obtain $V_R(d_A,d_B) = $
{\small
\begin{align*}
& \sum_{k=1}^{d_A} \Big\{ V_R(k,d_B) - V_R(k-1,d_B) \Big\} + V_R(0, d_B)
\leq V_R(0,d_B)
\end{align*}}
for any $d_A$ and $d_B$, as desired.
Further, it attains the same value $-(1-\delta)x\frac{1-p}{2p-1}$ at $d_A=0$ and $d_A=1$.
The concavity of the last term of $V_R(d_A, d_B)$ implies this consequence.
Finally, some calculation verifies
{\small
\begin{align*}
V_P&(d_A,d_B) = \\
& (1-\delta )(M-d-dy) + \delta V_P + \delta(V_R-V_P)dp\alpha^P \\
+ & (1-\delta)x \Big[ d_A + \Big( \frac{1-p}{p} \Big)^{d_A} \frac{p^{M-d}(M-d)}{p^{M-d}-(1-p)^{M-d}} \Big].
\end{align*}}
Hence, $V_P(d_A, d_B)$ does not depend on $d_B$.
For any $d_B$, it attains the same value
\[(1-\delta)x\frac{p^{M-d}(M-d)}{p^{M-d}-(1-p)^{M-d}}\]
at $d_A=0$ and $d_A=M-d$.
Since $p>1-p$, it is convex in $d_A$. This convexity implies that $V_P(0,d_B) \geq V_P(d_A,d_B)$ for any $d_A$ and $d_B$.
The proof is complete.
\end{proof}
Theorem~\ref{thm:gEV-existence} reaches the main goal we raised at the beginning of this paper.
Namely, if there exists an integer $d<M-1$ such that $\overline{Y}^d>\underline{Y}^d$, the gEV strategy yields
the average per-market payoff greater than the EV and sEV strategies. Note that the number of markets $M$ must be
at least three.
The condition $\overline{Y}^d>\underline{Y}^d$ is equivalent to
\begin{align*}
d\Big\{ 1- \frac{1-p}{2p-1}(x+y)\Big\}
& > \Big\{ \frac{1-p}{2p-1} \\\notag +& (M-d)\frac{p^{M-d}}{p^{M-d}-(1-p)^{M-d}}\Big\}x .
\end{align*}
The coefficient of $d$ on the left hand side must be positive, because the coefficient of $x$ on the right hand side is positive.
This is precisely the necessary condition, from Eq.~\ref{eq:EV-condition}, for an EV equilibrium to exist for some discount factor.
Unless EV is an equilibrium, no $d$-partially indifferent belief-free equilibrium exists for any $d$.
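As an editorial numerical illustration (with hypothetical parameter values, chosen only for this example), the sketch below evaluates $\overline{Y}^d$, $\underline{Y}^d$ and the per-market benchmark $V^{EV}$, showing a case where the condition holds and the per-market gEV payoff exceeds the benchmark.
\begin{verbatim}
M, d = 4, 2                       # hypothetical numbers of markets
p, x, y = 0.95, 0.1, 0.1          # hypothetical monitoring/payoff parameters

ratio = (1 - p) / (2*p - 1)
Y_up  = M - ratio*(d + 1)*x
Y_low = (M - d) + ratio*d*y \
        + (p**(M - d) / (p**(M - d) - (1 - p)**(M - d))) * (M - d)*x
V_EV  = 1 - ratio*x               # per-market benchmark payoff

print(Y_up > Y_low)               # True: the condition of the theorem holds here
print(Y_up / M, V_EV)             # approx 0.9958 vs approx 0.9944
\end{verbatim}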
\subsection{sEV Equilibrium and Total Indifference}
This subsection turns to the sEV equilibrium to deepen the understanding of gEV.
Consider an extreme case of partial indifference: we restrict attention to equilibria in which every action yields the same
continuation payoff, regardless of the opponent's state.
\begin{definition}[Total Indifference]\label{def:TIBF}
An equilibrium is \textit{totally indifferent} if for any $s\in\{R,P\}$, any $a',a''\in\{C,D\}^M$, $V_s(a')=V_s(a'')$.
\end{definition}
The sEV strategy
is clearly totally indifferent. In addition, the next theorem bounds the payoff of any totally indifferent equilibrium.
\begin{theorem}
\label{prop:TIBF payoff}
Suppose $a^R=(C,C,\ldots,C)$. Among the class of strategies in Definition~\ref{def:FSA},
any totally indifferent belief-free equilibrium payoff is at most $M V^{EV}$.
\end{theorem}
We place the proof in the full version.
We emphasize that the total indifference requires the signal in one
market to have the same impact on the transition probabilities as the
signal in any other market, as is seen from
\[
\xi_R(\omega)= \frac{(1-\delta)x|\{k\mid\omega_k=b\}|}{\delta(V_R-V_P)(2p-1)} + \beta^R.
\]
Hence, the sEV
equilibrium is most collusive among all totally indifferent equilibria because
it has $\beta^R =0$
and therefore is least likely to switch to inefficient punishment.
Since Theorem~\ref{prop:TIBF payoff} does not specify behavior in state $P$, mild punishment is not sufficient to improve the per-market equilibrium payoff. Some nonlinearity of the transition probabilities is necessary. Signals from markets, where each player is prescribed to cooperate at any state, are essential to admit more collusive equilibria.
\section{Discussions}
\label{sec:discussions}
Let us explain why gEV can form an equilibrium.
A key feature involves the nonlinearity of the transition probabilities.
In fact, the transition probabilities from state~$R$ to $P$ do not depend on the outcome in $A$ at all,
as long as it contains at least one good signal.
However, if all signals from $A$ are bad, the transition probability sharply increases by $\beta^R$.
Why does this nonlinearity help?
Suppose the other player is at state~$R$, and consider how a player wants to play the PDs in $A$.
Her incentive to play $C$ or $D$ in one PD in $A$ crucially depends on the probability of the event that
all signals among the other PDs in $A$ are bad.
Only under that event is her action in this PD pivotal.
Naturally, the event is more likely when she defects among more PDs in $A$.
Therefore, her temptation to defect in one PD in $A$ is largest when she cooperates among all other PDs in $A$.
Note that we apply a similar argument when we check the incentives.
This observation implies that once gEV prevents a player from defecting in one PD in $A$,
it automatically ensures that the player has no incentive to defect in any number of PDs in $A$.
Thus, as long as we consider gEV,
we can effectively ignore all actions which defect among two or more PDs in $A$.
This reduction in the number of incentive constraints is a key to the payoff improvement results brought about by gEV.
\begin{comment}
As we have observed, gEV devises state transitions which subtly depend on the combination of signals in all markets and achieve high average payoffs in equilibria.
One might think that a more complicated strategy improves the payoff,
i.e., automata with more than two states are more efficient than gEV.
However, this is not warranted.
Indeed, adding a new cooperation state, e.g., $R'$
where a player chooses $(C,\ldots,C)$,
is ineffective for achieving an equilibrium
since it gives a player a chance to exploit her opponent and
simply increases her incentive to defect.
Conversely, adding a punishment state $P'$ may lead to an equilibrium.
However, such a strategy decreases the payoff because a player is likely to be punished.
Thus, more complicated strategies do not guarantee higher payoffs.
\end{comment}
Next let us note why our gEV strategy outperforms the sEV strategy.
First of all, if there were no $A$ markets ($d=M$), gEV would coincide with sEV.
Thus, gEV with a few A markets is a natural equilibrium candidate.
Further, perpetual cooperation in the $A$ markets leads to an improvement of the per-market equilibrium payoff.
Relatedly, the presence of the $A$ markets is a key part of our results and reflects some reality. A typical example is airline industries: cutting the fares in every route rarely occurs (except some promotion or campaign), and the fares in routes connecting major cities tend to be almost the same across airlines~\cite{evans:qje:1994}.
Another question is whether gEV achieves an optimal payoff among much more general equilibrium strategies than $d$-partially indifferent equilibria.
Under perfect
monitoring, it is known that the set of equilibrium payoff vectors can be computed.
Dynamic programming can derive the bounds of each player's equilibrium payoffs, i.e., a self-generation set~\cite{abreu:econometrica:1990}.
An equilibrium strategy attaining any payoff vector in that set is guaranteed to exist.
Unfortunately, under private monitoring, this is generally impossible because the recursive structure
under perfect or public monitoring does not persist. Checking the optimality is the immediate future work.
\subsection{Related literature}
In the literature of computer science, AI, and multi-agent systems,
there are many streams associated with repeated games~\cite{burkov:ker:2013}:
The complexity of equilibrium computation~\cite{littman:dss:2005,borgs:geb:2010,andersen:aaai:2013},
multi-agent learning~\cite{blum:agt:2007,conitzer:ml:2007,shoham::2008},
partially observable stochastic games (POSGs)~\cite{hansen:aaai:2004,
doshi:aaai:2006,tennenholtz:AAMAS:2009,mescheder:bcai:2011,wunder:aamas:2011}, and so on.
Among them, POSGs are most relevant because repeated games with private monitoring can be considered as a special case of POSGs.
However, POSGs often impose partial observability on an opponent's strategy (behavior rule)
and not on the opponent's past actions~\cite{mescheder:bcai:2011,wunder:aamas:2011}.
They estimate an optimal (best reply) strategy against an unknown strategy (not always fixed)
from perfectly observable actions (perfect monitoring).
In contrast, we verify whether a given strategy profile is a \textit{mutual} best reply after any history,
i.e., finding an equilibrium, with partially observable actions (private monitoring).
Thus, this paper also addresses understanding the gap between POSGs and
repeated games with private monitoring in economics.
In fact, very few existing works have addressed verifying an equilibrium.
\citeauthor{hansen:aaai:2004}~\shortcite{hansen:aaai:2004}
develop an algorithm that iteratively eliminates dominated strategies.
However, just eliminating dominated strategies is not sufficient to
find an equilibrium. Also, the algorithm is not applicable to
an infinitely repeated game.
\citeauthor{doshi:aaai:2006}~\shortcite{doshi:aaai:2006}
investigate the computational complexity of achieving equilibria in interactive POMDPs.
The economics literature has extensively studied another interesting
class of repeated games where the players observe a common noisy
signal (\textit{public} monitoring). \citeauthor{kobayashi:geb:2012}~\shortcite{kobayashi:geb:2012} studied this version of our model and showed that when
multimarket contact facilitates collusion, the most collusive
equilibrium payoff is attained by a variant of the trigger strategy.
Hence, the players must defect in all markets in the punishment state.
Our model has an opposite implication which rather favors mild punishment.
Alternatively, another important topic has been the validity of the folk theorem.
Most of these works assume perfect or public monitoring; see the textbook \cite{mailath:2006}.
In case of private monitoring, \citeauthor{sugaya:mimeo:2015}~\shortcite{sugaya:mimeo:2015} establishes a general folk theorem.
However, the result is irrelevant to our analysis because the equilibrium strategies are excessively
complicated and require nearly complete patience of the players.
Specializing in multimarket contact, we rather show that the gEV strategy forms
a highly cooperative equilibrium and only requires the players to be mildly patient.
\section{Conclusions}
This paper examined equilibria in multimarket contact with noisy private signals.
To the best of our knowledge, under private monitoring, we are the first to find the multimarket contact effect, i.e.,
the existence of more collusive equilibria than in the single market case.
We constructed the gEV strategy and clarified the structure of the equilibria by identifying the partial indifference condition,
which leads that strategy to the best possible payoff.
In future works, we are particularly interested in an extension to asymmetric markets.
We believe our equilibrium construction easily extends to the case of asymmetric markets,
only at the expense of additional notations.
Under asymmetry, the colluding firms may want to optimally choose the $A$ markets and the $B$ markets,
which makes our problem more complicated.
\appendix
\section{Proof of Theorem~\ref{thm:sEV}}
\begin{proof}
Suppose Eq.~\ref{eq:EV-condition} holds. Let us define
\begin{gather*}
V_R =M\bigg\{ 1-\frac{(1-p)x}{2p-1} \bigg\} , \quad V_P =M\frac{(1-p)y}{2p-1}, \\
\varepsilon_{R}=\frac{(1-\delta )x}{\delta (2p-1)(V_R -V_P )}, \quad
\varepsilon_{P}=\frac{(1-\delta )y}{\delta (2p-1)(V_R -V_P )} .
\end{gather*}
From Eq.~\ref{eq:EV-condition}, we obtain
\begin{align*}
V_R >V_P , \quad 0<M\varepsilon_R \le 1, \quad 0 <M\varepsilon_P \le 1.
\end{align*}
Hence, the sEV strategy with $\varepsilon_R$ and $\varepsilon_P$ above is well-defined.
From the definition of sEV, some calculations verify that
{\small
\abovedisplayskip=3.0pt\belowdisplayskip=3.0pt
\begin{align*}
V_{RR} & = (1-\delta )M +\delta V_{RR} -\delta M(1-p) \varepsilon_R (V_{RR}-V_{RP}), \\
V_{RP} & = -(1-\delta )My +\delta V_{RP} +\delta Mp\varepsilon_P (V_{RR}-V_{RP}), \\
V_{PR} & =(1-\delta )(1+x)M +\delta V_{PR} -\delta Mp \varepsilon_R (V_{PR}-V_{PP}), \\
V_{PP} & = \delta V_{PP} +\delta M (1-p)\varepsilon_P (V_{PR}-V_{PP}).
\end{align*}}
Solving these, we obtain Eq.~\ref{eq: payoffs of constant strategies}.
Let $V_s(d)$ be a player's payoff when he defects in $d$ PDs and then conforms to sEV,
given that the other player's current state is $s \in \{ R, P\}$.
The proof is complete if we show that $V_s \ge V_s(d)$ for any $s$ and any $d$.
It is easy to verify that
{\small
\begin{align*}
V_R(d) & =(1-\delta )(M +dx) +\delta V_R \\ & -\delta \big\{ dp +(M-d)(1-p) \big\} \varepsilon_R (V_R -V_P ).
\end{align*}}
When we regard it as a linear function of $d$, its slope is
\begin{equation*}
(1-\delta )x -\delta (2p-1) \varepsilon_R (V_R -V_P )=0,
\end{equation*}
where the equality follows from the definition of $\varepsilon_R$. Therefore, $V_R(d) =V_R$ for any $d$, as desired.
Similarly, it is easily seen that
{\small
\begin{align*}
V_P(d) &=-(1-\delta )(M -d)y +\delta V_P \\ &+\delta \big\{ (M-d)p+d(1-p) \big\} \varepsilon_P (V_R -V_P ).
\end{align*} }
Its slope as a function of $d$ is
\begin{equation*}
(1-\delta )y -\delta (2p-1) \varepsilon_P (V_R -V_P )=0,
\end{equation*}
where the equality follows from the definition of $\varepsilon_P$.
Therefore, $V_P(d) =V_P$ for any $d$, which completes the proof.
\end{proof}
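The four value equations above are linear, so the claimed solution can also be checked symbolically. The sketch below is an editorial check, assuming SymPy; it substitutes $V_{RR}=V_{PR}=V_R$ and $V_{RP}=V_{PP}=V_P$ with the stated $\varepsilon_R$ and $\varepsilon_P$, and verifies that all four equations hold as formal algebraic identities.
\begin{verbatim}
import sympy as sp

M, p, x, y, delta = sp.symbols('M p x y delta', positive=True)

V_R = M*(1 - (1 - p)*x/(2*p - 1))
V_P = M*(1 - p)*y/(2*p - 1)
eps_R = (1 - delta)*x/(delta*(2*p - 1)*(V_R - V_P))
eps_P = (1 - delta)*y/(delta*(2*p - 1)*(V_R - V_P))

residuals = [
    (1 - delta)*M + delta*V_R - delta*M*(1 - p)*eps_R*(V_R - V_P) - V_R,
    -(1 - delta)*M*y + delta*V_P + delta*M*p*eps_P*(V_R - V_P) - V_P,
    (1 - delta)*(1 + x)*M + delta*V_R - delta*M*p*eps_R*(V_R - V_P) - V_R,
    delta*V_P + delta*M*(1 - p)*eps_P*(V_R - V_P) - V_P,
]
print([sp.simplify(r) for r in residuals])   # expected: [0, 0, 0, 0]
\end{verbatim}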
\section{Proof of Theorem~\ref{thm:worst payoff}}
\begin{proof}
Fix $d\in [1,M-1]$ and $k \in B$.
Consider an arbitrary $d$-partially indifferent belief-free equilibrium.
For any $a_{-k}\in\{C,D\}^{M-1}$, $d$-partially indifferent belief-freeness implies
\begin{align}
\label{eq:belief-free for k at P}
(1-\delta)y = \delta (V_R-V_P) \Big\{ \zeta_P(C,a_{-k}) - \zeta_P(D,a_{-k}) \Big\}.
\end{align}
Analogously to
Eq.~\ref{eq:bf_at_R}, we have, for any $a_{-k}\in \{C,D\}^{M-1}$, $\zeta_P(C,a_{-k}) - \zeta_P(D,a_{-k}) = $
\begin{align*}
(2p-1)\sum_{\omega_{-k}\in\{g,b\}^{M-1}}\Big\{&\xi_P(g,\omega_{-k}) \\
- &\xi_P(b,\omega_{-k}) \Big\} o_2(\omega_{-k}\mid a_{-k}).
\end{align*}
Substituting this into Eq.~\ref{eq:belief-free for k at P}, we obtain
\begin{align*}
\xi_P(g,\omega_{-k}) - \xi_P(b,\omega_{-k}) = \frac{(1-\delta)y}{\delta (V_R-V_P)(2p-1)}.
\end{align*}
Intuitively, this implies the increase of the transition probabilities is constant when
the signal observed in a market changes from the bad to the good one.
Since we can arbitrarily choose $k\in B$, that is, $k$ greater than $M-d$,
there exists a function $\beta^P: \{g,b\}^{M-d}\rightarrow [0,1]$ such that
for all $\omega \in \{g,b\}^M$,
{\small\begin{align*}
\xi_P(\omega) = \frac{(1-\delta)y|\{k\in B\mid\omega_k=g\}|}{\delta(V_R-V_P)(2p-1)} + \beta^P (\omega_1,\ldots,\omega_{M-d})
\end{align*}}
holds.
Note that the transition probability from state $P$ to $R$ is linear with respect to the number of good signals observed in $B$.
Let us consider the incentive condition that a player at state $P$ defects in all the markets.
Suppose that $a^P$ is an action that cooperates in $A$ and defects in $B$, i.e., $a^P=(\overbrace{C,\ldots,C}^{M-d}, \overbrace{D,\ldots, D}^d)$,
and that $a'^P$ is one that defects in all the markets, i.e., $a'^P=(\overbrace{D,\ldots,D}^M)$.
\begin{align*}
V_P = & (1-\delta)(M-d) + \delta V_P + \delta (V_R-V_P)\zeta_P(a^P) \\
\geq & (1-\delta)(M-d)(1+x) + \delta V_P + \delta (V_R-V_P)\zeta_P(a'^P) \\
\leftrightarrow & (1-\delta)(M-d)x \leq \delta (V_R-V_P) \Big\{ \zeta_P(a^P) - \zeta_P(a'^P) \Big\}
\end{align*}
Since the left hand side of this equation is positive, we have $\zeta_P(a^P) > \zeta_P(a'^P)$.
For $a \in \{a^P, a'^P\}$, it holds that
\[
\zeta_P(a) = \frac{(1-\delta)y(1-p)d}{\delta (V_R-V_P)(2p-1)} + \eta_P(a)
\] where $\eta_P(a) = $
{\small
\[
\sum_{(\omega_1,\ldots,\omega_{M-d})\in \{g,b\}^{M-d}} \left[ \prod_{k=1}^{M-d} o_2(\omega_k\mid a)\right] \beta^P(\omega_1,\ldots,\omega_{M-d}).
\]}
Clearly, $\zeta_P(a^P)-\zeta_P(a'^P)=\eta_P(a^P)-\eta_P(a'^P)>0$ holds.
Utilizing those equations, let us transform the expected payoff starting from state $P$.
\begin{align*}
V_P = & M-d + \frac{\delta}{1-\delta}(V_R-V_P)\zeta_P(a^P) \\
= & M-d + \frac{1-p}{2p-1}yd + \frac{\delta}{1-\delta}(V_R-V_P)\eta_P(a^P) \\
\geq & M-d + \frac{1-p}{2p-1}yd + \frac{(M-d)x}{\zeta_P(a^P)-\zeta_P(a'^P)}\eta_P(a^P) \\
= & M-d + \frac{1-p}{2p-1}yd + \frac{(M-d)x}{\eta_P(a^P)-\eta_P(a'^P)}\eta_P(a^P).
\end{align*}
Note that $\eta_P(a)$ must be non-zero. If not, for any $(\omega_1,\ldots,\omega_{M-d})\in \{g,b\}^{M-d}$,
$\beta^P(\omega_1,\ldots, \omega_{M-d})=0$ and for any two $a,a'\in \{C,D\}^{M}$, $\eta_P(a)-\eta_P(a')=0$.
This contradicts $\eta_P(a^P)-\eta_P(a'^P)>0$.
Therefore, there exists a signal profile $(\omega_1,\ldots,\omega_{M-d})\in\{g,b\}^{M-d}$
such that $\beta^P(\omega_1,\ldots,\omega_{M-d})>0$.
We have
\begin{align*}
1 < \frac{\eta_P(a^P)}{\eta_P(a'^P)}
\leq \left( \frac{p}{1-p} \right)^{M-d}.
\end{align*}
Accordingly, we finally obtain
{\small
\begin{align*}
V_P \geq & M-d + \frac{1-p}{2p-1}yd + (M-d)x \frac{
\frac{\eta_P(a^P)}{\eta_P(a'^P)}
}{\frac{\eta_P(a^P)}{\eta_P(a'^P)}-1} \\
\geq & M-d + \frac{1-p}{2p-1}yd + (M-d)x \frac{
\left(\frac{p}{1-p}\right)^{M-d}
}{\left(\frac{p}{1-p}\right)^{M-d}-1} \\
\geq & M-d + \frac{1-p}{2p-1}yd + (M-d)x \frac{
p^{M-d}
}{p^{M-d}-(1-p)^{M-d}}.
\end{align*}}
The proof is complete.
\end{proof}
\section{Proof of Theorem~\ref{prop:TIBF payoff}}
Before proving this theorem, let us introduce the following lemma.
\begin{lemma}\label{lem:regularity}
Let $\pi_{M}$ ($M\geq 1$) be a $2^M\times 2^M$ matrix parameterized by $p\in (1/2,1)$ and
it is inductively defined as follows:
{\small
\[
\pi_{1} =
\begin{pmatrix}
p & 1-p \\
1-p & p
\end{pmatrix},\
\pi_{M+1} =
\begin{pmatrix}
p\pi_{M} & (1-p)\pi_{M} \\
(1-p)\pi_{M} & p\pi_{M}
\end{pmatrix}.
\]}
For each $M$, $\pi_M$ is regular.
\end{lemma}
We place the proof in the next section.
\begin{proof}
Let us consider a totally indifferent belief-free equilibrium and define the following:
Let $V_{s}$ ($s\in\{R,P\}$) be player 1's best response payoff
against player 2's continuation strategy starting from state~$s$.
Let $g_s(a)$ ($a\in\{C,D\}^M$) be the stage game (expected) payoff when players 1 and 2 choose $a$ and $a^s$, respectively.
Let $\zeta_s(a)$ be $\sum_{\omega\in\{g,b\}^M} o_2(\omega \mid a)\xi_s(\omega)$.
This is the probability that player 2's state shifts to the other state when player 1 chooses $a$.
Note that $o_2(\omega \mid a)$ is the probability that player 2 observes the signal profile $\omega$ from the $M$ markets when player 1 chooses $a$.
Fix $k\in [1,M]$. From Definition~\ref{def:TIBF}, for any $a_{-k}\in\{C,D\}^{M-1}$, which is an action profile for the markets other than $k$,
{\small
\begin{align}
V_R & = (1-\delta) g_R(C, a_{-k}) + \delta V_R -\delta(V_R-V_P)\zeta_R(C,a_{-k}) \label{eq:V_R_C}\\
& = (1-\delta) g_R(D, a_{-k}) + \delta V_R -\delta(V_R-V_P)\zeta_R(D,a_{-k}) \notag
\end{align}}
We then have
{\small
\begin{align*}
(1-\delta)x& = \delta (V_R-V_P)\Big\{ \zeta_R(D,a_{-k}) - \zeta_R (C,a_{-k})\Big\} \\
&= \delta (V_R-V_P)\sum_{\omega\in\{g,b\}^M} \Big\{ o_2(\omega \mid D,a_{-k} ) \\
&~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - o_2(\omega \mid C,a_{-k}) \Big\}\xi_R(\omega).
\end{align*}}
For any $\omega_{-k}\in\{g,b\}^{M-1}$,
\begin{align*}
& \Big\{ o_2(\omega \mid D,a_{-k} ) - o_2(\omega \mid C,a_{-k}) \Big\}\xi_R(\omega) \\ & =
\begin{cases}
(2p-1)o_2(\omega_{-k} \mid a_{-k} )\xi_R(b,\omega_{-k}) \mathrm{\ if\ } \omega_k=b, \\
-(2p-1)o_2(\omega_{-k} \mid a_{-k} )\xi_R(g,\omega_{-k}) \mathrm{\ if\ } \omega_k=g.
\end{cases}
\end{align*}
Eq.~\ref{eq:bf_at_R} can hold only when $V_R\neq V_P$ (recall $p\neq \frac{1}{2}$), and it contains, for each
$a_{-k}\in\{C,D\}^{M-1}$, the $2^{M-1}$-dimensional vector
$\left( o_2(\omega_{-k}\mid a_{-k}) \right)_{\omega_{-k}\in\{g,b\}^{M-1}}$.
Collecting these vectors over all $a_{-k}\in\{C,D\}^{M-1}$, we obtain the $2^{M-1}\times 2^{M-1}$ matrix
\[
\left(\left( o_2(\omega_{-k}\mid a_{-k}) \right)_{\omega_{-k}\in\{g,b\}^{M-1}} \right)_{a_{-k}\in\{C,D\}^{M-1}}.
\]
By Lemma~\ref{lem:regularity}, this matrix is regular.
Choosing, for any $\omega_{-k}\in \{g,b\}^{M-1}$,
\begin{align}
\label{eq:affinity}
\xi_R(b,\omega_{-k}) - \xi_R(g, \omega_{-k}) = \frac{(1-\delta)x}{\delta (V_R-V_P)(2p-1)}
\end{align}
clearly satisfies Eq.~\ref{eq:bf_at_R}.
Thus, linear independence ensures the uniqueness of the $2^{M-1}$-dimensional vector
$\left( \xi_R(b,\omega_{-k}) - \xi_R(g, \omega_{-k}) \right)_{\omega_{-k}\in\{g,b\}^{M-1}}$.
Because Eq.~\ref{eq:affinity} holds for an arbitrary $k$, there exists $\beta^R\geq 0$ such that
\begin{align}
\label{eq:3}
\xi_R(\omega)= \frac{(1-\delta)x|\{k\mid\omega_k=b\}|}{\delta(V_R-V_P)(2p-1)} + \beta^R
\end{align}
holds for all $\omega\in\{g,b\}^M$.
This follows from the fact that, given the signal profile $\omega_{-k}$,
the increase in the transition probability is constant when the signal from a market changes from good to bad.
The transition probability from $R$ to $P$ is therefore linear in the number of bad signals
observed across all the markets.
Suppose that $a_{-k}=(C,\ldots,C)$. Eq.~\ref{eq:V_R_C} is transformed into
\begin{align*}
V_R &= (1-\delta)M + \delta V_R -\delta(V_R-V_P)\zeta_R(a^R) \\
&= M - \frac{\delta}{1-\delta}(V_R-V_P)\zeta_R(a^R) \\
&= M - \frac{\delta}{1-\delta}(V_R-V_P)\Big( \frac{(1-\delta)x\cdot M(1-p)}{\delta(V_R-V_P)(2p-1)} + \beta^R \Big)\\
&\leq M \Big(1- \frac{1-p}{2p-1}x\Big) = M V^{EV}
\end{align*} The proof is complete. \end{proof}
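As a numerical illustration of Eq.~\ref{eq:3} (not part of the proof), the following Python sketch checks, for a small $M$ and arbitrary sample parameter values, that the transition probabilities $\xi_R$ defined there make player 1 indifferent between $C$ and $D$ in every market. The per-market monitoring structure used below (independent signals across markets, a bad signal with probability $1-p$ after $C$ and probability $p$ after $D$) is an assumption of this sketch only.
\begin{verbatim}
# Illustrative check only; the monitoring structure and the parameter
# values below are assumptions of this sketch, not taken from the paper.
import itertools

M, p, delta, x, VR, VP, beta = 3, 0.7, 0.9, 1.0, 10.0, 4.0, 0.05
c = (1 - delta) * x / (delta * (VR - VP) * (2 * p - 1))

def xi_R(omega):            # Eq. (3): affine in the number of bad signals
    return c * sum(1 for w in omega if w == 'b') + beta

def o2(omega, a):           # assumed independent per-market monitoring
    prob = 1.0
    for w, ai in zip(omega, a):
        pb = (1 - p) if ai == 'C' else p      # prob. of a bad signal
        prob *= pb if w == 'b' else 1 - pb
    return prob

def zeta_R(a):
    return sum(o2(om, a) * xi_R(om) for om in itertools.product('gb', repeat=M))

target = (1 - delta) * x / (delta * (VR - VP))
for k in range(M):
    for rest in itertools.product('CD', repeat=M - 1):
        aC = rest[:k] + ('C',) + rest[k:]
        aD = rest[:k] + ('D',) + rest[k:]
        assert abs(zeta_R(aD) - zeta_R(aC) - target) < 1e-12
print("player 1 is indifferent in every market")
\end{verbatim}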
\section{Proof of Lemma~\ref{lem:regularity}}
\begin{proof}
$\pi_1$ is clearly regular (the determinant is non-zero) because $p\neq 1/2$.
Fix $M \geq 1$. Suppose that $\pi_M$ is regular.
So, if a profile of weights $(\lambda_k)_{k=1}^{2^M}$ satisfies
$\sum_{k=1}^{2^M}\lambda_k\pi_M^k=\mathbf{0}$, it must hold that $\lambda_1=\lambda_2=\cdots=\lambda_{2^M}=0$.
Here $\pi_M^k$ denotes the $k$-th row of $\pi_M$, viewed as a $2^M$-dimensional vector, so that
\[
\pi_M =
\begin{pmatrix}
\pi_M^1 \cdots \pi_M^k \cdots \pi_M^{2^M} \\
\end{pmatrix}^T.
\]
Next, by the inductive definition of $\pi_{M+1}$, we have
\[
\pi_{M+1} =
\begin{pmatrix}
p\pi_M^1 & (1-p)\pi_M^1 \\
\vdots & \vdots \\
p\pi_M^{2^M} & (1-p)\pi_M^{2^M} \\
(1-p)\pi_M^1 & p\pi_M^1 \\
\vdots & \vdots \\
(1-p)\pi_M^{2^M} & p\pi_M^{2^M} \\
\end{pmatrix}.
\]
Suppose that a profile of $2^{M+1}$ weights $(\mathbf{\mu},\mathbf{\nu})$ satisfies
\begin{align*}
(\mu_1,\ldots,\mu_{2^{M}}, \nu_1,\ldots,\nu_{2^M}) \pi_{M+1} = \mathbf{0};
\end{align*}
we show that all the weights must vanish. This equation decomposes into the following two equations involving the rows of $\pi_{M}$:
{\small
\begin{align*}
p \sum_{k=1}^{2^M} \mu_k \pi_{M}^k + (1-p)\sum_{k=1}^{2^M} \nu_k \pi_{M}^k
&= \sum_{k=1}^{2^M} \{p\mu_k+(1-p)\nu_k\}\pi_{M}^k = \mathbf{0}, \\
(1-p)\sum_{k=1}^{2^M} \mu_k \pi_{M}^k + p\sum_{k=1}^{2^M} \nu_k \pi_{M}^k
&= \sum_{k=1}^{2^M} \{(1-p)\mu_k+p\nu_k\}\pi_{M}^k = \mathbf{0} .
\end{align*}}
Since $\pi_M$ is regular, it holds that for any $k$,
\begin{align*}
p\mu_k+(1-p)\nu_k = (1-p)\mu_k+p\nu_k = 0.
\end{align*}
Subtracting these two equations gives $(2p-1)(\mu_k-\nu_k)=0$, hence $\mu_k=\nu_k$ for all $k$,
and then $\mu_k=\nu_k=0$ for all $k$.
Accordingly, $\pi_{M+1}$ is regular. The proof is complete.
\end{proof}
\end{document} |
\begin{document}
\sloppy
\title{\bf Appendix to the paper ``Randomly Growing Braid on Three Strands and the Manta Ray''}
\author{Jean {\sc Mairesse}
\thanks{LIAFA, CNRS-Universit\'e Paris 7, case
7014, 2, place Jussieu, 75251 Paris Cedex 05, France. E-mail: {\tt [email protected]}}
\ and Fr\'ed\'eric {\sc Math\'eus}
\thanks{LMAM, Universit\'e de Bretagne-Sud, Campus de Tohannic, BP
573, 56017 Vannes, France. E-mail: {\tt [email protected]}}}
\maketitle
\begin{abstract}
This paper is an appendix to the paper ``Randomly Growing Braid on Three Strands and the Manta
Ray'' by J. Mairesse and F. Math\'eus (to appear in the Annals of Applied Probability).
It contains the details of some computations, and the proofs of some
results concerning the examples treated there, as well as some
extensions.
\end{abstract}
\textsl{Keywords:} Braid group $B_3$, random walk, harmonic measure, drift, entropy,
Green function, dihedral Artin group
\textsl{AMS classification (2000):} Primary 20F36, 20F69, 60B15;
Secondary 60J22, 82B41, 37M25.
\section{Introduction}
\label{se-intro}
Consider the braid group $B_3=\pres{a,b}{aba=bab}$ and the nearest
neighbor random walk defined by a probability
$\nu$ with support $S\: =\: \{a,a^{-1},b,b^{-1}\}$.
Let $(X_n)_n$ be a realization of the random walk.
In order
to understand the asymptotic behavior of $X_n$, the first step is to
study its complexity, i.e. the length $|X_n|$ of $X_n$ with respect to
the generating system $S$. To this aim, a main quantity of interest is
the growth rate of the length: $\gamma=\lim_n |X_n|/n$.
\par
In the paper ``Randomly Growing Braid on Three Strands and the Manta Ray'', we compute explicitly $\gamma$ for any
distribution $\nu$ on
$\{a,a^{-1},b,b^{-1}\}$. More precisely, given $\nu$, we define
eight polynomial equations of degree 2 over eight
indeterminates, see \cite[(30)]{MaManta}, which are shown to admit a
unique solution $r$. The
rate $\gamma$ is then obtained as an explicit linear functional of
$r$, see \cite[(29)]{MaManta}.
\par
Then we mention that the polynomial equations can be
completely solved to provide a closed form formula for $\gamma$ under
the following two natural symmetries:
(i) $\nu(a)=\nu(a^{-1})$ and $\nu(b)=\nu(b^{-1})$; (ii)
$\nu(a)=\nu(b)$ and $\nu(a^{-1})=\nu(b^{-1})$.
This is stated in \cite[Prop. 4.4]{MaManta} and \cite[Prop. 4.5]{MaManta}.
In the present Appendix, we give the proof of these two propositions, see
Sections \ref{se-prop44} and \ref{se-prop45}.
\par
The family of Artin groups of dihedral type $I_2(n)$ is a natural
generalization of the braid group $B_3$ and our techniques adapt to compute
the drift. The case of the simple random walk is stated in \cite[Prop. 5.4]{MaManta} and the proof is given in Section \ref{se-prop54}.
\par
The last Section contains additional information concerning the
random walk on the quotient space $B_{3}/Z$ of $B_{3}$ by its center
$Z$. We compute the entropy, the minimal
positive harmonic functions, and the Green function. A central limit
theorem is also discussed.
This appendix is self-contained in the following sense: the examples
treated and the results proved are specified. On the other
hand, we make a
constant use of notations, notions or results from \cite{MaManta} which are neither
redefined nor reproved. Hence, it is certainly not a good idea to read this
appendix independently of the parent paper.
\section{Proof of Proposition 4.4} \label{se-prop44}
We prove the following statement which is Prop. 4.4 in \cite{MaManta}.
The drifts $\gamma_{\Sigma}$, $\gamma_{\Delta}$ and $\gamma_{S_+}$ are defined in \cite[Eq. (14)]{MaManta}.
{\bf Proposition 4.4.}
{\it Assume that $\nu(a)=\nu(a^{-1})=p$,
$\nu(b)=\nu(b^{-1})=1/2-p$, with $p\in (0,1/4]$. Let $u$ be the
smallest root in $(0,1)$ of the polynomial
\[
P=2(4p-1)X^3 +
(24p^2-18p+1)X^2+ p(-12p+7)X + p(2p-1)\:.
\]
The various drifts are given by:
\begin{eqnarray*}
\gamma(p)=\gamma_{\Sigma}(p)=-2\gamma_{\Delta}(p)=\frac{2}{3}\gamma_{S_+}(p)=p+(1-4p)u\:.
\end{eqnarray*}
For $p\in [1/4,1/2)$, we have $\gamma(p)=\gamma(1/2-p)$ and similarly
for $\gamma_{\Sigma}, \gamma_{S_+},$ and $\gamma_{\Delta}$.}
\begin{proof}
Assume that $\nu(a)=\nu(a^{-1})=p$ and $\nu(b)=\nu(b^{-1})=q=1/2-p$.
Let $r$ be the unique solution in
$\{x \in ({\Blackboardfont R}_+^*)^{\Sigma} \mid \sum_{u\in \Sigma} x(u) =1\}$ to
the Traffic Equations \cite[(30)]{MaManta}.
By symmetry, we should have:
\[
r(a)= r(ba\Delta), \quad r(b)=r(ab\Delta), \quad r(ab)=r(b\Delta),
\quad r(ba)=r(a\Delta) \:.
\]
In particular, we can rewrite the Traffic Equations in terms
of $r(a), r(b), r(ab),$ and $r(ba)$ only.
Observe also that: $R(a)+R(b)=R(ab)+R(ba)=1/2$.
Setting $q=1/2-p$ in \cite[(28)]{MaManta}, we
get: $\gamma_{\Delta}= -(1/2-p)R(a) -pR(b)$. In particular,
$\gamma_{\Delta}<0$. Now using \cite[(29)]{MaManta}, we get:
$\gamma= 2(1/2-p)R(a)+2pR(b)$. Since $R(a)+R(b)=1/2$, we deduce that:
\[
\gamma= p + (1-4p)R(a) \:.
\]
Using the Equations in \cite[(28)]{MaManta}, we now obtain:
\[
\gamma_{\Sigma}=-2\gamma_{\Delta} = \gamma, \quad \gamma_{S_+} = 3p/2
+ (3/2-6p)R(a)\:.
\]
The only remaining point is to determine
$R(a)$. To that purpose, we need to transform the Traffic
Equations \cite[(30)]{MaManta} by switching to new unknowns.
Recall that $R(a)=r(a)+r(a\Delta)=r(a)+ r(ba)$.
Set $R(1)= 2pr(a) + (1-2p)r(b)$ and $R(2)= r(a)+r(b)$. We have:
\[
\left[ \begin{array}{cccc}
1 & 1 & 0 & 0 \\
2p & 0 & 1-2p & 0 \\
1 & 0 & 1 & 0 \\
1 & 1 & 1 & 1
\end{array}\right] \left[ \begin{array}{c}
r(a) \\
r(ba) \\
r(b) \\
r(ab)
\end{array} \right] = \left[ \begin{array}{c}
R(a) \\
R(1) \\
R(2) \\
1/2
\end{array} \right]\:.
\]
Assume that $p\neq 1/4$. Then, the above matrix is invertible, which
enables us to write $r(a),r(b),r(ab),$ and $r(ba)$ as functions of
$R(a),R(1),$ and $R(2)$. Substituting into the Traffic Equations, we get a
new set of Equations in the unknowns $R(a), R(1)$, and $R(2)$.
Select in this new set, the Equations originating from the
Equations for $r(a),
r(b)$, and $r(ab)$ in \cite[(30)]{MaManta}.
Using Maple, solve these three Equations in the unknowns $R(a), R(1)$, and
$R(2)$. We obtain that $R(a)$ is a root of the polynomial:
\[
P=2(4p-1)X^3 +
(24p^2-18p+1)X^2+ p(-12p+7)X + p(2p-1)\:.
\]
By using that $R(2)$ should be less than 1, we obtain that for
$p<1/4$, $R(a)$ should be equal to the smallest root of $P$. This
completes the proof.
\end{proof}
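As a numerical illustration (not part of the proof), the following Python sketch computes, for the arbitrary sample value $p=0.1$, the smallest root $u\in(0,1)$ of the polynomial $P$ and the corresponding drift $\gamma(p)=p+(1-4p)u$.
\begin{verbatim}
# Numerical illustration of the proposition; the value p = 0.1 is arbitrary.
import numpy as np

p = 0.1                                    # any p in (0, 1/4]
coeffs = [2 * (4 * p - 1),                 # P = 2(4p-1)X^3 + (24p^2-18p+1)X^2
          24 * p**2 - 18 * p + 1,          #     + p(-12p+7)X + p(2p-1)
          p * (-12 * p + 7),
          p * (2 * p - 1)]
u = min(r.real for r in np.roots(coeffs)
        if abs(r.imag) < 1e-12 and 0 < r.real < 1)
print("gamma =", p + (1 - 4 * p) * u)      # = gamma_Sigma = -2*gamma_Delta
\end{verbatim}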
\section{Proof of Proposition 4.5}\label{se-prop45}
We now prove the following statement, which is Prop. 4.5 in \cite{MaManta}:
{\bf Proposition 4.5}
{\it Assume that $\nu(a)=\nu(b)=p$,
$\nu(a^{-1})=\nu(b^{-1})=1/2-p$, with $p\in (0,1/2)$. We have:
\begin{eqnarray*}
\gamma_{\Sigma}(p) & = & \frac{-1+ \sqrt{16p^2-8p+5}}{4}, \quad
\gamma_{S^+}(p) \ = \ \frac{4p^2+p+1-3p\sqrt{16p^2-8p+5}}{2(1-4p)} \\
\gamma_{\Delta}(p) & = & \frac{12p^2
-5p+1-p\sqrt{16p^2-8p+5}}{2(1-4p)}\:.
\end{eqnarray*}
Finally:
\begin{equation*}
\gamma(p) =
\max \bigl[ 1-4p ,\frac{(1-2p)(-1-4p+\sqrt{5-8p+16p^2})}{2(1-4p)}, \frac{p
(-3+4p+\sqrt{5-8p+16p^2})}{-1+4p},-1+4p \bigr]\: .
\end{equation*} }
We give two proofs of this proposition. The first one is in
the same spirit as the proof of \cite[Prop. 4.4]{MaManta}, namely solving
the Traffic Equations \cite[(30)]{MaManta} thanks to the simplifications
provided by the symmetry $\nu(a)=\nu(b)$, $\nu(a^{-1})=\nu(b^{-1})$.
The second proof relies on the fact that, under the above symmetry,
the random process $(\widehat{X}_n)_n$ is Markovian and is a NNRW on the free
product ${\Blackboardfont Z}/3{\Blackboardfont Z}*{\Blackboardfont Z}/3{\Blackboardfont Z}$ for which the harmonic measure and the drift are known,
see \cite{MaMa}.
{\it First proof} - We assume now that
$p=q=\nu(a)=\nu(b)=1/2-\nu(a^{-1})=1/2-\nu(b^{-1})$.
Let $r$ be the unique solution
in $\{x \in ({\Blackboardfont R}_+^*)^{\Sigma} \mid \sum_{u\in \Sigma} x(u) =1\}$
to the Traffic Equations \cite[(30)]{MaManta}.
Again, by symmetry, we should have:
\[
r(a)= r(b), \quad r(ab)=r(ba), \quad r(a\Delta)=r(b\Delta),
\quad r(ab\Delta)=r(ba\Delta) \:.
\]
Moreover, $S_a=S_b=1/2$, $R(a)=R(b)$ and $R(ab)=R(ba)=1/2-R(a)$.
Therefore, according to \cite[(28)]{MaManta} and \cite[(29)]{MaManta}, the various drifts are given by
\begin{equation}\label{gam-simple1}
\gamma_{\Sigma}=1/2-p+(4p-1)R(a)\, , \: \gamma_{S_+}=1/2-2p+6pR(a)\, , \: \gamma_{\Delta}=2p-1/2-2pR(a)\, ,
\end{equation}
\begin{equation}\label{gam-simple2}
\text{for}\:p\geq 1/4\, ,\:\gamma = \begin{cases} 4p -1 & \text{if } \gamma_{\Delta} \geq 0
\\
4pR(a) & \text{if } \gamma_{\Delta}
< 0 \end{cases}\:\: ,\:\text{and for}\:p\leq1/4\, , \:\gamma(p)=\gamma(1/2-p)\:.
\end{equation}
The computation of $R(a)$ from the Traffic Equations \cite[(30)]{MaManta} in this case
turns out to be simpler than in the proof of \cite[Prop. 4.4]{MaManta}. Add the first
and the fifth equations in \cite[(30)]{MaManta}. Then $R(a)$ is a root of the
following polynomial of degree two:
\[
P\:=\:4(1-4p)X^2\:+\:2(4p-3)X\:+\:1\: .
\]
For $p\in (0,1/2)$, the only root of $P$ which lies between $0$ and $1$
is
\[
R(a)\:=\:\frac{3-4p-\sqrt{16p^2-8p+5}}{4(1-4p)}\:.
\]
Substituting in \eref{gam-simple1} and \eref{gam-simple2} completes the proof.
{\it Second proof} - We give now another way for computing the
drift $\gamma_{\Sigma}$. Consider the subgroup $H=\{1,\Delta\}$ of
$B_3/Z$. Observe that $H$ is not a normal subgroup.
For instance, the left-class $aH=\{a,a\Delta\}$ is
different from the right-class $Ha=\{a,b\Delta\}$.
Let $C_3$ be the left-quotient of $B_3/Z$ with respect to $H$.
The elements of $C_3$ are the left-classes $\{g,g\Delta\}$.
The group $B_3$ acts on $C_3$ by left multiplication.
Denote by ${\mathcal S}(C_3,S)$ the Schreier graph with respect to this action.
The set of nodes is $C_3$ and the set of
arcs is defined by:
\begin{equation}\label{schreier}
C \longrightarrow \ D \ \text{ in } \ {\mathcal S}(C_3,S) \qquad \text{if}
\qquad \exists c\in C, d\in D, \
c \longrightarrow \ d \ \text{ in } \ {\mathcal X}(B_3/Z,S) \:.
\end{equation}
Consider the free product ${\Blackboardfont Z}/3{\Blackboardfont Z}\star {\Blackboardfont Z}/3{\Blackboardfont Z}$ with the canonical
set of generators $\widetilde{S}=\{c,c^{-1},d,d^{-1}\}$.
The Cayley graph ${\mathcal X}({\Blackboardfont Z}/3{\Blackboardfont Z}\star {\Blackboardfont Z}/3{\Blackboardfont Z},\widetilde{S})$ and the
Schreier graph ${\mathcal S}(C_3,S)$ are isomorphic as unlabelled graphs,
see Figure \ref{fi-b3Z}. Below, we identify the element $\{g,g\Delta\}$
with $\widehat{g}$ where $\widehat{g}\Delta^{0/1}$ is the Garside normal form
of $g\in B_3/Z$.
\begin{figure}
\caption{The graphs ${\mathcal X}({\Blackboardfont Z}/3{\Blackboardfont Z}\star {\Blackboardfont Z}/3{\Blackboardfont Z},\widetilde{S})$ and ${\mathcal S}(C_3,S)$.}
\label{fi-b3Z}
\end{figure}
Let $\nu$ be a probability distribution on $S=\{a,a^{-1},b,b^{-1}\}$.
Let $(X_n)_n$ be a realization of $(B_3,\nu)$. View $(X_n)_n$ as a
random walk on ${\mathcal X}(B_3,S)$. The sequence $(p(X_n))_n$ is a realization
of the random walk $(B_3/Z,\mu)$ where $\mu=\nu\circ p^{-1}$.
It is a random walk on ${\mathcal X}(B_3/Z,S)$. Recall that $\widehat{X}_n\Delta^{k_n}$
denotes the Garside normal form of $X_n$. With the identification above,
the random process $(\widehat{X}_n)_n$ evolves on ${\mathcal S}(C_3,S)$ and is the
process induced by $(p(X_n))_n$. This random process $(\widehat{X}_n)_n$
is {\it a priori} not Markovian, but it is Markovian if
\[
\forall C,D \in C_3, \forall c_1,c_2\in C,\quad
P\{ p(X_{n+1}) \in D \mid p(X_n) = c_1\} = P\{ p(X_{n+1}) \in D \mid
p(X_n) = c_2\} \:.
\]
Clearly, this holds if and only if $\nu(a)=\nu(b)=p$ and $ \nu(a^{-1})=\nu(b^{-1})=1/2-p$,
which is precisely what we assume.
Define ${\mathcal S}(C_3,\mu)$ as the graph ${\mathcal S}(C_3,S)$ with labels in
$[0,1]$ such that:
\[
u \stackrel{\mu(a)}{\longrightarrow} v \ \text{ in } \ {\mathcal S}(C_3,\mu)
\qquad \text{if} \qquad u \stackrel{a}{\longrightarrow} v \ \text{
in } \ {\mathcal S}(C_3,S) \:.
\]
Consider the group ${\Blackboardfont Z}/3{\Blackboardfont Z}\star {\Blackboardfont Z}/3{\Blackboardfont Z}$ with generators
$\widetilde{S}=\{c,c^{-1},d,d^{-1}\}$, and the probability measure
$\tilde{\mu}$ defined on $\widetilde{S}$ by $\tilde{\mu}(c)=\tilde{\mu}(d)=\nu(a)=\nu(b),$ and
$\tilde{\mu}(c^{-1})=\tilde{\mu}(d^{-1})=\nu(a^{-1})=\nu(b^{-1})$.
Define the labelled graph ${\mathcal X}({\Blackboardfont Z}/3{\Blackboardfont Z}\star {\Blackboardfont Z}/3{\Blackboardfont Z},\tilde{\mu})$ accordingly.
Then ${\mathcal S}(C_3,\mu)$ and ${\mathcal X}({\Blackboardfont Z}/3{\Blackboardfont Z}\star {\Blackboardfont Z}/3{\Blackboardfont Z},\tilde{\mu})$ are isomorphic
as labelled graphs. In particular, $(\widehat{X}_n)_n$ behaves like the
random walk $({\Blackboardfont Z}/3{\Blackboardfont Z}\star {\Blackboardfont Z}/3{\Blackboardfont Z},\tilde{\mu})$. Let us be more precise.
For $\xi\in\widetilde{S}$, we set $\imath(\xi)=1$ if $\xi = c$ or $c^{-1}$
and $\imath(\xi)=2$ if $\xi = d$ or $d^{-1}$. Recall that $T=\{a,b,ab,ba\}\subset B_3$.
Define
\begin{eqnarray*}
L = \{\xi_0\cdots\xi_l \in T^*\mid
\text{Last}(\xi_{i})\:=\:\text{First}(\xi_{i+1})\}, && L^{\infty} = \{\xi_0\cdots \in T^{{\Blackboardfont N}}\mid
\text{Last}(\xi_{i})\:=\:\text{First}(\xi_{i+1})\} \\
\widetilde{L} = \{\xi_0\xi_1\cdots \xi_l \in
\widetilde{S}^* \mid
\imath(\xi_{i})\: \neq \:\imath(\xi_{i+1})\}, && \widetilde{L}^{\infty} = \{\xi_0\xi_1\cdots \in
\widetilde{S}^{{\Blackboardfont N}} \mid
\imath(\xi_{i})\: \neq \:\imath(\xi_{i+1})\} \:.
\end{eqnarray*}
Set $\lim_n \widehat{X}_n = \widehat{X}_{\infty} = \widehat{x}_0 \widehat{x}_1 \widehat{x}_2\cdots$,
with $\widehat{x}_i\in T$. Let
$\mu^{\infty}$ be the law of $\widehat{X}_{\infty}$; it is a measure
on $L^{\infty}$. Let $\tilde{\mu}^{\infty}$
be the harmonic measure of $({\Blackboardfont Z}/3{\Blackboardfont Z}\star {\Blackboardfont Z}/3{\Blackboardfont Z},\tilde{\mu})$; it is a measure
on $\widetilde{L}^{\infty}$.
Let $\widetilde{r}$ be the unique
solution in $\mathring{{\mathcal B}}$ of the Traffic Equations of
$({\Blackboardfont Z}/3{\Blackboardfont Z}\star {\Blackboardfont Z}/3{\Blackboardfont Z},\tilde{\mu})$, see \cite[\S 4.3]{MaMa}.
Define
\[
r(a)\:=\:r(b)\:=\:\tilde{r}(c)\:=\: \tilde{r}(d)\: ,\:
r(ab)\:=\:r(ba)\:=\:\tilde{r}(c^2)\:=\: \tilde{r}(d^2)\: ,
\]
\[
r(1)\:=\:r(a)+r(b)\: ,\:\text{and}\quad\:r(2)\:=\:r(ab)+r(ba) \:.
\]
Then, we have, $\forall u_1\cdots u_l \in L$,
\[
\mu^{\infty} (u_1u_2u_3\cdots u_l T^{{\Blackboardfont N}}) = \tilde{\mu}^{\infty}
(c^{|u_1|_{S}}d^{|u_2|_{S}}c^{|u_3|_{S}} \cdots
\widetilde{\Sigma}^{{\Blackboardfont N}} )
= (1/2) r(|u_1|_{S})\cdots r(|u_l|_{S})\:.
\]
In particular
\begin{equation}\label{eq-ouf}
\gamma_{\Sigma} = \lim_{n\rightarrow \infty} \frac{|\widehat{X}_n|_T}{n} =
\lim_{n\rightarrow \infty} \frac{|W_n|_{\widetilde{S}}}{n}=
pr(1) + \bigl(\frac{1}{2}-p\bigr)r(2) \:,
\end{equation}
where $(W_n)_n$ is a realization of the random walk $({\Blackboardfont Z}/3{\Blackboardfont Z}\star {\Blackboardfont Z}/3{\Blackboardfont Z},\tilde{\mu})$.
See \cite[Corollary 3.6]{MaMa} for the third equality in Eq. \eref{eq-ouf}.
The vector $\tilde{r}$ is computed in \cite[\S 4.3]{MaMa}:
\begin{equation}\label{eq-z3z3a=b}
\tilde{r}(c)=\tilde{r}(d)=\frac{4p-3+\sqrt{16p^2-8p+5}}{4(4p-1)}, \quad
\tilde{r}(c^2)=\tilde{r}(d^2)=\frac{4p+1-\sqrt{16p^2-8p+5}}{4(4p-1)}\:.
\end{equation}
We get $\gamma_{\Sigma}\:=\:(-1 + \sqrt{16p^2-8p+5})/4$. We deduce
the other drifts easily.
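The closed forms above can also be checked symbolically. The following Python (SymPy) sketch, given for illustration only, verifies that Eq. \eref{eq-ouf} together with the values \eref{eq-z3z3a=b} gives $\gamma_{\Sigma}=(-1+\sqrt{16p^2-8p+5})/4$, and that the expression for $R(a)$ obtained in the first proof is indeed a root of the quadratic $P$.
\begin{verbatim}
# Symbolic sanity check; illustration only.
import sympy as sp

p = sp.symbols('p', positive=True)
D = sp.sqrt(16*p**2 - 8*p + 5)
rc  = (4*p - 3 + D) / (4*(4*p - 1))        # tilde-r(c) = tilde-r(d)
rc2 = (4*p + 1 - D) / (4*(4*p - 1))        # tilde-r(c^2) = tilde-r(d^2)
gamma_sigma = p*(2*rc) + (sp.Rational(1, 2) - p)*(2*rc2)    # Eq. (eq-ouf)
print(sp.simplify(gamma_sigma - (-1 + D)/4))                # 0

Ra = (3 - 4*p - D) / (4*(1 - 4*p))         # R(a) from the first proof
print(sp.simplify(4*(1 - 4*p)*Ra**2 + 2*(4*p - 3)*Ra + 1))  # 0
\end{verbatim}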
\section{Proof of Proposition 5.4}\label{se-prop54}
We now prove the following statement, which is Prop. 5.4 in \cite{MaManta}:
{\bf Proposition 5.4}
{\it Consider the simple random walk $(A_k,\nu)$ with
$\nu(a)=\nu(a^{-1})=\nu(b)=\nu(b^{-1})=1/4$. The drifts $\gamma_{\Sigma}$ and
$\gamma_{\Delta}$ are given by
\begin{equation}
\gamma_{\Sigma} = \frac{1-x_k}{2}, \qquad \gamma_{\Delta}=
-\frac{1-x_k}{4}\:.
\end{equation}
Let $\gamma$ be the drift of the length with respect to the natural
generators $\{a,b,a^{-1},b^{-1}\}$. We have
\begin{equation}
\gamma = \begin{cases}
(1-x_k) \bigl[\ \sum_{i=1}^{j-1} i F_i(x_k) + (j/2) F_{j}(x_k)\ \bigr]
& \text{if } \ k=2j \\
(1-x_k) \bigl[ \ \sum_{i=1}^{j} i F_i(x_k) \ \bigr]& \text{if } \ k=2j+1
\end{cases}\:.
\end{equation} }
\begin{proof}
Consider the group ${\Blackboardfont Z}/k{\Blackboardfont Z}\star {\Blackboardfont Z}/k{\Blackboardfont Z}$
with generators $\widetilde{S}=\{c,c^{-1},d,d^{-1}\}$. Let $(X_n)_n$ be a
realization of the simple random walk on $A_k$, and recall that
we write $\widehat{X}_n\Delta^{k_n}$ for the Garside normal form of $X_n$.
Denote by $\mu$ and $\tilde{\mu}$ the uniform probability measures on
$S$ and $\widetilde{S}$ respectively.
\par
If $k$ is even, then the unlabelled Cayley graph
${\mathcal X}(A_k/Z,S)$ is isomorphic to the unlabelled Cayley graph
${\mathcal X}({\Blackboardfont Z}/k{\Blackboardfont Z}\star {\Blackboardfont Z}/k{\Blackboardfont Z},\widetilde{S})$ (see Figure \ref{fi-A4}) so the
simple random walks $(A_k/Z,\mu)$ and $({\Blackboardfont Z}/k{\Blackboardfont Z}\star {\Blackboardfont Z}/k{\Blackboardfont Z},\tilde{\mu})$
are isomorphic.
\begin{figure}
\caption{The Cayley graphs ${\mathcal X}(A_k/Z,S)$ and ${\mathcal X}({\Blackboardfont Z}/k{\Blackboardfont Z}\star {\Blackboardfont Z}/k{\Blackboardfont Z},\widetilde{S})$ ($k$ even).}
\label{fi-A4}
\end{figure}
For $k$ odd, the structure of ${\mathcal X}(A_k/Z,S)$ is more
twisted. This is illustrated in Figure \ref{fi-A5b}
(see also Figure \ref{fi-b3Z} (left)).
\begin{figure}
\caption{The Cayley graph ${\mathcal X}(A_k/Z,S)$ for $k$ odd.}
\label{fi-A5b}
\end{figure}
Let $C_k$ be the left-quotient of
$A_k/Z$ by the (non-normal) subgroup $H=\{1,\Delta\}$, and define
the Schreier graph ${\mathcal S}(C_k,S)$ as in \eref{schreier}.
Clearly ${\mathcal S}(C_k,S)$ is isomorphic to the Cayley graph
${\mathcal X}({\Blackboardfont Z}/k{\Blackboardfont Z}\star {\Blackboardfont Z}/k{\Blackboardfont Z},\widetilde{S})$
(as unlabelled graphs). This is illustrated in Figure \ref{fi-A5ter}.
\begin{figure}
\caption{The graphs ${\mathcal X}({\Blackboardfont Z}/k{\Blackboardfont Z}\star {\Blackboardfont Z}/k{\Blackboardfont Z},\widetilde{S})$ and ${\mathcal S}(C_k,S)$.}
\label{fi-A5ter}
\end{figure}
Therefore the simple random walks on the two graphs ${\mathcal S}(C_k,S)$ and
${\mathcal X}({\Blackboardfont Z}/k{\Blackboardfont Z}\star {\Blackboardfont Z}/k{\Blackboardfont Z},\widetilde{S})$ are isomorphic.
\par
In both cases, the Markovian random process $\widehat{X}_n$ behaves
like the simple random walk $({\Blackboardfont Z}/k{\Blackboardfont Z}\star {\Blackboardfont Z}/k{\Blackboardfont Z},\tilde{\mu})$.
Adapting the end of the second proof of \cite[Prop. 4.5]{MaManta}
and using the results of \cite[\S 4.4]{MaMa} leads to
$\gamma_{\Sigma}$, hence to the other drifts.
\end{proof}
\section{Extensions}\label{se-exte}
One can retrieve from the harmonic measure $\mu^{\infty}$ other
quantities of interest for the random walk $(B_3/Z,\mu)$: (a) the entropy,
(b) the minimal positive harmonic functions, or (c) the Green function.
Consider for instance the entropy $h$. For NNRW on free products of finite
groups, or on 0-automatic pairs, a formula was available for $h$ as a
simple function of $r$, the unique solution to the Traffic
Equations, see \cite{mair04,MaMa}. Here the situation is more complex
and $h$ can only be expressed as a limit of functions of $r$. However
$h$ can be computed with an arbitrary prescribed precision.
In the three above cases (a), (b), and (c), the key is to determine the Radon-Nikodym derivatives
$du\circledast \mu^{\infty}/d\mu^{\infty} (\cdot )$ for $u\in {\mathcal G}$.
Fix $u=u_1\cdots u_k \in {\mathcal G}$ and $\xi=\xi_1\xi_2\cdots \in
{\mathcal G}^{\infty}$. We have, using \cite[(21)]{MaManta},
\begin{equation}\label{eq-limit}
\frac{du\circledast \mu^{\infty}}{d\mu^{\infty}} (\xi) = \lim_n
\frac{\mu^{\infty}(u^{-1}\circledast \xi_1\cdots \xi_n
T^{{\Blackboardfont N}})}{\mu^{\infty}(\xi_1\cdots \xi_n T^{{\Blackboardfont N}})} = \lim_n
\frac{\alpha{\mathcal M}(w_1\cdots w_{\ell-1})\beta(w_{\ell})}{\alpha
{\mathcal M}(\xi_1\cdots \xi_{n-1})\beta(\xi_n)} \:,
\end{equation}
where $w_1\cdots w_{\ell}= u^{-1}\circledast \xi_1\cdots \xi_n$.
Below, we justify the existence of the limit in \eref{eq-limit}, and in doing so we show
how to control the error made when replacing the limit by the value
computed for a given $n$.
It follows from the definition in \cite[(9)]{MaManta} that we have either:
\[
u^{-1}\circledast \xi_1\cdots \xi_n = v\cdot \xi_{\ell} \cdot \ldots \cdot \xi_n,
\quad \text{or} \quad u^{-1}\circledast \xi_1\cdots \xi_n = v\cdot \iota(\xi_{\ell})
\cdot \ldots \cdot \iota(\xi_n)\:,
\]
for some $v \in {\mathcal G}$ and $\ell\in {\Blackboardfont N}^*$ which do not depend on $n$, for
$n$ large
enough.
In the first case, respectively the second one, we have:
\begin{eqnarray}
\frac{du\circledast \mu^{\infty}}{d\mu^{\infty}} (\xi) & = & \lim_n \
\frac{\alpha{\mathcal M}(v\xi_{\ell}\cdots \xi_{n-1})\beta(\xi_n)}{\alpha
{\mathcal M}(\xi_1\cdots \xi_{n-1})\beta(\xi_n)} \label{eq-radon1} \\
\text{resp. } \ \frac{du\circledast \mu^{\infty}}{d\mu^{\infty}} (\xi)
& = & \lim_n \
\frac{\alpha{\mathcal M}(v\iota(\xi_{\ell})\cdots \iota(\xi_{n-1}))\beta(\iota(\xi_n))}{\alpha
{\mathcal M}(\xi_1\cdots \xi_{n-1})\beta(\xi_n)}\:. \label{eq-radon2}
\end{eqnarray}
Now, the ${\Blackboardfont R}_+$-automaton $(\alpha,{\mathcal M},\beta)$ of \cite[Figure 9]{MaManta} has several remarkable
properties.
Define $\iota(\alpha)=[0,1,0,1]\in
{\Blackboardfont R}_+^{1\times Q}$. Observe that $\iota(\alpha)_i=
\alpha_{5-i}$. Observe also that:
$\forall u \in \Sigma, \forall i,j, \ {\mathcal M}(u)_{ij}=
{\mathcal M}(\iota(u))_{5-i,5-j}$.
It implies that we have:
\begin{equation}\label{eq-sym}
\forall u=u_1\cdots u_k \in T^*, \qquad \alpha{\mathcal M}(u_1\cdots u_{k-1})\beta(u_k)=
\iota(\alpha){\mathcal M}(\iota(u_1)\cdots \iota(u_{k-1}))\beta(\iota(u_k))\:.
\end{equation}
Extend the map $\iota$ defined in \cite[(10)]{MaManta} to a morphism
$\iota: \Sigma^*\longrightarrow \Sigma^*$.
The identity in \eref{eq-sym} enables us to rewrite \eref{eq-radon2} as:
\[
\frac{du\circledast \mu^{\infty}}{d\mu^{\infty}} (\xi)
= \lim_n \
\frac{\iota(\alpha){\mathcal M}(\iota(v)\xi_{\ell}\cdots \xi_{n-1})\beta(\xi_n)}{\alpha
{\mathcal M}(\xi_1\cdots \xi_{n-1})\beta(\xi_n)}\:.
\]
To summarize, in all cases, for $n$ large enough, there exist $\ell\in
{\Blackboardfont N}^*$ and $\alpha_1 \in {\Blackboardfont R}_+^{1\times Q}$ such that:
\begin{equation}\label{eq-birkhoff}
\frac{du\circledast \mu^{\infty}}{d\mu^{\infty}} (\xi)
= \lim_n \ \frac{\alpha_1 {\mathcal M}(\xi_{\ell}\cdots
\xi_{n-1})\beta(\xi_n)}{\alpha_2 {\mathcal M}(\xi_{\ell}\cdots
\xi_{n-1})\beta(\xi_n)} \:,
\end{equation}
with $\alpha_2=\alpha {\mathcal M}(\xi_1\cdots \xi_{\ell-1})$.
The limit in \eref{eq-birkhoff} exists as a consequence of general
results on inhomogeneous products of
non-negative matrices, see for instance \cite[Chapter 3]{sene}. To be
more precise, and to evaluate the speed of convergence, it is
convenient to slightly rewrite \eref{eq-birkhoff}, in order to have
matrices with positive entries.
Consider the automaton $(\widetilde{\alpha}, \widetilde{{\mathcal M}},
\widetilde{\beta})$ defined as follows.
Set $\widetilde{\alpha} = [1,1]$; for $u\in T$,
set $\widetilde{\beta}(u) =
[R(u),R(u)]^T$;
and let the morphism $\widetilde{{\mathcal M}}:T^* \rightarrow {\Blackboardfont R}_+^{2\times 2}$
be defined by:
\[
\widetilde{{\mathcal M}}(a) = \left[ \begin{array}{cc} q(a) & q(a\Delta) \\ q(b\Delta) &
q(b) \end{array}\right] \:, \quad \widetilde{{\mathcal M}}(b) = \ \left[
\begin{array}{cc} q(b) & q(b\Delta) \\ q(a\Delta) &
q(a) \end{array}\right]
\]
\[
\widetilde{{\mathcal M}}(ab) = \ \left[ \begin{array}{cc} q(ab) & q(ab\Delta) \\ q(ba\Delta) &
q(ba) \end{array}\right] \:, \quad \widetilde{{\mathcal M}}(ba) = \ \left[
\begin{array}{cc} q(ba) & q(ba\Delta) \\ q(ab\Delta) &
q(ab) \end{array}\right] \:.
\]
It is easily checked that, {\em on} ${\mathcal G}$, the two automata $(\widetilde{\alpha}, \widetilde{{\mathcal M}},
\widetilde{\beta})$ and $(\alpha,{\mathcal M},\beta)$ coincide. That is:
\begin{equation}\label{eq-coincide}
\forall u=u_1\cdots u_k \in {\mathcal G}, \quad \widetilde{\alpha}\widetilde{{\mathcal M}}(u_1\cdots
u_{k-1})\widetilde{\beta}(u_k) = \alpha{\mathcal M}(u_1\cdots
u_{k-1})\beta(u_k) \:.
\end{equation}
Observe that the above identity does not hold on $T^* {\setminus} {\mathcal G}$. (In
fact, the automaton $(\alpha,{\mathcal M},\beta)$ is the tensor product of $(\widetilde{\alpha}, \widetilde{{\mathcal M}},
\widetilde{\beta})$ with a 2-state automaton recognizing ${\mathcal G}\cap T^*$.)
Define:
\begin{equation*}
\delta = \min_{u\in \Sigma} \frac{ \min_{ij}
\widetilde{{\mathcal M}}(u)_{ij}}{\max_{ij} \widetilde{{\mathcal M}}(u)_{ij}}, \quad K =
\max_{u\in \Sigma} \max_{ij} \Bigl[ \max_k
\frac{\widetilde{{\mathcal M}}(u)_{ik}}{\widetilde{{\mathcal M}}(u)_{jk}} - \min_k
\frac{\widetilde{{\mathcal M}}(u)_{ik}}{\widetilde{{\mathcal M}}(u)_{jk}} \Bigr] \:.
\end{equation*}
Observe that $0<\delta <1$. For $x=x_1x_2\cdots \in T^{{\Blackboardfont N}}$, set
\[
c(x)=\bigl[ 1,\lim_n \widetilde{{\mathcal M}}(x_1\cdots
x_n)_{2k}/\widetilde{{\mathcal M}}(x_1\cdots x_n)_{1k} \bigl]^T\:,
\]
where the limit does not depend on $k\in \{1,2\}$.
Using \cite[Exercise 3.9]{sene}, we get, for $k\in \{1,2\}$,
\begin{equation}\label{eq-seneta}
\Bigl| \ \frac{\widetilde{{\mathcal M}}(x_1\cdots
x_n)_{2k}}{\widetilde{{\mathcal M}}(x_1\cdots x_n)_{1k}} -c(x)_2 \ \Bigr| \leq
K(1-\delta^2)^{n-1} \:.
\end{equation}
Now let us go back to \eref{eq-birkhoff}. Set $\eta=\eta_1\eta_2\cdots
= \xi_{\ell}\xi_{\ell+1}\cdots$. For $x=(x_1,x_2)\in {\Blackboardfont R}^2,$
set $\norm{x} =|x_1|+|x_2|$; for $x=(x_1,x_2), y=(y_1,y_2)\in {\Blackboardfont R}^2,$
set $\langle x,y\rangle = x_1y_1+ x_2y_2$. We have:
\[
\frac{du\circledast \mu^{\infty}}{d\mu^{\infty}} (\xi) = \lim_n
\frac{\widetilde{\alpha}_1\widetilde{{\mathcal M}}(\xi_{\ell}\cdots \xi_n)
\widetilde{\beta}(\xi_{n+1})}{\widetilde{\alpha}_2\widetilde{{\mathcal M}}(\xi_{\ell}\cdots \xi_n)
\widetilde{\beta}(\xi_{n+1})} = \frac{ \langle \widetilde{\alpha}_1, c(\eta)
\rangle}{\langle \widetilde{\alpha}_2 , c(\eta)\rangle}\:.
\]
Let us fix $n$ larger than
$\ell$. Set $\varepsilon = K(1-\delta^2)^{n-\ell-1}$.
Using \eref{eq-seneta}, we easily get:
\begin{equation*}
\frac{ \langle \widetilde{\alpha}_1, c(\eta)
\rangle - \varepsilon \norm{\widetilde{\alpha}_1}}{ \langle \widetilde{\alpha}_2, c(\eta)
\rangle + \varepsilon \norm{\widetilde{\alpha}_2}} \ \leq \
\frac{\widetilde{\alpha}_1\widetilde{{\mathcal M}}(\xi_{\ell}\cdots \xi_n)
\widetilde{\beta}(\xi_{n+1})}{\widetilde{\alpha}_2\widetilde{{\mathcal M}}(\xi_{\ell}\cdots \xi_n)
\widetilde{\beta}(\xi_{n+1})} \ \leq \ \frac{ \langle \widetilde{\alpha}_1, c(\eta)
\rangle + \varepsilon \norm{\widetilde{\alpha}_1}}{ \langle \widetilde{\alpha}_2, c(\eta)
\rangle - \varepsilon \norm{\widetilde{\alpha}_2}} \:.
\end{equation*}
The above inequalities provide a sharp control on the error made when
replacing the limit in \eref{eq-birkhoff} by the value computed for a fixed $n$.
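To illustrate the role of the bound \eref{eq-seneta}, the following Python sketch uses generic positive $2\times 2$ matrices as stand-ins for the $\widetilde{{\mathcal M}}(u)$ (the entries are arbitrary, not the actual values $q(\cdot)$): the column ratios of the growing products settle down at the geometric rate $K(1-\delta^2)^{n-1}$.
\begin{verbatim}
# Illustration with arbitrary positive 2x2 matrices; not the actual q(.) values.
import numpy as np

rng = np.random.default_rng(0)
Ms = [rng.uniform(0.1, 1.0, size=(2, 2)) for _ in range(4)]
delta = min(M.min() / M.max() for M in Ms)
K = max(max(M[i] / M[1 - i]) - min(M[i] / M[1 - i])
        for M in Ms for i in (0, 1))

word = rng.integers(0, 4, size=40)
prod, ratios = np.eye(2), []
for u in word:
    prod = prod @ Ms[u]
    ratios.append(prod[1, 0] / prod[0, 0])   # ratio of the entries of column 1
for n in (5, 10, 20, 30):
    print(n, abs(ratios[n - 1] - ratios[-1]), K * (1 - delta**2) ** (n - 1))
\end{verbatim}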
\paragraph{Entropy} $ $
The {\em entropy} of a probability measure $\mu$ with finite support $S$ is
defined by $H(\mu) = -\sum_{x\in S} \mu(x) \log [\mu(x) ]$.
Consider a random walk $(G,\mu)$, defined as in \cite[Section 2]{MaManta}.
Let $(X_n)_n$ be a realization of the random walk.
The {\em entropy of $(G,\mu)$}, introduced by Avez \cite{avez}, is
\begin{equation}\label{eq-avez}
h = \lim_n \frac{H(\mu^{*n})}{n}= \lim_n -\frac{1}{n}
\log\mu^{*n}(X_n) \:,
\end{equation}
$a.s.$ and in $L^p$, for all $1\leq p <\infty$. The existence of the
limits as well as their equality follow from Kingman's subadditive
ergodic theorem~\cite{avez,derr}.
Consider the random walk $(B_3/Z,\mu)$.
Recall that
$({\mathcal G}^{\infty},\mu^{\infty})$ is the Poisson boundary of
$(B_3/Z,\mu)$. Then, we have, see \cite[Theorem 3.1]{KaVe}:
\begin{equation}\label{eq-kave}
h = - \sum_{u\in \Sigma} \mu(u) \int \log \bigl[
\frac{du^{-1}\circledast\mu^{\infty}}{d\mu^{\infty}}(\xi)\bigr]
d\mu^{\infty}(\xi) \:.
\end{equation}
Using \eref{eq-kave} together with the above remarks on how to
approximate the Radon-Nikodym derivatives, one can derive an algorithm
to compute $h$ with an arbitrarily prescribed precision.
\paragraph{Minimal positive harmonic functions} $ $
Consider a random walk $(G,\mu)$ defined as in \cite[Section 2]{MaManta}.
A {\em positive harmonic} function is a function
$f: G \rightarrow {\Blackboardfont R}_+$ such that: $\forall u \in G, \
\sum_{a\in \Sigma} f(u\ast a)\mu(a) = f(u)$. A positive harmonic function $f$ is {\em
minimal} if $f(1)=1$ and if for any positive harmonic function $g$ such that $f\geq
g$, there exists $c\in {\Blackboardfont R}_+$ such that $f=cg$.
Consider the random walk $(B_3/Z,\mu)$ as above. Recall that ${\mathcal G}^{\infty}$ is
the minimal Martin boundary of $(B_3/Z,\mu)$. So the set of minimal
positive harmonic functions is precisely given by $\{K_{\xi}, \xi \in
{\mathcal G}^{\infty}\}$ with
\begin{equation}\label{eq-harmonic}
K_{\xi}: B_3/Z \rightarrow {\Blackboardfont R}_+, \quad K_{\xi}(g) = \frac{d
\phi(g)\circledast \mu^{\infty}}{d\mu^{\infty}}(\xi) \:.
\end{equation}
\paragraph{Green function} $ $
Consider a transient random walk $(G,\mu)$ defined as in \cite[Section 2]{MaManta} and a realization $(X_n)_n$.
Define, $Q:G \rightarrow [0,1],$
\begin{equation}\label{eq-ever}
Q(g)= P \{ \exists n \geq 1 \mid X_n = g\}\:,
\end{equation}
the probability of ever reaching $g$. The {\em Green
function} is the map $\Gamma: G \rightarrow {\Blackboardfont R}_+, \
\Gamma(g)=\sum_{i=0}^{\infty} \mu^{*i}(g)$. Observe that:
\begin{equation}\label{eq-green}
\Gamma(\cdot)= \Gamma(1)Q(\cdot)\:.
\end{equation}
Consider now the random walk $(B_3/Z,\mu)$.
Computing $Q$ is more involved than for
free groups \cite{DyMa} or for zero-automatic pairs
\cite{mair04}. However, the spirit remains the same: there is a close
link between $Q$ and $r$, the unique solution to the Traffic
Equations.
For convenience, we view $Q$ and $\Gamma$ as $Q:{\mathcal G} \rightarrow {\Blackboardfont R}_+$
and $\Gamma:{\mathcal G} \rightarrow {\Blackboardfont R}_+$. Let $(Y_n)_n$ be a realization of the random walk $(B_3/Z,\mu)$ and define, for $u\in {\mathcal G}$, the auxiliary quantity:
\begin{eqnarray}\label{eq-aux}
\widehat{q}(u) & = & P \{ \exists n > 1 \mid Y_n = u \ \text{ and
} \ \forall 1 < m < n, Y_m \neq u\Delta \}\:.
\end{eqnarray}
We have the following:
\begin{proposition}\label{pr-ever}
Let $r \in
\{ x \in ({\Blackboardfont R}_+^*)^{\Sigma} \mid \sum_{u\in \Sigma} x(u) =1\}$ be the
unique solution to the Traffic Equations \cite[(19)]{MaManta} of
$(B_3/Z,\mu)$. For $u\in \Sigma$, set $q(u)= r(u)/r(\text{Next}(u))$.
We have:
\begin{equation}\label{eq-q1}
\forall u\in \Sigma, \quad \widehat{q}(u) = \frac{r(u)}{r(\text{Next}(u))} =q(u)\:.
\end{equation}
Besides:
\begin{equation}\label{eq-q2}
\widehat{q}(1) = \sum_{u\in \Sigma} \mu(u) q(u^{-1}), \quad
\widehat{q}(\Delta) = \sum_{u\in \Sigma} \mu(u) q(u^{-1}\Delta)\:.
\end{equation}
The probabilities of ever reaching an element are given by: $Q(\Delta)
= \widehat{q}(\Delta)/(1-\widehat{q}(1))$, and, $\forall v=v_1\cdots v_k
\in {\mathcal G} {\setminus} \{1,\Delta\}$:
\begin{equation}\label{eq-q3}
Q(v) = \sum_{u_1\cdots u_k \in \psi^{-1}(v)} q(u_1)\cdots
q(u_{k-1}) \bigl[ q(u_{k}) +
q(u_{k}\Delta)Q(\Delta)
\bigr] \:.
\end{equation}
The Green function is determined by \eref{eq-green} and:
\begin{equation}\label{eq-green2}
\Gamma(1) = \frac{1-\widehat{q}(1)}{(1-\widehat{q}(1))^2 -
\widehat{q}(\Delta)^2}\:.
\end{equation}
\end{proposition}
(See \cite[Eq. (13)]{MaManta} for the definition of $\psi$.)
\begin{proof}
It follows from the shape of the Cayley graph ${\mathcal X}(B_3/Z,\Sigma)$
that:
\[
\widehat{q}(1) = \sum_{u\in \Sigma} \mu(u) \widehat{q}(u^{-1}), \quad
\widehat{q}(\Delta) = \sum_{u\in \Sigma} \mu(u) \widehat{q}(u^{-1}\Delta), \quad Q(\Delta)
= \widehat{q}(\Delta)/(1-\widehat{q}(1))\:.
\]
Similarly, \eref{eq-q3} holds with $\widehat{q}(\cdot)$ in place
of $q(\cdot)$. Let us prove \eref{eq-green2}. Define
$\bar{Q}(1)=P\{\exists n > 1 \mid Y_n =1\}$. Clearly, $\bar{Q}(1)=
\widehat{q}(1) + \widehat{q}(\Delta) Q(\Delta)$ and $\Gamma(1)= 1+
\bar{Q}(1) \Gamma(1)$. The expression in \eref{eq-green2} follows.
Therefore, the only
point that remains to be proved is: $\forall u\in
\Sigma, \widehat{q}(u)=q(u)$.
For $u \in \Sigma$, define $F_u =\{ v\in \Sigma \mid \text{First}(v)=\text{First}(u)\}$.
By considering the random walk after one move, we get
that $(\widehat{q}(u))_{u\in \Sigma}$ is a solution to
the following
equations over the indeterminates $(y(u))_{u\in \Sigma}$:
\begin{equation}\label{eq-widehatq}
y(u) = \mu(u) + \sum_{v\in F_u {\setminus} \{u,u\Delta\}} \mu(v)y(v^{-1}u) +
\sum_{v\in \Sigma {\setminus} F_u} \mu(v) \bigl[ y(v^{-1})y(u)+
y(v^{-1}\Delta)y(\iota(u)\Delta) \bigr] \:.
\end{equation}
Starting from the Traffic Equations \cite[(19)]{MaManta}, and dividing
by $x(\text{Next}(u))$, we obtain precisely the Equations
\eref{eq-widehatq}.
Let $r \in
\{ x \in ({\Blackboardfont R}_+^*)^{\Sigma} \mid \sum_{u\in \Sigma} x(u) =1\}$ be the
unique solution to the Traffic Equations, and set $q(u) =
r(u)/r(\text{Next}(u))$ for $u\in \Sigma$.
We deduce from the above that $(q(u))_{u \in \Sigma}$ is a solution
to the Equations \eref{eq-widehatq}.
We cannot directly conclude that $\widehat{q}(u)=q(u)$. Indeed the
Equations \eref{eq-widehatq} do not characterize
$(\widehat{q}(u))_{u\in \Sigma}$. They
have in general several solutions in
$(0,1)^{\Sigma}$
(if $\mu$ is uniform over $\Sigma$, then the two constant functions
$y=1/2$ and $y=1/4$ satisfy \eref{eq-widehatq}).
This is in contrast with the situation
for 0-automatic pairs where the analogs of Equations \eref{eq-widehatq}
have a unique solution \cite[Lemma 4.7]{mair04}.
Recall that the minimal
Martin boundary coincides with the Martin boundary and is
${\mathcal G}^{\infty}$. Hence, the minimal positive
harmonic functions, given in \eref{eq-harmonic}, can also be described as:
$\forall \xi=\xi_1\cdots \in {\mathcal G}^{\infty}$,
\begin{equation}\label{eq-harmonic2}
K_{\xi} : B_3/Z \rightarrow {\Blackboardfont R}_+, \quad K_{\xi}(g) = \lim_n
\frac{\Gamma(\phi(g^{-1})\circledast \xi_1\cdots \xi_n)}{
\Gamma(\xi_1\cdots \xi_n)}= \lim_n \frac{Q(\phi(g^{-1})\circledast \xi_1\cdots
\xi_n)}{Q(\xi_1\cdots \xi_n)} \:.
\end{equation}
Juxtaposing \eref{eq-harmonic} and \eref{eq-harmonic2}, and using
\cite[(20)]{MaManta} and \eref{eq-q3} (with $\widehat{q}(\cdot)$ replacing $q(\cdot)$), we
get: for all $u\in B_3/Z$ and $\xi=\xi_1\xi_2\cdots \in {\mathcal G}^{\infty}$,
\begin{eqnarray*}
K_{\xi}(u) & = & \lim_n \ \frac{\sum_{v_1\cdots v_{\ell} \in
\psi^{-1}(\phi(u^{-1})\circledast \xi_1\cdots \xi_n)} q(v_1)\cdots
q(v_{\ell-1}) R(v_{\ell})}{\sum_{v_1\cdots v_n \in
\psi^{-1}(\xi_1\cdots \xi_n)} q(v_1)\cdots
q(v_{n-1}) R(v_{n})} \\
& = & \lim_n \ \frac{\sum_{v_1\cdots v_{\ell} \in
\psi^{-1}(\phi(u^{-1})\circledast \xi_1\cdots \xi_n)} \widehat{q}(v_1)\cdots
\widehat{q}(v_{\ell-1}) Q(v_{\ell})}{\sum_{v_1\cdots v_n \in
\psi^{-1}(\xi_1\cdots \xi_n)} \widehat{q}(v_1)\cdots
\widehat{q}(v_{n-1}) Q(v_{n})} \:.
\end{eqnarray*}
By choosing appropriately the values of $u$, we deduce easily that
this implies $q(\cdot )=\widehat{q}(\cdot )$.
\end{proof}
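The algebra leading to \eref{eq-green2} can also be checked mechanically. The following SymPy sketch (illustration only) recovers it from the relations $Q(\Delta)=\widehat{q}(\Delta)/(1-\widehat{q}(1))$, $\bar{Q}(1)=\widehat{q}(1)+\widehat{q}(\Delta)Q(\Delta)$, and $\Gamma(1)=1+\bar{Q}(1)\Gamma(1)$ used in the proof.
\begin{verbatim}
# Symbolic check of (eq-green2); illustration only.
import sympy as sp

q1, qD = sp.symbols('q1 qD', positive=True)   # stand for qhat(1), qhat(Delta)
QDelta = qD / (1 - q1)
Qbar1 = q1 + qD * QDelta
Gamma1 = 1 / (1 - Qbar1)                      # from Gamma(1) = 1 + Qbar(1) Gamma(1)
print(sp.simplify(Gamma1 - (1 - q1) / ((1 - q1)**2 - qD**2)))   # 0
\end{verbatim}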
\paragraph{Central Limit Theorem} $ $
Recall that we write $\widehat{X}_n\Delta^{k_n}$ for the Garside
normal form of $X_n$. The description of the asymptotic behavior of
the quotient process $(p(X_n))$ evolving on $B_3/Z$ provides a
Central Limit Theorem for both the length $|\widehat{X}_n|_T$
and the exponent $k_n$ in the same way as in \cite{ledr00}.
\par
The statement is the following. Recall that we set
$U=p^{-1}(\Sigma)=\{a\Delta^k,ab\Delta^k,b\Delta^k,$
$ba\Delta^k,k\in{\Blackboardfont Z}\}$ and $T=\{a,b,ab,ba\}\subset B_3$.
\begin{proposition}\label{CLT}
Consider the random walk $(B_3,\nu)$ where $\nu$ is a probability
measure on $U$ such that $\cup_n \text{supp} (\nu^{*n})=B_3$ and
$\sum_{x\in U} e^{\lambda |x|_S} \nu(x)< \infty$
for some $\lambda > 0$. Let $(X_n)$ be a realization of the
random walk $(B_3,\nu)$ and $\widehat{X}_n\Delta^{k_n}$ the Garside
normal form of $X_n$. Then there exist
two positive numbers $\sigma_{\Sigma}$ and $\sigma_{\Delta}$ such that, for all $t\in {\Blackboardfont R}$,
\begin{eqnarray}
\lim_{n\to +\infty}{P\biggl\{\frac{|\widehat{X}_n|_{T}-n\gamma_{\Sigma}}{\sigma_{\Sigma}\sqrt{n}} < t \biggr\} }
\:=\:
\lim_{n\to +\infty}{P\biggl\{\frac{k_n - n \gamma_{\Delta}}{\sigma_{\Delta}\sqrt{n}} < t \biggr\} }
\:=\:
\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{t}e^{-x^2/2}\text{d}x \:\:.
\end{eqnarray}
\end{proposition}
As far as $k_n$ is concerned, starting from the proof of \cite[Lemma 4.10]{MaManta},
one can easily adapt the techniques developed in \cite[Section 4.c]{ledr00}
to prove the result. The proof of the statement for $|\widehat{X}_n|_{T}$ is analogous:
the function $\theta_{\Delta}$ has to be replaced by a
function $\theta_{\Sigma}\: :\: U^{{\Blackboardfont N}} \times {\mathcal G}^{\infty} \rightarrow {\Blackboardfont Z}$ defined,
for all $(\omega,\xi)\in U^{{\Blackboardfont N}} \times {\mathcal G}^{\infty}$, by
\begin{equation*}
\theta_{\Sigma} (\omega,\xi)=
\begin{cases} +1 & \mbox{if}\:\:\omega_{0}\ast \xi_0 \not\in \Sigma \cup \{1,\Delta\} \\
-1 & \mbox{if}\:\:\omega_{0}\ast \xi_0 \:=\:1\:\mbox{or}\:\Delta \\
0 & \mbox{if}\:\:\omega_{0}\ast \xi_0 \in \Sigma
\end{cases}\: ,\:\:\mbox{where} \:\omega\:=\:(\omega_0,\omega_1,...)\:.
\end{equation*}
That is, $\theta_{\Sigma} (\omega,\xi)$ counts the variation of the length of $\xi$
when it is left-multiplied by $\omega_0$. Integrating $\theta_{\Sigma}$ over
$U^{{\Blackboardfont N}} \times {\mathcal G}^{\infty}$ leads to the drift $\gamma_{\Sigma}$, exactly as in \cite[Lemma 4.10]{MaManta}.
\end{document} |
\begin{document}
\title[On the complexity of detecting positive eigenvectors]{On the complexity of detecting positive eigenvectors of nonlinear cone maps}
\author{Bas Lemmens}
\address{School of Mathematics, Statistics \& Actuarial Science, Sibson Building,
University of Kent, Canterbury, Kent CT2 7FS, UK}
\curraddr{}
\email{[email protected]}
\thanks{}
\author{Lewis White}
\address{School of Mathematics, Statistics \& Actuarial Science, Sibson Building,
University of Kent, Canterbury, Kent CT2 7FS, UK}
\curraddr{}
\email{[email protected]}
\thanks{The second author was supported by a London Mathematical Society ``Undergraduate Research Bursary'' and the
School of Mathematics, Statistics and Actuarial Science at the University of Kent.}
\subjclass[2010]{Primary 47H07, 47H09; Secondary 37C25}
\keywords{Nonlinear maps on cones, positive eigenvectors, illumination problem, Hilbert's metric}
\dedicatory{}
\begin{abstract}
In recent work with Lins and Nussbaum the first author gave an algorithm that can detect the existence of a positive eigenvector for order-preserving homogeneous maps on the standard positive cone. The main goal of this paper is to determine the minimum number of iterations this algorithm requires. It is known that this number is equal to the illumination number of the unit ball, $B_{\mathrm{v}}$, of the variation norm, $\|x\|_{\mathrm{v}} :=\max_i x_i -\min_i x_i$ on $V_0:=\{x\in\mathbb{R}^n\colon x_n=0\}$. In this paper we show that the illumination number of $B_{\mathrm{v}}$ is equal to ${n\choose\lceil \frac{n}{2}\rceil}$, and hence provide a sharp lower bound for the running time of the algorithm.
\end{abstract}
\maketitle
\section{Introduction}
The classical Perron-Frobenius theory concerns the spectral properties of square nonnegative matrices. In recent decades this theory has been extended to a variety of nonlinear maps that preserve a partial ordering induced by a cone (see \cite{LNBook} and the references therein for an up-to-date account).
Of particular interest are order-preserving homogeneous maps $f\colon \mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$, where
\[ \mathbb{R}^n_{\geq 0}:=\{x\in\mathbb{R}^n\colon x_i\geq 0\mbox{ for all } i=1,\ldots,n\}\] is the {\em standard positive cone}. Recall that $f\colon \mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$ is {\em order-preserving} if $f(x)\leq f(y)$ whenever $x\leq y$ and $x,y\in \mathbb{R}^n_{\geq 0}$. Here $w\leq z$ if $z-w\in \mathbb{R}^n_{\geq 0}$. Furthermore, $f$ is said to be {\em homogeneous} if $f(\lambda x) =\lambda f(x)$ for all $\lambda \geq 0$ and $x\in \mathbb{R}^n_{\geq 0}$. Such maps arise in mathematical biology \cite{NMem2,Sch} and in optimal control and game theory \cite{BK,RS}.
It is known \cite[Corollary 5.4.2]{LNBook} that if $f\colon \mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$ is a continuous, order-preserving, homogeneous map, then there exists $v\in \mathbb{R}^n_{\geq 0}$ such that
\[
f(v) = r(f) v,
\]
where
\[
r(f) :=\lim_{k\to\infty} \|f^k\|^{1/k}_{ \mathbb{R}^n_{\geq 0}}
\]
is the {\em cone spectral radius} of $f$ and \[
\|g\|_{\mathbb{R}^n_{\geq 0}}:=\sup \{\|g(x)\|\colon x\in \mathbb{R}^n_{\geq 0}\mbox{ and } \|x\|\leq 1\}.\]
Thus, as in the case of nonnegative matrices, continuous order-preserving homogeneous maps on $\mathbb{R}^n_{\geq 0}$ have an eigenvector in the cone corresponding to the spectral radius.
In many applications it is important to know if the map has a {\em positive} eigenvector, i.e., an eigenvector that lies in the
interior, $\mathbb{R}^n_{>0} :=\{x\in\mathbb{R}^n_{\geq 0}\colon x_i > 0\mbox{ for }i=1,\ldots,n\}$, of $\mathbb{R}^n_{\geq 0}$. This appears to be a much more subtle problem. There exists a variety of sufficient conditions in the literature, see \cite{Ca}, \cite{GG}, \cite[Chapter 6]{LNBook}, and \cite{NMem1}. Recently, Lemmens, Lins and Nussbaum \cite[Section 5]{LLN} gave an algorithm that can confirm the existence of a positive eigenvector for continuous, order-preserving, homogeneous maps $f\colon \mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$. The main goal of this paper is to determine the minimum number of iterations this algorithm needs to perform.
\section{Preliminaries}
Given a set $S$ in a finite dimensional vector space $V$ we write $S^\circ$ to denote the interior of $S$, and we write $\partial S$ to denote the boundary of $S$ with respect to the norm topology on $V$.
It is known that if $f\colon \mathbb{R}^n_{\geq 0}\to \mathbb{R}^n_{\geq 0}$ is an order-preserving homogeneous map and there exists $z\in \mathbb{R}^n_{> 0}$ such that $f(z) \in\partial \mathbb{R}^n_{\geq 0}$, then $f(\mathbb{R}^n_{> 0})\subset \partial \mathbb{R}^n_{\geq 0}$, see \cite[Lemma 1.2.2]{LNBook}. Thus to analyse the existence of a positive eigenvector one may as well consider order-preserving homogeneous maps $f\colon \mathbb{R}^n_{>0}\to \mathbb{R}^n_{> 0}$. Moreover, on $\mathbb{R}^n_{>0}$ we have {\em Hilbert's metric}, $d_H$, which is given by
\[
d_H(x,y) := \log \left(\max_i \frac{x_i}{y_i}\right) - \log \left(\min_i \frac{x_i}{y_i}\right)\mbox{\quad for }x,y\in \mathbb{R}^n_{>0}.
\]
Note that $d_H$ is not a genuine metric, as $d_H(\lambda x, \mu x) = 0$ for all $x\in \mathbb{R}^n_{>0}$ and $\lambda,\mu >0$. In fact, $d_H(x,y) =0$ if and only if $x=\lambda y$ for some $\lambda >0$. However, $d_H$ is a metric on the set of rays in $\mathbb{R}^n_{>0}$.
If $f\colon \mathbb{R}^n_{>0}\to \mathbb{R}^n_{>0}$ is order-preserving and homogeneous, then $f$ is nonexpansive under $d_H$, i.e.,
\[
d_H(f(x),f(y))\leq d_H(x,y)\mbox{\quad for all }x,y\in \mathbb{R}^n_{>0},
\]
see for example \cite[Proposition 2.1.1]{LNBook}.
In particular, order-preserving homogeneous maps $f\colon \mathbb{R}^n_{>0}\to \mathbb{R}^n_{>0}$ are continuous on $\mathbb{R}^n_{>0}$.
Moreover, if $x$ and $y$ are eigenvectors of $f\colon \mathbb{R}^n_{>0}\to \mathbb{R}^n_{>0}$ with $f(x) = \lambda x$ and $f(y)=\mu y$, then $\lambda =\mu$, see \cite[Corollary 5.2.2]{LNBook}.
In \cite[Theorem 5.1]{LLN} the following necessary and sufficient conditions were obtained for an order-preserving homogeneous map $f\colon \mathbb{R}^n_{>0}\to \mathbb{R}^n_{>0}$ to have a nonempty set of eigenvectors, $\mathrm{E}(f) :=\{x\in \mathbb{R}^n_{> 0}\colon x\mbox{ eigenvector of } f\}$, which is bounded under Hilbert's metric.
\begin{theorem}\label{thm:npfthm} If $f\colon \mathbb{R}^n_{> 0}\to \mathbb{R}^n_{> 0}$ is an order-preserving homogeneous map, then $\mathrm{E}(f)$ is nonempty and bounded under $d_H$ if and only if for each nonempty proper subset $J$ of $\{1,\ldots,n\}$ there exists $x^J\in \mathbb{R}^n_{> 0}$ such that
\begin{equation}\label{eq:1.1}
\max_{j\in J}\,\frac{f(x^J)_j}{x^J_j}< \min_{j\in J^c}\,\frac{f(x^J)_j}{x^J_j}.
\end{equation}
\end{theorem}
Note that the assertion is trivial in case $n=1$, as each order-preserving homogeneous map $f\colon \mathbb{R}_{> 0}\to \mathbb{R}_{> 0}$ has a nonempty bounded set of eigenvectors.
In case $n\geq 2$ Theorem \ref{thm:npfthm} yields the following simple algorithm for detecting positive eigenvectors:
\begin{algorithm} \label{alg} Let $f\colon \mathbb{R}^n_{>0} \to \mathbb{R}^n_{>0}$ be an order-preserving homogeneous map.
Repeat the following steps until every nonempty proper subset $J$ of $\{1,\ldots, n\}$ has been recorded.
\begin{description}
\item[Step 1] Randomly select $x$, with $x_1=1$ and $0<x_j<1$ for all $j\in\{2,\ldots,n\}$, and compute $f(x)_j/x_j$ for all $j \in \{1,\ldots, n\}$.
\item[Step 2] Record all nonempty proper subsets $J \subset \{1,\ldots,n\}$ such that inequality (\ref{eq:1.1}) holds.
\end{description}
\end{algorithm}
So, if this algorithm halts, then $f$ has an eigenvector in $\mathbb{R}^n_{>0}$ and $\mathrm{E}(f)$ is bounded under Hilbert's metric. If $\mathrm{E}(f)$ is empty or unbounded under $d_H$, then the algorithm does not halt. This can happen even if the map is linear. Consider, for example, the linear map $x\mapsto Ax$ on $\mathbb{R}^2_{>0}$, where
\[
A = \left [\begin{array}{cc} 1& 1 \\ 0 & 1 \end{array}\right],
\]
which has no eigenvector in $\mathbb{R}^2_{>0}$. At present no algorithm is known that can decide if an order-preserving homogeneous map on $\mathbb{R}^n_{>0}$ has an empty or an unbounded set of eigenvectors. It is also unknown if there is an efficient way to generate the vectors $x$ in Step 1.
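For concreteness, here is a minimal Python sketch of Algorithm~\ref{alg}. The $3\times 3$ matrix used as the map $f$ below is our own illustrative choice: it has strictly positive entries, so $\mathrm{E}(f)$ is the ray spanned by its Perron vector and is bounded under $d_H$, and for this particular matrix every nonempty proper subset $J$ is certified by some point of the form drawn in Step~1, so the loop halts with probability one.
\begin{verbatim}
# Minimal sketch of the detection algorithm; the matrix A is an
# illustrative choice, not taken from the paper.
import itertools, random

A = [[5, 2, 2], [1, 1, 1], [1, 1, 1]]
def f(x):            # linear with positive entries: order-preserving, homogeneous
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

n = 3
remaining = {J for r in range(1, n) for J in itertools.combinations(range(n), r)}
trials = 0
while remaining:
    trials += 1
    x = [1.0] + [random.uniform(0.001, 0.999) for _ in range(n - 1)]   # Step 1
    ratios = [fx / xj for fx, xj in zip(f(x), x)]
    for J in list(remaining):                                          # Step 2
        Jc = [i for i in range(n) if i not in J]
        if max(ratios[i] for i in J) < min(ratios[i] for i in Jc):
            remaining.discard(J)
print("all nonempty proper subsets recorded after", trials, "trials")
\end{verbatim}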
Note that a randomly chosen $x$ in Step 1 can eliminate multiple subsets $J$ in Step 2. So, it is natural to ask for the least number of vectors required to fulfill the $2^n-2$ inequalities in (\ref{eq:1.1}). This number corresponds to the minimum number of times the algorithm has to perform Steps 1 and 2. In this paper we show that one needs at least
\[
{n\choose \lceil n/2\rceil}
\]
vectors, and this lower bound is sharp. Here $\lceil a\rceil$ is the smallest integer $m\geq a$. Likewise we write $\lfloor a\rfloor$ to denote the largest integer $m\leq a$.
\section{Connection with the illumination number}
Recall that given a compact convex set $C$ with nonempty interior in $V$, a vector $v\in V$ {\em illuminates} $z\in\partial C$ if $z+\lambda v\in C^\circ$ for all $\lambda>0$ sufficiently small. A set $S$ is said to {\em illuminate} $C$ if for each $z\in\partial C$ there exists $v\in S$ such that $v$ illuminates $z$. The minimal size of an illuminating set for $C$ is called the {\em illumination number} of $C$ and is denoted $i(C)$. There is a long-standing open conjecture which asserts that $i(C)\leq 2^n$ for every compact convex body in an $n$-dimensional vector space, see \cite[Chapter VI]{BMS} for further details. It is easy to show, see for example \cite[Lemma 4.1]{LLN}, that if $S$ illuminates every extreme point of $C$, then $S$ illuminates $C$.
To proceed we need to discuss the connection between illumination numbers and Theorem \ref{thm:npfthm}.
Firstly, we note that if we let $\Sigma_0:=\{x\in \mathbb{R}^n_{>0}\colon x_n=1\}$, then $(\Sigma_0,d_H)$ is a metric space. Given an order-preserving homogeneous map $f\colon \mathbb{R}^n_{>0} \to \mathbb{R}^n_{>0}$ we can consider the {\em normalised} map $g_f\colon \Sigma_0\to \Sigma_0$ given by
\[
g_f(x) := \frac{f(x)}{f(x)_n}\mbox{ \quad for }x\in \Sigma_0.
\]
The map $g_f$ is nonexpansive under $d_H$ on $\Sigma_0$. Moreover, $x\in \Sigma_0$ is a fixed point of $g_f$ if and only if $x$ is an eigenvector of $f$. Thus, if we let $\mathrm{Fix}(g_f) :=\{x\in\Sigma_0\colon g_f(x) =x\}$, then $\mathrm{Fix}(g_f)$ is nonempty and bounded in $(\Sigma_0,d_H)$ if and only if $\mathrm{E}(f)$ is nonempty and bounded in $(\mathbb{R}^n_{>0},d_H)$.
It is not hard to verify that the map $\mathrm{Log}\colon \Sigma_0\to V_0$ given by
\[
\mathrm{Log}(x) := (\log x_1,\ldots,\log x_n)\mbox{\quad for }x=(x_1,\ldots,x_n)\in\Sigma_0
\]
is an isometry from $(\Sigma_0,d_H)$ onto $(V_0,\|\cdot\|_{\mathrm{v}})$, where $V_0 :=\{x\in\mathbb{R}^n\colon x_n =0\}$ and
\[
\|x\|_{\mathrm{v}} := \max_i x_i -\min_i x_i
\]
is the {\em variation norm}.
It follows that the map $h\colon V_0\to V_0$ satisfying $h\circ \mathrm{Log} = \mathrm{Log}\circ g_f$ is nonexpansive under the variation norm, and $\mathrm{Fix}(h)$ is nonempty and bounded in $(V_0,\|\cdot\|_{\mathrm{v}})$ if and only if $\mathrm{Fix}(g_f)$ is nonempty and bounded in $(\Sigma_0,d_H)$.
In \cite[Theorem 3.4]{LLN} the following result concerning fixed point sets of nonexpansive maps on finite dimensional normed spaces was proved.
\begin{theorem}\label{thm:fp} If $h\colon V\to V$ is a nonexpansive map on a finite dimensional normed space $V$, then $\mathrm{Fix}(h)$ is nonempty and bounded if and only if there exist $w^1,\ldots,w^m\in V$ such that $\{h(w^i)-w^i\colon i=1,\ldots,m\}$ illuminates the unit ball of $V$.
\end{theorem}
For $n\geq 2$, the unit ball $B_\mathrm{v}$ of $(V_0,\|\cdot\|_{\mathrm{v}})$ has $2^n-2$ extreme points, which are given by
\begin{equation} \label{eq:varExtr}
\mathrm{ext}(B_{\mathrm{v}}) :=\{v^I_+\colon \emptyset \neq I\subseteq \{1,\ldots,n-1\}\}\cup
\{v^I_-\colon \emptyset \neq I\subseteq \{1,\ldots,n-1\}\},
\end{equation}
where $(v^I_+)_i =1$ if $i\in I$ and $0$ otherwise, and $(v^I_-)_i =-1$ if $i\in I$ and $0$ otherwise. See \cite[\S 2]{Nu2} for details.
In \cite{LLN} the equivalence in Theorem \ref{thm:npfthm} was obtained by using Theorem \ref{thm:fp} and showing that there exist $x^1,\ldots,x^m\in \mathbb{R}^n_{>0}$ that fulfill the $2^n-2$ inequalities in (\ref{eq:1.1}) if and only if there exist $y^1,\ldots,y^m\in V_0$ that illuminate the $2^n-2$ extreme points of the unit ball $B_\mathrm{v}$.
Thus, $i(B_{\mathrm{v}})$ provides a sharp lower bound for the number of times one needs to repeat Steps 1 and 2 in Algorithm \ref{alg}. In the next section we show the following result concerning $i(B_{\mathrm{v}})$.
\begin{theorem}\label{thm:main}
If $B_\mathrm{v}$ is the unit ball of $(V_0,\|\cdot\|_{\mathrm{v}})$ and $n\geq 2$, then
\[
i(B_\mathrm{v}) = {n\choose\lceil n/2\rceil}.
\]
\end{theorem}
\section{Proof of Theorem \ref{thm:main}}
Note that the map $(x_1,\ldots,x_n)\in V_0\mapsto (x_1,\ldots,x_{n-1})\in\mathbb{R}^{n-1}$ is an isometry from $(V_0,\|\cdot\|_{\mathrm{v}})$ onto
$(\mathbb{R}^{n-1},\|\cdot\|_H)$, where
\[
\|x\|_H :=\left(\max_i x_i\right) \vee 0 - \left(\min_i x_i\right) \wedge 0.
\]
Here $a\wedge b:= \min(a,b)$ and $a\vee b:=\max(a,b)$. Note also that if $B_H$ is the unit ball in $(\mathbb{R}^{n-1},\|\cdot\|_H)$, then
\[
\mathrm{ext}(B_H) = \left(\{0,1\}^{n-1}\cup \{0,-1\}^{n-1}\right)\setminus\{(0,\ldots,0)\}
\]
and \[i(B_H) = i(B_{\mathrm{v}}).\] For notational simplicity we work with $B_H$ instead of $B_{\mathrm{v}}$.
The following two subsets,
\[
E_+:= \{0,1\}^{n-1}\setminus\{(0,\ldots,0)\}\mbox{\quad and\quad} E_-:=\{0,-1\}^{n-1}\setminus\{(0,\ldots,0)\},
\]
of $\mathrm{ext}(B_H)$ play a key role in the argument. On $\mathrm{ext}(B_H)$ we have the usual partial ordering $x\leq y $ if $y-x\in\mathbb{R}^{n-1}_{\geq 0}$, which gives rise to two finite partially ordered sets $(E_+,\leq)$ and $(E_-,\leq)$.
Recall that a subset $\mathcal{A}$ of a partially ordered set $(P,\preceq)$ is called an {\em antichain} if $x,y\in \mathcal{A}$ and $x\preceq y$ implies $x=y$. A {\em chain} $\mathcal{C}$ in $(P,\preceq)$ is a totally ordered subset, i.e., for each $x,y\in \mathcal{C}$ we have that either $x\preceq y$ or $y\preceq x$. The {\em length} of a chain $\mathcal{C}$ is the number of distinct elements in $\mathcal{C}$.
\begin{lemma}\label{lem:antichain}
Let $\mathcal{A}$ be an antichain in $(E_+,\leq)$ or in $(E_-,\leq)$. If $x\neq y$ in $\mathcal{A}$ are illuminated by $v$ and $w$, respectively, then $v\neq w$.
\end{lemma}
\begin{proof}
Suppose that $\mathcal{A}$ is an antichain in $(E_+,\leq)$ and $x\neq y$ are in $\mathcal{A}$. Then there exist $i\neq j$ such that $0=x_i< y_i =1$ and $0=y_j<x_j =1$.
Now suppose by way of contradiction that $z$ illuminates $x$ and $y$. So, $\|x+\lambda z\|_H<1$ and $\|y+\lambda z\|_H<1$ for all $\lambda>0$ sufficiently small. Suppose first that $z_i\leq z_j$. Then for $\lambda>0$ small,
\[
1+\lambda z_j =x_j +\lambda z_j \leq \|x+\lambda z\|_H<1,
\]
and hence $z_j<0$. So, $z_i\leq z_j<0$. But then
\[
1+ \lambda(z_j-z_i) = x_j+\lambda z_j - \lambda z_i \leq \|x+\lambda z\|_H<1,
\]
which is impossible. On the other hand, if $z_j\leq z_i$, then $1+\lambda z_i \leq \|y+\lambda z\|_H<1$, so that $z_j\leq z_i<0$. But then
\[
1+ \lambda(z_i-z_j) = y_i+\lambda z_i- \lambda z_j \leq \|y+\lambda z\|_H<1,
\]
which again is impossible. Thus, $z$ cannot illuminate both $x$ and $y$.
The argument for the case where $\mathcal{A}$ is an antichain in $(E_-,\leq)$ is similar.
\end{proof}
\begin{lemma} \label{lem:+-1} If $x,y\in \mathrm{ext}(B_H)$ are such that $x_i=1$ and $y_i=-1$ for some $i$, then one needs two distinct vectors to illuminate $x$ and $y$.
\end{lemma}
\begin{proof}
Suppose $w$ illuminates $x$ and $y$. Then $1+\lambda w_i = x_i +\lambda w_i \leq \|x+\lambda w\|_H<1$ for all $\lambda>0$ sufficiently small, and hence $w_i<0$. But also
$1 - \lambda w_i = -( y_i +\lambda w_i) \leq \|y+\lambda w\|_H< 1$ for all $\lambda>0$ sufficiently small. This implies that $w_i>0$, which is impossible. Thus, one needs at least two vectors to illuminate $x$ and $y$.
\end{proof}
\begin{corollary}\label{lowerbnd}
If $B_H$ is the unit ball of $(\mathbb{R}^{n-1},\|\cdot\|_H)$ and $n\geq 2$, then
\[
i(B_H) \geq {n\choose\lceil n/2\rceil}.
\]
\end{corollary}
\begin{proof}
For $1\leq k,m\leq n-1$ define the antichains $\mathcal{A}_+(k):= \{ x\in E_+\colon \sum_i x_i = k\}$ and $\mathcal{A}_-(m):= \{ x\in E_-\colon \sum_i x_i = -m\}$.
If $n>1$ is odd, then we can take $k := (n-1)/2$ and $m:=(n+1)/2$ and conclude from Lemmas \ref{lem:antichain} and \ref{lem:+-1} that we need at least
\[
{n-1\choose \frac{n-1}{2}} + {n-1\choose \frac{n+1}{2}} = {n\choose \lceil \frac{n}{2}\rceil}
\]
distinct vectors to illuminate the extreme points in $\mathcal{A}_+(k)\cup \mathcal{A}_-(m)$, as for each $x\in\mathcal{A}_+(k)$ and $y\in\mathcal{A}_-(m)$ there exists an $i$ such that $x_i =1$ and $y_i=-1$.
Likewise if $n>1$ is even, we can take $k = m= \lceil \frac{n-1}{2}\rceil$, and deduce from Lemmas \ref{lem:antichain} and \ref{lem:+-1} that we need at least
\[
{n-1\choose \lceil \frac{n-1}{2}\rceil} + {n-1\choose \lceil\frac{n-1}{2}\rceil} = {n-1\choose \lfloor \frac{n-1}{2}\rfloor} + {n-1\choose \lceil\frac{n-1}{2}\rceil} ={n\choose \frac{n}{2}}
\]
distinct vectors to illuminate the extreme points in $\mathcal{A}_+(k)\cup \mathcal{A}_-(m)$.
This completes the proof.
\end{proof}
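The counting in the proof above is easy to confirm by brute force for small $n$. The following Python sketch (an illustration only, not part of the proof; the encoding of $E_\pm$ as tuples is ours) builds the two antichains, checks that together they contain ${n\choose\lceil n/2\rceil}$ vectors, and checks that every pair $(x,y)$ with $x\in\mathcal{A}_+(k)$ and $y\in\mathcal{A}_-(m)$ shares a coordinate $i$ with $x_i=1$ and $y_i=-1$.
\begin{verbatim}
from itertools import product
from math import comb, ceil

def check_lower_bound(n):
    d = n - 1
    # the antichain levels used in the proof of the corollary
    k = (n - 1) // 2 if n % 2 == 1 else n // 2
    m = (n + 1) // 2 if n % 2 == 1 else n // 2
    A_plus = [x for x in product((0, 1), repeat=d) if sum(x) == k]
    A_minus = [tuple(-t for t in x)
               for x in product((0, 1), repeat=d) if sum(x) == m]
    # together the two antichains have binom(n, ceil(n/2)) elements
    assert len(A_plus) + len(A_minus) == comb(n, ceil(n / 2))
    # every pair (x, y) has a coordinate with x_i = 1 and y_i = -1
    assert all(any(xi == 1 and yi == -1 for xi, yi in zip(x, y))
               for x in A_plus for y in A_minus)

for n in range(2, 10):
    check_lower_bound(n)
\end{verbatim}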
\begin{lemma}\label{lem:chains} If $\mathcal{C}$ is a chain in $(E_+,\leq)$ or in $(E_-,\leq)$, then there exists $w$ that illuminates each element of $\mathcal{C}$.
\end{lemma}
\begin{proof}
Let $\mathcal{C}$ be a chain in $(E_+,\leq)$ or in $(E_-,\leq)$.
We call a chain $c_1\leq c_2\leq \ldots\leq c_m$ in $(E_+,\leq)$ or in $(E_-,\leq)$ maximal if it has length $n-1$. The chain $\mathcal{C}$ is contained in a maximal chain. As each coordinate permutation is an isometry of $(\mathbb{R}^{n-1},\|\cdot\|_H)$ and the map $x\mapsto -x$ is an isometry of $(\mathbb{R}^{n-1},\|\cdot\|_H)$, we may assume without loss of generality that
$\mathcal{C}$ is contained in the maximal chain,
\[
\mathcal{C}^*\colon (1,0,0,\ldots,0)\leq (1,1,0,\ldots, 0)\leq \ldots\leq (1,1,\ldots, 1,0)\leq (1,1,1,\ldots,1).
\]
Let $w\in\mathbb{R}^{n-1}$ be such that $w_1<w_2<\ldots<w_{n-1}<0$.
Now if $x$ is the $k$-th element in the maximal chain and $k<n-1$, then for all $\lambda>0$ sufficiently small
\[
\|x+\lambda w\|_H = \left(\max_i\, (x_i+\lambda w_i) \right) \vee 0 - \left(\min_i\, (x_i+\lambda w_i) \right) \wedge 0 = 1+\lambda w_k - \lambda w_{k+1} <1.
\]
On the other hand, if $x=(1,1,\ldots,1)$, then clearly $\|x+\lambda w\|_H =1+\lambda w_{n-1}<1$ for all $\lambda>0$ small. Thus $w$ illuminates each element of $\mathcal{C}^*$ and we are done.
\end{proof}
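The choice of $w$ in the proof above is also easy to test numerically. The following Python sketch (illustrative only; it checks one small value of $\lambda$ rather than all sufficiently small $\lambda>0$) evaluates $\|\cdot\|_H$ along the maximal chain $\mathcal{C}^*$ for $n-1=5$.
\begin{verbatim}
def norm_H(x):
    # the norm defined at the beginning of this section
    return max(max(x), 0.0) - min(min(x), 0.0)

def illuminated(x, w, lam=1e-3):
    return norm_H([xi + lam * wi for xi, wi in zip(x, w)]) < 1

d = 5                                          # d = n - 1
w = [-(d - i) / (d + 1) for i in range(d)]     # w_1 < w_2 < ... < w_d < 0
maximal_chain = [[1] * k + [0] * (d - k) for k in range(1, d + 1)]
assert all(illuminated(x, w) for x in maximal_chain)
\end{verbatim}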
To proceed we need to recall a few classical results in the combinatorics of finite partially ordered sets, see \cite[Sections 9.1 and 9.2]{Ju}. Firstly, we recall Dilworth's Theorem, which says that if the maximum size of an antichain in a finite partially ordered set $(P,\preceq)$ is $r$, then $P$ can be partitioned into $r$ disjoint chains. In the case where the partially ordered set is $(\{0,1\}^d,\leq)$, one can combine this result with Sperner's Theorem, which says that the maximum size of an antichain in $(\{0,1\}^d,\leq)$ is ${d\choose \lceil d/2\rceil}$. Thus, $(\{0,1\}^d,\leq)$ can be partitioned into ${d\choose \lceil d/2\rceil}$ disjoint chains.
To obtain our result we need some more detailed information about the partitions. In particular, we need a result by de Bruijn, Tengbergen, and Kruyswijk \cite{dBTK} concerning symmetric chains, see also \cite[Theorem 9.3]{Ju}.
A chain $x^1\leq \ldots\leq x^k$ in $(\{0,1\}^{d},\leq)$ is said to be {\em symmetric} if
\begin{enumerate}[(a)]
\item $(\sum_{j=1}^d x^m_j ) + 1= \sum_{j=1}^d x^{m+1}_j$ for all $1\leq m<k$, i.e., $x^{m+1}$ is an immediate successor of $x^m$,
\item $\sum_{j=1}^d x^k_j = d- \sum_{j=1}^d x^{1}_j$.
\end{enumerate}
\begin{theorem}[De Bruijn, Tengbergen, and Kruyswijk] \label{DBTK} The poset $(\{0,1\}^d,\leq)$ can be partitioned into ${d\choose \lceil d/2\rceil}$ disjoint symmetric chains.
\end{theorem}
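The inductive construction behind Theorem \ref{DBTK} is easy to implement. The following Python sketch (an illustration only) builds a symmetric chain decomposition of $\{0,1\}^d$ by the standard recursion from \cite{dBTK}: extend every element of a chain by a trailing $0$ and append the top element extended by $1$, and separately extend all but the top element by a trailing $1$. It then verifies the chain count for small $d$.
\begin{verbatim}
from itertools import product
from math import comb

def symmetric_chain_decomposition(d):
    """Partition {0,1}^d into symmetric chains, in the sense of (a) and (b) above."""
    chains = [[()]]                    # the unique chain for d = 0
    for _ in range(d):
        new_chains = []
        for chain in chains:
            first = [x + (0,) for x in chain] + [chain[-1] + (1,)]
            second = [x + (1,) for x in chain[:-1]]
            new_chains.append(first)
            if second:
                new_chains.append(second)
        chains = new_chains
    return chains

for d in range(1, 9):
    chains = symmetric_chain_decomposition(d)
    assert len(chains) == comb(d, d // 2)     # = binom(d, ceil(d/2))
    assert sorted(x for c in chains for x in c) == sorted(product((0, 1), repeat=d))
\end{verbatim}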
Let us now prove the main result of the paper.
\begin{proof}[Proof of Theorem \ref{thm:main}]
First recall that by Corollary \ref{lowerbnd} it suffices to show that
$i(B_H)\leq {n\choose \lceil\frac{n}{2}\rceil}$, as $i(B_{\mathrm{v}})=i(B_H)$. In other words, we only need to show that $\mathrm{ext}(B_H)$ can be illuminated by ${n\choose \lceil\frac{n}{2}\rceil}$
vectors.
There are two cases to consider: $n\geq 2$ even, and $n\geq 2$ odd.
Let us first consider the case where $n\geq 2$ is even.
By Dilworth's Theorem and Sperner's Theorem we know that the partially ordered set $(\{0,1\}^{n-1},\leq)$ can be partitioned into ${n-1\choose \lceil \frac{n-1}{2}\rceil}$ disjoint chains.
This implies that each of the partially ordered sets $(E_+,\leq)$ and $(E_-,\leq)$ can be partitioned into ${n-1\choose \lceil \frac{n-1}{2}\rceil}$ disjoint chains. It now follows from Lemma \ref{lem:chains} that we need at most
\[
{n-1\choose \lceil \frac{n-1}{2}\rceil}+{n-1\choose \lceil \frac{n-1}{2}\rceil}= {n-1\choose \lfloor \frac{n-1}{2}\rfloor}+{n-1\choose \lceil \frac{n-1}{2}\rceil} = {n\choose \frac{n}{2}}
\]
distinct vectors to illuminate $\mathrm{ext}(B_H)$. This implies that $i(B_{\mathrm{v}})= i(B_H)\leq{n\choose \frac{n}{2}}$.
Now suppose that $n\geq 2$ is odd. By Theorem \ref{DBTK} we know that $(\{0,1\}^{n-1},\leq)$ can be partitioned into ${n-1\choose \frac{n-1}{2}}$ disjoint symmetric chains.
Let us consider such a symmetric chain decomposition, and let
\[\mathcal{A}_k:=\{x\in \{0,1\}^{n-1}\colon \mbox{$\sum_i x_i =k$}\},\] which is an antichain of size ${n-1\choose k}$. Each element of $\mathcal{A}_{(n+1)/2}$ is contained in a distinct symmetric chain, and each of these chains contains an $x\in \{0,1\}^{n-1}$ with $\sum_i x_i =(n-1)/2$. Thus, the symmetric chain decomposition of $(\{0,1\}^{n-1},\leq)$ consists of
\[
{n-1 \choose \frac{n+1}{2}}
\]
chains containing a vector $x$ with $\sum_i x_i =(n+1)/2$, and
\[
{n-1\choose \frac{n-1}{2}} - {n-1 \choose \frac{n+1}{2}}
\]
chains consisting of a single vector $x$ with $\sum_i x_i =(n-1)/2$.
By deleting $(0,0,\ldots,0)$ from $\{0,1\}^{n-1}$ we obtain a partition of $(E_+,\leq )$ into disjoint chains. Let $\mathcal{S}$ be the set of vectors in $E_+$ which form a singleton chain; each such $x$ satisfies $\sum_i x_i = (n-1)/2$. So,
\[
|\mathcal{S}| = {n-1\choose \frac{n-1}{2}} - {n-1 \choose \frac{n+1}{2}}.
\]
Now pair each $x\in E_+$ with $x'\in E_-$, where $x'_i=0$ if $x_i=1$, and $x'_i=-1$ if $x_i=0$. In this way we obtain a partition of $(E_-,\leq)$ into disjoint chains with $|\mathcal{S}|$ chains consisting of a single vector. In other words, for each $x\in \mathcal{S}$ we have that $x'\in E_-$ forms a singleton chain in the chain decomposition of $(E_-,\leq)$.
We know from Lemma \ref{lem:chains} that we can illuminate the ${n-1\choose \frac{n+1}{2}}$ chains in $(E_+,\leq)$ containing a vector $x$ with $\sum_i x_i = (n+1)/2$ using ${n-1\choose \frac{n+1}{2}}$ vectors. Likewise, we can illuminate the corresponding ${n-1\choose \frac{n+1}{2}}$ chains in $(E_-,\leq)$ with
${n-1\choose \frac{n+1}{2}}$ vectors. So, it remains to illuminate the singleton chains in $(E_+,\leq)$ and $(E_-,\leq)$.
Note that if we can illuminate each pair $\{x,x'\}$, with $x\in\mathcal{S}$ and $x'$ the corresponding vector in $E_-$, by a single vector, then we need at most
\[
2{n-1\choose \frac{n+1}{2}}+ {n-1\choose \frac{n-1}{2}} - {n-1\choose \frac{n+1}{2}} = {n-1\choose \frac{n-1}{2}} +{n-1\choose \frac{n+1}{2}} = {n\choose \lceil \frac{n}{2}\rceil}
\]
vectors to illuminate $\mathrm{ext}(B_H)$, and hence $i(B_{\mathrm{v}}) = i(B_H) \leq {n\choose \lceil \frac{n}{2}\rceil}$ if $n\geq 2$ is odd.
To see how this can be done we consider such a pair $\{x,x'\}$ with $x\in\mathcal{S}$ and let $I:=\{i\colon x_i =1\}$ and $J:=\{i\colon x_i=0\}$. So, $I=\{i\colon x'_i =0\}$ and $J =\{i\colon x'_i=-1\}$. Now let $w\in\mathbb{R}^{n-1}$ be such that $w_i<0$ for all $i\in I$ and $w_i>0$ for all $i\in J$.
Then for all $\lambda>0$ sufficiently small,
\[
\|x+\lambda w\|_H = \max_{i\in I} (1+\lambda w_i) - 0 <1
\]
and
\[
\|x'+\lambda w\|_H = 0 - \min_{i\in J} (-1 +\lambda w_i)<1.
\]
This shows that $w$ illuminates $x$ and $x'$, which completes the proof.
\end{proof}
\end{document} |
\begin{document}
\title{\bf Interval-valued fuzzy graphs} \normalsize
\author{{\bf Muhammad Akram$^{\bf a}$\ and \ Wieslaw A. Dudek$^{\bf b}$} \\
{\small {\bf a.} Punjab University College of Information
Technology,
University of the Punjab,}\\
{\small Old Campus, Lahore-54000, Pakistan.}\\
{\small E-mail: [email protected],
[email protected]}\\
{\small {\bf b.} Institute of Mathematics and Computer Science, Wroclaw University
of Technology,}\\
{\small Wyb. Wyspianskiego 27, 50-370 Wroclaw, Poland.}\\
{\small E-mail: [email protected]} }
\date{}
\maketitle
\hrule
\begin{abstract}
We define the Cartesian product, composition, union and join on
interval-valued fuzzy graphs and investigate some of their properties. We
also introduce the notion of interval-valued fuzzy complete graphs
and present some properties of self complementary and self weak
complementary interval-valued fuzzy complete graphs.
\end{abstract}
{\bf Keywords}: Interval-valued fuzzy graph,
Self complementary, Interval-valued fuzzy complete graph.\\
{\bf Mathematics Subject Classification 2000}: 05C99\\
\hrule
\footnote{Corresponding Author:\\ M. Akram
([email protected], [email protected])}
\section{Introduction}
In 1975, Zadeh \cite{LA1} introduced the notion of interval-valued fuzzy sets as an extension of fuzzy sets \cite{LA} in which the values of the membership degrees are intervals of numbers instead of single numbers.
Interval-valued fuzzy sets provide a more adequate description of uncertainty than traditional fuzzy sets. It is therefore important to use interval-valued fuzzy sets in applications, such as fuzzy control. One of the computationally most intensive parts of fuzzy control is defuzzification \cite{JMM}.
Since interval-valued fuzzy sets are widely studied and used, we describe briefly the work of Gorzalczany on approximate reasoning \cite{MB1, MB2}, Roy and Biswas on medical diagnosis \cite{MK}, Turksen on
multivalued logic \cite{IB} and Mendel on intelligent control \cite{JMM}. \\
The fuzzy graph theory as a generalization of Euler's graph theory
was first introduced by Rosenfeld \cite{RA} in 1975. The fuzzy
relations between fuzzy sets were first considered by Rosenfeld
and he developed the structure of fuzzy graphs obtaining analogs
of several graph theoretical concepts. Later, Bhattacharya
\cite{BP} gave some remarks on fuzzy graphs, and some operations on
fuzzy graphs were introduced by Mordeson and Peng \cite{JN1}. The
complement of a fuzzy graph was defined by Mordeson \cite{JN2} and
further studied by Sunitha and Vijayakumar \cite{MS}. Bhutani and
Rosenfeld introduced the concept of $M$-strong fuzzy graphs in
\cite{KR2} and studied some properties. The concept of strong
arcs in fuzzy graphs was discussed in \cite{KR1}. Hongmei and
Lianhua gave the definition of interval-valued graph in \cite{JH}.\\
In this paper, we define the operations of Cartesian product,
composition, union and join on interval-valued fuzzy graphs and
investigate some properties. We study whether isomorphism (resp. weak
isomorphism) between interval-valued fuzzy graphs is an
equivalence relation (resp. a partial order). We introduce the
notion of interval-valued fuzzy complete graphs and present some
properties of self complementary and self weak complementary
interval-valued fuzzy complete graphs.\\
The definitions and terminologies that we used in this paper are
standard. For other notations, terminologies and applications, the
readers are referred to \cite{MA08, MA, AA, KT, FH, KP, SMS2, JN11,AM1, AP, LAZ75}.
\section{Preliminaries}
A {\it graph} is an ordered pair $G^*=(V,E),$ where $V$ is the
set of vertices of $G^*$ and $E$ is the set of edges of $G^*$. Two
vertices $x$ and $y$ in a graph $G^*$ are said to be adjacent in
$G^*$ if $\{x,y\}$ is an edge of $G^*$. (For simplicity an edge
$\{x,y\}$ will be denoted by $xy$.) A {\it simple graph} is a
graph without loops and multiple edges. A {\it complete
graph} is a simple graph in which every pair of distinct vertices
is connected by an edge. The complete graph on $n$ vertices has
$n$ vertices and $n(n-1)/2$ edges. We will consider only graphs
with a finite number of vertices and edges.
By a {\it complementary graph} $\overline{G^*}$ of a simple graph
$G^*$ we mean a graph having the same vertices as $G^*$ and such
that two vertices are adjacent in $\overline{G^*}$ if and only if
they are not adjacent in $G^*$.
An {\it isomorphism} of graphs $G^*_1$ and $G^*_2$ is a bijection
between the vertex sets of $G^*_1$ and $G^*_2$ such that any two
vertices $v_1$ and $v_2$ of $G^*_1$ are adjacent in $G^*_1$ if and
only if $f(v_1)$ and $f(v_2)$ are adjacent in $G^*_2$. Isomorphic
graphs are denoted by $G^*_1 \simeq G^*_2.$
Let $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ be two simple graphs,
we can construct several new graphs. The first construction called
the {\it Cartesian product} of $G^*_1$ and $G^*_2$ gives a graph
$G^*_1 \times G^*_2=(V, E)$ with $V=V_1 \times V_2$ and
\[
E= \{(x,x_2)(x,y_2)| x\in V_1, x_2y_2\in E_2\}\cup\{(x_1,z)(y_1,
z)|x_1y_1 \in E_1,z\in V_2 \}.
\]
The {\it composition} of graphs $G^*_1$ and $G^*_2$ is the graph
$G^*_1[G^*_2]=(V_1 \times V_2,E^0)$, where
$$
E^0= E\cup\{(x_1,x_2)(y_1,y_2)|x_1y_1 \in E_1, x_2\neq y_2\}
$$
and $E$ is defined as in $G^*_1 \times G^*_2$. Note that
$G^*_1[G^*_2]\neq G^*_2[G^*_1].$
The {\it union} of graphs $G^*_1$ and $G^*_2$ is defined as $G^*_1
\cup G^*_2=(V_1\cup V_2, E_1\cup E_2)$.
The {\it join} of $G^*_1$ and $G^*_2$ is the simple graph $G^*_1 +
G^*_2=(V_1 \cup V_2, E_1 \cup E_2 \cup E')$, where $E'$ is the set
of all edges joining the nodes of $V_1$ and $V_2$. In this
construction it is assumed that $V_1\cap V_2=\emptyset$.
By a {\it fuzzy subset} $\mu$ on a set $X$ we mean a map $\mu
:X\to [0,1]$. A map $\nu: X\times X\to [0,1]$ is called a {\it
fuzzy relation} on $X$ if $\nu(x,y)\leq \min(\mu(x),\mu(y))$ for
all $x,y\in X$. A fuzzy relation $\nu$ is {\it symmetric} if
$\nu(x, y)= \nu(y, x)$ for all $x,y\in X$.
An {\it interval number} $D$ is an interval $[a^{-}, a^{+}]$ with
$0\leq a^-\leq a^+\leq 1$. The interval $[a,a]$ is identified with
the number $a\in [0,1]$. $D[0,1]$ denotes the set of all interval
numbers.
For interval numbers $D_1=[a_1^{-}, b_1^{+}]$ and $D_2=[a_2^{-},
b_2^{+}]$, we define
\begin{itemize}
\item ${\rm rmin}(D_1, D_2)={\rm rmin}([a_1^{-}, b_1^{+}],
[a_2^{-}, b_2^{+}])= [\min\{a_1^{-}, a_2^{-}\}, \min\{b_1^{+},
b_2^{+}\}]$,
\item ${\rm rmax}(D_1, D_2)={\rm rmax}([a_1^{-},
b_1^{+}], [a_2^{-}, b_2^{+}])= [\max\{a_1^{-}, a_2^{-}\},
\max\{b_1^{+}, b_2^{+}\}]$,
\item $D_1 + D_2=[a_1^-+a_2^--a_1^-\cdot a_2^-, b_1^++b_2^+-b_1^+\cdot
b_2^+]$,
\item $D_1 \leq D_2$ $\Longleftrightarrow$ $a_1^{-} \leq a_2^{-}$ and
$b_1^{+} \leq b_2^{+}$,
\item $D_1=D_2$ $\Longleftrightarrow$ $a_1^{-} = a_2^{-}$ and $b_1^{+} = b_2^{+}$,
\item $D_1 <D_2$ $\Longleftrightarrow$ $D_1 \leq D_2$ and $D_1 \neq D_2$,
\item $kD= k[a_1^{-}, b_1^{+}]= [ka_1^{-}, kb_1^{+}]$, where $ 0 \leq k \leq
1$.
\end{itemize}
Then, $(D[0,1],\leq,\vee,\wedge)$ is a complete lattice with
$[0,0]$ as the least element and $[1,1]$ as the greatest.
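These operations are straightforward to implement. The following Python sketch (an illustration only; our choice is to encode an interval number $[a^{-},b^{+}]$ as a pair \texttt{(a, b)}) reproduces ${\rm rmin}$, ${\rm rmax}$, the sum and the order defined above.
\begin{verbatim}
def rmin(D1, D2):
    return (min(D1[0], D2[0]), min(D1[1], D2[1]))

def rmax(D1, D2):
    return (max(D1[0], D2[0]), max(D1[1], D2[1]))

def interval_sum(D1, D2):
    a1, b1 = D1
    a2, b2 = D2
    return (a1 + a2 - a1 * a2, b1 + b2 - b1 * b2)

def leq(D1, D2):
    return D1[0] <= D2[0] and D1[1] <= D2[1]

assert rmin((0.2, 0.5), (0.3, 0.4)) == (0.2, 0.4)
assert rmax((0.2, 0.5), (0.3, 0.4)) == (0.3, 0.5)
assert leq((0.2, 0.4), (0.3, 0.5)) and not leq((0.2, 0.5), (0.3, 0.4))
\end{verbatim}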
An {\it interval-valued fuzzy set} $A$ in $V$ is defined by
\[
A=\{(x, [\mu^-_A(x), \mu^+_A(x)]): x \in V \},
\]
where $\mu^-_A$ and $\mu^+_A$ are fuzzy subsets of $V$ such
that $\mu^-_A (x)\leq \mu^+_A(x)$ for all $x\in V.$ For any two
interval-valued sets $A=[\mu^-_A(x),\mu^+_A(x)]$ and
$B=[\mu^-_B(x), \mu^+_B(x)]$ in $V$ we define:
\begin{itemize}
\item $A\bigcup B=\{(x,\max(\mu^-_A(x),\mu^-_B(x)),\max(\mu^+_A(x),\mu^+_B(x))):x\in V\}$,
\item $A\bigcap B=\{(x,\min(\mu^-_A(x),\mu^-_B(x)),\min(\mu^+_A(x),\mu^+_B(x))):x\in V\}$.
\end{itemize}
If $G^*=(V,E)$ is a graph, then by an {\it interval-valued fuzzy
relation} $B$ on a set $E$ we mean an interval-valued fuzzy set
such that
\[
\mu^-_B(xy)\leq\min(\mu^-_A(x),\mu^-_A(y)),
\]
\[
\mu^+_B(xy) \leq \min(\mu^+_A(x),\mu^+_A(y))
\]
for all $xy\in E$.
\section{Operations on interval-valued fuzzy graphs}
Throughout this paper, $G^*$ is a crisp graph, and $G$ is an
interval-valued fuzzy graph.
\begin{definition}
By an {\it interval-valued fuzzy graph} of a graph $G^*=(V,E)$ we
mean a pair $G=(A,B)$, where $A=[\mu^-_A,\mu^+_A]$ is an
interval-valued fuzzy set on $V$ and $B=[\mu^-_B,\mu^+_B]$ is an
interval-valued fuzzy relation on $E$.
\end{definition}
\begin{example}
Consider a graph $G^*=(V, E)$ such that $V=\{x,y,z\}$, $E=\{xy,
yz,zx\}$. Let $A$ be an interval-valued fuzzy set of $V$ and let $B$ be an interval-valued fuzzy set of $E \subseteq V \times V$ defined by
\[A=< (\frac{x}{0.2}, \frac{y}{0.3}, \frac{z}{0.4}), (\frac{x}{0.4}, \frac{y}{0.5}, \frac{z}{0.5})
>, \]
\[B=< (\frac{xy}{0.1}, \frac{yz}{0.2}, \frac{zx}{0.1}), (\frac{xy}{0.3}, \frac{yz}{0.4}, \frac{zx}{0.4})
>. \]
\begin{center}
\begin{tikzpicture}[scale=3]
\path (1,0.30) node (l) {$y$};
\path (1.9,-1.55) node (l){$G$};
\path (2,0.30) node (l) {$z$};
\path (2,-1.30) node (l) {$x$};
\path (1.5,0.1) node (l) {\tiny {$[0.2, 0.4]$}};
\path (2.20,-0.5) node (l) {\tiny{$[0.1,0.4]$}};
\path (1.7,-0.5) node (l) {\tiny{$[0.1, 0.3]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (2,0) node (z) {\tiny{$[0.4, 0.5]$}};
\path (2,-1) node (x) {\tiny{$[0.2, 0.4]$}};
\path (1,0) node (y) {\tiny{$[0.3, 0.5]$}};
\draw
(y) -- (z)
(y) -- (x)
(z) -- (x);
\end{tikzpicture}
\end{center}
By routine computations, it is easy to see that $G=(A, B)$ is an
interval-valued fuzzy graph of $G^*$.
\end{example}
\begin{definition}\label{D-33}
The {\it Cartesian product $G_1\times G_2$ of two interval-valued
fuzzy graphs} $G_1=(A_1,B_1)$ and $G_2=(A_2,B_2)$ of the graphs
$G^*_1=(V_1,E_1)$ and $G^*_2=(V_2,E_2)$ is defined as a pair
$(A_1\times A_2,B_1\times B_2)$ such that
\begin{itemize}
\item [\rm(i)]
$\left\{\begin{array}{ll}(\mu^-_{A_1} \times \mu^-_{A_2})(x_1,
x_2)=\min(\mu^-_{A_1}(x_1),\mu^-_{A_2}(x_2)) \\
(\mu^+_{A_1}\times\mu^+_{A_2})(x_1, x_2)=\min(\mu^+_{A_1}(x_1),
\mu^+_{A_2}(x_2))\end{array}\right.$
for all\ $(x_1, x_2) \in V,$
\item [\rm(ii)]
$\left\{\begin{array}{ll}(\mu^-_{B_1}\times\mu^-_{B_2})((x,x_2)(x,y_2))=\min(\mu^-_{A_1}(x),
\mu^-_{B_2}(x_2y_2))\\
(\mu^+_{B_1} \times\mu^+_{B_2})((x,x_2)(x,y_2))=
\min(\mu^+_{A_1}(x), \mu^+_{B_2}(x_2y_2))\end{array}\right.$
for all $x\in V_1$ and $x_2y_2 \in E_2$,
\item [\rm(iii)]
$\left\{\begin{array}{ll}(\mu^-_{B_1}\times \mu^-_{B_2})((x_1,z)(y_1,z))=\min(\mu^-_{B_1}(x_1y_1),\mu^-_{A_2}(z))\\
(\mu^+_{B_1}\times\mu^+_{B_2})((x_1,z)(y_1,z))=\min(\mu^+_{B_1}(x_1y_1),\mu^+_{A_2}(z))\end{array}\right.$
for all $z\in V_2$ and $x_1y_1 \in E_1$.
\end{itemize}
\end{definition}
\begin{example}\label{Ex-34} Let $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ be graphs such that
$V_1=\{a, b\}$, $V_2=\{c, d\}$, $E_1=\{ab\}$ and $E_2=\{cd\}$.
Consider two interval-valued fuzzy graphs $G_1=(A_1,B_1)$ and
$G_2=(A_2,B_2)$, where
\[A_1=< (\frac{a}{0.2}, \frac{b}{0.3}), (\frac{a}{0.4}, \frac{b}{0.5})
>, \ \ \ \ \ \ B_1=< \frac{ab}{0.1}, \frac{ab}{0.2} >, \]
\[A_2=< (\frac{c}{0.1}, \frac{d}{0.2}), (\frac{c}{0.4}, \frac{d}{0.6})
>, \ \ \ \ \ \ B_2=< \frac{cd}{0.1}, \frac{cd}{0.3} >. \]
Then, as is easy to verify,
\[ (\mu^-_{B_1}\times\mu^-_{B_2})((a,c)(a,d))=0.1, \ \ \ \ \
\ (\mu^+_{B_1}\times\mu^+_{B_2})((a,c)(a,d))=0.3,\]
\[ (\mu^-_{B_1}\times\mu^-_{B_2})((a,c)(b,c))=0.1, \ \ \ \ \ \ (\mu^+_{B_1}\times\mu^+_{B_2})((a,c)(b,c))=0.2,\]
\[ (\mu^-_{B_1}\times\mu^-_{B_2})((a,d)(b,d))=0.1, \ \ \ \ \ \ (\mu^+_{B_1}\times\mu^+_{B_2})((a,d)(b,d))=0.2,\]
\[ (\mu^-_{B_1}\times\mu^-_{B_2})((b,c)(b,d))=0.1, \ \ \ \ \ \ (\mu^+_{B_1}\times\mu^+_{B_2})((b,c)(b,d))=0.3.\]
\begin{center}
\begin{tikzpicture}[scale=3]
\path (1,0.30) node (l) {$c$};
\path (1,-1.50) node (l) {$G_2$};
\path (0,0.30) node (l) {$a$};
\path (0,-1.50) node (l) {$G_1$};
\path (0,-1.30) node (l) {$b$};
\path (1,-1.30) node (l) {$d$};
\path (-0.2,-0.5) node (l) {\tiny{$[0.1,0.2]$}};
\path (1.20,-0.5) node (l) {\tiny{$[0.1,0.3]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.2,0.4]$}};
\path (1,0) node (b) {\tiny{$[0.1,0.4]$}};
\path (0,-1) node (d) {\tiny{$[0.3,0.5]$}};
\path (1,-1) node (e) {\tiny{$[0.2,0.6]$}};
\draw (a) -- (d)
(b) -- (e);
\end{tikzpicture}
\begin{tikzpicture}[scale=3]
\path (0.5,-0.5) node (l) {$G_1 \times G_2$};
\path (1,0.30) node (l) {$(a,d)$};
\path (0,0.30) node (l) {$(a,c)$};
\path (0,-1.30) node (l) {$(b,c)$};
\path (1,-1.30) node (l) {$(b,d)$};
\path (0.5,0.1) node (l) {\tiny {$[0.1,0.3]$}};
\path (-0.2,-0.5) node (l) {\tiny{$[0.1,0.2]$}};
\path (1.20,-0.5) node (l) {\tiny{$[0.1,0.2]$}};
\path (0.5,-1.10) node (l) {\tiny{$[0.1,0.3]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.1,0.4]$}};
\path (1,0) node (b) {\tiny{$[0.2,0.4]$}};
\path (0,-1) node (d) {\tiny{$[0.1,0.4]$}};
\path (1,-1) node (e) {\tiny{$[0.2,0.5]$}};
\draw (a) -- (b)
(a) -- (d)
(d) -- (e)
(b) -- (e);
\end{tikzpicture}
\end{center}
By routine computations, it is easy to see that $G_1\times G_2$ is
an interval-valued fuzzy graph of $G^*_1\times G^*_2$.
\end{example}
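Since the computation in the example above is purely mechanical, it can also be checked by machine. The following Python sketch (illustrative only; the encoding of vertex sets as dictionaries of intervals and of edges as \texttt{frozenset}s is ours) implements rules (i)--(iii) of the definition of the Cartesian product and reproduces two of the edge values computed above.
\begin{verbatim}
def cartesian_product(A1, B1, A2, B2):
    """Cartesian product of interval-valued fuzzy graphs, rules (i)-(iii)."""
    A = {(x1, x2): (min(A1[x1][0], A2[x2][0]), min(A1[x1][1], A2[x2][1]))
         for x1 in A1 for x2 in A2}
    B = {}
    for x in A1:                                   # rule (ii)
        for e in B2:
            x2, y2 = tuple(e)
            B[frozenset({(x, x2), (x, y2)})] = (min(A1[x][0], B2[e][0]),
                                                min(A1[x][1], B2[e][1]))
    for z in A2:                                   # rule (iii)
        for e in B1:
            x1, y1 = tuple(e)
            B[frozenset({(x1, z), (y1, z)})] = (min(B1[e][0], A2[z][0]),
                                                min(B1[e][1], A2[z][1]))
    return A, B

A1 = {'a': (0.2, 0.4), 'b': (0.3, 0.5)}
B1 = {frozenset({'a', 'b'}): (0.1, 0.2)}
A2 = {'c': (0.1, 0.4), 'd': (0.2, 0.6)}
B2 = {frozenset({'c', 'd'}): (0.1, 0.3)}
A, B = cartesian_product(A1, B1, A2, B2)
assert B[frozenset({('a', 'c'), ('a', 'd')})] == (0.1, 0.3)
assert B[frozenset({('a', 'c'), ('b', 'c')})] == (0.1, 0.2)
\end{verbatim}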
\begin{proposition}
The Cartesian product $G_1\times G_2=(A_1\times A_2,B_1\times
B_2)$ of two interval-valued fuzzy graphs of the graphs $G^*_1$
and $G^*_2$ is an interval-valued fuzzy graph of $G^*_1 \times
G^*_2$.
\end{proposition}
\begin{proof}
We verify only conditions for $B_1\times B_2$ because conditions
for $A_1\times A_2$ are obvious.
Let $x\in V_1$, $x_2y_2\in E_2$. Then
\begin{eqnarray*}
(\mu^-_{B_1} \times \mu^-_{B_2})((x, x_2)(x, y_2)) &=& \min(\mu^-_{A_1}(x), \mu^-_{B_2}(x_2y_2)) \\
&\leq& \min(\mu^-_{A_1}(x), \min(\mu^-_{A_2}(x_2), \mu^-_{A_2}(y_2))) \\
&=& \min(\min(\mu^-_{A_1}(x), \mu^-_{A_2}(x_2)), \min(\mu^-_{A_1}(x),
\mu^-_{A_2}(y_2)))\\
&=& \min((\mu^-_{A_1} \times \mu^-_{A_2})(x, x_2), (\mu^-_{A_1} \times \mu^-_{A_2})(x,
y_2)),\\[4pt]
(\mu^+_{B_1} \times \mu^+_{B_2})((x, x_2)(x, y_2)) &=& \min(\mu^+_{A_1}(x), \mu^+_{B_2}(x_2y_2)) \\
&\leq& \min(\mu^+_{A_1}(x), \min(\mu^+_{A_2}(x_2), \mu^+_{A_2}(y_2))) \\
&=& \min(\min(\mu^+_{A_1}(x), \mu^+_{A_2}(x_2)), \min(\mu^+_{A_1}(x),
\mu^+_{A_2}(y_2)))\\
&=& \min((\mu^+_{A_1} \times \mu^+_{A_2})(x, x_2), (\mu^+_{A_1} \times \mu^+_{A_2})(x,
y_2)).
\end{eqnarray*}
Similarly for $z\in V_2$ and $x_1y_1\in E_1$ we have
\begin{eqnarray*}
(\mu^-_{B_1} \times \mu^-_{B_2})((x_1, z)(y_1, z))
&=&\min(\mu^-_{B_1}(x_1 y_1),\mu^-_{A_2}(z)) \\
&\leq& \min( \min(\mu^-_{A_1}(x_1), \mu^-_{A_1}(y_1)), \mu^-_{A_2}(z)) \\
&=& \min(\min(\mu^-_{A_1}(x_1), \mu^-_{A_2}(z)), \min(\mu^-_{A_1}(y_1),
\mu^-_{A_2}(z)))\\
&=& \min((\mu^-_{A_1} \times \mu^-_{A_2})(x_1, z), (\mu^-_{A_1} \times \mu^-_{A_2})(y_1,
z)),\\[4pt]
(\mu^+_{B_1}\times\mu^+_{B_2})((x_1, z)(y_1, z))
&=&\min(\mu^+_{B_1}(x_1 y_1),\mu^+_{A_2}(z)) \\
&\leq& \min( \min(\mu^+_{A_1}(x_1), \mu^+_{A_1}(y_1)), \mu^+_{A_2}(z)) \\
&=& \min(\min(\mu^+_{A_1}(x_1), \mu^+_{A_2}(z)), \min(\mu^+_{A_1}(y_1),
\mu^+_{A_2}(z)))\\
&=& \min((\mu^+_{A_1} \times \mu^+_{A_2})(x_1, z), (\mu^+_{A_1} \times \mu^+_{A_2})(y_1,
z)).
\end{eqnarray*}
This completes the proof.
\end{proof}
\begin{definition}\label{D-36}
The {\it composition $G_1[G_2]=(A_1\circ A_2,B_1\circ B_2)$ of two
interval-valued fuzzy graphs} $G_1$ and $G_2$ of the graphs
$G^*_1$ and $G^*_2$ is defined as follows:
\begin{itemize}
\item [\rm (i)] $\left\{\begin{array}{ll}(\mu^-_{A_1} \circ \mu^-_{A_2})(x_1, x_2)=\min(\mu^-_{A_1}(x_1),
\mu^-_{A_2}(x_2)) \\
(\mu^+_{A_1} \circ \mu^+_{A_2})(x_1, x_2)=\min(\mu^+_{A_1}(x_1),
\mu^+_{A_2}(x_2))\end{array}\right.$
for all $(x_1, x_2) \in V,$
\item [\rm(ii)]
$\left\{\begin{array}{ll}(\mu^-_{B_1}\circ
\mu^-_{B_2})((x,x_2)(x,y_2))=\min(\mu^-_{A_1}(x),\mu^-_{B_2}(x_2y_2))\\
(\mu^+_{B_1}\circ\mu^+_{B_2})((x,x_2)(x,y_2))=\min(\mu^+_{A_1}(x),
\mu^+_{B_2}(x_2y_2))\end{array}\right.$
for all $x\in V_1$ and $x_2y_2 \in E_2$,
\item [\rm(iii)]
$\left\{\begin{array}{ll}(\mu^-_{B_1} \circ \mu^-_{B_2})((x_1,z)(y_1, z))=\min(\mu^-_{B_1}(x_1y_1), \mu^-_{A_2}(z))\\
(\mu^+_{B_1} \circ \mu^+_{B_2})((x_1,z)(y_1,z))=
\min(\mu^+_{B_1}(x_1y_1), \mu^+_{A_2}(z))\end{array}\right.$
for all $z\in V_2$ and $x_1y_1\in E_1$,
\item [\rm(iv)] $\left\{\begin{array}{ll}(\mu^-_{B_1}\circ\mu^-_{B_2})((x_1, x_2)(y_1, y_2))=
\min(\mu^-_{A_2}(x_2), \mu^-_{A_2}(y_2),\mu^-_{B_1}(x_1y_1))\\
(\mu^+_{B_1} \circ \mu^+_{B_2})((x_1, x_2) (y_1,y_2))=
\min(\mu^+_{A_2}(x_2), \mu^+_{A_2}(y_2),
\mu^+_{B_1}(x_1y_1))\end{array}\right.$
for all $(x_1, x_2)(y_1, y_2) \in E^0-E$.
\end{itemize}
\end{definition}
\begin{example}\label{Ex-37} Let $G^*_1$ and $G^*_2$ be as in the previous example.
Consider two interval-valued fuzzy graphs $G_1=(A_1,B_1)$ and
$G_2=(A_2,B_2)$ defined by
\[A_1=< (\frac{a}{0.2}, \frac{b}{0.3}), (\frac{a}{0.5}, \frac{b}{0.5})
>, \ \ \ \ \ \ B_1=< \frac{ab}{0.2},\frac{ab}{0.4}>, \]
\[A_2=< (\frac{c}{0.1}, \frac{d}{0.3}), (\frac{c}{0.4}, \frac{d}{0.6})
>, \ \ \ \ \ \ B_2=<\frac{cd}{0.1}, \frac{cd}{0.3} >. \]
Then we have
\[(\mu^-_{B_1}\circ\mu^-_{B_2})((a,c)(a,d))=0.1,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((a,c)(a,d))=0.3, \]
\[(\mu^-_{B_1}\circ\mu^-_{B_2})((b,c)(b,d))=0.1,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((b,c)(b,d))=0.3, \]
\[(\mu^-_{B_1}\circ\mu^-_{B_2})((a,c)(b,c))=0.1,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((a,c)(b,c))=0.4, \]
\[(\mu^-_{B_1}\circ\mu^-_{B_2})((a,d)(b,d))=0.2,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((a,d)(b,d))=0.4, \]
\[(\mu^-_{B_1}\circ\mu^-_{B_2})((a,c)(b,d))=0.1,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((a,c)(b,d))=0.4, \]
\[(\mu^-_{B_1}\circ\mu^-_{B_2})((b,c)(a,d))=0.1,\ \ \ \ \ \ (\mu^+_{B_1}\circ\mu^+_{B_2})((b,c)(a,d))=0.4. \]
\begin{center}
\begin{tikzpicture}[scale=3]
\path (1,0.30) node (l) {$c$};
\path (1,-1.50) node (l) {$G_2$};
\path (0,0.30) node (l) {$a$};
\path (0,-1.50) node (l) {$G_1$};
\path (0,-1.30) node (l) {$b$};
\path (1,-1.30) node (l) {$d$};
\path (-0.2,-0.5) node (l) {\tiny{$[0.2,0.4]$}};
\path (1.20,-0.5) node (l) {\tiny{$[0.1,0.3]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.2,0.5]$}};
\path (1,0) node (b) {\tiny{$[0.1,0.4]$}};
\path (0,-1) node (d) {\tiny{$[0.3,0.5]$}};
\path (1,-1) node (e) {\tiny{$[0.3,0.6]$}};
\draw (a) -- (d)
(b) -- (e);
\end{tikzpicture}
\begin{tikzpicture}[scale=3]
\path (1,0.30) node (l) {$(a,d)$};
\path (0.5,-1.50) node (l) {$G_1 [G_2]$};
\path (0,0.30) node (l) {$(a,c)$};
\path (0,-1.30) node (l) {$(b,c)$};
\path (1,-1.30) node (l) {$(b,d)$};
\path (0.5,0.1) node (l) {\tiny {$[0.1,0.3]$}};
\path (-0.2,-0.5) node (l) {\tiny{$[0.1,0.4]$}};
\path (1.20,-0.5) node (l) {\tiny{$[0.2,0.4]$}};
\path (0.5,-1.10) node (l) {\tiny{$[0.1,0.3]$}};
\path (0.7,-0.6) node (l)[rotate=-45] {\tiny{$[0.1,0.4]$}};
\path (0.30,-0.6) node (l)[rotate=45] {\tiny{$[0.1,0.4]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.1,0.4]$}};
\path (1,0) node (b) {\tiny{$[0.2,0.5]$}};
\path (0,-1) node (d) {\tiny{$[0.1,0.4]$}};
\path (1,-1) node (e) {\tiny{$[0.3,0.5]$}};
\draw (a) -- (b)
(a) -- (d)
(d) -- (e)
(b) -- (e)
(a) -- (e)
(b) -- (d);
\end{tikzpicture}
\end{center}
By routine computations, it is easy to see that $G_1[G_2]= (A_1
\circ A_2, B_1 \circ B_2)$ is an interval-valued fuzzy graph of
$G^*_1[G^*_2]$.
\end{example}
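The same kind of check works for the composition. The following Python sketch (illustrative only, with the same encoding as in the sketch for the Cartesian product) implements rules (i)--(iv) of the definition of the composition and reproduces two of the values listed above.
\begin{verbatim}
def composition(A1, B1, A2, B2):
    """Composition G1[G2] of interval-valued fuzzy graphs, rules (i)-(iv)."""
    A = {(x1, x2): (min(A1[x1][0], A2[x2][0]), min(A1[x1][1], A2[x2][1]))
         for x1 in A1 for x2 in A2}
    B = {}
    for x in A1:                                   # rule (ii)
        for e in B2:
            x2, y2 = tuple(e)
            B[frozenset({(x, x2), (x, y2)})] = (min(A1[x][0], B2[e][0]),
                                                min(A1[x][1], B2[e][1]))
    for z in A2:                                   # rule (iii)
        for e in B1:
            x1, y1 = tuple(e)
            B[frozenset({(x1, z), (y1, z)})] = (min(B1[e][0], A2[z][0]),
                                                min(B1[e][1], A2[z][1]))
    for e in B1:                                   # rule (iv)
        x1, y1 = tuple(e)
        for x2 in A2:
            for y2 in A2:
                if x2 != y2:
                    B[frozenset({(x1, x2), (y1, y2)})] = (
                        min(A2[x2][0], A2[y2][0], B1[e][0]),
                        min(A2[x2][1], A2[y2][1], B1[e][1]))
    return A, B

A1 = {'a': (0.2, 0.5), 'b': (0.3, 0.5)}
B1 = {frozenset({'a', 'b'}): (0.2, 0.4)}
A2 = {'c': (0.1, 0.4), 'd': (0.3, 0.6)}
B2 = {frozenset({'c', 'd'}): (0.1, 0.3)}
A, B = composition(A1, B1, A2, B2)
assert B[frozenset({('a', 'c'), ('a', 'd')})] == (0.1, 0.3)
assert B[frozenset({('a', 'c'), ('b', 'd')})] == (0.1, 0.4)
\end{verbatim}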
\begin{proposition}\label{P-38}
The composition $G_1[G_2]$ of interval-valued fuzzy graphs $G_1$
and $G_2$ of $G^*_1$ and $G^*_2$ is an interval-valued fuzzy graph
of $G^*_1[G^*_2]$.
\end{proposition}
\begin{proof}
Similarly as in the previous proof we verify the conditions for
$B_1\circ B_2$ only.
In the case $x\in V_1$, $x_2 y_2\in E_2$, according to $(ii)$ we
obtain
\begin{eqnarray*}
(\mu^-_{B_1} \circ \mu^-_{B_2})((x, x_2)(x, y_2)) &=& \min(\mu^-_{A_1}(x), \mu^-_{B_2}(x_2y_2)) \\
&\leq& \min(\mu^-_{A_1}(x), \min(\mu^-_{A_2}(x_2), \mu^-_{A_2}(y_2))) \\
&=& \min(\min(\mu^-_{A_1}(x), \mu^-_{A_2}(x_2)), \min(\mu^-_{A_1}(x),
\mu^-_{A_2}(y_2)))\\
&=& \min((\mu^-_{A_1} \circ \mu^-_{A_2})(x, x_2), (\mu^-_{A_1} \circ \mu^-_{A_2})(x,
y_2)),\\[4pt]
(\mu^+_{B_1}\circ \mu^+_{B_2})((x, x_2)(x, y_2)) &=& \min(\mu^+_{A_1}(x), \mu^+_{B_2}(x_2y_2)) \\
&\leq& \min(\mu^+_{A_1}(x), \min(\mu^+_{A_2}(x_2), \mu^+_{A_2}(y_2))) \\
&=& \min(\min(\mu^+_{A_1}(x), \mu^+_{A_2}(x_2)), \min(\mu^+_{A_1}(x),
\mu^+_{A_2}(y_2)))\\
&=& \min((\mu^+_{A_1}\circ \mu^+_{A_2})(x, x_2), (\mu^+_{A_1} \circ \mu^+_{A_2})(x,
y_2)).
\end{eqnarray*}
In the case $z\in V_2$, $x_1y_1\in E_1$ the proof is similar.
In the case $(x_1, x_2)(y_1, y_2)\in E^0-E$ we have $x_1y_1\in
E_1$ and $x_2\neq y_2$, which according to $(iv)$ implies
\begin{eqnarray*}
(\mu^-_{B_1}\circ\mu^-_{B_2})((x_1,x_2)(y_1,y_2))
&=&\min(\mu^-_{A_2}(x_2),\mu^-_{A_2}(y_2),\mu^-_{B_1}(x_1y_1)) \\
&\leq& \min(\mu^-_{A_2}(x_2), \mu^-_{A_2}(y_2), \min(\mu^-_{A_1}(x_1), \mu^-_{A_1}(y_1)) )\\
&=& \min(\min(\mu^-_{A_1}(x_1), \mu^-_{A_2}(x_2)), \min(\mu^-_{A_1}(y_1),
\mu^-_{A_2}(y_2)))\\
&=&\min((\mu^-_{A_1}\circ\mu^-_{A_2})(x_1,
x_2),(\mu^-_{A_1}\circ\mu^-_{A_2})(y_1,y_2)),\\[4pt]
(\mu^+_{B_1}\circ \mu^+_{B_2})((x_1, x_2)(y_1, y_2))
&=&\min(\mu^+_{A_2}(x_2),\mu^+_{A_2}(y_2),\mu^+_{B_1}(x_1y_1)) \\
&\leq& \min(\mu^+_{A_2}(x_2), \mu^+_{A_2}(y_2), \min(\mu^+_{A_1}(x_1), \mu^+_{A_1}(y_1)) )\\
&=& \min(\min(\mu^+_{A_1}(x_1), \mu^+_{A_2}(x_2)), \min(\mu^+_{A_1}(y_1),
\mu^+_{A_2}(y_2)))\\
&=& \min((\mu^+_{A_1} \circ \mu^+_{A_2})(x_1, x_2), (\mu^+_{A_1} \circ \mu^+_{A_2})(y_1,
y_2)).
\end{eqnarray*}
This completes the proof.
\end{proof}
\begin{definition}\label{D-39}
The {\it union $G_1 \cup G_2=(A_1 \cup A_2, B_1 \cup B_2)$ of two
interval-valued fuzzy graphs} $G_1$ and $G_2$ of the graphs
$G^*_1$ and $G^*_2$ is defined as follows:
\begin{itemize}
\item[(A)] \
$\left\{\begin{array}{ll}
(\mu^-_{A_1} \cup\mu^-_{A_2})(x)=\mu^-_{A_1}(x) \ \ \ {\rm if } \
x\in V_1 \ {\rm and } \ x\not\in{V_2}, \\
(\mu^-_{A_1} \cup \mu^-_{A_2})(x)=\mu^-_{A_2}(x) \ \ \ {\rm if } \
x\in V_2 \ {\rm and } \ x\not\in{V_1},\\
(\mu^-_{A_1}\cup\mu^-_{A_2})(x)=\max(\mu^-_{A_1}(x),\mu^-_{A_2}(x))
\ \ {\rm if } \ x\in V_1\cap V_2,
\end{array}\right.$
\item[(B)] \
$\left\{\begin{array}{ll}
(\mu^+_{A_1} \cup \mu^+_{A_2})(x)=\mu^+_{A_1}(x) \ \ \ {\rm if } \ x\in V_1 \ {\rm and } \ x\not\in{V_2}, \\
(\mu^+_{A_1}\cup\mu^+_{A_2})(x)=\mu^+_{A_2}(x) \ \ \ {\rm if } \
x\in V_2 \ {\rm and } \ x\not\in{V_1},\\
(\mu^+_{A_1}\cup\mu^+_{A_2})(x)=\max(\mu^+_{A_1}(x),\mu^+_{A_2}(x))
\ \ {\rm if } \ x\in V_1\cap V_2,
\end{array}\right.$
\item[(C)] \
$\left\{\begin{array}{ll}
(\mu^-_{B_1} \cup \mu^-_{B_2})(xy)=\mu^-_{B_1}(xy) \ \ \ {\rm if } \ xy\in E_1 \ {\rm and } \ xy\not\in{E_2}, \\
(\mu^-_{B_1}\cup\mu^-_{B_2})(xy)= \mu^-_{B_2}(xy) \ \ \ {\rm if }
\ xy\in E_2 \ {\rm and } \ xy\not\in{E_1},\\
(\mu^-_{B_1}\cup\mu^-_{B_2})(xy)=\max(\mu^-_{B_1}(xy),\mu^-_{B_2}(xy))
\ \ {\rm if } \ xy\in E_1\cap E_2,
\end{array}\right.$
\item[(D)] \
$\left\{\begin{array}{ll}
(\mu^+_{B_1} \cup \mu^+_{B_2})(xy)=\mu^+_{B_1}(xy) \ \ \ {\rm if } \ xy\in E_1 \ {\rm and } \ xy\not\in{E_2},\\
(\mu^+_{B_1}\cup\mu^+_{B_2})(xy)=\mu^+_{B_2}(xy) \ \ \ {\rm if } \
xy\in E_2 \ {\rm and } \ xy\not\in{E_1},\\
(\mu^+_{B_1}\cup\mu^+_{B_2})(xy)=\max(\mu^+_{B_1}(xy),\mu^+_{B_2}(xy))
\ \ {\rm if } \ xy\in E_1\cap E_2.
\end{array}\right.$
\end{itemize}
\end{definition}
\begin{example}\label{Ex-310} Let $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ be graphs such that
$V_1=\{a, b, c, d, e\}$, $E_1=\{ab, bc, be, ce, ad, ed\}$,
$V_2=\{a, b, c, d, f\}$ and $E_2=\{ab, bc, cf, bf, bd\}$.
Consider two interval-valued fuzzy graphs $G_1=(A_1, B_1)$ and
$G_2=(A_2, B_2)$ defined by
\[A_1=< (\frac{a}{0.2}, \frac{b}{0.4}, \frac{c}{0.3}, \frac{d}{0.3}, \frac{e}{0.2}), (\frac{a}{0.4}, \frac{b}{0.5}, \frac{c}{0.6}, \frac{d}{0.7}, \frac{e}{0.6})>, \]
\[B_1=< (\frac{ab}{0.1}, \frac{bc}{0.2}, \frac{ce}{0.1}, \frac{be}{0.2}, \frac{ad}{0.1}, \frac{de}{0.1}
), (\frac{ab}{0.3}, \frac{bc}{0.4}, \frac{ce}{0.5},
\frac{be}{0.5}, \frac{ad}{0.3}, \frac{de}{0.6})>,\]
\[A_2=< (\frac{a}{0.2}, \frac{b}{0.2}, \frac{c}{0.3}, \frac{d}{0.2}, \frac{f}{0.4}), (\frac{a}{0.4}, \frac{b}{0.5}, \frac{c}{0.6}, \frac{d}{0.6}, \frac{f}{0.6})>, \]
\[B_2=< (\frac{ab}{0.1}, \frac{bc}{0.2}, \frac{cf}{0.1}, \frac{bf}{0.1}, \frac{bd}{0.2} ),
(\frac{ab}{0.2}, \frac{bc}{0.4}, \frac{cf}{0.5},
\frac{bf}{0.2},\frac{bd}{0.5})>.\] Then, according to the above
definition:
$\begin{array}{llll}
(\mu^-_{A_1} \cup \mu^-_{A_2})(a)=0.2, & (\mu^-_{A_1} \cup \mu^-_{A_2})(b)=0.4, &
(\mu^-_{A_1} \cup \mu^-_{A_2})(c)=0.3, & (\mu^-_{A_1} \cup \mu^-_{A_2})(d)=0.3, \\
(\mu^-_{A_1}\cup\mu^-_{A_2})(e)=0.2, & (\mu^-_{A_1} \cup \mu^-_{A_2})(f)=0.4, &
(\mu^+_{A_1} \cup \mu^+_{A_2})(a)=0.4, & (\mu^+_{A_1} \cup \mu^+_{A_2})(b)=0.5, \\
(\mu^+_{A_1} \cup \mu^+_{A_2})(c)=0.6, & (\mu^+_{A_1} \cup \mu^+_{A_2})(d)=0.7, &
(\mu^+_{A_1} \cup \mu^+_{A_2})(e)=0.6, & (\mu^+_{A_1} \cup \mu^+_{A_2})(f)=0.6,\\
(\mu^-_{B_1} \cup \mu^-_{B_2})(ab)=0.1, & (\mu^-_{B_1} \cup \mu^-_{B_2})(bc)=0.2, &
(\mu^-_{B_1} \cup \mu^-_{B_2})(ce)=0.1, & (\mu^-_{B_1}\cup\mu^-_{B_2})(be)=0.2, \\
(\mu^-_{B_1}\cup\mu^-_{B_2})(ad)=0.1, & (\mu^-_{B_1} \cup \mu^-_{B_2})(de)=0.1, &
(\mu^-_{B_1} \cup \mu^-_{B_2})(bd)=0.2, & (\mu^-_{B_1}\cup\mu^-_{B_2})(bf)=0.1,\\
(\mu^+_{B_1}\cup \mu^+_{B_2})(ab)=0.3, & (\mu^+_{B_1} \cup \mu^+_{B_2})(bc)=0.4, &
(\mu^+_{B_1}\cup\mu^+_{B_2})(ce)=0.5, & (\mu^+_{B_1} \cup \mu^+_{B_2})(be)=0.5,\\
(\mu^+_{B_1}\cup \mu^+_{B_2})(ad)=0.3, & (\mu^+_{B_1} \cup \mu^+_{B_2})(de)=0.6, &
(\mu^+_{B_1}\cup \mu^+_{B_2})(bd)=0.5, & (\mu^+_{B_1}\cup\mu^+_{B_2})(bf)=0.2.\end{array}$
\begin{center}
\begin{tikzpicture}[scale=3]
\path (1,0.80) node (l){$G_1$};
\path (1,0.30) node (l) {$b$};
\path (0,0.30) node (l) {$a$};
\path (2,0.30) node (l) {$c$};
\path (0,-1.30) node (l) {$d$};
\path (1,-1.30) node (l) {$e$};
\path (0.5,0.1) node (l) {\tiny {$[0.1,0.3]$}};
\path (1.5,0.1) node (l) {\tiny {$[0.2,0.4]$}};
\path (-0.2,-0.5) node (l) {\tiny{$[0.1,0.3]$}};
\path (0.8,-0.5) node (l) {\tiny{$[0.2,0.5]$}};
\path (0.5,-1.10) node (l) {\tiny{$[0.1,0.6]$}};
\path (1.7,-0.5) node (l) {\tiny{$[0.1,0.5]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.2,0.4]$}};
\path (1,0) node (b) {\tiny{$[0.4,0.5]$}};
\path (2,0) node (c) {\tiny{$[0.3,0.6]$}};
\path (0,-1) node (d) {\tiny{$[0.3,0.7]$}};
\path (1,-1) node (e) {\tiny{$[0.2,0.6]$}};
\draw (a) -- (b)
(b) -- (c)
(a) -- (d)
(d) -- (e)
(b) -- (e)
(c) -- (e);
\end{tikzpicture}
\begin{tikzpicture}[scale=3]
\path (1,0.80) node (l) {$G_2$};
\path (1,0.30) node (l) {$b$};
\path (0,0.30) node (l) {$a$};
\path (2,0.30) node (l) {$c$};
\path (0,-1.30) node (l) {$d$};
\path (2,-1.30) node (l) {$f$};
\path (0.5,0.1) node (l) {\tiny {$[0.1,0.2]$}};
\path (1.5,0.1) node (l) {\tiny {$[0.2,0.4]$}};
\path (0.8,-0.5) node (l) {\tiny{$[0.2,0.5]$}};
\path (2.20,-0.5) node (l) {\tiny{$[0.1,0.5]$}};
\path (1.7,-0.5) node (l) {\tiny{$[0.1,0.2]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.2,0.4]$}};
\path (1,0) node (b) {\tiny{$[0.2,0.5]$}};
\path (2,0) node (c) {\tiny{$[0.3,0.6]$}};
\path (0,-1) node (d) {\tiny{$[0.2,0.6]$}};
\path (2,-1) node (f) {\tiny{$[0.4,0.6]$}};
\draw (a) -- (b)
(b) -- (c)
(b) -- (f)
(b) -- (d)
(c) -- (f);
\end{tikzpicture}
\begin{tikzpicture}[scale=3]
\path (1,-1.80) node (l) {$G_1 \cup G_2$};
\path (1,0.30) node (l) {$b$};
\path (0,0.30) node (l) {$a$};
\path (2,0.30) node (l) {$c$};
\path (0,-1.30) node (l) {$d$};
\path (1,-1.30) node (l) {$e$};
\path (2,-1.30) node (l) {$f$};
\path (0.5,0.1) node (l) {\tiny {$[0.1,0.3]$}};
\path (1.5,0.1) node (l) {\tiny {$[0.2,0.4]$}};
\path (-0.2,-0.5) node (l) {\tiny{$[0.1,0.3]$}};
\path (0.8,-0.5) node (l) {\tiny{$[0.2,0.5]$}};
\path (0.5,-1.10) node (l) {\tiny{$[0.1,0.6]$}};
\path (1.7,-0.5) node (l) {\tiny{$[0.1,0.5]$}};
\path (2.20,-0.5) node (l) {\tiny{$[0.1,0.5]$}};
\path (0.4,-0.5) node (l)[rotate=40] {\tiny{$[0.2,0.5]$}};
\path (1.5,-0.2) node (l) {\tiny{$[0.1,0.2]$}};
\tikzstyle{every node}=[draw,shape=circle];
\path (0,0) node (a) {\tiny{$[0.2,0.4]$}};
\path (1,0) node (b) {\tiny{$[0.4,0.5]$}};
\path (2,0) node (c) {\tiny{$[0.3,0.6]$}};
\path (0,-1) node (d) {\tiny{$[0.3,0.7]$}};
\path (1,-1) node (e) {\tiny{$[0.2,0.6]$}};
\path (2,-1) node (f) {\tiny{$[0.4,0.6]$}};
\draw (a) -- (b)
(b) -- (c)
(a) -- (d)
(d) -- (e)
(b) -- (e)
(c) -- (e)
(c) -- (f)
(b) -- (d)
(b) -- (f);
\end{tikzpicture}
\end{center}
Clearly, $G_1 \cup G_2=(A_1\cup A_2,B_1\cup B_2)$ is an
interval-valued fuzzy graph of the graph $G_1^*\cup G_2^*$.
\end{example}
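The case analysis (A)--(D) is equally easy to mechanize. The following Python sketch (illustrative only; edges are encoded as ordered pairs used consistently in both graphs) implements the union and reproduces a few of the values listed above.
\begin{verbatim}
def union(A1, B1, A2, B2):
    """Union of interval-valued fuzzy graphs, following (A)-(D) above."""
    A = {}
    for x in set(A1) | set(A2):
        if x in A1 and x in A2:
            A[x] = (max(A1[x][0], A2[x][0]), max(A1[x][1], A2[x][1]))
        else:
            A[x] = A1.get(x) or A2.get(x)
    B = {}
    for e in set(B1) | set(B2):
        if e in B1 and e in B2:
            B[e] = (max(B1[e][0], B2[e][0]), max(B1[e][1], B2[e][1]))
        else:
            B[e] = B1.get(e) or B2.get(e)
    return A, B

A1 = {'a': (0.2, 0.4), 'b': (0.4, 0.5), 'c': (0.3, 0.6), 'd': (0.3, 0.7), 'e': (0.2, 0.6)}
B1 = {('a','b'): (0.1, 0.3), ('b','c'): (0.2, 0.4), ('c','e'): (0.1, 0.5),
      ('b','e'): (0.2, 0.5), ('a','d'): (0.1, 0.3), ('d','e'): (0.1, 0.6)}
A2 = {'a': (0.2, 0.4), 'b': (0.2, 0.5), 'c': (0.3, 0.6), 'd': (0.2, 0.6), 'f': (0.4, 0.6)}
B2 = {('a','b'): (0.1, 0.2), ('b','c'): (0.2, 0.4), ('c','f'): (0.1, 0.5),
      ('b','f'): (0.1, 0.2), ('b','d'): (0.2, 0.5)}
A, B = union(A1, B1, A2, B2)
assert A['e'] == (0.2, 0.6) and A['d'] == (0.3, 0.7)
assert B[('a','b')] == (0.1, 0.3) and B[('b','f')] == (0.1, 0.2)
\end{verbatim}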
\begin{proposition}\label{P-311}
The union of two interval-valued fuzzy graphs is an
interval-valued fuzzy graph.
\end{proposition}
\begin{proof}
Let $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ be interval-valued fuzzy
graphs of $G^*_1$ and $G^*_2$, respectively. We prove that
$G_1\cup G_2=(A_1 \cup A_2, B_1 \cup B_2)$ is an interval-valued
fuzzy graph of the graph $G_1^*\cup G_2^*$. Since all conditions
for $A_1\cup A_2$ are automatically satisfied we verify only
conditions for $B_1\cup B_2$.
At first we consider the case when $xy\in E_1\cap E_2$. Then
\begin{eqnarray*}
(\mu^-_{B_1} \cup \mu^-_{B_2} )(xy) &=& \max(\mu^-_{B_1}(xy), \mu^-_{B_2}(xy)) \\
&\leq& \max( \min(\mu^-_{A_1}(x), \mu^-_{A_1}(y)), \min(\mu^-_{A_2}(x), \mu^-_{A_2}(y))) \\
&\leq& \min( \max(\mu^-_{A_1}(x), \mu^-_{A_2}(x)), \max(\mu^-_{A_1}(y), \mu^-_{A_2}(y))) \\
&=& \min((\mu^-_{A_1} \cup \mu^-_{A_2})(x), (\mu^-_{A_1} \cup
\mu^-_{A_2})(y)),\\
(\mu^+_{B_1} \cup \mu^+_{B_2} )(xy) &=& \max(\mu^+_{B_1}(xy), \mu^+_{B_2}(xy)) \\
&\leq& \max( \min(\mu^+_{A_1}(x), \mu^+_{A_1}(y)), \min(\mu^+_{A_2}(x), \mu^+_{A_2}(y))) \\
&\leq& \min( \max(\mu^+_{A_1}(x), \mu^+_{A_2}(x)), \max(\mu^+_{A_1}(y), \mu^+_{A_2}(y))) \\
&=& \min((\mu^+_{A_1} \cup \mu^+_{A_2})(x), (\mu^+_{A_1} \cup \mu^+_{A_2})(y)).
\end{eqnarray*}
If $xy\in E_1$ and $xy\not\in{E_2}$, then
\[ (\mu^-_{B_1} \cup \mu^-_{B_2} )(xy)\leq \min((\mu^-_{A_1} \cup \mu^-_{A_2})(x),
(\mu^-_{A_1} \cup \mu^-_{A_2})(y)), \]
\[ (\mu^+_{B_1} \cup \mu^+_{B_2} )(xy)\leq \min((\mu^+_{A_1} \cup \mu^+_{A_2})(x),
(\mu^+_{A_1} \cup \mu^+_{A_2})(y)). \]
If $xy\in E_2$ and $xy\not\in{E_1}$, then
\[ (\mu^-_{B_1} \cup \mu^-_{B_2} )(xy)\leq \min((\mu^-_{A_1} \cup \mu^-_{A_2})(x),
(\mu^-_{A_1} \cup \mu^-_{A_2})(y)), \]
\[ (\mu^+_{B_1} \cup \mu^+_{B_2} )(xy)\leq \min((\mu^+_{A_1} \cup \mu^+_{A_2})(x),
(\mu^+_{A_1} \cup \mu^+_{A_2})(y)). \]
This completes the proof.
\end{proof}
\begin{definition}\label{D-312}
The {\it join $G_1 + G_2=(A_1 + A_2, B_1 + B_2)$ of two
interval-valued fuzzy graphs} $G_1$ and $G_2$ of the graphs
$G^*_1$ and $G^*_2$ is defined as follows:
\begin{itemize}
\item [(A)] \
$\left\{\begin{array}{ll}
(\mu^-_{A_1}+\mu^-_{A_2})(x)=(\mu^-_{A_1}\cup\mu^-_{A_2})(x)\\
(\mu^+_{A_1} + \mu^+_{A_2})(x)=(\mu^+_{A_1}\cup\mu^+_{A_2})(x)
\end{array}\right.$ \ \ \
if $x\in V_1\cup V_2$,
\item[(B)] \
$\left\{\begin{array}{ll}
(\mu^-_{B_1}+\mu^-_{B_2})(xy)=(\mu^-_{B_1}\cup\mu^-_{B_2})(xy)\\
(\mu^+_{B_1} + \mu^+_{B_2})(xy)=(\mu^+_{B_1} \cup\mu^+_{B_2})(xy)
\end{array}\right.$ \ \ \
if $xy \in E_1 \cap E_2$,
\item [(C)] \
$\left\{\begin{array}{ll}
(\mu^-_{B_1}+\mu^-_{B_2})(xy)=\min(\mu^-_{A_1}(x), \mu^-_{A_2}(y))\\
(\mu^+_{B_1} + \mu^+_{B_2})(xy)= \min(\mu^+_{A_1}(x),\mu^+_{A_2}(y))
\end{array}\right.$ \ \ \
if $xy\in E'$, where $E'$ is the set of all edges joining the
nodes of $V_1$ and $V_2$.
\end{itemize}
\end{definition}
\begin{proposition}\label{P-313}
The join of interval-valued fuzzy graphs is an interval-valued
fuzzy graph.
\end{proposition}
\begin{proof}
Let $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ be interval-valued fuzzy
graphs of $G^*_1$ and $G^*_2$, respectively. We prove that
$G_1+G_2=(A_1+A_2, B_1+B_2)$ is an interval-valued fuzzy graph of
the graph $G_1^*+G_2^*$. In view of Proposition \ref{P-311} it is
sufficient to verify the case when $xy \in E'$. In this case we
have
\begin{eqnarray*}
(\mu^-_{B_1} + \mu^-_{B_2} )(xy) &=& \min(\mu^-_{A_1}(x), \mu^-_{A_2}(y)) \\
&\leq& \min( (\mu^-_{A_1} \cup \mu^-_{A_2})(x),(\mu^-_{A_1} \cup \mu^-_{A_2})(y)) \\
&=& \min((\mu^-_{A_1} + \mu^-_{A_2})(x), (\mu^-_{A_1} +
\mu^-_{A_2})(y)),\\[4pt]
(\mu^+_{B_1} + \mu^+_{B_2} )(xy) &=& \min(\mu^+_{A_1}(x), \mu^+_{A_2}(y)) \\
&\leq& \min( (\mu^+_{A_1} \cup \mu^+_{A_2})(x),(\mu^+_{A_1} \cup \mu^+_{A_2})(y)) \\
&=& \min((\mu^+_{A_1} + \mu^+_{A_2})(x), (\mu^+_{A_1} + \mu^+_{A_2})(y)).
\end{eqnarray*}
This completes the proof.
\end{proof}
\begin{proposition}\label{P-314} Let $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$
be crisp graphs with $V_1 \cap V_2 =\emptyset$. Let $A_1$,
$A_2$, $B_1$ and $B_2$ be interval-valued fuzzy subsets of $V_1$,
$V_2$, $E_1$ and $E_2$, respectively. Then $G_1 \cup G_2=(A_1
\cup A_2, B_1 \cup B_2)$ is an interval-valued fuzzy graph of
$G_1^*\cup G_2^*$ if and only if $G_1=(A_1, B_1)$ and $G_2=(A_2,
B_2)$ are interval-valued fuzzy graphs of $G^*_1$ and $G^*_2$,
respectively.
\end{proposition}
\begin{proof}
Suppose that $G_1 \cup G_2=(A_1 \cup A_2, B_1 \cup B_2)$ is an
interval-valued fuzzy graph of $G_1^*\cup G_2^*$. Let $xy \in
E_1$. Then $xy \notin E_2$ and $x,y\in V_1-V_2$. Thus
\begin{eqnarray*}
\mu^-_{B_1}(xy) &=& (\mu^-_{B_1} \cup \mu^-_{B_2})(xy) \\
&\leq& \min((\mu^-_{A_1} \cup \mu^-_{A_2})(x), (\mu^-_{A_1} \cup
\mu^-_{A_2})(y)) \\
&=& \min(\mu^-_{A_1}(x), \mu^-_{A_1}(y)),\\
\mu^+_{B_1}(xy) &=& (\mu^+_{B_1} \cup \mu^+_{B_2})(xy) \\
&\leq& \min((\mu^+_{A_1} \cup \mu^+_{A_2})(x), (\mu^+_{A_1} \cup
\mu^+_{A_2})(y)) \\
&=& \min(\mu^+_{A_1}(x), \mu^+_{A_1}(y)).
\end{eqnarray*}
This shows that $G_1=(A_1, B_1)$ is an interval-valued fuzzy
graph. Similarly, we can show that $G_2=(A_2, B_2)$ is an
interval-valued fuzzy graph.
The converse statement is given by Proposition \ref{P-311}.
\end{proof}
As a consequence of Propositions \ref{P-313} and \ref{P-314} we
obtain
\begin{proposition}
Let $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ be crisp graphs and
let $V_1 \cap V_2 =\emptyset$. Let $A_1$, $A_2$, $B_1$ and
$B_2$ be interval-valued fuzzy subsets of $V_1$, $V_2$, $E_1$ and
$E_2$, respectively. Then $G_1 + G_2=(A_1 + A_2, B_1 + B_2)$ is
an interval-valued fuzzy graph of $G_1^* + G_2^*$ if and only if $G_1=(A_1,
B_1)$ and $G_2=(A_2, B_2)$ are interval-valued fuzzy graphs of
$G^*_1$ and $G^*_2$, respectively.
\end{proposition}
\section{Isomorphisms of interval-valued fuzzy graphs}
In this section we characterize various types of (weak)
isomorphisms of interval-valued fuzzy graphs.
\begin{definition}
Let $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ be two interval-valued
fuzzy graphs. A {\it homomorphism} $f:G_1 \to G_2$ is a mapping
$f:V_1 \to V_2$ such that
\begin{itemize}
\item [\rm (a)] \ $\mu^-_{A_1}(x_1)\leq \mu^-_{A_2}(f(x_1))$, \ \ \ $\mu^+_{A_1}(x_1)\leq \mu^+_{A_2}(f(x_1))$,
\item [\rm (b)] \ $\mu^-_{B_1}(x_1y_1)\leq \mu^-_{B_2}(f(x_1)f(y_1))$, \ \ \ $\mu^+_{B_1}(x_1y_1)\leq \mu^+_{B_2}(f(x_1)f(y_1))$
\end{itemize}
for all $x_1 \in V_1$, $x_1y_1 \in E_1$.
\end{definition}
A bijective homomorphism with the property
\begin{itemize}
\item [\rm (c)] \ $\mu^-_{A_1}(x_1)= \mu^-_{A_2}(f(x_1))$, \ \ \ $\mu^+_{A_1}(x_1)=\mu^+_{A_2}(f(x_1))$,
\end{itemize}
is called a {\it weak isomorphism}. A weak isomorphism preserves
the weights of the nodes but not necessarily the weights of the
arcs.
A bijective homomorphism preserving the weights of the arcs but
not necessarily the weights of nodes, i.e., a bijective
homomorphism $f:G_1 \to G_2$ such that
\begin{itemize}
\item [\rm (d)] $\mu^-_{B_1}(x_1y_1)= \mu^-_{B_2}(f(x_1)f(y_1))$, $\mu^+_{B_1}(x_1y_1) = \mu^+_{B_2}(f(x_1)f(y_1))$
\end{itemize}
for all $x_1y_1 \in E_1$ is called a {\it weak co-isomorphism}.
A bijective mapping $f:G_1 \to G_2$ satisfying $(c)$ and $(d)$ is
called an {\it isomorphism}.
\begin{example}
Consider graphs $G^*_1=(V_1, E_1)$ and $G^*_2=(V_2, E_2)$ such
that $V_1=\{a_1, b_1\}$, $V_2=\{a_2, b_2\}$, $E_1= \{a_1 b_1\}$
and $E_2= \{a_2 b_2\}$. Let $A_1$, $A_2$, $B_1$ and $B_2$ be
interval-valued fuzzy subsets defined by
\[
A_1=< (\frac{a_1}{0.2}, \frac{b_1}{0.3}), (\frac{a_1}{0.5},
\frac{b_1}{0.6}) >, \ \ \ B_1=<\frac{a_1b_1}{0.1},
\frac{a_1b_1}{0.3}>,
\]
\[A_2=<(\frac{a_2}{0.3}, \frac{b_2}{0.2}), (\frac{a_2}{0.6}, \frac{b_2}{0.5}) >,
\ \ \ B_2=<\frac{a_2b_2}{0.1},\frac{a_2b_2}{0.4}>.
\]
Then, as it is easy to see, $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$
are interval-valued fuzzy graphs of $G^*_1$ and $G^*_2$,
respectively. The map $f:V_1\to V_2$ defined by $f(a_1)=b_2$ and
$f(b_1)=a_2$ is a weak isomorphism but it is not an isomorphism.
\end{example}
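The verification in this example amounts to checking finitely many inequalities, which the following Python sketch (illustrative only) does on the data above: $f$ is a homomorphism that preserves the node weights but not the arc weights, hence a weak isomorphism that is not an isomorphism.
\begin{verbatim}
def is_homomorphism(f, A1, B1, A2, B2):
    vert_ok = all(A1[x][0] <= A2[f[x]][0] and A1[x][1] <= A2[f[x]][1] for x in A1)
    edge_ok = all(B1[e][0] <= B2[frozenset(f[v] for v in e)][0] and
                  B1[e][1] <= B2[frozenset(f[v] for v in e)][1] for e in B1)
    return vert_ok and edge_ok

A1 = {'a1': (0.2, 0.5), 'b1': (0.3, 0.6)}
B1 = {frozenset({'a1', 'b1'}): (0.1, 0.3)}
A2 = {'a2': (0.3, 0.6), 'b2': (0.2, 0.5)}
B2 = {frozenset({'a2', 'b2'}): (0.1, 0.4)}
f = {'a1': 'b2', 'b1': 'a2'}
assert is_homomorphism(f, A1, B1, A2, B2)
# node weights are preserved (condition (c)) ...
assert all(A1[x] == A2[f[x]] for x in A1)
# ... but the arc weights are not, so f is not an isomorphism
assert any(B1[e] != B2[frozenset(f[v] for v in e)] for e in B1)
\end{verbatim}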
\begin{example}
Let $G^*_1$ and $G^*_2$ be as in the previous example and let
$A_1$, $A_2$, $B_1$ and $B_2$ be interval-valued fuzzy subsets
defined by
\[
A_1=< (\frac{a_1}{0.2}, \frac{b_1}{0.3}), (\frac{a_1}{0.4},
\frac{b_1}{0.5}) >, \ \ \ B_1=<\frac{a_1b_1}{0.1},
\frac{a_1b_1}{0.3}>,
\]
\[A_2=<(\frac{a_2}{0.4}, \frac{b_2}{0.3}), (\frac{a_2}{0.5}, \frac{b_2}{0.6}) >,
\ \ \ B_2=<\frac{a_2b_2}{0.1}, \frac{a_2b_2}{0.3}>.
\]
Then $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ are interval-valued
fuzzy graphs of $G^*_1$ and $G^*_2$, respectively. The map
$f:V_1\to V_2$ defined by $f(a_1)=b_2$ and $f(b_1)=a_2$ is a weak
co-isomorphism but it is not an isomorphism.
\end{example}
\begin{proposition}
An isomorphism between interval-valued fuzzy graphs is an
equivalence relation.
\end{proposition}
\noindent{\bf Problem.} Prove or disprove that weak isomorphism
(co-isomorphism) between interval-valued fuzzy graphs is a partial
ordering relation.
\section{Interval-valued fuzzy complete graphs}
\begin{definition}
An interval-valued fuzzy graph $G=(A, B)$ is called {\it complete}
if
\[\mu^-_B(xy)= \min(\mu^-_A(x), \mu^-_A(y)) ~~~ {\rm
and}~~~\mu^+_B(xy)= \min(\mu^+_A(x), \mu^+_A(y))~~~{\rm for~
all}~~ xy\in E.
\]
\end{definition}
\begin{example}
Consider a graph $G^*=(V, E)$ such that $V=\{x, y, z\}$, $E=
\{xy, yz, zx\}$. If $A$ and $B$ are interval-valued fuzzy subsets
defined by
\[A=< (\frac{x}{0.2}, \frac{y}{0.3}, \frac{z}{0.4}), (\frac{x}{0.4}, \frac{y}{0.5}, \frac{z}{0.5})
>, \]
\[B=< (\frac{xy}{0.2}, \frac{yz}{0.3}, \frac{zx}{0.2}), (\frac{xy}{0.4}, \frac{yz}{0.5}, \frac{zx}{0.4})
>, \]
then $G=(A,B)$ is an interval-valued fuzzy complete graph of
$G^*$.
\end{example}
As a consequence of Proposition \ref{P-38} we obtain
\begin{proposition} If $G=(A,B)$ is an interval-valued fuzzy complete graph, then
$G[G]$ is also an interval-valued fuzzy complete graph.
\end{proposition}
\begin{definition}
The {\it complement} of an interval-valued fuzzy complete graph
$G=(A,B)$ of $G^*=(V,E)$ is an interval-valued fuzzy complete
graph $\overline{G}=(\overline{A},\overline{B})$ on
$\overline{G^*}=(V,\overline{E})$, where
$\overline{A}=A=[\mu^-_A,\mu^+_A]$ and
$\overline{B}=[\overline{\mu^-}_B, \overline{\mu^+}_B]$ is defined
by
$$
\begin{aligned}\overline{\mu^-_B}(x y) & = \begin{cases}
0 & \hbox{if \ $\mu^-_B(x y) >0$,} \\
\min( \mu^-_A(x), \mu^-_A(y)) & \hbox{if \ $\mu^-_B(x y)=0$, }\\
\end{cases}
\end{aligned}$$
$$ \begin{aligned}\overline{\mu^+_B}(x y) & = \begin{cases}
0 & \hbox{if \ $\mu^+_B(x y) >0$,} \\
\min( \mu^+_A(x), \mu^+_A(y)) & \hbox{if \ $\mu^+_B(x y)=0$. }\\
\end{cases}
\end{aligned}$$
\end{definition}
\begin{definition}
An interval-valued fuzzy complete graph $G=(A, B)$ is called {\it
self complementary} if $\overline{\overline{G}}=G$.
\end{definition}
\begin{example}
Consider a graph $G^*=(V, E)$ such that $V=\{a, b, c\}$, $E=
\{ab, bc\}$. Then an interval-valued fuzzy graph $G=(A,B)$, where
\[
A=<(\frac{a}{0.1},\frac{b}{0.2},\frac{c}{0.3}),(\frac{a}{0.3},\frac{b}{0.4},
\frac{c}{0.5})>,
\]
\[
B=<(\frac{ab}{0.1},\frac{bc}{0.2}),(\frac{ab}{0.3},\frac{bc}{0.4})>,
\]
is self complementary.
\end{example}
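The claim in this example can be checked directly from the definition of the complement. The following Python sketch (illustrative only; a non-edge $xy$ is represented by the value $[0,0]$, which is our convention) computes the complement twice on the data above and confirms that $\overline{\overline{G}}=G$.
\begin{verbatim}
from itertools import combinations

def complement(A, B):
    """Complement as defined above; a non-edge xy carries the B value (0, 0)."""
    Bbar = {}
    for x, y in combinations(sorted(A), 2):
        e = frozenset({x, y})
        lo, hi = B.get(e, (0, 0))
        Bbar[e] = (0 if lo > 0 else min(A[x][0], A[y][0]),
                   0 if hi > 0 else min(A[x][1], A[y][1]))
    return Bbar

A = {'a': (0.1, 0.3), 'b': (0.2, 0.4), 'c': (0.3, 0.5)}
B = {frozenset({'a', 'b'}): (0.1, 0.3), frozenset({'b', 'c'}): (0.2, 0.4)}
Bbar = complement(A, B)
Bbarbar = complement(A, Bbar)
# the double complement restores the original edge values (non-edges stay at (0, 0))
assert all(Bbarbar[e] == B.get(e, (0, 0)) for e in Bbarbar)
\end{verbatim}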
\begin{proposition}
In a self complementary interval-valued fuzzy complete graph
$G=(A,B)$ we have
\medskip$a)$ \ $\sum\limits_{x\neq y} \mu^-_B(x
y)=\sum\limits_{x\neq y} \min( \mu^-_A(x), \mu^-_A(y))$,
$b)$ \ $\sum\limits_{x\neq y} \mu^+_B(x y)=\sum\limits_{x\neq y}
\min( \mu^+_A(x), \mu^+_A(y))$.
\end{proposition}
\begin{proof}
Let $G=(A,B)$ be a self complementary interval-valued fuzzy
complete graph. Then there exists an automorphism $f:V \to V$ such
that $\mu^-_A(f(x))=\mu^-_A(x)$, $\mu^+_A(f(x))=\mu^+_A(x)$,
$\overline{\mu^-_B}(f(x)f(y))=\mu^-_B(xy)$ and
$\overline{\mu^+_B}(f(x)f(y))=\mu^+_B(xy)$ for all $x,y\in V$.
Hence, for $x,y\in V$ we obtain
\[
\mu^-_B(x
y)=\overline{\mu^-}_B(f(x)f(y))=\min(\mu^-_A(f(x)),\mu^-_A(f(y)))=\min(\mu^-_A(x),
\mu^-_A(y)),
\]
which implies $a)$. The proof of $b)$ is analogous.
\end{proof}
\begin{proposition}
Let $G=(A, B)$ be an interval-valued fuzzy complete graph. If $ \mu^-_B(x
y)= \min( \mu^-_A(x), \mu^-_A(y))$ and $ \mu^+_B(x y)= \min(
\mu^+_A(x), \mu^+_A(y))$ for all $x$, $y$ $\in V$, then $G$ is
self complementary.
\end{proposition}
\begin{proof}
Let $G=(A, B)$ be an interval-valued fuzzy complete graph such
that $ \mu^-_B(x y)= \min( \mu^-_A(x), \mu^-_A(y))$ and $
\mu^+_B(x y)= \min( \mu^+_A(x), \mu^+_A(y))$ for all $x$, $y$
$\in V$. Then
$G=\overline{G}$ under the identity map $I: V \to V$. So
$\overline{\overline{G}}=G$. Hence $G$ is self complementary.
\end{proof}
\begin{proposition}
Let $G_1=(A_1, B_1)$ and $G_2=(A_2, B_2)$ be interval-valued
fuzzy complete graphs. Then $G_1 \cong G_2$ if and only if
$\overline{G}_1 \cong \overline{G}_2$.
\end{proposition}
\begin{proof}
Assume that $G_1$ and $G_2$ are isomorphic. Then there exists a
bijective map $f: V_1 \to V_2$ satisfying
\[ \mu^-_{A_1}(x)= \mu^-_{A_2}(f(x)), ~ \mu^+_{A_1}(x)= \mu^+_{A_2}(f(x)) ~~{\rm for ~ all } ~x \in V_1,\]
\[ \mu^-_{B_1}(x y)= \mu^-_{B_2}(f(x)f(y)), ~ \mu^+_{B_1}(xy)= \mu^+_{B_2}(f(x)f(y)) ~~{\rm for ~ all } ~xy \in E_1.\]
By the definition of the complement, we have
\[ \overline{\mu^-}_{B_1}(xy)=\min(\mu^-_{A_1}(x), \mu^-_{A_1}(y))=\min(\mu^-_{A_2}(f(x)), \mu^-_{A_2}(f(y)))=\overline{\mu^-}_{B_2}(f(x)f(y)), \]
\[ \overline{\mu^+}_{B_1}(xy)=\min(\mu^+_{A_1}(x), \mu^+_{A_1}(y))=\min(\mu^+_{A_2}(f(x)), \mu^+_{A_2}(f(y)))=\overline{\mu^+}_{B_2}(f(x)f(y)) ~~{\rm for ~ all} ~xy \in E_1. \]
Hence $\overline{G}_1\cong \overline{G}_2$.
The proof of the converse part is straightforward.
\end{proof}
\section{Conclusions}
It is well known that interval-valued fuzzy sets
constitute a generalization of the notion of fuzzy sets. Interval-valued fuzzy models give more precision, flexibility and compatibility to a system than classical and fuzzy models do. For this reason we have
introduced interval-valued fuzzy graphs and have presented
several of their properties in this paper. Further study of interval-valued fuzzy graphs may be extended to the following projects:
\begin{itemize}
\item an application of interval-valued fuzzy graphs in database theory
\item an application of interval-valued fuzzy graphs in an expert system
\item an application of interval-valued fuzzy graphs in neural networks
\item an interval-valued fuzzy graph method for finding the
shortest paths in networks
\end{itemize}
\noindent{\bf\large Acknowledgement.} The authors are thankful to the referees for their valuable comments and suggestions.
\end{document} |
\begin{document}
\begin{abstract}
We give an explicit (new) morphism of modules between $H^*_T(G/P) \otimes H^*_T(P/B)$
and $H^*_T(G/B)$ and prove (the known result) that the two modules are isomorphic. Our map identifies submodules of the cohomology of the flag variety that are isomorphic to each of $H^*_T(G/P)$ and $H^*_T(P/B)$. With this identification, the map is simply the product within the ring $H^*_T(G/B)$. We use this map in two ways. First we describe module bases for $H^*_T(G/B)$ that are different from traditional Schubert classes and from each other. Second we analyze a $W$-representation on $H^*_T(G/B)$ via restriction to subgroups $W_P$. In particular we show that the character of the Springer representation on $H^*_T(G/B)$ is a multiple of the restricted representation of $W_P$ on $H^*_T(P/B)$.
\end{abstract}
\title{A module isomorphism between $H^*_T(G/P)\otimes H^*_T(P/B)$ and $H^*_T(G/B)$}
\section{Introduction}
{\color{black}In this paper we construct a large family of distinct bases for the equivariant cohomology of the generalized flag variety. To do this we give an explicit formula for the Leray-Hirsch isomorphism for the fibration of flag varieties $P/B \rightarrow G/B \rightarrow G/P$.} The Leray-Hirsch theorem says
\[H^*(P/B) \otimes H^*(G/P) \cong H^*(G/B)\]
but does not provide an explicit map. In fact the procedure in the Leray-Hirsch theorem sends classes through a series of quotients, isomorphisms, and identifications in a spectral sequence, so explicitly writing the output is challenging. Indeed, a priori the output of the Leray-Hirsch map is a basis for the cohomology of the total space as a module over the cohomology of the base space. We bypass these subtleties by choosing explicit bases of Schubert classes for each cohomology module as a submodule of $H^*(G/B)$ and proving the isomorphism directly.
Moreover we solve this problem in equivariant rather than ordinary cohomology. The equivariant cohomology of a variety is an enhanced version of the ordinary cohomology ring that records information about an underlying group action on the variety. Certain computational tools can make equivariant cohomology easier to construct than ordinary cohomology, as well as permitting us to recover ordinary cohomology. (Knutson and Tao's work computing the structure constants of the equivariant and ordinary cohomology ring of $G(k,n)$ is one example of this principle \cite{Knutson-Tao}.) These computational tools can be used in many important cases, including generalized flag varieties $G/B$ and partial flag varieties $G/P$ both with the left-multiplication action of the torus $T \subseteq B \subseteq P$.
We consider a presentation of the equivariant cohomology $H^*_T(G/B)$ due to Kostant and Kumar \cite{KostantKumar} and use it to construct a module isomorphism between the tensor product $H^*_T(G/P) \otimes H^*_T(P/B)$
and $H^*_T(G/B)$ all treated as modules over $\mathbb{C}[\mathfrak{t}^*]$. The map naturally descends to a module isomorphism on the ordinary cohomology. The fact that these modules are isomorphic is not new \cite[Theorem 2.1]{Dou04}.
But in Schubert calculus people want {\em very} explicit answers---even down to specific numbers and elementary combinatorial formulas. The main result of this paper is a pleasingly elementary identification of classes inside $H^*_T(G/B)$ that realize Leray-Hirsch and provide a useful tool for fields like Schubert calculus (see Theorem \ref{thm: distinct bases}).
To construct our map, we identify each factor in the tensor product $H^*_T(G/P) \otimes H^*_T(P/B)$
with a submodule of $H^*_T(G/B)$. The bilinear module isomorphism is multiplication of classes inside the ring $H^*_T(G/B)$. More precisely our main theorem states:
\begin{theorem}
\label{thm: main theorem}
Identify $H^*_T(G/P)$ and $H^*_T(P/B)$ isomorphically with submodules of $H^*_T(G/B)$ as described in Section \ref{section: submodules of G/B}. Then the multiplication map
\[ p \otimes q \mapsto pq\]
induces a bilinear isomorphism of modules
\[H^*_T(G/P) \otimes H^*_T(P/B) \cong H^*_T(G/B).\]
\end{theorem}
Guillemin-Sabatini-Zara realize this isomorphism differently for a class of varieties called {\em GKM spaces}, proving a combinatorial version of the Leray-Hirsch theorem for fibrations of graphs associated to GKM spaces \cite{GKMfiberbundles}. As an application, they show that their results apply to the fibration $P/B \to G/B \to G/P$ in classical types \cite[Section 5]{GKMfiberbundles}, and they apply these results to a number of specific Grassmannians and symplectic varieties \cite{Balancedfiberbundles}. Establishing their hypotheses for $P/B \to G/B \to G/P$ in classical types takes longer than our direct proof for all types. {\color{black} Indeed in their calculations they use an interesting basis whose elements are stable under an action of the Weyl group. In a colloquial sense Guillemin-Sabatini-Zara's basis is the ``opposite'' of our flow-up classes, which are closer to the classes constructed from Morse flows or Bialynicki-Birula decompositions. }
We give two applications of our result in Section \ref{applications}. First our explicit module isomorphism gives rise to a large collection of module bases of $H_T^*(G/B)$ indexed by the parabolic subgroups $B\subsetneq P\subsetneq G$. An immediate corollary of the isomorphism is that if $\mathcal{B}_{G/P}$ is any module basis for $H^*_T(G/P)$ and $\mathcal{B}_{P/B}$ is any module basis for $H^*_T(P/B)$ then the products $\{bb': b \in \mathcal{B}_{G/P}, b' \in \mathcal{B}_{P/B}\}$ form a module basis for $H^*_T(G/B)$.
{\color{black}\begin{theorem}
Fix two distinct parabolic subgroups $P \neq Q$ with $B \subsetneq P, Q\subsetneq G$. Let $\mathcal{B}_P$ denote the product basis of $H^*_T(G/B)$ obtained from the equivariant Schubert bases for $H^*_T(G/P)$ and $H^*_T(P/B)$, and similarly for $\mathcal{B}_Q$. Then the bases $\mathcal{B}_P$ and $\mathcal{B}_Q$ are distinct.
\end{theorem}
Theorem \ref{thm: distinct bases} proves this result for connected $G$ and Corollary \ref{cor: disconnected G} generalizes the theorem to disconnected $G$. }One way to state the core problem of Schubert calculus is: analyze combinatorially and explicitly the cohomology ring of a generalized flag variety $G/B$ in terms of the basis of Schubert classes. Thus these parabolic bases allow us to optimize the choice of basis to make particular computations in Schubert calculus as simple as possible.
As another application we show how these bases can be used to analyze a well-known action of $W$ on $H^*_T(G/B)$ called the Springer representation. In particular Theorem \ref{thm: kostant kumar character} says that the character of the restricted action of $W_P$ on $H^*_T(G/B)$ is the scalar multiple $|W^P| \cdot \chi$ where $\chi$ is the character of the $W_P$--representation on $H^*_T(P/B)$.
In Section \ref{main theorem} we prove Theorem \ref{thm: main theorem} in the equivariant setting. Our proofs use what many call GKM theory, after Goresky-Kottwitz-MacPherson's algebraic--combinatorial description of equivariant cohomology rings \cite{GKMtheory}. The GKM presentation comes with an explicit formula for the Schubert classes that is due to Billey \cite[Theorem 4]{Billey} and Anderson-Jantzen-Soergel \cite[Remark p. 298]{A-J-S}. These tools permit an elegant combinatorial and linear-algebraic proof of Theorem \ref{thm: main theorem}. The result then descends to ordinary cohomology (see Corollary \ref{cor: ordinary cohomology}).
We are grateful to J.~Matthew Douglass for showing us his work on both equivariant and ordinary cohomology isomorphism and inspiring this proof; to Alexander Yong for useful discussions; and to the anonymous referee for very helpful comments.
\section{Background}
We denote by $G$ a complex reductive linear algebraic group and fix a Borel subgroup $B$. We denote the maximal torus in $B$ by $T$ and the Weyl group associated to $G/B$ by $W$. Let $P$ be any parabolic subgroup containing $B$.
Let $W_P$ denote the subgroup of $W$ associated to $P$. This is also a Weyl group, specifically the Weyl group of $P/B$. For elements $w \in W$ the length $\ell(w)$ refers to the minimal number of simple reflections required to write $w$ as a word in the generators $\{s_i: i=1,2,\ldots,n\}$ of $W$. Let $W^P$ denote the subset of minimal-length coset representatives of $W/W_P$. The following fact is so essential to our work that we state it explicitly here; many texts give proofs, including Bj\"{o}rner-Brenti \cite[Lemma 2.4.3]{Bjorner-Brenti}.
\begin{proposition}
\label{prop last letter}
Every minimal-length word for each element $v \in W^P$ ends in a simple reflection $s_i \not \in W_P$.
\end{proposition}
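For example, in type $A_2$ with $W_P = \langle s_2 \rangle$ the minimal-length coset representatives are $W^P = \{e, s_1, s_2s_1\}$, and the reduced words $s_1$ and $s_2s_1$ both end in the simple reflection $s_1 \not\in W_P$.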
\subsection{Restricting to fixed points}\label{section: submodules of G/B}
We use a presentation of torus-equivariant cohomology that is often referred to as {\em GKM theory}, after Goresky, Kottwitz, and MacPherson \cite{GKMpaper}, though key ideas are due to many others \cite{ AtiyahBott, Kirwan, Chang-Skjelbred} (see \cite[Section 1.7]{GKMpaper} for a fuller history). For suitable spaces $X$ the inclusion map of fixed points $X^T \hookrightarrow X$ induces an injection on cohomology $H^*_T(X) \hookrightarrow H^*_T(X^T)$. Straightforward algebraic conditions determine the image of the injection explicitly \cite[Lemma 2.3]{Chang-Skjelbred}, though we do not use them in this manuscript.
Through this map $H^*_T(X) \hookrightarrow H^*_T(X^T)$ we think of equivariant classes $p \in H^*_T(X)$ as collections of polynomials in $\bigoplus_{v \in X^T} \mathbb{C}[\mathfrak{t^*}] \cong H^*_T\left( X^T \right)$. We use functional notation to describe the elements $p \in H^*_T(X)$ meaning that for each $v \in X^T$ we have $p(v) \in \mathbb{C}[\mathfrak{t}^*]$.
GKM theory applies to varieties like $G/B$, $G/P$, and $P/B$ that have only even-dimensional ordinary cohomology \cite[Theorem 14.1(1)]{GKMpaper}. In fact each of $G/B$, $G/P$, and $P/B$ is a CW-complex whose cells are Schubert cells indexed by the elements of $W$, $W^P$, and $W_P$ respectively. The fixed point sets of $G/B$, $G/P$, and $P/B$ are also naturally isomorphic to $W$, $W^P$, and $W_P$.
As modules over $\mathbb{C}[\mathfrak{t}^*]$ the equivariant cohomology of $G/B$, $G/P$, and $P/B$ each have a basis of (equivariant) Schubert classes that are
again indexed by the elements of $W$, $W^P$, and $W_P$ respectively.
The restrictions $\sigma_w(u)$ of each Schubert class $\sigma_w$ to each fixed point $u$ are given explicitly by what we call {\em Billey's formula} (see Section \ref{section: Billey's formula}). The formula is the same in all three cases $G/B$, $G/P$, and $P/B$. Thus the map that sends the Schubert class $\sigma_w \in H^*_T(G/P)$ to the corresponding Schubert class $\sigma_w \in H^*_T(G/B)$ is a module isomorphism onto its image, and similarly for $P/B$. We identify the images of $H^*_T(G/P)$ and $H^*_T(P/B)$ in $H^*_T(G/B)$ with the modules $H^*_T(G/P)$ and $H^*_T(P/B)$ themselves, so
\[H^*_T(G/P) \cong \textup{span}_{\mathbb{C}[\mathfrak{t}^*]}\langle \sigma_v: v \in W^P \rangle \subseteq H^*_T(G/B)\]
and
\[H^*_T(P/B) \cong \textup{span}_{\mathbb{C}[\mathfrak{t}^*]}\langle \sigma_w: w \in W_P \rangle \subseteq H^*_T(G/B).\]
{\bf For $P/B$ this inclusion is only a homomorphism of modules and not a homomorphism of rings.}
The map $H^*_T(G/P) \otimes H^*_T(P/B) \rightarrow H^*_T(G/B)$ that we consider is the ordinary product of classes inside $H^*_T(G/B)$.
\subsection{Billey's formula} \label{section: Billey's formula}
This section describes an explicit combinatorial formula for evaluating the polynomial $\sigma_v(u)$ in $\mathbb{C}[\mathfrak{t}^*]$. Anderson, Jantzen, and Soergel originally discovered this formula~\cite{A-J-S}; Billey independently found it as well~\cite[Theorem 4]{Billey}. While proven originally for $G/B$ it also holds for $G/P$ \cite[Theorem 7.1]{TymoczkoG/P} and $P/B$ \cite[Corollary 11.3.14]{Kum02}. Fix a reduced word for $u=s_{b_1}s_{b_2} \cdots s_{b_{\ell(u)}}$ and define $\mathbf{r}(\mathbf{i},{u})=s_{b_1}s_{b_2} \cdots s_{b_{i-1}}(\alpha_{b_i}).$ Then
\begin{equation}
\label{eq:Billeys}
\sigma_v(u)= \sum \limits_{\substack{\text{reduced words} \\ v=s_{b_{j_1}}s_{b_{j_2}} \cdots s_{b_{j_{\ell(v)}}} \\ j_1 < j_2 < \cdots < j_{\ell(v)} }} \left( \prod \limits_{i=1}^{\ell(v)} \mathbf{r}(\mathbf{j_i},{u}) \right).
\end{equation}
\begin{lemma}
\label{prop:billey} The polynomial $\sigma_v(u)$ has the following properties:
\begin{enumerate}
\item The polynomial $\sigma_v(u)$ does not depend on the choice of reduced word for $u$ \cite[Theorem 4]{Billey}.
\item The polynomial $\sigma_v(u)$ is homogeneous of degree $\ell(v)$ \cite[Corollary 5.2]{Billey}.
\item \label{prop: non-zero} The polynomial $\sigma_v(u) \neq 0$ if and only if $v \leq u$ \cite[Proposition 4.24]{KostantKumar}.
\item For any $u$ we have $\sigma_e(u)=1$.
\end{enumerate}
\end{lemma}
\begin{example}
Let $G/B$ have Weyl group $W=A_2$ and let $u=s_1s_2s_1$ and $v=s_1$. The word $v$ is found as a subword of $s_1s_2s_1$ in the two places $\mathbf{s_1} s_2s_1$ and $s_1s_2\mathbf{s_1}$.
$$\sigma_v(u)=\mathbf{r}(\mathbf{1},s_1s_2s_1) + \mathbf{r}(\mathbf{3},s_1s_2s_1)= \alpha_1 + s_1s_2(\alpha_1)=\alpha_1+ \alpha_2$$
\end{example}
\section{Main Theorem}
\label{main theorem}
This section proves the main theorem of the paper. First in the equivariant setting, and then for ordinary cohomology, we prove that the module map from $H^*(G/P)\otimes H^*(P/B)$
to $H^*(G/B)$ induced by $p \otimes q \mapsto pq$ is a bilinear isomorphism of modules. We show this first in the equivariant case by proving that it takes a module basis for $H_T^*(G/P)\otimes H_T^*(P/B)$
to a module basis for $H_T^*(G/B)$. The non-equivariant case follows from the equivariant case.
\begin{theorem}
\label{thm: lin ind}
The set of Schubert class products $\{\sigma_v \sigma_w: v \in W^P, w \in W_P\}$ is a linearly independent set over $\mathbb{C}[\mathfrak{t}^*]$.
\end{theorem}
In this section we will prove Theorem \ref{thm: lin ind} by arranging these products in the matrix
$$A=(\sigma_v(v'w') \sigma_w(v'w'))_{(v,w), (v',w') \in W^P \times W_P}.$$ Our notational convention is to index rows by pairs $(v,w)\in W^P\times W_P$ and columns by pairs $(v',w')$ also in $W^P\times W_P$. \\
\\
We begin by establishing an order on $W^P \times W_P$. The elements of both $W^P$ and $W_P$ are partially ordered by length; fix a total order on $W^P$ (respectively $W_P$) consistent with this partial order and extend this lexicographically to all of $W^P \times W_P$. For instance all rows and columns corresponding to pairs in $(e,W_P)$ come before any pair in $(s_i,W_P)$.\\
\\
For the remainder of this section we will consider the matrix $A$ to have rows and columns ordered as above. The proof of Theorem \ref{thm: lin ind} is given in Section \ref{proof of thm:lin ind}.
\subsection{Key lemmas}
We begin with two lemmas. The first will prove that given the above ordering of its rows and columns, the matrix $A=(\sigma_v(v'w') \sigma_w(v'w'))$ is block upper-triangular. The second lemma will construct a matrix $M\cdot vN$ where $M$ is an invertible matrix and $vN$ is known to have linearly independent rows and columns. The main theorem will then show how $A$ and $M \cdot vN$ are related.
\begin{lemma} \label{lemma: block-upper-triangular}
The matrix $A=\left(\sigma_v(v'w')\sigma_w(v'w')\right)_{(v,w), (v',w') \in W^P \times W_P}$ is block upper-triangular.
\end{lemma}
\begin{proof}
Choose $v,v' \in W^P$. Consider the entries of $A$ whose rows are indexed by pairs in $(v, W_P)$ and whose columns are indexed by pairs in $(v',W_P)$. By construction this is a square $|W_P|\times |W_P|$ block. Its entries are $(\sigma_v(v'w')\sigma_w(v'w'))$ where $w,w'$ range over all of $W_P$. As established in Proposition \ref{prop last letter}, the last letter in every reduced word for $v \in W^P$ is a simple reflection $s_i \not \in W_P$, while every letter of a reduced word for $w' \in W_P$ lies in $W_P$. Thus every reduced word for $v$ inside $v'w'$ is contained in the prefix $v'$. Therefore $\sigma_v(v'w')=\sigma_v(v')$. By Property \ref{prop: non-zero} of Billey's formula $\sigma_v(v')$ is non-zero if and only if $v\leq v'$ in the Bruhat order. Therefore whenever $\ell(v) \geq \ell(v')$ and $v \neq v'$ the entire block is zero.
\end{proof}
\begin{example}{\rm
Consider the parabolic subgroup $W_P=\langle s_2 \rangle$ in the type $A_2$ Weyl group $ \langle s_1,s_2\rangle $. The minimal coset representatives are $ W^P=\{e,s_1, s_2s_1\}$. Let $\sigma_{W_P}$ denote the collection $\{ \sigma_w: w\in W_P\}$. Then the blocks of the matrix $A$ are
$$
\bordermatrix{~ & eW_P & s_1 W_P& s_2s_1 W_P \cr
\sigma_e\sigma_{W_P} & * & * &* \cr
\sigma_{s_1}\sigma_{W_P} &0 & * &* \cr
\sigma_{s_2s_1}\sigma_{W_P} & 0 & 0& * \cr}.
$$
}\end{example}
\begin{example}{\rm This example treats pairs $v,v' \in W^P$ with the same length. Let $W_P$ be the parabolic subgroup $\langle s_3 \rangle \subset \langle s_1, s_2, s_3 \rangle$. The elements of $W^P$ with length two are $s_1s_2, s_2s_1,$ and $s_3s_2$. The restriction of the matrix $A$ to the diagonal block where $v$ and $v'$ both have length two has form:
$$\bordermatrix{~ &s_1s_2W_P & s_2s_1 W_P & s_3s_2 W_P \cr
\sigma_{s_1s_2}\sigma_{W_P} & * &0&0 \cr
\sigma_{s_2s_1}\sigma_{W_P} & 0 &*&0 \cr
\sigma_{s_3s_2}\sigma_{W_P} & 0&0&* \cr}$$
}\end{example}
In the next lemma we show that the rows of the diagonal blocks of the matrix $A$ are linearly independent. It is not immediately obvious that the matrices in this lemma are in fact the diagonal blocks; that result is part of the main theorem.
\begin{lemma}[Linear independence of diagonal blocks]\label{lemma: diagonal blocks}
Fix $v \in W^P$. Assume that the elements of $W_P$ are ordered consistently with the partial order on length. Let $M$ be the matrix defined by
$$M_{wu}=\begin{cases} \sigma_{wu^{-1}}(v) &\text{if $u$ is a suffix of $w$} \\
0 & \text{otherwise}
\end{cases}
$$
where $w,u \in W_P$. Define the matrix $N$ by $N_{u,w'}=\sigma_u(w')$ for $u,w' \in W_P$. Consider the algebra isomorphism $v: \mathbb{C}[\mathfrak{t}^*] \rightarrow \mathbb{C}[\mathfrak{t}^*]$ induced from the action $\alpha \mapsto v(\alpha)$. Denote the image of $N$ under this action of $v$ by $vN$.\\
\\
Then the rows of the matrix $M \cdot vN$ are linearly independent over $\mathbb{C}[\mathfrak{t}^*]$.
\end{lemma}
{\bf Note} that $v$ does not permute the rows or columns of $N$. For computational clarity, the isomorphism $\alpha \mapsto v(\alpha)$ is equivalent to the map $t_{\alpha} \mapsto t_{v(\alpha)}$ if one passes to $\mathfrak{t}$.
\begin{proof}
If $\ell(u) > \ell(w)$ then by construction $M_{wu}=0$. If $\ell(u)=\ell(w)$ then $M_{wu}=0$ unless $w=u$. Therefore $M$ is an upper-triangular matrix. The entries on the diagonal have the form $M_{ww}=\sigma_e(v)=1$. Since $1$ is a unit in $\mathbb{C}[\mathfrak{t}^*]$ the matrix $M$ is invertible.\\
\\
We defined $N=(\sigma_u(w'))_{u,w'\in W_P}$ to be the matrix of Schubert classes in $H^*_T(P/B)$. The rows of $N$ are the Schubert class basis for $H_T^*(P/B)$ so the rows and columns of the matrix $N$ are linearly independent. The function $v$ acts on $N$ by sending each $\alpha$ to ${v(\alpha)}$. This operation is invertible and so preserves linear independence of the matrix rows. Thus the new matrix $vN$ also has linearly independent rows.\\
\\
Since $M$ is invertible over $\mathbb{C}[\mathfrak{t}^*]$ and $vN$ has linearly independent rows over $\mathbb{C}[\mathfrak{t}^*]$ the rows of the matrix product $M \cdot vN$ are also linearly independent over $\mathbb{C}[\mathfrak{t}^*]$.
\end{proof}
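To illustrate Lemma \ref{lemma: diagonal blocks} in a small case, consider the following direct computation in type $A_2$.
\begin{example}{\rm
Let $W_P = \langle s_2 \rangle$ in the type $A_2$ Weyl group and take $v = s_1 \in W^P$, so $W_P = \{e, s_2\}$. Then $M_{ee}=M_{s_2s_2}=\sigma_e(s_1)=1$, while $M_{es_2}=0$ because $s_2$ is not a suffix of $e$ and $M_{s_2e}=\sigma_{s_2}(s_1)=0$, so $M$ is the identity matrix. Also
$$N = \begin{pmatrix} \sigma_e(e) & \sigma_e(s_2) \\ \sigma_{s_2}(e) & \sigma_{s_2}(s_2) \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & \alpha_2 \end{pmatrix} \qquad \text{and} \qquad vN = \begin{pmatrix} 1 & 1 \\ 0 & \alpha_1+\alpha_2 \end{pmatrix}.$$
Hence $M \cdot vN = vN$, which agrees with the matrix $(\sigma_w(s_1w'))_{w,w' \in W_P}$ appearing in Section \ref{proof of thm:lin ind}, since $\sigma_e(s_1)=\sigma_e(s_1s_2)=1$, $\sigma_{s_2}(s_1)=0$, and $\sigma_{s_2}(s_1s_2)=s_1(\alpha_2)=\alpha_1+\alpha_2$.
}\end{example}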
\subsection{Proof of Theorem \ref{thm: lin ind}}
\label{proof of thm:lin ind}
We now show that each of the diagonal blocks of $A$ identified in Lemma \ref{lemma: block-upper-triangular} is a scalar multiple of the matrix $M\cdot vN$ defined by Lemma \ref{lemma: diagonal blocks}. This proves that the rows of matrix $A$ are linearly independent and thus the collection of Schubert class products $\{\sigma_v\sigma_w: v\in W^P, w\in W_P\}$ is linearly independent over $\mathbb{C}[\mathfrak{t}^*]$.
\begin{proof}
Consider the matrix $A=(\sigma_v(v'w')\sigma_w(v'w'))_{(v,w), (v',w') \in W^P \times W_P}$ with rows and columns ordered lexicographically subordinate to the length partial order on $W^P$ and $W_P$ described above. \\
\\
Partition the matrix $A$ into blocks according to the pairs $v, v' \in W^P$. Lemma \ref{lemma: block-upper-triangular} proved that $A$ is block-upper-triangular with this partition. Now consider the blocks along the diagonal, namely the blocks of the form
$$(\sigma_{v}(vw')\sigma_w(vw'))_{w,w'\in W_P}= \sigma_v(v)\cdot (\sigma_w(vw'))_{w,w'\in W_P}$$
for each $v \in W^P$. Property \ref{prop: non-zero} of Billey's formula guarantees that $\sigma_v(v)$ is non-zero, so it suffices to consider the matrix $(\sigma_w(vw'))_{w,w'\in W_P}$. We will show that $$(\sigma_w(vw'))_{w,w'\in W_P}=M\cdot vN$$ where $M$ and $vN$ are the matrices of Lemma \ref{lemma: diagonal blocks}. Multiplying matrices gives
\begin{center}
\begin{tikzpicture}
\node at (-4,0) {$M \cdot vN =$};
\node at (0,0) {$\begin{pmatrix} && \\ & {M_{wu}} & \\ && \end{pmatrix} $};
\draw [decorate,decoration={brace, amplitude=5pt}] (-1.5,-.8)--(-1.5,.8);
\draw [decorate,decoration={brace, amplitude=5pt}] (-.9,1.2)--(.9,1.2);
\node at (0,1.6) {$\scriptscriptstyle\text{$u$ ranges over $W_P$}$};
\node at (-2,0) {\rotatebox[origin=c]{90}{$\scriptscriptstyle\text{$w$ ranges over $W_P$}$}};
\draw [fill] (1.9,0) circle [radius=0.03];
\node at (5,0) {$\begin{pmatrix} && \\ & v(\sigma_{u}(w')) & \\ && \end{pmatrix} $};
\draw [decorate,decoration={brace, amplitude=5pt}] (3.5,-.8)--(3.5,.8);
\draw [decorate,decoration={brace, amplitude=5pt}] (3.95,1.2)--(6.05,1.2);
\node at (5,1.6) {$\scriptscriptstyle\text{$w'$ ranges over $W_P$}$};
\node at (3,0) {\rotatebox[origin=c]{90}{$\scriptscriptstyle\text{$u$ ranges over $W_P$}$}};
\begin{scope}[shift={(3,-3.5)}]
\node at (-4,0) {$=$};
\node at (0,0) {$\begin{pmatrix} && \\ &\sum \limits_{u\in W_P} M_{wu} \cdot v(\sigma_{u}(w')) & \\ && \end{pmatrix} $};
\draw [decorate,decoration={brace, amplitude=5pt}] (-3,-.8)--(-3,.8);
\draw [decorate,decoration={brace, amplitude=5pt}] (-2.3,1.2)--(2.3,1.2);
\node at (0,1.6) {$\scriptscriptstyle\text{$w'$ ranges over $W_P$}$};
\node at (-3.5,0) {\rotatebox[origin=c]{90}{$\scriptscriptstyle\text{$w$ ranges over $W_P$}$}};
\end{scope}
\end{tikzpicture}
\end{center}
We now show that for any $w,w'\in W_P$ the polynomial $\sigma_w(vw')$ can be decomposed as the sum $\sum \limits_{u\in W_P} M_{wu} \cdot v(\sigma_{u}(w')) $ . Consider Billey's formula for $\sigma_w(vw')$ and group terms according to which part of $w$ is a subword of $v$ and which part is a subword of $w'$. More precisely:
$$\sigma_w(vw')=\sum \limits_{\substack{u \text{ a suffix}\\ \text{of } w}} \overbrace{\sigma_{wu^{-1}}(v)}^\text{part of $w$ found in $v$} \cdot \underbrace{v \sigma_u(w')}_\text{part of $w$ found in $w'$}$$
By construction of $M$ this is $\sum \limits_{u\in W_P} M_{wu} \cdot v(\sigma_{u}(w'))$.
Therefore the matrix $(\sigma_w(vw'))_{w,w'\in W_P}$ is equal to $M \cdot vN$ as desired. \\
\\
By Lemmas \ref{lemma: block-upper-triangular} and \ref{lemma: diagonal blocks} the rows of the matrix $A$ are linearly independent over $\mathbb{C}[\mathfrak{t}^*]$. Thus the Schubert class products $\{\sigma_v\sigma_w: v \in W^P, w \in W_P\}$ are linearly independent over $\mathbb{C}[\mathfrak{t}^*]$. \end{proof}
Using Theorem \ref{thm: lin ind} we can prove Theorem \ref{thm: main theorem} in the equivariant setting.
\begin{theorem}
\label{thm:gb=gppb}
The map $p \otimes q \mapsto pq$ induces a bilinear isomorphism of modules $$H_T^*(G/P)\otimes H_T^*(P/B) \cong H_T^*(G/B).$$
\end{theorem}
\begin{proof}
For any pair $(v,w)\in W^P\times W_P$ the polynomial degree of the homogeneous class $\sigma_v \sigma_w$ is $\ell(v)+\ell(w)$ just like that of $\sigma_{vw}$. The map $W^P \times W_P \rightarrow W$ given by $(v,w) \mapsto vw$ is a bijection \cite{Bjorner-Brenti} and induces a bijection $\sigma_v \sigma_w\mapsto \sigma_{vw}$ which preserves polynomial degree. Thus the set $\{\sigma_v\sigma_w: v \in W^P, w \in W_P\}$ contains the correct number of elements of each polynomial degree to be a basis of $H_T^*(G/B)$.\\
\\
By Theorem \ref{thm: lin ind} the set $\{\sigma_v\sigma_w: v \in W^P, w \in W_P\}$ is also linearly independent over $H_T^*(pt)$. Thus it is a basis for $H^*_T(G/B)$.
\end{proof}
The equivariant isomorphism induces a similar isomorphism in ordinary cohomology, essentially by Koszul duality. We confirm this below, re-deriving the Leray-Hirsch isomorphism. (Note that not all equivariant results immediately descend to ordinary cohomology; see for instance Theorem \ref{thm: distinct bases}.)
\begin{corollary}
\label{cor: ordinary cohomology}
The map $p \otimes q \mapsto pq$ induces a bilinear isomorphism of modules $$H^*(G/P)\otimes H^*(P/B) \cong H^*(G/B).$$
\end{corollary}
\begin{proof}
Let $M \subseteq \mathbb{C}[\mathfrak{t}^*]$ be the augmentation ideal, namely $M = \langle \alpha_1, \alpha_2, \ldots, \alpha_n\rangle$. Recall that the ordinary cohomology is the quotient $H^*(X) \cong \frac{H^*_T(X)}{MH^*_T(X)}$ when $X$ satisfies certain conditions, for instance, if $X$ has no odd-dimensional ordinary cohomology \cite{GKMpaper}. Consider the two projections
\[H^*_T(G/P) \otimes_{\mathbb{C}[\mathfrak{t}^*]} H^*_T(P/B) \rightarrow \frac{H^*_T(G/P) \otimes_{\mathbb{C}[\mathfrak{t}^*]} H^*_T(P/B)}{M \left( H^*_T(G/P) \otimes_{\mathbb{C}[\mathfrak{t}^*]} H^*_T(P/B) \right)}\]
and
\[H^*_T(G/P) \otimes_{\mathbb{C}[\mathfrak{t}^*]} H^*_T(P/B) \rightarrow \frac{H^*_T(G/P)}{M H^*_T(G/P)} \otimes_{\mathbb{C}[\mathfrak{t}^*]} \frac{H^*_T(P/B)}{M H^*_T(P/B)}.\]
If $a \otimes b \in H^*_T(G/P) \otimes_{\mathbb{C}[\mathfrak{t}^*]} H^*_T(P/B)$ and $m \in M$ then $m(a \otimes b) = (ma) \otimes b = a \otimes (mb)$ so the kernels of the two projections agree. It follows that we have an isomorphism
\[\frac{H^*_T(G/P) \otimes_{\mathbb{C}[\mathfrak{t}^*]} H^*_T(P/B)}{M \left( H^*_T(G/P) \otimes_{\mathbb{C}[\mathfrak{t}^*]} H^*_T(P/B) \right)} \cong \frac{H^*_T(G/P)}{M H^*_T(G/P)} \otimes_{\mathbb{C}[\mathfrak{t}^*]} \frac{H^*_T(P/B)}{M H^*_T(P/B)}.\]
The map $\phi: H^*_T(G/P) \otimes_{\mathbb{C}[\mathfrak{t}^*]} H^*_T(P/B) \rightarrow H^*_T(G/B)$ is an isomorphism of $\mathbb{C}[\mathfrak{t}^*]$-modules so it commutes with taking the quotient by the augmentation $MH^*_T(G/B)$. Combining these results gives
\[\frac{H^*_T(G/P)}{M H^*_T(G/P)} \otimes_{\mathbb{C}[\mathfrak{t}^*]} \frac{H^*_T(P/B)}{M H^*_T(P/B)} \cong \frac{H^*_T(G/B)}{M H^*_T(G/B)}\]
or in other words $H^*(G/P) \otimes H^*(P/B) \cong H^*(G/B)$.
\end{proof}
\section{Applications}
\label{applications}
The parabolic basis $\mathcal{B}_P=\{\sigma_v\sigma_w : v\in W^P, w\in W_P\}$ is generally not the Schubert basis. In fact we will show that, with the exception of $P=G$ and $P=B$, each of the bases $\mathcal{B}_P$ is distinct not only from the Schubert basis but from any other parabolic basis as well. As with different bases of symmetric functions, this is a useful computational tool. As another application we compute the character of a particular Springer representation.
\subsection{The parabolic basis $\mathcal{B}_P$}
We begin with an example illustrating that the basis $\mathcal{B}_P$ is not the Schubert basis.
\def\GKMgraph{\draw(A)--(B)--(C)--(D)--(E)--(F)-- cycle;
\draw (A)--(D); \draw(B)--(E);
\draw (F)--(C);}
\newcommand{\Schubertclass}[7]{
\coordinate (A) at (0,0);
\coordinate (B) at (-1,.8);
\coordinate (C) at (-1,2.2);
\coordinate (D) at (0,3);
\coordinate (E) at (1,2.2);
\coordinate (F) at (1,.8);
\coordinate (G) at (0,4);
\GKMgraph
\node [below] at (A) {${#1}$};
\node [left] at (B) {${#2}$};
\node [left] at (C) {${#3}$};
\node [above] at (D) {${#4}$};
\node [right] at (E) {${#5}$};
\node [right] at (F) {${#6}$};
\node at (G) {${#7}$};
}
\begin{example}
\label{ex: a2}
We again use the $A_2$ example $W_P=\langle s_2 \rangle$ and $ W^P=\{e,s_1, s_2s_1\}$. Four of the classes in $\mathcal{B}_P$ are also Schubert classes:
\begin{center}
\scalebox{.8}{
\begin{tikzpicture}
\matrix[column sep=0.8cm,row sep=0.5cm, ampersand replacement=\&]
{
\Schubertclass{1}{1}{1}{1}{1}{1}{\sigma_e\sigma_e=\sigma_e} \&
\Schubertclass{0}{0}{\alpha_1+\alpha_2}{\alpha_1+\alpha_2}{\alpha_2}{\alpha_2}{\sigma_e\sigma_{s_2}=\sigma_{s_2}} \&
\Schubertclass{0}{\alpha_1}{\alpha_1}{\alpha_1+\alpha_2}{\alpha_1+\alpha_2}{0}{\sigma_{s_1}\sigma_e=\sigma_{s_1}} \&
\Schubertclass{0}{0}{0}{\alpha_2(\alpha_1+\alpha_2)}{\alpha_2(\alpha_1+\alpha_2)}{0}{\sigma_{s_2s_1}\sigma_e=\sigma_{s_2s_1}}\\
};
\end{tikzpicture}
}\end{center}
The remaining two classes are not Schubert classes.
\scalebox{.8}{
\begin{tabular}{m{8cm} m{2cm} m{8cm}}
\begin{tikzpicture}
\Schubertclass{0}{0}{\alpha_1(\alpha_1+\alpha_2)}{(\alpha_1+\alpha_2)^2}{\alpha_2(\alpha_1+\alpha_2)}{0}{\sigma_{s_1}\sigma_{s_2}}
\end{tikzpicture}
&
$\neq$ &
\begin{tikzpicture}
\Schubertclass{0}{0}{\alpha_1(\alpha_1+\alpha_2)}{\alpha_1(\alpha_1+\alpha_2)}{0}{0}{\sigma_{s_1s_2}};
\end{tikzpicture}
\end{tabular}}\\
\\
\scalebox{.8}{
\begin{tabular}{m{8cm} m{2cm} m{8cm}}
\begin{tikzpicture}
\Schubertclass{0}{0}{ {\color{white}\alpha_1\alpha(\alpha_1+\alpha_2)} 0}{\alpha_2(\alpha_1+\alpha_2)^2}{(\alpha_2)^2(\alpha_1+\alpha_2)}{0}{\sigma_{s_2s_1}\sigma_{s_2}}
\end{tikzpicture}
&
$\neq$ &
\begin{tikzpicture}
\Schubertclass{0}{0}{{\color{white}\alpha_1\alpha(\alpha_1+\alpha_2)}0 }{\alpha_1\alpha_2(\alpha_1+\alpha_2)}{0}{0}{\sigma_{s_2s_1s_2}};
\end{tikzpicture}
\end{tabular}}\\
The class $\sigma_{s_1}\sigma_{s_2}$ is equal to $\sigma_{s_1s_2}+\sigma_{s_2s_1}$ and the class $\sigma_{s_2s_1}\sigma_{s_2}$ is equal to $\sigma_{s_1s_2s_1}+\alpha_2 \sigma_{s_2s_1}$.
\end{example}
A Schubert class $\sigma_w$ could appear in one of these parabolic bases even if the word $w$ is contained in neither the parabolic subgroup nor the set of minimal coset representatives. If the class $\sigma_w$ does appear in the basis, we know exactly which Schubert classes are multiplied together to obtain it.
\begin{lemma}
\label{lemma: one class to two} Fix a parabolic $P$ and suppose that $v\in W^P, w\in W_P$.
If the product class $\sigma_v\sigma_w$ is equal to a single Schubert class $\sigma_u$ then $u=vw$ is the parabolic decomposition of $u$.
\end{lemma}
This lemma is a consequence of a result of Reiner, Woo, and Yong \cite[Lemma 2.2]{ReinerWooYong}. We give a different proof.
\begin{proof}
If $\sigma_v\sigma_w=\sigma_u$ then $\ell(v)+\ell(w)=\ell(u)$ since both sides must have the same polynomial degree. For any $u'\not \geq u$ at least one of $\sigma_v(u'), \sigma_w(u')$ must be zero since the product $\sigma_v(u')\sigma_w(u')$ is zero. By construction $\ell(v)+\ell(w)=\ell(vw)$ and by Property \ref{prop: non-zero} of Billey's formula $\sigma_v(vw)$ and $\sigma_w(vw)$ are both nonzero. Thus $\sigma_u(vw)$ is nonzero, implying that $vw \geq u$. But $vw$ has the same length as $u$ so the two words must be equal.
\end{proof}
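For instance, in Example \ref{ex: a2} the products that are single Schubert classes are $\sigma_e\sigma_e=\sigma_e$, $\sigma_e\sigma_{s_2}=\sigma_{s_2}$, $\sigma_{s_1}\sigma_e=\sigma_{s_1}$, and $\sigma_{s_2s_1}\sigma_e=\sigma_{s_2s_1}$, and in each case the index of the resulting Schubert class is the parabolic decomposition $vw$; the remaining two products are sums of more than one Schubert class.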
Both $\mathcal{B}_G$ and $\mathcal{B}_B$ are the classical Schubert basis. Since $W_G=W^B=W$ every class in $\mathcal{B}_G$ has the form $\sigma_e\sigma_w=\sigma_w$ and every class in $\mathcal{B}_B$ has the form $\sigma_v\sigma_e=\sigma_v$. However all other parabolic bases are distinct. For the sake of clarity we prove the result first in the case when $G$ is connected and then for general reductive linear algebraic groups $G$.
\begin{theorem}
\label{thm: distinct bases}
Assume that the Dynkin diagram for $W$ is connected. With the exception of the pair $P=G$ and $Q=B$, for which $\mathcal{B}_G = \mathcal{B}_B$ is the Schubert basis, distinct parabolics $P$ and $Q$ have distinct bases $\mathcal{B}_P$ and $\mathcal{B}_Q$ for $H_T^*(G/B)$.
\end{theorem}
\begin{proof}
Neither $P$ nor $Q$ is $B$ so there is at least one simple reflection in each of $W_P$ and $W_Q$. We also assume that $P \neq Q$ so at least one of $W_P$ and $W_Q$ contains a simple reflection that the other does not. Without loss of generality assume that there is at least one simple reflection in $W_P\setminus W_Q$. Consider all paths in the Dynkin diagram between simple roots corresponding to reflections in $W_Q$ and simple roots corresponding to reflections in $W_P \setminus W_Q$. Choose a minimal-length such path and denote the endpoints by $\alpha_i$ and $\alpha_j$ where $\alpha_i$ corresponds to $s_i \in W_P\setminus W_Q$ and $\alpha_j$ corresponds to $s_j\in W_Q$. Let $v_R$ be the word $s_j v s_i$ corresponding to that path. The path is minimal so the word $v$ contains no reflections in $W_P$ or $W_Q$.
Since $v_R\in W$ is given by a word $s_{i_1} s_{i_2} \cdots s_{i_k}$ whose letters are, in order, the simple reflections corresponding to the vertices of a path in the Dynkin diagram, consecutive letters $s_{i_j}$ and $s_{i_{j+1}}$ do not commute. Each reflection occurs at most once, so no braid moves can be performed on this word. Thus $v_R$ has a unique reduced word in $W$.
Consider the factorization of $v_R$ in each of $W^PW_P$ and $W^QW_Q$. Since $v_R$ has a unique minimal word this factorization simply splits $v_R$ into a prefix and a suffix, with the prefix ending in the rightmost occurrence of $s_k \not \in W_P$ respectively $W_Q$. In particular the reflection $s_i$ is not in $W_Q$ so $v_R \in W^Q$. If $v \neq e$ then similarly $s_jv \in W^P$ and $s_i \in W_P$.
We will show that either the Schubert class corresponding to $v_R$ is in $\mathcal{B}_Q$ or the Schubert class corresponding to $s_is_j$ is in $\mathcal{B}_P$. The two bases $\mathcal{B}_P$ and $\mathcal{B}_Q$ are equal only if $\sigma_{v_R}$ appears in $\mathcal{B}_P$ or $\sigma_{s_is_j}$ appears in $\mathcal{B}_Q$ respectively. Lemma \ref{lemma: one class to two} showed that the only way that $\sigma_a\sigma_b \in \mathcal{B}_P$ could equal $\sigma_{v_R}$ is if $ab=v_R$ and similarly for $\sigma_{s_is_j}$. We will then evaluate at particular Weyl group elements to prove that the classes cannot be equal.
The cases we consider are:
\begin{enumerate}
\item[Case 1.] If $s_j\not \in W_P$ then $\sigma_{v_R}$ is in $\mathcal{B}_Q$ and $\sigma_{s_jv} \sigma_{s_i} \in \mathcal{B}_P$.
\item[Case 2.] If $s_j\in W_P$ then
\begin{enumerate}
\item[a] if $v_R=s_jvs_i$ for $v\neq e$ then $\sigma_{v_R}$ is in $\mathcal{B}_Q$ and $\sigma_{s_jv} \sigma_{s_i} \in \mathcal{B}_P$.
\item[b] if $v_R=s_js_i$ then $\sigma_{s_is_j}$ is in $\mathcal{B}_P$ and $\sigma_{s_i}\sigma_{s_j} \in \mathcal{B}_Q$.
\end{enumerate}
\end{enumerate}
{\bf Case 1.} To see that $\sigma_{v_R} \neq \sigma_{s_jv}\sigma_{s_i}$ we compare their values at $s_jvs_is_jv$. We could prove that $s_jvs_is_jv$ is reduced by an argument with relations similar to the one used to show that $v_R$ has a unique reduced word; alternatively we could observe that $s_jvs_is_jv$ is reduced because it is in Bj\"{o}rner-Brenti's {\em normal form} \cite[Proposition 3.4.2]{Bjorner-Brenti} (with roots ordered in the same order as the path from $\alpha_i$ to $\alpha_j$). On the one hand
\[\sigma_{v_R}(s_jvs_is_jv) = \sigma_{s_jv}(s_jv) s_jv (\alpha_i) = \sigma_{v_R}(v_R).\]
On the other hand
\[\sigma_{s_jv}(s_jvs_is_jv)\cdot \sigma_{s_i}(s_jvs_is_jv) = \left(\sigma_{s_jv}(s_jv)+s_jvs_i\sigma_{s_jv}(s_jv)+ \text{other non-negative terms} \right) \cdot s_jv (\alpha_i)\]
which is
$\sigma_{v_R}(v_R) + \textup{something positive}$. This proves the claim in this case.
{\bf Case 2a.} In this case we evaluate the classes at $s_is_jvs_i$ which is reduced by the previous argument. In $\mathcal{B}_Q$ we have $\sigma_{v_R}(s_is_jvs_i)= s_i(\sigma_{v_R}(v_R))$ while in $\mathcal{B}_P$ we have
$$
\sigma_{s_jv}(s_is_jvs_i)\cdot \sigma_{s_i}(s_is_jvs_i)=s_i(\sigma_{s_jv}(v_R))\cdot (\alpha_i+s_is_jv(\alpha_i)).
$$
Again this equals $s_i(\sigma_{v_R}(v_R))+\text{ something positive}$. The claim holds in this case, too.
{\bf Case 2b.} We look at the classes corresponding to $s_is_j$. The word $s_is_j$ is contained in $W_P$ and decomposes into $s_i\in W^Q$ and $s_j\in W_Q$. Evaluating at the reduced word $s_js_is_j$ gives
$$\sigma_{s_i}(s_js_is_j)\cdot \sigma_{s_j}(s_js_is_j)=s_j(\alpha_i)\cdot (\alpha_j+s_js_i(\alpha_j))$$
$$\textup{but } \hspace{1em} \sigma_{s_is_j}(s_js_is_j)=s_j(\alpha_i)\cdot s_js_i(\alpha_j).$$
These are unequal which proves the theorem.
\end{proof}
This result easily extends to the case of the general complex reductive linear algebraic group.
\begin{corollary}
\label{cor: disconnected G}
Suppose that the Dynkin diagram of $W$ is disconnected and denote the corresponding factorization of $W$ by $W = W_1 \times W_2 \times \cdots \times W_k$. Suppose that $P$ and $Q$ are parabolics. Then the bases $\mathcal{B}_P$ and $\mathcal{B}_Q$ are different bases of $H^*_T(G/B)$ if and only if for some factor $W_i$
\begin{itemize}
\item $W_P \cap W_i$ differs from $W_Q \cap W_i$ and
\item at least one of $W_P\cap W_i$ and $W_Q\cap W_i$ is a nonempty proper subset of $W_i$.
\end{itemize}
\end{corollary}
\begin{proof}
If $W$ can be factored as $W = W_1 \times W_2 \times \cdots \times W_k$ then the cohomology classes in $H^*_T(G/B)$ are products $p_1 \times p_2 \times \cdots \times p_k$ where $p_i$ is in the equivariant cohomology of the flag variety $G_i/B_i$ corresponding to $W_i$ for each $i \in \{1,2,\ldots, k\}$. Two classes $p_1 \times p_2 \times \cdots \times p_k$ and $q_1 \times q_2 \times \cdots \times q_k$ are equal if and only if $p_i = q_i$ for each $i$. As long as Theorem \ref{thm: distinct bases} holds for one factor $i$ the bases $\mathcal{B}_P$ and $\mathcal{B}_Q$ are distinct. This is precisely the content of the two conditions.
Conversely suppose that for each $W_i$ either $W_P\cap W_i= W_Q \cap W_i$ or both are in the set $\{W_i,\{e\}\}$. Let $\mathcal{B}_{P_i}$ be the basis of $H_T^*(G_i/B_i)$ corresponding to the parabolic with Weyl group $W_P\cap W_i$. Then $\mathcal{B}_P= \mathcal{B}_{P_1} \times \mathcal{B}_{P_2} \times \cdots \times \mathcal{B}_{P_k}$ and $\mathcal{B}_Q= \mathcal{B}_{Q_1} \times \mathcal{B}_{Q_2} \times \cdots \times \mathcal{B}_{Q_k}$. If $W_P\cap W_i= W_Q \cap W_i$ then $\mathcal{B}_{P_i}=\mathcal{B}_{Q_i}$. If both are in $\{W_i, \{e\}\}$ then both $\mathcal{B}_{P_i}$ and $\mathcal{B}_{Q_i}$ are the Schubert basis for $H_{T_i}^*(G_i/B_i)$. In all cases $\mathcal{B}_P=\mathcal{B}_Q$ as desired.
\end{proof}
\begin{remark} The proof of Theorem \ref{thm: distinct bases} does not immediately extend to ordinary cohomology. Consider Example \ref{ex: a2}, which described $W_P= \langle s_2 \rangle$ in type $A_2$. The basis in that case is
\[\mathcal{B}_P=\{\sigma_e, \sigma_{s_1},\sigma_{s_2}, \sigma_{s_2s_1}, \sigma_{s_1s_2}+\sigma_{s_2s_1}, \sigma_{s_1s_2s_1}+\alpha_2\sigma_{s_2s_1}\}.\]
If $W_Q= \langle s_2 \rangle$ then the basis $\mathcal{B}_Q$ is the set of classes
\[\mathcal{B}_Q = \{\sigma_e, \sigma_{s_1},\sigma_{s_2}, \sigma_{s_1s_2}, \sigma_{s_1s_2}+\sigma_{s_2s_1}, \sigma_{s_1s_2s_1}+\alpha_1\sigma_{s_1s_2}\}.\]
The highest degree element in each basis differs in equivariant cohomology but both project to $\sigma_{s_1s_2s_1}$ in ordinary cohomology using the map in Corollary \ref{cor: ordinary cohomology}. In particular the localizations of two basis elements could differ in equivariant cohomology but agree in ordinary cohomology, so the previous proof does not immediately apply to ordinary cohomology.
Nonetheless one basis contains $\sigma_{s_1s_2}$ while the other contains $\sigma_{s_2s_1}$. In other words $\mathcal{B}_P$ and $\mathcal{B}_Q$ are distinct bases for $H^*(G/B)$ even though the previous proof does not hold. This leads us to the following conjecture.
\end{remark}
\begin{conjecture}
The bases $\mathcal{B}_P$ and $\mathcal{B}_Q$ of the ordinary cohomology are distinct for distinct parabolics $P$ and $Q$ not equal to $B$. In other words Theorem \ref{thm: distinct bases} holds for $H^*(G/B)$ as well as $H_T^*(G/B)$.
\end{conjecture}
\subsection{Representations of $W_P$}
In this section we use the parabolic basis of $H^*_T(G/B)$ to describe explicitly the character of an action of $W_P$ on $H^*_T(G/B)$. This group action is in fact the restriction of a well-known Weyl group action on $H^*_T(G/B)$ called {\em Springer's representation}; Kostant and Kumar first studied the presentation we use here \cite[see Section 4.17 and Proposition 4.24.g]{KostantKumar}.
Kostant and Kumar showed that the Weyl group acts as a collection of algebra homomorphisms on the equivariant cohomology $H^*_T(G/B)$ according to the rule that if $w \in W$ and $p \in H^*_T(G/B)$ then the class $w \cdot p$ is given, in terms of its localizations, by:
\[(w \cdot p)(v) = p(vw^{-1}) \hspace{0.5in} \textup{ for each } v \in W.\]
We restrict this action to an arbitrary parabolic subgroup $W_P$ of $W$.
\begin{theorem}
\label{thm: kostant kumar character}
Fix a parabolic subgroup $B \subseteq P \subseteq G$, or equivalently a subset of the simple roots. Let $m_P = |W^P|$ denote the number of minimal-length coset representatives of $W/W_P$. Let $\chi_P$ denote the character of the restriction to $W_P$ of Kostant-Kumar's action of $W$ on $H^*_T(G/B)$. Then
\[\chi_P = m_P \chi \]
where $\chi$ is the character of Kostant-Kumar's action of $W_P$ on $H^*_T(P/B)$.
\end{theorem}
\begin{proof}
Consider the basis $\{\sigma_w\sigma_v: w \in W^P, v \in W_P\}$ from the previous section. Choose any simple reflection $s_i \in W_P$ and consider the image $s_i \cdot (\sigma_{w'} \sigma_{v'})$ of $s_i$ acting on the basis element $\sigma_{w'} \sigma_{v'}$. Kostant-Kumar's action is a map of algebras so
\[ s_i \cdot (\sigma_{w'} \sigma_{v'}) = (s_i \cdot \sigma_{w'}) (s_i \cdot \sigma_{v'}).\]
By Proposition \ref{prop last letter}, no reduced word for an element of $W^P$ ends in $s_i$, since $s_i \in W_P$. For each $u \in W$ we conclude by Billey's formula that
\[\sigma_{w'}(us_i)=\sigma_{w'}(u).\]
Hence $s_i \cdot \sigma_{w'} = \sigma_{w'}$ and so for all $s_i, v' \in W_P$ and all $w' \in W^P$ we have
\[s_i \cdot (\sigma_{w'} \sigma_{v'}) = \sigma_{w'} (s_i \cdot \sigma_{v'}).\]
It follows that for each $v'' \in W_P$ we have $v'' \cdot (\sigma_{w'} \sigma_{v'}) = \sigma_{w'} (v'' \cdot \sigma_{v'})$. In particular $\chi_P = |W^P| \chi$: the coefficient of $\sigma_{w'} \sigma_{v'}$ in the expansion of $v'' \cdot (\sigma_{w'} \sigma_{v'})$ in terms of the basis $\{\sigma_{w} \sigma_{v}\}$ is the same as the coefficient of $\sigma_{v'}$ in the expansion of $v'' \cdot \sigma_{v'}$ in terms of the basis $\{\sigma_v\}$, and this holds for each of the $|W^P|$ choices of $w' \in W^P$.
\end{proof}
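For example, in type $A_2$ with $W_P = \langle s_2 \rangle$ the action on $H^*_T(P/B)$ satisfies $s_2 \cdot \sigma_e = \sigma_e$ and, since $(s_2 \cdot \sigma_{s_2})(v) = \sigma_{s_2}(vs_2)$, also $s_2 \cdot \sigma_{s_2} = \alpha_2 \sigma_e - \sigma_{s_2}$. Hence $\chi(s_2) = 0$ and $\chi(e) = 2$, while $\chi_P(e)$ is the rank $|W| = 6$ of $H^*_T(G/B)$ as a free module over $\mathbb{C}[\mathfrak{t}^*]$.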
{}
\end{document} |
\begin{document}
\title[]{The $L^{2}$ sequential convergence of a solution to the mass-critical NLS above the ground state}
\author{Benjamin Dodson}
\begin{abstract}
In this paper we generalize a weak sequential convergence result of \cite{fan20182} to non-scattering solutions in dimensions $d \geq 2$. No symmetry assumptions are required for the initial data. We build on a previous result of \cite{dodson20202} in dimension one.
\end{abstract}
\maketitle
\section{Introduction}
The mass-critical nonlinear Schr{\"o}dinger equation (NLS) is given by
\begin{equation}\label{1.1}
i u_{t} + \Delta u = \mu |u|^{\frac{4}{d}} u = \mu F(u), \qquad u(0,x) = u_{0}, \qquad u : I \times \mathbb{R}^{d} \rightarrow \mathbb{C}, \qquad \mu = \pm 1,
\end{equation}
where $I \subset \mathbb{R}$ is an open interval with $0 \in I$. The case when $\mu = +1$ is the defocusing case, and the case when $\mu = -1$ is the focusing case.
If $u$ solves $(\ref{1.1})$, then for any $\lambda > 0$,
\begin{equation}\label{1.2}
\lambda^{d/2} u(\lambda^{2} t, \lambda x),
\end{equation}
also solves $(\ref{1.1})$ with initial data $\lambda^{d/2} u_{0}(\lambda x)$. The $L^{2}$ norm, or mass, is preserved under $(\ref{1.2})$. Thus, $(\ref{1.1})$ is called $L^{2}$ or mass critical. The $L^{2}$ norm, or mass, is also conserved by the flow of $(\ref{1.1})$. If $u$ is a solution to $(\ref{1.1})$ on some interval $I \subset \mathbb{R}$, $0 \in I$, then for any $t \in I$,
\begin{equation}\label{1.3}
M(u(t)) = \int |u(t,x)|^{2} dx = \int |u(0,x)|^{2} dx.
\end{equation}
It is well-known that the local well-posedness of $(\ref{1.1})$ is completely determined by $L^{2}$-regularity. In the positive direction, \cite{cazenave1989some}, \cite{cazenave1990cauchy} proved that $(\ref{1.1})$ is locally well-posed on some open interval for initial data $u_{0} \in L^{2}(\mathbb{R}^{d})$. Furthermore, if $u_{0} \in H_{x}^{s}(\mathbb{R}^{d})$ for some $s > 0$, \cite{cazenave1989some}, \cite{cazenave1990cauchy} proved that $(\ref{1.1})$ was locally well-posed on an open interval $(-T, T)$, where $T(\| u_{0} \|_{H^{s}}) > 0$ depends only on the size of the initial data. Finally, \cite{cazenave1989some}, \cite{cazenave1990cauchy} proved that there exists $\epsilon_{0} > 0$ such that if $\| u_{0} \|_{L^{2}} < \epsilon_{0}$, then $(\ref{1.1})$ is globally well-posed and scattering.
\begin{definition}[Scattering]\label{d1.1}
A solution to $(\ref{1.1})$ that is global forward in time, that is $u$ exists on $[0, \infty)$, is said to scatter forward in time if there exists $u_{+} \in L^{2}(\mathbb{R}^{d})$ such that
\begin{equation}\label{1.4}
\lim_{t \nearrow \infty} \| u(t) - e^{it \Delta} u_{+} \|_{L^{2}(\mathbb{R}^{d})} = 0.
\end{equation}
A solution to $(\ref{1.1})$ that is global backward in time is said to scatter backward in time if there exists $u_{-} \in L^{2}(\mathbb{R}^{d})$ such that
\begin{equation}\label{1.5}
\lim_{t \searrow -\infty} \| u(t) - e^{it \Delta} u_{-} \|_{L^{2}(\mathbb{R}^{d})} = 0.
\end{equation}
Equation $(\ref{1.1})$ is said to be scattering for $u_{0}$ in a specified subset of $L^{2}(\mathbb{R}^{d})$, possibly all of $L^{2}(\mathbb{R}^{d})$, if for any $u_{0}$ in that subset there exist $(u_{-}, u_{+}) \in L^{2}(\mathbb{R}^{d}) \times L^{2}(\mathbb{R}^{d})$ such that $(\ref{1.4})$ and $(\ref{1.5})$ hold, and additionally, $u_{-}$ and $u_{+}$ depend continuously on $u_{0}$.
\end{definition}
\noindent In the negative direction, \cite{christ2003asymptotics} showed that local well-posedness fails for $u_{0} \in H^{s}$, $s < 0$.
The qualitative global behavior for $(\ref{1.1})$ in the defocusing case $(\mu = +1)$ has now been completely worked out. A solution to $(\ref{1.1})$ has the conserved quantities mass, $(\ref{1.3})$, energy,
\begin{equation}\label{1.7}
E(u(t)) = \frac{1}{2} \int |\nabla u(t,x)|^{2} dx + \frac{\mu d}{2d + 4} \int |u(t,x)|^{\frac{2d + 4}{d}} dx = E(u(0)),
\end{equation}
and momentum
\begin{equation}\label{1.8}
P(u(t)) = Im \int \nabla u(t,x) \overline{u(t,x)} dx = P(u(0)).
\end{equation}
When $\mu = +1$, $(\ref{1.7})$ is positive definite, so if $u_{0} \in H^{1}(\mathbb{R}^{d})$, then the energy gives an upper bound on $\| u(t) \|_{H^{1}}$ for any $t \in I$. Since $(\ref{1.1})$ is locally well-posed on an interval $[-T, T]$, where $T(\| u_{0} \|_{H^{1}}) > 0$, conservation of energy implies that the local well-posedness result of \cite{cazenave1989some}, \cite{cazenave1990cauchy} can be iterated to a global well-posedness result. Later, $(\ref{1.1})$ was proved to be globally well-posed and scattering for any initial data in $u_{0} \in L^{2}(\mathbb{R}^{d})$ when $\mu = +1$, see \cite{dodson2016global2}, \cite{dodson2016global}, and \cite{dodson2012global}.
In the focusing case $(\mu = -1)$, the existence of non-scattering solutions to $(\ref{1.1})$ has been known for a long time, see \cite{glassey1977blowing}. Let $Q(x)$ be the unique, positive, radial solution of the elliptic partial differential equation
\begin{equation}\label{1.10}
\Delta Q + |Q|^{\frac{4}{d}} Q = Q.
\end{equation}
Such a solution is known to exist, see \cite{kwong1989uniqueness}. If $Q$ solves $(\ref{1.10})$, then $e^{it} Q(x)$ gives a global solution to $(\ref{1.1})$ when $\mu = -1$,
\begin{equation}\label{1.10.1}
i u_{t} + \Delta u = -|u|^{\frac{4}{d}} u, \qquad u(0,x) = u_{0}, \qquad u : I \times \mathbb{R}^{d} \rightarrow \mathbb{C},
\end{equation}
which does not scatter in either time direction. Furthermore, if $u(t,x)$ is a solution to $(\ref{1.10.1})$, then applying the pseudoconformal transformation to $u$,
\begin{equation}\label{1.11}
v(t,x) = \frac{1}{|t|^{d/2}} \bar{u}(\frac{1}{t}, \frac{x}{t}) e^{i \frac{|x|^{2}}{4t}},
\end{equation}
is also a solution to $(\ref{1.10.1})$. Applying the pseudoconformal transformation to $e^{it} Q(x)$ gives a solution to $(\ref{1.10.1})$ that blows up in finite time.
Furthermore, the mass $\| Q \|_{L^{2}}$ represents a blowup threshold. In the case when $\| u_{0} \|_{L^{2}} < \| Q \|_{L^{2}}$ and $u_{0} \in H^{1}$, \cite{weinstein1983nonlinear} proved that $(\ref{1.10.1})$ has a global solution using conservation of mass, energy, and the Gagliardo--Nirenberg inequality,
\begin{equation}\label{1.9}
\| f \|_{L^{2 + \frac{4}{d}}(\mathbb{R}^{d})}^{2 + \frac{4}{d}} \leq \frac{d + 2}{d} (\frac{\| f \|_{L^{2}(\mathbb{R}^{d})}}{\| Q \|_{L^{2}(\mathbb{R}^{d})}})^{\frac{4}{d}} \| \nabla f \|_{L^{2}(\mathbb{R}^{d})}^{2}.
\end{equation}
Plugging $(\ref{1.9})$ into $(\ref{1.7})$ when $\mu = -1$,
\begin{equation}
E(u(t)) \geq \frac{1}{2} \| \nabla u(t) \|_{L^{2}}^{2} (1 - \frac{\| u_{0} \|_{L^{2}}^{4/d}}{\| Q \|_{L^{2}}^{4/d}}).
\end{equation}
For initial data $u_{0} \in L^{2}$ satisfying $\| u_{0} \|_{L^{2}} < \| Q \|_{L^{2}}$, where $u_{0}$ need not lie in $H^{1}$, \cite{dodson2015global} proved global well-posedness and scattering.
Less is known about the focusing problem when $\| u_{0} \|_{L^{2}} = \| Q \|_{L^{2}}$. It is conjectured that $u(t,x) = e^{it} Q(x)$ and its pseudoconformal transformation are the only non-scattering solutions to $(\ref{1.10.1})$ when $\| u_{0} \|_{L^{2}} = \| Q \|_{L^{2}}$, modulo symmetries of $(\ref{1.1})$. The symmetries of $(\ref{1.1})$ include the scaling symmetry, which has already been discussed $(\ref{1.2})$, translation in space and time,
\begin{equation}\label{1.13}
u(t - t_{0}, x - x_{0}), \qquad t_{0} \in \mathbb{R}, \qquad x_{0} \in \mathbb{R}^{d},
\end{equation}
phase transformation,
\begin{equation}\label{1.14}
\forall \theta_{0} \in \mathbb{R}, \qquad e^{i \theta_{0}} u(t, x),
\end{equation}
and the Galilean transformation,
\begin{equation}\label{1.15}
e^{i \frac{\xi_{0}}{2} \cdot (x - \frac{\xi_{0}}{2} t)} u(t, x - \xi_{0} t), \qquad \xi_{0} \in \mathbb{R}^{d}.
\end{equation}
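For example, applying $(\ref{1.15})$ to the soliton $e^{it} Q(x)$ gives the traveling soliton
\begin{equation*}
e^{i \frac{\xi_{0}}{2} \cdot (x - \frac{\xi_{0}}{2} t)} e^{it} Q(x - \xi_{0} t),
\end{equation*}
a non-scattering solution with mass $\| Q \|_{L^{2}}$ whose profile is centered at $x = \xi_{0} t$.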
This conjecture was answered in the affirmative in all dimensions for finite time blowup solutions with finite energy initial data. See \cite{merle1992uniqueness} and \cite{merle1993determination}. This conjecture was also answered in the affirmative for radially symmetric solutions to $(\ref{1.10.1})$ in dimensions $d \geq 4$ that blow up in both time directions, though not necessarily in finite time. See \cite{killip2009characterization}.
\begin{remark}
Throughout this paper, blowup refers to failure to scatter, and may occur in either finite or infinite time, unless specified otherwise. From \cite{cazenave1989some}, \cite{cazenave1990cauchy}, failure to scatter forward in time is equivalent to
\begin{equation}
\| u \|_{L_{t,x}^{\frac{2(d + 2)}{d}}([0, \sup(I)) \times \mathbb{R}^{d})} = \infty,
\end{equation}
where $I$ is the maximal interval of existence of $u$.
\end{remark}
\begin{remark}
The pseudoconformal transformation of the solution $e^{it} Q(x)$ is a solution that blows up in one time direction but scatters in the other. By time reversal symmetry, it is possible to assume without loss of generality that the solution blows up forward in time. So \cite{merle1992uniqueness} and \cite{merle1993determination} proved that a finite energy, finite time blowup solution to $(\ref{1.10.1})$ must be a pseudoconformal transformation of $e^{it} Q(x)$. Meanwhile, \cite{killip2009characterization} showed that the only radial solution to $(\ref{1.10.1})$ that blows up in both time directions in dimensions $d \geq 4$ is the soliton $e^{it} Q$.
\end{remark}
More recently, \cite{fan20182} proved a sequential convergence result for radially symmetric solutions that may only blow up in one time direction.
\begin{theorem}\label{t1.1}
Assume that $u$ is a radial solution to the focusing, mass-critical nonlinear Schr{\"o}dinger equation, $(\ref{1.10.1})$, with $\| u_{0} \|_{L^{2}} = \| Q \|_{L^{2}}$, which does not scatter forward in time. Let $(T^{-}(u), T^{+}(u))$ be its lifespan, $T^{-}(u)$ could be $-\infty$ and $T^{+}(u)$ could be $+\infty$. Then there exists a sequence $t_{n} \nearrow T^{+}(u)$ and a family of parameters $\lambda_{\ast, n}$, $\gamma_{\ast, n}$ such that
\begin{equation}\label{1.16}
\lambda_{\ast, n}^{d/2} u(t_{n}, \lambda_{\ast, n} x) e^{-i \gamma_{\ast, n}} \rightarrow Q, \qquad \text{in} \qquad L^{2}.
\end{equation}
\end{theorem}
In fact, \cite{fan20182} proved Theorem $\ref{t1.1}$ for a larger class of initial data, namely data which is symmetric across $d$ hyperplanes whose normal vectors are linearly independent. In one dimension, there is no difference between radial initial data and symmetric initial data, but there is in higher dimensions.
In a previous paper, \cite{dodson20202}, we removed the symmetry assumption in dimension one. Here, we continue this study and remove the symmetry assumption in dimensions $d \geq 2$. In doing so, we must allow for translation, $(\ref{1.13})$, and Galilean symmetries, $(\ref{1.15})$, not just scaling and phase transformation symmetries.
\begin{theorem}\label{t1.2}
Assume $u$ is a solution to $(\ref{1.1})$ with $\| u_{0} \|_{L^{2}} = \| Q \|_{L^{2}}$ which does not scatter forward in time. Let $(T^{-}(u), T^{+}(u))$ be its lifespan; $T^{-}(u)$ could be $-\infty$ and $T^{+}(u)$ could be $+\infty$. Then there exists a sequence $t_{n} \nearrow T^{+}(u)$ and a family of parameters $\lambda_{\ast, n}$, $\gamma_{\ast, n}$, $\xi_{\ast, n}$, $x_{\ast, n}$ such that
\begin{equation}\label{1.17}
\lambda_{\ast, n}^{d/2} e^{ix \xi_{\ast, n}} u(t_{n}, \lambda_{\ast, n} x + x_{\ast, n}) e^{-i \gamma_{\ast, n}} \rightarrow Q, \qquad \text{in} \qquad L^{2}.
\end{equation}
\end{theorem}
When $\| u_{0} \|_{L^{2}} > \| Q \|_{L^{2}}$, one can easily construct solutions to $(\ref{1.1})$ that blow up in finite time. Indeed, using the virial identity for a solution to $(\ref{1.1})$,
\begin{equation}\label{1.15.0}
\frac{d^{2}}{dt^{2}} \int |x|^{2} |u(t,x)|^{2} dx = 16 E(u_{0}),
\end{equation}
for $u_{0} \in H^{1}$, $\| |x| u_{0} \|_{L^{2}} < \infty$, $E(u_{0}) < 0$, $(\ref{1.15.0})$ implies that the variance $\int |x|^{2} |u(t,x)|^{2} dx$ is a concave function in time. Therefore, the variance can only be positive on some finite interval $(-T_{1}, T_{2})$, where $T_{1}$, $T_{2} < \infty$, which implies that the solution to $(\ref{1.1})$ with such initial data cannot exist outside the time interval $(-T_{1}, T_{2})$. Initial data $u_{0} = (1 + \epsilon) Q$ satisfies the above conditions for any $\epsilon > 0$.
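Concretely, since the right-hand side of $(\ref{1.15.0})$ is the constant $16 E(u_{0})$, integrating twice gives
\begin{equation*}
\int |x|^{2} |u(t,x)|^{2} dx = \int |x|^{2} |u_{0}(x)|^{2} dx + 4 t \, Im \int x \cdot \nabla u_{0}(x) \overline{u_{0}(x)} dx + 8 E(u_{0}) t^{2},
\end{equation*}
and when $E(u_{0}) < 0$ the right-hand side is negative for $|t|$ large, while the left-hand side is nonnegative.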
For initial data with nonpositive energy and mass slightly above the ground state
\begin{equation}\label{1.18}
\| Q \|_{L^{2}} < \| u_{0} \|_{L^{2}} \leq \| Q \|_{L^{2}} + \alpha, \qquad \text{for some} \qquad \alpha > 0 \qquad \text{small},
\end{equation}
\cite{merle2006sharp} proved that after acting on the solution with the appropriate symmetries, $u(t,x)$ converges weakly to $Q$ as $t$ converges to the blowup time. Such solutions would include the above mentioned solutions with finite variance and negative energy that satisfy $(\ref{1.18})$.
This fact also holds for any solution to $(\ref{1.1})$ that satisfies $(\ref{1.18})$ and fails to scatter. Once again, we generalize a result of \cite{fan20182} to the non-symmetric case in dimensions $d \geq 2$.
\begin{theorem}\label{t1.3}
Assume $u$ is a solution to $(\ref{1.1})$ with $u_{0}$ satisfying $(\ref{1.18})$, which does not scatter forward in time. Let $(T^{-}(u), T^{+}(u))$ be the lifespan of the solution. Then there exists a sequence of times $t_{n} \nearrow T^{+}(u)$ and a family of parameters $\lambda_{\ast, n}$, $\gamma_{\ast, n}$, $\xi_{\ast, n}$, $x_{\ast, n}$ such that
\begin{equation}\label{1.19}
\lambda_{\ast, n}^{d/2} e^{ix \cdot \xi_{\ast, n}} u(t_{n}, \lambda_{\ast, n} x + x_{\ast, n}) e^{-i \gamma_{\ast, n}} \rightharpoonup Q, \qquad \text{weakly in} \qquad L^{2}.
\end{equation}
\end{theorem}
\section{A Preliminary reduction}
The scattering result of \cite{dodson2015global} implies that a non-scattering solution to $(\ref{1.1})$ with $\| u_{0} \|_{L^{2}} = \| Q \|_{L^{2}}$ is a minimal mass blowup solution to $(\ref{1.1})$. Therefore, it is possible to make a reduction to an almost periodic solution in proving Theorem $\ref{t1.2}$. Let $t_{n} \nearrow T^{+}(u)$ be a sequence of times. Making a profile decomposition, after passing to a subsequence, for all $J$,
\begin{equation}\label{2.1}
u(t_{n}) = \sum_{j = 1}^{J} g_{n}^{j} [e^{i t_{n}^{j} \Delta} \phi^{j}] + w_{n}^{J},
\end{equation}
where $g_{n}^{j}$ is the group action
\begin{equation}\label{2.1.1}
g_{n}^{j} \phi^{j} = \lambda_{n, j}^{d/2} e^{ix \cdot \xi_{n,j}} e^{i \gamma_{n,j}} \phi^{j}(\lambda_{n,j} x + x_{n,j}),
\end{equation}
and
\begin{equation}
\lim_{J \rightarrow \infty} \limsup_{n \rightarrow \infty} \| e^{it \Delta} w_{n}^{J} \|_{L_{t,x}^{\frac{2(d + 2)}{d}}(\mathbb{R} \times \mathbb{R}^{d})} = 0.
\end{equation}
Since $u$ is a minimal mass blowup solution, $\phi^{j} = 0$ for $j \geq 2$, $\| \phi^{1} \|_{L^{2}} = \| Q \|_{L^{2}}$, and $\| w_{n}^{J} \|_{L^{2}} \rightarrow 0$ as $n \rightarrow \infty$. See \cite{dodson2019defocusing}, \cite{killip2013nonlinear}, or \cite{tao2008minimal} for a detailed treatment of the profile decomposition for minimal mass blowup solutions. Thus, it will be convenient to drop the $j$ notation and simply write,
\begin{equation}\label{2.1.2}
u(t_{n}) = g_{n} \phi + w_{n}.
\end{equation}
\begin{remark}
The disappearance of $e^{it_{n}^{1} \Delta}$ in $(\ref{2.1.2})$ will be explained soon.
\end{remark}
Now let $v$ be the solution to $(\ref{1.1})$ with initial data $\phi$, and let $I$ be the maximal interval of existence of $v$. Since
\begin{equation}\label{2.1.3}
\lim_{n \rightarrow \infty} \| u \|_{L_{t,x}^{\frac{2(d + 2)}{d}}((T^{-}(u), t_{n}) \times \mathbb{R}^{d})} = \infty, \qquad \text{and} \qquad \| u \|_{L_{t,x}^{\frac{2(d + 2)}{d}}((t_{n}, T^{+}(u)) \times \mathbb{R}^{d})} = \infty \qquad \forall n,
\end{equation}
\begin{equation}\label{2.2}
\| v \|_{L_{t,x}^{\frac{2(d + 2)}{d}}([0, \sup(I)) \times \mathbb{R}^{d})} = \| v \|_{L_{t,x}^{\frac{2(d + 2)}{d}}((\inf(I), 0] \times \mathbb{R}^{d})} = \infty.
\end{equation}
\begin{remark}
Equation $(\ref{2.1.3})$ is also the reason that it is unnecessary to allow for the possibility of terms like $[e^{it_{n}^{j} \Delta} \phi^{j}]$ in $(\ref{2.1})$ in place of $\phi^{j}$, where $t_{n}^{j} \rightarrow \pm \infty$. If $t_{n}^{j}$ converges along a subsequence to some $t_{0}^{j} \in \mathbb{R}$, then $\phi^{j}$ can be replaced by $e^{i t_{0}^{j} \Delta} \phi^{j}$.
\end{remark}
\begin{theorem}\label{t2.1}
To prove Theorem $\ref{t1.2}$, it suffices to prove that there exists a sequence $s_{m} \nearrow \sup(I)$, $s_{m} \geq 0$, such that
\begin{equation}\label{2.2.1}
g(s_{m}) v(s_{m}) \rightarrow Q, \qquad \text{in} \qquad L^{2}.
\end{equation}
\end{theorem}
\begin{proof}
Suppose $g(s_{m}) v(s_{m}) \rightarrow Q$ in $L^{2}$. Passing to a subsequence if necessary, we may assume that for each $m$,
\begin{equation}\label{2.4}
\| g(s_{m}) v(s_{m}) - Q \|_{L^{2}} \leq 2^{-m}.
\end{equation}
Next, observe that $(\ref{2.1})$ implies
\begin{equation}\label{2.3}
e^{i \xi_{n} \cdot x} e^{i \gamma_{n}} \lambda_{n}^{d/2} u(t_{n}, \lambda_{n} x + x_{n}) \rightarrow \phi, \qquad \text{in} \qquad L^{2},
\end{equation}
and by $(\ref{1.15})$ and perturbation theory, for a fixed $m$, for $n$ sufficiently large,
\begin{equation}\label{2.5}
\aligned
\| e^{-i \xi_{n}^{2} s_{m}} e^{i \xi_{n} \cdot x} e^{i \gamma_{n}} \lambda_{n}^{d/2} u(t_{n} + \lambda_{n}^{2} s_{m}, \lambda_{n} x + x_{n} - 2 \xi_{n} \lambda_{n} s_{m}) - v(s_{m}) \|_{L^{2}} \\ \leq C(s_{m}) \| e^{i \xi_{n} \cdot x} e^{i \gamma_{n}} \lambda_{n}^{d/2} u(t_{n}, \lambda_{n} x + x_{n}) - \phi \|_{L^{2}}.
\endaligned
\end{equation}
Therefore, by $(\ref{2.4})$, $(\ref{2.5})$, and the triangle inequality,
\begin{equation}\label{2.6}
\aligned
\| g(s_{m})(\lambda_{n}^{d/2} e^{-i \xi_{n}^{2} s_{m}} e^{i \xi_{n} \cdot x} e^{i \gamma_{n}} u(t_{n} + \lambda_{n}^{2} s_{m}, \lambda_{n} x + x_{n} - 2 \xi_{n} \lambda_{n} s_{m})) - Q \|_{L^{2}} \\ \leq C(s_{m}) \| e^{i \xi_{n} \cdot x} e^{i \gamma_{n}} \lambda_{n}^{d/2} u(t_{n}, \lambda_{n} x + x_{n}) - \phi \|_{L^{2}} + 2^{-m}.
\endaligned
\end{equation}
Since $g(s_{m})$ is also of the form $(\ref{2.1.1})$, there exists a group action $g_{n,m}$ of the form $(\ref{2.1.1})$ such that
\begin{equation}\label{2.6.1}
g(s_{m})(\lambda_{n}^{d/2} e^{-i \xi_{n}^{2} s_{m}} e^{i \xi_{n} \cdot x} e^{i \gamma_{n}} u(t_{n} + \lambda_{n}^{2} s_{m}, \lambda_{n} x + x_{n} - 2 \xi_{n} \lambda_{n} s_{m})) = g_{n,m} u(t_{n} + \lambda_{n}^{2} s_{m}, x).
\end{equation}
Equation $(\ref{2.6})$ implies
\begin{equation}\label{2.7}
\lim_{m, n \rightarrow \infty} \| g_{n,m} u(t_{n} + \lambda_{n}^{2} s_{m}, x) - Q \|_{L^{2}} = 0.
\end{equation}
Since $t_{n} \nearrow T^{+}(u)$ and $s_{m} \geq 0$, $t_{n} + \lambda_{n}^{2} s_{m} \nearrow T^{+}(u)$, which implies Theorem $\ref{t1.2}$, assuming that $(\ref{2.2.1})$ is true.
\end{proof}
Now then, since $v$ blows up in both time directions, that is, $(\ref{2.2})$ holds, and since $\| v \|_{L^{2}} = \| Q \|_{L^{2}}$, we can use the result of \cite{tao2008minimal} to prove that $v$ is almost periodic. That is, for all $s \in I$, there exist $\lambda(s) > 0$, $\xi(s) \in \mathbb{R}^{d}$, $x(s) \in \mathbb{R}^{d}$, and $\gamma(s) \in \mathbb{R}$ such that
\begin{equation}\label{2.8}
\lambda(s)^{-d/2} e^{ix \cdot \xi(s)} e^{i \gamma(s)} v(s, \frac{x - x(s)}{\lambda(s)}) \in K,
\end{equation}
where $K$ is a fixed precompact subset of $L^{2}$. Therefore, in the case when $\| u_{0} \|_{L^{2}} = \| Q \|_{L^{2}}$, it only remains to prove sequential convergence to $Q$ for this solution $v$.
\begin{theorem}\label{t2.2}
There exists a sequence $s_{m} \nearrow \sup(I)$ and a sequence of group actions $g(s_{m})$ of the form $(\ref{2.1.1})$ such that
\begin{equation}\label{2.9}
\| g(s_{m}) v(s_{m}) - Q \|_{L^{2}} \rightarrow 0.
\end{equation}
\end{theorem}
The proof of this fact will occupy the next two sections. Since $v$ is almost periodic, the tools used in \cite{dodson2015global} are available in this case as well.
\begin{remark}
In order for the notation to align with that of prior works, such as \cite{dodson2015global}, it will be convenient to relabel, so that $v$ is now denoted $u$, and $s$ is now denoted $t$.
\end{remark}
Similarly, when proving Theorem $\ref{t1.3}$, we use Lemma $4.2$ from \cite{fan20182} to reduce to an almost periodic solution.
\begin{lemma}\label{l2.3}
Let $u$ be a solution to $(\ref{1.1})$ satisfying the assumptions of Theorem $\ref{t1.3}$. Then there exists a sequence $t_{n} \nearrow T^{+}(u)$ such that $u(t_{n})$ admits the profile decomposition in $(\ref{2.1})$, and there is a unique profile $\phi_{1}$, such that $\| \phi_{1} \|_{L^{2}} \geq \| Q \|_{L^{2}}$ and the solution $v$ to $(\ref{1.1})$ with initial data $\phi_{1}$ is an almost periodic solution to $(\ref{1.1})$ that does not scatter forward or backward in time.
\end{lemma}
In this case as well, arguing as in the case when $\| u \|_{L^{2}} = \| Q \|_{L^{2}}$, it suffices to show that, after passing to a subsequence, $g(s_{m}) v(s_{m}) \rightharpoonup Q$. Indeed, by asymptotic orthogonality of the profile decomposition $(\ref{2.1})$, for $n(m)$ sufficiently large,
\begin{equation}\label{2.10}
(g_{n}^{1})^{-1} g(s_{m}) u(t_{n} + \lambda_{n}^{2} s_{m}, x) = g(s_{m}) v(s_{m}) + R_{n(m), m} + g(s_{m}) \sum_{j = 2}^{J} g_{n(m)}^{j} \Phi^{j} + g(s_{m}) w_{n}^{J},
\end{equation}
where $\| R_{n(m), m} \|_{L^{2}} \rightarrow 0$ as $m \rightarrow \infty$, the $g_{n(m)}^{j} \Phi^{j}$ are the solutions to $(\ref{1.1})$ with data $g_{n(m)}^{j} \phi^{j}$, or the solutions that scatter forward or backward in time to $\phi^{j}$, and, abusing notation slightly, $w_{n}^{J}$ also denotes the solution to $(\ref{1.1})$ with initial data $w_{n}^{J}$. Furthermore, asymptotic orthogonality implies that
\begin{equation}\label{2.11}
g(s_{m}) \sum_{j = 2}^{J} g_{n(m)}^{j} \Phi^{j} + g(s_{m}) w_{n}^{J} \rightharpoonup 0, \qquad \text{in} \qquad L^{2},
\end{equation}
which proves the reduction.
\begin{remark}
Since $(\ref{2.1})$ is not a profile decomposition for a minimal mass blowup solution, it is possible that $t_{n}^{j} \rightarrow \pm \infty$ for $j \geq 2$.
\end{remark}
\section{Proof of Theorem $\ref{t2.2}$ when $\lambda(t) = 1$ and $d = 2$}
It will be convenient to begin by discussing the $\lambda(t) = 1$ case in dimension $d = 2$, before generalizing the argument to higher dimensions and variable $\lambda(t)$. When $\lambda(t) = 1$, the solution $u$ is global in both time directions, $I = \mathbb{R}$. Following \cite{dodson2015global} and \cite{dodson20202}, we will use the interaction Morawetz estimate
\begin{equation}\label{3.1}
M(t) = \int \int |Iu(t, y)|^{2} Im[\bar{Iu} \nabla Iu](t,x) \cdot (x - y) \psi(x - y) dx dy,
\end{equation}
where $I$ is the Fourier truncation operator $P_{\leq T}$, $T = 2^{k}$ for some $k \in \mathbb{Z}_{\geq 0}$. As in \cite{dodson2015global}, $\psi(|x - y|)$ is a radial function,
\begin{equation}\label{3.2}
\psi(|x - y|) = \frac{1}{|x - y|} \int_{0}^{|x - y|} \phi(s) ds,
\end{equation}
where $\phi(|x|)$ is a radial function given by
\begin{equation}\label{3.3}
\phi(|x - y|) = \frac{1}{R^{2}} \int \chi^{2}(\frac{x - y - s}{R}) \chi^{2}(\frac{s}{R}) ds = \frac{1}{R^{2}} \int \chi^{2}(\frac{x - s}{R}) \chi^{2}(\frac{s - y}{R}) ds = \frac{1}{R^{2}} \int \chi^{2}(\frac{x - s}{R}) \chi^{2}(\frac{y - s}{R}) ds,
\end{equation}
where $\chi$ is a radial, smooth, compactly supported function, $\chi(x) = 1$ for $|x| \leq 1$, and $\chi(x)$ is supported on $|x| \leq 2$. In addition, $\chi(|x|)$ is decreasing as a function of the radius. $R$ is a large constant, fixed for each $T$, that will be allowed to go to infinity as $T \rightarrow \infty$.
\begin{remark}
$\phi$ decreasing as a function of the radius implies that $\psi$ is decreasing as a function of the radius.
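Indeed, by $(\ref{3.2})$, $\psi(r)$ is the average of $\phi$ over $[0, r]$, so if $\phi$ is decreasing then $\phi(r) \leq \psi(r)$, and differentiating $(\ref{3.2})$ gives $\psi'(r) = \frac{\phi(r) - \psi(r)}{r} \leq 0$.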
\end{remark}
By direct computation,
\begin{equation}\label{3.4}
\aligned
\frac{d}{dt} M(t) = 2 \int \int |Iu(t,y)|^{2} Re[\partial_{j} \bar{Iu} \partial_{k} Iu](t,x) [\delta_{jk} \psi(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy \\ -2 \int \int Im[\bar{Iu} \partial_{k} Iu](t,y) Im[\bar{Iu} \partial_{j} Iu](t,x) [\delta_{jk} \psi(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy \\
+ \frac{1}{2} \int \int |Iu(t,y)|^{2} |Iu(t,x)|^{2} [\Delta \phi(x - y) + \Delta \psi(x - y)] dx dy \\
- \int \int |Iu(t,y)|^{2} |Iu(t,x)|^{4} [\psi(x - y) + \frac{1}{2} \psi'(x - y) |x - y|] dx dy + \mathcal E,
\endaligned
\end{equation}
where $\mathcal E$ are the error terms arising from $\mathcal N$,
\begin{equation}\label{3.4.1}
i Iu_{t} + \Delta I u + F(Iu) = F(Iu) - I F(u) = \mathcal N.
\end{equation}
It is known from \cite{dodson2016global2} that
\begin{equation}\label{3.4.2}
\int_{0}^{T} \mathcal E \, dt \lesssim R o(T),
\end{equation}
and
\begin{equation}\label{3.4.3}
\sup_{t \in [0, T]} |M(t)| \lesssim R o(T).
\end{equation}
Therefore, choosing $R \nearrow \infty$ sufficiently slowly,
\begin{equation}\label{3.4.4}
\lim_{T \rightarrow \infty} \frac{R o(T)}{T} = 0.
\end{equation}
By direct computation,
\begin{equation}\label{3.5}
\phi(x) = \frac{1}{R^{2}} \int \chi^{2}(\frac{x - s}{R}) \chi^{2}(\frac{s}{R}) ds \sim 1,
\end{equation}
for $|x| \leq R$. In addition, $\phi(x)$ is supported on the set $|x| \leq 4R$, and $\phi(x)$ is a radially symmetric function that is decreasing in $|x|$. Therefore, $(\ref{3.2})$ implies that
\begin{equation}\label{3.6}
|\psi(x)| \lesssim \frac{R}{|x|}, \qquad \text{for all} \qquad x \in \mathbb{R}^{2}.
\end{equation}
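Indeed, since $0 \leq \phi \lesssim 1$ and $\phi$ is supported on the set $|x| \leq 4R$, the radial integral satisfies $\int_{0}^{|x|} \phi(s) ds \lesssim \min \{ |x|, R \}$, which gives $(\ref{3.6})$, and also $|\psi(x)| \lesssim 1$.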
Also, by direct computation,
\begin{equation}\label{3.7}
\Delta \phi(x) = \frac{1}{R^{2}} \int \Delta \chi^{2}(\frac{x - s}{R}) \chi^{2}(\frac{s}{R}) ds \lesssim \frac{1}{R^{2}}.
\end{equation}
Next, by the same calculations that give $(\ref{3.6})$,
\begin{equation}\label{3.7.1}
\Delta \psi(x) \lesssim \frac{R}{|x|^{3}},
\end{equation}
so $|\Delta \psi(x)| \lesssim \frac{1}{R^{2}}$ for $|x| \gtrsim R$. By $(\ref{3.2})$ and the fundamental theorem of calculus, using that $\phi'(0) = 0$,
\begin{equation}\label{3.7.2}
\psi(r) = \phi(0) + \frac{1}{r} \int_{0}^{r} \int_{0}^{s} (s - t) \phi''(t) dt ds,
\end{equation}
so by $(\ref{3.7})$, $|\Delta \psi(x)| \lesssim \frac{1}{R^{2}}$ for $|x| \lesssim R$.
Therefore,
\begin{equation}\label{3.8}
\frac{1}{2} \int \int |Iu(t,y)|^{2} |Iu(t,x)|^{2} [\Delta \phi(x - y) + \Delta \psi(x - y)] dx dy \lesssim \frac{1}{R^{2}} \| u \|_{L^{2}}^{4}.
\end{equation}
Next, decompose
\begin{equation}\label{3.8.1}
\delta_{jk} \psi(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y) = \delta_{jk} \phi(x - y) + \delta_{jk} [\psi(x - y) - \phi(x - y)] + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y).
\end{equation}
By $(\ref{3.2})$,
\begin{equation}\label{3.8.2}
(\ref{3.8.1}) = \delta_{jk} \phi(x - y) - \delta_{jk} |x - y| \psi'(|x - y|) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y).
\end{equation}
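This identity is just the product rule applied to $(\ref{3.2})$: writing $r = |x - y|$, $(\ref{3.2})$ says $r \psi(r) = \int_{0}^{r} \phi(s) ds$, so
\begin{equation*}
\psi(r) + r \psi'(r) = \phi(r), \qquad \text{that is,} \qquad \psi(r) - \phi(r) = - r \psi'(r),
\end{equation*}
and substituting $\psi = \phi + (\psi - \phi)$ into the $\delta_{jk}$ term of $(\ref{3.8.1})$ gives $(\ref{3.8.2})$.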
Now then,
\begin{equation}\label{3.9}
\aligned
-\int \int Im[\bar{Iu} \partial_{k} Iu] Im[\bar{Iu} \partial_{j} Iu] \delta_{jk} \phi(x - y) dx dy + \int \int |Iu(t,y)|^{2} |\nabla Iu(t,x)|^{2} \phi(x - y) dx dy \\
= -\frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) Im[\bar{Iu} \partial_{j} Iu] dy)(\int \chi^{2}(\frac{x - s}{R}) Im[\bar{Iu} \partial_{j} Iu] dx) ds \\ + \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |Iu(t,y)|^{2} dy)(\int \chi^{2}(\frac{x - s}{R}) |\nabla Iu(t,x)|^{2} dx) ds.
\endaligned
\end{equation}
Fix $s \in \mathbb{R}^{2}$. For any $\xi \in \mathbb{R}^{2}$ and $j \in \{ 1, 2 \}$,
\begin{equation}\label{3.10}
\int \chi^{2}(\frac{y - s}{R}) Im[\overline{e^{iy \xi} Iu} \partial_{j}(e^{iy \xi} Iu)] dy = \int \chi^{2}(\frac{y - s}{R}) Im[\bar{Iu} \partial_{j} Iu] dy + \xi_{j} \int \chi^{2}(\frac{y - s}{R}) |Iu(t,y)|^{2} dy,
\end{equation}
and
\begin{equation}\label{3.11}
\aligned
\int \chi^{2}(\frac{x - s}{R}) |\partial_{j} (e^{ix \xi} Iu)|^{2} dx = \xi_{j}^{2} \int \chi^{2}(\frac{x - s}{R}) |Iu|^{2} dx \\ + 2 \xi_{j} \int \chi^{2}(\frac{x - s}{R}) Im[\bar{Iu} \partial_{j} Iu] dx + \int \chi^{2}(\frac{x - s}{R}) |\partial_{j} Iu|^{2} dx.
\endaligned
\end{equation}
Therefore, $(\ref{3.9})$ is invariant under the Galilean transformation, so it is convenient to choose $\xi(s)$ such that $(\ref{3.10}) = 0$. For notational convenience, let
\begin{equation}\label{3.12}
v_{s} = e^{ix \xi(s)} Iu.
\end{equation}
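Explicitly, solving $(\ref{3.10}) = 0$ for $\xi_{j}$ gives, for each fixed $t$ and $s$,
\begin{equation*}
\xi_{j}(s) = - \frac{\int \chi^{2}(\frac{y - s}{R}) Im[\bar{Iu} \partial_{j} Iu](t,y) dy}{\int \chi^{2}(\frac{y - s}{R}) |Iu(t,y)|^{2} dy}, \qquad j = 1, 2,
\end{equation*}
provided the localized mass in the denominator is nonzero; if it vanishes, then $(\ref{3.10})$ is automatically zero and any choice of $\xi(s)$ will do. Note that $\xi(s)$ also depends on $t$, which is suppressed in the notation $v_{s}$.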
Then by the fundamental theorem of calculus and $(\ref{3.4.2})$--$(\ref{3.12})$, if $R \nearrow \infty$ as $T \nearrow \infty$,
\begin{equation}\label{3.13}
\aligned
2 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
+ 2 \int_{0}^{T} \int \int |Iu(t,y)|^{2} Re[\partial_{j} \bar{Iu} \partial_{k} Iu](t,x) [\delta_{jk} |x - y| \psi'(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy dt \\ -2 \int_{0}^{T} \int \int Im[\bar{Iu} \partial_{k} Iu](t,y) Im[\bar{Iu} \partial_{j} Iu](t,x) [\delta_{jk} |x - y| \psi'(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy dt \\
- \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{4} dx) ds dt \\
- \frac{1}{2} \int_{0}^{T} \int |Iu(t,y)|^{2} [\psi(x - y) - \phi(x - y)] |Iu(t,x)|^{4} dx dy dt \lesssim R o(T).
\endaligned
\end{equation}
Following the computations in \cite{dodson2015global} for the angular derivatives in dimensions $d \geq 2$,
\begin{equation}\label{3.13.0}
\aligned
2 \int \int |Iu(t,y)|^{2} Re[\partial_{j} \bar{Iu} \partial_{k} Iu](t,x) [\delta_{jk} |x - y| \psi'(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy \\ -2 \int \int Im[\bar{Iu} \partial_{k} Iu](t,y) Im[\bar{Iu} \partial_{j} Iu](t,x) [\delta_{jk} |x - y| \psi'(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy \geq 0.
\endaligned
\end{equation}
Therefore, by $(\ref{3.13})$,
\begin{equation}\label{3.13.0.1}
\aligned
2 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{4} dx) ds dt \\
- \frac{1}{2} \int_{0}^{T} \int |Iu(t,y)|^{2} [\psi(x - y) - \phi(x - y)] |Iu(t,x)|^{4} dx dy dt \lesssim R o(T).
\endaligned
\end{equation}
By the Arzela--Ascoli theorem and $(\ref{2.8})$, for any $\eta > 0$, there exists $C(\eta) < \infty$ such that
\begin{equation}\label{3.13.1}
\int_{|x - x(t)| \geq \frac{C(\eta)}{\lambda(t)}} |u(t,x)|^{2} dx < \eta^{2}.
\end{equation}
By H{\"o}lder's inequality, Strichartz estimates, and $\lambda(t) = 1$,
\begin{equation}\label{3.13.2}
\aligned
\int_{a}^{a + 1} \int_{|y - x(t)| \geq C(\eta)} |Iu(t,y)|^{2} \int |Iu(t,x)|^{4} dx dy dt + \int_{a}^{a + 1} \int |Iu(t,y)|^{2} \int_{|x - x(t)| \geq C(\eta)} |Iu(t,x)|^{4} dx dy dt \\ \lesssim \eta^{2} \| u \|_{L_{t,x}^{4}([a, a + 1] \times \mathbb{R}^{2})}^{4} + \eta \| u \|_{L_{t}^{3} L_{x}^{6}([a, a + 1] \times \mathbb{R}^{2})}^{3} \lesssim \eta.
\endaligned
\end{equation}
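To spell this out: by $(\ref{3.13.1})$, applied to $u$ (the frequency truncation $I$ only introduces harmless tails), the $y$-integral over $\{ |y - x(t)| \geq C(\eta) \}$ is $\lesssim \eta^{2}$, which accounts for the first term, while for the second term H{\"o}lder's inequality gives $\int_{|x - x(t)| \geq C(\eta)} |Iu|^{4} dx \leq \| Iu \|_{L^{2}(|x - x(t)| \geq C(\eta))} \| Iu \|_{L^{6}}^{3} \lesssim \eta \| Iu \|_{L^{6}}^{3}$. Integrating in time over $[a, a + 1]$ and using that, by almost periodicity with $\lambda(t) = 1$, the Strichartz norms of $u$ on unit time intervals are bounded uniformly in $a$, yields $(\ref{3.13.2})$.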
Finally, by $(\ref{3.2})$ and the fundamental theorem of calculus,
\begin{equation}\label{3.13.3}
\int_{a}^{a + 1} \int_{|y - x(t)| \leq C(\eta)} \int_{|x - x(t)| \leq C(\eta)} |Iu(t,y)|^{2} [\psi(x - y) - \phi(x - y)] |Iu(t,x)|^{4} dx dy dt \lesssim \frac{C(\eta)}{R} \| u \|_{L_{t,x}^{4}([a, a + 1] \times \mathbb{R}^{2})}^{4}.
\end{equation}
Therefore,
\begin{equation}\label{3.13.4}
\frac{1}{2} \int_{0}^{T} \int |Iu(t,y)|^{2} [\psi(x - y) - \phi(x - y)] |Iu(t,x)|^{4} dx dy dt \lesssim \eta T + \frac{C(\eta)}{R} T,
\end{equation}
so
\begin{equation}\label{3.13.5}
\aligned
2 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{4} dx) ds dt \lesssim R o(T) + \eta T + \frac{C(\eta)}{R} T.
\endaligned
\end{equation}
In \cite{dodson20202}, following \cite{merle2001existence}, we used the fundamental theorem of calculus to obtain a bound on $|u|^{6}$ far away from the interval $|x - x(t)| \leq C(\eta)$. Here, we will use a computation from \cite{merle1993determination} in two dimensions. Let $f \in H^{1}(\mathbb{R}^{2})$ be any function and fix some $s \in \mathbb{R}^{2}$. By the fundamental theorem of calculus,
\begin{equation}\label{3.14}
\aligned
\int \chi(\frac{x - s}{R}) |f(x_{1}, x_{2})|^{4} dx_{1} dx_{2} \leq \int (\int |f(x_{1}, x_{2})|^{2} \chi(\frac{x - s}{R})) \cdot (\sup_{x_{1} \in \mathbb{R}} \chi(\frac{x - s}{R}) |f(x_{1}, x_{2})|^{2}) dx_{2} \\
\leq \int (\int |f(x_{1}, x_{2})|^{2} \chi(\frac{x - s}{R}) dx_{1})(\int |\partial_{x_{1}}(\chi(\frac{x - s}{R}) |f(x_{1}, x_{2})|^{2}|) dx_{1}) dx_{2} \\
\lesssim \int (\int |f(x_{1}, x_{2})|^{2} \chi(\frac{x - s}{R}) dx_{1}) \cdot (\int \chi(\frac{x - s}{R})^{2} |\partial_{x_{1}} f(x_{1}, x_{2})|^{2} dx_{1})^{1/2} (\int |f(x_{1}, x_{2})|^{2} dx_{1})^{1/2} dx_{2} \\
+ \int (\int |f(x_{1}, x_{2})|^{2} \chi(\frac{x - s}{R}) dx_{1}) \cdot (\frac{1}{R} \int |f(x_{1}, x_{2})|^{2} dx_{1}) dx_{2}.
\endaligned
\end{equation}
Again by the fundamental theorem of calculus and H{\"o}lder's inequality,
\begin{equation}\label{3.15}
\aligned
\int (\int |f(x_{1}, x_{2})|^{2} \chi(\frac{x - s}{R}) dx_{1}) \cdot (\int \chi(\frac{x - s}{R})^{2} |\partial_{x_{1}} f(x_{1}, x_{2})|^{2} dx_{1})^{1/2} (\int |f(x_{1}, x_{2})|^{2} dx_{1})^{1/2} dx_{2} \\
\lesssim \| \chi(\frac{x - s}{R}) \nabla f(x) \|_{L^{2}} \| f \|_{L^{2}} \cdot \sup_{x_{2}} (\int |f(x_{1}, x_{2})|^{2} \chi(\frac{x - s}{R}) dx_{1}) \\
\lesssim \| \chi(\frac{x - s}{R}) \nabla f(x) \|_{L^{2}} \| f \|_{L^{2}} \cdot (\int \int |\partial_{x_{2}} f(x_{1}, x_{2})| |f(x_{1}, x_{2})| \chi(\frac{x - s}{R}) dx_{1} dx_{2}) \\
+ \frac{1}{R} \| \chi(\frac{x - s}{R}) \nabla f(x) \|_{L^{2}} \| f \|_{L^{2}} \cdot (\int \int |f(x_{1}, x_{2})|^{2}) dx_{1} dx_{2}) \\ \lesssim \| \chi(\frac{x - s}{R}) \nabla f \|_{L^{2}}^{2} \| f \|_{L^{2}}^{2} + \frac{1}{R} \| \chi(\frac{x - s}{R}) \nabla f \|_{L^{2}} \| f \|_{L^{2}}^{3}.
\endaligned
\end{equation}
Also by H{\"o}lder's inequality and the fundamental theorem of calculus,
\begin{equation}\label{3.16}
\aligned
\int (\int |f(x_{1}, x_{2})|^{2} \chi(\frac{x - s}{R}) dx_{1}) \cdot (\frac{1}{R} \int |f(x_{1}, x_{2})|^{2} dx_{1}) dx_{2} \lesssim \frac{1}{R} \| f \|_{L^{2}}^{2} \cdot \sup_{x_{2}} (\int |f(x_{1}, x_{2})|^{2} \chi(\frac{x - s}{R}) dx_{1}) \\
\lesssim \frac{1}{R^{2}} \| f \|_{L^{2}}^{4} + \frac{1}{R} \| f \|_{L^{2}}^{3} \| \chi(\frac{x - s}{R}) \nabla f \|_{L^{2}}.
\endaligned
\end{equation}
For a fixed $t$, let $f = (1 - \chi(\frac{x - x(t)}{C(\eta)})) v_{s}(t,x)$. By $(\ref{3.13.1})$, $\| f \|_{L^{2}} \leq \eta$, and by the product rule,
\begin{equation}\label{3.16.1}
\| \chi(\frac{x - s}{R}) \nabla f \|_{L^{2}} \leq \| \chi(\frac{x - s}{R}) \nabla v_{s} \|_{L^{2}} + \frac{1}{C(\eta)} \| \chi'(\frac{x - x(t)}{C(\eta)}) v_{s} \|_{L^{2}} \leq \| \chi(\frac{x - s}{R}) \nabla v_{s} \|_{L^{2}} + \frac{\eta}{C(\eta)}.
\end{equation}
Therefore, since $\frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2} dy) ds \lesssim \| u \|_{L^{2}}^{2}$,
\begin{equation}\label{3.17}
\aligned
2 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{4} dx) ds dt \\ \lesssim R o(T) + \eta T + \frac{C(\eta)}{R} T + \frac{\eta^{4}}{R^{2}} T + \frac{\eta^{2}}{R^{2}} \int_{0}^{T} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}|^{2} dy) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})|^{2} dx) ds dt.
\endaligned
\end{equation}
When $\eta > 0$ is sufficiently small,
\begin{equation}\label{3.26}
\frac{\eta^{2}}{R^{2}} \int_{0}^{T} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}|^{2} dy) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})|^{2} dx) ds dt
\end{equation}
can be absorbed into the left hand side of $(\ref{3.17})$, proving that
\begin{equation}\label{3.27}
\aligned
2 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{4} dx) ds dt \\ \lesssim R o(T) + \eta T + \frac{C(\eta)}{R} T+ \frac{\eta^{4}}{R^{2}} T \\ + \frac{ \eta^{2}}{R^{2}} \int_{0}^{T} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2} dy) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{4} dx) ds dt.
\endaligned
\end{equation}
Now choose $|x_{\ast} - x(t)| \leq 4 C(\eta)$ such that
\begin{equation}\label{3.17.1}
\chi(\frac{x_{\ast} - s}{R}) = \inf_{|x - x(t)| \leq 4 C(\eta)} \chi(\frac{x - s}{R}).
\end{equation}
As in \cite{dodson20202}, the fundamental theorem of calculus implies that for $|x - x(t)| \leq 2 C(\eta)$,
\begin{equation}\label{3.18}
\chi^{2}(\frac{x - s}{R}) = \chi^{2}(\frac{x_{\ast} - s}{R}) + O(\frac{C(\eta)}{R}).
\end{equation}
Therefore,
\begin{equation}\label{3.20}
\aligned
\frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{4} dx) ds \\ \leq \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x_{\ast} - s}{R}) |v_{s}(t,x)|^{4} dx) ds \\
+ O(\frac{C(\eta)}{R}) \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} |v_{s}(t,x)|^{4} dx) ds \\
= \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x_{\ast} - s}{R}) |v_{s}(t,x)|^{4} dx) ds + O(\frac{C(\eta)}{R} \| v \|_{L^{2}}^{2} \| v \|_{L^{4}}^{4}).
\endaligned
\end{equation}
Plugging $(\ref{3.20})$ in to $(\ref{3.27})$, and using Strichartz estimates as in $(\ref{3.13.2})$ and $(\ref{3.13.3})$,
\begin{equation}\label{3.21}
\aligned
2 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x_{\ast} - s}{R}) |v_{s}(t,x)|^{4} dx) ds dt \\ \lesssim R o(T) + \eta T + \frac{C(\eta)}{R} T + \frac{\eta^{4}}{R^{2}} T + \frac{\eta^{2}}{R^{2}} \int_{0}^{T} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}|^{2} dy) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x^{\ast} - s}{R}) |v_{s}|^{4} dx) ds dt .
\endaligned
\end{equation}
Since $\chi(\frac{x_{\ast} - s}{R}) \leq 1$, $(\ref{3.13.2})$ and $(\ref{3.13.3})$ also imply
\begin{equation}
\frac{\eta^{2}}{R^{2}} \int_{0}^{T} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}|^{2} dy) (\int_{|x - x(t)| \leq C(\eta)} \chi^{2}(\frac{x^{\ast} - s}{R}) |v_{s}|^{4} dx) ds dt \lesssim \eta^{2} T.
\end{equation}
By definition of $x_{\ast}$ and $\chi$, for $R \gg C(\eta)$,
\begin{equation}\label{3.22}
\aligned
2 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x_{\ast} - s}{R}) |v_{s}(t,x)|^{4} dx) ds dt \\
\geq 2 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) ( \chi^{2}(\frac{x_{\ast} - s}{R}) \int \chi^{2}(\frac{x - x(t)}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\chi^{2}(\frac{x_{\ast} - s}{R})\int \chi^{4}(\frac{x - x(t)}{R}) |v_{s}(t,x)|^{4} dx) ds dt.
\endaligned
\end{equation}
Integrating by parts,
\begin{equation}\label{3.23}
\int \chi^{2}(\frac{x - x(t)}{R}) |\nabla (v_{s})|^{2} dx = \int |\nabla(\chi(\frac{x - x(t)}{R}) v_{s})|^{2} dx + \frac{1}{R^{2}} \int \chi''(\frac{x - x(t)}{R}) \chi(\frac{x - x(t)}{R}) |v_{s}|^{2} dx.
\end{equation}
Therefore,
\begin{equation}\label{3.24}
\aligned
2 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) ( \chi^{2}(\frac{x_{\ast} - s}{R}) \int \chi^{2}(\frac{x - x(t)}{R}) |\nabla(v_{s})(t,x)|^{2} dx) ds dt \\
- \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\chi^{2}(\frac{x_{\ast} - s}{R})\int \chi^{4}(\frac{x - x(t)}{R}) |v_{s}(t,x)|^{4} dx) ds dt \\
= 4 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t, y)|^{2} dy) \chi^{2}(\frac{x_{\ast} - s}{R}) E(\chi(\frac{x - x(t)}{R}) v_{s}) ds dt + O(\frac{T}{R^{2}}).
\endaligned
\end{equation}
Here $E$ is the energy given by $(\ref{1.7})$. Therefore, we have finally proved
\begin{equation}\label{3.25}
\aligned
4 \int_{0}^{T} \frac{1}{R^{2}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t, y)|^{2} dy) \chi^{2}(\frac{x_{\ast} - s}{R}) E(\chi(\frac{x - x(t)}{R}) v_{s}) ds dt \lesssim R o(T) + \eta T + \frac{C(\eta)}{R} T + \frac{\eta^{4}}{R^{2}} T.
\endaligned
\end{equation}
Choosing $R \nearrow \infty$ perhaps very slowly as $T \nearrow \infty$, and then $\eta \searrow 0$ sufficiently slowly, the right hand side of $(\ref{3.25})$ is bounded by $o(T)$.
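For instance (one admissible choice among many), one may first take $R = R(T) \nearrow \infty$ so slowly that $R(T) o(T) = o(T)$, and then $\eta = \eta(T) \searrow 0$ so slowly that $\frac{C(\eta(T))}{R(T)} \rightarrow 0$; with such a choice every term on the right hand side of $(\ref{3.25})$ is $o(T)$.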
On the other hand, when $|s - x(t)| \leq \frac{R}{2}$, $\chi(\frac{x_{\ast} - s}{R}) = 1$ and
\begin{equation}\label{3.28}
(\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t, y)|^{2} dy) \geq \frac{1}{2} \| v \|_{L^{2}}^{2}.
\end{equation}
Therefore, since the Gagliardo--Nirenberg inequality guarantees that $E(u) \geq 0$ when $\| u \|_{L^{2}} \leq \| Q \|_{L^{2}}$, the left hand side of $(\ref{3.25})$ is bounded below by
\begin{equation}\label{3.29}
\| u_{0} \|_{L^{2}}^{2} \int_{0}^{T} \frac{1}{R^{2}} \int_{|s - x(t)| \leq \frac{R}{2}} E(\chi(\frac{x - x(t)}{R}) v_{s}) ds dt \lesssim o(T)
\end{equation}
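Here we are using the sharp Gagliardo--Nirenberg inequality of Weinstein. With the energy normalized as in $(\ref{1.7})$, which we take to be $E(f) = \frac{1}{2} \int |\nabla f|^{2} dx - \frac{d}{2(d + 2)} \int |f|^{2 + \frac{4}{d}} dx$,
\begin{equation*}
\int |f|^{2 + \frac{4}{d}} dx \leq \frac{d + 2}{d} (\frac{\| f \|_{L^{2}}}{\| Q \|_{L^{2}}})^{\frac{4}{d}} \int |\nabla f|^{2} dx, \qquad \text{so} \qquad E(f) \geq \frac{1}{2} (1 - (\frac{\| f \|_{L^{2}}}{\| Q \|_{L^{2}}})^{\frac{4}{d}}) \| \nabla f \|_{L^{2}}^{2} \geq 0
\end{equation*}
whenever $\| f \|_{L^{2}} \leq \| Q \|_{L^{2}}$. This applies to $f = \chi(\frac{x - x(t)}{R}) v_{s}$, since $|\chi| \leq 1$ implies $\| \chi(\frac{x - x(t)}{R}) v_{s} \|_{L^{2}} \leq \| Iu \|_{L^{2}} \leq \| u \|_{L^{2}} = \| Q \|_{L^{2}}$ in the case considered here.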
Thus, taking a sequence $T_{n} \nearrow \infty$, $R_{n} \nearrow \infty$, $\eta_{n} \searrow 0$, there exists a sequence of times $t_{n} \in [\frac{T_{n}}{2}, T_{n}]$, $|s_{n} - x(t_{n})| \leq \frac{R_{n}}{2}$ such that
\begin{equation}\label{3.30}
E(\chi(\frac{x - s_{n}}{R_{n}}) e^{ix \cdot \xi(s_{n})} e^{i \gamma(s_{n})} P_{\leq T_{n}} u(t_{n}, x)) \rightarrow 0,
\end{equation}
\begin{equation}\label{3.31}
(1 - \chi(\frac{x - s_{n}}{R_{n}})) e^{ix \cdot \xi(s_{n})} e^{i \gamma(s_{n})} P_{\leq T_{n}} u(t_{n}, x) \rightarrow 0, \qquad \text{in} \qquad L^{2},
\end{equation}
\begin{equation}\label{3.31.1}
(1 - P_{\leq T_{n}}) u(t_{n}, x) \rightarrow 0, \qquad \text{in} \qquad L^{2},
\end{equation}
and
\begin{equation}\label{3.32}
\| \chi(\frac{x - s_{n}}{R_{n}}) e^{ix \cdot \xi(s_{n})} e^{i \gamma(s_{n})} P_{\leq T_{n}} u(t_{n}, x) \|_{L^{4}} \sim 1.
\end{equation}
Now by the almost periodicity of $u$, $(\ref{2.8})$, after passing to a subsequence, there exists $u_{0} \in H^{1}$ such that
\begin{equation}\label{3.33}
\chi(\frac{x + x(t_{n}) - s_{n}}{R_{n}}) e^{ix \cdot \xi(s_{n})} e^{i x(t_{n}) \cdot \xi(s_{n})} e^{i \gamma(s_{n})} P_{\leq T_{n}} u(t_{n}, x + x(t_{n})) \rightharpoonup u_{0},
\end{equation}
weakly in $H^{1}$, and
\begin{equation}\label{3.34}
\chi(\frac{x + x(t_{n}) - s_{n}}{R_{n}}) e^{ix \cdot \xi(s_{n})} e^{i x(t_{n}) \cdot \xi(s_{n})} P_{\leq T_{n}} u(t_{n}, x + x(t_{n})) \rightarrow u_{0},
\end{equation}
strongly in $L^{2} \cap L^{4}$. Also, by $(\ref{3.30})$, $(\ref{3.31})$, and $(\ref{3.31.1})$, $\| u_{0} \|_{L^{2}} = \| Q \|_{L^{2}}$, $E(u_{0}) \leq 0$, and by the Gagliardo--Nirenberg inequality, $E(u_{0}) = 0$. Therefore,
\begin{equation}\label{3.36}
u_{0} = \lambda Q(\lambda(x - x_{0})),
\end{equation}
for some $\lambda \sim 1$ and $|x_{0}| \lesssim 1$. This proves Theorem $\ref{t2.2}$ when $\lambda(t) = 1$. $\Box$
\section{Proof of Theorem $\ref{t2.2}$ when $\lambda(t) = 1$ and $d \geq 3$}
The proof of Theorem $\ref{t2.2}$ when $\lambda(t) = 1$ in higher dimensions is quite similar to the proof in two dimensions. In this case as well, use the interaction Morawetz estimate
\begin{equation}\label{6.1}
M(t) = \int \int |Iu(t, y)|^{2} Im[\bar{Iu} \nabla Iu](t,x) \cdot (x - y) \psi(x - y) dx dy,
\end{equation}
where $I$ is the Fourier truncation operator $P_{\leq T}$, $T = 2^{k}$, $k \in \mathbb{Z}_{\geq 0}$. Again let
\begin{equation}\label{6.2}
\psi(|x - y|) = \frac{1}{|x - y|} \int_{0}^{|x - y|} \phi(s) ds,
\end{equation}
where $\phi(|x|)$ is a radial function given by
\begin{equation}\label{6.3}
\phi(|x - y|) = \frac{1}{R^{d}} \int \chi^{2}(\frac{x - y - s}{R}) \chi^{2}(\frac{s}{R}) ds = \frac{1}{R^{d}} \int \chi^{2}(\frac{x - s}{R}) \chi^{2}(\frac{s - y}{R}) ds = \frac{1}{R^{d}} \int \chi^{2}(\frac{x - s}{R}) \chi^{2}(\frac{y - s}{R}) ds,
\end{equation}
where once again $\chi$ is a radial, smooth, compactly supported function, $\chi(x) = 1$ for $|x| \leq 1$, $\chi(x)$ is supported on $|x| \leq 2$, and $\chi(|x|)$ is decreasing as a function of the radius.
By direct computation,
\begin{equation}\label{6.4}
\aligned
\frac{d}{dt} M(t) = 2 \int \int |Iu(t,y)|^{2} Re[\partial_{j} \bar{Iu} \partial_{k} Iu](t,x) [\delta_{jk} \psi(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy \\ -2 \int \int Im[\bar{Iu} \partial_{k} Iu](t,y) Im[\bar{Iu} \partial_{j} Iu](t,x) [\delta_{jk} \psi(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy \\
+ \frac{1}{2} \int \int |Iu(t,y)|^{2} |Iu(t,x)|^{2} [\Delta \phi(x - y) + (d - 1) \Delta \psi(x - y)] dx dy \\
- \frac{2}{d + 2} \int \int |Iu(t,y)|^{2} |Iu(t,x)|^{2 + \frac{4}{d}} [d \psi(x - y) + \psi'(x - y) |x - y|] dx dy + \mathcal E,
\endaligned
\end{equation}
where $\mathcal E$ are the error terms arising from $\mathcal N$,
\begin{equation}\label{6.5}
i Iu_{t} + \Delta I u + F(Iu) = F(Iu) - I F(u) = \mathcal N.
\end{equation}
As in the previous section, it is known from \cite{dodson2012global} that
\begin{equation}\label{6.6}
\int_{0}^{T} \mathcal E \, dt \lesssim R o(T),
\end{equation}
and
\begin{equation}\label{6.7}
\sup_{t \in [0, T]} |M(t)| \lesssim R o(T).
\end{equation}
Therefore, choosing $R \nearrow \infty$ sufficiently slowly,
\begin{equation}\label{6.8}
\lim_{T \rightarrow \infty} \frac{R o(T)}{T} = 0.
\end{equation}
The computations in $(\ref{3.5})$--$(\ref{3.8})$ can easily be generalized to higher dimensions. Indeed,
\begin{equation}\label{6.9}
\phi(x) = \frac{1}{R^{d}} \int \chi^{2}(\frac{x - s}{R}) \chi^{2}(\frac{s}{R}) ds \sim 1,
\end{equation}
for $|x| \leq R$. In addition, $\phi(x)$ is supported on the set $|x| \leq 4R$, and $\phi(x)$ is a radially symmetric function that is decreasing in $|x|$. Therefore, $(\ref{6.2})$ implies that
\begin{equation}\label{6.10}
|\psi(x)| \lesssim \frac{R}{|x|}, \qquad \text{for all} \qquad x \in \mathbb{R}^{d}.
\end{equation}
Also, by direct computation,
\begin{equation}\label{6.11}
\Delta \phi(x) = \frac{1}{R^{d}} \int \Delta \chi^{2}(\frac{x - s}{R}) \chi^{2}(\frac{s}{R}) ds \lesssim \frac{1}{R^{2}}.
\end{equation}
Next, by the same calculations that give $(\ref{6.10})$,
\begin{equation}\label{6.12}
\Delta \psi(x) \lesssim \frac{R}{|x|^{3}},
\end{equation}
so $|\Delta \psi(x)| \lesssim \frac{1}{R^{2}}$ for $|x| \gtrsim R$. Also,
\begin{equation}\label{6.13}
\psi(r) = \phi(0) + \frac{1}{r} \int_{0}^{r} \int_{0}^{s} (s - t) \phi''(t) dt ds,
\end{equation}
so by $(\ref{6.11})$, $|\Delta \psi(x)| \lesssim \frac{1}{R^{2}}$ for $|x| \lesssim R$.
Therefore,
\begin{equation}\label{6.14}
\frac{1}{2} \int \int |Iu(t,y)|^{2} |Iu(t,x)|^{2} [\Delta \phi(x - y) + \Delta \psi(x - y)] dx dy \lesssim \frac{1}{R^{2}} \| u \|_{L^{2}}^{4}.
\end{equation}
Following the case when $d = 2$,
\begin{equation}\label{6.15}
\delta_{jk} \psi(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y) = \delta_{jk} \phi(x - y) - \delta_{jk} |x - y| \psi'(|x - y|) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y),
\end{equation}
and
\begin{equation}\label{6.16}
\aligned
-\int \int Im[\bar{Iu} \partial_{k} Iu] Im[\bar{Iu} \partial_{j} Iu] \delta_{jk} \phi(x - y) dx dy + \int \int |Iu(t,y)|^{2} |\nabla Iu(t,x)|^{2} \phi(x - y) dx dy \\
= -\frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) Im[\bar{Iu} \partial_{j} Iu] dy)(\int \chi^{2}(\frac{x - s}{R}) Im[\bar{Iu} \partial_{j} Iu] dx) ds \\ + \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |Iu(t,y)|^{2} dy)(\int \chi^{2}(\frac{x - s}{R}) |\nabla Iu(t,x)|^{2} dx) ds,
\endaligned
\end{equation}
so for a fixed $s \in \mathbb{R}^{d}$, for any $\xi \in \mathbb{R}^{d}$ and $1 \leq j \leq d$,
\begin{equation}\label{6.17}
\int \chi^{2}(\frac{y - s}{R}) Im[\overline{e^{iy \xi} Iu} \partial_{j}(e^{iy \xi} Iu)] dy = \int \chi^{2}(\frac{y - s}{R}) Im[\bar{Iu} \partial_{j} Iu] dy + \xi_{j} \int \chi^{2}(\frac{y - s}{R}) |Iu(t,y)|^{2} dy,
\end{equation}
and
\begin{equation}\label{6.18}
\aligned
\int \chi^{2}(\frac{x - s}{R}) |\partial_{j} (e^{ix \xi} Iu)|^{2} dx = \xi_{j}^{2} \int \chi^{2}(\frac{x - s}{R}) |Iu|^{2} dx \\ + 2 \xi_{j} \int \chi^{2}(\frac{x - s}{R}) Im[\bar{Iu} \partial_{j} Iu] dx + \int \chi^{2}(\frac{x - s}{R}) |\partial_{j} Iu|^{2} dx.
\endaligned
\end{equation}
So it is again convenient to choose $\xi(s) \in \mathbb{R}^{d}$ such that $(\ref{6.17}) = 0$ for each $1 \leq j \leq d$. Set
\begin{equation}\label{6.19}
v_{s} = e^{ix \xi(s)} Iu.
\end{equation}
By the fundamental theorem of calculus,
\begin{equation}\label{6.20}
\aligned
2 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
+ 2 \int_{0}^{T} \int \int |Iu(t,y)|^{2} Re[\partial_{j} \bar{Iu} \partial_{k} Iu](t,x) [\delta_{jk} |x - y| \psi'(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy dt \\ -2 \int_{0}^{T} \int \int Im[\bar{Iu} \partial_{k} Iu](t,y) Im[\bar{Iu} \partial_{j} Iu](t,x) [\delta_{jk} |x - y| \psi'(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy dt \\
- \frac{d}{d + 2} \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{\frac{2(d + 2)}{d}} dx) ds dt \\
- \frac{2(d - 1)}{d + 2} \int_{0}^{T} \int |Iu(t,y)|^{2} [\psi(x - y) - \phi(x - y)] |Iu(t,x)|^{\frac{2(d + 2)}{d}} dx dy dt \lesssim R o(T) + \frac{1}{R^{2}} T.
\endaligned
\end{equation}
Again following \cite{dodson2015global} in higher dimensions,
\begin{equation}\label{6.21}
\aligned
2 \int \int |Iu(t,y)|^{2} Re[\partial_{j} \bar{Iu} \partial_{k} Iu](t,x) [\delta_{jk} |x - y| \psi'(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy \\ -2 \int \int Im[\bar{Iu} \partial_{k} Iu](t,y) Im[\bar{Iu} \partial_{j} Iu](t,x) [\delta_{jk} |x - y| \psi'(x - y) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(x - y)] dx dy \geq 0.
\endaligned
\end{equation}
Therefore,
\begin{equation}\label{6.22}
\aligned
2 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \frac{2d}{d + 2} \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds dt \\
- \frac{2(d - 1)}{d + 2} \int_{0}^{T} \int |Iu(t,y)|^{2} [\psi(x - y) - \phi(x - y)] |Iu(t,x)|^{2 + \frac{4}{d}} dx dy dt \lesssim R o(T) + \frac{1}{R^{2}} T.
\endaligned
\end{equation}
Again by the Arzela--Ascoli theorem, H{\"o}lder's inequality, Strichartz estimates, and $\lambda(t) = 1$,
\begin{equation}\label{6.23}
\aligned
\int_{a}^{a + 1} \int_{|y - x(t)| \geq C(\eta)} |Iu(t,y)|^{2} \int |Iu(t,x)|^{2 + \frac{4}{d}} dx dy dt + \int_{a}^{a + 1} \int |Iu(t,y)|^{2} \int_{|x - x(t)| \geq C(\eta)} |Iu(t,x)|^{2 + \frac{4}{d}} dx dy dt \\ \lesssim \eta^{2} \| u \|_{L_{t,x}^{2 + \frac{4}{d}}([a, a + 1] \times \mathbb{R}^{d})}^{2 + \frac{4}{d}} + \eta^{\frac{4}{d}} \| u \|_{L_{t}^{2} L_{x}^{\frac{2d}{d - 2}}([a, a + 1] \times \mathbb{R}^{d})}^{2}.
\endaligned
\end{equation}
Finally, by $(\ref{6.2})$ and the fundamental theorem of calculus,
\begin{equation}\label{6.24}
\int_{a}^{a + 1} \int_{|y - x(t)| \leq C(\eta)} \int_{|x - x(t)| \leq C(\eta)} |Iu(t,y)|^{2} [\psi(x - y) - \phi(x - y)] |Iu(t,x)|^{2 + \frac{4}{d}} dx dy dt \lesssim \frac{C(\eta)}{R} \| u \|_{L_{t,x}^{2 + \frac{4}{d}}([a, a + 1] \times \mathbb{R}^{d})}^{2 + \frac{4}{d}}.
\end{equation}
Therefore, letting $\sigma = \inf \{ 1, \frac{4}{d} \}$,
\begin{equation}\label{6.25}
\int_{0}^{T} \int |Iu(t,y)|^{2} [\psi(x - y) - \phi(x - y)] |Iu(t,x)|^{2 + \frac{4}{d}} dx dy dt \lesssim \eta^{\sigma} T + \frac{C(\eta)}{R} T,
\end{equation}
so
\begin{equation}\label{6.26}
\aligned
2 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \frac{2}{d + 2} \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{\frac{2(d + 2)}{d}} dx) ds dt \lesssim R o(T) + \eta^{\sigma} T + \frac{C(\eta)}{R} T.
\endaligned
\end{equation}
In dimensions $d \geq 3$, we will use the Sobolev embedding theorem, as in \cite{merle1993determination}, to control $|u(t,x)|^{2 + \frac{4}{d}}$ far away from $x(t)$. By the product rule, for any $f \in H^{1}(\mathbb{R}^{d})$,
\begin{equation}\label{6.27}
\aligned
\int \chi(\frac{x - s}{R})^{2} |f(x)|^{2 + \frac{4}{d}} dx \lesssim \| \chi(\frac{x - s}{R}) f \|_{L^{\frac{2d}{d - 2}}(\mathbb{R}^{d})}^{2} \| f \|_{L^{2}(\mathbb{R}^{d})}^{\frac{4}{d}} \lesssim \| \nabla(\chi(\frac{x - s}{R}) f) \|_{L^{2}}^{2} \| f \|_{L^{2}}^{\frac{4}{d}} \\
\lesssim \| f \|_{L^{2}}^{\frac{4}{d}} (\| \chi(\frac{x - s}{R}) \nabla f \|_{L^{2}}^{2} + \frac{1}{R^{2}} \| f \|_{L^{2}}^{2}).
\endaligned
\end{equation}
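For clarity, the first inequality in $(\ref{6.27})$ is H{\"o}lder's inequality with exponents $\frac{d}{d - 2}$ and $\frac{d}{2}$, applied to the splitting $\chi^{2} |f|^{2 + \frac{4}{d}} = (\chi |f|)^{2} \cdot |f|^{\frac{4}{d}}$; the second is the Sobolev embedding $\| g \|_{L^{\frac{2d}{d - 2}}(\mathbb{R}^{d})} \lesssim \| \nabla g \|_{L^{2}(\mathbb{R}^{d})}$ for $d \geq 3$, applied to $g = \chi(\frac{x - s}{R}) f$; and the last line follows from the product rule together with $|\nabla \chi(\frac{x - s}{R})| \lesssim \frac{1}{R}$.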
For a fixed $t$, let $f = (1 - \chi(\frac{x - x(t)}{C(\eta)})) v_{s}(t,x)$. By $(\ref{3.13.1})$, $\| f \|_{L^{2}} \leq \eta$, and by the product rule,
\begin{equation}\label{6.28}
\| \chi(\frac{x - s}{R}) \nabla f \|_{L^{2}} \leq \| \chi(\frac{x - s}{R}) \nabla v_{s} \|_{L^{2}} + \frac{1}{C(\eta)} \| \chi'(\frac{x - x(t)}{C(\eta)}) v_{s} \|_{L^{2}} \leq \| \chi(\frac{x - s}{R}) \nabla v_{s} \|_{L^{2}} + \frac{\eta}{C(\eta)}.
\end{equation}
Therefore, since $\frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2} dy) ds \lesssim \| u \|_{L^{2}}^{2}$,
\begin{equation}\label{6.29}
\aligned
2 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \frac{d}{d + 2} \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds dt \\ \lesssim R o(T) + \eta^{\sigma} T + \frac{C(\eta)}{R} T + \frac{\eta^{2 + \frac{4}{d}}}{R^{2}} T + \frac{\eta^{2}}{R^{2}} \int_{0}^{T} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}|^{2} dy) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})|^{2} dx) ds dt.
\endaligned
\end{equation}
When $\eta > 0$ is sufficiently small,
\begin{equation}\label{6.30}
\frac{\eta^{2}}{R^{2}} \int_{0}^{T} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}|^{2} dy) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})|^{2} dx) ds dt
\end{equation}
can be absorbed into the left hand side of $(\ref{6.29})$, proving that
\begin{equation}\label{6.31}
\aligned
2 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \frac{d}{d + 2} \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds dt \\ \lesssim R o(T) + \eta^{\sigma} T + \frac{C(\eta)}{R} T+ \frac{\eta^{2 + \frac{4}{d}}}{R^{2}} T \\ + \frac{ \eta^{2}}{R^{2}} \int_{0}^{T} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2} dy) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds dt.
\endaligned
\end{equation}
The rest of the argument is identical to the $d = 2$ case. Choose $|x_{\ast} - x(t)| \leq 4 C(\eta)$ such that
\begin{equation}\label{6.32}
\chi(\frac{x_{\ast} - s}{R}) = \inf_{|x - x(t)| \leq 4 C(\eta)} \chi(\frac{x - s}{R}).
\end{equation}
For $|x - x(t)| \leq 2 C(\eta)$,
\begin{equation}\label{6.33}
\chi^{2}(\frac{x - s}{R}) = \chi^{2}(\frac{x_{\ast} - s}{R}) + O(\frac{C(\eta)}{R}).
\end{equation}
Therefore,
\begin{equation}\label{6.34}
\aligned
\frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x - s}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds \\ \leq \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x_{\ast} - s}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds \\
+ O(\frac{C(\eta)}{R}) \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds \\
= \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x_{\ast} - s}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds + O(\frac{C(\eta)}{R} \| v \|_{L^{2}}^{2} \| v \|_{L^{2 + \frac{4}{d}}}^{2 + \frac{4}{d}}).
\endaligned
\end{equation}
Again using Strichartz estimates,
\begin{equation}\label{6.35}
\aligned
2 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \frac{d}{d + 2} \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x_{\ast} - s}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds dt \\ \lesssim R o(T) + \eta^{\sigma} T + \frac{C(\eta)}{R} T + \frac{\eta^{2 + \frac{4}{d}}}{R^{2}} T \\ + \frac{\eta^{2}}{R^{d}} \int_{0}^{T} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}|^{2} dy) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x^{\ast} - s}{R}) |v_{s}|^{2 + \frac{4}{d}} dx) ds dt,
\endaligned
\end{equation}
and
\begin{equation}\label{6.36}
\frac{\eta^{2}}{R^{d}} \int_{0}^{T} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}|^{2} dy) (\int_{|x - x(t)| \leq C(\eta)} \chi^{2}(\frac{x^{\ast} - s}{R}) |v_{s}|^{2 + \frac{4}{d}} dx) ds dt \lesssim \eta^{2} T.
\end{equation}
By definition of $x_{\ast}$ and $\chi$, for $R \gg C(\eta)$,
\begin{equation}\label{6.37}
\aligned
2 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int \chi^{2}(\frac{x - s}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \frac{d}{d + 2} \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\int_{|x - x(t)| \leq 2C(\eta)} \chi^{2}(\frac{x_{\ast} - s}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds dt \\
\geq 2 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) ( \chi^{2}(\frac{x_{\ast} - s}{R}) \int \chi^{2}(\frac{x - x(t)}{R}) |\nabla (v_{s})(t,x)|^{2} dx) ds dt \\
- \frac{d}{d + 2} \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\chi^{2}(\frac{x_{\ast} - s}{R})\int \chi^{2 + \frac{4}{d}}(\frac{x - x(t)}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds dt.
\endaligned
\end{equation}
Integrating by parts,
\begin{equation}\label{6.38}
\int \chi^{2}(\frac{x - x(t)}{R}) |\nabla (v_{s})|^{2} dx = \int |\nabla(\chi(\frac{x - x(t)}{R}) v_{s})|^{2} dx + \frac{1}{R^{2}} \int \chi''(\frac{x - x(t)}{R}) \chi(\frac{x - x(t)}{R}) |v_{s}|^{2} dx.
\end{equation}
Therefore,
\begin{equation}\label{6.39}
\aligned
2 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) ( \chi^{2}(\frac{x_{\ast} - s}{R}) \int \chi^{2}(\frac{x - x(t)}{R}) |\nabla(v_{s})(t,x)|^{2} dx) ds dt \\
- \frac{d}{d + 2} \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t,y)|^{2}) (\chi^{2}(\frac{x_{\ast} - s}{R})\int \chi^{2 + \frac{4}{d}}(\frac{x - x(t)}{R}) |v_{s}(t,x)|^{2 + \frac{4}{d}} dx) ds dt \\
= 4 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t, y)|^{2} dy) \chi^{2}(\frac{x_{\ast} - s}{R}) E(\chi(\frac{x - x(t)}{R}) v_{s}) ds dt + O(\frac{T}{R^{2}}).
\endaligned
\end{equation}
Here $E$ is the energy given by $(\ref{1.7})$. Therefore, we have finally proved
\begin{equation}\label{6.40}
\aligned
4 \int_{0}^{T} \frac{1}{R^{d}} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t, y)|^{2} dy) \chi^{2}(\frac{x_{\ast} - s}{R}) E(\chi(\frac{x - x(t)}{R}) v_{s}) ds dt \\ \lesssim R o(T) + \eta^{\sigma} T + \frac{C(\eta)}{R} T + \frac{\eta^{2 + \frac{4}{d}}}{R^{2}} T.
\endaligned
\end{equation}
Choosing $R \nearrow \infty$ perhaps very slowly as $T \nearrow \infty$, and then $\eta \searrow 0$ sufficiently slowly, the right hand side of $(\ref{6.40})$ is bounded by $o(T)$.
Again, when $|s - x(t)| \leq \frac{R}{2}$, $\chi(\frac{x_{\ast} - s}{R}) = 1$ and
\begin{equation}\label{6.41}
(\int \chi^{2}(\frac{y - s}{R}) |v_{s}(t, y)|^{2} dy) \geq \frac{1}{2} \| u \|_{L^{2}}^{2}.
\end{equation}
Therefore, since the Gagliardo--Nirenberg inequality guarantees that $E(u) \geq 0$ when $\| u \|_{L^{2}} \leq \| Q \|_{L^{2}}$, the left hand side of $(\ref{6.40})$ is bounded below by
\begin{equation}\label{6.42}
\| u_{0} \|_{L^{2}}^{2} \int_{0}^{T} \frac{1}{R^{d}} \int_{|s - x(t)| \leq \frac{R}{2}} E(\chi(\frac{x - x(t)}{R}) v_{s}) ds dt \lesssim o(T)
\end{equation}
Thus, taking a sequence $T_{n} \nearrow \infty$, $R_{n} \nearrow \infty$, $\eta_{n} \searrow 0$, there exists a sequence of times $t_{n} \in [\frac{T_{n}}{2}, T_{n}]$, $|s_{n} - x(t_{n})| \leq \frac{R_{n}}{2}$ such that
\begin{equation}\label{6.43}
E(\chi(\frac{x - s_{n}}{R_{n}}) e^{ix \cdot \xi(s_{n})} e^{i \gamma(s_{n})} P_{\leq T_{n}} u(t_{n}, x)) \rightarrow 0,
\end{equation}
\begin{equation}\label{6.44}
(1 - \chi(\frac{x - s_{n}}{R_{n}})) e^{ix \cdot \xi(s_{n})} e^{i \gamma(s_{n})} P_{\leq T_{n}} u(t_{n}, x) \rightarrow 0, \qquad \text{in} \qquad L^{2},
\end{equation}
\begin{equation}\label{6.45}
(1 - P_{\leq T_{n}}) u(t_{n}, x) \rightarrow 0, \qquad \text{in} \qquad L^{2},
\end{equation}
and
\begin{equation}\label{6.46}
\| \chi(\frac{x - s_{n}}{R_{n}}) e^{ix \cdot \xi(s_{n})} e^{i \gamma(s_{n})} P_{\leq T_{n}} u(t_{n}, x) \|_{L^{2 + \frac{4}{d}}} \sim 1.
\end{equation}
Now by the almost periodicity of $u$, $(\ref{2.8})$, after passing to a subsequence, there exists $u_{0} \in H^{1}$ such that
\begin{equation}\label{6.47}
\chi(\frac{x + x(t_{n}) - s_{n}}{R_{n}}) e^{ix \cdot \xi(s_{n})} e^{i x(t_{n}) \cdot \xi(s_{n})} e^{i \gamma(s_{n})} P_{\leq T_{n}} u(t_{n}, x + x(t_{n})) \rightharpoonup u_{0},
\end{equation}
weakly in $H^{1}$, and
\begin{equation}\label{6.48}
\chi(\frac{x + x(t_{n}) - s_{n}}{R_{n}}) e^{ix \cdot \xi(s_{n})} e^{i x(t_{n}) \cdot \xi(s_{n})} P_{\leq T_{n}} u(t_{n}, x + x(t_{n})) \rightarrow u_{0},
\end{equation}
strongly in $L^{2} \cap L^{2 + \frac{4}{d}}$. Also, by $(\ref{6.43})$, $(\ref{6.44})$, and $(\ref{6.45})$, $\| u_{0} \|_{L^{2}} = \| Q \|_{L^{2}}$, $E(u_{0}) \leq 0$, and by the Gagliardo--Nirenberg inequality, $E(u_{0}) = 0$. Therefore,
\begin{equation}\label{6.49}
u_{0} = \lambda^{d/2} Q(\lambda(x - x_{0})),
\end{equation}
for some $\lambda \sim 1$ and $|x_{0}| \lesssim 1$. This proves Theorem $\ref{t2.2}$ when $\lambda(t) = 1$. $\Box$
\section{Proof of Theorem $\ref{t2.2}$ for a general $\lambda(t)$}
Now suppose that $\lambda(t)$ is free to vary. Recall that $|\lambda'(t)| \lesssim \lambda(t)^{3}$. In this case,
\begin{equation}\label{4.1}
\lambda(t) : I \rightarrow (0, \infty),
\end{equation}
where $I$ is the maximal interval of existence of an almost periodic solution to $(\ref{1.1})$.
\begin{theorem}\label{t4.1}
Suppose $T_{n} \in I$, $T_{n} \rightarrow \sup(I)$ is a sequence of times in $I$. Then
\begin{equation}\label{4.2}
\lim_{T_{n} \rightarrow \sup(I)} \frac{1}{\sup_{t \in [0, T_{n}]} \lambda(t)} \cdot \int_{0}^{T_{n}} \lambda(t)^{3} dt = +\infty.
\end{equation}
\end{theorem}
\begin{proof}
Suppose this were not true; that is, suppose there exist a constant $C_{0} < \infty$ and a sequence $T_{n} \rightarrow \sup(I)$ such that for all $n \in \mathbb{Z}_{\geq 0}$,
\begin{equation}\label{4.3}
\frac{1}{\sup_{t \in [0, T_{n}]} \lambda(t)} \int_{0}^{T_{n}} \lambda(t)^{3} dt \leq C_{0}.
\end{equation}
This would correspond to the rapid cascade scenario in \cite{dodson2015global}, \cite{dodson2016global}, \cite{fan20182}. In those papers $N(t)$ was used instead of $\lambda(t)$. As in those papers, $\lambda(t)$ can be chosen to be continuous, so for each $T_{n}$ choose $t_{n} \in [0, T_{n}]$ such that
\begin{equation}\label{4.4}
\lambda(t_{n}) = \sup_{t \in [0, T_{n}]} \lambda(t).
\end{equation}
Since $I$ is the maximal interval of existence of $u$,
\begin{equation}\label{4.5}
\lim_{n \rightarrow \infty} \| u \|_{L_{t,x}^{\frac{2(d + 2)}{d}}([0, T_{n}] \times \mathbb{R}^{d})} = \infty.
\end{equation}
By the almost periodicity property of $u$ and $(\ref{2.8})$, there exist $x(t_{n})$, $\xi(t_{n})$, and $\gamma(t_{n})$ such that if
\begin{equation}\label{4.7}
\lambda(t_{n})^{d/2} e^{ix \cdot \xi(t_{n})} e^{i \gamma(t_{n})} u(t_{n}, \lambda(t_{n}) x + x(t_{n})) = v_{n}(x),
\end{equation}
then $v_{n}$ converges to some $u_{0}$ in $L^{2}(\mathbb{R}^{d})$, where $u_{0}$ is the initial data for a solution $u$ to $(\ref{1.1})$ that blows up in both time directions, $\lambda(t) \leq 1$ for all $t \leq 0$, and
\begin{equation}\label{4.8}
\int_{-\infty}^{0} \lambda(t)^{3} dt \leq C_{0}.
\end{equation}
Following the proof in \cite{dodson2012global} in dimensions $d \geq 3$ and \cite{dodson2016global2} in dimension $d = 2$,
\begin{equation}\label{4.9}
\| u \|_{L_{t}^{\infty} \dot{H}^{s}((-\infty, 0] \times \mathbb{R}^{d})} \lesssim_{s} C_{0}^{s},
\end{equation}
for any $0 \leq s < 1 + \frac{4}{d}$. Combining $(\ref{4.9})$ with $(\ref{4.8})$ and $|\lambda'(t)| \lesssim \lambda(t)^{3}$ implies
\begin{equation}\label{4.10}
\lim_{t \searrow -\infty} \lambda(t) = 0.
\end{equation}
Also, since
\begin{equation}\label{4.11}
|\xi'(t)| \lesssim \lambda(t)^{3},
\end{equation}
Equation $(\ref{4.8})$ implies that $\xi(t)$ converges to some $\xi_{-} \in \mathbb{R}^{d}$ as $t \searrow -\infty$. Make a Galilean transformation so that $\xi_{-} = 0$. Then, by interpolation, $(\ref{4.9})$ and $(\ref{4.10})$ imply
\begin{equation}\label{4.12}
\lim_{t \searrow -\infty} E(u(t)) = 0.
\end{equation}
Therefore, by conservation of energy, and convergence in $L^{2}$ of $(\ref{4.7})$,
\begin{equation}\label{4.13}
E(u_{0}) = 0, \qquad \text{and} \qquad \| u_{0} \|_{L^{2}} = \| Q \|_{L^{2}}.
\end{equation}
Therefore, by the Gagliardo-Nirenberg theorem,
\begin{equation}\label{4.14}
u_{0} = \lambda^{d/2} Q(\lambda(x - x_{0})), \qquad 0 < \lambda < \infty, \qquad x_{0} \in \mathbb{R}^{d},
\end{equation}
and $Q$ is the solution to the elliptic partial differential equation
\begin{equation}\label{4.15}
\Delta Q + |Q|^{\frac{4}{d}} Q = Q.
\end{equation}
However, assuming without loss of generality that $x_{0} = 0$ and $\lambda = 1$, the solution to $(\ref{1.1})$ is given by
\begin{equation}\label{4.16}
u(t,x) = e^{it} Q(x), \qquad t \in \mathbb{R}.
\end{equation}
Such a solution clearly does not satisfy $(\ref{4.3})$, which gives a contradiction.
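Indeed, $(\ref{4.16})$ solves $(\ref{1.1})$ precisely because $Q$ solves $(\ref{4.15})$ (with the focusing sign convention assumed here), and for the soliton $(\ref{4.16})$ the scaling parameter may be taken to be constant in time, so
\begin{equation*}
\int_{-\infty}^{0} \lambda(t)^{3} dt = \infty,
\end{equation*}
which is incompatible with the bound $(\ref{4.8})$ satisfied by this limiting solution.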
In the case that $\| u_{0} \|_{L^{2}} > \| Q \|_{L^{2}}$, Theorem $\ref{t5.2}$ implies that such a solution must blow up in finite time in both time directions, which contradicts $(\ref{4.8})$.
\end{proof}
Therefore, consider the case when
\begin{equation}\label{4.17}
\lim_{n \rightarrow \infty} \frac{1}{\sup_{t \in [0, T_{n}]} \lambda(t)} \int_{0}^{T_{n}} \lambda(t)^{3} dt = \infty.
\end{equation}
Passing to a subsequence, suppose
\begin{equation}\label{4.18}
\frac{1}{\sup_{t \in [0, T_{n}]} \lambda(t)} \int_{0}^{T_{n}} \lambda(t)^{3} dt = 2^{2n}.
\end{equation}
Then as in \cite{dodson2015global}, replace $M(t)$ in the previous section with,
\begin{equation}\label{4.19}
M(t) = \int \int |Iu(t, y)|^{2} Im[\bar{Iu} \nabla Iu](t,x) \cdot \tilde{\lambda}(t) (x - y) \psi(\tilde{\lambda}(t) (x - y)) dx dy,
\end{equation}
where $\tilde{\lambda}(t)$ is given by the smoothing algorithm from \cite{dodson2015global}. Then
\begin{equation}\label{4.20}
\aligned
\frac{d}{dt} M(t) = -2 \tilde{\lambda}(t) \int \int Im[\bar{Iu} \partial_{k} Iu](t,y) Im[\bar{Iu} \partial_{j} Iu](t,x) \\ \times [\delta_{jk} \psi(\tilde{\lambda}(t)(x - y)) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(\tilde{\lambda}(t)(x - y))] dx dy \\
+ \frac{1}{2} \tilde{\lambda}(t)^{3} \int \int |Iu(t,y)|^{2} |Iu(t,x)|^{2} [\Delta \phi(\tilde{\lambda}(t) (x - y)) + (d - 1) \Delta \psi(\tilde{\lambda}(t) (x - y))] dx dy \\
+ 2 \tilde{\lambda}(t) \int \int |Iu(t,y)|^{2} Re[\partial_{k} \bar{Iu} \partial_{j} Iu](t,x) [\delta_{jk} \psi(\tilde{\lambda}(t) (x - y)) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(\tilde{\lambda}(t) (x - y))] dx dy \\
- \frac{2}{d + 2} \tilde{\lambda}(t) \int \int |Iu(t,y)|^{2} |Iu(t,x)|^{2 + \frac{4}{d}} [d \psi(\tilde{\lambda}(t)(x - y)) + \psi'(\tilde{\lambda}(t) (x - y)) |x - y|] dx dy + \mathcal E \\
+ \dot{\tilde{\lambda}}(t) \int \int |Iu(t, y)|^{2} Im[\bar{Iu} \nabla Iu](t,x) \cdot \phi(\tilde{\lambda}(t) (x - y)) (x - y) dx dy,
\endaligned
\end{equation}
where $I = P_{\leq 2^{2n} \cdot \sup_{t \in [0, T]} \lambda(t)}$.
Equations $(\ref{3.6})$ and $(\ref{6.10})$ imply
\begin{equation}\label{4.21}
\sup_{t \in [0, T_{n}]} |M(t)| \lesssim R o(2^{2n}) \cdot \sup_{t \in [0, T]} \lambda(t).
\end{equation}
Next, since the smoothing algorithm guarantees that $\tilde{\lambda}(t) \leq \lambda(t)$, following $(\ref{3.8})$ and $(\ref{6.14})$,
\begin{equation}\label{4.22}
\aligned
\int_{0}^{T_{n}} \frac{1}{2} \tilde{\lambda}(t)^{3} \int \int |Iu(t,y)|^{2} |Iu(t,x)|^{2} [\Delta \phi(\tilde{\lambda}(t) (x - y)) + (d - 1) \Delta \psi(\tilde{\lambda}(t) (x - y))] dx dy dt \\ \lesssim \frac{1}{R^{2}} \| u \|_{L^{2}}^{4} \cdot \int_{0}^{T_{n}} \tilde{\lambda}(t) \lambda(t)^{2} dt \lesssim \frac{2^{2n}}{R^{2}} \cdot \sup_{t \in [0, T]} \lambda(t).
\endaligned
\end{equation}
Since $\tilde{\lambda}(t) \leq \lambda(t)$, following the analysis in $(\ref{3.9})$--$(\ref{3.25})$ in two dimensions and $(\ref{6.15})$--$(\ref{6.40})$ in higher dimensions,
\begin{equation}\label{4.27}
\aligned
2 \int_{0}^{T_{n}} \tilde{\lambda}(t) \int \int Im[I\bar{u} \partial_{k} Iu] Im[\bar{Iu} \partial_{j} Iu] [\delta_{jk} \psi(\tilde{\lambda}(t)(x - y)) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(\tilde{\lambda}(t)(x - y))] dx dy dt \\
+ 2 \int_{0}^{T_{n}} \tilde{\lambda}(t) \int \int |Iu(t,y)|^{2} Re[\partial_{j} \bar{Iu} \partial_{k} Iu](t,x) [\delta_{jk} \psi(\tilde{\lambda}(t)(x - y)) + \frac{(x - y)_{j} (x - y)_{k}}{|x - y|} \psi'(\tilde{\lambda}(t)(x - y))] dx dy dt \\
- \frac{2}{d + 2} \int_{0}^{T_{n}} \tilde{\lambda}(t) \int \int |u(t,y)|^{2} |u(t,x)|^{2 + \frac{4}{d}} [d \psi(\tilde{\lambda}(t)(x - y)) + \psi'(\tilde{\lambda}(t) (x - y)) |x - y|] dx dy dt
\endaligned
\end{equation}
\begin{equation}\label{4.28}
\aligned
= 4 \int_{0}^{T} \frac{\tilde{\lambda}(t) \lambda(t)^{2}}{R} \int (\int \chi^{2}(\frac{y - s}{R}) |v_{s,t}(t, y)|^{2} dy) \chi^{2}(\frac{x_{\ast} - s}{R}) E(\chi^{2}(\frac{x - x(t)}{R}) v_{s,t}(t,x)) ds dt \\ + R o(2^{2n}) \cdot \sup_{t \in [0, T]} \lambda(t) + O(\eta^{\sigma} \| u \|_{L_{t}^{\infty} L_{x}^{2}}^{2} \int_{0}^{T_{n}} \tilde{\lambda}(t) \| u(t) \|_{L^{2 + \frac{4}{d}}}^{2 + \frac{4}{d}} dt) + O(\frac{C(\eta)}{R} \| u \|_{L_{t}^{\infty} L_{x}^{2}}^{2} \int_{0}^{T_{n}} \tilde{\lambda}(t) \| u(t) \|_{L_{x}^{2 + \frac{4}{d}}}^{2 + \frac{4}{d}} dt).
\endaligned
\end{equation}
\begin{remark}
The term $v_{s, t}$ is an abbreviation for
\begin{equation}\label{4.29}
v_{s, t} = \frac{e^{i x \xi(s)}}{\lambda(t)^{d/2}} Iu(t, \frac{x}{\lambda(t)}),
\end{equation}
where $\xi(s) \in \mathbb{R}^{d}$ is chosen such that
\begin{equation}\label{4.30}
\int \chi^{2}(\frac{\tilde{\lambda}(t) (x - s)}{R \lambda(t)}) Im[\bar{v}_{s, t} \nabla (v_{s,t})] dx = 0.
\end{equation}
\end{remark}
The error estimates can be handled in a manner similar to the previous section, see \cite{dodson2015global}. Therefore, it only remains to consider the contribution of the term in $(\ref{4.20})$ with $\dot{\tilde{\lambda}}(t)$. By direct computation,
\begin{equation}\label{4.31}
\aligned
\dot{\tilde{\lambda}}(t) \int \int |Iu(t, y)|^{2} Im[\bar{Iu} \nabla Iu](t,x) \cdot \phi(\tilde{\lambda}(t) (x - y)) (x - y) dx dy \\
= \frac{\dot{\tilde{\lambda}}(t)}{R^{d} \tilde{\lambda}(t)} \int (\int \chi^{2}(\frac{\tilde{\lambda}(t) y - s}{R}) |Iu(t,y)|^{2} dy)(\int \chi^{2}(\frac{\tilde{\lambda}(t) x - s}{R}) Im[\bar{Iu} \nabla Iu](t,x) \cdot (x \tilde{\lambda}(t) - s) dx) ds \\
- \frac{\dot{\tilde{\lambda}}(t)}{R^{d} \tilde{\lambda}(t)} \int (\int \chi^{2}(\frac{\tilde{\lambda}(t) y - s}{R}) (y \tilde{\lambda}(t) - s) |Iu(t,y)|^{2} dy) \cdot (\int \chi^{2}(\frac{\tilde{\lambda}(t) x - s}{R}) Im[\bar{Iu} \nabla Iu] dx) ds.
\endaligned
\end{equation}
Now rescale,
\begin{equation}\label{4.32}
\aligned
= \frac{\dot{\tilde{\lambda}}(t)}{R^{d} \tilde{\lambda}(t)} \lambda(t) \int (\int \chi^{2}(\frac{\tilde{\lambda}(t) y - \lambda(t) s}{R \lambda(t)}) |\frac{1}{\lambda(t)^{d/2}} Iu(t,\frac{y}{\lambda(t)})|^{2} dy) \\
\times (\int \chi^{2}(\frac{\tilde{\lambda}(t) x - \lambda(t) s}{R \lambda(t)}) Im[\frac{1}{\lambda(t)^{d/2}} \bar{Iu}(t, \frac{x}{\lambda(t)}) \nabla (\frac{1}{\lambda(t)^{d/2}} Iu(t, \frac{x}{\lambda(t)})] \cdot (\frac{x \tilde{\lambda}(t) - s \lambda(t)}{\lambda(t)}) dx) ds \\
- \frac{\dot{\tilde{\lambda}}(t)}{R^{d} \tilde{\lambda}(t)} \lambda(t) \int (\int \chi^{2}(\frac{\tilde{\lambda}(t) y - \lambda(t) s}{R \lambda(t)}) (\frac{y \tilde{\lambda}(t) - s \lambda(t)}{\lambda(t)}) |\frac{1}{\lambda(t)^{d/2}} Iu(t,\frac{y}{\lambda(t)})|^{2} dy) \\
\cdot (\int \chi^{2}(\frac{\tilde{\lambda}(t) x - s \lambda(t)}{R \lambda(t)}) Im[\frac{1}{\lambda(t)^{d/2}} \bar{Iu}(t, \frac{x}{\lambda(t)}) \nabla (\frac{1}{\lambda(t)^{d/2}} Iu(t, \frac{x}{\lambda(t)}))] dx) ds.
\endaligned
\end{equation}
\begin{remark}
Throughout these calculations, we understand that $\lambda^{-d/2} Iu(\frac{x}{\lambda})$ refers to the rescaling of the function $Iu(x)$, not the $I$-operator acting on a rescaling of $u$.
\end{remark}
For any $\xi \in \mathbb{R}^{d}$,
\begin{equation}\label{4.33}
\aligned
= \frac{\dot{\tilde{\lambda}}(t)}{R^{d} \tilde{\lambda}(t)} \lambda(t) \int (\int \chi^{2}(\frac{\tilde{\lambda}(t) y - \lambda(t) s}{R \lambda(t)}) |\frac{e^{ix \cdot \xi}}{\lambda(t)^{d/2}} Iu(t,\frac{y}{\lambda(t)})|^{2} dy) \\
\times (\int \chi^{2}(\frac{\tilde{\lambda}(t) x - \lambda(t) s}{R \lambda(t)}) Im[\frac{e^{-ix \cdot \xi}}{\lambda(t)^{d/2}} \bar{Iu}(t, \frac{x}{\lambda(t)}) \nabla (\frac{e^{ix \cdot \xi}}{\lambda(t)^{d/2}} Iu(t, \frac{x}{\lambda(t)})] (\frac{x \tilde{\lambda}(t) - s \lambda(t)}{\lambda(t)}) dx) ds \\
- \frac{\dot{\tilde{\lambda}}(t)}{R^{d} \tilde{\lambda}(t)} \lambda(t) \int (\int \chi^{2}(\frac{\tilde{\lambda}(t) y - \lambda(t) s}{R \lambda(t)}) (\frac{y \tilde{\lambda}(t) - s \lambda(t)}{\lambda(t)}) |\frac{e^{ix \cdot \xi}}{\lambda(t)^{d/2}} Iu(t,\frac{y}{\lambda(t)})|^{2} dy) \\
\times (\int \chi^{2}(\frac{\tilde{\lambda}(t) x - s \lambda(t)}{R \lambda(t)}) Im[\frac{e^{-ix \cdot \xi}}{\lambda(t)^{d/2}} \bar{Iu}(t, \frac{x}{\lambda(t)}) \nabla(\frac{e^{ix \cdot \xi}}{\lambda(t)^{d/2}} Iu(t, \frac{x}{\lambda(t)}))] dx) ds.
\endaligned
\end{equation}
In particular, if we choose $\xi = \xi(s)$,
\begin{equation}\label{4.34}
\aligned
= \frac{\dot{\tilde{\lambda}}(t)}{R^{d} \tilde{\lambda}(t)} \lambda(t) \int (\int \chi^{2}(\frac{\tilde{\lambda}(t) y - \lambda(t) s}{R \lambda(t)}) |v_{s,t}|^{2} dy) \\ \times (\int \chi^{2}(\frac{\tilde{\lambda}(t) x - \lambda(t) s}{R \lambda(t)}) Im[\bar{v}_{s,t}(t, \frac{x}{\lambda(t)}) \nabla (v_{s,t})] \cdot (\frac{x \tilde{\lambda}(t) - s \lambda(t)}{\lambda(t)}) dx) ds \\
= \frac{\dot{\tilde{\lambda}}(t)}{R} \int (\int \chi^{2}(\frac{\tilde{\lambda}(t) (y - s)}{R \lambda(t)}) |v_{s,t}|^{2} dy) (\int \chi^{2}(\frac{\tilde{\lambda}(t) (x - s)}{R \lambda(t)}) Im[\bar{v}_{s,t}(t, \frac{x}{\lambda(t)}) \nabla (v_{s,t})] \cdot (\frac{\tilde{\lambda}(t)(x - s)}{\lambda(t)}) dx) ds.
\endaligned
\end{equation}
Then by the Cauchy-Schwarz inequality,
\begin{equation}\label{4.35}
\aligned
\lesssim \frac{\eta^{4}}{R^{d}} \lambda(t) \tilde{\lambda}(t)^{2} \int (\int \chi^{2}(\frac{\tilde{\lambda}(t) (y - s)}{R \lambda(t)}) |v_{s,t}|^{2} dy) (\int \chi^{2}(\frac{\tilde{\lambda}(t) (x - s)}{R \lambda(t)}) |\partial_{x}(v_{s,t})|^{2} dx) ds \\
+ \frac{1}{\eta^{4}} \frac{|\dot{\tilde{\lambda}}(t)|^{2}}{\lambda(t) \tilde{\lambda}(t)^{2}} \frac{\tilde{\lambda}(t)}{R^{d} \lambda(t)} \int (\int \chi^{2}(\frac{\tilde{\lambda}(t) (y - s)}{R \lambda(t)}) |v_{s,t}|^{2} dy) (\int \chi^{2}(\frac{\tilde{\lambda}(t) (x - s)}{R \lambda(t)}) |v_{s,t}|^{2} (\frac{\tilde{\lambda}(t)(x - s)}{\lambda(t)})^{2} dx) ds.
\endaligned
\end{equation}
The first term in $(\ref{4.35})$ can be absorbed into $(\ref{4.28})$. The second term in $(\ref{4.35})$ is bounded by
\begin{equation}\label{4.36}
\frac{1}{\eta^{4}} \frac{|\dot{\tilde{\lambda}}(t)|^{2}}{\lambda(t) \tilde{\lambda}(t)^{2}} R^{2} \| u \|_{L_{t}^{\infty} L_{x}^{2}}^{4}.
\end{equation}
The smoothing algorithm from \cite{dodson2015global} is used to control this term. Recall that after $n$ iterations of the smoothing algorithm on an interval $[0, T]$, $\tilde{\lambda}(t)$ has the following properties:
\begin{enumerate}
\item $\tilde{\lambda}(t) \leq \lambda(t)$,
\item If $\dot{\tilde{\lambda}}(t) \neq 0$, then $\lambda(t) = \tilde{\lambda}(t)$,
\item $\tilde{\lambda}(t) \geq 2^{-n} \lambda(t)$,
\item $\int_{0}^{T} |\dot{\tilde{\lambda}}(t)| dt \lesssim \frac{1}{n} \int_{0}^{T} |\dot{\lambda}(t)| \frac{\tilde{\lambda}(t)}{\lambda(t)} dt$, with implicit constant independent of $n$ and $T$.
\end{enumerate}
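These properties are purely quantitative, and it may help to record them in executable form. The following Python sketch (illustrative only; the function and variable names are ours, and the construction of $\tilde{\lambda}$ itself is not reproduced from \cite{dodson2015global}) checks properties $(1)$--$(4)$ for sampled values of $\lambda$ and $\tilde{\lambda}$ on a uniform time grid, taking the implicit constant in $(4)$ to be $1$.
\begin{verbatim}
# Verifier (not the construction itself) for the four properties above, on
# sampled values of lambda and lambda_tilde over a uniform grid of step dt;
# the implicit constant in property (4) is taken to be 1 here.
def check_smoothing_properties(lam, lam_t, dt, n, tol=1e-9):
    m = len(lam)
    d_lam = [(lam[i + 1] - lam[i]) / dt for i in range(m - 1)]
    d_lam_t = [(lam_t[i + 1] - lam_t[i]) / dt for i in range(m - 1)]
    p1 = all(lt <= l + tol for l, lt in zip(lam, lam_t))
    p2 = all(abs(d) < tol or abs(lam[i] - lam_t[i]) < tol
             for i, d in enumerate(d_lam_t))
    p3 = all(lt >= 2.0 ** (-n) * l - tol for l, lt in zip(lam, lam_t))
    lhs = sum(abs(d) for d in d_lam_t) * dt
    rhs = (1.0 / n) * sum(abs(d_lam[i]) * lam_t[i] / lam[i]
                          for i in range(m - 1)) * dt
    return p1, p2, p3, lhs <= rhs + tol
\end{verbatim}
We now return to the estimate of $(\ref{4.36})$.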
Therefore,
\begin{equation}\label{4.37}
\int_{0}^{T_{n}} \frac{1}{\eta^{4}} \frac{|\dot{\tilde{\lambda}}(t)|^{2}}{\lambda(t) \tilde{\lambda}(t)^{2}} R^{2} \| u \|_{L_{t}^{\infty} L_{x}^{2}}^{4} dt \leq \frac{1}{\eta^{4}} \| u \|_{L_{t}^{\infty} L_{x}^{2}}^{4} \int_{0}^{T_{n}} \frac{|\dot{\lambda}(t)|}{\lambda(t)^{3}} R^{2} |\dot{\tilde{\lambda}}(t)| dt \lesssim \frac{1}{n} \frac{R^{2}}{\eta^{4}} \| u \|_{L_{t}^{\infty} L_{x}^{2}}^{4} \int_{0}^{T_{n}} \tilde{\lambda}(t) \lambda(t)^{2} dt.
\end{equation}
Since $\sup_{t \in [0, T_{n}]} \lambda(t) \leq 2^{-2n} \int_{0}^{T_{n}} \lambda(t)^{3} dt$,
\begin{equation}\label{4.39}
R_{n} \sup_{t \in [0, T_{n}]} |M(t)| \lesssim R_{n} o(2^{2n}) \cdot \sup_{t \in [0, T]} \lambda(t).
\end{equation}
Therefore, it is possible to take sequences $\eta_{n} \searrow 0$ and $R_{n} \nearrow \infty$, perhaps very slowly, such that
\begin{equation}\label{4.38}
\frac{1}{n} \frac{R_{n}^{2}}{\eta_{n}^{4}} \| u \|_{L_{t}^{\infty} L_{x}^{2}}^{4} \int_{0}^{T_{n}} \tilde{\lambda}(t) \lambda(t)^{2} dt = o_{n}(1) \int_{0}^{T_{n}} \tilde{\lambda}(t) \lambda(t)^{2} dt,
\end{equation}
\begin{equation}\label{4.40}
R_{n} \sup_{t \in [0, T_{n}]} |M(t)| \lesssim o(2^{2n}) \cdot \sup_{t \in [0, T]} \lambda(t),
\end{equation}
\begin{equation}\label{4.40.1}
O(\eta_{n}^{4} \| u \|_{L_{t}^{\infty} L_{x}^{2}}^{2} \int_{0}^{T} \tilde{\lambda}(t) \| u(t) \|_{L^{2 + \frac{4}{d}}}^{2 + \frac{4}{d}} dt) \lesssim o_{n}(1) \int_{0}^{T_{n}} \tilde{\lambda}(t) \lambda(t)^{2} dt,
\end{equation}
and
\begin{equation}\label{4.40.2}
O(\frac{C(\eta_{n})}{R_{n}} \| u \|_{L_{t}^{\infty} L_{x}^{2}}^{2} \int_{0}^{T_{n}} \tilde{\lambda}(t) \| u(t) \|_{L_{x}^{2 + \frac{4}{d}}}^{2 + \frac{4}{d}} dt) \lesssim o_{n}(1) \int_{0}^{T_{n}} \tilde{\lambda}(t) \lambda(t)^{2} dt.
\end{equation}
Therefore, these terms may safely be treated as error terms. Repeating the analysis in sections three and four for $(\ref{4.28})$, there exists a sequence of times $t_{n} \nearrow \sup(I)$ such that
\begin{equation}
E(\chi(\frac{(x - x(t_{n})) \tilde{\lambda}(t_{n})}{R_{n} \lambda(t_{n})}) v_{s_{n}, t_{n}}) \rightarrow 0,
\end{equation}
\begin{equation}
\| (1 - \chi(\frac{(x - x(t_{n})) \tilde{\lambda}(t_{n})}{R_{n} \lambda(t_{n})})) v_{s_{n}, t_{n}} \|_{L^{2}} \rightarrow 0,
\end{equation}
\begin{equation}
\| v_{s_{n}, t_{n}} \|_{L^{2}} \nearrow \| Q \|_{L^{2}},
\end{equation}
and
\begin{equation}\label{3.32}
\| \chi(\frac{(x - x(t_{n})) \tilde{\lambda}(t_{n})}{R_{n} \lambda(t_{n})}) Iv_{s_{n}, t_{n}} \|_{L^{2 + \frac{4}{d}}} \sim 1.
\end{equation}
In this case as well, we can show that this sequence converges in $H^{1}$ to
\begin{equation}\label{3.36}
u_{0} = \lambda^{d/2} Q(\lambda(x - x_{0})).
\end{equation}
This proves Theorem $\ref{t2.2}$ for a general $\lambda(t)$. $\Box$
\section{Proof of Theorem $\ref{t1.3}$}
The proof of Theorem $\ref{t1.3}$ follows the argument in the proof of Theorem $\ref{t1.2}$, combined with some reductions from \cite{fan20182}. First, recall Lemma $4.2$ from \cite{fan20182}.
\begin{lemma}\label{l5.1}
Let $u$ be a solution to $(\ref{1.1})$ that satisfies the assumptions of Theorem $\ref{t1.3}$. Then there exists a sequence $t_{n} \nearrow T^{+}(u)$ such that $u(t_{n})$ admits a profile decomposition with profiles $\{ \phi_{j}, \{ x_{j,n}, \lambda_{j,n}, \xi_{j,n}, t_{j,n}, \gamma_{j,n} \} \}$, and there is a unique profile, call it $\phi_{1}$, such that
\begin{enumerate}
\item $\| \phi_{1} \|_{L^{2}} \geq \| Q \|_{L^{2}}$,
\item The nonlinear profile $\Phi_{1}$ associated to $\phi_{1}$ is an almost periodic solution in the sense of $(\ref{2.8})$ that does not scatter forward or backward in time.
\end{enumerate}
\end{lemma}
Now consider the nonlinear profile $\Phi_{1}$. To simplify notation, relabel $\Phi_{1} = u$, and let $v_{s, t}$ be as in $(\ref{4.29})$. Using the same arguments as in the proof of Theorem $\ref{t1.2}$, there exists a sequence $t_{n} \nearrow T^{+}(u)$, $R_{n} \nearrow \infty$, $s_{n} \in \mathbb{R}^{d}$, $\tilde{\lambda}(t) \leq \lambda(t)$, such that
\begin{equation}\label{5.1}
E(\chi(\frac{(x - x(t_{n})) \tilde{\lambda}(t_{n})}{R_{n} \lambda(t_{n})}) v_{s_{n}, t_{n}}) \rightarrow 0,
\end{equation}
\begin{equation}\label{5.2}
\| (1 - \chi(\frac{(x - x(t_{n})) \tilde{\lambda}(t_{n})}{R_{n} \lambda(t_{n})})) v_{s_{n}, t_{n}} \|_{L^{2}} \rightarrow 0,
\end{equation}
\begin{equation}
\| v_{s_{n}, t_{n}} \|_{L^{2}} \nearrow \| u \|_{L^{2}},
\end{equation}
and
\begin{equation}\label{5.3}
\| \chi(\frac{(x - x(t_{n})) \tilde{\lambda}(t_{n})}{R_{n} \lambda(t_{n})}) I v_{s_{n}, t_{n}} \|_{L^{2 + \frac{4}{d}}} \sim 1.
\end{equation}
Therefore, by the almost periodicity of $v$, there exists a sequence $g(t_{n})$ given by $(\ref{2.1.1})$ such that
\begin{equation}\label{5.4}
g(t_{n}) v(t_{n}) \rightarrow u_{0}, \qquad \text{in} \qquad L^{2},
\end{equation}
where $E(u_{0}) = 0$ and $\| u_{0} \|_{L^{2}} \geq \| Q \|_{L^{2}}$.
Next, we use a blowup result from \cite{merle2003sharp}, \cite{merle2004universality}, \cite{merle2005blow}, \cite{merle2006sharp}, stated here as in Theorem $3$ of \cite{merle2006sharp}. See also Theorem $3.1$ of \cite{fan20182}.
\begin{theorem}\label{t5.2}
Assume $u$ is a solution to $(\ref{1.1})$ with $H^{1}$ initial data and non-positive energy that satisfies $(\ref{1.18})$. If $u$ is of zero energy, then $u$ blows up in finite time according to the log-log law,
\begin{equation}\label{5.5}
u(t,x) = \frac{1}{\lambda(t)^{d/2}} (Q + \epsilon)(\frac{x - x(t)}{\lambda(t)}) e^{i \gamma(t)}, \qquad x(t) \in \mathbb{R}^{d}, \qquad \gamma(t) \in \mathbb{R}, \qquad \lambda(t) > 0, \qquad \| \epsilon \|_{H^{1}} \leq \delta(\alpha),
\end{equation}
with the estimate
\begin{equation}\label{5.6}
\lambda(t) \sim \sqrt{\frac{T - t}{\ln|\ln(T - t)|}},
\end{equation}
and
\begin{equation}\label{5.7}
\lim_{t \rightarrow T} \int (|\nabla \epsilon(t,x)|^{2} + |\epsilon(t,x)|^{2} e^{-|x|}) dx = 0.
\end{equation}
\end{theorem}
Let $u$ be the solution to $(\ref{1.1})$ with initial data $u_{0}$. If $\| u_{0} \|_{L^{2}} = \| Q \|_{L^{2}}$ then we are done, using the analysis in the previous section. If $\| u_{0} \|_{L^{2}} > \| Q \|_{L^{2}}$, then Theorem $\ref{t5.2}$ implies that $u$ must be of the form $(\ref{5.5})$. Furthermore, by perturbative arguments, for any fixed $t' \in \mathbb{R}$, $(\ref{5.4})$ implies that there exists a sequence $g(t_{n}, t')$ such that
\begin{equation}\label{5.8}
g(t_{n}, t') v(t_{n} + \frac{t'}{\lambda(t_{n})^{2}}) \rightarrow u(t'), \qquad \text{in} \qquad L^{2}.
\end{equation}
In fact, perturbative arguments also imply that there exists a sequence $t_{n}' \nearrow \infty$, perhaps very slowly, such that
\begin{equation}\label{5.9}
\| g(t_{n}, t_{n}') v(t_{n} + \frac{t_{n}'}{\lambda(t_{n})^{2}}) - u(t_{n}') \|_{L^{2}} \rightarrow 0.
\end{equation}
Furthermore, Theorem $\ref{t5.2}$ implies that there exists a sequence $g(t_{n}')$ such that
\begin{equation}\label{5.10}
g(t_{n}') u(t_{n}') \rightharpoonup Q, \qquad \text{weakly in} \qquad L^{2}.
\end{equation}
Combining $(\ref{5.9})$ and $(\ref{5.10})$,
\begin{equation}\label{5.11}
g(t_{n}') g(t_{n}, t_{n}') v(t_{n} + \frac{t_{n}'}{\lambda(t_{n})^{2}}) \rightharpoonup Q, \qquad \text{weakly in} \qquad L^{2}.
\end{equation}
This completes the proof of Theorem $\ref{t1.3}$.
Acknowledgements: During the writing of this paper, the author was supported by NSF grant DMS-1764358.
\end{document} |
\begin{document}
\title{Two Results on Separation Logic
With Theory Reasoning}
\author{Mnacho Echenim\inst{1} \and Nicolas Peltier\inst{1}}
\institute{Univ. Grenoble Alpes, CNRS, LIG, F-38000 Grenoble France}
\authorrunning{M. Echenim and N. Peltier}
\titlerunning{A Proof Procedure For Separation Logic}
\maketitle
\newcommand{\myabstract}
{
Two results are presented concerning the entailment problem in Separation Logic with inductively defined predicate symbols and theory reasoning.
First, we show that the entailment problem is undecidable for rules satisfying the conditions given in \cite{IosifRogalewiczSimacek13}, if theory reasoning is considered. The result holds for a wide class of theories, even with a very low expressive power. For instance it applies to the natural numbers with the successor function, or with the usual order.
Second, we show that every entailment problem can be reduced to an entailment problem containing no equality (neither in the formulas nor in the recursive rules defining the semantics of the predicate symbols).
}
\section{Introduction}
In Separation Logic (see, e.g., \cite{IshtiaqOHearn01,Reynolds02}), recursive data structures are usually specified using inductively defined predicates. The recursive rules defining the semantics of these predicates may be provided by the user. This specification
mechanism is similar to the definition of a recursive data type in an
imperative programming language.
For instance, a nonempty list segment may be specified by using the inductive rules below, where the atom $x \mapsto (y)$ states that the memory location corresponding to $x$ is allocated and refers to $y$.
The symbol $*$ is a special logical connective denoting the disjoint composition of heaps.
\[
\mathtt{ls}(x,y) \Leftarrow x \mapsto (y) \qquad \mathtt{ls}(x,y) \Leftarrow \exists x'~.~ (x \mapsto (x') * \mathtt{ls}(x',y))
\]
Sorted lists with elements inside the interval $[u,v]$ may be specified as follows.
\[
\mathtt{ils}(x,y,u,v) \Leftarrow (x \mapsto (y) \wedge x \leq v \wedge x \geq u)\]
\[
\mathtt{ils}(x,y,u,v) \Leftarrow \exists x'~.~ (x \mapsto (x') * \mathtt{ils}(x',y,x,v) \wedge x \leq v \wedge x \geq u)
\]
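To make the intended meaning of these rules concrete, the following Python sketch (ours, purely illustrative; heaps are represented as dictionaries mapping a location to its successor, and locations double as data values) checks whether a given finite heap realizes $\mathtt{ls}(x,y)$ or $\mathtt{ils}(x,y,u,v)$ exactly.
\begin{verbatim}
# Illustrative check: does the heap (a dict: location -> successor) realize
# ls(x, y), i.e. a nonempty list segment from x to y using *all* of its cells?
def is_ls(heap, x, y):
    cur, used = x, set()
    while True:
        if cur not in heap or cur in used:
            return False
        used.add(cur)
        cur = heap[cur]
        if cur == y:
            return used == set(heap)   # disjoint composition consumes every cell

# ils(x, y, u, v): same shape, with cells non-decreasing and inside [u, v]
# (locations double as data values in this toy model).
def is_ils(heap, x, y, u, v):
    if not is_ls(heap, x, y):
        return False
    cells, cur = [], x
    while True:
        cells.append(cur)
        cur = heap[cur]
        if cur == y:
            break
    return all(u <= c <= v for c in cells) and cells == sorted(cells)

heap = {1: 3, 3: 7, 7: 42}
print(is_ls(heap, 1, 42), is_ils(heap, 1, 42, 0, 10))   # True True
\end{verbatim}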
Many verification tasks can be reduced to an entailment problem in this logic.
For example, to verify that some formula $\phi$ is a loop invariant,
we have to prove that the weakest pre-condition of $\phi$ w.r.t.\ a finite sequence of
transformations is a logical consequence of $\phi$. Since techniques exist for computing such pre-conditions automatically, the problem may be reduced to an entailment problem between two SL formulas.
Entailment problems can also be used to
express typing properties.
For instance one may have to check that the entailment
$\mathtt{ils}(x,y,u,v) \models \mathtt{ls}(x,y)$ holds, i.e., that a sorted list is a list,
that
$v \leq v' \wedge \mathtt{ils}(x,y,u,v) \models \mathtt{ils}(x,y,u,v')$ holds (if $v$ is an upper bound then
any number greater than $v$ is also an upper bound)
or that
$\mathtt{ils}(x,y,u,v) * \mathtt{ils}(y,z,v,v') \models \mathtt{ils}(x,z,u,v')$ (the composition of two sorted lists is a sorted list).
In general, the entailment problem is undecidable
\cite{DBLP:conf/atva/IosifRV14},
and a lot of effort has been devoted to identifying decidable fragments and devising proof procedures, see e.g.,
\cite{berdine-calcagno-ohearn04,CalcagnoYangOHearn01,cook-haase-ouaknine-parkinson-worell11,spen,DBLP:journals/fmsd/EneaLSV17,DemriGalmicheWendlingMery14}.
In particular, a very general class of decidable entailment problems is described in
\cite{IosifRogalewiczSimacek13}.
This fragment does not allow for any theory predicate other than equality and is defined by restricting the form of the inductive rules, which must fulfill $3$ conditions, formally defined below: the {\em progress} condition (every rule allocates a single memory location),
the {\em connectivity} condition (the set of allocated locations has a tree-shaped structure)
and the {\em establishment} condition (every existentially quantified variable is eventually allocated).
More recently, a
$2$-$\mathsf{EXPTIME}$\ algorithm was proposed for such entailments
\cite{PMZ20}, and we showed
in \cite{DBLP:conf/lpar/EchenimIP20}
that this bound is
tight.
To tackle entailments such as those given above, one must be able to combine spatial reasoning with theory reasoning, and
the combination of SL with data constraints has been considered by several authors (see, e.g., \cite{DBLP:conf/cav/PiskacWZ13,DBLP:conf/pldi/Qiu0SM13,DBLP:conf/aplas/PerezR13,DBLP:conf/cade/XuCW17,DBLP:conf/vmcai/Le21}).
It is therefore natural to ask whether the above decidability result extends to the case where
theory reasoning is considered.
In the present paper, we show that this is not the case, even for very simple theories.
More precisely, we establish two new results.
First, we show that the entailment problem is undecidable for rules satisfying the above conditions if theory reasoning is allowed (Theorem \ref{theo:undec}). The result holds for a very wide class of theories,
even for theories with a very low expressive power. For instance, it holds
for the natural numbers with only the successor function,
or with only the predicate $\leq$ (interpreted as usual).
Second, we show that
every entailment can be reduced to an entailment not containing equality (Theorem \ref{theo:elimeq}). The intuition is that all the equality and disequality constraints
can be encoded in the formulas describing the shape of the data structures.
The transformation increases the number of rules exponentially but it increases the size of the rules
only polynomially (hence it preserves the complexity results in \cite{EIP21a}).
This result shows that the addition of the equality predicate does not increase the expressive power. It may be useful
to facilitate the definition of proof procedures for the considered fragment.
\section{Preliminaries}
In this section, we define the syntax and semantics of the fragment of separation logic that is considered in the paper (see for instance \cite{DBLP:conf/csl/OHearnRY01,Reynolds02,IosifRogalewiczSimacek13} for more details).
\paragraph*{Syntax.}
Let ${\cal V}$ be a countably infinite set of {\em variables}.
We consider a set ${\cal P}_{\cal T}$ of
{\em {$\theory$-predicate\xspace}s} (or {\em theory predicates}, denoting relations in an underlying theory of locations) and
a set ${\cal P}_S$ of {\em {spatial predicate\xspace}s}, disjoint from ${\cal P}_{\cal T}$. Each symbol $p\in {\cal P}_{\cal T} \cup {\cal P}_S$ is associated with a unique arity $\ar{p}$.
We assume that ${\cal P}_{\cal T}$ contains in particular two binary symbols $\approx$ and $\not \approx$ and a nullary symbol $\mathtt{false}$.
\begin{definition}
Let $\kappa$ be some fixed
natural number.
The set of {\em {$SL$-formula\xspace}s} (or simply formulas) $\phi$ is inductively defined as follows:
\[\phi := \mathtt{emp} \; \| \; x \mapsto (y_1,\dots,y_\kappa) \; \| \; \phi_1 \vee \phi_2 \; \| \; \phi_1 * \phi_2 \; \| \;
p(x_1,\dots,x_{\ar{p}}) \; \| \; \exists x. ~ \phi_1 \]
where $\phi_1,\phi_2$ are {$SL$-formula\xspace}s, $p\in {\cal P}_{\cal T} \cup {\cal P}_S$ and $x,x_1,\dots,x_{\ar{p}}, y_1,\dots,y_{\kappa}$ are variables.
\end{definition}
A formula of the form $x \mapsto (y_1,\dots,y_\kappa)$ is called a {\em points-to atom}, and a formula
$p(x_1,\dots,x_{\ar{p}})$ with $p\in {\cal P}_S$ is called a {\em predicate atom}. A {\em spatial atom} is either a points-to atom or a predicate atom.
A {\em $\theory$-atom\xspace} is a formula of the form $p(x_1,\dots,x_{\ar{p}})$ with $p\in {\cal P}_{\cal T}$.
An {\em atom} is either a spatial atom or a $\theory$-atom\xspace.
A {\em $\theory$-formula\xspace} is either $\mathtt{emp}$ or a separating conjunction of {$\theory$-atom\xspace}s.
A formula of the form $\exists x_1.\dots.\exists x_n.~\phi$ (with $n \geq 0$) is denoted by
$\exists \vec{x}.~\phi$, where $\vec{x} = (x_1,\dots,x_n)$.
A formula is {\em predicate-free} (resp.\ {\em disjunction-free}, resp. {\em quantifier-free}) if it contains no predicate symbol in ${\cal P}_S$ (resp.\ no occurrence of $\vee$, resp.\ of $\exists$).
It is in {\em prenex form} if it is of the form $\exists \vec{x}. \phi$, where $\phi$ is quantifier-free and $\vec{x}$ is a possibly empty vector of variables.
A {\em symbolic heap} is a prenex disjunction-free formula, i.e., a formula of the form $\exists \vec{x}. \phi$, where $\phi$ is a separating conjunction of atoms.
Let $\fv{\phi}$ be the set of variables freely occurring in $\phi$.
We assume (using $\alpha$-renaming if needed) that all existential variables are distinct from free variables and that
distinct occurrences of quantifiers bind distinct variables.
A {\em substitution} $\sigma$ is a function mapping variables to variables. The {\em domain} $\dom{\sigma}$ of a substitution $\sigma$ is the set of variables $x$
such that $\sigma(x) \not = x$,
and we let $\img{\sigma} = \sigma(\dom{\sigma})$. For any expression (variable, tuple of variables or formula) $e$, we denote by $e\sigma$ the expression obtained from $e$ by replacing
every free occurrence of a variable $x$ by $\sigma(x)$ and by $\replall{}{x_i}{y_i}{1 \leq i \leq n}$ (where the $x_1,\dots,x_n$ are pairwise distinct) the substitution
such that $\sigma(x_i) = y_i$ and $\dom{\sigma} \subseteq \{ x_1,\dots,x_n \}$.
For all sets $E$,
$\card{E}$
is the cardinality of $E$.
For all sequences or words $w$,
$\len{w}$ denotes the length of $w$.
We sometimes identify vectors with sets, if the order is unimportant, e.g., we may write $\vec{x} \setminus \vec{y}$ to denote the vector formed by the components of $\vec{x}$ that do not occur in $\vec{y}$.
We assume that the symbols in ${\cal P}_S \cup {\cal P}_{\cal T} \cup {\cal V}$ are words
over a finite alphabet of some constant size, strictly greater than $1$. For any expression $e$, we denote by $\size{e}$
the size of $e$, i.e., the number of occurrences of
symbols\footnote{Each symbol $s$ in ${\cal P}_S \cup {\cal P}_{\cal T} \cup {\cal V}$ is counted with a weight equal to its length $\len{s}$, and all the logical symbols have weight $1$.} in $e$.
We define the {\em width} of a formula as follows:
{\small
\[
\begin{tabular}{llllll}
$\widt{\phi_1 \vee \phi_2}$ & $ = $ & $\max(\widt{\phi_1},\widt{\phi_2})$ & \qquad
$\widt{\exists x. \phi}$ & $=$ & $\widt{\phi} + \size{\exists x}$ \\
$\widt{\phi_1 * \phi_2}$ & $ = $ & $\widt{\phi_1} + \widt{\phi_2} + 1$ \\
$\widt{\phi}$ & $ = $ & $\size{\phi}$ \quad if $\phi$ is an atom \\
\end{tabular}
\]
}
Note that $\widt{\phi}$ coincides with $\size{\phi}$ if $\phi$ is disjunction-free.
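As a sanity check on these two measures, the following Python sketch (ours, schematic: atoms are treated as opaque and carry their size directly, and a quantified variable $x$ contributes $1 + \len{x}$, as in the footnote above) computes $\size{\phi}$ and $\widt{\phi}$ on a simple abstract syntax.
\begin{verbatim}
# Schematic computation of size and width.  Formulas: ('atom', n) where n is
# the size of the atom, ('or', f, g), ('star', f, g), ('exists', x, f); an
# identifier contributes its length and each logical symbol contributes 1.
def size(phi):
    tag = phi[0]
    if tag == 'atom':
        return phi[1]
    if tag in ('or', 'star'):
        return size(phi[1]) + size(phi[2]) + 1
    return size(phi[2]) + 1 + len(phi[1])           # exists

def width(phi):
    tag = phi[0]
    if tag == 'atom':
        return phi[1]
    if tag == 'or':
        return max(width(phi[1]), width(phi[2]))
    if tag == 'star':
        return width(phi[1]) + width(phi[2]) + 1
    return width(phi[2]) + 1 + len(phi[1])          # exists

f = ('or', ('star', ('atom', 3), ('atom', 2)), ('exists', 'x', ('atom', 4)))
print(size(f), width(f))   # size sums over disjuncts, width maximizes over them
\end{verbatim}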
\paragraph*{Inductive Rules.}
\newcommand{\dependson}[1]{\geq_{#1}}
The semantics of the predicates in ${\cal P}_S$ is given
by user-defined inductive rules. To ensure decidability in the case where the theory only contains the equality predicate, these rules must satisfy some additional conditions (defined in \cite{IosifRogalewiczSimacek13}):
\begin{definition}
\label{def:sid}
A (progressing and connected) set of inductive rules (pc-SID\xspace) ${\cal R}$
is a finite set of rules of the form
\( p(x_1,\dots,x_n) \Leftarrow \exists \vec{u}. ~ x_1 \mapsto (y_1,\dots,y_\kappa)
* \phi\)
where $\fv{x_1 \mapsto (y_1,\dots,y_\kappa)
* \phi} \subseteq \{x_1, \ldots, x_n\} \cup \vec{u}$,
$\phi$ is a possibly empty separating conjunction of predicate atoms and {$\theory$-formula\xspace}s, and for every predicate atom $q(z_1,\dots,z_{\ar{q}})$
occurring in $\phi$, we have $z_1 \in \{ y_1,\dots,y_\kappa\}$.
We let $\size{p(\vec{x}) \Leftarrow \phi} = \size{p(\vec{x})} + \size{\phi}$,
$\size{{\cal R}} = \Sigma_{\rho\in {\cal R}} \size{\rho}$
and $\widt{{\cal R}} = \max_{\rho \in {\cal R}} \size{\rho}$.
\end{definition}
In the following, ${\cal R}$ always denotes a pc-SID\xspace.
Note that the right-hand side of every inductive rule contains exactly one
points-to atom, the left-hand side of which is the first argument $x_1$ of the predicate symbol (this condition is referred to as the {\em progress} condition), and
that this points-to atom contains the first argument of every predicate atom on the right-hand side of the rule (the {\em connectivity} condition).
\begin{definition}
\label{def:unfold}
We write $p(x_1,\dots,x_{\ar{p}}) \unfoldto{{\cal R}} \phi$
if ${\cal R}$ contains a rule (up to $\alpha$-renaming)
$p(y_1,\dots,y_{\ar{p}}) \Leftarrow \phiB$, where $x_1,\dots,x_{\ar{p}}$ are not bound in $\phiB$, and
$\phi = \replall{\phiB}{y_i}{x_i}{i \in \interv{1}{{\ar{p}}}}$.
The relation $\unfoldto{{\cal R}}$ is extended to all formulas as follows:
$\phi \unfoldto{{\cal R}} \phi'$ if one of the following conditions holds:
(i) $\phi = \phi_1 \bullet \phi_2$ (modulo AC, with $\bullet \in \{ *, \vee \}$), $\phi_1 \unfoldto{{\cal R}} \phi_1'$, no free or existential variable in $\phi_2$ is bound in $\phi_1'$
and
$\phi' = \phi_1' \bullet \phi_2$;
or (ii) $\phi = \exists x.~ \phiB$, $\phiB \unfoldto{{\cal R}} \phiB'$, $x$ is not bound in $\phiB'$
and
$\phi' = \exists x.~ \phiB'$.
We denote by $\unfoldto{{\cal R}}^+$ the transitive closure of $\unfoldto{{\cal R}}$, and by $\unfoldto{{\cal R}}^*$ its reflexive and transitive closure.
A formula $\phiB$ such that $\phi \unfoldto{{\cal R}}^* \phiB$ is called an {\em ${\cal R}$-unfolding} of
$\phi$.
We denote by $\dependson{{\cal R}}$ the least transitive and reflexive binary relation on ${\cal P}_S$
such that $p \dependson{{\cal R}} q$ holds if
${\cal R}$ contains a rule of the form $p(y_1,\dots,y_{\ar{p}}) \Leftarrow \phiB$, where $q$ occurs in $\phiB$. If $\phi$ is a formula, we write $\phi \dependson{{\cal R}} q$ if $p \dependson{{\cal R}} q$ for some $p \in {\cal P}_S$ occurring in $\phi$.
\end{definition}
\paragraph*{Semantics.}
\begin{definition}
Let ${\cal L}$ be a countably infinite set of
{\em locations}.
An {\em SL-structure} is a pair $(\mathfrak{s},\mathfrak{h})$ where
$\mathfrak{s}$ is a {\em store}, i.e.\ a total function from ${\cal V}$ to ${\cal L}$, and
$\mathfrak{h}$ is a {\em heap}, i.e.\ a partial finite function from ${\cal L}$ to ${\cal L}^\kappa$ (written as a relation: $\mathfrak{h}(\ell) = (\ell_1,\dots,\ell_\kappa)$ iff $(\ell,\ell_1,\dots,\ell_\kappa) \in \mathfrak{h}$).
The {\em size} of a structure $(\mathfrak{s},\mathfrak{h})$ is the cardinality of $\dom{\mathfrak{h}}$.
\end{definition}
For every heap $\mathfrak{h}$, we define: $\locs{\mathfrak{h}} = \{ \ell_i \mid (\ell_0,\dots,\ell_\kappa) \in \mathfrak{h}, i = 0,\dots,\kappa \}$.
A location $\ell$ (resp.\ a variable $x$) is {\em allocated} in a heap $\mathfrak{h}$ (resp.\ in a structure ($\mathfrak{s},\mathfrak{h}$)) if $\ell \in \dom{\mathfrak{h}}$ (resp.\ $\mathfrak{s}(x)\in \dom{\mathfrak{h}}$).
Two heaps $\mathfrak{h}_1,\mathfrak{h}_2$ are {\em disjoint} if
$\dom{\mathfrak{h}_1} \cap \dom{\mathfrak{h}_2} = \emptyset$,
in this case $\mathfrak{h}_1 \uplus \mathfrak{h}_2$ denotes the
union of $\mathfrak{h}_1$ and $\mathfrak{h}_2$.
Let $\models_{{\cal T}}$ be a satisfiability relation between stores and {$\theory$-formula\xspace}s, satisfying the following properties:
$\mathfrak{s} \models_{{\cal T}} x \approx y$ (resp.\ $\mathfrak{s} \models_{{\cal T}} x \not \approx y$) iff $\mathfrak{s}(x) = \mathfrak{s}(y)$ (resp.\ $\mathfrak{s}(x) \not = \mathfrak{s}(y)$), $\mathfrak{s} \not \models_{{\cal T}} \mathtt{false}$ and
$\mathfrak{s} \models_{{\cal T}} \chi *\chiB$ iff $\mathfrak{s}\models_{{\cal T}} \chi$ and $\mathfrak{s} \models_{{\cal T}} \chiB$.
For all {$\theory$-formula\xspace}s $\chi, \chiB$, we write
$\chi \models_{{\cal T}} \chiB$ if
$\mathfrak{s} \models_{{\cal T}} \chi \implies \mathfrak{s} \models_{{\cal T}} \chiB$ holds for all stores $\mathfrak{s}$.
\begin{definition}
\label{def:semantics}
Given formula $\phi$, a pc-SID\xspace ${\cal R}$ and a structure $(\mathfrak{s},\mathfrak{h})$,
we write $(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phi$ and say that $(\mathfrak{s},\mathfrak{h})$ is an {\em ${\cal R}$-model} (or simply a model if ${\cal R}$ is clear from the context) of $\phi$ if one of the following conditions holds.
\begin{compactitem}
\item{$\phi = x \mapsto (y_1,\dots,y_\kappa)$ and
$\mathfrak{h} = \{ (\mathfrak{s}(x),\mathfrak{s}(y_1),\dots,\mathfrak{s}(y_\kappa)) \}$.}
\item{$\phi$ is a $\theory$-formula\xspace, $\mathfrak{h} = \emptyset$ and $\mathfrak{s} \models_{{\cal T}} \phi$.}
\item{$\phi = \phi_1 \vee \phi_2$ and
$(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phi_i$, for some $i = 1,2$.}
\item{$\phi = \phi_1 * \phi_2$ and there exist disjoint heaps
$\mathfrak{h}_1,\mathfrak{h}_2$ such that
$\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$ and
$(\mathfrak{s},\mathfrak{h}_i) \models_{\asid} \phi_i$, for all $i = 1,2$.}
\item{$\phi = \exists x. ~ \phi_1$ and
$(\mathfrak{s}',\mathfrak{h}) \models_{\asid} \phi_1$, for some store $\mathfrak{s}'$ coinciding with
$\mathfrak{s}$ on all variables distinct from $x$.}
\item{
$\phi = p(x_1,\dots,x_{\ar{p}})$, $p \in {\cal P}_S$ and $(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phiB$ for some $\phiB$ such that
$\phi \unfoldto{{\cal R}} \phiB$.}
\end{compactitem}
If $\Gamma$ is a sequence of formulas, then we write $(\mathfrak{s},\mathfrak{h}) \models_{\asid} \Gamma$ if $(\mathfrak{s},\mathfrak{h})$ satisfies at least one formula in $\Gamma$.
\end{definition}
We emphasize that a $\theory$-formula\xspace is satisfied
only
in structures with empty heaps. This convention is used to simplify notations, because it avoids
having to consider both standard and separating conjunctions.
Note that Definition \ref{def:semantics} is well-founded because of the progress condition: the size of $\mathfrak{h}$ decreases at each recursive call of a predicate atom.
We write $\phi \models_{\asid} \phiB$ if every ${\cal R}$-model of $\phi$ is an ${\cal R}$-model of $\phiB$
and $\phi \equiv_{\asid} \phiB$ if $\phi \models_{\asid} \phiB$ and $\phiB \models_{\asid} \phi$.
Every formula can be transformed into prenex form using the well-known equivalences: $(\exists x. \phi) \bullet \phiB \equiv \exists x. (\phi \bullet \phiB)$, for all $\bullet \in \{ \vee, * \}$, where $x\not \in\fv{\phiB}$.
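As an illustration of Definition \ref{def:semantics}, the following Python sketch (ours; the representation of formulas, stores and heaps is arbitrary) evaluates quantifier-free, predicate-free formulas over a finite heap, with the theory restricted to $\approx$ and $\not\approx$; the separating conjunction is handled by enumerating all splittings of the heap.
\begin{verbatim}
from itertools import combinations

# Formulas: ('emp',), ('pto', x, (y1, ..., yk)), ('eq', x, y), ('neq', x, y),
# ('or', f1, f2), ('star', f1, f2).  Stores map variables to locations and
# heaps map locations to k-tuples of locations, as in the definition above.
def sat(store, heap, phi):
    tag = phi[0]
    if tag == 'emp':
        return heap == {}
    if tag == 'pto':
        _, x, ys = phi
        return heap == {store[x]: tuple(store[y] for y in ys)}
    if tag == 'eq':     # theory atoms only hold in the empty heap
        return heap == {} and store[phi[1]] == store[phi[2]]
    if tag == 'neq':
        return heap == {} and store[phi[1]] != store[phi[2]]
    if tag == 'or':
        return sat(store, heap, phi[1]) or sat(store, heap, phi[2])
    if tag == 'star':   # try every split of the heap into two disjoint parts
        locs = list(heap)
        for r in range(len(locs) + 1):
            for left in combinations(locs, r):
                h1 = {l: heap[l] for l in left}
                h2 = {l: heap[l] for l in locs if l not in left}
                if sat(store, h1, phi[1]) and sat(store, h2, phi[2]):
                    return True
        return False
    raise ValueError(tag)

store = {'x': 1, 'y': 2}
heap = {1: (2,), 2: (1,)}
print(sat(store, heap, ('star', ('pto', 'x', ('y',)), ('pto', 'y', ('x',)))))
\end{verbatim}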
\paragraph*{Establishment.}
The notion of establishment \cite{IosifRogalewiczSimacek13} is defined as follows:
\begin{definition}
A pc-SID\xspace is {\em established}
if
for every atom $\alpha$, every predicate-free formula
$\exists \vec{x}. \phi$ such that
$\alpha \unfoldto{{\cal R}}^* \exists \vec{x}. \phi$ (up to a transformation into prenex form) and $\phi$ is quantifier-free, and every $x\in \vec{x}$, $\phi$ is of the form $x' \mapsto (y_1,\dots,y_\kappa) * \chi * \phiB$, where $\chi$ is a separating conjunction of equations (possibly $\mathtt{emp}$) such that
$\chi \models_{{\cal T}} x \approx x'$.
\end{definition}
In the remainder of the paper, we assume that every considered pc-SID\xspace is established.
\paragraph*{Sequents.}
We consider sequents denoting entailment problems and defined as follows:
\begin{definition}
\label{def:sequent}
A {\em sequent} is an expression of the form $\phi_0 \vdash_{\asid} \phi_1,\dots,\phi_n$, where ${\cal R}$
is a pc-SID\xspace and $\phi_0,\dots,\phi_n$ are formulas.
A sequent is {\em disjunction-free} if $\phi_0,\dots,\phi_n$ are disjunction-free, and {\em established} if ${\cal R}$ is established.
We define:
{\small
\begin{eqnarray*}
\size{\phi_0 \vdash_{\asid} \phi_1,\dots,\phi_n} &=& \Sigma_{i=0}^n \size{\phi_i} + \size{{\cal R}}, \qquad \fv{\phi_1,\dots,\phi_n} = \bigcup_{i=0}^n \fv{\phi_i},\\
\widt{\phi_0 \vdash_{\asid} \phi_1,\dots,\phi_n} &=& \max \{ \widt{\phi_i}, \widt{{\cal R}}, \card{ \bigcup_{i=0}^n \fv{\phi_i}} \mid 0 \leq i \leq n \}.
\end{eqnarray*}}
\end{definition}
\begin{definition}
\label{def:counter_model}
A structure $(\mathfrak{s},\mathfrak{h})$ is a {\em countermodel} of
a sequent $\phi \vdash_{\asid} \Gamma$ iff $\mathfrak{s}$ is injective, $(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phi$
and $(\mathfrak{s},\mathfrak{h}) \not \models_{\asid} \Gamma$.
A sequent is {\em valid} if it has no countermodel.
Two sequents are {\em equivalent} if they are both valid or both non-valid\footnote{Hence two non-valid sequents with different countermodels are equivalent.}.
\end{definition}
The restriction to injective countermodels is for technical convenience only and does not entail any loss of generality.
\section{An Undecidability Result}
\label{sect:undec}
This section contains the main result of the paper. It shows that there is no terminating procedure for checking the validity of
(established) sequents, even for theories with a very low expressive power.
\newcommand{\stopp}[1]{\mathtt{E}_{#1}}
\newcommand{\nextp}[1]{\rightarrow^#1}
\begin{theorem}
\label{theo:undec}
The validity problem is undecidable
for established sequents $\phi \vdash_{\asid} \phiB$ if
${\cal P}_{\cal T}$ contains predicates $S$ and $\overline{S}$, where:
\begin{compactitem}
\item{
$S$ is interpreted as a
relation ${\mathfrak S}$ satisfying the following property: there exists a set of pairwise distinct locations
$\{ \alpha_i, \alpha_i', \alpha_i'' \mid i \in {\Bbb N} \}$ such that for all $i \in {\Bbb N}$, $(\alpha_i,\alpha_i') \in {\mathfrak S}$, $(\alpha_i'',\alpha_i') \not \in {\mathfrak S}$, and
for all locations $\ell \in \{ \alpha_j,\alpha_j', \alpha_j'' \mid j \in {\Bbb N} \}$ if $\alpha_i \not = \ell$, $(\alpha_i,\ell) \in {\mathfrak S}$ and $(\alpha_{i}'',\ell) \not \in {\mathfrak S}$, then $\ell = \alpha_{i}'$;}
\item{
$\overline{S}(x,y)$ and $\neg S(x,y)$ are interpreted equivalently when $x$ and $y$ refer to distinct locations.}
\end{compactitem}
\end{theorem}
For instance, the hypotheses of Theorem \ref{theo:undec} are trivially satisfied on the natural numbers
if ${\mathfrak S}$ is the successor function
or if ${\mathfrak S}$ is the usual order $\leq$ and $\overline{S}$ is $\geq$ (with $\alpha_i = 3i$, $\alpha_i' = 3i+1$, $\alpha_i'' = 3i+2$ in both cases, since $\alpha_i+1 \approx \ell \Rightarrow \alpha_i' \approx \ell$ and $\alpha_i \leq \ell \wedge \alpha_{i}'' > \ell \wedge \ell \not = \alpha_i \Rightarrow \alpha_{i}' \approx \ell$).
More generally, the conditions hold if the domain is infinite and ${\mathfrak S}$ is any injective function $f$ such that $f(x) \not = x$.
In this case, the sequences $\alpha_i,\alpha_i',\alpha_i''$ may be constructed inductively: for every $i \in {\Bbb N}$, $\alpha_i$ is any element $e$ such that both $e$ and $f(e)$ do not occur in $\{ \alpha_j,\alpha_j',\alpha_j'' \mid j < i \}$,
$\alpha_i'$ is $f(\alpha_i)$ and
$\alpha_i''$ is any element not occurring in $\{ \alpha_j,\alpha_j',\alpha_j'' \mid j < i \} \cup \{ \alpha_i,\alpha_i' \}$.
Note that in this case the locations $\alpha_i''$ are actually
irrelevant, but
these locations play an essential r\^ole in the undecidability proof
when ${\mathfrak S}$ is $\leq$.
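When the domain is ${\Bbb N}$ and $f$ is, say, the successor function, the inductive construction above is easy to carry out mechanically; the following Python sketch (ours, purely illustrative) builds the first few triples $\alpha_i,\alpha_i',\alpha_i''$ and checks the properties required by Theorem \ref{theo:undec}.
\begin{verbatim}
# Illustrative construction of the triples alpha_i, alpha_i', alpha_i'' for an
# injective f on the naturals with f(x) != x, following the recipe above
# (S is the graph of f, i.e. (a, b) in S iff b = f(a)).
def build_triples(f, count, bound=10**6):
    used, triples = set(), []
    for _ in range(count):
        a = next(e for e in range(bound) if e not in used and f(e) not in used)
        ap = f(a)                                   # alpha_i' = f(alpha_i)
        app = next(e for e in range(bound)
                   if e not in used and e not in (a, ap))
        used |= {a, ap, app}
        triples.append((a, ap, app))
    return triples

f = lambda x: x + 1
for a, ap, app in build_triples(f, 5):
    # (alpha_i, alpha_i') in S, (alpha_i'', alpha_i') not in S, all distinct
    assert f(a) == ap and f(app) != ap and len({a, ap, app}) == 3
print(build_triples(f, 5))   # [(0, 1, 2), (3, 4, 5), (6, 7, 8), ...]
\end{verbatim}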
The remainder of the section is devoted to the proof of Theorem \ref{theo:undec}.
\begin{proof}
The proof goes by a reduction from the Post Correspondence Problem (PCP).
We recall that the PCP consists in determining, given two sequences of words
$u = (u_1,\dots,u_n)$ and $v = (v_1,\dots,v_n)$, whether there exists a nonempty sequence $(i_1,\dots,i_k)\in \interv{1}{n}^k$ such that
$u_{i_1}.\dots.u_{i_k} = v_{i_1}.\dots.v_{i_k}$. It is well-known that this problem is undecidable.
We assume, w.l.o.g., that
$\len{u_i} > 1$
and $\len{v_i} > 1$
for all $i \in \interv{1}{n}$.
A word $w$ such that $w = u_{i_1}.\dots.u_{i_k} = v_{i_1}.\dots.v_{i_k}$ is called a {\em witness}.
Positions inside words of the sequences $(u_1,\dots,u_n)$ and $(v_1,\dots,v_n)$ will be denoted by pairs $(i,j)$,
encoding the $j$-th character of the word $u_i$ or $v_i$. More formally, if $p = (i,j)$ and $w \in \{u,v\}$, then we denote by $w(p)$ the $j$-th symbol of the word $w_i$, provided this symbol is defined. We write $p \triangleleft q$ if
both $u(p)$ and $v(q)$ are defined and $u(p) = v(q)$.
Let $m = \max \{ \len{u_i}, \len{v_i} \mid i \in \interv{1}{n} \}$. We denote by
$\mathtt{P}$ the {set of} pairs of the form $(i,j)$ with $i\in \interv{1}{n}$ and $j \in \interv{1}{m}$, and by
$\mathtt{B}$ the {set of} pairs of the form $(i,1)$. For $w\in \myset{u,v}$, we denote by $\stopp{w}$ the set of pairs of the form $(i,\len{w_i})$, where $i\in \interv{1}{n}$, and we write $(i,j) \nextp{w} (i',j')$ if
either $i' = i$, $j< \len{w_i}$ and $j' = j+1$,
or $j = \len{w_i}$ and $j' = 1$.
Note that $i'$ is arbitrary in the latter case (intuitively
$(i,j) \nextp{w} (i',j')$ states that the character corresponding to the position $(i,j)$ may be followed in a witness by
the character at position $(i',j')$).
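The bookkeeping on positions is purely combinatorial; the following Python sketch (ours) implements $w(p)$, the relation $p \triangleleft q$ and the successor relation $\rightarrow^{w}$ on a small, arbitrarily chosen instance of the PCP.
\begin{verbatim}
# Positions are pairs (i, j), 1-based: the j-th letter of u_i or v_i.
u = ["ab", "ba"]                  # an arbitrary toy instance (|u_i|, |v_i| > 1)
v = ["aa", "bb"]

def letter(w, p):                 # w(p): the letter at position p, if defined
    i, j = p
    return w[i - 1][j - 1] if j <= len(w[i - 1]) else None

def matches(p, q):                # p <| q : u(p) and v(q) defined and equal
    return letter(u, p) is not None and letter(u, p) == letter(v, q)

def successors(w, p):             # the relation (i, j) ->^w (i', j')
    i, j = p
    if j < len(w[i - 1]):
        return [(i, j + 1)]
    return [(ip, 1) for ip in range(1, len(w) + 1)]   # i' arbitrary here

print(matches((1, 1), (1, 1)))    # u(1,1) = v(1,1) = 'a': True
print(successors(u, (1, 2)))      # end of u_1: [(1, 1), (2, 1)]
\end{verbatim}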
Let $\vec{v}$ be a vector of variables, where all elements $p \in \mathtt{P}$
are associated with pairwise distinct variables in $\vec{v}$. To simplify notations, we will also denote by $p$ the variable associated with $p$. We assume the vector
$\vec{v}$ also contains a special variable $\mathrm{\bot}$, distinct from the variables $p \in \mathtt{P}$.
We construct a representation of potential witnesses as heaps.
The encoding is given for $\kappa = 6$, although in principle this encoding could be defined with $\kappa = 2$
by encoding tuples as binary trees.
Witnesses are encoded by linked lists, with links on the last argument,
and starting with a dummy element $(\mathrm{\bot},\dots,\mathrm{\bot})$. Except for the first dummy element, each location
in the list
refers to two locations associated with pairs $p,q \in \mathtt{P}$ denoting positions inside the two sequences
$u_1,\dots,u_n$ and $v_1,\dots,v_n$ respectively, and to three additional allocated locations the r\^ole of which will be detailed below.
{\small
\[
\begin{tabular}{rclr}
$\mathtt{ls}ol(x,\vec{v})$ & $\Leftarrow$ & $\exists x'. ~ x \mapsto (\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},x')
* \mathtt{ls}ol_{p,p}(x', \vec{v})$ \\
& & \qquad where $p \in \mathtt{B}$ \\
$\mathtt{ls}ol_{p,q}(x,\vec{v})$ & $\Leftarrow$ & $\exists x',y,z,u. ~ x \mapsto (p,q,y,z,u,x') * \mathtt{ls}ol_{p',q'}(x',\vec{v}) * P(y,\mathrm{\bot}) * P(z,\mathrm{\bot})$ \\
& & \quad $*\ P(u,\mathrm{\bot})$ \qquad where $p\triangleleft q$, $p \nextp{u} p'$ and $q \nextp{v} q'$\\
$\mathtt{ls}ol_{p,q}(x,\vec{v})$ & $\Leftarrow$ & $\exists y,z,u. ~ x \mapsto (p,q,y,z,u,\mathrm{\bot}) * P(y,\mathrm{\bot}) * P(z,\mathrm{\bot})$ \\
& & $*\ P(u,\mathrm{\bot})$ \qquad where $p = (i,\len{u_i})$, $q = (i,\len{v_i})$, and $p \triangleleft q$ \\
$P(x,y)$ & $\Leftarrow$ & $ x \mapsto (y,y,y,y,y,y)$ \\
\end{tabular}
\]
}
\newcommand{\setof}[2]{\left\{#1\,\|\:#2\right\}}
\newcommand{\ellg}[2]{\ell^{#1}_{#2}}
By construction, the structures that validate $\mathtt{ls}ol(x,\vec{v})$ are of the form $(\mathfrak{s}, \mathfrak{h})$, where the store $\mathfrak{s}$ satisfies the following conditions:
\begin{compactitem}
\item $\mathfrak{s}(x) = \ell$ and $\mathfrak{s}(\mathrm{\bot}) = \ell'$;
\item for all $i = 1,\dots,m'$, $\mathfrak{s}(p_i) = \ellg{p}{i}$ and $\mathfrak{s}(q_i) = \ellg{q}{i}$, where $p_i,q_i \in \mathtt{P}$ are such that $p_i \triangleleft q_i$, $p_i \nextp{u} p_{i+1}$ and $q_i \nextp{v} q_{i+1}$,
\end{compactitem}
and the heap $\mathfrak{h}$ is of the form (with $\ell_{m'+1} = \ell'$):
\begin{eqnarray*}
\mathfrak{h}\ =\ \myset{(\ell,\ell',\ell',\ell',\ell',\ell',\ell_1)}
& \cup
& \setof{(\ell_i,\ellg{p}{i},\ellg{q}{i},\ellg{y}{i},\ellg{z}{i},\ellg{u}{i},\ell_{i+1})}{i = 1, \ldots, m'}\\
& \cup & \setof{(\ellg{y}{i},\ell',\ell',\ell',\ell',\ell',\ell')}{i = 1,\ldots, m'}\\
& \cup & \setof{(\ellg{z}{i},\ell',\ell',\ell',\ell',\ell',\ell')}{i = 1,\ldots, m'}\\
& \cup & \setof{(\ellg{u}{i},\ell',\ell',\ell',\ell',\ell',\ell')}{i = 1,\ldots, m'}.
\end{eqnarray*}
Still by construction, we have $p_1 = q_1 \in \mathtt{B}$, $p_{m'} \in \stopp{u}$ and $q_{m'} \in \stopp{v}$, and there exists $i$ such that $p_{m'}$ and $q_{m'}$ are of the form $(i,\len{u_i})$ and $(i,\len{v_i})$,
respectively.
This entails that the words $u(p_1).\dots.u(p_{m'})$ and
$v(q_1).\dots.v(q_{m'})$
are of the form
$u_{i_1}.\dots.u_{i_k}$
and
$v_{j_1}.\dots.v_{j_{k'}}$,
for some sequences
$(i_1,\dots,i_k)$ and $(j_1,\dots,j_{k'})$ of elements in $\interv{1}{n}$, {and both words are identical}.
However the sequences $(i_1,\dots,i_k)$
and
$(j_1,\dots,j_{k'})$ may be distinct (but we have $i_1 = j_1$ and $i_k = j_{k'}$).
To check that the PCP has a solution, we must therefore verify that there exists a structure of the form above such that
$(i_1,\dots,i_k) = (j_1,\dots,j_{k'})$. To this purpose, we introduce predicates that are satisfied when this condition does not hold, i.e., such that either $k\neq k'$ or $i_l \neq j_{l}$ for some $l \in \interv{2}{k-1}$.
This is done by using the additional locations $\ellg{y}{i}$, $\ellg{z}{i}$ and $\ellg{u}{i}$
to relate the indices $I_l = \len{u_{i_1}.\dots.u_{i_{l-1}}}+1$
and $J_l = \len{v_{j_1}.\dots.v_{j_{l-1}}}+1$, corresponding to the beginning of the words
$u_{i_l}$ and $v_{j_l}$ respectively in $u_{i_1}.\dots.u_{i_k}$
and
$v_{j_1}.\dots.v_{j_{k'}}$.
The predicates relate the locations of the form $\ellg{y}{i}$, $\ellg{z}{i}$ and $\ellg{u}{i}$ using the relation ${\mathfrak S}$.
More precisely, they are associated with rules that guarantee that all
the countermodels of the right-hand side of the sequent will satisfy the following properties:
\begin{compactenum}
\item
$k = k'$
and for all $l \in \interv{1}{k}$, $(\ellg{y}{I_l},\ellg{z}{J_l}) \in {\mathfrak S}$
and $(\ellg{u}{I_l},\ellg{z}{J_l}) \not\in {\mathfrak S}$.
\item
$i_l = j_l$ for $1\leq l\leq k$.
\end{compactenum}
Note that the hypothesis of the theorem ensures that
the locations $\ellg{y}{i}$,
$\ellg{z}{i}$ and $\ellg{u}{i}$
can be chosen in such a way that
there is a {\em unique} location $\ellg{z}{i}$ satisfying $(\ellg{y}{I_l},\ellg{z}{i}) \in {\mathfrak S} \wedge (\ellg{u}{I_l}, \ellg{z}{i}) \notin {\mathfrak S}$,
thus property $1$ above can be used to relate the indices $I_l$ and $J_l$, which, in turn, allows us to enforce property $2$.
Two predicates are used to guarantee that condition 1 holds for the countermodels. Predicate $SbFirstCell$ is satisfied by those structures for which the condition is not satisfied for $l = 1$,
and $SbInnerCell$ is satisfied for those structures for which either $k\neq k'$ or there is
an $l \in \interv{1}{k-1}$ such that the condition is satisfied at $l$, but not at $l+1$.
Thus the structures that satisfy $\mathtt{ls}ol(x,\vec{v})$ and that are countermodels of the disjunction of $SbFirstCell$ and $SbInnerCell$ are exactly the structures for which $k = k'$ and
$(\ellg{y}{I_l},\ellg{z}{J_l}) \in {\mathfrak S} \wedge (\ellg{u}{I_l},\ellg{z}{J_l}) \notin {\mathfrak S}$
for
$l = 1, \ldots, k$. Predicate $C$ is then used to guarantee that for all $1\leq l \leq k$, $p_{I_l} = q_{J_l}$: this predicate is satisfied by the structures for which there is an $l$ such that
$(\ellg{y}{I_l},\ellg{z}{J_l}) \in {\mathfrak S} \wedge (\ellg{u}{I_l},\ellg{z}{J_l}) \notin {\mathfrak S}$
holds
but $p_{I_l} \neq q_{J_l}$.
We first give the rules for predicate $SbFirstCell$. For conciseness, we allow for disjunctions in the right-hand side of the rules (they can be eliminated by transformation into disjunctive normal form).
\[
\begin{tabular}{rclr}
$SbFirstCell(x,\vec{v})$ & $\Leftarrow$ & $\exists x'. ~ x \mapsto (\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},x') * SbFirstCell'(x',\vec{v})$ \\
$SbFirstCell'(x,\vec{v})$ & $\Leftarrow$ & $\exists x',y,z,u. ~ x \mapsto (p,q,y,z,u,x') * \mathtt{ls}ol_{p',q'}(x',\vec{v}) * P(y,\mathrm{\bot}) $ \\
& & $\qquad *\ P(z,\mathrm{\bot}) * P(u,\mathrm{\bot}) * (\overline{S}(y,z) \vee S(u,z))$, for every $p,q,p',q'\in \mathtt{P}$
\end{tabular}
\]
Note that since $y,z,u$ are allocated in distinct predicates, they must be distinct,
hence
$\overline{S}(y,z)$ is equivalent to $\neg S(y,z)$ and
$S(u,z)$ is equivalent to $\neg \overline{S}(u,z)$.
We now define the rules for predicate $SbInnerCell$, which is meant to ensure that
the condition ``$(\ellg{y}{I_l},\ellg{z}{J_l}) \in {\mathfrak S}$
and
$(\ellg{u}{I_l},\ellg{z}{J_l}) \not \in {\mathfrak S}$'' ($\dagger$) propagates, i.e., that if it holds for some $l$
then it also holds for $l+1$.
This predicate has additional parameters $y,y',z,z',u,u'$ corresponding to the
locations $\ellg{y}{I_l},\ellg{y}{I_{l+1}},\ellg{z}{J_l},\ellg{z}{J_{l+1}},\ellg{u}{I_l},\ellg{u}{I_{l+1}}$
which ``break'' the propagation of $(\dagger)$. By definition $y,y',z,z',u,u'$ must be chosen in such a way that the $\theory$-formula\xspace
$S(y,z) * \overline{S}(u,z) * (\overline{S}(y',z') \vee S(u',z'))$ holds.
The predicates $SbInnerCell_{a,b}$ with $a,b\in \{0,1,2\}$ allocate all the locations $\ell_1,\dots,\ell_{m'}$
and in particular the ``faulty'' locations associated with $y,y',z,z',u,u'$.
Intuitively,
$a$ (resp.\ $b$) denotes the number of variables in $\{ y,y' \}$
(resp.\ $\{ z,z' \}$)
that have been allocated.
The rules for predicates $SbInnerCell_{a,b}$ are meant to guarantee that the following conditions are satisfied for variables $y$ and $y'$ (similar constraints hold for $z$ and $z'$):
\begin{itemize}
\item $y$ is allocated before $y'$,
\item $y$ is allocated for a variable $p$ corresponding to the beginning of a word ($p\in \mathtt{B}$),
\item when $y$ has been allocated, no variable $p\in \mathtt{B}$ can occur on the right-hand side of a points-to atom until $y'$ is allocated.
\end{itemize}
Several cases are distinguished depending on whether the locations associated with $y$ and $z$ (resp.\ $y'$ and $z'$) are in the same heap image of a location or not.
Note that $u$ and $u'$ are allocated in the same rules as $y$ and $y'$ respectively.
The predicate $SbInnerCell$ also tackles the case where $k \not = k'$. This corresponds to the case where
the recursive calls end with $SbInnerCell_{1,2}$ or $SbInnerCell_{2,1}$ in the last rule below,
meaning that $(\dagger)$ holds for some $i$, with either $i = k$ and $i < k'$ or $i = k'$ and $i < k$.
For the sake of conciseness and readability, we denote by $\vec{w}$ the vector of variables $\vec{v},y,y',z,z',u,u'$ in the rules below. We also denote by $\aform'(y,z,u)$ the formula $P(y,\mathrm{\bot}) * P(z,\mathrm{\bot}) * P(u,\mathrm{\bot})$.
{\small
\[
\begin{tabular}{rclr}
$SbInnerCell(x,\vec{w})$ & $\Leftarrow$ & $\exists x'.~ x \mapsto (\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},x') * SbInnerCell_{0,0}(x',\vec{w})$ \\
& & $\qquad * S(y,z) * \overline{S}(u,z) * (\overline{S}(y',z') \vee S(u',z'))$ \\
$SbInnerCell_{a,b}(x,\vec{w})$ & $\Leftarrow$ & $\exists x',y'',z'',u''. x \mapsto (p,q,y'',z'',u'',x') * SbInnerCell_{a,b}(x',\vec{w}) * \aform'(y'',z'',u'') $ \\
&& \quad
if ($a \not = 1$ or $p \not \in \mathtt{B}$) and ($b \not = 1$ or $q \not \in \mathtt{B}$) \\
$SbInnerCell_{0,0}(x,\vec{w})$ & $\Leftarrow$ & $\exists x'. x \mapsto (p,q,y,z,u,x') * SbInnerCell_{1,1}(x',\vec{w}) * \aform'(y,z,u)$ \\
& & \quad if $p,q \in \mathtt{B}$ \\
$SbInnerCell_{0,1}(x,\vec{w})$ & $\Leftarrow$ & $\exists x'. x \mapsto (p,q,y,z',u,x') * SbInnerCell_{1,2}(x',\vec{w}) * \aform'(y,z',u)$ \\
& & \quad if $p,q \in \mathtt{B}$ \\
$SbInnerCell_{1,0}(x,\vec{w})$ & $\Leftarrow$ & $\exists x'. x \mapsto (p,q,y',z,u',x') * SbInnerCell_{2,1}(x',\vec{w}) * \aform'(y',z,u')$ \\
& & \quad if $p,q \in \mathtt{B}$ \\
$SbInnerCell_{1,1}(x,\vec{w})$ & $\Leftarrow$ & $\exists x'. x \mapsto (p,q,y',z',u',x') * SbInnerCell_{2,2}(x',\vec{w}) * \aform'(y',z',u')$ \\
& & \quad if $p,q \in \mathtt{B}$ \\
$SbInnerCell_{0,b}(x,\vec{w})$ & $\Leftarrow$ & $\exists x',z''. x \mapsto (p,q,y,z'',u,x') * SbInnerCell_{1,b}(x',\vec{w}) * \aform'(y,z'',u)$ \\
& & \quad if $p \in \mathtt{B}$ and ($b \not = 1$ or $q \not \in \mathtt{B}$) \\
$SbInnerCell_{1,b}(x,\vec{w})$ & $\Leftarrow$ & $\exists x',z''. x \mapsto (p,q,y',z'',u',x') * SbInnerCell_{2,b}(x',\vec{w}) * \aform'(y',z'',u')$ \\
& & \quad if $p \in \mathtt{B}$ and ($b \not = 1$ or $q \not \in \mathtt{B}$) \\
$SbInnerCell_{a,0}(x,\vec{w})$ & $\Leftarrow$ & $\exists x',y'',u''. x \mapsto (p,q,y'',z,u'',x') * SbInnerCell_{a,1}(x',\vec{w}) * \aform'(y'',z,u'')$ \\
& & \quad if $q \in \mathtt{B}$ and ($a \not = 1$ or $p \not \in \mathtt{B}$)\\
$SbInnerCell_{a,1}(x,\vec{w})$ & $\Leftarrow$ & $\exists x',y'',u''. x \mapsto (p,q,y'',z',u'',x') * SbInnerCell_{a,2}(x',\vec{w}) * \aform'(y'',z',u'')$ \\
& & \quad if $q \in \mathtt{B}$ and ($a \not = 1$ or $p \not \in \mathtt{B}$) \\
$SbInnerCell_{a,b}(x,\vec{w})$ & $\Leftarrow$ & $\exists y'',z'',u''. x \mapsto (p,q,y'',z'',u'',\mathrm{\bot}) * \aform'(y'',z'',u'')$ \\
& & if $(a,b) \in \{ (2,2), (2,1), (1,2) \}$
\end{tabular}
\]
}
A straightforward induction shows that
if the considered structure does not satisfy the formula
$SbFirstCell(x,\vec{v}) \vee \exists y,z,y',z',u,u'.SbInnerCell(x,\vec{w})$
then necessarily $k = k'$ and for all
$l \in \interv{1}{k}$, we have $(\ellg{y}{I_l},\ellg{z}{J_l}) \in {\mathfrak S} \wedge (\ellg{u}{I_l},\ellg{z}{J_l}) \not \in {\mathfrak S}$.
It remains to check that $p_{I_i} = q_{J_i}$
for all $i \in \interv{1}{k}$. To this aim, we design an atom $C(x,\vec{v})$ that will be satisfied by structures not validating this condition, assuming
the condition $(\dagger)$ above is fulfilled.
This predicate allocates the location $\ell$ and introduces existential
variables $y,z,u$
denoting the faulty locations $\ellg{y}{I_i}, \ellg{z}{J_i}$ and
$\ellg{u}{I_i}$,
i.e., the locations corresponding to the index $i$ such that
$p_{I_i} \not = q_{J_i}$.
By $(\dagger)$, these locations must be chosen in such a way that the constraints
$S(y,z)$ and $\overline{S}(u,z)$ are satisfied. The predicate $C(x,\vec{v})$ also guesses pairs $p,q$ such that $p \not = q$ (denoting the distinct pairs
$p_{I_i}$ and $q_{J_i}$) and invokes the predicate $C_{p,q}^{0,0}$ to allocate all the remaining locations.
As for the previous rules, the predicates $C_{p,q}^{a,b}$, for $p,q \in \mathtt{B}$ and
$a,b\in \{0,1 \}$, allocate $\ell_1,\dots,\ell_{m'}$, where $a$ (resp.\ $b$)
denotes the number of variables in $\{ y \}$ (resp. $\{ z\}$) that have already been allocated.
In the rules below, we denote by $\vec{u}$ the vector $\vec{v},y,z,u$.
In all the rules we have $p',q'\in \mathtt{P}$.
{\small
\[
\begin{tabular}{rclr}
$C(x,\vec{v})$ & $\Leftarrow$ & $\exists x',y,z,u. ~ x \mapsto (\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},\mathrm{\bot},x') * C_{p,q}^{0,0}(x',\vec{u}) * S(y,z) * \overline{S}(u,z)$ \\
& & \quad if $p \not = q$ and $p,q \in \mathtt{B}$ \\
$C_{p,q}^{a,b}(x,\vec{u})$ & $\Leftarrow$ & $\exists x',y'',z'',u''. x \mapsto (p',q',y'',z'',u'',x') * C_{p,q}^{a,b}(x',\vec{u})$ \\
& & $\qquad *\ \aform'(y'',z'',u'')$ \\
$C_{p,q}^{0,0}(x,\vec{u})$ & $\Leftarrow$ & $\exists x'. x \mapsto (p,q,y,z,u,x') * C_{p,q}^{1,1}(x',\vec{u}) * \aform'(y,z,u)$ \\
$C_{p,q}^{0,b}(x,\vec{u})$ & $\Leftarrow$ & $\exists x',z''. x \mapsto (p,q',y,z'',u,x') * C_{p,q}^{1,b}(x',\vec{u}) * \aform'(y,z'',u)$ \\
$C_{p,q}^{a,0}(x,\vec{u})$ & $\Leftarrow$ & $\exists x',y'',u''. x \mapsto (p',q,y'',z,u'',x') * C_{p,q}^{a,1}(x',\vec{u}) * \aform'(y'',z,u'')$ \\
$C_{p,q}^{1,1}(x,\vec{u})$ & $\Leftarrow$ & $\exists y'',z'',u''. x \mapsto (p',q',y'',z'',u'',\mathrm{\bot}) * \aform'(y'',z'',u'')$ \\
\end{tabular}
\]
}
The PCP has a solution iff
the sequent
\[\mathtt{ls}ol(x,\vec{v}) \vdash_{\asid} SbFirstCell(x,\vec{v}), \exists y,z,y',z',u,u'.SbInnerCell(x,\vec{w}), C(x,\vec{u})\]
has a countermodel.
Indeed, if a structure satisfying the atom $\mathtt{ls}ol(x,\vec{v})$ but not the disjunction $SbFirstCell(x,\vec{v}) \vee \exists y,z,y',z',u,u'.SbInnerCell(x,\vec{w}) \vee C(x,\vec{u})$
exists, then as explained above, there exists a word
$u_{i_1}.\dots.u_{i_k} = v_{j_1}.\dots.v_{j_{k'}}$,
with $(i_1,\dots,i_k) = (j_1,\dots,j_{k'})$.
Conversely, if a solution of the PCP exists, then by using the locations $\alpha_l,\alpha_l',\alpha_l''$ in the hypothesis of the theorem as $\ellg{y}{I_l},\ellg{z}{J_l},\ellg{u}{I_l}$ it is easy
to construct
a structure satisfying $\mathtt{ls}ol(x,\vec{v})$.
Further, by hypothesis, since $(\alpha_l,\alpha_l') \in {\mathfrak S}$ and $(\alpha_l'',\alpha_l') \not\in {\mathfrak S}$,
we have $(\ellg{y}{I_l},\ellg{z}{J_l}) \in {\mathfrak S}$ and $(\ellg{u}{I_l},\ellg{z}{J_l}) \notin {\mathfrak S}$
for all $l = 1,\dots,k$.
Thus $SbFirstCell(x,\vec{v})$ and $\exists y,z,y',z',u,u'.SbInnerCell(x,\vec{w})$ do not hold.
To fulfill $\neg C(x,\vec{u})$ we have to ensure that, for all $i,j\in \interv{1}{k}$,
we have $(\ellg{y}{I_i},\ellg{z}{J_j}) \in {\mathfrak S} \wedge (\ellg{u}{I_i},\ellg{z}{J_j}) \not \in {\mathfrak S} \implies
p_{I_i} = q_{J_i}$.
Since the considered word is a solution of the PCP, we have $p_{I_i} = q_{J_i}$
for all $i = 1,\dots,k$, hence
$\neg C(x,\vec{u})$ is satisfied.
\qed
\end{proof}
\section{Eliminating Equations and Disequations}
In this section,
we show that the equations and disequations can always be eliminated from established sequents (in exponential time), while preserving equivalence.
The intuition is that the equations can be discarded by instantiating the inductive rules,
while the disequations can be replaced by assertions that the considered variables
are allocated in disjoint parts of the heap.
We first introduce an additional restriction
on {pc-SID\xspace}s that is meant to ensure that the set of free variables
allocated by a predicate atom is the same in every unfolding.
The {pc-SID\xspace}s satisfying this condition are called {\em $\mathit{alloc}$-compatible\xspace}.
We will show that every pc-SID\xspace can be reduced to
an equivalent $\mathit{alloc}$-compatible\xspace set.
Let $\mathit{alloc}$ be a function mapping each predicate symbol $p$ to a
subset of $\interv{1}{\ar{p}}$.
For any disjunction-free formula $\phi$, we denote by $\alloc{\phi}$ the set
of variables $x\in \fv{\phi}$ such that $\phi$ contains an atom of the form
$x \mapsto (y_1,\dots,y_\kappa)$ or
$p(z_1,\dots,z_n)$, with $x = z_i$ for some $i \in \alloc{p}$.
\newcommand{{\frak f}}{{\frak f}}
\newcommand{\expl}[1]{#1^*}
\begin{definition}
An established pc-SID\xspace ${\cal R}$ is {\em $\mathit{alloc}$-compatible\xspace} if for all rules $\alpha \Leftarrow \phi$ in ${\cal R}$, we have $\alloc{\alpha} = \alloc{\phi}$.
A sequent $\phi \vdash_{\asid} \Gamma$ is {\em $\mathit{alloc}$-compatible\xspace} if ${\cal R}$ is $\mathit{alloc}$-compatible\xspace.
\end{definition}
\begin{lemma}
\label{lem:alloccomp}
There exists an algorithm which, for every sequent $\phi \vdash_{\asid} \Gamma$,
computes an equivalent
$\mathit{alloc}$-compatible\xspace sequent $\phi' \vdashsid{{\cal R}'} \Gamma'$. Moreover, this algorithm runs in exponential time
and $\widt{\phi' \vdashsid{{\cal R}'} \Gamma'} = \mathcal{O}(\widt{\phi \vdash_{\asid} \Gamma}^2)$.
\end{lemma}
\begin{proof}
We associate all pairs $(p,A)$ where $p \in {\cal P}_S$ and $A \subseteq \interv{1}{\ar{p}}$ with fresh, pairwise distinct predicate symbols $p_A \in {\cal P}_S$, with the same arity as $p$, and we set $\alloc{p_A} = A$.
For each disjunction-free formula $\phi$, we denote by $\expl{\phi}$ the set of formulas obtained from $\phi$ by replacing every predicate atom
$p(\vec{x})$ by an atom $p_A(\vec{x})$ with $A \subseteq \interv{1}{\ar{p}}$.
Let ${\cal R}'$ be the set of $\mathit{alloc}$-compatible\xspace rules of the form
$p_A(\vec{x}) \Leftarrow \phiB$, where $p(\vec{x}) \Leftarrow \phi$ is a rule in ${\cal R}$
and $\phiB \in \expl{\phi}$. Note that the symbols
$p_A$ may be encoded by words of length $\mathcal{O}(\len{p} + \ar{p})$, thus for every $\phiB \in \expl{\phi}$ we have $\widt{\phiB} = \mathcal{O}(\widt{\phi}^2)$, hence $\widt{{\cal R}'} = \mathcal{O}(\widt{{\cal R}}^2)$.
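The construction of ${\cal R}'$ amounts to a finite enumeration of annotations, which the following Python sketch (ours, schematic: rule bodies are flattened to a points-to source and a list of predicate atoms, and theory atoms are ignored) makes explicit: every rule is instantiated with all possible annotated symbols $p_A$ and only the $\mathit{alloc}$-compatible instances are kept.
\begin{verbatim}
from itertools import product, combinations

def powerset(n):                                   # all subsets of {1, ..., n}
    return [frozenset(c) for r in range(n + 1)
            for c in combinations(range(1, n + 1), r)]

# Schematic rules: (p, params, pto_source, [(q, args), ...]); theory atoms and
# existential variables are left implicit (any non-parameter name is bound).
# Example: P(x1,x2) <= exists u. x1 |-> (u,x2) * Q(u,x2) * Q(x2,x1)
rules = [("P", ["x1", "x2"], "x1", [("Q", ["u", "x2"]), ("Q", ["x2", "x1"])]),
         ("Q", ["y1", "y2"], "y1", [])]

def annotate(rules):
    kept = []
    for p, params, pto, body in rules:
        free = set(params)
        for A, *Bs in product(powerset(len(params)),
                              *[powerset(len(args)) for _, args in body]):
            alloc_head = {params[i - 1] for i in A}
            alloc_body = {pto} | {args[i - 1]
                                  for (_, args), B in zip(body, Bs) for i in B}
            if alloc_head == alloc_body & free:    # the alloc-compatibility test
                kept.append(((p, tuple(sorted(A))), params, pto,
                             [((q, tuple(sorted(B))), args)
                              for (q, args), B in zip(body, Bs)]))
    return kept

print(len(annotate(rules)), "annotated rules kept")
\end{verbatim}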
We show by induction on the satisfiability relation that the following equivalence holds for every structure $(\mathfrak{s},\mathfrak{h})$:
$(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phi$ iff there exists $\phiB \in \expl{\phi}$ such that
$(\mathfrak{s},\mathfrak{h}) \modelssid{{\cal R}'} \phiB$. For the direct implication, we also prove that $\alloc{\phiB} = \{ x \in \fv{\phi} \mid \mathfrak{s}(x) \in \dom{\mathfrak{h}}\}$.
\begin{compactitem}
\item{The proof is immediate if $\phi$ is a $\theory$-formula\xspace, since $\expl{\phi} = \{ \phi \}$, and the truth value of $\phi$ does not depend on the considered pc-SID\xspace. Also, by definition $\alloc{\phi} = \emptyset$ and all the models of $\phi$ have empty heaps.}
\item{If $\phi$ is of the form $x \mapsto (y_1,\dots,y_n)$, then $\expl{\phi} = \{ \phi \}$ and the truth value of $\phi$ does not depend on the considered pc-SID\xspace. Also, $\alloc{\phi} = \{ x \}$ and
we have $\dom{\mathfrak{h}} = \{ \mathfrak{s}(x) \}$ for every model $(\mathfrak{s},\mathfrak{h})$ of $\phi$.}
\item{Assume that $\phi = p(x_1,\dots,x_{\ar{p}})$. If $(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phi$ then
there exists a formula $\phiC$ such that $\phi \unfoldto{{\cal R}} \phiC$ and
$(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phiC$.
By the induction hypothesis,
there exists $\phiB \in \expl{\phiC}$ such that
$(\mathfrak{s},\mathfrak{h}) \modelssid{{\cal R}'} \phiB$ and $\alloc{\phiB} = \{ x\in \fv{\phiC} \mid \mathfrak{s}(x) \in \dom{\mathfrak{h}} \}$. Let $A = \{ i \in \interv{1}{\ar{p}} \mid \mathfrak{s}(x_i) \in \dom{\mathfrak{h}} \}$, so that $\alloc{\phiB} =\{ x_i \mid i \in A \}$.
By construction
$p_A(x_1,\dots,x_n) \Leftarrow \phiB$ is $\mathit{alloc}$-compatible\xspace, and
therefore $p_A(x_1,\dots,x_n) \unfoldto{{\cal R}'} \phiB$, which entails that $(\mathfrak{s},\mathfrak{h}) \modelssid{{\cal R}'} p_A(x_1,\dots,x_n)$.
By definition of $A$, $\alloc{p_A(x_1,\dots,x_n)} = \{ x\in \fv{\phi} \mid \mathfrak{s}(x) \in \dom{\mathfrak{h}} \}$.
Conversely, assume that $(\mathfrak{s},\mathfrak{h}) \modelssid{{\cal R}'} \phiB$ for some $\phiB \in \expl{\phi}$.
Necessarily $\phiB$ is of the form $p_A(x_1,\dots,x_n)$ with $A \subseteq \interv{1}{\ar{p}}$.
We have $p_A(x_1,\dots,x_n) \unfoldto{{\cal R}'} \phiB'$ and $(\mathfrak{s},\mathfrak{h}) \modelssid{{\cal R}'} \phiB'$ for some formula $\phiB'$.
By definition of ${\cal R}'$, we deduce that $p(x_1,\dots,x_n) \unfoldto{{\cal R}} \phiC$, for some $\phiC$ such that $\phiB'\in \expl{\phiC}$.
By the induction hypothesis,
$(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phiC$, thus $(\mathfrak{s},\mathfrak{h}) \models_{\asid} p(x_1,\dots,x_{\ar{p}})$. Since $p(x_1,\dots,x_{\ar{p}}) = \phi$, we have the result.
}
\item{Assume that $\phi = \phi_1 * \phi_2$.
If $(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phi$ then there exist disjoint heaps $\mathfrak{h}_1,\mathfrak{h}_2$ such that $(\mathfrak{s},\mathfrak{h}_i) \models_{\asid} \phi_i$, for all $i = 1,2$ and $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$. By the induction hypothesis, this entails that there exist formulas $\phiB_i \in \expl{\phi_i}$ for $i = 1,2$ such that
$(\mathfrak{s},\mathfrak{h}_i) \modelssid{{\cal R}'} \phiB_i$ and $\alloc{\phiB_i} = \{ x \in \fv{\phi_i} \mid \mathfrak{s}(x) \in \dom{\mathfrak{h}_i} \}$.
Let $\phiB = \phiB_1 * \phiB_2$.
It is clear that $(\mathfrak{s},\mathfrak{h}) \modelssid{{\cal R}'} \phiB_1 * \phiB_2$ and $\alloc{\phiB} = \alloc{\phiB_1 * \phiB_2} = \alloc{\phiB_1} \cup \alloc{\phiB_2} = \{ x \in \fv{\phi_1}\cup \fv{\phi_2} \mid \mathfrak{s}(x) \in \dom{\mathfrak{h}} \} =
\{ x \in \fv{\phi} \mid \mathfrak{s}(x) \in \dom{\mathfrak{h}} \}$. Since $\phiB_1 * \phiB_2 \in \expl{\phi}$, we obtain the result.
Conversely, assume that there exists $\phiB \in \expl{\phi}$ such that
$(\mathfrak{s},\mathfrak{h}) \modelssid{{\cal R}'} \phiB$.
Then $\phiB = \phiB_1 * \phiB_2$ with $\phiB_i \in \expl{\phi_i}$, and there exist disjoint heaps $\mathfrak{h}_1,\mathfrak{h}_2$ such that
$(\mathfrak{s},\mathfrak{h}_i) \modelssid{{\cal R}'} \phiB_i$, for $i = 1,2$, with $\mathfrak{h} = \mathfrak{h}_1 \uplus \mathfrak{h}_2$.
Using the induction hypothesis, we get that $(\mathfrak{s},\mathfrak{h}_i) \models_{\asid} \phi_i$, hence
$(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phi$.}
\item{Assume that $\phi = \exists y. \phiC$.
If $(\mathfrak{s},\mathfrak{h}) \models_{\asid} \phi$
then $(\mathfrak{s}',\mathfrak{h}) \models_{\asid} \phiC$, for some store $\mathfrak{s}'$ coinciding with $\mathfrak{s}$ on every variable distinct from $y$.
By the induction hypothesis, this entails that
there exists $\phiB \in \expl{\phiC}$ such that
$(\mathfrak{s}',\mathfrak{h}) \modelssid{{\cal R}'} \phiB$ and $\alloc{\phiB} = \{ x \in \fv{\phiC} \mid \mathfrak{s}'(x) \in \dom{\mathfrak{h}}\}$. Then
$(\mathfrak{s},\mathfrak{h}) \modelssid{{\cal R}'} \exists y. \phiB$, and we have $\exists y.\phiB \in \expl{\phi}$.
Furthermore, $\alloc{\exists y.\phiB} = \alloc{\phiB} \setminus \{ y\} = \{ x \in \fv{\phiC} \setminus \{ y \} \mid \mathfrak{s}'(x) \in \dom{\mathfrak{h}}\} = \{ x \in \fv{\phi} \mid \mathfrak{s}(x) \in \dom{\mathfrak{h}}\}$.
Conversely, assume that
$(\mathfrak{s},\mathfrak{h}) \modelssid{{\cal R}'} \phiB$, with $\phiB\in \expl{\phi}$.
Then $\phiB$ is of the form $\exists y. \phiB'$, with $\phiB' \in \expl{\phiC}$, thus
there exists a store $\mathfrak{s}'$, coinciding with $\mathfrak{s}$ on all variables other than $y$,
such that $(\mathfrak{s}',\mathfrak{h}) \modelssid{{\cal R}'} \phiB'$.
By the induction hypothesis, this entails that
$(\mathfrak{s}',\mathfrak{h}) \models_{\asid} \phiC$, thus
$(\mathfrak{s},\mathfrak{h}) \models_{\asid} \exists y. \phiC$. Since $\exists y. \phiC = \phi$, we have the result.
}
\end{compactitem}
Let $\phi',\Gamma'$ be the sequence of formulas obtained from $\phi,\Gamma$ by replacing every atom $\alpha$ by the disjunction of all the formulas in $\expl{\alpha}$.
It is clear that $\widt{\phi' \vdash_{{\cal R}'} \Gamma'} \leq \widt{\phi \vdash_{{\cal R}} \Gamma}^2$.
By the previous result, $\phi' \vdashsid{{\cal R}'} \Gamma'$ is equivalent to $\phi \vdashsid{{\cal R}} \Gamma$, hence $\phi' \vdashsid{{\cal R}'} \Gamma'$ fulfills all the required properties. Also, since each predicate $p$
is associated with $2^{\ar{p}}$ predicates $p_A$, it is clear that $\phi' \vdashsid{{\cal R}'} \Gamma'$ can be computed in time $\mathcal{O}(2^{\size{\phi \vdashsid{{\cal R}} \Gamma}})$.
\qed
\end{proof}
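To make the size of ${\cal R}'$ concrete, the following minimal Python sketch (the representation of predicates as name/arity pairs and the textual encoding of the set $A$ are illustrative assumptions) enumerates the $2^{\ar{p}}$ annotated symbols $p_A$ associated with a predicate symbol $p$:
\begin{verbatim}
from itertools import chain, combinations

def annotated_predicates(name, arity):
    """Yield one annotated symbol p_A per subset A of {1, ..., arity}."""
    indices = range(1, arity + 1)
    subsets = chain.from_iterable(
        combinations(indices, k) for k in range(arity + 1))
    for A in subsets:
        yield name + "_{" + ",".join(map(str, A)) + "}"

# A binary predicate yields 2^2 = 4 annotated copies.
print(list(annotated_predicates("p", 2)))
\end{verbatim}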
\newcommand{\constrained}[1]{$#1$-constrained\xspace}
\begin{definition}
Let $P \subseteq {\cal P}_{\cal T}$.
A formula $\phi$ is {\em \constrained{P}} if
for every formula $\phiB$ such that $\phi \unfoldto{{\cal R}} \phiB$, and for every symbol $p \in {\cal P}_{\cal T}$ occurring in $\phiB$, we have $p \in P$.
A sequent $\phi \vdash_{\asid} \Gamma$ is {\em \constrained{P}} if
all the formulas in $\phi,\Gamma$ are \constrained{P}.
\end{definition}
\begin{theorem}
\label{theo:elimeq}
Let $P \subseteq {\cal P}_{\cal T}$.
There exists an algorithm that transforms every \constrained{P} established sequent $\phi \vdash_{\asid} \Gamma$
into an equivalent \constrained{(P \setminus \{ \approx, \not \approx \})} established sequent $\phi' \vdashsid{{\cal R}'} \Gamma'$.
This algorithm runs in exponential time
and $\widt{\phi' \vdashsid{{\cal R}'} \Gamma'}$ is
polynomial w.r.t.\ $\widt{\phi \vdash_{\asid} \Gamma}$.
\end{theorem}
\begin{proof}
We consider a \constrained{P} established sequent $\phi \vdashsid{{\cal R}} \Gamma$.
This sequent is transformed in several steps.
\noindent\textbf{Step 1.}
The first step consists in transforming all the formulas in $\phi,\Gamma$ into disjunctions of symbolic heaps.
Then for every symbolic heap $\phiC$ occurring in the obtained sequent, we add all the variables freely occurring
in $\phi$ or $\Gamma$ as parameters of every predicate symbol occurring in unfoldings of $\phiC$ (their arities are updated accordingly, and these variables are passed as parameters to each recursive call of a predicate symbol).
We obtain an equivalent sequent $\phi_1 \vdashsid{{\cal R}_1} \Gamma_1$, and if
$v = \card{\fv{\phi} \cup \fv{\Gamma}}$
denotes the total number of free variables occurring in $\phi,\Gamma$, then it is easy to check (since the size of each of these variables is bounded by $\widt{\phi \vdashsid{{\cal R}} \Gamma}$) that
$\widt{\phi_1 \vdashsid{{\cal R}_1} \Gamma_1} \leq v\cdot \widt{\phi \vdashsid{{\cal R}} \Gamma}^2$.
By Definition \ref{def:sequent},
we have $v \leq \widt{\phi \vdashsid{{\cal R}} \Gamma}$,
thus $\widt{\phi_1 \vdashsid{{\cal R}_1} \Gamma_1} = \mathcal{O}(\widt{\phi \vdashsid{{\cal R}} \Gamma}^3)$.
\noindent\textbf{Step 2.}
All the equations involving an existential variable can be eliminated in a straightforward way
by replacing each formula of the form $\exists x. (x \approx y * \phi)$ with
$\repl{\phi}{x}{y}$.
We then replace every formula $\exists \vec{y}. \phi$ with free variables $x_1,\dots,x_n$ by the disjunction of all the formulas
of the form $\exists\vec{z}. \phi\sigma * \bigAnd_{z\in \vec{z}, z' \in \vec{z} \cup \{ x_1,\dots,x_n \}, z\not = z'} z \not \approx z'$, where $\sigma$ is a substitution such that $\dom{\sigma} \subseteq \vec{y}$, $\vec{z} = \vec{y} \setminus \dom{\sigma}$ and $\img{\sigma} \subseteq \vec{y} \cup \{ x_1,\dots,x_n\}$.
Similarly we replace every rule $p(x_1,\dots,x_n)\Leftarrow \exists \vec{y}. \phi$ by
the set of rules $p(x_1,\dots,x_n)\Leftarrow \exists\vec{z}. \phi\sigma * \bigAnd_{z\in \vec{z}, z' \in \vec{z} \cup \{ x_1,\dots,x_n \}, z\not = z'} z \not \approx z'$, where $\sigma$ is any substitution satisfying the conditions above.
Intuitively, this transformation ensures that all existential variables are associated to pairwise distinct locations, also distinct from any location associated to a free variable. The application of the substitution $\sigma$ captures all the rule instances for which this condition does not hold, by mapping all variables that are associated with the same location to a unique representative.
We denote by $\phi_2 \vdashsid{{\cal R}_2} \Gamma_2$ the sequent thus obtained.
Let $v'$ be the maximal number of existential variables occurring in a rule in ${\cal R}$.
We have $v' \leq \widt{\phi \vdashsid{{\cal R}} \Gamma}$ (since the transformation in Step $1$ adds no existential variable).
Since at most one disequation is added for every pair of variables, and since the size of every variable is bounded by $\widt{\phi \vdashsid{{\cal R}} \Gamma}$, it is clear that $\widt{\phi_2 \vdashsid{{\cal R}_2} \Gamma_2} = \widt{\phi_1 \vdashsid{{\cal R}_1} \Gamma_1} + v'\cdot (v+v') \cdot(1+2*\widt{\phi \vdashsid{{\cal R}} \Gamma})
= \mathcal{O}(\widt{\phi \vdashsid{{\cal R}} \Gamma}^3)$.
\noindent\textbf{Step 3.}
We replace every atom $\alpha = p(x_1,\dots,x_n)$ occurring in $\phi_2, \Gamma_2$ or ${\cal R}_2$ with pairwise distinct variables $x_{i_1},\dots,x_{i_m}$ (with $m \leq n$ and $i_1 = 1$), by an atom $p_{\alpha}(x_{i_1},\dots,x_{i_m})$, where $p_{\alpha}$ is a fresh predicate symbol, associated with rules of the form
$p_{\alpha}(y_{i_1},\dots,y_{i_m}) \Leftarrow \replall{\phiB}{y_i}{x_i}{i \in \interv{1}{n}}\theta$, where
$p(y_1,\dots,y_n) \Leftarrow \phiB$ is a rule in ${\cal R}$
and $\theta$ denotes the substitution $\replall{}{x_{i_k}}{y_{i_k}}{i \in \interv{1}{m}}$.
By construction, $p_{\alpha}(x_{i_1},\dots,x_{i_m})$ is equivalent to $\alpha$. We denote by $\phi_3 \vdashsid{{\cal R}_3} \Gamma_3$ the resulting sequent.
It is clear that $\phi_3 \vdashsid{{\cal R}_3} \Gamma_3$ is equivalent to $\phi \vdashsid{{\cal R}} \Gamma$.
By a straightforward induction on the derivation, we can show that
all atoms occurring in an unfolding of the formulas in the sequent $\phi_3 \vdashsid{{\cal R}_3} \Gamma_3$ are of the form $q(y_1,\dots,y_{\ar{q}})$, where $y_1,\dots,y_{\ar{q}}$ are pairwise distinct, and that the considered unfolding also contains the disequation $y_i \not \approx y_j$, for all $i \not = j$
such that either $y_i$ or $y_j$ is an existential variable (note that if $y_i$ and $y_j$ are both free then $y_i \not \approx y_j$ is valid, since the considered stores are injective).
This entails that the rules that introduce an equality $u \approx v$ between distinct variables $u \not = v$
are actually redundant, since unfolding any atom $q(y_1,\dots,y_{\ar{q}})$ using such a rule yields a formula that is unsatisfiable. Consequently such rules can be eliminated without affecting the status of the sequent.
All the remaining equations are of the form $u \approx u$, hence can be replaced by $\mathtt{emp}$.
We may thus assume that the sequent $\phi_3 \vdashsid{{\cal R}_3} \Gamma_3$ contains no equality.
Note that by the above transformation all existential variables must be interpreted as pairwise distinct locations in any interpretation, and also be distinct from all free variables.
It is easy to see that the fresh predicates $p_{\alpha}$ may be encoded by words
of size at most $\widt{\phi \vdashsid{{\cal R}} \Gamma}$, thus
$\widt{\phi_3 \vdashsid{{\cal R}_3} \Gamma_3} \leq \widt{\phi \vdashsid{{\cal R}} \Gamma} \cdot\widt{\phi_2 \vdashsid{{\cal R}_2} \Gamma_2} = \mathcal{O}(\widt{\phi \vdashsid{{\cal R}} \Gamma}^4)$.
By Lemma \ref{lem:alloccomp}, we may assume that
$\phi_3 \vdashsid{{\cal R}_3} \Gamma_3$ is $\mathit{alloc}$-compatible\xspace (note that the transformation given in the proof of Lemma \ref{lem:alloccomp} does not affect the disequations occurring in the rules).
\noindent\textbf{Step 4.}
We now ensure that all the locations that are referred to are allocated.
Consider a symbolic heap $\phiC$ occurring in $\phi_3,\Gamma_3$ and any ${\cal R}_3$-model $(\mathfrak{s},\mathfrak{h})$ of $\phiC$, where $\mathfrak{s} $ is injective.
For the establishment condition to hold, the only unallocated locations occurring in the heap $\mathfrak{h}$ must correspond to locations $\mathfrak{s}(x)$, where $x$ is a free variable of $\phiC$.
We assume the sequent contains a free variable $u$ such that,
for every tuple $(\ell_0,\dots,\ell_\kappa)\in \mathfrak{h}$, we have $\mathfrak{s}(u) = \ell_\kappa$.
This does not entail any loss of generality, since
we can always add a fresh variable $u$ to the considered problem:
after Step $1$, $u$ is passed as a parameter to all predicate symbols, and we may replace every points-to atom $z_0 \mapsto (z_1,\dots,z_\kappa)$ occurring in $\phi_3$, $\Gamma_3$ or ${\cal R}_3$, by
$z_0 \mapsto (z_1,\dots,z_\kappa,u)$ (note that this increases the value of $\kappa$ by $1$).
It is clear that this ensures that $\mathfrak{h}$ and $u$ satisfy the above property.
We also assume, w.l.o.g., that the sequent contains at least one variable $u'$ distinct from $u$. Note that, since $\mathfrak{s}$ is injective, the tuple $(\mathfrak{s}(u'),\dots,\mathfrak{s}(u'))$ cannot occur in $\mathfrak{h}$, because its last component is distinct from $\mathfrak{s}(u)$.
We then denote by $\phi_4 \vdashsid{{\cal R}_4} \Gamma_4$ the sequent obtained from
$\phi_3\vdashsid{{\cal R}_3} \Gamma_3$ by replacing every symbolic heap
$\phiC$ in $\phi_3,\Gamma_3$ by
$\left(\bigAnd_{x \in (\fv{\phi_3} \cup \fv{\Gamma_3}) \setminus \alloc{\phiC}} x \mapsto (u',\dots,u')\right) * \phiC$.
It is straightforward to check that $(\mathfrak{s},\mathfrak{h})\models \phiC$ iff there exists an extension $\mathfrak{h}'$ of $\mathfrak{h}$
such that
$(\mathfrak{s},\mathfrak{h}') \models \left(\bigAnd_{x \in (\fv{\phi_3} \cup \fv{\Gamma_3}) \setminus \alloc{\phiC}} x \mapsto (u',\dots,u')\right) * \phiC$,
with $\locs{\mathfrak{h}} = \locs{\mathfrak{h}'} = \dom{\mathfrak{h}'}$ and
$\mathfrak{h}'(\ell) = (\mathfrak{s}(u'),\dots,\mathfrak{s}(u'))$ for all $\ell \in \dom{\mathfrak{h}'} \setminus \dom{\mathfrak{h}}$.
This entails that $\phi_4 \vdashsid{{\cal R}_4} \Gamma_4$ is valid if and only if $\phi \vdashsid{{\cal R}} \Gamma$ is valid.
Consider a formula $\phiC$ in $\phi_4,\Gamma_4$ and some unfolding $\phiC'$ of $\phiC$.
Thanks to the transformation in this step and the establishment condition, if $\phiC'$ contains a (free or existential) variable $x$ then it also contains an atom $x' \mapsto \vec{y}$ and
a separating conjunction of equations $\chi$ such that $\chi \models_{{\cal T}} x \approx x'$.
Since all equations have been removed, $\chi = \mathtt{emp}$, thus
$x = x'$.
Consequently, if $\phiC'$ contains a disequation $x_1 \not \approx x_2$ with $x_1\not = x_2$, then it also contains atoms $x_1 \mapsto \vec{y}_1$ and $x_2 \mapsto \vec{y}_2$. This entails that the disequation $x_1\not \approx x_2$ is redundant, since it is a logical consequence of $x_1 \mapsto \vec{y}_1 * x_2 \mapsto \vec{y}_2$. We deduce that the validity status of $\phi_4 \vdashsid{{\cal R}_4} \Gamma_4$ is preserved if all disequations are replaced by $\mathtt{emp}$.
\qed
\end{proof}
\begin{example}\label{ex:elimeq}
We illustrate all of the steps in the proof above.
\begin{description}
\item[Step 1.] Consider the sequent $p(x_1, x_2) \vdashsid{{\cal R}} r(x_1) * r(x_2)$, where ${\cal R}$ is defined as follows: ${\cal R} = \myset{r(x) \Leftarrow x \mapsto (x)}$. After Step 1 we obtain the sequent $p(x_1, x_2) \vdashsid{{\cal R}_1} r'(x_1, x_2) * r'(x_2, x_1)$, where ${\cal R}_1 = \myset{r'(x, y) \Leftarrow x \mapsto (x)}$.
\item[Step 2.] This step transforms the formula $\exists y_1\exists y_2.\, p(x, y_1) * p(x, y_2)$ into the disjunction:
\[\begin{array}{rl}
\exists y_1,y_2.\, p(x, y_1) * p(x, y_2) * y_1 \not\approx y_2 * y_1 \not \approx x * y_2 \not \approx x & \vee\\
\exists y_2.\, p(x, x) * p(x,y_2) * y_2 \not \approx x & \vee\\
\exists y_1.\, p(x, y_1) * p(x,x) * y_1 \not \approx x& \vee\\
p(x,x) * p(x, x)
\end{array}\]
Similarly, the rule
$p(x) \leftarrow \exists z \exists u.~ x \mapsto (z) * q(z,u)$ is transformed into the set:
\[
\begin{array}{lll}
p(x) & \leftarrow & x \mapsto (x) * q(x,x) \\
p(x) & \leftarrow & \exists z.~ x \mapsto (z) * q(z,x) * z \not \approx x \\
p(x) & \leftarrow & \exists u.~ x \mapsto (x) * q(x,u) * u \not \approx x \\
p(x) & \leftarrow & \exists z \exists u.~ x \mapsto (z) * q(z,u) * z \not \approx x * u \not \approx x * z \not \approx u \\
\end{array}
\]
\item[Step 3.]
Assume that ${\cal R}$ contains the rules $p(y_1, y_2, y_3) \Leftarrow y_1\mapsto (y_2) * q(y_2, y_3) * y_1 \approx y_3$ and $p(y_1, y_2, y_3) \Leftarrow y_1\mapsto (y_2) * r(y_2, y_3) * y_1 \approx y_2$ and consider the sequent $p(x,y,x) \vdashsid{{\cal R}} \mathtt{emp}$. Step 3 generates the sequent $p_\alpha(x,y) \vdashsid{{\cal R}'} \mathtt{emp}$ (with $\alpha = p(x,y,x)$) where
${\cal R}'$ contains the rules $p_\alpha(y_1, y_2) \Leftarrow y_1\mapsto (y_2) * q(y_2, y_1) * y_1 \approx y_1$ and $p_\alpha(y_1, y_2) \Leftarrow y_1\mapsto (y_2) * r(y_2, y_1) * y_1 \approx y_2$.
The second rule is redundant, because $p_\alpha(y_1, y_2)$ is used only in a context where $y_1 \not \approx y_2$ holds.
\item[Step 4.]
Let $\phiC = p(x,y,z,z') * q(x,y,z,z') * z' \mapsto (z')$, assume $\alloc{\phiC} = \myset{x,z}$, and consider the sequent $\phiC \vdashsid{{\cal R}} \mathtt{emp}$. Then $\phiC$ is replaced by $p(x,y,z,z',u) * q(x,y,z,z',u) * z' \mapsto (z',u) * u \mapsto (x,x) * y \mapsto (x,x)$ (all non-allocated variables are associated with $(x,x)$, where $x$ plays the r\^ole of the variable $u'$ in Step $4$ above). Also, every points-to atom $z_0 \mapsto (z_1)$ in ${\cal R}$ is replaced by $z_0 \mapsto (z_1,u)$.
\end{description}
\end{example}
\section{Discussion}
The presented undecidability result is very tight.
Theorem \ref{theo:undec} applies to most theories
and the proof only uses very simple data structures (namely simply linked lists).
The proof of Theorem \ref{theo:undec} could be adapted
(at the cost of cluttering the presentation) to handle quantifier-free entailments and
even simpler
inductive systems
with at most one predicate atom on the right-hand side of each rule, in the spirit of word automata.
Our logic has only one sort of variables, denoting locations, thus one cannot directly describe structures in which the heap maps locations to tuples containing both locations and data, ranging over disjoint domains. This is actually not restrictive: indeed, data can be easily
encoded in our framework by considering a non-injective function $\mathtt{d}(x)$ mapping locations to data, and adding theory
predicates constructed on this function, such as $\mathtt{d}(x) \approx \mathtt{d}(y)$ to state that two (possibly distinct) locations
$x,y$ are mapped to the same element. The obtained theory falls within the scope of Theorem \ref{theo:undec} (using $\mathtt{d}(x) \approx \mathtt{d}(y)$ as the relation $S(x,y)$), provided the domain of the data is infinite. This shows that
entailments with data disjoint from locations are undecidable, even if the theory only contains equations and disequations, except when the data domain is finite.
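For instance (a sketch of such an encoding, with hypothetical predicate names), a list segment in which every cell stores the same datum as its successor can be described by the rules
\[
\mathit{ls}(x,y) \Leftarrow x \mapsto (y), \qquad
\mathit{ls}(x,y) \Leftarrow \exists z.~ x \mapsto (z) * \mathit{ls}(z,y) * \mathtt{d}(x) \approx \mathtt{d}(z),
\]
where the theory atoms constrain the data attached to the allocated locations without introducing a second sort of variables.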
\subsubsection*{Acknowledgments.}
This work has been partially funded by the
French National Research Agency ({\tt ANR-21-CE48-0011}).
The authors wish to thank Radu Iosif for his comments on an earlier version of the paper and for fruitful discussions.
\end{document} |
\begin{document}
\footnotesize
\noindent\framebox[1.02\width]{International Journal of Applied Mathematics 2014; 27 (6), 525-547}
\normalsize
\sectionfont{\large}
\subsectionfont{\normalsize}
\begin{center}
\textbf{MODELING AND NUMERICAL SIMULATIONS OF SINGLE SPECIES DISPERSAL IN SYMMETRICAL DOMAINS}
\end{center}
\begin{center}
Majid Bani-Yaghoub$^{1}$, Guangming Yao$^{2}$ and Aaron Reed$^{3}$
\end{center}
\begin{table}[h]
\begin{center}
\begin{tabular}{c}
\noindent $^{1}$ Department of Mathematics and Statistics,\\
University of Missouri-Kansas City,\\
Kansas City, Missouri 64110, USA\\
e-mail: [email protected]\\
\\
\noindent $^{2}$ Department of Mathematics,\\
Clarkson University,\\
Potsdam, NY, 13699-5815, USA\\
e-mail: [email protected]\\\\
\noindent $^{3}$ School of Biological Sciences,\\
University of Missouri-Kansas City,\\
Kansas City, Missouri 64110, USA\\
e-mail: [email protected]
\end{tabular}
\end{center}
\end{table}
\normalsize
\begin{center}
\noindent\textbf{Abstract}
\end{center}
We develop a class of nonlocal delay Reaction-Diffusion (RD) models in a circular domain. Previous modeling efforts include RD population models with respect to a one-dimensional unbounded domain, an unbounded strip and a rectangular spatial domain. However, the importance of an RD model in a symmetrical domain lies in the increasing number of empirical studies conducted with respect to symmetrical natural habitats of single species. Assuming that the single species has no directional preference to spread in the symmetrical domain, the RD model is reduced to an equation with no angular dependence. The model can be further reduced by considering the birth function in the form of the Bessel function of the first kind. We numerically simulate the reduced forms of the nonlocal delay RD model to study the dispersal and growth behaviors of the single species in a circular domain. Although spatial patterns of population densities gradually develop, it is numerically shown that the single species population goes extinct in the absence of the birth function, whereas it may converge to a positive equilibrium in the presence of the birth function.\\
\noindent \textbf{Key Words:} Delay, Reaction-Diffusion, Single Species, Symmetrical Domain\\
\section{Introduction}
\normalsize
\label{intro}
Mathematical modeling of population dynamics has proven to be useful in discovering the relationships between species and their surrounding environment. This includes the study and assessment of spatio-temporal changes in population density and estimation of the speed of population dispersal. While various continuous and discrete models have been employed for over a century, recently developed nonlocal delay Reaction-Diffusion (RD) models have drawn special attention \cite{7 blue, 4 blue, Weng 2008}. Namely, spatially homogeneous models are equipped with delay, diffusion and integral terms to take into account the maturation, dispersal and nonlocality of individuals, respectively \cite{1 blue: 9, 6 blue: 8, Thieme 2003}. The local and global analysis of these models is the current focus of many mathematicians. The present work further develops the age-structured nonlocal delay RD model of single species proposed by So et al. \cite{7 blue}. To have a better understanding of the nonlocal delay RD model, in the following we briefly explain the modeling procedure initiated by So et al. \\
\indent Let $u(t,a,x,y)$ denote the density of the single species at time $t>0$, the age $a\geq 0$ and the spatial position $(x,y)\in\Omega\subseteq\mathbb{R}^{2}$. As described in \cite{6 blue: 48}, dynamics of the age-structured single species can be formulated by
\begin{equation} \label{Eq:Ch6L1}
\frac{\partial u}{\partial t}+\frac{\partial u}{\partial a} =D(a)\left(\frac{\partial^{2}u}{\partial x^{2}}+\frac{\partial^{2}u}{\partial y^{2}}\right)-d(a)u,
\end{equation}
where $D(a)$ and $d(a)$ are respectively, diffusion and death rates at age $a$. Equation (\ref{Eq:Ch6L1}) describes the spatio-temporal dynamics of single species with respect to age $a$. When there is no age dependence (i.e., when $\partial u/\partial a=0$, $D(a)=D$ and $d(a)=d$ with $D,d>0$), equation (\ref{Eq:Ch6L1}) becomes a linear RD equation that can be directly solved with the method of separation of variables.
Let $\tau\geq0$ be the maturation time for the single species. Then the total mature population at time $t$ and position $(x,y)$ is given by,
\begin{equation} \label{Eq:Ch6L2}
w(t,x,y) =\int^{\infty}_{\tau} u(t,a,x,y) da.
\end{equation}
Integrating both sides of (\ref{Eq:Ch6L1}) from $\tau$ to $\infty$, using the assumption $u(t,\infty,x,y)=0$, setting the reproduction density equal to the birth rate (i.e., $u(t,0,x,y)=b(w(t,x,y))$) and considering the diffusion and death rates to be age independent (i.e., $D(a)=D_{m}$ and $d(a)=d_{m}$ for $a\in[\tau,\infty)$ with $D_{m}$, $d_{m}>0$) we get
\begin{equation} \label{Eq:Ch6L3}
\frac{\partial w}{\partial t} =D_{m}\left(\frac{\partial^{2}w}{\partial x^{2}}+\frac{\partial^{2}w}{\partial y^{2}}\right)-d_{m}w+u(t,\tau,x,y).
\end{equation}
Using the procedure outlined in \cite{ Liang, 7 blue, Weng 2008}, $u(t,\tau,x,y)$ can be replaced with an integral term or an infinite series which represents the nonlocality of individuals. When the spatial domain is one-dimensional and unbounded (i.e. $\Omega =\mathbb{R}$), So et. al \cite{7 blue} derived the following population model
\begin{equation} \label{Eq:Ch6L4}
\frac{\partial w}{\partial t}=D_{m}\frac{\partial^{2}w}{\partial x^{2}}-d_{m}w+\epsilon\int^{\infty}_{-\infty} b(w(t-\tau, y))f_{\alpha}(x-y)dy,
\end{equation}
where $x\in\mathbb{R}$ and $0<\epsilon\leq 1$. The delay term $\tau>0$ reflects the time required for offspring to become sexually mature. The function $b(w)$ is known as the birth function and reflects reproduction by mature individuals at time $t-\tau$ and any location $y \in \mathbb{R}$. The terms $D_{m}, D_{I}$ and $d_{m}, d_{I}$ are respectively the dispersal and death rates, where the subscripts $m$ and $I$ respectively correspond to mature and immature population. The kernel function is given by $f_{\alpha}(x)=\frac{1}{\sqrt{4\pi \alpha}}e^{-x^{2}/4\alpha},$ where $\alpha=D_{I}\tau$. Here, $\epsilon$ indicates the total impact of the death rate $d_{I}$ of the immature population which is given by
\begin{equation}
\label{eqCh3:SoFDE2}
\epsilon=\exp\left\{-\int^{\tau}_{0}d_{I}(a)da\right\}.
\end{equation}
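For instance, if the immature death rate is constant, $d_{I}(a)=d_{I}$, then (\ref{eqCh3:SoFDE2}) reduces to $\epsilon=e^{-d_{I}\tau}$; purely as an illustration, $d_{I}=0.1$ and $\tau=1$ give $\epsilon\approx 0.90$.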
The integral term in (\ref{Eq:Ch6L4}) is a weighted spatial average over the entire spatial domain. Particularly, the integral term is due to the fact that individuals, who are currently at position $x,$ could have been at any location $y \in \mathbb{R}$ at a previous time $t-\tau$. In the present work we will show that the form of the integral term (or equivalently the infinite series for bounded spatial domains) is highly dependent on the boundary conditions and the shape of the spatial domain.\\
\indent Considering a specific birth function, the traveling wave solution of (\ref{Eq:Ch6L4}) was iteratively constructed in \cite{7 blue}. Later Liang and Wu \cite{3 black} extended the model by adding the advection term $B_{m}\partial w/\partial x$ to the right-hand side of (\ref{Eq:Ch6L4}). Using certain parameter values and birth functions they numerically studied the behavior of traveling wave solutions. Namely, they demonstrated the formation of single and multi-hump wave solutions when the monotonicity condition is violated.
Bani-Yaghoub and Amundsen \cite{bani} showed that a monotonic traveling wavefront of model (\ref{Eq:Ch6L4}) may become oscillatory when the immature over mature diffusion ratio $D_I/D_m$ is greater than a critical value and the slope of the birth function $b(w)$ at the nontrivial equilibrium is negative (see proposition 2 and section 4 of \cite{bani}). Moreover, Bani-Yaghoub et al. \cite{bani1} numerically investigated the stability and convergence of solutions associated with model (\ref{Eq:Ch6L4}). They showed that the solution of the initial value problem corresponding to model (\ref{Eq:Ch6L4}) may converge to the corresponding stationary pulse and stationary front. Ou and Wu \cite{1 blue} developed a general system of RD equations in an m-dimensional domain which embodies a large number of models including (\ref{Eq:Ch6L4}).
They showed that for $\tau>0$ sufficiently small, the traveling wavefront exists only if it exists for $\tau=0$. Hence, small maturation time delays $\tau$ are harmless and the traveling wavefronts of the reduced system persist when time lag $\tau$ is increased from zero. A few other works such as \cite{4 blue} include advection and lift the constraint $\tau\geq0$ being sufficiently small. Nevertheless, they impose other constraints on the kernel function.\\% Nevertheless, due to the fact that the form of the function $f$ in (\ref{Eq:Ch6L8}) is not specified, the outcomes remain at the level of existence and uniqueness and the actual traveling wave solution is not constructed.\\monotonic traveling wave solution of (\ref{Eq:Ch6L4}) becomes oscillatory when the stability of the positive equilibrium is lost.In particular, by considering specific birth functions the convergence of the model solutions to the traveling wave solutions can be numerically investigated \cite{bani1}. Moreover,
\indent Although model (\ref{Eq:Ch6L4}) is realistic in many aspects, it is constructed with respect to one-dimensional spatial domain, which makes it less appealing. Recent efforts to overcome the issue of single dimension have resulted in models with rectangular spatial domain \cite{Weng 2008} or unbounded strip \cite{Liang}. Nevertheless, dispersal of many single species is radial \cite{Fehmi, Gomes, J} and there is a need to develop models according to symmetrical spatial domains. The present work is an attempt to fill this gap and to capture the spatio-temporal dynamics of single species in circular domains. To have a better understanding of the impact of the spatial domain on the model formulation, in the following we briefly describe two nonlocal delay RD models with different two-dimensional spatial domains.\\
(a)\textit{ Rectangular spatial domain:} The work by Liang et al. \cite{Liang} considers $\Omega\subset\mathbb{R}^{2}$ as the rectangle $[0,L_{x}]\times[0,L_{y}]$ and zero flux boundary conditions for $u(t,\tau,x,y)$ in time frames $t\in[s,s+\tau]$. Calculating $u(t,\tau,x,y)$ as a function of $w$, equation (\ref{Eq:Ch6L3}) is changed to
\begin{equation} \label{Eq:Ch6L5}
\frac{\partial w}{\partial t} =D_{m}\left(\frac{\partial^{2}w}{\partial x^{2}}+\frac{\partial^{2}w}{\partial y^{2}}\right)-d_{m}w+F(w(t-\tau,\cdot),x,y),
\end{equation}
where the function $F$ is given by
\begin{equation} \label{Eq:Ch6L5a}
\begin{array}{ccl}
F(w(t-\tau,\cdot),x,y) &=& \displaystyle\frac{\epsilon}{L_{x}L_{y}} \int^{L_{x}}_{0}\int^{L_{y}}_{0}b(w(t-\tau,z_{x},z_{y}))\cdot\\
&& \Bigg ( 1+\displaystyle\sum^{\infty}_{n=1}\left[\cos\frac{n\pi(x-z_{x})}{L_{x}} +\cos\frac{n\pi(x+z_{x})}{L_{x}}\right]e^{-\alpha(\frac{n\pi}{L_{x}})^2}\\
&&+\displaystyle\sum^{\infty}_{m=1}\left[\cos\frac{m\pi(y-z_{y})}{L_{y}} +\cos\frac{m\pi(y+z_{y})}{L_{y}}\right]e^{-\alpha(\frac{m\pi}{L_{y}})^2}\\
&&+ \displaystyle\sum^{\infty}_{n=1} \displaystyle\sum^{\infty}_{m=1}\left[\cos\frac{n\pi(x-z_{x})}{L_{x}} +\cos\frac{n\pi(x+z_{x})}{L_{x}}\right] \times\\
&&\displaystyle\left[\cos\frac{m\pi(y-z_{y})}{L_{y}} +\cos\frac{m\pi(y+z_{y})}{L_{y}}\right]e^{-\alpha[(\frac{n\pi}{L_{x}})^2+(\frac{m\pi}{L_{y}})^2]} \Bigg )dz_{x}dz_{y}.
\end{array}
\end{equation}
Furthermore, the work by Liang et al. \cite{Liang} derives similar RD models with respect to zero Dirichlet and zero mixed boundary conditions.
With certain birth functions, they studied the numerical solutions of the model, where asymptotically stable steady states and periodic wave solutions were numerically observed. \\
(b) \textit{Unbounded strip:} The work by Weng et al. \cite{Weng 2008} considers a spatial domain $\Omega\subset\mathbb{R}^{2}$ that is a strip in the form of $\Omega=(-\infty,\infty)\times[0,L]$ with $L>0$. Then following the same procedure as outlined in \cite{Liang} they obtain equation (\ref{Eq:Ch6L5}) with the function $F$ defined by
\begin{equation} \label{Eq:Ch6L6}
F(w(t-\tau,\cdot),x,y) =\int_{\mathbb{R}}\int^{L}_{0}\Gamma(\alpha,x,z_{x},y,z_{y}) b(w(t-\tau,z_{x},z_{y}))dz_{x} dz_{y},
\end{equation}
where $\Gamma(t,x,z_{x},y,z_{y}) =\Gamma_{1}(t,x,z_{x})\Gamma_{2}(t,y,z_{y})$, $\Gamma_{2} =\frac{1}{\sqrt{4\pi t}}e^{-\frac{(y-z_{y})^{2}}{4t}}$ and $\Gamma_{1}(t,x,z_{x})$ is the Green's function of the boundary value problem,
\begin{equation} \label{Eq:Ch6L7}
\left\{ \begin{array}{cccl}
\frac{\partial W}{\partial t}&=& \frac{\partial^2 W}{\partial x^2} &t>0,x\in(0,L)\\
W_B(t,x)&=&0 &t\geq0,x=0,L.
\end{array} \right.
\end{equation}
The term $W_B(t,x)$ denotes zero flux or zero mixed boundary conditions. Using the theory of asymptotic speed of spread and monotone traveling waves, the nonexistence of traveling waves with wave speed $0<c<c^{*}$ and the existence with $c\geq c^{*}$ are established in \cite{Weng 2008}, where $c^{*}$ is known as the minimal speed.\\
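We recall, for concreteness, that in the zero-flux case $\Gamma_{1}$ is the Neumann heat kernel on $[0,L]$,
\[
\Gamma_{1}(t,x,z_{x}) = \frac{1}{L} + \frac{2}{L}\sum_{n=1}^{\infty} e^{-\left(\frac{n\pi}{L}\right)^{2}t}\cos\frac{n\pi x}{L}\cos\frac{n\pi z_{x}}{L},
\]
which is the analogue, in the bounded direction of the strip, of the cosine series appearing in (\ref{Eq:Ch6L5a}).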
\indent In the present paper we will develop a class of nonlocal delay RD models in a two-dimensional bounded symmetrical domain. The importance of an RD model with symmetrical domain lies in the increasing number of empirical studies conducted with respect to symmetrical natural habitats of single species. Particularly, these studies are conducted by placing the immature population at the center of two-dimensional disks and observing the spread of population over time. For instance, Gomes and Zuben \cite{Gomes} employed a circular arena for radial dispersion of larvae of the blowfly \textit{Chrysomya albiceps}. It is known that after exhaustion of food sources, larvae begin spreading in search of additional food sources. Then the natural environment can be simulated under experimental conditions by employing circular arenas with sufficiently large diameters (e.g. 50 cm). Also, Roux et al. \cite{Roux} investigated the behavior of the larval dispersal of \textit{Calliphoridae} flies prior to pupation. The study includes statistical results of the shape of the larval dispersal in southwest France in outdoor experimental conditions. The authors found that the shape of the dispersal is circular and has a concentric distribution around the feeding zone. Moreover, the study finds that the larvae had no preference for dispersal in any direction. Although these studies are conducted for dispersal of the larval population, the use of a circular domain and circular dispersal of larvae indicate the need for developing age-structured nonlocal delay RD models with respect to circular domains.\\
\indent The rest of this paper is organized as follows. In section \ref{sec:RDsym} we develop a class of age-structured nonlocal delay RD models in circular domains. In section \ref{sec:RadSymm} possible model reductions and the impact of initial heterogeneity are investigated. In section \ref{num} the numerical simulations of the reduced models are presented. Finally, in Section \ref{disc} a discussion of the main outcomes of this study is provided.
\section{Model Development}
\label{sec:RDsym}
Focusing on the population of blowflies, the morphological aspects of the larval \textit{Chrysomya albiceps} have been investigated in a number of studies (see \cite{Carvalho} for a review). In particular, there are three stages (i.e. instars) during the larval development of \textit{Chrysomya albiceps} flies. The cephalopharyngeal skeleton of the larva develops during the instars and the full development of the skeleton takes place in the third instar. In our study we consider two age classes, where the first two instars are considered as the first age class and the third instar represents the second age class. In addition, the larval displacement always takes place on the landscape and individuals cannot fly. Thus, it is reasonable to consider a two- rather than three-dimensional spatial domain. On the other hand we will show that the choice of a circular domain can bring valuable insights into the study of symmetric spatial dispersal of individuals.\\
\indent Following the same procedure outlined in section \ref{intro}, assume that equation (\ref{Eq:Ch6L3}) captures the dynamics of blowflies and the spatial domain $\Omega\subset\mathbb{R}^{2}$ is a two-dimensional disk centered at the origin with radius $R>0$. Since the domain $\Omega$ is a disk, it is suitable to rewrite equation (\ref{Eq:Ch6L3}) in polar coordinates.
\begin{equation} \label{Eq:Ch6L3b}
\begin{array}{ccl}
\displaystyle\frac{\partial w}{\partial t} &=& \displaystyle D_{m}\left(\frac{\partial^{2}w}{\partial r^{2}}+\frac{1}{r}\frac{\partial w}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}w}{\partial\theta^{2}}\right)-d_{m}w +u(t,\tau,r,\theta).
\end{array}
\end{equation}
Similar to \cite{ Liang, 7 blue, Weng 2008}, we need to replace $u(t,\tau,r,\theta)$ with an explicit function of $w(t,r,\theta)$. For $s\geq0$ fixed, define the functional
$$V^{s}(t,r,\theta)=u(t,t-s,r,\theta)\mbox{ with }s\leq t\leq s+\tau.$$
Considering (\ref{Eq:Ch6L1}) in polar coordinates, it follows that for $s\leq t\leq s+\tau$,
\begin{equation} \label{Eq:Ch6L9}
\begin{array}{ccl}
\displaystyle \frac{\partial V^{s}}{\partial t}(t,r,\theta) &=& \left.\displaystyle \frac{\partial u}{\partial t}(t,a,r,\theta)\right|_{a=t-s} + \displaystyle\frac{\partial u}{\partial a}(t,a,r,\theta) \Bigg|_{a=t-s}, \\
&&\\
&=& \displaystyle D(t-s)\left(\frac{\partial^{2}V^{s}}{\partial r^{2}}+\frac{1}{r}\frac{\partial V^{s}}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}V^{s}}{\partial\theta^{2}}\right)-d(t-s)V^{s}.
\end{array}
\end{equation}
But note that (\ref{Eq:Ch6L9}) is a linear RD equation that can be solved using the method of separation of variables. Moreover, in the case that the domain is unbounded, the standard theory of Fourier transforms can be used to obtain the general solution of (\ref{Eq:Ch6L9}) (see \cite{hillen} for example). Since $u(t,0,r,\theta)=b(w(t,r,\theta))$, we have
\begin{equation} \label{Eq:Ch6L9b}
V^{s}(s,r,\theta) =b(w(s,r,\theta)).
\end{equation}
Before solving the initial boundary value problem (IBVP) related to equation (\ref{Eq:Ch6L9}), it would be beneficial to discuss the possible boundary conditions and their biological meanings. Particularly, the choice of the boundary conditions sets certain biological assumptions and it has a major impact on the model development as follows. The zero Dirichlet boundary condition represents the case in which the region outside the domain is uninhabitable. In other words, individuals die once they diffuse out of the domain (see for example \cite{Murray I:292, Murray II}). This makes sense when for instance, individuals are certain aquatic species in a lake or a pond. Nevertheless, zero Dirichlet boundary condition is not suitable for studying species such as amphibians. The book by Kot \cite{Murray I:292} considers such a boundary condition as an extremely crude way of capturing spatial heterogeneity. Instead, Gurney and Nisbet \cite{GurNisbet} consider that the spatial domain is unbounded and intrinsic rate of growth decreases with the square of the distance from the center of the range. Their approach results in a type of Schr{\"o}dinger equation (see pages 289-291 of \cite{Murray I:292}). The zero-flux boundary condition is another approach that takes away the in-and-out privileges of the individuals. Namely the individuals never cross the boundaries, although they can live and freely move on the boundaries. This has been used in several studies (see for example \cite{bani2, Murray I:292, Murray II}). Combining the zero-flux and Dirichlet boundary conditions gives rise to mixed boundary conditions, where the flux at each boundary is proportional to the population density. Specifically, the individuals may cross the boundary as long as rate exchange with the outer domain at each location remains proportional to the population density at that location. If a boundary is highly populated then we may expect high-population exchange between the inner and outer domains. \\% In the following we treat the problem with respect to all three cases of boundary conditions mentioned above.\\
\indent Considering zero Dirichlet boundary condition and initial condition described in (\ref{Eq:Ch6L9b}), we have
\begin{equation} \label{Eq:Ch6L10}
\left\{
\begin{array}{ccl}
\displaystyle \frac{\partial V^{s}}{\partial t} &=& \displaystyle D(t-s)\left(\frac{\partial^{2}V^{s}}{\partial r^{2}}+\frac{1}{r}\frac{\partial V^{s}}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}V^{s}}{\partial\theta^{2}}\right)-d(t-s)V^{s},\\
\displaystyle V^{s}(t,R,\theta) &=&0,\\
V^{s}(s,r,\theta) &=&b(w(s,r,\theta)).\\
\end{array} \right.
\end{equation}
The IBVP (\ref{Eq:Ch6L10}) can be solved by the method of separation of variables. Specifically, let $V^{s}(t,r,\theta)=h(r,\theta)T(t)$; substituting this into (\ref{Eq:Ch6L10}) and separating the terms involving $h$ from those involving $T$, we obtain the two equations
\begin{equation} \label{Eq:Ch6L13}
\frac{T^{'}+d(t-s)T}{D(t-s)T} =\lambda,
\end{equation}
\begin{equation} \label{Eq:Ch6L14}
h_{rr} +\frac{1}{r}h_{r} +\frac{1}{r^{2}}h_{\theta\theta} =\lambda h,
\end{equation}
where $\lambda$ is the separation constant and $(')$ denotes the derivative of $T$ with respect to $t$; $h_{rr}$, $h_{r}$ and $h_{\theta\theta}$ are the partial derivatives of $h$ with respect to $r$ and $\theta$.\\
By letting $\lambda=-k^{2}$ and solving (\ref{Eq:Ch6L13}) we obtain
\begin{equation} \label{Eq:Ch6L14b}
T(t) =\exp\left(-\int^{t}_{s}(k^{2}D(t-\sigma)+d(t-\sigma))d\sigma\right).
\end{equation}
Letting $h(r,\theta)=\rho(r)\Phi(\theta)$ and separating $\rho$ and $\Phi$ in (\ref{Eq:Ch6L14}), we get that the angular part must satisfy
\begin{equation} \label{Eq:Ch6L14c}
\Phi^{''}_{n} =-n^{2}\Phi_{n},
\end{equation}
which has the solution
\begin{equation} \label{Eq:Ch6L15}
\Phi_{n}(\theta) =A_{n}\cos n\theta +B_{n}\sin n\theta,
\end{equation} where $n$ is an integer. The radial equation is
\begin{equation} \label{Eq:Ch6L16}
r^{2}\rho^{''}_{n} +r\rho^{'}_{n} +(k^{2}r^{2}-n^{2})\rho_{n} =0,
\end{equation} which is the well-studied parametric Bessel equation with solution
\begin{equation} \label{Eq:Ch6L17}
\rho_{n}(r) =C_{n}J_{n}(kr) +D_{n}N_{n}(kr),
\end{equation} where $J_{n}(kr)$ and $N_{n}(kr)$ are respectively Bessel and Neumann functions of order $n$ and $C_{n}$ and $D_{n}$ are constants. Nevertheless $N_{n}(kr)$ goes to $-\infty$ as $r\rightarrow0$ and we are only interested in bounded solutions. Hence, we set $D_{n}=0$ and $h(r,\theta)$ is written as a linear combination of $h_{n}(r,\theta),$ where
\begin{equation} \label{Eq:Ch6L18}
h_{n}(r,\theta) =J_{n}(kr)(A_{n}\cos n\theta +B_{n}\sin n\theta).
\end{equation}
In order to satisfy the boundary condition in (\ref{Eq:Ch6L10}), we must have $h(R,\theta) = 0$. This means that $k$ cannot be an arbitrary constant and must satisfy
\begin{equation} \label{Eq:Ch6L19}
J_{n}(kR) =0.
\end{equation}
Let $k_{nj}R$ be the $j$-th zero of $n$-th order Bessel function $J_{n}(x)$. Then in equations (\ref{Eq:Ch6L14b}), (\ref{Eq:Ch6L18}) and (\ref{Eq:Ch6L19}) $k$ must be equal to one of the $k_{nj}$s and the general solution of (\ref{Eq:Ch6L10}) is a linear combination of all these terms, which is given by
\small
\begin{equation} \label{Eq:Ch6L20}
V^{s}(t,r,\theta) =\sum^{\infty}_{n=0}\sum^{\infty}_{j=1} J_{n}(k_{nj}r)(a_{nj}\cos n\theta +b_{nj}\sin n\theta)\exp\left(-\int^{t}_{s}\left(k^{2}_{nj}D(t-\sigma)+d(t-\sigma)\right)d\sigma\right).
\end{equation}
\normalsize
The coefficients $a_{nj}$ and $b_{nj}$ can be determined with the initial condition in (\ref{Eq:Ch6L10}). Let $D_{I}$ and $d_{I}$ denote respectively, the diffusion and death rates of the immature population. Define
\begin{equation} \label{Eq:Ch6L20b}
\epsilon =\exp\left(-\int^{\tau}_{0}d_{I}(a)da\right),
\end{equation}
\begin{equation} \label{Eq:Ch6L20c}
\alpha =\int^{\tau}_{0}D_{I}(a)da.
\end{equation}
Note that equation (\ref{Eq:Ch6L14b}) can be rewritten as
\begin{equation} \label{Eq:Ch6L20d}
T(t) =\exp\left(-\int^{t-s}_{0}(k^{2}D(\gamma)+d(\gamma))d\gamma\right).
\end{equation}
When $s=t-\tau$, substituting (\ref{Eq:Ch6L20b})-(\ref{Eq:Ch6L20d}) into (\ref{Eq:Ch6L20}) we have
\begin{equation} \label{Eq:Ch6L20e}
V^{t-\tau}(t,r,\theta) =\epsilon\sum^{\infty}_{n=0}\sum^{\infty}_{j=1} J_{n}(k_{nj}r)(a_{nj}\cos n\theta +b_{nj}\sin n\theta)\exp(-k^{2}_{nj}\alpha).
\end{equation}
Define
\begin{equation} \label{Eq:Ch6L21}
F_{n}(r) =\sum^{\infty}_{j=1} a_{nj}J_{n}(k_{nj}r),
\end{equation}
and
\begin{equation} \label{Eq:Ch6L22}
G_{n}(r) =\sum^{\infty}_{j=1} b_{nj}J_{n}(k_{nj}r).
\end{equation}
Then for $s=t-\tau$, using the initial condition in (\ref{Eq:Ch6L10}) we have
\begin{equation} \label{Eq:Ch6L23}
\sum^{\infty}_{n=0}\left[F_{n}(r)\cos n\theta +G_{n}(r)\sin n\theta\right] =b(w(t-\tau,r,\theta)).
\end{equation}
Equation (\ref{Eq:Ch6L23}) is in the form of a Fourier series and therefore $F_{n}(r)$ and $G_{n}(r)$
are given by,
\begin{equation} \label{Eq:Ch6L24}
F_{n}(r) =\frac{1}{\pi} \int^{2\pi}_{0} b(w(t-\tau,r,\theta))\cos n\theta d\theta, \mbox{ }n=1,2,\ldots,
\end{equation}
\begin{equation} \label{Eq:Ch6L25}
F_{0}(r) =\frac{1}{2\pi} \int^{2\pi}_{0} b(w(t-\tau,r,\theta))d\theta, \mbox{ } n=0,
\end{equation}
\begin{equation} \label{Eq:Ch6L26}
G_{n}(r) =\frac{1}{\pi} \int^{2\pi}_{0} b(w(t-\tau,r,\theta))\sin n\theta d\theta, \mbox{ } n=1,2,\ldots.
\end{equation}
Substituting (\ref{Eq:Ch6L24}) and (\ref{Eq:Ch6L25}) into (\ref{Eq:Ch6L21}), we have
\begin{equation} \label{Eq:Ch6L27}
\sum^{\infty}_{j=1} a_{nj}J_{n}(k_{nj}r) =\frac{1}{\pi} \int^{2\pi}_{0} b(w(t-\tau,r,\theta))\cos n\theta d\theta, \mbox{ } n=1,2,\ldots,
\end{equation}
\begin{equation} \label{Eq:Ch6L28}
\sum^{\infty}_{j=1} a_{nj}J_{n}(k_{nj}r) =\frac{1}{2\pi} \int^{2\pi}_{0} b(w(t-\tau,r,\theta))d\theta, \mbox{ } n=0.
\end{equation}
Similarly, substituting (\ref{Eq:Ch6L26}) into (\ref{Eq:Ch6L22}), we get
\begin{equation} \label{Eq:Ch6L29}
\sum^{\infty}_{j=1} b_{nj}J_{n}(k_{nj}r) =\frac{1}{\pi} \int^{2\pi}_{0} b(w(t-\tau,r,\theta))\sin n\theta d\theta, \mbox{ } n=1,2,\ldots.
\end{equation}
For $n$ fixed, each of the series (\ref{Eq:Ch6L27})-(\ref{Eq:Ch6L29}) is recognized as a Fourier-Bessel series. To find the coefficients $a_{nj}$ and $b_{nj}$, we need to multiply both sides by $rJ_{n}(k_{ni}r)$ and integrate from zero to $R$. Thus, from equation (\ref{Eq:Ch6L27}), we have
\begin{equation} \label{Eq:Ch6L30}
\int^{R}_{0}rJ_{n}(k_{ni}r) \sum^{\infty}_{j=1} a_{nj}J_{n}(k_{nj}r) dr =\int^{R}_{0}rJ_{n}(k_{ni}r) \frac{1}{\pi} \int^{2\pi}_{0} b(w(t-\tau,r,\theta))\cos n\theta d\theta dr,\\
\end{equation}
with $n=1,2,\ldots.$ \\But note that the Bessel functions are orthogonal with respect to weight function $r$, i.e.,
\begin{equation} \label{Eq:Ch6L31}
\int^{R}_{0}rJ_{n}(k_{ni}r) J_{n}(k_{nj}r) dr =0\mbox{ if }k_{ni}\neq k_{nj}.
\end{equation}
Thus, all terms on the left-hand side of (\ref{Eq:Ch6L30}) are zero except the term with $i=j$. We get that
\begin{equation} \label{Eq:Ch6L32}
a_{ni}\int^{R}_{0}rJ^{2}_{n}(k_{ni}r)dr =\frac{1}{\pi} \int^{R}_{0}\int^{2\pi}_{0} rJ_{n}(k_{ni}r)b(w(t-\tau,r,\theta))\cos n\theta d\theta dr.
\end{equation}
From properties of the Bessel function we have that
\begin{equation} \label{Eq:Ch6L33}
\int^{R}_{0}rJ^{2}_{n}(k_{ni}r)dr =\frac{1}{2}R^{2}J^{2}_{n+1}(k_{ni}R).
\end{equation} Therefore,
\begin{equation} \label{Eq:Ch6L34}
a_{ni} =\frac{2}{\pi R^{2}J^{2}_{n+1}(k_{ni}R)} \int^{R}_{0}\int^{2\pi}_{0} rJ_{n}(k_{ni}r)b(w(t-\tau,r,\theta))\cos n\theta d\theta dr, n=1,2,\ldots.
\end{equation}
Similarly, applying the same steps to (\ref{Eq:Ch6L28}) and (\ref{Eq:Ch6L29}), we get that
\begin{equation} \label{Eq:Ch6L35}
a_{0i} =\frac{2}{2\pi R^{2}J^{2}_{1}(k_{0i}R)} \int^{R}_{0}\int^{2\pi}_{0} rJ_{0}(k_{0i}r)b(w(t-\tau,r,\theta))d\theta dr, \end{equation}
\begin{equation} \label{Eq:Ch6L36}
b_{ni} =\frac{2}{\pi R^{2}J^{2}_{n+1}(k_{ni}R)} \int^{R}_{0}\int^{2\pi}_{0} rJ_{n}(k_{ni}r)b(w(t-\tau,r,\theta))\sin n\theta d\theta dr, \mbox{ } n=0,1,2,\ldots.
\end{equation}
Hence all required elements of the model are determined. Considering that $u(t,\tau,r,\theta)=V^{t-\tau}(t,r,\theta)$, from (\ref{Eq:Ch6L20e}) and (\ref{Eq:Ch6L3b}) we obtain the following nonlocal delay RD model with initial history function $w_{0}$ and zero Dirichlet boundary condition:
\small
\begin{equation} \label{Eq:Ch6L37}
\left\{
\begin{array}{ccl}
\displaystyle \frac{\partial w (t,r,\theta)}{\partial t}&=&\displaystyle D_{m}\left(\frac{\partial^{2}w(t,r,\theta)}{\partial r^{2}}+\frac{1}{r}\frac{\partial w(t,r,\theta)}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}w(t,r,\theta)}{\partial\theta^{2}}\right)-d_{m}w(t,r,\theta) +\\
&&\displaystyle \epsilon\sum^{\infty}_{n=0}\sum^{\infty}_{i=1} J_{n}(k_{ni}r)(a_{ni}(w(t-\tau,r,\theta))\cos n\theta +b_{ni}(w(t-\tau,r,\theta))\sin n\theta)e^{-k^{2}_{ni}\alpha},\\
w(t,R,\theta) &=& 0\\
w(t,r,\theta) &=&\displaystyle w_{0}(t,r,\theta) \mbox{ for } (r,\theta)\in\Omega, \mbox{ } t\in[-\tau,0]. \\
\end{array} \right.
\end{equation}
\normalsize
where $a_{ni}(w(t-\tau,r,\theta))$ and $b_{ni}(w(t-\tau,r,\theta))$ are given in (\ref{Eq:Ch6L34})-(\ref{Eq:Ch6L36}), $\alpha$ is defined in (\ref{Eq:Ch6L20c}) and $k_{ni}R$ is the $i$-th zero of the $n$-th order Bessel function $J_{n}(x).$ The parameter $\epsilon$ is the surviving portion of individuals from birth until they are fully matured. Namely, $0<\epsilon \leq 1$ and the portion $1-\epsilon$ of the immature population does not survive and is therefore removed from the double sum series in (\ref{Eq:Ch6L37}). \\
\indent Following the same procedure, we may consider the problem with zero-flux boundary condition and derive a model similar to (\ref{Eq:Ch6L37}). Specifically, in problem (\ref{Eq:Ch6L10}), the boundary condition must be replaced with
\begin{equation} \label{Eq:Ch6L40}
\frac{\partial V^{s}}{\partial r}(t,R,\theta) =0.
\end{equation} Consequently, equation (\ref{Eq:Ch6L19}) is replaced with
\begin{equation} \label{Eq:Ch6L41}
\frac{dJ_{n}(kR)}{dr} =0. \end{equation}
Then $k_{nj}R$ is the $j$-th zero of the derivative of the $n$-th order Bessel function (i.e., of $dJ_{n}(x)/dx$) and $k$ must be equal to one of the $k_{nj}$ in (\ref{Eq:Ch6L14b}) and (\ref{Eq:Ch6L18}). Again, the eigenfunctions $\left\{J_{n}(k_{nj}r)\right\}$ form a complete set and they are orthogonal to each other with respect to the weight function in (\ref{Eq:Ch6L30}). Hence the main difference in the model is that in expressions (\ref{Eq:Ch6L34})-(\ref{Eq:Ch6L36}), $\left\{k_{ni}\right\}$ is the set of eigenvalues corresponding to the zero-flux boundary condition (\ref{Eq:Ch6L40}). In particular, the model with zero-flux boundary condition is given by
\small
\begin{equation} \label{Eq:Ch6L37b}
\left\{
\begin{array}{ccl}
\displaystyle \frac{\partial w(t,r,\theta)}{\partial t}&=&\displaystyle D_{m}\left(\frac{\partial^{2}w(t,r,\theta)}{\partial r^{2}}+\frac{1}{r}\frac{\partial w(t,r,\theta)}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}w(t,r,\theta)}{\partial\theta^{2}}\right)-d_{m}w(t,r,\theta) +\\
&&\displaystyle \epsilon\sum^{\infty}_{n=0}\sum^{\infty}_{i=1} J_{n}(k_{ni}r)(a_{ni}(w(t-\tau,r,\theta))\cos n\theta +b_{ni}(w(t-\tau,r,\theta))\sin n\theta)e^{-k^{2}_{ni}\alpha},\\
\displaystyle \frac{\partial w(t,R,\theta) }{\partial r}&=& 0\\
w(t,r,\theta) &=&\displaystyle w_{0}(t,r,\theta) \mbox{ for } (r,\theta)\in\Omega, \mbox{ } t\in[-\tau,0]. \\
\end{array} \right.
\end{equation}
\normalsize
where $k_{ni}R$ is the $i$-th zero of the derivative of the $n$-th order Bessel function $J_{n}(x)$. Similarly, a model with nonlocality and delay can be derived with respect to the zero mixed boundary condition
\begin{equation} \label{Eq:Ch6L41b}
A\frac{\partial V^{s}}{\partial r}(t,R,\theta) +BV^{s}(t,R,\theta) =0,
\end{equation} where $A$ and $B$ are constants.\\
\indent Models (\ref{Eq:Ch6L37}) and (\ref{Eq:Ch6L37b}) take into account the angular dependence of $w(t,r,\theta)$ at any location $r$ and time $t$. This means that the population concentrated at origin may have spatial preference in its displacement for search of food or other necessities. However, this is not the case for certain species. As described before, Roux et al. \cite{Roux}, found that there is no preferred direction in spatial movement of the blowfly larvae. Hence we may consider radial symmetry and certain initial conditions to reduce the model into simpler forms. These are discussed in the following section.\\%While the model (\ref{Eq:Ch6L37}) can be numerically solved and existence of traveling wave solutions can be investigated, it is a highly nontrivial problem to find approximations of the solutions. On the other hand,
\section{Model Reduction}
\label{sec:RadSymm}
In the following we assume that population dispersion takes place with radial symmetry, i.e., there is no preference for any direction. It follows that the initial condition in (\ref{Eq:Ch6L10}) is independent of $\theta$. Then the solution $V^{s}$ of (\ref{Eq:Ch6L10}) is also independent of $\theta$. The ecological interpretation of the initial condition being independent of $\theta$ is that the reproduction of individuals takes place without any angular preference. Specifically, the IBVP (\ref{Eq:Ch6L10}) is reduced to
\begin{equation} \label{Eq:Ch6L42}
\left\{
\begin{array}{ccl}
\displaystyle \frac{\partial V^{s}}{\partial t} &=& \displaystyle D(t-s)\left(\frac{\partial^{2}V^{s}}{\partial r^{2}}+\frac{1}{r}\frac{\partial V^{s}}{\partial r}\right)-d(t-s)V^{s},\\
V^{s}(t, R) &=&0, \\
V^{s}(s,r) &=&b(w(s,r)). \\
\end{array} \right.
\end{equation}
The zero Dirichlet boundary condition in (\ref{Eq:Ch6L42}) is equivalent to the assumption that the habitat is inhospitable beyond $r=R$. The substitution
\begin{equation} \label{Eq:Ch6L45}
V^{s}(t,r) =T(t)h(r),
\end{equation} reduces equation (\ref{Eq:Ch6L42}) to equation (\ref{Eq:Ch6L13}) and
\begin{equation} \label{Eq:Ch6L46}
h_{rr}+\frac{1}{r}h_{r} =\lambda h,
\end{equation} which is a reduced form of equation (\ref{Eq:Ch6L14}). Let $\lambda=-k^{2}$; then (\ref{Eq:Ch6L46}) is rewritten as
\begin{equation} \label{Eq:Ch6L47}
r^{2}h_{rr} +rh_{r} +r^{2}k^{2}h =0.
\end{equation}
But this is the parametric Bessel equation (\ref{Eq:Ch6L16}) with $n=0$. The solution is given by
\begin{equation} \label{Eq:Ch6L48}
h(r) =C_{0}J_{0}(kr) +D_{0}N_{0}(kr),
\end{equation}
where $J_{0}$ and $N_{0}$ are respectively Bessel and Neumann functions of order zero. Moreover, $C_{0}$ and $D_{0}$ are arbitrary constants. As indicated before, the Neumann function blows up as $r\rightarrow0$. Specifically,
\begin{equation} \label{Eq:Ch6L49}
N_{0}(r) \sim\frac{2}{\pi}\ln(\frac{r}{2})\mbox{ as }r\rightarrow0.
\end{equation}
Hence, we let $D_{0}=0$ to obtain a bounded solution for (\ref{Eq:Ch6L42}). It can be shown that
\begin{equation} \label{Eq:Ch6L50}
J_{0}(r) =\sum^{\infty}_{q=0} \frac{(-1)^{q}}{(q!)^{2}}\left(\frac{r}{2}\right)^{2q}.
\end{equation} Thus,
\begin{equation} \label{Eq:Ch6L51}
h(r) =C_{0} \sum^{\infty}_{q=0}\frac{(-1)^{q}}{(q!)^{2}}\left(\frac{kr}{2}\right)^{2q}.
\end{equation}
In order to satisfy the boundary condition in (\ref{Eq:Ch6L42}), we must have $h(R)=0$; then, similarly to (\ref{Eq:Ch6L19}), we must have
\begin{equation} \label{Eq:Ch6L52}
J_{0}(kR) =0.
\end{equation}
Let $k_{j}R$ be the $j$-th zero of the Bessel function of order zero; then $k$ must be equal to one of the $k_{j}$'s. Using (\ref{Eq:Ch6L14b}), (\ref{Eq:Ch6L45}) and (\ref{Eq:Ch6L48}), the solution of the IBVP (\ref{Eq:Ch6L42}) is given by
\begin{equation} \label{Eq:Ch6L52b}
V^{s}(t,r) =\sum^{\infty}_{j=1} c_{j}J_{0}(k_{j}r)\exp\left(-\int^{t}_{s}\left(k_j^{2}D(t-\sigma)+d(t-\sigma)\right)d\sigma\right).
\end{equation}
Set $s=t-\tau$; using (\ref{Eq:Ch6L20b})-(\ref{Eq:Ch6L20d}) we obtain
\begin{equation} \label{Eq:Ch6L53}
V^{t-\tau}(t, r) =\epsilon\sum^{\infty}_{j=1} c_{j}J_{0}(k_{j}r)\exp(-k^{2}_{j}\alpha).
\end{equation}
The constant $\alpha$ is defined in (\ref{Eq:Ch6L20c}) and the coefficients $c_{j}$ are determined by the initial condition in (\ref{Eq:Ch6L42}). Namely,
\begin{equation} \label{Eq:Ch6L54}
b(w(t-\tau,r)) =V^{t-\tau}(t-\tau,r).
\end{equation}
Using the fact that (\ref{Eq:Ch6L53}) represents a Fourier-Bessel series, by orthogonality of the Bessel functions and (\ref{Eq:Ch6L33}) we get that,
\begin{equation} \label{Eq:Ch6L55}
c_{j} =\frac{2}{R^{2}J^{2}_{1}(k_{j}R)} \int^{R}_{0}rJ_{0}(k_{j}r) b(w(t-\tau,r)) dr, \mbox{ } j=1,2,\ldots.
\end{equation}
Hence, the population model of individuals with no directional preference in their spatial dispersal is given by
\begin{equation} \label{Eq:Ch6L56}
\frac{\partial w}{\partial t} =D_{m}\left(\frac{\partial^{2}w}{\partial r^{2}}+\frac{1}{r}\frac{\partial w}{\partial r}\right)-d_{m}w +\epsilon\sum^{\infty}_{j=1} c_{j}(w(t-\tau,r))J_{0}(k_{j}r)\exp(-k^{2}_{j}\alpha),
\end{equation} with $0\leq r\leq R$ and $t\geq0$, where the coefficients $c_{j}(w(t-\tau,r))$ are given in (\ref{Eq:Ch6L55}). The reduced model (\ref{Eq:Ch6L56}) is subject to the initial condition
\begin{equation} \label{Eq:Ch6L57}
w(t,r) =w_{0}(t,r), t\in[-\tau,0],
\end{equation}
and the zero Dirichlet boundary condition
\begin{equation} \label{Eq:Ch6L58}
w(t, R) =0.
\end{equation}
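The zeros $k_{j}$ and the coefficients $c_{j}$ in (\ref{Eq:Ch6L55}) can also be approximated numerically. The following is a minimal Python sketch (assuming $R=1$ and an illustrative radial profile standing in for $b(w(t-\tau,r))$; the helper name and the number of retained terms are likewise illustrative):
\begin{verbatim}
import numpy as np
from scipy.special import j0, j1, jn_zeros
from scipy.integrate import quad

R = 1.0
g = lambda r: r * (1.0 - r)     # illustrative stand-in for b(w(t-tau, r))

def coefficients(n_terms=10):
    # k_j R is the j-th positive zero of the Bessel function J_0
    k = jn_zeros(0, n_terms) / R
    c = np.empty(n_terms)
    for j in range(n_terms):
        integral, _ = quad(lambda r: r * j0(k[j] * r) * g(r), 0.0, R)
        # c_j = 2/(R^2 J_1(k_j R)^2) * int_0^R r J_0(k_j r) g(r) dr
        c[j] = 2.0 * integral / (R**2 * j1(k[j] * R)**2)
    return k, c

k, c = coefficients()
print(c[:3])
\end{verbatim}
Partial sums of $\sum_{j} c_{j}J_{0}(k_{j}r)$ then approximate the chosen profile on $[0,R)$, which provides a convenient check on where the series in (\ref{Eq:Ch6L56}) may be truncated.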
In comparison with model (\ref{Eq:Ch6L37}), equation (\ref{Eq:Ch6L56}) represents a simpler form of the population dynamics. Model (\ref{Eq:Ch6L37b}) can also be reduced in the same manner. In what follows, we show that under some conditions these models can be substantially simplified. Namely, the final form of the model can be strongly influenced by the choice of the initial condition. It is known that if the initial condition happens to be in the shape of a particular mode, then the system will evolve in that mode. Let, for instance, $b(w(s,r,\theta))$ in (\ref{Eq:Ch6L10}) be of the form
\begin{equation} \label{Eq:Ch6L59}
b(w(s,r,\theta)) =f(s)J_{1}(k_{2}r)\cos\theta,
\end{equation} where $R=1$, $k_2 =3.83$ is the first positive root of $J_1(kR)$ (so that the Dirichlet condition $J_{1}(k_{2}R)=0$ holds) and $f(s)$ is an arbitrary function of $s$. Considering that $s=t-\tau$ is a fixed value, equation (\ref{Eq:Ch6L20e}) is reduced to
\begin{equation} \label{Eq:Ch6L60}
V^{t-\tau}(t,r,\theta) =\epsilon f(s) J_{1}(k_{2}r) \cos\theta \exp(-k^{2}_{2}\alpha).
\end{equation}
Hence, model (\ref{Eq:Ch6L37}) is reduced to
\begin{equation} \label{Eq:Ch6L61}
\frac{\partial w}{\partial t} =D_{m}\left(\frac{\partial^{2}w}{\partial r^{2}}+\frac{1}{r}\frac{\partial w}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}w}{\partial\theta^{2}}\right)-d_{m}w +\epsilon e^{-k^{2}_{2}\alpha} f(t-\tau) J_{1}(k_{2}r) \cos\theta.
\end{equation}
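Note that $J_{1}(k_{2}r)\cos\theta$ is an eigenfunction of the polar Laplacian in (\ref{Eq:Ch6L61}) with eigenvalue $-k_{2}^{2}$. Hence, as a sketch (assuming the initial history is proportional to the same mode), substituting $w(t,r,\theta)=W(t)J_{1}(k_{2}r)\cos\theta$ into (\ref{Eq:Ch6L61}) reduces it to the scalar delay equation
\[
W'(t) = -\left(D_{m}k_{2}^{2}+d_{m}\right)W(t) + \epsilon e^{-k_{2}^{2}\alpha}f(t-\tau),
\]
which makes the decay rate $D_{m}k_{2}^{2}+d_{m}$ of this mode explicit.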
\renewcommand{0.97}{0.97}
\renewcommand{0.01}{0.01}
\begin{figure}
\caption{Using equation (\ref{Eq:Ch6L59}) as the birth function.}
\label{FigL62}
\end{figure}
\begin{figure}
\caption{Numerical simulation of the reduced model (\ref{Eq:Ch6L61}).}
\label{FigL63}
\end{figure}
The reason for considering the birth function in the form of (\ref{Eq:Ch6L59}) is that, for fixed $\theta$ and $s$, the shape of the Bessel function $J_{1}(x)$ for $x\in[0, k_{2}R]$ is quite similar to the logistic birth function used in several studies \cite{Murray I:292, Murray I}. Figure \ref{FigL62}~(a) compares the logistic birth function $b(w) = pw(1-w/k)$ with the initial condition given in equation (\ref{Eq:Ch6L59}) for $\theta = 0$, $f(s)=1$, $R=1$ and $k=k_2$. Note that $r$ and $w$ are taken to be on the same scale. Specifically, the population density increases as we move away from the center of the disk. As shown in Figure \ref{FigL62}~(b), by letting $\theta$ vary from $0$ to $\pi/2$, different birth rates are obtained over the spatial domain $[0, R]\times[0, \pi/2]$.\\
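\indent The similarity can be checked directly. The short Python sketch below evaluates $J_{1}(k_{2}r)$ on $[0,R]$ against a logistic curve $p\,r(1-r/k)$; the values of $p$ and $k$ are illustrative assumptions, not the ones used in Figure \ref{FigL62}.
\begin{verbatim}
# Compare the mode J_1(k_2 r) on [0, R] with a logistic birth profile
# p r (1 - r / k); p and k are hypothetical illustrative values.
import numpy as np
from scipy.special import j1, jn_zeros

R = 1.0
k2 = jn_zeros(1, 1)[0] / R     # first positive zero of J_1 (about 3.8317)
p, k = 2.0, 1.0                # assumed logistic parameters

r = np.linspace(0.0, R, 11)
mode = j1(k2 * r)              # birth profile of (Eq:Ch6L59) with theta = 0, f = 1
logistic = p * r * (1 - r / k) # logistic curve, with w and r on the same scale

for ri, m, l in zip(r, mode, logistic):
    print(f"r = {ri:.1f}   J_1(k_2 r) = {m:.3f}   p r (1 - r/k) = {l:.3f}")
\end{verbatim}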
\indent Although model (\ref{Eq:Ch6L61}) has a much simpler form, it can be argued that the general model (\ref{Eq:Ch6L37}) has been oversimplified and that the reduced model (\ref{Eq:Ch6L61}) does not fully capture the single-species dynamics in a symmetrical domain. In particular, reproduction is restricted to certain regions of the spatial domain and it is not density dependent. To overcome these issues we may include a general density-dependent birth function $b(w)$, so that model (\ref{Eq:Ch6L61}) is rewritten as
\begin{equation} \label{Eq:Ch6L61b}
\frac{\partial w}{\partial t} =D_{m}\left(\frac{\partial^{2}w}{\partial r^{2}}+\frac{1}{r}\frac{\partial w}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}w}{\partial\theta^{2}}\right)-d_{m}w +\epsilon e^{-k_{2}^{2}\alpha} f(t-\tau) J_{1}(k_{2}r) \cos\theta + b(w).
\end{equation}
In the next section we will numerically solve the reduced models (\ref{Eq:Ch6L61}) and (\ref{Eq:Ch6L61b}) for different sets of parameter values.
\begin{figure}
\caption{Numerical simulation of the reduced model (\ref{Eq:Ch6L61b}).}
\label{FigL64}
\end{figure}
\section{Numerical Simulations}
\label{num}
We used the COMSOL 4.4 software to solve the reduced models (\ref{Eq:Ch6L61}) and (\ref{Eq:Ch6L61b}). Model (\ref{Eq:Ch6L61}) is a non-homogeneous linear RD equation. Due to the presence of the reaction term $-d_mw$, it is expected that all solutions $w(r,\theta, t)$ converge uniformly to the trivial solution. Figure \ref{FigL63} illustrates the solution of model (\ref{Eq:Ch6L61}) from $t=0$ to $t=80$. Click \href{https://www.youtube.com/watch?v=-g8f5r-06Og&feature=youtu.be}{here} and \href{https://www.youtube.com/watch?v=jUTH8HaJ5n4&feature=youtu.be}{here} to see the transition from (a) to (d) in the $xy$-plane and in three dimensions, respectively. The parameter values are $D_m=5$, $\epsilon = 0.1$, $d_m=0.01$, $\alpha = 0.1$, $R=1$ and $k_2=3.83$. We considered a zero-flux boundary condition and the initial condition $w_0=0.2+0.02\sin(3x)\cos(2y)$. The color bars in each panel of Figure \ref{FigL63} show that the population density eventually approaches zero; the maximum density is about $5.3 \times 10^{-6}$ for $t > 400$. Hence, as expected, the population goes extinct due to insufficient reproduction and significant mortality of the mature population.\\
\indent Model (\ref{Eq:Ch6L61b}) is a non-homogeneous nonlinear RD equation, which can be solved numerically. We considered the birth function $b(w) =0.25\,w^{2}e^{-0.1w}$ and used the same parameter values as above. Figure \ref{FigL64} shows that model (\ref{Eq:Ch6L61b}) yields population establishment at the positive constant equilibrium. Click \href{https://www.youtube.com/watch?v=TLUvIoUIe9w&feature=youtu.be}{here} and \href{https://www.youtube.com/watch?v=AeK1HHuCftw}{here} to see the transition from (a) to (d) in the $xy$-plane and in three dimensions, respectively. For other sets of parameter values, the solutions of model (\ref{Eq:Ch6L61}) still converge to the trivial solution, whereas the solutions of model (\ref{Eq:Ch6L61b}) exhibit different asymptotic behaviors, including convergence to the positive or to the trivial equilibrium.
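\\
\indent As a quick sanity check on the reported establishment (not a substitute for the COMSOL computation), the positive constant equilibria of the spatially homogeneous kinetics $-d_{m}w+b(w)=0$, with $b(w)=0.25\,w^{2}e^{-0.1w}$ and $d_{m}=0.01$ as above, can be located numerically; diffusion and the inhomogeneous forcing term of (\ref{Eq:Ch6L61b}) are ignored in this sketch.
\begin{verbatim}
# Locate the constant equilibria of the homogeneous kinetics -d_m w + b(w) = 0
# with b(w) = 0.25 w^2 exp(-0.1 w) and d_m = 0.01; diffusion and the
# mode-forcing term of the full model are ignored here.
import numpy as np
from scipy.optimize import brentq

d_m = 0.01
b = lambda w: 0.25 * w**2 * np.exp(-0.1 * w)
g = lambda w: b(w) - d_m * w

w_low = brentq(g, 1e-3, 1.0)     # small positive equilibrium (Allee-type threshold)
w_high = brentq(g, 10.0, 200.0)  # larger positive equilibrium
print(w_low, w_high)             # roughly 0.04 and 75
\end{verbatim}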
\section{Discussion}
\label{disc}
The present work demonstrates that the spatial domain has a great impact on the final form of the derived model. While considering a two-dimensional spatial domain seems more realistic, the shape of the spatial domain and the applicable boundary conditions are also important factors that must be dealt with carefully. The work by Weng et al. \cite{Weng 2008} considers an unbounded strip, whereas the work by Liang et al. \cite{Liang} considers a rectangular domain. We believe that model (\ref{Eq:Ch6L5}) with the function $F$ specified in (\ref{Eq:Ch6L6}) can also be derived by employing the Smith-Thieme approach \cite{Thieme 2003} for a patchy environment. In particular, the work by So et al. \cite{6 blue: 65} demonstrates how the lattice delay differential equations, representing a population distributed over a line of infinitely many patches, can be extended to the continuous model (\ref{Eq:Ch6L4}) with delay and nonlocality. Replacing the line of infinitely many patches with an unbounded strip allows us to take into account the spatial movement of individuals within each patch. Then, following the same approach as in \cite{Gourley3} (pages 5122--5125), the corresponding continuous model is derived. \\
\indent Despite the modeling efforts with respect to unbounded strips and rectangular domains, there is a special need to focus on symmetrical spatial domains. The present work is a first step towards developing nonlocal delay RD models on symmetrical domains. It should be noted that the choice of the spatial domain is motivated by the fact that a number of experimental studies \cite{Fehmi, Gomes, J, Roux} have been conducted in various circular domains. In general, the model derivation is highly dependent on the linear RD equation (\ref{Eq:Ch6L9}), the spatial domain and the boundary conditions. By letting the radius $R$ of the circular domain go to infinity, the spatial domain becomes the entire $xy$-plane. In this case, the same approach outlined in \cite{7 blue} can be used to derive the following nonlocal delay RD model.
\begin{equation}
\label{eqCh3:SoFDE1b}
\frac{\partial w}{\partial t}=D_{m}\left(\frac{\partial^{2}w}{\partial x^{2}}+\frac{\partial^{2}w}{\partial y^{2}}\right)-d_{m}w+\epsilon\int^{\infty}_{-\infty}\int^{\infty}_{-\infty}b(w(z_{x},z_{y},t-\tau))f_{\alpha}(x-z_{x},y-z_{y})dz_{x}dz_{y},
\end{equation}
where $(x,y) \in\mathbb{R}^{2}$, $0<\epsilon\leq 1$, and $w(x,y,t)$ represents the total mature population. The kernel function is given by $f_{\alpha}(x,y)=\frac{1}{4\pi \alpha}e^{-\frac{x^{2}+y^{2}}{4\alpha}}$ with $\alpha=\tau D_{I}>0$, where $\tau>0$ is the maturation time. \\
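\indent In models of this type the dispersal kernel is normalized so that it integrates to one over $\mathbb{R}^{2}$. A quick numerical check of this normalization is sketched below; the value of $\alpha$ used here is an illustrative assumption.
\begin{verbatim}
# Check that f_alpha(x, y) = exp(-(x^2 + y^2) / (4 alpha)) / (4 pi alpha)
# integrates to 1 over the plane (alpha = 0.1 is an illustrative value).
import numpy as np
from scipy.integrate import dblquad

alpha = 0.1
f = lambda y, x: np.exp(-(x**2 + y**2) / (4 * alpha)) / (4 * np.pi * alpha)

total, _ = dblquad(f, -np.inf, np.inf, lambda x: -np.inf, lambda x: np.inf)
print(total)   # approximately 1.0
\end{verbatim}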
\indent We numerically solved the reduced models (\ref{Eq:Ch6L61}) and (\ref{Eq:Ch6L61b}) for different sets of parameter values and initial conditions. We showed that the solutions of model (\ref{Eq:Ch6L61}) converge to the trivial solution, whereas the density-dependent birth function $b(w)$ considered in model (\ref{Eq:Ch6L61b}) results in convergence of the solutions to the positive equilibrium. Future studies might include numerical simulations of the general models (\ref{Eq:Ch6L37}) and (\ref{Eq:Ch6L37b}). Furthermore, the traveling and stationary wave solutions of these models might bring valuable insights into the study of single-species population dynamics.\\%\\ We believe the correct choice of the birth function $b(w)$ makes the analysis of each model more meaningful in biological and ecological contexts. On the other hand, there have been great similarities in mathematical modeling of single species and those of infectious diseases. There is a potential for employing the same methodology to derive more realistic nonlocal RD models capable of capturing new aspects of the spread of disease in a spatial domain.\\
\begin{center}
\noindent\large \textbf{Acknowledgment\\}
\end{center}
\normalsize
This work was partially supported by University of Missouri-Kansas City start-up fund MOCode \# KCS21.\\
\addcontentsline{toc}{chapter}{\bf Bibliography}
\end{document}